Compare commits


426 Commits

Author SHA1 Message Date
Erik Johnston
10a67f0d69 Don't spuriously create new listeners 2015-06-18 11:12:03 +01:00
Erik Johnston
9bc36b7f31 Don't reuse name 2015-06-18 10:57:42 +01:00
Erik Johnston
c59e904839 Mark as notified 2015-06-18 10:52:00 +01:00
Erik Johnston
e70d484e1c Keep track of previous listeners 2015-06-17 16:20:51 +01:00
Erik Johnston
6844bb8a6f Paranoia try..except 2015-06-17 15:38:46 +01:00
Erik Johnston
30b53812de Store timeout 2015-06-17 15:29:31 +01:00
Erik Johnston
bddacb6dd1 Add some things to help debug notifer leak 2015-06-17 15:15:51 +01:00
Erik Johnston
bc42ca121f Merge pull request #185 from matrix-org/erikj/listeners_config
Change listener config.
2015-06-15 18:05:58 +01:00
Erik Johnston
f0583f65e1 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-06-15 15:17:47 +01:00
Erik Johnston
6a7cf6b41f Merge pull request #186 from matrix-org/hotfixes-v0.9.2-r2
Hotfixes v0.9.2-r2
2015-06-15 14:39:46 +01:00
Erik Johnston
2acee97c2b Changelog 2015-06-15 14:20:25 +01:00
Erik Johnston
7f7ec84d6f Bump version 2015-06-15 14:16:29 +01:00
Erik Johnston
cebde85b94 Merge branch 'master' of github.com:matrix-org/synapse into hotfixes-v0.9.2-r2 2015-06-15 14:16:12 +01:00
Erik Johnston
9d0326baa6 Remove redundant newline 2015-06-15 11:27:29 +01:00
Erik Johnston
186f61a3ac Document listener config. Remove deprecated config options 2015-06-15 11:25:53 +01:00
Erik Johnston
fe9bac3749 Merge pull request #184 from matrix-org/hotfixes-v0.9.2-r1
Hotfixes v0.9.2-r1
2015-06-13 12:26:42 +01:00
Erik Johnston
6c01ceb8d0 Bump version 2015-06-13 12:22:14 +01:00
Erik Johnston
2eda996a63 Add a dummy.sql into delta/20 as pip isn't packinging the pushers.py 2015-06-13 12:21:58 +01:00
Matthew Hodgson
4706f3964d Merge pull request #183 from intelfx/install-python-schema-deltas
MANIFEST.in: include python schema delta scripts
2015-06-13 12:09:17 +01:00
Ivan Shapovalov
4df76b0a5d MANIFEST.in: include python schema delta scripts (we now have one in 20/) 2015-06-13 11:08:49 +03:00
Erik Johnston
a005b7269a Add backwards compat support for metrics, manhole and webclient config options 2015-06-12 17:44:23 +01:00
Erik Johnston
261ccd7f5f Fix tests 2015-06-12 17:17:29 +01:00
Erik Johnston
942e39e87c PEP8 2015-06-12 17:13:54 +01:00
Erik Johnston
9c5fc81c2d Correctly handle x_forwaded listener option 2015-06-12 17:13:23 +01:00
Erik Johnston
fd2c07bfed Use config.listeners 2015-06-12 15:33:07 +01:00
Erik Johnston
405f8c4796 Merge branch 'release-v0.9.2' 2015-06-12 11:53:03 +01:00
Erik Johnston
c42ed47660 Fix up create_resource_tree 2015-06-12 11:52:52 +01:00
Erik Johnston
1a87f5f26c Mention config option name 2015-06-12 11:46:41 +01:00
Erik Johnston
a3dc31cab9 s/some/certain 2015-06-12 11:45:13 +01:00
Erik Johnston
4dd47236e7 Update change log 2015-06-12 11:42:52 +01:00
Erik Johnston
295b400d57 Merge branch 'release-v0.9.2' into develop 2015-06-11 16:08:48 +01:00
Erik Johnston
716cf144ec Update change log 2015-06-11 16:07:06 +01:00
Erik Johnston
1e365e88bd Bump schema version 2015-06-11 15:50:39 +01:00
Erik Johnston
2d41dc0069 Bump version 2015-06-11 15:49:19 +01:00
Erik Johnston
f7f07dc517 Begin changing the config format 2015-06-11 15:48:52 +01:00
David Baker
b8690dd840 Catch any exceptions in the pusher loop. Use a lower timeout for pushers so we can see if they're actually still running. 2015-06-05 11:40:22 +01:00
David Baker
da84946de4 pep8 2015-06-04 16:43:45 +01:00
David Baker
63a7b3ad1e Add script to (re)convert the pushers table to changing the unique key. Also give the python db upgrade scripts the database engine so they can convert parameter strings, and add *args **kwargs to the upgrade function so we can add more args in future and previous scripts will ignore them. 2015-06-04 16:16:01 +01:00
Erik Johnston
5730b20c6d Merge pull request #175 from matrix-org/erikj/thumbnail_thread
Thumbnail images on a seperate thread
2015-06-03 17:26:56 +01:00
Erik Johnston
8047fd2434 Merge pull request #176 from matrix-org/erikj/backfill_auth
Improve backfill.
2015-06-03 17:25:37 +01:00
Erik Johnston
3bbd0d0e09 Merge pull request #180 from matrix-org/erikj/prev_state_context
Don't needlessly compute prev_state
2015-06-03 17:20:56 +01:00
Erik Johnston
9dda396baa Merge pull request #179 from matrix-org/erikj/state_group_outliers
Don't compute EventContext for outliers.
2015-06-03 17:20:40 +01:00
Erik Johnston
13ed3b9985 Merge pull request #178 from matrix-org/erikj/cache_state_groups
Add cache to get_state_groups.
2015-06-03 17:20:33 +01:00
Erik Johnston
bd2cf9d4bf Merge pull request #177 from matrix-org/erikj/content_repo_http_client
SYN-403: Make content repository use its own http client.
2015-06-03 17:20:27 +01:00
Erik Johnston
d4902a7ad0 Merge pull request #174 from matrix-org/erikj/compress_option
Add config option to disable compression of http responses
2015-06-03 17:18:17 +01:00
Erik Johnston
55bf90b9e4 Don't needlessly compute prev_state 2015-06-03 16:44:24 +01:00
Erik Johnston
53f0bf85d7 Comment 2015-06-03 16:43:40 +01:00
Erik Johnston
1c3d844e73 Don't needlessly compute context 2015-06-03 16:41:51 +01:00
Erik Johnston
0d7d9c37b6 Add cache to get_state_groups 2015-06-03 14:45:55 +01:00
Erik Johnston
d8866d7277 Caches should be bound to instances.
Before, caches were global and so different instances of the stores
would share caches. This caused problems in the unit tests.
2015-06-03 14:45:17 +01:00
Erik Johnston
2ef2f6d593 SYN-403: Make content repository use its own http client. 2015-06-03 10:17:37 +01:00
Erik Johnston
3483b78d1a Log where a request came from in federation 2015-06-02 18:15:13 +01:00
Erik Johnston
d3ded420b1 Rephrase log line 2015-06-02 16:30:52 +01:00
Erik Johnston
22716774d5 Don't about JSON when warning about content tampering 2015-06-02 16:30:52 +01:00
Erik Johnston
5044e6c544 Thumbnail images on a seperate thread 2015-06-02 15:39:08 +01:00
Erik Johnston
09e23334de Add a timeout 2015-06-02 11:00:37 +01:00
Erik Johnston
02410e9239 Handle the fact we might be missing auth events 2015-06-02 10:58:35 +01:00
Erik Johnston
e552b78d50 Add some logging 2015-06-02 10:28:14 +01:00
Erik Johnston
fde0da6f19 Correctly look up auth_events 2015-06-02 10:19:38 +01:00
Erik Johnston
3f04a08a0c Don't process events we've already processed. Remember to process state events 2015-06-02 10:11:32 +01:00
Erik Johnston
4bbfbf898e Correctly pass in auth_events 2015-06-01 17:02:23 +01:00
Erik Johnston
6e17463228 Don't explode if we don't have the event 2015-06-01 16:39:43 +01:00
Erik Johnston
522f285f9b Add config option to disable compression of http responses 2015-06-01 13:36:30 +01:00
Erik Johnston
b579a8ea18 Merge pull request #172 from intelfx/contrib-systemd
contrib/systemd: log_config.yaml: do not disable existing loggers
2015-05-31 20:53:45 +01:00
Ivan Shapovalov
53ef3a0bfe contrib/systemd: log_config.yaml: do not disable existing loggers
It turned out that merely configuring the root logger is not enough for
"catch-all" semantics. The logging subsystem also needs to be told not
to disable existing loggers (so that their messages will get propagated
to handlers up the logging hierarchy, not just silently discarded).

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
2015-05-31 19:25:21 +03:00
Mark Haines
d70c847b4f Merge pull request #170 from matrix-org/markjh/SYT-8-recaptcha
Allow endpoint for verifying recaptcha to be configured
2015-05-29 15:32:54 +01:00
Erik Johnston
d15f166093 Remove log line 2015-05-29 15:03:24 +01:00
Erik Johnston
ca580ef862 Don't copy twice 2015-05-29 15:02:55 +01:00
Erik Johnston
45bac68064 Merge pull request #169 from matrix-org/erikj/ultrajson
Use ultrajson when possible. Add option to turn off freezing of events.
2015-05-29 14:58:56 +01:00
Mark Haines
784aaa53df Merge branch 'develop' into markjh/SYT-8-recaptcha
Conflicts:
	synapse/handlers/auth.py
2015-05-29 13:49:44 +01:00
Erik Johnston
8355b4d074 Bump syutil version 2015-05-29 13:08:43 +01:00
Erik Johnston
a7b65bdedf Add config option to turn off freezing events. Use new encode_json api and ujson.loads 2015-05-29 12:17:33 +01:00
Mark Haines
d94590ed48 Add config for setting the recaptcha verify api endpoint, so we can test it in sytest 2015-05-29 12:11:40 +01:00
Erik Johnston
afbd3b2fc4 SYN-395: Fix CAPTCHA, don't double decode json 2015-05-28 18:05:00 +01:00
Erik Johnston
79e37a7ecb Correctly pass connection pool parameter 2015-05-28 16:48:53 +01:00
Erik Johnston
0f118e55db Merge pull request #168 from matrix-org/erikj/conn_pool
Make HTTP clients use connection pools.
2015-05-28 16:03:56 +01:00
Erik Johnston
2f54522d44 Merge pull request #167 from matrix-org/erikj/deep_copy_removal
Remove a deep copy
2015-05-28 16:00:07 +01:00
Erik Johnston
dd74436ffd Unused import 2015-05-28 15:47:20 +01:00
Erik Johnston
11f51e6ded Up maxPersistentPerHost count 2015-05-28 15:45:46 +01:00
Erik Johnston
086df80790 Add connection pooling to SimpleHttpClient 2015-05-28 15:43:21 +01:00
Erik Johnston
291e942332 Use connection pool for federation connections 2015-05-28 15:43:21 +01:00
Erik Johnston
31ade3b3e9 Remove a deep copy 2015-05-28 13:45:23 +01:00
Erik Johnston
36b3b75b21 Registration should be disabled by default 2015-05-28 11:01:34 +01:00
Erik Johnston
6d1dea337b Merge branch 'release-v0.9.1' of github.com:matrix-org/synapse 2015-05-26 16:03:32 +01:00
Erik Johnston
99eb1172b0 Merge branch 'release-v0.9.1' of github.com:matrix-org/synapse into develop 2015-05-26 16:02:59 +01:00
Erik Johnston
6cb3212fc2 changelog 2015-05-26 16:00:45 +01:00
Mark Haines
554c63ca60 Iterate over the user_streams not the user_ids 2015-05-26 15:03:49 +01:00
Mark Haines
fff7905409 Merge branch 'bugs/SYN-390' into release-v0.9.1 2015-05-26 14:58:49 +01:00
Mark Haines
00dd207f60 Take a dict of the rule, not the rule list 2015-05-26 14:57:48 +01:00
Erik Johnston
e417469af2 changelog 2015-05-26 11:08:46 +01:00
Erik Johnston
cb7dac3a5d changelog 2015-05-26 11:08:09 +01:00
Erik Johnston
2651fd5e24 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.1 2015-05-26 11:05:50 +01:00
Erik Johnston
764856777c changelog 2015-05-26 11:05:44 +01:00
Mark Haines
e7b25a649c Merge pull request #166 from matrix-org/bugs/SYN-390
SYN-390: Don't modify the dictionary returned from the database here either
2015-05-26 10:40:50 +01:00
Mark Haines
804b732aab SYN-390: Don't modify the dictionary returned from the database here either 2015-05-26 10:35:08 +01:00
Erik Johnston
45fffe8cbe Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.1 2015-05-26 10:22:41 +01:00
Erik Johnston
9ba3c1ede4 Merge pull request #165 from matrix-org/bugs/SYN-390
SYN-390: Don't modify the dictionary returned from the data store
2015-05-26 10:20:36 +01:00
Mark Haines
a0bebeda8b SYN-390: Don't modify the dictionary returned from the data store 2015-05-26 10:14:15 +01:00
Erik Johnston
27e093cbc1 Bump version 2015-05-22 17:03:37 +01:00
Mark Haines
d9f60e8dc8 Merge pull request #163 from matrix-org/markjh/presence_list_cache
Add a cache for the presence list
2015-05-22 17:02:23 +01:00
Mark Haines
0e42dfbe22 Merge pull request #164 from matrix-org/markjh/pusher_performance_2
Add a cache for get_push rules for user, fix cache invalidation
2015-05-22 17:01:56 +01:00
Mark Haines
5ebd33302f Merge pull request #162 from matrix-org/erikj/backfill_fixes
backfill fixes
2015-05-22 17:01:40 +01:00
Mark Haines
17167898c8 Fix the presence tests 2015-05-22 16:22:54 +01:00
Erik Johnston
6eadbfbea0 Remove redundant for loop 2015-05-22 16:12:20 +01:00
Mark Haines
1a9a9abcc7 Add a cache for getting the presence list for a user 2015-05-22 16:11:17 +01:00
Erik Johnston
74b7de83ec Merge branch 'develop' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-22 16:10:42 +01:00
Mark Haines
36317f3dad Merge pull request #156 from matrix-org/erikj/join_perf
Make joining #matrix:matrix.org over federation quicker
2015-05-22 16:09:54 +01:00
Mark Haines
052ac0c8d0 Merge pull request #159 from matrix-org/erikj/metrics_interface_config
Enable changing the interface the metrics listener binds to
2015-05-22 16:09:33 +01:00
Mark Haines
49a2c10279 Merge pull request #157 from matrix-org/markjh/presence_performance
Improve presence performance in loadtest
2015-05-22 16:04:40 +01:00
Mark Haines
5d53c14342 Merge pull request #160 from matrix-org/markjh/appservice_performance
Make the appservice use 'users_in_room' rather than get_room_members …
2015-05-22 16:04:22 +01:00
Mark Haines
4752a990c8 Merge pull request #161 from matrix-org/erikj/txn_logging_fix
Erikj/txn logging fix
2015-05-22 16:03:52 +01:00
Mark Haines
106a3051b8 Remove spurious TODO comment 2015-05-22 15:53:03 +01:00
Erik Johnston
284f55a7fb Add doc strings 2015-05-22 15:18:04 +01:00
Erik Johnston
1ce1509989 s/metric_interface/metric_bind_host/ 2015-05-22 14:51:22 +01:00
Erik Johnston
8bb85c8c5a Update log line 2015-05-22 14:48:06 +01:00
Mark Haines
c8135f808b Remove unused import 2015-05-22 14:45:46 +01:00
Erik Johnston
b21d015c55 Log origin and stats of incoming transactions 2015-05-22 14:44:25 +01:00
Erik Johnston
e70e8e053e Add txn_id to some log lines 2015-05-22 14:44:02 +01:00
Erik Johnston
1b446a5d85 Log less lines at INFO level, but include more helpful information 2015-05-22 14:29:57 +01:00
Erik Johnston
59a0682f3e Enable changing the interface the metrics listener binds to 2015-05-22 13:13:07 +01:00
Mark Haines
b6adfc59f5 Invalidate the get_latest_event_ids_in_room cache when deleting from event_forward_extremities 2015-05-22 13:01:03 +01:00
Erik Johnston
254aa3c986 Revert register_new_matrix_user to use v1 api 2015-05-22 11:59:48 +01:00
Mark Haines
f43544eecc Make the appservice use 'users_in_room' rather than get_room_members since it is cached 2015-05-22 11:01:28 +01:00
Mark Haines
a04cde613e Add a cache for get_push rules for user, fix cache invalidation 2015-05-22 10:39:45 +01:00
Erik Johnston
4429e720ae Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-22 10:33:00 +01:00
Erik Johnston
ee49098843 Changelog 2015-05-21 17:36:52 +01:00
Erik Johnston
51f5d36f4f Merge branch 'hotfixes-v0.9.0-r5' of github.com:matrix-org/synapse 2015-05-21 17:16:10 +01:00
Erik Johnston
f8c2cd129d Bump version 2015-05-21 17:03:30 +01:00
Erik Johnston
f6d1183fc5 Merge branch 'markjh/pusher_performance_master' of github.com:matrix-org/synapse into hotfixes-v0.9.0-r5 2015-05-21 17:02:54 +01:00
Mark Haines
2043527b9b Don't try to use a txn when not in one, remove spurious debug logging 2015-05-21 16:53:03 +01:00
Mark Haines
53447e9cd3 Add caches for things requested by the pushers 2015-05-21 16:41:39 +01:00
Mark Haines
d61ce3f670 Add a cache for get_current_state with state_key 2015-05-21 16:41:39 +01:00
Erik Johnston
a910984b58 Actually return something from lambda 2015-05-21 15:58:41 +01:00
Erik Johnston
e309b1045d Sort backfill events 2015-05-21 15:57:35 +01:00
Erik Johnston
0180bfe4aa Remove dead code 2015-05-21 15:53:41 +01:00
Erik Johnston
1f3d1d85a9 Only get non-state 2015-05-21 15:52:29 +01:00
Erik Johnston
39a3340f73 Skip events we've already seen 2015-05-21 15:48:56 +01:00
Erik Johnston
ae3bff3491 Correctly prepopulate queue 2015-05-21 15:46:07 +01:00
Erik Johnston
dc085ddf8c Don't prepopulate event_results 2015-05-21 15:44:05 +01:00
Erik Johnston
73d23c6ae8 Don't readd things that are already in event_results 2015-05-21 15:40:22 +01:00
Erik Johnston
6189d8e54d PriorityQueue gives lowest first 2015-05-21 15:38:08 +01:00
Erik Johnston
115ef3ddac Correctly capture Queue.Empty exception 2015-05-21 15:37:43 +01:00
Erik Johnston
4fb858d90a Merge branch 'develop' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-21 15:25:54 +01:00
Mark Haines
88f1ea36ce Oops, get_rooms_for_user returns a namedtuple, not a room_id 2015-05-21 15:23:58 +01:00
Erik Johnston
c2633907c5 Merge branch 'erikj/join_perf' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-21 14:58:47 +01:00
Erik Johnston
ebfdd2eb5b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-21 14:54:52 +01:00
Erik Johnston
a551c5dad7 Merge pull request #155 from matrix-org/erikj/perf
Bulk and batch retrieval of events.
2015-05-21 14:54:40 +01:00
Erik Johnston
27e4b45c06 s/for events/for requests for events/ 2015-05-21 14:52:23 +01:00
Erik Johnston
ac5f2bf9db s/for events/for requests for events/ 2015-05-21 14:50:57 +01:00
Erik Johnston
80a167b1f0 Add comments 2015-05-21 11:19:04 +01:00
Mark Haines
7ae8afb7ef Removed unused 'is_visible' method 2015-05-20 14:48:11 +01:00
Erik Johnston
9118a92862 Split up _get_events into defer and txn versions 2015-05-20 13:27:16 +01:00
Mark Haines
8eca5bd50a Fix the presence tests 2015-05-20 13:22:18 +01:00
Mark Haines
e01b825cc9 Clean up the presence_list checking logic a bit 2015-05-20 13:21:59 +01:00
Erik Johnston
ab45e12d31 Make not return a deferred _get_event_from_row_txn 2015-05-20 13:07:19 +01:00
Erik Johnston
f407cbd2f1 PEP8 2015-05-20 13:02:01 +01:00
Erik Johnston
227f8ef031 Split out _get_event_from_row back into defer and _txn version 2015-05-20 13:00:57 +01:00
Erik Johnston
2bc60c55af Fix _get_backfill_events to return events in the correct order 2015-05-20 12:57:00 +01:00
Erik Johnston
20814fabdd Actually fetch state for new backwards extremeties when backfilling. 2015-05-20 11:59:02 +01:00
Erik Johnston
9084cdd70f Ensure event_results is a set 2015-05-19 16:34:31 +01:00
Erik Johnston
5b731178b2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 16:08:00 +01:00
Erik Johnston
3a653515ec Add None check 2015-05-19 15:27:09 +01:00
Erik Johnston
aa729349dd Fix event_backwards_extrem insertion to ignore outliers 2015-05-19 15:27:00 +01:00
Erik Johnston
5b1631a4a9 Add a timeout param to get_event 2015-05-19 14:53:32 +01:00
Erik Johnston
291cba284b Handle the case when things return empty but non none things 2015-05-19 14:42:46 +01:00
Erik Johnston
253f76a0a5 Don't always hit get_server_verify_key_v1_direct 2015-05-19 14:42:38 +01:00
Erik Johnston
6837c5edab Handle the case when things return empty but non none things 2015-05-19 14:27:11 +01:00
Erik Johnston
7223129916 Don't apply new room join hack if depth > 5 2015-05-19 14:16:08 +01:00
Erik Johnston
5ae4a84211 Don't always hit get_server_verify_key_v1_direct 2015-05-19 13:43:34 +01:00
Erik Johnston
118a760719 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 13:20:29 +01:00
David Baker
19505e0392 Disable GZip encoding on static file resources as per comment 2015-05-19 13:20:25 +01:00
Erik Johnston
df431b127b Add forgotten .items() 2015-05-19 13:14:21 +01:00
Erik Johnston
882ac83d8d Fix scripts-dev/convert_server_keys.py to have correct format 2015-05-19 13:12:55 +01:00
Erik Johnston
d3e09f12d0 SYN-383: Actually, we expect this value to be a dict 2015-05-19 13:12:41 +01:00
Erik Johnston
677be13ffc Revert accidental commit 2015-05-19 13:12:28 +01:00
Erik Johnston
350b88656a SYN-383: Actually, we expect this value to be a dict 2015-05-19 13:01:57 +01:00
Erik Johnston
9de94d5a4d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 12:50:17 +01:00
Erik Johnston
2b7120e233 SYN-383: Handle the fact the server might not have signed things 2015-05-19 12:49:38 +01:00
Erik Johnston
8b256a7296 Don't reuse var names 2015-05-19 11:58:22 +01:00
Erik Johnston
62ccc6d95f Don't reuse var names 2015-05-19 11:58:04 +01:00
Erik Johnston
01858bcbf2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 11:56:35 +01:00
Erik Johnston
2aeee2a905 SYN-383: Fix parsing of verify_keys and catching of _DefGen_Return 2015-05-19 11:56:18 +01:00
Erik Johnston
5e7883ec19 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 10:50:43 +01:00
Mark Haines
c6a03c46e6 SYN-383: Extract the response list from 'server_keys' in the response JSON as it might work better than iterating over the top level dict 2015-05-19 10:23:02 +01:00
Mark Haines
e4c65b338d Speed up the get_pagination_rows as well 2015-05-18 18:21:06 +01:00
Mark Haines
99914ec9f8 Merge pull request #152 from matrix-org/notifier_performance
Notifier performance
2015-05-18 17:49:59 +01:00
Erik Johnston
ef910a0358 Do work in parellel when joining a room 2015-05-18 17:17:04 +01:00
Mark Haines
591c4bf223 Cache the most recent serial for each room 2015-05-18 16:21:51 +01:00
Mark Haines
e1150cac4b Move updating the serial and state of the presence cache into a single function 2015-05-18 15:46:37 +01:00
Erik Johnston
165eb2dbe6 Comments and shuffle of functions 2015-05-18 15:18:41 +01:00
Mark Haines
880fb46de0 Merge branch 'notifier_performance' into markjh/presence_performance 2015-05-18 14:33:58 +01:00
Mark Haines
9396723995 Merge pull request #154 from matrix-org/erikj/events_move
Move get_events functions to storage.events
2015-05-18 14:24:35 +01:00
Erik Johnston
65878a2319 Remove unused metric 2015-05-18 14:06:30 +01:00
Mark Haines
ad31fa3040 Don't bother sorting by the room_stream_ids, it shouldn't matter which order they are notified in 2015-05-18 14:04:58 +01:00
Erik Johnston
4d1b6f4ad1 Remove rejected events if we don't want rejected events 2015-05-18 14:03:46 +01:00
Mark Haines
0b0033c40b Merge branch 'develop' into notifier_performance 2015-05-18 13:50:01 +01:00
Mark Haines
755def8083 Add more doc string, reduce C+P boilerplate for getting room list 2015-05-18 13:46:47 +01:00
Mark Haines
1e90715a3d Make sure the notifier stream token goes forward when it is updated. Sort the pending events by the correct room_stream_id 2015-05-18 13:17:36 +01:00
Erik Johnston
131bdf9bb1 Merge branch 'erikj/events_move' of github.com:matrix-org/synapse into erikj/perf 2015-05-18 10:23:37 +01:00
Erik Johnston
10f1bdb9a2 Move get_events functions to storage.events 2015-05-18 10:21:40 +01:00
Erik Johnston
d5cea26d45 Remove pointless newline 2015-05-18 10:16:45 +01:00
Erik Johnston
c71176858b Newline, remove debug logging 2015-05-18 10:11:14 +01:00
Erik Johnston
f8bd4de87d Remove debug logging 2015-05-18 09:58:03 +01:00
Erik Johnston
c3b37abdfd PEP8 2015-05-15 16:59:58 +01:00
Erik Johnston
6c74fd62a0 Revert limiting of fetching, it didn't help perf. 2015-05-15 16:45:35 +01:00
Erik Johnston
9ff7f66a2b init j 2015-05-15 16:36:03 +01:00
Erik Johnston
70f272f71c Don't completely drain the list 2015-05-15 16:34:17 +01:00
Erik Johnston
8763dd80ef Don't fetch prev_content for current_state 2015-05-15 15:33:01 +01:00
Erik Johnston
807229f2f2 Err, defer.gatherResults ftw 2015-05-15 15:20:29 +01:00
Erik Johnston
acb12cc811 Make store.get_current_state fetch events asyncly 2015-05-15 15:20:05 +01:00
Erik Johnston
d62dee7eae Remove more debug logging 2015-05-15 15:06:37 +01:00
Erik Johnston
0f29cfabc3 Remove debug logging 2015-05-15 14:06:42 +01:00
Erik Johnston
e275a9c0d9 preserve log context 2015-05-15 11:54:51 +01:00
Erik Johnston
aa32bd38e4 Add a wait 2015-05-15 11:35:04 +01:00
Erik Johnston
372d4c6d7b Srsly. Don't use closures. Baaaaaad 2015-05-15 11:26:00 +01:00
Erik Johnston
575ec91d82 Correctly pass through params 2015-05-15 11:15:10 +01:00
Mark Haines
10be983f2c Merge pull request #153 from matrix-org/markjh/presence_docstring
Add some doc strings for presence.
2015-05-15 11:11:47 +01:00
Mark Haines
415b158ce2 More whitespace 2015-05-15 11:09:47 +01:00
Erik Johnston
de01438a57 Sort out error handling 2015-05-15 11:00:50 +01:00
Erik Johnston
a2c4f3f150 Fix daedlock 2015-05-15 10:54:04 +01:00
Mark Haines
0a4330cd5d Add some missed argument types, cleanup the whitespace a bit 2015-05-14 17:48:12 +01:00
Mark Haines
47ec693e29 More doc-strings 2015-05-14 17:07:02 +01:00
Erik Johnston
1d566edb81 Remove race condition 2015-05-14 16:54:35 +01:00
David Baker
6e1ad283cf Support gzip encoding for client, client v2 and web client resources (SYN-176). 2015-05-14 16:39:19 +01:00
Erik Johnston
ef3d8754f5 Call from right thread 2015-05-14 15:41:55 +01:00
Erik Johnston
142934084a Count and loop 2015-05-14 15:40:21 +01:00
Erik Johnston
96c5b9f87c Don't start up more fetch_events 2015-05-14 15:36:04 +01:00
Erik Johnston
7cd6a6f6cf Awful idea for speeding up fetching of events 2015-05-14 15:34:02 +01:00
Mark Haines
c5d1b4986b Remove unused arguments and doc PresenceHandler.push_update_to_clients 2015-05-14 14:59:31 +01:00
Erik Johnston
7f4105a5c9 Turn off preemptive transactions 2015-05-14 14:51:06 +01:00
Erik Johnston
f4d58deba1 PEP8 2015-05-14 14:45:42 +01:00
Erik Johnston
386b7330d2 Move from _base to events 2015-05-14 14:45:22 +01:00
Mark Haines
0ad1c67234 Add some doc-strings to notifier 2015-05-14 14:35:07 +01:00
Erik Johnston
7d6a1dae31 Jump out early 2015-05-14 14:27:58 +01:00
Erik Johnston
656223fbd3 Actually, we probably want to run this in a transaction 2015-05-14 14:26:35 +01:00
David Baker
67800f7626 Treat setting your display name to the empty string as removing it (SYN-186). 2015-05-14 14:19:59 +01:00
Erik Johnston
2f7f8e1c2b Preemptively jump into a transaction if we ask for get_prev_content 2015-05-14 14:17:36 +01:00
Mark Haines
4770cec7bc Merge pull request #150 from matrix-org/notifier_unify
Make v1 and v2 client APIs interact with the notifier in the same way.
2015-05-14 14:16:59 +01:00
Erik Johnston
e1e9f0c5b2 loop -> gatherResults 2015-05-14 13:58:49 +01:00
Erik Johnston
ab78a8926e Err, we probably want a bigger limit 2015-05-14 13:47:16 +01:00
Erik Johnston
f6f902d459 Move fetching of events into their own transactions 2015-05-14 13:45:48 +01:00
Erik Johnston
cdb3757942 Refactor _get_events 2015-05-14 13:31:55 +01:00
David Baker
92e1c8983d Disallow whitespace in aliases here too 2015-05-14 13:21:55 +01:00
David Baker
0c894e1ebd Throw error when creating room if alias contains whitespace #SYN-335 2015-05-14 13:11:28 +01:00
Mark Haines
084c365c3a Use the current token when timing out a notifier, make sure the user_id is a string in on_new_user_event 2015-05-14 12:03:26 +01:00
David Baker
c37a6e151f Make shared secret registration work again 2015-05-14 12:03:13 +01:00
Erik Johnston
36ea26c5c0 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/perf 2015-05-14 12:01:38 +01:00
David Baker
7c549dd557 Add ID generator for push_rules_enable to #resolve SYN-378 2015-05-14 11:44:03 +01:00
Mark Haines
899d4675dd Merge branch 'notifier_unify' into notifier_performance 2015-05-14 11:36:44 +01:00
Mark Haines
243c56e725 Merge branch 'develop' into notifier_unify 2015-05-14 11:36:23 +01:00
Mark Haines
3edd2d5c93 Fix v2 sync, update the last_notified_ms only if there was an active listener 2015-05-14 11:25:30 +01:00
David Baker
47fb089eb5 Specify python 2.7 in the virtualenv setup (SYN-319) #resolved 2015-05-14 10:23:10 +01:00
Erik Johnston
4f1d984e56 Add index on events 2015-05-13 17:22:26 +01:00
Mark Haines
5e0c533672 Fix metric counter 2015-05-13 17:20:28 +01:00
Erik Johnston
968b01a91a Actually use async method 2015-05-13 17:02:46 +01:00
Erik Johnston
4071f29653 Fetch events from events_id in their own transactions 2015-05-13 16:59:41 +01:00
Mark Haines
f1b83d88a3 Discard unused NotifierUserStreams 2015-05-13 16:54:02 +01:00
Erik Johnston
a988361aea Typo 2015-05-13 15:44:15 +01:00
Erik Johnston
8888982db3 Don't insert None 2015-05-13 15:43:32 +01:00
Mark Haines
9af432257d Don't set a timer if there's already a result to return 2015-05-13 15:42:13 +01:00
Erik Johnston
cf706cc6ef Don't return None 2015-05-13 15:31:25 +01:00
Erik Johnston
5971d240d4 Limit batch size 2015-05-13 15:26:49 +01:00
Erik Johnston
ca4f458787 Fetch events in bulk 2015-05-13 15:13:42 +01:00
Mark Haines
df6db5c802 Don't bother checking for new events from a source if the stream token hasn't advanced for that source 2015-05-13 15:08:24 +01:00
Erik Johnston
6edff11a88 Don't fetch redaction and rejection stuff for each event, so we can use index only scan 2015-05-13 14:39:05 +01:00
Mark Haines
63878c0379 Don't bother checking for updates if the stream token hasn't advanced for a user 2015-05-13 13:42:21 +01:00
Erik Johnston
02590c3e1d Temp turn off checking for rejections and redactions 2015-05-13 11:31:28 +01:00
Erik Johnston
619a21812b defer.gatherResults loop 2015-05-13 11:29:03 +01:00
Erik Johnston
fec4485e28 Batch fetching of events for state groups 2015-05-13 11:22:42 +01:00
Erik Johnston
409bcc76bd Load events for state group seperately 2015-05-13 11:13:31 +01:00
Mark Haines
cffe6057fb Merge branch 'notifier_unify' into notifier_performance
Conflicts:
	synapse/notifier.py
2015-05-12 16:37:50 +01:00
Erik Johnston
80fd2b574c Don't talk to yourself when backfilling 2015-05-12 16:19:46 +01:00
Erik Johnston
e122685978 You need to call contextmanager 2015-05-12 16:12:37 +01:00
Mark Haines
54ef09f860 Merge pull request #151 from matrix-org/revert-147-presence-performance
Revert "Improvement to performance of presence event stream handling"
2015-05-12 15:44:55 +01:00
Mark Haines
d7b3ac46f8 Revert "Improvement to performance of presence event stream handling" 2015-05-12 15:44:21 +01:00
Mark Haines
4429e4bf24 Merge branch 'develop' into notifier_unify
Conflicts:
	synapse/notifier.py
2015-05-12 15:31:26 +01:00
Mark Haines
ec07dba29e Merge pull request #143 from matrix-org/erikj/SYN-375
SYN-375 - Lots of unhandled deferred exceptions.
2015-05-12 15:25:54 +01:00
Mark Haines
c167cbc9fd Merge pull request #147 from matrix-org/presence-performance
Improvement to performance of presence event stream handling
2015-05-12 15:24:54 +01:00
Mark Haines
a6fb2aa2a5 Merge pull request #144 from matrix-org/erikj/logging_context
Preserving logging contexts
2015-05-12 15:23:50 +01:00
Mark Haines
1fce36b111 Merge pull request #149 from matrix-org/erikj/backfill
Backfill support
2015-05-12 15:20:32 +01:00
Erik Johnston
8b28209c60 Err, delete the right stuff 2015-05-12 15:02:53 +01:00
Erik Johnston
30c72d377e Newlines 2015-05-12 14:47:40 +01:00
Erik Johnston
e4eddf9b36 We do actually want to delete rows out of event_backward_extremities 2015-05-12 14:47:23 +01:00
Erik Johnston
c1779a79bc Fix up _handle_prev_events to not try to insert duplicate rows 2015-05-12 14:41:50 +01:00
Erik Johnston
74850d7f75 Do state groups persistence /after/ checking if we have already persisted the event 2015-05-12 14:14:58 +01:00
Erik Johnston
07a1223156 s/backfil/backfill/ 2015-05-12 14:09:54 +01:00
Erik Johnston
0d31ad5101 Typos everywhere 2015-05-12 14:02:01 +01:00
Erik Johnston
a0dfffb33c And another typo. 2015-05-12 14:00:31 +01:00
Erik Johnston
6e5ac4a28f Err, gatherResults doesn't take a dict... 2015-05-12 13:58:14 +01:00
Erik Johnston
8022b27fc2 Make distributer.fire work as it did 2015-05-12 13:14:48 +01:00
Erik Johnston
95dedb866f Unwrap defer.gatherResults failures 2015-05-12 13:14:29 +01:00
Mark Haines
78672a9fd5 Merge branch 'notifier_unify' into notifier_performance 2015-05-12 13:11:54 +01:00
Erik Johnston
da6a7bbdde Merge branch 'develop' of github.com:matrix-org/synapse into erikj/logging_context 2015-05-12 13:10:42 +01:00
Mark Haines
2551b6645d Update the end_token correctly, otherwise the token doesn't advance and the client gets duplicate events 2015-05-12 11:54:18 +01:00
Mark Haines
5e4ba463b7 Merge branch 'develop' into notifier_unify 2015-05-12 11:41:53 +01:00
Mark Haines
51da995806 Merge pull request #148 from matrix-org/bugs/SYN-377
SYN-377: Make sure that the event is marked as persisted from the main thread.
2015-05-12 11:36:44 +01:00
Mark Haines
5002056b16 SYN-377: Make sure that the StreamIdGenerator.get_next.__exit__ is called from the main thread after the transaction completes, not from database thread before the transaction completes. 2015-05-12 11:20:40 +01:00
Mark Haines
5c75adff95 Add a NotifierUserStream to hold all the notification listeners for a user 2015-05-12 11:00:37 +01:00
Erik Johnston
367382b575 Handle the case where the other side is unreachable when backfilling 2015-05-12 10:35:45 +01:00
Erik Johnston
4df11b5039 Make get_current_token accept a direction parameter, which tells whether the source whether we want a token for going 'forwards' or 'backwards' 2015-05-12 10:28:10 +01:00
Erik Johnston
84e6b4001f Initial hack at wiring together pagination and backfill 2015-05-11 18:01:31 +01:00
Erik Johnston
17653a5dfe Move storage.stream._StreamToken to types.RoomStreamToken 2015-05-11 18:01:01 +01:00
Mark Haines
e269c511f6 Don't bother passing the events to the notifier since it isn't using them 2015-05-11 15:01:51 +01:00
Mark Haines
5e3b254dc8 Use wait_for_events to implement 'get_events' 2015-05-11 14:37:33 +01:00
Erik Johnston
d244fa9741 Merge branch 'hotfixes-v0.9.0-r4' of github.com:matrix-org/synapse into develop 2015-05-11 13:34:31 +01:00
Erik Johnston
e89ca34e0e Merge branch 'hotfixes-v0.9.0-r4' of github.com:matrix-org/synapse 2015-05-11 13:16:11 +01:00
Erik Johnston
79b7154454 Merge pull request #146 from matrix-org/erikj/push_rules_fixes
Fix 500 on push rule updates.
2015-05-11 11:33:47 +01:00
Erik Johnston
4ef556f650 Bump version 2015-05-11 11:31:04 +01:00
Erik Johnston
b036596b75 Prefer to use _simple_*. 2015-05-11 11:24:01 +01:00
Erik Johnston
cd525c0f5a push_rules table expects an 'id' field 2015-05-11 11:24:01 +01:00
Mark Haines
3c224f4d0e SYN-376: Add script for converting server keys from v1 to v2 2015-05-11 11:00:17 +01:00
Erik Johnston
d38862a080 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-10 10:56:46 +01:00
David Baker
2640d6718d Merge pull request #145 from matrix-org/hotfixes-v0.9.0-r3
Hotfixes v0.9.0 r3
2015-05-10 10:54:44 +01:00
Erik Johnston
de87541862 Bump version 2015-05-10 10:51:08 +01:00
Erik Johnston
22d2f498fa Fix push rule bug: can't insert bool into small int column 2015-05-10 10:50:51 +01:00
Matthew Hodgson
d79ffa1898 typo 2015-05-09 14:45:37 +01:00
Erik Johnston
2236ef6c92 Fix up leak. Add warnings. 2015-05-08 19:53:34 +01:00
Erik Johnston
da1aa07db5 Add some docs 2015-05-08 16:52:49 +01:00
Erik Johnston
4ac1941592 PEP8 2015-05-08 16:33:01 +01:00
Erik Johnston
476899295f Change the way we do logging contexts so that they survive divergences 2015-05-08 16:32:18 +01:00
Erik Johnston
fca28d243e Change the way we create observers to deferreds so that we don't get spammed by 'unhandled errors' 2015-05-08 16:28:08 +01:00
Erik Johnston
37feb4031f Merge branch 'hotfixes-v0.9.0-r2' of github.com:matrix-org/synapse 2015-05-08 16:13:15 +01:00
Erik Johnston
0cd1401f8d Bump version 2015-05-08 16:11:51 +01:00
Erik Johnston
724bb1e7d9 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-08 16:11:19 +01:00
Mark Haines
1c7912751e Drop the old table not the new table 2015-05-08 16:04:32 +01:00
Mark Haines
9d36eb4eab Rename unique constraint 2015-05-08 16:01:55 +01:00
Mark Haines
b0f71db3ff Remove unsigned 2015-05-08 15:59:51 +01:00
Mark Haines
84e1cacea4 Bump schema version 2015-05-08 15:58:14 +01:00
Mark Haines
6538d445e8 Make the timestamps in server_keys_json bigints 2015-05-08 15:55:17 +01:00
Erik Johnston
52f98f8a5b Merge branch 'hotfixes-v0.9.0-r1' of github.com:matrix-org/synapse 2015-05-08 14:25:18 +01:00
Erik Johnston
22a7ba8b22 Actually rename all isntances 2015-05-08 13:50:03 +01:00
Erik Johnston
3a42f32134 Reword port script usage 2015-05-08 13:47:48 +01:00
Erik Johnston
4fa0f53521 Support reading directly from a config 2015-05-08 13:45:58 +01:00
Erik Johnston
326121aec4 UPGRADES: s/v0.x.x/v0.9.0 2015-05-08 13:38:29 +01:00
Erik Johnston
9a9386226a Mention Ivan Shapovalov contrib/systemd 2015-05-08 13:37:54 +01:00
Erik Johnston
126d562576 Bump version 2015-05-08 13:29:37 +01:00
Erik Johnston
f08c33e834 Fix port_from_sqlite_to_postgres after changes to storage layer. 2015-05-08 13:29:00 +01:00
Paul "LeoNerd" Evans
45543028bb Use the presence cachemap ordering to early-abort the iteration loop 2015-05-07 22:40:10 +01:00
Paul "LeoNerd" Evans
f683b5de47 Store presence cachemap in an ordered dict, so that the newer serials will be at the end 2015-05-07 21:27:53 +01:00
Erik Johnston
db0dca2f6f Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-07 19:21:00 +01:00
Erik Johnston
89c0cd4acc Merge branch 'release-v0.9.0' of github.com:matrix-org/synapse 2015-05-07 19:07:00 +01:00
Erik Johnston
6101ce427a Slight rewording 2015-05-07 18:58:28 +01:00
Erik Johnston
5fe26a9b5c Reword docs/application_services.rst 2015-05-07 18:54:53 +01:00
Erik Johnston
35698484a5 Add some information on registering AS's 2015-05-07 18:51:09 +01:00
Erik Johnston
63562f6d5a Bump date 2015-05-07 18:20:13 +01:00
Erik Johnston
a151693a3b Bump syweb version 2015-05-07 18:01:46 +01:00
Erik Johnston
ac29318b84 Add link to registration spec 2015-05-07 17:58:50 +01:00
Mark Haines
dfa98f911b revert accidental bcrypt gensalt round reduction from loadtesting 2015-05-07 17:45:42 +01:00
Erik Johnston
4605953b0f Add JIRA issue id 2015-05-07 16:53:18 +01:00
Mark Haines
ef8e8ebd91 pynacl-0.3.0 was released so we can finally start using it directly from pypi 2015-05-07 16:46:51 +01:00
Erik Johnston
3188e94ac4 Explain the change in AS /register api 2015-05-07 16:12:02 +01:00
David Baker
97a64f3ebe Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-05-07 09:33:42 +01:00
David Baker
b850c9fa04 Typo 2015-05-07 09:33:30 +01:00
Mark Haines
4a7a4a5b6c Optional profiling using cProfile 2015-05-06 17:08:00 +01:00
Erik Johnston
771fc05d30 Change log: Link to application services spec. 2015-05-06 13:59:32 +01:00
Erik Johnston
938939fd89 Move CAPTCHA_SETUP to docs/ 2015-05-06 13:48:06 +01:00
Erik Johnston
028a570e17 Linkify docs/postgres.sql 2015-05-06 13:42:40 +01:00
Erik Johnston
0e4393652f Update change log to be more detailed 2015-05-06 13:31:59 +01:00
Mark Haines
b994fb2b96 Don't read from the config file before checking it exists 2015-05-06 12:56:47 +01:00
Erik Johnston
f10fd8a470 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.0 2015-05-06 12:54:36 +01:00
Mark Haines
3c11c9c122 Merge pull request #140 from matrix-org/erikj/scripts_refactor
Seperate scripts/ into scripts/ and scripts-dev/
2015-05-06 12:54:07 +01:00
Erik Johnston
673375fe2d Acutally add scripts-dev/ 2015-05-06 11:46:02 +01:00
Erik Johnston
3c92231094 Re-add scripts/register_new_matrix_user 2015-05-06 11:45:18 +01:00
Erik Johnston
119e5d7702 Seperate scripts/ into scripts/ and scripts-dev/, where scripts/* are automatically added to the package 2015-05-06 11:41:19 +01:00
Erik Johnston
271ee604f8 Update change log 2015-05-06 11:29:54 +01:00
Erik Johnston
04c01882fc Bump version 2015-05-06 09:59:13 +01:00
Mark Haines
f4664a6cbd Merge pull request #138 from matrix-org/erikj/SYN-371
SYN-371 - Failed to persist event
2015-05-05 18:53:15 +01:00
Mark Haines
ecb26beda5 Merge pull request #137 from matrix-org/erikj/executemany
executemany support
2015-05-05 18:30:35 +01:00
Erik Johnston
0c4ac271ca Merge branch 'erikj/executemany' of github.com:matrix-org/synapse into erikj/SYN-371 2015-05-05 18:21:19 +01:00
Erik Johnston
0cf7e480b4 And use buffer(...) there as well 2015-05-05 18:20:01 +01:00
Erik Johnston
ed2584050f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/executemany 2015-05-05 18:15:20 +01:00
Erik Johnston
977338a7af Use buffer(...) when inserting into bytea column 2015-05-05 18:12:53 +01:00
Mark Haines
31049c4d72 Merge pull request #139 from matrix-org/bugs/SYN-369
Fix race with cache invalidation. SYN-369
2015-05-05 17:46:13 +01:00
Mark Haines
deb0237166 Add some doc-string 2015-05-05 17:45:11 +01:00
Mark Haines
e45b05647e Fix the --help option for synapse 2015-05-05 17:39:59 +01:00
Erik Johnston
3d5a955e08 Missed events are not outliers 2015-05-05 17:36:57 +01:00
Mark Haines
d18f37e026 Collect the invalidate callbacks on the transaction object rather than passing around a separate list 2015-05-05 17:32:21 +01:00
Erik Johnston
9951542393 Add a comment about the zip(*[zip(sorted(...),...)]) 2015-05-05 17:06:55 +01:00
Mark Haines
041b6cba61 SYN-369: Add comments to the sequence number logic in the cache 2015-05-05 16:32:44 +01:00
Mark Haines
63075118a5 Add debug flag in synapse/storage/_base.py for debugging the cache logic by comparing what is in the cache with what was in the database on every access 2015-05-05 16:24:04 +01:00
Erik Johnston
531d7955fd Don't insert without deduplication. In this case we never actually use this table, so simply remove the insert entirely 2015-05-05 16:12:28 +01:00
Mark Haines
bfa4a7f8b0 Invalidate the room_member cache if the current state events updates 2015-05-05 15:43:49 +01:00
Mark Haines
d0fece8d3c Missing return for when the event was already persisted 2015-05-05 15:39:09 +01:00
Erik Johnston
bdcd7693c8 Fix indentation 2015-05-05 15:14:48 +01:00
Erik Johnston
43c2e8deae Add support for using executemany 2015-05-05 15:13:25 +01:00
Erik Johnston
1692dc019d Don't call 'encode_parameter' no-op 2015-05-05 15:00:30 +01:00
Mark Haines
a9aea68fd5 Invalidate the caches from the correct thread 2015-05-05 14:57:08 +01:00
Mark Haines
261d809a47 Sequence the modifications to the cache so that selects don't race with inserts 2015-05-05 14:13:50 +01:00
Erik Johnston
d9cc5de9e5 Correctly name transaction 2015-05-05 10:24:10 +01:00
Erik Johnston
b8940cd902 Remove some unused indexes 2015-05-01 16:14:25 +01:00
Erik Johnston
1942382246 Don't log enqueue_ 2015-05-01 16:14:25 +01:00
David Baker
eb9bd2d949 user_id now in user_threepids 2015-05-01 15:04:37 +01:00
Erik Johnston
2d386d7038 That wasn't a deferred 2015-05-01 14:41:25 +01:00
Erik Johnston
4ac2823b3c Remove inlineCallbacks from non-generator 2015-05-01 14:41:25 +01:00
Erik Johnston
22c7c5eb8f Typo 2015-05-01 14:41:25 +01:00
Erik Johnston
42c12c04f6 Remove some run_on_reactors 2015-05-01 14:41:25 +01:00
Erik Johnston
adb5b76ff5 Don't log all auth events every time we call auth.check 2015-05-01 14:41:25 +01:00
Mark Haines
3bcdf3664c Use the daemonize key from the config if it exists 2015-05-01 14:34:55 +01:00
David Baker
9eeb03c0dd Don't use self.execute: it's designed for fetching stuff 2015-05-01 14:21:25 +01:00
Mark Haines
32937f3ea0 database config is not kept in separate config file anymore 2015-05-01 14:06:54 +01:00
Mark Haines
7b50769eb9 Merge pull request #136 from matrix-org/markjh/config_cleanup
Config restructuring.
2015-05-01 14:04:39 +01:00
David Baker
7693f24792 No id field on user 2015-05-01 13:55:42 +01:00
Mark Haines
46a65c282f Allow generate-config to run against an existing config file to generate default keys 2015-05-01 13:54:38 +01:00
David Baker
92b20713d7 More missed get_user_by_id API changes 2015-05-01 13:45:54 +01:00
Erik Johnston
da4ed08739 One too many lens 2015-05-01 13:29:38 +01:00
Erik Johnston
9060dc6b59 Change public room list to use defer.gatherResults 2015-05-01 13:28:36 +01:00
David Baker
1fae1b3166 This api now no longer returns an array 2015-05-01 13:26:41 +01:00
Erik Johnston
80b4119279 Don't wait for storage of access_token 2015-05-01 13:14:05 +01:00
Erik Johnston
4011cf1c42 Cache latest_event_ids_in_room 2015-05-01 13:06:26 +01:00
Mark Haines
50c87b8eed Allow "manhole" to be ommited from the config 2015-04-30 18:11:47 +01:00
Mark Haines
345995fcde Remove the ~, comment the lines instead 2015-04-30 18:10:19 +01:00
Mark Haines
62cebee8ee Update key.py 2015-04-30 17:54:01 +01:00
Mark Haines
95cbfee8ae Update metrics.py 2015-04-30 17:52:20 +01:00
Mark Haines
4ad8350607 Update README.rst 2015-04-30 17:48:29 +01:00
Mark Haines
6ea9cf58be missing import 2015-04-30 17:21:21 +01:00
Mark Haines
c95480963e read the pid_file from the config file in synctl 2015-04-30 17:12:15 +01:00
Mark Haines
069296dbb0 Can't specify bind-port on the cmdline anymore 2015-04-30 17:08:07 +01:00
Mark Haines
2d4d2bbae4 Merge branch 'develop' into markjh/config_cleanup
Conflicts:
	synapse/config/captcha.py
2015-04-30 16:54:55 +01:00
Mark Haines
2f1348f339 Write a default log_config when generating config 2015-04-30 16:52:57 +01:00
Mark Haines
74aaacf82a Don't break when sizes or durations are given as integers 2015-04-30 16:04:02 +01:00
Mark Haines
c28f1d16f0 Add a random string to the auto generated key id 2015-04-30 15:13:14 +01:00
Mark Haines
265f30bd3f Allow --enable-registration to be passed on the commandline 2015-04-30 15:04:06 +01:00
Mark Haines
c9e62927f2 Use disable_registration keys if they are present 2015-04-30 14:34:09 +01:00
Mark Haines
1aa11cf7ce Allow multiple config files, set up a default config before applying the config files 2015-04-30 13:48:15 +01:00
Mark Haines
6b69ddd17a remove duplicate parse_size method 2015-04-30 04:26:29 +01:00
Mark Haines
d624e2a638 Manually generate the default config yaml, remove most of the commandline arguments for synapse anticipating that people will use the yaml instead. Simpify implementing config options by not requiring the classes to hit the super class 2015-04-30 04:24:44 +01:00
121 changed files with 4464 additions and 2443 deletions


@@ -35,3 +35,6 @@ Turned to Dust <dwinslow86 at gmail.com>
Brabo <brabo at riseup.net>
* Installation instruction fixes
Ivan Shapovalov <intelfx100 at gmail.com>
* contrib/systemd: a sample systemd unit file and a logger configuration


@@ -1,9 +1,115 @@
Changes in synapse vX
=====================
Changes in synapse v0.9.2-r2 (2015-06-15)
=========================================
* Changed config option from ``disable_registration`` to
``enable_registration``. Old option will be ignored.
Fix packaging so that schema delta python files get included in the package.
Changes in synapse v0.9.2 (2015-06-12)
======================================
General:
* Use ultrajson for json (de)serialisation when a canonical encoding is not
required. Ultrajson is significantly faster than simplejson in certain
circumstances.
* Use connection pools for outgoing HTTP connections.
* Process thumbnails on separate threads.
Configuration:
* Add option, ``gzip_responses``, to disable HTTP response compression.
Federation:
* Improve resilience of backfill by ensuring we fetch any missing auth events.
* Improve performance of backfill and joining remote rooms by removing
unnecessary computations. This included handling events we'd previously
handled as well as attempting to compute the current state for outliers.
Changes in synapse v0.9.1 (2015-05-26)
======================================
General:
* Add support for backfilling when a client paginates. This allows servers to
request history for a room from remote servers when a client tries to
paginate history the server does not have - SYN-36
* Fix bug where you couldn't disable non-default pushrules - SYN-378
* Fix ``register_new_user`` script - SYN-359
* Improve performance of fetching events from the database, this improves both
initialSync and sending of events.
* Improve performance of event streams, allowing synapse to handle more
simultaneous connected clients.
Federation:
* Fix bug with existing backfill implementation where it returned the wrong
selection of events in some circumstances.
* Improve performance of joining remote rooms.
Configuration:
* Add support for changing the bind host of the metrics listener via the
``metrics_bind_host`` option.
Changes in synapse v0.9.0-r5 (2015-05-21)
=========================================
* Add more database caches to reduce amount of work done for each pusher. This
radically reduces CPU usage when multiple pushers are set up in the same room.
Changes in synapse v0.9.0 (2015-05-07)
======================================
General:
* Add support for using a PostgreSQL database instead of SQLite. See
`docs/postgres.rst`_ for details.
* Add password change and reset APIs. See `Registration`_ in the spec.
* Fix memory leak due to not releasing stale notifiers - SYN-339.
* Fix race in caches that occasionally caused some presence updates to be
dropped - SYN-369.
* Check server name has not changed on restart.
* Add a sample systemd unit file and a logger configuration in
contrib/systemd. Contributed Ivan Shapovalov.
Federation:
* Add key distribution mechanisms for fetching public keys of unavailable
remote home servers. See `Retrieving Server Keys`_ in the spec.
Configuration:
* Add support for multiple config files.
* Add support for dictionaries in config files.
* Remove support for specifying config options on the command line, except
for:
* ``--daemonize`` - Daemonize the home server.
* ``--manhole`` - Turn on the twisted telnet manhole service on the given
port.
* ``--database-path`` - The path to a sqlite database to use.
* ``--verbose`` - The verbosity level.
* ``--log-file`` - File to log to.
* ``--log-config`` - Python logging config file.
* ``--enable-registration`` - Enable registration for new users.
Application services:
* Reliably retry sending of events from Synapse to application services, as per
`Application Services`_ spec.
* Application services can no longer register via the ``/register`` API,
instead their configuration should be saved to a file and listed in the
synapse ``app_service_config_files`` config option. The AS configuration file
has the same format as the old ``/register`` request.
See `docs/application_services.rst`_ for more information.
.. _`docs/postgres.rst`: docs/postgres.rst
.. _`docs/application_services.rst`: docs/application_services.rst
.. _`Registration`: https://github.com/matrix-org/matrix-doc/blob/master/specification/10_client_server_api.rst#registration
.. _`Retrieving Server Keys`: https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys
.. _`Application Services`: https://github.com/matrix-org/matrix-doc/blob/0c6bd9/specification/25_application_service_api.rst#home-server---application-service-api
Changes in synapse v0.8.1 (2015-03-18)
======================================


@@ -5,6 +5,7 @@ include *.rst
include demo/README
recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.py
recursive-include demo *.dh
recursive-include demo *.py


@@ -117,7 +117,7 @@ Installing prerequisites on Mac OS X::
To install the synapse homeserver run::
$ virtualenv ~/.synapse
$ virtualenv -p python2.7 ~/.synapse
$ source ~/.synapse/bin/activate
$ pip install --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
@@ -318,7 +318,7 @@ ArchLinux
If running `$ synctl start` fails with 'returned non-zero exit status 1',
you will need to explicitly call Python2.7 - either running as::
$ python2.7 -m synapse.app.homeserver --daemonize -c homeserver.yaml --pid-file homeserver.pid
$ python2.7 -m synapse.app.homeserver --daemonize -c homeserver.yaml
...or by editing synctl with the correct python executable.
@@ -409,7 +409,6 @@ SRV record, as that is the name other machines will expect it to have::
$ python -m synapse.app.homeserver \
--server-name YOURDOMAIN \
--bind-port 8448 \
--config-path homeserver.yaml \
--generate-config
$ python -m synapse.app.homeserver --config-path homeserver.yaml


@@ -1,4 +1,4 @@
Upgrading to v0.x.x
Upgrading to v0.9.0
===================
Application services have had a breaking API change in this version.


@@ -21,3 +21,5 @@ handlers:
root:
level: INFO
handlers: [journal]
disable_existing_loggers: False
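The single added line above is the change commit 53ef3a0bfe describes: configuring only the root logger is not enough for catch-all semantics unless loggers that already exist when the config is applied are left enabled, so their records propagate up to the root handlers instead of being silently discarded. A minimal Python sketch of that behaviour (handler and logger names here are illustrative, not taken from the repository):

.. code-block:: python

    import logging
    import logging.config

    # A logger created *before* configuration is applied, as synapse's
    # modules do at import time.
    early_logger = logging.getLogger("synapse.storage")

    config = {
        "version": 1,
        # With the default of True, loggers that already exist when
        # dictConfig() runs are disabled and their records are discarded;
        # False keeps them propagating to the root handlers.
        "disable_existing_loggers": False,
        "handlers": {
            "console": {"class": "logging.StreamHandler", "level": "INFO"},
        },
        "root": {"level": "INFO", "handlers": ["console"]},
    }

    logging.config.dictConfig(config)

    # Reaches the console handler via the root logger; with
    # disable_existing_loggers left at True it would be dropped.
    early_logger.info("message from a pre-existing logger")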


@@ -16,30 +16,31 @@ if [ $# -eq 1 ]; then
fi
fi
export PYTHONPATH=$(readlink -f $(pwd))
echo $PYTHONPATH
for port in 8080 8081 8082; do
echo "Starting server on port $port... "
https_port=$((port + 400))
mkdir -p demo/$port
pushd demo/$port
#rm $DIR/etc/$port.config
python -m synapse.app.homeserver \
--generate-config \
--config-path "demo/etc/$port.config" \
-p "$https_port" \
--unsecure-port "$port" \
--enable_registration \
-H "localhost:$https_port" \
-f "$DIR/$port.log" \
-d "$DIR/$port.db" \
-D --pid-file "$DIR/$port.pid" \
--manhole $((port + 1000)) \
--tls-dh-params-path "demo/demo.tls.dh" \
--media-store-path "demo/media_store.$port" \
$PARAMS $SYNAPSE_PARAMS \
--enable-registration
--config-path "$DIR/etc/$port.config" \
python -m synapse.app.homeserver \
--config-path "demo/etc/$port.config" \
--config-path "$DIR/etc/$port.config" \
-D \
-vv \
popd
done
cd "$CWD"


@@ -0,0 +1,36 @@
Registering an Application Service
==================================
The registration of new application services depends on the homeserver used.
In synapse, you need to create a new configuration file for your AS and add it
to the list specified under the ``app_service_config_files`` config
option in your synapse config.
For example:
.. code-block:: yaml
app_service_config_files:
- /home/matrix/.synapse/<your-AS>.yaml
The format of the AS configuration file is as follows:
.. code-block:: yaml
url: <base url of AS>
as_token: <token AS will add to requests to HS>
hs_token: <token HS will add to requests to AS>
sender_localpart: <localpart of AS user>
namespaces:
users: # List of users we're interested in
- exclusive: <bool>
regex: <regex>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
See the spec_ for further details on how application services work.
.. _spec: https://github.com/matrix-org/matrix-doc/blob/master/specification/25_application_service_api.rst#application-service-api
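How the homeserver consumes these files is not part of this document, but the loader's shape follows from the format above. A hedged sketch, assuming synapse simply iterates over ``app_service_config_files`` and parses each YAML file (the field names are the documented ones; token validation and namespace regex handling are omitted):

.. code-block:: python

    import yaml

    def load_appservices(app_service_config_files):
        """Parse the AS registration files listed in the homeserver config.

        Sketch only: the real loader also validates tokens, localparts and
        namespace regexes before the application service is used.
        """
        services = []
        for path in app_service_config_files:
            with open(path) as f:
                as_config = yaml.safe_load(f)
            services.append({
                "url": as_config["url"],
                "as_token": as_config["as_token"],
                "hs_token": as_config["hs_token"],
                "sender_localpart": as_config["sender_localpart"],
                "namespaces": as_config.get("namespaces", {}),
            })
        return services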


@@ -34,19 +34,15 @@ Synapse config
When you are ready to start using PostgreSQL, add the following line to your
config file::
database_config: <db_config_file>
Where ``<db_config_file>`` is the file name that points to a yaml file of the
following form::
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
database:
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
All key, values in ``args`` are passed to the ``psycopg2.connect(..)``
function, except keys beginning with ``cp_``, which are consumed by the twisted
@@ -86,13 +82,13 @@ complete, restart synapse. For instance::
cp homeserver.db homeserver.db.snapshot
./synctl start
Assuming your database config file (as described in the section *Synapse
config*) is named ``database_config.yaml`` and the SQLite snapshot is at
Assuming your new config file (as described in the section *Synapse config*)
is named ``homeserver-postgres.yaml`` and the SQLite snapshot is at
``homeserver.db.snapshot`` then simply run::
python scripts/port_from_sqlite_to_postgres.py \
--sqlite-database homeserver.db.snapshot \
--postgres-config database_config.yaml
--postgres-config homeserver-postgres.yaml
The flag ``--curses`` displays a coloured curses progress UI.


@@ -0,0 +1,116 @@
import psycopg2
import yaml
import sys
import json
import time
import hashlib
from syutil.base64util import encode_base64
from syutil.crypto.signing_key import read_signing_keys
from syutil.crypto.jsonsign import sign_json
from syutil.jsonutil import encode_canonical_json
def select_v1_keys(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, key_id, verify_key FROM server_signature_keys")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, key_id, verify_key in rows:
results.setdefault(server_name, {})[key_id] = encode_base64(verify_key)
return results
def select_v1_certs(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, tls_certificate FROM server_tls_certificates")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, tls_certificate in rows:
results[server_name] = tls_certificate
return results
def select_v2_json(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, key_id, key_json FROM server_keys_json")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, key_id, key_json in rows:
results.setdefault(server_name, {})[key_id] = json.loads(str(key_json).decode("utf-8"))
return results
def convert_v1_to_v2(server_name, valid_until, keys, certificate):
return {
"old_verify_keys": {},
"server_name": server_name,
"verify_keys": {
key_id: {"key": key}
for key_id, key in keys.items()
},
"valid_until_ts": valid_until,
"tls_fingerprints": [fingerprint(certificate)],
}
def fingerprint(certificate):
finger = hashlib.sha256(certificate)
return {"sha256": encode_base64(finger.digest())}
def rows_v2(server, json):
valid_until = json["valid_until_ts"]
key_json = encode_canonical_json(json)
for key_id in json["verify_keys"]:
yield (server, key_id, "-", valid_until, valid_until, buffer(key_json))
def main():
config = yaml.load(open(sys.argv[1]))
valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24
server_name = config["server_name"]
signing_key = read_signing_keys(open(config["signing_key_path"]))[0]
database = config["database"]
assert database["name"] == "psycopg2", "Can only convert for postgresql"
args = database["args"]
args.pop("cp_max")
args.pop("cp_min")
connection = psycopg2.connect(**args)
keys = select_v1_keys(connection)
certificates = select_v1_certs(connection)
json = select_v2_json(connection)
result = {}
for server in keys:
if not server in json:
v2_json = convert_v1_to_v2(
server, valid_until, keys[server], certificates[server]
)
v2_json = sign_json(v2_json, server_name, signing_key)
result[server] = v2_json
yaml.safe_dump(result, sys.stdout, default_flow_style=False)
rows = list(
row for server, json in result.items()
for row in rows_v2(server, json)
)
cursor = connection.cursor()
cursor.executemany(
"INSERT INTO server_keys_json ("
" server_name, key_id, from_server,"
" ts_added_ms, ts_valid_until_ms, key_json"
") VALUES (%s, %s, %s, %s, %s, %s)",
rows
)
connection.commit()
if __name__ == '__main__':
main()
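# Illustrative sketch (assumption, not part of the diff): for each server that
# only has v1 keys, convert_v1_to_v2() produces a structure shaped like
#     {
#         "server_name": "example.org",
#         "verify_keys": {"ed25519:abcd": {"key": "<base64 verify key>"}},
#         "old_verify_keys": {},
#         "valid_until_ts": <day-aligned POSIX timestamp in milliseconds>,
#         "tls_fingerprints": [{"sha256": "<base64 SHA-256 of the certificate>"}],
#     }
# which main() then signs with this server's signing key and inserts into
# server_keys_json.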

10
scripts/port_from_sqlite_to_postgres.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
@@ -105,7 +106,7 @@ class Store(object):
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine),
LoggingTransaction(txn, desc, self.database_engine, []),
*args, **kwargs
)
except self.database_engine.module.DatabaseError as e:
@@ -377,9 +378,7 @@ class Porter(object):
for i, row in enumerate(rows):
rows[i] = tuple(
self.postgres_store.database_engine.encode_parameter(
conv(j, col)
)
conv(j, col)
for j, col in enumerate(row)
if j > 0
)
@@ -724,6 +723,9 @@ if __name__ == "__main__":
postgres_config = yaml.safe_load(args.postgres_config)
if "database" in postgres_config:
postgres_config = postgres_config["database"]
if "name" not in postgres_config:
sys.stderr.write("Malformed database config: no 'name'")
sys.exit(2)

View File

@@ -33,9 +33,10 @@ def request_registration(user, password, server_location, shared_secret):
).hexdigest()
data = {
"username": user,
"user": user,
"password": password,
"mac": mac,
"type": "org.matrix.login.shared_secret",
}
server_location = server_location.rstrip("/")
@@ -43,7 +44,7 @@ def request_registration(user, password, server_location, shared_secret):
print "Sending registration request..."
req = urllib2.Request(
"%s/_matrix/client/v2_alpha/register" % (server_location,),
"%s/_matrix/client/api/v1/register" % (server_location,),
data=json.dumps(data),
headers={'Content-Type': 'application/json'}
)

2
scripts/upgrade_db_to_v0.6.0.py Normal file → Executable file
View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python
from synapse.storage import SCHEMA_VERSION, read_schema
from synapse.storage._base import SQLBaseStore
from synapse.storage.signatures import SignatureStore

View File

@@ -14,6 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
from setuptools import setup, find_packages
@@ -55,5 +56,5 @@ setup(
include_package_data=True,
zip_safe=False,
long_description=long_description,
scripts=["synctl", "register_new_matrix_user"],
scripts=["synctl"] + glob.glob("scripts/*"),
)

View File

@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.8.1-r4"
__version__ = "0.9.2-r2"

View File

@@ -20,7 +20,6 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor
from synapse.types import UserID, ClientInfo
import logging
@@ -65,7 +64,10 @@ class Auth(object):
if event.type == EventTypes.Aliases:
return True
logger.debug("Auth events: %s", auth_events)
logger.debug(
"Auth events: %s",
[a.event_id for a in auth_events.values()]
)
if event.type == EventTypes.Member:
allowed = self.is_membership_change_allowed(
@@ -360,7 +362,7 @@ class Auth(object):
default=[""]
)[0]
if user and access_token and ip_addr:
yield self.store.insert_client_ip(
self.store.insert_client_ip(
user=user,
access_token=access_token,
device_id=user_info["device_id"],
@@ -424,8 +426,6 @@ class Auth(object):
@defer.inlineCallbacks
def add_auth_events(self, builder, context):
yield run_on_reactor()
auth_ids = self.compute_auth_events(builder, context.current_state)
auth_events_entries = yield self.store.add_event_hashes(

View File

@@ -32,9 +32,9 @@ from synapse.server import HomeServer
from twisted.internet import reactor
from twisted.application import service
from twisted.enterprise import adbapi
from twisted.web.resource import Resource
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import Site
from twisted.web.server import Site, GzipEncoderFactory, Request
from twisted.web.http import proxiedLogFormatter, combinedLogFormatter
from synapse.http.server import JsonResource, RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
@@ -54,6 +54,8 @@ from synapse.rest.client.v1 import ClientV1RestResource
from synapse.rest.client.v2_alpha import ClientV2AlphaRestResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse import events
from daemonize import Daemonize
import twisted.manhole.telnet
@@ -61,7 +63,6 @@ import synapse
import logging
import os
import re
import resource
import subprocess
@@ -69,6 +70,16 @@ import subprocess
logger = logging.getLogger("synapse.app.homeserver")
class GzipFile(File):
def getChild(self, path, request):
child = File.getChild(self, path, request)
return EncodingResourceWrapper(child, [GzipEncoderFactory()])
def gz_wrap(r):
return EncodingResourceWrapper(r, [GzipEncoderFactory()])
class SynapseHomeServer(HomeServer):
def build_http_client(self):
@@ -87,9 +98,16 @@ class SynapseHomeServer(HomeServer):
import syweb
syweb_path = os.path.dirname(syweb.__file__)
webclient_path = os.path.join(syweb_path, "webclient")
# GZip is disabled here due to
# https://twistedmatrix.com/trac/ticket/7678
# (It can stay enabled for the API resources: they call
# write() with the whole body and then finish() straight
# after and so do not trigger the bug.)
# return GzipFile(webclient_path) # TODO configurable?
return File(webclient_path) # TODO configurable?
def build_resource_for_static_content(self):
# This is old and should go away: not going to bother adding gzip
return File("static")
def build_resource_for_content_repo(self):
@@ -120,149 +138,102 @@ class SynapseHomeServer(HomeServer):
**self.db_config.get("args", {})
)
def create_resource_tree(self, redirect_root_to_web_client):
"""Create the resource tree for this Home Server.
def _listener_http(self, config, listener_config):
port = listener_config["port"]
bind_address = listener_config.get("bind_address", "")
tls = listener_config.get("tls", False)
This is unduly complicated because Twisted does not support putting
child resources more than 1 level deep at a time.
Args:
web_client (bool): True to enable the web client.
redirect_root_to_web_client (bool): True to redirect '/' to the
location of the web client. This does nothing if web_client is not
True.
"""
config = self.get_config()
web_client = config.web_client
# list containing (path_str, Resource) e.g:
# [ ("/aaa/bbb/cc", Resource1), ("/aaa/dummy", Resource2) ]
desired_tree = [
(CLIENT_PREFIX, self.get_resource_for_client()),
(CLIENT_V2_ALPHA_PREFIX, self.get_resource_for_client_v2_alpha()),
(FEDERATION_PREFIX, self.get_resource_for_federation()),
(CONTENT_REPO_PREFIX, self.get_resource_for_content_repo()),
(SERVER_KEY_PREFIX, self.get_resource_for_server_key()),
(SERVER_KEY_V2_PREFIX, self.get_resource_for_server_key_v2()),
(MEDIA_PREFIX, self.get_resource_for_media_repository()),
(STATIC_PREFIX, self.get_resource_for_static_content()),
]
if web_client:
logger.info("Adding the web client.")
desired_tree.append((WEB_CLIENT_PREFIX,
self.get_resource_for_web_client()))
if web_client and redirect_root_to_web_client:
self.root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
self.root_resource = Resource()
if tls and config.no_tls:
return
metrics_resource = self.get_resource_for_metrics()
if config.metrics_port is None and metrics_resource is not None:
desired_tree.append((METRICS_PREFIX, metrics_resource))
# ideally we'd just use getChild and putChild but getChild doesn't work
# unless you give it a Request object IN ADDITION to the name :/ So
# instead, we'll store a copy of this mapping so we can actually add
# extra resources to existing nodes. See self._resource_id for the key.
resource_mappings = {}
for full_path, res in desired_tree:
logger.info("Attaching %s to path %s", res, full_path)
last_resource = self.root_resource
for path_seg in full_path.split('/')[1:-1]:
if path_seg not in last_resource.listNames():
# resource doesn't exist, so make a "dummy resource"
child_resource = Resource()
last_resource.putChild(path_seg, child_resource)
res_id = self._resource_id(last_resource, path_seg)
resource_mappings[res_id] = child_resource
last_resource = child_resource
else:
# we have an existing Resource, use that instead.
res_id = self._resource_id(last_resource, path_seg)
last_resource = resource_mappings[res_id]
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "client":
if res["compress"]:
client_v1 = gz_wrap(self.get_resource_for_client())
client_v2 = gz_wrap(self.get_resource_for_client_v2_alpha())
else:
client_v1 = self.get_resource_for_client()
client_v2 = self.get_resource_for_client_v2_alpha()
# ===========================
# now attach the actual desired resource
last_path_seg = full_path.split('/')[-1]
resources.update({
CLIENT_PREFIX: client_v1,
CLIENT_V2_ALPHA_PREFIX: client_v2,
})
# if there is already a resource here, thieve its children and
# replace it
res_id = self._resource_id(last_resource, last_path_seg)
if res_id in resource_mappings:
# there is a dummy resource at this path already, which needs
# to be replaced with the desired resource.
existing_dummy_resource = resource_mappings[res_id]
for child_name in existing_dummy_resource.listNames():
child_res_id = self._resource_id(existing_dummy_resource,
child_name)
child_resource = resource_mappings[child_res_id]
# steal the children
res.putChild(child_name, child_resource)
if name == "federation":
resources.update({
FEDERATION_PREFIX: self.get_resource_for_federation(),
})
# finally, insert the desired resource in the right place
last_resource.putChild(last_path_seg, res)
res_id = self._resource_id(last_resource, last_path_seg)
resource_mappings[res_id] = res
if name in ["static", "client"]:
resources.update({
STATIC_PREFIX: self.get_resource_for_static_content(),
})
return self.root_resource
if name in ["media", "federation", "client"]:
resources.update({
MEDIA_PREFIX: self.get_resource_for_media_repository(),
CONTENT_REPO_PREFIX: self.get_resource_for_content_repo(),
})
def _resource_id(self, resource, path_seg):
"""Construct an arbitrary resource ID so you can retrieve the mapping
later.
if name in ["keys", "federation"]:
resources.update({
SERVER_KEY_PREFIX: self.get_resource_for_server_key(),
SERVER_KEY_V2_PREFIX: self.get_resource_for_server_key_v2(),
})
If you want to represent resource A putChild resource B with path C,
the mapping should look like _resource_id(A,C) = B.
if name == "webclient":
resources[WEB_CLIENT_PREFIX] = self.get_resource_for_web_client()
Args:
resource (Resource): The *parent* Resource
path_seg (str): The name of the child Resource to be attached.
Returns:
str: A unique string which can be a key to the child Resource.
"""
return "%s-%s" % (resource, path_seg)
if name == "metrics" and metrics_resource:
resources[METRICS_PREFIX] = metrics_resource
root_resource = create_resource_tree(resources)
if tls:
reactor.listenSSL(
port,
SynapseSite(
"synapse.access.https",
listener_config,
root_resource,
),
self.tls_context_factory,
interface=bind_address
)
else:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.https",
listener_config,
root_resource,
),
interface=bind_address
)
logger.info("Synapse now listening on port %d", port)
def start_listening(self):
config = self.get_config()
if not config.no_tls and config.bind_port is not None:
reactor.listenSSL(
config.bind_port,
SynapseSite(
"synapse.access.https",
config,
self.root_resource,
),
self.tls_context_factory,
interface=config.bind_host
)
logger.info("Synapse now listening on port %d", config.bind_port)
if config.unsecure_port is not None:
reactor.listenTCP(
config.unsecure_port,
SynapseSite(
"synapse.access.http",
config,
self.root_resource,
),
interface=config.bind_host
)
logger.info("Synapse now listening on port %d", config.unsecure_port)
metrics_resource = self.get_resource_for_metrics()
if metrics_resource and config.metrics_port is not None:
reactor.listenTCP(
config.metrics_port,
SynapseSite(
"synapse.access.metrics",
config,
metrics_resource,
),
interface="127.0.0.1",
)
logger.info("Metrics now running on 127.0.0.1 port %d", config.metrics_port)
for listener in config.listeners:
if listener["type"] == "http":
self._listener_http(config, listener)
elif listener["type"] == "manhole":
f = twisted.manhole.telnet.ShellFactory()
f.username = "matrix"
f.password = "rabbithole"
f.namespace['hs'] = self
reactor.listenTCP(
listener["port"],
f,
interface=listener.get("bind_address", '127.0.0.1')
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
def run_startup_checks(self, db_conn, database_engine):
all_users_native = are_all_users_on_domain(
@@ -395,10 +366,7 @@ def setup(config_options):
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
if re.search(":[0-9]+$", config.server_name):
domain_with_port = config.server_name
else:
domain_with_port = "%s:%s" % (config.server_name, config.bind_port)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
tls_context_factory = context_factory.ServerContextFactory(config)
@@ -407,9 +375,7 @@ def setup(config_options):
hs = SynapseHomeServer(
config.server_name,
domain_with_port=domain_with_port,
upload_dir=os.path.abspath("uploads"),
db_name=config.database_path,
db_config=config.database_config,
tls_context_factory=tls_context_factory,
config=config,
@@ -418,13 +384,7 @@ def setup(config_options):
database_engine=database_engine,
)
hs.create_resource_tree(
redirect_root_to_web_client=True,
)
db_name = hs.get_db_name()
logger.info("Preparing database: %s...", db_name)
logger.info("Preparing database: %r...", config.database_config)
try:
db_conn = database_engine.module.connect(
@@ -446,14 +406,7 @@ def setup(config_options):
)
sys.exit(1)
logger.info("Database prepared in %s.", db_name)
if config.manhole:
f = twisted.manhole.telnet.ShellFactory()
f.username = "matrix"
f.password = "rabbithole"
f.namespace['hs'] = hs
reactor.listenTCP(config.manhole, f, interface='127.0.0.1')
logger.info("Database prepared in %r.", config.database_config)
hs.start_listening()
@@ -480,6 +433,28 @@ class SynapseService(service.Service):
return self._port.stopListening()
class XForwardedForRequest(Request):
"""
Add a layer on top of another request that only uses the value of an
X-Forwarded-For header as the result of C{getClientIP}.
"""
def __init__(self, *args, **kw):
Request.__init__(self, *args, **kw)
def getClientIP(self):
"""
@return: The client address (the first address) in the value of the
I{X-Forwarded-For header}. If the header is not present, return
C{b"-"}.
"""
return self.requestHeaders.getRawHeaders(
b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip()
def XForwardedFactory(*args, **kwargs):
return XForwardedForRequest(*args, **kwargs)
class SynapseSite(Site):
"""
Subclass of a twisted http Site that does access logging with python's
@@ -487,7 +462,8 @@ class SynapseSite(Site):
"""
def __init__(self, logger_name, config, resource, *args, **kwargs):
Site.__init__(self, resource, *args, **kwargs)
if config.captcha_ip_origin_is_x_forwarded:
if config.get("x_forwarded", False):
self.requestFactory = XForwardedFactory
self._log_formatter = proxiedLogFormatter
else:
self._log_formatter = combinedLogFormatter
@@ -498,12 +474,113 @@ class SynapseSite(Site):
self.access_logger.info(line)
def create_resource_tree(desired_tree, redirect_root_to_web_client=True):
"""Create the resource tree for this Home Server.
This is unduly complicated because Twisted does not support putting
child resources more than 1 level deep at a time.
Args:
desired_tree (dict): A dict mapping path prefixes (str) to the Resource
that should be served at that path.
redirect_root_to_web_client (bool): True to redirect '/' to the
location of the web client. This does nothing if the web client
resource is not in desired_tree.
"""
if redirect_root_to_web_client and WEB_CLIENT_PREFIX in desired_tree:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
else:
root_resource = Resource()
# ideally we'd just use getChild and putChild but getChild doesn't work
# unless you give it a Request object IN ADDITION to the name :/ So
# instead, we'll store a copy of this mapping so we can actually add
# extra resources to existing nodes. See self._resource_id for the key.
resource_mappings = {}
for full_path, res in desired_tree.items():
logger.info("Attaching %s to path %s", res, full_path)
last_resource = root_resource
for path_seg in full_path.split('/')[1:-1]:
if path_seg not in last_resource.listNames():
# resource doesn't exist, so make a "dummy resource"
child_resource = Resource()
last_resource.putChild(path_seg, child_resource)
res_id = _resource_id(last_resource, path_seg)
resource_mappings[res_id] = child_resource
last_resource = child_resource
else:
# we have an existing Resource, use that instead.
res_id = _resource_id(last_resource, path_seg)
last_resource = resource_mappings[res_id]
# ===========================
# now attach the actual desired resource
last_path_seg = full_path.split('/')[-1]
# if there is already a resource here, thieve its children and
# replace it
res_id = _resource_id(last_resource, last_path_seg)
if res_id in resource_mappings:
# there is a dummy resource at this path already, which needs
# to be replaced with the desired resource.
existing_dummy_resource = resource_mappings[res_id]
for child_name in existing_dummy_resource.listNames():
child_res_id = _resource_id(
existing_dummy_resource, child_name
)
child_resource = resource_mappings[child_res_id]
# steal the children
res.putChild(child_name, child_resource)
# finally, insert the desired resource in the right place
last_resource.putChild(last_path_seg, res)
res_id = _resource_id(last_resource, last_path_seg)
resource_mappings[res_id] = res
return root_resource
def _resource_id(resource, path_seg):
"""Construct an arbitrary resource ID so you can retrieve the mapping
later.
If you want to represent resource A putChild resource B with path C,
the mapping should look like _resource_id(A,C) = B.
Args:
resource (Resource): The *parent* Resource
path_seg (str): The name of the child Resource to be attached.
Returns:
str: A unique string which can be a key to the child Resource.
"""
return "%s-%s" % (resource, path_seg)
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
def profile(func):
from cProfile import Profile
from threading import current_thread
def profiled(*args, **kargs):
profile = Profile()
profile.enable()
func(*args, **kargs)
profile.disable()
ident = current_thread().ident
profile.dump_stats("/tmp/%s.%s.%i.pstat" % (
hs.hostname, func.__name__, ident
))
return profiled
from twisted.python.threadpool import ThreadPool
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
def in_thread():
with LoggingContext("run"):
change_resource_limit(hs.config.soft_file_limit)
reactor.run()
if hs.config.daemonize:

View File

@@ -18,29 +18,33 @@ import sys
import os
import subprocess
import signal
import yaml
SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]
CONFIGFILE = "homeserver.yaml"
PIDFILE = "homeserver.pid"
GREEN = "\x1b[1;32m"
NORMAL = "\x1b[m"
if not os.path.exists(CONFIGFILE):
sys.stderr.write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), CONFIGFILE
)
)
sys.exit(1)
CONFIG = yaml.load(open(CONFIGFILE))
PIDFILE = CONFIG["pid_file"]
def start():
if not os.path.exists(CONFIGFILE):
sys.stderr.write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), CONFIGFILE
)
)
sys.exit(1)
print "Starting ...",
args = SYNAPSE
args.extend(["--daemonize", "-c", CONFIGFILE, "--pid-file", PIDFILE])
args.extend(["--daemonize", "-c", CONFIGFILE])
subprocess.check_call(args)
print GREEN + "started" + NORMAL

View File

@@ -148,8 +148,8 @@ class ApplicationService(object):
and self.is_interested_in_user(event.state_key)):
return True
# check joined member events
for member in member_list:
if self.is_interested_in_user(member.state_key):
for user_id in member_list:
if self.is_interested_in_user(user_id):
return True
return False
@@ -173,7 +173,7 @@ class ApplicationService(object):
restrict_to(str): The namespace to restrict regex tests to.
aliases_for_event(list): A list of all the known room aliases for
this event.
member_list(list): A list of all joined room members in this room.
member_list(list): A list of all joined user_ids in this room.
Returns:
bool: True if this service would like to know about this event.
"""

View File

@@ -14,9 +14,10 @@
# limitations under the License.
import argparse
import sys
import os
import yaml
import sys
from textwrap import dedent
class ConfigError(Exception):
@@ -24,18 +25,35 @@ class ConfigError(Exception):
class Config(object):
def __init__(self, args):
pass
@staticmethod
def parse_size(string):
def parse_size(value):
if isinstance(value, int) or isinstance(value, long):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
suffix = string[-1]
suffix = value[-1]
if suffix in sizes:
string = string[:-1]
value = value[:-1]
size = sizes[suffix]
return int(string) * size
return int(value) * size
@staticmethod
def parse_duration(value):
if isinstance(value, int) or isinstance(value, long):
return value
second = 1000
hour = 60 * 60 * second
day = 24 * hour
week = 7 * day
year = 365 * day
sizes = {"s": second, "h": hour, "d": day, "w": week, "y": year}
size = 1
suffix = value[-1]
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
return int(value) * size
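# Illustrative examples (not part of the diff):
#     Config.parse_size("10K")     -> 10240       (bytes)
#     Config.parse_size(2048)      -> 2048
#     Config.parse_duration("1d")  -> 86400000    (milliseconds)
#     Config.parse_duration("30s") -> 30000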
@staticmethod
def abspath(file_path):
@@ -77,17 +95,6 @@ class Config(object):
with open(file_path) as file_stream:
return file_stream.read()
@classmethod
def read_yaml_file(cls, file_path, config_name):
cls.check_file(file_path, config_name)
with open(file_path) as file_stream:
try:
return yaml.load(file_stream)
except:
raise ConfigError(
"Error parsing yaml in file %r" % (file_path,)
)
@staticmethod
def default_path(name):
return os.path.abspath(os.path.join(os.path.curdir, name))
@@ -97,84 +104,130 @@ class Config(object):
with open(file_path) as file_stream:
return yaml.load(file_stream)
@classmethod
def add_arguments(cls, parser):
pass
def invoke_all(self, name, *args, **kargs):
results = []
for cls in type(self).mro():
if name in cls.__dict__:
results.append(getattr(cls, name)(self, *args, **kargs))
return results
@classmethod
def generate_config(cls, args, config_dir_path):
pass
def generate_config(self, config_dir_path, server_name):
default_config = "# vim:ft=yaml\n"
default_config += "\n\n".join(dedent(conf) for conf in self.invoke_all(
"default_config", config_dir_path, server_name
))
config = yaml.load(default_config)
return default_config, config
@classmethod
def load_config(cls, description, argv, generate_section=None):
obj = cls()
config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument(
"-c", "--config-path",
action="append",
metavar="CONFIG_FILE",
help="Specify config file"
)
config_parser.add_argument(
"--generate-config",
action="store_true",
help="Generate config file"
help="Generate a config file for the server name"
)
config_parser.add_argument(
"-H", "--server-name",
help="The server name to generate a config file for"
)
config_args, remaining_args = config_parser.parse_known_args(argv)
if config_args.generate_config:
if not config_args.config_path:
config_parser.error(
"Must specify where to generate the config file"
"Must supply a config file.\nA config file can be automatically"
" generated using \"--generate-config -h SERVER_NAME"
" -c CONFIG-FILE\""
)
config_dir_path = os.path.dirname(config_args.config_path)
if os.path.exists(config_args.config_path):
defaults = cls.read_config_file(config_args.config_path)
else:
defaults = {}
else:
if config_args.config_path:
defaults = cls.read_config_file(config_args.config_path)
else:
defaults = {}
parser = argparse.ArgumentParser(
parents=[config_parser],
description=description,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
cls.add_arguments(parser)
parser.set_defaults(**defaults)
args = parser.parse_args(remaining_args)
if config_args.generate_config:
config_dir_path = os.path.dirname(config_args.config_path)
config_dir_path = os.path.dirname(config_args.config_path[0])
config_dir_path = os.path.abspath(config_dir_path)
server_name = config_args.server_name
if not server_name:
print "Must specify a server_name to a generate config for."
sys.exit(1)
(config_path,) = config_args.config_path
if not os.path.exists(config_dir_path):
os.makedirs(config_dir_path)
cls.generate_config(args, config_dir_path)
config = {}
for key, value in vars(args).items():
if (key not in set(["config_path", "generate_config"])
and value is not None):
config[key] = value
with open(config_args.config_path, "w") as config_file:
# TODO(mark/paul) We might want to output emacs-style mode
# markers as well as vim-style mode markers into the file,
# to further hint to people this is a YAML file.
config_file.write("# vim:ft=yaml\n")
yaml.dump(config, config_file, default_flow_style=False)
print (
"A config file has been generated in %s for server name"
" '%s' with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it to"
" your needs."
) % (
config_args.config_path, config['server_name']
)
if os.path.exists(config_path):
print "Config file %r already exists" % (config_path,)
yaml_config = cls.read_config_file(config_path)
yaml_name = yaml_config["server_name"]
if server_name != yaml_name:
print (
"Config file %r has a different server_name: "
" %r != %r" % (config_path, server_name, yaml_name)
)
sys.exit(1)
config_bytes, config = obj.generate_config(
config_dir_path, server_name
)
config.update(yaml_config)
print "Generating any missing keys for %r" % (server_name,)
obj.invoke_all("generate_files", config)
sys.exit(0)
with open(config_path, "wb") as config_file:
config_bytes, config = obj.generate_config(
config_dir_path, server_name
)
obj.invoke_all("generate_files", config)
config_file.write(config_bytes)
print (
"A config file has been generated in %s for server name"
" '%s' with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it to"
" your needs."
) % (config_path, server_name)
print (
"If this server name is incorrect, you will need to regenerate"
" the SSL certificates"
)
sys.exit(0)
return cls(args)
parser = argparse.ArgumentParser(
parents=[config_parser],
description=description,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
obj.invoke_all("add_arguments", parser)
args = parser.parse_args(remaining_args)
if not config_args.config_path:
config_parser.error(
"Must supply a config file.\nA config file can be automatically"
" generated using \"--generate-config -h SERVER_NAME"
" -c CONFIG-FILE\""
)
config_dir_path = os.path.dirname(config_args.config_path[0])
config_dir_path = os.path.abspath(config_dir_path)
specified_config = {}
for config_path in config_args.config_path:
yaml_config = cls.read_config_file(config_path)
specified_config.update(yaml_config)
server_name = specified_config["server_name"]
_, config = obj.generate_config(config_dir_path, server_name)
config.pop("log_config")
config.update(specified_config)
obj.invoke_all("read_config", config)
obj.invoke_all("read_arguments", args)
return obj

View File

@@ -17,15 +17,11 @@ from ._base import Config
class AppServiceConfig(Config):
def __init__(self, args):
super(AppServiceConfig, self).__init__(args)
self.app_service_config_files = args.app_service_config_files
def read_config(self, config):
self.app_service_config_files = config.get("app_service_config_files", [])
@classmethod
def add_arguments(cls, parser):
super(AppServiceConfig, cls).add_arguments(parser)
group = parser.add_argument_group("appservice")
group.add_argument(
"--app-service-config-files", type=str, nargs='+',
help="A list of application service config files to use."
)
def default_config(cls, config_dir_path, server_name):
return """\
# A list of application service config files to use
app_service_config_files: []
"""

View File

@@ -17,42 +17,31 @@ from ._base import Config
class CaptchaConfig(Config):
def __init__(self, args):
super(CaptchaConfig, self).__init__(args)
self.recaptcha_private_key = args.recaptcha_private_key
self.recaptcha_public_key = args.recaptcha_public_key
self.enable_registration_captcha = args.enable_registration_captcha
def read_config(self, config):
self.recaptcha_private_key = config["recaptcha_private_key"]
self.recaptcha_public_key = config["recaptcha_public_key"]
self.enable_registration_captcha = config["enable_registration_captcha"]
self.captcha_bypass_secret = config.get("captcha_bypass_secret")
self.recaptcha_siteverify_api = config["recaptcha_siteverify_api"]
# XXX: This is used for more than just captcha
self.captcha_ip_origin_is_x_forwarded = (
args.captcha_ip_origin_is_x_forwarded
)
self.captcha_bypass_secret = args.captcha_bypass_secret
def default_config(self, config_dir_path, server_name):
return """\
## Captcha ##
@classmethod
def add_arguments(cls, parser):
super(CaptchaConfig, cls).add_arguments(parser)
group = parser.add_argument_group("recaptcha")
group.add_argument(
"--recaptcha-public-key", type=str, default="YOUR_PUBLIC_KEY",
help="This Home Server's ReCAPTCHA public key."
)
group.add_argument(
"--recaptcha-private-key", type=str, default="YOUR_PRIVATE_KEY",
help="This Home Server's ReCAPTCHA private key."
)
group.add_argument(
"--enable-registration-captcha", type=bool, default=False,
help="Enables ReCaptcha checks when registering, preventing signup"
+ " unless a captcha is answered. Requires a valid ReCaptcha "
+ "public/private key."
)
group.add_argument(
"--captcha_ip_origin_is_x_forwarded", type=bool, default=False,
help="When checking captchas, use the X-Forwarded-For (XFF) header"
+ " as the client IP and not the actual client IP."
)
group.add_argument(
"--captcha_bypass_secret", type=str,
help="A secret key used to bypass the captcha test entirely."
)
# This Home Server's ReCAPTCHA public key.
recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This Home Server's ReCAPTCHA private key.
recaptcha_private_key: "YOUR_PRIVATE_KEY"
# Enables ReCaptcha checks when registering, preventing signup
# unless a captcha is answered. Requires a valid ReCaptcha
# public/private key.
enable_registration_captcha: False
# A secret key used to bypass the captcha test entirely.
#captcha_bypass_secret: "YOUR_SECRET_HERE"
# The API endpoint to use for verifying m.login.recaptcha responses.
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
"""

View File

@@ -14,28 +14,21 @@
# limitations under the License.
from ._base import Config
import os
import yaml
class DatabaseConfig(Config):
def __init__(self, args):
super(DatabaseConfig, self).__init__(args)
if args.database_path == ":memory:":
self.database_path = ":memory:"
else:
self.database_path = self.abspath(args.database_path)
self.event_cache_size = self.parse_size(args.event_cache_size)
if args.database_config:
with open(args.database_config) as f:
self.database_config = yaml.safe_load(f)
else:
def read_config(self, config):
self.event_cache_size = self.parse_size(
config.get("event_cache_size", "10K")
)
self.database_config = config.get("database")
if self.database_config is None:
self.database_config = {
"name": "sqlite3",
"args": {
"database": self.database_path,
},
"args": {},
}
name = self.database_config.get("name", None)
@@ -50,24 +43,37 @@ class DatabaseConfig(Config):
else:
raise RuntimeError("Unsupported database type '%s'" % (name,))
@classmethod
def add_arguments(cls, parser):
super(DatabaseConfig, cls).add_arguments(parser)
self.set_databasepath(config.get("database_path"))
def default_config(self, config_dir_path, server_name):
database_path = self.abspath("homeserver.db")
return """\
# Database configuration
database:
# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
args:
# Path to the database
database: "%(database_path)s"
# Number of events to cache in memory.
event_cache_size: "10K"
""" % locals()
def read_arguments(self, args):
self.set_databasepath(args.database_path)
def set_databasepath(self, database_path):
if database_path != ":memory:":
database_path = self.abspath(database_path)
if self.database_config.get("name", None) == "sqlite3":
if database_path is not None:
self.database_config["args"]["database"] = database_path
def add_arguments(self, parser):
db_group = parser.add_argument_group("database")
db_group.add_argument(
"-d", "--database-path", default="homeserver.db",
metavar="SQLITE_DATABASE_PATH", help="The database name."
"-d", "--database-path", metavar="SQLITE_DATABASE_PATH",
help="The path to a sqlite database to use."
)
db_group.add_argument(
"--event-cache-size", default="100K",
help="Number of events to cache in memory."
)
db_group.add_argument(
"--database-config", default=None,
help="Location of the database configuration file."
)
@classmethod
def generate_config(cls, args, config_dir_path):
super(DatabaseConfig, cls).generate_config(args, config_dir_path)
args.database_path = os.path.abspath(args.database_path)

View File

@@ -36,4 +36,6 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
if __name__ == '__main__':
import sys
HomeServerConfig.load_config("Generate config", sys.argv[1:], "HomeServer")
sys.stdout.write(
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0]
)

View File

@@ -20,48 +20,58 @@ from syutil.crypto.signing_key import (
is_signing_algorithm_supported, decode_verify_key_bytes
)
from syutil.base64util import decode_base64
from synapse.util.stringutils import random_string
class KeyConfig(Config):
def __init__(self, args):
super(KeyConfig, self).__init__(args)
self.signing_key = self.read_signing_key(args.signing_key_path)
def read_config(self, config):
self.signing_key = self.read_signing_key(config["signing_key_path"])
self.old_signing_keys = self.read_old_signing_keys(
args.old_signing_key_path
config["old_signing_keys"]
)
self.key_refresh_interval = self.parse_duration(
config["key_refresh_interval"]
)
self.key_refresh_interval = args.key_refresh_interval
self.perspectives = self.read_perspectives(
args.perspectives_config_path
config["perspectives"]
)
@classmethod
def add_arguments(cls, parser):
super(KeyConfig, cls).add_arguments(parser)
key_group = parser.add_argument_group("keys")
key_group.add_argument("--signing-key-path",
help="The signing key to sign messages with")
key_group.add_argument("--old-signing-key-path",
help="The keys that the server used to sign"
" sign messages with but won't use"
" to sign new messages. E.g. it has"
" lost its private key")
key_group.add_argument("--key-refresh-interval",
default=24 * 60 * 60 * 1000, # 1 Day
help="How long a key response is valid for."
" Used to set the exipiry in /key/v2/."
" Controls how frequently servers will"
" query what keys are still valid")
key_group.add_argument("--perspectives-config-path",
help="The trusted servers to download signing"
" keys from")
def default_config(self, config_dir_path, server_name):
base_key_name = os.path.join(config_dir_path, server_name)
return """\
## Signing Keys ##
def read_perspectives(self, perspectives_config_path):
config = self.read_yaml_file(
perspectives_config_path, "perspectives_config_path"
)
# Path to the signing key to sign messages with
signing_key_path: "%(base_key_name)s.signing.key"
# The keys that the server used to sign messages with but won't use
# to sign new messages. E.g. it has lost its private key
old_signing_keys: {}
# "ed25519:auto":
# # Base64 encoded public key
# key: "The public part of your old signing key."
# # Millisecond POSIX timestamp when the key expired.
# expired_ts: 123456789123
# How long a key response published by this server is valid for.
# Used to set the valid_until_ts in /key/v2 APIs.
# Determines how quickly servers will query to check which keys
# are still valid.
key_refresh_interval: "1d" # 1 Day.
# The trusted servers to download signing keys from.
perspectives:
servers:
"matrix.org":
verify_keys:
"ed25519:auto":
key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
""" % locals()
def read_perspectives(self, perspectives_config):
servers = {}
for server_name, server_config in config["servers"].items():
for server_name, server_config in perspectives_config["servers"].items():
for key_id, key_data in server_config["verify_keys"].items():
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
@@ -82,66 +92,42 @@ class KeyConfig(Config):
" Try running again with --generate-config"
)
def read_old_signing_keys(self, old_signing_key_path):
old_signing_keys = self.read_file(
old_signing_key_path, "old_signing_key"
)
try:
return syutil.crypto.signing_key.read_old_signing_keys(
old_signing_keys.splitlines(True)
)
except Exception:
raise ConfigError(
"Error reading old signing keys."
)
def read_old_signing_keys(self, old_signing_keys):
keys = {}
for key_id, key_data in old_signing_keys.items():
if is_signing_algorithm_supported(key_id):
key_base64 = key_data["key"]
key_bytes = decode_base64(key_base64)
verify_key = decode_verify_key_bytes(key_id, key_bytes)
verify_key.expired_ts = key_data["expired_ts"]
keys[key_id] = verify_key
else:
raise ConfigError(
"Unsupported signing algorithm for old key: %r" % (key_id,)
)
return keys
@classmethod
def generate_config(cls, args, config_dir_path):
super(KeyConfig, cls).generate_config(args, config_dir_path)
base_key_name = os.path.join(config_dir_path, args.server_name)
args.pid_file = os.path.abspath(args.pid_file)
if not args.signing_key_path:
args.signing_key_path = base_key_name + ".signing.key"
if not os.path.exists(args.signing_key_path):
with open(args.signing_key_path, "w") as signing_key_file:
def generate_files(self, config):
signing_key_path = config["signing_key_path"]
if not os.path.exists(signing_key_path):
with open(signing_key_path, "w") as signing_key_file:
key_id = "a_" + random_string(4)
syutil.crypto.signing_key.write_signing_keys(
signing_key_file,
(syutil.crypto.signing_key.generate_signing_key("auto"),),
(syutil.crypto.signing_key.generate_signing_key(key_id),),
)
else:
signing_keys = cls.read_file(args.signing_key_path, "signing_key")
signing_keys = self.read_file(signing_key_path, "signing_key")
if len(signing_keys.split("\n")[0].split()) == 1:
# handle keys in the old format.
key_id = "a_" + random_string(4)
key = syutil.crypto.signing_key.decode_signing_key_base64(
syutil.crypto.signing_key.NACL_ED25519,
"auto",
key_id,
signing_keys.split("\n")[0]
)
with open(args.signing_key_path, "w") as signing_key_file:
with open(signing_key_path, "w") as signing_key_file:
syutil.crypto.signing_key.write_signing_keys(
signing_key_file,
(key,),
)
if not args.old_signing_key_path:
args.old_signing_key_path = base_key_name + ".old.signing.keys"
if not os.path.exists(args.old_signing_key_path):
with open(args.old_signing_key_path, "w"):
pass
if not args.perspectives_config_path:
args.perspectives_config_path = base_key_name + ".perspectives"
if not os.path.exists(args.perspectives_config_path):
with open(args.perspectives_config_path, "w") as perspectives_file:
perspectives_file.write(
'servers:\n'
' matrix.org:\n'
' verify_keys:\n'
' "ed25519:auto":\n'
' key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"\n'
)

View File

@@ -19,25 +19,88 @@ from twisted.python.log import PythonLoggingObserver
import logging
import logging.config
import yaml
from string import Template
import os
DEFAULT_LOG_CONFIG = Template("""
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s\
- %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: ${log_file}
maxBytes: 104857600
backupCount: 10
filters: [context]
level: INFO
console:
class: logging.StreamHandler
formatter: precise
loggers:
synapse:
level: INFO
synapse.storage.SQL:
level: INFO
root:
level: INFO
handlers: [file, console]
""")
class LoggingConfig(Config):
def __init__(self, args):
super(LoggingConfig, self).__init__(args)
self.verbosity = int(args.verbose) if args.verbose else None
self.log_config = self.abspath(args.log_config)
self.log_file = self.abspath(args.log_file)
@classmethod
def read_config(self, config):
self.verbosity = config.get("verbose", 0)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
def default_config(self, config_dir_path, server_name):
log_file = self.abspath("homeserver.log")
log_config = self.abspath(
os.path.join(config_dir_path, server_name + ".log.config")
)
return """
# Logging verbosity level.
verbose: 0
# File to write logging to
log_file: "%(log_file)s"
# A yaml python logging config file
log_config: "%(log_config)s"
""" % locals()
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
self.log_file = args.log_file
def add_arguments(cls, parser):
super(LoggingConfig, cls).add_arguments(parser)
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
'-v', '--verbose', dest="verbose", action='count',
help="The verbosity level."
)
logging_group.add_argument(
'-f', '--log-file', dest="log_file", default="homeserver.log",
'-f', '--log-file', dest="log_file",
help="File to log to."
)
logging_group.add_argument(
@@ -45,6 +108,14 @@ class LoggingConfig(Config):
help="Python logging config file"
)
def generate_files(self, config):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
with open(log_config, "wb") as log_config_file:
log_config_file.write(
DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
)
def setup_logging(self):
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"

View File

@@ -17,20 +17,15 @@ from ._base import Config
class MetricsConfig(Config):
def __init__(self, args):
super(MetricsConfig, self).__init__(args)
self.enable_metrics = args.enable_metrics
self.metrics_port = args.metrics_port
def read_config(self, config):
self.enable_metrics = config["enable_metrics"]
self.metrics_port = config.get("metrics_port")
self.metrics_bind_host = config.get("metrics_bind_host", "127.0.0.1")
@classmethod
def add_arguments(cls, parser):
super(MetricsConfig, cls).add_arguments(parser)
metrics_group = parser.add_argument_group("metrics")
metrics_group.add_argument(
'--enable-metrics', dest="enable_metrics", action="store_true",
help="Enable collection and rendering of performance metrics"
)
metrics_group.add_argument(
'--metrics-port', metavar="PORT", type=int,
help="Separate port to accept metrics requests on (on localhost)"
)
def default_config(self, config_dir_path, server_name):
return """\
## Metrics ##
# Enable collection and rendering of performance metrics
enable_metrics: False
"""

View File

@@ -17,56 +17,42 @@ from ._base import Config
class RatelimitConfig(Config):
def __init__(self, args):
super(RatelimitConfig, self).__init__(args)
self.rc_messages_per_second = args.rc_messages_per_second
self.rc_message_burst_count = args.rc_message_burst_count
def read_config(self, config):
self.rc_messages_per_second = config["rc_messages_per_second"]
self.rc_message_burst_count = config["rc_message_burst_count"]
self.federation_rc_window_size = args.federation_rc_window_size
self.federation_rc_sleep_limit = args.federation_rc_sleep_limit
self.federation_rc_sleep_delay = args.federation_rc_sleep_delay
self.federation_rc_reject_limit = args.federation_rc_reject_limit
self.federation_rc_concurrent = args.federation_rc_concurrent
self.federation_rc_window_size = config["federation_rc_window_size"]
self.federation_rc_sleep_limit = config["federation_rc_sleep_limit"]
self.federation_rc_sleep_delay = config["federation_rc_sleep_delay"]
self.federation_rc_reject_limit = config["federation_rc_reject_limit"]
self.federation_rc_concurrent = config["federation_rc_concurrent"]
@classmethod
def add_arguments(cls, parser):
super(RatelimitConfig, cls).add_arguments(parser)
rc_group = parser.add_argument_group("ratelimiting")
rc_group.add_argument(
"--rc-messages-per-second", type=float, default=0.2,
help="number of messages a client can send per second"
)
rc_group.add_argument(
"--rc-message-burst-count", type=float, default=10,
help="number of message a client can send before being throttled"
)
def default_config(self, config_dir_path, server_name):
return """\
## Ratelimiting ##
rc_group.add_argument(
"--federation-rc-window-size", type=int, default=10000,
help="The federation window size in milliseconds",
)
# Number of messages a client can send per second
rc_messages_per_second: 0.2
rc_group.add_argument(
"--federation-rc-sleep-limit", type=int, default=10,
help="The number of federation requests from a single server"
" in a window before the server will delay processing the"
" request.",
)
# Number of messages a client can send before being throttled
rc_message_burst_count: 10.0
rc_group.add_argument(
"--federation-rc-sleep-delay", type=int, default=500,
help="The duration in milliseconds to delay processing events from"
" remote servers by if they go over the sleep limit.",
)
# The federation window size in milliseconds
federation_rc_window_size: 1000
rc_group.add_argument(
"--federation-rc-reject-limit", type=int, default=50,
help="The maximum number of concurrent federation requests allowed"
" from a single server",
)
# The number of federation requests from a single server in a window
# before the server will delay processing the request.
federation_rc_sleep_limit: 10
rc_group.add_argument(
"--federation-rc-concurrent", type=int, default=3,
help="The number of federation requests to concurrently process"
" from a single server",
)
# The duration in milliseconds to delay processing events from
# remote servers by if they go over the sleep limit.
federation_rc_sleep_delay: 500
# The maximum number of concurrent federation requests allowed
# from a single server
federation_rc_reject_limit: 50
# The number of federation requests to concurrently process from a
# single server
federation_rc_concurrent: 3
"""

View File

@@ -17,45 +17,44 @@ from ._base import Config
from synapse.util.stringutils import random_string_with_symbols
import distutils.util
from distutils.util import strtobool
class RegistrationConfig(Config):
def __init__(self, args):
super(RegistrationConfig, self).__init__(args)
# `args.enable_registration` may either be a bool or a string depending
# on if the option was given a value (e.g. --enable-registration=true
# would set `args.enable_registration` to "true" not True.)
def read_config(self, config):
self.disable_registration = not bool(
distutils.util.strtobool(str(args.enable_registration))
strtobool(str(config["enable_registration"]))
)
self.registration_shared_secret = args.registration_shared_secret
if "disable_registration" in config:
self.disable_registration = bool(
strtobool(str(config["disable_registration"]))
)
@classmethod
def add_arguments(cls, parser):
super(RegistrationConfig, cls).add_arguments(parser)
self.registration_shared_secret = config.get("registration_shared_secret")
def default_config(self, config_dir, server_name):
registration_shared_secret = random_string_with_symbols(50)
return """\
## Registration ##
# Enable registration for new users.
enable_registration: False
# If set, allows registration by anyone who also has the shared
# secret, even if registration is otherwise disabled.
registration_shared_secret: "%(registration_shared_secret)s"
""" % locals()
def add_arguments(self, parser):
reg_group = parser.add_argument_group("registration")
reg_group.add_argument(
"--enable-registration",
const=True,
default=False,
nargs='?',
help="Enable registration for new users.",
)
reg_group.add_argument(
"--registration-shared-secret", type=str,
help="If set, allows registration by anyone who also has the shared"
" secret, even if registration is otherwise disabled.",
"--enable-registration", action="store_true", default=None,
help="Enable registration for new users."
)
@classmethod
def generate_config(cls, args, config_dir_path):
super(RegistrationConfig, cls).generate_config(args, config_dir_path)
if args.enable_registration is None:
args.enable_registration = False
if args.registration_shared_secret is None:
args.registration_shared_secret = random_string_with_symbols(50)
def read_arguments(self, args):
if args.enable_registration is not None:
self.disable_registration = not bool(
strtobool(str(args.enable_registration))
)

View File

@@ -17,32 +17,20 @@ from ._base import Config
class ContentRepositoryConfig(Config):
def __init__(self, args):
super(ContentRepositoryConfig, self).__init__(args)
self.max_upload_size = self.parse_size(args.max_upload_size)
self.max_image_pixels = self.parse_size(args.max_image_pixels)
self.media_store_path = self.ensure_directory(args.media_store_path)
def read_config(self, config):
self.max_upload_size = self.parse_size(config["max_upload_size"])
self.max_image_pixels = self.parse_size(config["max_image_pixels"])
self.media_store_path = self.ensure_directory(config["media_store_path"])
def parse_size(self, string):
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
suffix = string[-1]
if suffix in sizes:
string = string[:-1]
size = sizes[suffix]
return int(string) * size
def default_config(self, config_dir_path, server_name):
media_store = self.default_path("media_store")
return """
# Directory where uploaded images and attachments are stored.
media_store_path: "%(media_store)s"
@classmethod
def add_arguments(cls, parser):
super(ContentRepositoryConfig, cls).add_arguments(parser)
db_group = parser.add_argument_group("content_repository")
db_group.add_argument(
"--max-upload-size", default="10M"
)
db_group.add_argument(
"--media-store-path", default=cls.default_path("media_store")
)
db_group.add_argument(
"--max-image-pixels", default="32M",
help="Maximum number of pixels that will be thumbnailed"
)
# The largest allowed upload size in bytes
max_upload_size: "10M"
# Maximum number of pixels that will be thumbnailed
max_image_pixels: "32M"
""" % locals()

View File

@@ -17,64 +17,204 @@ from ._base import Config
class ServerConfig(Config):
def __init__(self, args):
super(ServerConfig, self).__init__(args)
self.server_name = args.server_name
self.bind_port = args.bind_port
self.bind_host = args.bind_host
self.unsecure_port = args.unsecure_port
self.daemonize = args.daemonize
self.pid_file = self.abspath(args.pid_file)
self.web_client = args.web_client
self.manhole = args.manhole
self.soft_file_limit = args.soft_file_limit
if not args.content_addr:
host = args.server_name
def read_config(self, config):
self.server_name = config["server_name"]
self.pid_file = self.abspath(config.get("pid_file"))
self.web_client = config["web_client"]
self.soft_file_limit = config["soft_file_limit"]
self.daemonize = config.get("daemonize")
self.use_frozen_dicts = config.get("use_frozen_dicts", True)
self.listeners = config.get("listeners", [])
bind_port = config.get("bind_port")
if bind_port:
self.listeners = []
bind_host = config.get("bind_host", "")
gzip_responses = config.get("gzip_responses", True)
names = ["client", "webclient"] if self.web_client else ["client"]
self.listeners.append({
"port": bind_port,
"bind_address": bind_host,
"tls": True,
"type": "http",
"resources": [
{
"names": names,
"compress": gzip_responses,
},
{
"names": ["federation"],
"compress": False,
}
]
})
unsecure_port = config.get("unsecure_port", bind_port - 400)
if unsecure_port:
self.listeners.append({
"port": unsecure_port,
"bind_address": bind_host,
"tls": False,
"type": "http",
"resources": [
{
"names": names,
"compress": gzip_responses,
},
{
"names": ["federation"],
"compress": False,
}
]
})
manhole = config.get("manhole")
if manhole:
self.listeners.append({
"port": manhole,
"bind_address": "127.0.0.1",
"type": "manhole",
})
metrics_port = config.get("metrics_port")
if metrics_port:
self.listeners.append({
"port": metrics_port,
"bind_address": config.get("metrics_bind_host", "127.0.0.1"),
"tls": False,
"type": "http",
"resources": [
{
"names": ["metrics"],
"compress": False,
},
]
})
# Attempt to guess the content_addr for the v0 content repository
content_addr = config.get("content_addr")
if not content_addr:
for listener in self.listeners:
if listener["type"] == "http" and not listener.get("tls", False):
unsecure_port = listener["port"]
break
else:
raise RuntimeError("Could not determine 'content_addr'")
host = self.server_name
if ':' not in host:
host = "%s:%d" % (host, args.unsecure_port)
host = "%s:%d" % (host, unsecure_port)
else:
host = host.split(':')[0]
host = "%s:%d" % (host, args.unsecure_port)
args.content_addr = "http://%s" % (host,)
host = "%s:%d" % (host, unsecure_port)
content_addr = "http://%s" % (host,)
self.content_addr = args.content_addr
self.content_addr = content_addr
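# Illustrative summary (not part of the diff): a legacy config containing
#     bind_port: 8448, unsecure_port: 8008, manhole: 9000, web_client: True
# is translated by the block above into an HTTPS listener on 8448 and an HTTP
# listener on 8008 (each serving the client, webclient and federation
# resources), plus a manhole listener on 9000; a metrics listener is only
# added when metrics_port is set.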
@classmethod
def add_arguments(cls, parser):
super(ServerConfig, cls).add_arguments(parser)
def default_config(self, config_dir_path, server_name):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
unsecure_port = bind_port - 400
else:
bind_port = 8448
unsecure_port = 8008
pid_file = self.abspath("homeserver.pid")
return """\
## Server ##
# The domain name of the server, with optional explicit port.
# This is used by remote servers to connect to this server,
# e.g. matrix.org, localhost:8080, etc.
server_name: "%(server_name)s"
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
soft_file_limit: 0
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
# Main HTTPS listener
# For when matrix traffic is sent directly to synapse.
-
# The port to listen for HTTPS requests on.
port: %(bind_port)s
# Local interface to listen on.
# The empty string will cause synapse to listen on all interfaces.
bind_address: ''
# This is a 'http' listener, allows us to specify 'resources'.
type: http
tls: true
# Use the X-Forwarded-For (XFF) header as the client IP and not the
# actual client IP.
x_forwarded: false
# List of HTTP resources to serve on this listener.
resources:
-
# List of resources to host on this listener.
names:
- client # The client-server APIs, both v1 and v2
- webclient # The bundled webclient.
# Should synapse compress HTTP responses to clients that support it?
# This should be disabled if running synapse behind a load balancer
# that can do automatic compression.
compress: true
- names: [federation] # Federation APIs
compress: false
# Unsecure HTTP listener,
# For when matrix traffic passes through a load balancer that unwraps TLS.
- port: %(unsecure_port)s
tls: false
bind_address: ''
type: http
x_forwarded: false
resources:
- names: [client, webclient]
compress: true
- names: [federation]
compress: false
# Turn on the twisted telnet manhole service on localhost on the given
# port.
# - port: 9000
# bind_address: 127.0.0.1
# type: manhole
""" % locals()
def read_arguments(self, args):
if args.manhole is not None:
self.manhole = args.manhole
if args.daemonize is not None:
self.daemonize = args.daemonize
def add_arguments(self, parser):
server_group = parser.add_argument_group("server")
server_group.add_argument(
"-H", "--server-name", default="localhost",
help="The domain name of the server, with optional explicit port. "
"This is used by remote servers to connect to this server, "
"e.g. matrix.org, localhost:8080, etc."
)
server_group.add_argument("-p", "--bind-port", metavar="PORT",
type=int, help="https port to listen on",
default=8448)
server_group.add_argument("--unsecure-port", metavar="PORT",
type=int, help="http port to listen on",
default=8008)
server_group.add_argument("--bind-host", default="",
help="Local interface to listen on")
server_group.add_argument("-D", "--daemonize", action='store_true',
default=None,
help="Daemonize the home server")
server_group.add_argument('--pid-file', default="homeserver.pid",
help="When running as a daemon, the file to"
" store the pid in")
server_group.add_argument('--web_client', default=True, type=bool,
help="Whether or not to serve a web client")
server_group.add_argument("--manhole", metavar="PORT", dest="manhole",
type=int,
help="Turn on the twisted telnet manhole"
" service on the given port.")
server_group.add_argument("--content-addr", default=None,
help="The host and scheme to use for the "
"content repository")
server_group.add_argument("--soft-file-limit", type=int, default=0,
help="Set the soft limit on the number of "
"file descriptors synapse can use. "
"Zero is used to indicate synapse "
"should set the soft limit to the hard"
"limit.")


@@ -23,37 +23,44 @@ GENERATE_DH_PARAMS = False
class TlsConfig(Config):
def __init__(self, args):
super(TlsConfig, self).__init__(args)
def read_config(self, config):
self.tls_certificate = self.read_tls_certificate(
args.tls_certificate_path
config.get("tls_certificate_path")
)
self.no_tls = args.no_tls
self.no_tls = config.get("no_tls", False)
if self.no_tls:
self.tls_private_key = None
else:
self.tls_private_key = self.read_tls_private_key(
args.tls_private_key_path
config.get("tls_private_key_path")
)
self.tls_dh_params_path = self.check_file(
args.tls_dh_params_path, "tls_dh_params"
config.get("tls_dh_params_path"), "tls_dh_params"
)
@classmethod
def add_arguments(cls, parser):
super(TlsConfig, cls).add_arguments(parser)
tls_group = parser.add_argument_group("tls")
tls_group.add_argument("--tls-certificate-path",
help="PEM encoded X509 certificate for TLS")
tls_group.add_argument("--tls-private-key-path",
help="PEM encoded private key for TLS")
tls_group.add_argument("--tls-dh-params-path",
help="PEM dh parameters for ephemeral keys")
tls_group.add_argument("--no-tls", action='store_true',
help="Don't bind to the https port.")
def default_config(self, config_dir_path, server_name):
base_key_name = os.path.join(config_dir_path, server_name)
tls_certificate_path = base_key_name + ".tls.crt"
tls_private_key_path = base_key_name + ".tls.key"
tls_dh_params_path = base_key_name + ".tls.dh"
return """\
# PEM encoded X509 certificate for TLS
tls_certificate_path: "%(tls_certificate_path)s"
# PEM encoded private key for TLS
tls_private_key_path: "%(tls_private_key_path)s"
# PEM dh parameters for ephemeral keys
tls_dh_params_path: "%(tls_dh_params_path)s"
# Don't bind to the https port
no_tls: False
""" % locals()
def read_tls_certificate(self, cert_path):
cert_pem = self.read_file(cert_path, "tls_certificate")
@@ -63,22 +70,13 @@ class TlsConfig(Config):
private_key_pem = self.read_file(private_key_path, "tls_private_key")
return crypto.load_privatekey(crypto.FILETYPE_PEM, private_key_pem)
@classmethod
def generate_config(cls, args, config_dir_path):
super(TlsConfig, cls).generate_config(args, config_dir_path)
base_key_name = os.path.join(config_dir_path, args.server_name)
def generate_files(self, config):
tls_certificate_path = config["tls_certificate_path"]
tls_private_key_path = config["tls_private_key_path"]
tls_dh_params_path = config["tls_dh_params_path"]
if args.tls_certificate_path is None:
args.tls_certificate_path = base_key_name + ".tls.crt"
if args.tls_private_key_path is None:
args.tls_private_key_path = base_key_name + ".tls.key"
if args.tls_dh_params_path is None:
args.tls_dh_params_path = base_key_name + ".tls.dh"
if not os.path.exists(args.tls_private_key_path):
with open(args.tls_private_key_path, "w") as private_key_file:
if not os.path.exists(tls_private_key_path):
with open(tls_private_key_path, "w") as private_key_file:
tls_private_key = crypto.PKey()
tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
private_key_pem = crypto.dump_privatekey(
@@ -86,17 +84,17 @@ class TlsConfig(Config):
)
private_key_file.write(private_key_pem)
else:
with open(args.tls_private_key_path) as private_key_file:
with open(tls_private_key_path) as private_key_file:
private_key_pem = private_key_file.read()
tls_private_key = crypto.load_privatekey(
crypto.FILETYPE_PEM, private_key_pem
)
if not os.path.exists(args.tls_certificate_path):
with open(args.tls_certificate_path, "w") as certifcate_file:
if not os.path.exists(tls_certificate_path):
with open(tls_certificate_path, "w") as certifcate_file:
cert = crypto.X509()
subject = cert.get_subject()
subject.CN = args.server_name
subject.CN = config["server_name"]
cert.set_serial_number(1000)
cert.gmtime_adj_notBefore(0)
@@ -110,16 +108,16 @@ class TlsConfig(Config):
certificate_file.write(cert_pem)
if not os.path.exists(args.tls_dh_params_path):
if not os.path.exists(tls_dh_params_path):
if GENERATE_DH_PARAMS:
subprocess.check_call([
"openssl", "dhparam",
"-outform", "PEM",
"-out", args.tls_dh_params_path,
"-out", tls_dh_params_path,
"2048"
])
else:
with open(args.tls_dh_params_path, "w") as dh_params_file:
with open(tls_dh_params_path, "w") as dh_params_file:
dh_params_file.write(
"2048-bit DH parameters taken from rfc3526\n"
"-----BEGIN DH PARAMETERS-----\n"


@@ -17,28 +17,21 @@ from ._base import Config
class VoipConfig(Config):
def __init__(self, args):
super(VoipConfig, self).__init__(args)
self.turn_uris = args.turn_uris
self.turn_shared_secret = args.turn_shared_secret
self.turn_user_lifetime = args.turn_user_lifetime
def read_config(self, config):
self.turn_uris = config.get("turn_uris", [])
self.turn_shared_secret = config["turn_shared_secret"]
self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"])
@classmethod
def add_arguments(cls, parser):
super(VoipConfig, cls).add_arguments(parser)
group = parser.add_argument_group("voip")
group.add_argument(
"--turn-uris", type=str, default=None, action='append',
help="The public URIs of the TURN server to give to clients"
)
group.add_argument(
"--turn-shared-secret", type=str, default=None,
help=(
"The shared secret used to compute passwords for the TURN"
" server"
)
)
group.add_argument(
"--turn-user-lifetime", type=int, default=(1000 * 60 * 60),
help="How long generated TURN credentials last, in ms"
)
def default_config(self, config_dir_path, server_name):
return """\
## Turn ##
# The public URIs of the TURN server to give to clients
turn_uris: []
# The shared secret used to compute passwords for the TURN server
turn_shared_secret: "YOUR_SHARED_SECRET"
# How long generated TURN credentials last
turn_user_lifetime: "1h"
"""


@@ -18,7 +18,9 @@ from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logcontext import (
preserve_context_over_fn, preserve_context_over_deferred
)
import simplejson as json
import logging
@@ -40,11 +42,14 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
for i in range(5):
try:
with PreserveLoggingContext():
protocol = yield endpoint.connect(factory)
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
return
protocol = yield preserve_context_over_fn(
endpoint.connect, factory
)
server_response, server_certificate = yield preserve_context_over_deferred(
protocol.remote_key
)
defer.returnValue((server_response, server_certificate))
return
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):


@@ -26,7 +26,7 @@ from synapse.api.errors import SynapseError, Codes
from synapse.util.retryutils import get_retry_limiter
from synapse.util.async import create_observer
from synapse.util.async import ObservableDeferred
from OpenSSL import crypto
@@ -111,6 +111,10 @@ class Keyring(object):
if download is None:
download = self._get_server_verify_key_impl(server_name, key_ids)
download = ObservableDeferred(
download,
consumeErrors=True
)
self.key_downloads[server_name] = download
@download.addBoth
@@ -118,30 +122,31 @@ class Keyring(object):
del self.key_downloads[server_name]
return ret
r = yield create_observer(download)
r = yield download.observe()
defer.returnValue(r)
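The switch from create_observer(download) to download.observe() means concurrent key requests for the same server share a single in-flight download. A minimal sketch of the pattern (an illustration under assumptions, not synapse.util.async.ObservableDeferred itself):

from twisted.internet import defer


class SimpleObservableDeferred(object):
    """Wraps one underlying deferred so many callers can wait on its result."""

    def __init__(self, underlying, consumeErrors=False):
        self._result = None
        self._observers = []

        def callback(result):
            self._result = (True, result)
            for observer in self._observers:
                observer.callback(result)
            return result

        def errback(failure):
            self._result = (False, failure)
            for observer in self._observers:
                observer.errback(failure)
            # Swallowing the failure here stops "unhandled error" noise on the
            # underlying deferred, which is what consumeErrors=True is for.
            return None if consumeErrors else failure

        underlying.addCallbacks(callback, errback)

    def observe(self):
        if self._result is None:
            d = defer.Deferred()
            self._observers.append(d)
            return d
        success, value = self._result
        return defer.succeed(value) if success else defer.fail(value)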
@defer.inlineCallbacks
def _get_server_verify_key_impl(self, server_name, key_ids):
keys = None
perspective_results = []
for perspective_name, perspective_keys in self.perspective_servers.items():
@defer.inlineCallbacks
def get_key():
try:
result = yield self.get_server_verify_key_v2_indirect(
server_name, key_ids, perspective_name, perspective_keys
)
defer.returnValue(result)
except:
logging.info(
"Unable to getting key %r for %r from %r",
key_ids, server_name, perspective_name,
)
perspective_results.append(get_key())
@defer.inlineCallbacks
def get_key(perspective_name, perspective_keys):
try:
result = yield self.get_server_verify_key_v2_indirect(
server_name, key_ids, perspective_name, perspective_keys
)
defer.returnValue(result)
except Exception as e:
logging.info(
"Unable to getting key %r for %r from %r: %s %s",
key_ids, server_name, perspective_name,
type(e).__name__, str(e.message),
)
perspective_results = yield defer.gatherResults(perspective_results)
perspective_results = yield defer.gatherResults([
get_key(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
])
for results in perspective_results:
if results is not None:
@@ -154,17 +159,22 @@ class Keyring(object):
)
with limiter:
if keys is None:
if not keys:
try:
keys = yield self.get_server_verify_key_v2_direct(
server_name, key_ids
)
except:
pass
except Exception as e:
logging.info(
"Unable to getting key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e.message),
)
keys = yield self.get_server_verify_key_v1_direct(
server_name, key_ids
)
if not keys:
keys = yield self.get_server_verify_key_v1_direct(
server_name, key_ids
)
for key_id in key_ids:
if key_id in keys:
@@ -184,7 +194,7 @@ class Keyring(object):
# TODO(mark): Set the minimum_valid_until_ts to that needed by
# the events being validated or the current time if validating
# an incoming request.
responses = yield self.client.post_json(
query_response = yield self.client.post_json(
destination=perspective_name,
path=b"/_matrix/key/v2/query",
data={
@@ -200,6 +210,8 @@ class Keyring(object):
keys = {}
responses = query_response["server_keys"]
for response in responses:
if (u"signatures" not in response
or perspective_name not in response[u"signatures"]):
@@ -323,7 +335,7 @@ class Keyring(object):
verify_key.time_added = time_now_ms
old_verify_keys[key_id] = verify_key
for key_id in response_json["signatures"][server_name]:
for key_id in response_json["signatures"].get(server_name, {}):
if key_id not in response_json["verify_keys"]:
raise ValueError(
"Key response must include verification keys for all"


@@ -16,6 +16,12 @@
from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting
# a dict to frozen_dicts is expensive.
USE_FROZEN_DICTS = True
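The comment above is the motivation for the new USE_FROZEN_DICTS switch; the aliasing bug it guards against looks like this (a toy illustration, not FrozenEvent itself):

# Two "events" sharing one signatures dict: mutating one silently changes the other.
shared_signatures = {"example.com": {"ed25519:a_key": "signature"}}
event_a = {"event_id": "$a:example.com", "signatures": shared_signatures}
event_b = {"event_id": "$b:example.com", "signatures": shared_signatures}

event_a["signatures"]["example.com"]["ed25519:a_key"] = "tampered"
assert event_b["signatures"]["example.com"]["ed25519:a_key"] == "tampered"

# Freezing the event dict into immutable mappings makes such writes raise
# instead of silently propagating, at the cost of an expensive conversion.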
class _EventInternalMetadata(object):
def __init__(self, internal_metadata_dict):
self.__dict__ = dict(internal_metadata_dict)
@@ -122,7 +128,10 @@ class FrozenEvent(EventBase):
unsigned = dict(event_dict.pop("unsigned", {}))
frozen_dict = freeze(event_dict)
if USE_FROZEN_DICTS:
frozen_dict = freeze(event_dict)
else:
frozen_dict = event_dict
super(FrozenEvent, self).__init__(
frozen_dict,


@@ -18,12 +18,12 @@ from twisted.internet import defer
from synapse.events.utils import prune_event
from syutil.jsonutil import encode_canonical_json
from synapse.crypto.event_signing import check_event_content_hash
from synapse.api.errors import SynapseError
from synapse.util import unwrapFirstError
import logging
@@ -78,6 +78,7 @@ class FederationBase(object):
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
)
if new_pdu:
@@ -94,7 +95,7 @@ class FederationBase(object):
yield defer.gatherResults(
[do(pdu) for pdu in pdus],
consumeErrors=True
)
).addErrback(unwrapFirstError)
defer.returnValue(signed_pdus)
@@ -117,16 +118,15 @@ class FederationBase(object):
)
except SynapseError:
logger.warn(
"Signature check failed for %s redacted to %s",
encode_canonical_json(pdu.get_pdu_json()),
encode_canonical_json(redacted_pdu_json),
"Signature check failed for %s",
pdu.event_id,
)
raise
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s, %s",
pdu.event_id, encode_canonical_json(pdu.get_dict())
"Event content has been tampered, redacting.",
pdu.event_id,
)
defer.returnValue(redacted_event)


@@ -22,6 +22,7 @@ from .units import Edu
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError
from synapse.util.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.events import FrozenEvent
@@ -164,16 +165,17 @@ class FederationClient(FederationBase):
for p in transaction_data["pdus"]
]
for i, pdu in enumerate(pdus):
pdus[i] = yield self._check_sigs_and_hash(pdu)
# FIXME: We should handle signature failures more gracefully.
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield defer.gatherResults(
[self._check_sigs_and_hash(pdu) for pdu in pdus],
consumeErrors=True,
).addErrback(unwrapFirstError)
defer.returnValue(pdus)
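Several of these hunks add .addErrback(unwrapFirstError) after defer.gatherResults(..., consumeErrors=True). A sketch of what that helper presumably does (the name comes from synapse.util; this body is an assumption for illustration): gatherResults wraps the first failure in a defer.FirstError, and the errback unwraps it so callers see the original exception.

from twisted.internet import defer


def unwrap_first_error(f):
    # gatherResults(consumeErrors=True) wraps the first failure in FirstError;
    # return the wrapped failure so callers see the original exception.
    f.trap(defer.FirstError)
    return f.value.subFailure


d = defer.gatherResults(
    [defer.succeed(1), defer.fail(ValueError("boom"))],
    consumeErrors=True,
).addErrback(unwrap_first_error)
d.addErrback(lambda f: f.type.__name__)  # the observer now sees ValueError, not FirstError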
@defer.inlineCallbacks
@log_function
def get_pdu(self, destinations, event_id, outlier=False):
def get_pdu(self, destinations, event_id, outlier=False, timeout=None):
"""Requests the PDU with given origin and ID from the remote home
servers.
@@ -189,6 +191,8 @@ class FederationClient(FederationBase):
outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
it's from an arbitrary point in the context as opposed to part
of the current block of PDUs. Defaults to `False`
timeout (int): How long to try (in ms) each destination for before
moving to the next destination. None indicates no timeout.
Returns:
Deferred: Results in the requested PDU.
@@ -212,7 +216,7 @@ class FederationClient(FederationBase):
with limiter:
transaction_data = yield self.transport_layer.get_event(
destination, event_id
destination, event_id, timeout=timeout,
)
logger.debug("transaction_data %r", transaction_data)
@@ -222,7 +226,7 @@ class FederationClient(FederationBase):
for p in transaction_data["pdus"]
]
if pdu_list:
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
# Check signatures are correct.
@@ -255,7 +259,7 @@ class FederationClient(FederationBase):
)
continue
if self._get_pdu_cache is not None:
if self._get_pdu_cache is not None and pdu:
self._get_pdu_cache[event_id] = pdu
defer.returnValue(pdu)
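A schematic of the destination loop the new timeout parameter feeds into (not the actual FederationClient.get_pdu; fetch is a placeholder for a call like transport_layer.get_event): try each server in turn, give each at most `timeout` ms, and stop at the first usable response.

from twisted.internet import defer


@defer.inlineCallbacks
def get_first_response(destinations, fetch, timeout=None):
    """fetch(destination, timeout=...) -> Deferred of a result or None."""
    for destination in destinations:
        try:
            result = yield fetch(destination, timeout=timeout)
        except Exception:
            # A failed or timed-out destination just means we move on.
            continue
        if result:
            defer.returnValue(result)
    defer.returnValue(None)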
@@ -370,13 +374,17 @@ class FederationClient(FederationBase):
for p in content.get("auth_chain", [])
]
signed_state = yield self._check_sigs_and_hash_and_fetch(
destination, state, outlier=True
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True
)
signed_state, signed_auth = yield defer.gatherResults(
[
self._check_sigs_and_hash_and_fetch(
destination, state, outlier=True
),
self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True
)
],
consumeErrors=True
).addErrback(unwrapFirstError)
auth_chain.sort(key=lambda e: e.depth)
@@ -491,7 +499,7 @@ class FederationClient(FederationBase):
]
signed_events = yield self._check_sigs_and_hash_and_fetch(
destination, events, outlier=True
destination, events, outlier=False
)
have_gotten_all_from_destination = True
@@ -518,7 +526,7 @@ class FederationClient(FederationBase):
# Are we missing any?
seen_events = set(earliest_events_ids)
seen_events.update(e.event_id for e in signed_events)
seen_events.update(e.event_id for e in signed_events if e)
missing_events = {}
for e in itertools.chain(latest_events, signed_events):
@@ -561,7 +569,7 @@ class FederationClient(FederationBase):
res = yield defer.DeferredList(deferreds, consumeErrors=True)
for (result, val), (e_id, _) in zip(res, ordered_missing):
if result:
if result and val:
signed_events.append(val)
else:
failed_to_fetch.add(e_id)


@@ -20,7 +20,6 @@ from .federation_base import FederationBase
from .units import Transaction, Edu
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext
from synapse.events import FrozenEvent
import synapse.metrics
@@ -123,29 +122,28 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
with PreserveLoggingContext():
results = []
results = []
for pdu in pdu_list:
d = self._handle_new_pdu(transaction.origin, pdu)
for pdu in pdu_list:
d = self._handle_new_pdu(transaction.origin, pdu)
try:
yield d
results.append({})
except FederationError as e:
self.send_failure(e, transaction.origin)
results.append({"error": str(e)})
except Exception as e:
results.append({"error": str(e)})
logger.exception("Failed to handle PDU")
try:
yield d
results.append({})
except FederationError as e:
self.send_failure(e, transaction.origin)
results.append({"error": str(e)})
except Exception as e:
results.append({"error": str(e)})
logger.exception("Failed to handle PDU")
if hasattr(transaction, "edus"):
for edu in [Edu(**x) for x in transaction.edus]:
self.received_edu(
transaction.origin,
edu.edu_type,
edu.content
)
if hasattr(transaction, "edus"):
for edu in [Edu(**x) for x in transaction.edus]:
self.received_edu(
transaction.origin,
edu.edu_type,
edu.content
)
for failure in getattr(transaction, "pdu_failures", []):
logger.info("Got failure %r", failure)


@@ -23,8 +23,6 @@ from twisted.internet import defer
from synapse.util.logutils import log_function
from syutil.jsonutil import encode_canonical_json
import logging
@@ -71,7 +69,7 @@ class TransactionActions(object):
transaction.transaction_id,
transaction.origin,
code,
encode_canonical_json(response)
response,
)
@defer.inlineCallbacks
@@ -101,5 +99,5 @@ class TransactionActions(object):
transaction.transaction_id,
transaction.destination,
response_code,
encode_canonical_json(response_dict)
response_dict,
)


@@ -104,7 +104,6 @@ class TransactionQueue(object):
return not destination.startswith("localhost")
@defer.inlineCallbacks
@log_function
def enqueue_pdu(self, pdu, destinations, order):
# We loop through all destinations to see whether we already have
# a transaction in progress. If we do, stick it in the pending_pdus
@@ -208,13 +207,13 @@ class TransactionQueue(object):
# request at which point pending_pdus_by_dest just keeps growing.
# we need application-layer timeouts of some flavour of these
# requests
logger.info(
logger.debug(
"TX [%s] Transaction already in progress",
destination
)
return
logger.info("TX [%s] _attempt_new_transaction", destination)
logger.debug("TX [%s] _attempt_new_transaction", destination)
# list of (pending_pdu, deferred, order)
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
@@ -222,11 +221,11 @@ class TransactionQueue(object):
pending_failures = self.pending_failures_by_dest.pop(destination, [])
if pending_pdus:
logger.info("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
if not pending_pdus and not pending_edus and not pending_failures:
logger.info("TX [%s] Nothing to send", destination)
logger.debug("TX [%s] Nothing to send", destination)
return
# Sort based on the order field
@@ -243,6 +242,8 @@ class TransactionQueue(object):
try:
self.pending_transactions[destination] = 1
txn_id = str(self._next_txn_id)
limiter = yield get_retry_limiter(
destination,
self._clock,
@@ -250,9 +251,9 @@ class TransactionQueue(object):
)
logger.debug(
"TX [%s] Attempting new transaction"
"TX [%s] {%s} Attempting new transaction"
" (pdus: %d, edus: %d, failures: %d)",
destination,
destination, txn_id,
len(pending_pdus),
len(pending_edus),
len(pending_failures)
@@ -262,7 +263,7 @@ class TransactionQueue(object):
transaction = Transaction.create_new(
origin_server_ts=int(self._clock.time_msec()),
transaction_id=str(self._next_txn_id),
transaction_id=txn_id,
origin=self.server_name,
destination=destination,
pdus=pdus,
@@ -276,9 +277,13 @@ class TransactionQueue(object):
logger.debug("TX [%s] Persisted transaction", destination)
logger.info(
"TX [%s] Sending transaction [%s]",
destination,
"TX [%s] {%s} Sending transaction [%s],"
" (PDUs: %d, EDUs: %d, failures: %d)",
destination, txn_id,
transaction.transaction_id,
len(pending_pdus),
len(pending_edus),
len(pending_failures),
)
with limiter:
@@ -314,7 +319,10 @@ class TransactionQueue(object):
code = e.code
response = e.response
logger.info("TX [%s] got %d response", destination, code)
logger.info(
"TX [%s] {%s} got %d response",
destination, txn_id, code
)
logger.debug("TX [%s] Sent transaction", destination)
logger.debug("TX [%s] Marking as delivered...", destination)


@@ -50,13 +50,15 @@ class TransportLayerClient(object):
)
@log_function
def get_event(self, destination, event_id):
def get_event(self, destination, event_id, timeout=None):
""" Requests the pdu with give id and origin from the given server.
Args:
destination (str): The host name of the remote home server we want
to get the state from.
event_id (str): The id of the event being requested.
timeout (int): How long to try (in ms) the destination for before
giving up. None indicates no timeout.
Returns:
Deferred: Results in a dict received from the remote homeserver.
@@ -65,7 +67,7 @@ class TransportLayerClient(object):
destination, event_id)
path = PREFIX + "/event/%s/" % (event_id, )
return self.client.get_json(destination, path=path)
return self.client.get_json(destination, path=path, timeout=timeout)
@log_function
def backfill(self, destination, room_id, event_tuples, limit):


@@ -93,6 +93,8 @@ class TransportLayerServer(object):
yield self.keyring.verify_json_for_server(origin, json_request)
logger.info("Request from %s", origin)
defer.returnValue((origin, content))
@log_function
@@ -196,6 +198,14 @@ class FederationSendServlet(BaseFederationServlet):
transaction_id, str(transaction_data)
)
logger.info(
"Received txn %s from %s. (PDUs: %d, EDUs: %d, failures: %d)",
transaction_id, origin,
len(transaction_data.get("pdus", [])),
len(transaction_data.get("edus", [])),
len(transaction_data.get("failures", [])),
)
# We should ideally be getting this from the security layer.
# origin = body["origin"]


@@ -20,6 +20,8 @@ from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.api.constants import Membership, EventTypes
from synapse.types import UserID
from synapse.util.logcontext import PreserveLoggingContext
import logging
@@ -76,7 +78,9 @@ class BaseHandler(object):
context = yield state_handler.compute_event_context(builder)
if builder.is_state():
builder.prev_state = context.prev_state_events
builder.prev_state = yield self.store.add_event_hashes(
context.prev_state_events
)
yield self.auth.add_auth_events(builder, context)
@@ -103,7 +107,9 @@ class BaseHandler(object):
if not suppress_auth:
self.auth.check(event, auth_events=context.current_state)
yield self.store.persist_event(event, context=context)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
federation_handler = self.hs.get_handlers().federation_handler
@@ -137,10 +143,12 @@ class BaseHandler(object):
"Failed to get destination from event %s", s.event_id
)
# Don't block waiting on waking up all the listeners.
notify_d = self.notifier.on_new_room_event(
event, extra_users=extra_users
)
with PreserveLoggingContext():
# Don't block waiting on waking up all the listeners.
notify_d = self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
def log_failure(f):
logger.warn(
@@ -150,8 +158,6 @@ class BaseHandler(object):
notify_d.addErrback(log_failure)
fed_d = federation_handler.handle_new_event(
federation_handler.handle_new_event(
event, destinations=destinations,
)
fed_d.addErrback(log_failure)


@@ -15,7 +15,7 @@
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventTypes
from synapse.appservice import ApplicationService
from synapse.types import UserID
@@ -147,10 +147,7 @@ class ApplicationServicesHandler(object):
)
# We need to know the members associated with this event.room_id,
# if any.
member_list = yield self.store.get_room_members(
room_id=event.room_id,
membership=Membership.JOIN
)
member_list = yield self.store.get_users_in_room(event.room_id)
services = yield self.store.get_app_services()
interested_list = [
@@ -180,7 +177,7 @@ class ApplicationServicesHandler(object):
return
user_info = yield self.store.get_user_by_id(user_id)
if len(user_info) > 0:
if not user_info:
defer.returnValue(False)
return


@@ -159,7 +159,7 @@ class AuthHandler(BaseHandler):
logger.warn("Attempted to login as %s but they do not exist", user)
raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
stored_hash = user_info[0]["password_hash"]
stored_hash = user_info["password_hash"]
if bcrypt.checkpw(password, stored_hash):
defer.returnValue(user)
else:
@@ -187,8 +187,8 @@ class AuthHandler(BaseHandler):
# each request
try:
client = SimpleHttpClient(self.hs)
data = yield client.post_urlencoded_get_json(
"https://www.google.com/recaptcha/api/siteverify",
resp_body = yield client.post_urlencoded_get_json(
self.hs.config.recaptcha_siteverify_api,
args={
'secret': self.hs.config.recaptcha_private_key,
'response': user_response,
@@ -198,7 +198,8 @@ class AuthHandler(BaseHandler):
except PartialDownloadError as pde:
# Twisted is silly
data = pde.response
resp_body = simplejson.loads(data)
resp_body = simplejson.loads(data)
if 'success' in resp_body and resp_body['success']:
defer.returnValue(True)
raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)


@@ -22,6 +22,7 @@ from synapse.api.constants import EventTypes
from synapse.types import RoomAlias
import logging
import string
logger = logging.getLogger(__name__)
@@ -40,6 +41,10 @@ class DirectoryHandler(BaseHandler):
def _create_association(self, room_alias, room_id, servers=None):
# general association creation for both human users and app services
for wchar in string.whitespace:
if wchar in room_alias.localpart:
raise SynapseError(400, "Invalid characters in room alias")
if not self.hs.is_mine(room_alias):
raise SynapseError(400, "Room alias must be local")
# TODO(erikj): Change this.
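The new check above rejects aliases whose localpart contains any whitespace. A small sketch of what it accepts and rejects (illustrative names only):

import string

def localpart_has_whitespace(localpart):
    # string.whitespace is " \t\n\r\x0b\x0c", mirroring the loop above.
    return any(wchar in localpart for wchar in string.whitespace)

assert localpart_has_whitespace("bad alias")
assert not localpart_has_whitespace("good_alias")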


@@ -15,7 +15,6 @@
from twisted.internet import defer
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logutils import log_function
from synapse.types import UserID
from synapse.events.utils import serialize_event
@@ -81,10 +80,9 @@ class EventStreamHandler(BaseHandler):
# thundering herds on restart.
timeout = random.randint(int(timeout*0.9), int(timeout*1.1))
with PreserveLoggingContext():
events, tokens = yield self.notifier.get_events_for(
auth_user, room_ids, pagin_config, timeout
)
events, tokens = yield self.notifier.get_events_for(
auth_user, room_ids, pagin_config, timeout
)
time_now = self.clock.time_msec()


@@ -18,9 +18,11 @@
from ._base import BaseHandler
from synapse.api.errors import (
AuthError, FederationError, StoreError,
AuthError, FederationError, StoreError, CodeMessageException, SynapseError,
)
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.util import unwrapFirstError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor
from synapse.util.frozenutils import unfreeze
@@ -29,6 +31,8 @@ from synapse.crypto.event_signing import (
)
from synapse.types import UserID
from synapse.util.retryutils import NotRetryingDestination
from twisted.internet import defer
import itertools
@@ -73,8 +77,6 @@ class FederationHandler(BaseHandler):
# When joining a room we need to queue any events for that room up
self.room_queues = {}
@log_function
@defer.inlineCallbacks
def handle_new_event(self, event, destinations):
""" Takes in an event from the client to server side, that has already
been authed and handled by the state module, and sends it to any
@@ -89,9 +91,7 @@ class FederationHandler(BaseHandler):
processing.
"""
yield run_on_reactor()
self.replication_layer.send_pdu(event, destinations)
return self.replication_layer.send_pdu(event, destinations)
@log_function
@defer.inlineCallbacks
@@ -160,7 +160,7 @@ class FederationHandler(BaseHandler):
)
try:
yield self._handle_new_event(
_, event_stream_id, max_stream_id = yield self._handle_new_event(
origin,
event,
state=state,
@@ -201,9 +201,11 @@ class FederationHandler(BaseHandler):
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
d = self.notifier.on_new_room_event(
event, extra_users=extra_users
)
with PreserveLoggingContext():
d = self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
def log_failure(f):
logger.warn(
@@ -222,36 +224,254 @@ class FederationHandler(BaseHandler):
@log_function
@defer.inlineCallbacks
def backfill(self, dest, room_id, limit):
def backfill(self, dest, room_id, limit, extremities=[]):
""" Trigger a backfill request to `dest` for the given `room_id`
"""
extremities = yield self.store.get_oldest_events_in_room(room_id)
if not extremities:
extremities = yield self.store.get_oldest_events_in_room(room_id)
pdus = yield self.replication_layer.backfill(
events = yield self.replication_layer.backfill(
dest,
room_id,
limit,
limit=limit,
extremities=extremities,
)
events = []
event_map = {e.event_id: e for e in events}
for pdu in pdus:
event = pdu
event_ids = set(e.event_id for e in events)
# FIXME (erikj): Not sure this actually works :/
context = yield self.state_handler.compute_event_context(event)
edges = [
ev.event_id
for ev in events
if set(e_id for e_id, _ in ev.prev_events) - event_ids
]
events.append((event, context))
logger.info(
"backfill: Got %d events with %d edges",
len(events), len(edges),
)
yield self.store.persist_event(
event,
context=context,
backfilled=True
# For each edge get the current state.
auth_events = {}
state_events = {}
events_to_state = {}
for e_id in edges:
state, auth = yield self.replication_layer.get_state_for_room(
destination=dest,
room_id=room_id,
event_id=e_id
)
auth_events.update({a.event_id: a for a in auth})
auth_events.update({s.event_id: s for s in state})
state_events.update({s.event_id: s for s in state})
events_to_state[e_id] = state
seen_events = yield self.store.have_events(
set(auth_events.keys()) | set(state_events.keys())
)
all_events = events + state_events.values() + auth_events.values()
required_auth = set(
a_id for event in all_events for a_id, _ in event.auth_events
)
missing_auth = required_auth - set(auth_events)
results = yield defer.gatherResults(
[
self.replication_layer.get_pdu(
[dest],
event_id,
outlier=True,
timeout=10000,
)
for event_id in missing_auth
],
consumeErrors=True
).addErrback(unwrapFirstError)
auth_events.update({a.event_id: a for a in results})
yield defer.gatherResults(
[
self._handle_new_event(
dest, a,
auth_events={
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in a.auth_events
},
)
for a in auth_events.values()
if a.event_id not in seen_events
],
consumeErrors=True,
).addErrback(unwrapFirstError)
yield defer.gatherResults(
[
self._handle_new_event(
dest, event_map[e_id],
state=events_to_state[e_id],
backfilled=True,
auth_events={
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in event_map[e_id].auth_events
},
)
for e_id in events_to_state
],
consumeErrors=True
).addErrback(unwrapFirstError)
events.sort(key=lambda e: e.depth)
for event in events:
if event in events_to_state:
continue
yield self._handle_new_event(
dest, event,
backfilled=True,
)
defer.returnValue(events)
@defer.inlineCallbacks
def maybe_backfill(self, room_id, current_depth):
"""Checks the database to see if we should backfill before paginating,
and if so, does so.
"""
extremities = yield self.store.get_oldest_events_with_depth_in_room(
room_id
)
if not extremities:
logger.debug("Not backfilling as no extremeties found.")
return
# Check if we reached a point where we should start backfilling.
sorted_extremities_tuple = sorted(
extremities.items(),
key=lambda e: -int(e[1])
)
max_depth = sorted_extremities_tuple[0][1]
if current_depth > max_depth:
logger.debug(
"Not backfilling as we don't need to. %d < %d",
max_depth, current_depth,
)
return
# Now we need to decide which hosts to hit first.
# First we try hosts that are already in the room
# TODO: HEURISTIC ALERT.
curr_state = yield self.state_handler.get_current_state(room_id)
def get_domains_from_state(state):
joined_users = [
(state_key, int(event.depth))
for (e_type, state_key), event in state.items()
if e_type == EventTypes.Member
and event.membership == Membership.JOIN
]
joined_domains = {}
for u, d in joined_users:
try:
dom = UserID.from_string(u).domain
old_d = joined_domains.get(dom)
if old_d:
joined_domains[dom] = min(d, old_d)
else:
joined_domains[dom] = d
except:
pass
return sorted(joined_domains.items(), key=lambda d: d[1])
curr_domains = get_domains_from_state(curr_state)
likely_domains = [
domain for domain, depth in curr_domains
if domain != self.server_name
]
@defer.inlineCallbacks
def try_backfill(domains):
# TODO: Should we try multiple of these at a time?
for dom in domains:
try:
events = yield self.backfill(
dom, room_id,
limit=100,
extremities=[e for e in extremities.keys()]
)
except SynapseError as e:
logger.info(
"Failed to backfill from %s because %s",
dom, e,
)
continue
except CodeMessageException as e:
if 400 <= e.code < 500:
raise
logger.info(
"Failed to backfill from %s because %s",
dom, e,
)
continue
except NotRetryingDestination as e:
logger.info(e.message)
continue
except Exception as e:
logger.exception(
"Failed to backfill from %s because %s",
dom, e,
)
continue
if events:
defer.returnValue(True)
defer.returnValue(False)
success = yield try_backfill(likely_domains)
if success:
defer.returnValue(True)
# Huh, well *those* domains didn't work out. Lets try some domains
# from the time.
tried_domains = set(likely_domains)
tried_domains.add(self.server_name)
event_ids = list(extremities.keys())
states = yield defer.gatherResults([
self.state_handler.resolve_state_groups([e])
for e in event_ids
])
states = dict(zip(event_ids, [s[1] for s in states]))
for e_id, _ in sorted_extremities_tuple:
likely_domains = get_domains_from_state(states[e_id])
success = yield try_backfill([
dom for dom in likely_domains
if dom not in tried_domains
])
if success:
defer.returnValue(True)
tried_domains.update(likely_domains)
defer.returnValue(False)
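Condensed, the decision maybe_backfill makes before choosing which servers to try is a depth comparison against the backwards extremities. A stripped-down sketch (illustration only; the real handler then works out which domains to hit and calls backfill):

def should_backfill(extremities, current_depth):
    """extremities: dict of event_id -> depth for our backwards extremities."""
    if not extremities:
        # Nothing known to backfill from.
        return False
    max_extremity_depth = max(extremities.values())
    # If pagination is still at a greater depth than every known extremity,
    # we have not hit the edge of what we have stored, so nothing to fetch yet.
    return current_depth <= max_extremity_depth

assert should_backfill({"$a:hs": 10, "$b:hs": 7}, current_depth=12) is False
assert should_backfill({"$a:hs": 10, "$b:hs": 7}, current_depth=9) is True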
@defer.inlineCallbacks
def send_invite(self, target_host, event):
""" Sends the invite to the remote server for signing.
@@ -380,30 +600,14 @@ class FederationHandler(BaseHandler):
# FIXME
pass
for e in auth_chain:
e.internal_metadata.outlier = True
yield self._handle_auth_events(
origin, [e for e in auth_chain if e.event_id != event.event_id]
)
@defer.inlineCallbacks
def handle_state(e):
if e.event_id == event.event_id:
continue
try:
auth_ids = [e_id for e_id, _ in e.auth_events]
auth = {
(e.type, e.state_key): e for e in auth_chain
if e.event_id in auth_ids
}
yield self._handle_new_event(
origin, e, auth_events=auth
)
except:
logger.exception(
"Failed to handle auth event %s",
e.event_id,
)
for e in state:
if e.event_id == event.event_id:
continue
return
e.internal_metadata.outlier = True
try:
@@ -421,13 +625,15 @@ class FederationHandler(BaseHandler):
e.event_id,
)
yield defer.DeferredList([handle_state(e) for e in state])
auth_ids = [e_id for e_id, _ in event.auth_events]
auth_events = {
(e.type, e.state_key): e for e in auth_chain
if e.event_id in auth_ids
}
yield self._handle_new_event(
_, event_stream_id, max_stream_id = yield self._handle_new_event(
origin,
new_event,
state=state,
@@ -435,9 +641,11 @@ class FederationHandler(BaseHandler):
auth_events=auth_events,
)
d = self.notifier.on_new_room_event(
new_event, extra_users=[joinee]
)
with PreserveLoggingContext():
d = self.notifier.on_new_room_event(
new_event, event_stream_id, max_stream_id,
extra_users=[joinee]
)
def log_failure(f):
logger.warn(
@@ -502,7 +710,9 @@ class FederationHandler(BaseHandler):
event.internal_metadata.outlier = False
context = yield self._handle_new_event(origin, event)
context, event_stream_id, max_stream_id = yield self._handle_new_event(
origin, event
)
logger.debug(
"on_send_join_request: After _handle_new_event: %s, sigs: %s",
@@ -516,9 +726,10 @@ class FederationHandler(BaseHandler):
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
d = self.notifier.on_new_room_event(
event, extra_users=extra_users
)
with PreserveLoggingContext():
d = self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
def log_failure(f):
logger.warn(
@@ -591,16 +802,18 @@ class FederationHandler(BaseHandler):
context = yield self.state_handler.compute_event_context(event)
yield self.store.persist_event(
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=False,
)
target_user = UserID.from_string(event.state_key)
d = self.notifier.on_new_room_event(
event, extra_users=[target_user],
)
with PreserveLoggingContext():
d = self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[target_user],
)
def log_failure(f):
logger.warn(
@@ -732,8 +945,10 @@ class FederationHandler(BaseHandler):
event.event_id, event.signatures,
)
outlier = event.internal_metadata.is_outlier()
context = yield self.state_handler.compute_event_context(
event, old_state=state
event, old_state=state, outlier=outlier,
)
if not auth_events:
@@ -744,14 +959,17 @@ class FederationHandler(BaseHandler):
event.event_id, auth_events,
)
is_new_state = not event.internal_metadata.is_outlier()
is_new_state = not outlier
# This is a hack to fix some old rooms where the initial join event
# didn't reference the create event in its auth events.
if event.type == EventTypes.Member and not event.auth_events:
if len(event.prev_events) == 1:
c = yield self.store.get_event(event.prev_events[0][0])
if c.type == EventTypes.Create:
if len(event.prev_events) == 1 and event.depth < 5:
c = yield self.store.get_event(
event.prev_events[0][0],
allow_none=True,
)
if c and c.type == EventTypes.Create:
auth_events[(c.type, c.state_key)] = c
try:
@@ -777,7 +995,7 @@ class FederationHandler(BaseHandler):
)
raise
yield self.store.persist_event(
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
backfilled=backfilled,
@@ -785,7 +1003,7 @@ class FederationHandler(BaseHandler):
current_state=current_state,
)
defer.returnValue(context)
defer.returnValue((context, event_stream_id, max_stream_id))
@defer.inlineCallbacks
def on_query_auth(self, origin, event_id, remote_auth_chain, rejects,
@@ -925,7 +1143,7 @@ class FederationHandler(BaseHandler):
if d in have_events and not have_events[d]
],
consumeErrors=True
)
).addErrback(unwrapFirstError)
if different_events:
local_view = dict(auth_events)
@@ -1170,3 +1388,52 @@ class FederationHandler(BaseHandler):
},
"missing": [e.event_id for e in missing_locals],
})
@defer.inlineCallbacks
def _handle_auth_events(self, origin, auth_events):
auth_ids_to_deferred = {}
def process_auth_ev(ev):
auth_ids = [e_id for e_id, _ in ev.auth_events]
prev_ds = [
auth_ids_to_deferred[i]
for i in auth_ids
if i in auth_ids_to_deferred
]
d = defer.Deferred()
auth_ids_to_deferred[ev.event_id] = d
@defer.inlineCallbacks
def f(*_):
ev.internal_metadata.outlier = True
try:
auth = {
(e.type, e.state_key): e for e in auth_events
if e.event_id in auth_ids
}
yield self._handle_new_event(
origin, ev, auth_events=auth
)
except:
logger.exception(
"Failed to handle auth event %s",
ev.event_id,
)
d.callback(None)
if prev_ds:
dx = defer.DeferredList(prev_ds)
dx.addBoth(f)
else:
f()
for e in auth_events:
process_auth_ev(e)
yield defer.DeferredList(auth_ids_to_deferred.values())


@@ -20,8 +20,9 @@ from synapse.api.errors import RoomError, SynapseError
from synapse.streams.config import PaginationConfig
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import UserID
from synapse.types import UserID, RoomStreamToken
from ._base import BaseHandler
@@ -89,9 +90,19 @@ class MessageHandler(BaseHandler):
if not pagin_config.from_token:
pagin_config.from_token = (
yield self.hs.get_event_sources().get_current_token()
yield self.hs.get_event_sources().get_current_token(
direction='b'
)
)
room_token = RoomStreamToken.parse(pagin_config.from_token.room_key)
if room_token.topological is None:
raise SynapseError(400, "Invalid token")
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, room_token.topological
)
user = UserID.from_string(user_id)
events, next_key = yield data_source.get_pagination_rows(
@@ -303,7 +314,7 @@ class MessageHandler(BaseHandler):
event.room_id
),
]
)
).addErrback(unwrapFirstError)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
@@ -328,7 +339,7 @@ class MessageHandler(BaseHandler):
yield defer.gatherResults(
[handle_room(e) for e in room_list],
consumeErrors=True
)
).addErrback(unwrapFirstError)
ret = {
"rooms": rooms_ret,


@@ -18,8 +18,8 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError, AuthError
from synapse.api.constants import PresenceState
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logutils import log_function
from synapse.types import UserID
import synapse.metrics
@@ -146,6 +146,10 @@ class PresenceHandler(BaseHandler):
self._user_cachemap = {}
self._user_cachemap_latest_serial = 0
# map room_ids to the latest presence serial for a member of that
# room
self._room_serials = {}
metrics.register_callback(
"userCachemap:size",
lambda: len(self._user_cachemap),
@@ -278,15 +282,14 @@ class PresenceHandler(BaseHandler):
now_online = state["presence"] != PresenceState.OFFLINE
was_polling = target_user in self._user_cachemap
with PreserveLoggingContext():
if now_online and not was_polling:
self.start_polling_presence(target_user, state=state)
elif not now_online and was_polling:
self.stop_polling_presence(target_user)
if now_online and not was_polling:
self.start_polling_presence(target_user, state=state)
elif not now_online and was_polling:
self.stop_polling_presence(target_user)
# TODO(paul): perform a presence push as part of start/stop poll so
# we don't have to do this all the time
self.changed_presencelike_data(target_user, state)
# TODO(paul): perform a presence push as part of start/stop poll so
# we don't have to do this all the time
self.changed_presencelike_data(target_user, state)
def bump_presence_active_time(self, user, now=None):
if now is None:
@@ -298,13 +301,34 @@ class PresenceHandler(BaseHandler):
self.changed_presencelike_data(user, {"last_active": now})
def get_joined_rooms_for_user(self, user):
"""Get the list of rooms a user is joined to.
Args:
user(UserID): The user.
Returns:
A Deferred of a list of room id strings.
"""
rm_handler = self.homeserver.get_handlers().room_member_handler
return rm_handler.get_joined_rooms_for_user(user)
def get_joined_users_for_room_id(self, room_id):
rm_handler = self.homeserver.get_handlers().room_member_handler
return rm_handler.get_room_members(room_id)
@defer.inlineCallbacks
def changed_presencelike_data(self, user, state):
statuscache = self._get_or_make_usercache(user)
"""Updates the presence state of a local user.
Args:
user(UserID): The user being updated.
state(dict): The new presence state for the user.
Returns:
A Deferred
"""
self._user_cachemap_latest_serial += 1
statuscache.update(state, serial=self._user_cachemap_latest_serial)
return self.push_presence(user, statuscache=statuscache)
statuscache = yield self.update_presence_cache(user, state)
yield self.push_presence(user, statuscache=statuscache)
@log_function
def started_user_eventstream(self, user):
@@ -318,14 +342,21 @@ class PresenceHandler(BaseHandler):
@defer.inlineCallbacks
def user_joined_room(self, user, room_id):
if self.hs.is_mine(user):
statuscache = self._get_or_make_usercache(user)
"""Called via the distributor whenever a user joins a room.
Notifies the new member of the presence of the current members.
Notifies the current members of the room of the new member's presence.
Args:
user(UserID): The user who joined the room.
room_id(str): The room id the user joined.
"""
if self.hs.is_mine(user):
# No actual update but we need to bump the serial anyway for the
# event source
self._user_cachemap_latest_serial += 1
statuscache.update({}, serial=self._user_cachemap_latest_serial)
statuscache = yield self.update_presence_cache(
user, room_ids=[room_id]
)
self.push_update_to_local_and_remote(
observed_user=user,
room_ids=[room_id],
@@ -333,18 +364,22 @@ class PresenceHandler(BaseHandler):
)
# We also want to tell them about current presence of people.
rm_handler = self.homeserver.get_handlers().room_member_handler
curr_users = yield rm_handler.get_room_members(room_id)
curr_users = yield self.get_joined_users_for_room_id(room_id)
for local_user in [c for c in curr_users if self.hs.is_mine(c)]:
statuscache = yield self.update_presence_cache(
local_user, room_ids=[room_id], add_to_cache=False
)
self.push_update_to_local_and_remote(
observed_user=local_user,
users_to_push=[user],
statuscache=self._get_or_offline_usercache(local_user),
statuscache=statuscache,
)
@defer.inlineCallbacks
def send_invite(self, observer_user, observed_user):
"""Request the presence of a local or remote user for a local user"""
if not self.hs.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
@@ -379,6 +414,15 @@ class PresenceHandler(BaseHandler):
@defer.inlineCallbacks
def invite_presence(self, observed_user, observer_user):
"""Handles a m.presence_invite EDU. A remote or local user has
requested presence updates for a local user. If the invite is accepted
then allow the local or remote user to see the presence of the local
user.
Args:
observed_user(UserID): The local user whose presence is requested.
observer_user(UserID): The remote or local user requesting presence.
"""
accept = yield self._should_accept_invite(observed_user, observer_user)
if accept:
@@ -405,16 +449,34 @@ class PresenceHandler(BaseHandler):
@defer.inlineCallbacks
def accept_presence(self, observed_user, observer_user):
"""Handles a m.presence_accept EDU. Mark a presence invite from a
local or remote user as accepted in a local user's presence list.
Starts polling for presence updates from the local or remote user.
Args:
observed_user(UserID): The user to update in the presence list.
observer_user(UserID): The owner of the presence list to update.
"""
yield self.store.set_presence_list_accepted(
observer_user.localpart, observed_user.to_string()
)
with PreserveLoggingContext():
self.start_polling_presence(
observer_user, target_user=observed_user
)
self.start_polling_presence(
observer_user, target_user=observed_user
)
@defer.inlineCallbacks
def deny_presence(self, observed_user, observer_user):
"""Handle a m.presence_deny EDU. Removes a local or remote user from a
local user's presence list.
Args:
observed_user(UserID): The local or remote user to remove from the
list.
observer_user(UserID): The local owner of the presence list.
Returns:
A Deferred.
"""
yield self.store.del_presence_list(
observer_user.localpart, observed_user.to_string()
)
@@ -423,6 +485,16 @@ class PresenceHandler(BaseHandler):
@defer.inlineCallbacks
def drop(self, observed_user, observer_user):
"""Remove a local or remote user from a local user's presence list and
unsubscribe the local user from updates from that user.
Args:
observed_user(UserID): The local or remote user to remove from the
list.
observer_user(UserID): The local owner of the presence list.
Returns:
A Deferred.
"""
if not self.hs.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
@@ -430,34 +502,66 @@ class PresenceHandler(BaseHandler):
observer_user.localpart, observed_user.to_string()
)
with PreserveLoggingContext():
self.stop_polling_presence(
observer_user, target_user=observed_user
)
self.stop_polling_presence(
observer_user, target_user=observed_user
)
@defer.inlineCallbacks
def get_presence_list(self, observer_user, accepted=None):
"""Get the presence list for a local user. The retured list includes
the current presence state for each user listed.
Args:
observer_user(UserID): The local user whose presence list to fetch.
accepted(bool or None): If not none then only include users who
have or have not accepted the presence invite request.
Returns:
A Deferred list of presence state events.
"""
if not self.hs.is_mine(observer_user):
raise SynapseError(400, "User is not hosted on this Home Server")
presence = yield self.store.get_presence_list(
presence_list = yield self.store.get_presence_list(
observer_user.localpart, accepted=accepted
)
for p in presence:
observed_user = UserID.from_string(p.pop("observed_user_id"))
p["observed_user"] = observed_user
p.update(self._get_or_offline_usercache(observed_user).get_state())
if "last_active" in p:
p["last_active_ago"] = int(
self.clock.time_msec() - p.pop("last_active")
results = []
for row in presence_list:
observed_user = UserID.from_string(row["observed_user_id"])
result = {
"observed_user": observed_user, "accepted": row["accepted"]
}
result.update(
self._get_or_offline_usercache(observed_user).get_state()
)
if "last_active" in result:
result["last_active_ago"] = int(
self.clock.time_msec() - result.pop("last_active")
)
results.append(result)
defer.returnValue(presence)
defer.returnValue(results)
@defer.inlineCallbacks
@log_function
def start_polling_presence(self, user, target_user=None, state=None):
"""Subscribe a local user to presence updates from a local or remote
user. If no target_user is supplied then subscribe to all users stored
in the presence list for the local user.
Additionally this pushes the current presence state of this user to all
target_users. That state can be provided directly or will be read from
the stored state for the local user.
Also this attempts to notify the local user of the current state of
any local target users.
Args:
user(UserID): The local user that wishes for presence updates.
target_user(UserID): The local or remote user whose updates are
wanted.
state(dict): Optional presence state for the local user.
"""
logger.debug("Start polling for presence from %s", user)
if target_user:
@@ -473,8 +577,7 @@ class PresenceHandler(BaseHandler):
# Also include people in all my rooms
rm_handler = self.homeserver.get_handlers().room_member_handler
room_ids = yield rm_handler.get_joined_rooms_for_user(user)
room_ids = yield self.get_joined_rooms_for_user(user)
if state is None:
state = yield self.store.get_presence_state(user.localpart)
@@ -498,9 +601,7 @@ class PresenceHandler(BaseHandler):
# We want to tell the person that just came online
# presence state of people they are interested in?
self.push_update_to_clients(
observed_user=target_user,
users_to_push=[user],
statuscache=self._get_or_offline_usercache(target_user),
)
deferreds = []
@@ -517,6 +618,12 @@ class PresenceHandler(BaseHandler):
yield defer.DeferredList(deferreds, consumeErrors=True)
def _start_polling_local(self, user, target_user):
"""Subscribe a local user to presence updates for a local user
Args:
user(UserID): The local user that wishes for updates.
target_user(UserID): The local user whose updates are wanted.
"""
target_localpart = target_user.localpart
if target_localpart not in self._local_pushmap:
@@ -525,6 +632,17 @@ class PresenceHandler(BaseHandler):
self._local_pushmap[target_localpart].add(user)
def _start_polling_remote(self, user, domain, remoteusers):
"""Subscribe a local user to presence updates for remote users on a
given remote domain.
Args:
user(UserID): The local user that wishes for updates.
domain(str): The remote server the local user wants updates from.
remoteusers([UserID]): The remote users that the local user wants to be
told about.
Returns:
A Deferred.
"""
to_poll = set()
for u in remoteusers:
@@ -545,6 +663,17 @@ class PresenceHandler(BaseHandler):
@log_function
def stop_polling_presence(self, user, target_user=None):
"""Unsubscribe a local user from presence updates from a local or
remote user. If no target user is supplied then unsubscribe the user
from all presence updates that the user had subscribed to.
Args:
user(UserID): The local user that no longer wishes for updates.
target_user(UserID or None): The user whose updates are no longer
wanted.
Returns:
A Deferred.
"""
logger.debug("Stop polling for presence from %s", user)
if not target_user or self.hs.is_mine(target_user):
@@ -573,6 +702,13 @@ class PresenceHandler(BaseHandler):
return defer.DeferredList(deferreds, consumeErrors=True)
def _stop_polling_local(self, user, target_user):
"""Unsubscribe a local user from presence updates from a local user on
this server.
Args:
user(UserID): The local user that no longer wishes for updates.
target_user(UserID): The user whose updates are no longer wanted.
"""
for localpart in self._local_pushmap.keys():
if target_user and localpart != target_user.localpart:
continue
@@ -585,6 +721,17 @@ class PresenceHandler(BaseHandler):
@log_function
def _stop_polling_remote(self, user, domain, remoteusers):
"""Unsubscribe a local user from presence updates from remote users on
a given domain.
Args:
user(UserID): The local user that no longer wishes for updates.
domain(str): The remote server to unsubscribe from.
remoteusers([UserID]): The users on that remote server that the
local user no longer wishes to be updated about.
Returns:
A Deferred.
"""
to_unpoll = set()
for u in remoteusers:
@@ -606,6 +753,19 @@ class PresenceHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def push_presence(self, user, statuscache):
"""
Notify local and remote users of a change in presence of a local user.
Pushes the update to local clients and remote domains that are directly
subscribed to the presence of the local user.
Also pushes that update to any local user or remote domain that shares
a room with the local user.
Args:
user(UserID): The local user whose presence was updated.
statuscache(UserPresenceCache): Cache of the user's presence state
Returns:
A Deferred.
"""
assert(self.hs.is_mine(user))
logger.debug("Pushing presence update from %s", user)
@@ -617,8 +777,7 @@ class PresenceHandler(BaseHandler):
# and also user is informed of server-forced pushes
localusers.add(user)
rm_handler = self.homeserver.get_handlers().room_member_handler
room_ids = yield rm_handler.get_joined_rooms_for_user(user)
room_ids = yield self.get_joined_rooms_for_user(user)
if not localusers and not room_ids:
defer.returnValue(None)
@@ -632,45 +791,24 @@ class PresenceHandler(BaseHandler):
)
yield self.distributor.fire("user_presence_changed", user, statuscache)
@defer.inlineCallbacks
def _push_presence_remote(self, user, destination, state=None):
if state is None:
state = yield self.store.get_presence_state(user.localpart)
del state["mtime"]
state["presence"] = state.pop("state")
if user in self._user_cachemap:
state["last_active"] = (
self._user_cachemap[user].get_state()["last_active"]
)
yield self.distributor.fire(
"collect_presencelike_data", user, state
)
if "last_active" in state:
state = dict(state)
state["last_active_ago"] = int(
self.clock.time_msec() - state.pop("last_active")
)
user_state = {
"user_id": user.to_string(),
}
user_state.update(**state)
yield self.federation.send_edu(
destination=destination,
edu_type="m.presence",
content={
"push": [
user_state,
],
}
)
@defer.inlineCallbacks
def incoming_presence(self, origin, content):
"""Handle an incoming m.presence EDU.
For each presence update in the "push" list update our local cache and
notify the appropriate local clients. Only clients that share a room
or are directly subscribed to the presence for a user should be
notified of the update.
For each subscription request in the "poll" list start pushing presence
updates to the remote server.
For unsubscribe request in the "unpoll" list stop pushing presence
updates to the remote server.
Args:
origin(str): The source of this m.presence EDU.
content(dict): The content of this m.presence EDU.
Returns:
A Deferred.
"""
deferreds = []
for push in content.get("push", []):
@@ -684,8 +822,7 @@ class PresenceHandler(BaseHandler):
" | %d interested local observers %r", len(observers), observers
)
rm_handler = self.homeserver.get_handlers().room_member_handler
room_ids = yield rm_handler.get_joined_rooms_for_user(user)
room_ids = yield self.get_joined_rooms_for_user(user)
if room_ids:
logger.debug(" | %d interested room IDs %r", len(room_ids), room_ids)
@@ -704,20 +841,15 @@ class PresenceHandler(BaseHandler):
self.clock.time_msec() - state.pop("last_active_ago")
)
statuscache = self._get_or_make_usercache(user)
self._user_cachemap_latest_serial += 1
statuscache.update(state, serial=self._user_cachemap_latest_serial)
yield self.update_presence_cache(user, state, room_ids=room_ids)
if not observers and not room_ids:
logger.debug(" | no interested observers or room IDs")
continue
self.push_update_to_clients(
observed_user=user,
users_to_push=observers,
room_ids=room_ids,
statuscache=statuscache,
users_to_push=observers, room_ids=room_ids
)
user_id = user.to_string()
@@ -766,13 +898,58 @@ class PresenceHandler(BaseHandler):
if not self._remote_sendmap[user]:
del self._remote_sendmap[user]
with PreserveLoggingContext():
yield defer.DeferredList(deferreds, consumeErrors=True)
yield defer.DeferredList(deferreds, consumeErrors=True)
@defer.inlineCallbacks
def update_presence_cache(self, user, state={}, room_ids=None,
add_to_cache=True):
"""Update the presence cache for a user with a new state and bump the
serial to the latest value.
Args:
user(UserID): The user being updated
state(dict): The presence state being updated
room_ids(None or list of str): A list of room_ids to update. If
room_ids is None then fetch the list of room_ids the user is
joined to.
add_to_cache: Whether to add an entry to the presence cache if the
user isn't already in the cache.
Returns:
A Deferred UserPresenceCache for the user being updated.
"""
if room_ids is None:
room_ids = yield self.get_joined_rooms_for_user(user)
for room_id in room_ids:
self._room_serials[room_id] = self._user_cachemap_latest_serial
if add_to_cache:
statuscache = self._get_or_make_usercache(user)
else:
statuscache = self._get_or_offline_usercache(user)
statuscache.update(state, serial=self._user_cachemap_latest_serial)
defer.returnValue(statuscache)
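# Illustrative sketch (not part of the diff): the serial bookkeeping above.
# Each update stamps the user's cache entry and every room they are in with
# the handler's latest serial; the event sources later compare those serials
# against a client's from_key to decide which presence states are new. The
# room id and numbers below are made up.
latest_serial = 7
room_serials = {"!room_a:example.com": latest_serial}
from_key = 5
rooms_with_updates = [r for r, s in room_serials.items() if s > from_key]
assert rooms_with_updates == ["!room_a:example.com"]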
@defer.inlineCallbacks
def push_update_to_local_and_remote(self, observed_user, statuscache,
users_to_push=[], room_ids=[],
remote_domains=[]):
"""Notify local clients and remote servers of a change in the presence
of a user.
Args:
observed_user(UserID): The user to push the presence state for.
statuscache(UserPresenceCache): The cache for the presence state to
push.
users_to_push([UserID]): A list of local and remote users to
notify.
room_ids([str]): Notify the local and remote occupants of these
rooms.
remote_domains([str]): A list of remote servers to notify in
addition to those implied by the users_to_push and the
room_ids.
Returns:
A Deferred.
"""
localusers, remoteusers = partitionbool(
users_to_push,
@@ -782,10 +959,7 @@ class PresenceHandler(BaseHandler):
localusers = set(localusers)
self.push_update_to_clients(
observed_user=observed_user,
users_to_push=localusers,
room_ids=room_ids,
statuscache=statuscache,
users_to_push=localusers, room_ids=room_ids
)
remote_domains = set(remote_domains)
@@ -810,11 +984,65 @@ class PresenceHandler(BaseHandler):
defer.returnValue((localusers, remote_domains))
def push_update_to_clients(self, observed_user, users_to_push=[],
room_ids=[], statuscache=None):
self.notifier.on_new_user_event(
users_to_push,
room_ids,
def push_update_to_clients(self, users_to_push=[], room_ids=[]):
"""Notify clients of a new presence event.
Args:
users_to_push([UserID]): List of users to notify.
room_ids([str]): List of room_ids to notify.
"""
with PreserveLoggingContext():
self.notifier.on_new_user_event(
"presence_key",
self._user_cachemap_latest_serial,
users_to_push,
room_ids,
)
@defer.inlineCallbacks
def _push_presence_remote(self, user, destination, state=None):
"""Push a user's presence to a remote server. If a presence state event
that event is sent. Otherwise a new state event is constructed from the
stored presence state.
The last_active is replaced with last_active_ago in case the wallclock
time on the remote server is different to the time on this server.
Sends an EDU to the remote server with the current presence state.
Args:
user(UserID): The user to push the presence state for.
destination(str): The remote server to send state to.
state(dict): The state to push, or None to use the current stored
state.
Returns:
A Deferred.
"""
if state is None:
state = yield self.store.get_presence_state(user.localpart)
del state["mtime"]
state["presence"] = state.pop("state")
if user in self._user_cachemap:
state["last_active"] = (
self._user_cachemap[user].get_state()["last_active"]
)
yield self.distributor.fire(
"collect_presencelike_data", user, state
)
if "last_active" in state:
state = dict(state)
state["last_active_ago"] = int(
self.clock.time_msec() - state.pop("last_active")
)
user_state = {"user_id": user.to_string(), }
user_state.update(state)
yield self.federation.send_edu(
destination=destination,
edu_type="m.presence",
content={"push": [user_state, ], }
)
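# Illustrative sketch (not part of the diff): rough shape of the m.presence
# EDU sent above, assuming a hypothetical user "@alice:example.com" who was
# last active five seconds ago. Values are made up for illustration only.
example_presence_edu = {
    "edu_type": "m.presence",
    "content": {
        "push": [
            {
                "user_id": "@alice:example.com",  # hypothetical user
                "presence": "online",
                "last_active_ago": 5000,          # milliseconds
            },
        ],
    },
}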
@@ -823,39 +1051,11 @@ class PresenceEventSource(object):
self.hs = hs
self.clock = hs.get_clock()
@defer.inlineCallbacks
def is_visible(self, observer_user, observed_user):
if observer_user == observed_user:
defer.returnValue(True)
presence = self.hs.get_handlers().presence_handler
if (yield presence.store.user_rooms_intersect(
[u.to_string() for u in observer_user, observed_user])):
defer.returnValue(True)
if self.hs.is_mine(observed_user):
pushmap = presence._local_pushmap
defer.returnValue(
observed_user.localpart in pushmap and
observer_user in pushmap[observed_user.localpart]
)
else:
recvmap = presence._remote_recvmap
defer.returnValue(
observed_user in recvmap and
observer_user in recvmap[observed_user]
)
@defer.inlineCallbacks
@log_function
def get_new_events_for_user(self, user, from_key, limit):
from_key = int(from_key)
observer_user = user
presence = self.hs.get_handlers().presence_handler
cachemap = presence._user_cachemap
@@ -864,17 +1064,27 @@ class PresenceEventSource(object):
clock = self.clock
latest_serial = 0
user_ids_to_check = {user}
presence_list = yield presence.store.get_presence_list(
user.localpart, accepted=True
)
if presence_list is not None:
user_ids_to_check |= set(
UserID.from_string(p["observed_user_id"]) for p in presence_list
)
room_ids = yield presence.get_joined_rooms_for_user(user)
for room_id in set(room_ids) & set(presence._room_serials):
if presence._room_serials[room_id] > from_key:
joined = yield presence.get_joined_users_for_room_id(room_id)
user_ids_to_check |= set(joined)
updates = []
# TODO(paul): use a DeferredList ? How to limit concurrency.
for observed_user in cachemap.keys():
for observed_user in user_ids_to_check & set(cachemap):
cached = cachemap[observed_user]
if cached.serial <= from_key or cached.serial > max_serial:
continue
if not (yield self.is_visible(observer_user, observed_user)):
continue
latest_serial = max(cached.serial, latest_serial)
updates.append(cached.make_event(user=observed_user, clock=clock))
@@ -911,8 +1121,6 @@ class PresenceEventSource(object):
def get_pagination_rows(self, user, pagination_config, key):
# TODO (erikj): Does this make sense? Ordering?
observer_user = user
from_key = int(pagination_config.from_key)
if pagination_config.to_key:
@@ -923,14 +1131,26 @@ class PresenceEventSource(object):
presence = self.hs.get_handlers().presence_handler
cachemap = presence._user_cachemap
user_ids_to_check = {user}
presence_list = yield presence.store.get_presence_list(
user.localpart, accepted=True
)
if presence_list is not None:
user_ids_to_check |= set(
UserID.from_string(p["observed_user_id"]) for p in presence_list
)
room_ids = yield presence.get_joined_rooms_for_user(user)
for room_id in set(room_ids) & set(presence._room_serials):
if presence._room_serials[room_id] >= from_key:
joined = yield presence.get_joined_users_for_room_id(room_id)
user_ids_to_check |= set(joined)
updates = []
# TODO(paul): use a DeferredList ? How to limit concurrency.
for observed_user in cachemap.keys():
for observed_user in user_ids_to_check & set(cachemap):
if not (to_key < cachemap[observed_user].serial <= from_key):
continue
if (yield self.is_visible(observer_user, observed_user)):
updates.append((observed_user, cachemap[observed_user]))
updates.append((observed_user, cachemap[observed_user]))
# TODO(paul): limit

View File

@@ -17,8 +17,8 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
from synapse.api.constants import EventTypes, Membership
from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import UserID
from synapse.util import unwrapFirstError
from ._base import BaseHandler
@@ -88,6 +88,9 @@ class ProfileHandler(BaseHandler):
if target_user != auth_user:
raise AuthError(400, "Cannot set another user's displayname")
if new_displayname == '':
new_displayname = None
yield self.store.set_profile_displayname(
target_user.localpart, new_displayname
)
@@ -154,14 +157,13 @@ class ProfileHandler(BaseHandler):
if not self.hs.is_mine(user):
defer.returnValue(None)
with PreserveLoggingContext():
(displayname, avatar_url) = yield defer.gatherResults(
[
self.store.get_profile_displayname(user.localpart),
self.store.get_profile_avatar_url(user.localpart),
],
consumeErrors=True
)
(displayname, avatar_url) = yield defer.gatherResults(
[
self.store.get_profile_displayname(user.localpart),
self.store.get_profile_avatar_url(user.localpart),
],
consumeErrors=True
).addErrback(unwrapFirstError)
state["displayname"] = displayname
state["avatar_url"] = avatar_url

View File

@@ -21,11 +21,12 @@ from ._base import BaseHandler
from synapse.types import UserID, RoomAlias, RoomID
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import StoreError, SynapseError
from synapse.util import stringutils
from synapse.util import stringutils, unwrapFirstError
from synapse.util.async import run_on_reactor
from synapse.events.utils import serialize_event
import logging
import string
logger = logging.getLogger(__name__)
@@ -50,6 +51,10 @@ class RoomCreationHandler(BaseHandler):
self.ratelimit(user_id)
if "room_alias_name" in config:
for wchar in string.whitespace:
if wchar in config["room_alias_name"]:
raise SynapseError(400, "Invalid characters in room alias")
room_alias = RoomAlias.create(
config["room_alias_name"],
self.hs.hostname,
@@ -529,11 +534,17 @@ class RoomListHandler(BaseHandler):
@defer.inlineCallbacks
def get_public_room_list(self):
chunk = yield self.store.get_rooms(is_public=True)
for room in chunk:
joined_users = yield self.store.get_users_in_room(
room_id=room["room_id"],
)
room["num_joined_members"] = len(joined_users)
results = yield defer.gatherResults(
[
self.store.get_users_in_room(room["room_id"])
for room in chunk
],
consumeErrors=True,
).addErrback(unwrapFirstError)
for i, room in enumerate(chunk):
room["num_joined_members"] = len(results[i])
# FIXME (erikj): START is no longer a valid value
defer.returnValue({"start": "START", "end": "END", "chunk": chunk})
@@ -569,8 +580,8 @@ class RoomEventSource(object):
defer.returnValue((events, end_key))
def get_current_key(self):
return self.store.get_room_events_max_id()
def get_current_key(self, direction='f'):
return self.store.get_room_events_max_id(direction)
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):

View File

@@ -92,7 +92,7 @@ class SyncHandler(BaseHandler):
result = yield self.current_sync_for_user(sync_config, since_token)
defer.returnValue(result)
else:
def current_sync_callback():
def current_sync_callback(before_token, after_token):
return self.current_sync_for_user(sync_config, since_token)
rm_handler = self.hs.get_handlers().room_member_handler

View File

@@ -18,6 +18,7 @@ from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.errors import SynapseError, AuthError
from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import UserID
import logging
@@ -216,7 +217,10 @@ class TypingNotificationHandler(BaseHandler):
self._latest_room_serial += 1
self._room_serials[room_id] = self._latest_room_serial
self.notifier.on_new_user_event(rooms=[room_id])
with PreserveLoggingContext():
self.notifier.on_new_user_event(
"typing_key", self._latest_room_serial, rooms=[room_id]
)
class TypingNotificationEventSource(object):

View File

@@ -14,12 +14,14 @@
# limitations under the License.
from synapse.api.errors import CodeMessageException
from synapse.util.logcontext import preserve_context_over_fn
from syutil.jsonutil import encode_canonical_json
import synapse.metrics
from twisted.internet import defer, reactor
from twisted.web.client import (
Agent, readBody, FileBodyProducer, PartialDownloadError
Agent, readBody, FileBodyProducer, PartialDownloadError,
HTTPConnectionPool,
)
from twisted.web.http_headers import Headers
@@ -54,14 +56,19 @@ class SimpleHttpClient(object):
# The default context factory in Twisted 14.0.0 (which we require) is
# BrowserLikePolicyForHTTPS which will do regular cert validation
# 'like a browser'
self.agent = Agent(reactor)
pool = HTTPConnectionPool(reactor)
pool.maxPersistentPerHost = 10
self.agent = Agent(reactor, pool=pool)
self.version_string = hs.version_string
def request(self, method, *args, **kwargs):
# A small wrapper around self.agent.request() so we can easily attach
# counters to it
outgoing_requests_counter.inc(method)
d = self.agent.request(method, *args, **kwargs)
d = preserve_context_over_fn(
self.agent.request,
method, *args, **kwargs
)
def _cb(response):
incoming_responses_counter.inc(method, response.code)

View File

@@ -16,13 +16,13 @@
from twisted.internet import defer, reactor, protocol
from twisted.internet.error import DNSLookupError
from twisted.web.client import readBody, _AgentBase, _URI
from twisted.web.client import readBody, _AgentBase, _URI, HTTPConnectionPool
from twisted.web.http_headers import Headers
from twisted.web._newclient import ResponseDone
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.async import sleep
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.logcontext import preserve_context_over_fn
import synapse.metrics
from syutil.jsonutil import encode_canonical_json
@@ -103,14 +103,17 @@ class MatrixFederationHttpClient(object):
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
self.agent = MatrixFederationHttpAgent(reactor)
pool = HTTPConnectionPool(reactor)
pool.maxPersistentPerHost = 10
self.agent = MatrixFederationHttpAgent(reactor, pool=pool)
self.clock = hs.get_clock()
self.version_string = hs.version_string
@defer.inlineCallbacks
def _create_request(self, destination, method, path_bytes,
body_callback, headers_dict={}, param_bytes=b"",
query_bytes=b"", retry_on_dns_fail=True):
query_bytes=b"", retry_on_dns_fail=True,
timeout=None):
""" Creates and sends a request to the given url
"""
headers_dict[b"User-Agent"] = [self.version_string]
@@ -144,22 +147,22 @@ class MatrixFederationHttpClient(object):
producer = body_callback(method, url_bytes, headers_dict)
try:
with PreserveLoggingContext():
request_deferred = self.agent.request(
destination,
endpoint,
method,
path_bytes,
param_bytes,
query_bytes,
Headers(headers_dict),
producer
)
request_deferred = preserve_context_over_fn(
self.agent.request,
destination,
endpoint,
method,
path_bytes,
param_bytes,
query_bytes,
Headers(headers_dict),
producer
)
response = yield self.clock.time_bound_deferred(
request_deferred,
time_out=60,
)
response = yield self.clock.time_bound_deferred(
request_deferred,
time_out=timeout/1000. if timeout else 60,
)
logger.debug("Got response to %s", method)
break
@@ -181,7 +184,7 @@ class MatrixFederationHttpClient(object):
_flatten_response_never_received(e),
)
if retries_left:
if retries_left and not timeout:
yield sleep(2 ** (5 - retries_left))
retries_left -= 1
else:
@@ -334,7 +337,8 @@ class MatrixFederationHttpClient(object):
defer.returnValue(json.loads(body))
@defer.inlineCallbacks
def get_json(self, destination, path, args={}, retry_on_dns_fail=True):
def get_json(self, destination, path, args={}, retry_on_dns_fail=True,
timeout=None):
""" GETs some json from the given host homeserver and path
Args:
@@ -343,6 +347,9 @@ class MatrixFederationHttpClient(object):
path (str): The HTTP path.
args (dict): A dictionary used to create query strings, defaults to
None.
timeout (int): How long to try (in ms) the destination for before
giving up. None indicates no timeout and that the request will
be retried.
Returns:
Deferred: Succeeds when we get *any* HTTP response.
@@ -370,7 +377,8 @@ class MatrixFederationHttpClient(object):
path.encode("ascii"),
query_bytes=query_bytes,
body_callback=body_callback,
retry_on_dns_fail=retry_on_dns_fail
retry_on_dns_fail=retry_on_dns_fail,
timeout=timeout,
)
if 200 <= response.code < 300:

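# Illustrative sketch (not part of the diff): how a caller might use the new
# timeout argument. The value is in milliseconds, and passing one also skips
# the retry/backoff path ("if retries_left and not timeout" above). The
# destination, path and user id below are hypothetical.
from twisted.internet import defer

@defer.inlineCallbacks
def sketch_query_profile(client):
    # client is assumed to be a MatrixFederationHttpClient instance
    result = yield client.get_json(
        "remote.example.com",
        "/_matrix/federation/v1/query/profile",
        args={"user_id": "@alice:remote.example.com"},
        timeout=10000,  # give up after 10s instead of retrying indefinitely
    )
    defer.returnValue(result)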
View File

@@ -17,11 +17,12 @@
from synapse.api.errors import (
cs_exception, SynapseError, CodeMessageException, UnrecognizedRequestError
)
from synapse.util.logcontext import LoggingContext
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
import synapse.metrics
import synapse.events
from syutil.jsonutil import (
encode_canonical_json, encode_pretty_printed_json
encode_canonical_json, encode_pretty_printed_json, encode_json
)
from twisted.internet import defer
@@ -85,7 +86,9 @@ def request_handler(request_handler):
"Received request: %s %s",
request.method, request.path
)
yield request_handler(self, request)
d = request_handler(self, request)
with PreserveLoggingContext():
yield d
code = request.code
except CodeMessageException as e:
code = e.code
@@ -166,9 +169,10 @@ class JsonResource(HttpServer, resource.Resource):
_PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"])
def __init__(self, hs):
def __init__(self, hs, canonical_json=True):
resource.Resource.__init__(self)
self.canonical_json = canonical_json
self.clock = hs.get_clock()
self.path_regexs = {}
self.version_string = hs.version_string
@@ -254,6 +258,7 @@ class JsonResource(HttpServer, resource.Resource):
response_code_message=response_code_message,
pretty_print=_request_user_agent_is_curl(request),
version_string=self.version_string,
canonical_json=self.canonical_json,
)
@@ -275,11 +280,16 @@ class RootRedirect(resource.Resource):
def respond_with_json(request, code, json_object, send_cors=False,
response_code_message=None, pretty_print=False,
version_string=""):
version_string="", canonical_json=True):
if pretty_print:
json_bytes = encode_pretty_printed_json(json_object) + "\n"
else:
json_bytes = encode_canonical_json(json_object)
if canonical_json:
json_bytes = encode_canonical_json(json_object)
else:
json_bytes = encode_json(
json_object, using_frozen_dicts=synapse.events.USE_FROZEN_DICTS
)
return respond_with_json_bytes(
request, code, json_bytes,

View File

@@ -16,12 +16,12 @@
from twisted.internet import defer
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.types import StreamToken
import synapse.metrics
import logging
import time
logger = logging.getLogger(__name__)
@@ -43,63 +43,85 @@ def count(func, l):
class _NotificationListener(object):
""" This represents a single client connection to the events stream.
The events stream handler will have yielded to the deferred, so to
notify the handler it is sufficient to resolve the deferred.
"""
def __init__(self, deferred, timeout):
self.deferred = deferred
self.created = int(time.time() * 1000)
self.timeout = timeout
self.have_notified = False
def notified(self):
return self.deferred.called
def notify(self, token):
""" Inform whoever is listening about the new events.
"""
self.have_notified = True
try:
self.deferred.callback(token)
except defer.AlreadyCalledError:
pass
class _NotifierUserStream(object):
"""This represents a user connected to the event stream.
It tracks the most recent stream token for that user.
At a given point a user may have a number of streams listening for
events.
This listener will also keep track of which rooms it is listening in
so that it can remove itself from the indexes in the Notifier class.
"""
def __init__(self, user, rooms, from_token, limit, timeout, deferred,
def __init__(self, user, rooms, current_token, time_now_ms,
appservice=None):
self.user = user
self.user = str(user)
self.appservice = appservice
self.from_token = from_token
self.limit = limit
self.timeout = timeout
self.deferred = deferred
self.rooms = rooms
self.timer = None
self.listeners = set()
self.rooms = set(rooms)
self.current_token = current_token
self.last_notified_ms = time_now_ms
def notified(self):
return self.deferred.called
def notify(self, stream_key, stream_id, time_now_ms):
"""Notify any listeners for this user of a new event from an
event source.
Args:
stream_key(str): The stream the event came from.
stream_id(str): The new id for the stream the event came from.
time_now_ms(int): The current time in milliseconds.
"""
self.current_token = self.current_token.copy_and_advance(
stream_key, stream_id
)
if self.listeners:
self.last_notified_ms = time_now_ms
listeners = self.listeners
self.listeners = set()
for listener in listeners:
try:
listener.notify(self.current_token)
except:
logger.exception("Failed to notify listener")
def notify(self, notifier, events, start_token, end_token):
""" Inform whoever is listening about the new events. This will
also remove this listener from all the indexes in the Notifier
def remove(self, notifier):
""" Remove this listener from all the indexes in the Notifier
it knows about.
"""
result = (events, (start_token, end_token))
try:
self.deferred.callback(result)
notified_events_counter.inc_by(len(events))
except defer.AlreadyCalledError:
pass
# Should the following be done be using intrusively linked lists?
# -- erikj
for room in self.rooms:
lst = notifier.room_to_listeners.get(room, set())
lst = notifier.room_to_user_streams.get(room, set())
lst.discard(self)
notifier.user_to_listeners.get(self.user, set()).discard(self)
notifier.user_to_user_stream.pop(self.user)
if self.appservice:
notifier.appservice_to_listeners.get(
notifier.appservice_to_user_streams.get(
self.appservice, set()
).discard(self)
# Cancel the timeout for this notifier if one exists.
if self.timer is not None:
try:
notifier.clock.cancel_call_later(self.timer)
except:
logger.warn("Failed to cancel notifier timer")
class Notifier(object):
""" This class is responsible for notifying any listeners when there are
@@ -108,14 +130,18 @@ class Notifier(object):
Primarily used from the /events stream.
"""
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
def __init__(self, hs):
self.hs = hs
self.room_to_listeners = {}
self.user_to_listeners = {}
self.appservice_to_listeners = {}
self.user_to_user_stream = {}
self.room_to_user_streams = {}
self.appservice_to_user_streams = {}
self.event_sources = hs.get_event_sources()
self.store = hs.get_datastore()
self.pending_new_room_events = []
self.clock = hs.get_clock()
@@ -123,47 +149,80 @@ class Notifier(object):
"user_joined_room", self._user_joined_room
)
self.clock.looping_call(
self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS
)
# This is not a very cheap test to perform, but it's only executed
# when rendering the metrics page, which is likely once per minute at
# most when scraping it.
def count_listeners():
all_listeners = set()
all_user_streams = set()
for x in self.room_to_listeners.values():
all_listeners |= x
for x in self.user_to_listeners.values():
all_listeners |= x
for x in self.appservice_to_listeners.values():
all_listeners |= x
for x in self.room_to_user_streams.values():
all_user_streams |= x
for x in self.user_to_user_stream.values():
all_user_streams.add(x)
for x in self.appservice_to_user_streams.values():
all_user_streams |= x
return len(all_listeners)
return sum(len(stream.listeners) for stream in all_user_streams)
metrics.register_callback("listeners", count_listeners)
metrics.register_callback(
"rooms",
lambda: count(bool, self.room_to_listeners.values()),
lambda: count(bool, self.room_to_user_streams.values()),
)
metrics.register_callback(
"users",
lambda: count(bool, self.user_to_listeners.values()),
lambda: len(self.user_to_user_stream),
)
metrics.register_callback(
"appservices",
lambda: count(bool, self.appservice_to_listeners.values()),
lambda: count(bool, self.appservice_to_user_streams.values()),
)
@log_function
@defer.inlineCallbacks
def on_new_room_event(self, event, extra_users=[]):
def on_new_room_event(self, event, room_stream_id, max_room_stream_id,
extra_users=[]):
""" Used by handlers to inform the notifier something has happened
in the room, room event wise.
This triggers the notifier to wake up any listeners that are
listening to the room, and any listeners for the users in the
`extra_users` param.
The events can be persisted out of order. The notifier will wait
until all previous events have been persisted before notifying
the client streams.
"""
yield run_on_reactor()
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
self._notify_pending_new_room_events(max_room_stream_id)
def _notify_pending_new_room_events(self, max_room_stream_id):
"""Notify for the room events that were queued waiting for a previous
event to be persisted.
Args:
max_room_stream_id(int): The highest stream_id below which all
events have been persisted.
"""
pending = self.pending_new_room_events
self.pending_new_room_events = []
for room_stream_id, event, extra_users in pending:
if room_stream_id > max_room_stream_id:
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
else:
self._on_new_room_event(event, room_stream_id, extra_users)
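# Illustrative sketch (not part of the diff): the re-queueing behaviour above.
# With events at room_stream_ids 3 and 5 pending and max_room_stream_id == 4,
# the id-3 event is notified now and the id-5 event waits for a later call.
# Event names are made up.
pending = [(3, "event_a", []), (5, "event_b", [])]
max_room_stream_id = 4
ready = [p for p in pending if p[0] <= max_room_stream_id]
still_pending = [p for p in pending if p[0] > max_room_stream_id]
assert [e for _, e, _ in ready] == ["event_a"]
assert [e for _, e, _ in still_pending] == ["event_b"]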
def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
"""Notify any user streams that are interested in this room event"""
# poke any interested application service.
self.hs.get_handlers().appservice_handler.notify_interested_services(
event
@@ -171,194 +230,134 @@ class Notifier(object):
room_id = event.room_id
room_source = self.event_sources.sources["room"]
room_user_streams = self.room_to_user_streams.get(room_id, set())
room_listeners = self.room_to_listeners.get(room_id, set())
_discard_if_notified(room_listeners)
listeners = room_listeners.copy()
user_streams = room_user_streams.copy()
for user in extra_users:
user_listeners = self.user_to_listeners.get(user, set())
user_stream = self.user_to_user_stream.get(str(user))
if user_stream is not None:
user_streams.add(user_stream)
_discard_if_notified(user_listeners)
listeners |= user_listeners
for appservice in self.appservice_to_listeners:
for appservice in self.appservice_to_user_streams:
# TODO (kegan): Redundant appservice listener checks?
# App services will already be in the room_to_listeners set, but
# App services will already be in the room_to_user_streams set, but
# that isn't enough. They need to be checked here in order to
# receive *invites* for users they are interested in. Does this
# make the room_to_listeners check somewhat obsolete?
# make the room_to_user_streams check somewhat obsolete?
if appservice.is_interested(event):
app_listeners = self.appservice_to_listeners.get(
app_user_streams = self.appservice_to_user_streams.get(
appservice, set()
)
user_streams |= app_user_streams
_discard_if_notified(app_listeners)
logger.debug("on_new_room_event listeners %s", user_streams)
listeners |= app_listeners
logger.debug("on_new_room_event listeners %s", listeners)
# TODO (erikj): Can we make this more efficient by hitting the
# db once?
@defer.inlineCallbacks
def notify(listener):
events, end_key = yield room_source.get_new_events_for_user(
listener.user,
listener.from_token.room_key,
listener.limit,
)
if events:
end_token = listener.from_token.copy_and_replace(
"room_key", end_key
time_now_ms = self.clock.time_msec()
for user_stream in user_streams:
try:
user_stream.notify(
"room_key", "s%d" % (room_stream_id,), time_now_ms
)
listener.notify(
self, events, listener.from_token, end_token
)
def eb(failure):
logger.exception("Failed to notify listener", failure)
with PreserveLoggingContext():
yield defer.DeferredList(
[notify(l).addErrback(eb) for l in listeners],
consumeErrors=True,
)
except:
logger.exception("Failed to notify listener")
@defer.inlineCallbacks
@log_function
def on_new_user_event(self, users=[], rooms=[]):
def on_new_user_event(self, stream_key, new_token, users=[], rooms=[]):
""" Used to inform listeners that something has happend
presence/user event wise.
Will wake up all listeners for the given users and rooms.
"""
yield run_on_reactor()
# TODO(paul): This is horrible, having to manually list every event
# source here individually
presence_source = self.event_sources.sources["presence"]
typing_source = self.event_sources.sources["typing"]
listeners = set()
user_streams = set()
for user in users:
user_listeners = self.user_to_listeners.get(user, set())
_discard_if_notified(user_listeners)
listeners |= user_listeners
user_stream = self.user_to_user_stream.get(str(user))
if user_stream is not None:
user_streams.add(user_stream)
for room in rooms:
room_listeners = self.room_to_listeners.get(room, set())
user_streams |= self.room_to_user_streams.get(room, set())
_discard_if_notified(room_listeners)
listeners |= room_listeners
@defer.inlineCallbacks
def notify(listener):
presence_events, presence_end_key = (
yield presence_source.get_new_events_for_user(
listener.user,
listener.from_token.presence_key,
listener.limit,
)
)
typing_events, typing_end_key = (
yield typing_source.get_new_events_for_user(
listener.user,
listener.from_token.typing_key,
listener.limit,
)
)
if presence_events or typing_events:
end_token = listener.from_token.copy_and_replace(
"presence_key", presence_end_key
).copy_and_replace(
"typing_key", typing_end_key
)
listener.notify(
self,
presence_events + typing_events,
listener.from_token,
end_token
)
def eb(failure):
logger.error(
"Failed to notify listener",
exc_info=(
failure.type,
failure.value,
failure.getTracebackObject())
)
with PreserveLoggingContext():
yield defer.DeferredList(
[notify(l).addErrback(eb) for l in listeners],
consumeErrors=True,
)
time_now_ms = self.clock.time_msec()
for user_stream in user_streams:
try:
user_stream.notify(stream_key, new_token, time_now_ms)
except:
logger.exception("Failed to notify listener")
@defer.inlineCallbacks
def wait_for_events(self, user, rooms, filter, timeout, callback):
def wait_for_events(self, user, rooms, timeout, callback,
from_token=StreamToken("s0", "0", "0")):
"""Wait until the callback returns a non empty response or the
timeout fires.
"""
deferred = defer.Deferred()
time_now_ms = self.clock.time_msec()
from_token = StreamToken("s0", "0", "0")
user = str(user)
user_stream = self.user_to_user_stream.get(user)
if user_stream is None:
appservice = yield self.store.get_app_service_by_user_id(user)
current_token = yield self.event_sources.get_current_token()
rooms = yield self.store.get_rooms_for_user(user)
rooms = [room.room_id for room in rooms]
user_stream = _NotifierUserStream(
user=user,
rooms=rooms,
appservice=appservice,
current_token=current_token,
time_now_ms=time_now_ms,
)
self._register_with_keys(user_stream)
else:
current_token = user_stream.current_token
listener = [_NotificationListener(
user=user,
rooms=rooms,
from_token=from_token,
limit=1,
timeout=timeout,
deferred=deferred,
)]
listeners = [_NotificationListener(deferred, timeout)]
if timeout:
self._register_with_keys(listener[0])
if timeout and not current_token.is_after(from_token):
user_stream.listeners.update(listeners)
if current_token.is_after(from_token):
result = yield callback(from_token, current_token)
else:
result = None
result = yield callback()
timer = [None]
if result:
user_stream.listeners.difference_update(listeners)
defer.returnValue(result)
return
if timeout:
timed_out = [False]
def _timeout_listener():
timed_out[0] = True
timer[0] = None
listener[0].notify(self, [], from_token, from_token)
user_stream.listeners.difference_update(listeners)
for listener in listeners:
listener.notify(current_token)
# We create multiple notification listeners so we have to manage
# canceling the timeout ourselves.
timer[0] = self.clock.call_later(timeout/1000., _timeout_listener)
while not result and not timed_out[0]:
yield deferred
deferred = defer.Deferred()
listener[0] = _NotificationListener(
user=user,
rooms=rooms,
from_token=from_token,
limit=1,
timeout=timeout,
deferred=deferred,
)
self._register_with_keys(listener[0])
result = yield callback()
new_token = yield deferred
result = yield callback(current_token, new_token)
current_token = new_token
if not result:
deferred = defer.Deferred()
listener = _NotificationListener(deferred, timeout)
listeners.append(listener)
user_stream.listeners.add(listener)
if timer[0] is not None:
try:
@@ -368,125 +367,79 @@ class Notifier(object):
defer.returnValue(result)
@defer.inlineCallbacks
def get_events_for(self, user, rooms, pagination_config, timeout):
""" For the given user and rooms, return any new events for them. If
there are no new events wait for up to `timeout` milliseconds for any
new events to happen before returning.
"""
deferred = defer.Deferred()
self._get_events(
deferred, user, rooms, pagination_config.from_token,
pagination_config.limit, timeout
).addErrback(deferred.errback)
return deferred
@defer.inlineCallbacks
def _get_events(self, deferred, user, rooms, from_token, limit, timeout):
from_token = pagination_config.from_token
if not from_token:
from_token = yield self.event_sources.get_current_token()
appservice = yield self.hs.get_datastore().get_app_service_by_user_id(
user.to_string()
)
limit = pagination_config.limit
listener = _NotificationListener(
user,
rooms,
from_token,
limit,
timeout,
deferred,
appservice=appservice
)
def _timeout_listener():
# TODO (erikj): We should probably set to_token to the current
# max rather than reusing from_token.
# Remove the timer from the listener so we don't try to cancel it.
listener.timer = None
listener.notify(
self,
[],
listener.from_token,
listener.from_token,
)
if timeout:
self._register_with_keys(listener)
yield self._check_for_updates(listener)
if not timeout:
_timeout_listener()
else:
# Only add the timer if the listener hasn't been notified
if not listener.notified():
listener.timer = self.clock.call_later(
timeout/1000.0, _timeout_listener
@defer.inlineCallbacks
def check_for_updates(before_token, after_token):
events = []
end_token = from_token
for name, source in self.event_sources.sources.items():
keyname = "%s_key" % name
before_id = getattr(before_token, keyname)
after_id = getattr(after_token, keyname)
if before_id == after_id:
continue
stuff, new_key = yield source.get_new_events_for_user(
user, getattr(from_token, keyname), limit,
)
return
events.extend(stuff)
end_token = end_token.copy_and_replace(keyname, new_key)
if events:
defer.returnValue((events, (from_token, end_token)))
else:
defer.returnValue(None)
result = yield self.wait_for_events(
user, rooms, timeout, check_for_updates, from_token=from_token
)
if result is None:
result = ([], (from_token, from_token))
defer.returnValue(result)
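# Illustrative sketch (not part of the diff): how check_for_updates above maps
# each event source name onto a StreamToken attribute. The source names match
# those referenced elsewhere in this change ("room", "presence", "typing").
source_key_names = ["%s_key" % name for name in ("room", "presence", "typing")]
assert source_key_names == ["room_key", "presence_key", "typing_key"]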
@log_function
def _register_with_keys(self, listener):
for room in listener.rooms:
s = self.room_to_listeners.setdefault(room, set())
s.add(listener)
def remove_expired_streams(self):
time_now_ms = self.clock.time_msec()
expired_streams = []
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
for stream in self.user_to_user_stream.values():
if stream.listeners:
continue
if stream.last_notified_ms < expire_before_ts:
expired_streams.append(stream)
self.user_to_listeners.setdefault(listener.user, set()).add(listener)
for expired_stream in expired_streams:
expired_stream.remove(self)
if listener.appservice:
self.appservice_to_listeners.setdefault(
listener.appservice, set()
).add(listener)
@defer.inlineCallbacks
@log_function
def _check_for_updates(self, listener):
# TODO (erikj): We need to think about limits across multiple sources
events = []
def _register_with_keys(self, user_stream):
self.user_to_user_stream[user_stream.user] = user_stream
from_token = listener.from_token
limit = listener.limit
for room in user_stream.rooms:
s = self.room_to_user_streams.setdefault(room, set())
s.add(user_stream)
# TODO (erikj): DeferredList?
for name, source in self.event_sources.sources.items():
keyname = "%s_key" % name
stuff, new_key = yield source.get_new_events_for_user(
listener.user,
getattr(from_token, keyname),
limit,
)
events.extend(stuff)
from_token = from_token.copy_and_replace(keyname, new_key)
end_token = from_token
if events:
listener.notify(self, events, listener.from_token, end_token)
defer.returnValue(listener)
if user_stream.appservice:
self.appservice_to_user_streams.setdefault(
user_stream.appservice, set()
).add(user_stream)
def _user_joined_room(self, user, room_id):
new_listeners = self.user_to_listeners.get(user, set())
listeners = self.room_to_listeners.setdefault(room_id, set())
listeners |= new_listeners
for l in new_listeners:
l.rooms.add(room_id)
def _discard_if_notified(listener_set):
"""Remove any 'stale' listeners from the given set.
"""
to_discard = set()
for l in listener_set:
if l.notified():
to_discard.add(l)
listener_set -= to_discard
user = str(user)
new_user_stream = self.user_to_user_stream.get(user)
if new_user_stream is not None:
room_streams = self.room_to_user_streams.setdefault(room_id, set())
room_streams.add(new_user_stream)
new_user_stream.rooms.add(room_id)

View File

@@ -24,6 +24,7 @@ import baserules
import logging
import simplejson as json
import re
import random
logger = logging.getLogger(__name__)
@@ -74,35 +75,33 @@ class Pusher(object):
rawrules = yield self.store.get_push_rules_for_user(self.user_name)
for r in rawrules:
r['conditions'] = json.loads(r['conditions'])
r['actions'] = json.loads(r['actions'])
rules = []
for rawrule in rawrules:
rule = dict(rawrule)
rule['conditions'] = json.loads(rawrule['conditions'])
rule['actions'] = json.loads(rawrule['actions'])
rules.append(rule)
enabled_map = yield self.store.get_push_rules_enabled_for_user(self.user_name)
user = UserID.from_string(self.user_name)
rules = baserules.list_with_base_rules(rawrules, user)
rules = baserules.list_with_base_rules(rules, user)
room_id = ev['room_id']
# get *our* member event for display name matching
member_events_for_room = yield self.store.get_current_state(
room_id=ev['room_id'],
event_type='m.room.member',
state_key=None
)
my_display_name = None
room_member_count = 0
for mev in member_events_for_room:
if mev.content['membership'] != 'join':
continue
our_member_event = yield self.store.get_current_state(
room_id=room_id,
event_type='m.room.member',
state_key=self.user_name,
)
if our_member_event:
my_display_name = our_member_event[0].content.get("displayname")
# This loop does two things:
# 1) Find our current display name
if mev.state_key == self.user_name and 'displayname' in mev.content:
my_display_name = mev.content['displayname']
# and 2) Get the number of people in that room
room_member_count += 1
room_members = yield self.store.get_users_in_room(room_id)
room_member_count = len(room_members)
for r in rules:
if r['rule_id'] in enabled_map:
@@ -258,132 +257,154 @@ class Pusher(object):
logger.info("Pusher %s for user %s starting from token %s",
self.pushkey, self.user_name, self.last_token)
wait = 0
while self.alive:
from_tok = StreamToken.from_string(self.last_token)
config = PaginationConfig(from_token=from_tok, limit='1')
chunk = yield self.evStreamHandler.get_stream(
self.user_name, config,
timeout=100*365*24*60*60*1000, affect_presence=False
)
# limiting to 1 may get 1 event plus 1 presence event, so
# pick out the actual event
single_event = None
for c in chunk['chunk']:
if 'event_id' in c: # Hmmm...
single_event = c
break
if not single_event:
self.last_token = chunk['end']
continue
if not self.alive:
continue
processed = False
actions = yield self._actions_for_event(single_event)
tweaks = _tweaks_for_actions(actions)
if len(actions) == 0:
logger.warn("Empty actions! Using default action.")
actions = Pusher.DEFAULT_ACTIONS
if 'notify' not in actions and 'dont_notify' not in actions:
logger.warn("Neither notify nor dont_notify in actions: adding default")
actions.extend(Pusher.DEFAULT_ACTIONS)
if 'dont_notify' in actions:
logger.debug(
"%s for %s: dont_notify",
single_event['event_id'], self.user_name
try:
if wait > 0:
yield synapse.util.async.sleep(wait)
yield self.get_and_dispatch()
wait = 0
except:
if wait == 0:
wait = 1
else:
wait = min(wait * 2, 1800)
logger.exception(
"Exception in pusher loop for pushkey %s. Pausing for %ds",
self.pushkey, wait
)
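# Illustrative sketch (not part of the diff): the pause sequence produced by
# "wait = min(wait * 2, 1800)" above when every attempt keeps failing.
waits = []
wait = 0
for _ in range(14):
    wait = 1 if wait == 0 else min(wait * 2, 1800)
    waits.append(wait)
assert waits[:6] == [1, 2, 4, 8, 16, 32]
assert waits[-1] == 1800  # capped at 30 minutes between retries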
@defer.inlineCallbacks
def get_and_dispatch(self):
from_tok = StreamToken.from_string(self.last_token)
config = PaginationConfig(from_token=from_tok, limit='1')
timeout = (300 + random.randint(-60, 60)) * 1000
chunk = yield self.evStreamHandler.get_stream(
self.user_name, config,
timeout=timeout, affect_presence=False
)
# limiting to 1 may get 1 event plus 1 presence event, so
# pick out the actual event
single_event = None
for c in chunk['chunk']:
if 'event_id' in c: # Hmmm...
single_event = c
break
if not single_event:
self.last_token = chunk['end']
logger.debug("Event stream timeout for pushkey %s", self.pushkey)
return
if not self.alive:
return
processed = False
actions = yield self._actions_for_event(single_event)
tweaks = _tweaks_for_actions(actions)
if len(actions) == 0:
logger.warn("Empty actions! Using default action.")
actions = Pusher.DEFAULT_ACTIONS
if 'notify' not in actions and 'dont_notify' not in actions:
logger.warn("Neither notify nor dont_notify in actions: adding default")
actions.extend(Pusher.DEFAULT_ACTIONS)
if 'dont_notify' in actions:
logger.debug(
"%s for %s: dont_notify",
single_event['event_id'], self.user_name
)
processed = True
else:
rejected = yield self.dispatch_push(single_event, tweaks)
self.has_unread = True
if isinstance(rejected, list) or isinstance(rejected, tuple):
processed = True
else:
rejected = yield self.dispatch_push(single_event, tweaks)
self.has_unread = True
if isinstance(rejected, list) or isinstance(rejected, tuple):
processed = True
for pk in rejected:
if pk != self.pushkey:
# for sanity, we only remove the pushkey if it
# was the one we actually sent...
logger.warn(
("Ignoring rejected pushkey %s because we"
" didn't send it"), pk
)
else:
logger.info(
"Pushkey %s was rejected: removing",
pk
)
yield self.hs.get_pusherpool().remove_pusher(
self.app_id, pk, self.user_name
)
for pk in rejected:
if pk != self.pushkey:
# for sanity, we only remove the pushkey if it
# was the one we actually sent...
logger.warn(
("Ignoring rejected pushkey %s because we"
" didn't send it"), pk
)
else:
logger.info(
"Pushkey %s was rejected: removing",
pk
)
yield self.hs.get_pusherpool().remove_pusher(
self.app_id, pk, self.user_name
)
if not self.alive:
continue
if not self.alive:
return
if processed:
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
self.store.update_pusher_last_token_and_success(
if processed:
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
self.store.update_pusher_last_token_and_success(
self.app_id,
self.pushkey,
self.user_name,
self.last_token,
self.clock.time_msec()
)
if self.failing_since:
self.failing_since = None
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.last_token,
self.clock.time_msec()
self.failing_since)
else:
if not self.failing_since:
self.failing_since = self.clock.time_msec()
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.failing_since
)
if (self.failing_since and
self.failing_since <
self.clock.time_msec() - Pusher.GIVE_UP_AFTER):
# we really only give up so that if the URL gets
# fixed, we don't suddenly deliver a load
# of old notifications.
logger.warn("Giving up on a notification to user %s, "
"pushkey %s",
self.user_name, self.pushkey)
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
self.store.update_pusher_last_token(
self.app_id,
self.pushkey,
self.user_name,
self.last_token
)
self.failing_since = None
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.failing_since
)
if self.failing_since:
self.failing_since = None
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.failing_since)
else:
if not self.failing_since:
self.failing_since = self.clock.time_msec()
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.failing_since
)
if (self.failing_since and
self.failing_since <
self.clock.time_msec() - Pusher.GIVE_UP_AFTER):
# we really only give up so that if the URL gets
# fixed, we don't suddenly deliver a load
# of old notifications.
logger.warn("Giving up on a notification to user %s, "
"pushkey %s",
self.user_name, self.pushkey)
self.backoff_delay = Pusher.INITIAL_BACKOFF
self.last_token = chunk['end']
self.store.update_pusher_last_token(
self.app_id,
self.pushkey,
self.user_name,
self.last_token
)
self.failing_since = None
self.store.update_pusher_failing_since(
self.app_id,
self.pushkey,
self.user_name,
self.failing_since
)
else:
logger.warn("Failed to dispatch push for user %s "
"(failing for %dms)."
"Trying again in %dms",
self.user_name,
self.clock.time_msec() - self.failing_since,
self.backoff_delay)
yield synapse.util.async.sleep(self.backoff_delay / 1000.0)
self.backoff_delay *= 2
if self.backoff_delay > Pusher.MAX_BACKOFF:
self.backoff_delay = Pusher.MAX_BACKOFF
logger.warn("Failed to dispatch push for user %s "
"(failing for %dms)."
"Trying again in %dms",
self.user_name,
self.clock.time_msec() - self.failing_since,
self.backoff_delay)
yield synapse.util.async.sleep(self.backoff_delay / 1000.0)
self.backoff_delay *= 2
if self.backoff_delay > Pusher.MAX_BACKOFF:
self.backoff_delay = Pusher.MAX_BACKOFF
def stop(self):
self.alive = False

View File

@@ -18,22 +18,23 @@ from distutils.version import LooseVersion
logger = logging.getLogger(__name__)
REQUIREMENTS = {
"syutil>=0.0.6": ["syutil>=0.0.6"],
"syutil>=0.0.7": ["syutil>=0.0.7"],
"Twisted==14.0.2": ["twisted==14.0.2"],
"service_identity>=1.0.0": ["service_identity>=1.0.0"],
"pyopenssl>=0.14": ["OpenSSL>=0.14"],
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
"pynacl": ["nacl"],
"pynacl>=0.0.3": ["nacl>=0.0.3"],
"daemonize": ["daemonize"],
"py-bcrypt": ["bcrypt"],
"frozendict>=0.4": ["frozendict"],
"pillow": ["PIL"],
"pydenticon": ["pydenticon"],
"ujson": ["ujson"],
}
CONDITIONAL_REQUIREMENTS = {
"web_client": {
"matrix_angular_sdk>=0.6.5": ["syweb>=0.6.5"],
"matrix_angular_sdk>=0.6.6": ["syweb>=0.6.6"],
}
}
@@ -50,20 +51,15 @@ def github_link(project, version, egg):
return "https://github.com/%s/tarball/%s/#egg=%s" % (project, version, egg)
DEPENDENCY_LINKS = [
github_link(
project="pyca/pynacl",
version="d4d3175589b892f6ea7c22f466e0e223853516fa",
egg="pynacl-0.3.0",
),
github_link(
project="matrix-org/syutil",
version="v0.0.6",
egg="syutil-0.0.6",
version="v0.0.7",
egg="syutil-0.0.7",
),
github_link(
project="matrix-org/matrix-angular-sdk",
version="v0.6.5",
egg="matrix_angular_sdk-0.6.5",
version="v0.6.6",
egg="matrix_angular_sdk-0.6.6",
),
]

View File

@@ -25,7 +25,7 @@ class ClientV1RestResource(JsonResource):
"""A resource for version 1 of the matrix client API."""
def __init__(self, hs):
JsonResource.__init__(self, hs)
JsonResource.__init__(self, hs, canonical_json=False)
self.register_servlets(self, hs)
@staticmethod

View File

@@ -118,11 +118,14 @@ class PushRuleRestServlet(ClientV1RestServlet):
user.to_string()
)
for r in rawrules:
r["conditions"] = json.loads(r["conditions"])
r["actions"] = json.loads(r["actions"])
ruleslist = []
for rawrule in rawrules:
rule = dict(rawrule)
rule["conditions"] = json.loads(rawrule["conditions"])
rule["actions"] = json.loads(rawrule["actions"])
ruleslist.append(rule)
ruleslist = baserules.list_with_base_rules(rawrules, user)
ruleslist = baserules.list_with_base_rules(ruleslist, user)
rules = {'global': {}, 'device': {}}

View File

@@ -28,7 +28,7 @@ class ClientV2AlphaRestResource(JsonResource):
"""A resource for version 2 alpha of the matrix client API."""
def __init__(self, hs):
JsonResource.__init__(self, hs)
JsonResource.__init__(self, hs, canonical_json=False)
self.register_servlets(self, hs)
@staticmethod

View File

@@ -65,12 +65,12 @@ class PasswordRestServlet(RestServlet):
if 'medium' not in threepid or 'address' not in threepid:
raise SynapseError(500, "Malformed threepid")
# if using email, we must know about the email they're authing with!
threepid_user = yield self.hs.get_datastore().get_user_by_threepid(
threepid_user_id = yield self.hs.get_datastore().get_user_id_by_threepid(
threepid['medium'], threepid['address']
)
if not threepid_user:
if not threepid_user_id:
raise SynapseError(404, "Email address not found", Codes.NOT_FOUND)
user_id = threepid_user
user_id = threepid_user_id
else:
logger.error("Auth succeeded but no known type!", result.keys())
raise SynapseError(500, "", Codes.UNKNOWN)

View File

@@ -82,8 +82,10 @@ class RegisterRestServlet(RestServlet):
[LoginType.EMAIL_IDENTITY]
]
result = None
if service:
is_application_server = True
params = body
elif 'mac' in body:
# Check registration-specific shared secret auth
if 'username' not in body:
@@ -92,6 +94,7 @@ class RegisterRestServlet(RestServlet):
body['username'], body['mac']
)
is_using_shared_secret = True
params = body
else:
authed, result, params = yield self.auth_handler.check_auth(
flows, body, self.hs.get_ip_from_request(request)
@@ -118,7 +121,7 @@ class RegisterRestServlet(RestServlet):
password=new_password
)
if LoginType.EMAIL_IDENTITY in result:
if result and LoginType.EMAIL_IDENTITY in result:
threepid = result[LoginType.EMAIL_IDENTITY]
for reqd in ['medium', 'address', 'validated_at']:

View File

@@ -15,17 +15,18 @@
from .thumbnailer import Thumbnailer
from synapse.http.matrixfederationclient import MatrixFederationHttpClient
from synapse.http.server import respond_with_json
from synapse.util.stringutils import random_string
from synapse.api.errors import (
cs_error, Codes, SynapseError
)
from twisted.internet import defer
from twisted.internet import defer, threads
from twisted.web.resource import Resource
from twisted.protocols.basic import FileSender
from synapse.util.async import create_observer
from synapse.util.async import ObservableDeferred
import os
@@ -52,7 +53,7 @@ class BaseMediaResource(Resource):
def __init__(self, hs, filepaths):
Resource.__init__(self)
self.auth = hs.get_auth()
self.client = hs.get_http_client()
self.client = MatrixFederationHttpClient(hs)
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.store = hs.get_datastore()
@@ -83,13 +84,17 @@ class BaseMediaResource(Resource):
download = self.downloads.get(key)
if download is None:
download = self._get_remote_media_impl(server_name, media_id)
download = ObservableDeferred(
download,
consumeErrors=True
)
self.downloads[key] = download
@download.addBoth
def callback(media_info):
del self.downloads[key]
return media_info
return create_observer(download)
return download.observe()
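# Illustrative sketch (not part of the diff): why the in-flight download is
# wrapped in an ObservableDeferred above. Concurrent requests for the same
# remote media each call observe() and all receive the same result from the
# single underlying download. The media_info dict here is made up.
from twisted.internet import defer
from synapse.util.async import ObservableDeferred

underlying = defer.Deferred()
download = ObservableDeferred(underlying, consumeErrors=True)

first = download.observe()   # request 1 waits on the shared download
second = download.observe()  # request 2 piggy-backs on the same download

underlying.callback({"media_id": "abc"})
# both `first` and `second` now fire with the same media_info dict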
@defer.inlineCallbacks
def _get_remote_media_impl(self, server_name, media_id):
@@ -269,57 +274,65 @@ class BaseMediaResource(Resource):
if not requirements:
return
remote_thumbnails = []
input_path = self.filepaths.remote_media_filepath(server_name, file_id)
thumbnailer = Thumbnailer(input_path)
m_width = thumbnailer.width
m_height = thumbnailer.height
if m_width * m_height >= self.max_image_pixels:
logger.info(
"Image too large to thumbnail %r x %r > %r",
m_width, m_height, self.max_image_pixels
)
return
def generate_thumbnails():
if m_width * m_height >= self.max_image_pixels:
logger.info(
"Image too large to thumbnail %r x %r > %r",
m_width, m_height, self.max_image_pixels
)
return
scales = set()
crops = set()
for r_width, r_height, r_method, r_type in requirements:
if r_method == "scale":
t_width, t_height = thumbnailer.aspect(r_width, r_height)
scales.add((
min(m_width, t_width), min(m_height, t_height), r_type,
))
elif r_method == "crop":
crops.add((r_width, r_height, r_type))
scales = set()
crops = set()
for r_width, r_height, r_method, r_type in requirements:
if r_method == "scale":
t_width, t_height = thumbnailer.aspect(r_width, r_height)
scales.add((
min(m_width, t_width), min(m_height, t_height), r_type,
))
elif r_method == "crop":
crops.add((r_width, r_height, r_type))
for t_width, t_height, t_type in scales:
t_method = "scale"
t_path = self.filepaths.remote_media_thumbnail(
server_name, file_id, t_width, t_height, t_type, t_method
)
self._makedirs(t_path)
t_len = thumbnailer.scale(t_path, t_width, t_height, t_type)
yield self.store.store_remote_media_thumbnail(
server_name, media_id, file_id,
t_width, t_height, t_type, t_method, t_len
)
for t_width, t_height, t_type in scales:
t_method = "scale"
t_path = self.filepaths.remote_media_thumbnail(
server_name, file_id, t_width, t_height, t_type, t_method
)
self._makedirs(t_path)
t_len = thumbnailer.scale(t_path, t_width, t_height, t_type)
remote_thumbnails.append([
server_name, media_id, file_id,
t_width, t_height, t_type, t_method, t_len
])
for t_width, t_height, t_type in crops:
if (t_width, t_height, t_type) in scales:
# If the aspect ratio of the cropped thumbnail matches a purely
# scaled one then there is no point in calculating a separate
# thumbnail.
continue
t_method = "crop"
t_path = self.filepaths.remote_media_thumbnail(
server_name, file_id, t_width, t_height, t_type, t_method
)
self._makedirs(t_path)
t_len = thumbnailer.crop(t_path, t_width, t_height, t_type)
yield self.store.store_remote_media_thumbnail(
server_name, media_id, file_id,
t_width, t_height, t_type, t_method, t_len
)
for t_width, t_height, t_type in crops:
if (t_width, t_height, t_type) in scales:
# If the aspect ratio of the cropped thumbnail matches a purely
# scaled one then there is no point in calculating a separate
# thumbnail.
continue
t_method = "crop"
t_path = self.filepaths.remote_media_thumbnail(
server_name, file_id, t_width, t_height, t_type, t_method
)
self._makedirs(t_path)
t_len = thumbnailer.crop(t_path, t_width, t_height, t_type)
remote_thumbnails.append([
server_name, media_id, file_id,
t_width, t_height, t_type, t_method, t_len
])
yield threads.deferToThread(generate_thumbnails)
for r in remote_thumbnails:
yield self.store.store_remote_media_thumbnail(*r)
defer.returnValue({
"width": m_width,

View File

@@ -59,7 +59,6 @@ class BaseHomeServer(object):
'config',
'clock',
'http_client',
'db_name',
'db_pool',
'persistence_service',
'replication_layer',
@@ -133,16 +132,8 @@ class BaseHomeServer(object):
setattr(BaseHomeServer, "get_%s" % (depname), _get)
def get_ip_from_request(self, request):
# May be an X-Forwarded-For header depending on config
ip_addr = request.getClientIP()
if self.config.captcha_ip_origin_is_x_forwarded:
# use the header
if request.requestHeaders.hasHeader("X-Forwarded-For"):
ip_addr = request.requestHeaders.getRawHeaders(
"X-Forwarded-For"
)[0]
return ip_addr
# X-Forwarded-For is handled by our custom request type.
return request.getClientIP()
def is_mine(self, domain_specific_string):
return domain_specific_string.domain == self.hostname

View File

@@ -106,7 +106,7 @@ class StateHandler(object):
defer.returnValue(state)
@defer.inlineCallbacks
def compute_event_context(self, event, old_state=None):
def compute_event_context(self, event, old_state=None, outlier=False):
""" Fills out the context with the `current state` of the graph. The
`current state` here is defined to be the state of the event graph
just before the event - i.e. it never includes `event`
@@ -119,9 +119,23 @@ class StateHandler(object):
Returns:
an EventContext
"""
yield run_on_reactor()
context = EventContext()
yield run_on_reactor()
if outlier:
# If this is an outlier, then we know it shouldn't have any current
# state. Certainly store.get_current_state won't return any, and
# persisting the event won't store the state group.
if old_state:
context.current_state = {
(s.type, s.state_key): s for s in old_state
}
else:
context.current_state = {}
context.prev_state_events = []
context.state_group = None
defer.returnValue(context)
if old_state:
context.current_state = {
@@ -155,10 +169,6 @@ class StateHandler(object):
context.current_state = curr_state
context.state_group = group if not event.is_state() else None
prev_state = yield self.store.add_event_hashes(
prev_state
)
if event.is_state():
key = (event.type, event.state_key)
if key in context.current_state:

View File

@@ -51,7 +51,7 @@ logger = logging.getLogger(__name__)
# Remember to update this number every time a change is made to database
# schema files, so the users will be informed on server restarts.
SCHEMA_VERSION = 17
SCHEMA_VERSION = 20
dir_path = os.path.abspath(os.path.dirname(__file__))
@@ -348,7 +348,7 @@ def _upgrade_existing_database(cur, current_version, applied_delta_files,
module_name, absolute_path, python_file
)
logger.debug("Running script %s", relative_path)
module.run_upgrade(cur)
module.run_upgrade(cur, database_engine)
elif ext == ".sql":
# A plain old .sql file, just read and execute it
logger.debug("Applying schema %s", relative_path)

View File

@@ -15,10 +15,8 @@
import logging
from synapse.api.errors import StoreError
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext, LoggingContext
from synapse.util.logcontext import preserve_context_over_fn, LoggingContext
from synapse.util.lrucache import LruCache
import synapse.metrics
@@ -27,11 +25,13 @@ from util.id_generators import IdGenerator, StreamIdGenerator
from twisted.internet import defer
from collections import namedtuple, OrderedDict
import functools
import simplejson as json
import sys
import time
import threading
DEBUG_CACHES = False
logger = logging.getLogger(__name__)
@@ -46,7 +46,6 @@ sql_scheduling_timer = metrics.register_distribution("schedule_time")
sql_query_timer = metrics.register_distribution("query_time", labels=["verb"])
sql_txn_timer = metrics.register_distribution("transaction_time", labels=["desc"])
sql_getevents_timer = metrics.register_distribution("getEvents_time", labels=["desc"])
caches_by_name = {}
cache_counter = metrics.register_cache(
@@ -68,9 +67,20 @@ class Cache(object):
self.name = name
self.keylen = keylen
self.sequence = 0
self.thread = None
caches_by_name[name] = self.cache
def check_thread(self):
expected_thread = self.thread
if expected_thread is None:
self.thread = threading.current_thread()
else:
if expected_thread is not threading.current_thread():
raise ValueError(
"Cache objects can only be accessed from the main thread"
)
def get(self, *keyargs):
if len(keyargs) != self.keylen:
raise ValueError("Expected a key to have %d items", self.keylen)
@@ -82,6 +92,13 @@ class Cache(object):
cache_counter.inc_misses(self.name)
raise KeyError()
def update(self, sequence, *args):
self.check_thread()
if self.sequence == sequence:
# Only update the cache if the cache's sequence number matches the
# number that the cache had before the SELECT was started (SYN-369)
self.prefill(*args)
def prefill(self, *args): # because I can't *keyargs, value
keyargs = args[:-1]
value = args[-1]
@@ -96,13 +113,21 @@ class Cache(object):
self.cache[keyargs] = value
def invalidate(self, *keyargs):
self.check_thread()
if len(keyargs) != self.keylen:
raise ValueError("Expected a key to have %d items", self.keylen)
# Increment the sequence number so that any SELECT statements that
# raced with the INSERT don't update the cache (SYN-369)
self.sequence += 1
self.cache.pop(keyargs, None)
def invalidate_all(self):
self.check_thread()
self.sequence += 1
self.cache.clear()
def cached(max_entries=1000, num_args=1, lru=False):
class CacheDescriptor(object):
""" A method decorator that applies a memoizing cache around the function.
The function is presumed to take zero or more arguments, which are used in
@@ -116,43 +141,84 @@ def cached(max_entries=1000, num_args=1, lru=False):
which can be used to insert values into the cache specifically, without
calling the calculation function.
"""
def wrap(orig):
def __init__(self, orig, max_entries=1000, num_args=1, lru=False):
self.orig = orig
self.max_entries = max_entries
self.num_args = num_args
self.lru = lru
def __get__(self, obj, objtype=None):
cache = Cache(
name=orig.__name__,
max_entries=max_entries,
keylen=num_args,
lru=lru,
name=self.orig.__name__,
max_entries=self.max_entries,
keylen=self.num_args,
lru=self.lru,
)
@functools.wraps(orig)
@functools.wraps(self.orig)
@defer.inlineCallbacks
def wrapped(self, *keyargs):
def wrapped(*keyargs):
try:
defer.returnValue(cache.get(*keyargs))
cached_result = cache.get(*keyargs[:self.num_args])
if DEBUG_CACHES:
actual_result = yield self.orig(obj, *keyargs)
if actual_result != cached_result:
logger.error(
"Stale cache entry %s%r: cached: %r, actual %r",
self.orig.__name__, keyargs,
cached_result, actual_result,
)
raise ValueError("Stale cache entry")
defer.returnValue(cached_result)
except KeyError:
ret = yield orig(self, *keyargs)
# Get the sequence number of the cache before reading from the
# database so that we can tell if the cache is invalidated
# while the SELECT is executing (SYN-369)
sequence = cache.sequence
cache.prefill(*keyargs + (ret,))
ret = yield self.orig(obj, *keyargs)
cache.update(sequence, *keyargs[:self.num_args] + (ret,))
defer.returnValue(ret)
wrapped.invalidate = cache.invalidate
wrapped.invalidate_all = cache.invalidate_all
wrapped.prefill = cache.prefill
obj.__dict__[self.orig.__name__] = wrapped
return wrapped
return wrap
def cached(max_entries=1000, num_args=1, lru=False):
return lambda orig: CacheDescriptor(
orig,
max_entries=max_entries,
num_args=num_args,
lru=lru
)
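The descriptor above layers memoisation on top of the sequence check that Cache.update now performs. A toy, self-contained model of just the SYN-369 ordering rule (SequencedCache and fetch_from_db are illustrative, not the real classes):

from twisted.internet import defer

class SequencedCache(object):
    """Toy stand-in for Cache, showing only the sequence bookkeeping."""
    def __init__(self):
        self.cache = {}
        self.sequence = 0

    def invalidate(self, key):
        self.sequence += 1              # every write bumps the sequence
        self.cache.pop(key, None)

    def update(self, sequence, key, value):
        if self.sequence == sequence:   # nothing invalidated during the read
            self.cache[key] = value

@defer.inlineCallbacks
def cached_fetch(cache, key, fetch_from_db):
    if key in cache.cache:
        defer.returnValue(cache.cache[key])
    sequence = cache.sequence           # snapshot before the SELECT starts
    value = yield fetch_from_db(key)    # may race with an invalidate()
    cache.update(sequence, key, value)  # dropped if the cache moved on
    defer.returnValue(value)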
class LoggingTransaction(object):
"""An object that almost-transparently proxies for the 'txn' object
passed to the constructor. Adds logging and metrics to the .execute()
method."""
__slots__ = ["txn", "name", "database_engine"]
__slots__ = ["txn", "name", "database_engine", "after_callbacks"]
def __init__(self, txn, name, database_engine):
def __init__(self, txn, name, database_engine, after_callbacks):
object.__setattr__(self, "txn", txn)
object.__setattr__(self, "name", name)
object.__setattr__(self, "database_engine", database_engine)
object.__setattr__(self, "after_callbacks", after_callbacks)
def call_after(self, callback, *args):
"""Call the given callback on the main twisted thread after the
transaction has finished. Used to invalidate the caches on the
correct thread.
"""
self.after_callbacks.append((callback, args))
def __getattr__(self, name):
return getattr(self.txn, name)
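call_after exists so that cache invalidations queued up inside a transaction only run on the main thread once the transaction has committed. A hedged sketch of a store method using it (the method name and UPDATE statement are illustrative):

def _set_room_name_txn(self, txn, room_id, name):
    # Ordinary writes happen inside the transaction as usual...
    txn.execute(
        "UPDATE rooms SET name = ? WHERE room_id = ?",
        (name, room_id),
    )
    # ...but the invalidation waits for commit, so readers on the reactor
    # thread never see the cache cleared for a write that was rolled back.
    txn.call_after(self.get_room_name_and_aliases.invalidate, room_id)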
@@ -160,22 +226,23 @@ class LoggingTransaction(object):
def __setattr__(self, name, value):
setattr(self.txn, name, value)
def execute(self, sql, *args, **kwargs):
def execute(self, sql, *args):
self._do_execute(self.txn.execute, sql, *args)
def executemany(self, sql, *args):
self._do_execute(self.txn.executemany, sql, *args)
def _do_execute(self, func, sql, *args):
# TODO(paul): Maybe use 'info' and 'debug' for values?
sql_logger.debug("[SQL] {%s} %s", self.name, sql)
sql = self.database_engine.convert_param_style(sql)
if args and args[0]:
args = list(args)
args[0] = [
self.database_engine.encode_parameter(a) for a in args[0]
]
if args:
try:
sql_logger.debug(
"[SQL values] {%s} " + ", ".join(("<%r>",) * len(args[0])),
self.name,
*args[0]
"[SQL values] {%s} %r",
self.name, args[0]
)
except:
# Don't let logging failures stop SQL from working
@@ -184,8 +251,8 @@ class LoggingTransaction(object):
start = time.time() * 1000
try:
return self.txn.execute(
sql, *args, **kwargs
return func(
sql, *args
)
except Exception as e:
logger.debug("[SQL FAIL] {%s} %s", self.name, e)
@@ -254,6 +321,12 @@ class SQLBaseStore(object):
self._get_event_cache = Cache("*getEvent*", keylen=3, lru=True,
max_entries=hs.config.event_cache_size)
self._event_fetch_lock = threading.Condition()
self._event_fetch_list = []
self._event_fetch_ongoing = 0
self._pending_ds = []
self.database_engine = hs.database_engine
self._stream_id_gen = StreamIdGenerator()
@@ -261,6 +334,8 @@ class SQLBaseStore(object):
self._state_groups_id_gen = IdGenerator("state_groups", "id", self)
self._access_tokens_id_gen = IdGenerator("access_tokens", "id", self)
self._pushers_id_gen = IdGenerator("pushers", "id", self)
self._push_rule_id_gen = IdGenerator("push_rules", "id", self)
self._push_rules_enable_id_gen = IdGenerator("push_rules_enable", "id", self)
def start_profiling(self):
self._previous_loop_ts = self._clock.time_msec()
@@ -291,6 +366,75 @@ class SQLBaseStore(object):
self._clock.looping_call(loop, 10000)
def _new_transaction(self, conn, desc, after_callbacks, func, *args, **kwargs):
start = time.time() * 1000
txn_id = self._TXN_ID
# We don't really need these to be unique, so let's stop it from
# growing really large.
self._TXN_ID = (self._TXN_ID + 1) % (sys.maxint - 1)
name = "%s-%x" % (desc, txn_id, )
transaction_logger.debug("[TXN START] {%s}", name)
try:
i = 0
N = 5
while True:
try:
txn = conn.cursor()
txn = LoggingTransaction(
txn, name, self.database_engine, after_callbacks
)
r = func(txn, *args, **kwargs)
conn.commit()
return r
except self.database_engine.module.OperationalError as e:
# This can happen if the database disappears mid
# transaction.
logger.warn(
"[TXN OPERROR] {%s} %s %d/%d",
name, e, i, N
)
if i < N:
i += 1
try:
conn.rollback()
except self.database_engine.module.Error as e1:
logger.warn(
"[TXN EROLL] {%s} %s",
name, e1,
)
continue
raise
except self.database_engine.module.DatabaseError as e:
if self.database_engine.is_deadlock(e):
logger.warn("[TXN DEADLOCK] {%s} %d/%d", name, i, N)
if i < N:
i += 1
try:
conn.rollback()
except self.database_engine.module.Error as e1:
logger.warn(
"[TXN EROLL] {%s} %s",
name, e1,
)
continue
raise
except Exception as e:
logger.debug("[TXN FAIL] {%s} %s", name, e)
raise
finally:
end = time.time() * 1000
duration = end - start
transaction_logger.debug("[TXN END] {%s} %f", name, duration)
self._current_txn_total_time += duration
self._txn_perf_counters.update(desc, start, end)
sql_txn_timer.inc_by(duration, desc)
@defer.inlineCallbacks
def runInteraction(self, desc, func, *args, **kwargs):
"""Wraps the .runInteraction() method on the underlying db_pool."""
@@ -298,82 +442,54 @@ class SQLBaseStore(object):
start_time = time.time() * 1000
after_callbacks = []
def inner_func(conn, *args, **kwargs):
with LoggingContext("runInteraction") as context:
sql_scheduling_timer.inc_by(time.time() * 1000 - start_time)
if self.database_engine.is_connection_closed(conn):
logger.debug("Reconnecting closed database connection")
conn.reconnect()
current_context.copy_to(context)
start = time.time() * 1000
txn_id = self._TXN_ID
return self._new_transaction(
conn, desc, after_callbacks, func, *args, **kwargs
)
# We don't really need these to be unique, so lets stop it from
# growing really large.
self._TXN_ID = (self._TXN_ID + 1) % (sys.maxint - 1)
result = yield preserve_context_over_fn(
self._db_pool.runWithConnection,
inner_func, *args, **kwargs
)
name = "%s-%x" % (desc, txn_id, )
for after_callback, after_args in after_callbacks:
after_callback(*after_args)
defer.returnValue(result)
@defer.inlineCallbacks
def runWithConnection(self, func, *args, **kwargs):
"""Wraps the .runInteraction() method on the underlying db_pool."""
current_context = LoggingContext.current_context()
start_time = time.time() * 1000
def inner_func(conn, *args, **kwargs):
with LoggingContext("runWithConnection") as context:
sql_scheduling_timer.inc_by(time.time() * 1000 - start_time)
transaction_logger.debug("[TXN START] {%s}", name)
try:
i = 0
N = 5
while True:
try:
txn = conn.cursor()
return func(
LoggingTransaction(txn, name, self.database_engine),
*args, **kwargs
)
except self.database_engine.module.OperationalError as e:
# This can happen if the database disappears mid
# transaction.
logger.warn(
"[TXN OPERROR] {%s} %s %d/%d",
name, e, i, N
)
if i < N:
i += 1
try:
conn.rollback()
except self.database_engine.module.Error as e1:
logger.warn(
"[TXN EROLL] {%s} %s",
name, e1,
)
continue
except self.database_engine.module.DatabaseError as e:
if self.database_engine.is_deadlock(e):
logger.warn("[TXN DEADLOCK] {%s} %d/%d", name, i, N)
if i < N:
i += 1
try:
conn.rollback()
except self.database_engine.module.Error as e1:
logger.warn(
"[TXN EROLL] {%s} %s",
name, e1,
)
continue
raise
except Exception as e:
logger.debug("[TXN FAIL] {%s} %s", name, e)
raise
finally:
end = time.time() * 1000
duration = end - start
transaction_logger.debug("[TXN END] {%s} %f", name, duration)
if self.database_engine.is_connection_closed(conn):
logger.debug("Reconnecting closed database connection")
conn.reconnect()
self._current_txn_total_time += duration
self._txn_perf_counters.update(desc, start, end)
sql_txn_timer.inc_by(duration, desc)
current_context.copy_to(context)
return func(conn, *args, **kwargs)
result = yield preserve_context_over_fn(
self._db_pool.runWithConnection,
inner_func, *args, **kwargs
)
with PreserveLoggingContext():
result = yield self._db_pool.runWithConnection(
inner_func, *args, **kwargs
)
defer.returnValue(result)
def cursor_to_dict(self, cursor):
@@ -438,18 +554,49 @@ class SQLBaseStore(object):
@log_function
def _simple_insert_txn(self, txn, table, values):
keys, vals = zip(*values.items())
sql = "INSERT INTO %s (%s) VALUES(%s)" % (
table,
", ".join(k for k in values),
", ".join("?" for k in values)
", ".join(k for k in keys),
", ".join("?" for _ in keys)
)
logger.debug(
"[SQL] %s Args=%s",
sql, values.values(),
txn.execute(sql, vals)
def _simple_insert_many_txn(self, txn, table, values):
if not values:
return
# This is a *slight* abomination to get a list of tuples of key names
# and a list of tuples of value names.
#
# i.e. [{"a": 1, "b": 2}, {"c": 3, "d": 4}]
# => [("a", "b",), ("c", "d",)] and [(1, 2,), (3, 4,)]
#
# The sort is to ensure that we don't rely on dictionary iteration
# order.
keys, vals = zip(*[
zip(
*(sorted(i.items(), key=lambda kv: kv[0]))
)
for i in values
if i
])
for k in keys:
if k != keys[0]:
raise RuntimeError(
"All items must have the same keys"
)
sql = "INSERT INTO %s (%s) VALUES(%s)" % (
table,
", ".join(k for k in keys[0]),
", ".join("?" for _ in keys[0])
)
txn.execute(sql, values.values())
txn.executemany(sql, vals)
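The key/value unzipping above is dense, so here it is applied to two literal rows to make the intermediate shapes visible (with matching keys, as the RuntimeError check requires):

values = [{"a": 1, "b": 2}, {"a": 3, "b": 4}]
keys, vals = zip(*[
    zip(*sorted(i.items(), key=lambda kv: kv[0]))
    for i in values
])
# keys == (("a", "b"), ("a", "b"))  -> every row names the same columns
# vals == ((1, 2), (3, 4))          -> ready for txn.executemany(sql, vals)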
def _simple_upsert(self, table, keyvalues, values,
insertion_values={}, desc="_simple_upsert", lock=True):
@@ -782,158 +929,6 @@ class SQLBaseStore(object):
return self.runInteraction("_simple_max_id", func)
def _get_events(self, event_ids, check_redacted=True,
get_prev_content=False):
return self.runInteraction(
"_get_events", self._get_events_txn, event_ids,
check_redacted=check_redacted, get_prev_content=get_prev_content,
)
def _get_events_txn(self, txn, event_ids, check_redacted=True,
get_prev_content=False):
if not event_ids:
return []
events = [
self._get_event_txn(
txn, event_id,
check_redacted=check_redacted,
get_prev_content=get_prev_content
)
for event_id in event_ids
]
return [e for e in events if e]
def _invalidate_get_event_cache(self, event_id):
for check_redacted in (False, True):
for get_prev_content in (False, True):
self._get_event_cache.invalidate(event_id, check_redacted,
get_prev_content)
def _get_event_txn(self, txn, event_id, check_redacted=True,
get_prev_content=False, allow_rejected=False):
start_time = time.time() * 1000
def update_counter(desc, last_time):
curr_time = self._get_event_counters.update(desc, last_time)
sql_getevents_timer.inc_by(curr_time - last_time, desc)
return curr_time
try:
ret = self._get_event_cache.get(event_id, check_redacted, get_prev_content)
if allow_rejected or not ret.rejected_reason:
return ret
else:
return None
except KeyError:
pass
finally:
start_time = update_counter("event_cache", start_time)
sql = (
"SELECT e.internal_metadata, e.json, r.event_id, rej.reason "
"FROM event_json as e "
"LEFT JOIN redactions as r ON e.event_id = r.redacts "
"LEFT JOIN rejections as rej on rej.event_id = e.event_id "
"WHERE e.event_id = ? "
"LIMIT 1 "
)
txn.execute(sql, (event_id,))
res = txn.fetchone()
if not res:
return None
internal_metadata, js, redacted, rejected_reason = res
start_time = update_counter("select_event", start_time)
result = self._get_event_from_row_txn(
txn, internal_metadata, js, redacted,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
rejected_reason=rejected_reason,
)
self._get_event_cache.prefill(event_id, check_redacted, get_prev_content, result)
if allow_rejected or not rejected_reason:
return result
else:
return None
def _get_event_from_row_txn(self, txn, internal_metadata, js, redacted,
check_redacted=True, get_prev_content=False,
rejected_reason=None):
start_time = time.time() * 1000
def update_counter(desc, last_time):
curr_time = self._get_event_counters.update(desc, last_time)
sql_getevents_timer.inc_by(curr_time - last_time, desc)
return curr_time
d = json.loads(js)
start_time = update_counter("decode_json", start_time)
internal_metadata = json.loads(internal_metadata)
start_time = update_counter("decode_internal", start_time)
ev = FrozenEvent(
d,
internal_metadata_dict=internal_metadata,
rejected_reason=rejected_reason,
)
start_time = update_counter("build_frozen_event", start_time)
if check_redacted and redacted:
ev = prune_event(ev)
ev.unsigned["redacted_by"] = redacted
# Get the redaction event.
because = self._get_event_txn(
txn,
redacted,
check_redacted=False
)
if because:
ev.unsigned["redacted_because"] = because
start_time = update_counter("redact_event", start_time)
if get_prev_content and "replaces_state" in ev.unsigned:
prev = self._get_event_txn(
txn,
ev.unsigned["replaces_state"],
get_prev_content=False,
)
if prev:
ev.unsigned["prev_content"] = prev.get_dict()["content"]
start_time = update_counter("get_prev_content", start_time)
return ev
def _parse_events(self, rows):
return self.runInteraction(
"_parse_events", self._parse_events_txn, rows
)
def _parse_events_txn(self, txn, rows):
event_ids = [r["event_id"] for r in rows]
return self._get_events_txn(txn, event_ids)
def _has_been_redacted_txn(self, txn, event):
sql = "SELECT event_id FROM redactions WHERE redacts = ?"
txn.execute(sql, (event.event_id,))
result = txn.fetchone()
return result[0] if result else None
def get_next_stream_id(self):
with self._next_stream_id_lock:
i = self._next_stream_id

View File

@@ -19,6 +19,8 @@ from ._base import IncorrectDatabaseSetup
class PostgresEngine(object):
single_threaded = False
def __init__(self, database_module):
self.module = database_module
self.module.extensions.register_type(self.module.extensions.UNICODE)
@@ -36,9 +38,6 @@ class PostgresEngine(object):
def convert_param_style(self, sql):
return sql.replace("?", "%s")
def encode_parameter(self, param):
return param
def on_new_connection(self, db_conn):
db_conn.set_isolation_level(
self.module.extensions.ISOLATION_LEVEL_REPEATABLE_READ

View File

@@ -17,6 +17,8 @@ from synapse.storage import prepare_database, prepare_sqlite3_database
class Sqlite3Engine(object):
single_threaded = True
def __init__(self, database_module):
self.module = database_module
@@ -26,9 +28,6 @@ class Sqlite3Engine(object):
def convert_param_style(self, sql):
return sql
def encode_parameter(self, param):
return param
def on_new_connection(self, db_conn):
self.prepare_database(db_conn)

View File

@@ -13,10 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import SQLBaseStore
from twisted.internet import defer
from ._base import SQLBaseStore, cached
from syutil.base64util import encode_base64
import logging
from Queue import PriorityQueue, Empty
logger = logging.getLogger(__name__)
@@ -33,16 +36,7 @@ class EventFederationStore(SQLBaseStore):
"""
def get_auth_chain(self, event_ids):
return self.runInteraction(
"get_auth_chain",
self._get_auth_chain_txn,
event_ids
)
def _get_auth_chain_txn(self, txn, event_ids):
results = self._get_auth_chain_ids_txn(txn, event_ids)
return self._get_events_txn(txn, results)
return self.get_auth_chain_ids(event_ids).addCallback(self._get_events)
def get_auth_chain_ids(self, event_ids):
return self.runInteraction(
@@ -79,6 +73,28 @@ class EventFederationStore(SQLBaseStore):
room_id,
)
def get_oldest_events_with_depth_in_room(self, room_id):
return self.runInteraction(
"get_oldest_events_with_depth_in_room",
self.get_oldest_events_with_depth_in_room_txn,
room_id,
)
def get_oldest_events_with_depth_in_room_txn(self, txn, room_id):
sql = (
"SELECT b.event_id, MAX(e.depth) FROM events as e"
" INNER JOIN event_edges as g"
" ON g.event_id = e.event_id AND g.room_id = e.room_id"
" INNER JOIN event_backward_extremities as b"
" ON g.prev_event_id = b.event_id AND g.room_id = b.room_id"
" WHERE b.room_id = ? AND g.is_state is ?"
" GROUP BY b.event_id"
)
txn.execute(sql, (room_id, False,))
return dict(txn.fetchall())
def _get_oldest_events_in_room_txn(self, txn, room_id):
return self._simple_select_onecol_txn(
txn,
@@ -96,6 +112,7 @@ class EventFederationStore(SQLBaseStore):
room_id,
)
@cached()
def get_latest_event_ids_in_room(self, room_id):
return self._simple_select_onecol(
table="event_forward_extremities",
@@ -103,7 +120,7 @@ class EventFederationStore(SQLBaseStore):
"room_id": room_id,
},
retcol="event_id",
desc="get_latest_events_in_room",
desc="get_latest_event_ids_in_room",
)
def _get_latest_events_in_room(self, txn, room_id):
@@ -246,11 +263,13 @@ class EventFederationStore(SQLBaseStore):
do_insert = depth < min_depth if min_depth else True
if do_insert:
self._simple_insert_txn(
self._simple_upsert_txn(
txn,
table="room_depth",
values={
keyvalues={
"room_id": room_id,
},
values={
"min_depth": depth,
},
)
@@ -261,18 +280,19 @@ class EventFederationStore(SQLBaseStore):
For the given event, update the event edges table and forward and
backward extremities tables.
"""
for e_id, _ in prev_events:
# TODO (erikj): This could be done as a bulk insert
self._simple_insert_txn(
txn,
table="event_edges",
values={
self._simple_insert_many_txn(
txn,
table="event_edges",
values=[
{
"event_id": event_id,
"prev_event_id": e_id,
"room_id": room_id,
"is_state": False,
},
)
}
for e_id, _ in prev_events
],
)
# Update the extremities table if this is not an outlier.
if not outlier:
@@ -304,30 +324,32 @@ class EventFederationStore(SQLBaseStore):
txn.execute(query, (event_id, room_id))
# Insert all the prev_events as a backwards thing, they'll get
# deleted in a second if they're incorrect anyway.
for e_id, _ in prev_events:
# TODO (erikj): This could be done as a bulk insert
self._simple_insert_txn(
txn,
table="event_backward_extremities",
values={
"event_id": e_id,
"room_id": room_id,
},
)
# Also delete from the backwards extremities table all ones that
# reference events that we have already seen
query = (
"DELETE FROM event_backward_extremities WHERE EXISTS ("
"SELECT 1 FROM events "
"WHERE "
"event_backward_extremities.event_id = events.event_id "
"AND not events.outlier "
")"
"INSERT INTO event_backward_extremities (event_id, room_id)"
" SELECT ?, ? WHERE NOT EXISTS ("
" SELECT 1 FROM event_backward_extremities"
" WHERE event_id = ? AND room_id = ?"
" )"
" AND NOT EXISTS ("
" SELECT 1 FROM events WHERE event_id = ? AND room_id = ? "
" AND outlier = ?"
" )"
)
txn.executemany(query, [
(e_id, room_id, e_id, room_id, e_id, room_id, False)
for e_id, _ in prev_events
])
query = (
"DELETE FROM event_backward_extremities"
" WHERE event_id = ? AND room_id = ?"
)
txn.execute(query, (event_id, room_id))
txn.call_after(
self.get_latest_event_ids_in_room.invalidate, room_id
)
txn.execute(query)
def get_backfill_events(self, room_id, event_list, limit):
"""Get a list of Events for a given topic that occurred before (and
@@ -342,6 +364,10 @@ class EventFederationStore(SQLBaseStore):
return self.runInteraction(
"get_backfill_events",
self._get_backfill_events, room_id, event_list, limit
).addCallback(
self._get_events
).addCallback(
lambda l: sorted(l, key=lambda e: -e.depth)
)
def _get_backfill_events(self, txn, room_id, event_list, limit):
@@ -350,54 +376,75 @@ class EventFederationStore(SQLBaseStore):
room_id, repr(event_list), limit
)
event_results = event_list
event_results = set()
front = event_list
# We want to make sure that we do a breadth-first, "depth" ordered
# search.
query = (
"SELECT prev_event_id FROM event_edges "
"WHERE room_id = ? AND event_id = ? "
"LIMIT ?"
"SELECT depth, prev_event_id FROM event_edges"
" INNER JOIN events"
" ON prev_event_id = events.event_id"
" AND event_edges.room_id = events.room_id"
" WHERE event_edges.room_id = ? AND event_edges.event_id = ?"
" AND event_edges.is_state = ?"
" LIMIT ?"
)
# We iterate through all event_ids in `front` to select their previous
# events. These are dumped in `new_front`.
# We continue until we reach the limit *or* new_front is empty (i.e.,
# we've run out of things to select
while front and len(event_results) < limit:
queue = PriorityQueue()
new_front = []
for event_id in front:
logger.debug(
"_backfill_interaction: id=%s",
event_id
)
for event_id in event_list:
depth = self._simple_select_one_onecol_txn(
txn,
table="events",
keyvalues={
"event_id": event_id,
},
retcol="depth"
)
txn.execute(
query,
(room_id, event_id, limit - len(event_results))
)
queue.put((-depth, event_id))
for row in txn.fetchall():
logger.debug(
"_backfill_interaction: got id=%s",
*row
)
new_front.append(row[0])
while not queue.empty() and len(event_results) < limit:
try:
_, event_id = queue.get_nowait()
except Empty:
break
front = new_front
event_results += new_front
if event_id in event_results:
continue
return self._get_events_txn(txn, event_results)
event_results.add(event_id)
txn.execute(
query,
(room_id, event_id, False, limit - len(event_results))
)
for row in txn.fetchall():
if row[1] not in event_results:
queue.put((-row[0], row[1]))
return event_results
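The rewritten backfill walk leans on PriorityQueue returning the smallest tuple first, so negating the depth makes the deepest events come out first. A tiny standalone illustration:

from Queue import PriorityQueue

queue = PriorityQueue()
for depth, event_id in [(3, "$a"), (7, "$b"), (5, "$c")]:
    queue.put((-depth, event_id))

while not queue.empty():
    neg_depth, event_id = queue.get_nowait()
    print -neg_depth, event_id   # prints 7 $b, then 5 $c, then 3 $a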
@defer.inlineCallbacks
def get_missing_events(self, room_id, earliest_events, latest_events,
limit, min_depth):
return self.runInteraction(
ids = yield self.runInteraction(
"get_missing_events",
self._get_missing_events,
room_id, earliest_events, latest_events, limit, min_depth
)
events = yield self._get_events(ids)
events = sorted(
[ev for ev in events if ev.depth >= min_depth],
key=lambda e: e.depth,
)
defer.returnValue(events[:limit])
def _get_missing_events(self, txn, room_id, earliest_events, latest_events,
limit, min_depth):
@@ -429,14 +476,7 @@ class EventFederationStore(SQLBaseStore):
front = new_front
event_results |= new_front
events = self._get_events_txn(txn, event_results)
events = sorted(
[ev for ev in events if ev.depth >= min_depth],
key=lambda e: e.depth,
)
return events[:limit]
return event_results
def clean_room_for_join(self, room_id):
return self.runInteraction(
@@ -449,3 +489,4 @@ class EventFederationStore(SQLBaseStore):
query = "DELETE FROM event_forward_extremities WHERE room_id = ?"
txn.execute(query, (room_id,))
txn.call_after(self.get_latest_event_ids_in_room.invalidate, room_id)

View File

@@ -15,20 +15,36 @@
from _base import SQLBaseStore, _RollbackButIsFineException
from twisted.internet import defer
from twisted.internet import defer, reactor
from synapse.events import FrozenEvent, USE_FROZEN_DICTS
from synapse.events.utils import prune_event
from synapse.util.logcontext import preserve_context_over_deferred
from synapse.util.logutils import log_function
from synapse.api.constants import EventTypes
from synapse.crypto.event_signing import compute_event_reference_hash
from syutil.base64util import decode_base64
from syutil.jsonutil import encode_canonical_json
from syutil.jsonutil import encode_json
from contextlib import contextmanager
import logging
import ujson as json
logger = logging.getLogger(__name__)
# These values are used in the `_enqueue_events` and `_do_fetch` methods to
# control how we batch/bulk fetch events from the database.
# The values are plucked out of thin air to make initial sync run faster
# on jki.re
# TODO: Make these configurable.
EVENT_QUEUE_THREADS = 3 # Max number of threads that will fetch events
EVENT_QUEUE_ITERATIONS = 3 # No. times we block waiting for requests for events
EVENT_QUEUE_TIMEOUT_S = 0.1 # Timeout when waiting for requests for events
class EventsStore(SQLBaseStore):
@defer.inlineCallbacks
@log_function
@@ -41,20 +57,32 @@ class EventsStore(SQLBaseStore):
self.min_token -= 1
stream_ordering = self.min_token
if stream_ordering is None:
stream_ordering_manager = yield self._stream_id_gen.get_next(self)
else:
@contextmanager
def stream_ordering_manager():
yield stream_ordering
stream_ordering_manager = stream_ordering_manager()
try:
yield self.runInteraction(
"persist_event",
self._persist_event_txn,
event=event,
context=context,
backfilled=backfilled,
stream_ordering=stream_ordering,
is_new_state=is_new_state,
current_state=current_state,
)
with stream_ordering_manager as stream_ordering:
yield self.runInteraction(
"persist_event",
self._persist_event_txn,
event=event,
context=context,
backfilled=backfilled,
stream_ordering=stream_ordering,
is_new_state=is_new_state,
current_state=current_state,
)
except _RollbackButIsFineException:
pass
max_persisted_id = yield self._stream_id_gen.get_max_token(self)
defer.returnValue((stream_ordering, max_persisted_id))
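When a stream_ordering has been supplied up front, the code above wraps it in a throwaway context manager so that both branches can share a single `with` block. The trick in isolation:

from contextlib import contextmanager

@contextmanager
def fixed(value):
    # Yields a pre-chosen value through the same interface as the id
    # generator's get_next(), which also hands back a context manager.
    yield value

with fixed(42) as stream_ordering:
    print stream_ordering    # 42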
@defer.inlineCallbacks
def get_event(self, event_id, check_redacted=True,
get_prev_content=False, allow_rejected=False,
@@ -74,18 +102,17 @@ class EventsStore(SQLBaseStore):
Returns:
Deferred : A FrozenEvent.
"""
event = yield self.runInteraction(
"get_event", self._get_event_txn,
event_id,
events = yield self._get_events(
[event_id],
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
if not event and not allow_none:
if not events and not allow_none:
raise RuntimeError("Could not find event %s" % (event_id,))
defer.returnValue(event)
defer.returnValue(events[0] if events else None)
@log_function
def _persist_event_txn(self, txn, event, context, backfilled,
@@ -93,20 +120,17 @@ class EventsStore(SQLBaseStore):
current_state=None):
# Remove the any existing cache entries for the event_id
self._invalidate_get_event_cache(event.event_id)
if stream_ordering is None:
with self._stream_id_gen.get_next_txn(txn) as stream_ordering:
return self._persist_event_txn(
txn, event, context, backfilled,
stream_ordering=stream_ordering,
is_new_state=is_new_state,
current_state=current_state,
)
txn.call_after(self._invalidate_get_event_cache, event.event_id)
# We purposefully do this first since if we include a `current_state`
# key, we *want* to update the `current_state_events` table
if current_state:
txn.call_after(self.get_current_state_for_key.invalidate_all)
txn.call_after(self.get_rooms_for_user.invalidate_all)
txn.call_after(self.get_users_in_room.invalidate, event.room_id)
txn.call_after(self.get_joined_hosts_for_room.invalidate, event.room_id)
txn.call_after(self.get_room_name_and_aliases.invalidate, event.room_id)
self._simple_delete_txn(
txn,
table="current_state_events",
@@ -122,52 +146,29 @@ class EventsStore(SQLBaseStore):
"room_id": s.room_id,
"type": s.type,
"state_key": s.state_key,
},
}
)
if event.is_state() and is_new_state:
if not backfilled and not context.rejected:
self._simple_insert_txn(
txn,
table="state_forward_extremities",
values={
"event_id": event.event_id,
"room_id": event.room_id,
"type": event.type,
"state_key": event.state_key,
},
)
for prev_state_id, _ in event.prev_state:
self._simple_delete_txn(
txn,
table="state_forward_extremities",
keyvalues={
"event_id": prev_state_id,
}
)
outlier = event.internal_metadata.is_outlier()
if not outlier:
self._store_state_groups_txn(txn, event, context)
self._update_min_depth_for_room_txn(
txn,
event.room_id,
event.depth
)
have_persisted = self._simple_select_one_onecol_txn(
have_persisted = self._simple_select_one_txn(
txn,
table="event_json",
table="events",
keyvalues={"event_id": event.event_id},
retcol="event_id",
retcols=["event_id", "outlier"],
allow_none=True,
)
metadata_json = encode_canonical_json(
event.internal_metadata.get_dict()
metadata_json = encode_json(
event.internal_metadata.get_dict(),
using_frozen_dicts=USE_FROZEN_DICTS
).decode("UTF-8")
# If we have already persisted this event, we don't need to do any
@@ -177,7 +178,9 @@ class EventsStore(SQLBaseStore):
# if we are persisting an event that we had persisted as an outlier,
# but is no longer one.
if have_persisted:
if not outlier:
if not outlier and have_persisted["outlier"]:
self._store_state_groups_txn(txn, event, context)
sql = (
"UPDATE event_json SET internal_metadata = ?"
" WHERE event_id = ?"
@@ -197,6 +200,9 @@ class EventsStore(SQLBaseStore):
)
return
if not outlier:
self._store_state_groups_txn(txn, event, context)
self._handle_prev_events(
txn,
outlier=outlier,
@@ -230,12 +236,14 @@ class EventsStore(SQLBaseStore):
"event_id": event.event_id,
"room_id": event.room_id,
"internal_metadata": metadata_json,
"json": encode_canonical_json(event_dict).decode("UTF-8"),
"json": encode_json(
event_dict, using_frozen_dicts=USE_FROZEN_DICTS
).decode("UTF-8"),
},
)
content = encode_canonical_json(
event.content
content = encode_json(
event.content, using_frozen_dicts=USE_FROZEN_DICTS
).decode("UTF-8")
vals = {
@@ -261,8 +269,8 @@ class EventsStore(SQLBaseStore):
]
}
vals["unrecognized_keys"] = encode_canonical_json(
unrec
vals["unrecognized_keys"] = encode_json(
unrec, using_frozen_dicts=USE_FROZEN_DICTS
).decode("UTF-8")
sql = (
@@ -281,7 +289,9 @@ class EventsStore(SQLBaseStore):
)
if context.rejected:
self._store_rejections_txn(txn, event.event_id, context.rejected)
self._store_rejections_txn(
txn, event.event_id, context.rejected
)
for hash_alg, hash_base64 in event.hashes.items():
hash_bytes = decode_base64(hash_base64)
@@ -293,19 +303,22 @@ class EventsStore(SQLBaseStore):
for alg, hash_base64 in prev_hashes.items():
hash_bytes = decode_base64(hash_base64)
self._store_prev_event_hash_txn(
txn, event.event_id, prev_event_id, alg, hash_bytes
txn, event.event_id, prev_event_id, alg,
hash_bytes
)
for auth_id, _ in event.auth_events:
self._simple_insert_txn(
txn,
table="event_auth",
values={
self._simple_insert_many_txn(
txn,
table="event_auth",
values=[
{
"event_id": event.event_id,
"room_id": event.room_id,
"auth_id": auth_id,
},
)
}
for auth_id, _ in event.auth_events
],
)
(ref_alg, ref_hash_bytes) = compute_event_reference_hash(event)
self._store_event_reference_hash_txn(
@@ -330,19 +343,33 @@ class EventsStore(SQLBaseStore):
vals,
)
for e_id, h in event.prev_state:
self._simple_insert_txn(
txn,
table="event_edges",
values={
self._simple_insert_many_txn(
txn,
table="event_edges",
values=[
{
"event_id": event.event_id,
"prev_event_id": e_id,
"room_id": event.room_id,
"is_state": True,
},
)
}
for e_id, h in event.prev_state
],
)
if is_new_state and not context.rejected:
txn.call_after(
self.get_current_state_for_key.invalidate,
event.room_id, event.type, event.state_key
)
if (event.type == EventTypes.Name
or event.type == EventTypes.Aliases):
txn.call_after(
self.get_room_name_and_aliases.invalidate,
event.room_id
)
self._simple_upsert_txn(
txn,
"current_state_events",
@@ -356,9 +383,11 @@ class EventsStore(SQLBaseStore):
}
)
return
def _store_redaction(self, txn, event):
# invalidate the cache for the redacted event
self._invalidate_get_event_cache(event.redacts)
txn.call_after(self._invalidate_get_event_cache, event.redacts)
txn.execute(
"INSERT INTO redactions (event_id, redacts) VALUES (?,?)",
(event.event_id, event.redacts)
@@ -395,3 +424,409 @@ class EventsStore(SQLBaseStore):
return self.runInteraction(
"have_events", f,
)
@defer.inlineCallbacks
def _get_events(self, event_ids, check_redacted=True,
get_prev_content=False, allow_rejected=False):
if not event_ids:
defer.returnValue([])
event_map = self._get_events_from_cache(
event_ids,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
missing_events_ids = [e for e in event_ids if e not in event_map]
if not missing_events_ids:
defer.returnValue([
event_map[e_id] for e_id in event_ids
if e_id in event_map and event_map[e_id]
])
missing_events = yield self._enqueue_events(
missing_events_ids,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
event_map.update(missing_events)
defer.returnValue([
event_map[e_id] for e_id in event_ids
if e_id in event_map and event_map[e_id]
])
def _get_events_txn(self, txn, event_ids, check_redacted=True,
get_prev_content=False, allow_rejected=False):
if not event_ids:
return []
event_map = self._get_events_from_cache(
event_ids,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
missing_events_ids = [e for e in event_ids if e not in event_map]
if not missing_events_ids:
return [
event_map[e_id] for e_id in event_ids
if e_id in event_map and event_map[e_id]
]
missing_events = self._fetch_events_txn(
txn,
missing_events_ids,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
event_map.update(missing_events)
return [
event_map[e_id] for e_id in event_ids
if e_id in event_map and event_map[e_id]
]
def _invalidate_get_event_cache(self, event_id):
for check_redacted in (False, True):
for get_prev_content in (False, True):
self._get_event_cache.invalidate(event_id, check_redacted,
get_prev_content)
def _get_event_txn(self, txn, event_id, check_redacted=True,
get_prev_content=False, allow_rejected=False):
events = self._get_events_txn(
txn, [event_id],
check_redacted=check_redacted,
get_prev_content=get_prev_content,
allow_rejected=allow_rejected,
)
return events[0] if events else None
def _get_events_from_cache(self, events, check_redacted, get_prev_content,
allow_rejected):
event_map = {}
for event_id in events:
try:
ret = self._get_event_cache.get(
event_id, check_redacted, get_prev_content
)
if allow_rejected or not ret.rejected_reason:
event_map[event_id] = ret
else:
event_map[event_id] = None
except KeyError:
pass
return event_map
def _do_fetch(self, conn):
"""Takes a database connection and waits for requests for events from
the _event_fetch_list queue.
"""
event_list = []
i = 0
while True:
try:
with self._event_fetch_lock:
event_list = self._event_fetch_list
self._event_fetch_list = []
if not event_list:
single_threaded = self.database_engine.single_threaded
if single_threaded or i > EVENT_QUEUE_ITERATIONS:
self._event_fetch_ongoing -= 1
return
else:
self._event_fetch_lock.wait(EVENT_QUEUE_TIMEOUT_S)
i += 1
continue
i = 0
event_id_lists = zip(*event_list)[0]
event_ids = [
item for sublist in event_id_lists for item in sublist
]
rows = self._new_transaction(
conn, "do_fetch", [], self._fetch_event_rows, event_ids
)
row_dict = {
r["event_id"]: r
for r in rows
}
# We only want to resolve deferreds from the main thread
def fire(lst, res):
for ids, d in lst:
if not d.called:
try:
d.callback([
res[i]
for i in ids
if i in res
])
except:
logger.exception("Failed to callback")
reactor.callFromThread(fire, event_list, row_dict)
except Exception as e:
logger.exception("do_fetch")
# We only want to resolve deferreds from the main thread
def fire(evs):
for _, d in evs:
if not d.called:
d.errback(e)
if event_list:
reactor.callFromThread(fire, event_list)
@defer.inlineCallbacks
def _enqueue_events(self, events, check_redacted=True,
get_prev_content=False, allow_rejected=False):
"""Fetches events from the database using the _event_fetch_list. This
allows batch and bulk fetching of events - it allows us to fetch events
without having to create a new transaction for each request for events.
"""
if not events:
defer.returnValue({})
events_d = defer.Deferred()
with self._event_fetch_lock:
self._event_fetch_list.append(
(events, events_d)
)
self._event_fetch_lock.notify()
if self._event_fetch_ongoing < EVENT_QUEUE_THREADS:
self._event_fetch_ongoing += 1
should_start = True
else:
should_start = False
if should_start:
self.runWithConnection(
self._do_fetch
)
rows = yield preserve_context_over_deferred(events_d)
if not allow_rejected:
rows[:] = [r for r in rows if not r["rejects"]]
res = yield defer.gatherResults(
[
self._get_event_from_row(
row["internal_metadata"], row["json"], row["redacts"],
check_redacted=check_redacted,
get_prev_content=get_prev_content,
rejected_reason=row["rejects"],
)
for row in rows
],
consumeErrors=True
)
defer.returnValue({
e.event_id: e
for e in res if e
})
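_enqueue_events and _do_fetch together form a producer/consumer hand-off: callers park (event_ids, deferred) pairs on a shared list and a small pool of fetcher threads drains it in batches. A toy model of the coalescing idea, with the locking and thread bookkeeping stripped out (all names here are illustrative):

from twisted.internet import defer

pending = []                            # shared [(event_ids, deferred), ...]

def enqueue(event_ids):
    d = defer.Deferred()
    pending.append((event_ids, d))
    return d

def drain(fetch_rows):
    batch = pending[:]
    del pending[:]
    wanted = set()
    for ids, _ in batch:
        wanted.update(ids)
    rows = fetch_rows(wanted)           # one database hit for the whole batch
    for ids, d in batch:
        d.callback([rows[i] for i in ids if i in rows])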
def _fetch_event_rows(self, txn, events):
rows = []
N = 200
for i in range(1 + len(events) / N):
evs = events[i*N:(i + 1)*N]
if not evs:
break
sql = (
"SELECT "
" e.event_id as event_id, "
" e.internal_metadata,"
" e.json,"
" r.redacts as redacts,"
" rej.event_id as rejects "
" FROM event_json as e"
" LEFT JOIN rejections as rej USING (event_id)"
" LEFT JOIN redactions as r ON e.event_id = r.redacts"
" WHERE e.event_id IN (%s)"
) % (",".join(["?"]*len(evs)),)
txn.execute(sql, evs)
rows.extend(self.cursor_to_dict(txn))
return rows
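The N = 200 loop above bounds the number of placeholders in each IN (...) list. The same chunking written as a standalone helper, reusing the event_json query shape (event_ids and txn as in the surrounding code):

def chunked(seq, n=200):
    for i in range(0, len(seq), n):
        yield seq[i:i + n]

for evs in chunked(event_ids):
    sql = (
        "SELECT event_id, internal_metadata, json FROM event_json"
        " WHERE event_id IN (%s)"
    ) % (",".join("?" * len(evs)),)
    txn.execute(sql, evs)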
def _fetch_events_txn(self, txn, events, check_redacted=True,
get_prev_content=False, allow_rejected=False):
if not events:
return {}
rows = self._fetch_event_rows(
txn, events,
)
if not allow_rejected:
rows[:] = [r for r in rows if not r["rejects"]]
res = [
self._get_event_from_row_txn(
txn,
row["internal_metadata"], row["json"], row["redacts"],
check_redacted=check_redacted,
get_prev_content=get_prev_content,
rejected_reason=row["rejects"],
)
for row in rows
]
return {
r.event_id: r
for r in res
}
@defer.inlineCallbacks
def _get_event_from_row(self, internal_metadata, js, redacted,
check_redacted=True, get_prev_content=False,
rejected_reason=None):
d = json.loads(js)
internal_metadata = json.loads(internal_metadata)
if rejected_reason:
rejected_reason = yield self._simple_select_one_onecol(
table="rejections",
keyvalues={"event_id": rejected_reason},
retcol="reason",
desc="_get_event_from_row",
)
ev = FrozenEvent(
d,
internal_metadata_dict=internal_metadata,
rejected_reason=rejected_reason,
)
if check_redacted and redacted:
ev = prune_event(ev)
redaction_id = yield self._simple_select_one_onecol(
table="redactions",
keyvalues={"redacts": ev.event_id},
retcol="event_id",
desc="_get_event_from_row",
)
ev.unsigned["redacted_by"] = redaction_id
# Get the redaction event.
because = yield self.get_event(
redaction_id,
check_redacted=False,
allow_none=True,
)
if because:
ev.unsigned["redacted_because"] = because
if get_prev_content and "replaces_state" in ev.unsigned:
prev = yield self.get_event(
ev.unsigned["replaces_state"],
get_prev_content=False,
allow_none=True,
)
if prev:
ev.unsigned["prev_content"] = prev.get_dict()["content"]
self._get_event_cache.prefill(
ev.event_id, check_redacted, get_prev_content, ev
)
defer.returnValue(ev)
def _get_event_from_row_txn(self, txn, internal_metadata, js, redacted,
check_redacted=True, get_prev_content=False,
rejected_reason=None):
d = json.loads(js)
internal_metadata = json.loads(internal_metadata)
if rejected_reason:
rejected_reason = self._simple_select_one_onecol_txn(
txn,
table="rejections",
keyvalues={"event_id": rejected_reason},
retcol="reason",
)
ev = FrozenEvent(
d,
internal_metadata_dict=internal_metadata,
rejected_reason=rejected_reason,
)
if check_redacted and redacted:
ev = prune_event(ev)
redaction_id = self._simple_select_one_onecol_txn(
txn,
table="redactions",
keyvalues={"redacts": ev.event_id},
retcol="event_id",
)
ev.unsigned["redacted_by"] = redaction_id
# Get the redaction event.
because = self._get_event_txn(
txn,
redaction_id,
check_redacted=False
)
if because:
ev.unsigned["redacted_because"] = because
if get_prev_content and "replaces_state" in ev.unsigned:
prev = self._get_event_txn(
txn,
ev.unsigned["replaces_state"],
get_prev_content=False,
)
if prev:
ev.unsigned["prev_content"] = prev.get_dict()["content"]
self._get_event_cache.prefill(
ev.event_id, check_redacted, get_prev_content, ev
)
return ev
def _parse_events(self, rows):
return self.runInteraction(
"_parse_events", self._parse_events_txn, rows
)
def _parse_events_txn(self, txn, rows):
event_ids = [r["event_id"] for r in rows]
return self._get_events_txn(txn, event_ids)
def _has_been_redacted_txn(self, txn, event):
sql = "SELECT event_id FROM redactions WHERE redacts = ?"
txn.execute(sql, (event.event_id,))
result = txn.fetchone()
return result[0] if result else None

View File

@@ -13,7 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import SQLBaseStore
from ._base import SQLBaseStore, cached
from twisted.internet import defer
class PresenceStore(SQLBaseStore):
@@ -87,31 +89,48 @@ class PresenceStore(SQLBaseStore):
desc="add_presence_list_pending",
)
@defer.inlineCallbacks
def set_presence_list_accepted(self, observer_localpart, observed_userid):
return self._simple_update_one(
result = yield self._simple_update_one(
table="presence_list",
keyvalues={"user_id": observer_localpart,
"observed_user_id": observed_userid},
updatevalues={"accepted": True},
desc="set_presence_list_accepted",
)
self.get_presence_list_accepted.invalidate(observer_localpart)
defer.returnValue(result)
def get_presence_list(self, observer_localpart, accepted=None):
keyvalues = {"user_id": observer_localpart}
if accepted is not None:
keyvalues["accepted"] = accepted
if accepted:
return self.get_presence_list_accepted(observer_localpart)
else:
keyvalues = {"user_id": observer_localpart}
if accepted is not None:
keyvalues["accepted"] = accepted
return self._simple_select_list(
table="presence_list",
keyvalues=keyvalues,
retcols=["observed_user_id", "accepted"],
desc="get_presence_list",
)
@cached()
def get_presence_list_accepted(self, observer_localpart):
return self._simple_select_list(
table="presence_list",
keyvalues=keyvalues,
keyvalues={"user_id": observer_localpart, "accepted": True},
retcols=["observed_user_id", "accepted"],
desc="get_presence_list",
desc="get_presence_list_accepted",
)
@defer.inlineCallbacks
def del_presence_list(self, observer_localpart, observed_userid):
return self._simple_delete_one(
yield self._simple_delete_one(
table="presence_list",
keyvalues={"user_id": observer_localpart,
"observed_user_id": observed_userid},
desc="del_presence_list",
)
self.get_presence_list_accepted.invalidate(observer_localpart)

View File

@@ -13,61 +13,60 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from ._base import SQLBaseStore, Table
from ._base import SQLBaseStore, cached
from twisted.internet import defer
import logging
import copy
import simplejson as json
logger = logging.getLogger(__name__)
class PushRuleStore(SQLBaseStore):
@cached()
@defer.inlineCallbacks
def get_push_rules_for_user(self, user_name):
sql = (
"SELECT "+",".join(PushRuleTable.fields)+" "
"FROM "+PushRuleTable.table_name+" "
"WHERE user_name = ? "
"ORDER BY priority_class DESC, priority DESC"
rows = yield self._simple_select_list(
table=PushRuleTable.table_name,
keyvalues={
"user_name": user_name,
},
retcols=PushRuleTable.fields,
desc="get_push_rules_enabled_for_user",
)
rows = yield self._execute("get_push_rules_for_user", None, sql, user_name)
dicts = []
for r in rows:
d = {}
for i, f in enumerate(PushRuleTable.fields):
d[f] = r[i]
dicts.append(d)
rows.sort(
key=lambda row: (-int(row["priority_class"]), -int(row["priority"]))
)
defer.returnValue(dicts)
defer.returnValue(rows)
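Switching from raw SQL to _simple_select_list drops the ORDER BY, so the ordering is reproduced with a Python sort. With some example rows:

rows = [
    {"priority_class": 5, "priority": 0},
    {"priority_class": 4, "priority": 2},
    {"priority_class": 4, "priority": 7},
]
rows.sort(key=lambda row: (-int(row["priority_class"]), -int(row["priority"])))
# class 5 first, then class 4 with priority 7 before priority 2, i.e. the
# same order as "ORDER BY priority_class DESC, priority DESC".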
@cached()
@defer.inlineCallbacks
def get_push_rules_enabled_for_user(self, user_name):
results = yield self._simple_select_list(
PushRuleEnableTable.table_name,
{'user_name': user_name},
PushRuleEnableTable.fields,
table=PushRuleEnableTable.table_name,
keyvalues={
'user_name': user_name
},
retcols=PushRuleEnableTable.fields,
desc="get_push_rules_enabled_for_user",
)
defer.returnValue(
{r['rule_id']: False if r['enabled'] == 0 else True for r in results}
)
defer.returnValue({
r['rule_id']: False if r['enabled'] == 0 else True for r in results
})
@defer.inlineCallbacks
def add_push_rule(self, before, after, **kwargs):
vals = copy.copy(kwargs)
vals = kwargs
if 'conditions' in vals:
vals['conditions'] = json.dumps(vals['conditions'])
if 'actions' in vals:
vals['actions'] = json.dumps(vals['actions'])
# we could check the rest of the keys are valid column names
# but sqlite will do that anyway so I think it's just pointless.
if 'id' in vals:
del vals['id']
vals.pop("id", None)
if before or after:
ret = yield self.runInteraction(
@@ -87,39 +86,39 @@ class PushRuleStore(SQLBaseStore):
defer.returnValue(ret)
def _add_push_rule_relative_txn(self, txn, user_name, **kwargs):
after = None
relative_to_rule = None
if 'after' in kwargs and kwargs['after']:
after = kwargs['after']
relative_to_rule = after
if 'before' in kwargs and kwargs['before']:
relative_to_rule = kwargs['before']
after = kwargs.pop("after", None)
relative_to_rule = kwargs.pop("before", after)
# get the priority of the rule we're inserting after/before
sql = (
"SELECT priority_class, priority FROM ? "
"WHERE user_name = ? and rule_id = ?" % (PushRuleTable.table_name,)
res = self._simple_select_one_txn(
txn,
table=PushRuleTable.table_name,
keyvalues={
"user_name": user_name,
"rule_id": relative_to_rule,
},
retcols=["priority_class", "priority"],
allow_none=True,
)
txn.execute(sql, (user_name, relative_to_rule))
res = txn.fetchall()
if not res:
raise RuleNotFoundException(
"before/after rule not found: %s" % (relative_to_rule,)
)
priority_class, base_rule_priority = res[0]
priority_class = res["priority_class"]
base_rule_priority = res["priority"]
if 'priority_class' in kwargs and kwargs['priority_class'] != priority_class:
raise InconsistentRuleException(
"Given priority class does not match class of relative rule"
)
new_rule = copy.copy(kwargs)
if 'before' in new_rule:
del new_rule['before']
if 'after' in new_rule:
del new_rule['after']
new_rule = kwargs
new_rule.pop("before", None)
new_rule.pop("after", None)
new_rule['priority_class'] = priority_class
new_rule['user_name'] = user_name
new_rule['id'] = self._push_rule_id_gen.get_next_txn(txn)
# check if the priority before/after is free
new_rule_priority = base_rule_priority
@@ -153,12 +152,19 @@ class PushRuleStore(SQLBaseStore):
txn.execute(sql, (user_name, priority_class, new_rule_priority))
# now insert the new rule
sql = "INSERT INTO "+PushRuleTable.table_name+" ("
sql += ",".join(new_rule.keys())+") VALUES ("
sql += ", ".join(["?" for _ in new_rule.keys()])+")"
txn.call_after(
self.get_push_rules_for_user.invalidate, user_name
)
txn.execute(sql, new_rule.values())
txn.call_after(
self.get_push_rules_enabled_for_user.invalidate, user_name
)
self._simple_insert_txn(
txn,
table=PushRuleTable.table_name,
values=new_rule,
)
def _add_push_rule_highest_priority_txn(self, txn, user_name,
priority_class, **kwargs):
@@ -176,18 +182,24 @@ class PushRuleStore(SQLBaseStore):
new_prio = highest_prio + 1
# and insert the new rule
new_rule = copy.copy(kwargs)
if 'id' in new_rule:
del new_rule['id']
new_rule = kwargs
new_rule['id'] = self._push_rule_id_gen.get_next_txn(txn)
new_rule['user_name'] = user_name
new_rule['priority_class'] = priority_class
new_rule['priority'] = new_prio
sql = "INSERT INTO "+PushRuleTable.table_name+" ("
sql += ",".join(new_rule.keys())+") VALUES ("
sql += ", ".join(["?" for _ in new_rule.keys()])+")"
txn.call_after(
self.get_push_rules_for_user.invalidate, user_name
)
txn.call_after(
self.get_push_rules_enabled_for_user.invalidate, user_name
)
txn.execute(sql, new_rule.values())
self._simple_insert_txn(
txn,
table=PushRuleTable.table_name,
values=new_rule,
)
@defer.inlineCallbacks
def delete_push_rule(self, user_name, rule_id):
@@ -206,13 +218,32 @@ class PushRuleStore(SQLBaseStore):
desc="delete_push_rule",
)
self.get_push_rules_for_user.invalidate(user_name)
self.get_push_rules_enabled_for_user.invalidate(user_name)
@defer.inlineCallbacks
def set_push_rule_enabled(self, user_name, rule_id, enabled):
yield self._simple_upsert(
ret = yield self.runInteraction(
"_set_push_rule_enabled_txn",
self._set_push_rule_enabled_txn,
user_name, rule_id, enabled
)
defer.returnValue(ret)
def _set_push_rule_enabled_txn(self, txn, user_name, rule_id, enabled):
new_id = self._push_rules_enable_id_gen.get_next_txn(txn)
self._simple_upsert_txn(
txn,
PushRuleEnableTable.table_name,
{'user_name': user_name, 'rule_id': rule_id},
{'enabled': enabled},
desc="set_push_rule_enabled",
{'enabled': 1 if enabled else 0},
{'id': new_id},
)
txn.call_after(
self.get_push_rules_for_user.invalidate, user_name
)
txn.call_after(
self.get_push_rules_enabled_for_user.invalidate, user_name
)
@@ -224,7 +255,7 @@ class InconsistentRuleException(Exception):
pass
class PushRuleTable(Table):
class PushRuleTable(object):
table_name = "push_rules"
fields = [
@@ -237,10 +268,8 @@ class PushRuleTable(Table):
"actions",
]
EntryType = collections.namedtuple("PushRuleEntry", fields)
class PushRuleEnableTable(Table):
class PushRuleEnableTable(object):
table_name = "push_rules_enable"
fields = [

View File

@@ -112,14 +112,15 @@ class RegistrationStore(SQLBaseStore):
@defer.inlineCallbacks
def user_delete_access_tokens_apart_from(self, user_id, token_id):
rows = yield self.get_user_by_id(user_id)
if len(rows) == 0:
raise Exception("No such user!")
yield self.runInteraction(
"user_delete_access_tokens_apart_from",
self._user_delete_access_tokens_apart_from, user_id, token_id
)
yield self._execute(
"delete_access_tokens_apart_from", None,
def _user_delete_access_tokens_apart_from(self, txn, user_id, token_id):
txn.execute(
"DELETE FROM access_tokens WHERE user_id = ? AND id != ?",
rows[0]['id'], token_id
(user_id, token_id)
)
@defer.inlineCallbacks
@@ -201,15 +202,15 @@ class RegistrationStore(SQLBaseStore):
defer.returnValue(ret)
@defer.inlineCallbacks
def get_user_by_threepid(self, medium, address):
def get_user_id_by_threepid(self, medium, address):
ret = yield self._simple_select_one(
"user_threepids",
{
"medium": medium,
"address": address
},
['user'], True, 'get_user_by_threepid'
['user_id'], True, 'get_user_id_by_threepid'
)
if ret:
defer.returnValue(ret['user'])
defer.returnValue(ret['user_id'])
defer.returnValue(None)

View File

@@ -17,7 +17,7 @@ from twisted.internet import defer
from synapse.api.errors import StoreError
from ._base import SQLBaseStore
from ._base import SQLBaseStore, cached
import collections
import logging
@@ -186,6 +186,7 @@ class RoomStore(SQLBaseStore):
}
)
@cached()
@defer.inlineCallbacks
def get_room_name_and_aliases(self, room_id):
def f(txn):

View File

@@ -64,8 +64,9 @@ class RoomMemberStore(SQLBaseStore):
}
)
self.get_rooms_for_user.invalidate(target_user_id)
self.get_joined_hosts_for_room.invalidate(event.room_id)
txn.call_after(self.get_rooms_for_user.invalidate, target_user_id)
txn.call_after(self.get_joined_hosts_for_room.invalidate, event.room_id)
txn.call_after(self.get_users_in_room.invalidate, event.room_id)
def get_room_member(self, user_id, room_id):
"""Retrieve the current state of a room member.
@@ -76,17 +77,18 @@ class RoomMemberStore(SQLBaseStore):
Returns:
Deferred: Results in a MembershipEvent or None.
"""
def f(txn):
events = self._get_members_events_txn(
txn,
room_id,
user_id=user_id,
)
return events[0] if events else None
return self.runInteraction("get_room_member", f)
return self.runInteraction(
"get_room_member",
self._get_members_events_txn,
room_id,
user_id=user_id,
).addCallback(
self._get_events
).addCallback(
lambda events: events[0] if events else None
)
@cached()
def get_users_in_room(self, room_id):
def f(txn):
@@ -110,15 +112,12 @@ class RoomMemberStore(SQLBaseStore):
Returns:
list of namedtuples representing the members in this room.
"""
def f(txn):
return self._get_members_events_txn(
txn,
room_id,
membership=membership,
)
return self.runInteraction("get_room_members", f)
return self.runInteraction(
"get_room_members",
self._get_members_events_txn,
room_id,
membership=membership,
).addCallback(self._get_events)
def get_rooms_for_user_where_membership_is(self, user_id, membership_list):
""" Get all the rooms for this user where the membership for this user
@@ -190,14 +189,14 @@ class RoomMemberStore(SQLBaseStore):
return self.runInteraction(
"get_members_query", self._get_members_events_txn,
where_clause, where_values
)
).addCallbacks(self._get_events)
def _get_members_events_txn(self, txn, room_id, membership=None, user_id=None):
rows = self._get_members_rows_txn(
txn,
room_id, membership, user_id,
)
return self._get_events_txn(txn, [r["event_id"] for r in rows])
return [r["event_id"] for r in rows]
def _get_members_rows_txn(self, txn, room_id, membership=None, user_id=None):
where_clause = "c.room_id = ?"

View File

@@ -18,7 +18,7 @@ import logging
logger = logging.getLogger(__name__)
def run_upgrade(cur):
def run_upgrade(cur, *args, **kwargs):
cur.execute("SELECT id, regex FROM application_services_regex")
for row in cur.fetchall():
try:

View File

@@ -0,0 +1,18 @@
/* Copyright 2015 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
DROP INDEX IF EXISTS sent_transaction_dest;
DROP INDEX IF EXISTS sent_transaction_sent;
DROP INDEX IF EXISTS user_ips_user;

View File

@@ -0,0 +1,32 @@
/* Copyright 2015 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE IF NOT EXISTS new_server_keys_json (
server_name TEXT NOT NULL, -- Server name.
key_id TEXT NOT NULL, -- Requested key id.
from_server TEXT NOT NULL, -- Which server the keys were fetched from.
ts_added_ms BIGINT NOT NULL, -- When the keys were fetched
ts_valid_until_ms BIGINT NOT NULL, -- When this version of the keys expires.
key_json bytea NOT NULL, -- JSON certificate for the remote server.
CONSTRAINT server_keys_json_uniqueness UNIQUE (server_name, key_id, from_server)
);
INSERT INTO new_server_keys_json
SELECT server_name, key_id, from_server, ts_added_ms, ts_valid_until_ms, key_json FROM server_keys_json;
DROP TABLE server_keys_json;
ALTER TABLE new_server_keys_json RENAME TO server_keys_json;

View File

@@ -0,0 +1,19 @@
/* Copyright 2015 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE INDEX events_order_topo_stream_room ON events(
topological_ordering, stream_ordering, room_id
);

View File

@@ -0,0 +1 @@
SELECT 1;

View File

@@ -0,0 +1,76 @@
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Main purpose of this upgrade is to change the unique key on the
pushers table again (it was missed when the v16 full schema was
made) but this also changes the pushkey and data columns to text.
When selecting a bytea column into a text column, postgres inserts
the hex encoded data, and there's no portable way of getting the
UTF-8 bytes, so we have to do it in Python.
"""
import logging
logger = logging.getLogger(__name__)
def run_upgrade(cur, database_engine, *args, **kwargs):
logger.info("Porting pushers table...")
cur.execute("""
CREATE TABLE IF NOT EXISTS pushers2 (
id BIGINT PRIMARY KEY,
user_name TEXT NOT NULL,
access_token BIGINT DEFAULT NULL,
profile_tag VARCHAR(32) NOT NULL,
kind VARCHAR(8) NOT NULL,
app_id VARCHAR(64) NOT NULL,
app_display_name VARCHAR(64) NOT NULL,
device_display_name VARCHAR(128) NOT NULL,
pushkey TEXT NOT NULL,
ts BIGINT NOT NULL,
lang VARCHAR(8),
data TEXT,
last_token TEXT,
last_success BIGINT,
failing_since BIGINT,
UNIQUE (app_id, pushkey, user_name)
)
""")
cur.execute("""SELECT
id, user_name, access_token, profile_tag, kind,
app_id, app_display_name, device_display_name,
pushkey, ts, lang, data, last_token, last_success,
failing_since
FROM pushers
""")
count = 0
for row in cur.fetchall():
row = list(row)
row[8] = bytes(row[8]).decode("utf-8")
row[11] = bytes(row[11]).decode("utf-8")
cur.execute(database_engine.convert_param_style("""
INSERT into pushers2 (
id, user_name, access_token, profile_tag, kind,
app_id, app_display_name, device_display_name,
pushkey, ts, lang, data, last_token, last_success,
failing_since
) values (%s)""" % (','.join(['?' for _ in range(len(row))]))),
row
)
count += 1
cur.execute("DROP TABLE pushers")
cur.execute("ALTER TABLE pushers2 RENAME TO pushers")
logger.info("Moved %d pushers to new table", count)

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import SQLBaseStore
from ._base import SQLBaseStore, cached
from twisted.internet import defer
@@ -43,6 +43,7 @@ class StateStore(SQLBaseStore):
* `state_groups_state`: Maps state group to state events.
"""
@defer.inlineCallbacks
def get_state_groups(self, event_ids):
""" Get the state groups for the given list of event_ids
@@ -71,17 +72,33 @@ class StateStore(SQLBaseStore):
retcol="event_id",
)
state = self._get_events_txn(txn, state_ids)
res[group] = state
res[group] = state_ids
return res
return self.runInteraction(
states = yield self.runInteraction(
"get_state_groups",
f,
)
state_list = yield defer.gatherResults(
[
self._fetch_events_for_group(group, vals)
for group, vals in states.items()
],
consumeErrors=True,
)
defer.returnValue(dict(state_list))
@cached(num_args=1)
def _fetch_events_for_group(self, state_group, events):
return self._get_events(
events, get_prev_content=False
).addCallback(
lambda evs: (state_group, evs)
)
def _store_state_groups_txn(self, txn, event, context):
if context.current_state is None:
return
@@ -104,18 +121,20 @@ class StateStore(SQLBaseStore):
},
)
for state in state_events.values():
self._simple_insert_txn(
txn,
table="state_groups_state",
values={
self._simple_insert_many_txn(
txn,
table="state_groups_state",
values=[
{
"state_group": state_group,
"room_id": state.room_id,
"type": state.type,
"state_key": state.state_key,
"event_id": state.event_id,
},
)
}
for state in state_events.values()
],
)
self._simple_insert_txn(
txn,
@@ -128,6 +147,12 @@ class StateStore(SQLBaseStore):
@defer.inlineCallbacks
def get_current_state(self, room_id, event_type=None, state_key=""):
if event_type and state_key is not None:
result = yield self.get_current_state_for_key(
room_id, event_type, state_key
)
defer.returnValue(result)
def f(txn):
sql = (
"SELECT event_id FROM current_state_events"
@@ -144,11 +169,29 @@ class StateStore(SQLBaseStore):
args = (room_id, )
txn.execute(sql, args)
results = self.cursor_to_dict(txn)
results = txn.fetchall()
return self._parse_events_txn(txn, results)
return [r[0] for r in results]
events = yield self.runInteraction("get_current_state", f)
event_ids = yield self.runInteraction("get_current_state", f)
events = yield self._get_events(event_ids, get_prev_content=False)
defer.returnValue(events)
@cached(num_args=3)
@defer.inlineCallbacks
def get_current_state_for_key(self, room_id, event_type, state_key):
def f(txn):
sql = (
"SELECT event_id FROM current_state_events"
" WHERE room_id = ? AND type = ? AND state_key = ?"
)
args = (room_id, event_type, state_key)
txn.execute(sql, args)
results = txn.fetchall()
return [r[0] for r in results]
event_ids = yield self.runInteraction("get_current_state_for_key", f)
events = yield self._get_events(event_ids, get_prev_content=False)
defer.returnValue(events)
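get_state_groups now returns only event ids from the transaction and resolves them afterwards, fetching each state group's events in parallel with defer.gatherResults and rebuilding the {group: events} mapping from the (group, events) pairs. A minimal, self-contained sketch of that gather-then-dict pattern (the fetch callable is a stand-in, not synapse's _fetch_events_for_group):

# Stand-alone sketch of the gather-then-dict pattern used by get_state_groups.
# fetch_events_for_group is a stand-in returning a Deferred of (group, events).
from twisted.internet import defer


@defer.inlineCallbacks
def fetch_all_groups(groups, fetch_events_for_group):
    state_list = yield defer.gatherResults(
        [
            fetch_events_for_group(group, event_ids)
            for group, event_ids in groups.items()
        ],
        consumeErrors=True,
    )
    defer.returnValue(dict(state_list))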

View File

@@ -37,11 +37,9 @@ from twisted.internet import defer
from ._base import SQLBaseStore
from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.types import RoomStreamToken
from synapse.util.logutils import log_function
from collections import namedtuple
import logging
@@ -55,76 +53,26 @@ _STREAM_TOKEN = "stream"
_TOPOLOGICAL_TOKEN = "topological"
class _StreamToken(namedtuple("_StreamToken", "topological stream")):
"""Tokens are positions between events. The token "s1" comes after event 1.
def lower_bound(token):
if token.topological is None:
return "(%d < %s)" % (token.stream, "stream_ordering")
else:
return "(%d < %s OR (%d = %s AND %d < %s))" % (
token.topological, "topological_ordering",
token.topological, "topological_ordering",
token.stream, "stream_ordering",
)
s0 s1
| |
[0] V [1] V [2]
Tokens can either be a point in the live event stream or a cursor going
through historic events.
When traversing the live event stream events are ordered by when they
arrived at the homeserver.
When traversing historic events the events are ordered by their depth in
the event graph "topological_ordering" and then by when they arrived at the
homeserver "stream_ordering".
Live tokens start with an "s" followed by the "stream_ordering" id of the
event it comes after. Historic tokens start with a "t" followed by the
"topological_ordering" id of the event it comes after, follewed by "-",
followed by the "stream_ordering" id of the event it comes after.
"""
__slots__ = []
@classmethod
def parse(cls, string):
try:
if string[0] == 's':
return cls(topological=None, stream=int(string[1:]))
if string[0] == 't':
parts = string[1:].split('-', 1)
return cls(topological=int(parts[0]), stream=int(parts[1]))
except:
pass
raise SynapseError(400, "Invalid token %r" % (string,))
@classmethod
def parse_stream_token(cls, string):
try:
if string[0] == 's':
return cls(topological=None, stream=int(string[1:]))
except:
pass
raise SynapseError(400, "Invalid token %r" % (string,))
def __str__(self):
if self.topological is not None:
return "t%d-%d" % (self.topological, self.stream)
else:
return "s%d" % (self.stream,)
def lower_bound(self):
if self.topological is None:
return "(%d < %s)" % (self.stream, "stream_ordering")
else:
return "(%d < %s OR (%d = %s AND %d < %s))" % (
self.topological, "topological_ordering",
self.topological, "topological_ordering",
self.stream, "stream_ordering",
)
def upper_bound(self):
if self.topological is None:
return "(%d >= %s)" % (self.stream, "stream_ordering")
else:
return "(%d > %s OR (%d = %s AND %d >= %s))" % (
self.topological, "topological_ordering",
self.topological, "topological_ordering",
self.stream, "stream_ordering",
)
def lower_bound(token):
if token.topological is None:
return "(%d < %s)" % (token.stream, "stream_ordering")
else:
return "(%d < %s OR (%d = %s AND %d < %s))" % (
token.topological, "topological_ordering",
token.topological, "topological_ordering",
token.stream, "stream_ordering",
)
def upper_bound(token):
if token.topological is None:
return "(%d >= %s)" % (token.stream, "stream_ordering")
else:
return "(%d > %s OR (%d = %s AND %d >= %s))" % (
token.topological, "topological_ordering",
token.topological, "topological_ordering",
token.stream, "stream_ordering",
)
class StreamStore(SQLBaseStore):
@@ -139,8 +87,8 @@ class StreamStore(SQLBaseStore):
limit = MAX_STREAM_SIZE
# From and to keys should be integers from ordering.
from_id = _StreamToken.parse_stream_token(from_key)
to_id = _StreamToken.parse_stream_token(to_key)
from_id = RoomStreamToken.parse_stream_token(from_key)
to_id = RoomStreamToken.parse_stream_token(to_key)
if from_key == to_key:
defer.returnValue(([], to_key))
@@ -234,8 +182,8 @@ class StreamStore(SQLBaseStore):
limit = MAX_STREAM_SIZE
# From and to keys should be integers from ordering.
from_id = _StreamToken.parse_stream_token(from_key)
to_id = _StreamToken.parse_stream_token(to_key)
from_id = RoomStreamToken.parse_stream_token(from_key)
to_id = RoomStreamToken.parse_stream_token(to_key)
if from_key == to_key:
return defer.succeed(([], to_key))
@@ -276,7 +224,7 @@ class StreamStore(SQLBaseStore):
return self.runInteraction("get_room_events_stream", f)
@log_function
@defer.inlineCallbacks
def paginate_room_events(self, room_id, from_key, to_key=None,
direction='b', limit=-1,
with_feedback=False):
@@ -288,17 +236,17 @@ class StreamStore(SQLBaseStore):
args = [False, room_id]
if direction == 'b':
order = "DESC"
bounds = _StreamToken.parse(from_key).upper_bound()
bounds = upper_bound(RoomStreamToken.parse(from_key))
if to_key:
bounds = "%s AND %s" % (
bounds, _StreamToken.parse(to_key).lower_bound()
bounds, lower_bound(RoomStreamToken.parse(to_key))
)
else:
order = "ASC"
bounds = _StreamToken.parse(from_key).lower_bound()
bounds = lower_bound(RoomStreamToken.parse(from_key))
if to_key:
bounds = "%s AND %s" % (
bounds, _StreamToken.parse(to_key).upper_bound()
bounds, upper_bound(RoomStreamToken.parse(to_key))
)
if int(limit) > 0:
@@ -333,28 +281,30 @@ class StreamStore(SQLBaseStore):
# when we are going backwards so we subtract one from the
# stream part.
toke -= 1
next_token = str(_StreamToken(topo, toke))
next_token = str(RoomStreamToken(topo, toke))
else:
# TODO (erikj): We should work out what to do here instead.
next_token = to_key if to_key else from_key
events = self._get_events_txn(
txn,
[r["event_id"] for r in rows],
get_prev_content=True
)
return rows, next_token,
self._set_before_and_after(events, rows)
rows, token = yield self.runInteraction("paginate_room_events", f)
return events, next_token,
events = yield self._get_events(
[r["event_id"] for r in rows],
get_prev_content=True
)
return self.runInteraction("paginate_room_events", f)
self._set_before_and_after(events, rows)
defer.returnValue((events, token))
@defer.inlineCallbacks
def get_recent_events_for_room(self, room_id, limit, end_token,
with_feedback=False, from_token=None):
# TODO (erikj): Handle compressed feedback
end_token = _StreamToken.parse_stream_token(end_token)
end_token = RoomStreamToken.parse_stream_token(end_token)
if from_token is None:
sql = (
@@ -365,7 +315,7 @@ class StreamStore(SQLBaseStore):
" LIMIT ?"
)
else:
from_token = _StreamToken.parse_stream_token(from_token)
from_token = RoomStreamToken.parse_stream_token(from_token)
sql = (
"SELECT stream_ordering, topological_ordering, event_id"
" FROM events"
@@ -395,30 +345,49 @@ class StreamStore(SQLBaseStore):
# stream part.
topo = rows[0]["topological_ordering"]
toke = rows[0]["stream_ordering"] - 1
start_token = str(_StreamToken(topo, toke))
start_token = str(RoomStreamToken(topo, toke))
token = (start_token, str(end_token))
else:
token = (str(end_token), str(end_token))
events = self._get_events_txn(
txn,
[r["event_id"] for r in rows],
get_prev_content=True
)
return rows, token
self._set_before_and_after(events, rows)
return events, token
return self.runInteraction(
rows, token = yield self.runInteraction(
"get_recent_events_for_room", get_recent_events_for_room_txn
)
logger.debug("stream before")
events = yield self._get_events(
[r["event_id"] for r in rows],
get_prev_content=True
)
logger.debug("stream after")
self._set_before_and_after(events, rows)
defer.returnValue((events, token))
@defer.inlineCallbacks
def get_room_events_max_id(self):
def get_room_events_max_id(self, direction='f'):
token = yield self._stream_id_gen.get_max_token(self)
defer.returnValue("s%d" % (token,))
if direction != 'b':
defer.returnValue("s%d" % (token,))
else:
topo = yield self.runInteraction(
"_get_max_topological_txn", self._get_max_topological_txn
)
defer.returnValue("t%d-%d" % (topo, token))
def _get_max_topological_txn(self, txn):
txn.execute(
"SELECT MAX(topological_ordering) FROM events"
" WHERE outlier = ?",
(False,)
)
rows = txn.fetchall()
return rows[0][0] if rows else 0
@defer.inlineCallbacks
def _get_min_token(self):
@@ -439,5 +408,5 @@ class StreamStore(SQLBaseStore):
stream = row["stream_ordering"]
topo = event.depth
internal = event.internal_metadata
internal.before = str(_StreamToken(topo, stream - 1))
internal.after = str(_StreamToken(topo, stream))
internal.before = str(RoomStreamToken(topo, stream - 1))
internal.after = str(RoomStreamToken(topo, stream))
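This file swaps the local _StreamToken class for the shared RoomStreamToken in synapse.types, with lower_bound/upper_bound surviving as module-level helpers. Going by the removed docstring, live tokens serialise as "s<stream>" and historic ones as "t<topological>-<stream>"; the following is a hedged, standalone re-implementation of just that parse/serialise behaviour, based on the deleted code rather than the real synapse.types class:

# Hedged re-implementation of the token format described in the removed
# docstring: "s12" is a live position, "t5-12" a historic (topological, stream)
# position. Not the real synapse.types.RoomStreamToken.
from collections import namedtuple


class StreamPosition(namedtuple("StreamPosition", "topological stream")):
    @classmethod
    def parse(cls, string):
        if string and string[0] == "s":
            return cls(topological=None, stream=int(string[1:]))
        if string and string[0] == "t":
            topo, _, stream = string[1:].partition("-")
            return cls(topological=int(topo), stream=int(stream))
        raise ValueError("Invalid token %r" % (string,))

    def __str__(self):
        if self.topological is not None:
            return "t%d-%d" % (self.topological, self.stream)
        return "s%d" % (self.stream,)


assert str(StreamPosition.parse("t5-12")) == "t5-12"
assert str(StreamPosition.parse("s7")) == "s7"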

Some files were not shown because too many files have changed in this diff.