
Compare commits


155 Commits

Author SHA1 Message Date
Erik Johnston
b23cb8fba8 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse 2017-10-02 13:52:03 +01:00
Erik Johnston
e4a709eda3 Bump version and change log 2017-10-02 13:51:38 +01:00
Erik Johnston
1a398b19fd Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.23.0 2017-09-26 10:08:59 +01:00
Erik Johnston
f4c8cd5e85 Bump changelog and version 2017-09-26 10:02:48 +01:00
Erik Johnston
b8d832a08c Merge pull request #2470 from matrix-org/erikj/sync_speed_fix
Refactor to speed up incremental syncs
2017-09-25 17:43:14 +01:00
Erik Johnston
e3edca3b5d Refactor to speed up incremental syncs 2017-09-25 17:35:39 +01:00
Richard van der Hoff
cacfa04cb6 Merge pull request #2468 from maxidor/develop
Clarify recommended network setup
2017-09-25 16:37:33 +01:00
Max Dor
e591f7b3f0 Include review feedback 2017-09-25 16:42:26 +02:00
Max Dor
7141f1a5cc Clarify recommended network setup 2017-09-25 16:20:23 +02:00
Erik Johnston
44edac0497 Merge branch 'release-v0.23.0' of github.com:matrix-org/synapse into develop 2017-09-25 14:52:46 +01:00
Richard van der Hoff
29e1c717c3 Merge pull request #2390 from r3dey3/develop
Fix iteration of requests_missing_keys; list doesn't have .values()
2017-09-25 11:56:01 +01:00
Richard van der Hoff
94133d7ce8 Merge branch 'develop' into develop 2017-09-25 11:50:11 +01:00
Erik Johnston
b15c2b7971 Update CHANGES 2017-09-25 11:34:12 +01:00
Erik Johnston
ba8fdc925c Bump version and changes 2017-09-25 11:01:31 +01:00
Richard van der Hoff
79b3cf3e02 Fix logcontext leak in keyclient (#2465)
preserve_context_over_function doesn't do what you want it to do.
2017-09-25 09:51:39 +01:00
Richard van der Hoff
b4fd710e1a Merge pull request #2464 from rnbdsh/patch-4
Remove non-existing files, add stop, use synctl
2017-09-25 09:33:22 +01:00
rnbdsh
b68b0ede7a Start traditionally, stop synctl
Starting with synctl led to "no config file found".
Stopping still reports (code=exited, status=1/FAILURE), but at least now we can stop the service.
2017-09-24 04:55:19 +02:00
rnbdsh
68f737702b Remove non-existing files, add stop, use synctl
Non-existent files, when running the unit suggested at https://github.com/matrix-org/synapse#configuring-synapse:
/etc/synapse/log_config.yaml does not exist, so the --log-config option leads to an error.
/etc/sysconfig/synapse: the environment file, and indeed /etc/sysconfig itself, does not exist on Arch Linux.

Also, instead of calling python2 we use synctl, as this seems to be the proper way to start it, and it gives a more useful error in the systemctl status. We now also allow stop (and therefore restart).
2017-09-24 04:26:23 +02:00
Richard van der Hoff
f65e31d22f Do an AAAA lookup on SRV record targets (#2462)
Support SRV records which point at AAAA records, as well as A records.

Fixes https://github.com/matrix-org/synapse/issues/2405
2017-09-22 20:26:47 +01:00
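
The fix means SRV targets are now resolved over both IPv4 and IPv6. Synapse does its lookups through Twisted; purely as an illustration of the pattern, here is the equivalent lookup with dnspython (an assumption, not what Synapse uses):

import dns.resolver

# Look up the SRV record, then query both A and AAAA for each target,
# so that IPv6-only targets still resolve to usable addresses.
for srv in dns.resolver.query("_matrix._tcp.example.com", "SRV"):
    target = str(srv.target)
    for rdtype in ("A", "AAAA"):
        try:
            for rr in dns.resolver.query(target, rdtype):
                print(target, srv.port, rr.address)
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            pass
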
Matthew Hodgson
f496399ac4 fix thinko'd docstring 2017-09-22 15:34:14 +01:00
Erik Johnston
3166ed55b2 Fix device list when rejoining room (#2461) 2017-09-22 14:44:17 +01:00
Richard van der Hoff
c94ab5976a Merge pull request #2459 from matrix-org/rav/keyring_cleanups
Clean up Keyring code
2017-09-20 11:29:39 +01:00
Richard van der Hoff
6de74ea6d7 Fix logcontexts in _check_sigs_and_hashes 2017-09-20 01:32:42 +01:00
Richard van der Hoff
72472456d8 Add some more tests for Keyring 2017-09-20 01:32:42 +01:00
Richard van der Hoff
c5c24c239b Fix logcontext handling in verify_json_objects_for_server
preserve_context_over_fn is essentially broken, because (a) it pointlessly
drops the current logcontext before calling its wrapped function, which means
we don't get any useful logcontexts for _handle_key_deferred; (b) it wraps the
resulting deferred in a _PreservingContextDeferred, which is very dangerous
because you then can't yield on it without leaking context back into the
reactor.

Instead, let's specify that the resultant deferreds call their callbacks with
no logcontext.
2017-09-20 01:32:42 +01:00
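
A minimal sketch of the convention this commit adopts, assuming Synapse's PreserveLoggingContext context manager (the helper name here is hypothetical):

from synapse.util.logcontext import PreserveLoggingContext

def resolve_with_no_logcontext(deferred, result):
    # Callbacks on this deferred may be yielded on or passed to
    # defer.gatherResults, so we must not leak our logcontext into the
    # reactor: reset to the sentinel context before firing the callbacks.
    with PreserveLoggingContext():
        deferred.callback(result)
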
Richard van der Hoff
c5b0e9f485 Turn _start_key_lookups into an inlineCallbacks function
... which means that logcontexts can be correctly preserved for the stuff it
does.

get_server_verify_keys is now called with the logcontext, so needs to
preserve_fn when it fires off its nested inlineCallbacks function.

Also renames get_server_verify_keys to reflect the fact it's meant to be
private.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
abdefb8a01 Fix potential race in _start_key_lookups
If the verify_request.deferred has already completed, then `remove_deferreds`
will be called immediately. It therefore might resolve the server_to_deferred
deferred while there are still other requests for that server in flight.

To avoid that, we should build the complete list of requests, and *then* add the
callbacks.
2017-09-20 01:32:42 +01:00
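
The two-phase pattern described above, as a generic sketch (hypothetical names, assuming Twisted deferreds):

def notify_when_all_done(requests, on_all_done):
    # Phase 1: build the complete set of deferreds up front.
    pending = set(r.deferred for r in requests)

    def _remove(res, d):
        pending.discard(d)
        if not pending:
            on_all_done()
        return res

    # Phase 2: only now attach the callbacks. If a deferred has already
    # fired, _remove runs immediately, but `pending` already holds the
    # full set, so on_all_done cannot fire while other requests are
    # still in flight.
    for r in requests:
        r.deferred.addBoth(_remove, r.deferred)
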
Richard van der Hoff
afbd773dc6 Add some comments to _start_key_lookups 2017-09-20 01:32:42 +01:00
Richard van der Hoff
2a4b9ea233 Consistency for how verify_request.deferred is called
Define that it is run with no log context, and make sure that happens.

If we aren't careful to reset the logcontext, we can't bung the deferreds into
defer.gatherResults etc. We don't actually do that directly, but we *do*
resolve other deferreds from affected callbacks (notably the server_to_deferred
map in _start_key_lookups), and those *do* get passed into
defer.gatherResults. It turns out that this way ends up being least confusing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
3b98439eca Factor out _start_key_lookups
... to make it easier to see what's going on.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fde63b880d Replace server_and_json with verify_requests
This is a precursor to factoring some of this code out.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
2d511defd9 pull out handle_key_deferred to top level
There's no need for this to be a nested definition; pulling it out not only
makes it more efficient, but makes it easier to check that it's not accessing
any local variables it shouldn't be.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
dd1ea9763a Fix incorrect key_ids in error message 2017-09-20 01:32:42 +01:00
Richard van der Hoff
e76d1135dd Invalidate signing key cache when we get an update
This might make the cache slightly more efficient.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
fcf2c0fd1a Remove redundant preserve_fn
preserve_fn is a no-op unless the wrapped function returns a
Deferred. verify_json_objects_for_server returns a list, so this is doing
nothing.
2017-09-20 01:32:42 +01:00
Richard van der Hoff
9864efa532 Fix concurrent server_key requests (#2458)
Fix a bug where we could end up firing off multiple requests for server_keys
for the same server at the same time.
2017-09-19 23:25:44 +01:00
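
A common shape for this kind of fix is to keep a map of in-flight requests and let later callers piggy-back on them; a sketch under assumptions (not necessarily how #2458 implements it):

class KeyFetcher(object):
    def __init__(self, fetch):
        self._fetch = fetch        # hypothetical function doing the real request
        self._in_flight = {}       # server_name -> Deferred already in flight

    def get_server_keys(self, server_name):
        d = self._in_flight.get(server_name)
        if d is None:
            d = self._fetch(server_name)
            self._in_flight[server_name] = d

            def _done(res):
                self._in_flight.pop(server_name, None)
                return res

            d.addBoth(_done)
        # In real Twisted code each caller should get its own deferred
        # (e.g. via an ObservableDeferred) so callback chains don't
        # interfere; returning the shared one keeps the sketch short.
        return d
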
Richard van der Hoff
aa620d09a0 Add a config option to block all room invites (#2457)
- gives sysadmins the ability to lock down their servers so that people can't
send their users room invites.
2017-09-19 16:08:14 +01:00
Richard van der Hoff
2eabdf3f98 add some comments to on_exchange_third_party_invite_request 2017-09-19 12:20:36 +01:00
Richard van der Hoff
5ed109d59f PoC for filtering spammy events (#2456)
Demonstration of how you might add some hooks to filter out spammy events.
2017-09-19 12:20:11 +01:00
Richard van der Hoff
3f405b34e9 Fix overzealous kicking of guest users (#2453)
We should only kick guest users if the guest access event is authorised.
2017-09-19 08:52:52 +01:00
Richard van der Hoff
290777b3d9 Clean up and document handling of logcontexts in Keyring (#2452)
I'm still unclear on what the intended behaviour for
`verify_json_objects_for_server` is, but at least I now understand the
behaviour of most of the things it calls...
2017-09-18 18:31:01 +01:00
Erik Johnston
77c81ca6ea Merge pull request #2451 from matrix-org/erikj/add_state_to_timeline
Don't filter out current state events from timeline
2017-09-18 17:22:33 +01:00
Erik Johnston
2d1b7955ae Don't filter out current state events from timeline 2017-09-18 17:13:03 +01:00
David Baker
862c8da560 Merge pull request #2450 from matrix-org/dbkr/push_event_id_only
Add support for event_id_only push format
2017-09-18 16:41:29 +01:00
Erik Johnston
2d9f341c3e Merge pull request #2449 from matrix-org/erikj/rejoin_device_lists
Correctly handle leaving room in /key/changes
2017-09-18 15:59:13 +01:00
David Baker
436ee0a2ea Also include the room_id
as really it's part of the event ID
2017-09-18 15:58:38 +01:00
David Baker
b393f5db51 Use .get - it's much shorter 2017-09-18 15:50:26 +01:00
David Baker
a2562f9d74 Add support for event_id_only push format
Param in the data dict of a pusher that tells an HTTP pusher to
send just the event_id of the event it's notifying about and the
notification counts. For clients that want to go & fetch the body
of the event themselves anyway.
2017-09-18 15:39:39 +01:00
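
Illustratively, the pusher's `data` dict carries the format flag, and the resulting notification contains only identifiers and counts. Apart from the `event_id_only` name and the room_id added in the follow-up commit, the exact field layout below is an assumption:

# Set when the client registers the HTTP pusher:
pusher_data = {
    "url": "https://push.example.com/_matrix/push/v1/notify",  # hypothetical gateway
    "format": "event_id_only",
}

# What the pusher then sends instead of the full event body:
notification = {
    "event_id": "$143273582443PhrSn:example.com",
    "room_id": "!room:example.com",
    "counts": {"unread": 2},
}
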
Erik Johnston
d6dadd95ac Correctly handle leaving room in /key/changes 2017-09-18 15:38:22 +01:00
Erik Johnston
993d3f710b Merge pull request #2443 from matrix-org/erikj/rejoin_device_lists
Send down device list change notif when member leaves/rejoins room
2017-09-18 13:17:53 +01:00
Erik Johnston
4a94eb3ea4 Fix typo 2017-09-15 09:56:54 +01:00
Erik Johnston
3a0cee28d6 Actually hook leave notifs up 2017-09-14 11:49:37 +01:00
Erik Johnston
4f845a0713 Handle joining/leaving rooms in /keys/changes 2017-09-13 16:28:08 +01:00
Erik Johnston
473700f016 Get left rooms 2017-09-13 15:13:41 +01:00
Erik Johnston
9ce866ed4f In sync handle device lists for newly joined/left rooms 2017-09-12 16:44:26 +01:00
Erik Johnston
69ef4987a6 Add left section to /keys/changes 2017-09-08 14:44:36 +01:00
Erik Johnston
53cc8ad35a Send down device list change notif when member leaves/rejoins room 2017-09-07 15:08:39 +01:00
Richard van der Hoff
e2fcba038c Merge pull request #2439 from matrix-org/rav/tox_tweaks
do tox install with pip -e
2017-09-06 17:08:19 +01:00
Richard van der Hoff
5f59f20636 Merge remote-tracking branch 'origin/master' into develop 2017-09-05 21:58:19 +01:00
Richard van der Hoff
59de2c7afa Exclude the github issue template from our sdist (#2440)
PR #2413 added an issue template, but just adding files to the project
directory upsets the packaging scripts: we need to explicitly include or
exclude them.

Move the template into a .github directory to make that easy, and to de-clutter
the root a bit.
2017-09-05 21:57:19 +01:00
Richard van der Hoff
4b616c8cf2 Merge branch 'master' into develop 2017-09-05 17:51:13 +01:00
Richard van der Hoff
4dd61df6f8 do tox install with pip -e
- this ensures we end up with a working virtualenv which we can use for other
things.
2017-09-05 16:35:23 +01:00
Erik Johnston
c0c31656ff Merge pull request #2433 from ptman/patch-1
Document known to work postgres version
2017-09-01 15:28:42 +01:00
Paul Tötterman
8b16b43b7f Document known to work postgres version 2017-09-01 16:52:45 +03:00
Richard van der Hoff
dff396de0f Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 11:20:37 +01:00
Richard van der Hoff
f06ffdb6fa fix python path in jenkins scripts 2017-09-01 10:31:45 +01:00
Richard van der Hoff
6e67aaa7f2 Set --python when running sytest
.. because I want to make the 'install_and_run' script useful for non-synapse
jobs, which do not accept --python. In any case we set up the path here, so
sytest shouldn't be guessing it.
2017-09-01 10:06:21 +01:00
Richard van der Hoff
934ab76835 Merge pull request #2428 from matrix-org/rav/update_upgrade
Tweaks to the upgrade instructions
2017-08-24 11:42:02 +01:00
Richard van der Hoff
fc9878f6a4 Tweaks to the upgrade instructions 2017-08-23 15:27:02 +01:00
Richard van der Hoff
a4d3bfe3d6 Merge pull request #2417 from matrix-org/rav/federation_client
Improvements to the federation test client
2017-08-23 14:50:26 +01:00
Richard van der Hoff
a7effa8400 Merge pull request #2288 from kyrias/bcrypt
python_dependencies: Use bcrypt module instead of py-bcrypt
2017-08-23 14:14:56 +01:00
Richard van der Hoff
a04c6bbf8f test federation client: Allow server-name and key-file as options
so that you don't necessarily need a config file.
2017-08-22 11:19:30 +01:00
Richard van der Hoff
77ea8cbdd7 Merge pull request #2416 from matrix-org/rav/prometheus_config
Add prometheus config
2017-08-22 10:34:40 +01:00
Tom Lant
20b3660495 Merge pull request #2413 from matrix-org/toml-issue-template
Issue template for Synapse
2017-08-21 16:07:35 +01:00
Richard van der Hoff
046b659ce2 Improvements to the federation test client
Make it read the config file, primarily.
2017-08-17 16:59:11 +01:00
Tom Lant
413c270723 Update ISSUE_TEMPLATE.md
Added instructions for checking server version.
2017-08-17 11:14:35 +01:00
Tom Lant
ec3a2dc773 Update ISSUE_TEMPLATE.md
Responding to review comments.
2017-08-17 11:00:51 +01:00
Richard van der Hoff
012875258c Add prometheus config
... from https://github.com/matrix-org/synapse-prometheus-config.
2017-08-16 15:31:44 +01:00
Richard van der Hoff
692250c6be Fix user_dir startup
Add missing parameter to _base.start_worker_reactor
2017-08-16 15:11:29 +01:00
Richard van der Hoff
d2352347cf Fix process startup
escape the % that got added in 92168cb so that the process starts up ok.
2017-08-16 14:57:35 +01:00
Matthew Hodgson
92168cbbc5 explain why CPU affinity is a good idea 2017-08-15 18:27:42 +01:00
Richard van der Hoff
963015005e Merge pull request #2415 from matrix-org/rav/synctl_cpu_affinity
Allow configuration of CPU affinity
2017-08-15 17:42:05 +01:00
Richard van der Hoff
10d8b701a1 Allow configuration of CPU affinity
Make it possible to set the CPU affinity in the config file, so that we don't
need to remember to do it manually every time.
2017-08-15 17:08:28 +01:00
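
For illustration only (Synapse's actual mechanism may differ), pinning a process to particular cores on Linux can look like this, assuming Python 3's os.sched_setaffinity:

import os

# Pin the current process (pid 0 means "self") to cores 0 and 1. Doing
# this at startup from a config value avoids running `taskset` by hand.
os.sched_setaffinity(0, {0, 1})
print("now restricted to CPUs:", os.sched_getaffinity(0))
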
Richard van der Hoff
543c794a76 Factor out common application start
We have 10 copies of this code, and I don't really want to update each one
separately.
2017-08-15 17:04:40 +01:00
Tom Lant
57cd0c3dea Update ISSUE_TEMPLATE.md
Removed the sentence encouraging people not to file a bug - if people are in doubt we'd rather they filed a bug than gave up entirely.
2017-08-14 14:40:32 +01:00
Tom Lant
b524dd4c35 Update ISSUE_TEMPLATE.md
Oops capital L.
2017-08-14 14:36:49 +01:00
Tom Lant
09703609fc Create ISSUE_TEMPLATE.md
A new issue template proposed to try and steer people towards #matrix:matrix.org for support queries relating to running their own homeserver.
2017-08-14 14:35:25 +01:00
hera
eae04f1952 fix english 2017-08-04 23:56:42 +01:00
hera
5699b05072 typo 2017-08-04 23:44:37 +01:00
Erik Johnston
09552f9d9c Reduce spammy log line in synchrotrons 2017-08-02 17:29:51 +01:00
Kenny Keslar
f18373dc5d Fix iteration of requests_missing_keys; list doesn't have .values()
Signed-off-by: Kenny Keslar <r3dey3@r3dey3.com>
2017-07-26 22:44:19 -05:00
Erik Johnston
b27429729d Merge pull request #2375 from matrix-org/erikj/port_script
Fix port script for user directory tables
2017-07-20 16:16:43 +01:00
Erik Johnston
60a9a49f83 Extend comment 2017-07-20 16:16:29 +01:00
Erik Johnston
d7d24750be Fix port script for user directory tables 2017-07-20 10:47:01 +01:00
Erik Johnston
514c2d3c4d Merge pull request #2371 from matrix-org/erikj/push_cache_hit
Increase cache hit ratio for push
2017-07-17 09:42:27 +01:00
Erik Johnston
bfde076022 Increase cache hit ratio for push
We don't update the cache in all code paths, which causes subsequent
calls to miss the cache
2017-07-14 16:11:26 +01:00
Erik Johnston
d3862812ff Merge pull request #2366 from matrix-org/erikj/push_metrics
Add more metrics to push rule evaluation
2017-07-14 11:04:03 +01:00
Erik Johnston
8d26385d76 Add more metrics to push rule evaluation 2017-07-13 14:37:30 +01:00
Erik Johnston
67b7b904ba Merge pull request #2365 from matrix-org/erikj/push_skip_lock
Push: Don't acquire lock unless necessary
2017-07-13 11:44:48 +01:00
Erik Johnston
f60218ec41 Push: Don't acquire lock unless necessary 2017-07-13 11:23:53 +01:00
Erik Johnston
91818723a1 Merge pull request #2362 from matrix-org/erikj/sync_user_users_who_share
Use less DB for device list handling in sync
2017-07-12 10:45:30 +01:00
Erik Johnston
e9aec001f4 Use less DB for device list handling in sync 2017-07-12 10:30:10 +01:00
Erik Johnston
0184a97dbd Merge pull request #2354 from krombel/reduce_static_sync_reply
encode sync-response statically
2017-07-11 14:19:56 +01:00
Krombel
85b9f76f1d split out reducing stuff; just make encode_* static 2017-07-11 13:14:35 +02:00
Erik Johnston
e2cb760dcc Merge pull request #2357 from matrix-org/erikj/push
Don't compute push actions for backfilled events
2017-07-11 10:53:22 +01:00
Erik Johnston
925b3638ff Reduce log levels in tcp replication 2017-07-11 10:04:21 +01:00
Erik Johnston
9a6fd3ef29 Don't compute push actions for backfilled events 2017-07-11 10:02:21 +01:00
Krombel
2f82de18ee fix test 2017-07-10 17:34:58 +02:00
Krombel
6e16aca8b0 encode sync-response statically; omit empty objects from sync-response 2017-07-10 16:42:17 +02:00
Erik Johnston
d4d12daed9 Include registration and as stores in frontend proxy 2017-07-07 18:36:45 +01:00
Erik Johnston
f467a8f66d Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-07 18:26:28 +01:00
Erik Johnston
c9184ed87e Merge pull request #2344 from matrix-org/erikj/frontend_proxy
Add a frontend proxy
2017-07-07 18:25:46 +01:00
Erik Johnston
1fc4a962e4 Add a frontend proxy 2017-07-07 18:19:46 +01:00
Erik Johnston
08284c86ed Merge pull request #2343 from matrix-org/erikj/fastpush
Perf: Don't filter events for push
2017-07-07 14:25:42 +01:00
Erik Johnston
f502b0dea1 Perf: Don't filter events for push
We know the users are joined and we can explicitly check whether they are
ignoring the user, so let's do that.
2017-07-07 14:04:40 +01:00
Erik Johnston
1200f28d66 Merge branch 'hotfixes-v0.22.1' of github.com:matrix-org/synapse 2017-07-06 18:11:49 +01:00
Erik Johnston
76ed3476d3 Bump version and changelog 2017-07-06 18:11:22 +01:00
Erik Johnston
58dc1f2c78 Merge pull request #2342 from matrix-org/erikj/pusher_pool_instantiate
Fix bug where pusherpool didn't start and broke some rooms
2017-07-06 18:08:43 +01:00
Erik Johnston
5a7f561a9b Fix bug where pusherpool didn't start and broke some rooms
Since we didn't instantiate the PusherPool at start time it could fail
at run time, which it did for some users.

This may or may not fix things for those users, but any failure should now
happen at start time and stop the server from starting.
2017-07-06 17:55:51 +01:00
Erik Johnston
ed9a7f5436 Merge pull request #2309 from matrix-org/erikj/user_ip_repl
Fix up user_ip replication commands
2017-07-06 14:33:14 +01:00
Erik Johnston
1f64207f26 Merge branch 'master' of github.com:matrix-org/synapse into develop 2017-07-06 13:57:45 +01:00
Erik Johnston
42b50483be Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse 2017-07-06 10:36:25 +01:00
Erik Johnston
6264cf9666 Bump version and changelog 2017-07-06 10:35:56 +01:00
Erik Johnston
f386632800 Merge pull request #2334 from matrix-org/erikj/refactor_transport_server
Separate federation servlet into different lists
2017-07-05 17:09:07 +01:00
Erik Johnston
5e49a57ecc Separate federation servlet into different lists 2017-07-05 14:32:24 +01:00
Richard van der Hoff
3d31b39297 Merge pull request #2332 from matrix-org/rav/fix_pushes
Fix caching error in the push evaluator
2017-07-05 11:10:53 +01:00
Richard van der Hoff
73cfe48031 Fix caching error in the push evaluator
Initialising `result` to `{}` in the parameters meant that every call to
_flatten_dict used the *same* target dictionary.

I'm hopeful this will fix https://github.com/matrix-org/synapse/issues/2270,
but I suspect it won't. (This code seems to have been here since forever,
unlike the bug, so I don't really think it explains the observed
behaviour.) Still, an error like this makes the problem harder to investigate.
2017-07-05 00:28:43 +01:00
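
The underlying pitfall is Python's mutable default argument: the default dict is created once, at function definition time, and shared across every call. A minimal sketch of the bug and the conventional fix (a hypothetical simplification, not the real _flatten_dict):

def flatten_buggy(d, result={}):      # BUG: one dict shared by every call
    result.update(d)
    return result

def flatten_fixed(d, result=None):
    if result is None:
        result = {}                   # fresh dict on each call
    result.update(d)
    return result

print(flatten_buggy({"a": 1}))        # {'a': 1}
print(flatten_buggy({"b": 2}))        # {'a': 1, 'b': 2} - stale state leaks
print(flatten_fixed({"b": 2}))        # {'b': 2}
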
Erik Johnston
05538587ef Bump version and changelog 2017-07-04 14:02:21 +01:00
Erik Johnston
f92d7416d7 Merge pull request #2330 from matrix-org/erikj/cache_size_factor
Increase default cache size
2017-07-04 10:51:21 +01:00
Mark Haines
1f12d808e7 Merge pull request #2323 from matrix-org/markjh/invite_checks
Improve the error handling for bad invites received over federation
2017-07-04 10:50:43 +01:00
Erik Johnston
29a4066a4d Update test 2017-07-04 10:21:25 +01:00
Erik Johnston
7afb4e3f54 Update README 2017-07-04 10:00:52 +01:00
Erik Johnston
495f075b41 Increase default cache factor size. 2017-07-04 09:58:32 +01:00
Erik Johnston
b5e8d529e6 Define CACHE_SIZE_FACTOR once 2017-07-04 09:56:44 +01:00
Mark Haines
3e279411fe Improve the error handling for bad invites received over federation 2017-06-30 16:20:30 +01:00
Erik Johnston
47574c9cba Merge pull request #2321 from matrix-org/erikj/prefill_forward
Prefill forward extrems and event to state groups
2017-06-30 11:03:04 +01:00
Erik Johnston
6ff14ddd2e Make into list 2017-06-29 15:47:37 +01:00
Erik Johnston
5946aa0877 Prefill forward extrems and event to state groups 2017-06-29 15:38:48 +01:00
Erik Johnston
d800ab2847 Merge pull request #2320 from matrix-org/erikj/cache_macaroon_parse
Cache macaroon parse and validation
2017-06-29 15:06:43 +01:00
Erik Johnston
2c365f4723 Cache macaroon parse and validation
Turns out this can be quite expensive for requests, and is easily
cacheable. We don't cache the lookup to the DB, so invalidation still
works.
2017-06-29 14:50:18 +01:00
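
The shape of the optimisation, as a hedged sketch with hypothetical names: cache the expensive parse-and-validate step keyed on the raw token, while the per-token database lookup stays uncached so that deleting a token still invalidates it immediately.

_token_cache = {}  # raw token string -> (user_id, is_guest)

def parse_and_validate(token):
    if token in _token_cache:
        return _token_cache[token]
    result = _expensive_parse(token)
    _token_cache[token] = result
    return result

def _expensive_parse(token):
    # stand-in for pymacaroons deserialisation plus caveat verification
    return ("@user:example.com", False)

print(parse_and_validate("some-access-token"))
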
Erik Johnston
a1a253ea50 Merge pull request #2319 from matrix-org/erikj/prune_sessions
Use an ExpiringCache for storing registration sessions
2017-06-29 14:20:24 +01:00
Erik Johnston
c72058bcc6 Use an ExpiringCache for storing registration sessions
This is because pruning them was a significant performance drain on
matrix.org
2017-06-29 14:08:37 +01:00
Erik Johnston
27f26e48b7 Serialize user ip command as json 2017-06-27 16:25:38 +01:00
Erik Johnston
8c23221666 Fix up 2017-06-27 15:53:45 +01:00
Erik Johnston
731f3c37a0 Merge branch 'release-v0.22.0' of github.com:matrix-org/synapse into develop 2017-06-27 15:41:34 +01:00
Erik Johnston
4b444723f0 Merge pull request #2308 from matrix-org/erikj/user_ip_repl
Make workers report to master for user ip updates
2017-06-27 15:36:47 +01:00
Erik Johnston
816605a137 Merge pull request #2307 from matrix-org/erikj/user_ip_batch
Batch upsert user ips
2017-06-27 15:08:32 +01:00
Erik Johnston
78cefd78d6 Make workers report to master for user ip updates 2017-06-27 14:58:10 +01:00
Erik Johnston
a0a561ae85 Fix up client ips to read from pending data 2017-06-27 14:46:12 +01:00
Erik Johnston
ed3d0170d9 Batch upsert user ips 2017-06-27 13:37:04 +01:00
Erik Johnston
d04d672a80 Merge pull request #2290 from matrix-org/erikj/ensure_round_trip
Reject local events that don't round trip the DB
2017-06-26 15:12:02 +01:00
Erik Johnston
1bce3e6b35 Remove unused variables 2017-06-26 14:03:27 +01:00
Erik Johnston
e3cbec10c1 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/ensure_round_trip 2017-06-26 14:02:44 +01:00
Erik Johnston
fcf01dd88e Reject local events that don't round trip the DB 2017-06-19 11:33:40 +01:00
Johannes Löthberg
4f66312df8 python_dependencies: Use bcrypt module instead of py-bcrypt
py-bcrypt has been unmaintained for a long while, while bcrypt is
actively maintained. And since ff8b87118d
we're compatible with bcrypt anyway.

Signed-off-by: Johannes Löthberg <johannes@kyriasis.com>
2017-06-17 17:39:35 +02:00
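
The bcrypt module's API differs slightly from py-bcrypt's; a minimal usage sketch (assuming the `bcrypt` package):

import bcrypt

# Hash with a freshly generated salt (12 rounds, chosen as an example).
hashed = bcrypt.hashpw(b"s3cret passphrase", bcrypt.gensalt(12))

# Verify by re-hashing with the salt embedded in the stored hash.
if bcrypt.hashpw(b"s3cret passphrase", hashed) == hashed:
    print("password ok")
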
77 changed files with 2711 additions and 992 deletions

.github/ISSUE_TEMPLATE.md (new file, 47 lines)

@@ -0,0 +1,47 @@
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all
the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Describe here the problem that you are experiencing, or the feature you are requesting.
### Steps to reproduce
- For bugs, list the steps
- that reproduce the bug
- using hyphens as bullet points
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Homeserver**: Was this issue identified on matrix.org or another homeserver?
If not matrix.org:
- **Version**: What version of Synapse is running? <!--
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
- distro, hardware, if it's running in a vm/container, etc.


@@ -1,3 +1,80 @@
Changes in synapse v0.23.0 (2017-10-02)
=======================================
No changes since v0.23.0-rc2
Changes in synapse v0.23.0-rc2 (2017-09-26)
===========================================
Bug fixes:
* Fix regression in performance of syncs (PR #2470)
Changes in synapse v0.23.0-rc1 (2017-09-25)
===========================================
Features:
* Add a frontend proxy worker (PR #2344)
* Add support for event_id_only push format (PR #2450)
* Add a PoC for filtering spammy events (PR #2456)
* Add a config option to block all room invites (PR #2457)
Changes:
* Use bcrypt module instead of py-bcrypt (PR #2288) Thanks to @kyrias!
* Improve performance of generating push notifications (PR #2343, #2357, #2365,
#2366, #2371)
* Improve DB performance for device list handling in sync (PR #2362)
* Include a sample prometheus config (PR #2416)
* Document known to work postgres version (PR #2433) Thanks to @ptman!
Bug fixes:
* Fix caching error in the push evaluator (PR #2332)
* Fix bug where pusherpool didn't start and broke some rooms (PR #2342)
* Fix port script for user directory tables (PR #2375)
* Fix device lists notifications when user rejoins a room (PR #2443, #2449)
* Fix sync to always send down current state events in timeline (PR #2451)
* Fix bug where guest users were incorrectly kicked (PR #2453)
* Fix bug talking to IPv6 only servers using SRV records (PR #2462)
Changes in synapse v0.22.1 (2017-07-06)
=======================================
Bug fixes:
* Fix bug where pusher pool didn't start and caused issues when
interacting with some rooms (PR #2342)
Changes in synapse v0.22.0 (2017-07-06)
=======================================
No changes since v0.22.0-rc2
Changes in synapse v0.22.0-rc2 (2017-07-04)
===========================================
Changes:
* Improve performance of storing user IPs (PR #2307, #2308)
* Slightly improve performance of verifying access tokens (PR #2320)
* Slightly improve performance of event persistence (PR #2321)
* Increase default cache factor size from 0.1 to 0.5 (PR #2330)
Bug fixes:
* Fix bug with storing registration sessions that caused frequent CPU churn
(PR #2319)
Changes in synapse v0.22.0-rc1 (2017-06-26)
===========================================


@@ -27,4 +27,5 @@ exclude jenkins*.sh
exclude jenkins*
recursive-exclude jenkins *.sh
prune .github
prune demo/etc


@@ -200,11 +200,11 @@ different. See `the spec`__ for more information on key management.)
.. __: `key_management`_
The default configuration exposes two HTTP ports: 8008 and 8448. Port 8008 is
configured without TLS; it is not recommended this be exposed outside your
local network. Port 8448 is configured to use TLS with a self-signed
certificate. This is fine for testing with but, to avoid your clients
complaining about the certificate, you will almost certainly want to use
another certificate for production purposes. (Note that a self-signed
configured without TLS; it should be behind a reverse proxy for TLS/SSL
termination on port 443 which in turn should be used for clients. Port 8448
is configured to use TLS with a self-signed certificate. If you would like
to do an initial test with a client without having to set up a reverse proxy,
you can temporarily use another certificate. (Note that a self-signed
certificate is fine for `Federation`_). You can do so by changing
``tls_certificate_path``, ``tls_private_key_path`` and ``tls_dh_params_path``
in ``homeserver.yaml``; alternatively, you can use a reverse-proxy, but be sure
@@ -283,10 +283,16 @@ Connecting to Synapse from a client
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://localhost:8448`` - remember to specify the
port (``:8448``) unless you changed the configuration. (Leave the identity
or register: set this to ``https://domain.tld`` if you set up a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to specify the
port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
server as the default - see `Identity servers`_.)
If using port 8448 you will run into errors until you accept the self-signed
certificate. You can easily do this by going to ``https://localhost:8448``
directly with your browser and accepting the presented certificate. You can
then go back to your web client and proceed further.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
@@ -359,7 +365,7 @@ https://www.archlinux.org/packages/community/any/matrix-synapse/, which should p
the necessary dependencies. If the default web client is to be served (enabled by default in
the generated config),
https://www.archlinux.org/packages/community/any/python2-matrix-angular-sdk/ will also need to
be installed.
be installed.
Alternatively, to install using pip, a few changes may be needed as ArchLinux
defaults to python 3, but synapse currently assumes python 2.7 by default:
@@ -593,8 +599,9 @@ you to run your server on a machine that might not have the same name as your
domain name. For example, you might want to run your server at
``synapse.example.com``, but have your Matrix user-ids look like
``@user:example.com``. (A SRV record also allows you to change the port from
the default 8448. However, if you are thinking of using a reverse-proxy, be
sure to read `Reverse-proxying the federation port`_ first.)
the default 8448. However, if you are thinking of using a reverse-proxy on the
federation port, which is not recommended, be sure to read
`Reverse-proxying the federation port`_ first.)
To use a SRV record, first create your SRV record and publish it in DNS. This
should have the format ``_matrix._tcp.<yourdomain.com> <ttl> IN SRV 10 0 <port>
@@ -674,7 +681,7 @@ For information on how to install and use PostgreSQL, please see
Using a reverse proxy with Synapse
==================================
It is possible to put a reverse proxy such as
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
@@ -692,9 +699,9 @@ federation port has a number of pitfalls. It is possible, but be sure to read
`Reverse-proxying the federation port`_.
The recommended setup is therefore to configure your reverse-proxy on port 443
for client connections, but to also expose port 8448 for server-server
connections. All the Matrix endpoints begin ``/_matrix``, so an example nginx
configuration might look like::
to port 8008 of synapse for client connections, but to also directly expose port
8448 for server-server connections. All the Matrix endpoints begin ``/_matrix``,
so an example nginx configuration might look like::
server {
listen 443 ssl;
@@ -899,12 +906,9 @@ cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in future, but for now the easiest
way to either reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. Roughly speaking, a SYNAPSE_CACHE_FACTOR of 1.0 will max out
at around 3-4GB of resident memory - this is what we currently run the
matrix.org on. The default setting is currently 0.1, which is probably
around a ~700MB footprint. You can dial it down further to 0.02 if
desired, which targets roughly ~512MB. Conversely you can dial it up if
you need performance for lots of users and have a box with a lot of RAM.
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys
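
Roughly how a global factor like ``SYNAPSE_CACHE_FACTOR`` is consumed (a sketch; the exact handling in Synapse may differ):

import os

# One tunable scales every cache's maximum size.
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.5))

def scaled_cache_size(base_size):
    return int(base_size * CACHE_SIZE_FACTOR)

print(scaled_cache_size(10000))  # cf. the token_cache sizing in the auth.py diff below
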


@@ -5,39 +5,48 @@ Before upgrading, check if any special steps are required to upgrade from
what you currently have installed to the current version of synapse. The extra
instructions that may be required are listed later in this document.
If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
1. If synapse was installed in a virtualenv then activate that virtualenv
before upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then
run:
.. code:: bash
source ~/.synapse/bin/activate
2. If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
# restart synapse
synctl restart
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs pip install --upgrade
# restart synapse
./synctl restart
To check whether your update was successful, you can check the Server header
returned by the Client-Server API:
.. code:: bash
source ~/.synapse/bin/activate
If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
# Update the versions of synapse's python dependencies.
python synapse/python_dependencies.py | xargs -n1 pip install --upgrade
To check whether your update was successful, run:
.. code:: bash
# replace your.server.domain with the domain of your synapse homeserver
curl https://<your.server.domain>/_matrix/federation/v1/version
So for the matrix.org homeserver the URL would be: https://matrix.org/_matrix/federation/v1/version.
# replace <host.name> with the hostname of your synapse homeserver.
# You may need to specify a port (eg, :8448) if your server is not
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v0.15.0
====================
@@ -77,7 +86,7 @@ It has been replaced by specifying a list of application service registrations i
``homeserver.yaml``::
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
Where ``registration-01.yaml`` looks like::
url: <String> # e.g. "https://my.application.service.com"
@@ -166,7 +175,7 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.5.0.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
@@ -175,18 +184,18 @@ file and ask for help in #matrix:matrix.org. The upgrade process is,
unfortunately, non-trivial and requires human intervention to resolve any
resulting conflicts during the upgrade process.
Before running the command the homeserver should be first completely
Before running the command the homeserver should be first completely
shutdown. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.5.0.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in the local HS will
message to a room that the homeserver was previously in the local HS will
automatically rejoin the room.
Upgrading to v0.4.0
@@ -245,7 +254,7 @@ automatically generate default config use::
--config-path homeserver.config \
--generate-config
This config can be edited if desired, for example to specify a different SSL
This config can be edited if desired, for example to specify a different SSL
certificate to use. Once done you can run the home server using::
$ python synapse/app/homeserver.py --config-path homeserver.config
@@ -266,20 +275,20 @@ This release completely changes the database schema and so requires upgrading
it before starting the new version of the homeserver.
The script "database-prepare-for-0.0.1.sh" should be used to upgrade the
database. This will save all user information, such as logins and profiles,
database. This will save all user information, such as logins and profiles,
but will otherwise purge the database. This includes messages, which
rooms the home server was a member of and room alias mappings.
Before running the command the homeserver should be first completely
Before running the command the homeserver should be first completely
shutdown. To run it, simply specify the location of the database, e.g.:
./scripts/database-prepare-for-0.0.1.sh "homeserver.db"
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
Once this has successfully completed it will be safe to restart the
homeserver. You may notice that the homeserver takes a few seconds longer to
restart than usual as it reinitializes the database.
On startup of the new version, users can either rejoin remote rooms using room
aliases or by being reinvited. Alternatively, if any other homeserver sends a
message to a room that the homeserver was previously in the local HS will
message to a room that the homeserver was previously in the local HS will
automatically rejoin the room.

contrib/prometheus/README (new file, 20 lines)

@@ -0,0 +1,20 @@
This directory contains some sample monitoring config for using the
'Prometheus' monitoring server against synapse.
To use it, first install prometheus by following the instructions at
http://prometheus.io/
Then add a new job to the main prometheus.conf file:
job: {
name: "synapse"
target_group: {
target: "http://SERVER.LOCATION.HERE:PORT/_synapse/metrics"
}
}
Metrics are disabled by default when running synapse; they must be enabled
with the 'enable-metrics' option, either in the synapse config file or as a
command-line option.


@@ -0,0 +1,395 @@
{{ template "head" . }}
{{ template "prom_content_head" . }}
<h1>System Resources</h1>
<h3>CPU</h3>
<div id="process_resource_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_utime"),
expr: "rate(process_cpu_seconds_total[2m]) * 100",
name: "[[job]]",
min: 0,
max: 100,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "%",
yTitle: "CPU Usage"
})
</script>
<h3>Memory</h3>
<div id="process_resource_maxrss"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_resource_maxrss"),
expr: "process_psutil_rss:max",
name: "Maxrss",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "bytes",
yTitle: "Usage"
})
</script>
<h3>File descriptors</h3>
<div id="process_fds"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#process_fds"),
expr: "process_open_fds{job='synapse'}",
name: "FDs",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "",
yTitle: "Descriptors"
})
</script>
<h1>Reactor</h1>
<h3>Total reactor time</h3>
<div id="reactor_total_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_total_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
name: "time",
max: 1,
min: 0,
renderer: "area",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Average reactor tick time</h3>
<div id="reactor_average_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_average_time"),
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
name: "time",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s",
yTitle: "Time"
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
name: "calls",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Cals"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>
<div id="synapse_storage_query_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_query_time"),
expr: "rate(synapse_storage_query_time:count[2m])",
name: "[[verb]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "queries/s",
yTitle: "Queries"
})
</script>
<h3>Transactions</h3>
<div id="synapse_storage_transaction_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transaction_time"),
expr: "rate(synapse_storage_transaction_time:count[2m])",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "txn/s",
yTitle: "Transactions"
})
</script>
<h3>Transaction execution time</h3>
<div id="synapse_storage_transactions_time_msec"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_transactions_time_msec"),
expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
name: "[[desc]]",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Database scheduling latency</h3>
<div id="synapse_storage_schedule_time"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_storage_schedule_time"),
expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
name: "Total latency",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "Usage"
})
</script>
<h3>Cache hit ratio</h3>
<div id="synapse_cache_ratio"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_ratio"),
expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
name: "[[name]]",
min: 0,
max: 100,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "%",
yTitle: "Percentage"
})
</script>
<h3>Cache size</h3>
<div id="synapse_cache_size"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_cache_size"),
expr: "synapse_util_caches_cache:size",
name: "[[name]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Items"
})
</script>
<h1>Requests</h1>
<h3>Requests by Servlet</h3>
<div id="synapse_http_server_requests_servlet"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet"),
expr: "rate(synapse_http_server_requests:servlet[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h4>&nbsp;(without <tt>EventStreamRestServlet</tt> or <tt>SyncRestServlet</tt>)</h4>
<div id="synapse_http_server_requests_servlet_minus_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_requests_servlet_minus_events"),
expr: "rate(synapse_http_server_requests:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Average response times</h3>
<div id="synapse_http_server_response_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_time_avg"),
expr: "rate(synapse_http_server_response_time:total[2m]) / rate(synapse_http_server_response_time:count[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h3>All responses by code</h3>
<div id="synapse_http_server_responses"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses"),
expr: "rate(synapse_http_server_responses[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Error responses by code</h3>
<div id="synapse_http_server_responses_err"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_responses_err"),
expr: "rate(synapse_http_server_responses{code=~\"[45]..\"}[2m])",
name: "[[method]] / [[code]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>CPU Usage</h3>
<div id="synapse_http_server_response_ru_utime"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_ru_utime"),
expr: "rate(synapse_http_server_response_ru_utime:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "CPU Usage"
})
</script>
<h3>DB Usage</h3>
<div id="synapse_http_server_response_db_txn_duration"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
expr: "rate(synapse_http_server_response_db_txn_duration:total[2m])",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/s",
yTitle: "DB Usage"
})
</script>
<h3>Average event send times</h3>
<div id="synapse_http_server_send_time_avg"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_http_server_send_time_avg"),
expr: "rate(synapse_http_server_response_time:total{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_time:count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
name: "[[servlet]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "s/req",
yTitle: "Response time"
})
</script>
<h1>Federation</h1>
<h3>Sent Messages</h3>
<div id="synapse_federation_client_sent"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_client_sent"),
expr: "rate(synapse_federation_client_sent[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Received Messages</h3>
<div id="synapse_federation_server_received"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_server_received"),
expr: "rate(synapse_federation_server_received[2m])",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "req/s",
yTitle: "Requests"
})
</script>
<h3>Pending</h3>
<div id="synapse_federation_transaction_queue_pending"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_federation_transaction_queue_pending"),
expr: "synapse_federation_transaction_queue_pending",
name: "[[type]]",
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Units"
})
</script>
<h1>Clients</h1>
<h3>Notifiers</h3>
<div id="synapse_notifier_listeners"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_listeners"),
expr: "synapse_notifier_listeners",
name: "listeners",
min: 0,
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
yUnits: "",
yTitle: "Listeners"
})
</script>
<h3>Notified Events</h3>
<div id="synapse_notifier_notified_events"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#synapse_notifier_notified_events"),
expr: "rate(synapse_notifier_notified_events[2m])",
name: "events",
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yUnits: "events/s",
yTitle: "Event rate"
})
</script>
{{ template "prom_content_tail" . }}
{{ template "tail" }}


@@ -0,0 +1,21 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_requests:method{servlet=""} = sum(synapse_http_server_requests) by (method)
synapse_http_server_requests:servlet{method=""} = sum(synapse_http_server_requests) by (servlet)
synapse_http_server_requests:total{servlet=""} = sum(synapse_http_server_requests:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0


@@ -9,9 +9,10 @@ Description=Synapse Matrix homeserver
Type=simple
User=synapse
Group=synapse
EnvironmentFile=-/etc/sysconfig/synapse
WorkingDirectory=/var/lib/synapse
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml --log-config=/etc/synapse/log_config.yaml
ExecStart=/usr/bin/python2.7 -m synapse.app.homeserver --config-path=/etc/synapse/homeserver.yaml
ExecStop=/usr/bin/synctl stop /etc/synapse/homeserver.yaml
[Install]
WantedBy=multi-user.target


@@ -1,6 +1,8 @@
Using Postgres
--------------
Postgres version 9.4 or later is known to work.
Set up database
===============


@@ -17,6 +17,7 @@ export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \


@@ -15,5 +15,6 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \


@@ -14,4 +14,5 @@ export SYNAPSE_CACHE_FACTOR=1
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \


@@ -12,4 +12,5 @@ export SYNAPSE_CACHE_FACTOR=1
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

scripts-dev/federation_client.py (87 changes; file made executable)

@@ -1,10 +1,30 @@
#!/usr/bin/env python
#
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import nacl.signing
import json
import base64
import requests
import sys
import srvlookup
import yaml
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -120,11 +140,13 @@ def get_json(origin_name, origin_key, destination, path):
origin_name, key, sig,
)
authorization_headers.append(bytes(header))
sys.stderr.write(header)
sys.stderr.write("\n")
print ("Authorization: %s" % header, file=sys.stderr)
dest = lookup(destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.get(
lookup(destination, path),
dest,
headers={"Authorization": authorization_headers[0]},
verify=False,
)
@@ -133,17 +155,66 @@ def get_json(origin_name, origin_key, destination, path):
def main():
origin_name, keyfile, destination, path = sys.argv[1:]
parser = argparse.ArgumentParser(
description=
"Signs and sends a federation request to a matrix homeserver",
)
with open(keyfile) as f:
parser.add_argument(
"-N", "--server-name",
help="Name to give as the local homeserver. If unspecified, will be "
"read from the config file.",
)
parser.add_argument(
"-k", "--signing-key-path",
help="Path to the file containing the private ed25519 key to sign the "
"request with.",
)
parser.add_argument(
"-c", "--config",
default="homeserver.yaml",
help="Path to server config file. Ignored if --server-name and "
"--signing-key-path are both given.",
)
parser.add_argument(
"-d", "--destination",
default="matrix.org",
help="name of the remote homeserver. We will do SRV lookups and "
"connect appropriately.",
)
parser.add_argument(
"path",
help="request path. We will add '/_matrix/federation/v1/' to this."
)
args = parser.parse_args()
if not args.server_name or not args.signing_key_path:
read_args_from_config(args)
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
result = get_json(
origin_name, key, destination, "/_matrix/federation/v1/" + path
args.server_name, key, args.destination, "/_matrix/federation/v1/" + args.path
)
json.dump(result, sys.stdout)
print ""
print ("")
def read_args_from_config(args):
with open(args.config, 'r') as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config['server_name']
if not args.signing_key_path:
args.signing_key_path = config['signing_key_path']
if __name__ == "__main__":
main()
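
The `dest = lookup(destination, path)` call above resolves the destination before the request is issued. A sketch of that resolution logic, assuming the srvlookup package imported at the top of the script (illustrative, not necessarily the exact helper in the file):

import srvlookup

def lookup(destination, path):
    # An explicit port in the destination bypasses SRV resolution.
    if ":" in destination:
        return "https://%s%s" % (destination, path)
    try:
        # Federation discovery uses the _matrix._tcp SRV record.
        srv = srvlookup.lookup("matrix", "tcp", destination)[0]
        return "https://%s:%d%s" % (srv.host, srv.port, path)
    except Exception:
        # No usable SRV record: fall back to the default federation port.
        return "https://%s:8448%s" % (destination, path)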


@@ -252,6 +252,25 @@ class Porter(object):
)
return
if table in (
"user_directory", "user_directory_search", "users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
self.progress.update(table, table_size) # Mark table as done
return
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
table=table,
values={"stream_id": None},
)
self.progress.update(table, table_size) # Mark table as done
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
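
A quick way to confirm the invariant described above after a port; `cur` here is a hypothetical cursor on the target postgres database, not part of the porter:

# Expect exactly one row, whose stream_id is NULL.
cur.execute("SELECT stream_id FROM user_directory_stream_pos")
assert cur.fetchall() == [(None,)]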


@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.22.0-rc1"
__version__ = "0.23.0"


@@ -23,7 +23,8 @@ from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, Codes
from synapse.types import UserID
from synapse.util import logcontext
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -39,6 +40,10 @@ AuthEventTypes = (
GUEST_DEVICE_ID = "guest_device"
class _InvalidMacaroonException(Exception):
pass
class Auth(object):
"""
FIXME: This class contains a mix of functions for authenticating users
@@ -51,6 +56,9 @@ class Auth(object):
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
auth_events_ids = yield self.compute_auth_events(
@@ -200,8 +208,8 @@ class Auth(object):
default=[""]
)[0]
if user and access_token and ip_addr:
logcontext.preserve_fn(self.store.insert_client_ip)(
user=user,
self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
@@ -267,8 +275,8 @@ class Auth(object):
AuthError if no user by that token exists or the token is invalid.
"""
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
user_id, guest = self._parse_and_validate_macaroon(token, rights)
except _InvalidMacaroonException:
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
@@ -277,19 +285,8 @@ class Auth(object):
defer.returnValue(r)
try:
user_id = self.get_user_id_from_macaroon(macaroon)
user = UserID.from_string(user_id)
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id == "guest = true":
guest = True
if guest:
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -361,6 +358,55 @@ class Auth(object):
errcode=Codes.UNKNOWN_TOKEN
)
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
if and only if rights == access and there isn't an expiry.
On invalid macaroon raises _InvalidMacaroonException
Returns:
(user_id, is_guest)
"""
if rights == "access":
cached = self.token_cache.get(token, None)
if cached:
return cached
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
# people use access tokens which aren't macaroons
raise _InvalidMacaroonException()
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
if not has_expiry and rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
@@ -473,6 +519,14 @@ class Auth(object):
)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
Args:
user (str): mxid of user to check
Returns:
bool: True if the user is an admin
"""
return self.store.is_server_admin(user)
@defer.inlineCallbacks
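
The caching rule above turns on caveat inspection: a macaroon is only safe to cache if no "time " caveat can expire it, since a cached (user_id, guest) entry would otherwise outlive its validity. A minimal sketch of that check using pymacaroons directly (illustrative, not Synapse's code):

import pymacaroons

def is_cacheable(serialized_token):
    macaroon = pymacaroons.Macaroon.deserialize(serialized_token)
    # Any "time ..." caveat means the token carries an expiry.
    return not any(
        caveat.caveat_id.startswith("time ")
        for caveat in macaroon.caveats
    )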

synapse/app/_base.py Normal file

@@ -0,0 +1,99 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import affinity
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import reactor
def start_worker_reactor(appname, config):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor. Pulls configuration from the 'worker' settings in 'config'.
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
"""
logger = logging.getLogger(config.worker_app)
start_reactor(
appname,
config.soft_file_limit,
config.gc_thresholds,
config.worker_pid_file,
config.worker_daemonize,
config.worker_cpu_affinity,
logger,
)
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
logger,
):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
the reactor
Args:
appname (str): application name which will be sent to syslog
soft_file_limit (int):
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
logger (logging.Logger): logger instance to pass to Daemonize
"""
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
logger.info("Setting CPU affinity to %s" % cpu_affinity)
affinity.set_process_affinity_mask(0, cpu_affinity)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
if daemonize:
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
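
Every worker diff below now ends the same way. As a usage sketch (the app name is illustrative; config is the worker's loaded HomeServerConfig):

from synapse.app import _base

# Replaces the old per-worker run()/Daemonize boilerplate: daemonize if
# configured, apply CPU affinity and resource limits, then run the
# reactor under the sentinel logcontext.
_base.start_worker_reactor("synapse-example-worker", config)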


@@ -13,38 +13,31 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.appservice")
@@ -181,36 +174,13 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-appservice",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == '__main__':


@@ -13,47 +13,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.client_reader")
@@ -65,8 +57,8 @@ class ClientReaderSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
pass
@@ -183,36 +175,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-client-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == '__main__':


@@ -13,44 +13,36 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import FEDERATION_PREFIX
from synapse.federation.transport.server import TransportLayerServer
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.federation_reader")
@@ -172,36 +164,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-reader",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == '__main__':


@@ -13,44 +13,37 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.federation import send_queue
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.federation_sender")
@@ -213,36 +206,12 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-federation-sender",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-federation-sender", config)
class FederationSenderHandler(object):


@@ -0,0 +1,239 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.errors import SynapseError
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
logger = logging.getLogger("synapse.app.frontend_proxy")
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$",
releases=())
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_POST(self, request, device_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400,
"To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
body,
)
defer.returnValue((200, result))
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
class FrontendProxySlavedStore(
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
):
pass
class FrontendProxyServer(HomeServer):
def get_db_conn(self, run_new_connection=True):
# Any param beginning with cp_ is a parameter for adbapi, and should
# not be passed to the database engine.
db_params = {
k: v for k, v in self.db_config.get("args", {}).items()
if not k.startswith("cp_")
}
db_conn = self.database_engine.module.connect(**db_params)
if run_new_connection:
self.database_engine.on_new_connection(db_conn)
return db_conn
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, Resource())
for address in bind_addresses:
reactor.listenTCP(
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
),
interface=address
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
bind_addresses = listener["bind_addresses"]
for address in bind_addresses:
reactor.listenTCP(
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
),
interface=address
)
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
assert config.worker_main_http_uri is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])


@@ -13,61 +13,48 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
import gc
import logging
import os
import sys
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.python_dependencies import (
check_requirements, CONDITIONAL_REQUIREMENTS
)
from synapse.rest import ClientRestResource
from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
from synapse.storage import are_all_users_on_domain
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.server import HomeServer
from twisted.internet import reactor, defer
from twisted.application import service
from twisted.web.resource import Resource, EncodingResourceWrapper
from twisted.web.static import File
from twisted.web.server import GzipEncoderFactory
from synapse.http.server import RootRedirect
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.api.urls import (
FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
SERVER_KEY_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, STATIC_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.http.site import SynapseSite
from synapse import events
from daemonize import Daemonize
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -446,37 +433,18 @@ def run(hs):
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
def in_thread():
# Uncomment to enable tracing of log context changes.
# sys.settrace(logcontext_tracer)
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
change_resource_limit(hs.config.soft_file_limit)
if hs.config.gc_thresholds:
gc.set_threshold(*hs.config.gc_thresholds)
reactor.run()
if hs.config.daemonize:
if hs.config.print_pidfile:
print (hs.config.pid_file)
daemon = Daemonize(
app="synapse-homeserver",
pid=hs.config.pid_file,
action=lambda: in_thread(),
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
in_thread()
_base.start_reactor(
"synapse-homeserver",
hs.config.soft_file_limit,
hs.config.gc_thresholds,
hs.config.pid_file,
hs.config.daemonize,
hs.config.cpu_affinity,
logger,
)
def main():


@@ -13,57 +13,49 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.media.v1.media_repository import MediaRepositoryResource
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.crypto import context_factory
from synapse import events
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
ClientIpStore,
):
pass
@@ -180,36 +172,13 @@ def start(config_options):
ss.get_handlers()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-media-repository",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-media-repository", config)
if __name__ == '__main__':


@@ -13,41 +13,33 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
import synapse
from synapse.server import HomeServer
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.storage.roommember import RoomMemberStore
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.storage.engines import create_engine
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn, \
PreserveLoggingContext
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse import events
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.pusher")
@@ -244,18 +236,6 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
@@ -263,18 +243,7 @@ def start(config_options):
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-pusher",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-pusher", config)
if __name__ == '__main__':


@@ -13,56 +13,50 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
import synapse
from synapse.api.constants import EventTypes
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.site import SynapseSite
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.rest.client.v2_alpha import sync
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.internet import defer, reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import contextlib
import gc
logger = logging.getLogger("synapse.app.synchrotron")
@@ -77,9 +71,9 @@ class SynchrotronSlavedStore(
SlavedPresenceStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
who_forgot_in_room = (
RoomMemberStore.__dict__["who_forgot_in_room"]
@@ -440,36 +434,13 @@ def start(config_options):
ss.setup()
ss.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ss.get_datastore().start_profiling()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-synchrotron",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-synchrotron", config)
if __name__ == '__main__':


@@ -14,43 +14,37 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import synapse
import logging
import sys
from synapse.server import HomeServer
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.logger import setup_logging
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.http.server import JsonResource
from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
from synapse.http.site import SynapseSite
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.client_ips import ClientIpStore
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse import events
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, preserve_fn
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import Resource
from daemonize import Daemonize
import sys
import logging
import gc
logger = logging.getLogger("synapse.app.user_dir")
@@ -58,9 +52,9 @@ class UserDirectorySlaveStore(
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
UserDirectoryStore,
BaseSlavedStore,
ClientIpStore, # After BaseSlavedStore because the constructor is different
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
@@ -233,36 +227,13 @@ def start(config_options):
ps.setup()
ps.start_listening(config.worker_listeners)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
change_resource_limit(config.soft_file_limit)
if config.gc_thresholds:
gc.set_threshold(*config.gc_thresholds)
reactor.run()
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
if config.worker_daemonize:
daemon = Daemonize(
app="synapse-user-dir",
pid=config.worker_pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
_base.start_worker_reactor("synapse-user-dir", config)
if __name__ == '__main__':


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,6 +30,7 @@ class ServerConfig(Config):
self.user_agent_suffix = config.get("user_agent_suffix")
self.use_frozen_dicts = config.get("use_frozen_dicts", False)
self.public_baseurl = config.get("public_baseurl")
self.cpu_affinity = config.get("cpu_affinity")
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
@@ -41,6 +43,12 @@ class ServerConfig(Config):
self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
# Whether we should block invites sent to users on this server
# (other than those sent by local server admins)
self.block_non_admin_invites = config.get(
"block_non_admin_invites", False,
)
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -147,6 +155,27 @@ class ServerConfig(Config):
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40%% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
@@ -171,6 +200,10 @@ class ServerConfig(Config):
# and sync operations. The default value is -1, means no upper limit.
# filter_timeline_limit: 5000
# Whether room invites to users on this server should be blocked
# (except those sent by local server admins). The default is False.
# block_non_admin_invites: True
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
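
The mask arithmetic in the new comment is easy to verify; bit n of cpu_affinity selects logical CPU n:

cpu0 = 1 << 0                      # 0x00000001: processor #0
cpu0_and_1 = (1 << 0) | (1 << 1)   # 0x00000003: processors #0 and #1
all_cpus = 0xFFFFFFFF              # processors #0 through #31
assert cpu0_and_1 == 0x00000003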


@@ -32,6 +32,9 @@ class WorkerConfig(Config):
self.worker_replication_port = config.get("worker_replication_port", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
self.worker_cpu_affinity = config.get("worker_cpu_affinity")
if self.worker_listeners:
for listener in self.worker_listeners:
bind_address = listener.pop("bind_address", None)


@@ -13,14 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util.logcontext import (
preserve_context_over_fn, preserve_context_over_deferred
)
import simplejson as json
import logging
@@ -43,14 +40,10 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
for i in range(5):
try:
protocol = yield preserve_context_over_fn(
endpoint.connect, factory
)
server_response, server_certificate = yield preserve_context_over_deferred(
protocol.remote_key
)
defer.returnValue((server_response, server_certificate))
return
with logcontext.PreserveLoggingContext():
protocol = yield endpoint.connect(factory)
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):
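
The fix above replaces the broken preserve_context_over_* helpers with the standard pattern: switch to the sentinel context while yielding on deferreds that were not created under the current logcontext. The pattern, distilled (a sketch assuming Twisted and synapse.util.logcontext):

from twisted.internet import defer

from synapse.util import logcontext

@defer.inlineCallbacks
def wait_outside_logcontext(d):
    # Drop into the sentinel logcontext for the duration of the wait;
    # PreserveLoggingContext restores our context when the yield resumes.
    with logcontext.PreserveLoggingContext():
        result = yield d
    defer.returnValue(result)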


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,10 +16,9 @@
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError
from synapse.util.async import ObservableDeferred
from synapse.util import unwrapFirstError, logcontext
from synapse.util.logcontext import (
preserve_context_over_deferred, preserve_context_over_fn, PreserveLoggingContext,
PreserveLoggingContext,
preserve_fn
)
from synapse.util.metrics import Measure
@@ -57,7 +57,8 @@ Attributes:
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
"""
@@ -74,23 +75,32 @@ class Keyring(object):
self.perspective_servers = self.config.perspectives
self.hs = hs
# map from server name to Deferred. Has an entry for each server with
# an ongoing key download; the Deferred completes once the download
# completes.
#
# These are regular, logcontext-agnostic Deferreds.
self.key_downloads = {}
def verify_json_for_server(self, server_name, json_object):
return self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
return logcontext.make_deferred_yieldable(
self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
)
def verify_json_objects_for_server(self, server_and_json):
"""Bulk verfies signatures of json objects, bulk fetching keys as
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
Args:
server_and_json (list): List of pairs of (server_name, json_object)
Returns:
list of deferreds indicating success or failure to verify each
json object's signature for the given server_name.
List<Deferred>: for each input pair, a deferred indicating success
or failure to verify each json object's signature for the given
server_name. The deferreds run their callbacks in the sentinel
logcontext.
"""
verify_requests = []
@@ -117,94 +127,72 @@ class Keyring(object):
verify_requests.append(verify_request)
@defer.inlineCallbacks
def handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)
server_to_deferred = {
server_name: defer.Deferred()
for server_name, _ in server_and_json
}
with PreserveLoggingContext():
# We want to wait for any previous lookups to complete before
# proceeding.
wait_on_deferred = self.wait_for_previous_lookups(
[server_name for server_name, _ in server_and_json],
server_to_deferred,
)
# Actually start fetching keys.
wait_on_deferred.addBoth(
lambda _: self.get_server_verify_keys(verify_requests)
)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
server_to_request_ids = {}
def remove_deferreds(res, server_name, verify_request):
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
deferred.addBoth(remove_deferreds, server_name, verify_request)
preserve_fn(self._start_key_lookups)(verify_requests)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
handle = preserve_fn(_handle_key_deferred)
return [
preserve_context_over_fn(handle_key_deferred, verify_request)
for verify_request in verify_requests
handle(rq) for rq in verify_requests
]
@defer.inlineCallbacks
def _start_key_lookups(self, verify_requests):
"""Sets off the key fetches for each verify request
Once each fetch completes, verify_request.deferred will be resolved.
Args:
verify_requests (List[VerifyKeyRequest]):
"""
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
server_to_request_ids = {}
for verify_request in verify_requests:
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
@@ -212,7 +200,13 @@ class Keyring(object):
Args:
server_names (list): list of server_names we want to look up
server_to_deferred (dict): server_name to deferred which gets
resolved once we've finished looking up keys for that server
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
while True:
wait_on = [
@@ -226,17 +220,15 @@ class Keyring(object):
else:
break
def rm(r, server_name_):
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
d = ObservableDeferred(preserve_context_over_deferred(deferred))
self.key_downloads[server_name] = d
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
def rm(r, server_name):
self.key_downloads.pop(server_name, None)
return r
d.addBoth(rm, server_name)
def get_server_verify_keys(self, verify_requests):
def _get_server_verify_keys(self, verify_requests):
"""Tries to find at least one key for each verify request
For each verify_request, verify_request.deferred is called back with
@@ -305,21 +297,23 @@ class Keyring(object):
if not missing_keys:
break
for verify_request in requests_missing_keys.values():
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
with PreserveLoggingContext():
for verify_request in requests_missing_keys:
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
def on_err(err):
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
with PreserveLoggingContext():
for verify_request in verify_requests:
if not verify_request.deferred.called:
verify_request.deferred.errback(err)
do_iterations().addErrback(on_err)
preserve_fn(do_iterations)().addErrback(on_err)
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
@@ -333,7 +327,7 @@ class Keyring(object):
Deferred: resolves to dict[str, dict[str, VerifyKey]]: map from
server_name -> key_id -> VerifyKey
"""
res = yield preserve_context_over_deferred(defer.gatherResults(
res = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.get_server_verify_keys)(
server_name, key_ids
@@ -341,7 +335,7 @@ class Keyring(object):
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(dict(res))
@@ -362,13 +356,13 @@ class Keyring(object):
)
defer.returnValue({})
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
union_of_keys = {}
for result in results:
@@ -402,13 +396,13 @@ class Keyring(object):
defer.returnValue(keys)
results = yield preserve_context_over_deferred(defer.gatherResults(
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(get_key)(server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
merged = {}
for result in results:
@@ -485,7 +479,7 @@ class Keyring(object):
for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=server_name,
@@ -495,7 +489,7 @@ class Keyring(object):
for server_name, response_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -543,7 +537,7 @@ class Keyring(object):
keys.update(response_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store_keys)(
server_name=key_server_name,
@@ -553,7 +547,7 @@ class Keyring(object):
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(keys)
@@ -619,7 +613,7 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield preserve_context_over_deferred(defer.gatherResults(
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_keys_json)(
server_name=server_name,
@@ -632,7 +626,7 @@ class Keyring(object):
for key_id in updated_key_ids
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
results[server_name] = response_keys
@@ -710,7 +704,6 @@ class Keyring(object):
defer.returnValue(verify_keys)
@defer.inlineCallbacks
def store_keys(self, server_name, from_server, verify_keys):
"""Store a collection of verify keys for a given server
Args:
@@ -721,7 +714,7 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
yield preserve_context_over_deferred(defer.gatherResults(
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
preserve_fn(self.store.store_server_verify_key)(
server_name, server_name, key.time_added, key
@@ -729,4 +722,48 @@ class Keyring(object):
for key_id, key in verify_keys.items()
],
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, verify_request.key_ids),
Codes.UNAUTHORIZED,
)
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
try:
verify_signed_json(json_object, server_name, verify_key)
except:
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
),
Codes.UNAUTHORIZED,
)
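One note on the pattern this file converges on: work is kicked off under the current logcontext with preserve_fn(), and the resulting deferreds (whose callbacks run in the sentinel context) are awaited via make_deferred_yieldable(). A minimal sketch of the idiom, assuming the helpers behave as the docstrings above describe; fetch_all and fetch_one are illustrative names:

from twisted.internet import defer
from synapse.util import unwrapFirstError, logcontext

@defer.inlineCallbacks
def fetch_all(fetch_one, items):
    # start each fetch in our logcontext; each returned deferred will run
    # its callbacks in the sentinel context
    deferreds = [
        logcontext.preserve_fn(fetch_one)(item)
        for item in items
    ]
    # make_deferred_yieldable restores our logcontext on completion;
    # unwrapFirstError unpacks the FirstError wrapper that
    # gatherResults(consumeErrors=True) reports failures with
    results = yield logcontext.make_deferred_yieldable(
        defer.gatherResults(deferreds, consumeErrors=True)
        .addErrback(unwrapFirstError)
    )
    defer.returnValue(results)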

View File

@@ -0,0 +1,38 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
def check_event_for_spam(event):
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
Args:
event (synapse.events.EventBase): the event to be checked
Returns:
bool: True if the event is spammy.
"""
if not hasattr(event, "content") or "body" not in event.content:
return False
# for example:
#
# if "the third flower is green" in event.content["body"]:
# return True
return False
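A hypothetical concrete rule, simply uncommenting the example given above; this body is illustrative and not part of the changeset:

def check_event_for_spam(event):
    if not hasattr(event, "content") or "body" not in event.content:
        return False
    # reject events containing a server-specific blocked phrase
    if "the third flower is green" in event.content["body"]:
        return True
    return False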

View File

@@ -12,21 +12,14 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.events.utils import prune_event
from synapse.crypto.event_signing import check_event_content_hash
from synapse.api.errors import SynapseError
from synapse.util import unwrapFirstError
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
import logging
from synapse.api.errors import SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import spamcheck
from synapse.events.utils import prune_event
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
logger = logging.getLogger(__name__)
@@ -57,56 +50,52 @@ class FederationBase(object):
"""
deferreds = self._check_sigs_and_hashes(pdus)
def callback(pdu):
return pdu
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield logcontext.make_deferred_yieldable(deferred)
except SynapseError:
res = None
def errback(failure, pdu):
failure.trap(SynapseError)
return None
def try_local_db(res, pdu):
if not res:
# Check local db.
return self.store.get_event(
res = yield self.store.get_event(
pdu.event_id,
allow_rejected=True,
allow_none=True,
)
return res
def try_remote(res, pdu):
if not res and pdu.origin != origin:
return self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
).addErrback(lambda e: None)
return res
try:
res = yield self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
outlier=outlier,
timeout=10000,
)
except SynapseError:
pass
def warn(res, pdu):
if not res:
logger.warn(
"Failed to find copy of %s with valid signature",
pdu.event_id,
)
return res
for pdu, deferred in zip(pdus, deferreds):
deferred.addCallbacks(
callback, errback, errbackArgs=[pdu]
).addCallback(
try_local_db, pdu
).addCallback(
try_remote, pdu
).addCallback(
warn, pdu
defer.returnValue(res)
handle = logcontext.preserve_fn(handle_check_result)
deferreds2 = [
handle(pdu, deferred)
for pdu, deferred in zip(pdus, deferreds)
]
valid_pdus = yield logcontext.make_deferred_yieldable(
defer.gatherResults(
deferreds2,
consumeErrors=True,
)
valid_pdus = yield preserve_context_over_deferred(defer.gatherResults(
deferreds,
consumeErrors=True
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
@@ -114,15 +103,24 @@ class FederationBase(object):
defer.returnValue([p for p in valid_pdus if p])
def _check_sigs_and_hash(self, pdu):
return self._check_sigs_and_hashes([pdu])[0]
return logcontext.make_deferred_yieldable(
self._check_sigs_and_hashes([pdu])[0],
)
def _check_sigs_and_hashes(self, pdus):
"""Throws a SynapseError if a PDU does not have the correct
signatures.
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
pdus (list[FrozenEvent]): the events to be checked
Returns:
FrozenEvent: Either the given event or it redacted if it failed the
content hash check.
list[Deferred]: for each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
"""
redacted_pdus = [
@@ -130,26 +128,38 @@ class FederationBase(object):
for pdu in pdus
]
deferreds = preserve_fn(self.keyring.verify_json_objects_for_server)([
deferreds = self.keyring.verify_json_objects_for_server([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])
ctx = logcontext.LoggingContext.current_context()
def callback(_, pdu, redacted):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return pdu
with logcontext.PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
if spamcheck.check_event_for_spam(pdu):
logger.warn(
"Event contains spam, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return pdu
def errback(failure, pdu):
failure.trap(SynapseError)
logger.warn(
"Signature check failed for %s",
pdu.event_id,
)
with logcontext.PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s",
pdu.event_id,
)
return failure
for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):
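The capture-and-re-enter idiom above (stash the calling logcontext, then restore it inside each callback with PreserveLoggingContext) generalizes; a minimal sketch, with add_logging_callback as a hypothetical helper name:

from synapse.util import logcontext

def add_logging_callback(deferred, f):
    # capture the logcontext of the code adding the callback ...
    ctx = logcontext.LoggingContext.current_context()

    def wrapper(result):
        # ... and re-enter it when the callback eventually fires, so that
        # logging inside f is attributed to the right request
        with logcontext.PreserveLoggingContext(ctx):
            return f(result)

    deferred.addCallback(wrapper)
    return deferred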

View File

@@ -22,7 +22,7 @@ from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError,
)
from synapse.util import unwrapFirstError
from synapse.util import unwrapFirstError, logcontext
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.util.logcontext import preserve_fn, preserve_context_over_deferred
@@ -189,10 +189,10 @@ class FederationClient(FederationBase):
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield preserve_context_over_deferred(defer.gatherResults(
pdus[:] = yield logcontext.make_deferred_yieldable(defer.gatherResults(
self._check_sigs_and_hashes(pdus),
consumeErrors=True,
)).addErrback(unwrapFirstError)
).addErrback(unwrapFirstError))
defer.returnValue(pdus)
@@ -252,7 +252,7 @@ class FederationClient(FederationBase):
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = yield self._check_sigs_and_hashes([pdu])[0]
signed_pdu = yield self._check_sigs_and_hash(pdu)
break

View File

@@ -153,12 +153,10 @@ class Authenticator(object):
class BaseFederationServlet(object):
REQUIRE_AUTH = True
def __init__(self, handler, authenticator, ratelimiter, server_name,
room_list_handler):
def __init__(self, handler, authenticator, ratelimiter, server_name):
self.handler = handler
self.authenticator = authenticator
self.ratelimiter = ratelimiter
self.room_list_handler = room_list_handler
def _wrap(self, func):
authenticator = self.authenticator
@@ -590,7 +588,7 @@ class PublicRoomList(BaseFederationServlet):
else:
network_tuple = ThirdPartyInstanceID(None, None)
data = yield self.room_list_handler.get_local_public_room_list(
data = yield self.handler.get_local_public_room_list(
limit, since_token,
network_tuple=network_tuple
)
@@ -611,7 +609,7 @@ class FederationVersionServlet(BaseFederationServlet):
}))
SERVLET_CLASSES = (
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
FederationEventServlet,
@@ -634,17 +632,27 @@ SERVLET_CLASSES = (
FederationThirdPartyInviteExchangeServlet,
On3pidBindServlet,
OpenIdUserInfo,
PublicRoomList,
FederationVersionServlet,
)
ROOM_LIST_CLASSES = (
PublicRoomList,
)
def register_servlets(hs, resource, authenticator, ratelimiter):
for servletclass in SERVLET_CLASSES:
for servletclass in FEDERATION_SERVLET_CLASSES:
servletclass(
handler=hs.get_replication_layer(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
room_list_handler=hs.get_room_list_handler(),
).register(resource)
for servletclass in ROOM_LIST_CLASSES:
servletclass(
handler=hs.get_room_list_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)

View File

@@ -21,6 +21,7 @@ from synapse.api.constants import LoginType
from synapse.types import UserID
from synapse.api.errors import AuthError, LoginError, Codes, StoreError, SynapseError
from synapse.util.async import run_on_reactor
from synapse.util.caches.expiringcache import ExpiringCache
from twisted.web.client import PartialDownloadError
@@ -52,7 +53,15 @@ class AuthHandler(BaseHandler):
LoginType.DUMMY: self._check_dummy_auth,
}
self.bcrypt_rounds = hs.config.bcrypt_rounds
self.sessions = {}
# This is not a cache per se, but a store of all current sessions that
# expire after N hours
self.sessions = ExpiringCache(
cache_name="register_sessions",
clock=hs.get_clock(),
expiry_ms=self.SESSION_EXPIRE_MS,
reset_expiry_on_get=True,
)
account_handler = _AccountHandler(
hs, check_user_exists=self.check_user_exists
@@ -617,16 +626,6 @@ class AuthHandler(BaseHandler):
logger.debug("Saving session %s", session)
session["last_used"] = self.hs.get_clock().time_msec()
self.sessions[session["id"]] = session
self._prune_sessions()
def _prune_sessions(self):
for sid, sess in self.sessions.items():
last_used = 0
if 'last_used' in sess:
last_used = sess['last_used']
now = self.hs.get_clock().time_msec()
if last_used < now - AuthHandler.SESSION_EXPIRE_MS:
del self.sessions[sid]
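The hand-rolled pruning removed above is subsumed by the ExpiringCache constructed earlier in this hunk. A minimal sketch of the replacement behaviour, assuming ExpiringCache's dict-like interface (as its use here suggests); hs, session and session_id are assumed to be in scope:

from synapse.util.caches.expiringcache import ExpiringCache

sessions = ExpiringCache(
    cache_name="register_sessions",
    clock=hs.get_clock(),
    expiry_ms=AuthHandler.SESSION_EXPIRE_MS,
    reset_expiry_on_get=True,
)
sessions[session["id"]] = session  # no explicit pruning loop needed
session = sessions[session_id]     # a read pushes the expiry back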
def hash(self, password):
"""Computes a secure hash of password.

View File

@@ -106,7 +106,7 @@ class DeviceHandler(BaseHandler):
device_map = yield self.store.get_devices_by_user(user_id)
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id) for device_id in device_map.keys())
user_id, device_id=None
)
devices = device_map.values()
@@ -133,7 +133,7 @@ class DeviceHandler(BaseHandler):
except errors.StoreError:
raise errors.NotFoundError
ips = yield self.store.get_last_client_ip_by_device(
devices=((user_id, device_id),)
user_id, device_id,
)
_update_device_from_client_ips(device, ips)
defer.returnValue(device)
@@ -270,6 +270,8 @@ class DeviceHandler(BaseHandler):
user_id (str)
from_token (StreamToken)
"""
now_token = yield self.hs.get_event_sources().get_current_token()
room_ids = yield self.store.get_rooms_for_user(user_id)
# First we check if any devices have changed
@@ -280,11 +282,30 @@ class DeviceHandler(BaseHandler):
# Then work out if any users have since joined
rooms_changed = self.store.get_rooms_that_changed(room_ids, from_token.room_key)
member_events = yield self.store.get_membership_changes_for_user(
user_id, from_token.room_key, now_token.room_key
)
rooms_changed.update(event.room_id for event in member_events)
stream_ordering = RoomStreamToken.parse_stream_token(
from_token.room_key).stream
from_token.room_key
).stream
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
current_state_ids = yield self.store.get_current_state_ids(room_id)
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for key, event_id in current_state_ids.iteritems():
etype, state_key = key
if etype != EventTypes.Member:
continue
possibly_left.add(state_key)
continue
# Fetch the current state at the time.
try:
event_ids = yield self.store.get_forward_extremeties_for_room(
@@ -295,8 +316,6 @@ class DeviceHandler(BaseHandler):
# ordering: treat it the same as a new room
event_ids = []
current_state_ids = yield self.store.get_current_state_ids(room_id)
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
@@ -307,9 +326,25 @@ class DeviceHandler(BaseHandler):
possibly_changed.add(state_key)
continue
current_member_id = current_state_ids.get((EventTypes.Member, user_id))
if not current_member_id:
continue
# mapping from event_id -> state_dict
prev_state_ids = yield self.store.get_state_ids_for_events(event_ids)
# Check if we've joined the room. If so, we just blindly add all the
# users to the "possibly changed" set.
for state_dict in prev_state_ids.itervalues():
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for key, event_id in current_state_ids.iteritems():
etype, state_key = key
if etype != EventTypes.Member:
continue
possibly_changed.add(state_key)
break
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not too worried about spuriously adding users.
@@ -320,19 +355,30 @@ class DeviceHandler(BaseHandler):
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.values():
for state_dict in prev_state_ids.itervalues():
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
possibly_changed.add(state_key)
if state_key != user_id:
possibly_changed.add(state_key)
break
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
if possibly_changed or possibly_left:
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
# Take the intersection of the users whose devices may have changed
# and those that actually still share a room with the user
defer.returnValue(users_who_share_room & possibly_changed)
# Take the intersection of the users whose devices may have changed
# and those that actually still share a room with the user
possibly_joined = possibly_changed & users_who_share_room
possibly_left = (possibly_changed | possibly_left) - users_who_share_room
else:
possibly_joined = []
possibly_left = []
defer.returnValue({
"changed": list(possibly_joined),
"left": list(possibly_left),
})
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):

View File

@@ -75,6 +75,8 @@ class FederationHandler(BaseHandler):
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
self.action_generator = hs.get_action_generator()
self.is_mine_id = hs.is_mine_id
self.pusher_pool = hs.get_pusherpool()
self.replication_layer.set_handler(self)
@@ -1072,6 +1074,23 @@ class FederationHandler(BaseHandler):
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if self.hs.config.block_non_admin_invites:
raise SynapseError(403, "This server does not accept room invites")
membership = event.content.get("membership")
if event.type != EventTypes.Member or membership != Membership.INVITE:
raise SynapseError(400, "The event was not an m.room.member invite event")
sender_domain = get_domain_from_id(event.sender)
if sender_domain != origin:
raise SynapseError(400, "The invite event was not from the server sending it")
if event.state_key is None:
raise SynapseError(400, "The invite event did not have a state key")
if not self.is_mine_id(event.state_key):
raise SynapseError(400, "The invite event must be for this server")
event.internal_metadata.outlier = True
event.internal_metadata.invite_from_remote = True
@@ -1280,7 +1299,7 @@ class FederationHandler(BaseHandler):
for event in res:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.hs.is_mine_id(event.event_id):
if self.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
@@ -1353,7 +1372,7 @@ class FederationHandler(BaseHandler):
)
if event:
if self.hs.is_mine_id(event.event_id):
if self.is_mine_id(event.event_id):
# FIXME: This is a temporary work around where we occasionally
# return events slightly differently than when they were
# originally signed
@@ -1397,7 +1416,7 @@ class FederationHandler(BaseHandler):
auth_events=auth_events,
)
if not event.internal_metadata.is_outlier():
if not event.internal_metadata.is_outlier() and not backfilled:
yield self.action_generator.handle_push_actions_for_event(
event, context
)
@@ -1411,7 +1430,7 @@ class FederationHandler(BaseHandler):
if not backfilled:
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
preserve_fn(self.pusher_pool.on_new_notifications)(
event_stream_id, max_stream_id
)
@@ -1590,7 +1609,7 @@ class FederationHandler(BaseHandler):
context.rejected = RejectedReason.AUTH_ERROR
if event.type == EventTypes.GuestAccess:
if event.type == EventTypes.GuestAccess and not context.rejected:
yield self.maybe_kick_guest_users(event)
defer.returnValue(context)
@@ -2074,6 +2093,14 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
"""Handle an exchange_third_party_invite request from a remote server
The remote server will call this when it wants to turn a 3pid invite
into a normal m.room.member invite.
Returns:
Deferred: resolves (to None)
"""
builder = self.event_builder_factory.new(event_dict)
message_handler = self.hs.get_handlers().message_handler
@@ -2092,9 +2119,12 @@ class FederationHandler(BaseHandler):
raise e
yield self._check_signature(event, context)
# XXX we send the invite here, but send_membership_event also sends it,
# so we end up making two requests. I think this is redundant.
returned_invite = yield self.send_invite(origin, event)
# TODO: Make sure the signatures actually are correct.
event.signatures.update(returned_invite.signatures)
member_handler = self.hs.get_handlers().room_member_handler
yield member_handler.send_membership_event(None, event, context)

View File

@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.events import spamcheck
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
@@ -34,6 +34,7 @@ from canonicaljson import encode_canonical_json
import logging
import random
import ujson
logger = logging.getLogger(__name__)
@@ -49,6 +50,8 @@ class MessageHandler(BaseHandler):
self.pagination_lock = ReadWriteLock()
self.pusher_pool = hs.get_pusherpool()
# We arbitrarily limit concurrent event creation for a room to 5.
# This is to stop us from diverging history *too* much.
self.limiter = Limiter(max_count=5)
@@ -318,6 +321,12 @@ class MessageHandler(BaseHandler):
token_id=requester.access_token_id,
txn_id=txn_id
)
if spamcheck.check_event_for_spam(event):
raise SynapseError(
403, "Spam is not permitted here", Codes.FORBIDDEN
)
yield self.send_nonmember_event(
requester,
event,
@@ -498,6 +507,14 @@ class MessageHandler(BaseHandler):
logger.warn("Denying new event %r because %s", event, err)
raise err
# Ensure that we can round trip before trying to persist in db
try:
dump = ujson.dumps(event.content)
ujson.loads(dump)
except:
logger.exception("Failed to encode content: %r", event.content)
raise
yield self.maybe_kick_guest_users(event, context)
if event.type == EventTypes.CanonicalAlias:
@@ -601,7 +618,7 @@ class MessageHandler(BaseHandler):
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
preserve_fn(self.hs.get_pusherpool().on_new_notifications)(
preserve_fn(self.pusher_pool.on_new_notifications)(
event_stream_id, max_stream_id
)

View File

@@ -191,6 +191,8 @@ class RoomMemberHandler(BaseHandler):
if action in ["kick", "unban"]:
effective_membership_state = "leave"
# if this is a join with a 3pid signature, we may need to turn a 3pid
# invite into a normal invite before we can handle the join.
if third_party_signed is not None:
replication = self.hs.get_replication_layer()
yield replication.exchange_third_party_invite(
@@ -208,6 +210,16 @@ class RoomMemberHandler(BaseHandler):
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if (effective_membership_state == "invite" and
self.hs.config.block_non_admin_invites):
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server",
)
latest_event_ids = yield self.store.get_latest_event_ids_in_room(room_id)
current_state_ids = yield self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids,
@@ -471,6 +483,16 @@ class RoomMemberHandler(BaseHandler):
requester,
txn_id
):
if self.hs.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server",
Codes.FORBIDDEN,
)
invitee = yield self._lookup_3pid(
id_server, medium, address
)

View File

@@ -108,6 +108,16 @@ class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
return True
class DeviceLists(collections.namedtuple("DeviceLists", [
"changed", # list of user_ids whose devices may have changed
"left", # list of user_ids whose devices we no longer track
])):
__slots__ = []
def __nonzero__(self):
return bool(self.changed or self.left)
class SyncResult(collections.namedtuple("SyncResult", [
"next_batch", # Token for the next sync
"presence", # List of presence events for the user.
@@ -290,10 +300,20 @@ class SyncHandler(object):
if recents:
recents = sync_config.filter_collection.filter_room_timeline(recents)
# We check if there are any state events; if there are, we pass all
# current state events to the filter_events function, to ensure that
# we always include the current state in the timeline
current_state_ids = frozenset()
if any(e.is_state() for e in recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
recents = yield filter_events_for_client(
self.store,
sync_config.user.to_string(),
recents,
always_include_ids=current_state_ids,
)
else:
recents = []
@@ -325,10 +345,20 @@ class SyncHandler(object):
loaded_recents = sync_config.filter_collection.filter_room_timeline(
events
)
# We check if there are any state events; if there are, we pass all
# current state events to the filter_events function, to ensure that
# we always include the current state in the timeline
current_state_ids = frozenset()
if any(e.is_state() for e in loaded_recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
loaded_recents = yield filter_events_for_client(
self.store,
sync_config.user.to_string(),
loaded_recents,
always_include_ids=current_state_ids,
)
loaded_recents.extend(recents)
recents = loaded_recents
@@ -535,7 +565,8 @@ class SyncHandler(object):
res = yield self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_users = res
newly_joined_rooms, newly_joined_users, _, _ = res
_, _, newly_left_rooms, newly_left_users = res
block_all_presence_data = (
since_token is None and
@@ -549,7 +580,11 @@ class SyncHandler(object):
yield self._generate_sync_entry_for_to_device(sync_result_builder)
device_lists = yield self._generate_sync_entry_for_device_list(
sync_result_builder
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_users=newly_joined_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
device_id = sync_config.device_id
@@ -574,25 +609,50 @@ class SyncHandler(object):
@measure_func("_generate_sync_entry_for_device_list")
@defer.inlineCallbacks
def _generate_sync_entry_for_device_list(self, sync_result_builder):
def _generate_sync_entry_for_device_list(self, sync_result_builder,
newly_joined_rooms, newly_joined_users,
newly_left_rooms, newly_left_users):
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
if since_token and since_token.device_list_key:
room_ids = yield self.store.get_rooms_for_user(user_id)
user_ids_changed = set()
changed = yield self.store.get_user_whose_devices_changed(
since_token.device_list_key
)
for other_user_id in changed:
other_room_ids = yield self.store.get_rooms_for_user(other_user_id)
if room_ids.intersection(other_room_ids):
user_ids_changed.add(other_user_id)
defer.returnValue(user_ids_changed)
# TODO: Be more clever than this, i.e. remove users who we already
# share a room with?
for room_id in newly_joined_rooms:
joined_users = yield self.state.get_current_user_in_room(room_id)
newly_joined_users.update(joined_users)
for room_id in newly_left_rooms:
left_users = yield self.state.get_current_user_in_room(room_id)
newly_left_users.update(left_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
changed.update(newly_joined_users)
if not changed and not newly_left_users:
defer.returnValue(DeviceLists(
changed=[],
left=newly_left_users,
))
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
defer.returnValue(DeviceLists(
changed=users_who_share_room & changed,
left=set(newly_left_users) - users_who_share_room,
))
else:
defer.returnValue([])
defer.returnValue(DeviceLists(
changed=[],
left=[],
))
@defer.inlineCallbacks
def _generate_sync_entry_for_to_device(self, sync_result_builder):
@@ -756,8 +816,8 @@ class SyncHandler(object):
account_data_by_room(dict): Dictionary of per room account data
Returns:
Deferred(tuple): Returns a 2-tuple of
`(newly_joined_rooms, newly_joined_users)`
Deferred(tuple): Returns a 4-tuple of
`(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
"""
user_id = sync_result_builder.sync_config.user.to_string()
block_all_room_ephemeral = (
@@ -788,7 +848,7 @@ class SyncHandler(object):
)
if not tags_by_room:
logger.debug("no-oping sync")
defer.returnValue(([], []))
defer.returnValue(([], [], [], []))
ignored_account_data = yield self.store.get_global_account_data_by_type_for_user(
"m.ignored_user_list", user_id=user_id,
@@ -801,7 +861,7 @@ class SyncHandler(object):
if since_token:
res = yield self._get_rooms_changed(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms = res
room_entries, invited, newly_joined_rooms, newly_left_rooms = res
tags_by_room = yield self.store.get_updated_tags(
user_id, since_token.account_data_key,
@@ -809,6 +869,7 @@ class SyncHandler(object):
else:
res = yield self._get_all_rooms(sync_result_builder, ignored_users)
room_entries, invited, newly_joined_rooms = res
newly_left_rooms = []
tags_by_room = yield self.store.get_tags_for_user(user_id)
@@ -829,17 +890,30 @@ class SyncHandler(object):
# Now we want to get any newly joined users
newly_joined_users = set()
newly_left_users = set()
if since_token:
for joined_sync in sync_result_builder.joined:
it = itertools.chain(
joined_sync.timeline.events, joined_sync.state.values()
joined_sync.timeline.events, joined_sync.state.itervalues()
)
for event in it:
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
newly_joined_users.add(event.state_key)
else:
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership == Membership.JOIN:
newly_left_users.add(event.state_key)
defer.returnValue((newly_joined_rooms, newly_joined_users))
newly_left_users -= newly_joined_users
defer.returnValue((
newly_joined_rooms,
newly_joined_users,
newly_left_rooms,
newly_left_users,
))
@defer.inlineCallbacks
def _have_rooms_changed(self, sync_result_builder):
@@ -909,15 +983,28 @@ class SyncHandler(object):
mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
newly_joined_rooms = []
newly_left_rooms = []
room_entries = []
invited = []
for room_id, events in mem_change_events_by_room_id.items():
for room_id, events in mem_change_events_by_room_id.iteritems():
non_joins = [e for e in events if e.membership != Membership.JOIN]
has_join = len(non_joins) != len(events)
# We want to figure out if we joined the room at some point since
# the last sync (even if we have since left). This is to make sure
# we do send down the room, and with full state, where necessary
old_state_ids = None
if room_id in joined_room_ids and non_joins:
# Always include if the user (re)joined the room, especially
# important so that device list changes are calculated correctly.
# If there are non-join member events, but we are still in the room,
# then the user must have left and rejoined
newly_joined_rooms.append(room_id)
# User is in the room so we don't need to do the invite/leave checks
continue
if room_id in joined_room_ids or has_join:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get((EventTypes.Member, user_id), None)
@@ -929,12 +1016,33 @@ class SyncHandler(object):
if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
newly_joined_rooms.append(room_id)
if room_id in joined_room_ids:
continue
# If user is in the room then we don't need to do the invite/leave checks
if room_id in joined_room_ids:
continue
if not non_joins:
continue
# Check if we have left the room. This can either be because we were
# joined before *or* that we since joined and then left.
if events[-1].membership != Membership.JOIN:
if has_join:
newly_left_rooms.append(room_id)
else:
if not old_state_ids:
old_state_ids = yield self.get_state_at(room_id, since_token)
old_mem_ev_id = old_state_ids.get(
(EventTypes.Member, user_id),
None,
)
old_mem_ev = None
if old_mem_ev_id:
old_mem_ev = yield self.store.get_event(
old_mem_ev_id, allow_none=True
)
if old_mem_ev and old_mem_ev.membership == Membership.JOIN:
newly_left_rooms.append(room_id)
# Only bother if we're still currently invited
should_invite = non_joins[-1].membership == Membership.INVITE
if should_invite:
@@ -1012,7 +1120,7 @@ class SyncHandler(object):
upto_token=since_token,
))
defer.returnValue((room_entries, invited, newly_joined_rooms))
defer.returnValue((room_entries, invited, newly_joined_rooms, newly_left_rooms))
@defer.inlineCallbacks
def _get_all_rooms(self, sync_result_builder, ignored_users):
@@ -1260,6 +1368,7 @@ class SyncResultBuilder(object):
self.invited = []
self.archived = []
self.device = []
self.to_device = []
class RoomSyncResultBuilder(object):

View File

@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import socket
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet import defer, reactor
@@ -30,7 +31,10 @@ logger = logging.getLogger(__name__)
SERVER_CACHE = {}
# our record of an individual server which can be tried to reach a destination.
#
# "host" is actually a dotted-quad or ipv6 address string. Except when there's
# no SRV record, in which case it is the original hostname.
_Server = collections.namedtuple(
"_Server", "priority weight host port expires"
)
@@ -219,9 +223,10 @@ class SRVClientEndpoint(object):
return self.default_server
else:
raise ConnectError(
"Not server available for %s" % self.service_name
"No server available for %s" % self.service_name
)
# look for all servers with the same priority
min_priority = self.servers[0].priority
weight_indexes = list(
(index, server.weight + 1)
@@ -231,11 +236,22 @@ class SRVClientEndpoint(object):
total_weight = sum(weight for index, weight in weight_indexes)
target_weight = random.randint(0, total_weight)
for index, weight in weight_indexes:
target_weight -= weight
if target_weight <= 0:
server = self.servers[index]
# XXX: this looks totally dubious:
#
# (a) we never reuse a server until we have been through
# all of the servers at the same priority, so if the
# weights are A: 100, B:1, we always do ABABAB instead of
# AAAA...AAAB (approximately).
#
# (b) After using all the servers at the lowest priority,
# we move onto the next priority. We should only use the
# second priority if servers at the top priority are
# unreachable.
#
del self.servers[index]
self.used_servers.append(server)
return server
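For contrast with the XXX above, a sketch of RFC 2782-style selection: only the most-preferred priority is considered, and the weighted draw is re-run on every attempt rather than consuming the list. pick_server is a hypothetical helper, not part of this changeset:

import random

def pick_server(servers):
    min_priority = min(s.priority for s in servers)
    candidates = [s for s in servers if s.priority == min_priority]
    # weight + 1 so that zero-weight servers still have a chance
    total_weight = sum(s.weight + 1 for s in candidates)
    target = random.randint(1, total_weight)
    for server in candidates:
        target -= server.weight + 1
        if target <= 0:
            return server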
@@ -280,26 +296,21 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
continue
payload = answer.payload
host = str(payload.target)
srv_ttl = answer.ttl
try:
answers, _, _ = yield dns_client.lookupAddress(host)
except DNSNameError:
continue
hosts = yield _get_hosts_for_srv_record(
dns_client, str(payload.target)
)
for answer in answers:
if answer.type == dns.A and answer.payload:
ip = answer.payload.dottedQuad()
host_ttl = min(srv_ttl, answer.ttl)
for (ip, ttl) in hosts:
host_ttl = min(answer.ttl, ttl)
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.append(_Server(
host=ip,
port=int(payload.port),
priority=int(payload.priority),
weight=int(payload.weight),
expires=int(clock.time()) + host_ttl,
))
servers.sort()
cache[service_name] = list(servers)
@@ -317,3 +328,68 @@ def resolve_service(service_name, dns_client=client, cache=SERVER_CACHE, clock=t
raise e
defer.returnValue(servers)
@defer.inlineCallbacks
def _get_hosts_for_srv_record(dns_client, host):
"""Look up each of the hosts in a SRV record
Args:
dns_client (twisted.names.dns.IResolver):
host (basestring): host to look up
Returns:
Deferred[list[(str, int)]]: a list of (host, ttl) pairs
"""
ip4_servers = []
ip6_servers = []
def cb(res):
# lookupAddress and lookupIP6Address return a three-tuple
# giving the answer, authority, and additional sections of the
# response.
#
# we only care about the answers.
return res[0]
def eb(res):
res.trap(DNSNameError)
return []
# no logcontexts here, so we can safely fire these off and gatherResults
d1 = dns_client.lookupAddress(host).addCallbacks(cb, eb)
d2 = dns_client.lookupIPV6Address(host).addCallbacks(cb, eb)
results = yield defer.gatherResults([d1, d2], consumeErrors=True)
for result in results:
for answer in result:
if not answer.payload:
continue
try:
if answer.type == dns.A:
ip = answer.payload.dottedQuad()
ip4_servers.append((ip, answer.ttl))
elif answer.type == dns.AAAA:
ip = socket.inet_ntop(
socket.AF_INET6, answer.payload.address,
)
ip6_servers.append((ip, answer.ttl))
else:
# the most likely candidate here is a CNAME record.
# rfc2782 says srvs may not point to aliases.
logger.warn(
"Ignoring unexpected DNS record type %s for %s",
answer.type, host,
)
continue
except Exception as e:
logger.warn("Ignoring invalid DNS response for %s: %s",
host, e)
continue
# keep the ipv4 results before the ipv6 results, mostly to match historical
# behaviour.
defer.returnValue(ip4_servers + ip6_servers)
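An illustrative caller, assuming twisted.names.client as the resolver (which is what resolve_service defaults to above):

from twisted.internet import defer
from twisted.names import client

@defer.inlineCallbacks
def print_hosts(host):
    # resolves both A and AAAA records; IPv4 results are listed first
    hosts = yield _get_hosts_for_srv_record(client, host)
    for ip, ttl in hosts:
        print("%s (ttl %d)" % (ip, ttl))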

View File

@@ -19,8 +19,9 @@ from twisted.internet import defer
from .push_rule_evaluator import PushRuleEvaluatorForEvent
from synapse.visibility import filter_events_for_clients_context
from synapse.api.constants import EventTypes, Membership
from synapse.metrics import get_metrics_for
from synapse.util.caches import metrics as cache_metrics
from synapse.util.caches.descriptors import cached
from synapse.util.async import Linearizer
@@ -32,6 +33,23 @@ logger = logging.getLogger(__name__)
rules_by_room = {}
push_metrics = get_metrics_for(__name__)
push_rules_invalidation_counter = push_metrics.register_counter(
"push_rules_invalidation_counter"
)
push_rules_state_size_counter = push_metrics.register_counter(
"push_rules_state_size_counter"
)
# Measures whether we use the fast path of using state deltas, or if we have to
# recalculate from scratch
push_rules_delta_state_cache_metric = cache_metrics.register_cache(
"cache",
size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values
cache_name="push_rules_delta_state_cache_metric",
)
class BulkPushRuleEvaluator(object):
"""Calculates the outcome of push rules for an event for all users in the
@@ -42,6 +60,12 @@ class BulkPushRuleEvaluator(object):
self.hs = hs
self.store = hs.get_datastore()
self.room_push_rule_cache_metrics = cache_metrics.register_cache(
"cache",
size_callback=lambda: 0, # There's no good value for this
cache_name="room_push_rule_cache",
)
@defer.inlineCallbacks
def _get_rules_for_event(self, event, context):
"""This gets the rules for all users in the room at the time of the event,
@@ -79,7 +103,10 @@ class BulkPushRuleEvaluator(object):
# It's important that RulesForRoom gets added to self._get_rules_for_room.cache
# before any lookup methods get called on it as otherwise there may be
# a race if invalidate_all gets called (which assumes it's in the cache)
return RulesForRoom(self.hs, room_id, self._get_rules_for_room.cache)
return RulesForRoom(
self.hs, room_id, self._get_rules_for_room.cache,
self.room_push_rule_cache_metrics,
)
@defer.inlineCallbacks
def action_for_event_by_user(self, event, context):
@@ -92,15 +119,6 @@ class BulkPushRuleEvaluator(object):
rules_by_user = yield self._get_rules_for_event(event, context)
actions_by_user = {}
# None of these users can be peeking since this list of users comes
# from the set of users in the room, so we know for sure they're all
# actually in the room.
user_tuples = [(u, False) for u in rules_by_user]
filtered_by_user = yield filter_events_for_clients_context(
self.store, user_tuples, [event], {event.event_id: context}
)
room_members = yield self.store.get_joined_users_from_context(
event, context
)
@@ -110,6 +128,14 @@ class BulkPushRuleEvaluator(object):
condition_cache = {}
for uid, rules in rules_by_user.iteritems():
if event.sender == uid:
continue
if not event.is_state():
is_ignored = yield self.store.is_ignored_by(event.sender, uid)
if is_ignored:
continue
display_name = None
profile_info = room_members.get(uid)
if profile_info:
@@ -121,13 +147,6 @@ class BulkPushRuleEvaluator(object):
if event.type == EventTypes.Member and event.state_key == uid:
display_name = event.content.get("displayname", None)
filtered = filtered_by_user[uid]
if len(filtered) == 0:
continue
if filtered[0].sender == uid:
continue
for rule in rules:
if 'enabled' in rule and not rule['enabled']:
continue
@@ -170,17 +189,19 @@ class RulesForRoom(object):
the entire cache for the room.
"""
def __init__(self, hs, room_id, rules_for_room_cache):
def __init__(self, hs, room_id, rules_for_room_cache, room_push_rule_cache_metrics):
"""
Args:
hs (HomeServer)
room_id (str)
rules_for_room_cache(Cache): The cache object that caches these
RoomsForUser objects.
room_push_rule_cache_metrics (CacheMetric)
"""
self.room_id = room_id
self.is_mine_id = hs.is_mine_id
self.store = hs.get_datastore()
self.room_push_rule_cache_metrics = room_push_rule_cache_metrics
self.linearizer = Linearizer(name="rules_for_room")
@@ -222,11 +243,19 @@ class RulesForRoom(object):
"""
state_group = context.state_group
if state_group and self.state_group == state_group:
logger.debug("Using cached rules for %r", self.room_id)
self.room_push_rule_cache_metrics.inc_hits()
defer.returnValue(self.rules_by_user)
with (yield self.linearizer.queue(())):
if state_group and self.state_group == state_group:
logger.debug("Using cached rules for %r", self.room_id)
self.room_push_rule_cache_metrics.inc_hits()
defer.returnValue(self.rules_by_user)
self.room_push_rule_cache_metrics.inc_misses()
ret_rules_by_user = {}
missing_member_event_ids = {}
if state_group and self.state_group == context.prev_group:
@@ -234,8 +263,13 @@ class RulesForRoom(object):
# results.
ret_rules_by_user = self.rules_by_user
current_state_ids = context.delta_ids
push_rules_delta_state_cache_metric.inc_hits()
else:
current_state_ids = context.current_state_ids
push_rules_delta_state_cache_metric.inc_misses()
push_rules_state_size_counter.inc_by(len(current_state_ids))
logger.debug(
"Looking for member changes in %r %r", state_group, current_state_ids
@@ -282,6 +316,14 @@ class RulesForRoom(object):
yield self._update_rules_with_member_event_ids(
ret_rules_by_user, missing_member_event_ids, state_group, event
)
else:
# The push rules didn't change, but let's update the cache anyway
self.update_cache(
self.sequence,
members={}, # There were no membership changes
rules_by_user=ret_rules_by_user,
state_group=state_group
)
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
@@ -380,6 +422,7 @@ class RulesForRoom(object):
self.state_group = object()
self.member_map = {}
self.rules_by_user = {}
push_rules_invalidation_counter.inc()
def update_cache(self, sequence, members, rules_by_user, state_group):
if sequence == self.sequence:
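The double-checked caching in get_rules_for_event above (check, queue on the linearizer, re-check) is worth calling out. A generic sketch of the pattern, using a hypothetical class; the re-check is needed because another caller may have populated the cache while we waited for the lock:

from twisted.internet import defer
from synapse.util.async import Linearizer

class CachedComputation(object):
    def __init__(self, compute):
        self.compute = compute
        self.linearizer = Linearizer(name="cached_computation")
        self.key = None
        self.value = None

    @defer.inlineCallbacks
    def get(self, key):
        if self.key == key:
            defer.returnValue(self.value)  # fast path: no lock taken
        with (yield self.linearizer.queue(())):
            if self.key == key:            # re-check under the lock
                defer.returnValue(self.value)
            value = yield self.compute(key)
            self.key, self.value = key, value
            defer.returnValue(value)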

View File

@@ -244,6 +244,26 @@ class HttpPusher(object):
@defer.inlineCallbacks
def _build_notification_dict(self, event, tweaks, badge):
if self.data.get('format') == 'event_id_only':
d = {
'notification': {
'event_id': event.event_id,
'room_id': event.room_id,
'counts': {
'unread': badge,
},
'devices': [
{
'app_id': self.app_id,
'pushkey': self.pushkey,
'pushkey_ts': long(self.pushkey_ts / 1000),
'data': self.data_minus_url,
}
]
}
}
defer.returnValue(d)
ctx = yield push_tools.get_context_for_event(
self.store, self.state_handler, event, self.user_id
)

View File

@@ -200,7 +200,9 @@ def _glob_to_re(glob, word_boundary):
return re.compile(r, flags=re.IGNORECASE)
def _flatten_dict(d, prefix=[], result={}):
def _flatten_dict(d, prefix=[], result=None):
if result is None:
result = {}
for key, value in d.items():
if isinstance(value, basestring):
result[".".join(prefix + [key])] = value.lower()

View File

@@ -31,7 +31,7 @@ REQUIREMENTS = {
"pyyaml": ["yaml"],
"pyasn1": ["pyasn1"],
"daemonize": ["daemonize"],
"py-bcrypt": ["bcrypt"],
"bcrypt": ["bcrypt"],
"pillow": ["PIL"],
"pydenticon": ["pydenticon"],
"ujson": ["ujson"],
@@ -40,6 +40,7 @@ REQUIREMENTS = {
"pymacaroons-pynacl": ["pymacaroons"],
"msgpack-python>=0.3.0": ["msgpack"],
"phonenumbers>=8.2.0": ["phonenumbers"],
"affinity": ["affinity"],
}
CONDITIONAL_REQUIREMENTS = {
"web_client": {

View File

@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Copyright 2017 Vector Creations Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseSlavedStore
from synapse.storage.client_ips import LAST_SEEN_GRANULARITY
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.descriptors import Cache
class SlavedClientIpStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedClientIpStore, self).__init__(db_conn, hs)
self.client_ip_last_seen = Cache(
name="client_ip_last_seen",
keylen=4,
max_entries=50000 * CACHE_SIZE_FACTOR,
)
def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
now = int(self._clock.time_msec())
key = (user_id, access_token, ip)
try:
last_seen = self.client_ip_last_seen.get(key)
except KeyError:
last_seen = None
# Rate-limited inserts
if last_seen is not None and (now - last_seen) < LAST_SEEN_GRANULARITY:
return
self.hs.get_tcp_replication().send_user_ip(
user_id, access_token, ip, user_agent, device_id, now
)

View File

@@ -20,6 +20,7 @@ from twisted.internet.protocol import ReconnectingClientFactory
from .commands import (
FederationAckCommand, UserSyncCommand, RemovePusherCommand, InvalidateCacheCommand,
UserIpCommand,
)
from .protocol import ClientReplicationStreamProtocol
@@ -178,6 +179,12 @@ class ReplicationClientHandler(object):
cmd = InvalidateCacheCommand(cache_func.__name__, keys)
self.send_command(cmd)
def send_user_ip(self, user_id, access_token, ip, user_agent, device_id, last_seen):
"""Tell the master that the user made a request.
"""
cmd = UserIpCommand(user_id, access_token, ip, user_agent, device_id, last_seen)
self.send_command(cmd)
def await_sync(self, data):
"""Returns a deferred that is resolved when we receive a SYNC command
with given data.

View File

@@ -304,6 +304,40 @@ class InvalidateCacheCommand(Command):
return " ".join((self.cache_func, json.dumps(self.keys)))
class UserIpCommand(Command):
"""Sent periodically when a worker sees activity from a client.
Format::
USER_IP <user_id>, <access_token>, <ip>, <user_agent>, <device_id>, <last_seen>
"""
NAME = "USER_IP"
def __init__(self, user_id, access_token, ip, user_agent, device_id, last_seen):
self.user_id = user_id
self.access_token = access_token
self.ip = ip
self.user_agent = user_agent
self.device_id = device_id
self.last_seen = last_seen
@classmethod
def from_line(cls, line):
user_id, jsn = line.split(" ", 1)
access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
return cls(
user_id, access_token, ip, user_agent, device_id, last_seen
)
def to_line(self):
return self.user_id + " " + json.dumps((
self.access_token, self.ip, self.user_agent, self.device_id,
self.last_seen,
))
# Map of command name to command type.
COMMAND_MAP = {
cmd.NAME: cmd
@@ -320,6 +354,7 @@ COMMAND_MAP = {
SyncCommand,
RemovePusherCommand,
InvalidateCacheCommand,
UserIpCommand,
)
}
@@ -342,5 +377,6 @@ VALID_CLIENT_COMMANDS = (
FederationAckCommand.NAME,
RemovePusherCommand.NAME,
InvalidateCacheCommand.NAME,
UserIpCommand.NAME,
ErrorCommand.NAME,
)
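A round trip through the wire format defined above; the values are illustrative:

cmd = UserIpCommand(
    "@alice:example.com", "some_access_token", "10.1.2.3",
    "Mozilla/5.0", "ADEVICEID", 1506340000000,
)
line = cmd.to_line()
# roughly: '@alice:example.com ["some_access_token", "10.1.2.3", ...]'
parsed = UserIpCommand.from_line(line)
assert parsed.ip == "10.1.2.3" and parsed.last_seen == 1506340000000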

View File

@@ -244,7 +244,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
becoming full.
"""
if self.state == ConnectionStates.CLOSED:
logger.info("[%s] Not sending, connection closed", self.id())
logger.debug("[%s] Not sending, connection closed", self.id())
return
if do_buffer and self.state != ConnectionStates.ESTABLISHED:
@@ -264,7 +264,7 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
def _queue_command(self, cmd):
"""Queue the command until the connection is ready to write to again.
"""
logger.info("[%s] Queing as conn %r, cmd: %r", self.id(), self.state, cmd)
logger.debug("[%s] Queing as conn %r, cmd: %r", self.id(), self.state, cmd)
self.pending_commands.append(cmd)
if len(self.pending_commands) > self.max_line_buffer:
@@ -406,6 +406,12 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):
def on_INVALIDATE_CACHE(self, cmd):
self.streamer.on_invalidate_cache(cmd.cache_func, cmd.keys)
def on_USER_IP(self, cmd):
self.streamer.on_user_ip(
cmd.user_id, cmd.access_token, cmd.ip, cmd.user_agent, cmd.device_id,
cmd.last_seen,
)
@defer.inlineCallbacks
def subscribe_to_stream(self, stream_name, token):
"""Subscribe the remote to a streams.

View File

@@ -35,6 +35,7 @@ user_sync_counter = metrics.register_counter("user_sync")
federation_ack_counter = metrics.register_counter("federation_ack")
remove_pusher_counter = metrics.register_counter("remove_pusher")
invalidate_cache_counter = metrics.register_counter("invalidate_cache")
user_ip_cache_counter = metrics.register_counter("user_ip_cache")
logger = logging.getLogger(__name__)
@@ -238,6 +239,15 @@ class ReplicationStreamer(object):
invalidate_cache_counter.inc()
getattr(self.store, cache_func).invalidate(tuple(keys))
@measure_func("repl.on_user_ip")
def on_user_ip(self, user_id, access_token, ip, user_agent, device_id, last_seen):
"""The client saw a user request
"""
user_ip_cache_counter.inc()
self.store.insert_client_ip(
user_id, access_token, ip, user_agent, device_id, last_seen,
)
def send_sync_to_all_connections(self, data):
"""Sends a SYNC command to all clients.

View File

@@ -168,7 +168,7 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
DEFAULT_MESSAGE = (
"Sharing illegal content on this server is not permitted and rooms in"
" violatation will be blocked."
" violation will be blocked."
)
def __init__(self, hs):
@@ -296,7 +296,7 @@ class QuarantineMediaInRoom(ClientV1RestServlet):
class ResetPasswordRestServlet(ClientV1RestServlet):
"""Post request to allow an administrator reset password for a user.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
Example:
http://localhost:8008/_matrix/client/api/v1/admin/reset_password/
@user:to_reset_password?access_token=admin_access_token
@@ -319,7 +319,7 @@ class ResetPasswordRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_POST(self, request, target_user_id):
"""Post request to allow an administrator reset password for a user.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
"""
UserID.from_string(target_user_id)
requester = yield self.auth.get_user_by_req(request)
@@ -343,7 +343,7 @@ class ResetPasswordRestServlet(ClientV1RestServlet):
class GetUsersPaginatedRestServlet(ClientV1RestServlet):
"""Get request to get specific number of users from Synapse.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
Example:
http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
@admin:user?access_token=admin_access_token&start=0&limit=10
@@ -362,7 +362,7 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_GET(self, request, target_user_id):
"""Get request to get specific number of users from Synapse.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
"""
target_user = UserID.from_string(target_user_id)
requester = yield self.auth.get_user_by_req(request)
@@ -395,7 +395,7 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
@defer.inlineCallbacks
def on_POST(self, request, target_user_id):
"""Post request to get specific number of users from Synapse..
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
Example:
http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
@admin:user?access_token=admin_access_token
@@ -433,7 +433,7 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
class SearchUsersRestServlet(ClientV1RestServlet):
"""Get request to search user table for specific users according to
search term.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
Example:
http://localhost:8008/_matrix/client/api/v1/admin/search_users/
@admin:user?access_token=admin_access_token&term=alice
@@ -453,7 +453,7 @@ class SearchUsersRestServlet(ClientV1RestServlet):
def on_GET(self, request, target_user_id):
"""Get request to search user table for specific users according to
search term.
This need a user have a administrator access in Synapse.
This needs the user to have administrator access in Synapse.
"""
target_user = UserID.from_string(target_user_id)
requester = yield self.auth.get_user_by_req(request)

View File

@@ -73,6 +73,7 @@ class PushersSetRestServlet(ClientV1RestServlet):
def __init__(self, hs):
super(PushersSetRestServlet, self).__init__(hs)
self.notifier = hs.get_notifier()
self.pusher_pool = self.hs.get_pusherpool()
@defer.inlineCallbacks
def on_POST(self, request):
@@ -81,12 +82,10 @@ class PushersSetRestServlet(ClientV1RestServlet):
content = parse_json_object_from_request(request)
pusher_pool = self.hs.get_pusherpool()
if ('pushkey' in content and 'app_id' in content
and 'kind' in content and
content['kind'] is None):
yield pusher_pool.remove_pusher(
yield self.pusher_pool.remove_pusher(
content['app_id'], content['pushkey'], user_id=user.to_string()
)
defer.returnValue((200, {}))
@@ -109,14 +108,14 @@ class PushersSetRestServlet(ClientV1RestServlet):
append = content['append']
if not append:
yield pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user(
yield self.pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user(
app_id=content['app_id'],
pushkey=content['pushkey'],
not_user_id=user.to_string()
)
try:
yield pusher_pool.add_pusher(
yield self.pusher_pool.add_pusher(
user_id=user.to_string(),
access_token=requester.access_token_id,
kind=content['kind'],
@@ -152,6 +151,7 @@ class PushersRemoveRestServlet(RestServlet):
self.hs = hs
self.notifier = hs.get_notifier()
self.auth = hs.get_v1auth()
self.pusher_pool = self.hs.get_pusherpool()
@defer.inlineCallbacks
def on_GET(self, request):
@@ -161,10 +161,8 @@ class PushersRemoveRestServlet(RestServlet):
app_id = parse_string(request, "app_id", required=True)
pushkey = parse_string(request, "pushkey", required=True)
pusher_pool = self.hs.get_pusherpool()
try:
yield pusher_pool.remove_pusher(
yield self.pusher_pool.remove_pusher(
app_id=app_id,
pushkey=pushkey,
user_id=user.to_string(),

View File

@@ -188,13 +188,11 @@ class KeyChangesServlet(RestServlet):
user_id = requester.user.to_string()
changed = yield self.device_handler.get_user_ids_changed(
results = yield self.device_handler.get_user_ids_changed(
user_id, from_token,
)
defer.returnValue((200, {
"changed": list(changed),
}))
defer.returnValue((200, results))
class OneTimeKeyServlet(RestServlet):

View File

@@ -110,7 +110,7 @@ class SyncRestServlet(RestServlet):
filter_id = parse_string(request, "filter", default=None)
full_state = parse_boolean(request, "full_state", default=False)
logger.info(
logger.debug(
"/sync: user=%r, timeout=%r, since=%r,"
" set_presence=%r, filter_id=%r, device_id=%r" % (
user, timeout, since, set_presence, filter_id, device_id
@@ -164,27 +164,35 @@ class SyncRestServlet(RestServlet):
)
time_now = self.clock.time_msec()
joined = self.encode_joined(
sync_result.joined, time_now, requester.access_token_id, filter.event_fields
response_content = self.encode_response(
time_now, sync_result, requester.access_token_id, filter
)
invited = self.encode_invited(
sync_result.invited, time_now, requester.access_token_id
defer.returnValue((200, response_content))
@staticmethod
def encode_response(time_now, sync_result, access_token_id, filter):
joined = SyncRestServlet.encode_joined(
sync_result.joined, time_now, access_token_id, filter.event_fields
)
archived = self.encode_archived(
sync_result.archived, time_now, requester.access_token_id,
invited = SyncRestServlet.encode_invited(
sync_result.invited, time_now, access_token_id,
)
archived = SyncRestServlet.encode_archived(
sync_result.archived, time_now, access_token_id,
filter.event_fields,
)
response_content = {
return {
"account_data": {"events": sync_result.account_data},
"to_device": {"events": sync_result.to_device},
"device_lists": {
"changed": list(sync_result.device_lists),
"changed": list(sync_result.device_lists.changed),
"left": list(sync_result.device_lists.left),
},
"presence": self.encode_presence(
"presence": SyncRestServlet.encode_presence(
sync_result.presence, time_now
),
"rooms": {
@@ -196,9 +204,8 @@ class SyncRestServlet(RestServlet):
"next_batch": sync_result.next_batch.to_string(),
}
defer.returnValue((200, response_content))
def encode_presence(self, events, time_now):
@staticmethod
def encode_presence(events, time_now):
return {
"events": [
{
@@ -212,7 +219,8 @@ class SyncRestServlet(RestServlet):
]
}
def encode_joined(self, rooms, time_now, token_id, event_fields):
@staticmethod
def encode_joined(rooms, time_now, token_id, event_fields):
"""
Encode the joined rooms in a sync result
@@ -231,13 +239,14 @@ class SyncRestServlet(RestServlet):
"""
joined = {}
for room in rooms:
joined[room.room_id] = self.encode_room(
joined[room.room_id] = SyncRestServlet.encode_room(
room, time_now, token_id, only_fields=event_fields
)
return joined
def encode_invited(self, rooms, time_now, token_id):
@staticmethod
def encode_invited(rooms, time_now, token_id):
"""
Encode the invited rooms in a sync result
@@ -270,7 +279,8 @@ class SyncRestServlet(RestServlet):
return invited
def encode_archived(self, rooms, time_now, token_id, event_fields):
@staticmethod
def encode_archived(rooms, time_now, token_id, event_fields):
"""
Encode the archived rooms in a sync result
@@ -289,7 +299,7 @@ class SyncRestServlet(RestServlet):
"""
joined = {}
for room in rooms:
joined[room.room_id] = self.encode_room(
joined[room.room_id] = SyncRestServlet.encode_room(
room, time_now, token_id, joined=False, only_fields=event_fields
)
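The user-visible effect of the device_lists change above is that the /sync response now reports both "changed" and "left". A sketch of the resulting fragment (illustrative values only):

sync_fragment = {
    "device_lists": {
        # users whose device lists changed since the last sync
        "changed": ["@alice:example.com"],
        # users with whom the syncing user no longer shares any room
        "left": ["@bob:example.com"],
    },
}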

View File

@@ -24,13 +24,13 @@ from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError
from synapse.events.snapshot import EventContext
from synapse.util.async import Linearizer
from synapse.util.caches import CACHE_SIZE_FACTOR
from collections import namedtuple
from frozendict import frozendict
import logging
import hashlib
import os
logger = logging.getLogger(__name__)
@@ -38,9 +38,6 @@ logger = logging.getLogger(__name__)
KeyStateTuple = namedtuple("KeyStateTuple", ("context", "type", "state_key"))
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
SIZE_OF_CACHE = int(100000 * CACHE_SIZE_FACTOR)
EVICTION_TIMEOUT_SECONDS = 60 * 60

View File

@@ -304,16 +304,6 @@ class DataStore(RoomMemberStore, RoomStore,
ret = yield self.runInteraction("count_users", _count_users)
defer.returnValue(ret)
def get_user_ip_and_agents(self, user):
return self._simple_select_list(
table="user_ips",
keyvalues={"user_id": user.to_string()},
retcols=[
"access_token", "ip", "user_agent", "last_seen"
],
desc="get_user_ip_and_agents",
)
def get_users(self):
"""Function to reterive a list of users in users table.

View File

@@ -16,6 +16,7 @@ import logging
from synapse.api.errors import StoreError
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.dictionary_cache import DictionaryCache
from synapse.util.caches.descriptors import Cache
from synapse.storage.engines import PostgresEngine
@@ -27,10 +28,6 @@ from twisted.internet import defer
import sys
import time
import threading
import os
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
logger = logging.getLogger(__name__)

View File

@@ -308,3 +308,16 @@ class AccountDataStore(SQLBaseStore):
" WHERE stream_id < ?"
)
txn.execute(update_max_id_sql, (next_id, next_id))
@cachedInlineCallbacks(num_args=2, cache_context=True, max_entries=5000)
def is_ignored_by(self, ignored_user_id, ignorer_user_id, cache_context):
ignored_account_data = yield self.get_global_account_data_by_type_for_user(
"m.ignored_user_list", ignorer_user_id,
on_invalidate=cache_context.invalidate,
)
if not ignored_account_data:
defer.returnValue(False)
defer.returnValue(
ignored_user_id in ignored_account_data.get("ignored_users", {})
)
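A standalone sketch of the check this new cached method performs, assuming the usual shape of m.ignored_user_list account data (illustrative data, no store or cache machinery involved):

ignored_account_data = {
    "ignored_users": {"@spammer:example.com": {}},
}

def is_ignored_by(ignored_user_id, account_data):
    # mirrors the cached method above, minus the caching
    if not account_data:
        return False
    return ignored_user_id in account_data.get("ignored_users", {})

assert is_ignored_by("@spammer:example.com", ignored_account_data)
assert not is_ignored_by("@friend:example.com", ignored_account_data)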

View File

@@ -15,12 +15,13 @@
import logging
from twisted.internet import defer
from twisted.internet import defer, reactor
from ._base import Cache
from . import background_updates
import os
from synapse.util.caches import CACHE_SIZE_FACTOR
logger = logging.getLogger(__name__)
@@ -30,9 +31,6 @@ logger = logging.getLogger(__name__)
LAST_SEEN_GRANULARITY = 120 * 1000
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
class ClientIpStore(background_updates.BackgroundUpdateStore):
def __init__(self, hs):
self.client_ip_last_seen = Cache(
@@ -50,10 +48,19 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
columns=["user_id", "device_id", "last_seen"],
)
@defer.inlineCallbacks
def insert_client_ip(self, user, access_token, ip, user_agent, device_id):
now = int(self._clock.time_msec())
key = (user.to_string(), access_token, ip)
# (user_id, access_token, ip) -> (user_agent, device_id, last_seen)
self._batch_row_update = {}
self._client_ip_looper = self._clock.looping_call(
self._update_client_ips_batch, 5 * 1000
)
reactor.addSystemEventTrigger("before", "shutdown", self._update_client_ips_batch)
def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id,
now=None):
if not now:
now = int(self._clock.time_msec())
key = (user_id, access_token, ip)
try:
last_seen = self.client_ip_last_seen.get(key)
@@ -62,34 +69,48 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
# Rate-limited inserts
if last_seen is not None and (now - last_seen) < LAST_SEEN_GRANULARITY:
defer.returnValue(None)
return
self.client_ip_last_seen.prefill(key, now)
# It's safe not to lock here: a) no unique constraint,
# b) LAST_SEEN_GRANULARITY makes concurrent updates incredibly unlikely
yield self._simple_upsert(
"user_ips",
keyvalues={
"user_id": user.to_string(),
"access_token": access_token,
"ip": ip,
"user_agent": user_agent,
"device_id": device_id,
},
values={
"last_seen": now,
},
desc="insert_client_ip",
lock=False,
self._batch_row_update[key] = (user_agent, device_id, now)
def _update_client_ips_batch(self):
to_update = self._batch_row_update
self._batch_row_update = {}
return self.runInteraction(
"_update_client_ips_batch", self._update_client_ips_batch_txn, to_update
)
def _update_client_ips_batch_txn(self, txn, to_update):
self.database_engine.lock_table(txn, "user_ips")
for entry in to_update.iteritems():
(user_id, access_token, ip), (user_agent, device_id, last_seen) = entry
self._simple_upsert_txn(
txn,
table="user_ips",
keyvalues={
"user_id": user_id,
"access_token": access_token,
"ip": ip,
"user_agent": user_agent,
"device_id": device_id,
},
values={
"last_seen": last_seen,
},
lock=False,
)
@defer.inlineCallbacks
def get_last_client_ip_by_device(self, devices):
def get_last_client_ip_by_device(self, user_id, device_id):
"""For each device_id listed, give the user_ip it was last seen on
Args:
devices (iterable[(str, str)]): list of (user_id, device_id) pairs
user_id (str)
device_id (str): If None fetches all devices for the user
Returns:
defer.Deferred: resolves to a dict, where the keys
@@ -100,6 +121,7 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
res = yield self.runInteraction(
"get_last_client_ip_by_device",
self._get_last_client_ip_by_device_txn,
user_id, device_id,
retcols=(
"user_id",
"access_token",
@@ -108,23 +130,34 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
"device_id",
"last_seen",
),
devices=devices
)
ret = {(d["user_id"], d["device_id"]): d for d in res}
for key in self._batch_row_update:
uid, access_token, ip = key
if uid == user_id:
user_agent, did, last_seen = self._batch_row_update[key]
if not device_id or did == device_id:
ret[(user_id, device_id)] = {
"user_id": user_id,
"access_token": access_token,
"ip": ip,
"user_agent": user_agent,
"device_id": did,
"last_seen": last_seen,
}
defer.returnValue(ret)
@classmethod
def _get_last_client_ip_by_device_txn(cls, txn, devices, retcols):
def _get_last_client_ip_by_device_txn(cls, txn, user_id, device_id, retcols):
where_clauses = []
bindings = []
for (user_id, device_id) in devices:
if device_id is None:
where_clauses.append("(user_id = ? AND device_id IS NULL)")
bindings.extend((user_id, ))
else:
where_clauses.append("(user_id = ? AND device_id = ?)")
bindings.extend((user_id, device_id))
if device_id is None:
where_clauses.append("user_id = ?")
bindings.extend((user_id, ))
else:
where_clauses.append("(user_id = ? AND device_id = ?)")
bindings.extend((user_id, device_id))
if not where_clauses:
return []
@@ -152,3 +185,37 @@ class ClientIpStore(background_updates.BackgroundUpdateStore):
txn.execute(sql, bindings)
return cls.cursor_to_dict(txn)
@defer.inlineCallbacks
def get_user_ip_and_agents(self, user):
user_id = user.to_string()
results = {}
for key in self._batch_row_update:
uid, access_token, ip = key
if uid == user_id:
user_agent, _, last_seen = self._batch_row_update[key]
results[(access_token, ip)] = (user_agent, last_seen)
rows = yield self._simple_select_list(
table="user_ips",
keyvalues={"user_id": user_id},
retcols=[
"access_token", "ip", "user_agent", "last_seen"
],
desc="get_user_ip_and_agents",
)
results.update(
((row["access_token"], row["ip"]), (row["user_agent"], row["last_seen"]))
for row in rows
)
defer.returnValue(list(
{
"access_token": access_token,
"ip": ip,
"user_agent": user_agent,
"last_seen": last_seen,
}
for (access_token, ip), (user_agent, last_seen) in results.iteritems()
))
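Taken together, this file's changes swap the per-request upsert for a write-behind buffer: updates coalesce in _batch_row_update and a looping call flushes them in a single transaction every five seconds. A minimal standalone sketch of the pattern (plain Python with hypothetical names; the real code uses Twisted's looping_call and _simple_upsert_txn):

class ClientIpBuffer(object):
    """Write-behind buffer: later updates for the same key overwrite
    earlier ones, so each flush does at most one upsert per key."""

    def __init__(self):
        # (user_id, access_token, ip) -> (user_agent, device_id, last_seen)
        self._pending = {}

    def record(self, user_id, access_token, ip, user_agent, device_id, now):
        self._pending[(user_id, access_token, ip)] = (user_agent, device_id, now)

    def flush(self):
        to_update, self._pending = self._pending, {}
        return to_update  # the real code upserts each entry in one txn

buf = ClientIpBuffer()
buf.record("@a:hs", "tok", "10.0.0.1", "UA", "dev1", 1000)
buf.record("@a:hs", "tok", "10.0.0.1", "UA", "dev1", 2000)  # coalesced
assert buf.flush() == {("@a:hs", "tok", "10.0.0.1"): ("UA", "dev1", 2000)}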

View File

@@ -403,6 +403,11 @@ class EventsStore(SQLBaseStore):
(room_id, ), new_state
)
for room_id, latest_event_ids in new_forward_extremeties.iteritems():
self.get_latest_event_ids_in_room.prefill(
(room_id,), list(latest_event_ids)
)
@defer.inlineCallbacks
def _calculate_new_extremeties(self, room_id, event_contexts, latest_event_ids):
"""Calculates the new forward extremeties for a room given events to

View File

@@ -113,30 +113,37 @@ class KeyStore(SQLBaseStore):
keys[key_id] = key
defer.returnValue(keys)
@defer.inlineCallbacks
def store_server_verify_key(self, server_name, from_server, time_now_ms,
verify_key):
"""Stores a NACL verification key for the given server.
Args:
server_name (str): The name of the server.
key_id (str): The version of the key for the server.
from_server (str): Where the verification key was looked up
ts_now_ms (int): The time now in milliseconds
verification_key (VerifyKey): The NACL verify key.
time_now_ms (int): The time now in milliseconds
verify_key (nacl.signing.VerifyKey): The NACL verify key.
"""
yield self._simple_upsert(
table="server_signature_keys",
keyvalues={
"server_name": server_name,
"key_id": "%s:%s" % (verify_key.alg, verify_key.version),
},
values={
"from_server": from_server,
"ts_added_ms": time_now_ms,
"verify_key": buffer(verify_key.encode()),
},
desc="store_server_verify_key",
)
key_id = "%s:%s" % (verify_key.alg, verify_key.version)
def _txn(txn):
self._simple_upsert_txn(
txn,
table="server_signature_keys",
keyvalues={
"server_name": server_name,
"key_id": key_id,
},
values={
"from_server": from_server,
"ts_added_ms": time_now_ms,
"verify_key": buffer(verify_key.encode()),
},
)
txn.call_after(
self._get_server_verify_key.invalidate,
(server_name, key_id)
)
return self.runInteraction("store_server_verify_key", _txn)
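The point of moving the upsert into a single interaction is that txn.call_after only fires once the transaction commits, so the key cache is never invalidated for a write that rolled back. A standalone sketch of that ordering (a hypothetical stand-in class, not the synapse transaction API):

class FakeTxn(object):
    """Minimal stand-in for a transaction with call_after semantics."""

    def __init__(self):
        self._after_callbacks = []

    def call_after(self, fn, *args):
        # deferred until (and unless) the transaction commits
        self._after_callbacks.append((fn, args))

    def commit(self):
        for fn, args in self._after_callbacks:
            fn(*args)

invalidated = []
txn = FakeTxn()
txn.call_after(invalidated.append, ("server9", "ed25519:auto"))
# ... the upsert itself would run here; nothing is invalidated yet ...
assert invalidated == []
txn.commit()
assert invalidated == [("server9", "ed25519:auto")]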
def store_server_keys_json(self, server_name, key_id, from_server,
ts_now_ms, ts_expires_ms, key_json_bytes):

View File

@@ -315,6 +315,12 @@ class StateStore(SQLBaseStore):
],
)
for event_id, state_group_id in state_groups.iteritems():
txn.call_after(
self._get_state_group_for_event.prefill,
(event_id,), state_group_id
)
def _count_state_group_hops_txn(self, txn, state_group):
"""Given a state group, count how many hops there are in the tree.
@@ -584,8 +590,8 @@ class StateStore(SQLBaseStore):
state_map = yield self.get_state_ids_for_events([event_id], types)
defer.returnValue(state_map[event_id])
@cached(num_args=2, max_entries=50000)
def _get_state_group_for_event(self, room_id, event_id):
@cached(max_entries=50000)
def _get_state_group_for_event(self, event_id):
return self._simple_select_one_onecol(
table="event_to_state_groups",
keyvalues={

View File

@@ -16,7 +16,7 @@
import synapse.metrics
import os
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.5))
metrics = synapse.metrics.get_metrics_for("synapse.util.caches")
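Note that the factor is read once, via os.environ.get, when synapse.util.caches is first imported, so overrides must be in the environment before that import happens. A small sketch (assuming synapse is installed):

import os

# must run before anything imports synapse.util.caches
os.environ.setdefault("SYNAPSE_CACHE_FACTOR", "1.0")

from synapse.util.caches import CACHE_SIZE_FACTOR
print(CACHE_SIZE_FACTOR)  # 1.0 rather than the new default of 0.5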

View File

@@ -16,6 +16,7 @@ import logging
from synapse.util.async import ObservableDeferred
from synapse.util import unwrapFirstError, logcontext
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.caches.lrucache import LruCache
from synapse.util.caches.treecache import TreeCache, iterate_tree_cache_entry
from synapse.util.stringutils import to_ascii
@@ -25,7 +26,6 @@ from . import register_cache
from twisted.internet import defer
from collections import namedtuple
import os
import functools
import inspect
import threading
@@ -37,9 +37,6 @@ logger = logging.getLogger(__name__)
_CacheSentinel = object()
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
class CacheEntry(object):
__slots__ = [
"deferred", "sequence", "callbacks", "invalidated"

View File

@@ -94,6 +94,9 @@ class ExpiringCache(object):
return entry.value
def __contains__(self, key):
return key in self._cache
def get(self, key, default=None):
try:
return self[key]

View File

@@ -13,20 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.caches import register_cache
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from blist import sorteddict
import logging
import os
logger = logging.getLogger(__name__)
CACHE_SIZE_FACTOR = float(os.environ.get("SYNAPSE_CACHE_FACTOR", 0.1))
class StreamChangeCache(object):
"""Keeps track of the stream positions of the latest change in a set of entities.

View File

@@ -43,7 +43,8 @@ MEMBERSHIP_PRIORITY = (
@defer.inlineCallbacks
def filter_events_for_clients(store, user_tuples, events, event_id_to_state):
def filter_events_for_clients(store, user_tuples, events, event_id_to_state,
always_include_ids=frozenset()):
""" Returns dict of user_id -> list of events that user is allowed to
see.
@@ -54,6 +55,8 @@ def filter_events_for_clients(store, user_tuples, events, event_id_to_state):
* the user has not been a member of the room since the
given events
events ([synapse.events.EventBase]): list of events to filter
always_include_ids (set(event_id)): set of event ids to specifically
include (unless sender is ignored)
"""
forgotten = yield preserve_context_over_deferred(defer.gatherResults([
defer.maybeDeferred(
@@ -91,6 +94,9 @@ def filter_events_for_clients(store, user_tuples, events, event_id_to_state):
if not event.is_state() and event.sender in ignore_list:
return False
if event.event_id in always_include_ids:
return True
state = event_id_to_state[event.event_id]
# get the room_visibility at the time of the event.
@@ -189,26 +195,8 @@ def filter_events_for_clients(store, user_tuples, events, event_id_to_state):
@defer.inlineCallbacks
def filter_events_for_clients_context(store, user_tuples, events, event_id_to_context):
user_ids = set(u[0] for u in user_tuples)
event_id_to_state = {}
for event_id, context in event_id_to_context.items():
state = yield store.get_events([
e_id
for key, e_id in context.current_state_ids.iteritems()
if key == (EventTypes.RoomHistoryVisibility, "")
or (key[0] == EventTypes.Member and key[1] in user_ids)
])
event_id_to_state[event_id] = state
res = yield filter_events_for_clients(
store, user_tuples, events, event_id_to_state
)
defer.returnValue(res)
@defer.inlineCallbacks
def filter_events_for_client(store, user_id, events, is_peeking=False):
def filter_events_for_client(store, user_id, events, is_peeking=False,
always_include_ids=frozenset()):
"""
Check which events a user is allowed to see
@@ -232,6 +220,7 @@ def filter_events_for_client(store, user_id, events, is_peeking=False):
types=types
)
res = yield filter_events_for_clients(
store, [(user_id, is_peeking)], events, event_id_to_state
store, [(user_id, is_peeking)], events, event_id_to_state,
always_include_ids=always_include_ids,
)
defer.returnValue(res.get(user_id, []))
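A hedged usage sketch of the new keyword argument (hypothetical call site; filter_events_for_client is the real function in synapse.visibility):

from twisted.internet import defer
from synapse.visibility import filter_events_for_client

@defer.inlineCallbacks
def visible_events(store, user_id, events, pinned_events):
    # pinned events survive visibility filtering unless their sender
    # is on the requesting user's ignore list
    res = yield filter_events_for_client(
        store, user_id, events,
        always_include_ids=frozenset(e.event_id for e in pinned_events),
    )
    defer.returnValue(res)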

View File

@@ -0,0 +1,229 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
import signedjson.key
import signedjson.sign
from mock import Mock
from synapse.api.errors import SynapseError
from synapse.crypto import keyring
from synapse.util import async, logcontext
from synapse.util.logcontext import LoggingContext
from tests import unittest, utils
from twisted.internet import defer
class MockPerspectiveServer(object):
def __init__(self):
self.server_name = "mock_server"
self.key = signedjson.key.generate_signing_key(0)
def get_verify_keys(self):
vk = signedjson.key.get_verify_key(self.key)
return {
"%s:%s" % (vk.alg, vk.version): vk,
}
def get_signed_key(self, server_name, verify_key):
key_id = "%s:%s" % (verify_key.alg, verify_key.version)
res = {
"server_name": server_name,
"old_verify_keys": {},
"valid_until_ts": time.time() * 1000 + 3600,
"verify_keys": {
key_id: {
"key": signedjson.key.encode_verify_key_base64(verify_key)
}
}
}
signedjson.sign.sign_json(res, self.server_name, self.key)
return res
class KeyringTestCase(unittest.TestCase):
@defer.inlineCallbacks
def setUp(self):
self.mock_perspective_server = MockPerspectiveServer()
self.http_client = Mock()
self.hs = yield utils.setup_test_homeserver(
handlers=None,
http_client=self.http_client,
)
self.hs.config.perspectives = {
self.mock_perspective_server.server_name:
self.mock_perspective_server.get_verify_keys()
}
def check_context(self, _, expected):
self.assertEquals(
getattr(LoggingContext.current_context(), "test_key", None),
expected
)
@defer.inlineCallbacks
def test_wait_for_previous_lookups(self):
sentinel_context = LoggingContext.current_context()
kr = keyring.Keyring(self.hs)
lookup_1_deferred = defer.Deferred()
lookup_2_deferred = defer.Deferred()
with LoggingContext("one") as context_one:
context_one.test_key = "one"
wait_1_deferred = kr.wait_for_previous_lookups(
["server1"],
{"server1": lookup_1_deferred},
)
# there were no previous lookups, so the deferred should be ready
self.assertTrue(wait_1_deferred.called)
# ... so we should have preserved the LoggingContext.
self.assertIs(LoggingContext.current_context(), context_one)
wait_1_deferred.addBoth(self.check_context, "one")
with LoggingContext("two") as context_two:
context_two.test_key = "two"
# set off another wait. It should block because the first lookup
# hasn't yet completed.
wait_2_deferred = kr.wait_for_previous_lookups(
["server1"],
{"server1": lookup_2_deferred},
)
self.assertFalse(wait_2_deferred.called)
# ... so we should have reset the LoggingContext.
self.assertIs(LoggingContext.current_context(), sentinel_context)
wait_2_deferred.addBoth(self.check_context, "two")
# let the first lookup complete (in the sentinel context)
lookup_1_deferred.callback(None)
# now the second wait should complete and restore our
# loggingcontext.
yield wait_2_deferred
@defer.inlineCallbacks
def test_verify_json_objects_for_server_awaits_previous_requests(self):
key1 = signedjson.key.generate_signing_key(1)
kr = keyring.Keyring(self.hs)
json1 = {}
signedjson.sign.sign_json(json1, "server10", key1)
persp_resp = {
"server_keys": [
self.mock_perspective_server.get_signed_key(
"server10",
signedjson.key.get_verify_key(key1)
),
]
}
persp_deferred = defer.Deferred()
@defer.inlineCallbacks
def get_perspectives(**kwargs):
self.assertEquals(
LoggingContext.current_context().test_key, "11",
)
with logcontext.PreserveLoggingContext():
yield persp_deferred
defer.returnValue(persp_resp)
self.http_client.post_json.side_effect = get_perspectives
with LoggingContext("11") as context_11:
context_11.test_key = "11"
# start off a first set of lookups
res_deferreds = kr.verify_json_objects_for_server(
[("server10", json1),
("server11", {})
]
)
# the unsigned json should be rejected pretty quickly
self.assertTrue(res_deferreds[1].called)
try:
yield res_deferreds[1]
self.assertFalse("unsigned json didn't cause a failure")
except SynapseError:
pass
self.assertFalse(res_deferreds[0].called)
res_deferreds[0].addBoth(self.check_context, None)
# wait a tick for it to send the request to the perspectives server
# (it first tries the datastore)
yield async.sleep(0.005)
self.http_client.post_json.assert_called_once()
self.assertIs(LoggingContext.current_context(), context_11)
context_12 = LoggingContext("12")
context_12.test_key = "12"
with logcontext.PreserveLoggingContext(context_12):
# a second request for a server with outstanding requests
# should block rather than start a second call
self.http_client.post_json.reset_mock()
self.http_client.post_json.return_value = defer.Deferred()
res_deferreds_2 = kr.verify_json_objects_for_server(
[("server10", json1)],
)
yield async.sleep(0.005)
self.http_client.post_json.assert_not_called()
res_deferreds_2[0].addBoth(self.check_context, None)
# complete the first request
with logcontext.PreserveLoggingContext():
persp_deferred.callback(persp_resp)
self.assertIs(LoggingContext.current_context(), context_11)
with logcontext.PreserveLoggingContext():
yield res_deferreds[0]
yield res_deferreds_2[0]
@defer.inlineCallbacks
def test_verify_json_for_server(self):
kr = keyring.Keyring(self.hs)
key1 = signedjson.key.generate_signing_key(1)
yield self.hs.datastore.store_server_verify_key(
"server9", "", time.time() * 1000,
signedjson.key.get_verify_key(key1),
)
json1 = {}
signedjson.sign.sign_json(json1, "server9", key1)
sentinel_context = LoggingContext.current_context()
with LoggingContext("one") as context_one:
context_one.test_key = "one"
defer = kr.verify_json_for_server("server9", {})
try:
yield defer
self.fail("should fail on unsigned json")
except SynapseError:
pass
self.assertIs(LoggingContext.current_context(), context_one)
defer = kr.verify_json_for_server("server9", json1)
self.assertFalse(defer.called)
self.assertIs(LoggingContext.current_context(), sentinel_context)
yield defer
self.assertIs(LoggingContext.current_context(), context_one)

View File

@@ -19,7 +19,6 @@ import synapse.api.errors
import synapse.handlers.device
import synapse.storage
from synapse import types
from tests import unittest, utils
user1 = "@boris:aaa"
@@ -179,6 +178,6 @@ class DeviceTestCase(unittest.TestCase):
if ip is not None:
yield self.store.insert_client_ip(
types.UserID.from_string(user_id),
user_id,
access_token, ip, "user_agent", device_id)
self.clock.advance_time(1000)

View File

@@ -241,7 +241,7 @@ class CacheDecoratorTestCase(unittest.TestCase):
callcount2 = [0]
class A(object):
@cached(max_entries=20) # HACK: This makes it 2 due to cache factor
@cached(max_entries=4) # HACK: This makes it 2 due to cache factor
def func(self, key):
callcount[0] += 1
return key

View File

@@ -15,9 +15,6 @@
from twisted.internet import defer
import synapse.server
import synapse.storage
import synapse.types
import tests.unittest
import tests.utils
@@ -39,14 +36,11 @@ class ClientIpStoreTestCase(tests.unittest.TestCase):
self.clock.now = 12345678
user_id = "@user:id"
yield self.store.insert_client_ip(
synapse.types.UserID.from_string(user_id),
user_id,
"access_token", "ip", "user_agent", "device_id",
)
# deliberately use an iterable here to make sure that the lookup
# method doesn't iterate it twice
device_list = iter(((user_id, "device_id"),))
result = yield self.store.get_last_client_ip_by_device(device_list)
result = yield self.store.get_last_client_ip_by_device(user_id, "device_id")
r = result[(user_id, "device_id")]
self.assertDictContainsSubset(

View File

@@ -24,15 +24,17 @@ from synapse.http.endpoint import resolve_service
from tests.utils import MockClock
@unittest.DEBUG
class DnsTestCase(unittest.TestCase):
@defer.inlineCallbacks
def test_resolve(self):
dns_client_mock = Mock()
service_name = "test_service.examle.com"
service_name = "test_service.example.com"
host_name = "example.com"
ip_address = "127.0.0.1"
ip6_address = "::1"
answer_srv = dns.RRHeader(
type=dns.SRV,
@@ -48,8 +50,22 @@ class DnsTestCase(unittest.TestCase):
)
)
dns_client_mock.lookupService.return_value = ([answer_srv], None, None)
dns_client_mock.lookupAddress.return_value = ([answer_a], None, None)
answer_aaaa = dns.RRHeader(
type=dns.AAAA,
payload=dns.Record_AAAA(
address=ip6_address,
)
)
dns_client_mock.lookupService.return_value = defer.succeed(
([answer_srv], None, None),
)
dns_client_mock.lookupAddress.return_value = defer.succeed(
([answer_a], None, None),
)
dns_client_mock.lookupIPV6Address.return_value = defer.succeed(
([answer_aaaa], None, None),
)
cache = {}
@@ -59,10 +75,12 @@ class DnsTestCase(unittest.TestCase):
dns_client_mock.lookupService.assert_called_once_with(service_name)
dns_client_mock.lookupAddress.assert_called_once_with(host_name)
dns_client_mock.lookupIPV6Address.assert_called_once_with(host_name)
self.assertEquals(len(servers), 1)
self.assertEquals(len(servers), 2)
self.assertEquals(servers, cache[service_name])
self.assertEquals(servers[0].host, ip_address)
self.assertEquals(servers[1].host, ip6_address)
@defer.inlineCallbacks
def test_from_cache_expired_and_dns_fail(self):

View File

@@ -56,6 +56,7 @@ def setup_test_homeserver(name="test", datastore=None, config=None, **kargs):
config.worker_replication_url = ""
config.worker_app = None
config.email_enable_notifs = False
config.block_non_admin_invites = False
config.use_frozen_dicts = True
config.database_config = {"name": "sqlite3"}

tox.ini
View File

@@ -14,14 +14,38 @@ deps =
setenv =
PYTHONDONTWRITEBYTECODE = no_byte_code
# As of twisted 16.4, trial tries to import the tests as a package, which
# means it needs to be on the pythonpath.
PYTHONPATH = {toxinidir}
commands =
/bin/sh -c "find {toxinidir} -name '*.pyc' -delete ; coverage run {env:COVERAGE_OPTS:} --source={toxinidir}/synapse \
{envbindir}/trial {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:}"
/usr/bin/find "{toxinidir}" -name '*.pyc' -delete
coverage run {env:COVERAGE_OPTS:} --source="{toxinidir}/synapse" \
"{envbindir}/trial" {env:TRIAL_FLAGS:} {posargs:tests} {env:TOXSUFFIX:}
{env:DUMP_COVERAGE_COMMAND:coverage report -m}
[testenv:py27]
# As of twisted 16.4, trial tries to import the tests as a package (previously
# it loaded the files explicitly), which means they need to be on the
# pythonpath. Our sdist doesn't include the 'tests' package, so normally it
# doesn't work within the tox virtualenv.
#
# As a workaround, we tell tox to do install with 'pip -e', which just
# creates a symlink to the project directory instead of unpacking the sdist.
#
# (An alternative to this would be to set PYTHONPATH to include the project
# directory. Note two problems with this:
#
# - if you set it via `setenv`, then it is also set during the 'install'
# phase, which inhibits unpacking the sdist, so the virtualenv isn't
# useful for anything else without setting PYTHONPATH similarly.
#
# - `synapse` is also loaded from PYTHONPATH so even if you only set
# PYTHONPATH for the test phase, we're still running the tests against
# the working copy rather than the contents of the sdist. So frankly
# you might as well use -e in the first place.
#
# )
usedevelop=true
[testenv:packaging]
deps =
check-manifest