Compare commits

523 Commits

Author SHA1 Message Date
Richard van der Hoff
1cfc2c4790 Prepare 0.32.1 release 2018-07-06 16:50:52 +01:00
Richard van der Hoff
2087d5d046 Merge pull request #3488 from matrix-org/rav/fix-netaddr-dep
Add explicit dependency on netaddr
2018-07-06 16:39:56 +01:00
Richard van der Hoff
1464a0578a Add explicit dependency on netaddr
the dependencies file, causing failures on upgrade (and presumably for new
installs).
2018-07-06 16:27:17 +01:00
Neil Johnson
277c561766 0.32.0 version bump, update changelog 2018-07-06 15:07:29 +01:00
Amber Brown
89690aaaeb changelog 2018-07-05 20:46:40 +10:00
Amber Brown
be8b32dbc2 ACL changelog 2018-07-05 20:45:12 +10:00
Amber Brown
d196fe42a9 bump version to 0.32.0rc1 2018-07-05 20:22:35 +10:00
Neil Johnson
feef8461d1 Merge remote-tracking branch 'hera/rav/server_acls' into develop 2018-07-05 11:04:01 +01:00
Erik Johnston
1a88640677 Merge pull request #3483 from matrix-org/rav/more_server_name_validation
More server_name validation
2018-07-05 10:04:20 +01:00
Richard van der Hoff
3cf3e08a97 Implementation of server_acls
... as described at
https://docs.google.com/document/d/1EttUVzjc2DWe2ciw4XPtNpUpIl9lWXGEsy2ewDS7rtw.
2018-07-04 19:06:20 +01:00
Richard van der Hoff
546bc9e28b More server_name validation
We need to do a bit more validation when we get a server name, but don't want
to be re-doing it all over the shop, so factor out a separate
parse_and_validate_server_name, and do the extra validation.

Also, use it to verify the server name in the config file.
2018-07-04 18:59:51 +01:00
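The commit above centralises server-name checking in a single parse_and_validate_server_name helper. As a rough illustration of the kind of checks involved (a minimal sketch with hypothetical logic, not Synapse's actual implementation), a validator splits off an optional port and then sanity-checks the host as a DNS name or bracketed IPv6 literal:

```python
import re
import string

def parse_and_validate_server_name(server_name):
    """Split a server_name into (host, port), rejecting obvious junk.

    Minimal sketch; the real checks in Synapse may differ.
    """
    host, port = server_name, None
    if not server_name.endswith("]"):  # "[::1]:8448" keeps its port outside the bracket
        if ":" in server_name:
            host, _, port_str = server_name.rpartition(":")
            if not port_str.isdigit():
                raise ValueError("invalid port in %r" % (server_name,))
            port = int(port_str)
    if host.startswith("[") and host.endswith("]"):
        # Very loose IPv6-literal check
        if not all(c in string.hexdigits + ":" for c in host[1:-1]):
            raise ValueError("invalid IPv6 literal in %r" % (server_name,))
    elif not re.match(r"^[A-Za-z0-9.-]+$", host):
        raise ValueError("invalid hostname in %r" % (server_name,))
    return host, port
```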
Richard van der Hoff
f192a93875 Merge pull request #3481 from matrix-org/rav/fix_cachedescriptor_test
Reinstate lost run_on_reactor in unit test
2018-07-04 18:55:33 +01:00
Erik Johnston
13f7adf84b Merge pull request #3473 from matrix-org/erikj/thread_cache
Invalidate cache on correct thread
2018-07-04 10:11:38 +01:00
Erik Johnston
40252d13d1 Merge pull request #3474 from matrix-org/erikj/py3_auth
Fix up auth check
2018-07-04 09:41:33 +01:00
Richard van der Hoff
ea555d5633 Reinstate lost run_on_reactor in unit test
a61738b removed a call to run_on_reactor from a unit test, but that call was
doing something useful, in making the function in question asynchronous.

Reinstate the call and add a check that we are testing what we wanted to be
testing.
2018-07-04 09:40:01 +01:00
Richard van der Hoff
508196e08a Reject invalid server names (#3480)
Make sure that server_names used in auth headers are sane, and reject them with
a sensible error code, before they disappear off into the depths of the system.
2018-07-03 14:36:14 +01:00
Richard van der Hoff
f741630847 Merge pull request #3470 from matrix-org/matthew/fix-utf8-logging
don't mix unicode strings with utf8-in-byte-strings
2018-07-02 15:25:36 +01:00
Matthew Hodgson
6ec3aa2f72 news snippet 2018-07-02 13:43:34 +01:00
Erik Johnston
abb183438c Correct newsfile 2018-07-02 13:12:38 +01:00
Erik Johnston
f88dea577d Newsfile 2018-07-02 11:46:32 +01:00
Erik Johnston
cea4662b13 Newsfile 2018-07-02 11:46:20 +01:00
Erik Johnston
2c33b55738 Avoid relying on int vs None comparison
Python 3 doesn't support comparing None to ints
2018-07-02 11:40:32 +01:00
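For context on the commit above: Python 2 allowed ordering comparisons between None and integers (None sorted before everything), whereas Python 3 raises TypeError, so any such comparison has to be made explicit. A quick illustration with a hypothetical helper:

```python
# Python 2: `5 > None` quietly evaluated to True.
# Python 3: the same expression raises
#   TypeError: '>' not supported between instances of 'int' and 'NoneType'
def newer_than(stream_id, min_stream_id):
    # Hypothetical helper: handle "no minimum recorded" (None) explicitly
    # instead of relying on `stream_id > None`.
    if min_stream_id is None:
        return True
    return stream_id > min_stream_id

print(newer_than(5, None))  # True
print(newer_than(5, 7))     # False
```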
Erik Johnston
cbf82dddf1 Ensure that we define sender_domain 2018-07-02 11:37:57 +01:00
Erik Johnston
3905c693c5 Invalidate cache on correct thread 2018-07-02 11:36:44 +01:00
Matthew Hodgson
fc4f8f33be replace invalid utf8 with \ufffd 2018-07-02 11:33:02 +01:00
Matthew Hodgson
1c867f5391 a fix which doesn't NPE everywhere 2018-07-01 11:56:33 +01:00
Matthew Hodgson
f131bf8d3e don't mix unicode strings with utf8-in-byte-strings
otherwise we explode with:

```
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 78, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 950, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 887, in emit
    self.handleError(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 810, in handleError
    None, sys.stderr)
  File "/usr/lib/python2.7/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/lib/python2.7/traceback.py", line 13, in _print
    file.write(str+terminator)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_io.py", line 170, in write
    self.log.emit(self.level, format=u"{log_io}", log_io=line)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 144, in emit
    self.observer(event)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 136, in __call__
    errorLogger = self._errorLoggerForObserver(brokenObserver)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 156, in _errorLoggerForObserver
    if obs is not observer
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 81, in __init__
    self.log = Logger(observer=self)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 64, in __init__
    namespace = self._namespaceFromCallingContext()
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 42, in _namespaceFromCallingContext
    return currentframe(2).f_globals["__name__"]
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/python/compat.py", line 93, in currentframe
    for x in range(n + 1):
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file site.py, line 129
  File "/usr/lib/python2.7/logging/__init__.py", line 859, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 732, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 471, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 335, in getMessage
    msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)
Logged from file site.py, line 129
```

...where the logger apparently recurses whilst trying to log the error, hitting the
maximum recursion depth and killing everything badly.
2018-07-01 05:08:58 +01:00
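The two commits above (`replace invalid utf8 with \ufffd` and this one) attack the same class of problem: byte strings holding UTF-8 must be decoded before being mixed with unicode strings, and undecodable bytes should become U+FFFD rather than blow up the logger as in the traceback. A minimal sketch of the safe pattern:

```python
def to_unicode(value):
    """Decode byte strings, replacing invalid UTF-8 with U+FFFD.

    Sketch of the general fix: never let raw bytes reach a
    unicode-only sink such as twisted's logger.
    """
    if isinstance(value, bytes):
        return value.decode("utf-8", errors="replace")
    return value

print(to_unicode(b"caf\xc3\xa9"))    # café
print(to_unicode(b"bad \xe2 byte"))  # bad \ufffd byte
```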
Matthew Hodgson
75d4986a8c Merge pull request #3467 from matrix-org/hawkowl/contributor-requirements
Clarify "real name" in contributor requirements
2018-06-30 00:21:10 +01:00
Amber Brown
7c0cdd330f topfile 2018-06-29 14:13:15 +01:00
Erik Johnston
e3b4043800 Merge pull request #3456 from matrix-org/hawkowl/federation-prevevent-checking
Check the state of prev_events a bit more thoroughly when coming over federation
2018-06-29 13:55:02 +01:00
Amber Brown
ec766b2530 clarification on what "real names" are 2018-06-29 10:33:31 +01:00
Matthew Hodgson
fc0e17b3e5 Merge pull request #3465 from matrix-org/matthew/as_ip_lock
add ip_range_whitelist parameter to limit where ASes can connect from
2018-06-28 21:15:06 +01:00
Matthew Hodgson
f82cf3c7df add test 2018-06-28 21:14:16 +01:00
Matthew Hodgson
0d7eabeada add towncrier snippet 2018-06-28 20:59:01 +01:00
Matthew Hodgson
e72234f6bd fix tests 2018-06-28 20:56:07 +01:00
Matthew Hodgson
f4f1cda928 add ip_range_whitelist parameter to limit where ASes can connect from 2018-06-28 20:32:00 +01:00
Amber Brown
6350bf925e Attempt to be more performant on PyPy (#3462) 2018-06-28 14:49:57 +01:00
Amber Brown
cfda61e9cd topfile update 2018-06-28 11:13:08 +01:00
Amber Brown
72d2143ea8 Revert "Revert "Try to not use as much CPU in the StreamChangeCache"" (#3454) 2018-06-28 11:04:18 +01:00
Amber Brown
caf07f770a topfile 2018-06-27 15:07:16 +01:00
Amber Brown
99800de63d try and clean up 2018-06-27 11:40:27 +01:00
Amber Brown
f03a5d1a17 pep8 2018-06-27 11:38:14 +01:00
Amber Brown
94f09618e5 cleanups 2018-06-27 11:38:03 +01:00
Amber Brown
a7ecf34b70 cleanups 2018-06-27 11:36:03 +01:00
Amber Brown
35cc3e8b14 stylistic cleanup 2018-06-27 11:32:09 +01:00
Amber Brown
8d62baa48c cleanups 2018-06-27 11:31:48 +01:00
Amber Brown
77078d6c8e handle federation not telling us about prev_events 2018-06-27 11:27:32 +01:00
Amber Brown
cd6bcdaf87 Better testing framework for homeserver-using things (#3446) 2018-06-27 10:37:24 +01:00
Matthew Hodgson
71ad43a8b5 Merge pull request #3452 from matrix-org/revert-3451-hawkowl/sorteddict-api
Revert "Try to not use as much CPU in the StreamChangeCache"
2018-06-26 18:09:28 +01:00
Matthew Hodgson
8057489b26 Revert "Try to not use as much CPU in the StreamChangeCache" 2018-06-26 18:09:01 +01:00
Matthew Hodgson
d91efb06cf Merge pull request #3451 from matrix-org/hawkowl/sorteddict-api
Try to not use as much CPU in the cache
2018-06-26 17:49:55 +01:00
Amber Brown
1202508067 fixes 2018-06-26 17:29:01 +01:00
Amber Brown
bd3d329c88 fixes 2018-06-26 17:28:12 +01:00
Amber Brown
abfe4b2957 try and make loading items from the cache faster 2018-06-26 17:25:34 +01:00
Matthew Hodgson
c695a8d003 Merge pull request #3449 from matrix-org/dbkr/fix_deactivate_account_multiple_pending
Fix error on deleting users pending deactivation
2018-06-26 10:59:19 +01:00
David Baker
028490afd4 Fix error on deleting users pending deactivation
Use simple_delete instead of simple_delete_one as commented
2018-06-26 10:52:52 +01:00
Matthew Hodgson
c7f6b420ae Merge pull request #3448 from matrix-org/matthew/gdpr-deactivate-admin-api
add GDPR erase param to deactivate API
2018-06-26 10:43:14 +01:00
Matthew Hodgson
9570aa82eb update doc for deactivate API 2018-06-26 10:42:50 +01:00
Matthew Hodgson
1e788db430 add GDPR erase param to deactivate API 2018-06-26 10:26:54 +01:00
Amber Brown
1d62c4a127 Merge pull request #3438 from turt2live/travis/dont-print-access-tokens-in-logs
Stop including access tokens in warnings in the log
2018-06-26 09:55:55 +01:00
Erik Johnston
d72fb9a448 Merge pull request #3442 from matrix-org/matthew/allow-unconsented-parts
allow non-consented users to still part rooms (to let us autopart them)
2018-06-25 20:14:18 +01:00
Erik Johnston
1b947d6dde Merge pull request #3443 from matrix-org/erikj/fast_filter_servers
Add fast path to _filter_events_for_server
2018-06-25 20:11:00 +01:00
Erik Johnston
df48f7ef37 Actually fix it 2018-06-25 20:03:41 +01:00
Erik Johnston
a0e8a53c6d Comment 2018-06-25 19:57:38 +01:00
Erik Johnston
7bdc5c8fa3 Fix bug with assuming wrong type 2018-06-25 19:56:02 +01:00
Erik Johnston
ea7a9c0483 Add fast path to _filter_events_for_server
Most rooms have a trivial history visibility like "shared" or
"world_readable", especially large rooms, so lets not bother getting the
full membership of those rooms in that case.
2018-06-25 19:49:13 +01:00
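The fast path described above amounts to: look at the room's history visibility first, and only fetch the membership list when visibility actually depends on it. A self-contained sketch (hypothetical signature; the real function takes a storage layer and a server name, and the real visibility rules are more involved):

```python
def filter_events_for_server(events, history_visibility, get_members):
    # Sketch of the fast path: `get_members` is only invoked when needed.
    if history_visibility in ("shared", "world_readable"):
        # Trivial visibility: everything is visible, so skip the
        # (potentially huge) membership lookup entirely.
        return list(events)
    # Slow path: visibility depends on membership, so fetch it.
    members = get_members()
    return [ev for ev in events if ev.get("sender") in members]

events = [{"sender": "@a:example.org"}, {"sender": "@b:example.org"}]
print(filter_events_for_server(events, "shared", lambda: set()))  # both pass
```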
Matthew Hodgson
0269367f18 allow non-consented users to still part rooms (to let us autopart them) 2018-06-25 17:56:10 +01:00
Matthew Hodgson
784189b1f4 typos 2018-06-25 17:37:16 +01:00
Matthew Hodgson
6eb861b67f typo 2018-06-25 17:37:16 +01:00
Erik Johnston
947fea67cb Need to pass reactor to endpoint fac 2018-06-25 15:22:57 +01:00
Amber Brown
36cb570641 Use towncrier to build the changelog (#3425) 2018-06-25 14:42:27 +01:00
Erik Johnston
33fdcfa957 Merge pull request #3441 from matrix-org/erikj/redo_erasure
Fix user erasure and re-enable
2018-06-25 14:37:01 +01:00
Erik Johnston
eb50c44eaf Add UserErasureWorkerStore to workers 2018-06-25 14:22:24 +01:00
Amber Brown
07cad26d65 Remove all global reactor imports & pass it around explicitly (#3424) 2018-06-25 14:08:28 +01:00
Erik Johnston
244484bf3c Revert "Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility""
This reverts commit 1d009013b3.
2018-06-25 13:42:55 +01:00
Travis Ralston
ec1e799e17 Don't print invalid access tokens in the logs
Tokens shouldn't be appearing in the logs, valid or invalid.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-24 12:17:01 -06:00
Richard van der Hoff
1d009013b3 Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility"
This reverts commit ce0d911156, reversing
changes made to b4a5d767a9.
2018-06-22 16:35:10 +01:00
Richard van der Hoff
516f884176 Merge pull request #3435 from matrix-org/rav/fix_event_push_actions_tablescan
Fix event_push_actions tablescan when reinserting events
2018-06-22 15:57:22 +01:00
Mark Haines
9850f66abe Deleting from event_push_actions needs to use an index 2018-06-22 15:54:48 +01:00
Richard van der Hoff
fe8d2968e3 Merge pull request #3434 from matrix-org/rav/search_logging
Add some logging to /search and /context queries
2018-06-22 15:51:11 +01:00
Erik Johnston
28ddc6cfbe Also log number of events for search context 2018-06-22 15:42:11 +01:00
Erik Johnston
4b4cec3989 Add some logging to search queries 2018-06-22 15:42:11 +01:00
Richard van der Hoff
200e11c5bf Merge pull request #3432 from matrix-org/rav/joined_hosts_cache_non_iterable
Make _get_joined_hosts_cache cache non-iterable
2018-06-22 15:18:51 +01:00
Erik Johnston
f8272813a9 Make _get_joined_hosts_cache cache non-iterable 2018-06-22 15:12:26 +01:00
Richard van der Hoff
1d7ad11747 Merge pull request #3430 from matrix-org/rav/configurable_push_action_rotation
Make push actions rotation configurable
2018-06-22 15:07:31 +01:00
Erik Johnston
ce0d911156 Merge pull request #3431 from matrix-org/rav/erasure_visibility
Support hiding events from deleted users
2018-06-22 15:06:44 +01:00
Erik Johnston
b4a5d767a9 Merge pull request #3428 from matrix-org/erikj/persisted_pdu
Simplify get_persisted_pdu
2018-06-22 14:47:43 +01:00
Erik Johnston
f79abda87f Merge pull request #3427 from matrix-org/erikj/remove_filters
remove dead filter_events_for_clients
2018-06-22 14:45:42 +01:00
Erik Johnston
75dc3ddeab Make push actions rotation configurable 2018-06-22 14:44:37 +01:00
Richard van der Hoff
bb018d0b5b Merge pull request #3383 from matrix-org/rav/hacky_speedup_state_group_cache
Improve StateGroupCache efficiency for wildcard lookups
2018-06-22 14:01:55 +01:00
Richard van der Hoff
43e02c409d Disable partial state group caching for wildcard lookups
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds by not filling up
RAM if we only need one or two keys, but also making the cache still work
for the federation reader use case.
2018-06-22 11:52:07 +01:00
Richard van der Hoff
240f192523 Merge pull request #3382 from matrix-org/rav/optimise_state_groups
Optimise state_group_cache update
2018-06-22 11:20:20 +01:00
Richard van der Hoff
70e6501913 Merge pull request #3419 from matrix-org/rav/events_per_request
Log number of events fetched from DB
2018-06-22 11:17:56 +01:00
Richard van der Hoff
0495fe0035 Indirect evt_count updates via method call
so that we can stub it for the sentinel and not have a billion failing UTs
2018-06-22 10:42:28 +01:00
Amber Brown
9a685d60ae Merge pull request #3418 from matrix-org/rav/fix_metric_desc
Fix description of "python_gc_time" metric
2018-06-22 09:43:26 +01:00
Amber Brown
77ac14b960 Pass around the reactor explicitly (#3385) 2018-06-22 09:37:10 +01:00
Richard van der Hoff
cbbfaa4be8 Fix description of "python_gc_time" metric 2018-06-21 10:02:42 +01:00
Amber Brown
c2eff937ac Populate synapse_federation_client_sent_pdu_destinations:count again (#3386) 2018-06-21 09:39:58 +01:00
Amber Brown
99b77aa829 Fix tcp protocol metrics naming (#3410) 2018-06-21 09:39:27 +01:00
Richard van der Hoff
b088aafcae Log number of events fetched from DB
When we finish processing a request, log the number of events we fetched from
the database to handle it.

[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
2018-06-21 06:15:03 +01:00
Richard van der Hoff
aff3d76920 Merge pull request #3416 from matrix-org/rav/restart_indicator
Write a clear restart indicator in logs
2018-06-20 18:14:08 +01:00
Richard van der Hoff
02bfc581f8 Merge pull request #3399 from costacruise/master
Add error code to room creation error
2018-06-20 17:26:25 +01:00
Richard van der Hoff
245d53d32a Write a clear restart indicator in logs
I'm fed up with never being able to find the point a server restarted in the
logs.
2018-06-20 15:33:14 +01:00
Amber Brown
f6c4d74f96 Fix inflight requests metric (incorrect name & traceback) (#3413) 2018-06-20 11:18:57 +01:00
Matthew Hodgson
ccfdaf68be spell gauge correctly 2018-06-16 07:10:34 +01:00
Richard van der Hoff
9a793f861c Merge branch 'master' into develop 2018-06-14 16:36:01 +01:00
Richard van der Hoff
53969e1960 Merge tag 'v0.31.2'
SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no `m.room.power_levels` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.
2018-06-14 16:35:33 +01:00
Richard van der Hoff
667c6546bd link to spec proposal from changelog 2018-06-14 16:27:41 +01:00
Richard van der Hoff
7e1c616452 v0.31.2 2018-06-14 16:24:32 +01:00
Richard van der Hoff
ba438a3ac1 changelog for 0.31.2 2018-06-14 16:22:46 +01:00
Richard van der Hoff
61ab08a197 Merge pull request #3397 from matrix-org/rav/adjust_auth_rules
Adjust event auth rules when there is no PL event
2018-06-14 16:09:13 +01:00
Richard van der Hoff
1e77ac66e3 Fix broken unit test
We need power levels for this test to do what it is supposed to do.
2018-06-14 14:21:29 +01:00
Richard van der Hoff
a502cfec00 remove spurious debug 2018-06-14 14:20:53 +01:00
Michael Wagner
19cd3120ec Add error code to room creation error
This error code is mentioned in the documentation at https://matrix.org/docs/api/client-server/#!/Room32creation/createRoom
2018-06-14 14:08:40 +02:00
Richard van der Hoff
5c9afd6f80 Make default state_default 50
Make it so that, before there is a power-levels event in the room, you need a
power level of at least 50 to send state.

Partially addresses https://github.com/matrix-org/matrix-doc/issues/1192
2018-06-14 12:38:09 +01:00
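In event-auth terms, the change above means that a room with no m.room.power_levels event behaves as though state_default were 50 instead of 0. A toy version of the lookup (simplified relative to the real event-auth code):

```python
def get_send_level(is_state_event, power_levels_content):
    # Toy illustration of the rule change described above.
    if power_levels_content is None:
        # No power-levels event in the room yet: state events now need
        # level 50, while ordinary messages still need 0.
        return 50 if is_state_event else 0
    key = "state_default" if is_state_event else "events_default"
    return int(power_levels_content.get(key, 50 if is_state_event else 0))

print(get_send_level(True, None))                  # 50 (the new behaviour)
print(get_send_level(True, {"state_default": 0}))  # 0 (explicitly opted in)
```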
Richard van der Hoff
52423607bd Clarify interface for event_auth
stop pretending that it returns a boolean, which just almost gave me a heart
attack.
2018-06-14 12:26:17 +01:00
Amber Brown
f116f32ace add a last seen metric (#3396) 2018-06-14 20:26:59 +10:00
Richard van der Hoff
557b686eac Refactor get_send_level to take a power_levels event
it makes it easier for me to reason about
2018-06-14 11:26:27 +01:00
Amber Brown
a61738b316 Remove run_on_reactor (#3395) 2018-06-14 18:27:37 +10:00
Richard van der Hoff
3681437c35 Merge pull request #3368 from matrix-org/rav/fix_federation_client_host
Fix commandline federation_client to send the right Host header
2018-06-13 15:41:51 +01:00
Amber Brown
0fde1896cd Merge pull request #3389 from turt2live/travis/name_metrics
Use the correct flag (enable_metrics) when warning about an incorrect metrics setup
2018-06-13 23:50:10 +10:00
Amber Brown
2a4fde0a6f Merge pull request #3390 from turt2live/travis/appsvc-metrics
Use the RegistryProxy for appservices too
2018-06-13 22:05:15 +10:00
Travis Ralston
45768d1640 Use the RegistryProxy for appservices too
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:55:48 -06:00
Travis Ralston
12285a1a76 The flag is named enable_metrics, not collect_metrics
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:51:31 -06:00
Richard van der Hoff
96bad44f87 Fix federation_client to send the right Host
This appears to have stopped working since matrix.org moved to cloudflare. The
Host header should match the name of the server, not whatever is in the SRV
record.
2018-06-12 14:14:36 +01:00
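The fix above boils down to: connect to whatever the SRV record (or CDN) resolved to, but present the matrix server's own name in the Host header, since that is what the far end routes on. A hedged stdlib sketch of the idea (the actual federation_client script also has to get the TLS side right):

```python
import http.client

def federation_get(resolved_target, server_name, path):
    # Connect to the host the SRV record pointed at...
    conn = http.client.HTTPSConnection(resolved_target, 8448)
    # ...but send the matrix server_name as the Host header; supplying
    # it in `headers` stops http.client adding its own.
    conn.request("GET", path, headers={"Host": server_name})
    return conn.getresponse()
```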
Richard van der Hoff
b6faef2ad7 Filter out erased messages
Redact any messages sent by erased users.
2018-06-12 09:53:18 +01:00
Richard van der Hoff
f1023ebf4b mark accounts as erased when requested 2018-06-12 09:53:18 +01:00
Richard van der Hoff
3ff8a619f5 UserErasureStore
to store which users have been erased
2018-06-12 09:52:22 +01:00
Richard van der Hoff
9fc5b74b24 simplify get_persisted_pdu
it doesn't make much sense to use get_persisted_pdu on the receive path: just
get the event straight from the store.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
bd348f0af6 remove dead filter_events_for_clients
This is only used by filter_events_for_client, so we can simplify the whole
thing by just doing one user at a time, and removing a dead storage function to
boot.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
eb32b2ca20 Optimise state_group_cache update
(1) matrix-org-hotfixes has removed the intern calls; let's do the same here.
(2) remove redundant iteritems() so we can use an optimised db update.
2018-06-11 22:56:11 +01:00
David Baker
187a546bff Merge pull request #3276 from matrix-org/dbkr/unbind
Remove email addresses / phone numbers from ID servers when they're removed from synapse
2018-06-11 16:02:00 +01:00
Matthew Hodgson
d6cc369205 fix idiotic typo in state res 2018-06-11 14:43:55 +01:00
Neil Johnson
ed5a0780a4 Merge branch 'master' into develop 2018-06-08 15:47:11 +01:00
Neil Johnson
1032393dfb Merge tag 'v0.31.1'
Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.

We are not aware of it being actively exploited but please upgrade asap.

Bug Fixes:

* Fix event filtering in get_missing_events handler (PR #3371)
2018-06-08 15:46:18 +01:00
Neil Johnson
aefcc0f5e5 tweak changelog 2018-06-08 15:32:54 +01:00
Neil Johnson
82e751c43f Update CHANGES.rst 2018-06-08 15:22:34 +01:00
Neil Johnson
0eb4722932 changelog and bump version 2018-06-08 15:21:46 +01:00
Richard van der Hoff
c6b1441c52 Fix event filtering in get_missing_events handler 2018-06-08 14:15:31 +01:00
Matthew Hodgson
8b98acca05 fix various changelog bugs and typos 2018-06-08 14:15:16 +01:00
David Baker
0e505b1913 Merge pull request #3372 from matrix-org/rav/better_verification_logging
Try to log more helpful info when a sig verification fails
2018-06-08 13:32:02 +01:00
David Baker
ad9edd1d96 Merge pull request #3371 from matrix-org/rav/fix_get_missing_events
Fix event filtering in get_missing_events handler
2018-06-08 13:19:15 +01:00
Richard van der Hoff
e82db24a0e Try to log more helpful info when a sig verification fails
Firstly, don't swallow the reason for the failure

Secondly, don't assume all exceptions are verification failures

Thirdly, log a bit of info about the key being used if debug is enabled
2018-06-08 12:13:08 +01:00
Richard van der Hoff
0834b49c6a Fix event filtering in get_missing_events handler 2018-06-08 11:34:46 +01:00
Matthew Hodgson
36446ffedb fix various changelog bugs and typos 2018-06-07 23:54:16 +03:00
Will Hunt
13d211edc1 Merge pull request #3344 from Half-Shot/hs/as-metrics
Add metrics to track appservice transactions
2018-06-07 11:37:12 +01:00
Richard van der Hoff
1152f495a0 Merge pull request #3363 from matrix-org/rav/fix_purge
Fix event-purge-by-ts admin API
2018-06-07 11:34:46 +01:00
Richard van der Hoff
e2acf536d4 Merge pull request #3355 from matrix-org/rav/fix_federation_backfill
Fix federation backfill from sqlite servers
2018-06-07 10:18:01 +01:00
Amber Brown
0160f6601f Merge pull request #3356 from matrix-org/rav/add_missing_attr_dep
Make sure that attr is installed
2018-06-07 16:33:30 +10:00
Richard van der Hoff
f4caf3f83d fix log 2018-06-07 00:26:38 +01:00
Richard van der Hoff
0546715c18 Fix event-purge-by-ts admin API
This got completely broken in 0.30.

Fixes #3300.
2018-06-07 00:15:49 +01:00
Richard van der Hoff
57e3f923d2 Add missing dependency on attr
We've recently added a dep on `attr`. I don't know why the CI didn't pick this
up, but we should make it explicit anyway.
2018-06-06 17:12:41 +01:00
Richard van der Hoff
d3a8c9c55e Fix sql error in _get_state_groups_from_groups
If this was called with a `(type, None)` entry in types (which is supposed to
return all state of type `type`), it would explode with a sql error.
2018-06-06 14:19:01 +01:00
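The bug described above is the classic one of building a WHERE clause from (type, state_key) pairs in which state_key may be None, meaning "all state of this type". A simplified sketch of the corrected clause construction (not the real _get_state_groups_from_groups, which does considerably more):

```python
def make_state_where_clause(types):
    """Build (sql, args) from a list of (type, state_key) filters.

    A None state_key means "all events of this type", so it must become
    a one-column comparison rather than an equality test against NULL.
    """
    clauses, args = [], []
    for etype, state_key in types:
        if state_key is None:
            clauses.append("(type = ?)")
            args.append(etype)
        else:
            clauses.append("(type = ? AND state_key = ?)")
            args.extend((etype, state_key))
    return " OR ".join(clauses), args

print(make_state_where_clause([("m.room.member", None)]))
# ('(type = ?)', ['m.room.member'])
```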
Neil Johnson
fef6c2cdcc Merge branch 'master' into develop 2018-06-06 12:28:15 +01:00
Neil Johnson
752b7b32ed Merge tag 'v0.31.0'
Changes in synapse v0.31.0 (2018-06-06)
======================================

The most notable change from v0.30.0 is the switch to the python prometheus
library to improve system stats reporting. WARNING: this changes a number of
prometheus metrics in a backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
==========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)

Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
2018-06-06 12:27:33 +01:00
Neil Johnson
3f589f9097 7 char sha in changelog 2018-06-06 11:39:42 +01:00
Neil Johnson
176f1206d1 Update CHANGES.rst 2018-06-06 11:28:30 +01:00
Neil Johnson
61134debdc bump version and changelog 2018-06-06 11:26:21 +01:00
Richard van der Hoff
ad459a106c Merge pull request #3349 from t3chguy/redact_as_request_token
Redact AS tokens in log (fixes to #3327)
2018-06-06 10:58:07 +01:00
Michael Telatynski
592c162516 also redact __str__ of ApplicationService used for logging 2018-06-06 10:35:29 +01:00
Michael Telatynski
330432031b redact_uri in two missed log paths 2018-06-06 10:25:48 +01:00
David Baker
bf54c1cf6c pep8 2018-06-06 10:15:33 +01:00
David Baker
3e4bc4488c More doc fixes 2018-06-06 09:44:10 +01:00
Amber Brown
48e2b48888 Merge pull request #3347 from krombel/py3_extend_tox_2
update tox.ini to cover 292 succeeding tests
2018-06-06 16:37:19 +10:00
Amber Brown
d8db6d9267 Merge pull request #3348 from intelfx/fix-sortedcontainers
federation/send_queue.py: fix usage of sortedcontainers.SortedDict
2018-06-06 16:36:53 +10:00
Amber Brown
23c785992f Fix metric documentation tables (#3341) 2018-06-06 07:12:16 +01:00
Richard van der Hoff
b3b16490f7 Add note to changelog on prometheus metrics 2018-06-06 07:08:36 +01:00
Richard van der Hoff
592ee217a3 Merge commit 'b7e7fd2' into release-v0.31.0 2018-06-06 07:02:02 +01:00
Amber Brown
304bb22c1d Fix metric documentation tables (#3341) 2018-06-06 15:52:37 +10:00
Ivan Shapovalov
c88d50aa8f federation/send_queue.py: fix usage of sortedcontainers.SortedDict 2018-06-06 02:45:18 +03:00
Krombel
0c87eed294 update tox.ini to cover 292 succeeding tests
Signed-Off-By: Matthias Kesler <krombel@krombel.de>
2018-06-05 23:10:15 +02:00
Richard van der Hoff
e316407b5d Merge pull request #3327 from t3chguy/redact_as_request_token
Strip `access_token` from outgoing requests
2018-06-05 19:08:46 +01:00
Michael Telatynski
e6cbf47773 factor out uri redaction into a method on http 2018-06-05 18:31:40 +01:00
David Baker
607bd27c83 fix pep8 2018-06-05 18:10:35 +01:00
David Baker
d62162bbec doc fixes 2018-06-05 18:09:13 +01:00
Richard van der Hoff
617afee069 Merge pull request #3340 from ArchangeGabriel/patch-1
doc/postgres.rst: fix display of the last command block
2018-06-05 17:45:17 +01:00
Richard van der Hoff
522bd3c8a3 Merge remote-tracking branch 'origin/master' into develop 2018-06-05 17:42:49 +01:00
Will Hunt
d6e3c2c79b Let's try labels instead of label, that might work 2018-06-05 17:30:45 +01:00
Amber Brown
f7869f8f8b Port to sortedcontainers (with tests!) (#3332) 2018-06-06 00:13:57 +10:00
Will Hunt
604cff1a06 Add metrics to track appservice transactions 2018-06-05 13:21:30 +01:00
Bruno Pagani
b50f18171d doc/postgres.rst: fix display of the last command block
Also indent all of them with 4 spaces.
2018-06-04 22:41:52 +00:00
Richard van der Hoff
f29b41fde9 Merge pull request #3324 from matrix-org/rav/remove_dead_method
Remove was_forgotten_at
2018-06-04 18:18:08 +01:00
Richard van der Hoff
28b0490dfd Merge pull request #3334 from matrix-org/rav/cache_factor_override
Cache factor override system for specific caches
2018-06-04 17:19:33 +01:00
Richard van der Hoff
b7e7fd2d0e Fix replication metrics
fix bug introduced in #3256
2018-06-04 16:23:05 +01:00
Neil Johnson
244ab974e7 bump version and changelog 2018-06-04 16:09:58 +01:00
Richard van der Hoff
694968fa81 Hopefully, fix LaterGauge error handling 2018-06-04 15:59:14 +01:00
Erik Johnston
042eedfa2b Add hacky cache factor override system 2018-06-04 15:39:28 +01:00
David Baker
c5930d513a Docstring 2018-06-04 12:05:58 +01:00
David Baker
6a29e815fc Fix comment 2018-06-04 12:01:23 +01:00
David Baker
e44150a6de Missing yield 2018-06-04 12:01:13 +01:00
David Baker
f731e42baf docstring 2018-06-04 12:00:51 +01:00
Amber Brown
5dbf305444 Put python's logs into Trial when running unit tests (#3319) 2018-06-04 16:06:06 +10:00
Amber Brown
86accac5d5 Merge pull request #3328 from intelfx/fix-metrics-LaterGauge-usage
federation: fix LaterGauge usage
2018-06-04 15:35:32 +10:00
Ivan Shapovalov
7d9d75e4e8 federation/send_queue.py: fix usage of LaterGauge
Fixes a startup crash due to commit df9f72d9e5
"replacing portions".
2018-06-03 14:16:17 +03:00
Michael Telatynski
09503126df Strip access_token from outgoing requests using existing regex 2018-06-02 23:25:13 +01:00
Richard van der Hoff
c1f4118bb6 Remove was_forgotten_at
This is unused. IT MUST DIE!!!1
̧̪͈̱̹̳͖͙H̵̰̤̰͕̖e̛ ͚͉̗̼̞w̶̩̥͉̮h̩̺̪̩͘ͅọ͎͉̟ ̜̩͔̦̘ͅW̪̫̩̣̲͔̳a͏͔̳͖i͖͜t͓̤̠͓͙s̘̰̩̥̙̝ͅ ̲̠̬̥Be̡̙̫̦h̰̩i̛̫͙͔̭̤̗̲n̳͞d̸ ͎̻͘T̛͇̝̲̹̠̗ͅh̫̦̝ͅe̩̫͟ ͓͖̼W͕̳͎͚̙̥ą̙l̘͚̺͔͞ͅl̳͍̙̤̤̮̳.̢
̟̺̜̙͉Z̤̲̙̙͎̥̝A͎̣͔̙͘L̥̻̗̳̻̳̳͢G͉̖̯͓̞̩̦O̹̹̺!̙͈͎̞̬ *
2018-06-01 18:21:49 +01:00
Richard van der Hoff
a9e97dcd65 Merge pull request #3317 from thegcat/feature/3312-add_ipv6_to_blacklist_example_config
Add private IPv6 addresses to example config for url preview blacklist
2018-06-01 14:45:14 +01:00
Neil Johnson
71477f3317 Merge pull request #3264 from matrix-org/neil/sign-up-stats
daily user type phone home stats
2018-06-01 13:42:01 +00:00
Richard van der Hoff
41006d9c28 Merge pull request #3318 from matrix-org/rav/ignore_depth_on_rrs
Ignore depth when updating read-receipts
2018-06-01 14:14:45 +01:00
Richard van der Hoff
9f797a24a4 Handle RRs which arrive before their events 2018-06-01 14:01:43 +01:00
Richard van der Hoff
857e6fd8b6 Ignore depth when updating read-receipts
Order read receipts by stream ordering instead of depth
2018-06-01 12:18:11 +01:00
Felix Schäfer
4ef76f3ac4 Add private IPv6 addresses to preview blacklist #3312
The added addresses are expected to be local or loopback addresses and
shouldn't be spidered for previews.

Signed-off-by: Felix Schäfer <felix@thegcat.net>
2018-06-01 12:18:35 +02:00
Neil Johnson
4986b084f8 remove unnecessary INSERT 2018-06-01 10:50:40 +01:00
Richard van der Hoff
c2c3092cce code_style.rst: formatting 2018-05-31 16:11:34 +01:00
Amber Brown
febe0ec8fd Run Prometheus on a different port, optionally. (#3274) 2018-05-31 19:04:50 +10:00
Amber Brown
c936a52a9e Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (#3307) 2018-05-31 19:03:47 +10:00
Richard van der Hoff
e73635191f Merge pull request #3290 from rubo77/patch-7
Add link to thorough instruction how to configure consent
2018-05-30 19:52:01 +01:00
Richard van der Hoff
219c2a322b remove trailing whitespace 2018-05-30 19:42:19 +01:00
Richard van der Hoff
2e4be8bfd9 fix english and wrap comment 2018-05-30 19:24:12 +01:00
Amber Brown
872cf43516 Merge pull request #3303 from NotAFile/py3-memoryview
use memoryview in py3
2018-05-30 12:51:42 +10:00
Amber Brown
debff7ae09 Merge pull request #3281 from NotAFile/py3-six-isinstance
remaining isinstance fixes
2018-05-30 12:44:46 +10:00
Richard van der Hoff
34b85df7f5 Update some comments and docstrings in SyncHandler 2018-05-29 22:31:18 +01:00
Richard van der Hoff
711f61a31d Merge pull request #3304 from matrix-org/rav/exempt_as_users_from_gdpr
Exempt AS-registered users from doing gdpr
2018-05-29 20:25:12 +01:00
Richard van der Hoff
a995fdae39 fix tests 2018-05-29 20:19:29 +01:00
Richard van der Hoff
4a9cbdbc15 Exempt AS-registered users from doing gdpr 2018-05-29 19:54:32 +01:00
Neil Johnson
ab0ef31dc7 create users index on creation_ts 2018-05-29 17:51:08 +01:00
Neil Johnson
558f3d376a create index in background 2018-05-29 17:47:55 +01:00
Neil Johnson
c379acd4fd bump version 2018-05-29 17:47:28 +01:00
Richard van der Hoff
db2e4608ab Merge pull request #3302 from krombel/py3_extend_tox_testing
extend tox testing for py3 to avoid regressions
2018-05-29 17:37:52 +01:00
Adrian Tschira
4b9d0cde97 add remaining memoryview changes 2018-05-29 17:42:43 +02:00
Adrian Tschira
7873cde526 pep8 2018-05-29 17:35:55 +02:00
Adrian Tschira
1afafb3497 use memoryview in py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-29 17:14:34 +02:00
Krombel
d6cd55532b extend tox testing for py3 to avoid regressions 2018-05-29 16:02:41 +02:00
Amber Brown
235b53263a Merge pull request #3299 from matrix-org/matthew/macos-fixes
disable CPUMetrics if no /proc/self/stat
2018-05-29 11:45:45 +10:00
Matthew Hodgson
ff1bc0a279 pep8 2018-05-29 02:32:15 +01:00
Matthew Hodgson
adb6bac4d5 fix another dumb typo 2018-05-29 02:29:22 +01:00
Matthew Hodgson
0a240ad36e disable CPUMetrics if no /proc/self/stat
fixes build on macOS again
2018-05-29 02:23:30 +01:00
Matthew Hodgson
284e36ad04 fix dumb typo 2018-05-29 02:23:11 +01:00
Amber Brown
81717e8515 Merge pull request #3256 from matrix-org/3218-official-prom
Switch to the Python Prometheus library
2018-05-28 23:30:09 +10:00
Amber Brown
57ad76fa4a fix up tests 2018-05-28 19:51:53 +10:00
Amber Brown
3ef5cd74a6 update to more consistently use seconds in any metrics or logging 2018-05-28 19:39:27 +10:00
Amber Brown
5c40ce3777 invalid syntax :( 2018-05-28 19:16:09 +10:00
Amber Brown
357c74a50f add comment about why unreg 2018-05-28 19:14:41 +10:00
Amber Brown
a2eb5db4a0 update metrics to be in seconds 2018-05-28 19:10:27 +10:00
Amber Brown
754826a830 Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-28 18:57:23 +10:00
Ruben Barkow
08ea5fe635 add link to thorough instruction how to configure consent 2018-05-25 23:19:55 +02:00
Richard van der Hoff
60f09b1e11 Merge pull request #3288 from matrix-org/rav/no_spam_guests
Avoid sending consent notice to guest users
2018-05-25 11:44:45 +01:00
Richard van der Hoff
66bdae986f Fix default for send_server_notice_to_guests
bool("False") == True...
2018-05-25 11:42:05 +01:00
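The one-liner in the commit message above is worth spelling out: every non-empty string is truthy, so passing a textual default like "False" through bool() yields True. A safe pattern for reading booleans out of config:

```python
def read_bool(config, key, default=False):
    # Hypothetical helper illustrating the fix, not Synapse's config code.
    value = config.get(key, default)
    if isinstance(value, str):
        # bool("False") == True, so parse the text instead.
        return value.strip().lower() in ("true", "1", "yes")
    return bool(value)

assert bool("False") is True  # the trap
assert read_bool({"send_server_notice_to_guests": "False"},
                 "send_server_notice_to_guests") is False
```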
Richard van der Hoff
ba1b163590 Avoid sending consent notice to guest users
we think it makes sense not to send the notices to guest users.
2018-05-25 11:36:43 +01:00
Richard van der Hoff
41921ac01b Merge pull request #3287 from matrix-org/rav/allow_leaving_server_notices_room
Let users leave the server notice room after joining
2018-05-25 11:15:45 +01:00
Richard van der Hoff
757ed27258 Let users leave the server notice room after joining
They still can't reject invites, but we let them leave it.
2018-05-25 11:07:21 +01:00
Adrian Tschira
4ee4450d66 fix recursion error 2018-05-24 21:44:10 +02:00
Amber Brown
9c36c150e7 Merge pull request #3283 from NotAFile/py3-state
py3-ize state.py
2018-05-24 14:23:33 -05:00
Amber Brown
cc1349c06a Merge pull request #3279 from NotAFile/py3-more-iteritems
more six iteritems
2018-05-24 14:23:13 -05:00
Amber Brown
5b788aba90 Merge pull request #3280 from NotAFile/py3-more-misc
More Misc. py3 fixes
2018-05-24 14:22:59 -05:00
Adrian Tschira
0e61705661 py3-ize state.py 2018-05-24 20:59:00 +02:00
Adrian Tschira
dd068ca979 remaining isinstance fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:55:08 +02:00
Adrian Tschira
17a70cf6e9 Misc. py3 fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:20:33 +02:00
Adrian Tschira
6c16a4ec1b more iteritems 2018-05-24 20:19:06 +02:00
Amber Brown
7ea07c7305 Merge pull request #3278 from NotAFile/py3-storage-base
Py3 storage/_base.py
2018-05-24 13:08:09 -05:00
Amber Brown
1f69693347 Merge pull request #3244 from NotAFile/py3-six-4
replace some iteritems with six
2018-05-24 13:04:07 -05:00
Amber Brown
c4fb15a06c Merge pull request #3246 from NotAFile/py3-repr-string
use repr, not str
2018-05-24 13:00:20 -05:00
Amber Brown
36501068d8 Merge pull request #3247 from NotAFile/py3-misc
Misc Python3 fixes
2018-05-24 12:58:37 -05:00
Amber Brown
2aff6eab6d Merge pull request #3245 from NotAFile/batch-iter
Add batch_iter to utils
2018-05-24 12:54:12 -05:00
Adrian Tschira
095292304f Py3 storage/_base.py
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 18:24:12 +02:00
David Baker
77a23e2e05 Merge remote-tracking branch 'origin/develop' into dbkr/unbind 2018-05-24 16:20:53 +01:00
David Baker
ecc4b88bd1 Merge pull request #3277 from matrix-org/dbkr/remove_from_user_dir
Remove users from user directory on deactivate
2018-05-24 16:12:12 +01:00
Erik Johnston
46345187cc Merge pull request #3243 from NotAFile/py3-six-3
Replace some more comparisons with six
2018-05-24 16:08:57 +01:00
Neil Johnson
037c6db85d Merge branch 'master' into develop 2018-05-24 16:03:44 +01:00
David Baker
7a1af504d7 Remove users from user directory on deactivate 2018-05-24 15:59:58 +01:00
Neil Johnson
14ca678674 Update CHANGES.rst 2018-05-24 15:54:02 +01:00
Neil Johnson
6f67163c63 Update CHANGES.rst 2018-05-24 15:04:41 +01:00
Neil Johnson
bdd2ed5acf update for v0.30.0 2018-05-24 15:02:03 +01:00
Erik Johnston
f72d5a44d5 Merge pull request #3261 from matrix-org/erikj/pagination_fixes
Fix federation backfill bugs
2018-05-24 14:52:03 +01:00
Erik Johnston
68399fc4de Merge pull request #3267 from matrix-org/erikj/iter_filter
Use iter* methods for _filter_events_for_server
2018-05-24 14:44:57 +01:00
Neil Johnson
91d95a1d8e bump version 2018-05-24 14:05:07 +01:00
David Baker
9700d15611 pep8 2018-05-24 11:23:15 +01:00
David Baker
a21a41bad7 comment 2018-05-24 11:19:59 +01:00
David Baker
b3bff53178 Unbind 3pids when they're deleted too 2018-05-24 11:08:05 +01:00
Richard van der Hoff
8c98281b8d Merge branch 'release-v0.30.0' into develop 2018-05-24 10:33:12 +01:00
Richard van der Hoff
6abcb5d22d Merge pull request #3273 from matrix-org/rav/server_notices_avatar_url
Allow overriding the server_notices user's avatar
2018-05-24 10:31:43 +01:00
Amber Brown
389dac2c15 pepeightttt 2018-05-23 13:08:59 -05:00
Amber Brown
472a5ec4e2 add back CPU metrics 2018-05-23 13:03:56 -05:00
Amber Brown
e987079037 fixes 2018-05-23 13:03:51 -05:00
Richard van der Hoff
9bf4b2bda3 Allow overriding the server_notices user's avatar
probably should have done this in the first place, like @turt2live suggested.
2018-05-23 17:43:30 +01:00
Richard van der Hoff
23aa70cea8 Merge pull request #3272 from matrix-org/rav/localpart_in_consent_uri
Use the localpart in the consent uri
2018-05-23 16:29:44 +01:00
Richard van der Hoff
043f05a078 Merge docs on consent bits from PR #3268 into release branch 2018-05-23 16:24:58 +01:00
Richard van der Hoff
96f07cebda Merge pull request #3268 from matrix-org/rav/privacy_policy_docs
Docs on consent bits
2018-05-23 16:23:05 +01:00
Richard van der Hoff
a0b3946fe2 Merge branch 'release-v0.30.0' into rav/localpart_in_consent_uri 2018-05-23 16:06:03 +01:00
Richard van der Hoff
2f7008d4eb Merge pull request #3271 from matrix-org/rav/consent_uri_in_messages
Support for putting %(consent_uri)s in messages
2018-05-23 16:04:30 +01:00
Richard van der Hoff
e206b2c9ac consent_tracking.md: clarify link 2018-05-23 15:57:10 +01:00
Richard van der Hoff
2df8c3139a minor post-review tweaks 2018-05-23 15:39:52 +01:00
Richard van der Hoff
dda40fb55d fix typo 2018-05-23 15:30:26 +01:00
Richard van der Hoff
3ff6f50eac Use the localpart in the consent uri
... because it's shorter.
2018-05-23 15:28:23 +01:00
Richard van der Hoff
82191b08f6 Support for putting %(consent_uri)s in messages
Make it possible to put the URI in the error message and the server notice that
get sent by the server
2018-05-23 15:24:31 +01:00
Richard van der Hoff
2c62ea2515 Merge pull request #3270 from matrix-org/rav/block_remote_server_notices
Block attempts to send server notices to remote users
2018-05-23 15:06:35 +01:00
Richard van der Hoff
cd8ab9a0d8 mention public_baseurl 2018-05-23 14:43:09 +01:00
David Baker
2c7866d664 Hit the 3pid unbind endpoint on deactivation 2018-05-23 14:38:56 +01:00
Richard van der Hoff
321f02d263 Block attempts to send server notices to remote users 2018-05-23 14:30:47 +01:00
Richard van der Hoff
1cbb8e5a33 fix wrapping 2018-05-23 13:58:28 +01:00
Richard van der Hoff
052d08a6a5 Using the manhole to send server notices 2018-05-23 13:55:39 +01:00
Richard van der Hoff
5ad1149f38 Notes on the manhole 2018-05-23 13:47:34 +01:00
Richard van der Hoff
563606b8f2 consent_tracking: formatting etc 2018-05-23 12:37:39 +01:00
Richard van der Hoff
2574ea3dc8 server_notices.md: fix link 2018-05-23 12:34:34 +01:00
Richard van der Hoff
833db2d922 consent tracking docs 2018-05-23 12:32:38 +01:00
Neil Johnson
9e8ab0a4f4 style 2018-05-23 12:05:58 +01:00
Neil Johnson
3601a240aa bump version and changelog 2018-05-23 12:03:23 +01:00
Richard van der Hoff
e7598b666b Some docs about server notices 2018-05-23 11:14:23 +01:00
Erik Johnston
5aaa3189d5 s/values/itervalues/ 2018-05-23 10:13:05 +01:00
Erik Johnston
0a4bca4134 Use iter* methods for _filter_events_for_server 2018-05-23 10:04:23 +01:00
Amber Brown
b6063631c3 more cleanup 2018-05-22 17:36:20 -05:00
Amber Brown
53cc2cde1f cleanup 2018-05-22 17:32:57 -05:00
Amber Brown
071206304d cleanup pep8 errors 2018-05-22 16:54:22 -05:00
Amber Brown
4abeaedcf3 tests/metrics is gone now 2018-05-22 16:45:11 -05:00
Amber Brown
85ba83eb51 fixes 2018-05-22 16:28:23 -05:00
Amber Brown
228f1f584e fix the test failures 2018-05-22 15:02:38 -05:00
Erik Johnston
e85b5a0ff7 Use iter* methods 2018-05-22 19:02:48 +01:00
Erik Johnston
586b66b197 Fix that states is a dict of dicts 2018-05-22 19:02:36 +01:00
Erik Johnston
35ca3e7b65 Merge pull request #3265 from matrix-org/erikj/limit_pagination
Don't support limitless pagination
2018-05-22 18:29:36 +01:00
Erik Johnston
a17e901f4d Remove unused string formatting param 2018-05-22 18:24:32 +01:00
Erik Johnston
5494c1d71e Don't support limitless pagination
The pagination storage function supported not specifying a limit on the
number of events returned. This was triggered when using the search or
context API with a limit of zero, which the storage function took to
mean not being limited.
2018-05-22 18:15:21 +01:00
Neil Johnson
d8cb7225d2 daily user type phone home stats 2018-05-22 18:09:09 +01:00
hera
ad2823ee27 fix synchrotron 2018-05-22 17:47:42 +01:00
Richard van der Hoff
08bfc48abf custom error code for not leaving server notices room 2018-05-22 17:27:27 +01:00
David Baker
0a078026ea comment typo 2018-05-22 17:14:06 +01:00
Amber Brown
07bb9bdae8 update gitignore to remove some files that got put on my machine 2018-05-22 10:57:28 -05:00
Amber Brown
8f5a688d42 cleanups, self-registration 2018-05-22 10:56:03 -05:00
Amber Brown
a8990fa2ec Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-22 10:50:26 -05:00
Erik Johnston
cb2a2ad791 get_domains_from_state returns list of tuples 2018-05-22 16:23:39 +01:00
Richard van der Hoff
08a14b32ae Merge pull request #3262 from matrix-org/rav/has_already_consented
Add a 'has_consented' template var to consent forms
2018-05-22 15:35:10 +01:00
Richard van der Hoff
82c2a52987 Merge pull request #3263 from matrix-org/rav/fix_jinja_dep
Fix dependency on jinja2
2018-05-22 15:31:08 +01:00
Richard van der Hoff
7b36d06a69 Add a 'has_consented' template var to consent forms
fixes #3260
2018-05-22 14:58:34 +01:00
Richard van der Hoff
669400e22f Enable auto-escaping for the consent templates
... to reduce the risk of somebody introducing an html injection attack...
2018-05-22 14:58:34 +01:00
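Auto-escaping, as enabled above, makes template variables HTML-escaped by default instead of relying on each template to remember `|e`. With jinja2 that is essentially one flag (a sketch, with a hypothetical template path):

```python
from jinja2 import Environment, FileSystemLoader

env = Environment(
    loader=FileSystemLoader("res/templates"),  # hypothetical path
    autoescape=True,  # escape user-supplied values by default
)
template = env.from_string("Hello {{ name }}")
print(template.render(name="<script>alert(1)</script>"))
# Hello &lt;script&gt;alert(1)&lt;/script&gt;
```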
Richard van der Hoff
b5b2d5d64b Fix dependency on jinja2
Delay the import of ConsentResource, so that we can get away without jinja2 if
people don't have the consent resource enabled.

Fixes #3259
2018-05-22 14:03:45 +01:00
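The delayed import described above is the standard way to keep an optional dependency optional: the module that needs jinja2 is only imported when the feature is actually switched on. Schematically:

```python
def build_consent_resource(config):
    # Importing here, rather than at module load, means servers that
    # don't enable the consent resource never need jinja2 installed.
    # (Module path as it was in synapse at the time.)
    from synapse.rest.consent.consent_resource import ConsentResource
    return ConsentResource(config)
```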
Richard van der Hoff
3b2def6c7a Merge pull request #3257 from matrix-org/rav/fonx_on_no_consent
Reject attempts to send event before privacy consent is given
2018-05-22 12:01:43 +01:00
Richard van der Hoff
a5e2941aad Reject attempts to send event before privacy consent is given
Returns an M_CONSENT_NOT_GIVEN error (cf
https://github.com/matrix-org/matrix-doc/issues/1252) if consent is not yet
given.
2018-05-22 12:00:47 +01:00
Richard van der Hoff
8aeb529262 Merge pull request #3236 from matrix-org/rav/consent_notice
Send users a server notice about consent
2018-05-22 11:56:09 +01:00
Richard van der Hoff
8810685df9 Stub out ServerNoticesSender on the workers
... and have the sync endpoints call it directly rather than by obscure indirection
via PresenceHandler
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d5dca9a04f Move consent config parsing into ConsentConfig
turns out we need to reuse this, so it's better in the config class.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
9ea219c514 Send users a server notice about consent
When a user first syncs, we will send them a server notice asking them to
consent to the privacy policy if they have not already done so.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d14d7b8fdc Rename 'version' param on user consent config
we're going to use it for the version we require too.
2018-05-22 11:54:51 +01:00
Erik Johnston
7cfa8a87a1 Merge pull request #3258 from matrix-org/erikj/fixup_logcontext_rusage
Fix logcontext resource usage tracking
2018-05-22 11:39:56 +01:00
Erik Johnston
7948ecf234 Comment 2018-05-22 11:39:43 +01:00
Erik Johnston
020377a550 Fix logcontext resource usage tracking 2018-05-22 11:16:07 +01:00
Erik Johnston
13a8dfba0d Merge pull request #3252 from matrix-org/erikj/in_flight_requests
Add in flight request metrics
2018-05-22 09:54:30 +01:00
Erik Johnston
c435b0b441 Don't store context 2018-05-22 09:34:26 +01:00
Erik Johnston
fb2806b186 Move in_flight_requests_count to be a callback metric 2018-05-22 09:31:53 +01:00
Amber Brown
fcc525b0b7 rest of the changes 2018-05-21 19:48:57 -05:00
Amber Brown
df9f72d9e5 replacing portions 2018-05-21 19:47:37 -05:00
Amber Brown
c60e0d5e02 don't need the resource portion 2018-05-21 17:03:20 -05:00
Amber Brown
02c1d29133 look at the Prometheus metrics instead 2018-05-21 17:02:20 -05:00
Amber Brown
f258deffcb remove old metrics libs 2018-05-21 17:01:15 -05:00
Richard van der Hoff
413482f578 Merge pull request #3255 from matrix-org/rav/fix_transactions
Stop the transaction cache caching failures
2018-05-21 17:58:08 +01:00
Neil Johnson
4aac88928f Merge pull request #3251 from matrix-org/cohort_analytics
Tighter filtering for user_daily_visits
2018-05-21 16:42:18 +00:00
Richard van der Hoff
6e1cb54a05 Fix logcontext leak in HttpTransactionCache
ONE DAY I WILL PURGE THE WORLD OF THIS EVIL
2018-05-21 16:58:20 +01:00
Richard van der Hoff
6d6e7288fe Stop the transaction cache caching failures
The transaction cache has some code which tries to stop it caching failures,
but if the callback function failed straight away, then things would happen
backwards and we'd end up with the failure stuck in the cache.
2018-05-21 16:49:59 +01:00
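The ordering bug described above can be avoided by putting the Deferred into the cache first and only then attaching the errback that evicts it, so even a callback that fails synchronously gets cleaned out. A minimal Twisted sketch (the real HttpTransactionCache also wraps the result in an ObservableDeferred so several callers can share it):

```python
from twisted.internet import defer

class TransactionCache(object):
    def __init__(self):
        self._cache = {}

    def fetch_or_execute(self, key, fn, *args, **kwargs):
        if key not in self._cache:
            # maybeDeferred turns a synchronous exception into a failed
            # Deferred, so the errback below fires either way.
            d = defer.maybeDeferred(fn, *args, **kwargs)
            self._cache[key] = d

            def evict(failure):
                # Don't cache failures: drop the entry and re-raise.
                self._cache.pop(key, None)
                return failure

            d.addErrback(evict)
        return self._cache[key]
```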
Michael Kaye
d689e0dba1 Merge pull request #3239 from ptman/develop
Add lxml to docker image for web previews
2018-05-21 16:31:18 +01:00
Erik Johnston
dfa70adc33 Add in flight request metrics
This tracks CPU and DB usage while requests are in flight, rather than
when we write the response.
2018-05-21 16:23:06 +01:00
Adrian Tschira
933bf2dd35 replace some iteritems with six
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:59:26 +02:00
Adrian Tschira
d9fe2b2d9d Replace some more comparisons with six
plus a bonus b"" string I missed last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:56:31 +02:00
Adrian Tschira
45b55e23d3 Add batch_iter to utils
There's a frequent idiom I noticed where an iterable is split up into a
number of chunks/batches. Unfortunately that method does not work with
iterators like dict.keys() in python3. This implementation works with
iterators.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:48:30 +02:00
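A well-known way to write the batch_iter helper described above with itertools, working for generators and py3 dict views alike (a sketch along the lines the commit describes):

```python
from itertools import islice

def batch_iter(iterable, size):
    # Drive one shared iterator, yielding tuples of up to `size` items;
    # the empty tuple acts as the sentinel that stops iteration.
    sourceiter = iter(iterable)
    return iter(lambda: tuple(islice(sourceiter, size)), ())

for batch in batch_iter({"a": 1, "b": 2, "c": 3}.keys(), 2):
    print(batch)  # ('a', 'b') then ('c',)
```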
Adrian Tschira
dcc235b47d use stand-in value if maxint is not available
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:44 +02:00
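The maxint stand-in mentioned above is the usual py2/py3 shim: sys.maxint no longer exists on Python 3, so a substitute is picked at import time. Something like:

```python
import sys

try:
    MAX_INT = sys.maxint   # Python 2
except AttributeError:
    MAX_INT = sys.maxsize  # Python 3 stand-in: ints are unbounded there
```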
Adrian Tschira
73cbdef5f7 fix py3 intern and remove unnecessary py3 encode
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:31 +02:00
Adrian Tschira
aafb0f6b0d py3-ize url preview 2018-05-19 17:35:20 +02:00
Adrian Tschira
b932b4ea25 use repr, not str
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:28:42 +02:00
Neil Johnson
644aac5f73 Tighter filtering for user_daily_visits 2018-05-18 17:10:35 +01:00
Neil Johnson
08462620bf Merge pull request #3241 from matrix-org/fix_user_visits_insertion
fix psql compatability bug
2018-05-18 15:01:01 +00:00
Neil Johnson
ef466b3a13 fix psql compatability bug 2018-05-18 15:51:21 +01:00
Paul Tötterman
861f8a9b21 Add lxml to docker image for web previews
Signed-off-by: Paul Tötterman <paul.totterman@iki.fi>
2018-05-18 16:20:08 +03:00
Neil Johnson
2725223f08 Merge branch 'master' into develop 2018-05-18 14:07:50 +01:00
Neil Johnson
ab5e888927 Merge tag 'v0.29.1'
Changes in synapse v0.29.1 (2018-05-17)
==========================================
Changes:

* Update docker documentation (PR #3222)

Changes in synapse v0.29.0 (2018-05-16)
===========================================
No changes since v0.29.0-rc1

Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================

Notable changes: a Dockerfile for running Synapse (thanks to @kaiyou!) and a
closed spec bug in the Client-Server API. Additionally, further prep for the
Python 3 migration.

Potentially breaking change:

* Make Client-Server API return 401 for invalid token (PR #3161).

  This changes the Client-server spec to return a 401 error code instead of 403
  when the access token is unrecognised. This is the behaviour required by the
  specification, but some clients may be relying on the old, incorrect
  behaviour.

  Thanks to @NotAFile for fixing this.

Features:

* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!

Changes - General:

* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)

Changes - Refactors:

* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)

Changes - Python 3 migration:

* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!

Bug Fixes:

* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!
2018-05-18 14:06:34 +01:00
Neil Johnson
f3d9dca975 Merge branch 'release-v0.29.0' of https://github.com/matrix-org/synapse into release-v0.29.0 2018-05-18 13:55:19 +01:00
Richard van der Hoff
6d9dc67139 Merge pull request #3232 from matrix-org/rav/server_notices_room
Infrastructure for a server notices room
2018-05-18 11:28:04 +01:00
Richard van der Hoff
ed3125b0a1 Merge pull request #3235 from matrix-org/rav/fix_receipts_deferred
Fix error in handling receipts
2018-05-18 11:23:11 +01:00
Richard van der Hoff
67af392712 Merge pull request #3233 from matrix-org/rav/remove_dead_code
Remove unused `update_external_syncs`
2018-05-18 11:22:43 +01:00
Richard van der Hoff
011e1f4010 Better docstrings 2018-05-18 11:22:12 +01:00
Richard van der Hoff
26305788fe Make sure we reject attempts to invite the notices user 2018-05-18 11:18:39 +01:00
Richard van der Hoff
d10707c810 Replace inline docstrings with "Attributes" in class docstring 2018-05-18 11:00:55 +01:00
Erik Johnston
fa30ac38cc Merge pull request #3221 from matrix-org/erikj/purge_token
Make purge_history operate on tokens
2018-05-18 10:35:23 +01:00
Richard van der Hoff
8b1c856d81 Fix error in handling receipts
Fixes an error which has been happening ever since #2158 (v0.21.0-rc1):

> TypeError: argument of type 'ObservableDeferred' is not iterable

fixes #3234
2018-05-18 09:15:35 +01:00
Neil Johnson
6958459b50 bump version, change log 2018-05-17 21:35:07 +01:00
Richard van der Hoff
88d3405332 fix missing yield for server_notices_room 2018-05-17 18:33:45 +01:00
Richard van der Hoff
d43d480d86 Remove unused update_external_syncs
This method isn't used anywhere. Burninate it.
2018-05-17 18:22:19 +01:00
Neil Johnson
a2da6de40e light grammar changes 2018-05-17 18:12:00 +01:00
Michael Kaye
450f500d0c Note that secrets need to be retained. 2018-05-17 18:12:00 +01:00
Michael Kaye
82b0361f02 Document macaroon env var correctly 2018-05-17 18:12:00 +01:00
Michael Kaye
1b1b47aec6 Reference synapse docker image and docker-compose 2018-05-17 18:12:00 +01:00
Richard van der Hoff
fed62e21ad Infrastructure for a server notices room
Server Notices use a special room which the user can't dismiss. They are
created on demand when some other bit of the code calls send_notice.

(This doesn't actually do much yet because we don't call send_notice anywhere)
2018-05-17 17:58:25 +01:00
Richard van der Hoff
f8a1e76d64 Merge pull request #3225 from matrix-org/rav/move_creation_handler
Move RoomCreationHandler out of synapse.handlers.Handlers
2018-05-17 17:56:42 +01:00
Erik Johnston
f7906203f6 Merge pull request #3212 from matrix-org/erikj/epa_stream
Use stream rather than depth ordering for push actions
2018-05-17 12:01:21 +01:00
Richard van der Hoff
ae53c71d90 Merge pull request #1756 from rubo77/patch-4
Add instructions on how to set up the postgres user and clarify the final step
2018-05-17 10:52:54 +01:00
rubo77
616da9eb1d postgres.rst: Add instructions on how to set up the postgres user and clarify the final step 2018-05-17 11:48:56 +02:00
Richard van der Hoff
c46367d0d7 Move RoomCreationHandler out of synapse.handlers.Handlers
Handlers is deprecated nowadays, so let's move this out before I add a new
dependency on it.

Also fix the docstrings on create_room.
2018-05-17 09:08:42 +01:00
Neil Johnson
85b8acdeb4 Merge branch 'master' into develop 2018-05-16 16:48:10 +01:00
Neil Johnson
3c099219e0 bump version and changelog for 0.29.0 2018-05-16 15:44:59 +01:00
Erik Johnston
680530cc7f Clarify comment 2018-05-16 11:47:29 +01:00
Erik Johnston
43e6e82c4d Comments 2018-05-16 11:13:31 +01:00
Neil Johnson
dc8930ea9e Merge pull request #3163 from matrix-org/cohort_analytics
user visit data
2018-05-16 10:09:24 +00:00
Erik Johnston
c945af8799 Move and rename variable 2018-05-16 10:52:06 +01:00
Neil Johnson
be11a02c4f remove empty line 2018-05-16 10:45:40 +01:00
Neil Johnson
a2204cc9cc remove unused method recurring_user_daily_visit_stats 2018-05-16 09:47:20 +01:00
Neil Johnson
31c2502ca8 style and further constraining query 2018-05-16 09:46:43 +01:00
Richard van der Hoff
8030a825c8 Merge pull request #3213 from matrix-org/rav/consent_handler
ConsentResource to gather policy consent from users
2018-05-16 07:19:18 +01:00
Neil Johnson
c92a8aa578 pep8 2018-05-15 17:31:11 +01:00
Neil Johnson
05ac15ae82 Limit query load of generate_user_daily_visits
The aim is to keep track of when it was last called and only query from that point in time
2018-05-15 17:01:33 +01:00
Erik Johnston
5f27ed75ad Make purge_history operate on tokens
As we're soon going to change how topological_ordering works
2018-05-15 16:23:50 +01:00
Erik Johnston
37dbee6490 Use events_to_purge table rather than token 2018-05-15 16:23:47 +01:00
Richard van der Hoff
47815edcfa ConsentResource to gather policy consent from users
Hopefully there are enough comments and docs in this that it makes sense on its
own.
2018-05-15 15:11:59 +01:00
Neil Johnson
589ecc5b58 further musical chairs 2018-05-14 17:49:59 +01:00
Neil Johnson
e71fb118f4 rearrange and collect related PRs 2018-05-14 17:39:22 +01:00
Neil Johnson
aea80a0118 v0.29.0-rc1: bump version and change log 2018-05-14 15:50:57 +01:00
Neil Johnson
f077e97914 Instead of inserting user daily visit data at the end of the day, insert it incrementally through the day 2018-05-14 13:50:58 +01:00
David Baker
8cbbfd16fb Merge pull request #3201 from matrix-org/dbkr/leave_rooms_on_deactivate
Part user from rooms on account deactivate
2018-05-14 11:31:48 +01:00
Neil Johnson
977765bde2 Merge branch 'develop' of https://github.com/matrix-org/synapse into cohort_analytics 2018-05-14 09:31:42 +01:00
Michael Kaye
16f41237f0 Merge pull request #2846 from kaiyou/feat-dockerfile
Add a Dockerfile for synapse
2018-05-11 17:21:12 +01:00
Richard van der Hoff
c25d7ba12e Merge pull request #3208 from matrix-org/rav/more_refactor_request_handler
Set Server header in SynapseRequest
2018-05-11 16:07:47 +01:00
Erik Johnston
6406b70aeb Use stream rather than depth ordering for push actions
This simplifies things as it is, but will also allow us to change the
way we traverse topologically without having to update the way push
actions work.
2018-05-11 15:30:11 +01:00
Richard van der Hoff
23e2dfe940 Merge pull request #3209 from damir-manapov/master
transaction_id, destination defined twice
2018-05-11 00:35:13 +01:00
Richard van der Hoff
bd8d0cfab1 Merge remote-tracking branch 'origin/master' into develop 2018-05-11 00:19:26 +01:00
Damir Manapov
db18d854cd transaction_id, destination twice 2018-05-10 22:13:31 +03:00
Richard van der Hoff
318711e139 Set Server header in SynapseRequest
(instead of everywhere that writes a response. Or rather, the subset of places
which write responses where we haven't forgotten it).

This also means that we don't have to have the mysterious version_string
attribute in anything with a request handler.

Unfortunately it does mean that we have to pass the version string wherever we
instantiate a SynapseSite, which has been c&ped 150 times, but that is code
that ought to be cleaned up anyway really.
2018-05-10 18:50:27 +01:00
Richard van der Hoff
7b411007e6 Merge pull request #3203 from matrix-org/rav/refactor_request_handler
Refactor request handling wrappers
2018-05-10 15:26:53 +01:00
David Baker
6b49628e3b Catch failure to part user from room 2018-05-10 12:23:53 +01:00
David Baker
217bc53c98 Many docstrings 2018-05-10 12:20:40 +01:00
Richard van der Hoff
645cb4bf06 Remove redundant request_handler decorator
This is needless complexity; we might as well use the wrapper directly.

Also rename wrap_request_handler->wrap_json_request_handler.
2018-05-10 12:19:53 +01:00
Richard van der Hoff
09f570b935 Factor wrap_request_handler_with_logging out of wrap_request_handler
... so that it can be used on non-JSON endpoints
2018-05-10 12:19:52 +01:00
Richard van der Hoff
9589a1925e Remove include_metrics param
The metrics are now available via the request, so this is redundant and can go
away at last.
2018-05-10 12:19:52 +01:00
Richard van der Hoff
49e5a613f1 Move outgoing_responses_counter handling to RequestMetrics
it's much neater there.
2018-05-10 12:19:52 +01:00
Richard van der Hoff
b8700dd7d0 Bump requests_counter in wrapped_request_handler
less magic
2018-05-10 12:19:52 +01:00
Richard van der Hoff
c6f730282c Move RequestMetrics handling into SynapseRequest.processing()
It fits quite nicely here, and opens the path to getting rid of the
"include_metrics" mess.
2018-05-10 12:19:51 +01:00
Richard van der Hoff
09b29f9c4a Make RequestMetrics take a raw time rather than a clock
... which is going to make it easier to move around.
2018-05-10 12:18:52 +01:00
David Baker
4d298506dd Oops, don't call function passed to run_in_background 2018-05-10 11:57:13 +01:00
Richard van der Hoff
8460e48d06 Move request_id management into SynapseRequest 2018-05-10 11:48:17 +01:00
Richard van der Hoff
18e144fe08 Move RequestsMetrics to its own file
This is useful in its own right, because server.py is full of stuff; but more
importantly, I want to do some refactoring that will cause a circular reference
as it is.
2018-05-09 19:55:03 +01:00
Erik Johnston
bfe1f73855 Merge pull request #3199 from matrix-org/erikj/pagination_sync
Refactor sync APIs to reuse pagination API
2018-05-09 16:16:56 +01:00
Erik Johnston
5adb75bcba Merge pull request #3198 from matrix-org/erikj/fixup_return_pagination
Refactor get_recent_events_for_room return type
2018-05-09 16:07:14 +01:00
Erik Johnston
a5c98dda48 Merge pull request #3200 from matrix-org/erikj/remove_membership_change
Remove unused code path from member change DB func
2018-05-09 16:02:13 +01:00
Erik Johnston
d26bec8a43 Add comment to sync as to why code path is split 2018-05-09 15:56:07 +01:00
Erik Johnston
fcf55f2255 Fix returned token is no longer a tuple 2018-05-09 15:43:00 +01:00
Erik Johnston
7ce98804ff Fix up comment 2018-05-09 15:42:39 +01:00
Erik Johnston
cddf91c8b9 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/remove_membership_change 2018-05-09 15:32:07 +01:00
Erik Johnston
9896dab8f6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fixup_return_pagination 2018-05-09 15:31:33 +01:00
Erik Johnston
1e5280b7d0 Merge pull request #3196 from matrix-org/erikj/pagination_return
Refactor pagination DB API to return concrete type
2018-05-09 15:27:17 +01:00
Erik Johnston
75552d2148 Update comments 2018-05-09 15:15:38 +01:00
David Baker
294e9a0c9b Prefix internal functions 2018-05-09 15:10:37 +01:00
David Baker
46df23f581 Add the schema file 2018-05-09 15:07:54 +01:00
David Baker
52281e4c54 Indent fail 2018-05-09 15:06:16 +01:00
David Baker
7e8726b8fb Part deactivated users in the background
One room at a time so we don't take out the whole server with leave
events, and resume after a server restart.
2018-05-09 14:54:28 +01:00
Erik Johnston
c0e08dc45b Remove unused code path from member change DB func
The function is never called without a from_key, so we can remove all
the handling for that scenario.
2018-05-09 14:31:32 +01:00
Erik Johnston
0461ef01b7 Merge pull request #3195 from matrix-org/erikj/pagination_refactor
Refactor recent events func to use pagination func
2018-05-09 14:12:24 +01:00
Erik Johnston
e2accd7f1d Refactor sync APIs to reuse pagination API
The sync API often returns events in a topological rather than stream
ordering, e.g. when the user joined the room or on initial sync. When
this happens we can reuse existing pagination storage functions.
2018-05-09 13:43:39 +01:00
Erik Johnston
e5ab9cd24b Don't unnecessarily require token to be stream token
This allows calling the `get_recent_event_ids_for_room` function in more
situations.
2018-05-09 11:58:35 +01:00
Richard van der Hoff
60590211c1 Merge pull request #3194 from rubo77/fix-nuke
nuke-room-from-db.sh: fix deletion from search table
2018-05-09 11:58:07 +01:00
Erik Johnston
c4af4c24ca Refactor get_recent_events_for_room return type
There is no reason to return a tuple of tokens when the last token is
always the token passed as an argument. Changing it makes it consistent
with other storage APIs
2018-05-09 11:55:34 +01:00
Erik Johnston
05e0a2462c Refactor pagination DB API to return concrete type
This makes it easier to document what is being returned by the storage
functions and what some functions expect as arguments.
2018-05-09 11:34:24 +01:00
Erik Johnston
7dd13415db Remove unused from_token param 2018-05-09 10:58:16 +01:00
Erik Johnston
27cf170558 Refactor recent events func to use pagination func
This also removes a cache that is unlikely to ever get hit.
2018-05-09 10:55:55 +01:00
Erik Johnston
1aeb5e28a9 Merge pull request #3193 from matrix-org/erikj/pagination_refactor
Refactor /context to reuse pagination storage functions
2018-05-09 10:22:38 +01:00
Erik Johnston
23ec51c94c Fix up comments and make function private 2018-05-09 09:55:19 +01:00
Richard van der Hoff
d5377eba55 Merge pull request #2337 from rubo77/patch-5
nuke-room-from-db.sh: added postgresql option and help
2018-05-09 09:43:07 +01:00
rubo77
d11b8b6b65 nuke-room-from-db.sh: nuke from table event_search too 2018-05-09 00:46:47 +02:00
rubo77
8ff8ab3bce Dont nuke non-existing table event_search_content 2018-05-09 00:21:00 +02:00
rubo77
6c957e26f0 nuke-room-from-db.sh: added postgresql option and help 2018-05-09 00:14:01 +02:00
Erik Johnston
696f532453 Reuse existing pagination code for context API 2018-05-08 16:20:19 +01:00
Erik Johnston
3e6d306e94 Parse tokens before calling DB function 2018-05-08 16:18:58 +01:00
Erik Johnston
274b8c6025 Only fetch required fields from database 2018-05-08 16:15:25 +01:00
Erik Johnston
06c0d0ed08 Split paginate_room_events storage function 2018-05-08 16:14:26 +01:00
David Baker
bf98fa0864 Part user from rooms on account deactivate
This implements this very crudely: this probably isn't viable
because parting a user from all their rooms could take a long time,
and if the HS gets restarted in that time the process will be
aborted.
2018-05-08 15:58:35 +01:00
Richard van der Hoff
678e649b78 Merge pull request #3190 from mujx/notif-token-fix
notifications: Convert next_token to string according to the spec
2018-05-08 13:54:47 +01:00
Erik Johnston
0b7dfbb194 Merge pull request #3186 from matrix-org/erikj/fix_int_values_metrics
Fix metrics that have integer value labels
2018-05-08 09:48:50 +01:00
Konstantinos Sideris
88868b2839 notifications: Convert next_token to string according to the spec
Currently the parameter is serialized as an integer.

Signed-off-by: Konstantinos Sideris <sideris.konstantin@gmail.com>
2018-05-05 12:55:02 +03:00
kaiyou
5addeaa02c Add Docker packaging in the author list 2018-05-04 21:23:01 +02:00
Erik Johnston
6d8ec3462d Note that label values can be anything 2018-05-03 16:25:05 +01:00
Erik Johnston
95b6912045 Fix metrics that have integer value labels 2018-05-03 15:51:04 +01:00
kaiyou
9a779c2ddb Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-05-02 20:22:41 +02:00
kaiyou
4f2e898c29 Make the logging level configurable 2018-05-01 20:50:03 +02:00
kaiyou
d4c14e1438 Fix the documentation about 'POSTGRES_DB' 2018-05-01 20:47:58 +02:00
Matthew Hodgson
da602419b2 missing word :| 2018-05-01 19:19:23 +01:00
Matthew Hodgson
562532dd2d Merge branch 'release-v0.28.1' 2018-05-01 19:04:11 +01:00
Neil Johnson
dd1a832419 remove user agent from data model, will just join on user_ips 2018-05-01 12:13:49 +01:00
Neil Johnson
d0857702e8 add indexes based on usage 2018-05-01 12:12:57 +01:00
Neil Johnson
5917562b60 10 mins seems more reasonable than every minute 2018-05-01 12:12:22 +01:00
Neil Johnson
fb6015d0a6 pep8 2018-04-25 17:56:11 +01:00
Neil Johnson
617bf40924 Generate user daily stats 2018-04-25 17:37:29 +01:00
Neil Johnson
48c01ae851 ignore atom editor python ide files 2018-04-25 11:02:28 +01:00
kaiyou
041b41a825 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-04-14 12:27:16 +02:00
kaiyou
a13b7860c6 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-04-08 17:56:44 +02:00
kaiyou
757f1b5843 Merge remote-tracking branch 'upstream/master' into feat-dockerfile 2018-03-17 16:02:08 +01:00
kaiyou
f44b7c022f Disable logging to file and rely on the console when using Docker 2018-02-10 23:57:51 +01:00
kaiyou
07f1b71819 Explicitly provide the postgres password to synapse in the Compose example 2018-02-10 23:57:36 +01:00
kaiyou
b815aa0e2d Remove an accidentally committed test configuration 2018-02-10 21:59:58 +01:00
kaiyou
6f0b1f85f9 Generate macaroon and registration secrets, then store the results to the data dir 2018-02-10 00:05:03 +01:00
kaiyou
ca70148c05 Fix the path to the log config file 2018-02-09 00:23:19 +01:00
kaiyou
e511979fe6 Make SYNAPSE_MACAROON_SECRET_KEY a mandatory option 2018-02-09 00:13:26 +01:00
kaiyou
a03c382966 Specify the Docker registry for the postgres image 2018-02-08 22:00:43 +01:00
kaiyou
48e2c641b8 Specify the Docker registry in the build tag 2018-02-08 21:58:12 +01:00
kaiyou
d8680c969b Make it clear that the image has two modes of operation 2018-02-08 21:55:35 +01:00
kaiyou
b9b668e4bb Update to Alpine 3.7 and switch to libressl 2018-02-08 21:39:36 +01:00
kaiyou
ef1f8d4be6 Enable email server configuration from environment variables 2018-02-08 20:53:12 +01:00
kaiyou
a0af0054ec Honor the SYNAPSE_REPORT_STATS parameter in the Docker image 2018-02-08 20:46:11 +01:00
kaiyou
914a59cb8c Disable the Web client in the Docker image 2018-02-08 20:43:45 +01:00
kaiyou
e174c46a29 Use 'synapse' as a default postgres user in Docker examples 2018-02-08 20:42:57 +01:00
kaiyou
b8a4dceb3c Refactor the start script to better handle mandatory parameters 2018-02-08 20:41:41 +01:00
kaiyou
084afbb6a0 Rename the permissions variable to avoid confusion 2018-02-08 19:50:04 +01:00
kaiyou
58df3a8c5d Add some documentation about high performance storage 2018-02-08 19:48:53 +01:00
kaiyou
63fd148724 Make it clear that two modes are available in the documentation, improve the compose file 2018-02-08 19:46:11 +01:00
kaiyou
1ffd9cb936 Support loading application service files from /data/appservices/ 2018-02-05 23:13:27 +01:00
kaiyou
107a5c9441 Add the non-tls port to the expose list 2018-02-05 23:02:33 +01:00
kaiyou
ee3b160a2a Only generate configuration files when necessary 2018-02-05 22:57:35 +01:00
kaiyou
630573a932 Do not copy documentation files to the Docker root folder 2018-02-05 22:57:22 +01:00
kaiyou
f5364b47ec Point to the 'latest' tag in the Docker documentation 2018-02-05 22:14:40 +01:00
kaiyou
d8c7da5dca Fix a typo in the Docker README 2018-02-05 22:12:50 +01:00
kaiyou
cf4ef60e28 Document the cache factor environment variable for Docker 2018-02-05 22:10:03 +01:00
kaiyou
cd51931b62 Add dynamic TURN configuration in the Docker image 2018-02-05 21:53:53 +01:00
kaiyou
81010a126e Add dynamic recaptcha configuration in the Docker image 2018-02-05 21:28:15 +01:00
kaiyou
8db84e9b21 Remove docker related files from the python manifest 2018-02-05 20:08:35 +01:00
kaiyou
e9021e16c4 Run the server as an unprivileged user 2018-02-04 23:19:08 +01:00
kaiyou
f72c9c1fb6 Fix multiple typos 2018-02-04 16:18:40 +01:00
kaiyou
b8ab78b82c Add the build cache/ folder to gitignore 2018-02-04 15:41:54 +01:00
kaiyou
9a87b8aaf7 Update sumperdump Docker readme to match this image properties 2018-02-04 15:27:32 +01:00
kaiyou
84a9209ba7 Remove etc/service files from rob's branch 2018-02-04 15:08:43 +01:00
kaiyou
53965334da Merge remote-tracking branch 'origin/rob/docker' into feat-dockerfile 2018-02-04 15:08:19 +01:00
kaiyou
a207cccb05 Reuse environment variables of the postgres container 2018-02-04 15:04:26 +01:00
kaiyou
1ba2fe114c Provide an example docker compose file 2018-02-04 12:55:20 +01:00
kaiyou
042757feb2 Install the postgres dependencies 2018-02-04 12:28:42 +01:00
kaiyou
886c2d5019 Support an external postgresql config in the Docker image 2018-02-04 12:20:29 +01:00
kaiyou
f2bf0cda02 Generate shared secrets if not defined in the environment 2018-02-04 11:40:20 +01:00
kaiyou
6d1e28a842 Generate any missing keys before starting synapse 2018-02-04 11:14:06 +01:00
kaiyou
48bc22f89d Allow for a wheel cache and include missing files in the build 2018-02-04 10:58:07 +01:00
kaiyou
d434ae3387 Add template config files for the Docker image 2018-02-03 20:30:08 +01:00
kaiyou
431476fbc4 Initial commit including a Dockerfile for synapse 2018-02-03 20:18:36 +01:00
Robert Swain
24d162814b docker: s/matrix-org/matrixdotorg/g 2017-09-29 11:40:15 +02:00
Robert Swain
95e02b856b docker: Initial Dockerfile and docker-compose.yaml 2017-09-28 12:12:47 +02:00
236 changed files with 7677 additions and 2955 deletions

.dockerignore Normal file

@@ -0,0 +1,5 @@
Dockerfile
.travis.yml
.gitignore
demo/etc
tox.ini

.gitignore vendored

@@ -1,5 +1,6 @@
*.pyc
.*.swp
*~
.DS_Store
_trial_temp/
@@ -13,6 +14,7 @@ docs/build/
cmdclient_config.json
homeserver*.db
homeserver*.log
homeserver*.log.*
homeserver*.pid
homeserver*.yaml
@@ -32,6 +34,7 @@ demo/media_store.*
demo/etc
uploads
cache
.idea/
media_store/
@@ -39,6 +42,8 @@ media_store/
*.tac
build/
venv/
venv*/
localhost-800*/
static/client/register/register_config.js
@@ -48,3 +53,4 @@ env/
*.config
.vscode/
.ropeproject/


@@ -4,7 +4,12 @@ language: python
# tell travis to cache ~/.cache/pip
cache: pip
before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
matrix:
fast_finish: true
include:
- python: 2.7
env: TOX_ENV=packaging
@@ -14,10 +19,13 @@ matrix:
- python: 2.7
env: TOX_ENV=py27
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=check-newsfragment
install:
- pip install tox


@@ -60,3 +60,6 @@ Niklas Riekenbrauck <nikriek at gmail dot.com>
Christoph Witzany <christoph at web.crofting.com>
* Add LDAP support for authentication
Pierre Jaury <pierre at jaury.eu>
* Docker packaging


@@ -1,5 +1,213 @@
Changes in synapse <unreleased>
===============================
Synapse 0.32.1 (2018-07-06)
===========================
Bugfixes
--------
- Add explicit dependency on netaddr (`#3488 <https://github.com/matrix-org/synapse/issues/3488>`_)
Changes in synapse v0.32.0 (2018-07-06)
===========================================
No changes since 0.32.0rc1
Synapse 0.32.0rc1 (2018-07-05)
==============================
Features
--------
- Add blacklist & whitelist of servers allowed to send events to a room via ``m.room.server_acl`` event.
- Cache factor override system for specific caches (`#3334 <https://github.com/matrix-org/synapse/issues/3334>`_)
- Add metrics to track appservice transactions (`#3344 <https://github.com/matrix-org/synapse/issues/3344>`_)
- Try to log more helpful info when a sig verification fails (`#3372 <https://github.com/matrix-org/synapse/issues/3372>`_)
- Synapse now uses the best performing JSON encoder/decoder according to your runtime (simplejson on CPython, stdlib json on PyPy). (`#3462 <https://github.com/matrix-org/synapse/issues/3462>`_)
- Add optional ip_range_whitelist param to AS registration files to lock AS IP access (`#3465 <https://github.com/matrix-org/synapse/issues/3465>`_)
- Reject invalid server names in federation requests (`#3480 <https://github.com/matrix-org/synapse/issues/3480>`_)
- Reject invalid server names in homeserver.yaml (`#3483 <https://github.com/matrix-org/synapse/issues/3483>`_)
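For illustration, a sketch of what the content of an ``m.room.server_acl``
state event might look like; the field names follow the server ACL proposal
that this feature implements, and the domain names are placeholders::

    {
        "allow_ip_literals": false,
        "allow": ["*"],
        "deny": ["evil.example.org", "*.evil.example.org"]
    }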
Bugfixes
--------
- Strip access_token from outgoing requests (`#3327 <https://github.com/matrix-org/synapse/issues/3327>`_)
- Redact AS tokens in logs (`#3349 <https://github.com/matrix-org/synapse/issues/3349>`_)
- Fix federation backfill from SQLite servers (`#3355 <https://github.com/matrix-org/synapse/issues/3355>`_)
- Fix event-purge-by-ts admin API (`#3363 <https://github.com/matrix-org/synapse/issues/3363>`_)
- Fix event filtering in get_missing_events handler (`#3371 <https://github.com/matrix-org/synapse/issues/3371>`_)
- Synapse is now stricter regarding accepting events which it cannot retrieve the prev_events for. (`#3456 <https://github.com/matrix-org/synapse/issues/3456>`_)
- Fix bug where synapse would explode when receiving unicode in HTTP User-Agent header (`#3470 <https://github.com/matrix-org/synapse/issues/3470>`_)
- Invalidate cache on correct thread to avoid race (`#3473 <https://github.com/matrix-org/synapse/issues/3473>`_)
Improved Documentation
----------------------
- ``doc/postgres.rst``: fix display of the last command block. Thanks to @ArchangeGabriel! (`#3340 <https://github.com/matrix-org/synapse/issues/3340>`_)
Deprecations and Removals
-------------------------
- Remove was_forgotten_at (`#3324 <https://github.com/matrix-org/synapse/issues/3324>`_)
Misc
----
- `#3332 <https://github.com/matrix-org/synapse/issues/3332>`_, `#3341 <https://github.com/matrix-org/synapse/issues/3341>`_, `#3347 <https://github.com/matrix-org/synapse/issues/3347>`_, `#3348 <https://github.com/matrix-org/synapse/issues/3348>`_, `#3356 <https://github.com/matrix-org/synapse/issues/3356>`_, `#3385 <https://github.com/matrix-org/synapse/issues/3385>`_, `#3446 <https://github.com/matrix-org/synapse/issues/3446>`_, `#3447 <https://github.com/matrix-org/synapse/issues/3447>`_, `#3467 <https://github.com/matrix-org/synapse/issues/3467>`_, `#3474 <https://github.com/matrix-org/synapse/issues/3474>`_
Changes in synapse v0.31.2 (2018-06-14)
=======================================
SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no ``m.room.power_levels`` event in force in the room. (PR #3397)
Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.
Changes in synapse v0.31.1 (2018-06-08)
=======================================
v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.
We are not aware of it being actively exploited but please upgrade asap.
Bug Fixes:
* Fix event filtering in get_missing_events handler (PR #3371)
Changes in synapse v0.31.0 (2018-06-06)
=======================================
The most notable change from v0.30.0 is the switch to the Python Prometheus library to improve system
stats reporting. WARNING: this changes a number of prometheus metrics in a
backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.
Bug Fixes:
* Fix metric documentation tables (PR #3341)
* Fix LaterGauge error handling (694968f)
* Fix replication metrics (b7e7fd2)
Changes in synapse v0.31.0-rc1 (2018-06-04)
==========================================
Features:
* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)
Changes:
* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)
Changes, python 3 migration:
* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isinstance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!
Bugs:
* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
Changes in synapse v0.30.0 (2018-05-24)
==========================================
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.
They are used as part of communicating the server's policies (see ``docs/consent_tracking.md``);
however, the intention is that they may also be used for features such
as "Message of the day".
This feature is specific to Synapse, but uses standard Matrix communication mechanisms,
so should work with any Matrix client. For more details see ``docs/server_notices.md``
Further Server Notices/Consent Tracking Support:
* Allow overriding the server_notices user's avatar (PR #3273)
* Use the localpart in the consent uri (PR #3272)
* Support for putting %(consent_uri)s in messages (PR #3271)
* Block attempts to send server notices to remote users (PR #3270)
* Docs on consent bits (PR #3268)
Changes in synapse v0.30.0-rc1 (2018-05-23)
==========================================
Server Notices/Consent Tracking Support:
* ConsentResource to gather policy consent from users (PR #3213)
* Move RoomCreationHandler out of synapse.handlers.Handlers (PR #3225)
* Infrastructure for a server notices room (PR #3232)
* Send users a server notice about consent (PR #3236)
* Reject attempts to send event before privacy consent is given (PR #3257)
* Add a 'has_consented' template var to consent forms (PR #3262)
* Fix dependency on jinja2 (PR #3263)
Features:
* Cohort analytics (PR #3163, #3241, #3251)
* Add lxml to docker image for web previews (PR #3239) Thanks to @ptman!
* Add in flight request metrics (PR #3252)
Changes:
* Remove unused `update_external_syncs` (PR #3233)
* Use stream rather than depth ordering for push actions (PR #3212)
* Make purge_history operate on tokens (PR #3221)
* Don't support limitless pagination (PR #3265)
Bug Fixes:
* Fix logcontext resource usage tracking (PR #3258)
* Fix error in handling receipts (PR #3235)
* Stop the transaction cache caching failures (PR #3255)
Changes in synapse v0.29.1 (2018-05-17)
==========================================
Changes:
* Update docker documentation (PR #3222)
Changes in synapse v0.29.0 (2018-05-16)
===========================================
No changes since v0.29.0-rc1
Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================
Notable changes include a Dockerfile for running Synapse (thanks to @kaiyou!) and a
fix for a spec compliance bug in the Client-Server API, along with further preparation
for the Python 3 migration.
Potentially breaking change:
@@ -12,6 +220,66 @@ Potentially breaking change:
Thanks to @NotAFile for fixing this.
Features:
* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!
Changes - General:
* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)
Changes - Refactors:
* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)
Changes - Python 3 migration:
* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!
Bug Fixes:
* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!
Changes in synapse v0.28.1 (2018-05-01)
=======================================


@@ -48,6 +48,26 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Changelog
~~~~~~~~~
All changes, even minor ones, need a corresponding changelog
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
directory named in the format of ``issuenumberOrPR.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).
For example, a fix for a bug reported in #1234 would have its
changelog entry in ``changelog.d/1234.bugfix``, and contain content
like "The security levels of Florbs are now validated when
received over federation. Contributed by Jane Matrix".
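Such an entry can be created straight from the shell; a minimal sketch reusing
the example above::

    echo "The security levels of Florbs are now validated when received over federation. Contributed by Jane Matrix" > changelog.d/1234.bugfix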
Attribution
~~~~~~~~~~~
@@ -110,11 +130,15 @@ If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
Signed-off-by: Your Name <your@email.example.org>
...using your real name; unfortunately pseudonyms and anonymous contributions
can't be accepted. Git makes this trivial - just use the -s flag when you do
``git commit``, having first set ``user.name`` and ``user.email`` git configs
(which you should have done anyway :)
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the ``-s``
flag to ``git commit``, which uses the name and email set in your
``user.name`` and ``user.email`` git configs.
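For example, a minimal sketch (substitute your own name and address)::

    git config user.name "Jane Matrix"
    git config user.email "jane@example.org"
    git commit -s -m "Validate florb security levels on federation"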
Conclusion
~~~~~~~~~~

Dockerfile Normal file

@@ -0,0 +1,19 @@
FROM docker.io/python:2-alpine3.7
RUN apk add --no-cache --virtual .nacl_deps su-exec build-base libffi-dev zlib-dev libressl-dev libjpeg-turbo-dev linux-headers postgresql-dev libxslt-dev
COPY . /synapse
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
&& pip install --upgrade pip setuptools psycopg2 lxml \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \
&& rm -rf setup.py setup.cfg synapse
VOLUME ["/data"]
EXPOSE 8008/tcp 8448/tcp
ENTRYPOINT ["/start.py"]


@@ -25,7 +25,12 @@ recursive-include synapse/static *.js
exclude jenkins.sh
exclude jenkins*.sh
exclude jenkins*
exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh
include pyproject.toml
recursive-include changelog.d *
prune .github
prune demo/etc


@@ -157,8 +157,9 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate the
above in Docker at https://hub.docker.com/r/avhost/docker-matrix/tags/
There is an official synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this including configuration options is available in `contrib/docker/README.md`.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy

contrib/docker/README.md Normal file

@@ -0,0 +1,153 @@
# Synapse Docker
The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
database server or a TURN server, you should run these separately.
If you run a Postgres server, you should simply include it in the same Compose
project or set the proper environment variables and the image will automatically
use that server.
## Build
Build the docker image with the `docker build` command from the root of the synapse repository.
```
docker build -t docker.io/matrixdotorg/synapse .
```
The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
You may have a local Python wheel cache available, in which case copy the relevant packages into the ``cache/`` directory at the root of the project.
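As a rough sketch (assuming ``pip wheel`` is available in your build environment), the cache could be pre-populated from a checkout of this repository; the exact set of wheels produced depends on your platform:

```
# Build wheels for synapse and its dependencies into ./cache (illustrative)
mkdir -p cache
pip wheel --wheel-dir=cache .
```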
## Run
This image is designed to run either with an automatically generated configuration
file or with a custom configuration that requires manual editing.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
this image and a Postgres server. A sample ``docker-compose.yml`` is provided,
including example labels for reverse proxying and other artifacts.
Read the section about environment variables and set at least the mandatory variables,
then run the server:
```
docker-compose up -d
```
If secrets are not specified in the environment variables, they will be generated
as part of the startup. Please ensure these secrets are kept between launches of the
Docker container, as their loss may require users to log in again.
### Manual configuration
A sample ``docker-compose.yml`` is provided, including example labels for
reverse proxying and other artifacts. The docker-compose file is an example;
please comment/uncomment sections that are not suitable for your use case.
Specify a ``SYNAPSE_CONFIG_PATH``, preferably to a persistent path,
to use manual configuration. To generate a fresh ``homeserver.yaml``, simply run:
```
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host synapse generate
```
Then, customize your configuration and run the server:
```
docker-compose up -d
```
### Without Compose
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
-d \
--name synapse \
-v ${DATA_PATH}:/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
docker.io/matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on a large but low
performance HDD while other files could be stored on high performance
endpoints.
To set up an application service, simply create an ``appservices``
directory in the data volume and write the application service YAML
configuration file there. Multiple application services are supported (a
sample registration is sketched below).
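For instance, a hypothetical registration file at ``/data/appservices/myservice.yaml`` could look like the sketch below. The field names follow the standard Matrix application service registration format; every value is a placeholder to replace:

```
# Illustrative application service registration; replace all values.
id: myservice                  # unique identifier for this appservice
url: http://appservice:9000    # where the homeserver can reach the appservice
as_token: CHANGEME_AS_TOKEN    # token the appservice presents to the homeserver
hs_token: CHANGEME_HS_TOKEN    # token the homeserver presents to the appservice
sender_localpart: myservice_bot
namespaces:
  users:
    - exclusive: true
      regex: "@myservice_.*"
```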
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually. No other environment variable is required.
Otherwise, a dynamic configuration file will be used. The following environment
variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the server's public hostname.
* ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project, which helps us to get funding.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
uris to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default: `synapse`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server if any.
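Tying these together, a hedged example invocation with an external Postgres server and a mail relay might look as follows; every hostname and secret here is a placeholder:

```
docker run -d --name synapse \
    -v ${DATA_PATH}:/data \
    -e SYNAPSE_SERVER_NAME=my.matrix.host \
    -e SYNAPSE_REPORT_STATS=yes \
    -e POSTGRES_HOST=db.example.com \
    -e POSTGRES_USER=synapse \
    -e POSTGRES_PASSWORD=changeme \
    -e SYNAPSE_SMTP_HOST=smtp.example.com \
    -e SYNAPSE_SMTP_PORT=25 \
    docker.io/matrixdotorg/synapse:latest
```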


@@ -0,0 +1,219 @@
# vim:ft=yaml
## TLS ##
tls_certificate_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.crt"
tls_private_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.key"
tls_dh_params_path: "/data/{{ SYNAPSE_SERVER_NAME }}.tls.dh"
no_tls: {{ "True" if SYNAPSE_NO_TLS else "False" }}
tls_fingerprints: []
## Server ##
server_name: "{{ SYNAPSE_SERVER_NAME }}"
pid_file: /homeserver.pid
web_client: False
soft_file_limit: 0
## Ports ##
listeners:
{% if not SYNAPSE_NO_TLS %}
-
port: 8448
bind_addresses: ['0.0.0.0']
type: http
tls: true
x_forwarded: false
resources:
- names: [client]
compress: true
- names: [federation] # Federation APIs
compress: false
{% endif %}
- port: 8008
tls: false
bind_addresses: ['0.0.0.0']
type: http
x_forwarded: false
resources:
- names: [client]
compress: true
- names: [federation]
compress: false
## Database ##
{% if POSTGRES_PASSWORD %}
database:
name: "psycopg2"
args:
user: "{{ POSTGRES_USER or "synapse" }}"
password: "{{ POSTGRES_PASSWORD }}"
database: "{{ POSTGRES_DB or "synapse" }}"
host: "{{ POSTGRES_HOST or "db" }}"
port: "{{ POSTGRES_PORT or "5432" }}"
cp_min: 5
cp_max: 10
{% else %}
database:
name: "sqlite3"
args:
database: "/data/homeserver.db"
{% endif %}
## Performance ##
event_cache_size: "{{ SYNAPSE_EVENT_CACHE_SIZE or "10K" }}"
verbose: 0
log_file: "/data/homeserver.log"
log_config: "/compiled/log.config"
## Ratelimiting ##
rc_messages_per_second: 0.2
rc_message_burst_count: 10.0
federation_rc_window_size: 1000
federation_rc_sleep_limit: 10
federation_rc_sleep_delay: 500
federation_rc_reject_limit: 50
federation_rc_concurrent: 3
## Files ##
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "10M"
max_image_pixels: "32M"
dynamic_thumbnails: false
# List of thumbnail to precalculate when an image is uploaded.
thumbnail_sizes:
- width: 32
height: 32
method: crop
- width: 96
height: 96
method: crop
- width: 320
height: 240
method: scale
- width: 640
height: 480
method: scale
- width: 800
height: 600
method: scale
url_preview_enabled: False
max_spider_size: "10M"
## Captcha ##
{% if SYNAPSE_RECAPTCHA_PUBLIC_KEY %}
recaptcha_public_key: "{{ SYNAPSE_RECAPTCHA_PUBLIC_KEY }}"
recaptcha_private_key: "{{ SYNAPSE_RECAPTCHA_PRIVATE_KEY }}"
enable_registration_captcha: True
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
{% else %}
recaptcha_public_key: "YOUR_PUBLIC_KEY"
recaptcha_private_key: "YOUR_PRIVATE_KEY"
enable_registration_captcha: False
recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
{% endif %}
## Turn ##
{% if SYNAPSE_TURN_URIS %}
turn_uris:
{% for uri in SYNAPSE_TURN_URIS.split(',') %} - "{{ uri }}"
{% endfor %}
turn_shared_secret: "{{ SYNAPSE_TURN_SECRET }}"
turn_user_lifetime: "1h"
turn_allow_guests: True
{% else %}
turn_uris: []
turn_shared_secret: "YOUR_SHARED_SECRET"
turn_user_lifetime: "1h"
turn_allow_guests: True
{% endif %}
## Registration ##
enable_registration: {{ "True" if SYNAPSE_ENABLE_REGISTRATION else "False" }}
registration_shared_secret: "{{ SYNAPSE_REGISTRATION_SHARED_SECRET }}"
bcrypt_rounds: 12
allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
enable_group_creation: true
# The list of identity servers trusted to verify third party
# identifiers by this server.
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
## Metrics ###
{% if SYNAPSE_REPORT_STATS.lower() == "yes" %}
enable_metrics: True
report_stats: True
{% else %}
enable_metrics: False
report_stats: False
{% endif %}
## API Configuration ##
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.name"
{% if SYNAPSE_APPSERVICES %}
app_service_config_files:
{% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
{% endfor %}
{% else %}
app_service_config_files: []
{% endif %}
macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"
expire_access_token: False
## Signing Keys ##
signing_key_path: "/data/{{ SYNAPSE_SERVER_NAME }}.signing.key"
old_signing_keys: {}
key_refresh_interval: "1d" # 1 Day.
# The trusted servers to download signing keys from.
perspectives:
servers:
"matrix.org":
verify_keys:
"ed25519:auto":
key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
password_config:
enabled: true
{% if SYNAPSE_SMTP_HOST %}
email:
enable_notifs: false
smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
smtp_user: "{{ SYNAPSE_SMTP_USER }}"
smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
require_transport_security: False
notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
app_name: Matrix
template_dir: res/templates
notif_template_html: notif_mail.html
notif_template_text: notif_mail.txt
notif_for_new_users: True
riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
{% endif %}


@@ -0,0 +1,29 @@
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
request: ""
handlers:
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
root:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
handlers: [console]


@@ -0,0 +1,49 @@
# This compose file is compatible with Compose itself; it might need some
# adjustments to run properly with `docker stack`.
version: '3'
services:
synapse:
image: docker.io/matrixdotorg/synapse:latest
# Since synapse does not retry connecting to the database, restart upon
# failure
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_SERVER_NAME=my.matrix.host
- SYNAPSE_REPORT_STATS=no
- SYNAPSE_ENABLE_REGISTRATION=yes
- SYNAPSE_LOG_LEVEL=INFO
- POSTGRES_PASSWORD=changeme
volumes:
# You may either store all the files in a local folder
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
# - /path/to/ssd:/data/uploads
# - /path/to/large_hdd:/data/media
depends_on:
- db
# In order to expose Synapse, remove one of the following, you might for
# instance expose the TLS port directly:
ports:
- 8448:8448/tcp
# ... or use a reverse proxy, here is an example for traefik:
labels:
- traefik.enable=true
- traefik.frontend.rule=Host:my.matrix.host
- traefik.port=8448
db:
image: docker.io/postgres:10-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse
- POSTGRES_PASSWORD=changeme
volumes:
# You may store the database tables in a local folder..
- ./schemas:/var/lib/postgresql/data
# .. or store them on some high performance storage for better results
# - /path/to/ssd/storage:/var/lib/postgresql/data

contrib/docker/start.py Executable file

@@ -0,0 +1,66 @@
#!/usr/local/bin/python
import jinja2
import os
import sys
import subprocess
import glob
# Utility functions
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
def check_arguments(environ, args):
for argument in args:
if argument not in environ:
print("Environment variable %s is mandatory, exiting." % argument)
sys.exit(2)
def generate_secrets(environ, secrets):
for name, secret in secrets.items():
if secret not in environ:
filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
if os.path.exists(filename):
with open(filename) as handle: value = handle.read()
else:
print("Generating a random secret for {}".format(name))
value = os.urandom(32).encode("hex")
with open(filename, "w") as handle: handle.write(value)
environ[secret] = value
# Prepare the configuration
mode = sys.argv[1] if len(sys.argv) > 1 else None
environ = os.environ.copy()
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
args = ["python", "-m", "synapse.app.homeserver"]
# In generate mode, generate a configuration, missing keys, then exit
if mode == "generate":
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
args += [
"--server-name", environ["SYNAPSE_SERVER_NAME"],
"--report-stats", environ["SYNAPSE_REPORT_STATS"],
"--config-path", environ["SYNAPSE_CONFIG_PATH"],
"--generate-config"
]
os.execv("/usr/local/bin/python", args)
# In normal mode, generate missing keys if any, then run synapse
else:
# Parse the configuration file
if "SYNAPSE_CONFIG_PATH" in environ:
args += ["--config-path", environ["SYNAPSE_CONFIG_PATH"]]
else:
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
generate_secrets(environ, {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
})
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists("/compiled"): os.mkdir("/compiled")
convert("/conf/homeserver.yaml", "/compiled/homeserver.yaml", environ)
convert("/conf/log.config", "/compiled/log.config", environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
args += ["--config-path", "/compiled/homeserver.yaml"]
# Generate missing keys and start synapse
subprocess.check_output(args + ["--generate-keys"])
os.execv("/sbin/su-exec", ["su-exec", ownership] + args)


@@ -44,13 +44,26 @@ Deactivate Account
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
password reset). It can also mark the user as GDPR-erased (stopping their data
from being distributed further, and deleting it entirely if there are no other
references to it).
The api is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
with a body of:
.. code:: json
{
"erase": true
}
including an ``access_token`` of a server admin.
The erase parameter is optional and defaults to 'false'.
An empty body may be passed for backwards compatibility.
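As a hedged illustration, the request could be issued with curl; the server
name, user id and admin access token below are placeholders::

    curl -X POST \
        -H "Content-Type: application/json" \
        -d '{"erase": true}' \
        'https://homeserver.example.com/_matrix/client/r0/admin/deactivate/@user:example.com?access_token=<admin_access_token>'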
Reset password


@@ -16,7 +16,7 @@
print("I am a fish %s" %
"moo")
and this::
and this::
print(
"I am a fish %s" %

docs/consent_tracking.md Normal file

@@ -0,0 +1,160 @@
Support in Synapse for tracking agreement to server terms and conditions
========================================================================
Synapse 0.30 introduces support for tracking whether users have agreed to the
terms and conditions set by the administrator of a server - and blocking access
to the server until they have.
There are several parts to this functionality; each requires some specific
configuration in `homeserver.yaml` to be enabled.
Note that various parts of the configuration and this document refer to the
"privacy policy": agreement with a privacy policy is one particular use of this
feature, but of course administrators can specify other terms and conditions
unrelated to "privacy" per se.
Collecting policy agreement from a user
---------------------------------------
Synapse can be configured to serve the user a simple policy form with an
"accept" button. Clicking "Accept" records the user's acceptance in the
database and shows a success page.
To enable this, first create templates for the policy and success pages.
These should be stored on the local filesystem.
These templates use the [Jinja2](http://jinja.pocoo.org) templating language,
and [docs/privacy_policy_templates](privacy_policy_templates) gives
examples of the sort of thing that can be done.
Note that the templates must be stored under a name giving the language of the
template - currently this must always be `en` (for "English");
internationalisation support is intended for the future.
The template for the policy itself should be versioned and named according to
the version: for example `1.0.html`. The version of the policy which the user
has agreed to is stored in the database.
Once the templates are in place, make the following changes to `homeserver.yaml`:
1. Add a `user_consent` section, which should look like:
```yaml
user_consent:
template_dir: privacy_policy_templates
version: 1.0
```
`template_dir` points to the directory containing the policy
templates. `version` defines the version of the policy which will be served
to the user. In the example above, Synapse will serve
`privacy_policy_templates/en/1.0.html`.
2. Add a `form_secret` setting at the top level:
```yaml
form_secret: "<unique secret>"
```
This should be set to an arbitrary secret string (try `pwgen -y 30` to
generate suitable secrets).
More on what this is used for below.
3. Add `consent` wherever the `client` resource is currently enabled in the
`listeners` configuration. For example:
```yaml
listeners:
- port: 8008
resources:
- names:
- client
- consent
```
Finally, ensure that `jinja2` is installed. If you are using a virtualenv, this
should be a matter of `pip install Jinja2`. On debian, try `apt-get install
python-jinja2`.
Once this is complete, and the server has been restarted, try visiting
`https://<server>/_matrix/consent`. If correctly configured, this should give
an error "Missing string query parameter 'u'". It is now possible to manually
construct URIs where users can give their consent.
### Constructing the consent URI
It may be useful to manually construct the "consent URI" for a given user - for
instance, in order to send them an email asking them to consent. To do this,
take the base `https://<server>/_matrix/consent` URL and add the following
query parameters:
* `u`: the user id of the user. This can either be a full MXID
(`@user:server.com`) or just the localpart (`user`).
* `h`: hex-encoded HMAC-SHA256 of `u` using the `form_secret` as a key. It is
possible to calculate this on the commandline with something like:
```bash
echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
```
This should result in a URI which looks something like:
`https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
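Equivalently, a small Python sketch of the same calculation (the secret and
user id are placeholders and must match the `u` parameter and `form_secret`
above):

```python
import hashlib
import hmac

form_secret = "<form_secret>"  # must match form_secret in homeserver.yaml
user_id = "@user:server.com"   # or just the localpart, as passed in 'u'

# Hex-encoded HMAC-SHA256 of the user id, keyed with form_secret
h = hmac.new(form_secret.encode("utf-8"), user_id.encode("utf-8"),
             hashlib.sha256).hexdigest()

print("https://<server>/_matrix/consent?u=%s&h=%s" % (user_id, h))
```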
Sending users a server notice asking them to agree to the policy
----------------------------------------------------------------
It is possible to configure Synapse to send a [server
notice](server_notices.md) to anybody who has not yet agreed to the current
version of the policy. To do so:
* ensure that the consent resource is configured, as in the previous section
* ensure that server notices are configured, as in [server_notices.md](server_notices.md).
* Add `server_notice_content` under `user_consent` in `homeserver.yaml`. For
example:
```yaml
user_consent:
  server_notice_content:
    msgtype: m.text
    body: >-
      Please give your consent to the privacy policy at %(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent URI for that user.
* ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
URI that clients use to connect to the server. (It is used to construct
`consent_uri` in the server notice.)
Blocking users from using the server until they agree to the policy
-------------------------------------------------------------------
Synapse can be configured to block any attempts to join rooms or send messages
until the user has given their agreement to the policy. (Joining the server
notices room is exempted from this).
To enable this, add `block_events_error` under `user_consent`. For example:
```yaml
user_consent:
  block_events_error: >-
    You can't send any messages until you consent to the privacy policy at
    %(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent URI for that user. As with the server notice, ensure that
`public_baseurl` is set in `homeserver.yaml`, and gives the base
URI that clients use to connect to the server. (It is used to construct
`consent_uri` in the error.)

docs/manhole.md (new file)

@@ -0,0 +1,43 @@
Using the synapse manhole
=========================
The "manhole" allows server administrators to access a Python shell on a running
Synapse installation. This is a very powerful mechanism for administration and
debugging.
To enable it, first uncomment the `manhole` listener configuration in
`homeserver.yaml`:
```yaml
listeners:
- port: 9000
bind_addresses: ['::1', '127.0.0.1']
type: manhole
```
(`bind_addresses` in the above is important: it ensures that access to the
manhole is only possible for local users).
Note that this will give administrative access to synapse to **all users** with
shell access to the server. It should therefore **not** be enabled in
environments where untrusted users have shell access.
Then restart synapse, and point an ssh client at port 9000 on localhost, using
the username `matrix`:
```bash
ssh -p9000 matrix@localhost
```
The password is `rabbithole`.
This gives a Python REPL in which `hs` gives access to the
`synapse.server.HomeServer` object - which in turn gives access to many other
parts of the process.
As a simple example, retrieving an event from the database:
```
>>> hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org')
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
```

docs/metrics-howto.rst

@@ -1,25 +1,47 @@
How to monitor Synapse metrics using Prometheus
===============================================

1. Install Prometheus:

   Follow instructions at http://prometheus.io/docs/introduction/install/

2. Enable Synapse metrics:

   There are two methods of enabling metrics in Synapse.

   The first serves the metrics as a part of the usual web server and can be
   enabled by adding the "metrics" resource to the existing listener as such::

     resources:
       - names:
         - client
         - metrics

   This provides a simple way of adding metrics to your Synapse installation,
   and serves under ``/_synapse/metrics``. If you do not wish your metrics to
   be publicly exposed, you will need to either filter them out at your load
   balancer, or use the second method.

   The second method runs the metrics server on a different port, in a
   different thread to Synapse. This can make the metrics more resilient when
   the main process is under heavy load, and makes it easier to expose them
   to internal networks only. The served metrics are available over HTTP
   only, and will be available at ``/``.

   Add a new listener to homeserver.yaml::

     listeners:
       - type: metrics
         port: 9000
         bind_addresses:
           - '0.0.0.0'

   For both options, you will need to ensure that ``enable_metrics`` is set
   to ``True``.

   Restart Synapse.

3. Add a Prometheus target for Synapse.

   It needs to set the ``metrics_path`` to a non-default value (under
   ``scrape_configs``); a minimal target definition (a sketch, with
   placeholder job name, hostname and port) might look like::
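     scrape_configs:
       - job_name: "synapse"
         metrics_path: "/_synapse/metrics"
         static_configs:
           - targets: ["my.server.here:9092"]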
   If your Prometheus is older than 1.5.2, you will need to replace
   ``static_configs`` in the above with ``target_groups``.

   Restart Prometheus.
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------
The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
All time duration-based metrics have been changed to be seconds. This affects:
+----------------------------------+
| msec -> sec metrics |
+==================================+
| python_gc_time |
+----------------------------------+
| python_twisted_reactor_tick_time |
+----------------------------------+
| synapse_storage_query_time |
+----------------------------------+
| synapse_storage_schedule_time |
+----------------------------------+
| synapse_storage_transaction_time |
+----------------------------------+
Several metrics have been changed to be histograms, which sort entries into
buckets and allow better analysis. The following metrics are now histograms:
+-------------------------------------------+
| Altered metrics |
+===========================================+
| python_gc_time |
+-------------------------------------------+
| python_twisted_reactor_pending_calls |
+-------------------------------------------+
| python_twisted_reactor_tick_time |
+-------------------------------------------+
| synapse_http_server_response_time_seconds |
+-------------------------------------------+
| synapse_storage_query_time |
+-------------------------------------------+
| synapse_storage_schedule_time |
+-------------------------------------------+
| synapse_storage_transaction_time |
+-------------------------------------------+
Block and response metrics renamed for 0.27.0

docs/postgres.rst

@@ -6,16 +6,22 @@ Postgres version 9.4 or later is known to work.
Set up database
===============

Assuming your PostgreSQL database user is called ``postgres``, create a user
``synapse_user`` with::

   su - postgres
   createuser --pwprompt synapse_user

The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::

   CREATE DATABASE synapse
    ENCODING 'UTF8'
    LC_COLLATE='C'
    LC_CTYPE='C'
    template=template0
    OWNER synapse_user;

This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
@@ -46,8 +52,8 @@ As with Debian/Ubuntu, postgres support depends on the postgres python connector
Synapse config
==============
When you are ready to start using PostgreSQL, edit the ``database`` section in
your config file to match the following lines::
   database:
      name: psycopg2
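A fuller ``database`` section (a sketch with placeholder credentials;
``cp_min``/``cp_max`` bound the psycopg2 connection pool, and are assumptions
rather than part of the diff above) might look like::

   database:
      name: psycopg2
      args:
         user: synapse_user
         password: secretpassword
         database: synapse
         host: localhost
         cp_min: 5
         cp_max: 10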
@@ -96,9 +102,12 @@ complete, restart synapse. For instance::
cp homeserver.db homeserver.db.snapshot
./synctl start
Copy the old config file into a new config file::

   cp homeserver.yaml homeserver-postgres.yaml

Edit the database section as described in the section *Synapse config* above
and with the SQLite snapshot located at ``homeserver.db.snapshot`` simply run::
synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
@@ -117,6 +126,11 @@ run::
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml``::

   ./synctl stop
   mv homeserver.yaml homeserver-old-sqlite.yaml
   mv homeserver-postgres.yaml homeserver.yaml
   ./synctl start

Synapse should now be running against PostgreSQL.

docs/privacy_policy_templates/en/1.0.html (new file)

@@ -0,0 +1,23 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
{% if has_consented %}
<p>
Your base already belong to us.
</p>
{% else %}
<p>
All your base are belong to us.
</p>
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% endif %}
</body>
</html>

docs/privacy_policy_templates/en/success.html (new file)

@@ -0,0 +1,11 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
<p>
Sweet.
</p>
</body>
</html>

docs/server_notices.md (new file)

@@ -0,0 +1,74 @@
Server Notices
==============
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.
They are used as part of the communication of server policies (see
[consent_tracking.md](consent_tracking.md)), but the intention is that
they may also find a use for features such as "Message of the day".
This is a feature specific to Synapse, but it uses standard Matrix
communication mechanisms, so should work with any Matrix client.
User experience
---------------
When the user is first sent a server notice, they will get an invitation to a
room (typically called 'Server Notices', though this is configurable in
`homeserver.yaml`). They will be **unable to reject** this invitation -
attempts to do so will receive an error.
Once they accept the invitation, they will see the notice message in the room
history; it will appear to have come from the 'server notices user' (see
below).
The user is prevented from sending any messages in this room by the power
levels.
Having joined the room, the user can leave the room if they want. Subsequent
server notices will then cause a new room to be created.
Synapse configuration
---------------------
Server notices come from a specific user id on the server. Server
administrators are free to choose the user id - something like `server` is
suggested, meaning the notices will come from
`@server:<your_server_name>`. Once the Server Notices user is configured, that
user id becomes a special, privileged user, so administrators should ensure
that **it is not already allocated**.
In order to support server notices, it is necessary to add some configuration
to the `homeserver.yaml` file. In particular, you should add a `server_notices`
section, which should look like this:
```yaml
server_notices:
  system_mxid_localpart: server
  system_mxid_display_name: "Server Notices"
  system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
  room_name: "Server Notices"
```
The only compulsory setting is `system_mxid_localpart`, which defines the user
id of the Server Notices user, as above. `room_name` defines the name of the
room which will be created.
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
displayname and avatar of the Server Notices user.
Sending notices
---------------
As of the current version of synapse, there is no convenient interface for
sending notices (other than the automated ones sent as part of consent
tracking).
In the meantime, it is possible to test this feature using the manhole. Having
gone into the manhole as described in [manhole.md](manhole.md), a notice can be
sent with something like:
```
>>> hs.get_server_notices_manager().send_notice('@user:server.com', {'msgtype':'m.text', 'body':'foo'})
```

pyproject.toml (new file)

@@ -0,0 +1,5 @@
[tool.towncrier]
package = "synapse"
filename = "CHANGES.rst"
directory = "changelog.d"
issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"
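With this in place, release notes are assembled from fragments in
`changelog.d/`; a sketch of the workflow (the fragment name, its contents and
the version are illustrative, and assume towncrier's standard `--draft` and
`--version` options):
```bash
# add a news fragment, named after a (hypothetical) issue number and change type
echo "Fix a bug in the thing." > changelog.d/1234.bugfix

# preview the assembled changelog, then write it into CHANGES.rst
towncrier --draft
towncrier --version 0.32.1
```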


@@ -18,14 +18,22 @@
from __future__ import print_function
import argparse
from urlparse import urlparse, urlunparse
import nacl.signing
import json
import base64
import requests
import sys
from requests.adapters import HTTPAdapter
import srvlookup
import yaml
# uncomment the following to enable debug logging of http requests
#from httplib import HTTPConnection
#HTTPConnection.debuglevel = 1
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -113,17 +121,6 @@ def read_signing_keys(stream):
return keys
def lookup(destination, path):
if ":" in destination:
return "https://%s%s" % (destination, path)
else:
try:
srv = srvlookup.lookup("matrix", "tcp", destination)[0]
return "https://%s:%d%s" % (srv.host, srv.port, path)
except:
return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
@@ -152,13 +149,19 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers.append(bytes(header))
print ("Authorization: %s" % header, file=sys.stderr)
dest = lookup(destination, path)
dest = "matrix://%s%s" % (destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.request(
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
result = s.request(
method=method,
url=dest,
headers={"Authorization": authorization_headers[0]},
headers={
"Host": destination,
"Authorization": authorization_headers[0]
},
verify=False,
data=content,
)
@@ -242,5 +245,39 @@ def read_args_from_config(args):
args.signing_key_path = config['signing_key_path']
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s):
if s[-1] == ']':
# ipv6 literal (with no port)
return s, 8448
if ":" in s:
out = s.rsplit(":",1)
try:
port = int(out[1])
except ValueError:
raise ValueError("Invalid host:port '%s'" % s)
return out[0], port
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
except:
return s, 8448
def get_connection(self, url, proxies=None):
parsed = urlparse(url)
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlunparse((
"https", netloc, parsed.path, parsed.params, parsed.query,
parsed.fragment,
))
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)
if __name__ == "__main__":
main()


@@ -6,9 +6,19 @@
## Do not run it lightly.
set -e
if [ "$1" == "-h" ] || [ "$1" == "" ]; then
echo "Call with ROOM_ID as first option and then pipe it into the database. So for instance you might run"
echo " nuke-room-from-db.sh <room_id> | sqlite3 homeserver.db"
echo "or"
echo " nuke-room-from-db.sh <room_id> | psql --dbname=synapse"
exit
fi
ROOMID="$1"
sqlite3 homeserver.db <<EOF
cat <<EOF
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
@@ -29,7 +39,7 @@ DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search_content WHERE c1room_id = '$ROOMID';
DELETE FROM event_search WHERE room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';


@@ -17,4 +17,5 @@ ignore =
[flake8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
ignore = W503
# E203 is contrary to PEP8.
ignore = W503,E203


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,4 +17,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.28.1"
__version__ = "0.32.1"


@@ -15,8 +15,11 @@
import logging
from six import itervalues
import pymacaroons
from twisted.internet import defer
from netaddr import IPAddress
import synapse.types
from synapse import event_auth
@@ -57,7 +60,7 @@ class Auth(object):
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
register_cache("cache", "token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
@@ -66,7 +69,7 @@ class Auth(object):
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
(e.type, e.state_key): e for e in itervalues(auth_events)
}
self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)
@@ -242,6 +245,11 @@ class Auth(object):
if app_service is None:
defer.returnValue((None, None))
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
if "user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
@@ -486,7 +494,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
logger.warn("Unrecognised access token - not in store: %s" % (token,))
logger.warn("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN
@@ -509,7 +517,7 @@ class Auth(object):
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token: %s" % (token,))
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
@@ -653,7 +661,7 @@ class Auth(object):
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.Aliases, "", auth_events
EventTypes.Aliases, "", power_level_event,
)
user_level = event_auth.get_user_power_level(user_id, auth_events)


@@ -76,6 +76,8 @@ class EventTypes(object):
Topic = "m.room.topic"
Name = "m.room.name"
ServerACL = "m.room.server_acl"
class RejectedReason(object):
AUTH_ERROR = "auth_error"


@@ -17,8 +17,10 @@
import logging
import simplejson as json
from canonicaljson import json
from six import iteritems
from six.moves import http_client
logger = logging.getLogger(__name__)
@@ -51,6 +53,8 @@ class Codes(object):
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
class CodeMessageException(RuntimeError):
@@ -138,6 +142,32 @@ class SynapseError(CodeMessageException):
return res
class ConsentNotGivenError(SynapseError):
"""The error returned to the client when the user has not consented to the
privacy policy.
"""
def __init__(self, msg, consent_uri):
"""Constructs a ConsentNotGivenError
Args:
msg (str): The human-readable error message
consent_uri (str): The URI where the user can give their consent
"""
super(ConsentNotGivenError, self).__init__(
code=http_client.FORBIDDEN,
msg=msg,
errcode=Codes.CONSENT_NOT_GIVEN
)
self._consent_uri = consent_uri
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
consent_uri=self._consent_uri
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
@@ -292,7 +322,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
Args:
msg (str): The error message.
code (int): The error code.
code (str): The error code.
kwargs : Additional keys to add to the response.
Returns:
A dict representing the error response JSON.


@@ -17,7 +17,8 @@ from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
from twisted.internet import defer
import simplejson as json
from canonicaljson import json
import jsonschema
from jsonschema import FormatChecker
@@ -411,7 +412,7 @@ class Filter(object):
return room_ids
def filter(self, events):
return filter(self.check, events)
return list(filter(self.check, events))
def limit(self):
return self.filter_json.get("limit", 10)


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,6 +15,12 @@
# limitations under the License.
"""Contains the URL paths to prefix various aspects of the server with. """
from hashlib import sha256
import hmac
from six.moves.urllib.parse import urlencode
from synapse.config import ConfigError
CLIENT_PREFIX = "/_matrix/client/api/v1"
CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha"
@@ -25,3 +32,46 @@ SERVER_KEY_PREFIX = "/_matrix/key/v1"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
class ConsentURIBuilder(object):
def __init__(self, hs_config):
"""
Args:
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
raise ConfigError(
"form_secret not set in config",
)
if hs_config.public_baseurl is None:
raise ConfigError(
"public_baseurl not set in config",
)
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
def build_user_consent_uri(self, user_id):
"""Build a URI which we can give to the user to do their privacy
policy consent
Args:
user_id (str): mxid or username of user
Returns
(str) the URI where the user can do consent
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id,
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
self._public_baseurl,
urlencode({
"u": user_id,
"h": mac
}),
)
return consent_uri
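A usage sketch for the builder above (the config-loading call is an assumption
based on synapse's `Config.load_config` helper, and the resulting URI shape is
illustrative):
```python
from synapse.api.urls import ConsentURIBuilder
from synapse.config.homeserver import HomeServerConfig

# assumed loading call; the config must have form_secret and public_baseurl set
config = HomeServerConfig.load_config("synapse", ["-c", "homeserver.yaml"])
print(ConsentURIBuilder(config).build_user_consent_uri("@user:server.com"))
```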


@@ -124,6 +124,19 @@ def quit_with_error(error_string):
sys.exit(1)
def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
for host in bind_addresses:
reactor.callInThread(start_http_server, int(port),
addr=host, registry=RegistryProxy)
logger.info("Metrics now reporting on %s:%d", host, port)
def listen_tcp(bind_addresses, port, factory, backlog=50):
"""
Create a TCP socket for a port and several addresses


@@ -23,6 +23,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -62,7 +63,7 @@ class AppserviceServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -74,6 +75,7 @@ class AppserviceServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -93,6 +95,13 @@ class AppserviceServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -77,7 +78,7 @@ class ClientReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
@@ -98,6 +99,7 @@ class ClientReaderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -117,7 +119,13 @@ class ClientReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -90,7 +91,7 @@ class EventCreatorServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
@@ -114,6 +115,7 @@ class EventCreatorServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -133,6 +135,13 @@ class EventCreatorServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -71,7 +72,7 @@ class FederationReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -87,6 +88,7 @@ class FederationReaderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -106,6 +108,13 @@ class FederationReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
@@ -89,7 +90,7 @@ class FederationSenderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -101,6 +102,7 @@ class FederationSenderServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -120,6 +122,13 @@ class FederationSenderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -29,6 +29,7 @@ from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -131,7 +132,7 @@ class FrontendProxyServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
@@ -152,6 +153,7 @@ class FrontendProxyServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -171,6 +173,13 @@ class FrontendProxyServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -34,7 +34,7 @@ from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
@@ -140,6 +140,7 @@ class SynapseHomeServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
),
self.tls_server_context_factory,
)
@@ -153,6 +154,7 @@ class SynapseHomeServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
logger.info("Synapse now listening on port %d", port)
@@ -182,6 +184,15 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/versions": client_resource,
})
if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource
consent_resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({
"/_matrix/consent": consent_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -219,7 +230,7 @@ class SynapseHomeServer(HomeServer):
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -252,6 +263,13 @@ class SynapseHomeServer(HomeServer):
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -300,11 +318,6 @@ def setup(config_options):
# check any extra requirements we have now we have a config
check_requirements(config)
version_string = "Synapse/" + get_version_string(synapse)
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
tls_server_context_factory = context_factory.ServerContextFactory(config)
@@ -317,7 +330,7 @@ def setup(config_options):
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string=version_string,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
@@ -351,8 +364,6 @@ def setup(config_options):
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
reactor.callWhenRunning(start)
return hs
@@ -423,6 +434,10 @@ def run(hs):
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in daily_user_type_results.iteritems():
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
@@ -473,6 +488,14 @@ def run(hs):
" changes across releases."
)
def generate_user_daily_visit_stats():
hs.get_datastore().generate_user_daily_visits()
# Rather than update on per session basis, batch up the requests.
# If you increase the loop period, the accuracy of user_daily_visits
# table will decrease
clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)


@@ -27,6 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -73,7 +74,7 @@ class MediaRepositoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
resources.update({
@@ -94,6 +95,7 @@ class MediaRepositoryServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -113,6 +115,13 @@ class MediaRepositoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -23,6 +23,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -92,7 +93,7 @@ class PusherServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -104,6 +105,7 @@ class PusherServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -123,6 +125,13 @@ class PusherServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -257,7 +258,7 @@ class SynchrotronServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
@@ -281,6 +282,7 @@ class SynchrotronServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -300,6 +302,13 @@ class SynchrotronServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -171,6 +171,10 @@ def main():
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in cache_factors.iteritems():
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
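The `synctl_cache_factors` block read here would look something like the
following in `homeserver.yaml` (a sketch; the cache names are illustrative
examples, and each entry is exported as `SYNAPSE_CACHE_FACTOR_<NAME>` with the
name upper-cased):
```yaml
synctl_cache_factors:
  get_users_who_share_room_with_user: 2.0
  stateGroupCache: 0.5
```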


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -105,7 +106,7 @@ class UserDirectoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
@@ -126,6 +127,7 @@ class UserDirectoryServer(HomeServer):
site_tag,
listener_config,
root_resource,
self.version_string,
)
)
@@ -145,6 +147,13 @@ class UserDirectoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -85,7 +85,8 @@ class ApplicationService(object):
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
sender=None, id=None, protocols=None, rate_limited=True,
ip_range_whitelist=None):
self.token = token
self.url = url
self.hs_token = hs_token
@@ -93,6 +94,7 @@ class ApplicationService(object):
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
self.ip_range_whitelist = ip_range_whitelist
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
@@ -292,4 +294,8 @@ class ApplicationService(object):
return self.rate_limited
def __str__(self):
return "ApplicationService: %s" % (self.__dict__,)
# copy dictionary and redact token fields so they don't get logged
dict_copy = self.__dict__.copy()
dict_copy["token"] = "<redacted>"
dict_copy["hs_token"] = "<redacted>"
return "ApplicationService: %s" % (dict_copy,)


@@ -24,8 +24,27 @@ from synapse.types import ThirdPartyInstanceID
import logging
import urllib
from prometheus_client import Counter
logger = logging.getLogger(__name__)
sent_transactions_counter = Counter(
"synapse_appservice_api_sent_transactions",
"Number of /transactions/ requests sent",
["service"]
)
failed_transactions_counter = Counter(
"synapse_appservice_api_failed_transactions",
"Number of /transactions/ requests that failed to send",
["service"]
)
sent_events_counter = Counter(
"synapse_appservice_api_sent_events",
"Number of events sent to the AS",
["service"]
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -219,12 +238,15 @@ class ApplicationServiceApi(SimpleHttpClient):
args={
"access_token": service.hs_token
})
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
def _serialize(self, events):


@@ -12,3 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import ConfigError
# export ConfigError if somebody does import *
# this is largely a fudge to stop PEP8 moaning about the import
__all__ = ["ConfigError"]


@@ -17,6 +17,8 @@ from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
from netaddr import IPSet
import yaml
import logging
@@ -154,6 +156,13 @@ def _load_appservice(hostname, as_info, config_filename):
" will not receive events or queries.",
config_filename,
)
ip_range_whitelist = None
if as_info.get('ip_range_whitelist'):
ip_range_whitelist = IPSet(
as_info.get('ip_range_whitelist')
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
@@ -163,5 +172,6 @@ def _load_appservice(hostname, as_info, config_filename):
sender=user_id,
id=as_info["id"],
protocols=protocols,
rate_limited=rate_limited
rate_limited=rate_limited,
ip_range_whitelist=ip_range_whitelist,
)
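Correspondingly, an appservice registration file can now carry the whitelist;
a sketch (all identifiers and tokens are placeholders, and the surrounding
fields follow the standard appservice registration format):
```yaml
id: exampleservice
url: http://localhost:5000
as_token: <as_token>
hs_token: <hs_token>
sender_localpart: examplebot
namespaces:
  users:
    - exclusive: true
      regex: "@example_.*"
ip_range_whitelist:
  - '192.168.0.0/16'
  - '::1/128'
```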


@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
DEFAULT_CONFIG = """\
# User Consent configuration
#
# for detailed instructions, see
# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#
# 'template_dir' gives the location of the templates for the HTML forms.
# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
# and each language directory should contain the policy document (named as
# '<version>.html') and a success page (success.html).
#
# 'version' specifies the 'current' version of the policy document. It defines
# the version to be served by the consent resource if there is no 'v'
# parameter.
#
# 'server_notice_content', if enabled, will send a user a "Server Notice"
# asking them to consent to the privacy policy. The 'server_notices' section
# must also be configured for this to work. Notices will *not* be sent to
# guest users unless 'send_server_notice_to_guests' is set to true.
#
# 'block_events_error', if set, will block any attempts to send events
# until the user consents to the privacy policy. The value of the setting is
# used as the text of the error.
#
# user_consent:
# template_dir: res/templates/privacy
# version: 1.0
# server_notice_content:
# msgtype: m.text
# body: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# send_server_notice_to_guests: True
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
#
"""
class ConsentConfig(Config):
def __init__(self):
super(ConsentConfig, self).__init__()
self.user_consent_version = None
self.user_consent_template_dir = None
self.user_consent_server_notice_content = None
self.user_consent_server_notice_to_guests = False
self.block_events_without_consent_error = None
def read_config(self, config):
consent_config = config.get("user_consent")
if consent_config is None:
return
self.user_consent_version = str(consent_config["version"])
self.user_consent_template_dir = consent_config["template_dir"]
self.user_consent_server_notice_content = consent_config.get(
"server_notice_content",
)
self.block_events_without_consent_error = consent_config.get(
"block_events_error",
)
self.user_consent_server_notice_to_guests = bool(consent_config.get(
"send_server_notice_to_guests", False,
))
def default_config(self, **kwargs):
return DEFAULT_CONFIG


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,7 +13,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .tls import TlsConfig
from .server import ServerConfig
from .logger import LoggingConfig
@@ -37,6 +37,8 @@ from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
from .consent_config import ConsentConfig
from .server_notices_config import ServerNoticesConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -45,12 +47,15 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,
ConsentConfig,
ServerNoticesConfig,
):
pass
if __name__ == '__main__':
import sys
sys.stdout.write(
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0]
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2], True)[0]
)


@@ -59,14 +59,20 @@ class KeyConfig(Config):
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
def default_config(self, config_dir_path, server_name, is_generating_file=False,
**kwargs):
base_key_name = os.path.join(config_dir_path, server_name)
if is_generating_file:
macaroon_secret_key = random_string_with_symbols(50)
form_secret = '"%s"' % random_string_with_symbols(50)
else:
macaroon_secret_key = None
form_secret = 'null'
return """\
macaroon_secret_key: "%(macaroon_secret_key)s"
@@ -74,6 +80,10 @@ class KeyConfig(Config):
# Used to enable access token expiration.
expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
form_secret: %(form_secret)s
## Signing Keys ##
# Path to the signing key to sign messages with


@@ -12,17 +12,20 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.logcontext import LoggingContextFilter
from twisted.logger import globalLogBeginner, STDLibLogObserver
import logging
import logging.config
import yaml
from string import Template
import os
import signal
from string import Template
import sys
from twisted.logger import STDLibLogObserver, globalLogBeginner
import yaml
import synapse
from synapse.util.logcontext import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
DEFAULT_LOG_CONFIG = Template("""
version: 1
@@ -202,6 +205,15 @@ def setup_logging(config, use_worker_options=False):
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn(
"Server %s version %s",
sys.argv[0], get_version_string(synapse),
)
logging.info("Server hostname: %s", config.server_name)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks up to 64K objects;
# see: https://twistedmatrix.com/trac/ticket/8164


@@ -250,6 +250,9 @@ class ContentRepositoryConfig(Config):
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.


@@ -14,13 +14,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.http.endpoint import parse_and_validate_server_name
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
class ServerConfig(Config):
def read_config(self, config):
self.server_name = config["server_name"]
try:
parse_and_validate_server_name(self.server_name)
except ValueError as e:
raise ConfigError(str(e))
self.pid_file = self.abspath(config.get("pid_file"))
self.web_client = config["web_client"]
self.web_client_location = config.get("web_client_location", None)
@@ -138,6 +149,12 @@ class ServerConfig(Config):
metrics_port = config.get("metrics_port")
if metrics_port:
logger.warn(
("The metrics_port configuration option is deprecated in Synapse 0.31 "
"in favour of a listener. Please see "
"http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
" on how to configure the new listener."))
self.listeners.append({
"port": metrics_port,
"bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],
@@ -152,8 +169,8 @@ class ServerConfig(Config):
})
def default_config(self, server_name, **kwargs):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
unsecure_port = bind_port - 400
else:
bind_port = 8448
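For illustration, the helper's expected behaviour is roughly the following (a
sketch with hypothetical inputs, not the implementation itself):
```python
# from synapse.http.endpoint import parse_and_validate_server_name
#
# parse_and_validate_server_name("matrix.org")       -> ("matrix.org", None)
# parse_and_validate_server_name("matrix.org:8448")  -> ("matrix.org", 8448)
# parse_and_validate_server_name("[::1]:8448")       -> ("[::1]", 8448)
# parse_and_validate_server_name("foo bar")          #  raises ValueError
```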


@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.types import UserID
DEFAULT_CONFIG = """\
# Server Notices room configuration
#
# Uncomment this section to enable a room which can be used to send notices
# from the server to users. It is a special room which cannot be left; notices
# come from a special "notices" user id.
#
# If you uncomment this section, you *must* define the system_mxid_localpart
# setting, which defines the id of the user which will be used to send the
# notices.
#
# It's also possible to override the room name, the display name of the
# "notices" user, and the avatar for the user.
#
# server_notices:
# system_mxid_localpart: notices
# system_mxid_display_name: "Server Notices"
# system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
# room_name: "Server Notices"
"""
class ServerNoticesConfig(Config):
"""Configuration for the server notices room.
Attributes:
server_notices_mxid (str|None):
The MXID to use for server notices.
None if server notices are not enabled.
server_notices_mxid_display_name (str|None):
The display name to use for the server notices user.
None if server notices are not enabled.
server_notices_mxid_avatar_url (str|None):
The avatar URL to use for the server notices user.
None if server notices are not enabled.
server_notices_room_name (str|None):
The name to use for the server notices room.
None if server notices are not enabled.
"""
def __init__(self):
super(ServerNoticesConfig, self).__init__()
self.server_notices_mxid = None
self.server_notices_mxid_display_name = None
self.server_notices_mxid_avatar_url = None
self.server_notices_room_name = None
def read_config(self, config):
c = config.get("server_notices")
if c is None:
return
mxid_localpart = c['system_mxid_localpart']
self.server_notices_mxid = UserID(
mxid_localpart, self.server_name,
).to_string()
self.server_notices_mxid_display_name = c.get(
'system_mxid_display_name', None,
)
self.server_notices_mxid_avatar_url = c.get(
'system_mxid_avatar_url', None,
)
# todo: i18n
self.server_notices_room_name = c.get('room_name', "Server Notices")
def default_config(self, **kwargs):
return DEFAULT_CONFIG


@@ -18,7 +18,7 @@ from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
import simplejson as json
from canonicaljson import json
import logging


@@ -27,10 +27,12 @@ from synapse.util.metrics import Measure
from twisted.internet import defer
from signedjson.sign import (
verify_signed_json, signature_ids, sign_json, encode_canonical_json
verify_signed_json, signature_ids, sign_json, encode_canonical_json,
SignatureVerifyException,
)
from signedjson.key import (
is_signing_algorithm_supported, decode_verify_key_bytes
is_signing_algorithm_supported, decode_verify_key_bytes,
encode_verify_key_base64,
)
from unpaddedbase64 import decode_base64, encode_base64
@@ -56,7 +58,7 @@ Attributes:
key_ids(set(str)): The set of key_ids to that could be used to verify the
JSON object
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
deferred(Deferred[str, str, nacl.signing.VerifyKey]):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched. The deferred's callbacks are run with no
logcontext.
@@ -736,6 +738,17 @@ class Keyring(object):
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
"""Waits for the key to become available, and then performs a verification
Args:
verify_request (VerifyKeyRequest):
Returns:
Deferred[None]
Raises:
SynapseError if there was a problem performing the verification
"""
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
@@ -768,11 +781,17 @@ def _handle_key_deferred(verify_request):
))
try:
verify_signed_json(json_object, server_name, verify_key)
except Exception:
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
server_name, verify_key.alg, verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
"Invalid signature for server %s with key %s:%s: %s" % (
server_name, verify_key.alg, verify_key.version, str(e),
),
Codes.UNAUTHORIZED,
)
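For reference, a minimal sketch of the signedjson round-trip the keyring builds on (the server name and key version are made-up values):

    from signedjson.key import generate_signing_key, get_verify_key
    from signedjson.sign import (
        sign_json, verify_signed_json, SignatureVerifyException,
    )

    signing_key = generate_signing_key("key1")
    signed = sign_json({"foo": "bar"}, "example.com", signing_key)
    try:
        verify_signed_json(signed, "example.com", get_verify_key(signing_key))
    except SignatureVerifyException:
        raise  # a tampered object or mismatched key lands here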

View File

@@ -34,9 +34,11 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
event: the event being checked.
auth_events (dict: event-key -> event): the existing room state.
Raises:
AuthError if the checks fail
Returns:
True if the auth checks pass.
if the auth checks pass.
"""
if do_size_check:
_check_size_limits(event)
@@ -71,9 +73,10 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id)
return True
return
if event.type == EventTypes.Create:
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
raise AuthError(
@@ -81,7 +84,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
"Creation event's room_id domain does not match sender's"
)
# FIXME
return True
logger.debug("Allowing! %s", event)
return
creation_event = auth_events.get((EventTypes.Create, ""), None)
@@ -118,7 +122,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
403,
"Alias event's state_key does not match sender's domain"
)
return True
logger.debug("Allowing! %s", event)
return
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
@@ -127,14 +132,9 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
)
if event.type == EventTypes.Member:
allowed = _is_membership_change_allowed(
event, auth_events
)
if allowed:
logger.debug("Allowing! %s", event)
else:
logger.debug("Denying! %s", event)
return allowed
_is_membership_change_allowed(event, auth_events)
logger.debug("Allowing! %s", event)
return
_check_event_sender_in_room(event, auth_events)
@@ -153,7 +153,8 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
)
)
else:
return True
logger.debug("Allowing! %s", event)
return
_can_send_event(event, auth_events)
@@ -200,7 +201,7 @@ def _is_membership_change_allowed(event, auth_events):
create = auth_events.get(key)
if create and event.prev_events[0][0] == create.event_id:
if create.content["creator"] == event.state_key:
return True
return
target_user_id = event.state_key
@@ -265,13 +266,13 @@ def _is_membership_change_allowed(event, auth_events):
raise AuthError(
403, "%s is banned from the room" % (target_user_id,)
)
return True
return
if Membership.JOIN != membership:
if (caller_invited
and Membership.LEAVE == membership
and target_user_id == event.user_id):
return True
return
if not caller_in_room: # caller isn't joined
raise AuthError(
@@ -334,8 +335,6 @@ def _is_membership_change_allowed(event, auth_events):
else:
raise AuthError(500, "Unknown membership %s" % membership)
return True
def _check_event_sender_in_room(event, auth_events):
key = (EventTypes.Member, event.user_id, )
@@ -355,35 +354,46 @@ def _check_joined_room(member, user_id, room_id):
))
def get_send_level(etype, state_key, auth_events):
key = (EventTypes.PowerLevels, "", )
send_level_event = auth_events.get(key)
send_level = None
if send_level_event:
send_level = send_level_event.content.get("events", {}).get(
etype
)
if send_level is None:
if state_key is not None:
send_level = send_level_event.content.get(
"state_default", 50
)
else:
send_level = send_level_event.content.get(
"events_default", 0
)
def get_send_level(etype, state_key, power_levels_event):
"""Get the power level required to send an event of a given type
if send_level:
send_level = int(send_level)
The federation spec [1] refers to this as "Required Power Level".
https://matrix.org/docs/spec/server_server/unstable.html#definitions
Args:
etype (str): type of event
state_key (str|None): state_key of state event, or None if it is not
a state event.
power_levels_event (synapse.events.EventBase|None): power levels event
in force at this point in the room
Returns:
int: power level required to send this event.
"""
if power_levels_event:
power_levels_content = power_levels_event.content
else:
send_level = 0
power_levels_content = {}
return send_level
# see if we have a custom level for this event type
send_level = power_levels_content.get("events", {}).get(etype)
# otherwise, fall back to the state_default/events_default.
if send_level is None:
if state_key is not None:
send_level = power_levels_content.get("state_default", 50)
else:
send_level = power_levels_content.get("events_default", 0)
return int(send_level)
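A worked example of the defaults described in the docstring (the stub event class is a stand-in for synapse.events.EventBase):

    class FakePowerLevelsEvent(object):
        def __init__(self, content):
            self.content = content

    pl = FakePowerLevelsEvent({
        "events": {"m.room.name": 75},  # custom per-type level
        "state_default": 50,
        "events_default": 0,
    })

    assert get_send_level("m.room.name", "", pl) == 75        # custom override
    assert get_send_level("m.room.topic", "", pl) == 50       # state event default
    assert get_send_level("m.room.message", None, pl) == 0    # non-state default
    assert get_send_level("m.room.message", None, None) == 0  # no power levels event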
def _can_send_event(event, auth_events):
power_levels_event = _get_power_level_event(auth_events)
send_level = get_send_level(
event.type, event.get("state_key", None), auth_events
event.type, event.get("state_key"), power_levels_event,
)
user_level = get_user_power_level(event.user_id, auth_events)
@@ -471,14 +481,14 @@ def _check_power_levels(event, auth_events):
]
old_list = current_state.content.get("users", {})
for user in set(old_list.keys() + user_list.keys()):
for user in set(list(old_list) + list(user_list)):
levels_to_check.append(
(user, "users")
)
old_list = current_state.content.get("events", {})
new_list = event.content.get("events", {})
for ev_id in set(old_list.keys() + new_list.keys()):
for ev_id in set(list(old_list) + list(new_list)):
levels_to_check.append(
(ev_id, "events")
)
@@ -515,7 +525,11 @@ def _check_power_levels(event, auth_events):
"to your own"
)
if old_level > user_level or new_level > user_level:
# Check if the old and new levels are greater than the user level
# (if defined)
old_level_too_big = old_level is not None and old_level > user_level
new_level_too_big = new_level is not None and new_level > user_level
if old_level_too_big or new_level_too_big:
raise AuthError(
403,
"You don't have permission to add ops level greater "
@@ -524,13 +538,22 @@ def _check_power_levels(event, auth_events):
def _get_power_level_event(auth_events):
key = (EventTypes.PowerLevels, "", )
return auth_events.get(key)
return auth_events.get((EventTypes.PowerLevels, ""))
def get_user_power_level(user_id, auth_events):
power_level_event = _get_power_level_event(auth_events)
"""Get a user's power level
Args:
user_id (str): user's id to look up in power_levels
auth_events (dict[(str, str), synapse.events.EventBase]):
state in force at this point in the room (or rather, a subset of
it including at least the create event and power levels event.)
Returns:
int: the user's power level in this room.
"""
power_level_event = _get_power_level_event(auth_events)
if power_level_event:
level = power_level_event.content.get("users", {}).get(user_id)
if not level:
@@ -541,6 +564,11 @@ def get_user_power_level(user_id, auth_events):
else:
return int(level)
else:
# if there is no power levels event, the creator gets 100 and everyone
# else gets 0.
# some things which call this don't pass the create event: hack around
# that.
key = (EventTypes.Create, "", )
create_event = auth_events.get(key)
if (create_event is not None and
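Although the hunk is truncated here, the comment above fully specifies the fallback; a stub illustration (FakeEvent stands in for synapse's EventBase, and "m.room.create" is the value of EventTypes.Create):

    class FakeEvent(object):
        def __init__(self, content):
            self.content = content

    auth_events = {
        ("m.room.create", ""): FakeEvent({"creator": "@alice:example.com"}),
    }

    # no m.room.power_levels event: the creator gets 100, everyone else 0
    assert get_user_power_level("@alice:example.com", auth_events) == 100
    assert get_user_power_level("@bob:example.com", auth_events) == 0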

View File

@@ -146,7 +146,7 @@ class EventBase(object):
return field in self._event_dict
def items(self):
return self._event_dict.items()
return list(self._event_dict.items())
class FrozenEvent(EventBase):

View File

@@ -20,6 +20,8 @@ from frozendict import frozendict
import re
from six import string_types
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.
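The pattern the comment describes can be reconstructed and exercised as follows (the variable name is illustrative):

    import re

    split_field_regex = re.compile(r"(?<!\\)\.")

    assert split_field_regex.split("content.body") == ["content", "body"]
    assert split_field_regex.split(r"m\.relates_to") == [r"m\.relates_to"]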
@@ -277,7 +279,7 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if only_event_fields:
if (not isinstance(only_event_fields, list) or
not all(isinstance(f, basestring) for f in only_event_fields)):
not all(isinstance(f, string_types) for f in only_event_fields)):
raise TypeError("only_event_fields must be a list of strings")
d = only_fields(d, only_event_fields)

View File

@@ -17,6 +17,8 @@ from synapse.types import EventID, RoomID, UserID
from synapse.api.errors import SynapseError
from synapse.api.constants import EventTypes, Membership
from six import string_types
class EventValidator(object):
@@ -49,7 +51,7 @@ class EventValidator(object):
strings.append("state_key")
for s in strings:
if not isinstance(getattr(event, s), basestring):
if not isinstance(getattr(event, s), string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))
if event.type == EventTypes.Member:
@@ -88,5 +90,5 @@ class EventValidator(object):
for s in keys:
if s not in d:
raise SynapseError(400, "'%s' not in content" % (s,))
if not isinstance(d[s], basestring):
if not isinstance(d[s], string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))

View File

@@ -32,20 +32,17 @@ from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
import synapse.metrics
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
from prometheus_client import Counter
logger = logging.getLogger(__name__)
# synapse.federation.federation_client is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])
PDU_RETRY_TIME_MS = 1 * 60 * 1000
@@ -108,7 +105,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc(query_type)
sent_queries_counter.labels(query_type).inc()
return self.transport_layer.make_query(
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
@@ -127,7 +124,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_device_keys")
sent_queries_counter.labels("client_device_keys").inc()
return self.transport_layer.query_client_keys(
destination, content, timeout
)
@@ -137,7 +134,7 @@ class FederationClient(FederationBase):
"""Query the device keys for a list of user ids hosted on a remote
server.
"""
sent_queries_counter.inc("user_devices")
sent_queries_counter.labels("user_devices").inc()
return self.transport_layer.query_user_devices(
destination, user_id, timeout
)
@@ -154,7 +151,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_one_time_keys")
sent_queries_counter.labels("client_one_time_keys").inc()
return self.transport_layer.claim_client_keys(
destination, content, timeout
)
@@ -394,7 +391,7 @@ class FederationClient(FederationBase):
"""
if return_local:
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
signed_events = list(seen_events.values())
else:
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
@@ -592,7 +589,7 @@ class FederationClient(FederationBase):
}
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, pdus.values(),
destination, list(pdus.values()),
outlier=True,
)
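The metrics changes in this file follow the standard prometheus_client labelled-counter pattern; a minimal sketch (the documentation string is illustrative, the diff leaves it empty):

    from prometheus_client import Counter

    sent_queries_counter = Counter(
        "synapse_federation_client_sent_queries",
        "Number of federation queries sent, by query type",
        ["type"],
    )

    # one child counter per label value, incremented independently
    sent_queries_counter.labels("client_device_keys").inc()
    sent_queries_counter.labels("user_devices").inc()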

View File

@@ -14,10 +14,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
import simplejson as json
from canonicaljson import json
import six
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
@@ -27,12 +31,14 @@ from synapse.federation.federation_base import (
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.http.endpoint import parse_server_name
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
from prometheus_client import Counter
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
@@ -41,17 +47,17 @@ TRANSACTION_CONCURRENCY_LIMIT = 10
logger = logging.getLogger(__name__)
# synapse.federation.federation_server is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.server")
received_pdus_counter = Counter("synapse_federation_server_received_pdus", "")
received_pdus_counter = metrics.register_counter("received_pdus")
received_edus_counter = Counter("synapse_federation_server_received_edus", "")
received_edus_counter = metrics.register_counter("received_edus")
received_queries_counter = metrics.register_counter("received_queries", labels=["type"])
received_queries_counter = Counter(
"synapse_federation_server_received_queries", "", ["type"]
)
class FederationServer(FederationBase):
def __init__(self, hs):
super(FederationServer, self).__init__(hs)
@@ -73,6 +79,9 @@ class FederationServer(FederationBase):
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
@@ -131,7 +140,9 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
received_pdus_counter.inc_by(len(transaction.pdus))
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(transaction.origin)
pdus_by_room = {}
@@ -153,9 +164,21 @@ class FederationServer(FederationBase):
# we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn(
"Ignoring PDUs for room %s from banned server", room_id,
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
@@ -210,6 +233,9 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -233,6 +259,9 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -276,7 +305,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_pdu_request(self, origin, event_id):
pdu = yield self._get_persisted_pdu(origin, event_id)
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu:
defer.returnValue(
@@ -292,12 +321,14 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
def on_make_join_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_join_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@@ -305,6 +336,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@@ -313,6 +346,10 @@ class FederationServer(FederationBase):
def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -324,7 +361,9 @@ class FederationServer(FederationBase):
}))
@defer.inlineCallbacks
def on_make_leave_request(self, room_id, user_id):
def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@@ -333,6 +372,10 @@ class FederationServer(FederationBase):
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@@ -340,6 +383,9 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {
@@ -368,6 +414,9 @@ class FederationServer(FederationBase):
Deferred: Results in `dict` with the same format as `content`
"""
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
auth_chain = [
event_from_pdu_json(e)
for e in content["auth_chain"]
@@ -441,6 +490,9 @@ class FederationServer(FederationBase):
def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d",
@@ -469,17 +521,6 @@ class FederationServer(FederationBase):
ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
Returns:
Deferred: Results in a `Pdu`.
"""
return self.handler.get_persisted_pdu(
origin, event_id, do_auth=do_auth
)
def _transaction_from_pdus(self, pdu_list):
"""Returns a new Transaction containing the given PDUs suitable for
transmission.
@@ -559,7 +600,9 @@ class FederationServer(FederationBase):
affected=pdu.event_id,
)
yield self.handler.on_receive_pdu(origin, pdu, get_missing=True)
yield self.handler.on_receive_pdu(
origin, pdu, get_missing=True, sent_to_us_directly=True,
)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
@@ -587,6 +630,101 @@ class FederationServer(FederationBase):
)
defer.returnValue(ret)
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
server_name (str): name of server, *without any port part*
room_id (str): ID of the room to check
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
raise AuthError(code=403, msg="Server is banned from room")
def server_matches_acl_event(server_name, acl_event):
"""Check if the given server is allowed by the ACL event
Args:
server_name (str): name of server, without any port part
acl_event (EventBase): m.room.server_acl event
Returns:
bool: True if this server is allowed by the ACLs
"""
logger.debug("Checking %s against acl %s", server_name, acl_event.content)
# first of all, check if literal IPs are blocked, and if so, whether the
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignorning non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
if server_name[0] == '[':
return False
# check for ipv4 literals. We can just lift the routine from twisted.
if isIPAddress(server_name):
return False
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignorning non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched deny rule %s", server_name, e)
return False
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignorning non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched allow rule %s", server_name, e)
return True
# everything else should be rejected.
# logger.info("%s fell through", server_name)
return False
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn("Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry))
return False
regex = _glob_to_regex(acl_entry)
return regex.match(server_name)
def _glob_to_regex(glob):
res = ''
for c in glob:
if c == '*':
res = res + '.*'
elif c == '?':
res = res + '.'
else:
res = res + re.escape(c)
return re.compile(res + "\\Z", re.IGNORECASE)
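A sketch of how server_matches_acl_event evaluates an m.room.server_acl content dict (FakeAclEvent stands in for a real event object):

    class FakeAclEvent(object):
        def __init__(self, content):
            self.content = content

    acl = FakeAclEvent({
        "allow_ip_literals": False,
        "allow": ["*"],
        "deny": ["*.evil.example"],
    })

    assert server_matches_acl_event("good.example", acl)          # allowed by "*"
    assert not server_matches_acl_event("sub.evil.example", acl)  # deny glob wins
    assert not server_matches_acl_event("1.2.3.4", acl)           # IPv4 literal blocked
    assert not server_matches_acl_event("[2001:db8::1]", acl)     # IPv6 literal blocked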
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or

View File

@@ -33,9 +33,9 @@ from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from synapse.metrics import LaterGauge
from blist import sorteddict
from sortedcontainers import SortedDict
from collections import namedtuple
import logging
@@ -45,9 +45,6 @@ from six import itervalues, iteritems
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
@@ -58,29 +55,27 @@ class FederationRemoteSendQueue(object):
self.is_mine_id = hs.is_mine_id
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.presence_changed = SortedDict() # Stream position -> user_id
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.keyed_edu_changed = SortedDict() # stream position -> (destination, key)
self.edus = sorteddict() # stream position -> Edu
self.edus = SortedDict() # stream position -> Edu
self.failures = sorteddict() # stream position -> (destination, Failure)
self.failures = SortedDict() # stream position -> (destination, Failure)
self.device_messages = sorteddict() # stream position -> destination
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
self.pos_time = SortedDict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner
# lambda binds to the queue rather than to the name of the queue which
# changes. ARGH.
def register(name, queue):
metrics.register_callback(
queue_name + "_size",
lambda: len(queue),
)
LaterGauge("synapse_federation_send_queue_%s_size" % (queue_name,),
"", [], lambda: len(queue))
for queue_name in [
"presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
@@ -103,7 +98,7 @@ class FederationRemoteSendQueue(object):
now = self.clock.time_msec()
keys = self.pos_time.keys()
time = keys.bisect_left(now - FIVE_MINUTES_AGO)
time = self.pos_time.bisect_left(now - FIVE_MINUTES_AGO)
if not keys[:time]:
return
@@ -118,7 +113,7 @@ class FederationRemoteSendQueue(object):
with Measure(self.clock, "send_queue._clear"):
# Delete things out of presence maps
keys = self.presence_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.presence_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.presence_changed[key]
@@ -136,7 +131,7 @@ class FederationRemoteSendQueue(object):
# Delete things out of keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.keyed_edu_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.keyed_edu_changed[key]
@@ -150,19 +145,19 @@ class FederationRemoteSendQueue(object):
# Delete things out of edu map
keys = self.edus.keys()
i = keys.bisect_left(position_to_delete)
i = self.edus.bisect_left(position_to_delete)
for key in keys[:i]:
del self.edus[key]
# Delete things out of failure map
keys = self.failures.keys()
i = keys.bisect_left(position_to_delete)
i = self.failures.bisect_left(position_to_delete)
for key in keys[:i]:
del self.failures[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = keys.bisect_left(position_to_delete)
i = self.device_messages.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
@@ -202,7 +197,7 @@ class FederationRemoteSendQueue(object):
# We only want to send presence for our own users, so lets always just
# filter here just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
@@ -255,13 +250,12 @@ class FederationRemoteSendQueue(object):
self._clear_queue_before_pos(federation_ack)
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.presence_changed.bisect_right(from_token)
j = self.presence_changed.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
for pos, user_id_list in self.presence_changed.items()[i:j]
for user_id in user_id_list
]
for (key, user_id) in dest_user_ids:
@@ -270,13 +264,12 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.keyed_edu_changed.bisect_right(from_token)
j = self.keyed_edu_changed.bisect_right(to_token) + 1
# We purposefully clobber based on the key here, python dict comprehensions
# always use the last value, so this will correctly point to the last
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
keyed_edus = {v: k for k, v in self.keyed_edu_changed.items()[i:j]}
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
@@ -285,19 +278,17 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
i = self.edus.bisect_right(from_token)
j = self.edus.bisect_right(to_token) + 1
edus = self.edus.items()[i:j]
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
i = self.failures.bisect_right(from_token)
j = self.failures.bisect_right(to_token) + 1
failures = self.failures.items()[i:j]
for (pos, (destination, failure)) in failures:
rows.append((pos, FailureRow(
@@ -306,10 +297,9 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
device_messages = {v: k for k, v in self.device_messages.items()[i:j]}
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
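The bisect/slice idiom used throughout this file relies on sortedcontainers' indexable views; a minimal sketch with made-up stream positions (note that the queue code above additionally adds 1 to its upper bound):

    from sortedcontainers import SortedDict

    edus = SortedDict({1: "edu-a", 3: "edu-b", 7: "edu-c", 9: "edu-d"})

    from_token, to_token = 1, 7
    i = edus.bisect_right(from_token)  # first position strictly after from_token
    j = edus.bisect_right(to_token)    # first position strictly after to_token

    assert edus.items()[i:j] == [(3, "edu-b"), (7, "edu-c")]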

View File

@@ -21,28 +21,32 @@ from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException, FederationDeniedError
from synapse.util import logcontext, PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
from synapse.metrics import LaterGauge
from synapse.metrics import (
sent_edus_counter,
sent_transactions_counter,
events_processed_counter,
)
from prometheus_client import Counter
from six import itervalues
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
client_metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_pdus_destination_dist = client_metrics.register_distribution(
"sent_pdu_destinations"
sent_pdus_destination_dist_count = Counter(
"synapse_federation_client_sent_pdu_destinations:count", ""
)
sent_pdus_destination_dist_total = Counter(
"synapse_federation_client_sent_pdu_destinations:total", ""
)
sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
events_processed_counter = client_metrics.register_counter("events_processed")
class TransactionQueue(object):
@@ -69,8 +73,10 @@ class TransactionQueue(object):
# done
self.pending_transactions = {}
metrics.register_callback(
"pending_destinations",
LaterGauge(
"synapse_federation_transaction_queue_pending_destinations",
"",
[],
lambda: len(self.pending_transactions),
)
@@ -94,12 +100,16 @@ class TransactionQueue(object):
# Map of destination -> (edu_type, key) -> Edu
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
"pending_pdus",
LaterGauge(
"synapse_federation_transaction_queue_pending_pdus",
"",
[],
lambda: sum(map(len, pdus.values())),
)
metrics.register_callback(
"pending_edus",
LaterGauge(
"synapse_federation_transaction_queue_pending_edus",
"",
[],
lambda: (
sum(map(len, edus.values()))
+ sum(map(len, presence.values()))
@@ -228,7 +238,7 @@ class TransactionQueue(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
for evs in itervalues(events_by_room)
],
consumeErrors=True
))
@@ -241,18 +251,15 @@ class TransactionQueue(object):
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.set(
now - ts, "federation_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "federation_sender",
)
synapse.metrics.event_processing_lag.labels(
"federation_sender").set(now - ts)
synapse.metrics.event_processing_last_ts.labels(
"federation_sender").set(ts)
events_processed_counter.inc_by(len(events))
events_processed_counter.inc(len(events))
synapse.metrics.event_processing_positions.set(
next_token, "federation_sender",
)
synapse.metrics.event_processing_positions.labels(
"federation_sender").set(next_token)
finally:
self._is_processing = False
@@ -275,7 +282,8 @@ class TransactionQueue(object):
if not destinations:
return
sent_pdus_destination_dist.inc_by(len(destinations))
sent_pdus_destination_dist_total.inc(len(destinations))
sent_pdus_destination_dist_count.inc()
for destination in destinations:
self.pending_pdus_by_dest.setdefault(destination, []).append(
@@ -322,7 +330,7 @@ class TransactionQueue(object):
if not states_map:
break
yield self._process_presence_inner(states_map.values())
yield self._process_presence_inner(list(states_map.values()))
except Exception:
logger.exception("Error sending presence states to servers")
finally:
@@ -446,9 +454,6 @@ class TransactionQueue(object):
# hence why we throw the result away.
yield get_retry_limiter(destination, self.clock, self.store)
# XXX: what's this for?
yield run_on_reactor()
pending_pdus = []
while True:
device_message_edus, device_stream_id, dev_list_id = (

View File

@@ -18,6 +18,7 @@ from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError, FederationDeniedError
from synapse.http.endpoint import parse_and_validate_server_name
from synapse.http.server import JsonResource
from synapse.http.servlet import (
parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
@@ -99,26 +100,6 @@ class Authenticator(object):
origin = None
def parse_auth_header(header_str):
try:
params = auth.split(" ")[1].split(",")
param_dict = dict(kv.split("=") for kv in params)
def strip_quotes(value):
if value.startswith("\""):
return value[1:-1]
else:
return value
origin = strip_quotes(param_dict["origin"])
key = strip_quotes(param_dict["key"])
sig = strip_quotes(param_dict["sig"])
return (origin, key, sig)
except Exception:
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED
)
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
if not auth_headers:
@@ -127,8 +108,8 @@ class Authenticator(object):
)
for auth in auth_headers:
if auth.startswith("X-Matrix"):
(origin, key, sig) = parse_auth_header(auth)
if auth.startswith(b"X-Matrix"):
(origin, key, sig) = _parse_auth_header(auth)
json_request["origin"] = origin
json_request["signatures"].setdefault(origin, {})[key] = sig
@@ -165,6 +146,48 @@ class Authenticator(object):
logger.exception("Error resetting retry timings on %s", origin)
def _parse_auth_header(header_bytes):
"""Parse an X-Matrix auth header
Args:
header_bytes (bytes): header value
Returns:
Tuple[str, str, str]: origin, key id, signature.
Raises:
AuthenticationError if the header could not be parsed
"""
try:
header_str = header_bytes.decode('utf-8')
params = header_str.split(" ")[1].split(",")
param_dict = dict(kv.split("=") for kv in params)
def strip_quotes(value):
if value.startswith(b"\""):
return value[1:-1]
else:
return value
origin = strip_quotes(param_dict["origin"])
# ensure that the origin is a valid server name
parse_and_validate_server_name(origin)
key = strip_quotes(param_dict["key"])
sig = strip_quotes(param_dict["sig"])
return origin, key, sig
except Exception as e:
logger.warn(
"Error parsing auth header '%s': %s",
header_bytes.decode('ascii', 'replace'),
e,
)
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED,
)
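A sketch of the header shape _parse_auth_header accepts (the key id and signature below are made-up values):

    header = b'X-Matrix origin=example.com,key="ed25519:key1",sig="dGVzdCBzaWc"'

    origin, key, sig = _parse_auth_header(header)
    assert (origin, key, sig) == ("example.com", "ed25519:key1", "dGVzdCBzaWc")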
class BaseFederationServlet(object):
REQUIRE_AUTH = True
@@ -362,7 +385,9 @@ class FederationMakeJoinServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query, context, user_id):
content = yield self.handler.on_make_join_request(context, user_id)
content = yield self.handler.on_make_join_request(
origin, context, user_id,
)
defer.returnValue((200, content))
@@ -371,7 +396,9 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query, context, user_id):
content = yield self.handler.on_make_leave_request(context, user_id)
content = yield self.handler.on_make_leave_request(
origin, context, user_id,
)
defer.returnValue((200, content))

View File

@@ -74,8 +74,6 @@ class Transaction(JsonEncodedObject):
"previous_ids",
"pdus",
"edus",
"transaction_id",
"destination",
"pdu_failures",
]

View File

@@ -20,6 +20,8 @@ from synapse.api.errors import SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from twisted.internet import defer
from six import string_types
logger = logging.getLogger(__name__)
@@ -431,7 +433,7 @@ class GroupsServerHandler(object):
"long_description"):
if keyname in content:
value = content[keyname]
if not isinstance(value, basestring):
if not isinstance(value, string_types):
raise SynapseError(400, "%r value is not a string" % (keyname,))
profile[keyname] = value

View File

@@ -14,9 +14,7 @@
# limitations under the License.
from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomContextHandler,
)
from .room import RoomContextHandler
from .message import MessageHandler
from .federation import FederationHandler
from .directory import DirectoryHandler
@@ -47,7 +45,6 @@ class Handlers(object):
def __init__(self, hs):
self.registration_handler = RegistrationHandler(hs)
self.message_handler = MessageHandler(hs)
self.room_creation_handler = RoomCreationHandler(hs)
self.federation_handler = FederationHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.admin_handler = AdminHandler(hs)

View File

@@ -114,14 +114,14 @@ class BaseHandler(object):
if guest_access != "can_join":
if context:
current_state = yield self.store.get_events(
context.current_state_ids.values()
list(context.current_state_ids.values())
)
else:
current_state = yield self.state_handler.get_current_state(
event.room_id
)
current_state = current_state.values()
current_state = list(current_state.values())
logger.info("maybe_kick_guest_users %r", current_state)
yield self.kick_guest_users(current_state)

View File

@@ -15,20 +15,21 @@
from twisted.internet import defer
from six import itervalues
import synapse
from synapse.api.constants import EventTypes
from synapse.util.metrics import Measure
from synapse.util.logcontext import (
make_deferred_yieldable, run_in_background,
)
from prometheus_client import Counter
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
events_processed_counter = metrics.register_counter("events_processed")
events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")
def log_failure(failure):
@@ -120,7 +121,7 @@ class ApplicationServicesHandler(object):
yield make_deferred_yieldable(defer.gatherResults([
run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
for evs in itervalues(events_by_room)
], consumeErrors=True))
yield self.store.set_appservice_last_pos(upper_bound)
@@ -128,18 +129,15 @@ class ApplicationServicesHandler(object):
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_positions.set(
upper_bound, "appservice_sender",
)
synapse.metrics.event_processing_positions.labels(
"appservice_sender").set(upper_bound)
events_processed_counter.inc_by(len(events))
events_processed_counter.inc(len(events))
synapse.metrics.event_processing_lag.set(
now - ts, "appservice_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "appservice_sender",
)
synapse.metrics.event_processing_lag.labels(
"appservice_sender").set(now - ts)
synapse.metrics.event_processing_last_ts.labels(
"appservice_sender").set(ts)
finally:
self.is_processing = False

View File

@@ -13,8 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer, threads
from canonicaljson import json
from ._base import BaseHandler
from synapse.api.constants import LoginType
from synapse.api.errors import (
@@ -23,7 +26,6 @@ from synapse.api.errors import (
)
from synapse.module_api import ModuleApi
from synapse.types import UserID
from synapse.util.async import run_on_reactor
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable
@@ -32,7 +34,7 @@ from twisted.web.client import PartialDownloadError
import logging
import bcrypt
import pymacaroons
import simplejson
import attr
import synapse.util.stringutils as stringutils
@@ -249,7 +251,7 @@ class AuthHandler(BaseHandler):
errordict = e.error_dict()
for f in flows:
if len(set(f) - set(creds.keys())) == 0:
if len(set(f) - set(creds)) == 0:
# it's very useful to know what args are stored, but this can
# include the password in the case of registering, so only log
# the keys (confusingly, clientdict may contain a password
@@ -257,12 +259,12 @@ class AuthHandler(BaseHandler):
# and is not sensitive).
logger.info(
"Auth completed with creds: %r. Client dict has keys: %r",
creds, clientdict.keys()
creds, list(clientdict)
)
defer.returnValue((creds, clientdict, session['id']))
ret = self._auth_dict_for_flows(flows, session)
ret['completed'] = creds.keys()
ret['completed'] = list(creds)
ret.update(errordict)
raise InteractiveAuthIncompleteError(
ret,
@@ -402,7 +404,7 @@ class AuthHandler(BaseHandler):
except PartialDownloadError as pde:
# Twisted is silly
data = pde.response
resp_body = simplejson.loads(data)
resp_body = json.loads(data)
if 'success' in resp_body:
# Note that we do NOT check the hostname here: we explicitly
@@ -423,15 +425,11 @@ class AuthHandler(BaseHandler):
def _check_msisdn(self, authdict, _):
return self._check_threepid('msisdn', authdict)
@defer.inlineCallbacks
def _check_dummy_auth(self, authdict, _):
yield run_on_reactor()
defer.returnValue(True)
return defer.succeed(True)
@defer.inlineCallbacks
def _check_threepid(self, medium, authdict):
yield run_on_reactor()
if 'threepid_creds' not in authdict:
raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM)
@@ -825,6 +823,15 @@ class AuthHandler(BaseHandler):
if medium == 'email':
address = address.lower()
identity_handler = self.hs.get_handlers().identity_handler
yield identity_handler.unbind_threepid(
user_id,
{
'medium': medium,
'address': address,
},
)
ret = yield self.store.user_delete_threepid(
user_id, medium, address,
)
@@ -849,7 +856,11 @@ class AuthHandler(BaseHandler):
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
return make_deferred_yieldable(threads.deferToThread(_do_hash))
return make_deferred_yieldable(
threads.deferToThreadPool(
self.hs.get_reactor(), self.hs.get_reactor().getThreadPool(), _do_hash
),
)
def validate_hash(self, password, stored_hash):
"""Validates that self.hash(password) == stored_hash.
@@ -869,16 +880,21 @@ class AuthHandler(BaseHandler):
)
if stored_hash:
return make_deferred_yieldable(threads.deferToThread(_do_validate_hash))
return make_deferred_yieldable(
threads.deferToThreadPool(
self.hs.get_reactor(),
self.hs.get_reactor().getThreadPool(),
_do_validate_hash,
),
)
else:
return defer.succeed(False)
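Outside the reactor machinery, the hash/validate pair above reduces to the usual bcrypt pattern; a sketch with a made-up pepper (synapse appends hs.config.password_pepper) and bcrypt.checkpw for the comparison:

    import bcrypt

    pepper = "example-pepper"
    password = "s3cret"

    hashed = bcrypt.hashpw((password + pepper).encode("utf8"), bcrypt.gensalt(12))
    assert bcrypt.checkpw((password + pepper).encode("utf8"), hashed)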
class MacaroonGeneartor(object):
def __init__(self, hs):
self.clock = hs.get_clock()
self.server_name = hs.config.server_name
self.macaroon_secret_key = hs.config.macaroon_secret_key
@attr.s
class MacaroonGenerator(object):
hs = attr.ib()
def generate_access_token(self, user_id, extra_caveats=None):
extra_caveats = extra_caveats or []
@@ -896,7 +912,7 @@ class MacaroonGeneartor(object):
def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.clock.time_msec()
now = self.hs.get_clock().time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
@@ -908,9 +924,9 @@ class MacaroonGeneartor(object):
def _generate_base_macaroon(self, user_id):
macaroon = pymacaroons.Macaroon(
location=self.server_name,
location=self.hs.config.server_name,
identifier="key",
key=self.macaroon_secret_key)
key=self.hs.config.macaroon_secret_key)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
return macaroon
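For reference, the pymacaroons calls above compose as follows (the location, secret key, and user id are made-up values):

    import pymacaroons

    macaroon = pymacaroons.Macaroon(
        location="example.com",
        identifier="key",
        key="macaroon-secret-key",
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("user_id = @alice:example.com")
    macaroon.add_first_party_caveat("type = login")
    macaroon.add_first_party_caveat("time < %d" % (2 * 60 * 1000,))  # expiry in ms
    token = macaroon.serialize()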

View File

@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
# Copyright 2017, 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,6 +15,9 @@
from twisted.internet import defer
from ._base import BaseHandler
from synapse.types import UserID, create_requester
from synapse.util.logcontext import run_in_background
from synapse.api.errors import SynapseError
import logging
@@ -27,13 +30,24 @@ class DeactivateAccountHandler(BaseHandler):
super(DeactivateAccountHandler, self).__init__(hs)
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
self._room_member_handler = hs.get_room_member_handler()
self._identity_handler = hs.get_handlers().identity_handler
self.user_directory_handler = hs.get_user_directory_handler()
# Flag that indicates whether the process to part users from rooms is running
self._user_parter_running = False
# Start the user parter loop so it can resume parting users from rooms where
# it left off (if it has work left to do).
hs.get_reactor().callWhenRunning(self._start_user_parting)
@defer.inlineCallbacks
def deactivate_account(self, user_id):
def deactivate_account(self, user_id, erase_data):
"""Deactivate a user's account
Args:
user_id (str): ID of user to be deactivated
erase_data (bool): whether to GDPR-erase the user's data
Returns:
Deferred
@@ -41,12 +55,108 @@ class DeactivateAccountHandler(BaseHandler):
# FIXME: Theoretically there is a race here wherein user resets
# password using threepid.
# first delete any devices belonging to the user, which will also
# delete threepids first. We remove these from the IS so if this fails,
# leave the user still active so they can try again.
# Ideally we would prevent password resets and then do this in the
# background thread.
threepids = yield self.store.user_get_threepids(user_id)
for threepid in threepids:
try:
yield self._identity_handler.unbind_threepid(
user_id,
{
'medium': threepid['medium'],
'address': threepid['address'],
},
)
except Exception:
# Do we want this to be a fatal error or should we carry on?
logger.exception("Failed to remove threepid from ID server")
raise SynapseError(400, "Failed to remove threepid from ID server")
yield self.store.user_delete_threepid(
user_id, threepid['medium'], threepid['address'],
)
# delete any devices belonging to the user, which will also
# delete corresponding access tokens.
yield self._device_handler.delete_all_devices_for_user(user_id)
# then delete any remaining access tokens which weren't associated with
# a device.
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.user_delete_threepids(user_id)
yield self.store.user_set_password_hash(user_id, None)
# Add the user to a table of users pending deactivation (ie.
# removal from all the rooms they're a member of)
yield self.store.add_user_pending_deactivation(user_id)
# delete from user directory
yield self.user_directory_handler.handle_user_deactivated(user_id)
# Mark the user as erased, if they asked for that
if erase_data:
logger.info("Marking %s as erased", user_id)
yield self.store.mark_user_erased(user_id)
# Now start the process that goes through that list and
# parts users from rooms (if it isn't already running)
self._start_user_parting()
def _start_user_parting(self):
"""
Start the process that goes through the table of users
pending deactivation, if it isn't already running.
Returns:
None
"""
if not self._user_parter_running:
run_in_background(self._user_parter_loop)
@defer.inlineCallbacks
def _user_parter_loop(self):
"""Loop that parts deactivated users from rooms
Returns:
None
"""
self._user_parter_running = True
logger.info("Starting user parter")
try:
while True:
user_id = yield self.store.get_user_pending_deactivation()
if user_id is None:
break
logger.info("User parter parting %r", user_id)
yield self._part_user(user_id)
yield self.store.del_user_pending_deactivation(user_id)
logger.info("User parter finished parting %r", user_id)
logger.info("User parter finished: stopping")
finally:
self._user_parter_running = False
@defer.inlineCallbacks
def _part_user(self, user_id):
"""Causes the given user_id to leave all the rooms they're joined to
Returns:
None
"""
user = UserID.from_string(user_id)
rooms_for_user = yield self.store.get_rooms_for_user(user_id)
for room_id in rooms_for_user:
logger.info("User parter parting %r from %r", user_id, room_id)
try:
yield self._room_member_handler.update_membership(
create_requester(user),
user,
room_id,
"leave",
ratelimit=False,
)
except Exception:
logger.exception(
"Failed to part user %r from room %r: ignoring and continuing",
user_id, room_id,
)

View File

@@ -26,6 +26,8 @@ from ._base import BaseHandler
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -112,7 +114,7 @@ class DeviceHandler(BaseHandler):
user_id, device_id=None
)
devices = device_map.values()
devices = list(device_map.values())
for device in devices:
_update_device_from_client_ips(device, ips)
@@ -185,7 +187,7 @@ class DeviceHandler(BaseHandler):
defer.Deferred:
"""
device_map = yield self.store.get_devices_by_user(user_id)
device_ids = device_map.keys()
device_ids = list(device_map)
if except_device_id is not None:
device_ids = [d for d in device_ids if d != except_device_id]
yield self.delete_devices(user_id, device_ids)
@@ -318,7 +320,7 @@ class DeviceHandler(BaseHandler):
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -338,7 +340,7 @@ class DeviceHandler(BaseHandler):
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -354,10 +356,10 @@ class DeviceHandler(BaseHandler):
# Check if we've joined the room? If so we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.itervalues():
for state_dict in itervalues(prev_state_ids):
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -367,14 +369,14 @@ class DeviceHandler(BaseHandler):
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not toooo worried about spuriously adding users.
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.itervalues():
for state_dict in itervalues(prev_state_ids):
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
if state_key != user_id:

View File

@@ -14,11 +14,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import simplejson as json
import logging
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from six import iteritems
from synapse.api.errors import (
SynapseError, CodeMessageException, FederationDeniedError,
@@ -79,7 +79,7 @@ class E2eKeysHandler(object):
else:
remote_queries[user_id] = device_ids
# Firt get local devices.
# First get local devices.
failures = {}
results = {}
if local_query:
@@ -92,7 +92,7 @@ class E2eKeysHandler(object):
remote_queries_not_in_cache = {}
if remote_queries:
query_list = []
for user_id, device_ids in remote_queries.iteritems():
for user_id, device_ids in iteritems(remote_queries):
if device_ids:
query_list.extend((user_id, device_id) for device_id in device_ids)
else:
@@ -103,9 +103,9 @@ class E2eKeysHandler(object):
query_list
)
)
for user_id, devices in remote_results.iteritems():
for user_id, devices in iteritems(remote_results):
user_devices = results.setdefault(user_id, {})
for device_id, device in devices.iteritems():
for device_id, device in iteritems(devices):
keys = device.get("keys", None)
device_display_name = device.get("device_display_name", None)
if keys:
@@ -250,9 +250,9 @@ class E2eKeysHandler(object):
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)
@@ -356,7 +356,7 @@ def _exception_to_failure(e):
# include ConnectionRefused and other errors
#
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't
# give a string for e.message, which simplejson then fails to serialize.
# give a string for e.message, which json then fails to serialize.
return {
"status": 503, "message": str(e.message),
}

View File

@@ -48,6 +48,7 @@ class EventStreamHandler(BaseHandler):
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
self._server_notices_sender = hs.get_server_notices_sender()
@defer.inlineCallbacks
@log_function
@@ -58,6 +59,10 @@ class EventStreamHandler(BaseHandler):
If `only_keys` is not None, events from keys will be sent down.
"""
# send any outstanding server notices to the user.
yield self._server_notices_sender.on_user_syncing(auth_user_id)
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_presence_handler()

View File

@@ -24,6 +24,7 @@ from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
import six
from six.moves import http_client
from six import iteritems
from twisted.internet import defer
from unpaddedbase64 import decode_base64
@@ -38,11 +39,12 @@ from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError, logcontext
from synapse.util.metrics import measure_func
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.async import Linearizer
from synapse.util.frozenutils import unfreeze
from synapse.crypto.event_signing import (
compute_event_signature, add_hashes_and_signatures,
)
from synapse.state import resolve_events_with_factory
from synapse.types import UserID, get_domain_from_id
from synapse.events.utils import prune_event
@@ -51,7 +53,6 @@ from synapse.util.retryutils import NotRetryingDestination
from synapse.util.distributor import user_joined_room
logger = logging.getLogger(__name__)
@@ -81,6 +82,7 @@ class FederationHandler(BaseHandler):
self.pusher_pool = hs.get_pusherpool()
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
# When joining a room we need to queue any events for that room up
self.room_queues = {}
@@ -88,7 +90,9 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def on_receive_pdu(self, origin, pdu, get_missing=True):
def on_receive_pdu(
self, origin, pdu, get_missing=True, sent_to_us_directly=False,
):
""" Process a PDU received via a federation /send/ transaction, or
via backfill of missing prev_events
@@ -102,8 +106,10 @@ class FederationHandler(BaseHandler):
"""
# We reprocess pdus when we have seen them only as outliers
existing = yield self.get_persisted_pdu(
origin, pdu.event_id, do_auth=False
existing = yield self.store.get_event(
pdu.event_id,
allow_none=True,
allow_rejected=True,
)
# FIXME: Currently we fetch an event again when we already have it
@@ -160,14 +166,11 @@ class FederationHandler(BaseHandler):
"Ignoring PDU %s for room %s from %s as we've left the room!",
pdu.event_id, pdu.room_id, origin,
)
return
defer.returnValue(None)
state = None
auth_chain = []
fetch_state = False
# Get missing pdus if necessary.
if not pdu.internal_metadata.is_outlier():
# We only backfill backwards to the min depth.
@@ -222,26 +225,60 @@ class FederationHandler(BaseHandler):
list(prevs - seen)[:5],
)
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
if sent_to_us_directly and prevs - seen:
# If they have sent it to us directly, and the server
# isn't telling us about the prev_events that the
# message references, we explode
raise FederationError(
"ERROR",
403,
(
"Your server isn't divulging details about prev_events "
"referenced in this event."
),
affected=pdu.event_id,
)
fetch_state = True
elif prevs - seen:
# Calculate the state of the previous events, and
# de-conflict them to find the current state.
state_groups = []
auth_chains = set()
try:
# Get the state of the events we know about
ours = yield self.store.get_state_groups(pdu.room_id, list(seen))
state_groups.append(ours)
if fetch_state:
# We need to get the state at this event, since we haven't
# processed all the prev events.
logger.debug(
"_handle_new_pdu getting state for %s",
pdu.room_id
)
try:
state, auth_chain = yield self.replication_layer.get_state_for_room(
origin, pdu.room_id, pdu.event_id,
)
except Exception:
logger.exception("Failed to get state for event: %s", pdu.event_id)
# Ask the remote server for the states we don't
# know about
for p in prevs - seen:
state, got_auth_chain = (
yield self.replication_layer.get_state_for_room(
origin, pdu.room_id, p
)
)
auth_chains.update(got_auth_chain)
state_group = {(x.type, x.state_key): x.event_id for x in state}
state_groups.append(state_group)
# Resolve any conflicting state
def fetch(ev_ids):
return self.store.get_events(
ev_ids, get_prev_content=False, check_redacted=False
)
state_map = yield resolve_events_with_factory(
state_groups, {pdu.event_id: pdu}, fetch
)
state = (yield self.store.get_events(state_map.values())).values()
auth_chain = list(auth_chains)
except Exception:
raise FederationError(
"ERROR",
403,
"We can't get valid state history.",
affected=pdu.event_id,
)
yield self._process_received_pdu(
origin,
@@ -319,11 +356,17 @@ class FederationHandler(BaseHandler):
for e in missing_events:
logger.info("Handling found event %s", e.event_id)
yield self.on_receive_pdu(
origin,
e,
get_missing=False
)
try:
yield self.on_receive_pdu(
origin,
e,
get_missing=False
)
except FederationError as e:
if e.code == 403:
logger.warn("Event %s failed history check.")
else:
raise
@log_function
@defer.inlineCallbacks
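
A condensed, self-contained sketch of the new gating above (names mirror the diff, but the homeserver machinery is stubbed out): a PDU pushed directly over /send/ with unknown prev_events is rejected, while a backfilled one triggers state resolution instead.

def classify_pdu(prevs, seen, sent_to_us_directly):
    """Toy version of the branch added to on_receive_pdu."""
    missing = prevs - seen
    if not missing:
        return "process"
    if sent_to_us_directly:
        # the sending server should be able to tell us about the
        # prev_events it references; reject with a 403 if it can't
        return "reject"
    # backfill may legitimately have gaps: fetch the state at each
    # missing prev_event and resolve the state groups instead
    return "resolve"

assert classify_pdu({"$a", "$b"}, {"$a"}, sent_to_us_directly=True) == "reject"
assert classify_pdu({"$a", "$b"}, {"$a"}, sent_to_us_directly=False) == "resolve"
assert classify_pdu({"$a"}, {"$a"}, sent_to_us_directly=False) == "process"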
@@ -457,6 +500,47 @@ class FederationHandler(BaseHandler):
@measure_func("_filter_events_for_server")
@defer.inlineCallbacks
def _filter_events_for_server(self, server_name, room_id, events):
"""Filter the given events for the given server, redacting those the
server can't see.
Assumes the server is currently in the room.
Returns
list[FrozenEvent]
"""
# First lets check to see if all the events have a history visibility
# of "shared" or "world_readable". If thats the case then we don't
# need to check membership (as we know the server is in the room).
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
(EventTypes.RoomHistoryVisibility, ""),
)
)
visibility_ids = set()
for sids in event_to_state_ids.itervalues():
hist = sids.get((EventTypes.RoomHistoryVisibility, ""))
if hist:
visibility_ids.add(hist)
# If we failed to find any history visibility events then the default
# is "shared" visiblity.
if not visibility_ids:
defer.returnValue(events)
event_map = yield self.store.get_events(visibility_ids)
all_open = all(
e.content.get("history_visibility") in (None, "shared", "world_readable")
for e in event_map.itervalues()
)
if all_open:
defer.returnValue(events)
# Ok, so we're dealing with events that have non-trivial visibility
# rules, so we need to also get the memberships of the room.
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
@@ -478,7 +562,7 @@ class FederationHandler(BaseHandler):
# to get all state ids that we're interested in.
event_map = yield self.store.get_events([
e_id
for key_to_eid in event_to_state_ids.values()
for key_to_eid in list(event_to_state_ids.values())
for key, e_id in key_to_eid.items()
if key[0] != EventTypes.Member or check_match(key[1])
])
@@ -486,13 +570,26 @@ class FederationHandler(BaseHandler):
event_to_state = {
e_id: {
key: event_map[inner_e_id]
for key, inner_e_id in key_to_eid.items()
for key, inner_e_id in key_to_eid.iteritems()
if inner_e_id in event_map
}
for e_id, key_to_eid in event_to_state_ids.items()
for e_id, key_to_eid in event_to_state_ids.iteritems()
}
erased_senders = yield self.store.are_users_erased(
e.sender for e in events,
)
def redact_disallowed(event, state):
# if the sender has been gdpr17ed, always return a redacted
# copy of the event.
if erased_senders[event.sender]:
logger.info(
"Sender of %s has been erased, redacting",
event.event_id,
)
return prune_event(event)
if not state:
return event
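
A condensed sketch of the erasure handling introduced here, with the store call stubbed out: are_users_erased is assumed to yield a dict of user_id to bool, and an erased user's events only ever leave the server in pruned form.

def filter_for_server(events, erased_senders, prune_event):
    """erased_senders: {user_id: bool}; prune_event strips content."""
    return [
        prune_event(ev) if erased_senders[ev.sender] else ev
        for ev in events
    ]

class Ev(object):
    def __init__(self, sender):
        self.sender = sender

# illustration only: a stand-in prune function
assert filter_for_server([Ev("@gone:hs")], {"@gone:hs": True},
                         lambda ev: "redacted") == ["redacted"]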
@@ -504,7 +601,7 @@ class FederationHandler(BaseHandler):
# membership states for the requesting server to determine
# if the server is either in the room or has been invited
# into the room.
for ev in state.values():
for ev in state.itervalues():
if ev.type != EventTypes.Member:
continue
try:
@@ -750,9 +847,19 @@ class FederationHandler(BaseHandler):
curr_state = yield self.state_handler.get_current_state(room_id)
def get_domains_from_state(state):
"""Get joined domains from state
Args:
state (dict[tuple, FrozenEvent]): State map from type/state
key to event.
Returns:
list[tuple[str, int]]: a list of (server, depth) pairs, where
depth is the lowest join depth for that server, sorted ascending.
"""
joined_users = [
(state_key, int(event.depth))
for (e_type, state_key), event in state.items()
for (e_type, state_key), event in state.iteritems()
if e_type == EventTypes.Member
and event.membership == Membership.JOIN
]
@@ -769,7 +876,7 @@ class FederationHandler(BaseHandler):
except Exception:
pass
return sorted(joined_domains.items(), key=lambda d: d[1])
return sorted(joined_domains.iteritems(), key=lambda d: d[1])
curr_domains = get_domains_from_state(curr_state)
@@ -786,7 +893,7 @@ class FederationHandler(BaseHandler):
yield self.backfill(
dom, room_id,
limit=100,
extremities=[e for e in extremities.keys()]
extremities=extremities,
)
# If this succeeded then we probably already have the
# appropriate stuff.
@@ -832,7 +939,7 @@ class FederationHandler(BaseHandler):
tried_domains = set(likely_domains)
tried_domains.add(self.server_name)
event_ids = list(extremities.keys())
event_ids = list(extremities.iterkeys())
logger.debug("calling resolve_state_groups in _maybe_backfill")
resolve = logcontext.preserve_fn(
@@ -842,31 +949,34 @@ class FederationHandler(BaseHandler):
[resolve(room_id, [e]) for e in event_ids],
consumeErrors=True,
))
# dict[str, dict[tuple, str]], a map from event_id to state map of
# event_ids.
states = dict(zip(event_ids, [s.state for s in states]))
state_map = yield self.store.get_events(
[e_id for ids in states.values() for e_id in ids],
[e_id for ids in states.itervalues() for e_id in ids.itervalues()],
get_prev_content=False
)
states = {
key: {
k: state_map[e_id]
for k, e_id in state_dict.items()
for k, e_id in state_dict.iteritems()
if e_id in state_map
} for key, state_dict in states.items()
} for key, state_dict in states.iteritems()
}
for e_id, _ in sorted_extremeties_tuple:
likely_domains = get_domains_from_state(states[e_id])
success = yield try_backfill([
dom for dom in likely_domains
dom for dom, _ in likely_domains
if dom not in tried_domains
])
if success:
defer.returnValue(True)
tried_domains.update(likely_domains)
tried_domains.update(dom for dom, _ in likely_domains)
defer.returnValue(False)
@@ -1134,13 +1244,13 @@ class FederationHandler(BaseHandler):
user = UserID.from_string(event.state_key)
yield user_joined_room(self.distributor, user, event.room_id)
state_ids = context.prev_state_ids.values()
state_ids = list(context.prev_state_ids.values())
auth_chain = yield self.store.get_auth_chain(state_ids)
state = yield self.store.get_events(context.prev_state_ids.values())
state = yield self.store.get_events(list(context.prev_state_ids.values()))
defer.returnValue({
"state": state.values(),
"state": list(state.values()),
"auth_chain": auth_chain,
})
@@ -1180,6 +1290,13 @@ class FederationHandler(BaseHandler):
if not self.is_mine_id(event.state_key):
raise SynapseError(400, "The invite event must be for this server")
# block any attempts to invite the server notices mxid
if event.state_key == self._server_notices_mxid:
raise SynapseError(
http_client.FORBIDDEN,
"Cannot invite this user",
)
event.internal_metadata.outlier = True
event.internal_metadata.invite_from_remote = True
@@ -1360,14 +1477,12 @@ class FederationHandler(BaseHandler):
def get_state_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
state_groups = yield self.store.get_state_groups(
room_id, [event_id]
)
if state_groups:
_, state = state_groups.items().pop()
_, state = list(iteritems(state_groups)).pop()
results = {
(e.type, e.state_key): e for e in state
}
@@ -1383,7 +1498,7 @@ class FederationHandler(BaseHandler):
else:
del results[(event.type, event.state_key)]
res = results.values()
res = list(results.values())
for event in res:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
@@ -1404,8 +1519,6 @@ class FederationHandler(BaseHandler):
def get_state_ids_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
state_groups = yield self.store.get_state_groups_ids(
room_id, [event_id]
)
@@ -1424,7 +1537,7 @@ class FederationHandler(BaseHandler):
else:
results.pop((event.type, event.state_key), None)
defer.returnValue(results.values())
defer.returnValue(list(results.values()))
else:
defer.returnValue([])
@@ -1447,11 +1560,20 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
def get_persisted_pdu(self, origin, event_id):
"""Get an event from the database for the given server.
Args:
origin (str): hostname of server which is requesting the event; we
will check that the server is allowed to see it.
event_id (str): id of the event being requested
Returns:
Deferred: Results in a `Pdu`.
Deferred[EventBase|None]: None if we know nothing about the event;
otherwise the (possibly-redacted) event.
Raises:
AuthError if the server is not currently in the room
"""
event = yield self.store.get_event(
event_id,
@@ -1472,20 +1594,17 @@ class FederationHandler(BaseHandler):
)
)
if do_auth:
in_room = yield self.auth.check_host_in_room(
event.room_id,
origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
)
event = events[0]
in_room = yield self.auth.check_host_in_room(
event.room_id,
origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
)
event = events[0]
defer.returnValue(event)
else:
defer.returnValue(None)
@@ -1773,6 +1892,10 @@ class FederationHandler(BaseHandler):
min_depth=min_depth,
)
missing_events = yield self._filter_events_for_server(
origin, room_id, missing_events,
)
defer.returnValue(missing_events)
@defer.inlineCallbacks
@@ -1893,7 +2016,7 @@ class FederationHandler(BaseHandler):
})
new_state = self.state_handler.resolve_events(
[local_view.values(), remote_view.values()],
[list(local_view.values()), list(remote_view.values())],
event
)
@@ -2013,7 +2136,7 @@ class FederationHandler(BaseHandler):
this will not be included in the current_state in the context.
"""
state_updates = {
k: a.event_id for k, a in auth_events.iteritems()
k: a.event_id for k, a in iteritems(auth_events)
if k != event_key
}
context.current_state_ids = dict(context.current_state_ids)
@@ -2023,7 +2146,7 @@ class FederationHandler(BaseHandler):
context.delta_ids.update(state_updates)
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.iteritems()
k: a.event_id for k, a in iteritems(auth_events)
})
context.state_group = yield self.store.store_state_group(
event.event_id,
@@ -2075,7 +2198,7 @@ class FederationHandler(BaseHandler):
def get_next(it, opt=None):
try:
return it.next()
return next(it)
except Exception:
return opt
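
The last hunk in this file is another Python 3 fix: iterators lost their .next() method, and the builtin next() works on both versions. For example (the diff keeps the broad except Exception; this sketch narrows it to StopIteration for clarity):

it = iter([1, 2, 3])
assert next(it) == 1   # it.next() would raise AttributeError on Python 3

def get_next(it, opt=None):
    try:
        return next(it)
    except StopIteration:
        return opt

assert get_next(iter([]), "fallback") == "fallback"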

View File

@@ -15,6 +15,7 @@
# limitations under the License.
from twisted.internet import defer
from six import iteritems
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id
@@ -449,7 +450,7 @@ class GroupsLocalHandler(object):
results = {}
failed_results = []
for destination, dest_user_ids in destinations.iteritems():
for destination, dest_user_ids in iteritems(destinations):
try:
r = yield self.transport_client.bulk_get_publicised_groups(
destination, list(dest_user_ids),

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -18,7 +19,7 @@
import logging
import simplejson as json
from canonicaljson import json
from twisted.internet import defer
@@ -26,7 +27,6 @@ from synapse.api.errors import (
MatrixCodeMessageException, CodeMessageException
)
from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError, Codes
logger = logging.getLogger(__name__)
@@ -38,6 +38,7 @@ class IdentityHandler(BaseHandler):
super(IdentityHandler, self).__init__(hs)
self.http_client = hs.get_simple_http_client()
self.federation_http_client = hs.get_http_client()
self.trusted_id_servers = set(hs.config.trusted_third_party_id_servers)
self.trust_any_id_server_just_for_testing_do_not_use = (
@@ -60,8 +61,6 @@ class IdentityHandler(BaseHandler):
@defer.inlineCallbacks
def threepid_from_creds(self, creds):
yield run_on_reactor()
if 'id_server' in creds:
id_server = creds['id_server']
elif 'idServer' in creds:
@@ -104,7 +103,6 @@ class IdentityHandler(BaseHandler):
@defer.inlineCallbacks
def bind_threepid(self, creds, mxid):
yield run_on_reactor()
logger.debug("binding threepid %r to %s", creds, mxid)
data = None
@@ -139,9 +137,53 @@ class IdentityHandler(BaseHandler):
defer.returnValue(data)
@defer.inlineCallbacks
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
yield run_on_reactor()
def unbind_threepid(self, mxid, threepid):
"""
Removes a binding from an identity server
Args:
mxid (str): Matrix user ID of binding to be removed
threepid (dict): Dict with medium & address of binding to be removed
Returns:
Deferred[bool]: True on success, otherwise False
"""
logger.debug("unbinding threepid %r from %s", threepid, mxid)
if not self.trusted_id_servers:
logger.warn("Can't unbind threepid: no trusted ID servers set in config")
defer.returnValue(False)
# We don't track what ID server we added 3pids on (perhaps we ought to)
# but we assume that any of the servers in the trusted list are in the
# same ID server federation, so we can pick any one of them to send the
# deletion request to.
id_server = next(iter(self.trusted_id_servers))
url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
content = {
"mxid": mxid,
"threepid": threepid,
}
headers = {}
# we abuse the federation http client to sign the request, but we have to send it
# using the normal http client since we don't want the SRV lookup and want normal
# 'browser-like' HTTPS.
self.federation_http_client.sign_request(
destination=None,
method='POST',
url_bytes='/_matrix/identity/api/v1/3pid/unbind'.encode('ascii'),
headers_dict=headers,
content=content,
destination_is=id_server,
)
yield self.http_client.post_json_get_json(
url,
content,
headers,
)
defer.returnValue(True)
@defer.inlineCallbacks
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,
@@ -176,8 +218,6 @@ class IdentityHandler(BaseHandler):
self, id_server, country, phone_number,
client_secret, send_attempt, **kwargs
):
yield run_on_reactor()
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,

View File

@@ -181,8 +181,8 @@ class InitialSyncHandler(BaseHandler):
self.store, user_id, messages
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
start_token = now_token.copy_and_replace("room_key", token)
end_token = now_token.copy_and_replace("room_key", room_end_token)
time_now = self.clock.time_msec()
d["messages"] = {
@@ -325,8 +325,8 @@ class InitialSyncHandler(BaseHandler):
self.store, user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token[0])
end_token = StreamToken.START.copy_and_replace("room_key", token[1])
start_token = StreamToken.START.copy_and_replace("room_key", token)
end_token = StreamToken.START.copy_and_replace("room_key", stream_token)
time_now = self.clock.time_msec()
@@ -408,8 +408,8 @@ class InitialSyncHandler(BaseHandler):
self.store, user_id, messages, is_peeking=is_peeking,
)
start_token = now_token.copy_and_replace("room_key", token[0])
end_token = now_token.copy_and_replace("room_key", token[1])
start_token = now_token.copy_and_replace("room_key", token)
end_token = now_token
time_now = self.clock.time_msec()

View File

@@ -14,23 +14,28 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import simplejson
import sys
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
import six
from twisted.internet import defer, reactor
from six import string_types, itervalues, iteritems
from twisted.internet import defer
from twisted.internet.defer import succeed
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.errors import (
AuthError, Codes, SynapseError,
ConsentNotGivenError,
)
from synapse.api.urls import ConsentURIBuilder
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.types import (
UserID, RoomAlias, RoomStreamToken,
)
from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
from synapse.util.async import ReadWriteLock, Limiter
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import measure_func
from synapse.util.frozenutils import frozendict_json_encoder
@@ -86,14 +91,14 @@ class MessageHandler(BaseHandler):
# map from purge id to PurgeStatus
self._purges_by_id = {}
def start_purge_history(self, room_id, topological_ordering,
def start_purge_history(self, room_id, token,
delete_local_events=False):
"""Start off a history purge on a room.
Args:
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
@@ -115,19 +120,19 @@ class MessageHandler(BaseHandler):
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, topological_ordering, delete_local_events,
purge_id, room_id, token, delete_local_events,
)
return purge_id
@defer.inlineCallbacks
def _purge_history(self, purge_id, room_id, topological_ordering,
def _purge_history(self, purge_id, room_id, token,
delete_local_events):
"""Carry out a history purge on a room.
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
@@ -138,7 +143,7 @@ class MessageHandler(BaseHandler):
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, topological_ordering, delete_local_events,
room_id, token, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
@@ -151,7 +156,7 @@ class MessageHandler(BaseHandler):
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
reactor.callLater(24 * 3600, clear_purge)
self.hs.get_reactor().callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
@@ -397,7 +402,7 @@ class MessageHandler(BaseHandler):
"avatar_url": profile.avatar_url,
"display_name": profile.display_name,
}
for user_id, profile in users_with_profile.iteritems()
for user_id, profile in iteritems(users_with_profile)
})
@@ -431,6 +436,9 @@ class EventCreationHandler(object):
self.spam_checker = hs.get_spam_checker()
if self.config.block_events_without_consent_error is not None:
self._consent_uri_builder = ConsentURIBuilder(self.config)
@defer.inlineCallbacks
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_events_and_hashes=None):
@@ -482,6 +490,10 @@ class EventCreationHandler(object):
target, e
)
is_exempt = yield self._is_exempt_from_privacy_policy(builder, requester)
if not is_exempt:
yield self.assert_accepted_privacy_policy(requester)
if token_id is not None:
builder.internal_metadata.token_id = token_id
@@ -496,6 +508,90 @@ class EventCreationHandler(object):
defer.returnValue((event, context))
def _is_exempt_from_privacy_policy(self, builder, requester):
""""Determine if an event to be sent is exempt from having to consent
to the privacy policy
Args:
builder (synapse.events.builder.EventBuilder): event being created
requester (Requester): user requesting this event
Returns:
Deferred[bool]: true if the event can be sent without the user
consenting
"""
# the only thing the user can do is join the server notices room.
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
if membership == Membership.JOIN:
return self._is_server_notices_room(builder.room_id)
elif membership == Membership.LEAVE:
# the user is always allowed to leave (but not kick people)
return builder.state_key == requester.user.to_string()
return succeed(False)
@defer.inlineCallbacks
def _is_server_notices_room(self, room_id):
if self.config.server_notices_mxid is None:
defer.returnValue(False)
user_ids = yield self.store.get_users_in_room(room_id)
defer.returnValue(self.config.server_notices_mxid in user_ids)
@defer.inlineCallbacks
def assert_accepted_privacy_policy(self, requester):
"""Check if a user has accepted the privacy policy
Called when the given user is about to do something that requires
privacy consent. We see if the user is exempt and otherwise check that
they have given consent. If they have not, a ConsentNotGiven error is
raised.
Args:
requester (synapse.types.Requester):
The user making the request
Returns:
Deferred[None]: returns normally if the user has consented or is
exempt
Raises:
ConsentNotGivenError: if the user has not given consent yet
"""
if self.config.block_events_without_consent_error is None:
return
# exempt AS users from needing consent
if requester.app_service is not None:
return
user_id = requester.user.to_string()
# exempt the system notices user
if (
self.config.server_notices_mxid is not None and
user_id == self.config.server_notices_mxid
):
return
u = yield self.store.get_user_by_id(user_id)
assert u is not None
if u["appservice_id"] is not None:
# users registered by an appservice are exempt
return
if u["consent_version"] == self.config.user_consent_version:
return
consent_uri = self._consent_uri_builder.build_user_consent_uri(
requester.user.localpart,
)
msg = self.config.block_events_without_consent_error % {
'consent_uri': consent_uri,
}
raise ConsentNotGivenError(
msg=msg,
consent_uri=consent_uri,
)
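
A simplified, self-contained version of the consent gate added here (config and user record reduced to plain dicts; the real check lives on the handler and hits the store):

def needs_consent(user, config):
    # sketch only: mirrors the exemptions in assert_accepted_privacy_policy
    if config["block_events_without_consent_error"] is None:
        return False               # feature disabled
    if user["appservice_id"] is not None:
        return False               # appservice-registered users are exempt
    return user["consent_version"] != config["user_consent_version"]

config = {
    "block_events_without_consent_error": "Please accept %(consent_uri)s",
    "user_consent_version": "1.0",
}
assert needs_consent({"appservice_id": None, "consent_version": None}, config)
assert not needs_consent({"appservice_id": "irc", "consent_version": None}, config)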
@defer.inlineCallbacks
def send_nonmember_event(self, requester, event, context, ratelimit=True):
"""
@@ -578,7 +674,7 @@ class EventCreationHandler(object):
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
if not isinstance(spam_error, string_types):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
@@ -700,7 +796,7 @@ class EventCreationHandler(object):
# Ensure that we can round trip before trying to persist in db
try:
dump = frozendict_json_encoder.encode(event.content)
simplejson.loads(dump)
json.loads(dump)
except Exception:
logger.exception("Failed to encode content: %r", event.content)
raise
@@ -713,6 +809,7 @@ class EventCreationHandler(object):
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.hs.get_clock(),
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
@@ -792,7 +889,7 @@ class EventCreationHandler(object):
state_to_include_ids = [
e_id
for k, e_id in context.current_state_ids.iteritems()
for k, e_id in iteritems(context.current_state_ids)
if k[0] in self.hs.config.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
@@ -806,7 +903,7 @@ class EventCreationHandler(object):
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.itervalues()
for e in itervalues(state_to_include)
]
invitee = UserID.from_string(event.state_key)
@@ -866,9 +963,7 @@ class EventCreationHandler(object):
event_stream_id, max_stream_id
)
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
try:
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,

View File

@@ -22,9 +22,11 @@ The methods that define policy are:
- should_notify
"""
from twisted.internet import defer, reactor
from twisted.internet import defer
from contextlib import contextmanager
from six import itervalues, iteritems
from synapse.api.errors import SynapseError
from synapse.api.constants import PresenceState
from synapse.storage.presence import UserPresenceState
@@ -36,27 +38,29 @@ from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
from synapse.types import UserID, get_domain_from_id
import synapse.metrics
from synapse.metrics import LaterGauge
import logging
from prometheus_client import Counter
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
notified_presence_counter = metrics.register_counter("notified_presence")
federation_presence_out_counter = metrics.register_counter("federation_presence_out")
presence_updates_counter = metrics.register_counter("presence_updates")
timers_fired_counter = metrics.register_counter("timers_fired")
federation_presence_counter = metrics.register_counter("federation_presence")
bump_active_time_counter = metrics.register_counter("bump_active_time")
notified_presence_counter = Counter("synapse_handler_presence_notified_presence", "")
federation_presence_out_counter = Counter(
"synapse_handler_presence_federation_presence_out", "")
presence_updates_counter = Counter("synapse_handler_presence_presence_updates", "")
timers_fired_counter = Counter("synapse_handler_presence_timers_fired", "")
federation_presence_counter = Counter("synapse_handler_presence_federation_presence", "")
bump_active_time_counter = Counter("synapse_handler_presence_bump_active_time", "")
get_updates_counter = metrics.register_counter("get_updates", labels=["type"])
get_updates_counter = Counter("synapse_handler_presence_get_updates", "", ["type"])
notify_reason_counter = metrics.register_counter("notify_reason", labels=["reason"])
state_transition_counter = metrics.register_counter(
"state_transition", labels=["from", "to"]
notify_reason_counter = Counter(
"synapse_handler_presence_notify_reason", "", ["reason"])
state_transition_counter = Counter(
"synapse_handler_presence_state_transition", "", ["from", "to"]
)
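
These hunks are part of the repo-wide migration from the bespoke synapse.metrics registry to prometheus_client. The mechanical translation, as a standalone sketch (the metric names here are examples, not Synapse's):

from prometheus_client import Counter

presence_updates = Counter("example_presence_updates", "")
notify_reason = Counter("example_notify_reason", "", ["reason"])

presence_updates.inc(5)                      # was: counter.inc_by(5)
notify_reason.labels("state_change").inc()   # was: counter.inc("state_change")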
@@ -87,6 +91,11 @@ assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class PresenceHandler(object):
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self.is_mine = hs.is_mine
self.is_mine_id = hs.is_mine_id
self.clock = hs.get_clock()
@@ -94,7 +103,6 @@ class PresenceHandler(object):
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.federation = hs.get_federation_sender()
self.state = hs.get_state_handler()
federation_registry = hs.get_federation_registry()
@@ -137,8 +145,9 @@ class PresenceHandler(object):
for state in active_presence
}
metrics.register_callback(
"user_to_current_state_size", lambda: len(self.user_to_current_state)
LaterGauge(
"synapse_handlers_presence_user_to_current_state_size", "", [],
lambda: len(self.user_to_current_state)
)
now = self.clock.time_msec()
@@ -170,7 +179,7 @@ class PresenceHandler(object):
# have not yet been persisted
self.unpersisted_users_changes = set()
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
hs.get_reactor().addSystemEventTrigger("before", "shutdown", self._on_shutdown)
self.serial_to_user = {}
self._next_serial = 1
@@ -208,7 +217,8 @@ class PresenceHandler(object):
60 * 1000,
)
metrics.register_callback("wheel_timer_size", lambda: len(self.wheel_timer))
LaterGauge("synapse_handlers_presence_wheel_timer_size", "", [],
lambda: len(self.wheel_timer))
@defer.inlineCallbacks
def _on_shutdown(self):
@@ -311,11 +321,11 @@ class PresenceHandler(object):
# TODO: We should probably ensure there are no races hereafter
presence_updates_counter.inc_by(len(new_states))
presence_updates_counter.inc(len(new_states))
if to_notify:
notified_presence_counter.inc_by(len(to_notify))
yield self._persist_and_notify(to_notify.values())
notified_presence_counter.inc(len(to_notify))
yield self._persist_and_notify(list(to_notify.values()))
self.unpersisted_users_changes |= set(s.user_id for s in new_states)
self.unpersisted_users_changes -= set(to_notify.keys())
@@ -325,7 +335,7 @@ class PresenceHandler(object):
if user_id not in to_notify
}
if to_federation_ping:
federation_presence_out_counter.inc_by(len(to_federation_ping))
federation_presence_out_counter.inc(len(to_federation_ping))
self._push_to_remotes(to_federation_ping.values())
@@ -363,7 +373,7 @@ class PresenceHandler(object):
for user_id in users_to_check
]
timers_fired_counter.inc_by(len(states))
timers_fired_counter.inc(len(states))
changes = handle_timeouts(
states,
@@ -463,61 +473,6 @@ class PresenceHandler(object):
syncing_user_ids.update(user_ids)
return syncing_user_ids
@defer.inlineCallbacks
def update_external_syncs(self, process_id, syncing_user_ids):
"""Update the syncing users for an external process
Args:
process_id(str): An identifier for the process the users are
syncing against. This allows synapse to process updates
as user start and stop syncing against a given process.
syncing_user_ids(set(str)): The set of user_ids that are
currently syncing on that server.
"""
# Grab the previous list of user_ids that were syncing on that process
prev_syncing_user_ids = (
self.external_process_to_current_syncs.get(process_id, set())
)
# Grab the current presence state for both the users that are syncing
# now and the users that were syncing before this update.
prev_states = yield self.current_state_for_users(
syncing_user_ids | prev_syncing_user_ids
)
updates = []
time_now_ms = self.clock.time_msec()
# For each new user that is syncing check if we need to mark them as
# being online.
for new_user_id in syncing_user_ids - prev_syncing_user_ids:
prev_state = prev_states[new_user_id]
if prev_state.state == PresenceState.OFFLINE:
updates.append(prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=time_now_ms,
last_user_sync_ts=time_now_ms,
))
else:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
# For each user that is still syncing or stopped syncing update the
# last sync time so that we will correctly apply the grace period when
# they stop syncing.
for old_user_id in prev_syncing_user_ids:
prev_state = prev_states[old_user_id]
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
yield self._update_states(updates)
# Update the last updated time for the process. We expire the entries
# if we don't receive an update in the given timeframe.
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
self.external_process_to_current_syncs[process_id] = syncing_user_ids
@defer.inlineCallbacks
def update_external_syncs_row(self, process_id, user_id, is_syncing, sync_time_msec):
"""Update the syncing users for an external process as a delta.
@@ -581,7 +536,7 @@ class PresenceHandler(object):
prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
)
for prev_state in prev_states.itervalues()
for prev_state in itervalues(prev_states)
])
self.external_process_last_updated_ms.pop(process_id, None)
@@ -604,14 +559,14 @@ class PresenceHandler(object):
for user_id in user_ids
}
missing = [user_id for user_id, state in states.iteritems() if not state]
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
# There are things not in our in memory cache. Lets pull them out of
# the database.
res = yield self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in states.iteritems() if not state]
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id)
@@ -707,7 +662,7 @@ class PresenceHandler(object):
updates.append(prev_state.copy_and_replace(**new_fields))
if updates:
federation_presence_counter.inc_by(len(updates))
federation_presence_counter.inc(len(updates))
yield self._update_states(updates)
@defer.inlineCallbacks
@@ -732,7 +687,7 @@ class PresenceHandler(object):
"""
updates = yield self.current_state_for_users(target_user_ids)
updates = updates.values()
updates = list(updates.values())
for user_id in set(target_user_ids) - set(u.user_id for u in updates):
updates.append(UserPresenceState.default(user_id))
@@ -798,11 +753,11 @@ class PresenceHandler(object):
self._push_to_remotes([state])
else:
user_ids = yield self.store.get_users_in_room(room_id)
user_ids = filter(self.is_mine_id, user_ids)
user_ids = list(filter(self.is_mine_id, user_ids))
states = yield self.current_state_for_users(user_ids)
self._push_to_remotes(states.values())
self._push_to_remotes(list(states.values()))
@defer.inlineCallbacks
def get_presence_list(self, observer_user, accepted=None):
@@ -982,28 +937,28 @@ def should_notify(old_state, new_state):
return False
if old_state.status_msg != new_state.status_msg:
notify_reason_counter.inc("status_msg_change")
notify_reason_counter.labels("status_msg_change").inc()
return True
if old_state.state != new_state.state:
notify_reason_counter.inc("state_change")
state_transition_counter.inc(old_state.state, new_state.state)
notify_reason_counter.labels("state_change").inc()
state_transition_counter.labels(old_state.state, new_state.state).inc()
return True
if old_state.state == PresenceState.ONLINE:
if new_state.currently_active != old_state.currently_active:
notify_reason_counter.inc("current_active_change")
notify_reason_counter.labels("current_active_change").inc()
return True
if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Only notify about last active bumps if we're not currently active
if not new_state.currently_active:
notify_reason_counter.inc("last_active_change_online")
notify_reason_counter.labels("last_active_change_online").inc()
return True
elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Always notify for a transition where last active gets bumped.
notify_reason_counter.inc("last_active_change_not_online")
notify_reason_counter.labels("last_active_change_not_online").inc()
return True
return False
@@ -1077,14 +1032,14 @@ class PresenceEventSource(object):
if changed is not None and len(changed) < 500:
# For small deltas, its quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.inc("stream")
get_updates_counter.labels("stream").inc()
for other_user_id in changed:
if other_user_id in users_interested_in:
user_ids_changed.add(other_user_id)
else:
# Too many possible updates. Find all users we can see and check
# if any of them have changed.
get_updates_counter.inc("full")
get_updates_counter.labels("full").inc()
if from_key:
user_ids_changed = stream_change_cache.get_entities_changed(
@@ -1096,10 +1051,10 @@ class PresenceEventSource(object):
updates = yield presence.current_state_for_users(user_ids_changed)
if include_offline:
defer.returnValue((updates.values(), max_token))
defer.returnValue((list(updates.values()), max_token))
else:
defer.returnValue(([
s for s in updates.itervalues()
s for s in itervalues(updates)
if s.state != PresenceState.OFFLINE
], max_token))
@@ -1157,7 +1112,7 @@ def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
if new_state:
changes[state.user_id] = new_state
return changes.values()
return list(changes.values())
def handle_timeout(state, is_mine, syncing_user_ids, now):
@@ -1356,11 +1311,11 @@ def get_interested_remotes(store, states, state_handler):
# hosts in those rooms.
room_ids_to_states, users_to_states = yield get_interested_parties(store, states)
for room_id, states in room_ids_to_states.iteritems():
for room_id, states in iteritems(room_ids_to_states):
hosts = yield state_handler.get_current_hosts_in_room(room_id)
hosts_and_states.append((hosts, states))
for user_id, states in users_to_states.iteritems():
for user_id, states in iteritems(users_to_states):
host = get_domain_from_id(user_id)
hosts_and_states.append(([host], states))
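
The other theme in this file is swapping the global twisted reactor for hs.get_reactor(), which lets tests drive timers deterministically. For instance, with twisted's test Clock, which implements the same callLater interface:

from twisted.internet.task import Clock

# illustration only: deterministic firing of a 24h timer like clear_purge
clock = Clock()
fired = []
clock.callLater(24 * 3600, lambda: fired.append("clear_purge ran"))
clock.advance(24 * 3600)
assert fired == ["clear_purge ran"]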

View File

@@ -24,7 +24,7 @@ from synapse.api.errors import (
from synapse.http.client import CaptchaServerHttpClient
from synapse import types
from synapse.types import UserID, create_requester, RoomID, RoomAlias
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.async import Linearizer
from synapse.util.threepids import check_3pid_allowed
from ._base import BaseHandler
@@ -34,6 +34,11 @@ logger = logging.getLogger(__name__)
class RegistrationHandler(BaseHandler):
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
super(RegistrationHandler, self).__init__(hs)
self.auth = hs.get_auth()
@@ -49,6 +54,7 @@ class RegistrationHandler(BaseHandler):
self._generate_user_id_linearizer = Linearizer(
name="_generate_user_id_linearizer",
)
self._server_notices_mxid = hs.config.server_notices_mxid
@defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None,
@@ -133,7 +139,6 @@ class RegistrationHandler(BaseHandler):
Raises:
RegistrationError if there was a problem registering.
"""
yield run_on_reactor()
password_hash = None
if password:
password_hash = yield self.auth_handler().hash(password)
@@ -338,6 +343,14 @@ class RegistrationHandler(BaseHandler):
yield identity_handler.bind_threepid(c, user_id)
def check_user_id_not_appservice_exclusive(self, user_id, allowed_appservice=None):
# don't allow people to register the server notices mxid
if self._server_notices_mxid is not None:
if user_id == self._server_notices_mxid:
raise SynapseError(
400, "This user ID is reserved.",
errcode=Codes.EXCLUSIVE
)
# valid user IDs must not clash with any user ID namespaces claimed by
# application services.
services = self.store.get_app_services()
@@ -417,8 +430,6 @@ class RegistrationHandler(BaseHandler):
Raises:
RegistrationError if there was a problem registering.
"""
yield run_on_reactor()
if localpart is None:
raise SynapseError(400, "Request must include user id")

View File

@@ -23,7 +23,7 @@ from synapse.types import UserID, RoomAlias, RoomID, RoomStreamToken
from synapse.api.constants import (
EventTypes, JoinRules, RoomCreationPreset
)
from synapse.api.errors import AuthError, StoreError, SynapseError
from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
from synapse.util import stringutils
from synapse.visibility import filter_events_for_client
@@ -68,14 +68,27 @@ class RoomCreationHandler(BaseHandler):
self.event_creation_handler = hs.get_event_creation_handler()
@defer.inlineCallbacks
def create_room(self, requester, config, ratelimit=True):
def create_room(self, requester, config, ratelimit=True,
creator_join_profile=None):
""" Creates a new room.
Args:
requester (Requester): The user who requested the room creation.
requester (synapse.types.Requester):
The user who requested the room creation.
config (dict): A dict of configuration options.
ratelimit (bool): set to False to disable the rate limiter
creator_join_profile (dict|None):
Set to override the displayname and avatar for the creating
user in this room. If unset, displayname and avatar will be
derived from the user's profile. If set, should contain the
values to go in the body of the 'join' event (typically
`avatar_url` and/or `displayname`).
Returns:
The new room ID.
Deferred[dict]:
a dict containing the keys `room_id` and, if an alias was
requested, `room_alias`.
Raises:
SynapseError if the room ID couldn't be stored, or something went
horribly wrong.
@@ -102,7 +115,11 @@ class RoomCreationHandler(BaseHandler):
)
if mapping:
raise SynapseError(400, "Room alias already taken")
raise SynapseError(
400,
"Room alias already taken",
Codes.ROOM_IN_USE
)
else:
room_alias = None
@@ -113,6 +130,10 @@ class RoomCreationHandler(BaseHandler):
except Exception:
raise SynapseError(400, "Invalid user_id: %s" % (i,))
yield self.event_creation_handler.assert_accepted_privacy_policy(
requester,
)
invite_3pid_list = config.get("invite_3pid", [])
visibility = config.get("visibility", None)
@@ -176,7 +197,8 @@ class RoomCreationHandler(BaseHandler):
initial_state=initial_state,
creation_content=creation_content,
room_alias=room_alias,
power_level_content_override=config.get("power_level_content_override", {})
power_level_content_override=config.get("power_level_content_override", {}),
creator_join_profile=creator_join_profile,
)
if "name" in config:
@@ -256,6 +278,7 @@ class RoomCreationHandler(BaseHandler):
creation_content,
room_alias,
power_level_content_override,
creator_join_profile,
):
def create(etype, content, **kwargs):
e = {
@@ -299,6 +322,7 @@ class RoomCreationHandler(BaseHandler):
room_id,
"join",
ratelimit=False,
content=creator_join_profile,
)
# We treat the power levels override specially as this needs to be one
@@ -435,7 +459,7 @@ class RoomContextHandler(BaseHandler):
state = yield self.store.get_state_for_events(
[last_event_id], None
)
results["state"] = state[last_event_id].values()
results["state"] = list(state[last_event_id].values())
results["start"] = now_token.copy_and_replace(
"room_key", results["start"]

View File

@@ -15,6 +15,7 @@
from twisted.internet import defer
from six import iteritems
from six.moves import range
from ._base import BaseHandler
@@ -307,7 +308,7 @@ class RoomListHandler(BaseHandler):
)
event_map = yield self.store.get_events([
event_id for key, event_id in current_state_ids.iteritems()
event_id for key, event_id in iteritems(current_state_ids)
if key[0] in (
EventTypes.JoinRules,
EventTypes.Name,

View File

@@ -17,11 +17,14 @@
import abc
import logging
from six.moves import http_client
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from twisted.internet import defer
from unpaddedbase64 import decode_base64
import synapse.server
import synapse.types
from synapse.api.constants import (
EventTypes, Membership,
@@ -46,6 +49,11 @@ class RoomMemberHandler(object):
__metaclass__ = abc.ABCMeta
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
@@ -63,6 +71,7 @@ class RoomMemberHandler(object):
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
self._server_notices_mxid = self.config.server_notices_mxid
@abc.abstractmethod
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
@@ -290,11 +299,26 @@ class RoomMemberHandler(object):
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if effective_membership_state == "invite":
if effective_membership_state == Membership.INVITE:
# block any attempts to invite the server notices mxid
if target.to_string() == self._server_notices_mxid:
raise SynapseError(
http_client.FORBIDDEN,
"Cannot invite this user",
)
block_invite = False
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if (self._server_notices_mxid is not None and
requester.user.to_string() == self._server_notices_mxid):
# allow the server notices mxid to send invites
is_requester_admin = True
else:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
if self.config.block_non_admin_invites:
logger.info(
@@ -349,6 +373,20 @@ class RoomMemberHandler(object):
if same_sender and same_membership and same_content:
defer.returnValue(old_state)
# we don't allow people to reject invites to the server notice
# room, but they can leave it once they are joined.
if (
old_membership == Membership.INVITE and
effective_membership_state == Membership.LEAVE
):
is_blocked = yield self._is_server_notice_room(room_id)
if is_blocked:
raise SynapseError(
http_client.FORBIDDEN,
"You cannot reject this invite",
errcode=Codes.CANNOT_LEAVE_SERVER_NOTICE_ROOM,
)
is_host_in_room = yield self._is_host_in_room(current_state_ids)
if effective_membership_state == Membership.JOIN:
@@ -844,6 +882,13 @@ class RoomMemberHandler(object):
defer.returnValue(False)
@defer.inlineCallbacks
def _is_server_notice_room(self, room_id):
if self._server_notices_mxid is None:
defer.returnValue(False)
user_ids = yield self.store.get_users_in_room(room_id)
defer.returnValue(self._server_notices_mxid in user_ids)
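
Pulling the scattered server-notices rules of this file together into one self-contained sketch (plain exceptions stand in for SynapseError; membership values are the string constants used above):

FORBIDDEN = 403

def check_notices_rules(target, membership, old_membership,
                        server_notices_mxid, in_notices_room):
    # nobody may invite the server notices user
    if membership == "invite" and target == server_notices_mxid:
        raise PermissionError(FORBIDDEN, "Cannot invite this user")
    # users may leave the notices room once joined, but may not
    # reject the initial invite
    if (old_membership == "invite" and membership == "leave"
            and in_notices_room):
        raise PermissionError(FORBIDDEN, "You cannot reject this invite")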
class RoomMemberMasterHandler(RoomMemberHandler):
def __init__(self, hs):

View File

@@ -64,6 +64,13 @@ class SearchHandler(BaseHandler):
except Exception:
raise SynapseError(400, "Invalid batch")
logger.info(
"Search batch properties: %r, %r, %r",
batch_group, batch_group_key, batch_token,
)
logger.info("Search content: %s", content)
try:
room_cat = content["search_categories"]["room_events"]
@@ -271,6 +278,8 @@ class SearchHandler(BaseHandler):
# We should never get here due to the guard earlier.
raise NotImplementedError()
logger.info("Found %d events to return", len(allowed_events))
# If client has asked for "context" for each event (i.e. some surrounding
# events and state), fetch that
if event_context is not None:
@@ -282,6 +291,11 @@ class SearchHandler(BaseHandler):
event.room_id, event.event_id, before_limit, after_limit
)
logger.info(
"Context for search returned %d and %d events",
len(res["events_before"]), len(res["events_after"]),
)
res["events_before"] = yield filter_events_for_client(
self.store, user.to_string(), res["events_before"]
)
@@ -348,7 +362,7 @@ class SearchHandler(BaseHandler):
rooms = set(e.room_id for e in allowed_events)
for room_id in rooms:
state = yield self.state_handler.get_current_state(room_id)
state_results[room_id] = state.values()
state_results[room_id] = list(state.values())
state_results.values()

View File

@@ -28,6 +28,8 @@ import collections
import logging
import itertools
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -143,7 +145,7 @@ class SyncResult(collections.namedtuple("SyncResult", [
"invited", # InvitedSyncResult for each invited room.
"archived", # ArchivedSyncResult for each archived room.
"to_device", # List of direct messages for the device.
"device_lists", # List of user_ids whose devices have chanegd
"device_lists", # List of user_ids whose devices have changed
"device_one_time_keys_count", # Dict of algorithm to count for one time keys
# for this device
"groups",
@@ -275,7 +277,7 @@ class SyncHandler(object):
# result returned by the event source is poor form (it might cache
# the object)
room_id = event["room_id"]
event_copy = {k: v for (k, v) in event.iteritems()
event_copy = {k: v for (k, v) in iteritems(event)
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
@@ -294,7 +296,7 @@ class SyncHandler(object):
for event in receipts:
room_id = event["room_id"]
# exclude room id, as above
event_copy = {k: v for (k, v) in event.iteritems()
event_copy = {k: v for (k, v) in iteritems(event)
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
@@ -325,7 +327,7 @@ class SyncHandler(object):
current_state_ids = frozenset()
if any(e.is_state() for e in recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
current_state_ids = frozenset(itervalues(current_state_ids))
recents = yield filter_events_for_client(
self.store,
@@ -354,12 +356,24 @@ class SyncHandler(object):
since_key = since_token.room_key
while limited and len(recents) < timeline_limit and max_repeat:
events, end_key = yield self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
# If we have a since_key then we are trying to get any events
# that have happened since `since_key` up to `end_key`, so we
# can just use `get_room_events_stream_for_room`.
# Otherwise, we want to return the last N events in the room
# in topological ordering.
if since_key:
events, end_key = yield self.store.get_room_events_stream_for_room(
room_id,
limit=load_limit + 1,
from_key=since_key,
to_key=end_key,
)
else:
events, end_key = yield self.store.get_recent_events_for_room(
room_id,
limit=load_limit + 1,
end_token=end_key,
)
loaded_recents = sync_config.filter_collection.filter_room_timeline(
events
)
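
Reduced to its branch structure (store calls stubbed out, as a sketch rather than the handler itself): an incremental sync pages the room forward in stream order, while an initial sync takes the last N events in topological order.

def load_recents(store, room_id, since_key, end_key, limit):
    if since_key:
        # incremental sync: page forward through the stream
        return store.get_room_events_stream_for_room(
            room_id, limit=limit, from_key=since_key, to_key=end_key)
    # initial sync: last N events in topological order
    return store.get_recent_events_for_room(
        room_id, limit=limit, end_token=end_key)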
@@ -370,7 +384,7 @@ class SyncHandler(object):
current_state_ids = frozenset()
if any(e.is_state() for e in loaded_recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
current_state_ids = frozenset(itervalues(current_state_ids))
loaded_recents = yield filter_events_for_client(
self.store,
@@ -429,7 +443,11 @@ class SyncHandler(object):
Returns:
A Deferred map from ((type, state_key)->Event)
"""
last_events, token = yield self.store.get_recent_events_for_room(
# FIXME this claims to get the state at a stream position, but
# get_recent_events_for_room operates by topo ordering. This therefore
# does not reliably give you the state at the given stream position.
# (https://github.com/matrix-org/synapse/issues/3305)
last_events, _ = yield self.store.get_recent_events_for_room(
room_id, end_token=stream_position.room_key, limit=1,
)
@@ -523,11 +541,11 @@ class SyncHandler(object):
state = {}
if state_ids:
state = yield self.store.get_events(state_ids.values())
state = yield self.store.get_events(list(state_ids.values()))
defer.returnValue({
(e.type, e.state_key): e
for e in sync_config.filter_collection.filter_room_state(state.values())
for e in sync_config.filter_collection.filter_room_state(list(state.values()))
})
@defer.inlineCallbacks
@@ -876,7 +894,7 @@ class SyncHandler(object):
presence.extend(states)
# Deduplicate the presence entries so that there's at most one per user
presence = {p.user_id: p for p in presence}.values()
presence = list({p.user_id: p for p in presence}.values())
presence = sync_config.filter_collection.filter_presence(
presence
@@ -972,7 +990,7 @@ class SyncHandler(object):
if since_token:
for joined_sync in sync_result_builder.joined:
it = itertools.chain(
joined_sync.timeline.events, joined_sync.state.itervalues()
joined_sync.timeline.events, itervalues(joined_sync.state)
)
for event in it:
if event.type == EventTypes.Member:
@@ -1028,7 +1046,13 @@ class SyncHandler(object):
Returns:
Deferred(tuple): Returns a tuple of the form:
`([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)`
`(room_entries, invited_rooms, newly_joined_rooms, newly_left_rooms)`
where:
room_entries is a list[RoomSyncResultBuilder]
invited_rooms is a list[InvitedSyncResult]
newly_joined_rooms is a list[str] of room ids
newly_left_rooms is a list[str] of room ids
"""
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
@@ -1050,7 +1074,7 @@ class SyncHandler(object):
newly_left_rooms = []
room_entries = []
invited = []
for room_id, events in mem_change_events_by_room_id.iteritems():
for room_id, events in iteritems(mem_change_events_by_room_id):
non_joins = [e for e in events if e.membership != Membership.JOIN]
has_join = len(non_joins) != len(events)

View File

@@ -19,9 +19,9 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.storage.roommember import ProfileInfo
from synapse.util.metrics import Measure
from synapse.util.async import sleep
from synapse.types import get_localpart_from_id
from six import iteritems
logger = logging.getLogger(__name__)
@@ -122,6 +122,13 @@ class UserDirectoryHandler(object):
user_id, profile.display_name, profile.avatar_url, None,
)
@defer.inlineCallbacks
def handle_user_deactivated(self, user_id):
"""Called when a user ID is deactivated
"""
yield self.store.remove_from_user_dir(user_id)
yield self.store.remove_from_user_in_public_room(user_id)
@defer.inlineCallbacks
def _unsafe_process(self):
# If self.pos is None then means we haven't fetched it from DB
@@ -166,7 +173,7 @@ class UserDirectoryHandler(object):
logger.info("Handling room %d/%d", num_processed_rooms + 1, len(room_ids))
yield self._handle_initial_room(room_id)
num_processed_rooms += 1
yield sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
logger.info("Processed all rooms.")
@@ -180,7 +187,7 @@ class UserDirectoryHandler(object):
logger.info("Handling user %d/%d", num_processed_users + 1, len(user_ids))
yield self._handle_local_user(user_id)
num_processed_users += 1
yield sleep(self.INITIAL_USER_SLEEP_MS / 1000.)
yield self.clock.sleep(self.INITIAL_USER_SLEEP_MS / 1000.)
logger.info("Processed all users")
@@ -228,7 +235,7 @@ class UserDirectoryHandler(object):
count = 0
for user_id in user_ids:
if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
yield sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
if not self.is_mine_id(user_id):
count += 1
@@ -243,7 +250,7 @@ class UserDirectoryHandler(object):
continue
if count % self.INITIAL_ROOM_SLEEP_COUNT == 0:
yield sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
yield self.clock.sleep(self.INITIAL_ROOM_SLEEP_MS / 1000.)
count += 1
user_set = (user_id, other_user_id)
@@ -403,7 +410,7 @@ class UserDirectoryHandler(object):
if change:
users_with_profile = yield self.state.get_current_user_in_room(room_id)
for user_id, profile in users_with_profile.iteritems():
for user_id, profile in iteritems(users_with_profile):
yield self._handle_new_user(room_id, user_id, profile)
else:
users = yield self.store.get_users_in_public_due_to_room(room_id)
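The sleep changes here trade the free function from synapse.util.async for the homeserver's Clock, so delays are scheduled on an explicitly-passed reactor. A hedged sketch of what such a reactor-backed sleep amounts to; the real Clock.sleep may differ in detail (logcontext handling, for instance):

from twisted.internet import defer

def make_sleep(reactor):
    # sketch only: resolve a Deferred after `seconds` on the given
    # reactor, instead of importing the global reactor
    def sleep(seconds):
        d = defer.Deferred()
        reactor.callLater(seconds, d.callback, None)
        return d
    return sleep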

View File

@@ -13,6 +13,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from twisted.internet.defer import CancelledError
from twisted.python import failure
@@ -34,3 +36,14 @@ def cancelled_to_request_timed_out_error(value, timeout):
value.trap(CancelledError)
raise RequestTimedOutError()
return value
ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
def redact_uri(uri):
"""Strips access tokens from the uri replaces with <redacted>"""
return ACCESS_TOKEN_RE.sub(
br'\1<redacted>\3',
uri
)
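A quick check of the behaviour, derived from the regex above (the pattern also catches %5F/%5f-encoded underscores):

# assuming redact_uri as defined above
assert (redact_uri(b"/sync?access_token=secret&since=s1")
        == b"/sync?access_token=<redacted>&since=s1")
assert (redact_uri(b"/sync?access%5Ftoken=secret")
        == b"/sync?access%5Ftoken=<redacted>")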

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.http.server import wrap_request_handler
from synapse.http.server import wrap_json_request_handler
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
@@ -42,14 +42,13 @@ class AdditionalResource(Resource):
Resource.__init__(self)
self._handler = handler
# these are required by the request_handler wrapper
self.version_string = hs.version_string
# required by the request_handler wrapper
self.clock = hs.get_clock()
def render(self, request):
self._async_render(request)
return NOT_DONE_YET
@wrap_request_handler
@wrap_json_request_handler
def _async_render(self, request):
return self._handler(request)

View File

@@ -19,11 +19,10 @@ from OpenSSL.SSL import VERIFY_NONE
from synapse.api.errors import (
CodeMessageException, MatrixCodeMessageException, SynapseError, Codes,
)
from synapse.http import cancelled_to_request_timed_out_error
from synapse.http import cancelled_to_request_timed_out_error, redact_uri
from synapse.util.async import add_timeout_to_deferred
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.logcontext import make_deferred_yieldable
import synapse.metrics
from synapse.http.endpoint import SpiderEndpoint
from canonicaljson import encode_canonical_json
@@ -42,23 +41,17 @@ from twisted.web._newclient import ResponseDone
from six import StringIO
import simplejson as json
from prometheus_client import Counter
from canonicaljson import json
import logging
import urllib
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
outgoing_requests_counter = metrics.register_counter(
"requests",
labels=["method"],
)
incoming_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
)
outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
incoming_responses_counter = Counter("synapse_http_client_responses", "",
["method", "code"])
class SimpleHttpClient(object):
@@ -95,33 +88,34 @@ class SimpleHttpClient(object):
def request(self, method, uri, *args, **kwargs):
# A small wrapper around self.agent.request() so we can easily attach
# counters to it
outgoing_requests_counter.inc(method)
outgoing_requests_counter.labels(method).inc()
logger.info("Sending request %s %s", method, uri)
# log request but strip `access_token` (AS requests for example include this)
logger.info("Sending request %s %s", method, redact_uri(uri))
try:
request_deferred = self.agent.request(
method, uri, *args, **kwargs
)
add_timeout_to_deferred(
request_deferred,
60, cancelled_to_request_timed_out_error,
request_deferred, 60, self.hs.get_reactor(),
cancelled_to_request_timed_out_error,
)
response = yield make_deferred_yieldable(request_deferred)
incoming_responses_counter.inc(method, response.code)
incoming_responses_counter.labels(method, response.code).inc()
logger.info(
"Received response to %s %s: %s",
method, uri, response.code
method, redact_uri(uri), response.code
)
defer.returnValue(response)
except Exception as e:
incoming_responses_counter.inc(method, "ERR")
incoming_responses_counter.labels(method, "ERR").inc()
logger.info(
"Error sending request to %s %s: %s %s",
method, uri, type(e).__name__, e.message
method, redact_uri(uri), type(e).__name__, e.message
)
raise e
raise
@defer.inlineCallbacks
def post_urlencoded_get_json(self, uri, args={}, headers=None):

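These counter changes follow the repo-wide move from the homegrown synapse.metrics registration API to prometheus_client, where label values are bound via .labels(...) before incrementing instead of being passed to inc(). In isolation (metric name invented):

from prometheus_client import Counter

requests = Counter("myapp_requests", "", ["method"])

requests.labels("GET").inc()   # new style: bind the label, then inc()
# requests.inc("GET")          # old synapse-metrics style, now removed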
View File

@@ -12,8 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from twisted.internet.endpoints import HostnameEndpoint, wrapClientTLS
from twisted.internet import defer, reactor
from twisted.internet import defer
from twisted.internet.error import ConnectError
from twisted.names import client, dns
from twisted.names.error import DNSNameError, DomainError
@@ -38,6 +40,71 @@ _Server = collections.namedtuple(
)
def parse_server_name(server_name):
"""Split a server name into host/port parts.
Args:
server_name (str): server name to parse
Returns:
Tuple[str, int|None]: host/port parts.
Raises:
ValueError if the server name could not be parsed.
"""
try:
if server_name[-1] == ']':
# ipv6 literal, hopefully
return server_name, None
domain_port = server_name.rsplit(":", 1)
domain = domain_port[0]
port = int(domain_port[1]) if domain_port[1:] else None
return domain, port
except Exception:
raise ValueError("Invalid server name '%s'" % server_name)
VALID_HOST_REGEX = re.compile(
"\\A[0-9a-zA-Z.-]+\\Z",
)
def parse_and_validate_server_name(server_name):
"""Split a server name into host/port parts and do some basic validation.
Args:
server_name (str): server name to parse
Returns:
Tuple[str, int|None]: host/port parts.
Raises:
ValueError if the server name could not be parsed.
"""
host, port = parse_server_name(server_name)
# these tests don't need to be bulletproof as we'll find out soon enough
# if somebody is giving us invalid data. What we *do* need is to be sure
# that nobody is sneaking IP literals in that look like hostnames, etc.
# look for ipv6 literals
if host[0] == '[':
if host[-1] != ']':
raise ValueError("Mismatched [...] in server name '%s'" % (
server_name,
))
return host, port
# otherwise it should only be alphanumerics.
if not VALID_HOST_REGEX.match(host):
raise ValueError("Server name '%s' contains invalid characters" % (
server_name,
))
return host, port
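Some worked examples of the two parsers, derived directly from the definitions above:

assert parse_server_name("matrix.org") == ("matrix.org", None)
assert parse_server_name("matrix.org:8448") == ("matrix.org", 8448)
assert parse_server_name("[2001:db8::1]:8448") == ("[2001:db8::1]", 8448)

# parse_and_validate_server_name additionally rejects hosts outside
# [0-9a-zA-Z.-] unless they are bracketed IPv6 literals:
#   parse_and_validate_server_name("under_score")  # raises ValueError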
def matrix_federation_endpoint(reactor, destination, ssl_context_factory=None,
timeout=None):
"""Construct an endpoint for the given matrix destination.
@@ -50,9 +117,7 @@ def matrix_federation_endpoint(reactor, destination, ssl_context_factory=None,
timeout (int): connection timeout in seconds
"""
domain_port = destination.split(":")
domain = domain_port[0]
port = int(domain_port[1]) if domain_port[1:] else None
domain, port = parse_server_name(destination)
endpoint_kw_args = {}
@@ -74,21 +139,22 @@ def matrix_federation_endpoint(reactor, destination, ssl_context_factory=None,
reactor, "matrix", domain, protocol="tcp",
default_port=default_port, endpoint=transport_endpoint,
endpoint_kw_args=endpoint_kw_args
))
), reactor)
else:
return _WrappingEndpointFac(transport_endpoint(
reactor, domain, port, **endpoint_kw_args
))
), reactor)
class _WrappingEndpointFac(object):
def __init__(self, endpoint_fac):
def __init__(self, endpoint_fac, reactor):
self.endpoint_fac = endpoint_fac
self.reactor = reactor
@defer.inlineCallbacks
def connect(self, protocolFactory):
conn = yield self.endpoint_fac.connect(protocolFactory)
conn = _WrappedConnection(conn)
conn = _WrappedConnection(conn, self.reactor)
defer.returnValue(conn)
@@ -98,9 +164,10 @@ class _WrappedConnection(object):
"""
__slots__ = ["conn", "last_request"]
def __init__(self, conn):
def __init__(self, conn, reactor):
object.__setattr__(self, "conn", conn)
object.__setattr__(self, "last_request", time.time())
self._reactor = reactor
def __getattr__(self, name):
return getattr(self.conn, name)
@@ -131,14 +198,14 @@ class _WrappedConnection(object):
# Time this connection out if we haven't sent a request in the last
# N minutes
# TODO: Cancel the previous callLater?
reactor.callLater(3 * 60, self._time_things_out_maybe)
self._reactor.callLater(3 * 60, self._time_things_out_maybe)
d = self.conn.request(request)
def update_request_time(res):
self.last_request = time.time()
# TODO: Cancel the previous callLater?
reactor.callLater(3 * 60, self._time_things_out_maybe)
self._reactor.callLater(3 * 60, self._time_things_out_maybe)
return res
d.addCallback(update_request_time)
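One payoff of threading the reactor through _WrappingEndpointFac and _WrappedConnection rather than importing the global one: the three-minute idle timeout becomes testable with twisted's fake clock. A small demonstration of the mechanism (not code from the diff):

from twisted.internet.task import Clock

fired = []
reactor = Clock()
reactor.callLater(3 * 60, fired.append, "idle-timeout")
reactor.advance(3 * 60)          # drive time forward without waiting
assert fired == ["idle-timeout"]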

View File

@@ -22,12 +22,12 @@ from twisted.web._newclient import ResponseDone
from synapse.http import cancelled_to_request_timed_out_error
from synapse.http.endpoint import matrix_federation_endpoint
import synapse.metrics
from synapse.util.async import sleep, add_timeout_to_deferred
from synapse.util.async import add_timeout_to_deferred
from synapse.util import logcontext
from synapse.util.logcontext import make_deferred_yieldable
import synapse.util.retryutils
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
from synapse.api.errors import (
SynapseError, Codes, HttpResponseException, FederationDeniedError,
@@ -36,26 +36,23 @@ from synapse.api.errors import (
from signedjson.sign import sign_json
import cgi
import simplejson as json
import logging
import random
import sys
import urllib
from six.moves.urllib import parse as urlparse
from six import string_types
from prometheus_client import Counter
logger = logging.getLogger(__name__)
outbound_logger = logging.getLogger("synapse.http.outbound")
metrics = synapse.metrics.get_metrics_for(__name__)
outgoing_requests_counter = metrics.register_counter(
"requests",
labels=["method"],
)
incoming_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
)
outgoing_requests_counter = Counter("synapse_http_matrixfederationclient_requests",
"", ["method"])
incoming_responses_counter = Counter("synapse_http_matrixfederationclient_responses",
"", ["method", "code"])
MAX_LONG_RETRIES = 10
@@ -195,6 +192,7 @@ class MatrixFederationHttpClient(object):
add_timeout_to_deferred(
request_deferred,
timeout / 1000. if timeout else 60,
self.hs.get_reactor(),
cancelled_to_request_timed_out_error,
)
response = yield make_deferred_yieldable(
@@ -236,7 +234,7 @@ class MatrixFederationHttpClient(object):
delay = min(delay, 2)
delay *= random.uniform(0.8, 1.4)
yield sleep(delay)
yield self.clock.sleep(delay)
retries_left -= 1
else:
raise
@@ -262,14 +260,35 @@ class MatrixFederationHttpClient(object):
defer.returnValue(response)
def sign_request(self, destination, method, url_bytes, headers_dict,
content=None):
content=None, destination_is=None):
"""
Signs a request by adding an Authorization header to headers_dict
Args:
destination (bytes|None): The destination home server of the request.
May be None if the destination is an identity server, in which case
destination_is must be non-None.
method (bytes): The HTTP method of the request
url_bytes (bytes): The URI path of the request
headers_dict (dict): Dictionary of request headers to append to
content (bytes): The body of the request
destination_is (bytes): As 'destination', but if the destination is an
identity server
Returns:
None
"""
request = {
"method": method,
"uri": url_bytes,
"origin": self.server_name,
"destination": destination,
}
if destination is not None:
request["destination"] = destination
if destination_is is not None:
request["destination_is"] = destination_is
if content is not None:
request["content"] = content
@@ -553,7 +572,7 @@ class MatrixFederationHttpClient(object):
encoded_args = {}
for k, vs in args.items():
if isinstance(vs, basestring):
if isinstance(vs, string_types):
vs = [vs]
encoded_args[k] = [v.encode("UTF-8") for v in vs]
@@ -668,7 +687,7 @@ def check_content_type_is_json(headers):
RuntimeError if the Content-Type header is missing or is not application/json
"""
c_type = headers.getRawHeaders("Content-Type")
c_type = headers.getRawHeaders(b"Content-Type")
if c_type is None:
raise RuntimeError(
"No Content-Type header"
@@ -685,7 +704,7 @@ def check_content_type_is_json(headers):
def encode_query_args(args):
encoded_args = {}
for k, vs in args.items():
if isinstance(vs, basestring):
if isinstance(vs, string_types):
vs = [vs]
encoded_args[k] = [v.encode("UTF-8") for v in vs]

View File

@@ -0,0 +1,277 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from prometheus_client.core import Counter, Histogram
from synapse.metrics import LaterGauge
from synapse.util.logcontext import LoggingContext
logger = logging.getLogger(__name__)
# total number of responses served, split by method/servlet/tag
response_count = Counter(
"synapse_http_server_response_count", "", ["method", "servlet", "tag"]
)
requests_counter = Counter(
"synapse_http_server_requests_received", "", ["method", "servlet"]
)
outgoing_responses_counter = Counter(
"synapse_http_server_responses", "", ["method", "code"]
)
response_timer = Histogram(
"synapse_http_server_response_time_seconds", "sec", ["method", "servlet", "tag"]
)
response_ru_utime = Counter(
"synapse_http_server_response_ru_utime_seconds", "sec", ["method", "servlet", "tag"]
)
response_ru_stime = Counter(
"synapse_http_server_response_ru_stime_seconds", "sec", ["method", "servlet", "tag"]
)
response_db_txn_count = Counter(
"synapse_http_server_response_db_txn_count", "", ["method", "servlet", "tag"]
)
# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
response_db_txn_duration = Counter(
"synapse_http_server_response_db_txn_duration_seconds",
"",
["method", "servlet", "tag"],
)
# seconds spent waiting for a db connection, when processing this request
response_db_sched_duration = Counter(
"synapse_http_server_response_db_sched_duration_seconds",
"",
["method", "servlet", "tag"],
)
# size in bytes of the response written
response_size = Counter(
"synapse_http_server_response_size", "", ["method", "servlet", "tag"]
)
# In flight metrics are incremented while the requests are in flight, rather
# than when the response was written.
in_flight_requests_ru_utime = Counter(
"synapse_http_server_in_flight_requests_ru_utime_seconds",
"",
["method", "servlet"],
)
in_flight_requests_ru_stime = Counter(
"synapse_http_server_in_flight_requests_ru_stime_seconds",
"",
["method", "servlet"],
)
in_flight_requests_db_txn_count = Counter(
"synapse_http_server_in_flight_requests_db_txn_count", "", ["method", "servlet"]
)
# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
in_flight_requests_db_txn_duration = Counter(
"synapse_http_server_in_flight_requests_db_txn_duration_seconds",
"",
["method", "servlet"],
)
# seconds spent waiting for a db connection, when processing this request
in_flight_requests_db_sched_duration = Counter(
"synapse_http_server_in_flight_requests_db_sched_duration_seconds",
"",
["method", "servlet"],
)
# The set of all in flight requests, set[RequestMetrics]
_in_flight_requests = set()
def _get_in_flight_counts():
"""Returns a count of all in flight requests by (method, server_name)
Returns:
dict[tuple[str, str], int]
"""
# Cast to a list to prevent it changing while the Prometheus
# thread is collecting metrics
reqs = list(_in_flight_requests)
for rm in reqs:
rm.update_metrics()
# Map from (method, name) -> int, the number of in flight requests of that
# type
counts = {}
for rm in reqs:
key = (rm.method, rm.name,)
counts[key] = counts.get(key, 0) + 1
return counts
LaterGauge(
"synapse_http_server_in_flight_requests_count",
"",
["method", "servlet"],
_get_in_flight_counts,
)
class RequestMetrics(object):
def start(self, time_sec, name, method):
self.start = time_sec
self.start_context = LoggingContext.current_context()
self.name = name
self.method = method
self._request_stats = _RequestStats.from_context(self.start_context)
_in_flight_requests.add(self)
def stop(self, time_sec, request):
_in_flight_requests.discard(self)
context = LoggingContext.current_context()
tag = ""
if context:
tag = context.tag
if context != self.start_context:
logger.warn(
"Context have unexpectedly changed %r, %r",
context, self.start_context
)
return
outgoing_responses_counter.labels(request.method, str(request.code)).inc()
response_count.labels(request.method, self.name, tag).inc()
response_timer.labels(request.method, self.name, tag).observe(
time_sec - self.start
)
ru_utime, ru_stime = context.get_resource_usage()
response_ru_utime.labels(request.method, self.name, tag).inc(ru_utime)
response_ru_stime.labels(request.method, self.name, tag).inc(ru_stime)
response_db_txn_count.labels(request.method, self.name, tag).inc(
context.db_txn_count
)
response_db_txn_duration.labels(request.method, self.name, tag).inc(
context.db_txn_duration_sec
)
response_db_sched_duration.labels(request.method, self.name, tag).inc(
context.db_sched_duration_sec
)
response_size.labels(request.method, self.name, tag).inc(request.sentLength)
# We always call this at the end to ensure that we update the metrics
# regardless of whether there was a call to /metrics while the request
# was in flight.
self.update_metrics()
def update_metrics(self):
"""Updates the in flight metrics with values from this request.
"""
diff = self._request_stats.update(self.start_context)
in_flight_requests_ru_utime.labels(self.method, self.name).inc(diff.ru_utime)
in_flight_requests_ru_stime.labels(self.method, self.name).inc(diff.ru_stime)
in_flight_requests_db_txn_count.labels(self.method, self.name).inc(
diff.db_txn_count
)
in_flight_requests_db_txn_duration.labels(self.method, self.name).inc(
diff.db_txn_duration_sec
)
in_flight_requests_db_sched_duration.labels(self.method, self.name).inc(
diff.db_sched_duration_sec
)
class _RequestStats(object):
"""Keeps tracks of various metrics for an in flight request.
"""
__slots__ = [
"ru_utime",
"ru_stime",
"db_txn_count",
"db_txn_duration_sec",
"db_sched_duration_sec",
]
def __init__(
self, ru_utime, ru_stime, db_txn_count, db_txn_duration_sec, db_sched_duration_sec
):
self.ru_utime = ru_utime
self.ru_stime = ru_stime
self.db_txn_count = db_txn_count
self.db_txn_duration_sec = db_txn_duration_sec
self.db_sched_duration_sec = db_sched_duration_sec
@staticmethod
def from_context(context):
ru_utime, ru_stime = context.get_resource_usage()
return _RequestStats(
ru_utime, ru_stime,
context.db_txn_count,
context.db_txn_duration_sec,
context.db_sched_duration_sec,
)
def update(self, context):
"""Updates the current values and returns the difference between the
old and new values.
Returns:
_RequestStats: The difference between the old and new values
"""
new = _RequestStats.from_context(context)
diff = _RequestStats(
new.ru_utime - self.ru_utime,
new.ru_stime - self.ru_stime,
new.db_txn_count - self.db_txn_count,
new.db_txn_duration_sec - self.db_txn_duration_sec,
new.db_sched_duration_sec - self.db_sched_duration_sec,
)
self.ru_utime = new.ru_utime
self.ru_stime = new.ru_stime
self.db_txn_count = new.db_txn_count
self.db_txn_duration_sec = new.db_txn_duration_sec
self.db_sched_duration_sec = new.db_sched_duration_sec
return diff
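The in-flight accounting above hinges on _RequestStats.update returning only the usage accrued since the previous snapshot, so the in_flight_requests_* Counters can be bumped repeatedly without double-counting. The same pattern reduced to a toy:

class Snapshot(object):
    def __init__(self, value):
        self.value = value

    def update(self, new_value):
        diff = new_value - self.value   # usage accrued since last call
        self.value = new_value
        return diff

s = Snapshot(0.0)
assert s.update(1.5) == 1.5   # first collection sees 1.5s of CPU time
assert s.update(2.0) == 0.5   # the next sees only the newly-accrued 0.5s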

View File

@@ -13,11 +13,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi
from six.moves import http_client
from synapse.api.errors import (
cs_exception, SynapseError, CodeMessageException, UnrecognizedRequestError, Codes
)
from synapse.http.request_metrics import (
requests_counter,
)
from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
from synapse.util.caches import intern_dict
from synapse.util.metrics import Measure
@@ -25,7 +29,7 @@ import synapse.metrics
import synapse.events
from canonicaljson import (
encode_canonical_json, encode_pretty_printed_json
encode_canonical_json, encode_pretty_printed_json, json
)
from twisted.internet import defer
@@ -37,182 +41,177 @@ from twisted.web.util import redirectTo
import collections
import logging
import urllib
import simplejson
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
# total number of responses served, split by method/servlet/tag
response_count = metrics.register_counter(
"response_count",
labels=["method", "servlet", "tag"],
alternative_names=(
# the following are all deprecated aliases for the same metric
metrics.name_prefix + x for x in (
"_requests",
"_response_time:count",
"_response_ru_utime:count",
"_response_ru_stime:count",
"_response_db_txn_count:count",
"_response_db_txn_duration:count",
)
)
)
requests_counter = metrics.register_counter(
"requests_received",
labels=["method", "servlet", ],
)
outgoing_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
)
response_timer = metrics.register_counter(
"response_time_seconds",
labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_time:total",
),
)
response_ru_utime = metrics.register_counter(
"response_ru_utime_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_ru_utime:total",
),
)
response_ru_stime = metrics.register_counter(
"response_ru_stime_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_ru_stime:total",
),
)
response_db_txn_count = metrics.register_counter(
"response_db_txn_count", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_db_txn_count:total",
),
)
# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
response_db_txn_duration = metrics.register_counter(
"response_db_txn_duration_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_db_txn_duration:total",
),
)
# seconds spent waiting for a db connection, when processing this request
response_db_sched_duration = metrics.register_counter(
"response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
)
# size in bytes of the response written
response_size = metrics.register_counter(
"response_size", labels=["method", "servlet", "tag"]
)
_next_request_id = 0
HTML_ERROR_TEMPLATE = """<!DOCTYPE html>
<html lang=en>
<head>
<meta charset="utf-8">
<title>Error {code}</title>
</head>
<body>
<p>{msg}</p>
</body>
</html>
"""
def request_handler(include_metrics=False):
"""Decorator for ``wrap_request_handler``"""
return lambda request_handler: wrap_request_handler(request_handler, include_metrics)
def wrap_json_request_handler(h):
"""Wraps a request handler method with exception handling.
Also adds logging as per wrap_request_handler_with_logging.
def wrap_request_handler(request_handler, include_metrics=False):
"""Wraps a method that acts as a request handler with the necessary logging
and exception handling.
The handler method must have a signature of "handle_foo(self, request)",
where "self" must have a "clock" attribute (and "request" must be a
SynapseRequest).
The method must have a signature of "handle_foo(self, request)". The
argument "self" must have "version_string" and "clock" attributes. The
argument "request" must be a twisted HTTP request.
The method must return a deferred. If the deferred succeeds we assume that
The handler must return a deferred. If the deferred succeeds we assume that
a response has been sent. If the deferred fails with a SynapseError we use
it to send a JSON response with the appropriate HTTP response code. If the
deferred fails with any other type of error we send a 500 response.
We insert a unique request-id into the logging context for this request and
log the response and duration for this request.
"""
@defer.inlineCallbacks
def wrapped_request_handler(self, request):
global _next_request_id
request_id = "%s-%s" % (request.method, _next_request_id)
_next_request_id += 1
try:
yield h(self, request)
except CodeMessageException as e:
code = e.code
if isinstance(e, SynapseError):
logger.info(
"%s SynapseError: %s - %s", request, code, e.msg
)
else:
logger.exception(e)
respond_with_json(
request, code, cs_exception(e), send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
)
except Exception:
# failure.Failure() fishes the original Failure out
# of our stack, and thus gives us a sensible stack
# trace.
f = failure.Failure()
logger.error(
"Failed handle request via %r: %r: %s",
h,
request,
f.getTraceback().rstrip(),
)
respond_with_json(
request,
500,
{
"error": "Internal server error",
"errcode": Codes.UNKNOWN,
},
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
)
return wrap_request_handler_with_logging(wrapped_request_handler)
def wrap_html_request_handler(h):
"""Wraps a request handler method with exception handling.
Also adds logging as per wrap_request_handler_with_logging.
The handler method must have a signature of "handle_foo(self, request)",
where "self" must have a "clock" attribute (and "request" must be a
SynapseRequest).
"""
def wrapped_request_handler(self, request):
d = defer.maybeDeferred(h, self, request)
d.addErrback(_return_html_error, request)
return d
return wrap_request_handler_with_logging(wrapped_request_handler)
def _return_html_error(f, request):
"""Sends an HTML error page corresponding to the given failure
Args:
f (twisted.python.failure.Failure):
request (twisted.web.iweb.IRequest):
"""
if f.check(CodeMessageException):
cme = f.value
code = cme.code
msg = cme.msg
if isinstance(cme, SynapseError):
logger.info(
"%s SynapseError: %s - %s", request, code, msg
)
else:
logger.error(
"Failed handle request %r: %s",
request,
f.getTraceback().rstrip(),
)
else:
code = http_client.INTERNAL_SERVER_ERROR
msg = "Internal server error"
logger.error(
"Failed handle request %r: %s",
request,
f.getTraceback().rstrip(),
)
body = HTML_ERROR_TEMPLATE.format(
code=code, msg=cgi.escape(msg),
).encode("utf-8")
request.setResponseCode(code)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%i" % (len(body),))
request.write(body)
finish_request(request)
def wrap_request_handler_with_logging(h):
"""Wraps a request handler to provide logging and metrics
The handler method must have a signature of "handle_foo(self, request)",
where "self" must have a "clock" attribute (and "request" must be a
SynapseRequest).
As well as calling `request.processing` (which will log the response and
duration for this request), the wrapped request handler will insert the
request id into the logging context.
"""
@defer.inlineCallbacks
def wrapped_request_handler(self, request):
"""
Args:
self:
request (synapse.http.site.SynapseRequest):
"""
request_id = request.get_request_id()
with LoggingContext(request_id) as request_context:
request_context.request = request_id
with Measure(self.clock, "wrapped_request_handler"):
request_metrics = RequestMetrics()
# we start the request metrics timer here with an initial stab
# at the servlet name. For most requests that name will be
# JsonResource (or a subclass), and JsonResource._async_render
# will update it once it picks a servlet.
servlet_name = self.__class__.__name__
request_metrics.start(self.clock, name=servlet_name)
with request.processing(servlet_name):
with PreserveLoggingContext(request_context):
d = defer.maybeDeferred(h, self, request)
request_context.request = request_id
with request.processing():
try:
with PreserveLoggingContext(request_context):
if include_metrics:
yield request_handler(self, request, request_metrics)
else:
requests_counter.inc(request.method, servlet_name)
yield request_handler(self, request)
except CodeMessageException as e:
code = e.code
if isinstance(e, SynapseError):
logger.info(
"%s SynapseError: %s - %s", request, code, e.msg
)
else:
logger.exception(e)
outgoing_responses_counter.inc(request.method, str(code))
respond_with_json(
request, code, cs_exception(e), send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
version_string=self.version_string,
)
except Exception:
# failure.Failure() fishes the original Failure out
# of our stack, and thus gives us a sensible stack
# trace.
f = failure.Failure()
logger.error(
"Failed handle request %s.%s on %r: %r: %s",
request_handler.__module__,
request_handler.__name__,
self,
request,
f.getTraceback().rstrip(),
)
respond_with_json(
request,
500,
{
"error": "Internal server error",
"errcode": Codes.UNKNOWN,
},
send_cors=True,
pretty_print=_request_user_agent_is_curl(request),
version_string=self.version_string,
)
finally:
try:
request_metrics.stop(
self.clock, request
)
except Exception as e:
logger.warn("Failed to stop metrics: %r", e)
# record the arrival of the request *after*
# dispatching to the handler, so that the handler
# can update the servlet name in the request
# metrics
requests_counter.labels(request.method,
request.request_metrics.name).inc()
yield d
return wrapped_request_handler
@@ -262,7 +261,6 @@ class JsonResource(HttpServer, resource.Resource):
self.canonical_json = canonical_json
self.clock = hs.get_clock()
self.path_regexs = {}
self.version_string = hs.version_string
self.hs = hs
def register_paths(self, method, path_patterns, callback):
@@ -278,13 +276,9 @@ class JsonResource(HttpServer, resource.Resource):
self._async_render(request)
return server.NOT_DONE_YET
# Disable metric reporting because _async_render does its own metrics.
# It does its own metric reporting because _async_render dispatches to
# a callback and it's the class name of that callback we want to report
# against rather than the JsonResource itself.
@request_handler(include_metrics=True)
@wrap_json_request_handler
@defer.inlineCallbacks
def _async_render(self, request, request_metrics):
def _async_render(self, request):
""" This gets called from render() every time someone sends us a request.
This checks if anyone has registered a callback for that method and
path.
@@ -296,9 +290,7 @@ class JsonResource(HttpServer, resource.Resource):
servlet_classname = servlet_instance.__class__.__name__
else:
servlet_classname = "%r" % callback
request_metrics.name = servlet_classname
requests_counter.inc(request.method, servlet_classname)
request.request_metrics.name = servlet_classname
# Now trigger the callback. If it returns a response, we send it
# here. If it throws an exception, that is handled by the wrapper
@@ -345,15 +337,12 @@ class JsonResource(HttpServer, resource.Resource):
def _send_response(self, request, code, response_json_object,
response_code_message=None):
outgoing_responses_counter.inc(request.method, str(code))
# TODO: Only enable CORS for the requests that need it.
respond_with_json(
request, code, response_json_object,
send_cors=True,
response_code_message=response_code_message,
pretty_print=_request_user_agent_is_curl(request),
version_string=self.version_string,
canonical_json=self.canonical_json,
)
@@ -386,54 +375,6 @@ def _unrecognised_request_handler(request):
raise UnrecognizedRequestError()
class RequestMetrics(object):
def start(self, clock, name):
self.start = clock.time_msec()
self.start_context = LoggingContext.current_context()
self.name = name
def stop(self, clock, request):
context = LoggingContext.current_context()
tag = ""
if context:
tag = context.tag
if context != self.start_context:
logger.warn(
"Context have unexpectedly changed %r, %r",
context, self.start_context
)
return
response_count.inc(request.method, self.name, tag)
response_timer.inc_by(
clock.time_msec() - self.start, request.method,
self.name, tag
)
ru_utime, ru_stime = context.get_resource_usage()
response_ru_utime.inc_by(
ru_utime, request.method, self.name, tag
)
response_ru_stime.inc_by(
ru_stime, request.method, self.name, tag
)
response_db_txn_count.inc_by(
context.db_txn_count, request.method, self.name, tag
)
response_db_txn_duration.inc_by(
context.db_txn_duration_ms / 1000., request.method, self.name, tag
)
response_db_sched_duration.inc_by(
context.db_sched_duration_ms / 1000., request.method, self.name, tag
)
response_size.inc_by(request.sentLength, request.method, self.name, tag)
class RootRedirect(resource.Resource):
"""Redirects the root '/' path to another path."""
@@ -452,7 +393,7 @@ class RootRedirect(resource.Resource):
def respond_with_json(request, code, json_object, send_cors=False,
response_code_message=None, pretty_print=False,
version_string="", canonical_json=True):
canonical_json=True):
# could alternatively use request.notifyFinish() and flip a flag when
# the Deferred fires, but since the flag is RIGHT THERE it seems like
# a waste.
@@ -468,18 +409,17 @@ def respond_with_json(request, code, json_object, send_cors=False,
if canonical_json or synapse.events.USE_FROZEN_DICTS:
json_bytes = encode_canonical_json(json_object)
else:
json_bytes = simplejson.dumps(json_object)
json_bytes = json.dumps(json_object)
return respond_with_json_bytes(
request, code, json_bytes,
send_cors=send_cors,
response_code_message=response_code_message,
version_string=version_string
)
def respond_with_json_bytes(request, code, json_bytes, send_cors=False,
version_string="", response_code_message=None):
response_code_message=None):
"""Sends encoded JSON in response to the given request.
Args:
@@ -493,7 +433,6 @@ def respond_with_json_bytes(request, code, json_bytes, send_cors=False,
request.setResponseCode(code, message=response_code_message)
request.setHeader(b"Content-Type", b"application/json")
request.setHeader(b"Server", version_string)
request.setHeader(b"Content-Length", b"%d" % (len(json_bytes),))
request.setHeader(b"Cache-Control", b"no-cache, no-store, must-revalidate")

View File

@@ -18,7 +18,9 @@
from synapse.api.errors import SynapseError, Codes
import logging
import simplejson
from canonicaljson import json
logger = logging.getLogger(__name__)
@@ -171,7 +173,7 @@ def parse_json_value_from_request(request, allow_empty_body=False):
return None
try:
content = simplejson.loads(content_bytes)
content = json.loads(content_bytes)
except Exception as e:
logger.warn("Unable to parse JSON: %s", e)
raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)

View File

@@ -12,27 +12,49 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.logcontext import LoggingContext
from twisted.web.server import Site, Request
import contextlib
import logging
import re
import time
ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
from twisted.web.server import Site, Request
from synapse.http import redact_uri
from synapse.http.request_metrics import RequestMetrics
from synapse.util.logcontext import LoggingContext
logger = logging.getLogger(__name__)
_next_request_seq = 0
class SynapseRequest(Request):
"""Class which encapsulates an HTTP request to synapse.
All of the requests processed in synapse are of this type.
It extends twisted's twisted.web.server.Request, and adds:
* Unique request ID
* Redaction of access_token query-params in __repr__
* Logging at start and end
* Metrics to record CPU, wallclock and DB time by endpoint.
It provides a method `processing` which should be called by the Resource
which is handling the request, and returns a context manager.
"""
def __init__(self, site, *args, **kw):
Request.__init__(self, *args, **kw)
self.site = site
self.authenticated_entity = None
self.start_time = 0
global _next_request_seq
self.request_seq = _next_request_seq
_next_request_seq += 1
def __repr__(self):
# We overwrite this so that we don't log ``access_token``
return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
return '<%s at 0x%x method=%r uri=%r clientproto=%r site=%r>' % (
self.__class__.__name__,
id(self),
self.method,
@@ -41,16 +63,27 @@ class SynapseRequest(Request):
self.site.site_tag,
)
def get_request_id(self):
return "%s-%i" % (self.method, self.request_seq)
def get_redacted_uri(self):
return ACCESS_TOKEN_RE.sub(
br'\1<redacted>\3',
self.uri
)
return redact_uri(self.uri)
def get_user_agent(self):
return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
def started_processing(self):
def render(self, resrc):
# override the Server header which is set by twisted
self.setHeader("Server", self.site.server_version_string)
return Request.render(self, resrc)
def _started_processing(self, servlet_name):
self.start_time = time.time()
self.request_metrics = RequestMetrics()
self.request_metrics.start(
self.start_time, name=servlet_name, method=self.method,
)
self.site.access_logger.info(
"%s - %s - Received request: %s %s",
self.getClientIP(),
@@ -58,46 +91,91 @@ class SynapseRequest(Request):
self.method,
self.get_redacted_uri()
)
self.start_time = int(time.time() * 1000)
def finished_processing(self):
def _finished_processing(self):
try:
context = LoggingContext.current_context()
ru_utime, ru_stime = context.get_resource_usage()
db_txn_count = context.db_txn_count
db_txn_duration_ms = context.db_txn_duration_ms
db_sched_duration_ms = context.db_sched_duration_ms
db_txn_duration_sec = context.db_txn_duration_sec
db_sched_duration_sec = context.db_sched_duration_sec
evt_db_fetch_count = context.evt_db_fetch_count
except Exception:
ru_utime, ru_stime = (0, 0)
db_txn_count, db_txn_duration_ms = (0, 0)
db_txn_count, db_txn_duration_sec = (0, 0)
evt_db_fetch_count = 0
end_time = time.time()
# need to decode as it could be raw utf-8 bytes
# from an IDN servname in an auth header
authenticated_entity = self.authenticated_entity
if authenticated_entity is not None:
authenticated_entity = authenticated_entity.decode("utf-8", "replace")
# ...or could be raw utf-8 bytes in the User-Agent header.
# N.B. if you don't do this, the logger explodes cryptically
# with maximum recursion trying to log errors about
# the charset problem.
# c.f. https://github.com/matrix-org/synapse/issues/3471
user_agent = self.get_user_agent()
if user_agent is not None:
user_agent = user_agent.decode("utf-8", "replace")
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %dms (%dms, %dms) (%dms/%dms/%d)"
" %sB %s \"%s %s %s\" \"%s\"",
" Processed request: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
" %sB %s \"%s %s %s\" \"%s\" [%d dbevts]",
self.getClientIP(),
self.site.site_tag,
self.authenticated_entity,
int(time.time() * 1000) - self.start_time,
int(ru_utime * 1000),
int(ru_stime * 1000),
db_sched_duration_ms,
db_txn_duration_ms,
authenticated_entity,
end_time - self.start_time,
ru_utime,
ru_stime,
db_sched_duration_sec,
db_txn_duration_sec,
int(db_txn_count),
self.sentLength,
self.code,
self.method,
self.get_redacted_uri(),
self.clientproto,
self.get_user_agent(),
user_agent,
evt_db_fetch_count,
)
try:
self.request_metrics.stop(end_time, self)
except Exception as e:
logger.warn("Failed to stop metrics: %r", e)
@contextlib.contextmanager
def processing(self):
self.started_processing()
def processing(self, servlet_name):
"""Record the fact that we are processing this request.
Returns a context manager; the correct way to use this is:
@defer.inlineCallbacks
def handle_request(request):
with request.processing("FooServlet"):
yield really_handle_the_request()
This will log the request's arrival. Once the context manager is
closed, the completion of the request will be logged, and the various
metrics will be updated.
Args:
servlet_name (str): the name of the servlet which will be
processing this request. This is used in the metrics.
It is possible to update this afterwards by updating
self.request_metrics.name.
"""
# TODO: we should probably just move this into render() and finish(),
# to save having to call a separate method.
self._started_processing(servlet_name)
yield
self.finished_processing()
self._finished_processing()
class XForwardedForRequest(SynapseRequest):
@@ -135,7 +213,8 @@ class SynapseSite(Site):
Subclass of a twisted http Site that does access logging with python's
standard logging
"""
def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs):
def __init__(self, logger_name, site_tag, config, resource,
server_version_string, *args, **kwargs):
Site.__init__(self, resource, *args, **kwargs)
self.site_tag = site_tag
@@ -143,6 +222,7 @@ class SynapseSite(Site):
proxied = config.get("x_forwarded", False)
self.requestFactory = SynapseRequestFactory(self, proxied)
self.access_logger = logging.getLogger(logger_name)
self.server_version_string = server_version_string
def log(self, request):
pass
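Rounding off the site.py changes: constructing a SynapseSite now takes the extra server_version_string argument, which SynapseRequest.render() echoes back as the Server header. Hypothetical wiring (values invented):

site = SynapseSite(
    "synapse.access.http.fake",    # logger_name
    "client",                      # site_tag
    {"x_forwarded": False},        # listener config
    root_resource,                 # root twisted Resource (assumed defined)
    "Synapse/0.32.1",              # server_version_string
)
# reactor.listenTCP(8008, site)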

Some files were not shown because too many files have changed in this diff.