
Compare commits


228 Commits

Author SHA1 Message Date
Matthew Hodgson
03bdbb8c6b typo 2018-12-29 12:48:09 +00:00
Matthew Hodgson
18752982db hook up state deltas to stats 2018-07-18 23:56:57 +01:00
Matthew Hodgson
6dacdd5fbe WIP for updating the stats store 2018-07-18 12:03:07 +01:00
Matthew Hodgson
c82785f5cb flake8 2018-07-18 09:52:28 +01:00
Matthew Hodgson
a34061d332 WIP of tracking per-room and per-user stats 2018-07-18 02:07:36 +01:00
Amber Brown
8a4f05fefb Fix develop because I broke it :( (#3535) 2018-07-14 09:51:00 +10:00
Amber Brown
8532953c04 Merge pull request #3534 from krombel/use_parse_and_asserts_from_servlet
Use parse and asserts from http.servlet
2018-07-14 09:09:19 +10:00
Amber Brown
a2374b2c7f fix sytests 2018-07-14 07:52:58 +10:00
Amber Brown
33b60c01b5 Make auth & transactions more testable (#3499) 2018-07-14 07:34:49 +10:00
Krombel
516f960ad8 add changelog 2018-07-13 22:19:19 +02:00
Krombel
3366b9c534 rename assert_params_in_request to assert_params_in_dict
the method "assert_params_in_request" does handle dicts and not
requests. A request body has to be parsed to json before this method
can be used
2018-07-13 21:53:01 +02:00
Krombel
32fd6910d0 Use parse_{int,str} and assert from http.servlet
parse_integer and parse_string can take a request and raise errors
when params are wrong or missing.
This PR uses them more widely to deduplicate some code and make it
more readable
2018-07-13 21:40:14 +02:00
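For context, a sketch of the pattern this PR spreads around (hypothetical servlet and parameter names, assuming a Synapse development environment; not code from the PR itself):

```python
from synapse.http.servlet import RestServlet, parse_integer, parse_string

class ExamplePaginationServlet(RestServlet):
    # The helpers raise a well-formed 400 error themselves, replacing
    # hand-rolled "if name not in request.args" checks in each servlet.
    def on_GET(self, request):
        limit = parse_integer(request, "limit", default=10)
        direction = parse_string(
            request, "dir", default="f", allowed_values=("f", "b"),
        )
        return 200, {"limit": limit, "dir": direction}
```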
Richard van der Hoff
2aba1f549c Merge pull request #3533 from matrix-org/rav/fix_federation_ratelimite_queue
Make FederationRateLimiter queue requests properly
2018-07-13 16:59:18 +01:00
Richard van der Hoff
08546be40f better changelog 2018-07-13 16:47:13 +01:00
Richard van der Hoff
b03a0e14db changelog 2018-07-13 16:28:28 +01:00
Richard van der Hoff
3b391d9c45 Fix unit tests 2018-07-13 16:28:04 +01:00
Richard van der Hoff
33b40d0a25 Make FederationRateLimiter queue requests properly
popitem removes the *most recent* item by default [1]. We want the oldest.

Fixes #3524

[1]: https://docs.python.org/2/library/collections.html#collections.OrderedDict.popitem
2018-07-13 16:19:40 +01:00
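The `OrderedDict` behaviour the commit describes is easy to reproduce:

```python
from collections import OrderedDict

queue = OrderedDict([("oldest", 1), ("newer", 2), ("newest", 3)])

print(queue.popitem())            # ('newest', 3): LIFO by default
print(queue.popitem(last=False))  # ('oldest', 1): FIFO, what a queue wants
```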
Matthew Hodgson
ba22b6a456 typo 2018-07-13 12:03:39 +01:00
Richard van der Hoff
6dff49b8a9 Merge pull request #3521 from matrix-org/rav/optimise_stream_change_cache
Reduce set building in get_entities_changed
2018-07-12 12:08:49 +01:00
Richard van der Hoff
38c5fa7ee4 changelog 2018-07-12 11:45:28 +01:00
Richard van der Hoff
fa5c2bc082 Reduce set building in get_entities_changed
This line shows up as about 5% of cpu time on a synchrotron:

    not_known_entities = set(entities) - set(self._entity_to_key)

Presumably the problem here is that _entity_to_key can be largeish, and
building a set for its keys every time this function is called is slow.

Here we rewrite the logic to avoid building so many sets.
2018-07-12 11:37:44 +01:00
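A minimal sketch of the rewrite (hypothetical names, not the actual StreamChangeCache code): probing the dict directly means only `entities` is iterated, rather than copying every key of `_entity_to_key` into a fresh set on each call.

```python
def entities_changed(entities, entity_to_key, stream_pos):
    # Instead of: not_known = set(entities) - set(entity_to_key)
    # test membership against the dict itself; entities the cache does
    # not know about must be assumed to have changed.
    result = set()
    for entity in entities:
        key = entity_to_key.get(entity)
        if key is None or key > stream_pos:
            result.add(entity)
    return result
```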
Richard van der Hoff
4c4dd6299d Merge pull request #3316 from matrix-org/rav/enforce_report_api
Enforce the specified API for report_event
2018-07-12 10:50:05 +01:00
Richard van der Hoff
c969754a91 changelog 2018-07-12 10:00:57 +01:00
Richard van der Hoff
482d17b58b Merge branch 'develop' into rav/enforce_report_api 2018-07-12 09:56:28 +01:00
Erik Johnston
0456e05977 Merge pull request #3505 from matrix-org/erikj/receipts_cahce
Use stream cache in get_linearized_receipts_for_room
2018-07-12 09:46:29 +01:00
Erik Johnston
aff1dfdf3d Update return value docstring 2018-07-12 09:45:37 +01:00
Amber Brown
129ffd7b88 Merge pull request #3498 from OlegGirko/fix_attrs_syntax
* Use more portable syntax using attrs package.

Newer syntax

    attr.ib(factory=dict)

is just a syntactic sugar for

    attr.ib(default=attr.Factory(dict))

It was introduced in newest version of attrs package (18.1.0)
and doesn't work with older versions.

We should either require minimum version of attrs to be 18.1.0,
or use older (slightly more verbose) syntax.
Requiring newest version is not a good solution because
Linux distributions may have older version of attrs (17.4.0 in Fedora 28),
and requiring to build (and package)
newer version just to use newer syntactic sugar in only one test
is just too much.
It's much better to fix that test to use older syntax.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2018-07-11 04:22:46 +10:00
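For reference, the two spellings side by side (class and field names hypothetical); both give each instance its own empty dict:

```python
import attr

@attr.s
class Example(object):
    # attrs >= 18.1.0 only:
    #     field = attr.ib(factory=dict)
    # portable spelling, also works on older attrs such as 17.4.0:
    field = attr.ib(default=attr.Factory(dict))

a, b = Example(), Example()
assert a.field == {} and a.field is not b.field  # fresh dict per instance
```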
Amber Brown
85354bb18e changelog entry 2018-07-11 03:27:03 +10:00
Erik Johnston
6ccefef07a Use 'is not None' and add comments 2018-07-10 18:12:39 +01:00
Matthew Hodgson
ea752bdd99 s/becuase/because/g 2018-07-10 17:58:18 +01:00
Erik Johnston
bb3d536087 Newsfile 2018-07-10 17:28:31 +01:00
Erik Johnston
05f5dabc10 Use stream cache in get_linearized_receipts_for_room
This avoids unnecessarily hitting the database when there has been
no change for the room
2018-07-10 17:22:42 +01:00
Richard van der Hoff
c3c29aa196 Attempt to include db threads in cpu usage stats (#3496)
Let's try to include time spent in the DB threads in the per-request/block cpu
usage metrics.
2018-07-10 16:12:36 +01:00
Richard van der Hoff
55370331da Refactor logcontext resource usage tracking (#3501)
Factor out the resource usage tracking out to a separate object, which can be
passed around and copied independently of the logcontext itself.
2018-07-10 13:56:07 +01:00
Matthew Hodgson
16b10666e7 another typo 2018-07-10 12:28:42 +01:00
Matthew Hodgson
4ea391a6ae typo (i think) 2018-07-10 12:08:09 +01:00
Richard van der Hoff
b1fe697b3c Merge pull request #3497 from matrix-org/rav/measure_fetch_event_loop
Add CPU metrics for _fetch_event_list
2018-07-10 10:00:24 +01:00
Oleg Girko
6c1ec5a1bd Use more portable syntax using attrs package.
Newer syntax

    attr.ib(factory=dict)

is just a syntactic sugar for

    attr.ib(default=attr.Factory(dict))

It was introduced in newest version of attrs package (18.1.0)
and doesn't work with older versions.

We should either require minimum version of attrs to be 18.1.0,
or use older (slightly more verbose) syntax.
Requiring newest version is not a good solution because
Linux distributions may have older version of attrs (17.4.0 in Fedora 28),
and requiring to build (and package)
newer version just to use newer syntactic sugar in only one test
is just too much.
It's much better to fix that test to use older syntax.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2018-07-10 00:38:49 +01:00
Richard van der Hoff
f3b3b9dd8f changelog 2018-07-09 18:16:52 +01:00
Richard van der Hoff
e31e5dee38 Add CPU metrics for _fetch_event_list
add a Measure block on _fetch_event_list, in the hope that we can better
measure CPU usage here.
2018-07-09 18:15:54 +01:00
Richard van der Hoff
395fa8d1fd Merge pull request #3464 from matrix-org/hawkowl/isort-run
Run isort on Synapse
2018-07-09 10:24:43 +01:00
Amber Brown
09477bd884 changelog 2018-07-09 16:09:37 +10:00
Amber Brown
49af402019 run isort 2018-07-09 16:09:20 +10:00
Amber Brown
2ee9f1bd1a Add an isort configuration (#3463) 2018-07-09 16:05:21 +10:00
Amber Brown
1241156c82 changelog 2018-07-07 10:48:30 +10:00
Amber Brown
3060bcc8e9 version 2018-07-07 10:48:06 +10:00
Amber Brown
e845fd41c2 Correct attrs package name in requirements (#3492) 2018-07-07 10:46:59 +10:00
Richard van der Hoff
20ff89d6e1 Merge tag 'v0.32.1'
Synapse 0.32.1 (2018-07-06)
===========================

Bugfixes
--------

- Add explicit dependency on netaddr ([#3488](https://github.com/matrix-org/synapse/issues/3488))
2018-07-06 16:56:23 +01:00
Richard van der Hoff
1cfc2c4790 Prepare 0.32.1 release 2018-07-06 16:50:52 +01:00
Richard van der Hoff
2087d5d046 Merge pull request #3488 from matrix-org/rav/fix-netaddr-dep
Add explicit dependency on netaddr
2018-07-06 16:39:56 +01:00
Richard van der Hoff
1464a0578a Add explicit dependency on netaddr
netaddr was missing from the dependencies file, causing failures on
upgrade (and presumably for new installs).
2018-07-06 16:27:17 +01:00
Neil Johnson
277c561766 0.32.0 version bump, update changelog 2018-07-06 15:07:29 +01:00
Amber Brown
89690aaaeb changelog 2018-07-05 20:46:40 +10:00
Amber Brown
be8b32dbc2 ACL changelog 2018-07-05 20:45:12 +10:00
Amber Brown
d196fe42a9 bump version to 0.32.0rc1 2018-07-05 20:22:35 +10:00
Neil Johnson
feef8461d1 Merge remote-tracking branch 'hera/rav/server_acls' into develop 2018-07-05 11:04:01 +01:00
Erik Johnston
1a88640677 Merge pull request #3483 from matrix-org/rav/more_server_name_validation
More server_name validation
2018-07-05 10:04:20 +01:00
Richard van der Hoff
3cf3e08a97 Implementation of server_acls
... as described at
https://docs.google.com/document/d/1EttUVzjc2DWe2ciw4XPtNpUpIl9lWXGEsy2ewDS7rtw.
2018-07-04 19:06:20 +01:00
Richard van der Hoff
546bc9e28b More server_name validation
We need to do a bit more validation when we get a server name, but don't want
to be re-doing it all over the shop, so factor out a separate
parse_and_validate_server_name, and do the extra validation.

Also, use it to verify the server name in the config file.
2018-07-04 18:59:51 +01:00
Richard van der Hoff
f192a93875 Merge pull request #3481 from matrix-org/rav/fix_cachedescriptor_test
Reinstate lost run_on_reactor in unit test
2018-07-04 18:55:33 +01:00
Erik Johnston
13f7adf84b Merge pull request #3473 from matrix-org/erikj/thread_cache
Invalidate cache on correct thread
2018-07-04 10:11:38 +01:00
Erik Johnston
40252d13d1 Merge pull request #3474 from matrix-org/erikj/py3_auth
Fix up auth check
2018-07-04 09:41:33 +01:00
Richard van der Hoff
ea555d5633 Reinstate lost run_on_reactor in unit test
a61738b removed a call to run_on_reactor from a unit test, but that call was
doing something useful: it made the function in question asynchronous.

Reinstate the call and add a check that we are testing what we wanted to be
testing.
2018-07-04 09:40:01 +01:00
Richard van der Hoff
508196e08a Reject invalid server names (#3480)
Make sure that server_names used in auth headers are sane, and reject them with
a sensible error code, before they disappear off into the depths of the system.
2018-07-03 14:36:14 +01:00
Richard van der Hoff
f741630847 Merge pull request #3470 from matrix-org/matthew/fix-utf8-logging
don't mix unicode strings with utf8-in-byte-strings
2018-07-02 15:25:36 +01:00
Matthew Hodgson
6ec3aa2f72 news snippet 2018-07-02 13:43:34 +01:00
Erik Johnston
abb183438c Correct newsfile 2018-07-02 13:12:38 +01:00
Erik Johnston
f88dea577d Newsfile 2018-07-02 11:46:32 +01:00
Erik Johnston
cea4662b13 Newsfile 2018-07-02 11:46:20 +01:00
Erik Johnston
2c33b55738 Avoid relying on int vs None comparison
Python 3 doesn't support comparing None to ints
2018-07-02 11:40:32 +01:00
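The incompatibility is easy to demonstrate; the fix is an explicit None check instead of relying on Python 2's None-sorts-before-everything ordering:

```python
depth = None

# Python 2: None < 5 evaluates to True (None orders before any int).
# Python 3: TypeError: '<' not supported between 'NoneType' and 'int'.

# Portable version:
if depth is not None and depth < 5:
    print("shallow")
```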
Erik Johnston
cbf82dddf1 Ensure that we define sender_domain 2018-07-02 11:37:57 +01:00
Erik Johnston
3905c693c5 Invalidate cache on correct thread 2018-07-02 11:36:44 +01:00
Matthew Hodgson
fc4f8f33be replace invalid utf8 with \ufffd 2018-07-02 11:33:02 +01:00
Matthew Hodgson
1c867f5391 a fix which doesn't NPE everywhere 2018-07-01 11:56:33 +01:00
Matthew Hodgson
f131bf8d3e don't mix unicode strings with utf8-in-byte-strings
otherwise we explode with:

```
Traceback (most recent call last):
  File "/usr/lib/python2.7/logging/handlers.py", line 78, in emit
    logging.FileHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 950, in emit
    StreamHandler.emit(self, record)
  File "/usr/lib/python2.7/logging/__init__.py", line 887, in emit
    self.handleError(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 810, in handleError
    None, sys.stderr)
  File "/usr/lib/python2.7/traceback.py", line 124, in print_exception
    _print(file, 'Traceback (most recent call last):')
  File "/usr/lib/python2.7/traceback.py", line 13, in _print
    file.write(str+terminator)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_io.py", line 170, in write
    self.log.emit(self.level, format=u"{log_io}", log_io=line)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 144, in emit
    self.observer(event)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 136, in __call__
    errorLogger = self._errorLoggerForObserver(brokenObserver)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 156, in _errorLoggerForObserver
    if obs is not observer
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_observer.py", line 81, in __init__
    self.log = Logger(observer=self)
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 64, in __init__
    namespace = self._namespaceFromCallingContext()
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/logger/_logger.py", line 42, in _namespaceFromCallingContext
    return currentframe(2).f_globals["__name__"]
  File "/home/matrix/.synapse/local/lib/python2.7/site-packages/twisted/python/compat.py", line 93, in currentframe
    for x in range(n + 1):
RuntimeError: maximum recursion depth exceeded while calling a Python object
Logged from file site.py, line 129
  File "/usr/lib/python2.7/logging/__init__.py", line 859, in emit
    msg = self.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 732, in format
    return fmt.format(record)
  File "/usr/lib/python2.7/logging/__init__.py", line 471, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.7/logging/__init__.py", line 335, in getMessage
    msg = msg % self.args
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 4: ordinal not in range(128)
Logged from file site.py, line 129
```

...where the logger apparently recurses whilst trying to log the error, hitting the
maximum recursion depth and killing everything badly.
2018-07-01 05:08:58 +01:00
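The companion fix ("replace invalid utf8 with \ufffd") amounts to decoding byte strings before they reach the logger, along these lines (illustrative header value, not code from the commit):

```python
# -*- coding: utf-8 -*-
import logging

logging.basicConfig()
logger = logging.getLogger(__name__)

raw = b"User-Agent with invalid utf8: \xe2\x28\xa1"
# Decoding with errors="replace" maps bad sequences to U+FFFD, so the
# result is always unicode and safe to interpolate into log messages.
logger.warning(u"request header: %s", raw.decode("utf-8", "replace"))
```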
Matthew Hodgson
75d4986a8c Merge pull request #3467 from matrix-org/hawkowl/contributor-requirements
Clarify "real name" in contributor requirements
2018-06-30 00:21:10 +01:00
Amber Brown
7c0cdd330f topfile 2018-06-29 14:13:15 +01:00
Erik Johnston
e3b4043800 Merge pull request #3456 from matrix-org/hawkowl/federation-prevevent-checking
Check the state of prev_events a bit more thoroughly when coming over federation
2018-06-29 13:55:02 +01:00
Amber Brown
ec766b2530 clarification on what "real names" are 2018-06-29 10:33:31 +01:00
Matthew Hodgson
fc0e17b3e5 Merge pull request #3465 from matrix-org/matthew/as_ip_lock
add ip_range_whitelist parameter to limit where ASes can connect from
2018-06-28 21:15:06 +01:00
Matthew Hodgson
f82cf3c7df add test 2018-06-28 21:14:16 +01:00
Matthew Hodgson
0d7eabeada add towncrier snippet 2018-06-28 20:59:01 +01:00
Matthew Hodgson
e72234f6bd fix tests 2018-06-28 20:56:07 +01:00
Matthew Hodgson
f4f1cda928 add ip_range_whitelist parameter to limit where ASes can connect from 2018-06-28 20:32:00 +01:00
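The whitelist check itself is a straightforward netaddr membership test; a standalone illustration (addresses hypothetical):

```python
from netaddr import IPAddress, IPSet

# An ip_range_whitelist as it might appear in an AS registration file.
whitelist = IPSet(["10.0.0.0/8", "192.168.1.0/24"])

print(IPAddress("10.20.30.40") in whitelist)   # True: request allowed
print(IPAddress("203.0.113.9") in whitelist)   # False: reject the AS
```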
Amber Brown
6350bf925e Attempt to be more performant on PyPy (#3462) 2018-06-28 14:49:57 +01:00
Amber Brown
cfda61e9cd topfile update 2018-06-28 11:13:08 +01:00
Amber Brown
72d2143ea8 Revert "Revert "Try to not use as much CPU in the StreamChangeCache"" (#3454) 2018-06-28 11:04:18 +01:00
Amber Brown
caf07f770a topfile 2018-06-27 15:07:16 +01:00
Amber Brown
99800de63d try and clean up 2018-06-27 11:40:27 +01:00
Amber Brown
f03a5d1a17 pep8 2018-06-27 11:38:14 +01:00
Amber Brown
94f09618e5 cleanups 2018-06-27 11:38:03 +01:00
Amber Brown
a7ecf34b70 cleanups 2018-06-27 11:36:03 +01:00
Amber Brown
35cc3e8b14 stylistic cleanup 2018-06-27 11:32:09 +01:00
Amber Brown
8d62baa48c cleanups 2018-06-27 11:31:48 +01:00
Amber Brown
77078d6c8e handle federation not telling us about prev_events 2018-06-27 11:27:32 +01:00
Amber Brown
cd6bcdaf87 Better testing framework for homeserver-using things (#3446) 2018-06-27 10:37:24 +01:00
Matthew Hodgson
71ad43a8b5 Merge pull request #3452 from matrix-org/revert-3451-hawkowl/sorteddict-api
Revert "Try to not use as much CPU in the StreamChangeCache"
2018-06-26 18:09:28 +01:00
Matthew Hodgson
8057489b26 Revert "Try to not use as much CPU in the StreamChangeCache" 2018-06-26 18:09:01 +01:00
Matthew Hodgson
d91efb06cf Merge pull request #3451 from matrix-org/hawkowl/sorteddict-api
Try to not use as much CPU in the cache
2018-06-26 17:49:55 +01:00
Amber Brown
1202508067 fixes 2018-06-26 17:29:01 +01:00
Amber Brown
bd3d329c88 fixes 2018-06-26 17:28:12 +01:00
Amber Brown
abfe4b2957 try and make loading items from the cache faster 2018-06-26 17:25:34 +01:00
Matthew Hodgson
c695a8d003 Merge pull request #3449 from matrix-org/dbkr/fix_deactivate_account_multiple_pending
Fix error on deleting users pending deactivation
2018-06-26 10:59:19 +01:00
David Baker
028490afd4 Fix error on deleting users pending deactivation
Use simple_delete instead of simple_delete_one as commented
2018-06-26 10:52:52 +01:00
Matthew Hodgson
c7f6b420ae Merge pull request #3448 from matrix-org/matthew/gdpr-deactivate-admin-api
add GDPR erase param to deactivate API
2018-06-26 10:43:14 +01:00
Matthew Hodgson
9570aa82eb update doc for deactivate API 2018-06-26 10:42:50 +01:00
Matthew Hodgson
1e788db430 add GDPR erase param to deactivate API 2018-06-26 10:26:54 +01:00
Amber Brown
1d62c4a127 Merge pull request #3438 from turt2live/travis/dont-print-access-tokens-in-logs
Stop including access tokens in warnings in the log
2018-06-26 09:55:55 +01:00
Erik Johnston
d72fb9a448 Merge pull request #3442 from matrix-org/matthew/allow-unconsented-parts
allow non-consented users to still part rooms (to let us autopart them)
2018-06-25 20:14:18 +01:00
Erik Johnston
1b947d6dde Merge pull request #3443 from matrix-org/erikj/fast_filter_servers
Add fast path to _filter_events_for_server
2018-06-25 20:11:00 +01:00
Erik Johnston
df48f7ef37 Actually fix it 2018-06-25 20:03:41 +01:00
Erik Johnston
a0e8a53c6d Comment 2018-06-25 19:57:38 +01:00
Erik Johnston
7bdc5c8fa3 Fix bug with assuming wrong type 2018-06-25 19:56:02 +01:00
Erik Johnston
ea7a9c0483 Add fast path to _filter_events_for_server
Most rooms have a trivial history visibility like "shared" or
"world_readable", especially large rooms, so lets not bother getting the
full membership of those rooms in that case.
2018-06-25 19:49:13 +01:00
Matthew Hodgson
0269367f18 allow non-consented users to still part rooms (to let us autopart them) 2018-06-25 17:56:10 +01:00
Matthew Hodgson
784189b1f4 typos 2018-06-25 17:37:16 +01:00
Matthew Hodgson
6eb861b67f typo 2018-06-25 17:37:16 +01:00
Erik Johnston
947fea67cb Need to pass reactor to endpoint fac 2018-06-25 15:22:57 +01:00
Amber Brown
36cb570641 Use towncrier to build the changelog (#3425) 2018-06-25 14:42:27 +01:00
Erik Johnston
33fdcfa957 Merge pull request #3441 from matrix-org/erikj/redo_erasure
Fix user erasure and re-enable
2018-06-25 14:37:01 +01:00
Erik Johnston
eb50c44eaf Add UserErasureWorkerStore to workers 2018-06-25 14:22:24 +01:00
Amber Brown
07cad26d65 Remove all global reactor imports & pass it around explicitly (#3424) 2018-06-25 14:08:28 +01:00
Erik Johnston
244484bf3c Revert "Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility""
This reverts commit 1d009013b3.
2018-06-25 13:42:55 +01:00
Travis Ralston
ec1e799e17 Don't print invalid access tokens in the logs
Tokens shouldn't be appearing in the logs, valid or invalid.

Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-24 12:17:01 -06:00
Richard van der Hoff
1d009013b3 Revert "Merge pull request #3431 from matrix-org/rav/erasure_visibility"
This reverts commit ce0d911156, reversing
changes made to b4a5d767a9.
2018-06-22 16:35:10 +01:00
Richard van der Hoff
516f884176 Merge pull request #3435 from matrix-org/rav/fix_event_push_actions_tablescan
Fix event_push_actions tablescan when reinserting events
2018-06-22 15:57:22 +01:00
Mark Haines
9850f66abe Deleting from event_push_actions needs to use an index 2018-06-22 15:54:48 +01:00
Richard van der Hoff
fe8d2968e3 Merge pull request #3434 from matrix-org/rav/search_logging
Add some logging to /search and /context queries
2018-06-22 15:51:11 +01:00
Erik Johnston
28ddc6cfbe Also log number of events for search context 2018-06-22 15:42:11 +01:00
Erik Johnston
4b4cec3989 Add some logging to search queries 2018-06-22 15:42:11 +01:00
Richard van der Hoff
200e11c5bf Merge pull request #3432 from matrix-org/rav/joined_hosts_cache_non_iterable
Make _get_joined_hosts_cache cache non-iterable
2018-06-22 15:18:51 +01:00
Erik Johnston
f8272813a9 Make _get_joined_hosts_cache cache non-iterable 2018-06-22 15:12:26 +01:00
Richard van der Hoff
1d7ad11747 Merge pull request #3430 from matrix-org/rav/configurable_push_action_rotation
Make push actions rotation configurable
2018-06-22 15:07:31 +01:00
Erik Johnston
ce0d911156 Merge pull request #3431 from matrix-org/rav/erasure_visibility
Support hiding events from deleted users
2018-06-22 15:06:44 +01:00
Erik Johnston
b4a5d767a9 Merge pull request #3428 from matrix-org/erikj/persisted_pdu
Simplify get_persisted_pdu
2018-06-22 14:47:43 +01:00
Erik Johnston
f79abda87f Merge pull request #3427 from matrix-org/erikj/remove_filters
remove dead filter_events_for_clients
2018-06-22 14:45:42 +01:00
Erik Johnston
75dc3ddeab Make push actions rotation configurable 2018-06-22 14:44:37 +01:00
Richard van der Hoff
bb018d0b5b Merge pull request #3383 from matrix-org/rav/hacky_speedup_state_group_cache
Improve StateGroupCache efficiency for wildcard lookups
2018-06-22 14:01:55 +01:00
Richard van der Hoff
43e02c409d Disable partial state group caching for wildcard lookups
When _get_state_for_groups is given a wildcard filter, just do a complete
lookup. Hopefully this will give us the best of both worlds by not filling up
the ram if we only need one or two keys, but also making the cache still work
for the federation reader usecase.
2018-06-22 11:52:07 +01:00
Richard van der Hoff
240f192523 Merge pull request #3382 from matrix-org/rav/optimise_state_groups
Optimise state_group_cache update
2018-06-22 11:20:20 +01:00
Richard van der Hoff
70e6501913 Merge pull request #3419 from matrix-org/rav/events_per_request
Log number of events fetched from DB
2018-06-22 11:17:56 +01:00
Richard van der Hoff
0495fe0035 Indirect evt_count updates via method call
so that we can stub it for the sentinel and not have a billion failing UTs
2018-06-22 10:42:28 +01:00
Amber Brown
9a685d60ae Merge pull request #3418 from matrix-org/rav/fix_metric_desc
Fix description of "python_gc_time" metric
2018-06-22 09:43:26 +01:00
Amber Brown
77ac14b960 Pass around the reactor explicitly (#3385) 2018-06-22 09:37:10 +01:00
Richard van der Hoff
cbbfaa4be8 Fix description of "python_gc_time" metric 2018-06-21 10:02:42 +01:00
Amber Brown
c2eff937ac Populate synapse_federation_client_sent_pdu_destinations:count again (#3386) 2018-06-21 09:39:58 +01:00
Amber Brown
99b77aa829 Fix tcp protocol metrics naming (#3410) 2018-06-21 09:39:27 +01:00
Richard van der Hoff
b088aafcae Log number of events fetched from DB
When we finish processing a request, log the number of events we fetched from
the database to handle it.

[I'm trying to figure out which requests are responsible for large amounts of
event cache churn. It may turn out to be more helpful to add counts to the
prometheus per-request/block metrics, but that is an extension to this code
anyway.]
2018-06-21 06:15:03 +01:00
Richard van der Hoff
aff3d76920 Merge pull request #3416 from matrix-org/rav/restart_indicator
Write a clear restart indicator in logs
2018-06-20 18:14:08 +01:00
Richard van der Hoff
02bfc581f8 Merge pull request #3399 from costacruise/master
Add error code to room creation error
2018-06-20 17:26:25 +01:00
Richard van der Hoff
245d53d32a Write a clear restart indicator in logs
I'm fed up with never being able to find the point a server restarted in the
logs.
2018-06-20 15:33:14 +01:00
Amber Brown
f6c4d74f96 Fix inflight requests metric (incorrect name & traceback) (#3413) 2018-06-20 11:18:57 +01:00
Matthew Hodgson
ccfdaf68be spell gauge correctly 2018-06-16 07:10:34 +01:00
Richard van der Hoff
9a793f861c Merge branch 'master' into develop 2018-06-14 16:36:01 +01:00
Richard van der Hoff
53969e1960 Merge tag 'v0.31.2'
SECURITY UPDATE: Prevent unauthorised users from setting state events in a room
when there is no `m.room.power_levels` event in force in the room. (PR #3397)

Discussion around the Matrix Spec change proposal for this change can be
followed at https://github.com/matrix-org/matrix-doc/issues/1304.
2018-06-14 16:35:33 +01:00
Michael Wagner
19cd3120ec Add error code to room creation error
This error code is mentioned in the documentation at https://matrix.org/docs/api/client-server/#!/Room32creation/createRoom
2018-06-14 14:08:40 +02:00
Amber Brown
f116f32ace add a last seen metric (#3396) 2018-06-14 20:26:59 +10:00
Amber Brown
a61738b316 Remove run_on_reactor (#3395) 2018-06-14 18:27:37 +10:00
Richard van der Hoff
3681437c35 Merge pull request #3368 from matrix-org/rav/fix_federation_client_host
Fix commandline federation_client to send the right Host header
2018-06-13 15:41:51 +01:00
Amber Brown
0fde1896cd Merge pull request #3389 from turt2live/travis/name_metrics
Use the correct flag (enable_metrics) when warning about an incorrect metrics setup
2018-06-13 23:50:10 +10:00
Amber Brown
2a4fde0a6f Merge pull request #3390 from turt2live/travis/appsvc-metrics
Use the RegistryProxy for appservices too
2018-06-13 22:05:15 +10:00
Travis Ralston
45768d1640 Use the RegistryProxy for appservices too
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:55:48 -06:00
Travis Ralston
12285a1a76 The flag is named enable_metrics, not collect_metrics
Signed-off-by: Travis Ralston <travpc@gmail.com>
2018-06-12 12:51:31 -06:00
Richard van der Hoff
96bad44f87 Fix federation_client to send the right Host
This appears to have stopped working since matrix.org moved to cloudflare. The
Host header should match the name of the server, not whatever is in the SRV
record.
2018-06-12 14:14:36 +01:00
Richard van der Hoff
b6faef2ad7 Filter out erased messages
Redact any messages sent by erased users.
2018-06-12 09:53:18 +01:00
Richard van der Hoff
f1023ebf4b mark accounts as erased when requested 2018-06-12 09:53:18 +01:00
Richard van der Hoff
3ff8a619f5 UserErasureStore
to store which users have been erased
2018-06-12 09:52:22 +01:00
Richard van der Hoff
9fc5b74b24 simplify get_persisted_pdu
it doesn't make much sense to use get_persisted_pdu on the receive path: just
get the event straight from the store.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
bd348f0af6 remove dead filter_events_for_clients
This is only used by filter_events_for_client, so we can simplify the whole
thing by just doing one user at a time, and removing a dead storage function to
boot.
2018-06-12 09:51:31 +01:00
Richard van der Hoff
eb32b2ca20 Optimise state_group_cache update
(1) matrix-org-hotfixes has removed the intern calls; let's do the same here.
(2) remove redundant iteritems() so we can use an optimised db update.
2018-06-11 22:56:11 +01:00
David Baker
187a546bff Merge pull request #3276 from matrix-org/dbkr/unbind
Remove email addresses / phone numbers from ID servers when they're removed from synapse
2018-06-11 16:02:00 +01:00
Matthew Hodgson
d6cc369205 fix idiotic typo in state res 2018-06-11 14:43:55 +01:00
Neil Johnson
ed5a0780a4 Merge branch 'master' into develop 2018-06-08 15:47:11 +01:00
Neil Johnson
1032393dfb Merge tag 'v0.31.1'
Changes in synapse v0.31.1 (2018-06-08)
=======================================

v0.31.1 fixes a security bug in the ``get_missing_events`` federation API
where event visibility rules were not applied correctly.

We are not aware of it being actively exploited but please upgrade asap.

Bug Fixes:

* Fix event filtering in get_missing_events handler (PR #3371)
2018-06-08 15:46:18 +01:00
David Baker
0e505b1913 Merge pull request #3372 from matrix-org/rav/better_verification_logging
Try to log more helpful info when a sig verification fails
2018-06-08 13:32:02 +01:00
David Baker
ad9edd1d96 Merge pull request #3371 from matrix-org/rav/fix_get_missing_events
Fix event filtering in get_missing_events handler
2018-06-08 13:19:15 +01:00
Richard van der Hoff
e82db24a0e Try to log more helpful info when a sig verification fails
Firstly, don't swallow the reason for the failure

Secondly, don't assume all exceptions are verification failures

Thirdly, log a bit of info about the key being used if debug is enabled
2018-06-08 12:13:08 +01:00
Richard van der Hoff
0834b49c6a Fix event filtering in get_missing_events handler 2018-06-08 11:34:46 +01:00
Matthew Hodgson
36446ffedb fix various changelog bugs and typos 2018-06-07 23:54:16 +03:00
Will Hunt
13d211edc1 Merge pull request #3344 from Half-Shot/hs/as-metrics
Add metrics to track appservice transactions
2018-06-07 11:37:12 +01:00
Richard van der Hoff
1152f495a0 Merge pull request #3363 from matrix-org/rav/fix_purge
Fix event-purge-by-ts admin API
2018-06-07 11:34:46 +01:00
Richard van der Hoff
e2acf536d4 Merge pull request #3355 from matrix-org/rav/fix_federation_backfill
Fix federation backfill from sqlite servers
2018-06-07 10:18:01 +01:00
Amber Brown
0160f6601f Merge pull request #3356 from matrix-org/rav/add_missing_attr_dep
Make sure that attr is installed
2018-06-07 16:33:30 +10:00
Richard van der Hoff
f4caf3f83d fix log 2018-06-07 00:26:38 +01:00
Richard van der Hoff
0546715c18 Fix event-purge-by-ts admin API
This got completely broken in 0.30.

Fixes #3300.
2018-06-07 00:15:49 +01:00
Richard van der Hoff
57e3f923d2 Add missing dependency on attr
We've recently added a dep on `attr`. I don't know why the CI didn't pick this
up, but we should make it explicit anyway.
2018-06-06 17:12:41 +01:00
Richard van der Hoff
d3a8c9c55e Fix sql error in _get_state_groups_from_groups
If this was called with a `(type, None)` entry in types (which is supposed to
return all state of type `type`), it would explode with a sql error.
2018-06-06 14:19:01 +01:00
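A sketch of the requirement (hypothetical helper, not the actual query builder): a `(type, None)` wildcard entry must emit a clause without a state_key binding, otherwise the SQL ends up with a mismatched parameter.

```python
def make_state_filter_clause(types):
    """Build a WHERE fragment for (type, state_key) filters, where a
    state_key of None means "all state_keys of that type"."""
    clauses, args = [], []
    for etype, state_key in types:
        if state_key is None:
            # Wildcard: must not bind a state_key parameter at all.
            clauses.append("(type = ?)")
            args.append(etype)
        else:
            clauses.append("(type = ? AND state_key = ?)")
            args.extend((etype, state_key))
    return " OR ".join(clauses), args

print(make_state_filter_clause([("m.room.member", None)]))
# ('(type = ?)', ['m.room.member'])
```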
Neil Johnson
fef6c2cdcc Merge branch 'master' into develop 2018-06-06 12:28:15 +01:00
Neil Johnson
752b7b32ed Merge tag 'v0.31.0'
Changes in synapse v0.31.0 (2018-06-06)
======================================

Most notable change from v0.30.0 is to switch to python prometheus library to improve system
stats reporting. WARNING this changes a number of prometheus metrics in a
backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:

* Fix metric documentation tables (PR #3341)
* Fix LaterGuage error handling (694968f)
* Fix replication metrics (b7e7fd2)

Changes in synapse v0.31.0-rc1 (2018-06-04)
==========================================

Features:

* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)

Changes:

* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)

Changes, python 3 migration:

* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isintance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!

Bugs:

* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
2018-06-06 12:27:33 +01:00
Richard van der Hoff
ad459a106c Merge pull request #3349 from t3chguy/redact_as_request_token
Redact AS tokens in log (fixes to #3327)
2018-06-06 10:58:07 +01:00
Michael Telatynski
592c162516 also redact __str__ of ApplicationService used for logging 2018-06-06 10:35:29 +01:00
Michael Telatynski
330432031b redact_uri in two missed log paths 2018-06-06 10:25:48 +01:00
David Baker
bf54c1cf6c pep8 2018-06-06 10:15:33 +01:00
David Baker
3e4bc4488c More doc fixes 2018-06-06 09:44:10 +01:00
Amber Brown
48e2b48888 Merge pull request #3347 from krombel/py3_extend_tox_2
update tox.ini to cover 292 succeeding tests
2018-06-06 16:37:19 +10:00
Amber Brown
d8db6d9267 Merge pull request #3348 from intelfx/fix-sortedcontainers
federation/send_queue.py: fix usage of sortedcontainers.SortedDict
2018-06-06 16:36:53 +10:00
Amber Brown
304bb22c1d Fix metric documentation tables (#3341) 2018-06-06 15:52:37 +10:00
Ivan Shapovalov
c88d50aa8f federation/send_queue.py: fix usage of sortedcontainers.SortedDict 2018-06-06 02:45:18 +03:00
Krombel
0c87eed294 update tox.ini to cover 292 succeeding tests
Signed-Off-By: Matthias Kesler <krombel@krombel.de>
2018-06-05 23:10:15 +02:00
Richard van der Hoff
e316407b5d Merge pull request #3327 from t3chguy/redact_as_request_token
Strip `access_token` from outgoing requests
2018-06-05 19:08:46 +01:00
Michael Telatynski
e6cbf47773 factor out uri redaction into a method on http 2018-06-05 18:31:40 +01:00
David Baker
607bd27c83 fix pep8 2018-06-05 18:10:35 +01:00
David Baker
d62162bbec doc fixes 2018-06-05 18:09:13 +01:00
Richard van der Hoff
617afee069 Merge pull request #3340 from ArchangeGabriel/patch-1
doc/postgres.rst: fix display of the last command block
2018-06-05 17:45:17 +01:00
Richard van der Hoff
522bd3c8a3 Merge remote-tracking branch 'origin/master' into develop 2018-06-05 17:42:49 +01:00
Will Hunt
d6e3c2c79b Let's try labels instead of label, that might work 2018-06-05 17:30:45 +01:00
Amber Brown
f7869f8f8b Port to sortedcontainers (with tests!) (#3332) 2018-06-06 00:13:57 +10:00
Will Hunt
604cff1a06 Add metrics to track appservice transactions 2018-06-05 13:21:30 +01:00
Bruno Pagani
b50f18171d doc/postgres.rst: fix display of the last command block
Also indent all of them with 4 spaces.
2018-06-04 22:41:52 +00:00
Richard van der Hoff
f29b41fde9 Merge pull request #3324 from matrix-org/rav/remove_dead_method
Remove was_forgotten_at
2018-06-04 18:18:08 +01:00
Richard van der Hoff
28b0490dfd Merge pull request #3334 from matrix-org/rav/cache_factor_override
Cache factor override system for specific caches
2018-06-04 17:19:33 +01:00
Erik Johnston
042eedfa2b Add hacky cache factor override system 2018-06-04 15:39:28 +01:00
David Baker
c5930d513a Docstring 2018-06-04 12:05:58 +01:00
David Baker
6a29e815fc Fix comment 2018-06-04 12:01:23 +01:00
David Baker
e44150a6de Missing yield 2018-06-04 12:01:13 +01:00
David Baker
f731e42baf docstring 2018-06-04 12:00:51 +01:00
Michael Telatynski
09503126df Strip access_token from outgoing requests using existing regex 2018-06-02 23:25:13 +01:00
Richard van der Hoff
c1f4118bb6 Remove was_forgotten_at
This is unused. IT MUST DIE!!!1
(He Who Waits Behind The Wall. ZALGO!)
2018-06-01 18:21:49 +01:00
Richard van der Hoff
7e15410f02 Enforce the specified API for report_event
as per
https://matrix.org/docs/spec/client_server/unstable.html#post-matrix-client-r0-rooms-roomid-report-eventid
2018-05-31 18:17:11 +01:00
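Per the spec page linked above, the endpoint is POST /_matrix/client/r0/rooms/{roomId}/report/{eventId} with a JSON body carrying a string `reason` and an integer `score` (-100 most offensive, 0 inoffensive). A minimal validation sketch in plain Python (not the actual servlet code):

```python
def validate_report_body(body):
    """Check a report_event request body against the spec'd shape."""
    if not isinstance(body.get("reason"), str):
        raise ValueError("Param 'reason' must be a string")
    score = body.get("score")
    # isinstance(True, int) is True, so exclude bools explicitly.
    if isinstance(score, bool) or not isinstance(score, int):
        raise ValueError("Param 'score' must be an integer")
    if not -100 <= score <= 0:
        raise ValueError("Param 'score' must be between -100 and 0")

validate_report_body({"reason": "spam", "score": -100})  # passes
```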
Richard van der Hoff
e73635191f Merge pull request #3290 from rubo77/patch-7
Add link to thorough instruction how to configure consent
2018-05-30 19:52:01 +01:00
Richard van der Hoff
219c2a322b remove trailing whitespace 2018-05-30 19:42:19 +01:00
Richard van der Hoff
2e4be8bfd9 fix english and wrap comment 2018-05-30 19:24:12 +01:00
Ruben Barkow
08ea5fe635 add link to thorough instruction how to configure consent 2018-05-25 23:19:55 +02:00
David Baker
77a23e2e05 Merge remote-tracking branch 'origin/develop' into dbkr/unbind 2018-05-24 16:20:53 +01:00
David Baker
9700d15611 pep8 2018-05-24 11:23:15 +01:00
David Baker
a21a41bad7 comment 2018-05-24 11:19:59 +01:00
David Baker
b3bff53178 Unbind 3pids when they're deleted too 2018-05-24 11:08:05 +01:00
David Baker
2c7866d664 Hit the 3pid unbind endpoint on deactivation 2018-05-23 14:38:56 +01:00
378 changed files with 5776 additions and 2863 deletions

.gitignore

@@ -43,6 +43,7 @@ media_store/
build/
venv/
venv*/
localhost-800*/
static/client/register/register_config.js

.travis.yml

@@ -4,7 +4,12 @@ language: python
# tell travis to cache ~/.cache/pip
cache: pip
before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
matrix:
fast_finish: true
include:
- python: 2.7
env: TOX_ENV=packaging
@@ -14,10 +19,13 @@ matrix:
- python: 2.7
env: TOX_ENV=py27
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=check-newsfragment
install:
- pip install tox

CHANGES.rst

@@ -1,3 +1,72 @@
Synapse 0.32.2 (2018-07-07)
===========================
Bugfixes
--------
- Amend the Python dependencies to depend on attrs from PyPI, not attr (`#3492 <https://github.com/matrix-org/synapse/issues/3492>`_)
Synapse 0.32.1 (2018-07-06)
===========================
Bugfixes
--------
- Add explicit dependency on netaddr (`#3488 <https://github.com/matrix-org/synapse/issues/3488>`_)
Changes in synapse v0.32.0 (2018-07-06)
===========================================
No changes since 0.32.0rc1
Synapse 0.32.0rc1 (2018-07-05)
==============================
Features
--------
- Add blacklist & whitelist of servers allowed to send events to a room via ``m.room.server_acl`` event.
- Cache factor override system for specific caches (`#3334 <https://github.com/matrix-org/synapse/issues/3334>`_)
- Add metrics to track appservice transactions (`#3344 <https://github.com/matrix-org/synapse/issues/3344>`_)
- Try to log more helpful info when a sig verification fails (`#3372 <https://github.com/matrix-org/synapse/issues/3372>`_)
- Synapse now uses the best performing JSON encoder/decoder according to your runtime (simplejson on CPython, stdlib json on PyPy). (`#3462 <https://github.com/matrix-org/synapse/issues/3462>`_)
- Add optional ip_range_whitelist param to AS registration files to lock AS IP access (`#3465 <https://github.com/matrix-org/synapse/issues/3465>`_)
- Reject invalid server names in federation requests (`#3480 <https://github.com/matrix-org/synapse/issues/3480>`_)
- Reject invalid server names in homeserver.yaml (`#3483 <https://github.com/matrix-org/synapse/issues/3483>`_)
Bugfixes
--------
- Strip access_token from outgoing requests (`#3327 <https://github.com/matrix-org/synapse/issues/3327>`_)
- Redact AS tokens in logs (`#3349 <https://github.com/matrix-org/synapse/issues/3349>`_)
- Fix federation backfill from SQLite servers (`#3355 <https://github.com/matrix-org/synapse/issues/3355>`_)
- Fix event-purge-by-ts admin API (`#3363 <https://github.com/matrix-org/synapse/issues/3363>`_)
- Fix event filtering in get_missing_events handler (`#3371 <https://github.com/matrix-org/synapse/issues/3371>`_)
- Synapse is now stricter regarding accepting events which it cannot retrieve the prev_events for. (`#3456 <https://github.com/matrix-org/synapse/issues/3456>`_)
- Fix bug where synapse would explode when receiving unicode in HTTP User-Agent header (`#3470 <https://github.com/matrix-org/synapse/issues/3470>`_)
- Invalidate cache on correct thread to avoid race (`#3473 <https://github.com/matrix-org/synapse/issues/3473>`_)
Improved Documentation
----------------------
- ``doc/postgres.rst``: fix display of the last command block. Thanks to @ArchangeGabriel! (`#3340 <https://github.com/matrix-org/synapse/issues/3340>`_)
Deprecations and Removals
-------------------------
- Remove was_forgotten_at (`#3324 <https://github.com/matrix-org/synapse/issues/3324>`_)
Misc
----
- `#3332 <https://github.com/matrix-org/synapse/issues/3332>`_, `#3341 <https://github.com/matrix-org/synapse/issues/3341>`_, `#3347 <https://github.com/matrix-org/synapse/issues/3347>`_, `#3348 <https://github.com/matrix-org/synapse/issues/3348>`_, `#3356 <https://github.com/matrix-org/synapse/issues/3356>`_, `#3385 <https://github.com/matrix-org/synapse/issues/3385>`_, `#3446 <https://github.com/matrix-org/synapse/issues/3446>`_, `#3447 <https://github.com/matrix-org/synapse/issues/3447>`_, `#3467 <https://github.com/matrix-org/synapse/issues/3467>`_, `#3474 <https://github.com/matrix-org/synapse/issues/3474>`_
Changes in synapse v0.31.2 (2018-06-14)
=======================================

CONTRIBUTING.rst

@@ -48,6 +48,26 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Changelog
~~~~~~~~~
All changes, even minor ones, need a corresponding changelog
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``issuenumberOrPR.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).
For example, a fix for a bug reported in #1234 would have its
changelog entry in ``changelog.d/1234.bugfix``, and contain content
like "The security levels of Florbs are now validated when
received over federation. Contributed by Jane Matrix".
Attribution
~~~~~~~~~~~
@@ -110,11 +130,15 @@ If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
Signed-off-by: Your Name <your@email.example.org>
...using your real name; unfortunately pseudonyms and anonymous contributions
can't be accepted. Git makes this trivial - just use the -s flag when you do
``git commit``, having first set ``user.name`` and ``user.email`` git configs
(which you should have done anyway :)
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the ``-s``
flag to ``git commit``, which uses the name and email set in your
``user.name`` and ``user.email`` git configs.
Conclusion
~~~~~~~~~~

MANIFEST.in

@@ -29,5 +29,8 @@ exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh
include pyproject.toml
recursive-include changelog.d *
prune .github
prune demo/etc

changelog.d/.gitignore (new file)

@@ -0,0 +1 @@
!.gitignore

changelog.d/3316.feature (new file)

@@ -0,0 +1 @@
Enforce the specified API for report_event

changelog.d/3463.misc (new empty file)

changelog.d/3464.misc (new empty file)

changelog.d/3496.feature (new file)

@@ -0,0 +1 @@
Include CPU time from database threads in request/block metrics.

changelog.d/3497.feature (new file)

@@ -0,0 +1 @@
Add CPU metrics for _fetch_event_list

changelog.d/3498.misc (new empty file)

changelog.d/3499.misc (new empty file)

changelog.d/3501.misc (new empty file)

changelog.d/3505.feature (new file)

@@ -0,0 +1 @@
Reduce database consumption when processing large numbers of receipts

changelog.d/3521.feature (new file)

@@ -0,0 +1 @@
Cache optimisation for /sync requests

changelog.d/3533.bugfix (new file)

@@ -0,0 +1 @@
Fix queued federation requests being processed in the wrong order

changelog.d/3534.misc (new file)

@@ -0,0 +1 @@
refactor: use parse_{string,integer} and asserts from http.servlet for deduplication

changelog.d/3535.misc (new empty file)

docs/admin_api/user_admin_api.rst

@@ -44,13 +44,26 @@ Deactivate Account
This API deactivates an account. It removes active access tokens, resets the
password, and deletes third-party IDs (to prevent the user requesting a
password reset).
password reset). It can also mark the user as GDPR-erased (stopping their data
from being distributed further, and deleting it entirely if there are no other
references to it).
The api is::
POST /_matrix/client/r0/admin/deactivate/<user_id>
including an ``access_token`` of a server admin, and an empty request body.
with a body of:
.. code:: json
{
"erase": true
}
including an ``access_token`` of a server admin.
The erase parameter is optional and defaults to 'false'.
An empty body may be passed for backwards compatibility.
Reset password

docs/postgres.rst

@@ -9,19 +9,19 @@ Set up database
Assuming your PostgreSQL database user is called ``postgres``, create a user
``synapse_user`` with::
su - postgres
createuser --pwprompt synapse_user
su - postgres
createuser --pwprompt synapse_user
The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
CREATE DATABASE synapse
ENCODING 'UTF8'
LC_COLLATE='C'
LC_CTYPE='C'
template=template0
OWNER synapse_user;
CREATE DATABASE synapse
ENCODING 'UTF8'
LC_COLLATE='C'
LC_CTYPE='C'
template=template0
OWNER synapse_user;
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
@@ -126,7 +126,7 @@ run::
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml``:
database configuration file ``homeserver-postgres.yaml``::
./synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml

pyproject.toml (new file)

@@ -0,0 +1,5 @@
[tool.towncrier]
package = "synapse"
filename = "CHANGES.rst"
directory = "changelog.d"
issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"

scripts-dev/federation_client.py

@@ -18,14 +18,22 @@
from __future__ import print_function
import argparse
from urlparse import urlparse, urlunparse
import nacl.signing
import json
import base64
import requests
import sys
from requests.adapters import HTTPAdapter
import srvlookup
import yaml
# uncomment the following to enable debug logging of http requests
#from httplib import HTTPConnection
#HTTPConnection.debuglevel = 1
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -113,17 +121,6 @@ def read_signing_keys(stream):
return keys
def lookup(destination, path):
if ":" in destination:
return "https://%s%s" % (destination, path)
else:
try:
srv = srvlookup.lookup("matrix", "tcp", destination)[0]
return "https://%s:%d%s" % (srv.host, srv.port, path)
except:
return "https://%s:%d%s" % (destination, 8448, path)
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
@@ -152,13 +149,19 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers.append(bytes(header))
print ("Authorization: %s" % header, file=sys.stderr)
dest = lookup(destination, path)
dest = "matrix://%s%s" % (destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
result = requests.request(
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
result = s.request(
method=method,
url=dest,
headers={"Authorization": authorization_headers[0]},
headers={
"Host": destination,
"Authorization": authorization_headers[0]
},
verify=False,
data=content,
)
@@ -242,5 +245,39 @@ def read_args_from_config(args):
args.signing_key_path = config['signing_key_path']
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s):
if s[-1] == ']':
# ipv6 literal (with no port)
return s, 8448
if ":" in s:
out = s.rsplit(":",1)
try:
port = int(out[1])
except ValueError:
raise ValueError("Invalid host:port '%s'" % s)
return out[0], port
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
except:
return s, 8448
def get_connection(self, url, proxies=None):
parsed = urlparse(url)
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlunparse((
"https", netloc, parsed.path, parsed.params, parsed.query,
parsed.fragment,
))
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)
if __name__ == "__main__":
main()

setup.cfg

@@ -17,4 +17,17 @@ ignore =
[flake8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
ignore = W503
# E203 is contrary to PEP8.
ignore = W503,E203
[isort]
line_length = 89
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_compat = mock,six
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true

synapse/__init__.py

@@ -17,4 +17,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.31.2"
__version__ = "0.32.2"

synapse/api/auth.py

@@ -18,14 +18,16 @@ import logging
from six import itervalues
import pymacaroons
from netaddr import IPAddress
from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes
from synapse.types import UserID
from synapse.util.caches import register_cache, CACHE_SIZE_FACTOR
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -191,7 +193,7 @@ class Auth(object):
synapse.types.create_requester(user_id, app_service=app_service)
)
access_token = get_access_token_from_request(
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
@@ -237,13 +239,18 @@ class Auth(object):
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
get_access_token_from_request(
self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
)
if app_service is None:
defer.returnValue((None, None))
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
if "user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
@@ -488,7 +495,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
logger.warn("Unrecognised access token - not in store: %s" % (token,))
logger.warn("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN
@@ -506,12 +513,12 @@ class Auth(object):
def get_appservice_by_req(self, request):
try:
token = get_access_token_from_request(
token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token: %s" % (token,))
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
@@ -666,67 +673,67 @@ class Auth(object):
" edit its room list entry"
)
@staticmethod
def has_access_token(request):
"""Checks if the request has an access_token.
def has_access_token(request):
"""Checks if the request has an access_token.
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request, token_not_found_http_status=401):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
def get_access_token_from_request(request, token_not_found_http_status=401):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try to get the access_token from an "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")
if auth_headers:
# Try to get the access_token from an "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
else:
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
)
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
)
return query_params[0]
return query_params[0]

synapse/api/constants.py

@@ -68,6 +68,7 @@ class EventTypes(object):
RoomHistoryVisibility = "m.room.history_visibility"
CanonicalAlias = "m.room.canonical_alias"
Encryption = "m.room.encryption"
RoomAvatar = "m.room.avatar"
GuestAccess = "m.room.guest_access"
@@ -76,6 +77,8 @@ class EventTypes(object):
Topic = "m.room.topic"
Name = "m.room.name"
ServerACL = "m.room.server_acl"
class RejectedReason(object):
AUTH_ERROR = "auth_error"

synapse/api/errors.py

@@ -17,10 +17,11 @@
import logging
import simplejson as json
from six import iteritems
from six.moves import http_client
from canonicaljson import json
logger = logging.getLogger(__name__)

synapse/api/filtering.py

@@ -12,14 +12,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, RoomID
import jsonschema
from canonicaljson import json
from jsonschema import FormatChecker
from twisted.internet import defer
import simplejson as json
import jsonschema
from jsonschema import FormatChecker
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import RoomID, UserID
FILTER_SCHEMA = {
"additionalProperties": False,

View File

@@ -15,8 +15,8 @@
# limitations under the License.
"""Contains the URL paths to prefix various aspects of the server with. """
from hashlib import sha256
import hmac
from hashlib import sha256
from six.moves.urllib.parse import urlencode

View File

@@ -14,9 +14,11 @@
# limitations under the License.
import sys
from synapse import python_dependencies # noqa: E402
sys.dont_write_bytecode = True
from synapse import python_dependencies # noqa: E402
try:
python_dependencies.check_requirements()

View File

@@ -17,15 +17,18 @@ import gc
import logging
import sys
from daemonize import Daemonize
from twisted.internet import error, reactor
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
try:
import affinity
except Exception:
affinity = None
from daemonize import Daemonize
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
from twisted.internet import error, reactor
logger = logging.getLogger(__name__)

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -23,6 +26,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -35,8 +39,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.appservice")
@@ -62,7 +64,7 @@ class AppserviceServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -97,7 +99,7 @@ class AppserviceServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -44,8 +47,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.client_reader")
@@ -122,7 +123,7 @@ class ClientReaderServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -43,8 +46,10 @@ from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import (
RoomSendEventRestServlet, RoomMembershipRestServlet, RoomStateEventRestServlet,
JoinRoomAliasServlet,
RoomMembershipRestServlet,
RoomSendEventRestServlet,
RoomStateEventRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -52,8 +57,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.event_creator")
@@ -138,7 +141,7 @@ class EventCreatorServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX
@@ -41,8 +44,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_reader")
@@ -111,7 +112,7 @@ class FederationReaderServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -42,8 +45,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.federation_sender")
@@ -125,7 +126,7 @@ class FederationSenderServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.errors import SynapseError
@@ -25,9 +28,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
@@ -44,8 +45,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.frontend_proxy")
@@ -176,7 +175,7 @@ class FrontendProxyServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -18,27 +18,39 @@ import logging
import os
import sys
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, FEDERATION_PREFIX, \
LEGACY_MEDIA_PREFIX, MEDIA_PREFIX, SERVER_KEY_PREFIX, SERVER_KEY_V2_PREFIX, \
STATIC_PREFIX, WEB_CLIENT_PREFIX
from synapse.api.urls import (
CONTENT_REPO_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
SERVER_KEY_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
)
from synapse.app import _base
from synapse.app._base import quit_with_error, listen_ssl, listen_tcp
from synapse.app._base import listen_ssl, listen_tcp, quit_with_error
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
from synapse.replication.http import ReplicationRestResource, REPLICATION_PREFIX
from synapse.module_api import ModuleApi
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
@@ -55,11 +67,6 @@ from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
logger = logging.getLogger("synapse.app.homeserver")
@@ -266,7 +273,7 @@ class SynapseHomeServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
@@ -318,11 +325,6 @@ def setup(config_options):
# check any extra requirements we have now we have a config
check_requirements(config)
version_string = "Synapse/" + get_version_string(synapse)
logger.info("Server hostname: %s", config.server_name)
logger.info("Server version: %s", version_string)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
tls_server_context_factory = context_factory.ServerContextFactory(config)
@@ -335,7 +337,7 @@ def setup(config_options):
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
config=config,
version_string=version_string,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)

View File

@@ -16,11 +16,12 @@
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
)
from synapse.api.urls import CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
@@ -43,8 +44,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.media_repository")
@@ -118,7 +117,7 @@ class MediaRepositoryServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,6 +16,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -37,8 +40,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.pusher")
@@ -128,7 +129,7 @@ class PusherServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -17,6 +17,11 @@ import contextlib
import logging
import sys
from six import iteritems
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse.api.constants import EventTypes
from synapse.app import _base
@@ -36,12 +41,12 @@ from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@@ -56,10 +61,6 @@ from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
from six import iteritems
logger = logging.getLogger("synapse.app.synchrotron")
@@ -305,7 +306,7 @@ class SynchrotronServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -16,16 +16,17 @@
import argparse
import collections
import errno
import glob
import os
import os.path
import signal
import subprocess
import sys
import yaml
import errno
import time
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
@@ -171,6 +172,10 @@ def main():
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in cache_factors.iteritems():
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
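A short sketch (cache name and factor are hypothetical) of what the synctl_cache_factors loop above produces for the worker processes:

import os

# e.g. a homeserver.yaml containing:
#     synctl_cache_factors:
#         get_users_in_room: 5.0
cache_factors = {"get_users_in_room": 5.0}
for cache_name, factor in cache_factors.items():
    os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)

assert os.environ["SYNAPSE_CACHE_FACTOR_GET_USERS_IN_ROOM"] == "5.0"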

View File

@@ -17,6 +17,9 @@
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
@@ -43,8 +46,6 @@ from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
from twisted.internet import reactor, defer
from twisted.web.resource import NoResource
logger = logging.getLogger("synapse.app.user_dir")
@@ -150,7 +151,7 @@ class UserDirectoryServer(HomeServer):
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])

View File

@@ -12,17 +12,17 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import GroupID, get_domain_from_id
from twisted.internet import defer
import logging
import re
from six import string_types
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.types import GroupID, get_domain_from_id
from synapse.util.caches.descriptors import cachedInlineCallbacks
logger = logging.getLogger(__name__)
@@ -85,7 +85,8 @@ class ApplicationService(object):
NS_LIST = [NS_USERS, NS_ALIASES, NS_ROOMS]
def __init__(self, token, hostname, url=None, namespaces=None, hs_token=None,
sender=None, id=None, protocols=None, rate_limited=True):
sender=None, id=None, protocols=None, rate_limited=True,
ip_range_whitelist=None):
self.token = token
self.url = url
self.hs_token = hs_token
@@ -93,6 +94,7 @@ class ApplicationService(object):
self.server_name = hostname
self.namespaces = self._check_namespaces(namespaces)
self.id = id
self.ip_range_whitelist = ip_range_whitelist
if "|" in self.id:
raise Exception("application service ID cannot contain '|' character")
@@ -292,4 +294,8 @@ class ApplicationService(object):
return self.rate_limited
def __str__(self):
return "ApplicationService: %s" % (self.__dict__,)
# copy dictionary and redact token fields so they don't get logged
dict_copy = self.__dict__.copy()
dict_copy["token"] = "<redacted>"
dict_copy["hs_token"] = "<redacted>"
return "ApplicationService: %s" % (dict_copy,)

View File

@@ -12,20 +12,39 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.http.client import SimpleHttpClient
from synapse.events.utils import serialize_event
from synapse.util.caches.response_cache import ResponseCache
from synapse.http.client import SimpleHttpClient
from synapse.types import ThirdPartyInstanceID
import logging
import urllib
from synapse.util.caches.response_cache import ResponseCache
logger = logging.getLogger(__name__)
sent_transactions_counter = Counter(
"synapse_appservice_api_sent_transactions",
"Number of /transactions/ requests sent",
["service"]
)
failed_transactions_counter = Counter(
"synapse_appservice_api_failed_transactions",
"Number of /transactions/ requests that failed to send",
["service"]
)
sent_events_counter = Counter(
"synapse_appservice_api_sent_events",
"Number of events sent to the AS",
["service"]
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -219,12 +238,15 @@ class ApplicationServiceApi(SimpleHttpClient):
args={
"access_token": service.hs_token
})
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
def _serialize(self, events):

View File

@@ -48,14 +48,14 @@ UP & quit +---------- YES SUCCESS
This is all tied together by the AppServiceScheduler which dependency-injects the required
components.
"""
import logging
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import Measure
import logging
logger = logging.getLogger(__name__)

View File

@@ -16,11 +16,12 @@
import argparse
import errno
import os
import yaml
from textwrap import dedent
from six import integer_types
import yaml
class ConfigError(Exception):
pass

View File

@@ -12,10 +12,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.api.constants import EventTypes
from ._base import Config
class ApiConfig(Config):

View File

@@ -12,17 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from synapse.appservice import ApplicationService
from synapse.types import UserID
import yaml
import logging
from six import string_types
from six.moves.urllib import parse as urlparse
import yaml
from netaddr import IPSet
from synapse.appservice import ApplicationService
from synapse.types import UserID
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
@@ -154,6 +156,13 @@ def _load_appservice(hostname, as_info, config_filename):
" will not receive events or queries.",
config_filename,
)
ip_range_whitelist = None
if as_info.get('ip_range_whitelist'):
ip_range_whitelist = IPSet(
as_info.get('ip_range_whitelist')
)
return ApplicationService(
token=as_info["as_token"],
hostname=hostname,
@@ -163,5 +172,6 @@ def _load_appservice(hostname, as_info, config_filename):
sender=user_id,
id=as_info["id"],
protocols=protocols,
rate_limited=rate_limited
rate_limited=rate_limited,
ip_range_whitelist=ip_range_whitelist,
)
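A small sketch of what the netaddr IPSet built above provides (addresses are illustrative): membership tests cover whole CIDR ranges, so an appservice can be confined to its own hosts.

from netaddr import IPSet

whitelist = IPSet(["192.168.1.0/24", "10.0.0.1"])
print("192.168.1.57" in whitelist)  # True: inside the /24 range
print("10.0.0.2" in whitelist)      # False: only 10.0.0.1 is listed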

View File

@@ -18,6 +18,9 @@ from ._base import Config
DEFAULT_CONFIG = """\
# User Consent configuration
#
# for detailed instructions, see
# https://github.com/matrix-org/synapse/blob/master/docs/consent_tracking.md
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#

View File

@@ -13,32 +13,32 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .tls import TlsConfig
from .server import ServerConfig
from .logger import LoggingConfig
from .database import DatabaseConfig
from .ratelimiting import RatelimitConfig
from .repository import ContentRepositoryConfig
from .captcha import CaptchaConfig
from .voip import VoipConfig
from .registration import RegistrationConfig
from .metrics import MetricsConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .key import KeyConfig
from .saml2 import SAML2Config
from .captcha import CaptchaConfig
from .cas import CasConfig
from .password import PasswordConfig
from .jwt import JWTConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .emailconfig import EmailConfig
from .workers import WorkerConfig
from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .groups import GroupsConfig
from .jwt import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
from .ratelimiting import RatelimitConfig
from .registration import RegistrationConfig
from .repository import ContentRepositoryConfig
from .saml2 import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .tls import TlsConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,

View File

@@ -15,7 +15,6 @@
from ._base import Config, ConfigError
MISSING_JWT = (
"""Missing jwt library. This is required for jwt login.

View File

@@ -13,21 +13,24 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from synapse.util.stringutils import random_string
from signedjson.key import (
generate_signing_key, is_signing_algorithm_supported,
decode_signing_key_base64, decode_verify_key_bytes,
read_signing_keys, write_signing_keys, NACL_ED25519
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string_with_symbols
import os
import hashlib
import logging
import os
from signedjson.key import (
NACL_ED25519,
decode_signing_key_base64,
decode_verify_key_bytes,
generate_signing_key,
is_signing_algorithm_supported,
read_signing_keys,
write_signing_keys,
)
from unpaddedbase64 import decode_base64
from synapse.util.stringutils import random_string, random_string_with_symbols
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)

View File

@@ -12,17 +12,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.logcontext import LoggingContextFilter
from twisted.logger import globalLogBeginner, STDLibLogObserver
import logging
import logging.config
import yaml
from string import Template
import os
import signal
import sys
from string import Template
import yaml
from twisted.logger import STDLibLogObserver, globalLogBeginner
import synapse
from synapse.util.logcontext import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
DEFAULT_LOG_CONFIG = Template("""
version: 1
@@ -202,6 +207,15 @@ def setup_logging(config, use_worker_options=False):
if getattr(signal, "SIGHUP"):
signal.signal(signal.SIGHUP, sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn(
"Server %s version %s",
sys.argv[0], get_version_string(synapse),
)
logging.info("Server hostname: %s", config.server_name)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks up to 64K objects;
# see: https://twistedmatrix.com/trac/ticket/8164

View File

@@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.util.module_loader import load_module
from ._base import Config
LDAP_PROVIDER = 'ldap_auth_provider.LdapAuthProvider'

View File

@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from distutils.util import strtobool
from synapse.util.stringutils import random_string_with_symbols
from distutils.util import strtobool
from ._base import Config
class RegistrationConfig(Config):

View File

@@ -13,11 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
from collections import namedtuple
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
MISSING_NETADDR = (
"Missing netaddr library. This is required for URL preview API."

View File

@@ -16,6 +16,8 @@
import logging
from synapse.http.endpoint import parse_and_validate_server_name
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
@@ -25,6 +27,12 @@ class ServerConfig(Config):
def read_config(self, config):
self.server_name = config["server_name"]
try:
parse_and_validate_server_name(self.server_name)
except ValueError as e:
raise ConfigError(str(e))
self.pid_file = self.abspath(config.get("pid_file"))
self.web_client = config["web_client"]
self.web_client_location = config.get("web_client_location", None)
@@ -162,8 +170,8 @@ class ServerConfig(Config):
})
def default_config(self, server_name, **kwargs):
if ":" in server_name:
bind_port = int(server_name.split(":")[1])
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
unsecure_port = bind_port - 400
else:
bind_port = 8448
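Assuming the helper behaves as its call sites suggest (it splits and validates a host[:port] server name, raising ValueError on malformed input), a usage sketch:

from synapse.http.endpoint import parse_and_validate_server_name

host, port = parse_and_validate_server_name("example.com:8448")
assert (host, port) == ("example.com", 8448)

host, port = parse_and_validate_server_name("example.com")
assert port is None  # no explicit port in the server name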

View File

@@ -12,9 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.types import UserID
from ._base import Config
DEFAULT_CONFIG = """\
# Server Notices room configuration
#

46
synapse/config/stats.py Normal file
View File

@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
import sys
class StatsConfig(Config):
"""Stats Configuration
Configuration for the behaviour of synapse's stats engine
"""
def read_config(self, config):
self.stats_enable = False
self.stats_bucket_size = 86400
self.stats_retention = sys.maxint
stats_config = config.get("stats", None)
if stats_config:
self.stats_enable = stats_config.get("enable", self.stats_enable)
self.stats_bucket_size = stats_config.get(
"bucket_size", self.stats_bucket_size
)
self.stats_retention = stats_config.get("retention", self.stats_retention)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Stats configuration
#
# stats:
# enable: false
# bucket_size: 86400 # 1 day
# retention: 31536000 # 1 year
"""

View File

@@ -13,14 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
import os
import subprocess
from hashlib import sha256
from unpaddedbase64 import encode_base64
from OpenSSL import crypto
import subprocess
import os
from hashlib import sha256
from unpaddedbase64 import encode_base64
from ._base import Config
GENERATE_DH_PARAMS = False

View File

@@ -12,12 +12,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import ssl
from OpenSSL import SSL, crypto
from twisted.internet._sslverify import _defaultCurveName
import logging
from OpenSSL import SSL, crypto
from twisted.internet import ssl
from twisted.internet._sslverify import _defaultCurveName
logger = logging.getLogger(__name__)

View File

@@ -15,16 +15,16 @@
# limitations under the License.
from synapse.api.errors import SynapseError, Codes
from synapse.events.utils import prune_event
from canonicaljson import encode_canonical_json
from unpaddedbase64 import encode_base64, decode_base64
from signedjson.sign import sign_json
import hashlib
import logging
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.events.utils import prune_event
logger = logging.getLogger(__name__)

View File

@@ -13,14 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import logcontext
from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
import simplejson as json
import logging
from canonicaljson import json
from twisted.internet import defer, reactor
from twisted.internet.protocol import Factory
from twisted.web.http import HTTPClient
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util import logcontext
logger = logging.getLogger(__name__)

View File

@@ -14,9 +14,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import hashlib
import logging
import urllib
from collections import namedtuple
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
is_signing_algorithm_supported,
)
from signedjson.sign import (
SignatureVerifyException,
encode_canonical_json,
sign_json,
signature_ids,
verify_signed_json,
)
from unpaddedbase64 import decode_base64, encode_base64
from OpenSSL import crypto
from twisted.internet import defer
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.keyclient import fetch_server_key
from synapse.api.errors import SynapseError, Codes
from synapse.util import unwrapFirstError, logcontext
from synapse.util import logcontext, unwrapFirstError
from synapse.util.logcontext import (
PreserveLoggingContext,
preserve_fn,
@@ -24,24 +46,6 @@ from synapse.util.logcontext import (
)
from synapse.util.metrics import Measure
from twisted.internet import defer
from signedjson.sign import (
verify_signed_json, signature_ids, sign_json, encode_canonical_json
)
from signedjson.key import (
is_signing_algorithm_supported, decode_verify_key_bytes
)
from unpaddedbase64 import decode_base64, encode_base64
from OpenSSL import crypto
from collections import namedtuple
import urllib
import hashlib
import logging
logger = logging.getLogger(__name__)
@@ -56,7 +60,7 @@ Attributes:
key_ids(set(str)): The set of key_ids that could be used to verify the
JSON object
json_object(dict): The JSON object to verify.
deferred(twisted.internet.defer.Deferred):
deferred(Deferred[str, str, nacl.signing.VerifyKey]):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched. The deferreds' callbacks are run with no
logcontext.
@@ -736,6 +740,17 @@ class Keyring(object):
@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
"""Waits for the key to become available, and then performs a verification
Args:
verify_request (VerifyKeyRequest):
Returns:
Deferred[None]
Raises:
SynapseError if there was a problem performing the verification
"""
server_name = verify_request.server_name
try:
with PreserveLoggingContext():
@@ -768,11 +783,17 @@ def _handle_key_deferred(verify_request):
))
try:
verify_signed_json(json_object, server_name, verify_key)
except Exception:
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
server_name, verify_key.alg, verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s" % (
server_name, verify_key.alg, verify_key.version
"Invalid signature for server %s with key %s:%s: %s" % (
server_name, verify_key.alg, verify_key.version, str(e),
),
Codes.UNAUTHORIZED,
)
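For context, a self-contained sketch (domain and key version are illustrative) of the signedjson round trip whose failure mode the tightened except clause above reports:

from signedjson.key import generate_signing_key, get_verify_key
from signedjson.sign import (
    SignatureVerifyException,
    sign_json,
    verify_signed_json,
)

signing_key = generate_signing_key("v1")
signed = sign_json({"foo": "bar"}, "example.com", signing_key)

# Verifying the untouched object succeeds silently...
verify_signed_json(signed, "example.com", get_verify_key(signing_key))

# ...while a tampered object raises SignatureVerifyException, which the
# new code turns into a 401 SynapseError with the reason attached.
signed["foo"] = "baz"
try:
    verify_signed_json(signed, "example.com", get_verify_key(signing_key))
except SignatureVerifyException as e:
    print("rejected: %s" % (e,))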

View File

@@ -17,11 +17,11 @@ import logging
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json, SignatureVerifyException
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, Membership, JoinRules
from synapse.api.errors import AuthError, SynapseError, EventSizeError
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__)
@@ -76,6 +76,7 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
return
if event.type == EventTypes.Create:
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
raise AuthError(
@@ -524,7 +525,11 @@ def _check_power_levels(event, auth_events):
"to your own"
)
if old_level > user_level or new_level > user_level:
# Check if the old and new levels are greater than the user level
# (if defined)
old_level_too_big = old_level is not None and old_level > user_level
new_level_too_big = new_level is not None and new_level > user_level
if old_level_too_big or new_level_too_big:
raise AuthError(
403,
"You don't have permission to add ops level greater "

View File

@@ -13,9 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util.frozenutils import freeze
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting

View File

@@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from . import EventBase, FrozenEvent, _event_dict_property
import copy
from synapse.types import EventID
from synapse.util.stringutils import random_string
import copy
from . import EventBase, FrozenEvent, _event_dict_property
class EventBuilder(EventBase):

View File

@@ -13,10 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from frozendict import frozendict
from twisted.internet import defer
class EventContext(object):
"""

View File

@@ -13,15 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
from . import EventBase
from frozendict import frozendict
import re
from six import string_types
from frozendict import frozendict
from synapse.api.constants import EventTypes
from . import EventBase
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.

View File

@@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.types import EventID, RoomID, UserID
from synapse.api.errors import SynapseError
from synapse.api.constants import EventTypes, Membership
from six import string_types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.types import EventID, RoomID, UserID
class EventValidator(object):

View File

@@ -16,14 +16,15 @@ import logging
import six
from twisted.internet import defer
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import SynapseError, Codes
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_request
from synapse.util import unwrapFirstError, logcontext
from twisted.internet import defer
from synapse.http.servlet import assert_params_in_dict
from synapse.util import logcontext, unwrapFirstError
logger = logging.getLogger(__name__)
@@ -198,7 +199,7 @@ def event_from_pdu_json(pdu_json, outlier=False):
"""
# we could probably enforce a bunch of other fields here (room_id, sender,
# origin, etc etc)
assert_params_in_request(pdu_json, ('event_id', 'type', 'depth'))
assert_params_in_dict(pdu_json, ('event_id', 'type', 'depth'))
depth = pdu_json['depth']
if not isinstance(depth, six.integer_types):

View File

@@ -21,25 +21,25 @@ import random
from six.moves import range
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.errors import (
CodeMessageException, HttpResponseException, SynapseError, FederationDeniedError
CodeMessageException,
FederationDeniedError,
HttpResponseException,
SynapseError,
)
from synapse.events import builder
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
from prometheus_client import Counter
logger = logging.getLogger(__name__)
sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])

View File

@@ -14,28 +14,29 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
import six
from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
import simplejson as json
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from synapse.api.errors import AuthError, FederationError, SynapseError, NotFoundError
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError, FederationError, NotFoundError, SynapseError
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.http.endpoint import parse_server_name
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
from prometheus_client import Counter
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
TRANSACTION_CONCURRENCY_LIMIT = 10
@@ -74,6 +75,9 @@ class FederationServer(FederationBase):
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request(
origin, room_id, versions, limit
)
@@ -134,6 +138,8 @@ class FederationServer(FederationBase):
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(transaction.origin)
pdus_by_room = {}
for p in transaction.pdus:
@@ -154,9 +160,21 @@ class FederationServer(FederationBase):
# we can process different rooms in parallel (which is useful if they
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn(
"Ignoring PDUs for room %s from banned server", room_id,
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
@@ -211,6 +229,9 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -234,6 +255,9 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -277,7 +301,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_pdu_request(self, origin, event_id):
pdu = yield self._get_persisted_pdu(origin, event_id)
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu:
defer.returnValue(
@@ -298,7 +322,9 @@ class FederationServer(FederationBase):
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, room_id, user_id):
def on_make_join_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_join_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@@ -306,6 +332,8 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue((200, {"event": ret_pdu.get_pdu_json(time_now)}))
@@ -314,6 +342,10 @@ class FederationServer(FederationBase):
def on_send_join_request(self, origin, content):
logger.debug("on_send_join_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -325,7 +357,9 @@ class FederationServer(FederationBase):
}))
@defer.inlineCallbacks
def on_make_leave_request(self, room_id, user_id):
def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@@ -334,6 +368,10 @@ class FederationServer(FederationBase):
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = event_from_pdu_json(content)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@@ -341,6 +379,9 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {
@@ -369,6 +410,9 @@ class FederationServer(FederationBase):
Deferred: Results in `dict` with the same format as `content`
"""
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
auth_chain = [
event_from_pdu_json(e)
for e in content["auth_chain"]
@@ -442,6 +486,9 @@ class FederationServer(FederationBase):
def on_get_missing_events(self, origin, room_id, earliest_events,
latest_events, limit, min_depth):
with (yield self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d, min_depth: %d",
@@ -470,17 +517,6 @@ class FederationServer(FederationBase):
ts_now_ms = self._clock.time_msec()
return self.store.get_user_id_for_open_id_token(token, ts_now_ms)
@log_function
def _get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
Returns:
Deferred: Results in a `Pdu`.
"""
return self.handler.get_persisted_pdu(
origin, event_id, do_auth=do_auth
)
def _transaction_from_pdus(self, pdu_list):
"""Returns a new Transaction containing the given PDUs suitable for
transmission.
@@ -560,7 +596,9 @@ class FederationServer(FederationBase):
affected=pdu.event_id,
)
yield self.handler.on_receive_pdu(origin, pdu, get_missing=True)
yield self.handler.on_receive_pdu(
origin, pdu, get_missing=True, sent_to_us_directly=True,
)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
@@ -588,6 +626,101 @@ class FederationServer(FederationBase):
)
defer.returnValue(ret)
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
server_name (str): name of server, *without any port part*
room_id (str): ID of the room to check
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
raise AuthError(code=403, msg="Server is banned from room")
def server_matches_acl_event(server_name, acl_event):
"""Check if the given server is allowed by the ACL event
Args:
server_name (str): name of server, without any port part
acl_event (EventBase): m.room.server_acl event
Returns:
bool: True if this server is allowed by the ACLs
"""
logger.debug("Checking %s against acl %s", server_name, acl_event.content)
# first of all, check if literal IPs are blocked, and if so, whether the
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignorning non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
if server_name[0] == '[':
return False
# check for ipv4 literals. We can just lift the routine from twisted.
if isIPAddress(server_name):
return False
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignorning non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched deny rule %s", server_name, e)
return False
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignorning non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
# logger.info("%s matched allow rule %s", server_name, e)
return True
# everything else should be rejected.
# logger.info("%s fell through", server_name)
return False
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn("Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry))
return False
regex = _glob_to_regex(acl_entry)
return regex.match(server_name)
def _glob_to_regex(glob):
res = ''
for c in glob:
if c == '*':
res = res + '.*'
elif c == '?':
res = res + '.'
else:
res = res + re.escape(c)
return re.compile(res + "\\Z", re.IGNORECASE)
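Putting the pieces above together, a hedged example (the event content is illustrative, reusing server_matches_acl_event from this file):

class _FakeAclEvent(object):
    # Stands in for a real m.room.server_acl event, for illustration only.
    content = {
        "allow_ip_literals": False,
        "deny": ["*.evil.example"],
        "allow": ["*"],
    }

print(server_matches_acl_event("matrix.example.com", _FakeAclEvent()))   # True
print(server_matches_acl_event("honest.evil.example", _FakeAclEvent()))  # False: deny glob
print(server_matches_acl_event("[2001:db8::1]", _FakeAclEvent()))        # False: IP literal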
class FederationHandlerRegistry(object):
"""Allows classes to register themselves as handlers for a given EDU or

View File

@@ -19,13 +19,12 @@ package.
These actions are mostly only used by the :py:mod:`.replication` module.
"""
import logging
from twisted.internet import defer
from synapse.util.logutils import log_function
import logging
logger = logging.getLogger(__name__)

View File

@@ -29,18 +29,18 @@ dead worker doesn't cause the queues to grow limitlessly.
Events are replicated via a separate events stream.
"""
from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
from synapse.metrics import LaterGauge
from blist import sorteddict
import logging
from collections import namedtuple
import logging
from six import iteritems, itervalues
from six import itervalues, iteritems
from sortedcontainers import SortedDict
from synapse.metrics import LaterGauge
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
from .units import Edu
logger = logging.getLogger(__name__)
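The hunks below migrate from blist's sorteddict to sortedcontainers; a minimal sketch (stream positions are illustrative) of the bisect-then-slice pattern they rely on:

from sortedcontainers import SortedDict

edus = SortedDict({1: "a", 3: "b", 7: "c"})  # stream position -> Edu
i = edus.bisect_right(1)       # entries strictly after from_token=1
j = edus.bisect_right(7) + 1   # entries up to and including to_token=7
print(edus.items()[i:j])       # [(3, 'b'), (7, 'c')]

# Dict comprehensions keep the last value for a repeated key, which the
# "purposefully clobber" comment below exploits to keep only the newest
# stream position per destination.
changed = [(2, "d1"), (5, "d1")]             # (stream position, destination)
print({dest: pos for pos, dest in changed})  # {'d1': 5}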
@@ -55,19 +55,19 @@ class FederationRemoteSendQueue(object):
self.is_mine_id = hs.is_mine_id
self.presence_map = {} # Pending presence map user_id -> UserPresenceState
self.presence_changed = sorteddict() # Stream position -> user_id
self.presence_changed = SortedDict() # Stream position -> user_id
self.keyed_edu = {} # (destination, key) -> EDU
self.keyed_edu_changed = sorteddict() # stream position -> (destination, key)
self.keyed_edu_changed = SortedDict() # stream position -> (destination, key)
self.edus = sorteddict() # stream position -> Edu
self.edus = SortedDict() # stream position -> Edu
self.failures = sorteddict() # stream position -> (destination, Failure)
self.failures = SortedDict() # stream position -> (destination, Failure)
self.device_messages = sorteddict() # stream position -> destination
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
self.pos_time = sorteddict()
self.pos_time = SortedDict()
# EVERYTHING IS SAD. In particular, python only makes new scopes when
# we make a new function, so we need to make a new function so the inner
@@ -98,7 +98,7 @@ class FederationRemoteSendQueue(object):
now = self.clock.time_msec()
keys = self.pos_time.keys()
time = keys.bisect_left(now - FIVE_MINUTES_AGO)
time = self.pos_time.bisect_left(now - FIVE_MINUTES_AGO)
if not keys[:time]:
return
@@ -113,7 +113,7 @@ class FederationRemoteSendQueue(object):
with Measure(self.clock, "send_queue._clear"):
# Delete things out of presence maps
keys = self.presence_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.presence_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.presence_changed[key]
@@ -131,7 +131,7 @@ class FederationRemoteSendQueue(object):
# Delete things out of keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_left(position_to_delete)
i = self.keyed_edu_changed.bisect_left(position_to_delete)
for key in keys[:i]:
del self.keyed_edu_changed[key]
@@ -145,19 +145,19 @@ class FederationRemoteSendQueue(object):
# Delete things out of edu map
keys = self.edus.keys()
i = keys.bisect_left(position_to_delete)
i = self.edus.bisect_left(position_to_delete)
for key in keys[:i]:
del self.edus[key]
# Delete things out of failure map
keys = self.failures.keys()
i = keys.bisect_left(position_to_delete)
i = self.failures.bisect_left(position_to_delete)
for key in keys[:i]:
del self.failures[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = keys.bisect_left(position_to_delete)
i = self.device_messages.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
@@ -250,13 +250,12 @@ class FederationRemoteSendQueue(object):
self._clear_queue_before_pos(federation_ack)
# Fetch changed presence
keys = self.presence_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.presence_changed.bisect_right(from_token)
j = self.presence_changed.bisect_right(to_token) + 1
dest_user_ids = [
(pos, user_id)
for pos in keys[i:j]
for user_id in self.presence_changed[pos]
for pos, user_id_list in self.presence_changed.items()[i:j]
for user_id in user_id_list
]
for (key, user_id) in dest_user_ids:
@@ -265,13 +264,12 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changes keyed edus
keys = self.keyed_edu_changed.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
i = self.keyed_edu_changed.bisect_right(from_token)
j = self.keyed_edu_changed.bisect_right(to_token) + 1
# We purposefully clobber based on the key here, python dict comprehensions
# always use the last value, so this will correctly point to the last
# stream position.
keyed_edus = {self.keyed_edu_changed[k]: k for k in keys[i:j]}
keyed_edus = {v: k for k, v in self.keyed_edu_changed.items()[i:j]}
for ((destination, edu_key), pos) in iteritems(keyed_edus):
rows.append((pos, KeyedEduRow(
@@ -280,19 +278,17 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changed edus
keys = self.edus.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
edus = ((k, self.edus[k]) for k in keys[i:j])
i = self.edus.bisect_right(from_token)
j = self.edus.bisect_right(to_token) + 1
edus = self.edus.items()[i:j]
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed failures
keys = self.failures.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
failures = ((k, self.failures[k]) for k in keys[i:j])
i = self.failures.bisect_right(from_token)
j = self.failures.bisect_right(to_token) + 1
failures = self.failures.items()[i:j]
for (pos, (destination, failure)) in failures:
rows.append((pos, FailureRow(
@@ -301,10 +297,9 @@ class FederationRemoteSendQueue(object):
)))
# Fetch changed device messages
keys = self.device_messages.keys()
i = keys.bisect_right(from_token)
j = keys.bisect_right(to_token) + 1
device_messages = {self.device_messages[k]: k for k in keys[i:j]}
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
device_messages = {v: k for k, v in self.device_messages.items()[i:j]}
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(
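The migration above works because sortedcontainers puts bisect_left/bisect_right directly on the SortedDict and returns sliceable views from items() and keys(), so the per-map `keys = ...` indirection can be dropped. A minimal sketch of that API (illustrative values, not Synapse code):

```python
from sortedcontainers import SortedDict

edus = SortedDict()                 # stream position -> payload
edus.update({1: "a", 3: "b", 7: "c"})

from_token, to_token = 1, 7
i = edus.bisect_right(from_token)   # index of first position after from_token
j = edus.bisect_right(to_token) + 1
print(edus.items()[i:j])            # [(3, 'b'), (7, 'c')]
```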

View File

@@ -13,37 +13,37 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
from twisted.internet import defer
from .persistence import TransactionActions
from .units import Transaction, Edu
from synapse.api.errors import HttpResponseException, FederationDeniedError
from synapse.util import logcontext, PreserveLoggingContext
from synapse.util.async import run_on_reactor
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
from synapse.metrics import LaterGauge
from synapse.metrics import (
sent_edus_counter,
sent_transactions_counter,
events_processed_counter,
)
from prometheus_client import Counter
import logging
from six import itervalues
import logging
from prometheus_client import Counter
from twisted.internet import defer
import synapse.metrics
from synapse.api.errors import FederationDeniedError, HttpResponseException
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
from synapse.metrics import (
LaterGauge,
events_processed_counter,
sent_edus_counter,
sent_transactions_counter,
)
from synapse.util import PreserveLoggingContext, logcontext
from synapse.util.metrics import measure_func
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from .persistence import TransactionActions
from .units import Edu, Transaction
logger = logging.getLogger(__name__)
sent_pdus_destination_dist = Counter(
"synapse_federation_transaction_queue_sent_pdu_destinations", ""
sent_pdus_destination_dist_count = Counter(
"synapse_federation_client_sent_pdu_destinations:count", ""
)
sent_pdus_destination_dist_total = Counter(
"synapse_federation_client_sent_pdu_destinations:total", ""
)
@@ -280,7 +280,8 @@ class TransactionQueue(object):
if not destinations:
return
sent_pdus_destination_dist.inc(len(destinations))
sent_pdus_destination_dist_total.inc(len(destinations))
sent_pdus_destination_dist_count.inc()
for destination in destinations:
self.pending_pdus_by_dest.setdefault(destination, []).append(
@@ -451,9 +452,6 @@ class TransactionQueue(object):
# hence why we throw the result away.
yield get_retry_limiter(destination, self.clock, self.store)
# XXX: what's this for?
yield run_on_reactor()
pending_pdus = []
while True:
device_message_edus, device_stream_id, dev_list_id = (
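One reading of the metric rename above: a single counter incremented by len(destinations) cannot tell you how many fan-out operations happened, so it is split into a count/total pair from which the mean fan-out is total/count. A hedged sketch of the pattern (metric names here are illustrative, not the ones Synapse registers):

```python
from prometheus_client import Counter

fanout_count = Counter(
    "sent_pdu_destinations_count", "Number of PDU fan-out operations",
)
fanout_total = Counter(
    "sent_pdu_destinations_total", "Destinations summed over all fan-outs",
)

def record_fanout(destinations):
    # in PromQL, rate(..._total) / rate(..._count) gives mean destinations/PDU
    fanout_total.inc(len(destinations))
    fanout_count.inc()
```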

View File

@@ -14,15 +14,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.util.logutils import log_function
import logging
import urllib
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.util.logutils import log_function
logger = logging.getLogger(__name__)

View File

@@ -14,25 +14,27 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.api.errors import Codes, SynapseError, FederationDeniedError
from synapse.http.server import JsonResource
from synapse.http.servlet import (
parse_json_object_from_request, parse_integer_from_args, parse_string_from_args,
parse_boolean_from_args,
)
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
from synapse.util.logcontext import run_in_background
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
import functools
import logging
import re
import synapse
from twisted.internet import defer
import synapse
from synapse.api.errors import Codes, FederationDeniedError, SynapseError
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
from synapse.http.endpoint import parse_and_validate_server_name
from synapse.http.server import JsonResource
from synapse.http.servlet import (
parse_boolean_from_args,
parse_integer_from_args,
parse_json_object_from_request,
parse_string_from_args,
)
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
from synapse.util.logcontext import run_in_background
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
logger = logging.getLogger(__name__)
@@ -99,26 +101,6 @@ class Authenticator(object):
origin = None
def parse_auth_header(header_str):
try:
params = auth.split(" ")[1].split(",")
param_dict = dict(kv.split("=") for kv in params)
def strip_quotes(value):
if value.startswith("\""):
return value[1:-1]
else:
return value
origin = strip_quotes(param_dict["origin"])
key = strip_quotes(param_dict["key"])
sig = strip_quotes(param_dict["sig"])
return (origin, key, sig)
except Exception:
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED
)
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
if not auth_headers:
@@ -127,8 +109,8 @@ class Authenticator(object):
)
for auth in auth_headers:
if auth.startswith("X-Matrix"):
(origin, key, sig) = parse_auth_header(auth)
if auth.startswith(b"X-Matrix"):
(origin, key, sig) = _parse_auth_header(auth)
json_request["origin"] = origin
json_request["signatures"].setdefault(origin, {})[key] = sig
@@ -165,6 +147,48 @@ class Authenticator(object):
logger.exception("Error resetting retry timings on %s", origin)
def _parse_auth_header(header_bytes):
"""Parse an X-Matrix auth header
Args:
header_bytes (bytes): header value
Returns:
Tuple[str, str, str]: origin, key id, signature.
Raises:
AuthenticationError if the header could not be parsed
"""
try:
header_str = header_bytes.decode('utf-8')
params = header_str.split(" ")[1].split(",")
param_dict = dict(kv.split("=") for kv in params)
def strip_quotes(value):
if value.startswith("\""):
return value[1:-1]
else:
return value
origin = strip_quotes(param_dict["origin"])
# ensure that the origin is a valid server name
parse_and_validate_server_name(origin)
key = strip_quotes(param_dict["key"])
sig = strip_quotes(param_dict["sig"])
return origin, key, sig
except Exception as e:
logger.warn(
"Error parsing auth header '%s': %s",
header_bytes.decode('ascii', 'replace'),
e,
)
raise AuthenticationError(
400, "Malformed Authorization header", Codes.UNAUTHORIZED,
)
class BaseFederationServlet(object):
REQUIRE_AUTH = True
@@ -362,7 +386,9 @@ class FederationMakeJoinServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query, context, user_id):
content = yield self.handler.on_make_join_request(context, user_id)
content = yield self.handler.on_make_join_request(
origin, context, user_id,
)
defer.returnValue((200, content))
@@ -371,7 +397,9 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query, context, user_id):
content = yield self.handler.on_make_leave_request(context, user_id)
content = yield self.handler.on_make_leave_request(
origin, context, user_id,
)
defer.returnValue((200, content))
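For reference, the header shape that `_parse_auth_header` above consumes: the origin is validated as a server name and quotes are stripped from each value. The signature here is a made-up placeholder:

```python
header = b'X-Matrix origin=matrix.org,key="ed25519:key1",sig="NaV6fXrLkSh"'
origin, key, sig = _parse_auth_header(header)
# origin == "matrix.org", key == "ed25519:key1", sig == "NaV6fXrLkSh"
```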

View File

@@ -17,10 +17,9 @@
server protocol.
"""
from synapse.util.jsonobject import JsonEncodedObject
import logging
from synapse.util.jsonobject import JsonEncodedObject
logger = logging.getLogger(__name__)

View File

@@ -23,9 +23,9 @@ If a user leaves (or gets kicked out of) a group, either side can still use
their attestation to "prove" their membership, until the attestation expires.
Therefore attestations shouldn't be relied on to prove membership in important
cases, but can for less important situations, e.g. showing a user's membership
of groups on their profile, showing flairs, etc.abs
of groups on their profile, showing flairs, etc.
An attestsation is a signed blob of json that looks like:
An attestation is a signed blob of json that looks like:
{
"user_id": "@foo:a.example.com",
@@ -38,15 +38,14 @@ An attestsation is a signed blob of json that looks like:
import logging
import random
from signedjson.sign import sign_json
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id
from synapse.util.logcontext import run_in_background
from signedjson.sign import sign_json
logger = logging.getLogger(__name__)
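A minimal sketch of producing an attestation of the shape described above, using signedjson (whose sign_json this file already imports); the key version, server names, and expiry are illustrative:

```python
from signedjson.key import generate_signing_key
from signedjson.sign import sign_json

signing_key = generate_signing_key("1")   # key version "1", for illustration
attestation = sign_json(
    {
        "user_id": "@foo:a.example.com",
        "group_id": "+bar:b.example.com",
        "valid_until_ms": 1600000000000,
    },
    "b.example.com",                      # name the signature is made under
    signing_key,
)
# attestation now carries a "signatures" block alongside the original keys
```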

View File

@@ -16,11 +16,12 @@
import logging
from synapse.api.errors import SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from six import string_types
from twisted.internet import defer
from six import string_types
from synapse.api.errors import SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
logger = logging.getLogger(__name__)

View File

@@ -13,13 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from .admin import AdminHandler
from .directory import DirectoryHandler
from .federation import FederationHandler
from .identity import IdentityHandler
from .message import MessageHandler
from .register import RegistrationHandler
from .room import RoomContextHandler
from .message import MessageHandler
from .federation import FederationHandler
from .directory import DirectoryHandler
from .admin import AdminHandler
from .identity import IdentityHandler
from .search import SearchHandler

View File

@@ -18,11 +18,10 @@ import logging
from twisted.internet import defer
import synapse.types
from synapse.api.constants import Membership, EventTypes
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import LimitExceededError
from synapse.types import UserID
logger = logging.getLogger(__name__)

View File

@@ -13,12 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)

View File

@@ -13,19 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
import logging
from six import itervalues
import synapse
from synapse.api.constants import EventTypes
from synapse.util.metrics import Measure
from synapse.util.logcontext import (
make_deferred_yieldable, run_in_background,
)
from prometheus_client import Counter
import logging
from twisted.internet import defer
import synapse
from synapse.api.constants import EventTypes
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)

View File

@@ -13,29 +13,33 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer, threads
from ._base import BaseHandler
import logging
import attr
import bcrypt
import pymacaroons
from canonicaljson import json
from twisted.internet import defer, threads
from twisted.web.client import PartialDownloadError
import synapse.util.stringutils as stringutils
from synapse.api.constants import LoginType
from synapse.api.errors import (
AuthError, Codes, InteractiveAuthIncompleteError, LoginError, StoreError,
AuthError,
Codes,
InteractiveAuthIncompleteError,
LoginError,
StoreError,
SynapseError,
)
from synapse.module_api import ModuleApi
from synapse.types import UserID
from synapse.util.async import run_on_reactor
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable
from twisted.web.client import PartialDownloadError
import logging
import bcrypt
import pymacaroons
import simplejson
import synapse.util.stringutils as stringutils
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -402,7 +406,7 @@ class AuthHandler(BaseHandler):
except PartialDownloadError as pde:
# Twisted is silly
data = pde.response
resp_body = simplejson.loads(data)
resp_body = json.loads(data)
if 'success' in resp_body:
# Note that we do NOT check the hostname here: we explicitly
@@ -423,15 +427,11 @@ class AuthHandler(BaseHandler):
def _check_msisdn(self, authdict, _):
return self._check_threepid('msisdn', authdict)
@defer.inlineCallbacks
def _check_dummy_auth(self, authdict, _):
yield run_on_reactor()
defer.returnValue(True)
return defer.succeed(True)
@defer.inlineCallbacks
def _check_threepid(self, medium, authdict):
yield run_on_reactor()
if 'threepid_creds' not in authdict:
raise LoginError(400, "Missing threepid_creds", Codes.MISSING_PARAM)
@@ -825,6 +825,15 @@ class AuthHandler(BaseHandler):
if medium == 'email':
address = address.lower()
identity_handler = self.hs.get_handlers().identity_handler
yield identity_handler.unbind_threepid(
user_id,
{
'medium': medium,
'address': address,
},
)
ret = yield self.store.user_delete_threepid(
user_id, medium, address,
)
@@ -849,7 +858,11 @@ class AuthHandler(BaseHandler):
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
return make_deferred_yieldable(threads.deferToThread(_do_hash))
return make_deferred_yieldable(
threads.deferToThreadPool(
self.hs.get_reactor(), self.hs.get_reactor().getThreadPool(), _do_hash
),
)
def validate_hash(self, password, stored_hash):
"""Validates that self.hash(password) == stored_hash.
@@ -869,16 +882,21 @@ class AuthHandler(BaseHandler):
)
if stored_hash:
return make_deferred_yieldable(threads.deferToThread(_do_validate_hash))
return make_deferred_yieldable(
threads.deferToThreadPool(
self.hs.get_reactor(),
self.hs.get_reactor().getThreadPool(),
_do_validate_hash,
),
)
else:
return defer.succeed(False)
class MacaroonGeneartor(object):
def __init__(self, hs):
self.clock = hs.get_clock()
self.server_name = hs.config.server_name
self.macaroon_secret_key = hs.config.macaroon_secret_key
@attr.s
class MacaroonGenerator(object):
hs = attr.ib()
def generate_access_token(self, user_id, extra_caveats=None):
extra_caveats = extra_caveats or []
@@ -896,7 +914,7 @@ class MacaroonGeneartor(object):
def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.clock.time_msec()
now = self.hs.get_clock().time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
@@ -908,9 +926,9 @@ class MacaroonGeneartor(object):
def _generate_base_macaroon(self, user_id):
macaroon = pymacaroons.Macaroon(
location=self.server_name,
location=self.hs.config.server_name,
identifier="key",
key=self.macaroon_secret_key)
key=self.hs.config.macaroon_secret_key)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
return macaroon
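The deferToThread to deferToThreadPool change above is the injectable-reactor pattern: taking both the reactor and its thread pool from hs makes the blocking bcrypt work schedulable under a test reactor. A standalone sketch assuming only bcrypt and Twisted (the work factor is illustrative, not Synapse's configured rounds):

```python
import bcrypt
from twisted.internet import threads

def hash_password(reactor, password):
    """Hash a password on the given reactor's thread pool.

    Returns a Deferred firing with the bcrypt hash.
    """
    def _do_hash():
        return bcrypt.hashpw(password.encode("utf8"), bcrypt.gensalt(12))

    return threads.deferToThreadPool(
        reactor, reactor.getThreadPool(), _do_hash,
    )
```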

View File

@@ -12,13 +12,15 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer, reactor
import logging
from ._base import BaseHandler
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import UserID, create_requester
from synapse.util.logcontext import run_in_background
import logging
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -30,6 +32,7 @@ class DeactivateAccountHandler(BaseHandler):
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
self._room_member_handler = hs.get_room_member_handler()
self._identity_handler = hs.get_handlers().identity_handler
self.user_directory_handler = hs.get_user_directory_handler()
# Flag that indicates whether the process to part users from rooms is running
@@ -37,14 +40,15 @@ class DeactivateAccountHandler(BaseHandler):
# Start the user parter loop so it can resume parting users from rooms where
# it left off (if it has work left to do).
reactor.callWhenRunning(self._start_user_parting)
hs.get_reactor().callWhenRunning(self._start_user_parting)
@defer.inlineCallbacks
def deactivate_account(self, user_id):
def deactivate_account(self, user_id, erase_data):
"""Deactivate a user's account
Args:
user_id (str): ID of user to be deactivated
erase_data (bool): whether to GDPR-erase the user's data
Returns:
Deferred
@@ -52,14 +56,35 @@ class DeactivateAccountHandler(BaseHandler):
# FIXME: Theoretically there is a race here wherein user resets
# password using threepid.
# first delete any devices belonging to the user, which will also
# delete threepids first. We remove these from the IS so if this fails,
# leave the user still active so they can try again.
# Ideally we would prevent password resets and then do this in the
# background thread.
threepids = yield self.store.user_get_threepids(user_id)
for threepid in threepids:
try:
yield self._identity_handler.unbind_threepid(
user_id,
{
'medium': threepid['medium'],
'address': threepid['address'],
},
)
except Exception:
# Do we want this to be a fatal error or should we carry on?
logger.exception("Failed to remove threepid from ID server")
raise SynapseError(400, "Failed to remove threepid from ID server")
yield self.store.user_delete_threepid(
user_id, threepid['medium'], threepid['address'],
)
# delete any devices belonging to the user, which will also
# delete corresponding access tokens.
yield self._device_handler.delete_all_devices_for_user(user_id)
# then delete any remaining access tokens which weren't associated with
# a device.
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.user_delete_threepids(user_id)
yield self.store.user_set_password_hash(user_id, None)
# Add the user to a table of users pending deactivation (ie.
@@ -69,6 +94,11 @@ class DeactivateAccountHandler(BaseHandler):
# delete from user directory
yield self.user_directory_handler.handle_user_deactivated(user_id)
# Mark the user as erased, if they asked for that
if erase_data:
logger.info("Marking %s as erased", user_id)
yield self.store.mark_user_erased(user_id)
# Now start the process that goes through that list and
# parts users from rooms (if it isn't already running)
self._start_user_parting()
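An illustrative call against the new signature (the handler accessor is assumed, not shown in this diff):

```python
from twisted.internet import defer

@defer.inlineCallbacks
def gdpr_deactivate(hs, user_id):
    # assumed accessor; erase_data=True also marks the user erased, per the
    # mark_user_erased call above
    handler = hs.get_deactivate_account_handler()
    yield handler.deactivate_account(user_id, erase_data=True)
```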

View File

@@ -12,22 +12,24 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from six import iteritems, itervalues
from twisted.internet import defer
from synapse.api import errors
from synapse.api.constants import EventTypes
from synapse.api.errors import FederationDeniedError
from synapse.types import RoomStreamToken, get_domain_from_id
from synapse.util import stringutils
from synapse.util.async import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.metrics import measure_func
from synapse.types import get_domain_from_id, RoomStreamToken
from twisted.internet import defer
from synapse.util.retryutils import NotRetryingDestination
from ._base import BaseHandler
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -537,7 +539,7 @@ class DeviceListEduUpdater(object):
yield self.device_handler.notify_device_update(user_id, device_ids)
else:
# Simply update the single device, since we know that is the only
# change (becuase of the single prev_id matching the current cache)
# change (because of the single prev_id matching the current cache)
for device_id, stream_id, prev_ids, content in pending_updates:
yield self.store.update_remote_device_list_cache_entry(
user_id, device_id, content, stream_id,

View File

@@ -18,10 +18,9 @@ import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id, UserID
from synapse.types import UserID, get_domain_from_id
from synapse.util.stringutils import random_string
logger = logging.getLogger(__name__)

View File

@@ -14,16 +14,17 @@
# limitations under the License.
from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.errors import SynapseError, Codes, CodeMessageException, AuthError
from synapse.api.constants import EventTypes
from synapse.types import RoomAlias, UserID, get_domain_from_id
import logging
import string
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError, CodeMessageException, Codes, SynapseError
from synapse.types import RoomAlias, UserID, get_domain_from_id
from ._base import BaseHandler
logger = logging.getLogger(__name__)

View File

@@ -14,17 +14,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import simplejson as json
import logging
from canonicaljson import encode_canonical_json
from twisted.internet import defer
from six import iteritems
from synapse.api.errors import (
SynapseError, CodeMessageException, FederationDeniedError,
)
from synapse.types import get_domain_from_id, UserID
from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from synapse.api.errors import CodeMessageException, FederationDeniedError, SynapseError
from synapse.types import UserID, get_domain_from_id
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.retryutils import NotRetryingDestination
@@ -80,7 +79,7 @@ class E2eKeysHandler(object):
else:
remote_queries[user_id] = device_ids
# Firt get local devices.
# First get local devices.
failures = {}
results = {}
if local_query:
@@ -357,7 +356,7 @@ def _exception_to_failure(e):
# include ConnectionRefused and other errors
#
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't
# give a string for e.message, which simplejson then fails to serialize.
# give a string for e.message, which json then fails to serialize.
return {
"status": 503, "message": str(e.message),
}
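A note on the import swap above: canonicaljson re-exports a `json` module (simplejson under the hood at the time), so `from canonicaljson import json` is a drop-in replacement for `import simplejson as json`. A quick sketch:

```python
from canonicaljson import encode_canonical_json, json

data = json.loads('{"b": 2, "a": 1}')
print(encode_canonical_json(data))  # b'{"a":1,"b":2}' -- sorted, compact
```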

View File

@@ -13,19 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from synapse.util.logutils import log_function
from synapse.types import UserID
from synapse.events.utils import serialize_event
from synapse.api.constants import Membership, EventTypes
from synapse.events import EventBase
from ._base import BaseHandler
import logging
import random
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.types import UserID
from synapse.util.logutils import log_function
from ._base import BaseHandler
logger = logging.getLogger(__name__)

View File

@@ -20,37 +20,42 @@ import itertools
import logging
import sys
import six
from six import iteritems
from six.moves import http_client
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
import six
from six.moves import http_client
from six import iteritems
from twisted.internet import defer
from unpaddedbase64 import decode_base64
from ._base import BaseHandler
from twisted.internet import defer
from synapse.api.errors import (
AuthError, FederationError, StoreError, CodeMessageException, SynapseError,
FederationDeniedError,
)
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError, logcontext
from synapse.util.metrics import measure_func
from synapse.util.logutils import log_function
from synapse.util.async import run_on_reactor, Linearizer
from synapse.util.frozenutils import unfreeze
from synapse.crypto.event_signing import (
compute_event_signature, add_hashes_and_signatures,
from synapse.api.errors import (
AuthError,
CodeMessageException,
FederationDeniedError,
FederationError,
StoreError,
SynapseError,
)
from synapse.crypto.event_signing import (
add_hashes_and_signatures,
compute_event_signature,
)
from synapse.types import UserID, get_domain_from_id
from synapse.events.utils import prune_event
from synapse.events.validator import EventValidator
from synapse.state import resolve_events_with_factory
from synapse.types import UserID, get_domain_from_id
from synapse.util import logcontext, unwrapFirstError
from synapse.util.async import Linearizer
from synapse.util.distributor import user_joined_room
from synapse.util.frozenutils import unfreeze
from synapse.util.logutils import log_function
from synapse.util.metrics import measure_func
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.distributor import user_joined_room
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -89,7 +94,9 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def on_receive_pdu(self, origin, pdu, get_missing=True):
def on_receive_pdu(
self, origin, pdu, get_missing=True, sent_to_us_directly=False,
):
""" Process a PDU received via a federation /send/ transaction, or
via backfill of missing prev_events
@@ -103,8 +110,10 @@ class FederationHandler(BaseHandler):
"""
# We reprocess pdus when we have seen them only as outliers
existing = yield self.get_persisted_pdu(
origin, pdu.event_id, do_auth=False
existing = yield self.store.get_event(
pdu.event_id,
allow_none=True,
allow_rejected=True,
)
# FIXME: Currently we fetch an event again when we already have it
@@ -161,14 +170,11 @@ class FederationHandler(BaseHandler):
"Ignoring PDU %s for room %s from %s as we've left the room!",
pdu.event_id, pdu.room_id, origin,
)
return
defer.returnValue(None)
state = None
auth_chain = []
fetch_state = False
# Get missing pdus if necessary.
if not pdu.internal_metadata.is_outlier():
# We only backfill backwards to the min depth.
@@ -223,26 +229,60 @@ class FederationHandler(BaseHandler):
list(prevs - seen)[:5],
)
if prevs - seen:
logger.info(
"Still missing %d events for room %r: %r...",
len(prevs - seen), pdu.room_id, list(prevs - seen)[:5]
if sent_to_us_directly and prevs - seen:
# If they have sent it to us directly, and the server
# isn't telling us about the auth events that it's
# made a message referencing, we explode
raise FederationError(
"ERROR",
403,
(
"Your server isn't divulging details about prev_events "
"referenced in this event."
),
affected=pdu.event_id,
)
fetch_state = True
elif prevs - seen:
# Calculate the state of the previous events, and
# de-conflict them to find the current state.
state_groups = []
auth_chains = set()
try:
# Get the state of the events we know about
ours = yield self.store.get_state_groups(pdu.room_id, list(seen))
state_groups.append(ours)
if fetch_state:
# We need to get the state at this event, since we haven't
# processed all the prev events.
logger.debug(
"_handle_new_pdu getting state for %s",
pdu.room_id
)
try:
state, auth_chain = yield self.replication_layer.get_state_for_room(
origin, pdu.room_id, pdu.event_id,
)
except Exception:
logger.exception("Failed to get state for event: %s", pdu.event_id)
# Ask the remote server for the states we don't
# know about
for p in prevs - seen:
state, got_auth_chain = (
yield self.replication_layer.get_state_for_room(
origin, pdu.room_id, p
)
)
auth_chains.update(got_auth_chain)
state_group = {(x.type, x.state_key): x.event_id for x in state}
state_groups.append(state_group)
# Resolve any conflicting state
def fetch(ev_ids):
return self.store.get_events(
ev_ids, get_prev_content=False, check_redacted=False
)
state_map = yield resolve_events_with_factory(
state_groups, {pdu.event_id: pdu}, fetch
)
state = (yield self.store.get_events(state_map.values())).values()
auth_chain = list(auth_chains)
except Exception:
raise FederationError(
"ERROR",
403,
"We can't get valid state history.",
affected=pdu.event_id,
)
yield self._process_received_pdu(
origin,
@@ -320,11 +360,17 @@ class FederationHandler(BaseHandler):
for e in missing_events:
logger.info("Handling found event %s", e.event_id)
yield self.on_receive_pdu(
origin,
e,
get_missing=False
)
try:
yield self.on_receive_pdu(
origin,
e,
get_missing=False
)
except FederationError as e:
if e.code == 403:
logger.warn("Event %s failed history check.")
else:
raise
@log_function
@defer.inlineCallbacks
@@ -458,6 +504,47 @@ class FederationHandler(BaseHandler):
@measure_func("_filter_events_for_server")
@defer.inlineCallbacks
def _filter_events_for_server(self, server_name, room_id, events):
"""Filter the given events for the given server, redacting those the
server can't see.
Assumes the server is currently in the room.
Returns
list[FrozenEvent]
"""
# First let's check to see if all the events have a history visibility
# of "shared" or "world_readable". If that's the case then we don't
# need to check membership (as we know the server is in the room).
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
(EventTypes.RoomHistoryVisibility, ""),
)
)
visibility_ids = set()
for sids in event_to_state_ids.itervalues():
hist = sids.get((EventTypes.RoomHistoryVisibility, ""))
if hist:
visibility_ids.add(hist)
# If we failed to find any history visibility events then the default
# is "shared" visiblity.
if not visibility_ids:
defer.returnValue(events)
event_map = yield self.store.get_events(visibility_ids)
all_open = all(
e.content.get("history_visibility") in (None, "shared", "world_readable")
for e in event_map.itervalues()
)
if all_open:
defer.returnValue(events)
# Ok, so we're dealing with events that have non-trivial visibility
# rules, so we need to also get the memberships of the room.
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
@@ -493,7 +580,20 @@ class FederationHandler(BaseHandler):
for e_id, key_to_eid in event_to_state_ids.iteritems()
}
erased_senders = yield self.store.are_users_erased(
e.sender for e in events,
)
def redact_disallowed(event, state):
# if the sender has been gdpr17ed, always return a redacted
# copy of the event.
if erased_senders[event.sender]:
logger.info(
"Sender of %s has been erased, redacting",
event.event_id,
)
return prune_event(event)
if not state:
return event
@@ -1381,8 +1481,6 @@ class FederationHandler(BaseHandler):
def get_state_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
state_groups = yield self.store.get_state_groups(
room_id, [event_id]
)
@@ -1425,8 +1523,6 @@ class FederationHandler(BaseHandler):
def get_state_ids_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
yield run_on_reactor()
state_groups = yield self.store.get_state_groups_ids(
room_id, [event_id]
)
@@ -1468,11 +1564,20 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
@log_function
def get_persisted_pdu(self, origin, event_id, do_auth=True):
""" Get a PDU from the database with given origin and id.
def get_persisted_pdu(self, origin, event_id):
"""Get an event from the database for the given server.
Args:
origin [str]: hostname of server which is requesting the event; we
will check that the server is allowed to see it.
event_id [str]: id of the event being requested
Returns:
Deferred: Results in a `Pdu`.
Deferred[EventBase|None]: None if we know nothing about the event;
otherwise the (possibly-redacted) event.
Raises:
AuthError if the server is not currently in the room
"""
event = yield self.store.get_event(
event_id,
@@ -1493,20 +1598,17 @@ class FederationHandler(BaseHandler):
)
)
if do_auth:
in_room = yield self.auth.check_host_in_room(
event.room_id,
origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
)
event = events[0]
in_room = yield self.auth.check_host_in_room(
event.room_id,
origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
)
event = events[0]
defer.returnValue(event)
else:
defer.returnValue(None)

View File

@@ -14,14 +14,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
import logging
from six import iteritems
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id
import logging
logger = logging.getLogger(__name__)

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -18,16 +19,18 @@
import logging
import simplejson as json
from canonicaljson import json
from twisted.internet import defer
from synapse.api.errors import (
MatrixCodeMessageException, CodeMessageException
CodeMessageException,
Codes,
MatrixCodeMessageException,
SynapseError,
)
from ._base import BaseHandler
from synapse.util.async import run_on_reactor
from synapse.api.errors import SynapseError, Codes
logger = logging.getLogger(__name__)
@@ -38,6 +41,7 @@ class IdentityHandler(BaseHandler):
super(IdentityHandler, self).__init__(hs)
self.http_client = hs.get_simple_http_client()
self.federation_http_client = hs.get_http_client()
self.trusted_id_servers = set(hs.config.trusted_third_party_id_servers)
self.trust_any_id_server_just_for_testing_do_not_use = (
@@ -60,8 +64,6 @@ class IdentityHandler(BaseHandler):
@defer.inlineCallbacks
def threepid_from_creds(self, creds):
yield run_on_reactor()
if 'id_server' in creds:
id_server = creds['id_server']
elif 'idServer' in creds:
@@ -104,7 +106,6 @@ class IdentityHandler(BaseHandler):
@defer.inlineCallbacks
def bind_threepid(self, creds, mxid):
yield run_on_reactor()
logger.debug("binding threepid %r to %s", creds, mxid)
data = None
@@ -139,9 +140,53 @@ class IdentityHandler(BaseHandler):
defer.returnValue(data)
@defer.inlineCallbacks
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
yield run_on_reactor()
def unbind_threepid(self, mxid, threepid):
"""
Removes a binding from an identity server
Args:
mxid (str): Matrix user ID of binding to be removed
threepid (dict): Dict with medium & address of binding to be removed
Returns:
Deferred[bool]: True on success, otherwise False
"""
logger.debug("unbinding threepid %r from %s", threepid, mxid)
if not self.trusted_id_servers:
logger.warn("Can't unbind threepid: no trusted ID servers set in config")
defer.returnValue(False)
# We don't track what ID server we added 3pids on (perhaps we ought to)
# but we assume that any of the servers in the trusted list are in the
# same ID server federation, so we can pick any one of them to send the
# deletion request to.
id_server = next(iter(self.trusted_id_servers))
url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
content = {
"mxid": mxid,
"threepid": threepid,
}
headers = {}
# we abuse the federation http client to sign the request, but we have to send it
# using the normal http client since we don't want the SRV lookup and want normal
# 'browser-like' HTTPS.
self.federation_http_client.sign_request(
destination=None,
method='POST',
url_bytes='/_matrix/identity/api/v1/3pid/unbind'.encode('ascii'),
headers_dict=headers,
content=content,
destination_is=id_server,
)
yield self.http_client.post_json_get_json(
url,
content,
headers,
)
defer.returnValue(True)
@defer.inlineCallbacks
def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,
@@ -176,8 +221,6 @@ class IdentityHandler(BaseHandler):
self, id_server, country, phone_number,
client_secret, send_attempt, **kwargs
):
yield run_on_reactor()
if not self._should_trust_id_server(id_server):
raise SynapseError(
400, "Untrusted ID server '%s'" % id_server,

View File

@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
@@ -21,9 +23,7 @@ from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.handlers.presence import format_user_presence_state
from synapse.streams.config import PaginationConfig
from synapse.types import (
UserID, StreamToken,
)
from synapse.types import StreamToken, UserID
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
@@ -32,9 +32,6 @@ from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
import logging
logger = logging.getLogger(__name__)

View File

@@ -14,35 +14,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import simplejson
import sys
from canonicaljson import encode_canonical_json
import six
from six import string_types, itervalues, iteritems
from twisted.internet import defer, reactor
from six import iteritems, itervalues, string_types
from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
from synapse.api.errors import (
AuthError, Codes, SynapseError,
ConsentNotGivenError,
)
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
from synapse.api.urls import ConsentURIBuilder
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.types import (
UserID, RoomAlias, RoomStreamToken,
)
from synapse.util.async import run_on_reactor, ReadWriteLock, Limiter
from synapse.replication.http.send_event import send_event_to_master
from synapse.types import RoomAlias, RoomStreamToken, UserID
from synapse.util.async import Limiter, ReadWriteLock
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import measure_func
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.stringutils import random_string
from synapse.visibility import filter_events_for_client
from synapse.replication.http.send_event import send_event_to_master
from ._base import BaseHandler
@@ -157,7 +153,7 @@ class MessageHandler(BaseHandler):
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
reactor.callLater(24 * 3600, clear_purge)
self.hs.get_reactor().callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
@@ -388,7 +384,7 @@ class MessageHandler(BaseHandler):
users_with_profile = yield self.state.get_current_user_in_room(room_id)
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or becuase there
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if requester.app_service and user_id not in users_with_profile:
for uid in users_with_profile:
@@ -491,7 +487,7 @@ class EventCreationHandler(object):
target, e
)
is_exempt = yield self._is_exempt_from_privacy_policy(builder)
is_exempt = yield self._is_exempt_from_privacy_policy(builder, requester)
if not is_exempt:
yield self.assert_accepted_privacy_policy(requester)
@@ -509,12 +505,13 @@ class EventCreationHandler(object):
defer.returnValue((event, context))
def _is_exempt_from_privacy_policy(self, builder):
def _is_exempt_from_privacy_policy(self, builder, requester):
""""Determine if an event to be sent is exempt from having to consent
to the privacy policy
Args:
builder (synapse.events.builder.EventBuilder): event being created
requester (Requester): user requesting this event
Returns:
Deferred[bool]: true if the event can be sent without the user
@@ -525,6 +522,9 @@ class EventCreationHandler(object):
membership = builder.content.get("membership", None)
if membership == Membership.JOIN:
return self._is_server_notices_room(builder.room_id)
elif membership == Membership.LEAVE:
# the user is always allowed to leave (but not kick people)
return builder.state_key == requester.user.to_string()
return succeed(False)
@defer.inlineCallbacks
@@ -793,7 +793,7 @@ class EventCreationHandler(object):
# Ensure that we can round trip before trying to persist in db
try:
dump = frozendict_json_encoder.encode(event.content)
simplejson.loads(dump)
json.loads(dump)
except Exception:
logger.exception("Failed to encode content: %r", event.content)
raise
@@ -806,6 +806,7 @@ class EventCreationHandler(object):
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.hs.get_clock(),
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
@@ -959,9 +960,7 @@ class EventCreationHandler(object):
event_stream_id, max_stream_id
)
@defer.inlineCallbacks
def _notify():
yield run_on_reactor()
try:
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,

View File

@@ -22,27 +22,26 @@ The methods that define policy are:
- should_notify
"""
from twisted.internet import defer, reactor
import logging
from contextlib import contextmanager
from six import itervalues, iteritems
from six import iteritems, itervalues
from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.api.constants import PresenceState
from synapse.api.errors import SynapseError
from synapse.metrics import LaterGauge
from synapse.storage.presence import UserPresenceState
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.types import UserID, get_domain_from_id
from synapse.util.async import Linearizer
from synapse.util.caches.descriptors import cachedInlineCallbacks
from synapse.util.logcontext import run_in_background
from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
from synapse.types import UserID, get_domain_from_id
from synapse.metrics import LaterGauge
import logging
from prometheus_client import Counter
logger = logging.getLogger(__name__)
@@ -179,7 +178,7 @@ class PresenceHandler(object):
# have not yet been persisted
self.unpersisted_users_changes = set()
reactor.addSystemEventTrigger("before", "shutdown", self._on_shutdown)
hs.get_reactor().addSystemEventTrigger("before", "shutdown", self._on_shutdown)
self.serial_to_user = {}
self._next_serial = 1

Some files were not shown because too many files have changed in this diff.