
Compare commits


313 Commits

Author SHA1 Message Date
Olivier Wilkinson (reivilibre)
6e827507f7 Return empty object on success rather than null.
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 10:29:36 +01:00
Olivier Wilkinson (reivilibre)
0e99412f4c Add admin API docs for setting admin bits on users
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 10:24:38 +01:00
Olivier Wilkinson (reivilibre)
7fd0c90234 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-19 14:44:53 +01:00
Olivier Wilkinson (reivilibre)
ebd2cd84d5 Add admin API for setting the admin bit of a user. 2019-08-19 14:42:55 +01:00
Olivier Wilkinson (reivilibre)
c497e13734 Introduce set_server_admin as dual to is_server_admin. 2019-08-19 14:41:07 +01:00
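Taken together, the commits above add an admin endpoint for toggling a user's server-admin bit. A minimal sketch of calling it, assuming the path and payload described in these commits (`PUT /_synapse/admin/v1/users/<user_id>/admin` with a JSON body of `{"admin": true}`; check the docs added in 0e99412f4c for the authoritative details):

```python
# Hedged sketch: endpoint path and payload are as described in the commits
# above; adjust to the documented admin API if they differ.
import requests

def set_user_admin(base_url: str, admin_token: str, user_id: str, admin: bool) -> dict:
    resp = requests.put(
        f"{base_url}/_synapse/admin/v1/users/{user_id}/admin",
        headers={"Authorization": f"Bearer {admin_token}"},
        json={"admin": admin},
    )
    resp.raise_for_status()
    # Per 6e827507f7, success yields an empty JSON object rather than null.
    return resp.json()
```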
Erik Johnston
d514dac0b2 Merge pull request #5860 from matrix-org/erikj/update_5704_comments
Remove logging for #5407 and update comments
2019-08-19 10:20:59 +01:00
Brendan Abolivier
bdd201ea7f Merge branch 'master' into develop 2019-08-17 10:50:42 +01:00
Richard van der Hoff
74fb729213 1.3.1 2019-08-17 09:16:17 +01:00
Richard van der Hoff
412c6e21a8 Drop dependency on sdnotify (#5871)
... to save OSes which don't use it from having to maintain a port.

Fixes #5865.
2019-08-17 09:09:52 +01:00
Hubert Chathi
8a5f6ed130 Merge pull request #5857 from matrix-org/uhoreg/fix_e2e_room_keys_index
add the version field to the index for e2e_room_keys
2019-08-16 17:45:50 -07:00
Richard van der Hoff
c188bd2c12 add attribution 2019-08-16 23:19:23 +01:00
Chris Moos
20402aa128 Add changelog entry. 2019-08-16 22:16:21 +01:00
Chris Moos
6d86df73f1 Fix issue with Synapse not starting up. Fixes #5866.
Signed-off-by: Chris Moos <chris@chrismoos.com>
2019-08-16 22:16:13 +01:00
Jorik Schellekens
87fa26006b Opentracing misc (#5856)
Add authenticated_entity and servlet_names tags.

Functionally:
- Add a tag for authenticated_entity
- Add a tag for servlet_names

Stylistically:
Moved to importing methods directly from opentracing.
2019-08-16 16:13:25 +01:00
Erik Johnston
ebba15ee7f Newsfile 2019-08-16 13:29:41 +01:00
Hubert Chathi
e132ba79ae fix changelog 2019-08-15 21:02:40 -07:00
Andrew Morgan
b13cac896d Fix up password reset template config names (#5863)
Fixes #5833

The email config code was attempting to pull incorrect config file names. This corrects that, while also marking the difference between a config file variable that's a filepath and one that's a str containing HTML.
2019-08-15 16:27:11 +01:00
Brendan Abolivier
ce5f1cb98c Merge branch 'master' into develop 2019-08-15 12:38:21 +01:00
Brendan Abolivier
6382914587 Merge tag 'v1.3.0'
Synapse 1.3.0 (2019-08-15)
==========================

Bugfixes
--------

- Fix 500 Internal Server Error on `publicRooms` when the public room list was
  cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))

Synapse 1.3.0rc1 (2019-08-13)
=============================

Features
--------

- Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to log in. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
- Add sd_notify hooks to ease systemd integration and allows usage of Type=Notify. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
- Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
- Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
- Allow defining HTML templates to serve the user on account renewal attempt when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))

Bugfixes
--------

- Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
- Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
- start.sh no longer uses deprecated cli option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
- Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
- Fix debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
- Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
- Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
- Fix check that tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
- Fix error when trying to log in as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
- Fix bug where user `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
- Fix the `purge_remote_media.sh` script. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))

Deprecations and Removals
-------------------------

- Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
- Remove non-functional 'expire_access_token' setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))

Internal Changes
----------------

- Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
- Add precautionary measures to prevent future abuse of `window.opener` in default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
- Reduce database IO usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
- Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
- Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
- Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
- Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
- Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
- Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
- Cache result of get_version_string to reduce overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
- Return 'user_type' in admin API user endpoints results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
- Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
- Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
- Blacklist some flakey tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
- Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
- Add a prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
- Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
- Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
- Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
- Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
- Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
- Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
- Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
- Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
- Don't allow clients to send tombstone events that reference the room it's sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
- Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
- Deny sending well known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
- Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
- Handle pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
- Return 502 not 500 when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
- Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
- Add a lower bound to well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
- Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
2019-08-15 12:37:45 +01:00
Brendan Abolivier
fb5acd7039 1.3.0 2019-08-15 12:05:24 +01:00
Erik Johnston
748aa38378 Remove logging for #5407 and update comments 2019-08-15 12:02:18 +01:00
Andrew Morgan
8cf7fbbce0 Remove libsqlite3-dev from required build dependencies. (#5766) 2019-08-15 11:32:23 +01:00
reivilibre
7809f0c022 Merge pull request #5851 from matrix-org/rei/roomdir_maybedeferred
Room Directory: Wrap `get_local_public_room_list` call in `maybeDeferred`
2019-08-15 11:02:33 +01:00
Michael Telatynski
baee288fb4 Don't create broken room when power_level_content_override.users does not contain creator_id. (#5633) 2019-08-15 09:45:57 +01:00
Hubert Chathi
c058aeb88d update set_e2e_room_key to agree with fixed index 2019-08-14 18:02:58 -07:00
Hubert Chathi
81b8080acd add changelog 2019-08-14 17:53:33 -07:00
Hubert Chathi
b7f7cc7ace add the version field to the index for e2e_room_keys 2019-08-14 17:14:40 -07:00
reivilibre
d6de55bce9 Update changelog.d/5851.bugfix
Use imperative

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-08-14 14:53:49 +01:00
Olivier Wilkinson (reivilibre)
3ad24ab386 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-14 14:53:49 +01:00
Olivier Wilkinson (reivilibre)
1b63ccd848 Wrap get_local_public_room_list call in maybeDeferred because it
is cached and so does not always return a `Deferred`.
`await` does not silently pass through non-Deferreds like `yield` used to.

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-14 14:53:49 +01:00
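A minimal sketch of the pattern this commit applies (illustrative names, not Synapse's actual call site):

```python
from twisted.internet import defer

async def get_room_list(handler):
    # The cached method may return a plain value (cache hit) rather than a
    # Deferred; maybeDeferred normalises both cases into a Deferred, which
    # await accepts, whereas awaiting a bare non-Deferred value would fail.
    return await defer.maybeDeferred(handler.get_local_public_room_list)
```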
Erik Johnston
09f6152a11 Merge pull request #5844 from matrix-org/erikj/retry_well_known_lookup
Retry well-known lookup before expiry.
2019-08-14 09:53:33 +01:00
Brendan Abolivier
f70d0a1dd9 1.3.0rc1 2019-08-13 18:20:09 +01:00
Brendan Abolivier
3039be82ce Merge pull request #5848 from matrix-org/hawkowl/fix-mediarepo-worker-startup
Fix mediarepo worker startup
2019-08-13 17:38:11 +01:00
Amber H. Brown
28bce1ac7c changelog 2019-08-14 02:08:24 +10:00
Amber H. Brown
18bdac8ee4 fix config being a dict, actually 2019-08-14 02:06:42 +10:00
Erik Johnston
aedfec3ad7 Newsfile 2019-08-13 16:20:38 +01:00
Erik Johnston
17e1e80726 Retry well-known lookup before expiry.
This gives a bit of a grace period where we can attempt to refetch a
remote `well-known`, while still using the cached result if that fails.

Hopefully this will make the well-known resolution a bit more tolerant
of failures, rather than immediately treating failures as "no result"
and caching that for an hour.
2019-08-13 16:20:38 +01:00
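An illustrative sketch of that grace period (constant and cache shape are hypothetical): refetch shortly before expiry, but fall back to the still-valid cached value if the refetch fails.

```python
import time

WELL_KNOWN_GRACE_PERIOD = 10 * 60  # hypothetical: refetch this long before expiry

def get_well_known(cache, fetch, server_name):
    entry = cache.get(server_name)  # (result, expiry_ts) or None
    now = time.time()
    if entry is not None:
        result, expiry = entry
        if now < expiry - WELL_KNOWN_GRACE_PERIOD:
            return result  # comfortably fresh; no need to refetch yet
        try:
            return fetch(server_name)  # try an early refresh
        except Exception:
            if now < expiry:
                return result  # refresh failed, but the cache is still valid
            raise
    return fetch(server_name)
```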
Erik Johnston
af187805b3 Merge pull request #5809 from matrix-org/erikj/handle_pusher_stop
Handle pusher being deleted during processing.
2019-08-13 14:08:29 +01:00
Erik Johnston
96bdd661b8 Remove redundant return 2019-08-13 12:50:36 +01:00
Amber Brown
0b6fbb28a8 Don't load the media repo when configured to use an external media repo (#5754) 2019-08-13 21:49:28 +10:00
Erik Johnston
e9906b0772 Merge pull request #5836 from matrix-org/erikj/lower_bound_ttl_well_known
Add a lower bound to well-known TTL.
2019-08-13 12:41:16 +01:00
Erik Johnston
fb3469f53a Clarify docstring
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-08-13 10:17:23 +01:00
Erik Johnston
f218705d2a Make default well known cache global again. 2019-08-13 10:06:51 +01:00
Erik Johnston
2546f32b90 Merge pull request #5826 from matrix-org/erikj/reduce_event_pauses
Don't unnecessarily block notifying of new events.
2019-08-13 09:36:25 +01:00
Erik Johnston
9d9cf3583b Merge pull request #5843 from matrix-org/erikj/workers_hist_vis
Whitelist history visibility sytests for worker mode
2019-08-12 18:02:19 +01:00
Erik Johnston
2bec3a4953 Merge pull request #5839 from tcitworld/fix-purge-remote-media-script
Fix curl command typo in purge_remote_media.sh
2019-08-12 14:51:27 +01:00
Erik Johnston
3de6cc245f Changelogs should end in '.' or '!' 2019-08-12 14:16:42 +01:00
Erik Johnston
156a461cbd Newsfile 2019-08-12 13:57:52 +01:00
Erik Johnston
c9456193d3 Whitelist history visibility sytests for worker mode 2019-08-12 13:56:26 +01:00
Richard van der Hoff
fb86217553 Merge pull request #5788 from matrix-org/rav/metaredactions
Fix handling of redactions of redactions
2019-08-12 12:25:19 +01:00
Erik Johnston
41546f946e Newsfile 2019-08-12 09:56:58 +01:00
Thomas Citharel
a7f0161276 Fix curl command typo in purge_remote_media.sh
The verbose option was used instead of -X, so the command didn't work

Signed-off-by: Thomas Citharel <tcit@tcit.fr>
2019-08-09 18:36:12 +02:00
Neil Johnson
1016f303e5 make user creation steps clearer 2019-08-08 14:58:21 +01:00
Erik Johnston
107ad133fc Move well known lookup into a separate class 2019-08-07 15:36:38 +01:00
Erik Johnston
af9f1c0764 Add a lower bound for TTL on well known results.
It costs both us and the remote server for us to fetch the well known
for every single request we send, so we add a minimum cache period. This
is set to 5m so that we still honour the basic premise of "refetch
frequently".
2019-08-06 17:01:23 +01:00
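A sketch of the clamp (constant names hypothetical; the 5m floor is from the commit message):

```python
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60       # never refetch more often than this
WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600    # hypothetical upper bound

def clamp_well_known_ttl(cache_control_ttl: int) -> int:
    # Honour the server's Cache-Control where sensible, but apply a floor so
    # a tiny TTL can't make us refetch on every outbound request.
    return max(
        WELL_KNOWN_MIN_CACHE_PERIOD,
        min(cache_control_ttl, WELL_KNOWN_MAX_CACHE_PERIOD),
    )
```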
Erik Johnston
d1b5b055be Merge pull request #5825 from matrix-org/erikj/fix_empty_limited_sync
Handle TimelineBatch being limited and empty.
2019-08-06 15:39:44 +01:00
Andrew Morgan
edeae53221 Return 404 instead of 403 when retrieving an event without perms (#5798)
Part of fixing matrix-org/sytest#652

Sytest PR: matrix-org/sytest#667
2019-08-06 13:33:55 +01:00
Erik Johnston
c32d359094 Newsfile 2019-08-06 13:33:42 +01:00
Erik Johnston
bf4db42920 Don't unnecessarily block notifying of new events.
When persisting events we calculate new stream orderings up front.
Before we notify about an event, all events with lower stream orderings
must have finished being persisted.

This PR moves the assignment of stream orderings until *after* we have
calculated the new current state and split the batch of events into
separate chunks for persistence. This means that if it takes a long time
to calculate the new current state then it will not block events in other
rooms being notified about.

This should help reduce some global pauses in the events stream which
can last for tens of seconds (if not longer), caused by some
particularly expensive state resolutions.
2019-08-06 13:32:02 +01:00
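A toy illustration of the invariant described above (all names hypothetical): an event is only announced once every event with a lower stream ordering has finished persisting, which is why assigning orderings late keeps one slow room from holding back the rest.

```python
class Notifier:
    def __init__(self):
        self.persisted_upto = 0   # all orderings <= this have been persisted
        self.pending = {}         # stream_ordering -> event, persisted out of order

    def on_persisted(self, stream_ordering, event):
        self.pending[stream_ordering] = event
        # Advance the watermark over any contiguous run of persisted events,
        # notifying as we go; a gap (a slow earlier event) blocks what follows.
        while self.persisted_upto + 1 in self.pending:
            self.persisted_upto += 1
            self.notify(self.pending.pop(self.persisted_upto))

    def notify(self, event):
        print("notifying about", event)
```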
Erik Johnston
977fa4a717 Newsfile 2019-08-06 13:00:45 +01:00
Erik Johnston
6881f21f3e Handle TimelineBatch being limited and empty.
This hopefully addresses #5407 by gracefully handling an empty but
limited TimelineBatch. We also add some logging to figure out how this
is happening.
2019-08-06 12:59:00 +01:00
Brendan Abolivier
8ed9e63432 Account validity: allow defining HTML templates to serve the us… (#5807)
Account validity: allow defining HTML templates to serve the user on account renewal attempt
2019-08-01 16:09:25 +02:00
Erik Johnston
d55bc4a8bf Merge pull request #5810 from matrix-org/erikj/no_server_reachable
Return 502 not 500 when failing to reach any remote server.
2019-08-01 14:19:39 +01:00
Andrew Morgan
5d018d23f0 Have ClientReaderSlavedStore inherit RegistrationStore (#5806)
Fixes #5803
2019-08-01 13:54:56 +01:00
Erik Johnston
93fd3cbc7a Newsfile 2019-08-01 13:48:52 +01:00
Erik Johnston
3c076c79c5 Merge pull request #5808 from matrix-org/erikj/parse_decode_error
Handle incorrectly encoded query params correctly
2019-08-01 13:48:10 +01:00
Erik Johnston
a8f40a8302 Return 502 not 500 when failing to reach any remote server. 2019-08-01 13:47:31 +01:00
Erik Johnston
55a0c98d16 Merge pull request #5805 from matrix-org/erikj/validate_state
Validate well known state events are state events.
2019-08-01 13:45:48 +01:00
Erik Johnston
0b36decfb6 Merge pull request #5801 from matrix-org/erikj/recursive_tombstone
Don't allow clients to send tombstones that reference the same room
2019-08-01 13:45:35 +01:00
Erik Johnston
312cc48e2b Newsfile 2019-08-01 13:45:09 +01:00
Erik Johnston
d02e41dcb2 Handle pusher being deleted during processing.
Instead of throwing a StoreError, let's break out of the processing loop
and mark the pusher as stopped.
2019-08-01 13:44:12 +01:00
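A minimal sketch of the change (loop structure and `dispatch`/store methods are illustrative; `StoreError` is Synapse's real storage exception class):

```python
from synapse.api.errors import StoreError

async def process_pusher(pusher, store, unprocessed):
    for push_action in unprocessed:
        try:
            await pusher.dispatch(push_action)  # hypothetical dispatch step
            await store.update_pusher_last_stream_ordering(pusher.id)
        except StoreError:
            # The pusher's row was deleted mid-processing (e.g. the user
            # removed it); break out and mark it stopped instead of raising.
            pusher.on_stop()
            return
```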
Erik Johnston
da378af445 Newsfile 2019-08-01 13:24:00 +01:00
Erik Johnston
d2e3d5b9db Handle incorrectly encoded query params correctly 2019-08-01 13:23:00 +01:00
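The changelog entry for this (#5808) says bad encodings now yield a 400. A sketch of the idea, assuming we reject query params whose percent-decoded bytes aren't valid UTF-8:

```python
from urllib.parse import unquote_to_bytes

def parse_query_param(raw: str) -> str:
    try:
        return unquote_to_bytes(raw).decode("utf-8")
    except UnicodeDecodeError:
        # e.g. "%FF" is not valid UTF-8 once percent-decoded; surface this
        # as a client error (HTTP 400) rather than a 500.
        raise ValueError("400: query parameter is not valid UTF-8")

# parse_query_param("caf%C3%A9") == "café"
```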
Erik Johnston
76a58fdcce Fix spelling.
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2019-08-01 13:17:55 +01:00
Erik Johnston
58af30a6c7 Merge pull request #5802 from matrix-org/erikj/deny_redacting_different_room
Deny redaction of events in a different room.
2019-08-01 13:14:46 +01:00
Erik Johnston
0f632f3a57 Merge pull request #5790 from matrix-org/erikj/groups_request_errors
Handle RequestSendFailed exception correctly in more places.
2019-08-01 13:14:08 +01:00
Erik Johnston
ad167c3849 Merge pull request #5804 from matrix-org/erikj/match_against_state_key
Explicitly check that tombstone is a state event before notifying.
2019-08-01 13:13:33 +01:00
Brendan Abolivier
f25f638c35 Lint 2019-08-01 12:19:08 +02:00
Brendan Abolivier
3ff3dfe5a3 Sample config 2019-08-01 12:08:25 +02:00
Brendan Abolivier
f4a30d286f Changelog 2019-08-01 12:08:06 +02:00
Brendan Abolivier
bc35503528 Add tests 2019-08-01 12:00:08 +02:00
Brendan Abolivier
a4a9ded4d0 Allow defining HTML templates to serve the user on account renewal 2019-08-01 11:59:27 +02:00
Erik Johnston
e5a0224837 Newsfile 2019-07-31 16:39:42 +01:00
Erik Johnston
dc4d74e44a Validate well-known state events are state events.
Let's disallow sending things like memberships, topics, etc. as non-state
events.
2019-07-31 16:36:20 +01:00
Erik Johnston
c5288e9984 Newsfile 2019-07-31 16:32:03 +01:00
Erik Johnston
2e697d3013 Explicitly check that tombstone is a state event before notifying. 2019-07-31 16:32:03 +01:00
Erik Johnston
0eefb76fa1 Newsfile 2019-07-31 16:13:57 +01:00
Erik Johnston
cf89266b98 Deny redaction of events in a different room.
We already correctly filter out such redactions, but we should also deny
them over the CS API.
2019-07-31 16:12:27 +01:00
Erik Johnston
02735e140f Newsfile 2019-07-31 15:53:52 +01:00
Erik Johnston
f31d4cb7a2 Don't allow clients to send tombstones that reference the same room 2019-07-31 15:52:27 +01:00
Andrew Morgan
72167fb394 Change user deactivated errcode to USER_DEACTIVATED and use it (#5686)
This is intended as an amendment to #5674 as using M_UNKNOWN as the errcode makes it hard for clients to differentiate between an invalid password and a deactivated user (the problem we were trying to solve in the first place).

M_UNKNOWN was originally chosen as it was presumed that an MSC would have to be carried out to add a new code, but as Synapse is often the testing bed for new MSC implementations, it makes sense to try it out first in the wild and then add it into the spec if it is successful. Thus this PR returns a new M_USER_DEACTIVATED code when a deactivated user attempts to log in.
2019-07-31 15:19:06 +01:00
Andrew Morgan
58a755cdc3 Remove duplicate return statement 2019-07-31 13:24:51 +01:00
Erik Johnston
8fde611a8c Merge pull request #5794 from matrix-org/erikj/share_ssl_options_for_well_known
Share SSL options for well-known requests
2019-07-31 11:40:02 +01:00
Amber Brown
8f15832950 Remove DelayedCall debugging from test runs (#5787) 2019-07-31 20:39:22 +10:00
Erik Johnston
9fe6ad5fef Merge pull request #5796 from matrix-org/erikj/disable_codecov_report
Disable codecov reports to GH comments.
2019-07-31 11:16:15 +01:00
Erik Johnston
fe2f2fc530 Newsfile 2019-07-31 10:59:39 +01:00
Erik Johnston
6be336c0d8 Disable codecov reports to GH comments.
The double posting is really annoying, and I don't think anyone is
actually reading them. The commit statuses should give a good summary
and will link to a full report.
2019-07-31 10:56:02 +01:00
Erik Johnston
3b7a35a59a Newsfile 2019-07-31 10:39:24 +01:00
Erik Johnston
a9bcae9f50 Share SSL options for well-known requests 2019-07-31 10:39:24 +01:00
Brendan Abolivier
d4f91e7e9f Merge pull request #5793 from matrix-org/erikj/fix_bg_update
Don't recreate current_state_events.membership column
2019-07-30 21:19:39 +02:00
Erik Johnston
4037d3220a Newsfile 2019-07-30 16:43:59 +01:00
Erik Johnston
123c04daa7 Don't recreate column 2019-07-30 16:42:48 +01:00
Erik Johnston
62a2d60d72 Merge pull request #5792 from matrix-org/erikj/fix_bg_update
Fix current_state_events membership background update.
2019-07-30 15:20:09 +01:00
Erik Johnston
958d69f300 Newsfile 2019-07-30 14:53:52 +01:00
Erik Johnston
15056ca208 Fix current_state_events membership background update.
Turns out not all rooms are in `rooms`, so let's fetch the room list from
`current_state_events`. We move the delta file to force it to be run
again.
2019-07-30 14:51:41 +01:00
Erik Johnston
f92d05e254 Newsfile 2019-07-30 13:43:53 +01:00
Erik Johnston
7a48d0bab8 Merge pull request #5789 from matrix-org/erikj/fix_error_handling_keys
Fix error handling when fetching remote device keys
2019-07-30 13:26:12 +01:00
Erik Johnston
b4d5ff0af7 Don't log as an exception when failing during backfill 2019-07-30 13:19:22 +01:00
Erik Johnston
e23ab7f41a Newsfile 2019-07-30 13:10:00 +01:00
Erik Johnston
1ec7d656dd Unwrap error 2019-07-30 13:09:02 +01:00
Erik Johnston
458e51df7a Fix error handling when fetching remote device keys 2019-07-30 13:07:02 +01:00
Erik Johnston
63eb4a1b62 Merge pull request #5746 from matrix-org/erikj/test_bg_update_currnet_state
Add unit test for current state membership bg update
2019-07-30 10:00:02 +01:00
Richard van der Hoff
8c97f6414c Remove non-functional 'expire_access_token' setting (#5782)
The `expire_access_token` setting didn't do what it sounded like it should do. What it
actually did was make Synapse enforce the 'time' caveat on macaroons used as
access tokens, but since our access token macaroons never contained such a
caveat, it was always a no-op.

(The code to add 'time' caveats was removed back in v0.18.5, in #1656)
2019-07-30 08:25:02 +01:00
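For illustration, what enforcing a 'time' caveat looks like with pymacaroons (the library Synapse uses for macaroons); since the access tokens never carried such a caveat, the verification step below was effectively dead code:

```python
import time
import pymacaroons

key = "macaroon-secret-key"
mac = pymacaroons.Macaroon(location="example.com", identifier="access", key=key)
mac.add_first_party_caveat("gen = 1")
# The caveat that would have made expiry work, had tokens ever included it:
mac.add_first_party_caveat("time < %d" % (int(time.time() * 1000) + 3_600_000))

v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_general(
    lambda c: c.startswith("time < ")
    and int(time.time() * 1000) < int(c[len("time < "):])
)
assert v.verify(mac, key)
```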
Richard van der Hoff
5c3eecc70f changelog 2019-07-30 00:00:34 +01:00
Richard van der Hoff
4e97eb89e5 Handle loops in redaction events 2019-07-30 00:00:34 +01:00
Richard van der Hoff
448bcfd0f9 recursively fetch redactions 2019-07-30 00:00:34 +01:00
Richard van der Hoff
e6a6c4fbab split _get_events_from_db out of _enqueue_events 2019-07-29 23:15:15 +01:00
Richard van der Hoff
c9964ba600 Return dicts from _fetch_event_list 2019-07-29 23:15:15 +01:00
Amber Brown
865077f1d1 Room Complexity Client Implementation (#5783) 2019-07-30 02:47:27 +10:00
Erik Johnston
aecae8f397 Correctly handle errors doing requests to group servers 2019-07-29 17:21:57 +01:00
Erik Johnston
7c8c3b8437 Merge pull request #5774 from matrix-org/erikj/fix_rejected_membership
Fix room summary when rejected events are in state
2019-07-29 17:15:15 +01:00
Erik Johnston
3e013b7c8e Merge pull request #5752 from matrix-org/erikj/forgotten_user
Remove some more joins on room_memberships
2019-07-29 17:15:01 +01:00
Erik Johnston
2a12d76646 Merge pull request #5770 from matrix-org/erikj/fix_current_state_event_sqlite
Fix current_state bg update to work on old SQLite
2019-07-29 17:09:01 +01:00
Amber Brown
97a8b4caf7 Move some timeout checking logs to DEBUG #5785 2019-07-30 02:02:18 +10:00
Erik Johnston
df3a5db629 Expand comment 2019-07-29 16:40:25 +01:00
Jorik Schellekens
85b0bd8fe0 Update the device list cache when keys/query is called (#5693) 2019-07-29 16:34:44 +01:00
Erik Johnston
105e7f6ed3 Remove lost comment 2019-07-29 16:09:48 +01:00
Erik Johnston
3b476f5767 Fix debian packages for sid being called buster. (#5775)
* Fix debian packages for sid being called buster.

I don't know why the sid images return buster as their codename in
`lsb_release`, but they do, so let's just grab the codename from the
distro we pass into the Dockerfile

* Newsfile
2019-07-30 00:33:32 +10:00
Erik Johnston
d94916852f Newsfile 2019-07-29 13:04:58 +01:00
Erik Johnston
84c6ea1af8 Update old deps unit test to use old sqlite3 2019-07-29 13:04:50 +01:00
Erik Johnston
45df38e61b Fix current_state bg update to work on old SQLite 2019-07-29 13:04:10 +01:00
Brendan Abolivier
fa87004bc1 Merge pull request #5780 from matrix-org/baboliver/loopingcall-args
Add ability to pass arguments to looping calls
2019-07-29 10:58:22 +02:00
Brendan Abolivier
bd083a5fcf Changelog 2019-07-29 10:04:09 +02:00
Brendan Abolivier
244953be3f Add kwargs and doc 2019-07-29 10:03:14 +02:00
Brendan Abolivier
08352d44f8 Add ability to pass arguments to looping calls 2019-07-29 09:54:37 +02:00
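Twisted's `LoopingCall` already forwards extra arguments to the wrapped callable; a sketch of the pattern this change exposes:

```python
from twisted.internet import task

def prune_cache(cache_name, keep=100):
    print("pruning", cache_name, "down to", keep, "entries")

# Positional and keyword arguments are passed through on every call:
lc = task.LoopingCall(prune_cache, "state_cache", keep=500)
lc.start(60.0)  # fire now, then every 60 seconds (needs a running reactor)
```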
Richard van der Hoff
d74595e2ca Merge branch 'master' into develop 2019-07-26 12:39:33 +01:00
Richard van der Hoff
1a93daf353 Merge pull request #5744 from matrix-org/erikj/log_leave_origin_mismatch
Log when we receive a /make_* request from a different origin
2019-07-26 12:38:37 +01:00
Richard van der Hoff
97bf307755 yet more changelog attribution fixes 2019-07-26 12:06:06 +01:00
Richard van der Hoff
992333b995 correct attributions in changelog 2019-07-26 11:45:06 +01:00
Richard van der Hoff
8b16696b24 correct attributions in changelog 2019-07-26 11:36:28 +01:00
Richard van der Hoff
dde6ea7ff6 1.2.1 2019-07-26 11:33:16 +01:00
Erik Johnston
2e9cf7dda5 Newsfile 2019-07-26 10:14:31 +01:00
Erik Johnston
14c24c9037 Fix room summary when rejected events are in state
Annoyingly, the `current_state_events` table can include rejected events,
in which case the membership column will be null. To work around this,
let's just always filter out null memberships for now.
2019-07-26 10:11:36 +01:00
Richard van der Hoff
a0ee2ec458 Merge branch 'erikj/log_leave_origin_mismatch' into release-v1.2.1 2019-07-26 10:09:36 +01:00
Richard van der Hoff
d1020653fc Log when we receive a /make_* request from a different origin 2019-07-26 10:08:22 +01:00
Erik Johnston
1f8bae7724 Log when we receive a receipt from a different origin 2019-07-26 07:55:25 +01:00
Richard van der Hoff
1cad8d7b6f Convert RedactionTestCase to modern test style (#5768) 2019-07-26 07:38:55 +01:00
Richard van der Hoff
0f2ecb961e Fix DoS when there is a cycle in redaction events
Make sure that synapse doesn't explode when a redaction redacts itself, or
there is a larger cycle.
2019-07-26 06:36:48 +00:00
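A toy sketch of the cycle guard (names hypothetical): remember which event ids we've already visited while chasing redactions, so a self-redaction or a longer loop terminates.

```python
def resolve_redaction_chain(event_id, get_redacted_by):
    seen = set()
    while event_id is not None:
        if event_id in seen:
            return None  # cycle detected: bail out instead of looping forever
        seen.add(event_id)
        nxt = get_redacted_by(event_id)  # None if this event isn't redacted
        if nxt is None:
            return event_id  # reached the end of the chain
        event_id = nxt
```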
Richard van der Hoff
26d742fed6 Merge pull request #5767 from matrix-org/rav/redactions/cross_room_id
log when a redaction attempts to redact an event in a different room
2019-07-25 18:49:56 +01:00
Richard van der Hoff
70e18cee00 Merge branch 'rav/redactions/cross_room_id' into release-v1.2.1 2019-07-25 18:46:44 +01:00
Richard van der Hoff
b1605cdd23 log when a redaction attempts to redact an event in a different room 2019-07-25 18:26:20 +01:00
Richard van der Hoff
618bd1ee76 Fix some error cases in the caching layer. (#5749)
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.

This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
2019-07-25 15:59:45 +01:00
Andrew Morgan
f16aa3a44b Merge branch 'master' into develop 2019-07-25 15:19:22 +01:00
Andrew Morgan
c0a1301ccd 1.2.0 2019-07-25 14:10:32 +01:00
Andrew Morgan
baf081cd3b Merge tag 'v1.2.0rc2' into develop
Bugfixes
--------

- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))
2019-07-24 13:47:51 +01:00
Andrew Morgan
2d573e2e2b 1.2.0rc2 2019-07-24 13:43:40 +01:00
Erik Johnston
2276936bac Merge pull request #5743 from matrix-org/erikj/log_origin_receipts_mismatch
Log when we receive a receipt from a different origin
2019-07-24 13:27:57 +01:00
Richard van der Hoff
f30a71a67b Stop trying to fetch events with event_id=None. (#5753)
`None` is not a valid event id, so queuing up a database fetch for it seems
like a silly thing to do.

I considered making `get_event` return `None` if `event_id is None`, but then
its interaction with `allow_none` seemed unintuitive, and strong typing ftw.
2019-07-24 13:16:18 +01:00
Jorik Schellekens
cf2972c818 Fix servlet metric names (#5734)
* Fix servlet metric names

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>

* Remove redundant check

* Cover all return paths
2019-07-24 13:07:35 +01:00
Erik Johnston
c159803067 Newsfile 2019-07-24 11:51:44 +01:00
Erik Johnston
0c4a99607e Remove join when calculating room summaries. 2019-07-24 11:49:15 +01:00
Erik Johnston
62921fb53e Remove join on room_memberships when fetching rooms for user. 2019-07-24 11:45:58 +01:00
Erik Johnston
32768e96d4 Add function to get all forgotten rooms for user
This will allow us to efficiently filter out rooms that have been
forgotten in other queries without having to join against the
`room_memberships` table.
2019-07-24 11:44:23 +01:00
Richard van der Hoff
418635e68a Add a prometheus metric for active cache lookups. (#5750)
* Add a prometheus metric for active cache lookups.

* changelog
2019-07-24 11:33:13 +01:00
Erik Johnston
adcd5368b0 Newsfile 2019-07-23 17:00:24 +01:00
Erik Johnston
73bbaf2bc6 Add unit test for current state membership bg update 2019-07-23 17:00:22 +01:00
Jorik Schellekens
3641784e8c Make Jaeger fully configurable (#5694)
* Allow Jaeger to be configured

* Update sample config
2019-07-23 15:46:04 +01:00
Erik Johnston
65afc535a6 Update changelog.d/5743.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-23 15:14:21 +01:00
Amber Brown
4806651744 Replace returnValue with return (#5736) 2019-07-23 23:00:55 +10:00
Erik Johnston
fadfde9aaa Newsfile 2019-07-23 13:32:37 +01:00
Jorik Schellekens
18a466b84e Opentracing Utils (#5722)
* Add decerators for tracing functions

* Use the new clean contexts

* Context and edu utils

* Move opentracing setters

* Move whitelisting

* Sectioning comments

* Better args wrapper

* Docstrings

Co-Authored-By: Erik Johnston <erik@matrix.org>

* Remove unused methods.

* Don't use global

* One tracing decorator to rule them all.
2019-07-23 13:31:16 +01:00
Erik Johnston
3db1377b26 Log when we receive a receipt from a different origin 2019-07-23 13:31:03 +01:00
Erik Johnston
841b12867e Merge pull request #5732 from matrix-org/erikj/sdnotify
Add process hooks to tell systemd our state.
2019-07-23 13:06:53 +01:00
Erik Johnston
73bf452666 Merge pull request #5740 from matrix-org/erikj/worker_flakey_tests
Mark flakey tests as blacklisted for worker mode
2019-07-23 11:32:32 +01:00
Erik Johnston
22d2338ace Newsfile 2019-07-23 10:27:53 +01:00
Erik Johnston
1883223a01 Mark flakey tests as blacklisted for worker mode 2019-07-23 10:26:52 +01:00
Erik Johnston
4f6984aa88 Merge pull request #5738 from matrix-org/erikj/faster_update
Speed up current state background update.
2019-07-23 10:23:12 +01:00
Erik Johnston
cda4460d99 Also update systemd-with-workers contrib examples 2019-07-23 10:14:01 +01:00
Erik Johnston
39e594b765 Merge pull request #5733 from matrix-org/erikj/exlude_sytest_blacklist
Don't package sytest-blacklist file.
2019-07-23 10:11:34 +01:00
Erik Johnston
cf0006719d Newsfile 2019-07-23 10:01:30 +01:00
Erik Johnston
b2a629ef49 Speed up current state background update.
Turns out that storing huge JSON arrays in the progress JSON isn't
something that postgres particularly likes.
2019-07-23 10:01:30 +01:00
Erik Johnston
d9ea9881d2 Newsfile 2019-07-22 16:09:15 +01:00
Erik Johnston
c96322c8d2 Don't package sytest-blacklist file.
I don't think it's useful, and I don't even know where it would end up.
2019-07-22 16:07:12 +01:00
Amber Brown
0d0f6d12bc Fix logging in workers (#5729)
This also adds a worker blacklist.
2019-07-22 16:05:00 +01:00
Erik Johnston
17c27df6ea Update example systemd service file 2019-07-22 15:24:25 +01:00
Erik Johnston
80cfad233e Call startup commands as system triggers.
This helps ensure that we only consider ourselves "up" once all the
startup functions have completed.
2019-07-22 15:22:14 +01:00
Erik Johnston
720d30469f Merge pull request #5730 from matrix-org/erikj/cache_versions
Cache get_version_string.
2019-07-22 14:52:52 +01:00
Erik Johnston
79f689e6c2 Newsfile 2019-07-22 14:52:19 +01:00
Erik Johnston
c560b791e1 Add process hooks to tell systemd our state.
Fixes #5676.
2019-07-22 14:52:18 +01:00
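A sketch of the hooks using the sdnotify package this commit introduced (the dependency was later dropped in 412c6e21a8, #5871):

```python
import sdnotify

notifier = sdnotify.SystemdNotifier()

def on_starting():
    notifier.notify("STATUS=Starting up")

def on_ready():
    # With Type=notify in the unit file, systemd only treats the service
    # as "up" once it receives READY=1.
    notifier.notify("READY=1")

def on_stopping():
    notifier.notify("STOPPING=1")
```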
Jason Robinson
8e513e7afc Merge pull request #5731 from matrix-org/jaywink/admin-user-list-user-type
Add `user_type` to returned fields in admin API user list endpoints
2019-07-22 16:28:51 +03:00
Erik Johnston
22e862304a Update changelog.d/5730.misc
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-22 14:09:56 +01:00
Richard van der Hoff
0cb72812f9 Fix stack overflow in Keyring (#5724)
* Refactor Keyring._start_key_lookups

There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.

* Add a delay to key lookup lock release to fix stack overflow

A tactical call_later here should fix #5723

* changelog
2019-07-22 13:51:22 +01:00
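A toy illustration of the second fix's idea (not Synapse's actual code): firing a long chain of waiting Deferreds synchronously recurses once per waiter, so releasing the lock on the next reactor tick unwinds the stack between them.

```python
from twisted.internet import defer, reactor

def release_key_lookup_lock(waiter: defer.Deferred, result=None):
    # callLater(0, ...) schedules the callback for the next reactor
    # iteration instead of firing it from deep inside the current frame.
    reactor.callLater(0, waiter.callback, result)
```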
Andrew Morgan
f477ce4b1a Merge tag 'v1.2.0rc1' into develop
v1.2.0rc1

Features
--------

- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default ([\#5714](https://github.com/matrix-org/synapse/issues/5714))

Bugfixes
--------

- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
- Ignore redactions of m.room.create events. ([\#5701](https://github.com/matrix-org/synapse/issues/5701))

Updates to the Docker image
---------------------------

- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))

Improved Documentation
----------------------

- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to setup the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))

Deprecations and Removals
-------------------------

- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))

Internal Changes
----------------

- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persiting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))
2019-07-22 13:49:16 +01:00
Jason Robinson
66f5ff72fd Add user_type to returned fields in admin API user list endpoints
Mostly the user type will be empty (a normal user), but there is also the
"support" user type.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2019-07-22 15:29:18 +03:00
Erik Johnston
2017369f7d Newsfile 2019-07-22 13:18:25 +01:00
Andrew Morgan
8b0d5b171e Make changelog slightly more readable 2019-07-22 13:15:35 +01:00
Erik Johnston
5ea773c505 Cache get_version_string.
The version of a module isn't going to change over the lifetime of the
process (assuming no funky hot reloading is going on, which it isn't),
so let's just cache the result to avoid spawning lots of git
subprocesses.

Fixes #5672.
2019-07-22 13:15:08 +01:00
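A sketch of the memoisation, assuming `get_version_string` shells out to git:

```python
from functools import lru_cache
import subprocess

@lru_cache(maxsize=None)
def get_version_string(module_path: str) -> str:
    # The version can't change over the process lifetime, so one git
    # subprocess per distinct module path is enough.
    out = subprocess.check_output(["git", "describe", "--always"], cwd=module_path)
    return out.decode("ascii").strip()
```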
Andrew Morgan
54437c48ca 1.2.0rc1 2019-07-22 12:59:04 +01:00
Jorik Schellekens
f337d2f0f0 Demo uses deprecated cli option (#5725)
* Remove deprecated 'verbose' cli arg

* Create 5725.bugfix
2019-07-22 11:31:05 +01:00
Jorik Schellekens
0fd171770a Merge branch 'release-v1.2.0' into develop 2019-07-22 11:18:50 +01:00
Jorik Schellekens
826e6ec3bd Opentracing Documentation (#5703)
* Opentracing survival guide

* Update decorator names in doc

* Doc cleanup

These are all alterations as a result of comments in #5703, it
includes mostly typos and clarifications. The most interesting
changes are:

- Split developer and user docs into two sections
- Add a high level description of OpenTracing

* newsfile

* Move contributer specific info to docstring.

* Sample config.

* Trailing whitespace.

* Update 5703.misc

* Apply suggestions from code review

Mostly just rewording parts of the docs for clarity.

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-22 11:15:21 +01:00
Jorik Schellekens
f99554b15d Revert "Remove deprecated 'verbose' cli arg"
This reverts commit dc7cf81267.
2019-07-19 18:19:27 +01:00
Jorik Schellekens
dc7cf81267 Remove deprecated 'verbose' cli arg 2019-07-19 18:16:42 +01:00
Richard van der Hoff
f214bff0c0 changelog 2019-07-19 17:58:17 +01:00
Richard van der Hoff
dcca56baba Add a delay to key lookup lock release to fix stack overflow
A tactical call_later here should fix #5723
2019-07-19 17:57:00 +01:00
Richard van der Hoff
c7095be913 Refactor Keyring._start_key_lookups
There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.
2019-07-19 17:49:19 +01:00
Erik Johnston
7704873cb8 Merge pull request #5720 from matrix-org/erikj/transactions_upsert
Use upsert when updating destination retry interval
2019-07-19 16:51:16 +01:00
Erik Johnston
d7bd9651bc Merge pull request #5713 from matrix-org/erikj/use_cache_for_filtered_state
Delegate to cached version when using get_filtered_current_state_ids
2019-07-19 16:30:49 +01:00
Erik Johnston
5c07c97c09 Merge pull request #5706 from matrix-org/erikj/add_memberships_to_current_state
Add membership column to current_state_events table
2019-07-19 16:30:33 +01:00
Jorik Schellekens
7b8bc61834 Don't accept opentracing data from clients. (#5715)
* Don't accept opentracing data from clients.

* newsfile
2019-07-19 16:29:57 +01:00
Erik Johnston
ced4fdaa84 Newsfile 2019-07-19 13:40:26 +01:00
Erik Johnston
2410335507 Use upsert when updating destination retry interval 2019-07-19 13:40:24 +01:00
Erik Johnston
bd2e1a2aa8 LoggingTransaction accepts None for callback lists.
It's a bit disingenuous to give LoggingTransaction lists to append
callbacks to if we're not going to run the callbacks.
2019-07-19 13:36:04 +01:00
Erik Johnston
ebc5ed1296 Update comment for new column 2019-07-19 13:29:02 +01:00
Neil Johnson
5c05ae7ba0 Add 'rel' attribute to default welcome page. (#5695)
add rel attribute as a precaution against reverse tabnabbing in future
2019-07-19 12:03:36 +01:00
Richard van der Hoff
b73ce4ba81 Update the coding style doc (#5719)
A few fixes and removal of duplicated stuff, but mostly a rework of the wording on the config file.
2019-07-19 11:55:14 +01:00
Amber Brown
356ed0438e Speed up the PostgreSQL unit tests (#5717) 2019-07-19 19:01:23 +10:00
Amber Brown
6a85cb5ef7 Remove non-dedicated logging options and command line arguments (#5678) 2019-07-19 01:40:08 +10:00
Neil Johnson
a3e40bd5b4 towncrier 2019-07-18 16:02:09 +01:00
Neil Johnson
cfc00068bd enable aggregations support by default 2019-07-18 15:56:59 +01:00
Erik Johnston
dd2851d576 Newsfile 2019-07-18 15:27:18 +01:00
Erik Johnston
10523241d8 Delegate to cached version when using get_filtered_current_state_ids
In the case where it gets called with `StateFilter.all()`
2019-07-18 15:17:39 +01:00
Richard van der Hoff
82345bc09a Clean up opentracing configuration options (#5712)
Clean up config settings and dead code.

This is mostly about cleaning up the config format, to bring it into line with our conventions. In particular:
 * There should be a blank line after `## Section ##' headings
 * There should be a blank line between each config setting
 * There should be a `#`-only line between a comment and the setting it describes
 * We don't really do the `#  #` style commenting-out of whole sections if we can help it
 * rename `tracer_enabled` to `enabled`

While we're here, do more config parsing upfront, which makes it easier to use
later on.

Also removes redundant code from LogContextScopeManager.

Also changes the changelog fragment to a `feature` - it's exciting!
2019-07-18 15:06:54 +01:00
Amber Brown
7ad1d76356 Support Prometheus_client 0.4.0+ (#5636) 2019-07-18 23:57:15 +10:00
Andrew Morgan
b2a382efdb Remove the ability to query relations when the original event was redacted. (#5629)
Fixes #5594

Forbid viewing relations on an event once it has been redacted.
2019-07-18 14:41:42 +01:00
Erik Johnston
89c885909a Newsfile 2019-07-18 14:16:01 +01:00
Erik Johnston
8e1ada9e6f Use the current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
059d8c1a4e Track if current_state_events.membership is up to date 2019-07-18 14:16:01 +01:00
Erik Johnston
c618a5d348 Add background update for current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
6de09e07a6 Add membership column to current_state_events table.
It turns out that doing a join is surprisingly expensive for the DB to
do when the `room_memberships` table is larger than the disk cache.
2019-07-18 14:15:57 +01:00
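An illustrative before/after of the denormalisation (SQL paraphrased, not Synapse's exact queries):

```python
# Before: join against room_memberships to find a user's joined rooms.
BEFORE = """
    SELECT c.room_id FROM current_state_events AS c
    INNER JOIN room_memberships AS m USING (event_id)
    WHERE c.type = 'm.room.member' AND c.state_key = ? AND m.membership = 'join'
"""
# After: read the new membership column straight off current_state_events.
AFTER = """
    SELECT room_id FROM current_state_events
    WHERE type = 'm.room.member' AND state_key = ? AND membership = 'join'
"""
```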
Richard van der Hoff
fa8271c5ac Convert synapse.federation.transport.server to async (#5689)
* Convert BaseFederationServlet._wrap to async

Empirically, this fixes some lost stacktraces. It should be safe because the
wrapped function is called from JsonResource._async_render, which is already
async.

* Convert the rest of synapse.federation.transport.server to async

We may as well do the whole file while we're here.

* changelog

* flake8
2019-07-18 11:46:47 +01:00
Richard van der Hoff
9c70a02a9c Ignore redactions of m.room.create events (#5701) 2019-07-17 19:08:02 +01:00
Richard van der Hoff
1def298119 Improve Depends specs in debian package. (#5675)
This is basically a contrived way of adding a `Recommends` on `libpq5`, to fix #5653.

The way this is supposed to happen in debhelper is to run
`dh_shlibdeps`, which in turn runs `dpkg-shlibdeps`, which spits things out
into `debian/<package>.substvars` whence they can later be included by
`control`.

Previously, we had disabled `dh_shlibdeps`, mostly because `dpkg-shlibdeps`
gets confused about PIL's interdependent objects, but that's not really the
right thing to do and there is another way to work around that.

Since we don't always use postgres, we don't necessarily want a hard Depends on
libpq5, so I've actually ended up adding an explicit invocation of
`dpkg-shlibdeps` for `psycopg2`.

I've also updated the build-depends list for the package, which was missing a
couple of entries.
2019-07-17 17:47:07 +01:00
Richard van der Hoff
2091c91fde More refactoring in get_events_as_list (#5707)
We can now use `_get_events_from_cache_or_db` rather than going right back to
the database, which means that (a) we can benefit from caching, and (b) it
opens the way forward to more extensive checks on the original event.

We now always require the original event to exist before we will serve up a
redaction.
2019-07-17 17:34:13 +01:00
Richard van der Hoff
375162b3c3 Fix redaction authentication (#5700)
Ensures that redactions are correctly authenticated for recent room versions.

There are a few things going on here:

 * `_fetch_event_rows` is updated to return a dict rather than a list of rows.

 * Rather than returning multiple copies of an event which was redacted
   multiple times, it returns the redactions as a list within the dict.

 * It also returns the actual rejection reason, rather than merely the fact
   that it was rejected, so that we don't have to query the table again in
   `_get_event_from_row`.

 * The redaction handling is factored out of `_get_event_from_row`, and now
   checks if any of the redactions are valid.
2019-07-17 16:52:02 +01:00
Richard van der Hoff
65c5592b8e Refactor get_events_as_list (#5699)
A couple of changes here:

* get rid of a redundant `allow_rejected` condition - we should already have filtered out any rejected
  events before we get to that point in the code, and the redundancy is confusing. Instead, let's stick in
  an assertion just to make double-sure we aren't leaking rejected events by mistake.

* factor out a `_get_events_from_cache_or_db` method, which is going to be important for a 
  forthcoming fix to redactions.
2019-07-17 16:49:19 +01:00
Erik Johnston
c831c5b2bb Merge pull request #5597 from matrix-org/erikj/admin_api_cmd
Create basic admin command app
2019-07-16 15:59:36 +01:00
Erik Johnston
5ed7853bb0 Remove pointless description 2019-07-16 11:45:57 +01:00
Erik Johnston
f44354e17f Clean up arg name and remove lying comment 2019-07-16 11:39:40 +01:00
Erik Johnston
d0d479c1af Fix typo in synapse/app/admin_cmd.py
Co-Authored-By: Aaron Raimist <aaron@raim.ist>
2019-07-16 09:52:56 +01:00
Erik Johnston
03cc8c4b5d Fix invoking add_argument from homeserver.py 2019-07-15 14:25:05 +01:00
Erik Johnston
eca4f5ac73 s/exfiltrate_user_data/export_user_data/ 2019-07-15 14:17:28 +01:00
Erik Johnston
1b2067f53d Add FileExfiltrationWriter 2019-07-15 14:15:22 +01:00
Erik Johnston
e8c53b07f2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/admin_api_cmd 2019-07-15 14:13:22 +01:00
Erik Johnston
c8f35d8d38 Use set_defaults(func=) style 2019-07-15 14:09:35 +01:00
Erik Johnston
fdefb9e29a Move creation of ArgumentParser to caller 2019-07-15 14:09:35 +01:00
Erik Johnston
37b524f971 Fix up comments 2019-07-15 14:09:35 +01:00
Erik Johnston
823e13ddf4 Change add_arguments to be a static method 2019-07-15 14:09:33 +01:00
Andrew Morgan
18c516698e Return a different error from Invalid Password when a user is deactivated (#5674)
Return `This account has been deactivated` instead of `Invalid password` when a user is deactivated.
2019-07-15 11:45:29 +01:00
Erik Johnston
d86321300a Merge pull request #5589 from matrix-org/erikj/admin_exfiltrate_data
Add basic function to get all data for a user out of synapse
2019-07-15 10:04:02 +01:00
Richard van der Hoff
d336b51331 Add a docker type to the towncrier configuration (#5673)
... and certain other changelog-related fixes
2019-07-12 17:27:07 +01:00
Richard van der Hoff
5f158ec039 Implement access token expiry (#5660)
Record how long an access token is valid for, and raise a soft-logout once it
expires.
2019-07-12 17:26:02 +01:00
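A sketch of the soft-logout check (names and error shape hypothetical): record `valid_until_ms` when issuing the token, then reject expired tokens with a `soft_logout` flag so clients can re-authenticate without tearing down their session.

```python
import time

class AuthError(Exception):
    def __init__(self, code, msg, soft_logout=False):
        super().__init__(msg)
        self.code = code
        self.soft_logout = soft_logout

def check_access_token(token_row: dict) -> str:
    now_ms = int(time.time() * 1000)
    valid_until_ms = token_row.get("valid_until_ms")
    if valid_until_ms is not None and now_ms >= valid_until_ms:
        raise AuthError(401, "Access token has expired", soft_logout=True)
    return token_row["user_id"]
```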
Erik Johnston
db0a50bc40 Fixup docstrings 2019-07-12 16:59:59 +01:00
Andrew Morgan
24aa0e0a5b fix typo: backgroud -> background 2019-07-12 15:29:40 +01:00
Richard van der Hoff
4c17a87606 fix changelog name 2019-07-12 11:47:24 +01:00
Ulrik Günther
d445b3ae57 Update reverse_proxy.rst (#5397)
Updates reverse_proxy.rst with information about nginx's URI normalisation.
2019-07-12 11:46:18 +01:00
Slavi Pantaleev
59f15309ca Add missing space in default logging file format generated by the Docker image (#5620)
This adds a missing space, without which log lines appear uglier.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2019-07-12 11:43:42 +01:00
Slavi Pantaleev
f369164761 Upgrade Alpine Linux used in the Docker image (3.8 -> 3.10) (#5619)
Alpine Linux 3.8 is still supported, but it seems like
it's quite outdated now.

While Python should be the same on both, all other libraries, etc.,
are much newer in Alpine 3.9 and 3.10.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2019-07-12 11:38:25 +01:00
Richard van der Hoff
6bb0357c94 Add a mechanism for per-test configs (#5657)
It's useful to be able to tweak the homeserver config to be used for each
test. This PR adds a mechanism to do so.
2019-07-12 10:16:23 +01:00
Amber Brown
a83577d64f Use /src for checking out synapse during sytests (#5664) 2019-07-11 23:43:41 +10:00
Lrizika
39e9839a04 Improved docs on setting up Postgresql (#5661)
Added that synapse_user needs a database to access before it can auth
Noted you'll need to enable password auth, linked to pg_hba.conf docs
2019-07-11 14:31:36 +01:00
Andrew Morgan
78a1cd36b5 small typo fix (#5655) 2019-07-11 13:33:23 +01:00
Richard van der Hoff
0a4001eba1 Clean up exception handling for access_tokens (#5656)
First of all, let's get rid of `TOKEN_NOT_FOUND_HTTP_STATUS`. It was a hack we
did at one point when it was possible to return either a 403 or a 401 if the
creds were missing. We always return a 401 in these cases now (thankfully), so
it's not needed.

Let's also stop abusing `AuthError` for these cases. Honestly they have nothing
that relates them to the other places that `AuthError` is used, other than the
fact that they are loosely under the 'Auth' banner. It makes no sense for them
to share exception classes.

Instead, let's add a couple of new exception classes: `InvalidClientTokenError`
and `MissingClientTokenError`, for the `M_UNKNOWN_TOKEN` and `M_MISSING_TOKEN`
cases respectively - and an `InvalidClientCredentialsError` base class for the
two of them.
2019-07-11 11:06:23 +01:00
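A sketch of the hierarchy the commit describes (constructor details abbreviated):

```python
class SynapseError(Exception):
    def __init__(self, code, msg, errcode="M_UNKNOWN"):
        super().__init__(msg)
        self.code, self.errcode = code, errcode

class InvalidClientCredentialsError(SynapseError):
    """Base class covering both missing and invalid access tokens."""

class InvalidClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Invalid access token passed."):
        super().__init__(401, msg, errcode="M_UNKNOWN_TOKEN")

class MissingClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Missing access token."):
        super().__init__(401, msg, errcode="M_MISSING_TOKEN")
```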
Jorik Schellekens
38a6d3eea7 Add basic opentracing support (#5544)
* Configure and initialise tracer

Includes config options for the tracer and sets up JaegerClient.

* Scope manager using LogContexts

We piggy-back our tracer scopes by using log context.
The current log context gives us the current scope. If a new scope is
created, we create a stack of scopes in the context.

* jaeger is a dependency now

* Carrier inject and extraction for Twisted Headers

* Trace federation requests on the way in and out.

The span is created in _started_processing and closed in
_finished_processing because we need a meaningful log context.

* Create logcontext for new scope.

Instead of having a stack of scopes in a logcontext we create a new
context for a new scope if the current logcontext already has a scope.

* Remove scope from logcontext if logcontext is top level

* Disable tracer if not configured

* typo

* Remove dependence on jaeger internals

* bools

* Set service name

* Explicitly state that the tracer is disabled

* Black is the new black

* Newsfile

* Code style

* Use the new config setup.

* Generate config.

* Copyright

* Rename config to opentracing

* Remove user whitelisting

* Empty whitelist by default

* Use ConfigError instead of RuntimeError

* Use isinstance

* Use tag constants for opentracing.

* Remove debug comment and no need to explicitly record error

* Two errors a "s(c)entry"

* Docstrings!

* Remove debugging brainslip

* Homeserver whitelisting

* Better opentracing config comment

* linting

* Include worker name in service_name

* Make opentracing an optional dependency

* Neater config retrieval

* Clean up dummy tags

* Instantiate tracing as object instead of global class

* Include opentracing as a homeserver member.

* Thread opentracing to the request level

* Reference opentracing through hs

* Instantiate dummy opentracing for tests.

* About to revert, just keeping the unfinished changes just in case

* Revert back to global state, commit number:

9ce4a3d9067bf9889b86c360c05ac88618b85c4f

* Use class level methods in tracerutils

* Start and stop requests spans in a place where we
have access to the authenticated entity

* Seen it, isort it

* Make sure to close the active span.

* I'm getting black and blue from this.

* Logger formatting

Co-Authored-By: Erik Johnston <erik@matrix.org>

* Outdated comment

* Import opentracing at the top

* Return a contextmanager

* Start tracing client requests from the servlet

* Return noop context manager if not tracing

* Explicitly say that these are federation requests

* Include servlet name in client requests

* Use context manager

* Move opentracing to logging/

* Seen it, isort it again!

* Ignore twisted return exceptions on context exit

* Escape the scope

* Scopes should be entered to make them useful.

* Nicer decorator names

* Just one init, init?

* Don't need to close something that isn't open

* Docs make you smarter
2019-07-11 10:36:03 +01:00
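Two of the bullets above ("Disable tracer if not configured" and "Return noop context manager if not tracing") boil down to a pattern like this sketch (function names are illustrative):

import contextlib

@contextlib.contextmanager
def _noop_scope():
    # Used when opentracing is not configured: callers can still write
    # `with start_active_span(...):` and nothing happens.
    yield

def start_active_span(operation_name, tracer=None):
    if tracer is None:
        return _noop_scope()
    return tracer.start_active_span(operation_name)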
Richard van der Hoff
1890cfcf82 Inline issue_access_token (#5659)
this is only used in one place, so it's clearer if we inline it and reduce the
API surface.

Also, fixes a buglet where we would create an access token even if we were
about to block the user (we would never return the AT, so the user could never
use it, but it was still created and added to the db.)
2019-07-11 04:10:07 +10:00
Brendan Abolivier
8ab3444fdf Merge pull request #5658 from matrix-org/babolivier/is-json
Send 3PID bind requests as JSON data
2019-07-10 17:01:26 +01:00
Richard van der Hoff
953dbb7980 Remove access-token support from RegistrationStore.register (#5642)
The 'token' param is no longer used anywhere except the tests, so let's kill
that off too.
2019-07-10 16:26:49 +01:00
Brendan Abolivier
b2a2e96ea6 Typo 2019-07-10 15:56:21 +01:00
Brendan Abolivier
351d9bd317 Rename changelog file 2019-07-10 15:48:50 +01:00
Brendan Abolivier
f77e997619 Send 3PID bind requests as JSON data 2019-07-10 15:46:42 +01:00
Andrew Morgan
f281714583 Don't bundle aggregations when retrieving the original event (#5654)
A fix for PR #5626, which returned the original event content as part of a call to /relations.

The only problem was that we were attempting to aggregate the relations on top of it when we did so. We now set bundle_aggregations to False in the get_event call.

We also do this when pulling the relation events as well, because edits of edits are not something we'd like to support here.
2019-07-10 14:43:11 +01:00
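In outline (the surrounding call shape is assumed; only the flag name comes from the commit):

async def get_original_event(store, event_id):
    # Fetch the original event without bundling aggregations on top of it,
    # since the /relations response already lists the relations separately.
    return await store.get_event(event_id, bundle_aggregations=False)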
Andrew Morgan
3dd61d12cd Add a linting script (#5627)
Add a dev script to cover all the different linting steps.
2019-07-10 14:03:18 +01:00
Bruno Windels
4d122d295c Correct pep517 flag in readme (#5651) 2019-07-10 13:55:24 +01:00
Brendan Abolivier
65434da75d Merge pull request #5638 from matrix-org/babolivier/invite-json
Use JSON when querying the IS's /store-invite endpoint
2019-07-09 18:48:38 +01:00
Hubert Chathi
7b3bc755a3 remove unused and unnecessary check for FederationDeniedError (#5645)
FederationDeniedError is a subclass of SynapseError, which is a subclass of
CodeMessageException, so if e is a FederationDeniedError, then this check for
FederationDeniedError will never be reached since it will be caught by the
check for CodeMessageException above.  The check for CodeMessageException does
almost the same thing as this check (since FederationDeniedError initialises
with code=403 and msg="Federation denied with %s."), so may as well just keep
allowing it to handle this case.
2019-07-09 18:37:39 +01:00
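The reasoning is easiest to see with the hierarchy written out (the class names, 403 code and message come from the commit message; constructor shapes are assumed):

class CodeMessageException(Exception):
    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code

class SynapseError(CodeMessageException):
    pass

class FederationDeniedError(SynapseError):
    def __init__(self, destination):
        super().__init__(403, "Federation denied with %s." % (destination,))

e = FederationDeniedError("remote.example.com")
# Any `isinstance(e, CodeMessageException)` check placed first will already
# catch e, so a later FederationDeniedError branch is unreachable.
assert isinstance(e, CodeMessageException)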
Andrew Morgan
d88421ab03 Include the original event in /relations (#5626)
When asking for the relations of an event, include the original event in the response. This will mostly be used for efficiently showing edit history, but could be useful in other circumstances.
2019-07-09 13:43:08 +01:00
Brendan Abolivier
af67c7c1de Merge pull request #5644 from matrix-org/babolivier/profile-allow-self
Allow newly-registered users to lookup their own profiles
2019-07-09 10:25:40 +01:00
Richard van der Hoff
824707383b Remove access-token support from RegistrationHandler.register (#5641)
Nothing uses this now, so we can remove the dead code, and clean up the
API.

Since we're changing the shape of the return value anyway, we take the
opportunity to give the method a better name.
2019-07-08 19:01:08 +01:00
Brendan Abolivier
73cb716b3c Lint 2019-07-08 17:44:20 +01:00
Brendan Abolivier
5e01e9ac19 Add test case 2019-07-08 17:41:16 +01:00
Brendan Abolivier
f3615a8aa5 Changelog 2019-07-08 17:31:58 +01:00
Brendan Abolivier
7556851665 Allow newly-registered users to lookup their own profiles
When a user creates an account and the 'require_auth_for_profile_requests' config flag is set, a client that performed the registration and wants to look up the newly-created profile would have the request denied, because the user doesn't share a room with themselves yet.
2019-07-08 17:31:00 +01:00
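The fix amounts to short-circuiting the shared-room check when a user asks about themselves; roughly (the helper name is hypothetical):

async def assert_profile_query_allowed(store, requester_id, target_user_id):
    if requester_id == target_user_id:
        return  # users can always see their own profile, even with no rooms yet
    if not await store.do_users_share_a_room(requester_id, target_user_id):
        raise PermissionError("Profile lookup is not allowed")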
Richard van der Hoff
43d175d17a Unblacklist some user_directory sytests (#5637)
Fixes https://github.com/matrix-org/synapse/issues/2306
2019-07-09 02:15:17 +10:00
Richard van der Hoff
b70e080b59 Better logging for auto-join. (#5643)
It was pretty unclear what was going on, so I've added a couple of log lines.
2019-07-08 17:14:51 +01:00
Brendan Abolivier
57eacee4f4 Merge branch 'develop' into babolivier/invite-json 2019-07-08 15:49:23 +01:00
Brendan Abolivier
c142e5d16a Changelog 2019-07-08 15:28:38 +01:00
Richard van der Hoff
4b1f7febc7 Update ModuleApi to avoid register(generate_token=True) (#5640)
* Update ModuleApi to avoid register(generate_token=True)

This is the only place this is still used, so I'm trying to kill it off.

* changelog
2019-07-08 23:55:34 +10:00
Richard van der Hoff
f9e99f9534 Factor out some redundant code in the login impl (#5639)
* Factor out some redundant code in the login impl

Also fixes a redundant access_token which was generated during jwt login.

* changelog
2019-07-08 23:54:22 +10:00
Richard van der Hoff
1af2fcd492 Move get_or_create_user to test code (#5628)
This is only used in tests, so...
2019-07-08 23:52:26 +10:00
Brendan Abolivier
f05c7d62bc Lint 2019-07-08 14:29:27 +01:00
Brendan Abolivier
1a807dfe68 Use application/json when querying the IS's /store-invite endpoint 2019-07-08 14:19:39 +01:00
Andrew Morgan
589d43d9cd Add a few more common environment directory names to black exclusion (#5630)
* Add a few more common environment directory names to black exclusion

* Add changelog
2019-07-08 21:53:33 +10:00
J. Ryan Stinnett
9b1b79f3f5 Add default push rule to ignore reactions (#5623)
This adds a default push rule following the proposal in
[MSC2153](https://github.com/matrix-org/matrix-doc/pull/2153).

See also https://github.com/vector-im/riot-web/issues/10208
See also https://github.com/matrix-org/matrix-js-sdk/pull/976
2019-07-05 17:37:52 +01:00
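For reference, the rule proposed in MSC2153 has roughly this shape (shown as a Python dict; the exact rule id and defaults are an assumption):

reaction_rule = {
    "rule_id": "global/override/.m.rule.reaction",
    "default": True,
    "enabled": True,
    "conditions": [
        {"kind": "event_match", "key": "type", "pattern": "m.reaction"},
    ],
    # Reactions are rendered from aggregations rather than notified directly.
    "actions": ["dont_notify"],
}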
Andrew Morgan
ad8b909ce9 Add origin_server_ts and sender fields to m.replace (#5613)
The Riot team would like some extra fields as part of m.replace, so here you go.

Fixes: #5598
2019-07-05 17:20:02 +01:00
Richard van der Hoff
80cc82a445 Remove support for invite_3pid_guest. (#5625)
This has never been documented, and I'm not sure it's ever been used outside
sytest.

It's quite a lot of poorly-maintained code, so I'd like to get rid of it.

For now I haven't removed the database table; I suggest we leave that for a
future clearout.
2019-07-05 16:47:58 +01:00
Erik Johnston
b4f5416dd9 pep8 2019-07-05 14:41:29 +01:00
Erik Johnston
eadb13d2e9 Remove FileExfiltrationWriter 2019-07-05 14:15:00 +01:00
Erik Johnston
7f0d8e4288 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/admin_exfiltrate_data 2019-07-05 14:08:21 +01:00
Erik Johnston
9ccea16d45 Assume key existence. Update docstrings 2019-07-05 14:07:56 +01:00
Richard van der Hoff
a6a776f3d8 remove dead transaction persist code (#5622)
this hasn't done anything for years
2019-07-05 12:59:42 +01:00
Richard van der Hoff
9481707a52 Fixes to the federation rate limiter (#5621)
- Put the default window_size back to 1000ms (broken by #5181)
- Make the `rc_federation` config actually do something
- fix an off-by-one error in the 'concurrent' limit
- Avoid creating an unused `_PerHostRatelimiter` object for every single
  incoming request
2019-07-05 11:10:19 +01:00
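As a rough illustration (a toy model, not Synapse's actual _PerHostRatelimiter), a per-host limiter combines a sliding time window with a concurrency cap:

import time
from collections import deque

class PerHostRateLimiter:
    """Toy limiter: at most sleep_limit requests per window_size ms,
    plus a cap on concurrent in-flight requests. Defaults are illustrative."""

    def __init__(self, window_size=1000, sleep_limit=10, concurrent=3):
        self.window_size = window_size
        self.sleep_limit = sleep_limit
        self.concurrent = concurrent
        self.request_times = deque()
        self.in_flight = 0

    def allow(self):
        now = time.time() * 1000
        # Drop requests that have fallen out of the window.
        while self.request_times and now - self.request_times[0] > self.window_size:
            self.request_times.popleft()
        if self.in_flight >= self.concurrent:
            # Writing `>` here would admit one request too many: the sort of
            # off-by-one in the 'concurrent' limit that the commit above fixes.
            return False
        if len(self.request_times) >= self.sleep_limit:
            return False
        self.request_times.append(now)
        self.in_flight += 1
        return True

    def done(self):
        self.in_flight -= 1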
Andrew Morgan
0e5434264f Make errors about email password resets much clearer (#5616)
The runtime errors that dealt with local email password resets talked about config options that users may not even have in their config file yet (if upgrading). Instead, the cryptic errors are now replaced with hopefully much more helpful ones.
2019-07-05 10:44:12 +01:00
Amber Brown
1ee268d33d Improve the backwards compatibility re-exports of synapse.logging.context (#5617)
* Improve the backwards compatibility re-exports of synapse.logging.context.

* reexport logformatter too
2019-07-05 02:32:02 +10:00
Andrew Morgan
ee91ac179c Add a sytest blacklist file (#5611)
* Add a sytest blacklist file

* Add changelog

* Add blacklist to manifest
2019-07-05 01:24:13 +10:00
Erik Johnston
822a0f0435 Merge branch 'master' of github.com:matrix-org/synapse into develop 2019-07-04 14:00:27 +01:00
Erik Johnston
c061d4f237 Fixup from review comments. 2019-07-04 11:41:06 +01:00
Amber Brown
463b072b12 Move logging utilities out of the side drawer of util/ and into logging/ (#5606) 2019-07-04 00:07:04 +10:00
Erik Johnston
d0b849c86d Apply comment fixups from code review
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-03 15:03:38 +01:00
Richard van der Hoff
cb8d568cf9 Fix 'utime went backwards' errors on daemonization. (#5609)
* Fix 'utime went backwards' errors on daemonization.

Fixes #5608

* remove spurious debug
2019-07-03 22:40:45 +10:00
Erik Johnston
10fe904d88 Newsfile 2019-07-02 17:21:27 +01:00
Erik Johnston
9f3c0a8556 Add basic admin cmd app 2019-07-02 17:12:48 +01:00
Erik Johnston
65dd5543f6 Newsfile 2019-07-02 12:10:23 +01:00
Erik Johnston
8ee69f299c Add basic function to get all data for a user out of synapse 2019-07-02 12:09:04 +01:00
323 changed files with 9730 additions and 4990 deletions


@@ -49,14 +49,15 @@ steps:
- command:
- "python -m pip install tox"
- "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
- "python3.5 -m pip install tox"
- "tox -e py35-old,codecov"
label: ":python: 3.5 / SQLite / Old Deps"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:3.5"
image: "ubuntu:xenial" # We use xenail to get an old sqlite and python
propagate-environment: true
retry:
automatic:
@@ -117,8 +118,10 @@ steps:
limit: 2
- label: ":python: 3.5 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
plugins:
@@ -134,8 +137,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -151,8 +156,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 11"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -173,11 +180,13 @@ steps:
queue: "medium"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash .buildkite/synapse_sytest.sh"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
@@ -192,11 +201,13 @@ steps:
POSTGRES: "1"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash .buildkite/synapse_sytest.sh"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
@@ -210,14 +221,17 @@ steps:
env:
POSTGRES: "1"
WORKERS: "1"
BLACKLIST: "synapse-blacklist-with-workers"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash .buildkite/synapse_sytest.sh"
- "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
soft_fail: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1


@@ -1,145 +0,0 @@
#!/bin/bash
#
# Fetch sytest, and then run the tests for synapse. The entrypoint for the
# sytest-synapse docker images.
set -ex
if [ -n "$BUILDKITE" ]
then
SYNAPSE_DIR=`pwd`
else
SYNAPSE_DIR="/src"
fi
# Attempt to find a sytest to use.
# If /sytest exists, it means that a SyTest checkout has been mounted into the Docker image.
if [ -d "/sytest" ]; then
# If the user has mounted in a SyTest checkout, use that.
echo "Using local sytests..."
# create ourselves a working directory and dos2unix some scripts therein
mkdir -p /work/jenkins
for i in install-deps.pl run-tests.pl tap-to-junit-xml.pl jenkins/prep_sytest_for_postgres.sh; do
dos2unix -n "/sytest/$i" "/work/$i"
done
ln -sf /sytest/tests /work
ln -sf /sytest/keys /work
SYTEST_LIB="/sytest/lib"
else
if [ -n "BUILDKITE_BRANCH" ]
then
branch_name=$BUILDKITE_BRANCH
else
# Otherwise, try to find out which branch the Synapse checkout is using. Fall back to develop if it's not a branch.
branch_name="$(git --git-dir=/src/.git symbolic-ref HEAD 2>/dev/null)" || branch_name="develop"
fi
# Try and fetch the branch
echo "Trying to get same-named sytest branch..."
wget -q https://github.com/matrix-org/sytest/archive/$branch_name.tar.gz -O sytest.tar.gz || {
# Probably a 404, fall back to develop
echo "Using develop instead..."
wget -q https://github.com/matrix-org/sytest/archive/develop.tar.gz -O sytest.tar.gz
}
mkdir -p /work
tar -C /work --strip-components=1 -xf sytest.tar.gz
SYTEST_LIB="/work/lib"
fi
cd /work
# PostgreSQL setup
if [ -n "$POSTGRES" ]
then
export PGUSER=postgres
export POSTGRES_DB_1=pg1
export POSTGRES_DB_2=pg2
# Start the database
su -c 'eatmydata /usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start' postgres
# Use the Jenkins script to write out the configuration for a PostgreSQL using Synapse
jenkins/prep_sytest_for_postgres.sh
# Make the test databases for the two Synapse servers that will be spun up
su -c 'psql -c "CREATE DATABASE pg1;"' postgres
su -c 'psql -c "CREATE DATABASE pg2;"' postgres
fi
if [ -n "$OFFLINE" ]; then
# if we're in offline mode, just put synapse into the virtualenv, and
# hope that the deps are up-to-date.
#
# (`pip install -e` likes to reinstall setuptools even if it's already installed,
# so we just run setup.py explicitly.)
#
(cd $SYNAPSE_DIR && /venv/bin/python setup.py -q develop)
else
# We've already created the virtualenv, but let's double check we have all
# deps.
/venv/bin/pip install -q --upgrade --no-cache-dir -e $SYNAPSE_DIR
/venv/bin/pip install -q --upgrade --no-cache-dir \
lxml psycopg2 coverage codecov tap.py
# Make sure all Perl deps are installed -- this is done in the docker build
# so will only install packages added since the last Docker build
./install-deps.pl
fi
# Run the tests
>&2 echo "+++ Running tests"
RUN_TESTS=(
perl -I "$SYTEST_LIB" ./run-tests.pl --python=/venv/bin/python --synapse-directory=$SYNAPSE_DIR --coverage -O tap --all
)
TEST_STATUS=0
if [ -n "$WORKERS" ]; then
RUN_TESTS+=(-I Synapse::ViaHaproxy --dendron-binary=/pydron.py)
else
RUN_TESTS+=(-I Synapse)
fi
"${RUN_TESTS[@]}" "$@" > results.tap || TEST_STATUS=$?
if [ $TEST_STATUS -ne 0 ]; then
>&2 echo -e "run-tests \e[31mFAILED\e[0m: exit code $TEST_STATUS"
else
>&2 echo -e "run-tests \e[32mPASSED\e[0m"
fi
>&2 echo "--- Copying assets"
# Copy out the logs
mkdir -p /logs
cp results.tap /logs/results.tap
rsync --ignore-missing-args --min-size=1B -av server-0 server-1 /logs --include "*/" --include="*.log.*" --include="*.log" --exclude="*"
# Upload coverage to codecov and upload files, if running on Buildkite
if [ -n "$BUILDKITE" ]
then
/venv/bin/coverage combine || true
/venv/bin/coverage xml || true
/venv/bin/codecov -X gcov -f coverage.xml
wget -O buildkite.tar.gz https://github.com/buildkite/agent/releases/download/v3.13.0/buildkite-agent-linux-amd64-3.13.0.tar.gz
tar xvf buildkite.tar.gz
chmod +x ./buildkite-agent
# Upload the files
./buildkite-agent artifact upload "/logs/**/*.log*"
./buildkite-agent artifact upload "/logs/results.tap"
if [ $TEST_STATUS -ne 0 ]; then
# Annotate, if failure
/venv/bin/python $SYNAPSE_DIR/.buildkite/format_tap.py /logs/results.tap "$BUILDKITE_LABEL" | ./buildkite-agent annotate --style="error" --context="$BUILDKITE_LABEL"
fi
fi
exit $TEST_STATUS


@@ -0,0 +1,30 @@
# This file serves as a blacklist for SyTest tests that we expect will fail in
# Synapse when run under worker mode. For more details, see sytest-blacklist.
Message history can be paginated
Can re-join room if re-invited
/upgrade creates a new room
The only membership state included in an initial sync is for all the senders in the timeline
Local device key changes get to remote servers
If remote user leaves room we no longer receive device updates
Forgotten room messages cannot be paginated
Inbound federation can get public room list
Members from the gap are included in gappy incr LL sync
Leaves are present in non-gapped incremental syncs
Old leaves are present in gapped incremental syncs
User sees updates to presence from other users in the incremental sync.
Gapped incremental syncs include all state changes
Old members are included in gappy incr LL sync if they start speaking


@@ -1,5 +1,4 @@
comment:
layout: "diff"
comment: off
coverage:
status:

.gitignore

@@ -16,6 +16,7 @@ _trial_temp*/
/*.log
/*.log.config
/*.pid
/.python-version
/*.signing.key
/env/
/homeserver*.yaml


@@ -1,3 +1,224 @@
Synapse 1.3.1 (2019-08-17)
==========================
Features
--------
- Drop hard dependency on `sdnotify` python package. ([\#5871](https://github.com/matrix-org/synapse/issues/5871))
Bugfixes
--------
- Fix startup issue (hang on ACME provisioning) due to ordering of Twisted reactor startup. Thanks to @chrismoos for supplying the fix. ([\#5867](https://github.com/matrix-org/synapse/issues/5867))
Synapse 1.3.0 (2019-08-15)
==========================
Bugfixes
--------
- Fix 500 Internal Server Error on `publicRooms` when the public room list was
cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))
Synapse 1.3.0rc1 (2019-08-13)
==========================
Features
--------
- Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to login. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
- Add sd_notify hooks to ease systemd integration and allows usage of Type=Notify. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
- Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
- Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
- Allow defining HTML templates to serve the user on account renewal attempt when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))
Bugfixes
--------
- Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
- Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
- start.sh no longer uses deprecated cli option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
- Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
- Fix debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
- Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
- Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
- Fix check that tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
- Fix error when trying to login as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
- Fix bug where user `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
- The purge_remote_media.sh script was fixed. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))
Deprecations and Removals
-------------------------
- Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
- Remove non-functional 'expire_access_token' setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))
Internal Changes
----------------
- Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
- Add precautionary measures to prevent future abuse of `window.opener` in default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
- Reduce database IO usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
- Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
- Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
- Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
- Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
- Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
- Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
- Cache result of get_version_string to reduce overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
- Return 'user_type' in admin API user endpoints results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
- Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
- Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
- Blacklist some flakey tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
- Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
- Add a prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
- Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
- Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
- Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
- Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
- Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
- Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
- Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
- Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
- Don't allow clients to send tombstone events that reference the room it's sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
- Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
- Deny sending well known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
- Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
- Handle pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
- Return 502 not 500 when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
- Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
- Add a lower bound to well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
- Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
Synapse 1.2.1 (2019-07-26)
==========================
Security update
---------------
This release includes *four* security fixes:
- Prevent an attack where a federated server could send redactions for arbitrary events in v1 and v2 rooms. ([\#5767](https://github.com/matrix-org/synapse/issues/5767))
- Prevent a denial-of-service attack where cycles of redaction events would make Synapse spin infinitely. Thanks to `@lrizika:matrix.org` for identifying and responsibly disclosing this issue. ([0f2ecb961](https://github.com/matrix-org/synapse/commit/0f2ecb961))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @dylangerdaly for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Fix a vulnerability where a federated server could spoof read-receipts from
users on other servers. Thanks to @dylangerdaly for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
Additionally, the following fix was in Synapse **1.2.0**, but was not correctly
identified during the original release:
- It was possible for a room moderator to send a redaction for an `m.room.create` event, which would downgrade the room to version 1. Thanks to `/dev/ponies` for identifying and responsibly disclosing this issue! ([\#5701](https://github.com/matrix-org/synapse/issues/5701))
Synapse 1.2.0 (2019-07-25)
==========================
No significant changes.
Synapse 1.2.0rc2 (2019-07-24)
=============================
Bugfixes
--------
- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))
Synapse 1.2.0rc1 (2019-07-22)
=============================
Security fixes
--------------
This update included a security fix which was initially incorrectly flagged as
a regular bug fix.
- It was possible for a room moderator to send a redaction for an `m.room.create` event, which would downgrade the room to version 1. Thanks to `/dev/ponies` for identifying and responsibly disclosing this issue! ([\#5701](https://github.com/matrix-org/synapse/issues/5701))
Features
--------
- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default. ([\#5714](https://github.com/matrix-org/synapse/issues/5714))
Bugfixes
--------
- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
Updates to the Docker image
---------------------------
- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))
Improved Documentation
----------------------
- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to setup the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))
Deprecations and Removals
-------------------------
- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))
Internal Changes
----------------
- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))
Synapse 1.1.0 (2019-07-04)
==========================


@@ -30,11 +30,10 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Buildkite
<https://buildkite.com/matrix-dot-org/synapse>`_ for continuous integration.
Buildkite builds need to be authorised by a maintainer. If your change breaks
the build, this will be shown in GitHub, so please keep an eye on the pull
request for feedback.
We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
continuous integration. Buildkite builds need to be authorised by a
maintainer. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
@@ -70,13 +69,21 @@ All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes).
To create a changelog entry, make a new file in the ``changelog.d`` directory named
in the format of ``PRnumber.type``. The type can be one of the following:
The content of the file is your changelog entry, which can contain Markdown
formatting. The entry should end with a full stop ('.') for consistency.
* ``feature``.
* ``bugfix``.
* ``docker`` (for updates to the Docker image).
* ``doc`` (for updates to the documentation).
* ``removal`` (also used for deprecations).
* ``misc`` (for internal-only changes).
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our `changelog
<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
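For example, a bugfix submitted as a hypothetical PR #1234 would be described in a file ``changelog.d/1234.bugfix`` containing a single line such as ``Fix a bug where ...``.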
Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!


@@ -419,12 +419,11 @@ If Synapse is not configured with an SMTP server, password reset via email will
## Registering a user
You will need at least one user on your server in order to use a Matrix
client. Users can be registered either via a Matrix client, or via a
commandline script.
The easiest way to create a new user is to do so from a client like [Riot](https://riot.im).
To get started, it is easiest to use the command line to register new
users. This can be done as follows:
Alternatively you can do so from the command line if you have installed via pip.
This can be done as follows:
```
$ source ~/synapse/env/bin/activate


@@ -33,6 +33,7 @@ exclude Dockerfile
exclude .dockerignore
exclude test_postgresql.sh
exclude .editorconfig
exclude sytest-blacklist
include pyproject.toml
recursive-include changelog.d *


@@ -272,7 +272,7 @@ to install using pip and a virtualenv::
virtualenv -p python3 env
source env/bin/activate
python -m pip install --no-pep-517 -e .[all]
python -m pip install --no-use-pep517 -e .[all]
This will run a process of downloading and installing all the needed
dependencies into a virtual env.


@@ -49,6 +49,13 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v1.2.0
===================
Some counter metrics have been renamed, with the old names deprecated. See
`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.
Upgrading to v1.1.0
===================

changelog.d/5633.bugfix

@@ -0,0 +1 @@
Don't create broken room when power_level_content_override.users does not contain creator_id.

changelog.d/5844.misc

@@ -0,0 +1 @@
Retry well-known lookup before the cache expires, giving a grace period where the remote well-known can be down but we still use the old result.

changelog.d/5856.feature

@@ -0,0 +1 @@
Add a tag recording a request's authenticated entity and corresponding servlet in opentracing.

changelog.d/5857.bugfix

@@ -0,0 +1 @@
Fix database index so that different backup versions can have the same sessions.

changelog.d/5860.misc

@@ -0,0 +1 @@
Remove log line for debugging issue #5407.

changelog.d/5863.bugfix

@@ -0,0 +1 @@
Fix Synapse looking for config options `password_reset_failure_template` and `password_reset_success_template`, when they are actually `password_reset_template_failure_html`, `password_reset_template_success_html`.

changelog.d/5878.feature

@@ -0,0 +1 @@
Add admin API endpoint for setting whether or not a user is a server administrator.


@@ -1,7 +1,7 @@
# Example log_config file for synapse. To enable, point `log_config` to it in
# Example log_config file for synapse. To enable, point `log_config` to it in
# `homeserver.yaml`, and restart synapse.
#
# This configuration will produce similar results to the defaults within
# This configuration will produce similar results to the defaults within
# synapse, but can be edited to give more flexibility.
version: 1
@@ -12,7 +12,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
@@ -35,7 +35,7 @@ handlers:
root:
level: INFO
handlers: [console] # to use file handler instead, switch to [file]
loggers:
synapse:
level: INFO


@@ -36,7 +36,7 @@ from synapse.util import origin_from_ucid
from synapse.app.homeserver import SynapseHomeServer
# from synapse.util.logutils import log_function
# from synapse.logging.utils import log_function
from twisted.internet import reactor, defer
from twisted.python import log


@@ -51,4 +51,4 @@ TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id
# finally start pruning media:
###############################################################################
set -x # for debugging the generated string
curl --header "Authorization: Bearer $TOKEN" -v POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
curl --header "Authorization: Bearer $TOKEN" -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"


@@ -4,7 +4,8 @@ After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse


@@ -2,7 +2,8 @@
Description=Synapse Matrix Homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse


@@ -8,7 +8,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:


@@ -14,7 +14,9 @@
Description=Synapse Matrix homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-abort
User=synapse

debian/changelog

@@ -1,3 +1,32 @@
matrix-synapse-py3 (1.3.1) stable; urgency=medium
* New synapse release 1.3.1.
-- Synapse Packaging team <packages@matrix.org> Sat, 17 Aug 2019 09:15:49 +0100
matrix-synapse-py3 (1.3.0) stable; urgency=medium
[ Andrew Morgan ]
* Remove libsqlite3-dev from required build dependencies.
[ Synapse Packaging team ]
* New synapse release 1.3.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 15 Aug 2019 12:04:23 +0100
matrix-synapse-py3 (1.2.0) stable; urgency=medium
[ Amber Brown ]
* Update logging config defaults to match API changes in Synapse.
[ Richard van der Hoff ]
* Add Recommends and Depends for some libraries which you probably want.
[ Synapse Packaging team ]
* New synapse release 1.2.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Jul 2019 14:10:07 +0100
matrix-synapse-py3 (1.1.0) stable; urgency=medium
[ Silke Hofstra ]

debian/control

@@ -2,10 +2,13 @@ Source: matrix-synapse-py3
Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
Build-Depends:
debhelper (>= 9),
dh-systemd,
dh-virtualenv (>= 1.1),
libsystemd-dev,
libpq-dev,
lsb-release,
python3-dev,
python3,
@@ -28,9 +31,12 @@ Depends:
debconf,
python3-distutils|libpython3-stdlib (<< 3.6),
${misc:Depends},
${shlibs:Depends},
${synapse:pydepends},
# some of our scripts use perl, but none of them are important,
# so we put perl:Depends in Suggests rather than Depends.
Recommends:
${shlibs1:Recommends},
Suggests:
sqlite3,
${perl:Depends},

debian/log.yaml

@@ -7,7 +7,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:

debian/rules

@@ -3,15 +3,29 @@
# Build Debian package using https://github.com/spotify/dh-virtualenv
#
# assume we only have one package
PACKAGE_NAME:=`dh_listpackages`
override_dh_systemd_enable:
dh_systemd_enable --name=matrix-synapse
override_dh_installinit:
dh_installinit --name=matrix-synapse
# we don't really want to strip the symbols from our object files.
override_dh_strip:
override_dh_shlibdeps:
# make the postgres package's dependencies a recommendation
# rather than a hard dependency.
find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
-pshlibs1 -dRecommends
# all the other dependencies can be normal 'Depends' requirements,
# except for PIL's, which is self-contained and which confuses
# dpkg-shlibdeps.
dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2
override_dh_virtualenv:
./debian/build_virtualenv


@@ -29,7 +29,7 @@ for port in 8080 8081 8082; do
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
@@ -43,7 +43,7 @@ for port in 8080 8081 8082; do
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
@@ -68,7 +68,7 @@ for port in 8080 8081 8082; do
# Generate tls keys
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
@@ -120,7 +120,6 @@ for port in 8080 8081 8082; do
python3 -m synapse.app.homeserver \
--config-path "$DIR/etc/$port.config" \
-D \
-vv \
popd
done


@@ -16,7 +16,7 @@ ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder
# install the OS build deps
@@ -55,7 +55,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10
# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \


@@ -42,7 +42,15 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
###
FROM ${distro}
# Get the distro we want to pull from as a dynamic build variable
# (We need to define it in each build stage)
ARG distro=""
ENV distro ${distro}
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \


@@ -4,7 +4,8 @@
set -ex
DIST=`lsb_release -c -s`
# Get the codename from distro env
DIST=`cut -d ':' -f2 <<< $distro`
# we get a read-only copy of the source: make a writeable copy
cp -aT /synapse/source /synapse/build


@@ -2,11 +2,11 @@ version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:


@@ -84,3 +84,23 @@ with a body of:
}
including an ``access_token`` of a server admin.
Change whether a user is a server administrator or not
======================================================
Note that you cannot demote yourself.
The API is::
PUT /_synapse/admin/v1/users/<user_id>/admin
with a body of:
.. code:: json
{
"admin": true
}
including an ``access_token`` of a server admin.
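As a usage sketch, the call can be made from Python with ``requests`` (the homeserver URL, token and user id below are placeholders):

.. code:: python

import requests

homeserver = "https://matrix.example.com"      # placeholder
admin_token = "<server admin access token>"    # placeholder
user_id = "@alice:example.com"                 # placeholder

resp = requests.put(
    f"{homeserver}/_synapse/admin/v1/users/{user_id}/admin",
    headers={"Authorization": f"Bearer {admin_token}"},
    json={"admin": True},
)
resp.raise_for_status()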


@@ -1,4 +1,8 @@
# Code Style
Code Style
==========
Formatting tools
----------------
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
@@ -6,20 +10,20 @@ in code.
The necessary tools are detailed below.
## Formatting tools
- **black**
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all committed code is properly
formatted.
The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
opinionated code formatter, ensuring all committed code is properly
formatted.
First install ``black`` with::
First install ``black`` with::
pip install --upgrade black
pip install --upgrade black
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
Have ``black`` auto-format your code (it shouldn't change any functionality)
with::
black . --exclude="\.tox|build|env"
black . --exclude="\.tox|build|env"
- **flake8**
@@ -54,17 +58,16 @@ functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
General rules
-------------
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
- **Docstrings**: should follow the `google code style
<https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
@@ -73,6 +76,8 @@ takes a while and is very resource intensive.
- **Imports**:
- Imports should be sorted by ``isort`` as described above.
- Prefer to import classes and functions rather than packages or modules.
Example::
@@ -92,25 +97,84 @@ takes a while and is very resource intensive.
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
Configuration file format
-------------------------
The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
Synapse's configuration options for server administrators. Remember that many
readers will be unfamiliar with YAML and server administration in general, so
it is important that the file be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
* Sections should be separated with a heading consisting of a single line
prefixed and suffixed with ``##``. There should be **two** blank lines
before the section header, and **one** after.
* Each option should be listed in the file with the following format:
* A comment describing the setting. Each line of this comment should be
prefixed with a hash (``#``) and a space.
The comment should describe the default behaviour (ie, what happens if
the setting is omitted), as well as what the effect will be if the
setting is changed.
Often, the comment ends with something like "uncomment the
following to \<do action>".
* A line consisting of only ``#``.
* A commented-out example setting, prefixed with only ``#``.
For boolean (on/off) options, the convention is that this example should be
the *opposite* of the default (so the comment will end with "Uncomment
the following to enable [or disable] \<feature\>."). For other options,
the example should give some non-default value which is likely to be
useful to the reader.
* There should be a blank line between each option.
* Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in comment
lines starting ``# #``, as this is hard to read and confusing to
edit. Instead, leave the top-level config option uncommented, and follow
the conventions above for sub-options. Ensure that your code correctly
handles the top-level option being set to ``None`` (as it will be if no
sub-options are enabled).
* Lines should be wrapped at 80 characters.
Example::
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobnicator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code and is
maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
that the output from this script matches the desired format is left as an
exercise for the reader!


@@ -1,4 +1,4 @@
Log contexts
Log Contexts
============
.. contents::
@@ -12,7 +12,7 @@ record.
Logcontexts are also used for CPU and database accounting, so that we can track
which requests were responsible for high CPU use or database activity.
The ``synapse.util.logcontext`` module provides facilities for managing the
The ``synapse.logging.context`` module provides facilities for managing the
current log context (as well as providing the ``LoggingContextFilter`` class).
Deferreds make the whole thing complicated, so this document describes how it
@@ -27,19 +27,19 @@ found them:
.. code:: python
from synapse.util import logcontext # omitted from future snippets
from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = logcontext.LoggingContext()
request_context = context.LoggingContext()
calling_context = logcontext.LoggingContext.current_context()
logcontext.LoggingContext.set_current_context(request_context)
calling_context = context.LoggingContext.current_context()
context.LoggingContext.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
logcontext.LoggingContext.set_current_context(calling_context)
context.LoggingContext.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id
@@ -51,7 +51,7 @@ written much more succinctly as:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
@@ -74,7 +74,7 @@ blocking operation, and returns a deferred:
@defer.inlineCallbacks
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
yield do_request_handling()
logger.debug("finished")
@@ -148,7 +148,7 @@ call any other functions.
d = more_stuff()
result = yield d # also fine, of course
defer.returnValue(result)
return result
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")
@@ -179,7 +179,7 @@ though, we need to make up a new Deferred, or we get a Deferred back from
external code. We need to make it follow our rules.
The easy way to do it is with a combination of ``defer.inlineCallbacks``, and
``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
``context.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
which returns a deferred which will run its callbacks after a given number of
seconds. That might look like:
@@ -204,13 +204,13 @@ That doesn't follow the rules, but we can fix it by wrapping it with
This technique works equally for external functions which return deferreds,
or deferreds we have made ourselves.
You can also use ``logcontext.make_deferred_yieldable``, which just does the
You can also use ``context.make_deferred_yieldable``, which just does the
boilerplate for you, so the above could be written:
.. code:: python
def sleep(seconds):
return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds))
return context.make_deferred_yieldable(get_sleep_deferred(seconds))
Fire-and-forget
@@ -279,7 +279,7 @@ Obviously that option means that the operations done in
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.run_in_background``, which wraps a
The second option is to use ``context.run_in_background``, which wraps a
function so that it doesn't reset the logcontext even when it returns an
incomplete deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
@@ -293,7 +293,7 @@ It can be used like this:
def do_request_handling():
yield foreground_operation()
logcontext.run_in_background(background_operation)
context.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
@@ -332,7 +332,7 @@ gathered:
result = yield defer.gatherResults([d1, d2])
In this case particularly, though, option two, of using
``logcontext.preserve_fn`` almost certainly makes more sense, so that
``context.preserve_fn`` almost certainly makes more sense, so that
``operation1`` and ``operation2`` are both logged against the original
logcontext. This looks like:
@@ -340,8 +340,8 @@ logcontext. This looks like:
@defer.inlineCallbacks
def do_request_handling():
d1 = logcontext.preserve_fn(operation1)()
d2 = logcontext.preserve_fn(operation2)()
d1 = context.preserve_fn(operation1)()
d2 = context.preserve_fn(operation2)()
with PreserveLoggingContext():
result = yield defer.gatherResults([d1, d2])
@@ -381,7 +381,7 @@ off the background process, and then leave the ``with`` block to wait for it:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
d = do_request_handling()
@@ -414,7 +414,7 @@ runs its callbacks in the original logcontext, all is happy.
The business of a Deferred which runs its callbacks in the original logcontext
isn't hard to achieve — we have it today, in the shape of
``logcontext._PreservingContextDeferred``:
``context._PreservingContextDeferred``:
.. code:: python


@@ -59,6 +59,108 @@ How to monitor Synapse metrics using Prometheus
Restart Prometheus.
Renaming of metrics & deprecation of old names in 1.2
-----------------------------------------------------
Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
upstream ``prometheus_client``. The old names are considered deprecated and will
be removed in a future version of Synapse.
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| New Name | Old Name |
+=============================================================================+=======================================================================+
| python_gc_objects_collected_total | python_gc_objects_collected |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_collections_total | python_gc_collections |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| process_cpu_seconds_total | process_cpu_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_start_count_total | synapse_background_process_start_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
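
Dashboards and alerts which still use the old names can be kept working during
the transition by aliasing the new series with Prometheus recording rules. A
sketch of such a rule file (the group name is arbitrary, and only two of the
renamed metrics are shown)::

    groups:
    - name: synapse_legacy_metric_names
      rules:
      - record: synapse_federation_client_sent_transactions
        expr: synapse_federation_client_sent_transactions_total
      - record: synapse_storage_events_persisted_events
        expr: synapse_storage_events_persisted_events_total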
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------

docs/opentracing.rst Normal file

@@ -0,0 +1,100 @@
===========
OpenTracing
===========
Background
----------
OpenTracing is a semi-standard being adopted by a number of distributed tracing
platforms. It is a common API for facilitating vendor-agnostic tracing
instrumentation. That is, we can use the OpenTracing API and select one of a
number of tracer implementations to do the heavy lifting in the background.
Our current selected implementation is Jaeger.
OpenTracing is a tool which gives an insight into the causal relationship of
work done in and between servers. The servers each track events and report them
to a centralised server - in Synapse's case: Jaeger. The basic unit used to
represent events is the span. The span roughly represents a single piece of work
that was done and the time at which it occurred. A span can have child spans,
meaning that the work of the child had to be completed for the parent span to
complete, or it can have follow-on spans, which represent work that is
undertaken as a result of the parent but is not depended on by the parent in
order to finish.
Since this is undertaken in a distributed environment, a request to another
server, such as an RPC or a simple GET, can be considered a span (a unit of
work) for the local server. This causal link is what OpenTracing aims to
capture and visualise. In order to do this, metadata about the local server's
span, i.e. the 'span context', needs to be included with the request to the
remote.
It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.
For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.
For more information about Jaeger's implementation see
https://www.jaegertracing.io/docs/
======================
Setting up OpenTracing
======================
To receive OpenTracing spans, start up a Jaeger server. This can be done
using docker like so:
.. code-block:: bash
docker run -d --name jaeger \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
jaegertracing/all-in-one:1.13
Latest documentation is probably at
https://www.jaegertracing.io/docs/1.13/getting-started/
Enable OpenTracing in Synapse
-----------------------------
OpenTracing is not enabled by default. It must be enabled in the homeserver
config by uncommenting the config options under ``opentracing`` as shown in
the `sample config <./sample_config.yaml>`_. For example:
.. code-block:: yaml
opentracing:
tracer_enabled: true
homeserver_whitelist:
- "mytrustedhomeserver.org"
- "*.myotherhomeservers.com"
Homeserver whitelisting
-----------------------
The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
span contexts to another homeserver.
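
In outline, the check amounts to testing the remote server's name against each
configured pattern; a minimal sketch (the function and variable names here are
illustrative, not Synapse's actual implementation):

.. code-block:: python

    import re

    def whitelisted_homeserver(server_name, whitelist):
        # The union of the patterns: any single match admits the server.
        return any(re.match(pattern, server_name) for pattern in whitelist)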
Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead to
two problems, namely:

- If the span context is marked as sampled by the sending homeserver, the
  receiver will sample it. Therefore two homeservers with wildly different
  sampling policies could incur higher sampling counts than intended.

- Sending servers can attach arbitrary data to spans, known as 'baggage'.
  For safety this has been disabled in Synapse, but that doesn't prevent
  another server from sending you baggage, which will be logged in
  OpenTracing's logs.
==================
Configuring Jaeger
==================
Sampling strategies can be set as in this document:
https://www.jaegertracing.io/docs/1.13/sampling/
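
For instance, to sample a fixed fraction of traces rather than all of them, the
``sampler`` section of ``jaeger_config`` could be set as follows (an
illustrative snippet using Jaeger's documented ``probabilistic`` sampler type):

.. code-block:: yaml

    opentracing:
      jaeger_config:
        sampler:
          type: probabilistic
          param: 0.1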


@@ -11,7 +11,9 @@ a postgres database.
* If you are using the `matrix.org debian/ubuntu
packages <../INSTALL.md#matrixorg-packages>`_,
the necessary libraries will already be installed.
the necessary python library will already be installed, but you will need to
ensure the low-level postgres library is installed, which you can do with
``apt install libpq5``.
* For other pre-built packages, please consult the documentation from the
relevant package.
@@ -34,9 +36,14 @@ Assuming your PostgreSQL database user is called ``postgres``, create a user
su - postgres
createuser --pwprompt synapse_user
The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
Before you can authenticate with the ``synapse_user``, you must create a
database that it can access. To create a database, first connect to the database
with your database user::
su - postgres
psql
and then run::
CREATE DATABASE synapse
ENCODING 'UTF8'
@@ -46,7 +53,13 @@ encoding use, e.g.::
OWNER synapse_user;
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
``synapse_user`` user (which must already have been created as above).
Note that the PostgreSQL database *must* have the correct encoding set (as
shown above), otherwise it will not be able to store UTF8 strings.
You may need to enable password authentication so ``synapse_user`` can connect
to the database. See https://www.postgresql.org/docs/11/auth-pg-hba-conf.html.
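
For example, a ``pg_hba.conf`` entry allowing password-authenticated local
connections for that user might look like the following (a sketch only; adjust
the address range and method to your deployment)::

    # TYPE  DATABASE  USER          ADDRESS        METHOD
    host    synapse   synapse_user  127.0.0.1/32   md5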
Tuning Postgres
===============


@@ -48,6 +48,8 @@ Let's assume that we expect clients to connect to our server at
proxy_set_header X-Forwarded-For $remote_addr;
}
}
Do not add a `/` after the port in `proxy_pass`, otherwise nginx will canonicalise/normalise the URI.
* Caddy::


@@ -278,6 +278,23 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, the join will be disallowed, or the
# room instantly left.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
@@ -548,6 +565,13 @@ log_config: "CONFDIR/SERVERNAME.log.config"
## Media Store ##
# Enable the media store service in the Synapse master. Uncomment the
# following if you are using a separate media store worker.
#
#enable_media_repo: false
# Directory where uploaded images and attachments are stored.
#
media_store_path: "DATADIR/media_store"
@@ -785,6 +809,27 @@ uploads_path: "DATADIR/uploads"
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
@@ -914,10 +959,6 @@ uploads_path: "DATADIR/uploads"
#
# macaroon_secret_key: <PRIVATE STRING>
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
@@ -1395,3 +1436,43 @@ password_config:
# module: "my_custom_project.SuperRulesSet"
# config:
# example_option: 'things'
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers to which we will send, and from which we will
# accept, span contexts and span baggage.
# See docs/opentracing.rst.
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration is mostly related to trace sampling, which
# is documented here:
# https://www.jaegertracing.io/docs/1.13/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# Logging whether spans were started and reported
#
# logging:
# false


@@ -206,6 +206,13 @@ Handles the media repository. It can handle all endpoints starting with::
/_matrix/media/
And the following regular expressions matching media-specific administration
APIs::
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
^/_synapse/admin/v1/quarantine_media/.*$
You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
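
A matching worker configuration might then look something like the following
(a sketch; the port is arbitrary)::

    worker_app: synapse.app.media_repository

    worker_listeners:
      - type: http
        port: 8085
        resources:
          - names:
              - media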


@@ -14,6 +14,11 @@
name = "Bugfixes"
showcontent = true
[[tool.towncrier.type]]
directory = "docker"
name = "Updates to the Docker image"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
@@ -39,6 +44,8 @@ exclude = '''
| \.git # root of the project
| \.tox
| \.venv
| \.env
| env
| _build
| _trial_temp.*
| build

scripts-dev/lint.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/sh
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements
# flake8 - lints and finds mistakes
# black - opinionated code formatter
set -e
isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts
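# Usage: assuming the three tools are installed (e.g. via
# `pip install isort flake8 black`), run from the root of a checkout:
#
#   ./scripts-dev/lint.sh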


@@ -35,4 +35,4 @@ try:
except ImportError:
pass
__version__ = "1.1.0"
__version__ = "1.3.1"


@@ -22,10 +22,17 @@ from netaddr import IPAddress
from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
ResourceLimitError,
)
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
@@ -63,7 +70,6 @@ class Auth(object):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("cache", "token_cache", self.token_cache)
@@ -123,7 +129,7 @@ class Auth(object):
)
self._check_joined_room(member, user_id, room_id)
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_user_was_in_room(self, room_id, user_id):
@@ -151,13 +157,13 @@ class Auth(object):
if forgot:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.is_host_joined(room_id, host)
defer.returnValue(latest_event_ids)
return latest_event_ids
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
@@ -173,6 +179,7 @@ class Auth(object):
def get_public_keys(self, invite_event):
return event_auth.get_public_keys(invite_event)
@opentracing.trace
@defer.inlineCallbacks
def get_user_by_req(
self, request, allow_guest=False, rights="access", allow_expired=False
@@ -189,22 +196,22 @@ class Auth(object):
Returns:
defer.Deferred: resolves to a ``synapse.types.Requester`` object
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
# Can optionally look elsewhere in the request (e.g. headers)
try:
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
access_token = self.get_access_token_from_request(request)
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
opentracing.set_tag("authenticated_entity", user_id)
if ip_addr and self.hs.config.track_appservice_user_ips:
yield self.store.insert_client_ip(
@@ -215,9 +222,7 @@ class Auth(object):
device_id="dummy-device", # stubbed
)
defer.returnValue(
synapse.types.create_requester(user_id, app_service=app_service)
)
return synapse.types.create_requester(user_id, app_service=app_service)
user_info = yield self.get_user_by_access_token(access_token, rights)
user = user_info["user"]
@@ -257,46 +262,39 @@ class Auth(object):
)
request.authenticated_entity = user.to_string()
opentracing.set_tag("authenticated_entity", user.to_string())
defer.returnValue(
synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
return synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError()
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
self.get_access_token_from_request(request)
)
if app_service is None:
defer.returnValue((None, None))
return (None, None)
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
return (None, None)
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
return (app_service.sender, app_service)
user_id = request.args[b"user_id"][0].decode("utf8")
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
return (app_service.sender, app_service)
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
defer.returnValue((user_id, app_service))
return (user_id, app_service)
@defer.inlineCallbacks
def get_user_by_access_token(self, token, rights="access"):
@@ -313,14 +311,26 @@ class Auth(object):
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
"""
if rights == "access":
# first look in the database
r = yield self._look_up_user_by_access_token(token)
if r:
defer.returnValue(r)
valid_until_ms = r["valid_until_ms"]
if (
valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
# soft-logout the user.
raise InvalidClientTokenError(
msg="Access token has expired", soft_logout=True
)
return r
# otherwise it needs to be a valid macaroon
try:
@@ -331,11 +341,7 @@ class Auth(object):
if not guest:
# non-guest access tokens must be in the database
logger.warning("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError()
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -350,16 +356,10 @@ class Auth(object):
# guest tokens.
stored_user = yield self.store.get_user_by_id(user_id)
if not stored_user:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unknown user_id %s" % user_id,
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Guest access token used for regular user",
errcode=Codes.UNKNOWN_TOKEN,
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
ret = {
"user": user,
@@ -378,7 +378,7 @@ class Auth(object):
}
else:
raise RuntimeError("Unknown rights setting %s", rights)
defer.returnValue(ret)
return ret
except (
_InvalidMacaroonException,
pymacaroons.exceptions.MacaroonException,
@@ -386,11 +386,7 @@ class Auth(object):
ValueError,
) as e:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Invalid macaroon passed.")
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
@@ -418,25 +414,16 @@ class Auth(object):
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
if caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
)
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Invalid macaroon passed.")
if not has_expiry and rights == "access":
if rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
@@ -453,19 +440,16 @@ class Auth(object):
(str) user id
Raises:
AuthError if there is no user_id caveat in the macaroon
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
def validate_macaroon(self, macaroon, type_string, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
@@ -473,7 +457,6 @@ class Auth(object):
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token required (e.g. "access",
"delete_pusher")
verify_expiry(bool): Whether to verify whether the macaroon has expired.
user_id (str): The user_id required
"""
v = pymacaroons.Verifier()
@@ -486,19 +469,7 @@ class Auth(object):
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
# verify_expiry should really always be True, but there exist access
# tokens in the wild which expire when they should not, so we can't
# enforce expiry yet (so we have to allow any caveat starting with
# 'time < ' in access tokens).
#
# On the other hand, short-term login tokens (as used by CAS login, for
# example) have an expiry time which we do want to enforce.
if verify_expiry:
v.satisfy_general(self._verify_expiry)
else:
v.satisfy_general(lambda c: c.startswith("time < "))
v.satisfy_general(self._verify_expiry)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
@@ -517,7 +488,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
defer.returnValue(None)
return None
# we use ret.get() below because *lots* of unit tests stub out
# get_user_by_access_token in a way where it only returns a couple of
@@ -527,26 +498,18 @@ class Auth(object):
"token_id": ret.get("token_id", None),
"is_guest": False,
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
defer.returnValue(user_info)
return user_info
def get_appservice_by_req(self, request):
try:
token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
)
request.authenticated_entity = service.sender
return defer.succeed(service)
except KeyError:
raise AuthError(self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.")
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
@@ -562,7 +525,7 @@ class Auth(object):
@defer.inlineCallbacks
def compute_auth_events(self, event, current_state_ids, for_verification=False):
if event.type == EventTypes.Create:
defer.returnValue([])
return []
auth_ids = []
@@ -623,22 +586,7 @@ class Auth(object):
if member_event.content["membership"] == Membership.JOIN:
auth_ids.append(member_event.event_id)
defer.returnValue(auth_ids)
def check_redaction(self, room_version, event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
True if the sender is allowed to redact the target event if the
target event was created by them.
False if the sender is allowed to redact the target event with no
further checks.
Raises:
AuthError if the event sender is definitely not allowed to redact
the target event.
"""
return event_auth.check_redaction(room_version, event, auth_events)
return auth_ids
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
@@ -652,7 +600,7 @@ class Auth(object):
is_admin = yield self.is_server_admin(user)
if is_admin:
defer.returnValue(True)
return True
user_id = user.to_string()
yield self.check_joined_room(room_id, user_id)
@@ -692,20 +640,16 @@ class Auth(object):
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request, token_not_found_http_status=401):
def get_access_token_from_request(request):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
MissingClientTokenError: If there isn't a single access_token in the
request
"""
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
@@ -714,34 +658,20 @@ class Auth(object):
# Try the get the access_token from a "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
raise MissingClientTokenError(
"Mixing Authorization headers and access_token query parameters."
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Too many Authorization headers.")
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode("ascii")
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Invalid Authorization header.")
else:
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError()
return query_params[0].decode("ascii")
@@ -764,7 +694,7 @@ class Auth(object):
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
return (member_event.membership, member_event.event_id)
except AuthError:
visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
@@ -773,7 +703,7 @@ class Auth(object):
visibility
and visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return (Membership.JOIN, None)
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN


@@ -61,6 +61,7 @@ class Codes(object):
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
class CodeMessageException(RuntimeError):
@@ -139,6 +140,22 @@ class ConsentNotGivenError(SynapseError):
return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)
class UserDeactivatedError(SynapseError):
"""The error returned to the client when the user attempted to access an
authenticated endpoint, but the account has been deactivated.
"""
def __init__(self, msg):
"""Constructs a UserDeactivatedError
Args:
msg (str): The human-readable error message
"""
super(UserDeactivatedError, self).__init__(
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
@@ -210,7 +227,9 @@ class NotFoundError(SynapseError):
class AuthError(SynapseError):
"""An error raised when there was a problem authorising an event."""
"""An error raised when there was a problem authorising an event, and at various
other poorly-defined times.
"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
@@ -218,6 +237,41 @@ class AuthError(SynapseError):
super(AuthError, self).__init__(*args, **kwargs)
class InvalidClientCredentialsError(SynapseError):
"""An error raised when there was a problem with the authorisation credentials
in a client request.
https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens:
When credentials are required but missing or invalid, the HTTP call will
return with a status of 401 and the error code, M_MISSING_TOKEN or
M_UNKNOWN_TOKEN respectively.
"""
def __init__(self, msg, errcode):
super().__init__(code=401, msg=msg, errcode=errcode)
class MissingClientTokenError(InvalidClientCredentialsError):
"""Raised when we couldn't find the access token in a request"""
def __init__(self, msg="Missing access token"):
super().__init__(msg=msg, errcode="M_MISSING_TOKEN")
class InvalidClientTokenError(InvalidClientCredentialsError):
"""Raised when we didn't understand the access token in a request"""
def __init__(self, msg="Unrecognised access token", soft_logout=False):
super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
self._soft_logout = soft_logout
def error_dict(self):
d = super().error_dict()
d["soft_logout"] = self._soft_logout
return d
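# A quick illustration of the hierarchy above (a hypothetical snippet): the
# soft_logout flag ends up in the JSON error body alongside the usual fields.
#
#   err = InvalidClientTokenError(msg="Access token has expired", soft_logout=True)
#   body = err.error_dict()
#   # -> {"error": "Access token has expired", "errcode": "M_UNKNOWN_TOKEN",
#   #     "soft_logout": True}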
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.


@@ -132,7 +132,7 @@ class Filtering(object):
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = yield self.store.get_user_filter(user_localpart, filter_id)
defer.returnValue(FilterCollection(result))
return FilterCollection(result)
def add_user_filter(self, user_localpart, user_filter):
self.check_valid_filter(user_filter)


@@ -15,7 +15,9 @@
import gc
import logging
import os
import signal
import socket
import sys
import traceback
@@ -27,7 +29,7 @@ from twisted.protocols.tls import TLSMemoryBIOFactory
import synapse
from synapse.app import check_bind_error
from synapse.crypto import context_factory
from synapse.util import PreserveLoggingContext
from synapse.logging.context import PreserveLoggingContext
from synapse.util.async_helpers import Linearizer
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
@@ -48,7 +50,7 @@ def register_sighup(func):
_sighup_callbacks.append(func)
def start_worker_reactor(appname, config):
def start_worker_reactor(appname, config, run_command=reactor.run):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
@@ -57,6 +59,7 @@ def start_worker_reactor(appname, config):
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
run_command (Callable[]): callable that actually runs the reactor
"""
logger = logging.getLogger(config.worker_app)
@@ -69,11 +72,19 @@ def start_worker_reactor(appname, config):
daemonize=config.worker_daemonize,
print_pidfile=config.print_pidfile,
logger=logger,
run_command=run_command,
)
def start_reactor(
appname, soft_file_limit, gc_thresholds, pid_file, daemonize, print_pidfile, logger
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
print_pidfile,
logger,
run_command=reactor.run,
):
""" Run the reactor in the main process
@@ -88,38 +99,42 @@ def start_reactor(
daemonize (bool): true to run the reactor in a background process
print_pidfile (bool): whether to print the pid file, if daemonize is True
logger (logging.Logger): logger instance to pass to Daemonize
run_command (Callable[]): callable that actually runs the reactor
"""
install_dns_limiter(reactor)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
logger.info("Running")
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
run_command()
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
#
# We also need to drop the logcontext before forking if we're daemonizing,
# otherwise the cputime metrics get confused about the per-thread resource usage
# appearing to go backwards.
with PreserveLoggingContext():
if daemonize:
if print_pidfile:
print(pid_file)
if daemonize:
if print_pidfile:
print(pid_file)
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
def quit_with_error(error_string):
@@ -136,8 +151,7 @@ def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
from synapse.metrics import RegistryProxy, start_http_server
for host in bind_addresses:
logger.info("Starting metrics listener on %s:%d", host, port)
@@ -230,9 +244,15 @@ def start(hs, listeners=None):
if hasattr(signal, "SIGHUP"):
def handle_sighup(*args, **kwargs):
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sdnotify(b"RELOADING=1")
for i in _sighup_callbacks:
i(hs)
sdnotify(b"READY=1")
signal.signal(signal.SIGHUP, handle_sighup)
register_sighup(refresh_certificate)
@@ -240,11 +260,15 @@ def start(hs, listeners=None):
# Load the certificate from disk.
refresh_certificate(hs)
# Start the tracer
synapse.logging.opentracing.init_tracer(hs.config)
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().start_profiling()
setup_sentry(hs)
setup_sdnotify(hs)
except Exception:
traceback.print_exc(file=sys.stderr)
reactor = hs.get_reactor()
@@ -277,6 +301,21 @@ def setup_sentry(hs):
scope.set_tag("worker_name", name)
def setup_sdnotify(hs):
"""Adds process state hooks to tell systemd what we are up to.
"""
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
hs.get_reactor().addSystemEventTrigger(
"after", "startup", sdnotify, b"READY=1\nMAINPID=%i" % (os.getpid(),)
)
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", sdnotify, b"STOPPING=1"
)
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
"""Replaces the resolver with one that limits the number of in flight DNS
requests.
@@ -370,3 +409,35 @@ class _DeferredResolutionReceiver(object):
def resolutionComplete(self):
self._deferred.callback(())
self._receiver.resolutionComplete()
sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")
def sdnotify(state):
"""
Send a notification to systemd, if the NOTIFY_SOCKET env var is set.
This function is based on the sdnotify python package, but since it's only a few
lines of code, it's easier to duplicate it here than to add a dependency on a
package which many OSes don't include as a matter of principle.
Args:
state (bytes): notification to send
"""
if not isinstance(state, bytes):
raise TypeError("sdnotify should be called with a bytes")
if not sdnotify_sockaddr:
return
addr = sdnotify_sockaddr
if addr[0] == "@":
addr = "\0" + addr[1:]
try:
with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
sock.connect(addr)
sock.sendall(state)
except Exception as e:
# this is a bit surprising, since we don't expect to have a NOTIFY_SOCKET
# unless systemd is expecting us to notify it.
logger.warning("Unable to send notification to systemd: %s", e)

synapse/app/admin_cmd.py Normal file

@@ -0,0 +1,264 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import tempfile
from canonicaljson import json
from twisted.internet import defer, task
import synapse
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
pass
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
def _listen_http(self, listener_config):
pass
def start_listening(self, listeners):
pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):
"""Export data for a user.
Args:
hs (HomeServer)
args (argparse.Namespace)
"""
user_id = args.user_id
directory = args.output_directory
res = yield hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
print(res)
class FileExfiltrationWriter(ExfiltrationWriter):
"""An ExfiltrationWriter that writes the users data to a directory.
Returns the directory location on completion.
Note: This writes to disk on the main reactor thread.
Args:
user_id (str): The user whose data is being exfiltrated.
directory (str|None): The directory to write the data to, if None then
will write to a temporary directory.
"""
def __init__(self, user_id, directory=None):
self.user_id = user_id
if directory:
self.base_directory = directory
else:
self.base_directory = tempfile.mkdtemp(
prefix="synapse-exfiltrate__%s__" % (user_id,)
)
os.makedirs(self.base_directory, exist_ok=True)
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id, events):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
with open(events_file, "a") as f:
for event in events:
print(json.dumps(event.get_pdu_json()), file=f)
def write_state(self, room_id, event_id, state):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
state_directory = os.path.join(room_directory, "state")
os.makedirs(state_directory, exist_ok=True)
event_file = os.path.join(state_directory, event_id)
with open(event_file, "a") as f:
for event in state.values():
print(json.dumps(event.get_pdu_json()), file=f)
def write_invite(self, room_id, event, state):
self.write_events(room_id, [event])
# We write the invite state somewhere else, as its entries aren't full
# events and are only a subset of the state at the event.
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
invite_state = os.path.join(room_directory, "invite_state")
with open(invite_state, "a") as f:
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
return self.base_directory
def start(config_options):
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
subparser = parser.add_subparsers(
title="Admin Commands",
required=True,
dest="command",
metavar="<admin_command>",
help="The admin command to perform.",
)
export_data_parser = subparser.add_parser(
"export-data", help="Export all data for a user"
)
export_data_parser.add_argument("user_id", help="User to extra data from")
export_data_parser.add_argument(
"--output-directory",
action="store",
metavar="DIRECTORY",
required=False,
help="The directory to store the exported data in. Must be empty. Defaults"
" to creating a temp directory.",
)
export_data_parser.set_defaults(func=export_data_command)
try:
config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if config.worker_app is not None:
assert config.worker_app == "synapse.app.admin_cmd"
# Update the config with some basic overrides so that we don't have to
# specify a full worker config.
config.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
and not config.worker_log_file
and not config.worker_log_config
):
# Since we're meant to be run as a "command" let's not redirect stdio
# unless we've actually set log config.
config.no_redirect_stdio = True
# Explicitly disable background processes
config.update_user_directory = False
config.start_pushers = False
config.send_federation = False
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = AdminCmdServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
# We use task.react as the basic run command as it correctly handles tearing
# down the reactor when the deferreds resolve and setting the return value.
# We also make sure that `_base.start` gets run before we actually run the
# command.
@defer.inlineCallbacks
def run(_reactor):
with LoggingContext("command"):
yield _base.start(ss, [])
yield args.func(ss, args)
_base.start_worker_reactor(
"synapse-admin-cmd", config, run_command=lambda: task.react(run)
)
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])
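
For context, FileExfiltrationWriter above implements the ExfiltrationWriter callback interface (write_events, write_state, write_invite, finished) that export_user_data drives. A minimal sketch, hypothetical and not part of this changeset, of an alternative writer that tallies events per room instead of writing them to disk:

from synapse.handlers.admin import ExfiltrationWriter

class CountingExfiltrationWriter(ExfiltrationWriter):
    """Tallies exported events per room rather than persisting them."""

    def __init__(self):
        self.counts = {}

    def write_events(self, room_id, events):
        # Events arrive in batches; accumulate a per-room total.
        self.counts[room_id] = self.counts.get(room_id, 0) + len(events)

    def write_state(self, room_id, event_id, state):
        # State at each event is ignored in this sketch.
        pass

    def write_invite(self, room_id, event, state):
        # Count the invite event itself; the stripped invite state is ignored.
        self.write_events(room_id, [event])

    def finished(self):
        # The return value becomes the result of export_user_data().
        return self.counts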


@@ -26,8 +26,8 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -36,7 +36,6 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -169,7 +168,9 @@ def start(config_options):
)
ps.setup()
reactor.callWhenRunning(_base.start, ps, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ps, config.worker_listeners
)
_base.start_worker_reactor("synapse-appservice", config)


@@ -27,8 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -64,7 +64,6 @@ from synapse.rest.client.versions import VersionsRestServlet
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -195,7 +194,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-client-reader", config)


@@ -27,8 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -59,7 +59,6 @@ from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -194,7 +193,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-event-creator", config)


@@ -28,8 +28,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -48,7 +48,6 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -176,7 +175,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-reader", config)


@@ -27,9 +27,9 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -44,7 +44,6 @@ from synapse.storage.engines import create_engine
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -199,7 +198,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-sender", config)


@@ -29,8 +29,8 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -41,7 +41,6 @@ from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -71,12 +70,12 @@ class PresenceStatusStubServlet(RestServlet):
except HttpResponseException as e:
raise e.to_synapse_error()
defer.returnValue((200, result))
return (200, result)
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
return (200, {})
class KeyUploadServlet(RestServlet):
@@ -127,11 +126,11 @@ class KeyUploadServlet(RestServlet):
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
defer.returnValue((200, result))
return (200, result)
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
return (200, {"one_time_key_counts": result})
class FrontendProxySlavedStore(
@@ -248,7 +247,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-frontend-proxy", config)

synapse/app/homeserver.py Executable file → Normal file

@@ -54,9 +54,9 @@ from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -72,7 +72,6 @@ from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
@@ -407,7 +406,7 @@ def setup(config_options):
if provision:
yield acme.provision_certificate()
defer.returnValue(provision)
return provision
@defer.inlineCallbacks
def reprovision_acme():


@@ -26,21 +26,22 @@ from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -72,6 +73,12 @@ class MediaRepositoryServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_PREFIX: media_repo,
@@ -79,6 +86,7 @@ class MediaRepositoryServer(HomeServer):
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
"/_synapse/admin": admin_resource,
}
)
@@ -162,7 +170,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-media-repository", config)


@@ -26,8 +26,8 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -38,7 +38,6 @@ from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -217,7 +216,7 @@ def start(config_options):
_base.start(ps, config.worker_listeners)
ps.get_pusherpool().start()
reactor.callWhenRunning(start)
reactor.addSystemEventTrigger("before", "startup", start)
_base.start_worker_reactor("synapse-pusher", config)


@@ -31,8 +31,8 @@ from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -57,7 +57,6 @@ from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
@@ -452,7 +451,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-synchrotron", config)


@@ -28,8 +28,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -46,7 +46,6 @@ from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -225,7 +224,9 @@ def start(config_options):
)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-user-dir", config)


@@ -175,21 +175,21 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_user(self, event, store):
if not event:
defer.returnValue(False)
return False
if self.is_interested_in_user(event.sender):
defer.returnValue(True)
return True
# also check m.room.member state key
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
defer.returnValue(True)
return True
if not store:
defer.returnValue(False)
return False
does_match = yield self._matches_user_in_member_list(event.room_id, store)
defer.returnValue(does_match)
return does_match
@cachedInlineCallbacks(num_args=1, cache_context=True)
def _matches_user_in_member_list(self, room_id, store, cache_context):
@@ -200,8 +200,8 @@ class ApplicationService(object):
# check joined member events
for user_id in member_list:
if self.is_interested_in_user(user_id):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
def _matches_room_id(self, event):
if hasattr(event, "room_id"):
@@ -211,13 +211,13 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_aliases(self, event, store):
if not store or not event:
defer.returnValue(False)
return False
alias_list = yield store.get_aliases_for_room(event.room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
@defer.inlineCallbacks
def is_interested(self, event, store=None):
@@ -231,15 +231,15 @@ class ApplicationService(object):
"""
# Do cheap checks first
if self._matches_room_id(event):
defer.returnValue(True)
return True
if (yield self._matches_aliases(event, store)):
defer.returnValue(True)
return True
if (yield self._matches_user(event, store)):
defer.returnValue(True)
return True
defer.returnValue(False)
return False
def is_interested_in_user(self, user_id):
return (

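A pattern that recurs through this file and the rest of the changeset: on Python 3 a generator may return a value directly, so defer.returnValue(x) inside an @defer.inlineCallbacks function can be written as a plain return. A minimal equivalence sketch:

from twisted.internet import defer

@defer.inlineCallbacks
def old_style():
    yield defer.succeed(None)
    defer.returnValue(42)  # Python 2-compatible spelling

@defer.inlineCallbacks
def new_style():
    yield defer.succeed(None)
    return 42              # equivalent on Python 3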

@@ -97,40 +97,40 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def query_user(self, service, user_id):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
if e.code == 404:
defer.returnValue(False)
return False
return
logger.warning("query_user to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("query_user to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_alias(self, service, alias):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
logger.warning("query_alias to %s received %s", uri, e.code)
if e.code == 404:
defer.returnValue(False)
return False
return
except Exception as ex:
logger.warning("query_alias to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_3pe(self, service, kind, protocol, fields):
@@ -141,7 +141,7 @@ class ApplicationServiceApi(SimpleHttpClient):
else:
raise ValueError("Unrecognised 'kind' argument %r to query_3pe()", kind)
if service.url is None:
defer.returnValue([])
return []
uri = "%s%s/thirdparty/%s/%s" % (
service.url,
@@ -155,7 +155,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe to %s returned an invalid response %r", uri, response
)
defer.returnValue([])
return []
ret = []
for r in response:
@@ -166,14 +166,14 @@ class ApplicationServiceApi(SimpleHttpClient):
"query_3pe to %s returned an invalid result %r", uri, r
)
defer.returnValue(ret)
return ret
except Exception as ex:
logger.warning("query_3pe to %s threw exception %s", uri, ex)
defer.returnValue([])
return []
def get_3pe_protocol(self, service, protocol):
if service.url is None:
defer.returnValue({})
return {}
@defer.inlineCallbacks
def _get():
@@ -189,7 +189,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
)
defer.returnValue(None)
return None
for instance in info.get("instances", []):
network_id = instance.get("network_id", None)
@@ -198,10 +198,10 @@ class ApplicationServiceApi(SimpleHttpClient):
service.id, network_id
).to_string()
defer.returnValue(info)
return info
except Exception as ex:
logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex)
defer.returnValue(None)
return None
key = (service.id, protocol)
return self.protocol_meta_cache.wrap(key, _get)
@@ -209,7 +209,7 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
if service.url is None:
defer.returnValue(True)
return True
events = self._serialize(events)
@@ -229,14 +229,14 @@ class ApplicationServiceApi(SimpleHttpClient):
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return True
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
return False
def _serialize(self, events):
time_now = self.clock.time_msec()


@@ -53,8 +53,8 @@ import logging
from twisted.internet import defer
from synapse.appservice import ApplicationServiceState
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util.logcontext import run_in_background
logger = logging.getLogger(__name__)
@@ -193,7 +193,7 @@ class _TransactionController(object):
@defer.inlineCallbacks
def _is_service_up(self, service):
state = yield self.store.get_appservice_state(service)
defer.returnValue(state == ApplicationServiceState.UP or state is None)
return state == ApplicationServiceState.UP or state is None
class _Recoverer(object):
@@ -208,7 +208,7 @@ class _Recoverer(object):
r.service.id,
)
r.recover()
defer.returnValue(recoverers)
return recoverers
def __init__(self, clock, store, as_api, service, callback):
self.clock = clock


@@ -137,12 +137,42 @@ class Config(object):
return file_stream.read()
def invoke_all(self, name, *args, **kargs):
"""Invoke all instance methods with the given name and arguments in the
class's MRO.
Args:
name (str): Name of function to invoke
*args
**kwargs
Returns:
list: The list of the return values from each method called
"""
results = []
for cls in type(self).mro():
if name in cls.__dict__:
results.append(getattr(cls, name)(self, *args, **kargs))
return results
@classmethod
def invoke_all_static(cls, name, *args, **kargs):
"""Invoke all static methods with the given name and arguments in the
class's MRO.
Args:
name (str): Name of function to invoke
*args
**kwargs
Returns:
list: The list of the return values from each method called
"""
results = []
for c in cls.mro():
if name in c.__dict__:
results.append(getattr(c, name)(*args, **kargs))
return results
def generate_config(
self,
config_dir_path,
@@ -202,6 +232,23 @@ class Config(object):
Returns: Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
obj, _ = cls.load_config_with_parser(config_parser, argv)
return obj
@classmethod
def add_arguments_to_parser(cls, config_parser):
"""Adds all the config flags to an ArgumentParser.
Doesn't support config-file generation: used by the worker apps, where
we want to add extra flags/subcommands.
Args:
config_parser (ArgumentParser): The parser to add the config flags to
"""
config_parser.add_argument(
"-c",
"--config-path",
@@ -219,16 +266,34 @@ class Config(object):
" Defaults to the directory containing the last config file",
)
cls.invoke_all_static("add_arguments", config_parser)
@classmethod
def load_config_with_parser(cls, parser, argv):
"""Parse the commandline and config files with the given parser
Doesn't support config-file generation: used by the worker apps, where
we want to add extra flags/subcommands.
Args:
parser (ArgumentParser)
argv (list[str])
Returns:
tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
config object and the parsed argparse.Namespace object from
`parser.parse_args(..)`
"""
obj = cls()
obj.invoke_all("add_arguments", config_parser)
config_args = config_parser.parse_args(argv)
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
if not config_files:
config_parser.error("Must supply a config file.")
parser.error("Must supply a config file.")
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
@@ -244,7 +309,7 @@ class Config(object):
obj.invoke_all("read_arguments", config_args)
return obj
return obj, config_args
@classmethod
def load_or_generate_config(cls, description, argv):
@@ -401,7 +466,7 @@ class Config(object):
formatter_class=argparse.RawDescriptionHelpFormatter,
)
obj.invoke_all("add_arguments", parser)
obj.invoke_all_static("add_arguments", parser)
args = parser.parse_args(remaining_args)
config_dict = read_config_files(config_files)
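
To illustrate the MRO walk that invoke_all_static performs above, a toy standalone example (class and argument names are illustrative only): every class in the method resolution order that defines the named static method is invoked exactly once, subclasses first.

class Base:
    @staticmethod
    def add_arguments(collected):
        collected.append("base")

class Child(Base):
    @staticmethod
    def add_arguments(collected):
        collected.append("child")

def invoke_all_static(cls, name, *args):
    results = []
    for c in cls.mro():
        # Check __dict__ so each definition runs exactly once, even when
        # a subclass overrides it.
        if name in c.__dict__:
            results.append(getattr(c, name)(*args))
    return results

collected = []
invoke_all_static(Child, "add_arguments", collected)
print(collected)  # ['child', 'base']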


@@ -69,7 +69,8 @@ class DatabaseConfig(Config):
if database_path is not None:
self.database_config["args"]["database"] = database_path
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
db_group = parser.add_argument_group("database")
db_group.add_argument(
"-d",


@@ -112,13 +112,17 @@ class EmailConfig(Config):
missing = []
for k in required:
if k not in email_config:
missing.append(k)
missing.append("email." + k)
if config.get("public_baseurl") is None:
missing.append("public_base_url")
if len(missing) > 0:
raise RuntimeError(
"email.password_reset_behaviour is set to 'local' "
"but required keys are missing: %s"
% (", ".join(["email." + k for k in missing]),)
"Password resets emails are configured to be sent from "
"this homeserver due to a partial 'email' block. "
"However, the following required keys are missing: %s"
% (", ".join(missing),)
)
# Templates for password reset emails
@@ -128,21 +132,21 @@ class EmailConfig(Config):
self.email_password_reset_template_text = email_config.get(
"password_reset_template_text", "password_reset.txt"
)
self.email_password_reset_failure_template = email_config.get(
"password_reset_failure_template", "password_reset_failure.html"
self.email_password_reset_template_failure_html = email_config.get(
"password_reset_template_failure_html", "password_reset_failure.html"
)
# This template does not support any replaceable variables, so we will
# read it from the disk once during setup
email_password_reset_success_template = email_config.get(
"password_reset_success_template", "password_reset_success.html"
email_password_reset_template_success_html = email_config.get(
"password_reset_template_success_html", "password_reset_success.html"
)
# Check templates exist
for f in [
self.email_password_reset_template_html,
self.email_password_reset_template_text,
self.email_password_reset_failure_template,
email_password_reset_success_template,
self.email_password_reset_template_failure_html,
email_password_reset_template_success_html,
]:
p = os.path.join(self.email_template_dir, f)
if not os.path.isfile(p):
@@ -150,19 +154,12 @@ class EmailConfig(Config):
# Retrieve content of web templates
filepath = os.path.join(
self.email_template_dir, email_password_reset_success_template
self.email_template_dir, email_password_reset_template_success_html
)
self.email_password_reset_success_html_content = self.read_file(
self.email_password_reset_template_success_html_content = self.read_file(
filepath, "email.password_reset_template_success_html"
)
if config.get("public_baseurl") is None:
raise RuntimeError(
"email.password_reset_behaviour is set to 'local' but no "
"public_baseurl is set. This is necessary to generate password "
"reset links"
)
if self.email_enable_notifs:
required = [
"smtp_host",


@@ -40,6 +40,7 @@ from .spam_checker import SpamCheckerConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
from .tracer import TracerConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
@@ -75,5 +76,6 @@ class HomeServerConfig(
ServerNoticesConfig,
RoomDirectoryConfig,
ThirdPartyRulesConfig,
TracerConfig,
):
pass

View File

@@ -116,8 +116,6 @@ class KeyConfig(Config):
seed = bytes(self.signing_key[0])
self.macaroon_secret_key = hashlib.sha256(seed).digest()
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
@@ -144,10 +142,6 @@ class KeyConfig(Config):
#
%(macaroon_secret_key)s
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.


@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import logging.config
import os
@@ -24,7 +25,7 @@ from twisted.logger import STDLibLogObserver, globalLogBeginner
import synapse
from synapse.app import _base as appbase
from synapse.util.logcontext import LoggingContextFilter
from synapse.logging.context import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
@@ -40,7 +41,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
@@ -75,10 +76,8 @@ root:
class LoggingConfig(Config):
def read_config(self, config, **kwargs):
self.verbosity = config.get("verbose", 0)
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
log_config = os.path.join(config_dir_path, server_name + ".log.config")
@@ -94,37 +93,12 @@ class LoggingConfig(Config):
)
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
self.log_file = args.log_file
def add_arguments(cls, parser):
@staticmethod
def add_arguments(parser):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
"-v",
"--verbose",
dest="verbose",
action="count",
help="The verbosity level. Specify multiple times to increase "
"verbosity. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"-f",
"--log-file",
dest="log_file",
help="File to log to. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"--log-config",
dest="log_config",
default=None,
help="Python logging config file",
)
logging_group.add_argument(
"-n",
"--no-redirect-stdio",
@@ -152,58 +126,29 @@ def setup_logging(config, use_worker_options=False):
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use 'worker_log_config' and
'worker_log_file' options instead of 'log_config' and 'log_file'.
use_worker_options (bool): True to use the 'worker_log_config' option
instead of 'log_config'.
register_sighup (func | None): Function to call to register a
sighup handler.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
log_file = config.worker_log_file if use_worker_options else config.log_file
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
# We don't have a logfile, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
level = logging.DEBUG
if config.verbosity > 1:
level_for_storage = logging.DEBUG
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(level)
logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(*args):
pass
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
@@ -217,8 +162,7 @@ def setup_logging(config, use_worker_options=False):
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()
appbase.register_sighup(sighup)
appbase.register_sighup(sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
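
The surviving branch of setup_logging above reduces to a stream handler carrying the LoggingContextFilter, which is what keeps the %(request)s field in the format string populated. A condensed sketch of that wiring (format string abbreviated):

import logging

from synapse.logging.context import LoggingContextFilter

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(request)s - %(message)s")
)
# The filter injects the current logcontext's request id into each record.
handler.addFilter(LoggingContextFilter(request=""))
logging.getLogger("").addHandler(handler)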


@@ -23,7 +23,7 @@ class RateLimitConfig(object):
class FederationRateLimitConfig(object):
_items_and_default = {
"window_size": 10000,
"window_size": 1000,
"sleep_limit": 10,
"sleep_delay": 500,
"reject_limit": 50,
@@ -54,7 +54,7 @@ class RatelimitConfig(Config):
# Load the new-style federation config, if it exists. Otherwise, fall
# back to the old method.
if "federation_rc" in config:
if "rc_federation" in config:
self.rc_federation = FederationRateLimitConfig(**config["rc_federation"])
else:
self.rc_federation = FederationRateLimitConfig(


@@ -13,8 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from distutils.util import strtobool
import pkg_resources
from synapse.config._base import Config, ConfigError
from synapse.types import RoomAlias
from synapse.util.stringutils import random_string_with_symbols
@@ -41,8 +44,36 @@ class AccountValidityConfig(Config):
self.startup_job_max_delta = self.period * 10.0 / 100.0
if self.renew_by_email_enabled and "public_baseurl" not in synapse_config:
raise ConfigError("Can't send renewal emails without 'public_baseurl'")
if self.renew_by_email_enabled:
if "public_baseurl" not in synapse_config:
raise ConfigError("Can't send renewal emails without 'public_baseurl'")
template_dir = config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates")
if "account_renewed_html_path" in config:
file_path = os.path.join(template_dir, config["account_renewed_html_path"])
self.account_renewed_html_content = self.read_file(
file_path, "account_validity.account_renewed_html_path"
)
else:
self.account_renewed_html_content = (
"<html><body>Your account has been successfully renewed.</body><html>"
)
if "invalid_token_html_path" in config:
file_path = os.path.join(template_dir, config["invalid_token_html_path"])
self.invalid_token_html_content = self.read_file(
file_path, "account_validity.invalid_token_html_path"
)
else:
self.invalid_token_html_content = (
"<html><body>Invalid renewal token.</body><html>"
)
class RegistrationConfig(Config):
@@ -71,9 +102,8 @@ class RegistrationConfig(Config):
self.default_identity_server = config.get("default_identity_server")
self.allow_guest_access = config.get("allow_guest_access", False)
self.invite_3pid_guest = self.allow_guest_access and config.get(
"invite_3pid_guest", False
)
if config.get("invite_3pid_guest", False):
raise ConfigError("invite_3pid_guest is no longer supported")
self.auto_join_rooms = config.get("auto_join_rooms", [])
for room_alias in self.auto_join_rooms:
@@ -85,6 +115,11 @@ class RegistrationConfig(Config):
"disable_msisdn_registration", False
)
session_lifetime = config.get("session_lifetime")
if session_lifetime is not None:
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime
def generate_config_section(self, generate_secrets=False, **kwargs):
if generate_secrets:
registration_shared_secret = 'registration_shared_secret: "%s"' % (
@@ -141,6 +176,27 @@ class RegistrationConfig(Config):
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
@@ -222,7 +278,8 @@ class RegistrationConfig(Config):
% locals()
)
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
reg_group = parser.add_argument_group("registration")
reg_group.add_argument(
"--enable-registration",


@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from collections import namedtuple
@@ -87,6 +88,18 @@ def parse_thumbnail_requirements(thumbnail_sizes):
class ContentRepositoryConfig(Config):
def read_config(self, config, **kwargs):
# Only enable the media repo if either the media repo is enabled or the
# current worker app is the media repo.
if (
self.enable_media_repo is False
and config.get("worker_app") != "synapse.app.media_repository"
):
self.can_load_media_repo = False
return
else:
self.can_load_media_repo = True
self.max_upload_size = self.parse_size(config.get("max_upload_size", "10M"))
self.max_image_pixels = self.parse_size(config.get("max_image_pixels", "32M"))
self.max_spider_size = self.parse_size(config.get("max_spider_size", "10M"))
@@ -202,6 +215,13 @@ class ContentRepositoryConfig(Config):
return (
r"""
## Media Store ##
# Enable the media store service in the Synapse master. Uncomment the
# following if you are using a separate media store worker.
#
#enable_media_repo: false
# Directory where uploaded images and attachments are stored.
#
media_store_path: "%(media_store)s"


@@ -18,6 +18,7 @@
import logging
import os.path
import attr
from netaddr import IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
@@ -38,6 +39,12 @@ DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_ROOM_VERSION = "4"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
"Please speak to your server administrator, or upgrade your instance "
"to join this room."
)
class ServerConfig(Config):
def read_config(self, config, **kwargs):
@@ -136,7 +143,7 @@ class ServerConfig(Config):
# Whether to enable experimental MSC1849 (aka relations) support
self.experimental_msc1849_support_enabled = config.get(
"experimental_msc1849_support_enabled", False
"experimental_msc1849_support_enabled", True
)
# Options to control access by tracking MAU
@@ -247,6 +254,23 @@ class ServerConfig(Config):
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
@attr.s
class LimitRemoteRoomsConfig(object):
enabled = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
complexity = attr.ib(
validator=attr.validators.instance_of((int, float)), default=1.0
)
complexity_error = attr.ib(
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
self.limit_remote_rooms = LimitRemoteRoomsConfig(
**config.get("limit_remote_rooms", {})
)
bind_port = config.get("bind_port")
if bind_port:
if config.get("no_tls", False):
@@ -617,6 +641,23 @@ class ServerConfig(Config):
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, the join will be refused or the room
# left immediately.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
@@ -639,7 +680,8 @@ class ServerConfig(Config):
if args.print_pidfile is not None:
self.print_pidfile = args.print_pidfile
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
server_group = parser.add_argument_group("server")
server_group.add_argument(
"-D",

synapse/config/tracer.py Normal file

@@ -0,0 +1,81 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
class TracerConfig(Config):
def read_config(self, config, **kwargs):
opentracing_config = config.get("opentracing")
if opentracing_config is None:
opentracing_config = {}
self.opentracer_enabled = opentracing_config.get("enabled", False)
self.jaeger_config = opentracing_config.get(
"jaeger_config",
{"sampler": {"type": "const", "param": 1}, "logging": False},
)
if not self.opentracer_enabled:
return
# The tracer is enabled so sanitize the config
self.opentracer_whitelist = opentracing_config.get("homeserver_whitelist", [])
if not isinstance(self.opentracer_whitelist, list):
raise ConfigError("Tracer homeserver_whitelist config is malformed")
def generate_config_section(cls, **kwargs):
return """\
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers with which we wish to exchange span contexts and span baggage.
# See docs/opentracing.rst
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration mostly related to trace sampling which
# is documented here:
# https://www.jaegertracing.io/docs/1.13/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# Logging whether spans were started and reported
#
# logging:
# false
"""


@@ -31,7 +31,6 @@ class WorkerConfig(Config):
self.worker_listeners = config.get("worker_listeners", [])
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
# The host used to connect to the main synapse
@@ -78,9 +77,5 @@ class WorkerConfig(Config):
if args.daemonize is not None:
self.worker_daemonize = args.daemonize
if args.log_config is not None:
self.worker_log_config = args.log_config
if args.log_file is not None:
self.worker_log_file = args.log_file
if args.manhole is not None:
self.worker_manhole = args.worker_manhole


@@ -31,6 +31,7 @@ from twisted.internet.ssl import (
platformTrust,
)
from twisted.python.failure import Failure
from twisted.web.iweb import IPolicyForHTTPS
logger = logging.getLogger(__name__)
@@ -74,6 +75,7 @@ class ServerContextFactory(ContextFactory):
return self._context
@implementer(IPolicyForHTTPS)
class ClientTLSOptionsFactory(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers for federation.
@@ -146,6 +148,12 @@ class ClientTLSOptionsFactory(object):
f = Failure()
tls_protocol.failVerification(f)
def creatorForNetloc(self, hostname, port):
"""Implements the IPolicyForHTTPS interace so that this can be passed
directly to agents.
"""
return self.get_options(hostname)
@implementer(IOpenSSLClientConnectionCreator)
class SSLClientConnectionCreator(object):
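
The creatorForNetloc addition above is what lets ClientTLSOptionsFactory satisfy IPolicyForHTTPS, so it can be handed directly to a twisted.web Agent. A sketch, where tls_config stands in for a loaded Synapse TLS configuration object (an assumption, not shown in the diff):

from twisted.internet import reactor
from twisted.web.client import Agent

from synapse.crypto.context_factory import ClientTLSOptionsFactory

policy = ClientTLSOptionsFactory(tls_config)  # tls_config: assumed config object
agent = Agent(reactor, contextFactory=policy)
# Agent now calls policy.creatorForNetloc(hostname, port) per connection.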


@@ -44,15 +44,16 @@ from synapse.api.errors import (
RequestSendFailed,
SynapseError,
)
from synapse.storage.keys import FetchKeyResult
from synapse.util import logcontext, unwrapFirstError
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.logcontext import (
from synapse.logging.context import (
LoggingContext,
PreserveLoggingContext,
make_deferred_yieldable,
preserve_fn,
run_in_background,
)
from synapse.storage.keys import FetchKeyResult
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.metrics import Measure
from synapse.util.retryutils import NotRetryingDestination
@@ -140,7 +141,7 @@ class Keyring(object):
"""
req = VerifyJsonRequest(server_name, json_object, validity_time, request_name)
requests = (req,)
return logcontext.make_deferred_yieldable(self._verify_objects(requests)[0])
return make_deferred_yieldable(self._verify_objects(requests)[0])
def verify_json_objects_for_server(self, server_and_json):
"""Bulk verifies signatures of json objects, bulk fetching keys as
@@ -237,27 +238,9 @@ class Keyring(object):
"""
try:
# create a deferred for each server we're going to look up the keys
# for; we'll resolve them once we have completed our lookups.
# These will be passed into wait_for_previous_lookups to block
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred() for rq in verify_requests
}
ctx = LoggingContext.current_context()
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(server_to_deferred)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
#
# map from server name to a set of request ids
# map from server name to a set of outstanding request ids
server_to_request_ids = {}
for verify_request in verify_requests:
@@ -265,40 +248,61 @@ class Keyring(object):
request_id = id(verify_request)
server_to_request_ids.setdefault(server_name, set()).add(request_id)
def remove_deferreds(res, verify_request):
# Wait for any previous lookups to complete before proceeding.
yield self.wait_for_previous_lookups(server_to_request_ids.keys())
# take out a lock on each of the servers by sticking a Deferred in
# key_downloads
for server_name in server_to_request_ids.keys():
self.key_downloads[server_name] = defer.Deferred()
logger.debug("Got key lookup lock on %s", server_name)
# When we've finished fetching all the keys for a given server_name,
# drop the lock by resolving the deferred in key_downloads.
def drop_server_lock(server_name):
d = self.key_downloads.pop(server_name)
d.callback(None)
def lookup_done(res, verify_request):
server_name = verify_request.server_name
request_id = id(verify_request)
server_to_request_ids[server_name].discard(request_id)
if not server_to_request_ids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
server_requests = server_to_request_ids[server_name]
server_requests.remove(id(verify_request))
# if there are no more requests for this server, we can drop the lock.
if not server_requests:
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name)
# ... but not immediately, as that can cause stack explosions if
# we get a long queue of lookups.
self.clock.call_later(0, drop_server_lock, server_name)
return res
for verify_request in verify_requests:
verify_request.key_ready.addBoth(remove_deferreds, verify_request)
verify_request.key_ready.addBoth(lookup_done, verify_request)
# Actually start fetching keys.
self._get_server_verify_keys(verify_requests)
except Exception:
logger.exception("Error starting key lookups")
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_to_deferred):
def wait_for_previous_lookups(self, server_names):
"""Waits for any previous key lookups for the given servers to finish.
Args:
server_to_deferred (dict[str, Deferred]): server_name to deferred which gets
resolved once we've finished looking up keys for that server.
The Deferreds should be regular twisted ones which call their
callbacks with no logcontext.
server_names (Iterable[str]): list of servers which we want to look up
Returns: a Deferred which resolves once all key lookups for the given
servers have completed. Follows the synapse rules of logcontext
preservation.
Returns:
Deferred[None]: resolves once all key lookups for the given servers have
completed. Follows the synapse rules of logcontext preservation.
"""
loop_count = 1
while True:
wait_on = [
(server_name, self.key_downloads[server_name])
for server_name in server_to_deferred.keys()
for server_name in server_names
if server_name in self.key_downloads
]
if not wait_on:
@@ -313,19 +317,6 @@ class Keyring(object):
loop_count += 1
ctx = LoggingContext.current_context()
def rm(r, server_name_):
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name_)
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
logger.debug("Got key lookup lock on %s", server_name)
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
def _get_server_verify_keys(self, verify_requests):
"""Tries to find at least one key for each verify request
@@ -471,7 +462,7 @@ class StoreKeyFetcher(KeyFetcher):
keys = {}
for (server_name, key_id), key in res.items():
keys.setdefault(server_name, {})[key_id] = key
defer.returnValue(keys)
return keys
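Most hunks in this diff are the same mechanical migration: on Python 3 an `@defer.inlineCallbacks` generator can use a plain `return`, which Twisted treats exactly like `defer.returnValue`. A minimal sketch (`some_deferred` is a made-up stand-in):

from twisted.internet import defer

def some_deferred():
    # Stand-in for any Deferred-returning call.
    return defer.succeed(42)

@defer.inlineCallbacks
def old_style():
    result = yield some_deferred()
    defer.returnValue(result)  # the Python 2 spelling

@defer.inlineCallbacks
def new_style():
    result = yield some_deferred()
    return result  # identical meaning on Python 3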
class BaseV2KeyFetcher(object):
@@ -557,7 +548,7 @@ class BaseV2KeyFetcher(object):
signed_key_json_bytes = encode_canonical_json(signed_key_json)
yield logcontext.make_deferred_yieldable(
yield make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
@@ -575,7 +566,7 @@ class BaseV2KeyFetcher(object):
).addErrback(unwrapFirstError)
)
defer.returnValue(verify_keys)
return verify_keys
class PerspectivesKeyFetcher(BaseV2KeyFetcher):
@@ -597,7 +588,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
result = yield self.get_server_verify_key_v2_indirect(
keys_to_fetch, key_server
)
defer.returnValue(result)
return result
except KeyLookupError as e:
logger.warning(
"Key lookup failed from %r: %s", key_server.server_name, e
@@ -610,9 +601,9 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
str(e),
)
defer.returnValue({})
return {}
results = yield logcontext.make_deferred_yieldable(
results = yield make_deferred_yieldable(
defer.gatherResults(
[run_in_background(get_key, server) for server in self.key_servers],
consumeErrors=True,
@@ -624,7 +615,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
for server_name, keys in result.items():
union_of_keys.setdefault(server_name, {}).update(keys)
defer.returnValue(union_of_keys)
return union_of_keys
@defer.inlineCallbacks
def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
@@ -710,7 +701,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
perspective_name, time_now_ms, added_keys
)
defer.returnValue(keys)
return keys
def _validate_perspectives_response(self, key_server, response):
"""Optionally check the signature on the result of a /key/query request
@@ -852,7 +843,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
)
keys.update(response_keys)
defer.returnValue(keys)
return keys
@defer.inlineCallbacks


@@ -104,6 +104,17 @@ class _EventInternalMetadata(object):
"""
return getattr(self, "proactively_send", True)
def is_redacted(self):
"""Whether the event has been redacted.
This is used for efficiently checking whether an event has been
marked as redacted without needing to make another database call.
Returns:
bool
"""
return getattr(self, "redacted", False)
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.


@@ -144,15 +144,13 @@ class EventBuilder(object):
if self._origin_server_ts is not None:
event_dict["origin_server_ts"] = self._origin_server_ts
defer.returnValue(
create_local_event_from_event_dict(
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)
return create_local_event_from_event_dict(
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)


@@ -19,7 +19,7 @@ from frozendict import frozendict
from twisted.internet import defer
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.logging.context import make_deferred_yieldable, run_in_background
class EventContext(object):
@@ -133,19 +133,17 @@ class EventContext(object):
else:
prev_state_id = None
defer.returnValue(
{
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
)
return {
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
@staticmethod
def deserialize(store, input):
@@ -202,7 +200,7 @@ class EventContext(object):
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._current_state_ids)
return self._current_state_ids
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
@@ -222,7 +220,7 @@ class EventContext(object):
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._prev_state_ids)
return self._prev_state_ids
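The two getters above share a single `_fetching_state_deferred` so that concurrent callers trigger at most one state fetch. A rough, self-contained sketch of that idea (logcontext handling omitted; all names illustrative):

from twisted.internet import defer

class SharedFetch(object):
    def __init__(self, fetch):
        self._fetch = fetch        # function returning a Deferred
        self._fetching = None      # the single in-flight Deferred
        self._value = None

    @defer.inlineCallbacks
    def get(self):
        if self._fetching is None:
            # The first caller starts the fetch; later callers share it.
            self._fetching = self._do_fetch()
        yield self._fetching
        return self._value

    @defer.inlineCallbacks
    def _do_fetch(self):
        # The shared Deferred fires with None; the result lives on self,
        # so multiple waiters cannot consume each other's result.
        self._value = yield self._fetch()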
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.


@@ -51,7 +51,7 @@ class ThirdPartyEventRules(object):
defer.Deferred[bool]: True if the event should be allowed, False if not.
"""
if self.third_party_rules is None:
defer.returnValue(True)
return True
prev_state_ids = yield context.get_prev_state_ids(self.store)
@@ -61,7 +61,7 @@ class ThirdPartyEventRules(object):
state_events[key] = yield self.store.get_event(event_id, allow_none=True)
ret = yield self.third_party_rules.check_event_allowed(event, state_events)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def on_create_room(self, requester, config, is_requester_admin):
@@ -98,7 +98,7 @@ class ThirdPartyEventRules(object):
"""
if self.third_party_rules is None:
defer.returnValue(True)
return True
state_ids = yield self.store.get_filtered_current_state_ids(room_id)
room_state_events = yield self.store.get_events(state_ids.values())
@@ -110,4 +110,4 @@ class ThirdPartyEventRules(object):
ret = yield self.third_party_rules.check_threepid_can_be_invited(
medium, address, state_events
)
defer.returnValue(ret)
return ret
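The `third_party_rules` object consulted above is pluggable; a skeletal module that allows everything could look like the following (the two method names are the ones this file calls; returning plain booleans works because `yield` passes non-Deferred values straight through):

class AllowEverythingRules(object):
    def check_event_allowed(self, event, state_events):
        # state_events maps (event_type, state_key) -> event, as
        # assembled by the caller above.
        return True

    def check_threepid_can_be_invited(self, medium, address, state_events):
        return True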


@@ -52,10 +52,15 @@ def prune_event(event):
from . import event_type_from_format_version
return event_type_from_format_version(event.format_version)(
pruned_event = event_type_from_format_version(event.format_version)(
pruned_event_dict, event.internal_metadata.get_dict()
)
# Mark the event as redacted
pruned_event.internal_metadata.redacted = True
return pruned_event
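The redaction marker is just an attribute with a `getattr` default, so events that predate the change still report False. A runnable sketch of the pattern (class name illustrative, mirroring `_EventInternalMetadata`):

class _Meta(object):
    def is_redacted(self):
        # Default to False when the attribute was never set.
        return getattr(self, "redacted", False)

meta = _Meta()
assert meta.is_redacted() is False
meta.redacted = True  # what prune_event now does on the pruned copy
assert meta.is_redacted() is True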
def prune_event_dict(event_dict):
"""Redacts the event_dict in the same way as `prune_event`, except it
@@ -355,14 +360,17 @@ class EventClientSerializer(object):
"""
# To handle the case of presence events and the like
if not isinstance(event, EventBase):
defer.returnValue(event)
return event
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
# If MSC1849 is enabled then we need to look if thre are any relations
# we need to bundle in with the event
if self.experimental_msc1849_support_enabled and bundle_aggregations:
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event.
# Do not bundle relations if the event has been redacted
if not event.internal_metadata.is_redacted() and (
self.experimental_msc1849_support_enabled and bundle_aggregations
):
annotations = yield self.store.get_aggregation_groups_for_event(event_id)
references = yield self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
@@ -392,9 +400,13 @@ class EventClientSerializer(object):
serialized_event["content"].pop("m.relates_to", None)
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REPLACE] = {"event_id": edit.event_id}
r[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
defer.returnValue(serialized_event)
return serialized_event
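With the change above, a serialized event that has been edited carries the edit's timestamp and sender alongside its event ID. A sketch of the resulting structure (all values illustrative):

serialized_event = {
    "event_id": "$original:example.org",
    "content": {"msgtype": "m.text", "body": "original body"},
    "unsigned": {
        "m.relations": {
            "m.replace": {
                "event_id": "$edit:example.org",
                "origin_server_ts": 1565000000000,
                "sender": "@alice:example.org",
            }
        }
    },
}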
def serialize_events(self, events, time_now, **kwargs):
"""Serializes multiple events.


@@ -95,10 +95,10 @@ class EventValidator(object):
elif event.type == EventTypes.Topic:
self._ensure_strings(event.content, ["topic"])
self._ensure_state_event(event)
elif event.type == EventTypes.Name:
self._ensure_strings(event.content, ["name"])
self._ensure_state_event(event)
elif event.type == EventTypes.Member:
if "membership" not in event.content:
raise SynapseError(400, "Content has not membership key")
@@ -106,9 +106,25 @@ class EventValidator(object):
if event.content["membership"] not in Membership.LIST:
raise SynapseError(400, "Invalid membership key")
self._ensure_state_event(event)
elif event.type == EventTypes.Tombstone:
if "replacement_room" not in event.content:
raise SynapseError(400, "Content has no replacement_room key")
if event.content["replacement_room"] == event.room_id:
raise SynapseError(
400, "Tombstone cannot reference the room it was sent in"
)
self._ensure_state_event(event)
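For reference, content that passes the new tombstone checks: it must name a `replacement_room` different from the room the event is sent in, and the event must be a state event. A minimal sketch (room ID illustrative):

tombstone_content = {
    "body": "This room has been replaced",
    "replacement_room": "!replacement:example.org",  # must differ from event.room_id
}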
def _ensure_strings(self, d, keys):
for s in keys:
if s not in d:
raise SynapseError(400, "'%s' not in content" % (s,))
if not isinstance(d[s], string_types):
raise SynapseError(400, "'%s' not a string type" % (s,))
def _ensure_state_event(self, event):
if not event.is_state():
raise SynapseError(400, "'%s' must be state events" % (event.type,))


@@ -27,8 +27,14 @@ from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import event_type_from_format_version
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
LoggingContext,
PreserveLoggingContext,
make_deferred_yieldable,
preserve_fn,
)
from synapse.types import get_domain_from_id
from synapse.util import logcontext, unwrapFirstError
from synapse.util import unwrapFirstError
logger = logging.getLogger(__name__)
@@ -73,7 +79,7 @@ class FederationBase(object):
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield logcontext.make_deferred_yieldable(deferred)
res = yield make_deferred_yieldable(deferred)
except SynapseError:
res = None
@@ -100,22 +106,22 @@ class FederationBase(object):
"Failed to find copy of %s with valid signature", pdu.event_id
)
defer.returnValue(res)
return res
handle = logcontext.preserve_fn(handle_check_result)
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
valid_pdus = yield logcontext.make_deferred_yieldable(
valid_pdus = yield make_deferred_yieldable(
defer.gatherResults(deferreds2, consumeErrors=True)
).addErrback(unwrapFirstError)
if include_none:
defer.returnValue(valid_pdus)
return valid_pdus
else:
defer.returnValue([p for p in valid_pdus if p])
return [p for p in valid_pdus if p]
def _check_sigs_and_hash(self, room_version, pdu):
return logcontext.make_deferred_yieldable(
return make_deferred_yieldable(
self._check_sigs_and_hashes(room_version, [pdu])[0]
)
@@ -133,14 +139,14 @@ class FederationBase(object):
* returns a redacted version of the event (if the signature
matched but the hash did not)
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
The deferreds run their callbacks in the sentinel logcontext.
"""
deferreds = _check_sigs_on_pdus(self.keyring, room_version, pdus)
ctx = logcontext.LoggingContext.current_context()
ctx = LoggingContext.current_context()
def callback(_, pdu):
with logcontext.PreserveLoggingContext(ctx):
with PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
# let's try to distinguish between failures because the event was
# redacted (which are somewhat expected) vs actual ball-tampering
@@ -178,7 +184,7 @@ class FederationBase(object):
def errback(failure, pdu):
failure.trap(SynapseError)
with logcontext.PreserveLoggingContext(ctx):
with PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s: %s",
pdu.event_id,


@@ -39,10 +39,10 @@ from synapse.api.room_versions import (
)
from synapse.events import builder, room_version_to_event_format
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.util import logcontext, unwrapFirstError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.utils import log_function
from synapse.util import unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -207,13 +207,13 @@ class FederationClient(FederationBase):
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield logcontext.make_deferred_yieldable(
pdus[:] = yield make_deferred_yieldable(
defer.gatherResults(
self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True
).addErrback(unwrapFirstError)
)
defer.returnValue(pdus)
return pdus
@defer.inlineCallbacks
@log_function
@@ -245,7 +245,7 @@ class FederationClient(FederationBase):
ev = self._get_pdu_cache.get(event_id)
if ev:
defer.returnValue(ev)
return ev
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
@@ -307,7 +307,7 @@ class FederationClient(FederationBase):
if signed_pdu:
self._get_pdu_cache[event_id] = signed_pdu
defer.returnValue(signed_pdu)
return signed_pdu
@defer.inlineCallbacks
@log_function
@@ -355,7 +355,7 @@ class FederationClient(FederationBase):
auth_chain.sort(key=lambda e: e.depth)
defer.returnValue((pdus, auth_chain))
return (pdus, auth_chain)
except HttpResponseException as e:
if e.code == 400 or e.code == 404:
logger.info("Failed to use get_room_state_ids API, falling back")
@@ -404,7 +404,7 @@ class FederationClient(FederationBase):
signed_auth.sort(key=lambda e: e.depth)
defer.returnValue((signed_pdus, signed_auth))
return (signed_pdus, signed_auth)
@defer.inlineCallbacks
def get_events_from_store_or_dest(self, destination, room_id, event_ids):
@@ -429,7 +429,7 @@ class FederationClient(FederationBase):
missing_events.discard(k)
if not missing_events:
defer.returnValue((signed_events, failed_to_fetch))
return (signed_events, failed_to_fetch)
logger.debug(
"Fetching unknown state/auth events %s for room %s",
@@ -465,7 +465,7 @@ class FederationClient(FederationBase):
# We removed all events we successfully fetched from `batch`
failed_to_fetch.update(batch)
defer.returnValue((signed_events, failed_to_fetch))
return (signed_events, failed_to_fetch)
@defer.inlineCallbacks
@log_function
@@ -485,7 +485,7 @@ class FederationClient(FederationBase):
signed_auth.sort(key=lambda e: e.depth)
defer.returnValue(signed_auth)
return signed_auth
@defer.inlineCallbacks
def _try_destination_list(self, description, destinations, callback):
@@ -511,9 +511,8 @@ class FederationClient(FederationBase):
The [Deferred] result of callback, if it succeeds
Raises:
SynapseError if the chosen remote server returns a 300/400 code.
RuntimeError if no servers were reachable.
SynapseError if the chosen remote server returns a 300/400 code, or
no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
@@ -521,7 +520,7 @@ class FederationClient(FederationBase):
try:
res = yield callback(destination)
defer.returnValue(res)
return res
except InvalidResponseError as e:
logger.warn("Failed to %s via %s: %s", description, destination, e)
except HttpResponseException as e:
@@ -538,7 +537,7 @@ class FederationClient(FederationBase):
except Exception:
logger.warn("Failed to %s via %s", description, destination, exc_info=1)
raise RuntimeError("Failed to %s via any server" % (description,))
raise SynapseError(502, "Failed to %s via any server" % (description,))
def make_membership_event(
self, destinations, room_id, user_id, membership, content, params
@@ -615,7 +614,7 @@ class FederationClient(FederationBase):
event_dict=pdu_dict,
)
defer.returnValue((destination, ev, event_format))
return (destination, ev, event_format)
return self._try_destination_list(
"make_" + membership, destinations, send_request
@@ -728,13 +727,11 @@ class FederationClient(FederationBase):
check_authchain_validity(signed_auth)
defer.returnValue(
{
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
}
)
return {
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
}
return self._try_destination_list("send_join", destinations, send_request)
@@ -758,7 +755,7 @@ class FederationClient(FederationBase):
# FIXME: We should handle signature failures more gracefully.
defer.returnValue(pdu)
return pdu
@defer.inlineCallbacks
def _do_send_invite(self, destination, pdu, room_version):
@@ -786,7 +783,7 @@ class FederationClient(FederationBase):
"invite_room_state": pdu.unsigned.get("invite_room_state", []),
},
)
defer.returnValue(content)
return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()
@@ -821,7 +818,7 @@ class FederationClient(FederationBase):
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
defer.returnValue(content)
return content
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
@@ -856,7 +853,7 @@ class FederationClient(FederationBase):
)
logger.debug("Got content: %s", content)
defer.returnValue(None)
return None
return self._try_destination_list("send_leave", destinations, send_request)
@@ -917,7 +914,7 @@ class FederationClient(FederationBase):
"missing": content.get("missing", []),
}
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_missing_events(
@@ -974,7 +971,7 @@ class FederationClient(FederationBase):
# get_missing_events
signed_events = []
defer.returnValue(signed_events)
return signed_events
@defer.inlineCallbacks
def forward_third_party_invite(self, destinations, room_id, event_dict):
@@ -986,7 +983,7 @@ class FederationClient(FederationBase):
yield self.transport_layer.exchange_third_party_invite(
destination=destination, room_id=room_id, event_dict=event_dict
)
defer.returnValue(None)
return None
except CodeMessageException:
raise
except Exception as e:
@@ -995,3 +992,39 @@ class FederationClient(FederationBase):
)
raise RuntimeError("Failed to send to any server.")
@defer.inlineCallbacks
def get_room_complexity(self, destination, room_id):
"""
Fetch the complexity of a remote room from another server.
Args:
destination (str): The remote server
room_id (str): The room ID to ask about.
Returns:
Deferred[dict] or Deferred[None]: Dict contains the complexity
metric versions, while None means we could not fetch the complexity.
"""
try:
complexity = yield self.transport_layer.get_room_complexity(
destination=destination, room_id=room_id
)
defer.returnValue(complexity)
except CodeMessageException as e:
# We didn't manage to get it -- probably a 404. We are okay if other
# servers don't give it to us.
logger.debug(
"Failed to fetch room complexity via %s for %s, got a %d",
destination,
room_id,
e.code,
)
except Exception:
logger.exception(
"Failed to fetch room complexity via %s for %s", destination, room_id
)
# If we don't manage to find it, return None. It's not an error if a
# server doesn't give it to us.
defer.returnValue(None)
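Illustrative use of the new method from another `inlineCallbacks` function; `federation_client` is assumed to be a `FederationClient`, and the versioned-metric key shown is an example:

from twisted.internet import defer

@defer.inlineCallbacks
def check_complexity(federation_client, destination, room_id):
    complexity = yield federation_client.get_room_complexity(destination, room_id)
    if complexity is None:
        # The remote errored or does not expose the metric; not fatal.
        return None
    # The dict carries versioned metrics, e.g. {"v1": 97.2}.
    return complexity.get("v1")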


@@ -42,6 +42,8 @@ from synapse.federation.federation_base import FederationBase, event_from_pdu_js
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.http.endpoint import parse_server_name
from synapse.logging.context import nested_logging_context
from synapse.logging.utils import log_function
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
@@ -50,8 +52,6 @@ from synapse.types import get_domain_from_id
from synapse.util import glob_to_regex
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logcontext import nested_logging_context
from synapse.util.logutils import log_function
# when processing incoming transactions, we try to handle multiple rooms in
# parallel, up to this limit.
@@ -99,7 +99,7 @@ class FederationServer(FederationBase):
res = self._transaction_from_pdus(pdus).get_dict()
defer.returnValue((200, res))
return (200, res)
@defer.inlineCallbacks
@log_function
@@ -126,7 +126,7 @@ class FederationServer(FederationBase):
origin, transaction, request_time
)
defer.returnValue(result)
return result
@defer.inlineCallbacks
def _handle_incoming_transaction(self, origin, transaction, request_time):
@@ -147,8 +147,7 @@ class FederationServer(FederationBase):
"[%s] We've already responded to this request",
transaction.transaction_id,
)
defer.returnValue(response)
return
return response
logger.debug("[%s] Transaction is new", transaction.transaction_id)
@@ -163,7 +162,7 @@ class FederationServer(FederationBase):
yield self.transaction_actions.set_response(
origin, transaction, 400, response
)
defer.returnValue((400, response))
return (400, response)
received_pdus_counter.inc(len(transaction.pdus))
@@ -265,7 +264,7 @@ class FederationServer(FederationBase):
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(origin, transaction, 200, response)
defer.returnValue((200, response))
return (200, response)
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
@@ -298,7 +297,7 @@ class FederationServer(FederationBase):
event_id,
)
defer.returnValue((200, resp))
return (200, resp)
@defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id):
@@ -315,9 +314,7 @@ class FederationServer(FederationBase):
state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
defer.returnValue(
(200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
)
return (200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids})
@defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id):
@@ -336,12 +333,10 @@ class FederationServer(FederationBase):
)
)
defer.returnValue(
{
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
)
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
@defer.inlineCallbacks
@log_function
@@ -349,15 +344,15 @@ class FederationServer(FederationBase):
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
if pdu:
defer.returnValue((200, self._transaction_from_pdus([pdu]).get_dict()))
return (200, self._transaction_from_pdus([pdu]).get_dict())
else:
defer.returnValue((404, ""))
return (404, "")
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))
return (200, resp)
@defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
@@ -369,11 +364,9 @@ class FederationServer(FederationBase):
logger.warn("Room version %s not in %s", room_version, supported_versions)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = yield self.handler.on_make_join_request(room_id, user_id)
pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue(
{"event": pdu.get_pdu_json(time_now), "room_version": room_version}
)
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_invite_request(self, origin, content, room_version):
@@ -391,7 +384,7 @@ class FederationServer(FederationBase):
yield self.check_server_matches_acl(origin_host, pdu.room_id)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue({"event": ret_pdu.get_pdu_json(time_now)})
return {"event": ret_pdu.get_pdu_json(time_now)}
@defer.inlineCallbacks
def on_send_join_request(self, origin, content, room_id):
@@ -407,30 +400,26 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
defer.returnValue(
(
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
return (
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
@defer.inlineCallbacks
def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(room_id, user_id)
pdu = yield self.handler.on_make_leave_request(origin, room_id, user_id)
room_version = yield self.store.get_room_version(room_id)
time_now = self._clock.time_msec()
defer.returnValue(
{"event": pdu.get_pdu_json(time_now), "room_version": room_version}
)
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content, room_id):
@@ -445,7 +434,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
return (200, {})
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
@@ -456,7 +445,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
defer.returnValue((200, res))
return (200, res)
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, room_id, event_id):
@@ -509,7 +498,7 @@ class FederationServer(FederationBase):
"missing": ret.get("missing", []),
}
defer.returnValue((200, send_content))
return (200, send_content)
@log_function
def on_query_client_keys(self, origin, content):
@@ -548,7 +537,7 @@ class FederationServer(FederationBase):
),
)
defer.returnValue({"one_time_keys": json_result})
return {"one_time_keys": json_result}
@defer.inlineCallbacks
@log_function
@@ -580,9 +569,7 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
defer.returnValue(
{"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
)
return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
@log_function
def on_openid_userinfo(self, token):
@@ -676,14 +663,14 @@ class FederationServer(FederationBase):
ret = yield self.handler.exchange_third_party_invite(
sender_user_id, target_user_id, room_id, signed
)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
origin, room_id, event_dict
)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):


@@ -21,9 +21,7 @@ These actions are mostly only used by the :py:mod:`.replication` module.
import logging
from twisted.internet import defer
from synapse.util.logutils import log_function
from synapse.logging.utils import log_function
logger = logging.getLogger(__name__)
@@ -63,33 +61,3 @@ class TransactionActions(object):
return self.store.set_received_txn_response(
transaction.transaction_id, origin, code, response
)
@defer.inlineCallbacks
@log_function
def prepare_to_send(self, transaction):
""" Persists the `Transaction` we are about to send and works out the
correct value for the `prev_ids` key.
Returns:
Deferred
"""
transaction.prev_ids = yield self.store.prep_send_transaction(
transaction.transaction_id,
transaction.destination,
transaction.origin_server_ts,
)
@log_function
def delivered(self, transaction, response_code, response_dict):
""" Marks the given `Transaction` as having been successfully
delivered to the remote homeserver, and what the response was.
Returns:
Deferred
"""
return self.store.delivered_txn(
transaction.transaction_id,
transaction.destination,
response_code,
response_dict,
)


@@ -26,6 +26,11 @@ from synapse.federation.sender.per_destination_queue import PerDestinationQueue
from synapse.federation.sender.transaction_manager import TransactionManager
from synapse.federation.units import Edu
from synapse.handlers.presence import get_interested_remotes
from synapse.logging.context import (
make_deferred_yieldable,
preserve_fn,
run_in_background,
)
from synapse.metrics import (
LaterGauge,
event_processing_loop_counter,
@@ -33,7 +38,6 @@ from synapse.metrics import (
events_processed_counter,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util import logcontext
from synapse.util.metrics import measure_func
logger = logging.getLogger(__name__)
@@ -210,10 +214,10 @@ class FederationSender(object):
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
yield logcontext.make_deferred_yieldable(
yield make_deferred_yieldable(
defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
run_in_background(handle_room_events, evs)
for evs in itervalues(events_by_room)
],
consumeErrors=True,
@@ -360,7 +364,7 @@ class FederationSender(object):
for queue in queues:
queue.flush_read_receipts_for_room(room_id)
@logcontext.preserve_fn # the caller should not yield on this
@preserve_fn # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states):
"""Send the new presence states to the appropriate destinations.


@@ -374,7 +374,7 @@ class PerDestinationQueue(object):
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
defer.returnValue((edus, now_stream_id))
return (edus, now_stream_id)
@defer.inlineCallbacks
def _get_to_device_message_edus(self, limit):
@@ -393,4 +393,4 @@ class PerDestinationQueue(object):
for content in contents
]
defer.returnValue((edus, stream_id))
return (edus, stream_id)


@@ -63,8 +63,6 @@ class TransactionManager(object):
len(edus),
)
logger.debug("TX [%s] Persisting transaction...", destination)
transaction = Transaction.create_new(
origin_server_ts=int(self.clock.time_msec()),
transaction_id=txn_id,
@@ -76,9 +74,6 @@ class TransactionManager(object):
self._next_txn_id += 1
yield self._transaction_actions.prepare_to_send(transaction)
logger.debug("TX [%s] Persisted transaction", destination)
logger.info(
"TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
destination,
@@ -118,10 +113,6 @@ class TransactionManager(object):
logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)
yield self._transaction_actions.delivered(transaction, code, response)
logger.debug("TX [%s] {%s} Marked as delivered", destination, txn_id)
if code == 200:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
@@ -142,4 +133,4 @@ class TransactionManager(object):
)
success = False
defer.returnValue(success)
return success


@@ -21,8 +21,12 @@ from six.moves import urllib
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.urls import FEDERATION_V1_PREFIX, FEDERATION_V2_PREFIX
from synapse.util.logutils import log_function
from synapse.api.urls import (
FEDERATION_UNSTABLE_PREFIX,
FEDERATION_V1_PREFIX,
FEDERATION_V2_PREFIX,
)
from synapse.logging.utils import log_function
logger = logging.getLogger(__name__)
@@ -183,7 +187,7 @@ class TransportLayerClient(object):
try_trailing_slash_on_400=True,
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -201,7 +205,7 @@ class TransportLayerClient(object):
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -259,7 +263,7 @@ class TransportLayerClient(object):
ignore_backoff=ignore_backoff,
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -270,7 +274,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -288,7 +292,7 @@ class TransportLayerClient(object):
ignore_backoff=True,
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -299,7 +303,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -310,7 +314,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -339,7 +343,7 @@ class TransportLayerClient(object):
destination=remote_server, path=path, args=args, ignore_backoff=True
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -350,7 +354,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=event_dict
)
defer.returnValue(response)
return response
@defer.inlineCallbacks
@log_function
@@ -359,7 +363,7 @@ class TransportLayerClient(object):
content = yield self.client.get_json(destination=destination, path=path)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -370,7 +374,7 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -402,7 +406,7 @@ class TransportLayerClient(object):
content = yield self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -426,7 +430,7 @@ class TransportLayerClient(object):
content = yield self.client.get_json(
destination=destination, path=path, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -460,7 +464,7 @@ class TransportLayerClient(object):
content = yield self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
defer.returnValue(content)
return content
@defer.inlineCallbacks
@log_function
@@ -488,7 +492,7 @@ class TransportLayerClient(object):
timeout=timeout,
)
defer.returnValue(content)
return content
@log_function
def get_group_profile(self, destination, group_id, requester_user_id):
@@ -935,6 +939,23 @@ class TransportLayerClient(object):
destination=destination, path=path, data=content, ignore_backoff=True
)
def get_room_complexity(self, destination, room_id):
"""
Args:
destination (str): The remote server
room_id (str): The room ID to ask about.
"""
path = _create_path(FEDERATION_UNSTABLE_PREFIX, "/rooms/%s/complexity", room_id)
return self.client.get_json(destination=destination, path=path)
def _create_path(federation_prefix, path, *args):
"""
Ensures that all args are url encoded.
"""
return federation_prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)
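Because every argument goes through `urllib.parse.quote` with an empty safe set, identifiers containing `/`, `!` or `:` cannot break the path. A self-contained illustration of the helper's behaviour (values made up):

from six.moves import urllib

FEDERATION_V1_PREFIX = "/_matrix/federation/v1"

def _create_path(federation_prefix, path, *args):
    # Same logic as the helper above: percent-encode everything.
    return federation_prefix + path % tuple(
        urllib.parse.quote(arg, "") for arg in args
    )

print(_create_path(FEDERATION_V1_PREFIX, "/state/%s", "!room/id:example.org"))
# -> /_matrix/federation/v1/state/%21room%2Fid%3Aexample.org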
def _create_v1_path(path, *args):
"""Creates a path against V1 federation API from the path template and
@@ -951,9 +972,7 @@ def _create_v1_path(path, *args):
Returns:
str
"""
return FEDERATION_V1_PREFIX + path % tuple(
urllib.parse.quote(arg, "") for arg in args
)
return _create_path(FEDERATION_V1_PREFIX, path, *args)
def _create_v2_path(path, *args):
@@ -971,6 +990,4 @@ def _create_v2_path(path, *args):
Returns:
str
"""
return FEDERATION_V2_PREFIX + path % tuple(
urllib.parse.quote(arg, "") for arg in args
)
return _create_path(FEDERATION_V2_PREFIX, path, *args)

File diff suppressed because it is too large.


@@ -43,9 +43,9 @@ from signedjson.sign import sign_json
from twisted.internet import defer
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id
from synapse.util.logcontext import run_in_background
logger = logging.getLogger(__name__)
@@ -157,7 +157,7 @@ class GroupAttestionRenewer(object):
yield self.store.update_remote_attestion(group_id, user_id, attestation)
defer.returnValue({})
return {}
def _start_renew_attestations(self):
return run_as_background_process("renew_attestations", self._renew_attestations)

View File

@@ -85,7 +85,7 @@ class GroupsServerHandler(object):
if not is_admin:
raise SynapseError(403, "User is not admin in group")
defer.returnValue(group)
return group
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
@@ -151,22 +151,20 @@ class GroupsServerHandler(object):
group_id, requester_user_id
)
defer.returnValue(
{
"profile": profile,
"users_section": {
"users": users,
"roles": roles,
"total_user_count_estimate": 0, # TODO
},
"rooms_section": {
"rooms": rooms,
"categories": categories,
"total_room_count_estimate": 0, # TODO
},
"user": membership_info,
}
)
return {
"profile": profile,
"users_section": {
"users": users,
"roles": roles,
"total_user_count_estimate": 0, # TODO
},
"rooms_section": {
"rooms": rooms,
"categories": categories,
"total_room_count_estimate": 0, # TODO
},
"user": membership_info,
}
@defer.inlineCallbacks
def update_group_summary_room(
@@ -192,7 +190,7 @@ class GroupsServerHandler(object):
is_public=is_public,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_summary_room(
@@ -208,7 +206,7 @@ class GroupsServerHandler(object):
group_id=group_id, room_id=room_id, category_id=category_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
@@ -228,7 +226,7 @@ class GroupsServerHandler(object):
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
@@ -237,7 +235,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
categories = yield self.store.get_group_categories(group_id=group_id)
defer.returnValue({"categories": categories})
return {"categories": categories}
@defer.inlineCallbacks
def get_group_category(self, group_id, requester_user_id, category_id):
@@ -249,7 +247,7 @@ class GroupsServerHandler(object):
group_id=group_id, category_id=category_id
)
defer.returnValue(res)
return res
@defer.inlineCallbacks
def update_group_category(self, group_id, requester_user_id, category_id, content):
@@ -269,7 +267,7 @@ class GroupsServerHandler(object):
profile=profile,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_category(self, group_id, requester_user_id, category_id):
@@ -283,7 +281,7 @@ class GroupsServerHandler(object):
group_id=group_id, category_id=category_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_roles(self, group_id, requester_user_id):
@@ -292,7 +290,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
roles = yield self.store.get_group_roles(group_id=group_id)
defer.returnValue({"roles": roles})
return {"roles": roles}
@defer.inlineCallbacks
def get_group_role(self, group_id, requester_user_id, role_id):
@@ -301,7 +299,7 @@ class GroupsServerHandler(object):
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
defer.returnValue(res)
return res
@defer.inlineCallbacks
def update_group_role(self, group_id, requester_user_id, role_id, content):
@@ -319,7 +317,7 @@ class GroupsServerHandler(object):
group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_role(self, group_id, requester_user_id, role_id):
@@ -331,7 +329,7 @@ class GroupsServerHandler(object):
yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def update_group_summary_user(
@@ -355,7 +353,7 @@ class GroupsServerHandler(object):
is_public=is_public,
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
@@ -369,7 +367,7 @@ class GroupsServerHandler(object):
group_id=group_id, user_id=user_id, role_id=role_id
)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def get_group_profile(self, group_id, requester_user_id):
@@ -391,7 +389,7 @@ class GroupsServerHandler(object):
group_description = {key: group[key] for key in cols}
group_description["is_openly_joinable"] = group["join_policy"] == "open"
defer.returnValue(group_description)
return group_description
else:
raise SynapseError(404, "Unknown group")
@@ -461,9 +459,7 @@ class GroupsServerHandler(object):
# TODO: If admin add lists of users whose attestations have timed out
defer.returnValue(
{"chunk": chunk, "total_user_count_estimate": len(user_results)}
)
return {"chunk": chunk, "total_user_count_estimate": len(user_results)}
@defer.inlineCallbacks
def get_invited_users_in_group(self, group_id, requester_user_id):
@@ -494,9 +490,7 @@ class GroupsServerHandler(object):
logger.warn("Error getting profile for %s: %s", user_id, e)
user_profiles.append(user_profile)
defer.returnValue(
{"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
)
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
@defer.inlineCallbacks
def get_rooms_in_group(self, group_id, requester_user_id):
@@ -533,9 +527,7 @@ class GroupsServerHandler(object):
chunk.sort(key=lambda e: -e["num_joined_members"])
defer.returnValue(
{"chunk": chunk, "total_room_count_estimate": len(room_results)}
)
return {"chunk": chunk, "total_room_count_estimate": len(room_results)}
@defer.inlineCallbacks
def add_room_to_group(self, group_id, requester_user_id, room_id, content):
@@ -551,7 +543,7 @@ class GroupsServerHandler(object):
yield self.store.add_room_to_group(group_id, room_id, is_public=is_public)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def update_room_in_group(
@@ -574,7 +566,7 @@ class GroupsServerHandler(object):
else:
raise SynapseError(400, "Uknown config option")
defer.returnValue({})
return {}
@defer.inlineCallbacks
def remove_room_from_group(self, group_id, requester_user_id, room_id):
@@ -586,7 +578,7 @@ class GroupsServerHandler(object):
yield self.store.remove_room_from_group(group_id, room_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def invite_to_group(self, group_id, user_id, requester_user_id, content):
@@ -644,9 +636,9 @@ class GroupsServerHandler(object):
)
elif res["state"] == "invite":
yield self.store.add_group_invite(group_id, user_id)
defer.returnValue({"state": "invite"})
return {"state": "invite"}
elif res["state"] == "reject":
defer.returnValue({"state": "reject"})
return {"state": "reject"}
else:
raise SynapseError(502, "Unknown state returned by HS")
@@ -679,7 +671,7 @@ class GroupsServerHandler(object):
remote_attestation=remote_attestation,
)
defer.returnValue(local_attestation)
return local_attestation
@defer.inlineCallbacks
def accept_invite(self, group_id, requester_user_id, content):
@@ -699,7 +691,7 @@ class GroupsServerHandler(object):
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({"state": "join", "attestation": local_attestation})
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def join_group(self, group_id, requester_user_id, content):
@@ -716,7 +708,7 @@ class GroupsServerHandler(object):
local_attestation = yield self._add_user(group_id, requester_user_id, content)
defer.returnValue({"state": "join", "attestation": local_attestation})
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def knock(self, group_id, requester_user_id, content):
@@ -769,7 +761,7 @@ class GroupsServerHandler(object):
if not self.hs.is_mine_id(user_id):
yield self.store.maybe_delete_remote_profile_cache(user_id)
defer.returnValue({})
return {}
@defer.inlineCallbacks
def create_group(self, group_id, requester_user_id, content):
@@ -845,7 +837,7 @@ class GroupsServerHandler(object):
avatar_url=user_profile.get("avatar_url"),
)
defer.returnValue({"group_id": group_id})
return {"group_id": group_id}
@defer.inlineCallbacks
def delete_group(self, group_id, requester_user_id):


@@ -51,8 +51,8 @@ class AccountDataEventSource(object):
{"type": account_data_type, "content": content, "room_id": room_id}
)
defer.returnValue((results, current_stream_id))
return (results, current_stream_id)
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
defer.returnValue(([], config.to_id))
return ([], config.to_id)


@@ -22,10 +22,10 @@ from email.mime.text import MIMEText
from twisted.internet import defer
from synapse.api.errors import StoreError
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import UserID
from synapse.util import stringutils
from synapse.util.logcontext import make_deferred_yieldable
try:
from synapse.push.mailer import load_jinja2_templates
@@ -193,7 +193,7 @@ class AccountValidityHandler(object):
if threepid["medium"] == "email":
addresses.append(threepid["address"])
defer.returnValue(addresses)
return addresses
@defer.inlineCallbacks
def _get_renewal_token(self, user_id):
@@ -214,7 +214,7 @@ class AccountValidityHandler(object):
try:
renewal_token = stringutils.random_string(32)
yield self.store.set_renewal_token_for_user(user_id, renewal_token)
defer.returnValue(renewal_token)
return renewal_token
except StoreError:
attempts += 1
raise StoreError(500, "Couldn't generate a unique string as refresh string.")
@@ -226,11 +226,19 @@ class AccountValidityHandler(object):
Args:
renewal_token (str): Token sent with the renewal request.
Returns:
bool: Whether the provided token is valid.
"""
user_id = yield self.store.get_user_from_renewal_token(renewal_token)
try:
user_id = yield self.store.get_user_from_renewal_token(renewal_token)
except StoreError:
defer.returnValue(False)
logger.debug("Renewing an account for user %s", user_id)
yield self.renew_account_for_user(user_id)
defer.returnValue(True)
@defer.inlineCallbacks
def renew_account_for_user(self, user_id, expiration_ts=None, email_sent=False):
"""Renews the account attached to a given user by pushing back the
@@ -254,4 +262,4 @@ class AccountValidityHandler(object):
user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent
)
defer.returnValue(expiration_ts)
return expiration_ts


@@ -100,4 +100,4 @@ class AcmeHandler(object):
logger.exception("Failed saving!")
raise
defer.returnValue(True)
return True


@@ -17,6 +17,10 @@ import logging
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.types import RoomStreamToken
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -45,7 +49,7 @@ class AdminHandler(BaseHandler):
"devices": {"": {"sessions": [{"connections": connections}]}},
}
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_users(self):
@@ -57,7 +61,7 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.get_users()
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def get_users_paginate(self, order, start, limit):
@@ -74,7 +78,7 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.get_users_paginate(order, start, limit)
defer.returnValue(ret)
return ret
@defer.inlineCallbacks
def search_users(self, term):
@@ -88,4 +92,193 @@ class AdminHandler(BaseHandler):
"""
ret = yield self.store.search_users(term)
defer.returnValue(ret)
return ret
def set_user_server_admin(self, user, admin):
"""
Set the admin bit on a user.
Args:
user_id (UserID): the (necessarily local) user to manipulate
admin (bool): whether or not the user should be an admin of this server
"""
return self.store.set_server_admin(user, admin)
@defer.inlineCallbacks
def export_user_data(self, user_id, writer):
"""Write all data we have on the user to the given writer.
Args:
user_id (str)
writer (ExfiltrationWriter)
Returns:
defer.Deferred: Resolves when all data for a user has been written.
The returned value is that returned by `writer.finished()`.
"""
# Get all rooms the user is in or has been in
rooms = yield self.store.get_rooms_for_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
),
)
# We only try and fetch events for rooms the user has been in. If
# they've been e.g. invited to a room without joining then we handle
# those separately.
rooms_user_has_been_in = yield self.store.get_rooms_user_has_been_in(user_id)
for index, room in enumerate(rooms):
room_id = room.room_id
logger.info(
"[%s] Handling room %s, %d/%d", user_id, room_id, index + 1, len(rooms)
)
forgotten = yield self.store.did_forget(user_id, room_id)
if forgotten:
logger.info("[%s] User forgot room %d, ignoring", user_id, room_id)
continue
if room_id not in rooms_user_has_been_in:
# If we haven't been in the rooms then the filtering code below
# won't return anything, so we need to handle these cases
# explicitly.
if room.membership == Membership.INVITE:
event_id = room.event_id
invite = yield self.store.get_event(event_id, allow_none=True)
if invite:
invited_state = invite.unsigned["invite_room_state"]
writer.write_invite(room_id, invite, invited_state)
continue
# We only want to bother fetching events up to the last time they
# were joined. We estimate that point by looking at the
# stream_ordering of the last membership if it wasn't a join.
if room.membership == Membership.JOIN:
stream_ordering = yield self.store.get_room_max_stream_ordering()
else:
stream_ordering = room.stream_ordering
from_key = str(RoomStreamToken(0, 0))
to_key = str(RoomStreamToken(None, stream_ordering))
written_events = set() # Events that we've processed in this room
# We need to track gaps in the events stream so that we can then
# write out the state at those events. We do this by keeping track
# of events whose prev events we haven't seen.
# Map from event ID to prev events that haven't been processed,
# dict[str, set[str]].
event_to_unseen_prevs = {}
# The reverse mapping to above, i.e. map from unseen event to events
# that have the unseen event in their prev_events, i.e. the unseen
# events "children". dict[str, set[str]]
unseen_to_child_events = {}
# We fetch events in the room the user could see by fetching *all*
# events that we have and then filtering. This isn't the most
# efficient method, but it does guarantee we get everything.
while True:
events, _ = yield self.store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction="f"
)
if not events:
break
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.store, user_id, events)
writer.write_events(room_id, events)
# Update the extremity tracking dicts
for event in events:
# Check if we have any prev events that haven't been
# processed yet, and add those to the appropriate dicts.
unseen_events = set(event.prev_event_ids()) - written_events
if unseen_events:
event_to_unseen_prevs[event.event_id] = unseen_events
for unseen in unseen_events:
unseen_to_child_events.setdefault(unseen, set()).add(
event.event_id
)
# Now check if this event is an unseen prev event, if so
# then we remove this event from the appropriate dicts.
for child_id in unseen_to_child_events.pop(event.event_id, []):
event_to_unseen_prevs[child_id].discard(event.event_id)
written_events.add(event.event_id)
logger.info(
"Written %d events in room %s", len(written_events), room_id
)
# Extremities are the events who have at least one unseen prev event.
extremities = (
event_id
for event_id, unseen_prevs in event_to_unseen_prevs.items()
if unseen_prevs
)
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
return writer.finished()
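A toy walk-through of the gap-tracking bookkeeping above, using the same two dicts as the source. Event B's parent A is seen, but C's parent X never arrives, so C ends up as the only extremity whose state would be written out:

written_events = set()
event_to_unseen_prevs = {}   # event_id -> prevs not yet processed
unseen_to_child_events = {}  # missing prev -> children waiting on it

def process(event_id, prev_event_ids):
    unseen = set(prev_event_ids) - written_events
    if unseen:
        event_to_unseen_prevs[event_id] = unseen
        for prev in unseen:
            unseen_to_child_events.setdefault(prev, set()).add(event_id)
    # If this event was someone's missing prev, it is no longer unseen.
    for child in unseen_to_child_events.pop(event_id, []):
        event_to_unseen_prevs[child].discard(event_id)
    written_events.add(event_id)

process("A", [])
process("B", ["A"])  # A already written: no gap
process("C", ["X"])  # X never arrives: C keeps an unseen prev
print([e for e, p in event_to_unseen_prevs.items() if p])  # -> ['C']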
class ExfiltrationWriter(object):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id, events):
"""Write a batch of events for a room.
Args:
room_id (str)
events (list[FrozenEvent])
"""
pass
def write_state(self, room_id, event_id, state):
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
Args:
room_id (str)
event_id (str)
state (dict[tuple[str, str], FrozenEvent])
"""
pass
def write_invite(self, room_id, event, state):
"""Write an invite for the room, with associated invite state.
Args:
room_id (str)
event (FrozenEvent)
state (dict[tuple[str, str], dict]): A subset of the state at the
invite, with a subset of the event keys (type, state_key,
content and sender)
"""
def finished(self):
"""Called when all data has succesfully been exported and written.
This functions return value is passed to the caller of
`export_user_data`.
"""
pass
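A minimal concrete writer, purely illustrative, that accumulates everything in memory and hands it back from `finished()`:

class InMemoryWriter(ExfiltrationWriter):
    def __init__(self):
        self.rooms = {}

    def _room(self, room_id):
        return self.rooms.setdefault(
            room_id, {"events": [], "state": {}, "invite": None}
        )

    def write_events(self, room_id, events):
        self._room(room_id)["events"].extend(events)

    def write_state(self, room_id, event_id, state):
        self._room(room_id)["state"][event_id] = state

    def write_invite(self, room_id, event, state):
        self._room(room_id)["invite"] = (event, state)

    def finished(self):
        # This is the value export_user_data ultimately resolves with.
        return self.rooms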

Some files were not shown because too many files have changed in this diff.