
Compare commits


297 Commits

Author SHA1 Message Date
Andrew Morgan
522a50eea6 Merge branch 'develop' into anoa/remove_return_parans 2019-08-30 15:38:53 +01:00
Andrew Morgan
4765f0cfd9 Add m.id_access_token flag (#5930)
Adds a flag to `/versions`' `unstable_features` section indicating that this Synapse understands what an `id_access_token` is, as per https://github.com/matrix-org/synapse/issues/5927#issuecomment-523566043

Fixes #5927
2019-08-30 15:22:51 +01:00
Andrew Morgan
3aa3b52251 Fix another false positive 2019-08-30 14:20:16 +01:00
Amber Brown
d19505a8c1 Removed unused jenkins/ folder and script (#5938) 2019-08-30 23:13:16 +10:00
Andrew Morgan
76411e1840 Merge branch 'develop' into anoa/remove_return_parans 2019-08-30 14:06:59 +01:00
Andrew Morgan
fc9f12d6d7 Fix broken parentheses removal 2019-08-30 13:58:01 +01:00
Andrew Morgan
3057095a5d Revert "Use the v2 lookup API for 3PID invites (#5897)" (#5937)
This reverts commit 71fc04069a.

This broke 3PID invites as #5892 was required for it to work correctly.
2019-08-30 12:00:20 +01:00
Amber Brown
5625abe503 Fix buildkite pipeline plugin matrix-org/annotate using the wrong variable config 2019-08-30 15:06:40 +10:00
Andrew Morgan
dd22608cbd Do the same for the unit tests 2019-08-29 13:54:46 +01:00
Andrew Morgan
77350e6089 fix typo 2019-08-29 13:52:19 +01:00
Andrew Morgan
4bedc18841 Fix some edge cases 2019-08-29 13:51:41 +01:00
Andrew Morgan
acb7923397 Add changelog 2019-08-29 13:49:46 +01:00
Andrew Morgan
1d62242209 Remove unnecessary parentheses around return statements 2019-08-29 13:48:26 +01:00
Amber Brown
e7011280c7 Fix coverage in sytest and use plugins for buildkite (#5922) 2019-08-29 22:19:57 +10:00
Jorik Schellekens
92c1550f4a Add a link to python's logging config schema (#5926) 2019-08-28 19:08:32 +01:00
Will Hunt
c8fa620d7a Merge pull request #5902 from matrix-org/hs/exempt-support-users-from-consent
Exempt support users from consent
2019-08-28 16:31:40 +01:00
Jorik Schellekens
deca277d09 Let synctl use a config directory. (#5904)
* Let synctl use a config directory.
2019-08-28 15:55:58 +01:00
Will Hunt
5798a134c0 Removing entry for 5903 2019-08-28 14:25:05 +01:00
Andrew Morgan
71fc04069a Use the v2 lookup API for 3PID invites (#5897)
Fixes https://github.com/matrix-org/synapse/issues/5861

Adds support for the v2 lookup API as defined in [MSC2134](https://github.com/matrix-org/matrix-doc/pull/2134). Currently this is only used for 3PID invites.

Sytest PR: https://github.com/matrix-org/sytest/pull/679
2019-08-28 14:59:26 +02:00
Jorik Schellekens
6d97843793 Config templating (#5900)
Template config files

* Imagine a system composed entirely of x, y, z etc and the basic operations..

Wait George, why XOR? Why not just neq?

George: Eh, I didn't think of that..

Co-Authored-By: Erik Johnston <erik@matrix.org>
2019-08-28 13:12:22 +01:00
Amber Brown
7dc398586c Implement a structured logging output system. (#5680) 2019-08-28 21:18:53 +10:00
Richard van der Hoff
49ef8ec399 Fix a cache-invalidation bug for worker-based deployments (#5920)
Some of the caches on worker processes were not being correctly invalidated
when a room's state was changed in a way that did not affect the membership
list of the room.

We need to make sure we send out cache invalidations even when no memberships
are changing.
2019-08-28 10:18:16 +01:00
reivilibre
a3f0635686 Merge pull request #5914 from matrix-org/rei/admin_getadmin
Add GET method to admin API /users/@user:dom/admin
2019-08-28 09:44:22 +01:00
Victor Goff
1196ee32b3 Typographical corrections in docker/README (#5921) 2019-08-28 09:34:49 +01:00
reivilibre
7ccc251415 Merge pull request #5859 from matrix-org/rei/msc2197
MSC2197 Search Filters over Federation
2019-08-28 09:00:21 +01:00
Erik Johnston
dfd10f5133 Merge pull request #5864 from matrix-org/erikj/reliable_lookups
Refactor MatrixFederationAgent to retry SRV.
2019-08-27 16:54:06 +01:00
Erik Johnston
91caa5b430 Fix off by one error in SRV result shuffling 2019-08-27 13:56:42 +01:00
Olivier Wilkinson (reivilibre)
1b959b6977 Document GET method for retrieving admin bit of user in admin API
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 13:19:19 +01:00
Olivier Wilkinson (reivilibre)
c88a119259 Add GET method to admin API /users/@user:dom/admin
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 13:12:27 +01:00
reivilibre
322ccac33f Allow schema deltas to be engine-specific (#5911)
* Allow schema deltas to be engine-specific

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>

* Code style (Black)

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 11:53:21 +01:00
Richard van der Hoff
ccb15a5bbe Merge pull request #5906 from matrix-org/neilj/increase_display_name_limit
Increase profile display name limit
2019-08-27 11:52:59 +01:00
Erik Johnston
f5b50d0871 Merge pull request #5895 from matrix-org/erikj/notary_key
Add config option to sign remote key query responses with a separate key.
2019-08-27 11:51:37 +01:00
Richard van der Hoff
e7577427c9 Update 5909.misc 2019-08-27 11:50:52 +01:00
Richard van der Hoff
7837a5f2ea Merge pull request #5909 from aaronraimist/public_base_url
public_base_url is actually public_baseurl
2019-08-27 11:49:59 +01:00
reivilibre
1a7e6eb633 Add Admin API capability to set adminship of a user (#5878)
Admin API: Set adminship of a user
2019-08-27 10:14:00 +01:00
Olivier Wilkinson (reivilibre)
d1e0b91083 Code style (Black)
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 09:39:11 +01:00
Olivier Wilkinson (reivilibre)
62a1639287 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 09:36:12 +01:00
Olivier Wilkinson (reivilibre)
aefa76f5cd Allow schema deltas to be engine-specific
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-27 09:14:00 +01:00
Aaron Raimist
c25137a99f Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2019-08-26 21:06:10 -05:00
Aaron Raimist
e8e3e033ee public_base_url is actually public_baseurl
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2019-08-26 21:01:56 -05:00
Neil Johnson
27d3fc421a Increase max display name limit 2019-08-24 22:33:43 +01:00
Erik Johnston
fbb758a7ce Fixup comments 2019-08-23 15:37:20 +01:00
Erik Johnston
e70f0081da Fix logcontexts 2019-08-23 15:37:20 +01:00
Erik Johnston
fe0ac98e66 Don't implicitly include server signing key 2019-08-23 15:36:28 +01:00
Erik Johnston
7af5a63063 Fixup review comments 2019-08-23 15:36:28 +01:00
Will Hunt
c998f25006 Apply suggestions from code review
Co-Authored-By: Erik Johnston <erik@matrix.org>
2019-08-23 10:28:54 +01:00
Half-Shot
4a2d2c2b6f Update changelog 2019-08-23 09:57:07 +01:00
Half-Shot
9ba32f6573 Exempt bot users 2019-08-23 09:56:31 +01:00
Half-Shot
ffa5b757c7 Merge branch 'hs/bot-user-type' into hs/exempt-support-users-from-consent 2019-08-23 09:55:57 +01:00
Half-Shot
971c980c6e Add changelog 2019-08-23 09:53:48 +01:00
Half-Shot
d9b8cf81be Add bot type 2019-08-23 09:52:09 +01:00
Half-Shot
0fb5189072 Fix registration test 2019-08-23 09:25:35 +01:00
Half-Shot
80793e813c newsfile 5902 2019-08-23 09:20:31 +01:00
Half-Shot
ae38e0569f Ignore consent for support users 2019-08-23 09:15:10 +01:00
Half-Shot
886eceba3e Return user_type in get_user_by_id 2019-08-23 09:14:52 +01:00
Jorik Schellekens
8767b63a82 Propagate opentracing contexts through EDUs (#5852)
Propagate opentracing contexts through EDUs
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-08-22 18:21:10 +01:00
Richard van der Hoff
0b39fa53b6 Merge pull request #5877 from Awesome-Technologies/remove_shared_secret_registration
Remove shared secret registration
2019-08-22 18:12:25 +01:00
Jorik Schellekens
812ed6b0d5 Opentracing across workers (#5771)
Propagate opentracing contexts across workers


Also includes some convenience modifications to opentracing for servlets, notably:
- Add a boolean to skip the whitelisting check on the inject and
  extract methods. This is useful when injecting into carriers
  locally; otherwise we'd always have to include our own
  servername and whitelist it.
- Use start_active_span_from_request instead of the header.
- Add a boolean to decide whether to extract context
  from a request to a servlet.
2019-08-22 18:08:07 +01:00
Manuel Stahl
0bab582fd6 Remove shared secret registration from client/r0/register endpoint
This type of registration was probably never used. It only includes the
user name in the HMAC but not the password.

Shared secret registration is still available via
client/r0/admin/register.

Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
2019-08-22 18:04:08 +02:00
Brendan Abolivier
dbd46decad Revert "Do not send consent notices if "no-consent-required" is set"
This reverts commit 27a686e53b.
2019-08-22 14:47:43 +01:00
Brendan Abolivier
1c5b8c6222 Revert "Add "require_consent" parameter for registration"
This reverts commit 3320aaab3a.
2019-08-22 14:47:34 +01:00
Half-Shot
27a686e53b Do not send consent notices if "no-consent-required" is set 2019-08-22 14:22:04 +01:00
Half-Shot
3320aaab3a Add "require_consent" parameter for registration 2019-08-22 14:21:54 +01:00
Erik Johnston
1e4b4d85e7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/reliable_lookups 2019-08-22 13:41:57 +01:00
Erik Johnston
1b09cf8658 Merge pull request #5850 from matrix-org/erikj/retry_well_known_on_fail
Retry well known on fail
2019-08-22 13:17:05 +01:00
Jorik Schellekens
9a6f2be572 Opentrace e2e keys (#5855)
Add opentracing tags and logs for e2e keys
2019-08-22 11:28:12 +01:00
Richard van der Hoff
c9f11d09fc Add missing index on users_in_public_rooms. (#5894) 2019-08-22 10:43:13 +01:00
Richard van der Hoff
119aa31b10 Servlet to purge old rooms (#5845) 2019-08-22 10:42:59 +01:00
Richard van der Hoff
ef1c524bb3 Improve error msg when key-fetch fails (#5896)
There's no point doing a raise_from here, because the exception is always
logged at warn with no stacktrace in the caller. Instead, let's try to give
better messages to reduce confusion.

In particular, this means that we won't log 'Failed to connect to remote
server' when we don't even attempt to connect to the remote server due to
blacklisting.
2019-08-22 10:42:06 +01:00
Richard van der Hoff
4dab867288 Drop some unused tables. (#5893)
These tables are never used, so we may as well drop them.
2019-08-21 13:16:28 +01:00
Erik Johnston
62fb643cdc Newsfile 2019-08-21 11:21:58 +01:00
Erik Johnston
97cbc96093 Only sign when we respond to remote key requests 2019-08-21 11:21:58 +01:00
Erik Johnston
5906be8589 Add config option for keys to use to sign keys
This allows servers to separate keys that are used to sign remote keys
when acting as a notary server.
2019-08-21 10:44:58 +01:00
Richard van der Hoff
72bc285669 Refactor the Appservice scheduler code (#5886)
Get rid of the labyrinthine `recoverer_fn` code, and clean up the startup code
(it seemed to be previously inexplicably split between
`ApplicationServiceScheduler.start` and `_Recoverer.start`).

Add some docstrings too.
2019-08-20 17:42:45 +01:00
Richard van der Hoff
baa3f4a80d Avoid deep recursion in appservice recovery (#5885)
Hopefully, this will fix a stack overflow when recovering an appservice.

The recursion here leads to a huge chain of deferred callbacks, which then
overflows the stack when the chain completes. `inlineCallbacks` does a better
job of this if we use iteration instead.

Clean up the code a bit too, while we're there.
2019-08-20 17:39:38 +01:00
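The commit above swaps recursion for iteration in the appservice recovery loop. A minimal sketch of that pattern under `inlineCallbacks` (illustrative names, not Synapse's actual recoverer code):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def retry_send(service, get_oldest_txn, sleep, backoff_seconds=60):
        # Keep resending the oldest queued transaction until the queue drains.
        # Iterating keeps the Deferred chain flat; a recursive retry would add
        # one callback frame per attempt and eventually overflow the stack.
        while True:
            txn = yield get_oldest_txn(service)   # hypothetical storage helper
            if txn is None:
                return                            # recovered: nothing left to send
            sent = yield txn.send()
            if sent:
                yield txn.complete()
            else:
                yield sleep(backoff_seconds)      # wait, then loop instead of recursing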
Jorik Schellekens
c886f976e0 Opentracing doc update (#5776)
Update opentracing docs to use the unified 'trace' method
2019-08-20 13:56:03 +01:00
Erik Johnston
29763f01c6 Make changelog entry be a feature 2019-08-20 12:38:06 +01:00
Erik Johnston
74f016d343 Remove now unused pick_server_from_list 2019-08-20 12:37:08 +01:00
Erik Johnston
1f9df1cc7b Fixup _sort_server_list to be slightly more efficient
Also document that we are using the algorithm described in RFC2782 and
ensure we handle zero weight correctly.
2019-08-20 12:36:11 +01:00
Richard van der Hoff
5019945828 Refactor the Appservice scheduler code
Get rid of the labyrinthine `recoverer_fn` code, and clean up the startup code
(it seemed to be previously inexplicably split between
`ApplicationServiceScheduler.start` and `_Recoverer.start`).

Add some docstrings too.
2019-08-20 11:50:23 +01:00
Erik Johnston
7777d353bf Remove test debugs 2019-08-20 11:46:59 +01:00
Erik Johnston
1dec31560e Change jitter to be a factor rather than absolute value 2019-08-20 11:46:00 +01:00
Olivier Wilkinson (reivilibre)
502728777c Newsfile on one line
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 08:49:53 +01:00
Olivier Wilkinson (reivilibre)
bb29bc2937 Use MSC2197 on stable prefix as it has almost finished FCP
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 08:49:31 +01:00
Erik Johnston
d514dac0b2 Merge pull request #5860 from matrix-org/erikj/update_5704_comments
Remove logging for #5407 and update comments
2019-08-19 10:20:59 +01:00
Brendan Abolivier
bdd201ea7f Merge branch 'master' into develop 2019-08-17 10:50:42 +01:00
Richard van der Hoff
74fb729213 1.3.1 2019-08-17 09:16:17 +01:00
Richard van der Hoff
412c6e21a8 Drop dependency on sdnotify (#5871)
... to save OSes which don't use it from having to maintain a port.

Fixes #5865.
2019-08-17 09:09:52 +01:00
Hubert Chathi
8a5f6ed130 Merge pull request #5857 from matrix-org/uhoreg/fix_e2e_room_keys_index
add the version field to the index for e2e_room_keys
2019-08-16 17:45:50 -07:00
Richard van der Hoff
c188bd2c12 add attribution 2019-08-16 23:19:23 +01:00
Chris Moos
20402aa128 Add changelog entry. 2019-08-16 22:16:21 +01:00
Chris Moos
6d86df73f1 Fix issue with Synapse not starting up. Fixes #5866.
Signed-off-by: Chris Moos <chris@chrismoos.com>
2019-08-16 22:16:13 +01:00
Jorik Schellekens
87fa26006b Opentracing misc (#5856)
Add authenticated_entity and servlet_names tags.

Functionally:
- Add a tag for authenticated_entity
- Add a tag for servlet_names

Stylistically:
Moved to importing methods directly from opentracing.
2019-08-16 16:13:25 +01:00
Erik Johnston
ebba15ee7f Newsfile 2019-08-16 13:29:41 +01:00
Erik Johnston
861d663c15 Fixup changelog and remove debug logging 2019-08-16 13:15:26 +01:00
Hubert Chathi
e132ba79ae fix changelog 2019-08-15 21:02:40 -07:00
Andrew Morgan
b13cac896d Fix up password reset template config names (#5863)
Fixes #5833

The emailconfig code was attempting to pull incorrect config file names. This corrects that, while also distinguishing between a config file variable that's a filepath and one that's a str containing HTML.
2019-08-15 16:27:11 +01:00
Erik Johnston
c03e3e8301 Newsfile 2019-08-15 15:43:22 +01:00
Erik Johnston
f299c5414c Refactor MatrixFederationAgent to retry SRV.
This refactors MatrixFederationAgent to move the SRV lookup into the
endpoint code, which has two benefits:
	1. It's easier to retry different hosts/ports in the same way as
	   HostnameEndpoint.
	2. We avoid SRV lookups if we have a free connection in the pool.
2019-08-15 15:43:22 +01:00
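A rough sketch of the retry shape the message above describes, with the SRV lookup done in the connection path so successive host/port targets can be tried the way HostnameEndpoint does (resolve_srv and connect_tcp are illustrative stand-ins, not the real agent API):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def connect_with_srv_fallback(resolve_srv, connect_tcp, server_name):
        # Resolve SRV at connect time, then walk the candidate targets.
        targets = yield resolve_srv(b"_matrix._tcp." + server_name)
        last_error = None
        for host, port in targets or [(server_name, 8448)]:
            try:
                connection = yield connect_tcp(host, port)
                return connection      # first target that connects wins
            except Exception as e:
                last_error = e         # fall through to the next SRV target
        raise last_error or ConnectionError("no targets to try")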
Brendan Abolivier
ce5f1cb98c Merge branch 'master' into develop 2019-08-15 12:38:21 +01:00
Brendan Abolivier
6382914587 Merge tag 'v1.3.0'
Synapse 1.3.0 (2019-08-15)
==========================

Bugfixes
--------

- Fix 500 Internal Server Error on `publicRooms` when the public room list was
  cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))

Synapse 1.3.0rc1 (2019-08-13)
=============================

Features
--------

- Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to login. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
- Add sd_notify hooks to ease systemd integration and allow usage of Type=Notify. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
- Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
- Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
- Allow defining HTML templates to serve the user on account renewal attempt when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))

Bugfixes
--------

- Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
- Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
- start.sh no longer uses deprecated cli option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
- Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
- Fix debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
- Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
- Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
- Fix check that tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
- Fix error when trying to login as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
- Fix bug where user `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
- The purge_remote_media.sh script was fixed. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))

Deprecations and Removals
-------------------------

- Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
- Remove non-functional 'expire_access_token' setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))

Internal Changes
----------------

- Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
- Add precautionary measures to prevent future abuse of `window.opener` in default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
- Reduce database IO usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
- Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
- Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
- Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
- Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
- Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
- Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
- Cache result of get_version_string to reduce overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
- Return 'user_type' in admin API user endpoints results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
- Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
- Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
- Blacklist some flakey tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
- Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
- Add a prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
- Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
- Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
- Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
- Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
- Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
- Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
- Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
- Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
- Don't allow clients to send tombstone events that reference the room they're sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
- Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
- Deny sending well known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
- Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
- Handle pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
- Return 502 not 500 when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
- Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
- Add a lower bound to well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
- Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
2019-08-15 12:37:45 +01:00
Brendan Abolivier
fb5acd7039 1.3.0 2019-08-15 12:05:24 +01:00
Erik Johnston
748aa38378 Remove logging for #5407 and update comments 2019-08-15 12:02:18 +01:00
Andrew Morgan
8cf7fbbce0 Remove libsqlite3-dev from required build dependencies. (#5766) 2019-08-15 11:32:23 +01:00
Olivier Wilkinson (reivilibre)
a3df04a899 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-15 11:09:07 +01:00
Olivier Wilkinson (reivilibre)
2253b083d9 Add support for inbound MSC2197 requests on unstable Federation API
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-15 11:06:21 +01:00
reivilibre
7809f0c022 Merge pull request #5851 from matrix-org/rei/roomdir_maybedeferred
Room Directory:  Wrap `get_local_public_room_list` call in `maybeDeferred`
2019-08-15 11:02:33 +01:00
Olivier Wilkinson (reivilibre)
6fadb560fc Support MSC2197 outbound with unstable prefix
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-15 10:59:37 +01:00
Michael Telatynski
baee288fb4 Don't create broken room when power_level_content_override.users does not contain creator_id. (#5633) 2019-08-15 09:45:57 +01:00
Erik Johnston
1771f0045d Newsfile 2019-08-15 09:28:58 +01:00
Erik Johnston
e6e136decc Retry well known on fail.
If we have recently seen a valid well-known for a domain we want to
retry on (non-final) errors a few times, to handle temporary blips in
networking/etc.
2019-08-15 09:28:58 +01:00
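A hedged sketch of that retry policy: only retry transient ("non-final") failures, and only when the domain has recently served a valid well-known (the helpers and the backoff schedule are illustrative):

    import time

    class TemporaryLookupError(Exception):
        """Stand-in for a retryable (non-final) lookup failure."""

    def lookup_well_known(domain, fetch, had_recent_valid_result, retries=3):
        # Only bother retrying if this domain had a valid .well-known recently.
        attempts = retries if had_recent_valid_result(domain) else 1
        for attempt in range(attempts):
            try:
                return fetch(domain)
            except TemporaryLookupError:
                if attempt + 1 == attempts:
                    raise                        # out of retries: propagate
                time.sleep(1.0 * (attempt + 1))  # simple linear backoff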
Hubert Chathi
c058aeb88d update set_e2e_room_key to agree with fixed index 2019-08-14 18:02:58 -07:00
Hubert Chathi
81b8080acd add changelog 2019-08-14 17:53:33 -07:00
Hubert Chathi
b7f7cc7ace add the version field to the index for e2e_room_keys 2019-08-14 17:14:40 -07:00
reivilibre
d6de55bce9 Update changelog.d/5851.bugfix
Use imperative

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-08-14 14:53:49 +01:00
Olivier Wilkinson (reivilibre)
3ad24ab386 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-14 14:53:49 +01:00
Olivier Wilkinson (reivilibre)
1b63ccd848 Wrap get_local_public_room_list call in maybeDeferred because it
is cached and so does not always return a `Deferred`.
`await` does not silently pass-through non-Deferreds like `yield` used to.

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-14 14:53:49 +01:00
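A small sketch of the maybeDeferred pattern described above: the cached function may hand back either a plain value or a Deferred, so the caller normalises the result before awaiting it (get_room_list is an illustrative stand-in):

    from twisted.internet import defer

    async def handle_public_rooms_request(get_room_list, limit):
        # maybeDeferred wraps a synchronous return value in an already-fired
        # Deferred, so the await works whether or not the cache was hit.
        rooms = await defer.maybeDeferred(get_room_list, limit)
        return {"chunk": rooms}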
Erik Johnston
09f6152a11 Merge pull request #5844 from matrix-org/erikj/retry_well_known_lookup
Retry well-known lookup before expiry.
2019-08-14 09:53:33 +01:00
Brendan Abolivier
f70d0a1dd9 1.3.0rc1 2019-08-13 18:20:09 +01:00
Brendan Abolivier
3039be82ce Merge pull request #5848 from matrix-org/hawkowl/fix-mediarepo-worker-startup
Fix mediarepo worker startup
2019-08-13 17:38:11 +01:00
Amber H. Brown
28bce1ac7c changelog 2019-08-14 02:08:24 +10:00
Amber H. Brown
18bdac8ee4 fix config being a dict, actually 2019-08-14 02:06:42 +10:00
Erik Johnston
aedfec3ad7 Newsfile 2019-08-13 16:20:38 +01:00
Erik Johnston
17e1e80726 Retry well-known lookup before expiry.
This gives a bit of a grace period where we can attempt to refetch a
remote `well-known`, while still using the cached result if that fails.

Hopefully this will make the well-known resolution a bit more tolerant
of failures, rather than it immediately treating failures as "no result"
and caching that for an hour.
2019-08-13 16:20:38 +01:00
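A hedged sketch of that grace period: once a cached .well-known entry is close to expiry, attempt a refetch, but fall back to the cached value if the refetch fails (names, cache shape and timings are illustrative):

    import time

    GRACE_PERIOD = 10 * 60  # start refreshing this long before hard expiry

    def get_well_known(domain, cache, fetch):
        entry = cache.get(domain)            # assumed shape: (result, expires_at)
        now = time.time()
        if entry is not None:
            result, expires_at = entry
            if now < expires_at - GRACE_PERIOD:
                return result                # comfortably fresh: no refetch
            try:
                return fetch(domain)         # refresh attempt inside the grace window
            except Exception:
                if now < expires_at:
                    return result            # tolerate the blip: reuse cached value
                raise                        # hard-expired and the refetch failed
        return fetch(domain)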
Erik Johnston
af187805b3 Merge pull request #5809 from matrix-org/erikj/handle_pusher_stop
Handle pusher being deleted during processing.
2019-08-13 14:08:29 +01:00
Erik Johnston
96bdd661b8 Remove redundant return 2019-08-13 12:50:36 +01:00
Amber Brown
0b6fbb28a8 Don't load the media repo when configured to use an external media repo (#5754) 2019-08-13 21:49:28 +10:00
Erik Johnston
e9906b0772 Merge pull request #5836 from matrix-org/erikj/lower_bound_ttl_well_known
Add a lower bound to well-known TTL.
2019-08-13 12:41:16 +01:00
Erik Johnston
fb3469f53a Clarify docstring
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-08-13 10:17:23 +01:00
Erik Johnston
f218705d2a Make default well known cache global again. 2019-08-13 10:06:51 +01:00
Erik Johnston
2546f32b90 Merge pull request #5826 from matrix-org/erikj/reduce_event_pauses
Don't unnecessarily block notifying of new events.
2019-08-13 09:36:25 +01:00
Erik Johnston
9d9cf3583b Merge pull request #5843 from matrix-org/erikj/workers_hist_vis
Whitelist history visibility sytests for worker mode
2019-08-12 18:02:19 +01:00
Erik Johnston
2bec3a4953 Merge pull request #5839 from tcitworld/fix-purge-remote-media-script
Fix curl command typo in purge_remote_media.sh
2019-08-12 14:51:27 +01:00
Erik Johnston
3de6cc245f Changelogs should end in '.' or '!' 2019-08-12 14:16:42 +01:00
Erik Johnston
156a461cbd Newsfile 2019-08-12 13:57:52 +01:00
Erik Johnston
c9456193d3 Whitelist history visibility sytests for worker mode 2019-08-12 13:56:26 +01:00
Richard van der Hoff
fb86217553 Merge pull request #5788 from matrix-org/rav/metaredactions
Fix handling of redactions of redactions
2019-08-12 12:25:19 +01:00
Erik Johnston
41546f946e Newsfile 2019-08-12 09:56:58 +01:00
Thomas Citharel
a7f0161276 Fix curl command typo in purge_remote_media.sh
The command used the verbose option instead of -X, so it didn't work.

Signed-off-by: Thomas Citharel <tcit@tcit.fr>
2019-08-09 18:36:12 +02:00
Neil Johnson
1016f303e5 make user creation steps clearer 2019-08-08 14:58:21 +01:00
Erik Johnston
107ad133fc Move well known lookup into a separate class 2019-08-07 15:36:38 +01:00
Erik Johnston
af9f1c0764 Add a lower bound for TTL on well known results.
It costs both us and the remote server for us to fetch the well known
for every single request we send, so we add a minimum cache period. This
is set to 5m so that we still honour the basic premise of "refetch
frequently".
2019-08-06 17:01:23 +01:00
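The floor described above can be expressed as a simple clamp; the 5 minute minimum comes from the commit message, while the upper bound here is purely illustrative:

    WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60       # don't refetch more often than this
    WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600    # illustrative upper bound

    def clamp_well_known_ttl(ttl_from_response):
        # Honour the server-supplied TTL, but never below the floor (to avoid
        # hammering the remote server) and never above the cap.
        return max(WELL_KNOWN_MIN_CACHE_PERIOD,
                   min(ttl_from_response, WELL_KNOWN_MAX_CACHE_PERIOD))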
Erik Johnston
d1b5b055be Merge pull request #5825 from matrix-org/erikj/fix_empty_limited_sync
Handle TimelineBatch being limited and empty.
2019-08-06 15:39:44 +01:00
Andrew Morgan
edeae53221 Return 404 instead of 403 when retrieving an event without perms (#5798)
Part of fixing matrix-org/sytest#652

Sytest PR: matrix-org/sytest#667
2019-08-06 13:33:55 +01:00
Erik Johnston
c32d359094 Newsfile 2019-08-06 13:33:42 +01:00
Erik Johnston
bf4db42920 Don't unnecessarily block notifying of new events.
When persisting events we calculate new stream orderings up front.
Before we notify about an event all events with lower stream orderings
must have finished being persisted.

This PR moves the assignment of stream orderings until *after* we have calculated
the new current state and split the batch of events into separate chunks
for persistence. This means that if it takes a long time to calculate
new current state then it will not block events in other rooms being
notified about.

This should help reduce some global pauses in the events stream which
can last for tens of seconds (if not longer), caused by some
particularly expensive state resolutions.
2019-08-06 13:32:02 +01:00
Erik Johnston
977fa4a717 Newsfile 2019-08-06 13:00:45 +01:00
Erik Johnston
6881f21f3e Handle TimelineBatch being limited and empty.
This hopefully addresses #5407 by gracefully handling an empty but
limited TimelineBatch. We also add some logging to figure out how this
is happening.
2019-08-06 12:59:00 +01:00
Brendan Abolivier
8ed9e63432 Account validity: allow defining HTML templates to serve the us… (#5807)
Account validity: allow defining HTML templates to serve the user on account renewal attempt
2019-08-01 16:09:25 +02:00
Erik Johnston
d55bc4a8bf Merge pull request #5810 from matrix-org/erikj/no_server_reachable
Return 502 not 500 when failing to reach any remote server.
2019-08-01 14:19:39 +01:00
Andrew Morgan
5d018d23f0 Have ClientReaderSlavedStore inherit RegistrationStore (#5806)
Fixes #5803
2019-08-01 13:54:56 +01:00
Erik Johnston
93fd3cbc7a Newsfile 2019-08-01 13:48:52 +01:00
Erik Johnston
3c076c79c5 Merge pull request #5808 from matrix-org/erikj/parse_decode_error
Handle incorrectly encoded query params correctly
2019-08-01 13:48:10 +01:00
Erik Johnston
a8f40a8302 Return 502 not 500 when failing to reach any remote server. 2019-08-01 13:47:31 +01:00
Erik Johnston
55a0c98d16 Merge pull request #5805 from matrix-org/erikj/validate_state
Validate well known state events are state events.
2019-08-01 13:45:48 +01:00
Erik Johnston
0b36decfb6 Merge pull request #5801 from matrix-org/erikj/recursive_tombstone
Don't allow clients to send tombstones that reference the same room
2019-08-01 13:45:35 +01:00
Erik Johnston
312cc48e2b Newsfile 2019-08-01 13:45:09 +01:00
Erik Johnston
d02e41dcb2 Handle pusher being deleted during processing.
Instead of throwing a StoreError, let's break out of the processing loop and
mark the pusher as stopped.
2019-08-01 13:44:12 +01:00
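A minimal sketch of the behaviour described above, with StoreError and the processing loop reduced to stand-ins:

    class StoreError(Exception):
        """Stand-in for the storage-layer error raised for a missing row."""

    def run_pusher_loop(pusher, process_one_batch):
        while pusher.has_work():
            try:
                process_one_batch(pusher)
            except StoreError:
                # The pusher's row was deleted under us: stop cleanly rather
                # than letting the exception escape the loop.
                pusher.stopped = True
                break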
Erik Johnston
da378af445 Newsfile 2019-08-01 13:24:00 +01:00
Erik Johnston
d2e3d5b9db Handle incorrectly encoded query params correctly 2019-08-01 13:23:00 +01:00
Erik Johnston
76a58fdcce Fix spelling.
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2019-08-01 13:17:55 +01:00
Erik Johnston
58af30a6c7 Merge pull request #5802 from matrix-org/erikj/deny_redacting_different_room
Deny redaction of events in a different room.
2019-08-01 13:14:46 +01:00
Erik Johnston
0f632f3a57 Merge pull request #5790 from matrix-org/erikj/groups_request_errors
Handle RequestSendFailed exception correctly in more places.
2019-08-01 13:14:08 +01:00
Erik Johnston
ad167c3849 Merge pull request #5804 from matrix-org/erikj/match_against_state_key
Explicitly check that tombstone is a state event before notifying.
2019-08-01 13:13:33 +01:00
Brendan Abolivier
f25f638c35 Lint 2019-08-01 12:19:08 +02:00
Brendan Abolivier
3ff3dfe5a3 Sample config 2019-08-01 12:08:25 +02:00
Brendan Abolivier
f4a30d286f Changelog 2019-08-01 12:08:06 +02:00
Brendan Abolivier
bc35503528 Add tests 2019-08-01 12:00:08 +02:00
Brendan Abolivier
a4a9ded4d0 Allow defining HTML templates to serve the user on account renewal 2019-08-01 11:59:27 +02:00
Erik Johnston
e5a0224837 Newsfile 2019-07-31 16:39:42 +01:00
Erik Johnston
dc4d74e44a Validate well-known state events are state events.
Let's disallow sending things like memberships, topics, etc. as non-state
events.
2019-07-31 16:36:20 +01:00
Erik Johnston
c5288e9984 Newsfile 2019-07-31 16:32:03 +01:00
Erik Johnston
2e697d3013 Explicitly check that tombstone is a state event before notifying. 2019-07-31 16:32:03 +01:00
Erik Johnston
0eefb76fa1 Newsfile 2019-07-31 16:13:57 +01:00
Erik Johnston
cf89266b98 Deny redaction of events in a different room.
We already correctly filter out such redactions, but we should also deny
them over the CS API.
2019-07-31 16:12:27 +01:00
Erik Johnston
02735e140f Newsfile 2019-07-31 15:53:52 +01:00
Erik Johnston
f31d4cb7a2 Don't allow clients to send tombstones that reference the same room 2019-07-31 15:52:27 +01:00
Andrew Morgan
72167fb394 Change user deactivated errcode to USER_DEACTIVATED and use it (#5686)
This is intended as an amendment to #5674 as using M_UNKNOWN as the errcode makes it hard for clients to differentiate between an invalid password and a deactivated user (the problem we were trying to solve in the first place).

M_UNKNOWN was originally chosen as it was presumed that an MSC would have to be carried out to add a new code, but as Synapse is often the testing bed for new MSC implementations, it makes sense to try it out first in the wild and then add it into the spec if it is successful. Thus this PR returns a new M_USER_DEACTIVATED code when a deactivated user attempts to log in.
2019-07-31 15:19:06 +01:00
Andrew Morgan
58a755cdc3 Remove duplicate return statement 2019-07-31 13:24:51 +01:00
Erik Johnston
8fde611a8c Merge pull request #5794 from matrix-org/erikj/share_ssl_options_for_well_known
Share SSL options for well-known requests
2019-07-31 11:40:02 +01:00
Amber Brown
8f15832950 Remove DelayedCall debugging from test runs (#5787) 2019-07-31 20:39:22 +10:00
Erik Johnston
9fe6ad5fef Merge pull request #5796 from matrix-org/erikj/disable_codecov_report
Disable codecov reports to GH comments.
2019-07-31 11:16:15 +01:00
Erik Johnston
fe2f2fc530 Newsfile 2019-07-31 10:59:39 +01:00
Erik Johnston
6be336c0d8 Disable codecov reports to GH comments.
The double posting is really annoying, and I don't think anyone is
actually reading them. The commit statuses should give a good summary
and will link to a full report.
2019-07-31 10:56:02 +01:00
Erik Johnston
3b7a35a59a Newsfile 2019-07-31 10:39:24 +01:00
Erik Johnston
a9bcae9f50 Share SSL options for well-known requests 2019-07-31 10:39:24 +01:00
Brendan Abolivier
d4f91e7e9f Merge pull request #5793 from matrix-org/erikj/fix_bg_update
Don't recreate current_state_events.membership column
2019-07-30 21:19:39 +02:00
Erik Johnston
4037d3220a Newsfile 2019-07-30 16:43:59 +01:00
Erik Johnston
123c04daa7 Don't recreate column 2019-07-30 16:42:48 +01:00
Erik Johnston
62a2d60d72 Merge pull request #5792 from matrix-org/erikj/fix_bg_update
Fix current_state_events membership background update.
2019-07-30 15:20:09 +01:00
Erik Johnston
958d69f300 Newsfile 2019-07-30 14:53:52 +01:00
Erik Johnston
15056ca208 Fix current_state_events membership background update.
It turns out not all rooms are in `rooms`, so let's fetch the room list from
`current_state_events`. We move the delta file to force it to be run
again.
2019-07-30 14:51:41 +01:00
Erik Johnston
f92d05e254 Newsfile 2019-07-30 13:43:53 +01:00
Erik Johnston
7a48d0bab8 Merge pull request #5789 from matrix-org/erikj/fix_error_handling_keys
Fix error handling when fetching remote device keys
2019-07-30 13:26:12 +01:00
Erik Johnston
b4d5ff0af7 Don't log as exception when failing during backfill 2019-07-30 13:19:22 +01:00
Erik Johnston
e23ab7f41a Newsfile 2019-07-30 13:10:00 +01:00
Erik Johnston
1ec7d656dd Unwrap error 2019-07-30 13:09:02 +01:00
Erik Johnston
458e51df7a Fix error handling when fetching remote device keys 2019-07-30 13:07:02 +01:00
Erik Johnston
63eb4a1b62 Merge pull request #5746 from matrix-org/erikj/test_bg_update_currnet_state
Add unit test for current state membership bg update
2019-07-30 10:00:02 +01:00
Richard van der Hoff
8c97f6414c Remove non-functional 'expire_access_token' setting (#5782)
The `expire_access_token` setting didn't do what it sounded like it should do. What it
actually did was make Synapse enforce the 'time' caveat on macaroons used as
access tokens, but since our access token macaroons never contained such a
caveat, it was always a no-op.

(The code to add 'time' caveats was removed back in v0.18.5, in #1656)
2019-07-30 08:25:02 +01:00
Richard van der Hoff
5c3eecc70f changelog 2019-07-30 00:00:34 +01:00
Richard van der Hoff
4e97eb89e5 Handle loops in redaction events 2019-07-30 00:00:34 +01:00
Richard van der Hoff
448bcfd0f9 recursively fetch redactions 2019-07-30 00:00:34 +01:00
Richard van der Hoff
e6a6c4fbab split _get_events_from_db out of _enqueue_events 2019-07-29 23:15:15 +01:00
Richard van der Hoff
c9964ba600 Return dicts from _fetch_event_list 2019-07-29 23:15:15 +01:00
Amber Brown
865077f1d1 Room Complexity Client Implementation (#5783) 2019-07-30 02:47:27 +10:00
Erik Johnston
aecae8f397 Correctly handle errors doing requests to group servers 2019-07-29 17:21:57 +01:00
Erik Johnston
7c8c3b8437 Merge pull request #5774 from matrix-org/erikj/fix_rejected_membership
Fix room summary when rejected events are in state
2019-07-29 17:15:15 +01:00
Erik Johnston
3e013b7c8e Merge pull request #5752 from matrix-org/erikj/forgotten_user
Remove some more joins on room_memberships
2019-07-29 17:15:01 +01:00
Erik Johnston
2a12d76646 Merge pull request #5770 from matrix-org/erikj/fix_current_state_event_sqlite
Fix current_state bg update to work on old SQLite
2019-07-29 17:09:01 +01:00
Amber Brown
97a8b4caf7 Move some timeout checking logs to DEBUG #5785 2019-07-30 02:02:18 +10:00
Erik Johnston
df3a5db629 Expand comment 2019-07-29 16:40:25 +01:00
Jorik Schellekens
85b0bd8fe0 Update the device list cache when keys/query is called (#5693) 2019-07-29 16:34:44 +01:00
Erik Johnston
105e7f6ed3 Remove lost comment 2019-07-29 16:09:48 +01:00
Erik Johnston
3b476f5767 Fix debian packages for sid being called buster. (#5775)
* Fix debian packages for sid being called buster.

I don't know why the sid images return buster as their codename in
`lsb_release`, but they do, so let's just grab the codename from the
distro we pass into the dockerfile.

* Newsfile
2019-07-30 00:33:32 +10:00
Erik Johnston
d94916852f Newsfile 2019-07-29 13:04:58 +01:00
Erik Johnston
84c6ea1af8 Update old deps unit test to use old sqlite3 2019-07-29 13:04:50 +01:00
Erik Johnston
45df38e61b Fix current_state bg update to work on old SQLite 2019-07-29 13:04:10 +01:00
Brendan Abolivier
fa87004bc1 Merge pull request #5780 from matrix-org/baboliver/loopingcall-args
Add ability to pass arguments to looping calls
2019-07-29 10:58:22 +02:00
Brendan Abolivier
bd083a5fcf Changelog 2019-07-29 10:04:09 +02:00
Brendan Abolivier
244953be3f Add kwargs and doc 2019-07-29 10:03:14 +02:00
Brendan Abolivier
08352d44f8 Add ability to pass arguments to looping calls 2019-07-29 09:54:37 +02:00
Richard van der Hoff
d74595e2ca Merge branch 'master' into develop 2019-07-26 12:39:33 +01:00
Richard van der Hoff
1a93daf353 Merge pull request #5744 from matrix-org/erikj/log_leave_origin_mismatch
Log when we receive a /make_* request from a different origin
2019-07-26 12:38:37 +01:00
Richard van der Hoff
97bf307755 yet more changelog attribution fixes 2019-07-26 12:06:06 +01:00
Erik Johnston
2e9cf7dda5 Newsfile 2019-07-26 10:14:31 +01:00
Erik Johnston
14c24c9037 Fix room summary when rejected events are in state
Annoyingly, the `current_state_events` table can include rejected events,
in which case the membership column will be null. To work around this,
let's just always filter out null membership for now.
2019-07-26 10:11:36 +01:00
Richard van der Hoff
1cad8d7b6f Convert RedactionTestCase to modern test style (#5768) 2019-07-26 07:38:55 +01:00
Richard van der Hoff
26d742fed6 Merge pull request #5767 from matrix-org/rav/redactions/cross_room_id
log when a redaction attempts to redact an event in a different room
2019-07-25 18:49:56 +01:00
Richard van der Hoff
618bd1ee76 Fix some error cases in the caching layer. (#5749)
There was some inconsistent behaviour in the caching layer around how
exceptions were handled - particularly synchronously-thrown ones.

This seems to be most easily handled by pushing the creation of
ObservableDeferreds down from CacheDescriptor to the Cache.
2019-07-25 15:59:45 +01:00
Andrew Morgan
f16aa3a44b Merge branch 'master' into develop 2019-07-25 15:19:22 +01:00
Andrew Morgan
baf081cd3b Merge tag 'v1.2.0rc2' into develop
Bugfixes
--------

- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))
2019-07-24 13:47:51 +01:00
Erik Johnston
2276936bac Merge pull request #5743 from matrix-org/erikj/log_origin_receipts_mismatch
Log when we receive receipt from a different origin
2019-07-24 13:27:57 +01:00
Richard van der Hoff
f30a71a67b Stop trying to fetch events with event_id=None. (#5753)
`None` is not a valid event id, so queuing up a database fetch for it seems
like a silly thing to do.

I considered making `get_event` return `None` if `event_id is None`, but then
its interaction with `allow_none` seemed unintuitive, and strong typing ftw.
2019-07-24 13:16:18 +01:00
Erik Johnston
c159803067 Newsfile 2019-07-24 11:51:44 +01:00
Erik Johnston
0c4a99607e Remove join when calculating room summaries. 2019-07-24 11:49:15 +01:00
Erik Johnston
62921fb53e Remove join on room_memberships when fetching rooms for user. 2019-07-24 11:45:58 +01:00
Erik Johnston
32768e96d4 Add function to get all forgotten rooms for user
This will allow us to efficiently filter out rooms that have been
forgotten in other queries without having to join against the
`room_memberships` table.
2019-07-24 11:44:23 +01:00
Richard van der Hoff
418635e68a Add a prometheus metric for active cache lookups. (#5750)
* Add a prometheus metric for active cache lookups.

* changelog
2019-07-24 11:33:13 +01:00
Erik Johnston
adcd5368b0 Newsfile 2019-07-23 17:00:24 +01:00
Erik Johnston
73bbaf2bc6 Add unit test for current state membership bg update 2019-07-23 17:00:22 +01:00
Jorik Schellekens
3641784e8c Make Jaeger fully configurable (#5694)
* Allow Jaeger to be configured

* Update sample config
2019-07-23 15:46:04 +01:00
Erik Johnston
65afc535a6 Update changelog.d/5743.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-23 15:14:21 +01:00
Amber Brown
4806651744 Replace returnValue with return (#5736) 2019-07-23 23:00:55 +10:00
Erik Johnston
fadfde9aaa Newsfile 2019-07-23 13:32:37 +01:00
Jorik Schellekens
18a466b84e Opentracing Utils (#5722)
* Add decorators for tracing functions

* Use the new clean contexts

* Context and edu utils

* Move opentracing setters

* Move whitelisting

* Sectioning comments

* Better args wrapper

* Docstrings

Co-Authored-By: Erik Johnston <erik@matrix.org>

* Remove unused methods.

* Don't use global

* One tracing decorator to rule them all.
2019-07-23 13:31:16 +01:00
Erik Johnston
3db1377b26 Log when we receive receipt from a different origin 2019-07-23 13:31:03 +01:00
Erik Johnston
841b12867e Merge pull request #5732 from matrix-org/erikj/sdnotify
Add process hooks to tell systemd our state.
2019-07-23 13:06:53 +01:00
Erik Johnston
73bf452666 Merge pull request #5740 from matrix-org/erikj/worker_flakey_tests
Mark flakey tests as blacklisted for worker mode
2019-07-23 11:32:32 +01:00
Erik Johnston
22d2338ace Newsfile 2019-07-23 10:27:53 +01:00
Erik Johnston
1883223a01 Mark flakey tests as blacklisted for worker mode 2019-07-23 10:26:52 +01:00
Erik Johnston
4f6984aa88 Merge pull request #5738 from matrix-org/erikj/faster_update
Speed up current state background update.
2019-07-23 10:23:12 +01:00
Erik Johnston
cda4460d99 Also update systemd-with-workers contrib examples 2019-07-23 10:14:01 +01:00
Erik Johnston
39e594b765 Merge pull request #5733 from matrix-org/erikj/exlude_sytest_blacklist
Don't package sytest-blacklist file.
2019-07-23 10:11:34 +01:00
Erik Johnston
cf0006719d Newsfile 2019-07-23 10:01:30 +01:00
Erik Johnston
b2a629ef49 Speed up current state background update.
It turns out that storing huge JSON arrays in the progress JSON isn't
something that postgres particularly likes.
2019-07-23 10:01:30 +01:00
Erik Johnston
d9ea9881d2 Newsfile 2019-07-22 16:09:15 +01:00
Erik Johnston
c96322c8d2 Don't package sytest-blacklist file.
I don't think it's useful, and I don't even know where it would end up.
2019-07-22 16:07:12 +01:00
Amber Brown
0d0f6d12bc Fix logging in workers (#5729)
This also adds a worker blacklist.
2019-07-22 16:05:00 +01:00
Erik Johnston
17c27df6ea Update example systemd service file 2019-07-22 15:24:25 +01:00
Erik Johnston
80cfad233e Call startup commands as system triggers.
This helps ensure that we only consider ourselves "up" once all the
startup functions have completed.
2019-07-22 15:22:14 +01:00
Erik Johnston
720d30469f Merge pull request #5730 from matrix-org/erikj/cache_versions
Cache get_version_string.
2019-07-22 14:52:52 +01:00
Erik Johnston
79f689e6c2 Newsfile 2019-07-22 14:52:19 +01:00
Erik Johnston
c560b791e1 Add process hooks to tell systemd our state.
Fixes #5676.
2019-07-22 14:52:18 +01:00
Jason Robinson
8e513e7afc Merge pull request #5731 from matrix-org/jaywink/admin-user-list-user-type
Add `user_type` to returned fields in admin API user list endpoints
2019-07-22 16:28:51 +03:00
Erik Johnston
22e862304a Update changelog.d/5730.misc
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-22 14:09:56 +01:00
Richard van der Hoff
0cb72812f9 Fix stack overflow in Keyring (#5724)
* Refactor Keyring._start_key_lookups

There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.

* Add a delay to key lookup lock release to fix stack overflow

A tactical call_later here should fix #5723

* changelog
2019-07-22 13:51:22 +01:00
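The "delay to key lookup lock release" mentioned above amounts to deferring the release to the next reactor tick, so waiters run from the event loop rather than deep inside the current call stack. A hedged sketch (release_lock is an illustrative stand-in):

    from twisted.internet import reactor

    def release_key_lookup_lock_later(release_lock, server_name):
        # A zero-delay callLater breaks the synchronous callback chain that
        # could otherwise overflow the stack when many waiters resolve at once.
        reactor.callLater(0, release_lock, server_name)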
Andrew Morgan
f477ce4b1a Merge tag 'v1.2.0rc1' into develop
v1.2.0rc1

Features
--------

- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default ([\#5714](https://github.com/matrix-org/synapse/issues/5714))

Bugfixes
--------

- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
- Ignore redactions of m.room.create events. ([\#5701](https://github.com/matrix-org/synapse/issues/5701))

Updates to the Docker image
---------------------------

- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))

Improved Documentation
----------------------

- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to setup the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))

Deprecations and Removals
-------------------------

- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))

Internal Changes
----------------

- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))
2019-07-22 13:49:16 +01:00
Jason Robinson
66f5ff72fd Add user_type to returned fields in admin API user list endpoints
Mostly the user type will be empty (a normal user), but there is also the
"support" user type.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2019-07-22 15:29:18 +03:00
Erik Johnston
2017369f7d Newsfile 2019-07-22 13:18:25 +01:00
Erik Johnston
5ea773c505 Cache get_version_string.
The version of a module isn't going to change over the lifetime of the
process (assuming no funky hot reloading is going on, which it isn't),
so let's just cache the result to avoid spawning lots of git
subprocesses.

Fixes #5672.
2019-07-22 13:15:08 +01:00
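A sketch of the memoisation idea: the version string cannot change for the lifetime of the process, so compute it once and reuse it (Synapse's actual helper takes a module argument and inspects its source tree; this simplified sketch does not):

    import functools
    import subprocess

    @functools.lru_cache(maxsize=None)
    def get_version_string():
        # Cached for the process lifetime, so the git subprocess runs at most once.
        try:
            return subprocess.check_output(
                ["git", "describe", "--always"], stderr=subprocess.DEVNULL
            ).decode("ascii").strip()
        except (subprocess.CalledProcessError, OSError):
            return "unknown"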
Jorik Schellekens
f337d2f0f0 Demo uses deprecated cli option (#5725)
* Remove deprecated 'verbose' cli arg

* Create 5725.bugfix
2019-07-22 11:31:05 +01:00
Jorik Schellekens
0fd171770a Merge branch 'release-v1.2.0' into develop 2019-07-22 11:18:50 +01:00
Jorik Schellekens
f99554b15d Revert "Remove deprecated 'verbose' cli arg"
This reverts commit dc7cf81267.
2019-07-19 18:19:27 +01:00
Jorik Schellekens
dc7cf81267 Remove deprecated 'verbose' cli arg 2019-07-19 18:16:42 +01:00
Richard van der Hoff
f214bff0c0 changelog 2019-07-19 17:58:17 +01:00
Richard van der Hoff
dcca56baba Add a delay to key lookup lock release to fix stack overflow
A tactical call_later here should fix #5723
2019-07-19 17:57:00 +01:00
Richard van der Hoff
c7095be913 Refactor Keyring._start_key_lookups
There's an awful lot of deferreds and dictionaries flying around here. The
whole thing can be made much simpler and achieve the same effect.
2019-07-19 17:49:19 +01:00
Erik Johnston
7704873cb8 Merge pull request #5720 from matrix-org/erikj/transactions_upsert
Use upsert when updating destination retry interval
2019-07-19 16:51:16 +01:00
Erik Johnston
d7bd9651bc Merge pull request #5713 from matrix-org/erikj/use_cache_for_filtered_state
Delegate to cached version when using get_filtered_current_state_ids
2019-07-19 16:30:49 +01:00
Erik Johnston
5c07c97c09 Merge pull request #5706 from matrix-org/erikj/add_memberships_to_current_state
Add membership column to current_state_events table
2019-07-19 16:30:33 +01:00
Jorik Schellekens
7b8bc61834 Don't accept opentracing data from clients. (#5715)
* Don't accept opentracing data from clients.

* newsfile
2019-07-19 16:29:57 +01:00
Erik Johnston
ced4fdaa84 Newsfile 2019-07-19 13:40:26 +01:00
Erik Johnston
2410335507 Use upsert when updating destination retry interval 2019-07-19 13:40:24 +01:00
Erik Johnston
bd2e1a2aa8 LoggingTransaction accepts None for callback lists.
It's a bit disingenuous to give LoggingTransaction lists to append
callbacks to if we're not going to run the callbacks.
2019-07-19 13:36:04 +01:00
Erik Johnston
ebc5ed1296 Update comment for new column 2019-07-19 13:29:02 +01:00
Neil Johnson
5c05ae7ba0 Add 'rel' attribute to default welcome page. (#5695)
add rel attribute as a precaution against reverse tabnabbing in future
2019-07-19 12:03:36 +01:00
Richard van der Hoff
b73ce4ba81 Update the coding style doc (#5719)
A few fixes and removal of duplicated material, but mostly new guidance on the config file format.
2019-07-19 11:55:14 +01:00
Amber Brown
356ed0438e Speed up the PostgreSQL unit tests (#5717) 2019-07-19 19:01:23 +10:00
Amber Brown
6a85cb5ef7 Remove non-dedicated logging options and command line arguments (#5678) 2019-07-19 01:40:08 +10:00
Erik Johnston
dd2851d576 Newsfile 2019-07-18 15:27:18 +01:00
Erik Johnston
10523241d8 Delegate to cached version when using get_filtered_current_state_ids
In the case where it gets called with `StateFilter.all()`
2019-07-18 15:17:39 +01:00
Erik Johnston
89c885909a Newsfile 2019-07-18 14:16:01 +01:00
Erik Johnston
8e1ada9e6f Use the current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
059d8c1a4e Track if current_state_events.membership is up to date 2019-07-18 14:16:01 +01:00
Erik Johnston
c618a5d348 Add background update for current_state_events.membership column 2019-07-18 14:16:01 +01:00
Erik Johnston
6de09e07a6 Add membership column to current_state_events table.
It turns out that doing a join is surprisingly expensive for the DB to
do when the room_membership table is larger than the disk cache.
2019-07-18 14:15:57 +01:00
328 changed files with 7797 additions and 3612 deletions

View File

@@ -6,6 +6,7 @@ services:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.5
@@ -16,6 +17,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /app
working_dir: /src
volumes:
- ..:/app
- ..:/src

View File

@@ -6,6 +6,7 @@ services:
image: postgres:11
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
@@ -16,6 +17,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /app
working_dir: /src
volumes:
- ..:/app
- ..:/src

View File

@@ -6,6 +6,7 @@ services:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
@@ -16,6 +17,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /app
working_dir: /src
volumes:
- ..:/app
- ..:/src

View File

@@ -2,7 +2,7 @@
set -ex
if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs-.*|release-.*)$ ]]; then
if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs|release-.*)$ ]]; then
echo "Not merging forward, as this is a release branch"
exit 0
fi
@@ -27,7 +27,7 @@ git config --global user.name "A robot"
# Fetch and merge. If it doesn't work, it will raise due to set -e.
git fetch -u origin $GITBASE
git merge --no-edit origin/$GITBASE
git merge --no-edit --no-commit origin/$GITBASE
# Show what we are after.
git --no-pager show -s

View File

@@ -1,8 +1,7 @@
env:
CODECOV_TOKEN: "2dd7eb9b-0eda-45fe-a47c-9b5ac040045f"
COVERALLS_REPO_TOKEN: wsJWOby6j0uCYFiCes3r0XauxO27mx8lD
steps:
- command:
- "python -m pip install tox"
- "tox -e check_codestyle"
@@ -10,6 +9,7 @@ steps:
plugins:
- docker#v3.0.1:
image: "python:3.6"
mount-buildkite-agent: false
- command:
- "python -m pip install tox"
@@ -18,6 +18,7 @@ steps:
plugins:
- docker#v3.0.1:
image: "python:3.6"
mount-buildkite-agent: false
- command:
- "python -m pip install tox"
@@ -26,16 +27,18 @@ steps:
plugins:
- docker#v3.0.1:
image: "python:3.6"
mount-buildkite-agent: false
- command:
- "python -m pip install tox"
- "scripts-dev/check-newsfragment"
label: ":newspaper: Newsfile"
branches: "!master !develop !release-* !shhs-v*"
branches: "!master !develop !release-*"
plugins:
- docker#v3.0.1:
image: "python:3.6"
propagate-environment: true
mount-buildkite-agent: false
- command:
- "python -m pip install tox"
@@ -44,20 +47,35 @@ steps:
plugins:
- docker#v3.0.1:
image: "python:3.6"
mount-buildkite-agent: false
- command:
- "python -m pip install tox"
- "tox -e mypy"
label: ":mypy: mypy"
plugins:
- docker#v3.0.1:
image: "python:3.5"
mount-buildkite-agent: false
- wait
- command:
- "python -m pip install tox"
- "tox -e py35-old,codecov"
- "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
- "python3.5 -m pip install tox"
- "tox -e py35-old,combine"
label: ":python: 3.5 / SQLite / Old Deps"
branches: "!shhs !shhs-*"
env:
TRIAL_FLAGS: "-j 2"
LANG: "C.UTF-8"
plugins:
- docker#v3.0.1:
image: "python:3.5"
image: "ubuntu:xenial" # We use xenial to get an old sqlite and python
workdir: "/src"
mount-buildkite-agent: false
propagate-environment: true
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -67,15 +85,18 @@ steps:
- command:
- "python -m pip install tox"
- "tox -e py35,codecov"
- "tox -e py35,combine"
label: ":python: 3.5 / SQLite"
branches: "!shhs !shhs-*"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:3.5"
workdir: "/src"
mount-buildkite-agent: false
propagate-environment: true
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -85,15 +106,18 @@ steps:
- command:
- "python -m pip install tox"
- "tox -e py36,codecov"
- "tox -e py36,combine"
label: ":python: 3.6 / SQLite"
branches: "!shhs !shhs-*"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:3.6"
workdir: "/src"
mount-buildkite-agent: false
propagate-environment: true
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -103,14 +127,18 @@ steps:
- command:
- "python -m pip install tox"
- "tox -e py37,codecov"
- "tox -e py37,combine"
label: ":python: 3.7 / SQLite"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:3.7"
workdir: "/src"
mount-buildkite-agent: false
propagate-environment: true
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -119,16 +147,19 @@ steps:
limit: 2
- label: ":python: 3.5 / :postgres: 9.5"
branches: "!shhs !shhs-*"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,combine'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py35.pg95.yaml
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -137,16 +168,19 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 9.5"
branches: "!shhs !shhs-*"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,combine'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py37.pg95.yaml
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -155,15 +189,19 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 11"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,combine'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py37.pg11.yaml
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -171,9 +209,7 @@ steps:
- exit_status: 2
limit: 2
- label: "SyTest - :python: 3.5 / SQLite / Monolith"
branches: "!shhs !shhs-*"
agents:
queue: "medium"
command:
@@ -185,6 +221,16 @@ steps:
propagate-environment: true
always-pull: true
workdir: "/src"
entrypoint: ["/bin/sh", "-e", "-c"]
mount-buildkite-agent: false
volumes: ["./logs:/logs"]
- artifacts#v1.2.0:
upload: [ "logs/**/*.log", "logs/**/*.log.*", "logs/coverage.xml" ]
- matrix-org/annotate:
path: "logs/annotate.md"
style: "error"
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -206,6 +252,16 @@ steps:
propagate-environment: true
always-pull: true
workdir: "/src"
entrypoint: ["/bin/sh", "-e", "-c"]
mount-buildkite-agent: false
volumes: ["./logs:/logs"]
- artifacts#v1.2.0:
upload: [ "logs/**/*.log", "logs/**/*.log.*", "logs/coverage.xml" ]
- matrix-org/annotate:
path: "logs/annotate.md"
style: "error"
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -214,14 +270,15 @@ steps:
limit: 2
- label: "SyTest - :python: 3.5 / :postgres: 9.6 / Workers"
branches: "!shhs !shhs-*"
agents:
queue: "medium"
env:
POSTGRES: "1"
WORKERS: "1"
BLACKLIST: "synapse-blacklist-with-workers"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
@@ -229,7 +286,16 @@ steps:
propagate-environment: true
always-pull: true
workdir: "/src"
soft_fail: true
entrypoint: ["/bin/sh", "-e", "-c"]
mount-buildkite-agent: false
volumes: ["./logs:/logs"]
- artifacts#v1.2.0:
upload: [ "logs/**/*.log", "logs/**/*.log.*", "logs/coverage.xml" ]
- matrix-org/annotate:
path: "logs/annotate.md"
style: "error"
- matrix-org/coveralls#v1.0:
parallel: "true"
retry:
automatic:
- exit_status: -1
@@ -237,14 +303,8 @@ steps:
- exit_status: 2
limit: 2
- wait
- wait: ~
continue_on_failure: true
- label: ":docker: x86_64"
agents:
queue: "release"
branches: "shhs-*"
command:
- "docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.7.4 . -t matrixdotorg/synapse:${BUILDKITE_TAG}"
- "docker save matrixdotorg/synapse:${BUILDKITE_TAG} | gzip -9 > docker.tar.gz"
artifact_paths:
- "docker.tar.gz"
- label: Trigger webhook
command: "curl -k https://coveralls.io/webhook?repo_token=$COVERALLS_REPO_TOKEN -d \"payload[build_num]=$BUILDKITE_BUILD_NUMBER&payload[status]=done\""

View File

@@ -0,0 +1,30 @@
# This file serves as a blacklist for SyTest tests that we expect will fail in
# Synapse when run under worker mode. For more details, see sytest-blacklist.
Message history can be paginated
Can re-join room if re-invited
/upgrade creates a new room
The only membership state included in an initial sync is for all the senders in the timeline
Local device key changes get to remote servers
If remote user leaves room we no longer receive device updates
Forgotten room messages cannot be paginated
Inbound federation can get public room list
Members from the gap are included in gappy incr LL sync
Leaves are present in non-gapped incremental syncs
Old leaves are present in gapped incremental syncs
User sees updates to presence from other users in the incremental sync.
Gapped incremental syncs include all state changes
Old members are included in gappy incr LL sync if they start speaking

33
.circleci/config.yml Normal file
View File

@@ -0,0 +1,33 @@
version: 2
jobs:
dockerhubuploadrelease:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3
workflows:
version: 2
build:
jobs:
- dockerhubuploadrelease:
filters:
tags:
only: /v[0-9].[0-9]+.[0-9]+.*/
branches:
ignore: /.*/
- dockerhubuploadlatest:
filters:
branches:
only: master

View File

@@ -1,5 +1,4 @@
comment:
layout: "diff"
comment: off
coverage:
status:

View File

@@ -1,7 +1,8 @@
[run]
branch = True
parallel = True
include = synapse/*
include=$TOP/synapse/*
data_file = $TOP/.coverage
[report]
precision = 2

4
.gitignore vendored
View File

@@ -16,6 +16,7 @@ _trial_temp*/
/*.log
/*.log.config
/*.pid
/.python-version
/*.signing.key
/env/
/homeserver*.yaml
@@ -29,8 +30,9 @@ _trial_temp*/
/.vscode/
# build products
/.coverage*
!/.coveragerc
/.coverage*
/.mypy_cache/
/.tox
/build/
/coverage.*

View File

@@ -1,3 +1,102 @@
Synapse 1.3.1 (2019-08-17)
==========================
Features
--------
- Drop hard dependency on `sdnotify` python package. ([\#5871](https://github.com/matrix-org/synapse/issues/5871))
Bugfixes
--------
- Fix startup issue (hang on ACME provisioning) due to ordering of Twisted reactor startup. Thanks to @chrismoos for supplying the fix. ([\#5867](https://github.com/matrix-org/synapse/issues/5867))
Synapse 1.3.0 (2019-08-15)
==========================
Bugfixes
--------
- Fix 500 Internal Server Error on `publicRooms` when the public room list was
cached. ([\#5851](https://github.com/matrix-org/synapse/issues/5851))
Synapse 1.3.0rc1 (2019-08-13)
=============================
Features
--------
- Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to login. ([\#5686](https://github.com/matrix-org/synapse/issues/5686))
- Add sd_notify hooks to ease systemd integration and allows usage of Type=Notify. ([\#5732](https://github.com/matrix-org/synapse/issues/5732))
- Synapse will no longer serve any media repo admin endpoints when `enable_media_repo` is set to False in the configuration. If a media repo worker is used, the admin APIs relating to the media repo will be served from it instead. ([\#5754](https://github.com/matrix-org/synapse/issues/5754), [\#5848](https://github.com/matrix-org/synapse/issues/5848))
- Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events) over federation. This option can be used to prevent adverse performance on resource-constrained homeservers. ([\#5783](https://github.com/matrix-org/synapse/issues/5783))
- Allow defining HTML templates to serve the user on account renewal attempt when using the account validity feature. ([\#5807](https://github.com/matrix-org/synapse/issues/5807))
Bugfixes
--------
- Fix UISIs during homeserver outage. ([\#5693](https://github.com/matrix-org/synapse/issues/5693), [\#5789](https://github.com/matrix-org/synapse/issues/5789))
- Fix stack overflow in server key lookup code. ([\#5724](https://github.com/matrix-org/synapse/issues/5724))
- start.sh no longer uses deprecated cli option. ([\#5725](https://github.com/matrix-org/synapse/issues/5725))
- Log when we receive an event receipt from an unexpected origin. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
- Fix debian packaging scripts to correctly build sid packages. ([\#5775](https://github.com/matrix-org/synapse/issues/5775))
- Correctly handle redactions of redactions. ([\#5788](https://github.com/matrix-org/synapse/issues/5788))
- Return 404 instead of 403 when accessing /rooms/{roomId}/event/{eventId} for an event without the appropriate permissions. ([\#5798](https://github.com/matrix-org/synapse/issues/5798))
- Fix check that tombstone is a state event in push rules. ([\#5804](https://github.com/matrix-org/synapse/issues/5804))
- Fix error when trying to login as a deactivated user when using a worker to handle login. ([\#5806](https://github.com/matrix-org/synapse/issues/5806))
- Fix bug where user `/sync` stream could get wedged in rare circumstances. ([\#5825](https://github.com/matrix-org/synapse/issues/5825))
- The purge_remote_media.sh script was fixed. ([\#5839](https://github.com/matrix-org/synapse/issues/5839))
Deprecations and Removals
-------------------------
- Synapse now no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration. ([\#5678](https://github.com/matrix-org/synapse/issues/5678), [\#5729](https://github.com/matrix-org/synapse/issues/5729))
- Remove non-functional 'expire_access_token' setting. ([\#5782](https://github.com/matrix-org/synapse/issues/5782))
Internal Changes
----------------
- Make Jaeger fully configurable. ([\#5694](https://github.com/matrix-org/synapse/issues/5694))
- Add precautionary measures to prevent future abuse of `window.opener` in default welcome page. ([\#5695](https://github.com/matrix-org/synapse/issues/5695))
- Reduce database IO usage by optimising queries for current membership. ([\#5706](https://github.com/matrix-org/synapse/issues/5706), [\#5738](https://github.com/matrix-org/synapse/issues/5738), [\#5746](https://github.com/matrix-org/synapse/issues/5746), [\#5752](https://github.com/matrix-org/synapse/issues/5752), [\#5770](https://github.com/matrix-org/synapse/issues/5770), [\#5774](https://github.com/matrix-org/synapse/issues/5774), [\#5792](https://github.com/matrix-org/synapse/issues/5792), [\#5793](https://github.com/matrix-org/synapse/issues/5793))
- Improve caching when fetching `get_filtered_current_state_ids`. ([\#5713](https://github.com/matrix-org/synapse/issues/5713))
- Don't accept opentracing data from clients. ([\#5715](https://github.com/matrix-org/synapse/issues/5715))
- Speed up PostgreSQL unit tests in CI. ([\#5717](https://github.com/matrix-org/synapse/issues/5717))
- Update the coding style document. ([\#5719](https://github.com/matrix-org/synapse/issues/5719))
- Improve database query performance when recording retry intervals for remote hosts. ([\#5720](https://github.com/matrix-org/synapse/issues/5720))
- Add a set of opentracing utils. ([\#5722](https://github.com/matrix-org/synapse/issues/5722))
- Cache result of get_version_string to reduce overhead of `/version` federation requests. ([\#5730](https://github.com/matrix-org/synapse/issues/5730))
- Return 'user_type' in admin API user endpoints results. ([\#5731](https://github.com/matrix-org/synapse/issues/5731))
- Don't package the sytest test blacklist file. ([\#5733](https://github.com/matrix-org/synapse/issues/5733))
- Replace uses of returnValue with plain return, as returnValue is not needed on Python 3. ([\#5736](https://github.com/matrix-org/synapse/issues/5736))
- Blacklist some flakey tests in worker mode. ([\#5740](https://github.com/matrix-org/synapse/issues/5740))
- Fix some error cases in the caching layer. ([\#5749](https://github.com/matrix-org/synapse/issues/5749))
- Add a prometheus metric for pending cache lookups. ([\#5750](https://github.com/matrix-org/synapse/issues/5750))
- Stop trying to fetch events with event_id=None. ([\#5753](https://github.com/matrix-org/synapse/issues/5753))
- Convert RedactionTestCase to modern test style. ([\#5768](https://github.com/matrix-org/synapse/issues/5768))
- Allow looping calls to be given arguments. ([\#5780](https://github.com/matrix-org/synapse/issues/5780))
- Set the logs emitted when checking typing and presence timeouts to DEBUG level, not INFO. ([\#5785](https://github.com/matrix-org/synapse/issues/5785))
- Remove DelayedCall debugging from the test suite, as it is no longer required in the vast majority of Synapse's tests. ([\#5787](https://github.com/matrix-org/synapse/issues/5787))
- Remove some spurious exceptions from the logs where we failed to talk to a remote server. ([\#5790](https://github.com/matrix-org/synapse/issues/5790))
- Improve performance when making `.well-known` requests by sharing the SSL options between requests. ([\#5794](https://github.com/matrix-org/synapse/issues/5794))
- Disable codecov GitHub comments on PRs. ([\#5796](https://github.com/matrix-org/synapse/issues/5796))
- Don't allow clients to send tombstone events that reference the room it's sent in. ([\#5801](https://github.com/matrix-org/synapse/issues/5801))
- Deny redactions of events sent in a different room. ([\#5802](https://github.com/matrix-org/synapse/issues/5802))
- Deny sending well known state types as non-state events. ([\#5805](https://github.com/matrix-org/synapse/issues/5805))
- Handle incorrectly encoded query params correctly by returning a 400. ([\#5808](https://github.com/matrix-org/synapse/issues/5808))
- Handle pusher being deleted during processing rather than logging an exception. ([\#5809](https://github.com/matrix-org/synapse/issues/5809))
- Return 502 not 500 when failing to reach any remote server. ([\#5810](https://github.com/matrix-org/synapse/issues/5810))
- Reduce global pauses in the events stream caused by expensive state resolution during persistence. ([\#5826](https://github.com/matrix-org/synapse/issues/5826))
- Add a lower bound to well-known lookup cache time to avoid repeated lookups. ([\#5836](https://github.com/matrix-org/synapse/issues/5836))
- Whitelist history visibility sytests in worker mode tests. ([\#5843](https://github.com/matrix-org/synapse/issues/5843))
Synapse 1.2.1 (2019-07-26)
==========================
@@ -8,9 +107,9 @@ This release includes *four* security fixes:
- Prevent an attack where a federated server could send redactions for arbitrary events in v1 and v2 rooms. ([\#5767](https://github.com/matrix-org/synapse/issues/5767))
- Prevent a denial-of-service attack where cycles of redaction events would make Synapse spin infinitely. Thanks to `@lrizika:matrix.org` for identifying and responsibly disclosing this issue. ([0f2ecb961](https://github.com/matrix-org/synapse/commit/0f2ecb961))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @Dylanger for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @dylangerdaly for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Fix a vulnerability where a federated server could spoof read-receipts from
users on other servers. Thanks to @Dylanger for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
users on other servers. Thanks to @dylangerdaly for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))
Additionally, the following fix was in Synapse **1.2.0**, but was not correctly
identified during the original release:

View File

@@ -419,12 +419,11 @@ If Synapse is not configured with an SMTP server, password reset via email will
## Registering a user
You will need at least one user on your server in order to use a Matrix
client. Users can be registered either via a Matrix client, or via a
commandline script.
The easiest way to create a new user is to do so from a client like [Riot](https://riot.im).
To get started, it is easiest to use the command line to register new
users. This can be done as follows:
Alternatively you can do so from the command line if you have installed via pip.
This can be done as follows:
```
$ source ~/synapse/env/bin/activate

View File

@@ -7,7 +7,6 @@ include demo/README
include demo/demo.tls.dh
include demo/*.py
include demo/*.sh
include sytest-blacklist
recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.sql.postgres
@@ -34,6 +33,7 @@ exclude Dockerfile
exclude .dockerignore
exclude test_postgresql.sh
exclude .editorconfig
exclude sytest-blacklist
include pyproject.toml
recursive-include changelog.d *

View File

@@ -1 +0,0 @@
Synapse can now be configured to not join remote rooms of a given "complexity" (currently, state events). This option can be used to prevent adverse performance on resource-constrained homeservers.

View File

@@ -1 +0,0 @@
Python 2 has been removed from the CI.

1
changelog.d/5633.bugfix Normal file
View File

@@ -0,0 +1 @@
Don't create broken room when power_level_content_override.users does not contain creator_id.

1
changelog.d/5680.misc Normal file
View File

@@ -0,0 +1 @@
Lay the groundwork for structured logging output.

1
changelog.d/5771.feature Normal file
View File

@@ -0,0 +1 @@
Make Opentracing work in worker mode.

1
changelog.d/5776.misc Normal file
View File

@@ -0,0 +1 @@
Update opentracing docs to use the unified `trace` method.

1
changelog.d/5844.misc Normal file
View File

@@ -0,0 +1 @@
Retry well-known lookup before the cache expires, giving a grace period where the remote well-known can be down but we still use the old result.

1
changelog.d/5845.feature Normal file
View File

@@ -0,0 +1 @@
Add an admin API to purge old rooms from the database.

1
changelog.d/5850.feature Normal file
View File

@@ -0,0 +1 @@
Add retry to well-known lookups if we have recently seen a valid well-known record for the server.

1
changelog.d/5852.feature Normal file
View File

@@ -0,0 +1 @@
Pass opentracing contexts between servers when transmitting EDUs.

1
changelog.d/5855.misc Normal file
View File

@@ -0,0 +1 @@
Opentracing for room and e2e keys.

1
changelog.d/5856.feature Normal file
View File

@@ -0,0 +1 @@
Add a tag recording a request's authenticated entity and corresponding servlet in opentracing.

1
changelog.d/5857.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix database index so that different backup versions can have the same sessions.

1
changelog.d/5859.feature Normal file
View File

@@ -0,0 +1 @@
Add unstable support for MSC2197 (filtered search requests over federation), in order to allow upcoming room directory query performance improvements.

1
changelog.d/5860.misc Normal file
View File

@@ -0,0 +1 @@
Remove log line for debugging issue #5407.

1
changelog.d/5863.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix Synapse looking for config options `password_reset_failure_template` and `password_reset_success_template`, when they are actually `password_reset_template_failure_html`, `password_reset_template_success_html`.

1
changelog.d/5864.feature Normal file
View File

@@ -0,0 +1 @@
Correctly retry all hosts returned from SRV when we fail to connect.

1
changelog.d/5877.removal Normal file
View File

@@ -0,0 +1 @@
Remove shared secret registration from client/r0/register endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.

1
changelog.d/5878.feature Normal file
View File

@@ -0,0 +1 @@
Add admin API endpoint for setting whether or not a user is a server administrator.

1
changelog.d/5885.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix stack overflow when recovering an appservice which had an outage.

1
changelog.d/5886.misc Normal file
View File

@@ -0,0 +1 @@
Refactor the Appservice scheduler code.

1
changelog.d/5893.misc Normal file
View File

@@ -0,0 +1 @@
Drop some unused tables.

1
changelog.d/5894.misc Normal file
View File

@@ -0,0 +1 @@
Add missing index on users_in_public_rooms to improve the performance of directory queries.

1
changelog.d/5895.feature Normal file
View File

@@ -0,0 +1 @@
Add config option to sign remote key query responses with a separate key.

1
changelog.d/5896.misc Normal file
View File

@@ -0,0 +1 @@
Improve the logging when we have an error when fetching signing keys.

1
changelog.d/5900.feature Normal file
View File

@@ -0,0 +1 @@
Add support for config templating.

1
changelog.d/5902.feature Normal file
View File

@@ -0,0 +1 @@
Users with the type of "support" or "bot" are no longer required to consent.

1
changelog.d/5904.feature Normal file
View File

@@ -0,0 +1 @@
Let synctl accept a directory of config files.

1
changelog.d/5906.feature Normal file
View File

@@ -0,0 +1 @@
Increase max display name size to 256.

1
changelog.d/5909.misc Normal file
View File

@@ -0,0 +1 @@
Fix error message which referred to public_base_url instead of public_baseurl. Thanks to @aaronraimist for the fix!

1
changelog.d/5911.misc Normal file
View File

@@ -0,0 +1 @@
Add support for database engine-specific schema deltas, based on file extension.

1
changelog.d/5914.feature Normal file
View File

@@ -0,0 +1 @@
Add admin API endpoint for getting whether or not a user is a server administrator.

1
changelog.d/5920.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a cache-invalidation bug for worker-based deployments.

1
changelog.d/5922.misc Normal file
View File

@@ -0,0 +1 @@
Update Buildkite pipeline to use plugins instead of buildkite-agent commands.

1
changelog.d/5926.misc Normal file
View File

@@ -0,0 +1 @@
Add link in sample config to the logging config schema.

1
changelog.d/5930.misc Normal file
View File

@@ -0,0 +1 @@
Add temporary flag to /versions in unstable_features to indicate this Synapse supports receiving id_access_token parameters on calls to identity server-proxying endpoints.

1
changelog.d/5931.misc Normal file
View File

@@ -0,0 +1 @@
Remove unnecessary parentheses in return statements.

1
changelog.d/5938.misc Normal file
View File

@@ -0,0 +1 @@
Remove unused jenkins/prepare_sytest.sh file.

View File

@@ -51,4 +51,4 @@ TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id
# finally start pruning media:
###############################################################################
set -x # for debugging the generated string
curl --header "Authorization: Bearer $TOKEN" -v POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"
curl --header "Authorization: Bearer $TOKEN" -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"

View File

@@ -4,7 +4,8 @@ After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse

View File

@@ -2,7 +2,8 @@
Description=Synapse Matrix Homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse

View File

@@ -14,7 +14,9 @@
Description=Synapse Matrix homeserver
[Service]
Type=simple
Type=notify
NotifyAccess=main
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-abort
User=synapse
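These unit-file changes switch the service to `Type=notify`, under which systemd waits for the daemon to announce readiness over `$NOTIFY_SOCKET`. A minimal, generic sketch of that protocol (not Synapse's actual implementation) is:

```python
# Generic sd_notify sketch: send "READY=1" as a datagram to the socket that
# systemd passes in $NOTIFY_SOCKET when the unit uses Type=notify.
import os
import socket


def notify_ready() -> None:
    addr = os.environ.get("NOTIFY_SOCKET")
    if not addr:
        return  # not started by systemd with Type=notify
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.sendall(b"READY=1")
```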

16
debian/changelog vendored
View File

@@ -1,8 +1,18 @@
matrix-synapse-py3 (1.2.1) stable; urgency=medium
matrix-synapse-py3 (1.3.1) stable; urgency=medium
* New synapse release 1.2.1.
* New synapse release 1.3.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 26 Jul 2019 11:32:47 +0100
-- Synapse Packaging team <packages@matrix.org> Sat, 17 Aug 2019 09:15:49 +0100
matrix-synapse-py3 (1.3.0) stable; urgency=medium
[ Andrew Morgan ]
* Remove libsqlite3-dev from required build dependencies.
[ Synapse Packaging team ]
* New synapse release 1.3.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 15 Aug 2019 12:04:23 +0100
matrix-synapse-py3 (1.2.0) stable; urgency=medium

1
debian/control vendored
View File

@@ -15,7 +15,6 @@ Build-Depends:
python3-setuptools,
python3-pip,
python3-venv,
libsqlite3-dev,
tar,
Standards-Version: 3.9.8
Homepage: https://github.com/matrix-org/synapse

View File

@@ -29,7 +29,7 @@ for port in 8080 8081 8082; do
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
@@ -43,7 +43,7 @@ for port in 8080 8081 8082; do
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
@@ -68,7 +68,7 @@ for port in 8080 8081 8082; do
# Generate tls keys
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
@@ -120,7 +120,6 @@ for port in 8080 8081 8082; do
python3 -m synapse.app.homeserver \
--config-path "$DIR/etc/$port.config" \
-D \
-vv \
popd
done

View File

@@ -42,6 +42,11 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
###
FROM ${distro}
# Get the distro we want to pull from as a dynamic build variable
# (We need to define it in each build stage)
ARG distro=""
ENV distro ${distro}
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control

View File

@@ -17,7 +17,7 @@ By default, the image expects a single volume, located at ``/data``, that will h
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` coud be stored on a large but low
disposal. For instance, ``/data/media`` could be stored on a large but low
performance hdd storage while other files could be stored on high performance
endpoints.
@@ -27,8 +27,8 @@ configuration file there. Multiple application services are supported.
## Generating a configuration file
The first step is to genearte a valid config file. To do this, you can run the
image with the `generate` commandline option.
The first step is to generate a valid config file. To do this, you can run the
image with the `generate` command line option.
You will need to specify values for the `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS` environment variable, and mount a docker volume to store
@@ -59,7 +59,7 @@ The following environment variables are supported in `generate` mode:
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
such as the datatase and media store. Defaults to `/data`.
such as the database and media store. Defaults to `/data`.
* `UID`, `GID`: the user id and group id to use for creating the data
directories. Defaults to `991`, `991`.
@@ -115,7 +115,7 @@ not given).
To migrate from a dynamic configuration file to a static one, run the docker
container once with the environment variables set, and `migrate_config`
commandline option. For example:
command line option. For example:
```
docker run -it --rm \

View File

@@ -4,7 +4,8 @@
set -ex
DIST=`lsb_release -c -s`
# Get the codename from distro env
DIST=`cut -d ':' -f2 <<< $distro`
# we get a read-only copy of the source: make a writeable copy
cp -aT /synapse/source /synapse/build

View File

@@ -0,0 +1,18 @@
Purge room API
==============
This API will remove all trace of a room from your database.
All local users must have left the room before it can be removed.
The API is:
```
POST /_synapse/admin/v1/purge_room
{
"room_id": "!room:id"
}
```
You must authenticate using the access token of an admin user.
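For illustration, the endpoint above can be called with only the standard library; the homeserver URL, admin token and room ID below are placeholders.

```python
# Hypothetical usage of the purge_room admin API documented above.
import json
import urllib.request

HOMESERVER = "https://matrix.example.com"  # placeholder
ADMIN_TOKEN = "<admin access token>"       # placeholder

req = urllib.request.Request(
    HOMESERVER + "/_synapse/admin/v1/purge_room",
    data=json.dumps({"room_id": "!room:id"}).encode("utf-8"),
    headers={
        "Authorization": "Bearer " + ADMIN_TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```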

View File

@@ -84,3 +84,42 @@ with a body of:
}
including an ``access_token`` of a server admin.
Get whether a user is a server administrator or not
===================================================
The api is::
GET /_synapse/admin/v1/users/<user_id>/admin
including an ``access_token`` of a server admin.
A response body like the following is returned:
.. code:: json
{
"admin": true
}
Change whether a user is a server administrator or not
======================================================
Note that you cannot demote yourself.
The api is::
PUT /_synapse/admin/v1/users/<user_id>/admin
with a body of:
.. code:: json
{
"admin": true
}
including an ``access_token`` of a server admin.
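A usage sketch for the two endpoints above, using only the standard library; the server URL, token and user ID are placeholders.

```python
# Query and then set a user's server-admin bit via the endpoints documented above.
import json
import urllib.parse
import urllib.request

HOMESERVER = "https://matrix.example.com"   # placeholder
ADMIN_TOKEN = "<admin access token>"        # placeholder
USER_ID = "@alice:example.com"              # placeholder
HEADERS = {"Authorization": "Bearer " + ADMIN_TOKEN}

url = "%s/_synapse/admin/v1/users/%s/admin" % (HOMESERVER, urllib.parse.quote(USER_ID))

# GET: is the user currently an admin?
with urllib.request.urlopen(urllib.request.Request(url, headers=HEADERS)) as resp:
    print(json.load(resp))  # e.g. {"admin": false}

# PUT: promote the user to server administrator.
put = urllib.request.Request(
    url,
    data=json.dumps({"admin": True}).encode("utf-8"),
    headers=dict(HEADERS, **{"Content-Type": "application/json"}),
    method="PUT",
)
urllib.request.urlopen(put).close()
```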

View File

@@ -1,4 +1,8 @@
# Code Style
Code Style
==========
Formatting tools
----------------
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
@@ -6,20 +10,20 @@ in code.
The necessary tools are detailed below.
## Formatting tools
- **black**
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all comitted code is properly
formatted.
The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
opinionated code formatter, ensuring all comitted code is properly
formatted.
First install ``black`` with::
First install ``black`` with::
pip install --upgrade black
pip install --upgrade black
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
Have ``black`` auto-format your code (it shouldn't change any functionality)
with::
black . --exclude="\.tox|build|env"
black . --exclude="\.tox|build|env"
- **flake8**
@@ -54,17 +58,16 @@ functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
General rules
-------------
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
- **Docstrings**: should follow the `google code style
<https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
@@ -73,6 +76,8 @@ takes a while and is very resource intensive.
- **Imports**:
- Imports should be sorted by ``isort`` as described above.
- Prefer to import classes and functions rather than packages or modules.
Example::
@@ -92,25 +97,84 @@ takes a while and is very resource intensive.
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
Configuration file format
-------------------------
The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
Synapse's configuration options for server administrators. Remember that many
readers will be unfamiliar with YAML and server administration in general, so
it is important that the file be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
* Sections should be separated with a heading consisting of a single line
prefixed and suffixed with ``##``. There should be **two** blank lines
before the section header, and **one** after.
* Each option should be listed in the file with the following format:
* A comment describing the setting. Each line of this comment should be
prefixed with a hash (``#``) and a space.
The comment should describe the default behaviour (ie, what happens if
the setting is omitted), as well as what the effect will be if the
setting is changed.
Often, the comment ends with something like "uncomment the
following to \<do action>".
* A line consisting of only ``#``.
* A commented-out example setting, prefixed with only ``#``.
For boolean (on/off) options, convention is that this example should be
the *opposite* to the default (so the comment will end with "Uncomment
the following to enable [or disable] \<feature\>." For other options,
the example should give some non-default value which is likely to be
useful to the reader.
* There should be a blank line between each option.
* Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in comment
lines starting ``# #``, as this is hard to read and confusing to
edit. Instead, leave the top-level config option uncommented, and follow
the conventions above for sub-options. Ensure that your code correctly
handles the top-level option being set to ``None`` (as it will be if no
sub-options are enabled).
* Lines should be wrapped at 80 characters.
Example::
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobnicator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code and is
maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
that the output from this script matches the desired format is left as an
exercise for the reader!

View File

@@ -148,7 +148,7 @@ call any other functions.
d = more_stuff()
result = yield d # also fine, of course
defer.returnValue(result)
return result
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")

View File

@@ -32,7 +32,7 @@ It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.
For OpenTracing concepts see
For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.
For more information about Jaeger's implementation see
@@ -79,7 +79,7 @@ Homeserver whitelisting
The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
spans contexts to another homeserver.
spans contexts to another homeserver.
Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead to
@@ -92,6 +92,29 @@ two problems, namely:
but that doesn't prevent another server sending you baggage which will be logged
to OpenTracing's logs.
==========
EDU FORMAT
==========
EDUs can contain tracing data in their content. This is not specced but
it could be of interest for other homeservers.
EDU format (if you're using jaeger):
.. code-block:: json
{
"edu_type": "type",
"content": {
"org.matrix.opentracing_context": {
"uber-trace-id": "fe57cf3e65083289"
}
}
}
Though you don't have to use jaeger, you must inject the span context into
`org.matrix.opentracing_context` using the opentracing `Format.TEXT_MAP` inject method.
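A minimal sketch of that injection using the `opentracing` package (assuming an active span and a configured tracer; field names follow the example above):

```python
# Inject the active span context into an EDU's content under
# org.matrix.opentracing_context, per the format described above.
import opentracing
from opentracing.propagation import Format

tracer = opentracing.global_tracer()
carrier = {}
if tracer.active_span is not None:
    tracer.inject(tracer.active_span.context, Format.TEXT_MAP, carrier)

edu = {
    "edu_type": "type",
    "content": {"org.matrix.opentracing_context": carrier},
}
```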
==================
Configuring Jaeger
==================

View File

@@ -205,9 +205,9 @@ listeners:
#
- port: 8008
tls: false
bind_addresses: ['::1', '127.0.0.1']
type: http
x_forwarded: true
bind_addresses: ['::1', '127.0.0.1']
resources:
- names: [client, federation]
@@ -280,14 +280,20 @@ listeners:
# Resource-constrained Homeserver Settings
#
# If limit_large_remote_room_joins is True, the room complexity will be
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_large_remote_room_complexity, it will disallow joining or
# limit_remote_rooms.complexity, it will disallow joining or
# instantly leave.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_large_remote_room_joins: True
#limit_large_remote_room_complexity: 1.0
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
@@ -386,10 +392,10 @@ listeners:
# permission to listen on port 80.
#
acme:
# ACME support is disabled by default. Uncomment the following line
# (and tls_certificate_path and tls_private_key_path above) to enable it.
# ACME support is disabled by default. Set this to `true` and uncomment
# tls_certificate_path and tls_private_key_path above to enable it.
#
#enabled: true
enabled: False
# Endpoint to use to request certificates. If you only want to test,
# use Let's Encrypt's staging url:
@@ -400,17 +406,17 @@ acme:
# Port number to listen on for the HTTP-01 challenge. Change this if
# you are forwarding connections through Apache/Nginx/etc.
#
#port: 80
port: 80
# Local addresses to listen on for incoming connections.
# Again, you may want to change this if you are forwarding connections
# through Apache/Nginx/etc.
#
#bind_addresses: ['::', '0.0.0.0']
bind_addresses: ['::', '0.0.0.0']
# How many days remaining on a certificate before it is renewed.
#
#reprovision_threshold: 30
reprovision_threshold: 30
# The domain that the certificate should be for. Normally this
# should be the same as your Matrix domain (i.e., 'server_name'), but,
@@ -424,7 +430,7 @@ acme:
#
# If not set, defaults to your 'server_name'.
#
#domain: matrix.example.com
domain: matrix.example.com
# file to use for the account key. This will be generated if it doesn't
# exist.
@@ -479,7 +485,8 @@ database:
## Logging ##
# A yaml python logging config file
# A yaml python logging config file as described by
# https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
#
log_config: "CONFDIR/SERVERNAME.log.config"
@@ -559,6 +566,13 @@ log_config: "CONFDIR/SERVERNAME.log.config"
## Media Store ##
# Enable the media store service in the Synapse master. Uncomment the
# following if you are using a separate media store worker.
#
#enable_media_repo: false
# Directory where uploaded images and attachments are stored.
#
media_store_path: "DATADIR/media_store"
@@ -796,6 +810,16 @@ uploads_path: "DATADIR/uploads"
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
@@ -936,10 +960,6 @@ uploads_path: "DATADIR/uploads"
#
# macaroon_secret_key: <PRIVATE STRING>
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
@@ -1008,6 +1028,14 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
#
#trusted_key_servers:
# - server_name: "matrix.org"
#
# The signing keys to use when acting as a trusted key server. If not specified
# defaults to the server signing key.
#
# Can contain multiple keys, one per line.
#
#key_server_signing_keys_path: "key_server_signing_keys.key"
# Enable SAML2 for registration and login. Uses pysaml2.
@@ -1441,3 +1469,19 @@ opentracing:
#
#homeserver_whitelist:
# - ".*"
# Jaeger can be configured to sample traces at different rates.
# All configuration options provided by Jaeger can be set here.
# Jaeger's configuration mostly related to trace sampling which
# is documented here:
# https://www.jaegertracing.io/docs/1.13/sampling/.
#
#jaeger_config:
# sampler:
# type: const
# param: 1
# Logging whether spans were started and reported
#
# logging:
# false

View File

@@ -0,0 +1,83 @@
# Structured Logging
A structured logging system can be useful when your logs are destined for a machine to parse and process. By maintaining its machine-readable characteristics, it enables more efficient searching and aggregations when consumed by software such as the "ELK stack".
Synapse's structured logging system is configured via the file that Synapse's `log_config` config option points to. The file must be YAML and contain `structured: true`. It must contain a list of "drains" (places where logs go to).
A structured logging configuration looks similar to the following:
```yaml
structured: true
loggers:
synapse:
level: INFO
synapse.storage.SQL:
level: WARNING
drains:
console:
type: console
location: stdout
file:
type: file_json
location: homeserver.log
```
The above logging config will set Synapse to the 'INFO' logging level by default, with the SQL layer at 'WARNING', and will have two logging drains (to the console and to a file, stored as JSON).
## Drain Types
Drain types can be specified by the `type` key.
### `console`
Outputs human-readable logs to the console.
Arguments:
- `location`: Either `stdout` or `stderr`.
### `console_json`
Outputs machine-readable JSON logs to the console.
Arguments:
- `location`: Either `stdout` or `stderr`.
### `console_json_terse`
Outputs machine-readable JSON logs to the console, separated by newlines. This
format is not designed to be read and re-formatted into human-readable text, but
is optimal for a logging aggregation system.
Arguments:
- `location`: Either `stdout` or `stderr`.
### `file`
Outputs human-readable logs to a file.
Arguments:
- `location`: An absolute path to the file to log to.
### `file_json`
Outputs machine-readable logs to a file.
Arguments:
- `location`: An absolute path to the file to log to.
### `network_json_terse`
Delivers machine-readable JSON logs to a log aggregator over TCP. This is
compatible with LogStash's TCP input with the codec set to `json_lines`.
Arguments:
- `host`: Hostname or IP address of the log aggregator.
- `port`: Numerical port to contact on the host.

View File

@@ -206,6 +206,13 @@ Handles the media repository. It can handle all endpoints starting with::
/_matrix/media/
And the following regular expressions matching media-specific administration
APIs::
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
^/_synapse/admin/v1/quarantine_media/.*$
You should also set ``enable_media_repo: False`` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
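For illustration, routing a request path against the patterns listed above might look like this (a sketch, not Synapse's actual routing code):

```python
# Check whether a request path belongs to the media repository worker,
# using the endpoint patterns listed above.
import re

MEDIA_WORKER_PATTERNS = [
    r"^/_matrix/media/",
    r"^/_synapse/admin/v1/purge_media_cache$",
    r"^/_synapse/admin/v1/room/.*/media$",
    r"^/_synapse/admin/v1/quarantine_media/.*$",
]


def handled_by_media_worker(path: str) -> bool:
    return any(re.match(pattern, path) for pattern in MEDIA_WORKER_PATTERNS)


print(handled_by_media_worker("/_synapse/admin/v1/purge_media_cache"))  # True
```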

View File

@@ -1,16 +0,0 @@
#! /bin/bash
set -eux
cd "`dirname $0`/.."
TOX_DIR=$WORKSPACE/.tox
mkdir -p $TOX_DIR
if ! [ $TOX_DIR -ef .tox ]; then
ln -s "$TOX_DIR" .tox
fi
# set up the virtualenv
tox -e py27 --notest -v

View File

@@ -35,4 +35,4 @@ try:
except ImportError:
pass
__version__ = "1.2.1"
__version__ = "1.3.1"

View File

@@ -22,6 +22,7 @@ from netaddr import IPAddress
from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
@@ -128,7 +129,7 @@ class Auth(object):
)
self._check_joined_room(member, user_id, room_id)
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_user_was_in_room(self, room_id, user_id):
@@ -156,13 +157,13 @@ class Auth(object):
if forgot:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
defer.returnValue(member)
return member
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.is_host_joined(room_id, host)
defer.returnValue(latest_event_ids)
return latest_event_ids
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
@@ -178,6 +179,7 @@ class Auth(object):
def get_public_keys(self, invite_event):
return event_auth.get_public_keys(invite_event)
@opentracing.trace
@defer.inlineCallbacks
def get_user_by_req(
self, request, allow_guest=False, rights="access", allow_expired=False
@@ -209,6 +211,7 @@ class Auth(object):
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
opentracing.set_tag("authenticated_entity", user_id)
if ip_addr and self.hs.config.track_appservice_user_ips:
yield self.store.insert_client_ip(
@@ -219,9 +222,7 @@ class Auth(object):
device_id="dummy-device", # stubbed
)
defer.returnValue(
synapse.types.create_requester(user_id, app_service=app_service)
)
return synapse.types.create_requester(user_id, app_service=app_service)
user_info = yield self.get_user_by_access_token(access_token, rights)
user = user_info["user"]
@@ -261,11 +262,10 @@ class Auth(object):
)
request.authenticated_entity = user.to_string()
opentracing.set_tag("authenticated_entity", user.to_string())
defer.returnValue(
synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
return synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
except KeyError:
raise MissingClientTokenError()
@@ -276,25 +276,25 @@ class Auth(object):
self.get_access_token_from_request(request)
)
if app_service is None:
defer.returnValue((None, None))
return None, None
if app_service.ip_range_whitelist:
ip_address = IPAddress(self.hs.get_ip_from_request(request))
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
return None, None
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
return app_service.sender, app_service
user_id = request.args[b"user_id"][0].decode("utf8")
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
return app_service.sender, app_service
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
defer.returnValue((user_id, app_service))
return user_id, app_service
@defer.inlineCallbacks
def get_user_by_access_token(self, token, rights="access"):
@@ -330,7 +330,7 @@ class Auth(object):
msg="Access token has expired", soft_logout=True
)
defer.returnValue(r)
return r
# otherwise it needs to be a valid macaroon
try:
@@ -378,7 +378,7 @@ class Auth(object):
}
else:
raise RuntimeError("Unknown rights setting %s", rights)
defer.returnValue(ret)
return ret
except (
_InvalidMacaroonException,
pymacaroons.exceptions.MacaroonException,
@@ -414,21 +414,16 @@ class Auth(object):
try:
user_id = self.get_user_id_from_macaroon(macaroon)
has_expiry = False
guest = False
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith("time "):
has_expiry = True
elif caveat.caveat_id == "guest = true":
if caveat.caveat_id == "guest = true":
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
)
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise InvalidClientTokenError("Invalid macaroon passed.")
if not has_expiry and rights == "access":
if rights == "access":
self.token_cache[token] = (user_id, guest)
return user_id, guest
@@ -454,7 +449,7 @@ class Auth(object):
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
def validate_macaroon(self, macaroon, type_string, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
@@ -462,7 +457,6 @@ class Auth(object):
macaroon(pymacaroons.Macaroon): The macaroon to validate
type_string(str): The kind of token required (e.g. "access",
"delete_pusher")
verify_expiry(bool): Whether to verify whether the macaroon has expired.
user_id (str): The user_id required
"""
v = pymacaroons.Verifier()
@@ -475,19 +469,7 @@ class Auth(object):
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
# verify_expiry should really always be True, but there exist access
# tokens in the wild which expire when they should not, so we can't
# enforce expiry yet (so we have to allow any caveat starting with
# 'time < ' in access tokens).
#
# On the other hand, short-term login tokens (as used by CAS login, for
# example) have an expiry time which we do want to enforce.
if verify_expiry:
v.satisfy_general(self._verify_expiry)
else:
v.satisfy_general(lambda c: c.startswith("time < "))
v.satisfy_general(self._verify_expiry)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
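For context on the hunk above, which removes the `verify_expiry` switch so that the expiry caveat is always enforced: a standalone sketch of the pymacaroons verification pattern in use. The key, user id and caveat values are invented for illustration.

```python
# Sketch of exact/general caveat verification with pymacaroons (invented values).
import time

import pymacaroons

SECRET_KEY = "not-a-real-key"

macaroon = pymacaroons.Macaroon(
    location="example.test", identifier="key1", key=SECRET_KEY
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("type = access")
macaroon.add_first_party_caveat("user_id = @alice:example.test")
macaroon.add_first_party_caveat("nonce = abcdef")
macaroon.add_first_party_caveat("time < %d" % ((time.time() + 3600) * 1000))


def verify_expiry(caveat):
    # Mirrors the always-on expiry check: accept only "time < N" caveats whose
    # deadline (in milliseconds) is still in the future.
    prefix = "time < "
    if not caveat.startswith(prefix):
        return False
    return time.time() * 1000 < int(caveat[len(prefix) :])


v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = access")
v.satisfy_exact("user_id = @alice:example.test")
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.satisfy_general(verify_expiry)
assert v.verify(macaroon, SECRET_KEY)
```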
@@ -506,7 +488,7 @@ class Auth(object):
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
defer.returnValue(None)
return None
# we use ret.get() below because *lots* of unit tests stub out
# get_user_by_access_token in a way where it only returns a couple of
@@ -518,7 +500,7 @@ class Auth(object):
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
defer.returnValue(user_info)
return user_info
def get_appservice_by_req(self, request):
token = self.get_access_token_from_request(request)
@@ -543,7 +525,7 @@ class Auth(object):
@defer.inlineCallbacks
def compute_auth_events(self, event, current_state_ids, for_verification=False):
if event.type == EventTypes.Create:
defer.returnValue([])
return []
auth_ids = []
@@ -604,7 +586,7 @@ class Auth(object):
if member_event.content["membership"] == Membership.JOIN:
auth_ids.append(member_event.event_id)
defer.returnValue(auth_ids)
return auth_ids
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
@@ -618,7 +600,7 @@ class Auth(object):
is_admin = yield self.is_server_admin(user)
if is_admin:
defer.returnValue(True)
return True
user_id = user.to_string()
yield self.check_joined_room(room_id, user_id)
@@ -712,7 +694,7 @@ class Auth(object):
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
return member_event.membership, member_event.event_id
except AuthError:
visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
@@ -721,7 +703,7 @@ class Auth(object):
visibility
and visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return Membership.JOIN, None
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN


@@ -122,7 +122,8 @@ class UserTypes(object):
"""
SUPPORT = "support"
ALL_USER_TYPES = (SUPPORT,)
BOT = "bot"
ALL_USER_TYPES = (SUPPORT, BOT)
class RelationTypes(object):


@@ -61,6 +61,7 @@ class Codes(object):
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
class CodeMessageException(RuntimeError):
@@ -151,7 +152,7 @@ class UserDeactivatedError(SynapseError):
msg (str): The human-readable error message
"""
super(UserDeactivatedError, self).__init__(
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
)


@@ -132,7 +132,7 @@ class Filtering(object):
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = yield self.store.get_user_filter(user_localpart, filter_id)
defer.returnValue(FilterCollection(result))
return FilterCollection(result)
def add_user_filter(self, user_localpart, user_filter):
self.check_valid_filter(user_filter)


@@ -15,7 +15,9 @@
import gc
import logging
import os
import signal
import socket
import sys
import traceback
@@ -34,18 +36,20 @@ from synapse.util.versionstring import get_version_string
logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict
_sighup_callbacks = []
def register_sighup(func):
def register_sighup(func, *args, **kwargs):
"""
Register a function to be called when a SIGHUP occurs.
Args:
func (function): Function to be called when sent a SIGHUP signal.
Will be called with a single argument, the homeserver.
Will be called with the homeserver as its first argument.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
_sighup_callbacks.append(func)
_sighup_callbacks.append((func, args, kwargs))
def start_worker_reactor(appname, config, run_command=reactor.run):
@@ -242,8 +246,14 @@ def start(hs, listeners=None):
if hasattr(signal, "SIGHUP"):
def handle_sighup(*args, **kwargs):
for i in _sighup_callbacks:
i(hs)
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sdnotify(b"RELOADING=1")
for i, args, kwargs in _sighup_callbacks:
i(hs, *args, **kwargs)
sdnotify(b"READY=1")
signal.signal(signal.SIGHUP, handle_sighup)
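A self-contained illustration of the registry shape introduced above, where callbacks are stored as `(func, args, kwargs)` tuples so that extra arguments can be bound at registration time. The homeserver object here is just a stand-in.

```python
# Stand-in registry mirroring the (func, args, kwargs) storage above.
_sighup_callbacks = []


def register_sighup(func, *args, **kwargs):
    _sighup_callbacks.append((func, args, kwargs))


def handle_sighup(hs):
    # In the real handler this loop is bracketed by sdnotify(b"RELOADING=1")
    # and sdnotify(b"READY=1"); omitted here.
    for func, args, kwargs in _sighup_callbacks:
        func(hs, *args, **kwargs)


class FakeHomeServer(object):
    hostname = "example.test"


def reload_config(hs, callback=None):
    print("reloading config for %s (callback=%r)" % (hs.hostname, callback))


register_sighup(reload_config, callback="reload_logging")
handle_sighup(FakeHomeServer())
```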
@@ -260,6 +270,7 @@ def start(hs, listeners=None):
hs.get_datastore().start_profiling()
setup_sentry(hs)
setup_sdnotify(hs)
except Exception:
traceback.print_exc(file=sys.stderr)
reactor = hs.get_reactor()
@@ -292,6 +303,21 @@ def setup_sentry(hs):
scope.set_tag("worker_name", name)
def setup_sdnotify(hs):
"""Adds process state hooks to tell systemd what we are up to.
"""
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
hs.get_reactor().addSystemEventTrigger(
"after", "startup", sdnotify, b"READY=1\nMAINPID=%i" % (os.getpid(),)
)
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", sdnotify, b"STOPPING=1"
)
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
"""Replaces the resolver with one that limits the number of in flight DNS
requests.
@@ -385,3 +411,35 @@ class _DeferredResolutionReceiver(object):
def resolutionComplete(self):
self._deferred.callback(())
self._receiver.resolutionComplete()
sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")
def sdnotify(state):
"""
Send a notification to systemd, if the NOTIFY_SOCKET env var is set.
This function is based on the sdnotify python package, but since it's only a few
lines of code, it's easier to duplicate it here than to add a dependency on a
package which many OSes don't include as a matter of principle.
Args:
state (bytes): notification to send
"""
if not isinstance(state, bytes):
raise TypeError("sdnotify should be called with a bytes")
if not sdnotify_sockaddr:
return
addr = sdnotify_sockaddr
if addr[0] == "@":
addr = "\0" + addr[1:]
try:
with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
sock.connect(addr)
sock.sendall(state)
except Exception as e:
# this is a bit surprising, since we don't expect to have a NOTIFY_SOCKET
# unless systemd is expecting us to notify it.
logger.warning("Unable to send notification to systemd: %s", e)


@@ -227,8 +227,6 @@ def start(config_options):
config.start_pushers = False
config.send_federation = False
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -241,6 +239,8 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
# We use task.react as the basic run command as it correctly handles tearing


@@ -141,8 +141,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.appservice"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -167,8 +165,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ps, config, use_worker_options=True)
ps.setup()
reactor.callWhenRunning(_base.start, ps, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ps, config.worker_listeners
)
_base.start_worker_reactor("synapse-appservice", config)


@@ -179,8 +179,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.client_reader"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -193,8 +191,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-client-reader", config)


@@ -175,8 +175,6 @@ def start(config_options):
assert config.worker_replication_http_port is not None
setup_logging(config, use_worker_options=True)
# This should only be done on the user directory worker or the master
config.update_user_directory = False
@@ -192,8 +190,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-event-creator", config)


@@ -160,8 +160,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.federation_reader"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -174,8 +172,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-reader", config)


@@ -171,8 +171,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.federation_sender"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -197,8 +195,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-sender", config)


@@ -70,12 +70,12 @@ class PresenceStatusStubServlet(RestServlet):
except HttpResponseException as e:
raise e.to_synapse_error()
defer.returnValue((200, result))
return 200, result
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
return 200, {}
class KeyUploadServlet(RestServlet):
@@ -126,11 +126,11 @@ class KeyUploadServlet(RestServlet):
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
defer.returnValue((200, result))
return 200, result
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
defer.returnValue((200, {"one_time_key_counts": result}))
return 200, {"one_time_key_counts": result}
class FrontendProxySlavedStore(
@@ -232,8 +232,6 @@ def start(config_options):
assert config.worker_main_http_uri is not None
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -246,8 +244,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-frontend-proxy", config)

synapse/app/homeserver.py Executable file → Normal file

@@ -341,8 +341,6 @@ def setup(config_options):
# generating config files and shouldn't try to continue.
sys.exit(0)
synapse.config.logger.setup_logging(config, use_worker_options=False)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -356,6 +354,8 @@ def setup(config_options):
database_engine=database_engine,
)
synapse.config.logger.setup_logging(hs, config, use_worker_options=False)
logger.info("Preparing database: %s...", config.database_config["name"])
try:
@@ -406,7 +406,7 @@ def setup(config_options):
if provision:
yield acme.provision_certificate()
defer.returnValue(provision)
return provision
@defer.inlineCallbacks
def reprovision_acme():


@@ -26,6 +26,7 @@ from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
@@ -35,6 +36,7 @@ from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -71,6 +73,12 @@ class MediaRepositoryServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_PREFIX: media_repo,
@@ -78,6 +86,7 @@ class MediaRepositoryServer(HomeServer):
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
"/_synapse/admin": admin_resource,
}
)
@@ -146,8 +155,6 @@ def start(config_options):
"Please add ``enable_media_repo: false`` to the main config\n"
)
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -160,8 +167,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-media-repository", config)


@@ -184,8 +184,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.pusher"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.start_pushers:
@@ -210,13 +208,15 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ps, config, use_worker_options=True)
ps.setup()
def start():
_base.start(ps, config.worker_listeners)
ps.get_pusherpool().start()
reactor.callWhenRunning(start)
reactor.addSystemEventTrigger("before", "startup", start)
_base.start_worker_reactor("synapse-pusher", config)


@@ -435,8 +435,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.synchrotron"
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -450,8 +448,12 @@ def start(config_options):
application_service_handler=SynchrotronApplicationService(),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-synchrotron", config)


@@ -197,8 +197,6 @@ def start(config_options):
assert config.worker_app == "synapse.app.user_dir"
setup_logging(config, use_worker_options=True)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -223,8 +221,12 @@ def start(config_options):
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.callWhenRunning(_base.start, ss, config.worker_listeners)
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-user-dir", config)


@@ -175,21 +175,21 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_user(self, event, store):
if not event:
defer.returnValue(False)
return False
if self.is_interested_in_user(event.sender):
defer.returnValue(True)
return True
# also check m.room.member state key
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
defer.returnValue(True)
return True
if not store:
defer.returnValue(False)
return False
does_match = yield self._matches_user_in_member_list(event.room_id, store)
defer.returnValue(does_match)
return does_match
@cachedInlineCallbacks(num_args=1, cache_context=True)
def _matches_user_in_member_list(self, room_id, store, cache_context):
@@ -200,8 +200,8 @@ class ApplicationService(object):
# check joined member events
for user_id in member_list:
if self.is_interested_in_user(user_id):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
def _matches_room_id(self, event):
if hasattr(event, "room_id"):
@@ -211,13 +211,13 @@ class ApplicationService(object):
@defer.inlineCallbacks
def _matches_aliases(self, event, store):
if not store or not event:
defer.returnValue(False)
return False
alias_list = yield store.get_aliases_for_room(event.room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
defer.returnValue(True)
defer.returnValue(False)
return True
return False
@defer.inlineCallbacks
def is_interested(self, event, store=None):
@@ -231,15 +231,15 @@ class ApplicationService(object):
"""
# Do cheap checks first
if self._matches_room_id(event):
defer.returnValue(True)
return True
if (yield self._matches_aliases(event, store)):
defer.returnValue(True)
return True
if (yield self._matches_user(event, store)):
defer.returnValue(True)
return True
defer.returnValue(False)
return False
def is_interested_in_user(self, user_id):
return (


@@ -97,40 +97,40 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def query_user(self, service, user_id):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
if e.code == 404:
defer.returnValue(False)
return False
return
logger.warning("query_user to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("query_user to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_alias(self, service, alias):
if service.url is None:
defer.returnValue(False)
return False
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
response = None
try:
response = yield self.get_json(uri, {"access_token": service.hs_token})
if response is not None: # just an empty json object
defer.returnValue(True)
return True
except CodeMessageException as e:
logger.warning("query_alias to %s received %s", uri, e.code)
if e.code == 404:
defer.returnValue(False)
return False
return
except Exception as ex:
logger.warning("query_alias to %s threw exception %s", uri, ex)
defer.returnValue(False)
return False
@defer.inlineCallbacks
def query_3pe(self, service, kind, protocol, fields):
@@ -141,7 +141,7 @@ class ApplicationServiceApi(SimpleHttpClient):
else:
raise ValueError("Unrecognised 'kind' argument %r to query_3pe()", kind)
if service.url is None:
defer.returnValue([])
return []
uri = "%s%s/thirdparty/%s/%s" % (
service.url,
@@ -155,7 +155,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe to %s returned an invalid response %r", uri, response
)
defer.returnValue([])
return []
ret = []
for r in response:
@@ -166,14 +166,14 @@ class ApplicationServiceApi(SimpleHttpClient):
"query_3pe to %s returned an invalid result %r", uri, r
)
defer.returnValue(ret)
return ret
except Exception as ex:
logger.warning("query_3pe to %s threw exception %s", uri, ex)
defer.returnValue([])
return []
def get_3pe_protocol(self, service, protocol):
if service.url is None:
defer.returnValue({})
return {}
@defer.inlineCallbacks
def _get():
@@ -189,7 +189,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
)
defer.returnValue(None)
return None
for instance in info.get("instances", []):
network_id = instance.get("network_id", None)
@@ -198,10 +198,10 @@ class ApplicationServiceApi(SimpleHttpClient):
service.id, network_id
).to_string()
defer.returnValue(info)
return info
except Exception as ex:
logger.warning("query_3pe_protocol to %s threw exception %s", uri, ex)
defer.returnValue(None)
return None
key = (service.id, protocol)
return self.protocol_meta_cache.wrap(key, _get)
@@ -209,7 +209,7 @@ class ApplicationServiceApi(SimpleHttpClient):
@defer.inlineCallbacks
def push_bulk(self, service, events, txn_id=None):
if service.url is None:
defer.returnValue(True)
return True
events = self._serialize(events)
@@ -229,14 +229,14 @@ class ApplicationServiceApi(SimpleHttpClient):
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
defer.returnValue(True)
return True
return
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
failed_transactions_counter.labels(service.id).inc()
defer.returnValue(False)
return False
def _serialize(self, events):
time_now = self.clock.time_msec()


@@ -70,35 +70,37 @@ class ApplicationServiceScheduler(object):
self.store = hs.get_datastore()
self.as_api = hs.get_application_service_api()
def create_recoverer(service, callback):
return _Recoverer(self.clock, self.store, self.as_api, service, callback)
self.txn_ctrl = _TransactionController(
self.clock, self.store, self.as_api, create_recoverer
)
self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
@defer.inlineCallbacks
def start(self):
logger.info("Starting appservice scheduler")
# check for any DOWN ASes and start recoverers for them.
recoverers = yield _Recoverer.start(
self.clock, self.store, self.as_api, self.txn_ctrl.on_recovered
services = yield self.store.get_appservices_by_state(
ApplicationServiceState.DOWN
)
self.txn_ctrl.add_recoverers(recoverers)
for service in services:
self.txn_ctrl.start_recoverer(service)
def submit_event_for_as(self, service, event):
self.queuer.enqueue(service, event)
class _ServiceQueuer(object):
"""Queues events for the same application service together, sending
transactions as soon as possible. Once a transaction is sent successfully,
this schedules any other events in the queue to run.
"""Queue of events waiting to be sent to appservices.
Groups events into transactions per-appservice, and sends them on to the
TransactionController. Makes sure that we only have one transaction in flight per
appservice at a given time.
"""
def __init__(self, txn_ctrl, clock):
self.queued_events = {} # dict of {service_id: [events]}
# the appservices which currently have a transaction in flight
self.requests_in_flight = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
@@ -136,13 +138,29 @@ class _ServiceQueuer(object):
class _TransactionController(object):
def __init__(self, clock, store, as_api, recoverer_fn):
"""Transaction manager.
Builds AppServiceTransactions and runs their lifecycle. Also starts a Recoverer
if a transaction fails.
(Note we only have one of these in the homeserver.)
Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
"""
def __init__(self, clock, store, as_api):
self.clock = clock
self.store = store
self.as_api = as_api
self.recoverer_fn = recoverer_fn
# keep track of how many recoverers there are
self.recoverers = []
# map from service id to recoverer instance
self.recoverers = {}
# for UTs
self.RECOVERER_CLASS = _Recoverer
@defer.inlineCallbacks
def send(self, service, events):
@@ -154,61 +172,63 @@ class _TransactionController(object):
if sent:
yield txn.complete(self.store)
else:
run_in_background(self._start_recoverer, service)
run_in_background(self._on_txn_fail, service)
except Exception:
logger.exception("Error creating appservice transaction")
run_in_background(self._start_recoverer, service)
run_in_background(self._on_txn_fail, service)
@defer.inlineCallbacks
def on_recovered(self, recoverer):
self.recoverers.remove(recoverer)
logger.info(
"Successfully recovered application service AS ID %s", recoverer.service.id
)
self.recoverers.pop(recoverer.service.id)
logger.info("Remaining active recoverers: %s", len(self.recoverers))
yield self.store.set_appservice_state(
recoverer.service, ApplicationServiceState.UP
)
def add_recoverers(self, recoverers):
for r in recoverers:
self.recoverers.append(r)
if len(recoverers) > 0:
logger.info("New active recoverers: %s", len(self.recoverers))
@defer.inlineCallbacks
def _start_recoverer(self, service):
def _on_txn_fail(self, service):
try:
yield self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
logger.info(
"Application service falling behind. Starting recoverer. AS ID %s",
service.id,
)
recoverer = self.recoverer_fn(service, self.on_recovered)
self.add_recoverers([recoverer])
recoverer.recover()
self.start_recoverer(service)
except Exception:
logger.exception("Error starting AS recoverer")
def start_recoverer(self, service):
"""Start a Recoverer for the given service
Args:
service (synapse.appservice.ApplicationService):
"""
logger.info("Starting recoverer for AS ID %s", service.id)
assert service.id not in self.recoverers
recoverer = self.RECOVERER_CLASS(
self.clock, self.store, self.as_api, service, self.on_recovered
)
self.recoverers[service.id] = recoverer
recoverer.recover()
logger.info("Now %i active recoverers", len(self.recoverers))
@defer.inlineCallbacks
def _is_service_up(self, service):
state = yield self.store.get_appservice_state(service)
defer.returnValue(state == ApplicationServiceState.UP or state is None)
return state == ApplicationServiceState.UP or state is None
class _Recoverer(object):
@staticmethod
@defer.inlineCallbacks
def start(clock, store, as_api, callback):
services = yield store.get_appservices_by_state(ApplicationServiceState.DOWN)
recoverers = [_Recoverer(clock, store, as_api, s, callback) for s in services]
for r in recoverers:
logger.info(
"Starting recoverer for AS ID %s which was marked as " "DOWN",
r.service.id,
)
r.recover()
defer.returnValue(recoverers)
"""Manages retries and backoff for a DOWN appservice.
We have one of these for each appservice which is currently considered DOWN.
Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
service (synapse.appservice.ApplicationService): the service we are managing
callback (callable[_Recoverer]): called once the service recovers.
"""
def __init__(self, clock, store, as_api, service, callback):
self.clock = clock
@@ -224,7 +244,9 @@ class _Recoverer(object):
"as-recoverer-%s" % (self.service.id,), self.retry
)
self.clock.call_later((2 ** self.backoff_counter), _retry)
delay = 2 ** self.backoff_counter
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.clock.call_later(delay, _retry)
def _backoff(self):
# cap the backoff to be around 8.5min => (2^9) = 512 secs
@@ -234,25 +256,30 @@ class _Recoverer(object):
@defer.inlineCallbacks
def retry(self):
logger.info("Starting retries on %s", self.service.id)
try:
txn = yield self.store.get_oldest_unsent_txn(self.service)
if txn:
while True:
txn = yield self.store.get_oldest_unsent_txn(self.service)
if not txn:
# nothing left: we're done!
self.callback(self)
return
logger.info(
"Retrying transaction %s for AS ID %s", txn.id, txn.service.id
)
sent = yield txn.send(self.as_api)
if sent:
yield txn.complete(self.store)
# reset the backoff counter and retry immediately
self.backoff_counter = 1
yield self.retry()
else:
self._backoff()
else:
self._set_service_recovered()
except Exception as e:
logger.exception(e)
self._backoff()
if not sent:
break
def _set_service_recovered(self):
self.callback(self)
yield txn.complete(self.store)
# reset the backoff counter and then process the next transaction
self.backoff_counter = 1
except Exception:
logger.exception("Unexpected error running retries")
# we didn't manage to send all of the transactions before we got an error of
# some flavour: reschedule the next retry.
self._backoff()
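To summarise the control flow of the reworked recoverer above: a condensed, synchronous sketch with invented names (the real code is asynchronous and reschedules itself via `clock.call_later`).

```python
# Synchronous sketch of the drain/backoff loop in the reworked _Recoverer.
class RecovererSketch(object):
    MAX_BACKOFF_EXPONENT = 9  # cap around 2**9 = 512s, as noted above

    def __init__(self, store, as_api, service, on_recovered):
        self.store = store
        self.as_api = as_api
        self.service = service
        self.on_recovered = on_recovered
        self.backoff_counter = 1

    def next_delay(self):
        return 2 ** self.backoff_counter

    def retry(self):
        try:
            while True:
                txn = self.store.get_oldest_unsent_txn(self.service)
                if not txn:
                    # nothing left to send: the service has recovered
                    self.on_recovered(self)
                    return
                if not txn.send(self.as_api):
                    break  # send failed: fall through to back off
                txn.complete(self.store)
                self.backoff_counter = 1  # success resets the backoff
        except Exception:
            pass  # unexpected error: also back off and retry later
        if self.backoff_counter < self.MAX_BACKOFF_EXPONENT:
            self.backoff_counter += 1
        # the real implementation now schedules retry() again after next_delay()
```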


@@ -13,8 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import ConfigError
from ._base import ConfigError, find_config_files
# export ConfigError if somebody does import *
# export ConfigError and find_config_files if somebody does
# import *
# this is largely a fudge to stop PEP8 moaning about the import
__all__ = ["ConfigError"]
__all__ = ["ConfigError", "find_config_files"]


@@ -181,6 +181,11 @@ class Config(object):
generate_secrets=False,
report_stats=None,
open_private_ports=False,
listeners=None,
database_conf=None,
tls_certificate_path=None,
tls_private_key_path=None,
acme_domain=None,
):
"""Build a default configuration file
@@ -207,6 +212,33 @@ class Config(object):
open_private_ports (bool): True to leave private ports (such as the non-TLS
HTTP listener) open to the internet.
listeners (list(dict)|None): A list of descriptions of the listeners
synapse should start with, each of which specifies a port (int), a list of
resources (list(dict)), tls (bool) and type (str). For example:
[{
"port": 8448,
"resources": [{"names": ["federation"]}],
"tls": True,
"type": "http",
},
{
"port": 443,
"resources": [{"names": ["client"]}],
"tls": False,
"type": "http",
}],
database_conf (dict|None): The database configuration to use, with `name`
set to either `psycopg2` or `sqlite3`.
tls_certificate_path (str|None): The path to the tls certificate.
tls_private_key_path (str|None): The path to the tls private key.
acme_domain (str|None): The domain acme will try to validate. If
specified acme will be enabled.
Returns:
str: the yaml config file
"""
@@ -220,6 +252,11 @@ class Config(object):
generate_secrets=generate_secrets,
report_stats=report_stats,
open_private_ports=open_private_ports,
listeners=listeners,
database_conf=database_conf,
tls_certificate_path=tls_certificate_path,
tls_private_key_path=tls_private_key_path,
acme_domain=acme_domain,
)
)
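To make the documented `listeners` shape concrete, here is the docstring's example written as a Python literal and rendered to YAML (this only illustrates the expected structure; it does not call the config-generation entry point):

```python
# The listeners structure described in the docstring above, dumped as YAML.
import yaml

listeners = [
    {
        "port": 8448,
        "resources": [{"names": ["federation"]}],
        "tls": True,
        "type": "http",
    },
    {
        "port": 443,
        "resources": [{"names": ["client"]}],
        "tls": False,
        "type": "http",
    },
]

print(yaml.dump(listeners, default_flow_style=False))
```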


@@ -13,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from textwrap import indent
import yaml
from ._base import Config
@@ -38,20 +41,28 @@ class DatabaseConfig(Config):
self.set_databasepath(config.get("database_path"))
def generate_config_section(self, data_dir_path, **kwargs):
database_path = os.path.join(data_dir_path, "homeserver.db")
return (
"""\
## Database ##
database:
# The database engine name
def generate_config_section(self, data_dir_path, database_conf, **kwargs):
if not database_conf:
database_path = os.path.join(data_dir_path, "homeserver.db")
database_conf = (
"""# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
args:
# Path to the database
database: "%(database_path)s"
"""
% locals()
)
else:
database_conf = indent(yaml.dump(database_conf), " " * 10).lstrip()
return (
"""\
## Database ##
database:
%(database_conf)s
# Number of events to cache in memory.
#
#event_cache_size: 10K
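A small demonstration of the `indent(yaml.dump(...))` templating pattern used above to splice a caller-supplied `database_conf` mapping into the generated section. The database settings and the two-space indent are placeholders; the real template indents by ten spaces to line up with its surrounding string.

```python
# Render a database_conf mapping into an indented YAML block.
from textwrap import indent

import yaml

database_conf = {
    "name": "psycopg2",
    "args": {"user": "synapse_user", "database": "synapse", "cp_min": 5, "cp_max": 10},
}

# yaml.dump() renders the mapping, indent() pushes it to the target column, and
# lstrip() drops the first line's indent so it can sit directly after the
# template placeholder.
rendered = indent(yaml.dump(database_conf), "  ").lstrip()
print("## Database ##\ndatabase:\n  %s" % (rendered,))
```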


@@ -115,7 +115,7 @@ class EmailConfig(Config):
missing.append("email." + k)
if config.get("public_baseurl") is None:
missing.append("public_base_url")
missing.append("public_baseurl")
if len(missing) > 0:
raise RuntimeError(
@@ -132,21 +132,21 @@ class EmailConfig(Config):
self.email_password_reset_template_text = email_config.get(
"password_reset_template_text", "password_reset.txt"
)
self.email_password_reset_failure_template = email_config.get(
"password_reset_failure_template", "password_reset_failure.html"
self.email_password_reset_template_failure_html = email_config.get(
"password_reset_template_failure_html", "password_reset_failure.html"
)
# This template does not support any replaceable variables, so we will
# read it from the disk once during setup
email_password_reset_success_template = email_config.get(
"password_reset_success_template", "password_reset_success.html"
email_password_reset_template_success_html = email_config.get(
"password_reset_template_success_html", "password_reset_success.html"
)
# Check templates exist
for f in [
self.email_password_reset_template_html,
self.email_password_reset_template_text,
self.email_password_reset_failure_template,
email_password_reset_success_template,
self.email_password_reset_template_failure_html,
email_password_reset_template_success_html,
]:
p = os.path.join(self.email_template_dir, f)
if not os.path.isfile(p):
@@ -154,9 +154,9 @@ class EmailConfig(Config):
# Retrieve content of web templates
filepath = os.path.join(
self.email_template_dir, email_password_reset_success_template
self.email_template_dir, email_password_reset_template_success_html
)
self.email_password_reset_success_html_content = self.read_file(
self.email_password_reset_template_success_html_content = self.read_file(
filepath, "email.password_reset_template_success_html"
)


@@ -76,7 +76,7 @@ class KeyConfig(Config):
config_dir_path, config["server_name"] + ".signing.key"
)
self.signing_key = self.read_signing_key(signing_key_path)
self.signing_key = self.read_signing_keys(signing_key_path, "signing_key")
self.old_signing_keys = self.read_old_signing_keys(
config.get("old_signing_keys", {})
@@ -85,6 +85,14 @@ class KeyConfig(Config):
config.get("key_refresh_interval", "1d")
)
key_server_signing_keys_path = config.get("key_server_signing_keys_path")
if key_server_signing_keys_path:
self.key_server_signing_keys = self.read_signing_keys(
key_server_signing_keys_path, "key_server_signing_keys_path"
)
else:
self.key_server_signing_keys = list(self.signing_key)
# if neither trusted_key_servers nor perspectives are given, use the default.
if "perspectives" not in config and "trusted_key_servers" not in config:
key_servers = [{"server_name": "matrix.org"}]
@@ -116,8 +124,6 @@ class KeyConfig(Config):
seed = bytes(self.signing_key[0])
self.macaroon_secret_key = hashlib.sha256(seed).digest()
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
@@ -144,10 +150,6 @@ class KeyConfig(Config):
#
%(macaroon_secret_key)s
# Used to enable access token expiration.
#
#expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
@@ -216,16 +218,34 @@ class KeyConfig(Config):
#
#trusted_key_servers:
# - server_name: "matrix.org"
#
# The signing keys to use when acting as a trusted key server. If not specified
# defaults to the server signing key.
#
# Can contain multiple keys, one per line.
#
#key_server_signing_keys_path: "key_server_signing_keys.key"
"""
% locals()
)
def read_signing_key(self, signing_key_path):
signing_keys = self.read_file(signing_key_path, "signing_key")
def read_signing_keys(self, signing_key_path, name):
"""Read the signing keys in the given path.
Args:
signing_key_path (str)
name (str): Associated config key name
Returns:
list[SigningKey]
"""
signing_keys = self.read_file(signing_key_path, name)
try:
return read_signing_keys(signing_keys.splitlines(True))
except Exception as e:
raise ConfigError("Error reading signing_key: %s" % (str(e)))
raise ConfigError("Error reading %s: %s" % (name, str(e)))
def read_old_signing_keys(self, old_signing_keys):
keys = {}
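For the key-reading changes above, a standalone sketch (assuming the signedjson package, which Synapse uses for its signing keys) of the file format `read_signing_keys()` expects: one `<algorithm> <version> <base64 seed>` line per key.

```python
# Round-trip a throwaway signing key through the on-disk line format.
import io

from signedjson.key import generate_signing_key, read_signing_keys, write_signing_keys

key = generate_signing_key("a_key_version")  # invented version string

buf = io.StringIO()
write_signing_keys(buf, [key])
key_file_contents = buf.getvalue()

# read_signing_keys() accepts an iterable of lines, hence the splitlines(True)
# call in the hunk above.
keys = read_signing_keys(key_file_contents.splitlines(True))
print([(k.alg, k.version) for k in keys])
```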


@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import logging.config
import os
@@ -24,6 +25,10 @@ from twisted.logger import STDLibLogObserver, globalLogBeginner
import synapse
from synapse.app import _base as appbase
from synapse.logging._structured import (
reload_structured_logging,
setup_structured_logging,
)
from synapse.logging.context import LoggingContextFilter
from synapse.util.versionstring import get_version_string
@@ -75,10 +80,8 @@ root:
class LoggingConfig(Config):
def read_config(self, config, **kwargs):
self.verbosity = config.get("verbose", 0)
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
log_config = os.path.join(config_dir_path, server_name + ".log.config")
@@ -86,7 +89,8 @@ class LoggingConfig(Config):
"""\
## Logging ##
# A yaml python logging config file
# A yaml python logging config file as described by
# https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
#
log_config: "%(log_config)s"
"""
@@ -94,38 +98,12 @@ class LoggingConfig(Config):
)
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
self.log_file = args.log_file
@staticmethod
def add_arguments(parser):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
"-v",
"--verbose",
dest="verbose",
action="count",
help="The verbosity level. Specify multiple times to increase "
"verbosity. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"-f",
"--log-file",
dest="log_file",
help="File to log to. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"--log-config",
dest="log_config",
default=None,
help="Python logging config file",
)
logging_group.add_argument(
"-n",
"--no-redirect-stdio",
@@ -146,97 +124,31 @@ class LoggingConfig(Config):
log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))
def setup_logging(config, use_worker_options=False):
""" Set up python logging
Args:
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use 'worker_log_config' and
'worker_log_file' options instead of 'log_config' and 'log_file'.
register_sighup (func | None): Function to call to register a
sighup handler.
def _setup_stdlib_logging(config, log_config):
"""
Set up Python stdlib logging.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
log_file = config.worker_log_file if use_worker_options else config.log_file
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
# We don't have a logfile, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
level = logging.DEBUG
if config.verbosity > 1:
level_for_storage = logging.DEBUG
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(level)
logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(*args):
pass
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
logging.config.dictConfig(log_config)
def load_log_config():
with open(log_config, "r") as f:
logging.config.dictConfig(yaml.safe_load(f))
def sighup(*args):
# it might be better to use a file watcher or something for this.
load_log_config()
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()
appbase.register_sighup(sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
# It's critical to point twisted's internal logging somewhere, otherwise it
# stacks up and leaks up to 64K objects;
# see: https://twistedmatrix.com/trac/ticket/8164
#
# Routing to the python logging framework could be a performance problem if
# the handlers blocked for a long time as python.logging is a blocking API
# see https://twistedmatrix.com/documents/current/core/howto/logger.html
# filed as https://github.com/matrix-org/synapse/issues/1727
#
# However this may not be too much of a problem if we are just writing to a file.
# Route Twisted's native logging through to the standard library logging
# system.
observer = STDLibLogObserver()
def _log(event):
@@ -258,3 +170,54 @@ def setup_logging(config, use_worker_options=False):
)
if not config.no_redirect_stdio:
print("Redirected stdout/stderr to logs")
def _reload_stdlib_logging(*args, log_config=None):
logger = logging.getLogger("")
if not log_config:
logger.warn("Reloaded a blank config?")
logging.config.dictConfig(log_config)
def setup_logging(hs, config, use_worker_options=False):
"""
Set up the logging subsystem.
Args:
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use the 'worker_log_config' option
instead of 'log_config'.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
def read_config(*args, callback=None):
if log_config is None:
return None
with open(log_config, "rb") as f:
log_config_body = yaml.safe_load(f.read())
if callback:
callback(log_config=log_config_body)
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
return log_config_body
log_config_body = read_config()
if log_config_body and log_config_body.get("structured") is True:
setup_structured_logging(hs, config, log_config_body)
appbase.register_sighup(read_config, callback=reload_structured_logging)
else:
_setup_stdlib_logging(config, log_config_body)
appbase.register_sighup(read_config, callback=_reload_stdlib_logging)
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)


@@ -13,8 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from distutils.util import strtobool
import pkg_resources
from synapse.config._base import Config, ConfigError
from synapse.types import RoomAlias
from synapse.util.stringutils import random_string_with_symbols
@@ -41,8 +44,36 @@ class AccountValidityConfig(Config):
self.startup_job_max_delta = self.period * 10.0 / 100.0
if self.renew_by_email_enabled and "public_baseurl" not in synapse_config:
raise ConfigError("Can't send renewal emails without 'public_baseurl'")
if self.renew_by_email_enabled:
if "public_baseurl" not in synapse_config:
raise ConfigError("Can't send renewal emails without 'public_baseurl'")
template_dir = config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates")
if "account_renewed_html_path" in config:
file_path = os.path.join(template_dir, config["account_renewed_html_path"])
self.account_renewed_html_content = self.read_file(
file_path, "account_validity.account_renewed_html_path"
)
else:
self.account_renewed_html_content = (
"<html><body>Your account has been successfully renewed.</body><html>"
)
if "invalid_token_html_path" in config:
file_path = os.path.join(template_dir, config["invalid_token_html_path"])
self.invalid_token_html_content = self.read_file(
file_path, "account_validity.invalid_token_html_path"
)
else:
self.invalid_token_html_content = (
"<html><body>Invalid renewal token.</body><html>"
)
class RegistrationConfig(Config):
@@ -145,6 +176,16 @@ class RegistrationConfig(Config):
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#


@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from collections import namedtuple
@@ -87,22 +88,25 @@ def parse_thumbnail_requirements(thumbnail_sizes):
class ContentRepositoryConfig(Config):
def read_config(self, config, **kwargs):
self.enable_media_repo = config.get("enable_media_repo", True)
# Only enable the media repo if either the media repo is enabled or the
# current worker app is the media repo.
if (
self.enable_media_repo is False
and config.get("worker_app") != "synapse.app.media_repository"
):
self.can_load_media_repo = False
return
else:
self.can_load_media_repo = True
self.max_upload_size = self.parse_size(config.get("max_upload_size", "10M"))
self.max_image_pixels = self.parse_size(config.get("max_image_pixels", "32M"))
self.max_spider_size = self.parse_size(config.get("max_spider_size", "10M"))
if self.enable_media_repo:
self.media_store_path = self.ensure_directory(
config.get("media_store_path", "media_store")
)
self.uploads_path = self.ensure_directory(
config.get("uploads_path", "uploads")
)
else:
self.media_store_path = None
self.uploads_path = None
self.media_store_path = self.ensure_directory(
config.get("media_store_path", "media_store")
)
backup_media_store_path = config.get("backup_media_store_path")
@@ -159,6 +163,7 @@ class ContentRepositoryConfig(Config):
(provider_class, parsed_config, wrapper_config)
)
self.uploads_path = self.ensure_directory(config.get("uploads_path", "uploads"))
self.dynamic_thumbnails = config.get("dynamic_thumbnails", False)
self.thumbnail_requirements = parse_thumbnail_requirements(
config.get("thumbnail_sizes", DEFAULT_THUMBNAIL_SIZES)
@@ -210,6 +215,13 @@ class ContentRepositoryConfig(Config):
return (
r"""
## Media Store ##
# Enable the media store service in the Synapse master. Uncomment the
# following if you are using a separate media store worker.
#
#enable_media_repo: false
# Directory where uploaded images and attachments are stored.
#
media_store_path: "%(media_store)s"


@@ -17,7 +17,11 @@
import logging
import os.path
import re
from textwrap import indent
import attr
import yaml
from netaddr import IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
@@ -38,6 +42,12 @@ DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_ROOM_VERSION = "4"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
"Please speak to your server administrator, or upgrade your instance "
"to join this room."
)
class ServerConfig(Config):
def read_config(self, config, **kwargs):
@@ -247,10 +257,21 @@ class ServerConfig(Config):
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
# Resource-constrained Homeserver Configuration
self.limit_large_room_joins = config.get("limit_large_remote_room_joins", False)
self.limit_large_room_complexity = config.get(
"limit_large_remote_room_complexity", 1.0
@attr.s
class LimitRemoteRoomsConfig(object):
enabled = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
complexity = attr.ib(
validator=attr.validators.instance_of((int, float)), default=1.0
)
complexity_error = attr.ib(
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
self.limit_remote_rooms = LimitRemoteRoomsConfig(
**config.get("limit_remote_rooms", {})
)
bind_port = config.get("bind_port")
@@ -334,7 +355,7 @@ class ServerConfig(Config):
return any(l["tls"] for l in self.listeners)
def generate_config_section(
self, server_name, data_dir_path, open_private_ports, **kwargs
self, server_name, data_dir_path, open_private_ports, listeners, **kwargs
):
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
@@ -348,11 +369,68 @@ class ServerConfig(Config):
# Bring DEFAULT_ROOM_VERSION into the local-scope for use in the
# default config string
default_room_version = DEFAULT_ROOM_VERSION
secure_listeners = []
unsecure_listeners = []
private_addresses = ["::1", "127.0.0.1"]
if listeners:
for listener in listeners:
if listener["tls"]:
secure_listeners.append(listener)
else:
# If we don't want open ports we need to bind the listeners
# to some address other than 0.0.0.0. Here we chose to use
# localhost.
# If the addresses are already bound we won't overwrite them
# however.
if not open_private_ports:
listener.setdefault("bind_addresses", private_addresses)
unsecure_http_binding = "port: %i\n tls: false" % (unsecure_port,)
if not open_private_ports:
unsecure_http_binding += (
"\n bind_addresses: ['::1', '127.0.0.1']"
unsecure_listeners.append(listener)
secure_http_bindings = indent(
yaml.dump(secure_listeners), " " * 10
).lstrip()
unsecure_http_bindings = indent(
yaml.dump(unsecure_listeners), " " * 10
).lstrip()
if not unsecure_listeners:
unsecure_http_bindings = (
"""- port: %(unsecure_port)s
tls: false
type: http
x_forwarded: true"""
% locals()
)
if not open_private_ports:
unsecure_http_bindings += (
"\n bind_addresses: ['::1', '127.0.0.1']"
)
unsecure_http_bindings += """
resources:
- names: [client, federation]
compress: false"""
if listeners:
# comment out this block
unsecure_http_bindings = "#" + re.sub(
"\n {10}",
lambda match: match.group(0) + "#",
unsecure_http_bindings,
)
if not secure_listeners:
secure_http_bindings = (
"""#- port: %(bind_port)s
# type: http
# tls: true
# resources:
# - names: [client, federation]"""
% locals()
)
return (
@@ -538,11 +616,7 @@ class ServerConfig(Config):
# will also need to give Synapse a TLS key and certificate: see the TLS section
# below.)
#
#- port: %(bind_port)s
# type: http
# tls: true
# resources:
# - names: [client, federation]
%(secure_http_bindings)s
# Unsecure HTTP listener: for when matrix traffic passes through a reverse proxy
# that unwraps TLS.
@@ -550,13 +624,7 @@ class ServerConfig(Config):
# If you plan to use a reverse proxy, please see
# https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
#
- %(unsecure_http_binding)s
type: http
x_forwarded: true
resources:
- names: [client, federation]
compress: false
%(unsecure_http_bindings)s
# example additional_resources:
#
@@ -625,14 +693,20 @@ class ServerConfig(Config):
# Resource-constrained Homeserver Settings
#
# If limit_large_remote_room_joins is True, the room complexity will be
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_large_remote_room_complexity, it will disallow joining or
# limit_remote_rooms.complexity, it will disallow joining or
# instantly leave.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
# Uncomment the below lines to enable:
#limit_large_remote_room_joins: True
#limit_large_remote_room_complexity: 1.0
#limit_remote_rooms:
# enabled: True
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
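A minimal sketch of how the attrs-based `limit_remote_rooms` settings above are built from the optional config mapping, using an invented stand-in class and example values:

```python
# Validate limit_remote_rooms settings with attrs, mirroring the hunk above.
import attr

ROOM_COMPLEXITY_TOO_GREAT = (
    "Your homeserver is unable to join rooms this large or complex. "
    "Please speak to your server administrator, or upgrade your instance "
    "to join this room."
)


@attr.s
class LimitRemoteRoomsConfigSketch(object):
    enabled = attr.ib(validator=attr.validators.instance_of(bool), default=False)
    complexity = attr.ib(
        validator=attr.validators.instance_of((int, float)), default=1.0
    )
    complexity_error = attr.ib(
        validator=attr.validators.instance_of(str), default=ROOM_COMPLEXITY_TOO_GREAT
    )


config = {"limit_remote_rooms": {"enabled": True, "complexity": 0.5}}
settings = LimitRemoteRoomsConfigSketch(**config.get("limit_remote_rooms", {}))
print(settings.enabled, settings.complexity)

# A wrong type is rejected at construction time by the validators:
try:
    LimitRemoteRoomsConfigSketch(enabled="yes")
except TypeError as err:
    print("rejected:", err)
```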

Some files were not shown because too many files have changed in this diff.