Compare commits

151 Commits

Author SHA1 Message Date
Brendan Abolivier
3b3f327a0d 1.15.0 2020-06-11 13:27:27 +01:00
Brendan Abolivier
1cd67790b9 Merge branch 'release-v1.15.0' of github.com:matrix-org/synapse into release-v1.15.0 2020-06-09 17:34:25 +01:00
Brendan Abolivier
737530a000 Fix some attributions 2020-06-09 17:34:11 +01:00
Richard van der Hoff
3e8c8547e1 Update CHANGES.md
fix a typo
2020-06-09 17:26:51 +01:00
Brendan Abolivier
236d2d699d 1.15.0rc1 2020-06-09 16:37:14 +01:00
Brendan Abolivier
2dc9468c27 Revert "1.15.0rc1"
This reverts commit 8587b0426f.
2020-06-09 16:34:37 +01:00
Brendan Abolivier
8587b0426f 1.15.0rc1 2020-06-09 16:33:36 +01:00
Erik Johnston
664409b169 Fix bug in account data replication stream. (#7656)
* Ensure account data stream IDs are unique.

The account data stream is shared between three tables, and the maximum
allocated ID was tracked in a dedicated table. Updating the max ID
happened outside the transaction that allocated the ID, creating a race:
if the server was restarted, the same ID could be allocated again while
the max ID was never updated, so IDs could be reused.

The ID generators have support for tracking across multiple tables, so
we may as well use that instead of a dedicated table.

* Fix bug in account data replication stream.

If the same stream ID was used in both global and room account data, then
getting updates for the replication stream would fail due to
`heapq.merge(..)` trying to compare a `str` with a `None`. (This is
because you'd have two rows like `(534, '!room')` and `(534, None)` from
the room and global account data tables.)

Fix is just to order by stream ID, since we don't rely on the ordering
beyond that. The bug where stream IDs can be reused should be fixed now,
so this case shouldn't happen going forward.

Fixes #7617
2020-06-09 16:28:57 +01:00
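A minimal sketch of the failure mode described above (the tuples stand in for replication rows; this is illustrative, not Synapse's code):
```
import heapq

# Two account-data rows sharing stream ID 534: one room-scoped, one global.
room_rows = [(534, "!room")]
global_rows = [(534, None)]

try:
    # Tuple comparison falls through to the second element, and Python 3
    # refuses to order a str against None.
    list(heapq.merge(room_rows, global_rows))
except TypeError as exc:
    print(exc)  # "'<' not supported between instances of ..." (str vs None)

# The fix described above: merge on the stream ID alone, since nothing
# relies on the ordering beyond that.
merged = list(heapq.merge(room_rows, global_rows, key=lambda row: row[0]))
print(merged)  # [(534, '!room'), (534, None)]
```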
Patrick Cloke
3c45a78090 Convert the registration handler to async/await. (#7649) 2020-06-08 11:15:02 -04:00
Patrick Cloke
375ca0cceb Accept device information at the login fallback endpoint. (#7629) 2020-06-08 10:13:24 -04:00
Patrick Cloke
737b4a936e Convert user directory handler and related classes to async/await. (#7640) 2020-06-05 14:42:55 -04:00
Travis Ralston
09099313e6 Add an option to disable autojoin for guest accounts (#6637)
Fixes https://github.com/matrix-org/synapse/issues/3177
2020-06-05 18:18:15 +01:00
Richard van der Hoff
1bc00fd76d Clarifications to the admin api documentation (#7647)
* Clarify how to authenticate
* path params are not the same thing as query params
* Fix documentation for `/_synapse/admin/v2/users/<user_id>`
2020-06-05 17:31:05 +01:00
Patrick Cloke
a0d2d81cf9 Update to the stable SSO prefix for UI Auth. (#7630) 2020-06-05 10:50:08 -04:00
Richard van der Hoff
eea124370b Fix type information on assert_*_is_admin methods (#7645)
These things don't return Deferreds.
2020-06-05 14:33:49 +01:00
Richard van der Hoff
b4f8dcb4bd Remove some unused constants. (#7644) 2020-06-05 14:33:35 +01:00
Patrick Cloke
f1e61ef85c Typo fixes. 2020-06-05 08:43:21 -04:00
Dirk Klimpel
908f9e2d24 Allow new users to be registered via the admin API even if the monthly active user limit has been reached (#7263) 2020-06-05 13:08:49 +01:00
Dirk Klimpel
2970ce8367 Add device management to admin API (#7481)
- Admin is able to
  - change displaynames
  - delete devices
  - list devices
  - get device information

Fixes #7330
2020-06-05 13:07:22 +01:00
Patrick Cloke
02f345d053 Attempt to fix PhoneHomeStatsTestCase.test_performance_100 being flaky. (#7634) 2020-06-05 07:36:47 -04:00
Andrew Morgan
139bc86f3d Support CS API v0.6.0 (#6585) 2020-06-05 12:27:37 +01:00
WGH
e55ee7c32f Add support for webp thumbnailing (#7586)
Closes #4382

Signed-off-by: Maxim Plotnikov <wgh@torlan.ru>
2020-06-05 11:54:27 +01:00
Andrew Morgan
f4e6495b5d Performance improvements and refactor of Ratelimiter (#7595)
While working on https://github.com/matrix-org/synapse/issues/5665 I found myself digging into the `Ratelimiter` class and seeing that it was both:

* Rather undocumented, and
* causing a *lot* of config checks

This PR attempts to refactor and comment the `Ratelimiter` class, as well as encourage config file accesses to only be done at instantiation. 

Best to be reviewed commit-by-commit.
2020-06-05 10:47:20 +01:00
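A hedged sketch of that direction, reading the limits once at construction so the hot path never consults config (the token-bucket details and names are illustrative, not Synapse's implementation):
```
import time
from typing import Dict, Optional, Tuple

class Ratelimiter:
    """Per-key rate limiter; config-derived limits are read once, here."""

    def __init__(self, rate_hz: float, burst_count: int):
        self.rate_hz = rate_hz          # tokens replenished per second
        self.burst_count = burst_count  # maximum bucket size
        self._buckets: Dict[str, Tuple[float, float]] = {}

    def can_do_action(self, key: str, now: Optional[float] = None) -> bool:
        # No config lookups on this hot path: only instance attributes.
        now = time.monotonic() if now is None else now
        tokens, last = self._buckets.get(key, (float(self.burst_count), now))
        tokens = min(float(self.burst_count), tokens + (now - last) * self.rate_hz)
        if tokens < 1.0:
            self._buckets[key] = (tokens, now)
            return False
        self._buckets[key] = (tokens - 1.0, now)
        return True

limiter = Ratelimiter(rate_hz=0.5, burst_count=3)
print([limiter.can_do_action("@user:hs") for _ in range(4)])
# [True, True, True, False]: three burst actions, then limited
```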
Andrew Morgan
c389bfb6ea Fix encryption algorithm typos in tests/comments (#7637)
@uhoreg has confirmed these were both typos. They are only in comments and tests though, rather than anything critical.

Introduced in:

* https://github.com/matrix-org/synapse/pull/7157
* https://github.com/matrix-org/synapse/pull/5726
2020-06-04 20:03:40 +01:00
Patrick Cloke
f8b9ead3ee Advertise the token login type when OpenID Connect is enabled. (#7631) 2020-06-04 06:49:51 -04:00
Richard van der Hoff
11de843626 Cleanups to the OpenID Connect integration (#7628)
docs, default configs, comments. Nothing very significant.
2020-06-03 21:13:17 +01:00
Andrew Morgan
e91abfd291 async/await get_user_id_by_threepid (#7620)
Based on #7619 

Makes `get_user_id_by_threepid` and its call stack async.
2020-06-03 17:15:57 +01:00
Richard van der Hoff
86d814cdde Check the changelog number in check-newsfragment (#7623) 2020-06-03 17:01:43 +01:00
Andrew Morgan
0188daf32c Replace instances of reactor pumping with get_success. (#7619)
Calls `self.get_success` on all deferred methods instead of abusing `self.pump()`. This has the benefit of working with coroutines, as well as checking that method execution completed successfully.

There are also a few small cleanups that I made in the process.
2020-06-03 16:39:30 +01:00
Brendan Abolivier
c9507be989 Check if the localpart is reserved for guests earlier in the registration flow (#7625)
This is so the user is warned about the username not being valid as soon as possible, rather than only once they've finished UIA.
2020-06-03 16:55:02 +02:00
Erik Johnston
11dc2b4698 Fix exceptions when fetching events from a down host. (#7622)
We already caught some exceptions, but not all.
2020-06-03 14:12:13 +01:00
Richard van der Hoff
38d4ebbac7 synctl restart should start synapse if it wasn't running (#7624) 2020-06-03 13:16:15 +01:00
Richard van der Hoff
2a8ed93bd4 Switch back to upstream dh-virtualenv (#7621)
Upstream have merged our changes
(https://github.com/spotify/dh-virtualenv/pull/300), so let's switch back to it
instead of using our fork.
2020-06-03 12:21:58 +01:00
Richard van der Hoff
3820c24836 Merge branch 'master' into develop 2020-06-03 11:23:27 +01:00
Richard van der Hoff
38c1fdb14e Fix typo in PR link 2020-06-03 11:22:27 +01:00
Richard van der Hoff
1bbc9e2df6 Clean up exception handling in SAML2ResponseResource (#7614)
* Expose `return_html_error`, and allow it to take a Jinja2 template instead of a raw string

* Clean up exception handling in SAML2ResponseResource

  * use the existing code in `return_html_error` instead of re-implementing it
    (giving it a jinja2 template rather than inventing a new form of template)

  * do the exception-catching in the REST layer rather than in the handler
    layer, to make sure we catch all exceptions.
2020-06-03 10:41:12 +01:00
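A simplified sketch of the dual interface described (the real `return_html_error` lives in Synapse's HTTP layer and takes different arguments; this renamed helper only shows the raw-string-or-Template idea):
```
import jinja2

def render_html_error(code: int, msg: str, template) -> str:
    # Accept either a jinja2 Template or a legacy %-style format string.
    if isinstance(template, jinja2.Template):
        return template.render(code=code, msg=msg)
    return template % {"code": code, "msg": msg}

tpl = jinja2.Template("<h1>{{ code }}</h1><p>{{ msg }}</p>")
print(render_html_error(403, "Forbidden", tpl))
print(render_html_error(403, "Forbidden", "<h1>%(code)d</h1><p>%(msg)s</p>"))
```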
Richard van der Hoff
816589b09a update grafana dashboard 2020-06-02 12:44:36 +01:00
Andrew Morgan
3e557447cb Mention #synapse:matrix.org in README troubleshooting (#7603)
Just in case people head straight to the troubleshooting section and find themselves at a dead end.
2020-06-01 19:45:39 +01:00
Andrew Morgan
25e2d193e3 Advertise Python 3.8 support in setup.py (#7602)
Synapse supports Python 3.8. We've been using it in CI for a while now.
2020-06-01 19:45:01 +01:00
Olof Johansson
fe434cd3c9 Fix a bug in automatic user creation with m.login.jwt. (#7585) 2020-06-01 12:55:07 -04:00
Brendan Abolivier
33c39ab93c Process cross-signing keys when resyncing device lists (#7594)
It looks like `user_device_resync` was ignoring cross-signing keys from the results received from the remote server. This patch fixes this by processing these keys using the same process `_handle_signing_key_updates` does (effectively factoring that part out of that function).
2020-06-01 17:47:30 +02:00
Dirk Klimpel
901b1fa561 Email notifications for new users when creating via the Admin API. (#7267) 2020-06-01 15:34:33 +01:00
Dagfinn Ilmari Mannsåker
df8a3cef6b Improve performance of _get_state_groups_from_groups_txn (#7567)
The query keeps showing up in my slow query log.

This changes the plan under the top-level Sort node from

```
    WindowAgg  (cost=280335.88..292963.15 rows=561212 width=80) (actual time=138.651..160.562 rows=27112 loops=1)
      ->  Sort  (cost=280335.88..281738.91 rows=561212 width=84) (actual time=138.597..140.622 rows=27112 loops=1)
            Sort Key: state_groups_state.type, state_groups_state.state_key, state_groups_state.state_group
            Sort Method: quicksort  Memory: 4581kB
            ->  Nested Loop  (cost=2.83..226745.22 rows=561212 width=84) (actual time=21.548..47.657 rows=27112 loops=1)
                  ->  HashAggregate  (cost=2.27..3.28 rows=101 width=8) (actual time=21.526..21.535 rows=20 loops=1)
                        Group Key: state.state_group
                        ->  CTE Scan on state  (cost=0.00..2.02 rows=101 width=8) (actual time=21.280..21.493 rows=20 loops=1)
                  ->  Index Scan using state_groups_state_type_idx on state_groups_state  (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.005..0.991 rows=1356 loops=20)
                        Index Cond: (state_group = state.state_group)
```

to

```
    Nested Loop  (cost=2.83..226745.22 rows=561212 width=84) (actual time=24.194..52.834 rows=27112 loops=1)
      ->  HashAggregate  (cost=2.27..3.28 rows=101 width=8) (actual time=24.130..24.138 rows=20 loops=1)
            Group Key: state.state_group
            ->  CTE Scan on state  (cost=0.00..2.02 rows=101 width=8) (actual time=23.887..24.113 rows=20 loops=1)
      ->  Index Scan using state_groups_state_type_idx on state_groups_state  (cost=0.56..2189.40 rows=5557 width=84) (actual time=0.016..1.159 rows=1356 loops=20)
            Index Cond: (state_group = state.state_group)
```

This cuts the execution time from ~190ms to ~130ms, i.e. a reduction
of ~30%.

The full plans are visualised at https://explain.depesz.com/s/WpbT and
https://explain.depesz.com/s/KlEk

Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
2020-06-01 15:23:43 +01:00
Patrick Cloke
6af9cdca24 Convert groups local and server to async/await. (#7600) 2020-06-01 07:28:43 -04:00
Brendan Abolivier
c1bdd4fac7 Don't fail all of an iteration of the device list retry loop on error (#7609)
Without this patch, if an error happens which isn't caught by `user_device_resync`, then `_maybe_retry_device_resync` would fail without retrying the remaining users in the iteration. This patch fixes this so that it now only logs an error in this case.
2020-06-01 12:55:14 +02:00
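A sketch of the behaviour change (hypothetical names; the real loop lives in Synapse's device handler):
```
import logging

logger = logging.getLogger(__name__)

async def retry_stale_device_lists(resync, stale_user_ids):
    # One failing remote must not abort the whole iteration: log and move on.
    for user_id in stale_user_ids:
        try:
            await resync(user_id)
        except Exception:
            logger.exception("Failed to resync device list for %s", user_id)
```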
Dagfinn Ilmari Mannsåker
2dc430d36e Use upsert when inserting read receipts (#7607)
Fixes #7469

Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
2020-06-01 10:53:06 +01:00
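The duplicate-key failure and the upsert that avoids it, sketched with SQLite's `ON CONFLICT ... DO UPDATE` (PostgreSQL's syntax is equivalent; the table is a stand-in, not Synapse's schema):
```
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE receipts (room_id TEXT, user_id TEXT, event_id TEXT, "
    "PRIMARY KEY (room_id, user_id))"
)
sql = (
    "INSERT INTO receipts (room_id, user_id, event_id) VALUES (?, ?, ?) "
    "ON CONFLICT (room_id, user_id) DO UPDATE SET event_id = excluded.event_id"
)
conn.execute(sql, ("!room:hs", "@alice:hs", "$event1"))
# A second receipt for the same (room, user) now updates in place instead
# of raising a duplicate-key violation:
conn.execute(sql, ("!room:hs", "@alice:hs", "$event2"))
print(conn.execute("SELECT event_id FROM receipts").fetchone())  # ('$event2',)
```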
hashashini
91a7c5ff6d Update OpenBSD installation instructions (#7587)
Synapse was added to the ports tree in November 2019 by Renaud Allard (https://marc.info/?l=openbsd-ports&m=157417848805329).
With the release of OpenBSD 6.7 on May 22, 2020 a pre-compiled binary is available as well.
2020-05-30 17:08:07 +01:00
Erik Johnston
cb495f526d Fix 'FederationGroupsRoomsServlet' API when a group has a room the server is not in. (#7599) 2020-05-29 17:49:47 +01:00
Erik Johnston
f5353eff21 Make inflight background metrics more efficient. (#7597)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-05-29 13:25:32 +01:00
David Rio Deiros
47db2c3673 Add entry to set dependency against psql service (#7591) 2020-05-28 16:02:41 +01:00
Brendan Abolivier
5cb470b495 Merge branch 'master' into develop 2020-05-28 12:50:26 +02:00
Brendan Abolivier
76261fc59d Update debian changelog 2020-05-28 12:39:09 +02:00
Brendan Abolivier
61469308df 1.14.0 2020-05-28 12:36:00 +02:00
Erik Johnston
8c5f88fa4d Merge pull request #7584 from matrix-org/erikj/save_and_send_fed_token_in_bg
Speed up processing of federation stream RDATA rows.
2020-05-27 20:06:29 +01:00
Erik Johnston
ef3934ec8f Ensure we persist and ack the same token 2020-05-27 19:45:42 +01:00
Erik Johnston
3d7f1b53d9 Remove spurious change 2020-05-27 19:41:44 +01:00
Erik Johnston
a72d5f39db Add test for Linearizer.is_queued(..) 2020-05-27 19:41:06 +01:00
Erik Johnston
a6a40a1519 Newsfile 2020-05-27 19:35:03 +01:00
Erik Johnston
35c308731d Speed up processing of federation stream RDATA rows.
Instead of storing and sending an ACK for every single row we send
synchronously, we instead do it asynchronously while batching up
updates.
2020-05-27 19:34:07 +01:00
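An illustrative asyncio sketch of the batching idea (Synapse is Twisted-based and uses different names; this only shows the shape):
```
import asyncio

class AckBatcher:
    """Record the newest token cheaply; persist/ACK it from a background loop."""

    def __init__(self, send_ack):
        self._send_ack = send_ack
        self._pending = None

    def on_row(self, token):
        # Called for every RDATA row: just remember the most recent token.
        self._pending = token

    async def run(self, interval: float = 1.0):
        # Periodically flush whatever has arrived, one ACK per batch.
        while True:
            await asyncio.sleep(interval)
            if self._pending is not None:
                token, self._pending = self._pending, None
                await self._send_ack(token)
```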
Christopher Cooper
c4a820b32a allow emails to be passed through SAML (#7385)
Signed-off-by: Christopher Cooper <cooperc@ocf.berkeley.edu>
2020-05-27 17:40:08 +01:00
Brendan Abolivier
5af572ada0 Merge tag 'v1.14.0rc2' into develop
Synapse 1.14.0rc2 (2020-05-27)
==============================

Bugfixes
--------

- Fix cache config to not apply cache factor to event cache. Regression in v1.14.0rc1. ([\#7578](https://github.com/matrix-org/synapse/issues/7578))
- Fix bug where `ReplicationStreamer` was not always started when replication was enabled. Bug introduced in v1.14.0rc1. ([\#7579](https://github.com/matrix-org/synapse/issues/7579))
- Fix specifying individual cache factors for caches with special characters in their name. Regression in v1.14.0rc1. ([\#7580](https://github.com/matrix-org/synapse/issues/7580))

Improved Documentation
----------------------

- Fix the OIDC `client_auth_method` value in the sample config. ([\#7581](https://github.com/matrix-org/synapse/issues/7581))
2020-05-27 17:35:29 +02:00
Brendan Abolivier
4e3a617635 Improve changelog wording 2020-05-27 17:27:33 +02:00
Andrew Morgan
0a6e837aaa Fix incorrect placeholder syntax in database preparation code (#7575)
We were using `logger` syntax which isn't supported by `Exception`s.
2020-05-27 16:26:59 +01:00
Brendan Abolivier
b4109499b4 1.14.0rc2 2020-05-27 17:22:28 +02:00
Jason Robinson
4be968d05d Fix sample config docs error (#7581)
The commented-out value for 'client_auth_method' was erroneously 'client_auth_basic',
when the code and docstring say it should be 'client_secret_basic'.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-05-27 13:52:18 +01:00
Erik Johnston
d7d8a2e7ee Fix up comments 2020-05-27 13:34:46 +01:00
Erik Johnston
4ba55559ac Fix specifying cache factors via env vars with * in name. (#7580)
This mostly applies to `*stateGroupCache*` and co.

Broke in #6391.
2020-05-27 13:17:01 +01:00
Erik Johnston
eefc6b3a0d Don't apply cache factor to event cache. (#7578)
This is already correctly done when we instantiate the cache, but wasn't
when it got reloaded (which always happens at least once on startup).
2020-05-27 12:04:37 +01:00
Erik Johnston
9bac5d62b3 Ensure ReplicationStreamer is always started when replication enabled. (#7579)
Fixes #7566.
2020-05-27 11:44:19 +01:00
Brendan Abolivier
98483890ee Merge branch 'develop' of github.com:matrix-org/synapse into develop 2020-05-26 20:30:41 +02:00
Patrick Cloke
ef884f6d04 Convert identity handler to async/await. (#7561) 2020-05-26 13:46:22 -04:00
Brendan Abolivier
b3b2038b6a Remove the changes to the debian changelog
Since this is not a full release yet
2020-05-26 17:22:46 +02:00
Brendan Abolivier
7193c100bf Merge branch 'release-v1.14.0' of github.com:matrix-org/synapse into release-v1.14.0 2020-05-26 17:20:53 +02:00
Brendan Abolivier
87e417c5cb Not full release yet, this is rc1 2020-05-26 17:20:43 +02:00
Erik Johnston
651bb76ee3 Merge event persistence move changelog entries 2020-05-26 16:12:50 +01:00
Brendan Abolivier
9097e135fb More changelog fix 2020-05-26 17:10:54 +02:00
Brendan Abolivier
f1689a7b7f Changelog fixes 2020-05-26 16:58:14 +02:00
Brendan Abolivier
3b19c17247 1.14.0 2020-05-26 16:45:37 +02:00
Richard van der Hoff
edd9a7214c Replace device_27_unique_idx bg update with a fg one (#7562)
The bg update never managed to complete, because it kept being interrupted by
transactions which want to take a lock.

Just doing it in the foreground isn't that bad, and is a good deal simpler.
2020-05-26 11:43:17 +01:00
Richard van der Hoff
04729b86f8 Fix incorrect exception handling in KeyUploadServlet.on_POST (#7563)
Introduced in #7556
2020-05-26 11:42:22 +01:00
Richard van der Hoff
00db90f409 Fix recording of federation stream token (#7564)
A couple of changes of significance:

 * remove the `_last_ack < federation_position` condition, so that
   updates will still be correctly processed after restart

 * Correctly wire up send_federation_ack to the right class.
2020-05-26 11:41:38 +01:00
Richard van der Hoff
d14c4d6b6d Simplify reap_monthly_active_users (#7558)
we can use `make_in_list_sql_clause` rather than doing our own half-baked
equivalent, which has the benefit of working just fine with empty lists.

(This has quite a lot of tests, so I think it's pretty safe)
2020-05-23 01:20:10 +01:00
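`make_in_list_sql_clause` is a real Synapse helper, but its exact signature differs (it also takes the database engine); the rough idea, including the empty-list behaviour this commit relies on, is:
```
from typing import Iterable, List, Tuple

def make_in_list_sql_clause(column: str, values: Iterable) -> Tuple[str, List]:
    """Return an SQL fragment and argument list for `column IN (...)`."""
    values = list(values)
    if not values:
        # Hand-rolled string building usually breaks here; emit a clause
        # that simply matches no rows.
        return "1 = 0", []
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", values

print(make_in_list_sql_clause("user_id", ["@a:hs", "@b:hs"]))
# ('user_id IN (?, ?)', ['@a:hs', '@b:hs'])
print(make_in_list_sql_clause("user_id", []))
# ('1 = 0', [])
```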
Richard van der Hoff
f4269694ce Optimise some references to hs.config (#7546)
These are surprisingly expensive, and we only really need to do them at startup.
2020-05-22 21:47:07 +01:00
Erik Johnston
2901f54359 Fix missing CORS headers on OPTION responses (#7560)
Broke in #7534.
2020-05-22 17:42:39 +01:00
Erik Johnston
e5c67d04db Add option to move event persistence off master (#7517) 2020-05-22 16:11:35 +01:00
Patrick Cloke
4429764c9f Return 200 OK for all OPTIONS requests (#7534) 2020-05-22 09:30:07 -04:00
Erik Johnston
1531b214fc Add ability to wait for replication streams (#7542)
The idea here is that if an instance persists an event via the replication HTTP API it can return before we receive that event over replication, which can lead to races where code assumes that persisting an event immediately updates various caches (e.g. current state of the room).

Most of Synapse doesn't hit such races, so we don't do the waiting automagically, instead we do so where necessary to avoid unnecessary delays. We may decide to change our minds here if it turns out there are a lot of subtle races going on.

People probably want to look at this commit by commit.
2020-05-22 14:21:54 +01:00
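A minimal sketch of the "wait for replication" primitive this implies: callers block until the local replication connection has processed a given stream position (names and mechanics are assumptions, not Synapse's code):
```
import asyncio
from typing import List, Tuple

class StreamWaiter:
    def __init__(self):
        self._current = 0
        self._waiters: List[Tuple[int, asyncio.Future]] = []

    def advance_to(self, position: int):
        # Called as replication rows up to `position` are processed locally.
        self._current = position
        for pos, fut in self._waiters[:]:
            if pos <= position and not fut.done():
                fut.set_result(None)
                self._waiters.remove((pos, fut))

    async def wait_for(self, position: int):
        # Returns immediately if we have already caught up to `position`.
        if position <= self._current:
            return
        fut = asyncio.get_running_loop().create_future()
        self._waiters.append((position, fut))
        await fut
```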
Erik Johnston
06a02bc1ce Convert sending mail to async/await. (#7557)
Mainly because sometimes the email push code raises exceptions where the
stack traces have gotten lost, which is hopefully fixed by this.
2020-05-22 13:41:11 +01:00
Patrick Cloke
66f2ebc22f Use a non-empty RelayState for user interactive auth with SAML. (#7552) 2020-05-22 07:17:30 -04:00
Erik Johnston
710d958c64 On upgrade room only send canonical alias once. (#7547)
Instead of doing a complicated dance of deleting and moving aliases one
by one, which sends a canonical alias update into the old room for each
one, let's do it all in one go.

This also changes the function to move *all* local alias events to the new
room, however that happens later on anyway.
2020-05-22 11:41:41 +01:00
Erik Johnston
547e4dd83e Fix exception reporting due to HTTP request errors. (#7556)
These are business as usual errors, rather than stuff we want to log at
error.
2020-05-22 11:39:20 +01:00
Ivan Shapovalov
ac481a738e synapse.metrics: implement detailed memory usage reporting on PyPy (#7536)
PyPy's gc.get_stats() returns an object containing detailed allocator statistics
which could be beneficial to collect as metrics.

Signed-off-by: Ivan Shapovalov <intelfx@intelfx.name>
2020-05-22 11:08:41 +01:00
Richard van der Hoff
8c75da916c Refresh apt cache when building dh_virtualenv docker image (#7555)
When we tried to build debs for 1.13.0, the build failed because docker used a
base docker image which had a stale apt cache.

Fixes: #7540
2020-05-22 10:17:47 +01:00
Richard van der Hoff
a0f99f81b3 Fix stacktrace mangling in patch_inline_callbacks (#7554)
`Failure()` is more cunning than `Failure(e)`.
2020-05-22 10:17:36 +01:00
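What "cunning" means here, sketched with Twisted (hedged to what the commit message implies):
```
from twisted.python.failure import Failure

try:
    raise ValueError("boom")
except ValueError as e:
    # A bare Failure() inspects the exception currently being handled
    # (via sys.exc_info()), so it captures the traceback of the raise site.
    f_bare = Failure()
    # Failure(e) wraps the exception object; in some code paths the original
    # traceback is not carried along, which mangles the reported stack.
    f_wrap = Failure(e)

print(f_bare.getTraceback())
```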
Richard van der Hoff
d84bdfe599 mypy for synapse.http.site (#7553) 2020-05-22 10:12:17 +01:00
Richard van der Hoff
66a564c859 Fix some DETECTED VIOLATIONS in the config file (#7550)
consistency ftw
2020-05-22 10:11:50 +01:00
Brendan Abolivier
d1ae1015ec Retry to sync out of sync device lists (#7453)
When a call to `user_device_resync` fails, we don't currently mark the remote user's device list as out of sync, nor do we retry syncing it.

https://github.com/matrix-org/synapse/pull/6776 introduced some code infrastructure to mark device lists as stale/out of sync.

This commit uses that code infrastructure to mark device lists as out of sync if processing an incoming device list update makes the device handler realise that the device list is out of sync, but we can't resync right now.

It also adds a looping call to retry all failed resyncs every 30s. This shouldn't cause too much spam in the logs as this commit also removes the "Failed to handle device list update for..." warning logs when catching `NotRetryingDestination`.

Fixes #7418
2020-05-21 17:41:12 +02:00
Richard van der Hoff
0bbbd10513 Stub out GET presence requests in the frontend proxy (#7545)
We don't really make any promises about returning accurate presence data when
presence is disabled, so we may as well just return a static response, rather
than making the master handle a request.
2020-05-21 14:36:46 +01:00
David Vo
d74cdc1a42 Ensure worker config exists in systemd service (#7528) 2020-05-21 13:47:23 +01:00
Richard van der Hoff
075375bbc9 add a comment 2020-05-21 13:25:41 +01:00
Erik Johnston
f6f92845f8 Fix bug in persist events when dealing with non member types. (#7548)
`_is_server_still_joined` will throw if it is given state updates that have non-user-ID state keys alongside local user leaves. This is actually rarely a problem, since local leaves almost always get persisted by themselves.

(I discovered this on a branch that was otherwise broken, so I haven't seen this in the wild)
2020-05-21 13:20:10 +01:00
Richard van der Hoff
5db2a59a86 Update CONTRIBUTING.md (#7541) 2020-05-20 18:47:19 +01:00
Patrick Cloke
b2b8699070 Remove Ubuntu Cosmic and Disco which are both EOL. (#7539) 2020-05-20 10:08:46 -04:00
Patrick Cloke
9dc6f3075a Hash passwords earlier in the password reset process (#7538)
This now matches the logic of the registration process as modified in
56db0b1365 / #7523.
2020-05-20 09:48:03 -04:00
Richard van der Hoff
4fa74c7606 Minor clarifications to the TURN docs (#7533) 2020-05-20 11:04:34 +01:00
Patrick Cloke
02919bf4d8 Merge branch 'master' into develop 2020-05-19 09:56:15 -04:00
Patrick Cloke
13a82768ac Merge tag 'v1.13.0'
Synapse 1.13.0 (2020-05-19)
===========================

This release brings some potential changes necessary for certain
configurations of Synapse:

* If your Synapse is configured to use SSO and have a custom
  `sso_redirect_confirm_template_dir` configuration option set, you will need
  to duplicate the new `sso_auth_confirm.html`, `sso_auth_success.html` and
  `sso_account_deactivated.html` templates into that directory.
* Synapse plugins using the `complete_sso_login` method of
  `synapse.module_api.ModuleApi` should instead switch to the async/await
  version, `complete_sso_login_async`, which includes additional checks. The
  former version is now deprecated.
* A bug was introduced in Synapse 1.4.0 which could cause the room directory
  to be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
  earlier, to versions between v1.4.0 and v1.12.x.

Please review [UPGRADE.rst](https://github.com/matrix-org/synapse/blob/master/UPGRADE.rst)
for more details on these changes and for general upgrade guidance.

Notice of change to the default `git` branch for Synapse
--------------------------------------------------------

With the release of Synapse 1.13.0, the default `git` branch for Synapse has
changed to `develop`, which is the development tip. This is more consistent with
common practice and modern `git` usage.

The `master` branch, which tracks the latest release, is still available.
Developers and distributors who have scripts that run builds using the
default branch of Synapse should therefore consider pinning their scripts
to `master`.

Features
--------

- Extend the `web_client_location` option to accept an absolute URL to use as a redirect. Adds a warning when running the web client on the same hostname as homeserver. Contributed by Martin Milata. ([\#7006](https://github.com/matrix-org/synapse/issues/7006))
- Set `Referrer-Policy` header to `no-referrer` on media downloads. ([\#7009](https://github.com/matrix-org/synapse/issues/7009))
- Add support for running replication over Redis when using workers. ([\#7040](https://github.com/matrix-org/synapse/issues/7040), [\#7325](https://github.com/matrix-org/synapse/issues/7325), [\#7352](https://github.com/matrix-org/synapse/issues/7352), [\#7401](https://github.com/matrix-org/synapse/issues/7401), [\#7427](https://github.com/matrix-org/synapse/issues/7427), [\#7439](https://github.com/matrix-org/synapse/issues/7439), [\#7446](https://github.com/matrix-org/synapse/issues/7446), [\#7450](https://github.com/matrix-org/synapse/issues/7450), [\#7454](https://github.com/matrix-org/synapse/issues/7454))
- Admin API `POST /_synapse/admin/v1/join/<roomIdOrAlias>` to join users to a room like `auto_join_rooms` for creation of users. ([\#7051](https://github.com/matrix-org/synapse/issues/7051))
- Add options to prevent users from changing their profile or associated 3PIDs. ([\#7096](https://github.com/matrix-org/synapse/issues/7096))
- Support SSO in the user interactive authentication workflow. ([\#7102](https://github.com/matrix-org/synapse/issues/7102), [\#7186](https://github.com/matrix-org/synapse/issues/7186), [\#7279](https://github.com/matrix-org/synapse/issues/7279), [\#7343](https://github.com/matrix-org/synapse/issues/7343))
- Allow server admins to define and enforce a password policy ([MSC2000](https://github.com/matrix-org/matrix-doc/issues/2000)). ([\#7118](https://github.com/matrix-org/synapse/issues/7118))
- Improve the support for SSO authentication on the login fallback page. ([\#7152](https://github.com/matrix-org/synapse/issues/7152), [\#7235](https://github.com/matrix-org/synapse/issues/7235))
- Always whitelist the login fallback in the SSO configuration if `public_baseurl` is set. ([\#7153](https://github.com/matrix-org/synapse/issues/7153))
- Admin users are no longer required to be in a room to create an alias for it. ([\#7191](https://github.com/matrix-org/synapse/issues/7191))
- Require admin privileges to enable room encryption by default. This does not affect existing rooms. ([\#7230](https://github.com/matrix-org/synapse/issues/7230))
- Add a config option for specifying the value of the Accept-Language HTTP header when generating URL previews. ([\#7265](https://github.com/matrix-org/synapse/issues/7265))
- Allow `/requestToken` endpoints to hide the existence (or lack thereof) of 3PID associations on the homeserver. ([\#7315](https://github.com/matrix-org/synapse/issues/7315))
- Add a configuration setting to tweak the threshold for dummy events. ([\#7422](https://github.com/matrix-org/synapse/issues/7422))

Bugfixes
--------

- Don't attempt to use an invalid sqlite config if no database configuration is provided. Contributed by @nekatak. ([\#6573](https://github.com/matrix-org/synapse/issues/6573))
- Fix single-sign on with CAS systems: pass the same service URL when requesting the CAS ticket and when calling the `proxyValidate` URL. Contributed by @Naugrimm. ([\#6634](https://github.com/matrix-org/synapse/issues/6634))
- Fix missing field `default` when fetching user-defined push rules. ([\#6639](https://github.com/matrix-org/synapse/issues/6639))
- Improve error responses when accessing remote public room lists. ([\#6899](https://github.com/matrix-org/synapse/issues/6899), [\#7368](https://github.com/matrix-org/synapse/issues/7368))
- Transfer alias mappings on room upgrade. ([\#6946](https://github.com/matrix-org/synapse/issues/6946))
- Ensure that a user interactive authentication session is tied to a single request. ([\#7068](https://github.com/matrix-org/synapse/issues/7068), [\#7455](https://github.com/matrix-org/synapse/issues/7455))
- Fix a bug in the federation API which could cause occasional "Failed to get PDU" errors. ([\#7089](https://github.com/matrix-org/synapse/issues/7089))
- Return the proper error (`M_BAD_ALIAS`) when a non-existent canonical alias is provided. ([\#7109](https://github.com/matrix-org/synapse/issues/7109))
- Fix a bug which meant that groups updates were not correctly replicated between workers. ([\#7117](https://github.com/matrix-org/synapse/issues/7117))
- Fix starting workers when federation sending not split out. ([\#7133](https://github.com/matrix-org/synapse/issues/7133))
- Ensure `is_verified` is a boolean in responses to `GET /_matrix/client/r0/room_keys/keys`. Also warn the user if they forgot the `version` query param. ([\#7150](https://github.com/matrix-org/synapse/issues/7150))
- Fix error page being shown when a custom SAML handler attempted to redirect when processing an auth response. ([\#7151](https://github.com/matrix-org/synapse/issues/7151))
- Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo. ([\#7155](https://github.com/matrix-org/synapse/issues/7155))
- Fix excessive CPU usage by `prune_old_outbound_device_pokes` job. ([\#7159](https://github.com/matrix-org/synapse/issues/7159))
- Fix a bug which could cause outbound federation traffic to stop working if a client uploaded an incorrect e2e device signature. ([\#7177](https://github.com/matrix-org/synapse/issues/7177))
- Fix a bug which could cause incorrect 'cyclic dependency' error. ([\#7178](https://github.com/matrix-org/synapse/issues/7178))
- Fix a bug that could cause a user to be invited to a server notices (aka System Alerts) room without any notice being sent. ([\#7199](https://github.com/matrix-org/synapse/issues/7199))
- Fix some worker-mode replication handling not being correctly recorded in CPU usage stats. ([\#7203](https://github.com/matrix-org/synapse/issues/7203))
- Do not allow a deactivated user to login via SSO. ([\#7240](https://github.com/matrix-org/synapse/issues/7240), [\#7259](https://github.com/matrix-org/synapse/issues/7259))
- Fix --help command-line argument. ([\#7249](https://github.com/matrix-org/synapse/issues/7249))
- Fix room publish permissions not being checked on room creation. ([\#7260](https://github.com/matrix-org/synapse/issues/7260))
- Reject unknown session IDs during user interactive authentication instead of silently creating a new session. ([\#7268](https://github.com/matrix-org/synapse/issues/7268))
- Fix a SQL query introduced in Synapse 1.12.0 which could cause large amounts of logging to the postgres slow-query log. ([\#7274](https://github.com/matrix-org/synapse/issues/7274))
- Persist user interactive authentication sessions across workers and Synapse restarts. ([\#7302](https://github.com/matrix-org/synapse/issues/7302))
- Fixed backwards compatibility logic of the first value of `trusted_third_party_id_servers` being used for `account_threepid_delegates.email`, which occurs when the former, deprecated option is set and the latter is not. ([\#7316](https://github.com/matrix-org/synapse/issues/7316))
- Fix a bug where event updates might not be sent over replication to worker processes after the stream falls behind. ([\#7337](https://github.com/matrix-org/synapse/issues/7337), [\#7358](https://github.com/matrix-org/synapse/issues/7358))
- Fix bad error handling that would cause Synapse to crash if it's provided with a YAML configuration file that's either empty or doesn't parse into a key-value map. ([\#7341](https://github.com/matrix-org/synapse/issues/7341))
- Fix incorrect metrics reporting for `renew_attestations` background task. ([\#7344](https://github.com/matrix-org/synapse/issues/7344))
- Prevent non-federating rooms from appearing in responses to federated `POST /publicRoom` requests when a filter was included. ([\#7367](https://github.com/matrix-org/synapse/issues/7367))
- Fix a bug which would cause the room directory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](UPGRADE.rst#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
- Fix bug in `EventContext.deserialize`. ([\#7393](https://github.com/matrix-org/synapse/issues/7393))
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))

Improved Documentation
----------------------

- Update Debian installation instructions to recommend installing the `virtualenv` package instead of `python3-virtualenv`. ([\#6892](https://github.com/matrix-org/synapse/issues/6892))
- Improve the documentation for database configuration. ([\#6988](https://github.com/matrix-org/synapse/issues/6988))
- Improve the documentation of application service configuration files. ([\#7091](https://github.com/matrix-org/synapse/issues/7091))
- Update pre-built package name for FreeBSD. ([\#7107](https://github.com/matrix-org/synapse/issues/7107))
- Update postgres docs with login troubleshooting information. ([\#7119](https://github.com/matrix-org/synapse/issues/7119))
- Clean up INSTALL.md a bit. ([\#7141](https://github.com/matrix-org/synapse/issues/7141))
- Add documentation for running a local CAS server for testing. ([\#7147](https://github.com/matrix-org/synapse/issues/7147))
- Improve README.md by being explicit about public IP recommendation for TURN relaying. ([\#7167](https://github.com/matrix-org/synapse/issues/7167))
- Fix a small typo in the `metrics_flags` config option. ([\#7171](https://github.com/matrix-org/synapse/issues/7171))
- Update the contributed documentation on managing synapse workers with systemd, and bring it into the core distribution. ([\#7234](https://github.com/matrix-org/synapse/issues/7234))
- Add documentation to the `password_providers` config option. Add known password provider implementations to docs. ([\#7238](https://github.com/matrix-org/synapse/issues/7238), [\#7248](https://github.com/matrix-org/synapse/issues/7248))
- Modify suggested nginx reverse proxy configuration to match Synapse's default file upload size. Contributed by @ProCycleDev. ([\#7251](https://github.com/matrix-org/synapse/issues/7251))
- Documentation of media_storage_providers options updated to avoid misunderstandings. Contributed by Tristan Lins. ([\#7272](https://github.com/matrix-org/synapse/issues/7272))
- Add documentation on monitoring workers with Prometheus. ([\#7357](https://github.com/matrix-org/synapse/issues/7357))
- Clarify endpoint usage in the users admin api documentation. ([\#7361](https://github.com/matrix-org/synapse/issues/7361))

Deprecations and Removals
-------------------------

- Remove nonfunctional `captcha_bypass_secret` option from `homeserver.yaml`. ([\#7137](https://github.com/matrix-org/synapse/issues/7137))

Internal Changes
----------------

- Add benchmarks for LruCache. ([\#6446](https://github.com/matrix-org/synapse/issues/6446))
- Return total number of users and profile attributes in admin users endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6881](https://github.com/matrix-org/synapse/issues/6881))
- Change device list streams to have one row per ID. ([\#7010](https://github.com/matrix-org/synapse/issues/7010))
- Remove concept of a non-limited stream. ([\#7011](https://github.com/matrix-org/synapse/issues/7011))
- Move catchup of replication streams logic to worker. ([\#7024](https://github.com/matrix-org/synapse/issues/7024), [\#7195](https://github.com/matrix-org/synapse/issues/7195), [\#7226](https://github.com/matrix-org/synapse/issues/7226), [\#7239](https://github.com/matrix-org/synapse/issues/7239), [\#7286](https://github.com/matrix-org/synapse/issues/7286), [\#7290](https://github.com/matrix-org/synapse/issues/7290), [\#7318](https://github.com/matrix-org/synapse/issues/7318), [\#7326](https://github.com/matrix-org/synapse/issues/7326), [\#7378](https://github.com/matrix-org/synapse/issues/7378), [\#7421](https://github.com/matrix-org/synapse/issues/7421))
- Convert some of synapse.rest.media to async/await. ([\#7110](https://github.com/matrix-org/synapse/issues/7110), [\#7184](https://github.com/matrix-org/synapse/issues/7184), [\#7241](https://github.com/matrix-org/synapse/issues/7241))
- De-duplicate / remove unused REST code for login and auth. ([\#7115](https://github.com/matrix-org/synapse/issues/7115))
- Convert `*StreamRow` classes to inner classes. ([\#7116](https://github.com/matrix-org/synapse/issues/7116))
- Clean up some LoggingContext code. ([\#7120](https://github.com/matrix-org/synapse/issues/7120), [\#7181](https://github.com/matrix-org/synapse/issues/7181), [\#7183](https://github.com/matrix-org/synapse/issues/7183), [\#7408](https://github.com/matrix-org/synapse/issues/7408), [\#7426](https://github.com/matrix-org/synapse/issues/7426))
- Add explicit `instance_id` for USER_SYNC commands and remove implicit `conn_id` usage. ([\#7128](https://github.com/matrix-org/synapse/issues/7128))
- Refactored the CAS authentication logic to a separate class. ([\#7136](https://github.com/matrix-org/synapse/issues/7136))
- Run replication streamers on workers. ([\#7146](https://github.com/matrix-org/synapse/issues/7146))
- Add tests for outbound device pokes. ([\#7157](https://github.com/matrix-org/synapse/issues/7157))
- Fix device list update stream ids going backward. ([\#7158](https://github.com/matrix-org/synapse/issues/7158))
- Use `stream.current_token()` and remove `stream_positions()`. ([\#7172](https://github.com/matrix-org/synapse/issues/7172))
- Move client command handling out of TCP protocol. ([\#7185](https://github.com/matrix-org/synapse/issues/7185))
- Move server command handling out of TCP protocol. ([\#7187](https://github.com/matrix-org/synapse/issues/7187))
- Fix consistency of HTTP status codes reported in log lines. ([\#7188](https://github.com/matrix-org/synapse/issues/7188))
- Only run one background database update at a time. ([\#7190](https://github.com/matrix-org/synapse/issues/7190))
- Remove sent outbound device list pokes from the database. ([\#7192](https://github.com/matrix-org/synapse/issues/7192))
- Add a background database update job to clear out duplicate `device_lists_outbound_pokes`. ([\#7193](https://github.com/matrix-org/synapse/issues/7193))
- Remove some extraneous debugging log lines. ([\#7207](https://github.com/matrix-org/synapse/issues/7207))
- Add explicit Python build tooling as dependencies for the snapcraft build. ([\#7213](https://github.com/matrix-org/synapse/issues/7213))
- Add typing information to federation server code. ([\#7219](https://github.com/matrix-org/synapse/issues/7219))
- Extend room admin api (`GET /_synapse/admin/v1/rooms`) with additional attributes. ([\#7225](https://github.com/matrix-org/synapse/issues/7225))
- Unblacklist '/upgrade creates a new room' sytest for workers. ([\#7228](https://github.com/matrix-org/synapse/issues/7228))
- Remove redundant checks on `daemonize` from synctl. ([\#7233](https://github.com/matrix-org/synapse/issues/7233))
- Upgrade jQuery to v3.4.1 on fallback login/registration pages. ([\#7236](https://github.com/matrix-org/synapse/issues/7236))
- Change log line that told user to implement onLogin/onRegister fallback js functions to a warning, instead of an info, so it's more visible. ([\#7237](https://github.com/matrix-org/synapse/issues/7237))
- Correct the parameters of a test fixture. Contributed by Isaiah Singletary. ([\#7243](https://github.com/matrix-org/synapse/issues/7243))
- Convert auth handler to async/await. ([\#7261](https://github.com/matrix-org/synapse/issues/7261))
- Add some unit tests for replication. ([\#7278](https://github.com/matrix-org/synapse/issues/7278))
- Improve typing annotations in `synapse.replication.tcp.streams.Stream`. ([\#7291](https://github.com/matrix-org/synapse/issues/7291))
- Reduce log verbosity of url cache cleanup tasks. ([\#7295](https://github.com/matrix-org/synapse/issues/7295))
- Fix sample SAML Service Provider configuration. Contributed by @frcl. ([\#7300](https://github.com/matrix-org/synapse/issues/7300))
- Fix StreamChangeCache to work with multiple entities changing on the same stream id. ([\#7303](https://github.com/matrix-org/synapse/issues/7303))
- Fix an incorrect import in IdentityHandler. ([\#7319](https://github.com/matrix-org/synapse/issues/7319))
- Reduce logging verbosity for successful federation requests. ([\#7321](https://github.com/matrix-org/synapse/issues/7321))
- Convert some federation handler code to async/await. ([\#7338](https://github.com/matrix-org/synapse/issues/7338))
- Fix collation for postgres for unit tests. ([\#7359](https://github.com/matrix-org/synapse/issues/7359))
- Convert RegistrationWorkerStore.is_server_admin and dependent code to async/await. ([\#7363](https://github.com/matrix-org/synapse/issues/7363))
- Add an `instance_name` to `RDATA` and `POSITION` replication commands. ([\#7364](https://github.com/matrix-org/synapse/issues/7364))
- Thread through instance name to replication client. ([\#7369](https://github.com/matrix-org/synapse/issues/7369))
- Convert synapse.server_notices to async/await. ([\#7394](https://github.com/matrix-org/synapse/issues/7394))
- Convert synapse.notifier to async/await. ([\#7395](https://github.com/matrix-org/synapse/issues/7395))
- Fix issues with the Python package manifest. ([\#7404](https://github.com/matrix-org/synapse/issues/7404))
- Prevent methods in `synapse.handlers.auth` from polling the homeserver config every request. ([\#7420](https://github.com/matrix-org/synapse/issues/7420))
- Speed up fetching device lists changes when handling `/sync` requests. ([\#7423](https://github.com/matrix-org/synapse/issues/7423))
- Run group attestation renewal in series rather than parallel for performance. ([\#7442](https://github.com/matrix-org/synapse/issues/7442))
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
- Update the version of dh-virtualenv we use to build debs, and add focal to the list of target distributions. ([\#7526](https://github.com/matrix-org/synapse/issues/7526))
2020-05-19 09:55:39 -04:00
Patrick Cloke
45c8b1c618 Update changelog based on feedback. 2020-05-19 09:31:59 -04:00
Patrick Cloke
66fd16261c Move warnings in the changelog and re-iterate changes to branches. 2020-05-19 09:28:02 -04:00
Patrick Cloke
ac3264bf1e 1.13.0 2020-05-19 09:19:09 -04:00
Richard van der Hoff
1fc8914f76 update dh-virtualenv (#7526) 2020-05-19 13:48:41 +01:00
Romain Bouyé
a57863d2b4 synctl warns when no process is stopped and avoids start (#6598)
* If an error occurs when stopping a process, synctl now logs a warning.
* During a restart, synctl will avoid attempting to start Synapse if an error
  occurs during stopping Synapse.
2020-05-19 08:47:45 -04:00
Paul Tötterman
ab3e19d814 Improve API doc readability (#7527) 2020-05-19 11:20:23 +01:00
Aaron Raimist
250f3eb991 Omit displayname or avatar_url if they aren't set instead of returning null (#7497)
Per https://github.com/matrix-org/matrix-doc/issues/1436#issuecomment-410089470 they should be omitted instead of returning null or "". They aren't marked as required in the spec.

Fixes https://github.com/matrix-org/synapse/issues/7333

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2020-05-19 10:31:25 +01:00
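A sketch of the serialization change (hypothetical helper, not Synapse's code):
```
import json

def profile_response(displayname=None, avatar_url=None):
    body = {}
    # Omit unset fields entirely rather than sending null or "":
    if displayname is not None:
        body["displayname"] = displayname
    if avatar_url is not None:
        body["avatar_url"] = avatar_url
    return body

print(json.dumps(profile_response()))                     # {}
print(json.dumps(profile_response(displayname="Aaron")))  # {"displayname": "Aaron"}
```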
Patrick Cloke
ee421e5244 Merge tag 'v1.13.0rc3' into develop
Synapse 1.13.0rc3 (2020-05-18)

Bugfixes:

- Hash passwords as early as possible during registration. #7523
2020-05-18 11:10:04 -04:00
Patrick Cloke
3c8a57f080 1.13.0rc3 2020-05-18 10:58:51 -04:00
Patrick Cloke
56db0b1365 Hash passwords earlier in the registration process (#7523) 2020-05-18 09:46:18 -04:00
Erik Johnston
51055c8c44 Allow ReplicationRestResource to be added to workers (#7515)
This allows workers to talk to each other over HTTP replication.
2020-05-18 12:24:48 +01:00
Richard van der Hoff
4d1afb1dfe Merge pull request #7519 from matrix-org/rav/kill_py2_code
Kill off some old python 2 code
2020-05-18 10:45:30 +01:00
Richard van der Hoff
164f50f5f2 fix mypy for tests/replication (#7518) 2020-05-18 10:43:05 +01:00
Patrick Cloke
c29915bd05 Add type hints to room member handlers (#7513) 2020-05-15 15:05:25 -04:00
Richard van der Hoff
ab57353de3 changelog 2020-05-15 19:37:41 +01:00
Richard van der Hoff
d4676910c9 remove miscellaneous PY2 code 2020-05-15 19:37:41 +01:00
Richard van der Hoff
e6027562e2 remove builtins.buffer code from storage code
this is no longer needed on python 3
2020-05-15 19:37:41 +01:00
Richard van der Hoff
91f51c611c remove redundant __func__
this is a no-op under python 3
2020-05-15 19:37:41 +01:00
Richard van der Hoff
65902e08c3 remove to_ascii
this is a no-op on python 3.
2020-05-15 19:12:03 +01:00
Richard van der Hoff
08fa96f030 Remove exception_to_unicode
this is a no-op on python 3.
2020-05-15 19:07:24 +01:00
Richard van der Hoff
6c1f7c722f Fix limit logic for AccountDataStream (#7384)
Make sure that the AccountDataStream presents complete updates, in the right
order.

This is much the same fix as #7337 and #7358, but applied to a different stream.
2020-05-15 19:03:25 +01:00
Andrew Morgan
34a43f0084 Fix a couple of small typos 2020-05-15 18:54:32 +01:00
Patrick Cloke
a3cf36f76e Support UI Authentication for OpenID Connect accounts (#7457) 2020-05-15 12:26:02 -04:00
Erik Johnston
03aff4c75e Add a worker store for search insertion. (#7516)
This is required as both event persistence and the background update need access to this function. It should be perfectly safe for two workers to write to that table at the same time.
2020-05-15 17:22:47 +01:00
Andrew Morgan
16090a077f Prevent 0-member/null room_version rooms from appearing in group room queries (#7465) 2020-05-15 17:17:42 +01:00
Erik Johnston
1f36ff69e8 Move event stream handling out of slave store. (#7491)
This allows us to have the logic on both master and workers, which is necessary to move event persistence off master.

We also combine the instantiation of ID generators from DataStore and slave stores to the base worker stores. This allows us to select which process writes events independently of the master/worker splits.
2020-05-15 16:43:59 +01:00
Patrick Cloke
5355421295 Add type hints to event_auth code. (#7505) 2020-05-15 11:19:43 -04:00
Andrew Morgan
86614e251f Fix a small typo in the arguments of simple_update in update_remote_profile_cache (#7511) 2020-05-15 16:17:12 +01:00
Richard van der Hoff
24d9151a08 Formatting for reverse-proxy docs (#7514)
also a small clarification to nginx
2020-05-15 15:13:39 +01:00
Jeff Peeler
572b444dab Add Caddy 2 example (#7463)
The specific headers that are passed using this new configuration format
are Host and X-Forwarded-For, which should be all that's required.

Note that for production another matcher should be added in the first
section to properly handle the base_url lookup:
reverse_proxy /.well-known/matrix/* http://localhost:8008

Signed-off-by: Jeff Peeler <jpeeler@gmail.com>
2020-05-15 14:36:01 +01:00
Patrick Cloke
e9f3de0bab Update the room member handler to use async/await. (#7507) 2020-05-15 09:32:13 -04:00
Patrick Cloke
08bc80ef09 Implement room version 6 (MSC2240). (#7506) 2020-05-15 09:30:10 -04:00
Andrew Morgan
02d97fc3ba Ignore incoming presence updates when presence is disabled (#7508) 2020-05-15 11:44:00 +01:00
Patrick Cloke
56b66db78a Strictly enforce canonicaljson requirements in a new room version (#7381) 2020-05-14 13:24:01 -04:00
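For context, Matrix canonical JSON forbids floats and requires integers to fit in [-(2**53)+1, 2**53-1]; a strict check looks roughly like this (a sketch, not Synapse's implementation):
```
def check_canonical_values(value):
    """Reject values that violate Matrix canonical JSON constraints."""
    if isinstance(value, bool):
        return  # bools are fine; check before int, since bool subclasses int
    if isinstance(value, float):
        raise ValueError("floats are not allowed in canonical JSON")
    if isinstance(value, int) and not -(2**53) + 1 <= value <= 2**53 - 1:
        raise ValueError("integer out of canonical JSON range")
    if isinstance(value, dict):
        for v in value.values():
            check_canonical_values(v)
    elif isinstance(value, list):
        for v in value:
            check_canonical_values(v)

check_canonical_values({"content": {"a": 1, "b": [2, "x"]}})  # passes
```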
Richard van der Hoff
ec0b72bc4e Merge branch 'master' into develop 2020-05-14 18:12:00 +01:00
Richard van der Hoff
a564ec4d4b remove spurious changelog files
These PRs have gone straight to `master` and aren't really relevant to the
release, so it doesn't make sense to have changelog entries for them.
2020-05-14 18:11:20 +01:00
Richard van der Hoff
66d03639dc Notes on using git (#7496)
* general updates to CONTRIBUTING.md
* notes on updating your PR
* Notes on squash-merging or otherwise
* document git branching model
2020-05-14 18:03:10 +01:00
Patrick Cloke
fef3ff5cc4 Enforce MSC2209: auth rules for notifications in power level event (#7502)
In a new room version, the "notifications" key of power level events is
subject to restricted auth rules.
2020-05-14 12:38:17 -04:00
Andrew Morgan
5611644519 Workaround for failure to wrap reason in Failure (#7473) 2020-05-14 17:07:24 +01:00
Richard van der Hoff
eafd103fc7 Fix b'GET' in prometheus metrics (#7503) 2020-05-14 17:01:34 +01:00
Andrew Morgan
225c165087 Allow expired accounts to logout (#7443) 2020-05-14 16:32:49 +01:00
Richard van der Hoff
207b1737ee Update reverse_proxy.md
a couple of cleanups
2020-05-05 11:29:29 +01:00
Brendan Abolivier
cb6fd280af Add a section about support to the top of the README (#7392)
Continuation of #7379

Adds a section in the README telling people to go to #synapse:matrix.org instead of using github issues. I'm not entirely sure about placing it above the install section, but people are likely to first seek support when installing (if something goes boom), and it's probably better to have it as high as possible anyway so people actually see it.
2020-05-01 17:27:22 +02:00
Brendan Abolivier
a6b32bad77 Make it clearer that #synapse:matrix.org is our support channel (#7379)
This PR moves the "support is in #synapse:matrix.org" message in the bug report template outside of the comment, as some people seem to ignore what's in the comments, and phrases it a bit more like the support request template. It also adds a default issue template that says the same thing, and a notice about the security disclosure policy to both the default template and the bug report one.

It also adds a badge to the top of the README, whose alt text shows roughly the same message if the badge doesn't load (e.g. if matrix.org is slow).

Fixes #6826
2020-05-01 13:42:35 +02:00
225 changed files with 7504 additions and 3104 deletions

.github/ISSUE_TEMPLATE.md (new file, +5 lines)

@@ -0,0 +1,5 @@
+**If you are looking for support** please ask in **#synapse:matrix.org**
+(using a matrix.org account if necessary). We do not use GitHub issues for
+support.
+**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/

.github/ISSUE_TEMPLATE/BUG_REPORT.md

@@ -4,11 +4,13 @@ about: Create a report to help us improve
 ---
+**THIS IS NOT A SUPPORT CHANNEL!**
+**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
+please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
 <!--
-**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
-You will likely get better support more quickly if you ask in ** #synapse:matrix.org ** ;)
+If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
 This is a bug report template. By following the instructions below and
 filling out the sections with your information, you will help the us to get all

CHANGES.md

@@ -1,20 +1,184 @@
-Synapse 1.13.0rc2 (2020-05-14)
+Synapse 1.15.0 (2020-06-11)
+===========================

No significant changes.

Synapse 1.15.0rc1 (2020-06-09)
==============================

Features
--------

- Advertise support for Client-Server API r0.6.0 and remove related unstable feature flags. ([\#6585](https://github.com/matrix-org/synapse/issues/6585))
- Add an option to disable autojoining rooms for guest accounts. ([\#6637](https://github.com/matrix-org/synapse/issues/6637))
- For SAML authentication, add the ability to pass email addresses to be added to new users' accounts via SAML attributes. Contributed by Christopher Cooper. ([\#7385](https://github.com/matrix-org/synapse/issues/7385))
- Add admin APIs to allow server admins to manage users' devices. Contributed by @dklimpel. ([\#7481](https://github.com/matrix-org/synapse/issues/7481))
- Add support for generating thumbnails for WebP images. Previously, users would see an empty box instead of a preview image. Contributed by @WGH-. ([\#7586](https://github.com/matrix-org/synapse/issues/7586))
- Support the standardized `m.login.sso` user-interactive authentication flow. ([\#7630](https://github.com/matrix-org/synapse/issues/7630))

Bugfixes
--------

- Allow new users to be registered via the admin API even if the monthly active user limit has been reached. Contributed by @dklimpel. ([\#7263](https://github.com/matrix-org/synapse/issues/7263))
- Fix email notifications not being enabled for new users when created via the Admin API. ([\#7267](https://github.com/matrix-org/synapse/issues/7267))
- Fix str placeholders in an instance of `PrepareDatabaseException`. Introduced in Synapse v1.8.0. ([\#7575](https://github.com/matrix-org/synapse/issues/7575))
- Fix a bug in automatic user creation during first time login with `m.login.jwt`. Regression in v1.6.0. Contributed by @olof. ([\#7585](https://github.com/matrix-org/synapse/issues/7585))
- Fix a bug causing the cross-signing keys to be ignored when resyncing a device list. ([\#7594](https://github.com/matrix-org/synapse/issues/7594))
- Fix metrics failing when there is a large number of active background processes. ([\#7597](https://github.com/matrix-org/synapse/issues/7597))
- Fix bug where returning rooms for a group would fail if it included a room that the server was not in. ([\#7599](https://github.com/matrix-org/synapse/issues/7599))
- Fix duplicate key violation when persisting read markers. ([\#7607](https://github.com/matrix-org/synapse/issues/7607))
- Prevent an entire iteration of the device list resync loop from failing if one server responds with a malformed result. ([\#7609](https://github.com/matrix-org/synapse/issues/7609))
- Fix exceptions when fetching events from a remote host fails. ([\#7622](https://github.com/matrix-org/synapse/issues/7622))
- Make `synctl restart` start synapse if it wasn't running. ([\#7624](https://github.com/matrix-org/synapse/issues/7624))
- Pass device information through to the login endpoint when using the login fallback. ([\#7629](https://github.com/matrix-org/synapse/issues/7629))
- Advertise the `m.login.token` login flow when OpenID Connect is enabled. ([\#7631](https://github.com/matrix-org/synapse/issues/7631))
- Fix bug in account data replication stream. ([\#7656](https://github.com/matrix-org/synapse/issues/7656))

Improved Documentation
----------------------

- Update the OpenBSD installation instructions. ([\#7587](https://github.com/matrix-org/synapse/issues/7587))
- Advertise Python 3.8 support in `setup.py`. ([\#7602](https://github.com/matrix-org/synapse/issues/7602))
- Add a link to `#synapse:matrix.org` in the troubleshooting section of the README. ([\#7603](https://github.com/matrix-org/synapse/issues/7603))
- Clarifications to the admin api documentation. ([\#7647](https://github.com/matrix-org/synapse/issues/7647))

Internal Changes
----------------

- Convert the identity handler to async/await. ([\#7561](https://github.com/matrix-org/synapse/issues/7561))
- Improve query performance for fetching state from a PostgreSQL database. Contributed by @ilmari. ([\#7567](https://github.com/matrix-org/synapse/issues/7567))
- Speed up processing of federation stream RDATA rows. ([\#7584](https://github.com/matrix-org/synapse/issues/7584))
- Add comment to systemd example to show postgresql dependency. ([\#7591](https://github.com/matrix-org/synapse/issues/7591))
- Refactor `Ratelimiter` to limit the amount of expensive config value accesses. ([\#7595](https://github.com/matrix-org/synapse/issues/7595))
- Convert groups handlers to async/await. ([\#7600](https://github.com/matrix-org/synapse/issues/7600))
- Clean up exception handling in `SAML2ResponseResource`. ([\#7614](https://github.com/matrix-org/synapse/issues/7614))
- Check that all asynchronous tasks succeed and general cleanup of `MonthlyActiveUsersTestCase` and `TestMauLimit`. ([\#7619](https://github.com/matrix-org/synapse/issues/7619))
- Convert `get_user_id_by_threepid` to async/await. ([\#7620](https://github.com/matrix-org/synapse/issues/7620))
- Switch to upstream `dh-virtualenv` rather than our fork for Debian package builds. ([\#7621](https://github.com/matrix-org/synapse/issues/7621))
- Update CI scripts to check the number in the newsfile fragment. ([\#7623](https://github.com/matrix-org/synapse/issues/7623))
- Check if the localpart of a Matrix ID is reserved for guest users earlier in the registration flow, as well as when responding to requests to `/register/available`. ([\#7625](https://github.com/matrix-org/synapse/issues/7625))
- Minor cleanups to OpenID Connect integration. ([\#7628](https://github.com/matrix-org/synapse/issues/7628))
- Attempt to fix flaky test: `PhoneHomeStatsTestCase.test_performance_100`. ([\#7634](https://github.com/matrix-org/synapse/issues/7634))
- Fix typos of `m.olm.curve25519-aes-sha2` and `m.megolm.v1.aes-sha2` in comments, test files. ([\#7637](https://github.com/matrix-org/synapse/issues/7637))
- Convert user directory, state deltas, and stats handlers to async/await. ([\#7640](https://github.com/matrix-org/synapse/issues/7640))
- Remove some unused constants. ([\#7644](https://github.com/matrix-org/synapse/issues/7644))
- Fix type information on `assert_*_is_admin` methods. ([\#7645](https://github.com/matrix-org/synapse/issues/7645))
- Convert registration handler to async/await. ([\#7649](https://github.com/matrix-org/synapse/issues/7649))
Synapse 1.14.0 (2020-05-28)
===========================
No significant changes.
Synapse 1.14.0rc2 (2020-05-27)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
- Fix cache config to not apply cache factor to event cache. Regression in v1.14.0rc1. ([\#7578](https://github.com/matrix-org/synapse/issues/7578))
- Fix bug where `ReplicationStreamer` was not always started when replication was enabled. Bug introduced in v1.14.0rc1. ([\#7579](https://github.com/matrix-org/synapse/issues/7579))
- Fix specifying individual cache factors for caches with special characters in their name. Regression in v1.14.0rc1. ([\#7580](https://github.com/matrix-org/synapse/issues/7580))
Improved Documentation
----------------------
- Fix the OIDC `client_auth_method` value in the sample config. ([\#7581](https://github.com/matrix-org/synapse/issues/7581))
Synapse 1.14.0rc1 (2020-05-26)
==============================
Features
--------
- Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches (a configuration sketch follows this list). ([\#6391](https://github.com/matrix-org/synapse/issues/6391))
- Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs). ([\#7256](https://github.com/matrix-org/synapse/issues/7256), [\#7457](https://github.com/matrix-org/synapse/issues/7457))
- Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7317](https://github.com/matrix-org/synapse/issues/7317))
- Allow for using more than one spam checker module at once. ([\#7435](https://github.com/matrix-org/synapse/issues/7435))
- Add additional authentication checks for `m.room.power_levels` event per [MSC2209](https://github.com/matrix-org/matrix-doc/pull/2209). ([\#7502](https://github.com/matrix-org/synapse/issues/7502))
- Implement room version 6 per [MSC2240](https://github.com/matrix-org/matrix-doc/pull/2240). ([\#7506](https://github.com/matrix-org/synapse/issues/7506))
- Add highly experimental option to move event persistence off master. ([\#7281](https://github.com/matrix-org/synapse/issues/7281), [\#7374](https://github.com/matrix-org/synapse/issues/7374), [\#7436](https://github.com/matrix-org/synapse/issues/7436), [\#7440](https://github.com/matrix-org/synapse/issues/7440), [\#7475](https://github.com/matrix-org/synapse/issues/7475), [\#7490](https://github.com/matrix-org/synapse/issues/7490), [\#7491](https://github.com/matrix-org/synapse/issues/7491), [\#7492](https://github.com/matrix-org/synapse/issues/7492), [\#7493](https://github.com/matrix-org/synapse/issues/7493), [\#7495](https://github.com/matrix-org/synapse/issues/7495), [\#7515](https://github.com/matrix-org/synapse/issues/7515), [\#7516](https://github.com/matrix-org/synapse/issues/7516), [\#7517](https://github.com/matrix-org/synapse/issues/7517), [\#7542](https://github.com/matrix-org/synapse/issues/7542))
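As an illustration of the new cache settings, here is a minimal `homeserver.yaml` sketch; the cache name and factor values are arbitrary examples, not recommendations:

```yaml
caches:
  # Scale the size of every cache by this factor.
  global_factor: 1.0
  # Override the factor for individual caches, keyed by cache name.
  per_cache_factors:
    get_users_who_share_room_with_user: 2.0
```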
Bugfixes
--------
- Fix a bug where event updates might not be sent over replication to worker processes after the stream falls behind. ([\#7384](https://github.com/matrix-org/synapse/issues/7384))
- Allow expired user accounts to log out their device sessions. ([\#7443](https://github.com/matrix-org/synapse/issues/7443))
- Fix a bug that would cause Synapse not to resync out-of-sync device lists. ([\#7453](https://github.com/matrix-org/synapse/issues/7453))
- Prevent rooms with 0 members or with invalid version strings from breaking group queries. ([\#7465](https://github.com/matrix-org/synapse/issues/7465))
- Workaround for an upstream Twisted bug that caused Synapse to become unresponsive after startup. ([\#7473](https://github.com/matrix-org/synapse/issues/7473))
- Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting. ([\#7482](https://github.com/matrix-org/synapse/issues/7482))
- When sending `m.room.member` events, omit `displayname` and `avatar_url` if they aren't set instead of setting them to `null`. Contributed by Aaron Raimist. ([\#7497](https://github.com/matrix-org/synapse/issues/7497))
- Fix incorrect `method` label on `synapse_http_matrixfederationclient_{requests,responses}` prometheus metrics. ([\#7503](https://github.com/matrix-org/synapse/issues/7503))
- Ignore incoming presence events from other homeservers if presence is disabled locally. ([\#7508](https://github.com/matrix-org/synapse/issues/7508))
- Fix a long-standing bug that broke the update remote profile background process. ([\#7511](https://github.com/matrix-org/synapse/issues/7511))
- Hash passwords as early as possible during password reset. ([\#7538](https://github.com/matrix-org/synapse/issues/7538))
- Fix bug where a local user leaving a room could fail under rare circumstances. ([\#7548](https://github.com/matrix-org/synapse/issues/7548))
- Fix "Missing RelayState parameter" error when using user interactive authentication with SAML for some SAML providers. ([\#7552](https://github.com/matrix-org/synapse/issues/7552))
- Fix exception `'GenericWorkerReplicationHandler' object has no attribute 'send_federation_ack'`, introduced in v1.13.0. ([\#7564](https://github.com/matrix-org/synapse/issues/7564))
- `synctl` now warns if it was unable to stop Synapse and will not attempt to start Synapse if nothing was stopped. Contributed by Romain Bouyé. ([\#6598](https://github.com/matrix-org/synapse/issues/6598))
Updates to the Docker image
---------------------------
- Update docker runtime image to Alpine v3.11. Contributed by @Starbix. ([\#7398](https://github.com/matrix-org/synapse/issues/7398))
Improved Documentation
----------------------
- Update information about mapping providers for SAML and OpenID. ([\#7458](https://github.com/matrix-org/synapse/issues/7458))
- Add additional reverse proxy example for Caddy v2. Contributed by Jeff Peeler. ([\#7463](https://github.com/matrix-org/synapse/issues/7463))
- Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman. ([\#7477](https://github.com/matrix-org/synapse/issues/7477))
- Improve the formatting of `reverse_proxy.md`. ([\#7514](https://github.com/matrix-org/synapse/issues/7514))
- Change the systemd worker service to check that the worker config file exists instead of silently failing. Contributed by David Vo. ([\#7528](https://github.com/matrix-org/synapse/issues/7528))
- Minor clarifications to the TURN docs. ([\#7533](https://github.com/matrix-org/synapse/issues/7533))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
- Add typing annotations in `synapse.federation`. ([\#7382](https://github.com/matrix-org/synapse/issues/7382))
- Convert the room handler to async/await. ([\#7396](https://github.com/matrix-org/synapse/issues/7396))
- Improve performance of `get_e2e_cross_signing_key`. ([\#7428](https://github.com/matrix-org/synapse/issues/7428))
- Improve performance of `mark_as_sent_devices_by_remote`. ([\#7429](https://github.com/matrix-org/synapse/issues/7429), [\#7562](https://github.com/matrix-org/synapse/issues/7562))
- Add type hints to the SAML handler. ([\#7445](https://github.com/matrix-org/synapse/issues/7445))
- Remove storage method `get_hosts_in_room` that is no longer called anywhere. ([\#7448](https://github.com/matrix-org/synapse/issues/7448))
- Fix some typos in the `notice_expiry` templates. ([\#7449](https://github.com/matrix-org/synapse/issues/7449))
- Convert the federation handler to async/await. ([\#7459](https://github.com/matrix-org/synapse/issues/7459))
- Convert the search handler to async/await. ([\#7460](https://github.com/matrix-org/synapse/issues/7460))
- Add type hints to `synapse.event_auth`. ([\#7505](https://github.com/matrix-org/synapse/issues/7505))
- Convert the room member handler to async/await. ([\#7507](https://github.com/matrix-org/synapse/issues/7507))
- Add type hints to room member handler. ([\#7513](https://github.com/matrix-org/synapse/issues/7513))
- Fix typing annotations in `tests.replication`. ([\#7518](https://github.com/matrix-org/synapse/issues/7518))
- Remove some redundant Python 2 support code. ([\#7519](https://github.com/matrix-org/synapse/issues/7519))
- All endpoints now respond with a 200 OK for `OPTIONS` requests. ([\#7534](https://github.com/matrix-org/synapse/issues/7534), [\#7560](https://github.com/matrix-org/synapse/issues/7560))
- Synapse now exports [detailed allocator statistics](https://doc.pypy.org/en/latest/gc_info.html#gc-get-stats) and basic GC timings as Prometheus metrics (`pypy_gc_time_seconds_total` and `pypy_memory_bytes`) when run under PyPy. Contributed by Ivan Shapovalov. ([\#7536](https://github.com/matrix-org/synapse/issues/7536))
- Remove Ubuntu Cosmic and Disco from the list of distributions which we provide `.deb`s for, due to end-of-life. ([\#7539](https://github.com/matrix-org/synapse/issues/7539))
- Make worker processes return a stubbed-out response to `GET /presence` requests. ([\#7545](https://github.com/matrix-org/synapse/issues/7545))
- Optimise some references to `hs.config`. ([\#7546](https://github.com/matrix-org/synapse/issues/7546))
- On upgrade room only send canonical alias once. ([\#7547](https://github.com/matrix-org/synapse/issues/7547))
- Fix some indentation inconsistencies in the sample config. ([\#7550](https://github.com/matrix-org/synapse/issues/7550))
- Include `synapse.http.site` in type checking. ([\#7553](https://github.com/matrix-org/synapse/issues/7553))
- Fix some test code to not mangle stacktraces, to make it easier to debug errors. ([\#7554](https://github.com/matrix-org/synapse/issues/7554))
- Refresh apt cache when building `dh_virtualenv` docker image. ([\#7555](https://github.com/matrix-org/synapse/issues/7555))
- Stop logging some expected HTTP request errors as exceptions. ([\#7556](https://github.com/matrix-org/synapse/issues/7556), [\#7563](https://github.com/matrix-org/synapse/issues/7563))
- Convert sending mail to async/await. ([\#7557](https://github.com/matrix-org/synapse/issues/7557))
- Simplify `reap_monthly_active_users`. ([\#7558](https://github.com/matrix-org/synapse/issues/7558))
Synapse 1.13.0 (2020-05-19)
===========================
This release brings some potential changes necessary for certain
configurations of Synapse:
@@ -34,6 +198,53 @@ configurations of Synapse:
Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
and for general upgrade guidance.
Notice of change to the default `git` branch for Synapse
--------------------------------------------------------
With the release of Synapse 1.13.0, the default `git` branch for Synapse has
changed to `develop`, which is the development tip. This is more consistent with
common practice and modern `git` usage.
The `master` branch, which tracks the latest release, is still available. Developers
and distributors who have scripts which run builds using the default branch of
Synapse should therefore consider pinning their scripts to `master`.
Internal Changes
----------------
- Update the version of dh-virtualenv we use to build debs, and add focal to the list of target distributions. ([\#7526](https://github.com/matrix-org/synapse/issues/7526))
Synapse 1.13.0rc3 (2020-05-18)
==============================
Bugfixes
--------
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
Synapse 1.13.0rc1 (2020-05-11)
==============================
Features
--------


@@ -1,62 +1,48 @@
# Contributing code to Synapse
Everyone is welcome to contribute code to [matrix.org
projects](https://github.com/matrix-org), provided that they are willing to
license their contributions under the same license as the project itself. We
follow a simple 'inbound=outbound' model for contributions: the act of
submitting an 'inbound' contribution means that the contributor agrees to
license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
[LICENSE](LICENSE)).
## How to contribute
The preferred and easiest way to contribute changes is to fork the relevant
project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.
Some other points to follow:

* Please base your changes on the `develop` branch.
* Please follow the [code style requirements](#code-style).
* Please include a [changelog entry](#changelog) with each PR.
* Please [sign off](#sign-off) your contribution.
* Please keep an eye on the pull request for feedback from the [continuous
integration system](#continuous-integration-and-testing) and try to fix any
errors that come up.
* If you need to [update your PR](#updating-your-pull-request), just add new
commits to your branch rather than rebasing.

We use the master branch to track the most recent release, so that folks who
blindly clone the repo and automatically check out master get something that
works. Develop is the unstable branch where all the development actually
happens: the workflow is that contributors should fork the develop branch to
make a 'feature' branch for a particular contribution, and then make a pull
request to merge this back into the matrix.org 'official' develop branch. We
use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
## Code style
Synapse's code style is documented [here](docs/code_style.md). Please follow
it, including the conventions for the [sample configuration
file](docs/code_style.md#configuration-file-format).
Many of the conventions are enforced by scripts which are run as part of the
[continuous integration system](#continuous-integration-and-testing). To help
check if you have followed the code style, you can run `scripts-dev/lint.sh`
locally. You'll need python 3.6 or later, and to install a number of tools:
```
# Install the dependencies
pip install -U black flake8 flake8-comprehensions isort
```
**Note that the script does not just test/check, but also reformats code, so you
may wish to ensure any new code is committed first**.
By default, this script checks all files and can take some time; if you alter
only certain files, you might wish to specify paths as arguments to reduce the
run-time:
```
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
@@ -82,7 +70,6 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
## Changelog
All changes, even minor ones, need a corresponding changelog / newsfragment
@@ -98,24 +85,55 @@ in the format of `PRnumber.type`. The type can be one of the following:
* `removal` (also used for deprecations)
* `misc` (for internal-only changes)
This file will become part of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md) at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain Markdown
formatting, and should end with a full stop (.) or an exclamation mark (!) for
consistency.
Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
`changelog.d/1234.bugfix`, and contain content like:

> The security levels of Florbs are now validated when received
> via the `/federation/florb` endpoint. Contributed by Jane Matrix.
If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each `changelog.d` file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.
### How do I know what to call the changelog file before I create the PR?
Obviously, you don't know if you should call your newsfile
`1234.bugfix` or `5678.bugfix` until you create the PR, which leads to a
chicken-and-egg problem.
There are two options for solving this:
1. Open the PR without a changelog file, see what number you got, and *then*
add the changelog file to your branch (see [Updating your pull
request](#updating-your-pull-request)), or:
1. Look at the [list of all
issues/PRs](https://github.com/matrix-org/synapse/issues?q=), add one to the
highest number you see, and quickly open the PR before somebody else claims
your number.
[This
script](https://github.com/richvdh/scripts/blob/master/next_github_number.sh)
might be helpful if you find yourself doing this a lot.
Sorry, we know it's a bit fiddly, but it's *really* helpful for us when we come
to put together a release!
### Debian changelog
Changes which affect the debian packaging files (in `debian`) are an
exception to the rule that all changes require a `changelog.d` file.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:
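For reference, such an entry is conventionally added with the `dch` tool from Debian's `devscripts` package; a sketch, with a hypothetical entry text:

```
# append an entry to the changelog stanza for the next release
dch 'Fix the postinst script on fresh installs'
```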
@@ -200,19 +218,45 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
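For example (the commit message itself is hypothetical):

```
git commit -s -m 'Fix dropped presence updates'
```

The `-s` flag appends a `Signed-off-by:` line built from those two config values.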
## Continuous integration and testing

[Buildkite](https://buildkite.com/matrix-dot-org/synapse) will automatically
run a series of checks and tests against any PR which is opened against the
project; if your change breaks the build, this will be shown in GitHub, with
links to the build results. If your build fails, please try to fix the errors
and update your branch.

To run unit tests in a local development environment, you can use:

- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.

Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
## Updating your pull request
If you decide to make changes to your pull request - perhaps to address issues
raised in a review, or to fix problems highlighted by [continuous
integration](#continuous-integration-and-testing) - just add new commits to your
branch, and push to GitHub. The pull request will automatically be updated.
Please **avoid** rebasing your branch, especially once the PR has been
reviewed: doing so makes it very difficult for a reviewer to see what has
changed since a previous review.
## Notes for maintainers on merging PRs etc
There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).

@@ -180,35 +180,41 @@ sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
#### OpenBSD
A port of Synapse is available under `net/synapse`. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
and mounting it to `/var/synapse` should be taken into consideration.
To be able to build Synapse's dependency on python the `WRKOBJDIR`
(cf. `bsd.port.mk(5)`) for building python, too, needs to be on a filesystem
mounted with `wxallowed` (cf. `mount(8)`).
Creating a `WRKOBJDIR` for building python under `/usr/local` (which on a
default OpenBSD installation is mounted with `wxallowed`):
```
doas mkdir /usr/local/pobj_wxallowed
```
Assuming `PORTS_PRIVSEP=Yes` (cf. `bsd.port.mk(5)`) and `SUDO=doas` are
configured in `/etc/mk.conf`:
```
doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
```
Setting the `WRKOBJDIR` for building python:
```
echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed >> /etc/mk.conf
```
Building Synapse:
```
cd /usr/ports/net/synapse
make install
```
#### Windows
@@ -350,6 +356,18 @@ Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Mo
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py37-matrix-synapse`
### OpenBSD
As of OpenBSD 6.7 Synapse is available as a pre-compiled binary. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
and mounting it to `/var/synapse` should be taken into consideration.
Installing Synapse:
```
doas pkg_add synapse
```
### NixOS


@@ -1,3 +1,11 @@
================
Synapse |shield|
================
.. |shield| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
:alt: (get support on #synapse:matrix.org)
:target: https://matrix.to/#/#synapse:matrix.org
.. contents::
Introduction
@@ -77,6 +85,17 @@ Thanks for using Matrix!
[1] End-to-end encryption is currently in beta: `blog post <https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-encryption-security-assessment-released-and-implemented-cross-platform-on-riot-at-last>`_.
Support
=======
For support installing or managing Synapse, please join |room|_ (from a matrix.org
account if necessary) and ask questions there. We do not use GitHub issues for
support requests, only for bug reports and feature requests.
.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
Synapse Installation
====================
@@ -248,7 +267,7 @@ First calculate the hash of the new password::
Confirm password:
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Then update the ``users`` table in the database::
UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
WHERE name='@test:test.com';
@@ -316,6 +335,9 @@ Building internal API documentation::
Troubleshooting
===============
Need help? Join our community support room on Matrix:
`#synapse:matrix.org <https://matrix.to/#/#synapse:matrix.org>`_
Running out of File Handles
---------------------------


@@ -75,9 +75,15 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.14.0
====================
This version includes a database update which is run as part of the upgrade,
and which may take a couple of minutes in the case of a large server. Synapse
will not respond to HTTP requests while this update is taking place.
Upgrading to v1.13.0
====================
Incorrect database migration in old synapse versions
----------------------------------------------------
@@ -136,12 +142,12 @@ back to v1.12.4 you need to:
2. Decrease the schema version in the database:
.. code:: sql
UPDATE schema_version SET version = 57;
3. Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.
Upgrading to v1.12.0
====================

@@ -1 +0,0 @@
Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches.
@@ -1 +0,0 @@
Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs).
@@ -1 +0,0 @@
Add MultiWriterIdGenerator to support multiple concurrent writers of streams.
@@ -1 +0,0 @@
Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.
@@ -1 +0,0 @@
Move catchup of replication streams logic to worker.
@@ -1 +0,0 @@
Add typing annotations in `synapse.federation`.
@@ -1 +0,0 @@
Convert the room handler to async/await.
@@ -1 +0,0 @@
Update docker runtime image to Alpine v3.11. Contributed by @Starbix.
@@ -1 +0,0 @@
Improve performance of `get_e2e_cross_signing_key`.
@@ -1 +0,0 @@
Improve performance of `mark_as_sent_devices_by_remote`.
@@ -1 +0,0 @@
Allow for using more than one spam checker module at once.
@@ -1 +0,0 @@
Support any process writing to cache invalidation stream.
@@ -1 +0,0 @@
Refactor event persistence database functions in preparation for allowing them to be run on non-master processes.
@@ -1 +0,0 @@
Add type hints to the SAML handler.
@@ -1 +0,0 @@
Remove storage method `get_hosts_in_room` that is no longer called anywhere.
@@ -1 +0,0 @@
Fix some typos in the notice_expiry templates.
@@ -1 +0,0 @@
Update information about mapping providers for SAML and OpenID.
@@ -1 +0,0 @@
Convert the federation handler to async/await.
@@ -1 +0,0 @@
Convert the search handler to async/await.
@@ -1 +0,0 @@
Fix linting errors in new version of Flake8.
@@ -1 +0,0 @@
Have all instance correctly respond to REPLICATE command.
@@ -1 +0,0 @@
Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman.
@@ -1 +0,0 @@
Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting.
@@ -1 +0,0 @@
Clean up replication unit tests.
@@ -1 +0,0 @@
Move event stream handling out of slave store.
@@ -1 +0,0 @@
Allow censoring of events to happen on workers.
@@ -1 +0,0 @@
Move EventStream handling into default ReplicationDataHandler.
@@ -1 +0,0 @@
Add `instance_map` config and route replication calls.


@@ -15,6 +15,9 @@
[Unit]
Description=Synapse Matrix homeserver
# If you are using postgresql to persist data, uncomment this line to make sure
# synapse starts after the postgresql service.
# After=postgresql.service
[Service]
Type=notify


@@ -36,7 +36,6 @@ esac
dh_virtualenv \
--install-suffix "matrix-synapse" \
--builtin-venv \
--python "$SNAKE" \
--upgrade-pip \
--preinstall="lxml" \

debian/changelog

@@ -1,16 +1,30 @@
matrix-synapse-py3 (1.15.0) stable; urgency=medium
* New synapse release 1.15.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 11 Jun 2020 13:27:06 +0100
matrix-synapse-py3 (1.14.0) stable; urgency=medium
* New synapse release 1.14.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 28 May 2020 10:37:27 +0000
matrix-synapse-py3 (1.13.0) stable; urgency=medium
[ Patrick Cloke ]
* Add information about .well-known files to Debian installation scripts.
-- Patrick Cloke <patrickc@matrix.org> Mon, 06 Apr 2020 10:10:38 -0400
[ Synapse Packaging team ]
* New synapse release 1.13.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 19 May 2020 09:16:56 -0400
matrix-synapse-py3 (1.12.4) stable; urgency=medium
* New synapse release 1.12.4.
-- Synapse Packaging team <packages@matrix.org> Thu, 23 Apr 2020 10:58:14 -0400
matrix-synapse-py3 (1.12.3) stable; urgency=medium


@@ -27,15 +27,18 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
wget
# fetch and unpack the package
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
# install its build deps. We do another apt-cache-update here, because we might
# be using a stale cache from docker build.
RUN apt-get update -qq -o Acquire::Languages=none \
&& cd /dh-virtualenv \
&& env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -y --no-install-recommends"
# build it
RUN cd /dh-virtualenv && dpkg-buildpackage -us -uc -b
###
### Stage 1
@@ -68,12 +71,12 @@ RUN apt-get update -qq -o Acquire::Languages=none \
sqlite3 \
libpq-dev
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /
# install dhvirtualenv. Update the apt cache again first, in case we got a
# cached cache from docker the first time.
RUN apt-get update -qq -o Acquire::Languages=none \
&& apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb
WORKDIR /synapse/source
ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]


@@ -4,17 +4,21 @@ Admin APIs
This directory includes documentation for the various synapse specific admin
APIs available.
Authenticating as a server admin
--------------------------------
Many of the API calls in the admin api will require an `access_token` for a
server admin. (Note that a server admin is distinct from a room admin.)
A user can be marked as a server admin by updating the database directly, e.g.:
.. code-block:: sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
A new server admin user can also be created using the
``register_new_matrix_user`` script.
Many of the API calls listed in the documentation here will require you to include an admin `access_token`.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
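For example, one way to do this with `curl` (a sketch; `<request>` stands for whichever admin API URL you are calling):

``curl --header "Authorization: Bearer <access_token>" <request>``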


@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
users out of the group so that their clients will correctly handle the group
being deleted.
The API is:
```
POST /_synapse/admin/v1/delete_group/<group_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).


@@ -6,9 +6,10 @@ The API is:
```
GET /_synapse/admin/v1/room/<room_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
The API returns a JSON body like the following:
```
{
"local": [
@@ -99,4 +100,3 @@ Response:
"num_quarantined": 10 # The number of media items successfully quarantined
}
```


@@ -15,7 +15,8 @@ The API is:
``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
@@ -54,8 +55,10 @@ It is possible to poll for updates on recent purges with a second API;
``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
Again, you will need to authenticate by providing an ``access_token`` for a
server admin.
This API returns a JSON body like the following:
.. code:: json


@@ -6,12 +6,15 @@ media.
The API is::
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>

{}

\... which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.


@@ -23,7 +23,8 @@ POST /_synapse/admin/v1/join/<room_id_or_alias>
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
Response:


@@ -1,9 +1,47 @@
.. contents::
Query User Account
==================
This API returns information about a specific user account.
The api is::
GET /_synapse/admin/v2/users/<user_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
It returns a JSON body like the following:
.. code:: json
{
"displayname": "User",
"threepids": [
{
"medium": "email",
"address": "<user_mail_1>"
},
{
"medium": "email",
"address": "<user_mail_2>"
}
],
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false
}
URL parameters:
- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
Create or modify Account
========================
This API allows an administrator to create or modify a user account with a
specific ``user_id``.
This api is::
@@ -31,23 +69,29 @@ with a body of:
"deactivated": false
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
URL parameters:

- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.

Body parameters:

- ``password``, optional. If provided, the user's password is updated and all
devices are logged out.
- ``displayname``, optional, defaults to the value of ``user_id``.
- ``threepids``, optional, allows setting the third-party IDs (email, msisdn)
belonging to a user.
- ``avatar_url``, optional, must be a
`MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_.
- ``admin``, optional, defaults to ``false``.
- ``deactivated``, optional, defaults to ``false``.
If the user already exists then optional parameters default to the current value.
@@ -60,7 +104,8 @@ The api is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see `README.rst <README.rst>`_.
The parameter ``from`` is optional but used for pagination, denoting the
offset in the returned results. This should be treated as an opaque value and
@@ -115,17 +160,17 @@ with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more users
to paginate through.
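As a sketch of the pagination loop (the server name and token are placeholders)::

curl --header "Authorization: Bearer <access_token>" 'https://synapse.example.com/_synapse/admin/v2/users?from=0&limit=10'

If the response contains ``"next_token": "10"``, repeat the request with ``from=10``, and continue until no ``next_token`` is returned.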
Query current sessions for a user
=================================
This API returns information about the active sessions for a specific user.
The api is::
GET /_synapse/admin/v1/whois/<user_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
It returns a JSON body like the following:
@@ -178,9 +223,10 @@ with a body of:
"erase": true
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
The erase parameter is optional and defaults to ``false``.
An empty body may be passed for backwards compatibility.
@@ -202,7 +248,8 @@ with a body of:
"logout_devices": true,
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
The parameter ``new_password`` is required.
The parameter ``logout_devices`` is optional and defaults to ``true``.
@@ -215,7 +262,8 @@ The api is::
GET /_synapse/admin/v1/users/<user_id>/admin
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
@@ -243,4 +291,191 @@ with a body of:
"admin": true
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
User devices
============
List all devices
----------------
Gets information about all devices for a specific ``user_id``.
The API is::
GET /_synapse/admin/v2/users/<user_id>/devices
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
"devices": [
{
"device_id": "QBUAZIFURK",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
},
{
"device_id": "AUIECTSRND",
"display_name": "ios",
"last_seen_ip": "1.2.3.5",
"last_seen_ts": 1474491775025,
"user_id": "<user_id>"
}
]
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
**Response**
The following fields are returned in the JSON response body:
- ``devices`` - An array of objects, each containing information about a device.
Device objects contain the following fields:
- ``device_id`` - Identifier of device.
- ``display_name`` - Display name set by the user for this device.
Absent if no name has been set.
- ``last_seen_ip`` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- ``user_id`` - Owner of device.
Delete multiple devices
-----------------------
Deletes the given devices for a specific ``user_id``, and invalidates
any access token associated with them.
The API is::
POST /_synapse/admin/v2/users/<user_id>/delete_devices
{
"devices": [
"QBUAZIFURK",
"AUIECTSRND"
]
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
The following fields are required in the JSON request body:
- ``devices`` - The list of device IDs to delete.
Show a device
---------------
Gets information on a single device, by ``device_id`` for a specific ``user_id``.
The API is::
GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
"device_id": "<device_id>",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to retrieve.
**Response**
The following fields are returned in the JSON response body:
- ``device_id`` - Identifier of device.
- ``display_name`` - Display name set by the user for this device.
Absent if no name has been set.
- ``last_seen_ip`` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- ``user_id`` - Owner of device.
Update a device
---------------
Updates the metadata on the given ``device_id`` for a specific ``user_id``.
The API is::
PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
{
"display_name": "My other phone"
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to update.
The following fields may be set in the JSON request body:
- ``display_name`` - The new display name for this device. If not given,
the display name is unchanged.
Delete a device
---------------
Deletes the given ``device_id`` for a specific ``user_id``,
and invalidates any access token associated with it.
The API is::
DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
{}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to delete.

docs/dev/git.md (new file)

@@ -0,0 +1,148 @@
Some notes on how we use git
============================
On keeping the commit history clean
-----------------------------------
In an ideal world, our git commit history would be a linear progression of
commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of `git log --graph
b2dba0607`:
<img src="git/clean.png" alt="clean git graph" width="500px">
Note how the commit comment explains clearly what is changing and why. Also
note the *absence* of merge commits, as well as the absence of commits called
things like (to pick a few culprits):
[“pep8”](https://github.com/matrix-org/synapse/commit/84691da6c), [“fix broken
test”](https://github.com/matrix-org/synapse/commit/474810d9d),
[“oops”](https://github.com/matrix-org/synapse/commit/c9d72e457),
[“typo”](https://github.com/matrix-org/synapse/commit/836358823), or [“Who's
the president?”](https://github.com/matrix-org/synapse/commit/707374d5d).
There are a number of reasons why keeping a clean commit history is a good
thing:
* From time to time, after a change lands, it turns out to be necessary to
revert it, or to backport it to a release branch. Those operations are
*much* easier when the change is contained in a single commit.
* Similarly, it's much easier to answer questions like “is the fix for
`/publicRooms` on the release branch?” if that change consists of a single
commit.
* Likewise: “what has changed on this branch in the last week?” is much
clearer without merges and “pep8” commits everywhere.
* Sometimes we need to figure out where a bug got introduced, or some
behaviour changed. One way of doing that is with `git bisect`: pick an
arbitrary commit between the known good point and the known bad point, and
see how the code behaves (a sketch of such a session follows this list).
However, that strategy fails if the commit you chose is the middle of
someone's epic branch in which they broke the world before putting it back
together again.
One counterargument is that it is sometimes useful to see how a PR evolved as
it went through review cycles. This is true, but that information is always
available via the GitHub UI (or via the little-known [refs/pull
namespace](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally)).
Of course, in reality, things are more complicated than that. We have release
branches as well as `develop` and `master`, and we deliberately merge changes
between them. Bugs often slip through and have to be fixed later. That's all
fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
towards.
Merges, squashes, rebases: wtf?
-------------------------------
Ok, so that's what we'd like to achieve. How do we achieve it?
The TL;DR is: when you come to merge a pull request, you *probably* want to
“squash and merge”:
![squash and merge](git/squash.png).
(This applies whether you are merging your own PR, or that of another
contributor.)
“Squash and merge”<sup id="a1">[1](#f1)</sup> takes all of the changes in the
PR, and bundles them into a single commit. GitHub gives you the opportunity to
edit the commit message before you confirm, and normally you should do so,
because the default will be useless (again: `* woops typo` is not a useful
thing to keep in the historical record).
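For reference, the command-line analogue of that button is roughly the
following (the branch name is hypothetical):

```
git checkout develop
git merge --squash some-feature-branch   # stage the PR's changes as one diff
git commit                               # write the single, tidy commit message
```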
The main problem with this approach comes when you have a series of pull
requests which build on top of one another: as soon as you squash-merge the
first PR, you'll end up with a stack of conflicts to resolve in all of the
others. In general, it's best to avoid this situation in the first place by
trying not to have multiple related PRs in flight at the same time. Still,
sometimes that's not possible and doing a regular merge is the lesser evil.
Another occasion in which a regular merge makes more sense is a PR where you've
deliberately created a series of commits each of which makes sense in its own
right. For example: [a PR which gradually propagates a refactoring operation
through the codebase](https://github.com/matrix-org/synapse/pull/6837), or [a
PR which is the culmination of several other
PRs](https://github.com/matrix-org/synapse/pull/5987). In this case the ability
to figure out when a particular change/bug was introduced could be very useful.
Ultimately: **this is not a hard-and-fast-rule**. If in doubt, ask yourself “do
each of the commits I am about to merge make sense in their own right”, but
remember that we're just doing our best to balance “keeping the commit history
clean” with other factors.
Git branching model
-------------------
A [lot](https://nvie.com/posts/a-successful-git-branching-model/)
[of](http://scottchacon.com/2011/08/31/github-flow.html)
[words](https://www.endoflineblog.com/gitflow-considered-harmful) have been
written in the past about git branching models (no really, [a
lot](https://martinfowler.com/articles/branching-patterns.html)). I tend to
think the whole thing is overblown. Fundamentally, it's not that
complicated. Here's how we do it.
Let's start with a picture:
![branching model](git/branches.jpg)
It looks complicated, but it's really not. There's one basic rule: *anyone* is
free to merge from *any* more-stable branch to *any* less-stable branch at
*any* time<sup id="a2">[2](#f2)</sup>. (The principle behind this is that if a
change is good enough for the more-stable branch, then it's also good enough to
put in a less-stable branch.)
Meanwhile, merging (or squashing, as per the above) from a less-stable to a
more-stable branch is a deliberate action in which you want to publish a change
or a set of changes to (some subset of) the world: for example, this happens
when a PR is landed, or as part of our release process.
So, what counts as a more- or less-stable branch? A little reflection will show
that our active branches are ordered thus, from more-stable to less-stable:
* `master` (tracks our last release).
* `release-vX.Y.Z` (the branch where we prepare the next release)<sup
id="a3">[3](#f3)</sup>.
* PR branches which are targeting the release.
* `develop` (our "mainline" branch containing our bleeding-edge).
* regular PR branches.
The corollary is: if you have a bugfix that needs to land in both
`release-vX.Y.Z` *and* `develop`, then you should base your PR on
`release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
`develop`. (If a fix lands in `develop` and we later need it in a
release-branch, we can of course cherry-pick it, but landing it in the release
branch first helps reduce the chance of annoying conflicts.)
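A sketch of that flow, with illustrative branch names:

```
# land the fix on the release branch first
git checkout release-v1.15.0
git merge --no-ff bugfix-branch
# then propagate it to the less-stable branch
git checkout develop
git merge release-v1.15.0
```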
---
<b id="f1">[1]</b>: “Squash and merge” is GitHub's term for this
operation. Given that there is no merge involved, I'm not convinced it's the
most intuitive name. [^](#a1)
<b id="f2">[2]</b>: Well, anyone with commit access.[^](#a2)
<b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
the history of Synapse), we've had two releases in flight at once. Obviously,
`release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)

BIN docs/dev/git/branches.jpg Normal file (binary file not shown; 70 KiB)

BIN docs/dev/git/clean.png Normal file (binary file not shown; 108 KiB)

BIN docs/dev/git/squash.png Normal file (binary file not shown; 29 KiB)

View File

@@ -1,175 +0,0 @@
# How to test OpenID Connect
Any OpenID Connect Provider (OP) should work with Synapse, as long as it supports the authorization code flow.
There are a few options for that:
- start a local OP. Synapse has been tested with [Hydra][hydra] and [Dex][dex-idp].
Note that for an OP to work, it should be served under a secure (HTTPS) origin.
A certificate signed with a self-signed, locally trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE` environment variable set to the path of the CA.
- use a publicly available OP. Synapse has been tested with [Google][google-idp].
- set up a SaaS OP, like [Auth0][auth0] or [Okta][okta]. Auth0 has a free tier which has been tested with Synapse.
[google-idp]: https://developers.google.com/identity/protocols/OpenIDConnect#authenticatingtheuser
[auth0]: https://auth0.com/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[hydra]: https://www.ory.sh/docs/hydra/
## Sample configs
Here are a few configs for providers that should work with Synapse.
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider backed by an external database, it can be configured with static passwords in a config file.
Follow the [Getting Started guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md) to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
```yaml
staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse base url]/_synapse/oidc/callback'
name: 'Synapse'
```
Run with `dex serve examples/config-dex.yaml`.
Synapse config:
```yaml
oidc_config:
enabled: true
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: "http://127.0.0.1:5556/dex"
discover: true
client_id: "synapse"
client_secret: "secret"
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.name }}'
display_name_template: '{{ user.name|capitalize }}'
```
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse base url]/_synapse/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
```js
function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
```
</details>
```yaml
oidc_config:
enabled: true
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```
### GitHub
GitHub is a bit special as it is not an OpenID Connect compliant provider, but just a regular OAuth2 provider.
The `/user` API endpoint can be used to retrieve information about the user.
As the OIDC login mechanism needs an attribute to uniquely identify users and that endpoint does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new
2. Set the callback URL to `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://github.com/"
discover: false
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
authorization_endpoint: "https://github.com/login/oauth/authorize"
token_endpoint: "https://github.com/login/oauth/access_token"
userinfo_endpoint: "https://api.github.com/user"
scopes:
- read:user
user_mapping_provider:
config:
subject_claim: 'id'
localpart_template: '{{ user.login }}'
display_name_template: '{{ user.name }}'
```
### Google
1. Set up a project in the Google API Console
2. Obtain the OAuth 2.0 credentials (see <https://developers.google.com/identity/protocols/oauth2/openid-connect>)
3. Add this Authorized redirect URI: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://accounts.google.com/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.given_name|lower }}'
display_name_template: '{{ user.name }}'
```
### Twitch
1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://id.twitch.tv/oauth2/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: "client_secret_post"
scopes:
- openid
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```

206
docs/openid.md Normal file
View File

@@ -0,0 +1,206 @@
# Configuring Synapse to authenticate against an OpenID Connect provider
Synapse can be configured to use an OpenID Connect Provider (OP) for
authentication, instead of its own local password database.
Any OP should work with Synapse, as long as it supports the authorization code
flow. There are a few options for that:
- start a local OP. Synapse has been tested with [Hydra][hydra] and
[Dex][dex-idp]. Note that for an OP to work, it should be served under a
secure (HTTPS) origin. A certificate signed with a self-signed, locally
trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE`
environment variable set to the path of the CA (see the sketch after this list).
- set up a SaaS OP, like [Google][google-idp], [Auth0][auth0] or
[Okta][okta]. Synapse has been tested with Auth0 and Google.
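As a sketch of the locally-trusted CA mentioned above (the CA path and the use of `synctl` are illustrative, not prescribed):

```sh
# point Synapse at the CA that signed the OP's certificate (hypothetical path)
SSL_CERT_FILE=/usr/local/share/ca-certificates/dev-ca.pem synctl start
```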
It may also be possible to use other OAuth2 providers which provide the
[authorization code grant type](https://tools.ietf.org/html/rfc6749#section-4.1),
such as [Github][github-idp].
[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
[auth0]: https://auth0.com/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[hydra]: https://www.ory.sh/docs/hydra/
[github-idp]: https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps
## Preparing Synapse
The OpenID integration in Synapse uses the
[`authlib`](https://pypi.org/project/Authlib/) library, which must be installed
as follows:
* The relevant libraries are included in the Docker images and Debian packages
provided by `matrix.org` so no further action is needed.
* If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
install synapse[oidc]` to install the necessary dependencies.
* For other installation mechanisms, see the documentation provided by the
maintainer.
To enable the OpenID integration, you should then add an `oidc_config` section
to your configuration file (or uncomment the `enabled: true` line in the
existing section). See [sample_config.yaml](./sample_config.yaml) for some
sample settings, as well as the text below for example configurations for
specific providers.
## Sample configs
Here are a few configs for providers that should work with Synapse.
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider backed by an
external database, it can be configured with static passwords in a config file.
Follow the [Getting Started
guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
```yaml
staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse public baseurl]/_synapse/oidc/callback'
name: 'Synapse'
```
Run with `dex serve examples/config-dex.yaml`.
Synapse config:
```yaml
oidc_config:
enabled: true
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: "http://127.0.0.1:5556/dex"
client_id: "synapse"
client_secret: "secret"
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.name }}"
display_name_template: "{{ user.name|capitalize }}"
```
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
```js
function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
```
</details>
Synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
### GitHub
GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.
The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new.
2. Set the callback URL to `[synapse public baseurl]/_synapse/oidc/callback`.
Synapse config:
```yaml
oidc_config:
enabled: true
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
authorization_endpoint: "https://github.com/login/oauth/authorize"
token_endpoint: "https://github.com/login/oauth/access_token"
userinfo_endpoint: "https://api.github.com/user"
scopes: ["read:user"]
user_mapping_provider:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.name }}"
```
### [Google][google-idp]
1. Set up a project in the Google API Console (see
https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
2. Add an "OAuth Client ID" for a Web Application under "Credentials".
3. Copy the Client ID and Client Secret, and add the following to your Synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.given_name|lower }}"
display_name_template: "{{ user.name }}"
```
4. Back in the Google console, add this Authorized redirect URI: `[synapse
public baseurl]/_synapse/oidc/callback`.
### Twitch
1. Set up a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/oidc/callback`
Synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://id.twitch.tv/oauth2/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: "client_secret_post"
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
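All of the configs above use the default Jinja2-based mapping provider. If a provider's claims need more processing than templates allow, a custom module can be swapped in; a rough sketch of what such a module might look like (method names follow docs/sso_mapping_providers.md, but treat the signatures as illustrative):

```python
class MyOidcMappingProvider:
    """Sketch of a custom OIDC mapping provider."""

    def __init__(self, parsed_config):
        self._config = parsed_config

    @staticmethod
    def parse_config(config: dict) -> dict:
        # Receives the `user_mapping_provider.config` section verbatim.
        return config

    def get_remote_user_id(self, userinfo) -> str:
        # A stable, unique identifier for the user at the provider.
        return userinfo["sub"]

    async def map_user_attributes(self, userinfo, token) -> dict:
        # Map provider claims onto Matrix user attributes.
        return {
            "localpart": userinfo["preferred_username"],
            "display_name": userinfo.get("name"),
        }
```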

View File

@@ -9,7 +9,7 @@ of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
`nocanon`.
@@ -18,7 +18,7 @@ When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
refer to the 'client port' and the 'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
@@ -28,93 +28,113 @@ Let's assume that we expect clients to connect to our server at
`https://example.com:8448`. The following sections detail the configuration of
the reverse proxy and the homeserver.
## Reverse-proxy configuration examples
**NOTE**: You only need one of these.
### nginx
```
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name matrix.example.com;

    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 10M;
    }
}

server {
    listen 8448 ssl default_server;
    listen [::]:8448 ssl default_server;
    server_name example.com;

    location / {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```
**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
canonicalise/normalise the URI.
### Caddy 1
```
matrix.example.com {
  proxy /_matrix http://localhost:8008 {
    transparent
  }
}

example.com:8448 {
  proxy / http://localhost:8008 {
    transparent
  }
}
```
### Caddy 2
```
matrix.example.com {
  reverse_proxy /_matrix/* http://localhost:8008
}

example.com:8448 {
  reverse_proxy http://localhost:8008
}
```
### Apache
```
<VirtualHost *:443>
    SSLEngine on
    ServerName matrix.example.com;

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>

<VirtualHost *:8448>
    SSLEngine on
    ServerName example.com;

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
```
**NOTE**: ensure the `nocanon` options are included.
### HAProxy
```
frontend https
  bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1

  # Matrix client traffic
  acl matrix-host hdr(host) -i matrix.example.com
  acl matrix-path path_beg /_matrix

  use_backend matrix if matrix-host matrix-path

frontend matrix-federation
  bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
  default_backend matrix

backend matrix
  server matrix 127.0.0.1:8008
```
## Homeserver Configuration

View File

@@ -322,22 +322,27 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained homeserver settings
#
# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
#
# Room complexity is an arbitrary measure based on factors such as the number of
# users in the room.
#
limit_remote_rooms:
  # Uncomment to enable room complexity checking.
  #
  #enabled: true

  # the limit above which rooms cannot be joined. The default is 1.0.
  #
  #complexity: 0.5

  # override the error which is returned when the room is too complex.
  #
  #complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
@@ -638,6 +643,12 @@ caches:
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
@@ -942,25 +953,28 @@ url_preview_accept_language:
## Captcha ##
# See docs/CAPTCHA_SETUP.md for full details of configuring this.
# This homeserver's ReCAPTCHA public key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This homeserver's ReCAPTCHA private key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
# Uncomment to enable ReCaptcha checks when registering, preventing signup
# unless a captcha is answered. Requires a valid ReCaptcha
# public/private key. Defaults to 'false'.
#
#enable_registration_captcha: true
# The API endpoint to use for verifying m.login.recaptcha responses.
# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
#
#recaptcha_siteverify_api: "https://my.recaptcha.site"
## TURN ##
@@ -1104,7 +1118,7 @@ account_validity:
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#
#registration_shared_secret: <PRIVATE STRING>
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
@@ -1209,6 +1223,13 @@ account_threepid_delegates:
#
#autocreate_auto_join_rooms: true
# When auto_join_rooms is specified, setting this flag to false prevents
# guest accounts from being automatically joined to the rooms.
#
# Defaults to true.
#
#auto_join_rooms_for_guests: false
## Metrics ###
@@ -1237,7 +1258,8 @@ metrics_flags:
#known_servers: true
# Whether or not to report anonymized homeserver usage statistics.
#
#report_stats: true|false
# The endpoint to report the anonymized homeserver usage statistics to.
# Defaults to https://matrix.org/report-usage-stats/push
@@ -1273,13 +1295,13 @@ metrics_flags:
# the registration_shared_secret is used, if one is given; otherwise,
# a secret key is derived from the signing key.
#
#macaroon_secret_key: <PRIVATE STRING>
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
#
#form_secret: <PRIVATE STRING>
## Signing Keys ##
@@ -1364,6 +1386,8 @@ trusted_key_servers:
#key_server_signing_keys_path: "key_server_signing_keys.key"
## Single sign-on integration ##
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
@@ -1497,7 +1521,13 @@ saml2_config:
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# When rendering, this template is given the following variables:
# * code: an HTML error code corresponding to the error that is being
# returned (typically 400 or 500)
#
# * msg: a textual message describing the error.
#
# The variables will automatically be HTML-escaped.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
@@ -1505,92 +1535,119 @@ saml2_config:
#template_dir: "res/templates"
# OpenID Connect integration. The following settings can be used to make Synapse
# use an OpenID Connect Provider for authentication, instead of its internal
# password database.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
#
oidc_config:
  # Uncomment the following to enable authorization against an OpenID Connect
  # server. Defaults to false.
  #
  #enabled: true

  # Uncomment the following to disable use of the OIDC discovery mechanism to
  # discover endpoints. Defaults to true.
  #
  #discover: false

  # the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
  # discover the provider's endpoints.
  #
  # Required if 'enabled' is true.
  #
  #issuer: "https://accounts.example.com/"

  # oauth2 client id to use.
  #
  # Required if 'enabled' is true.
  #
  #client_id: "provided-by-your-issuer"

  # oauth2 client secret to use.
  #
  # Required if 'enabled' is true.
  #
  #client_secret: "provided-by-your-issuer"

  # auth method to use when exchanging the token.
  # Valid values are 'client_secret_basic' (default), 'client_secret_post' and
  # 'none'.
  #
  #client_auth_method: client_secret_post

  # list of scopes to request. This should normally include the "openid" scope.
  # Defaults to ["openid"].
  #
  #scopes: ["openid", "profile"]

  # the oauth2 authorization endpoint. Required if provider discovery is disabled.
  #
  #authorization_endpoint: "https://accounts.example.com/oauth2/auth"

  # the oauth2 token endpoint. Required if provider discovery is disabled.
  #
  #token_endpoint: "https://accounts.example.com/oauth2/token"

  # the OIDC userinfo endpoint. Required if discovery is disabled and the
  # "openid" scope is not requested.
  #
  #userinfo_endpoint: "https://accounts.example.com/userinfo"

  # URI where to fetch the JWKS. Required if discovery is disabled and the
  # "openid" scope is used.
  #
  #jwks_uri: "https://accounts.example.com/.well-known/jwks.json"

  # Uncomment to skip metadata verification. Defaults to false.
  #
  # Use this if you are connecting to a provider that is not OpenID Connect
  # compliant.
  # Avoid this in production.
  #
  #skip_verification: true

  # An external module can be provided here as a custom solution to mapping
  # attributes returned from an OIDC provider onto a matrix user.
  #
  user_mapping_provider:
    # The custom module's class. Uncomment to use a custom module.
    # Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
    #
    # See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
    # for information on implementing a custom mapping provider.
    #
    #module: mapping_provider.OidcMappingProvider

    # Custom configuration values for the module. This section will be passed as
    # a Python dictionary to the user mapping provider module's `parse_config`
    # method.
    #
    # The examples below are intended for the default provider: they should be
    # changed if using a custom provider.
    #
    config:
      # name of the claim containing a unique identifier for the user.
      # Defaults to `sub`, which OpenID Connect compliant providers should provide.
      #
      #subject_claim: "sub"

      # Jinja2 template for the localpart of the MXID.
      #
      # When rendering, this template is given the following variables:
      #   * user: The claims returned by the UserInfo Endpoint and/or in the ID
      #     Token
      #
      # This must be configured if using the default mapping provider.
      #
      localpart_template: "{{ user.preferred_username }}"

      # Jinja2 template for the display name to set on first login.
      #
      # If unset, no displayname will be set.
      #
      #display_name_template: "{{ user.given_name }} {{ user.last_name }}"
@@ -1605,7 +1662,8 @@ oidc_config:
# # name: value
# Additional settings to use with single-sign on systems such as OpenID Connect,
# SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
@@ -1764,8 +1822,8 @@ email:
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
#smtp_user: "exampleusername"
#smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to

View File

@@ -138,6 +138,8 @@ A custom mapping provider must specify the following methods:
* `mxid_localpart` - Required. The mxid localpart of the new user.
* `displayname` - The displayname of the new user. If not provided, will default to
the value of `mxid_localpart`.
* `emails` - A list of emails for the new user. If not provided, will
default to an empty list.
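As a sketch, the dict returned by a custom provider's `saml_response_to_user_attributes` might now look like this (values are illustrative):

```python
{
    "mxid_localpart": "alice",        # required
    "displayname": "Alice Example",   # optional; defaults to mxid_localpart
    "emails": ["alice@example.com"],  # optional; defaults to []
}
```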
### Default SAML Mapping Provider

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Synapse %i
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target

View File

@@ -18,7 +18,7 @@ For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and to often not work.
## `coturn` setup
### Initial installation
@@ -26,7 +26,13 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
#### Debian installation
Just install the debian package:
```sh
apt install coturn
```
This will install and start a systemd service called `coturn`.
#### Source installation
@@ -63,38 +69,52 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
1. Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:
```
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay

# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255

# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1

# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
```
Ideally coturn should refuse to relay traffic which isn't SRTP; see
<https://github.com/matrix-org/synapse/issues/2009>
1. Also consider supporting TLS/DTLS. To do this, add the following settings
to `turnserver.conf`:
```
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem

# TLS private key file
pkey=/path/to/privkey.pem
```
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (by default: 3478 and 5349 for the TURN
traffic, remembering to allow both TCP and UDP, and ports 49152-65535 for the
UDP relay).
1. If you've configured coturn to support TLS/DTLS, generate or import your
private key and certificate.
1. Start the turn server:
* If you used the Debian package (or have set up a systemd unit yourself):
```sh
systemctl restart coturn
```
* If you installed from source:
```sh
bin/turnserver -o
```
## Synapse setup
Your home server configuration file needs the following extra keys:
@@ -126,7 +146,14 @@ As an example, here is the relevant section of the config file for matrix.org:
After updating the homeserver configuration, you must restart synapse:
* If you use synctl:
```sh
cd /where/you/run/synapse
./synctl restart
```
* If you use systemd:
```sh
systemctl restart synapse.service
```
...and your homeserver now supports VoIP relaying!
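The keys themselves are elided from this excerpt; for orientation, a typical minimal sketch (values illustrative) looks like:

```yaml
turn_uris: ["turn:turn.example.com:3478?transport=udp", "turn:turn.example.com:3478?transport=tcp"]
turn_shared_secret: "must-match-static-auth-secret-in-turnserver.conf"
turn_user_lifetime: 86400000    # one day, in milliseconds
```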

View File

@@ -24,9 +24,8 @@ DISTS = (
"debian:sid",
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:cosmic",
"ubuntu:disco",
"ubuntu:eoan",
"ubuntu:focal",
)
DESC = '''\

View File

@@ -7,7 +7,9 @@ set -e
# make sure that origin/develop is up to date
git remote set-branches --add origin develop
git fetch -q origin develop
pr="$BUILDKITE_PULL_REQUEST"
# if there are changes in the debian directory, check that the debian changelog
# has been updated
@@ -20,20 +22,30 @@ fi
# if there are changes *outside* the debian directory, check that the
# newsfragments have been updated.
if ! git diff --name-only FETCH_HEAD... | grep -qv '^debian/'; then
exit 0
fi
tox -qe check-newsfragment
echo
echo "--------------------------"
echo
# check that any modified newsfiles on this branch end with a full stop.
matched=0
for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
lastchar=`tr -d '\n' < $f | tail -c 1`
if [ $lastchar != '.' -a $lastchar != '!' ]; then
echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
exit 1
fi
# see if this newsfile corresponds to the right PR
[[ -n "$pr" && "$f" == changelog.d/"$pr".* ]] && matched=1
done
if [[ -n "$pr" && "$matched" -eq 0 ]]; then
echo -e "\e[31mERROR: Did not find a news fragment with the right number: expected changelog.d/$pr.*.\e[39m" >&2
exit 1
fi
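So, for a hypothetical PR number 1234, a passing newsfragment could be created with:

```sh
# note the trailing full stop, which the check above enforces
echo "Fix the frobnicator falling over on restart." > changelog.d/1234.bugfix
```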

View File

@@ -3,8 +3,6 @@ import json
import sys
import time
import six
import psycopg2
import yaml
from canonicaljson import encode_canonical_json
@@ -12,10 +10,7 @@ from signedjson.key import read_signing_keys
from signedjson.sign import sign_json
from unpaddedbase64 import encode_base64
db_binary_type = memoryview
def select_v1_keys(connection):
@@ -72,7 +67,7 @@ def rows_v2(server, json):
valid_until = json["valid_until_ts"]
key_json = encode_canonical_json(json)
for key_id in json["verify_keys"]:
yield (server, key_id, "-", valid_until, valid_until, db_binary_type(key_json))
def main():

View File

@@ -196,6 +196,9 @@ class MockHomeserver:
def get_reactor(self):
return reactor
def get_instance_name(self):
return "master"
class Porter(object):
def __init__(self, **kwargs):

View File

@@ -114,6 +114,7 @@ setup(
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},

View File

@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.15.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -22,6 +22,7 @@ import pymacaroons
from netaddr import IPAddress
from twisted.internet import defer
from twisted.web.server import Request
import synapse.logging.opentracing as opentracing
import synapse.types
@@ -162,19 +163,25 @@ class Auth(object):
@defer.inlineCallbacks
def get_user_by_req(
self,
request: Request,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
):
""" Get a registered user's ID.
Args:
request: An HTTP request with an access_token query parameter.
allow_guest: If False, will raise an AuthError if the user making the
request is a guest.
rights: The operation being performed; the access token must allow this
allow_expired: If True, allow the request through even if the account
is expired, or session token lifetime has ended. Note that
/login will deliver access tokens regardless of expiration.
Returns:
defer.Deferred: resolves to a `synapse.types.Requester` object
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
@@ -205,7 +212,9 @@ class Auth(object):
return synapse.types.create_requester(user_id, app_service=app_service)
user_info = yield self.get_user_by_access_token(
access_token, rights, allow_expired=allow_expired
)
user = user_info["user"]
token_id = user_info["token_id"]
is_guest = user_info["is_guest"]
@@ -280,13 +289,17 @@ class Auth(object):
return user_id, app_service
@defer.inlineCallbacks
def get_user_by_access_token(
self, token: str, rights: str = "access", allow_expired: bool = False,
):
""" Validate access token and get user_id from it
Args:
token: The access token to get the user by
rights: The operation being performed; the access token must
allow this
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Returns:
Deferred[dict]: dict that includes:
`user` (UserID)
@@ -294,8 +307,10 @@ class Auth(object):
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
InvalidClientTokenError if a user by that token exists, but the token is
expired
InvalidClientCredentialsError if no user by that token exists or the token
is invalid
"""
if rights == "access":
@@ -304,7 +319,8 @@ class Auth(object):
if r:
valid_until_ms = r["valid_until_ms"]
if (
not allow_expired
and valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
@@ -494,16 +510,16 @@ class Auth(object):
request.authenticated_entity = service.sender
return defer.succeed(service)
async def is_server_admin(self, user: UserID) -> bool:
""" Check if the given user is a local server admin.
Args:
user: user to check
Returns:
True if the user is an admin
"""
return await self.store.is_server_admin(user)
def compute_auth_events(
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
@@ -575,7 +591,7 @@ class Auth(object):
return user_level >= send_level
@staticmethod
def has_access_token(request: Request):
"""Checks if the request has an access_token.
Returns:
@@ -586,7 +602,7 @@ class Auth(object):
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request: Request):
"""Extracts the access_token from the request.
Args:

View File

@@ -61,13 +61,9 @@ class LoginType(object):
MSISDN = "m.login.msisdn"
RECAPTCHA = "m.login.recaptcha"
TERMS = "m.login.terms"
SSO = "m.login.sso"
DUMMY = "m.login.dummy"
# Only for C/S API v1
APPLICATION_SERVICE = "m.login.application_service"
SHARED_SECRET = "org.matrix.login.shared_secret"
class EventTypes(object):
Member = "m.room.member"

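With the stable identifier, a client fetching `GET /_matrix/client/r0/login` from an SSO-enabled server should now see a flow like the following (response abridged and illustrative):

```
{"flows": [{"type": "m.login.sso"}, {"type": "m.login.token"}]}
```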
View File

@@ -1,4 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,75 +17,157 @@ from collections import OrderedDict
from typing import Any, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.util import Clock
class Ratelimiter(object):
"""
Ratelimit actions marked by arbitrary keys.
Args:
clock: A homeserver clock, for retrieving the current time
rate_hz: The long term number of actions that can be performed in a second.
burst_count: How many actions that can be performed before being limited.
"""
def __init__(self, clock: Clock, rate_hz: float, burst_count: int):
self.clock = clock
self.rate_hz = rate_hz
self.burst_count = burst_count
# An ordered dictionary keeping track of actions, when they were last
# performed and how often. Each entry is a mapping from a key of arbitrary type
# to a tuple representing:
# * How many times an action has occurred since a point in time
# * The point in time
# * The rate_hz of this particular entry. This can vary per request
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]
def can_do_action(
self,
key: Any,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the entity (e.g. user or IP address) perform the action?
Args:
key: The key we should use when rate limiting. Can be a user ID
(when sending events), an IP address, etc.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Override default values if set
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
rate_hz = rate_hz if rate_hz is not None else self.rate_hz
burst_count = burst_count if burst_count is not None else self.burst_count
# Remove any expired entries
self._prune_message_counts(time_now_s)
# Check if there is an existing count entry for this key
action_count, time_start, _ = self.actions.get(key, (0.0, time_now_s, 0.0))
# Check whether performing another action is allowed
time_delta = time_now_s - time_start
performed_count = action_count - time_delta * rate_hz
if performed_count < 0:
# Allow, reset back to count 1
allowed = True
time_start = time_now_s
action_count = 1.0
elif performed_count > burst_count - 1.0:
# Deny, we have exceeded our burst count
allowed = False
else:
# We haven't reached our limit yet
allowed = True
action_count += 1.0
if update:
self.actions[key] = (action_count, time_start, rate_hz)
if rate_hz > 0:
# Find out when the count of existing actions expires
time_allowed = time_start + (action_count - burst_count + 1) / rate_hz
# Don't give back a time in the past
if time_allowed < time_now_s:
time_allowed = time_now_s
else:
# XXX: Why is this -1? This seems to only be used in
# self.ratelimit. I guess so that clients get a time in the past and don't
# feel afraid to try again immediately
time_allowed = -1
return allowed, time_allowed
def _prune_message_counts(self, time_now_s: int):
"""Remove message count entries that have not exceeded their defined
rate_hz limit
Args:
time_now_s: The current time
"""
# We create a copy of the key list here as the dictionary is modified during
# the loop
for key in list(self.actions.keys()):
action_count, time_start, rate_hz = self.actions[key]
# Rate limit = "seconds since we started limiting this action" * rate_hz
# If this limit has not been exceeded, wipe our record of this action
time_delta = time_now_s - time_start
if action_count - time_delta * rate_hz > 0:
continue
else:
del self.actions[key]
def ratelimit(
self,
key: Any,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
):
"""Checks if an action can be performed. If not, raises a LimitExceededError
Args:
key: An arbitrary key used to classify an action
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Raises:
LimitExceededError: If an action could not be performed, along with the time in
milliseconds until the action can be performed again
"""
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
allowed, time_allowed = self.can_do_action(
key,
rate_hz=rate_hz,
burst_count=burst_count,
update=update,
_time_now_s=time_now_s,
)
if not allowed:

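To illustrate how the reworked class is driven (a sketch; the stub clock stands in for Synapse's homeserver `Clock`, of which only `.time()` is used here):

```python
import time

class StubClock:
    """Minimal stand-in for synapse.util.Clock: just .time() in seconds."""
    def time(self) -> float:
        return time.time()

# Allow an average of 0.1 actions/second, with bursts of up to 3 actions,
# e.g. for login attempts keyed on client IP.
limiter = Ratelimiter(clock=StubClock(), rate_hz=0.1, burst_count=3)

allowed, time_allowed = limiter.can_do_action(key="203.0.113.7")
if not allowed:
    print("denied; next allowed at %.0f" % time_allowed)

# ratelimit() performs the same check, but raises LimitExceededError when
# denied; per-call overrides of rate_hz/burst_count are also possible:
limiter.ratelimit(key="203.0.113.7", rate_hz=1.0, burst_count=10)
```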
View File

@@ -58,7 +58,15 @@ class RoomVersion(object):
enforce_key_validity = attr.ib() # bool
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
# * Floats
# * NaN, Infinity, -Infinity
strict_canonicaljson = attr.ib(type=bool)
# bool: MSC2209: Check 'notifications' key while verifying
# m.room.power_levels auth rules.
limit_notifications_power_levels = attr.ib(type=bool)
class RoomVersions(object):
@@ -69,6 +77,8 @@ class RoomVersions(object):
StateResolutionVersions.V1,
enforce_key_validity=False,
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
V2 = RoomVersion(
"2",
@@ -77,6 +87,8 @@ class RoomVersions(object):
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
V3 = RoomVersion(
"3",
@@ -85,6 +97,8 @@ class RoomVersions(object):
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
V4 = RoomVersion(
"4",
@@ -93,6 +107,8 @@ class RoomVersions(object):
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
V5 = RoomVersion(
"5",
@@ -101,14 +117,18 @@ class RoomVersions(object):
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
)
MSC2432_DEV = RoomVersion(
"org.matrix.msc2432",
RoomDisposition.UNSTABLE,
V6 = RoomVersion(
"6",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
)
@@ -120,6 +140,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.MSC2432_DEV,
RoomVersions.V6,
)
} # type: Dict[str, RoomVersion]
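To make the new `strict_canonicaljson` flag concrete, here is a sketch of the value constraints it implies (an illustration, not Synapse's actual validator):

```python
def is_canonicaljson_safe(value) -> bool:
    """True if `value` satisfies the room-v6 canonicaljson restrictions:
    no floats (hence no NaN/Infinity), and integers within
    [-2**53 + 1, 2**53 - 1]."""
    if isinstance(value, bool) or value is None or isinstance(value, str):
        return True
    if isinstance(value, float):
        return False
    if isinstance(value, int):
        return -(2 ** 53) + 1 <= value <= 2 ** 53 - 1
    if isinstance(value, dict):
        return all(
            isinstance(k, str) and is_canonicaljson_safe(v)
            for k, v in value.items()
        )
    if isinstance(value, list):
        return all(is_canonicaljson_safe(v) for v in value)
    return False
```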

View File

@@ -17,16 +17,15 @@
import contextlib
import logging
import sys
from typing import Dict, Iterable, Optional, Set
from typing_extensions import ContextManager
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
import synapse.events
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
@@ -40,14 +39,22 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.federation.transport.server import TransportLayerServer
from synapse.handlers.presence import (
BasePresenceHandler,
PresenceState,
get_interested_parties,
)
from synapse.http.server import JsonResource, OptionsResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.http.presence import (
ReplicationBumpPresenceActiveTime,
ReplicationPresenceSetState,
)
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -137,31 +144,18 @@ logger = logging.getLogger("synapse.app.generic_worker")
class PresenceStatusStubServlet(RestServlet):
"""If presence is disabled this servlet can be used to stub out setting
presence status.
"""
PATTERNS = client_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__()
self.auth = hs.get_auth()
async def on_GET(self, request, user_id):
await self.auth.get_user_by_req(request)
return 200, {"presence": "offline"}
async def on_PUT(self, request, user_id):
await self.auth.get_user_by_req(request)
@@ -215,9 +209,14 @@ class KeyUploadServlet(RestServlet):
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {"Authorization": auth_headers}
try:
result = await self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
except HttpResponseException as e:
raise e.to_synapse_error() from e
except RequestSendFailed as e:
raise SynapseError(502, "Failed to talk to master") from e
return 200, result
else:
@@ -256,6 +255,9 @@ class GenericWorkerPresence(BasePresenceHandler):
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._bump_active_client = ReplicationBumpPresenceActiveTime.make_client(hs)
self._set_state_client = ReplicationPresenceSetState.make_client(hs)
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
@@ -313,10 +315,6 @@ class GenericWorkerPresence(BasePresenceHandler):
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
async def user_syncing(
self, user_id: str, affect_presence: bool
) -> ContextManager[None]:
@@ -395,6 +393,42 @@ class GenericWorkerPresence(BasePresenceHandler):
if count > 0
]
async def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user.
"""
presence = state["presence"]
valid_presence = (
PresenceState.ONLINE,
PresenceState.UNAVAILABLE,
PresenceState.OFFLINE,
)
if presence not in valid_presence:
raise SynapseError(400, "Invalid presence state")
user_id = target_user.to_string()
# If presence is disabled, no-op
if not self.hs.config.use_presence:
return
# Proxy request to master
await self._set_state_client(
user_id=user_id, state=state, ignore_status_msg=ignore_status_msg
)
async def bump_presence_active_time(self, user):
"""We've seen the user do something that indicates they're interacting
with the app.
"""
# If presence is disabled, no-op
if not self.hs.config.use_presence:
return
# Proxy request to master
user_id = user.to_string()
await self._bump_active_client(user_id=user_id)
class GenericWorkerTyping(object):
def __init__(self, hs):
@@ -574,7 +608,7 @@ class GenericWorkerServer(HomeServer):
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
root_resource = create_resource_tree(resources, OptionsResource())
_base.listen_tcp(
bind_addresses,
@@ -643,10 +677,9 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
self.notify_pushers = hs.config.start_pushers
self.pusher_pool = hs.get_pusherpool()
self.send_handler = None # type: Optional[FederationSenderHandler]
if hs.config.send_federation:
self.send_handler = FederationSenderHandler(hs, self)
else:
self.send_handler = None
self.send_handler = FederationSenderHandler(hs)
async def on_rdata(self, stream_name, instance_name, token, rows):
await super().on_rdata(stream_name, instance_name, token, rows)
@@ -684,7 +717,7 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == DeviceListsStream.NAME:
all_room_ids = set()
all_room_ids = set() # type: Set[str]
for row in rows:
if row.entity.startswith("@"):
room_ids = await self.store.get_rooms_for_user(row.entity)
@@ -735,24 +768,33 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""Processes the fedration replication stream
This class is only instantiate on the worker responsible for sending outbound
federation transactions. It receives rows from the replication stream and forwards
the appropriate entries to the FederationSender class.
"""
def __init__(self, hs: GenericWorkerServer, replication_client):
def __init__(self, hs: GenericWorkerServer):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self._hs = hs
# if the worker is restarted, we want to pick up where we left off in
# the replication stream, so load the position from the database.
#
# XXX is this actually worthwhile? Whenever the master is restarted, we'll
# drop some rows anyway (which is mostly fine because we're only dropping
# typing and presence notifications). If the replication stream is
# unreliable, why do we do all this hoop-jumping to store the position in the
# database? See also https://github.com/matrix-org/synapse/issues/7535.
#
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
@@ -815,22 +857,51 @@ class FederationSenderHandler(object):
await self.federation_sender.send_read_receipt(receipt_info)
async def update_token(self, token):
"""Update the record of where we have processed to in the federation stream.
Called after we have processed an update received over replication. Sends
a FEDERATION_ACK back to the master, and stores the token that we have processed
in `federation_stream_position` so that we can restart where we left off.
"""
self.federation_position = token
# We save and send the ACK to master asynchronously, so we don't block
# processing on persistence. We don't need to do this operation for
# every single RDATA we receive, we just need to do it periodically.
if self._fed_position_linearizer.is_queued(None):
# There is already a task queued up to save and send the token, so
# no need to queue up another task.
return
run_as_background_process("_save_and_send_ack", self._save_and_send_ack)
async def _save_and_send_ack(self):
"""Save the current federation position in the database and send an ACK
to master with where we're up to.
"""
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (await self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
await self.store.update_federation_out_pos(
"federation", self.federation_position
)
#
# XXX this appears to be redundant, since the ReplicationCommandHandler
# has a linearizer which ensures that we only process one line of
# replication data at a time. Should we remove it, or is it doing useful
# service for robustness? Or could we replace it with an assertion that
# we're not being re-entered?
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(
self.federation_position
)
self._last_ack = self.federation_position
with (await self._fed_position_linearizer.queue(None)):
# We persist and ack the same position, so we take a copy of it
# here as otherwise it can get modified from underneath us.
current_position = self.federation_position
await self.store.update_federation_out_pos(
"federation", current_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self._hs.get_tcp_replication().send_federation_ack(current_position)
self._last_ack = current_position
except Exception:
logger.exception("Error updating federation stream position")
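The hunk above debounces the save-and-ack work: if a task is already queued behind the linearizer, a new update just records the position and returns. A minimal asyncio sketch of the same pattern, with plain asyncio standing in for Synapse's Linearizer and run_as_background_process:

import asyncio

class AckDebouncer:
    def __init__(self, save, ack):
        self._save = save    # async callable persisting the position
        self._ack = ack      # async callable sending the FEDERATION_ACK
        self.position = None
        self._queued = False

    def update_token(self, token):
        self.position = token
        if self._queued:
            # A task is already queued; it will pick up the latest
            # position when it runs, so a burst collapses into one ack.
            return
        self._queued = True
        asyncio.ensure_future(self._save_and_send_ack())

    async def _save_and_send_ack(self):
        self._queued = False
        # Take a copy so a concurrent update can't change it under us.
        current = self.position
        await self._save(current)
        await self._ack(current)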

View File

@@ -31,7 +31,7 @@ from prometheus_client import Gauge
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from twisted.web.resource import EncodingResourceWrapper, IResource, NoResource
from twisted.web.resource import EncodingResourceWrapper, IResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@@ -52,7 +52,11 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.server import (
OptionsResource,
RootOptionsRedirectResource,
RootRedirect,
)
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
@@ -121,11 +125,11 @@ class SynapseHomeServer(HomeServer):
# try to find something useful to redirect '/' to
if WEB_CLIENT_PREFIX in resources:
root_resource = RootRedirect(WEB_CLIENT_PREFIX)
root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
elif STATIC_PREFIX in resources:
root_resource = RootRedirect(STATIC_PREFIX)
root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
else:
root_resource = NoResource()
root_resource = OptionsResource()
root_resource = create_resource_tree(resources, root_resource)
@@ -484,6 +488,29 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
if uptime < 0:
uptime = 0
#
# Performance statistics. Keep this early in the function to maintain reliability of the `test_performance_100` test.
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new
# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss
# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
#
# General statistics
#
stats["homeserver"] = hs.config.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
@@ -518,25 +545,6 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
stats["cache_factor"] = hs.config.caches.global_factor
stats["event_cache_size"] = hs.config.caches.event_cache_size
#
# Performance statistics
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new
# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss
# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
#
# Database version
#
@@ -613,18 +621,17 @@ def run(hs):
clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)
reap_monthly_active_users()
@defer.inlineCallbacks
def generate_monthly_active_users():
async def generate_monthly_active_users():
current_mau_count = 0
current_mau_count_by_service = {}
reserved_users = ()
store = hs.get_datastore()
if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
current_mau_count = yield store.get_monthly_active_count()
current_mau_count = await store.get_monthly_active_count()
current_mau_count_by_service = (
yield store.get_monthly_active_count_by_service()
await store.get_monthly_active_count_by_service()
)
reserved_users = yield store.get_registered_reserved_users()
reserved_users = await store.get_registered_reserved_users()
current_mau_gauge.set(float(current_mau_count))
for app_service, count in current_mau_count_by_service.items():

View File

@@ -270,7 +270,7 @@ class ApplicationService(object):
def is_exclusive_room(self, room_id):
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exlusive_user_regexes(self):
def get_exclusive_user_regexes(self):
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""

View File

@@ -14,13 +14,17 @@
# limitations under the License.
import os
import re
from typing import Callable, Dict
from ._base import Config, ConfigError
# The prefix for all cache factor-related environment variables
_CACHES = {}
_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
# Map from canonicalised cache name to cache.
_CACHES = {}
_DEFAULT_FACTOR_SIZE = 0.5
_DEFAULT_EVENT_CACHE_SIZE = "10K"
@@ -37,6 +41,20 @@ class CacheProperties(object):
properties = CacheProperties()
def _canonicalise_cache_name(cache_name: str) -> str:
"""Gets the canonical form of the cache name.
Since we specify cache names both in the config and in environment variables,
we need to ignore case and special characters. For example, some caches have
asterisks in their name to denote that they're not attached to a particular
database function, and these asterisks need to be stripped out.
"""
cache_name = re.sub(r"[^A-Za-z_1-9]", "", cache_name)
return cache_name.lower()
def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
"""Register a cache that's size can dynamically change
@@ -45,7 +63,10 @@ def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
cache_resize_callback: A callback function that will be ran whenever
the cache needs to be resized
"""
_CACHES[cache_name.lower()] = cache_resize_callback
# Some caches have '*' in them which we strip out.
cache_name = _canonicalise_cache_name(cache_name)
_CACHES[cache_name] = cache_resize_callback
# Ensure all loaded caches are sized appropriately
#
@@ -105,6 +126,12 @@ class CacheConfig(Config):
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
"""
@@ -130,10 +157,17 @@ class CacheConfig(Config):
if not isinstance(individual_factors, dict):
raise ConfigError("caches.per_cache_factors must be a dictionary")
# Canonicalise the cache names *before* updating with the environment
# variables.
individual_factors = {
_canonicalise_cache_name(key): val
for key, val in individual_factors.items()
}
# Override factors from environment if necessary
individual_factors.update(
{
key[len(_CACHE_PREFIX) + 1 :].lower(): float(val)
_canonicalise_cache_name(key[len(_CACHE_PREFIX) + 1 :]): float(val)
for key, val in self._environ.items()
if key.startswith(_CACHE_PREFIX + "_")
}
@@ -142,9 +176,9 @@ class CacheConfig(Config):
for cache, factor in individual_factors.items():
if not isinstance(factor, (int, float)):
raise ConfigError(
"caches.per_cache_factors.%s must be a number" % (cache.lower(),)
"caches.per_cache_factors.%s must be a number" % (cache,)
)
self.cache_factors[cache.lower()] = factor
self.cache_factors[cache] = factor
# Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches()
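A standalone sketch of the canonicalisation and environment-override logic above. The character class here is simplified; the hunk above strips a slightly different set of characters:

import re

_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"

def canonicalise(cache_name: str) -> str:
    # Keep only letters, digits and underscores, then lowercase.
    return re.sub(r"[^A-Za-z0-9_]", "", cache_name).lower()

def factors_from_environ(environ: dict) -> dict:
    # SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0 -> {"stategroupcache": 2.0}
    return {
        canonicalise(key[len(_CACHE_PREFIX) + 1 :]): float(val)
        for key, val in environ.items()
        if key.startswith(_CACHE_PREFIX + "_")
    }

print(canonicalise("*stateGroupCache*"))  # stategroupcache
print(factors_from_environ({"SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE": "2.0"}))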

View File

@@ -32,23 +32,26 @@ class CaptchaConfig(Config):
def generate_config_section(self, **kwargs):
return """\
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# See docs/CAPTCHA_SETUP.md for full details of configuring this.
# This homeserver's ReCAPTCHA public key.
# This homeserver's ReCAPTCHA public key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This homeserver's ReCAPTCHA private key.
# This homeserver's ReCAPTCHA private key. Must be specified if
# enable_registration_captcha is enabled.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
# Enables ReCaptcha checks when registering, preventing signup
# Uncomment to enable ReCaptcha checks when registering, preventing signup
# unless a captcha is answered. Requires a valid ReCaptcha
# public/private key.
# public/private key. Defaults to 'false'.
#
#enable_registration_captcha: false
#enable_registration_captcha: true
# The API endpoint to use for verifying m.login.recaptcha responses.
# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
#
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"
#recaptcha_siteverify_api: "https://my.recaptcha.site"
"""

View File

@@ -311,8 +311,8 @@ class EmailConfig(Config):
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
#smtp_user: "exampleusername"
#smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to

View File

@@ -175,8 +175,8 @@ class KeyConfig(Config):
)
form_secret = 'form_secret: "%s"' % random_string_with_symbols(50)
else:
macaroon_secret_key = "# macaroon_secret_key: <PRIVATE STRING>"
form_secret = "# form_secret: <PRIVATE STRING>"
macaroon_secret_key = "#macaroon_secret_key: <PRIVATE STRING>"
form_secret = "#form_secret: <PRIVATE STRING>"
return (
"""\

View File

@@ -93,10 +93,11 @@ class MetricsConfig(Config):
#known_servers: true
# Whether or not to report anonymized homeserver usage statistics.
#
"""
if report_stats is None:
res += "# report_stats: true|false\n"
res += "#report_stats: true|false\n"
else:
res += "report_stats: %s\n" % ("true" if report_stats else "false")

View File

@@ -55,7 +55,6 @@ class OIDCConfig(Config):
self.oidc_token_endpoint = oidc_config.get("token_endpoint")
self.oidc_userinfo_endpoint = oidc_config.get("userinfo_endpoint")
self.oidc_jwks_uri = oidc_config.get("jwks_uri")
self.oidc_subject_claim = oidc_config.get("subject_claim", "sub")
self.oidc_skip_verification = oidc_config.get("skip_verification", False)
ump_config = oidc_config.get("user_mapping_provider", {})
@@ -86,92 +85,119 @@ class OIDCConfig(Config):
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# Enable OpenID Connect for registration and login. Uses authlib.
# OpenID Connect integration. The following settings can be used to make Synapse
# use an OpenID Connect Provider for authentication, instead of its internal
# password database.
#
# See https://github.com/matrix-org/synapse/blob/master/openid.md.
#
oidc_config:
# enable OpenID Connect. Defaults to false.
#
#enabled: true
# Uncomment the following to enable authorization against an OpenID Connect
# server. Defaults to false.
#
#enabled: true
# use the OIDC discovery mechanism to discover endpoints. Defaults to true.
#
#discover: true
# Uncomment the following to disable use of the OIDC discovery mechanism to
# discover endpoints. Defaults to true.
#
#discover: false
# the OIDC issuer. Used to validate tokens and discover the providers endpoints. Required.
#
#issuer: "https://accounts.example.com/"
# the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
# discover the provider's endpoints.
#
# Required if 'enabled' is true.
#
#issuer: "https://accounts.example.com/"
# oauth2 client id to use. Required.
#
#client_id: "provided-by-your-issuer"
# oauth2 client id to use.
#
# Required if 'enabled' is true.
#
#client_id: "provided-by-your-issuer"
# oauth2 client secret to use. Required.
#
#client_secret: "provided-by-your-issuer"
# oauth2 client secret to use.
#
# Required if 'enabled' is true.
#
#client_secret: "provided-by-your-issuer"
# auth method to use when exchanging the token.
# Valid values are "client_secret_basic" (default), "client_secret_post" and "none".
#
#client_auth_method: "client_auth_basic"
# auth method to use when exchanging the token.
# Valid values are 'client_secret_basic' (default), 'client_secret_post' and
# 'none'.
#
#client_auth_method: client_secret_post
# list of scopes to ask. This should include the "openid" scope. Defaults to ["openid"].
#
#scopes: ["openid"]
# list of scopes to request. This should normally include the "openid" scope.
# Defaults to ["openid"].
#
#scopes: ["openid", "profile"]
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
# the OIDC userinfo endpoint. Required if discovery is disabled and the "openid" scope is not asked.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# the OIDC userinfo endpoint. Required if discovery is disabled and the
# "openid" scope is not requested.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# URI where to fetch the JWKS. Required if discovery is disabled and the "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# URI where to fetch the JWKS. Required if discovery is disabled and the
# "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip metadata verification. Defaults to false.
# Use this if you are connecting to a provider that is not OpenID Connect compliant.
# Avoid this in production.
#
#skip_verification: false
# Uncomment to skip metadata verification. Defaults to false.
#
# Use this if you are connecting to a provider that is not OpenID Connect
# compliant.
# Avoid this in production.
#
#skip_verification: true
# An external module can be provided here as a custom solution to mapping
# attributes returned from a OIDC provider onto a matrix user.
# An external module can be provided here as a custom solution to mapping
# attributes returned from a OIDC provider onto a matrix user.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is {mapping_provider!r}.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is {mapping_provider!r}.
# See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
# for information on implementing a custom mapping provider.
#
#module: mapping_provider.OidcMappingProvider
# Custom configuration values for the module. This section will be passed as
# a Python dictionary to the user mapping provider module's `parse_config`
# method.
#
# The examples below are intended for the default provider: they should be
# changed if using a custom provider.
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#module: mapping_provider.OidcMappingProvider
#subject_claim: "sub"
# Custom configuration values for the module. Below options are intended
# for the built-in provider, they should be changed if using a custom
# module. This section will be passed as a Python dictionary to the
# module's `parse_config` method.
# Jinja2 template for the localpart of the MXID.
#
# Below is the config of the default mapping provider, based on Jinja2
# templates. Those templates are used to render user attributes, where the
# userinfo object is available through the `user` variable.
# When rendering, this template is given the following variables:
# * user: The claims returned by the UserInfo Endpoint and/or in the ID
# Token
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#subject_claim: "sub"
# This must be configured if using the default mapping provider.
#
localpart_template: "{{{{ user.preferred_username }}}}"
# Jinja2 template for the localpart of the MXID
#
localpart_template: "{{{{ user.preferred_username }}}}"
# Jinja2 template for the display name to set on first login. Optional.
#
#display_name_template: "{{{{ user.given_name }}}} {{{{ user.last_name }}}}"
# Jinja2 template for the display name to set on first login.
#
# If unset, no displayname will be set.
#
#display_name_template: "{{{{ user.given_name }}}} {{{{ user.last_name }}}}"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)
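A minimal sketch of how the default mapping provider's Jinja2 templates render user attributes; the claim values below are made up for illustration:

from jinja2 import Template

userinfo = {"preferred_username": "alice", "given_name": "Alice", "last_name": "Liddell"}

localpart = Template("{{ user.preferred_username }}").render(user=userinfo)
display_name = Template("{{ user.given_name }} {{ user.last_name }}").render(user=userinfo)

print(localpart)     # alice -> used as the localpart of the MXID
print(display_name)  # Alice Liddell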

View File

@@ -12,11 +12,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict
from ._base import Config
class RateLimitConfig(object):
def __init__(self, config, defaults={"per_second": 0.17, "burst_count": 3.0}):
def __init__(
self,
config: Dict[str, float],
defaults={"per_second": 0.17, "burst_count": 3.0},
):
self.per_second = config.get("per_second", defaults["per_second"])
self.burst_count = config.get("burst_count", defaults["burst_count"])
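A quick usage sketch of the class above: keys missing from the supplied config fall back to the shared defaults.

rl = RateLimitConfig({"per_second": 0.5})
print(rl.per_second)   # 0.5, taken from the supplied config
print(rl.burst_count)  # 3.0, falling back to the default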

View File

@@ -128,6 +128,7 @@ class RegistrationConfig(Config):
if not RoomAlias.is_valid(room_alias):
raise ConfigError("Invalid auto_join_rooms entry %s" % (room_alias,))
self.autocreate_auto_join_rooms = config.get("autocreate_auto_join_rooms", True)
self.auto_join_rooms_for_guests = config.get("auto_join_rooms_for_guests", True)
self.enable_set_displayname = config.get("enable_set_displayname", True)
self.enable_set_avatar_url = config.get("enable_set_avatar_url", True)
@@ -148,9 +149,7 @@ class RegistrationConfig(Config):
random_string_with_symbols(50),
)
else:
registration_shared_secret = (
"# registration_shared_secret: <PRIVATE STRING>"
)
registration_shared_secret = "#registration_shared_secret: <PRIVATE STRING>"
return (
"""\
@@ -370,6 +369,13 @@ class RegistrationConfig(Config):
# users cannot be auto-joined since they do not exist.
#
#autocreate_auto_join_rooms: true
# When auto_join_rooms is specified, setting this flag to false prevents
# guest accounts from being automatically joined to the rooms.
#
# Defaults to true.
#
#auto_join_rooms_for_guests: false
"""
% locals()
)

View File

@@ -70,6 +70,7 @@ def parse_thumbnail_requirements(thumbnail_sizes):
jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg")
png_thumbnail = ThumbnailRequirement(width, height, method, "image/png")
requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail)
requirements.setdefault("image/webp", []).append(jpeg_thumbnail)
requirements.setdefault("image/gif", []).append(png_thumbnail)
requirements.setdefault("image/png", []).append(png_thumbnail)
return {

View File

@@ -15,8 +15,8 @@
# limitations under the License.
import logging
import os
import jinja2
import pkg_resources
from synapse.python_dependencies import DependencyException, check_requirements
@@ -167,9 +167,11 @@ class SAML2Config(Config):
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.saml2_error_html_content = self.read_file(
os.path.join(template_dir, "saml_error.html"), "saml2_config.saml_error",
)
loader = jinja2.FileSystemLoader(template_dir)
# enable auto-escape here, to avoid having to remember to escape manually in
# the template
env = jinja2.Environment(loader=loader, autoescape=True)
self.saml2_error_html_template = env.get_template("saml_error.html")
def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
@@ -216,6 +218,8 @@ class SAML2Config(Config):
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
## Single sign-on integration ##
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
@@ -349,7 +353,13 @@ class SAML2Config(Config):
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# This template doesn't currently need any variable to render.
# When rendering, this template is given the following variables:
# * code: an HTTP error code corresponding to the error that is being
# returned (typically 400 or 500)
#
# * msg: a textual message describing the error.
#
# The variables will automatically be HTML-escaped.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates

View File

@@ -434,7 +434,7 @@ class ServerConfig(Config):
)
self.limit_remote_rooms = LimitRemoteRoomsConfig(
**config.get("limit_remote_rooms", {})
**(config.get("limit_remote_rooms") or {})
)
bind_port = config.get("bind_port")
@@ -895,22 +895,27 @@ class ServerConfig(Config):
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained homeserver Settings
# Resource-constrained homeserver settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, it will disallow joining or
# instantly leave.
# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
#
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
# Room complexity is an arbitrary measure based on factors such as the number of
# users in the room.
#
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: true
# complexity: 1.0
# complexity_error: "This room is too complex."
limit_remote_rooms:
# Uncomment to enable room complexity checking.
#
#enabled: true
# the limit above which rooms cannot be joined. The default is 1.0.
#
#complexity: 0.5
# override the error which is returned when the room is too complex.
#
#complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.

View File

@@ -61,7 +61,8 @@ class SSOConfig(Config):
def generate_config_section(self, **kwargs):
return """\
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
# Additional settings to use with single-sign on systems such as OpenID Connect,
# SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not

View File

@@ -15,7 +15,7 @@
import attr
from ._base import Config
from ._base import Config, ConfigError
@attr.s
@@ -94,15 +94,26 @@ class WorkerConfig(Config):
bind_addresses.append("")
# A map from instance name to host/port of their HTTP replication endpoint.
instance_map = config.get("instance_map", {}) or {}
instance_map = config.get("instance_map") or {}
self.instance_map = {
name: InstanceLocationConfig(**c) for name, c in instance_map.items()
}
# Map from type of streams to source, c.f. WriterLocations.
writers = config.get("writers", {}) or {}
writers = config.get("stream_writers") or {}
self.writers = WriterLocations(**writers)
# Check that the configured writer for events also appears in
# `instance_map`.
if (
self.writers.events != "master"
and self.writers.events not in self.instance_map
):
raise ConfigError(
"Instance %r is configured to write events but does not appear in `instance_map` config."
% (self.writers.events,)
)
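A minimal sketch of the check above, using plain dicts in place of the parsed config objects:

instance_map = {"event_writer": {"host": "localhost", "port": 9091}}
stream_writers = {"events": "event_writer"}

events_writer = stream_writers.get("events", "master")
if events_writer != "master" and events_writer not in instance_map:
    raise ValueError(
        "Instance %r is configured to write events but does not appear "
        "in `instance_map` config." % (events_writer,)
    )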
def read_arguments(self, args):
# We support a bunch of command line arguments that override options in
# the config. A lot of these options have a worker_* prefix when running

View File

@@ -15,7 +15,7 @@
# limitations under the License.
import logging
from typing import Set, Tuple
from typing import List, Optional, Set, Tuple
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
@@ -29,18 +29,19 @@ from synapse.api.room_versions import (
EventFormatVersions,
RoomVersion,
)
from synapse.types import UserID, get_domain_from_id
from synapse.events import EventBase
from synapse.types import StateMap, UserID, get_domain_from_id
logger = logging.getLogger(__name__)
def check(
room_version_obj: RoomVersion,
event,
auth_events,
do_sig_check=True,
do_size_check=True,
):
event: EventBase,
auth_events: StateMap[EventBase],
do_sig_check: bool = True,
do_size_check: bool = True,
) -> None:
""" Checks if this event is correctly authed.
Args:
@@ -181,7 +182,7 @@ def check(
_can_send_event(event, auth_events)
if event.type == EventTypes.PowerLevels:
_check_power_levels(event, auth_events)
_check_power_levels(room_version_obj, event, auth_events)
if event.type == EventTypes.Redaction:
check_redaction(room_version_obj, event, auth_events)
@@ -189,7 +190,7 @@ def check(
logger.debug("Allowing! %s", event)
def _check_size_limits(event):
def _check_size_limits(event: EventBase) -> None:
def too_big(field):
raise EventSizeError("%s too large" % (field,))
@@ -207,13 +208,18 @@ def _check_size_limits(event):
too_big("event")
def _can_federate(event, auth_events):
def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
creation_event = auth_events.get((EventTypes.Create, ""))
# There should always be a creation event, but if not don't federate.
if not creation_event:
return False
return creation_event.content.get("m.federate", True) is True
def _is_membership_change_allowed(event, auth_events):
def _is_membership_change_allowed(
event: EventBase, auth_events: StateMap[EventBase]
) -> None:
membership = event.content["membership"]
# Check if this is the room creator joining:
@@ -339,21 +345,25 @@ def _is_membership_change_allowed(event, auth_events):
raise AuthError(500, "Unknown membership %s" % membership)
def _check_event_sender_in_room(event, auth_events):
def _check_event_sender_in_room(
event: EventBase, auth_events: StateMap[EventBase]
) -> None:
key = (EventTypes.Member, event.user_id)
member_event = auth_events.get(key)
return _check_joined_room(member_event, event.user_id, event.room_id)
_check_joined_room(member_event, event.user_id, event.room_id)
def _check_joined_room(member, user_id, room_id):
def _check_joined_room(member: Optional[EventBase], user_id: str, room_id: str) -> None:
if not member or member.membership != Membership.JOIN:
raise AuthError(
403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
)
def get_send_level(etype, state_key, power_levels_event):
def get_send_level(
etype: str, state_key: Optional[str], power_levels_event: Optional[EventBase]
) -> int:
"""Get the power level required to send an event of a given type
The federation spec [1] refers to this as "Required Power Level".
@@ -361,13 +371,13 @@ def get_send_level(etype, state_key, power_levels_event):
https://matrix.org/docs/spec/server_server/unstable.html#definitions
Args:
etype (str): type of event
state_key (str|None): state_key of state event, or None if it is not
etype: type of event
state_key: state_key of state event, or None if it is not
a state event.
power_levels_event (synapse.events.EventBase|None): power levels event
power_levels_event: power levels event
in force at this point in the room
Returns:
int: power level required to send this event.
power level required to send this event.
"""
if power_levels_event:
@@ -388,7 +398,7 @@ def get_send_level(etype, state_key, power_levels_event):
return int(send_level)
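A worked example of the lookup above, using a plain dict for the power levels content (the values are illustrative):

power_levels = {
    "events": {"m.room.name": 100},  # per-event-type overrides
    "state_default": 50,             # fallback for state events
    "events_default": 0,             # fallback for message events
}
# Renaming the room (a state event with an explicit override) needs level 100;
# any other state event needs 50; an ordinary m.room.message needs 0.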
def _can_send_event(event, auth_events):
def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
power_levels_event = _get_power_level_event(auth_events)
send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
@@ -410,7 +420,9 @@ def _can_send_event(event, auth_events):
return True
def check_redaction(room_version_obj: RoomVersion, event, auth_events):
def check_redaction(
room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
) -> bool:
"""Check whether the event sender is allowed to redact the target event.
Returns:
@@ -442,7 +454,9 @@ def check_redaction(room_version_obj: RoomVersion, event, auth_events):
raise AuthError(403, "You don't have permission to redact events")
def _check_power_levels(event, auth_events):
def _check_power_levels(
room_version_obj: RoomVersion, event: EventBase, auth_events: StateMap[EventBase],
) -> None:
user_list = event.content.get("users", {})
# Validate users
for k, v in user_list.items():
@@ -473,7 +487,7 @@ def _check_power_levels(event, auth_events):
("redact", None),
("kick", None),
("invite", None),
]
] # type: List[Tuple[str, Optional[str]]]
old_list = current_state.content.get("users", {})
for user in set(list(old_list) + list(user_list)):
@@ -484,6 +498,14 @@ def _check_power_levels(event, auth_events):
for ev_id in set(list(old_list) + list(new_list)):
levels_to_check.append((ev_id, "events"))
# MSC2209 specifies these checks should also be done for the "notifications"
# key.
if room_version_obj.limit_notifications_power_levels:
old_list = current_state.content.get("notifications", {})
new_list = event.content.get("notifications", {})
for ev_id in set(list(old_list) + list(new_list)):
levels_to_check.append((ev_id, "notifications"))
old_state = current_state.content
new_state = event.content
@@ -495,12 +517,12 @@ def _check_power_levels(event, auth_events):
new_loc = new_loc.get(dir, {})
if level_to_check in old_loc:
old_level = int(old_loc[level_to_check])
old_level = int(old_loc[level_to_check]) # type: Optional[int]
else:
old_level = None
if level_to_check in new_loc:
new_level = int(new_loc[level_to_check])
new_level = int(new_loc[level_to_check]) # type: Optional[int]
else:
new_level = None
@@ -526,21 +548,21 @@ def _check_power_levels(event, auth_events):
)
def _get_power_level_event(auth_events):
def _get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
return auth_events.get((EventTypes.PowerLevels, ""))
def get_user_power_level(user_id, auth_events):
def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
"""Get a user's power level
Args:
user_id (str): user's id to look up in power_levels
auth_events (dict[(str, str), synapse.events.EventBase]):
user_id: user's id to look up in power_levels
auth_events:
state in force at this point in the room (or rather, a subset of
it, including at least the create event and the power levels event).
Returns:
int: the user's power level in this room.
the user's power level in this room.
"""
power_level_event = _get_power_level_event(auth_events)
if power_level_event:
@@ -566,7 +588,7 @@ def get_user_power_level(user_id, auth_events):
return 0
def _get_named_level(auth_events, name, default):
def _get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
power_level_event = _get_power_level_event(auth_events)
if not power_level_event:
@@ -579,7 +601,7 @@ def _get_named_level(auth_events, name, default):
return default
def _verify_third_party_invite(event, auth_events):
def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase]):
"""
Validates that the invite event is authorized by a previous third-party invite.
@@ -654,7 +676,7 @@ def get_public_keys(invite_event):
return public_keys
def auth_types_for_event(event) -> Set[Tuple[str, str]]:
def auth_types_for_event(event: EventBase) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.

View File

@@ -14,7 +14,7 @@
# limitations under the License.
import collections
import re
from typing import Mapping, Union
from typing import Any, Mapping, Union
from six import string_types
@@ -23,6 +23,7 @@ from frozendict import frozendict
from twisted.internet import defer
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.util.async_helpers import yieldable_gather_results
@@ -449,3 +450,35 @@ def copy_power_levels_contents(
raise TypeError("Invalid power_levels value for %s: %r" % (k, v))
return power_levels
def validate_canonicaljson(value: Any):
"""
Ensure that the JSON object is valid according to the rules of canonical JSON.
See the Matrix specification appendix, section 3.1: Canonical JSON.
This rejects JSON that has:
* An integer outside the range [-2^53 + 1, 2^53 - 1]
* Floats
* NaN, Infinity, -Infinity
"""
if isinstance(value, int):
if value <= -(2 ** 53) or 2 ** 53 <= value:
raise SynapseError(400, "JSON integer out of range", Codes.BAD_JSON)
elif isinstance(value, float):
# Note that Infinity, -Infinity, and NaN are also considered floats.
raise SynapseError(400, "Bad JSON value: float", Codes.BAD_JSON)
elif isinstance(value, (dict, frozendict)):
for v in value.values():
validate_canonicaljson(v)
elif isinstance(value, (list, tuple)):
for i in value:
validate_canonicaljson(i)
elif not isinstance(value, (bool, str)) and value is not None:
# Other potential JSON values (bool, None, str) are safe.
raise SynapseError(400, "Unknown JSON value", Codes.BAD_JSON)

View File

@@ -18,6 +18,7 @@ from six import integer_types, string_types
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions
from synapse.events.utils import validate_canonicaljson
from synapse.types import EventID, RoomID, UserID
@@ -55,6 +56,12 @@ class EventValidator(object):
if not isinstance(getattr(event, s), string_types):
raise SynapseError(400, "'%s' not a string type" % (s,))
# Depending on the room version, ensure the data is spec compliant JSON.
if event.room_version.strict_canonicaljson:
# Note that only the client-controlled portion of the event is
# checked, since we trust the portions of the event we created.
validate_canonicaljson(event.content)
if event.type == EventTypes.Aliases:
if "aliases" in event.content:
for alias in event.content["aliases"]:

View File

@@ -29,7 +29,7 @@ from synapse.api.room_versions import EventFormatVersions, RoomVersion
from synapse.crypto.event_signing import check_event_content_hash
from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event
from synapse.events.utils import prune_event, validate_canonicaljson
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
PreserveLoggingContext,
@@ -302,6 +302,10 @@ def event_from_pdu_json(
elif depth > MAX_DEPTH:
raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
# Validate that the JSON conforms to the specification.
if room_version.strict_canonicaljson:
validate_canonicaljson(pdu_json)
event = make_event_from_dict(pdu_json, room_version)
event.internal_metadata.outlier = outlier

View File

@@ -80,6 +80,9 @@ class PerDestinationQueue(object):
# a list of tuples of (pending pdu, order)
self._pending_pdus = [] # type: List[Tuple[EventBase, int]]
# XXX this is never actually used: see
# https://github.com/matrix-org/synapse/issues/7549
self._pending_edus = [] # type: List[Edu]
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered

View File

@@ -19,8 +19,6 @@ import logging
from six import string_types
from twisted.internet import defer
from synapse.api.errors import Codes, SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
@@ -51,8 +49,7 @@ class GroupsServerWorkerHandler(object):
self.transport_client = hs.get_federation_transport_client()
self.profile_handler = hs.get_profile_handler()
@defer.inlineCallbacks
def check_group_is_ours(
async def check_group_is_ours(
self, group_id, requester_user_id, and_exists=False, and_is_admin=None
):
"""Check that the group is ours, and optionally if it exists.
@@ -68,25 +65,24 @@ class GroupsServerWorkerHandler(object):
if not self.is_mine_id(group_id):
raise SynapseError(400, "Group not on this server")
group = yield self.store.get_group(group_id)
group = await self.store.get_group(group_id)
if and_exists and not group:
raise SynapseError(404, "Unknown group")
is_user_in_group = yield self.store.is_user_in_group(
is_user_in_group = await self.store.is_user_in_group(
requester_user_id, group_id
)
if group and not is_user_in_group and not group["is_public"]:
raise SynapseError(404, "Unknown group")
if and_is_admin:
is_admin = yield self.store.is_user_admin_in_group(group_id, and_is_admin)
is_admin = await self.store.is_user_admin_in_group(group_id, and_is_admin)
if not is_admin:
raise SynapseError(403, "User is not admin in group")
return group
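The rest of this file applies the same mechanical conversion; a minimal before/after sketch, with `store` standing in for the real data store:

from twisted.internet import defer

@defer.inlineCallbacks
def get_group_old(store, group_id):         # old style
    group = yield store.get_group(group_id)
    return group

async def get_group_new(store, group_id):   # new style: decorator dropped,
    return await store.get_group(group_id)  # each yield becomes an await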
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
async def get_group_summary(self, group_id, requester_user_id):
"""Get the summary for a group as seen by requester_user_id.
The group summary consists of the profile of the room, and a curated
@@ -95,28 +91,28 @@ class GroupsServerWorkerHandler(object):
A user/room may appear in multiple roles/categories.
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(
is_user_in_group = await self.store.is_user_in_group(
requester_user_id, group_id
)
profile = yield self.get_group_profile(group_id, requester_user_id)
profile = await self.get_group_profile(group_id, requester_user_id)
users, roles = yield self.store.get_users_for_summary_by_role(
users, roles = await self.store.get_users_for_summary_by_role(
group_id, include_private=is_user_in_group
)
# TODO: Add profiles to users
rooms, categories = yield self.store.get_rooms_for_summary_by_category(
rooms, categories = await self.store.get_rooms_for_summary_by_category(
group_id, include_private=is_user_in_group
)
for room_entry in rooms:
room_id = room_entry["room_id"]
joined_users = yield self.store.get_users_in_room(room_id)
entry = yield self.room_list_handler.generate_room_entry(
joined_users = await self.store.get_users_in_room(room_id)
entry = await self.room_list_handler.generate_room_entry(
room_id, len(joined_users), with_alias=False, allow_private=True
)
entry = dict(entry)  # so we don't change what's cached
@@ -130,7 +126,7 @@ class GroupsServerWorkerHandler(object):
user_id = entry["user_id"]
if not self.is_mine_id(requester_user_id):
attestation = yield self.store.get_remote_attestation(group_id, user_id)
attestation = await self.store.get_remote_attestation(group_id, user_id)
if not attestation:
continue
@@ -140,12 +136,12 @@ class GroupsServerWorkerHandler(object):
group_id, user_id
)
user_profile = yield self.profile_handler.get_profile_from_cache(user_id)
user_profile = await self.profile_handler.get_profile_from_cache(user_id)
entry.update(user_profile)
users.sort(key=lambda e: e.get("order", 0))
membership_info = yield self.store.get_users_membership_info_in_group(
membership_info = await self.store.get_users_membership_info_in_group(
group_id, requester_user_id
)
@@ -164,22 +160,20 @@ class GroupsServerWorkerHandler(object):
"user": membership_info,
}
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
async def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
categories = yield self.store.get_group_categories(group_id=group_id)
categories = await self.store.get_group_categories(group_id=group_id)
return {"categories": categories}
@defer.inlineCallbacks
def get_group_category(self, group_id, requester_user_id, category_id):
async def get_group_category(self, group_id, requester_user_id, category_id):
"""Get a specific category in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_category(
res = await self.store.get_group_category(
group_id=group_id, category_id=category_id
)
@@ -187,32 +181,29 @@ class GroupsServerWorkerHandler(object):
return res
@defer.inlineCallbacks
def get_group_roles(self, group_id, requester_user_id):
async def get_group_roles(self, group_id, requester_user_id):
"""Get all roles in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
roles = yield self.store.get_group_roles(group_id=group_id)
roles = await self.store.get_group_roles(group_id=group_id)
return {"roles": roles}
@defer.inlineCallbacks
def get_group_role(self, group_id, requester_user_id, role_id):
async def get_group_role(self, group_id, requester_user_id, role_id):
"""Get a specific role in a group (as seen by user)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
res = await self.store.get_group_role(group_id=group_id, role_id=role_id)
return res
@defer.inlineCallbacks
def get_group_profile(self, group_id, requester_user_id):
async def get_group_profile(self, group_id, requester_user_id):
"""Get the group profile as seen by requester_user_id
"""
yield self.check_group_is_ours(group_id, requester_user_id)
await self.check_group_is_ours(group_id, requester_user_id)
group = yield self.store.get_group(group_id)
group = await self.store.get_group(group_id)
if group:
cols = [
@@ -229,20 +220,19 @@ class GroupsServerWorkerHandler(object):
else:
raise SynapseError(404, "Unknown group")
@defer.inlineCallbacks
def get_users_in_group(self, group_id, requester_user_id):
async def get_users_in_group(self, group_id, requester_user_id):
"""Get the users in group as seen by requester_user_id.
The ordering is arbitrary at the moment
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(
is_user_in_group = await self.store.is_user_in_group(
requester_user_id, group_id
)
user_results = yield self.store.get_users_in_group(
user_results = await self.store.get_users_in_group(
group_id, include_private=is_user_in_group
)
@@ -254,14 +244,14 @@ class GroupsServerWorkerHandler(object):
entry = {"user_id": g_user_id}
profile = yield self.profile_handler.get_profile_from_cache(g_user_id)
profile = await self.profile_handler.get_profile_from_cache(g_user_id)
entry.update(profile)
entry["is_public"] = bool(is_public)
entry["is_privileged"] = bool(is_privileged)
if not self.is_mine_id(g_user_id):
attestation = yield self.store.get_remote_attestation(
attestation = await self.store.get_remote_attestation(
group_id, g_user_id
)
if not attestation:
@@ -279,30 +269,29 @@ class GroupsServerWorkerHandler(object):
return {"chunk": chunk, "total_user_count_estimate": len(user_results)}
@defer.inlineCallbacks
def get_invited_users_in_group(self, group_id, requester_user_id):
async def get_invited_users_in_group(self, group_id, requester_user_id):
"""Get the users that have been invited to a group as seen by requester_user_id.
The ordering is arbitrary at the moment
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(
is_user_in_group = await self.store.is_user_in_group(
requester_user_id, group_id
)
if not is_user_in_group:
raise SynapseError(403, "User not in group")
invited_users = yield self.store.get_invited_users_in_group(group_id)
invited_users = await self.store.get_invited_users_in_group(group_id)
user_profiles = []
for user_id in invited_users:
user_profile = {"user_id": user_id}
try:
profile = yield self.profile_handler.get_profile_from_cache(user_id)
profile = await self.profile_handler.get_profile_from_cache(user_id)
user_profile.update(profile)
except Exception as e:
logger.warning("Error getting profile for %s: %s", user_id, e)
@@ -310,20 +299,19 @@ class GroupsServerWorkerHandler(object):
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
@defer.inlineCallbacks
def get_rooms_in_group(self, group_id, requester_user_id):
async def get_rooms_in_group(self, group_id, requester_user_id):
"""Get the rooms in group as seen by requester_user_id
This returns rooms in order of decreasing number of joined users
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_user_in_group = yield self.store.is_user_in_group(
is_user_in_group = await self.store.is_user_in_group(
requester_user_id, group_id
)
room_results = yield self.store.get_rooms_in_group(
room_results = await self.store.get_rooms_in_group(
group_id, include_private=is_user_in_group
)
@@ -331,8 +319,8 @@ class GroupsServerWorkerHandler(object):
for room_result in room_results:
room_id = room_result["room_id"]
joined_users = yield self.store.get_users_in_room(room_id)
entry = yield self.room_list_handler.generate_room_entry(
joined_users = await self.store.get_users_in_room(room_id)
entry = await self.room_list_handler.generate_room_entry(
room_id, len(joined_users), with_alias=False, allow_private=True
)
@@ -355,13 +343,12 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
# Ensure attestations get renewed
hs.get_groups_attestation_renewer()
@defer.inlineCallbacks
def update_group_summary_room(
async def update_group_summary_room(
self, group_id, requester_user_id, room_id, category_id, content
):
"""Add/update a room to the group summary
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@@ -371,7 +358,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
is_public = _parse_visibility_from_contents(content)
yield self.store.add_room_to_summary(
await self.store.add_room_to_summary(
group_id=group_id,
room_id=room_id,
category_id=category_id,
@@ -381,31 +368,29 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
@defer.inlineCallbacks
def delete_group_summary_room(
async def delete_group_summary_room(
self, group_id, requester_user_id, room_id, category_id
):
"""Remove a room from the summary
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_room_from_summary(
await self.store.remove_room_from_summary(
group_id=group_id, room_id=room_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
async def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
- "invite": an invite must be received and accepted in order to join.
- "open": anyone can join.
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@@ -413,22 +398,23 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if join_policy is None:
raise SynapseError(400, "No value specified for 'm.join_policy'")
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
await self.store.set_group_join_policy(group_id, join_policy=join_policy)
return {}
@defer.inlineCallbacks
def update_group_category(self, group_id, requester_user_id, category_id, content):
async def update_group_category(
self, group_id, requester_user_id, category_id, content
):
"""Add/Update a group category
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
yield self.store.upsert_group_category(
await self.store.upsert_group_category(
group_id=group_id,
category_id=category_id,
is_public=is_public,
@@ -437,25 +423,23 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
@defer.inlineCallbacks
def delete_group_category(self, group_id, requester_user_id, category_id):
async def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_category(
await self.store.remove_group_category(
group_id=group_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def update_group_role(self, group_id, requester_user_id, role_id, content):
async def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@@ -463,31 +447,29 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
profile = content.get("profile")
yield self.store.upsert_group_role(
await self.store.upsert_group_role(
group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
)
return {}
@defer.inlineCallbacks
def delete_group_role(self, group_id, requester_user_id, role_id):
async def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
await self.store.remove_group_role(group_id=group_id, role_id=role_id)
return {}
@defer.inlineCallbacks
def update_group_summary_user(
async def update_group_summary_user(
self, group_id, requester_user_id, user_id, role_id, content
):
"""Add/update a users entry in the group summary
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@@ -495,7 +477,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_summary(
await self.store.add_user_to_summary(
group_id=group_id,
user_id=user_id,
role_id=role_id,
@@ -505,25 +487,25 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
@defer.inlineCallbacks
def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
async def delete_group_summary_user(
self, group_id, requester_user_id, user_id, role_id
):
"""Remove a user from the group summary
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_user_from_summary(
await self.store.remove_user_from_summary(
group_id=group_id, user_id=user_id, role_id=role_id
)
return {}
@defer.inlineCallbacks
def update_group_profile(self, group_id, requester_user_id, content):
async def update_group_profile(self, group_id, requester_user_id, content):
"""Update the group profile
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
@@ -535,40 +517,38 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
raise SynapseError(400, "%r value is not a string" % (keyname,))
profile[keyname] = value
yield self.store.update_group_profile(group_id, profile)
await self.store.update_group_profile(group_id, profile)
@defer.inlineCallbacks
def add_room_to_group(self, group_id, requester_user_id, room_id, content):
async def add_room_to_group(self, group_id, requester_user_id, room_id, content):
"""Add room to group
"""
RoomID.from_string(room_id) # Ensure valid room id
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_room_to_group(group_id, room_id, is_public=is_public)
await self.store.add_room_to_group(group_id, room_id, is_public=is_public)
return {}
@defer.inlineCallbacks
def update_room_in_group(
async def update_room_in_group(
self, group_id, requester_user_id, room_id, config_key, content
):
"""Update room in group
"""
RoomID.from_string(room_id) # Ensure valid room id
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
if config_key == "m.visibility":
is_public = _parse_visibility_dict(content)
yield self.store.update_room_in_group_visibility(
await self.store.update_room_in_group_visibility(
group_id, room_id, is_public=is_public
)
else:
@@ -576,36 +556,34 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
@defer.inlineCallbacks
def remove_room_from_group(self, group_id, requester_user_id, room_id):
async def remove_room_from_group(self, group_id, requester_user_id, room_id):
"""Remove room from group
"""
yield self.check_group_is_ours(
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_room_from_group(group_id, room_id)
await self.store.remove_room_from_group(group_id, room_id)
return {}
@defer.inlineCallbacks
def invite_to_group(self, group_id, user_id, requester_user_id, content):
async def invite_to_group(self, group_id, user_id, requester_user_id, content):
"""Invite user to group
"""
group = yield self.check_group_is_ours(
group = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
# TODO: Check if user knocked
invited_users = yield self.store.get_invited_users_in_group(group_id)
invited_users = await self.store.get_invited_users_in_group(group_id)
if user_id in invited_users:
raise SynapseError(
400, "User already invited to group", errcode=Codes.BAD_STATE
)
user_results = yield self.store.get_users_in_group(
user_results = await self.store.get_users_in_group(
group_id, include_private=True
)
if user_id in (user_result["user_id"] for user_result in user_results):
@@ -618,18 +596,18 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
res = yield groups_local.on_invite(group_id, user_id, content)
res = await groups_local.on_invite(group_id, user_id, content)
local_attestation = None
else:
local_attestation = self.attestations.create_attestation(group_id, user_id)
content.update({"attestation": local_attestation})
res = yield self.transport_client.invite_to_group_notification(
res = await self.transport_client.invite_to_group_notification(
get_domain_from_id(user_id), group_id, user_id, content
)
user_profile = res.get("user_profile", {})
yield self.store.add_remote_profile_cache(
await self.store.add_remote_profile_cache(
user_id,
displayname=user_profile.get("displayname"),
avatar_url=user_profile.get("avatar_url"),
@@ -639,13 +617,13 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if not self.hs.is_mine_id(user_id):
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
remote_attestation, user_id=user_id, group_id=group_id
)
else:
remote_attestation = None
yield self.store.add_user_to_group(
await self.store.add_user_to_group(
group_id,
user_id,
is_admin=False,
@@ -654,15 +632,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
remote_attestation=remote_attestation,
)
elif res["state"] == "invite":
yield self.store.add_group_invite(group_id, user_id)
await self.store.add_group_invite(group_id, user_id)
return {"state": "invite"}
elif res["state"] == "reject":
return {"state": "reject"}
else:
raise SynapseError(502, "Unknown state returned by HS")
@defer.inlineCallbacks
def _add_user(self, group_id, user_id, content):
async def _add_user(self, group_id, user_id, content):
"""Add a user to a group based on a content dict.
See accept_invite, join_group.
@@ -672,7 +649,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
remote_attestation = content["attestation"]
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
remote_attestation, user_id=user_id, group_id=group_id
)
else:
@@ -681,7 +658,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_group(
await self.store.add_user_to_group(
group_id,
user_id,
is_admin=False,
@@ -692,59 +669,55 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return local_attestation
@defer.inlineCallbacks
def accept_invite(self, group_id, requester_user_id, content):
async def accept_invite(self, group_id, requester_user_id, content):
"""User tries to accept an invite to the group.
This is different from them asking to join, and so should error if no
invite exists (and they're not a member of the group)
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_invited = yield self.store.is_user_invited_to_local_group(
is_invited = await self.store.is_user_invited_to_local_group(
group_id, requester_user_id
)
if not is_invited:
raise SynapseError(403, "User not invited to group")
local_attestation = yield self._add_user(group_id, requester_user_id, content)
local_attestation = await self._add_user(group_id, requester_user_id, content)
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def join_group(self, group_id, requester_user_id, content):
async def join_group(self, group_id, requester_user_id, content):
"""User tries to join the group.
This will error if the group requires an invite/knock to join
"""
group_info = yield self.check_group_is_ours(
group_info = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True
)
if group_info["join_policy"] != "open":
raise SynapseError(403, "Group is not publicly joinable")
local_attestation = yield self._add_user(group_id, requester_user_id, content)
local_attestation = await self._add_user(group_id, requester_user_id, content)
return {"state": "join", "attestation": local_attestation}
@defer.inlineCallbacks
def knock(self, group_id, requester_user_id, content):
async def knock(self, group_id, requester_user_id, content):
"""A user requests becoming a member of the group
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
@defer.inlineCallbacks
def accept_knock(self, group_id, requester_user_id, content):
async def accept_knock(self, group_id, requester_user_id, content):
"""Accept a users knock to the room.
Errors if the user hasn't knocked, rather than inviting them.
"""
yield self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
@@ -872,8 +845,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
group_id (str)
request_user_id (str)
Returns:
Deferred
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
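The bulk of this file's diff is a mechanical conversion from Twisted's `@defer.inlineCallbacks` generators to native coroutines. A minimal before/after sketch of that pattern (the `store.group_exists` method is a hypothetical placeholder, not part of this file):

from twisted.internet import defer

# Old style: a generator decorated with inlineCallbacks; intermediate
# results come from yielding Deferreds, and the final value is returned
# via defer.returnValue.
@defer.inlineCallbacks
def old_group_exists(store, group_id):
    exists = yield store.group_exists(group_id)  # hypothetical store method
    defer.returnValue(exists)

# New style: a native coroutine. `await` replaces `yield`, a plain
# `return` replaces defer.returnValue, and Twisted can still drive it
# via defer.ensureDeferred(...).
async def new_group_exists(store, group_id):
    exists = await store.group_exists(group_id)
    return exists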


@@ -19,7 +19,7 @@ from twisted.internet import defer
import synapse.types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import LimitExceededError
from synapse.api.ratelimiting import Ratelimiter
from synapse.types import UserID
logger = logging.getLogger(__name__)
@@ -44,11 +44,26 @@ class BaseHandler(object):
self.notifier = hs.get_notifier()
self.state_handler = hs.get_state_handler()
self.distributor = hs.get_distributor()
self.ratelimiter = hs.get_ratelimiter()
self.admin_redaction_ratelimiter = hs.get_admin_redaction_ratelimiter()
self.clock = hs.get_clock()
self.hs = hs
# The rate_hz and burst_count are overridden on a per-user basis
self.request_ratelimiter = Ratelimiter(
clock=self.clock, rate_hz=0, burst_count=0
)
self._rc_message = self.hs.config.rc_message
# Check whether ratelimiting room admin message redaction is enabled
# by the presence of rate limits in the config
if self.hs.config.rc_admin_redaction:
self.admin_redaction_ratelimiter = Ratelimiter(
clock=self.clock,
rate_hz=self.hs.config.rc_admin_redaction.per_second,
burst_count=self.hs.config.rc_admin_redaction.burst_count,
)
else:
self.admin_redaction_ratelimiter = None
self.server_name = hs.hostname
self.event_builder_factory = hs.get_event_builder_factory()
@@ -70,7 +85,6 @@ class BaseHandler(object):
Raises:
LimitExceededError if the request should be ratelimited
"""
time_now = self.clock.time()
user_id = requester.user.to_string()
# The AS user itself is never rate limited.
@@ -83,48 +97,32 @@ class BaseHandler(object):
if requester.app_service and not requester.app_service.is_rate_limited():
return
messages_per_second = self._rc_message.per_second
burst_count = self._rc_message.burst_count
# Check if there is a per user override in the DB.
override = yield self.store.get_ratelimit_for_user(user_id)
if override:
# If overriden with a null Hz then ratelimiting has been entirely
# If overridden with a null Hz then ratelimiting has been entirely
# disabled for the user
if not override.messages_per_second:
return
messages_per_second = override.messages_per_second
burst_count = override.burst_count
else:
# We default to different values if this is an admin redaction and
# the config is set
if is_admin_redaction and self.hs.config.rc_admin_redaction:
messages_per_second = self.hs.config.rc_admin_redaction.per_second
burst_count = self.hs.config.rc_admin_redaction.burst_count
else:
messages_per_second = self.hs.config.rc_message.per_second
burst_count = self.hs.config.rc_message.burst_count
if is_admin_redaction and self.hs.config.rc_admin_redaction:
# If we have separate config for admin redactions we use a separate
# ratelimiter
allowed, time_allowed = self.admin_redaction_ratelimiter.can_do_action(
user_id,
time_now,
rate_hz=messages_per_second,
burst_count=burst_count,
update=update,
)
if is_admin_redaction and self.admin_redaction_ratelimiter:
# If we have separate config for admin redactions, use a separate
# ratelimiter so that user_ids don't clash
self.admin_redaction_ratelimiter.ratelimit(user_id, update=update)
else:
allowed, time_allowed = self.ratelimiter.can_do_action(
# Override rate and burst count per-user
self.request_ratelimiter.ratelimit(
user_id,
time_now,
rate_hz=messages_per_second,
burst_count=burst_count,
update=update,
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now))
)
async def maybe_kick_guest_users(self, event, context=None):
# Technically this function invalidates current_state by changing it.
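The refactor above replaces per-call rate parameters with limiters configured once: a zero-rate `request_ratelimiter` whose rate and burst count are overridden per user, plus an optional dedicated admin-redaction limiter built from config. A toy token-bucket sketch of that shape (illustrative only, not Synapse's actual `Ratelimiter`):

import time

class TokenBucket:
    """Illustrative limiter: rate_hz tokens/second, burst_count capacity."""

    def __init__(self, rate_hz, burst_count):
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        self._state = {}  # key -> (tokens, last_checked)

    def can_do_action(self, key, rate_hz=None, burst_count=None, update=True):
        # Per-call overrides mirror the per-user DB overrides above; with
        # defaults of 0/0 the limiter blocks everything until overridden.
        rate_hz = self.rate_hz if rate_hz is None else rate_hz
        burst_count = self.burst_count if burst_count is None else burst_count
        now = time.monotonic()
        tokens, last = self._state.get(key, (burst_count, now))
        tokens = min(burst_count, tokens + (now - last) * rate_hz)
        allowed = tokens >= 1
        if update:
            if allowed:
                tokens -= 1
            self._state[key] = (tokens, now)
        return allowed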


@@ -80,7 +80,9 @@ class AuthHandler(BaseHandler):
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled
self._sso_enabled = hs.config.saml2_enabled or hs.config.cas_enabled
self._sso_enabled = (
hs.config.cas_enabled or hs.config.saml2_enabled or hs.config.oidc_enabled
)
# we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first
@@ -106,7 +108,11 @@ class AuthHandler(BaseHandler):
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
self._failed_uia_attempts_ratelimiter = Ratelimiter()
self._failed_uia_attempts_ratelimiter = Ratelimiter(
clock=self.clock,
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
)
self._clock = self.hs.get_clock()
@@ -194,13 +200,7 @@ class AuthHandler(BaseHandler):
user_id = requester.user.to_string()
# Check if we should be ratelimited due to too many previous failed attempts
self._failed_uia_attempts_ratelimiter.ratelimit(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=False,
)
self._failed_uia_attempts_ratelimiter.ratelimit(user_id, update=False)
# build a list of supported flows
flows = [[login_type] for login_type in self._supported_ui_auth_types]
@@ -210,14 +210,8 @@ class AuthHandler(BaseHandler):
flows, request, request_body, clientip, description
)
except LoginError:
# Update the ratelimite to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=True,
)
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(user_id)
raise
# find the completed login type
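The UI-auth limiter above is likewise configured once from `rc_login.failed_attempts`, then used in two steps: a non-consuming check up front (`update=False`) and a consuming call only when authentication actually fails. Reusing the `TokenBucket` sketch above, the flow looks roughly like this (names hypothetical):

def checked_auth(limiter, user_id, do_auth):
    # Peek at the limit without recording an attempt.
    if not limiter.can_do_action(user_id, update=False):
        raise RuntimeError("Too many failed auth attempts")
    try:
        return do_auth()
    except PermissionError:
        # Only failed attempts count against the limit.
        limiter.can_do_action(user_id, update=True)
        raise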


@@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Any, Dict, Optional
from six import iteritems, itervalues
@@ -29,7 +30,12 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.logging.opentracing import log_kv, set_tag, trace
from synapse.types import RoomStreamToken, get_domain_from_id
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import (
RoomStreamToken,
get_domain_from_id,
get_verify_key_from_cross_signing_key,
)
from synapse.util import stringutils
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
@@ -535,6 +541,15 @@ class DeviceListUpdater(object):
iterable=True,
)
# Attempt to resync out of sync device lists every 30s.
self._resync_retry_in_progress = False
self.clock.looping_call(
run_as_background_process,
30 * 1000,
func=self._maybe_retry_device_resync,
desc="_maybe_retry_device_resync",
)
@trace
@defer.inlineCallbacks
def incoming_device_list_update(self, origin, edu_content):
@@ -679,25 +694,83 @@ class DeviceListUpdater(object):
return False
@defer.inlineCallbacks
def user_device_resync(self, user_id):
def _maybe_retry_device_resync(self):
"""Retry to resync device lists that are out of sync, except if another retry is
in progress.
"""
if self._resync_retry_in_progress:
return
try:
# Prevent another call of this function to retry resyncing device lists so
# we don't send too many requests.
self._resync_retry_in_progress = True
# Get all of the users that need resyncing.
need_resync = yield self.store.get_user_ids_requiring_device_list_resync()
# Iterate over the set of user IDs.
for user_id in need_resync:
try:
# Try to resync the current user's devices list.
result = yield self.user_device_resync(
user_id=user_id, mark_failed_as_stale=False,
)
# user_device_resync only returns a result if it managed to
# successfully resync and update the database. Updating the table
# of users requiring resync isn't necessary here as
# user_device_resync already does it (through
# self.store.update_remote_device_list_cache).
if result:
logger.debug(
"Successfully resynced the device list for %s", user_id,
)
except Exception as e:
# If there was an issue resyncing this user, e.g. if the remote
# server sent a malformed result, just log the error instead of
# aborting all the subsequent resyncs.
logger.debug(
"Could not resync the device list for %s: %s", user_id, e,
)
finally:
# Allow future calls to retry resyncing out-of-sync device lists.
self._resync_retry_in_progress = False
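The looping call above fires every 30 seconds, and `_maybe_retry_device_resync` protects itself with a simple in-progress flag so overlapping sweeps don't stack up. A self-contained sketch of that guard (the fetch/resync callables are placeholders):

import asyncio

class Resyncer:
    def __init__(self):
        self._in_progress = False

    async def maybe_retry(self, get_stale_users, resync_one):
        # Re-entrancy guard: skip this tick if a sweep is still running.
        if self._in_progress:
            return
        self._in_progress = True
        try:
            for user_id in await get_stale_users():
                try:
                    await resync_one(user_id)
                except Exception:
                    # One bad user shouldn't abort the rest of the sweep.
                    continue
        finally:
            # Always clear the flag so future ticks can run.
            self._in_progress = False

async def _demo():
    r = Resyncer()

    async def get_stale_users():
        return ["@bob:remote.example"]

    async def resync_one(user_id):
        print("resynced", user_id)

    await r.maybe_retry(get_stale_users, resync_one)

asyncio.run(_demo())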
@defer.inlineCallbacks
def user_device_resync(self, user_id, mark_failed_as_stale=True):
"""Fetches all devices for a user and updates the device cache with them.
Args:
user_id (str): The ID of the user whose device list will be updated.
mark_failed_as_stale (bool): Whether to mark the user's device list as stale
if the attempt to resync failed.
Returns:
Deferred[dict]: a dict with device info, as under the "devices" key in the
result of this request:
https://matrix.org/docs/spec/server_server/r0.1.2#get-matrix-federation-v1-user-devices-userid
"""
logger.debug("Attempting to resync the device list for %s", user_id)
log_kv({"message": "Doing resync to update device list."})
# Fetch all devices for the user.
origin = get_domain_from_id(user_id)
try:
result = yield self.federation.query_user_devices(origin, user_id)
except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
# TODO: Remember that we are now out of sync and try again
# later
logger.warning("Failed to handle device list update for %s", user_id)
except NotRetryingDestination:
if mark_failed_as_stale:
# Mark the remote user's device list as stale so we know we need to retry
# it later.
yield self.store.mark_remote_user_device_cache_as_stale(user_id)
return
except (RequestSendFailed, HttpResponseException) as e:
logger.warning(
"Failed to handle device list update for %s: %s", user_id, e,
)
if mark_failed_as_stale:
# Mark the remote user's device list as stale so we know we need to retry
# it later.
yield self.store.mark_remote_user_device_cache_as_stale(user_id)
# We abort on exceptions rather than accepting the update
# as otherwise synapse will 'forget' that its device list
# is out of date. If we bail then we will retry the resync
@@ -711,18 +784,29 @@ class DeviceListUpdater(object):
logger.info(e)
return
except Exception as e:
# TODO: Remember that we are now out of sync and try again
# later
set_tag("error", True)
log_kv(
{"message": "Exception raised by federation request", "exception": e}
)
logger.exception("Failed to handle device list update for %s", user_id)
if mark_failed_as_stale:
# Mark the remote user's device list as stale so we know we need to retry
# it later.
yield self.store.mark_remote_user_device_cache_as_stale(user_id)
return
log_kv({"result": result})
stream_id = result["stream_id"]
devices = result["devices"]
# Get the master key and the self-signing key for this user if provided in the
# response (None if not in the response).
# The response will not contain the user signing key, as this key is only used by
# its owner, thus it doesn't make sense to send it over federation.
master_key = result.get("master_key")
self_signing_key = result.get("self_signing_key")
# If the remote server has more than ~1000 devices for this user
# we assume that something is going horribly wrong (e.g. a bot
# that logs in and creates a new device every time it tries to
@@ -752,6 +836,13 @@ class DeviceListUpdater(object):
yield self.store.update_remote_device_list_cache(user_id, devices, stream_id)
device_ids = [device["device_id"] for device in devices]
# Handle cross-signing keys.
cross_signing_device_ids = yield self.process_cross_signing_key_update(
user_id, master_key, self_signing_key,
)
device_ids = device_ids + cross_signing_device_ids
yield self.device_handler.notify_device_update(user_id, device_ids)
# We clobber the seen updates since we've re-synced from a given
@@ -759,3 +850,40 @@ class DeviceListUpdater(object):
self._seen_updates[user_id] = {stream_id}
defer.returnValue(result)
@defer.inlineCallbacks
def process_cross_signing_key_update(
self,
user_id: str,
master_key: Optional[Dict[str, Any]],
self_signing_key: Optional[Dict[str, Any]],
) -> list:
"""Process the given new master and self-signing key for the given remote user.
Args:
user_id: The ID of the user these keys are for.
master_key: The dict of the cross-signing master key as returned by the
remote server.
self_signing_key: The dict of the cross-signing self-signing key as returned
by the remote server.
Return:
The device IDs for the given keys.
"""
device_ids = []
if master_key:
yield self.store.set_e2e_cross_signing_key(user_id, "master", master_key)
_, verify_key = get_verify_key_from_cross_signing_key(master_key)
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
device_ids.append(verify_key.version)
if self_signing_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "self_signing", self_signing_key
)
_, verify_key = get_verify_key_from_cross_signing_key(self_signing_key)
device_ids.append(verify_key.version)
return device_ids
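As the comments above note, `verify_key.version` is the portion of the key ID after the algorithm and colon, which for cross-signing keys is what gets recorded as a device ID. A small illustration with a hypothetical key dict (shape per the Matrix cross-signing spec, values made up):

master_key = {
    "user_id": "@alice:example.com",
    "usage": ["master"],
    # Key ID is "<algorithm>:<unpadded base64 public key>".
    "keys": {"ed25519:fakeMasterPublicKeyBase64": "fakeMasterPublicKeyBase64"},
}

key_id = next(iter(master_key["keys"]))
algorithm, device_id = key_id.split(":", 1)
assert algorithm == "ed25519"
print(device_id)  # -> fakeMasterPublicKeyBase64, appended to device_ids above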


@@ -1291,6 +1291,7 @@ class SigningKeyEduUpdater(object):
"""
device_handler = self.e2e_keys_handler.device_handler
device_list_updater = device_handler.device_list_updater
with (yield self._remote_edu_linearizer.queue(user_id)):
pending_updates = self._pending_updates.pop(user_id, [])
@@ -1303,22 +1304,9 @@ class SigningKeyEduUpdater(object):
logger.info("pending updates: %r", pending_updates)
for master_key, self_signing_key in pending_updates:
if master_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "master", master_key
)
_, verify_key = get_verify_key_from_cross_signing_key(master_key)
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
device_ids.append(verify_key.version)
if self_signing_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "self_signing", self_signing_key
)
_, verify_key = get_verify_key_from_cross_signing_key(
self_signing_key
)
device_ids.append(verify_key.version)
new_device_ids = yield device_list_updater.process_cross_signing_key_update(
user_id, master_key, self_signing_key,
)
device_ids = device_ids + new_device_ids
yield device_handler.notify_device_update(user_id, device_ids)


@@ -40,6 +40,7 @@ from synapse.api.errors import (
Codes,
FederationDeniedError,
FederationError,
HttpResponseException,
RequestSendFailed,
SynapseError,
)
@@ -126,6 +127,7 @@ class FederationHandler(BaseHandler):
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self._instance_name = hs.get_instance_name()
self._replication = hs.get_replication_data_handler()
self._send_events = ReplicationFederationSendEventsRestServlet.make_client(hs)
self._notify_user_membership_change = ReplicationUserJoinedLeftRoomRestServlet.make_client(
@@ -499,7 +501,7 @@ class FederationHandler(BaseHandler):
min_depth=min_depth,
timeout=60000,
)
except RequestSendFailed as e:
except (RequestSendFailed, HttpResponseException, NotRetryingDestination) as e:
# We failed to get the missing events, but since we need to handle
# the case of `get_missing_events` not returning the necessary
# events anyway, it is safe to simply log the error and continue.
@@ -1035,6 +1037,12 @@ class FederationHandler(BaseHandler):
# TODO: We can probably do something more intelligent here.
return True
except SynapseError as e:
logger.info("Failed to backfill from %s because %s", dom, e)
continue
except HttpResponseException as e:
if 400 <= e.code < 500:
raise e.to_synapse_error()
logger.info("Failed to backfill from %s because %s", dom, e)
continue
except CodeMessageException as e:
@@ -1213,7 +1221,7 @@ class FederationHandler(BaseHandler):
async def do_invite_join(
self, target_hosts: Iterable[str], room_id: str, joinee: str, content: JsonDict
) -> None:
) -> Tuple[str, int]:
""" Attempts to join the `joinee` to the room `room_id` via the
servers contained in `target_hosts`.
@@ -1234,6 +1242,10 @@ class FederationHandler(BaseHandler):
content: The event content to use for the join event.
"""
# TODO: We should be able to call this on workers, but the upgrading of
# room stuff after join currently doesn't work on workers.
assert self.config.worker.worker_app is None
logger.debug("Joining %s to %s", joinee, room_id)
origin, event, room_version_obj = await self._make_and_verify_event(
@@ -1296,15 +1308,23 @@ class FederationHandler(BaseHandler):
room_id=room_id, room_version=room_version_obj,
)
await self._persist_auth_tree(
max_stream_id = await self._persist_auth_tree(
origin, auth_chain, state, event, room_version_obj
)
# We wait here until this instance has seen the events come down
# replication (if we're using replication) as the below uses caches.
#
# TODO: Currently the events stream is written to from master
await self._replication.wait_for_stream_position(
self.config.worker.writers.events, "events", max_stream_id
)
# Check whether this room is the result of an upgrade of a room we already know
# about. If so, migrate over user information
predecessor = await self.store.get_room_predecessor(room_id)
if not predecessor or not isinstance(predecessor.get("room_id"), str):
return
return event.event_id, max_stream_id
old_room_id = predecessor["room_id"]
logger.debug(
"Found predecessor for %s during remote join: %s", room_id, old_room_id
@@ -1317,6 +1337,7 @@ class FederationHandler(BaseHandler):
)
logger.debug("Finished joining %s to %s", joinee, room_id)
return event.event_id, max_stream_id
finally:
room_queue = self.room_queues[room_id]
del self.room_queues[room_id]
@@ -1546,7 +1567,7 @@ class FederationHandler(BaseHandler):
async def do_remotely_reject_invite(
self, target_hosts: Iterable[str], room_id: str, user_id: str, content: JsonDict
) -> EventBase:
) -> Tuple[EventBase, int]:
origin, event, room_version = await self._make_and_verify_event(
target_hosts, room_id, user_id, "leave", content=content
)
@@ -1566,9 +1587,9 @@ class FederationHandler(BaseHandler):
await self.federation_client.send_leave(target_hosts, event)
context = await self.state_handler.compute_event_context(event)
await self.persist_events_and_notify([(event, context)])
stream_id = await self.persist_events_and_notify([(event, context)])
return event
return event, stream_id
async def _make_and_verify_event(
self,
@@ -1880,7 +1901,7 @@ class FederationHandler(BaseHandler):
state: List[EventBase],
event: EventBase,
room_version: RoomVersion,
) -> None:
) -> int:
"""Checks the auth chain is valid (and passes auth checks) for the
state and event. Then persists the auth chain and state atomically.
Persists the event separately. Notifies about the persisted events
@@ -1974,7 +1995,7 @@ class FederationHandler(BaseHandler):
event, old_state=state
)
await self.persist_events_and_notify([(event, new_event_context)])
return await self.persist_events_and_notify([(event, new_event_context)])
async def _prep_event(
self,
@@ -2827,7 +2848,7 @@ class FederationHandler(BaseHandler):
self,
event_and_contexts: Sequence[Tuple[EventBase, EventContext]],
backfilled: bool = False,
) -> None:
) -> int:
"""Persists events and tells the notifier/pushers about them, if
necessary.
@@ -2837,12 +2858,13 @@ class FederationHandler(BaseHandler):
backfilling or not
"""
if self.config.worker.writers.events != self._instance_name:
await self._send_events(
result = await self._send_events(
instance_name=self.config.worker.writers.events,
store=self.store,
event_and_contexts=event_and_contexts,
backfilled=backfilled,
)
return result["max_stream_id"]
else:
max_stream_id = await self.storage.persistence.persist_events(
event_and_contexts, backfilled=backfilled
@@ -2857,6 +2879,8 @@ class FederationHandler(BaseHandler):
for event, _ in event_and_contexts:
await self._notify_persisted_event(event, max_stream_id)
return max_stream_id
async def _notify_persisted_event(
self, event: EventBase, max_stream_id: int
) -> None:
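With this change `persist_events_and_notify` always reports the maximum stream ordering: worker instances forward the events to the configured writer over replication HTTP and read `max_stream_id` out of the response, while the writer persists locally. The dispatch pattern, reduced to a sketch (all names hypothetical):

async def persist(events, instance_name, writer_name, send_to_writer, persist_locally):
    if instance_name != writer_name:
        # Replication HTTP call; the writer replies with its stream position.
        result = await send_to_writer(events)
        return result["max_stream_id"]
    # We are the writer: persist and return the position directly.
    return await persist_locally(events)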


@@ -18,8 +18,6 @@ import logging
from six import iteritems
from twisted.internet import defer
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.types import get_domain_from_id
@@ -92,19 +90,18 @@ class GroupsLocalWorkerHandler(object):
get_group_role = _create_rerouter("get_group_role")
get_group_roles = _create_rerouter("get_group_roles")
@defer.inlineCallbacks
def get_group_summary(self, group_id, requester_user_id):
async def get_group_summary(self, group_id, requester_user_id):
"""Get the group summary for a group.
If the group is remote we check that the users have valid attestations.
"""
if self.is_mine_id(group_id):
res = yield self.groups_server_handler.get_group_summary(
res = await self.groups_server_handler.get_group_summary(
group_id, requester_user_id
)
else:
try:
res = yield self.transport_client.get_group_summary(
res = await self.transport_client.get_group_summary(
get_domain_from_id(group_id), group_id, requester_user_id
)
except HttpResponseException as e:
@@ -122,7 +119,7 @@ class GroupsLocalWorkerHandler(object):
attestation = entry.pop("attestation", {})
try:
if get_domain_from_id(g_user_id) != group_server_name:
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
attestation,
group_id=group_id,
user_id=g_user_id,
@@ -139,19 +136,18 @@ class GroupsLocalWorkerHandler(object):
# Add `is_publicised` flag to indicate whether the user has publicised their
# membership of the group on their profile
result = yield self.store.get_publicised_groups_for_user(requester_user_id)
result = await self.store.get_publicised_groups_for_user(requester_user_id)
is_publicised = group_id in result
res.setdefault("user", {})["is_publicised"] = is_publicised
return res
@defer.inlineCallbacks
def get_users_in_group(self, group_id, requester_user_id):
async def get_users_in_group(self, group_id, requester_user_id):
"""Get users in a group
"""
if self.is_mine_id(group_id):
res = yield self.groups_server_handler.get_users_in_group(
res = await self.groups_server_handler.get_users_in_group(
group_id, requester_user_id
)
return res
@@ -159,7 +155,7 @@ class GroupsLocalWorkerHandler(object):
group_server_name = get_domain_from_id(group_id)
try:
res = yield self.transport_client.get_users_in_group(
res = await self.transport_client.get_users_in_group(
get_domain_from_id(group_id), group_id, requester_user_id
)
except HttpResponseException as e:
@@ -174,7 +170,7 @@ class GroupsLocalWorkerHandler(object):
attestation = entry.pop("attestation", {})
try:
if get_domain_from_id(g_user_id) != group_server_name:
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
attestation,
group_id=group_id,
user_id=g_user_id,
@@ -188,15 +184,13 @@ class GroupsLocalWorkerHandler(object):
return res
@defer.inlineCallbacks
def get_joined_groups(self, user_id):
group_ids = yield self.store.get_joined_groups(user_id)
async def get_joined_groups(self, user_id):
group_ids = await self.store.get_joined_groups(user_id)
return {"groups": group_ids}
@defer.inlineCallbacks
def get_publicised_groups_for_user(self, user_id):
async def get_publicised_groups_for_user(self, user_id):
if self.hs.is_mine_id(user_id):
result = yield self.store.get_publicised_groups_for_user(user_id)
result = await self.store.get_publicised_groups_for_user(user_id)
# Check AS associated groups for this user - this depends on the
# RegExps in the AS registration file (under `users`)
@@ -206,7 +200,7 @@ class GroupsLocalWorkerHandler(object):
return {"groups": result}
else:
try:
bulk_result = yield self.transport_client.bulk_get_publicised_groups(
bulk_result = await self.transport_client.bulk_get_publicised_groups(
get_domain_from_id(user_id), [user_id]
)
except HttpResponseException as e:
@@ -218,8 +212,7 @@ class GroupsLocalWorkerHandler(object):
# TODO: Verify attestations
return {"groups": result}
@defer.inlineCallbacks
def bulk_get_publicised_groups(self, user_ids, proxy=True):
async def bulk_get_publicised_groups(self, user_ids, proxy=True):
destinations = {}
local_users = set()
@@ -236,7 +229,7 @@ class GroupsLocalWorkerHandler(object):
failed_results = []
for destination, dest_user_ids in iteritems(destinations):
try:
r = yield self.transport_client.bulk_get_publicised_groups(
r = await self.transport_client.bulk_get_publicised_groups(
destination, list(dest_user_ids)
)
results.update(r["users"])
@@ -244,7 +237,7 @@ class GroupsLocalWorkerHandler(object):
failed_results.extend(dest_user_ids)
for uid in local_users:
results[uid] = yield self.store.get_publicised_groups_for_user(uid)
results[uid] = await self.store.get_publicised_groups_for_user(uid)
# Check AS associated groups for this user - this depends on the
# RegExps in the AS registration file (under `users`)
@@ -333,12 +326,11 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
@defer.inlineCallbacks
def join_group(self, group_id, user_id, content):
async def join_group(self, group_id, user_id, content):
"""Request to join a group
"""
if self.is_mine_id(group_id):
yield self.groups_server_handler.join_group(group_id, user_id, content)
await self.groups_server_handler.join_group(group_id, user_id, content)
local_attestation = None
remote_attestation = None
else:
@@ -346,7 +338,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
content["attestation"] = local_attestation
try:
res = yield self.transport_client.join_group(
res = await self.transport_client.join_group(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
@@ -356,7 +348,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
@@ -366,7 +358,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id,
user_id,
membership="join",
@@ -379,12 +371,11 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {}
@defer.inlineCallbacks
def accept_invite(self, group_id, user_id, content):
async def accept_invite(self, group_id, user_id, content):
"""Accept an invite to a group
"""
if self.is_mine_id(group_id):
yield self.groups_server_handler.accept_invite(group_id, user_id, content)
await self.groups_server_handler.accept_invite(group_id, user_id, content)
local_attestation = None
remote_attestation = None
else:
@@ -392,7 +383,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
content["attestation"] = local_attestation
try:
res = yield self.transport_client.accept_group_invite(
res = await self.transport_client.accept_group_invite(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
@@ -402,7 +393,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
@@ -412,7 +403,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id,
user_id,
membership="join",
@@ -425,18 +416,17 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {}
@defer.inlineCallbacks
def invite(self, group_id, user_id, requester_user_id, config):
async def invite(self, group_id, user_id, requester_user_id, config):
"""Invite a user to a group
"""
content = {"requester_user_id": requester_user_id, "config": config}
if self.is_mine_id(group_id):
res = yield self.groups_server_handler.invite_to_group(
res = await self.groups_server_handler.invite_to_group(
group_id, user_id, requester_user_id, content
)
else:
try:
res = yield self.transport_client.invite_to_group(
res = await self.transport_client.invite_to_group(
get_domain_from_id(group_id),
group_id,
user_id,
@@ -450,8 +440,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
@defer.inlineCallbacks
def on_invite(self, group_id, user_id, content):
async def on_invite(self, group_id, user_id, content):
"""One of our users were invited to a group
"""
# TODO: Support auto join and rejection
@@ -466,7 +455,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
if "avatar_url" in content["profile"]:
local_profile["avatar_url"] = content["profile"]["avatar_url"]
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id,
user_id,
membership="invite",
@@ -474,7 +463,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
)
self.notifier.on_new_event("groups_key", token, users=[user_id])
try:
user_profile = yield self.profile_handler.get_profile(user_id)
user_profile = await self.profile_handler.get_profile(user_id)
except Exception as e:
logger.warning("No profile for user %s: %s", user_id, e)
user_profile = {}
@@ -516,12 +505,11 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return res
@defer.inlineCallbacks
def user_removed_from_group(self, group_id, user_id, content):
async def user_removed_from_group(self, group_id, user_id, content):
"""One of our users was removed/kicked from a group
"""
# TODO: Check if user in group
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id, user_id, membership="leave"
)
self.notifier.on_new_event("groups_key", token, users=[user_id])
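Throughout this handler the routing decision is the same: if the group is ours, call the in-process groups server handler; otherwise go over federation via the transport client, keyed on the group's domain. Reduced to a sketch (callables hypothetical, domain parsing simplified):

async def route_group_call(group_id, is_mine_id, local_call, remote_call):
    if is_mine_id(group_id):
        return await local_call(group_id)
    # Crude stand-in for get_domain_from_id: "+group:domain" -> "domain".
    domain = group_id.split(":", 1)[1]
    return await remote_call(domain, group_id)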


@@ -25,7 +25,6 @@ from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
from twisted.internet import defer
from twisted.internet.error import TimeoutError
from synapse.api.errors import (
@@ -60,8 +59,7 @@ class IdentityHandler(BaseHandler):
self.federation_http_client = hs.get_http_client()
self.hs = hs
@defer.inlineCallbacks
def threepid_from_creds(self, id_server, creds):
async def threepid_from_creds(self, id_server, creds):
"""
Retrieve and validate a threepid identifier from a "credentials" dictionary against a
given identity server
@@ -97,7 +95,7 @@ class IdentityHandler(BaseHandler):
url = id_server + "/_matrix/identity/api/v1/3pid/getValidated3pid"
try:
data = yield self.http_client.get_json(url, query_params)
data = await self.http_client.get_json(url, query_params)
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
except HttpResponseException as e:
@@ -120,8 +118,7 @@ class IdentityHandler(BaseHandler):
logger.info("%s reported non-validated threepid: %s", id_server, creds)
return None
@defer.inlineCallbacks
def bind_threepid(
async def bind_threepid(
self, client_secret, sid, mxid, id_server, id_access_token=None, use_v2=True
):
"""Bind a 3PID to an identity server
@@ -161,12 +158,12 @@ class IdentityHandler(BaseHandler):
try:
# Use the blacklisting http client as this call is only to identity servers
# provided by a client
data = yield self.blacklisting_http_client.post_json_get_json(
data = await self.blacklisting_http_client.post_json_get_json(
bind_url, bind_data, headers=headers
)
# Remember where we bound the threepid
yield self.store.add_user_bound_threepid(
await self.store.add_user_bound_threepid(
user_id=mxid,
medium=data["medium"],
address=data["address"],
@@ -185,13 +182,12 @@ class IdentityHandler(BaseHandler):
return data
logger.info("Got 404 when POSTing JSON %s, falling back to v1 URL", bind_url)
res = yield self.bind_threepid(
res = await self.bind_threepid(
client_secret, sid, mxid, id_server, id_access_token, use_v2=False
)
return res
@defer.inlineCallbacks
def try_unbind_threepid(self, mxid, threepid):
async def try_unbind_threepid(self, mxid, threepid):
"""Attempt to remove a 3PID from an identity server, or if one is not provided, all
identity servers we know the binding is present on
@@ -211,7 +207,7 @@ class IdentityHandler(BaseHandler):
if threepid.get("id_server"):
id_servers = [threepid["id_server"]]
else:
id_servers = yield self.store.get_id_servers_user_bound(
id_servers = await self.store.get_id_servers_user_bound(
user_id=mxid, medium=threepid["medium"], address=threepid["address"]
)
@@ -221,14 +217,13 @@ class IdentityHandler(BaseHandler):
changed = True
for id_server in id_servers:
changed &= yield self.try_unbind_threepid_with_id_server(
changed &= await self.try_unbind_threepid_with_id_server(
mxid, threepid, id_server
)
return changed
@defer.inlineCallbacks
def try_unbind_threepid_with_id_server(self, mxid, threepid, id_server):
async def try_unbind_threepid_with_id_server(self, mxid, threepid, id_server):
"""Removes a binding from an identity server
Args:
@@ -266,7 +261,7 @@ class IdentityHandler(BaseHandler):
try:
# Use the blacklisting http client as this call is only to identity servers
# provided by a client
yield self.blacklisting_http_client.post_json_get_json(
await self.blacklisting_http_client.post_json_get_json(
url, content, headers
)
changed = True
@@ -281,7 +276,7 @@ class IdentityHandler(BaseHandler):
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
yield self.store.remove_user_bound_threepid(
await self.store.remove_user_bound_threepid(
user_id=mxid,
medium=threepid["medium"],
address=threepid["address"],
@@ -290,8 +285,7 @@ class IdentityHandler(BaseHandler):
return changed
@defer.inlineCallbacks
def send_threepid_validation(
async def send_threepid_validation(
self,
email_address,
client_secret,
@@ -319,7 +313,7 @@ class IdentityHandler(BaseHandler):
"""
# Check that this email/client_secret/send_attempt combo is new or
# greater than what we've seen previously
session = yield self.store.get_threepid_validation_session(
session = await self.store.get_threepid_validation_session(
"email", client_secret, address=email_address, validated=False
)
@@ -353,7 +347,7 @@ class IdentityHandler(BaseHandler):
# Send the mail with the link containing the token, client_secret
# and session_id
try:
yield send_email_func(email_address, token, client_secret, session_id)
await send_email_func(email_address, token, client_secret, session_id)
except Exception:
logger.exception(
"Error sending threepid validation email to %s", email_address
@@ -364,7 +358,7 @@ class IdentityHandler(BaseHandler):
self.hs.clock.time_msec() + self.hs.config.email_validation_token_lifetime
)
yield self.store.start_or_continue_validation_session(
await self.store.start_or_continue_validation_session(
"email",
email_address,
session_id,
@@ -377,8 +371,7 @@ class IdentityHandler(BaseHandler):
return session_id
@defer.inlineCallbacks
def requestEmailToken(
async def requestEmailToken(
self, id_server, email, client_secret, send_attempt, next_link=None
):
"""
@@ -413,7 +406,7 @@ class IdentityHandler(BaseHandler):
)
try:
data = yield self.http_client.post_json_get_json(
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
params,
)
@@ -424,8 +417,7 @@ class IdentityHandler(BaseHandler):
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
@defer.inlineCallbacks
def requestMsisdnToken(
async def requestMsisdnToken(
self,
id_server,
country,
@@ -467,7 +459,7 @@ class IdentityHandler(BaseHandler):
)
try:
data = yield self.http_client.post_json_get_json(
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/msisdn/requestToken",
params,
)
@@ -488,8 +480,7 @@ class IdentityHandler(BaseHandler):
)
return data
@defer.inlineCallbacks
def validate_threepid_session(self, client_secret, sid):
async def validate_threepid_session(self, client_secret, sid):
"""Validates a threepid session with only the client secret and session ID
Tries validating against any configured account_threepid_delegates as well as locally.
@@ -511,12 +502,12 @@ class IdentityHandler(BaseHandler):
# Try to validate as email
if self.hs.config.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
# Ask our delegated email identity server
validation_session = yield self.threepid_from_creds(
validation_session = await self.threepid_from_creds(
self.hs.config.account_threepid_delegate_email, threepid_creds
)
elif self.hs.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
# Get a validated session matching these details
validation_session = yield self.store.get_threepid_validation_session(
validation_session = await self.store.get_threepid_validation_session(
"email", client_secret, sid=sid, validated=True
)
@@ -526,14 +517,13 @@ class IdentityHandler(BaseHandler):
# Try to validate as msisdn
if self.hs.config.account_threepid_delegate_msisdn:
# Ask our delegated msisdn identity server
validation_session = yield self.threepid_from_creds(
validation_session = await self.threepid_from_creds(
self.hs.config.account_threepid_delegate_msisdn, threepid_creds
)
return validation_session
@defer.inlineCallbacks
def proxy_msisdn_submit_token(self, id_server, client_secret, sid, token):
async def proxy_msisdn_submit_token(self, id_server, client_secret, sid, token):
"""Proxy a POST submitToken request to an identity server for verification purposes
Args:
@@ -554,11 +544,9 @@ class IdentityHandler(BaseHandler):
body = {"client_secret": client_secret, "sid": sid, "token": token}
try:
return (
yield self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/msisdn/submitToken",
body,
)
return await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/msisdn/submitToken",
body,
)
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
@@ -566,8 +554,7 @@ class IdentityHandler(BaseHandler):
logger.warning("Error contacting msisdn account_threepid_delegate: %s", e)
raise SynapseError(400, "Error contacting the identity server")
@defer.inlineCallbacks
def lookup_3pid(self, id_server, medium, address, id_access_token=None):
async def lookup_3pid(self, id_server, medium, address, id_access_token=None):
"""Looks up a 3pid in the passed identity server.
Args:
@@ -583,7 +570,7 @@ class IdentityHandler(BaseHandler):
"""
if id_access_token is not None:
try:
results = yield self._lookup_3pid_v2(
results = await self._lookup_3pid_v2(
id_server, id_access_token, medium, address
)
return results
@@ -602,10 +589,9 @@ class IdentityHandler(BaseHandler):
logger.warning("Error when looking up hashing details: %s", e)
return None
return (yield self._lookup_3pid_v1(id_server, medium, address))
return await self._lookup_3pid_v1(id_server, medium, address)
@defer.inlineCallbacks
def _lookup_3pid_v1(self, id_server, medium, address):
async def _lookup_3pid_v1(self, id_server, medium, address):
"""Looks up a 3pid in the passed identity server using v1 lookup.
Args:
@@ -618,7 +604,7 @@ class IdentityHandler(BaseHandler):
str: the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.blacklisting_http_client.get_json(
data = await self.blacklisting_http_client.get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server),
{"medium": medium, "address": address},
)
@@ -626,7 +612,7 @@ class IdentityHandler(BaseHandler):
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
yield self._verify_any_signature(data, id_server)
await self._verify_any_signature(data, id_server)
return data["mxid"]
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
@@ -635,8 +621,7 @@ class IdentityHandler(BaseHandler):
return None
@defer.inlineCallbacks
def _lookup_3pid_v2(self, id_server, id_access_token, medium, address):
async def _lookup_3pid_v2(self, id_server, id_access_token, medium, address):
"""Looks up a 3pid in the passed identity server using v2 lookup.
Args:
@@ -651,7 +636,7 @@ class IdentityHandler(BaseHandler):
"""
# Check what hashing details are supported by this identity server
try:
hash_details = yield self.blacklisting_http_client.get_json(
hash_details = await self.blacklisting_http_client.get_json(
"%s%s/_matrix/identity/v2/hash_details" % (id_server_scheme, id_server),
{"access_token": id_access_token},
)
@@ -718,7 +703,7 @@ class IdentityHandler(BaseHandler):
headers = {"Authorization": create_id_access_token_header(id_access_token)}
try:
lookup_results = yield self.blacklisting_http_client.post_json_get_json(
lookup_results = await self.blacklisting_http_client.post_json_get_json(
"%s%s/_matrix/identity/v2/lookup" % (id_server_scheme, id_server),
{
"addresses": [lookup_value],
@@ -746,13 +731,12 @@ class IdentityHandler(BaseHandler):
mxid = lookup_results["mappings"].get(lookup_value)
return mxid
@defer.inlineCallbacks
def _verify_any_signature(self, data, server_hostname):
async def _verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
try:
key_data = yield self.blacklisting_http_client.get_json(
key_data = await self.blacklisting_http_client.get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s"
% (id_server_scheme, server_hostname, key_name)
)
@@ -771,8 +755,7 @@ class IdentityHandler(BaseHandler):
)
return
@defer.inlineCallbacks
def ask_id_server_for_third_party_invite(
async def ask_id_server_for_third_party_invite(
self,
requester,
id_server,
@@ -845,7 +828,7 @@ class IdentityHandler(BaseHandler):
# Attempt a v2 lookup
url = base_url + "/v2/store-invite"
try:
data = yield self.blacklisting_http_client.post_json_get_json(
data = await self.blacklisting_http_client.post_json_get_json(
url,
invite_config,
{"Authorization": create_id_access_token_header(id_access_token)},
@@ -865,7 +848,7 @@ class IdentityHandler(BaseHandler):
url = base_url + "/api/v1/store-invite"
try:
data = yield self.blacklisting_http_client.post_json_get_json(
data = await self.blacklisting_http_client.post_json_get_json(
url, invite_config
)
except TimeoutError:
@@ -883,7 +866,7 @@ class IdentityHandler(BaseHandler):
# types. This is especially true with old instances of Sydent, see
# https://github.com/matrix-org/sydent/pull/170
try:
data = yield self.blacklisting_http_client.post_urlencoded_get_json(
data = await self.blacklisting_http_client.post_urlencoded_get_json(
url, invite_config
)
except HttpResponseException as e:
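Several of the calls above share a fallback shape: try the v2 identity-server endpoint first and, on a 404, retry against the older v1 URL (as in `bind_threepid` and the store-invite logic). A self-contained sketch of that fallback (exception type and endpoints are stand-ins):

class HttpError(Exception):
    def __init__(self, code):
        super().__init__(code)
        self.code = code

async def store_invite(post_json, base_url, invite_config, headers):
    try:
        return await post_json(base_url + "/v2/store-invite", invite_config, headers)
    except HttpError as e:
        if e.code != 404:
            raise
        # v2 not supported by this identity server; fall back to v1.
        return await post_json(base_url + "/api/v1/store-invite", invite_config, {})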


@@ -15,7 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional
from typing import Optional, Tuple
from six import iteritems, itervalues, string_types
@@ -42,6 +42,7 @@ from synapse.api.errors import (
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.api.urls import ConsentURIBuilder
from synapse.events import EventBase
from synapse.events.validator import EventValidator
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
@@ -361,11 +362,12 @@ class EventCreationHandler(object):
self.profile_handler = hs.get_profile_handler()
self.event_builder_factory = hs.get_event_builder_factory()
self.server_name = hs.hostname
self.ratelimiter = hs.get_ratelimiter()
self.notifier = hs.get_notifier()
self.config = hs.config
self.require_membership_for_aliases = hs.config.require_membership_for_aliases
self._instance_name = hs.get_instance_name()
self._is_event_writer = (
self.config.worker.writers.events == hs.get_instance_name()
)
self.room_invite_state_types = self.hs.config.room_invite_state_types
@@ -485,9 +487,13 @@ class EventCreationHandler(object):
try:
if "displayname" not in content:
content["displayname"] = yield profile.get_displayname(target)
displayname = yield profile.get_displayname(target)
if displayname is not None:
content["displayname"] = displayname
if "avatar_url" not in content:
content["avatar_url"] = yield profile.get_avatar_url(target)
avatar_url = yield profile.get_avatar_url(target)
if avatar_url is not None:
content["avatar_url"] = avatar_url
except Exception as e:
logger.info(
"Failed to get profile information for %r: %s", target, e
@@ -627,7 +633,9 @@ class EventCreationHandler(object):
msg = self._block_events_without_consent_error % {"consent_uri": consent_uri}
raise ConsentNotGivenError(msg=msg, consent_uri=consent_uri)
async def send_nonmember_event(self, requester, event, context, ratelimit=True):
async def send_nonmember_event(
self, requester, event, context, ratelimit=True
) -> int:
"""
Persists and notifies local clients and federation of an event.
@@ -636,6 +644,9 @@ class EventCreationHandler(object):
context (Context) the context of the event.
ratelimit (bool): Whether to rate limit this send.
is_guest (bool): Whether the sender is a guest.
Return:
The stream_id of the persisted event.
"""
if event.type == EventTypes.Member:
raise SynapseError(
@@ -656,7 +667,7 @@ class EventCreationHandler(object):
)
return prev_state
await self.handle_new_client_event(
return await self.handle_new_client_event(
requester=requester, event=event, context=context, ratelimit=ratelimit
)
@@ -685,7 +696,7 @@ class EventCreationHandler(object):
async def create_and_send_nonmember_event(
self, requester, event_dict, ratelimit=True, txn_id=None
):
) -> Tuple[EventBase, int]:
"""
Creates an event, then sends it.
@@ -708,10 +719,10 @@ class EventCreationHandler(object):
spam_error = "Spam is not permitted here"
raise SynapseError(403, spam_error, Codes.FORBIDDEN)
await self.send_nonmember_event(
stream_id = await self.send_nonmember_event(
requester, event, context, ratelimit=ratelimit
)
return event
return event, stream_id
@measure_func("create_new_client_event")
@defer.inlineCallbacks
@@ -771,7 +782,7 @@ class EventCreationHandler(object):
@measure_func("handle_new_client_event")
async def handle_new_client_event(
self, requester, event, context, ratelimit=True, extra_users=[]
):
) -> int:
"""Processes a new event. This includes checking auth, persisting it,
notifying users, sending to remote servers, etc.
@@ -784,6 +795,9 @@ class EventCreationHandler(object):
context (EventContext)
ratelimit (bool)
extra_users (list(UserID)): Any extra users to notify about event
Return:
The stream_id of the persisted event.
"""
if event.is_state() and (event.type, event.state_key) == (
@@ -823,8 +837,8 @@ class EventCreationHandler(object):
success = False
try:
# If we're a worker we need to hit out to the master.
if self.config.worker.writers.events != self._instance_name:
await self.send_event(
if not self._is_event_writer:
result = await self.send_event(
instance_name=self.config.worker.writers.events,
event_id=event.event_id,
store=self.store,
@@ -834,14 +848,17 @@ class EventCreationHandler(object):
ratelimit=ratelimit,
extra_users=extra_users,
)
stream_id = result["stream_id"]
event.internal_metadata.stream_ordering = stream_id
success = True
return
return stream_id
await self.persist_and_notify_client_event(
stream_id = await self.persist_and_notify_client_event(
requester, event, context, ratelimit=ratelimit, extra_users=extra_users
)
success = True
return stream_id
finally:
if not success:
# Ensure that we actually remove the entries in the push actions
@@ -884,13 +901,13 @@ class EventCreationHandler(object):
async def persist_and_notify_client_event(
self, requester, event, context, ratelimit=True, extra_users=[]
):
) -> int:
"""Called when we have fully built the event, have already
calculated the push actions for the event, and checked auth.
This should only be run on master.
This should only be run on the instance in charge of persisting events.
"""
assert self.config.worker.writers.events == self._instance_name
assert self._is_event_writer
if ratelimit:
# We check if this is a room admin redacting an event so that we
@@ -1074,6 +1091,8 @@ class EventCreationHandler(object):
# matters as sometimes presence code can take a while.
run_in_background(self._bump_active_time, requester.user)
return event_stream_id
async def _bump_active_time(self, user):
try:
presence = self.hs.get_presence_handler()


@@ -37,6 +37,7 @@ from twisted.web.client import readBody
from synapse.config import ConfigError
from synapse.http.server import finish_request
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.push.mailer import load_jinja2_templates
from synapse.server import HomeServer
from synapse.types import UserID, map_username_to_mxid_localpart
@@ -99,7 +100,6 @@ class OidcHandler:
hs.config.oidc_client_auth_method,
) # type: ClientAuth
self._client_auth_method = hs.config.oidc_client_auth_method # type: str
self._subject_claim = hs.config.oidc_subject_claim
self._provider_metadata = OpenIDProviderMetadata(
issuer=hs.config.oidc_issuer,
authorization_endpoint=hs.config.oidc_authorization_endpoint,
@@ -310,8 +310,12 @@ class OidcHandler:
received in the callback to exchange it for a token. The call uses the
``ClientAuth`` to authenticate with the client with its ID and secret.
See:
https://tools.ietf.org/html/rfc6749#section-3.2
https://openid.net/specs/openid-connect-core-1_0.html#TokenEndpoint
Args:
code: The autorization code we got from the callback.
code: The authorization code we got from the callback.
Returns:
A dict containing various tokens.
@@ -362,7 +366,7 @@ class OidcHandler:
code=response.code, phrase=response.phrase.decode("utf-8")
)
resp_body = await readBody(response)
resp_body = await make_deferred_yieldable(readBody(response))
if response.code >= 500:
# In case of a server error, we should first try to decode the body
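The `make_deferred_yieldable` change is about Synapse's logcontext rules: `readBody` returns a plain Twisted `Deferred` that knows nothing about logcontexts, so awaiting it directly would leak the current context. Wrapping it parks the context while the `Deferred` is pending and restores it when it fires. A minimal sketch of the pattern, assuming a Twisted `Agent`:

from twisted.web.client import Agent, readBody

from synapse.logging.context import make_deferred_yieldable


async def fetch_body(agent: Agent, uri: bytes) -> bytes:
    # Both agent.request() and readBody() return plain Twisted Deferreds
    # that do not follow Synapse's logcontext rules; wrapping them in
    # make_deferred_yieldable parks the current logcontext while we wait
    # and restores it when the result arrives.
    response = await make_deferred_yieldable(agent.request(b"GET", uri))
    return await make_deferred_yieldable(readBody(response))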
@@ -484,6 +488,7 @@ class OidcHandler:
claims_params=claims_params,
)
except ValueError:
logger.info("Reloading JWKS after decode error")
jwk_set = await self.load_jwks(force=True) # try reloading the jwks
claims = jwt.decode(
token["id_token"],
@@ -497,11 +502,14 @@ class OidcHandler:
return UserInfo(claims)
async def handle_redirect_request(
self, request: SynapseRequest, client_redirect_url: bytes
) -> None:
self,
request: SynapseRequest,
client_redirect_url: bytes,
ui_auth_session_id: Optional[str] = None,
) -> str:
"""Handle an incoming request to /login/sso/redirect
It redirects the browser to the authorization endpoint with a few
It returns a redirect to the authorization endpoint with a few
parameters:
- ``client_id``: the client ID set in ``oidc_config.client_id``
@@ -511,24 +519,32 @@ class OidcHandler:
- ``state``: a random string
- ``nonce``: a random string
In addition to redirecting the client, we are setting a cookie with
In addition to generating a redirect URL, we are setting a cookie with
a signed macaroon token containing the state, the nonce and the
client_redirect_url params. Those are then checked when the client
comes back from the provider.
Args:
request: the incoming request from the browser.
We'll respond to it with a redirect and a cookie.
client_redirect_url: the URL that we should redirect the client to
when everything is done
ui_auth_session_id: The session ID of the ongoing UI Auth (or
None if this is a login).
Returns:
The redirect URL to the authorization endpoint.
"""
state = generate_token()
nonce = generate_token()
cookie = self._generate_oidc_session_token(
state=state, nonce=nonce, client_redirect_url=client_redirect_url.decode(),
state=state,
nonce=nonce,
client_redirect_url=client_redirect_url.decode(),
ui_auth_session_id=ui_auth_session_id,
)
request.addCookie(
SESSION_COOKIE_NAME,
@@ -541,7 +557,7 @@ class OidcHandler:
metadata = await self.load_metadata()
authorization_endpoint = metadata.get("authorization_endpoint")
uri = prepare_grant_uri(
return prepare_grant_uri(
authorization_endpoint,
client_id=self._client_auth.client_id,
response_type="code",
@@ -550,8 +566,6 @@ class OidcHandler:
state=state,
nonce=nonce,
)
request.redirect(uri)
finish_request(request)
async def handle_oidc_callback(self, request: SynapseRequest) -> None:
"""Handle an incoming request to /_synapse/oidc/callback
@@ -583,6 +597,9 @@ class OidcHandler:
# The provider might redirect with an error.
# In that case, just display it as-is.
if b"error" in request.args:
# error response from the auth server. see:
# https://tools.ietf.org/html/rfc6749#section-4.1.2.1
# https://openid.net/specs/openid-connect-core-1_0.html#AuthError
error = request.args[b"error"][0].decode()
description = request.args.get(b"error_description", [b""])[0].decode()
@@ -596,8 +613,11 @@ class OidcHandler:
self._render_error(request, error, description)
return
# otherwise, it is presumably a successful response. see:
# https://tools.ietf.org/html/rfc6749#section-4.1.2
# Fetch the session cookie
session = request.getCookie(SESSION_COOKIE_NAME)
session = request.getCookie(SESSION_COOKIE_NAME) # type: Optional[bytes]
if session is None:
logger.info("No session cookie found")
self._render_error(request, "missing_session", "No session cookie found")
@@ -625,7 +645,11 @@ class OidcHandler:
# Deserialize the session token and verify it.
try:
nonce, client_redirect_url = self._verify_oidc_session_token(session, state)
(
nonce,
client_redirect_url,
ui_auth_session_id,
) = self._verify_oidc_session_token(session, state)
except MacaroonDeserializationException as e:
logger.exception("Invalid session")
self._render_error(request, "invalid_session", str(e))
@@ -641,7 +665,7 @@ class OidcHandler:
self._render_error(request, "invalid_request", "Code parameter is missing")
return
logger.info("Exchanging code")
logger.debug("Exchanging code")
code = request.args[b"code"][0].decode()
try:
token = await self._exchange_code(code)
@@ -650,10 +674,12 @@ class OidcHandler:
self._render_error(request, e.error, e.error_description)
return
logger.debug("Successfully obtained OAuth2 access token")
# Now that we have a token, get the userinfo, either by decoding the
# `id_token` or by fetching the `userinfo_endpoint`.
if self._uses_userinfo:
logger.info("Fetching userinfo")
logger.debug("Fetching userinfo")
try:
userinfo = await self._fetch_userinfo(token)
except Exception as e:
@@ -661,7 +687,7 @@ class OidcHandler:
self._render_error(request, "fetch_error", str(e))
return
else:
logger.info("Extracting userinfo from id_token")
logger.debug("Extracting userinfo from id_token")
try:
userinfo = await self._parse_id_token(token, nonce=nonce)
except Exception as e:
@@ -678,15 +704,21 @@ class OidcHandler:
return
# and finally complete the login
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url
)
if ui_auth_session_id:
await self._auth_handler.complete_sso_ui_auth(
user_id, ui_auth_session_id, request
)
else:
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url
)
def _generate_oidc_session_token(
self,
state: str,
nonce: str,
client_redirect_url: str,
ui_auth_session_id: Optional[str],
duration_in_ms: int = (60 * 60 * 1000),
) -> str:
"""Generates a signed token storing data about an OIDC session.
@@ -702,6 +734,8 @@ class OidcHandler:
nonce: The ``nonce`` parameter passed to the OIDC provider.
client_redirect_url: The URL the client gave when it initiated the
flow.
ui_auth_session_id: The session ID of the ongoing UI Auth (or
None if this is a login).
duration_in_ms: An optional duration for the token in milliseconds.
Defaults to an hour.
@@ -718,12 +752,19 @@ class OidcHandler:
macaroon.add_first_party_caveat(
"client_redirect_url = %s" % (client_redirect_url,)
)
if ui_auth_session_id:
macaroon.add_first_party_caveat(
"ui_auth_session_id = %s" % (ui_auth_session_id,)
)
now = self._clock.time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
def _verify_oidc_session_token(self, session: str, state: str) -> Tuple[str, str]:
def _verify_oidc_session_token(
self, session: bytes, state: str
) -> Tuple[str, str, Optional[str]]:
"""Verifies and extract an OIDC session token.
This verifies that a given session token was issued by this homeserver
@@ -734,7 +775,7 @@ class OidcHandler:
state: The state the OIDC provider gave back
Returns:
The nonce and the client_redirect_url for this session
The nonce, client_redirect_url, and ui_auth_session_id for this session
"""
macaroon = pymacaroons.Macaroon.deserialize(session)
@@ -744,17 +785,27 @@ class OidcHandler:
v.satisfy_exact("state = %s" % (state,))
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.satisfy_general(lambda c: c.startswith("client_redirect_url = "))
# There may or may not be a UI auth session ID caveat; it is safe to
# register this satisfier unconditionally, as an absent caveat is
# simply never checked.
v.satisfy_general(lambda c: c.startswith("ui_auth_session_id = "))
v.satisfy_general(self._verify_expiry)
v.verify(macaroon, self._macaroon_secret_key)
# Extract the `nonce` and `client_redirect_url` from the token
# Extract the `nonce`, `client_redirect_url`, and maybe the
# `ui_auth_session_id` from the token.
nonce = self._get_value_from_macaroon(macaroon, "nonce")
client_redirect_url = self._get_value_from_macaroon(
macaroon, "client_redirect_url"
)
try:
ui_auth_session_id = self._get_value_from_macaroon(
macaroon, "ui_auth_session_id"
) # type: Optional[str]
except ValueError:
ui_auth_session_id = None
return nonce, client_redirect_url
return nonce, client_redirect_url, ui_auth_session_id
def _get_value_from_macaroon(self, macaroon: pymacaroons.Macaroon, key: str) -> str:
"""Extracts a caveat value from a macaroon token.
@@ -773,7 +824,7 @@ class OidcHandler:
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(prefix):
return caveat.caveat_id[len(prefix) :]
raise Exception("No %s caveat in macaroon" % (key,))
raise ValueError("No %s caveat in macaroon" % (key,))
def _verify_expiry(self, caveat: str) -> bool:
prefix = "time < "

View File

@@ -193,6 +193,12 @@ class BasePresenceHandler(abc.ABC):
) -> None:
"""Set the presence state of the user. """
@abc.abstractmethod
async def bump_presence_active_time(self, user: UserID):
"""We've seen the user do something that indicates they're interacting
with the app.
"""
class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "synapse.server.HomeServer"):
@@ -204,6 +210,7 @@ class PresenceHandler(BasePresenceHandler):
self.notifier = hs.get_notifier()
self.federation = hs.get_federation_sender()
self.state = hs.get_state_handler()
self._presence_enabled = hs.config.use_presence
federation_registry = hs.get_federation_registry()
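The new abstract `bump_presence_active_time` fixes the interface that callers such as `EventCreationHandler._bump_active_time` program against; each concrete presence handler must supply an awaitable implementation. A hypothetical minimal subclass, purely to illustrate the contract (not part of Synapse):

import abc


class BasePresenceHandler(abc.ABC):
    @abc.abstractmethod
    async def bump_presence_active_time(self, user):
        """We've seen the user do something that indicates they're
        interacting with the app.
        """


class NoopPresenceHandler(BasePresenceHandler):
    # Hypothetical: a handler that discards activity reports, e.g. for a
    # deployment with presence disabled.
    async def bump_presence_active_time(self, user):
        return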
@@ -676,13 +683,14 @@ class PresenceHandler(BasePresenceHandler):
async def incoming_presence(self, origin, content):
"""Called when we receive a `m.presence` EDU from a remote server.
"""
if not self._presence_enabled:
return
now = self.clock.time_msec()
updates = []
for push in content.get("push", []):
# A "push" contains a list of presence that we are probably interested
# in.
# TODO: Actually check if we're interested, rather than blindly
# accepting presence updates.
user_id = push.get("user_id", None)
if not user_id:
logger.info(

Some files were not shown because too many files have changed in this diff.