
Compare commits


451 Commits

Author SHA1 Message Date
Michael Kaye
288d1a6aea Remove unneeded warning 2021-03-03 10:44:41 +00:00
Richard van der Hoff
fdbccc1e74 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-26 14:05:40 +00:00
Richard van der Hoff
0e56f02d5d Revert "Redirect redirect requests if they arrive on the wrong URI"
This reverts commit 5ee8a1c50a.

This has now been superseded on develop by PR #9436.
2021-02-26 14:05:00 +00:00
Richard van der Hoff
c7934aee2c Revert "more login hacking"
This reverts commit 47d2b49e2b.

This has now been superseded on develop by PR #9472.
2021-02-26 14:04:05 +00:00
Erik Johnston
5d405f7e7a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-22 12:55:32 +00:00
Erik Johnston
5054eb291e Merge remote-tracking branch 'origin/release-v1.28.0' into matrix-org-hotfixes 2021-02-19 10:06:01 +00:00
Richard van der Hoff
47d2b49e2b more login hacking 2021-02-18 14:29:48 +00:00
Richard van der Hoff
1f507c2515 Merge branch 'rav/fix_cookie_path' into matrix-org-hotfixes
Merge the cookie fix to hotfixes
2021-02-18 14:03:43 +00:00
Richard van der Hoff
5ee8a1c50a Redirect redirect requests if they arrive on the wrong URI 2021-02-18 14:01:23 +00:00
Richard van der Hoff
7b7831bb63 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-17 16:31:57 +00:00
Erik Johnston
a4aa56a0eb Ensure that we never stop reconnecting to redis (#9391) 2021-02-11 17:39:56 +00:00
Patrick Cloke
fa0f99e4f2 Merge branch 'release-v1.27.0' into matrix-org-hotfixes 2021-02-11 11:30:16 -05:00
Richard van der Hoff
844b3e3f65 Revert "block groups requests to fosdem"
This reverts commit 3f6530ed55.
2021-02-06 12:03:46 +00:00
Richard van der Hoff
3f6530ed55 block groups requests to fosdem 2021-02-06 11:04:32 +00:00
Erik Johnston
25757a3d47 Merge branch 'erikj/media_spam_checker' into matrix-org-hotfixes 2021-02-05 10:13:55 +00:00
Erik Johnston
6e774373c2 Merge remote-tracking branch 'origin/release-v1.27.0' into matrix-org-hotfixes 2021-02-02 16:06:59 +00:00
Erik Johnston
512e313f18 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-26 14:15:26 +00:00
Patrick Cloke
a574751a87 Merge remote-tracking branch 'origin/release-v1.26.0' into matrix-org-hotfixes 2021-01-25 08:07:39 -05:00
Erik Johnston
bde75f5f66 Merge remote-tracking branch 'origin/release-v1.26.0' into matrix-org-hotfixes 2021-01-21 16:05:34 +00:00
Erik Johnston
e33124a642 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-20 10:45:19 +00:00
Erik Johnston
bed4fa29fd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-19 10:19:25 +00:00
Erik Johnston
f5ab7d8306 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-18 11:14:37 +00:00
Erik Johnston
029c9ef967 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-15 14:05:55 +00:00
Erik Johnston
e6b27b480c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-14 17:39:13 +00:00
Erik Johnston
43dc637136 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-14 15:29:29 +00:00
Erik Johnston
00c62b9d07 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-08 11:18:20 +00:00
Erik Johnston
82a91208d6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-07 13:04:45 +00:00
Erik Johnston
91fd180be1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-07 10:35:04 +00:00
Patrick Cloke
fb4a4f9f15 Merge branch 'release-v1.25.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-01-05 12:12:07 -05:00
Richard van der Hoff
5a4f09228d Remove cache from room directory query results
This reverts a285fe0. Hopefully the cache is no longer required, thanks to
2021-01-05 13:52:36 +00:00
Richard van der Hoff
97d12dcf56 Merge remote-tracking branch 'origin/release-v1.25.0' into matrix-org-hotfixes 2021-01-05 11:32:29 +00:00
Patrick Cloke
f4f65f4e99 Allow redacting events on workers (#8994)
Adds the redacts endpoint to workers that have the client listener.
2020-12-29 11:06:10 -05:00
Patrick Cloke
863359a04f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-12-15 11:26:13 -05:00
Patrick Cloke
33a349df91 Merge branch 'develop' into matrix-org-hotfixes 2020-12-15 08:23:14 -05:00
Patrick Cloke
a41b1dc49f Merge branch 'release-v1.24.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-12-04 09:03:12 -05:00
Patrick Cloke
16744644f6 Merge branch 'release-v1.24.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-12-02 08:40:21 -05:00
Erik Johnston
dbf46f3891 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-11-27 10:25:17 +00:00
Erik Johnston
52984e9e69 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-11-13 12:05:55 +00:00
Richard van der Hoff
ce2107eee1 Merge branch 'rav/fix_sighup' into matrix-org-hotfixes 2020-10-31 10:54:23 +00:00
Richard van der Hoff
8373e6254f Fix SIGHUP handler
Fixes:

```
builtins.TypeError: _reload_logging_config() takes 1 positional argument but 2 were given
```
2020-10-31 10:53:12 +00:00
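The traceback quoted above is the classic signal-handler signature mismatch: Python delivers signals as `handler(signum, frame)`, so a one-argument callback blows up when the signal actually fires. A minimal sketch of the failure mode and a wrapper-style fix (the handler body and `hs` argument here are invented for illustration, not the actual Synapse code):

```
import signal

def _reload_logging_config(hs):
    # Stand-in for the real reload logic; `hs` is a placeholder argument.
    print("reloading logging config for", hs)

# Buggy: signal handlers are invoked as handler(signum, frame), so
# registering the one-argument callback directly raises the TypeError
# quoted above when SIGHUP arrives:
# signal.signal(signal.SIGHUP, _reload_logging_config)

# Fixed: adapt the callback to the (signum, frame) signature.
signal.signal(signal.SIGHUP, lambda signum, frame: _reload_logging_config("hs"))

signal.raise_signal(signal.SIGHUP)  # Python 3.8+: trigger the handler to test it
```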
Erik Johnston
1ff3bc332a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-30 12:08:09 +00:00
Andrew Morgan
172ddb3b45 Merge branch 'develop' into matrix-org-hotfixes
* develop:
  Don't unnecessarily start bg process in replication sending loop. (#8670)
  Don't unnecessarily start bg process while handling typing. (#8668)
2020-10-28 12:14:03 +00:00
Andrew Morgan
d60af9305a Patch to temporarily drop cross-user m.key_share_requests (#8675)
Cross-user `m.key_share_requests` are a relatively new kind of `to_device` message that allows users to re-request session keys for a message from another user if they were otherwise unable to retrieve them.

Unfortunately, these have had performance concerns on matrix.org. This is a temporary patch to disable them while we investigate a better solution.
2020-10-28 11:58:47 +00:00
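A rough sketch of the kind of temporary filter this describes, not the actual patch; the message shape (`sender`, `target_user_id` fields) is invented for illustration, though `m.room_key_request` is the to-device event type behind key share requests:

```
KEY_REQUEST_TYPE = "m.room_key_request"  # the to_device type behind key share requests

def drop_cross_user_key_requests(messages):
    """Keep everything except key requests sent between *different* users."""
    return [
        msg
        for msg in messages
        if not (
            msg["type"] == KEY_REQUEST_TYPE
            and msg["sender"] != msg["target_user_id"]
        )
    ]
```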
Erik Johnston
bcb6b243e9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-27 14:13:14 +00:00
Erik Johnston
32457baa40 Merge branch 'release-v1.22.0' into matrix-org-hotfixes 2020-10-26 15:03:36 +00:00
Erik Johnston
ab4cd7f802 Merge remote-tracking branch 'origin/release-v1.21.3' into matrix-org-hotfixes 2020-10-22 09:57:06 +01:00
Erik Johnston
e9b5e642c3 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-10-16 11:34:53 +01:00
Erik Johnston
9250ee8650 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-14 13:32:07 +01:00
Richard van der Hoff
bdbe2b12c2 Revert "block membership events from spammy freenode bridge"
This reverts commit cd2f831b9d.
2020-10-13 17:10:45 +01:00
Erik Johnston
43bcb1e54e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-13 13:29:50 +01:00
Richard van der Hoff
cd2f831b9d block membership events from spammy freenode bridge 2020-10-12 19:09:30 +01:00
Erik Johnston
4b43332131 Merge remote-tracking branch 'origin/release-v1.21.0' into matrix-org-hotfixes 2020-10-07 17:09:29 +01:00
Richard van der Hoff
77daff166d Merge remote-tracking branch 'origin/release-v1.21.0' into matrix-org-hotfixes 2020-10-02 12:32:26 +01:00
Richard van der Hoff
5ccc0785c1 Revert "fix remote thumbnails?"
This has now been fixed by a different commit (73d93039f).

This reverts commit b0a463f758.
2020-10-02 12:30:49 +01:00
Richard van der Hoff
b0a463f758 fix remote thumbnails? 2020-10-01 15:53:18 +01:00
Richard van der Hoff
8a8d01d732 Merge branch 'develop' into matrix-org-hotfixes 2020-10-01 15:07:33 +01:00
Richard van der Hoff
1c22954668 Revert "Temporary fix to ensure kde can contact matrix.org if stuff breaks"
This reverts commit d90b0946ed.

We believe this is no longer required.
2020-10-01 12:10:55 +01:00
Richard van der Hoff
e675bbcc49 Remove redundant EventCreationHandler._is_worker_app attribute
This was added in 1c347c84bf/#7544 as a temporary optimisation. That was never
merged to develop, since it conflicted with #7492. The merge cf92310da forgot
to remove it.
2020-10-01 11:51:57 +01:00
Richard van der Hoff
607367aeb1 Fix typo in comment
I think this came from a bad merge
2020-10-01 11:43:16 +01:00
Richard van der Hoff
ac6c5f198e Remove dangling changelog.d files
These result from PRs which were cherry-picked from release branches.
2020-10-01 11:31:07 +01:00
Richard van der Hoff
db13a8607e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-01 11:22:36 +01:00
Richard van der Hoff
cfb3096e33 Revert federation-transaction-transmission backoff hacks
This reverts b852a8247, 15b2a5081, 28889d8da.

I don't think these patches are required any more, and if they are, they should
be on mainline, not hidden in our hotfixes branch. Let's try backing them out:
if that turns out to be an error, we can PR them properly.
2020-10-01 11:22:19 +01:00
Erik Johnston
7b6f857aa9 Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-22 10:11:01 +01:00
Erik Johnston
9eea5c43af Intelligently select extremities used in backfill. (#8349)
Instead of just using the most recent extremities, let's pick the
ones that will give us results that the pagination request cares about,
i.e. pick extremities only if they have a smaller depth than the
pagination token.

This is useful when we fail to backfill an extremity, as we no longer
get stuck requesting that same extremity repeatedly.
2020-09-18 15:07:36 +01:00
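A toy version of the selection rule described above (the `(event_id, depth)` tuple shape and the `limit` parameter are illustrative, not Synapse's actual data model):

```
def pick_backfill_extremities(extremities, pagination_depth, limit=5):
    """extremities: iterable of (event_id, depth) pairs."""
    # Only consider extremities behind the point the client is paginating
    # from, so an extremity we keep failing to backfill stops blocking us.
    candidates = [e for e in extremities if e[1] < pagination_depth]
    # Deepest first: closest to the events the /messages request wants.
    candidates.sort(key=lambda e: e[1], reverse=True)
    return candidates[:limit]

# e.g. pick_backfill_extremities([("$a", 90), ("$b", 120), ("$c", 70)], 100)
# -> [("$a", 90), ("$c", 70)]
```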
Andrew Morgan
104c490274 Use _check_sigs_and_hash_and_fetch to validate backfill requests (#8350)
This is a bit of a hack, as `_check_sigs_and_hash_and_fetch` is intended
for attempting to pull an event from the database/(re)pull it from the
server that originally sent the event if checking the signature of the
event fails.

During backfill we *know* that we won't have the event in our database,
however it is still useful to be able to query the original sending
server as the server we're backfilling from may be acting maliciously.

The main benefit and reason for this change, however, is that
`_check_sigs_and_hash_and_fetch` will drop an event during backfill if
it cannot be successfully validated, whereas the current code will
simply fail the backfill request - resulting in the client's /messages
request silently being dropped.

This is a quick patch to fix backfilling rooms that contain malformed
events. A better implementation is planned for the future.
2020-09-18 15:07:33 +01:00
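The behavioural change, sketched with a stand-in `check_event` for the real signature/hash validation (the error type and function shapes are assumptions):

```
def validate_backfilled_events(raw_events, check_event):
    """Return the events that pass validation, dropping the rest."""
    valid = []
    for event in raw_events:
        try:
            check_event(event)  # stand-in: verify signatures and content hash
        except ValueError:
            # Previously a single bad event failed the whole backfill request;
            # now a malformed event is simply skipped.
            continue
        valid.append(event)
    return valid
```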
Patrick Cloke
bbb7ca1f15 Merge remote-tracking branch 'origin/release-v1.19.2' into matrix-org-hotfixes 2020-09-16 08:21:05 -04:00
Patrick Cloke
27ef82d972 Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-11 07:34:53 -04:00
Richard van der Hoff
9df3a8a19f Merge branch 'release-v1.20.0' into matrix-org-hotfixes 2020-09-09 16:59:10 +01:00
Richard van der Hoff
5c4b13cd8f Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-07 17:00:02 +01:00
Richard van der Hoff
d74e8f2875 Merge branch 'release-v1.20.0' into matrix-org-hotfixes 2020-09-07 13:44:54 +01:00
Brendan Abolivier
cc23d81a74 Merge branch 'develop' into matrix-org-hotfixes 2020-09-04 11:02:10 +01:00
Brendan Abolivier
505ea932f5 Merge branch 'develop' into matrix-org-hotfixes 2020-09-03 15:30:00 +01:00
Richard van der Hoff
5f224a4794 Merge branch 'develop' into matrix-org-hotfixes 2020-08-28 15:59:57 +01:00
Patrick Cloke
3f488bfded Merge branch 'develop' into matrix-org-hotfixes 2020-08-27 10:16:21 -04:00
Richard van der Hoff
b4c1cfacc2 Merge branch 'develop' into matrix-org-hotfixes 2020-08-18 18:20:01 +01:00
Richard van der Hoff
afe4c4e02e Merge branch 'develop' into matrix-org-hotfixes 2020-08-18 18:13:47 +01:00
Brendan Abolivier
527f73d902 Merge branch 'develop' into matrix-org-hotfixes 2020-08-13 11:45:08 +01:00
Richard van der Hoff
82fec809a5 Merge branch 'develop' into matrix-org-hotfixes 2020-07-31 10:30:05 +01:00
Richard van der Hoff
b2ccc72a00 Merge branch 'release-v1.18.0' into matrix-org-hotfixes 2020-07-28 10:15:22 +01:00
Richard van der Hoff
be777e325d Merge branch 'develop' into matrix-org-hotfixes 2020-07-24 09:57:49 +01:00
Richard van der Hoff
25880bd441 Merge branch 'develop' into matrix-org-hotfixes 2020-07-09 12:49:39 +01:00
Richard van der Hoff
cc86fbc9ad Merge branch 'develop' into matrix-org-hotfixes 2020-07-09 11:06:52 +01:00
Patrick Cloke
bd30967bd7 Merge branch 'release-v1.15.2' into matrix-org-hotfixes 2020-07-02 10:08:07 -04:00
Andrew Morgan
8fed03aa3e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-07-01 11:12:28 +01:00
Andrew Morgan
ba66e3dfef Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-07-01 10:46:06 +01:00
Erik Johnston
199ab854d6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-06-26 11:08:10 +01:00
Erik Johnston
c16bb06d25 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-06-25 09:39:01 +01:00
Erik Johnston
d06f4ab693 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-17 16:32:39 +01:00
Erik Johnston
8ba1086801 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-17 14:38:15 +01:00
Brendan Abolivier
fea4b1d6ad Merge branch 'release-v1.15.1' into matrix-org-hotfixes 2020-06-16 11:29:33 +01:00
Brendan Abolivier
ae91d50100 Merge branch 'release-v1.15.1' into matrix-org-hotfixes 2020-06-16 10:16:44 +01:00
Brendan Abolivier
0d29112624 Merge branch 'release-v1.15.0' into matrix-org-hotfixes 2020-06-11 13:43:55 +01:00
Brendan Abolivier
d6c7550cf5 Merge tag 'v1.15.0rc1' of github.com:matrix-org/synapse into matrix-org-hotfixes
Synapse 1.15.0rc1 (2020-06-09)
==============================

Features
--------

- Advertise support for Client-Server API r0.6.0 and remove related unstable feature flags. ([\#6585](https://github.com/matrix-org/synapse/issues/6585))
- Add an option to disable autojoining rooms for guest accounts. ([\#6637](https://github.com/matrix-org/synapse/issues/6637))
- For SAML authentication, add the ability to pass email addresses to be added to new users' accounts via SAML attributes. Contributed by Christopher Cooper. ([\#7385](https://github.com/matrix-org/synapse/issues/7385))
- Add admin APIs to allow server admins to manage users' devices. Contributed by @dklimpel. ([\#7481](https://github.com/matrix-org/synapse/issues/7481))
- Add support for generating thumbnails for WebP images. Previously, users would see an empty box instead of a preview image. ([\#7586](https://github.com/matrix-org/synapse/issues/7586))
- Support the standardized `m.login.sso` user-interactive authentication flow. ([\#7630](https://github.com/matrix-org/synapse/issues/7630))

Bugfixes
--------

- Allow new users to be registered via the admin API even if the monthly active user limit has been reached. Contributed by @dklimpel. ([\#7263](https://github.com/matrix-org/synapse/issues/7263))
- Fix email notifications not being enabled for new users when created via the Admin API. ([\#7267](https://github.com/matrix-org/synapse/issues/7267))
- Fix str placeholders in an instance of `PrepareDatabaseException`. Introduced in Synapse v1.8.0. ([\#7575](https://github.com/matrix-org/synapse/issues/7575))
- Fix a bug in automatic user creation during first time login with `m.login.jwt`. Regression in v1.6.0. Contributed by @olof. ([\#7585](https://github.com/matrix-org/synapse/issues/7585))
- Fix a bug causing the cross-signing keys to be ignored when resyncing a device list. ([\#7594](https://github.com/matrix-org/synapse/issues/7594))
- Fix metrics failing when there is a large number of active background processes. ([\#7597](https://github.com/matrix-org/synapse/issues/7597))
- Fix bug where returning rooms for a group would fail if it included a room that the server was not in. ([\#7599](https://github.com/matrix-org/synapse/issues/7599))
- Fix duplicate key violation when persisting read markers. ([\#7607](https://github.com/matrix-org/synapse/issues/7607))
- Prevent an entire iteration of the device list resync loop from failing if one server responds with a malformed result. ([\#7609](https://github.com/matrix-org/synapse/issues/7609))
- Fix exceptions when fetching events from a remote host fails. ([\#7622](https://github.com/matrix-org/synapse/issues/7622))
- Make `synctl restart` start synapse if it wasn't running. ([\#7624](https://github.com/matrix-org/synapse/issues/7624))
- Pass device information through to the login endpoint when using the login fallback. ([\#7629](https://github.com/matrix-org/synapse/issues/7629))
- Advertise the `m.login.token` login flow when OpenID Connect is enabled. ([\#7631](https://github.com/matrix-org/synapse/issues/7631))
- Fix bug in account data replication stream. ([\#7656](https://github.com/matrix-org/synapse/issues/7656))

Improved Documentation
----------------------

- Update the OpenBSD installation instructions. ([\#7587](https://github.com/matrix-org/synapse/issues/7587))
- Advertise Python 3.8 support in `setup.py`. ([\#7602](https://github.com/matrix-org/synapse/issues/7602))
- Add a link to `#synapse:matrix.org` in the troubleshooting section of the README. ([\#7603](https://github.com/matrix-org/synapse/issues/7603))
- Clarifications to the admin api documentation. ([\#7647](https://github.com/matrix-org/synapse/issues/7647))

Internal Changes
----------------

- Convert the identity handler to async/await. ([\#7561](https://github.com/matrix-org/synapse/issues/7561))
- Improve query performance for fetching state from a PostgreSQL database. ([\#7567](https://github.com/matrix-org/synapse/issues/7567))
- Speed up processing of federation stream RDATA rows. ([\#7584](https://github.com/matrix-org/synapse/issues/7584))
- Add comment to systemd example to show postgresql dependency. ([\#7591](https://github.com/matrix-org/synapse/issues/7591))
- Refactor `Ratelimiter` to limit the amount of expensive config value accesses. ([\#7595](https://github.com/matrix-org/synapse/issues/7595))
- Convert groups handlers to async/await. ([\#7600](https://github.com/matrix-org/synapse/issues/7600))
- Clean up exception handling in `SAML2ResponseResource`. ([\#7614](https://github.com/matrix-org/synapse/issues/7614))
- Check that all asynchronous tasks succeed and general cleanup of `MonthlyActiveUsersTestCase` and `TestMauLimit`. ([\#7619](https://github.com/matrix-org/synapse/issues/7619))
- Convert `get_user_id_by_threepid` to async/await. ([\#7620](https://github.com/matrix-org/synapse/issues/7620))
- Switch to upstream `dh-virtualenv` rather than our fork for Debian package builds. ([\#7621](https://github.com/matrix-org/synapse/issues/7621))
- Update CI scripts to check the number in the newsfile fragment. ([\#7623](https://github.com/matrix-org/synapse/issues/7623))
- Check if the localpart of a Matrix ID is reserved for guest users earlier in the registration flow, as well as when responding to requests to `/register/available`. ([\#7625](https://github.com/matrix-org/synapse/issues/7625))
- Minor cleanups to OpenID Connect integration. ([\#7628](https://github.com/matrix-org/synapse/issues/7628))
- Attempt to fix flaky test: `PhoneHomeStatsTestCase.test_performance_100`. ([\#7634](https://github.com/matrix-org/synapse/issues/7634))
- Fix typos of `m.olm.curve25519-aes-sha2` and `m.megolm.v1.aes-sha2` in comments, test files. ([\#7637](https://github.com/matrix-org/synapse/issues/7637))
- Convert user directory, state deltas, and stats handlers to async/await. ([\#7640](https://github.com/matrix-org/synapse/issues/7640))
- Remove some unused constants. ([\#7644](https://github.com/matrix-org/synapse/issues/7644))
- Fix type information on `assert_*_is_admin` methods. ([\#7645](https://github.com/matrix-org/synapse/issues/7645))
- Convert registration handler to async/await. ([\#7649](https://github.com/matrix-org/synapse/issues/7649))
2020-06-10 10:57:26 +01:00
Brendan Abolivier
4cf4c7dc99 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-01 12:58:34 +02:00
Erik Johnston
6fdf5ef66b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-29 13:27:12 +01:00
Brendan Abolivier
d4220574a2 Merge branch 'release-v1.14.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-28 14:12:46 +02:00
Erik Johnston
1a9c8d5ee9 Merge commit 'ef3934ec8f123f6f553b07471588fbcc7f444cd8' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-27 20:06:41 +01:00
Erik Johnston
407dbf8574 Merge branch 'release-v1.14.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-27 13:35:15 +01:00
Erik Johnston
8beca8e21f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-26 09:43:21 +01:00
Erik Johnston
cf92310da2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-21 15:19:00 +01:00
Richard van der Hoff
89f795fe8a Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 23:40:22 +01:00
Richard van der Hoff
1c347c84bf inline some config references 2020-05-20 23:33:13 +01:00
Richard van der Hoff
0d8fb99cdf Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 22:18:21 +01:00
Richard van der Hoff
b3a9ad124c Fix field name in stubbed out presence servlet 2020-05-20 22:17:59 +01:00
Richard van der Hoff
a902468354 Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 22:13:44 +01:00
Richard van der Hoff
84639b32ae stub out GET presence requests 2020-05-20 22:13:32 +01:00
Patrick Cloke
dac5d5ae42 Merge branch 'release-v1.13.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-18 10:27:51 -04:00
Richard van der Hoff
6bd2a39a7d Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-14 10:08:45 +01:00
Richard van der Hoff
309e30bae3 Merge remote-tracking branch 'origin/release-v1.13.0' into matrix-org-hotfixes 2020-05-11 13:09:14 +01:00
Richard van der Hoff
7ff7a415d1 Revert emergency registration patches
Revert "Merge commit '4d3ebc' into matrix-org-hotfixes"

This reverts commit 617541c4c6, reversing
changes made to ae4f6140f1.
2020-05-11 13:08:48 +01:00
Richard van der Hoff
6610343332 Revert emergency registration patches
Revert "Merge remote-tracking branch 'origin/clokep/no-validate-ui-auth-sess' into matrix-org-hotfixes"

This reverts commit 5adad58d95, reversing
changes made to 617541c4c6.
2020-05-11 13:08:14 +01:00
Richard van der Hoff
5adad58d95 Merge remote-tracking branch 'origin/clokep/no-validate-ui-auth-sess' into matrix-org-hotfixes 2020-05-07 15:19:54 +01:00
Patrick Cloke
d7c7f64f17 Propagate changes to the client dict to the database. 2020-05-07 10:07:09 -04:00
Patrick Cloke
c4c84b67d5 Disable a failing test. 2020-05-07 10:05:00 -04:00
Richard van der Hoff
617541c4c6 Merge commit '4d3ebc' into matrix-org-hotfixes 2020-05-07 14:16:52 +01:00
Patrick Cloke
4d3ebc3620 Disable validation that a UI authentication session has not been modified during a request cycle.
Partial backout of 1c1242acba (#7068)
2020-05-07 08:34:14 -04:00
Richard van der Hoff
ae4f6140f1 Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-07 10:42:56 +01:00
Richard van der Hoff
323cfe3efb fix bad merge 2020-05-06 12:14:01 +01:00
Richard van der Hoff
b0d2add89d Merge branch 'rav/cross_signing_keys_cache' into matrix-org-hotfixes 2020-05-06 11:59:41 +01:00
Richard van der Hoff
ff20747703 Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-06 11:57:36 +01:00
Richard van der Hoff
9192f1b9dd Merge rav/upsert_for_device_list into matrix-org-hotfixes 2020-05-06 11:46:19 +01:00
Richard van der Hoff
89d178e8e7 Merge rav/fix_dropped_messages into matrix-org-hotfixes 2020-05-05 22:42:48 +01:00
Richard van der Hoff
1c24e35e85 Merge erikj/faster_device_lists_fetch into matrix-org-hotfixes 2020-05-05 18:36:17 +01:00
Erik Johnston
5debf3071c Fix redis password support 2020-05-04 16:44:21 +01:00
Richard van der Hoff
e9bd4bb388 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-05-01 09:26:57 +01:00
Richard van der Hoff
649e48a799 Merge branch 'develop' into matrix-org-hotfixes 2020-04-24 14:07:47 +01:00
Richard van der Hoff
9b0157686b Merge branch 'release-v1.12.4' into matrix-org-hotfixes 2020-04-22 13:30:35 +01:00
Richard van der Hoff
8288218b29 Merge remote-tracking branch 'origin/release-v1.12.4' into matrix-org-hotfixes 2020-04-21 11:03:32 +01:00
Richard van der Hoff
da5e6eea45 Revert recent merges of #7289 into matrix-org-hotfixes
This was incorrectly merged before it was ready.

This reverts commit aead826d2d, reversing
changes made to 4cd2a4ae3a.

It also reverts commits 9b8212d25, fb3f1fb5c and 2fdfa96ee.
2020-04-21 11:00:57 +01:00
Andrew Morgan
2fdfa96ee6 lint 2020-04-17 17:38:36 +01:00
Andrew Morgan
fb3f1fb5c0 Fix log lines, return type, tuple handling 2020-04-17 17:36:53 +01:00
Andrew Morgan
9b8212d256 Update changelog 2020-04-17 17:36:24 +01:00
Andrew Morgan
aead826d2d Merge branch 'release-v1.12.4' of github.com:matrix-org/synapse into matrix-org-hotfixes
* 'release-v1.12.4' of github.com:matrix-org/synapse:
  Query missing cross-signing keys on local sig upload
2020-04-17 15:49:31 +01:00
Andrew Morgan
4cd2a4ae3a Merge branch 'release-v1.12.4' into HEAD
* release-v1.12.4:
  Only register devices edu handler on the master process (#7255)
  tweak changelog
  1.12.3
  Fix the debian build in a better way. (#7212)
  Fix changelog wording
  1.12.2
  Pin Pillow>=4.3.0,<7.1.0 to fix dep issue
  1.12.1
2020-04-14 13:36:19 +01:00
Andrew Morgan
66cd243e6f Merge branch 'release-v1.12.1' of github.com:matrix-org/synapse into matrix-org-hotfixes
* 'release-v1.12.1' of github.com:matrix-org/synapse:
  Note where bugs were introduced
  1.12.1rc1
  Newsfile
  Rewrite changelog
  Add changelog
  Only import sqlite3 when type checking
  Fix another instance
  Only setdefault for signatures if device has key_json
  Fix starting workers when federation sending not split out.
  matrix.org was fine
  Update CHANGES.md
  changelog typos
  1.12.0 changelog
  1.12.0
  more changelog
  changelog fixes
  fix typo
  1.12.0rc1
  update grafana dashboard
2020-03-31 12:06:11 +01:00
Richard van der Hoff
7b66a1f0d9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-03-19 10:29:20 +00:00
Richard van der Hoff
059e91bdce Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-03-19 10:03:10 +00:00
Erik Johnston
f86962cb6b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-03-18 17:17:03 +00:00
Brendan Abolivier
03c694bb08 Fix schema deltas 2020-03-12 16:48:11 +00:00
Brendan Abolivier
08d68c5296 Populate the room version from state events
See `rooms_version_column_3.sql.postgres` for details about why we need to do
that.
2020-03-12 15:59:24 +00:00
Brendan Abolivier
568461b5ec Also don't filter out events sent by ignored users when checking state visibility 2020-03-11 17:04:18 +00:00
Brendan Abolivier
6b73b8b70c Fix condition 2020-03-11 15:32:07 +00:00
Brendan Abolivier
936686ed2d Don't filter out events when we're checking the visibility of state 2020-03-11 15:21:25 +00:00
Brendan Abolivier
74050d0c1c Merge branch 'develop' into matrix-org-hotfixes 2020-03-09 15:06:56 +00:00
Richard van der Hoff
69111a8b2a Merge branch 'develop' into matrix-org-hotfixes 2020-02-27 10:46:36 +00:00
Richard van der Hoff
d840ee5bde Revert "skip send without trailing slash"
I think this was done back when most synapses would reject the
no-trailing-slash version; it's no longer required, and makes matrix.org spec-incompliant.

This reverts commit fc5be50d56.
2020-02-27 10:44:55 +00:00
Erik Johnston
e3d811e85d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 15:48:33 +00:00
Erik Johnston
578ad9fc48 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 15:11:20 +00:00
Richard van der Hoff
9dbe34f0d0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-02-19 11:40:25 +00:00
Erik Johnston
93a0751302 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 10:16:46 +00:00
Erik Johnston
bc936b5657 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-18 16:11:26 +00:00
Richard van der Hoff
d6eae548a7 Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-11 10:43:32 +00:00
Richard van der Hoff
e439438b9b Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-10 09:56:51 +00:00
Richard van der Hoff
f8a1e0d1d2 Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-10 09:54:40 +00:00
Erik Johnston
8a29def84a Add support for putting fed user query API on workers (#6873) 2020-02-07 15:59:05 +00:00
Erik Johnston
77a166577a Allow moving group read APIs to workers (#6866) 2020-02-07 13:57:07 +00:00
Erik Johnston
7d5268d37c Merge branch 'release-v1.10.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-06 10:26:39 +00:00
Erik Johnston
c854d255e5 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-31 15:06:16 +00:00
Brendan Abolivier
c660962d4d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-22 13:48:11 +00:00
Richard van der Hoff
767bef0033 Merge branch 'rav/storage_provider_debug' into matrix-org-hotfixes 2020-01-21 23:03:22 +00:00
Richard van der Hoff
4d02bfd6e1 a bit of debugging for media storage providers 2020-01-21 23:02:58 +00:00
Andrew Morgan
a099ab7d38 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-14 10:37:32 +00:00
Erik Johnston
ce72a9ccdb Merge branch 'erikj/media_admin_apis' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-08 15:52:58 +00:00
Erik Johnston
bace86ed15 Merge branch 'release-v1.8.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-08 15:52:48 +00:00
Erik Johnston
45bf455948 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-07 14:24:36 +00:00
Richard van der Hoff
859663565c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-01-06 15:43:41 +00:00
Richard van der Hoff
0876a5b641 Merge branch 'release-v1.7.3' into matrix-org-hotfixes 2019-12-31 10:47:29 +00:00
Richard van der Hoff
5b5314ee41 Merge branch 'release-v1.7.2' into matrix-org-hotfixes 2019-12-20 10:48:04 +00:00
Richard van der Hoff
aff9189149 Merge remote-tracking branch 'origin/release-v1.7.1' into matrix-org-hotfixes 2019-12-17 16:00:43 +00:00
Richard van der Hoff
2eda49a8db Merge remote-tracking branch 'origin/release-v1.7.1' into matrix-org-hotfixes 2019-12-17 10:56:36 +00:00
Richard van der Hoff
96b17d4e4f Merge remote-tracking branch 'origin/release-v1.7.0' into matrix-org-hotfixes 2019-12-17 10:56:26 +00:00
Erik Johnston
aadc131dc1 Merge branch 'babolivier/pusher-room-store' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-12-10 12:50:04 +00:00
Neil Johnson
0a522121a0 Merge branch 'release-v1.7.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-12-10 11:25:28 +00:00
Andrew Morgan
0b5e2c8093 Merge branch 'release-v1.6.1' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-28 11:40:33 +00:00
Erik Johnston
c665d154a2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 18:56:54 +00:00
Neil Johnson
31295b5a60 Merge branch 'release-v1.6.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 13:16:18 +00:00
Erik Johnston
aebe20c452 Fix phone home stats (#6418)
Fix phone home stats
2019-11-26 13:10:09 +00:00
Andrew Morgan
508e0f9310 1.6.0 2019-11-26 12:15:46 +00:00
Andrew Morgan
e04e7e830e Merge branch 'release-v1.6.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 12:06:38 +00:00
Andrew Morgan
5407e69732 Change /push/v1/notify IP to 10.103.0.7 2019-11-26 12:04:19 +00:00
Erik Johnston
2c59eb368c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-20 15:17:10 +00:00
Erik Johnston
6d1a3e2bdd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-19 12:59:39 +00:00
Richard van der Hoff
7fa4586e36 Merge branch 'rav/url_preview_limit_title_2' into matrix-org-hotfixes 2019-11-05 18:18:02 +00:00
Erik Johnston
33b4aa8d99 Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-29 12:18:44 +00:00
Erik Johnston
627cf5def8 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-25 11:35:14 +01:00
Erik Johnston
b409d51dee Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-25 10:19:09 +01:00
Erik Johnston
4a4e620f30 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-11 10:40:15 +01:00
Richard van der Hoff
28889d8da5 fix logging 2019-10-11 09:57:18 +01:00
Richard van der Hoff
15b2a50817 Add some randomness to the high-cpu backoff hack 2019-10-11 09:15:56 +01:00
Richard van der Hoff
b852a8247d Awful hackery to try to get the fed sender to keep up
Basically, if the federation sender starts getting behind, insert some sleeps
into the transaction transmission code to give the fed sender a chance to catch
up.

Might have to experiment a bit with the numbers.
2019-10-10 10:34:08 +01:00
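Roughly what this hack (plus the randomness added in 15b2a50817, above) amounts to; the threshold and delay are invented numbers of the kind one would "experiment a bit with":

```
import asyncio
import random

BACKLOG_THRESHOLD = 100  # hypothetical "sender is behind" cut-off
BASE_DELAY_SECS = 1.0    # hypothetical base sleep

async def maybe_backoff(pending_transactions: int) -> None:
    # Sleep a little (with jitter, so workers don't wake in lockstep)
    # before transmitting, giving the fed sender a chance to catch up.
    if pending_transactions > BACKLOG_THRESHOLD:
        await asyncio.sleep(BASE_DELAY_SECS * random.uniform(0.5, 1.5))
```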
Erik Johnston
7b55cca011 Merge branch 'erikj/cache_memberships' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-07 13:15:22 +01:00
Richard van der Hoff
a9577ab1f4 Merge branch 'develop' into matrix-org-hotfixes 2019-10-03 17:52:22 +01:00
Richard van der Hoff
cb217d5d60 Revert "Awful hackery to try to get the fed sender to keep up"
This reverts commit 721086a291.

This didn't help.
2019-10-03 17:05:24 +01:00
Andrew Morgan
f4f5355bcf Merge branch 'release-v1.4.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-03 13:06:32 +01:00
Erik Johnston
23bb2713d2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 16:51:08 +01:00
Erik Johnston
b2471e1109 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 15:39:31 +01:00
Erik Johnston
610219d53d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 14:09:29 +01:00
Erik Johnston
b464afe283 Merge branch 'release-v1.4.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 11:09:05 +01:00
Richard van der Hoff
7657ad3ced Merge branch 'rav/federation_sender_hackery' into matrix-org-hotfixes 2019-09-27 16:14:52 +01:00
Richard van der Hoff
721086a291 Awful hackery to try to get the fed sender to keep up
Basically, if the federation sender starts getting behind, insert some sleeps
into the transaction transmission code to give the fed sender a chance to catch
up.

Might have to experiment a bit with the numbers.
2019-09-27 16:13:51 +01:00
Richard van der Hoff
6e6b53ed3a Merge branch 'develop' into matrix-org-hotfixes 2019-09-26 15:22:33 +01:00
Richard van der Hoff
601b50672d Merge branch 'develop' into matrix-org-hotfixes 2019-09-25 12:48:40 +01:00
Richard van der Hoff
a7af389da0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-09-24 17:05:15 +01:00
Neil Johnson
99db0d76fd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-24 14:56:12 +01:00
Richard van der Hoff
561b0f79bc Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-09-24 10:11:19 +01:00
Richard van der Hoff
8569f3cdef Merge branch 'rav/fix_retry_reset' into matrix-org-hotfixes 2019-09-20 12:14:19 +01:00
Richard van der Hoff
7b61e6f5d6 Merge branch 'develop' into matrix-org-hotfixes 2019-09-18 13:55:25 +01:00
Richard van der Hoff
05241b3031 Revert "Fix m.federate bug"
This has now been merged into develop (142c9325c) so we no longer need this
cherry-picked commit.

This reverts commit ee91c69ef7.
2019-09-18 13:54:57 +01:00
Richard van der Hoff
e01026d84d Revert "Fix existing v2 identity server calls (MSC2140) (#6013)"
This has now been merged into develop (3505ffcda) so we don't need this
cherry-picked commit.

This reverts commit e0eef47315.
2019-09-18 13:53:37 +01:00
Erik Johnston
ee91c69ef7 Fix m.federate bug 2019-09-13 14:44:48 +01:00
Andrew Morgan
e0eef47315 Fix existing v2 identity server calls (MSC2140) (#6013)
Two things I missed while implementing [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140/files#diff-c03a26de5ac40fb532de19cb7fc2aaf7R80).

1. Access tokens should be provided to the identity server as `access_token`, not `id_access_token`, even though the homeserver may accept the tokens as `id_access_token`.
2. Access tokens must be sent to the identity server in a query parameter; the JSON body is not allowed.

We now send the access token as part of an `Authorization: ...` header, which fixes both things.

The breaking code was added in https://github.com/matrix-org/synapse/pull/5892

Sytest PR: https://github.com/matrix-org/sytest/pull/697
2019-09-13 14:08:26 +01:00
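The shape of the fix the commit describes, sketched with `requests` (the bind endpoint path shown in the usage comment is the v2 identity-service one; treat the helper and its parameters as illustrative):

```
import requests

def post_to_identity_server(id_server, path, access_token, body):
    return requests.post(
        f"https://{id_server}{path}",
        json=body,  # note: no `id_access_token` field in the JSON body
        headers={"Authorization": f"Bearer {access_token}"},
    )

# e.g. post_to_identity_server(
#     "matrix.org", "/_matrix/identity/v2/3pid/bind", token, {...})
```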
Erik Johnston
44d2ca2990 Merge branch 'anoa/fix_3pid_validation' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-10 18:15:24 +01:00
Erik Johnston
9240622c1a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 14:10:53 +01:00
Erik Johnston
0dbba85e95 Merge branch 'anoa/worker_store_reg' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 13:02:12 +01:00
Andrew Morgan
1ceeccb769 Move get_threepid_validation_session into RegistrationWorkerStore 2019-09-06 13:00:34 +01:00
Erik Johnston
39883e85bd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 12:50:28 +01:00
Erik Johnston
68f53b7a0e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 09:53:37 +01:00
Erik Johnston
e679b008ff Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-05 15:23:40 +01:00
Erik Johnston
e80a5b7492 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-04 13:13:30 +01:00
Richard van der Hoff
b272e7345f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-08-30 12:01:24 +01:00
Erik Johnston
a81e0233e9 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-29 11:18:57 +01:00
Richard van der Hoff
80898481ab Merge branch 'release-v1.3.1' into matrix-org-hotfixes 2019-08-17 09:22:30 +01:00
Brendan Abolivier
9d4c716d85 Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-15 11:36:00 +01:00
Brendan Abolivier
d90b0946ed Temporary fix to ensure kde can contact matrix.org if stuff breaks 2019-08-13 18:05:06 +01:00
Brendan Abolivier
8d5762b0dc Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-13 17:39:30 +01:00
Brendan Abolivier
a7efbc5416 Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-13 15:54:01 +01:00
Richard van der Hoff
be362cb8f8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-08-13 10:52:19 +01:00
Erik Johnston
873ff9522b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-01 14:46:09 +01:00
Erik Johnston
c1ee2999a0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-31 10:01:56 +01:00
Erik Johnston
9b2b386f76 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-30 13:26:19 +01:00
Erik Johnston
65fe31786d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-30 10:12:13 +01:00
Andrew Morgan
70b6d1dfd6 Merge branch 'release-v1.2.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-24 13:32:41 +01:00
Erik Johnston
ee62aed72e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-23 10:23:40 +01:00
Erik Johnston
c02f26319d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-23 09:20:26 +01:00
Andrew Morgan
fdd182870c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-22 10:19:16 +01:00
Richard van der Hoff
4102cb220a Merge branch 'release-v1.2.0' into matrix-org-hotfixes 2019-07-18 15:20:00 +01:00
Erik Johnston
5299707329 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-17 10:56:55 +01:00
Richard van der Hoff
43e01be158 Merge remote-tracking branch 'origin/release-v1.1.0' into matrix-org-hotfixes 2019-07-03 09:49:35 +01:00
Richard van der Hoff
589e080c6b Merge branch 'release-v1.1.0' into matrix-org-hotfixes 2019-07-03 09:47:55 +01:00
Richard van der Hoff
24e48bc9ff Merge branch 'release-v1.1.0' into matrix-org-hotfixes 2019-07-02 12:05:33 +01:00
Erik Johnston
576b62a6a3 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-28 10:04:54 +01:00
Erik Johnston
ad2ba70959 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-24 15:31:36 +01:00
Erik Johnston
a330505025 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-21 14:36:13 +01:00
Erik Johnston
67b73fd147 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-21 13:27:04 +01:00
Erik Johnston
c08e4dbadc Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-17 14:10:28 +01:00
Erik Johnston
6dbd498772 Merge branch 'master' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 17:25:54 +01:00
Erik Johnston
03b09b32d6 Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 14:00:50 +01:00
Erik Johnston
8f1711da0e Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 00:23:54 +01:00
Erik Johnston
6fb6c98f71 Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-10 18:34:45 +01:00
Erik Johnston
aad993f24d Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-10 16:05:10 +01:00
Erik Johnston
544e101c24 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-04 16:58:38 +01:00
Richard van der Hoff
8699f380f0 hotfix RetryLimiter 2019-06-04 12:14:41 +01:00
Richard van der Hoff
e91a68ef3a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-06-04 11:59:55 +01:00
Richard van der Hoff
9f5048c198 Merge branch 'rav/limit_displayname_length' into matrix-org-hotfixes 2019-06-01 11:15:43 +01:00
Erik Johnston
b3c40ba58a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-31 10:58:47 +01:00
Erik Johnston
8d69193a42 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-30 14:33:44 +01:00
Erik Johnston
bbcd19f2d0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-24 10:53:01 +01:00
Erik Johnston
3cd598135f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-23 15:54:13 +01:00
Richard van der Hoff
1c8f2c34ff Merge branch 'develop' into matrix-org-hotfixes 2019-05-21 16:29:25 +01:00
Richard van der Hoff
ca03f90ee7 Merge branch 'develop' into matrix-org-hotfixes 2019-05-20 15:55:39 +01:00
Richard van der Hoff
9feee29d76 Merge tag 'v0.99.4rc1' into matrix-org-hotfixes
v0.99.4rc1
2019-05-14 11:12:22 +01:00
Richard van der Hoff
e7dcee13da Merge commit 'a845abbf3' into matrix-org-hotfixes 2019-05-03 17:12:28 +01:00
Richard van der Hoff
7467738834 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-05-02 13:37:35 +01:00
Erik Johnston
d75fb8ae22 Merge branch 'erikj/ratelimit_3pid_invite' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-26 18:12:33 +01:00
Erik Johnston
ae25a8efef Merge branch 'erikj/postpath' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-17 10:14:57 +01:00
Richard van der Hoff
fc5be50d56 skip send without trailing slash 2019-04-16 15:16:57 +01:00
Erik Johnston
aadba440da Point pusher to new box 2019-04-15 19:23:21 +01:00
Erik Johnston
ec94d6a590 VersionRestServlet doesn't take a param 2019-04-15 19:21:32 +01:00
Erik Johnston
42ce90c3f7 Merge branch 'erikj/move_endpoints' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-15 18:56:46 +01:00
Erik Johnston
8467756dc1 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-04 14:43:57 +01:00
Erik Johnston
613b443ff0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-02 18:25:45 +01:00
Richard van der Hoff
233b61ac61 Remove spurious changelog files from hotfixes
The relevant patches are now in develop thanks to
https://github.com/matrix-org/synapse/pull/4816.
2019-04-02 13:51:37 +01:00
Richard van der Hoff
f41c9d37d6 Merge branch 'develop' into matrix-org-hotfixes 2019-04-02 13:47:08 +01:00
Neil Johnson
1048e2ca6a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-27 09:18:35 +00:00
Richard van der Hoff
ce0ce1add3 Merge branch 'develop' into matrix-org-hotfixes 2019-03-25 16:48:56 +00:00
Erik Johnston
b0bf1ea7bd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-21 14:10:31 +00:00
Richard van der Hoff
2561b628af Merge branch 'develop' into matrix-org-hotfixes 2019-03-19 12:19:20 +00:00
Richard van der Hoff
73c6630718 Revert "Reinstate EDU-batching hacks"
This reverts commit ed8ccc3737.
2019-03-19 12:17:28 +00:00
Erik Johnston
a189bb03ab Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-14 14:39:06 +00:00
Erik Johnston
404a2d70be Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-14 13:55:29 +00:00
Richard van der Hoff
ed8ccc3737 Reinstate EDU-batching hacks
This reverts commit c7285607a3.
2019-03-13 14:42:11 +00:00
Erik Johnston
18b1a92162 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-08 09:56:17 +00:00
Amber Brown
199aa72d35 Merge branch 'develop' of ssh://github.com/matrix-org/synapse into matrix-org-hotfixes 2019-03-07 21:43:10 +11:00
Erik Johnston
8f7dbbc14a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-06 19:30:30 +00:00
Erik Johnston
27dbc9ac42 Reenable presence tests and remove pointless change 2019-03-06 17:12:45 +00:00
Richard van der Hoff
e9aa401994 Remove redundant changes from synapse/replication/tcp/streams.py (#4813)
This was some hacky code (introduced in c10c71e70d) to make the presence stream
do nothing on hotfixes. We now ensure that no replication clients subscribe to
the presence stream, so this is redundant.
2019-03-06 13:21:32 +00:00
Richard van der Hoff
9e9572c79e Run black on synapse/handlers/user_directory.py (#4812)
This got done on the develop branch in #4635, but the subsequent merge to
hotfixes (88af0317a) discarded the changes for some reason.

Fixing this here and now means (a) there are fewer differences between
matrix-org-hotfixes and develop, making future patches easier to merge, and (b)
fixes some pep8 errors on the hotfixes branch which have been annoying me for
some time.
2019-03-06 11:56:03 +00:00
Richard van der Hoff
c7285607a3 Revert EDU-batching hacks from matrix-org-hotfixes
Firstly: we want to do this in a better way, which is the intention of
too many RRs, which means we need to make it happen again.

This reverts commits: 8d7c0264b 000d23090 eb0334b07 4d07dc0d1
2019-03-06 11:04:53 +00:00
Erik Johnston
a6e2546980 Fix outbound federation 2019-03-05 14:50:37 +00:00
Erik Johnston
dc510e0e43 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-05 14:41:13 +00:00
Richard van der Hoff
ed12338f35 Remove #4733 debug (#4767)
We don't need any of this stuff now; this brings protocol.py back into line
with develop for the hotfixes branch.
2019-03-04 14:00:03 +00:00
Richard van der Hoff
bf3f8b8855 Add more debug for #4422 (#4769) 2019-02-28 17:46:22 +00:00
Richard van der Hoff
67acd1aa1b Merge branch 'develop' into matrix-org-hotfixes 2019-02-27 10:29:24 +00:00
Erik Johnston
75c924430e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-26 09:36:29 +00:00
Richard van der Hoff
6087c53830 Add more debug for membership syncing issues (#4719) 2019-02-25 17:00:18 +00:00
Erik Johnston
b50fe65a22 Add logging when sending error 2019-02-25 15:55:21 +00:00
Erik Johnston
17009e689b Merge pull request #4734 from matrix-org/rav/repl_debug
Add some debug to help with #4733
2019-02-25 15:52:45 +00:00
Richard van der Hoff
5d2f755d3f Add some debug to help with #4733 2019-02-25 14:37:23 +00:00
Richard van der Hoff
8d7c0264bc more fix edu batching hackery 2019-02-24 23:27:52 +00:00
Richard van der Hoff
000d230901 fix edu batching hackery 2019-02-24 23:19:37 +00:00
Richard van der Hoff
eb0334b07c more edu batching hackery 2019-02-24 23:15:09 +00:00
Richard van der Hoff
4d07dc0d18 Add a delay to the federation loop for EDUs 2019-02-24 22:24:36 +00:00
Erik Johnston
0ea52872ab Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-22 15:29:41 +00:00
Richard van der Hoff
6868d53fe9 bail out early in on_new_receipts if no pushers 2019-02-21 15:58:15 +00:00
Richard van der Hoff
68af15637b Merge branch 'develop' into matrix-org-hotfixes 2019-02-20 14:24:17 +00:00
Richard van der Hoff
4da63d9f6f Merge branch 'develop' into matrix-org-hotfixes 2019-02-20 14:15:56 +00:00
Richard van der Hoff
085d69b0bd Apply the pusher http hack in the right place (#4692)
Do it in the constructor, so that it works for badge updates as well as pushes
2019-02-20 11:25:10 +00:00
Erik Johnston
776fe6c184 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-20 09:52:24 +00:00
Erik Johnston
0e07d2c7d5 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-19 13:24:37 +00:00
Erik Johnston
90ec885805 Revert "Merge pull request #4654 from matrix-org/hawkowl/registration-worker"
This reverts commit 5bd2e2c31d, reversing
changes made to d97c3a6ce6.
2019-02-19 13:23:17 +00:00
Erik Johnston
5a28154c4d Revert "Merge pull request #4655 from matrix-org/hawkowl/registration-worker"
This reverts commit 93555af5c9, reversing
changes made to 5bd2e2c31d.
2019-02-19 13:23:14 +00:00
Erik Johnston
2fcb51e703 Merge branch 'matthew/well-known-cors' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-18 18:38:49 +00:00
Erik Johnston
26f524872f Revert change that cached connection factory 2019-02-18 18:36:54 +00:00
Erik Johnston
88af0317a2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-15 22:39:13 +00:00
Erik Johnston
c10c71e70d Emergency changes 2019-02-15 18:15:21 +00:00
Erik Johnston
93555af5c9 Merge pull request #4655 from matrix-org/hawkowl/registration-worker
Device replication
2019-02-15 18:12:49 +00:00
Amber Brown
06622e4110 fix 2019-02-16 05:11:09 +11:00
Amber Brown
155efa9e36 fix 2019-02-16 05:10:48 +11:00
Amber Brown
3175edc5d8 maybe 2019-02-16 05:09:08 +11:00
Amber Brown
d95252c01f use a device replication thingy 2019-02-16 05:08:58 +11:00
Erik Johnston
5bd2e2c31d Merge pull request #4654 from matrix-org/hawkowl/registration-worker
Registration worker
2019-02-15 17:51:34 +00:00
Amber Brown
84528e4fb2 cleanup 2019-02-16 04:49:09 +11:00
Amber Brown
e4381ed514 pep8 2019-02-16 04:42:04 +11:00
Amber Brown
d9235b9e29 fix appservice, add to frontend proxy 2019-02-16 04:39:49 +11:00
Amber Brown
ce5f3b1ba5 add all the files 2019-02-16 04:35:58 +11:00
Amber Brown
7b5c04312e isort 2019-02-16 04:35:27 +11:00
Amber Brown
f5bafd70f4 add cache remover endpoint and wire it up 2019-02-16 04:34:23 +11:00
Richard van der Hoff
d97c3a6ce6 Merge remote-tracking branch 'origin/release-v0.99.1' into matrix-org-hotfixes 2019-02-13 14:29:05 +00:00
Erik Johnston
341c35614a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-13 10:29:31 +00:00
Richard van der Hoff
fecf28319c Merge branch 'release-v0.99.0' into matrix-org-hotfixes 2019-02-01 13:29:31 +00:00
Richard van der Hoff
345d8cfb69 Merge branch 'release-v0.99.0' into matrix-org-hotfixes 2019-02-01 13:21:42 +00:00
Richard van der Hoff
b60d005156 Merge branch 'develop' into matrix-org-hotfixes 2019-01-31 18:44:04 +00:00
Richard van der Hoff
6c232a69df Revert "Break infinite loop on redaction in v3 rooms"
We've got a better fix of this now.

This reverts commit decb5698b3.
2019-01-31 18:43:49 +00:00
Amber Brown
e97c1df30c remove slow code on userdir (#4534) 2019-01-31 13:26:38 +00:00
Richard van der Hoff
decb5698b3 Break infinite loop on redaction in v3 rooms 2019-01-31 00:23:58 +00:00
Erik Johnston
62962e30e4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 17:04:08 +00:00
Erik Johnston
05413d4e20 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 14:27:19 +00:00
Erik Johnston
ca46dcf683 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 13:11:25 +00:00
Erik Johnston
d351be1567 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 11:48:29 +00:00
Andrew Morgan
c7f2eaf4f4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-29 10:07:13 +00:00
Andrew Morgan
53d25116df Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-25 14:33:14 +00:00
Andrew Morgan
08e25ffa0c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-24 15:51:59 +00:00
Andrew Morgan
1c148e442b Merge branch 'anoa/room_dir_quick_fix' into matrix-org-hotfixes 2019-01-24 15:37:16 +00:00
Andrew Morgan
acaca1b4e9 Merge branch 'anoa/room_dir_quick_fix' into matrix-org-hotfixes 2019-01-24 14:51:35 +00:00
Andrew Morgan
4777836b83 Fix missing synapse metrics import 2019-01-23 15:26:03 +00:00
Andrew Morgan
7da659dd6d Use existing stream position counter metric 2019-01-23 15:04:12 +00:00
Andrew Morgan
77dfe51aba Name metric consistently 2019-01-23 15:04:05 +00:00
Andrew Morgan
ef7865e2f2 Track user_dir current event stream position 2019-01-23 15:03:54 +00:00
Matthew Hodgson
5cb15c0443 warn if we ignore device lists 2019-01-15 22:11:46 +00:00
Matthew Hodgson
b43172ffbc Merge pull request #4396 from matrix-org/matthew/bodge_device_update_dos
limit remote device lists to 10000 entries per user
2019-01-15 21:47:00 +00:00
Matthew Hodgson
b4796d1814 drop the limit to 1K as e2e will be hosed beyond that point anyway 2019-01-15 21:46:29 +00:00
Matthew Hodgson
482d06774a don't store remote device lists if they have more than 10K devices 2019-01-15 21:38:07 +00:00
Matthew Hodgson
046d731fbd limit remote device lists to 1000 entries per user 2019-01-15 21:07:12 +00:00
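The two limits tried in the commits above (10,000, then 1,000), as a sketch; `store` is a stand-in for the real persistence call:

```
import logging

logger = logging.getLogger(__name__)

MAX_DEVICES_PER_USER = 1000  # dropped from 10,000 in 482d06774a / b4796d1814

def maybe_store_remote_device_list(user_id, devices, store):
    if len(devices) > MAX_DEVICES_PER_USER:
        # E2E will be hosed with this many devices anyway, so ignore the
        # update rather than persisting it (and warn, per 5cb15c0443).
        logger.warning(
            "ignoring device list for %s: %d devices", user_id, len(devices)
        )
        return
    store(user_id, devices)
```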
Richard van der Hoff
892f6c98ec Merge tag 'v0.34.1.1' into matrix-org-hotfixes
Synapse 0.34.1.1 (2019-01-11)
=============================

This release fixes CVE-2019-5885 and is recommended for all users of Synapse 0.34.1.

This release is compatible with Python 2.7 and 3.5+. Python 3.7 is fully supported.

Bugfixes
--------

- Fix spontaneous logout on upgrade
  ([\#4374](https://github.com/matrix-org/synapse/issues/4374))
2019-01-11 10:21:18 +00:00
Erik Johnston
7fafa2d954 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-09 09:13:16 +00:00
Richard van der Hoff
1d63046542 Merge tag 'v0.34.1rc1' into matrix-org-hotfixes
Synapse 0.34.1rc1 (2019-01-08)
==============================

Features
--------

- Special-case a support user for use in verifying behaviour of a given server. The support user does not appear in user directory or monthly active user counts. ([\#4141](https://github.com/matrix-org/synapse/issues/4141), [\#4344](https://github.com/matrix-org/synapse/issues/4344))
- Support for serving .well-known files ([\#4262](https://github.com/matrix-org/synapse/issues/4262))
- Rework SAML2 authentication ([\#4265](https://github.com/matrix-org/synapse/issues/4265), [\#4267](https://github.com/matrix-org/synapse/issues/4267))
- SAML2 authentication: Initialise user display name from SAML2 data ([\#4272](https://github.com/matrix-org/synapse/issues/4272))
- Synapse can now have its conditional/extra dependencies installed by pip. This functionality can be used by using `pip install matrix-synapse[feature]`, where feature is a comma separated list with the possible values `email.enable_notifs`, `matrix-synapse-ldap3`, `postgres`, `resources.consent`, `saml2`, `url_preview`, and `test`. If you want to install all optional dependencies, you can use "all" instead. ([\#4298](https://github.com/matrix-org/synapse/issues/4298), [\#4325](https://github.com/matrix-org/synapse/issues/4325), [\#4327](https://github.com/matrix-org/synapse/issues/4327))
- Add routes for reading account data. ([\#4303](https://github.com/matrix-org/synapse/issues/4303))
- Add opt-in support for v2 rooms ([\#4307](https://github.com/matrix-org/synapse/issues/4307))
- Add a script to generate a clean config file ([\#4315](https://github.com/matrix-org/synapse/issues/4315))
- Return server data in /login response ([\#4319](https://github.com/matrix-org/synapse/issues/4319))

Bugfixes
--------

- Fix contains_url check to be consistent with other instances in code-base and check that value is an instance of string. ([\#3405](https://github.com/matrix-org/synapse/issues/3405))
- Fix CAS login when username is not valid in an MXID ([\#4264](https://github.com/matrix-org/synapse/issues/4264))
- Send CORS headers for /media/config ([\#4279](https://github.com/matrix-org/synapse/issues/4279))
- Add 'sandbox' to CSP for media repository ([\#4284](https://github.com/matrix-org/synapse/issues/4284))
- Make the new landing page prettier. ([\#4294](https://github.com/matrix-org/synapse/issues/4294))
- Fix deleting E2E room keys when using old SQLite versions. ([\#4295](https://github.com/matrix-org/synapse/issues/4295))
- The metric synapse_admin_mau:current previously did not update when config.mau_stats_only was set to True ([\#4305](https://github.com/matrix-org/synapse/issues/4305))
- Fixed per-room account data filters ([\#4309](https://github.com/matrix-org/synapse/issues/4309))
- Fix indentation in default config ([\#4313](https://github.com/matrix-org/synapse/issues/4313))
- Fix synapse:latest docker upload ([\#4316](https://github.com/matrix-org/synapse/issues/4316))
- Fix test_metric.py compatibility with prometheus_client 0.5. Contributed by Maarten de Vries <maarten@de-vri.es>. ([\#4317](https://github.com/matrix-org/synapse/issues/4317))
- Avoid packaging _trial_temp directory in -py3 debian packages ([\#4326](https://github.com/matrix-org/synapse/issues/4326))
- Check jinja version for consent resource ([\#4327](https://github.com/matrix-org/synapse/issues/4327))
- fix NPE in /messages by checking if all events were filtered out ([\#4330](https://github.com/matrix-org/synapse/issues/4330))
- Fix `python -m synapse.config` on Python 3. ([\#4356](https://github.com/matrix-org/synapse/issues/4356))

Deprecations and Removals
-------------------------

- Remove the deprecated v1/register API on Python 2. It was never ported to Python 3. ([\#4334](https://github.com/matrix-org/synapse/issues/4334))

Internal Changes
----------------

- Getting URL previews of IP addresses no longer fails on Python 3. ([\#4215](https://github.com/matrix-org/synapse/issues/4215))
- drop undocumented dependency on dateutil ([\#4266](https://github.com/matrix-org/synapse/issues/4266))
- Update the example systemd config to use a virtualenv ([\#4273](https://github.com/matrix-org/synapse/issues/4273))
- Update link to kernel DCO guide ([\#4274](https://github.com/matrix-org/synapse/issues/4274))
- Make isort tox check print diff when it fails ([\#4283](https://github.com/matrix-org/synapse/issues/4283))
- Log room_id in Unknown room errors ([\#4297](https://github.com/matrix-org/synapse/issues/4297))
- Documentation improvements for coturn setup. Contributed by Krithin Sitaram. ([\#4333](https://github.com/matrix-org/synapse/issues/4333))
- Update pull request template to use absolute links ([\#4341](https://github.com/matrix-org/synapse/issues/4341))
- Update README to not lie about required restart when updating TLS certificates ([\#4343](https://github.com/matrix-org/synapse/issues/4343))
- Update debian packaging for compatibility with transitional package ([\#4349](https://github.com/matrix-org/synapse/issues/4349))
- Fix command hint to generate a config file when trying to start without a config file ([\#4353](https://github.com/matrix-org/synapse/issues/4353))
- Add better logging for unexpected errors while sending transactions ([\#4358](https://github.com/matrix-org/synapse/issues/4358))
2019-01-08 11:37:25 +00:00
Richard van der Hoff
4c238a9a91 Merge remote-tracking branch 'origin/release-v0.34.0' into matrix-org-hotfixes 2018-12-19 10:24:26 +00:00
Richard van der Hoff
002db39a36 Merge tag 'v0.34.0rc1' into matrix-org-hotfixes 2018-12-04 14:07:28 +00:00
Richard van der Hoff
c4074e4ab6 Revert "Merge branch 'rav/timestamp_patch' into matrix-org-hotfixes"
This reverts commit 7960e814e5, reversing
changes made to 3dd704ee9a.

We no longer need this; please redo it as a proper MSC & synapse PR if you want
to keep it...
2018-12-03 10:15:39 +00:00
Richard van der Hoff
7960e814e5 Merge branch 'rav/timestamp_patch' into matrix-org-hotfixes 2018-11-30 12:10:30 +00:00
Richard van der Hoff
080025e533 Fix buglet and remove thread_id stuff 2018-11-30 12:09:33 +00:00
Richard van der Hoff
9accd63a38 Initial patch from Erik 2018-11-30 12:04:38 +00:00
Richard van der Hoff
3dd704ee9a Merge branch 'develop' into matrix-org-hotfixes 2018-11-20 11:29:45 +00:00
Richard van der Hoff
28e28a1974 Merge branch 'develop' into matrix-org-hotfixes 2018-11-20 11:03:35 +00:00
Richard van der Hoff
b699178aa1 Merge branch 'develop' into matrix-org-hotfixes 2018-11-14 11:54:29 +00:00
Richard van der Hoff
c08c649fa1 Merge remote-tracking branch 'origin/erikj/fix_device_comparison' into matrix-org-hotfixes 2018-11-08 12:48:19 +00:00
hera
5c0c4b4079 Fix encoding error for consent form on python3
The form was rendering this as "b'01234....'".

-- richvdh
2018-11-08 11:03:39 +00:00
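A purely illustrative reproduction of the Python 3 pitfall behind this fix: interpolating bytes into a str template renders the repr (`b'...'`) instead of the text, and decoding first resolves it.

```python
# Interpolating bytes into a str renders the repr, e.g. "b'0123abcd'".
userhmac = b"0123abcd"

broken = "consent?u=%s&h=%s" % ("user", userhmac)
fixed = "consent?u=%s&h=%s" % ("user", userhmac.decode("ascii"))

assert "b'0123abcd'" in broken   # the bug described above
assert "b'" not in fixed         # decode before formatting
```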
Richard van der Hoff
b55cdfaa31 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-11-08 10:47:56 +00:00
Richard van der Hoff
34406cf22c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-11-06 10:49:20 +00:00
Amber Brown
f91aefd245 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-31 04:41:03 +11:00
Erik Johnston
f8281f42c8 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-29 18:16:58 +00:00
Amber Brown
7171bdf279 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-29 23:14:47 +11:00
Erik Johnston
9f2d14ee26 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-26 09:52:23 +01:00
Amber Brown
ead471e72d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-22 22:18:02 +11:00
Richard van der Hoff
9a4011de46 Merge branch 'develop' into matrix-org-hotfixes 2018-10-18 16:37:01 +01:00
Amber Brown
33551be61b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-15 20:15:27 +11:00
Richard van der Hoff
eeb29d99fd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-09 09:49:08 +01:00
Richard van der Hoff
1a0c407e6b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-09 09:47:37 +01:00
Erik Johnston
c4b37cbf18 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-02 16:44:57 +01:00
Erik Johnston
7fa156af80 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-02 14:39:30 +01:00
Richard van der Hoff
78825f4f1c Merge branch 'develop' into matrix-org-hotfixes 2018-09-26 13:27:33 +01:00
Richard van der Hoff
6e15b5debe Revert "Actually set cache factors in workers"
This reverts commit e21c312e16.
2018-09-26 13:25:52 +01:00
Matthew Hodgson
2e0d2879d0 Merge branch 'develop' into matrix-org-hotfixes 2018-09-26 11:00:26 +01:00
Michael Kaye
128043072b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-24 11:20:10 +01:00
Erik Johnston
b2fda9d20e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-20 11:00:14 +01:00
Erik Johnston
3c8c5eabc2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-15 11:40:37 +01:00
Erik Johnston
2da2041e2e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-14 19:56:33 +01:00
Erik Johnston
b5eef203f4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-14 18:25:55 +01:00
Erik Johnston
df73da691f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-13 16:15:56 +01:00
Matthew Hodgson
30d054e0bb Merge branch 'develop' into matrix-org-hotfixes 2018-09-12 17:16:21 +01:00
Erik Johnston
ebb3cc4ab6 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-12 11:22:06 +01:00
Erik Johnston
17201abd53 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-11 14:17:33 +01:00
Erik Johnston
2f141f4c41 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-22 11:47:08 +01:00
Richard van der Hoff
638c0bf49b Merge branch 'rav/fix_gdpr_consent' into matrix-org-hotfixes 2018-08-21 22:54:35 +01:00
hera
d1065e6f51 Merge tag 'v0.33.3rc2' into matrix-org-hotfixes
Bugfixes
--------

- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs
([\#3723](https://github.com/matrix-org/synapse/issues/3723))
2018-08-21 19:12:14 +00:00
Erik Johnston
567863127a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-20 13:34:47 +01:00
Erik Johnston
f5abc10724 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-20 11:12:18 +01:00
Erik Johnston
bb795b56da Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-16 15:51:16 +01:00
Erik Johnston
4dd0604f61 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-15 15:37:05 +01:00
Richard van der Hoff
c05d278ba0 Merge branch 'rav/federation_metrics' into matrix-org-hotfixes 2018-08-07 19:11:29 +01:00
Erik Johnston
49a3163958 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-06 13:33:54 +01:00
Erik Johnston
1a568041fa Merge branch 'release-v0.33.1' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-02 15:28:32 +01:00
Erik Johnston
c9db8b0c32 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-24 17:22:23 +01:00
Erik Johnston
aa1bf10b91 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-24 15:49:38 +01:00
Erik Johnston
5222907bea Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 17:54:41 +01:00
Erik Johnston
e1eb147f2a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 16:45:22 +01:00
hera
e43eb47c5f Fixup limiter 2018-07-23 15:22:47 +00:00
hera
27eb4c45cd Lower hacky timeout for member limiter 2018-07-23 15:16:36 +00:00
Erik Johnston
b136d7ff8f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 16:09:40 +01:00
Richard van der Hoff
9e56e1ab30 Merge branch 'develop' into matrix-org-hotfixes 2018-07-19 16:40:28 +01:00
Erik Johnston
742f757337 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-19 10:26:13 +01:00
Richard van der Hoff
2f5dfe299c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-07-17 15:26:47 +01:00
Erik Johnston
e4eec87c6a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-17 11:18:39 +01:00
Erik Johnston
f793ff4571 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-17 10:04:33 +01:00
Richard van der Hoff
195aae2f16 Merge branch 'develop' into matrix-org-hotfixes 2018-07-12 12:09:25 +01:00
Erik Johnston
7c79f2cb72 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-12 09:59:58 +01:00
Richard van der Hoff
f04e35c170 Merge branch 'develop' into matrix-org-hotfixes 2018-07-10 18:04:03 +01:00
Matthew Hodgson
36bbac05bd Merge branch 'develop' of git+ssh://github.com/matrix-org/synapse into matrix-org-hotfixes 2018-07-06 19:21:09 +01:00
Erik Johnston
e2a4b7681e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-05 10:29:32 +01:00
Erik Johnston
957944eee4 Merge pull request #3476 from matrix-org/erikj/timeout_memberships
Timeout membership requests after 90s
2018-07-03 10:18:39 +01:00
Erik Johnston
bf425e533e Fix PEP8 2018-07-03 10:11:09 +01:00
Erik Johnston
ca21957b8a Timeout membership requests after 90s
This is a hacky fix to try to stop in-flight requests from building up
2018-07-02 13:56:08 +01:00
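A rough, asyncio-flavoured sketch of the idea (Synapse itself is Twisted-based): cap how long a membership request may stay in flight so stuck calls fail fast instead of accumulating.

```python
import asyncio

MEMBERSHIP_TIMEOUT_S = 90  # the 90s figure comes from the commit message

async def update_membership(room_id: str, user_id: str) -> None:
    # Stand-in for the real (potentially slow) remote-join work.
    await asyncio.sleep(0.1)

async def update_membership_with_timeout(room_id: str, user_id: str) -> None:
    try:
        await asyncio.wait_for(
            update_membership(room_id, user_id), timeout=MEMBERSHIP_TIMEOUT_S
        )
    except asyncio.TimeoutError:
        # Fail fast instead of letting stuck requests pile up.
        raise

asyncio.run(update_membership_with_timeout("!room:example.org", "@user:example.org"))
```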
Erik Johnston
6a95270671 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-29 14:10:29 +01:00
hera
82781f5838 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-06-28 21:09:28 +00:00
Matthew Hodgson
aae6d3ff69 Merge remote-tracking branch 'origin/revert-3451-hawkowl/sorteddict-api' into matrix-org-hotfixes 2018-06-26 18:36:29 +01:00
Matthew Hodgson
9175225adf Merge remote-tracking branch 'origin/hawkowl/sorteddict-api' into matrix-org-hotfixes 2018-06-26 17:52:37 +01:00
David Baker
7a32fa0101 Fix error on deleting users pending deactivation
Use simple_delete instead of simple_delete_one as commented
2018-06-26 11:57:44 +01:00
Erik Johnston
d46450195b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 20:14:34 +01:00
Erik Johnston
c0128c1021 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 20:12:13 +01:00
Erik Johnston
3320b7c9a4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 15:23:18 +01:00
Erik Johnston
4c22c9b0b6 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 14:37:13 +01:00
Richard van der Hoff
6d6ea1bb40 Merge branch 'develop' into matrix-org-hotfixes 2018-06-22 16:35:37 +01:00
aphrodite
9e38981ae4 Send HTTP pushes directly to http-priv rather than via Cloudflare
(This is a heinous hack that ought to be made more generic and pushed back to develop)
2018-06-22 15:58:15 +01:00
hera
463e7c2709 Lower member limiter 2018-06-22 15:58:15 +01:00
Richard van der Hoff
ce9d0b1d0c Fix earlier logging patch
`@cached` doesn't work on decorated functions, because it uses inspection on
the target to calculate the number of arguments.
2018-06-22 15:58:15 +01:00
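A minimal sketch of why an argument-inspecting decorator like `@cached` breaks when stacked on another decorator: a naive wrapper hides the real signature from `inspect`, while `functools.wraps` preserves it. The logging decorator here is illustrative, not the patch itself.

```python
import functools
import inspect

def naive_logged(f):
    def wrapper(*args, **kwargs):
        print("calling", f.__name__)
        return f(*args, **kwargs)
    return wrapper                     # hides f's real signature

def well_behaved_logged(f):
    @functools.wraps(f)                # records f as __wrapped__
    def wrapper(*args, **kwargs):
        print("calling", f.__name__)
        return f(*args, **kwargs)
    return wrapper

def get_users_in_room(self, room_id):
    ...

print(inspect.signature(naive_logged(get_users_in_room)))         # (*args, **kwargs)
print(inspect.signature(well_behaved_logged(get_users_in_room)))  # (self, room_id)
```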
Richard van der Hoff
80786d5caf Logging for get_users_in_room 2018-06-22 15:58:15 +01:00
Richard van der Hoff
e18378c3e2 Increase member limiter to 20
Let's see if this makes the bridges go faster, or if it kills the synapse
master.
2018-06-22 15:58:15 +01:00
hera
0ca2857baa increase sync cache to 2 minutes
to give synchrotrons being hammered by repeated initial /syncs a better
chance to actually complete, and so avoid a DoS
2018-06-22 15:58:15 +01:00
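A minimal sketch of the response-cache pattern this tweak tunes: identical requests share one in-flight computation, and the result is kept for a short TTL so immediate retries are served from cache. This is illustrative, not Synapse's actual `ResponseCache`.

```python
import asyncio
import time

class SimpleResponseCache:
    def __init__(self, ttl_s: float) -> None:
        self._ttl = ttl_s
        self._entries = {}  # key -> (inserted_at, asyncio.Task)

    async def wrap(self, key, fn):
        now = time.monotonic()
        entry = self._entries.get(key)
        if entry is not None and now - entry[0] < self._ttl:
            return await entry[1]      # reuse in-flight or recent result
        task = asyncio.ensure_future(fn())
        self._entries[key] = (now, task)
        return await task
```

Raising the TTL from seconds to two minutes means a client that re-sends the same expensive initial /sync in a tight loop keeps hitting the cached computation instead of starting a fresh one each time.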
Erik Johnston
e21c312e16 Actually set cache factors in workers 2018-06-22 15:58:15 +01:00
Richard van der Hoff
1031bd25f8 Avoid doing presence updates on replication reconnect
Presence is supposed to be disabled on matrix.org, so we shouldn't send a load
of USER_SYNC commands every time the synchrotron reconnects to the master.
2018-06-22 15:58:15 +01:00
hera
fae708c0e8 Disable auth on room_members for now
because the moznet bridge is broken (https://github.com/matrix-org/matrix-appservice-irc/issues/506)
2018-06-22 15:58:15 +01:00
Erik Johnston
8f8ea91eef Bump LAST_SEEN_GRANULARITY in client_ips 2018-06-22 15:58:15 +01:00
Erik Johnston
7a1406d144 Prefill client_ip_last_seen in replication 2018-06-22 15:58:15 +01:00
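A minimal sketch of the `LAST_SEEN_GRANULARITY` idea behind these two commits: only write a client's "last seen" row when the previous write is older than the granularity, so busy clients don't hammer the database. The value and key shape are illustrative; the bumped figure is not given here.

```python
import time

# The exact bumped value isn't given here; 10 minutes is illustrative.
LAST_SEEN_GRANULARITY_MS = 10 * 60 * 1000

_last_written = {}  # e.g. (user_id, access_token, ip) -> last write time in ms

def should_write_client_ip(key) -> bool:
    now_ms = int(time.time() * 1000)
    last = _last_written.get(key)
    if last is not None and now_ms - last < LAST_SEEN_GRANULARITY_MS:
        return False  # seen recently enough; skip the DB write
    _last_written[key] = now_ms
    return True
```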
Erik Johnston
6373874833 Move event sending to end in shutdown room admin api 2018-06-22 15:58:15 +01:00
Erik Johnston
a79823e64b Add dummy presence REST handler to frontend proxy
The handler no-ops all requests as presence is disabled.
2018-06-22 15:58:15 +01:00
Erik Johnston
1766a5fdc0 Increase MAX_EVENTS_BEHIND for replication clients 2018-06-22 15:58:14 +01:00
Erik Johnston
e6b1ea3eb2 Disable presence in txn queue 2018-06-22 15:58:14 +01:00
Erik Johnston
e5537cf983 Limit concurrent AS joins 2018-06-22 15:58:14 +01:00
Erik Johnston
43bb12e640 Disable presence
This reverts commit 0ebd376a53 and
disables presence a bit more
2018-06-22 15:58:14 +01:00
Erik Johnston
66dcbf47a3 Disable auto search for prefixes in event search 2018-06-22 15:58:14 +01:00
Erik Johnston
a285fe05fd Add timeout to ResponseCache of /public_rooms 2018-06-22 15:58:14 +01:00
167 changed files with 1395 additions and 3508 deletions


@@ -1,8 +0,0 @@
# Black reformatting (#5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
# Target Python 3.5 with black (#8664).
aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1

.gitignore

@@ -6,14 +6,13 @@
*.egg
*.egg-info
*.lock
*.py[cod]
*.pyc
*.snap
*.tac
_trial_temp/
_trial_temp*/
/out
.DS_Store
__pycache__/
# stuff that is likely to exist when you run a server locally
/*.db


@@ -1,150 +1,9 @@
Synapse 1.30.0 (2021-03-22)
===========================
Note that this release deprecates the ability for appservices to
call `POST /_matrix/client/r0/register` without the body parameter `type`. Appservice
developers should use a `type` value of `m.login.application_service` as
per [the spec](https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions).
In future releases, calling this endpoint with an access token - but without a `m.login.application_service`
type - will fail.
No significant changes.
Synapse 1.30.0rc1 (2021-03-16)
==============================
Features
--------
- Add prometheus metrics for number of users successfully registering and logging in. ([\#9510](https://github.com/matrix-org/synapse/issues/9510), [\#9511](https://github.com/matrix-org/synapse/issues/9511), [\#9573](https://github.com/matrix-org/synapse/issues/9573))
- Add `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time` prometheus metrics, which monitor federation delays by reporting the timestamps of messages sent and received to a set of remote servers. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Add support for generating JSON Web Tokens dynamically for use as OIDC client secrets. ([\#9549](https://github.com/matrix-org/synapse/issues/9549))
- Optimise handling of incomplete room history for incoming federation. ([\#9601](https://github.com/matrix-org/synapse/issues/9601))
- Finalise support for allowing clients to pick an SSO Identity Provider ([MSC2858](https://github.com/matrix-org/matrix-doc/pull/2858)). ([\#9617](https://github.com/matrix-org/synapse/issues/9617))
- Tell spam checker modules about the SSO IdP a user registered through if one was used. ([\#9626](https://github.com/matrix-org/synapse/issues/9626))
Bugfixes
--------
- Fix long-standing bug when generating thumbnails for some images with transparency: `TypeError: cannot unpack non-iterable int object`. ([\#9473](https://github.com/matrix-org/synapse/issues/9473))
- Purge chain cover indexes for events that were purged prior to Synapse v1.29.0. ([\#9542](https://github.com/matrix-org/synapse/issues/9542), [\#9583](https://github.com/matrix-org/synapse/issues/9583))
- Fix bug where federation requests were not correctly retried on 5xx responses. ([\#9567](https://github.com/matrix-org/synapse/issues/9567))
- Fix re-activating an account via the admin API when local passwords are disabled. ([\#9587](https://github.com/matrix-org/synapse/issues/9587))
- Fix a bug introduced in Synapse 1.20 which caused incoming federation transactions to stack up, causing slow recovery from outages. ([\#9597](https://github.com/matrix-org/synapse/issues/9597))
- Fix a bug introduced in v1.28.0 where the OpenID Connect callback endpoint could error with a `MacaroonInitException`. ([\#9620](https://github.com/matrix-org/synapse/issues/9620))
- Fix Internal Server Error on `GET /_synapse/client/saml2/authn_response` request. ([\#9623](https://github.com/matrix-org/synapse/issues/9623))
Updates to the Docker image
---------------------------
- Make use of an improved malloc implementation (`jemalloc`) in the docker image. ([\#8553](https://github.com/matrix-org/synapse/issues/8553))
Improved Documentation
----------------------
- Add relayd entry to reverse proxy example configurations. ([\#9508](https://github.com/matrix-org/synapse/issues/9508))
- Improve the SAML2 upgrade notes for 1.27.0. ([\#9550](https://github.com/matrix-org/synapse/issues/9550))
- Link to the "List user's media" admin API from the media admin API docs. ([\#9571](https://github.com/matrix-org/synapse/issues/9571))
- Clarify the spam checker modules documentation example to mention that `parse_config` is a required method. ([\#9580](https://github.com/matrix-org/synapse/issues/9580))
- Clarify the sample configuration for `stats` settings. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
Deprecations and Removals
-------------------------
- The `synapse_federation_last_sent_pdu_age` and `synapse_federation_last_received_pdu_age` prometheus metrics have been removed. They are replaced by `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time`. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Registering an Application Service user without using the `m.login.application_service` login type will be unsupported in an upcoming Synapse release. ([\#9559](https://github.com/matrix-org/synapse/issues/9559))
Internal Changes
----------------
- Add tests to ResponseCache. ([\#9458](https://github.com/matrix-org/synapse/issues/9458))
- Add type hints to purge room and server notice admin API. ([\#9520](https://github.com/matrix-org/synapse/issues/9520))
- Add extra logging to ObservableDeferred when callbacks throw exceptions. ([\#9523](https://github.com/matrix-org/synapse/issues/9523))
- Fix incorrect type hints. ([\#9528](https://github.com/matrix-org/synapse/issues/9528), [\#9543](https://github.com/matrix-org/synapse/issues/9543), [\#9591](https://github.com/matrix-org/synapse/issues/9591), [\#9608](https://github.com/matrix-org/synapse/issues/9608), [\#9618](https://github.com/matrix-org/synapse/issues/9618))
- Add an additional test for purging a room. ([\#9541](https://github.com/matrix-org/synapse/issues/9541))
- Add a `.git-blame-ignore-revs` file with the hashes of auto-formatting. ([\#9560](https://github.com/matrix-org/synapse/issues/9560))
- Increase the threshold before which outbound federation to a server goes into "catch up" mode, which is expensive for the remote server to handle. ([\#9561](https://github.com/matrix-org/synapse/issues/9561))
- Fix spurious errors reported by the `config-lint.sh` script. ([\#9562](https://github.com/matrix-org/synapse/issues/9562))
- Fix type hints and tests for BlacklistingAgentWrapper and BlacklistingReactorWrapper. ([\#9563](https://github.com/matrix-org/synapse/issues/9563))
- Do not have mypy ignore type hints from unpaddedbase64. ([\#9568](https://github.com/matrix-org/synapse/issues/9568))
- Improve efficiency of calculating the auth chain in large rooms. ([\#9576](https://github.com/matrix-org/synapse/issues/9576))
- Convert `synapse.types.Requester` to an `attrs` class. ([\#9586](https://github.com/matrix-org/synapse/issues/9586))
- Add logging for redis connection setup. ([\#9590](https://github.com/matrix-org/synapse/issues/9590))
- Improve logging when processing incoming transactions. ([\#9596](https://github.com/matrix-org/synapse/issues/9596))
- Remove unused `stats.retention` setting, and emit a warning if stats are disabled. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
- Prevent attempting to bundle aggregations for state events in /context APIs. ([\#9619](https://github.com/matrix-org/synapse/issues/9619))
Synapse 1.29.0 (2021-03-08)
===========================
Synapse 1.xx.0
==============
Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change.
No significant changes.
Synapse 1.29.0rc1 (2021-03-04)
==============================
Features
--------
- Add rate limiters to cross-user key sharing requests. ([\#8957](https://github.com/matrix-org/synapse/issues/8957))
- Add `order_by` to the admin API `GET /_synapse/admin/v1/users/<user_id>/media`. Contributed by @dklimpel. ([\#8978](https://github.com/matrix-org/synapse/issues/8978))
- Add some configuration settings to make users' profile data more private. ([\#9203](https://github.com/matrix-org/synapse/issues/9203))
- The `no_proxy` and `NO_PROXY` environment variables are now respected in proxied HTTP clients with the lowercase form taking precedence if both are present. Additionally, the lowercase `https_proxy` environment variable is now respected in proxied HTTP clients on top of existing support for the uppercase `HTTPS_PROXY` form and takes precedence if both are present. Contributed by Timothy Leung. ([\#9372](https://github.com/matrix-org/synapse/issues/9372))
- Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users. ([\#9383](https://github.com/matrix-org/synapse/issues/9383), [\#9385](https://github.com/matrix-org/synapse/issues/9385))
- Add support for regenerating thumbnails if they have been deleted but the original image is still stored. ([\#9438](https://github.com/matrix-org/synapse/issues/9438))
- Add support for `X-Forwarded-Proto` header when using a reverse proxy. ([\#9472](https://github.com/matrix-org/synapse/issues/9472), [\#9501](https://github.com/matrix-org/synapse/issues/9501), [\#9512](https://github.com/matrix-org/synapse/issues/9512), [\#9539](https://github.com/matrix-org/synapse/issues/9539))
Bugfixes
--------
- Fix a bug where users' pushers were not all deleted when they deactivated their account. ([\#9285](https://github.com/matrix-org/synapse/issues/9285), [\#9516](https://github.com/matrix-org/synapse/issues/9516))
- Fix a bug where a lot of unnecessary presence updates were sent when joining a room. ([\#9402](https://github.com/matrix-org/synapse/issues/9402))
- Fix a bug that caused multiple calls to the experimental `shared_rooms` endpoint to return stale results. ([\#9416](https://github.com/matrix-org/synapse/issues/9416))
- Fix a bug in single sign-on which could cause a "No session cookie found" error. ([\#9436](https://github.com/matrix-org/synapse/issues/9436))
- Fix bug introduced in v1.27.0 where allowing a user to choose their own username when logging in via single sign-on did not work unless an `idp_icon` was defined. ([\#9440](https://github.com/matrix-org/synapse/issues/9440))
- Fix a bug introduced in v1.26.0 where some sequences were not properly configured when running `synapse_port_db`. ([\#9449](https://github.com/matrix-org/synapse/issues/9449))
- Fix deleting pushers when using sharded pushers. ([\#9465](https://github.com/matrix-org/synapse/issues/9465), [\#9466](https://github.com/matrix-org/synapse/issues/9466), [\#9479](https://github.com/matrix-org/synapse/issues/9479), [\#9536](https://github.com/matrix-org/synapse/issues/9536))
- Fix missing startup checks for the consistency of certain PostgreSQL sequences. ([\#9470](https://github.com/matrix-org/synapse/issues/9470))
- Fix a long-standing bug where the media repository could leak file descriptors while previewing media. ([\#9497](https://github.com/matrix-org/synapse/issues/9497))
- Properly purge the event chain cover index when purging history. ([\#9498](https://github.com/matrix-org/synapse/issues/9498))
- Fix missing chain cover index due to a schema delta not being applied correctly. Only affected servers that ran development versions. ([\#9503](https://github.com/matrix-org/synapse/issues/9503))
- Fix a bug introduced in v1.25.0 where `/_synapse/admin/join/` would fail when given a room alias. ([\#9506](https://github.com/matrix-org/synapse/issues/9506))
- Prevent presence background jobs from running when presence is disabled. ([\#9530](https://github.com/matrix-org/synapse/issues/9530))
- Fix rare edge case that caused a background update to fail if the server had rejected an event that had duplicate auth events. ([\#9537](https://github.com/matrix-org/synapse/issues/9537))
Improved Documentation
----------------------
- Update the example systemd config to propagate reloads to individual units. ([\#9463](https://github.com/matrix-org/synapse/issues/9463))
Internal Changes
----------------
- Add documentation and type hints to `parse_duration`. ([\#9432](https://github.com/matrix-org/synapse/issues/9432))
- Remove vestiges of `uploads_path` configuration setting. ([\#9462](https://github.com/matrix-org/synapse/issues/9462))
- Add a comment about systemd-python. ([\#9464](https://github.com/matrix-org/synapse/issues/9464))
- Test that we require validated email for email pushers. ([\#9496](https://github.com/matrix-org/synapse/issues/9496))
- Allow python to generate bytecode for synapse. ([\#9502](https://github.com/matrix-org/synapse/issues/9502))
- Fix incorrect type hints. ([\#9515](https://github.com/matrix-org/synapse/issues/9515), [\#9518](https://github.com/matrix-org/synapse/issues/9518))
- Add type hints to device and event report admin API. ([\#9519](https://github.com/matrix-org/synapse/issues/9519))
- Add type hints to user admin API. ([\#9521](https://github.com/matrix-org/synapse/issues/9521))
- Bump the versions of mypy and mypy-zope used for static type checking. ([\#9529](https://github.com/matrix-org/synapse/issues/9529))
Synapse 1.28.0 (2021-02-25)
===========================


@@ -20,10 +20,9 @@ recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
recursive-include tests *.pem
recursive-include tests *.p8
recursive-include tests *.crt
recursive-include tests *.key
include tests/http/ca.crt
include tests/http/ca.key
include tests/http/server.key
recursive-include synapse/res *
recursive-include synapse/static *.css


@@ -183,9 +183,8 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.


@@ -98,9 +98,9 @@ will log a warning on each received request.
To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
indicate the protocol used by the client. See the `reverse proxy documentation
<docs/reverse_proxy.md>`_, where the example configurations have been updated to
show how to set this header.
indicate the protocol used by the client. See the [reverse proxy
documentation](docs/reverse_proxy.md), where the example configurations have
been updated to show how to set this header.
(Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
sets `X-Forwarded-Proto` by default.)
@@ -124,13 +124,6 @@ This version changes the URI used for callbacks from OAuth2 and SAML2 identity p
need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
"ACS location" (also known as "allowed callback URLs") at the identity provider.
The "Issuer" in the "AuthnRequest" to the SAML2 identity provider is also updated to
``[synapse public baseurl]/_synapse/client/saml2/metadata.xml``. If your SAML2 identity
provider uses this property to validate or otherwise identify Synapse, its configuration
will need to be updated to use the new URL. Alternatively you could create a new, separate
"EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in
the existing "EntityDescriptor" as they were.
Changes to HTML templates
-------------------------

New one-line changelog fragments added under `changelog.d/`:

changelog.d/8675.misc: Temporarily drop cross-user m.room_key_request to_device messages over performance concerns.
changelog.d/8957.feature: Add rate limiters to cross-user key sharing requests.
changelog.d/8978.feature: Add `order_by` to the admin API `GET /_synapse/admin/v1/users/<user_id>/media`. Contributed by @dklimpel.
changelog.d/9203.feature: Add some configuration settings to make users' profile data more private.
changelog.d/9285.bugfix: Fix a bug where users' pushers were not all deleted when they deactivated their account.
changelog.d/9358.misc: Added a fix that invalidates cache for empty timed-out sync responses.
changelog.d/9383.feature: Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users.
changelog.d/9385.feature: Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users.
changelog.d/9402.bugfix: Fix a bug where a lot of unnecessary presence updates were sent when joining a room.
changelog.d/9416.bugfix: Fix a bug that caused multiple calls to the experimental `shared_rooms` endpoint to return stale results.
changelog.d/9432.misc: Add documentation and type hints to `parse_duration`.
changelog.d/9436.bugfix: Fix a bug in single sign-on which could cause a "No session cookie found" error.
changelog.d/9438.feature: Add support for regenerating thumbnails if they have been deleted but the original image is still stored.
changelog.d/9440.bugfix: Fix bug introduced in v1.27.0 where allowing a user to choose their own username when logging in via single sign-on did not work unless an `idp_icon` was defined.
changelog.d/9449.bugfix: Fix a bug introduced in v1.26.0 where some sequences were not properly configured when running `synapse_port_db`.
changelog.d/9462.misc: Remove vestiges of `uploads_path` configuration setting.
changelog.d/9463.doc: Update the example systemd config to propagate reloads to individual units.
changelog.d/9464.misc: Add a comment about systemd-python.
changelog.d/9465.bugfix: Fix deleting pushers when using sharded pushers.
changelog.d/9466.bugfix: Fix deleting pushers when using sharded pushers.
changelog.d/9470.bugfix: Fix missing startup checks for the consistency of certain PostgreSQL sequences.
changelog.d/9472.feature: Add support for `X-Forwarded-Proto` header when using a reverse proxy.
changelog.d/9479.bugfix: Fix deleting pushers when using sharded pushers.
changelog.d/9496.misc: Test that we require validated email for email pushers.
changelog.d/9501.feature: Add support for `X-Forwarded-Proto` header when using a reverse proxy.


@@ -58,10 +58,10 @@ trap "rm -r $tmpdir" EXIT
cp -r tests "$tmpdir"
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
"${TARGET_PYTHON}" -B -m twisted.trial --reporter=text -j2 tests
# build the config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_config" \
--config-dir="/etc/matrix-synapse" \
--data-dir="/var/lib/matrix-synapse" |
perl -pe '
@@ -87,7 +87,7 @@ PYTHONPATH="$tmpdir" \
' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
# build the log config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_log_config" \
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
# add a dependency on the right version of python to substvars.

debian/changelog

@@ -1,19 +1,3 @@
matrix-synapse-py3 (1.30.0) stable; urgency=medium
* New synapse release 1.30.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 22 Mar 2021 13:15:34 +0000
matrix-synapse-py3 (1.29.0) stable; urgency=medium
[ Jonathan de Jong ]
* Remove the python -B flag (don't generate bytecode) in scripts and documentation.
[ Synapse Packaging team ]
* New synapse release 1.29.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 08 Mar 2021 13:51:50 +0000
matrix-synapse-py3 (1.28.0) stable; urgency=medium
* New synapse release 1.28.0.

debian/synctl.1

@@ -44,7 +44,7 @@ Configuration file may be generated as follows:
.
.nf
$ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
$ python \-B \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
.
.fi
.

debian/synctl.ronn

@@ -41,7 +41,7 @@ process.
Configuration file may be generated as follows:
$ python -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
$ python -B -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
## ENVIRONMENT


@@ -69,7 +69,6 @@ RUN apt-get update && apt-get install -y \
libpq5 \
libwebp6 \
xmlsec1 \
libjemalloc2 \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local


@@ -204,8 +204,3 @@ healthcheck:
timeout: 10s
retries: 3
```
## Using jemalloc
Jemalloc is embedded in the image and will be used instead of the default allocator.
You can read about jemalloc by reading the Synapse [README](../README.md)


@@ -3,7 +3,6 @@
import codecs
import glob
import os
import platform
import subprocess
import sys
@@ -214,13 +213,6 @@ def main(args, environ):
if "-m" not in args:
args = ["-m", synapse_worker] + args
jemallocpath = "/usr/lib/%s-linux-gnu/libjemalloc.so.2" % (platform.machine(),)
if os.path.isfile(jemallocpath):
environ["LD_PRELOAD"] = jemallocpath
else:
log("Could not find %s, will not use" % (jemallocpath,))
# if there are no config files passed to synapse, try adding the default file
if not any(p.startswith("--config-path") or p.startswith("-c") for p in args):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
@@ -256,9 +248,9 @@ running with 'migrate_config'. See the README for more details.
args = ["python"] + args
if ownership is not None:
args = ["gosu", ownership] + args
os.execve("/usr/sbin/gosu", args, environ)
os.execv("/usr/sbin/gosu", args)
else:
os.execve("/usr/local/bin/python", args, environ)
os.execv("/usr/local/bin/python", args)
if __name__ == "__main__":


@@ -1,7 +1,5 @@
# Contents
- [Querying media](#querying-media)
* [List all media in a room](#list-all-media-in-a-room)
* [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
- [List all media in a room](#list-all-media-in-a-room)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
@@ -12,11 +10,7 @@
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
- [Purge Remote Media API](#purge-remote-media-api)
# Querying media
These APIs allow extracting media information from the homeserver.
## List all media in a room
# List all media in a room
This API gets a list of known media in a room.
However, it only shows media from unencrypted events or rooms.
@@ -42,12 +36,6 @@ The API returns a JSON body like the following:
}
```
## List all media uploaded by a user
Listing all media that has been uploaded by a local user can be achieved through
the use of the [List media of a user](user_admin_api.rst#list-media-of-a-user)
Admin API.
# Quarantine media
Quarantining media means that it is marked as inaccessible by users. It applies


@@ -226,7 +226,7 @@ Synapse config:
oidc_providers:
- idp_id: github
idp_name: Github
idp_brand: "github" # optional: styling hint for clients
idp_brand: "org.matrix.github" # optional: styling hint for clients
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
@@ -252,7 +252,7 @@ oidc_providers:
oidc_providers:
- idp_id: google
idp_name: Google
idp_brand: "google" # optional: styling hint for clients
idp_brand: "org.matrix.google" # optional: styling hint for clients
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -299,7 +299,7 @@ Synapse config:
oidc_providers:
- idp_id: gitlab
idp_name: Gitlab
idp_brand: "gitlab" # optional: styling hint for clients
idp_brand: "org.matrix.gitlab" # optional: styling hint for clients
issuer: "https://gitlab.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -334,7 +334,7 @@ Synapse config:
```yaml
- idp_id: facebook
idp_name: Facebook
idp_brand: "facebook" # optional: styling hint for clients
idp_brand: "org.matrix.facebook" # optional: styling hint for clients
discover: false
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
@@ -386,7 +386,7 @@ oidc_providers:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.full_name }}"
display_name_template: "{{ user.full_name }}"
```
### XWiki
@@ -401,7 +401,8 @@ oidc_providers:
idp_name: "XWiki"
issuer: "https://myxwikihost/xwiki/oidc/"
client_id: "your-client-id" # TO BE FILLED
client_auth_method: none
# Needed until https://github.com/matrix-org/synapse/issues/9212 is fixed
client_secret: "dontcare"
scopes: ["openid", "profile"]
user_profile_method: "userinfo_endpoint"
user_mapping_provider:
@@ -409,40 +410,3 @@ oidc_providers:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
## Apple
Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.
You will need to create a new "Services ID" for SiWA, and create and download a
private key with "SiWA" enabled.
As well as the private key file, you will need:
* Client ID: the "identifier" you gave the "Services ID"
* Team ID: a 10-character ID associated with your developer account.
* Key ID: the 10-character identifier for the key.
https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
documentation on setting up SiWA.
The synapse config will look like this:
```yaml
- idp_id: apple
idp_name: Apple
issuer: "https://appleid.apple.com"
client_id: "your-client-id" # Set to the "identifier" for your "ServicesID"
client_auth_method: "client_secret_post"
client_secret_jwt_key:
key_file: "/path/to/AuthKey_KEYIDCODE.p8" # point to your key file
jwt_header:
alg: ES256
kid: "KEYIDCODE" # Set to the 10-char Key ID
jwt_payload:
iss: TEAMIDCODE # Set to the 10-char Team ID
scopes: ["name", "email", "openid"]
authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
user_mapping_provider:
config:
email_template: "{{ user.email }}"
```


@@ -3,9 +3,8 @@
It is recommended to put a reverse proxy such as
[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy),
[HAProxy](https://www.haproxy.org/) or
[relayd](https://man.openbsd.org/relayd.8) in front of Synapse. One advantage
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy) or
[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
@@ -54,8 +53,6 @@ server {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 50M;
@@ -163,52 +160,6 @@ backend matrix
server matrix 127.0.0.1:8008
```
### Relayd
```
table <webserver> { 127.0.0.1 }
table <matrixserver> { 127.0.0.1 }
http protocol "https" {
tls { no tlsv1.0, ciphers "HIGH" }
tls keypair "example.com"
match header set "X-Forwarded-For" value "$REMOTE_ADDR"
match header set "X-Forwarded-Proto" value "https"
# set CORS header for .well-known/matrix/server, .well-known/matrix/client
# httpd does not support setting headers, so do it here
match request path "/.well-known/matrix/*" tag "matrix-cors"
match response tagged "matrix-cors" header set "Access-Control-Allow-Origin" value "*"
pass quick path "/_matrix/*" forward to <matrixserver>
pass quick path "/_synapse/client/*" forward to <matrixserver>
# pass on non-matrix traffic to webserver
pass forward to <webserver>
}
relay "https_traffic" {
listen on egress port 443 tls
protocol "https"
forward to <matrixserver> port 8008 check tcp
forward to <webserver> port 8080 check tcp
}
http protocol "matrix" {
tls { no tlsv1.0, ciphers "HIGH" }
tls keypair "example.com"
block
pass quick path "/_matrix/*" forward to <matrixserver>
pass quick path "/_synapse/client/*" forward to <matrixserver>
}
relay "matrix_federation" {
listen on egress port 8448 tls
protocol "matrix"
forward to <matrixserver> port 8008 check tcp
}
```
## Homeserver Configuration
You will also want to set `bind_addresses: ['127.0.0.1']` and


@@ -89,7 +89,8 @@ pid_file: DATADIR/homeserver.pid
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, unless allow_profile_lookup_over_federation is set to false.
# API, so this setting is of limited value if federation is enabled on
# the server.
#
#require_auth_for_profile_requests: true
@@ -1779,26 +1780,7 @@ saml2_config:
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
# client_secret: Required. oauth2 client secret to use.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
@@ -1919,7 +1901,7 @@ oidc_providers:
#
#- idp_id: github
# idp_name: Github
# idp_brand: github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
@@ -2645,20 +2627,19 @@ user_directory:
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
# Local statistics collection. Used in populating the room directory.
#
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
# Server Notices room configuration


@@ -14,7 +14,6 @@ The Python class is instantiated with two objects:
* An instance of `synapse.module_api.ModuleApi`.
It then implements methods which return a boolean to alter behavior in Synapse.
All the methods must be defined.
There's a generic method for checking every event (`check_event_for_spam`), as
well as some specific methods:
@@ -25,7 +24,6 @@ well as some specific methods:
* `user_may_publish_room`
* `check_username_for_spam`
* `check_registration_for_spam`
* `check_media_file_for_spam`
The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
@@ -33,10 +31,6 @@ are documented in the `synapse.events.spamcheck.SpamChecker` class.
The `ModuleApi` class provides a way for the custom spam checker class to
call back into the homeserver internals.
Additionally, a `parse_config` method is mandatory and receives the plugin config
dictionary. After parsing, It must return an object which will be
passed to `__init__` later.
### Example
```python
@@ -47,10 +41,6 @@ class ExampleSpamChecker:
self.config = config
self.api = api
@staticmethod
def parse_config(config):
return config
async def check_event_for_spam(self, foo):
return False # allow all events
@@ -69,13 +59,7 @@ class ExampleSpamChecker:
async def check_username_for_spam(self, user_profile):
return False # allow all usernames
async def check_registration_for_spam(
self,
email_threepid,
username,
request_info,
auth_provider_id,
):
async def check_registration_for_spam(self, email_threepid, username, request_info):
return RegistrationBehaviour.ALLOW # allow all registrations
async def check_media_file_for_spam(self, file_wrapper, file_info):


@@ -69,7 +69,6 @@ files =
synapse/util/async_helpers.py,
synapse/util/caches,
synapse/util/metrics.py,
synapse/util/macaroons.py,
synapse/util/stringutils.py,
tests/replication,
tests/test_utils,
@@ -117,6 +116,9 @@ ignore_missing_imports = True
[mypy-saml2.*]
ignore_missing_imports = True
[mypy-unpaddedbase64]
ignore_missing_imports = True
[mypy-canonicaljson]
ignore_missing_imports = True


@@ -2,14 +2,9 @@
# Find linting errors in Synapse's default config file.
# Exits with 0 if there are no problems, or another code otherwise.
# cd to the root of the repository
cd `dirname $0`/..
# Restore backup of sample config upon script exit
trap "mv docs/sample_config.yaml.bak docs/sample_config.yaml" EXIT
# Fix non-lowercase true/false values
sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
rm docs/sample_config.yaml.bak
# Check if anything changed
diff docs/sample_config.yaml docs/sample_config.yaml.bak
git diff --exit-code docs/sample_config.yaml


@@ -47,7 +47,6 @@ from synapse.storage.databases.main.events_bg_updates import (
from synapse.storage.databases.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.databases.main.pusher import PusherWorkerStore
from synapse.storage.databases.main.registration import (
RegistrationBackgroundUpdateStore,
find_max_generated_user_id_localpart,
@@ -178,7 +177,6 @@ class Store(
UserDirectoryBackgroundUpdateStore,
EndToEndKeyBackgroundStore,
StatsStore,
PusherWorkerStore,
):
def execute(self, f, *args, **kwargs):
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)


@@ -3,7 +3,6 @@ test_suite = tests
[check-manifest]
ignore =
.git-blame-ignore-revs
contrib
contrib/*
docs/*


@@ -102,7 +102,7 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
"flake8",
]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.790", "mypy-zope==0.2.8"]
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.


@@ -17,9 +17,7 @@
"""
from typing import Any, List, Optional, Type, Union
from twisted.internet import protocol
class RedisProtocol(protocol.Protocol):
class RedisProtocol:
def publish(self, channel: str, message: bytes): ...
async def ping(self) -> None: ...
async def set(
@@ -54,7 +52,7 @@ def lazyConnection(
class ConnectionHandler: ...
class RedisFactory(protocol.ReconnectingClientFactory):
class RedisFactory:
continueTrying: bool
handler: RedisProtocol
pool: List[RedisProtocol]


@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.30.0"
__version__ = "1.28.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -39,7 +39,6 @@ from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
from synapse.util.caches.lrucache import LruCache
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -164,7 +163,7 @@ class Auth:
async def get_user_by_req(
self,
request: SynapseRequest,
request: Request,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
@@ -409,7 +408,7 @@ class Auth:
raise _InvalidMacaroonException()
try:
user_id = get_value_from_macaroon(macaroon, "user_id")
user_id = self.get_user_id_from_macaroon(macaroon)
guest = False
for caveat in macaroon.caveats:
@@ -417,12 +416,7 @@ class Auth:
guest = True
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (
pymacaroons.exceptions.MacaroonException,
KeyError,
TypeError,
ValueError,
):
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise InvalidClientTokenError("Invalid macaroon passed.")
if rights == "access":
@@ -430,6 +424,27 @@ class Auth:
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
Does *not* validate the macaroon.
Args:
macaroon (pymacaroons.Macaroon): The macaroon to validate
Returns:
(str) user id
Raises:
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
@@ -450,13 +465,21 @@ class Auth:
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
satisfy_expiry(v, self.clock.time_msec)
v.satisfy_general(self._verify_expiry)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.verify(macaroon, self._macaroon_secret_key)
def _verify_expiry(self, caveat):
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix) :])
now = self.hs.get_clock().time_msec()
return now < expiry
def get_appservice_by_req(self, request: SynapseRequest) -> ApplicationService:
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)


@@ -17,6 +17,8 @@ import sys
from synapse import python_dependencies # noqa: E402
sys.dont_write_bytecode = True
logger = logging.getLogger(__name__)
try:


@@ -23,7 +23,6 @@ from typing_extensions import ContextManager
from twisted.internet import address
from twisted.web.resource import IResource
from twisted.web.server import Request
import synapse
import synapse.events
@@ -191,7 +190,7 @@ class KeyUploadServlet(RestServlet):
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
async def on_POST(self, request: Request, device_id: Optional[str]):
async def on_POST(self, request, device_id):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
@@ -224,12 +223,10 @@ class KeyUploadServlet(RestServlet):
header: request.requestHeaders.getRawHeaders(header, [])
for header in (b"Authorization", b"User-Agent")
}
# Add the previous hop to the X-Forwarded-For header.
# Add the previous hop the the X-Forwarded-For header.
x_forwarded_for = request.requestHeaders.getRawHeaders(
b"X-Forwarded-For", []
)
# we use request.client here, since we want the previous hop, not the
# original client (as returned by request.getClientAddress()).
if isinstance(request.client, (address.IPv4Address, address.IPv6Address)):
previous_host = request.client.host.encode("ascii")
# If the header exists, add to the comma-separated list of the first
@@ -242,14 +239,6 @@ class KeyUploadServlet(RestServlet):
x_forwarded_for = [previous_host]
headers[b"X-Forwarded-For"] = x_forwarded_for
# Replicate the original X-Forwarded-Proto header. Note that
# XForwardedForRequest overrides isSecure() to give us the original protocol
# used by the client, as opposed to the protocol used by our upstream proxy
# - which is what we want here.
headers[b"X-Forwarded-Proto"] = [
b"https" if request.isSecure() else b"http"
]
try:
result = await self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
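
The header handling above is easier to follow in isolation. A condensed, Twisted-free sketch of the same logic, using plain byte lists in place of request.requestHeaders (names are illustrative):

from typing import Dict, List, Optional

def forwarding_headers(
    existing_xff: List[bytes], previous_hop: Optional[bytes], is_secure: bool
) -> Dict[bytes, List[bytes]]:
    headers: Dict[bytes, List[bytes]] = {}
    xff = list(existing_xff)
    if previous_hop:
        if xff:
            # extend the comma-separated list in the first header instance
            xff = [xff[0] + b", " + previous_hop] + xff[1:]
        else:
            xff = [previous_hop]
    headers[b"X-Forwarded-For"] = xff
    # replicate the protocol the client originally used
    headers[b"X-Forwarded-Proto"] = [b"https" if is_secure else b"http"]
    return headers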

synapse/appservice/api.py

@@ -90,7 +90,7 @@ class ApplicationServiceApi(SimpleHttpClient):
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(
hs.get_clock(), "as_protocol_meta", timeout_ms=HOUR_IN_MS
hs, "as_protocol_meta", timeout_ms=HOUR_IN_MS
) # type: ResponseCache[Tuple[str, str]]
async def query_user(self, service, user_id):
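
Several hunks in this compare switch between ResponseCache(hs, ...) and ResponseCache(hs.get_clock(), ...). For readers unfamiliar with the class, here is a toy asyncio sketch of the de-duplication it provides, not Synapse's implementation (the real cache also keeps completed results around for timeout_ms):

import asyncio
from typing import Any, Awaitable, Callable, Dict, Hashable

_inflight: Dict[Hashable, "asyncio.Future[Any]"] = {}

async def wrap(key: Hashable, callback: Callable[..., Awaitable[Any]], *args: Any) -> Any:
    # concurrent callers with the same key share one in-flight computation
    fut = _inflight.get(key)
    if fut is None:
        fut = asyncio.ensure_future(callback(*args))
        _inflight[key] = fut
        fut.add_done_callback(lambda _f: _inflight.pop(key, None))
    return await fut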

synapse/config/_base.py

@@ -212,8 +212,9 @@ class Config:
@classmethod
def read_file(cls, file_path, config_name):
"""Deprecated: call read_file directly"""
return read_file(file_path, (config_name,))
cls.check_file(file_path, config_name)
with open(file_path) as file_stream:
return file_stream.read()
def read_template(self, filename: str) -> jinja2.Template:
"""Load a template file from disk.
@@ -893,35 +894,4 @@ class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
return self._get_instance(key)
def read_file(file_path: Any, config_path: Iterable[str]) -> str:
"""Check the given file exists, and read it into a string
If it does not, emit an error indicating the problem
Args:
file_path: the file to be read
config_path: where in the configuration file_path came from, so that a useful
error can be emitted if it does not exist.
Returns:
content of the file.
Raises:
ConfigError if there is a problem reading the file.
"""
if not isinstance(file_path, str):
raise ConfigError("%r is not a string", config_path)
try:
os.stat(file_path)
with open(file_path) as file_stream:
return file_stream.read()
except OSError as e:
raise ConfigError("Error accessing file %r" % (file_path,), config_path) from e
__all__ = [
"Config",
"RootConfig",
"ShardedWorkerHandlingConfig",
"RoutableShardedWorkerHandlingConfig",
"read_file",
]
__all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]

synapse/config/_base.pyi

@@ -152,5 +152,3 @@ class ShardedWorkerHandlingConfig:
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
def get_instance(self, key: str) -> str: ...
def read_file(file_path: Any, config_path: Iterable[str]) -> str: ...

synapse/config/logger.py

@@ -21,10 +21,8 @@ import threading
from string import Template
import yaml
from zope.interface import implementer
from twisted.logger import (
ILogObserver,
LogBeginner,
STDLibLogObserver,
eventAsText,
@@ -229,8 +227,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
threadlocal = threading.local()
@implementer(ILogObserver)
def _log(event: dict) -> None:
def _log(event):
if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
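
A minimal standalone observer in the same shape as _log above: implement ILogObserver, drop a known-noisy event, and hand everything else on (here simply to print; the real code forwards to the stdlib logging bridge):

from typing import Any, Dict

from twisted.logger import ILogObserver, eventAsText
from zope.interface import implementer

@implementer(ILogObserver)
def observer(event: Dict[str, Any]) -> None:
    text = eventAsText(event)
    if "DNSDatagramProtocol starting on " in text:
        return  # suppress the noise filtered out above
    print(text)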

synapse/config/oidc_config.py

@@ -15,7 +15,7 @@
# limitations under the License.
from collections import Counter
from typing import Iterable, Mapping, Optional, Tuple, Type
from typing import Iterable, Optional, Tuple, Type
import attr
@@ -25,7 +25,7 @@ from synapse.types import Collection, JsonDict
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_mxc_uri
from ._base import Config, ConfigError, read_file
from ._base import Config, ConfigError
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider"
@@ -97,26 +97,7 @@ class OIDCConfig(Config):
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
# client_secret: Required. oauth2 client secret to use.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
@@ -237,7 +218,7 @@ class OIDCConfig(Config):
#
#- idp_id: github
# idp_name: Github
# idp_brand: github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
@@ -259,7 +240,7 @@ class OIDCConfig(Config):
# jsonschema definition of the configuration settings for an oidc identity provider
OIDC_PROVIDER_CONFIG_SCHEMA = {
"type": "object",
"required": ["issuer", "client_id"],
"required": ["issuer", "client_id", "client_secret"],
"properties": {
"idp_id": {
"type": "string",
@@ -272,12 +253,7 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"idp_icon": {"type": "string"},
"idp_brand": {
"type": "string",
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
},
"idp_unstable_brand": {
"type": "string",
# MSC2758-style namespaced identifier
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
@@ -286,30 +262,6 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"issuer": {"type": "string"},
"client_id": {"type": "string"},
"client_secret": {"type": "string"},
"client_secret_jwt_key": {
"type": "object",
"required": ["jwt_header"],
"oneOf": [
{"required": ["key"]},
{"required": ["key_file"]},
],
"properties": {
"key": {"type": "string"},
"key_file": {"type": "string"},
"jwt_header": {
"type": "object",
"required": ["alg"],
"properties": {
"alg": {"type": "string"},
},
"additionalProperties": {"type": "string"},
},
"jwt_payload": {
"type": "object",
"additionalProperties": {"type": "string"},
},
},
},
"client_auth_method": {
"type": "string",
# the following list is the same as the keys of
@@ -452,31 +404,15 @@ def _parse_oidc_config_dict(
"idp_icon must be a valid MXC URI", config_path + ("idp_icon",)
) from e
client_secret_jwt_key_config = oidc_config.get("client_secret_jwt_key")
client_secret_jwt_key = None # type: Optional[OidcProviderClientSecretJwtKey]
if client_secret_jwt_key_config is not None:
keyfile = client_secret_jwt_key_config.get("key_file")
if keyfile:
key = read_file(keyfile, config_path + ("client_secret_jwt_key",))
else:
key = client_secret_jwt_key_config["key"]
client_secret_jwt_key = OidcProviderClientSecretJwtKey(
key=key,
jwt_header=client_secret_jwt_key_config["jwt_header"],
jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}),
)
return OidcProviderConfig(
idp_id=idp_id,
idp_name=oidc_config.get("idp_name", "OIDC"),
idp_icon=idp_icon,
idp_brand=oidc_config.get("idp_brand"),
unstable_idp_brand=oidc_config.get("unstable_idp_brand"),
discover=oidc_config.get("discover", True),
issuer=oidc_config["issuer"],
client_id=oidc_config["client_id"],
client_secret=oidc_config.get("client_secret"),
client_secret_jwt_key=client_secret_jwt_key,
client_secret=oidc_config["client_secret"],
client_auth_method=oidc_config.get("client_auth_method", "client_secret_basic"),
scopes=oidc_config.get("scopes", ["openid"]),
authorization_endpoint=oidc_config.get("authorization_endpoint"),
@@ -491,18 +427,6 @@ def _parse_oidc_config_dict(
)
@attr.s(slots=True, frozen=True)
class OidcProviderClientSecretJwtKey:
# a pem-encoded signing key
key = attr.ib(type=str)
# properties to include in the JWT header
jwt_header = attr.ib(type=Mapping[str, str])
# properties to include in the JWT payload.
jwt_payload = attr.ib(type=Mapping[str, str])
@attr.s(slots=True, frozen=True)
class OidcProviderConfig:
# a unique identifier for this identity provider. Used in the 'user_external_ids'
@@ -518,9 +442,6 @@ class OidcProviderConfig:
# Optional brand identifier for this IdP.
idp_brand = attr.ib(type=Optional[str])
# Optional brand identifier for the unstable API (see MSC2858).
unstable_idp_brand = attr.ib(type=Optional[str])
# whether the OIDC discovery mechanism is used to discover endpoints
discover = attr.ib(type=bool)
@@ -531,13 +452,8 @@ class OidcProviderConfig:
# oauth2 client id to use
client_id = attr.ib(type=str)
# oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate
# a secret.
client_secret = attr.ib(type=Optional[str])
# key to use to construct a JWT to use as a client secret. May be `None` if
# `client_secret` is set.
client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey])
# oauth2 client secret to use
client_secret = attr.ib(type=str)
# auth method to use when exchanging the token.
# Valid values are 'client_secret_basic', 'client_secret_post' and
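
The client_secret_jwt_key option above turns the OAuth2 client secret into a signed JWT, the flow that providers such as Sign in with Apple require. A rough sketch of the mechanism with authlib (already an OIDC dependency of Synapse); the payload claims below other than iss are assumptions for illustration, since the real ones come from jwt_payload in the config:

import time

from authlib.jose import jwt

def make_jwt_client_secret(key_pem: str) -> bytes:
    header = {"alg": "ES256"}  # jwt_header must at least carry 'alg'
    now = int(time.time())
    payload = {
        "iss": "your-team-id",  # jwt_payload normally includes 'iss'
        "iat": now,             # assumed claim, for illustration
        "exp": now + 3600,      # assumed one-hour lifetime
    }
    # sign with the pem-encoded key supplied via 'key' or 'key_file'
    return jwt.encode(header, payload, key_pem)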

synapse/config/server.py

@@ -841,7 +841,8 @@ class ServerConfig(Config):
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, unless allow_profile_lookup_over_federation is set to false.
# API, so this setting is of limited value if federation is enabled on
# the server.
#
#require_auth_for_profile_requests: true

synapse/config/stats.py

@@ -13,22 +13,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from ._base import Config
ROOM_STATS_DISABLED_WARN = """\
WARNING: room/user statistics have been disabled via the stats.enabled
configuration setting. This means that certain features (such as the room
directory) will not operate correctly. Future versions of Synapse may ignore
this setting.
To fix this warning, remove the stats.enabled setting from your configuration
file.
--------------------------------------------------------------------------------"""
logger = logging.getLogger(__name__)
class StatsConfig(Config):
"""Stats Configuration
@@ -40,29 +28,30 @@ class StatsConfig(Config):
def read_config(self, config, **kwargs):
self.stats_enabled = True
self.stats_bucket_size = 86400 * 1000
self.stats_retention = sys.maxsize
stats_config = config.get("stats", None)
if stats_config:
self.stats_enabled = stats_config.get("enabled", self.stats_enabled)
self.stats_bucket_size = self.parse_duration(
stats_config.get("bucket_size", "1d")
)
if not self.stats_enabled:
logger.warning(ROOM_STATS_DISABLED_WARN)
self.stats_retention = self.parse_duration(
stats_config.get("retention", "%ds" % (sys.maxsize,))
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
# Local statistics collection. Used in populating the room directory.
#
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
"""

synapse/events/spamcheck.py

@@ -15,7 +15,6 @@
# limitations under the License.
import inspect
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from synapse.rest.media.v1._base import FileInfo
@@ -28,8 +27,6 @@ if TYPE_CHECKING:
import synapse.events
import synapse.server
logger = logging.getLogger(__name__)
class SpamChecker:
def __init__(self, hs: "synapse.server.HomeServer"):
@@ -193,7 +190,6 @@ class SpamChecker:
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
auth_provider_id: Optional[str] = None,
) -> RegistrationBehaviour:
"""Checks if we should allow the given registration request.
@@ -202,9 +198,6 @@ class SpamChecker:
username: The request user name, if any
request_info: List of tuples of user agent and IP that
were used during the registration process.
auth_provider_id: The SSO IdP the user used, e.g "oidc", "saml",
"cas". If any. Note this does not include users registered
via a password provider.
Returns:
Enum for how the request should be handled
@@ -215,25 +208,9 @@ class SpamChecker:
# spam checker
checker = getattr(spam_checker, "check_registration_for_spam", None)
if checker:
# Provide auth_provider_id if the function supports it
checker_args = inspect.signature(checker)
if len(checker_args.parameters) == 4:
d = checker(
email_threepid,
username,
request_info,
auth_provider_id,
)
elif len(checker_args.parameters) == 3:
d = checker(email_threepid, username, request_info)
else:
logger.error(
"Invalid signature for %s.check_registration_for_spam. Denying registration",
spam_checker.__module__,
)
return RegistrationBehaviour.DENY
behaviour = await maybe_awaitable(d)
behaviour = await maybe_awaitable(
checker(email_threepid, username, request_info)
)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
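
The inspect.signature branch above keeps older spam-checker modules (three-argument callbacks) working while newer ones also receive auth_provider_id. The dispatch pattern in isolation (this sketch raises where the code above logs and denies registration):

import inspect

def call_checker(checker, email_threepid, username, request_info, auth_provider_id=None):
    n_params = len(inspect.signature(checker).parameters)
    if n_params == 4:
        return checker(email_threepid, username, request_info, auth_provider_id)
    elif n_params == 3:
        return checker(email_threepid, username, request_info)
    raise TypeError("unexpected signature for %r" % (checker,))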

synapse/federation/federation_server.py

@@ -22,7 +22,6 @@ from typing import (
Awaitable,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
@@ -91,15 +90,16 @@ pdu_process_time = Histogram(
"Time taken to process an event",
)
last_pdu_ts_metric = Gauge(
"synapse_federation_last_received_pdu_time",
"The timestamp of the last PDU which was successfully received from the given domain",
last_pdu_age_metric = Gauge(
"synapse_federation_last_received_pdu_age",
"The age (in seconds) of the last PDU successfully received from the given domain",
labelnames=("server_name",),
)
class FederationServer(FederationBase):
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
super().__init__(hs)
self.auth = hs.get_auth()
@@ -112,15 +112,14 @@ class FederationServer(FederationBase):
# with FederationHandlerRegistry.
hs.get_directory_handler()
self._server_linearizer = Linearizer("fed_server")
self._federation_ratelimiter = hs.get_federation_ratelimiter()
# origins that we are currently processing a transaction from.
# a dict from origin to txn id.
self._active_transactions = {} # type: Dict[str, str]
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
# We cache results for transaction with the same ID
self._transaction_resp_cache = ResponseCache(
hs.get_clock(), "fed_txn_handler", timeout_ms=30000
hs, "fed_txn_handler", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self.transaction_actions = TransactionActions(self.store)
@@ -130,10 +129,10 @@ class FederationServer(FederationBase):
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(
hs.get_clock(), "state_resp", timeout_ms=30000
hs, "state_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self._state_ids_resp_cache = ResponseCache(
hs.get_clock(), "state_ids_resp", timeout_ms=30000
hs, "state_ids_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self._federation_metrics_domains = (
@@ -170,33 +169,6 @@ class FederationServer(FederationBase):
logger.debug("[%s] Got transaction", transaction_id)
# Reject malformed transactions early: reject if too many PDUs/EDUs
if len(transaction.pdus) > 50 or ( # type: ignore
hasattr(transaction, "edus") and len(transaction.edus) > 100 # type: ignore
):
logger.info("Transaction PDU or EDU count too large. Returning 400")
return 400, {}
# we only process one transaction from each origin at a time. We need to do
# this check here, rather than in _on_incoming_transaction_inner so that we
# don't cache the rejection in _transaction_resp_cache (so that if the txn
# arrives again later, we can process it).
current_transaction = self._active_transactions.get(origin)
if current_transaction and current_transaction != transaction_id:
logger.warning(
"Received another txn %s from %s while still processing %s",
transaction_id,
origin,
current_transaction,
)
return 429, {
"errcode": Codes.UNKNOWN,
"error": "Too many concurrent transactions",
}
# CRITICAL SECTION: we must now not await until we populate _active_transactions
# in _on_incoming_transaction_inner.
# We wrap in a ResponseCache so that we de-duplicate retried
# transactions.
return await self._transaction_resp_cache.wrap(
@@ -210,18 +182,26 @@ class FederationServer(FederationBase):
async def _on_incoming_transaction_inner(
self, origin: str, transaction: Transaction, request_time: int
) -> Tuple[int, Dict[str, Any]]:
# CRITICAL SECTION: the first thing we must do (before awaiting) is
# add an entry to _active_transactions.
assert origin not in self._active_transactions
self._active_transactions[origin] = transaction.transaction_id # type: ignore
# Use a linearizer to ensure that transactions from a remote are
# processed in order.
with await self._transaction_linearizer.queue(origin):
# We rate limit here *after* we've queued up the incoming requests,
# so that we don't fill up the ratelimiter with blocked requests.
#
# This is important as the ratelimiter allows N concurrent requests
# at a time, and only starts ratelimiting if there are more requests
# than that being processed at a time. If we queued up requests in
# the linearizer/response cache *after* the ratelimiting then those
# queued up requests would count as part of the allowed limit of N
# concurrent requests.
with self._federation_ratelimiter.ratelimit(origin) as d:
await d
try:
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
finally:
del self._active_transactions[origin]
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
async def _handle_incoming_transaction(
self, origin: str, transaction: Transaction, request_time: int
@@ -247,6 +227,19 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id) # type: ignore
# Reject if PDU count > 50 or EDU count > 100
if len(transaction.pdus) > 50 or ( # type: ignore
hasattr(transaction, "edus") and len(transaction.edus) > 100 # type: ignore
):
logger.info("Transaction PDU or EDU count too large. Returning 400")
response = {}
await self.transaction_actions.set_response(
origin, transaction, 400, response
)
return 400, response
# We process PDUs and EDUs in parallel. This is important as we don't
# want to block things like to device messages from reaching clients
# behind the potentially expensive handling of PDUs.
@@ -342,48 +335,42 @@ class FederationServer(FederationBase):
# impose a limit to avoid going too crazy with ram/cpu.
async def process_pdus_for_room(room_id: str):
with nested_logging_context(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warning(
"Ignoring PDUs for room %s from banned server", room_id
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
logger.debug("Processing PDUs for %s", room_id)
try:
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warning("Ignoring PDUs for room %s from banned server", room_id)
for pdu in pdus_by_room[room_id]:
pdu_results[pdu.event_id] = await process_pdu(pdu)
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
async def process_pdu(pdu: EventBase) -> JsonDict:
event_id = pdu.event_id
with pdu_process_time.time():
with nested_logging_context(event_id):
try:
await self._handle_received_pdu(origin, pdu)
return {}
except FederationError as e:
logger.warning("Error handling PDU %s: %s", event_id, e)
return {"error": str(e)}
except Exception as e:
f = failure.Failure()
logger.error(
"Failed to handle PDU %s",
event_id,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
)
return {"error": str(e)}
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
with pdu_process_time.time():
with nested_logging_context(event_id):
try:
await self._handle_received_pdu(origin, pdu)
pdu_results[event_id] = {}
except FederationError as e:
logger.warning("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.error(
"Failed to handle PDU %s",
event_id,
exc_info=(f.type, f.value, f.getTracebackObject()),
)
await concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if newest_pdu_ts and origin in self._federation_metrics_domains:
last_pdu_ts_metric.labels(server_name=origin).set(newest_pdu_ts / 1000)
newest_pdu_age = self._clock.time_msec() - newest_pdu_ts
last_pdu_age_metric.labels(server_name=origin).set(newest_pdu_age / 1000)
return pdu_results
@@ -461,22 +448,18 @@ class FederationServer(FederationBase):
async def _on_state_ids_request_compute(self, room_id, event_id):
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
auth_chain_ids = await self.store.get_auth_chain_ids(state_ids)
return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(
self, room_id: str, event_id: str
) -> Dict[str, list]:
if event_id:
pdus = await self.handler.get_state_for_pdu(
room_id, event_id
) # type: Iterable[EventBase]
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
else:
pdus = (await self.state.get_current_state(room_id)).values()
auth_chain = await self.store.get_auth_chain(
room_id, [pdu.event_id for pdu in pdus]
)
auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
@@ -880,9 +863,7 @@ class FederationHandlerRegistry:
self.edu_handlers = (
{}
) # type: Dict[str, Callable[[str, dict], Awaitable[None]]]
self.query_handlers = (
{}
) # type: Dict[str, Callable[[dict], Awaitable[JsonDict]]]
self.query_handlers = {} # type: Dict[str, Callable[[dict], Awaitable[None]]]
# Map from type to instance names that we should route EDU handling to.
# We randomly choose one instance from the list to route to for each new
@@ -916,7 +897,7 @@ class FederationHandlerRegistry:
self.edu_handlers[edu_type] = handler
def register_query_handler(
self, query_type: str, handler: Callable[[dict], Awaitable[JsonDict]]
self, query_type: str, handler: Callable[[dict], defer.Deferred]
):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
@@ -955,6 +936,10 @@ class FederationHandlerRegistry:
):
return
# Temporary patch to drop cross-user key share requests
if edu_type == "m.room_key_request":
return
# Check if we have a handler on this instance
handler = self.edu_handlers.get(edu_type)
if handler:
@@ -989,7 +974,7 @@ class FederationHandlerRegistry:
# Oh well, let's just log and move on.
logger.warning("No handler registered for EDU type %s", edu_type)
async def on_query(self, query_type: str, args: dict) -> JsonDict:
async def on_query(self, query_type: str, args: dict):
handler = self.query_handlers.get(query_type)
if handler:
return await handler(args)
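
The _active_transactions bookkeeping above is a delicate pattern: claim the per-origin slot before the first await, reject other concurrent transactions with a 429, and always release the slot. A toy asyncio version (simplified: the real code additionally wraps the handler in a ResponseCache, so a retry of the same transaction id joins the in-flight result rather than being rejected):

import asyncio
from typing import Dict, Tuple

_active_transactions: Dict[str, str] = {}  # origin -> txn id being processed

async def on_incoming_transaction(origin: str, txn_id: str) -> Tuple[int, dict]:
    if origin in _active_transactions:
        return 429, {"errcode": "M_UNKNOWN", "error": "Too many concurrent transactions"}
    # CRITICAL SECTION: claim the slot before the first await
    _active_transactions[origin] = txn_id
    try:
        await asyncio.sleep(0)  # stand-in for the real PDU/EDU processing
        return 200, {}
    finally:
        del _active_transactions[origin]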

synapse/federation/sender/per_destination_queue.py

@@ -17,7 +17,6 @@ import datetime
import logging
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, cast
import attr
from prometheus_client import Counter
from synapse.api.errors import (
@@ -94,10 +93,6 @@ class PerDestinationQueue:
self._destination = destination
self.transmission_loop_running = False
# Flag to signal to any running transmission loop that there is new data
# queued up to be sent.
self._new_data_to_send = False
# True whilst we are sending events that the remote homeserver missed
# because it was unreachable. We start in this state so we can perform
# catch-up at startup.
@@ -113,7 +108,7 @@ class PerDestinationQueue:
# destination (we are the only updater so this is safe)
self._last_successful_stream_ordering = None # type: Optional[int]
# a queue of pending PDUs
# a list of pending PDUs
self._pending_pdus = [] # type: List[EventBase]
# XXX this is never actually used: see
@@ -213,10 +208,6 @@ class PerDestinationQueue:
transaction in the background.
"""
# Mark that we (may) have new things to send, so that any running
# transmission loop will recheck whether there is stuff to send.
self._new_data_to_send = True
if self.transmission_loop_running:
# XXX: this can get stuck on by a never-ending
# request at which point pending_pdus just keeps growing.
@@ -259,41 +250,125 @@ class PerDestinationQueue:
pending_pdus = []
while True:
self._new_data_to_send = False
# We have to keep 2 free slots for presence and rr_edus
limit = MAX_EDUS_PER_TRANSACTION - 2
async with _TransactionQueueManager(self) as (
pending_pdus,
pending_edus,
):
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
device_update_edus, dev_list_id = await self._get_device_update_edus(
limit
)
# If we've gotten told about new things to send during
# checking for things to send, we try looking again.
# Otherwise new PDUs or EDUs might arrive in the meantime,
# but not get sent because we hold the
# `transmission_loop_running` flag.
if self._new_data_to_send:
continue
else:
return
limit -= len(device_update_edus)
if pending_pdus:
logger.debug(
"TX [%s] len(pending_pdus_by_dest[dest]) = %d",
self._destination,
len(pending_pdus),
(
to_device_edus,
device_stream_id,
) = await self._get_to_device_message_edus(limit)
pending_edus = device_update_edus + to_device_edus
# BEGIN CRITICAL SECTION
#
# In order to avoid a race condition, we need to make sure that
# the following code (from popping the queues up to the point
# where we decide if we actually have any pending messages) is
# atomic - otherwise new PDUs or EDUs might arrive in the
# meantime, but not get sent because we hold the
# transmission_loop_running flag.
pending_pdus = self._pending_pdus
# We can only include at most 50 PDUs per transactions
pending_pdus, self._pending_pdus = pending_pdus[:50], pending_pdus[50:]
pending_edus.extend(self._get_rr_edus(force_flush=False))
pending_presence = self._pending_presence
self._pending_presence = {}
if pending_presence:
pending_edus.append(
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self._clock.time_msec()
)
for presence in pending_presence.values()
]
},
)
await self._transaction_manager.send_new_transaction(
self._destination, pending_pdus, pending_edus
)
pending_edus.extend(
self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self._pending_edus_keyed
):
_, val = self._pending_edus_keyed.popitem()
pending_edus.append(val)
if pending_pdus:
logger.debug(
"TX [%s] len(pending_pdus_by_dest[dest]) = %d",
self._destination,
len(pending_pdus),
)
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
self._last_device_stream_id = device_stream_id
return
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self._get_rr_edus(force_flush=True))
# END CRITICAL SECTION
success = await self._transaction_manager.send_new_transaction(
self._destination, pending_pdus, pending_edus
)
if success:
sent_transactions_counter.inc()
sent_edus_counter.inc(len(pending_edus))
for edu in pending_edus:
sent_edus_by_type.labels(edu.edu_type).inc()
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if to_device_edus:
await self._store.delete_device_msgs_for_remote(
self._destination, device_stream_id
)
# also mark the device updates as sent
if device_update_edus:
logger.info(
"Marking as sent %r %r", self._destination, dev_list_id
)
await self._store.mark_as_sent_devices_by_remote(
self._destination, dev_list_id
)
self._last_device_stream_id = device_stream_id
self._last_device_list_stream_id = dev_list_id
if pending_pdus:
# we sent some PDUs and it was successful, so update our
# last_successful_stream_ordering in the destinations table.
final_pdu = pending_pdus[-1]
last_successful_stream_ordering = (
final_pdu.internal_metadata.stream_ordering
)
assert last_successful_stream_ordering
await self._store.set_destination_last_successful_stream_ordering(
self._destination, last_successful_stream_ordering
)
else:
break
except NotRetryingDestination as e:
logger.debug(
"TX [%s] not ready for retry yet (next retry at %s) - "
@@ -326,7 +401,7 @@ class PerDestinationQueue:
self._pending_presence = {}
self._pending_rrs = {}
self._start_catching_up()
self._start_catching_up()
except FederationDeniedError as e:
logger.info(e)
except HttpResponseException as e:
@@ -337,6 +412,7 @@ class PerDestinationQueue:
e,
)
self._start_catching_up()
except RequestSendFailed as e:
logger.warning(
"TX [%s] Failed to send transaction: %s", self._destination, e
@@ -346,12 +422,16 @@ class PerDestinationQueue:
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
self._start_catching_up()
except Exception:
logger.exception("TX [%s] Failed to send transaction", self._destination)
for p in pending_pdus:
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
self._start_catching_up()
finally:
# We want to be *very* sure we clear this after we stop processing
self.transmission_loop_running = False
@@ -419,10 +499,13 @@ class PerDestinationQueue:
rooms = [p.room_id for p in catchup_pdus]
logger.info("Catching up rooms to %s: %r", self._destination, rooms)
await self._transaction_manager.send_new_transaction(
success = await self._transaction_manager.send_new_transaction(
self._destination, catchup_pdus, []
)
if not success:
return
sent_transactions_counter.inc()
final_pdu = catchup_pdus[-1]
self._last_successful_stream_ordering = cast(
@@ -501,135 +584,3 @@ class PerDestinationQueue:
"""
self._catching_up = True
self._pending_pdus = []
@attr.s(slots=True)
class _TransactionQueueManager:
"""A helper async context manager for pulling stuff off the queues and
tracking what was last successfully sent, etc.
"""
queue = attr.ib(type=PerDestinationQueue)
_device_stream_id = attr.ib(type=Optional[int], default=None)
_device_list_id = attr.ib(type=Optional[int], default=None)
_last_stream_ordering = attr.ib(type=Optional[int], default=None)
_pdus = attr.ib(type=List[EventBase], factory=list)
async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
# First we calculate the EDUs we want to send, if any.
# We start by fetching device related EDUs, i.e device updates and to
# device messages. We have to keep 2 free slots for presence and rr_edus.
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
limit
)
if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id
limit -= len(device_update_edus)
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(limit)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
pending_edus = device_update_edus + to_device_edus
# Now add the read receipt EDU.
pending_edus.extend(self.queue._get_rr_edus(force_flush=False))
# And presence EDU.
if self.queue._pending_presence:
pending_edus.append(
Edu(
origin=self.queue._server_name,
destination=self.queue._destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self.queue._clock.time_msec()
)
for presence in self.queue._pending_presence.values()
]
},
)
)
self.queue._pending_presence = {}
# Finally add any other types of EDUs if there is room.
pending_edus.extend(
self.queue._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self.queue._pending_edus_keyed
):
_, val = self.queue._pending_edus_keyed.popitem()
pending_edus.append(val)
# Now we look for any PDUs to send, by getting up to 50 PDUs from the
# queue
self._pdus = self.queue._pending_pdus[:50]
if not self._pdus and not pending_edus:
return [], []
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self.queue._get_rr_edus(force_flush=True))
if self._pdus:
self._last_stream_ordering = self._pdus[
-1
].internal_metadata.stream_ordering
assert self._last_stream_ordering
return self._pdus, pending_edus
async def __aexit__(self, exc_type, exc, tb):
if exc_type is not None:
# Failed to send transaction, so we bail out.
return
# Successfully sent transactions, so we remove pending PDUs from the queue
if self._pdus:
self.queue._pending_pdus = self.queue._pending_pdus[len(self._pdus) :]
# Succeeded to send the transaction so we record where we have sent up
# to in the various streams
if self._device_stream_id:
await self.queue._store.delete_device_msgs_for_remote(
self.queue._destination, self._device_stream_id
)
self.queue._last_device_stream_id = self._device_stream_id
# also mark the device updates as sent
if self._device_list_id:
logger.info(
"Marking as sent %r %r", self.queue._destination, self._device_list_id
)
await self.queue._store.mark_as_sent_devices_by_remote(
self.queue._destination, self._device_list_id
)
self.queue._last_device_list_stream_id = self._device_list_id
if self._last_stream_ordering:
# we sent some PDUs and it was successful, so update our
# last_successful_stream_ordering in the destinations table.
await self.queue._store.set_destination_last_successful_stream_ordering(
self.queue._destination, self._last_stream_ordering
)
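
_TransactionQueueManager above centralises a "pop work on entry, commit only on clean exit" discipline. The same shape as a toy async context manager, so a failed send leaves the batch queued for the next attempt:

from typing import List

class BatchManager:
    def __init__(self, queue: List[str], batch_size: int = 50):
        self.queue = queue
        self.batch_size = batch_size
        self.batch: List[str] = []

    async def __aenter__(self) -> List[str]:
        self.batch = self.queue[: self.batch_size]
        return self.batch

    async def __aexit__(self, exc_type, exc, tb) -> None:
        if exc_type is not None:
            return  # send failed: keep the items queued for retry
        del self.queue[: len(self.batch)]  # success: drop what we sent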

synapse/federation/sender/transaction_manager.py

@@ -36,9 +36,9 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
last_pdu_ts_metric = Gauge(
"synapse_federation_last_sent_pdu_time",
"The timestamp of the last PDU which was successfully sent to the given domain",
last_pdu_age_metric = Gauge(
"synapse_federation_last_sent_pdu_age",
"The age (in seconds) of the last PDU successfully sent to the given domain",
labelnames=("server_name",),
)
@@ -69,12 +69,15 @@ class TransactionManager:
destination: str,
pdus: List[EventBase],
edus: List[Edu],
) -> None:
) -> bool:
"""
Args:
destination: The destination to send to (e.g. 'example.org')
pdus: In-order list of PDUs to send
edus: List of EDUs to send
Returns:
True iff the transaction was successful
"""
# Make a transaction-sending opentracing span. This span follows on from
@@ -93,6 +96,8 @@ class TransactionManager:
edu.strip_context()
with start_active_span_follows_from("send_transaction", span_contexts):
success = True
logger.debug("TX [%s] _attempt_new_transaction", destination)
txn_id = str(self._next_txn_id)
@@ -147,29 +152,45 @@ class TransactionManager:
response = await self._transport_layer.send_transaction(
transaction, json_data_cb
)
code = 200
except HttpResponseException as e:
code = e.code
response = e.response
set_tag(tags.ERROR, True)
if e.code in (401, 404, 429) or 500 <= e.code:
logger.info(
"TX [%s] {%s} got %d response", destination, txn_id, code
)
raise e
logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)
raise
logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)
logger.info("TX [%s] {%s} got 200 response", destination, txn_id)
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
if code == 200:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
destination,
txn_id,
e_id,
r,
)
else:
for p in pdus:
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
"TX [%s] {%s} Failed to send event %s",
destination,
txn_id,
e_id,
r,
p.event_id,
)
success = False
if pdus and destination in self._federation_metrics_domains:
if success and pdus and destination in self._federation_metrics_domains:
last_pdu = pdus[-1]
last_pdu_ts_metric.labels(server_name=destination).set(
last_pdu.origin_server_ts / 1000
last_pdu_age = self.clock.time_msec() - last_pdu.origin_server_ts
last_pdu_age_metric.labels(server_name=destination).set(
last_pdu_age / 1000
)
set_tag(tags.ERROR, not success)
return success
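
Both metric shapes above are plain prometheus_client Gauges; the age flavour just converts a timestamp into seconds-ago at the moment it is set. A minimal standalone version with illustrative names:

import time

from prometheus_client import Gauge

last_pdu_age = Gauge(
    "example_last_sent_pdu_age",
    "The age (in seconds) of the last PDU successfully sent to the given domain",
    labelnames=("server_name",),
)

def record_sent_pdu(origin_server_ts_ms: int, destination: str) -> None:
    age_ms = time.time() * 1000 - origin_server_ts_ms
    last_pdu_age.labels(server_name=destination).set(age_ms / 1000)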

synapse/handlers/acme.py

@@ -73,9 +73,7 @@ class AcmeHandler:
"Listening for ACME requests on %s:%i", host, self.hs.config.acme_port
)
try:
self.reactor.listenTCP(
self.hs.config.acme_port, srv, backlog=50, interface=host
)
self.reactor.listenTCP(self.hs.config.acme_port, srv, interface=host)
except twisted.internet.error.CannotListenError as e:
check_bind_error(e, host, bind_addresses)

synapse/handlers/auth.py

@@ -36,7 +36,7 @@ import attr
import bcrypt
import pymacaroons
from twisted.web.server import Request
from twisted.web.http import Request
from synapse.api.constants import LoginType
from synapse.api.errors import (
@@ -65,7 +65,6 @@ from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict, Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.threepids import canonicalise_email
@@ -171,16 +170,6 @@ class SsoLoginExtraAttributes:
extra_attributes = attr.ib(type=JsonDict)
@attr.s(slots=True, frozen=True)
class LoginTokenAttributes:
"""Data we store in a short-term login token"""
user_id = attr.ib(type=str)
# the SSO Identity Provider that the user authenticated with, to get this token
auth_provider_id = attr.ib(type=str)
class AuthHandler(BaseHandler):
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
@@ -337,8 +326,7 @@ class AuthHandler(BaseHandler):
user is too high to proceed
"""
if not requester.access_token_id:
raise ValueError("Cannot validate a user without an access token")
if self._ui_auth_session_timeout:
last_validated = await self.store.get_access_token_last_validated(
requester.access_token_id
@@ -493,7 +481,7 @@ class AuthHandler(BaseHandler):
sid = authdict["session"]
# Convert the URI and method to strings.
uri = request.uri.decode("utf-8") # type: ignore
uri = request.uri.decode("utf-8")
method = request.method.decode("utf-8")
# If there's no session ID, create a new session.
@@ -1176,16 +1164,18 @@ class AuthHandler(BaseHandler):
return None
return user_id
async def validate_short_term_login_token(
self, login_token: str
) -> LoginTokenAttributes:
async def validate_short_term_login_token_and_get_user_id(self, login_token: str):
auth_api = self.hs.get_auth()
user_id = None
try:
res = self.macaroon_gen.verify_short_term_login_token(login_token)
macaroon = pymacaroons.Macaroon.deserialize(login_token)
user_id = auth_api.get_user_id_from_macaroon(macaroon)
auth_api.validate_macaroon(macaroon, "login", user_id)
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
await self.auth.check_auth_blocking(res.user_id)
return res
await self.auth.check_auth_blocking(user_id)
return user_id
async def delete_access_token(self, access_token: str):
"""Invalidate a single access token
@@ -1214,7 +1204,7 @@ class AuthHandler(BaseHandler):
async def delete_access_tokens_for_user(
self,
user_id: str,
except_token_id: Optional[int] = None,
except_token_id: Optional[str] = None,
device_id: Optional[str] = None,
):
"""Invalidate access tokens belonging to a user
@@ -1407,7 +1397,6 @@ class AuthHandler(BaseHandler):
async def complete_sso_login(
self,
registered_user_id: str,
auth_provider_id: str,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
@@ -1417,9 +1406,6 @@ class AuthHandler(BaseHandler):
Args:
registered_user_id: The registered user ID to complete SSO login for.
auth_provider_id: The id of the SSO Identity provider that was used for
login. This will be stored in the login token for future tracking in
prometheus metrics.
request: The request to complete.
client_redirect_url: The URL to which to redirect the user at the end of the
process.
@@ -1441,7 +1427,6 @@ class AuthHandler(BaseHandler):
self._complete_sso_login(
registered_user_id,
auth_provider_id,
request,
client_redirect_url,
extra_attributes,
@@ -1452,7 +1437,6 @@ class AuthHandler(BaseHandler):
def _complete_sso_login(
self,
registered_user_id: str,
auth_provider_id: str,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
@@ -1479,7 +1463,7 @@ class AuthHandler(BaseHandler):
# Create a login token
login_token = self.macaroon_gen.generate_short_term_login_token(
registered_user_id, auth_provider_id=auth_provider_id
registered_user_id
)
# Append the login token to the original redirect URL (i.e. with its query
@@ -1585,48 +1569,15 @@ class MacaroonGenerator:
return macaroon.serialize()
def generate_short_term_login_token(
self,
user_id: str,
auth_provider_id: str,
duration_in_ms: int = (2 * 60 * 1000),
self, user_id: str, duration_in_ms: int = (2 * 60 * 1000)
) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.hs.get_clock().time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
macaroon.add_first_party_caveat("auth_provider_id = %s" % (auth_provider_id,))
return macaroon.serialize()
def verify_short_term_login_token(self, token: str) -> LoginTokenAttributes:
"""Verify a short-term-login macaroon
Checks that the given token is a valid, unexpired short-term-login token
minted by this server.
Args:
token: the login token to verify
Returns:
the user_id that this token is valid for
Raises:
MacaroonVerificationFailedException if the verification failed
"""
macaroon = pymacaroons.Macaroon.deserialize(token)
user_id = get_value_from_macaroon(macaroon, "user_id")
auth_provider_id = get_value_from_macaroon(macaroon, "auth_provider_id")
v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = login")
v.satisfy_general(lambda c: c.startswith("user_id = "))
v.satisfy_general(lambda c: c.startswith("auth_provider_id = "))
satisfy_expiry(v, self.hs.get_clock().time_msec)
v.verify(macaroon, self.hs.config.key.macaroon_secret_key)
return LoginTokenAttributes(user_id=user_id, auth_provider_id=auth_provider_id)
def generate_delete_pusher_token(self, user_id: str) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = delete_pusher")

synapse/handlers/cas_handler.py

@@ -83,7 +83,6 @@ class CasHandler:
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
self.unstable_idp_brand = None
self._sso_handler = hs.get_sso_handler()

synapse/handlers/federation.py

@@ -201,7 +201,7 @@ class FederationHandler(BaseHandler):
or pdu.internal_metadata.is_outlier()
)
if already_seen:
logger.debug("Already seen pdu")
logger.debug("[%s %s]: Already seen pdu", room_id, event_id)
return
# do some initial sanity-checking of the event. In particular, make
@@ -210,14 +210,18 @@ class FederationHandler(BaseHandler):
try:
self._sanity_check_event(pdu)
except SynapseError as err:
logger.warning("Received event failed sanity checks")
logger.warning(
"[%s %s] Received event failed sanity checks", room_id, event_id
)
raise FederationError("ERROR", err.code, err.msg, affected=pdu.event_id)
# If we are currently in the process of joining this room, then we
# queue up events for later processing.
if room_id in self.room_queues:
logger.info(
"Queuing PDU from %s for now: join in progress",
"[%s %s] Queuing PDU from %s for now: join in progress",
room_id,
event_id,
origin,
)
self.room_queues[room_id].append((pdu, origin))
@@ -232,7 +236,9 @@ class FederationHandler(BaseHandler):
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
if not is_in_room:
logger.info(
"Ignoring PDU from %s as we're not in the room",
"[%s %s] Ignoring PDU from %s as we're not in the room",
room_id,
event_id,
origin,
)
return None
@@ -244,7 +250,7 @@ class FederationHandler(BaseHandler):
# We only backfill backwards to the min depth.
min_depth = await self.get_min_depth_for_context(pdu.room_id)
logger.debug("min_depth: %d", min_depth)
logger.debug("[%s %s] min_depth: %d", room_id, event_id, min_depth)
prevs = set(pdu.prev_event_ids())
seen = await self.store.have_events_in_timeline(prevs)
@@ -261,13 +267,17 @@ class FederationHandler(BaseHandler):
# If we're missing stuff, ensure we only fetch stuff one
# at a time.
logger.info(
"Acquiring room lock to fetch %d missing prev_events: %s",
"[%s %s] Acquiring room lock to fetch %d missing prev_events: %s",
room_id,
event_id,
len(missing_prevs),
shortstr(missing_prevs),
)
with (await self._room_pdu_linearizer.queue(pdu.room_id)):
logger.info(
"Acquired room lock to fetch %d missing prev_events",
"[%s %s] Acquired room lock to fetch %d missing prev_events",
room_id,
event_id,
len(missing_prevs),
)
@@ -287,7 +297,9 @@ class FederationHandler(BaseHandler):
if not prevs - seen:
logger.info(
"Found all missing prev_events",
"[%s %s] Found all missing prev_events",
room_id,
event_id,
)
if prevs - seen:
@@ -317,7 +329,9 @@ class FederationHandler(BaseHandler):
if sent_to_us_directly:
logger.warning(
"Rejecting: failed to fetch %d prev events: %s",
"[%s %s] Rejecting: failed to fetch %d prev events: %s",
room_id,
event_id,
len(prevs - seen),
shortstr(prevs - seen),
)
@@ -353,16 +367,17 @@ class FederationHandler(BaseHandler):
# Ask the remote server for the states we don't
# know about
for p in prevs - seen:
logger.info("Requesting state after missing prev_event %s", p)
logger.info(
"Requesting state at missing prev_event %s",
event_id,
)
with nested_logging_context(p):
# note that if any of the missing prevs share missing state or
# auth events, the requests to fetch those events are deduped
# by the get_pdu_cache in federation_client.
remote_state = (
await self._get_state_after_missing_prev_event(
origin, room_id, p
)
(remote_state, _,) = await self._get_state_for_room(
origin, room_id, p, include_event_in_state=True
)
remote_state_map = {
@@ -399,7 +414,10 @@ class FederationHandler(BaseHandler):
state = [event_map[e] for e in state_map.values()]
except Exception:
logger.warning(
"Error attempting to resolve state at missing " "prev_events",
"[%s %s] Error attempting to resolve state at missing "
"prev_events",
room_id,
event_id,
exc_info=True,
)
raise FederationError(
@@ -436,7 +454,9 @@ class FederationHandler(BaseHandler):
latest |= seen
logger.info(
"Requesting missing events between %s and %s",
"[%s %s]: Requesting missing events between %s and %s",
room_id,
event_id,
shortstr(latest),
event_id,
)
@@ -503,11 +523,15 @@ class FederationHandler(BaseHandler):
# We failed to get the missing events, but since we need to handle
# the case of `get_missing_events` not returning the necessary
# events anyway, it is safe to simply log the error and continue.
logger.warning("Failed to get prev_events: %s", e)
logger.warning(
"[%s %s]: Failed to get prev_events: %s", room_id, event_id, e
)
return
logger.info(
"Got %d prev_events: %s",
"[%s %s]: Got %d prev_events: %s",
room_id,
event_id,
len(missing_events),
shortstr(missing_events),
)
@@ -518,7 +542,9 @@ class FederationHandler(BaseHandler):
for ev in missing_events:
logger.info(
"Handling received prev_event %s",
"[%s %s] Handling received prev_event %s",
room_id,
event_id,
ev.event_id,
)
with nested_logging_context(ev.event_id):
@@ -527,7 +553,9 @@ class FederationHandler(BaseHandler):
except FederationError as e:
if e.code == 403:
logger.warning(
"Received prev_event %s failed history check.",
"[%s %s] Received prev_event %s failed history check.",
room_id,
event_id,
ev.event_id,
)
else:
@@ -538,6 +566,7 @@ class FederationHandler(BaseHandler):
destination: str,
room_id: str,
event_id: str,
include_event_in_state: bool = False,
) -> Tuple[List[EventBase], List[EventBase]]:
"""Requests all of the room state at a given event from a remote homeserver.
@@ -545,9 +574,11 @@ class FederationHandler(BaseHandler):
destination: The remote homeserver to query for the state.
room_id: The id of the room we're interested in.
event_id: The id of the event we want the state at.
include_event_in_state: if true, the event itself will be included in the
returned state event list.
Returns:
A list of events in the state, not including the event itself, and
A list of events in the state, possibly including the event itself, and
a list of events in the auth chain for the given event.
"""
(
@@ -559,6 +590,9 @@ class FederationHandler(BaseHandler):
desired_events = set(state_event_ids + auth_event_ids)
if include_event_in_state:
desired_events.add(event_id)
event_map = await self._get_events_from_store_or_dest(
destination, room_id, desired_events
)
@@ -575,6 +609,13 @@ class FederationHandler(BaseHandler):
event_map[e_id] for e_id in state_event_ids if e_id in event_map
]
if include_event_in_state:
remote_event = event_map.get(event_id)
if not remote_event:
raise Exception("Unable to get missing prev_event %s" % (event_id,))
if remote_event.is_state() and remote_event.rejected_reason is None:
remote_state.append(remote_event)
auth_chain = [event_map[e_id] for e_id in auth_event_ids if e_id in event_map]
auth_chain.sort(key=lambda e: e.depth)
@@ -648,131 +689,6 @@ class FederationHandler(BaseHandler):
return fetched_events
async def _get_state_after_missing_prev_event(
self,
destination: str,
room_id: str,
event_id: str,
) -> List[EventBase]:
"""Requests all of the room state at a given event from a remote homeserver.
Args:
destination: The remote homeserver to query for the state.
room_id: The id of the room we're interested in.
event_id: The id of the event we want the state at.
Returns:
A list of events in the state, including the event itself
"""
# TODO: This function is basically the same as _get_state_for_room. Can
# we make backfill() use it, rather than having two code paths? I think the
# only difference is that backfill() persists the prev events separately.
(
state_event_ids,
auth_event_ids,
) = await self.federation_client.get_room_state_ids(
destination, room_id, event_id=event_id
)
logger.debug(
"state_ids returned %i state events, %i auth events",
len(state_event_ids),
len(auth_event_ids),
)
# start by just trying to fetch the events from the store
desired_events = set(state_event_ids)
desired_events.add(event_id)
logger.debug("Fetching %i events from cache/store", len(desired_events))
fetched_events = await self.store.get_events(
desired_events, allow_rejected=True
)
missing_desired_events = desired_events - fetched_events.keys()
logger.debug(
"We are missing %i events (got %i)",
len(missing_desired_events),
len(fetched_events),
)
# We probably won't need most of the auth events, so let's just check which
# we have for now, rather than thrashing the event cache with them all
# unnecessarily.
# TODO: we probably won't actually need all of the auth events, since we
# already have a bunch of the state events. It would be nice if the
# federation api gave us a way of finding out which we actually need.
missing_auth_events = set(auth_event_ids) - fetched_events.keys()
missing_auth_events.difference_update(
await self.store.have_seen_events(missing_auth_events)
)
logger.debug("We are also missing %i auth events", len(missing_auth_events))
missing_events = missing_desired_events | missing_auth_events
logger.debug("Fetching %i events from remote", len(missing_events))
await self._get_events_and_persist(
destination=destination, room_id=room_id, events=missing_events
)
# we need to make sure we re-load from the database to get the rejected
# state correct.
fetched_events.update(
(await self.store.get_events(missing_desired_events, allow_rejected=True))
)
# check for events which were in the wrong room.
#
# this can happen if a remote server claims that the state or
# auth_events at an event in room A are actually events in room B
bad_events = [
(event_id, event.room_id)
for event_id, event in fetched_events.items()
if event.room_id != room_id
]
for bad_event_id, bad_room_id in bad_events:
# This is a bogus situation, but since we may only discover it a long time
# after it happened, we try our best to carry on, by just omitting the
# bad events from the returned state set.
logger.warning(
"Remote server %s claims event %s in room %s is an auth/state "
"event in room %s",
destination,
bad_event_id,
bad_room_id,
room_id,
)
del fetched_events[bad_event_id]
# if we couldn't get the prev event in question, that's a problem.
remote_event = fetched_events.get(event_id)
if not remote_event:
raise Exception("Unable to get missing prev_event %s" % (event_id,))
# missing state at that event is a warning, not a blocker
# XXX: this doesn't sound right? it means that we'll end up with incomplete
# state.
failed_to_fetch = desired_events - fetched_events.keys()
if failed_to_fetch:
logger.warning(
"Failed to fetch missing state events for %s %s",
event_id,
failed_to_fetch,
)
remote_state = [
fetched_events[e_id] for e_id in state_event_ids if e_id in fetched_events
]
if remote_event.is_state() and remote_event.rejected_reason is None:
remote_state.append(remote_event)
return remote_state
async def _process_received_pdu(
self,
origin: str,
@@ -791,7 +707,10 @@ class FederationHandler(BaseHandler):
(ie, we are missing one or more prev_events), the resolved state at the
event
"""
logger.debug("Processing event: %s", event)
room_id = event.room_id
event_id = event.event_id
logger.debug("[%s %s] Processing event: %s", room_id, event_id, event)
try:
await self._handle_new_event(origin, event, state=state)
@@ -952,6 +871,7 @@ class FederationHandler(BaseHandler):
destination=dest,
room_id=room_id,
event_id=e_id,
include_event_in_state=False,
)
auth_events.update({a.event_id: a for a in auth})
auth_events.update({s.event_id: s for s in state})
@@ -1397,7 +1317,7 @@ class FederationHandler(BaseHandler):
async def on_event_auth(self, event_id: str) -> List[EventBase]:
event = await self.store.get_event(event_id)
auth = await self.store.get_auth_chain(
event.room_id, list(event.auth_event_ids()), include_given=True
list(event.auth_event_ids()), include_given=True
)
return list(auth)
@@ -1660,7 +1580,7 @@ class FederationHandler(BaseHandler):
prev_state_ids = await context.get_prev_state_ids()
state_ids = list(prev_state_ids.values())
auth_chain = await self.store.get_auth_chain(event.room_id, state_ids)
auth_chain = await self.store.get_auth_chain(state_ids)
state = await self.store.get_events(list(prev_state_ids.values()))
@@ -2299,7 +2219,7 @@ class FederationHandler(BaseHandler):
# Now get the current auth_chain for the event.
local_auth_chain = await self.store.get_auth_chain(
room_id, list(event.auth_event_ids()), include_given=True
list(event.auth_event_ids()), include_given=True
)
# TODO: Check if we would now reject event_id. If so we need to tell
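
One detail worth pulling out of _get_state_after_missing_prev_event above: a remote server can claim that arbitrary events are state or auth events of our room, so everything fetched is checked against the expected room_id. The check in isolation:

from typing import Dict

def drop_events_in_wrong_room(fetched: Dict[str, "EventBase"], room_id: str) -> None:
    bad = [eid for eid, ev in fetched.items() if ev.room_id != room_id]
    for bad_event_id in bad:
        # bogus claim by the remote: omit the event rather than fail outright
        del fetched[bad_event_id]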

synapse/handlers/initial_sync.py

@@ -48,7 +48,7 @@ class InitialSyncHandler(BaseHandler):
self.clock = hs.get_clock()
self.validator = EventValidator()
self.snapshot_cache = ResponseCache(
hs.get_clock(), "initial_sync_cache"
hs, "initial_sync_cache"
) # type: ResponseCache[Tuple[str, Optional[StreamToken], Optional[StreamToken], str, Optional[int], bool, bool]]
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()


@@ -252,7 +252,7 @@ class MessageHandler:
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if requester.app_service and user_id not in users_with_profile:
if False and requester.app_service and user_id not in users_with_profile:
for uid in users_with_profile:
if requester.app_service.is_interested_in_user(uid):
break


@@ -1,6 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Quentin Gliech
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,13 +14,13 @@
# limitations under the License.
import inspect
import logging
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, TypeVar, Union
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, TypeVar
from urllib.parse import urlencode
import attr
import pymacaroons
from authlib.common.security import generate_token
from authlib.jose import JsonWebToken, jwt
from authlib.jose import JsonWebToken
from authlib.oauth2.auth import ClientAuth
from authlib.oauth2.rfc6749.parameters import prepare_grant_uri
from authlib.oidc.core import CodeIDToken, ImplicitIDToken, UserInfo
@@ -29,26 +28,20 @@ from authlib.oidc.discovery import OpenIDProviderMetadata, get_well_known_url
from jinja2 import Environment, Template
from pymacaroons.exceptions import (
MacaroonDeserializationException,
MacaroonInitException,
MacaroonInvalidSignatureException,
)
from typing_extensions import TypedDict
from twisted.web.client import readBody
from twisted.web.http_headers import Headers
from synapse.config import ConfigError
from synapse.config.oidc_config import (
OidcProviderClientSecretJwtKey,
OidcProviderConfig,
)
from synapse.config.oidc_config import OidcProviderConfig
from synapse.handlers.sso import MappingException, UserAttributes
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import Clock, json_decoder
from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -218,7 +211,7 @@ class OidcHandler:
session_data = self._token_generator.verify_oidc_session_token(
session, state
)
except (MacaroonInitException, MacaroonDeserializationException, KeyError) as e:
except (MacaroonDeserializationException, ValueError) as e:
logger.exception("Invalid session for OIDC callback")
self._sso_handler.render_error(request, "invalid_session", str(e))
return
@@ -282,21 +275,9 @@ class OidcProvider:
self._scopes = provider.scopes
self._user_profile_method = provider.user_profile_method
client_secret = None # type: Union[None, str, JwtClientSecret]
if provider.client_secret:
client_secret = provider.client_secret
elif provider.client_secret_jwt_key:
client_secret = JwtClientSecret(
provider.client_secret_jwt_key,
provider.client_id,
provider.issuer,
hs.get_clock(),
)
self._client_auth = ClientAuth(
provider.client_id,
client_secret,
provider.client_secret,
provider.client_auth_method,
) # type: ClientAuth
self._client_auth_method = provider.client_auth_method
@@ -331,9 +312,6 @@ class OidcProvider:
# optional brand identifier for this auth provider
self.idp_brand = provider.idp_brand
# Optional brand identifier for the unstable API (see MSC2858).
self.unstable_idp_brand = provider.unstable_idp_brand
self._sso_handler = hs.get_sso_handler()
self._sso_handler.register_identity_provider(self)
@@ -543,7 +521,7 @@ class OidcProvider:
"""
metadata = await self.load_metadata()
token_endpoint = metadata.get("token_endpoint")
raw_headers = {
headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": self._http_client.user_agent,
"Accept": "application/json",
@@ -557,10 +535,10 @@ class OidcProvider:
body = urlencode(args, True)
# Fill the body/headers with credentials
uri, raw_headers, body = self._client_auth.prepare(
method="POST", uri=token_endpoint, headers=raw_headers, body=body
uri, headers, body = self._client_auth.prepare(
method="POST", uri=token_endpoint, headers=headers, body=body
)
headers = Headers({k: [v] for (k, v) in raw_headers.items()})
headers = {k: [v] for (k, v) in headers.items()}
# Do the actual request
# We're not using the SimpleHttpClient util methods as we don't want to
@@ -767,7 +745,7 @@ class OidcProvider:
idp_id=self.idp_id,
nonce=nonce,
client_redirect_url=client_redirect_url.decode(),
ui_auth_session_id=ui_auth_session_id or "",
ui_auth_session_id=ui_auth_session_id,
),
)
@@ -998,81 +976,6 @@ class OidcProvider:
return str(remote_user_id)
# number of seconds a newly-generated client secret should be valid for
CLIENT_SECRET_VALIDITY_SECONDS = 3600
# minimum remaining validity on a client secret before we should generate a new one
CLIENT_SECRET_MIN_VALIDITY_SECONDS = 600
class JwtClientSecret:
"""A class which generates a new client secret on demand, based on a JWK
This implementation is designed to comply with the requirements for Apple Sign in:
https://developer.apple.com/documentation/sign_in_with_apple/generate_and_validate_tokens#3262048
It looks like those requirements are based on https://tools.ietf.org/html/rfc7523,
but it's worth noting that we still put the generated secret in the "client_secret"
field (or rather, wherever client_auth_method puts it) rather than in a
client_assertion field in the body as that RFC seems to require.
"""
def __init__(
self,
key: OidcProviderClientSecretJwtKey,
oauth_client_id: str,
oauth_issuer: str,
clock: Clock,
):
self._key = key
self._oauth_client_id = oauth_client_id
self._oauth_issuer = oauth_issuer
self._clock = clock
self._cached_secret = b""
self._cached_secret_replacement_time = 0
def __str__(self):
# if client_auth_method is client_secret_basic, then ClientAuth.prepare calls
# encode_client_secret_basic, which calls "{}".format(secret), which ends up
# here.
return self._get_secret().decode("ascii")
def __bytes__(self):
# if client_auth_method is client_secret_post, then ClientAuth.prepare calls
# encode_client_secret_post, which ends up here.
return self._get_secret()
def _get_secret(self) -> bytes:
now = self._clock.time()
# if we have enough validity on our existing secret, use it
if now < self._cached_secret_replacement_time:
return self._cached_secret
issued_at = int(now)
expires_at = issued_at + CLIENT_SECRET_VALIDITY_SECONDS
# we copy the configured header because jwt.encode modifies it.
header = dict(self._key.jwt_header)
# see https://tools.ietf.org/html/rfc7523#section-3
payload = {
"sub": self._oauth_client_id,
"aud": self._oauth_issuer,
"iat": issued_at,
"exp": expires_at,
**self._key.jwt_payload,
}
logger.info(
"Generating new JWT for %s: %s %s", self._oauth_issuer, header, payload
)
self._cached_secret = jwt.encode(header, payload, self._key.key)
self._cached_secret_replacement_time = (
expires_at - CLIENT_SECRET_MIN_VALIDITY_SECONDS
)
return self._cached_secret
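The caching rule in _get_secret can be isolated: reuse the cached secret while enough validity remains, otherwise regenerate. A minimal sketch of just that decision, with an arbitrary callable standing in for the JWT signing and plain floats for the clock:

VALIDITY = 3600       # seconds a freshly generated secret is valid for
MIN_VALIDITY = 600    # regenerate once less than this much validity remains

class CachedSecret:
    """Regenerates a secret only when it is close to expiry."""

    def __init__(self, make_secret):
        self._make_secret = make_secret  # e.g. a JWT-signing callable
        self._cached = None
        self._replace_at = 0.0  # time after which we must regenerate

    def get(self, now: float) -> str:
        if now < self._replace_at:
            return self._cached  # still enough validity left: reuse
        self._cached = self._make_secret(int(now), int(now) + VALIDITY)
        self._replace_at = now + VALIDITY - MIN_VALIDITY
        return self._cached

cs = CachedSecret(lambda iat, exp: "token-%d-%d" % (iat, exp))
first = cs.get(1000.0)
assert cs.get(3999.0) is first      # more than 600s left: cached copy reused
assert cs.get(4001.0) is not first  # close to expiry: a new secret is minted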
class OidcSessionTokenGenerator:
"""Methods for generating and checking OIDC Session cookies."""
@@ -1117,9 +1020,10 @@ class OidcSessionTokenGenerator:
macaroon.add_first_party_caveat(
"client_redirect_url = %s" % (session_data.client_redirect_url,)
)
macaroon.add_first_party_caveat(
"ui_auth_session_id = %s" % (session_data.ui_auth_session_id,)
)
if session_data.ui_auth_session_id:
macaroon.add_first_party_caveat(
"ui_auth_session_id = %s" % (session_data.ui_auth_session_id,)
)
now = self._clock.time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
@@ -1142,7 +1046,7 @@ class OidcSessionTokenGenerator:
The data extracted from the session cookie
Raises:
KeyError if an expected caveat is missing from the macaroon.
ValueError if an expected caveat is missing from the macaroon.
"""
macaroon = pymacaroons.Macaroon.deserialize(session)
@@ -1153,16 +1057,26 @@ class OidcSessionTokenGenerator:
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.satisfy_general(lambda c: c.startswith("idp_id = "))
v.satisfy_general(lambda c: c.startswith("client_redirect_url = "))
# Sometimes there's a UI auth session ID; it seems to be OK to always
# attempt to satisfy this.
v.satisfy_general(lambda c: c.startswith("ui_auth_session_id = "))
satisfy_expiry(v, self._clock.time_msec)
v.satisfy_general(self._verify_expiry)
v.verify(macaroon, self._macaroon_secret_key)
# Extract the session data from the token.
nonce = get_value_from_macaroon(macaroon, "nonce")
idp_id = get_value_from_macaroon(macaroon, "idp_id")
client_redirect_url = get_value_from_macaroon(macaroon, "client_redirect_url")
ui_auth_session_id = get_value_from_macaroon(macaroon, "ui_auth_session_id")
nonce = self._get_value_from_macaroon(macaroon, "nonce")
idp_id = self._get_value_from_macaroon(macaroon, "idp_id")
client_redirect_url = self._get_value_from_macaroon(
macaroon, "client_redirect_url"
)
try:
ui_auth_session_id = self._get_value_from_macaroon(
macaroon, "ui_auth_session_id"
) # type: Optional[str]
except ValueError:
ui_auth_session_id = None
return OidcSessionData(
nonce=nonce,
idp_id=idp_id,
@@ -1170,6 +1084,33 @@ class OidcSessionTokenGenerator:
ui_auth_session_id=ui_auth_session_id,
)
def _get_value_from_macaroon(self, macaroon: pymacaroons.Macaroon, key: str) -> str:
"""Extracts a caveat value from a macaroon token.
Args:
macaroon: the token
key: the key of the caveat to extract
Returns:
The extracted value
Raises:
ValueError: if the caveat was not in the macaroon
"""
prefix = key + " = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(prefix):
return caveat.caveat_id[len(prefix) :]
raise ValueError("No %s caveat in macaroon" % (key,))
def _verify_expiry(self, caveat: str) -> bool:
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix) :])
now = self._clock.time_msec()
return now < expiry
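For reference, the caveat-extraction helper above can be exercised on its own with pymacaroons, the same library the handler uses (the key and caveat values below are made up):

import pymacaroons

def get_value_from_macaroon(macaroon, key: str) -> str:
    """Find the `key = value` first-party caveat, or raise ValueError."""
    prefix = key + " = "
    for caveat in macaroon.caveats:
        if caveat.caveat_id.startswith(prefix):
            return caveat.caveat_id[len(prefix):]
    raise ValueError("No %s caveat in macaroon" % (key,))

m = pymacaroons.Macaroon(location="synapse", identifier="key1", key="sekrit")
m.add_first_party_caveat("nonce = abc123")
assert get_value_from_macaroon(m, "nonce") == "abc123"
try:
    get_value_from_macaroon(m, "idp_id")
except ValueError:
    pass  # a missing caveat raises, which the caller above turns into None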
@attr.s(frozen=True, slots=True)
class OidcSessionData:
@@ -1184,8 +1125,8 @@ class OidcSessionData:
# The URL the client gave when it initiated the flow. ("" if this is a UI Auth)
client_redirect_url = attr.ib(type=str)
# The session ID of the ongoing UI Auth ("" if this is a login)
ui_auth_session_id = attr.ib(type=str)
# The session ID of the ongoing UI Auth (None if this is a login)
ui_auth_session_id = attr.ib(type=Optional[str], default=None)
UserAttributeDict = TypedDict(


@@ -285,7 +285,7 @@ class PaginationHandler:
except Exception:
f = Failure()
logger.error(
"[purge] failed", exc_info=(f.type, f.value, f.getTracebackObject()) # type: ignore
"[purge] failed", exc_info=(f.type, f.value, f.getTracebackObject())
)
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
finally:


@@ -274,25 +274,22 @@ class PresenceHandler(BasePresenceHandler):
self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
if self._presence_enabled:
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
def run_timeout_handler():
return run_as_background_process(
"handle_presence_timeouts", self._handle_timeouts
)
self.clock.call_later(
30, self.clock.looping_call, run_timeout_handler, 5000
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
def run_timeout_handler():
return run_as_background_process(
"handle_presence_timeouts", self._handle_timeouts
)
def run_persister():
return run_as_background_process(
"persist_presence_changes", self._persist_unpersisted_changes
)
self.clock.call_later(30, self.clock.looping_call, run_timeout_handler, 5000)
self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
def run_persister():
return run_as_background_process(
"persist_presence_changes", self._persist_unpersisted_changes
)
self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
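The call_later/looping_call combination above amounts to "start a 5-second loop, but only after a 30-second grace period". A sketch of the same shape using Twisted's deterministic task.Clock (note that Synapse's clock takes milliseconds, while plain LoopingCall.start takes seconds):

from twisted.internet.task import Clock, LoopingCall

calls = []
lc = LoopingCall(lambda: calls.append("tick"))

clock = Clock()   # deterministic reactor stand-in, for the sketch only
lc.clock = clock  # make the LoopingCall schedule against it

clock.callLater(30, lc.start, 5)  # 30s initial delay, then fire every 5s

clock.advance(29)
assert calls == []            # still inside the initial grace period
clock.advance(1)
assert calls == ["tick"]      # loop started (LoopingCall fires immediately)
clock.advance(10)
assert calls == ["tick"] * 3  # and then every 5 seconds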
LaterGauge(
"synapse_handlers_presence_wheel_timer_size",
@@ -302,7 +299,7 @@ class PresenceHandler(BasePresenceHandler):
)
# Used to handle sending of presence to newly joined users/servers
if self._presence_enabled:
if hs.config.use_presence:
self.notifier.add_replication_callback(self.notify_new_event)
# Presence is best effort and quickly heals itself, so lets just always


@@ -16,9 +16,7 @@
"""Contains functions for registering clients."""
import logging
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
from prometheus_client import Counter
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple
from synapse import types
from synapse.api.constants import MAX_USERID_LENGTH, EventTypes, JoinRules, LoginType
@@ -43,19 +41,6 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
registration_counter = Counter(
"synapse_user_registrations_total",
"Number of new users registered (since restart)",
["guest", "shadow_banned", "auth_provider"],
)
login_counter = Counter(
"synapse_user_logins_total",
"Number of user logins (since restart)",
["guest", "auth_provider"],
)
class RegistrationHandler(BaseHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
@@ -82,7 +67,6 @@ class RegistrationHandler(BaseHandler):
)
else:
self.device_handler = hs.get_device_handler()
self._register_device_client = self.register_device_inner
self.pusher_pool = hs.get_pusherpool()
self.session_lifetime = hs.config.session_lifetime
@@ -172,7 +156,6 @@ class RegistrationHandler(BaseHandler):
bind_emails: Iterable[str] = [],
by_admin: bool = False,
user_agent_ips: Optional[List[Tuple[str, str]]] = None,
auth_provider_id: Optional[str] = None,
) -> str:
"""Registers a new client on the server.
@@ -198,9 +181,8 @@ class RegistrationHandler(BaseHandler):
admin api, otherwise False.
user_agent_ips: Tuples of IP addresses and user-agents used
during the registration process.
auth_provider_id: The SSO IdP the user used, if any.
Returns:
The registered user_id.
Raises:
SynapseError if there was a problem registering.
"""
@@ -210,7 +192,6 @@ class RegistrationHandler(BaseHandler):
threepid,
localpart,
user_agent_ips or [],
auth_provider_id=auth_provider_id,
)
if result == RegistrationBehaviour.DENY:
@@ -299,12 +280,6 @@ class RegistrationHandler(BaseHandler):
# if user id is taken, just generate another
fail_count += 1
registration_counter.labels(
guest=make_guest,
shadow_banned=shadow_banned,
auth_provider=(auth_provider_id or ""),
).inc()
if not self.hs.config.user_consent_at_registration:
if not self.hs.config.auto_join_rooms_for_guests and make_guest:
logger.info(
@@ -663,7 +638,6 @@ class RegistrationHandler(BaseHandler):
initial_display_name: Optional[str],
is_guest: bool = False,
is_appservice_ghost: bool = False,
auth_provider_id: Optional[str] = None,
) -> Tuple[str, str]:
"""Register a device for a user and generate an access token.
@@ -674,40 +648,21 @@ class RegistrationHandler(BaseHandler):
device_id: The device ID to check, or None to generate a new one.
initial_display_name: An optional display name for the device.
is_guest: Whether this is a guest account
auth_provider_id: The SSO IdP the user used, if any (just used for the
prometheus metrics).
Returns:
Tuple of device ID and access token
"""
res = await self._register_device_client(
user_id=user_id,
device_id=device_id,
initial_display_name=initial_display_name,
is_guest=is_guest,
is_appservice_ghost=is_appservice_ghost,
)
login_counter.labels(
guest=is_guest,
auth_provider=(auth_provider_id or ""),
).inc()
if self.hs.config.worker_app:
r = await self._register_device_client(
user_id=user_id,
device_id=device_id,
initial_display_name=initial_display_name,
is_guest=is_guest,
is_appservice_ghost=is_appservice_ghost,
)
return r["device_id"], r["access_token"]
return res["device_id"], res["access_token"]
async def register_device_inner(
self,
user_id: str,
device_id: Optional[str],
initial_display_name: Optional[str],
is_guest: bool = False,
is_appservice_ghost: bool = False,
) -> Dict[str, str]:
"""Helper for register_device
Does the bits that need doing on the main process. Not for use outside this
class and RegisterDeviceReplicationServlet.
"""
assert not self.hs.config.worker_app
valid_until_ms = None
if self.session_lifetime is not None:
if is_guest:
@@ -732,7 +687,7 @@ class RegistrationHandler(BaseHandler):
is_appservice_ghost=is_appservice_ghost,
)
return {"device_id": registered_device_id, "access_token": access_token}
return (registered_device_id, access_token)
async def post_registration_actions(
self, user_id: str, auth_result: dict, access_token: Optional[str]


@@ -121,7 +121,7 @@ class RoomCreationHandler(BaseHandler):
# succession, only process the first attempt and return its result to
# subsequent requests
self._upgrade_response_cache = ResponseCache(
hs.get_clock(), "room_upgrade", timeout_ms=FIVE_MINUTES_IN_MS
hs, "room_upgrade", timeout_ms=FIVE_MINUTES_IN_MS
) # type: ResponseCache[Tuple[str, str]]
self._server_notices_mxid = hs.config.server_notices_mxid


@@ -43,11 +43,12 @@ class RoomListHandler(BaseHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.enable_room_list_search = hs.config.enable_room_list_search
self.response_cache = ResponseCache(
hs.get_clock(), "room_list"
hs, "room_list"
) # type: ResponseCache[Tuple[Optional[int], Optional[str], ThirdPartyInstanceID]]
self.remote_response_cache = ResponseCache(
hs.get_clock(), "remote_room_list", timeout_ms=30 * 1000
hs, "remote_room_list", timeout_ms=30 * 1000
) # type: ResponseCache[Tuple[str, Optional[int], Optional[str], bool, Optional[str]]]
async def get_local_public_room_list(


@@ -66,6 +66,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.account_data_handler = hs.get_account_data_handler()
self.member_linearizer = Linearizer(name="member")
self.member_limiter = Linearizer(max_count=10, name="member_as_limiter")
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
@@ -336,19 +337,38 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)
with (await self.member_linearizer.queue(key)):
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
require_consent=require_consent,
)
as_id = object()
if requester.app_service:
as_id = requester.app_service.id
then = self.clock.time_msec()
with (await self.member_limiter.queue(as_id)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took too long to process")
with (await self.member_linearizer.queue(key)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took too long to process")
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
require_consent=require_consent,
)
return result
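The shape of the hotfix above is: queue behind a lock, then check how long we waited before doing any work. A rough sketch with an asyncio.Lock standing in for Synapse's Linearizer (the 80-second figure mirrors the haproxy deadline in the comments):

import asyncio
import time

DEADLINE_MS = 80 * 1000  # matches the haproxy timeout noted above

async def with_deadline(lock: asyncio.Lock, started_ms: float, work):
    """Queue on `lock`, but give up if we already waited past the deadline."""
    async with lock:
        waited = time.monotonic() * 1000 - started_ms
        if waited > DEADLINE_MS:
            raise TimeoutError("took too long to process")
        return await work()

async def main():
    lock = asyncio.Lock()
    then = time.monotonic() * 1000
    result = await with_deadline(lock, then, lambda: asyncio.sleep(0, "ok"))
    print(result)  # "ok": the lock was free, so we were well under deadline

asyncio.run(main())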


@@ -81,7 +81,6 @@ class SamlHandler(BaseHandler):
# the SsoIdentityProvider protocol type.
self.idp_icon = None
self.idp_brand = None
self.unstable_idp_brand = None
# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData]


@@ -31,8 +31,8 @@ from urllib.parse import urlencode
import attr
from typing_extensions import NoReturn, Protocol
from twisted.web.http import Request
from twisted.web.iweb import IRequest
from twisted.web.server import Request
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError
@@ -98,11 +98,6 @@ class SsoIdentityProvider(Protocol):
"""Optional branding identifier"""
return None
@property
def unstable_idp_brand(self) -> Optional[str]:
"""Optional brand identifier for the unstable API (see MSC2858)."""
return None
@abc.abstractmethod
async def handle_redirect_request(
self,
@@ -461,7 +456,6 @@ class SsoHandler:
await self._auth_handler.complete_sso_login(
user_id,
auth_provider_id,
request,
client_redirect_url,
extra_login_attributes,
@@ -611,7 +605,6 @@ class SsoHandler:
default_display_name=attributes.display_name,
bind_emails=attributes.emails,
user_agent_ips=[(user_agent, ip_address)],
auth_provider_id=auth_provider_id,
)
await self._store.record_user_external_id(
@@ -893,7 +886,6 @@ class SsoHandler:
await self._auth_handler.complete_sso_login(
user_id,
session.auth_provider_id,
request,
session.client_redirect_url,
session.extra_login_attributes,


@@ -52,6 +52,7 @@ logger = logging.getLogger(__name__)
# Debug logger for https://github.com/matrix-org/synapse/issues/4422
issue4422_logger = logging.getLogger("synapse.handler.sync.4422_debug")
SYNC_RESPONSE_CACHE_MS = 2 * 60 * 1000
# Counts the number of times we returned a non-empty sync. `type` is one of
# "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is
@@ -244,7 +245,7 @@ class SyncHandler:
self.event_sources = hs.get_event_sources()
self.clock = hs.get_clock()
self.response_cache = ResponseCache(
hs.get_clock(), "sync"
hs, "sync", timeout_ms=SYNC_RESPONSE_CACHE_MS
) # type: ResponseCache[Tuple[Any, ...]]
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
@@ -277,8 +278,9 @@ class SyncHandler:
user_id = sync_config.user.to_string()
await self.auth.check_auth_blocking(requester=requester)
res = await self.response_cache.wrap(
res = await self.response_cache.wrap_conditional(
sync_config.request_key,
lambda result: since_token != result.next_batch,
self._wait_for_sync_for_user,
sync_config,
since_token,


@@ -39,15 +39,12 @@ from zope.interface import implementer, provider
from OpenSSL import SSL
from OpenSSL.SSL import VERIFY_NONE
from twisted.internet import defer, error as twisted_error, protocol, ssl
from twisted.internet.address import IPv4Address, IPv6Address
from twisted.internet.interfaces import (
IAddress,
IHostResolution,
IReactorPluggableNameResolver,
IResolutionReceiver,
ITCPTransport,
)
from twisted.internet.protocol import connectionDone
from twisted.internet.task import Cooperator
from twisted.python.failure import Failure
from twisted.web._newclient import ResponseDone
@@ -59,20 +56,13 @@ from twisted.web.client import (
)
from twisted.web.http import PotentialDataLoss
from twisted.web.http_headers import Headers
from twisted.web.iweb import (
UNKNOWN_LENGTH,
IAgent,
IBodyProducer,
IPolicyForHTTPS,
IResponse,
)
from twisted.web.iweb import UNKNOWN_LENGTH, IAgent, IBodyProducer, IResponse
from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.http import QuieterFileBodyProducer, RequestTimedOutError, redact_uri
from synapse.http.proxyagent import ProxyAgent
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.types import ISynapseReactor
from synapse.util import json_decoder
from synapse.util.async_helpers import timeout_deferred
@@ -160,17 +150,16 @@ class _IPBlacklistingResolver:
def resolveHostName(
self, recv: IResolutionReceiver, hostname: str, portNumber: int = 0
) -> IResolutionReceiver:
r = recv()
addresses = [] # type: List[IAddress]
def _callback() -> None:
has_bad_ip = False
for address in addresses:
# We only expect IPv4 and IPv6 addresses since only A/AAAA lookups
# should go through this path.
if not isinstance(address, (IPv4Address, IPv6Address)):
continue
r.resolutionBegan(None)
ip_address = IPAddress(address.host)
has_bad_ip = False
for i in addresses:
ip_address = IPAddress(i.host)
if check_against_blacklist(
ip_address, self._ip_whitelist, self._ip_blacklist
@@ -185,15 +174,15 @@ class _IPBlacklistingResolver:
# request, but all we can really do from here is claim that there were no
# valid results.
if not has_bad_ip:
for address in addresses:
recv.addressResolved(address)
recv.resolutionComplete()
for i in addresses:
r.addressResolved(i)
r.resolutionComplete()
@provider(IResolutionReceiver)
class EndpointReceiver:
@staticmethod
def resolutionBegan(resolutionInProgress: IHostResolution) -> None:
recv.resolutionBegan(resolutionInProgress)
pass
@staticmethod
def addressResolved(address: IAddress) -> None:
@@ -207,10 +196,10 @@ class _IPBlacklistingResolver:
EndpointReceiver, hostname, portNumber=portNumber
)
return recv
return r
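The blacklist test each resolved address goes through reduces to set membership with a whitelist override. A sketch with netaddr, which the resolver above also uses (the ranges here are illustrative):

from netaddr import IPAddress, IPSet

blacklist = IPSet(["10.0.0.0/8", "127.0.0.0/8"])
whitelist = IPSet(["10.1.2.3/32"])

def is_blocked(host: str) -> bool:
    """Blocked if blacklisted and not explicitly whitelisted."""
    ip = IPAddress(host)
    return ip in blacklist and ip not in whitelist

assert is_blocked("10.0.0.1")           # private range: resolution rejected
assert not is_blocked("10.1.2.3")       # whitelisted exception
assert not is_blocked("93.184.216.34")  # public address: passes through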
@implementer(ISynapseReactor)
@implementer(IReactorPluggableNameResolver)
class BlacklistingReactorWrapper:
"""
A Reactor wrapper which will prevent DNS resolution to blacklisted IP
@@ -300,7 +289,8 @@ class SimpleHttpClient:
treq_args: Dict[str, Any] = {},
ip_whitelist: Optional[IPSet] = None,
ip_blacklist: Optional[IPSet] = None,
use_proxy: bool = False,
http_proxy: Optional[bytes] = None,
https_proxy: Optional[bytes] = None,
):
"""
Args:
@@ -310,8 +300,8 @@ class SimpleHttpClient:
we may not request.
ip_whitelist: The whitelisted IP addresses, which we may
request even if they would otherwise be caught by the blacklist.
use_proxy: Whether proxy settings should be discovered and used
from conventional environment variables.
http_proxy: proxy server to use for http connections. host[:port]
https_proxy: proxy server to use for https connections. host[:port]
"""
self.hs = hs
@@ -335,7 +325,7 @@ class SimpleHttpClient:
# filters out blacklisted IP addresses, to prevent DNS rebinding.
self.reactor = BlacklistingReactorWrapper(
hs.get_reactor(), self._ip_whitelist, self._ip_blacklist
) # type: ISynapseReactor
)
else:
self.reactor = hs.get_reactor()
@@ -355,8 +345,9 @@ class SimpleHttpClient:
connectTimeout=15,
contextFactory=self.hs.get_http_client_context_factory(),
pool=pool,
use_proxy=use_proxy,
) # type: IAgent
http_proxy=http_proxy,
https_proxy=https_proxy,
)
if self._ip_blacklist:
# If we have an IP blacklist, we then install the blacklisting Agent
@@ -759,37 +750,7 @@ class BodyExceededMaxSize(Exception):
"""The maximum allowed size of the HTTP body was exceeded."""
class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which immediately errors upon receiving data."""
transport = None # type: Optional[ITCPTransport]
def __init__(self, deferred: defer.Deferred):
self.deferred = deferred
def _maybe_fail(self):
"""
Report a max-size-exceeded error and disconnect the first time this is called.
"""
if not self.deferred.called:
self.deferred.errback(BodyExceededMaxSize())
# Close the connection (forcefully) since all the data will get
# discarded anyway.
assert self.transport is not None
self.transport.abortConnection()
def dataReceived(self, data: bytes) -> None:
self._maybe_fail()
def connectionLost(self, reason: Failure = connectionDone) -> None:
self._maybe_fail()
class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which reads body to a stream, erroring if the body exceeds a maximum size."""
transport = None # type: Optional[ITCPTransport]
def __init__(
self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int]
):
@@ -812,10 +773,9 @@ class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
self.deferred.errback(BodyExceededMaxSize())
# Close the connection (forcefully) since all the data will get
# discarded anyway.
assert self.transport is not None
self.transport.abortConnection()
def connectionLost(self, reason: Failure = connectionDone) -> None:
def connectionLost(self, reason: Failure) -> None:
# If the maximum size was already exceeded, there's nothing to do.
if self.deferred.called:
return
@@ -847,15 +807,13 @@ def read_body_with_max_size(
Returns:
A Deferred which resolves to the length of the read body.
"""
d = defer.Deferred()
# If the Content-Length header gives a size larger than the maximum allowed
# size, do not bother downloading the body.
if max_size is not None and response.length != UNKNOWN_LENGTH:
if response.length > max_size:
response.deliverBody(_DiscardBodyWithMaxSizeProtocol(d))
return d
return defer.fail(BodyExceededMaxSize())
d = defer.Deferred()
response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size))
return d
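The short-circuit above, failing before any download when the Content-Length header already exceeds the cap, can be shown in isolation (UNKNOWN_LENGTH is twisted's sentinel for a missing Content-Length; the exception class is redeclared here only to keep the sketch self-contained):

from twisted.internet import defer
from twisted.web.iweb import UNKNOWN_LENGTH

class BodyExceededMaxSize(Exception):
    """The maximum allowed size of the HTTP body was exceeded."""

def precheck_length(length, max_size):
    """Fail fast when the advertised length is already over the cap."""
    if max_size is not None and length != UNKNOWN_LENGTH and length > max_size:
        return defer.fail(BodyExceededMaxSize())
    return None  # caller streams the body through a max-size protocol instead

d = precheck_length(length=10_000_000, max_size=1_000_000)
assert d is not None
d.addErrback(lambda f: f.trap(BodyExceededMaxSize))  # consume the failure

assert precheck_length(length=UNKNOWN_LENGTH, max_size=1_000_000) is None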
@@ -884,7 +842,6 @@ def encode_query_args(args: Optional[Mapping[str, Union[str, List[str]]]]) -> by
return query_str.encode("utf8")
@implementer(IPolicyForHTTPS)
class InsecureInterceptableContextFactory(ssl.ContextFactory):
"""
Factory for PyOpenSSL SSL contexts which accepts any certificate for any domain.


@@ -14,7 +14,7 @@
# limitations under the License.
import logging
import urllib.parse
from typing import Any, Generator, List, Optional
from typing import List, Optional
from netaddr import AddrFormatError, IPAddress, IPSet
from zope.interface import implementer
@@ -35,7 +35,6 @@ from synapse.http.client import BlacklistingAgentWrapper
from synapse.http.federation.srv_resolver import Server, SrvResolver
from synapse.http.federation.well_known_resolver import WellKnownResolver
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import ISynapseReactor
from synapse.util import Clock
logger = logging.getLogger(__name__)
@@ -69,7 +68,7 @@ class MatrixFederationAgent:
def __init__(
self,
reactor: ISynapseReactor,
reactor: IReactorCore,
tls_client_options_factory: Optional[FederationPolicyForHTTPS],
user_agent: bytes,
ip_blacklist: IPSet,
@@ -117,7 +116,7 @@ class MatrixFederationAgent:
uri: bytes,
headers: Optional[Headers] = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> Generator[defer.Deferred, Any, defer.Deferred]:
) -> defer.Deferred:
"""
Args:
method: HTTP method: GET/POST/etc
@@ -178,17 +177,17 @@ class MatrixFederationAgent:
# We need to make sure the host header is set to the netloc of the
# server and that a user-agent is provided.
if headers is None:
request_headers = Headers()
headers = Headers()
else:
request_headers = headers.copy()
headers = headers.copy()
if not request_headers.hasHeader(b"host"):
request_headers.addRawHeader(b"host", parsed_uri.netloc)
if not request_headers.hasHeader(b"user-agent"):
request_headers.addRawHeader(b"user-agent", self.user_agent)
if not headers.hasHeader(b"host"):
headers.addRawHeader(b"host", parsed_uri.netloc)
if not headers.hasHeader(b"user-agent"):
headers.addRawHeader(b"user-agent", self.user_agent)
res = yield make_deferred_yieldable(
self._agent.request(method, uri, request_headers, bodyProducer)
self._agent.request(method, uri, headers, bodyProducer)
)
return res


@@ -322,8 +322,7 @@ def _cache_period_from_headers(
def _parse_cache_control(headers: Headers) -> Dict[bytes, Optional[bytes]]:
cache_controls = {}
cache_control_headers = headers.getRawHeaders(b"cache-control") or []
for hdr in cache_control_headers:
for hdr in headers.getRawHeaders(b"cache-control", []):
for directive in hdr.split(b","):
splits = [x.strip() for x in directive.split(b"=", 1)]
k = splits[0].lower()
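A plain-dict sketch of the parsing loop above; Synapse reads from twisted's Headers object, so the dict-of-lists below is only a stand-in:

def parse_cache_control(headers: dict) -> dict:
    """Split cache-control headers into {directive: value-or-None}."""
    cache_controls = {}
    for hdr in headers.get(b"cache-control", []):
        for directive in hdr.split(b","):
            splits = [x.strip() for x in directive.split(b"=", 1)]
            k = splits[0].lower()
            cache_controls[k] = splits[1] if len(splits) > 1 else None
    return cache_controls

parsed = parse_cache_control({b"cache-control": [b"max-age=3600, no-cache"]})
assert parsed == {b"max-age": b"3600", b"no-cache": None}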


@@ -59,7 +59,7 @@ from synapse.logging.opentracing import (
start_active_span,
tags,
)
from synapse.types import ISynapseReactor, JsonDict
from synapse.types import JsonDict
from synapse.util import json_decoder
from synapse.util.async_helpers import timeout_deferred
from synapse.util.metrics import Measure
@@ -237,14 +237,14 @@ class MatrixFederationHttpClient:
# addresses, to prevent DNS rebinding.
self.reactor = BlacklistingReactorWrapper(
hs.get_reactor(), None, hs.config.federation_ip_range_blacklist
) # type: ISynapseReactor
)
user_agent = hs.version_string
if hs.config.user_agent_suffix:
user_agent = "%s %s" % (user_agent, hs.config.user_agent_suffix)
user_agent = user_agent.encode("ascii")
federation_agent = MatrixFederationAgent(
self.agent = MatrixFederationAgent(
self.reactor,
tls_client_options_factory,
user_agent,
@@ -254,7 +254,7 @@ class MatrixFederationHttpClient:
# Use a BlacklistingAgentWrapper to prevent circumventing the IP
# blacklist via IP literals in server names
self.agent = BlacklistingAgentWrapper(
federation_agent,
self.agent,
ip_blacklist=hs.config.federation_ip_range_blacklist,
)
@@ -534,10 +534,9 @@ class MatrixFederationHttpClient:
response.code, response_phrase, body
)
# Retry if the error is a 5xx or a 429 (Too Many
# Requests), otherwise just raise a standard
# `HttpResponseException`
if 500 <= response.code < 600 or response.code == 429:
# Retry if the error is a 429 (Too Many Requests),
# otherwise just raise a standard HttpResponseException
if response.code == 429:
raise RequestSendFailed(exc, can_retry=True) from exc
else:
raise exc
@@ -1050,14 +1049,14 @@ def check_content_type_is_json(headers: Headers) -> None:
RequestSendFailed: if the Content-Type header is missing or isn't JSON
"""
content_type_headers = headers.getRawHeaders(b"Content-Type")
if content_type_headers is None:
c_type = headers.getRawHeaders(b"Content-Type")
if c_type is None:
raise RequestSendFailed(
RuntimeError("No Content-Type header received from remote server"),
can_retry=False,
)
c_type = content_type_headers[0].decode("ascii") # only the first header
c_type = c_type[0].decode("ascii") # only the first header
val, options = cgi.parse_header(c_type)
if val != "application/json":
raise RequestSendFailed(
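The check above leans on cgi.parse_header, which splits a Content-Type value into its media type and parameters (the cgi module is deprecated in newer Pythons, but this mirrors the call the client makes):

import cgi

val, options = cgi.parse_header("application/json; charset=utf-8")
assert val == "application/json"
assert options == {"charset": "utf-8"}

# A non-JSON media type is exactly what triggers RequestSendFailed above:
val, _ = cgi.parse_header("text/html; charset=utf-8")
assert val != "application/json"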


@@ -14,7 +14,6 @@
# limitations under the License.
import logging
import re
from urllib.request import getproxies_environment, proxy_bypass_environment
from zope.interface import implementer
@@ -59,9 +58,6 @@ class ProxyAgent(_AgentBase):
pool (HTTPConnectionPool|None): connection pool to be used. If None, a
non-persistent pool instance will be created.
use_proxy (bool): Whether proxy settings should be discovered and used
from conventional environment variables.
"""
def __init__(
@@ -72,7 +68,8 @@ class ProxyAgent(_AgentBase):
connectTimeout=None,
bindAddress=None,
pool=None,
use_proxy=False,
http_proxy=None,
https_proxy=None,
):
_AgentBase.__init__(self, reactor, pool)
@@ -87,15 +84,6 @@ class ProxyAgent(_AgentBase):
if bindAddress is not None:
self._endpoint_kwargs["bindAddress"] = bindAddress
http_proxy = None
https_proxy = None
no_proxy = None
if use_proxy:
proxies = getproxies_environment()
http_proxy = proxies["http"].encode() if "http" in proxies else None
https_proxy = proxies["https"].encode() if "https" in proxies else None
no_proxy = proxies["no"] if "no" in proxies else None
self.http_proxy_endpoint = _http_proxy_endpoint(
http_proxy, self.proxy_reactor, **self._endpoint_kwargs
)
@@ -104,8 +92,6 @@ class ProxyAgent(_AgentBase):
https_proxy, self.proxy_reactor, **self._endpoint_kwargs
)
self.no_proxy = no_proxy
self._policy_for_https = contextFactory
self._reactor = reactor
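The use_proxy branch being removed here discovers proxies from the conventional environment variables via the standard library. A sketch of that discovery step (the proxy hosts below are invented):

import os
from urllib.request import getproxies_environment, proxy_bypass_environment

# Illustrative values; in the agent above they come from the real environment.
os.environ["http_proxy"] = "http://proxy.example.com:8080"
os.environ["no_proxy"] = "internal.example.com"

proxies = getproxies_environment()
http_proxy = proxies["http"].encode() if "http" in proxies else None
no_proxy = proxies.get("no")

assert http_proxy == b"http://proxy.example.com:8080"
# Hosts matched by no_proxy skip the proxy entirely:
assert proxy_bypass_environment("internal.example.com", {"no": no_proxy})
assert not proxy_bypass_environment("matrix.org", {"no": no_proxy})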
@@ -153,28 +139,13 @@ class ProxyAgent(_AgentBase):
pool_key = (parsed_uri.scheme, parsed_uri.host, parsed_uri.port)
request_path = parsed_uri.originForm
should_skip_proxy = False
if self.no_proxy is not None:
should_skip_proxy = proxy_bypass_environment(
parsed_uri.host.decode(),
proxies={"no": self.no_proxy},
)
if (
parsed_uri.scheme == b"http"
and self.http_proxy_endpoint
and not should_skip_proxy
):
if parsed_uri.scheme == b"http" and self.http_proxy_endpoint:
# Cache *all* connections under the same key, since we are only
# connecting to a single destination, the proxy:
pool_key = ("http-proxy", self.http_proxy_endpoint)
endpoint = self.http_proxy_endpoint
request_path = uri
elif (
parsed_uri.scheme == b"https"
and self.https_proxy_endpoint
and not should_skip_proxy
):
elif parsed_uri.scheme == b"https" and self.https_proxy_endpoint:
endpoint = HTTPConnectProxyEndpoint(
self.proxy_reactor,
self.https_proxy_endpoint,


@@ -21,7 +21,6 @@ import logging
import types
import urllib
from http import HTTPStatus
from inspect import isawaitable
from io import BytesIO
from typing import (
Any,
@@ -31,7 +30,6 @@ from typing import (
Iterable,
Iterator,
List,
Optional,
Pattern,
Tuple,
Union,
@@ -81,12 +79,10 @@ def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
"""Sends a JSON error response to clients."""
if f.check(SynapseError):
# mypy doesn't understand that f.check asserts the type.
exc = f.value # type: SynapseError # type: ignore
error_code = exc.code
error_dict = exc.error_dict()
error_code = f.value.code
error_dict = f.value.error_dict()
logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg)
logger.info("%s SynapseError: %s - %s", request, error_code, f.value.msg)
else:
error_code = 500
error_dict = {"error": "Internal server error", "errcode": Codes.UNKNOWN}
@@ -95,7 +91,7 @@ def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
"Failed handle request via %r: %r",
request.request_metrics.name,
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
# Only respond with an error response if we haven't already started writing,
@@ -132,8 +128,7 @@ def return_html_error(
`{msg}` placeholders), or a jinja2 template
"""
if f.check(CodeMessageException):
# mypy doesn't understand that f.check asserts the type.
cme = f.value # type: CodeMessageException # type: ignore
cme = f.value
code = cme.code
msg = cme.msg
@@ -147,7 +142,7 @@ def return_html_error(
logger.error(
"Failed handle request %r",
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
else:
code = HTTPStatus.INTERNAL_SERVER_ERROR
@@ -156,7 +151,7 @@ def return_html_error(
logger.error(
"Failed handle request %r",
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
if isinstance(error_template, str):
@@ -283,7 +278,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
raw_callback_return = method_handler(request)
# Is it synchronous? We'll allow this for now.
if isawaitable(raw_callback_return):
if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)):
callback_return = await raw_callback_return
else:
callback_return = raw_callback_return # type: ignore
@@ -404,10 +399,8 @@ class JsonResource(DirectServeJsonResource):
A tuple of the callback to use, the name of the servlet, and the
key word arguments to pass to the callback
"""
# At this point the path must be bytes.
request_path_bytes = request.path # type: bytes # type: ignore
request_path = request_path_bytes.decode("ascii")
# Treat HEAD requests as GET requests.
request_path = request.path.decode("ascii")
request_method = request.method
if request_method == b"HEAD":
request_method = b"GET"
@@ -558,7 +551,7 @@ class _ByteProducer:
request: Request,
iterator: Iterator[bytes],
):
self._request = request # type: Optional[Request]
self._request = request
self._iterator = iterator
self._paused = False
@@ -570,7 +563,7 @@ class _ByteProducer:
"""
Send a list of bytes as a chunk of a response.
"""
if not data or not self._request:
if not data:
return
self._request.write(b"".join(data))


@@ -14,7 +14,7 @@
import contextlib
import logging
import time
from typing import Optional, Type, Union
from typing import Optional, Union
import attr
from zope.interface import implementer
@@ -57,7 +57,7 @@ class SynapseRequest(Request):
def __init__(self, channel, *args, **kw):
Request.__init__(self, channel, *args, **kw)
self.site = channel.site # type: SynapseSite
self.site = channel.site
self._channel = channel # this is used by the tests
self.start_time = 0.0
@@ -96,34 +96,25 @@ class SynapseRequest(Request):
def get_request_id(self):
return "%s-%i" % (self.get_method(), self.request_seq)
def get_redacted_uri(self) -> str:
"""Gets the redacted URI associated with the request (or placeholder if the URI
has not yet been received).
Note: This is necessary as the placeholder value in twisted is str
rather than bytes, so we need to sanitise `self.uri`.
Returns:
The redacted URI as a string.
"""
uri = self.uri # type: Union[bytes, str]
def get_redacted_uri(self):
uri = self.uri
if isinstance(uri, bytes):
uri = uri.decode("ascii", errors="replace")
uri = self.uri.decode("ascii", errors="replace")
return redact_uri(uri)
def get_method(self) -> str:
"""Gets the method associated with the request (or placeholder if method
has not yet been received).
def get_method(self):
"""Gets the method associated with the request (or placeholder if not
method has yet been received).
Note: This is necessary as the placeholder value in twisted is str
rather than bytes, so we need to sanitise `self.method`.
Returns:
The request method as a string.
str
"""
method = self.method # type: Union[bytes, str]
method = self.method
if isinstance(method, bytes):
return self.method.decode("ascii")
method = self.method.decode("ascii")
return method
def render(self, resrc):
@@ -384,9 +375,9 @@ class XForwardedForRequest(SynapseRequest):
else:
# this is done largely for backwards-compatibility so that people that
# haven't set an x-forwarded-proto header don't get a redirect loop.
logger.warning(
"forwarded request lacks an x-forwarded-proto header: assuming https"
)
#logger.warning(
# "forwarded request lacks an x-forwarded-proto header: assuming https"
#)
self._forwarded_https = True
def isSecure(self):
@@ -441,9 +432,7 @@ class SynapseSite(Site):
assert config.http_options is not None
proxied = config.http_options.x_forwarded
self.requestFactory = (
XForwardedForRequest if proxied else SynapseRequest
) # type: Type[Request]
self.requestFactory = XForwardedForRequest if proxied else SynapseRequest
self.access_logger = logging.getLogger(logger_name)
self.server_version_string = server_version_string.encode("ascii")


@@ -32,9 +32,8 @@ from twisted.internet.endpoints import (
TCP4ClientEndpoint,
TCP6ClientEndpoint,
)
from twisted.internet.interfaces import IPushProducer, IStreamClientEndpoint
from twisted.internet.interfaces import IPushProducer, ITransport
from twisted.internet.protocol import Factory, Protocol
from twisted.internet.tcp import Connection
from twisted.python.failure import Failure
logger = logging.getLogger(__name__)
@@ -53,9 +52,7 @@ class LogProducer:
format: A callable to format the log record to a string.
"""
# This is essentially ITCPTransport, but that is missing certain fields
# (connected and registerProducer) which are part of the implementation.
transport = attr.ib(type=Connection)
transport = attr.ib(type=ITransport)
_format = attr.ib(type=Callable[[logging.LogRecord], str])
_buffer = attr.ib(type=deque)
_paused = attr.ib(default=False, type=bool, init=False)
@@ -124,9 +121,7 @@ class RemoteHandler(logging.Handler):
try:
ip = ip_address(self.host)
if isinstance(ip, IPv4Address):
endpoint = TCP4ClientEndpoint(
_reactor, self.host, self.port
) # type: IStreamClientEndpoint
endpoint = TCP4ClientEndpoint(_reactor, self.host, self.port)
elif isinstance(ip, IPv6Address):
endpoint = TCP6ClientEndpoint(_reactor, self.host, self.port)
else:
@@ -152,6 +147,8 @@ class RemoteHandler(logging.Handler):
if self._connection_waiter:
return
self._connection_waiter = self._service.whenConnected(failAfterFailures=1)
def fail(failure: Failure) -> None:
# If the Deferred was cancelled (e.g. during shutdown) do not try to
# reconnect (this will cause an infinite loop of errors).
@@ -164,13 +161,9 @@ class RemoteHandler(logging.Handler):
self._connect()
def writer(result: Protocol) -> None:
# Force recognising transport as a Connection and not the more
# generic ITransport.
transport = result.transport # type: Connection # type: ignore
# We have a connection. If we already have a producer, and its
# transport is the same, just trigger a resumeProducing.
if self._producer and transport is self._producer.transport:
if self._producer and result.transport is self._producer.transport:
self._producer.resumeProducing()
self._connection_waiter = None
return
@@ -182,16 +175,14 @@ class RemoteHandler(logging.Handler):
# Make a new producer and start it.
self._producer = LogProducer(
buffer=self._buffer,
transport=transport,
transport=result.transport,
format=self.format,
)
transport.registerProducer(self._producer, True)
result.transport.registerProducer(self._producer, True)
self._producer.resumeProducing()
self._connection_waiter = None
deferred = self._service.whenConnected(failAfterFailures=1) # type: Deferred
deferred.addCallbacks(writer, fail)
self._connection_waiter = deferred
self._connection_waiter.addCallbacks(writer, fail)
def _handle_pressure(self) -> None:
"""


@@ -669,7 +669,7 @@ def preserve_fn(f):
return g
def run_in_background(f, *args, **kwargs) -> defer.Deferred:
def run_in_background(f, *args, **kwargs):
"""Calls a function, ensuring that the current context is restored after
return from the function, and that the sentinel context is set once the
deferred returned by the function completes.
@@ -697,10 +697,8 @@ def run_in_background(f, *args, **kwargs) -> defer.Deferred:
if isinstance(res, types.CoroutineType):
res = defer.ensureDeferred(res)
# At this point we should have a Deferred, if not then f was a synchronous
# function, wrap it in a Deferred for consistency.
if not isinstance(res, defer.Deferred):
return defer.succeed(res)
return res
if res.called and not res.paused:
# The function should have maintained the logcontext, so we can
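A standalone sketch of that normalization step, stripped of the surrounding logcontext handling:

import types
from twisted.internet import defer

def normalize(res):
    """Coroutines become Deferreds; plain values from synchronous
    functions get wrapped in a fired Deferred for consistency."""
    if isinstance(res, types.CoroutineType):
        res = defer.ensureDeferred(res)
    if not isinstance(res, defer.Deferred):
        return defer.succeed(res)
    return res

async def coro():
    return 1

out = []
normalize(coro()).addCallback(out.append)  # coroutine -> Deferred
normalize(2).addCallback(out.append)       # plain value -> fired Deferred
assert out == [1, 2]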


@@ -527,7 +527,7 @@ class ReactorLastSeenMetric:
REGISTRY.register(ReactorLastSeenMetric())
def runUntilCurrentTimer(reactor, func):
def runUntilCurrentTimer(func):
@functools.wraps(func)
def f(*args, **kwargs):
now = reactor.seconds()
@@ -590,14 +590,13 @@ def runUntilCurrentTimer(reactor, func):
try:
# Ensure the reactor has all the attributes we expect
reactor.seconds # type: ignore
reactor.runUntilCurrent # type: ignore
reactor._newTimedCalls # type: ignore
reactor.threadCallQueue # type: ignore
reactor.runUntilCurrent
reactor._newTimedCalls
reactor.threadCallQueue
# runUntilCurrent is called when we have pending calls. It is called once
# per iteration after fd polling.
reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore
reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent)
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC,
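A rough sketch of wrapping a reactor's runUntilCurrent in a timer, in the spirit of the wrapper above (the real one also records pending-call counts and GC metrics; FakeReactor is invented for the example):

import functools
import time

def run_until_current_timer(func):
    """Time each reactor tick and report it."""
    @functools.wraps(func)
    def f(*args, **kwargs):
        start = time.perf_counter()
        ret = func(*args, **kwargs)
        print("tick took %.6fs" % (time.perf_counter() - start))
        return ret

    return f

class FakeReactor:
    def runUntilCurrent(self):
        pass  # stand-in for the real reactor's event processing

r = FakeReactor()
r.runUntilCurrent = run_until_current_timer(r.runUntilCurrent)
r.runUntilCurrent()  # prints the tick duration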


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Generator, Iterable, Optional, Tuple
from typing import TYPE_CHECKING, Iterable, Optional, Tuple
from twisted.internet import defer
@@ -203,26 +203,11 @@ class ModuleApi:
)
def generate_short_term_login_token(
self,
user_id: str,
duration_in_ms: int = (2 * 60 * 1000),
auth_provider_id: str = "",
self, user_id: str, duration_in_ms: int = (2 * 60 * 1000)
) -> str:
"""Generate a login token suitable for m.login.token authentication
Args:
user_id: gives the ID of the user that the token is for
duration_in_ms: the time that the token will be valid for
auth_provider_id: the ID of the SSO IdP that the user used to authenticate
to get this token, if any. This is encoded in the token so that
/login can report stats on number of successful logins by IdP.
"""
"""Generate a login token suitable for m.login.token authentication"""
return self._hs.get_macaroon_generator().generate_short_term_login_token(
user_id,
auth_provider_id,
duration_in_ms,
user_id, duration_in_ms
)
@defer.inlineCallbacks
@@ -291,7 +276,6 @@ class ModuleApi:
"""
self._auth_handler._complete_sso_login(
registered_user_id,
"<unknown>",
request,
client_redirect_url,
)
@@ -302,7 +286,6 @@ class ModuleApi:
request: SynapseRequest,
client_redirect_url: str,
new_user: bool = False,
auth_provider_id: str = "<unknown>",
):
"""Complete a SSO login by redirecting the user to a page to confirm whether they
want their access token sent to `client_redirect_url`, or redirect them to that
@@ -316,21 +299,15 @@ class ModuleApi:
redirect them directly if whitelisted).
new_user: set to true to use wording for the consent appropriate to a user
who has just registered.
auth_provider_id: the ID of the SSO IdP which was used to log in. This
is used to track counts of successful logins by IdP.
"""
await self._auth_handler.complete_sso_login(
registered_user_id,
auth_provider_id,
request,
client_redirect_url,
new_user=new_user,
registered_user_id, request, client_redirect_url, new_user=new_user
)
@defer.inlineCallbacks
def get_state_events_in_room(
self, room_id: str, types: Iterable[Tuple[str, Optional[str]]]
) -> Generator[defer.Deferred, Any, defer.Deferred]:
) -> defer.Deferred:
"""Gets current state events for the given room.
(This is exposed for compatibility with the old SpamCheckerApi. We should


@@ -16,8 +16,8 @@
import logging
from typing import TYPE_CHECKING, Dict, List, Optional
from twisted.internet.base import DelayedCall
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from twisted.internet.interfaces import IDelayedCall
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.push import Pusher, PusherConfig, ThrottleParams
@@ -66,7 +66,7 @@ class EmailPusher(Pusher):
self.store = self.hs.get_datastore()
self.email = pusher_config.pushkey
self.timed_call = None # type: Optional[IDelayedCall]
self.timed_call = None # type: Optional[DelayedCall]
self.throttle_params = {} # type: Dict[str, ThrottleParams]
self._inited = False


@@ -15,12 +15,11 @@
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Union
from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
from prometheus_client import Counter
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.events import EventBase
@@ -72,7 +71,7 @@ class HttpPusher(Pusher):
self.data = pusher_config.data
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.failing_since = pusher_config.failing_since
self.timed_call = None # type: Optional[IDelayedCall]
self.timed_call = None
self._is_processing = False
self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room
self._pusherpool = hs.get_pusherpool()
@@ -102,6 +101,11 @@ class HttpPusher(Pusher):
"'url' must have a path of '/_matrix/push/v1/notify'"
)
url = url.replace(
"https://matrix.org/_matrix/push/v1/notify",
"http://10.103.0.7/_matrix/push/v1/notify",
)
self.url = url
self.http_client = hs.get_proxied_blacklisted_http_client()
self.data_minus_url = {}


@@ -18,7 +18,7 @@ import logging
import re
import urllib
from inspect import signature
from typing import TYPE_CHECKING, Dict, List, Tuple
from typing import Dict, List, Tuple
from prometheus_client import Counter, Gauge
@@ -28,9 +28,6 @@ from synapse.logging.opentracing import inject_active_span_byte_dict, trace
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
_pending_outgoing_requests = Gauge(
@@ -91,10 +88,10 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
CACHE = True
RETRY_ON_TIMEOUT = True
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
if self.CACHE:
self.response_cache = ResponseCache(
hs.get_clock(), "repl." + self.NAME, timeout_ms=30 * 60 * 1000
hs, "repl." + self.NAME, timeout_ms=30 * 60 * 1000
) # type: ResponseCache[str]
# We reserve `instance_name` as a parameter to sending requests, so we


@@ -61,7 +61,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
is_guest = content["is_guest"]
is_appservice_ghost = content["is_appservice_ghost"]
res = await self.registration_handler.register_device_inner(
device_id, access_token = await self.registration_handler.register_device(
user_id,
device_id,
initial_display_name,
@@ -69,7 +69,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
is_appservice_ghost=is_appservice_ghost,
)
return 200, res
return 200, {"device_id": device_id, "access_token": access_token}
def register_servlets(hs, http_server):


@@ -15,10 +15,9 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from twisted.web.server import Request
from twisted.web.http import Request
from synapse.http.servlet import parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict, Requester, UserID
from synapse.util.distributor import user_left_room
@@ -79,7 +78,7 @@ class ReplicationRemoteJoinRestServlet(ReplicationEndpoint):
}
async def _handle_request( # type: ignore
self, request: SynapseRequest, room_id: str, user_id: str
self, request: Request, room_id: str, user_id: str
) -> Tuple[int, JsonDict]:
content = parse_json_object_from_request(request)
@@ -87,6 +86,7 @@ class ReplicationRemoteJoinRestServlet(ReplicationEndpoint):
event_content = content["content"]
requester = Requester.deserialize(self.store, content["requester"])
request.requester = requester
logger.info("remote_join: %s into room: %s", user_id, room_id)
@@ -147,7 +147,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
}
async def _handle_request( # type: ignore
self, request: SynapseRequest, invite_event_id: str
self, request: Request, invite_event_id: str
) -> Tuple[int, JsonDict]:
content = parse_json_object_from_request(request)
@@ -155,6 +155,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
event_content = content["content"]
requester = Requester.deserialize(self.store, content["requester"])
request.requester = requester
# hopefully we're now on the master, so this won't recurse!


@@ -108,7 +108,9 @@ class ReplicationDataHandler:
# Map from stream to list of deferreds waiting for the stream to
# arrive at a particular position. The lists are sorted by stream position.
self._streams_to_waiters = {} # type: Dict[str, List[Tuple[int, Deferred]]]
self._streams_to_waiters = (
{}
) # type: Dict[str, List[Tuple[int, Deferred[None]]]]
async def on_rdata(
self, stream_name: str, instance_name: str, token: int, rows: list


@@ -48,7 +48,7 @@ from synapse.replication.tcp.commands import (
UserIpCommand,
UserSyncCommand,
)
from synapse.replication.tcp.protocol import IReplicationConnection
from synapse.replication.tcp.protocol import AbstractConnection
from synapse.replication.tcp.streams import (
STREAMS_MAP,
AccountDataStream,
@@ -82,7 +82,7 @@ user_ip_cache_counter = Counter("synapse_replication_tcp_resource_user_ip_cache"
# the type of the entries in _command_queues_by_stream
_StreamCommandQueue = Deque[
Tuple[Union[RdataCommand, PositionCommand], IReplicationConnection]
Tuple[Union[RdataCommand, PositionCommand], AbstractConnection]
]
@@ -174,7 +174,7 @@ class ReplicationCommandHandler:
# The currently connected connections. (The list of places we need to send
# outgoing replication commands to.)
self._connections = [] # type: List[IReplicationConnection]
self._connections = [] # type: List[AbstractConnection]
LaterGauge(
"synapse_replication_tcp_resource_total_connections",
@@ -197,7 +197,7 @@ class ReplicationCommandHandler:
# For each connection, the incoming stream names that have received a POSITION
# from that connection.
self._streams_by_connection = {} # type: Dict[IReplicationConnection, Set[str]]
self._streams_by_connection = {} # type: Dict[AbstractConnection, Set[str]]
LaterGauge(
"synapse_replication_tcp_command_queue",
@@ -220,7 +220,7 @@ class ReplicationCommandHandler:
self._server_notices_sender = hs.get_server_notices_sender()
def _add_command_to_stream_queue(
self, conn: IReplicationConnection, cmd: Union[RdataCommand, PositionCommand]
self, conn: AbstractConnection, cmd: Union[RdataCommand, PositionCommand]
) -> None:
"""Queue the given received command for processing
@@ -267,7 +267,7 @@ class ReplicationCommandHandler:
async def _process_command(
self,
cmd: Union[PositionCommand, RdataCommand],
conn: IReplicationConnection,
conn: AbstractConnection,
stream_name: str,
) -> None:
if isinstance(cmd, PositionCommand):
@@ -302,7 +302,7 @@ class ReplicationCommandHandler:
hs, outbound_redis_connection
)
hs.get_reactor().connectTCP(
hs.config.redis.redis_host.encode(),
hs.config.redis.redis_host,
hs.config.redis.redis_port,
self._factory,
)
@@ -311,7 +311,7 @@ class ReplicationCommandHandler:
self._factory = DirectTcpReplicationClientFactory(hs, client_name, self)
host = hs.config.worker_replication_host
port = hs.config.worker_replication_port
hs.get_reactor().connectTCP(host.encode(), port, self._factory)
hs.get_reactor().connectTCP(host, port, self._factory)
def get_streams(self) -> Dict[str, Stream]:
"""Get a map from stream name to all streams."""
@@ -321,10 +321,10 @@ class ReplicationCommandHandler:
"""Get a list of streams that this instances replicates."""
return self._streams_to_replicate
def on_REPLICATE(self, conn: IReplicationConnection, cmd: ReplicateCommand):
def on_REPLICATE(self, conn: AbstractConnection, cmd: ReplicateCommand):
self.send_positions_to_connection(conn)
def send_positions_to_connection(self, conn: IReplicationConnection):
def send_positions_to_connection(self, conn: AbstractConnection):
"""Send current position of all streams this process is source of to
the connection.
"""
@@ -347,7 +347,7 @@ class ReplicationCommandHandler:
)
def on_USER_SYNC(
self, conn: IReplicationConnection, cmd: UserSyncCommand
self, conn: AbstractConnection, cmd: UserSyncCommand
) -> Optional[Awaitable[None]]:
user_sync_counter.inc()
@@ -359,23 +359,21 @@ class ReplicationCommandHandler:
return None
def on_CLEAR_USER_SYNC(
self, conn: IReplicationConnection, cmd: ClearUserSyncsCommand
self, conn: AbstractConnection, cmd: ClearUserSyncsCommand
) -> Optional[Awaitable[None]]:
if self._is_master:
return self._presence_handler.update_external_syncs_clear(cmd.instance_id)
else:
return None
def on_FEDERATION_ACK(
self, conn: IReplicationConnection, cmd: FederationAckCommand
):
def on_FEDERATION_ACK(self, conn: AbstractConnection, cmd: FederationAckCommand):
federation_ack_counter.inc()
if self._federation_sender:
self._federation_sender.federation_ack(cmd.instance_name, cmd.token)
def on_USER_IP(
self, conn: IReplicationConnection, cmd: UserIpCommand
self, conn: AbstractConnection, cmd: UserIpCommand
) -> Optional[Awaitable[None]]:
user_ip_cache_counter.inc()
@@ -397,7 +395,7 @@ class ReplicationCommandHandler:
assert self._server_notices_sender is not None
await self._server_notices_sender.on_user_ip(cmd.user_id)
def on_RDATA(self, conn: IReplicationConnection, cmd: RdataCommand):
def on_RDATA(self, conn: AbstractConnection, cmd: RdataCommand):
if cmd.instance_name == self._instance_name:
# Ignore RDATA that are just our own echoes
return
@@ -414,7 +412,7 @@ class ReplicationCommandHandler:
self._add_command_to_stream_queue(conn, cmd)
async def _process_rdata(
self, stream_name: str, conn: IReplicationConnection, cmd: RdataCommand
self, stream_name: str, conn: AbstractConnection, cmd: RdataCommand
) -> None:
"""Process an RDATA command
@@ -488,7 +486,7 @@ class ReplicationCommandHandler:
stream_name, instance_name, token, rows
)
def on_POSITION(self, conn: IReplicationConnection, cmd: PositionCommand):
def on_POSITION(self, conn: AbstractConnection, cmd: PositionCommand):
if cmd.instance_name == self._instance_name:
# Ignore POSITION that are just our own echoes
return
@@ -498,7 +496,7 @@ class ReplicationCommandHandler:
self._add_command_to_stream_queue(conn, cmd)
async def _process_position(
self, stream_name: str, conn: IReplicationConnection, cmd: PositionCommand
self, stream_name: str, conn: AbstractConnection, cmd: PositionCommand
) -> None:
"""Process a POSITION command
@@ -555,9 +553,7 @@ class ReplicationCommandHandler:
self._streams_by_connection.setdefault(conn, set()).add(stream_name)
def on_REMOTE_SERVER_UP(
self, conn: IReplicationConnection, cmd: RemoteServerUpCommand
):
def on_REMOTE_SERVER_UP(self, conn: AbstractConnection, cmd: RemoteServerUpCommand):
""""Called when get a new REMOTE_SERVER_UP command."""
self._replication_data_handler.on_remote_server_up(cmd.data)
@@ -580,7 +576,7 @@ class ReplicationCommandHandler:
# between two instances, but that is not currently supported).
self.send_command(cmd, ignore_conn=conn)
def new_connection(self, connection: IReplicationConnection):
def new_connection(self, connection: AbstractConnection):
"""Called when we have a new connection."""
self._connections.append(connection)
@@ -607,7 +603,7 @@ class ReplicationCommandHandler:
UserSyncCommand(self._instance_id, user_id, True, now)
)
def lost_connection(self, connection: IReplicationConnection):
def lost_connection(self, connection: AbstractConnection):
"""Called when a connection is closed/lost."""
# we no longer need _streams_by_connection for this connection.
streams = self._streams_by_connection.pop(connection, None)
@@ -628,7 +624,7 @@ class ReplicationCommandHandler:
return bool(self._connections)
def send_command(
self, cmd: Command, ignore_conn: Optional[IReplicationConnection] = None
self, cmd: Command, ignore_conn: Optional[AbstractConnection] = None
):
"""Send a command to all connected connections.

View File

@@ -46,6 +46,7 @@ indicate which side is sending, these are *not* included on the wire::
> ERROR server stopping
* connection closed by server *
"""
import abc
import fcntl
import logging
import struct
@@ -53,10 +54,8 @@ from inspect import isawaitable
from typing import TYPE_CHECKING, List, Optional
from prometheus_client import Counter
from zope.interface import Interface, implementer
from twisted.internet import task
from twisted.internet.tcp import Connection
from twisted.protocols.basic import LineOnlyReceiver
from twisted.python.failure import Failure
@@ -122,14 +121,6 @@ class ConnectionStates:
CLOSED = "closed"
class IReplicationConnection(Interface):
"""An interface for replication connections."""
def send_command(cmd: Command):
"""Send the command down the connection"""
@implementer(IReplicationConnection)
class BaseReplicationStreamProtocol(LineOnlyReceiver):
"""Base replication protocol shared between client and server.
@@ -146,10 +137,6 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
(if they send a `PING` command)
"""
# The transport is going to be an ITCPTransport, but that doesn't have the
# (un)registerProducer methods, those are only on the implementation.
transport = None # type: Connection
delimiter = b"\n"
# Valid commands we expect to receive
@@ -194,7 +181,6 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
connected_connections.append(self) # Register connection for metrics
assert self.transport is not None
self.transport.registerProducer(self, True) # For the *Producing callbacks
self._send_pending_commands()
@@ -219,7 +205,6 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
logger.info(
"[%s] Failed to close connection gracefully, aborting", self.id()
)
assert self.transport is not None
self.transport.abortConnection()
else:
if now - self.last_sent_command >= PING_TIME:
@@ -309,7 +294,6 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
def close(self):
logger.warning("[%s] Closing connection", self.id())
self.time_we_closed = self.clock.time_msec()
assert self.transport is not None
self.transport.loseConnection()
self.on_connection_closed()
@@ -407,7 +391,6 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
def connectionLost(self, reason):
logger.info("[%s] Replication connection closed: %r", self.id(), reason)
if isinstance(reason, Failure):
assert reason.type is not None
connection_close_counter.labels(reason.type.__name__).inc()
else:
connection_close_counter.labels(reason.__class__.__name__).inc()
@@ -512,6 +495,20 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
self.send_command(ReplicateCommand())
class AbstractConnection(abc.ABC):
"""An interface for replication connections."""
@abc.abstractmethod
def send_command(self, cmd: Command):
"""Send the command down the connection"""
pass
# This tells python that `BaseReplicationStreamProtocol` implements the
# interface.
AbstractConnection.register(BaseReplicationStreamProtocol)
# The following simply registers metrics for the replication connections
pending_commands = LaterGauge(
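The protocol hunks above trade a zope.interface declaration (IReplicationConnection plus @implementer) for an abc.ABC plus ABCMeta.register. The behavioural difference is worth spelling out: register() creates a "virtual subclass", so isinstance() and issubclass() succeed without any inheritance, but nothing verifies that the registered class actually provides the abstract methods, whereas zope interfaces are queried with providedBy() and can be verified. A self-contained illustration of the register() half (generic names, not Synapse's):

    import abc


    class AbstractConnection(abc.ABC):
        """An interface for replication connections."""

        @abc.abstractmethod
        def send_command(self, cmd) -> None:
            """Send the command down the connection."""


    class FakeProtocol:
        """Does not inherit from AbstractConnection."""

        def send_command(self, cmd) -> None:
            print("sending", cmd)


    # register() makes FakeProtocol a "virtual subclass": the isinstance()
    # and issubclass() checks below succeed despite there being no
    # inheritance relationship at all.
    AbstractConnection.register(FakeProtocol)

    assert isinstance(FakeProtocol(), AbstractConnection)
    assert issubclass(FakeProtocol, AbstractConnection)

    # Caveat: register() performs no structural check. A class without
    # send_command would be accepted just the same, which is one argument
    # for using a verifiable zope.interface Interface instead.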

View File

@@ -19,11 +19,6 @@ from typing import TYPE_CHECKING, Generic, Optional, Type, TypeVar, cast
import attr
import txredisapi
from zope.interface import implementer
from twisted.internet.address import IPv4Address, IPv6Address
from twisted.internet.interfaces import IAddress, IConnector
from twisted.python.failure import Failure
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.metrics.background_process_metrics import (
@@ -37,7 +32,7 @@ from synapse.replication.tcp.commands import (
parse_command_from_line,
)
from synapse.replication.tcp.protocol import (
IReplicationConnection,
AbstractConnection,
tcp_inbound_commands_counter,
tcp_outbound_commands_counter,
)
@@ -67,8 +62,7 @@ class ConstantProperty(Generic[T, V]):
pass
@implementer(IReplicationConnection)
class RedisSubscriber(txredisapi.SubscriberProtocol):
class RedisSubscriber(txredisapi.SubscriberProtocol, AbstractConnection):
"""Connection to redis subscribed to replication stream.
This class fulfils two functions:
@@ -77,7 +71,7 @@ class RedisSubscriber(txredisapi.SubscriberProtocol):
connection, parsing *incoming* messages into replication commands, and passing them
to `ReplicationCommandHandler`
(b) it implements the IReplicationConnection API, where it sends *outgoing* commands
(b) it implements the AbstractConnection API, where it sends *outgoing* commands
onto outbound_redis_connection.
Due to the vagaries of `txredisapi` we don't want to have a custom
@@ -259,37 +253,6 @@ class SynapseRedisFactory(txredisapi.RedisFactory):
except Exception:
logger.warning("Failed to send ping to a redis connection")
# ReconnectingClientFactory has some logging (if you enable `self.noisy`), but
# it's rubbish. We add our own here.
def startedConnecting(self, connector: IConnector):
logger.info(
"Connecting to redis server %s", format_address(connector.getDestination())
)
super().startedConnecting(connector)
def clientConnectionFailed(self, connector: IConnector, reason: Failure):
logger.info(
"Connection to redis server %s failed: %s",
format_address(connector.getDestination()),
reason.value,
)
super().clientConnectionFailed(connector, reason)
def clientConnectionLost(self, connector: IConnector, reason: Failure):
logger.info(
"Connection to redis server %s lost: %s",
format_address(connector.getDestination()),
reason.value,
)
super().clientConnectionLost(connector, reason)
def format_address(address: IAddress) -> str:
if isinstance(address, (IPv4Address, IPv6Address)):
return "%s:%i" % (address.host, address.port)
return str(address)
class RedisDirectTcpReplicationClientFactory(SynapseRedisFactory):
"""This is a reconnecting factory that connects to redis and immediately
@@ -365,6 +328,6 @@ def lazyConnection(
factory.continueTrying = reconnect
reactor = hs.get_reactor()
reactor.connectTCP(host.encode(), port, factory, timeout=30, bindAddress=None)
reactor.connectTCP(host, port, factory, 30)
return factory.handler
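The factory methods deleted from the redis file above (startedConnecting, clientConnectionFailed, clientConnectionLost) are standard hooks inherited from twisted.internet.protocol.ReconnectingClientFactory and ClientFactory; overriding them and chaining to super() adds connection-lifecycle logging without disturbing the factory's reconnect backoff. A minimal version of that pattern (logger wording is illustrative):

    import logging

    from twisted.internet.protocol import ReconnectingClientFactory

    logger = logging.getLogger(__name__)


    class LoggingReconnectingClientFactory(ReconnectingClientFactory):
        """Logs each phase of the connect/reconnect cycle, then defers to the
        base class so its retry bookkeeping and exponential backoff still run."""

        def startedConnecting(self, connector):
            logger.info("Connecting to %s", connector.getDestination())
            super().startedConnecting(connector)

        def clientConnectionFailed(self, connector, reason):
            logger.info(
                "Connection to %s failed: %s", connector.getDestination(), reason.value
            )
            super().clientConnectionFailed(connector, reason)

        def clientConnectionLost(self, connector, reason):
            logger.info(
                "Connection to %s lost: %s", connector.getDestination(), reason.value
            )
            super().clientConnectionLost(connector, reason)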

Some files were not shown because too many files have changed in this diff.