
Compare commits


72 Commits

Author SHA1 Message Date
Erik Johnston
ebd534b58d Move removal warning up changelog 2021-01-13 10:31:27 +00:00
Erik Johnston
891c925b88 Link to GH profile and fix tense 2021-01-13 10:28:03 +00:00
Erik Johnston
f7478d5cc6 Fix link in changelog 2021-01-13 10:26:25 +00:00
Erik Johnston
429c339de8 Fixup changelog 2021-01-13 10:23:16 +00:00
Erik Johnston
3dd6ba135e 1.25.0 2021-01-13 10:19:12 +00:00
Dan Callahan
6d91e6ca5f Announce Python / PostgreSQL deprecation policies (#9085)
Fixes #8782
2021-01-12 20:11:15 +00:00
Marcus
e385c8b473 Don't apply the IP range blacklist to proxy connections (#9084)
It is expected that the proxy will be on a private IP address, so the
configured proxy should be connected to regardless of the IP range
blacklist.
2021-01-12 12:20:30 -05:00
Dan Callahan
fa6deb298b Fix failures in Debian packaging (#9079)
Debian package builds were failing for two reasons:

 1. Python versions prior to 3.7 throw exceptions when attempting to print
    Unicode characters under a "C" locale. (#9076)

 2. We depended on `dh-systemd` which no longer exists in Debian Bullseye, but
    is necessary in Ubuntu Xenial. (#9073)

Setting `LANG="C.UTF-8"` in the build environment fixes the first issue.
See also: https://bugs.python.org/issue19846
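For illustration, the failure can be reproduced on any Python 3.6 interpreter (a sketch; the exact error text varies by version):

```sh
# Under a plain "C" locale, Python < 3.7 assumes an ASCII-only terminal,
# so printing a non-ASCII character raises UnicodeEncodeError:
LANG=C python3.6 -c 'print("\u2603")'
# -> UnicodeEncodeError: 'ascii' codec can't encode character '\u2603' ...

# Declaring a UTF-8 locale, as the build environment now does, avoids this:
LANG=C.UTF-8 python3.6 -c 'print("\u2603")'
```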

The second issue is a bit trickier. The dh-systemd package was merged into
debhelper version 9.20160709 and a transitional package left in its wake.

The transitional dh-systemd package was removed in Debian Bullseye.

However, Ubuntu Xenial ships an older debhelper, and still needs dh-systemd.

Thus, builds were failing on Bullseye since we depended on a package which had
ceased existing, but we couldn't remove it from the debian/control file and our
build scripts because we still needed it for Ubuntu Xenial.

We can fix the debian/control issue by listing dh-systemd as an alternative to
the newer versions of debhelper. Since dh-systemd declares that it depends on
debhelper, Ubuntu Xenial will select its older dh-systemd which will in turn
pull in its older debhelper, resulting in no change from the status quo. All
other supported releases will satisfy the debhelper dependency constraint and
skip the dh-systemd alternative.

Build scripts were fixed by unconditionally attempting to install dh-systemd on
all releases and suppressing failures.
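As a sketch of that approach (the actual change is in `docker/Dockerfile-dhvirtualenv`, shown further down in this diff):

```sh
# dh-systemd exists on Ubuntu Xenial but not on Debian Bullseye;
# attempt the install everywhere and tolerate failure where it's gone.
apt-get install -yqq --no-install-recommends dh-systemd || true
```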

Once we drop support for Ubuntu Xenial, we can revert most of this commit and
rely on the version constraint on debhelper in debian/control.

Fixes #9076
Fixes #9073

Signed-off-by: Dan Callahan <danc@element.io>
2021-01-12 14:15:04 +00:00
Patrick Cloke
8f08021e86 More updates to changes for consistency. 2021-01-06 07:36:52 -05:00
Patrick Cloke
62b5f13768 A few more tweaks to changes. 2021-01-06 07:34:11 -05:00
Patrick Cloke
bde6705ad1 Some manual tweaks to the changes file. 2021-01-06 07:20:12 -05:00
Patrick Cloke
2fe0fb21f6 1.25.0rc1 2021-01-06 07:08:13 -05:00
Patrick Cloke
37eaf9c272 Fix-up assertions about last stream token in push (#9020)
The last stream token is always known, so we do not need to handle `None`.
2021-01-05 10:53:15 -05:00
Patrick Cloke
31b1905e13 Add type hints to the receipts and user directory handlers. (#8976) 2021-01-04 10:05:12 -05:00
Patrick Cloke
1c9a850562 Add type hints to the crypto module. (#8999) 2021-01-04 10:04:50 -05:00
Eric Eastwood
a685bbb018 Add link to Synapse dev room to the relevant README section (#9002) 2021-01-04 08:59:19 -05:00
Patrick Cloke
0eccf53146 Use the SSO handler helpers for CAS registration/login. (#8856) 2021-01-03 16:25:44 +00:00
Andrew Morgan
168ba00d01 Fix RoomDirectoryFederationTests and make them actually run (#8998)
The `RoomDirectoryFederationTests` tests were not being run unless explicitly called, as an `__init__.py` file was not present in `tests/federation/transport/`. Thus the folder was not a Python module, and `trial` did not look inside it for any test cases to run. This was found while working on #6739.

This PR adds a `__init__.py` and also fixes the test in a couple ways:

- Switch to subclassing `unittest.FederatingHomeserverTestCase` instead, which sets up federation endpoints for us.
- Supply a `federation_auth_origin` to `make_request` in order to act more like the request is coming from another server, instead of just an unauthenticated client requesting a federation endpoint.

I found that the second point makes no difference to the test passing, but it felt like the right thing to do if we're testing over federation.
2020-12-30 19:27:32 +00:00
Patrick Cloke
b7c580e333 Check if group IDs are valid before using them. (#8977) 2020-12-30 08:39:59 -05:00
Patrick Cloke
637282bb50 Add additional type hints to the storage module. (#8980) 2020-12-30 08:09:53 -05:00
Shashank Sabniveesu
b8591899ab Doc/move database setup instructions in install md (#8987) 2020-12-30 11:33:03 +00:00
Patrick Cloke
9999eb2d02 Add type hints to admin and room list handlers. (#8973) 2020-12-29 17:42:10 -05:00
Patrick Cloke
14a7371375 Validate input parameters for the sendToDevice API. (#8975)
This makes the "messages" key in the content required. This is currently
optional in the spec, but that seems to be an error.
2020-12-29 12:47:45 -05:00
Jerin J Titus
cfcf5541b4 Update the value of group_creation_prefix in sample config. (#8992)
Removes the trailing slash, which causes issues with matrix.to/Element.
2020-12-29 09:30:48 -05:00
Patrick Cloke
68bb26da69 Allow redacting events on workers (#8994)
Adds the redacts endpoint to workers that have the client listener.
2020-12-29 07:40:12 -05:00
Patrick Cloke
d0c3c24eb2 Drop the unused local_invites table. (#8979)
This table has been unused since Synapse v1.17.0.
2020-12-29 07:26:29 -05:00
Patrick Cloke
a802606475 Support PyJWT v2.0.0. (#8986)
Tests were broken due to an API change. The code used in Synapse
proper should be compatible with both versions already.
2020-12-22 13:00:14 -05:00
Patrick Cloke
4218473f9e Refactor the CAS handler in prep for using the abstracted SSO code. (#8958)
This makes the CAS handler look more like the SAML/OIDC handlers:

* Render errors to users instead of throwing JSON errors.
* Internal reorganization.
2020-12-18 13:09:45 -05:00
Patrick Cloke
56e00ca85e Send the location of the web client to the IS when inviting via 3PIDs. (#8930)
Adds a new setting `email.invite_client_location` which, if defined, is
passed to the identity server during invites.
2020-12-18 11:01:57 -05:00
Erik Johnston
d781a81e69 Allow server admin to get admin bit in rooms where local user is an admin (#8756)
This adds an admin API that allows a server admin to get power in a room if a local user has power in that room. It will also invite the user if they're not in the room and it's a private room. Another user (rather than the admin user) can be specified to be granted power.

Co-authored-by: Matthew Hodgson <matthew@matrix.org>
2020-12-18 15:37:19 +00:00
Erik Johnston
5e7d75daa2 Fix mainline ordering in state res v2 (#8971)
This had two effects: 1) it would give the wrong answer, and 2) it would
iterate *all* power levels in the auth chain of each event. The latter
can be *very* expensive for certain types of IRC bridge rooms that have
large numbers of power level changes.
2020-12-18 15:00:34 +00:00
Richard van der Hoff
28877fade9 Implement a username picker for synapse (#8942)
The final part (for now) of my work to implement a username picker in synapse itself. The idea is that we allow
`UsernameMappingProvider`s to return `localpart=None`, in which case, rather than redirecting the browser
back to the client, we redirect to a username-picker resource, which allows the user to enter a username.
We *then* complete the SSO flow (including doing the client permission checks).

The static resources for the username picker itself (in 
https://github.com/matrix-org/synapse/tree/rav/username_picker/synapse/res/username_picker)
are essentially lifted wholesale from
https://github.com/matrix-org/matrix-synapse-saml-mozilla/tree/master/matrix_synapse_saml_mozilla/res. 
As the comment says, we might want to think about making them customisable, but that can be a follow-up. 

Fixes #8876.
2020-12-18 14:19:46 +00:00
Patrick Cloke
5d4c330ed9 Allow re-using a UI auth validation for a period of time (#8970) 2020-12-18 07:33:57 -05:00
Patrick Cloke
4136255d3c Ensure that a URL exists in the content during push. (#8965)
This fixes a KeyError exception; after this PR the content
is just considered unknown.
2020-12-18 07:26:15 -05:00
Erik Johnston
a7a913918c Merge remote-tracking branch 'origin/erikj/as_mau_block' into develop 2020-12-18 09:51:56 +00:00
Erik Johnston
70586aa63e Try and drop stale extremities. (#8929)
If we see stale extremities while persisting events, and notice that
they don't change the result of state resolution, we drop them.
2020-12-18 09:49:18 +00:00
Richard van der Hoff
f1db20b5a5 Clean up tox.ini (#8963)
... and disable coverage tracking for mypy and friends.
2020-12-17 22:58:00 +00:00
Erik Johnston
14eab1b4d2 Update tests/test_mau.py
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-12-17 16:14:13 +00:00
Richard van der Hoff
c9c1c9d82f Fix UsersListTestCase (#8964) 2020-12-17 10:46:40 -05:00
Brendan Abolivier
f2783fc201 Use the simple dictionary in full text search for the user directory (#8959)
* Use the simple dictionary in fts for the user directory

* Clarify naming
2020-12-17 14:42:30 +01:00
Erik Johnston
4c33796b20 Correctly handle AS registrations and add test 2020-12-17 12:55:21 +00:00
Dirk Klimpel
c07022303e Fix a bug that deactivated users appear in the directory (#8933)
Fixes a bug where deactivated users appeared in the directory when their profile information was updated.

Changing the profile information of deactivated users is necessary, for example to remove their displayname or avatar, but such users should not appear in the directory: they are deactivated.

Co-authored-by: Erik Johnston <erikj@jki.re>
2020-12-17 12:05:39 +00:00
Erik Johnston
35be260090 Newsfile 2020-12-17 12:05:18 +00:00
Erik Johnston
7932d4e9f7 Don't MAU limit AS ghost users 2020-12-17 12:04:14 +00:00
Dirk Klimpel
06006058d7 Make search statement in List Room and User Admin API case-insensitive (#8931) 2020-12-17 10:43:37 +00:00
Patrick Cloke
ff5c4da128 Add a maximum size for well-known lookups. (#8950) 2020-12-16 17:25:24 -05:00
Richard van der Hoff
e1b8e37f93 Push login completion down into SsoHandler (#8941)
This is another part of my work towards fixing #8876. It moves some of the logic currently in the SAML and OIDC handlers, in particular the call to `AuthHandler.complete_sso_login`, down into the `SsoHandler`.
2020-12-16 20:01:53 +00:00
Patrick Cloke
44b7d4c6d6 Fix the sample config location for the ip_range_whitelist setting. (#8954)
Move it from the federation section to the server section to match
ip_range_blacklist.
2020-12-16 14:40:47 -05:00
Patrick Cloke
bd30cfe86a Convert internal pusher dicts to attrs classes. (#8940)
This improves type hinting and should use less memory.
2020-12-16 11:25:30 -05:00
Richard van der Hoff
7a332850e6 Merge pull request #8951 from matrix-org/rav/username_picker_2
More preparatory refactoring of the OidcHandler tests
2020-12-16 14:53:26 +00:00
Richard van der Hoff
651e1ae534 Merge pull request #8946 from matrix-org/rav/refactor_send_request
Remove `Request` return value from `make_request`
2020-12-16 14:53:01 +00:00
Richard van der Hoff
3ad699cc65 Fix generate_log_config script (#8952)
It used to write an empty file if you gave it a -o arg.
2020-12-16 14:52:04 +00:00
Patrick Cloke
be2db93b3c Do not assume that the contents dictionary includes history_visibility. (#8945) 2020-12-16 08:46:37 -05:00
Richard van der Hoff
757b5a0bf6 changelog 2020-12-15 23:11:42 +00:00
Richard van der Hoff
8388a7fb3a Make _make_callback_with_userinfo async
... so that we can test its behaviour when it raises.

Also pull it out to the top level so that I can use it from other test classes.
2020-12-15 23:10:59 +00:00
Richard van der Hoff
c1883f042d Remove spurious mocking of complete_sso_login
The tests that need this all do it already.
2020-12-15 23:10:59 +00:00
Richard van der Hoff
2dd2e90e2b Test get_extra_attributes fallback
Despite the warnings saying "don't implement get_extra_attributes", we had
implemented it, so the tests weren't doing what we thought they were.
2020-12-15 23:10:59 +00:00
Richard van der Hoff
c9dd47d668 lint 2020-12-15 22:35:50 +00:00
Richard van der Hoff
ed61fe4ada changelog 2020-12-15 22:35:50 +00:00
Richard van der Hoff
394516ad1b Remove spurious "SynapseRequest" result from `make_request`
This was never used, so let's get rid of it.
2020-12-15 22:35:40 +00:00
Richard van der Hoff
ac2acf1524 Remove redundant reading of SynapseRequest.args
This didn't seem to be doing a lot, so remove it.
2020-12-15 22:35:03 +00:00
Richard van der Hoff
5bcf6e8289 Skip redundant check on request.args 2020-12-15 22:35:03 +00:00
Richard van der Hoff
0378581c13 remove 'response' result from _get_shared_rooms 2020-12-15 22:34:20 +00:00
Richard van der Hoff
7eebe4b3fc Replace request.code with channel.code
The two are equivalent, but really we want to check the HTTP result that got
returned to the channel, not the code that the Request object *intended* to
return to the channel.
2020-12-15 22:32:12 +00:00
Richard van der Hoff
01333681bc Preparatory refactoring of the SamlHandlerTestCase (#8938)
* move simple_async_mock to test_utils

... so that it can be re-used

* Remove references to `SamlHandler._map_saml_response_to_user` from tests

This method is going away, so we can no longer use it as a test point. Instead,
factor out a higher-level method which takes a SAML object, and verify correct
behaviour by mocking out `AuthHandler.complete_sso_login`.

* changelog
2020-12-15 20:56:10 +00:00
Patrick Cloke
b3a4b53587 Fix handling of stream tokens for push. (#8943)
Removes faulty assertions and fixes the logic to ensure the max
stream token is always set.
2020-12-15 10:41:34 -05:00
Richard van der Hoff
6d02eb22df Fix startup failure with localdb_enabled: False (#8937) 2020-12-14 20:42:03 +00:00
Patrick Cloke
1619802228 Various clean-ups to the logging context code (#8935) 2020-12-14 14:19:47 -05:00
Richard van der Hoff
895e04319b Preparatory refactoring of the OidcHandlerTestCase (#8911)
* Remove references to handler._auth_handler

(and replace them with hs.get_auth_handler)

* Factor out a utility function for building Requests

* Remove mocks of `OidcHandler._map_userinfo_to_user`

This method is going away, so mocking it out is no longer a valid approach.

Instead, we mock out lower-level methods (eg _remote_id_from_userinfo), or
simply allow the regular implementation to proceed and update the expectations
accordingly.

* Remove references to `OidcHandler._map_userinfo_to_user` from tests

This method is going away, so we can no longer use it as a test point. Instead
we build mock "callback" requests which we pass into `handle_oidc_callback`,
and verify correct behaviour by mocking out `AuthHandler.complete_sso_login`.
2020-12-14 11:38:50 +00:00
David Teller
f14428b25c Allow spam-checker modules to provide async methods. (#8890)
Spam checker modules can now provide async methods. This is implemented
in a backwards-compatible manner.
2020-12-11 14:05:15 -05:00
Patrick Cloke
5d34f40d49 Add type hints to the push module. (#8901) 2020-12-11 11:43:53 -05:00
Erik Johnston
a8eceb01e5 Honour AS ratelimit settings for /login requests (#8920)
Fixes #8846.
2020-12-11 16:33:31 +00:00
242 changed files with 5533 additions and 3279 deletions


@@ -1,6 +1,31 @@
Synapse 1.25.0 (2020-xx-xx)
Synapse 1.25.0 (2021-01-13)
===========================
Ending Support for Python 3.5 and Postgres 9.5
----------------------------------------------
With this release, the Synapse team is announcing a formal deprecation policy for our platform dependencies, like Python and PostgreSQL:
All future releases of Synapse will follow the upstream end-of-life schedules.
This means:
* This is the last release which guarantees support for Python 3.5.
* We will end support for PostgreSQL 9.5 early next month.
* We will end support for Python 3.6 and PostgreSQL 9.6 near the end of the year.
Crucially, this means __we will not produce .deb packages for Debian 9 (Stretch) or Ubuntu 16.04 (Xenial)__ beyond the transition period described below.
The website https://endoflife.date/ has convenient summaries of the support schedules for projects like [Python](https://endoflife.date/python) and [PostgreSQL](https://endoflife.date/postgresql).
If you are unable to upgrade your environment to a supported version of Python or Postgres, we encourage you to consider using the [Synapse Docker images](./INSTALL.md#docker-images-and-ansible-playbooks) instead.
### Transition Period
We will make a good faith attempt to avoid breaking compatibility in all releases through the end of March 2021. However, critical security vulnerabilities in dependencies or other unanticipated circumstances may arise which necessitate breaking compatibility earlier.
We intend to continue producing .deb packages for Debian 9 (Stretch) and Ubuntu 16.04 (Xenial) through the transition period.
Removal warning
---------------
@@ -12,6 +37,101 @@ are deprecated and will be removed in a future release. They will be replaced by
`POST /_synapse/admin/v1/rooms/<room_id>/delete` replaces `POST /_synapse/admin/v1/purge_room` and
`POST /_synapse/admin/v1/shutdown_room/<room_id>`.
Bugfixes
--------
- Fix HTTP proxy support when using a proxy that is on a blacklisted IP. Introduced in v1.25.0rc1. Contributed by @Bubu. ([\#9084](https://github.com/matrix-org/synapse/issues/9084))
Synapse 1.25.0rc1 (2021-01-06)
==============================
Features
--------
- Add an admin API that lets server admins get power in rooms in which local users have power. ([\#8756](https://github.com/matrix-org/synapse/issues/8756))
- Add optional HTTP authentication to replication endpoints. ([\#8853](https://github.com/matrix-org/synapse/issues/8853))
- Improve the error messages printed as a result of configuration problems for extension modules. ([\#8874](https://github.com/matrix-org/synapse/issues/8874))
- Add the number of local devices to Room Details Admin API. Contributed by @dklimpel. ([\#8886](https://github.com/matrix-org/synapse/issues/8886))
- Add `X-Robots-Tag` header to stop web crawlers from indexing media. Contributed by Aaron Raimist. ([\#8887](https://github.com/matrix-org/synapse/issues/8887))
- Spam-checkers may now define their methods as `async`. ([\#8890](https://github.com/matrix-org/synapse/issues/8890))
- Add support for allowing users to pick their own user ID during a single-sign-on login. ([\#8897](https://github.com/matrix-org/synapse/issues/8897), [\#8900](https://github.com/matrix-org/synapse/issues/8900), [\#8911](https://github.com/matrix-org/synapse/issues/8911), [\#8938](https://github.com/matrix-org/synapse/issues/8938), [\#8941](https://github.com/matrix-org/synapse/issues/8941), [\#8942](https://github.com/matrix-org/synapse/issues/8942), [\#8951](https://github.com/matrix-org/synapse/issues/8951))
- Add an `email.invite_client_location` configuration option to send a web client location to the invite endpoint on the identity server which allows customisation of the email template. ([\#8930](https://github.com/matrix-org/synapse/issues/8930))
- The search term in the list room and list user Admin APIs is now treated as case-insensitive. ([\#8931](https://github.com/matrix-org/synapse/issues/8931))
- Apply an IP range blacklist to push and key revocation requests. ([\#8821](https://github.com/matrix-org/synapse/issues/8821), [\#8870](https://github.com/matrix-org/synapse/issues/8870), [\#8954](https://github.com/matrix-org/synapse/issues/8954))
- Add an option to allow re-use of user-interactive authentication sessions for a period of time. ([\#8970](https://github.com/matrix-org/synapse/issues/8970))
- Allow running the redact endpoint on workers. ([\#8994](https://github.com/matrix-org/synapse/issues/8994))
Bugfixes
--------
- Fix bug where we might not correctly calculate the current state for rooms with multiple extremities. ([\#8827](https://github.com/matrix-org/synapse/issues/8827))
- Fix a long-standing bug in the register admin endpoint (`/_synapse/admin/v1/register`) when the `mac` field was not provided. The endpoint now properly returns a 400 error. Contributed by @edwargix. ([\#8837](https://github.com/matrix-org/synapse/issues/8837))
- Fix a long-standing bug on Synapse instances supporting Single-Sign-On, where users would be prompted to enter their password to confirm certain actions, even though they have not set a password. ([\#8858](https://github.com/matrix-org/synapse/issues/8858))
- Fix a long-standing bug where a 500 error would be returned if the `Content-Length` header was not provided to the upload media resource. ([\#8862](https://github.com/matrix-org/synapse/issues/8862))
- Add additional validation to pusher URLs to be compliant with the specification. ([\#8865](https://github.com/matrix-org/synapse/issues/8865))
- Fix the error code that is returned when a user tries to register on a homeserver on which new-user registration has been disabled. ([\#8867](https://github.com/matrix-org/synapse/issues/8867))
- Fix a bug where `PUT /_synapse/admin/v2/users/<user_id>` failed to create a new user when `avatar_url` is specified. Bug introduced in Synapse v1.9.0. ([\#8872](https://github.com/matrix-org/synapse/issues/8872))
- Fix a 500 error when attempting to preview an empty HTML file. ([\#8883](https://github.com/matrix-org/synapse/issues/8883))
- Fix occasional deadlock when handling SIGHUP. ([\#8918](https://github.com/matrix-org/synapse/issues/8918))
- Fix login API to not ratelimit application services that have ratelimiting disabled. ([\#8920](https://github.com/matrix-org/synapse/issues/8920))
- Fix bug where we ratelimited auto joining of rooms on registration (using `auto_join_rooms` config). ([\#8921](https://github.com/matrix-org/synapse/issues/8921))
- Fix a bug where deactivated users appeared in the user directory when their profile information was updated. ([\#8933](https://github.com/matrix-org/synapse/issues/8933), [\#8964](https://github.com/matrix-org/synapse/issues/8964))
- Fix bug introduced in Synapse v1.24.0 which would cause an exception on startup if both `enabled` and `localdb_enabled` were set to `False` in the `password_config` setting of the configuration file. ([\#8937](https://github.com/matrix-org/synapse/issues/8937))
- Fix a bug where 500 errors would be returned if the `m.room_history_visibility` event had invalid content. ([\#8945](https://github.com/matrix-org/synapse/issues/8945))
- Fix a bug causing common English words to not be considered for a user directory search. ([\#8959](https://github.com/matrix-org/synapse/issues/8959))
- Fix bug where application services couldn't register new ghost users if the server had reached its MAU limit. ([\#8962](https://github.com/matrix-org/synapse/issues/8962))
- Fix a long-standing bug where a `m.image` event without a `url` would cause errors on push. ([\#8965](https://github.com/matrix-org/synapse/issues/8965))
- Fix a small bug in v2 state resolution algorithm, which could also cause performance issues for rooms with large numbers of power levels. ([\#8971](https://github.com/matrix-org/synapse/issues/8971))
- Add validation to the `sendToDevice` API to raise a missing parameters error instead of a 500 error. ([\#8975](https://github.com/matrix-org/synapse/issues/8975))
- Add validation of group IDs to raise a 400 error instead of a 500 error. ([\#8977](https://github.com/matrix-org/synapse/issues/8977))
Improved Documentation
----------------------
- Fix the "Event persist rate" section of the included grafana dashboard by adding missing prometheus rules. ([\#8802](https://github.com/matrix-org/synapse/issues/8802))
- Combine related media admin API docs. ([\#8839](https://github.com/matrix-org/synapse/issues/8839))
- Fix an error in the documentation for the SAML username mapping provider. ([\#8873](https://github.com/matrix-org/synapse/issues/8873))
- Clarify comments around template directories in `sample_config.yaml`. ([\#8891](https://github.com/matrix-org/synapse/issues/8891))
- Move instructions for database setup, adjust heading levels, and improve syntax highlighting in [INSTALL.md](../INSTALL.md). Contributed by @fossterer. ([\#8987](https://github.com/matrix-org/synapse/issues/8987))
- Update the example value of `group_creation_prefix` in the sample configuration. ([\#8992](https://github.com/matrix-org/synapse/issues/8992))
- Link the Synapse developer room to the development section in the docs. ([\#9002](https://github.com/matrix-org/synapse/issues/9002))
Deprecations and Removals
-------------------------
- Deprecate Shutdown Room and Purge Room Admin APIs. ([\#8829](https://github.com/matrix-org/synapse/issues/8829))
Internal Changes
----------------
- Properly store the mapping of external ID to Matrix ID for CAS users. ([\#8856](https://github.com/matrix-org/synapse/issues/8856), [\#8958](https://github.com/matrix-org/synapse/issues/8958))
- Remove some unnecessary stubbing from unit tests. ([\#8861](https://github.com/matrix-org/synapse/issues/8861))
- Remove unused `FakeResponse` class from unit tests. ([\#8864](https://github.com/matrix-org/synapse/issues/8864))
- Pass `room_id` to `get_auth_chain_difference`. ([\#8879](https://github.com/matrix-org/synapse/issues/8879))
- Add type hints to push module. ([\#8880](https://github.com/matrix-org/synapse/issues/8880), [\#8882](https://github.com/matrix-org/synapse/issues/8882), [\#8901](https://github.com/matrix-org/synapse/issues/8901), [\#8940](https://github.com/matrix-org/synapse/issues/8940), [\#8943](https://github.com/matrix-org/synapse/issues/8943), [\#9020](https://github.com/matrix-org/synapse/issues/9020))
- Simplify logic for handling user-interactive-auth via single-sign-on servers. ([\#8881](https://github.com/matrix-org/synapse/issues/8881))
- Skip the SAML tests if the requirements (`pysaml2` and `xmlsec1`) aren't available. ([\#8905](https://github.com/matrix-org/synapse/issues/8905))
- Fix multiarch docker image builds. ([\#8906](https://github.com/matrix-org/synapse/issues/8906))
- Don't publish `latest` docker image until all archs are built. ([\#8909](https://github.com/matrix-org/synapse/issues/8909))
- Various clean-ups to the structured logging and logging context code. ([\#8916](https://github.com/matrix-org/synapse/issues/8916), [\#8935](https://github.com/matrix-org/synapse/issues/8935))
- Automatically drop stale forward-extremities under some specific conditions. ([\#8929](https://github.com/matrix-org/synapse/issues/8929))
- Refactor test utilities for injecting HTTP requests. ([\#8946](https://github.com/matrix-org/synapse/issues/8946))
- Add a maximum size of 50 kilobytes to .well-known lookups. ([\#8950](https://github.com/matrix-org/synapse/issues/8950))
- Fix bug in `generate_log_config` script which made it write empty files. ([\#8952](https://github.com/matrix-org/synapse/issues/8952))
- Clean up tox.ini file; disable coverage checking for non-test runs. ([\#8963](https://github.com/matrix-org/synapse/issues/8963))
- Add type hints to the admin and room list handlers. ([\#8973](https://github.com/matrix-org/synapse/issues/8973))
- Add type hints to the receipts and user directory handlers. ([\#8976](https://github.com/matrix-org/synapse/issues/8976))
- Drop the unused `local_invites` table. ([\#8979](https://github.com/matrix-org/synapse/issues/8979))
- Add type hints to the base storage code. ([\#8980](https://github.com/matrix-org/synapse/issues/8980))
- Support using PyJWT v2.0.0 in the test suite. ([\#8986](https://github.com/matrix-org/synapse/issues/8986))
- Fix `tests.federation.transport.RoomDirectoryFederationTests` and ensure it runs in CI. ([\#8998](https://github.com/matrix-org/synapse/issues/8998))
- Add type hints to the crypto module. ([\#8999](https://github.com/matrix-org/synapse/issues/8999))
Synapse 1.24.0 (2020-12-09)
===========================


@@ -1,19 +1,44 @@
- [Choosing your server name](#choosing-your-server-name)
- [Picking a database engine](#picking-a-database-engine)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
- [Client Well-Known URI](#client-well-known-uri)
- [Email](#email)
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
- [Troubleshooting Installation](#troubleshooting-installation)
# Installation Instructions
# Choosing your server name
There are 3 steps to follow under **Installation Instructions**.
- [Installation Instructions](#installation-instructions)
- [Choosing your server name](#choosing-your-server-name)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
- [ArchLinux](#archlinux)
- [CentOS/Fedora](#centosfedora)
- [macOS](#macos)
- [OpenSUSE](#opensuse)
- [OpenBSD](#openbsd)
- [Windows](#windows)
- [Prebuilt packages](#prebuilt-packages)
- [Docker images and Ansible playbooks](#docker-images-and-ansible-playbooks)
- [Debian/Ubuntu](#debianubuntu)
- [Matrix.org packages](#matrixorg-packages)
- [Downstream Debian packages](#downstream-debian-packages)
- [Downstream Ubuntu packages](#downstream-ubuntu-packages)
- [Fedora](#fedora)
- [OpenSUSE](#opensuse-1)
- [SUSE Linux Enterprise Server](#suse-linux-enterprise-server)
- [ArchLinux](#archlinux-1)
- [Void Linux](#void-linux)
- [FreeBSD](#freebsd)
- [OpenBSD](#openbsd-1)
- [NixOS](#nixos)
- [Setting up Synapse](#setting-up-synapse)
- [Using PostgreSQL](#using-postgresql)
- [TLS certificates](#tls-certificates)
- [Client Well-Known URI](#client-well-known-uri)
- [Email](#email)
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
- [Troubleshooting Installation](#troubleshooting-installation)
## Choosing your server name
It is important to choose the name for your server before you install Synapse,
because it cannot be changed later.
@@ -29,28 +54,9 @@ that your email address is probably `user@example.com` rather than
`user@email.example.com`) - but doing so may require more advanced setup: see
[Setting up Federation](docs/federate.md).
# Picking a database engine
## Installing Synapse
Synapse offers two database engines:
* [PostgreSQL](https://www.postgresql.org)
* [SQLite](https://sqlite.org/)
Almost all installations should opt to use PostgreSQL. Advantages include:
* significant performance improvements due to the superior threading and
caching model, smarter query optimiser
* allowing the DB to be run on separate hardware
For information on how to install and use PostgreSQL, please see
[docs/postgres.md](docs/postgres.md)
By default Synapse uses SQLite and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
light workloads.
# Installing Synapse
## Installing from source
### Installing from source
(Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
@@ -68,7 +74,7 @@ these on various platforms.
To install the Synapse homeserver run:
```
```sh
mkdir -p ~/synapse
virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
@@ -85,7 +91,7 @@ prefer.
This Synapse installation can then be later upgraded by using pip again with the
update flag:
```
```sh
source ~/synapse/env/bin/activate
pip install -U matrix-synapse
```
@@ -93,7 +99,7 @@ pip install -U matrix-synapse
Before you can start Synapse, you will need to generate a configuration
file. To do this, run (in your virtualenv, as before):
```
```sh
cd ~/synapse
python -m synapse.app.homeserver \
--server-name my.domain.name \
@@ -111,45 +117,43 @@ wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your homeserver's keys, you may find that other homeservers have the
old key cached. If you update the signing key, you should change the name of the
key in the `<server name>.signing.key` file (the second word) to something
different. See the
[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
for more information on key management).
different. See the [spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys) for more information on key management).
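As a hypothetical sketch of such a rename (`a_OldId` stands in for your real key id; the file format is `<algorithm> <key id> <base64 key>`):

```sh
# The signing key file holds the algorithm, the key id, then the key itself.
cat ~/synapse/my.domain.name.signing.key
# ed25519 a_OldId <base64-encoded key material>

# Change the key id (the second word) so other servers treat it as a new key:
sed -i 's/^ed25519 a_OldId /ed25519 a_NewId /' ~/synapse/my.domain.name.signing.key
```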
To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. `~/synapse`), and:
```
```sh
cd ~/synapse
source env/bin/activate
synctl start
```
### Platform-Specific Instructions
#### Platform-Specific Instructions
#### Debian/Ubuntu/Raspbian
##### Debian/Ubuntu/Raspbian
Installing prerequisites on Ubuntu or Debian:
```
sudo apt-get install build-essential python3-dev libffi-dev \
```sh
sudo apt install build-essential python3-dev libffi-dev \
python3-pip python3-setuptools sqlite3 \
libssl-dev virtualenv libjpeg-dev libxslt1-dev
```
#### ArchLinux
##### ArchLinux
Installing prerequisites on ArchLinux:
```
```sh
sudo pacman -S base-devel python python-pip \
python-setuptools python-virtualenv sqlite3
```
#### CentOS/Fedora
##### CentOS/Fedora
Installing prerequisites on CentOS 8 or Fedora>26:
```
```sh
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
@@ -158,7 +162,7 @@ sudo dnf groupinstall "Development Tools"
Installing prerequisites on CentOS 7 or Fedora<=25:
```
```sh
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
@@ -170,11 +174,11 @@ uses SQLite 3.7. You may be able to work around this by installing a more
recent SQLite version, but it is recommended that you instead use a Postgres
database: see [docs/postgres.md](docs/postgres.md).
#### macOS
##### macOS
Installing prerequisites on macOS:
```
```sh
xcode-select --install
sudo easy_install pip
sudo pip install virtualenv
@@ -184,22 +188,22 @@ brew install pkg-config libffi
On macOS Catalina (10.15) you may need to explicitly install OpenSSL
via brew and inform `pip` about it so that `psycopg2` builds:
```
```sh
brew install openssl@1.1
export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
```
#### OpenSUSE
##### OpenSUSE
Installing prerequisites on openSUSE:
```
```sh
sudo zypper in -t pattern devel_basis
sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
python-devel libffi-devel libopenssl-devel libjpeg62-devel
```
#### OpenBSD
##### OpenBSD
A port of Synapse is available under `net/synapse`. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
@@ -213,73 +217,72 @@ mounted with `wxallowed` (cf. `mount(8)`).
Creating a `WRKOBJDIR` for building python under `/usr/local` (which on a
default OpenBSD installation is mounted with `wxallowed`):
```
```sh
doas mkdir /usr/local/pobj_wxallowed
```
Assuming `PORTS_PRIVSEP=Yes` (cf. `bsd.port.mk(5)`) and `SUDO=doas` are
configured in `/etc/mk.conf`:
```
```sh
doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
```
Setting the `WRKOBJDIR` for building python:
```
```sh
echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed >> /etc/mk.conf
```
Building Synapse:
```
```sh
cd /usr/ports/net/synapse
make install
```
#### Windows
##### Windows
If you wish to run or develop Synapse on Windows, the Windows Subsystem For
Linux provides a Linux environment on Windows 10 which is capable of using the
Debian, Fedora, or source installation methods. More information about WSL can
be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
be found at <https://docs.microsoft.com/en-us/windows/wsl/install-win10> for
Windows 10 and <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>
for Windows Server.
## Prebuilt packages
### Prebuilt packages
As an alternative to installing from source, prebuilt packages are available
for a number of platforms.
### Docker images and Ansible playbooks
#### Docker images and Ansible playbooks
There is an official synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse which can be used with
<https://hub.docker.com/r/matrixdotorg/synapse> which can be used with
the docker-compose file available at [contrib/docker](contrib/docker). Further
information on this including configuration options is available in the README
on hub.docker.com.
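For example, the image can be used directly without docker-compose (a minimal sketch; see the README on hub.docker.com for all options):

```sh
# Generate a homeserver.yaml (and signing key) into ./data
docker run -it --rm \
    -v "$(pwd)/data:/data" \
    -e SYNAPSE_SERVER_NAME=my.domain.name \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate

# Then start the homeserver, exposing the client port
docker run -d --name synapse \
    -v "$(pwd)/data:/data" \
    -p 8008:8008 \
    matrixdotorg/synapse:latest
```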
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/
<https://hub.docker.com/r/avhost/docker-matrix/tags/>
Slavi Pantaleev has created an Ansible playbook,
which installs the official Docker image of Matrix Synapse
along with many other Matrix-related services (Postgres database, Element, coturn,
ma1sd, SSL support, etc.).
For more details, see
https://github.com/spantaleev/matrix-docker-ansible-deploy
<https://github.com/spantaleev/matrix-docker-ansible-deploy>
#### Debian/Ubuntu
### Debian/Ubuntu
#### Matrix.org packages
##### Matrix.org packages
Matrix.org provides Debian/Ubuntu packages of the latest stable version of
Synapse via https://packages.matrix.org/debian/. They are available for Debian
Synapse via <https://packages.matrix.org/debian/>. They are available for Debian
9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:
```
```sh
sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
@@ -299,7 +302,7 @@ The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
#### Downstream Debian packages
##### Downstream Debian packages
We do not recommend using the packages from the default Debian `buster`
repository at this time, as they are old and suffer from known security
@@ -311,49 +314,49 @@ for information on how to use backports.
If you are using Debian `sid` or testing, Synapse is available in the default
repositories and it should be possible to install it simply with:
```
```sh
sudo apt install matrix-synapse
```
#### Downstream Ubuntu packages
##### Downstream Ubuntu packages
We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
The latest version of Synapse can be installed from [our repository](#matrixorg-packages).
### Fedora
#### Fedora
Synapse is in the Fedora repositories as `matrix-synapse`:
```
```sh
sudo dnf install matrix-synapse
```
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
<https://obs.infoserver.lv/project/monitor/matrix-synapse>
### OpenSUSE
#### OpenSUSE
Synapse is in the OpenSUSE repositories as `matrix-synapse`:
```
```sh
sudo zypper install matrix-synapse
```
### SUSE Linux Enterprise Server
#### SUSE Linux Enterprise Server
Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/
<https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/>
### ArchLinux
#### ArchLinux
The quickest way to get up and running with ArchLinux is probably with the community package
https://www.archlinux.org/packages/community/any/matrix-synapse/, which should pull in most of
<https://www.archlinux.org/packages/community/any/matrix-synapse/>, which should pull in most of
the necessary dependencies.
pip may be outdated (6.0.7-1; it needs to be upgraded to 6.0.8-1):
```
```sh
sudo pip install --upgrade pip
```
@@ -362,28 +365,28 @@ ELFCLASS32 (x64 Systems), you may need to reinstall py-bcrypt to correctly
compile it under the right architecture. (This should not be needed if
installing under virtualenv):
```
```sh
sudo pip uninstall py-bcrypt
sudo pip install py-bcrypt
```
### Void Linux
#### Void Linux
Synapse can be found in the void repositories as 'synapse':
```
```sh
xbps-install -Su
xbps-install -S synapse
```
### FreeBSD
#### FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py37-matrix-synapse`
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py37-matrix-synapse`
### OpenBSD
#### OpenBSD
As of OpenBSD 6.7 Synapse is available as a pre-compiled binary. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
@@ -392,20 +395,35 @@ and mounting it to `/var/synapse` should be taken into consideration.
Installing Synapse:
```
```sh
doas pkg_add synapse
```
### NixOS
#### NixOS
Robin Lambertz has packaged Synapse for NixOS at:
https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix
<https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix>
# Setting up Synapse
## Setting up Synapse
Once you have installed synapse as above, you will need to configure it.
## TLS certificates
### Using PostgreSQL
By default Synapse uses [SQLite](https://sqlite.org/) and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
very light workloads.
Almost all installations should opt to use [PostgreSQL](https://www.postgresql.org). Advantages include:
- significant performance improvements due to the superior threading and
caching model, smarter query optimiser
- allowing the DB to be run on separate hardware
For information on how to install and use PostgreSQL in Synapse, please see
[docs/postgres.md](docs/postgres.md)
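As a brief sketch of the setup described there (`synapse_user` and the database name are illustrative), the database must be created with `C` collation from `template0`:

```sh
# Create a dedicated database user; you will be prompted for a password
sudo -u postgres createuser --pwprompt synapse_user

# Synapse requires a UTF8 database with C collation, hence template0
sudo -u postgres createdb --encoding=UTF8 --locale=C --template=template0 \
    --owner=synapse_user synapse
```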
### TLS certificates
The default configuration exposes a single HTTP port on the local
interface: `http://localhost:8008`. It is suitable for local testing,
@@ -419,19 +437,19 @@ The recommended way to do so is to set up a reverse proxy on port
Alternatively, you can configure Synapse to expose an HTTPS port. To do
so, you will need to edit `homeserver.yaml`, as follows:
* First, under the `listeners` section, uncomment the configuration for the
- First, under the `listeners` section, uncomment the configuration for the
TLS-enabled listener. (Remove the hash sign (`#`) at the start of
each line). The relevant lines are like this:
```
- port: 8448
type: http
tls: true
resources:
- names: [client, federation]
```yaml
- port: 8448
type: http
tls: true
resources:
- names: [client, federation]
```
* You will also need to uncomment the `tls_certificate_path` and
- You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You will need to manage
provisioning of these certificates yourself — Synapse had built-in ACME
support, but the ACMEv1 protocol Synapse implements is deprecated, not
@@ -446,7 +464,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md).
## Client Well-Known URI
### Client Well-Known URI
Setting up the client Well-Known URI is optional but if you set it up, it will
allow users to enter their full username (e.g. `@user:<server_name>`) into clients
@@ -457,7 +475,7 @@ about the actual homeserver URL you are using.
The URL `https://<server_name>/.well-known/matrix/client` should return JSON in
the following format.
```
```json
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
@@ -467,7 +485,7 @@ the following format.
It can optionally contain identity server information as well.
```
```json
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
@@ -484,7 +502,8 @@ Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
view it.
In nginx this would be something like:
```
```nginx
location /.well-known/matrix/client {
return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
default_type application/json;
@@ -497,11 +516,11 @@ correctly. `public_baseurl` should be set to the URL that clients will use to
connect to your server. This is the same URL you put for the `m.homeserver`
`base_url` above.
```
```yaml
public_baseurl: "https://<matrix.example.com>"
```
## Email
### Email
It is desirable for Synapse to have the capability to send email. This allows
Synapse to send password reset emails, send verifications when an email address
@@ -516,7 +535,7 @@ and `notif_from` fields filled out. You may also need to set `smtp_user`,
If email is not configured, password reset, registration and notifications via
email will be disabled.
## Registering a user
### Registering a user
The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
@@ -524,7 +543,7 @@ Alternatively you can do so from the command line if you have installed via pip.
This can be done as follows:
```
```sh
$ source ~/synapse/env/bin/activate
$ synctl start # if not already running
$ register_new_matrix_user -c homeserver.yaml http://localhost:8008
@@ -542,12 +561,12 @@ value is generated by `--generate-config`), but it should be kept secret, as
anyone with knowledge of it can register users, including admin accounts,
on your server even if `enable_registration` is `false`.
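With the shared secret to hand, a user can also be registered non-interactively (a sketch; see `register_new_matrix_user --help` for the full options):

```sh
register_new_matrix_user -k "<registration_shared_secret>" \
    -u alice -p "<password>" --no-admin \
    http://localhost:8008
```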
## Setting up a TURN server
### Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
## URL previews
### URL previews
Synapse includes support for previewing URLs, which is disabled by default. To
turn it on you must enable the `url_preview_enabled: True` config parameter
@@ -561,14 +580,14 @@ This also requires the optional `lxml` python dependency to be installed. This
in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
means `apt-get install libxml2-dev`, or equivalent for your OS.
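If you installed via pip, the optional dependency can also be pulled in through the package's `url_preview` extra (assuming your install used pip into the virtualenv created above):

```sh
source ~/synapse/env/bin/activate
pip install "matrix-synapse[url_preview]"
```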
# Troubleshooting Installation
### Troubleshooting Installation
`pip` seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.:
```
```sh
pip install twisted
```


@@ -243,6 +243,8 @@ Then update the ``users`` table in the database::
Synapse Development
===================
Join our developer community on Matrix: [#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org)
Before setting up a development environment for synapse, make sure you have the
system dependencies (such as the python header files) installed - see
`Installing from source <INSTALL.md#installing-from-source>`_.


@@ -5,6 +5,16 @@ Before upgrading check if any special steps are required to upgrade from the
version you currently have installed to the current version of Synapse. The extra
instructions that may be required are listed later in this document.
* Check that your versions of Python and PostgreSQL are still supported.
Synapse follows upstream lifecycles for `Python`_ and `PostgreSQL`_, and
removes support for versions which are no longer maintained.
The website https://endoflife.date also offers convenient summaries.
.. _Python: https://devguide.python.org/devcycle/#end-of-life-branches
.. _PostgreSQL: https://www.postgresql.org/support/versioning/
* If Synapse was installed using `prebuilt packages
<INSTALL.md#prebuilt-packages>`_, you will need to follow the normal process
for upgrading those packages.
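For the matrix.org Debian/Ubuntu packages, for example, that would be (a
sketch; ``matrix-synapse-py3`` is the package name used in
``debian/changelog`` below)::

    sudo apt update
    sudo apt install matrix-synapse-py3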
@@ -78,6 +88,18 @@ for example:
Upgrading to v1.25.0
====================
Last release supporting Python 3.5
----------------------------------
This is the last release of Synapse guaranteed to support Python 3.5,
which passed its upstream End of Life date several months ago.
We will attempt to maintain support through March 2021, but without guarantees.
In the future, Synapse will follow upstream schedules for ending support of
older versions of Python and PostgreSQL. Please upgrade to at least Python 3.6
and PostgreSQL 9.6 as soon as possible.
Blacklisting IP ranges
----------------------


@@ -1 +0,0 @@
Fix the "Event persist rate" section of the included grafana dashboard by adding missing prometheus rules.


@@ -1 +0,0 @@
Apply an IP range blacklist to push and key revocation requests.


@@ -1 +0,0 @@
Fix bug where we might not correctly calculate the current state for rooms with multiple extremities.


@@ -1 +0,0 @@
Deprecate Shutdown Room and Purge Room Admin APIs.


@@ -1 +0,0 @@
Fix a long standing bug in the register admin endpoint (`/_synapse/admin/v1/register`) when the `mac` field was not provided. The endpoint now properly returns a 400 error. Contributed by @edwargix.


@@ -1 +0,0 @@
Combine related media admin API docs.


@@ -1 +0,0 @@
Add optional HTTP authentication to replication endpoints.


@@ -1 +0,0 @@
Fix a long-standing bug on Synapse instances supporting Single-Sign-On, where users would be prompted to enter their password to confirm certain actions, even though they have not set a password.


@@ -1 +0,0 @@
Remove some unnecessary stubbing from unit tests.


@@ -1 +0,0 @@
Fix a longstanding bug where a 500 error would be returned if the `Content-Length` header was not provided to the upload media resource.


@@ -1 +0,0 @@
Remove unused `FakeResponse` class from unit tests.


@@ -1 +0,0 @@
Add additional validation to pusher URLs to be compliant with the specification.


@@ -1 +0,0 @@
Fix the error code that is returned when a user tries to register on a homeserver on which new-user registration has been disabled.


@@ -1 +0,0 @@
Apply an IP range blacklist to push and key revocation requests.


@@ -1 +0,0 @@
Fix a bug where `PUT /_synapse/admin/v2/users/<user_id>` failed to create a new user when `avatar_url` is specified. Bug introduced in Synapse v1.9.0.


@@ -1 +0,0 @@
Fix an error in the documentation for the SAML username mapping provider.


@@ -1 +0,0 @@
Improve the error messages printed as a result of configuration problems for extension modules.


@@ -1 +0,0 @@
Pass `room_id` to `get_auth_chain_difference`.


@@ -1 +0,0 @@
Add type hints to push module.


@@ -1 +0,0 @@
Simplify logic for handling user-interactive-auth via single-sign-on servers.


@@ -1 +0,0 @@
Add type hints to push module.


@@ -1 +0,0 @@
Fix a 500 error when attempting to preview an empty HTML file.


@@ -1 +0,0 @@
Add number of local devices to Room Details Admin API. Contributed by @dklimpel.


@@ -1 +0,0 @@
Add `X-Robots-Tag` header to stop web crawlers from indexing media.


@@ -1 +0,0 @@
Clarify comments around template directories in `sample_config.yaml`.


@@ -1 +0,0 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.


@@ -1 +0,0 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.


@@ -1 +0,0 @@
Skip the SAML tests if the requirements (`pysaml2` and `xmlsec1`) aren't available.


@@ -1 +0,0 @@
Fix multiarch docker image builds.


@@ -1 +0,0 @@
Don't publish `latest` docker image until all archs are built.


@@ -1 +0,0 @@
Improve structured logging tests.


@@ -1 +0,0 @@
Fix occasional deadlock when handling SIGHUP.


@@ -1 +0,0 @@
Fix bug where we ratelimited auto joining of rooms on registration (using `auto_join_rooms` config).

debian/changelog

@@ -1,3 +1,14 @@
matrix-synapse-py3 (1.25.0) stable; urgency=medium
[ Dan Callahan ]
* Update dependencies to account for the removal of the transitional
dh-systemd package from Debian Bullseye.
[ Synapse Packaging team ]
* New synapse release 1.25.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 13 Jan 2021 10:14:55 +0000
matrix-synapse-py3 (1.24.0) stable; urgency=medium
* New synapse release 1.24.0.

debian/control

@@ -3,9 +3,11 @@ Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
# TODO: Remove the dependency on dh-systemd after dropping support for Ubuntu xenial
# On all other supported releases, it's merely a transitional package which
# does nothing but depends on debhelper (> 9.20160709)
Build-Depends:
debhelper (>= 9),
dh-systemd,
debhelper (>= 9.20160709) | dh-systemd,
dh-virtualenv (>= 1.1),
libsystemd-dev,
libpq-dev,


@@ -50,17 +50,22 @@ FROM ${distro}
ARG distro=""
ENV distro ${distro}
# Python < 3.7 assumes LANG="C" means ASCII-only and throws on printing unicode
# http://bugs.python.org/issue19846
ENV LANG C.UTF-8
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
# TODO: Remove the dh-systemd stanza after dropping support for Ubuntu xenial
# it's a transitional package on all other, more recent releases
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
build-essential \
debhelper \
devscripts \
dh-systemd \
libsystemd-dev \
lsb-release \
pkg-config \
@@ -70,7 +75,10 @@ RUN apt-get update -qq -o Acquire::Languages=none \
python3-venv \
sqlite3 \
libpq-dev \
xmlsec1
xmlsec1 \
&& ( env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
dh-systemd || true )
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /


@@ -1,24 +0,0 @@
# Inherit from the official Synapse docker image
FROM matrixdotorg/synapse
# Install deps
RUN apt-get update
RUN apt-get install -y supervisor redis nginx
RUN rm /etc/nginx/sites-enabled/default
# Copy the worker process and log configuration files
COPY ./docker/worker.yaml.j2 /conf/worker.yaml.j2
# Expose nginx listener port
EXPOSE 8080/tcp
# Volume for user-editable config files, logs etc.
VOLUME ["/data"]
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
# TODO: Healthcheck? Which worker to ask? Can we ask supervisord?


@@ -1,31 +0,0 @@
# Inherit from the workers Synapse docker image
FROM matrixdotorg/synapse:workers
RUN apt-get update
RUN apt-get install -y postgresql
RUN pg_ctlcluster 11 main start && su postgres -c "echo \
\"ALTER USER postgres PASSWORD 'somesecret'; \
CREATE DATABASE synapse \
ENCODING 'UTF8' \
LC_COLLATE='C' \
LC_CTYPE='C' \
template=template0;\" | psql" && pg_ctlcluster 11 main stop
WORKDIR /root
RUN curl -OL "https://github.com/caddyserver/caddy/releases/download/v2.3.0/caddy_2.3.0_linux_amd64.tar.gz" && \
tar xzf caddy_2.3.0_linux_amd64.tar.gz && rm caddy_2.3.0_linux_amd64.tar.gz
COPY ./docker/caddy.complement.json /root/caddy.json
EXPOSE 8008 8448
ENTRYPOINT sed -i "s/{{ server_name }}/${SERVER_NAME}/g" /root/caddy.json && \
pg_ctlcluster 11 main start > /dev/null && \
/root/caddy start --config /root/caddy.json > /dev/null && \
SYNAPSE_SERVER_NAME=${SERVER_NAME} \
SYNAPSE_REPORT_STATS=no \
POSTGRES_PASSWORD=somesecret POSTGRES_USER=postgres POSTGRES_HOST=localhost \
SYNAPSE_WORKERS=synchrotron \
/configure_workers_and_start.py


@@ -1,76 +0,0 @@
{
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":8448"
],
"routes": [
{
"match": [
{
"host": [
"{{ server_name }}"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "reverse_proxy",
"upstreams": [
{
"dial": "localhost:80"
}
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
},
"tls": {
"automation": {
"policies": [
{
"subjects": [
"{{ server_name }}"
],
"issuers": [
{
"module": "internal"
}
],
"on_demand": true
}
]
}
},
"pki": {
"certificate_authorities": {
"local": {
"name": "Complement CA",
"root": {
"certificate": "/ca/ca.crt",
"private_key": "/ca/ca.key"
},
"intermediate": {
"certificate": "/ca/ca.crt",
"private_key": "/ca/ca.key"
}
}
}
}
}
}

View File

@@ -27,7 +27,8 @@ log_config: "{{ SYNAPSE_LOG_CONFIG }}"
listeners:
{% if not SYNAPSE_NO_TLS %}
- port: 8448
-
port: 8448
bind_addresses: ['::']
type: http
tls: true
@@ -43,7 +44,7 @@ listeners:
tls: false
bind_addresses: ['::']
type: http
x_forwarded: true
x_forwarded: false
resources:
- names: [client]

View File

@@ -1,366 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script reads environment variables and generates a shared Synapse worker,
# nginx and supervisord configs depending on the workers requested
import os
import sys
import subprocess
import jinja2
import yaml
DEFAULT_LISTENER_RESOURCES = ["client", "federation"]
WORKERS_CONFIG = {
"pusher": {
"app": "synapse.app.pusher",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "start_pushers: false"
},
"user_dir": {
"app": "synapse.app.user_dir",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
],
"shared_extra_conf": "update_user_directory: false"
},
"media_repository": {
"app": "synapse.app.media_repository",
"listener_resources": ["media"],
"endpoint_patterns": [
"^/_synapse/admin/v1/purge_media_cache$",
"^/_synapse/admin/v1/room/.*/media.*$",
"^/_synapse/admin/v1/user/.*/media.*$",
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
],
"shared_extra_conf": "enable_media_repo: false"
},
"appservice": {
"app": "synapse.app.appservice",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "notify_appservices: false"
},
"federation_sender": {
"app": "synapse.app.federation_sender",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "send_federation: false"
},
"synchrotron": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": ""
},
"federation_reader": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
"^/_matrix/federation/(v1|v2)/backfill/",
"^/_matrix/federation/(v1|v2)/get_missing_events/",
"^/_matrix/federation/(v1|v2)/publicRooms",
"^/_matrix/federation/(v1|v2)/query/",
"^/_matrix/federation/(v1|v2)/make_join/",
"^/_matrix/federation/(v1|v2)/make_leave/",
"^/_matrix/federation/(v1|v2)/send_join/",
"^/_matrix/federation/(v1|v2)/send_leave/",
"^/_matrix/federation/(v1|v2)/invite/",
"^/_matrix/federation/(v1|v2)/query_auth/",
"^/_matrix/federation/(v1|v2)/event_auth/",
"^/_matrix/federation/(v1|v2)/exchange_third_party_invite/",
"^/_matrix/federation/(v1|v2)/user/devices/",
"^/_matrix/federation/(v1|v2)/get_groups_publicised$",
"^/_matrix/key/v2/query",
],
"shared_extra_conf": ""
},
"federation_inbound": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"/_matrix/federation/(v1|v2)/send/",
],
"shared_extra_conf": ""
},
}
# Utility functions
def log(txt):
print(txt)
def error(txt):
log(txt)
sys.exit(2)
def convert(src, dst, environ):
"""Generate a file from a template
Args:
src (str): path to input file
dst (str): path to file to write
environ (dict): environment dictionary, for replacement mappings.
"""
with open(src) as infile:
template = infile.read()
rendered = jinja2.Template(template, autoescape=True).render(**environ)
print(rendered)
with open(dst, "w") as outfile:
outfile.write(rendered)
def generate_base_homeserver_config():
"""Starts Synapse and generates a basic homeserver config, which will later be
modified for worker support.
Raises: CalledProcessError if calling start.py returns a non-zero exit code.
"""
# start.py already does this for us, so just call that.
# note that this script is copied in from the official, monolith dockerfile
subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
def generate_worker_files(environ, config_path: str, data_dir: str):
"""Read the desired list of workers from environment variables and generate
shared homeserver, nginx and supervisord configs.
Args:
environ: _Environ[str]
config_path: Where to output the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
"""
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# The contents of a Synapse config file that will be added alongside the generated
# config when running the main Synapse process.
# It is intended mainly for disabling functionality when certain workers are spun up,
# and for adding the replication listener
# First, read the original config file to pick up its listeners config, then add the replication listener
listeners = [{
"port": 9093,
"bind_address": "127.0.0.1",
"type": "http",
"resources":[{
"names": ["replication"]
}]
}]
with open(config_path) as file_stream:
original_config = yaml.safe_load(file_stream)
original_listeners = original_config.get("listeners")
if original_listeners:
listeners += original_listeners
homeserver_config = yaml.dump({"listeners": listeners})
homeserver_config += """
redis:
enabled: true
# TODO: remove before prod
suppress_key_server_warning: true
"""
# The supervisord config
supervisord_config = """
[supervisord]
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
priority=500
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=www-data
autorestart=true
[program:synapse_main]
command=/usr/local/bin/python -m synapse.app.homeserver \
--config-path="%s" \
--config-path=/conf/workers/shared.yaml
priority=1
# Log startup failures to supervisord's stdout/err
# Regular synapse logs will still go in the configured data directory
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=unexpected
exitcodes=0
""" % (config_path,)
# An nginx site config. Will live in /etc/nginx/conf.d
nginx_config_template_header = """
server {
# Listen on Synapse's default HTTP port number
listen 8080;
listen [::]:8080;
server_name localhost;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 100M;
"""
nginx_config_body = "" # to modify below
nginx_config_template_end = """
# Send all other traffic to the main process
location ~* ^(\/_matrix|\/_synapse) {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
"""
# Read desired worker configuration from environment
if "SYNAPSE_WORKERS" not in environ:
worker_types = []
else:
worker_types = environ.get("SYNAPSE_WORKERS")
worker_types = worker_types.split(",")
os.mkdir("/conf/workers")
worker_port = 18009
for worker_type in worker_types:
worker_type = worker_type.strip()
worker_config = WORKERS_CONFIG.get(worker_type)
if worker_config:
worker_config = worker_config.copy()
else:
log(worker_type + " is an unknown worker type! It will be ignored")
continue
# This is not hardcoded because we ultimately want to be able to run several
# workers of each type (not supported for now)
worker_name = worker_type
worker_config.update({"name": worker_name})
worker_config.update({"port": worker_port})
worker_config.update({"config_path": config_path})
homeserver_config += worker_config['shared_extra_conf'] + "\n"
# Add this worker to the supervisord config
supervisord_config += """
[program:synapse_{name}]
command=/usr/local/bin/python -m {app} \
--config-path="{config_path}" \
--config-path=/conf/workers/shared.yaml \
--config-path=/conf/workers/{name}.yaml
autorestart=unexpected
priority=500
exitcodes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0""".format_map(worker_config)
for pattern in worker_config['endpoint_patterns']:
nginx_config_body += """
location ~* %s {
proxy_pass http://localhost:%s;
proxy_set_header X-Forwarded-For $remote_addr;
}
""" % (pattern, worker_port)
convert("/conf/worker.yaml.j2", "/conf/workers/{name}.yaml".format(name=worker_name), worker_config)
worker_port += 1
# Write out the config files. We use append mode for each in case the
# files have already been written to by others.
# Shared homeserver config
print(homeserver_config)
with open("/conf/workers/shared.yaml", "a") as f:
f.write(homeserver_config)
# Nginx config
print()
print(nginx_config_template_header)
print(nginx_config_body)
print(nginx_config_template_end)
with open("/etc/nginx/conf.d/matrix-synapse.conf", "a") as f:
f.write(nginx_config_template_header)
f.write(nginx_config_body)
f.write(nginx_config_template_end)
# Supervisord config
print()
print(supervisord_config)
with open("/etc/supervisor/conf.d/supervisord.conf", "a") as f:
f.write(supervisord_config)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"
if not os.path.exists(log_dir):
os.mkdir(log_dir)
def start_supervisord():
"""Starts up supervisord which then starts and monitors all other necessary processes
Raises: CalledProcessError if calling supervisord returns a non-zero exit code.
"""
subprocess.check_output(["/usr/bin/supervisord"])
def main(args, environ):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
# Override SYNAPSE_NO_TLS: we don't support TLS in worker mode;
# it needs to be handled by a frontend proxy
environ["SYNAPSE_NO_TLS"] = "yes"
# Generate the base homeserver config if one does not yet exist
if not os.path.exists(config_path):
log("Generating base homeserver config")
generate_base_homeserver_config()
# Always regenerate all other config files
generate_worker_files(environ, config_path, data_dir)
# Start supervisord, which will start Synapse, all of the configured worker
# processes, redis, nginx etc. according to the config we created above.
start_supervisord()
if __name__ == "__main__":
main(sys.argv, os.environ)
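To make the routing logic above easier to follow, here is a condensed, self-contained sketch of how a `SYNAPSE_WORKERS` value expands into nginx `location` blocks. The endpoint patterns are abbreviated from `WORKERS_CONFIG`, and the output is illustrative only.

```python
# Condensed re-implementation of the generation loop above, for illustration.
endpoint_patterns = {
    "synchrotron": ["^/_matrix/client/(v2_alpha|r0)/sync$"],
    "federation_inbound": ["/_matrix/federation/(v1|v2)/send/"],
}

worker_port = 18009
for worker_type in "synchrotron,federation_inbound".split(","):
    for pattern in endpoint_patterns.get(worker_type.strip(), []):
        print(
            "location ~* %s { proxy_pass http://localhost:%d; }"
            % (pattern, worker_port)
        )
    worker_port += 1
```

Each worker type gets its own port, and any request matching one of its patterns is proxied there; everything else falls through to the main process.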

View File

@@ -134,7 +134,6 @@ def run_generate_config(environ, ownership):
Never returns.
"""
print("running generate config")
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error("Environment variable '%s' is mandatory in `generate` mode." % (v,))
@@ -150,8 +149,6 @@ def run_generate_config(environ, ownership):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
print("Generating config at", config_path, "Config dir:", config_dir)
args = [
"python",
"-m",
@@ -180,8 +177,8 @@ def run_generate_config(environ, ownership):
else:
os.execv("/usr/local/bin/python", args)
def main(args, environ):
print("bla")
mode = args[1] if len(args) > 1 else "run"
desired_uid = int(environ.get("UID", "991"))
desired_gid = int(environ.get("GID", "991"))

View File

@@ -1,15 +0,0 @@
worker_app: "{{ app }}"
worker_name: "{{ name }}"
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: {{ port }}
resources:
- names:
{%- for resource in listener_resources %}
- {{ resource }}
{%- endfor %}

View File

@@ -8,6 +8,7 @@
* [Parameters](#parameters-1)
* [Response](#response)
* [Undoing room shutdowns](#undoing-room-shutdowns)
- [Make Room Admin API](#make-room-admin-api)
# List Room API
@@ -467,6 +468,7 @@ The following fields are returned in the JSON response body:
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
## Undoing room shutdowns
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
@@ -492,4 +494,20 @@ You will have to manually handle, if you so choose, the following:
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.
# Make Room Admin API
Grants another user the highest power available to a local user who is in the room.
If the user is not in the room, and it is not publicly joinable, then invite the user.
By default the server admin (the caller) is granted power, but another user can
optionally be specified, e.g.:
```
POST /_synapse/admin/v1/rooms/<room_id_or_alias>/make_room_admin
{
"user_id": "@foo:example.com"
}
```
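For example, the call could be driven from Python roughly as follows. This is a sketch only: the homeserver URL and admin access token are placeholders, and in practice the room ID or alias should be URL-encoded.

```python
import requests  # third-party HTTP client, assumed available

HOMESERVER = "https://matrix.example.com"  # placeholder
ADMIN_TOKEN = "<admin_access_token>"       # placeholder
ROOM = "!roomid:example.com"               # room ID or alias (URL-encode in practice)

resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/rooms/{ROOM}/make_room_admin",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"user_id": "@foo:example.com"},
)
resp.raise_for_status()
```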

View File

@@ -30,7 +30,12 @@ It returns a JSON body like the following:
],
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false
"deactivated": false,
"password_hash": "$2b$12$p9B4GkqYdRTPGD",
"creation_ts": 1560432506,
"appservice_id": null,
"consent_server_notice_sent": null,
"consent_version": null
}
URL parameters:
@@ -139,7 +144,6 @@ A JSON body is returned with the following shape:
"users": [
{
"name": "<user_id1>",
"password_hash": "<password_hash1>",
"is_guest": 0,
"admin": 0,
"user_type": null,
@@ -148,7 +152,6 @@ A JSON body is returned with the following shape:
"avatar_url": null
}, {
"name": "<user_id2>",
"password_hash": "<password_hash2>",
"is_guest": 0,
"admin": 1,
"user_type": null,

View File

@@ -31,7 +31,7 @@ easy to run CAS implementation built on top of Django.
You should now have a Django project configured to serve CAS authentication with
a single user created.
## Configure Synapse (and Riot) to use CAS
## Configure Synapse (and Element) to use CAS
1. Modify your `homeserver.yaml` to enable CAS and point it to your locally
running Django test server:
@@ -51,9 +51,9 @@ and that the CAS server is on port 8000, both on localhost.
## Testing the configuration
Then in Riot:
Then in Element:
1. Visit the login page with a Riot pointing at your homeserver.
1. Visit the login page with Element pointing at your homeserver.
2. Click the Single Sign-On button.
3. Login using the credentials created with `createsuperuser`.
4. You should be logged in.

View File

@@ -173,6 +173,18 @@ pid_file: DATADIR/homeserver.pid
# - 'fe80::/10'
# - 'fc00::/7'
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
# - '192.168.1.1'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -671,18 +683,6 @@ acme:
# - nyc.example.com
# - syd.example.com
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
# - '192.168.1.1'
# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
# and outbound federation, though be aware that any delay can be due to problems
@@ -1825,9 +1825,10 @@ oidc_config:
# * user: The claims returned by the UserInfo Endpoint and/or in the ID
# Token
#
# This must be configured if using the default mapping provider.
# If this is not set, the user will be prompted to choose their
# own username.
#
localpart_template: "{{ user.preferred_username }}"
#localpart_template: "{{ user.preferred_username }}"
# Jinja2 template for the display name to set on first login.
#
@@ -2068,6 +2069,21 @@ password_config:
#
#require_uppercase: true
ui_auth:
# The number of milliseconds to allow a user-interactive authentication
# session to be active.
#
# This defaults to 0, meaning the user is queried for their credentials
# before every action, but this can be overridden to allow a single
# validation to be re-used. This weakens the protections afforded by
# the user-interactive authentication process, by allowing for multiple
# (and potentially different) operations to use the same validation session.
#
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
#session_timeout: 15000
# Configuration for sending emails from Synapse.
#
@@ -2133,6 +2149,12 @@ email:
#
#validation_token_lifetime: 15m
# The web client location to direct users to during an invite. This is passed
# to the identity server as the org.matrix.web_client_location key. Defaults
# to unset, giving no guidance to the identity server.
#
#invite_client_location: https://app.element.io
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
@@ -2344,7 +2366,7 @@ spam_checker:
# If enabled, non server admins can only create groups with local parts
# starting with this prefix
#
#group_creation_prefix: "unofficial/"
#group_creation_prefix: "unofficial_"

View File

@@ -22,6 +22,8 @@ well as some specific methods:
* `user_may_create_room`
* `user_may_create_room_alias`
* `user_may_publish_room`
* `check_username_for_spam`
* `check_registration_for_spam`
The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
@@ -32,28 +34,33 @@ call back into the homeserver internals.
### Example
```python
from synapse.spam_checker_api import RegistrationBehaviour
class ExampleSpamChecker:
def __init__(self, config, api):
self.config = config
self.api = api
def check_event_for_spam(self, foo):
async def check_event_for_spam(self, foo):
return False # allow all events
def user_may_invite(self, inviter_userid, invitee_userid, room_id):
async def user_may_invite(self, inviter_userid, invitee_userid, room_id):
return True # allow all invites
def user_may_create_room(self, userid):
async def user_may_create_room(self, userid):
return True # allow all room creations
def user_may_create_room_alias(self, userid, room_alias):
async def user_may_create_room_alias(self, userid, room_alias):
return True # allow all room aliases
def user_may_publish_room(self, userid, room_id):
async def user_may_publish_room(self, userid, room_id):
return True # allow publishing of all rooms
def check_username_for_spam(self, user_profile):
async def check_username_for_spam(self, user_profile):
return False # allow all usernames
async def check_registration_for_spam(self, email_threepid, username, request_info):
return RegistrationBehaviour.ALLOW # allow all registrations
```
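Since these callbacks are now coroutines, anything driving them must `await` the result. A minimal sketch of exercising the example checker above (the constructor arguments are stand-ins):

```python
import asyncio

async def demo():
    checker = ExampleSpamChecker(config={}, api=None)  # stand-in arguments
    is_spam = await checker.check_event_for_spam({"type": "m.room.message"})
    print("event allowed:", not is_spam)

asyncio.run(demo())
```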
## Configuration

View File

@@ -15,12 +15,18 @@ where SAML mapping providers come into play.
SSO mapping providers are currently supported for OpenID and SAML SSO
configurations. Please see the details below for how to implement your own.
It is the responsibility of the mapping provider to normalise the SSO attributes
and map them to a valid Matrix ID. The
[specification for Matrix IDs](https://matrix.org/docs/spec/appendices#user-identifiers)
has some information about what is considered valid. Alternately an easy way to
ensure it is valid is to use a Synapse utility function:
`synapse.types.map_username_to_mxid_localpart`.
It is up to the mapping provider whether the user should be assigned a predefined
Matrix ID based on the SSO attributes, or if the user should be allowed to
choose their own username.
In the first case - where users are automatically allocated a Matrix ID - it is
the responsibility of the mapping provider to normalise the SSO attributes and
map them to a valid Matrix ID. The [specification for Matrix
IDs](https://matrix.org/docs/spec/appendices#user-identifiers) has some
information about what is considered valid.
If the mapping provider does not assign a Matrix ID, then Synapse will
automatically serve an HTML page allowing the user to pick their own username.
External mapping providers are provided to Synapse in the form of an external
Python module. You can retrieve this module from [PyPI](https://pypi.org) or elsewhere,
@@ -80,8 +86,9 @@ A custom mapping provider must specify the following methods:
with failures=1. The method should then return a different
`localpart` value, such as `john.doe1`.
- Returns a dictionary with two keys:
- localpart: A required string, used to generate the Matrix ID.
- displayname: An optional string, the display name for the user.
- `localpart`: A string, used to generate the Matrix ID. If this is
`None`, the user is prompted to pick their own username.
- `displayname`: An optional string, the display name for the user.
* `get_extra_attributes(self, userinfo, token)`
- This method must be async.
- Arguments:
@@ -165,12 +172,13 @@ A custom mapping provider must specify the following methods:
redirected to.
- This method must return a dictionary, which will then be used by Synapse
to build a new user. The following keys are allowed:
* `mxid_localpart` - Required. The mxid localpart of the new user.
* `mxid_localpart` - The mxid localpart of the new user. If this is
`None`, the user is prompted to pick their own username.
* `displayname` - The displayname of the new user. If not provided, will default to
the value of `mxid_localpart`.
* `emails` - A list of emails for the new user. If not provided, will
default to an empty list.
Alternatively it can raise a `synapse.api.errors.RedirectException` to
redirect the user to another page. This is useful to prompt the user for
additional information, e.g. if you want them to provide their own username.
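To make the `None` behaviour concrete, here is a minimal sketch of an OIDC-style mapping provider that defers username choice to the user. The class name is hypothetical, only the attribute-mapping method is shown, and the collision handling is illustrative.

```python
from typing import Optional


class PromptForUsernameProvider:
    """Sketch: a mapping provider that lets users pick their own localpart."""

    def __init__(self, parsed_config):
        self._config = parsed_config

    async def map_user_attributes(self, userinfo, token, failures: int = 0) -> dict:
        # Returning localpart=None causes Synapse to serve its username-picker page.
        localpart: Optional[str] = None
        if failures:
            # Illustrative collision handling: suggest a de-duplicated name instead.
            localpart = userinfo.get("preferred_username", "user") + str(failures)
        return {
            "localpart": localpart,
            "displayname": userinfo.get("name"),
        }
```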

View File

@@ -229,6 +229,7 @@ expressions:
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$

View File

@@ -7,11 +7,17 @@ show_error_codes = True
show_traceback = True
mypy_path = stubs
warn_unreachable = True
# To find all folders that pass mypy, you can run:
#
# find synapse/* -type d -not -name __pycache__ -exec bash -c "mypy '{}' > /dev/null" \; -print
files =
scripts-dev/sign_json,
synapse/api,
synapse/appservice,
synapse/config,
synapse/crypto,
synapse/event_auth.py,
synapse/events/builder.py,
synapse/events/validator.py,
@@ -20,6 +26,7 @@ files =
synapse/handlers/_base.py,
synapse/handlers/account_data.py,
synapse/handlers/account_validity.py,
synapse/handlers/admin.py,
synapse/handlers/appservice.py,
synapse/handlers/auth.py,
synapse/handlers/cas_handler.py,
@@ -38,13 +45,16 @@ files =
synapse/handlers/presence.py,
synapse/handlers/profile.py,
synapse/handlers/read_marker.py,
synapse/handlers/receipts.py,
synapse/handlers/register.py,
synapse/handlers/room.py,
synapse/handlers/room_list.py,
synapse/handlers/room_member.py,
synapse/handlers/room_member_worker.py,
synapse/handlers/saml_handler.py,
synapse/handlers/sso.py,
synapse/handlers/sync.py,
synapse/handlers/user_directory.py,
synapse/handlers/ui_auth,
synapse/http/client.py,
synapse/http/federation/matrix_federation_agent.py,
@@ -56,27 +66,34 @@ files =
synapse/metrics,
synapse/module_api,
synapse/notifier.py,
synapse/push/emailpusher.py,
synapse/push/httppusher.py,
synapse/push/mailer.py,
synapse/push/pusher.py,
synapse/push/pusherpool.py,
synapse/push/push_rule_evaluator.py,
synapse/push,
synapse/replication,
synapse/rest,
synapse/server.py,
synapse/server_notices,
synapse/spam_checker_api,
synapse/state,
synapse/storage/__init__.py,
synapse/storage/_base.py,
synapse/storage/background_updates.py,
synapse/storage/databases/main/appservice.py,
synapse/storage/databases/main/events.py,
synapse/storage/databases/main/keys.py,
synapse/storage/databases/main/pusher.py,
synapse/storage/databases/main/registration.py,
synapse/storage/databases/main/stream.py,
synapse/storage/databases/main/ui_auth.py,
synapse/storage/database.py,
synapse/storage/engines,
synapse/storage/keys.py,
synapse/storage/persist_events.py,
synapse/storage/prepare_database.py,
synapse/storage/purge_events.py,
synapse/storage/push_rule.py,
synapse/storage/relations.py,
synapse/storage/roommember.py,
synapse/storage/state.py,
synapse/storage/types.py,
synapse/storage/util,
synapse/streams,
synapse/types.py,
@@ -113,6 +130,9 @@ ignore_missing_imports = True
[mypy-h11]
ignore_missing_imports = True
[mypy-msgpack]
ignore_missing_imports = True
[mypy-opentracing]
ignore_missing_imports = True

View File

@@ -1,30 +0,0 @@
#! /bin/bash -eu
# This script is designed for developers who want to test their code
# against Complement.
#
# It creates a Complement-ready worker-enabled Synapse docker image from
# the local checkout and runs Complement tests against it.
#
# This script assumes that it is located in the scripts-dev folder of a
# Synapse checkout, and that Complement exists at ../../complement
# In my case, I have /home/user/code/complement and /home/user/code/synapse.
COMPLEMENT_DIR="/home/user/code/complement"
cd "$(dirname $0)/.."
# Build the Synapse image from the local checkout
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
# Build the base Synapse worker image
docker build -t matrixdotorg/synapse:workers -f docker/Dockerfile-workers .
cd "$COMPLEMENT_DIR"
# Build the Complement Synapse worker image
docker build -t matrixdotorg/complement-synapse:workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
# Run the tests on the resulting image!
COMPLEMENT_VERSION_CHECK_ITERATIONS=300 COMPLEMENT_DEBUG=1 COMPLEMENT_BASE_IMAGE=matrixdotorg/complement-synapse:workers go test -v -count=1 -tags="synapse_blacklist" -failfast ./tests
#COMPLEMENT_VERSION_CHECK_ITERATIONS=100 COMPLEMENT_DEBUG=1 COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 -parallel=1 ./tests/
#COMPLEMENT_VERSION_CHECK_ITERATIONS=100 COMPLEMENT_BASE_IMAGE=complement-synapse go test ./tests

View File

@@ -31,6 +31,8 @@ class SynapsePlugin(Plugin):
) -> Optional[Callable[[MethodSigContext], CallableType]]:
if fullname.startswith(
"synapse.util.caches.descriptors._CachedFunction.__call__"
) or fullname.startswith(
"synapse.util.caches.descriptors._LruCachedFunction.__call__"
):
return cached_function_method_signature
return None

View File

@@ -40,4 +40,6 @@ if __name__ == "__main__":
)
args = parser.parse_args()
args.output_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=args.log_file))
out = args.output_file
out.write(DEFAULT_LOG_CONFIG.substitute(log_file=args.log_file))
out.flush()

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.24.0"
__version__ = "1.25.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -23,7 +23,7 @@ from twisted.web.server import Request
import synapse.types
from synapse import event_auth
from synapse.api.auth_blocking import AuthBlocking
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventTypes, HistoryVisibility, Membership
from synapse.api.errors import (
AuthError,
Codes,
@@ -31,7 +31,9 @@ from synapse.api.errors import (
MissingClientTokenError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.appservice import ApplicationService
from synapse.events import EventBase
from synapse.http.site import SynapseRequest
from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
@@ -474,7 +476,7 @@ class Auth:
now = self.hs.get_clock().time_msec()
return now < expiry
def get_appservice_by_req(self, request):
def get_appservice_by_req(self, request: SynapseRequest) -> ApplicationService:
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
@@ -646,7 +648,8 @@ class Auth:
)
if (
visibility
and visibility.content["history_visibility"] == "world_readable"
and visibility.content.get("history_visibility")
== HistoryVisibility.WORLD_READABLE
):
return Membership.JOIN, None
raise AuthError(

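The switch from indexing to `.get` in the hunk above matters when a `m.room.history_visibility` event lacks the expected content key; a minimal illustration:

```python
content = {}  # a history-visibility event with missing/malformed content

# Old: content["history_visibility"] would raise KeyError here.
# New: .get returns None, so the WORLD_READABLE comparison is simply False.
assert content.get("history_visibility") != "world_readable"
```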
View File

@@ -36,6 +36,7 @@ class AuthBlocking:
self._limit_usage_by_mau = hs.config.limit_usage_by_mau
self._mau_limits_reserved_threepids = hs.config.mau_limits_reserved_threepids
self._server_name = hs.hostname
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
async def check_auth_blocking(
self,
@@ -76,6 +77,12 @@ class AuthBlocking:
# We never block the server from doing actions on behalf of
# users.
return
elif requester.app_service and not self._track_appservice_user_ips:
# If we're authenticated as an appservice then we only block
# auth if `track_appservice_user_ips` is set, as that option
# implicitly means that application services are part of MAU
# limits.
return
# Never fail an auth check for the server notices users or support user
# This can be a problem where event creation is prohibited due to blocking

View File

@@ -95,6 +95,8 @@ class EventTypes:
Presence = "m.presence"
Dummy = "org.matrix.dummy_event"
class RejectedReason:
AUTH_ERROR = "auth_error"
@@ -160,3 +162,10 @@ class RoomEncryptionAlgorithms:
class AccountDataTypes:
DIRECT = "m.direct"
IGNORED_USER_LIST = "m.ignored_user_list"
class HistoryVisibility:
INVITED = "invited"
JOINED = "joined"
SHARED = "shared"
WORLD_READABLE = "world_readable"

View File

@@ -89,7 +89,7 @@ from synapse.replication.tcp.streams import (
ToDeviceStream,
)
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events
from synapse.rest.client.v1 import events, room
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.login import LoginRestServlet
from synapse.rest.client.v1.profile import (
@@ -98,20 +98,6 @@ from synapse.rest.client.v1.profile import (
ProfileRestServlet,
)
from synapse.rest.client.v1.push_rule import PushRuleRestServlet
from synapse.rest.client.v1.room import (
JoinedRoomMemberListRestServlet,
JoinRoomAliasServlet,
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomInitialSyncRestServlet,
RoomMemberListRestServlet,
RoomMembershipRestServlet,
RoomMessageListRestServlet,
RoomSendEventRestServlet,
RoomStateEventRestServlet,
RoomStateRestServlet,
RoomTypingRestServlet,
)
from synapse.rest.client.v1.voip import VoipRestServlet
from synapse.rest.client.v2_alpha import groups, sync, user_directory
from synapse.rest.client.v2_alpha._base import client_patterns
@@ -512,12 +498,6 @@ class GenericWorkerServer(HomeServer):
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
RoomMemberListRestServlet(self).register(resource)
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
RoomMessageListRestServlet(self).register(resource)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
ThreepidRestServlet(self).register(resource)
@@ -526,22 +506,19 @@ class GenericWorkerServer(HomeServer):
VoipRestServlet(self).register(resource)
PushRuleRestServlet(self).register(resource)
VersionsRestServlet(self).register(resource)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
KeyUploadServlet(self).register(resource)
AccountDataServlet(self).register(resource)
RoomAccountDataServlet(self).register(resource)
RoomTypingRestServlet(self).register(resource)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
room.register_servlets(self, resource, True)
room.register_deprecated_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
RoomInitialSyncRestServlet(self).register(resource)
user_directory.register_servlets(self, resource)

View File

@@ -63,6 +63,7 @@ from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client.pick_username import pick_username_resource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
@@ -192,6 +193,7 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/versions": client_resource,
"/.well-known/matrix/client": WellKnownResource(self),
"/_synapse/admin": AdminRestResource(self),
"/_synapse/client/pick_username": pick_username_resource(self),
}
)

View File

@@ -3,6 +3,7 @@ from typing import Any, Iterable, List, Optional
from synapse.config import (
api,
appservice,
auth,
captcha,
cas,
consent_config,
@@ -14,7 +15,6 @@ from synapse.config import (
logger,
metrics,
oidc_config,
password,
password_auth_providers,
push,
ratelimiting,
@@ -65,7 +65,7 @@ class RootConfig:
sso: sso.SSOConfig
oidc: oidc_config.OIDCConfig
jwt: jwt_config.JWTConfig
password: password.PasswordConfig
auth: auth.AuthConfig
email: emailconfig.EmailConfig
worker: workers.WorkerConfig
authproviders: password_auth_providers.PasswordAuthProviderConfig

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -16,11 +17,11 @@
from ._base import Config
class PasswordConfig(Config):
"""Password login configuration
class AuthConfig(Config):
"""Password and login configuration
"""
section = "password"
section = "auth"
def read_config(self, config, **kwargs):
password_config = config.get("password_config", {})
@@ -35,6 +36,10 @@ class PasswordConfig(Config):
self.password_policy = password_config.get("policy") or {}
self.password_policy_enabled = self.password_policy.get("enabled", False)
# User-interactive authentication
ui_auth = config.get("ui_auth") or {}
self.ui_auth_session_timeout = ui_auth.get("session_timeout", 0)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
password_config:
@@ -87,4 +92,19 @@ class PasswordConfig(Config):
# Defaults to 'false'.
#
#require_uppercase: true
ui_auth:
# The number of milliseconds to allow a user-interactive authentication
# session to be active.
#
# This defaults to 0, meaning the user is queried for their credentials
# before every action, but this can be overridden to allow a single
# validation to be re-used. This weakens the protections afforded by
# the user-interactive authentication process, by allowing for multiple
# (and potentially different) operations to use the same validation session.
#
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
#session_timeout: 15000
"""

View File

@@ -322,6 +322,22 @@ class EmailConfig(Config):
self.email_subjects = EmailSubjectConfig(**subjects)
# The invite client location should be a HTTP(S) URL or None.
self.invite_client_location = email_config.get("invite_client_location") or None
if self.invite_client_location:
if not isinstance(self.invite_client_location, str):
raise ConfigError(
"Config option email.invite_client_location must be type str"
)
if not (
self.invite_client_location.startswith("http://")
or self.invite_client_location.startswith("https://")
):
raise ConfigError(
"Config option email.invite_client_location must be a http or https URL",
path=("email", "invite_client_location"),
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return (
"""\
@@ -389,6 +405,12 @@ class EmailConfig(Config):
#
#validation_token_lifetime: 15m
# The web client location to direct users to during an invite. This is passed
# to the identity server as the org.matrix.web_client_location key. Defaults
# to unset, giving no guidance to the identity server.
#
#invite_client_location: https://app.element.io
# Directory in which Synapse will try to find the template files below.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.

View File

@@ -56,18 +56,6 @@ class FederationConfig(Config):
# - nyc.example.com
# - syd.example.com
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
# - '192.168.1.1'
# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
# and outbound federation, though be aware that any delay can be due to problems

View File

@@ -32,5 +32,5 @@ class GroupsConfig(Config):
# If enabled, non server admins can only create groups with local parts
# starting with this prefix
#
#group_creation_prefix: "unofficial/"
#group_creation_prefix: "unofficial_"
"""

View File

@@ -17,6 +17,7 @@
from ._base import RootConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .auth import AuthConfig
from .cache import CacheConfig
from .captcha import CaptchaConfig
from .cas import CasConfig
@@ -30,7 +31,6 @@ from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
from .oidc_config import OIDCConfig
from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
from .ratelimiting import RatelimitConfig
@@ -76,7 +76,7 @@ class HomeServerConfig(RootConfig):
CasConfig,
SSOConfig,
JWTConfig,
PasswordConfig,
AuthConfig,
EmailConfig,
PasswordAuthProviderConfig,
PushConfig,

View File

@@ -206,7 +206,7 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
# filter options, but care must be taken when using e.g. MemoryHandler to buffer
# writes.
log_context_filter = LoggingContextFilter(request="")
log_context_filter = LoggingContextFilter()
log_metadata_filter = MetadataFilter({"server_name": config.server_name})
old_factory = logging.getLogRecordFactory()

View File

@@ -203,9 +203,10 @@ class OIDCConfig(Config):
# * user: The claims returned by the UserInfo Endpoint and/or in the ID
# Token
#
# This must be configured if using the default mapping provider.
# If this is not set, the user will be prompted to choose their
# own username.
#
localpart_template: "{{{{ user.preferred_username }}}}"
#localpart_template: "{{{{ user.preferred_username }}}}"
# Jinja2 template for the display name to set on first login.
#

View File

@@ -832,6 +832,18 @@ class ServerConfig(Config):
#ip_range_blacklist:
%(ip_range_blacklist)s
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
#ip_range_whitelist:
# - '192.168.1.1'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#

View File

@@ -227,7 +227,7 @@ class ConnectionVerifier:
# This code is based on twisted.internet.ssl.ClientTLSOptions.
def __init__(self, hostname: bytes, verify_certs):
def __init__(self, hostname: bytes, verify_certs: bool):
self._verify_certs = verify_certs
_decoded = hostname.decode("ascii")

View File

@@ -18,7 +18,7 @@
import collections.abc
import hashlib
import logging
from typing import Dict
from typing import Any, Callable, Dict, Tuple
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
@@ -27,13 +27,18 @@ from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.events import EventBase
from synapse.events.utils import prune_event, prune_event_dict
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
Hasher = Callable[[bytes], "hashlib._Hash"]
def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
def check_event_content_hash(
event: EventBase, hash_algorithm: Hasher = hashlib.sha256
) -> bool:
"""Check whether the hash for this PDU matches the contents"""
name, expected_hash = compute_content_hash(event.get_pdu_json(), hash_algorithm)
logger.debug(
@@ -67,18 +72,19 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
return message_hash_bytes == expected_hash
def compute_content_hash(event_dict, hash_algorithm):
def compute_content_hash(
event_dict: Dict[str, Any], hash_algorithm: Hasher
) -> Tuple[str, bytes]:
"""Compute the content hash of an event, which is the hash of the
unredacted event.
Args:
event_dict (dict): The unredacted event as a dict
event_dict: The unredacted event as a dict
hash_algorithm: A hasher from `hashlib`, e.g. hashlib.sha256, to use
to hash the event
Returns:
tuple[str, bytes]: A tuple of the name of hash and the hash as raw
bytes.
A tuple of the name of hash and the hash as raw bytes.
"""
event_dict = dict(event_dict)
event_dict.pop("age_ts", None)
@@ -94,18 +100,19 @@ def compute_content_hash(event_dict, hash_algorithm):
return hashed.name, hashed.digest()
def compute_event_reference_hash(event, hash_algorithm=hashlib.sha256):
def compute_event_reference_hash(
event, hash_algorithm: Hasher = hashlib.sha256
) -> Tuple[str, bytes]:
"""Computes the event reference hash. This is the hash of the redacted
event.
Args:
event (FrozenEvent)
event
hash_algorithm: A hasher from `hashlib`, e.g. hashlib.sha256, to use
to hash the event
Returns:
tuple[str, bytes]: A tuple of the name of hash and the hash as raw
bytes.
A tuple of the name of hash and the hash as raw bytes.
"""
tmp_event = prune_event(event)
event_dict = tmp_event.get_pdu_json()
@@ -156,7 +163,7 @@ def add_hashes_and_signatures(
event_dict: JsonDict,
signature_name: str,
signing_key: SigningKey,
):
) -> None:
"""Add content hash and sign the event
Args:

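For reference, the content-hash computation these helpers wrap boils down to stripping transient keys and hashing the canonical JSON. The standalone sketch below omits some Synapse-internal transient keys, so treat it as an approximation rather than the exact implementation.

```python
import hashlib
from typing import Tuple

from canonicaljson import encode_canonical_json


def content_hash(event_dict: dict) -> Tuple[str, bytes]:
    """Hash an unredacted event dict, roughly per the Matrix content-hash rules."""
    stripped = dict(event_dict)
    for key in ("age_ts", "unsigned", "signatures", "hashes"):
        stripped.pop(key, None)
    hashed = hashlib.sha256(encode_canonical_json(stripped))
    return hashed.name, hashed.digest()
```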
View File

@@ -14,9 +14,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import logging
import urllib
from collections import defaultdict
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple
import attr
from signedjson.key import (
@@ -40,6 +42,7 @@ from synapse.api.errors import (
RequestSendFailed,
SynapseError,
)
from synapse.config.key import TrustedKeyServer
from synapse.logging.context import (
PreserveLoggingContext,
make_deferred_yieldable,
@@ -47,11 +50,15 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.storage.keys import FetchKeyResult
from synapse.types import JsonDict
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.metrics import Measure
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -61,16 +68,17 @@ class VerifyJsonRequest:
A request to verify a JSON object.
Attributes:
server_name(str): The name of the server to verify against.
server_name: The name of the server to verify against.
key_ids(set[str]): The set of key_ids that could be used to verify the
JSON object
json_object: The JSON object to verify.
json_object(dict): The JSON object to verify.
minimum_valid_until_ts (int): time at which we require the signing key to
minimum_valid_until_ts: time at which we require the signing key to
be valid. (0 implies we don't care)
request_name: The name of the request.
key_ids: The set of key_ids that could be used to verify the JSON object
key_ready (Deferred[str, str, nacl.signing.VerifyKey]):
A deferred (server_name, key_id, verify_key) tuple that resolves when
a verify key has been fetched. The deferreds' callbacks are run with no
@@ -80,12 +88,12 @@ class VerifyJsonRequest:
errbacks with an M_UNAUTHORIZED SynapseError.
"""
server_name = attr.ib()
json_object = attr.ib()
minimum_valid_until_ts = attr.ib()
request_name = attr.ib()
key_ids = attr.ib(init=False)
key_ready = attr.ib(default=attr.Factory(defer.Deferred))
server_name = attr.ib(type=str)
json_object = attr.ib(type=JsonDict)
minimum_valid_until_ts = attr.ib(type=int)
request_name = attr.ib(type=str)
key_ids = attr.ib(init=False, type=List[str])
key_ready = attr.ib(default=attr.Factory(defer.Deferred), type=defer.Deferred)
def __attrs_post_init__(self):
self.key_ids = signature_ids(self.json_object, self.server_name)
@@ -96,7 +104,9 @@ class KeyLookupError(ValueError):
class Keyring:
def __init__(self, hs, key_fetchers=None):
def __init__(
self, hs: "HomeServer", key_fetchers: "Optional[Iterable[KeyFetcher]]" = None
):
self.clock = hs.get_clock()
if key_fetchers is None:
@@ -112,22 +122,26 @@ class Keyring:
# completes.
#
# These are regular, logcontext-agnostic Deferreds.
self.key_downloads = {}
self.key_downloads = {} # type: Dict[str, defer.Deferred]
def verify_json_for_server(
self, server_name, json_object, validity_time, request_name
):
self,
server_name: str,
json_object: JsonDict,
validity_time: int,
request_name: str,
) -> defer.Deferred:
"""Verify that a JSON object has been signed by a given server
Args:
server_name (str): name of the server which must have signed this object
server_name: name of the server which must have signed this object
json_object (dict): object to be checked
json_object: object to be checked
validity_time (int): timestamp at which we require the signing key to
validity_time: timestamp at which we require the signing key to
be valid. (0 implies we don't care)
request_name (str): an identifier for this json object (eg, an event id)
request_name: an identifier for this json object (eg, an event id)
for logging.
Returns:
@@ -138,12 +152,14 @@ class Keyring:
requests = (req,)
return make_deferred_yieldable(self._verify_objects(requests)[0])
def verify_json_objects_for_server(self, server_and_json):
def verify_json_objects_for_server(
self, server_and_json: Iterable[Tuple[str, dict, int, str]]
) -> List[defer.Deferred]:
"""Bulk verifies signatures of json objects, bulk fetching keys as
necessary.
Args:
server_and_json (iterable[Tuple[str, dict, int, str]):
server_and_json:
Iterable of (server_name, json_object, validity_time, request_name)
tuples.
@@ -164,13 +180,14 @@ class Keyring:
for server_name, json_object, validity_time, request_name in server_and_json
)
def _verify_objects(self, verify_requests):
def _verify_objects(
self, verify_requests: Iterable[VerifyJsonRequest]
) -> List[defer.Deferred]:
"""Does the work of verify_json_[objects_]for_server
Args:
verify_requests (iterable[VerifyJsonRequest]):
Iterable of verification requests.
verify_requests: Iterable of verification requests.
Returns:
List<Deferred[None]>: for each input item, a deferred indicating success
@@ -182,7 +199,7 @@ class Keyring:
key_lookups = []
handle = preserve_fn(_handle_key_deferred)
def process(verify_request):
def process(verify_request: VerifyJsonRequest) -> defer.Deferred:
"""Process an entry in the request list
Adds a key request to key_lookups, and returns a deferred which
@@ -222,18 +239,20 @@ class Keyring:
return results
async def _start_key_lookups(self, verify_requests):
async def _start_key_lookups(
self, verify_requests: List[VerifyJsonRequest]
) -> None:
"""Sets off the key fetches for each verify request
Once each fetch completes, verify_request.key_ready will be resolved.
Args:
verify_requests (List[VerifyJsonRequest]):
verify_requests:
"""
try:
# map from server name to a set of outstanding request ids
server_to_request_ids = {}
server_to_request_ids = {} # type: Dict[str, Set[int]]
for verify_request in verify_requests:
server_name = verify_request.server_name
@@ -275,11 +294,11 @@ class Keyring:
except Exception:
logger.exception("Error starting key lookups")
async def wait_for_previous_lookups(self, server_names) -> None:
async def wait_for_previous_lookups(self, server_names: Iterable[str]) -> None:
"""Waits for any previous key lookups for the given servers to finish.
Args:
server_names (Iterable[str]): list of servers which we want to look up
server_names: list of servers which we want to look up
Returns:
Resolves once all key lookups for the given servers have
@@ -304,7 +323,7 @@ class Keyring:
loop_count += 1
def _get_server_verify_keys(self, verify_requests):
def _get_server_verify_keys(self, verify_requests: List[VerifyJsonRequest]) -> None:
"""Tries to find at least one key for each verify request
For each verify_request, verify_request.key_ready is called back with
@@ -312,7 +331,7 @@ class Keyring:
with a SynapseError if none of the keys are found.
Args:
verify_requests (list[VerifyJsonRequest]): list of verify requests
verify_requests: list of verify requests
"""
remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called}
@@ -366,17 +385,19 @@ class Keyring:
run_in_background(do_iterations)
async def _attempt_key_fetches_with_fetcher(self, fetcher, remaining_requests):
async def _attempt_key_fetches_with_fetcher(
self, fetcher: "KeyFetcher", remaining_requests: Set[VerifyJsonRequest]
):
"""Use a key fetcher to attempt to satisfy some key requests
Args:
fetcher (KeyFetcher): fetcher to use to fetch the keys
remaining_requests (set[VerifyJsonRequest]): outstanding key requests.
fetcher: fetcher to use to fetch the keys
remaining_requests: outstanding key requests.
Any successfully-completed requests will be removed from the list.
"""
# dict[str, dict[str, int]]: keys to fetch.
# The keys to fetch.
# server_name -> key_id -> min_valid_ts
missing_keys = defaultdict(dict)
missing_keys = defaultdict(dict) # type: Dict[str, Dict[str, int]]
for verify_request in remaining_requests:
# any completed requests should already have been removed
@@ -438,16 +459,18 @@ class Keyring:
remaining_requests.difference_update(completed)
class KeyFetcher:
async def get_keys(self, keys_to_fetch):
class KeyFetcher(metaclass=abc.ABCMeta):
@abc.abstractmethod
async def get_keys(
self, keys_to_fetch: Dict[str, Dict[str, int]]
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""
Args:
keys_to_fetch (dict[str, dict[str, int]]):
keys_to_fetch:
the keys to be fetched. server_name -> key_id -> min_valid_ts
Returns:
Deferred[dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]]:
map from server_name -> key_id -> FetchKeyResult
Map from server_name -> key_id -> FetchKeyResult
"""
raise NotImplementedError
@@ -455,31 +478,35 @@ class KeyFetcher:
class StoreKeyFetcher(KeyFetcher):
"""KeyFetcher impl which fetches keys from our data store"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
async def get_keys(self, keys_to_fetch):
async def get_keys(
self, keys_to_fetch: Dict[str, Dict[str, int]]
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""see KeyFetcher.get_keys"""
keys_to_fetch = (
key_ids_to_fetch = (
(server_name, key_id)
for server_name, keys_for_server in keys_to_fetch.items()
for key_id in keys_for_server.keys()
)
res = await self.store.get_server_verify_keys(keys_to_fetch)
keys = {}
res = await self.store.get_server_verify_keys(key_ids_to_fetch)
keys = {} # type: Dict[str, Dict[str, FetchKeyResult]]
for (server_name, key_id), key in res.items():
keys.setdefault(server_name, {})[key_id] = key
return keys
class BaseV2KeyFetcher:
def __init__(self, hs):
class BaseV2KeyFetcher(KeyFetcher):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.config = hs.get_config()
async def process_v2_response(self, from_server, response_json, time_added_ms):
async def process_v2_response(
self, from_server: str, response_json: JsonDict, time_added_ms: int
) -> Dict[str, FetchKeyResult]:
"""Parse a 'Server Keys' structure from the result of a /key request
This is used to parse either the entirety of the response from
@@ -493,16 +520,16 @@ class BaseV2KeyFetcher:
to /_matrix/key/v2/query.
Args:
from_server (str): the name of the server producing this result: either
from_server: the name of the server producing this result: either
the origin server for a /_matrix/key/v2/server request, or the notary
for a /_matrix/key/v2/query.
response_json (dict): the json-decoded Server Keys response object
response_json: the json-decoded Server Keys response object
time_added_ms (int): the timestamp to record in server_keys_json
time_added_ms: the timestamp to record in server_keys_json
Returns:
Deferred[dict[str, FetchKeyResult]]: map from key_id to result object
Map from key_id to result object
"""
ts_valid_until_ms = response_json["valid_until_ts"]
@@ -575,21 +602,22 @@ class BaseV2KeyFetcher:
class PerspectivesKeyFetcher(BaseV2KeyFetcher):
"""KeyFetcher impl which fetches keys from the "perspectives" servers"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_federation_http_client()
self.key_servers = self.config.key_servers
async def get_keys(self, keys_to_fetch):
async def get_keys(
self, keys_to_fetch: Dict[str, Dict[str, int]]
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""see KeyFetcher.get_keys"""
async def get_key(key_server):
async def get_key(key_server: TrustedKeyServer) -> Dict:
try:
result = await self.get_server_verify_key_v2_indirect(
return await self.get_server_verify_key_v2_indirect(
keys_to_fetch, key_server
)
return result
except KeyLookupError as e:
logger.warning(
"Key lookup failed from %r: %s", key_server.server_name, e
@@ -611,25 +639,25 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
).addErrback(unwrapFirstError)
)
union_of_keys = {}
union_of_keys = {} # type: Dict[str, Dict[str, FetchKeyResult]]
for result in results:
for server_name, keys in result.items():
union_of_keys.setdefault(server_name, {}).update(keys)
return union_of_keys
async def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
async def get_server_verify_key_v2_indirect(
self, keys_to_fetch: Dict[str, Dict[str, int]], key_server: TrustedKeyServer
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""
Args:
keys_to_fetch (dict[str, dict[str, int]]):
keys_to_fetch:
the keys to be fetched. server_name -> key_id -> min_valid_ts
key_server (synapse.config.key.TrustedKeyServer): notary server to query for
the keys
key_server: notary server to query for the keys
Returns:
dict[str, dict[str, synapse.storage.keys.FetchKeyResult]]: map
from server_name -> key_id -> FetchKeyResult
Map from server_name -> key_id -> FetchKeyResult
Raises:
KeyLookupError if there was an error processing the entire response from
@@ -662,11 +690,12 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))
keys = {}
added_keys = []
keys = {} # type: Dict[str, Dict[str, FetchKeyResult]]
added_keys = [] # type: List[Tuple[str, str, FetchKeyResult]]
time_now_ms = self.clock.time_msec()
assert isinstance(query_response, dict)
for response in query_response["server_keys"]:
# do this first, so that we can give useful errors thereafter
server_name = response.get("server_name")
@@ -704,14 +733,15 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
return keys
def _validate_perspectives_response(self, key_server, response):
def _validate_perspectives_response(
self, key_server: TrustedKeyServer, response: JsonDict
) -> None:
"""Optionally check the signature on the result of a /key/query request
Args:
key_server (synapse.config.key.TrustedKeyServer): the notary server that
produced this result
key_server: the notary server that produced this result
response (dict): the json-decoded Server Keys response object
response: the json-decoded Server Keys response object
"""
perspective_name = key_server.server_name
perspective_keys = key_server.verify_keys
@@ -745,25 +775,26 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
class ServerKeyFetcher(BaseV2KeyFetcher):
"""KeyFetcher impl which fetches keys from the origin servers"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_federation_http_client()
async def get_keys(self, keys_to_fetch):
async def get_keys(
self, keys_to_fetch: Dict[str, Dict[str, int]]
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""
Args:
keys_to_fetch (dict[str, iterable[str]]):
keys_to_fetch:
the keys to be fetched. server_name -> key_ids
Returns:
dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]:
map from server_name -> key_id -> FetchKeyResult
Map from server_name -> key_id -> FetchKeyResult
"""
results = {}
async def get_key(key_to_fetch_item):
async def get_key(key_to_fetch_item: Tuple[str, Dict[str, int]]) -> None:
server_name, key_ids = key_to_fetch_item
try:
keys = await self.get_server_verify_key_v2_direct(server_name, key_ids)
@@ -778,20 +809,22 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
await yieldable_gather_results(get_key, keys_to_fetch.items())
return results
async def get_server_verify_key_v2_direct(self, server_name, key_ids):
async def get_server_verify_key_v2_direct(
self, server_name: str, key_ids: Iterable[str]
) -> Dict[str, FetchKeyResult]:
"""
Args:
server_name (str):
key_ids (iterable[str]):
server_name:
key_ids:
Returns:
dict[str, FetchKeyResult]: map from key ID to lookup result
Map from key ID to lookup result
Raises:
KeyLookupError if there was a problem making the lookup
"""
keys = {} # type: dict[str, FetchKeyResult]
keys = {} # type: Dict[str, FetchKeyResult]
for requested_key_id in key_ids:
# we may have found this key as a side-effect of asking for another.
@@ -825,6 +858,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))
assert isinstance(response, dict)
if response["server_name"] != server_name:
raise KeyLookupError(
"Expected a response for server %r not %r"
@@ -846,11 +880,11 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
return keys
async def _handle_key_deferred(verify_request) -> None:
async def _handle_key_deferred(verify_request: VerifyJsonRequest) -> None:
"""Waits for the key to become available, and then performs a verification
Args:
verify_request (VerifyJsonRequest):
verify_request:
Raises:
SynapseError if there was a problem performing the verification

View File

@@ -15,10 +15,11 @@
# limitations under the License.
import inspect
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import Collection
from synapse.util.async_helpers import maybe_awaitable
if TYPE_CHECKING:
import synapse.events
@@ -39,7 +40,9 @@ class SpamChecker:
else:
self.spam_checkers.append(module(config=config))
def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool:
async def check_event_for_spam(
self, event: "synapse.events.EventBase"
) -> Union[bool, str]:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
@@ -50,15 +53,16 @@ class SpamChecker:
event: the event to be checked
Returns:
True if the event is spammy.
True or a string if the event is spammy. If a string is returned it
will be used as the error message returned to the user.
"""
for spam_checker in self.spam_checkers:
if spam_checker.check_event_for_spam(event):
if await maybe_awaitable(spam_checker.check_event_for_spam(event)):
return True
return False
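The hooks above now go through synapse.util.async_helpers.maybe_awaitable so that third-party spam checkers written with either plain or async methods keep working. A minimal sketch of the idea, not the library's exact implementation:

import inspect
from typing import Any

async def maybe_awaitable(value: Any) -> Any:
    # Await the callback's result if it returned an awaitable (i.e. the
    # checker was an async def); otherwise pass the value straight through.
    if inspect.isawaitable(value):
        return await value
    return value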
def user_may_invite(
async def user_may_invite(
self, inviter_userid: str, invitee_userid: str, room_id: str
) -> bool:
"""Checks if a given user may send an invite
@@ -75,14 +79,18 @@ class SpamChecker:
"""
for spam_checker in self.spam_checkers:
if (
spam_checker.user_may_invite(inviter_userid, invitee_userid, room_id)
await maybe_awaitable(
spam_checker.user_may_invite(
inviter_userid, invitee_userid, room_id
)
)
is False
):
return False
return True
def user_may_create_room(self, userid: str) -> bool:
async def user_may_create_room(self, userid: str) -> bool:
"""Checks if a given user may create a room
If this method returns false, the creation request will be rejected.
@@ -94,12 +102,15 @@ class SpamChecker:
True if the user may create a room, otherwise False
"""
for spam_checker in self.spam_checkers:
if spam_checker.user_may_create_room(userid) is False:
if (
await maybe_awaitable(spam_checker.user_may_create_room(userid))
is False
):
return False
return True
def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
async def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
"""Checks if a given user may create a room alias
If this method returns false, the association request will be rejected.
@@ -112,12 +123,17 @@ class SpamChecker:
True if the user may create a room alias, otherwise False
"""
for spam_checker in self.spam_checkers:
if spam_checker.user_may_create_room_alias(userid, room_alias) is False:
if (
await maybe_awaitable(
spam_checker.user_may_create_room_alias(userid, room_alias)
)
is False
):
return False
return True
def user_may_publish_room(self, userid: str, room_id: str) -> bool:
async def user_may_publish_room(self, userid: str, room_id: str) -> bool:
"""Checks if a given user may publish a room to the directory
If this method returns false, the publish request will be rejected.
@@ -130,12 +146,17 @@ class SpamChecker:
True if the user may publish the room, otherwise False
"""
for spam_checker in self.spam_checkers:
if spam_checker.user_may_publish_room(userid, room_id) is False:
if (
await maybe_awaitable(
spam_checker.user_may_publish_room(userid, room_id)
)
is False
):
return False
return True
def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
async def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
"""Checks if a user ID or display name are considered "spammy" by this server.
If the server considers a username spammy, then it will not be included in
@@ -157,12 +178,12 @@ class SpamChecker:
if checker:
# Make a copy of the user profile object to ensure the spam checker
# cannot modify it.
if checker(user_profile.copy()):
if await maybe_awaitable(checker(user_profile.copy())):
return True
return False
def check_registration_for_spam(
async def check_registration_for_spam(
self,
email_threepid: Optional[dict],
username: Optional[str],
@@ -185,7 +206,9 @@ class SpamChecker:
# spam checker
checker = getattr(spam_checker, "check_registration_for_spam", None)
if checker:
behaviour = checker(email_threepid, username, request_info)
behaviour = await maybe_awaitable(
checker(email_threepid, username, request_info)
)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
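For illustration, a hypothetical spam-checker module implementing this now-awaitable hook; because the call goes through maybe_awaitable, a synchronous definition would also be accepted. The class name and logic are assumptions, not part of this change:

from synapse.spam_checker_api import RegistrationBehaviour

class ExampleSpamChecker:
    async def check_registration_for_spam(self, email_threepid, username, request_info):
        # Deny registrations whose requested username looks spammy.
        if username is not None and "spam" in username:
            return RegistrationBehaviour.DENY
        return RegistrationBehaviour.ALLOW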

View File

@@ -78,6 +78,7 @@ class FederationBase:
ctx = current_context()
@defer.inlineCallbacks
def callback(_, pdu: EventBase):
with PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
@@ -105,7 +106,11 @@ class FederationBase:
)
return redacted_event
if self.spam_checker.check_event_for_spam(pdu):
result = yield defer.ensureDeferred(
self.spam_checker.check_event_for_spam(pdu)
)
if result:
logger.warning(
"Event contains spam, redacting %s: %s",
pdu.event_id,

View File

@@ -144,7 +144,7 @@ class Authenticator:
):
raise FederationDeniedError(origin)
if not json_request["signatures"]:
if origin is None or not json_request["signatures"]:
raise NoAuthenticationError(
401, "Missing Authorization headers", Codes.UNAUTHORIZED
)

View File

@@ -13,27 +13,31 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import logging
from typing import List
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set
from synapse.api.constants import Membership
from synapse.events import FrozenEvent
from synapse.types import RoomStreamToken, StateMap
from synapse.events import EventBase
from synapse.types import JsonDict, RoomStreamToken, StateMap, UserID
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class AdminHandler(BaseHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.storage = hs.get_storage()
self.state_store = self.storage.state
async def get_whois(self, user):
async def get_whois(self, user: UserID) -> JsonDict:
connections = []
sessions = await self.store.get_user_ip_and_agents(user)
@@ -53,7 +57,7 @@ class AdminHandler(BaseHandler):
return ret
async def get_user(self, user):
async def get_user(self, user: UserID) -> Optional[JsonDict]:
"""Function to get user details"""
ret = await self.store.get_user_by_id(user.to_string())
if ret:
@@ -64,12 +68,12 @@ class AdminHandler(BaseHandler):
ret["threepids"] = threepids
return ret
async def export_user_data(self, user_id, writer):
async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:
"""Write all data we have on the user to the given writer.
Args:
user_id (str)
writer (ExfiltrationWriter)
user_id: The user ID to fetch data of.
writer: The writer to write to.
Returns:
Resolves when all data for a user has been written.
@@ -128,7 +132,8 @@ class AdminHandler(BaseHandler):
from_key = RoomStreamToken(0, 0)
to_key = RoomStreamToken(None, stream_ordering)
written_events = set() # Events that we've processed in this room
# Events that we've processed in this room
written_events = set() # type: Set[str]
# We need to track gaps in the events stream so that we can then
# write out the state at those events. We do this by keeping track
@@ -140,8 +145,8 @@ class AdminHandler(BaseHandler):
# The reverse mapping to above, i.e. map from unseen event to events
# that have the unseen event in their prev_events, i.e. the unseen
# events "children". dict[str, set[str]]
unseen_to_child_events = {}
# events "children".
unseen_to_child_events = {} # type: Dict[str, Set[str]]
# We fetch events in the room the user could see by fetching *all*
# events that we have and then filtering, this isn't the most
@@ -197,38 +202,46 @@ class AdminHandler(BaseHandler):
return writer.finished()
class ExfiltrationWriter:
class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id: str, events: List[FrozenEvent]):
@abc.abstractmethod
def write_events(self, room_id: str, events: List[EventBase]) -> None:
"""Write a batch of events for a room.
"""
pass
raise NotImplementedError()
def write_state(self, room_id: str, event_id: str, state: StateMap[FrozenEvent]):
@abc.abstractmethod
def write_state(
self, room_id: str, event_id: str, state: StateMap[EventBase]
) -> None:
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
"""
pass
raise NotImplementedError()
def write_invite(self, room_id: str, event: FrozenEvent, state: StateMap[dict]):
@abc.abstractmethod
def write_invite(
self, room_id: str, event: EventBase, state: StateMap[dict]
) -> None:
"""Write an invite for the room, with associated invite state.
Args:
room_id
event
state: A subset of the state at the
invite, with a subset of the event keys (type, state_key
content and sender)
room_id: The room ID the invite is for.
event: The invite event.
state: A subset of the state at the invite, with a subset of the
event keys (type, state_key content and sender).
"""
raise NotImplementedError()
def finished(self):
@abc.abstractmethod
def finished(self) -> Any:
"""Called when all data has successfully been exported and written.
This function's return value is passed to the caller of
`export_user_data`.
"""
pass
raise NotImplementedError()
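A hypothetical minimal implementation of the interface, to show what concrete writers must now provide; the JSON-lines layout here is an assumption for illustration only:

import json

class JsonLinesWriter(ExfiltrationWriter):
    def __init__(self, path: str):
        self._file = open(path, "w")

    def write_events(self, room_id, events):
        for event in events:
            self._file.write(json.dumps({"room_id": room_id, "event": event.get_dict()}) + "\n")

    def write_state(self, room_id, event_id, state):
        self._file.write(json.dumps({"state_at": event_id, "num_events": len(state)}) + "\n")

    def write_invite(self, room_id, event, state):
        self._file.write(json.dumps({"invite_room": room_id}) + "\n")

    def finished(self):
        # This return value is handed back by export_user_data.
        self._file.close()
        return None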

View File

@@ -14,7 +14,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import logging
import time
import unicodedata
@@ -22,6 +21,7 @@ import urllib.parse
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Dict,
Iterable,
@@ -58,6 +58,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import ModuleApi
from synapse.types import JsonDict, Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.threepids import canonicalise_email
@@ -197,27 +198,25 @@ class AuthHandler(BaseHandler):
self._password_enabled = hs.config.password_enabled
self._password_localdb_enabled = hs.config.password_localdb_enabled
# we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first
# type in the list. (NB that the spec doesn't require us to do so and
# clients which favour types that they don't understand over those that
# they do are technically broken)
# start out by assuming PASSWORD is enabled; we will remove it later if not.
login_types = []
login_types = set()
if self._password_localdb_enabled:
login_types.append(LoginType.PASSWORD)
login_types.add(LoginType.PASSWORD)
for provider in self.password_providers:
if hasattr(provider, "get_supported_login_types"):
for t in provider.get_supported_login_types().keys():
if t not in login_types:
login_types.append(t)
login_types.update(provider.get_supported_login_types().keys())
if not self._password_enabled:
login_types.remove(LoginType.PASSWORD)
login_types.discard(LoginType.PASSWORD)
self._supported_login_types = login_types
# Some clients just pick the first type in the list. In this case, we want
# them to use PASSWORD (rather than token or whatever), so we want to make sure
# that comes first, where it's present.
self._supported_login_types = []
if LoginType.PASSWORD in login_types:
self._supported_login_types.append(LoginType.PASSWORD)
login_types.remove(LoginType.PASSWORD)
self._supported_login_types.extend(login_types)
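A standalone sketch of the reordering above, with illustrative login-type strings: deduplicate with a set, then rebuild the list with password first so clients that naively pick the first flow choose password login.

login_types = {"m.login.password", "m.login.sso", "m.login.token"}
supported = []
if "m.login.password" in login_types:
    supported.append("m.login.password")
    login_types.remove("m.login.password")
supported.extend(login_types)
# supported[0] == "m.login.password"; the order of the rest is unspecified.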
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
@@ -227,6 +226,9 @@ class AuthHandler(BaseHandler):
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
)
# The amount of time (in ms) to keep a UI auth session active.
self._ui_auth_session_timeout = hs.config.ui_auth_session_timeout
# Ratelimiter for failed /login attempts
self._failed_login_attempts_ratelimiter = Ratelimiter(
clock=hs.get_clock(),
@@ -284,7 +286,7 @@ class AuthHandler(BaseHandler):
request_body: Dict[str, Any],
clientip: str,
description: str,
) -> Tuple[dict, str]:
) -> Tuple[dict, Optional[str]]:
"""
Checks that the user is who they claim to be, via a UI auth.
@@ -311,7 +313,8 @@ class AuthHandler(BaseHandler):
have been given only in a previous call).
'session_id' is the ID of this session, either passed in by the
client or assigned by this call
client or assigned by this call. This is None if UI auth was
skipped (by re-using a previous validation).
Raises:
InteractiveAuthIncompleteError if the client has not yet completed
@@ -325,6 +328,16 @@ class AuthHandler(BaseHandler):
"""
if self._ui_auth_session_timeout:
last_validated = await self.store.get_access_token_last_validated(
requester.access_token_id
)
if self.clock.time_msec() - last_validated < self._ui_auth_session_timeout:
# Return the input parameters, minus the auth key, which matches
# the logic in check_ui_auth.
request_body.pop("auth", None)
return request_body, None
user_id = requester.user.to_string()
# Check if we should be ratelimited due to too many previous failed attempts
@@ -360,6 +373,9 @@ class AuthHandler(BaseHandler):
if user_id != requester.user.to_string():
raise AuthError(403, "Invalid auth")
# Note that the access token has been validated.
await self.store.update_access_token_last_validated(requester.access_token_id)
return params, session_id
async def _get_available_ui_auth_types(self, user: UserID) -> Iterable[str]:
@@ -453,13 +469,10 @@ class AuthHandler(BaseHandler):
all the stages in any of the permitted flows.
"""
authdict = None
sid = None # type: Optional[str]
if clientdict and "auth" in clientdict:
authdict = clientdict["auth"]
del clientdict["auth"]
if "session" in authdict:
sid = authdict["session"]
authdict = clientdict.pop("auth", {})
if "session" in authdict:
sid = authdict["session"]
# Convert the URI and method to strings.
uri = request.uri.decode("utf-8")
@@ -564,6 +577,8 @@ class AuthHandler(BaseHandler):
creds = await self.store.get_completed_ui_auth_stages(session.session_id)
for f in flows:
# If all the required credentials have been supplied, the user has
# successfully completed the UI auth process!
if len(set(f) - set(creds)) == 0:
# it's very useful to know what args are stored, but this can
# include the password in the case of registering, so only log
@@ -739,6 +754,7 @@ class AuthHandler(BaseHandler):
device_id: Optional[str],
valid_until_ms: Optional[int],
puppets_user_id: Optional[str] = None,
is_appservice_ghost: bool = False,
) -> str:
"""
Creates a new access token for the user with the given user ID.
@@ -755,6 +771,7 @@ class AuthHandler(BaseHandler):
we should always have a device ID)
valid_until_ms: when the token is valid until. None for
no expiry.
is_appservice_ghost: Whether the user is an application service ghost user
Returns:
The access token for the user's session.
Raises:
@@ -775,7 +792,11 @@ class AuthHandler(BaseHandler):
"Logging in user %s on device %s%s", user_id, device_id, fmt_expiry
)
await self.auth.check_auth_blocking(user_id)
if (
not is_appservice_ghost
or self.hs.config.appservice.track_appservice_user_ips
):
await self.auth.check_auth_blocking(user_id)
access_token = self.macaroon_gen.generate_access_token(user_id)
await self.store.add_access_token_to_user(
@@ -861,7 +882,7 @@ class AuthHandler(BaseHandler):
async def validate_login(
self, login_submission: Dict[str, Any], ratelimit: bool = False,
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
"""Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate auth types which don't
@@ -1004,7 +1025,7 @@ class AuthHandler(BaseHandler):
async def _validate_userid_login(
self, username: str, login_submission: Dict[str, Any],
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
) -> Tuple[str, Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
"""Helper for validate_login
Handles login, once we've mapped 3pids onto userids
@@ -1082,7 +1103,7 @@ class AuthHandler(BaseHandler):
async def check_password_provider_3pid(
self, medium: str, address: str, password: str
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], None]]]:
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], Awaitable[None]]]]:
"""Check if a password provider is able to validate a thirdparty login
Args:
@@ -1638,6 +1659,6 @@ class PasswordProvider:
# This might return an awaitable, if it does block the log out
# until it completes.
result = g(user_id=user_id, device_id=device_id, access_token=access_token,)
if inspect.isawaitable(result):
await result
await maybe_awaitable(
g(user_id=user_id, device_id=device_id, access_token=access_token,)
)

View File

@@ -13,13 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
from typing import TYPE_CHECKING, Dict, Optional, Tuple
import urllib.parse
from typing import TYPE_CHECKING, Dict, Optional
from xml.etree import ElementTree as ET
import attr
from twisted.web.client import PartialDownloadError
from synapse.api.errors import Codes, LoginError
from synapse.api.errors import HttpResponseException
from synapse.handlers.sso import MappingException, UserAttributes
from synapse.http.site import SynapseRequest
from synapse.types import UserID, map_username_to_mxid_localpart
@@ -29,6 +32,26 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
class CasError(Exception):
"""Used to catch errors when validating the CAS ticket.
"""
def __init__(self, error, error_description=None):
self.error = error
self.error_description = error_description
def __str__(self):
if self.error_description:
return "{}: {}".format(self.error, self.error_description)
return self.error
@attr.s(slots=True, frozen=True)
class CasResponse:
username = attr.ib(type=str)
attributes = attr.ib(type=Dict[str, Optional[str]])
class CasHandler:
"""
Utility class to handle the response from a CAS SSO service.
@@ -40,6 +63,7 @@ class CasHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self._hostname = hs.hostname
self._store = hs.get_datastore()
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
@@ -50,6 +74,11 @@ class CasHandler:
self._http_client = hs.get_proxied_http_client()
# identifier for the external_ids table
self._auth_provider_id = "cas"
self._sso_handler = hs.get_sso_handler()
def _build_service_param(self, args: Dict[str, str]) -> str:
"""
Generates a value to use as the "service" parameter when redirecting or
@@ -69,14 +98,20 @@ class CasHandler:
async def _validate_ticket(
self, ticket: str, service_args: Dict[str, str]
) -> Tuple[str, Optional[str]]:
) -> CasResponse:
"""
Validate a CAS ticket with the server, parse the response, and return the user and display name.
Validate a CAS ticket with the server, and return the parsed response.
Args:
ticket: The CAS ticket from the client.
service_args: Additional arguments to include in the service URL.
Should be the same as those passed to `get_redirect_url`.
Raises:
CasError: If there's an error parsing the CAS response.
Returns:
The parsed CAS response.
"""
uri = self._cas_server_url + "/proxyValidate"
args = {
@@ -89,66 +124,65 @@ class CasHandler:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
body = pde.response
except HttpResponseException as e:
description = (
'Authorization server responded with a "{status}" error '
"while exchanging the authorization code."
).format(status=e.code)
raise CasError("server_error", description) from e
user, attributes = self._parse_cas_response(body)
displayname = attributes.pop(self._cas_displayname_attribute, None)
return self._parse_cas_response(body)
for required_attribute, required_value in self._cas_required_attributes.items():
# If required attribute was not in CAS Response - Forbidden
if required_attribute not in attributes:
raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED)
# Also need to check value
if required_value is not None:
actual_value = attributes[required_attribute]
# If required attribute value does not match expected - Forbidden
if required_value != actual_value:
raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED)
return user, displayname
def _parse_cas_response(
self, cas_response_body: bytes
) -> Tuple[str, Dict[str, Optional[str]]]:
def _parse_cas_response(self, cas_response_body: bytes) -> CasResponse:
"""
Retrieve the user and other parameters from the CAS response.
Args:
cas_response_body: The response from the CAS query.
Raises:
CasError: If there's an error parsing the CAS response.
Returns:
A tuple of the user and a mapping of other attributes.
The parsed CAS response.
"""
# Ensure the response is valid.
root = ET.fromstring(cas_response_body)
if not root.tag.endswith("serviceResponse"):
raise CasError(
"missing_service_response",
"root of CAS response is not serviceResponse",
)
success = root[0].tag.endswith("authenticationSuccess")
if not success:
raise CasError("unsucessful_response", "Unsuccessful CAS response")
# Iterate through the nodes and pull out the user and any extra attributes.
user = None
attributes = {}
try:
root = ET.fromstring(cas_response_body)
if not root.tag.endswith("serviceResponse"):
raise Exception("root of CAS response is not serviceResponse")
success = root[0].tag.endswith("authenticationSuccess")
for child in root[0]:
if child.tag.endswith("user"):
user = child.text
if child.tag.endswith("attributes"):
for attribute in child:
# ElementTree library expands the namespace in
# attribute tags to the full URL of the namespace.
# We don't care about namespace here and it will always
# be encased in curly braces, so we remove them.
tag = attribute.tag
if "}" in tag:
tag = tag.split("}")[1]
attributes[tag] = attribute.text
if user is None:
raise Exception("CAS response does not contain user")
except Exception:
logger.exception("Error parsing CAS response")
raise LoginError(401, "Invalid CAS response", errcode=Codes.UNAUTHORIZED)
if not success:
raise LoginError(
401, "Unsuccessful CAS response", errcode=Codes.UNAUTHORIZED
)
return user, attributes
for child in root[0]:
if child.tag.endswith("user"):
user = child.text
if child.tag.endswith("attributes"):
for attribute in child:
# ElementTree library expands the namespace in
# attribute tags to the full URL of the namespace.
# We don't care about namespace here and it will always
# be encased in curly braces, so we remove them.
tag = attribute.tag
if "}" in tag:
tag = tag.split("}")[1]
attributes[tag] = attribute.text
# Ensure a user was found.
if user is None:
raise CasError("no_user", "CAS response does not contain user")
return CasResponse(user, attributes)
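A self-contained illustration of the namespace handling described in the comments above; the XML below is an invented example of a CAS success response:

from xml.etree import ElementTree as ET

body = (
    b'<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">'
    b"<cas:authenticationSuccess><cas:user>alice</cas:user>"
    b"</cas:authenticationSuccess></cas:serviceResponse>"
)
root = ET.fromstring(body)
for child in root[0]:
    # ElementTree expands the prefix to "{http://...}user", so keep only
    # the local part after the closing brace.
    tag = child.tag.split("}")[1] if "}" in child.tag else child.tag
    print(tag, child.text)  # -> user alice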
def get_redirect_url(self, service_args: Dict[str, str]) -> str:
"""
@@ -201,59 +235,150 @@ class CasHandler:
args["redirectUrl"] = client_redirect_url
if session:
args["session"] = session
username, user_display_name = await self._validate_ticket(ticket, args)
# Pull out the user-agent and IP from the request.
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
try:
cas_response = await self._validate_ticket(ticket, args)
except CasError as e:
logger.exception("Could not validate ticket")
self._sso_handler.render_error(request, e.error, e.error_description, 401)
return
# Get the matrix ID from the CAS username.
user_id = await self._map_cas_user_to_matrix_user(
username, user_display_name, user_agent, ip_address
await self._handle_cas_response(
request, cas_response, client_redirect_url, session
)
if session:
await self._auth_handler.complete_sso_ui_auth(
user_id, session, request,
)
else:
# If this is not a UI auth request then there must be a redirect URL.
assert client_redirect_url
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url
)
async def _map_cas_user_to_matrix_user(
async def _handle_cas_response(
self,
remote_user_id: str,
display_name: Optional[str],
user_agent: str,
ip_address: str,
) -> str:
"""
Given a CAS username, retrieve the user ID for it and possibly register the user.
request: SynapseRequest,
cas_response: CasResponse,
client_redirect_url: Optional[str],
session: Optional[str],
) -> None:
"""Handle a CAS response to a ticket request.
Assumes that the response has been validated. Maps the user onto an MXID,
registering them if necessary, and returns a response to the browser.
Args:
remote_user_id: The username from the CAS response.
display_name: The display name from the CAS response.
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
request: the incoming request from the browser. We'll respond to it with an
HTML page or a redirect
Returns:
The user ID associated with this response.
cas_response: The parsed CAS response.
client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given.
This should be the same as the redirectUrl from the original `/login/sso/redirect` request.
session: The session parameter from the `/cas/ticket` HTTP request, if given.
This should be the UI Auth session id.
"""
localpart = map_username_to_mxid_localpart(remote_user_id)
user_id = UserID(localpart, self._hostname).to_string()
registered_user_id = await self._auth_handler.check_user_exists(user_id)
# If the user does not exist, register it.
if not registered_user_id:
registered_user_id = await self._registration_handler.register_user(
localpart=localpart,
default_display_name=display_name,
user_agent_ips=[(user_agent, ip_address)],
# first check if we're doing a UIA
if session:
return await self._sso_handler.complete_sso_ui_auth_request(
self._auth_provider_id, cas_response.username, session, request,
)
return registered_user_id
# otherwise, we're handling a login request.
# Ensure that the attributes of the logged in user meet the required
# attributes.
for required_attribute, required_value in self._cas_required_attributes.items():
# If required attribute was not in CAS Response - Forbidden
if required_attribute not in cas_response.attributes:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return
# Also need to check value
if required_value is not None:
actual_value = cas_response.attributes[required_attribute]
# If required attribute value does not match expected - Forbidden
if required_value != actual_value:
self._sso_handler.render_error(
request,
"unauthorised",
"You are not authorised to log in here.",
401,
)
return
# Call the mapper to register/login the user
# If this is not a UI auth request then there must be a redirect URL.
assert client_redirect_url is not None
try:
await self._complete_cas_login(cas_response, request, client_redirect_url)
except MappingException as e:
logger.exception("Could not map user")
self._sso_handler.render_error(request, "mapping_error", str(e))
async def _complete_cas_login(
self,
cas_response: CasResponse,
request: SynapseRequest,
client_redirect_url: str,
) -> None:
"""
Given a CAS response, complete the login flow
Retrieves the remote user ID, registers the user if necessary, and serves
a redirect back to the client with a login-token.
Args:
cas_response: The parsed CAS response.
request: The request to respond to
client_redirect_url: The redirect URL passed in by the client.
Raises:
MappingException if there was a problem mapping the response to a user.
RedirectException: some mapping providers may raise this if they need
to redirect to an interstitial page.
"""
# Note that CAS does not support a mapping provider, so the logic is hard-coded.
localpart = map_username_to_mxid_localpart(cas_response.username)
async def cas_response_to_user_attributes(failures: int) -> UserAttributes:
"""
Map from CAS attributes to user attributes.
"""
# Due to the grandfathering logic matching any previously registered
# mxids it isn't expected for there to be any failures.
if failures:
raise RuntimeError("CAS is not expected to de-duplicate Matrix IDs")
display_name = cas_response.attributes.get(
self._cas_displayname_attribute, None
)
return UserAttributes(localpart=localpart, display_name=display_name)
async def grandfather_existing_users() -> Optional[str]:
# Since CAS did not always use the user_external_ids table, always
# attempt to map to existing users.
user_id = UserID(localpart, self._hostname).to_string()
logger.debug(
"Looking for existing account based on mapped %s", user_id,
)
users = await self._store.get_users_by_id_case_insensitive(user_id)
if users:
registered_user_id = list(users.keys())[0]
logger.info("Grandfathering mapping to %s", registered_user_id)
return registered_user_id
return None
await self._sso_handler.complete_sso_login_request(
self._auth_provider_id,
cas_response.username,
request,
client_redirect_url,
cas_response_to_user_attributes,
grandfather_existing_users,
)

View File

@@ -133,7 +133,9 @@ class DirectoryHandler(BaseHandler):
403, "You must be in the room to create an alias for it"
)
if not self.spam_checker.user_may_create_room_alias(user_id, room_alias):
if not await self.spam_checker.user_may_create_room_alias(
user_id, room_alias
):
raise AuthError(403, "This user is not permitted to create this alias")
if not self.config.is_alias_creation_allowed(
@@ -409,7 +411,7 @@ class DirectoryHandler(BaseHandler):
"""
user_id = requester.user.to_string()
if not self.spam_checker.user_may_publish_room(user_id, room_id):
if not await self.spam_checker.user_may_publish_room(user_id, room_id):
raise AuthError(
403, "This user is not permitted to publish rooms to the room list"
)

View File

@@ -1593,7 +1593,7 @@ class FederationHandler(BaseHandler):
if self.hs.config.block_non_admin_invites:
raise SynapseError(403, "This server does not accept room invites")
if not self.spam_checker.user_may_invite(
if not await self.spam_checker.user_may_invite(
event.sender, event.state_key, event.room_id
):
raise SynapseError(

View File

@@ -29,7 +29,7 @@ def _create_rerouter(func_name):
async def f(self, group_id, *args, **kwargs):
if not GroupID.is_valid(group_id):
raise SynapseError(400, "%s was not legal group ID" % (group_id,))
raise SynapseError(400, "%s is not a legal group ID" % (group_id,))
if self.is_mine_id(group_id):
return await getattr(self.groups_server_handler, func_name)(

View File

@@ -55,6 +55,8 @@ class IdentityHandler(BaseHandler):
self.federation_http_client = hs.get_federation_http_client()
self.hs = hs
self._web_client_location = hs.config.invite_client_location
async def threepid_from_creds(
self, id_server: str, creds: Dict[str, str]
) -> Optional[JsonDict]:
@@ -803,6 +805,9 @@ class IdentityHandler(BaseHandler):
"sender_display_name": inviter_display_name,
"sender_avatar_url": inviter_avatar_url,
}
# If a custom web client location is available, include it in the request.
if self._web_client_location:
invite_config["org.matrix.web_client_location"] = self._web_client_location
# Add the identity service access token to the JSON body and use the v2
# Identity Service endpoints if id_access_token is present

View File

@@ -323,9 +323,7 @@ class InitialSyncHandler(BaseHandler):
member_event_id: str,
is_peeking: bool,
) -> JsonDict:
room_state = await self.state_store.get_state_for_events([member_event_id])
room_state = room_state[member_event_id]
room_state = await self.state_store.get_state_for_event(member_event_id)
limit = pagin_config.limit if pagin_config else None
if limit is None:

View File

@@ -744,7 +744,7 @@ class EventCreationHandler:
event.sender,
)
spam_error = self.spam_checker.check_event_for_spam(event)
spam_error = await self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, str):
spam_error = "Spam is not permitted here"
@@ -1261,7 +1261,7 @@ class EventCreationHandler:
event, context = await self.create_event(
requester,
{
"type": "org.matrix.dummy_event",
"type": EventTypes.Dummy,
"content": {},
"room_id": room_id,
"sender": user_id,

View File

@@ -115,8 +115,6 @@ class OidcHandler(BaseHandler):
self._allow_existing_users = hs.config.oidc_allow_existing_users # type: bool
self._http_client = hs.get_proxied_http_client()
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
self._server_name = hs.config.server_name # type: str
self._macaroon_secret_key = hs.config.macaroon_secret_key
@@ -689,33 +687,14 @@ class OidcHandler(BaseHandler):
# otherwise, it's a login
# Pull out the user-agent and IP from the request.
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
# Call the mapper to register/login the user
try:
user_id = await self._map_userinfo_to_user(
userinfo, token, user_agent, ip_address
await self._complete_oidc_login(
userinfo, token, request, client_redirect_url
)
except MappingException as e:
logger.exception("Could not map user")
self._sso_handler.render_error(request, "mapping_error", str(e))
return
# Mapping providers might not have get_extra_attributes: only call this
# method if it exists.
extra_attributes = None
get_extra_attributes = getattr(
self._user_mapping_provider, "get_extra_attributes", None
)
if get_extra_attributes:
extra_attributes = await get_extra_attributes(userinfo, token)
# and finally complete the login
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url, extra_attributes
)
def _generate_oidc_session_token(
self,
@@ -838,10 +817,14 @@ class OidcHandler(BaseHandler):
now = self.clock.time_msec()
return now < expiry
async def _map_userinfo_to_user(
self, userinfo: UserInfo, token: Token, user_agent: str, ip_address: str
) -> str:
"""Maps a UserInfo object to a mxid.
async def _complete_oidc_login(
self,
userinfo: UserInfo,
token: Token,
request: SynapseRequest,
client_redirect_url: str,
) -> None:
"""Given a UserInfo response, complete the login flow
UserInfo should have a claim that uniquely identifies users. This claim
is usually `sub`, but can be configured with `oidc_config.subject_claim`.
@@ -853,17 +836,16 @@ class OidcHandler(BaseHandler):
If a user already exists with the mxid we've mapped and allow_existing_users
is disabled, raise an exception.
Otherwise, render a redirect back to the client_redirect_url with a loginToken.
Args:
userinfo: an object representing the user
token: a dict with the tokens obtained from the provider
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
request: The request to respond to
client_redirect_url: The redirect URL passed in by the client.
Raises:
MappingException: if there was an error while mapping some properties
Returns:
The mxid of the user
"""
try:
remote_user_id = self._remote_id_from_userinfo(userinfo)
@@ -931,13 +913,23 @@ class OidcHandler(BaseHandler):
return None
return await self._sso_handler.get_mxid_from_sso(
# Mapping providers might not have get_extra_attributes: only call this
# method if it exists.
extra_attributes = None
get_extra_attributes = getattr(
self._user_mapping_provider, "get_extra_attributes", None
)
if get_extra_attributes:
extra_attributes = await get_extra_attributes(userinfo, token)
await self._sso_handler.complete_sso_login_request(
self._auth_provider_id,
remote_user_id,
user_agent,
ip_address,
request,
client_redirect_url,
oidc_response_to_user_attributes,
grandfather_existing_users,
extra_attributes,
)
def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str:
@@ -955,7 +947,7 @@ class OidcHandler(BaseHandler):
UserAttributeDict = TypedDict(
"UserAttributeDict", {"localpart": str, "display_name": Optional[str]}
"UserAttributeDict", {"localpart": Optional[str], "display_name": Optional[str]}
)
C = TypeVar("C")
@@ -1036,10 +1028,10 @@ env = Environment(finalize=jinja_finalize)
@attr.s
class JinjaOidcMappingConfig:
subject_claim = attr.ib() # type: str
localpart_template = attr.ib() # type: Template
display_name_template = attr.ib() # type: Optional[Template]
extra_attributes = attr.ib() # type: Dict[str, Template]
subject_claim = attr.ib(type=str)
localpart_template = attr.ib(type=Optional[Template])
display_name_template = attr.ib(type=Optional[Template])
extra_attributes = attr.ib(type=Dict[str, Template])
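The change above moves the types from "# type:" comments into attr.ib(type=...), which attrs records as metadata on the class. A quick sketch of the effect:

import attr

@attr.s
class Example:
    subject_claim = attr.ib(type=str)  # type now visible to attrs itself

assert attr.fields(Example).subject_claim.type is str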
class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
@@ -1055,18 +1047,14 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
def parse_config(config: dict) -> JinjaOidcMappingConfig:
subject_claim = config.get("subject_claim", "sub")
if "localpart_template" not in config:
raise ConfigError(
"missing key: oidc_config.user_mapping_provider.config.localpart_template"
)
try:
localpart_template = env.from_string(config["localpart_template"])
except Exception as e:
raise ConfigError(
"invalid jinja template for oidc_config.user_mapping_provider.config.localpart_template: %r"
% (e,)
)
localpart_template = None # type: Optional[Template]
if "localpart_template" in config:
try:
localpart_template = env.from_string(config["localpart_template"])
except Exception as e:
raise ConfigError(
"invalid jinja template", path=["localpart_template"]
) from e
display_name_template = None # type: Optional[Template]
if "display_name_template" in config:
@@ -1074,26 +1062,22 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
display_name_template = env.from_string(config["display_name_template"])
except Exception as e:
raise ConfigError(
"invalid jinja template for oidc_config.user_mapping_provider.config.display_name_template: %r"
% (e,)
)
"invalid jinja template", path=["display_name_template"]
) from e
extra_attributes = {} # type: Dict[str, Template]
if "extra_attributes" in config:
extra_attributes_config = config.get("extra_attributes") or {}
if not isinstance(extra_attributes_config, dict):
raise ConfigError(
"oidc_config.user_mapping_provider.config.extra_attributes must be a dict"
)
raise ConfigError("must be a dict", path=["extra_attributes"])
for key, value in extra_attributes_config.items():
try:
extra_attributes[key] = env.from_string(value)
except Exception as e:
raise ConfigError(
"invalid jinja template for oidc_config.user_mapping_provider.config.extra_attributes.%s: %r"
% (key, e)
)
"invalid jinja template", path=["extra_attributes", key]
) from e
return JinjaOidcMappingConfig(
subject_claim=subject_claim,
@@ -1108,14 +1092,17 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
async def map_user_attributes(
self, userinfo: UserInfo, token: Token, failures: int
) -> UserAttributeDict:
localpart = self._config.localpart_template.render(user=userinfo).strip()
localpart = None
# Ensure only valid characters are included in the MXID.
localpart = map_username_to_mxid_localpart(localpart)
if self._config.localpart_template:
localpart = self._config.localpart_template.render(user=userinfo).strip()
# Append suffix integer if last call to this function failed to produce
# a usable mxid.
localpart += str(failures) if failures else ""
# Ensure only valid characters are included in the MXID.
localpart = map_username_to_mxid_localpart(localpart)
# Append suffix integer if last call to this function failed to produce
# a usable mxid.
localpart += str(failures) if failures else ""
display_name = None # type: Optional[str]
if self._config.display_name_template is not None:
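To make the failure-suffix behaviour concrete, a small example using map_username_to_mxid_localpart (the printed value is what that helper is expected to produce):

from synapse.types import map_username_to_mxid_localpart

localpart = "Alice Smith"
failures = 1  # a previous attempt at this mxid collided
localpart += str(failures) if failures else ""
# Lowercases and escapes characters not allowed in an MXID localpart:
print(map_username_to_mxid_localpart(localpart))  # alice=20smith1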

View File

@@ -13,18 +13,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import List, Tuple
from typing import TYPE_CHECKING, List, Optional, Tuple
from synapse.appservice import ApplicationService
from synapse.handlers._base import BaseHandler
from synapse.types import JsonDict, ReadReceipt, get_domain_from_id
from synapse.util.async_helpers import maybe_awaitable
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class ReceiptsHandler(BaseHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.server_name = hs.config.server_name
@@ -37,7 +39,7 @@ class ReceiptsHandler(BaseHandler):
self.clock = self.hs.get_clock()
self.state = hs.get_state_handler()
async def _received_remote_receipt(self, origin, content):
async def _received_remote_receipt(self, origin: str, content: JsonDict) -> None:
"""Called when we receive an EDU of type m.receipt from a remote HS.
"""
receipts = []
@@ -64,11 +66,11 @@ class ReceiptsHandler(BaseHandler):
await self._handle_new_receipts(receipts)
async def _handle_new_receipts(self, receipts):
async def _handle_new_receipts(self, receipts: List[ReadReceipt]) -> bool:
"""Takes a list of receipts, stores them and informs the notifier.
"""
min_batch_id = None
max_batch_id = None
min_batch_id = None # type: Optional[int]
max_batch_id = None # type: Optional[int]
for receipt in receipts:
res = await self.store.insert_receipt(
@@ -90,7 +92,8 @@ class ReceiptsHandler(BaseHandler):
if max_batch_id is None or max_persisted_id > max_batch_id:
max_batch_id = max_persisted_id
if min_batch_id is None:
# Either both of these should be None or neither.
if min_batch_id is None or max_batch_id is None:
# no new receipts
return False
@@ -98,15 +101,15 @@ class ReceiptsHandler(BaseHandler):
self.notifier.on_new_event("receipt_key", max_batch_id, rooms=affected_room_ids)
# Note that the min here shouldn't be relied upon to be accurate.
await maybe_awaitable(
self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
)
await self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
)
return True
async def received_client_receipt(self, room_id, receipt_type, user_id, event_id):
async def received_client_receipt(
self, room_id: str, receipt_type: str, user_id: str, event_id: str
) -> None:
"""Called when a client tells us a local user has read up to the given
event_id in the room.
"""
@@ -126,10 +129,12 @@ class ReceiptsHandler(BaseHandler):
class ReceiptEventSource:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
async def get_new_events(self, from_key, room_ids, **kwargs):
async def get_new_events(
self, from_key: int, room_ids: List[str], **kwargs
) -> Tuple[List[JsonDict], int]:
from_key = int(from_key)
to_key = self.get_current_key()
@@ -174,5 +179,5 @@ class ReceiptEventSource:
return (events, to_key)
def get_current_key(self, direction="f"):
def get_current_key(self, direction: str = "f") -> int:
return self.store.get_max_receipt_stream_id()

View File

@@ -187,7 +187,7 @@ class RegistrationHandler(BaseHandler):
"""
self.check_registration_ratelimit(address)
result = self.spam_checker.check_registration_for_spam(
result = await self.spam_checker.check_registration_for_spam(
threepid, localpart, user_agent_ips or [],
)
@@ -630,6 +630,7 @@ class RegistrationHandler(BaseHandler):
device_id: Optional[str],
initial_display_name: Optional[str],
is_guest: bool = False,
is_appservice_ghost: bool = False,
) -> Tuple[str, str]:
"""Register a device for a user and generate an access token.
@@ -651,6 +652,7 @@ class RegistrationHandler(BaseHandler):
device_id=device_id,
initial_display_name=initial_display_name,
is_guest=is_guest,
is_appservice_ghost=is_appservice_ghost,
)
return r["device_id"], r["access_token"]
@@ -672,7 +674,10 @@ class RegistrationHandler(BaseHandler):
)
else:
access_token = await self._auth_handler.get_access_token_for_user_id(
user_id, device_id=registered_device_id, valid_until_ms=valid_until_ms
user_id,
device_id=registered_device_id,
valid_until_ms=valid_until_ms,
is_appservice_ghost=is_appservice_ghost,
)
return (registered_device_id, access_token)

View File

@@ -27,6 +27,7 @@ from typing import TYPE_CHECKING, Any, Awaitable, Dict, List, Optional, Tuple
from synapse.api.constants import (
EventTypes,
HistoryVisibility,
JoinRules,
Membership,
RoomCreationPreset,
@@ -81,21 +82,21 @@ class RoomCreationHandler(BaseHandler):
self._presets_dict = {
RoomCreationPreset.PRIVATE_CHAT: {
"join_rules": JoinRules.INVITE,
"history_visibility": "shared",
"history_visibility": HistoryVisibility.SHARED,
"original_invitees_have_ops": False,
"guest_can_join": True,
"power_level_content_override": {"invite": 0},
},
RoomCreationPreset.TRUSTED_PRIVATE_CHAT: {
"join_rules": JoinRules.INVITE,
"history_visibility": "shared",
"history_visibility": HistoryVisibility.SHARED,
"original_invitees_have_ops": True,
"guest_can_join": True,
"power_level_content_override": {"invite": 0},
},
RoomCreationPreset.PUBLIC_CHAT: {
"join_rules": JoinRules.PUBLIC,
"history_visibility": "shared",
"history_visibility": HistoryVisibility.SHARED,
"original_invitees_have_ops": False,
"guest_can_join": False,
"power_level_content_override": {},
@@ -358,7 +359,7 @@ class RoomCreationHandler(BaseHandler):
"""
user_id = requester.user.to_string()
if not self.spam_checker.user_may_create_room(user_id):
if not await self.spam_checker.user_may_create_room(user_id):
raise SynapseError(403, "You are not permitted to create rooms")
creation_content = {
@@ -609,7 +610,7 @@ class RoomCreationHandler(BaseHandler):
403, "You are not permitted to create rooms", Codes.FORBIDDEN
)
if not is_requester_admin and not self.spam_checker.user_may_create_room(
if not is_requester_admin and not await self.spam_checker.user_may_create_room(
user_id
):
raise SynapseError(403, "You are not permitted to create rooms")

View File

@@ -15,19 +15,22 @@
import logging
from collections import namedtuple
from typing import Any, Dict, Optional
from typing import TYPE_CHECKING, Optional, Tuple
import msgpack
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.constants import EventTypes, JoinRules
from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules
from synapse.api.errors import Codes, HttpResponseException
from synapse.types import ThirdPartyInstanceID
from synapse.types import JsonDict, ThirdPartyInstanceID
from synapse.util.caches.descriptors import cached
from synapse.util.caches.response_cache import ResponseCache
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
REMOTE_ROOM_LIST_POLL_INTERVAL = 60 * 1000
@@ -37,37 +40,38 @@ EMPTY_THIRD_PARTY_ID = ThirdPartyInstanceID(None, None)
class RoomListHandler(BaseHandler):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.enable_room_list_search = hs.config.enable_room_list_search
self.response_cache = ResponseCache(hs, "room_list")
self.response_cache = ResponseCache(
hs, "room_list"
) # type: ResponseCache[Tuple[Optional[int], Optional[str], ThirdPartyInstanceID]]
self.remote_response_cache = ResponseCache(
hs, "remote_room_list", timeout_ms=30 * 1000
)
) # type: ResponseCache[Tuple[str, Optional[int], Optional[str], bool, Optional[str]]]
async def get_local_public_room_list(
self,
limit=None,
since_token=None,
search_filter=None,
network_tuple=EMPTY_THIRD_PARTY_ID,
from_federation=False,
):
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[dict] = None,
network_tuple: ThirdPartyInstanceID = EMPTY_THIRD_PARTY_ID,
from_federation: bool = False,
) -> JsonDict:
"""Generate a local public room list.
There are multiple different lists: the main one plus one per third
party network. A client can ask for a specific list or to return all.
Args:
limit (int|None)
since_token (str|None)
search_filter (dict|None)
network_tuple (ThirdPartyInstanceID): Which public list to use.
limit
since_token
search_filter
network_tuple: Which public list to use.
This can be (None, None) to indicate the main list, or a particular
appservice and network id to use an appservice specific one.
Setting to None returns all public rooms across all lists.
from_federation (bool): true iff the request comes from the federation
API
from_federation: true iff the request comes from the federation API
"""
if not self.enable_room_list_search:
return {"chunk": [], "total_room_count_estimate": 0}
@@ -107,10 +111,10 @@ class RoomListHandler(BaseHandler):
self,
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[Dict] = None,
search_filter: Optional[dict] = None,
network_tuple: ThirdPartyInstanceID = EMPTY_THIRD_PARTY_ID,
from_federation: bool = False,
) -> Dict[str, Any]:
) -> JsonDict:
"""Generate a public room list.
Args:
limit: Maximum amount of rooms to return.
@@ -131,13 +135,17 @@ class RoomListHandler(BaseHandler):
if since_token:
batch_token = RoomListNextBatch.from_token(since_token)
bounds = (batch_token.last_joined_members, batch_token.last_room_id)
bounds = (
batch_token.last_joined_members,
batch_token.last_room_id,
) # type: Optional[Tuple[int, str]]
forwards = batch_token.direction_is_forward
has_batch_token = True
else:
batch_token = None
bounds = None
forwards = True
has_batch_token = False
# we request one more than wanted to see if there are more pages to come
probing_limit = limit + 1 if limit is not None else None
@@ -159,7 +167,8 @@ class RoomListHandler(BaseHandler):
"canonical_alias": room["canonical_alias"],
"num_joined_members": room["joined_members"],
"avatar_url": room["avatar"],
"world_readable": room["history_visibility"] == "world_readable",
"world_readable": room["history_visibility"]
== HistoryVisibility.WORLD_READABLE,
"guest_can_join": room["guest_access"] == "can_join",
}
@@ -168,7 +177,7 @@ class RoomListHandler(BaseHandler):
results = [build_room_entry(r) for r in results]
response = {}
response = {} # type: JsonDict
num_results = len(results)
if limit is not None:
more_to_come = num_results == probing_limit
@@ -186,7 +195,7 @@ class RoomListHandler(BaseHandler):
initial_entry = results[0]
if forwards:
if batch_token:
if has_batch_token:
# If there was a token given then we assume that there
# must be previous results.
response["prev_batch"] = RoomListNextBatch(
@@ -202,7 +211,7 @@ class RoomListHandler(BaseHandler):
direction_is_forward=True,
).to_token()
else:
if batch_token:
if has_batch_token:
response["next_batch"] = RoomListNextBatch(
last_joined_members=final_entry["num_joined_members"],
last_room_id=final_entry["room_id"],
@@ -292,7 +301,7 @@ class RoomListHandler(BaseHandler):
return None
# Return whether this room is open to federation users or not
create_event = current_state.get((EventTypes.Create, ""))
create_event = current_state[EventTypes.Create, ""]
result["m.federate"] = create_event.content.get("m.federate", True)
name_event = current_state.get((EventTypes.Name, ""))
@@ -317,7 +326,7 @@ class RoomListHandler(BaseHandler):
visibility = None
if visibility_event:
visibility = visibility_event.content.get("history_visibility", None)
result["world_readable"] = visibility == "world_readable"
result["world_readable"] = visibility == HistoryVisibility.WORLD_READABLE
guest_event = current_state.get((EventTypes.GuestAccess, ""))
guest = None
@@ -335,13 +344,13 @@ class RoomListHandler(BaseHandler):
async def get_remote_public_room_list(
self,
server_name,
limit=None,
since_token=None,
search_filter=None,
include_all_networks=False,
third_party_instance_id=None,
):
server_name: str,
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[dict] = None,
include_all_networks: bool = False,
third_party_instance_id: Optional[str] = None,
) -> JsonDict:
if not self.enable_room_list_search:
return {"chunk": [], "total_room_count_estimate": 0}
@@ -398,13 +407,13 @@ class RoomListHandler(BaseHandler):
async def _get_remote_list_cached(
self,
server_name,
limit=None,
since_token=None,
search_filter=None,
include_all_networks=False,
third_party_instance_id=None,
):
server_name: str,
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[dict] = None,
include_all_networks: bool = False,
third_party_instance_id: Optional[str] = None,
) -> JsonDict:
repl_layer = self.hs.get_federation_client()
if search_filter:
# We can't cache when asking for search
@@ -455,24 +464,24 @@ class RoomListNextBatch(
REVERSE_KEY_DICT = {v: k for k, v in KEY_DICT.items()}
@classmethod
def from_token(cls, token):
def from_token(cls, token: str) -> "RoomListNextBatch":
decoded = msgpack.loads(decode_base64(token), raw=False)
return RoomListNextBatch(
**{cls.REVERSE_KEY_DICT[key]: val for key, val in decoded.items()}
)
def to_token(self):
def to_token(self) -> str:
return encode_base64(
msgpack.dumps(
{self.KEY_DICT[key]: val for key, val in self._asdict().items()}
)
)
def copy_and_replace(self, **kwds):
def copy_and_replace(self, **kwds) -> "RoomListNextBatch":
return self._replace(**kwds)
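A sketch of the pagination-token round trip these helpers implement: the batch fields are packed with msgpack and wrapped in unpadded base64, yielding an opaque string for clients to echo back. The short keys stand in for KEY_DICT's abbreviations and are invented here:

import msgpack
from unpaddedbase64 import decode_base64, encode_base64

batch = {"m": 42, "r": "!room:example.org", "d": True}
token = encode_base64(msgpack.dumps(batch))
assert msgpack.loads(decode_base64(token), raw=False) == batch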
def _matches_room_entry(room_entry, search_filter):
def _matches_room_entry(room_entry: JsonDict, search_filter: dict) -> bool:
if search_filter and search_filter.get("generic_search_term", None):
generic_search_term = search_filter["generic_search_term"].upper()
if generic_search_term in room_entry.get("name", "").upper():
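The from_token/to_token pair above defines the public-rooms pagination token format: the batch fields are msgpack-encoded under shortened keys (KEY_DICT) and then base64-encoded into an opaque string. A standalone sketch of that round trip, using stdlib base64 where Synapse uses its unpadded-base64 helpers, and with illustrative key names:

from base64 import urlsafe_b64decode, urlsafe_b64encode

import msgpack  # third-party: msgpack-python

# Hypothetical shortened-key table, mirroring the KEY_DICT idea above.
KEY_DICT = {"last_joined_members": "m", "last_room_id": "r"}
REVERSE_KEY_DICT = {v: k for k, v in KEY_DICT.items()}

def to_token(batch: dict) -> str:
    packed = msgpack.dumps({KEY_DICT[k]: v for k, v in batch.items()})
    return urlsafe_b64encode(packed).decode("ascii")

def from_token(token: str) -> dict:
    decoded = msgpack.loads(urlsafe_b64decode(token), raw=False)
    return {REVERSE_KEY_DICT[k]: v for k, v in decoded.items()}

batch = {"last_joined_members": 5, "last_room_id": "!abc:example.com"}
assert from_token(to_token(batch)) == batch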


@@ -408,7 +408,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
)
block_invite = True
if not self.spam_checker.user_may_invite(
if not await self.spam_checker.user_may_invite(
requester.user.to_string(), target.to_string(), room_id
):
logger.info("Blocking invite due to spam checker")


@@ -58,8 +58,6 @@ class SamlHandler(BaseHandler):
super().__init__(hs)
self._saml_client = Saml2Client(hs.config.saml2_sp_config)
self._saml_idp_entityid = hs.config.saml2_idp_entityid
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
self._saml2_session_lifetime = hs.config.saml2_session_lifetime
self._grandfathered_mxid_source_attribute = (
@@ -163,6 +161,29 @@ class SamlHandler(BaseHandler):
return
logger.debug("SAML2 response: %s", saml2_auth.origxml)
await self._handle_authn_response(request, saml2_auth, relay_state)
async def _handle_authn_response(
self,
request: SynapseRequest,
saml2_auth: saml2.response.AuthnResponse,
relay_state: str,
) -> None:
"""Handle an AuthnResponse, having parsed it from the request params
Assumes that the signature on the response object has been checked. Maps
the user onto an MXID, registering them if necessary, and returns a response
to the browser.
Args:
request: the incoming request from the browser. We'll respond to it with an
HTML page or a redirect
saml2_auth: the parsed AuthnResponse object
relay_state: the RelayState query param, which encodes the URI to redirect
back to
"""
for assertion in saml2_auth.assertions:
# kibana limits the length of a log field, whereas this is all rather
# useful, so split it up.
@@ -206,40 +227,29 @@ class SamlHandler(BaseHandler):
)
return
# Pull out the user-agent and IP from the request.
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
# Call the mapper to register/login the user
try:
user_id = await self._map_saml_response_to_user(
saml2_auth, relay_state, user_agent, ip_address
)
await self._complete_saml_login(saml2_auth, request, relay_state)
except MappingException as e:
logger.exception("Could not map user")
self._sso_handler.render_error(request, "mapping_error", str(e))
return
await self._auth_handler.complete_sso_login(user_id, request, relay_state)
async def _map_saml_response_to_user(
async def _complete_saml_login(
self,
saml2_auth: saml2.response.AuthnResponse,
request: SynapseRequest,
client_redirect_url: str,
user_agent: str,
ip_address: str,
) -> str:
) -> None:
"""
Given a SAML response, retrieve the user ID for it and possibly register the user.
Given a SAML response, complete the login flow
Retrieves the remote user ID, registers the user if necessary, and serves
a redirect back to the client with a login-token.
Args:
saml2_auth: The parsed SAML2 response.
request: The request to respond to
client_redirect_url: The redirect URL passed in by the client.
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
Returns:
The user ID associated with this response.
Raises:
MappingException if there was a problem mapping the response to a user.
@@ -295,11 +305,11 @@ class SamlHandler(BaseHandler):
return None
return await self._sso_handler.get_mxid_from_sso(
await self._sso_handler.complete_sso_login_request(
self._auth_provider_id,
remote_user_id,
user_agent,
ip_address,
request,
client_redirect_url,
saml_response_to_remapped_user_attributes,
grandfather_existing_users,
)


@@ -13,16 +13,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional
from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional
import attr
from typing_extensions import NoReturn
from twisted.web.http import Request
from synapse.api.errors import RedirectException
from synapse.api.errors import RedirectException, SynapseError
from synapse.http.server import respond_with_html
from synapse.types import UserID, contains_invalid_mxid_characters
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict, UserID, contains_invalid_mxid_characters
from synapse.util.async_helpers import Linearizer
from synapse.util.stringutils import random_string
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -39,16 +42,52 @@ class MappingException(Exception):
@attr.s
class UserAttributes:
localpart = attr.ib(type=str)
# the localpart of the mxid that the mapper has assigned to the user.
# if `None`, the mapper has not picked a userid, and the user should be prompted to
# enter one.
localpart = attr.ib(type=Optional[str])
display_name = attr.ib(type=Optional[str], default=None)
emails = attr.ib(type=List[str], default=attr.Factory(list))
@attr.s(slots=True)
class UsernameMappingSession:
"""Data we track about SSO sessions"""
# A unique identifier for this SSO provider, e.g. "oidc" or "saml".
auth_provider_id = attr.ib(type=str)
# user ID on the IdP server
remote_user_id = attr.ib(type=str)
# attributes returned by the ID mapper
display_name = attr.ib(type=Optional[str])
emails = attr.ib(type=List[str])
# An optional dictionary of extra attributes to be provided to the client in the
# login response.
extra_login_attributes = attr.ib(type=Optional[JsonDict])
# where to redirect the client back to
client_redirect_url = attr.ib(type=str)
# expiry time for the session, in milliseconds
expiry_time_ms = attr.ib(type=int)
# the HTTP cookie used to track the mapping session id
USERNAME_MAPPING_SESSION_COOKIE_NAME = b"username_mapping_session"
class SsoHandler:
# The number of attempts to make when asking the mapping provider for an MXID.
_MAP_USERNAME_RETRIES = 1000
# the time a UsernameMappingSession remains valid for
_MAPPING_SESSION_VALIDITY_PERIOD_MS = 15 * 60 * 1000
def __init__(self, hs: "HomeServer"):
self._clock = hs.get_clock()
self._store = hs.get_datastore()
self._server_name = hs.hostname
self._registration_handler = hs.get_registration_handler()
@@ -58,8 +97,15 @@ class SsoHandler:
# a lock on the mappings
self._mapping_lock = Linearizer(name="sso_user_mapping", clock=hs.get_clock())
# a map from session id to session data
self._username_mapping_sessions = {} # type: Dict[str, UsernameMappingSession]
def render_error(
self, request, error: str, error_description: Optional[str] = None
self,
request: Request,
error: str,
error_description: Optional[str] = None,
code: int = 400,
) -> None:
"""Renders the error template and responds with it.
@@ -71,11 +117,12 @@ class SsoHandler:
We'll respond with an HTML page describing the error.
error: A technical identifier for this error.
error_description: A human-readable description of the error.
code: The integer error code (an HTTP response code)
"""
html = self._error_template.render(
error=error, error_description=error_description
)
respond_with_html(request, 400, html)
respond_with_html(request, code, html)
async def get_sso_user_by_remote_user_id(
self, auth_provider_id: str, remote_user_id: str
@@ -119,15 +166,16 @@ class SsoHandler:
# No match.
return None
async def get_mxid_from_sso(
async def complete_sso_login_request(
self,
auth_provider_id: str,
remote_user_id: str,
user_agent: str,
ip_address: str,
request: SynapseRequest,
client_redirect_url: str,
sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
grandfather_existing_users: Optional[Callable[[], Awaitable[Optional[str]]]],
) -> str:
grandfather_existing_users: Callable[[], Awaitable[Optional[str]]],
extra_login_attributes: Optional[JsonDict] = None,
) -> None:
"""
Given an SSO ID, retrieve the user ID for it and possibly register the user.
@@ -146,12 +194,18 @@ class SsoHandler:
given user-agent and IP address and the SSO ID is linked to this matrix
ID for subsequent calls.
Finally, we generate a redirect to the supplied redirect uri, with a login token
Args:
auth_provider_id: A unique identifier for this SSO provider, e.g.
"oidc" or "saml".
remote_user_id: The unique identifier from the SSO provider.
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
request: The request to respond to
client_redirect_url: The redirect URL passed in by the client.
sso_to_matrix_id_mapper: A callable to generate the user attributes.
The only parameter is an integer which represents the number of
times the returned mxid localpart mapping has failed.
@@ -163,12 +217,13 @@ class SsoHandler:
to the user.
RedirectException to redirect to an additional page (e.g.
to prompt the user for more information).
grandfather_existing_users: A callable which can return a previously
existing matrix ID. The SSO ID is then linked to the returned
matrix ID.
Returns:
The user ID associated with the SSO response.
extra_login_attributes: An optional dictionary of extra
attributes to be provided to the client in the login response.
Raises:
MappingException if there was a problem mapping the response to a user.
@@ -181,28 +236,45 @@ class SsoHandler:
# interstitial pages.
with await self._mapping_lock.queue(auth_provider_id):
# first of all, check if we already have a mapping for this user
previously_registered_user_id = await self.get_sso_user_by_remote_user_id(
user_id = await self.get_sso_user_by_remote_user_id(
auth_provider_id, remote_user_id,
)
if previously_registered_user_id:
return previously_registered_user_id
# Check for grandfathering of users.
if grandfather_existing_users:
previously_registered_user_id = await grandfather_existing_users()
if previously_registered_user_id:
if not user_id:
user_id = await grandfather_existing_users()
if user_id:
# Future logins should also match this user ID.
await self._store.record_user_external_id(
auth_provider_id, remote_user_id, previously_registered_user_id
auth_provider_id, remote_user_id, user_id
)
return previously_registered_user_id
# Otherwise, generate a new user.
attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
user_id = await self._register_mapped_user(
attributes, auth_provider_id, remote_user_id, user_agent, ip_address,
)
return user_id
if not user_id:
attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
if attributes.localpart is None:
# the mapper doesn't return a username. bail out with a redirect to
# the username picker.
await self._redirect_to_username_picker(
auth_provider_id,
remote_user_id,
attributes,
client_redirect_url,
extra_login_attributes,
)
user_id = await self._register_mapped_user(
attributes,
auth_provider_id,
remote_user_id,
request.get_user_agent(""),
request.getClientIP(),
)
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url, extra_login_attributes
)
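The rewritten block above compresses several cases into one path. A condensed, self-contained sketch of the same decision order (illustrative names, not Synapse's API):

from typing import Optional

async def resolve_sso_user(
    existing_mapping: Optional[str],
    grandfathered_user: Optional[str],
    mapped_localpart: Optional[str],
) -> Optional[str]:
    # 1. Reuse an existing SSO -> MXID mapping if we have one.
    if existing_mapping:
        return existing_mapping
    # 2. Otherwise try to grandfather a pre-existing account (the caller
    #    then records the external-id link for future logins).
    if grandfathered_user:
        return grandfathered_user
    # 3. Otherwise consult the attribute mapper; None means "no localpart
    #    chosen", which the caller turns into a username-picker redirect.
    if mapped_localpart is None:
        return None
    # 4. Register a brand-new user for the mapped localpart.
    return "@%s:example.com" % (mapped_localpart,)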
async def _call_attribute_mapper(
self, sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
@@ -229,10 +301,8 @@ class SsoHandler:
)
if not attributes.localpart:
raise MappingException(
"Error parsing SSO response: SSO mapping provider plugin "
"did not return a localpart value"
)
# the mapper has not picked a localpart
return attributes
# Check if this mxid already exists
user_id = UserID(attributes.localpart, self._server_name).to_string()
@@ -247,6 +317,59 @@ class SsoHandler:
)
return attributes
async def _redirect_to_username_picker(
self,
auth_provider_id: str,
remote_user_id: str,
attributes: UserAttributes,
client_redirect_url: str,
extra_login_attributes: Optional[JsonDict],
) -> NoReturn:
"""Creates a UsernameMappingSession and redirects the browser
Called if the user mapping provider doesn't return a localpart for a new user.
Raises a RedirectException which redirects the browser to the username picker.
Args:
auth_provider_id: A unique identifier for this SSO provider, e.g.
"oidc" or "saml".
remote_user_id: The unique identifier from the SSO provider.
attributes: the user attributes returned by the user mapping provider.
client_redirect_url: The redirect URL passed in by the client, which we
will eventually redirect back to.
extra_login_attributes: An optional dictionary of extra
attributes to be provided to the client in the login response.
Raises:
RedirectException
"""
session_id = random_string(16)
now = self._clock.time_msec()
session = UsernameMappingSession(
auth_provider_id=auth_provider_id,
remote_user_id=remote_user_id,
display_name=attributes.display_name,
emails=attributes.emails,
client_redirect_url=client_redirect_url,
expiry_time_ms=now + self._MAPPING_SESSION_VALIDITY_PERIOD_MS,
extra_login_attributes=extra_login_attributes,
)
self._username_mapping_sessions[session_id] = session
logger.info("Recorded registration session id %s", session_id)
# Set the cookie and redirect to the username picker
e = RedirectException(b"/_synapse/client/pick_username")
e.cookies.append(
b"%s=%s; path=/"
% (USERNAME_MAPPING_SESSION_COOKIE_NAME, session_id.encode("ascii"))
)
raise e
async def _register_mapped_user(
self,
attributes: UserAttributes,
@@ -255,9 +378,38 @@ class SsoHandler:
user_agent: str,
ip_address: str,
) -> str:
"""Register a new SSO user.
This is called once we have successfully mapped the remote user id onto a local
user id, one way or another.
Args:
attributes: user attributes returned by the user mapping provider,
including a non-empty localpart.
auth_provider_id: A unique identifier for this SSO provider, e.g.
"oidc" or "saml".
remote_user_id: The unique identifier from the SSO provider.
user_agent: The user-agent in the HTTP request (used for potential
shadow-banning).
ip_address: The IP address of the requester (used for potential
shadow-banning).
Raises:
a MappingException if the localpart is invalid.
a SynapseError with code 400 and errcode Codes.USER_IN_USE if the localpart
is already taken.
"""
# Since the localpart is provided via a potentially untrusted module,
# ensure the MXID is valid before registering.
if contains_invalid_mxid_characters(attributes.localpart):
if not attributes.localpart or contains_invalid_mxid_characters(
attributes.localpart
):
raise MappingException("localpart is invalid: %s" % (attributes.localpart,))
logger.debug("Mapped SSO user to local part %s", attributes.localpart)
@@ -312,3 +464,108 @@ class SsoHandler:
await self._auth_handler.complete_sso_ui_auth(
user_id, ui_auth_session_id, request
)
async def check_username_availability(
self, localpart: str, session_id: str,
) -> bool:
"""Handle an "is username available" callback check
Args:
localpart: desired localpart
session_id: the session id for the username picker
Returns:
True if the username is available
Raises:
SynapseError if the localpart is invalid or the session is unknown
"""
# make sure that there is a valid mapping session, to stop people dictionary-
# scanning for accounts
self._expire_old_sessions()
session = self._username_mapping_sessions.get(session_id)
if not session:
logger.info("Couldn't find session id %s", session_id)
raise SynapseError(400, "unknown session")
logger.info(
"[session %s] Checking for availability of username %s",
session_id,
localpart,
)
if contains_invalid_mxid_characters(localpart):
raise SynapseError(400, "localpart is invalid: %s" % (localpart,))
user_id = UserID(localpart, self._server_name).to_string()
user_infos = await self._store.get_users_by_id_case_insensitive(user_id)
logger.info("[session %s] users: %s", session_id, user_infos)
return not user_infos
async def handle_submit_username_request(
self, request: SynapseRequest, localpart: str, session_id: str
) -> None:
"""Handle a request to the username-picker 'submit' endpoint
Will serve an HTTP response to the request.
Args:
request: HTTP request
localpart: localpart requested by the user
session_id: ID of the username mapping session, extracted from a cookie
"""
self._expire_old_sessions()
session = self._username_mapping_sessions.get(session_id)
if not session:
logger.info("Couldn't find session id %s", session_id)
raise SynapseError(400, "unknown session")
logger.info("[session %s] Registering localpart %s", session_id, localpart)
attributes = UserAttributes(
localpart=localpart,
display_name=session.display_name,
emails=session.emails,
)
# the following will raise a 400 error if the username has been taken in the
# meantime.
user_id = await self._register_mapped_user(
attributes,
session.auth_provider_id,
session.remote_user_id,
request.get_user_agent(""),
request.getClientIP(),
)
logger.info("[session %s] Registered userid %s", session_id, user_id)
# delete the mapping session and the cookie
del self._username_mapping_sessions[session_id]
# delete the cookie
request.addCookie(
USERNAME_MAPPING_SESSION_COOKIE_NAME,
b"",
expires=b"Thu, 01 Jan 1970 00:00:00 GMT",
path=b"/",
)
await self._auth_handler.complete_sso_login(
user_id,
request,
session.client_redirect_url,
session.extra_login_attributes,
)
def _expire_old_sessions(self):
to_expire = []
now = int(self._clock.time_msec())
for session_id, session in self._username_mapping_sessions.items():
if session.expiry_time_ms <= now:
to_expire.append(session_id)
for session_id in to_expire:
logger.info("Expiring mapping session %s", session_id)
del self._username_mapping_sessions[session_id]
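Putting the session machinery together: sessions are keyed by a random ID stored in a cookie, remain valid for fifteen minutes, and are swept lazily before each lookup rather than on a timer. A minimal sketch of that lifecycle, assuming an in-memory store like the one above (names are illustrative, not Synapse's API):

from typing import Dict, Optional

VALIDITY_PERIOD_MS = 15 * 60 * 1000

class MappingSession:
    def __init__(self, remote_user_id: str, now_ms: int):
        self.remote_user_id = remote_user_id
        self.expiry_time_ms = now_ms + VALIDITY_PERIOD_MS

sessions: Dict[str, MappingSession] = {}

def expire_old_sessions(now_ms: int) -> None:
    # Sweep expired sessions lazily, before each lookup.
    for sid in [s for s, sess in sessions.items() if sess.expiry_time_ms <= now_ms]:
        del sessions[sid]

def get_session(session_id: str, now_ms: int) -> Optional[MappingSession]:
    expire_old_sessions(now_ms)
    return sessions.get(session_id)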


@@ -554,7 +554,7 @@ class SyncHandler:
event.event_id, state_filter=state_filter
)
if event.is_state():
state_ids = state_ids.copy()
state_ids = dict(state_ids)
state_ids[(event.type, event.state_key)] = event.event_id
return state_ids
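The one-line change above is subtle: the state map returned here may be an immutable mapping (Synapse uses the frozendict package), whose copy() does not necessarily produce a mutable dict, whereas dict(...) always does. A small sketch under that assumption:

from frozendict import frozendict

# Stand-in for an immutable state map from the storage layer.
frozen = frozendict({("m.room.name", ""): "$event1"})

state_ids = dict(frozen)                      # always a plain mutable dict
state_ids[("m.room.topic", "")] = "$event2"   # safe to mutate

# By contrast, item assignment on the frozendict itself raises TypeError.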


@@ -14,14 +14,19 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional
import synapse.metrics
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.constants import EventTypes, HistoryVisibility, JoinRules, Membership
from synapse.handlers.state_deltas import StateDeltasHandler
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -36,7 +41,7 @@ class UserDirectoryHandler(StateDeltasHandler):
be in the directory or not when necessary.
"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.store = hs.get_datastore()
@@ -49,7 +54,7 @@ class UserDirectoryHandler(StateDeltasHandler):
self.search_all_users = hs.config.user_directory_search_all_users
self.spam_checker = hs.get_spam_checker()
# The current position in the current_state_delta stream
self.pos = None
self.pos = None # type: Optional[int]
# Guard to ensure we only process deltas one at a time
self._is_processing = False
@@ -61,7 +66,9 @@ class UserDirectoryHandler(StateDeltasHandler):
# we start populating the user directory
self.clock.call_later(0, self.notify_new_event)
async def search_users(self, user_id, search_term, limit):
async def search_users(
self, user_id: str, search_term: str, limit: int
) -> JsonDict:
"""Searches for users in directory
Returns:
@@ -81,15 +88,15 @@ class UserDirectoryHandler(StateDeltasHandler):
results = await self.store.search_user_dir(user_id, search_term, limit)
# Remove any spammy users from the results.
results["results"] = [
user
for user in results["results"]
if not self.spam_checker.check_username_for_spam(user)
]
non_spammy_users = []
for user in results["results"]:
if not await self.spam_checker.check_username_for_spam(user):
non_spammy_users.append(user)
results["results"] = non_spammy_users
return results
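Since check_username_for_spam is now awaitable, the old synchronous list comprehension could not simply be kept. An async comprehension (valid inside an async def since Python 3.6) would express the same filter more compactly; a sketch against the names used above, not the committed code:

# Inside the async search_users method shown above: `await` is
# permitted in comprehensions within a coroutine.
results["results"] = [
    user
    for user in results["results"]
    if not await self.spam_checker.check_username_for_spam(user)
]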
def notify_new_event(self):
def notify_new_event(self) -> None:
"""Called when there may be more deltas to process
"""
if not self.update_user_directory:
@@ -107,27 +114,33 @@ class UserDirectoryHandler(StateDeltasHandler):
self._is_processing = True
run_as_background_process("user_directory.notify_new_event", process)
async def handle_local_profile_change(self, user_id, profile):
async def handle_local_profile_change(
self, user_id: str, profile: ProfileInfo
) -> None:
"""Called to update index of our local user profiles when they change
irrespective of any rooms the user may be in.
"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
is_support = await self.store.is_support_user(user_id)
# Support users are for diagnostics and should not appear in the user directory.
if not is_support:
is_support = await self.store.is_support_user(user_id)
# Changes to the profile information of a deactivated user should not appear in the user directory.
is_deactivated = await self.store.get_user_deactivated_status(user_id)
if not (is_support or is_deactivated):
await self.store.update_profile_in_user_dir(
user_id, profile.display_name, profile.avatar_url
)
async def handle_user_deactivated(self, user_id):
async def handle_user_deactivated(self, user_id: str) -> None:
"""Called when a user ID is deactivated
"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
await self.store.remove_from_user_dir(user_id)
async def _unsafe_process(self):
async def _unsafe_process(self) -> None:
# If self.pos is None then means we haven't fetched it from DB
if self.pos is None:
self.pos = await self.store.get_user_directory_stream_pos()
@@ -162,7 +175,7 @@ class UserDirectoryHandler(StateDeltasHandler):
await self.store.update_user_directory_stream_pos(max_pos)
async def _handle_deltas(self, deltas):
async def _handle_deltas(self, deltas: List[Dict[str, Any]]) -> None:
"""Called with the state deltas to process
"""
for delta in deltas:
@@ -232,16 +245,20 @@ class UserDirectoryHandler(StateDeltasHandler):
logger.debug("Ignoring irrelevant type: %r", typ)
async def _handle_room_publicity_change(
self, room_id, prev_event_id, event_id, typ
):
self,
room_id: str,
prev_event_id: Optional[str],
event_id: Optional[str],
typ: str,
) -> None:
"""Handle a room having potentially changed from/to world_readable/publicly
joinable.
Args:
room_id (str)
prev_event_id (str|None): The previous event before the state change
event_id (str|None): The new event after the state change
typ (str): Type of the event
room_id: The ID of the room which changed.
prev_event_id: The previous event before the state change
event_id: The new event after the state change
typ: Type of the event
"""
logger.debug("Handling change for %s: %s", typ, room_id)
@@ -250,7 +267,7 @@ class UserDirectoryHandler(StateDeltasHandler):
prev_event_id,
event_id,
key_name="history_visibility",
public_value="world_readable",
public_value=HistoryVisibility.WORLD_READABLE,
)
elif typ == EventTypes.JoinRules:
change = await self._get_key_change(
@@ -299,12 +316,14 @@ class UserDirectoryHandler(StateDeltasHandler):
for user_id, profile in users_with_profile.items():
await self._handle_new_user(room_id, user_id, profile)
async def _handle_new_user(self, room_id, user_id, profile):
async def _handle_new_user(
self, room_id: str, user_id: str, profile: ProfileInfo
) -> None:
"""Called when we might need to add user to directory
Args:
room_id (str): room_id that user joined or started being public
user_id (str)
room_id: The room ID that the user joined, or started being public in
user_id: The user ID to add to the directory
"""
logger.debug("Adding new user to dir, %r", user_id)
@@ -352,12 +371,12 @@ class UserDirectoryHandler(StateDeltasHandler):
if to_insert:
await self.store.add_users_who_share_private_room(room_id, to_insert)
async def _handle_remove_user(self, room_id, user_id):
async def _handle_remove_user(self, room_id: str, user_id: str) -> None:
"""Called when we might need to remove user from directory
Args:
room_id (str): room_id that user left or stopped being public that
user_id (str)
room_id: The room ID that the user left, or stopped being public in
user_id: The user ID to remove from the directory
"""
logger.debug("Removing user %r", user_id)
@@ -370,7 +389,13 @@ class UserDirectoryHandler(StateDeltasHandler):
if len(rooms_user_is_in) == 0:
await self.store.remove_from_user_dir(user_id)
async def _handle_profile_change(self, user_id, room_id, prev_event_id, event_id):
async def _handle_profile_change(
self,
user_id: str,
room_id: str,
prev_event_id: Optional[str],
event_id: Optional[str],
) -> None:
"""Check member event changes for any profile changes and update the
database if there are.
"""


@@ -341,6 +341,7 @@ class SimpleHttpClient:
self.agent = ProxyAgent(
self.reactor,
hs.get_reactor(),
connectTimeout=15,
contextFactory=self.hs.get_http_client_context_factory(),
pool=pool,
@@ -720,11 +721,14 @@ class SimpleHttpClient:
try:
length = await make_deferred_yieldable(
readBodyToFile(response, output_stream, max_size)
read_body_with_max_size(response, output_stream, max_size)
)
except BodyExceededMaxSize:
raise SynapseError(
502,
"Requested file is too large > %r bytes" % (max_size,),
Codes.TOO_LARGE,
)
except SynapseError:
# This can happen e.g. because the body is too large.
raise
except Exception as e:
raise SynapseError(502, ("Failed to download remote body: %s" % e)) from e
@@ -748,7 +752,11 @@ def _timeout_to_request_timed_out_error(f: Failure):
return f
class _ReadBodyToFileProtocol(protocol.Protocol):
class BodyExceededMaxSize(Exception):
"""The maximum allowed size of the HTTP body was exceeded."""
class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
def __init__(
self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int]
):
@@ -761,13 +769,7 @@ class _ReadBodyToFileProtocol(protocol.Protocol):
self.stream.write(data)
self.length += len(data)
if self.max_size is not None and self.length >= self.max_size:
self.deferred.errback(
SynapseError(
502,
"Requested file is too large > %r bytes" % (self.max_size,),
Codes.TOO_LARGE,
)
)
self.deferred.errback(BodyExceededMaxSize())
self.deferred = defer.Deferred()
self.transport.loseConnection()
@@ -782,12 +784,15 @@ class _ReadBodyToFileProtocol(protocol.Protocol):
self.deferred.errback(reason)
def readBodyToFile(
def read_body_with_max_size(
response: IResponse, stream: BinaryIO, max_size: Optional[int]
) -> defer.Deferred:
"""
Read an HTTP response body into a file-object, optionally enforcing a maximum file size.
If the maximum file size is reached, the returned Deferred will resolve to a
Failure with a BodyExceededMaxSize exception.
Args:
response: The HTTP response to read from.
stream: The file-object to write to.
@@ -798,7 +803,7 @@ def readBodyToFile(
"""
d = defer.Deferred()
response.deliverBody(_ReadBodyToFileProtocol(stream, d, max_size))
response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size))
return d
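For readers unfamiliar with the Twisted protocol machinery above, the same size-capping logic looks like this in synchronous form. A rough sketch (the exception name matches the diff; the rest is not Synapse code):

from io import BytesIO
from typing import BinaryIO, Iterable, Optional

class BodyExceededMaxSize(Exception):
    """The maximum allowed size of the HTTP body was exceeded."""

def read_with_max_size(
    chunks: Iterable[bytes], stream: BinaryIO, max_size: Optional[int]
) -> int:
    """Write chunks to stream, failing once max_size is exceeded."""
    length = 0
    for chunk in chunks:
        stream.write(chunk)
        length += len(chunk)
        if max_size is not None and length >= max_size:
            raise BodyExceededMaxSize()
    return length

out = BytesIO()
assert read_with_max_size([b"ab", b"cd"], out, max_size=10) == 4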


@@ -15,17 +15,19 @@
import logging
import random
import time
from io import BytesIO
from typing import Callable, Dict, Optional, Tuple
import attr
from twisted.internet import defer
from twisted.internet.interfaces import IReactorTime
from twisted.web.client import RedirectAgent, readBody
from twisted.web.client import RedirectAgent
from twisted.web.http import stringToDatetime
from twisted.web.http_headers import Headers
from twisted.web.iweb import IAgent, IResponse
from synapse.http.client import BodyExceededMaxSize, read_body_with_max_size
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock, json_decoder
from synapse.util.caches.ttlcache import TTLCache
@@ -53,6 +55,9 @@ WELL_KNOWN_MAX_CACHE_PERIOD = 48 * 3600
# lower bound for .well-known cache period
WELL_KNOWN_MIN_CACHE_PERIOD = 5 * 60
# The maximum size (in bytes) to allow a well-known file to be.
WELL_KNOWN_MAX_SIZE = 50 * 1024 # 50 KiB
# Attempt to refetch a cached well-known N% of the TTL before it expires.
# e.g. if set to 0.2 and we have a cached entry with a TTL of 5mins, then
# we'll start trying to refetch 1 minute before it expires.
@@ -229,6 +234,9 @@ class WellKnownResolver:
server_name: name of the server, from the requested url
retry: Whether to retry the request if it fails.
Raises:
_FetchWellKnownFailure if we fail to look up a result
Returns:
The response object and body. Response may be a non-200 response.
"""
@@ -250,7 +258,11 @@ class WellKnownResolver:
b"GET", uri, headers=Headers(headers)
)
)
body = await make_deferred_yieldable(readBody(response))
body_stream = BytesIO()
await make_deferred_yieldable(
read_body_with_max_size(response, body_stream, WELL_KNOWN_MAX_SIZE)
)
body = body_stream.getvalue()
if 500 <= response.code < 600:
raise Exception("Non-200 response %s" % (response.code,))
@@ -259,6 +271,15 @@ class WellKnownResolver:
except defer.CancelledError:
# Bail if we've been cancelled
raise
except BodyExceededMaxSize:
# If the well-known file was too large, do not keep attempting
# to download it, but consider it a temporary error.
logger.warning(
"Requested .well-known file for %s is too large > %r bytes",
server_name.decode("ascii"),
WELL_KNOWN_MAX_SIZE,
)
raise _FetchWellKnownFailure(temporary=True)
except Exception as e:
if not retry or i >= WELL_KNOWN_RETRY_ATTEMPTS:
logger.info("Error fetching %s: %s", uri_str, e)

Some files were not shown because too many files have changed in this diff.