Compare commits

58 Commits

Author SHA1 Message Date
Erik Johnston
0eaa6dd30e Basic release script 2020-10-30 19:28:53 +00:00
Brendan Abolivier
7a0fd6f98d Fix error handling around when completing an AS transaction (#8693) 2020-10-30 16:50:48 +00:00
Erik Johnston
f27a789697 Merge branch 'master' into develop 2020-10-30 16:27:02 +00:00
Erik Johnston
1b831f2bec Merge branch 'release-v1.22.1' into develop 2020-10-30 15:24:48 +00:00
Patrick Cloke
8f1aefa694 Improve the sample config for SSO (OIDC, SAML, and CAS). (#8635) 2020-10-30 10:01:59 -04:00
Richard van der Hoff
cbc82aa09f Implement and use an @lru_cache decorator (#8595)
We don't always need the full power of a DeferredCache.
2020-10-30 11:43:17 +00:00
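For readers unfamiliar with the pattern, here is a minimal sketch using the standard library's `functools.lru_cache` (Synapse's decorator is its own implementation; the function below is purely illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def parse_version(ver: str) -> tuple:
    # Plain synchronous memoisation: no Deferred machinery is needed when
    # the cached value is an ordinary object rather than an async result.
    return tuple(int(part) for part in ver.split("."))

print(parse_version("1.22.0"))  # computed on first call
print(parse_version("1.22.0"))  # served from the LRU cache
```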
Patrick Cloke
fd7c743445 Fail test cases if they fail to await all awaitables (#8690) 2020-10-30 07:15:07 -04:00
Erik Johnston
46f4be94b4 Fix race for concurrent downloads of remote media. (#8682)
Fixes #6755
2020-10-30 10:55:24 +00:00
Andrew Morgan
4504151546 Fix optional parameter in stripped state storage method (#8688)
Missed in #8671.
2020-10-30 00:22:31 +00:00
Erik Johnston
ef2d627015 Fix unit tests (#8689)
* Fix unit tests

* Newsfile
2020-10-29 18:21:49 +00:00
Will Hunt
70269fbd18 Tie together matches_user_in_member_list and get_users_in_room caches (#8676)
* Tie together matches_user_in_member_list and get_users_in_room

* changelog

* Remove type to fix mypy

* Add `on_invalidate` to the function signature in the hopes that may make things work well

* Remove **kwargs

* Update 8676.bugfix
2020-10-29 16:58:16 +00:00
Patrick Cloke
8b42a4eefd Gracefully handle a pending logging connection during shutdown. (#8685) 2020-10-29 12:53:57 -04:00
Erik Johnston
f21e24ffc2 Add ability for access tokens to belong to one user but grant access to another user. (#8616)
We do it this way round so that only the "owner" can delete the access token (i.e. `/logout/all` by the "owner" also deletes that token, but `/logout/all` by the "target user" doesn't).

A future PR will add an API for creating such a token.

When the target user and authenticated entity are different the `Processed request` log line will be logged with a: `{@admin:server as @bob:server} ...`. I'm not convinced by that format (especially since it adds spaces in there, making it harder to use `cut -d ' '` to chop off the start of log lines). Suggestions welcome.
2020-10-29 15:58:44 +00:00
Erik Johnston
22eeb6bc54 Fix cache call signature to accept on_invalidate. (#8684)
Cached functions accept an `on_invalidate` function, which we failed to add to the type signature. It's rarely used in the files that we have typed, which is why we haven't noticed it before.
2020-10-29 15:18:17 +00:00
Richard van der Hoff
0073fe914a Use %r rather than %s for stringifying events (#8679)
otherwise non-state events get written as `<FrozenEvent ... state_key='None'>`
which is indistinguishable from state events with the actual state_key `None`.
2020-10-29 12:16:49 +00:00
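A quick plain-Python illustration of why `%r` matters here (the event class name is illustrative):

```python
# str() renders both None and the string "None" identically, while repr()
# quotes real strings, keeping the two cases distinguishable in logs.
for state_key in (None, "None"):
    print("with %%s: <FrozenEvent state_key=%s>" % (state_key,))
    print("with %%r: <FrozenEvent state_key=%r>" % (state_key,))
# %s prints state_key=None for both values; %r prints None vs 'None'.
```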
Richard van der Hoff
56f0ee78a9 Optimise createRoom with multiple invites (#8559)
By not dropping the membership lock between invites, we can stop joins from
grabbing the lock when we're half-done and slowing the whole thing down.
2020-10-29 11:48:39 +00:00
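A rough sketch of the locking idea using an asyncio stand-in (the real code uses Synapse's per-room linearizer; `send_invite` below is a placeholder):

```python
import asyncio

membership_lock = asyncio.Lock()  # stand-in for the per-room membership lock

async def send_invite(user):
    await asyncio.sleep(0.01)  # placeholder for the real invite work
    print("invited", user)

async def invite_all(invitees):
    # Hold the lock across the whole batch instead of re-acquiring it per
    # invite, so a concurrent join cannot grab it while we're half-done.
    async with membership_lock:
        for user in invitees:
            await send_invite(user)

asyncio.run(invite_all(["@a:example.org", "@b:example.org"]))
```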
Patrick Cloke
00b24aa545 Support generating structured logs in addition to standard logs. (#8607)
This modifies the configuration of structured logging to be usable from
the standard Python logging configuration.

This also separates the formatting of logs from the transport allowing
JSON logs to files or standard logs to sockets.
2020-10-29 07:27:37 -04:00
Erik Johnston
9a7e0d2ea6 Don't require hiredis to run unit tests (#8680) 2020-10-29 11:17:35 +00:00
Richard van der Hoff
c97da1e45d Merge pull request #8678 from matrix-org/rav/fix_frozen_events
Fix serialisation errors when using third-party event rules.
2020-10-28 20:41:42 +00:00
Richard van der Hoff
e80eb69887 remove unused imports 2020-10-28 16:18:05 +00:00
Richard van der Hoff
b6ca69e4f1 Remove frozendict_json_encoder and support frozendicts everywhere
Not being able to serialise `frozendicts` is fragile, and it's annoying to have
to think about which serialiser you want. There's no real downside to
supporting frozendicts, so let's just have one json encoder.
2020-10-28 15:56:57 +00:00
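The gist is a single encoder whose `default` hook unwraps frozendict-like mappings; a sketch under that assumption (the stand-in class is hypothetical, not Synapse's real type):

```python
import json
from collections.abc import Mapping

class FrozenDict(Mapping):
    """Hypothetical stand-in for an immutable, non-dict mapping type."""
    def __init__(self, data):
        self._data = dict(data)
    def __getitem__(self, key):
        return self._data[key]
    def __iter__(self):
        return iter(self._data)
    def __len__(self):
        return len(self._data)

def _default(obj):
    # One json encoder for everything: unwrap mapping types on the fly
    # rather than maintaining a separate frozendict-aware encoder.
    if isinstance(obj, Mapping):
        return dict(obj)
    raise TypeError("not JSON serialisable: %r" % (type(obj),))

print(json.dumps({"content": FrozenDict({"msgtype": "m.text"})}, default=_default))
```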
Patrick Cloke
31d721fbf6 Add type hints to application services. (#8655) 2020-10-28 11:12:21 -04:00
Dirk Klimpel
2239813278 Add an admin API to allow server admins to list users' pushers (#8610)
Add an admin API `GET /_synapse/admin/v1/users/<user_id>/pushers` like https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers
2020-10-28 15:02:42 +00:00
kleph
29ce6d43b5 Run mypy as part of the lint.sh script. (#8633) 2020-10-28 08:49:08 -04:00
Erik Johnston
a6ea1a957e Don't pull event from DB when handling replication traffic. (#8669)
I was trying to make it so that we didn't have to start a background task when handling RDATA, but that is a bigger job (due to all the code in `generic_worker`). However I still think not pulling the event from the DB may help reduce some DB usage due to replication, even if most workers will simply go and pull that event from the DB later anyway.

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2020-10-28 12:11:45 +00:00
Dan Callahan
aff1eb7c67 Tell Black to format code for Python 3.5 (#8664)
This allows trailing commas in multi-line arg lists.

Minor, but we might as well keep our formatting current with regard to
our minimum supported Python version.

Signed-off-by: Dan Callahan <danc@element.io>
2020-10-27 23:26:36 +00:00
Dan Callahan
e90fad5cba Minor updates to docs on how to run tests (#8666)
The test runner isn't present in the `[all]` set of extras, so the
previous instructions did not work without also installing `[test]`.

Note that this does not include the `[lint]` extras, since those do not
install on all supported Python versions (specifically, isort 5.x
requires Python 3.6, while we still support 3.5). Instructions for that
are included in our pull request template, so we should be fine there.

I've also dropped the `--no-use-pep517` arg to `pip install` since it
seems to have been added to address a temporary regression in pip 19.1
which was fixed in pip 19.1.1 the following month.

Lastly, updated the example output of the test suite to set more
realistic expectations around run time.

Signed-off-by: Dan Callahan <danc@element.io>
2020-10-27 23:26:00 +00:00
Dan Callahan
88e1d0c52b Note support for Python 3.9 (#8665)
As expected, all tests pass locally without modification.

Signed-off-by: Dan Callahan <danc@element.io>
2020-10-27 23:24:33 +00:00
Michael Kaye
f49c2093b5 Cross-link documentation to the prometheus recording rules. (#8667) 2020-10-27 15:29:50 -04:00
Andrew Morgan
a699c044b6 Abstract code for stripping room state into a separate method (#8671)
This is a requirement for [knocking](https://github.com/matrix-org/synapse/pull/6739), and is abstracting some code that was originally used by the invite flow. I'm separating it out into this PR as it's a fairly contained change.

For a bit of context: when you invite a user to a room, you send them [stripped state events](https://matrix.org/docs/spec/server_server/unstable#put-matrix-federation-v2-invite-roomid-eventid) as part of `invite_room_state`. This is so that their client can display useful information such as the room name and avatar. The same requirement applies to knocking, as it would be nice for clients to be able to display a list of rooms you've knocked on - room name and avatar included.

The reason we're sending membership events down as well is in the case that you are invited to a room that does not have an avatar or name set. In that case, the client should use the displayname/avatar of the inviter. That information is located in the inviter's membership event.

This is optional as knocks don't really have any user in the room to link up to. When you knock on a room, your knock is sent by you and inserted into the room. It wouldn't *really* make sense to show the avatar of a random user - plus it'd be a data leak. So I've opted not to send membership events to the client here. The UX on the client for when you knock on a room without a name/avatar is a separate problem.

In essence this is just moving some inline code to a reusable store method.
2020-10-27 18:42:46 +00:00
Erik Johnston
4215a3acd4 Don't unnecessarily start bg process in replication sending loop. (#8670) 2020-10-27 17:37:08 +00:00
Erik Johnston
0c7f9cb81f Don't unnecessarily start bg process while handling typing. (#8668)
There's no point starting a background process when all it's going to do is bail if federation isn't enabled.
2020-10-27 15:32:19 +00:00
Dirk Klimpel
9b7c28283a Add admin API to list users' local media (#8647)
Add admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information of users' uploaded files.
2020-10-27 14:12:31 +00:00
Erik Johnston
24229fac05 Merge branch 'master' into develop 2020-10-27 12:12:54 +00:00
Jonas Jelten
2e380f0f18 e2e: ensure we have both master and self-signing key (#8455)
It seems to be possible that only one of them ends up being cached.
When this was the case, the missing one was not fetched via federation,
and clients then failed to validate cross-signed devices.

Signed-off-by: Jonas Jelten <jj@sft.lol>
2020-10-26 18:37:47 +00:00
Patrick Cloke
10f45d85bb Add type hints for account validity handler (#8620)
This also fixes a bug by fixing handling of an account which doesn't expire.
2020-10-26 14:17:31 -04:00
Dirk Klimpel
66e6801c3e Split admin API for reported events into a detail and a list view (#8539)
Split the admin API for reported events into a detail view and a list view.
The API was introduced with #8217 in Synapse v1.21.0.

It makes the list (`GET /_synapse/admin/v1/event_reports`) less complex and provides a better overview.
The details can be queried with: `GET /_synapse/admin/v1/event_reports/<report_id>`.
It is similar to the room and user APIs.

This is a kind of regression in `GET /_synapse/admin/v1/event_reports`: `event_json` was removed. However, the API was introduced only one version earlier, and it is an admin API (not under the spec).

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2020-10-26 18:16:37 +00:00
Peter Krantz
6c9ab61df5 Added basic instructions for Azure AD to OpenId documentation (#8582)
Signed-off-by: Peter Krantz peter.krantz@gmail.com
2020-10-26 17:49:55 +00:00
Dirk Klimpel
49d72dea2a Add an admin api to delete local media. (#8519)
Related to: #6459, #3479

Add `DELETE /_synapse/admin/v1/media/<server_name>/<media_id>` to delete
a single file from server.
2020-10-26 17:02:28 +00:00
Andrew Morgan
f6a3859a73 Fix filepath of Dex example config (#8657) 2020-10-26 16:53:11 +00:00
Dirk Klimpel
4ac3a8c5dc Fix a bug in the joined_rooms admin API (#8643)
If the user was not in any rooms then the API returned the same error
as if the user did not exist.
2020-10-26 12:25:48 -04:00
Erik Johnston
cf9a17a2b3 Merge tag 'v1.22.0rc2' into develop
Synapse 1.22.0rc2 (2020-10-26)
==============================

Bugfixes
--------

- Fix bugs where ephemeral events were not sent to appservices. Broke in v1.22.0rc1. ([\#8648](https://github.com/matrix-org/synapse/issues/8648), [\#8656](https://github.com/matrix-org/synapse/issues/8656))
- Fix `user_daily_visits` table to not have duplicate rows per user/device due to multiple user agents. Broke in v1.22.0rc1. ([\#8654](https://github.com/matrix-org/synapse/issues/8654))
2020-10-26 15:23:13 +00:00
Erik Johnston
ff7f0e8a14 Merge branch 'release-v1.22.0' into develop 2020-10-26 15:02:55 +00:00
Will Hunt
e8dbbcb64c Fix get|set_type_stream_id_for_appservice store functions (#8648) 2020-10-26 10:51:33 -04:00
Andrew Morgan
73d8209694 Correct the package name in OpenID Connect install instructions (#8634)
The OpenID Connect install instructions suggested installing `synapse[oidc]`, but our PyPI package is called `matrix-synapse`.
2020-10-26 14:45:33 +00:00
Dirk Klimpel
913f8a06e4 Add field total to device list in admin API (#8644) 2020-10-26 14:07:51 +00:00
LEdoian
7b13780c54 Check status codes that profile handler returns (#8580)
Fixes #8520

Signed-off-by: Pavel Turinsky <pavel.turinsky@matfyz.cz>

Co-authored-by: Erik Johnston <erikj@jki.re>
2020-10-26 13:55:21 +00:00
Erik Johnston
2b7c180879 Start fewer opentracing spans (#8640)
#8567 started a span for every background process. This is good as it means all Synapse code that gets run should be in a span (unless in the sentinel logging context), but it means we generate about 15x the number of spans as we did previously.

This PR attempts to reduce that number by a) not starting one for send commands to Redis, and b) deferring starting background processes until after we're sure they're necessary.

I don't really know how much this will help.
2020-10-26 09:30:19 +00:00
Patrick Cloke
34a5696f93 Fix typos and spelling errors. (#8639) 2020-10-23 12:38:40 -04:00
Erik Johnston
c850dd9a8e Fix handling of User-Agent headers with bad utf-8. (#8632) 2020-10-23 17:12:59 +01:00
Erik Johnston
db9ef792f0 Fix email notifications for invites without local state. (#8627)
This can happen if e.g. the room invited into is no longer on the
server (or if all users left the room).
2020-10-23 10:41:32 +01:00
Andrew Morgan
f28756bb40 Changelog 2020-10-22 18:33:02 +01:00
Andrew Morgan
4fb7a68a65 Correct the package name in authlib install instructions 2020-10-22 18:25:58 +01:00
Erik Johnston
054a6b9538 Merge tag 'v1.22.0rc1' into develop
Synapse 1.22.0rc1 (2020-10-22)
==============================

Features
--------

- Add a configuration option for always using the "userinfo endpoint" for OpenID Connect. This fixes support for some identity providers, e.g. GitLab. Contributed by Benjamin Koch. ([\#7658](https://github.com/matrix-org/synapse/issues/7658))
- Add ability for `ThirdPartyEventRules` modules to query and manipulate whether a room is in the public rooms directory. ([\#8292](https://github.com/matrix-org/synapse/issues/8292), [\#8467](https://github.com/matrix-org/synapse/issues/8467))
- Add support for olm fallback keys ([MSC2732](https://github.com/matrix-org/matrix-doc/pull/2732)). ([\#8312](https://github.com/matrix-org/synapse/issues/8312), [\#8501](https://github.com/matrix-org/synapse/issues/8501))
- Add support for running background tasks in a separate worker process. ([\#8369](https://github.com/matrix-org/synapse/issues/8369), [\#8458](https://github.com/matrix-org/synapse/issues/8458), [\#8489](https://github.com/matrix-org/synapse/issues/8489), [\#8513](https://github.com/matrix-org/synapse/issues/8513), [\#8544](https://github.com/matrix-org/synapse/issues/8544), [\#8599](https://github.com/matrix-org/synapse/issues/8599))
- Add support for device dehydration ([MSC2697](https://github.com/matrix-org/matrix-doc/pull/2697)). ([\#8380](https://github.com/matrix-org/synapse/issues/8380))
- Add support for [MSC2409](https://github.com/matrix-org/matrix-doc/pull/2409), which allows sending typing, read receipts, and presence events to appservices. ([\#8437](https://github.com/matrix-org/synapse/issues/8437), [\#8590](https://github.com/matrix-org/synapse/issues/8590))
- Change default room version to "6", per [MSC2788](https://github.com/matrix-org/matrix-doc/pull/2788). ([\#8461](https://github.com/matrix-org/synapse/issues/8461))
- Add the ability to send non-membership events into a room via the `ModuleApi`. ([\#8479](https://github.com/matrix-org/synapse/issues/8479))
- Increase default upload size limit from 10M to 50M. Contributed by @Akkowicz. ([\#8502](https://github.com/matrix-org/synapse/issues/8502))
- Add support for modifying event content in `ThirdPartyRules` modules. ([\#8535](https://github.com/matrix-org/synapse/issues/8535), [\#8564](https://github.com/matrix-org/synapse/issues/8564))

Bugfixes
--------

- Fix a longstanding bug where invalid ignored users in account data could break clients. ([\#8454](https://github.com/matrix-org/synapse/issues/8454))
- Fix a bug where backfilling a room with an event that was missing the `redacts` field would break. ([\#8457](https://github.com/matrix-org/synapse/issues/8457))
- Don't attempt to respond to some requests if the client has already disconnected. ([\#8465](https://github.com/matrix-org/synapse/issues/8465))
- Fix message duplication if something goes wrong after persisting the event. ([\#8476](https://github.com/matrix-org/synapse/issues/8476))
- Fix incremental sync returning an incorrect `prev_batch` token in timeline section, which when used to paginate returned events that were included in the incremental sync. Broken since v0.16.0. ([\#8486](https://github.com/matrix-org/synapse/issues/8486))
- Expose the `uk.half-shot.msc2778.login.application_service` to clients from the login API. This feature was added in v1.21.0, but was not exposed as a potential login flow. ([\#8504](https://github.com/matrix-org/synapse/issues/8504))
- Fix error code for `/profile/{userId}/displayname` to be `M_BAD_JSON`. ([\#8517](https://github.com/matrix-org/synapse/issues/8517))
- Fix a bug introduced in v1.7.0 that could cause Synapse to insert values from non-state `m.room.retention` events into the `room_retention` database table. ([\#8527](https://github.com/matrix-org/synapse/issues/8527))
- Fix not sending events over federation when using sharded event writers. ([\#8536](https://github.com/matrix-org/synapse/issues/8536))
- Fix a long standing bug where email notifications for encrypted messages were blank. ([\#8545](https://github.com/matrix-org/synapse/issues/8545))
- Fix increase in the number of `There was no active span...` errors logged when using OpenTracing. ([\#8567](https://github.com/matrix-org/synapse/issues/8567))
- Fix a bug that prevented errors encountered during execution of the `synapse_port_db` from being correctly printed. ([\#8585](https://github.com/matrix-org/synapse/issues/8585))
- Fix appservice transactions to only include a maximum of 100 persistent and 100 ephemeral events. ([\#8606](https://github.com/matrix-org/synapse/issues/8606))

Updates to the Docker image
---------------------------

- Added multi-arch support (arm64,arm/v7) for the docker images. Contributed by @maquis196. ([\#7921](https://github.com/matrix-org/synapse/issues/7921))
- Add support for passing commandline args to the synapse process. Contributed by @samuel-p. ([\#8390](https://github.com/matrix-org/synapse/issues/8390))

Improved Documentation
----------------------

- Update the directions for using the manhole with coroutines. ([\#8462](https://github.com/matrix-org/synapse/issues/8462))
- Improve readme by adding new shield.io badges. ([\#8493](https://github.com/matrix-org/synapse/issues/8493))
- Added note about docker in manhole.md regarding which ip address to bind to. Contributed by @Maquis196. ([\#8526](https://github.com/matrix-org/synapse/issues/8526))
- Document the new behaviour of the `allowed_lifetime_min` and `allowed_lifetime_max` settings in the room retention configuration. ([\#8529](https://github.com/matrix-org/synapse/issues/8529))

Deprecations and Removals
-------------------------

- Drop unused `device_max_stream_id` table. ([\#8589](https://github.com/matrix-org/synapse/issues/8589))

Internal Changes
----------------

- Check for unreachable code with mypy. ([\#8432](https://github.com/matrix-org/synapse/issues/8432))
- Add unit test for event persister sharding. ([\#8433](https://github.com/matrix-org/synapse/issues/8433))
- Allow events to be sent to clients sooner when using sharded event persisters. ([\#8439](https://github.com/matrix-org/synapse/issues/8439), [\#8488](https://github.com/matrix-org/synapse/issues/8488), [\#8496](https://github.com/matrix-org/synapse/issues/8496), [\#8499](https://github.com/matrix-org/synapse/issues/8499))
- Configure `public_baseurl` when using demo scripts. ([\#8443](https://github.com/matrix-org/synapse/issues/8443))
- Add SQL logging on queries that happen during startup. ([\#8448](https://github.com/matrix-org/synapse/issues/8448))
- Speed up unit tests when using PostgreSQL. ([\#8450](https://github.com/matrix-org/synapse/issues/8450))
- Remove redundant database loads of stream_ordering for events we already have. ([\#8452](https://github.com/matrix-org/synapse/issues/8452))
- Reduce inconsistencies between codepaths for membership and non-membership events. ([\#8463](https://github.com/matrix-org/synapse/issues/8463))
- Combine `SpamCheckerApi` with the more generic `ModuleApi`. ([\#8464](https://github.com/matrix-org/synapse/issues/8464))
- Additional testing for `ThirdPartyEventRules`. ([\#8468](https://github.com/matrix-org/synapse/issues/8468))
- Add `-d` option to `./scripts-dev/lint.sh` to lint files that have changed since the last git commit. ([\#8472](https://github.com/matrix-org/synapse/issues/8472))
- Unblacklist some sytests. ([\#8474](https://github.com/matrix-org/synapse/issues/8474))
- Include the log level in the phone home stats. ([\#8477](https://github.com/matrix-org/synapse/issues/8477))
- Remove outdated sphinx documentation, scripts and configuration. ([\#8480](https://github.com/matrix-org/synapse/issues/8480))
- Clarify error message when plugin config parsers raise an error. ([\#8492](https://github.com/matrix-org/synapse/issues/8492))
- Remove the deprecated `Handlers` object. ([\#8494](https://github.com/matrix-org/synapse/issues/8494))
- Fix a threadsafety bug in unit tests. ([\#8497](https://github.com/matrix-org/synapse/issues/8497))
- Add user agent to user_daily_visits table. ([\#8503](https://github.com/matrix-org/synapse/issues/8503))
- Add type hints to various parts of the code base. ([\#8407](https://github.com/matrix-org/synapse/issues/8407), [\#8505](https://github.com/matrix-org/synapse/issues/8505), [\#8507](https://github.com/matrix-org/synapse/issues/8507), [\#8547](https://github.com/matrix-org/synapse/issues/8547), [\#8562](https://github.com/matrix-org/synapse/issues/8562), [\#8609](https://github.com/matrix-org/synapse/issues/8609))
- Remove unused code from the test framework. ([\#8514](https://github.com/matrix-org/synapse/issues/8514))
- Apply some internal fixes to the `HomeServer` class to make its code more idiomatic and statically-verifiable. ([\#8515](https://github.com/matrix-org/synapse/issues/8515))
- Factor out common code between `RoomMemberHandler._locally_reject_invite` and `EventCreationHandler.create_event`. ([\#8537](https://github.com/matrix-org/synapse/issues/8537))
- Improve database performance by executing more queries without starting transactions. ([\#8542](https://github.com/matrix-org/synapse/issues/8542))
- Rename `Cache` to `DeferredCache`, to better reflect its purpose. ([\#8548](https://github.com/matrix-org/synapse/issues/8548))
- Move metric registration code down into `LruCache`. ([\#8561](https://github.com/matrix-org/synapse/issues/8561), [\#8591](https://github.com/matrix-org/synapse/issues/8591))
- Replace `DeferredCache` with the lighter-weight `LruCache` where possible. ([\#8563](https://github.com/matrix-org/synapse/issues/8563))
- Add virtualenv-generated folders to `.gitignore`. ([\#8566](https://github.com/matrix-org/synapse/issues/8566))
- Add `get_immediate` method to `DeferredCache`. ([\#8568](https://github.com/matrix-org/synapse/issues/8568))
- Fix mypy not properly checking across the codebase, additionally, fix a typing assertion error in `handlers/auth.py`. ([\#8569](https://github.com/matrix-org/synapse/issues/8569))
- Fix `synmark` benchmark runner. ([\#8571](https://github.com/matrix-org/synapse/issues/8571))
- Modify `DeferredCache.get()` to return `Deferred`s instead of `ObservableDeferred`s. ([\#8572](https://github.com/matrix-org/synapse/issues/8572))
- Adjust a protocol-type definition to fit `sqlite3` assertions. ([\#8577](https://github.com/matrix-org/synapse/issues/8577))
- Support macOS on the `synmark` benchmark runner. ([\#8578](https://github.com/matrix-org/synapse/issues/8578))
- Update `mypy` static type checker to 0.790. ([\#8583](https://github.com/matrix-org/synapse/issues/8583), [\#8600](https://github.com/matrix-org/synapse/issues/8600))
- Re-organize the structured logging code to separate the TCP transport handling from the JSON formatting. ([\#8587](https://github.com/matrix-org/synapse/issues/8587))
- Remove extraneous unittest logging decorators from unit tests. ([\#8592](https://github.com/matrix-org/synapse/issues/8592))
- Minor optimisations in caching code. ([\#8593](https://github.com/matrix-org/synapse/issues/8593), [\#8594](https://github.com/matrix-org/synapse/issues/8594))
2020-10-22 13:37:08 +01:00
Patrick Cloke
514a240aed Remove unused OPTIONS handlers. (#8621)
The handling of OPTIONS requests was consolidated in #7534, but the endpoint
specific handlers were not removed.
2020-10-22 08:35:55 -04:00
Erik Johnston
b19b63e6b4 Don't 500 for invalid group IDs (#8628) 2020-10-22 13:19:06 +01:00
Erik Johnston
a9f90fa73a Type hints for RegistrationStore (#8615) 2020-10-22 11:56:58 +01:00
Erik Johnston
2ac908f377 Don't instantiate Requester directly (#8614) 2020-10-22 10:11:06 +01:00
201 changed files with 4987 additions and 2021 deletions


@@ -46,7 +46,7 @@ locally. You'll need python 3.6 or later, and to install a number of tools:
```
# Install the dependencies
pip install -e ".[lint]"
pip install -e ".[lint,mypy]"
# Run the linter script
./scripts-dev/lint.sh
@@ -63,7 +63,7 @@ run-time:
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
-You can also provided the `-d` option, which will lint the files that have been
+You can also provide the `-d` option, which will lint the files that have been
changed since the last git commit. This will often be significantly faster than
linting the whole codebase.


@@ -57,7 +57,7 @@ light workloads.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
-- Python 3.5.2 or later, up to Python 3.8.
+- Python 3.5.2 or later, up to Python 3.9.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in


@@ -256,9 +256,9 @@ directory of your choice::
Synapse has a number of external dependencies, that are easiest
to install using pip and a virtualenv::
-virtualenv -p python3 env
-source env/bin/activate
-python -m pip install --no-use-pep517 -e ".[all]"
+python3 -m venv ./env
+source ./env/bin/activate
+pip install -e ".[all,test]"
This will run a process of downloading and installing all the needed
dependencies into a virtual env.
@@ -270,9 +270,9 @@ check that everything is installed as it should be::
This should end with a 'PASSED' result::
-Ran 143 tests in 0.601s
+Ran 1266 tests in 643.930s
-PASSED (successes=143)
+PASSED (skips=15, successes=1251)
Running the Integration Tests
=============================


@@ -75,6 +75,22 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.23.0
====================
Structured logging configuration breaking changes
-------------------------------------------------
This release deprecates use of the ``structured: true`` logging configuration for
structured logging. If your logging configuration contains ``structured: true``
then it should be modified based on the `structured logging documentation
<https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>`_.
The ``structured`` and ``drains`` logging options are now deprecated and should
be replaced by standard logging configuration of ``handlers`` and ``formatters``.
A future release of Synapse will make using ``structured: true`` an error.
Upgrading to v1.22.0
====================

changelog.d/8455.bugfix (new file)

@@ -0,0 +1 @@
Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already.

changelog.d/8519.feature (new file)

@@ -0,0 +1 @@
Add an admin API to delete a single file, or files that were not used for a defined time, from the server. Contributed by @dklimpel.

changelog.d/8539.feature (new file)

@@ -0,0 +1 @@
Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel.

changelog.d/8559.misc (new file)

@@ -0,0 +1 @@
Optimise `/createRoom` with multiple invited users.

changelog.d/8580.bugfix (new file)

@@ -0,0 +1 @@
Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information.

changelog.d/8582.doc (new file)

@@ -0,0 +1 @@
Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk.

changelog.d/8595.misc (new file)

@@ -0,0 +1 @@
Implement and use an @lru_cache decorator.

changelog.d/8607.feature (new file)

@@ -0,0 +1 @@
Support generating structured logs via the standard logging configuration.

changelog.d/8610.feature (new file)

@@ -0,0 +1 @@
Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel.

changelog.d/8614.misc (new file)

@@ -0,0 +1 @@
Don't instantiate Requester directly.

changelog.d/8615.misc (new file)

@@ -0,0 +1 @@
Type hints for `RegistrationStore`.

changelog.d/8616.misc (new file)

@@ -0,0 +1 @@
Change schema to support access tokens belonging to one user but granting access to another.

changelog.d/8620.bugfix (new file)

@@ -0,0 +1 @@
Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error.

changelog.d/8621.misc (new file)

@@ -0,0 +1 @@
Remove unused OPTIONS handlers.

changelog.d/8627.bugfix (new file)

@@ -0,0 +1 @@
Fix email notifications for invites without local state.

changelog.d/8628.bugfix (new file)

@@ -0,0 +1 @@
Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500.

changelog.d/8632.bugfix (new file)

@@ -0,0 +1 @@
Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded.

changelog.d/8633.misc (new file)

@@ -0,0 +1 @@
Run `mypy` as part of the lint.sh script.

changelog.d/8634.misc (new file)

@@ -0,0 +1 @@
Correct Synapse's PyPI package name in the OpenID Connect installation instructions.

changelog.d/8635.doc (new file)

@@ -0,0 +1 @@
Improve the sample configuration for single sign-on providers.

changelog.d/8639.misc (new file)

@@ -0,0 +1 @@
Fix typos and spelling errors in the code.

changelog.d/8640.misc (new file)

@@ -0,0 +1 @@
Reduce number of OpenTracing spans started.

changelog.d/8643.bugfix (new file)

@@ -0,0 +1 @@
Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0.

changelog.d/8644.misc (new file)

@@ -0,0 +1 @@
Add field `total` to device list in admin API.

changelog.d/8647.feature (new file)

@@ -0,0 +1 @@
Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel.

changelog.d/8655.misc (new file)

@@ -0,0 +1 @@
Add more type hints to the application services code.

changelog.d/8657.doc (new file)

@@ -0,0 +1 @@
Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs.

changelog.d/8664.misc (new file)

@@ -0,0 +1 @@
Tell Black to format code for Python 3.5.

changelog.d/8665.doc (new file)

@@ -0,0 +1 @@
Note support for Python 3.9.

changelog.d/8666.doc (new file)

@@ -0,0 +1 @@
Minor updates to docs on running tests.

changelog.d/8667.doc (new file)

@@ -0,0 +1 @@
Interlink prometheus/grafana documentation.

changelog.d/8668.misc (new file)

@@ -0,0 +1 @@
Reduce number of OpenTracing spans started.

changelog.d/8669.misc (new file)

@@ -0,0 +1 @@
Don't pull event from DB when handling replication traffic.

changelog.d/8670.misc (new file)

@@ -0,0 +1 @@
Reduce number of OpenTracing spans started.

changelog.d/8671.misc (new file)

@@ -0,0 +1 @@
Abstract some invite-related code in preparation for landing knocking.

changelog.d/8679.misc (new file)

@@ -0,0 +1 @@
Clarify representation of events in logfiles.

changelog.d/8680.misc (new file)

@@ -0,0 +1 @@
Don't require `hiredis` package to be installed to run unit tests.

changelog.d/8682.bugfix (new file)

@@ -0,0 +1 @@
Fix an exception when handling multiple concurrent requests for remote media while using multiple media repositories.

changelog.d/8684.misc (new file)

@@ -0,0 +1 @@
Fix typing info on cache call signature to accept `on_invalidate`.

changelog.d/8685.feature (new file)

@@ -0,0 +1 @@
Support generating structured logs via the standard logging configuration.

changelog.d/8688.misc (new file)

@@ -0,0 +1 @@
Abstract some invite-related code in preparation for landing knocking.

changelog.d/8689.feature (new file)

@@ -0,0 +1 @@
Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel.

changelog.d/8690.misc (new file)

@@ -0,0 +1 @@
Fail tests if they do not await coroutines.

changelog.d/8693.misc (new file)

@@ -0,0 +1 @@
Add more type hints to the application services code.


@@ -3,4 +3,4 @@
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
-3. Set up additional recording rules
+3. Set up required recording rules. https://github.com/matrix-org/synapse/tree/master/contrib/prometheus


@@ -17,67 +17,26 @@ It returns a JSON body like the following:
{
"event_reports": [
{
"content": {
"reason": "foo",
"score": -100
},
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
"event_json": {
"auth_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
],
"content": {
"body": "matrix.org: This Week in Matrix",
"format": "org.matrix.custom.html",
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
"msgtype": "m.notice"
},
"depth": 546,
"hashes": {
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
},
"origin": "matrix.org",
"origin_server_ts": 1592291711430,
"prev_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
],
"prev_state": [],
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"sender": "@foobar:matrix.org",
"signatures": {
"matrix.org": {
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 1592291711430,
}
},
"id": 2,
"reason": "foo",
"score": -100,
"received_ts": 1570897107409,
"room_alias": "#alias1:matrix.org",
"canonical_alias": "#alias1:matrix.org",
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"name": "Matrix HQ",
"sender": "@foobar:matrix.org",
"user_id": "@foo:matrix.org"
},
{
"content": {
"reason": "bar",
"score": -100
},
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
"event_json": {
// hidden items
// see above
},
"id": 3,
"reason": "bar",
"score": -100,
"received_ts": 1598889612059,
"room_alias": "#alias2:matrix.org",
"canonical_alias": "#alias2:matrix.org",
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
"name": "Your room name here",
"sender": "@foobar:matrix.org",
"user_id": "@bar:matrix.org"
}
@@ -113,17 +72,94 @@ The following fields are returned in the JSON response body:
- ``id``: integer - ID of event report.
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
- ``room_id``: string - The ID of the room in which the event being reported is located.
- ``name``: string - The name of the room.
- ``event_id``: string - The ID of the reported event.
- ``user_id``: string - This is the user who reported the event and wrote the reason.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``content``: object - Content of reported event.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
- ``room_alias``: string - The alias of the room. ``null`` if the room does not have a canonical alias set.
- ``event_json``: object - Details of the original event that was reported.
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
- ``next_token``: integer - Indication for pagination. See above.
- ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).
Show details of a specific event report
=======================================
This API returns information about a specific event report.
The API is::
GET /_synapse/admin/v1/event_reports/<report_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
It returns a JSON body like the following:
.. code:: jsonc
{
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
"event_json": {
"auth_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
],
"content": {
"body": "matrix.org: This Week in Matrix",
"format": "org.matrix.custom.html",
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
"msgtype": "m.notice"
},
"depth": 546,
"hashes": {
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
},
"origin": "matrix.org",
"origin_server_ts": 1592291711430,
"prev_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
],
"prev_state": [],
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"sender": "@foobar:matrix.org",
"signatures": {
"matrix.org": {
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 1592291711430,
}
},
"id": <report_id>,
"reason": "foo",
"score": -100,
"received_ts": 1570897107409,
"canonical_alias": "#alias1:matrix.org",
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"name": "Matrix HQ",
"sender": "@foobar:matrix.org",
"user_id": "@foo:matrix.org"
}
**URL parameters:**
- ``report_id``: string - The ID of the event report.
**Response**
The following fields are returned in the JSON response body:
- ``id``: integer - ID of event report.
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
- ``room_id``: string - The ID of the room in which the event being reported is located.
- ``name``: string - The name of the room.
- ``event_id``: string - The ID of the reported event.
- ``user_id``: string - This is the user who reported the event and wrote the reason.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
- ``event_json``: object - Details of the original event that was reported.
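As an illustration, fetching a single report from this endpoint could look like
the following (a sketch assuming the third-party ``requests`` package; the
host, token and report ID are placeholders):

.. code:: python

    import requests

    resp = requests.get(
        "https://matrix.example.com/_synapse/admin/v1/event_reports/3",
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    report = resp.json()
    # `event_json` carries the full original event, per the fields above.
    print(report["reason"], report["score"], report["event_json"]["type"])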


@@ -100,3 +100,82 @@ Response:
"num_quarantined": 10 # The number of media items successfully quarantined
}
```
# Delete local media
This API deletes the *local* media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g. https://github.com/turt2live/matrix-media-repo/).
See also [purge_remote_media.rst](purge_remote_media.rst).
## Delete a specific local media
Delete a specific `media_id`.
Request:
```
DELETE /_synapse/admin/v1/media/<server_name>/<media_id>
{}
```
URL Parameters
* `server_name`: string - The name of your local server (e.g. `matrix.org`)
* `media_id`: string - The ID of the media (e.g. `abcdefghijklmnopqrstuvwx`)
Response:
```json
{
"deleted_media": [
"abcdefghijklmnopqrstuvwx"
],
"total": 1
}
```
The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
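For example, the deletion request could be issued as follows (a sketch assuming the third-party `requests` package; server name, media ID and token are placeholders):

```python
import requests

resp = requests.delete(
    "https://matrix.example.com/_synapse/admin/v1/media/matrix.example.com/abcdefghijklmnopqrstuvwx",
    headers={"Authorization": "Bearer <admin_access_token>"},
)
print(resp.json())  # e.g. {"deleted_media": ["abcdefghijklmnopqrstuvwx"], "total": 1}
```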
## Delete local media by date or size
Request:
```
POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>
{}
```
URL Parameters
* `server_name`: string - The name of your local server (e.g. `matrix.org`).
* `before_ts`: string representing a positive integer - Unix timestamp in ms.
Files that were last used before this timestamp will be deleted. It is the timestamp of
last access, not the timestamp of creation.
* `size_gt`: Optional - string representing a positive integer - Size of the media in bytes.
Files that are larger will be deleted. Defaults to `0`.
* `keep_profiles`: Optional - string representing a boolean - Switch to also delete files
that are still used in image data (e.g. user profiles, room avatars).
If `false` these files will be deleted. Defaults to `true`.
Response:
```json
{
"deleted_media": [
"abcdefghijklmnopqrstuvwx",
"abcdefghijklmnopqrstuvwz"
],
"total": 2
}
```
The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`


@@ -341,6 +341,89 @@ The following fields are returned in the JSON response body:
- ``total`` - Number of rooms.
List media of a user
====================
Gets a list of all local media that a specific ``user_id`` has created.
The response is ordered by creation date descending and media ID descending.
The newest media is on top.
The API is::
GET /_synapse/admin/v1/users/<user_id>/media
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
"media": [
{
"created_ts": 100400,
"last_access_ts": null,
"media_id": "qXhyRzulkwLsNHTbpHreuEgo",
"media_length": 67,
"media_type": "image/png",
"quarantined_by": null,
"safe_from_quarantine": false,
"upload_name": "test1.png"
},
{
"created_ts": 200400,
"last_access_ts": null,
"media_id": "FHfiSnzoINDatrXHQIXBtahw",
"media_length": 67,
"media_type": "image/png",
"quarantined_by": null,
"safe_from_quarantine": false,
"upload_name": "test2.png"
}
],
"next_token": 3,
"total": 2
}
To paginate, check for ``next_token`` and if present, call the endpoint again
with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more
media to paginate through.
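A sketch of that pagination loop (assumes the third-party ``requests``
package; host, token and user ID are placeholders):

.. code:: python

    import requests

    base = "https://matrix.example.com"
    headers = {"Authorization": "Bearer <admin_access_token>"}

    media = []
    params = {"from": 0, "limit": 100}
    while True:
        page = requests.get(
            base + "/_synapse/admin/v1/users/@user:server.com/media",
            headers=headers, params=params,
        ).json()
        media.extend(page["media"])
        if "next_token" not in page:
            break  # no more pages
        params["from"] = page["next_token"]
    print("fetched %d of %d media items" % (len(media), page["total"]))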
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - string - fully qualified: for example, ``@user:server.com``.
- ``limit``: string representing a positive integer - Is optional but is used for pagination,
denoting the maximum number of items to return in this call. Defaults to ``100``.
- ``from``: string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
Defaults to ``0``.
**Response**
The following fields are returned in the JSON response body:
- ``media`` - An array of objects, each containing information about a media item.
Media objects contain the following fields:
- ``created_ts`` - integer - Timestamp when the content was uploaded in ms.
- ``last_access_ts`` - integer - Timestamp when the content was last accessed in ms.
- ``media_id`` - string - The id used to refer to the media.
- ``media_length`` - integer - Length of the media in bytes.
- ``media_type`` - string - The MIME-type of the media.
- ``quarantined_by`` - string - The user ID that initiated the quarantine request
for this media.
- ``safe_from_quarantine`` - bool - Whether this media is safe from quarantining.
- ``upload_name`` - string - The name the media was uploaded with.
- ``next_token``: integer - Indication for pagination. See above.
- ``total`` - integer - Total number of media.
User devices
============
@@ -375,7 +458,8 @@ A response body like the following is returned:
"last_seen_ts": 1474491775025,
"user_id": "<user_id>"
}
]
],
"total": 2
}
**Parameters**
@@ -400,6 +484,8 @@ The following fields are returned in the JSON response body:
devices was last seen. (May be a few minutes out of date, for efficiency reasons).
- ``user_id`` - Owner of device.
- ``total`` - Total number of user's devices.
Delete multiple devices
------------------
Deletes the given devices for a specific ``user_id``, and invalidates
@@ -525,3 +611,82 @@ The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to delete.
List all pushers
================
Gets information about all pushers for a specific ``user_id``.
The API is::
GET /_synapse/admin/v1/users/<user_id>/pushers
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
"pushers": [
{
"app_display_name":"HTTP Push Notifications",
"app_id":"m.http",
"data": {
"url":"example.com"
},
"device_display_name":"pushy push",
"kind":"http",
"lang":"None",
"profile_tag":"",
"pushkey":"a@example.com"
}
],
"total": 1
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
**Response**
The following fields are returned in the JSON response body:
- ``pushers`` - An array containing the current pushers for the user
- ``app_display_name`` - string - A string that will allow the user to identify
what application owns this pusher.
- ``app_id`` - string - This is a reverse-DNS style identifier for the application.
Max length, 64 chars.
- ``data`` - A dictionary of information for the pusher implementation itself.
- ``url`` - string - Required if ``kind`` is ``http``. The URL to use to send
notifications to.
- ``format`` - string - The format to use when sending notifications to the
Push Gateway.
- ``device_display_name`` - string - A string that will allow the user to identify
what device owns this pusher.
- ``kind`` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
- ``lang`` - string - The preferred language for receiving notifications
(e.g. 'en' or 'en-US')
- ``profile_tag`` - string - This string determines which set of device specific rules
this pusher executes.
- ``pushkey`` - string - This is a unique identifier for this pusher.
Max length, 512 bytes.
- ``total`` - integer - Number of pushers.
See also `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_
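For instance, listing and summarising a user's pushers (a sketch assuming the
third-party ``requests`` package; placeholders as before):

.. code:: python

    import requests

    resp = requests.get(
        "https://matrix.example.com/_synapse/admin/v1/users/@user:server.com/pushers",
        headers={"Authorization": "Bearer <admin_access_token>"},
    )
    body = resp.json()
    for pusher in body["pushers"]:
        print(pusher["kind"], pusher["app_id"], pusher["pushkey"])
    print("total:", body["total"])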


@@ -60,6 +60,8 @@
1. Restart Prometheus.
1. Consider using the [grafana dashboard](https://github.com/matrix-org/synapse/tree/master/contrib/grafana/) and required [recording rules](https://github.com/matrix-org/synapse/tree/master/contrib/prometheus/)
## Monitoring workers
To monitor a Synapse installation using


@@ -37,7 +37,7 @@ as follows:
provided by `matrix.org` so no further action is needed.
* If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
-install synapse[oidc]` to install the necessary dependencies.
+install matrix-synapse[oidc]` to install the necessary dependencies.
* For other installation mechanisms, see the documentation provided by the
maintainer.
@@ -52,14 +52,39 @@ specific providers.
Here are a few configs for providers that should work with Synapse.
### Microsoft Azure Active Directory
Azure AD can act as an OpenID Connect Provider. Register a new application under
*App registrations* in the Azure AD management console. The RedirectURI for your
application should point to your matrix server: `[synapse public baseurl]/_synapse/oidc/callback`
Go to *Certificates & secrets* and register a new client secret. Make note of your
Directory (tenant) ID as it will be used in the Azure links.
Edit your Synapse config file and change the `oidc_config` section:
```yaml
oidc_config:
enabled: true
issuer: "https://login.microsoftonline.com/<tenant id>/v2.0"
client_id: "<client id>"
client_secret: "<client secret>"
scopes: ["openid", "profile"]
authorization_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize"
token_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token"
userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo"
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username.split('@')[0] }}"
display_name_template: "{{ user.name }}"
```
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider with an
external database, it can be configured with static passwords in a config file.
-Follow the [Getting Started
-guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
+Follow the [Getting Started guide](https://dexidp.io/docs/getting-started/)
to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
@@ -73,7 +98,7 @@ staticClients:
name: 'Synapse'
```
-Run with `dex serve examples/config-dex.yaml`.
+Run with `dex serve examples/config-dev.yaml`.
Synapse config:


@@ -1505,10 +1505,8 @@ trusted_key_servers:
## Single sign-on integration ##
-# Enable SAML2 for registration and login. Uses pysaml2.
-#
-# At least one of `sp_config` or `config_path` must be set in this section to
-# enable SAML login.
+# The following settings can be used to make Synapse use a single sign-on
+# provider for authentication, instead of its internal password database.
#
# You will probably also want to set the following options to `false` to
# disable the regular login/registration flows:
@@ -1517,6 +1515,11 @@ trusted_key_servers:
#
# You will also want to investigate the settings under the "sso" configuration
# section below.
+# Enable SAML2 for registration and login. Uses pysaml2.
+#
+# At least one of `sp_config` or `config_path` must be set in this section to
+# enable SAML login.
+#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
@@ -1532,40 +1535,42 @@ saml2_config:
# so it is not normally necessary to specify them unless you need to
# override them.
#
#sp_config:
# # point this to the IdP's metadata. You can use either a local file or
# # (preferably) a URL.
# metadata:
# #local: ["saml2/idp.xml"]
# remote:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
#
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
sp_config:
# Point this to the IdP's metadata. You must provide either a local
# file via the `local` attribute or (preferably) a URL via the
# `remote` attribute.
#
#metadata:
# local: ["saml2/idp.xml"]
# remote:
# - url: https://our_idp/metadata.xml
# By default, the user has to go to our login page first. If you'd like
# to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# 'service.sp' section:
#
#service:
# sp:
# allow_unsolicited: true
# The examples below are just used to generate our metadata xml, and you
# may well not need them, depending on your setup. Alternatively you
# may need a whole lot more detail - see the pysaml2 docs!
#description: ["My awesome SP", "en"]
#name: ["Test SP", "en"]
#organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:
@@ -1641,11 +1646,10 @@ saml2_config:
# value: "sales"
-# OpenID Connect integration. The following settings can be used to make Synapse
-# use an OpenID Connect Provider for authentication, instead of its internal
-# password database.
+# Enable OpenID Connect (OIDC) / OAuth 2.0 for registration and login.
#
-# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
+# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
+# for some example configurations.
#
oidc_config:
# Uncomment the following to enable authorization against an OpenID Connect
@@ -1778,15 +1782,37 @@ oidc_config:
-# Enable CAS for registration and login.
+# Enable Central Authentication Service (CAS) for registration and login.
#
#cas_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homeserver.domain.com:8448"
# #displayname_attribute: name
# #required_attributes:
# # name: value
cas_config:
# Uncomment the following to enable authorization against a CAS server.
# Defaults to false.
#
#enabled: true
# The URL of the CAS authorization endpoint.
#
#server_url: "https://cas-server.com"
# The public URL of the homeserver.
#
#service_url: "https://homeserver.domain.com:8448"
# The attribute of the CAS response to use as the display name.
#
# If unset, no displayname will be set.
#
#displayname_attribute: name
# It is possible to configure Synapse to only allow logins if CAS attributes
# match particular values. All of the keys in the mapping below must exist
# and the values must match the given value. Alternately if the given value
# is None then any value is allowed (the attribute just must exist).
# All of the listed attributes must match for the login to be permitted.
#
#required_attributes:
# userGroup: "staff"
# department: None
# Additional settings to use with single-sign on systems such as OpenID Connect,
@@ -1886,7 +1912,7 @@ sso:
# and issued at ("iat") claims are validated if present.
#
# Note that this is a non-standard login type and client support is
-# expected to be non-existant.
+# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
#
@@ -2402,7 +2428,7 @@ spam_checker:
#
# Options for the rules include:
#
-# user_id: Matches agaisnt the creator of the alias
+# user_id: Matches against the creator of the alias
# room_id: Matches against the room ID being published
# alias: Matches against any current local or canonical aliases
# associated with the room
@@ -2448,7 +2474,7 @@ opentracing:
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By defult, it is empty, so no servers are matched.
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"

View File

@@ -3,7 +3,11 @@
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
version: 1
@@ -59,7 +63,7 @@ root:
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuation for the `twisted` logger above, in
# also need to update the configuration for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]

View File

@@ -1,11 +1,116 @@
# Structured Logging
A structured logging system can be useful when your logs are destined for a machine to parse and process. By maintaining its machine-readable characteristics, it enables more efficient searching and aggregations when consumed by software such as the "ELK stack".
A structured logging system can be useful when your logs are destined for a
machine to parse and process. By maintaining its machine-readable characteristics,
it enables more efficient searching and aggregations when consumed by software
such as the "ELK stack".
Synapse's structured logging system is configured via the file that Synapse's `log_config` config option points to. The file must be YAML and contain `structured: true`. It must contain a list of "drains" (places where logs go to).
Synapse's structured logging system is configured via the file that Synapse's
`log_config` config option points to. The file should include a formatter which
uses the `synapse.logging.TerseJsonFormatter` class included with Synapse and a
handler which uses the above formatter.
There is also a `synapse.logging.JsonFormatter` option which does not include
a timestamp in the resulting JSON. This is useful if the log ingester adds its
own timestamp.
A structured logging configuration looks similar to the following:
```yaml
version: 1

formatters:
    structured:
        class: synapse.logging.TerseJsonFormatter

handlers:
    file:
        class: logging.handlers.TimedRotatingFileHandler
        formatter: structured
        filename: /path/to/my/logs/homeserver.log
        when: midnight
        backupCount: 3  # Does not include the current log file.
        encoding: utf8

loggers:
    synapse:
        level: INFO
        handlers: [file]
    synapse.storage.SQL:
        level: WARNING
```
The above logging config will set Synapse's default logging level to 'INFO',
with the SQL layer at 'WARNING', and will log to a file, stored as JSON.
It is also possible to configure Synapse to log to a remote endpoint by using the
`synapse.logging.RemoteHandler` class included with Synapse. It takes the
following arguments:
- `host`: Hostname or IP address of the log aggregator.
- `port`: Numerical port to contact on the host.
- `maximum_buffer`: (Optional, defaults to 1000) The maximum buffer size to allow.
A remote structured logging configuration looks similar to the following:
```yaml
version: 1

formatters:
    structured:
        class: synapse.logging.TerseJsonFormatter

handlers:
    remote:
        class: synapse.logging.RemoteHandler
        formatter: structured
        host: 10.1.2.3
        port: 9999

loggers:
    synapse:
        level: INFO
        handlers: [remote]
    synapse.storage.SQL:
        level: WARNING
```
The above logging config will set Synapse's default logging level to 'INFO',
with the SQL layer at 'WARNING', and will log JSON formatted messages to a
remote endpoint at 10.1.2.3:9999.
## Upgrading from legacy structured logging configuration
Versions of Synapse prior to v1.23.0 included a custom structured logging
configuration which is deprecated. It used a `structured: true` flag and
configured `drains` instead of `handlers` and `formatters`.
Synapse currently automatically converts the old configuration to the new
configuration, but this will be removed in a future version of Synapse. The
following reference can be used to update your configuration. Based on the drain
`type`, we can pick a new handler:
1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
or `ext://sys.stderr` should be used.
2. For a type of `file` or `file_json`: a handler of `logging.FileHandler` with
a location of the file path should be used.
3. For a type of `network_json_terse`: a handler of `synapse.logging.RemoteHandler`
with the host and port should be used.
Then based on the drain `type` we can pick a new formatter:
1. For a type of `console` or `file` no formatter is necessary.
2. For a type of `console_json` or `file_json`: a formatter of
`synapse.logging.JsonFormatter` should be used.
3. For a type of `console_json_terse` or `network_json_terse`: a formatter of
`synapse.logging.TerseJsonFormatter` should be used.
Each new handler and formatter should be added to the logging configuration
and then assigned to either a logger or the root logger.
An example legacy configuration:
```yaml
structured: true
@@ -24,60 +129,33 @@ drains:
location: homeserver.log
```
The above logging config will set Synapse's default logging level to 'INFO', with the SQL layer at 'WARNING', and will have two logging drains (to the console and to a file, stored as JSON).
Would be converted into a new configuration:

```yaml
version: 1

formatters:
    json:
        class: synapse.logging.JsonFormatter

handlers:
    console:
        class: logging.StreamHandler
        stream: ext://sys.stdout
    file:
        class: logging.FileHandler
        formatter: json
        filename: homeserver.log

loggers:
    synapse:
        level: INFO
        handlers: [console, file]
    synapse.storage.SQL:
        level: WARNING
```

The new logging configuration is a bit more verbose, but significantly more
flexible. It allows for configurations that were not previously possible, such
as sending plain logs over the network, or using different handlers for
different modules.

## Drain Types

Drain types can be specified by the `type` key.

### `console`

Outputs human-readable logs to the console.

Arguments:

- `location`: Either `stdout` or `stderr`.

### `console_json`

Outputs machine-readable JSON logs to the console.

Arguments:

- `location`: Either `stdout` or `stderr`.

### `console_json_terse`

Outputs machine-readable JSON logs to the console, separated by newlines. This
format is not designed to be read and re-formatted into human-readable text, but
is optimal for a logging aggregation system.

Arguments:

- `location`: Either `stdout` or `stderr`.

### `file`

Outputs human-readable logs to a file.

Arguments:

- `location`: An absolute path to the file to log to.

### `file_json`

Outputs machine-readable logs to a file.

Arguments:

- `location`: An absolute path to the file to log to.

### `network_json_terse`

Delivers machine-readable JSON logs to a log aggregator over TCP. This is
compatible with LogStash's TCP input with the codec set to `json_lines`.

Arguments:

- `host`: Hostname or IP address of the log aggregator.
- `port`: Numerical port to contact on the host.

View File

@@ -17,6 +17,7 @@ files =
synapse/federation,
synapse/handlers/_base.py,
synapse/handlers/account_data.py,
synapse/handlers/account_validity.py,
synapse/handlers/appservice.py,
synapse/handlers/auth.py,
synapse/handlers/cas_handler.py,
@@ -56,7 +57,9 @@ files =
synapse/server_notices,
synapse/spam_checker_api,
synapse/state,
synapse/storage/databases/main/appservice.py,
synapse/storage/databases/main/events.py,
synapse/storage/databases/main/registration.py,
synapse/storage/databases/main/stream.py,
synapse/storage/databases/main/ui_auth.py,
synapse/storage/database.py,
@@ -80,6 +83,9 @@ ignore_missing_imports = True
[mypy-zope]
ignore_missing_imports = True
[mypy-bcrypt]
ignore_missing_imports = True
[mypy-constantly]
ignore_missing_imports = True

View File

@@ -35,7 +35,7 @@
showcontent = true
[tool.black]
target-version = ['py34']
target-version = ['py35']
exclude = '''
(

View File

@@ -80,7 +80,7 @@ else
# then lint everything!
if [[ -z ${files+x} ]]; then
# Lint all source code files and directories
files=("synapse" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py")
files=("synapse" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py" "synmark")
fi
fi
@@ -94,3 +94,4 @@ isort "${files[@]}"
python3 -m black "${files[@]}"
./scripts-dev/config-lint.sh
flake8 "${files[@]}"
mypy

View File

@@ -19,9 +19,10 @@ can crop up, e.g the cache descriptors.
from typing import Callable, Optional
from mypy.nodes import ARG_NAMED_OPT
from mypy.plugin import MethodSigContext, Plugin
from mypy.typeops import bind_self
from mypy.types import CallableType
from mypy.types import CallableType, NoneType
class SynapsePlugin(Plugin):
@@ -40,8 +41,9 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
It already has *almost* the correct signature, except:
1. the `self` argument needs to be marked as "bound"; and
2. any `cache_context` argument should be removed.
1. the `self` argument needs to be marked as "bound";
2. any `cache_context` argument should be removed;
3. an optional keyword argument `on_invalidate` should be added.
"""
# First we mark this as a bound function signature.
@@ -58,19 +60,33 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
context_arg_index = idx
break
arg_types = list(signature.arg_types)
arg_names = list(signature.arg_names)
arg_kinds = list(signature.arg_kinds)
if context_arg_index:
arg_types = list(signature.arg_types)
arg_types.pop(context_arg_index)
arg_names = list(signature.arg_names)
arg_names.pop(context_arg_index)
arg_kinds = list(signature.arg_kinds)
arg_kinds.pop(context_arg_index)
signature = signature.copy_modified(
arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
)
# Third, we add an optional "on_invalidate" argument.
#
# This is a callable which accepts no input and returns nothing.
calltyp = CallableType(
arg_types=[],
arg_kinds=[],
arg_names=[],
ret_type=NoneType(),
fallback=ctx.api.named_generic_type("builtins.function", []),
)
arg_types.append(calltyp)
arg_names.append("on_invalidate")
arg_kinds.append(ARG_NAMED_OPT) # Arg is an optional kwarg.
signature = signature.copy_modified(
arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
)
return signature
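Under these changes, a cached method is type-checked as though it carried the extra keyword argument. A rough sketch of the visible signature (illustrative class and method names, not Synapse's actual code):

```python
from typing import Callable, Optional

class Store:
    # How mypy sees a @cached method once the plugin has adjusted its
    # signature: on_invalidate is an optional keyword argument taking a
    # no-argument callable returning None, run when the entry is invalidated.
    def get_thing(
        self, thing_id: str, on_invalidate: Optional[Callable[[], None]] = None
    ) -> str:
        ...
```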

227
scripts-dev/release.py Normal file
View File

@@ -0,0 +1,227 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import sys
from typing import Optional
import click
import git
from packaging import version
from redbaron import RedBaron
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote.
"""
if ref_name in repo.refs:
return repo.refs[ref_name]
elif ref_name in repo.remote().refs:
return repo.remote().refs[ref_name]
else:
return None
def update_branch(repo: git.Repo):
"""Ensure branch is up to date if it has a remote
"""
if repo.active_branch.tracking_branch():
repo.git.merge(repo.active_branch.tracking_branch().name)
@click.command()
def release():
"""Main release command
"""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
click.secho("Updating git repo...")
repo.remote().fetch()
# Parse the AST and load the `__version__` node so that we can edit it
# later.
with open("synapse/__init__.py") as f:
red = RedBaron(f.read())
version_node = None
for node in red:
if node.type != "assignment":
continue
if node.target.type != "name":
continue
if node.target.value != "__version__":
continue
version_node = node
break
if not version_node:
print("Failed to find '__version__' definition in synapse/__init__.py")
sys.exit(1)
# Parse the current version.
current_version = version.parse(version_node.value.value.strip('"'))
assert isinstance(current_version, version.Version)
# Figure out what sort of release we're doing and calculate the new version.
rc = click.confirm("RC", default=True)
if current_version.pre:
# If the current version is an RC we don't need to bump any of the
# version numbers (other than the RC number).
base_version = "{}.{}.{}".format(
current_version.major, current_version.minor, current_version.micro,
)
if rc:
new_version = "{}.{}.{}rc{}".format(
current_version.major,
current_version.minor,
current_version.micro,
current_version.pre[1] + 1,
)
else:
new_version = base_version
else:
# If this is a new release cycle then we need to know if it's a major
# version bump or a hotfix.
release_type = click.prompt(
"Release type",
type=click.Choice(("major", "hotfix")),
show_choices=True,
default="major",
)
if release_type == "major":
base_version = new_version = "{}.{}.{}".format(
current_version.major, current_version.minor + 1, 0,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major, current_version.minor + 1, 0,
)
else:
base_version = new_version = "{}.{}.{}".format(
current_version.major, current_version.minor, current_version.micro + 1,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
# Confirm the calculated version is OK.
if not click.confirm(f"Create new version: {new_version}?", default=True):
click.get_current_context().abort()
# Switch to the release branch.
release_branch_name = f"release-v{base_version}"
release_branch = find_ref(repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
# If the release branch only exists on the remote we check it out
# locally.
repo.git.checkout(release_branch_name)
release_branch = repo.active_branch
else:
# If a branch doesn't exist we create one. We ask which branch it
# should be based off, defaulting to sensible values depending on the
# release type.
if current_version.is_prerelease:
default = release_branch_name
elif release_type == "major":
default = "develop"
else:
default = "master"
branch_name = click.prompt(
"Which branch should release be based off of?", default=default
)
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name}!")
click.get_current_context().abort()
# Check out the base branch and ensure it's up to date
repo.head.reference = base_branch
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Create the new release branch
release_branch = repo.create_head(release_branch_name, commit=base_branch)
# Switch to the release branch and ensure it's up to date.
repo.git.checkout(release_branch_name)
update_branch(repo)
# Update the `__version__` variable and write it back to the file.
version_node.value = '"' + new_version + '"'
with open("synapse/__init__.py", "w") as f:
f.write(red.dumps())
# Generate changelogs
subprocess.run("python3 -m towncrier", shell=True)
# Generate Debian changelogs if it's not an RC.
if not rc:
subprocess.run(
f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
)
subprocess.run('dch -M -r -D stable ""', shell=True)
# Show the user the changes and ask if they want to edit the change log.
repo.git.add("-u")
subprocess.run("git diff --cached", shell=True)
if click.confirm("Edit changelog?", default=False):
click.edit(filename="CHANGES.md")
# Commit the changes.
repo.git.add("-u")
repo.git.commit(f"-m {new_version}")
# We give the option to bail here in case the user wants to make sure things
# are OK before pushing.
if not click.confirm("Push branch to github?", default=True):
print("")
print("Run when ready to push:")
print("")
print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
print("")
sys.exit(0)
# Otherwise, push and open the changelog in the browser.
repo.git.push("-u", repo.remote().name, repo.active_branch.name)
click.launch(
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
)
if __name__ == "__main__":
release()
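For reference, the RC arithmetic above relies on `packaging.version` exposing the pre-release marker as a tuple; a small sanity-check sketch:

```python
from packaging import version

current = version.parse("1.22.0rc2")
assert isinstance(current, version.Version)
assert current.pre == ("rc", 2)  # pre-release phase and number
assert (current.major, current.minor, current.micro) == (1, 22, 0)

# Bumping only the RC number, as the script does mid-release-cycle:
next_rc = "{}.{}.{}rc{}".format(
    current.major, current.minor, current.micro, current.pre[1] + 1
)
assert next_rc == "1.22.0rc3"
```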

View File

@@ -131,6 +131,7 @@ setup(
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},

View File

@@ -33,6 +33,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -184,18 +185,12 @@ class Auth:
"""
try:
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
access_token = self.get_access_token_from_request(request)
user_id, app_service = await self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("appservice_id", app_service.id)
if ip_addr and self._track_appservice_user_ips:
await self.store.insert_client_ip(
user_id=user_id,
@@ -205,31 +200,38 @@ class Auth:
device_id="dummy-device", # stubbed
)
return synapse.types.create_requester(user_id, app_service=app_service)
requester = synapse.types.create_requester(
user_id, app_service=app_service
)
request.requester = user_id
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("user_id", user_id)
opentracing.set_tag("appservice_id", app_service.id)
return requester
user_info = await self.get_user_by_access_token(
access_token, rights, allow_expired=allow_expired
)
user = user_info["user"]
token_id = user_info["token_id"]
is_guest = user_info["is_guest"]
shadow_banned = user_info["shadow_banned"]
token_id = user_info.token_id
is_guest = user_info.is_guest
shadow_banned = user_info.shadow_banned
# Deny the request if the user account has expired.
if self._account_validity.enabled and not allow_expired:
user_id = user.to_string()
if await self.store.is_account_expired(user_id, self.clock.time_msec()):
if await self.store.is_account_expired(
user_info.user_id, self.clock.time_msec()
):
raise AuthError(
403, "User account has expired", errcode=Codes.EXPIRED_ACCOUNT
)
# device_id may not be present if get_user_by_access_token has been
# stubbed out.
device_id = user_info.get("device_id")
device_id = user_info.device_id
if user and access_token and ip_addr:
if access_token and ip_addr:
await self.store.insert_client_ip(
user_id=user.to_string(),
user_id=user_info.token_owner,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
@@ -243,19 +245,23 @@ class Auth:
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
request.authenticated_entity = user.to_string()
opentracing.set_tag("authenticated_entity", user.to_string())
if device_id:
opentracing.set_tag("device_id", device_id)
return synapse.types.create_requester(
user,
requester = synapse.types.create_requester(
user_info.user_id,
token_id,
is_guest,
shadow_banned,
device_id,
app_service=app_service,
authenticated_entity=user_info.token_owner,
)
request.requester = requester
opentracing.set_tag("authenticated_entity", user_info.token_owner)
opentracing.set_tag("user_id", user_info.user_id)
if device_id:
opentracing.set_tag("device_id", device_id)
return requester
except KeyError:
raise MissingClientTokenError()
@@ -286,7 +292,7 @@ class Auth:
async def get_user_by_access_token(
self, token: str, rights: str = "access", allow_expired: bool = False,
) -> dict:
) -> TokenLookupResult:
""" Validate access token and get user_id from it
Args:
@@ -295,13 +301,7 @@ class Auth:
allow this
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Returns:
dict that includes:
`user` (UserID)
`is_guest` (bool)
`shadow_banned` (bool)
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
InvalidClientTokenError if a user by that token exists, but the token is
expired
@@ -311,9 +311,9 @@ class Auth:
if rights == "access":
# first look in the database
r = await self._look_up_user_by_access_token(token)
r = await self.store.get_user_by_access_token(token)
if r:
valid_until_ms = r["valid_until_ms"]
valid_until_ms = r.valid_until_ms
if (
not allow_expired
and valid_until_ms is not None
@@ -330,7 +330,6 @@ class Auth:
# otherwise it needs to be a valid macaroon
try:
user_id, guest = self._parse_and_validate_macaroon(token, rights)
user = UserID.from_string(user_id)
if rights == "access":
if not guest:
@@ -356,23 +355,17 @@ class Auth:
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
ret = {
"user": user,
"is_guest": True,
"shadow_banned": False,
"token_id": None,
ret = TokenLookupResult(
user_id=user_id,
is_guest=True,
# all guests get the same device id
"device_id": GUEST_DEVICE_ID,
}
device_id=GUEST_DEVICE_ID,
)
elif rights == "delete_pusher":
# We don't store these tokens in the database
ret = {
"user": user,
"is_guest": False,
"shadow_banned": False,
"token_id": None,
"device_id": None,
}
ret = TokenLookupResult(user_id=user_id, is_guest=False)
else:
raise RuntimeError("Unknown rights setting %s" % (rights,))
return ret
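For readers following the refactor, a hedged sketch of what `TokenLookupResult` might look like, with fields inferred from the call sites above (the defaults and the `token_owner` fallback are assumptions; the real definition lives in `synapse/storage/databases/main/registration.py`):

```python
from typing import Optional

import attr

@attr.s(frozen=True, slots=True)
class TokenLookupResult:
    """Sketch of the token-lookup result consumed by Auth above."""

    user_id = attr.ib(type=str)  # the user the token grants access to
    is_guest = attr.ib(type=bool, default=False)
    shadow_banned = attr.ib(type=bool, default=False)
    token_id = attr.ib(type=Optional[int], default=None)  # None for guests
    device_id = attr.ib(type=Optional[str], default=None)
    valid_until_ms = attr.ib(type=Optional[int], default=None)

    # The entity that "owns" the token, which may differ from user_id for
    # delegated tokens; assumed to default to the target user.
    token_owner = attr.ib(type=str)

    @token_owner.default
    def _default_token_owner(self):
        return self.user_id
```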
@@ -481,31 +474,15 @@ class Auth:
now = self.hs.get_clock().time_msec()
return now < expiry
async def _look_up_user_by_access_token(self, token):
ret = await self.store.get_user_by_access_token(token)
if not ret:
return None
# we use ret.get() below because *lots* of unit tests stub out
# get_user_by_access_token in a way where it only returns a couple of
# the fields.
user_info = {
"user": UserID.from_string(ret.get("name")),
"token_id": ret.get("token_id", None),
"is_guest": False,
"shadow_banned": ret.get("shadow_banned"),
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
return user_info
def get_appservice_by_req(self, request):
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
request.requester = synapse.types.create_requester(
service.sender, app_service=service
)
return service
async def is_server_admin(self, user: UserID) -> bool:

View File

@@ -52,11 +52,11 @@ class ApplicationService:
self,
token,
hostname,
id,
sender,
url=None,
namespaces=None,
hs_token=None,
sender=None,
id=None,
protocols=None,
rate_limited=True,
ip_range_whitelist=None,

View File

@@ -26,14 +26,14 @@ class CasConfig(Config):
def read_config(self, config, **kwargs):
cas_config = config.get("cas_config", None)
if cas_config:
self.cas_enabled = cas_config.get("enabled", True)
self.cas_enabled = cas_config and cas_config.get("enabled", True)
if self.cas_enabled:
self.cas_server_url = cas_config["server_url"]
self.cas_service_url = cas_config["service_url"]
self.cas_displayname_attribute = cas_config.get("displayname_attribute")
self.cas_required_attributes = cas_config.get("required_attributes", {})
self.cas_required_attributes = cas_config.get("required_attributes") or {}
else:
self.cas_enabled = False
self.cas_server_url = None
self.cas_service_url = None
self.cas_displayname_attribute = None
@@ -41,13 +41,35 @@ class CasConfig(Config):
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Enable CAS for registration and login.
# Enable Central Authentication Service (CAS) for registration and login.
#
#cas_config:
# enabled: true
# server_url: "https://cas-server.com"
# service_url: "https://homeserver.domain.com:8448"
# #displayname_attribute: name
# #required_attributes:
# # name: value
cas_config:
# Uncomment the following to enable authorization against a CAS server.
# Defaults to false.
#
#enabled: true
# The URL of the CAS authorization endpoint.
#
#server_url: "https://cas-server.com"
# The public URL of the homeserver.
#
#service_url: "https://homeserver.domain.com:8448"
# The attribute of the CAS response to use as the display name.
#
# If unset, no displayname will be set.
#
#displayname_attribute: name
# It is possible to configure Synapse to only allow logins if CAS attributes
# match particular values. All of the keys in the mapping below must exist
# and the values must match the given value. Alternately if the given value
# is None then any value is allowed (the attribute just must exist).
# All of the listed attributes must match for the login to be permitted.
#
#required_attributes:
# userGroup: "staff"
# department: None
"""

View File

@@ -63,7 +63,7 @@ class JWTConfig(Config):
# and issued at ("iat") claims are validated if present.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existant.
# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
#

View File

@@ -23,7 +23,6 @@ from string import Template
import yaml
from twisted.logger import (
ILogObserver,
LogBeginner,
STDLibLogObserver,
eventAsText,
@@ -32,11 +31,9 @@ from twisted.logger import (
import synapse
from synapse.app import _base as appbase
from synapse.logging._structured import (
reload_structured_logging,
setup_structured_logging,
)
from synapse.logging._structured import setup_structured_logging
from synapse.logging.context import LoggingContextFilter
from synapse.logging.filter import MetadataFilter
from synapse.util.versionstring import get_version_string
from ._base import Config, ConfigError
@@ -48,7 +45,11 @@ DEFAULT_LOG_CONFIG = Template(
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md
version: 1
@@ -105,7 +106,7 @@ root:
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuation for the `twisted` logger above, in
# also need to update the configuration for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
@@ -176,11 +177,11 @@ class LoggingConfig(Config):
log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))
def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) -> None:
"""
Set up Python stdlib logging.
Set up Python standard library logging.
"""
if log_config is None:
if log_config_path is None:
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
@@ -196,7 +197,8 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
handler.setFormatter(formatter)
logger.addHandler(handler)
else:
logging.config.dictConfig(log_config)
# Load the logging configuration.
_load_logging_config(log_config_path)
# We add a log record factory that runs all messages through the
# LoggingContextFilter so that we get the context *at the time we log*
@@ -204,12 +206,14 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
# filter options, but care must be taken when using e.g. MemoryHandler to buffer
# writes.
log_filter = LoggingContextFilter(request="")
log_context_filter = LoggingContextFilter(request="")
log_metadata_filter = MetadataFilter({"server_name": config.server_name})
old_factory = logging.getLogRecordFactory()
def factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
log_filter.filter(record)
log_context_filter.filter(record)
log_metadata_filter.filter(record)
return record
logging.setLogRecordFactory(factory)
@@ -255,21 +259,40 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
if not config.no_redirect_stdio:
print("Redirected stdout/stderr to logs")
return observer
def _reload_stdlib_logging(*args, log_config=None):
logger = logging.getLogger("")
def _load_logging_config(log_config_path: str) -> None:
"""
Configure logging from a log config path.
"""
with open(log_config_path, "rb") as f:
log_config = yaml.safe_load(f.read())
if not log_config:
logger.warning("Reloaded a blank config?")
logging.warning("Loaded a blank logging config?")
# If the old structured logging configuration is being used, convert it to
# the new style configuration.
if "structured" in log_config and log_config.get("structured"):
log_config = setup_structured_logging(log_config)
logging.config.dictConfig(log_config)
def _reload_logging_config(log_config_path):
"""
Reload the log configuration from the file and apply it.
"""
# If no log config path was given, it cannot be reloaded.
if log_config_path is None:
return
_load_logging_config(log_config_path)
logging.info("Reloaded log config from %s due to SIGHUP", log_config_path)
def setup_logging(
hs, config, use_worker_options=False, logBeginner: LogBeginner = globalLogBeginner
) -> ILogObserver:
) -> None:
"""
Set up the logging subsystem.
@@ -282,41 +305,18 @@ def setup_logging(
logBeginner: The Twisted logBeginner to use.
Returns:
The "root" Twisted Logger observer, suitable for sending logs to from a
Logger instance.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
log_config_path = (
config.worker_log_config if use_worker_options else config.log_config
)
def read_config(*args, callback=None):
if log_config is None:
return None
# Perform one-time logging configuration.
_setup_stdlib_logging(config, log_config_path, logBeginner=logBeginner)
# Add a SIGHUP handler to reload the logging configuration, if one is available.
appbase.register_sighup(_reload_logging_config, log_config_path)
with open(log_config, "rb") as f:
log_config_body = yaml.safe_load(f.read())
if callback:
callback(log_config=log_config_body)
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
return log_config_body
log_config_body = read_config()
if log_config_body and log_config_body.get("structured") is True:
logger = setup_structured_logging(
hs, config, log_config_body, logBeginner=logBeginner
)
appbase.register_sighup(read_config, callback=reload_structured_logging)
else:
logger = _setup_stdlib_logging(config, log_config_body, logBeginner=logBeginner)
appbase.register_sighup(read_config, callback=_reload_stdlib_logging)
# make sure that the first thing we log is a thing we can grep backwards
# for
# Log immediately so we can grep backwards.
logging.warning("***** STARTING SERVER *****")
logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
logging.info("Instance name: %s", hs.get_instance_name())
return logger

View File

@@ -87,11 +87,10 @@ class OIDCConfig(Config):
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# OpenID Connect integration. The following settings can be used to make Synapse
# use an OpenID Connect Provider for authentication, instead of its internal
# password database.
# Enable OpenID Connect (OIDC) / OAuth 2.0 for registration and login.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for some example configurations.
#
oidc_config:
# Uncomment the following to enable authorization against an OpenID Connect

View File

@@ -143,7 +143,7 @@ class RegistrationConfig(Config):
RoomCreationPreset.TRUSTED_PRIVATE_CHAT,
}
# Pull the creater/inviter from the configuration, this gets used to
# Pull the creator/inviter from the configuration, this gets used to
# send invites for invite-only rooms.
mxid_localpart = config.get("auto_join_mxid_localpart")
self.auto_join_user_id = None

View File

@@ -99,7 +99,7 @@ class RoomDirectoryConfig(Config):
#
# Options for the rules include:
#
# user_id: Matches agaisnt the creator of the alias
# user_id: Matches against the creator of the alias
# room_id: Matches against the room ID being published
# alias: Matches against any current local or canonical aliases
# associated with the room

View File

@@ -216,10 +216,8 @@ class SAML2Config(Config):
return """\
## Single sign-on integration ##
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
# The following settings can be used to make Synapse use a single sign-on
# provider for authentication, instead of its internal password database.
#
# You will probably also want to set the following options to `false` to
# disable the regular login/registration flows:
@@ -228,6 +226,11 @@ class SAML2Config(Config):
#
# You will also want to investigate the settings under the "sso" configuration
# section below.
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
@@ -243,40 +246,42 @@ class SAML2Config(Config):
# so it is not normally necessary to specify them unless you need to
# override them.
#
#sp_config:
# # point this to the IdP's metadata. You can use either a local file or
# # (preferably) a URL.
# metadata:
# #local: ["saml2/idp.xml"]
# remote:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
#
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address: ["admin@example.com"]
# contact_type: technical
sp_config:
# Point this to the IdP's metadata. You must provide either a local
# file via the `local` attribute or (preferably) a URL via the
# `remote` attribute.
#
#metadata:
# local: ["saml2/idp.xml"]
# remote:
# - url: https://our_idp/metadata.xml
# By default, the user has to go to our login page first. If you'd like
# to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# 'service.sp' section:
#
#service:
# sp:
# allow_unsolicited: true
# The examples below are just used to generate our metadata xml, and you
# may well not need them, depending on your setup. Alternatively you
# may need a whole lot more detail - see the pysaml2 docs!
#description: ["My awesome SP", "en"]
#name: ["Test SP", "en"]
#organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address: ["admin@example.com"]
# contact_type: technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:

View File

@@ -67,7 +67,7 @@ class TracerConfig(Config):
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By defult, it is empty, so no servers are matched.
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"

View File

@@ -149,7 +149,7 @@ class FederationPolicyForHTTPS:
return SSLClientConnectionCreator(host, ssl_context, should_verify)
def creatorForNetloc(self, hostname, port):
"""Implements the IPolicyForHTTPS interace so that this can be passed
"""Implements the IPolicyForHTTPS interface so that this can be passed
directly to agents.
"""
return self.get_options(hostname)

View File

@@ -59,7 +59,7 @@ class DictProperty:
#
# To exclude the KeyError from the traceback, we explicitly
# 'raise from e1.__context__' (which is better than 'raise from None',
# becuase that would omit any *earlier* exceptions).
# because that would omit any *earlier* exceptions).
#
raise AttributeError(
"'%s' has no '%s' property" % (type(instance), self.key)
@@ -368,7 +368,7 @@ class FrozenEvent(EventBase):
return self.__repr__()
def __repr__(self):
return "<FrozenEvent event_id='%s', type='%s', state_key='%s'>" % (
return "<FrozenEvent event_id=%r, type=%r, state_key=%r>" % (
self.get("event_id", None),
self.get("type", None),
self.get("state_key", None),
@@ -451,7 +451,7 @@ class FrozenEventV2(EventBase):
return self.__repr__()
def __repr__(self):
return "<%s event_id='%s', type='%s', state_key='%s'>" % (
return "<%s event_id=%r, type=%r, state_key=%r>" % (
self.__class__.__name__,
self.event_id,
self.get("type", None),

View File

@@ -180,7 +180,7 @@ def only_fields(dictionary, fields):
in 'fields'.
If there are no event fields specified then all fields are included.
The entries may include '.' charaters to indicate sub-fields.
The entries may include '.' characters to indicate sub-fields.
So ['content.body'] will include the 'body' field of the 'content' object.
A literal '.' character in a field name may be escaped using a '\'.

View File

@@ -154,7 +154,7 @@ class Authenticator:
)
logger.debug("Request from %s", origin)
request.authenticated_entity = origin
request.requester = origin
# If we get a valid signed request from the other side, it's probably
# alive

View File

@@ -22,7 +22,7 @@ attestations have a validity period so need to be periodically renewed.
If a user leaves (or gets kicked out of) a group, either side can still use
their attestation to "prove" their membership, until the attestation expires.
Therefore attestations shouldn't be relied on to prove membership in important
cases, but can for less important situtations, e.g. showing a users membership
cases, but can for less important situations, e.g. showing a user's membership
of groups on their profile, showing flairs, etc.
An attestation is a signed blob of json that looks like:

View File

@@ -113,7 +113,7 @@ class GroupsServerWorkerHandler:
entry = await self.room_list_handler.generate_room_entry(
room_id, len(joined_users), with_alias=False, allow_private=True
)
entry = dict(entry) # so we don't change whats cached
entry = dict(entry) # so we don't change what's cached
entry.pop("room_id", None)
room_entry["profile"] = entry
@@ -550,7 +550,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
group_id, room_id, is_public=is_public
)
else:
raise SynapseError(400, "Uknown config option")
raise SynapseError(400, "Unknown config option")
return {}

View File

@@ -18,19 +18,22 @@ import email.utils
import logging
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from typing import List
from typing import TYPE_CHECKING, List
from synapse.api.errors import StoreError
from synapse.api.errors import StoreError, SynapseError
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.types import UserID
from synapse.util import stringutils
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class AccountValidityHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.config = hs.config
self.store = self.hs.get_datastore()
@@ -67,7 +70,7 @@ class AccountValidityHandler:
self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)
@wrap_as_background_process("send_renewals")
async def _send_renewal_emails(self):
async def _send_renewal_emails(self) -> None:
"""Gets the list of users whose account is expiring in the amount of time
configured in the ``renew_at`` parameter from the ``account_validity``
configuration, and sends renewal emails to all of these users as long as they
@@ -81,11 +84,25 @@ class AccountValidityHandler:
user_id=user["user_id"], expiration_ts=user["expiration_ts_ms"]
)
async def send_renewal_email_to_user(self, user_id: str):
async def send_renewal_email_to_user(self, user_id: str) -> None:
"""
Send a renewal email for a specific user.
Args:
user_id: The user ID to send a renewal email for.
Raises:
SynapseError if the user is not set to renew.
"""
expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
# If this user isn't set to be expired, raise an error.
if expiration_ts is None:
raise SynapseError(400, "User has no expiration time: %s" % (user_id,))
await self._send_renewal_email(user_id, expiration_ts)
async def _send_renewal_email(self, user_id: str, expiration_ts: int):
async def _send_renewal_email(self, user_id: str, expiration_ts: int) -> None:
"""Sends out a renewal email to every email address attached to the given user
with a unique link allowing them to renew their account.

View File

@@ -88,7 +88,7 @@ class AdminHandler(BaseHandler):
# We only try and fetch events for rooms the user has been in. If
# they've been e.g. invited to a room without joining then we handle
# those seperately.
# those separately.
rooms_user_has_been_in = await self.store.get_rooms_user_has_been_in(user_id)
for index, room in enumerate(rooms):
@@ -226,7 +226,7 @@ class ExfiltrationWriter:
"""
def finished(self):
"""Called when all data has succesfully been exported and written.
"""Called when all data has successfully been exported and written.
This function's return value is passed to the caller of
`export_user_data`.

View File

@@ -12,9 +12,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Dict, List, Optional
from typing import TYPE_CHECKING, Dict, List, Optional, Union
from prometheus_client import Counter
@@ -30,17 +29,24 @@ from synapse.metrics import (
event_processing_loop_counter,
event_processing_loop_room_count,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import Collection, JsonDict, RoomStreamToken, UserID
from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
)
from synapse.storage.databases.main.directory import RoomAliasMapping
from synapse.types import Collection, JsonDict, RoomAlias, RoomStreamToken, UserID
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")
class ApplicationServicesHandler:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.is_mine_id = hs.is_mine_id
self.appservice_api = hs.get_application_service_api()
@@ -53,7 +59,7 @@ class ApplicationServicesHandler:
self.current_max = 0
self.is_processing = False
async def notify_interested_services(self, max_token: RoomStreamToken):
def notify_interested_services(self, max_token: RoomStreamToken):
"""Notifies (pushes) all application services interested in this event.
Pushing is done asynchronously, so this method won't block for any
@@ -72,6 +78,12 @@ class ApplicationServicesHandler:
if self.is_processing:
return
# We only start a new background process if necessary rather than
# optimistically (to cut down on overhead).
self._notify_interested_services(max_token)
@wrap_as_background_process("notify_interested_services")
async def _notify_interested_services(self, max_token: RoomStreamToken):
with Measure(self.clock, "notify_interested_services"):
self.is_processing = True
try:
@@ -166,8 +178,11 @@ class ApplicationServicesHandler:
finally:
self.is_processing = False
async def notify_interested_services_ephemeral(
self, stream_key: str, new_token: Optional[int], users: Collection[UserID] = [],
def notify_interested_services_ephemeral(
self,
stream_key: str,
new_token: Optional[int],
users: Collection[Union[str, UserID]] = [],
):
"""This is called by the notifier in the background
when an ephemeral event is handled by the homeserver.
@@ -183,13 +198,34 @@ class ApplicationServicesHandler:
new_token: The latest stream token
users: The user(s) involved with the event.
"""
if not self.notify_appservices:
return
if stream_key not in ("typing_key", "receipt_key", "presence_key"):
return
services = [
service
for service in self.store.get_app_services()
if service.supports_ephemeral
]
if not services or not self.notify_appservices:
if not services:
return
# We only start a new background process if necessary rather than
# optimistically (to cut down on overhead).
self._notify_interested_services_ephemeral(
services, stream_key, new_token, users
)
@wrap_as_background_process("notify_interested_services_ephemeral")
async def _notify_interested_services_ephemeral(
self,
services: List[ApplicationService],
stream_key: str,
new_token: Optional[int],
users: Collection[Union[str, UserID]],
):
logger.info("Checking interested services for %s" % (stream_key))
with Measure(self.clock, "notify_interested_services_ephemeral"):
for service in services:
@@ -214,7 +250,9 @@ class ApplicationServicesHandler:
service, "presence", new_token
)
async def _handle_typing(self, service: ApplicationService, new_token: int):
async def _handle_typing(
self, service: ApplicationService, new_token: int
) -> List[JsonDict]:
typing_source = self.event_sources.sources["typing"]
# Get the typing events from just before current
typing, _ = await typing_source.get_new_events_as(
@@ -226,7 +264,7 @@ class ApplicationServicesHandler:
)
return typing
async def _handle_receipts(self, service: ApplicationService):
async def _handle_receipts(self, service: ApplicationService) -> List[JsonDict]:
from_key = await self.store.get_type_stream_id_for_appservice(
service, "read_receipt"
)
@@ -237,7 +275,7 @@ class ApplicationServicesHandler:
return receipts
async def _handle_presence(
self, service: ApplicationService, users: Collection[UserID]
self, service: ApplicationService, users: Collection[Union[str, UserID]]
) -> List[JsonDict]:
events = [] # type: List[JsonDict]
presence_source = self.event_sources.sources["presence"]
@@ -245,6 +283,9 @@ class ApplicationServicesHandler:
service, "presence"
)
for user in users:
if isinstance(user, str):
user = UserID.from_string(user)
interested = await service.is_interested_in_presence(user, self.store)
if not interested:
continue
@@ -265,11 +306,11 @@ class ApplicationServicesHandler:
return events
async def query_user_exists(self, user_id):
async def query_user_exists(self, user_id: str) -> bool:
"""Check if any application service knows this user_id exists.
Args:
user_id(str): The user to query if they exist on any AS.
user_id: The user to query if they exist on any AS.
Returns:
True if this user exists on at least one application service.
"""
@@ -280,11 +321,13 @@ class ApplicationServicesHandler:
return True
return False
async def query_room_alias_exists(self, room_alias):
async def query_room_alias_exists(
self, room_alias: RoomAlias
) -> Optional[RoomAliasMapping]:
"""Check if an application service knows this room alias exists.
Args:
room_alias(RoomAlias): The room alias to query.
room_alias: The room alias to query.
Returns:
namedtuple: with keys "room_id" and "servers" or None if no
association can be found.
@@ -300,10 +343,13 @@ class ApplicationServicesHandler:
)
if is_known_alias:
# the alias exists now so don't query more ASes.
result = await self.store.get_association_from_room_alias(room_alias)
return result
return await self.store.get_association_from_room_alias(room_alias)
async def query_3pe(self, kind, protocol, fields):
return None
async def query_3pe(
self, kind: str, protocol: str, fields: Dict[bytes, List[bytes]]
) -> List[JsonDict]:
services = self._get_services_for_3pn(protocol)
results = await make_deferred_yieldable(
@@ -325,7 +371,9 @@ class ApplicationServicesHandler:
return ret
async def get_3pe_protocols(self, only_protocol=None):
async def get_3pe_protocols(
self, only_protocol: Optional[str] = None
) -> Dict[str, JsonDict]:
services = self.store.get_app_services()
protocols = {} # type: Dict[str, List[JsonDict]]
@@ -343,7 +391,7 @@ class ApplicationServicesHandler:
if info is not None:
protocols[p].append(info)
def _merge_instances(infos):
def _merge_instances(infos: List[JsonDict]) -> JsonDict:
if not infos:
return {}
@@ -358,19 +406,17 @@ class ApplicationServicesHandler:
return combined
for p in protocols.keys():
protocols[p] = _merge_instances(protocols[p])
return {p: _merge_instances(protocols[p]) for p in protocols.keys()}
return protocols
async def _get_services_for_event(self, event):
async def _get_services_for_event(
self, event: EventBase
) -> List[ApplicationService]:
"""Retrieve a list of application services interested in this event.
Args:
event(Event): The event to check. Can be None if alias_list is not.
event: The event to check. Can be None if alias_list is not.
Returns:
list<ApplicationService>: A list of services interested in this
event based on the service regex.
A list of services interested in this event based on the service regex.
"""
services = self.store.get_app_services()
@@ -384,17 +430,15 @@ class ApplicationServicesHandler:
return interested_list
def _get_services_for_user(self, user_id):
def _get_services_for_user(self, user_id: str) -> List[ApplicationService]:
services = self.store.get_app_services()
interested_list = [s for s in services if (s.is_interested_in_user(user_id))]
return interested_list
return [s for s in services if (s.is_interested_in_user(user_id))]
def _get_services_for_3pn(self, protocol):
def _get_services_for_3pn(self, protocol: str) -> List[ApplicationService]:
services = self.store.get_app_services()
interested_list = [s for s in services if s.is_interested_in_protocol(protocol)]
return interested_list
return [s for s in services if s.is_interested_in_protocol(protocol)]
async def _is_unknown_user(self, user_id):
async def _is_unknown_user(self, user_id: str) -> bool:
if not self.is_mine_id(user_id):
# we don't know if they are unknown or not since it isn't one of our
# users. We can't poke ASes.
@@ -409,9 +453,8 @@ class ApplicationServicesHandler:
service_list = [s for s in services if s.sender == user_id]
return len(service_list) == 0
async def _check_user_exists(self, user_id):
async def _check_user_exists(self, user_id: str) -> bool:
unknown_user = await self._is_unknown_user(user_id)
if unknown_user:
exists = await self.query_user_exists(user_id)
return exists
return await self.query_user_exists(user_id)
return True
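The guard-flag pattern used throughout this handler, where a synchronous notify method fires off at most one background task at a time, can be sketched with plain `asyncio` (illustrative stand-ins, not Synapse's `wrap_as_background_process` machinery):

```python
import asyncio

class Notifier:
    """Minimal sketch: schedule background work only when none is running."""

    def __init__(self) -> None:
        self.is_processing = False

    def notify(self) -> None:
        # Synchronous entry point: cheap no-op if work is already in flight.
        if self.is_processing:
            return
        # Fire-and-forget; must be called from within a running event loop.
        asyncio.create_task(self._process())

    async def _process(self) -> None:
        self.is_processing = True
        try:
            await asyncio.sleep(0)  # stand-in for the actual work
        finally:
            self.is_processing = False
```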

View File

@@ -18,10 +18,20 @@ import logging
import time
import unicodedata
import urllib.parse
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
Union,
)
import attr
import bcrypt # type: ignore[import]
import bcrypt
import pymacaroons
from synapse.api.constants import LoginType
@@ -49,6 +59,9 @@ from synapse.util.threepids import canonicalise_email
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -149,11 +162,7 @@ class SsoLoginExtraAttributes:
class AuthHandler(BaseHandler):
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.checkers = {} # type: Dict[str, UserInteractiveAuthChecker]
@@ -470,9 +479,7 @@ class AuthHandler(BaseHandler):
# authentication flow.
await self.store.set_ui_auth_clientdict(sid, clientdict)
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
await self.store.add_user_agent_ip_to_ui_auth_session(
session.session_id, user_agent, clientip
@@ -692,7 +699,7 @@ class AuthHandler(BaseHandler):
Creates a new access token for the user with the given user ID.
The user is assumed to have been authenticated by some other
machanism (e.g. CAS), and the user_id converted to the canonical case.
mechanism (e.g. CAS), and the user_id converted to the canonical case.
The device will be recorded in the table if it is not there already.
@@ -984,17 +991,17 @@ class AuthHandler(BaseHandler):
# This might return an awaitable, if it does block the log out
# until it completes.
result = provider.on_logged_out(
user_id=str(user_info["user"]),
device_id=user_info["device_id"],
user_id=user_info.user_id,
device_id=user_info.device_id,
access_token=access_token,
)
if inspect.isawaitable(result):
await result
# delete pushers associated with this access token
if user_info["token_id"] is not None:
if user_info.token_id is not None:
await self.hs.get_pusherpool().remove_pushers_by_access_token(
str(user_info["user"]), (user_info["token_id"],)
user_info.user_id, (user_info.token_id,)
)
async def delete_access_tokens_for_user(

View File

@@ -212,9 +212,7 @@ class CasHandler:
else:
if not registered_user_id:
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
registered_user_id = await self._registration_handler.register_user(

View File

@@ -129,6 +129,11 @@ class E2eKeysHandler:
if user_id in local_query:
results[user_id] = keys
# Get cached cross-signing keys
cross_signing_keys = await self.get_cross_signing_keys_from_cache(
device_keys_query, from_user_id
)
# Now attempt to get any remote devices from our local cache.
remote_queries_not_in_cache = {}
if remote_queries:
@@ -155,16 +160,28 @@ class E2eKeysHandler:
unsigned["device_display_name"] = device_display_name
user_devices[device_id] = result
# check for missing cross-signing keys.
for user_id in remote_queries.keys():
cached_cross_master = user_id in cross_signing_keys["master_keys"]
cached_cross_selfsigning = (
user_id in cross_signing_keys["self_signing_keys"]
)
# check if we are missing only one of cross-signing master or
# self-signing key, but the other one is cached.
# as we need both, this will issue a federation request.
# if we don't have any of the keys, either the user doesn't have
# cross-signing set up, or the cached device list
# is not (yet) updated.
if cached_cross_master ^ cached_cross_selfsigning:
user_ids_not_in_cache.add(user_id)
# add those users to the list to fetch over federation.
for user_id in user_ids_not_in_cache:
domain = get_domain_from_id(user_id)
r = remote_queries_not_in_cache.setdefault(domain, {})
r[user_id] = remote_queries[user_id]
# Get cached cross-signing keys
cross_signing_keys = await self.get_cross_signing_keys_from_cache(
device_keys_query, from_user_id
)
# Now fetch any devices that we don't have in our cache
@trace
async def do_remote_query(destination):

View File

@@ -112,7 +112,7 @@ class FederationHandler(BaseHandler):
"""Handles events that originated from federation.
Responsible for:
a) handling received Pdus before handing them on as Events to the rest
of the homeserver (including auth and state conflict resoultion)
of the homeserver (including auth and state conflict resolution)
b) converting events that were produced by local clients that may need
to be sent to remote homeservers.
c) doing the necessary dances to invite remote users and join remote
@@ -477,7 +477,7 @@ class FederationHandler(BaseHandler):
# ----
#
# Update richvdh 2018/09/18: There are a number of problems with timing this
# request out agressively on the client side:
# request out aggressively on the client side:
#
# - it plays badly with the server-side rate-limiter, which starts tarpitting you
# if you send too many requests at once, so you end up with the server carefully
@@ -495,13 +495,13 @@ class FederationHandler(BaseHandler):
# we'll end up back here for the *next* PDU in the list, which exacerbates the
# problem.
#
# - the agressive 10s timeout was introduced to deal with incoming federation
# - the aggressive 10s timeout was introduced to deal with incoming federation
# requests taking 8 hours to process. It's not entirely clear why that was going
# on; certainly there were other issues causing traffic storms which are now
# resolved, and I think in any case we may be more sensible about our locking
# now. We're *certainly* more sensible about our logging.
#
# All that said: Let's try increasing the timout to 60s and see what happens.
# All that said: Let's try increasing the timeout to 60s and see what happens.
try:
missing_events = await self.federation_client.get_missing_events(
@@ -1120,7 +1120,7 @@ class FederationHandler(BaseHandler):
logger.info(str(e))
continue
except RequestSendFailed as e:
logger.info("Falied to get backfill from %s because %s", dom, e)
logger.info("Failed to get backfill from %s because %s", dom, e)
continue
except FederationDeniedError as e:
logger.info(e)
@@ -1545,7 +1545,7 @@ class FederationHandler(BaseHandler):
#
# The reasons we have the destination server rather than the origin
# server send it are slightly mysterious: the origin server should have
# all the neccessary state once it gets the response to the send_join,
# all the necessary state once it gets the response to the send_join,
# so it could send the event itself if it wanted to. It may be that
# doing it this way reduces failure modes, or avoids certain attacks
# where a new server selectively tells a subset of the federation that
@@ -1649,7 +1649,7 @@ class FederationHandler(BaseHandler):
event.internal_metadata.outlier = True
event.internal_metadata.out_of_band_membership = True
# Try the host that we succesfully called /make_leave/ on first for
# Try the host that we successfully called /make_leave/ on first for
# the /send_leave/ request.
host_list = list(target_hosts)
try:

View File

@@ -17,7 +17,7 @@
import logging
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.types import get_domain_from_id
from synapse.types import GroupID, get_domain_from_id
logger = logging.getLogger(__name__)
@@ -28,6 +28,9 @@ def _create_rerouter(func_name):
"""
async def f(self, group_id, *args, **kwargs):
if not GroupID.is_valid(group_id):
raise SynapseError(400, "%s is not a legal group ID" % (group_id,))
if self.is_mine_id(group_id):
return await getattr(self.groups_server_handler, func_name)(
group_id, *args, **kwargs
@@ -346,7 +349,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
server_name=get_domain_from_id(group_id),
)
# TODO: Check that the group is public and we're being added publically
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = await self.store.register_user_group_membership(
@@ -391,7 +394,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
server_name=get_domain_from_id(group_id),
)
# TODO: Check that the group is public and we're being added publically
# TODO: Check that the group is public and we're being added publicly
is_publicised = content.get("publicise", False)
token = await self.store.register_user_group_membership(
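
For context, `_create_rerouter` in the first hunk of this file is a factory that stamps out proxy methods: each generated coroutine validates the group ID, then dispatches either to the local groups server or, presumably, to a transport client for the remote server that owns the group. A simplified sketch under that assumption, using the imports shown above (`GroupID`, `SynapseError`, `get_domain_from_id`); the remote branch and `transport_client` attribute are inferred, not quoted:

    def _create_rerouter(func_name):
        # Factory producing a method that routes a groups API call to the
        # local groups server or to the group's owning remote server.
        async def f(self, group_id, *args, **kwargs):
            if not GroupID.is_valid(group_id):
                raise SynapseError(400, "%s is not a legal group ID" % (group_id,))
            if self.is_mine_id(group_id):
                return await getattr(self.groups_server_handler, func_name)(
                    group_id, *args, **kwargs
                )
            # Assumed remote branch: forward to the owning server's domain.
            destination = get_domain_from_id(group_id)
            return await getattr(self.transport_client, func_name)(
                destination, group_id, *args, **kwargs
            )
        return f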

View File

@@ -656,7 +656,7 @@ class EventCreationHandler:
context: The event context.
Returns:
The previous verion of the event is returned, if it is found in the
The previous version of the event is returned, if it is found in the
event context. Otherwise, None is returned.
"""
prev_state_ids = await context.get_prev_state_ids()
@@ -1099,34 +1099,13 @@ class EventCreationHandler:
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:
def is_inviter_member_event(e):
return e.type == EventTypes.Member and e.sender == event.sender
current_state_ids = await context.get_current_state_ids()
# We know this event is not an outlier, so this must be
# non-None.
assert current_state_ids is not None
state_to_include_ids = [
e_id
for k, e_id in current_state_ids.items()
if k[0] in self.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
state_to_include = await self.store.get_events(state_to_include_ids)
event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.values()
]
event.unsigned[
"invite_room_state"
] = await self.store.get_stripped_room_state_from_event_context(
context,
self.room_invite_state_types,
membership_user_id=event.sender,
)
invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
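
The replaced block shows exactly what "stripped state" means here: each selected state event is cut down to four fields before being attached as `unsigned.invite_room_state`. The per-event transformation, pulled out of the old code for clarity:

    def strip_state_event(event) -> dict:
        # Keep only what an invited client needs to render the room (name,
        # avatar, join rules, and so on) without receiving full events.
        return {
            "type": event.type,
            "state_key": event.state_key,
            "content": event.content,
            "sender": event.sender,
        }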

View File

@@ -217,7 +217,7 @@ class OidcHandler:
This is based on the requested scopes: if the scopes include
``openid``, the provider should give use an ID token containing the
user informations. If not, we should fetch them using the
user information. If not, we should fetch them using the
``access_token`` with the ``userinfo_endpoint``.
"""
@@ -426,7 +426,7 @@ class OidcHandler:
return resp
async def _fetch_userinfo(self, token: Token) -> UserInfo:
"""Fetch user informations from the ``userinfo_endpoint``.
"""Fetch user information from the ``userinfo_endpoint``.
Args:
token: the token given by the ``token_endpoint``.
@@ -695,9 +695,7 @@ class OidcHandler:
return
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
# Call the mapper to register/login the user
@@ -756,7 +754,7 @@ class OidcHandler:
Defaults to an hour.
Returns:
A signed macaroon token with the session informations.
A signed macaroon token with the session information.
"""
macaroon = pymacaroons.Macaroon(
location=self._server_name, identifier="key", key=self._macaroon_secret_key,
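
A hedged sketch of how such a session macaroon might be minted and given a lifetime with pymacaroons; the exact caveat strings below are assumptions modelled on common Synapse usage, not necessarily the ones this handler writes:

    import time
    import pymacaroons

    def make_session_macaroon(server_name: str, secret_key: bytes,
                              nonce: str, duration_ms: int = 3600000) -> str:
        macaroon = pymacaroons.Macaroon(
            location=server_name, identifier="key", key=secret_key
        )
        # First-party caveats bind the token's type, a nonce, and an expiry
        # time; the verifier checks each caveat before trusting the token.
        macaroon.add_first_party_caveat("gen = 1")
        macaroon.add_first_party_caveat("type = session")  # assumed caveat
        macaroon.add_first_party_caveat("nonce = %s" % (nonce,))
        expiry_ms = int(time.time() * 1000) + duration_ms
        macaroon.add_first_party_caveat("time < %d" % (expiry_ms,))
        return macaroon.serialize()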

View File

@@ -48,7 +48,7 @@ from synapse.util.wheel_timer import WheelTimer
MYPY = False
if MYPY:
import synapse.server
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -101,7 +101,7 @@ assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class BasePresenceHandler(abc.ABC):
"""Parts of the PresenceHandler that are shared between workers and master"""
def __init__(self, hs: "synapse.server.HomeServer"):
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
@@ -199,7 +199,7 @@ class BasePresenceHandler(abc.ABC):
class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "synapse.server.HomeServer"):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id
@@ -802,7 +802,7 @@ class PresenceHandler(BasePresenceHandler):
between the requested tokens due to the limit.
The token returned can be used in a subsequent call to this
function to get further updatees.
function to get further updates.
The updates are a list of 2-tuples of stream ID and the row data
"""
@@ -977,7 +977,7 @@ def should_notify(old_state, new_state):
new_state.last_active_ts - old_state.last_active_ts
> LAST_ACTIVE_GRANULARITY
):
# Only notify about last active bumps if we're not currently acive
# Only notify about last active bumps if we're not currently active
if not new_state.currently_active:
notify_reason_counter.labels("last_active_change_online").inc()
return True
@@ -1011,7 +1011,7 @@ def format_user_presence_state(state, now, include_user_id=True):
class PresenceEventSource:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
# We can't call get_presence_handler here because there's a cycle:
#
# Presence -> Notifier -> PresenceEventSource -> Presence
@@ -1071,12 +1071,14 @@ class PresenceEventSource:
users_interested_in = await self._get_interested_in(user, explicit_room_id)
user_ids_changed = set()
user_ids_changed = set() # type: Collection[str]
changed = None
if from_key:
changed = stream_change_cache.get_all_entities_changed(from_key)
if changed is not None and len(changed) < 500:
assert isinstance(user_ids_changed, set)
# For small deltas, its quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.labels("stream").inc()
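
The `< 500` branch is a classic two-path cache read: when the stream-change cache can enumerate everything that changed since `from_key` and the delta is small, it is cheaper to check just those entities; otherwise fall back to a full fetch. As a generic sketch (the fetch helpers are placeholders, not Synapse functions):

    async def presence_updates_since(stream_change_cache, from_key,
                                     fetch_for_users, fetch_everything,
                                     small_delta_limit: int = 500):
        # get_all_entities_changed returns None when the cache cannot cover
        # from_key (too old), in which case only the full fetch is safe.
        changed = stream_change_cache.get_all_entities_changed(from_key)
        if changed is not None and len(changed) < small_delta_limit:
            return await fetch_for_users(set(changed))
        return await fetch_everything()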

View File

@@ -98,11 +98,18 @@ class ProfileHandler(BaseHandler):
except RequestSendFailed as e:
raise SynapseError(502, "Failed to fetch profile") from e
except HttpResponseException as e:
if e.code < 500 and e.code != 404:
# Other codes are not allowed in c2s API
logger.info(
"Server replied with wrong response: %s %s", e.code, e.msg
)
raise SynapseError(502, "Failed to fetch profile")
raise e.to_synapse_error()
async def get_profile_from_cache(self, user_id: str) -> JsonDict:
"""Get the profile information from our local cache. If the user is
ours then the profile information will always be corect. Otherwise,
ours then the profile information will always be correct. Otherwise,
it may be out of date/missing.
"""
target_user = UserID.from_string(user_id)
@@ -124,7 +131,7 @@ class ProfileHandler(BaseHandler):
profile = await self.store.get_from_remote_profile_cache(user_id)
return profile or {}
async def get_displayname(self, target_user: UserID) -> str:
async def get_displayname(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(
@@ -211,7 +218,7 @@ class ProfileHandler(BaseHandler):
await self._update_join_states(requester, target_user)
async def get_avatar_url(self, target_user: UserID) -> str:
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(

View File

@@ -115,7 +115,10 @@ class RegistrationHandler(BaseHandler):
400, "User ID already taken.", errcode=Codes.USER_IN_USE
)
user_data = await self.auth.get_user_by_access_token(guest_access_token)
if not user_data["is_guest"] or user_data["user"].localpart != localpart:
if (
not user_data.is_guest
or UserID.from_string(user_data.user_id).localpart != localpart
):
raise AuthError(
403,
"Cannot register taken user ID without valid guest "
@@ -741,7 +744,7 @@ class RegistrationHandler(BaseHandler):
# up when the access token is saved, but that's quite an
# invasive change I'd rather do separately.
user_tuple = await self.store.get_user_by_access_token(token)
token_id = user_tuple["token_id"]
token_id = user_tuple.token_id
await self.pusher_pool.add_pusher(
user_id=user_id,

View File

@@ -771,22 +771,29 @@ class RoomCreationHandler(BaseHandler):
ratelimit=False,
)
for invitee in invite_list:
# we avoid dropping the lock between invites, as otherwise joins can
# start coming in and making the createRoom slow.
#
# we also don't need to check the requester's shadow-ban here, as we
# have already done so above (and potentially emptied invite_list).
with (await self.room_member_handler.member_linearizer.queue((room_id,))):
content = {}
is_direct = config.get("is_direct", None)
if is_direct:
content["is_direct"] = is_direct
# Note that update_membership with an action of "invite" can raise a
# ShadowBanError, but this was handled above by emptying invite_list.
_, last_stream_id = await self.room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
room_id,
"invite",
ratelimit=False,
content=content,
)
for invitee in invite_list:
(
_,
last_stream_id,
) = await self.room_member_handler.update_membership_locked(
requester,
UserID.from_string(invitee),
room_id,
"invite",
ratelimit=False,
content=content,
)
for invite_3pid in invite_3pid_list:
id_server = invite_3pid["id_server"]
@@ -1268,7 +1275,7 @@ class RoomShutdownHandler:
)
# We now wait for the create room to come back in via replication so
# that we can assume that all the joins/invites have propogated before
# that we can assume that all the joins/invites have propagated before
# we try and auto join below.
await self._replication.wait_for_stream_position(
self.hs.config.worker.events_shard_config.get_instance(new_room_id),
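
The shape of the invite optimisation (see also the `update_membership_locked` rename in the next file): acquire the per-room membership linearizer once and issue every invite while holding it, rather than letting each invite re-queue behind concurrent joins. A condensed sketch built from the calls visible above:

    async def invite_all_locked(member_handler, requester, room_id,
                                invitees, content=None):
        # One lock acquisition for the whole batch: joins arriving mid-way
        # queue behind us instead of interleaving with each invite.
        with (await member_handler.member_linearizer.queue((room_id,))):
            for invitee in invitees:
                await member_handler.update_membership_locked(
                    requester, invitee, room_id, "invite",
                    ratelimit=False, content=content or {},
                )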

View File

@@ -307,7 +307,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)
with (await self.member_linearizer.queue(key)):
result = await self._update_membership(
result = await self.update_membership_locked(
requester,
target,
room_id,
@@ -322,7 +322,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
return result
async def _update_membership(
async def update_membership_locked(
self,
requester: Requester,
target: UserID,
@@ -335,6 +335,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
require_consent: bool = True,
) -> Tuple[str, int]:
"""Helper for update_membership.
Assumes that the membership linearizer is already held for the room.
"""
content_specified = bool(content)
if content is None:
content = {}

View File

@@ -216,9 +216,7 @@ class SamlHandler:
return
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
# Call the mapper to register/login the user

View File

@@ -139,7 +139,7 @@ class SearchHandler(BaseHandler):
# Filter to apply to results
filter_dict = room_cat.get("filter", {})
# What to order results by (impacts whether pagination can be doen)
# What to order results by (impacts whether pagination can be done)
order_by = room_cat.get("order_by", "rank")
# Return the current state of the rooms?

View File

@@ -32,7 +32,7 @@ class StateDeltasHandler:
Returns:
None if the field in the events either both match `public_value`
or if neither do, i.e. there has been no change.
True if it didnt match `public_value` but now does
True if it didn't match `public_value` but now does
False if it did match `public_value` but now doesn't
"""
prev_event = None
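
The docstring describes a tri-state result that is easy to get wrong. Restated as a tiny standalone function (illustrative names):

    from typing import Optional

    def key_change(prev_value, new_value, public_value) -> Optional[bool]:
        # None  -> no change in whether the value matches public_value
        # True  -> it didn't match before, but does now
        # False -> it matched before, but no longer does
        prev_matches = prev_value == public_value
        new_matches = new_value == public_value
        if prev_matches == new_matches:
            return None
        return new_matches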

View File

@@ -754,7 +754,7 @@ class SyncHandler:
"""
# TODO(mjark) Check if the state events were received by the server
# after the previous sync, since we need to include those state
# updates even if they occured logically before the previous event.
# updates even if they occurred logically before the previous event.
# TODO(mjark) Check for new redactions in the state events.
with Measure(self.clock, "compute_state_delta"):
@@ -1882,7 +1882,7 @@ class SyncHandler:
# members (as the client otherwise doesn't have enough info to form
# the name itself).
if sync_config.filter_collection.lazy_load_members() and (
# we recalulate the summary:
# we recalculate the summary:
# if there are membership changes in the timeline, or
# if membership has changed during a gappy sync, or
# if this is an initial sync.

View File

@@ -167,20 +167,25 @@ class FollowerTypingHandler:
now_typing = set(row.user_ids)
self._room_typing[row.room_id] = row.user_ids
run_as_background_process(
"_handle_change_in_typing",
self._handle_change_in_typing,
row.room_id,
prev_typing,
now_typing,
)
if self.federation:
run_as_background_process(
"_send_changes_in_typing_to_remotes",
self._send_changes_in_typing_to_remotes,
row.room_id,
prev_typing,
now_typing,
)
async def _handle_change_in_typing(
async def _send_changes_in_typing_to_remotes(
self, room_id: str, prev_typing: Set[str], now_typing: Set[str]
):
"""Process a change in typing of a room from replication, sending EDUs
for any local users.
"""
if not self.federation:
return
for user_id in now_typing - prev_typing:
if self.is_mine_id(user_id):
await self._push_remote(RoomMember(room_id, user_id), True)
@@ -371,7 +376,7 @@ class TypingWriterHandler(FollowerTypingHandler):
between the requested tokens due to the limit.
The token returned can be used in a subsequent call to this
function to get further updatees.
function to get further updates.
The updates are a list of 2-tuples of stream ID and the row data
"""

View File

@@ -31,7 +31,7 @@ class UserDirectoryHandler(StateDeltasHandler):
N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY
The user directory is filled with users who this server can see are joined to a
world_readable or publically joinable room. We keep a database table up to date
world_readable or publicly joinable room. We keep a database table up to date
by streaming changes of the current state and recalculating whether users should
be in the directory or not when necessary.
"""

Some files were not shown because too many files have changed in this diff.