
Compare commits


146 Commits

Author SHA1 Message Date
Erik Johnston
e2b2decd11 Fix 2021-04-02 11:28:51 +01:00
Erik Johnston
d088695f15 fix 2021-04-02 11:20:07 +01:00
Erik Johnston
0fddb9aacd fix 2021-04-02 11:10:25 +01:00
Erik Johnston
b05667b610 Don't limit 2021-04-02 09:55:47 +01:00
Erik Johnston
8d308066db fixup 2021-04-01 18:02:06 +01:00
Erik Johnston
2bb036eda7 fixup 2021-04-01 17:46:14 +01:00
Erik Johnston
5deb349a7f Fixup 2021-04-01 17:33:24 +01:00
Erik Johnston
2e78ef4d22 Smear 2021-04-01 17:13:55 +01:00
Erik Johnston
0f82904fdd fixup 2021-04-01 16:55:59 +01:00
Erik Johnston
f2e61d3cc1 Only evict 100 at once 2021-04-01 16:50:54 +01:00
Erik Johnston
3d6c982c41 Randomise 2021-04-01 16:50:08 +01:00
Erik Johnston
bebb7f0f60 Merge remote-tracking branch 'origin/develop' into erikj/cache_memory_usage 2021-04-01 16:33:32 +01:00
Erik Johnston
d6b8e471cf Time out caches after ten minutes 2021-04-01 16:33:23 +01:00
Dirk Klimpel
bb0fe02a52 Add order_by to list user admin API (#9691) 2021-04-01 11:28:53 +01:00
Patrick Cloke
35c5ef2d24 Add an experimental room version to support restricted join rules. (#9717)
Per MSC3083.
2021-03-31 16:39:08 -04:00
Patrick Cloke
e32294f54b Merge branch 'release-v1.31.0' into develop 2021-03-31 14:19:14 -04:00
Patrick Cloke
5fe38e07e7 Revert "Use 'dmypy run' in lint.sh instead of 'mypy' (#9701)" (#9720) 2021-03-31 14:17:52 -04:00
Denis Kasak
5ff8eb97c6 Make sample config allowed_local_3pids regex stricter. (#9719)
The regex should be terminated so that subdomain matches of another
domain are not accepted. Just ensuring that someone doesn't shoot
themselves in the foot by copying our example.

Signed-off-by: Denis Kasak <dkasak@termina.org.uk>
2021-03-31 12:27:20 +00:00
Cristina
670564446c Deprecate imp (#9718)
Fixes #9642.

Signed-off-by: Cristina Muñoz <hi@xmunoz.com>
2021-03-31 12:04:27 +01:00
Andrew Morgan
ac99774dac Rewrite complement.sh (#9685)
This PR rewrites the original complement.sh script with a number of improvements:

* We can now use a local checkout of Complement (configurable with `COMPLEMENT_DIR`), though the default behaviour still downloads the master branch.
* You can now specify a regex of test names to run, or just run all tests.
* We now use the Synapse test blacklist tag (so all tests will pass).
2021-03-31 11:58:12 +01:00
Richard van der Hoff
4dabcf026e Include m.room.create in invite_room_state for Spaces (#9710) 2021-03-30 14:03:17 +01:00
Richard van der Hoff
f02663c4dd Replace room_invite_state_types with room_prejoin_state (#9700)
`room_invite_state_types` was inconvenient as a configuration setting, because
anyone that ever set it would not receive any new types that were added to the
defaults. Here, we deprecate the old setting, and replace it with a couple of
new settings under `room_prejoin_state`.
2021-03-30 12:12:44 +01:00
Erik Johnston
963f4309fe Make RateLimiter class check for ratelimit overrides (#9711)
This should fix a class of bugs where we forget to check whether e.g. an appservice should be exempt from ratelimiting.

We also check the `ratelimit_override` table to see if the user has ratelimiting disabled. That table is really only meant to override the event sender ratelimiting, so we don't use any values from it (as they might not make sense for different rate limits), but we do infer that if ratelimiting is disabled for the user we should disable all ratelimits.

Fixes #9663
2021-03-30 12:06:09 +01:00
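A minimal, self-contained sketch of the override check described in #9711 above. The function name and the zero-rate convention are illustrative, not Synapse's actual code:

```python
from typing import Optional

def should_ratelimit(override_rate: Optional[float]) -> bool:
    """Decide whether to apply any rate limit, given the user's row (if
    present) in the ratelimit_override table. We don't reuse the row's
    values for other limits, but a rate of 0 is read as 'ratelimiting is
    disabled for this user entirely'."""
    return not (override_rate is not None and override_rate == 0)

assert should_ratelimit(None) is True   # no override row: limit as normal
assert should_ratelimit(0) is False     # ratelimiting disabled for the user
assert should_ratelimit(5.0) is True    # override present but non-zero
```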
Erik Johnston
3a446c21f8 Update changelog 2021-03-30 11:29:21 +01:00
Erik Johnston
78e48f61bf 1.31.0rc1 2021-03-30 11:19:21 +01:00
Andrew Morgan
f380bb77d1 Use 'dmypy run' in lint.sh instead of 'mypy' (#9701)
For its obvious performance benefits. `dmypy` support landed in #9692.
2021-03-30 10:30:43 +01:00
Erik Johnston
e73881a439 Add metadata type 2021-03-29 18:54:38 +01:00
Erik Johnston
51a728ec24 Fixup 2021-03-29 18:35:57 +01:00
Erik Johnston
acd2778d61 Fixup 2021-03-29 18:34:21 +01:00
Erik Johnston
ffa6e96b5f Fix 2021-03-29 18:31:49 +01:00
Erik Johnston
a1dfe34d86 Log errors 2021-03-29 18:29:25 +01:00
Erik Johnston
c232e16d23 Export jemalloc stats 2021-03-29 18:27:28 +01:00
Patrick Cloke
01dd90b0f0 Add type hints to DictionaryCache and TTLCache. (#9442) 2021-03-29 12:15:33 -04:00
blakehawkins
7dcf3fd221 Clarify that register_new_matrix_user is present also when installed via non-pip package (#9074)
Signed-off-by: blakehawkins blake.hawkins.11@gmail.com
2021-03-29 17:05:06 +01:00
Patrick Cloke
da75d2ea1f Add type hints for the federation sender. (#9681)
Includes an abstract base class which both the FederationSender
and the FederationRemoteSendQueue must implement.
2021-03-29 11:43:20 -04:00
Richard van der Hoff
4bbd535450 Update the OIDC sample config (#9695)
I've reiterated the advice about using `oidc` to migrate, since I've seen a few
people caught by this.

I've also removed a couple of the examples as they are duplicating the OIDC
documentation, and I think they might be leading people astray.
2021-03-29 15:40:11 +01:00
Andrew Morgan
5fdff97719 Fix CI by ignore type for None module import (#9709) 2021-03-29 14:42:38 +01:00
Jonathan de Jong
fc53a606e4 Fix re.Pattern mypy error on 3.6 (#9703) 2021-03-29 09:40:45 -04:00
Erik Johnston
ee36be5eef Merge remote-tracking branch 'origin/develop' into erikj/cache_memory_usage 2021-03-29 14:24:11 +01:00
Erik Johnston
b3e99c25bf Handle RulesForRoom and _JoinedHostsCache 2021-03-29 14:23:28 +01:00
Richard van der Hoff
ad8690a26c Fix the suggested pip incantation for cryptography (#9699)
If you have the wrong version of `cryptography` installed, synapse suggests:

```
To install run:
    pip install --upgrade --force 'cryptography>=3.4.7;python_version>='3.6''
```

However, the use of ' inside '...' doesn't work, so when you run this, you get
an error.
2021-03-29 11:55:33 +01:00
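The underlying problem in #9699 is shell quoting. As a hedged illustration (not the actual fix in Synapse), Python's `shlex.quote` produces a shell-safe form of such a requirement string, including the nested single quotes:

```python
import shlex

# The specifier contains ';', '>' and embedded single quotes, so it must
# be escaped as a single shell word; shlex.quote handles that safely.
requirement = "cryptography>=3.4.7;python_version>='3.6'"
print("pip install --upgrade --force " + shlex.quote(requirement))
```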
Erik Johnston
f83ad8dd2d Fixup 2021-03-29 11:12:00 +01:00
Erik Johnston
915163f72f Ignore _JoinedHostsCache as it includes DataStore 2021-03-29 11:02:10 +01:00
Eric Eastwood
0a778c135f Make pip install faster in Docker build for Complement testing (#9610)
Make pip install faster in Docker build for [Complement](https://github.com/matrix-org/complement) testing.

If files have changed in a `COPY` command, Docker will invalidate all of the layers below. So I changed the order of operations to install all dependencies before we `COPY synapse /synapse/synapse/`. This allows Docker to use our cached layer of dependencies even when we change the source of Synapse and speed up builds dramatically! `53.5s` -> `3.7s` builds 🤘

As an alternative, I did try using BuildKit caches, but this still took 30 seconds overall on that step: 15 seconds to gather the dependencies from the cache and another 15 seconds in the `Installing collected packages` step.

Fix https://github.com/matrix-org/synapse/issues/9364
2021-03-26 18:42:58 +00:00
Erik Johnston
9596641e1d Fix 2021-03-26 17:51:11 +00:00
Erik Johnston
c4d468b69f Don't ban __iter__ 2021-03-26 17:45:36 +00:00
Erik Johnston
f38350fdda Report cache memory usage 2021-03-26 17:38:50 +00:00
Richard van der Hoff
7c8402ddb8 Suppress CryptographyDeprecationWarning (#9698)
This warning is somewhat confusing to users, so let's suppress it
2021-03-26 17:33:55 +00:00
Erik Johnston
b5efcb577e Make it possible to use dmypy (#9692)
Running `dmypy run` will do a `mypy` check while spinning up a daemon
that makes rerunning `dmypy run` a lot faster.

`dmypy` doesn't support `follow_imports = silent` and has
`local_partial_types` enabled, so this PR enables those options and
fixes the issues that were newly raised. Note that `local_partial_types`
will be enabled by default in upcoming mypy releases.
2021-03-26 16:49:46 +00:00
Erik Johnston
019010964d Merge branch 'master' into develop 2021-03-26 12:26:58 +00:00
Erik Johnston
262ed05f5b Update changelog 2021-03-26 12:21:04 +00:00
Erik Johnston
548c4a6587 Update changelog 2021-03-26 12:17:37 +00:00
Erik Johnston
c6f8e8086c 1.30.1 2021-03-26 12:03:29 +00:00
Erik Johnston
12d6184713 Explicitly upgrade openssl in docker file and enforce new version of cryptography (#9697) 2021-03-26 12:00:25 +00:00
Paul Tötterman
d7d4232a2d Preserve host in example apache config (#9696)
Fixes redirect loop

Signed-off-by: Paul Tötterman <paul.totterman@iki.fi>
2021-03-26 10:38:31 +00:00
Quentin Gliech
d4c4798a25 Use interpreter from $PATH instead of absolute paths in various scripts using /usr/bin/env (#9689)
On NixOS, `bash` isn't under `/bin/bash` but rather in some directory in `$PATH`. Locally, I've been patching those scripts to make them work.

`/usr/bin/env` seems to be the only [portable way](https://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my) to use binaries from the PATH as interpreters.

Signed-off-by: Quentin Gliech <quentingliech@gmail.com>
2021-03-25 16:53:54 +00:00
Serban Constantin
e5801db830 platform specific prerequisites in source install (#9667)
Make it clearer in the source install step that the platform-specific
prerequisites must be installed first.

Signed-off-by: Serban Constantin <serban.constantin@gmail.com>
2021-03-25 15:31:26 +00:00
Andrew Morgan
fae81f2f68 Add a storage method for returning all current presence from all users (#9650)
Split off from https://github.com/matrix-org/synapse/pull/9491

Adds a storage method for getting the current presence of all local users, optionally excluding those that are offline. This will be used by the code in #9491 when a PresenceRouter module informs Synapse that a given user should have `"ALL"` user presence updates routed to them. Specifically, it is used here: b588f16e39/synapse/handlers/presence.py (L1131-L1133)

Note that there is a `get_all_presence_updates` function just above. That function is intended to walk up the table through stream IDs, and is primarily used by the presence replication stream. I could possibly make use of it in the PresenceRouter-related code, but it would be a bit of a bodge.
2021-03-25 10:34:23 +00:00
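A rough sketch of the shape of the storage method described in #9650; the types and names here are illustrative, not the actual Synapse storage API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserPresenceState:
    user_id: str
    state: str  # e.g. "online", "unavailable", "busy", "offline"

def presence_for_all_users(
    all_states: List[UserPresenceState], include_offline: bool = True
) -> List[UserPresenceState]:
    """Current presence for every local user, optionally dropping those
    that are offline."""
    if include_offline:
        return list(all_states)
    return [s for s in all_states if s.state != "offline"]
```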
Erik Johnston
c602ba8336 Fixed undefined variable error in catchup (#9664)
Broke in #9640

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-03-24 16:12:47 +00:00
Patrick Cloke
c2d4bd62a2 Fix typo in changelog. 2021-03-24 11:32:42 -04:00
Jonathan de Jong
4c3827f2c1 Enable additional flake8-bugbear linting checks. (#9659) 2021-03-24 09:34:30 -04:00
Richard van der Hoff
c73cc2c2ad Spaces summary: call out to other servers (#9653)
When we hit an unknown room in the space tree, see if there are other servers that we might be able to poll to get the data.

Fixes: #9447
2021-03-24 12:45:39 +00:00
Ben Banfield-Zanin
4655d2221e docs: fallback/web endpoint does not appear to be mounted on workers (#9679) 2021-03-24 11:43:04 +00:00
Patrick Cloke
83de0be4b0 Bump mypy-zope to 0.2.13. (#9678)
This fixes an error ("Cannot determine consistent method resolution order (MRO)")
when running mypy with a cache.
2021-03-24 07:35:43 -04:00
Patrick Cloke
af387cf52a Add type hints to misc. files. (#9676) 2021-03-24 06:49:01 -04:00
Patrick Cloke
7e8dc9934e Add a type hints for service notices to the HomeServer object. (#9675) 2021-03-24 06:48:46 -04:00
Erik Johnston
e550ab17ad Increase default join burst ratelimiting (#9674)
It's legitimate behaviour to try and join a bunch of rooms at once.
2021-03-23 14:52:20 +00:00
Jonathan de Jong
0caf2a338e Fix federation stall on concurrent access errors (#9639) 2021-03-23 13:52:30 +00:00
Richard van der Hoff
4ecba9bd5c Federation API for Space summary (#9652)
Builds on the work done in #9643 to add a federation API for space summaries.

There's a bit of refactoring of the existing client-server code first, to avoid too much duplication.
2021-03-23 11:51:12 +00:00
Patrick Cloke
b7748d3c00 Import HomeServer from the proper module. (#9665) 2021-03-23 07:12:48 -04:00
Andrew Morgan
5b268997bd Allow providing credentials to HTTPS_PROXY (#9657)
Addresses https://github.com/matrix-org/synapse-dinsic/issues/70

This PR causes `ProxyAgent` to attempt to extract credentials from an `HTTPS_PROXY` env var. If credentials are found, a `Proxy-Authorization` header ([details](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Proxy-Authorization)) is sent to the proxy server to authenticate against it. The headers are *not* passed to the remote server.

Also added some type hints.
2021-03-22 17:20:47 +00:00
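For illustration, a self-contained sketch of the credential extraction described in #9657, using only the standard library (this is not the `ProxyAgent` code itself):

```python
import base64
from typing import Optional
from urllib.parse import urlparse

def proxy_auth_from_env(https_proxy: str) -> Optional[bytes]:
    """Extract credentials from an HTTPS_PROXY value such as
    "http://user:pass@proxy.example.com:8888" and build the
    Proxy-Authorization header value, or None if no credentials."""
    url = urlparse(https_proxy)
    if url.username is None or url.password is None:
        return None
    creds = f"{url.username}:{url.password}".encode("ascii")
    # Basic auth for the proxy itself; never forwarded to the remote server.
    return b"Basic " + base64.b64encode(creds)

assert proxy_auth_from_env("http://proxy.example.com:8888") is None
assert proxy_auth_from_env(
    "http://bob:secret@proxy.example.com:8888"
) == b"Basic " + base64.b64encode(b"bob:secret")
```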
Johannes Wienke
4612302399 Include opencontainers labels in Docker image (#9612)
Cf. https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys

Signed-off-by: Johannes Wienke <languitar@semipol.de>
2021-03-22 15:31:00 +00:00
Ankit Dobhal
d66f9070cd Fixed code misc. quality issues (#9649)
- Merge 'isinstance' calls.
- Remove unnecessary dict call outside of comprehension.
- Use 'sys.exit()' calls.
2021-03-22 11:18:13 -04:00
Erik Johnston
d600d4506b Merge branch 'master' into develop 2021-03-22 13:36:36 +00:00
Brendan Abolivier
e09838c78f Merge pull request #9644 from matrix-org/babolivier/msc3026
Implement MSC3026: busy presence state
2021-03-22 14:28:19 +01:00
Erik Johnston
e2904f720d 1.30.0 2021-03-22 13:15:55 +00:00
Brendan Abolivier
b6ed4f55ac Incorporate review 2021-03-19 18:19:50 +01:00
Brendan Abolivier
592d6305fd Merge branch 'develop' into babolivier/msc3026 2021-03-19 16:12:40 +01:00
Brendan Abolivier
0b56481caa Fix lint 2021-03-19 16:11:08 +01:00
Richard van der Hoff
066068f034 fix mypy 2021-03-19 12:20:11 +00:00
Richard van der Hoff
0e35584734 federation_client: handle inline signing_keys in hs.yaml (#9647) 2021-03-18 21:12:07 +00:00
Richard van der Hoff
201178db1a federation_client: stop adding URL prefix (#9645) 2021-03-18 20:31:47 +00:00
Patrick Cloke
9b0e3009fa Fix type-hints from bad merge. 2021-03-18 14:40:56 -04:00
Richard van der Hoff
004234f03a Initial spaces summary API (#9643)
This is very bare-bones for now: federation will come soon, while pagination is descoped for now but will come later.
2021-03-18 18:24:16 +00:00
Brendan Abolivier
066c703729 Move support for MSC3026 behind an experimental flag 2021-03-18 18:37:19 +01:00
Dirk Klimpel
8dd2ea65a9 Consistently check whether a password may be set for a user. (#9636) 2021-03-18 12:54:08 -04:00
Erik Johnston
dd71eb0f8a Make federation catchup send last event from any server. (#9640)
Currently federation catchup will send the last *local* event that we
failed to send to the remote. This can cause issues for large rooms
where lots of servers have sent events while the remote server was down,
as when it comes back up again it'll be flooded with events from various
points in the DAG.

Instead, let's make it so that all the servers send the most recent
events, even if it's not theirs. The remote should deduplicate the
events, so there shouldn't be much overhead in doing this.
Alternatively, the servers could only send local events if they were
also extremities and hope that the other server will send the event
over, but that is a bit risky.
2021-03-18 15:52:26 +00:00
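A toy illustration of the change in selection logic described in #9640; the data model here is invented for the example:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:
    event_id: str
    origin: str            # server that sent the event
    stream_ordering: int   # higher = more recent

def catchup_event_old(events: List[Event], local_server: str) -> Event:
    """Old behaviour: the last *local* event we failed to send."""
    local = [e for e in events if e.origin == local_server]
    return max(local, key=lambda e: e.stream_ordering)

def catchup_event_new(events: List[Event]) -> Event:
    """New behaviour: the most recent event in the room, whoever sent it;
    the remote deduplicates anything it has already seen."""
    return max(events, key=lambda e: e.stream_ordering)
```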
Brendan Abolivier
405aeb0b2c Implement MSC3026: busy presence state 2021-03-18 16:34:47 +01:00
Andrew Morgan
7b06f85c0e Ensure we use a copy of the event content dict before modifying it in serialize_event (#9585)
This bug was discovered by DINUM. We were modifying `serialized_event["content"]`, which - if you've got `USE_FROZEN_DICTS` turned on or are [using a third party rules module](17cd48fe51/synapse/events/third_party_rules.py (L73-L76)) - will raise a 500 if you try to edit a reply to a message.

`serialized_event["content"]` could be set to the edit event's content, instead of a copy of it, which is bad as we attempt to modify it. Instead, we also end up modifying the original event's content. DINUM uses a third party rules module, which meant the event's content got frozen and thus an exception was raised.

To be clear, the problem is not that the event's content was frozen. In fact doing so helped us uncover the fact we weren't copying event content correctly.
2021-03-17 16:51:55 +00:00
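The essence of the fix in #9585 is copy-before-mutate. A minimal sketch (not Synapse's serializer):

```python
import copy

def serialize_event_content(edit_event_content: dict, replacement: dict) -> dict:
    # Buggy version: aliasing the original dict, so mutating the
    # serialized copy also mutates the event itself (and raises if the
    # dict is frozen):
    #     serialized = edit_event_content
    #
    # Fixed version: work on a copy before mutating.
    serialized = copy.deepcopy(edit_event_content)
    serialized.update(replacement)
    return serialized

original = {"body": "hello", "m.relates_to": {"rel_type": "m.replace"}}
out = serialize_event_content(original, {"body": "hello, edited"})
assert original["body"] == "hello"       # the source event is untouched
assert out["body"] == "hello, edited"
```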
Patrick Cloke
cc324d53fe Fix up types for the typing handler. (#9638)
By splitting this into two separate methods, the callers know
what methods they can expect on the handler.
2021-03-17 11:30:21 -04:00
Hubert Chathi
73dbce5523 only save remote cross-signing keys if they're different from the current ones (#9634)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2021-03-17 11:04:57 -04:00
Erik Johnston
ad721fc559 Fix bad naming of storage function (#9637)
We had two functions named `get_forward_extremities_for_room` and
`get_forward_extremeties_for_room` that took different parameters. We
rename one of them to avoid confusion.
2021-03-17 13:20:08 +00:00
Richard van der Hoff
567f88f835 Prep work for removing outlier from internal_metadata (#9411)
* Populate `internal_metadata.outlier` based on `events` table

Rather than relying on `outlier` being in the `internal_metadata` column,
populate it based on the `events.outlier` column.

* Move `outlier` out of InternalMetadata._dict

Ultimately, this will allow us to stop writing it to the database. For now, we
have to grandfather it back in so as to maintain compatibility with older
versions of Synapse.
2021-03-17 12:33:18 +00:00
Patrick Cloke
b449af0379 Add type hints to the room member handler. (#9631) 2021-03-17 07:14:39 -04:00
Jonathan de Jong
27d2820c33 Enable flake8-bugbear, but disable most checks. (#9499)
* Adds B00 to ignored checks.
* Fixes remaining issues.
2021-03-16 14:19:27 -04:00
Hubbe
dd5e5dc1d6 Add SSO attribute requirements for OIDC providers (#9609)
Allows limiting who can login using OIDC via the claims
made from the IdP.
2021-03-16 11:46:07 -04:00
Dirk Klimpel
8000cf1315 Return m.change_password.enabled=false if local database is disabled (#9588)
Instead of if the user does not have a password hash. This allows a SSO
user to add a password to their account, but only if the local password
database is configured.
2021-03-16 11:44:25 -04:00
Andrew Morgan
45ef73fd4f Fix jemalloc changelog entry wording 2021-03-16 14:46:40 +00:00
Andrew Morgan
e3bc0e6f7c Changelog typo 2021-03-16 14:33:23 +00:00
Andrew Morgan
ad5d2e7ec0 Pull up appservice login deprecation notice 2021-03-16 13:51:24 +00:00
Andrew Morgan
d315e96443 1.30.0rc1 2021-03-16 13:45:46 +00:00
Andrew Morgan
847ecdd8fa Pass SSO IdP information to spam checker's registration function (#9626)
Fixes https://github.com/matrix-org/synapse/issues/9572

When an SSO user logs in for the first time, we create a local Matrix user for them. This goes through the register_user flow, which ends up triggering the spam checker. Spam checker modules don't currently have any way to differentiate between a user trying to sign up initially, versus an SSO user (who has presumably already been approved elsewhere) trying to log in for the first time.

This PR passes `auth_provider_id` as an argument to the `check_registration_for_spam` function. This argument will contain an ID of an SSO provider (`"saml"`, `"cas"`, etc.) if one was used, else `None`.
2021-03-16 12:41:41 +00:00
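A sketch of a spam-checker module making use of the new argument from #9626; the exact signature and return type Synapse expects may differ from this:

```python
from typing import Any, Optional

class ExampleSpamChecker:
    """Illustrative spam-checker module, not a drop-in implementation."""

    async def check_registration_for_spam(
        self,
        email_threepid: Optional[dict],
        username: Optional[str],
        request_info: Any,
        auth_provider_id: Optional[str] = None,  # e.g. "saml", "cas", or None
    ) -> str:
        if auth_provider_id is not None:
            # The user already authenticated with an SSO IdP, so treat
            # them as pre-approved rather than as a fresh sign-up.
            return "allow"
        return "deny" if username == "spammer" else "allow"
```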
Mathieu Velten
ccf1dc51d7 Install jemalloc in docker image (#8553)
Co-authored-by: Will Hunt <willh@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
2021-03-16 11:32:18 +00:00
Patrick Cloke
1383508f29 Handle an empty cookie as an invalid macaroon. (#9620)
* Handle an empty cookie as an invalid macaroon.

* Newsfragment
2021-03-16 11:29:35 +00:00
Richard van der Hoff
dd69110d95 Add support for stable MSC2858 API (#9617)
The stable format uses different brand identifiers, so we need to support two
identifiers for each IdP.
2021-03-16 11:21:26 +00:00
Richard van der Hoff
5b5bc188cf Clean up config settings for stats (#9604)
... and complain if people try to turn it off.
2021-03-16 10:57:54 +00:00
Andrew Morgan
1b0eaed21f Prevent bundling aggregations for state events (#9619)
There's no need to do aggregation bundling for state events. Doing so can cause performance issues.
2021-03-16 10:27:51 +00:00
Richard van der Hoff
1c8a2541da Fix Internal Server Error on GET /saml2/authn_response (#9623)
* Fix Internal Server Error on `GET /saml2/authn_response`

Seems to have been introduced in #8765 (Synapse 1.24.0)

* Fix newsfile
2021-03-16 10:20:20 +00:00
Patrick Cloke
f87dfb9403 Revert requiring a specific version of Twisted for mypy checks. (#9618) 2021-03-15 12:18:35 -04:00
Patrick Cloke
d29b71aa50 Fix remaining mypy issues due to Twisted upgrade. (#9608) 2021-03-15 11:14:39 -04:00
Erik Johnston
026503fa3b Don't go into federation catch up mode so easily (#9561)
Federation catch up mode is very inefficient if the number of events
that the remote server has missed is small, since handling gaps can be
very expensive, c.f. #9492.

Instead of going into catch up mode whenever we see an error, we instead
do so only if we've backed off from trying the remote for more than an
hour (the assumption being that in such a case it is more than a
transient failure).
2021-03-15 14:42:40 +00:00
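The new condition from #9561 boils down to a threshold on the backoff interval. A sketch, with the threshold and names illustrative:

```python
BACKOFF_THRESHOLD_MS = 60 * 60 * 1000  # one hour

def should_enter_catchup(retry_interval_ms: int) -> bool:
    """Only fall back to (expensive) catch-up mode if we've been backing
    off from the remote for over an hour, i.e. the failure does not look
    transient."""
    return retry_interval_ms > BACKOFF_THRESHOLD_MS

assert not should_enter_catchup(30_000)          # brief blip: keep streaming
assert should_enter_catchup(2 * 60 * 60 * 1000)  # long outage: catch up
```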
Richard van der Hoff
af2248f8bf Optimise missing prev_event handling (#9601)
Background: When we receive incoming federation traffic, and notice that we are missing prev_events from 
the incoming traffic, first we do a `/get_missing_events` request, and then if we still have missing prev_events,
we set up new backwards-extremities. To do that, we need to make a `/state_ids` request to ask the remote
server for the state at those prev_events, and then we may need to then ask the remote server for any events
in that state which we don't already have, as well as the auth events for those missing state events, so that we
can auth them.

This PR attempts to optimise the processing of that state request. The `state_ids` API returns a list of the state
events, as well as a list of all the auth events for *all* of those state events. The optimisation comes from the
observation that we are currently loading all of those auth events into memory at the start of the operation, but
we almost certainly aren't going to need *all* of the auth events. Rather, we can check that we have them, and
leave the actual load into memory for later. (Ideally the federation API would tell us which auth events we're
actually going to need, but it doesn't.)

The effect of this is to reduce the number of events that I need to load for an event in Matrix HQ from about
60000 to about 22000, which means it can stay in my in-memory cache, whereas previously the sheer number
of events meant that all 60K events had to be loaded from db for each request, due to the amount of cache
churn. (NB I've already tripled the size of the cache from its default of 10K).

Unfortunately I've ended up basically C&Ping `_get_state_for_room` and `_get_events_from_store_or_dest` into
a new method, because `_get_state_for_room` is also called during backfill, which expects the auth events to be
returned, so the same tricks don't work. That said, I don't really know why that codepath is completely different
(ultimately we're doing the same thing in setting up a new backwards extremity) so I've left a TODO suggesting
that we clean it up.
2021-03-15 13:51:02 +00:00
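The core optimisation in #9601 is to check for the presence of auth events without loading them. A simplified, self-contained sketch of that idea:

```python
from typing import Iterable, List, Set

def auth_events_to_fetch(
    auth_event_ids: Iterable[str], seen_event_ids: Set[str]
) -> List[str]:
    """Instead of loading every auth event returned by /state_ids into
    memory up front, just check which ones we already have and defer the
    actual load; only the genuinely missing ones need fetching now."""
    return [eid for eid in auth_event_ids if eid not in seen_event_ids]

missing = auth_events_to_fetch(["$a", "$b", "$c"], {"$a", "$c"})
assert missing == ["$b"]
```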
Patrick Cloke
55da8df078 Fix additional type hints from Twisted 21.2.0. (#9591) 2021-03-12 11:37:57 -05:00
Richard van der Hoff
1e67bff833 Reject concurrent transactions (#9597)
If more transactions arrive from an origin while we're still processing the
first one, reject them.

Hopefully a quick fix to https://github.com/matrix-org/synapse/issues/9489
2021-03-12 15:14:55 +00:00
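A sketch of per-origin concurrency rejection as described in #9597; Synapse's real implementation differs, this just shows the shape of the idea:

```python
from typing import Set

class TransactionGate:
    """While a transaction from an origin is in flight, further ones from
    the same origin are refused instead of being processed concurrently."""

    def __init__(self) -> None:
        self._active: Set[str] = set()

    def try_begin(self, origin: str) -> bool:
        if origin in self._active:
            return False  # caller responds with an error; nothing is queued
        self._active.add(origin)
        return True

    def end(self, origin: str) -> None:
        self._active.discard(origin)

gate = TransactionGate()
assert gate.try_begin("remote.example.org")
assert not gate.try_begin("remote.example.org")  # concurrent: rejected
gate.end("remote.example.org")
assert gate.try_begin("remote.example.org")
```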
Richard van der Hoff
2b328d7e02 Improve logging when processing incoming transactions (#9596)
Put the room id in the logcontext, to make it easier to understand what's going on.
2021-03-12 15:08:03 +00:00
Richard van der Hoff
464e5da7b2 Add logging for redis connection setup (#9590) 2021-03-11 18:35:09 +00:00
Patrick Cloke
e55bd0e110 Add tests for blacklisting reactor/agent. (#9563) 2021-03-11 09:15:22 -05:00
Dirk Klimpel
70d1b6abff Re-Activating account when local passwords are disabled (#9587)
Fixes: #8393
2021-03-11 13:52:32 +00:00
Richard van der Hoff
a7a3790066 Convert Requester to attrs (#9586)
... because namedtuples suck

Fix up a couple of other annotations to keep mypy happy.
2021-03-10 18:15:56 +00:00
Richard van der Hoff
1107214a1d Fix the auth provider on the logins metric (#9573)
We either need to pass the auth provider over the replication api, or make sure
we report the auth provider on the worker that received the request. I've gone
with the latter.
2021-03-10 18:15:03 +00:00
Jason Robinson
17cd48fe51 Fix spam checker modules documentation example (#9580)
Mention that parse_config must exist and note the
check_media_file_for_spam method.
2021-03-10 10:42:51 -05:00
Patrick Cloke
2a99cc6524 Use the chain cover index in get_auth_chain_ids. (#9576)
This uses a simplified version of get_chain_cover_difference to calculate
auth chain of events.
2021-03-10 09:57:59 -05:00
Patrick Cloke
918f6ed827 Fix a bug in the background task for purging chain cover. (#9583) 2021-03-10 08:55:52 -05:00
Patrick Cloke
67b979bfa1 Do not ignore the unpaddedbase64 module when type checking. (#9568) 2021-03-09 14:41:02 -05:00
Patrick Cloke
dc51d8ffaf Add a background task to purge unused chain IDs. (#9542)
This is a companion change to apply the fix in #9498 /
922788c604 to previously
purged rooms.
2021-03-09 11:22:25 -05:00
Andrew Morgan
e9df3f496b Link to the List user's media admin API from media Admin API docs (#9571)
Earlier [I was convinced](https://github.com/matrix-org/synapse/issues/9565) that we didn't have an Admin API for listing media uploaded by a user. Foolishly I was looking under the Media Admin API documentation, instead of the User Admin API documentation.

I thought it'd be helpful to link to the latter so others don't hit the same dead end :)
2021-03-09 15:15:52 +00:00
Richard van der Hoff
eaada74075 JWT OIDC secrets for Sign in with Apple (#9549)
Apple had to be special. They want a client secret which is generated from an EC key.

Fixes #9220. Also fixes #9212 while I'm here.
2021-03-09 15:03:37 +00:00
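For illustration, generating such a client secret with PyJWT; the claim names follow Apple's published requirements, but this is not Synapse's implementation:

```python
import time
import jwt  # PyJWT, with the `cryptography` extra for EC keys

def apple_client_secret(
    team_id: str, client_id: str, key_id: str, ec_private_key_pem: str
) -> str:
    """Build a short-lived 'Sign in with Apple' client secret: a JWT
    signed with the provider's EC (ES256) key."""
    now = int(time.time())
    payload = {
        "iss": team_id,
        "iat": now,
        "exp": now + 3600,
        "aud": "https://appleid.apple.com",
        "sub": client_id,
    }
    return jwt.encode(
        payload, ec_private_key_pem, algorithm="ES256", headers={"kid": key_id}
    )
```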
Erik Johnston
9cd18cc588 Retry 5xx errors in federation client (#9567)
Fixes #8915
2021-03-09 13:15:12 +00:00
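The retry behaviour from #9567, sketched as a plain synchronous loop with jittered backoff (parameters are illustrative, and Synapse's client is asynchronous):

```python
import random
import time

def request_with_retries(send, max_attempts: int = 3) -> int:
    """Retry a federation request on 5xx responses; `send` is any callable
    returning an HTTP status code."""
    for attempt in range(max_attempts):
        status = send()
        if status < 500:
            return status  # success or a client error: don't retry
        if attempt + 1 < max_attempts:
            time.sleep((2 ** attempt) + random.random())
    return status
```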
Patrick Cloke
7fdc6cefb3 Fix additional type hints. (#9543)
Type hint fixes due to Twisted 21.2.0 adding type hints.
2021-03-09 07:41:32 -05:00
Patrick Cloke
075c16b410 Handle image transparency better when thumbnailing. (#9473)
Properly uses RGBA mode for 1- and 8-bit images with transparency
(instead of RGB mode).
2021-03-09 07:37:09 -05:00
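A sketch of the mode selection described in #9473, using Pillow's real API (the helper itself is invented for the example):

```python
from PIL import Image

def thumbnail_mode(image: Image.Image) -> str:
    """Pick an output mode for thumbnailing: 1-bit and 8-bit images that
    carry transparency info must go to RGBA, otherwise alpha is lost."""
    if image.mode in ("1", "L", "P") and "transparency" in image.info:
        return "RGBA"
    if image.mode == "RGBA":
        return "RGBA"
    return "RGB"

# e.g. for a paletted PNG with a transparent colour:
# img = Image.open("sticker.png")
# thumb = img.convert(thumbnail_mode(img)).resize((64, 64))
```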
Patrick Cloke
3ce650057d Add a list of hashes to ignore during git blame. (#9560)
The hashes are from commits due to auto-formatting, e.g. running black.

git can be configured to use this automatically by running the following:

    git config blame.ignoreRevsFile .git-blame-ignore-revs
2021-03-09 07:34:55 -05:00
Erik Johnston
576c91c7c1 Fixup sample config
After 0764d0c6e5
2021-03-09 11:40:45 +00:00
Andrew Morgan
22db45bd4d Prevent the config-lint script erroring out on any sample_config changes (#9562)
I noticed that I'd occasionally have `scripts-dev/lint.sh` fail when messing about with config options in my PR. The script calls `scripts-dev/config-lint.sh`, which attempts some validation on the sample config.

 It does this by using `sed` to edit the sample_config, and then seeing if the file changed using `git diff`.

The problem is: if you changed the sample_config as part of your commit, this script will error regardless.

This PR attempts to change the check so that existing, unstaged changes to the sample_config will not cause the script to report an invalid file.
2021-03-09 11:11:42 +00:00
Jonathan de Jong
9898470e7d Add logging to ObservableDeferred callbacks (#9523) 2021-03-09 11:09:31 +00:00
Matthew Hodgson
0764d0c6e5 quick config comment tweak to clarify allow_profile_lookup_over_federation 2021-03-08 21:52:04 +00:00
Jonathan de Jong
d6196efafc Add ResponseCache tests. (#9458) 2021-03-08 14:00:07 -05:00
Will Hunt
b2c4d3d721 Warn that /register will soon require a type when called with an access token (#9559)
This notice is giving a heads up to the planned spec compliance fix https://github.com/matrix-org/synapse/pull/9548.
2021-03-08 16:35:04 +00:00
Dirk Klimpel
7076eee4b9 Add type hints to purge room and server notice admin API. (#9520) 2021-03-08 10:34:38 -05:00
Patrick Cloke
cb7fc7523e Add a basic test for purging rooms. (#9541)
Unfortunately this doesn't test re-joining the room since
that requires having another homeserver to query over
federation, which isn't easily doable in unit tests.
2021-03-08 09:21:36 -05:00
Erik Johnston
b988b07bb0 Merge branch 'master' into develop 2021-03-08 14:06:35 +00:00
Patrick Cloke
58114f8a17 Create a SynapseReactor type which incorporates the necessary reactor interfaces. (#9528)
This helps fix some type hints when running with Twisted 21.2.0.
2021-03-08 08:25:43 -05:00
Leo Bärring
0fc4eb103a Update reverse proxy to add OpenBSD relayd example configuration. (#9508)

Signed-off-by: Leo Bärring <leo.barring@protonmail.com>
2021-03-06 11:49:19 +00:00
Ben Banfield-Zanin
e5da770cce Add additional SAML2 upgrade notes (#9550) 2021-03-05 12:07:50 +00:00
Richard van der Hoff
8a4b3738f3 Replace last_*_pdu_age metrics with timestamps (#9540)
Following the advice at
https://prometheus.io/docs/practices/instrumentation/#timestamps-not-time-since,
it's preferable to export unix timestamps, not ages.

There doesn't seem to be any particular naming convention for timestamp
metrics.
2021-03-04 16:40:18 +00:00
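As a hedged sketch of the timestamp-not-age pattern from #9540, using `prometheus_client` (the metric name and label are illustrative, not necessarily what Synapse exports):

```python
import time
from prometheus_client import Gauge

# Export a unix timestamp rather than an age: the value never goes stale
# between scrapes, and the age is recoverable in PromQL as time() - metric.
LAST_PDU_TIME = Gauge(
    "example_federation_last_sent_pdu_time",
    "Unix time of the last PDU sent to a remote server",
    ["server_name"],
)

def record_pdu_sent(server_name: str) -> None:
    LAST_PDU_TIME.labels(server_name=server_name).set(time.time())
```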
Richard van der Hoff
df425c2c63 Prometheus metrics for logins and registrations (#9511)
Add prom metrics for number of users successfully registering and logging in, by SSO provider.
2021-03-04 16:39:27 +00:00
Richard van der Hoff
7eb6e39a8f Record the SSO Auth Provider in the login token (#9510)
This great big stack of commits is a whole load of hoop-jumping to make it easier to store additional values in login tokens, and then to actually store the SSO Identity Provider in the login token. (Making use of that data will follow in a subsequent PR.)
2021-03-04 14:44:22 +00:00
263 changed files with 6455 additions and 1958 deletions


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# this script is run by buildkite in a plain `xenial` container; it installs the
# minimal requirements for tox and hands over to the py35-old tox environment.


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
#
# Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
# with additional dependencies needed for the test (such as coverage or the PostgreSQL

.git-blame-ignore-revs Normal file

@@ -0,0 +1,8 @@
# Black reformatting (#5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3
# Target Python 3.5 with black (#8664).
aff1eb7c671b0a3813407321d2702ec46c71fa56
# Update black to 20.8b1 (#9381).
0a00b7ff14890987f09112a2ae696c61001e6cf1


@@ -1,3 +1,185 @@
Synapse 1.31.0rc1 (2021-03-30)
==============================
**Note:** As announced in v1.25.0, and in line with the deprecation policy for platform dependencies, this is the last release to support Python 3.5 and PostgreSQL 9.5. Future versions of Synapse will require Python 3.6+ and PostgreSQL 9.6+.
This is also the last release that the Synapse team will be publishing packages for Debian Stretch and Ubuntu Xenial.
Features
--------
- Add support to OpenID Connect login for requiring attributes on the `userinfo` response. Contributed by Hubbe King. ([\#9609](https://github.com/matrix-org/synapse/issues/9609))
- Add initial experimental support for a "space summary" API. ([\#9643](https://github.com/matrix-org/synapse/issues/9643), [\#9652](https://github.com/matrix-org/synapse/issues/9652), [\#9653](https://github.com/matrix-org/synapse/issues/9653))
- Add support for the busy presence state as described in [MSC3026](https://github.com/matrix-org/matrix-doc/pull/3026). ([\#9644](https://github.com/matrix-org/synapse/issues/9644))
- Add support for credentials for proxy authentication in the `HTTPS_PROXY` environment variable. ([\#9657](https://github.com/matrix-org/synapse/issues/9657))
Bugfixes
--------
- Fix a longstanding bug that could cause issues when editing a reply to a message. ([\#9585](https://github.com/matrix-org/synapse/issues/9585))
- Fix the `/capabilities` endpoint to return `m.change_password` as disabled if the local password database is not used for authentication. Contributed by @dklimpel. ([\#9588](https://github.com/matrix-org/synapse/issues/9588))
- Check if local passwords are enabled before setting them for the user. ([\#9636](https://github.com/matrix-org/synapse/issues/9636))
- Fix a bug where federation sending can stall due to `concurrent access` database exceptions when it falls behind. ([\#9639](https://github.com/matrix-org/synapse/issues/9639))
- Fix a bug introduced in Synapse 1.30.1 which meant the suggested `pip` incantation to install an updated `cryptography` was incorrect. ([\#9699](https://github.com/matrix-org/synapse/issues/9699))
Updates to the Docker image
---------------------------
- Speed up Docker builds and make it nicer to test against Complement while developing (install all dependencies before copying the project). ([\#9610](https://github.com/matrix-org/synapse/issues/9610))
- Include [opencontainers labels](https://github.com/opencontainers/image-spec/blob/master/annotations.md#pre-defined-annotation-keys) in the Docker image. ([\#9612](https://github.com/matrix-org/synapse/issues/9612))
Improved Documentation
----------------------
- Clarify that `register_new_matrix_user` is present also when installed via non-pip package. ([\#9074](https://github.com/matrix-org/synapse/issues/9074))
- Update source install documentation to mention platform prerequisites before the source install steps. ([\#9667](https://github.com/matrix-org/synapse/issues/9667))
- Improve worker documentation for fallback/web auth endpoints. ([\#9679](https://github.com/matrix-org/synapse/issues/9679))
- Update the sample configuration for OIDC authentication. ([\#9695](https://github.com/matrix-org/synapse/issues/9695))
Internal Changes
----------------
- Preparatory steps for removing redundant `outlier` data from `event_json.internal_metadata` column. ([\#9411](https://github.com/matrix-org/synapse/issues/9411))
- Add type hints to the caching module. ([\#9442](https://github.com/matrix-org/synapse/issues/9442))
- Introduce flake8-bugbear to the test suite and fix some of its lint violations. ([\#9499](https://github.com/matrix-org/synapse/issues/9499), [\#9659](https://github.com/matrix-org/synapse/issues/9659))
- Add additional type hints to the Homeserver object. ([\#9631](https://github.com/matrix-org/synapse/issues/9631), [\#9638](https://github.com/matrix-org/synapse/issues/9638), [\#9675](https://github.com/matrix-org/synapse/issues/9675), [\#9681](https://github.com/matrix-org/synapse/issues/9681))
- Only save remote cross-signing and device keys if they're different from the current ones. ([\#9634](https://github.com/matrix-org/synapse/issues/9634))
- Rename storage function to fix spelling and not conflict with another function's name. ([\#9637](https://github.com/matrix-org/synapse/issues/9637))
- Improve performance of federation catch up by sending the latest events in the room to the remote, rather than just the last event sent by the local server. ([\#9640](https://github.com/matrix-org/synapse/issues/9640), [\#9664](https://github.com/matrix-org/synapse/issues/9664))
- In the `federation_client` commandline client, stop automatically adding the URL prefix, so that servlets on other prefixes can be tested. ([\#9645](https://github.com/matrix-org/synapse/issues/9645))
- In the `federation_client` commandline client, handle inline `signing_key`s in `homeserver.yaml`. ([\#9647](https://github.com/matrix-org/synapse/issues/9647))
- Fixed some antipattern issues to improve code quality. ([\#9649](https://github.com/matrix-org/synapse/issues/9649))
- Add a storage method for pulling all current user presence state from the database. ([\#9650](https://github.com/matrix-org/synapse/issues/9650))
- Import `HomeServer` from the proper module. ([\#9665](https://github.com/matrix-org/synapse/issues/9665))
- Increase default join ratelimiting burst rate. ([\#9674](https://github.com/matrix-org/synapse/issues/9674))
- Add type hints to third party event rules and visibility modules. ([\#9676](https://github.com/matrix-org/synapse/issues/9676))
- Bump mypy-zope to 0.2.13 to fix "Cannot determine consistent method resolution order (MRO)" errors when running mypy a second time. ([\#9678](https://github.com/matrix-org/synapse/issues/9678))
- Use interpreter from `$PATH` via `/usr/bin/env` instead of absolute paths in various scripts. ([\#9689](https://github.com/matrix-org/synapse/issues/9689))
- Make it possible to use `dmypy`. ([\#9692](https://github.com/matrix-org/synapse/issues/9692))
- Suppress "CryptographyDeprecationWarning: int_from_bytes is deprecated". ([\#9698](https://github.com/matrix-org/synapse/issues/9698))
- Use `dmypy run` in lint script for improved performance in type-checking while developing. ([\#9701](https://github.com/matrix-org/synapse/issues/9701))
- Fix undetected mypy error when using Python 3.6. ([\#9703](https://github.com/matrix-org/synapse/issues/9703))
- Fix type-checking CI on develop. ([\#9709](https://github.com/matrix-org/synapse/issues/9709))
Synapse 1.30.1 (2021-03-26)
===========================
This release is identical to Synapse 1.30.0, with the exception of explicitly
setting a minimum version of Python's Cryptography library to ensure that users
of Synapse are protected from the recent [OpenSSL security advisories](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000198.html),
especially CVE-2021-3449.
Note that Cryptography defaults to bundling its own statically linked copy of
OpenSSL, which means that you may not be protected by your operating system's
security updates.
It's also worth noting that Cryptography no longer supports Python 3.5, so
admins deploying to older environments may not be protected against this or
future vulnerabilities. Synapse will be dropping support for Python 3.5 at the
end of March.
Updates to the Docker image
---------------------------
- Ensure that the docker container has up to date versions of openssl. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
Internal Changes
----------------
- Enforce that `cryptography` dependency is up to date to ensure it has the most recent openssl patches. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
Synapse 1.30.0 (2021-03-22)
===========================
Note that this release deprecates the ability for appservices to
call `POST /_matrix/client/r0/register` without the body parameter `type`. Appservice
developers should use a `type` value of `m.login.application_service` as
per [the spec](https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions).
In future releases, calling this endpoint with an access token - but without a `m.login.application_service`
type - will fail.
No significant changes.
Synapse 1.30.0rc1 (2021-03-16)
==============================
Features
--------
- Add prometheus metrics for number of users successfully registering and logging in. ([\#9510](https://github.com/matrix-org/synapse/issues/9510), [\#9511](https://github.com/matrix-org/synapse/issues/9511), [\#9573](https://github.com/matrix-org/synapse/issues/9573))
- Add `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time` prometheus metrics, which monitor federation delays by reporting the timestamps of messages sent and received to a set of remote servers. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Add support for generating JSON Web Tokens dynamically for use as OIDC client secrets. ([\#9549](https://github.com/matrix-org/synapse/issues/9549))
- Optimise handling of incomplete room history for incoming federation. ([\#9601](https://github.com/matrix-org/synapse/issues/9601))
- Finalise support for allowing clients to pick an SSO Identity Provider ([MSC2858](https://github.com/matrix-org/matrix-doc/pull/2858)). ([\#9617](https://github.com/matrix-org/synapse/issues/9617))
- Tell spam checker modules about the SSO IdP a user registered through if one was used. ([\#9626](https://github.com/matrix-org/synapse/issues/9626))
Bugfixes
--------
- Fix long-standing bug when generating thumbnails for some images with transparency: `TypeError: cannot unpack non-iterable int object`. ([\#9473](https://github.com/matrix-org/synapse/issues/9473))
- Purge chain cover indexes for events that were purged prior to Synapse v1.29.0. ([\#9542](https://github.com/matrix-org/synapse/issues/9542), [\#9583](https://github.com/matrix-org/synapse/issues/9583))
- Fix bug where federation requests were not correctly retried on 5xx responses. ([\#9567](https://github.com/matrix-org/synapse/issues/9567))
- Fix re-activating an account via the admin API when local passwords are disabled. ([\#9587](https://github.com/matrix-org/synapse/issues/9587))
- Fix a bug introduced in Synapse 1.20 which caused incoming federation transactions to stack up, causing slow recovery from outages. ([\#9597](https://github.com/matrix-org/synapse/issues/9597))
- Fix a bug introduced in v1.28.0 where the OpenID Connect callback endpoint could error with a `MacaroonInitException`. ([\#9620](https://github.com/matrix-org/synapse/issues/9620))
- Fix Internal Server Error on `GET /_synapse/client/saml2/authn_response` request. ([\#9623](https://github.com/matrix-org/synapse/issues/9623))
Updates to the Docker image
---------------------------
- Make use of an improved malloc implementation (`jemalloc`) in the docker image. ([\#8553](https://github.com/matrix-org/synapse/issues/8553))
Improved Documentation
----------------------
- Add relayd entry to reverse proxy example configurations. ([\#9508](https://github.com/matrix-org/synapse/issues/9508))
- Improve the SAML2 upgrade notes for 1.27.0. ([\#9550](https://github.com/matrix-org/synapse/issues/9550))
- Link to the "List user's media" admin API from the media admin API docs. ([\#9571](https://github.com/matrix-org/synapse/issues/9571))
- Clarify the spam checker modules documentation example to mention that `parse_config` is a required method. ([\#9580](https://github.com/matrix-org/synapse/issues/9580))
- Clarify the sample configuration for `stats` settings. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
Deprecations and Removals
-------------------------
- The `synapse_federation_last_sent_pdu_age` and `synapse_federation_last_received_pdu_age` prometheus metrics have been removed. They are replaced by `synapse_federation_last_sent_pdu_time` and `synapse_federation_last_received_pdu_time`. ([\#9540](https://github.com/matrix-org/synapse/issues/9540))
- Registering an Application Service user without using the `m.login.application_service` login type will be unsupported in an upcoming Synapse release. ([\#9559](https://github.com/matrix-org/synapse/issues/9559))
Internal Changes
----------------
- Add tests to ResponseCache. ([\#9458](https://github.com/matrix-org/synapse/issues/9458))
- Add type hints to purge room and server notice admin API. ([\#9520](https://github.com/matrix-org/synapse/issues/9520))
- Add extra logging to ObservableDeferred when callbacks throw exceptions. ([\#9523](https://github.com/matrix-org/synapse/issues/9523))
- Fix incorrect type hints. ([\#9528](https://github.com/matrix-org/synapse/issues/9528), [\#9543](https://github.com/matrix-org/synapse/issues/9543), [\#9591](https://github.com/matrix-org/synapse/issues/9591), [\#9608](https://github.com/matrix-org/synapse/issues/9608), [\#9618](https://github.com/matrix-org/synapse/issues/9618))
- Add an additional test for purging a room. ([\#9541](https://github.com/matrix-org/synapse/issues/9541))
- Add a `.git-blame-ignore-revs` file with the hashes of auto-formatting. ([\#9560](https://github.com/matrix-org/synapse/issues/9560))
- Increase the threshold before which outbound federation to a server goes into "catch up" mode, which is expensive for the remote server to handle. ([\#9561](https://github.com/matrix-org/synapse/issues/9561))
- Fix spurious errors reported by the `config-lint.sh` script. ([\#9562](https://github.com/matrix-org/synapse/issues/9562))
- Fix type hints and tests for BlacklistingAgentWrapper and BlacklistingReactorWrapper. ([\#9563](https://github.com/matrix-org/synapse/issues/9563))
- Do not have mypy ignore type hints from unpaddedbase64. ([\#9568](https://github.com/matrix-org/synapse/issues/9568))
- Improve efficiency of calculating the auth chain in large rooms. ([\#9576](https://github.com/matrix-org/synapse/issues/9576))
- Convert `synapse.types.Requester` to an `attrs` class. ([\#9586](https://github.com/matrix-org/synapse/issues/9586))
- Add logging for redis connection setup. ([\#9590](https://github.com/matrix-org/synapse/issues/9590))
- Improve logging when processing incoming transactions. ([\#9596](https://github.com/matrix-org/synapse/issues/9596))
- Remove unused `stats.retention` setting, and emit a warning if stats are disabled. ([\#9604](https://github.com/matrix-org/synapse/issues/9604))
- Prevent attempting to bundle aggregations for state events in /context APIs. ([\#9619](https://github.com/matrix-org/synapse/issues/9619))
Synapse 1.29.0 (2021-03-08)
===========================


@@ -6,7 +6,7 @@ There are 3 steps to follow under **Installation Instructions**.
- [Choosing your server name](#choosing-your-server-name)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
-- [Platform-Specific Instructions](#platform-specific-instructions)
+- [Platform-specific prerequisites](#platform-specific-prerequisites)
- [Debian/Ubuntu/Raspbian](#debianubunturaspbian)
- [ArchLinux](#archlinux)
- [CentOS/Fedora](#centosfedora)
@@ -60,17 +60,14 @@ that your email address is probably `user@example.com` rather than
(Prebuilt packages are available for some platforms - see [Prebuilt packages](#prebuilt-packages).)
When installing from source please make sure that the [Platform-specific prerequisites](#platform-specific-prerequisites) are already installed.
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5.2 or later, up to Python 3.9.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions. See [Platform-Specific
Instructions](#platform-specific-instructions) for information on installing
these on various platforms.
To install the Synapse homeserver run:
@@ -128,7 +125,11 @@ source env/bin/activate
synctl start
```
-#### Platform-Specific Instructions
+#### Platform-specific prerequisites
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions.
##### Debian/Ubuntu/Raspbian
@@ -526,14 +527,24 @@ email will be disabled.
The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
Alternatively you can do so from the command line if you have installed via pip.
Alternatively, you can do so from the command line. This can be done as follows:
This can be done as follows:
1. If synapse was installed via pip, activate the virtualenv as follows (if Synapse was
installed via a prebuilt package, `register_new_matrix_user` should already be
on the search path):
```sh
cd ~/synapse
source env/bin/activate
synctl start # if not already running
```
2. Run the following command:
```sh
register_new_matrix_user -c homeserver.yaml http://localhost:8008
```
```sh
$ source ~/synapse/env/bin/activate
$ synctl start # if not already running
$ register_new_matrix_user -c homeserver.yaml http://localhost:8008
This will prompt you to add details for the new user, and will then connect to
the running Synapse to create the new user. For example:
```
New user localpart: erikj
Password:
Confirm password:


@@ -20,9 +20,10 @@ recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
include tests/http/ca.crt
include tests/http/ca.key
include tests/http/server.key
recursive-include tests *.pem
recursive-include tests *.p8
recursive-include tests *.crt
recursive-include tests *.key
recursive-include synapse/res *
recursive-include synapse/static *.css


@@ -183,8 +183,9 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.


@@ -98,9 +98,12 @@ will log a warning on each received request.
To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
indicate the protocol used by the client. See the `reverse proxy documentation
<docs/reverse_proxy.md>`_, where the example configurations have been updated to
show how to set this header.
indicate the protocol used by the client.
Synapse also requires the `Host` header to be preserved.
See the `reverse proxy documentation <docs/reverse_proxy.md>`_, where the
example configurations have been updated to show how to set these headers.
(Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
sets `X-Forwarded-Proto` by default.)
@@ -124,6 +127,13 @@ This version changes the URI used for callbacks from OAuth2 and SAML2 identity p
need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
"ACS location" (also known as "allowed callback URLs") at the identity provider.
The "Issuer" in the "AuthnRequest" to the SAML2 identity provider is also updated to
``[synapse public baseurl]/_synapse/client/saml2/metadata.xml``. If your SAML2 identity
provider uses this property to validate or otherwise identify Synapse, its configuration
will need to be updated to use the new URL. Alternatively you could create a new, separate
"EntityDescriptor" in your SAML2 identity provider with the new URLs and leave the URLs in
the existing "EntityDescriptor" as they were.
Changes to HTML templates
-------------------------

changelog.d/9685.misc Normal file

@@ -0,0 +1 @@
Update `scripts-dev/complement.sh` to use a local checkout of Complement, allow running a subset of tests and have it use Synapse's Complement test blacklist.

changelog.d/9691.feature Normal file

@@ -0,0 +1 @@
Add `order_by` to the admin API `GET /_synapse/admin/v2/users`. Contributed by @dklimpel.

changelog.d/9700.feature Normal file

@@ -0,0 +1 @@
Replace the `room_invite_state_types` configuration setting with `room_prejoin_state`.

changelog.d/9710.feature Normal file

@@ -0,0 +1 @@
Experimental Spaces support: include `m.room.create` in the room state sent with room-invites.

changelog.d/9711.bugfix Normal file

@@ -0,0 +1 @@
Fix recently added ratelimits to correctly honour the application service `rate_limited` flag.

changelog.d/9717.feature Normal file

@@ -0,0 +1 @@
Add experimental support for [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083): restricting room access via group membership.

changelog.d/9718.removal Normal file

@@ -0,0 +1 @@
Replace deprecated `imp` module with successor `importlib`. Contributed by Cristina Muñoz.

changelog.d/9719.doc Normal file

@@ -0,0 +1 @@
Make the allowed_local_3pids regex example in the sample config stricter.

changelog.d/9720.misc Normal file

@@ -0,0 +1 @@
Revert using `dmypy run` in lint script.


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# this script will use the api:
# https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
DOMAIN=yourserver.tld
# add this user as admin in your home server:

debian/changelog vendored

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.30.1) stable; urgency=medium
* New synapse release 1.30.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 26 Mar 2021 12:01:28 +0000
matrix-synapse-py3 (1.30.0) stable; urgency=medium
* New synapse release 1.30.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 22 Mar 2021 13:15:34 +0000
matrix-synapse-py3 (1.29.0) stable; urgency=medium
[ Jonathan de Jong ]


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
set -e


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
DIR="$( cd "$( dirname "$0" )" && pwd )"


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
DIR="$( cd "$( dirname "$0" )" && pwd )"


@@ -18,6 +18,11 @@ ARG PYTHON_VERSION=3.8
###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'
# install the OS build deps
RUN apt-get update && apt-get install -y \
build-essential \
@@ -28,33 +33,32 @@ RUN apt-get update && apt-get install -y \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
openssl \
rustc \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
&& rm -rf /var/lib/apt/lists/*
# Build dependencies that are not available as wheels, to speed up rebuilds
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography \
frozendict \
jaeger-client \
opentracing \
# Match the version constraints of Synapse
"prometheus_client>=0.4.0" \
psycopg2 \
pycparser \
pyrsistent \
pyyaml \
simplejson \
threadloop \
thrift
# now install synapse and all of the python deps to /install.
COPY synapse /synapse/synapse/
# Copy just what we need to pip install
COPY scripts /synapse/scripts/
COPY MANIFEST.in README.rst setup.py synctl /synapse/
COPY synapse/__init__.py /synapse/synapse/__init__.py
COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# To speed up rebuilds, install all of the dependencies before we copy over
# the whole synapse project, so that this layer in the Docker cache can be
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN pip install --prefix="/install" --no-warn-script-location \
/synapse[all]
/synapse[all]
# Copy over the rest of the project
COPY synapse /synapse/synapse/
# Install the synapse package itself and all of its children packages.
#
# This is aiming at installing only the `packages=find_packages(...)` from `setup.py`
RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
###
### Stage 1: runtime
@@ -69,7 +73,10 @@ RUN apt-get update && apt-get install -y \
libpq5 \
libwebp6 \
xmlsec1 \
&& rm -rf /var/lib/apt/lists/*
libjemalloc2 \
libssl-dev \
openssl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
@@ -82,4 +89,4 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]
HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1
CMD curl -fSs http://localhost:8008/health || exit 1


@@ -204,3 +204,8 @@ healthcheck:
timeout: 10s
retries: 3
```
## Using jemalloc
Jemalloc is embedded in the image and will be used instead of the default allocator.
You can read more about jemalloc in the Synapse [README](../README.md).


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
# The script to build the Debian package, as ran inside the Docker image.


@@ -173,18 +173,10 @@ report_stats: False
## API Configuration ##
room_invite_state_types:
- "m.room.join_rules"
- "m.room.canonical_alias"
- "m.room.avatar"
- "m.room.name"
{% if SYNAPSE_APPSERVICES %}
app_service_config_files:
{% for appservice in SYNAPSE_APPSERVICES %} - "{{ appservice }}"
{% endfor %}
{% else %}
app_service_config_files: []
{% endif %}
macaroon_secret_key: "{{ SYNAPSE_MACAROON_SECRET_KEY }}"

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the

View File

@@ -3,6 +3,7 @@
import codecs
import glob
import os
import platform
import subprocess
import sys
@@ -213,6 +214,13 @@ def main(args, environ):
if "-m" not in args:
args = ["-m", synapse_worker] + args
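# Preload jemalloc, if the image ships it, so Synapse uses it as its allocator.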
jemallocpath = "/usr/lib/%s-linux-gnu/libjemalloc.so.2" % (platform.machine(),)
if os.path.isfile(jemallocpath):
environ["LD_PRELOAD"] = jemallocpath
else:
log("Could not find %s, will not use" % (jemallocpath,))
# if there are no config files passed to synapse, try adding the default file
if not any(p.startswith("--config-path") or p.startswith("-c") for p in args):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
@@ -248,9 +256,9 @@ running with 'migrate_config'. See the README for more details.
args = ["python"] + args
if ownership is not None:
args = ["gosu", ownership] + args
os.execv("/usr/sbin/gosu", args)
os.execve("/usr/sbin/gosu", args, environ)
else:
os.execv("/usr/local/bin/python", args)
os.execve("/usr/local/bin/python", args, environ)
if __name__ == "__main__":

View File

@@ -1,5 +1,7 @@
# Contents
- [List all media in a room](#list-all-media-in-a-room)
- [Querying media](#querying-media)
* [List all media in a room](#list-all-media-in-a-room)
* [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
@@ -10,7 +12,11 @@
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
- [Purge Remote Media API](#purge-remote-media-api)
# List all media in a room
# Querying media
These APIs allow extracting media information from the homeserver.
## List all media in a room
This API gets a list of known media in a room.
However, it only shows media from unencrypted events or rooms.
@@ -36,6 +42,12 @@ The API returns a JSON body like the following:
}
```
## List all media uploaded by a user
Listing all media that has been uploaded by a local user can be achieved through
the use of the [List media of a user](user_admin_api.rst#list-media-of-a-user)
Admin API.
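As an illustration, a minimal Python sketch of calling the room media API as a server admin (assuming the `GET /_synapse/admin/v1/room/<room_id>/media` endpoint documented above; the base URL, token and room ID are placeholders):

```python
# Sketch: list the known media in a room via the admin API.
import requests

BASE_URL = "https://matrix.example.com"  # placeholder homeserver URL
TOKEN = "<admin access token>"           # placeholder admin token
ROOM_ID = "!room:example.com"            # placeholder room ID

resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/room/{ROOM_ID}/media",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
media = resp.json()  # {"local": [...mxc URIs...], "remote": [...]}
print(len(media["local"]), "local media items")
```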
# Quarantine media
Quarantining media means that it is marked as inaccessible to users. It applies

View File

@@ -111,35 +111,16 @@ List Accounts
=============
This API returns all local user accounts.
By default, the response is ordered by ascending user ID.
The api is::
The API is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
The parameter ``from`` is optional but used for pagination, denoting the
offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token``
from a previous call.
The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.
The parameter ``user_id`` is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.
The parameter ``name`` is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
The parameter ``deactivated`` is optional and if ``true`` will **include** deactivated users.
Defaults to ``false`` to exclude deactivated users.
A JSON body is returned with the following shape:
A response body like the following is returned:
.. code:: json
@@ -175,6 +156,66 @@ with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more users
to paginate through.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - Is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.
- ``name`` - Is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
- ``guests`` - string representing a bool - Is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
- ``deactivated`` - string representing a bool - Is optional and if ``true`` will **include** deactivated users.
Defaults to ``false`` to exclude deactivated users.
- ``limit`` - string representing a positive integer - Is optional but is used for pagination,
denoting the maximum number of items to return in this call. Defaults to ``100``.
- ``from`` - string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
Defaults to ``0``.
- ``order_by`` - The method by which to sort the returned list of users.
If the ordered field has duplicates, the second order is always by ascending ``name``,
which guarantees a stable ordering. Valid values are:
- ``name`` - Users are ordered alphabetically by ``name``. This is the default.
- ``is_guest`` - Users are ordered by ``is_guest`` status.
- ``admin`` - Users are ordered by ``admin`` status.
- ``user_type`` - Users are ordered alphabetically by ``user_type``.
- ``deactivated`` - Users are ordered by ``deactivated`` status.
- ``shadow_banned`` - Users are ordered by ``shadow_banned`` status.
- ``displayname`` - Users are ordered alphabetically by ``displayname``.
- ``avatar_url`` - Users are ordered alphabetically by avatar URL.
- ``dir`` - Direction of the sort order. Either ``f`` for forwards or ``b`` for backwards.
Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
Caution. The database only has indexes on the columns ``name`` and ``created_ts``.
This means that if a different sort order is used (``is_guest``, ``admin``,
``user_type``, ``deactivated``, ``shadow_banned``, ``avatar_url`` or ``displayname``),
this can cause a large load on the database, especially for large environments.
**Response**
The following fields are returned in the JSON response body:
- ``users`` - An array of objects, each containing information about a user.
User objects contain the following fields:
- ``name`` - string - Fully-qualified user ID (e.g. ``@user:server.com``).
- ``is_guest`` - bool - Status if that user is a guest account.
- ``admin`` - bool - Status if that user is a server administrator.
- ``user_type`` - string - Type of the user. Normal users are type ``None``.
This allows user type specific behaviour. There are also types ``support`` and ``bot``.
- ``deactivated`` - bool - Status if that user has been marked as deactivated.
- ``shadow_banned`` - bool - Status if that user has been marked as shadow banned.
- ``displayname`` - string - The user's display name if they have set one.
- ``avatar_url`` - string - The user's avatar URL if they have set one.
- ``next_token`` - string representing a positive integer - Indication for pagination. See above.
- ``total`` - integer - Total number of users.
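For example, a minimal Python sketch of fetching the first page of users ordered
by display name (the base URL and token are placeholders):

.. code:: python

    import requests

    resp = requests.get(
        "https://matrix.example.com/_synapse/admin/v2/users",
        headers={"Authorization": "Bearer <admin access token>"},
        params={"from": 0, "limit": 10, "guests": "false", "order_by": "displayname"},
    )
    resp.raise_for_status()
    body = resp.json()
    for user in body["users"]:
        print(user["name"], user.get("displayname"))
    # If "next_token" is present, repeat with from=<next_token> for the next page.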
Query current sessions for a user
=================================

View File

@@ -128,6 +128,9 @@ Some guidelines follow:
will be if no sub-options are enabled).
- Lines should be wrapped at 80 characters.
- Use two-space indents.
- `true` and `false` are spelt thus (as opposed to `True`, etc.)
- Use single quotes (`'`) rather than double-quotes (`"`) or backticks
(`` ` ``) to refer to configuration options.
Example:

View File

@@ -226,7 +226,7 @@ Synapse config:
oidc_providers:
- idp_id: github
idp_name: Github
idp_brand: "org.matrix.github" # optional: styling hint for clients
idp_brand: "github" # optional: styling hint for clients
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
@@ -252,7 +252,7 @@ oidc_providers:
oidc_providers:
- idp_id: google
idp_name: Google
idp_brand: "org.matrix.google" # optional: styling hint for clients
idp_brand: "google" # optional: styling hint for clients
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -299,7 +299,7 @@ Synapse config:
oidc_providers:
- idp_id: gitlab
idp_name: Gitlab
idp_brand: "org.matrix.gitlab" # optional: styling hint for clients
idp_brand: "gitlab" # optional: styling hint for clients
issuer: "https://gitlab.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
@@ -334,7 +334,7 @@ Synapse config:
```yaml
- idp_id: facebook
idp_name: Facebook
idp_brand: "org.matrix.facebook" # optional: styling hint for clients
idp_brand: "facebook" # optional: styling hint for clients
discover: false
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
@@ -386,7 +386,7 @@ oidc_providers:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.full_name }}"
```
### XWiki
@@ -401,8 +401,7 @@ oidc_providers:
idp_name: "XWiki"
issuer: "https://myxwikihost/xwiki/oidc/"
client_id: "your-client-id" # TO BE FILLED
# Needed until https://github.com/matrix-org/synapse/issues/9212 is fixed
client_secret: "dontcare"
client_auth_method: none
scopes: ["openid", "profile"]
user_profile_method: "userinfo_endpoint"
user_mapping_provider:
@@ -410,3 +409,40 @@ oidc_providers:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
## Apple
Configuring "Sign in with Apple" (SiWA) requires an Apple Developer account.
You will need to create a new "Services ID" for SiWA, and create and download a
private key with "SiWA" enabled.
As well as the private key file, you will need:
* Client ID: the "identifier" you gave the "Services ID"
* Team ID: a 10-character ID associated with your developer account.
* Key ID: the 10-character identifier for the key.
https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
documentation on setting up SiWA.
The synapse config will look like this:
```yaml
- idp_id: apple
idp_name: Apple
issuer: "https://appleid.apple.com"
client_id: "your-client-id" # Set to the "identifier" for your "ServicesID"
client_auth_method: "client_secret_post"
client_secret_jwt_key:
key_file: "/path/to/AuthKey_KEYIDCODE.p8" # point to your key file
jwt_header:
alg: ES256
kid: "KEYIDCODE" # Set to the 10-char Key ID
jwt_payload:
iss: TEAMIDCODE # Set to the 10-char Team ID
scopes: ["name", "email", "openid"]
authorization_endpoint: https://appleid.apple.com/auth/authorize?response_mode=form_post
user_mapping_provider:
config:
email_template: "{{ user.email }}"
```
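For reference, the client secret derived from `client_secret_jwt_key` is an ES256-signed JWT along the following lines; a hedged sketch using PyJWT (the key path, Key ID, Team ID and client ID are the placeholders from the config above):

```python
# Sketch: the kind of client-secret JWT built from client_secret_jwt_key.
import time

import jwt  # PyJWT, installed with the 'cryptography' extra for ES256

with open("/path/to/AuthKey_KEYIDCODE.p8") as f:
    private_key = f.read()

now = int(time.time())
client_secret = jwt.encode(
    {
        "iss": "TEAMIDCODE",      # 10-char Team ID (the 'iss' from jwt_payload)
        "iat": now,
        "exp": now + 3600,
        "aud": "https://appleid.apple.com",
        "sub": "your-client-id",  # the Services ID / client_id
    },
    private_key,
    algorithm="ES256",
    headers={"kid": "KEYIDCODE"},  # 10-char Key ID
)
```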

View File

@@ -3,8 +3,9 @@
It is recommended to put a reverse proxy such as
[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy) or
[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy),
[HAProxy](https://www.haproxy.org/) or
[relayd](https://man.openbsd.org/relayd.8) in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
@@ -103,10 +104,11 @@ example.com:8448 {
```
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com;
ServerName matrix.example.com
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPreserveHost on
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
@@ -115,7 +117,7 @@ example.com:8448 {
<VirtualHost *:8448>
SSLEngine on
ServerName example.com;
ServerName example.com
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
@@ -134,6 +136,8 @@ example.com:8448 {
</IfModule>
```
**NOTE 3**: Missing `ProxyPreserveHost on` can lead to a redirect loop.
### HAProxy
```
@@ -162,6 +166,52 @@ backend matrix
server matrix 127.0.0.1:8008
```
### Relayd
```
table <webserver> { 127.0.0.1 }
table <matrixserver> { 127.0.0.1 }
http protocol "https" {
tls { no tlsv1.0, ciphers "HIGH" }
tls keypair "example.com"
match header set "X-Forwarded-For" value "$REMOTE_ADDR"
match header set "X-Forwarded-Proto" value "https"
# set CORS header for .well-known/matrix/server, .well-known/matrix/client
# httpd does not support setting headers, so do it here
match request path "/.well-known/matrix/*" tag "matrix-cors"
match response tagged "matrix-cors" header set "Access-Control-Allow-Origin" value "*"
pass quick path "/_matrix/*" forward to <matrixserver>
pass quick path "/_synapse/client/*" forward to <matrixserver>
# pass on non-matrix traffic to webserver
pass forward to <webserver>
}
relay "https_traffic" {
listen on egress port 443 tls
protocol "https"
forward to <matrixserver> port 8008 check tcp
forward to <webserver> port 8080 check tcp
}
http protocol "matrix" {
tls { no tlsv1.0, ciphers "HIGH" }
tls keypair "example.com"
block
pass quick path "/_matrix/*" forward to <matrixserver>
pass quick path "/_synapse/client/*" forward to <matrixserver>
}
relay "matrix_federation" {
listen on egress port 8448 tls
protocol "matrix"
forward to <matrixserver> port 8008 check tcp
}
```
## Homeserver Configuration
You will also want to set `bind_addresses: ['127.0.0.1']` and

View File

@@ -89,8 +89,7 @@ pid_file: DATADIR/homeserver.pid
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, so this setting is of limited value if federation is enabled on
# the server.
# API, unless allow_profile_lookup_over_federation is set to false.
#
#require_auth_for_profile_requests: true
@@ -870,10 +869,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# burst_count: 10
# remote:
# per_second: 0.01
# burst_count: 3
# burst_count: 10
#
#rc_3pid_validation:
# per_second: 0.003
@@ -1247,9 +1246,9 @@ account_validity:
#
#allowed_local_3pids:
# - medium: email
# pattern: '.*@matrix\.org'
# pattern: '^[^@]+@matrix\.org$'
# - medium: email
# pattern: '.*@vector\.im'
# pattern: '^[^@]+@vector\.im$'
# - medium: msisdn
# pattern: '\+44'
@@ -1452,14 +1451,31 @@ metrics_flags:
## API Configuration ##
# A list of event types that will be included in the room_invite_state
# Controls for the state that is shared with users who receive an invite
# to a room
#
#room_invite_state_types:
# - "m.room.join_rules"
# - "m.room.canonical_alias"
# - "m.room.avatar"
# - "m.room.encryption"
# - "m.room.name"
room_prejoin_state:
# By default, the following state event types are shared with users who
# receive invites to the room:
#
# - m.room.join_rules
# - m.room.canonical_alias
# - m.room.avatar
# - m.room.encryption
# - m.room.name
#
# Uncomment the following to disable these defaults (so that only the event
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
#
#disable_default_event_types: true
# Additional state event types to share with users when they are invited
# to a room.
#
# By default, this list is empty (so only the default event types are shared).
#
#additional_event_types:
# - org.example.custom.event.type
# A list of application service config files to use
@@ -1759,6 +1775,9 @@ saml2_config:
# Note that, if this is changed, users authenticating via that provider
# will no longer be recognised as the same user!
#
# (Use "oidc" here if you are migrating from an old "oidc_config"
# configuration.)
#
# idp_name: A user-facing name for this identity provider, which is used to
# offer the user a choice of login mechanisms.
#
@@ -1780,7 +1799,26 @@ saml2_config:
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: Required. oauth2 client secret to use.
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
@@ -1855,6 +1893,24 @@ saml2_config:
# which is set to the claims returned by the UserInfo Endpoint and/or
# in the ID Token.
#
# It is possible to configure Synapse to only allow logins if certain attributes
# match particular values in the OIDC userinfo. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted. Additional attributes can be added to
# userinfo by expanding the `scopes` section of the OIDC config to retrieve
# additional information from the OIDC provider.
#
# If the OIDC claim is a list, then the attribute must match any value in the list.
# Otherwise, it must exactly match the value of the claim. Using the example
# below, the `family_name` claim MUST be "Stephensson", but the `groups`
# claim MUST contain "admin".
#
# attribute_requirements:
# - attribute: family_name
# value: "Stephensson"
# - attribute: groups
# value: "admin"
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for information on how to configure these options.
#
@@ -1887,34 +1943,9 @@ oidc_providers:
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# email_template: "{{ user.email }}"
# For use with Keycloak
#
#- idp_id: keycloak
# idp_name: Keycloak
# issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
# client_id: "synapse"
# client_secret: "copy secret generated in Keycloak UI"
# scopes: ["openid", "profile"]
# For use with Github
#
#- idp_id: github
# idp_name: Github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
# client_secret: "your-client-secret" # TO BE FILLED
# authorization_endpoint: "https://github.com/login/oauth/authorize"
# token_endpoint: "https://github.com/login/oauth/access_token"
# userinfo_endpoint: "https://api.github.com/user"
# scopes: ["read:user"]
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{ user.login }}"
# display_name_template: "{{ user.name }}"
# attribute_requirements:
# - attribute: userGroup
# value: "synapseUsers"
# Enable Central Authentication Service (CAS) for registration and login.
@@ -2627,19 +2658,20 @@ user_directory:
# Local statistics collection. Used in populating the room directory.
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
#
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
# Server Notices room configuration

View File

@@ -14,6 +14,7 @@ The Python class is instantiated with two objects:
* An instance of `synapse.module_api.ModuleApi`.
It then implements methods which return a boolean to alter behavior in Synapse.
All the methods must be defined.
There's a generic method for checking every event (`check_event_for_spam`), as
well as some specific methods:
@@ -24,6 +25,7 @@ well as some specific methods:
* `user_may_publish_room`
* `check_username_for_spam`
* `check_registration_for_spam`
* `check_media_file_for_spam`
The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
@@ -31,6 +33,10 @@ are documented in the `synapse.events.spamcheck.SpamChecker` class.
The `ModuleApi` class provides a way for the custom spam checker class to
call back into the homeserver internals.
Additionally, a `parse_config` method is mandatory and receives the plugin config
dictionary. After parsing, it must return an object which will be
passed to `__init__` later.
### Example
```python
@@ -41,6 +47,10 @@ class ExampleSpamChecker:
self.config = config
self.api = api
@staticmethod
def parse_config(config):
return config
async def check_event_for_spam(self, foo):
return False # allow all events
@@ -59,7 +69,13 @@ class ExampleSpamChecker:
async def check_username_for_spam(self, user_profile):
return False # allow all usernames
async def check_registration_for_spam(self, email_threepid, username, request_info):
async def check_registration_for_spam(
self,
email_threepid,
username,
request_info,
auth_provider_id,
):
return RegistrationBehaviour.ALLOW # allow all registrations
async def check_media_file_for_spam(self, file_wrapper, file_info):

View File

@@ -232,7 +232,6 @@ expressions:
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
@@ -276,7 +275,7 @@ using):
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Note that a HTTP listener with `client` and `federation` resources must be

View File

@@ -1,12 +1,13 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
follow_imports = silent
follow_imports = normal
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
warn_unreachable = True
local_partial_types = True
# To find all folders that pass mypy you run:
#
@@ -20,8 +21,9 @@ files =
synapse/crypto,
synapse/event_auth.py,
synapse/events/builder.py,
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/events/third_party_rules.py,
synapse/events/validator.py,
synapse/federation,
synapse/groups,
synapse/handlers,
@@ -38,6 +40,7 @@ files =
synapse/push,
synapse/replication,
synapse/rest,
synapse/secrets.py,
synapse/server.py,
synapse/server_notices,
synapse/spam_checker_api,
@@ -69,7 +72,9 @@ files =
synapse/util/async_helpers.py,
synapse/util/caches,
synapse/util/metrics.py,
synapse/util/macaroons.py,
synapse/util/stringutils.py,
synapse/visibility.py,
tests/replication,
tests/test_utils,
tests/handlers/test_password_providers.py,
@@ -116,9 +121,6 @@ ignore_missing_imports = True
[mypy-saml2.*]
ignore_missing_imports = True
[mypy-unpaddedbase64]
ignore_missing_imports = True
[mypy-canonicaljson]
ignore_missing_imports = True

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# A script which checks that an appropriate news file has been added on this
# branch.

View File

@@ -1,22 +1,49 @@
#! /bin/bash -eu
#!/usr/bin/env bash
# This script is designed for developers who want to test their code
# against Complement.
#
# It makes a Synapse image which represents the current checkout,
# then downloads Complement and runs it with that image.
# builds a synapse-complement image on top, then runs tests with it.
#
# By default the script will fetch the latest Complement master branch and
# run tests with that. This can be overridden to use a custom Complement
# checkout by setting the COMPLEMENT_DIR environment variable to the
# filepath of a local Complement checkout.
#
# A regular expression of test method names can be supplied as the first
# argument to the script. Complement will then only run those tests. If
# no regex is supplied, all tests are run. For example;
#
# ./complement.sh "TestOutboundFederation(Profile|Send)"
#
# Exit if a line returns a non-zero exit code
set -e
# Change to the repository root
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
echo "COMPLEMENT_DIR not set. Fetching the latest Complement checkout..."
wget -Nq https://github.com/matrix-org/complement/archive/master.tar.gz
tar -xzf master.tar.gz
COMPLEMENT_DIR=complement-master
echo "Checkout available at 'complement-master'"
fi
# Build the base Synapse image from the local checkout
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
# Build the Synapse monolith image from Complement, based on the above image we just built
docker build -t complement-synapse -f "$COMPLEMENT_DIR/dockerfiles/Synapse.Dockerfile" "$COMPLEMENT_DIR/dockerfiles"
# Download Complement
wget -N https://github.com/matrix-org/complement/archive/master.tar.gz
tar -xzf master.tar.gz
cd complement-master
cd "$COMPLEMENT_DIR"
# Build the Synapse image from Complement, based on the above image we just built
docker build -t complement-synapse -f dockerfiles/Synapse.Dockerfile ./dockerfiles
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
# A test name regex has been set, supply it to Complement
EXTRA_COMPLEMENT_ARGS+="-run $1 "
fi
# Run the tests on the resulting image!
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 ./tests
# Run the tests!
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -tags synapse_blacklist -count=1 $EXTRA_COMPLEMENT_ARGS ./tests

View File

@@ -1,10 +1,15 @@
#!/bin/bash
#!/usr/bin/env bash
# Find linting errors in Synapse's default config file.
# Exits with 0 if there are no problems, or another code otherwise.
# cd to the root of the repository
cd `dirname $0`/..
# Restore backup of sample config upon script exit
trap "mv docs/sample_config.yaml.bak docs/sample_config.yaml" EXIT
# Fix non-lowercase true/false values
sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
rm docs/sample_config.yaml.bak
# Check if anything changed
git diff --exit-code docs/sample_config.yaml
diff docs/sample_config.yaml docs/sample_config.yaml.bak

View File

@@ -22,8 +22,8 @@ import sys
from typing import Any, Optional
from urllib import parse as urlparse
import nacl.signing
import requests
import signedjson.key
import signedjson.types
import srvlookup
import yaml
@@ -44,18 +44,6 @@ def encode_base64(input_bytes):
return output_string
def decode_base64(input_string):
"""Decode a base64 string to bytes inferring padding from the length of the
string."""
input_bytes = input_string.encode("ascii")
input_len = len(input_bytes)
padding = b"=" * (3 - ((input_len + 3) % 4))
output_len = 3 * ((input_len + 2) // 4) + (input_len + 2) % 4 - 2
output_bytes = base64.b64decode(input_bytes + padding)
return output_bytes[:output_len]
def encode_canonical_json(value):
return json.dumps(
value,
@@ -88,42 +76,6 @@ def sign_json(
return json_object
NACL_ED25519 = "ed25519"
def decode_signing_key_base64(algorithm, version, key_base64):
"""Decode a base64 encoded signing key
Args:
algorithm (str): The algorithm the key is for (currently "ed25519").
version (str): Identifies this key out of the keys for this entity.
key_base64 (str): Base64 encoded bytes of the key.
Returns:
A SigningKey object.
"""
if algorithm == NACL_ED25519:
key_bytes = decode_base64(key_base64)
key = nacl.signing.SigningKey(key_bytes)
key.version = version
key.alg = NACL_ED25519
return key
else:
raise ValueError("Unsupported algorithm %s" % (algorithm,))
def read_signing_keys(stream):
"""Reads a list of keys from a stream
Args:
stream : A stream to iterate for keys.
Returns:
list of SigningKey objects.
"""
keys = []
for line in stream:
algorithm, version, key_base64 = line.split()
keys.append(decode_signing_key_base64(algorithm, version, key_base64))
return keys
def request(
method: Optional[str],
origin_name: str,
@@ -223,23 +175,28 @@ def main():
parser.add_argument("--body", help="Data to send as the body of the HTTP request")
parser.add_argument(
"path", help="request path. We will add '/_matrix/federation/v1/' to this."
"path", help="request path, including the '/_matrix/federation/...' prefix."
)
args = parser.parse_args()
if not args.server_name or not args.signing_key_path:
args.signing_key = None
if args.signing_key_path:
with open(args.signing_key_path) as f:
args.signing_key = f.readline()
if not args.server_name or not args.signing_key:
read_args_from_config(args)
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
algorithm, version, key_base64 = args.signing_key.split()
key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
result = request(
args.method,
args.server_name,
key,
args.destination,
"/_matrix/federation/v1/" + args.path,
args.path,
content=args.body,
)
@@ -255,10 +212,16 @@ def main():
def read_args_from_config(args):
with open(args.config, "r") as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config["server_name"]
if not args.signing_key_path:
args.signing_key_path = config["signing_key_path"]
if not args.signing_key:
if "signing_key" in config:
args.signing_key = config["signing_key"]
else:
with open(config["signing_key_path"]) as f:
args.signing_key = f.readline()
class MatrixConnectionAdapter(HTTPAdapter):

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Update/check the docs/sample_config.yaml

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
#
# This script generates SQL files for creating a brand new Synapse DB with the latest
# schema, on both SQLite3 and Postgres.

View File

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
set -e
@@ -6,4 +6,4 @@ set -e
# next PR number.
CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER

View File

@@ -51,7 +51,7 @@ def main(src_repo, dest_repo):
parts = line.split("|")
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
sys.exit(1)
move_media(parts[0], parts[1], src_paths, dest_paths)

View File

@@ -3,6 +3,7 @@ test_suite = tests
[check-manifest]
ignore =
.git-blame-ignore-revs
contrib
contrib/*
docs/*
@@ -17,7 +18,8 @@ ignore =
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501
# B00*: Subsection of the bugbear suite (TODO: add in remaining fixes)
ignore=W503,W504,E203,E731,E501,B006,B007,B008
[isort]
line_length = 88

View File

@@ -99,10 +99,11 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.7.0",
"black==20.8b1",
"flake8-comprehensions",
"flake8-bugbear",
"flake8",
]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.13"]
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.

View File

@@ -17,7 +17,9 @@
"""
from typing import Any, List, Optional, Type, Union
class RedisProtocol:
from twisted.internet import protocol
class RedisProtocol(protocol.Protocol):
def publish(self, channel: str, message: bytes): ...
async def ping(self) -> None: ...
async def set(
@@ -52,7 +54,7 @@ def lazyConnection(
class ConnectionHandler: ...
class RedisFactory:
class RedisFactory(protocol.ReconnectingClientFactory):
continueTrying: bool
handler: RedisProtocol
pool: List[RedisProtocol]

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.29.0"
__version__ = "1.31.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -39,6 +39,7 @@ from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
from synapse.util.caches.lrucache import LruCache
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
@@ -163,7 +164,7 @@ class Auth:
async def get_user_by_req(
self,
request: Request,
request: SynapseRequest,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
@@ -408,7 +409,7 @@ class Auth:
raise _InvalidMacaroonException()
try:
user_id = self.get_user_id_from_macaroon(macaroon)
user_id = get_value_from_macaroon(macaroon, "user_id")
guest = False
for caveat in macaroon.caveats:
@@ -416,7 +417,12 @@ class Auth:
guest = True
self.validate_macaroon(macaroon, rights, user_id=user_id)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
except (
pymacaroons.exceptions.MacaroonException,
KeyError,
TypeError,
ValueError,
):
raise InvalidClientTokenError("Invalid macaroon passed.")
if rights == "access":
@@ -424,27 +430,6 @@ class Auth:
return user_id, guest
def get_user_id_from_macaroon(self, macaroon):
"""Retrieve the user_id given by the caveats on the macaroon.
Does *not* validate the macaroon.
Args:
macaroon (pymacaroons.Macaroon): The macaroon to validate
Returns:
(str) user id
Raises:
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, user_id):
"""
validate that a Macaroon is understood by and was signed by this server.
@@ -465,21 +450,13 @@ class Auth:
v.satisfy_exact("type = " + type_string)
v.satisfy_exact("user_id = %s" % user_id)
v.satisfy_exact("guest = true")
v.satisfy_general(self._verify_expiry)
satisfy_expiry(v, self.clock.time_msec)
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.verify(macaroon, self._macaroon_secret_key)
def _verify_expiry(self, caveat):
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix) :])
now = self.hs.get_clock().time_msec()
return now < expiry
def get_appservice_by_req(self, request: SynapseRequest) -> ApplicationService:
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
@@ -581,6 +558,9 @@ class Auth:
Returns:
bool: False if no access_token was given, True otherwise.
"""
# This will always be set by the time Twisted calls us.
assert request.args is not None
query_params = request.args.get(b"access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -597,6 +577,8 @@ class Auth:
MissingClientTokenError: If there isn't a single access_token in the
request
"""
# This will always be set by the time Twisted calls us.
assert request.args is not None
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
query_params = request.args.get(b"access_token")

View File

@@ -51,6 +51,7 @@ class PresenceState:
OFFLINE = "offline"
UNAVAILABLE = "unavailable"
ONLINE = "online"
BUSY = "org.matrix.msc3026.busy"
class JoinRules:
@@ -58,6 +59,8 @@ class JoinRules:
KNOCK = "knock"
INVITE = "invite"
PRIVATE = "private"
# As defined for MSC3083.
MSC3083_RESTRICTED = "restricted"
class LoginType:
@@ -100,6 +103,9 @@ class EventTypes:
Dummy = "org.matrix.dummy_event"
MSC1772_SPACE_CHILD = "org.matrix.msc1772.space.child"
MSC1772_SPACE_PARENT = "org.matrix.msc1772.space.parent"
class EduTypes:
Presence = "m.presence"
@@ -160,6 +166,9 @@ class EventContentFields:
# cf https://github.com/matrix-org/matrix-doc/pull/2228
SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
# cf https://github.com/matrix-org/matrix-doc/pull/1772
MSC1772_ROOM_TYPE = "org.matrix.msc1772.type"
class RoomEncryptionAlgorithms:
MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"

View File

@@ -17,6 +17,7 @@ from collections import OrderedDict
from typing import Hashable, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util import Clock
@@ -31,10 +32,13 @@ class Ratelimiter:
burst_count: How many actions that can be performed before being limited.
"""
def __init__(self, clock: Clock, rate_hz: float, burst_count: int):
def __init__(
self, store: DataStore, clock: Clock, rate_hz: float, burst_count: int
):
self.clock = clock
self.rate_hz = rate_hz
self.burst_count = burst_count
self.store = store
# An ordered dictionary keeping track of actions, when they were last
# performed and how often. Each entry is a mapping from a key of arbitrary type
@@ -46,45 +50,10 @@ class Ratelimiter:
OrderedDict()
) # type: OrderedDict[Hashable, Tuple[float, int, float]]
def can_requester_do_action(
async def can_do_action(
self,
requester: Requester,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the requester perform the action?
Args:
requester: The requester to key off when rate limiting. The user property
will be used.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0
return self.can_do_action(
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
)
def can_do_action(
self,
key: Hashable,
requester: Optional[Requester],
key: Optional[Hashable] = None,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
@@ -92,9 +61,16 @@ class Ratelimiter:
) -> Tuple[bool, float]:
"""Can the entity (e.g. user or IP address) perform the action?
Checks if the user has ratelimiting disabled in the database by looking
for null/zero values in the `ratelimit_override` table. (Non-zero
values aren't honoured, as they're specific to the event sending
ratelimiter, rather than all ratelimiters)
Args:
key: The key we should use when rate limiting. Can be a user ID
(when sending events), an IP address, etc.
requester: The requester that is doing the action, if any. Used to check
if the user has ratelimits disabled in the database.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
@@ -109,6 +85,30 @@ class Ratelimiter:
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
if requester:
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0
# Check if ratelimiting has been disabled for the user.
#
# Note that we don't use the returned rate/burst count, as the table
# is specifically for the event sending ratelimiter. Instead, we
# only use it to (somewhat cheekily) infer whether the user should
# be subject to any rate limiting or not.
override = await self.store.get_ratelimit_for_user(
requester.authenticated_entity
)
if override and not override.messages_per_second:
return True, -1.0
# Override default values if set
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
rate_hz = rate_hz if rate_hz is not None else self.rate_hz
@@ -175,9 +175,10 @@ class Ratelimiter:
else:
del self.actions[key]
def ratelimit(
async def ratelimit(
self,
key: Hashable,
requester: Optional[Requester],
key: Optional[Hashable] = None,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
@@ -185,8 +186,16 @@ class Ratelimiter:
):
"""Checks if an action can be performed. If not, raises a LimitExceededError
Checks if the user has ratelimiting disabled in the database by looking
for null/zero values in the `ratelimit_override` table. (Non-zero
values aren't honoured, as they're specific to the event sending
ratelimiter, rather than all ratelimiters)
Args:
key: An arbitrary key used to classify an action
requester: The requester that is doing the action, if any. Used to check
if the user has ratelimits disabled.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
@@ -201,7 +210,8 @@ class Ratelimiter:
"""
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
allowed, time_allowed = self.can_do_action(
allowed, time_allowed = await self.can_do_action(
requester,
key,
rate_hz=rate_hz,
burst_count=burst_count,
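
As a sketch of the new call pattern from a caller's point of view (assumes a configured homeserver `hs` and a `requester` obtained from `Auth.get_user_by_req`):

```python
# Sketch: the async Ratelimiter now consults the database for per-user
# overrides, so construction takes the datastore and checks are awaited.
limiter = Ratelimiter(
    store=hs.get_datastore(), clock=hs.get_clock(), rate_hz=0.1, burst_count=10
)

allowed, next_allowed_ts = await limiter.can_do_action(requester)
if not allowed:
    # Alternatively, `await limiter.ratelimit(requester)` raises
    # LimitExceededError instead of returning a flag.
    ...
```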

View File

@@ -57,7 +57,7 @@ class RoomVersion:
state_res = attr.ib(type=int) # one of the StateResolutionVersions
enforce_key_validity = attr.ib(type=bool)
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
# Before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
@@ -69,6 +69,8 @@ class RoomVersion:
limit_notifications_power_levels = attr.ib(type=bool)
# MSC2174/MSC2176: Apply updated redaction rules algorithm.
msc2176_redaction_rules = attr.ib(type=bool)
# MSC3083: Support the 'restricted' join_rule.
msc3083_join_rules = attr.ib(type=bool)
class RoomVersions:
@@ -82,6 +84,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V2 = RoomVersion(
"2",
@@ -93,6 +96,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V3 = RoomVersion(
"3",
@@ -104,6 +108,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V4 = RoomVersion(
"4",
@@ -115,6 +120,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V5 = RoomVersion(
"5",
@@ -126,6 +132,7 @@ class RoomVersions:
strict_canonicaljson=False,
limit_notifications_power_levels=False,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
V6 = RoomVersion(
"6",
@@ -137,6 +144,7 @@ class RoomVersions:
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -148,6 +156,19 @@ class RoomVersions:
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=True,
msc3083_join_rules=False,
)
MSC3083 = RoomVersion(
"org.matrix.msc3083",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
)
@@ -162,4 +183,5 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V6,
RoomVersions.MSC2176,
)
# Note that we do not include MSC3083 here unless it is enabled in the config.
} # type: Dict[str, RoomVersion]

View File

@@ -22,7 +22,9 @@ logger = logging.getLogger(__name__)
try:
python_dependencies.check_requirements()
except python_dependencies.DependencyException as e:
sys.stderr.writelines(e.message)
sys.stderr.writelines(
e.message # noqa: B306, DependencyException.message is a property
)
sys.exit(1)

View File

@@ -21,8 +21,10 @@ import signal
import socket
import sys
import traceback
import warnings
from typing import Awaitable, Callable, Iterable
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn
from twisted.internet import defer, error, reactor
@@ -195,6 +197,25 @@ def listen_metrics(bind_addresses, port):
start_http_server(port, addr=host, registry=RegistryProxy)
def listen_manhole(bind_addresses: Iterable[str], port: int, manhole_globals: dict):
# twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
# warning. It's fixed by https://github.com/twisted/twisted/pull/1522, so
# suppress the warning for now.
warnings.filterwarnings(
action="ignore",
category=CryptographyDeprecationWarning,
message="int_from_bytes is deprecated",
)
from synapse.util.manhole import manhole
listen_tcp(
bind_addresses,
port,
manhole(username="matrix", password="rabbithole", globals=manhole_globals),
)
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
"""
Create a TCP socket for a port and several addresses

View File

@@ -147,7 +147,6 @@ from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
@@ -302,6 +301,8 @@ class GenericWorkerPresence(BasePresenceHandler):
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
self._busy_presence_enabled = hs.config.experimental.msc3026_enabled
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
@@ -439,8 +440,12 @@ class GenericWorkerPresence(BasePresenceHandler):
PresenceState.ONLINE,
PresenceState.UNAVAILABLE,
PresenceState.OFFLINE,
PresenceState.BUSY,
)
if presence not in valid_presence:
if presence not in valid_presence or (
presence == PresenceState.BUSY and not self._busy_presence_enabled
):
raise SynapseError(400, "Invalid presence state")
user_id = target_user.to_string()
@@ -634,12 +639,8 @@ class GenericWorkerServer(HomeServer):
if listener.type == "http":
self._listen_http(listener)
elif listener.type == "manhole":
_base.listen_tcp(
listener.bind_addresses,
listener.port,
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
)
elif listener.type == "metrics":
if not self.get_config().enable_metrics:
@@ -786,13 +787,6 @@ class FederationSenderHandler:
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
self.federation_sender.notify_new_events(
self.store.get_room_max_stream_ordering()
)
def wake_destination(self, server: str):
self.federation_sender.wake_destination(server)

View File

@@ -67,7 +67,6 @@ from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.versionstring import get_version_string
@@ -288,12 +287,8 @@ class SynapseHomeServer(HomeServer):
if listener.type == "http":
self._listening_services.extend(self._listener_http(config, listener))
elif listener.type == "manhole":
listen_tcp(
listener.bind_addresses,
listener.port,
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
_base.listen_manhole(
listener.bind_addresses, listener.port, manhole_globals={"hs": self}
)
elif listener.type == "replication":
services = listen_tcp(

View File

@@ -90,7 +90,7 @@ class ApplicationServiceApi(SimpleHttpClient):
self.clock = hs.get_clock()
self.protocol_meta_cache = ResponseCache(
hs, "as_protocol_meta", timeout_ms=HOUR_IN_MS
hs.get_clock(), "as_protocol_meta", timeout_ms=HOUR_IN_MS
) # type: ResponseCache[Tuple[str, str]]
async def query_user(self, service, user_id):

View File

@@ -212,9 +212,8 @@ class Config:
@classmethod
def read_file(cls, file_path, config_name):
cls.check_file(file_path, config_name)
with open(file_path) as file_stream:
return file_stream.read()
"""Deprecated: call read_file directly"""
return read_file(file_path, (config_name,))
def read_template(self, filename: str) -> jinja2.Template:
"""Load a template file from disk.
@@ -894,4 +893,35 @@ class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
return self._get_instance(key)
__all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]
def read_file(file_path: Any, config_path: Iterable[str]) -> str:
"""Check the given file exists, and read it into a string
If it does not, emit an error indicating the problem
Args:
file_path: the file to be read
config_path: where in the configuration file_path came from, so that a useful
error can be emitted if it does not exist.
Returns:
content of the file.
Raises:
ConfigError if there is a problem reading the file.
"""
if not isinstance(file_path, str):
raise ConfigError("%r is not a string" % (file_path,), config_path)
try:
os.stat(file_path)
with open(file_path) as file_stream:
return file_stream.read()
except OSError as e:
raise ConfigError("Error accessing file %r" % (file_path,), config_path) from e
__all__ = [
"Config",
"RootConfig",
"ShardedWorkerHandlingConfig",
"RoutableShardedWorkerHandlingConfig",
"read_file",
]
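
A short sketch of how a config class might use the new module-level helper (the `my_key_file` setting is hypothetical):

```python
# Hypothetical config section using the module-level read_file helper.
from synapse.config._base import Config, read_file

class MyConfig(Config):
    section = "myexample"

    def read_config(self, config, **kwargs):
        # read_file checks the file exists and raises a ConfigError that
        # names ("my_key_file",) as the offending config path on failure.
        self.my_key = read_file(config.get("my_key_file"), ("my_key_file",))
```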

View File

@@ -152,3 +152,5 @@ class ShardedWorkerHandlingConfig:
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
def get_instance(self, key: str) -> str: ...
def read_file(file_path: Any, config_path: Iterable[str]) -> str: ...

View File

@@ -1,4 +1,4 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2015-2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,38 +12,131 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.constants import EventTypes
import logging
from typing import Iterable
from ._base import Config
from synapse.api.constants import EventTypes
from synapse.config._base import Config, ConfigError
from synapse.config._util import validate_config
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
class ApiConfig(Config):
section = "api"
def read_config(self, config, **kwargs):
self.room_invite_state_types = config.get(
"room_invite_state_types",
[
EventTypes.JoinRules,
EventTypes.CanonicalAlias,
EventTypes.RoomAvatar,
EventTypes.RoomEncryption,
EventTypes.Name,
],
def read_config(self, config: JsonDict, **kwargs):
validate_config(_MAIN_SCHEMA, config, ())
self.room_prejoin_state = list(self._get_prejoin_state_types(config))
def generate_config_section(cls, **kwargs) -> str:
formatted_default_state_types = "\n".join(
" # - %s" % (t,) for t in _DEFAULT_PREJOIN_STATE_TYPES
)
def generate_config_section(cls, **kwargs):
return """\
## API Configuration ##
# A list of event types that will be included in the room_invite_state
# Controls for the state that is shared with users who receive an invite
# to a room
#
#room_invite_state_types:
# - "{JoinRules}"
# - "{CanonicalAlias}"
# - "{RoomAvatar}"
# - "{RoomEncryption}"
# - "{Name}"
""".format(
**vars(EventTypes)
)
room_prejoin_state:
# By default, the following state event types are shared with users who
# receive invites to the room:
#
%(formatted_default_state_types)s
#
# Uncomment the following to disable these defaults (so that only the event
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
#
#disable_default_event_types: true
# Additional state event types to share with users when they are invited
# to a room.
#
# By default, this list is empty (so only the default event types are shared).
#
#additional_event_types:
# - org.example.custom.event.type
""" % {
"formatted_default_state_types": formatted_default_state_types
}
def _get_prejoin_state_types(self, config: JsonDict) -> Iterable[str]:
"""Get the event types to include in the prejoin state
Parses the config and returns an iterable of the event types to be included.
"""
room_prejoin_state_config = config.get("room_prejoin_state") or {}
# backwards-compatibility support for room_invite_state_types
if "room_invite_state_types" in config:
# if both "room_invite_state_types" and "room_prejoin_state" are set, then
# we don't really know what to do.
if room_prejoin_state_config:
raise ConfigError(
"Can't specify both 'room_invite_state_types' and 'room_prejoin_state' "
"in config"
)
logger.warning(_ROOM_INVITE_STATE_TYPES_WARNING)
yield from config["room_invite_state_types"]
return
if not room_prejoin_state_config.get("disable_default_event_types"):
yield from _DEFAULT_PREJOIN_STATE_TYPES
if self.spaces_enabled:
# MSC1772 suggests adding m.room.create to the prejoin state
yield EventTypes.Create
yield from room_prejoin_state_config.get("additional_event_types", [])
_ROOM_INVITE_STATE_TYPES_WARNING = """\
WARNING: The 'room_invite_state_types' configuration setting is now deprecated,
and replaced with 'room_prejoin_state'. New features may not work correctly
unless 'room_invite_state_types' is removed. See the sample configuration file for
details of 'room_prejoin_state'.
--------------------------------------------------------------------------------
"""
_DEFAULT_PREJOIN_STATE_TYPES = [
EventTypes.JoinRules,
EventTypes.CanonicalAlias,
EventTypes.RoomAvatar,
EventTypes.RoomEncryption,
EventTypes.Name,
]
# room_prejoin_state can either be None (as it is in the default config), or
# an object containing other config settings
_ROOM_PREJOIN_STATE_CONFIG_SCHEMA = {
"oneOf": [
{
"type": "object",
"properties": {
"disable_default_event_types": {"type": "boolean"},
"additional_event_types": {
"type": "array",
"items": {"type": "string"},
},
},
},
{"type": "null"},
]
}
# the legacy room_invite_state_types setting
_ROOM_INVITE_STATE_TYPES_SCHEMA = {"type": "array", "items": {"type": "string"}}
_MAIN_SCHEMA = {
"type": "object",
"properties": {
"room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
"room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
},
}
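To make the precedence rules above concrete, here is a minimal, self-contained sketch of the same resolution logic, using literal event-type strings in place of the EventTypes constants (the helper name and the sample config are illustrative, not Synapse's API):

from typing import Iterable, Mapping

_DEFAULTS = [
    "m.room.join_rules",
    "m.room.canonical_alias",
    "m.room.avatar",
    "m.room.encryption",
    "m.room.name",
]

def prejoin_state_types(config: Mapping, spaces_enabled: bool = False) -> Iterable[str]:
    # the legacy setting wins, but only if the new one is absent
    prejoin = config.get("room_prejoin_state") or {}
    if "room_invite_state_types" in config:
        if prejoin:
            raise ValueError("can't set both options")
        yield from config["room_invite_state_types"]
        return
    if not prejoin.get("disable_default_event_types"):
        yield from _DEFAULTS
        if spaces_enabled:
            yield "m.room.create"  # MSC1772
    yield from prejoin.get("additional_event_types", [])

# defaults plus one custom type:
cfg = {"room_prejoin_state": {"additional_event_types": ["org.example.custom.event.type"]}}
print(list(prejoin_state_types(cfg)))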

View File

@@ -24,7 +24,7 @@ from ._base import Config, ConfigError
_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
# Map from canonicalised cache name to cache.
_CACHES = {}
_CACHES = {} # type: Dict[str, Callable[[float], None]]
# a lock on the contents of _CACHES
_CACHES_LOCK = threading.Lock()
@@ -59,7 +59,9 @@ def _canonicalise_cache_name(cache_name: str) -> str:
return cache_name.lower()
def add_resizable_cache(cache_name: str, cache_resize_callback: Callable):
def add_resizable_cache(
cache_name: str, cache_resize_callback: Callable[[float], None]
):
"""Register a cache that's size can dynamically change
Args:
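As a rough illustration of the registry this hunk is typing, a self-contained sketch; resize_all_caches and the callbacks are hypothetical stand-ins, not Synapse's module:

import threading
from typing import Callable, Dict

_CACHES = {}  # type: Dict[str, Callable[[float], None]]
_CACHES_LOCK = threading.Lock()

def add_resizable_cache(cache_name: str, cache_resize_callback: Callable[[float], None]) -> None:
    # register under a canonicalised name so a later config reload can find it
    with _CACHES_LOCK:
        _CACHES[cache_name.lower()] = cache_resize_callback

def resize_all_caches(factor: float) -> None:
    # snapshot the callbacks under the lock, then call them outside it
    with _CACHES_LOCK:
        callbacks = list(_CACHES.values())
    for callback in callbacks:
        callback(factor)

sizes = {}
add_resizable_cache("getEvent", lambda factor: sizes.update(getevent=int(1000 * factor)))
resize_all_caches(0.5)
print(sizes)  # {'getevent': 500}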

View File

@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config._base import Config
from synapse.types import JsonDict
@@ -27,3 +28,11 @@ class ExperimentalConfig(Config):
# MSC2858 (multiple SSO identity providers)
self.msc2858_enabled = experimental.get("msc2858_enabled", False) # type: bool
# Spaces (MSC1772, MSC2946, MSC3083, etc)
self.spaces_enabled = experimental.get("spaces_enabled", False) # type: bool
if self.spaces_enabled:
KNOWN_ROOM_VERSIONS[RoomVersions.MSC3083.identifier] = RoomVersions.MSC3083
# MSC3026 (busy presence state)
self.msc3026_enabled = experimental.get("msc3026_enabled", False) # type: bool

View File

@@ -404,7 +404,11 @@ def _parse_key_servers(key_servers, federation_verify_certificates):
try:
jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA)
except jsonschema.ValidationError as e:
raise ConfigError("Unable to parse 'trusted_key_servers': " + e.message)
raise ConfigError(
"Unable to parse 'trusted_key_servers': {}".format(
e.message # noqa: B306, jsonschema.ValidationError.message is a valid attribute
)
)
for server in key_servers:
server_name = server["server_name"]
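A self-contained sketch of the pattern above: validate config with jsonschema and surface ValidationError.message as a config error (the schema and data here are cut-down illustrations):

import jsonschema

TRUSTED_KEY_SERVERS_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {"server_name": {"type": "string"}},
        "required": ["server_name"],
    },
}

key_servers = [{"server_name": "matrix.org"}, {}]  # the second entry is invalid
try:
    jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA)
except jsonschema.ValidationError as e:
    # e.message is a valid attribute, hence the B306 suppression above
    print("Unable to parse 'trusted_key_servers': {}".format(e.message))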

View File

@@ -21,8 +21,10 @@ import threading
from string import Template
import yaml
from zope.interface import implementer
from twisted.logger import (
ILogObserver,
LogBeginner,
STDLibLogObserver,
eventAsText,
@@ -227,7 +229,8 @@ def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) ->
threadlocal = threading.local()
def _log(event):
@implementer(ILogObserver)
def _log(event: dict) -> None:
if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
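A minimal sketch of a Twisted log observer in the style of the hunk above; the filtering rule is illustrative (the real code inspects event["log_text"] rather than the formatted text):

from zope.interface import implementer
from twisted.logger import ILogObserver, eventAsText

@implementer(ILogObserver)
def observer(event: dict) -> None:
    # drop a noisy event, analogous to the log_text check above
    text = eventAsText(event, includeTimestamp=False, includeSystem=False)
    if text.startswith("DNSDatagramProtocol starting on "):
        return
    print(text)

observer({"log_format": "hello {name}", "name": "world"})  # -> hello world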

View File

@@ -56,7 +56,9 @@ class MetricsConfig(Config):
try:
check_requirements("sentry")
except DependencyException as e:
raise ConfigError(e.message)
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
self.sentry_dsn = config["sentry"].get("dsn")
if not self.sentry_dsn:

View File

@@ -15,17 +15,18 @@
# limitations under the License.
from collections import Counter
from typing import Iterable, Optional, Tuple, Type
from typing import Iterable, List, Mapping, Optional, Tuple, Type
import attr
from synapse.config._util import validate_config
from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import Collection, JsonDict
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_mxc_uri
from ._base import Config, ConfigError
from ._base import Config, ConfigError, read_file
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingProvider"
@@ -41,7 +42,9 @@ class OIDCConfig(Config):
try:
check_requirements("oidc")
except DependencyException as e:
raise ConfigError(e.message) from e
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
) from e
# check we don't have any duplicate idp_ids now. (The SSO handler will also
# check for duplicates when the REST listeners get registered, but that happens
@@ -76,6 +79,9 @@ class OIDCConfig(Config):
# Note that, if this is changed, users authenticating via that provider
# will no longer be recognised as the same user!
#
# (Use "oidc" here if you are migrating from an old "oidc_config"
# configuration.)
#
# idp_name: A user-facing name for this identity provider, which is used to
# offer the user a choice of login mechanisms.
#
@@ -97,7 +103,26 @@ class OIDCConfig(Config):
#
# client_id: Required. oauth2 client id to use.
#
# client_secret: Required. oauth2 client secret to use.
# client_secret: oauth2 client secret to use. May be omitted if
# client_secret_jwt_key is given, or if client_auth_method is 'none'.
#
# client_secret_jwt_key: Alternative to client_secret: details of a key used
# to create a JSON Web Token to be used as an OAuth2 client secret. If
# given, must be a dictionary with the following properties:
#
# key: a pem-encoded signing key. Must be a suitable key for the
# algorithm specified. Required unless 'key_file' is given.
#
# key_file: the path to file containing a pem-encoded signing key file.
# Required unless 'key' is given.
#
# jwt_header: a dictionary giving properties to include in the JWT
# header. Must include the key 'alg', giving the algorithm used to
# sign the JWT, such as "ES256", using the JWA identifiers in
# RFC7518.
#
# jwt_payload: an optional dictionary giving properties to include in
# the JWT payload. Normally this should include an 'iss' key.
#
# client_auth_method: auth method to use when exchanging the token. Valid
# values are 'client_secret_basic' (default), 'client_secret_post' and
@@ -172,6 +197,24 @@ class OIDCConfig(Config):
# which is set to the claims returned by the UserInfo Endpoint and/or
# in the ID Token.
#
# It is possible to configure Synapse to only allow logins if certain attributes
# match particular values in the OIDC userinfo. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted. Additional attributes can be added to
# userinfo by expanding the `scopes` section of the OIDC config to retrieve
# additional information from the OIDC provider.
#
# If the OIDC claim is a list, then the attribute must match any value in the list.
# Otherwise, it must exactly match the value of the claim. Using the example
# below, the `family_name` claim MUST be "Stephensson", but the `groups`
# claim MUST contain "admin".
#
# attribute_requirements:
# - attribute: family_name
# value: "Stephensson"
# - attribute: groups
# value: "admin"
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for information on how to configure these options.
#
@@ -204,34 +247,9 @@ class OIDCConfig(Config):
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
# email_template: "{{{{ user.email }}}}"
# For use with Keycloak
#
#- idp_id: keycloak
# idp_name: Keycloak
# issuer: "https://127.0.0.1:8443/auth/realms/my_realm_name"
# client_id: "synapse"
# client_secret: "copy secret generated in Keycloak UI"
# scopes: ["openid", "profile"]
# For use with Github
#
#- idp_id: github
# idp_name: Github
# idp_brand: org.matrix.github
# discover: false
# issuer: "https://github.com/"
# client_id: "your-client-id" # TO BE FILLED
# client_secret: "your-client-secret" # TO BE FILLED
# authorization_endpoint: "https://github.com/login/oauth/authorize"
# token_endpoint: "https://github.com/login/oauth/access_token"
# userinfo_endpoint: "https://api.github.com/user"
# scopes: ["read:user"]
# user_mapping_provider:
# config:
# subject_claim: "id"
# localpart_template: "{{{{ user.login }}}}"
# display_name_template: "{{{{ user.name }}}}"
# attribute_requirements:
# - attribute: userGroup
# value: "synapseUsers"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)
@@ -240,7 +258,7 @@ class OIDCConfig(Config):
# jsonschema definition of the configuration settings for an oidc identity provider
OIDC_PROVIDER_CONFIG_SCHEMA = {
"type": "object",
"required": ["issuer", "client_id", "client_secret"],
"required": ["issuer", "client_id"],
"properties": {
"idp_id": {
"type": "string",
@@ -253,7 +271,12 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"idp_icon": {"type": "string"},
"idp_brand": {
"type": "string",
# MSC2758-style namespaced identifier
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
},
"idp_unstable_brand": {
"type": "string",
"minLength": 1,
"maxLength": 255,
"pattern": "^[a-z][a-z0-9_.-]*$",
@@ -262,6 +285,30 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
"issuer": {"type": "string"},
"client_id": {"type": "string"},
"client_secret": {"type": "string"},
"client_secret_jwt_key": {
"type": "object",
"required": ["jwt_header"],
"oneOf": [
{"required": ["key"]},
{"required": ["key_file"]},
],
"properties": {
"key": {"type": "string"},
"key_file": {"type": "string"},
"jwt_header": {
"type": "object",
"required": ["alg"],
"properties": {
"alg": {"type": "string"},
},
"additionalProperties": {"type": "string"},
},
"jwt_payload": {
"type": "object",
"additionalProperties": {"type": "string"},
},
},
},
"client_auth_method": {
"type": "string",
# the following list is the same as the keys of
@@ -281,6 +328,10 @@ OIDC_PROVIDER_CONFIG_SCHEMA = {
},
"allow_existing_users": {"type": "boolean"},
"user_mapping_provider": {"type": ["object", "null"]},
"attribute_requirements": {
"type": "array",
"items": SsoAttributeRequirement.JSON_SCHEMA,
},
},
}
@@ -404,15 +455,36 @@ def _parse_oidc_config_dict(
"idp_icon must be a valid MXC URI", config_path + ("idp_icon",)
) from e
client_secret_jwt_key_config = oidc_config.get("client_secret_jwt_key")
client_secret_jwt_key = None # type: Optional[OidcProviderClientSecretJwtKey]
if client_secret_jwt_key_config is not None:
keyfile = client_secret_jwt_key_config.get("key_file")
if keyfile:
key = read_file(keyfile, config_path + ("client_secret_jwt_key",))
else:
key = client_secret_jwt_key_config["key"]
client_secret_jwt_key = OidcProviderClientSecretJwtKey(
key=key,
jwt_header=client_secret_jwt_key_config["jwt_header"],
jwt_payload=client_secret_jwt_key_config.get("jwt_payload", {}),
)
# parse attribute_requirements from config (list of dicts) into a list of SsoAttributeRequirement
attribute_requirements = [
SsoAttributeRequirement(**x)
for x in oidc_config.get("attribute_requirements", [])
]
return OidcProviderConfig(
idp_id=idp_id,
idp_name=oidc_config.get("idp_name", "OIDC"),
idp_icon=idp_icon,
idp_brand=oidc_config.get("idp_brand"),
unstable_idp_brand=oidc_config.get("unstable_idp_brand"),
discover=oidc_config.get("discover", True),
issuer=oidc_config["issuer"],
client_id=oidc_config["client_id"],
client_secret=oidc_config["client_secret"],
client_secret=oidc_config.get("client_secret"),
client_secret_jwt_key=client_secret_jwt_key,
client_auth_method=oidc_config.get("client_auth_method", "client_secret_basic"),
scopes=oidc_config.get("scopes", ["openid"]),
authorization_endpoint=oidc_config.get("authorization_endpoint"),
@@ -424,9 +496,22 @@ def _parse_oidc_config_dict(
allow_existing_users=oidc_config.get("allow_existing_users", False),
user_mapping_provider_class=user_mapping_provider_class,
user_mapping_provider_config=user_mapping_provider_config,
attribute_requirements=attribute_requirements,
)
@attr.s(slots=True, frozen=True)
class OidcProviderClientSecretJwtKey:
# a pem-encoded signing key
key = attr.ib(type=str)
# properties to include in the JWT header
jwt_header = attr.ib(type=Mapping[str, str])
# properties to include in the JWT payload.
jwt_payload = attr.ib(type=Mapping[str, str])
@attr.s(slots=True, frozen=True)
class OidcProviderConfig:
# a unique identifier for this identity provider. Used in the 'user_external_ids'
@@ -442,6 +527,9 @@ class OidcProviderConfig:
# Optional brand identifier for this IdP.
idp_brand = attr.ib(type=Optional[str])
# Optional brand identifier for the unstable API (see MSC2858).
unstable_idp_brand = attr.ib(type=Optional[str])
# whether the OIDC discovery mechanism is used to discover endpoints
discover = attr.ib(type=bool)
@@ -452,8 +540,13 @@ class OidcProviderConfig:
# oauth2 client id to use
client_id = attr.ib(type=str)
# oauth2 client secret to use
client_secret = attr.ib(type=str)
# oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate
# a secret.
client_secret = attr.ib(type=Optional[str])
# key to use to construct a JWT to use as a client secret. May be `None` if
# `client_secret` is set.
client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey])
# auth method to use when exchanging the token.
# Valid values are 'client_secret_basic', 'client_secret_post' and
@@ -493,3 +586,6 @@ class OidcProviderConfig:
# the config of the user mapping provider
user_mapping_provider_config = attr.ib()
# required attributes to require in userinfo to allow login/registration
attribute_requirements = attr.ib(type=List[SsoAttributeRequirement])
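As a rough sketch of what a client_secret_jwt_key amounts to at runtime, here is how such a JWT could be minted with authlib (the library Synapse's OIDC support depends on); HS256 and all the values below are placeholders for the ES256/pem setup shown in the sample config, not the actual implementation:

from authlib.jose import jwt

jwt_header = {"alg": "HS256"}          # the sample config above uses ES256 with a pem key
jwt_payload = {"iss": "my-client-id"}  # "normally this should include an 'iss' key"
key = "shared-secret"                  # placeholder for the pem-encoded signing key

client_secret = jwt.encode(jwt_header, jwt_payload, key)
print(client_secret.decode("ascii"))   # compact JWT, usable as the oauth2 client secret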

View File

@@ -95,11 +95,11 @@ class RatelimitConfig(Config):
self.rc_joins_local = RateLimitConfig(
config.get("rc_joins", {}).get("local", {}),
defaults={"per_second": 0.1, "burst_count": 3},
defaults={"per_second": 0.1, "burst_count": 10},
)
self.rc_joins_remote = RateLimitConfig(
config.get("rc_joins", {}).get("remote", {}),
defaults={"per_second": 0.01, "burst_count": 3},
defaults={"per_second": 0.01, "burst_count": 10},
)
# Ratelimit cross-user key requests:
@@ -187,10 +187,10 @@ class RatelimitConfig(Config):
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# burst_count: 10
# remote:
# per_second: 0.01
# burst_count: 3
# burst_count: 10
#
#rc_3pid_validation:
# per_second: 0.003

View File

@@ -298,9 +298,9 @@ class RegistrationConfig(Config):
#
#allowed_local_3pids:
# - medium: email
# pattern: '.*@matrix\\.org'
# pattern: '^[^@]+@matrix\\.org$'
# - medium: email
# pattern: '.*@vector\\.im'
# pattern: '^[^@]+@vector\\.im$'
# - medium: msisdn
# pattern: '\\+44'
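Why the anchors matter: an unanchored pattern also matches lookalike domains, which is exactly the foot-gun the stricter sample config avoids. A runnable demonstration:

import re

loose = r".*@matrix\.org"
strict = r"^[^@]+@matrix\.org$"

addr = "alice@matrix.org.evil.example"
print(bool(re.match(loose, addr)))   # True  -- a prefix match is enough
print(bool(re.match(strict, addr)))  # False -- must end at matrix.org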

View File

@@ -176,7 +176,9 @@ class ContentRepositoryConfig(Config):
check_requirements("url_preview")
except DependencyException as e:
raise ConfigError(e.message)
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
if "url_preview_ip_range_blacklist" not in config:
raise ConfigError(

View File

@@ -76,7 +76,9 @@ class SAML2Config(Config):
try:
check_requirements("saml2")
except DependencyException as e:
raise ConfigError(e.message)
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
self.saml2_enabled = True

View File

@@ -841,8 +841,7 @@ class ServerConfig(Config):
# Whether to require authentication to retrieve profile data (avatars,
# display names) of other users through the client API. Defaults to
# 'false'. Note that profile data is also available via the federation
# API, so this setting is of limited value if federation is enabled on
# the server.
# API, unless allow_profile_lookup_over_federation is set to false.
#
#require_auth_for_profile_requests: true

View File

@@ -13,10 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import logging
from ._base import Config
ROOM_STATS_DISABLED_WARN = """\
WARNING: room/user statistics have been disabled via the stats.enabled
configuration setting. This means that certain features (such as the room
directory) will not operate correctly. Future versions of Synapse may ignore
this setting.
To fix this warning, remove the stats.enabled setting from your configuration
file.
--------------------------------------------------------------------------------"""
logger = logging.getLogger(__name__)
class StatsConfig(Config):
"""Stats Configuration
@@ -28,30 +40,29 @@ class StatsConfig(Config):
def read_config(self, config, **kwargs):
self.stats_enabled = True
self.stats_bucket_size = 86400 * 1000
self.stats_retention = sys.maxsize
stats_config = config.get("stats", None)
if stats_config:
self.stats_enabled = stats_config.get("enabled", self.stats_enabled)
self.stats_bucket_size = self.parse_duration(
stats_config.get("bucket_size", "1d")
)
self.stats_retention = self.parse_duration(
stats_config.get("retention", "%ds" % (sys.maxsize,))
)
if not self.stats_enabled:
logger.warning(ROOM_STATS_DISABLED_WARN)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Local statistics collection. Used in populating the room directory.
# Settings for local room and user statistics collection. See
# docs/room_and_user_statistics.md.
#
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
stats:
# Uncomment the following to disable room and user statistics. Note that doing
# so may cause certain features (such as the room directory) not to work
# correctly.
#
#enabled: false
# The size of each timeslice in the room_stats_historical and
# user_stats_historical tables, as a time period. Defaults to "1d".
#
#bucket_size: 1h
"""

View File

@@ -39,7 +39,9 @@ class TracerConfig(Config):
try:
check_requirements("opentracing")
except DependencyException as e:
raise ConfigError(e.message)
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
# The tracer is enabled so sanitize the config

View File

@@ -191,7 +191,7 @@ def _context_info_cb(ssl_connection, where, ret):
# ... we further assume that SSLClientConnectionCreator has set the
# '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where)
except: # noqa: E722, taken from the twisted implementation
except BaseException: # taken from the twisted implementation
logger.exception("Error during info_callback")
f = Failure()
tls_protocol.failVerification(f)
@@ -219,7 +219,7 @@ class SSLClientConnectionCreator:
# ... and we also gut-wrench a '_synapse_tls_verifier' attribute into the
# tls_protocol so that the SSL context's info callback has something to
# call to do the cert verification.
setattr(tls_protocol, "_synapse_tls_verifier", self._verifier)
tls_protocol._synapse_tls_verifier = self._verifier
return connection

View File

@@ -57,7 +57,7 @@ from synapse.util.metrics import Measure
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)

View File

@@ -162,7 +162,7 @@ def check(
logger.debug("Auth events: %s", [a.event_id for a in auth_events.values()])
if event.type == EventTypes.Member:
_is_membership_change_allowed(event, auth_events)
_is_membership_change_allowed(room_version_obj, event, auth_events)
logger.debug("Allowing! %s", event)
return
@@ -220,8 +220,19 @@ def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
def _is_membership_change_allowed(
event: EventBase, auth_events: StateMap[EventBase]
room_version: RoomVersion, event: EventBase, auth_events: StateMap[EventBase]
) -> None:
"""
Confirms that the membership change described by the event is allowed.
Args:
room_version: The version of the room.
event: The event to check.
auth_events: The current auth events of the room.
Raises:
AuthError if the event is not allowed.
"""
membership = event.content["membership"]
# Check if this is the room creator joining:
@@ -315,14 +326,19 @@ def _is_membership_change_allowed(
if user_level < invite_level:
raise AuthError(403, "You don't have permission to invite users")
elif Membership.JOIN == membership:
# Joins are valid iff caller == target and they were:
# invited: They are accepting the invitation
# joined: It's a NOOP
# Joins are valid iff caller == target and:
# * They are not banned.
# * They are accepting a previously sent invitation.
# * They are already joined (it's a NOOP).
# * The room is public or restricted.
if event.user_id != target_user_id:
raise AuthError(403, "Cannot force another user to join.")
elif target_banned:
raise AuthError(403, "You are banned from this room")
elif join_rule == JoinRules.PUBLIC:
elif join_rule == JoinRules.PUBLIC or (
room_version.msc3083_join_rules
and join_rule == JoinRules.MSC3083_RESTRICTED
):
pass
elif join_rule == JoinRules.INVITE:
if not caller_in_room and not caller_invited:
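A condensed sketch of the join-rule branch above: in a room version with MSC3083 join rules, a "restricted" join rule is treated like "public" at this point in the checks (the caller/target and ban checks have already run). The function and constant names are illustrative:

PUBLIC, INVITE, RESTRICTED = "public", "invite", "restricted"

def can_join(join_rule: str, msc3083_join_rules: bool,
             caller_in_room: bool, caller_invited: bool) -> bool:
    if join_rule == PUBLIC or (msc3083_join_rules and join_rule == RESTRICTED):
        return True
    if join_rule == INVITE:
        return caller_in_room or caller_invited
    return False

print(can_join(RESTRICTED, True, False, False))   # True in an MSC3083 room
print(can_join(RESTRICTED, False, False, False))  # False otherwise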

View File

@@ -98,7 +98,7 @@ class DefaultDictProperty(DictProperty):
class _EventInternalMetadata:
__slots__ = ["_dict", "stream_ordering"]
__slots__ = ["_dict", "stream_ordering", "outlier"]
def __init__(self, internal_metadata_dict: JsonDict):
# we have to copy the dict, because it turns out that the same dict is
@@ -108,7 +108,10 @@ class _EventInternalMetadata:
# the stream ordering of this event. None, until it has been persisted.
self.stream_ordering = None # type: Optional[int]
outlier = DictProperty("outlier") # type: bool
# whether this event is an outlier (ie, whether we have the state at that point
# in the DAG)
self.outlier = False
out_of_band_membership = DictProperty("out_of_band_membership") # type: bool
send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str
recheck_redaction = DictProperty("recheck_redaction") # type: bool
@@ -129,7 +132,7 @@ class _EventInternalMetadata:
return dict(self._dict)
def is_outlier(self) -> bool:
return self._dict.get("outlier", False)
return self.outlier
def is_out_of_band_membership(self) -> bool:
"""Whether this is an out of band membership, like an invite or an invite

View File

@@ -15,6 +15,7 @@
# limitations under the License.
import inspect
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union
from synapse.rest.media.v1._base import FileInfo
@@ -27,6 +28,8 @@ if TYPE_CHECKING:
import synapse.events
import synapse.server
logger = logging.getLogger(__name__)
class SpamChecker:
def __init__(self, hs: "synapse.server.HomeServer"):
@@ -190,6 +193,7 @@ class SpamChecker:
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
auth_provider_id: Optional[str] = None,
) -> RegistrationBehaviour:
"""Checks if we should allow the given registration request.
@@ -198,6 +202,9 @@ class SpamChecker:
username: The request user name, if any
request_info: List of tuples of user agent and IP that
were used during the registration process.
auth_provider_id: The SSO IdP the user used, e.g "oidc", "saml",
"cas". If any. Note this does not include users registered
via a password provider.
Returns:
Enum for how the request should be handled
@@ -208,9 +215,25 @@ class SpamChecker:
# spam checker
checker = getattr(spam_checker, "check_registration_for_spam", None)
if checker:
behaviour = await maybe_awaitable(
checker(email_threepid, username, request_info)
)
# Provide auth_provider_id if the function supports it
checker_args = inspect.signature(checker)
if len(checker_args.parameters) == 4:
d = checker(
email_threepid,
username,
request_info,
auth_provider_id,
)
elif len(checker_args.parameters) == 3:
d = checker(email_threepid, username, request_info)
else:
logger.error(
"Invalid signature for %s.check_registration_for_spam. Denying registration",
spam_checker.__module__,
)
return RegistrationBehaviour.DENY
behaviour = await maybe_awaitable(d)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
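A runnable sketch of the backwards-compatibility trick above: inspect the callback's signature and only pass the new argument to modules that accept it. The two checkers here are stand-ins for spam-checker modules:

import inspect

def old_checker(threepid, username, request_info):
    return "ALLOW"

def new_checker(threepid, username, request_info, auth_provider_id):
    return "DENY" if auth_provider_id == "oidc" else "ALLOW"

def call_checker(checker, threepid, username, request_info, auth_provider_id):
    n_params = len(inspect.signature(checker).parameters)
    if n_params == 4:
        return checker(threepid, username, request_info, auth_provider_id)
    elif n_params == 3:
        return checker(threepid, username, request_info)
    raise TypeError("invalid checker signature")

print(call_checker(old_checker, None, "bob", [], "oidc"))  # ALLOW
print(call_checker(new_checker, None, "bob", [], "oidc"))  # DENY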

View File

@@ -13,12 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Callable, Union
from typing import TYPE_CHECKING, Union
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.types import Requester, StateMap
if TYPE_CHECKING:
from synapse.server import HomeServer
class ThirdPartyEventRules:
"""Allows server admins to provide a Python module implementing an extra
@@ -28,7 +31,7 @@ class ThirdPartyEventRules:
behaviours.
"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.third_party_rules = None
self.store = hs.get_datastore()
@@ -95,10 +98,9 @@ class ThirdPartyEventRules:
if self.third_party_rules is None:
return True
ret = await self.third_party_rules.on_create_room(
return await self.third_party_rules.on_create_room(
requester, config, is_requester_admin
)
return ret
async def check_threepid_can_be_invited(
self, medium: str, address: str, room_id: str
@@ -119,10 +121,9 @@ class ThirdPartyEventRules:
state_events = await self._get_state_map_for_room(room_id)
ret = await self.third_party_rules.check_threepid_can_be_invited(
return await self.third_party_rules.check_threepid_can_be_invited(
medium, address, state_events
)
return ret
async def check_visibility_can_be_modified(
self, room_id: str, new_visibility: str
@@ -143,7 +144,7 @@ class ThirdPartyEventRules:
check_func = getattr(
self.third_party_rules, "check_visibility_can_be_modified", None
)
if not check_func or not isinstance(check_func, Callable):
if not check_func or not callable(check_func):
return True
state_events = await self._get_state_map_for_room(room_id)

View File

@@ -22,6 +22,7 @@ from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.frozenutils import unfreeze
from . import EventBase
@@ -54,6 +55,8 @@ def prune_event(event: EventBase) -> EventBase:
event.internal_metadata.stream_ordering
)
pruned_event.internal_metadata.outlier = event.internal_metadata.outlier
# Mark the event as redacted
pruned_event.internal_metadata.redacted = True
@@ -400,10 +403,19 @@ class EventClientSerializer:
# If there is an edit replace the content, preserving existing
# relations.
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relations = event.content.get("m.relates_to")
serialized_event["content"] = edit.content.get("m.new_content", {})
if relations:
serialized_event["content"]["m.relates_to"] = relations
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relations.copy()
else:
serialized_event["content"].pop("m.relates_to", None)

View File

@@ -27,11 +27,13 @@ from typing import (
List,
Mapping,
Optional,
Sequence,
Tuple,
TypeVar,
Union,
)
import attr
from prometheus_client import Counter
from twisted.internet import defer
@@ -62,7 +64,7 @@ from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -455,6 +457,7 @@ class FederationClient(FederationBase):
description: str,
destinations: Iterable[str],
callback: Callable[[str], Awaitable[T]],
failover_on_unknown_endpoint: bool = False,
) -> T:
"""Try an operation on a series of servers, until it succeeds
@@ -474,6 +477,10 @@ class FederationClient(FederationBase):
next server tried. Normally the stacktrace is logged but this is
suppressed if the exception is an InvalidResponseError.
failover_on_unknown_endpoint: if True, we will try other servers if it looks
like a server doesn't support the endpoint. This is typically useful
if the endpoint in question is new or experimental.
Returns:
The result of callback, if it succeeds
@@ -493,16 +500,31 @@ class FederationClient(FederationBase):
except UnsupportedRoomVersionError:
raise
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
else:
logger.warning(
"Failed to %s via %s: %i %s",
description,
destination,
e.code,
e.args[0],
)
synapse_error = e.to_synapse_error()
failover = False
if 500 <= e.code < 600:
failover = True
elif failover_on_unknown_endpoint:
# there is no good way to detect an "unknown" endpoint. Dendrite
# returns a 404 (with no body); synapse returns a 400
# with M_UNRECOGNISED.
if e.code == 404 or (
e.code == 400 and synapse_error.errcode == Codes.UNRECOGNIZED
):
failover = True
if not failover:
raise synapse_error from e
logger.warning(
"Failed to %s via %s: %i %s",
description,
destination,
e.code,
e.args[0],
)
except Exception:
logger.warning(
"Failed to %s via %s", description, destination, exc_info=True
@@ -1042,3 +1064,141 @@ class FederationClient(FederationBase):
# If we don't manage to find it, return None. It's not an error if a
# server doesn't give it to us.
return None
async def get_space_summary(
self,
destinations: Iterable[str],
room_id: str,
suggested_only: bool,
max_rooms_per_space: Optional[int],
exclude_rooms: List[str],
) -> "FederationSpaceSummaryResult":
"""
Call other servers to get a summary of the given space
Args:
destinations: The remote servers. We will try them in turn, omitting any
that have been blacklisted.
room_id: ID of the space to be queried
suggested_only: If true, ask the remote server to only return children
with the "suggested" flag set
max_rooms_per_space: A limit on the number of children to return for each
space
exclude_rooms: A list of room IDs to tell the remote server to skip
Returns:
a parsed FederationSpaceSummaryResult
Raises:
SynapseError if we were unable to get a valid summary from any of the
remote servers
"""
async def send_request(destination: str) -> FederationSpaceSummaryResult:
res = await self.transport_layer.get_space_summary(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
max_rooms_per_space=max_rooms_per_space,
exclude_rooms=exclude_rooms,
)
try:
return FederationSpaceSummaryResult.from_json_dict(res)
except ValueError as e:
raise InvalidResponseError(str(e))
return await self._try_destination_list(
"fetch space summary",
destinations,
send_request,
failover_on_unknown_endpoint=True,
)
@attr.s(frozen=True, slots=True)
class FederationSpaceSummaryEventResult:
"""Represents a single event in the result of a successful get_space_summary call.
It's essentially just a serialised event object, but we do a bit of parsing and
validation in `from_json_dict` and store some of the validated properties in
object attributes.
"""
event_type = attr.ib(type=str)
state_key = attr.ib(type=str)
via = attr.ib(type=Sequence[str])
# the raw data, including the above keys
data = attr.ib(type=JsonDict)
@classmethod
def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryEventResult":
"""Parse an event within the result of a /spaces/ request
Args:
d: json object to be parsed
Raises:
ValueError if d is not a valid event
"""
event_type = d.get("type")
if not isinstance(event_type, str):
raise ValueError("Invalid event: 'event_type' must be a str")
state_key = d.get("state_key")
if not isinstance(state_key, str):
raise ValueError("Invalid event: 'state_key' must be a str")
content = d.get("content")
if not isinstance(content, dict):
raise ValueError("Invalid event: 'content' must be a dict")
via = content.get("via")
if not isinstance(via, Sequence):
raise ValueError("Invalid event: 'via' must be a list")
if any(not isinstance(v, str) for v in via):
raise ValueError("Invalid event: 'via' must be a list of strings")
return cls(event_type, state_key, via, d)
@attr.s(frozen=True, slots=True)
class FederationSpaceSummaryResult:
"""Represents the data returned by a successful get_space_summary call."""
rooms = attr.ib(type=Sequence[JsonDict])
events = attr.ib(type=Sequence[FederationSpaceSummaryEventResult])
@classmethod
def from_json_dict(cls, d: JsonDict) -> "FederationSpaceSummaryResult":
"""Parse the result of a /spaces/ request
Args:
d: json object to be parsed
Raises:
ValueError if d is not a valid /spaces/ response
"""
rooms = d.get("rooms")
if not isinstance(rooms, Sequence):
raise ValueError("'rooms' must be a list")
if any(not isinstance(r, dict) for r in rooms):
raise ValueError("Invalid room in 'rooms' list")
events = d.get("events")
if not isinstance(events, Sequence):
raise ValueError("'events' must be a list")
if any(not isinstance(e, dict) for e in events):
raise ValueError("Invalid event in 'events' list")
parsed_events = [
FederationSpaceSummaryEventResult.from_json_dict(e) for e in events
]
return cls(rooms, parsed_events)
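For reference, a sample of the wire format the parser above accepts, with the same validation rules applied inline; the payload is invented for illustration:

sample = {
    "rooms": [{"room_id": "!space:example.org", "name": "My space"}],
    "events": [{
        "type": "m.space.child",
        "state_key": "!child:example.org",
        "content": {"via": ["example.org"]},
    }],
}

for ev in sample["events"]:
    assert isinstance(ev.get("type"), str)
    assert isinstance(ev.get("state_key"), str)
    via = ev.get("content", {}).get("via")
    assert isinstance(via, list) and all(isinstance(v, str) for v in via)
print("valid /spaces/ response")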

View File

@@ -22,6 +22,7 @@ from typing import (
Awaitable,
Callable,
Dict,
Iterable,
List,
Optional,
Tuple,
@@ -34,7 +35,7 @@ from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EduTypes, EventTypes
from synapse.api.errors import (
AuthError,
Codes,
@@ -62,7 +63,7 @@ from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
)
from synapse.types import JsonDict, get_domain_from_id
from synapse.types import JsonDict
from synapse.util import glob_to_regex, json_decoder, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@@ -90,16 +91,15 @@ pdu_process_time = Histogram(
"Time taken to process an event",
)
last_pdu_age_metric = Gauge(
"synapse_federation_last_received_pdu_age",
"The age (in seconds) of the last PDU successfully received from the given domain",
last_pdu_ts_metric = Gauge(
"synapse_federation_last_received_pdu_time",
"The timestamp of the last PDU which was successfully received from the given domain",
labelnames=("server_name",),
)
class FederationServer(FederationBase):
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.auth = hs.get_auth()
@@ -112,14 +112,15 @@ class FederationServer(FederationBase):
# with FederationHandlerRegistry.
hs.get_directory_handler()
self._federation_ratelimiter = hs.get_federation_ratelimiter()
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
# origins that we are currently processing a transaction from.
# a dict from origin to txn id.
self._active_transactions = {} # type: Dict[str, str]
# We cache results for transaction with the same ID
self._transaction_resp_cache = ResponseCache(
hs, "fed_txn_handler", timeout_ms=30000
hs.get_clock(), "fed_txn_handler", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self.transaction_actions = TransactionActions(self.store)
@@ -129,10 +130,10 @@ class FederationServer(FederationBase):
# We cache responses to state queries, as they take a while and often
# come in waves.
self._state_resp_cache = ResponseCache(
hs, "state_resp", timeout_ms=30000
hs.get_clock(), "state_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self._state_ids_resp_cache = ResponseCache(
hs, "state_ids_resp", timeout_ms=30000
hs.get_clock(), "state_ids_resp", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self._federation_metrics_domains = (
@@ -169,6 +170,33 @@ class FederationServer(FederationBase):
logger.debug("[%s] Got transaction", transaction_id)
# Reject malformed transactions early: reject if too many PDUs/EDUs
if len(transaction.pdus) > 50 or ( # type: ignore
hasattr(transaction, "edus") and len(transaction.edus) > 100 # type: ignore
):
logger.info("Transaction PDU or EDU count too large. Returning 400")
return 400, {}
# we only process one transaction from each origin at a time. We need to do
# this check here, rather than in _on_incoming_transaction_inner so that we
# don't cache the rejection in _transaction_resp_cache (so that if the txn
# arrives again later, we can process it).
current_transaction = self._active_transactions.get(origin)
if current_transaction and current_transaction != transaction_id:
logger.warning(
"Received another txn %s from %s while still processing %s",
transaction_id,
origin,
current_transaction,
)
return 429, {
"errcode": Codes.UNKNOWN,
"error": "Too many concurrent transactions",
}
# CRITICAL SECTION: we must now not await until we populate _active_transactions
# in _on_incoming_transaction_inner.
# We wrap in a ResponseCache so that we de-duplicate retried
# transactions.
return await self._transaction_resp_cache.wrap(
@@ -182,26 +210,18 @@ class FederationServer(FederationBase):
async def _on_incoming_transaction_inner(
self, origin: str, transaction: Transaction, request_time: int
) -> Tuple[int, Dict[str, Any]]:
# Use a linearizer to ensure that transactions from a remote are
# processed in order.
with await self._transaction_linearizer.queue(origin):
# We rate limit here *after* we've queued up the incoming requests,
# so that we don't fill up the ratelimiter with blocked requests.
#
# This is important as the ratelimiter allows N concurrent requests
# at a time, and only starts ratelimiting if there are more requests
# than that being processed at a time. If we queued up requests in
# the linearizer/response cache *after* the ratelimiting then those
# queued up requests would count as part of the allowed limit of N
# concurrent requests.
with self._federation_ratelimiter.ratelimit(origin) as d:
await d
# CRITICAL SECTION: the first thing we must do (before awaiting) is
# add an entry to _active_transactions.
assert origin not in self._active_transactions
self._active_transactions[origin] = transaction.transaction_id # type: ignore
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
try:
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
finally:
del self._active_transactions[origin]
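A toy version of the guard above: reject a second concurrent transaction from the same origin, and always clear the entry in a finally block so a crash cannot wedge the origin. This is a minimal asyncio sketch, not the Synapse implementation:

import asyncio

active = {}  # type: dict  # origin -> txn id

async def handle(origin: str, txn_id: str) -> int:
    current = active.get(origin)
    if current and current != txn_id:
        return 429  # too many concurrent transactions
    active[origin] = txn_id
    try:
        await asyncio.sleep(0.01)  # stand-in for real PDU/EDU processing
        return 200
    finally:
        del active[origin]

async def main():
    codes = await asyncio.gather(handle("a.example", "t1"),
                                 handle("a.example", "t2"))
    print(codes)  # [200, 429]

asyncio.run(main())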
async def _handle_incoming_transaction(
self, origin: str, transaction: Transaction, request_time: int
@@ -227,19 +247,6 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id) # type: ignore
# Reject if PDU count > 50 or EDU count > 100
if len(transaction.pdus) > 50 or ( # type: ignore
hasattr(transaction, "edus") and len(transaction.edus) > 100 # type: ignore
):
logger.info("Transaction PDU or EDU count too large. Returning 400")
response = {}
await self.transaction_actions.set_response(
origin, transaction, 400, response
)
return 400, response
# We process PDUs and EDUs in parallel. This is important as we don't
# want to block things like to device messages from reaching clients
# behind the potentially expensive handling of PDUs.
@@ -335,42 +342,48 @@ class FederationServer(FederationBase):
# impose a limit to avoid going too crazy with ram/cpu.
async def process_pdus_for_room(room_id: str):
logger.debug("Processing PDUs for %s", room_id)
try:
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warning("Ignoring PDUs for room %s from banned server", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
with nested_logging_context(room_id):
logger.debug("Processing PDUs for %s", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
with pdu_process_time.time():
with nested_logging_context(event_id):
try:
await self._handle_received_pdu(origin, pdu)
pdu_results[event_id] = {}
except FederationError as e:
logger.warning("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.error(
"Failed to handle PDU %s",
event_id,
exc_info=(f.type, f.value, f.getTracebackObject()),
)
try:
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warning(
"Ignoring PDUs for room %s from banned server", room_id
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
return
for pdu in pdus_by_room[room_id]:
pdu_results[pdu.event_id] = await process_pdu(pdu)
async def process_pdu(pdu: EventBase) -> JsonDict:
event_id = pdu.event_id
with pdu_process_time.time():
with nested_logging_context(event_id):
try:
await self._handle_received_pdu(origin, pdu)
return {}
except FederationError as e:
logger.warning("Error handling PDU %s: %s", event_id, e)
return {"error": str(e)}
except Exception as e:
f = failure.Failure()
logger.error(
"Failed to handle PDU %s",
event_id,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
)
return {"error": str(e)}
await concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if newest_pdu_ts and origin in self._federation_metrics_domains:
newest_pdu_age = self._clock.time_msec() - newest_pdu_ts
last_pdu_age_metric.labels(server_name=origin).set(newest_pdu_age / 1000)
last_pdu_ts_metric.labels(server_name=origin).set(newest_pdu_ts / 1000)
return pdu_results
@@ -448,18 +461,22 @@ class FederationServer(FederationBase):
async def _on_state_ids_request_compute(self, room_id, event_id):
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(state_ids)
auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(
self, room_id: str, event_id: str
) -> Dict[str, list]:
if event_id:
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
pdus = await self.handler.get_state_for_pdu(
room_id, event_id
) # type: Iterable[EventBase]
else:
pdus = (await self.state.get_current_state(room_id)).values()
auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
auth_chain = await self.store.get_auth_chain(
room_id, [pdu.event_id for pdu in pdus]
)
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
@@ -710,27 +727,6 @@ class FederationServer(FederationBase):
if the event was unacceptable for any other reason (eg, too large,
too many prev_events, couldn't find the prev_events)
"""
# check that it's actually being sent from a valid destination to
# workaround bug #1753 in 0.18.5 and 0.18.6
if origin != get_domain_from_id(pdu.sender):
# We continue to accept join events from any server; this is
# necessary for the federation join dance to work correctly.
# (When we join over federation, the "helper" server is
# responsible for sending out the join event, rather than the
# origin. See bug #1893. This is also true for some third party
# invites).
if not (
pdu.type == "m.room.member"
and pdu.content
and pdu.content.get("membership", None)
in (Membership.JOIN, Membership.INVITE)
):
logger.info(
"Discarding PDU %s from invalid origin %s", pdu.event_id, origin
)
return
else:
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = await self.store.get_room_version(pdu.room_id)
@@ -863,7 +859,9 @@ class FederationHandlerRegistry:
self.edu_handlers = (
{}
) # type: Dict[str, Callable[[str, dict], Awaitable[None]]]
self.query_handlers = {} # type: Dict[str, Callable[[dict], Awaitable[None]]]
self.query_handlers = (
{}
) # type: Dict[str, Callable[[dict], Awaitable[JsonDict]]]
# Map from type to instance names that we should route EDU handling to.
# We randomly choose one instance from the list to route to for each new
@@ -872,6 +870,7 @@ class FederationHandlerRegistry:
# A rate limiter for incoming room key requests per origin.
self._room_key_request_rate_limiter = Ratelimiter(
store=hs.get_datastore(),
clock=self.clock,
rate_hz=self.config.rc_key_requests.per_second,
burst_count=self.config.rc_key_requests.burst_count,
@@ -897,7 +896,7 @@ class FederationHandlerRegistry:
self.edu_handlers[edu_type] = handler
def register_query_handler(
self, query_type: str, handler: Callable[[dict], defer.Deferred]
self, query_type: str, handler: Callable[[dict], Awaitable[JsonDict]]
):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
@@ -932,7 +931,9 @@ class FederationHandlerRegistry:
# the limit, drop them.
if (
edu_type == EduTypes.RoomKeyRequest
and not self._room_key_request_rate_limiter.can_do_action(origin)
and not await self._room_key_request_rate_limiter.can_do_action(
None, origin
)
):
return
@@ -970,7 +971,7 @@ class FederationHandlerRegistry:
# Oh well, let's just log and move on.
logger.warning("No handler registered for EDU type %s", edu_type)
async def on_query(self, query_type: str, args: dict):
async def on_query(self, query_type: str, args: dict) -> JsonDict:
handler = self.query_handlers.get(query_type)
if handler:
return await handler(args)

View File

@@ -31,25 +31,39 @@ Events are replicated via a separate events stream.
import logging
from collections import namedtuple
from typing import Dict, List, Tuple, Type
from typing import (
TYPE_CHECKING,
Dict,
Hashable,
Iterable,
List,
Optional,
Sized,
Tuple,
Type,
)
from sortedcontainers import SortedDict
from twisted.internet import defer
from synapse.api.presence import UserPresenceState
from synapse.federation.sender import AbstractFederationSender, FederationSender
from synapse.metrics import LaterGauge
from synapse.replication.tcp.streams.federation import FederationStream
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.util.metrics import Measure
from .units import Edu
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class FederationRemoteSendQueue:
class FederationRemoteSendQueue(AbstractFederationSender):
"""A drop in replacement for FederationSender"""
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
@@ -58,7 +72,7 @@ class FederationRemoteSendQueue:
# We may have multiple federation sender instances, so we need to track
# their positions separately.
self._sender_instances = hs.config.worker.federation_shard_config.instances
self._sender_positions = {}
self._sender_positions = {} # type: Dict[str, int]
# Pending presence map user_id -> UserPresenceState
self.presence_map = {} # type: Dict[str, UserPresenceState]
@@ -71,7 +85,7 @@ class FederationRemoteSendQueue:
# Stream position -> (user_id, destinations)
self.presence_destinations = (
SortedDict()
) # type: SortedDict[int, Tuple[str, List[str]]]
) # type: SortedDict[int, Tuple[str, Iterable[str]]]
# (destination, key) -> EDU
self.keyed_edu = {} # type: Dict[Tuple[str, tuple], Edu]
@@ -94,7 +108,7 @@ class FederationRemoteSendQueue:
# we register via a helper function so that the inner lambda binds to the
# queue itself rather than to the loop variable naming the queue, which
# changes. ARGH.
def register(name, queue):
def register(name: str, queue: Sized) -> None:
LaterGauge(
"synapse_federation_send_queue_%s_size" % (queue_name,),
"",
@@ -115,13 +129,13 @@ class FederationRemoteSendQueue:
self.clock.looping_call(self._clear_queue, 30 * 1000)
def _next_pos(self):
def _next_pos(self) -> int:
pos = self.pos
self.pos += 1
self.pos_time[self.clock.time_msec()] = pos
return pos
def _clear_queue(self):
def _clear_queue(self) -> None:
"""Clear the queues for anything older than N minutes"""
FIVE_MINUTES_AGO = 5 * 60 * 1000
@@ -138,7 +152,7 @@ class FederationRemoteSendQueue:
self._clear_queue_before_pos(position_to_delete)
def _clear_queue_before_pos(self, position_to_delete):
def _clear_queue_before_pos(self, position_to_delete: int) -> None:
"""Clear all the queues from before a given position"""
with Measure(self.clock, "send_queue._clear"):
# Delete things out of presence maps
@@ -188,13 +202,18 @@ class FederationRemoteSendQueue:
for key in keys[:i]:
del self.edus[key]
def notify_new_events(self, max_token):
def notify_new_events(self, max_token: RoomStreamToken) -> None:
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
# stream.
pass
# This should never get called.
raise NotImplementedError()
def build_and_send_edu(self, destination, edu_type, content, key=None):
def build_and_send_edu(
self,
destination: str,
edu_type: str,
content: JsonDict,
key: Optional[Hashable] = None,
) -> None:
"""As per FederationSender"""
if destination == self.server_name:
logger.info("Not sending EDU to ourselves")
@@ -218,38 +237,39 @@ class FederationRemoteSendQueue:
self.notifier.on_new_replication_data()
def send_read_receipt(self, receipt):
async def send_read_receipt(self, receipt: ReadReceipt) -> None:
"""As per FederationSender
Args:
receipt (synapse.types.ReadReceipt):
receipt:
"""
# nothing to do here: the replication listener will handle it.
return defer.succeed(None)
def send_presence(self, states):
def send_presence(self, states: List[UserPresenceState]) -> None:
"""As per FederationSender
Args:
states (list(UserPresenceState))
states
"""
pos = self._next_pos()
# We only want to send presence for our own users, so lets always just
# filter here just in case.
local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
local_states = [s for s in states if self.is_mine_id(s.user_id)]
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
self.notifier.on_new_replication_data()
def send_presence_to_destinations(self, states, destinations):
def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
) -> None:
"""As per FederationSender
Args:
states (list[UserPresenceState])
destinations (list[str])
states
destinations
"""
for state in states:
pos = self._next_pos()
@@ -258,15 +278,18 @@ class FederationRemoteSendQueue:
self.notifier.on_new_replication_data()
def send_device_messages(self, destination):
def send_device_messages(self, destination: str) -> None:
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
# stream.
def get_current_token(self):
def wake_destination(self, server: str) -> None:
pass
def get_current_token(self) -> int:
return self.pos - 1
def federation_ack(self, instance_name, token):
def federation_ack(self, instance_name: str, token: int) -> None:
if self._sender_instances:
# If we have configured multiple federation sender instances we need
# to track their positions separately, and only clear the queue up
@@ -504,13 +527,16 @@ ParsedFederationStreamData = namedtuple(
)
def process_rows_for_federation(transaction_queue, rows):
def process_rows_for_federation(
transaction_queue: FederationSender,
rows: List[FederationStream.FederationStreamRow],
) -> None:
"""Parse a list of rows from the federation stream and put them in the
transaction queue ready for sending to the relevant homeservers.
Args:
transaction_queue (FederationSender)
rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow))
transaction_queue
rows
"""
# The federation stream contains a bunch of different types of

View File

@@ -13,14 +13,14 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import logging
from typing import Dict, Hashable, Iterable, List, Optional, Set, Tuple
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Set, Tuple
from prometheus_client import Counter
from twisted.internet import defer
import synapse
import synapse.metrics
from synapse.api.presence import UserPresenceState
from synapse.events import EventBase
@@ -40,9 +40,12 @@ from synapse.metrics import (
events_processed_counter,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import ReadReceipt, RoomStreamToken
from synapse.types import JsonDict, ReadReceipt, RoomStreamToken
from synapse.util.metrics import Measure, measure_func
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
sent_pdus_destination_dist_count = Counter(
@@ -65,8 +68,91 @@ CATCH_UP_STARTUP_DELAY_SEC = 15
CATCH_UP_STARTUP_INTERVAL_SEC = 5
class FederationSender:
def __init__(self, hs: "synapse.server.HomeServer"):
class AbstractFederationSender(metaclass=abc.ABCMeta):
@abc.abstractmethod
def notify_new_events(self, max_token: RoomStreamToken) -> None:
"""This gets called when we have some new events we might want to
send out to other servers.
"""
raise NotImplementedError()
@abc.abstractmethod
async def send_read_receipt(self, receipt: ReadReceipt) -> None:
"""Send a RR to any other servers in the room
Args:
receipt: receipt to be sent
"""
raise NotImplementedError()
@abc.abstractmethod
def send_presence(self, states: List[UserPresenceState]) -> None:
"""Send the new presence states to the appropriate destinations.
This actually queues up the presence states ready for sending and
triggers a background task to process them and send out the transactions.
"""
raise NotImplementedError()
@abc.abstractmethod
def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
) -> None:
"""Send the given presence states to the given destinations.
Args:
destinations:
"""
raise NotImplementedError()
@abc.abstractmethod
def build_and_send_edu(
self,
destination: str,
edu_type: str,
content: JsonDict,
key: Optional[Hashable] = None,
) -> None:
"""Construct an Edu object, and queue it for sending
Args:
destination: name of server to send to
edu_type: type of EDU to send
content: content of EDU
key: clobbering key for this edu
"""
raise NotImplementedError()
@abc.abstractmethod
def send_device_messages(self, destination: str) -> None:
raise NotImplementedError()
@abc.abstractmethod
def wake_destination(self, destination: str) -> None:
"""Called when we want to retry sending transactions to a remote.
This is mainly useful if the remote server has been down and we think it
might have come back.
"""
raise NotImplementedError()
@abc.abstractmethod
def get_current_token(self) -> int:
raise NotImplementedError()
@abc.abstractmethod
def federation_ack(self, instance_name: str, token: int) -> None:
raise NotImplementedError()
@abc.abstractmethod
async def get_replication_rows(
self, instance_name: str, from_token: int, to_token: int, target_row_count: int
) -> Tuple[List[Tuple[int, Tuple]], int, bool]:
raise NotImplementedError()
class FederationSender(AbstractFederationSender):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.server_name = hs.hostname
@@ -432,7 +518,7 @@ class FederationSender:
queue.flush_read_receipts_for_room(room_id)
@preserve_fn # the caller should not yield on this
async def send_presence(self, states: List[UserPresenceState]):
async def send_presence(self, states: List[UserPresenceState]) -> None:
"""Send the new presence states to the appropriate destinations.
This actually queues up the presence states ready for sending and
@@ -494,7 +580,7 @@ class FederationSender:
self._get_per_destination_queue(destination).send_presence(states)
@measure_func("txnqueue._process_presence")
async def _process_presence_inner(self, states: List[UserPresenceState]):
async def _process_presence_inner(self, states: List[UserPresenceState]) -> None:
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
"""
@@ -516,9 +602,9 @@ class FederationSender:
self,
destination: str,
edu_type: str,
content: dict,
content: JsonDict,
key: Optional[Hashable] = None,
):
) -> None:
"""Construct an Edu object, and queue it for sending
Args:
@@ -545,7 +631,7 @@ class FederationSender:
self.send_edu(edu, key)
def send_edu(self, edu: Edu, key: Optional[Hashable]):
def send_edu(self, edu: Edu, key: Optional[Hashable]) -> None:
"""Queue an EDU for sending
Args:
@@ -563,7 +649,7 @@ class FederationSender:
else:
queue.send_edu(edu)
def send_device_messages(self, destination: str):
def send_device_messages(self, destination: str) -> None:
if destination == self.server_name:
logger.warning("Not sending device update to ourselves")
return
@@ -575,7 +661,7 @@ class FederationSender:
self._get_per_destination_queue(destination).attempt_new_transaction()
def wake_destination(self, destination: str):
def wake_destination(self, destination: str) -> None:
"""Called when we want to retry sending transactions to a remote.
This is mainly useful if the remote server has been down and we think it
@@ -599,6 +685,10 @@ class FederationSender:
# to a worker.
return 0
def federation_ack(self, instance_name: str, token: int) -> None:
# It is not expected that this gets called on FederationSender.
raise NotImplementedError()
@staticmethod
async def get_replication_rows(
instance_name: str, from_token: int, to_token: int, target_row_count: int
@@ -607,7 +697,7 @@ class FederationSender:
# to a worker.
return [], 0, False
async def _wake_destinations_needing_catchup(self):
async def _wake_destinations_needing_catchup(self) -> None:
"""
Wakes up destinations that need catch-up and are not currently being
backed off from.


@@ -15,8 +15,9 @@
# limitations under the License.
import datetime
import logging
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, cast
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple
import attr
from prometheus_client import Counter
from synapse.api.errors import (
@@ -76,6 +77,7 @@ class PerDestinationQueue:
self._transaction_manager = transaction_manager
self._instance_name = hs.get_instance_name()
self._federation_shard_config = hs.config.worker.federation_shard_config
self._state = hs.get_state_handler()
self._should_send_on_this_instance = True
if not self._federation_shard_config.should_handle(
@@ -93,6 +95,10 @@ class PerDestinationQueue:
self._destination = destination
self.transmission_loop_running = False
# Flag to signal to any running transmission loop that there is new data
# queued up to be sent.
self._new_data_to_send = False
# True whilst we are sending events that the remote homeserver missed
# because it was unreachable. We start in this state so we can perform
# catch-up at startup.
@@ -108,7 +114,7 @@ class PerDestinationQueue:
# destination (we are the only updater so this is safe)
self._last_successful_stream_ordering = None # type: Optional[int]
# a list of pending PDUs
# a queue of pending PDUs
self._pending_pdus = [] # type: List[EventBase]
# XXX this is never actually used: see
@@ -208,6 +214,10 @@ class PerDestinationQueue:
transaction in the background.
"""
# Mark that we (may) have new things to send, so that any running
# transmission loop will recheck whether there is stuff to send.
self._new_data_to_send = True
if self.transmission_loop_running:
# XXX: this can get stuck on by a never-ending
# request at which point pending_pdus just keeps growing.
@@ -250,125 +260,41 @@ class PerDestinationQueue:
pending_pdus = []
while True:
# We have to keep 2 free slots for presence and rr_edus
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = await self._get_device_update_edus(
limit
)
limit -= len(device_update_edus)
(
to_device_edus,
device_stream_id,
) = await self._get_to_device_message_edus(limit)
pending_edus = device_update_edus + to_device_edus
# BEGIN CRITICAL SECTION
#
# In order to avoid a race condition, we need to make sure that
# the following code (from popping the queues up to the point
# where we decide if we actually have any pending messages) is
# atomic - otherwise new PDUs or EDUs might arrive in the
# meantime, but not get sent because we hold the
# transmission_loop_running flag.
pending_pdus = self._pending_pdus
# We can only include at most 50 PDUs per transaction
pending_pdus, self._pending_pdus = pending_pdus[:50], pending_pdus[50:]
pending_edus.extend(self._get_rr_edus(force_flush=False))
pending_presence = self._pending_presence
self._pending_presence = {}
if pending_presence:
pending_edus.append(
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self._clock.time_msec()
)
for presence in pending_presence.values()
]
},
)
)
pending_edus.extend(
self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self._pending_edus_keyed
):
_, val = self._pending_edus_keyed.popitem()
pending_edus.append(val)
self._new_data_to_send = False
async with _TransactionQueueManager(self) as (
pending_pdus,
pending_edus,
):
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
# If we've gotten told about new things to send during
# checking for things to send, we try looking again.
# Otherwise new PDUs or EDUs might arrive in the meantime,
# but not get sent because we hold the
# `transmission_loop_running` flag.
if self._new_data_to_send:
continue
else:
return
if pending_pdus:
logger.debug(
"TX [%s] len(pending_pdus_by_dest[dest]) = %d",
self._destination,
len(pending_pdus),
)
await self._transaction_manager.send_new_transaction(
self._destination, pending_pdus, pending_edus
)
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", self._destination)
self._last_device_stream_id = device_stream_id
return
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self._get_rr_edus(force_flush=True))
# END CRITICAL SECTION
success = await self._transaction_manager.send_new_transaction(
self._destination, pending_pdus, pending_edus
)
if success:
sent_transactions_counter.inc()
sent_edus_counter.inc(len(pending_edus))
for edu in pending_edus:
sent_edus_by_type.labels(edu.edu_type).inc()
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if to_device_edus:
await self._store.delete_device_msgs_for_remote(
self._destination, device_stream_id
)
# also mark the device updates as sent
if device_update_edus:
logger.info(
"Marking as sent %r %r", self._destination, dev_list_id
)
await self._store.mark_as_sent_devices_by_remote(
self._destination, dev_list_id
)
self._last_device_stream_id = device_stream_id
self._last_device_list_stream_id = dev_list_id
if pending_pdus:
# we sent some PDUs and it was successful, so update our
# last_successful_stream_ordering in the destinations table.
final_pdu = pending_pdus[-1]
last_successful_stream_ordering = (
final_pdu.internal_metadata.stream_ordering
)
assert last_successful_stream_ordering
await self._store.set_destination_last_successful_stream_ordering(
self._destination, last_successful_stream_ordering
)
else:
break
except NotRetryingDestination as e:
logger.debug(
"TX [%s] not ready for retry yet (next retry at %s) - "
@@ -401,7 +327,7 @@ class PerDestinationQueue:
self._pending_presence = {}
self._pending_rrs = {}
self._start_catching_up()
self._start_catching_up()
except FederationDeniedError as e:
logger.info(e)
except HttpResponseException as e:
@@ -412,7 +338,6 @@ class PerDestinationQueue:
e,
)
self._start_catching_up()
except RequestSendFailed as e:
logger.warning(
"TX [%s] Failed to send transaction: %s", self._destination, e
@@ -422,16 +347,12 @@ class PerDestinationQueue:
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
self._start_catching_up()
except Exception:
logger.exception("TX [%s] Failed to send transaction", self._destination)
for p in pending_pdus:
logger.info(
"Failed to send event %s to %s", p.event_id, self._destination
)
self._start_catching_up()
finally:
# We want to be *very* sure we clear this after we stop processing
self.transmission_loop_running = False
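The comments above describe the race the new `_new_data_to_send` flag closes: work can be queued while a loop already holds `transmission_loop_running`, so the loop clears the flag before gathering work and takes another pass if it was re-set. A minimal stand-alone sketch of that pattern (all names here are illustrative, not Synapse's):

import asyncio
from typing import Any, List

class QueueWorker:
    """Minimal sketch of the flag-and-recheck transmission loop."""

    def __init__(self) -> None:
        self.items: List[Any] = []
        self.loop_running = False
        self.new_data = False

    def enqueue(self, item: Any) -> None:
        self.items.append(item)
        # Signal any running loop that there is new data queued up.
        self.new_data = True

    async def run(self) -> None:
        if self.loop_running:
            return
        self.loop_running = True
        try:
            while True:
                # Clear the flag *before* gathering work: anything enqueued
                # after this point re-sets it and forces another pass.
                self.new_data = False
                batch, self.items = self.items[:50], self.items[50:]
                if not batch:
                    if self.new_data:
                        continue  # told about new data while checking
                    return
                await self._send(batch)
        finally:
            # Be *very* sure this is cleared when we stop processing.
            self.loop_running = False

    async def _send(self, batch: List[Any]) -> None:
        await asyncio.sleep(0)  # stand-in for the network transaction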
@@ -495,25 +416,97 @@ class PerDestinationQueue:
"This should not happen." % event_ids
)
if logger.isEnabledFor(logging.INFO):
rooms = [p.room_id for p in catchup_pdus]
logger.info("Catching up rooms to %s: %r", self._destination, rooms)
# We send transactions with events from one room only, as it's likely
# that the remote will have to do additional processing, which may
# take some time. It's better to give it small amounts of work
# rather than risk the request timing out and repeatedly being
# retried, and not making any progress.
#
# Note: `catchup_pdus` will have exactly one PDU per room.
for pdu in catchup_pdus:
# The PDU from the DB will be the last PDU in the room from
# *this server* that wasn't sent to the remote. However, other
# servers may have sent lots of events since then, and we want
# to try and tell the remote only about the *latest* events in
# the room. This is so that it doesn't get inundated by events
# from various parts of the DAG, which all need to be processed.
#
# Note: this does mean that in large rooms a server coming back
# online will get sent the same events from all the different
# servers, but the remote will correctly deduplicate them and
# handle it only once.
success = await self._transaction_manager.send_new_transaction(
self._destination, catchup_pdus, []
)
if not success:
return
sent_transactions_counter.inc()
final_pdu = catchup_pdus[-1]
self._last_successful_stream_ordering = cast(
int, final_pdu.internal_metadata.stream_ordering
)
await self._store.set_destination_last_successful_stream_ordering(
self._destination, self._last_successful_stream_ordering
)
# Step 1, fetch the current extremities
extrems = await self._store.get_prev_events_for_room(pdu.room_id)
if pdu.event_id in extrems:
# If the event is in the extremities, then great! We can just
# use that without having to do further checks.
room_catchup_pdus = [pdu]
else:
# If not, fetch the extremities and figure out which we can
# send.
extrem_events = await self._store.get_events_as_list(extrems)
new_pdus = []
for p in extrem_events:
# We pulled this from the DB, so it'll be non-null
assert p.internal_metadata.stream_ordering
# Filter out events that happened before the remote went
# offline
if (
p.internal_metadata.stream_ordering
< self._last_successful_stream_ordering
):
continue
# Filter out events where the server is not in the room,
# e.g. it may have left/been kicked. *Ideally* we'd pull
# out the kick and send that, but it's a rare edge case
# so we don't bother for now (the server that sent the
# kick should send it out if it's online).
hosts = await self._state.get_hosts_in_room_at_events(
p.room_id, [p.event_id]
)
if self._destination not in hosts:
continue
new_pdus.append(p)
# If we've filtered out all the extremities, fall back to
# sending the original event. This should ensure that the
# server gets at least some of the missed events (especially if
# the other sending servers are up).
if new_pdus:
room_catchup_pdus = new_pdus
else:
room_catchup_pdus = [pdu]
logger.info(
"Catching up rooms to %s: %r", self._destination, pdu.room_id
)
await self._transaction_manager.send_new_transaction(
self._destination, room_catchup_pdus, []
)
sent_transactions_counter.inc()
# We pulled this from the DB, so it'll be non-null
assert pdu.internal_metadata.stream_ordering
# Note that we mark the last successful stream ordering as that
# from the *original* PDU, rather than the PDU(s) we actually
# send. This is because we use it to mark our position in the
# queue of missed PDUs to process.
self._last_successful_stream_ordering = (
pdu.internal_metadata.stream_ordering
)
await self._store.set_destination_last_successful_stream_ordering(
self._destination, self._last_successful_stream_ordering
)
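A hedged, synchronous distillation of the selection rule above (the event objects and the membership lookup stand in for Synapse's async storage APIs):

from typing import Dict, List, Set

def pick_room_catchup_pdus(
    original_pdu,
    extrem_events: List,
    destination: str,
    last_successful_stream_ordering: int,
    hosts_by_event: Dict[str, Set[str]],
) -> List:
    """Choose the events to send when catching a remote up in one room."""
    new_pdus = []
    for p in extrem_events:
        # Drop events from before the remote went offline: it has seen them.
        if p.internal_metadata.stream_ordering < last_successful_stream_ordering:
            continue
        # Drop events at which the destination was not in the room.
        if destination not in hosts_by_event[p.event_id]:
            continue
        new_pdus.append(p)
    # If everything was filtered out, fall back to the original event so the
    # remote still gets at least some of the missed events.
    return new_pdus or [original_pdu]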
def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
if not self._pending_rrs:
@@ -584,3 +577,135 @@ class PerDestinationQueue:
"""
self._catching_up = True
self._pending_pdus = []
@attr.s(slots=True)
class _TransactionQueueManager:
"""A helper async context manager for pulling stuff off the queues and
tracking what was last successfully sent, etc.
"""
queue = attr.ib(type=PerDestinationQueue)
_device_stream_id = attr.ib(type=Optional[int], default=None)
_device_list_id = attr.ib(type=Optional[int], default=None)
_last_stream_ordering = attr.ib(type=Optional[int], default=None)
_pdus = attr.ib(type=List[EventBase], factory=list)
async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
# First we calculate the EDUs we want to send, if any.
# We start by fetching device related EDUs, i.e device updates and to
# device messages. We have to keep 2 free slots for presence and rr_edus.
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
limit
)
if device_update_edus:
self._device_list_id = dev_list_id
else:
self.queue._last_device_list_stream_id = dev_list_id
limit -= len(device_update_edus)
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(limit)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
pending_edus = device_update_edus + to_device_edus
# Now add the read receipt EDU.
pending_edus.extend(self.queue._get_rr_edus(force_flush=False))
# And presence EDU.
if self.queue._pending_presence:
pending_edus.append(
Edu(
origin=self.queue._server_name,
destination=self.queue._destination,
edu_type="m.presence",
content={
"push": [
format_user_presence_state(
presence, self.queue._clock.time_msec()
)
for presence in self.queue._pending_presence.values()
]
},
)
)
self.queue._pending_presence = {}
# Finally add any other types of EDUs if there is room.
pending_edus.extend(
self.queue._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
)
while (
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
and self.queue._pending_edus_keyed
):
_, val = self.queue._pending_edus_keyed.popitem()
pending_edus.append(val)
# Now we look for any PDUs to send, by getting up to 50 PDUs from the
# queue
self._pdus = self.queue._pending_pdus[:50]
if not self._pdus and not pending_edus:
return [], []
# if we've decided to send a transaction anyway, and we have room, we
# may as well send any pending RRs
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
pending_edus.extend(self.queue._get_rr_edus(force_flush=True))
if self._pdus:
self._last_stream_ordering = self._pdus[
-1
].internal_metadata.stream_ordering
assert self._last_stream_ordering
return self._pdus, pending_edus
async def __aexit__(self, exc_type, exc, tb):
if exc_type is not None:
# Failed to send transaction, so we bail out.
return
# Successfully sent the transaction, so we remove the pending PDUs from the queue
if self._pdus:
self.queue._pending_pdus = self.queue._pending_pdus[len(self._pdus) :]
# The transaction was sent successfully, so we record how far we have
# got in the various streams
if self._device_stream_id:
await self.queue._store.delete_device_msgs_for_remote(
self.queue._destination, self._device_stream_id
)
self.queue._last_device_stream_id = self._device_stream_id
# also mark the device updates as sent
if self._device_list_id:
logger.info(
"Marking as sent %r %r", self.queue._destination, self._device_list_id
)
await self.queue._store.mark_as_sent_devices_by_remote(
self.queue._destination, self._device_list_id
)
self.queue._last_device_list_stream_id = self._device_list_id
if self._last_stream_ordering:
# we sent some PDUs and it was successful, so update our
# last_successful_stream_ordering in the destinations table.
await self.queue._store.set_destination_last_successful_stream_ordering(
self.queue._destination, self._last_stream_ordering
)
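A hedged sketch of how the transmission loop is expected to drive this manager: work is pulled inside the `async with`, and the bookkeeping in `__aexit__` only runs if the send did not raise (the driver function here is illustrative; the real loop lives in PerDestinationQueue):

async def transmission_loop(queue: "PerDestinationQueue") -> None:
    while True:
        async with _TransactionQueueManager(queue) as (pending_pdus, pending_edus):
            if not pending_pdus and not pending_edus:
                return  # nothing to send
            # If this raises, __aexit__ sees the exception and skips the
            # "mark as sent" bookkeeping, so the work stays queued for retry.
            await queue._transaction_manager.send_new_transaction(
                queue._destination, pending_pdus, pending_edus
            )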


@@ -36,9 +36,9 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
last_pdu_age_metric = Gauge(
"synapse_federation_last_sent_pdu_age",
"The age (in seconds) of the last PDU successfully sent to the given domain",
labelnames=("server_name",),
)
last_pdu_ts_metric = Gauge(
"synapse_federation_last_sent_pdu_time",
"The timestamp of the last PDU which was successfully sent to the given domain",
labelnames=("server_name",),
)
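One likely motivation for this rename (an inference, not stated in the diff): a gauge that stores an age goes stale between sends, whereas a timestamp stays correct and the age can be derived at query time, e.g. with `time() - synapse_federation_last_sent_pdu_time` in PromQL. A minimal stand-alone sketch with prometheus_client:

from prometheus_client import Gauge

last_pdu_ts = Gauge(
    "synapse_federation_last_sent_pdu_time",
    "The timestamp of the last PDU which was successfully sent to the given domain",
    labelnames=("server_name",),
)

def record_last_sent_pdu(destination: str, origin_server_ts_ms: int) -> None:
    # origin_server_ts is milliseconds since the epoch; the metric is seconds.
    last_pdu_ts.labels(server_name=destination).set(origin_server_ts_ms / 1000)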
@@ -69,15 +69,12 @@ class TransactionManager:
destination: str,
pdus: List[EventBase],
edus: List[Edu],
) -> bool:
) -> None:
"""
Args:
destination: The destination to send to (e.g. 'example.org')
pdus: In-order list of PDUs to send
edus: List of EDUs to send
Returns:
True iff the transaction was successful
"""
# Make a transaction-sending opentracing span. This span follows on from
@@ -96,8 +93,6 @@ class TransactionManager:
edu.strip_context()
with start_active_span_follows_from("send_transaction", span_contexts):
success = True
logger.debug("TX [%s] _attempt_new_transaction", destination)
txn_id = str(self._next_txn_id)
@@ -152,45 +147,29 @@ class TransactionManager:
response = await self._transport_layer.send_transaction(
transaction, json_data_cb
)
code = 200
except HttpResponseException as e:
code = e.code
response = e.response
if e.code in (401, 404, 429) or 500 <= e.code:
logger.info(
"TX [%s] {%s} got %d response", destination, txn_id, code
)
raise e
set_tag(tags.ERROR, True)
logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)
logger.info("TX [%s] {%s} got %d response", destination, txn_id, code)
raise
if code == 200:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
destination,
txn_id,
e_id,
r,
)
else:
for p in pdus:
logger.warning(
"TX [%s] {%s} Failed to send event %s",
destination,
txn_id,
p.event_id,
)
success = False
logger.info("TX [%s] {%s} got 200 response", destination, txn_id)
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
destination,
txn_id,
e_id,
r,
)
if success and pdus and destination in self._federation_metrics_domains:
if pdus and destination in self._federation_metrics_domains:
last_pdu = pdus[-1]
last_pdu_age = self.clock.time_msec() - last_pdu.origin_server_ts
last_pdu_age_metric.labels(server_name=destination).set(
last_pdu_age / 1000
last_pdu_ts_metric.labels(server_name=destination).set(
last_pdu.origin_server_ts / 1000
)
set_tag(tags.ERROR, not success)
return success
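The net effect of this hunk is a change of contract: send_new_transaction now returns None and signals failure by raising instead of returning False. A hedged caller-side sketch (the wrapper name and return value are illustrative):

async def send_and_record(transaction_manager, destination, pdus, edus) -> bool:
    try:
        await transaction_manager.send_new_transaction(destination, pdus, edus)
    except Exception:
        # Failure now propagates as an exception rather than `False`;
        # a caller like PerDestinationQueue reacts by starting catch-up.
        return False
    # Only reached on success: safe to bump counters / record progress here.
    return True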


@@ -16,7 +16,7 @@
import logging
import urllib
from typing import Any, Dict, Optional
from typing import Any, Dict, List, Optional
from synapse.api.constants import Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError
@@ -26,6 +26,7 @@ from synapse.api.urls import (
FEDERATION_V2_PREFIX,
)
from synapse.logging.utils import log_function
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -978,6 +979,38 @@ class TransportLayerClient:
return self.client.get_json(destination=destination, path=path)
async def get_space_summary(
self,
destination: str,
room_id: str,
suggested_only: bool,
max_rooms_per_space: Optional[int],
exclude_rooms: List[str],
) -> JsonDict:
"""
Args:
destination: The remote server
room_id: The room ID to ask about.
suggested_only: if True, only suggested rooms will be returned
max_rooms_per_space: an optional limit to the number of children to be
returned per space
exclude_rooms: a list of any rooms we can skip
"""
path = _create_path(
FEDERATION_UNSTABLE_PREFIX, "/org.matrix.msc2946/spaces/%s", room_id
)
params = {
"suggested_only": suggested_only,
"exclude_rooms": exclude_rooms,
}
if max_rooms_per_space is not None:
params["max_rooms_per_space"] = max_rooms_per_space
return await self.client.post_json(
destination=destination, path=path, data=params
)
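For illustration, a hedged sketch of calling this client method against the unstable MSC2946 endpoint (the destination and room values are made up):

async def fetch_remote_space_summary(transport: "TransportLayerClient"):
    return await transport.get_space_summary(
        destination="remote.example.org",
        room_id="!space:remote.example.org",
        suggested_only=True,
        max_rooms_per_space=10,
        exclude_rooms=[],
    )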
def _create_path(federation_prefix, path, *args):
"""


@@ -18,7 +18,7 @@
import functools
import logging
import re
from typing import Optional, Tuple, Type
from typing import Container, Mapping, Optional, Sequence, Tuple, Type
import synapse
from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH
@@ -29,7 +29,7 @@ from synapse.api.urls import (
FEDERATION_V1_PREFIX,
FEDERATION_V2_PREFIX,
)
from synapse.http.server import JsonResource
from synapse.http.server import HttpServer, JsonResource
from synapse.http.servlet import (
parse_boolean_from_args,
parse_integer_from_args,
@@ -44,7 +44,8 @@ from synapse.logging.opentracing import (
whitelisted_homeserver,
)
from synapse.server import HomeServer
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
from synapse.types import JsonDict, ThirdPartyInstanceID, get_domain_from_id
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.stringutils import parse_and_validate_server_name
from synapse.util.versionstring import get_version_string
@@ -1376,6 +1377,40 @@ class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
return 200, new_content
class FederationSpaceSummaryServlet(BaseFederationServlet):
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
PATH = "/spaces/(?P<room_id>[^/]*)"
async def on_POST(
self,
origin: str,
content: JsonDict,
query: Mapping[bytes, Sequence[bytes]],
room_id: str,
) -> Tuple[int, JsonDict]:
suggested_only = content.get("suggested_only", False)
if not isinstance(suggested_only, bool):
raise SynapseError(
400, "'suggested_only' must be a boolean", Codes.BAD_JSON
)
exclude_rooms = content.get("exclude_rooms", [])
if not isinstance(exclude_rooms, list) or any(
not isinstance(x, str) for x in exclude_rooms
):
raise SynapseError(400, "bad value for 'exclude_rooms'", Codes.BAD_JSON)
max_rooms_per_space = content.get("max_rooms_per_space")
if max_rooms_per_space is not None and not isinstance(max_rooms_per_space, int):
raise SynapseError(
400, "bad value for 'max_rooms_per_space'", Codes.BAD_JSON
)
return 200, await self.handler.federation_space_summary(
room_id, suggested_only, max_rooms_per_space, exclude_rooms
)
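Putting the validation above together, a request body this servlet accepts looks something like the following (values are illustrative):

example_body = {
    "suggested_only": True,                   # must be a boolean
    "exclude_rooms": ["!seen:example.org"],   # must be a list of strings
    "max_rooms_per_space": 10,                # optional; an int if present
}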
class RoomComplexityServlet(BaseFederationServlet):
"""
Indicates to other servers how complex (and therefore likely
@@ -1474,18 +1509,24 @@ DEFAULT_SERVLET_GROUPS = (
)
def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=None):
def register_servlets(
hs: HomeServer,
resource: HttpServer,
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
servlet_groups: Optional[Container[str]] = None,
):
"""Initialize and register servlet classes.
Will by default register all servlets. For custom behaviour, pass in
a list of servlet_groups to register.
Args:
hs (synapse.server.HomeServer): homeserver
resource (JsonResource): resource class to register to
authenticator (Authenticator): authenticator to use
ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
servlet_groups (list[str], optional): List of servlet groups to register.
hs: homeserver
resource: resource class to register to
authenticator: authenticator to use
ratelimiter: ratelimiter to use
servlet_groups: List of servlet groups to register.
Defaults to ``DEFAULT_SERVLET_GROUPS``.
"""
if not servlet_groups:
@@ -1500,6 +1541,14 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N
server_name=hs.hostname,
).register(resource)
if hs.config.experimental.spaces_enabled:
FederationSpaceSummaryServlet(
handler=hs.get_space_summary_handler(),
authenticator=authenticator,
ratelimiter=ratelimiter,
server_name=hs.hostname,
).register(resource)
if "openid" in servlet_groups:
for servletclass in OPENID_SERVLET_CLASSES:
servletclass(


@@ -46,7 +46,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, get_domain_from_id
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)


@@ -25,7 +25,7 @@ from synapse.types import GroupID, JsonDict, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)


@@ -24,7 +24,7 @@ from synapse.api.ratelimiting import Ratelimiter
from synapse.types import UserID
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -49,7 +49,7 @@ class BaseHandler:
# The rate_hz and burst_count are overridden on a per-user basis
self.request_ratelimiter = Ratelimiter(
clock=self.clock, rate_hz=0, burst_count=0
store=self.store, clock=self.clock, rate_hz=0, burst_count=0
)
self._rc_message = self.hs.config.rc_message
@@ -57,6 +57,7 @@ class BaseHandler:
# by the presence of rate limits in the config
if self.hs.config.rc_admin_redaction:
self.admin_redaction_ratelimiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=self.hs.config.rc_admin_redaction.per_second,
burst_count=self.hs.config.rc_admin_redaction.burst_count,
@@ -91,11 +92,6 @@ class BaseHandler:
if app_service is not None:
return # do not ratelimit app service senders
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return
messages_per_second = self._rc_message.per_second
burst_count = self._rc_message.burst_count
@@ -113,11 +109,11 @@ class BaseHandler:
if is_admin_redaction and self.admin_redaction_ratelimiter:
# If we have separate config for admin redactions, use a separate
# ratelimiter so as not to have user_ids clash
self.admin_redaction_ratelimiter.ratelimit(user_id, update=update)
await self.admin_redaction_ratelimiter.ratelimit(requester, update=update)
else:
# Override rate and burst count per-user
self.request_ratelimiter.ratelimit(
user_id,
await self.request_ratelimiter.ratelimit(
requester,
rate_hz=messages_per_second,
burst_count=burst_count,
update=update,
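Taken together, these hunks show the new Ratelimiter calling convention: the limiter is constructed with the datastore (so per-user overrides can be consulted) and its methods are awaited with a Requester, or with None plus an explicit key when no Requester is to hand. A hedged sketch assuming the signatures shown above (the rate values are illustrative):

from synapse.api.ratelimiting import Ratelimiter

async def limit_request(store, clock, requester, medium: str, address: str) -> None:
    limiter = Ratelimiter(store=store, clock=clock, rate_hz=0.17, burst_count=3)

    # Keyed by a Requester: the store can be consulted for per-user overrides.
    await limiter.ratelimit(requester, update=False)

    # No Requester available: pass None plus an explicit key instead.
    await limiter.ratelimit(None, (medium, address), update=False)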


@@ -25,7 +25,7 @@ from synapse.replication.http.account_data import (
from synapse.types import JsonDict, UserID
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
class AccountDataHandler:


@@ -27,7 +27,7 @@ from synapse.types import UserID
from synapse.util import stringutils
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)


@@ -24,7 +24,7 @@ from twisted.web.resource import Resource
from synapse.app import check_bind_error
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -73,7 +73,9 @@ class AcmeHandler:
"Listening for ACME requests on %s:%i", host, self.hs.config.acme_port
)
try:
self.reactor.listenTCP(self.hs.config.acme_port, srv, interface=host)
self.reactor.listenTCP(
self.hs.config.acme_port, srv, backlog=50, interface=host
)
except twisted.internet.error.CannotListenError as e:
check_bind_error(e, host, bind_addresses)


@@ -25,7 +25,7 @@ from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)


@@ -38,7 +38,7 @@ from synapse.types import Collection, JsonDict, RoomAlias, RoomStreamToken, User
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)


@@ -65,11 +65,12 @@ from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict, Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import maybe_awaitable
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.msisdn import phone_number_to_msisdn
from synapse.util.threepids import canonicalise_email
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -170,6 +171,16 @@ class SsoLoginExtraAttributes:
extra_attributes = attr.ib(type=JsonDict)
@attr.s(slots=True, frozen=True)
class LoginTokenAttributes:
"""Data we store in a short-term login token"""
user_id = attr.ib(type=str)
# the SSO Identity Provider that the user authenticated with, to get this token
auth_provider_id = attr.ib(type=str)
class AuthHandler(BaseHandler):
SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000
@@ -227,6 +238,7 @@ class AuthHandler(BaseHandler):
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
self._failed_uia_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
@@ -237,6 +249,7 @@ class AuthHandler(BaseHandler):
# Ratelimiter for failed /login attempts
self._failed_login_attempts_ratelimiter = Ratelimiter(
store=self.store,
clock=hs.get_clock(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
@@ -326,7 +339,8 @@ class AuthHandler(BaseHandler):
user is too high to proceed
"""
if not requester.access_token_id:
raise ValueError("Cannot validate a user without an access token")
if self._ui_auth_session_timeout:
last_validated = await self.store.get_access_token_last_validated(
requester.access_token_id
@@ -340,7 +354,7 @@ class AuthHandler(BaseHandler):
requester_user_id = requester.user.to_string()
# Check if we should be ratelimited due to too many previous failed attempts
self._failed_uia_attempts_ratelimiter.ratelimit(requester_user_id, update=False)
await self._failed_uia_attempts_ratelimiter.ratelimit(requester, update=False)
# build a list of supported flows
supported_ui_auth_types = await self._get_available_ui_auth_types(
@@ -361,7 +375,9 @@ class AuthHandler(BaseHandler):
)
except LoginError:
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(requester_user_id)
await self._failed_uia_attempts_ratelimiter.can_do_action(
requester,
)
raise
# find the completed login type
@@ -874,6 +890,19 @@ class AuthHandler(BaseHandler):
)
return result
def can_change_password(self) -> bool:
"""Get whether users on this server are allowed to change or set a password.
Both `config.password_enabled` and `config.password_localdb_enabled` must be true.
Note that any account (even an SSO account) is allowed to add a password if the
above is true.
Returns:
Whether users on this server are allowed to change or set a password
"""
return self._password_enabled and self._password_localdb_enabled
def get_supported_login_types(self) -> Iterable[str]:
"""Get a the login types supported for the /login API
@@ -957,8 +986,8 @@ class AuthHandler(BaseHandler):
# We also apply account rate limiting using the 3PID as a key, as
# otherwise using 3PID bypasses the ratelimiting based on user ID.
if ratelimit:
self._failed_login_attempts_ratelimiter.ratelimit(
(medium, address), update=False
await self._failed_login_attempts_ratelimiter.ratelimit(
None, (medium, address), update=False
)
# Check for login providers that support 3pid login types
@@ -991,8 +1020,8 @@ class AuthHandler(BaseHandler):
# this code path, which is fine as then the per-user ratelimit
# will kick in below.
if ratelimit:
self._failed_login_attempts_ratelimiter.can_do_action(
(medium, address)
await self._failed_login_attempts_ratelimiter.can_do_action(
None, (medium, address)
)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
@@ -1014,8 +1043,8 @@ class AuthHandler(BaseHandler):
# Check if we've hit the failed ratelimit (but don't update it)
if ratelimit:
self._failed_login_attempts_ratelimiter.ratelimit(
qualified_user_id.lower(), update=False
await self._failed_login_attempts_ratelimiter.ratelimit(
None, qualified_user_id.lower(), update=False
)
try:
@@ -1026,8 +1055,8 @@ class AuthHandler(BaseHandler):
# exception and masking the LoginError. The actual ratelimiting
# should have happened above.
if ratelimit:
self._failed_login_attempts_ratelimiter.can_do_action(
qualified_user_id.lower()
await self._failed_login_attempts_ratelimiter.can_do_action(
None, qualified_user_id.lower()
)
raise
@@ -1164,18 +1193,16 @@ class AuthHandler(BaseHandler):
return None
return user_id
async def validate_short_term_login_token_and_get_user_id(self, login_token: str):
auth_api = self.hs.get_auth()
user_id = None
async def validate_short_term_login_token(
self, login_token: str
) -> LoginTokenAttributes:
try:
macaroon = pymacaroons.Macaroon.deserialize(login_token)
user_id = auth_api.get_user_id_from_macaroon(macaroon)
auth_api.validate_macaroon(macaroon, "login", user_id)
res = self.macaroon_gen.verify_short_term_login_token(login_token)
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
await self.auth.check_auth_blocking(user_id)
return user_id
await self.auth.check_auth_blocking(res.user_id)
return res
async def delete_access_token(self, access_token: str):
"""Invalidate a single access token
@@ -1204,7 +1231,7 @@ class AuthHandler(BaseHandler):
async def delete_access_tokens_for_user(
self,
user_id: str,
except_token_id: Optional[str] = None,
except_token_id: Optional[int] = None,
device_id: Optional[str] = None,
):
"""Invalidate access tokens belonging to a user
@@ -1397,6 +1424,7 @@ class AuthHandler(BaseHandler):
async def complete_sso_login(
self,
registered_user_id: str,
auth_provider_id: str,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
@@ -1406,6 +1434,9 @@ class AuthHandler(BaseHandler):
Args:
registered_user_id: The registered user ID to complete SSO login for.
auth_provider_id: The id of the SSO Identity provider that was used for
login. This will be stored in the login token for future tracking in
prometheus metrics.
request: The request to complete.
client_redirect_url: The URL to which to redirect the user at the end of the
process.
@@ -1427,6 +1458,7 @@ class AuthHandler(BaseHandler):
self._complete_sso_login(
registered_user_id,
auth_provider_id,
request,
client_redirect_url,
extra_attributes,
@@ -1437,6 +1469,7 @@ class AuthHandler(BaseHandler):
def _complete_sso_login(
self,
registered_user_id: str,
auth_provider_id: str,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
@@ -1463,7 +1496,7 @@ class AuthHandler(BaseHandler):
# Create a login token
login_token = self.macaroon_gen.generate_short_term_login_token(
registered_user_id
registered_user_id, auth_provider_id=auth_provider_id
)
# Append the login token to the original redirect URL (i.e. with its query
@@ -1569,15 +1602,48 @@ class MacaroonGenerator:
return macaroon.serialize()
def generate_short_term_login_token(
self, user_id: str, duration_in_ms: int = (2 * 60 * 1000)
self,
user_id: str,
auth_provider_id: str,
duration_in_ms: int = (2 * 60 * 1000),
) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.hs.get_clock().time_msec()
expiry = now + duration_in_ms
macaroon.add_first_party_caveat("time < %d" % (expiry,))
macaroon.add_first_party_caveat("auth_provider_id = %s" % (auth_provider_id,))
return macaroon.serialize()
def verify_short_term_login_token(self, token: str) -> LoginTokenAttributes:
"""Verify a short-term-login macaroon
Checks that the given token is a valid, unexpired short-term-login token
minted by this server.
Args:
token: the login token to verify
Returns:
a LoginTokenAttributes giving the user_id (and SSO auth_provider_id) that this token is valid for
Raises:
MacaroonVerificationFailedException if the verification failed
"""
macaroon = pymacaroons.Macaroon.deserialize(token)
user_id = get_value_from_macaroon(macaroon, "user_id")
auth_provider_id = get_value_from_macaroon(macaroon, "auth_provider_id")
v = pymacaroons.Verifier()
v.satisfy_exact("gen = 1")
v.satisfy_exact("type = login")
v.satisfy_general(lambda c: c.startswith("user_id = "))
v.satisfy_general(lambda c: c.startswith("auth_provider_id = "))
satisfy_expiry(v, self.hs.get_clock().time_msec)
v.verify(macaroon, self.hs.config.key.macaroon_secret_key)
return LoginTokenAttributes(user_id=user_id, auth_provider_id=auth_provider_id)
def generate_delete_pusher_token(self, user_id: str) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = delete_pusher")

Some files were not shown because too many files have changed in this diff.