
Compare commits


153 Commits

Author SHA1 Message Date
Patrick Cloke
426144c9bd Disable push summaries. 2022-08-05 09:10:33 -04:00
Patrick Cloke
7fb3b8405e Merge remote-tracking branch 'origin/clokep/thread-api' into clokep/thread-poc 2022-08-05 09:09:00 -04:00
Patrick Cloke
f0ab6a7f4c Add test script. 2022-08-05 08:18:31 -04:00
Patrick Cloke
0cbddc632e Add an experimental config flag. 2022-08-05 08:18:31 -04:00
Patrick Cloke
80dcb911dc Return thread receipts down sync. 2022-08-05 08:18:31 -04:00
Patrick Cloke
fd972df8f9 Mark thread notifications as read. 2022-08-05 08:18:31 -04:00
Patrick Cloke
fbd6727760 Send the thread ID over replication. 2022-08-05 08:18:31 -04:00
Patrick Cloke
08620b3f28 Add a thread ID to receipts. 2022-08-05 08:18:31 -04:00
Patrick Cloke
d56296aa57 Add a sync flag for unread thread notifications 2022-08-05 08:18:08 -04:00
Patrick Cloke
e0ed95a45b Add an experimental config option. 2022-08-05 08:18:08 -04:00
Patrick Cloke
dfd921d421 Return thread notification counts down sync. 2022-08-05 08:18:08 -04:00
Patrick Cloke
2c7a5681b4 Extract the thread ID when processing push rules. 2022-08-05 08:18:08 -04:00
Julian-Samuel Gebühr
3d2cabf966 Mark token-authenticated-registration API as not-experimental (#11897) 2022-08-05 11:15:35 +00:00
Matt C
026ac4486c Update module API "update room membership" method to allow for remote joins (#13441)
Co-authored-by: MattC <buffless-matt@users.noreply.github.com>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-08-05 09:37:58 +00:00
Erik Johnston
b6a6bb4027 Add comments about how event push actions are stored. (#13445) 2022-08-04 19:38:08 +00:00
Eric Eastwood
860fdd9098 Fix @tag_args being off-by-one (ahead) (#13452)
Fix @tag_args being off-by-one (ahead)

Example:

```
argspec.args=[
  'self',
  'room_id'
]

args=(
  <synapse.storage.databases.main.DataStore object at 0x10d0b8d00>,
  '!HBehERstyQBxyJDLfR:my.synapse.server'
)
```

---

The previous logic was also flawed and we can end up in a situation like this:

```
argspec.args=['self', 'dest', 'room_id', 'limit', 'extremities']

args=(<synapse.federation.federation_client.FederationClient object at 0x7f1651c18160>, 'hs1', '!jAEHKIubyIfuLOdfpY:hs1')
```

From this source:
```py
async def backfill(
    self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> Optional[List[EventBase]]:
```

And this usage:
```py
events = await self._federation_client.backfill(
    dest, room_id, limit=limit, extremities=extremities
)
```

which would previously cause this error:
```
synapse_main | 2022-08-04 06:13:12,051 - synapse.handlers.federation - 424 - ERROR - GET-5 - Failed to backfill from hs1 because tuple index out of range
synapse_main | Traceback (most recent call last):
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/handlers/federation.py", line 392, in try_backfill
synapse_main |     await self._federation_event_handler.backfill(
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/logging/tracing.py", line 828, in _wrapper
synapse_main |     return await func(*args, **kwargs)
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/handlers/federation_event.py", line 593, in backfill
synapse_main |     events = await self._federation_client.backfill(
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/logging/tracing.py", line 828, in _wrapper
synapse_main |     return await func(*args, **kwargs)
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/logging/tracing.py", line 827, in _wrapper
synapse_main |     with wrapping_logic(func, *args, **kwargs):
synapse_main |   File "/usr/local/lib/python3.9/contextlib.py", line 119, in __enter__
synapse_main |     return next(self.gen)
synapse_main |   File "/usr/local/lib/python3.9/site-packages/synapse/logging/tracing.py", line 922, in _wrapping_logic
synapse_main |     set_attribute("ARG_" + arg, str(args[i + 1]))  # type: ignore[index]
synapse_main | IndexError: tuple index out of range
```
2022-08-04 14:29:41 -05:00
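To make the off-by-one concrete, here is a minimal, hypothetical sketch (not Synapse's actual `synapse.logging.tracing` code) of a `@tag_args`-style decorator that lines argument names up with values by zipping the inspected parameter names against the received positional arguments, rather than indexing with `args[i + 1]`:

```py
import functools
import inspect


def tag_args(func):
    # Parameter names in declaration order; for methods this includes "self".
    argspec = inspect.getfullargspec(func)

    @functools.wraps(func)
    def _wrapper(*args, **kwargs):
        # zip() stops at the shorter sequence, so arguments passed by keyword
        # (and therefore absent from `args`) cannot raise IndexError.
        for name, value in zip(argspec.args, args):
            if name == "self":
                continue  # the instance itself is not a useful span attribute
            print(f"ARG_{name} = {value!r}")  # stand-in for set_attribute(...)
        return func(*args, **kwargs)

    return _wrapper


class DataStore:
    @tag_args
    def get_room(self, room_id: str, limit: int = 10) -> str:
        return room_id


DataStore().get_room("!HBehERstyQBxyJDLfR:my.synapse.server", limit=5)
```

This prints `ARG_room_id = '!HBehERstyQBxyJDLfR:my.synapse.server'` instead of tagging the room ID under `ARG_self` or indexing past the end of `args`.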
Patrick Cloke
ec24813220 Improve comments (& avoid a duplicate query) in push actions processing. (#13455)
* Adds docstrings and inline comments.
* Formats SQL queries using triple quoted strings.
* Minor formatting changes.
* Avoid fetching `event_push_summary_stream_ordering` multiple times
  in the same transactions.
2022-08-04 19:24:44 +00:00
Richard van der Hoff
96d92156d0 Update type of EventContext.rejected (#13460) 2022-08-04 17:45:01 +01:00
reivilibre
e9e6aacfbe Faster Room Joins: prevent Synapse from answering federated join requests for a room which it has not fully joined yet. (#13416) 2022-08-04 16:27:04 +01:00
Nick Mills-Barrett
41320a0554 Optimise async get event lookups (#13435)
Still maintains local in memory lookup optimisation, but does any external
lookup as part of the deferred that prevents duplicate lookups for the same
event at once. This makes the assumption that fetching from an external
cache is a non-zero load operation.
2022-08-04 15:49:55 +01:00
Dirk Klimpel
6dd7fa12dc Update some outdated information on sso_mapping_providers.md (#13449) 2022-08-04 13:06:02 +01:00
Dirk Klimpel
afbdbe0634 Fix return value in example on password_auth_provider_callbacks.md (#13450)
Fixes: #12534

Signed-off-by: Dirk Klimpel <dirk@klimpel.org>
2022-08-04 13:03:36 +01:00
Richard van der Hoff
166fafdf8d synapse-workers docker: copy nginx and redis in from base images (#13447)
Part of my continuing quest to make the docker images build quicker: copy nginx and redis in from base docker images, rather than apt installing each time.
2022-08-04 12:59:27 +01:00
Matt C
a91078200d Add module API method to create a room (#13429)
Co-authored-by: MattC <buffless-matt@users.noreply.github.com>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-08-04 09:34:05 +00:00
Brendan Abolivier
845732be45 Fix rooms not being properly excluded from incremental sync (#13408) 2022-08-04 11:02:29 +02:00
Shay
a648a06d52 Add some tracing spans to give insight into local joins (#13439) 2022-08-03 10:19:34 -07:00
Eric Eastwood
92d21faf12 Instrument /messages for understandable traces in Jaeger (#13368)
In Jaeger:

 - Before: huge list of uncategorized database calls
 - After: nice and collapsible into units of work
2022-08-03 10:57:38 -05:00
andrew do
78a3111c41 Return 404 or member list when getting joined_members after leaving (#13374)
Signed-off-by: Andrew Doh <andrewddo@gmail.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Andrew Morgan <andrewm@element.io>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-08-03 14:26:31 +02:00
Jasper Spaans
503a95804e Install cryptography build dependencies in requirements image. (#13372) 2022-08-03 11:16:32 +01:00
jejo86
668597214f Improve documentation on becoming server admin (#13230)
* Improved section regarding server admin

Added steps describing how to elevate an existing user to administrator by manipulating a `postgres` database.

Signed-off-by: jejo86 28619134+jejo86@users.noreply.github.com

* Improved section regarding server admin

* Reference database settings

Add instructions to check database settings to find out the database name, instead of listing all available PostgreSQL databases.

* Add suggestions from PR conversation

Replace config filename `homeserver.yaml`. with "config file".
Remove instructions to switch to `postgres` user.
Add instructions how to connect to SQLite database.

* Update changelog.d/13230.doc

Co-authored-by: reivilibre <olivier@librepush.net>
2022-08-03 11:15:23 +01:00
Dirk Klimpel
fb7a2cc4cc Update doc for setting macaroon_secret_key (#13443)
* Update doc for setting `macaroon_secret_key`

* newsfile
2022-08-03 10:41:19 +01:00
Dirk Klimpel
d6e94ad9d9 Rename RateLimitConfig to RatelimitSettings (#13442) 2022-08-03 10:40:20 +01:00
Matt C
570bf32bbb Add module API method to resolve a room alias to a room ID (#13428)
Co-authored-by: MattC <buffless-matt@users.noreply.github.com>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-08-03 09:25:36 +00:00
Dirk Klimpel
5eccfdfafd Remove 'Contents' section from the Configuration Manual (#13438)
Fixes: #13053
2022-08-03 09:19:20 +00:00
Dirk Klimpel
ec6758d472 Fix wrong headline for url_preview_accept_language in docs (#13437)
Fixes: #13433
2022-08-03 09:41:57 +01:00
reivilibre
1c910e2216 Add a merge-back command to the release script, which automates merging the correct branches after a release. (#13393) 2022-08-02 15:56:28 +00:00
Sean Quah
8d317f6da5 Fix error when out of servers to sync partial state with (#13432)
so that we raise the intended error instead.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-08-02 12:12:44 +01:00
Olivier Wilkinson (reivilibre)
a2a867b521 Merge branch 'master' into develop 2022-08-02 11:56:02 +01:00
Olivier Wilkinson (reivilibre)
c2f4871226 Mention specific version in rc2 notes 2022-08-02 11:19:32 +01:00
Olivier Wilkinson (reivilibre)
cb209638ea Add upgrade notes 2022-08-02 11:10:26 +01:00
Olivier Wilkinson (reivilibre)
4e80ca2243 1.64.0 2022-08-02 11:04:08 +01:00
reivilibre
e17e5c97e0 Faster Room Joins: don't leave a stuck room partial state flag if the join fails. (#13403) 2022-08-01 16:45:39 +00:00
Patrick Cloke
759366e5e6 Merge remote-tracking branch 'origin/develop' into clokep/thread-api 2022-08-01 10:26:15 -04:00
Patrick Cloke
f8e7a9418a Fix missing import in federation_event handler. (#13431)
#13404 removed an import of `Optional` which was still needed
because #13413 added more usages.
2022-08-01 14:14:29 +00:00
Patrick Cloke
6c2e08ed6f Merge remote-tracking branch 'origin/develop' into clokep/thread-api 2022-08-01 09:05:23 -04:00
Patrick Cloke
f6267b1abe Add an enum. 2022-08-01 09:05:04 -04:00
Sean Quah
224d792dd7 Refactor _resolve_state_at_missing_prevs to return an EventContext (#13404)
Previously, `_resolve_state_at_missing_prevs` returned the resolved
state before an event and a partial state flag. These were unwieldy to
carry around and would only ever be used to build an event context. Build
the event context directly instead.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-08-01 13:53:56 +01:00
reivilibre
05aeeb3a80 Enable Complement CI tests in the 'latest deps' test run. (#13213)
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-08-01 10:55:31 +00:00
reivilibre
b817574be7 Re-enable running Complement tests against Synapse with workers. (#13420) 2022-08-01 11:51:44 +01:00
Richard van der Hoff
23768ccb4d Faster joins: fix rejected events becoming un-rejected during resync (#13413)
Make sure that we re-check the auth rules during state resync, otherwise
rejected events get un-rejected.
2022-08-01 11:20:05 +01:00
Richard van der Hoff
d548d8f18d Merge tag 'v1.64.0rc2' into develop
Synapse 1.64.0rc2 (2022-07-29)
==============================

This RC reintroduces support for `account_threepid_delegates.email`, which was removed in 1.64.0rc1. It remains deprecated and will be removed altogether in a future release. ([\#13406](https://github.com/matrix-org/synapse/issues/13406))
2022-07-29 15:15:21 +01:00
Richard van der Hoff
979d94de29 update changelog 2022-07-29 12:27:23 +01:00
Richard van der Hoff
6b4fd8b430 1.64.0rc2 2022-07-29 12:23:13 +01:00
3nprob
98fb610cc0 Revert "Drop support for delegating email validation (#13192)" (#13406)
Reverts commit fa71bb18b5, and tweaks documentation.

Signed-off-by: 3nprob <git@3n.anonaddy.com>
2022-07-29 10:29:23 +00:00
Brendan Abolivier
24ef1460f6 Explicitly mention which resources support compression in the config guide (#13221) 2022-07-29 09:09:57 +00:00
Šimon Brandner
583f22780f Use stable prefixes for MSC3827: filtering of /publicRooms by room type (#13370)
Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>
2022-07-27 19:46:57 +01:00
Patrick Cloke
922b771337 Add missing type hints for tests.unittest. (#13397) 2022-07-27 17:18:41 +00:00
Patrick Cloke
d510975b2f Test ignored users. 2022-07-27 12:39:43 -04:00
Patrick Cloke
8dcdb4efa9 Allow limiting threads by participation. 2022-07-27 12:39:17 -04:00
Patrick Cloke
e9a649ec31 Add an API for listing threads in a room. 2022-07-27 11:46:22 -04:00
Will Hunt
502f075e96 Implement MSC3848: Introduce errcodes for specific event sending failures (#13343)
Implements MSC3848
2022-07-27 13:44:40 +01:00
reivilibre
39be5bc550 Make minor clarifications to the error messages given when we fail to join a room via any server. (#13160) 2022-07-27 10:37:50 +00:00
Eric Eastwood
4f3082d6bf Fix get_pdu asking every remote destination even after it finds an event (#13346) 2022-07-27 10:40:04 +01:00
Nick Mills-Barrett
bf3115584c Copy room serials before handling in get_new_events_as (#13392) 2022-07-26 17:45:27 +00:00
reivilibre
543dc9c93e Extend the release script to automatically push a new SyTest branch, rather than having that be a manual process. (#12978)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2022-07-26 18:08:14 +01:00
Olivier Wilkinson (reivilibre)
6236afc621 Merge branch 'release-v1.64' into develop 2022-07-26 16:26:30 +01:00
Patrick Cloke
57d334a13d Remove the unspecced room_id field in the /hierarchy response. (#13365)
The `room_id` field represented the parent space for each room
and was made redundant by changes in the API shape where the
`children_state` is now nested underneath each `room`.

The room ID of each child is in the `state_key` field and is still
available.
2022-07-26 08:02:34 -04:00
Olivier Wilkinson (reivilibre)
33788a07ee Explain less-known term 'Implicit TLS' 2022-07-26 12:56:24 +01:00
Richard van der Hoff
ca3db044a3 Fix infinite loop in partial-state resync (#13353)
Make sure that we only pull out events from the db once they have no
prev-events with partial state.
2022-07-26 11:47:31 +00:00
Olivier Wilkinson (reivilibre)
5d7e2b0195 Tweak changelog in response to review 2022-07-26 12:45:19 +01:00
Sean Quah
335ebb21cc Faster room joins: avoid blocking when pulling events with missing prevs (#13355)
Avoid blocking on full state in `_resolve_state_at_missing_prevs` and
return a new flag indicating whether the resolved state is partial.
Thread that flag around so that it makes it into the event context.

Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-07-26 12:39:23 +01:00
Olivier Wilkinson (reivilibre)
f765a40f69 Tweak changelog 2022-07-26 12:26:36 +01:00
Patrick Cloke
8b603299bf Remove unused argument for get_relations_for_event. (#13383) 2022-07-26 07:19:20 -04:00
Olivier Wilkinson (reivilibre)
641412decd 1.64.0rc1 2022-07-26 12:12:22 +01:00
Doug
549c55606a Disable autocorrect and autocaptialisation when entering username for SSO registration. (#13350)
When registering a new account via SSO on iOS, the text field becomes pretty annoying as it autocapitalises and autocorrects your input. This PR fixes that (although I have only tested the raw HTML file on the simulator, I'm not sure how to get the complete setup available for testing in the flow).
2022-07-26 08:08:20 +00:00
Matt Holt
935e73efed Update Caddy reverse proxy documentation (#13344)
Improve/simplify Caddy examples. Remove Caddy v1 (has long been EOL'ed)

Signed-off-by: Matthew Holt <mholt@users.noreply.github.com>
2022-07-25 16:07:26 +00:00
Jan Schär
e8519e0ed2 Support Implicit TLS for sending emails (#13317)
Previously, TLS could only be used with STARTTLS.
Add a new option `force_tls`, where TLS is used from the start.
Implicit TLS is recommended over STARTTLS,
see https://datatracker.ietf.org/doc/html/rfc8314

Fixes #8046.

Signed-off-by: Jan Schär <jan@jschaer.ch>
2022-07-25 16:27:19 +01:00
Patrick Cloke
908aeac44a Additional fixes for opentracing type hints. (#13362) 2022-07-25 08:34:06 -04:00
Erik Johnston
43adf2521c Refactor presence so we can prune user in room caches (#13313)
See #10826 and #10786 for context as to why we had to disable pruning on
those caches.

Now that `get_users_who_share_room_with_user` is called frequently only
for presence, we just need to make calls to it less frequent and then we
can remove the various levels of caching that are going on.
2022-07-25 09:21:06 +00:00
Eric Eastwood
357561c1a2 Backfill remote event fetched by MSC3030 so we can paginate from it later (#13205)
Depends on https://github.com/matrix-org/synapse/pull/13320

Complement tests: https://github.com/matrix-org/complement/pull/406

We could use the same method to backfill for `/context` as well in the future, see https://github.com/matrix-org/synapse/issues/3848
2022-07-22 16:00:11 -05:00
Richard van der Hoff
c7c84b81e3 Update config_documentation.md (#13364)
"changed in" goes before the example
2022-07-22 13:50:20 +01:00
Sean Quah
0fa41a7b17 Update locked frozendict version to 2.3.3 (#13352)
frozendict 2.3.3 includes fixes for memory leaks that get triggered during `/sync`.
2022-07-22 10:26:09 +01:00
Sean Quah
158782c3ce Skip soft fail checks for rooms with partial state (#13354)
When a room has the partial state flag, we may not have an accurate
`m.room.member` event for event senders in the room's current state, and
so cannot perform soft fail checks correctly. Skip the soft fail check
entirely in this case.

As an alternative, we could block until we have full state, but that
would prevent us from receiving incoming events over federation, which
is undesirable.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-22 10:13:01 +01:00
Nick Mills-Barrett
86e366a46e Remove old empty/redundant slaved stores. (#13349) 2022-07-21 17:56:45 +00:00
Erik Johnston
0b87eb8e0c Make DictionaryCache have better expiry properties (#13292) 2022-07-21 17:13:44 +01:00
Erik Johnston
13341dde5a Don't hold onto full state in state cache (#13324) 2022-07-21 16:02:02 +01:00
Brendan Abolivier
10e4093839 Call out buildkit is required when building test docker images (#13338)
Co-authored-by: David Robertson <davidr@element.io>
2022-07-21 14:29:58 +02:00
David Robertson
34949ead1f Track DB txn times w/ two counters, not histogram (#13342) 2022-07-21 13:23:05 +01:00
Patrick Cloke
50122754c8 Add missing types to opentracing. (#13345)
After this change `synapse.logging` is fully typed.
2022-07-21 12:01:52 +00:00
Nick Mills-Barrett
190f49d8ab Use cache store remove base slaved (#13329)
This comes from two identical definitions in each of the base stores, and means the base slaved store is now empty and can be removed.
2022-07-21 11:51:30 +01:00
David Robertson
4f57ef0b18 Merge branch 'master' into develop 2022-07-21 11:27:08 +01:00
David Teller
b909d5327b Document rc_invites.per_issuer, added in v1.63.
Resolves #13330.
Missed in #13125.

Signed-off-by: David Teller <davidt@element.io>
2022-07-21 11:26:34 +01:00
Eric Eastwood
0f971ca68e Update get_pdu to return the original, pristine EventBase (#13320)
Update `get_pdu` to return the untouched, pristine `EventBase` as it was originally seen over federation (no metadata added). Previously, we returned the same `event` reference that we stored in the cache, which downstream code modified in place (adding metadata such as marking it as an `outlier`), essentially poisoning our cache. Now we always return a copy of the `event`, so the original can stay pristine in our cache and be re-used for the next cache call.

Split out from https://github.com/matrix-org/synapse/pull/13205

As discussed at:

 - https://github.com/matrix-org/synapse/pull/13205#discussion_r918365746
 - https://github.com/matrix-org/synapse/pull/13205#discussion_r918366125

Related to https://github.com/matrix-org/synapse/issues/12584. This PR doesn't fix that issue because it hits [`get_event`, which fetches from the local database, before it tries to `get_pdu`](7864f33e28/synapse/federation/federation_client.py (L581-L594)).
2022-07-20 15:58:51 -05:00
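The cache-poisoning pattern described above is general. The following standalone sketch (illustrative only, with hypothetical names; it is not the Synapse implementation) shows why handing out the cached object directly lets callers mutate the cache entry in place, while handing out a copy keeps the cached value pristine:

```py
import copy
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class Event:
    event_id: str
    internal_metadata: Dict[str, Any] = field(default_factory=dict)


_event_cache: Dict[str, Event] = {"$abc": Event("$abc")}


def get_pdu_shared(event_id: str) -> Event:
    # Hands out the cached reference: a caller marking it as an outlier
    # mutates the cache entry itself.
    return _event_cache[event_id]


def get_pdu_copy(event_id: str) -> Event:
    # Hands out a copy, so in-place edits never reach the cache.
    return copy.deepcopy(_event_cache[event_id])


get_pdu_copy("$abc").internal_metadata["outlier"] = True
assert "outlier" not in _event_cache["$abc"].internal_metadata  # cache stays pristine

get_pdu_shared("$abc").internal_metadata["outlier"] = True
assert "outlier" in _event_cache["$abc"].internal_metadata  # cache is now poisoned
```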
Shay
a1b62af2af Validate federation destinations and log an error if server name is invalid. (#13318) 2022-07-20 11:17:26 -07:00
Erik Johnston
d3995049a8 Merge remote-tracking branch 'origin/master' into develop 2022-07-20 14:59:43 +01:00
Erik Johnston
93740cae57 1.63.1 2022-07-20 13:37:00 +01:00
Erik Johnston
b4ae3b0d44 Don't include appservice users when calculating push rules (#13332)
This can cause a lot of extra load on servers with lots of appservice users. Introduced in #13078
2022-07-20 12:06:13 +01:00
Sean Quah
172ce29b14 Fix spurious warning when fetching state after a missing prev event (#13258) 2022-07-19 19:15:54 +01:00
Patrick Cloke
a6895dd576 Add type annotations to trace decorator. (#13328)
Functions that are decorated with `trace` are now properly typed
and the type hints for them are fixed.
2022-07-19 14:14:30 -04:00
Brendan Abolivier
47822fd2e8 Merge branch 'master' into develop 2022-07-19 16:14:02 +02:00
Erik Johnston
de70b25e84 Reduce memory usage of state group cache (#13323) 2022-07-19 14:40:37 +01:00
Patrick Cloke
1efe6b8c41 Stop building Ubuntu 21.10 (Impish Indri) which is end of life. (#13326) 2022-07-19 09:08:46 -04:00
villepeh
84c5e6b1fd Bash script for creating multiple stream writers (#13271)
Add another bash script to the contrib directory. It creates multiple stream writers and also prints out the example configuration for homeserver.yaml.

Signed-off-by: Ville Petteri Huh.
2022-07-19 12:37:20 +00:00
Jörg Behrmann
87a917e8c8 Add notes when config options were changed to config documentation (#13314)
Signed-off-by: Jörg Behrmann <behrmann@physik.fu-berlin.de>
2022-07-19 12:36:29 +00:00
David Robertson
b977867358 Rate limit joins per-room (#13276) 2022-07-19 11:45:17 +00:00
Nick Mills-Barrett
2ee0b6ef4b Safe async event cache (#13308)
Fix race conditions in the async cache invalidation logic, by separating
the async & local invalidation calls and ensuring any async call i
executed first.

Signed off by Nick @ Beeper (@Fizzadar).
2022-07-19 11:25:29 +00:00
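As a rough illustration of the ordering described above (a hypothetical two-level cache, not the actual Synapse code), invalidating the external/async cache before the local one closes the window in which a concurrent reader could repopulate the local cache from a stale external entry:

```py
import asyncio
from typing import Any, Dict


class TwoLevelCache:
    """A local dict in front of a (simulated) external async cache."""

    def __init__(self) -> None:
        self._local: Dict[str, Any] = {}
        self._external: Dict[str, Any] = {}  # stand-in for e.g. an external cache service

    async def _external_delete(self, key: str) -> None:
        await asyncio.sleep(0)  # pretend this is a network round-trip
        self._external.pop(key, None)

    async def invalidate(self, key: str) -> None:
        # Async/external invalidation first, local invalidation second: by the
        # time the local entry is gone, the external cache no longer holds a
        # stale value that a concurrent reader could copy back locally.
        await self._external_delete(key)
        self._local.pop(key, None)


asyncio.run(TwoLevelCache().invalidate("some-key"))
```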
Shay
7864f33e28 Increase batch size of bulk_get_push_rules and _get_joined_profiles_from_event_ids. (#13300) 2022-07-18 13:15:23 -07:00
Shay
15edf23626 Improve performance of query _get_subset_users_in_room_with_profiles (#13299) 2022-07-18 12:35:45 -07:00
Sean Quah
5526f9fc4f Fix overcounting of pushers when they are replaced (#13296)
Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-18 17:39:39 +01:00
Brendan Abolivier
8c60c572f0 Up the dependency on canonicaljson to ^1.5.0 (#13172)
Co-authored-by: David Robertson <davidr@element.io>
2022-07-18 17:30:59 +02:00
Andrew Morgan
bb25dd81e3 Prevent #3679 from appearing in blame results (#13311) 2022-07-18 14:02:32 +00:00
Erik Johnston
f721f1baba Revert "Make all process_replication_rows methods async (#13304)" (#13312)
This reverts commit 5d4028f217.
2022-07-18 14:28:14 +01:00
Erik Johnston
cf5fa5063d Don't pull out full state when sending dummy events (#13310) 2022-07-18 14:19:11 +01:00
Nick Mills-Barrett
6785b0f39d Use READ COMMITTED isolation level when purging rooms (#12942)
To close: #10294.

Signed off by Nick @ Beeper.
2022-07-18 14:17:24 +01:00
Andrew Morgan
c5f487b7cb Update expected DB query count when creating a room (#13307) 2022-07-18 13:02:25 +01:00
Erik Johnston
c6a05063ff Don't pull out the full state when creating an event (#13281) 2022-07-18 10:05:30 +01:00
Dirk Klimpel
efee345b45 Remove unnecessary json.dumps from tests (#13303) 2022-07-17 22:28:45 +01:00
Nick Mills-Barrett
5d4028f217 Make all process_replication_rows methods async (#13304)
More prep work for asynchronous caching; this also makes all process_replication_rows methods consistent (the presence handler already is).

Signed off by Nick @ Beeper (@Fizzadar)
2022-07-17 22:19:43 +01:00
Dirk Klimpel
96cf81e312 Use HTTPStatus constants in place of literals in tests. (#13297) 2022-07-15 19:31:27 +00:00
Eric Eastwood
7b67e93d49 Provide more info why we don't have any thumbnails to serve (#13038)
Fix https://github.com/matrix-org/synapse/issues/13016

## New error code and status

### Before

Previously, we returned a `404` for `/thumbnail` which isn't even in the spec.

```json
{
  "errcode": "M_NOT_FOUND",
  "error": "Not found [b'hs1', b'tefQeZhmVxoiBfuFQUKRzJxc']"
}
```

### After

What does the spec say?

> 400: The request does not make sense to the server, or the server cannot thumbnail the content. For example, the client requested non-integer dimensions or asked for negatively-sized images.
>
> *-- https://spec.matrix.org/v1.1/client-server-api/#get_matrixmediav3thumbnailservernamemediaid*

Now with this PR, we respond with a `400` when we don't have thumbnails to serve and we explain why we might not have any thumbnails.

```json
{
    "errcode": "M_UNKNOWN",
    "error": "Cannot find any thumbnails for the requested media ([b'example.com', b'12345']). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)",
}
```

> Cannot find any thumbnails for the requested media ([b'example.com', b'12345']). This might mean the media is not a supported_media_format=(image/jpeg, image/jpg, image/webp, image/gif, image/png) or that thumbnailing failed for some other reason. (Dynamic thumbnails are disabled on this server.)


---

We still respond with a 404 in many other places. But we can iterate on those later and maybe keep some in some specific places after spec updates/clarification: https://github.com/matrix-org/matrix-spec/issues/1122

We can also iterate on the bugs where Synapse doesn't thumbnail when it should in other issues/PRs.
2022-07-15 11:42:21 -05:00
David Robertson
e9ce4d089b Use and recommend poetry 1.1.14, up from 1.1.12 (#13285) 2022-07-15 16:18:47 +01:00
Erik Johnston
0731e0829c Don't pull out the full state when storing state (#13274) 2022-07-15 12:59:45 +00:00
Patrick Cloke
3343035a06 Use a real room in the notification rotation tests. (#13260)
Instead of manually inserting fake data. This fixes some issues with
having to manually calculate stream orderings and other oddities.
2022-07-15 08:22:43 -04:00
David Robertson
7281591f4c Use state before join to determine if we _should_perform_remote_join (#13270)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-07-15 12:20:47 +00:00
Sean Quah
d765ada84f Update locked frozendict version to 2.3.2 (#13284)
`frozendict` 2.3.2 includes a fix for a memory leak in
`frozendict.__hash__`. This likely has no impact outside of the
deprecated `/initialSync` endpoint, which uses `StreamToken`s,
containing `RoomStreamToken`s, containing `frozendict`s, as cache keys.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-15 13:18:51 +01:00
Richard van der Hoff
b116d3ce00 Bg update to populate new events table columns (#13215)
These columns were added back in Synapse 1.52, and have been populated for new
events since then. It's now (beyond) time to back-populate them for existing
events.
2022-07-15 12:47:26 +01:00
Erik Johnston
7be954f59b Fix a bug which could lead to incorrect state (#13278)
There are two fixes here:
1. A long-standing bug where we incorrectly calculated `delta_ids`; and
2. A bug introduced in #13267 where we got current state incorrect.
2022-07-15 11:06:41 +00:00
Richard van der Hoff
512486bbeb Docker: copy postgres from base image (#13279)
When building the docker images for complement testing, copy a preinstalled
postgres over from a base image, rather than apt installing it. This avoids
network traffic and is much faster.
2022-07-15 11:13:40 +01:00
Nick Mills-Barrett
cc21a431f3 Async get event cache prep (#13242)
Some experimental prep work to enable external event caching based on #9379 & #12955. Doesn't actually move the cache at all, just lays the groundwork for async implemented caches.

Signed off by Nick @ Beeper (@Fizzadar)
2022-07-15 09:30:46 +00:00
Nick Mills-Barrett
21eeacc995 Federation Sender & Appservice Pusher Stream Optimisations (#13251)
* Replace `get_new_events_for_appservice` with `get_all_new_events_stream`

The functions were near identical and this brings the AS worker closer
to the way federation senders work which can allow for multiple workers
to handle AS traffic.

* Pull received TS alongside events when processing the stream

This avoids an extra query -per event- when both federation sender
and appservice pusher process events.
2022-07-15 09:36:56 +01:00
Richard van der Hoff
fe15a865a5 Rip out auth-event reconciliation code (#12943)
There is a corner in `_check_event_auth` (long known as "the weird corner") where, if we get an event with auth_events which don't match those we were expecting, we attempt to resolve the difference between our state and the remote's with a state resolution.

This isn't specced, and there's general agreement we shouldn't be doing it.

However, it turns out that the faster-joins code was relying on it, so we need to introduce something similar (but rather simpler) for that.
2022-07-14 21:52:26 +00:00
Richard van der Hoff
df55b377be CHANGES.md: fix link to upgrade notes 2022-07-14 15:07:52 +01:00
Erik Johnston
0ca4172b5d Don't pull out state in compute_event_context for unconflicted state (#13267) 2022-07-14 13:57:02 +00:00
David Robertson
599c403d99 Allow rate limiters to passively record actions they cannot limit (#13253)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-07-13 19:09:42 +00:00
David Robertson
0eb7e69768 Notifier: accept callbacks to fire on room joins (#13254) 2022-07-13 19:48:24 +01:00
Jacek Kuśnierz
cc1071598a Call the v2 identity service /3pid/unbind endpoint, rather than v1. (#13240)
* Drop support for v1 unbind

Signed-off-by: Jacek Kusnierz <jacek.kusnierz@tum.de>

* Add changelog

Signed-off-by: Jacek Kusnierz <jacek.kusnierz@tum.de>

* Update changelog.d/13240.misc
2022-07-13 19:43:17 +01:00
Shay
ad5761b65c Add support for room version 10 (#13220) 2022-07-13 11:36:02 -07:00
jejo86
2341032cf2 Document advising against publicly exposing the Admin API and provide a usage example (#13231)
* Admin API request explanation improved

Pointed out that the Admin API is not accessible by default from any remote computer, but only from the PC `matrix-synapse` is running on.
Added a full, working example, making sure to include the cURL flag `-X`, which needs to be prepended to `GET`, `POST`, `PUT` etc. and listing the full query string including protocol, IP address and port.

* Admin API request explanation improved

* Apply suggestions from code review

Update changelog. Reword prose.

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-07-13 19:33:33 +01:00
Nick Mills-Barrett
982fe29655 Optimise room creation event lookups part 2 (#13224) 2022-07-13 19:32:46 +01:00
Patrick Cloke
1d5c80b161 Reduce duplicate code in receipts servlets. (#13198) 2022-07-13 13:23:16 -04:00
Brad Murray
3371e1abcb Add prometheus counters for content types other than events (#13175) 2022-07-13 15:18:20 +01:00
Patrick Cloke
4db7862e0f Drop unused tables from groups/communities. (#12967)
These tables have been unused since Synapse v1.61.0, although schema version 72
was added in Synapse v1.62.0.
2022-07-13 09:55:14 -04:00
Patrick Cloke
90e9b4fa1e Do not fail build if complement with workers fails. (#13266) 2022-07-13 08:30:42 -04:00
Thomas Weston
0312ff44c6 Fix "add user" admin api error when request contains a "msisdn" threepid (#13263)
Co-authored-by: Thomas Weston <thomas.weston@clearspancloud.com>
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-07-13 11:33:21 +01:00
Patrick Cloke
1381563988 Inline URL preview documentation. (#13261)
Inline URL preview documentation near the implementation.
2022-07-12 15:01:58 -04:00
Richard van der Hoff
a366b75b72 Drop unused table event_reference_hashes (#13218)
This is unused since Synapse 1.60.0 (#12679). It's time for it to go.
2022-07-12 18:52:06 +00:00
Jacek Kuśnierz
7218a0ca18 Drop support for calling /_matrix/client/v3/account/3pid/bind without an id_access_token (#13239)
Fixes #13201

Signed-off-by: Jacek Kusnierz jacek.kusnierz@tum.de
2022-07-12 18:48:29 +00:00
David Robertson
52a0c8f2f7 Rename test case method to add_hashes_and_signatures_from_other_server (#13255) 2022-07-12 18:46:32 +00:00
Richard van der Hoff
fa71bb18b5 Drop support for delegating email validation (#13192)
* Drop support for delegating email validation

Delegating email validation to an IS is insecure (since it allows the owner of
the IS to do a password reset on your HS), and has long been deprecated. It
will now cause a config error at startup.

* Update unit test which checks for email verification

Give it an `email` config instead of a threepid delegate

* Remove unused method `requestEmailToken`

* Simplify config handling for email verification

Rather than an enum and a boolean, all we need here is a single bool, which
says whether we are or are not doing email verification.

* update docs

* changelog

* upgrade.md: fix typo

* update version number

this will be in 1.64, not 1.63

* update version number

this one too
2022-07-12 19:18:53 +01:00
Sean Quah
3f178332d6 Log the stack when waiting for an entire room to be un-partial stated (#13257)
The stack is already logged when waiting for an event to be un-partial
stated. Log the stack for rooms as well, to aid in debugging.
2022-07-12 18:57:38 +01:00
Shay
6f30eb5b8e Add info about configuration in the url preview docs (#13233)
Cross-link doc pages for easier navigation.
2022-07-12 13:48:47 -04:00
Quentin Gliech
b19060a29b Make the AS login method call Auth.get_user_by_req for checking the AS token. (#13094)
This gets rid of another usage of get_appservice_by_req, with all the benefits, including correctly tracking the appservice IP and setting the tracing attributes correctly.

Signed-off-by: Quentin Gliech <quenting@element.io>
2022-07-12 18:06:29 +01:00
andrew do
2d82cdafd2 expose whether a room is a space in the Admin API (#13208) 2022-07-12 15:30:53 +01:00
259 changed files with 7455 additions and 3261 deletions


@@ -69,7 +69,7 @@ with open('pyproject.toml', 'w') as f:
"
python3 -c "$REMOVE_DEV_DEPENDENCIES"
pipx install poetry==1.1.12
pipx install poetry==1.1.14
~/.local/bin/poetry lock
echo "::group::Patched pyproject.toml"


@@ -1,3 +1,16 @@
# Commits in this file will be removed from GitHub blame results.
#
# To use this file locally, use:
# git blame --ignore-revs-file="path/to/.git-blame-ignore-revs" <files>
#
# or configure the `blame.ignoreRevsFile` option in your git config.
#
# If ignoring a pull request that was not squash merged, only the merge
# commit needs to be put here. Child commits will be resolved from it.
# Run black (#3679).
8b3d9b6b199abb87246f982d5db356f1966db925
# Black reformatting (#5482).
32e7c9e7f20b57dd081023ac42d6931a8da9b3a3


@@ -135,11 +135,42 @@ jobs:
/logs/**/*.log*
# TODO: run complement (as with twisted trunk, see #12473).
complement:
if: "${{ !failure() && !cancelled() }}"
runs-on: ubuntu-latest
# open an issue if the build fails, so we know about it.
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- run: |
set -o pipefail
TEST_ONLY_IGNORE_POETRY_LOCKFILE=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
# Open an issue if the build fails, so we know about it.
# Only do this if we're not experimenting with this action in a PR.
open-issue:
if: failure()
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request'"
needs:
# TODO: should mypy be included here? It feels more brittle than the other two.
- mypy


@@ -127,12 +127,12 @@ jobs:
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==1.1.12
pipx install poetry==1.1.14
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock --no-update
# NOT IN 1.1.12 poetry lock --check
# NOT IN 1.1.14 poetry lock --check
working-directory: synapse
- run: |


@@ -1,3 +1,125 @@
Synapse 1.64.0 (2022-08-02)
===========================
No significant changes since 1.64.0rc2.
Deprecation Warning
-------------------
Synapse v1.66.0 will remove the ability to delegate the tasks of verifying email address ownership, and password reset confirmation, to an identity server.
If you require your homeserver to verify e-mail addresses or to support password resets via e-mail, please configure your homeserver with SMTP access so that it can send e-mails on its own behalf.
[Consult the configuration documentation for more information.](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#email)
Synapse 1.64.0rc2 (2022-07-29)
==============================
This RC reintroduces support for `account_threepid_delegates.email`, which was removed in 1.64.0rc1. It remains deprecated and will be removed altogether in Synapse v1.66.0. ([\#13406](https://github.com/matrix-org/synapse/issues/13406))
Synapse 1.64.0rc1 (2022-07-26)
==============================
This RC removed the ability to delegate the tasks of verifying email address ownership, and password reset confirmation, to an identity server.
We have also stopped building `.deb` packages for Ubuntu 21.10 as it is no longer an active version of Ubuntu.
Features
--------
- Improve error messages when media thumbnails cannot be served. ([\#13038](https://github.com/matrix-org/synapse/issues/13038))
- Allow pagination from remote event after discovering it from [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event`. ([\#13205](https://github.com/matrix-org/synapse/issues/13205))
- Add a `room_type` field in the responses for the list room and room details admin APIs. Contributed by @andrewdoh. ([\#13208](https://github.com/matrix-org/synapse/issues/13208))
- Add support for room version 10. ([\#13220](https://github.com/matrix-org/synapse/issues/13220))
- Add per-room rate limiting for room joins. For each room, Synapse now monitors the rate of join events in that room, and throttles additional joins if that rate grows too large. ([\#13253](https://github.com/matrix-org/synapse/issues/13253), [\#13254](https://github.com/matrix-org/synapse/issues/13254), [\#13255](https://github.com/matrix-org/synapse/issues/13255), [\#13276](https://github.com/matrix-org/synapse/issues/13276))
- Support Implicit TLS (TLS without using a STARTTLS upgrade, typically on port 465) for sending emails, enabled by the new option `force_tls`. Contributed by Jan Schär. ([\#13317](https://github.com/matrix-org/synapse/issues/13317))
Bugfixes
--------
- Fix a bug introduced in Synapse 1.15.0 where adding a user through the Synapse Admin API with a phone number would fail if the `enable_email_notifs` and `email_notifs_for_new_users` options were enabled. Contributed by @thomasweston12. ([\#13263](https://github.com/matrix-org/synapse/issues/13263))
- Fix a bug introduced in Synapse 1.40.0 where a user invited to a restricted room would be briefly unable to join. ([\#13270](https://github.com/matrix-org/synapse/issues/13270))
- Fix a long-standing bug where, in rare instances, Synapse could store the incorrect state for a room after a state resolution. ([\#13278](https://github.com/matrix-org/synapse/issues/13278))
- Fix a bug introduced in v1.18.0 where the `synapse_pushers` metric would overcount pushers when they are replaced. ([\#13296](https://github.com/matrix-org/synapse/issues/13296))
- Disable autocorrection and autocapitalisation on the username text field shown during registration when using SSO. ([\#13350](https://github.com/matrix-org/synapse/issues/13350))
- Update locked version of `frozendict` to 2.3.3, which has fixes for memory leaks affecting `/sync`. ([\#13284](https://github.com/matrix-org/synapse/issues/13284), [\#13352](https://github.com/matrix-org/synapse/issues/13352))
Improved Documentation
----------------------
- Provide an example of using the Admin API. Contributed by @jejo86. ([\#13231](https://github.com/matrix-org/synapse/issues/13231))
- Move the documentation for how URL previews work to the URL preview module. ([\#13233](https://github.com/matrix-org/synapse/issues/13233), [\#13261](https://github.com/matrix-org/synapse/issues/13261))
- Add another `contrib` script to help set up worker processes. Contributed by @villepeh. ([\#13271](https://github.com/matrix-org/synapse/issues/13271))
- Document that certain config options were added or changed in Synapse 1.62. Contributed by @behrmann. ([\#13314](https://github.com/matrix-org/synapse/issues/13314))
- Document the new `rc_invites.per_issuer` throttling option added in Synapse 1.63. ([\#13333](https://github.com/matrix-org/synapse/issues/13333))
- Mention that BuildKit is needed when building Docker images for tests. ([\#13338](https://github.com/matrix-org/synapse/issues/13338))
- Improve Caddy reverse proxy documentation. ([\#13344](https://github.com/matrix-org/synapse/issues/13344))
Deprecations and Removals
-------------------------
- Drop tables that were formerly used for groups/communities. ([\#12967](https://github.com/matrix-org/synapse/issues/12967))
- Drop support for delegating email verification to an external server. ([\#13192](https://github.com/matrix-org/synapse/issues/13192))
- Drop support for calling `/_matrix/client/v3/account/3pid/bind` without an `id_access_token`, which was not permitted by the spec. Contributed by @Vetchu. ([\#13239](https://github.com/matrix-org/synapse/issues/13239))
- Stop building `.deb` packages for Ubuntu 21.10 (Impish Indri), which has reached end of life. ([\#13326](https://github.com/matrix-org/synapse/issues/13326))
Internal Changes
----------------
- Use lower transaction isolation level when purging rooms to avoid serialization errors. Contributed by Nick @ Beeper. ([\#12942](https://github.com/matrix-org/synapse/issues/12942))
- Remove code which incorrectly attempted to reconcile state with remote servers when processing incoming events. ([\#12943](https://github.com/matrix-org/synapse/issues/12943))
- Make the AS login method call `Auth.get_user_by_req` for checking the AS token. ([\#13094](https://github.com/matrix-org/synapse/issues/13094))
- Always use a version of canonicaljson that supports the C implementation of frozendict. ([\#13172](https://github.com/matrix-org/synapse/issues/13172))
- Add prometheus counters for ephemeral events and to device messages pushed to app services. Contributed by Brad @ Beeper. ([\#13175](https://github.com/matrix-org/synapse/issues/13175))
- Refactor receipts servlet logic to avoid duplicated code. ([\#13198](https://github.com/matrix-org/synapse/issues/13198))
- Preparation for database schema simplifications: populate `state_key` and `rejection_reason` for existing rows in the `events` table. ([\#13215](https://github.com/matrix-org/synapse/issues/13215))
- Remove unused database table `event_reference_hashes`. ([\#13218](https://github.com/matrix-org/synapse/issues/13218))
- Further reduce queries used sending events when creating new rooms. Contributed by Nick @ Beeper (@fizzadar). ([\#13224](https://github.com/matrix-org/synapse/issues/13224))
- Call the v2 identity service `/3pid/unbind` endpoint, rather than v1. Contributed by @Vetchu. ([\#13240](https://github.com/matrix-org/synapse/issues/13240))
- Use an asynchronous cache wrapper for the get event cache. Contributed by Nick @ Beeper (@fizzadar). ([\#13242](https://github.com/matrix-org/synapse/issues/13242), [\#13308](https://github.com/matrix-org/synapse/issues/13308))
- Optimise federation sender and appservice pusher event stream processing queries. Contributed by Nick @ Beeper (@fizzadar). ([\#13251](https://github.com/matrix-org/synapse/issues/13251))
- Log the stack when waiting for an entire room to be un-partial stated. ([\#13257](https://github.com/matrix-org/synapse/issues/13257))
- Fix spurious warning when fetching state after a missing prev event. ([\#13258](https://github.com/matrix-org/synapse/issues/13258))
- Clean-up tests for notifications. ([\#13260](https://github.com/matrix-org/synapse/issues/13260))
- Do not fail build if complement with workers fails. ([\#13266](https://github.com/matrix-org/synapse/issues/13266))
- Don't pull out state in `compute_event_context` for unconflicted state. ([\#13267](https://github.com/matrix-org/synapse/issues/13267), [\#13274](https://github.com/matrix-org/synapse/issues/13274))
- Reduce the rebuild time for the complement-synapse docker image. ([\#13279](https://github.com/matrix-org/synapse/issues/13279))
- Don't pull out the full state when creating an event. ([\#13281](https://github.com/matrix-org/synapse/issues/13281), [\#13307](https://github.com/matrix-org/synapse/issues/13307))
- Upgrade from Poetry 1.1.12 to 1.1.14, to fix bugs when locking packages. ([\#13285](https://github.com/matrix-org/synapse/issues/13285))
- Make `DictionaryCache` expire full entries if they haven't been queried in a while, even if specific keys have been queried recently. ([\#13292](https://github.com/matrix-org/synapse/issues/13292))
- Use `HTTPStatus` constants in place of literals in tests. ([\#13297](https://github.com/matrix-org/synapse/issues/13297))
- Improve performance of query `_get_subset_users_in_room_with_profiles`. ([\#13299](https://github.com/matrix-org/synapse/issues/13299))
- Up batch size of `bulk_get_push_rules` and `_get_joined_profiles_from_event_ids`. ([\#13300](https://github.com/matrix-org/synapse/issues/13300))
- Remove unnecessary `json.dumps` from tests. ([\#13303](https://github.com/matrix-org/synapse/issues/13303))
- Reduce memory usage of sending dummy events. ([\#13310](https://github.com/matrix-org/synapse/issues/13310))
- Prevent formatting changes of [#3679](https://github.com/matrix-org/synapse/pull/3679) from appearing in `git blame`. ([\#13311](https://github.com/matrix-org/synapse/issues/13311))
- Change `get_users_in_room` and `get_rooms_for_user` caches to enable pruning of old entries. ([\#13313](https://github.com/matrix-org/synapse/issues/13313))
- Validate federation destinations and log an error if a destination is invalid. ([\#13318](https://github.com/matrix-org/synapse/issues/13318))
- Fix `FederationClient.get_pdu()` returning events from the cache as `outliers` instead of original events we saw over federation. ([\#13320](https://github.com/matrix-org/synapse/issues/13320))
- Reduce memory usage of state caches. ([\#13323](https://github.com/matrix-org/synapse/issues/13323))
- Reduce the amount of state we store in the `state_cache`. ([\#13324](https://github.com/matrix-org/synapse/issues/13324))
- Add missing type hints to open tracing module. ([\#13328](https://github.com/matrix-org/synapse/issues/13328), [\#13345](https://github.com/matrix-org/synapse/issues/13345), [\#13362](https://github.com/matrix-org/synapse/issues/13362))
- Remove old base slaved store and de-duplicate cache ID generators. Contributed by Nick @ Beeper (@fizzadar). ([\#13329](https://github.com/matrix-org/synapse/issues/13329), [\#13349](https://github.com/matrix-org/synapse/issues/13349))
- When reporting metrics is enabled, use ~8x less data to describe DB transaction metrics. ([\#13342](https://github.com/matrix-org/synapse/issues/13342))
- Faster room joins: skip soft fail checks while Synapse only has partial room state, since the current membership of event senders may not be accurately known. ([\#13354](https://github.com/matrix-org/synapse/issues/13354))
Synapse 1.63.1 (2022-07-20)
===========================
Bugfixes
--------
- Fix a bug introduced in Synapse 1.63.0 where push actions were incorrectly calculated for appservice users. This caused performance issues on servers with large numbers of appservices. ([\#13332](https://github.com/matrix-org/synapse/issues/13332))
Synapse 1.63.0 (2022-07-19)
===========================
@@ -82,7 +204,6 @@ Internal Changes
- More aggressively rotate push actions. ([\#13211](https://github.com/matrix-org/synapse/issues/13211))
- Add `max_line_length` setting for Python files to the `.editorconfig`. Contributed by @sumnerevans @ Beeper. ([\#13228](https://github.com/matrix-org/synapse/issues/13228))
Synapse 1.62.0 (2022-07-05)
===========================
@@ -90,7 +211,6 @@ No significant changes since 1.62.0rc3.
Authors of spam-checker plugins should consult the [upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.62/docs/upgrade.md#upgrading-to-v1620) to learn about the enriched signatures for spam checker callbacks, which are supported with this release of Synapse.
Synapse 1.62.0rc3 (2022-07-04)
==============================
@@ -130,7 +250,7 @@ Bugfixes
- Update [MSC3786](https://github.com/matrix-org/matrix-spec-proposals/pull/3786) implementation to check `state_key`. ([\#12939](https://github.com/matrix-org/synapse/issues/12939))
- Fix a bug introduced in Synapse 1.58 where Synapse would not report full version information when installed from a git checkout. This is a best-effort affair and not guaranteed to be stable. ([\#12973](https://github.com/matrix-org/synapse/issues/12973))
- Fix a bug introduced in Synapse 1.60 where Synapse would fail to start if the `sqlite3` module was not available. ([\#12979](https://github.com/matrix-org/synapse/issues/12979))
- Fix a bug where non-standard information was required when requesting the `/hierarchy` API over federation. Introduced
in Synapse v1.41.0. ([\#12991](https://github.com/matrix-org/synapse/issues/12991))
- Fix a long-standing bug which meant that rate limiting was not restrictive enough in some cases. ([\#13018](https://github.com/matrix-org/synapse/issues/13018))
- Fix a bug introduced in Synapse 1.58 where profile requests for a malformed user ID would cause an internal error. Synapse now returns 400 Bad Request in this situation.

changelog.d/11897.doc

@@ -0,0 +1 @@
Update the 'registration tokens' page to acknowledge that the relevant MSC was merged into version 1.2 of the Matrix specification. Contributed by @moan0s.

changelog.d/12978.misc

@@ -0,0 +1 @@
Extend the release script to automatically push a new SyTest branch, rather than having that be a manual process.

changelog.d/13160.misc

@@ -0,0 +1 @@
Make minor clarifications to the error messages given when we fail to join a room via any server.


@@ -0,0 +1 @@
Experimental support for thread-specific notifications ([MSC3773](https://github.com/matrix-org/matrix-spec-proposals/pull/3773)).


@@ -0,0 +1 @@
Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)).

changelog.d/13213.misc

@@ -0,0 +1 @@
Enable Complement CI tests in the 'latest deps' test run.

changelog.d/13221.doc

@@ -0,0 +1 @@
Document which HTTP resources support gzip compression.

changelog.d/13230.doc

@@ -0,0 +1 @@
Add steps describing how to elevate an existing user to administrator by manipulating the database.


@@ -0,0 +1 @@
Add new unstable error codes `ORG.MATRIX.MSC3848.ALREADY_JOINED`, `ORG.MATRIX.MSC3848.NOT_JOINED`, and `ORG.MATRIX.MSC3848.INSUFFICIENT_POWER` described in MSC3848.

changelog.d/13346.misc

@@ -0,0 +1 @@
Fix long-standing bugged logic which was never hit in `get_pdu`, causing it to ask every remote destination even after it finds an event.

changelog.d/13353.bugfix

@@ -0,0 +1 @@
Fix a bug in the experimental faster-room-joins support which could cause it to get stuck in an infinite loop.

changelog.d/13355.misc

@@ -0,0 +1 @@
Faster room joins: avoid blocking when pulling events with partially missing prev events.

changelog.d/13365.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`).

changelog.d/13368.misc

@@ -0,0 +1 @@
Instrument `/messages` for understandable traces in Jaeger.


@@ -0,0 +1 @@
Use stable prefixes for [MSC3827](https://github.com/matrix-org/matrix-spec-proposals/pull/3827).

changelog.d/13372.docker

@@ -0,0 +1 @@
Make docker images build on armv7 by installing cryptography dependencies in the "requirements" stage. Contributed by Jasper Spaans.

changelog.d/13374.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 0.24.0 that would respond with the wrong error status code to `/joined_members` requests when the requester is not a current member of the room. Contributed by @andrewdoh.

changelog.d/13383.misc

@@ -0,0 +1 @@
Remove an unused argument to `get_relations_for_event`.

changelog.d/13392.bugfix

@@ -0,0 +1 @@
Fix bug in handling of typing events for appservices. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13393.misc

@@ -0,0 +1 @@
Add a `merge-back` command to the release script, which automates merging the correct branches after a release.


@@ -0,0 +1 @@
Experimental support for [MSC3856](https://github.com/matrix-org/matrix-spec-proposals/pull/3856): threads list API.

changelog.d/13397.misc

@@ -0,0 +1 @@
Adding missing type hints to tests.

changelog.d/13403.misc

@@ -0,0 +1 @@
Faster Room Joins: don't leave a stuck room partial state flag if the join fails.

changelog.d/13404.misc

@@ -0,0 +1 @@
Refactor `_resolve_state_at_missing_prevs` to compute an `EventContext` instead.

changelog.d/13408.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.57.0 where rooms listed in `exclude_rooms_from_sync` in the configuration file would not be properly excluded from incremental syncs.

changelog.d/13413.bugfix

@@ -0,0 +1 @@
Faster room joins: fix a bug which caused rejected events to become un-rejected during state syncing.

changelog.d/13416.misc

@@ -0,0 +1 @@
Faster Room Joins: prevent Synapse from answering federated join requests for a room which it has not fully joined yet.

changelog.d/13420.misc

@@ -0,0 +1 @@
Re-enable running Complement tests against Synapse with workers.


@@ -0,0 +1 @@
Add a module API method to translate a room alias into a room ID.


@@ -0,0 +1 @@
Add a module API method to create a room.

changelog.d/13431.misc

@@ -0,0 +1 @@
Refactor `_resolve_state_at_missing_prevs` to compute an `EventContext` instead.

changelog.d/13432.bugfix

@@ -0,0 +1 @@
Faster room joins: Fix error when running out of servers to sync partial state with, so that Synapse raises the intended error instead.

changelog.d/13435.misc

@@ -0,0 +1 @@
Prevent unnecessary lookups to any external `get_event` cache. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13437.doc

@@ -0,0 +1 @@
Fix wrong headline for `url_preview_accept_language` in documentation.

changelog.d/13438.doc

@@ -0,0 +1 @@
Remove redundant 'Contents' section from the Configuration Manual. Contributed by @dklimpel.

changelog.d/13439.misc

@@ -0,0 +1 @@
Add some tracing to give more insight into local room joins.


@@ -0,0 +1 @@
Add remote join capability to the module API's `update_room_membership` method (in a backwards compatible manner).

changelog.d/13442.misc

@@ -0,0 +1 @@
Rename class `RateLimitConfig` to `RatelimitSettings` and `FederationRateLimitConfig` to `FederationRatelimitSettings`.

changelog.d/13443.doc

@@ -0,0 +1 @@
Update documentation for config setting `macaroon_secret_key`.

changelog.d/13445.misc

@@ -0,0 +1 @@
Add some comments about how event push actions are stored.

changelog.d/13447.misc

@@ -0,0 +1 @@
Improve rebuild speed for the "synapse-workers" docker image.

changelog.d/13449.doc

@@ -0,0 +1 @@
Update outdated information on `sso_mapping_providers` documentation.

1
changelog.d/13450.doc Normal file
View File

@@ -0,0 +1 @@
Fix example code in module documentation of `password_auth_provider_callbacks`.

1
changelog.d/13452.misc Normal file
View File

@@ -0,0 +1 @@
Fix `@tag_args` being off-by-one with the arguments when tagging a span (tracing).

1
changelog.d/13455.misc Normal file
View File

@@ -0,0 +1 @@
Add some comments about how event push actions are stored.

1
changelog.d/13460.misc Normal file
View File

@@ -0,0 +1 @@
Update type of `EventContext.rejected`.

View File

@@ -1,4 +1,4 @@
# Creating multiple workers with a bash script
# Creating multiple generic workers with a bash script
Setting up multiple worker configuration files manually can be time-consuming.
You can alternatively create multiple worker configuration files with a simple `bash` script. For example:

View File

@@ -0,0 +1,145 @@
# Creating multiple stream writers with a bash script
This script creates multiple [stream writer](https://github.com/matrix-org/synapse/blob/develop/docs/workers.md#stream-writers) workers.
Stream writers require both replication and HTTP listeners.
It also prints out example lines to add to the Synapse main configuration file.
Remember to route the necessary endpoints directly to the worker associated with each stream writer.
If you run the script as-is, it will create workers with replication listeners starting from port 8034 and regular HTTP listeners starting from port 8044. If you don't need all of the stream writers listed in the script, just remove them from the ```STREAM_WRITERS``` array.
```sh
#!/bin/bash
# Start with these replication and http ports.
# The script loop starts with the exact port and then increments it by one.
REP_START_PORT=8034
HTTP_START_PORT=8044
# Stream writer workers to generate. Feel free to add or remove them as you wish.
# Event persister ("events") isn't included here as it does not require its
# own HTTP listener.
STREAM_WRITERS+=( "presence" "typing" "receipts" "to_device" "account_data" )
NUM_WRITERS=$(expr ${#STREAM_WRITERS[@]})
i=0
while [ $i -lt "$NUM_WRITERS" ]
do
cat << EOF > ${STREAM_WRITERS[$i]}_stream_writer.yaml
worker_app: synapse.app.generic_worker
worker_name: ${STREAM_WRITERS[$i]}_stream_writer
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
  - type: http
    port: $(expr $REP_START_PORT + $i)
    resources:
      - names: [replication]
  - type: http
    port: $(expr $HTTP_START_PORT + $i)
    resources:
      - names: [client]
worker_log_config: /etc/matrix-synapse/stream-writer-log.yaml
EOF
HOMESERVER_YAML_INSTANCE_MAP+=$" ${STREAM_WRITERS[$i]}_stream_writer:
host: 127.0.0.1
port: $(expr $REP_START_PORT + $i)
"
HOMESERVER_YAML_STREAM_WRITERS+=$" ${STREAM_WRITERS[$i]}: ${STREAM_WRITERS[$i]}_stream_writer
"
((i++))
done
cat << EXAMPLECONFIG
# Add these lines to your homeserver.yaml.
# Don't forget to configure your reverse proxy and
# necessary endpoints to their respective worker.
# See https://github.com/matrix-org/synapse/blob/develop/docs/workers.md
# for more information.
# Remember: Under NO circumstances should the replication
# listener be exposed to the public internet;
# it has no authentication and is unencrypted.
instance_map:
$HOMESERVER_YAML_INSTANCE_MAP
stream_writers:
$HOMESERVER_YAML_STREAM_WRITERS
EXAMPLECONFIG
```
Copy the code above and save it to a file, for example ```create_stream_writers.sh```.
Make the script executable by running ```chmod +x create_stream_writers.sh```.
## Run the script to create workers and print out a sample configuration
Simply run the script to create YAML files in the current folder and print out the required configuration for ```homeserver.yaml```.
```console
$ ./create_stream_writers.sh
# Add these lines to your homeserver.yaml.
# Don't forget to configure your reverse proxy and
# necessary endpoints to their respective worker.
# See https://github.com/matrix-org/synapse/blob/develop/docs/workers.md
# for more information
# Remember: Under NO circumstances should the replication
# listener be exposed to the public internet;
# it has no authentication and is unencrypted.
instance_map:
  presence_stream_writer:
    host: 127.0.0.1
    port: 8034
  typing_stream_writer:
    host: 127.0.0.1
    port: 8035
  receipts_stream_writer:
    host: 127.0.0.1
    port: 8036
  to_device_stream_writer:
    host: 127.0.0.1
    port: 8037
  account_data_stream_writer:
    host: 127.0.0.1
    port: 8038
stream_writers:
  presence: presence_stream_writer
  typing: typing_stream_writer
  receipts: receipts_stream_writer
  to_device: to_device_stream_writer
  account_data: account_data_stream_writer
```
Simply copy-and-paste the output to an appropriate place in your Synapse main configuration file.
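The worker configuration files themselves are written to the current directory. With the default ```STREAM_WRITERS``` array you should end up with five files; a quick, purely illustrative listing (exact layout depends on your shell) might look like:
```console
$ ls *_stream_writer.yaml
account_data_stream_writer.yaml  receipts_stream_writer.yaml  typing_stream_writer.yaml
presence_stream_writer.yaml      to_device_stream_writer.yaml
```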
## Write directly to Synapse configuration file
You could also write the output directly to your homeserver's main configuration file. **This, however, is not recommended**, as even a small typo (such as replacing ```>>``` with ```>```) can erase the entire ```homeserver.yaml```.
If you do this, back up your original configuration file first:
```console
# Back up homeserver.yaml first
cp /etc/matrix-synapse/homeserver.yaml /etc/matrix-synapse/homeserver.yaml.bak
# Create workers and write output to your homeserver.yaml
./create_stream_writers.sh >> /etc/matrix-synapse/homeserver.yaml
```

24
debian/changelog vendored
View File

@@ -1,3 +1,27 @@
matrix-synapse-py3 (1.64.0) stable; urgency=medium
* New Synapse release 1.64.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 02 Aug 2022 10:32:30 +0100
matrix-synapse-py3 (1.64.0~rc2) stable; urgency=medium
* New Synapse release 1.64.0rc2.
-- Synapse Packaging team <packages@matrix.org> Fri, 29 Jul 2022 12:22:53 +0100
matrix-synapse-py3 (1.64.0~rc1) stable; urgency=medium
* New Synapse release 1.64.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Jul 2022 12:11:49 +0100
matrix-synapse-py3 (1.63.1) stable; urgency=medium
* New Synapse release 1.63.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 20 Jul 2022 13:36:52 +0100
matrix-synapse-py3 (1.63.0) stable; urgency=medium
* Clarify that homeserver server names are included in the data reported

View File

@@ -40,12 +40,13 @@ FROM docker.io/python:${PYTHON_VERSION}-slim as requirements
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq git \
apt-get update -qq && apt-get install -yqq \
build-essential cargo git libffi-dev libssl-dev \
&& rm -rf /var/lib/apt/lists/*
# We install poetry in its own build stage to avoid its dependencies conflicting with
# synapse's dependencies.
# We use a specific commit from poetry's master branch instead of our usual 1.1.12,
# We use a specific commit from poetry's master branch instead of our usual 1.1.14,
# to incorporate fixes to some bugs in `poetry export`. This commit corresponds to
# https://github.com/python-poetry/poetry/pull/5156 and
# https://github.com/python-poetry/poetry/issues/5141 ;
@@ -68,7 +69,18 @@ COPY pyproject.toml poetry.lock /synapse/
# reason, such as when a git repository is used directly as a dependency.
ARG TEST_ONLY_SKIP_DEP_HASH_VERIFICATION
RUN /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}
# If specified, we won't use the Poetry lockfile.
# Instead, we'll just install what a regular `pip install` would from PyPI.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# Export the dependencies, but only if we're actually going to use the Poetry lockfile.
# Otherwise, just create an empty requirements file so that the Dockerfile can
# proceed.
RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
/root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
else \
touch /synapse/requirements.txt; \
fi
###
### Stage 1: builder
@@ -108,8 +120,17 @@ COPY synapse /synapse/synapse/
# ... and what we need to `pip install`.
COPY pyproject.toml README.rst /synapse/
# Repeat of earlier build argument declaration, as this is a new build stage.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# Install the synapse package itself.
RUN pip install --prefix="/install" --no-deps --no-warn-script-location /synapse
# If we have populated requirements.txt, we don't install any dependencies
# as we should already have those from the previous `pip install` step.
RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; \
else \
pip install --prefix="/install" --no-warn-script-location /synapse[all]; \
fi
###
### Stage 2: runtime

View File

@@ -1,38 +1,62 @@
# Inherit from the official Synapse docker image
# syntax=docker/dockerfile:1
ARG SYNAPSE_VERSION=latest
# first of all, we create a base image with an nginx which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
# each time.
FROM debian:bullseye-slim AS deps_base
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
redis-server nginx-light
# Similarly, a base to copy the redis server from.
#
# The redis docker image has fewer dynamic libraries than the debian package,
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc).
FROM redis:6-bullseye AS redis_base
# now build the final image, based on the regular Synapse docker image
FROM matrixdotorg/synapse:$SYNAPSE_VERSION
# Install deps
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
redis-server nginx-light
# Install supervisord with pip instead of apt, to avoid installing a second
# copy of python.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install supervisor~=4.2
RUN mkdir -p /etc/supervisor/conf.d
# Install supervisord with pip instead of apt, to avoid installing a second
# copy of python.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install supervisor~=4.2
# Copy over redis and nginx
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
# Disable the default nginx sites
RUN rm /etc/nginx/sites-enabled/default
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx
COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx
COPY --from=deps_base /etc/nginx /etc/nginx
RUN rm /etc/nginx/sites-enabled/default
RUN mkdir /var/log/nginx /var/lib/nginx
RUN chown www-data /var/log/nginx /var/lib/nginx
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Copy a script to prefix log lines with the supervisor program name
COPY ./docker/prefix-log /usr/local/bin/
# Copy a script to prefix log lines with the supervisor program name
COPY ./docker/prefix-log /usr/local/bin/
# Expose nginx listener port
EXPOSE 8080/tcp
# Expose nginx listener port
EXPOSE 8080/tcp
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
# Replace the healthcheck with one which checks *all* the workers. The script
# is generated by configure_workers_and_start.py.
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD /bin/sh /healthcheck.sh
# Replace the healthcheck with one which checks *all* the workers. The script
# is generated by configure_workers_and_start.py.
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD /bin/sh /healthcheck.sh

View File

@@ -22,6 +22,10 @@ Consult the [contributing guide][guideComplementSh] for instructions on how to u
Under some circumstances, you may wish to build the images manually.
The instructions below will lead you to doing that.
Note that these images can only be built using [BuildKit](https://docs.docker.com/develop/develop-images/build_enhancements/),
therefore BuildKit needs to be enabled when calling `docker build`. This can be done by
setting `DOCKER_BUILDKIT=1` in your environment.
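For example, BuildKit can be enabled for the whole shell session or for a single build (a sketch only; the image tag and Dockerfile path shown are those of the base Synapse image and may need adjusting):
```sh
# Enable BuildKit for every docker command in this shell session...
export DOCKER_BUILDKIT=1
# ...or just for a single invocation:
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```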
Start by building the base Synapse docker image. If you wish to run tests with the latest
release of Synapse, instead of your current checkout, you can skip this step. From the
root of the repository:

View File

@@ -1,45 +1,62 @@
# syntax=docker/dockerfile:1
# This dockerfile builds on top of 'docker/Dockerfile-workers' in matrix-org/synapse
# by including a built-in postgres instance, as well as setting up the homeserver so
# that it is ready for testing via Complement.
#
# Instructions for building this image from those it depends on is detailed in this guide:
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
ARG SYNAPSE_VERSION=latest
# first of all, we create a base image with a postgres server and database,
# which we can copy into the target image. For repeated rebuilds, this is
# much faster than apt installing postgres each time.
#
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
FROM postgres:13-bullseye AS postgres_base
# initialise the database cluster in /var/lib/postgresql
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
# Configure a password and create a database for Synapse
RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single
# now build the final image, based on the Synapse image.
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# Install postgresql
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -yqq postgresql-13
# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.
# To do this, we copy the old template out of the way and then include it
# with Jinja2.
RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2
# Configure a user and create a database for Synapse
RUN pg_ctlcluster 13 main start && su postgres -c "echo \
\"ALTER USER postgres PASSWORD 'somesecret'; \
CREATE DATABASE synapse \
ENCODING 'UTF8' \
LC_COLLATE='C' \
LC_CTYPE='C' \
template=template0;\" | psql" && pg_ctlcluster 13 main stop
WORKDIR /data
# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.
# To do this, we copy the old template out of the way and then include it
# with Jinja2.
RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2
COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
WORKDIR /data
# Copy the entrypoint
COPY conf/start_for_complement.sh /
COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
# Expose nginx's listener ports
EXPOSE 8008 8448
# Copy the entrypoint
COPY conf/start_for_complement.sh /
ENTRYPOINT ["/start_for_complement.sh"]
# Expose nginx's listener ports
EXPOSE 8008 8448
ENTRYPOINT ["/start_for_complement.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh

View File

@@ -1,5 +1,5 @@
[program:postgres]
command=/usr/local/bin/prefix-log /usr/bin/pg_ctlcluster 13 main start --foreground
command=/usr/local/bin/prefix-log gosu postgres postgres
# Only start if START_POSTGRES=1
autostart=%(ENV_START_POSTGRES)s

View File

@@ -67,6 +67,10 @@ rc_joins:
  per_second: 9999
  burst_count: 9999
rc_joins_per_room:
  per_second: 9999
  burst_count: 9999
rc_3pid_validation:
  per_second: 1000
  burst_count: 1000

View File

@@ -19,7 +19,7 @@ username=www-data
autorestart=true
[program:redis]
command=/usr/local/bin/prefix-log /usr/bin/redis-server /etc/redis/redis.conf --daemonize no
command=/usr/local/bin/prefix-log /usr/local/bin/redis-server
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0

View File

@@ -35,7 +35,6 @@
- [Application Services](application_services.md)
- [Server Notices](server_notices.md)
- [Consent Tracking](consent_tracking.md)
- [URL Previews](development/url_previews.md)
- [User Directory](user_directory.md)
- [Message Retention Policies](message_retention_policies.md)
- [Pluggable Modules](modules/index.md)

View File

@@ -59,6 +59,7 @@ The following fields are possible in the JSON response body:
- `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
- `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
- `state_events` - Total number of state_events of a room. Complexity of the room.
- `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space. If the room does not define a type, the value will be `null`.
* `offset` - The current pagination offset in rooms. This parameter should be
used instead of `next_token` for room offset as `next_token` is
not intended to be parsed.
@@ -101,7 +102,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
},
... (8 hidden items) ...
{
@@ -118,7 +120,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": null
}
],
"offset": 0,
@@ -151,7 +154,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8
"state_events": 8,
"room_type": null
}
],
"offset": 0,
@@ -184,7 +188,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": null
},
... (98 hidden items) ...
{
@@ -201,7 +206,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": "m.space"
}
],
"offset": 0,
@@ -238,7 +244,9 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
},
... (48 hidden items) ...
{
@@ -255,7 +263,9 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": null
}
],
"offset": 100,
@@ -290,6 +300,8 @@ The following fields are possible in the JSON response body:
* `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
* `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
* `state_events` - Total number of state_events of a room. Complexity of the room.
* `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
If the room does not define a type, the value will be `null`.
The API is:
@@ -317,7 +329,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
}
```

View File

@@ -544,7 +544,7 @@ Gets a list of all local media that a specific `user_id` has created.
These are media that the user has uploaded themselves
([local media](../media_repository.md#local-media)), as well as
[URL preview images](../media_repository.md#url-previews) requested by the user if the
[feature is enabled](../development/url_previews.md).
[feature is enabled](../usage/configuration/config_documentation.md#url_preview_enabled).
By default, the response is ordered by descending creation date and ascending media ID.
The newest media is on top. You can change the order with parameters

View File

@@ -237,3 +237,28 @@ poetry run pip install build && poetry run python -m build
because [`build`](https://github.com/pypa/build) is a standardish tool which
doesn't require poetry. (It's what we use in CI too). However, you could try
`poetry build` too.
# Troubleshooting
## Check the version of poetry with `poetry --version`.
At the time of writing, the 1.2 series is beta only. We have seen some examples
where the lockfiles generated by 1.2 prereleases aren't interpreted correctly
by poetry 1.1.x. For now, use poetry 1.1.14, which includes a critical
[change](https://github.com/python-poetry/poetry/pull/5973) needed to remain
[compatible with PyPI](https://github.com/pypi/warehouse/pull/11775).
It can also be useful to check the version of `poetry-core` in use. If you've
installed `poetry` with `pipx`, try `pipx runpip poetry list | grep poetry-core`.
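A quick check might look like the following (the version numbers shown are purely illustrative):
```console
$ poetry --version
Poetry version 1.1.14
$ pipx runpip poetry list | grep poetry-core
poetry-core        1.0.8
```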
## Clear caches: `poetry cache clear --all pypi`.
Poetry caches a bunch of information about packages that isn't readily available
from PyPI. (This is what makes poetry seem slow when doing the first
`poetry install`.) Try `poetry cache list` and `poetry cache clear --all
<name of cache>` to see if that fixes things.
## Try `--verbose` or `--dry-run` arguments.
These are sometimes useful for seeing what poetry's internal logic is doing.

View File

@@ -1,61 +0,0 @@
URL Previews
============
The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
for URLs which outputs [Open Graph](https://ogp.me/) responses (with some Matrix
specific additions).
This does have trade-offs compared to other designs:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each homeserver provides one of these independently, all the HSes in a
room may needlessly DoS the target URI
* The URL metadata must be stored somewhere, rather than just using Matrix
itself to store the media.
* Matrix cannot be used to distribute the metadata between homeservers.
When Synapse is asked to preview a URL it does the following:
1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
config).
2. Checks the in-memory cache by URLs and returns the result if it exists. (This
is also used to de-duplicate processing of multiple in-flight requests at once.)
3. Kicks off a background process to generate a preview:
1. Checks the database cache by URL and timestamp and returns the result if it
has not expired and was successful (a 2xx return code).
2. Checks if the URL matches an [oEmbed](https://oembed.com/) pattern. If it
does, update the URL to download.
3. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
4. If the media is an image:
1. Generates thumbnails.
2. Generates an Open Graph response based on image properties.
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
1. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
2. Convert the oEmbed response to an Open Graph response.
3. Override any Open Graph data from the HTML with data from oEmbed.
4. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
6. If the media is JSON and an oEmbed URL was found:
1. Convert the oEmbed response to an Open Graph response.
2. If a thumbnail or image is in the oEmbed response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
7. Stores the result in the database cache.
4. Returns the result.
The in-memory cache expires after 1 hour.
Expired entries in the database cache (and their associated media files) are
deleted every 10 seconds. The default expiration time is 1 hour from download.

View File

@@ -7,8 +7,7 @@ The media repository
users.
* caches avatars, attachments and their thumbnails for media uploaded by remote
users.
* caches resources and thumbnails used for
[URL previews](development/url_previews.md).
* caches resources and thumbnails used for URL previews.
All media in Matrix can be identified by a unique
[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
@@ -59,8 +58,6 @@ remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
Note that `remote_thumbnail/` does not have an `s`.
## URL Previews
See [URL Previews](development/url_previews.md) for documentation on the URL preview
process.
When generating previews for URLs, Synapse may download and cache various
resources, including images. These resources are assigned temporary media IDs

View File

@@ -263,7 +263,7 @@ class MyAuthProvider:
return None
if self.credentials.get(username) == login_dict.get("my_field"):
return self.api.get_qualified_user_id(username)
return (self.api.get_qualified_user_id(username), None)
async def check_pass(
self,
@@ -280,5 +280,5 @@ class MyAuthProvider:
return None
if self.credentials.get(username) == login_dict.get("password"):
return self.api.get_qualified_user_id(username)
return (self.api.get_qualified_user_id(username), None)
```

View File

@@ -79,63 +79,32 @@ server {
}
```
### Caddy v1
```
matrix.example.com {
proxy /_matrix http://localhost:8008 {
transparent
}
proxy /_synapse/client http://localhost:8008 {
transparent
}
}
example.com:8448 {
proxy / http://localhost:8008 {
transparent
}
}
```
### Caddy v2
```
matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
reverse_proxy /_matrix/* localhost:8008
reverse_proxy /_synapse/client/* localhost:8008
}
example.com:8448 {
reverse_proxy http://localhost:8008
reverse_proxy localhost:8008
}
```
[Delegation](delegate.md) example:
```
(matrix-well-known-header) {
# Headers
header Access-Control-Allow-Origin "*"
header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
header Access-Control-Allow-Headers "Origin, X-Requested-With, Content-Type, Accept, Authorization"
header Content-Type "application/json"
}
example.com {
handle /.well-known/matrix/server {
import matrix-well-known-header
respond `{"m.server":"matrix.example.com:443"}`
}
handle /.well-known/matrix/client {
import matrix-well-known-header
respond `{"m.homeserver":{"base_url":"https://matrix.example.com"},"m.identity_server":{"base_url":"https://identity.example.com"}}`
}
header /.well-known/matrix/* Content-Type application/json
header /.well-known/matrix/* Access-Control-Allow-Origin *
respond /.well-known/matrix/server `{"m.server": "matrix.example.com:443"}`
respond /.well-known/matrix/client `{"m.homeserver":{"base_url":"https://matrix.example.com"},"m.identity_server":{"base_url":"https://identity.example.com"}}`
}
matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
reverse_proxy /_synapse/client/* http://localhost:8008
reverse_proxy /_matrix/* localhost:8008
reverse_proxy /_synapse/client/* localhost:8008
}
```

View File

@@ -22,7 +22,7 @@ choose their own username.
In the first case - where users are automatically allocated a Matrix ID - it is
the responsibility of the mapping provider to normalise the SSO attributes and
map them to a valid Matrix ID. The [specification for Matrix
IDs](https://matrix.org/docs/spec/appendices#user-identifiers) has some
IDs](https://spec.matrix.org/latest/appendices/#user-identifiers) has some
information about what is considered valid.
If the mapping provider does not assign a Matrix ID, then Synapse will
@@ -37,9 +37,10 @@ as Synapse). The Synapse config is then modified to point to the mapping provide
## OpenID Mapping Providers
The OpenID mapping provider can be customized by editing the
`oidc_config.user_mapping_provider.module` config option.
[`oidc_providers.user_mapping_provider.module`](usage/configuration/config_documentation.md#oidc_providers)
config option.
`oidc_config.user_mapping_provider.config` allows you to provide custom
`oidc_providers.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built in to Synapse. If using a custom module, you should
@@ -58,7 +59,7 @@ A custom mapping provider must specify the following methods:
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - A `dict` representing the parsed content of the
`oidc_config.user_mapping_provider.config` homeserver config option.
`oidc_providers.user_mapping_provider.config` homeserver config option.
Runs on homeserver startup. Providers should extract and validate
any option values they need here.
- Whatever is returned will be passed back to the user mapping provider module's
@@ -102,7 +103,7 @@ A custom mapping provider must specify the following methods:
will be returned as part of the response during a successful login.
Note that care should be taken to not overwrite any of the parameters
usually returned as part of the [login response](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
usually returned as part of the [login response](https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3login).
### Default OpenID Mapping Provider
@@ -113,7 +114,8 @@ specified in the config. It is located at
## SAML Mapping Providers
The SAML mapping provider can be customized by editing the
`saml2_config.user_mapping_provider.module` config option.
[`saml2_config.user_mapping_provider.module`](docs/usage/configuration/config_documentation.md#saml2_config)
config option.
`saml2_config.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for

View File

@@ -89,6 +89,37 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.64.0
## Deprecation of the ability to delegate e-mail verification to identity servers
Synapse v1.66.0 will remove the ability to delegate the tasks of verifying email address ownership, and password reset confirmation, to an identity server.
If you require your homeserver to verify e-mail addresses or to support password resets via e-mail, please configure your homeserver with SMTP access so that it can send e-mails on its own behalf.
[Consult the configuration documentation for more information.](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#email)
The option that will be removed is `account_threepid_delegates.email`.
## Changes to the event replication streams
Synapse now includes a flag indicating if an event is an outlier when
replicating it to other workers. This is a forwards- and backwards-incompatible
change: v1.63 workers cannot process events replicated by v1.64 workers, and
vice versa.
Once all workers are upgraded to v1.64 (or downgraded to v1.63), event
replication will resume as normal.
## frozendict release
[frozendict 2.3.3](https://github.com/Marco-Sulla/python-frozendict/releases/tag/v2.3.3)
has recently been released, which fixes a memory leak that occurs during `/sync`
requests. We advise server administrators who installed Synapse via pip to upgrade
frozendict with `pip install --upgrade frozendict`. The Docker image
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` already
include the updated library.
# Upgrading to v1.62.0
## New signatures for spam checker callbacks

View File

@@ -5,8 +5,9 @@
Many of the API calls in the admin api will require an `access_token` for a
server admin. (Note that a server admin is distinct from a room admin.)
A user can be marked as a server admin by updating the database directly, e.g.:
An existing user can be marked as a server admin by updating the database directly.
Check your [database settings](config_documentation.md#database) in the configuration file, connect to the correct database using either `psql [database name]` (if using PostgreSQL) or `sqlite3 path/to/your/database.db` (if using SQLite) and elevate the user `@foo:bar.com` to administrator.
```sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
```
@@ -18,6 +19,11 @@ already on your `$PATH` depending on how Synapse was installed.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
## Making an Admin API request
For security reasons, we [recommend](reverse_proxy.md#synapse-administration-endpoints)
that the Admin API (`/_synapse/admin/...`) should be hidden from public view using a
reverse proxy. This means you should typically query the Admin API from a terminal on
the machine which runs Synapse.
Once you have your `access_token`, you will need to authenticate each request to an Admin API endpoint by
providing the token as either a query parameter or a request header. To add it as a request header in cURL:
@@ -25,5 +31,17 @@ providing the token as either a query parameter or a request header. To add it a
curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>
```
For example, suppose we want to
[query the account](user_admin_api.md#query-user-account) of the user
`@foo:bar.com`. We need an admin access token (e.g.
`syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk`), and we need to know which port
Synapse's [`client` listener](config_documentation.md#listeners) is listening
on (e.g. `8008`). Then we can use the following command to request the account
information from the Admin API.
```sh
curl --header "Authorization: Bearer syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk" -X GET http://127.0.0.1:8008/_synapse/admin/v2/users/@foo:bar.com
```
For more details on access tokens in Matrix, please refer to the complete
[matrix spec documentation](https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens).

View File

@@ -2,11 +2,11 @@
This API allows you to manage tokens which can be used to authenticate
registration requests, as proposed in
[MSC3231](https://github.com/matrix-org/matrix-doc/blob/main/proposals/3231-token-authenticated-registration.md).
[MSC3231](https://github.com/matrix-org/matrix-doc/blob/main/proposals/3231-token-authenticated-registration.md)
and stabilised in version 1.2 of the Matrix specification.
To use it, you will need to enable the `registration_requires_token` config
option, and authenticate by providing an `access_token` for a server admin:
see [Admin API](../../usage/administration/admin_api).
Note that this API is still experimental; not all clients may support it yet.
see [Admin API](../admin_api).
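For example, once the option is enabled, listing the existing registration tokens is a single authenticated request (the port and access token below are placeholders):
```sh
curl --header "Authorization: Bearer <access_token>" \
    http://localhost:8008/_synapse/admin/v1/registration_tokens
```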
## Registration token objects

File diff suppressed because it is too large.

View File

@@ -84,9 +84,6 @@ disallow_untyped_defs = False
[mypy-synapse.http.matrixfederationclient]
disallow_untyped_defs = False
[mypy-synapse.logging.opentracing]
disallow_untyped_defs = False
[mypy-synapse.metrics._reactor_metrics]
disallow_untyped_defs = False
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.

38
poetry.lock generated
View File

@@ -290,7 +290,7 @@ importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
[[package]]
name = "frozendict"
version = "2.3.0"
version = "2.3.3"
description = "A simple immutable dictionary"
category = "main"
optional = false
@@ -1563,7 +1563,7 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "e96625923122e29b6ea5964379828e321b6cede2b020fc32c6f86c09d86d1ae8"
content-hash = "c24bbcee7e86dbbe7cdbf49f91a25b310bf21095452641e7440129f59b077f78"
[metadata.files]
attrs = [
@@ -1753,23 +1753,23 @@ flake8-comprehensions = [
{file = "flake8_comprehensions-3.8.0-py3-none-any.whl", hash = "sha256:9406314803abe1193c064544ab14fdc43c58424c0882f6ff8a581eb73fc9bb58"},
]
frozendict = [
{file = "frozendict-2.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e18e2abd144a9433b0a8334582843b2aa0d3b9ac8b209aaa912ad365115fe2e1"},
{file = "frozendict-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96dc7a02e78da5725e5e642269bb7ae792e0c9f13f10f2e02689175ebbfedb35"},
{file = "frozendict-2.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:752a6dcfaf9bb20a7ecab24980e4dbe041f154509c989207caf185522ef85461"},
{file = "frozendict-2.3.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:5346d9fc1c936c76d33975a9a9f1a067342963105d9a403a99e787c939cc2bb2"},
{file = "frozendict-2.3.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60dd2253f1bacb63a7c486ec541a968af4f985ffb06602ee8954a3d39ec6bd2e"},
{file = "frozendict-2.3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:b2e044602ce17e5cd86724add46660fb9d80169545164e763300a3b839cb1b79"},
{file = "frozendict-2.3.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a27a69b1ac3591e4258325108aee62b53c0eeb6ad0a993ae68d3c7eaea980420"},
{file = "frozendict-2.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f45ef5f6b184d84744fff97b61f6b9a855e24d36b713ea2352fc723a047afa5"},
{file = "frozendict-2.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:2d3f5016650c0e9a192f5024e68fb4d63f670d0ee58b099ed3f5b4c62ea30ecb"},
{file = "frozendict-2.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6cf605916f50aabaaba5624c81eb270200f6c2c466c46960237a125ec8fe3ae0"},
{file = "frozendict-2.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6da06e44904beae4412199d7e49be4f85c6cc168ab06b77c735ea7da5ce3454"},
{file = "frozendict-2.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:1f34793fb409c4fa70ffd25bea87b01f3bd305fb1c6b09e7dff085b126302206"},
{file = "frozendict-2.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fd72494a559bdcd28aa71f4aa81860269cd0b7c45fff3e2614a0a053ecfd2a13"},
{file = "frozendict-2.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00ea9166aa68cc5feed05986206fdbf35e838a09cb3feef998cf35978ff8a803"},
{file = "frozendict-2.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:9ffaf440648b44e0bc694c1a4701801941378ba3ba6541e17750ae4b4aeeb116"},
{file = "frozendict-2.3.0-py3-none-any.whl", hash = "sha256:8578fe06815fcdcc672bd5603eebc98361a5317c1c3a13b28c6c810f6ea3b323"},
{file = "frozendict-2.3.0.tar.gz", hash = "sha256:da4231adefc5928e7810da2732269d3ad7b5616295b3e693746392a8205ea0b5"},
{file = "frozendict-2.3.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:39942914c1217a5a49c7551495a103b3dbd216e19413687e003b859c6b0ebc12"},
{file = "frozendict-2.3.3-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5589256058b31f2b91419fa30b8dc62dbdefe7710e688a3fd5b43849161eecc9"},
{file = "frozendict-2.3.3-cp310-cp310-win_amd64.whl", hash = "sha256:35eb7e59e287c41f4f712d4d3d2333354175b155d217b97c99c201d2d8920790"},
{file = "frozendict-2.3.3-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:310aaf81793abf4f471895e6fe65e0e74a28a2aaf7b25c2ba6ccd4e35af06842"},
{file = "frozendict-2.3.3-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c353c11010a986566a0cb37f9a783c560ffff7d67d5e7fd52221fb03757cdc43"},
{file = "frozendict-2.3.3-cp36-cp36m-win_amd64.whl", hash = "sha256:15b5f82aad108125336593cec1b6420c638bf45f449c57e50949fc7654ea5a41"},
{file = "frozendict-2.3.3-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a4737e5257756bd6b877504ff50185b705db577b5330d53040a6cf6417bb3cdb"},
{file = "frozendict-2.3.3-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:80a14c11e33e8b0bc09e07bba3732c77a502c39edb8c3959fd9a0e490e031158"},
{file = "frozendict-2.3.3-cp37-cp37m-win_amd64.whl", hash = "sha256:027952d1698ac9c766ef43711226b178cdd49d2acbdff396936639ad1d2a5615"},
{file = "frozendict-2.3.3-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ef818d66c85098a37cf42509545a4ba7dd0c4c679d6262123a8dc14cc474bab7"},
{file = "frozendict-2.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:812279f2b270c980112dc4e367b168054f937108f8044eced4199e0ab2945a37"},
{file = "frozendict-2.3.3-cp38-cp38-win_amd64.whl", hash = "sha256:c1fb7efbfebc2075f781be3d9774e4ba6ce4fc399148b02097f68d4b3c4bc00a"},
{file = "frozendict-2.3.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:a0b46d4bf95bce843c0151959d54c3e5b8d0ce29cb44794e820b3ec980d63eee"},
{file = "frozendict-2.3.3-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:38c4660f37fcc70a32ff997fe58e40b3fcc60b2017b286e33828efaa16b01308"},
{file = "frozendict-2.3.3-cp39-cp39-win_amd64.whl", hash = "sha256:919e3609844fece11ab18bcbf28a3ed20f8108ad4149d7927d413687f281c6c9"},
{file = "frozendict-2.3.3-py3-none-any.whl", hash = "sha256:f988b482d08972a196664718167a993a61c9e9f6fe7b0ca2443570b5f20ca44a"},
{file = "frozendict-2.3.3.tar.gz", hash = "sha256:398539c52af3c647d103185bbaa1291679f0507ad035fe3bab2a8b0366d52cf1"},
]
gitdb = [
{file = "gitdb-4.0.9-py3-none-any.whl", hash = "sha256:8033ad4e853066ba6ca92050b9df2f89301b8fc8bf7e9324d412a63f8bf1a8fd"},

View File

@@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.63.0"
version = "1.64.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -110,7 +110,9 @@ jsonschema = ">=3.0.0"
frozendict = ">=1,!=2.1.2"
# We require 2.1.0 or higher for type hints. Previous guard was >= 1.1.0
unpaddedbase64 = ">=2.1.0"
canonicaljson = "^1.4.0"
# We require 1.5.0 to work around an issue when running against the C implementation of
# frozendict: https://github.com/matrix-org/python-canonicaljson/issues/36
canonicaljson = "^1.5.0"
# we use the type definitions added in signedjson 1.1.
signedjson = "^1.1.0"
# validating SSL certs for IP addresses requires service_identity 18.1.

View File

@@ -26,7 +26,6 @@ DISTS = (
"debian:bookworm",
"debian:sid",
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:impish", # 21.10 (EOL 2022-07)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04)
)

View File

@@ -101,6 +101,7 @@ if [ -z "$skip_docker_build" ]; then
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
docker build -t matrixdotorg/synapse \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
--build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
-f "docker/Dockerfile" .
echo_if_github "::endgroup::"

View File

@@ -32,6 +32,7 @@ import click
import commonmark
import git
from click.exceptions import ClickException
from git import GitCommandError, Repo
from github import Github
from packaging import version
@@ -55,9 +56,12 @@ def run_until_successful(
def cli() -> None:
"""An interactive script to walk through the parts of creating a release.
Requires the dev dependencies be installed, which can be done via:
Requirements:
- The dev dependencies be installed, which can be done via:
pip install -e .[dev]
pip install -e .[dev]
- A checkout of the sytest repository at ../sytest
Then to use:
@@ -75,6 +79,8 @@ def cli() -> None:
# Optional: generate some nice links for the announcement
./scripts-dev/release.py merge-back
./scripts-dev/release.py announce
If the env var GH_TOKEN (or GITHUB_TOKEN) is set, or passed into the
@@ -89,10 +95,12 @@ def prepare() -> None:
"""
# Make sure we're in a git repo.
repo = get_repo_and_check_clean_checkout()
synapse_repo = get_repo_and_check_clean_checkout()
sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
click.secho("Updating git repo...")
repo.remote().fetch()
click.secho("Updating Synapse and Sytest git repos...")
synapse_repo.remote().fetch()
sytest_repo.remote().fetch()
# Get the current version and AST from root Synapse module.
current_version = get_package_version()
@@ -166,12 +174,12 @@ def prepare() -> None:
assert not parsed_new_version.is_postrelease
release_branch_name = get_release_branch_name(parsed_new_version)
release_branch = find_ref(repo, release_branch_name)
release_branch = find_ref(synapse_repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
# If the release branch only exists on the remote we check it out
# locally.
repo.git.checkout(release_branch_name)
synapse_repo.git.checkout(release_branch_name)
else:
# If a branch doesn't exist we create one. We ask which one branch it
# should be based off, defaulting to sensible values depending on the
@@ -187,25 +195,34 @@ def prepare() -> None:
"Which branch should the release be based on?", default=default
)
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name}!")
click.get_current_context().abort()
for repo_name, repo in {"synapse": synapse_repo, "sytest": sytest_repo}.items():
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name} for {repo_name}!")
click.get_current_context().abort()
# Check out the base branch and ensure it's up to date
repo.head.set_reference(base_branch, "check out the base branch")
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Check out the base branch and ensure it's up to date
repo.head.set_reference(
base_branch, f"check out the base branch for {repo_name}"
)
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Create the new release branch
# Type ignore will no longer be needed after GitPython 3.1.28.
# See https://github.com/gitpython-developers/GitPython/pull/1419
repo.create_head(release_branch_name, commit=base_branch) # type: ignore[arg-type]
# Create the new release branch
# Type ignore will no longer be needed after GitPython 3.1.28.
# See https://github.com/gitpython-developers/GitPython/pull/1419
repo.create_head(release_branch_name, commit=base_branch) # type: ignore[arg-type]
# Special-case SyTest: we don't actually prepare any files so we may
# as well push it now (and only when we create a release branch;
# not on subsequent RCs or full releases).
if click.confirm("Push new SyTest branch?", default=True):
sytest_repo.git.push("-u", sytest_repo.remote().name, release_branch_name)
# Switch to the release branch and ensure it's up to date.
repo.git.checkout(release_branch_name)
update_branch(repo)
synapse_repo.git.checkout(release_branch_name)
update_branch(synapse_repo)
# Update the version specified in pyproject.toml.
subprocess.check_output(["poetry", "version", new_version])
@@ -230,15 +247,15 @@ def prepare() -> None:
run_until_successful('dch -M -r -D stable ""', shell=True)
# Show the user the changes and ask if they want to edit the change log.
repo.git.add("-u")
synapse_repo.git.add("-u")
subprocess.run("git diff --cached", shell=True)
if click.confirm("Edit changelog?", default=False):
click.edit(filename="CHANGES.md")
# Commit the changes.
repo.git.add("-u")
repo.git.commit("-m", new_version)
synapse_repo.git.add("-u")
synapse_repo.git.commit("-m", new_version)
# We give the option to bail here in case the user wants to make sure things
# are OK before pushing.
@@ -246,17 +263,21 @@ def prepare() -> None:
print("")
print("Run when ready to push:")
print("")
print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
print(
f"\tgit push -u {synapse_repo.remote().name} {synapse_repo.active_branch.name}"
)
print("")
sys.exit(0)
# Otherwise, push and open the changelog in the browser.
repo.git.push("-u", repo.remote().name, repo.active_branch.name)
synapse_repo.git.push(
"-u", synapse_repo.remote().name, synapse_repo.active_branch.name
)
print("Opening the changelog in your browser...")
print("Please ask others to give it a check.")
click.launch(
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
f"https://github.com/matrix-org/synapse/blob/{synapse_repo.active_branch.name}/CHANGES.md"
)
@@ -423,6 +444,79 @@ def upload() -> None:
)
def _merge_into(repo: Repo, source: str, target: str) -> None:
"""
Merges branch `source` into branch `target`.
Pulls both before merging and pushes the result.
"""
# Update our branches and switch to the target branch
for branch in [source, target]:
click.echo(f"Switching to {branch} and pulling...")
repo.heads[branch].checkout()
# Pull so we're up to date
repo.remote().pull()
assert repo.active_branch.name == target
try:
# TODO This seemed easier than using GitPython directly
click.echo(f"Merging {source}...")
repo.git.merge(source)
except GitCommandError as exc:
# If a merge conflict occurs, give some context and try to
# make it easy to abort if necessary.
click.echo(exc)
if not click.confirm(
f"Likely merge conflict whilst merging ({source}{target}). "
f"Have you resolved it?"
):
repo.git.merge("--abort")
return
# Push result.
click.echo("Pushing...")
repo.remote().push()
@cli.command()
def merge_back() -> None:
"""Merge the release branch back into the appropriate branches.
All branches will be automatically pulled from the remote and the results
will be pushed to the remote."""
synapse_repo = get_repo_and_check_clean_checkout()
branch_name = synapse_repo.active_branch.name
if not branch_name.startswith("release-v"):
raise RuntimeError("Not on a release branch. This does not seem sensible.")
# Pull so we're up to date
synapse_repo.remote().pull()
current_version = get_package_version()
if current_version.is_prerelease:
# Release candidate
if click.confirm(f"Merge {branch_name} → develop?", default=True):
_merge_into(synapse_repo, branch_name, "develop")
else:
# Full release
sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
if click.confirm(f"Merge {branch_name} → master?", default=True):
_merge_into(synapse_repo, branch_name, "master")
if click.confirm("Merge master → develop?", default=True):
_merge_into(synapse_repo, "master", "develop")
if click.confirm(f"On SyTest, merge {branch_name} → master?", default=True):
_merge_into(sytest_repo, branch_name, "master")
if click.confirm("On SyTest, merge master → develop?", default=True):
_merge_into(sytest_repo, "master", "develop")
@cli.command()
def announce() -> None:
"""Generate markdown to announce the release."""
@@ -469,14 +563,18 @@ def get_release_branch_name(version_number: version.Version) -> str:
return f"release-v{version_number.major}.{version_number.minor}"
def get_repo_and_check_clean_checkout() -> git.Repo:
def get_repo_and_check_clean_checkout(
path: str = ".", name: str = "synapse"
) -> git.Repo:
"""Get the project repo and check it's not got any uncommitted changes."""
try:
repo = git.Repo()
repo = git.Repo(path=path)
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
raise click.ClickException(
f"{path} is not a git repository (expecting a {name} repository)."
)
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
raise click.ClickException(f"Uncommitted changes exist in {path}.")
return repo

View File

@@ -166,22 +166,6 @@ IGNORED_TABLES = {
"ui_auth_sessions",
"ui_auth_sessions_credentials",
"ui_auth_sessions_ips",
# Groups/communities is no longer supported.
"group_attestations_remote",
"group_attestations_renewals",
"group_invites",
"group_roles",
"group_room_categories",
"group_rooms",
"group_summary_roles",
"group_summary_room_categories",
"group_summary_rooms",
"group_summary_users",
"group_users",
"groups",
"local_group_membership",
"local_group_updates",
"remote_profile_cache",
}

View File

@@ -26,11 +26,17 @@ from synapse.api.errors import (
Codes,
InvalidClientTokenError,
MissingClientTokenError,
UnstableSpecAuthError,
)
from synapse.appservice import ApplicationService
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.logging.opentracing import (
active_span,
force_tracing,
start_active_span,
trace,
)
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import Requester, UserID, create_requester
@@ -106,8 +112,11 @@ class Auth:
forgot = await self.store.did_forget(user_id, room_id)
if not forgot:
return membership, member_event_id
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
raise UnstableSpecAuthError(
403,
"User %s not in room %s" % (user_id, room_id),
errcode=Codes.NOT_JOINED,
)
async def get_user_by_req(
self,
@@ -563,6 +572,7 @@ class Auth:
return query_params[0].decode("ascii")
@trace
async def check_user_in_room_or_world_readable(
self, room_id: str, user_id: str, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
@@ -600,8 +610,9 @@ class Auth:
== HistoryVisibility.WORLD_READABLE
):
return Membership.JOIN, None
raise AuthError(
raise UnstableSpecAuthError(
403,
"User %s not in room %s, and room previews are disabled"
% (user_id, room_id),
errcode=Codes.NOT_JOINED,
)

View File

@@ -268,4 +268,4 @@ class PublicRoomsFilterFields:
"""
GENERIC_SEARCH_TERM: Final = "generic_search_term"
ROOM_TYPES: Final = "org.matrix.msc3827.room_types"
ROOM_TYPES: Final = "room_types"

View File

@@ -26,6 +26,7 @@ from twisted.web import http
from synapse.util import json_decoder
if typing.TYPE_CHECKING:
from synapse.config.homeserver import HomeServerConfig
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -80,6 +81,12 @@ class Codes(str, Enum):
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
# Part of MSC3848
# https://github.com/matrix-org/matrix-spec-proposals/pull/3848
ALREADY_JOINED = "ORG.MATRIX.MSC3848.ALREADY_JOINED"
NOT_JOINED = "ORG.MATRIX.MSC3848.NOT_JOINED"
INSUFFICIENT_POWER = "ORG.MATRIX.MSC3848.INSUFFICIENT_POWER"
# The account has been suspended on the server.
# By opposition to `USER_DEACTIVATED`, this is a reversible measure
# that can possibly be appealed and reverted.
@@ -167,7 +174,7 @@ class SynapseError(CodeMessageException):
else:
self._additional_fields = dict(additional_fields)
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, **self._additional_fields)
@@ -213,7 +220,7 @@ class ConsentNotGivenError(SynapseError):
)
self._consent_uri = consent_uri
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)
@@ -307,6 +314,37 @@ class AuthError(SynapseError):
super().__init__(code, msg, errcode, additional_fields)
class UnstableSpecAuthError(AuthError):
"""An error raised when a new error code is being proposed to replace a previous one.
This error will return a "org.matrix.unstable.errcode" property with the new error code,
with the previous error code still being defined in the "errcode" property.
This error will include `org.matrix.msc3848.unstable.errcode` in the C-S error body.
"""
def __init__(
self,
code: int,
msg: str,
errcode: str,
previous_errcode: str = Codes.FORBIDDEN,
additional_fields: Optional[dict] = None,
):
self.previous_errcode = previous_errcode
super().__init__(code, msg, errcode, additional_fields)
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
fields = {}
if config is not None and config.experimental.msc3848_enabled:
fields["org.matrix.msc3848.unstable.errcode"] = self.errcode
return cs_error(
self.msg,
self.previous_errcode,
**fields,
**self._additional_fields,
)
class InvalidClientCredentialsError(SynapseError):
"""An error raised when there was a problem with the authorisation credentials
in a client request.
@@ -338,8 +376,8 @@ class InvalidClientTokenError(InvalidClientCredentialsError):
super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
self._soft_logout = soft_logout
def error_dict(self) -> "JsonDict":
d = super().error_dict()
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
d = super().error_dict(config)
d["soft_logout"] = self._soft_logout
return d
@@ -362,7 +400,7 @@ class ResourceLimitError(SynapseError):
self.limit_type = limit_type
super().__init__(code, msg, errcode=errcode)
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(
self.msg,
self.errcode,
@@ -397,7 +435,7 @@ class InvalidCaptchaError(SynapseError):
super().__init__(code, msg, errcode)
self.error_url = error_url
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, error_url=self.error_url)
@@ -414,7 +452,7 @@ class LimitExceededError(SynapseError):
super().__init__(code, msg, errcode)
self.retry_after_ms = retry_after_ms
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, retry_after_ms=self.retry_after_ms)
@@ -429,7 +467,7 @@ class RoomKeysVersionError(SynapseError):
super().__init__(403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION)
self.current_version = current_version
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, current_version=self.current_version)
@@ -469,7 +507,7 @@ class IncompatibleRoomVersionError(SynapseError):
self._room_version = room_version
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
return cs_error(self.msg, self.errcode, room_version=self._room_version)
@@ -515,7 +553,7 @@ class UnredactedContentDeletedError(SynapseError):
)
self.content_keep_ms = content_keep_ms
def error_dict(self) -> "JsonDict":
def error_dict(self, config: Optional["HomeServerConfig"]) -> "JsonDict":
extra = {}
if self.content_keep_ms is not None:
extra = {"fi.mau.msc2815.content_keep_ms": self.content_keep_ms}

View File

@@ -84,6 +84,7 @@ ROOM_EVENT_FILTER_SCHEMA = {
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
"unread_thread_notifications": {"type": "boolean"},
# Include or exclude events with the provided labels.
# cf https://github.com/matrix-org/matrix-doc/pull/2326
"org.matrix.labels": {"type": "array", "items": {"type": "string"}},
@@ -240,6 +241,9 @@ class FilterCollection:
def include_redundant_members(self) -> bool:
return self._room_state_filter.include_redundant_members
def unread_thread_notifications(self) -> bool:
return self._room_timeline_filter.unread_thread_notifications
async def filter_presence(
self, events: Iterable[UserPresenceState]
) -> List[UserPresenceState]:
@@ -304,6 +308,9 @@ class Filter:
self.include_redundant_members = filter_json.get(
"include_redundant_members", False
)
self.unread_thread_notifications = filter_json.get(
"unread_thread_notifications", False
)
self.types = filter_json.get("types", None)
self.not_types = filter_json.get("not_types", [])
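
Since the new flag is read from the room *timeline* filter (`self._room_timeline_filter.unread_thread_notifications`), a client would presumably opt in via its sync filter roughly as below (illustrative only; MSC3773 defines the exact shape):

```py
import json

# Hypothetical sync filter opting into unread thread notifications on the
# room timeline, matching the schema field added above.
sync_filter = {
    "room": {
        "timeline": {
            "limit": 10,
            "unread_thread_notifications": True,
        },
    },
}
print(json.dumps(sync_filter, indent=2))
```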

View File

@@ -17,7 +17,7 @@ from collections import OrderedDict
from typing import Hashable, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.config.ratelimiting import RateLimitConfig
from synapse.config.ratelimiting import RatelimitSettings
from synapse.storage.databases.main import DataStore
from synapse.types import Requester
from synapse.util import Clock
@@ -27,6 +27,33 @@ class Ratelimiter:
"""
Ratelimit actions marked by arbitrary keys.
(Note that the source code speaks of "actions" and "burst_count" rather than
"tokens" and a "bucket_size".)
This is a "leaky bucket as a meter". For each key to be tracked there is a bucket
containing some number 0 <= T <= `burst_count` of tokens corresponding to previously
permitted requests for that key. Each bucket starts empty, and gradually leaks
tokens at a rate of `rate_hz`.
Upon an incoming request, we must determine:
- the key that this request falls under (which bucket to inspect), and
- the cost C of this request in tokens.
Then, if there is room in the bucket for C tokens (T + C <= `burst_count`),
the request is permitted and `cost` tokens are added to the bucket.
Otherwise the request is denied, and the bucket continues to hold T tokens.
This means that the limiter enforces an average request frequency of `rate_hz`,
while accumulating a buffer of up to `burst_count` requests which can be consumed
instantaneously.
The tricky bit is the leaking. We do not want to have a periodic process which
leaks every bucket! Instead, we track
- the time point when the bucket was last completely empty, and
- how many tokens have been added to the bucket by permitted requests since then.
Then for each incoming request, we can calculate how many tokens have leaked
since this time point, and use that to decide if we should accept or reject the
request.
Args:
clock: A homeserver clock, for retrieving the current time
rate_hz: The long term number of actions that can be performed in a second.
@@ -41,14 +68,30 @@ class Ratelimiter:
self.burst_count = burst_count
self.store = store
# A ordered dictionary keeping track of actions, when they were last
# performed and how often. Each entry is a mapping from a key of arbitrary type
# to a tuple representing:
# * How many times an action has occurred since a point in time
# * The point in time
# * The rate_hz of this particular entry. This can vary per request
# An ordered dictionary representing the token buckets tracked by this rate
# limiter. Each entry maps a key of arbitrary type to a tuple representing:
# * The number of tokens currently in the bucket,
# * The time point when the bucket was last completely empty, and
# * The rate_hz (leak rate) of this particular bucket.
self.actions: OrderedDict[Hashable, Tuple[float, float, float]] = OrderedDict()
def _get_key(
self, requester: Optional[Requester], key: Optional[Hashable]
) -> Hashable:
"""Use the requester's MXID as a fallback key if no key is provided."""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
return key
def _get_action_counts(
self, key: Hashable, time_now_s: float
) -> Tuple[float, float, float]:
"""Retrieve the action counts, with a fallback representing an empty bucket."""
return self.actions.get(key, (0.0, time_now_s, 0.0))
async def can_do_action(
self,
requester: Optional[Requester],
@@ -88,11 +131,7 @@ class Ratelimiter:
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
key = self._get_key(requester, key)
if requester:
# Disable rate limiting of users belonging to any AS that is configured
@@ -121,7 +160,7 @@ class Ratelimiter:
self._prune_message_counts(time_now_s)
# Check if there is an existing count entry for this key
action_count, time_start, _ = self.actions.get(key, (0.0, time_now_s, 0.0))
action_count, time_start, _ = self._get_action_counts(key, time_now_s)
# Check whether performing another action is allowed
time_delta = time_now_s - time_start
@@ -164,6 +203,37 @@ class Ratelimiter:
return allowed, time_allowed
def record_action(
self,
requester: Optional[Requester],
key: Optional[Hashable] = None,
n_actions: int = 1,
_time_now_s: Optional[float] = None,
) -> None:
"""Record that an action(s) took place, even if they violate the rate limit.
This is useful for tracking the frequency of events that happen across
federation which we still want to impose local rate limits on. For instance, if
we are alice.com monitoring a particular room, we cannot prevent bob.com
from joining users to that room. However, we can track the number of recent
joins in the room and refuse to serve new joins ourselves if there have been too
many in the room across both homeservers.
Args:
requester: The requester that is doing the action, if any.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
"""
key = self._get_key(requester, key)
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
action_count, time_start, rate_hz = self._get_action_counts(key, time_now_s)
self.actions[key] = (action_count + n_actions, time_start, rate_hz)
def _prune_message_counts(self, time_now_s: float) -> None:
"""Remove message count entries that have not exceeded their defined
rate_hz limit
@@ -244,8 +314,8 @@ class RequestRatelimiter:
self,
store: DataStore,
clock: Clock,
rc_message: RateLimitConfig,
rc_admin_redaction: Optional[RateLimitConfig],
rc_message: RatelimitSettings,
rc_admin_redaction: Optional[RatelimitSettings],
):
self.store = store
self.clock = clock
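
The rewritten class docstring describes the "leaky bucket as a meter" in prose: one bucket per key, tokens added on permitted requests, and the leak computed lazily from the last time the bucket was empty. A rough standalone sketch of that bookkeeping (a hypothetical class, not Synapse's `Ratelimiter`, which additionally handles per-requester overrides, `record_action`, and pruning):

```py
import time
from typing import Dict, Hashable, Tuple

# A stripped-down "leaky bucket as a meter", following the docstring above.
class LeakyBucketMeter:
    def __init__(self, rate_hz: float, burst_count: float) -> None:
        self.rate_hz = rate_hz          # tokens leaked per second
        self.burst_count = burst_count  # bucket capacity
        # key -> (tokens added since the bucket was last empty, time it was last empty)
        self.buckets: Dict[Hashable, Tuple[float, float]] = {}

    def can_do_action(self, key: Hashable, cost: float = 1.0) -> bool:
        now = time.monotonic()
        added, since = self.buckets.get(key, (0.0, now))
        # Tokens currently in the bucket = tokens added minus tokens leaked.
        current = added - (now - since) * self.rate_hz
        if current < 0:
            # The bucket has fully drained; restart tracking from now.
            added, since, current = 0.0, now, 0.0
        if current + cost > self.burst_count:
            return False  # no room: reject, and do not add tokens
        self.buckets[key] = (added + cost, since)
        return True


limiter = LeakyBucketMeter(rate_hz=0.2, burst_count=3)
print([limiter.can_do_action("@alice:example.org") for _ in range(5)])
# -> [True, True, True, False, False]
```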

View File

@@ -84,6 +84,8 @@ class RoomVersion:
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
# knocks and restricted join rules into the same join condition.
msc3787_knock_restricted_join_rule: bool
# MSC3667: Enforce integer power levels
msc3667_int_only_power_levels: bool
class RoomVersions:
@@ -103,6 +105,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V2 = RoomVersion(
"2",
@@ -120,6 +123,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V3 = RoomVersion(
"3",
@@ -137,6 +141,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V4 = RoomVersion(
"4",
@@ -154,6 +159,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V5 = RoomVersion(
"5",
@@ -171,6 +177,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V6 = RoomVersion(
"6",
@@ -188,6 +195,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -205,6 +213,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V7 = RoomVersion(
"7",
@@ -222,6 +231,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V8 = RoomVersion(
"8",
@@ -239,6 +249,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V9 = RoomVersion(
"9",
@@ -256,6 +267,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC2716v3 = RoomVersion(
"org.matrix.msc2716v3",
@@ -273,6 +285,7 @@ class RoomVersions:
msc2716_historical=True,
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
@@ -290,6 +303,25 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=False,
)
V10 = RoomVersion(
"10",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
)
@@ -308,6 +340,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V9,
RoomVersions.MSC2716v3,
RoomVersions.MSC3787,
RoomVersions.V10,
)
}
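
Room version 10 is the first stable version to set both `msc3787_knock_restricted_join_rule` and the new `msc3667_int_only_power_levels` flag. A trimmed-down illustration of carrying such capability flags per version (a hypothetical dataclass, not Synapse's `RoomVersion`):

```py
from dataclasses import dataclass

# Hypothetical, pared-down mirror of the two capability flags touched above.
@dataclass(frozen=True)
class RoomCaps:
    identifier: str
    knock_restricted_join_rule: bool  # MSC3787
    int_only_power_levels: bool       # MSC3667

KNOWN = {
    "9": RoomCaps("9", knock_restricted_join_rule=False, int_only_power_levels=False),
    "10": RoomCaps("10", knock_restricted_join_rule=True, int_only_power_levels=True),
}

def enforce_int_power_levels(room_version: str) -> bool:
    # Event auth consults a flag like this before rejecting stringy power levels.
    return KNOWN[room_version].int_only_power_levels

print(enforce_int_power_levels("9"), enforce_int_power_levels("10"))  # False True
```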

View File

@@ -28,19 +28,22 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import EventBase
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.server import HomeServer
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.types import StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
@@ -49,16 +52,17 @@ logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
BaseSlavedStore,
TagsWorkerStore,
DeviceInboxWorkerStore,
AccountDataWorkerStore,
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
RegistrationWorkerStore,
ReceiptsWorkerStore,
RoomWorkerStore,
):
def __init__(

View File

@@ -48,20 +48,12 @@ from synapse.http.site import SynapseRequest, SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client import (
account_data,
@@ -100,8 +92,15 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
from synapse.rest.well_known import well_known_resource
from synapse.server import HomeServer
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
from synapse.storage.databases.main.e2e_room_keys import EndToEndRoomKeyStore
from synapse.storage.databases.main.lock import LockStore
from synapse.storage.databases.main.media_repository import MediaRepositoryStore
@@ -110,11 +109,15 @@ from synapse.storage.databases.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.profile import ProfileWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
@@ -227,11 +230,11 @@ class GenericWorkerSlavedStore(
UIAuthWorkerStore,
EndToEndRoomKeyStore,
PresenceStore,
SlavedDeviceInboxStore,
DeviceInboxWorkerStore,
SlavedDeviceStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedAccountDataStore,
TagsWorkerStore,
AccountDataWorkerStore,
SlavedPusherStore,
CensorEventsStore,
ClientIpWorkerStore,
@@ -239,19 +242,20 @@ class GenericWorkerSlavedStore(
SlavedKeyStore,
RoomWorkerStore,
RoomBatchStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedProfileStore,
DirectoryWorkerStore,
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
ProfileWorkerStore,
SlavedFilteringStore,
MonthlyActiveUsersWorkerStore,
MediaRepositoryStore,
ServerMetricsStore,
ReceiptsWorkerStore,
RegistrationWorkerStore,
SearchStore,
TransactionWorkerStore,
LockStore,
SessionStore,
BaseSlavedStore,
):
# Properties that multiple storage classes define. Tell mypy what the
# expected type is.

View File

@@ -53,6 +53,18 @@ sent_events_counter = Counter(
"synapse_appservice_api_sent_events", "Number of events sent to the AS", ["service"]
)
sent_ephemeral_counter = Counter(
"synapse_appservice_api_sent_ephemeral",
"Number of ephemeral events sent to the AS",
["service"],
)
sent_todevice_counter = Counter(
"synapse_appservice_api_sent_todevice",
"Number of todevice messages sent to the AS",
["service"],
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -310,6 +322,8 @@ class ApplicationServiceApi(SimpleHttpClient):
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(serialized_events))
sent_ephemeral_counter.labels(service.id).inc(len(ephemeral))
sent_todevice_counter.labels(service.id).inc(len(to_device_messages))
return True
except CodeMessageException as e:
logger.warning(

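The two new metrics follow the same labelled-counter pattern as the existing `sent_events_counter`: one counter per metric, labelled by application service ID, incremented by the batch size of each transaction. A standalone `prometheus_client` example of that pattern (hypothetical metric name, not one Synapse actually exports):

```py
from prometheus_client import Counter

# Standalone illustration of the labelled-counter pattern used above.
sent_ephemeral = Counter(
    "example_appservice_sent_ephemeral",
    "Number of ephemeral events sent to the AS",
    ["service"],
)

# Per-transaction bookkeeping: bump the counter by the batch size for this AS.
ephemeral_events = [{"type": "m.typing"}, {"type": "m.receipt"}]
sent_ephemeral.labels("my_bridge").inc(len(ephemeral_events))
```
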
View File

@@ -86,14 +86,19 @@ class EmailConfig(Config):
if email_config is None:
email_config = {}
self.force_tls = email_config.get("force_tls", False)
self.email_smtp_host = email_config.get("smtp_host", "localhost")
self.email_smtp_port = email_config.get("smtp_port", 25)
self.email_smtp_port = email_config.get(
"smtp_port", 465 if self.force_tls else 25
)
self.email_smtp_user = email_config.get("smtp_user", None)
self.email_smtp_pass = email_config.get("smtp_pass", None)
self.require_transport_security = email_config.get(
"require_transport_security", False
)
self.enable_smtp_tls = email_config.get("enable_tls", True)
if self.force_tls and not self.enable_smtp_tls:
raise ConfigError("email.force_tls requires email.enable_tls to be true")
if self.require_transport_security and not self.enable_smtp_tls:
raise ConfigError(
"email.require_transport_security requires email.enable_tls to be true"
@@ -143,8 +148,7 @@ class EmailConfig(Config):
if config.get("trust_identity_server_for_password_resets"):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
'The config option "trust_identity_server_for_password_resets" has been removed.'
"Please consult the configuration manual at docs/usage/configuration/config_documentation.md for "
"details and update your config file."
)
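
The SMTP changes introduce a `force_tls` option (implicit TLS on connect) and make the default port depend on it: 465 when `force_tls` is set, 25 otherwise, with a config error if `force_tls` is combined with `enable_tls: false`. A small sketch of that resolution logic (a hypothetical helper mirroring the hunk above):

```py
from typing import Tuple

# Hypothetical helper mirroring the new SMTP defaults above.
def resolve_smtp_settings(email_config: dict) -> Tuple[str, int]:
    force_tls = email_config.get("force_tls", False)
    enable_tls = email_config.get("enable_tls", True)
    if force_tls and not enable_tls:
        raise ValueError("email.force_tls requires email.enable_tls to be true")
    host = email_config.get("smtp_host", "localhost")
    # Implicit TLS conventionally uses port 465; plain/STARTTLS defaults to 25.
    port = email_config.get("smtp_port", 465 if force_tls else 25)
    return host, port

print(resolve_smtp_settings({"force_tls": True}))  # ('localhost', 465)
print(resolve_smtp_settings({}))                   # ('localhost', 25)
```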

View File

@@ -82,11 +82,18 @@ class ExperimentalConfig(Config):
# MSC3786 (Add a default push rule to ignore m.room.server_acl events)
self.msc3786_enabled: bool = experimental.get("msc3786_enabled", False)
# MSC3771: Thread read receipts
self.msc3771_enabled: bool = experimental.get("msc3771_enabled", False)
# MSC3772: A push rule for mutual relations.
self.msc3772_enabled: bool = experimental.get("msc3772_enabled", False)
# MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
# MSC3715: dir param on /relations.
self.msc3715_enabled: bool = experimental.get("msc3715_enabled", False)
# MSC3827: Filtering of /publicRooms by room type
self.msc3827_enabled: bool = experimental.get("msc3827_enabled", False)
# MSC3848: Introduce errcodes for specific event sending failures
self.msc3848_enabled: bool = experimental.get("msc3848_enabled", False)
# MSC3856: Threads list API
self.msc3856_enabled: bool = experimental.get("msc3856_enabled", False)

View File

@@ -21,7 +21,7 @@ from synapse.types import JsonDict
from ._base import Config
class RateLimitConfig:
class RatelimitSettings:
def __init__(
self,
config: Dict[str, float],
@@ -34,7 +34,7 @@ class RateLimitConfig:
@attr.s(auto_attribs=True)
class FederationRateLimitConfig:
class FederationRatelimitSettings:
window_size: int = 1000
sleep_limit: int = 10
sleep_delay: int = 500
@@ -50,11 +50,11 @@ class RatelimitConfig(Config):
# Load the new-style messages config if it exists. Otherwise fall back
# to the old method.
if "rc_message" in config:
self.rc_message = RateLimitConfig(
self.rc_message = RatelimitSettings(
config["rc_message"], defaults={"per_second": 0.2, "burst_count": 10.0}
)
else:
self.rc_message = RateLimitConfig(
self.rc_message = RatelimitSettings(
{
"per_second": config.get("rc_messages_per_second", 0.2),
"burst_count": config.get("rc_message_burst_count", 10.0),
@@ -64,9 +64,9 @@ class RatelimitConfig(Config):
# Load the new-style federation config, if it exists. Otherwise, fall
# back to the old method.
if "rc_federation" in config:
self.rc_federation = FederationRateLimitConfig(**config["rc_federation"])
self.rc_federation = FederationRatelimitSettings(**config["rc_federation"])
else:
self.rc_federation = FederationRateLimitConfig(
self.rc_federation = FederationRatelimitSettings(
**{
k: v
for k, v in {
@@ -80,17 +80,17 @@ class RatelimitConfig(Config):
}
)
self.rc_registration = RateLimitConfig(config.get("rc_registration", {}))
self.rc_registration = RatelimitSettings(config.get("rc_registration", {}))
self.rc_registration_token_validity = RateLimitConfig(
self.rc_registration_token_validity = RatelimitSettings(
config.get("rc_registration_token_validity", {}),
defaults={"per_second": 0.1, "burst_count": 5},
)
rc_login_config = config.get("rc_login", {})
self.rc_login_address = RateLimitConfig(rc_login_config.get("address", {}))
self.rc_login_account = RateLimitConfig(rc_login_config.get("account", {}))
self.rc_login_failed_attempts = RateLimitConfig(
self.rc_login_address = RatelimitSettings(rc_login_config.get("address", {}))
self.rc_login_account = RatelimitSettings(rc_login_config.get("account", {}))
self.rc_login_failed_attempts = RatelimitSettings(
rc_login_config.get("failed_attempts", {})
)
@@ -101,47 +101,54 @@ class RatelimitConfig(Config):
rc_admin_redaction = config.get("rc_admin_redaction")
self.rc_admin_redaction = None
if rc_admin_redaction:
self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)
self.rc_admin_redaction = RatelimitSettings(rc_admin_redaction)
self.rc_joins_local = RateLimitConfig(
self.rc_joins_local = RatelimitSettings(
config.get("rc_joins", {}).get("local", {}),
defaults={"per_second": 0.1, "burst_count": 10},
)
self.rc_joins_remote = RateLimitConfig(
self.rc_joins_remote = RatelimitSettings(
config.get("rc_joins", {}).get("remote", {}),
defaults={"per_second": 0.01, "burst_count": 10},
)
# Track the rate of joins to a given room. If there are too many, temporarily
# prevent local joins and remote joins via this server.
self.rc_joins_per_room = RatelimitSettings(
config.get("rc_joins_per_room", {}),
defaults={"per_second": 1, "burst_count": 10},
)
# Ratelimit cross-user key requests:
# * For local requests this is keyed by the sending device.
# * For requests received over federation this is keyed by the origin.
#
# Note that this isn't exposed in the configuration as it is obscure.
self.rc_key_requests = RateLimitConfig(
self.rc_key_requests = RatelimitSettings(
config.get("rc_key_requests", {}),
defaults={"per_second": 20, "burst_count": 100},
)
self.rc_3pid_validation = RateLimitConfig(
self.rc_3pid_validation = RatelimitSettings(
config.get("rc_3pid_validation") or {},
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_room = RateLimitConfig(
self.rc_invites_per_room = RatelimitSettings(
config.get("rc_invites", {}).get("per_room", {}),
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_invites_per_user = RateLimitConfig(
self.rc_invites_per_user = RatelimitSettings(
config.get("rc_invites", {}).get("per_user", {}),
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_issuer = RateLimitConfig(
self.rc_invites_per_issuer = RatelimitSettings(
config.get("rc_invites", {}).get("per_issuer", {}),
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_third_party_invite = RateLimitConfig(
self.rc_third_party_invite = RatelimitSettings(
config.get("rc_third_party_invite", {}),
defaults={
"per_second": self.rc_message.per_second,

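Besides the `RateLimitConfig` → `RatelimitSettings` rename, this hunk adds `rc_joins_per_room`, a per-room join limit that also covers joins arriving over federation. A sketch of how such a settings object resolves its defaults (a hypothetical stand-in, not the real `RatelimitSettings`):

```py
from typing import Dict

# Hypothetical stand-in for RatelimitSettings: per_second / burst_count are read
# from the config dict, falling back to the supplied defaults.
class Settings:
    def __init__(self, config: Dict[str, float], defaults: Dict[str, float]):
        self.per_second = config.get("per_second", defaults["per_second"])
        self.burst_count = config.get("burst_count", defaults["burst_count"])

# The new per-room join limit above, with its stated defaults, when the
# homeserver config does not override it.
homeserver_config: Dict[str, Dict[str, float]] = {}
rc_joins_per_room = Settings(
    homeserver_config.get("rc_joins_per_room", {}),
    defaults={"per_second": 1, "burst_count": 10},
)
print(rc_joins_per_room.per_second, rc_joins_per_room.burst_count)  # 1 10
```
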
View File

@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
from typing import Any, Optional
from synapse.api.constants import RoomCreationPreset
@@ -20,6 +21,17 @@ from synapse.config._base import Config, ConfigError
from synapse.types import JsonDict, RoomAlias, UserID
from synapse.util.stringutils import random_string_with_symbols, strtobool
logger = logging.getLogger(__name__)
LEGACY_EMAIL_DELEGATE_WARNING = """\
Delegation of email verification to an identity server is now deprecated. To
continue to allow users to add email addresses to their accounts, and use them for
password resets, configure Synapse with an SMTP server via the `email` setting, and
remove `account_threepid_delegates.email`.
This will be an error in a future version.
"""
class RegistrationConfig(Config):
section = "registration"
@@ -51,6 +63,9 @@ class RegistrationConfig(Config):
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
account_threepid_delegates = config.get("account_threepid_delegates") or {}
if "email" in account_threepid_delegates:
logger.warning(LEGACY_EMAIL_DELEGATE_WARNING)
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
self.default_identity_server = config.get("default_identity_server")

View File

@@ -42,6 +42,18 @@ THUMBNAIL_SIZE_YAML = """\
# method: %(method)s
"""
# A map from the given media type to the type of thumbnail we should generate
# for it.
THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP = {
"image/jpeg": "jpeg",
"image/jpg": "jpeg",
"image/webp": "jpeg",
# Thumbnails can only be jpeg or png. We choose png thumbnails for gif
# because it can have transparency.
"image/gif": "png",
"image/png": "png",
}
HTTP_PROXY_SET_WARNING = """\
The Synapse config url_preview_ip_range_blacklist will be ignored as an HTTP(s) proxy is configured."""
@@ -79,13 +91,22 @@ def parse_thumbnail_requirements(
width = size["width"]
height = size["height"]
method = size["method"]
jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg")
png_thumbnail = ThumbnailRequirement(width, height, method, "image/png")
requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail)
requirements.setdefault("image/jpg", []).append(jpeg_thumbnail)
requirements.setdefault("image/webp", []).append(jpeg_thumbnail)
requirements.setdefault("image/gif", []).append(png_thumbnail)
requirements.setdefault("image/png", []).append(png_thumbnail)
for format, thumbnail_format in THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP.items():
requirement = requirements.setdefault(format, [])
if thumbnail_format == "jpeg":
requirement.append(
ThumbnailRequirement(width, height, method, "image/jpeg")
)
elif thumbnail_format == "png":
requirement.append(
ThumbnailRequirement(width, height, method, "image/png")
)
else:
raise Exception(
"Unknown thumbnail mapping from %s to %s. This is a Synapse problem, please report!"
% (format, thumbnail_format)
)
return {
media_type: tuple(thumbnails) for media_type, thumbnails in requirements.items()
}
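
The refactor replaces five hard-coded `setdefault` calls with a table, `THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP`, that decides whether each source media type gets JPEG or PNG thumbnails. A self-contained sketch of the same table-driven loop (plain tuples stand in for `ThumbnailRequirement`):

```py
from typing import Dict, List, Tuple

# Same media-type table as above; png is kept for gif/png to preserve transparency.
THUMBNAIL_MAP = {
    "image/jpeg": "jpeg",
    "image/jpg": "jpeg",
    "image/webp": "jpeg",
    "image/gif": "png",
    "image/png": "png",
}

def requirements_for(sizes: List[dict]) -> Dict[str, List[Tuple[int, int, str, str]]]:
    reqs: Dict[str, List[Tuple[int, int, str, str]]] = {}
    for size in sizes:
        for media_type, fmt in THUMBNAIL_MAP.items():
            thumb_type = "image/jpeg" if fmt == "jpeg" else "image/png"
            reqs.setdefault(media_type, []).append(
                (size["width"], size["height"], size["method"], thumb_type)
            )
    return reqs

print(requirements_for([{"width": 32, "height": 32, "method": "crop"}])["image/gif"])
# [(32, 32, 'crop', 'image/png')]
```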

View File

@@ -30,7 +30,13 @@ from synapse.api.constants import (
JoinRules,
Membership,
)
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.api.errors import (
AuthError,
Codes,
EventSizeError,
SynapseError,
UnstableSpecAuthError,
)
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
@@ -291,7 +297,11 @@ def check_state_dependent_auth_rules(
invite_level = get_named_level(auth_dict, "invite", 0)
if user_level < invite_level:
raise AuthError(403, "You don't have permission to invite users")
raise UnstableSpecAuthError(
403,
"You don't have permission to invite users",
errcode=Codes.INSUFFICIENT_POWER,
)
else:
logger.debug("Allowing! %s", event)
return
@@ -474,7 +484,11 @@ def _is_membership_change_allowed(
return
if not caller_in_room: # caller isn't joined
raise AuthError(403, "%s not in room %s." % (event.user_id, event.room_id))
raise UnstableSpecAuthError(
403,
"%s not in room %s." % (event.user_id, event.room_id),
errcode=Codes.NOT_JOINED,
)
if Membership.INVITE == membership:
# TODO (erikj): We should probably handle this more intelligently
@@ -484,10 +498,18 @@ def _is_membership_change_allowed(
if target_banned:
raise AuthError(403, "%s is banned from the room" % (target_user_id,))
elif target_in_room: # the target is already in the room.
raise AuthError(403, "%s is already in the room." % target_user_id)
raise UnstableSpecAuthError(
403,
"%s is already in the room." % target_user_id,
errcode=Codes.ALREADY_JOINED,
)
else:
if user_level < invite_level:
raise AuthError(403, "You don't have permission to invite users")
raise UnstableSpecAuthError(
403,
"You don't have permission to invite users",
errcode=Codes.INSUFFICIENT_POWER,
)
elif Membership.JOIN == membership:
# Joins are valid iff caller == target and:
# * They are not banned.
@@ -549,15 +571,27 @@ def _is_membership_change_allowed(
elif Membership.LEAVE == membership:
# TODO (erikj): Implement kicks.
if target_banned and user_level < ban_level:
raise AuthError(403, "You cannot unban user %s." % (target_user_id,))
raise UnstableSpecAuthError(
403,
"You cannot unban user %s." % (target_user_id,),
errcode=Codes.INSUFFICIENT_POWER,
)
elif target_user_id != event.user_id:
kick_level = get_named_level(auth_events, "kick", 50)
if user_level < kick_level or user_level <= target_level:
raise AuthError(403, "You cannot kick user %s." % target_user_id)
raise UnstableSpecAuthError(
403,
"You cannot kick user %s." % target_user_id,
errcode=Codes.INSUFFICIENT_POWER,
)
elif Membership.BAN == membership:
if user_level < ban_level or user_level <= target_level:
raise AuthError(403, "You don't have permission to ban")
raise UnstableSpecAuthError(
403,
"You don't have permission to ban",
errcode=Codes.INSUFFICIENT_POWER,
)
elif room_version.msc2403_knocking and Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK and (
not room_version.msc3787_knock_restricted_join_rule
@@ -567,7 +601,11 @@ def _is_membership_change_allowed(
elif target_user_id != event.user_id:
raise AuthError(403, "You cannot knock for other users")
elif target_in_room:
raise AuthError(403, "You cannot knock on a room you are already in")
raise UnstableSpecAuthError(
403,
"You cannot knock on a room you are already in",
errcode=Codes.ALREADY_JOINED,
)
elif caller_invited:
raise AuthError(403, "You are already invited to this room")
elif target_banned:
@@ -638,10 +676,11 @@ def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> b
user_level = get_user_power_level(event.user_id, auth_events)
if user_level < send_level:
raise AuthError(
raise UnstableSpecAuthError(
403,
"You don't have permission to post that to the room. "
+ "user_level (%d) < send_level (%d)" % (user_level, send_level),
errcode=Codes.INSUFFICIENT_POWER,
)
# Check state_key
@@ -716,9 +755,10 @@ def check_historical(
historical_level = get_named_level(auth_events, "historical", 100)
if user_level < historical_level:
raise AuthError(
raise UnstableSpecAuthError(
403,
'You don\'t have permission to send send historical related events ("insertion", "batch", and "marker")',
errcode=Codes.INSUFFICIENT_POWER,
)
@@ -740,6 +780,32 @@ def _check_power_levels(
except Exception:
raise SynapseError(400, "Not a valid power level: %s" % (v,))
# Reject events with stringy power levels if required by room version
if (
event.type == EventTypes.PowerLevels
and room_version_obj.msc3667_int_only_power_levels
):
for k, v in event.content.items():
if k in {
"users_default",
"events_default",
"state_default",
"ban",
"redact",
"kick",
"invite",
}:
if not isinstance(v, int):
raise SynapseError(400, f"{v!r} must be an integer.")
if k in {"events", "notifications", "users"}:
if not isinstance(v, dict) or not all(
isinstance(v, int) for v in v.values()
):
raise SynapseError(
400,
f"{v!r} must be a dict wherein all the values are integers.",
)
key = (event.type, event.state_key)
current_state = auth_events.get(key)
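
The final hunk implements the MSC3667 room-version check: when `msc3667_int_only_power_levels` is set (room version 10), stringy power levels are rejected, both for the flat fields and for the nested `events`/`notifications`/`users` maps. A standalone sketch of that validation (raising `ValueError` where Synapse raises a 400 `SynapseError`):

```py
# Sketch of the MSC3667 check added above: power-level values must be integers.
FLAT_KEYS = {"users_default", "events_default", "state_default",
             "ban", "redact", "kick", "invite"}
MAP_KEYS = {"events", "notifications", "users"}

def validate_power_levels(content: dict) -> None:
    for key, value in content.items():
        if key in FLAT_KEYS and not isinstance(value, int):
            raise ValueError(f"{value!r} must be an integer.")
        if key in MAP_KEYS:
            if not isinstance(value, dict) or not all(
                isinstance(v, int) for v in value.values()
            ):
                raise ValueError(
                    f"{value!r} must be a dict wherein all the values are integers."
                )

validate_power_levels({"ban": 50, "users": {"@alice:example.org": 100}})  # ok
try:
    validate_power_levels({"ban": "50"})  # rejected: stringy power level
except ValueError as e:
    print(e)  # '50' must be an integer.
```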

View File

@@ -24,9 +24,11 @@ from synapse.api.room_versions import (
RoomVersion,
)
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
from synapse.state import StateHandler
from synapse.storage.databases.main import DataStore
from synapse.storage.state import StateFilter
from synapse.types import EventID, JsonDict
from synapse.util import Clock
from synapse.util.stringutils import random_string
@@ -120,8 +122,12 @@ class EventBuilder:
The signed and hashed event.
"""
if auth_event_ids is None:
state_ids = await self._state.get_current_state_ids(
self.room_id, prev_event_ids
state_ids = await self._state.compute_state_after_events(
self.room_id,
prev_event_ids,
state_filter=StateFilter.from_types(
auth_types_for_event(self.room_version, self)
),
)
auth_event_ids = self._event_auth_handler.compute_auth_events(
self, state_ids

View File

@@ -11,11 +11,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING, List, Optional, Tuple, Union
from typing import TYPE_CHECKING, List, Optional, Tuple
import attr
from frozendict import frozendict
from typing_extensions import Literal
from synapse.appservice import ApplicationService
from synapse.events import EventBase
@@ -33,7 +32,7 @@ class EventContext:
Holds information relevant to persisting an event
Attributes:
rejected: A rejection reason if the event was rejected, else False
rejected: A rejection reason if the event was rejected, else None
_state_group: The ID of the state group for this event. Note that state events
are persisted with a state group which includes the new event, so this is
@@ -85,7 +84,7 @@ class EventContext:
"""
_storage: "StorageControllers"
rejected: Union[Literal[False], str] = False
rejected: Optional[str] = None
_state_group: Optional[int] = None
state_group_before_event: Optional[int] = None
_state_delta_due_to_event: Optional[StateMap[str]] = None

View File

@@ -53,7 +53,7 @@ from synapse.api.room_versions import (
RoomVersion,
RoomVersions,
)
from synapse.events import EventBase, builder
from synapse.events import EventBase, builder, make_event_from_dict
from synapse.federation.federation_base import (
FederationBase,
InvalidEventSignatureError,
@@ -61,6 +61,7 @@ from synapse.federation.federation_base import (
)
from synapse.federation.transport.client import SendJoinResponse
from synapse.http.types import QueryParams
from synapse.logging.opentracing import trace
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@@ -217,7 +218,7 @@ class FederationClient(FederationBase):
)
async def claim_client_keys(
self, destination: str, content: JsonDict, timeout: int
self, destination: str, content: JsonDict, timeout: Optional[int]
) -> JsonDict:
"""Claims one-time keys for a device hosted on a remote server.
@@ -233,6 +234,7 @@ class FederationClient(FederationBase):
destination, content, timeout
)
@trace
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> Optional[List[EventBase]]:
@@ -299,7 +301,8 @@ class FederationClient(FederationBase):
moving to the next destination. None indicates no timeout.
Returns:
The requested PDU, or None if we were unable to find it.
A copy of the requested PDU that is safe to modify, or None if we
were unable to find it.
Raises:
SynapseError, NotRetryingDestination, FederationDeniedError
@@ -309,7 +312,7 @@ class FederationClient(FederationBase):
)
logger.debug(
"retrieved event id %s from %s: %r",
"get_pdu_from_destination_raw: retrieved event id %s from %s: %r",
event_id,
destination,
transaction_data,
@@ -358,54 +361,92 @@ class FederationClient(FederationBase):
The requested PDU, or None if we were unable to find it.
"""
logger.debug(
"get_pdu: event_id=%s from destinations=%s", event_id, destinations
)
# TODO: Rate limit the number of times we try and get the same event.
ev = self._get_pdu_cache.get(event_id)
if ev:
return ev
# We might need the same event multiple times in quick succession (before
# it gets persisted to the database), so we cache the results of the lookup.
# Note that this is separate to the regular get_event cache which caches
# events once they have been persisted.
event = self._get_pdu_cache.get(event_id)
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
# If we don't see the event in the cache, go try to fetch it from the
# provided remote federated destinations
if not event:
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
signed_pdu = None
for destination in destinations:
now = self._clock.time_msec()
last_attempt = pdu_attempts.get(destination, 0)
if last_attempt + PDU_RETRY_TIME_MS > now:
continue
for destination in destinations:
now = self._clock.time_msec()
last_attempt = pdu_attempts.get(destination, 0)
if last_attempt + PDU_RETRY_TIME_MS > now:
logger.debug(
"get_pdu: skipping destination=%s because we tried it recently last_attempt=%s and we only check every %s (now=%s)",
destination,
last_attempt,
PDU_RETRY_TIME_MS,
now,
)
continue
try:
signed_pdu = await self.get_pdu_from_destination_raw(
destination=destination,
event_id=event_id,
room_version=room_version,
timeout=timeout,
)
try:
event = await self.get_pdu_from_destination_raw(
destination=destination,
event_id=event_id,
room_version=room_version,
timeout=timeout,
)
pdu_attempts[destination] = now
pdu_attempts[destination] = now
except SynapseError as e:
logger.info(
"Failed to get PDU %s from %s because %s", event_id, destination, e
)
continue
except NotRetryingDestination as e:
logger.info(str(e))
continue
except FederationDeniedError as e:
logger.info(str(e))
continue
except Exception as e:
pdu_attempts[destination] = now
if event:
# Prime the cache
self._get_pdu_cache[event.event_id] = event
logger.info(
"Failed to get PDU %s from %s because %s", event_id, destination, e
)
continue
# Now that we have an event, we can break out of this
# loop and stop asking other destinations.
break
if signed_pdu:
self._get_pdu_cache[event_id] = signed_pdu
except SynapseError as e:
logger.info(
"Failed to get PDU %s from %s because %s",
event_id,
destination,
e,
)
continue
except NotRetryingDestination as e:
logger.info(str(e))
continue
except FederationDeniedError as e:
logger.info(str(e))
continue
except Exception as e:
pdu_attempts[destination] = now
return signed_pdu
logger.info(
"Failed to get PDU %s from %s because %s",
event_id,
destination,
e,
)
continue
if not event:
return None
# `event` now refers to an object stored in `get_pdu_cache`. Our
# callers may need to modify the returned object (eg to set
# `event.internal_metadata.outlier = true`), so we return a copy
# rather than the original object.
event_copy = make_event_from_dict(
event.get_pdu_json(),
event.room_version,
)
return event_copy
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
@@ -686,6 +727,12 @@ class FederationClient(FederationBase):
if failover_errcodes is None:
failover_errcodes = ()
if not destinations:
# Give a bit of a clearer message if no servers were specified at all.
raise SynapseError(
502, f"Failed to {description} via any server: No servers specified."
)
for destination in destinations:
if destination == self.server_name:
continue
@@ -735,7 +782,7 @@ class FederationClient(FederationBase):
"Failed to %s via %s", description, destination, exc_info=True
)
raise SynapseError(502, "Failed to %s via any server" % (description,))
raise SynapseError(502, f"Failed to {description} via any server")
async def make_membership_event(
self,

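The `get_pdu` rework caches fetched events and, as the new comment explains, returns a *copy* so that callers can set flags such as `outlier` without corrupting the cached entry (Synapse rebuilds the event with `make_event_from_dict`). A generic illustration of that copy-on-return pattern (hypothetical cache class, plain dicts instead of `EventBase`):

```py
import copy
from typing import Dict, Optional

# Illustration (hypothetical cache, not FederationClient) of copy-on-return:
# callers may mutate the result, so never hand out the cached object itself.
class PduCache:
    def __init__(self) -> None:
        self._cache: Dict[str, dict] = {}

    def put(self, event_id: str, pdu_json: dict) -> None:
        self._cache[event_id] = pdu_json

    def get(self, event_id: str) -> Optional[dict]:
        cached = self._cache.get(event_id)
        if cached is None:
            return None
        # Deep-copy so callers can set e.g. an "outlier" flag without
        # corrupting the shared cached entry.
        return copy.deepcopy(cached)


cache = PduCache()
cache.put("$abc", {"type": "m.room.message", "unsigned": {}})
pdu = cache.get("$abc")
pdu["unsigned"]["outlier"] = True
print(cache.get("$abc")["unsigned"])  # {} — the cached copy is untouched
```
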
View File

@@ -118,6 +118,7 @@ class FederationServer(FederationBase):
self._federation_event_handler = hs.get_federation_event_handler()
self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self._room_member_handler = hs.get_room_member_handler()
self._state_storage_controller = hs.get_storage_controllers().state
@@ -468,7 +469,7 @@ class FederationServer(FederationBase):
)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
pdu_results[event_id] = e.error_dict(self.hs.config)
return
for pdu in pdus_by_room[room_id]:
@@ -621,6 +622,15 @@ class FederationServer(FederationBase):
)
raise IncompatibleRoomVersionError(room_version=room_version)
# Refuse the request if that room has seen too many joins recently.
# This is in addition to the HS-level rate limiting applied by
# BaseFederationServlet.
# type-ignore: mypy doesn't seem able to deduce the type of the limiter(!?)
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type]
requester=None,
key=room_id,
update=False,
)
pdu = await self.handler.on_make_join_request(origin, room_id, user_id)
return {"event": pdu.get_templated_pdu_json(), "room_version": room_version}
@@ -655,6 +665,12 @@ class FederationServer(FederationBase):
room_id: str,
caller_supports_partial_state: bool = False,
) -> Dict[str, Any]:
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type]
requester=None,
key=room_id,
update=False,
)
event, context = await self._on_send_membership_event(
origin, content, Membership.JOIN, room_id
)
@@ -827,8 +843,25 @@ class FederationServer(FederationBase):
Codes.BAD_JSON,
)
# Note that get_room_version throws if the room does not exist here.
room_version = await self.store.get_room_version(room_id)
if await self.store.is_partial_state_room(room_id):
# If our server is still only partially joined, we can't give a complete
# response to /send_join, /send_knock or /send_leave.
# This is because we will not be able to provide the server list (for partial
# joins) or the full state (for full joins).
# Return a 404 as we would if we weren't in the room at all.
logger.info(
f"Rejecting /send_{membership_type} to %s because it's a partial state room",
room_id,
)
raise SynapseError(
404,
f"Unable to handle /send_{membership_type} right now; this server is not fully joined.",
errcode=Codes.NOT_FOUND,
)
if membership_type == Membership.KNOCK and not room_version.msc2403_knocking:
raise SynapseError(
403,

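Two guards are added to the federation join path: a per-room join rate limit checked with `update=False` (a peek that does not consume the quota), and a refusal to serve `/send_join`, `/send_knock` or `/send_leave` while the room is only partially joined, surfaced as a 404. A rough sketch of the partial-state guard (hypothetical helper; Synapse raises `SynapseError(404, ..., errcode=Codes.NOT_FOUND)`):

```py
# Hypothetical guard mirroring the partial-state check above; LookupError stands
# in for Synapse's SynapseError(404, ..., errcode=M_NOT_FOUND).
def reject_if_partial_state(room_id: str, membership_type: str,
                            is_partial_state: bool) -> None:
    if is_partial_state:
        # We cannot yet provide the full state (or server list) for this room,
        # so behave as if we were not in the room at all.
        raise LookupError(
            f"Unable to handle /send_{membership_type} for {room_id} right now; "
            "this server is not fully joined."
        )

try:
    reject_if_partial_state("!room:example.org", "join", is_partial_state=True)
except LookupError as e:
    print(e)
```
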
Some files were not shown because too many files have changed in this diff.