
Compare commits

...

67 Commits

Author SHA1 Message Date
David Robertson a037c2ed43 Run generate-sample-config
Just trying to ensure we don't forget about this PR
2021-10-14 17:40:04 +01:00
Azrenbeth fd1d3e1fb3 Run linters 2021-09-28 17:40:15 +01:00
Azrenbeth 1a50b18994 Update name to 'synapse_auto_compressor' 2021-09-28 17:16:23 +01:00
Erik Johnston 584c670802 Make the looping call wait until the previous run has finished 2021-09-28 17:10:24 +01:00
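A minimal sketch of the Twisted behaviour this relies on (the compressor body here is a stand-in): if the function passed to `LoopingCall` returns a `Deferred`, the next run is not scheduled until that `Deferred` fires, so runs cannot overlap.

```python
# Sketch only: LoopingCall waits for the returned Deferred before rescheduling.
from twisted.internet import defer, reactor, task

@defer.inlineCallbacks
def run_compressor():
    print("compressor run starting")
    # Stand-in for the real work; LoopingCall waits for this Deferred to fire.
    yield task.deferLater(reactor, 5, lambda: None)
    print("compressor run finished")

loop = task.LoopingCall(run_compressor)
loop.start(60)  # at most one run in flight, even if a run takes longer than 60s
reactor.run()  # runs until interrupted
```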
Erik Johnston 61c5650058 Fix connecting to postgres when config has no host 2021-09-28 17:01:01 +01:00
Azrenbeth db6cc8f35b Merge remote-tracking branch 'origin/develop' into azren/compressor_integration 2021-09-28 16:15:58 +01:00
Richard van der Hoff 8aaa4b7b5d Drop backwards-compatibility support for "outlier" (#10903)
Before Synapse 1.31 (#9411), we relied on `outlier` being stored in the
`internal_metadata` column. We can now assume nobody will roll back their
deployment that far and drop the legacy support.
2021-09-28 15:25:36 +01:00
Richard van der Hoff 2622b28c5c Inline _check_event_auth for outliers (#10926)
* Inline `_check_event_auth` for outliers

When we are persisting an outlier, most of `_check_event_auth` is redundant:

 * `_update_auth_events_and_context_for_auth` does nothing, because the
   `input_auth_events` are (now) exactly the event's auth_events,
   which means that `missing_auth` is empty.

 * we don't care about soft-fail, kicking guest users or `send_on_behalf_of`
   for outliers

... so the only thing that matters is the auth itself, so let's just do that.

* `_auth_and_persist_fetched_events_inner`: de-async `prep`

`prep` no longer calls any `async` methods, so let's make it synchronous.

* Simplify `_check_event_auth`

We no longer need to support outliers here, which makes things rather simpler.

* changelog

* lint
2021-09-28 15:25:07 +01:00
Patrick Cloke eb2c7e51c4 Clean-up type hints in server config (#10915)
By using attrs instead of dicts to store configuration.

Also updates some of the attrs classes to use proper type
hints and auto_attribs.
2021-09-28 09:24:40 -04:00
Azrenbeth d6b511e669 Tidy up documentation a bit 2021-09-28 13:50:57 +01:00
Patrick Cloke c3ccad7785 Only do restricted join rules signature checks for room versions 8/9. (#10927)
Otherwise the presence of a (bogus, unused) field could cause
auth checks to fail.
2021-09-28 08:44:19 -04:00
Erik Johnston a8bbf08576 Fix debian package builds. (#10931)
This was due to dh-virtualenv builds being broken by Sphinx removing
deprecated APIs.
2021-09-28 12:13:51 +01:00
Erik Johnston 707d5e4e48 Encode JSON responses on a thread in C, mk2 (#10905)
Currently we use `JsonEncoder.iterencode` to write JSON responses, which ensures that we don't block the main reactor thread when encoding huge objects. The downside to this is that `iterencode` falls back to using a pure Python encoder that is *much* less efficient and can easily burn a lot of CPU for huge responses. To fix this, while still ensuring we don't block the reactor loop, we encode the JSON on a threadpool using the standard `JsonEncoder.encode` functions, which is backed by a C library.

Doing so, however, requires `respond_with_json` to have access to the reactor, which it previously didn't. There are two ways of doing this:

1. threading through the reactor object, which is a bit fiddly as e.g. `DirectServeJsonResource` doesn't currently take a reactor, but is exposed to modules and so is a PITA to change; or
2. expose the reactor in `SynapseRequest`, which requires updating a bunch of servlet types.

I went with the latter as that is just a mechanical change, and I think makes sense as a request already has a reactor associated with it (via its http channel).
2021-09-28 09:37:58 +00:00
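A minimal sketch of the approach (names such as `encode_response` are illustrative, not Synapse's exact code): hand the C-backed `JSONEncoder.encode` to a reactor threadpool so the main thread is never blocked by a huge payload.

```python
import json

from twisted.internet import reactor
from twisted.internet.threads import deferToThreadPool

# JSONEncoder.encode uses the C accelerator when available, unlike iterencode,
# which falls back to the much slower pure-Python encoder.
json_encoder = json.JSONEncoder()

async def encode_response(request, payload: dict) -> None:
    # Encode on a threadpool so the reactor thread stays free.
    encoded = await deferToThreadPool(
        reactor, reactor.getThreadPool(), json_encoder.encode, payload
    )
    request.write(encoded.encode("utf-8"))
    request.finish()
```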
Azrenbeth 596e13ce74 Better search for state database 2021-09-27 16:35:13 +01:00
Azrenbeth efbc338043 Extract password from db_args 2021-09-27 16:20:06 +01:00
Erik Johnston d37841787a Sign the git tag in release script (#10925) 2021-09-27 15:39:49 +01:00
Azrenbeth 71aace8a0d Move from compressing largest rooms to compressing number of chunks 2021-09-27 15:14:36 +01:00
Azrenbeth a5819f7da9 Extract dsn parameters earlier 2021-09-27 15:09:09 +01:00
Azrenbeth 7d49d86b60 Remove accidental s at end of hs.config.worker 2021-09-27 13:05:29 +01:00
Sean Quah f7768f62cb Avoid storing URL cache files in storage providers (#10911)
URL cache files are short-lived and it does not make sense to offload
them (eg. to the cloud) or back them up.
2021-09-27 12:55:27 +01:00
Sean Quah 6c83c27107 Fix race conditions when creating media store and config directories (#10913) 2021-09-27 11:29:23 +01:00
Eric Eastwood d138187045 Document changes to schema version 61 - 64 (#10917)
As pointed out by @richvdh, https://github.com/matrix-org/synapse/pull/10838#discussion_r715424244

Retroactively summarize `61` - `64`
2021-09-24 17:09:12 -05:00
Brendan Abolivier b10257e879 Add a spamchecker callback to allow or deny room creation based on invites (#10898)
This is in the context of creating new module callbacks that modules in https://github.com/matrix-org/synapse-dinsic can use, in an effort to reconcile the spam checker API in synapse-dinsic with the one in mainline.

This adds a callback that's fairly similar to user_may_create_room except it also allows processing based on the invites sent at room creation.
2021-09-24 16:38:23 +02:00
David Robertson ea01d4c2de Update postgresql testing script (#10906)
- Use sytest:bionic. Sytest:latest is two years old (do we want
  CI to push out latest at all?) and comes with Python 3.5, which we
  explicitly no longer support. The script now runs under PostgreSQL 10
  as a result.
- Advertise script in the docs
- Move pg testing script to scripts-dev directory
- Write to host as the script's executor, not root

A few changes to make it speedier to re-run the tests:

- Create blank DB in the container, not the script, so we don't have to
  `initdb` each time
- Use a named volume to persist the tox environment, so we don't have to
  fetch and install a bunch of packages from PyPI each time

Co-authored-by: reivilibre <olivier@librepush.net>
2021-09-24 14:27:09 +00:00
Erik Johnston f1c149cb18 Use the effective connection params when connecting to postgres 2021-09-24 14:48:35 +01:00
Erik Johnston 3e5dda1a47 Add a DatabasePool.postgres_connection_info 2021-09-24 14:48:15 +01:00
Richard van der Hoff 0420d4e6a5 Stop trying to auth/persist events whose auth events we do not have. (#10907) 2021-09-24 14:01:45 +01:00
Patrick Cloke bb7fdd821b Use direct references for configuration variables (part 5). (#10897) 2021-09-24 07:25:21 -04:00
Richard van der Hoff 85551b7a85 Factor out common code for persisting fetched auth events (#10896)
* Factor more stuff out of `_get_events_and_persist`

It turns out that the event-sorting algorithm in `_get_events_and_persist` is
also useful in other circumstances. Here we move the current
`_auth_and_persist_fetched_events` to `_auth_and_persist_fetched_events_inner`,
and then factor the sorting part out to `_auth_and_persist_fetched_events`.

* `_get_remote_auth_chain_for_event`: remove redundant `outlier` assignment

`get_event_auth` returns events with the outlier flag already set, so this is
redundant (though we need to update a test where `get_event_auth` is mocked).

* `_get_remote_auth_chain_for_event`: move existing-event tests earlier

Move a couple of tests outside the loop. This is a bit inefficient for now, but
a future commit will make it better. It should be functionally identical.

* `_get_remote_auth_chain_for_event`: use `_auth_and_persist_fetched_events`

We can use the same codepath for persisting the events fetched as part of an
auth chain as for those fetched individually by `_get_events_and_persist` for
building the state at a backwards extremity.

* `_get_remote_auth_chain_for_event`: use a dict for efficiency

`_auth_and_persist_fetched_events` sorts the events itself, so we no longer
need to care about maintaining the ordering from `get_event_auth` (and no
longer need to sort by depth in `get_event_auth`).

That means that we can use a map, making it easier to filter out events we
already have, etc.

* changelog

* `_auth_and_persist_fetched_events`: improve docstring
2021-09-24 11:56:33 +01:00
Richard van der Hoff 261c9763c4 Simplify _auth_and_persist_fetched_events (#10901)
Combine the two loops over the list of events, and hence get rid of
`_NewEventInfo`. Also pass the event back alongside the context, so that it's
easier to process the result.
2021-09-24 11:56:13 +01:00
Erik Johnston 50022cff96 Add reactor to SynapseRequest and fix up types. (#10868) 2021-09-24 11:01:25 +01:00
Jason Robinson fa74536384 Fix AuthBlocking check when requester is appservice (#10881)
If the MAU count had been reached, Synapse incorrectly blocked appservice users even though they had been explicitly configured not to be tracked (the default). This was because the relevant check was chained as an `elif` behind an earlier `if` that had already matched, so it was bypassed.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2021-09-24 10:41:18 +01:00
David Robertson 7f3352743e Improve typing in user_directory files (#10891)
* Improve typing in user_directory files

This makes the user_directory.py in storage pass most of mypy's
checks (including `no-untyped-defs`). Unfortunately that file is in the
tangled web of Store class inheritance so doesn't pass mypy at the moment.

The handlers directory has already been mypyed.

Co-authored-by: reivilibre <olivier@librepush.net>
2021-09-24 10:38:22 +01:00
Kokokokoka e704cc2a48 In _purge_history_txn, ensure that txn.fetchall has elements before accessing rows (#10690)
This change adds a check for row existence before accessing a row element; this should fix issue #10669
Signed-off-by: Vasya Boytsov vasiliy.boytsov@phystech.edu
2021-09-24 09:19:51 +00:00
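A minimal, self-contained sketch of the guard (the schema and query are illustrative): check that `fetchall()` returned rows before indexing into the result.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
txn = conn.cursor()
txn.execute("CREATE TABLE events (room_id TEXT, depth INTEGER)")

txn.execute(
    "SELECT depth FROM events WHERE room_id = ? ORDER BY depth DESC",
    ("!room:example.org",),
)
rows = txn.fetchall()
if not rows:
    print("no rows returned; skipping")  # previously rows[0][0] raised IndexError
else:
    max_depth = rows[0][0]
```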
Callum Brown 90d9fc7505 Allow . and ~ chars in registration tokens (#10887)
Per updates to MSC3231 in order to use the same grammar
as other identifiers.
2021-09-23 17:58:12 +00:00
Richard van der Hoff a7304adc7d Factor out _get_remote_auth_chain_for_event from _update_auth_events_and_context_for_auth (#10884)
* Reload auth events from db after fetching and persisting

In `_update_auth_events_and_context_for_auth`, when we fetch the remote auth
tree and persist the returned events: load the missing events from the database
rather than using the copies we got from the remote server.

This is mostly in preparation for additional refactors, but does have an
advantage in that if we later get around to checking the rejected status, we'll
be able to make use of it.

* Factor out `_get_remote_auth_chain_for_event` from `_update_auth_events_and_context_for_auth`

* changelog
2021-09-23 17:34:33 +01:00
Patrick Cloke 47854c71e9 Use direct references for configuration variables (part 4). (#10893) 2021-09-23 12:03:01 -04:00
David Robertson a10988983a Break down cache expiry reasons in grafana (#10880)
A follow-up to #10829
2021-09-23 14:45:32 +01:00
David Robertson dcfd864970 Fix reactivated users not being added to the user directory (#10782)
Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
Co-authored-by: reivilibre <olivier@librepush.net>
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2021-09-23 12:02:13 +00:00
Patrick Cloke e584534403 Use direct references for some configuration variables (part 3) (#10885)
This avoids the overhead of searching through the various
configuration classes by directly referencing the class that
the attributes are in.

It also improves type hints since mypy can now resolve the
types of the configuration variables.
2021-09-23 07:13:34 -04:00
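A minimal, self-contained sketch of the pattern (the classes here are illustrative stubs, not Synapse's real config classes): the old style resolved attributes by searching every config section, while the new style references the owning section directly, which mypy can type-check.

```python
class ServerConfig:
    public_baseurl: str = "https://matrix.example.com/"

class RootConfig:
    server = ServerConfig()

    def __getattr__(self, name: str):
        # Old style: search each config section for the attribute (untyped, slower).
        for section in (self.server,):
            if hasattr(section, name):
                return getattr(section, name)
        raise AttributeError(name)

config = RootConfig()
# The indirect lookup and the direct reference resolve to the same value:
assert config.public_baseurl == config.server.public_baseurl
```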
Andrew Morgan aa2c027792 Remove unnecessary parentheses around tuples returned from methods (#10889) 2021-09-23 11:59:07 +01:00
Richard van der Hoff 26f2bfedbf Factor out a separate EventContext.for_outlier (#10883)
Constructing an EventContext for an outlier is actually really simple, and
there's no sense in going via an `async` method in the `StateHandler`.

This also means that we can resolve a bunch of FIXMEs.
2021-09-22 17:58:57 +01:00
Hillery Shay f78b68a96b Treat "\u0000" as "\u0020" for the purposes of message search (message indexing) (#10820)
* add test to check if null code points are being inserted

* add logic to detect and replace null code points before insertion into db

* lints

* add license to test

* change approach to null substitution

* add type hint for SearchEntry

* Add changelog entry

Signed-off-by: H.Shay <shaysquared@gmail.com>

* updated changelog

* update changelog message

* remove duplicate changelog

* Update synapse/storage/databases/main/events.py remove extra space

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* rename and move test file, update tests, delete old test file

* fix typo in comments

* update _find_highlights_in_postgres to replace null byte with space

* replace null byte in sqlite search insertion

* beef up and reorganize test for this pr

* update changelog

* add type hints and update docstring

* check db engine directly vs using env variable

* refactor tests to be less repetitive

* move replace logic into separate function

* requested changes

* Fix typo.

* Update synapse/storage/databases/main/search.py

Co-authored-by: reivilibre <olivier@librepush.net>

* Update changelog.d/10820.misc

Co-authored-by: Aaron Raimist <aaron@raim.ist>

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: reivilibre <olivier@librepush.net>
Co-authored-by: Aaron Raimist <aaron@raim.ist>
2021-09-22 08:25:26 -07:00
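A minimal sketch of the substitution this PR describes (the helper name is an assumption): replace null code points with spaces before the value reaches the search index.

```python
def _clean_value_for_search(value: str) -> str:
    # Postgres text columns reject \u0000; treat it as a space for indexing.
    return value.replace("\u0000", "\u0020")

assert _clean_value_for_search("hello\u0000world") == "hello world"
```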
Tulir Asokan 03db6701d5 Fix invalidating OTK count cache after claim (#10875)
The invalidation was missing in `_claim_e2e_one_time_key_returning`,
which is used on SQLite 3.24+ and Postgres. This could break e2ee if
nothing else happened to invalidate the caches before the keys ran out.

Signed-off-by: Tulir Asokan <tulir@beeper.com>
2021-09-22 15:31:05 +01:00
Richard van der Hoff 8f2a52766b Ensure we mark sent knocks as outliers (#10873) 2021-09-22 15:20:18 +01:00
Patrick Cloke 6fc8be9a1b Include more information in oEmbed previews. (#10819)
* Improved titles (fall back to the author name if there's no title) and include the site name.
* Handle photo/video payloads.
* Include the original URL in the Open Graph response.
* Fix the expiration time (by properly converting from seconds to milliseconds).
2021-09-22 09:45:20 -04:00
Sean Quah 9391de3f37 Fix /initialSync error due to unhashable RoomStreamToken (#10827)
The deprecated /initialSync endpoint maintains a cache of responses,
using parameter values as part of the cache key. When a `from` or `to`
parameter is specified, it gets converted into a `StreamToken`, which
contains a `RoomStreamToken` and forms part of the cache key.
`RoomStreamToken`s need to be made hashable for this to work.
2021-09-22 14:43:26 +01:00
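A minimal sketch of why freezing helps (field names are illustrative, not `RoomStreamToken`'s real definition): with `attrs`, `frozen=True` plus the default `eq=True` makes the class hashable, so instances can form part of a cache key.

```python
from typing import Optional, Tuple

import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class StreamPosition:
    topological: Optional[int]
    stream: int
    # A dict field would break hashing; a tuple of pairs stays hashable.
    instance_map: Tuple[Tuple[str, int], ...] = ()

cache = {}
token = StreamPosition(topological=None, stream=42)
cache[("initialSync", token)] = "cached response"  # usable as a cache key
```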
Patrick Cloke 52913d56a5 Add documentation for experimental feature flags. (#10865) 2021-09-22 13:41:42 +00:00
David Robertson 724aef9a87 Opt out of cache expiry for get_users_who_share_room_with_user (#10826)
* Allow LruCaches to opt out of time-based expiry
* Don't expire `get_users_who_share_room` & friends
2021-09-22 14:21:58 +01:00
David Teller 80828eda06 Extend ModuleApi with the methods we'll need to reject spam based on …IP - resolves #10832 (#10833)
Extend ModuleApi with the methods we'll need to reject spam based on IP - resolves #10832

Signed-off-by: David Teller <davidt@element.io>
2021-09-22 13:09:43 +00:00
Richard van der Hoff 4ecf51812e Include outlier status in str(event) for V2/V3 events (#10879)
I meant to do this before, in #10591, but because I'm stupid I forgot to do it
for V2 and V3 events.

I've factored the common code out to `EventBase` to save us having two copies
of it.

This means that for `FrozenEvent` we replace `self.get("event_id", None)` with
`self.event_id`, which I think is safe. `get()` is an alias for
`self._dict.get()`, whereas `event_id()` is an `@property` method which looks
up `self._event_id`, which is populated during construction from the same
dict. We don't seem to rely on the fallback, because if the `event_id` key is
absent from the dict then construction of the `EventBase` object will
fail.

Long story short, the only way this could change behaviour is if
`event_dict["event_id"]` is changed *after* the `EventBase` object is
constructed without updating the `_event_id` field, or vice versa - either of
which would be very problematic anyway and the behavior of `str(event)` is the
least of our worries.
2021-09-22 12:30:59 +01:00
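A minimal sketch of the relationship described above (simplified, not Synapse's actual classes): `event_id` is a property backed by a field populated at construction, so `str(event)` no longer needs `self.get("event_id", None)`.

```python
class EventBase:
    def __init__(self, event_dict: dict):
        self._dict = event_dict
        # Construction fails if "event_id" is absent, so the property cannot
        # diverge from the dict unless one is mutated afterwards.
        self._event_id: str = event_dict["event_id"]

    @property
    def event_id(self) -> str:
        return self._event_id

    def __str__(self) -> str:
        return f"<EventBase event_id={self.event_id}>"

print(EventBase({"event_id": "$abc123"}))
```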
David Robertson a2d7195e01 Track why we're evicting from caches (#10829)
So we can distinguish between "evicting because the cache is too big" and "evicting because the cache entries haven't been recently used".
2021-09-22 10:59:52 +01:00
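A minimal sketch of a labelled eviction metric (the metric and label names are assumptions based on the Grafana change later in this diff): adding a `reason` label lets the two eviction causes be graphed separately.

```python
from prometheus_client import Counter

cache_evictions = Counter(
    "synapse_util_caches_cache_evicted_size",
    "Number of cache entries evicted, by cache name and reason",
    ["name", "reason"],
)

# "size": the cache was full; "time": the entry had not been used recently.
cache_evictions.labels(name="get_users_in_room", reason="size").inc()
cache_evictions.labels(name="get_users_in_room", reason="time").inc()
```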
Eric Eastwood 51e2db3598 Rename MSC2716 things from chunk to batch to match /batch_send endpoint (#10838)
See https://github.com/matrix-org/matrix-doc/pull/2716#discussion_r684574497

Dropping support for older MSC2716 room versions so we don't have to worry about
supporting both chunk and batch events.
2021-09-21 15:06:28 -05:00
Patrick Cloke 4054dfa409 Add type hints for event streams. (#10856) 2021-09-21 13:34:26 -04:00
Erik Johnston b25a494779 Add types to http.site (#10867) 2021-09-21 16:41:27 +00:00
Patrick Cloke ebd8baf61f Clear our destination directories before copying files to GitHub pages. (#10869)
This should fix stale deleted files still being accessible.
2021-09-21 16:32:46 +00:00
Azrenbeth 8c0fe97edf Only run compressor if run_background_tasks is true 2021-09-21 13:42:21 +01:00
Azrenbeth da1f804aa0 Run linters 2021-09-20 16:50:17 +01:00
Azrenbeth ffb96458d3 Add TODO in state compressor docs for when auto_compressor docs merged 2021-09-20 16:40:22 +01:00
Azrenbeth 2e3d7f5e15 Sample config follows code style, and config is validated 2021-09-20 16:38:35 +01:00
Azrenbeth ede5974f3d No complaints if compressor config is empty 2021-09-20 16:38:34 +01:00
Azrenbeth b88026654f Added docs for state_compressor 2021-09-20 16:38:34 +01:00
Azrenbeth f84cb2c79d Moved state_compressor setup to util/state_compressor.py 2021-09-20 16:38:34 +01:00
Azrenbeth 5e32e2b12a Added handling in config for when compressor not installed 2021-09-20 16:38:34 +01:00
Azrenbeth 1b76638c2a Added config section for state compressor 2021-09-20 16:38:34 +01:00
Azrenbeth f122710716 run in background 2021-09-20 16:38:34 +01:00
Azrenbeth c0915ee998 call compress_largest_rooms every 1 minute 2021-09-20 16:38:34 +01:00
241 changed files with 2664 additions and 1113 deletions
-1
@@ -61,6 +61,5 @@ jobs:
uses: peaceiris/actions-gh-pages@068dc23d9710f1ba62e86896f84735d869951305 # v3.8.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
keep_files: true
publish_dir: ./book
destination_dir: ./${{ steps.vars.outputs.branch-version }}
+1
@@ -40,6 +40,7 @@ __pycache__/
/.coverage*
/.mypy_cache/
/.tox
/.tox-pg-container
/build/
/coverage.*
/dist/
+1
@@ -0,0 +1 @@
Fix a long-standing bug that caused an `AssertionError` when purging history in certain rooms. Contributed by @Kokokokoka.
+1
@@ -0,0 +1 @@
Fix a long-standing bug which caused deactivated users that were later reactivated to be missing from the user directory.
+1
@@ -0,0 +1 @@
Improve oEmbed previews by processing the author name, photo, and video information.
+1
@@ -0,0 +1 @@
Fix a long-standing bug where an `m.room.message` event containing a null byte would cause an internal server error.
+2
@@ -0,0 +1,2 @@
Opt out of cache expiry for `get_users_who_share_room_with_user`, to hopefully improve `/sync` performance when you
haven't synced recently.
+1
@@ -0,0 +1 @@
Fix error in deprecated `/initialSync` endpoint when using the undocumented `from` and `to` parameters.
+1
@@ -0,0 +1 @@
Track cache eviction rates more finely in Prometheus' monitoring.
+1
@@ -0,0 +1 @@
Extend the ModuleApi to let plug-ins check whether an ID is local and to access IP + User Agent data.
+1
@@ -0,0 +1 @@
Rename [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) fields and event types from `chunk` to `batch` to match the `/batch_send` endpoint.
+1
@@ -0,0 +1 @@
Add missing type hints to handlers.
+1
@@ -0,0 +1 @@
Add developer documentation about experimental configuration flags.
+1
@@ -0,0 +1 @@
Add type hints to `synapse.http.site`.
+1
@@ -0,0 +1 @@
Speed up responding with large JSON objects to requests.
+1
@@ -0,0 +1 @@
Properly remove deleted files from GitHub pages when generating the documentation.
+1
@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.37.0 which caused `knock` events which we sent to remote servers to be incorrectly stored in the local database.
+1
@@ -0,0 +1 @@
Fix invalidating one-time key count cache after claiming keys. Contributed by Tulir at Beeper.
+1
@@ -0,0 +1 @@
Include outlier status when we log V2 or V3 events.
+1
@@ -0,0 +1 @@
Break down Grafana's cache expiry time series based on reason for eviction; see #10829.
+1
@@ -0,0 +1 @@
Fix application service users being subject to MAU blocking if MAU had been reached, even if configured not to be blocked.
+1
@@ -0,0 +1 @@
Clean up some of the federation event authentication code for clarity.
+1
@@ -0,0 +1 @@
Clean up some of the federation event authentication code for clarity.
+1
@@ -0,0 +1 @@
Use direct references to config flags.
+1
@@ -0,0 +1 @@
Allow the `.` and `~` characters when creating registration tokens as per the change to [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231).
+1
@@ -0,0 +1 @@
Clean up some unnecessary parentheses in places around the codebase.
+1
@@ -0,0 +1 @@
Improve type hinting in the user directory code.
+1
@@ -0,0 +1 @@
Use direct references to config flags.
+1
@@ -0,0 +1 @@
Clean up some of the federation event authentication code for clarity.
+1
@@ -0,0 +1 @@
Use direct references to config flags.
+1
@@ -0,0 +1 @@
Add a `user_may_create_room_with_invites` spam checker callback to allow modules to allow or deny a room creation request based on the invites and/or 3PID invites it includes.
+1
@@ -0,0 +1 @@
Clean up some of the federation event authentication code for clarity.
+1
@@ -0,0 +1 @@
Drop old functionality which maintained database compatibility with Synapse versions before 1.31.
+1
@@ -0,0 +1 @@
Speed up responding with large JSON objects to requests.
+1
@@ -0,0 +1 @@
Update development testing script `test_postgresql.sh` to use a supported Python version and make re-runs quicker.
+1
@@ -0,0 +1 @@
Fix a long-standing bug which could cause events pulled over federation to be incorrectly rejected.
+1
@@ -0,0 +1 @@
Avoid storing URL cache files in storage providers. Server admins may safely delete the `url_cache/` and `url_cache_thumbnails/` directories from any configured storage providers to reclaim space.
+1
@@ -0,0 +1 @@
Fix race conditions when creating media store and config directories.
+1
@@ -0,0 +1 @@
Clean-up configuration helper classes for the `ServerConfig` class.
+1
@@ -0,0 +1 @@
Document and summarize changes in schema version `61` - `64`.
+1
@@ -0,0 +1 @@
Update release script to sign the newly created git tags.
+1
@@ -0,0 +1 @@
Clean up some of the federation event authentication code for clarity.
+1
@@ -0,0 +1 @@
Fix a bug introduced in Synapse v1.40.0 where the signature checks for room version 8/9 could be applied to earlier room versions in some situations.
+1
@@ -0,0 +1 @@
Fix Debian builds that were broken because dh-virtualenv could no longer build its docs.
+2 -2
@@ -6785,7 +6785,7 @@
"expr": "rate(synapse_util_caches_cache:evicted_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{name}} {{job}}-{{index}}",
"legendFormat": "{{name}} ({{reason}}) {{job}}-{{index}}",
"refId": "A"
}
],
@@ -10888,5 +10888,5 @@
"timezone": "",
"title": "Synapse",
"uid": "000000012",
"version": 99
"version": 100
}
+3 -2
@@ -47,8 +47,9 @@ RUN apt-get update -qq -o Acquire::Languages=none \
&& cd /dh-virtualenv \
&& env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -y --no-install-recommends"
# build it
RUN cd /dh-virtualenv && dpkg-buildpackage -us -uc -b
# Build it. Note that building the docs doesn't work due to differences in
# Sphinx APIs across versions/distros.
RUN cd /dh-virtualenv && DEB_BUILD_OPTIONS=nodoc dpkg-buildpackage -us -uc -b
###
### Stage 1
+21 -3
@@ -1,6 +1,6 @@
# Use the Sytest image that comes with a lot of the build dependencies
# pre-installed
FROM matrixdotorg/sytest:latest
FROM matrixdotorg/sytest:bionic
# The Sytest image doesn't come with python, so install that
RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
@@ -8,5 +8,23 @@ RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
# We need tox to run the tests in run_pg_tests.sh
RUN python3 -m pip install tox
ADD run_pg_tests.sh /pg_tests.sh
ENTRYPOINT /pg_tests.sh
# Initialise the db
RUN su -c '/usr/lib/postgresql/10/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="C.UTF-8" --lc-ctype="C.UTF-8" --username=postgres' postgres
# Add a user with our UID and GID so that files get created on the host owned
# by us, not root.
ARG UID
ARG GID
RUN groupadd --gid $GID user
RUN useradd --uid $UID --gid $GID --groups sudo --no-create-home user
# Ensure we can start postgres by sudo-ing as the postgres user.
RUN apt-get update && apt-get -qq install -y sudo
RUN echo "user ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
ADD run_pg_tests.sh /run_pg_tests.sh
# Use the "exec form" of ENTRYPOINT (https://docs.docker.com/engine/reference/builder/#entrypoint)
# so that we can `docker run` this container and pass arguments to pg_tests.sh
ENTRYPOINT ["/run_pg_tests.sh"]
USER user
+3 -4
@@ -10,11 +10,10 @@ set -e
# Set PGUSER so Synapse's tests know what user to connect to the database with
export PGUSER=postgres
# Initialise & start the database
su -c '/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="en_US.UTF-8" --lc-ctype="en_US.UTF-8" --username=postgres' postgres
su -c '/usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start' postgres
# Start the database
sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -w -D /var/lib/postgresql/data start
# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=/tmp -e py35-postgres
tox --workdir=./.tox-pg-container -e py36-postgres "$@"
+2
@@ -47,6 +47,7 @@
- [Workers](workers.md)
- [Using `synctl` with Workers](synctl_workers.md)
- [Systemd](systemd-with-workers/README.md)
- [State Compressor](state_compressor.md)
- [Administration](usage/administration/README.md)
- [Admin API](usage/administration/admin_api/README.md)
- [Account Validity](admin_api/account_validity.md)
@@ -74,6 +75,7 @@
- [Testing]()
- [OpenTracing](opentracing.md)
- [Database Schemas](development/database_schema.md)
- [Experimental features](development/experimental_features.md)
- [Synapse Architecture]()
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
+47
@@ -170,6 +170,53 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
### Running tests under PostgreSQL
Invoking `trial` as above will use an in-memory SQLite database. This is great for
quick development and testing. However, we recommend using a PostgreSQL database
in production (and indeed, we have some code paths specific to each database).
This means that we need to run our unit tests against PostgreSQL too. Our CI does
this automatically for pull requests and release candidates, but it's sometimes
useful to reproduce this locally.
To do so, [configure Postgres](../postgres.md) and run `trial` with the
following environment variables matching your configuration:
- `SYNAPSE_POSTGRES` to anything nonempty
- `SYNAPSE_POSTGRES_HOST`
- `SYNAPSE_POSTGRES_USER`
- `SYNAPSE_POSTGRES_PASSWORD`
For example:
```shell
export SYNAPSE_POSTGRES=1
export SYNAPSE_POSTGRES_HOST=localhost
export SYNAPSE_POSTGRES_USER=postgres
export SYNAPSE_POSTGRES_PASSWORD=mydevenvpassword
trial
```
#### Prebuilt container
Since configuring PostgreSQL can be fiddly, we can make use of a pre-made
Docker container to set up PostgreSQL and run our tests for us. To do so, run
```shell
scripts-dev/test_postgresql.sh
```
Any extra arguments to the script will be passed to `tox` and then to `trial`,
so we can run a specific test in this container with e.g.
```shell
scripts-dev/test_postgresql.sh tests.replication.test_sharded_event_persister.EventPersisterShardTestCase
```
The container creates a folder in your Synapse checkout called
`.tox-pg-container` and uses this as a tox environment. The output of any
`trial` runs goes into `_trial_temp` in your synapse source directory — the same
as running `trial` directly on your host machine.
## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).
+37
@@ -0,0 +1,37 @@
# Implementing experimental features in Synapse
It can be desirable to implement "experimental" features which are disabled by
default and must be explicitly enabled via the Synapse configuration. This is
applicable for features which:
* Are unstable in the Matrix spec (e.g. those defined by an MSC that has not yet been merged).
* Developers are not confident in their use by general Synapse administrators/users
(e.g. a feature is incomplete, buggy, performs poorly, or needs further testing).
Note that this only really applies to features which are expected to be desirable
to a broad audience. The [module infrastructure](../modules/index.md) should
instead be investigated for non-standard features.
Guarding experimental features behind configuration flags should help with some
of the following scenarios:
* Ensure that clients do not assume that unstable features exist (failing
gracefully if they do not).
* Unstable features do not become de-facto standards and can be removed
aggressively (since only those who have opted-in will be affected).
* Ease finding the implementation of unstable features in Synapse (for future
removal or stabilization).
* Ease testing a feature (or removal of feature) due to enabling/disabling without
code changes. It also becomes possible to ask for wider testing, if desired.
Experimental configuration flags should be disabled by default (requiring Synapse
administrators to explicitly opt-in), although there are situations where it makes
sense (from a product point-of-view) to enable features by default. This is
expected and not an issue.
It is not a requirement for experimental features to be behind a configuration flag,
but one should be used if unsure.
New experimental configuration flags should be added under the `experimental`
configuration key (see the `synapse.config.experimental` file) and either explain
(briefly) what is being enabled, or include the MSC number.
+29
@@ -38,6 +38,35 @@ async def user_may_create_room(user: str) -> bool
Called when processing a room creation request. The module must return a `bool` indicating
whether the given user (represented by their Matrix user ID) is allowed to create a room.
### `user_may_create_room_with_invites`
```python
async def user_may_create_room_with_invites(
user: str,
invites: List[str],
threepid_invites: List[Dict[str, str]],
) -> bool
```
Called when processing a room creation request (right after `user_may_create_room`).
The module is given the Matrix user ID of the user trying to create a room, as well as a
list of Matrix users to invite and a list of third-party identifiers (3PID, e.g. email
addresses) to invite.
A Matrix user to invite is represented by their Matrix user ID, and a 3PID invite
is represented by a dict that includes the 3PID medium (e.g. "email") under its
`medium` key and its address (e.g. "alice@example.com") under its `address` key.
See [the Matrix specification](https://matrix.org/docs/spec/appendices#pid-types) for more
information regarding third-party identifiers.
If no invite and/or 3PID invite were specified in the room creation request, the
corresponding list(s) will be empty.
**Note**: This callback is not called when a room is cloned (e.g. during a room upgrade)
since no invites are sent when cloning a room. To cover this case, modules also need to
implement `user_may_create_room`.
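A hypothetical example implementation (the policy and domain here are made up for illustration):

```python
from typing import Dict, List

async def user_may_create_room_with_invites(
    user: str,
    invites: List[str],
    threepid_invites: List[Dict[str, str]],
) -> bool:
    # Hypothetical policy: deny room creation if any email invite targets a
    # domain other than example.com.
    for invite in threepid_invites:
        if invite["medium"] == "email" and not invite["address"].endswith(
            "@example.com"
        ):
            return False
    return True
```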
### `user_may_create_room_alias`
```python
+35
@@ -2648,3 +2648,38 @@ redis:
# Optional password if configured on the Redis instance
#
#password: <secret_password>
## State compressor ##
# The state compressor is an experimental tool which attempts to
# reduce the number of rows in the state_groups_state table
# of postgres databases.
#
# For more information please see
# https://matrix-org.github.io/synapse/latest/state_compressor.html
#
state_compressor:
# Whether the state compressor should run (defaults to false)
# Uncomment to enable it - Note, this requires the 'synapse_auto_compressor'
# library to be installed
#
#enabled: true
# The (rough) number of state groups to load at one time. Defaults
# to 500.
#
#chunk_size: 1000
# The number of chunks to compress on each run. Defaults to 100.
#
#number_of_chunks: 1
# The default level sizes for the compressor to use. Defaults to
# 100,50,25.
#
#default_levels: 128,64,32
# How frequently to run the state compressor. Defaults to 1d
#
#time_between_runs: 1w
+47
@@ -0,0 +1,47 @@
# State compressor
The state compressor is an **experimental** tool that attempts to reduce the number of rows
in the `state_groups_state` table inside of a postgres database. Documentation on how it works
can be found on [its github repository](https://github.com/matrix-org/rust-synapse-compress-state).
## Enabling the state compressor
The state compressor requires the python library for the `synapse_auto_compressor` tool to be
installed. This can be done with pip, or by following the instructions in [the `python.md` file in the source
repo](https://github.com/matrix-org/rust-synapse-compress-state/blob/main/docs/python.md).
The following configuration options are provided:
- `chunk_size`
The number of state groups to work on at once. All of the entries from
`state_groups_state` are requested from the database for the state groups being
worked on, so small chunk sizes may be needed on machines with low memory.
Note: if the compressor fails to find space savings on the chunk as a whole
(which may well happen in rooms with lots of backfill) then the entire chunk
is skipped. Defaults to 500.
- `number_of_chunks`
The compressor will stop once it has finished compressing this many chunks. Defaults to 100
- `default_levels`
Sizes of each new level in the compression algorithm, as a comma-separated list.
The first entry in the list is for the lowest, most granular level, and each
subsequent entry is for the next highest level. The number of entries in the
list determines the number of levels that will be used. The sum of the sizes of
the levels affects the performance of fetching state from the database, as it
is the upper bound on the number of iterations needed to fetch a
given set of state. Defaults to "100,50,25".
- `time_between_runs`
This controls how often the state compressor is run. This defaults to once every
day.
An example configuration:
```yaml
state_compressor:
enabled: true
chunk_size: 500
number_of_chunks: 50
default_levels: 100,50,25
time_between_runs: 1d
```
+7
@@ -85,6 +85,13 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.44.0
## The URL preview cache is no longer mirrored to storage providers
The `url_cache/` and `url_cache_thumbnails/` directories in the media store are
no longer mirrored to storage providers. These two directories can be safely
deleted from any configured storage providers to reclaim space.
# Upgrading to v1.43.0
## The spaces summary APIs can now be handled by workers
+6
@@ -85,9 +85,11 @@ files =
tests/handlers/test_room_summary.py,
tests/handlers/test_send_email.py,
tests/handlers/test_sync.py,
tests/handlers/test_user_directory.py,
tests/rest/client/test_login.py,
tests/rest/client/test_auth.py,
tests/storage/test_state.py,
tests/storage/test_user_directory.py,
tests/util/test_itertools.py,
tests/util/test_stream_change_cache.py
@@ -255,3 +257,7 @@ ignore_missing_imports = True
[mypy-ijson.*]
ignore_missing_imports = True
[mypy-psycopg2.*]
ignore_missing_imports = True
+1 -1
@@ -276,7 +276,7 @@ def tag(gh_token: Optional[str]):
if click.confirm("Edit text?", default=False):
changes = click.edit(changes, require_save=False)
repo.create_tag(tag_name, message=changes)
repo.create_tag(tag_name, message=changes, sign=True)
if not click.confirm("Push tag to GitHub?", default=True):
print("")
+19
@@ -0,0 +1,19 @@
#!/usr/bin/env bash
# This script builds the Docker image to run the PostgreSQL tests, and then runs
# the tests. It uses a dedicated tox environment so that we don't have to
# rebuild it each time.
# Command line arguments to this script are forwarded to "tox" and then to "trial".
set -e
# Build, and tag
docker build docker/ \
--build-arg "UID=$(id -u)" \
--build-arg "GID=$(id -g)" \
-f docker/Dockerfile-pgtests \
-t synapsepgtests
# Run, mounting the current directory into /src
docker run --rm -it -v "$(pwd):/src" -v synapse-pg-test-tox:/tox synapsepgtests "$@"
+1 -1
@@ -81,7 +81,7 @@ class AuthBlocking:
# We never block the server from doing actions on behalf of
# users.
return
elif requester.app_service and not self._track_appservice_user_ips:
if requester.app_service and not self._track_appservice_user_ips:
# If we're authenticated as an appservice then we only block
# auth if `track_appservice_user_ips` is set, as that option
# implicitly means that application services are part of MAU
+5 -5
@@ -121,7 +121,7 @@ class EventTypes:
SpaceParent = "m.space.parent"
MSC2716_INSERTION = "org.matrix.msc2716.insertion"
MSC2716_CHUNK = "org.matrix.msc2716.chunk"
MSC2716_BATCH = "org.matrix.msc2716.batch"
MSC2716_MARKER = "org.matrix.msc2716.marker"
@@ -209,11 +209,11 @@ class EventContentFields:
# Used on normal messages to indicate they were historically imported after the fact
MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
# For "insertion" events to indicate what the next chunk ID should be in
# For "insertion" events to indicate what the next batch ID should be in
# order to connect to it
MSC2716_NEXT_CHUNK_ID = "org.matrix.msc2716.next_chunk_id"
# Used on "chunk" events to indicate which insertion event it connects to
MSC2716_CHUNK_ID = "org.matrix.msc2716.chunk_id"
MSC2716_NEXT_BATCH_ID = "org.matrix.msc2716.next_batch_id"
# Used on "batch" events to indicate which insertion event it connects to
MSC2716_BATCH_ID = "org.matrix.msc2716.batch_id"
# For "marker" events
MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"
+3 -19
@@ -244,24 +244,8 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
)
MSC2716 = RoomVersion(
"org.matrix.msc2716",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=False,
msc3375_redaction_rules=False,
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=False,
)
MSC2716v2 = RoomVersion(
"org.matrix.msc2716v2",
MSC2716v3 = RoomVersion(
"org.matrix.msc2716v3",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
@@ -289,9 +273,9 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V6,
RoomVersions.MSC2176,
RoomVersions.V7,
RoomVersions.MSC2716,
RoomVersions.V8,
RoomVersions.V9,
RoomVersions.MSC2716v3,
)
}
+2 -2
@@ -39,12 +39,12 @@ class ConsentURIBuilder:
Args:
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
if hs_config.key.form_secret is None:
raise ConfigError("form_secret not set in config")
if hs_config.server.public_baseurl is None:
raise ConfigError("public_baseurl not set in config")
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._hmac_secret = hs_config.key.form_secret.encode("utf-8")
self._public_baseurl = hs_config.server.public_baseurl
def build_user_consent_uri(self, user_id):
+10 -4
@@ -48,6 +48,7 @@ from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit
from synapse.util.state_compressor import setup_state_compressor
from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
@@ -88,8 +89,8 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
appname,
soft_file_limit=config.soft_file_limit,
gc_thresholds=config.gc_thresholds,
pid_file=config.worker_pid_file,
daemonize=config.worker_daemonize,
pid_file=config.worker.worker_pid_file,
daemonize=config.worker.worker_daemonize,
print_pidfile=config.print_pidfile,
logger=logger,
run_command=run_command,
@@ -383,6 +384,9 @@ async def start(hs: "HomeServer"):
# If we've configured an expiry time for caches, start the background job now.
setup_expire_lru_cache_entries(hs)
# Schedule the state compressor to run
setup_state_compressor(hs)
# It is now safe to start your Synapse.
hs.start_listening()
hs.get_datastore().db_pool.start_profiling()
@@ -424,12 +428,14 @@ def setup_sentry(hs):
hs (synapse.server.HomeServer)
"""
if not hs.config.sentry_enabled:
if not hs.config.metrics.sentry_enabled:
return
import sentry_sdk
sentry_sdk.init(dsn=hs.config.sentry_dsn, release=get_version_string(synapse))
sentry_sdk.init(
dsn=hs.config.metrics.sentry_dsn, release=get_version_string(synapse)
)
# We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope:
+4 -4
@@ -186,13 +186,13 @@ def start(config_options):
config.worker.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
and not config.worker_log_file
and not config.worker_log_config
not config.worker.worker_daemonize
and not config.worker.worker_log_file
and not config.worker.worker_log_config
):
# Since we're meant to be run as a "command" let's not redirect stdio
# unless we've actually set log config.
config.no_redirect_stdio = True
config.logging.no_redirect_stdio = True
# Explicitly disable background processes
config.update_user_directory = False
+5 -5
@@ -140,7 +140,7 @@ class KeyUploadServlet(RestServlet):
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
self.main_uri = hs.config.worker.worker_main_http_uri
async def on_POST(self, request: Request, device_id: Optional[str]):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
@@ -321,7 +321,7 @@ class GenericWorkerServer(HomeServer):
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
elif name == "media":
if self.config.can_load_media_repo:
if self.config.media.can_load_media_repo:
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
@@ -384,7 +384,7 @@ class GenericWorkerServer(HomeServer):
logger.info("Synapse worker now listening on port %d", port)
def start_listening(self):
for listener in self.config.worker_listeners:
for listener in self.config.worker.worker_listeners:
if listener.type == "http":
self._listen_http(listener)
elif listener.type == "manhole":
@@ -395,7 +395,7 @@ class GenericWorkerServer(HomeServer):
manhole_globals={"hs": self},
)
elif listener.type == "metrics":
if not self.config.enable_metrics:
if not self.config.metrics.enable_metrics:
logger.warning(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -488,7 +488,7 @@ def start(config_options):
register_start(_base.start, hs)
# redirect stdio to the logs, if configured.
if not hs.config.no_redirect_stdio:
if not hs.config.logging.no_redirect_stdio:
redirect_stdio_to_logs()
_base.start_worker_reactor("synapse-generic-worker", config)
+7 -7
@@ -195,7 +195,7 @@ class SynapseHomeServer(HomeServer):
}
)
if self.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
from synapse.rest.synapse.client.password_reset import (
PasswordResetSubmitTokenResource,
)
@@ -234,7 +234,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["media", "federation", "client"]:
if self.config.enable_media_repo:
if self.config.media.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
@@ -269,7 +269,7 @@ class SynapseHomeServer(HomeServer):
# https://twistedmatrix.com/trac/ticket/7678
resources[WEB_CLIENT_PREFIX] = File(webclient_loc)
if name == "metrics" and self.config.enable_metrics:
if name == "metrics" and self.config.metrics.enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
@@ -278,7 +278,7 @@ class SynapseHomeServer(HomeServer):
return resources
def start_listening(self):
if self.config.redis_enabled:
if self.config.redis.redis_enabled:
# If redis is enabled we connect via the replication command handler
# in the same way as the workers (since we're effectively a client
# rather than a server).
@@ -305,7 +305,7 @@ class SynapseHomeServer(HomeServer):
for s in services:
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
elif listener.type == "metrics":
if not self.config.enable_metrics:
if not self.config.metrics.enable_metrics:
logger.warning(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -366,7 +366,7 @@ def setup(config_options):
async def start():
# Load the OIDC provider metadatas, if OIDC is enabled.
if hs.config.oidc_enabled:
if hs.config.oidc.oidc_enabled:
oidc = hs.get_oidc_handler()
# Loading the provider metadata also ensures the provider config is valid.
await oidc.load_metadata()
@@ -455,7 +455,7 @@ def main():
hs = setup(sys.argv[1:])
# redirect stdio to the logs, if configured.
if not hs.config.no_redirect_stdio:
if not hs.config.logging.no_redirect_stdio:
redirect_stdio_to_logs()
run(hs)
+5 -3
@@ -131,10 +131,12 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
log_level = synapse_logger.getEffectiveLevel()
stats["log_level"] = logging.getLevelName(log_level)
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
logger.info(
"Reporting stats to %s: %s" % (hs.config.metrics.report_stats_endpoint, stats)
)
try:
await hs.get_proxied_http_client().put_json(
hs.config.report_stats_endpoint, stats
hs.config.metrics.report_stats_endpoint, stats
)
except Exception as e:
logger.warning("Error reporting stats: %s", e)
@@ -188,7 +190,7 @@ def start_phone_stats_home(hs):
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
# End of monthly active user settings
if hs.config.report_stats:
if hs.config.metrics.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000, hs, stats)
+2 -7
@@ -200,11 +200,7 @@ class Config:
@classmethod
def ensure_directory(cls, dir_path):
dir_path = cls.abspath(dir_path)
try:
os.makedirs(dir_path)
except OSError as e:
if e.errno != errno.EEXIST:
raise
os.makedirs(dir_path, exist_ok=True)
if not os.path.isdir(dir_path):
raise ConfigError("%s is not a directory" % (dir_path,))
return dir_path
@@ -693,8 +689,7 @@ class RootConfig:
open_private_ports=config_args.open_private_ports,
)
if not path_exists(config_dir_path):
os.makedirs(config_dir_path)
os.makedirs(config_dir_path, exist_ok=True)
with open(config_path, "w") as config_file:
config_file.write(config_str)
config_file.write("\n\n# vim:ft=yaml")
+2
@@ -32,6 +32,7 @@ from synapse.config import (
server_notices,
spam_checker,
sso,
state_compressor,
stats,
third_party_event_rules,
tls,
@@ -91,6 +92,7 @@ class RootConfig:
modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig
statecompressor: state_compressor.StateCompressorConfig
config_classes: List = ...
def __init__(self) -> None: ...
+6 -3
@@ -13,6 +13,7 @@
# limitations under the License.
from os import path
from typing import Optional
from synapse.config import ConfigError
@@ -78,8 +79,8 @@ class ConsentConfig(Config):
def __init__(self, *args):
super().__init__(*args)
self.user_consent_version = None
self.user_consent_template_dir = None
self.user_consent_version: Optional[str] = None
self.user_consent_template_dir: Optional[str] = None
self.user_consent_server_notice_content = None
self.user_consent_server_notice_to_guests = False
self.block_events_without_consent_error = None
@@ -94,7 +95,9 @@ class ConsentConfig(Config):
return
self.user_consent_version = str(consent_config["version"])
self.user_consent_template_dir = self.abspath(consent_config["template_dir"])
if not path.isdir(self.user_consent_template_dir):
if not isinstance(self.user_consent_template_dir, str) or not path.isdir(
self.user_consent_template_dir
):
raise ConfigError(
"Could not find template directory '%s'"
% (self.user_consent_template_dir,)
+2
@@ -45,6 +45,7 @@ from .server import ServerConfig
from .server_notices import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .sso import SSOConfig
from .state_compressor import StateCompressorConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
@@ -97,4 +98,5 @@ class HomeServerConfig(RootConfig):
WorkerConfig,
RedisConfig,
ExperimentalConfig,
StateCompressorConfig,
]
+3 -1
@@ -322,7 +322,9 @@ def setup_logging(
"""
log_config_path = (
config.worker_log_config if use_worker_options else config.log_config
config.worker.worker_log_config
if use_worker_options
else config.logging.log_config
)
# Perform one-time logging configuration.
+50 -52
@@ -19,7 +19,7 @@ import logging
import os.path
import re
from textwrap import indent
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union
import attr
import yaml
@@ -184,49 +184,74 @@ KNOWN_RESOURCES = {
@attr.s(frozen=True)
class HttpResourceConfig:
names = attr.ib(
type=List[str],
names: List[str] = attr.ib(
factory=list,
validator=attr.validators.deep_iterable(attr.validators.in_(KNOWN_RESOURCES)), # type: ignore
)
compress = attr.ib(
type=bool,
compress: bool = attr.ib(
default=False,
validator=attr.validators.optional(attr.validators.instance_of(bool)), # type: ignore[arg-type]
)
@attr.s(frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class HttpListenerConfig:
"""Object describing the http-specific parts of the config of a listener"""
x_forwarded = attr.ib(type=bool, default=False)
resources = attr.ib(type=List[HttpResourceConfig], factory=list)
additional_resources = attr.ib(type=Dict[str, dict], factory=dict)
tag = attr.ib(type=str, default=None)
x_forwarded: bool = False
resources: List[HttpResourceConfig] = attr.ib(factory=list)
additional_resources: Dict[str, dict] = attr.ib(factory=dict)
tag: Optional[str] = None
@attr.s(frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ListenerConfig:
"""Object describing the configuration of a single listener."""
port = attr.ib(type=int, validator=attr.validators.instance_of(int))
bind_addresses = attr.ib(type=List[str])
type = attr.ib(type=str, validator=attr.validators.in_(KNOWN_LISTENER_TYPES))
tls = attr.ib(type=bool, default=False)
port: int = attr.ib(validator=attr.validators.instance_of(int))
bind_addresses: List[str]
type: str = attr.ib(validator=attr.validators.in_(KNOWN_LISTENER_TYPES))
tls: bool = False
# http_options is only populated if type=http
http_options = attr.ib(type=Optional[HttpListenerConfig], default=None)
http_options: Optional[HttpListenerConfig] = None
@attr.s(frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class ManholeConfig:
"""Object describing the configuration of the manhole"""
username = attr.ib(type=str, validator=attr.validators.instance_of(str))
password = attr.ib(type=str, validator=attr.validators.instance_of(str))
priv_key = attr.ib(type=Optional[Key])
pub_key = attr.ib(type=Optional[Key])
username: str = attr.ib(validator=attr.validators.instance_of(str))
password: str = attr.ib(validator=attr.validators.instance_of(str))
priv_key: Optional[Key]
pub_key: Optional[Key]
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RetentionConfig:
"""Object describing the configuration of the manhole"""
interval: int
shortest_max_lifetime: Optional[int]
longest_max_lifetime: Optional[int]
@attr.s(frozen=True)
class LimitRemoteRoomsConfig:
enabled: bool = attr.ib(validator=attr.validators.instance_of(bool), default=False)
complexity: Union[float, int] = attr.ib(
validator=attr.validators.instance_of(
(float, int) # type: ignore[arg-type] # noqa
),
default=1.0,
)
complexity_error: str = attr.ib(
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
admins_can_join: bool = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
class ServerConfig(Config):
@@ -519,7 +544,7 @@ class ServerConfig(Config):
" greater than 'allowed_lifetime_max'"
)
self.retention_purge_jobs: List[Dict[str, Optional[int]]] = []
self.retention_purge_jobs: List[RetentionConfig] = []
for purge_job_config in retention_config.get("purge_jobs", []):
interval_config = purge_job_config.get("interval")
@@ -553,20 +578,12 @@ class ServerConfig(Config):
)
self.retention_purge_jobs.append(
{
"interval": interval,
"shortest_max_lifetime": shortest_max_lifetime,
"longest_max_lifetime": longest_max_lifetime,
}
RetentionConfig(interval, shortest_max_lifetime, longest_max_lifetime)
)
if not self.retention_purge_jobs:
self.retention_purge_jobs = [
{
"interval": self.parse_duration("1d"),
"shortest_max_lifetime": None,
"longest_max_lifetime": None,
}
RetentionConfig(self.parse_duration("1d"), None, None)
]
self.listeners = [parse_listener_def(x) for x in config.get("listeners", [])]
@@ -591,25 +608,6 @@ class ServerConfig(Config):
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
self.gc_seconds = self.read_gc_intervals(config.get("gc_min_interval", None))
@attr.s
class LimitRemoteRoomsConfig:
enabled = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
complexity = attr.ib(
validator=attr.validators.instance_of(
(float, int) # type: ignore[arg-type] # noqa
),
default=1.0,
)
complexity_error = attr.ib(
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
admins_can_join = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
self.limit_remote_rooms = LimitRemoteRoomsConfig(
**(config.get("limit_remote_rooms") or {})
)
@@ -1447,7 +1445,7 @@ def read_gc_thresholds(thresholds):
return None
try:
assert len(thresholds) == 3
return (int(thresholds[0]), int(thresholds[1]), int(thresholds[2]))
return int(thresholds[0]), int(thresholds[1]), int(thresholds[2])
except Exception:
raise ConfigError(
"Value of `gc_threshold` must be a list of three integers if set"
+96
@@ -0,0 +1,96 @@
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.config._base import Config, ConfigError
from synapse.config._util import validate_config
from synapse.python_dependencies import DependencyException, check_requirements
class StateCompressorConfig(Config):
section = "statecompressor"
def read_config(self, config, **kwargs):
compressor_config = config.get("state_compressor") or {}
validate_config(
_STATE_COMPRESSOR_SCHEMA, compressor_config, ("state_compressor",)
)
self.compressor_enabled = compressor_config.get("enabled") or False
if not self.compressor_enabled:
return
try:
check_requirements("synapse_auto_compressor")
except DependencyException as e:
raise ConfigError(e.message) from e
self.compressor_chunk_size = compressor_config.get("chunk_size") or 500
self.compressor_number_of_chunks = (
compressor_config.get("number_of_chunks") or 100
)
self.compressor_default_levels = (
compressor_config.get("default_levels") or "100,50,25"
)
self.time_between_compressor_runs = self.parse_duration(
compressor_config.get("time_between_runs") or "1d"
)
def generate_config_section(self, **kwargs):
return """\
## State compressor ##
# The state compressor is an experimental tool which attempts to
# reduce the number of rows in the state_groups_state table
# of postgres databases.
#
# For more information please see
# https://matrix-org.github.io/synapse/latest/state_compressor.html
#
state_compressor:
# Whether the state compressor should run (defaults to false)
# Uncomment to enable it. Note that this requires the
# 'synapse_auto_compressor' library to be installed.
#
#enabled: true
# The (rough) number of state groups to load at one time. Defaults
# to 500.
#
#chunk_size: 1000
# The number of chunks to compress on each run. Defaults to 100.
#
#number_of_chunks: 1
# The default level sizes for the compressor to use. Defaults to
# 100,50,25.
#
#default_levels: 128,64,32
# How frequently to run the state compressor. Defaults to 1d.
#
#time_between_runs: 1w
"""
_STATE_COMPRESSOR_SCHEMA = {
"type": "object",
"properties": {
"enabled": {"type": "boolean"},
"chunk_size": {"type": "number"},
"number_of_chunks": {"type": "number"},
"default_levels": {"type": "string"},
"time_between_runs": {"type": "string"},
},
}
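A hedged illustration of what this schema accepts; validate_config delegates to jsonschema, so plain jsonschema.validate shows the same behaviour:
import jsonschema

jsonschema.validate({"enabled": True, "chunk_size": 500}, _STATE_COMPRESSOR_SCHEMA)  # passes
jsonschema.validate({"time_between_runs": 86400}, _STATE_COMPRESSOR_SCHEMA)  # ValidationError: 86400 is not of type 'string'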
+2 -2
View File
@@ -74,8 +74,8 @@ class ServerContextFactory(ContextFactory):
context.set_options(
SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3 | SSL.OP_NO_TLSv1 | SSL.OP_NO_TLSv1_1
)
context.use_certificate_chain_file(config.tls_certificate_file)
context.use_privatekey(config.tls_private_key)
context.use_certificate_chain_file(config.tls.tls_certificate_file)
context.use_privatekey(config.tls.tls_private_key)
# https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
context.set_cipher_list(
+6 -5
View File
@@ -113,7 +113,8 @@ def check(
raise AuthError(403, "Event not signed by sending server")
is_invite_via_allow_rule = (
event.type == EventTypes.Member
room_version_obj.msc3083_join_rules
and event.type == EventTypes.Member
and event.membership == Membership.JOIN
and "join_authorised_via_users_server" in event.content
)
@@ -213,7 +214,7 @@ def check(
if (
event.type == EventTypes.MSC2716_INSERTION
or event.type == EventTypes.MSC2716_CHUNK
or event.type == EventTypes.MSC2716_BATCH
or event.type == EventTypes.MSC2716_MARKER
):
check_historical(room_version_obj, event, auth_events)
@@ -552,14 +553,14 @@ def check_historical(
auth_events: StateMap[EventBase],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "chunk", and "marker".
events like "insertion", "batch", and "marker".
Returns:
None
Raises:
AuthError if the event sender is not allowed to send historical related events
("insertion", "chunk", and "marker").
("insertion", "batch", and "marker").
"""
# Ignore the auth checks in room versions that do not support historical
# events
@@ -573,7 +574,7 @@ def check_historical(
if user_level < historical_level:
raise AuthError(
403,
'You don\'t have permission to send historical related events ("insertion", "chunk", and "marker")',
'You don\'t have permission to send historical related events ("insertion", "batch", and "marker")',
)
+12 -22
View File
@@ -344,6 +344,18 @@ class EventBase(metaclass=abc.ABCMeta):
# this will be a no-op if the event dict is already frozen.
self._dict = freeze(self._dict)
def __str__(self):
return self.__repr__()
def __repr__(self):
return "<%s event_id=%r, type=%r, state_key=%r, outlier=%s>" % (
self.__class__.__name__,
self.event_id,
self.get("type", None),
self.get("state_key", None),
self.internal_metadata.is_outlier(),
)
class FrozenEvent(EventBase):
format_version = EventFormatVersions.V1 # All events of this type are V1
@@ -392,17 +404,6 @@ class FrozenEvent(EventBase):
def event_id(self) -> str:
return self._event_id
def __str__(self):
return self.__repr__()
def __repr__(self):
return "<FrozenEvent event_id=%r, type=%r, state_key=%r, outlier=%s>" % (
self.get("event_id", None),
self.get("type", None),
self.get("state_key", None),
self.internal_metadata.is_outlier(),
)
class FrozenEventV2(EventBase):
format_version = EventFormatVersions.V2 # All events of this type are V2
@@ -478,17 +479,6 @@ class FrozenEventV2(EventBase):
"""
return self.auth_events
def __str__(self):
return self.__repr__()
def __repr__(self):
return "<%s event_id=%r, type=%r, state_key=%r>" % (
self.__class__.__name__,
self.event_id,
self.get("type", None),
self.get("state_key", None),
)
class FrozenEventV3(FrozenEventV2):
"""FrozenEventV3, which differs from FrozenEventV2 only in the event_id format"""
+10 -4
View File
@@ -80,9 +80,7 @@ class EventContext:
(type, state_key) -> event_id
FIXME: what is this for an outlier? it seems ill-defined. It seems like
it could be either {}, or the state we were given by the remote
server, depending on $THINGS
For an outlier, this is {}
Note that this is a private attribute: it should be accessed via
``get_current_state_ids``. _AsyncEventContext impl calculates this
@@ -96,7 +94,7 @@ class EventContext:
(type, state_key) -> event_id
FIXME: again, what is this for an outlier?
For an outlier, this is {}
As with _current_state_ids, this is a private attribute. It should be
accessed via get_prev_state_ids.
@@ -130,6 +128,14 @@ class EventContext:
delta_ids=delta_ids,
)
@staticmethod
def for_outlier():
"""Return an EventContext instance suitable for persisting an outlier event"""
return EventContext(
current_state_ids={},
prev_state_ids={},
)
async def serialize(self, event: EventBase, store: "DataStore") -> dict:
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
+43 -1
View File
@@ -46,6 +46,9 @@ CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
]
USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK = Callable[
[str, List[str], List[Dict[str, str]]], Awaitable[bool]
]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
@@ -78,7 +81,7 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer"):
"""
spam_checkers: List[Any] = []
api = hs.get_module_api()
for module, config in hs.config.spam_checkers:
for module, config in hs.config.spamchecker.spam_checkers:
# Older spam checkers don't accept the `api` argument, so we
# try and detect support.
spam_args = inspect.getfullargspec(module)
@@ -164,6 +167,9 @@ class SpamChecker:
self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
self._user_may_create_room_callbacks: List[USER_MAY_CREATE_ROOM_CALLBACK] = []
self._user_may_create_room_with_invites_callbacks: List[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = []
self._user_may_create_room_alias_callbacks: List[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = []
@@ -183,6 +189,9 @@ class SpamChecker:
check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_with_invites: Optional[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = None,
user_may_create_room_alias: Optional[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = None,
@@ -203,6 +212,11 @@ class SpamChecker:
if user_may_create_room is not None:
self._user_may_create_room_callbacks.append(user_may_create_room)
if user_may_create_room_with_invites is not None:
self._user_may_create_room_with_invites_callbacks.append(
user_may_create_room_with_invites,
)
if user_may_create_room_alias is not None:
self._user_may_create_room_alias_callbacks.append(
user_may_create_room_alias,
@@ -283,6 +297,34 @@ class SpamChecker:
return True
async def user_may_create_room_with_invites(
self,
userid: str,
invites: List[str],
threepid_invites: List[Dict[str, str]],
) -> bool:
"""Checks if a given user may create a room with invites
If this method returns false, the creation request will be rejected.
Args:
userid: The ID of the user attempting to create a room
invites: The IDs of the Matrix users to be invited if the room creation is
allowed.
threepid_invites: The threepids to be invited if the room creation is allowed,
as a dict including a "medium" key indicating the threepid's medium (e.g.
"email") and an "address" key indicating the threepid's address (e.g.
"alice@example.com")
Returns:
True if the user may create the room, otherwise False
"""
for callback in self._user_may_create_room_with_invites_callbacks:
if await callback(userid, invites, threepid_invites) is False:
return False
return True
async def user_may_create_room_alias(
self, userid: str, room_alias: RoomAlias
) -> bool:
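For module authors, a sketch of a callback implementing the new hook. The name and signature come from the diff above; the 20-invite policy and the registration snippet are illustrative only:
from typing import Dict, List

async def user_may_create_room_with_invites(
    userid: str,
    invites: List[str],
    threepid_invites: List[Dict[str, str]],
) -> bool:
    # Invented policy: refuse creation requests that would immediately
    # invite more than 20 users (threepid invites included).
    return len(invites) + len(threepid_invites) <= 20

# Registered from a module via the module API:
#   api.register_spam_checker_callbacks(
#       user_may_create_room_with_invites=user_may_create_room_with_invites,
#   )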
+2 -2
View File
@@ -42,10 +42,10 @@ def load_legacy_third_party_event_rules(hs: "HomeServer"):
"""Wrapper that loads a third party event rules module configured using the old
configuration, and registers the hooks they implement.
"""
if hs.config.third_party_event_rules is None:
if hs.config.thirdpartyrules.third_party_event_rules is None:
return
module, config = hs.config.third_party_event_rules
module, config = hs.config.thirdpartyrules.third_party_event_rules
api = hs.get_module_api()
third_party_rules = module(config=config, module_api=api)
+3 -3
View File
@@ -141,9 +141,9 @@ def prune_event_dict(room_version: RoomVersion, event_dict: dict) -> dict:
elif event_type == EventTypes.Redaction and room_version.msc2176_redaction_rules:
add_fields("redacts")
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_INSERTION:
add_fields(EventContentFields.MSC2716_NEXT_CHUNK_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_CHUNK:
add_fields(EventContentFields.MSC2716_CHUNK_ID)
add_fields(EventContentFields.MSC2716_NEXT_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_BATCH:
add_fields(EventContentFields.MSC2716_BATCH_ID)
elif room_version.msc2716_redactions and event_type == EventTypes.MSC2716_MARKER:
add_fields(EventContentFields.MSC2716_MARKER_INSERTION)
-2
View File
@@ -501,8 +501,6 @@ class FederationClient(FederationBase):
destination, auth_chain, outlier=True, room_version=room_version
)
signed_auth.sort(key=lambda e: e.depth)
return signed_auth
def _is_unknown_endpoint(
View File
@@ -560,7 +560,7 @@ class PerDestinationQueue:
assert len(edus) <= limit, "get_device_updates_by_remote returned too many EDUs"
return (edus, now_stream_id)
return edus, now_stream_id
async def _get_to_device_message_edus(self, limit: int) -> Tuple[List[Edu], int]:
last_device_stream_id = self._last_device_stream_id
@@ -593,7 +593,7 @@ class PerDestinationQueue:
stream_id,
)
return (edus, stream_id)
return edus, stream_id
def _start_catching_up(self) -> None:
"""
+3 -1
View File
@@ -49,7 +49,9 @@ class Authenticator:
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.federation_domain_whitelist = hs.config.federation_domain_whitelist
self.federation_domain_whitelist = (
hs.config.federation.federation_domain_whitelist
)
self.notifier = hs.get_notifier()
self.replication_client = None
+3 -3
View File
@@ -847,16 +847,16 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
UserID.from_string(requester_user_id)
)
if not is_admin:
if not self.hs.config.enable_group_creation:
if not self.hs.config.groups.enable_group_creation:
raise SynapseError(
403, "Only a server admin can create groups on this server"
)
localpart = group_id_obj.localpart
if not localpart.startswith(self.hs.config.group_creation_prefix):
if not localpart.startswith(self.hs.config.groups.group_creation_prefix):
raise SynapseError(
400,
"Can only create groups with prefix %r on this server"
% (self.hs.config.group_creation_prefix,),
% (self.hs.config.groups.group_creation_prefix,),
)
profile = content.get("profile", {})
+10 -3
View File
@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import random
from typing import TYPE_CHECKING, Any, List, Tuple
from typing import TYPE_CHECKING, Collection, List, Optional, Tuple
from synapse.replication.http.account_data import (
ReplicationAddTagRestServlet,
@@ -21,6 +21,7 @@ from synapse.replication.http.account_data import (
ReplicationRoomAccountDataRestServlet,
ReplicationUserAccountDataRestServlet,
)
from synapse.streams import EventSource
from synapse.types import JsonDict, UserID
if TYPE_CHECKING:
@@ -163,7 +164,7 @@ class AccountDataHandler:
return response["max_stream_id"]
class AccountDataEventSource:
class AccountDataEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
@@ -171,7 +172,13 @@ class AccountDataEventSource:
return self.store.get_max_account_data_stream_id()
async def get_new_events(
self, user: UserID, from_key: int, **kwargs: Any
self,
user: UserID,
from_key: int,
limit: Optional[int],
room_ids: Collection[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
) -> Tuple[List[JsonDict], int]:
user_id = user.to_string()
last_stream_id = from_key
+1 -1
View File
@@ -47,7 +47,7 @@ class AccountValidityHandler:
self.send_email_handler = self.hs.get_send_email_handler()
self.clock = self.hs.get_clock()
self._app_name = self.hs.config.email_app_name
self._app_name = self.hs.config.email.email_app_name
self._account_validity_enabled = (
hs.config.account_validity.account_validity_enabled
+4 -4
View File
@@ -52,7 +52,7 @@ class ApplicationServicesHandler:
self.scheduler = hs.get_application_service_scheduler()
self.started_scheduler = False
self.clock = hs.get_clock()
self.notify_appservices = hs.config.notify_appservices
self.notify_appservices = hs.config.appservice.notify_appservices
self.event_sources = hs.get_event_sources()
self.current_max = 0
@@ -254,7 +254,7 @@ class ApplicationServicesHandler:
async def _handle_typing(
self, service: ApplicationService, new_token: int
) -> List[JsonDict]:
typing_source = self.event_sources.sources["typing"]
typing_source = self.event_sources.sources.typing
# Get the typing events from just before current
typing, _ = await typing_source.get_new_events_as(
service=service,
@@ -269,7 +269,7 @@ class ApplicationServicesHandler:
from_key = await self.store.get_type_stream_id_for_appservice(
service, "read_receipt"
)
receipts_source = self.event_sources.sources["receipt"]
receipts_source = self.event_sources.sources.receipt
receipts, _ = await receipts_source.get_new_events_as(
service=service, from_key=from_key
)
@@ -279,7 +279,7 @@ class ApplicationServicesHandler:
self, service: ApplicationService, users: Collection[Union[str, UserID]]
) -> List[JsonDict]:
events: List[JsonDict] = []
presence_source = self.event_sources.sources["presence"]
presence_source = self.event_sources.sources.presence
from_key = await self.store.get_type_stream_id_for_appservice(
service, "presence"
)
+18 -16
View File
@@ -210,15 +210,15 @@ class AuthHandler(BaseHandler):
self.password_providers = [
PasswordProvider.load(module, config, account_handler)
for module, config in hs.config.password_providers
for module, config in hs.config.authproviders.password_providers
]
logger.info("Extra password_providers: %s", self.password_providers)
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled
self._password_localdb_enabled = hs.config.password_localdb_enabled
self._password_enabled = hs.config.auth.password_enabled
self._password_localdb_enabled = hs.config.auth.password_localdb_enabled
# start out by assuming PASSWORD is enabled; we will remove it later if not.
login_types = set()
@@ -250,7 +250,7 @@ class AuthHandler(BaseHandler):
)
# The number of seconds to keep a UI auth session active.
self._ui_auth_session_timeout = hs.config.ui_auth_session_timeout
self._ui_auth_session_timeout = hs.config.auth.ui_auth_session_timeout
# Ratelimiter for failed /login attempts
self._failed_login_attempts_ratelimiter = Ratelimiter(
@@ -277,23 +277,25 @@ class AuthHandler(BaseHandler):
# after the SSO completes and before redirecting them back to their client.
# It notifies the user they are about to give access to their matrix account
# to the client.
self._sso_redirect_confirm_template = hs.config.sso_redirect_confirm_template
self._sso_redirect_confirm_template = (
hs.config.sso.sso_redirect_confirm_template
)
# The following template is shown during user interactive authentication
# in the fallback auth scenario. It notifies the user that they are
# authenticating for an operation to occur on their account.
self._sso_auth_confirm_template = hs.config.sso_auth_confirm_template
self._sso_auth_confirm_template = hs.config.sso.sso_auth_confirm_template
# The following template is shown during the SSO authentication process if
# the account is deactivated.
self._sso_account_deactivated_template = (
hs.config.sso_account_deactivated_template
hs.config.sso.sso_account_deactivated_template
)
self._server_name = hs.config.server.server_name
# cast to tuple for use with str.startswith
self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
self._whitelisted_sso_clients = tuple(hs.config.sso.sso_client_whitelist)
# A mapping of user ID to extra attributes to include in the login
# response.
@@ -739,19 +741,19 @@ class AuthHandler(BaseHandler):
return canonical_id
def _get_params_recaptcha(self) -> dict:
return {"public_key": self.hs.config.recaptcha_public_key}
return {"public_key": self.hs.config.captcha.recaptcha_public_key}
def _get_params_terms(self) -> dict:
return {
"policies": {
"privacy_policy": {
"version": self.hs.config.user_consent_version,
"version": self.hs.config.consent.user_consent_version,
"en": {
"name": self.hs.config.user_consent_policy_name,
"name": self.hs.config.consent.user_consent_policy_name,
"url": "%s_matrix/consent?v=%s"
% (
self.hs.config.server.public_baseurl,
self.hs.config.user_consent_version,
self.hs.config.consent.user_consent_version,
),
},
}
@@ -1016,7 +1018,7 @@ class AuthHandler(BaseHandler):
def can_change_password(self) -> bool:
"""Get whether users on this server are allowed to change or set a password.
Both `config.password_enabled` and `config.password_localdb_enabled` must be true.
Both `config.auth.password_enabled` and `config.auth.password_localdb_enabled` must be true.
Note that any account (even SSO accounts) are allowed to add passwords if the above
is true.
@@ -1486,7 +1488,7 @@ class AuthHandler(BaseHandler):
pw = unicodedata.normalize("NFKC", password)
return bcrypt.hashpw(
pw.encode("utf8") + self.hs.config.password_pepper.encode("utf8"),
pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
bcrypt.gensalt(self.bcrypt_rounds),
).decode("ascii")
@@ -1510,7 +1512,7 @@ class AuthHandler(BaseHandler):
pw = unicodedata.normalize("NFKC", password)
return bcrypt.checkpw(
pw.encode("utf8") + self.hs.config.password_pepper.encode("utf8"),
pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
checked_hash,
)
@@ -1802,7 +1804,7 @@ class MacaroonGenerator:
macaroon = pymacaroons.Macaroon(
location=self.hs.config.server.server_name,
identifier="key",
key=self.hs.config.macaroon_secret_key,
key=self.hs.config.key.macaroon_secret_key,
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
+4 -4
View File
@@ -65,10 +65,10 @@ class CasHandler:
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
self._cas_server_url = hs.config.cas_server_url
self._cas_service_url = hs.config.cas_service_url
self._cas_displayname_attribute = hs.config.cas_displayname_attribute
self._cas_required_attributes = hs.config.cas_required_attributes
self._cas_server_url = hs.config.cas.cas_server_url
self._cas_service_url = hs.config.cas.cas_service_url
self._cas_displayname_attribute = hs.config.cas.cas_displayname_attribute
self._cas_required_attributes = hs.config.cas.cas_required_attributes
self._http_client = hs.get_proxied_http_client()
+6 -3
View File
@@ -255,13 +255,16 @@ class DeactivateAccountHandler(BaseHandler):
Args:
user_id: ID of user to be re-activated
"""
# Add the user to the directory, if necessary.
user = UserID.from_string(user_id)
profile = await self.store.get_profileinfo(user.localpart)
await self.user_directory_handler.handle_local_profile_change(user_id, profile)
# Ensure the user is not marked as erased.
await self.store.mark_user_not_erased(user_id)
# Mark the user as active.
await self.store.set_user_deactivated_status(user_id, False)
# Add the user to the directory, if necessary. Note that
# this must be done after the user is re-activated, because
# deactivated users are excluded from the user directory.
profile = await self.store.get_profileinfo(user.localpart)
await self.user_directory_handler.handle_local_profile_change(user_id, profile)
+3 -3
View File
@@ -48,7 +48,7 @@ class DirectoryHandler(BaseHandler):
self.event_creation_handler = hs.get_event_creation_handler()
self.store = hs.get_datastore()
self.config = hs.config
self.enable_room_list_search = hs.config.enable_room_list_search
self.enable_room_list_search = hs.config.roomdirectory.enable_room_list_search
self.require_membership = hs.config.require_membership_for_aliases
self.third_party_event_rules = hs.get_third_party_event_rules()
@@ -143,7 +143,7 @@ class DirectoryHandler(BaseHandler):
):
raise AuthError(403, "This user is not permitted to create this alias")
if not self.config.is_alias_creation_allowed(
if not self.config.roomdirectory.is_alias_creation_allowed(
user_id, room_id, room_alias_str
):
# Let's just return a generic message, as there may be all sorts of
@@ -459,7 +459,7 @@ class DirectoryHandler(BaseHandler):
if canonical_alias:
room_aliases.append(canonical_alias)
if not self.config.is_publishing_room_allowed(
if not self.config.roomdirectory.is_publishing_room_allowed(
user_id, room_id, room_aliases
):
# Let's just return a generic message, as there may be all sorts of
+13 -7
View File
@@ -91,7 +91,7 @@ class FederationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self._event_auth_handler = hs.get_event_auth_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
self._server_notices_mxid = hs.config.servernotices.server_notices_mxid
self.config = hs.config
self.http_client = hs.get_proxied_blacklisted_http_client()
self._replication = hs.get_replication_data_handler()
@@ -593,6 +593,13 @@ class FederationHandler(BaseHandler):
target_hosts, room_id, knockee, Membership.KNOCK, content, params=params
)
# Mark the knock as an outlier as we don't yet have the state at this point in
# the DAG.
event.internal_metadata.outlier = True
# ... but tell /sync to send it to clients anyway.
event.internal_metadata.out_of_band_membership = True
# Record the room ID and its version so that we have a record of the room
await self._maybe_store_room_on_outlier_membership(
room_id=event.room_id, room_version=event_format_version
@@ -617,7 +624,7 @@ class FederationHandler(BaseHandler):
# in the invitee's sync stream. It is stripped out for all other local users.
event.unsigned["knock_room_state"] = stripped_room_state["knock_state_events"]
context = await self.state_handler.compute_event_context(event)
context = EventContext.for_outlier()
stream_id = await self._federation_event_handler.persist_events_and_notify(
event.room_id, [(event, context)]
)
@@ -807,7 +814,7 @@ class FederationHandler(BaseHandler):
)
)
context = await self.state_handler.compute_event_context(event)
context = EventContext.for_outlier()
await self._federation_event_handler.persist_events_and_notify(
event.room_id, [(event, context)]
)
@@ -836,7 +843,7 @@ class FederationHandler(BaseHandler):
await self.federation_client.send_leave(host_list, event)
context = await self.state_handler.compute_event_context(event)
context = EventContext.for_outlier()
stream_id = await self._federation_event_handler.persist_events_and_notify(
event.room_id, [(event, context)]
)
@@ -1108,8 +1115,7 @@ class FederationHandler(BaseHandler):
events_to_context = {}
for e in itertools.chain(auth_events, state):
e.internal_metadata.outlier = True
ctx = await self.state_handler.compute_event_context(e)
events_to_context[e.event_id] = ctx
events_to_context[e.event_id] = EventContext.for_outlier()
event_map = {
e.event_id: e for e in itertools.chain(auth_events, state, [event])
@@ -1363,7 +1369,7 @@ class FederationHandler(BaseHandler):
builder=builder
)
EventValidator().validate_new(event, self.config)
return (event, context)
return event, context
async def _check_signature(self, event: EventBase, context: EventContext) -> None:
"""
+134 -184
View File
@@ -27,11 +27,8 @@ from typing import (
Tuple,
)
import attr
from prometheus_client import Counter
from twisted.internet import defer
from synapse import event_auth
from synapse.api.constants import (
EventContentFields,
@@ -54,11 +51,7 @@ from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
run_in_background,
)
from synapse.logging.context import nested_logging_context, run_in_background
from synapse.logging.utils import log_function
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
@@ -92,30 +85,6 @@ soft_failed_event_counter = Counter(
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _NewEventInfo:
"""Holds information about a received event, ready for passing to _auth_and_persist_events
Attributes:
event: the received event
claimed_auth_event_map: a map of (type, state_key) => event for the event's
claimed auth_events.
This can include events which have not yet been persisted, in the case that
we are backfilling a batch of events.
Note: May be incomplete: if we were unable to find all of the claimed auth
events. Also, treat the contents with caution: the events might also have
been rejected, might not yet have been authorized themselves, or they might
be in the wrong room.
"""
event: EventBase
claimed_auth_event_map: StateMap[EventBase]
class FederationEventHandler:
"""Handles events that originated from federation.
@@ -1107,7 +1076,7 @@ class FederationEventHandler:
room_version = await self._store.get_room_version(room_id)
event_map: Dict[str, EventBase] = {}
events: List[EventBase] = []
async def get_event(event_id: str) -> None:
with nested_logging_context(event_id):
@@ -1125,8 +1094,7 @@ class FederationEventHandler:
event_id,
)
return
event_map[event.event_id] = event
events.append(event)
except Exception as e:
logger.warning(
@@ -1137,11 +1105,29 @@ class FederationEventHandler:
)
await concurrently_execute(get_event, event_ids, 5)
logger.info("Fetched %i events of %i requested", len(event_map), len(event_ids))
logger.info("Fetched %i events of %i requested", len(events), len(event_ids))
await self._auth_and_persist_fetched_events(destination, room_id, events)
async def _auth_and_persist_fetched_events(
self, origin: str, room_id: str, events: Iterable[EventBase]
) -> None:
"""Persist the events fetched by _get_events_and_persist or _get_remote_auth_chain_for_event
The events to be persisted must be outliers.
We first sort the events to make sure that we process each event's auth_events
before the event itself, and then auth and persist them.
Notifies about the events where appropriate.
Args:
origin: where the events came from
room_id: the room that the events are meant to be in (though this has
not yet been checked)
events: the events that have been fetched
"""
event_map = {event.event_id: event for event in events}
# we now need to auth the events in an order which ensures that each event's
# auth_events are authed before the event itself.
#
# XXX: it might be possible to kick this process off in parallel with fetching
# the events.
while event_map:
@@ -1168,22 +1154,18 @@ class FederationEventHandler:
"Persisting %i of %i remaining events", len(roots), len(event_map)
)
await self._auth_and_persist_fetched_events(destination, room_id, roots)
await self._auth_and_persist_fetched_events_inner(origin, room_id, roots)
for ev in roots:
del event_map[ev.event_id]
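The loop above is a simple topological batching. A standalone sketch under stated assumptions: persist_in_auth_order and its persist parameter are invented names, events are assumed to expose event_id and auth_event_ids() as EventBase does, and the real method also auths each batch and logs before bailing on a cycle:
from typing import Callable, Iterable, List

def persist_in_auth_order(events: Iterable, persist: Callable[[List], None]) -> None:
    # Repeatedly extract "roots" - events none of whose auth_events are
    # still waiting in the map - so every event is persisted only after
    # its auth events have been.
    event_map = {e.event_id: e for e in events}
    while event_map:
        roots = [
            ev
            for ev in event_map.values()
            if not any(aid in event_map for aid in ev.auth_event_ids())
        ]
        if not roots:
            break  # remaining events form a cycle or reference missing events
        persist(roots)
        for ev in roots:
            del event_map[ev.event_id]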
async def _auth_and_persist_fetched_events(
async def _auth_and_persist_fetched_events_inner(
self, origin: str, room_id: str, fetched_events: Collection[EventBase]
) -> None:
"""Persist the events fetched by _get_events_and_persist.
"""Helper for _auth_and_persist_fetched_events
The events should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
We also assume that all of the auth events for all of the events have already
been persisted.
Persists a batch of events where we have (theoretically) already persisted all
of their auth events.
Notifies about the events where appropriate.
@@ -1191,7 +1173,7 @@ class FederationEventHandler:
origin: where the events came from
room_id: the room that the events are meant to be in (though this has
not yet been checked)
event_id: map from event_id -> event for the fetched events
fetched_events: the events to persist
"""
# get all the auth events for all the events in this batch. By now, they should
# have been persisted.
@@ -1203,47 +1185,37 @@ class FederationEventHandler:
allow_rejected=True,
)
event_infos = []
for event in fetched_events:
auth = {}
for auth_event_id in event.auth_event_ids():
ae = persisted_events.get(auth_event_id)
if ae:
auth[(ae.type, ae.state_key)] = ae
else:
logger.info("Missing auth event %s", auth_event_id)
room_version = await self._store.get_room_version_id(room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_infos.append(_NewEventInfo(event, auth))
if not event_infos:
return
async def prep(ev_info: _NewEventInfo) -> EventContext:
event = ev_info.event
def prep(event: EventBase) -> Optional[Tuple[EventBase, EventContext]]:
with nested_logging_context(suffix=event.event_id):
res = await self._state_handler.compute_event_context(event)
res = await self._check_event_auth(
origin,
event,
res,
claimed_auth_event_map=ev_info.claimed_auth_event_map,
)
return res
auth = {}
for auth_event_id in event.auth_event_ids():
ae = persisted_events.get(auth_event_id)
if not ae:
logger.warning(
"Event %s relies on auth_event %s, which could not be found.",
event,
auth_event_id,
)
# the fact we can't find the auth event doesn't mean it doesn't
# exist, which means it is premature to reject `event`. Instead we
# just ignore it for now.
return None
auth[(ae.type, ae.state_key)] = ae
contexts = await make_deferred_yieldable(
defer.gatherResults(
[run_in_background(prep, ev_info) for ev_info in event_infos],
consumeErrors=True,
)
)
context = EventContext.for_outlier()
try:
event_auth.check(room_version_obj, event, auth_events=auth)
except AuthError as e:
logger.warning("Rejecting %r because %s", event, e)
context.rejected = RejectedReason.AUTH_ERROR
await self.persist_events_and_notify(
room_id,
[
(ev_info.event, context)
for ev_info, context in zip(event_infos, contexts)
],
)
return event, context
events_to_persist = (x for x in (prep(event) for event in fetched_events) if x)
await self.persist_events_and_notify(room_id, tuple(events_to_persist))
async def _check_event_auth(
self,
@@ -1251,7 +1223,6 @@ class FederationEventHandler:
event: EventBase,
context: EventContext,
state: Optional[Iterable[EventBase]] = None,
claimed_auth_event_map: Optional[StateMap[EventBase]] = None,
backfilled: bool = False,
) -> EventContext:
"""
@@ -1267,43 +1238,36 @@ class FederationEventHandler:
The state events used to check the event for soft-fail. If this is
not provided the current state events will be used.
claimed_auth_event_map:
A map of (type, state_key) => event for the event's claimed auth_events.
Possibly incomplete, and possibly including events that are not yet
persisted, or authed, or in the right room.
Only populated when populating outliers.
backfilled: True if the event was backfilled.
Returns:
The updated context object.
"""
# claimed_auth_event_map should be given iff the event is an outlier
assert bool(claimed_auth_event_map) == event.internal_metadata.outlier
# This method should only be used for non-outliers
assert not event.internal_metadata.outlier
room_version = await self._store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
if claimed_auth_event_map:
# if we have a copy of the auth events from the event, use that as the
# basis for auth.
auth_events = claimed_auth_event_map
else:
# otherwise, we calculate what the auth events *should* be, and use that
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self._store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_x.values()}
# calculate what the auth events *should* be, to use as a basis for auth.
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self._store.get_events(auth_events_ids)
calculated_auth_event_map = {
(e.type, e.state_key): e for e in auth_events_x.values()
}
try:
(
context,
auth_events_for_auth,
) = await self._update_auth_events_and_context_for_auth(
origin, event, context, auth_events
origin,
event,
context,
calculated_auth_event_map=calculated_auth_event_map,
)
except Exception:
# We don't really mind if the above fails, so let's not fail
@@ -1315,7 +1279,7 @@ class FederationEventHandler:
"Ignoring failure and continuing processing of event.",
event.event_id,
)
auth_events_for_auth = auth_events
auth_events_for_auth = calculated_auth_event_map
try:
event_auth.check(room_version_obj, event, auth_events=auth_events_for_auth)
@@ -1451,7 +1415,7 @@ class FederationEventHandler:
origin: str,
event: EventBase,
context: EventContext,
input_auth_events: StateMap[EventBase],
calculated_auth_event_map: StateMap[EventBase],
) -> Tuple[EventContext, StateMap[EventBase]]:
"""Helper for _check_event_auth. See there for docs.
@@ -1469,19 +1433,17 @@ class FederationEventHandler:
event:
context:
input_auth_events:
Map from (event_type, state_key) to event
Normally, our calculated auth_events based on the state of the room
at the event's position in the DAG, though occasionally (eg if the
event is an outlier), may be the auth events claimed by the remote
server.
calculated_auth_event_map:
Our calculated auth_events based on the state of the room
at the event's position in the DAG.
Returns:
updated context, updated auth event map
"""
# take a copy of input_auth_events before we modify it.
auth_events: MutableStateMap[EventBase] = dict(input_auth_events)
assert not event.internal_metadata.outlier
# take a copy of calculated_auth_event_map before we modify it.
auth_events: MutableStateMap[EventBase] = dict(calculated_auth_event_map)
event_auth_events = set(event.auth_event_ids())
@@ -1505,73 +1467,22 @@ class FederationEventHandler:
# If we don't have all the auth events, we need to get them.
logger.info("auth_events contains unknown events: %s", missing_auth)
try:
try:
remote_auth_chain = await self._federation_client.get_event_auth(
origin, event.room_id, event.event_id
)
except RequestSendFailed as e1:
# The other side isn't around or doesn't implement the
# endpoint, so let's just bail out.
logger.info("Failed to get event auth from remote: %s", e1)
return context, auth_events
seen_remotes = await self._store.have_seen_events(
event.room_id, [e.event_id for e in remote_auth_chain]
await self._get_remote_auth_chain_for_event(
origin, event.room_id, event.event_id
)
for auth_event in remote_auth_chain:
if auth_event.event_id in seen_remotes:
continue
if auth_event.event_id == event.event_id:
continue
try:
auth_ids = auth_event.auth_event_ids()
auth = {
(e.type, e.state_key): e
for e in remote_auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
}
auth_event.internal_metadata.outlier = True
logger.debug(
"_check_event_auth %s missing_auth: %s",
event.event_id,
auth_event.event_id,
)
missing_auth_event_context = (
await self._state_handler.compute_event_context(auth_event)
)
missing_auth_event_context = await self._check_event_auth(
origin,
auth_event,
missing_auth_event_context,
claimed_auth_event_map=auth,
)
await self.persist_events_and_notify(
event.room_id, [(auth_event, missing_auth_event_context)]
)
if auth_event.event_id in event_auth_events:
auth_events[
(auth_event.type, auth_event.state_key)
] = auth_event
except AuthError:
pass
except Exception:
logger.exception("Failed to get auth chain")
if event.internal_metadata.is_outlier():
# XXX: given that, for an outlier, we'll be working with the
# event's *claimed* auth events rather than those we calculated:
# (a) is there any point in this test, since different_auth below will
# obviously be empty
# (b) alternatively, why don't we do it earlier?
logger.info("Skipping auth_event fetch for outlier")
return context, auth_events
else:
# load any auth events we might have persisted from the database. This
# has the side-effect of correctly setting the rejected_reason on them.
auth_events.update(
{
(ae.type, ae.state_key): ae
for ae in await self._store.get_events_as_list(
missing_auth, allow_rejected=True
)
}
)
different_auth = event_auth_events.difference(
e.event_id for e in auth_events.values()
@@ -1636,6 +1547,45 @@ class FederationEventHandler:
return context, auth_events
async def _get_remote_auth_chain_for_event(
self, destination: str, room_id: str, event_id: str
) -> None:
"""If we are missing some of an event's auth events, attempt to request them
Args:
destination: where to fetch the auth tree from
room_id: the room in which we are lacking auth events
event_id: the event for which we are lacking auth events
"""
try:
remote_event_map = {
e.event_id: e
for e in await self._federation_client.get_event_auth(
destination, room_id, event_id
)
}
except RequestSendFailed as e1:
# The other side isn't around or doesn't implement the
# endpoint, so let's just bail out.
logger.info("Failed to get event auth from remote: %s", e1)
return
logger.info("/event_auth returned %i events", len(remote_event_map))
# `event` may be returned, but we should not yet process it.
remote_event_map.pop(event_id, None)
# nor should we reprocess any events we have already seen.
seen_remotes = await self._store.have_seen_events(
room_id, remote_event_map.keys()
)
for s in seen_remotes:
remote_event_map.pop(s, None)
await self._auth_and_persist_fetched_events(
destination, room_id, remote_event_map.values()
)
async def _update_context_for_auth_events(
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
) -> EventContext:
+6 -6
View File
@@ -62,7 +62,7 @@ class IdentityHandler(BaseHandler):
self.federation_http_client = hs.get_federation_http_client()
self.hs = hs
self._web_client_location = hs.config.invite_client_location
self._web_client_location = hs.config.email.invite_client_location
# Ratelimiters for `/requestToken` endpoints.
self._3pid_validation_ratelimiter_ip = Ratelimiter(
@@ -419,7 +419,7 @@ class IdentityHandler(BaseHandler):
token_expires = (
self.hs.get_clock().time_msec()
+ self.hs.config.email_validation_token_lifetime
+ self.hs.config.email.email_validation_token_lifetime
)
await self.store.start_or_continue_validation_session(
@@ -465,7 +465,7 @@ class IdentityHandler(BaseHandler):
if next_link:
params["next_link"] = next_link
if self.hs.config.using_identity_server_from_trusted_list:
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
@@ -518,7 +518,7 @@ class IdentityHandler(BaseHandler):
if next_link:
params["next_link"] = next_link
if self.hs.config.using_identity_server_from_trusted_list:
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
@@ -572,12 +572,12 @@ class IdentityHandler(BaseHandler):
validation_session = None
# Try to validate as email
if self.hs.config.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
# Ask our delegated email identity server
validation_session = await self.threepid_from_creds(
self.hs.config.account_threepid_delegate_email, threepid_creds
)
elif self.hs.config.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
elif self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
# Get a validated session matching these details
validation_session = await self.store.get_threepid_validation_session(
"email", client_secret, sid=sid, validated=True
+1 -1
View File
@@ -125,7 +125,7 @@ class InitialSyncHandler(BaseHandler):
now_token = self.hs.get_event_sources().get_current_token()
presence_stream = self.hs.get_event_sources().sources["presence"]
presence_stream = self.hs.get_event_sources().sources.presence
presence, _ = await presence_stream.get_new_events(
user, from_key=None, include_offline=False
)
+9 -9
View File
@@ -443,7 +443,7 @@ class EventCreationHandler:
)
self._block_events_without_consent_error = (
self.config.block_events_without_consent_error
self.config.consent.block_events_without_consent_error
)
# we need to construct a ConsentURIBuilder here, as it checks that the necessary
@@ -666,7 +666,7 @@ class EventCreationHandler:
self.validator.validate_new(event, self.config)
return (event, context)
return event, context
async def _is_exempt_from_privacy_policy(
self, builder: EventBuilder, requester: Requester
@@ -692,10 +692,10 @@ class EventCreationHandler:
return False
async def _is_server_notices_room(self, room_id: str) -> bool:
if self.config.server_notices_mxid is None:
if self.config.servernotices.server_notices_mxid is None:
return False
user_ids = await self.store.get_users_in_room(room_id)
return self.config.server_notices_mxid in user_ids
return self.config.servernotices.server_notices_mxid in user_ids
async def assert_accepted_privacy_policy(self, requester: Requester) -> None:
"""Check if a user has accepted the privacy policy
@@ -731,8 +731,8 @@ class EventCreationHandler:
# exempt the system notices user
if (
self.config.server_notices_mxid is not None
and user_id == self.config.server_notices_mxid
self.config.servernotices.server_notices_mxid is not None
and user_id == self.config.servernotices.server_notices_mxid
):
return
@@ -744,7 +744,7 @@ class EventCreationHandler:
if u["appservice_id"] is not None:
# users registered by an appservice are exempt
return
if u["consent_version"] == self.config.user_consent_version:
if u["consent_version"] == self.config.consent.user_consent_version:
return
consent_uri = self._consent_uri_builder.build_user_consent_uri(user.localpart)
@@ -1004,7 +1004,7 @@ class EventCreationHandler:
logger.debug("Created event %s", event.event_id)
return (event, context)
return event, context
@measure_func("handle_new_client_event")
async def handle_new_client_event(
@@ -1425,7 +1425,7 @@ class EventCreationHandler:
# structural protocol level).
is_msc2716_event = (
original_event.type == EventTypes.MSC2716_INSERTION
or original_event.type == EventTypes.MSC2716_CHUNK
or original_event.type == EventTypes.MSC2716_BATCH
or original_event.type == EventTypes.MSC2716_MARKER
)
if not room_version_obj.msc2716_historical and is_msc2716_event:
+1 -1
View File
@@ -277,7 +277,7 @@ class OidcProvider:
self._token_generator = token_generator
self._config = provider
self._callback_url: str = hs.config.oidc_callback_url
self._callback_url: str = hs.config.oidc.oidc_callback_url
# Calculate the prefix for OIDC callback paths based on the public_baseurl.
# We'll insert this into the Path= parameter of any session cookies we set.
+4 -4
View File
@@ -92,16 +92,16 @@ class PaginationHandler:
if hs.config.worker.run_background_tasks and hs.config.retention_enabled:
# Run the purge jobs described in the configuration file.
for job in hs.config.retention_purge_jobs:
for job in hs.config.server.retention_purge_jobs:
logger.info("Setting up purge job with config: %s", job)
self.clock.looping_call(
run_as_background_process,
job["interval"],
job.interval,
"purge_history_for_rooms_in_range",
self.purge_history_for_rooms_in_range,
job["shortest_max_lifetime"],
job["longest_max_lifetime"],
job.shortest_max_lifetime,
job.longest_max_lifetime,
)
async def purge_history_for_rooms_in_range(
+2 -2
View File
@@ -27,8 +27,8 @@ logger = logging.getLogger(__name__)
class PasswordPolicyHandler:
def __init__(self, hs: "HomeServer"):
self.policy = hs.config.password_policy
self.enabled = hs.config.password_policy_enabled
self.policy = hs.config.auth.password_policy
self.enabled = hs.config.auth.password_policy_enabled
# Regexps for the spec'd policy parameters.
self.regexp_digit = re.compile("[0-9]")

Some files were not shown because too many files have changed in this diff.