
Compare commits


158 Commits

Author SHA1 Message Date
David Robertson a39c9adce6 Will this work?!?!?!?! 2023-02-07 20:48:34 +00:00
David Robertson 2629f7f852 Revert "Revert "Allow poetry-core 1.5.0 (#14949)""
does this break it?

This reverts commit 50936db2fc.
2023-02-07 20:24:38 +00:00
David Robertson 99bc8cdaa4 Don't test pypy3.8 wheels either 2023-02-07 19:50:17 +00:00
David Robertson 1b6ab81658 Use old fstring syntax 2023-02-07 19:36:18 +00:00
David Robertson d87276dc0f Desperation 2023-02-07 19:27:09 +00:00
David Robertson 50936db2fc Revert "Allow poetry-core 1.5.0 (#14949)"
This reverts commit 64a631879c.
2023-02-07 19:05:49 +00:00
David Robertson 03cc9af631 Oh my god I'm an IDIOT 2023-02-07 18:31:55 +00:00
David Robertson 5d16d26c4a DEBUG: builds lodsewheels 2023-02-07 18:21:37 +00:00
David Robertson f4e4de70e6 DEBUG: Upload wheel artifacts, even on failure 2023-02-07 18:15:06 +00:00
David Robertson f7aa7b4dbe DEBUG: build mac wheels on PRs 2023-02-07 18:13:55 +00:00
David Robertson f10caa73ee Disambiguate get_ex_outlier_stream_rows query
A backwards-compatible piece of #14979 that's safe to land now.
2023-02-07 15:33:33 +00:00
David Robertson 9cd7610f86 Revert "Add event_stream_ordering column to membership state tables (#14979)"
This reverts commit 5fdc12f482.
2023-02-07 15:26:55 +00:00
David Robertson f630536a94 1.77.0rc1 2023-02-07 13:45:19 +00:00
David Robertson 4dd2b6165c Proper types for tests.test_terms_auth (#15007)
* Proper types for tests.test_terms_auth

* Changelog
2023-02-07 12:03:39 +00:00
Patrick Cloke 5b55c32d61 Add tests for using _flatten_dict with an event. (#15002) 2023-02-07 06:56:09 -05:00
David Robertson d0fed7a37b Properly typecheck types.http (#14988)
* Tweak http types in Synapse

AFAICS these are correct, and they make mypy happier on tests.http.

* Type hints for test_proxyagent

* type hints for test_srv_resolver

* test_matrix_federation_agent

* tests.http.server._base

* tests.http.__init__

* tests.http.test_additional_resource

* tests.http.test_client

* tests.http.test_endpoint

* tests.http.test_matrixfederationclient

* tests.http.test_servlet

* tests.http.test_simple_client

* tests.http.test_site

* One fixup in tests.server

* Untyped defs

* Changelog

* Fixup syntax for Python 3.7

* Fix olddeps syntax

* Use a twisted IPv4 addr for dummy_address

* Fix typo, thanks Sean

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Remove redundant `Optional`

---------

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2023-02-07 00:20:04 +00:00
Nick Mills-Barrett 5fdc12f482 Add event_stream_ordering column to membership state tables (#14979)
This adds an `event_stream_ordering` column to `current_state_events`,
`local_current_membership` and `room_memberships`. Each of these tables
is regularly joined with the `events` table to get the stream ordering
and denormalising this into each table will yield significant query
performance improvements once used. Includes a background job to
populate these values from the `events` table.

Same idea as https://github.com/matrix-org/synapse/pull/13703.

Signed off by Nick @ Beeper (@fizzadar).
2023-02-07 00:10:54 +00:00
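The denormalisation above amounts to copying `stream_ordering` out of `events` into each membership table. A minimal sketch of the kind of batched background update involved (not Synapse's actual migration; the batching via SQLite's rowid is illustrative):

```python
import sqlite3

def backfill_stream_ordering(conn: sqlite3.Connection, batch_size: int = 1000) -> int:
    """Populate current_state_events.event_stream_ordering from events.

    Returns the number of rows updated, so a background job can loop
    until this reaches zero.
    """
    cur = conn.execute(
        """
        UPDATE current_state_events
        SET event_stream_ordering = (
            SELECT stream_ordering FROM events
            WHERE events.event_id = current_state_events.event_id
        )
        WHERE rowid IN (
            SELECT rowid FROM current_state_events
            WHERE event_stream_ordering IS NULL
            LIMIT ?
        )
        """,
        (batch_size,),
    )
    conn.commit()
    return cur.rowcount
```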
icp 64a631879c Allow poetry-core 1.5.0 (#14949) 2023-02-06 19:34:14 +00:00
Patrick Cloke d0fa217cd9 Add missing types to test_state. (#14985) 2023-02-06 16:11:09 +00:00
David Robertson 0f34abed7c Type hints for tests.federation (#14991)
* Make tests.federation pass mypy

* Untyped defs in tests.federation.transport

* test methods return None

* Remaining type hints in tests.federation

* Changelog

* Avoid an unnecessary type-ignore
2023-02-06 16:05:06 +00:00
Patrick Cloke 156cd88eef Add missing type hints to tests.replication. (#14987) 2023-02-06 09:55:00 -05:00
David Robertson b275763c65 Expect type stubs from canonicaljson (#14992)
* canonicaljson has stubs now

since https://github.com/matrix-org/python-canonicaljson/pull/52

which is included in the lockfile version we use for type checking.

* Changelog
2023-02-06 12:54:11 +00:00
David Robertson e8269ed391 Type hints for tests.appservice (#14990)
* Accept a Sequence of events in synapse.appservice

This avoids some casts/ignores in the tests I'm about to fixup. It seems
that `List[Mock]` is not a subtype of `List[EventBase]`, but
`Sequence[Mock]` is a subtype of `Sequence[EventBase]`. So presumably
`Mock` is considered a subtype of anything, much like `Any`.

* make tests.appservice.test_scheduler pass mypy

* Extra hints in tests.appservice.test_scheduler

* Extra hints in tests.appservice.test_api

* Extra hints in tests.appservice.test_appservice

* Disallow untyped defs

* Changelog
2023-02-06 12:49:06 +00:00
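The variance point in the commit above can be reproduced standalone; `EventBase` here is a stand-in class, not the real Synapse type:

```python
from typing import List, Sequence
from unittest.mock import Mock

class EventBase:  # stand-in for synapse.events.EventBase
    pass

def takes_list(events: List[EventBase]) -> None: ...
def takes_seq(events: Sequence[EventBase]) -> None: ...

mocks: List[Mock] = [Mock(), Mock()]
takes_list(mocks)  # rejected by mypy: List is invariant in its element type
takes_seq(mocks)   # accepted: Sequence is covariant, and Mock behaves like Any
```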
dependabot[bot] 3e37ff1a7e Bump anyhow from 1.0.68 to 1.0.69 (#14996)
* Bump anyhow from 1.0.68 to 1.0.69

Bumps [anyhow](https://github.com/dtolnay/anyhow) from 1.0.68 to 1.0.69.
- [Release notes](https://github.com/dtolnay/anyhow/releases)
- [Commits](https://github.com/dtolnay/anyhow/compare/1.0.68...1.0.69)

---
updated-dependencies:
- dependency-name: anyhow
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 12:18:11 +00:00
dependabot[bot] e3808e53dc Bump phonenumbers from 8.13.4 to 8.13.5 (#14999)
* Bump phonenumbers from 8.13.4 to 8.13.5

Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.4 to 8.13.5.
- [Release notes](https://github.com/daviddrysdale/python-phonenumbers/releases)
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.4...v8.13.5)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:35:39 +00:00
dependabot[bot] 4e2b58bc52 Bump isort from 5.11.4 to 5.11.5 (#14998)
* Bump isort from 5.11.4 to 5.11.5

Bumps [isort](https://github.com/pycqa/isort) from 5.11.4 to 5.11.5.
- [Release notes](https://github.com/pycqa/isort/releases)
- [Changelog](https://github.com/PyCQA/isort/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pycqa/isort/compare/5.11.4...5.11.5)

---
updated-dependencies:
- dependency-name: isort
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:35:06 +00:00
dependabot[bot] 041eab647d Bump serde_json from 1.0.91 to 1.0.92 (#14997)
* Bump serde_json from 1.0.91 to 1.0.92

Bumps [serde_json](https://github.com/serde-rs/json) from 1.0.91 to 1.0.92.
- [Release notes](https://github.com/serde-rs/json/releases)
- [Commits](https://github.com/serde-rs/json/compare/v1.0.91...v1.0.92)

---
updated-dependencies:
- dependency-name: serde_json
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:34:54 +00:00
dependabot[bot] ef23d6b296 Bump prometheus-client from 0.15.0 to 0.16.0 (#14995)
* Bump prometheus-client from 0.15.0 to 0.16.0

Bumps [prometheus-client](https://github.com/prometheus/client_python) from 0.15.0 to 0.16.0.
- [Release notes](https://github.com/prometheus/client_python/releases)
- [Commits](https://github.com/prometheus/client_python/compare/v0.15.0...v0.16.0)

---
updated-dependencies:
- dependency-name: prometheus-client
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:34:22 +00:00
dependabot[bot] 96e67d5cba Bump types-setuptools from 65.6.0.3 to 67.1.0.0 (#14994)
* Bump types-setuptools from 65.6.0.3 to 67.1.0.0

Bumps [types-setuptools](https://github.com/python/typeshed) from 65.6.0.3 to 67.1.0.0.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:34:01 +00:00
dependabot[bot] f3f495c4e3 Bump hiredis from 2.1.1 to 2.2.1 (#14993)
* Bump hiredis from 2.1.1 to 2.2.1

Bumps [hiredis](https://github.com/redis/hiredis-py) from 2.1.1 to 2.2.1.
- [Release notes](https://github.com/redis/hiredis-py/releases)
- [Changelog](https://github.com/redis/hiredis-py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/hiredis-py/compare/v2.1.1...v2.2.1)

---
updated-dependencies:
- dependency-name: hiredis
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-06 11:33:23 +00:00
David Robertson b3bf58a8a5 Only notify the target of a membership event (#14971)
* Only notify the target of a membership event

Naughty, but should be a big speedup in large rooms
2023-02-06 11:29:51 +00:00
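A hypothetical sketch of the optimisation above: for a membership event, only the user named in `state_key` can be the target, so push evaluation for everyone else can be skipped. The surrounding structure is illustrative, not Synapse's actual bulk push-rule evaluator:

```python
from typing import List

def users_to_evaluate(event, local_members: List[str]) -> List[str]:
    # Membership events only directly affect the user in state_key, so
    # skip push-rule evaluation for every other local member.
    if event.type == "m.room.member":
        return [event.state_key] if event.state_key in local_members else []
    return local_members
```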
David Robertson 6e6edea6c1 Properly typecheck tests.api (#14983) 2023-02-03 20:03:23 +00:00
Patrick Cloke b2d97bac09 Implement MSC3958: suppress notifications from edits (#14960)
Co-authored-by: Brad Murray <brad@beeper.com>
Co-authored-by: Nick Barrett <nick@beeper.com>

Copy the suppress_edits push rule from Beeper to implement MSC3958.

https://github.com/beeper/synapse/blame/9415a1284b1bfb558bd66f28c24ca1611e6c6fa2/rust/src/push/base_rules.rs#L98-L114
2023-02-03 14:31:14 -05:00
David Robertson e301ee6189 Properly typecheck tests.app (#14984) 2023-02-03 19:22:40 +00:00
Patrick Cloke f0cae26d58 Add a docstring & tests for _flatten_dict. (#14981) 2023-02-03 16:48:13 +00:00
Patrick Cloke 52700a0bcf Support the backwards compatibility features in MSC3952. (#14958)
If the feature is enabled and the event has a `m.mentions` property,
skip processing of the legacy mentions rules.
2023-02-03 16:28:20 +00:00
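A rough sketch of that gate, assuming a boolean experimental-feature flag; the stable `m.mentions` name from the MSC is used for readability (an unstable prefixed name may apply while the feature is experimental):

```python
def should_run_legacy_mention_rules(event_content: dict, msc3952_enabled: bool) -> bool:
    # If intentional mentions are enabled and the sender supplied an
    # m.mentions property, the legacy body-scanning rules are skipped.
    return not (msc3952_enabled and "m.mentions" in event_content)
```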
Sean Quah 0a686d1d13 Faster joins: Refactor handling of servers in room (#14954)
Ensure that the list of servers in a partial state room always contains
the server we joined off.

Also refactor `get_partial_state_servers_at_join` to return `None` when
the given room is no longer partial stated, to explicitly indicate when
the room has partial state. Otherwise it's not clear whether an empty
list means that the room has full state, or the room is partial stated,
but the server we joined off told us that there are no servers in the
room.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-02-03 15:39:59 +00:00
Patrick Cloke 8e9fc28c6a Reload the pyo3-log config when the Python logging config changes. (#14976)
Since pyo3-log is initialized very early in the Python start-up
it caches the state of the loggers before they're fully initialized
(and thus are essentially disabled). Whenever we reload the
logging configuration we now also tell pyo3-log to discard
any cached logging configuration it has; it will refetch the
current logging configuration from Python at the next point
it logs.

This fixes Rust log lines not appearing in the homeserver logs.
2023-02-03 08:27:31 -05:00
Patrick Cloke da05b70af5 Skip unused calculations in sync handler. (#14908)
If a sync request does not need to calculate per-room entries &
is not generating presence & is not generating device list data
(e.g. during initial sync) avoid the expensive calculation of room
specific data.

This is a micro-optimisation for clients syncing simply to receive
to-device information.
2023-02-02 13:45:12 -05:00
Patrick Cloke f36da501be Do not calculate presence or ephemeral events when they are filtered out (#14970)
This expands the previous optimisation from being only for initial
sync to being for all sync requests.

It also inverts some of the logic to be inclusive instead of exclusive.
2023-02-02 11:58:20 -05:00
David Robertson 2186ebed6c Fetch fewer events when getting hosts in room (#14962) 2023-02-02 16:49:14 +00:00
dependabot[bot] f398886ab8 Bump dtolnay/rust-toolchain from e645b0cf01249a964ec099494d38d2da0f0b349f to 9cd00a88a73addc8617065438eff914dd08d0955 (#14968) 2023-02-02 07:21:46 -05:00
Patrick Cloke da8a957113 Make extension-module optional, but default. (#14965) 2023-02-01 19:01:06 -05:00
realtyem 58214dbb9b Allow enabling the asyncio reactor in complement (#14858)
Signed-off-by: Jason Little realtyem@gmail.com
2023-02-01 23:42:45 +00:00
dependabot[bot] 1d3a54aa30 Bump hiredis from 2.0.0 to 2.1.1 (#14939)
* Bump hiredis from 2.0.0 to 2.1.1

Bumps [hiredis](https://github.com/redis/hiredis-py) from 2.0.0 to 2.1.1.
- [Release notes](https://github.com/redis/hiredis-py/releases)
- [Changelog](https://github.com/redis/hiredis-py/blob/master/CHANGELOG.md)
- [Commits](https://github.com/redis/hiredis-py/compare/v2.0.0...v2.1.1)

---
updated-dependencies:
- dependency-name: hiredis
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-01 23:25:15 +00:00
Patrick Cloke 1182ae5063 Add helper to parse an enum from query args & use it. (#14956)
The `parse_enum` helper pulls an enum value from the query string
(by delegating down to the parse_string helper with values generated
from the enum).

This is used to pull out "f" and "b" in most places and then we thread
the resulting Direction enum throughout more code.
2023-02-01 21:35:24 +00:00
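A self-contained sketch of the `parse_enum` idea; Synapse's real helper sits on top of its `parse_string` and Twisted's request object, whereas this version just takes a dict. The `Direction` values match the "f"/"b" strings mentioned above:

```python
import enum
from typing import Mapping, Type, TypeVar

class Direction(enum.Enum):
    FORWARDS = "f"
    BACKWARDS = "b"

E = TypeVar("E", bound=enum.Enum)

def parse_enum(args: Mapping[str, str], name: str, enum_cls: Type[E], default: E) -> E:
    value = args.get(name)
    if value is None:
        return default
    allowed = {member.value for member in enum_cls}
    if value not in allowed:
        raise ValueError(f"{name} must be one of {sorted(allowed)}")
    return enum_cls(value)

# parse_enum({"dir": "b"}, "dir", Direction, Direction.FORWARDS)
# -> Direction.BACKWARDS
```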
Patrick Cloke 230a831c73 Attempt to delete more duplicate rows in receipts_linearized table. (#14915)
The previous assumption was that the stream_id column was unique
(for a room ID, receipt type, user ID tuple), but this turned out to be
incorrect.

Now find the max stream ID, then map this back to a database-specific
row identifier and delete other rows which match the (room ID, receipt type,
user ID) tuple, but *not* the row ID.
2023-02-01 15:45:10 -05:00
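A hedged sketch of that strategy, using SQLite's implicit rowid as the database-specific row identifier (Postgres would need its own, e.g. ctid); table and column names follow the commit message:

```python
import sqlite3

def delete_duplicate_receipts(
    conn: sqlite3.Connection, room_id: str, receipt_type: str, user_id: str
) -> None:
    # Find the row carrying the max stream ID for this tuple...
    row = conn.execute(
        """
        SELECT rowid FROM receipts_linearized
        WHERE room_id = ? AND receipt_type = ? AND user_id = ?
        ORDER BY stream_id DESC LIMIT 1
        """,
        (room_id, receipt_type, user_id),
    ).fetchone()
    if row is None:
        return
    # ...then delete every other row matching the tuple but not the row ID.
    conn.execute(
        """
        DELETE FROM receipts_linearized
        WHERE room_id = ? AND receipt_type = ? AND user_id = ?
          AND rowid != ?
        """,
        (room_id, receipt_type, user_id, row[0]),
    )
    conn.commit()
```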
dependabot[bot] bb675913f0 Bump docker/build-push-action from 3 to 4 (#14952)
* Bump docker/build-push-action from 3 to 4

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 3 to 4.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-02-01 20:06:28 +00:00
Dirk Klimpel bf82b56bab Add more user information to export-data command. (#14894)
* The user's profile information.
* The user's devices.
* The user's connections / IP address information.
2023-02-01 15:45:19 +00:00
David Robertson 1958f9de45 lnav config for synapse logs (#14953) 2023-02-01 12:36:04 +00:00
Patrick Cloke 73403d5e5e Fix inconsistencies between MSC3952 and implementation. (#14957)
* Correct the push rule IDs.
* Removes the sound tweak for room notifications.
2023-02-01 06:24:02 -05:00
H. Shay 41d177ca4a Merge branch 'master' into develop 2023-01-31 10:36:31 -08:00
H. Shay eafdb12dd8 update changelog and upgrade notes 2023-01-31 08:35:22 -08:00
H. Shay e4bf5f3b05 update changelog 2023-01-31 08:28:16 -08:00
H. Shay 9cb25b20e5 1.76.0 2023-01-31 08:23:07 -08:00
Patrick Cloke 585180594b Fix running cargo bench & test in CI. (#14943) 2023-01-31 08:00:07 -05:00
David Robertson 3b8574b4f2 Tag /send_join responses to detect faster joins (#14950)
* Tag /send_join responses to detect faster joins

* Changelog

* Define a proper SynapseTag

* isort
2023-01-31 12:43:20 +00:00
Sean Quah 805b641fb6 Fix "Re-starting finished log context" spam when creating events (#14947)
`run_in_background` calls re-use the current logging context. When they
are not awaited, they can complete after the current logging context has
been marked as finished, which leads to log spam. Use
`run_as_background_process` instead.

Fixes one of the instances of #13090.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-31 11:31:52 +00:00
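The shape of the fix, with a hypothetical task as the payload (the two launchers are Synapse's real helpers; only the task name is invented):

```python
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process

async def _do_some_cleanup() -> None:  # hypothetical fire-and-forget task
    ...

# Before: re-uses the caller's logging context. If that context finishes
# before the task does, Synapse logs "Re-starting finished log context".
run_in_background(_do_some_cleanup)

# After: the task gets its own logging context, tracked as a background
# process, so it can safely outlive the request that spawned it.
run_as_background_process("do_some_cleanup", _do_some_cleanup)
```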
Sean Quah 6d14fdc271 Make sqlite database migrations transactional again, part two (#14926)
#14910 fixed the regression introduced by #13873 where sqlite database
migrations would no longer run inside a transaction. However, it
committed the transaction before Synapse updated its bookkeeping of
which migrations have been run, which means that migrations may be run
again after they have completed successfully.

Leave the transaction open at the end of `executescript`, to restore the
old, correct behaviour. Also make the PostgreSQL behaviour consistent
with SQLite.

Fixes #14909.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-31 11:03:55 +00:00
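A simplified sketch of the behaviour being restored, assuming a connection opened with `isolation_level=None` so the transaction is controlled explicitly (the real `executescript` lives in Synapse's storage engine layer; the semicolon split here is naive and purely illustrative):

```python
import sqlite3

def executescript(conn: sqlite3.Connection, script: str) -> None:
    conn.execute("BEGIN")
    try:
        for statement in script.split(";"):
            if statement.strip():
                conn.execute(statement)
    except Exception:
        conn.rollback()
        raise
    # Deliberately no COMMIT here: the caller records the migration as
    # applied and then commits, so the schema change and the bookkeeping
    # land atomically. (Python's built-in Cursor.executescript commits any
    # pending transaction first, which is why a custom helper is needed.)
```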
David Robertson a134e626e4 Reject boolean power levels (#14944)
* Better test for bad values in power levels events

The previous test only checked that Synapse didn't raise an exception,
but didn't check that we had correctly interpreted the value of the
dodgy power level.

It also conflated two things: bad room notification levels, and bad user
levels. There _is_ logic for converting the latter to integers, but we
should test it separately.

* Check we ignore types that don't convert to int

* Handle `None` values in `notifications.room`

* Changelog

* Also test that bad values are rejected by event auth

* Docstring

* linter script

* Test boolean values in PL content

* Reject boolean power levels

* Changelog
2023-01-31 10:57:02 +00:00
David Robertson 796a4b7482 Prefer type(x) is int to isinstance(x, int) (#14945)
* Prefer `type(x) is int` to `isinstance(x, int)`

This covered all additional instances I could see where `x` was
user-controlled.
The remaining cases are

```
$ rg -s 'isinstance.*[^_]int'
tests/replication/_base.py
576:        if isinstance(obj, int):

synapse/util/caches/stream_change_cache.py
136:        assert isinstance(stream_pos, int)
214:        assert isinstance(stream_pos, int)
246:        assert isinstance(stream_pos, int)
267:        assert isinstance(stream_pos, int)

synapse/replication/tcp/external_cache.py
133:        if isinstance(result, int):

synapse/metrics/__init__.py
100:        if isinstance(calls, (int, float)):

synapse/handlers/appservice.py
262:        assert isinstance(new_token, int)

synapse/config/_util.py
62:        if isinstance(p, int):
```

which cover metrics, logic related to `jsonschema`, and replication and
data streams. AFAICS these are all internal to Synapse.

* Changelog
2023-01-31 10:33:07 +00:00
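The concrete footgun behind both this commit and the boolean power-level rejection above: `bool` is a subclass of `int`, so `isinstance` lets user-supplied booleans masquerade as integers:

```python
assert isinstance(True, int)     # passes: bool subclasses int
assert type(True) is not int     # `type(x) is int` rejects booleans
assert type(50) is int           # ...while still accepting genuine ints
```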
David Robertson 510d4b06e7 Handle malformed values of notification.room in power level events (#14942)
* Better test for bad values in power levels events

The previous test only checked that Synapse didn't raise an exception,
but didn't check that we had correctly interpreted the value of the
dodgy power level.

It also conflated two things: bad room notification levels, and bad user
levels. There _is_ logic for converting the latter to integers, but we
should test it separately.

* Check we ignore types that don't convert to int

* Handle `None` values in `notifications.room`

* Changelog

* Also test that bad values are rejected by event auth

* Docstring

* linter script
2023-01-30 21:29:30 +00:00
David Robertson cbb0ee43cc Initial batch of notes on faster joins (#14677)
Co-authored-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Shay <hillerys@element.io>
2023-01-30 21:27:52 +00:00
dependabot[bot] 43c7d814e6 Bump types-pillow from 9.4.0.3 to 9.4.0.5 (#14938)
* Bump types-pillow from 9.4.0.3 to 9.4.0.5

Bumps [types-pillow](https://github.com/python/typeshed) from 9.4.0.3 to 9.4.0.5.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pillow
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-30 10:32:51 +00:00
dependabot[bot] ed2b17bb9f Bump types-jsonschema from 4.17.0.2 to 4.17.0.3 (#14937)
* Bump types-jsonschema from 4.17.0.2 to 4.17.0.3

Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.17.0.2 to 4.17.0.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-30 10:32:27 +00:00
dependabot[bot] 1b3343c4b4 Bump types-pyyaml from 6.0.12.2 to 6.0.12.3 (#14936)
* Bump types-pyyaml from 6.0.12.2 to 6.0.12.3

Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.12.2 to 6.0.12.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-30 10:32:15 +00:00
dependabot[bot] 2b27a33bb6 Bump ijson from 3.1.4 to 3.2.0.post0 (#14935)
* Bump ijson from 3.1.4 to 3.2.0.post0

Bumps [ijson](https://github.com/ICRAR/ijson) from 3.1.4 to 3.2.0.post0.
- [Release notes](https://github.com/ICRAR/ijson/releases)
- [Changelog](https://github.com/ICRAR/ijson/blob/master/CHANGELOG.md)
- [Commits](https://github.com/ICRAR/ijson/compare/v3.1.4...v3.2.0.post0)

---
updated-dependencies:
- dependency-name: ijson
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-30 10:31:05 +00:00
Patrick Cloke 2a51f3ec36 Implement MSC3952: Intentional mentions (#14823)
MSC3952 defines push rules which searches for mentions in a list of
Matrix IDs in the event body, instead of searching the entire event
body for display name / local part.

This is implemented behind an experimental configuration flag and
does not yet implement the backwards compatibility pieces of the MSC.
2023-01-27 10:16:21 -05:00
David Robertson fca5617a0d Describe faster joins 2023-01-27 15:05:29 +00:00
David Robertson faecc6c083 Merge branch 'release-v1.76' into develop 2023-01-27 13:01:18 +00:00
Patrick Cloke 265735db9d Use an enum for direction. (#14927)
For better type safety we use an enum instead of strings to
configure direction (backwards or forwards).
2023-01-27 07:27:55 -05:00
David Robertson 5ef9ff54ef 1.76.0rc2 2023-01-27 11:18:36 +00:00
Patrick Cloke fc35e0673f Add missing type hints in tests (#14879)
* FIx-up type hints in tests.logging.
* Add missing type hints to test_transactions.
2023-01-26 14:45:24 -05:00
Patrick Cloke 345576bc34 Fix paginating /relations with a live token (#14866)
The `/relations` endpoint did not properly handle "live tokens"
(i.e. sync tokens); to handle them properly we abstract the code that
`/messages` uses and re-use it.
2023-01-26 13:24:15 -05:00
Patrick Cloke ba79fb4a61 Use StrCollection in place of Collection[str] in (most) handlers code. (#14922)
Due to the increased safety of StrCollection over Collection[str]
and Sequence[str].
2023-01-26 12:31:58 -05:00
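The footgun, made concrete: `str` is itself a `Collection[str]` (iterating a string yields one-character strings), so a bare string silently satisfies the annotation. `StrCollection` narrows to concrete containers; the alias below is sketched per its definition in Synapse, including the `AbstractSet` arm added in #14870:

```python
from typing import AbstractSet, List, Tuple, Union

StrCollection = Union[Tuple[str, ...], List[str], AbstractSet[str]]

def ban_users(user_ids: StrCollection) -> None:
    for user_id in user_ids:
        print("banning", user_id)

ban_users(["@a:example.com", "@b:example.com"])   # fine
ban_users(frozenset({"@a:example.com"}))          # fine, via AbstractSet
# ban_users("@a:example.com")  # now a type error; under Collection[str] it
#                              # would typecheck and "ban" "@", "a", ":", ...
```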
Patrick Cloke 8a05d5de21 Batch look-ups to see if rooms are partial stated. (#14917)
* Batch look-ups to see if rooms are partial stated.

* Fix issues found in linting.

* Fix typo.

* Apply suggestions from code review

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Clarify comments.

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Also improve the cache size while we're at it

* is_partial_state_rooms -> is_partial_state_room_batched

* Run `black`

* Improve annotation for `simple_select_many_batch`

* Fix is_partial_state_room_batched impl

* Okay, _actually_ fix impl

* Update description.

* Update synapse/storage/databases/main/room.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Run black.

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2023-01-26 17:15:36 +00:00
David Robertson dc901a885f Fix typo in release script (#14920)
* Fix typo in release script

* Changelog
2023-01-26 13:27:27 +00:00
Sean Quah cf66d712c6 Fix initialization of _device_list_id_gen (#14914)
On startup, the `_device_list_id_gen` stream id generator is initialized
using the maximum stream id seen in a list of tables. When we started
populating the `device_list_remote_pending` table in #13913, we forgot
to add it to the aforementioned list of tables, so the stream id
generator can hand out old stream ids after a restart. The end result is
that Synapse can fail to handle device list update EDUs after a restart
when a partial state join is in progress.

Add the `device_list_remote_pending` table to the list of tables to
consider when initializing the `_device_list_id_gen` stream id generator.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-26 10:38:49 +00:00
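A hedged sketch of the initialization pattern in question: the generator is seeded with the max stream id across every table the stream writes to, and the bug was simply one table missing from that list. Only `device_list_remote_pending` is named in the commit; the other table names here are illustrative:

```python
import sqlite3

TABLES = [
    ("device_lists_stream", "stream_id"),          # illustrative
    ("device_lists_outbound_pokes", "stream_id"),  # illustrative
    ("device_list_remote_pending", "stream_id"),   # the forgotten table
]

def current_stream_position(conn: sqlite3.Connection) -> int:
    max_seen = 0
    for table, column in TABLES:
        row = conn.execute(f"SELECT MAX({column}) FROM {table}").fetchone()
        if row and row[0] is not None:
            max_seen = max(max_seen, row[0])
    return max_seen
```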
Andrew Morgan 871ff05add Fix type hints in typing edu unit tests (#14886) 2023-01-26 10:15:50 +00:00
Patrick Cloke 7e8d455280 Fix a bug in the send_local_online_presence_to module API (#14880)
Destination was being used incorrectly (a single destination instead
of a list of destinations was being passed).

This also updates some of the types in the area to not use Collection[str],
which is a footgun.
2023-01-25 21:34:37 +00:00
Patrick Cloke 3c3ba31507 Add missing type hints for tests.events. (#14904) 2023-01-25 15:14:03 -05:00
Patrick Cloke 8bc5d1406c Document how to handle Dependabot pull requests. (#14916) 2023-01-25 14:49:37 -05:00
Andrew Morgan 836c592f15 Fix type hints in knocking tests. (#14887) 2023-01-25 14:38:20 -05:00
David Robertson f51035bc87 Fix link syntax in changelog 2023-01-25 16:44:04 +00:00
David Robertson 58fa1ed21e Refer to upgrade notes 2023-01-25 16:41:55 +00:00
David Robertson 5f25fa358d Touch-up the features section 2023-01-25 16:41:42 +00:00
David Robertson 48e3ad8a06 Group dependabot lines 2023-01-25 16:41:32 +00:00
David Robertson 8a7d2de51f 1.76.0rc1 2023-01-25 16:21:27 +00:00
David Robertson 8e37ece015 Bump the client-side timeout for /state (#14912)
* Bump the client-side timeout for /state

to allow faster joins resyncs the chance to complete for large rooms.
We have seen this fare poorly (~90s for Matrix HQ's /state) in testing,
causing the resync to advance to another HS who hasn't seen our join yet.

* Changelog

* Milliseconds!!!!
2023-01-25 16:11:06 +00:00
Sean Quah a63d4cc9e9 Make sqlite database migrations transactional again (#14910)
#13873 introduced a regression which causes sqlite database migrations
to no longer run inside a transaction. Wrap them in a transaction again,
to avoid database corruption when migrations are interrupted.

Fixes #14909.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-25 13:38:53 +00:00
ZAID BIN TARIQ b15f0758e5 Document the export user data command. (#14883) 2023-01-25 07:01:27 -05:00
David Robertson 4607be0b7b Request partial joins by default (#14905)
* Request partial joins by default

This is a little sloppy, but we are trying to gain confidence in faster
joins in the upcoming RC.

Admins can still opt out by adding the following to their Synapse
config:

```yaml
experimental:
    faster_joins: false
```

We may revert this change before the release proper, depending on how
testing in the wild goes.

* Changelog

* Try to fix the backfill test failures

* Upgrade notes

* Postgres compat?
2023-01-24 15:28:20 +00:00
David Robertson 80d44060c9 Faster joins: omit partial rooms from eager syncs until the resync completes (#14870)
* Allow `AbstractSet` in `StrCollection`

Or else frozensets are excluded. This will be useful in an upcoming
commit where I plan to change a function that accepts `List[str]` to
accept `StrCollection` instead.

* `rooms_to_exclude` -> `rooms_to_exclude_globally`

I am about to make use of this exclusion mechanism to exclude rooms for
a specific user and a specific sync. This rename helps to clarify the
distinction between the global config and the rooms to exclude for a
specific sync.

* Better function names for internal sync methods

* Track a list of excluded rooms on SyncResultBuilder

I plan to feed a list of partially stated rooms for this sync to ignore

* Exclude partial state rooms during eager sync

using the mechanism established in the previous commit

* Track un-partial-state stream in sync tokens

So that we can work out which rooms have become fully-stated during a
given sync period.

* Fix mutation of `@cached` return value

This was fouling up a complement test added alongside this PR.
Excluding a room would mean the set of forgotten rooms in the cache
would be extended. This means that room could be erroneously considered
forgotten in the future.

Introduced in #12310, Synapse 1.57.0. I don't think this had any
user-visible side effects (until now).

* SyncResultBuilder: track rooms to force as newly joined

Similar plan as before. We've omitted rooms from certain sync responses;
now we establish the mechanism to reintroduce them into future syncs.

* Read new field, to present rooms as newly joined

* Force un-partial-stated rooms to be newly-joined

for eager incremental syncs only, provided they're still fully stated

* Notify user stream listeners to wake up long polling syncs

* Changelog

* Typo fix

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Unnecessary list cast

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Rephrase comment

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Another comment

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

* Fixup merge(?)

* Poke notifier when receiving un-partial-stated msg over replication

* Fixup merge whoops

Thanks MV :)

Co-authored-by: Mathieu Velen <mathieuv@matrix.org>

Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2023-01-23 15:44:39 +00:00
dependabot[bot] 5e75771ece Bump ruff from 0.0.224 to 0.0.230 (#14897) 2023-01-23 09:32:07 -05:00
dependabot[bot] 19f325387b Bump types-opentracing from 2.4.10 to 2.4.10.1 (#14896) 2023-01-23 09:26:15 -05:00
dependabot[bot] 18ace676d8 Bump types-commonmark from 0.9.2 to 0.9.2.1 (#14901) 2023-01-23 09:22:38 -05:00
dependabot[bot] 641d3e3081 Bump types-psycopg2 from 2.9.21.2 to 2.9.21.4 (#14900) 2023-01-23 09:21:36 -05:00
dependabot[bot] 6005befa23 Bump types-requests from 2.28.11.7 to 2.28.11.8 (#14899) 2023-01-23 09:13:26 -05:00
Patrick Cloke 82d3efa312 Skip processing stats for broken rooms. (#14873)
* Skip processing stats for broken rooms.

* Newsfragment

* Use a custom exception.
2023-01-23 11:36:20 +00:00
Sean Quah 2ec9c58496 Faster joins: Update room stats and the user directory on workers when finishing join (#14874)
* Faster joins: Update room stats and user directory on workers when done

When finishing a partial state join to a room, we update the current
state of the room without persisting additional events. Workers receive
notice of the current state update over replication, but neglect to wake
the room stats and user directory updaters, which then get incidentally
triggered the next time an event is persisted or an unrelated event
persister sends out a stream position update.

We wake the room stats and user directory updaters at the appropriate
time in this commit.

Part of #12814 and #12815.

Signed-off-by: Sean Quah <seanq@matrix.org>

* fixup comment

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-23 10:31:36 +00:00
reivilibre 22cc93afe3 Enable Faster Remote Room Joins against worker-mode Synapse. (#14752)
* Enable Complement tests for Faster Remote Room Joins on worker-mode

* (dangerous) Add an override to allow Complement to use FRRJ under workers

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

* Fix race where we didn't send out replication notification

* MORE HACKS

* Fix get_un_partial_stated_rooms_token to take instance_name

* Fix bad merge

* Remove warning

* Correctly advance un_partial_stated_room_stream

* Fix merge

* Add another notify_replication

* Fixups

* Create a separate ReplicationNotifier

* Fix test

* Fix portdb

* Create a separate ReplicationNotifier

* Fix test

* Fix portdb

* Fix presence test

* Newsfile

* Apply suggestions from code review

* Update changelog.d/14752.misc

Co-authored-by: Erik Johnston <erik@matrix.org>

* lint

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
2023-01-22 21:10:11 +00:00
Sean Quah d329a566df Faster joins: Fix incompatibility with restricted joins (#14882)
* Avoid clearing out forward extremities when doing a second remote join

When joining a restricted room where the local homeserver does not have
a user able to issue invites, we perform a second remote join. We want
to avoid clearing out forward extremities in this case because the
forward extremities we have are up to date and clearing out forward
extremities creates a window in which the room can get bricked if
Synapse crashes.

Signed-off-by: Sean Quah <seanq@matrix.org>

* Do a full join when doing a second remote join into a full state room

We cannot persist a partial state join event into a joined full state
room, so we perform a full state join for such rooms instead. As a
future optimization, we could always perform a partial state join and
compute or retrieve the full state ourselves if necessary.

Signed-off-by: Sean Quah <seanq@matrix.org>

* Add lock around partial state flag for rooms

Signed-off-by: Sean Quah <seanq@matrix.org>

* Preserve partial state info when doing a second partial state join

Signed-off-by: Sean Quah <seanq@matrix.org>

* Add newsfile

* Add a TODO(faster_joins) marker

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-22 19:19:31 +00:00
Andrew Morgan f075f6ae2b Fix type hints for Monthly Active Users tests (#14889) 2023-01-22 10:50:14 +01:00
Andrew Morgan 8d90e5f200 Add type hints to TestRatelimiter (#14885) 2023-01-21 15:59:15 +00:00
Erik Johnston 0ec12a3753 Reduce max time we wait for stream positions (#14881)
Now that we wait for stream positions whenever we do a HTTP replication
hit, we need to be less brutal in the case where we do timeout (as we
have bugs around this).
2023-01-20 21:04:33 +00:00
Erik Johnston 65d0386693 Always notify replication when a stream advances (#14877)
This ensures that all other workers are told about stream updates in a timely manner, without having to remember to manually poke replication.
2023-01-20 18:02:18 +00:00
katlol cf18fea9e1 Dockerfile: Bump Python version from 3.9 to 3.11 (#14875)
Closes https://github.com/matrix-org/synapse/issues/13234

Signed-off-by: Katia Esposito <1695469+katlol@users.noreply.github.com>

Signed-off-by: Katia Esposito <1695469+katlol@users.noreply.github.com>
2023-01-20 12:07:13 +00:00
Sean Quah cdea7c11d0 Faster joins: Avoid starting duplicate partial state syncs (#14844)
Currently, we will try to start a new partial state sync every time we
perform a remote join, which is undesirable if there is already one
running for a given room.

We intend to perform remote joins whenever additional local users wish
to join a partial state room, so let's ensure that we do not start more
than one concurrent partial state sync for any given room.

------------------------------------------------------------------------

There is a race condition where the homeserver leaves a room and later
rejoins while the partial state sync from the previous membership is
still running. There is no guarantee that the previous partial state
sync will process the latest join, so we restart it if needed.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-20 12:06:19 +00:00
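A simplified sketch of that dedup-and-restart logic, written against asyncio for self-containment (Synapse itself runs on Twisted); all names are illustrative:

```python
import asyncio
from typing import Dict, Set

_active_syncs: Dict[str, asyncio.Task] = {}
_restart_requested: Set[str] = set()

async def _do_partial_state_sync(room_id: str) -> None:
    ...  # resync the room's state from a remote server

def start_partial_state_sync(room_id: str) -> None:
    task = _active_syncs.get(room_id)
    if task is not None and not task.done():
        # A sync is already running; ask it to go around again so it is
        # guaranteed to observe our latest (re)join.
        _restart_requested.add(room_id)
        return
    _active_syncs[room_id] = asyncio.get_running_loop().create_task(_run(room_id))

async def _run(room_id: str) -> None:
    while True:
        _restart_requested.discard(room_id)
        await _do_partial_state_sync(room_id)
        if room_id not in _restart_requested:
            break  # no rejoin happened while we were syncing
    _active_syncs.pop(room_id, None)
```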
Erik Johnston cdf2707678 Fix bug in wait for stream position (#14872)
This caused some requests to fail.

This really only started causing issues due to #14856
2023-01-19 22:19:56 +00:00
Andrew Morgan a7b54ca8d8 Implement MSC3930: polls push rules (#14787) 2023-01-19 12:47:10 +00:00
Richard van der Hoff 2069231645 Update logging_sample_config.md (#14868)
You do not have to restart synapse to reload the log config.
2023-01-19 11:58:17 +00:00
Erik Johnston 9187fd940e Wait for streams to catch up when processing HTTP replication. (#14820)
This should hopefully mitigate a class of races where data gets out of
sync due to an HTTP replication request racing with the replication streams.
2023-01-18 19:35:29 +00:00
Catalan Lover e8f2bf5c40 Change default room version to 10. Implements MSC3904 (#14111)
* Change Documentation to have v10 as default room version

* Change Default Room version to 10

* Add changelog entry for default room version swap

* Add changelog entry for v10 default room version in docs

* Clarify doc changelog entry

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>

* Improve Documentation changes.

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>

* Update Changelog entry to have correct format

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>

* Update Spec Version to 1.5

* Only need 1 changelog.

* Fix test.

* Update "Changed in" line

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Patrick Cloke <patrickc@matrix.org>
2023-01-18 18:59:48 +00:00
Patrick Cloke 4d6b1d3c47 Properly check for frozendicts in event auth code. (#14864)
Check for an instance of a mapping instead of a dict.

This only affects room version 10 when frozen events are enabled.
2023-01-18 09:27:57 -05:00
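The one-line essence of the fix, with `types.MappingProxyType` standing in for a frozen event's content (a mapping that is not a literal dict):

```python
from collections.abc import Mapping
from types import MappingProxyType

content = MappingProxyType({"membership": "join"})
assert not isinstance(content, dict)  # a dict-only check wrongly rejects it
assert isinstance(content, Mapping)   # the Mapping check accepts it
```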
dependabot[bot] e1b2c7095d Bump packaging from 22.0 to 23.0 (#14847)
Bumps [packaging](https://github.com/pypa/packaging) from 22.0 to 23.0.
- [Release notes](https://github.com/pypa/packaging/releases)
- [Changelog](https://github.com/pypa/packaging/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pypa/packaging/compare/22.0...23.0)

---
updated-dependencies:
- dependency-name: packaging
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 19:16:43 +00:00
dependabot[bot] 87e5f4599a Bump phonenumbers from 8.13.2 to 8.13.4 (#14849)
Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.2 to 8.13.4.
- [Release notes](https://github.com/daviddrysdale/python-phonenumbers/releases)
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.2...v8.13.4)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 19:09:15 +00:00
dependabot[bot] f1135a7930 Bump sentry-sdk from 1.12.1 to 1.13.0 (#14852)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.12.1 to 1.13.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.12.1...1.13.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-01-17 19:01:29 +00:00
dependabot[bot] 3a777e7dc2 Bump ruff from 0.0.215 to 0.0.224 (#14862)
* Bump ruff from 0.0.215 to 0.0.224

Bumps [ruff](https://github.com/charliermarsh/ruff) from 0.0.215 to 0.0.224.
- [Release notes](https://github.com/charliermarsh/ruff/releases)
- [Changelog](https://github.com/charliermarsh/ruff/blob/main/BREAKING_CHANGES.md)
- [Commits](https://github.com/charliermarsh/ruff/compare/v0.0.215...v0.0.224)

---
updated-dependencies:
- dependency-name: ruff
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-17 18:47:47 +00:00
dependabot[bot] a34682f7d6 Bump types-pillow from 9.4.0.0 to 9.4.0.3 (#14863)
* Bump types-pillow from 9.4.0.0 to 9.4.0.3

Bumps [types-pillow](https://github.com/python/typeshed) from 9.4.0.0 to 9.4.0.3.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pillow
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-17 18:39:50 +00:00
dependabot[bot] 4389b8518f Bump peaceiris/actions-gh-pages from 3.9.1 to 3.9.2 (#14861)
* Bump peaceiris/actions-gh-pages from 3.9.1 to 3.9.2

Bumps [peaceiris/actions-gh-pages](https://github.com/peaceiris/actions-gh-pages) from 3.9.1 to 3.9.2.
- [Release notes](https://github.com/peaceiris/actions-gh-pages/releases)
- [Changelog](https://github.com/peaceiris/actions-gh-pages/blob/main/CHANGELOG.md)
- [Commits](https://github.com/peaceiris/actions-gh-pages/compare/64b46b4226a4a12da2239ba3ea5aa73e3163c75b...bd8c6b06eba6b3d25d72b7a1767993c0aeee42e7)

---
updated-dependencies:
- dependency-name: peaceiris/actions-gh-pages
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-17 18:28:04 +00:00
David Robertson b88cfe6d41 Require poetry>=1.3.2 (#14860)
* Upgrade to new lockfile format

Now requires poetry >= 1.2.2 to read and poetry >= 1.3.0 to write.

Cheat sheet:

```
poetry --version
poetry show > scratch/before
pipx upgrade poetry
poetry --version
poetry show > scratch/after
diff scratch{before,after} && echo "no change!"
```

* Use Poetry 1.3.2 when reading or writing lockfile

* Remove unneeded(?) poetry dep for cibuildwheel

* Update docs

* Remove redundant call to setup-python

* Remove outdated comments related to Poetry 1.x

* Remove outdated docs line

was fixed in #13082

* Minor improvements to poetry cheat sheet

* Invoke setup-python-poetry with explicit version

Not sure about this. It's hardcoding versions everywhere.

* Changelog

* Check the lockfile is version 2.0

Might one day incorporate other checks like #14742

* Typo fixes, thanks Sean

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2023-01-17 18:04:44 +00:00
David Robertson f820740b7d Merge branch 'master' into develop 2023-01-17 12:45:50 +00:00
David Robertson 5b3af1c7d0 Stabilise serving partial join responses (#14839)
Serving partial join responses is no longer experimental. They will only be served under the stable identifier if the undocumented config flag experimental.msc3706_enabled is set to true.

Synapse continues to request a partial join only if the undocumented config flag experimental.faster_joins is set to true; this setting remains present and unaffected.
2023-01-17 12:44:15 +00:00
David Robertson b6955673bf 1.75.0 2023-01-17 11:36:22 +00:00
Erik Johnston 316590d1ea Fix bug in wait_for_stream_position (#14856)
We were incorrectly checking if the *local* token had been advanced, rather than the token for the remote instance.

In practice, I don't think this has caused any bugs due to where we use `wait_for_stream_position`, as critically we don't use it on instances that also write to the given streams (and so the local token will lag behind all remote tokens).
2023-01-17 09:58:22 +00:00
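The bug class, reduced to a sketch (names illustrative, not Synapse's actual id-generator API): the waiter must compare against the token of the remote writer instance, not its own:

```python
from typing import Dict

# Last stream position we have seen from each writer instance.
current_positions: Dict[str, int] = {"worker1": 10, "worker2": 7}

def has_caught_up(instance_name: str, target_position: int) -> bool:
    # Buggy version compared our *local* token:
    #     return current_positions[LOCAL_INSTANCE] >= target_position
    # Correct: check the instance that actually wrote the position.
    return current_positions.get(instance_name, 0) >= target_position
```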
Erik Johnston 2b084c5b71 Merge device list replication streams (#14833) 2023-01-17 09:29:58 +00:00
Sean Quah db5145a31d Add parameter to control whether we do a partial state join (#14843)
When the local homeserver is already joined to a room and wants to
perform another remote join, we may find it useful to do a non-partial
state join if we already have the full state for the room.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-16 23:15:17 +00:00
Erik Johnston 4db3331bb9 Add an early return when handling no-op presence updates. (#14855)
This stops us from incrementing the presence stream position for no-op updates.
2023-01-16 14:20:12 +00:00
Sean Quah a302d3ecf7 Remove unnecessary reactor reference from _PerHostRatelimiter (#14842)
Fix up #14812 to avoid introducing a reference to the reactor.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-16 13:16:19 +00:00
Rhea Danzey 7801fd74da Fix missing field in AS documentation (#14845)
* Fix missing field in AS documentation

The [AS Configuration Snippet](https://matrix-org.github.io/synapse/latest/application_services.html) is missing `id` field, without it Synapse will fail to load:

```
synapse-synapse-main-0 synapse 2023-01-13 23:05:25,450 - synapse.storage.databases - 84 - INFO - main - [database config 'master']: Starting 'main' database
synapse-synapse-main-0 synapse 2023-01-13 23:05:25,452 - synapse.config.appservice - 79 - ERROR - main - Failed to load appservice from '/as/synapse-hookshot-as/registration.yaml'
synapse-synapse-main-0 synapse 2023-01-13 23:05:25,452 - synapse.config.appservice - 80 - ERROR - main - "Required string field: 'id' (/as/synapse-hookshot-as/registration.yaml)"
synapse-synapse-main-0 synapse Traceback (most recent call last):
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/config/appservice.py", line 57, in load_appservices
synapse-synapse-main-0 synapse     appservice = _load_appservice(hostname, yaml.safe_load(f), config_file)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/config/appservice.py", line 91, in _load_appservice
synapse-synapse-main-0 synapse     raise KeyError(
synapse-synapse-main-0 synapse KeyError: "Required string field: 'id' (/as/synapse-hookshot-as/registration.yaml)"
synapse-synapse-main-0 synapse 2023-01-13 23:05:25,452 - synapse.app._base - 207 - ERROR - main - Exception during startup
synapse-synapse-main-0 synapse Traceback (most recent call last):
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/app/homeserver.py", line 340, in setup
synapse-synapse-main-0 synapse     hs.setup()
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/server.py", line 310, in setup
synapse-synapse-main-0 synapse     self.datastores = Databases(self.DATASTORE_CLASS, self)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/__init__.py", line 93, in __init__
synapse-synapse-main-0 synapse     main = main_store_class(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/__init__.py", line 139, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/events_bg_updates.py", line 98, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/devices.py", line 1584, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/devices.py", line 89, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/roommember.py", line 1494, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/room.py", line 1827, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/room.py", line 1365, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/room.py", line 119, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/registration.py", line 2158, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/presence.py", line 67, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/presence.py", line 48, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/transactions.py", line 73, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/state.py", line 666, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/state.py", line 82, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/state.py", line 470, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/event_federation.py", line 2007, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/media_repository.py", line 148, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/media_repository.py", line 68, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/push_rule.py", line 330, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/event_push_actions.py", line 1938, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/metrics.py", line 68, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/event_push_actions.py", line 249, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/end_to_end_keys.py", line 1181, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/search.py", line 426, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/search.py", line 137, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/account_data.py", line 64, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/push_rule.py", line 114, in __init__
synapse-synapse-main-0 synapse     super().__init__(database, db_conn, hs)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/storage/databases/main/appservice.py", line 76, in __init__
synapse-synapse-main-0 synapse     self.services_cache = load_appservices(
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/config/appservice.py", line 57, in load_appservices
synapse-synapse-main-0 synapse     appservice = _load_appservice(hostname, yaml.safe_load(f), config_file)
synapse-synapse-main-0 synapse   File "/usr/local/lib/python3.9/site-packages/synapse/config/appservice.py", line 91, in _load_appservice
synapse-synapse-main-0 synapse     raise KeyError(
synapse-synapse-main-0 synapse KeyError: "Required string field: 'id' (/as/synapse-hookshot-as/registration.yaml)"
synapse-synapse-main-0 synapse ******************************************************************************
synapse-synapse-main-0 synapse  Error during initialisation:
synapse-synapse-main-0 synapse     "Required string field: 'id' (/as/synapse-hookshot-as/registration.yaml)"
synapse-synapse-main-0 synapse  There may be more information in the logs.
synapse-synapse-main-0 synapse ******************************************************************************
```
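The KeyError at the bottom of that traceback comes from Synapse's registration-file validation: `_load_appservice` rejects the file when a required field is missing or is not a string. A minimal sketch of that kind of check (only `id` is confirmed by the traceback; the other field names are assumptions for illustration — see `synapse/config/appservice.py` for the real logic):

```python
from typing import Any, Dict

def check_required_string_fields(as_info: Dict[str, Any], config_filename: str) -> None:
    """Illustrative re-creation of the check that raised the KeyError above."""
    # 'id' is the field enforced in the traceback; the remaining names are
    # assumed here for illustration of what a registration file needs.
    for field in ("id", "as_token", "hs_token", "sender_localpart"):
        if not isinstance(as_info.get(field), str):
            raise KeyError(f"Required string field: '{field}' ({config_filename})")
```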

* Changelog
2023-01-16 12:59:15 +00:00
David Robertson 85a7a201fa Also use stable name in SendJoinResponse struct (#14841)
* Also use stable name in SendJoinResponse struct

follow-up to #14832

* Changelog

* Fix a rename I missed

* Run black

* Update synapse/federation/federation_client.py

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>

Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2023-01-16 12:40:25 +00:00
dependabot[bot] 5f171c1651 Bump regex from 1.7.0 to 1.7.1 (#14848)
* Bump regex from 1.7.0 to 1.7.1

Bumps [regex](https://github.com/rust-lang/regex) from 1.7.0 to 1.7.1.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.7.0...1.7.1)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2023-01-16 10:51:55 +00:00
Andrew Morgan 54cd90ea60 Implement MSC3890: Remotely silence local notifications (#14775) 2023-01-13 19:32:10 +00:00
David Robertson 52ae80dd1a Use stable identifiers for faster joins (#14832)
* Use new query param when requesting a partial join

* Read new query param when serving partial join

* Provide new field names when serving partial joins

* Read new field names from partial join response

* Changelog
2023-01-13 17:58:53 +00:00
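The migration pattern behind this checklist is to emit the stable identifiers while still accepting the unstable, MSC-prefixed ones from servers that have not yet upgraded. A hedged sketch of the read side (the field names below are illustrative placeholders, not necessarily the exact wire format):

```python
from typing import Any, Mapping

# Hypothetical field names: a stable name plus its unstable MSC3706-era
# predecessor. Swap in the real identifiers from the MSC as needed.
STABLE_KEY = "members_omitted"
UNSTABLE_KEY = "org.matrix.msc3706.partial_state"

def read_members_omitted(response: Mapping[str, Any]) -> bool:
    """Prefer the stable field name, falling back to the unstable one."""
    for key in (STABLE_KEY, UNSTABLE_KEY):
        if key in response:
            return bool(response[key])
    return False
```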
Erik Johnston 73ff493dfb Merge account data streams (#14826) 2023-01-13 14:57:43 +00:00
Tejaswini Gurram 1416096527 Update misleading documentation for user_directory.search_all_users (#14818)
Fixes #13852
2023-01-13 14:46:21 +00:00
Dirk Klimpel 8d5325ec0c Drop unused table presence (#14825) 2023-01-13 14:17:03 +00:00
Dirk Klimpel 1caf16a450 Add worker_manhole to configuration manual (#14824)
Closes: #13643
2023-01-13 14:14:39 +00:00
villepeh d344bc8b6e Include x_forwarded in workers example configs (#14667) 2023-01-13 14:06:58 +00:00
Andrew Morgan 3a125625e7 Add some clarifying comments and refactor a portion of the Keyring class for readability (#14804) 2023-01-13 12:37:28 +00:00
Sean Quah 772e8c2385 Fix stack overflow in _PerHostRatelimiter due to synchronous requests (#14812)
When there are many synchronous requests waiting on a
`_PerHostRatelimiter`, each request will be started recursively just
after the previous request has completed. Under the right conditions,
this leads to stack exhaustion.

A common way for requests to become synchronous is when the remote
client disconnects early, because the homeserver is overloaded and slow
to respond.

Avoid stack exhaustion under these conditions by deferring subsequent
requests until the next reactor tick.

Fixes #14480.

Signed-off-by: Sean Quah <seanq@matrix.org>
2023-01-13 00:16:21 +00:00
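The fix described here is a standard Twisted idiom: when a Deferred may already have fired (because the request completed synchronously), chaining the next piece of work directly from its callback recurses and grows the stack, so the follow-up is scheduled with `reactor.callLater(0, ...)` instead. A minimal sketch of the idea, not Synapse's actual `_PerHostRatelimiter`:

```python
from typing import Callable, List

from twisted.internet import reactor
from twisted.internet.defer import Deferred

def drain_queue(queue: List[Callable[[], Deferred]]) -> None:
    """Start queued requests one at a time without growing the call stack."""
    if not queue:
        return
    d = queue.pop(0)()
    # If `d` has already fired (e.g. the client disconnected early), an
    # addBoth callback runs synchronously; callLater(0, ...) returns control
    # to the reactor so the next request starts from a fresh stack frame.
    d.addBoth(lambda _: reactor.callLater(0, drain_queue, queue))
```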
H. Shay 12083d37a8 Merge branch 'release-v1.75' into develop 2023-01-12 12:40:09 -08:00
H. Shay ea45257199 1.75.0rc2 2023-01-12 10:30:54 -08:00
Richard van der Hoff 0f061f39f0 Merge remote-tracking branch 'origin/release-v1.75' into develop 2023-01-12 16:45:23 +00:00
Andrew Morgan f5ea9f2b1d Add rust linting commands to scripts-dev/lint.sh (#14822) 2023-01-12 16:20:34 +00:00
Erik Johnston b50c008453 Re-enable some linting (#14821)
* Re-enable some linting

* Newsfile

* Remove comment
2023-01-12 10:52:07 +00:00
Erik Johnston 84ce93c12f Fix race calling /members?at= (#14817)
Fixes #14814
2023-01-12 10:29:09 +00:00
Emelie Graven dd9e71dc7f Add set_displayname to the module API (#14629) 2023-01-11 18:41:52 +00:00
Patrick Cloke 071f8b0f9b Factor out common code in tests and fix comments. (#14819) 2023-01-11 13:36:41 -05:00
Andrew Morgan f4d2a734f9 Remove outdated commands from the code style doc & point to the contributing guide. (#14773) 2023-01-11 15:21:12 +00:00
reivilibre 5172c8c403 Faster remote room joins (worker mode): do not populate external hosts-in-room cache when sending events as this requires blocking for full state. (#14749)
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
Co-authored-by: Sean Quah <seanq@matrix.org>
2023-01-11 13:21:53 +00:00
Patrick Cloke 7f2cabf271 Fix-up type hints for tests.push module. (#14816) 2023-01-11 07:35:40 -05:00
reivilibre d6bda5addd Add index to improve performance of the /timestamp_to_event endpoint used for jumping to a specific date in the timeline of a room. (#14799) 2023-01-11 12:29:13 +00:00
Patrick Cloke 3952297f6f Calculate rooms changed for device lists to work. (#14810)
Back-out some changes from 7e582a25f8
(#14786) which skipped necessary logic to calculate device lists properly.
2023-01-11 12:16:41 +00:00
Dirk Klimpel 73f097888e Add listener health (#14747)
Fixes: #8780
2023-01-11 12:00:38 +00:00
Andrew Morgan 7b3a8f2b0c Add poetry.toml to .gitignore (#14807) 2023-01-11 11:44:13 +00:00
Dirk Klimpel bc7ca704dd Add tag to listeners documentation (#14803)
* Add `tag` to `listeners` documentation

* newsfile
2023-01-11 10:47:44 +00:00
Richard van der Hoff 06ab64f201 Implement MSC3925: changes to bundling of edits (#14811)
Two parts to this:

 * Bundle the whole of the replacement with any edited events. This is backwards-compatible so I haven't put it behind a flag.
 * Optionally, inhibit server-side replacement of edited events. This has scope to break things, so it is currently disabled by default.
2023-01-10 16:31:28 +00:00
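Concretely, "bundle the whole of the replacement" means the aggregation served under `unsigned` → `m.relations` → `m.replace` on the original event carries the complete edit event rather than a trimmed summary of it. A shape-only illustration in Python (the event IDs, sender and content below are invented for the example):

```python
edited_event = {
    "event_id": "$original",
    "type": "m.room.message",
    "content": {"msgtype": "m.text", "body": "first draft"},
    "unsigned": {
        "m.relations": {
            # Per MSC3925 this is now the full replacement event, not just
            # selected fields of it.
            "m.replace": {
                "event_id": "$edit",
                "type": "m.room.message",
                "sender": "@alice:example.com",
                "content": {
                    "msgtype": "m.text",
                    "body": "* second draft",
                    "m.new_content": {"msgtype": "m.text", "body": "second draft"},
                    "m.relates_to": {"rel_type": "m.replace", "event_id": "$original"},
                },
            }
        }
    },
}
```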
262 changed files with 8298 additions and 4147 deletions
+10 -1
@@ -50,7 +50,9 @@ def cpython(wheel_file: str, name: str, version: Version, tag: Tag) -> str:
    check_is_abi3_compatible(wheel_file)

    abi3_tag = Tag(tag.interpreter, "abi3", tag.platform)
    # DMR: surely this won't make it work?
    platform = tag.platform.replace("macosx_11_0", "macosx_10_16")
    abi3_tag = Tag(tag.interpreter, "abi3", platform)

    dirname = os.path.dirname(wheel_file)
    new_wheel_file = os.path.join(
@@ -100,6 +102,13 @@ def main(wheel_file: str, dest_dir: str, archs: Optional[str]) -> None:
    else:
        subprocess.run(["auditwheel", "repair", "-w", dest_dir, wheel_file], check=True)

    # DMR: INSPECT THE WHEEL NAUGHTY EVIL DEBUG
    from pip._internal.models.wheel import Wheel
    from pip._internal.utils.compatibility_tags import get_supported
    w = Wheel(wheel_file)
    tags = get_supported()
    print(f"w={w} tags={tags} supported={w.supported(tags)}")
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="Tag wheel as abi3 and repair it.")
+23
@@ -0,0 +1,23 @@
#! /usr/bin/env python
import sys

if sys.version_info < (3, 11):
    raise RuntimeError("Requires at least Python 3.11, to import tomllib")

import tomllib

with open("poetry.lock", "rb") as f:
    lockfile = tomllib.load(f)

try:
    lock_version = lockfile["metadata"]["lock-version"]
    assert lock_version == "2.0"
except Exception:
    print(
        """\
Lockfile is not version 2.0. You probably need to upgrade poetry on your local box
and re-run `poetry lock --no-update`. See the Poetry cheat sheet at
https://matrix-org.github.io/synapse/develop/development/dependencies.html
"""
    )
    raise
+1 -1
@@ -53,7 +53,7 @@ with open('pyproject.toml', 'w') as f:
"
python3 -c "$REMOVE_DEV_DEPENDENCIES"
pip install poetry==1.2.0
pip install poetry==1.3.2
poetry lock
echo "::group::Patched pyproject.toml"
+6 -4
@@ -23,8 +23,9 @@ poetry run python -m synapse.app.admin_cmd -c .ci/sqlite-config.yaml export-dat
--output-directory /tmp/export_data
# Test that the output directory exists and contains the rooms directory
dir="/tmp/export_data/rooms"
if [ -d "$dir" ]; then
dir_r="/tmp/export_data/rooms"
dir_u="/tmp/export_data/user_data"
if [ -d "$dir_r" ] && [ -d "$dir_u" ]; then
echo "Command successful, this test passes"
else
echo "No output directories found, the command fails against a sqlite database."
@@ -43,8 +44,9 @@ poetry run python -m synapse.app.admin_cmd -c .ci/postgres-config.yaml export-d
--output-directory /tmp/export_data2
# Test that the output directory exists and contains the rooms directory
dir2="/tmp/export_data2/rooms"
if [ -d "$dir2" ]; then
dir_r2="/tmp/export_data2/rooms"
dir_u2="/tmp/export_data2/user_data"
if [ -d "$dir_r2" ] && [ -d "$dir_u2" ]; then
echo "Command successful, this test passes"
else
echo "No output directories found, the command fails against a postgres database."
+1 -1
@@ -48,7 +48,7 @@ jobs:
type=pep440,pattern={{raw}}
- name: Build and push all platforms
uses: docker/build-push-action@v3
uses: docker/build-push-action@v4
with:
push: true
labels: "gitsha1=${{ github.sha }}"
+1 -1
@@ -58,7 +58,7 @@ jobs:
# Deploy to the target directory.
- name: Deploy to gh pages
uses: peaceiris/actions-gh-pages@64b46b4226a4a12da2239ba3ea5aa73e3163c75b # v3.9.1
uses: peaceiris/actions-gh-pages@bd8c6b06eba6b3d25d72b7a1767993c0aeee42e7 # v3.9.2
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./book
+4 -4
@@ -27,7 +27,7 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
@@ -37,7 +37,7 @@ jobs:
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
poetry-version: "1.2.0"
poetry-version: "1.3.2"
extras: "all"
# Dump installed versions for debugging.
- run: poetry run pip list > before.txt
@@ -61,7 +61,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
@@ -134,7 +134,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
+9 -8
@@ -107,9 +107,9 @@ jobs:
- ${{ startsWith(github.ref, 'refs/pull/') }}
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-11"
# # Don't build macos wheels on PR CI.
# - is_pr: true
# os: "macos-11"
# Don't build aarch64 wheels on mac.
- os: "macos-11"
arch: aarch64
@@ -127,7 +127,7 @@ jobs:
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.9.0 poetry==1.2.0
run: python -m pip install cibuildwheel==2.9.0
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'
@@ -139,21 +139,22 @@ jobs:
if: matrix.arch == 'aarch64'
run: echo 'CIBW_ARCHS_LINUX=aarch64' >> $GITHUB_ENV
- name: Only build a single wheel on PR
if: startsWith(github.ref, 'refs/pull/')
run: echo "CIBW_BUILD="cp37-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
# - name: Only build a single wheel on PR
# if: startsWith(github.ref, 'refs/pull/')
# run: echo "CIBW_BUILD="cp37-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
env:
# Skip testing for platforms which various libraries don't have wheels
# for, and so need extra build deps.
CIBW_TEST_SKIP: pp3{7,9}-* *i686* *musl*
CIBW_TEST_SKIP: pp3*-* *i686* *musl*
# Fix Rust OOM errors on emulated aarch64: https://github.com/rust-lang/cargo/issues/10583
CARGO_NET_GIT_FETCH_WITH_CLI: true
CIBW_ENVIRONMENT_PASS_LINUX: CARGO_NET_GIT_FETCH_WITH_CLI
- uses: actions/upload-artifact@v3
if: always()
with:
name: Wheel
path: ./wheelhouse/*.whl
+55 -12
@@ -33,11 +33,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
poetry-version: "1.3.2"
extras: "all"
- run: poetry run scripts-dev/generate_sample_config.sh --check
- run: poetry run scripts-dev/config-lint.sh
@@ -52,6 +51,15 @@ jobs:
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
- run: scripts-dev/check_schema_delta.py --force-colors
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: .ci/scripts/check_lockfile.py
lint:
uses: "matrix-org/backend-meta/.github/workflows/python-poetry-ci.yml@v2"
with:
@@ -88,6 +96,7 @@ jobs:
ref: ${{ github.event.pull_request.head.sha }}
- uses: matrix-org/setup-python-poetry@v1
with:
poetry-version: "1.3.2"
extras: "all"
- run: poetry run scripts-dev/check_pydantic_models.py
@@ -103,7 +112,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
components: clippy
@@ -125,7 +134,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: nightly-2022-12-01
components: clippy
@@ -145,7 +154,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
components: rustfmt
@@ -163,6 +172,7 @@ jobs:
- lint-pydantic
- check-sampleconfig
- check-schema-delta
- check-lockfile
- lint-clippy
- lint-rustfmt
runs-on: ubuntu-latest
@@ -211,7 +221,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
@@ -219,6 +229,7 @@ jobs:
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.job.python-version }}
poetry-version: "1.3.2"
extras: ${{ matrix.job.extras }}
- name: Await PostgreSQL
if: ${{ matrix.job.postgres-version }}
@@ -255,7 +266,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
@@ -294,6 +305,7 @@ jobs:
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: '3.7'
poetry-version: "1.3.2"
extras: "all test"
- run: poetry run trial -j6 tests
@@ -328,6 +340,7 @@ jobs:
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
extras: ${{ matrix.extras }}
- run: poetry run trial --jobs=2 tests
- name: Dump logs
@@ -373,7 +386,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
@@ -419,6 +432,7 @@ jobs:
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
poetry-version: "1.3.2"
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
env:
@@ -470,6 +484,7 @@ jobs:
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
id: run_tester_script
@@ -516,7 +531,7 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
@@ -526,8 +541,11 @@ jobs:
- run: |
set -o pipefail
POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
env:
POSTGRES: ${{ (matrix.database == 'Postgres') && 1 || '' }}
WORKERS: ${{ (matrix.arrangement == 'workers') && 1 || '' }}
name: Run Complement Tests
cargo-test:
@@ -544,13 +562,36 @@ jobs:
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: 1.58.1
- uses: Swatinem/rust-cache@v2
- run: cargo test
# We want to ensure that the cargo benchmarks still compile, which requires a
# nightly compiler.
cargo-bench:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@v3
- name: Install Rust
# There don't seem to be versioned releases of this action per se: for each rust
# version there is a branch which gets constantly rebased on top of master.
# We pin to a specific commit for paranoia's sake.
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@v2
- run: cargo bench --no-run
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}
@@ -562,6 +603,7 @@ jobs:
- portdb
- complement
- cargo-test
- cargo-bench
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v2
@@ -573,3 +615,4 @@ jobs:
skippable: |
lint-newsfile
cargo-test
cargo-bench
+4 -4
@@ -18,7 +18,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
@@ -43,7 +43,7 @@ jobs:
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
@@ -82,7 +82,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@e645b0cf01249a964ec099494d38d2da0f0b349f
uses: dtolnay/rust-toolchain@9cd00a88a73addc8617065438eff914dd08d0955
with:
toolchain: stable
- uses: Swatinem/rust-cache@v2
@@ -148,7 +148,7 @@ jobs:
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==1.2.0
pipx install poetry==1.3.2
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
+3
@@ -69,3 +69,6 @@ book/
# Poetry will create a setup.py, which we don't want to include.
/setup.py
# Don't include users' poetry configs
/poetry.toml
+212
@@ -1,3 +1,215 @@
Synapse 1.77.0rc1 (2023-02-07)
==============================
Features
--------
- Experimental support for [MSC3952](https://github.com/matrix-org/matrix-spec-proposals/pull/3952): intentional mentions. ([\#14823](https://github.com/matrix-org/synapse/issues/14823), [\#14943](https://github.com/matrix-org/synapse/issues/14943), [\#14957](https://github.com/matrix-org/synapse/issues/14957), [\#14958](https://github.com/matrix-org/synapse/issues/14958))
- Adds profile information, devices and connections to the user data export via command line. ([\#14894](https://github.com/matrix-org/synapse/issues/14894))
- Experimental support to suppress notifications from message edits ([MSC3958](https://github.com/matrix-org/matrix-spec-proposals/pull/3958)). ([\#14960](https://github.com/matrix-org/synapse/issues/14960))
- Improve performance when joining or sending an event in large rooms. ([\#14962](https://github.com/matrix-org/synapse/issues/14962))
Bugfixes
--------
- Fix a bug introduced in Synapse 1.53.0 where `next_batch` tokens from `/sync` could not be used with the `/relations` endpoint. ([\#14866](https://github.com/matrix-org/synapse/issues/14866))
- Fix a bug when using the `send_local_online_presence_to` module API. ([\#14880](https://github.com/matrix-org/synapse/issues/14880))
- Fix a bug introduced in Synapse 1.70.0 where the background updates to add non-thread unique indexes on receipts could fail when upgrading from 1.67.0 or earlier. ([\#14915](https://github.com/matrix-org/synapse/issues/14915))
- Fix a regression introduced in Synapse 1.69.0 which can result in database corruption when database migrations are interrupted on sqlite. ([\#14926](https://github.com/matrix-org/synapse/issues/14926))
- Fix a bug introduced in Synapse 1.68.0 where we were unable to service remote joins in rooms with `@room` notification levels set to `null` in their (malformed) power levels. ([\#14942](https://github.com/matrix-org/synapse/issues/14942))
- Fix a bug introduced in Synapse v1.64 where boolean power levels were erroneously permitted in [v10 rooms](https://spec.matrix.org/v1.5/rooms/v10/). ([\#14944](https://github.com/matrix-org/synapse/issues/14944))
- Fix a long-standing bug where sending messages on servers with presence enabled would spam "Re-starting finished log context" log lines. ([\#14947](https://github.com/matrix-org/synapse/issues/14947))
- Fix a bug introduced in Synapse 1.68.0 where logging from the Rust module was not properly logged. ([\#14976](https://github.com/matrix-org/synapse/issues/14976))
Internal Changes
----------------
- Run the integration test suites with the asyncio reactor enabled in CI. ([\#14858](https://github.com/matrix-org/synapse/issues/14858))
- Add missing type hints. ([\#14879](https://github.com/matrix-org/synapse/issues/14879), [\#14886](https://github.com/matrix-org/synapse/issues/14886), [\#14887](https://github.com/matrix-org/synapse/issues/14887), [\#14904](https://github.com/matrix-org/synapse/issues/14904), [\#14927](https://github.com/matrix-org/synapse/issues/14927), [\#14956](https://github.com/matrix-org/synapse/issues/14956))
- Improve performance of `/sync` in a few situations. ([\#14908](https://github.com/matrix-org/synapse/issues/14908), [\#14970](https://github.com/matrix-org/synapse/issues/14970))
- Document how to handle Dependabot pull requests. ([\#14916](https://github.com/matrix-org/synapse/issues/14916))
- Fix typo in release script. ([\#14920](https://github.com/matrix-org/synapse/issues/14920))
- Use `StrCollection` to avoid potential bugs with `Collection[str]`. ([\#14922](https://github.com/matrix-org/synapse/issues/14922))
- Bump ijson from 3.1.4 to 3.2.0.post0. ([\#14935](https://github.com/matrix-org/synapse/issues/14935))
- Bump types-pyyaml from 6.0.12.2 to 6.0.12.3. ([\#14936](https://github.com/matrix-org/synapse/issues/14936))
- Bump types-jsonschema from 4.17.0.2 to 4.17.0.3. ([\#14937](https://github.com/matrix-org/synapse/issues/14937))
- Bump types-pillow from 9.4.0.3 to 9.4.0.5. ([\#14938](https://github.com/matrix-org/synapse/issues/14938))
- Bump hiredis from 2.0.0 to 2.1.1. ([\#14939](https://github.com/matrix-org/synapse/issues/14939))
- Fix various long-standing bugs in Synapse's config, event and request handling where booleans were unintentionally accepted where an integer was expected. ([\#14945](https://github.com/matrix-org/synapse/issues/14945))
- Update build system requirements to allow building with poetry-core==1.5.0. ([\#14949](https://github.com/matrix-org/synapse/issues/14949))
- Faster joins: tag `v2/send_join/` requests to indicate if they served a partial join response. ([\#14950](https://github.com/matrix-org/synapse/issues/14950))
- Bump docker/build-push-action from 3 to 4. ([\#14952](https://github.com/matrix-org/synapse/issues/14952))
- Add an [lnav](https://lnav.org) config file for Synapse logs to `/contrib/lnav`. ([\#14953](https://github.com/matrix-org/synapse/issues/14953))
- Faster room joins: Refactor internal handling of servers in room to never store an empty list. ([\#14954](https://github.com/matrix-org/synapse/issues/14954))
- Allow running `cargo` without the `extension-module` option. ([\#14965](https://github.com/matrix-org/synapse/issues/14965))
- Bump dtolnay/rust-toolchain from e645b0cf01249a964ec099494d38d2da0f0b349f to 9cd00a88a73addc8617065438eff914dd08d0955. ([\#14968](https://github.com/matrix-org/synapse/issues/14968))
- Improve performance of joining and leaving large rooms with many local users. ([\#14971](https://github.com/matrix-org/synapse/issues/14971))
- Add denormalised event stream ordering column to membership state tables for future use. Contributed by Nick @ Beeper (@fizzadar). ([\#14979](https://github.com/matrix-org/synapse/issues/14979))
- Add tests for `_flatten_dict`. ([\#14981](https://github.com/matrix-org/synapse/issues/14981), [\#15002](https://github.com/matrix-org/synapse/issues/15002))
- Improve type hints. ([\#14983](https://github.com/matrix-org/synapse/issues/14983), [\#14984](https://github.com/matrix-org/synapse/issues/14984), [\#14985](https://github.com/matrix-org/synapse/issues/14985), [\#14987](https://github.com/matrix-org/synapse/issues/14987), [\#14988](https://github.com/matrix-org/synapse/issues/14988), [\#14990](https://github.com/matrix-org/synapse/issues/14990), [\#14991](https://github.com/matrix-org/synapse/issues/14991), [\#14992](https://github.com/matrix-org/synapse/issues/14992), [\#15007](https://github.com/matrix-org/synapse/issues/15007))
- Bump hiredis from 2.1.1 to 2.2.1. ([\#14993](https://github.com/matrix-org/synapse/issues/14993))
- Bump types-setuptools from 65.6.0.3 to 67.1.0.0. ([\#14994](https://github.com/matrix-org/synapse/issues/14994))
- Bump prometheus-client from 0.15.0 to 0.16.0. ([\#14995](https://github.com/matrix-org/synapse/issues/14995))
- Bump anyhow from 1.0.68 to 1.0.69. ([\#14996](https://github.com/matrix-org/synapse/issues/14996))
- Bump serde_json from 1.0.91 to 1.0.92. ([\#14997](https://github.com/matrix-org/synapse/issues/14997))
- Bump isort from 5.11.4 to 5.11.5. ([\#14998](https://github.com/matrix-org/synapse/issues/14998))
- Bump phonenumbers from 8.13.4 to 8.13.5. ([\#14999](https://github.com/matrix-org/synapse/issues/14999))
Synapse 1.76.0 (2023-01-31)
===========================
The 1.76 release is the first to enable faster joins ([MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706) and [MSC3902](https://github.com/matrix-org/matrix-spec-proposals/pull/3902)) by default. Admins can opt-out: see [the upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.76/docs/upgrade.md#faster-joins-are-enabled-by-default) for more details.
The upgrade from 1.75 to 1.76 changes the account data replication streams in a backwards-incompatible manner. Server operators running a multi-worker deployment should consult [the upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.76/docs/upgrade.md#changes-to-the-account-data-replication-streams).
Those who are `poetry install`ing from source using our lockfile should ensure their poetry version is 1.3.2 or higher; [see upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.76/docs/upgrade.md#minimum-version-of-poetry-is-now-132).
Notes on faster joins
---------------------
The faster joins project sees the most benefit when joining a room with a large number of members (joined or historical). We expect it to be particularly useful for joining large public rooms like the [Matrix HQ](https://matrix.to/#/#matrix:matrix.org) or [Synapse Admins](https://matrix.to/#/#synapse:matrix.org) rooms.
After a faster join, Synapse considers that room "partially joined". In this state, you should be able to
- read incoming messages;
- see incoming state changes, e.g. room topic changes; and
- send messages, if the room is unencrypted.
Synapse has to spend more effort to complete the join in the background. Once this finishes, you will be able to
- send messages, if the room is encrypted;
- retrieve room history from before your join, if permitted by the room settings; and
- access the full list of room members.
Improved Documentation
----------------------
- Describe the ideas and the internal machinery behind faster joins. ([\#14677](https://github.com/matrix-org/synapse/issues/14677))
Synapse 1.76.0rc2 (2023-01-27)
==============================
Bugfixes
--------
- Faster joins: Fix a bug introduced in Synapse 1.69 where device list EDUs could fail to be handled after a restart when a faster join sync is in progress. ([\#14914](https://github.com/matrix-org/synapse/issues/14914))
Internal Changes
----------------
- Faster joins: Improve performance of looking up partial-state status of rooms. ([\#14917](https://github.com/matrix-org/synapse/issues/14917))
Synapse 1.76.0rc1 (2023-01-25)
==============================
Features
--------
- Update the default room version to [v10](https://spec.matrix.org/v1.5/rooms/v10/) ([MSC 3904](https://github.com/matrix-org/matrix-spec-proposals/pull/3904)). Contributed by @FSG-Cat. ([\#14111](https://github.com/matrix-org/synapse/issues/14111))
- Add a `set_displayname()` method to the module API for setting a user's display name. ([\#14629](https://github.com/matrix-org/synapse/issues/14629))
- Add a dedicated listener configuration for `health` endpoint. ([\#14747](https://github.com/matrix-org/synapse/issues/14747))
- Implement support for [MSC3890](https://github.com/matrix-org/matrix-spec-proposals/pull/3890): Remotely silence local notifications. ([\#14775](https://github.com/matrix-org/synapse/issues/14775))
- Implement experimental support for [MSC3930](https://github.com/matrix-org/matrix-spec-proposals/pull/3930): Push rules for ([MSC3381](https://github.com/matrix-org/matrix-spec-proposals/pull/3381)) Polls. ([\#14787](https://github.com/matrix-org/synapse/issues/14787))
- Per [MSC3925](https://github.com/matrix-org/matrix-spec-proposals/pull/3925), bundle the whole of the replacement with any edited events, and optionally inhibit server-side replacement. ([\#14811](https://github.com/matrix-org/synapse/issues/14811))
- Faster joins: always serve a partial join response to servers that request it with the stable query param. ([\#14839](https://github.com/matrix-org/synapse/issues/14839))
- Faster joins: allow non-lazy-loading ("eager") syncs to complete after a partial join by omitting partial state rooms until they become fully stated. ([\#14870](https://github.com/matrix-org/synapse/issues/14870))
- Faster joins: request partial joins by default. Admins can opt-out of this for the time being---see the upgrade notes. ([\#14905](https://github.com/matrix-org/synapse/issues/14905))
Bugfixes
--------
- Add index to improve performance of the `/timestamp_to_event` endpoint used for jumping to a specific date in the timeline of a room. ([\#14799](https://github.com/matrix-org/synapse/issues/14799))
- Fix a long-standing bug where Synapse would exhaust the stack when processing many federation requests where the remote homeserver has disconnected early. ([\#14812](https://github.com/matrix-org/synapse/issues/14812), [\#14842](https://github.com/matrix-org/synapse/issues/14842))
- Fix rare races when using workers. ([\#14820](https://github.com/matrix-org/synapse/issues/14820))
- Fix a bug introduced in Synapse 1.64.0 when using room version 10 with frozen events enabled. ([\#14864](https://github.com/matrix-org/synapse/issues/14864))
- Fix a long-standing bug where the `populate_room_stats` background job could fail on broken rooms. ([\#14873](https://github.com/matrix-org/synapse/issues/14873))
- Faster joins: Fix a bug in worker deployments where the room stats and user directory would not get updated when finishing a fast join until another event is sent or received. ([\#14874](https://github.com/matrix-org/synapse/issues/14874))
- Faster joins: Fix incompatibility with joins into restricted rooms where no local users have the ability to invite. ([\#14882](https://github.com/matrix-org/synapse/issues/14882))
- Fix a regression introduced in Synapse 1.69.0 which can result in database corruption when database migrations are interrupted on sqlite. ([\#14910](https://github.com/matrix-org/synapse/issues/14910))
Updates to the Docker image
---------------------------
- Bump default Python version in the Dockerfile from 3.9 to 3.11. ([\#14875](https://github.com/matrix-org/synapse/issues/14875))
Improved Documentation
----------------------
- Include `x_forwarded` entry in the HTTP listener example configs and remove the remaining `worker_main_http_uri` entries. ([\#14667](https://github.com/matrix-org/synapse/issues/14667))
- Remove duplicate commands from the Code Style documentation page; point to the Contributing Guide instead. ([\#14773](https://github.com/matrix-org/synapse/issues/14773))
- Add missing documentation for `tag` to `listeners` section. ([\#14803](https://github.com/matrix-org/synapse/issues/14803))
- Updated documentation in configuration manual for `user_directory.search_all_users`. ([\#14818](https://github.com/matrix-org/synapse/issues/14818))
- Add `worker_manhole` to configuration manual. ([\#14824](https://github.com/matrix-org/synapse/issues/14824))
- Fix the example config missing the `id` field in [application service documentation](https://matrix-org.github.io/synapse/latest/application_services.html). ([\#14845](https://github.com/matrix-org/synapse/issues/14845))
- Minor corrections to the logging configuration documentation. ([\#14868](https://github.com/matrix-org/synapse/issues/14868))
- Document the export user data command. Contributed by @thezaidbintariq. ([\#14883](https://github.com/matrix-org/synapse/issues/14883))
Deprecations and Removals
-------------------------
- Poetry 1.3.2 or higher is now required when `poetry install`ing from source. ([\#14860](https://github.com/matrix-org/synapse/issues/14860))
Internal Changes
----------------
- Faster remote room joins (worker mode): do not populate external hosts-in-room cache when sending events as this requires blocking for full state. ([\#14749](https://github.com/matrix-org/synapse/issues/14749))
- Enable Complement tests for Faster Remote Room Joins against worker-mode Synapse. ([\#14752](https://github.com/matrix-org/synapse/issues/14752))
- Add some clarifying comments and refactor a portion of the `Keyring` class for readability. ([\#14804](https://github.com/matrix-org/synapse/issues/14804))
- Add local poetry config files (`poetry.toml`) to `.gitignore`. ([\#14807](https://github.com/matrix-org/synapse/issues/14807))
- Add missing type hints. ([\#14816](https://github.com/matrix-org/synapse/issues/14816), [\#14885](https://github.com/matrix-org/synapse/issues/14885), [\#14889](https://github.com/matrix-org/synapse/issues/14889))
- Refactor push tests. ([\#14819](https://github.com/matrix-org/synapse/issues/14819))
- Re-enable some linting that was disabled when we switched to ruff. ([\#14821](https://github.com/matrix-org/synapse/issues/14821))
- Add `cargo fmt` and `cargo clippy` to the lint script. ([\#14822](https://github.com/matrix-org/synapse/issues/14822))
- Drop unused table `presence`. ([\#14825](https://github.com/matrix-org/synapse/issues/14825))
- Merge the two account data and the two device list replication streams. ([\#14826](https://github.com/matrix-org/synapse/issues/14826), [\#14833](https://github.com/matrix-org/synapse/issues/14833))
- Faster joins: use stable identifiers from [MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706). ([\#14832](https://github.com/matrix-org/synapse/issues/14832), [\#14841](https://github.com/matrix-org/synapse/issues/14841))
- Add a parameter to control whether the federation client performs a partial state join. ([\#14843](https://github.com/matrix-org/synapse/issues/14843))
- Add check to avoid starting duplicate partial state syncs. ([\#14844](https://github.com/matrix-org/synapse/issues/14844))
- Add an early return when handling no-op presence updates. ([\#14855](https://github.com/matrix-org/synapse/issues/14855))
- Fix `wait_for_stream_position` to correctly wait for the right instance to advance its token. ([\#14856](https://github.com/matrix-org/synapse/issues/14856), [\#14872](https://github.com/matrix-org/synapse/issues/14872))
- Always notify replication when a stream advances automatically. ([\#14877](https://github.com/matrix-org/synapse/issues/14877))
- Reduce max time we wait for stream positions. ([\#14881](https://github.com/matrix-org/synapse/issues/14881))
- Faster joins: allow the resync process more time to fetch `/state` ids. ([\#14912](https://github.com/matrix-org/synapse/issues/14912))
- Bump regex from 1.7.0 to 1.7.1. ([\#14848](https://github.com/matrix-org/synapse/issues/14848))
- Bump peaceiris/actions-gh-pages from 3.9.1 to 3.9.2. ([\#14861](https://github.com/matrix-org/synapse/issues/14861))
- Bump ruff from 0.0.215 to 0.0.224. ([\#14862](https://github.com/matrix-org/synapse/issues/14862))
- Bump types-pillow from 9.4.0.0 to 9.4.0.3. ([\#14863](https://github.com/matrix-org/synapse/issues/14863))
- Bump types-opentracing from 2.4.10 to 2.4.10.1. ([\#14896](https://github.com/matrix-org/synapse/issues/14896))
- Bump ruff from 0.0.224 to 0.0.230. ([\#14897](https://github.com/matrix-org/synapse/issues/14897))
- Bump types-requests from 2.28.11.7 to 2.28.11.8. ([\#14899](https://github.com/matrix-org/synapse/issues/14899))
- Bump types-psycopg2 from 2.9.21.2 to 2.9.21.4. ([\#14900](https://github.com/matrix-org/synapse/issues/14900))
- Bump types-commonmark from 0.9.2 to 0.9.2.1. ([\#14901](https://github.com/matrix-org/synapse/issues/14901))
Synapse 1.75.0 (2023-01-17)
===========================
No significant changes since 1.75.0rc2.
Synapse 1.75.0rc2 (2023-01-12)
==============================
Bugfixes
--------
- Fix a bug introduced in Synapse 1.75.0rc1 where device lists could be miscalculated with some sync filters. ([\#14810](https://github.com/matrix-org/synapse/issues/14810))
- Fix race where calling `/members` or `/state` with an `at` parameter could fail for newly created rooms, when using multiple workers. ([\#14817](https://github.com/matrix-org/synapse/issues/14817))
Synapse 1.75.0rc1 (2023-01-10)
==============================
Generated
+6 -6
@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.68"
version = "1.0.69"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2cb2f989d18dd141ab8ae82f64d1a8cdd37e0840f73a406896cf5e99502fab61"
checksum = "224afbd727c3d6e4b90103ece64b8d1b67fbb1973b1046c2281eed3f3803f800"
[[package]]
name = "arc-swap"
@@ -294,9 +294,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.7.0"
version = "1.7.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e076559ef8e241f2ae3479e36f97bd5741c0330689e217ad51ce2c76808b868a"
checksum = "48aaa5748ba571fb95cd2c85c09f629215d3a6ece942baa100950af03a34f733"
dependencies = [
"aho-corasick",
"memchr",
@@ -343,9 +343,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.91"
version = "1.0.92"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "877c235533714907a8c2464236f5c4b2a17262ef1bd71f38f35ea592c8da6883"
checksum = "7434af0dc1cbd59268aa98b4c22c131c0584d2232f6fb166efb993e2832e896a"
dependencies = [
"itoa",
"ryu",
+47
@@ -0,0 +1,47 @@
# `lnav` config for Synapse logs
[lnav](https://lnav.org/) is a log-viewing tool. It is particularly useful when
you need to interleave multiple log files, or for exploring a large log file
with regex filters. The downside is that it is not as ubiquitous as tools like
`less`, `grep`, etc.
This directory contains an `lnav` [log format definition](
https://docs.lnav.org/en/v0.10.1/formats.html#defining-a-new-format
) for Synapse logs as
emitted by Synapse with the default [logging configuration](
https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#log_config
). It supports lnav 0.10.1 because that's what's packaged by my distribution.
This should allow lnav:
- to interpret timestamps, allowing log interleaving;
- to interpret log severity levels, allowing colouring by log level(!!!);
- to interpret request IDs, allowing you to skip through a specific request; and
- to highlight room, event and user IDs in logs.
See also https://gist.github.com/benje/e2ab750b0a81d11920d83af637d289f7 for a
similar example.
## Example
[![asciicast](https://asciinema.org/a/556133.svg)](https://asciinema.org/a/556133)
## Tips
- `lnav -i /path/to/synapse/checkout/contrib/lnav/synapse-log-format.json`
- `lnav my_synapse_log_file` or `lnav synapse_log_files.*`, etc.
- `lnav --help` for CLI help.
Within lnav itself:
- `?` for help within lnav itself.
- `q` to quit.
- `/` to search a-la `less` and `vim`, then `n` and `N` to continue searching
down and up.
- Use `o` and `O` to skip through logs based on the request ID (`POST-1234`, or
else the value of the [`request_id_header`](
https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html?highlight=request_id_header#listeners
) header). This may get confused if the same request ID is repeated among
multiple files or process restarts.
- ???
- Profit
+67
@@ -0,0 +1,67 @@
{
"$schema": "https://lnav.org/schemas/format-v1.schema.json",
"synapse": {
"title": "Synapse logs",
"description": "Logs output by Synapse, a Matrix homesever, under its default logging config.",
"regex": {
"log": {
"pattern": ".*(?<timestamp>\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3}) - (?<logger>.+) - (?<lineno>\\d+) - (?<level>\\w+) - (?<context>.+) - (?<body>.*)"
}
},
"json": false,
"timestamp-field": "timestamp",
"timestamp-format": [
"%Y-%m-%d %H:%M:%S,%L"
],
"level-field": "level",
"body-field": "body",
"opid-field": "context",
"level": {
"critical": "CRITICAL",
"error": "ERROR",
"warning": "WARNING",
"info": "INFO",
"debug": "DEBUG"
},
"sample": [
{
"line": "my-matrix-server-generic-worker-4 | 2023-01-27 09:47:09,818 - synapse.replication.tcp.client - 381 - ERROR - PUT-32992 - Timed out waiting for stream receipts",
"level": "error"
},
{
"line": "my-matrix-server-federation-sender-1 | 2023-01-25 20:56:20,995 - synapse.http.matrixfederationclient - 709 - WARNING - federation_transaction_transmission_loop-3 - {PUT-O-3} [example.com] Request failed: PUT matrix://example.com/_matrix/federation/v1/send/1674680155797: HttpResponseException('403: Forbidden')",
"level": "warning"
},
{
"line": "my-matrix-server | 2023-01-25 20:55:54,433 - synapse.storage.databases - 66 - INFO - main - [database config 'master']: Checking database server",
"level": "info"
},
{
"line": "my-matrix-server | 2023-01-26 15:08:40,447 - synapse.access.http.8008 - 460 - INFO - PUT-74929 - 0.0.0.0 - 8008 - {@alice:example.com} Processed request: 0.011sec/0.000sec (0.000sec, 0.000sec) (0.001sec/0.008sec/3) 2B 200 \"PUT /_matrix/client/r0/user/%40alice%3Atexample.com/account_data/im.vector.setting.breadcrumbs HTTP/1.0\" \"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Element/1.11.20 Chrome/108.0.5359.179 Electron/22.0.3 Safari/537.36\" [0 dbevts]",
"level": "info"
}
],
"highlights": {
"user_id": {
"pattern": "(@|%40)[^:% ]+(:|%3A)[\\[\\]0-9a-zA-Z.\\-:]+(:\\d{1,5})?(?<!:)",
"underline": true
},
"room_id": {
"pattern": "(!|%21)[^:% ]+(:|%3A)[\\[\\]0-9a-zA-Z.\\-:]+(:\\d{1,5})?(?<!:)",
"underline": true
},
"room_alias": {
"pattern": "(#|%23)[^:% ]+(:|%3A)[\\[\\]0-9a-zA-Z.\\-:]+(:\\d{1,5})?(?<!:)",
"underline": true
},
"event_id_v1_v2": {
"pattern": "(\\$|%25)[^:% ]+(:|%3A)[\\[\\]0-9a-zA-Z.\\-:]+(:\\d{1,5})?(?<!:)",
"underline": true
},
"event_id_v3_plus": {
"pattern": "(\\$|%25)([A-Za-z0-9+/_]|-){43}",
"underline": true
}
}
}
}
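As a quick way to convince yourself that the `log` pattern above matches real Synapse output, here is a small standalone check (an assumption: something you run by hand, not part of the repo). Note that Python spells named groups `(?P<name>...)` where lnav's regex dialect also accepts `(?<name>...)`:

```python
import re

# The lnav pattern from the format file, transcribed with Python's (?P<...>).
PATTERN = re.compile(
    r".*(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) - "
    r"(?P<logger>.+) - (?P<lineno>\d+) - (?P<level>\w+) - "
    r"(?P<context>.+) - (?P<body>.*)"
)

# One of the sample lines from the format definition above.
line = (
    "my-matrix-server | 2023-01-25 20:55:54,433 - synapse.storage.databases"
    " - 66 - INFO - main - [database config 'master']: Checking database server"
)

m = PATTERN.match(line)
assert m is not None
print(m.group("timestamp"), m.group("level"), m.group("context"))
```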
@@ -15,19 +15,19 @@ worker_name: generic_worker$i
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_main_http_uri: http://localhost:8008/
worker_listeners:
- type: http
port: 808$i
x_forwarded: true
resources:
- names: [client, federation]
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
#worker_pid_file: DATADIR/generic_worker$i.pid
EOF
done
```
This would create five generic workers with a unique `worker_name` field in each file and listening on ports 8081-8085.
Customise the script to your needs.
Customise the script to your needs. Note that `worker_pid_file` is required if `worker_daemonize` is `true`. Uncomment and/or modify the line if needed.
@@ -8,7 +8,9 @@ It also prints out the example lines for Synapse main configuration file.
Remember to route necessary endpoints directly to a worker associated with it.
If you run the script as-is, it will create workers with the replication listener starting from port 8034 and another, regular http listener starting from 8044. If you don't need all of the stream writers listed in the script, just remove them from the ```STREAM_WRITERS``` array.
Hint: Note that `worker_pid_file` is required if `worker_daemonize` is `true`. Uncomment and/or modify the line if needed.
```sh
#!/bin/bash
@@ -46,9 +48,11 @@ worker_listeners:
- type: http
port: $(expr $HTTP_START_PORT + $i)
x_forwarded: true
resources:
- names: [client]
#worker_pid_file: DATADIR/${STREAM_WRITERS[$i]}.pid
worker_log_config: /etc/matrix-synapse/stream-writer-log.yaml
EOF
HOMESERVER_YAML_INSTANCE_MAP+=$" ${STREAM_WRITERS[$i]}_stream_writer:
@@ -91,7 +95,9 @@ Simply run the script to create YAML files in the current folder and print out t
```console
$ ./create_stream_writers.sh
```
You should receive an output similar to the following:
```console
# Add these lines to your homeserver.yaml.
# Don't forget to configure your reverse proxy and
# necessary endpoints to their respective worker.
+1 -2
@@ -31,12 +31,11 @@ case $(dpkg-architecture -q DEB_HOST_ARCH) in
esac
# Manually install Poetry and export a pip-compatible `requirements.txt`
# We need a Poetry pre-release as the export command is buggy in < 1.2
TEMP_VENV="$(mktemp -d)"
python3 -m venv "$TEMP_VENV"
source "$TEMP_VENV/bin/activate"
pip install -U pip
pip install poetry==1.2.0
pip install poetry==1.3.2
poetry export \
--extras all \
--extras test \
+37
@@ -1,3 +1,40 @@
matrix-synapse-py3 (1.77.0~rc1) stable; urgency=medium
* New Synapse release 1.77.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Feb 2023 13:45:14 +0000
matrix-synapse-py3 (1.76.0) stable; urgency=medium
* New Synapse release 1.76.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 31 Jan 2023 08:21:47 -0800
matrix-synapse-py3 (1.76.0~rc2) stable; urgency=medium
* New Synapse release 1.76.0rc2.
-- Synapse Packaging team <packages@matrix.org> Fri, 27 Jan 2023 11:17:57 +0000
matrix-synapse-py3 (1.76.0~rc1) stable; urgency=medium
* Use Poetry 1.3.2 to manage the bundled virtualenv included with this package.
* New Synapse release 1.76.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 25 Jan 2023 16:21:16 +0000
matrix-synapse-py3 (1.75.0) stable; urgency=medium
* New Synapse release 1.75.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Jan 2023 11:36:02 +0000
matrix-synapse-py3 (1.75.0~rc2) stable; urgency=medium
* New Synapse release 1.75.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 12 Jan 2023 10:30:15 -0800
matrix-synapse-py3 (1.75.0~rc1) stable; urgency=medium
* New Synapse release 1.75.0rc1.
+45 -51
@@ -17,16 +17,10 @@
# Irritatingly, there is no blessed guide on how to distribute an application with its
# poetry-managed environment in a docker image. We have opted for
# `poetry export | pip install -r /dev/stdin`, but there are known bugs in
# in `poetry export` whose fixes (scheduled for poetry 1.2) have yet to be released.
# In case we get bitten by those bugs in the future, the recommendations here might
# be useful:
# https://github.com/python-poetry/poetry/discussions/1879#discussioncomment-216865
# https://stackoverflow.com/questions/53835198/integrating-python-poetry-with-docker?answertab=scoredesc
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
# in `poetry export` in the past.
ARG PYTHON_VERSION=3.9
ARG PYTHON_VERSION=3.11
###
### Stage 0: generate requirements.txt
@@ -40,16 +34,16 @@ FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as requirements
# Here we use it to set up a cache for apt (and below for pip), to improve
# rebuild speeds on slow connections.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential git libffi-dev libssl-dev \
&& rm -rf /var/lib/apt/lists/*
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential git libffi-dev libssl-dev \
&& rm -rf /var/lib/apt/lists/*
# We install poetry in its own build stage to avoid its dependencies conflicting with
# synapse's dependencies.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --user "poetry==1.2.0"
pip install --user "poetry==1.3.2"
WORKDIR /synapse
@@ -70,9 +64,9 @@ ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# Otherwise, just create an empty requirements file so that the Dockerfile can
# proceed.
RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
/root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
/root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
else \
touch /synapse/requirements.txt; \
touch /synapse/requirements.txt; \
fi
###
@@ -82,24 +76,24 @@ FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as builder
# install the OS build deps
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential \
libffi-dev \
libjpeg-dev \
libpq-dev \
libssl-dev \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
openssl \
zlib1g-dev \
git \
curl \
libicu-dev \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential \
libffi-dev \
libjpeg-dev \
libpq-dev \
libssl-dev \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
openssl \
zlib1g-dev \
git \
curl \
libicu-dev \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
# Install rust and ensure its in the PATH
@@ -140,9 +134,9 @@ ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
RUN --mount=type=cache,target=/synapse/target,sharing=locked \
--mount=type=cache,target=${CARGO_HOME}/registry,sharing=locked \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; \
pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; \
else \
pip install --prefix="/install" --no-warn-script-location /synapse[all]; \
pip install --prefix="/install" --no-warn-script-location /synapse[all]; \
fi
###
@@ -157,20 +151,20 @@ LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git
LABEL org.opencontainers.image.licenses='Apache-2.0'
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
curl \
gosu \
libjpeg62-turbo \
libpq5 \
libwebp6 \
xmlsec1 \
libjemalloc2 \
libicu67 \
libssl-dev \
openssl \
&& rm -rf /var/lib/apt/lists/*
curl \
gosu \
libjpeg62-turbo \
libpq5 \
libwebp6 \
xmlsec1 \
libjemalloc2 \
libicu67 \
libssl-dev \
openssl \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
@@ -181,4 +175,4 @@ EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1
CMD curl -fSs http://localhost:8008/health || exit 1
+12 -1
@@ -6,7 +6,7 @@ set -e
echo "Complement Synapse launcher"
echo " Args: $@"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=$SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR"
function log {
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
@@ -76,6 +76,17 @@ else
fi
if [[ -n "$SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR" ]]; then
if [[ -n "$SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER" ]]; then
export SYNAPSE_COMPLEMENT_FORKING_LAUNCHER_ASYNC_IO_REACTOR="1"
else
export SYNAPSE_ASYNC_IO_REACTOR="1"
fi
else
export SYNAPSE_ASYNC_IO_REACTOR="0"
fi
# Add Complement's appservice registration directory, if there is one
# (It can be absent when there are no application services in this test!)
if [ -d /complement/appservice ]; then
@@ -94,16 +94,16 @@ allow_device_name_lookup_over_federation: true
experimental_features:
# Enable history backfilling support
msc2716_enabled: true
# server-side support for partial state in /send_join responses
msc3706_enabled: true
{% if not workers_in_use %}
# client-side support for partial state in /send_join responses
faster_joins: true
{% endif %}
# Filtering /messages by relation type.
msc3874_enabled: true
# Enable support for polls
msc3381_polls_enabled: true
# Enable deleting device-specific notification settings stored in account data
msc3890_enabled: true
# Enable removing account data support
msc3391_enabled: true
# Filtering /messages by relation type.
msc3874_enabled: true
server_notices:
system_mxid_localpart: _server
+1
@@ -97,6 +97,7 @@
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
- [TCP Replication](tcp_replication.md)
- [Faster remote joins](development/synapse_architecture/faster_joins.md)
- [Internal Documentation](development/internal_documentation/README.md)
- [Single Sign-On]()
- [SAML](development/saml.md)
+1
@@ -15,6 +15,7 @@ app_service_config_files:
The format of the AS configuration file is as follows:
```yaml
id: <your-AS-id>
url: <base url of AS>
as_token: <token AS will add to requests to HS>
hs_token: <token HS will add to requests to AS>
+3 -12
@@ -13,23 +13,14 @@ The necessary tools are:
- [ruff](https://github.com/charliermarsh/ruff), which can spot common errors; and
- [mypy](https://mypy.readthedocs.io/en/stable/), a type checker.
Install them with:
```sh
pip install -e ".[lint,mypy]"
```
The easiest way to run the lints is to invoke the linter script as follows.
```sh
scripts-dev/lint.sh
```
See [the contributing guide](development/contributing_guide.md#run-the-linters) for instructions
on how to install the above tools and run the linters.
It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient
development workflow. It is not, however, recommended to run `mypy`
on save as they take a while and can be very resource intensive.
on save as it takes a while and can be very resource intensive.
## General rules
+2 -1
@@ -67,7 +67,7 @@ pipx install poetry
but see poetry's [installation instructions](https://python-poetry.org/docs/#installation)
for other installation methods.
Synapse requires Poetry version 1.2.0 or later.
Developing Synapse requires Poetry version 1.3.2 or later.
Next, open a terminal and install dependencies as follows:
@@ -332,6 +332,7 @@ The above will run a monolithic (single-process) Synapse with SQLite as the data
[here](https://github.com/matrix-org/synapse/blob/develop/docker/configure_workers_and_start.py#L54).
A safe example would be `WORKER_TYPES="federation_inbound, federation_sender, synchrotron"`.
See the [worker documentation](../workers.md) for additional information on workers.
- Passing `ASYNCIO_REACTOR=1` as an environment variable to use the Twisted asyncio reactor instead of the default one.
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g:
```sh
+24 -7
@@ -2,6 +2,13 @@
This is a quick cheat sheet for developers on how to use [`poetry`](https://python-poetry.org/).
# Installing
See the [contributing guide](contributing_guide.md#4-install-the-dependencies).
Developers should use Poetry 1.3.2 or higher. If you encounter problems related
to poetry, please [double-check your poetry version](#check-the-version-of-poetry-with-poetry---version).
# Background
Synapse uses a variety of third-party Python packages to function as a homeserver.
@@ -123,7 +130,7 @@ context of poetry's venv, without having to run `poetry shell` beforehand.
## ...reset my venv to the locked environment?
```shell
poetry install --extras all --remove-untracked
poetry install --all-extras --sync
```
## ...delete everything and start over from scratch?
@@ -183,7 +190,6 @@ Either:
- manually update `pyproject.toml`; then `poetry lock --no-update`; or else
- `poetry add packagename`. See `poetry add --help`; note the `--dev`,
`--extras` and `--optional` flags in particular.
- **NB**: this specifies the new package with a version given by a "caret bound". This won't get forced to its lowest version in the old deps CI job: see [this TODO](https://github.com/matrix-org/synapse/blob/4e1374373857f2f7a911a31c50476342d9070681/.ci/scripts/test_old_deps.sh#L35-L39).
Include the updated `pyproject.toml` and `poetry.lock` files in your commit.
@@ -196,7 +202,7 @@ poetry remove packagename
```
ought to do the trick. Alternatively, manually update `pyproject.toml` and
`poetry lock --no-update`. Include the updated `pyproject.toml` and poetry.lock`
`poetry lock --no-update`. Include the updated `pyproject.toml` and `poetry.lock`
files in your commit.
## ...update the version range for an existing dependency?
@@ -240,9 +246,6 @@ poetry export --extras all
Be wary of bugs in `poetry export` and `pip install -r requirements.txt`.
Note: `poetry export` will be made a plugin in Poetry 1.2. Additional config may
be required.
## ...build a test wheel?
I usually use
@@ -255,12 +258,26 @@ because [`build`](https://github.com/pypa/build) is a standardish tool which
doesn't require poetry. (It's what we use in CI too). However, you could try
`poetry build` too.
## ...handle a Dependabot pull request?
Synapse uses Dependabot to keep the `poetry.lock` file up-to-date. When it
creates a pull request a GitHub Action will run to automatically create a changelog
file. Ensure that:
* the lockfile changes look reasonable;
* the upstream changelog file (linked in the description) doesn't include any
breaking changes;
* continuous integration passes (due to permissions, the GitHub Actions run on
the changelog commit will fail; look at the initial commit of the pull request);
In particular, any updates to the type hints (usually packages which start with `types-`)
should be safe to merge if linting passes.
# Troubleshooting
## Check the version of poetry with `poetry --version`.
The minimum version of poetry supported by Synapse is 1.2.
The minimum version of poetry supported by Synapse is 1.3.2.
It can also be useful to check the version of `poetry-core` in use. If you've
installed `poetry` with `pipx`, try `pipx runpip poetry list | grep
@@ -0,0 +1,375 @@
# How do faster joins work?
This is a work-in-progress set of notes with two goals:
- act as a reference, explaining how Synapse implements faster joins; and
- record the rationale behind our choices.
See also [MSC3902](https://github.com/matrix-org/matrix-spec-proposals/pull/3902).
The key idea is described by [MSC3706](https://github.com/matrix-org/matrix-spec-proposals/pull/3706). This allows servers to
request a lightweight response to the federation `/send_join` endpoint.
This is called a **faster join**, also known as a **partial join**. In these
notes we'll usually use the word "partial" as it matches the database schema.
## Overview: processing events in a partially-joined room
The response to a partial join consists of
- the requested join event `J`,
- a list of the servers in the room (according to the state before `J`),
- a subset of the state of the room before `J`,
- the full auth chain of that state subset.
Synapse marks the room as partially joined by adding a row to the database table
`partial_state_rooms`. It also marks the join event `J` as "partially stated",
meaning that we have neither received nor computed the full state before/after
`J`. This is done by adding a row to `partial_state_events`.
<details><summary>DB schema</summary>
```
matrix=> \d partial_state_events
Table "matrix.partial_state_events"
Column │ Type │ Collation │ Nullable │ Default
══════════╪══════╪═══════════╪══════════╪═════════
room_id │ text │ │ not null │
event_id │ text │ │ not null │
matrix=> \d partial_state_rooms
Table "matrix.partial_state_rooms"
Column │ Type │ Collation │ Nullable │ Default
════════════════════════╪════════╪═══════════╪══════════╪═════════
room_id │ text │ │ not null │
device_lists_stream_id │ bigint │ │ not null │ 0
join_event_id │ text │ │ │
joined_via │ text │ │ │
matrix=> \d partial_state_rooms_servers
Table "matrix.partial_state_rooms_servers"
Column │ Type │ Collation │ Nullable │ Default
═════════════╪══════╪═══════════╪══════════╪═════════
room_id │ text │ │ not null │
server_name │ text │ │ not null │
```
Indices, foreign-keys and check constraints are omitted for brevity.
</details>
While partially joined to a room, Synapse receives events `E` from remote
homeservers as normal, and can create events at the request of its local users.
However, we run into trouble when we enforce the [checks on an event].
> 1. Is a valid event, otherwise it is dropped. For an event to be valid, it
> must contain a room_id, and it must comply with the event format of that
> room version.
> 2. Passes signature checks, otherwise it is dropped.
> 3. Passes hash checks, otherwise it is redacted before being processed further.
> 4. Passes authorization rules based on the event's auth events, otherwise it
> is rejected.
> 5. **Passes authorization rules based on the state before the event, otherwise
> it is rejected.**
> 6. **Passes authorization rules based on the current state of the room,
> otherwise it is “soft failed”.**
[checks on an event]: https://spec.matrix.org/v1.5/server-server-api/#checks-performed-on-receipt-of-a-pdu
We can enforce checks 1--4 without any problems.
But we cannot enforce checks 5 or 6 with complete certainty, since Synapse
knows neither the full state before `E` nor the full current state of the room.
### Partial state
Instead, we make a best-effort approximation.
While the room is considered partially joined, Synapse tracks the "partial
state" before events.
This works in a similar way as regular state:
- The partial state before `J` is that given to us by the partial join response.
- The partial state before an event `E` is the resolution of the partial states
after each of `E`'s `prev_event`s.
- If `E` is rejected or a message event, the partial state after `E` is the
partial state before `E`.
- Otherwise, the partial state after `E` is the partial state before `E`, plus
`E` itself.
More concisely, partial state propagates just like full state; the only
difference is that we "seed" it with an incomplete initial state.
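To make the propagation rules concrete, here is a minimal Python sketch. It is
illustrative only: `StateMap` and the naive `resolve` helper below are
stand-ins for Synapse's real state maps and state-resolution algorithm.
```python
from typing import Dict, List, Optional, Tuple

# (event type, state key) -> event ID; a stand-in for Synapse's state maps.
StateMap = Dict[Tuple[str, str], str]

def resolve(states: List[StateMap]) -> StateMap:
    # Stand-in for real state resolution: a naive merge where later states win.
    merged: StateMap = {}
    for state in states:
        merged.update(state)
    return merged

def partial_state_before(prev_event_states: List[StateMap]) -> StateMap:
    # The partial state before E resolves the partial states after its prev_events.
    return resolve(prev_event_states)

def partial_state_after(
    event_id: str,
    event_type: str,
    state_key: Optional[str],
    rejected: bool,
    state_before: StateMap,
) -> StateMap:
    # Rejected events and message events leave the state unchanged.
    if rejected or state_key is None:
        return state_before
    # Otherwise E replaces the entry for its (type, state key) pair.
    return {**state_before, (event_type, state_key): event_id}
```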
Synapse records that we have only calculated partial state for this event with
a row in `partial_state_events`.
While the room remains partially stated, check 5 on incoming events to that
room becomes:
> 5. Passes authorization rules based on **the resolution between the partial
> state before `E` and `E`'s auth events.** If the event fails to pass
> authorization rules, it is rejected.
Additionally, check 6 is deleted: no soft-failures are enforced.
While partially joined, the current partial state of the room is defined as the
resolution across the partial states after all forward extremities in the room.
_Remark._ Events with partial state are _not_ considered
[outliers](../room-dag-concepts.md#outliers).
### Approximation error
Using partial state means the auth checks can fail in a few different ways[^2].
[^2]: Is this exhaustive?
- We may erroneously accept an incoming event in check 5 based on partial state
when it would have been rejected based on full state, or vice versa.
- This means that an event could erroneously be added to the current partial
state of the room when it would not be present in the full state of the room,
or vice versa.
- Additionally, we may have skipped soft-failing an event that would have been
soft-failed based on full state.
(Note that the discrepancies described in the last two bullets are user-visible.)
This means that we have to be very careful when we want to look up pieces of room
state in a partially-joined room. Our approximation of the state may be
incorrect or missing. But we can make some educated guesses. If
- our partial state is likely to be correct, or
- the consequences of our partial state being incorrect are minor,
then we proceed as normal, and let the resync process fix up any mistakes (see
below).
When is our partial state likely to be correct?
- It's more accurate the closer we are to the partial join event. (So we should
ideally complete the resync as soon as possible.)
- Non-member events: we will have received them as part of the partial join
response, if they were part of the room state at that point. We may
incorrectly accept or reject updates to that state (at first because we lack
remote membership information; later because of compounding errors), so these
can become incorrect over time.
- Local members' memberships: we are the only ones who can create join and
knock events for our users. We can't be completely confident in the
correctness of bans, invites and kicks from other homeservers, but the resync
process should correct any mistakes.
- Remote members' memberships: we did not receive these in the /send_join
response, so we have essentially no idea if these are correct or not.
In short, we deem it acceptable to trust the partial state for non-membership
and local membership events. For remote membership events, we wait for the
resync to complete, at which point we have the full state of the room and can
proceed as normal.
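A rough sketch of that policy, with a hypothetical helper name (Synapse's real
logic is spread across its state-lookup code paths):
```python
def can_trust_partial_state(
    event_type: str, state_key: str, our_server_name: str
) -> bool:
    # Non-membership state came with the partial join response: trust it.
    if event_type != "m.room.member":
        return True
    # Membership state keys are user IDs of the form "@localpart:server".
    _, _, server = state_key.partition(":")
    # Trust local memberships; remote ones must wait for the resync.
    return server == our_server_name
```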
### Fixing the approximation with a resync
The partial-state approximation is only a temporary affair. In the background,
Synapse begins a "resync" process. This is a continuous loop, starting at the
partial join event and proceeding downwards through the event graph. For each
`E` seen in the room since partial join, Synapse will fetch
- the event ids in the state of the room before `E`, via
[`/state_ids`](https://spec.matrix.org/v1.5/server-server-api/#get_matrixfederationv1state_idsroomid);
- the event ids in the full auth chain of `E`, included in the `/state_ids`
response; and
- any events from the previous two bullets that Synapse hasn't persisted, via
[`/state`](https://spec.matrix.org/v1.5/server-server-api/#get_matrixfederationv1stateroomid).
This means Synapse has (or can compute) the full state before `E`, which allows
Synapse to properly authorise or reject `E`. At this point, the event
is considered to have "full state" rather than "partial state". We record this
by removing `E` from the `partial_state_events` table.
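Condensed into illustrative Python, the loop might look like the following.
Every helper here is a hypothetical stand-in for Synapse's federation client
and storage layer, not its actual API.
```python
from typing import Iterable, List, Tuple

async def fetch_state_ids(room_id: str, event_id: str) -> Tuple[List[str], List[str]]:
    return [], []  # (state event IDs, auth chain event IDs) before the event

async def fetch_missing_events(room_id: str, event_ids: List[str]) -> None: ...
async def recheck_event_auth(room_id: str, event_id: str) -> None: ...
async def mark_event_fully_stated(event_id: str) -> None: ...
async def mark_room_fully_stated(room_id: str) -> None: ...

async def resync_partial_state_room(
    room_id: str, events_since_join: Iterable[str]
) -> None:
    for event_id in events_since_join:
        # /state_ids: IDs of the state and full auth chain before this event.
        state_ids, auth_chain_ids = await fetch_state_ids(room_id, event_id)
        # Fetch anything we haven't persisted yet, via /state or /event.
        await fetch_missing_events(room_id, state_ids + auth_chain_ids)
        # With the full state known, re-run the auth checks (may reject E).
        await recheck_event_auth(room_id, event_id)
        # E is now fully stated: remove it from partial_state_events.
        await mark_event_fully_stated(event_id)
    # Every event since the join is fully stated: the resync is complete,
    # so remove the room from partial_state_rooms.
    await mark_room_fully_stated(room_id)
```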
\[**TODO:** Does Synapse persist a new state group for the full state
before `E`, or do we alter the (partial-)state group in-place? Are state groups
ever marked as partially-stated? \]
This scheme means it is possible for us to have accepted and sent an event to
clients, only to reject it during the resync. From a client's perspective, the
effect is similar to a retroactive
state change due to state resolution---i.e. a "state reset".[^3]
[^3]: Clients should refresh caches to detect such a change. Rumour has it that
sliding sync will fix this.
When all events since the join `J` have been fully-stated, the room resync
process is complete. We record this by removing the room from
`partial_state_rooms`.
## Faster joins on workers
For the time being, the resync process happens on the master worker.
A new replication stream `un_partial_stated_room` is added. Whenever a resync
completes and a partial-state room becomes fully stated, a new message is sent
into that stream containing the room ID.
## Notes on specific cases
> **NB.** The notes below are rough. Some of them are hidden under `<details>`
disclosures because they have yet to be implemented in mainline Synapse.
### Creating events during a partial join
When sending out messages during a partial join, we assume our partial state is
accurate and proceed as normal. For this to have any hope of succeeding at all,
our partial state must contain an entry for each of the (type, state key) pairs
[specified by the auth rules](https://spec.matrix.org/v1.3/rooms/v10/#authorization-rules):
- `m.room.create`
- `m.room.join_rules`
- `m.room.power_levels`
- `m.room.third_party_invite`
- `m.room.member`
The first four of these should be present in the state before `J` that is given
to us in the partial join response; only membership events are omitted. In order
for us to consider the user joined, we must have their membership event. That
means the only possible omission is the target's membership in an invite, kick
or ban.
The worst possibility is that we locally invite someone who is banned according to
the full state, because we lack their ban in our current partial state. The rest
of the federation---at least, those who are fully joined---should correctly
enforce the [membership transition constraints](
https://spec.matrix.org/v1.3/client-server-api/#room-membership
). So the erroneous invite should be ignored by fully-joined
homeservers and resolved by the resync for partially-joined homeservers.
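As a toy illustration of that worst case (simplified auth logic and a made-up
user ID, not Synapse's implementation):
```python
from typing import Dict, Tuple

StateMap = Dict[Tuple[str, str], str]  # (type, state key) -> membership

def invite_allowed(state: StateMap, target: str) -> bool:
    # Simplified fragment of the auth rules: no inviting banned/joined users.
    return state.get(("m.room.member", target)) not in ("ban", "join")

partial_state: StateMap = {}  # the target's ban never reached us
full_state = {("m.room.member", "@alice:remote.example"): "ban"}

assert invite_allowed(partial_state, "@alice:remote.example")   # we accept it
assert not invite_allowed(full_state, "@alice:remote.example")  # others reject it
```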
More generally, there are two problems we're worrying about here:
- We might create an event that is valid under our partial state, only to later
find out that it is actually invalid according to the full state.
- Or: we might refuse to create an event that is invalid under our partial
state, even though it would be perfectly valid under the full state.
However, we expect such problems to be unlikely in practice, because
- We trust that the room has sensible power levels, e.g. that bad actors with
high power levels are demoted before their ban.
- We trust that the resident server provides us up-to-date power levels, join
rules, etc.
- State changes in rooms are relatively infrequent, and the resync period is
relatively quick.
#### Sending out the event over federation
**TODO:** needs prose fleshing out.
Normally: send out in a fed txn to all HSes in the room.
We only know that some HSes were in the room at some point. What do we do?
Send it out to the list of servers from the first join.
**TODO** what do we do here if we have full state?
If the prev event was created by us, we can risk sending it to the wrong HS. (Motivation: privacy concern of the content. Not such a big deal for a public room or an encrypted room. But non-encrypted invite-only...)
But don't want to send out sensitive data in other HS's events in this way.
Suppose we discover after resync that we shouldn't have sent out one of our events (not a prev_event) to a target HS. Not much we can do.
What about if we didn't send them an event but should have?
E.g. what if someone joined from a new HS shortly after you did? We wouldn't talk to them.
Could imagine sending out the "Missed" events after the resync but... painful to work out what they should have seen if they joined/left.
Instead, just send them the latest event (if they're still in the room after resync) and let them backfill.(?)
- Don't do this currently.
- If anyone who has received our messages sends a message to a HS we missed, they can backfill our messages
- Gap: rooms which are infrequently used and take a long time to resync.
### Joining after a partial join
**NB.** Not yet implemented.
<details>
**TODO:** needs prose fleshing out. Liaise with Matthieu. Explain why /send_join
(Rich was surprised we didn't just create it locally. Answer: to try and avoid
a join which then gets rejected after resync.)
We don't know for sure that any join we create would be accepted.
E.g. the joined user might have been banned; the join rules might have changed in a way that we didn't realise... some way in which the partial state was mistaken.
Instead, do another partial make-join/send-join handshake to confirm that the join works.
- Probably going to get a bunch of duplicate state events and auth events.... but the point of partial joins is that these should be small. Many are already persisted = good.
- What if the second send_join response includes a different list of resident HSes? Could ignore it.
- Could even have a special flag that says "just make me a join", i.e. don't bother giving me state or servers in room. Deffo want the auth chain tho.
- SQ: wrt device lists it's a lot safer to ignore it!!!!!
- What if the state at the second join is inconsistent with what we have? Ignore it?
</details>
### Leaving (and kicks and bans) after a partial join
**NB.** Not yet implemented.
<details>
When you're fully joined to a room, to have `U` leave that room their homeserver
needs to
- create a new leave event for `U` which will be accepted by other homeservers,
and
- send that event out to the homeservers in the federation.
When is a leave event accepted? See
[v10 auth rules](https://spec.matrix.org/v1.5/rooms/v10/#authorization-rules):
> 4. If type is m.room.member: [...]
>
> 5. If membership is leave:
>
> 1. If the sender matches state_key, allow if and only if that user's current membership state is invite, join, or knock.
> 2. [...]
I think this means that (well-formed!) self-leaves are governed entirely by
4.5.1. This means that if we correctly calculate state which says that `U` is
invited, joined or knocked and include it in the leave's auth events, our event
is accepted by checks 4 and 5 on incoming events.
> 4. Passes authorization rules based on the event's auth events, otherwise
> it is rejected.
> 5. Passes authorization rules based on the state before the event, otherwise
> it is rejected.
The only way to fail check 6 is if the receiving server's current state of the
room says that `U` is banned, has left, or has no membership event. But this is
fine: the receiving server already thinks that `U` isn't in the room.
> 6. Passes authorization rules based on the current state of the room,
> otherwise it is “soft failed”.
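Distilled into a toy check, the self-leave rule this argument leans on is
simply:
```python
def self_leave_allowed(current_membership: str) -> bool:
    # Rule 4.5.1: a user may send their own leave iff they are currently
    # invited, joined or knocking.
    return current_membership in ("invite", "join", "knock")
```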
For the second point (publishing the leave event), the best thing we can do is
publish to all HSes we know to be currently in the room. If they miss that
event, they might send us traffic in the room that we don't care about. This is
a problem with leaving after a "full" join; we don't seek to fix this with
partial joins.
(With that said: there's nothing machine-readable in the /send response. I don't
think we can deduce "destination has left the room" from a failure to /send an
event into that room?)
#### Can we still do this during a partial join?
We can create leave events and can choose what gets included in our auth events,
so we can be sure that we pass check 4 on incoming events. For check 5, we might
have an incorrect view of the state before an event.
The only way we might erroneously think a leave is valid is if
- the partial state before the leave has `U` joined, invited or knocked, but
- the full state before the leave has `U` banned, left or not present,
in which case the leave doesn't make anything worse: other HSes already consider
us as not in the room, and will continue to do so after seeing the leave.
The remaining obstacle is then: can we safely broadcast the leave event? We may
miss servers or incorrectly think that a server is in the room. Or the
destination server may be offline and miss the transaction containing our leave
event. This should self-heal when they see an event whose `prev_events` descends
from our leave.
Another option we considered was to use federation `/send_leave` to ask a
fully-joined server to send out the event on our behalf. But that introduces
complexity without much benefit. Besides, as Rich put it,
> sending out leaves is pretty best-effort currently
so this is probably good enough as-is.
#### Cleanup after the last leave
**TODO**: what cleanup is necessary? Is it all just nice-to-have to save unused
work?
</details>
@@ -17,6 +17,7 @@ worker_listeners:
#
#- type: http
# port: 8035
# x_forwarded: true
# resources:
# - names: [client]
@@ -5,11 +5,10 @@ worker_name: generic_worker1
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_main_http_uri: http://localhost:8008/
worker_listeners:
- type: http
port: 8083
x_forwarded: true
resources:
- names: [client, federation]
@@ -8,6 +8,7 @@ worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8085
x_forwarded: true
resources:
- names: [media]
+33
@@ -88,6 +88,39 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.76.0
## Faster joins are enabled by default
When joining a room for the first time, Synapse 1.76.0 will request a partial join from the other server by default. Previously, server admins had to opt-in to this using an experimental config flag.
Server admins can opt out of this feature for the time being by setting
```yaml
experimental:
faster_joins: false
```
in their server config.
## Changes to the account data replication streams
Synapse has changed the format of the account data and devices replication
streams (between workers). This is a forwards- and backwards-incompatible
change: v1.75 workers cannot process account data replicated by v1.76 workers,
and vice versa.
Once all workers are upgraded to v1.76 (or downgraded to v1.75), account data
and device replication will resume as normal.
## Minimum version of Poetry is now 1.3.2
The minimum supported version of Poetry is now 1.3.2 (previously 1.2.0, [since
Synapse 1.67](#upgrading-to-v1670)). If you have used `poetry install` to
install Synapse from a source checkout, you should upgrade poetry: see its
[installation instructions](https://python-poetry.org/docs/#installation).
For all other installation methods, no action is required.
# Upgrading to v1.74.0
## Unicode support in user search
+72 -14
@@ -2,13 +2,19 @@
How do I become a server admin?
---
If your server already has an admin account you should use the [User Admin API](../../admin_api/user_admin_api.md#change-whether-a-user-is-a-server-administrator-or-not) to promote other accounts to become admins.
If your server already has an admin account you should use the
[User Admin API](../../admin_api/user_admin_api.md#change-whether-a-user-is-a-server-administrator-or-not)
to promote other accounts to become admins.
If you don't have any admin accounts yet you won't be able to use the admin API, so you'll have to edit the database manually. Manually editing the database is generally not recommended so once you have an admin account: use the admin APIs to make further changes.
If you don't have any admin accounts yet you won't be able to use the admin API,
so you'll have to edit the database manually. Manually editing the database is
generally not recommended so once you have an admin account: use the admin APIs
to make further changes.
```sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
```
What servers are my server talking to?
---
Run this sql query on your db:
@@ -32,6 +38,44 @@ What users are registered on my server?
SELECT NAME from users;
```
How can I export user data?
---
Synapse includes a Python command to export data for a specific user. It takes the homeserver
configuration file and the full Matrix ID of the user to export:
```console
python -m synapse.app.admin_cmd -c <config_file> export-data <user_id> --output-directory <directory_path>
```
If you use [Poetry](../../development/dependencies.md#managing-dependencies-with-poetry)
to run Synapse:
```console
poetry run python -m synapse.app.admin_cmd -c <config_file> export-data <user_id> --output-directory <directory_path>
```
The directory to store the export data in can be customised with the
`--output-directory` parameter; ensure that the provided directory is
empty. If this parameter is not provided, Synapse defaults to creating
a temporary directory (which starts with "synapse-exfiltrate") in `/tmp`,
`/var/tmp`, or `/usr/tmp`, in that order.
The exported data has the following layout:
```
output-directory
├───rooms
│ └───<room_id>
│ ├───events
│ ├───state
│ ├───invite_state
│ └───knock_state
└───user_data
├───connections
├───devices
└───profile
```
Manually resetting passwords
---
Users can reset their password through their client. Alternatively, a server admin
@@ -42,21 +86,29 @@ I have a problem with my server. Can I just delete my database and start again?
---
Deleting your database is unlikely to make anything better.
It's easy to make the mistake of thinking that you can start again from a clean slate by dropping your database, but things don't work like that in a federated network: lots of other servers have information about your server.
It's easy to make the mistake of thinking that you can start again from a clean
slate by dropping your database, but things don't work like that in a federated
network: lots of other servers have information about your server.
For example: other servers might think that you are in a room, your server will think that you are not, and you'll probably be unable to interact with that room in a sensible way ever again.
For example: other servers might think that you are in a room, your server will
think that you are not, and you'll probably be unable to interact with that room
in a sensible way ever again.
In general, there are better solutions to any problem than dropping the database. Come and seek help in https://matrix.to/#/#synapse:matrix.org.
In general, there are better solutions to any problem than dropping the database.
Come and seek help in https://matrix.to/#/#synapse:matrix.org.
There are two exceptions when it might be sensible to delete your database and start again:
* You have *never* joined any rooms which are federated with other servers. For instance, a local deployment which the outside world can't talk to.
* You are changing the `server_name` in the homeserver configuration. In effect this makes your server a completely new one from the point of view of the network, so in this case it makes sense to start with a clean database.
* You have *never* joined any rooms which are federated with other servers. For
instance, a local deployment which the outside world can't talk to.
* You are changing the `server_name` in the homeserver configuration. In effect
this makes your server a completely new one from the point of view of the network,
so in this case it makes sense to start with a clean database.
(In both cases you probably also want to clear out the media_store.)
I've stuffed up access to my room, how can I delete it to free up the alias?
---
Using the following curl command:
```
```console
curl -H 'Authorization: Bearer <access-token>' -X DELETE https://matrix.org/_matrix/client/r0/directory/room/<room-alias>
```
`<access-token>` - can be obtained in riot by looking in the riot settings, down the bottom is:
@@ -67,19 +119,25 @@ Access Token:\<click to reveal\>
How can I find the lines corresponding to a given HTTP request in my homeserver log?
---
Synapse tags each log line according to the HTTP request it is processing. When it finishes processing each request, it logs a line containing the words `Processed request: `. For example:
Synapse tags each log line according to the HTTP request it is processing. When
it finishes processing each request, it logs a line containing the words
`Processed request: `. For example:
```
2019-02-14 22:35:08,196 - synapse.access.http.8008 - 302 - INFO - GET-37 - ::1 - 8008 - {@richvdh:localhost} Processed request: 0.173sec/0.001sec (0.002sec, 0.000sec) (0.027sec/0.026sec/2) 687B 200 "GET /_matrix/client/r0/sync HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" [0 dbevts]"
```
Here we can see that the request has been tagged with `GET-37`. (The tag depends on the method of the HTTP request, so might start with `GET-`, `PUT-`, `POST-`, `OPTIONS-` or `DELETE-`.) So to find all lines corresponding to this request, we can do:
Here we can see that the request has been tagged with `GET-37`. (The tag depends
on the method of the HTTP request, so might start with `GET-`, `PUT-`, `POST-`,
`OPTIONS-` or `DELETE-`.) So to find all lines corresponding to this request, we can do:
```
```console
grep 'GET-37' homeserver.log
```
If you want to paste that output into a github issue or matrix room, please remember to surround it with triple-backticks (```) to make it legible (see [quoting code](https://help.github.com/en/articles/basic-writing-and-formatting-syntax#quoting-code)).
If you want to paste that output into a github issue or matrix room, please
remember to surround it with triple-backticks (```) to make it legible
(see [quoting code](https://help.github.com/en/articles/basic-writing-and-formatting-syntax#quoting-code)).
What do all those fields in the 'Processed' line mean?
@@ -119,7 +177,7 @@ This is normally caused by a misconfiguration in your reverse-proxy. See [the re
Help!! Synapse is slow and eats all my RAM/CPU!
-----------------------------------------------
---
First, ensure you are running the latest version of Synapse, using Python 3
with a [PostgreSQL database](../../postgres.md).
@@ -161,7 +219,7 @@ in the Synapse config file: [see here](../configuration/config_documentation.md#
Running out of File Handles
---------------------------
---
If Synapse runs out of file handles, it typically fails badly - live-locking
at 100% CPU, and/or failing to accept new TCP connections (blocking the
+2 -2
@@ -10,10 +10,10 @@ See the following for how to decode the dense data available from the default lo
```
| Part | Explanation |
| ----- | ------------ |
| AAAA | Timestamp request was logged (not received) |
| BBBB | Logger name (`synapse.access.(http\|https).<tag>`, where 'tag' is defined in the `listeners` config section, normally the port) |
| BBBB | Logger name (`synapse.access.(http\|https).<tag>`, where 'tag' is defined in the [`listeners`](../configuration/config_documentation.md#listeners) config section, normally the port) |
| CCCC | Line number in code |
| DDDD | Log Level |
| EEEE | Request Identifier (This identifier is shared by related log lines)|
@@ -295,7 +295,9 @@ Known room versions are listed [here](https://spec.matrix.org/latest/rooms/#comp
For example, for room version 1, `default_room_version` should be set
to "1".
Currently defaults to "9".
Currently defaults to ["10"](https://spec.matrix.org/v1.5/rooms/v10/).
_Changed in Synapse 1.76:_ the default version room version was increased from [9](https://spec.matrix.org/v1.5/rooms/v9/) to [10](https://spec.matrix.org/v1.5/rooms/v10/).
Example configuration:
```yaml
@@ -422,6 +424,10 @@ Sub-options for each listener include:
* `port`: the TCP port to bind to.
* `tag`: An alias for the port in the logger name. If set the tag is logged instead
of the port. Defaults to `None`; it is optional and only valid for listeners with `type: http`.
See the [request log format](../administration/request_log.md) docs.
* `bind_addresses`: a list of local addresses to listen on. The default is
'all local interfaces'.
@@ -476,6 +482,12 @@ Valid resource names are:
* `static`: static resources under synapse/static (/_matrix/static). (Mostly useful for 'fallback authentication'.)
* `health`: the [health check endpoint](../../reverse_proxy.md#health-check-endpoint). This endpoint
is by default active for all other resources and does not have to be activated separately.
This is only useful if you want to use the health endpoint explicitly on a dedicated port or
for [workers](../../workers.md) and containers without a listener, e.g.
[application services](../../workers.md#notifying-application-services).
Example configuration #1:
```yaml
listeners:
@@ -3462,8 +3474,8 @@ This setting defines options related to the user directory.
This option has the following sub-options:
* `enabled`: Defines whether users can search the user directory. If false then
empty responses are returned to all queries. Defaults to true.
* `search_all_users`: Defines whether to search all users visible to your HS when searching
the user directory. If false, search results will only contain users
* `search_all_users`: Defines whether to search all users visible to your HS at the time the search is performed. If set to true, the search will return all users who share a room with the user from the homeserver.
If false, search results will only contain users
visible in public rooms and users sharing a room with the requester.
Defaults to false.
@@ -4019,6 +4031,27 @@ worker_listeners:
resources:
- names: [client, federation]
```
---
### `worker_manhole`
A worker may have a listener for [`manhole`](../../manhole.md).
It allows server administrators to access a Python shell on the worker.
Example configuration:
```yaml
worker_manhole: 9000
```
This is a short form for:
```yaml
worker_listeners:
- port: 9000
bind_addresses: ['127.0.0.1']
type: manhole
```
It also requires an additional [`manhole_settings`](#manhole_settings) configuration.
---
### `worker_daemonize`
@@ -1,9 +1,11 @@
# Logging Sample Configuration File
Below is a sample logging configuration file. This file can be tweaked to control how your
homeserver will output logs. A restart of the server is generally required to apply any
changes made to this file. The value of the `log_config` option in your homeserver
config should be the path to this file.
homeserver will output logs. The value of the `log_config` option in your homeserver config
should be the path to this file.
To apply changes made to this file, send Synapse a SIGHUP signal (or, if using `systemd`, run
`systemctl reload` on the Synapse service).
Note that a default logging configuration (shown below) is created automatically alongside
the homeserver config when following the [installation instructions](../../setup/installation.md).
+29 -27
@@ -32,31 +32,9 @@ exclude = (?x)
|synapse/storage/databases/main/cache.py
|synapse/storage/schema/
|tests/api/test_auth.py
|tests/api/test_ratelimiting.py
|tests/app/test_openid_listener.py
|tests/appservice/test_scheduler.py
|tests/events/test_presence_router.py
|tests/events/test_utils.py
|tests/federation/test_federation_catch_up.py
|tests/federation/test_federation_sender.py
|tests/federation/transport/test_knocking.py
|tests/handlers/test_typing.py
|tests/http/federation/test_matrix_federation_agent.py
|tests/http/federation/test_srv_resolver.py
|tests/http/test_proxyagent.py
|tests/logging/__init__.py
|tests/logging/test_terse_json.py
|tests/module_api/test_api.py
|tests/push/test_email.py
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/client/test_transactions.py
|tests/rest/media/v1/test_media_storage.py
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/test_state.py
|tests/test_terms_auth.py
)$
[mypy-synapse.federation.transport.client]
@@ -86,22 +64,43 @@ disallow_untyped_defs = False
[mypy-tests.*]
disallow_untyped_defs = False
[mypy-tests.api.*]
disallow_untyped_defs = True
[mypy-tests.app.*]
disallow_untyped_defs = True
[mypy-tests.appservice.*]
disallow_untyped_defs = True
[mypy-tests.config.*]
disallow_untyped_defs = True
[mypy-tests.crypto.*]
disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]
[mypy-tests.events.*]
disallow_untyped_defs = True
[mypy-tests.federation.*]
disallow_untyped_defs = True
[mypy-tests.handlers.*]
disallow_untyped_defs = True
[mypy-tests.http.*]
disallow_untyped_defs = True
[mypy-tests.logging.*]
disallow_untyped_defs = True
[mypy-tests.metrics.*]
disallow_untyped_defs = True
[mypy-tests.push.test_bulk_push_rule_evaluator]
[mypy-tests.push.*]
disallow_untyped_defs = True
[mypy-tests.replication.*]
disallow_untyped_defs = True
[mypy-tests.rest.*]
@@ -116,6 +115,12 @@ disallow_untyped_defs = True
[mypy-tests.test_server]
disallow_untyped_defs = True
[mypy-tests.test_state]
disallow_untyped_defs = True
[mypy-tests.test_terms_auth]
disallow_untyped_defs = True
[mypy-tests.types.*]
disallow_untyped_defs = True
@@ -142,9 +147,6 @@ disallow_untyped_defs = True
[mypy-authlib.*]
ignore_missing_imports = True
[mypy-canonicaljson]
ignore_missing_imports = True
[mypy-ijson.*]
ignore_missing_imports = True
+1875 -1803
File diff suppressed because it is too large.
+6 -12
@@ -48,11 +48,6 @@ line-length = 88
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
#
# See https://github.com/charliermarsh/ruff/#pyflakes
# F401: unused import
# F811: Redefinition of unused
# F821: Undefined name
#
# flake8-bugbear compatible checks. Its error codes are described at
# https://github.com/charliermarsh/ruff/#flake8-bugbear
# B019: Use of functools.lru_cache or functools.cache on methods can lead to memory leaks
@@ -64,9 +59,6 @@ ignore = [
"B024",
"E501",
"E731",
"F401",
"F811",
"F821",
]
select = [
# pycodestyle checks.
@@ -97,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.75.0rc1"
version = "1.77.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -135,7 +127,9 @@ exclude = [
{ path = "synapse/*.so", format = "sdist"}
]
build = "build_rust.py"
[tool.poetry.build]
script = "build_rust.py"
generate-setup-file = true
[tool.poetry.scripts]
synapse_homeserver = "synapse.app.homeserver:main"
@@ -317,7 +311,7 @@ all = [
# We pin black so that our tests don't start failing on new releases.
isort = ">=5.10.1"
black = ">=22.3.0"
ruff = "0.0.215"
ruff = "0.0.230"
# Typechecking
mypy = "*"
@@ -358,7 +352,7 @@ towncrier = ">=18.6.0rc1"
# system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.0.0,<=1.3.2", "setuptools_rust>=1.3,<=1.5.2"]
requires = ["poetry-core>=1.0.0,<=1.5.0", "setuptools_rust>=1.3,<=1.5.2"]
build-backend = "poetry.core.masonry.api"
+5 -1
@@ -23,13 +23,17 @@ name = "synapse.synapse_rust"
anyhow = "1.0.63"
lazy_static = "1.4.0"
log = "0.4.17"
pyo3 = { version = "0.17.1", features = ["extension-module", "macros", "anyhow", "abi3", "abi3-py37"] }
pyo3 = { version = "0.17.1", features = ["macros", "anyhow", "abi3", "abi3-py37"] }
pyo3-log = "0.7.0"
pythonize = "0.17.0"
regex = "1.6.0"
serde = { version = "1.0.144", features = ["derive"] }
serde_json = "1.0.85"
[features]
extension-module = ["pyo3/extension-module"]
default = ["extension-module"]
[build-dependencies]
blake2 = "0.10.4"
hex = "0.4.3"
+22 -2
@@ -13,6 +13,7 @@
// limitations under the License.
#![feature(test)]
use std::collections::BTreeSet;
use synapse::push::{
evaluator::PushRuleEvaluator, Condition, EventMatchCondition, FilteredPushRules, PushRules,
};
@@ -32,6 +33,9 @@ fn bench_match_exact(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
Default::default(),
@@ -68,6 +72,9 @@ fn bench_match_word(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
Default::default(),
@@ -104,6 +111,9 @@ fn bench_match_word_miss(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
Default::default(),
@@ -140,6 +150,9 @@ fn bench_eval_message(b: &mut Bencher) {
let eval = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
Default::default(),
@@ -150,8 +163,15 @@ fn bench_eval_message(b: &mut Bencher) {
)
.unwrap();
let rules =
FilteredPushRules::py_new(PushRules::new(Vec::new()), Default::default(), false, false);
let rules = FilteredPushRules::py_new(
PushRules::new(Vec::new()),
Default::default(),
false,
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));
}
+15 -2
@@ -1,7 +1,13 @@
use lazy_static::lazy_static;
use pyo3::prelude::*;
use pyo3_log::ResetHandle;
pub mod push;
lazy_static! {
static ref LOGGING_HANDLE: ResetHandle = pyo3_log::init();
}
/// Returns the hash of all the rust source files at the time it was compiled.
///
/// Used by python to detect if the rust library is outdated.
@@ -17,13 +23,20 @@ fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
Ok((a + b).to_string())
}
/// Reset the cached logging configuration of pyo3-log to pick up any changes
/// in the Python logging configuration.
///
#[pyfunction]
fn reset_logging_config() {
LOGGING_HANDLE.reset();
}
/// The entry point for defining the Python module.
#[pymodule]
fn synapse_rust(py: Python<'_>, m: &PyModule) -> PyResult<()> {
pyo3_log::init();
m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
m.add_function(wrap_pyfunction!(get_rust_file_digest, m)?)?;
m.add_function(wrap_pyfunction!(reset_logging_config, m)?)?;
push::register_module(py, m)?;
+115 -1
@@ -1,4 +1,4 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
// Copyright 2022, 2023 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
@@ -63,6 +63,23 @@ pub const BASE_PREPEND_OVERRIDE_RULES: &[PushRule] = &[PushRule {
}];
pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
// We don't want to notify on edits. Not only can this be confusing in real
// time (2 notifications, one message) but it's especially confusing
// if a bridge needs to edit a previously backfilled message.
PushRule {
rule_id: Cow::Borrowed("global/override/.com.beeper.suppress_edits"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("content.m.relates_to.rel_type"),
pattern: Some(Cow::Borrowed("m.replace")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),
priority_class: 5,
@@ -131,6 +148,14 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_user_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::IsUserMention)]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.contains_display_name"),
priority_class: 5,
@@ -139,6 +164,19 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_room_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::IsRoomMention),
Condition::Known(KnownCondition::SenderNotificationPermission {
key: Cow::Borrowed("room"),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.roomnotif"),
priority_class: 5,
@@ -208,6 +246,20 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.org.matrix.msc3930.rule.poll_response"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("org.matrix.msc3381.poll.response")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},
];
pub const BASE_APPEND_CONTENT_RULES: &[PushRule] = &[PushRule {
@@ -596,6 +648,68 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc3930.rule.poll_start_one_to_one"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("org.matrix.msc3381.poll.start")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc3930.rule.poll_start"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("org.matrix.msc3381.poll.start")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc3930.rule.poll_end_one_to_one"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("org.matrix.msc3381.poll.end")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc3930.rule.poll_end"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("org.matrix.msc3381.poll.end")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify]),
default: true,
default_enabled: true,
},
];
lazy_static! {
+42 -2
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::BTreeMap;
use std::collections::{BTreeMap, BTreeSet};
use anyhow::{Context, Error};
use lazy_static::lazy_static;
@@ -68,6 +68,13 @@ pub struct PushRuleEvaluator {
/// The "content.body", if any.
body: String,
/// True if the event has a mentions property and MSC3952 support is enabled.
has_mentions: bool,
/// The user mentions that were part of the message.
user_mentions: BTreeSet<String>,
/// True if the message is a room mention.
room_mention: bool,
/// The number of users in the room.
room_member_count: u64,
@@ -100,6 +107,9 @@ impl PushRuleEvaluator {
#[new]
pub fn py_new(
flattened_keys: BTreeMap<String, String>,
has_mentions: bool,
user_mentions: BTreeSet<String>,
room_mention: bool,
room_member_count: u64,
sender_power_level: Option<i64>,
notification_power_levels: BTreeMap<String, i64>,
@@ -116,6 +126,9 @@ impl PushRuleEvaluator {
Ok(PushRuleEvaluator {
flattened_keys,
body,
has_mentions,
user_mentions,
room_mention,
room_member_count,
notification_power_levels,
sender_power_level,
@@ -146,6 +159,19 @@ impl PushRuleEvaluator {
}
let rule_id = &push_rule.rule_id().to_string();
// For backwards-compatibility the legacy mention rules are disabled
// if the event contains the 'm.mentions' property (and if the
// experimental feature is enabled, both of these are represented
// by the has_mentions flag).
if self.has_mentions
&& (rule_id == "global/override/.m.rule.contains_display_name"
|| rule_id == "global/content/.m.rule.contains_user_name"
|| rule_id == "global/override/.m.rule.roomnotif")
{
continue;
}
let extev_flag = &RoomVersionFeatures::ExtensibleEvents.as_str().to_string();
let supports_extensible_events = self.room_version_feature_flags.contains(extev_flag);
let safe_from_rver_condition = SAFE_EXTENSIBLE_EVENTS_RULE_IDS.contains(rule_id);
@@ -229,6 +255,14 @@ impl PushRuleEvaluator {
KnownCondition::RelatedEventMatch(event_match) => {
self.match_related_event_match(event_match, user_id)?
}
KnownCondition::IsUserMention => {
if let Some(uid) = user_id {
self.user_mentions.contains(uid)
} else {
false
}
}
KnownCondition::IsRoomMention => self.room_mention,
KnownCondition::ContainsDisplayName => {
if let Some(dn) = display_name {
if !dn.is_empty() {
@@ -424,6 +458,9 @@ fn push_rule_evaluator() {
flattened_keys.insert("content.body".to_string(), "foo bar bob hello".to_string());
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
BTreeMap::new(),
@@ -449,6 +486,9 @@ fn test_requires_room_version_supports_condition() {
let flags = vec![RoomVersionFeatures::ExtensibleEvents.as_str().to_string()];
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
false,
BTreeSet::new(),
false,
10,
Some(0),
BTreeMap::new(),
@@ -483,7 +523,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, true),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
None,
None,
);
+54 -4
@@ -269,6 +269,10 @@ pub enum KnownCondition {
EventMatch(EventMatchCondition),
#[serde(rename = "im.nheko.msc3664.related_event_match")]
RelatedEventMatch(RelatedEventMatchCondition),
#[serde(rename = "org.matrix.msc3952.is_user_mention")]
IsUserMention,
#[serde(rename = "org.matrix.msc3952.is_room_mention")]
IsRoomMention,
ContainsDisplayName,
RoomMemberCount {
#[serde(skip_serializing_if = "Option::is_none")]
@@ -411,8 +415,11 @@ impl PushRules {
pub struct FilteredPushRules {
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
}
#[pymethods]
@@ -421,14 +428,20 @@ impl FilteredPushRules {
pub fn py_new(
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
) -> Self {
Self {
push_rules,
enabled_map,
msc1767_enabled,
msc3381_polls_enabled,
msc3664_enabled,
msc3952_intentional_mentions,
msc3958_suppress_edits_enabled,
}
}
@@ -447,13 +460,28 @@ impl FilteredPushRules {
.iter()
.filter(|rule| {
// Ignore disabled experimental push rules
if !self.msc1767_enabled && rule.rule_id.contains("org.matrix.msc1767") {
return false;
}
if !self.msc3664_enabled
&& rule.rule_id == "global/override/.im.nheko.msc3664.reply"
{
return false;
}
if !self.msc3381_polls_enabled && rule.rule_id.contains("org.matrix.msc3930") {
return false;
}
if !self.msc3952_intentional_mentions && rule.rule_id.contains("org.matrix.msc3952")
{
return false;
}
if !self.msc3958_suppress_edits_enabled
&& rule.rule_id == "global/override/.com.beeper.suppress_edits"
{
return false;
}
@@ -514,6 +542,28 @@ fn test_deserialize_unstable_msc3931_condition() {
));
}
#[test]
fn test_deserialize_unstable_msc3952_user_condition() {
let json = r#"{"kind":"org.matrix.msc3952.is_user_mention"}"#;
let condition: Condition = serde_json::from_str(json).unwrap();
assert!(matches!(
condition,
Condition::Known(KnownCondition::IsUserMention)
));
}
#[test]
fn test_deserialize_unstable_msc3952_room_condition() {
let json = r#"{"kind":"org.matrix.msc3952.is_room_mention"}"#;
let condition: Condition = serde_json::from_str(json).unwrap();
assert!(matches!(
condition,
Condition::Known(KnownCondition::IsRoomMention)
));
}
#[test]
fn test_deserialize_custom_condition() {
let json = r#"{"kind":"custom_tag"}"#;
+9 -7
@@ -190,7 +190,7 @@ fi
extra_test_args=()
test_tags="synapse_blacklist,msc3787,msc3874,msc3391"
test_tags="synapse_blacklist,msc3787,msc3874,msc3890,msc3391,msc3930,faster_joins"
# All environment variables starting with PASS_ will be shared.
# (The prefix is stripped off before reaching the container.)
@@ -223,12 +223,14 @@ else
export PASS_SYNAPSE_COMPLEMENT_DATABASE=sqlite
fi
# We only test faster room joins on monoliths, because they are purposefully
# being developed without worker support to start with.
#
# The tests for importing historical messages (MSC2716) also only pass with monoliths,
# currently.
test_tags="$test_tags,faster_joins,msc2716"
# The tests for importing historical messages (MSC2716)
# only pass with monoliths, currently.
test_tags="$test_tags,msc2716"
fi
if [[ -n "$ASYNCIO_REACTOR" ]]; then
# Enable the Twisted asyncio reactor
export PASS_SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=true
fi
-1
@@ -11,6 +11,5 @@
sqlite3 "$1" <<'EOF' >table-save.sql
.dump users
.dump access_tokens
.dump presence
.dump profiles
EOF
+33
@@ -101,10 +101,43 @@ echo
# Print out the commands being run
set -x
# Ensure the sort order of imports.
isort "${files[@]}"
# Ensure Python code conforms to an opinionated style.
python3 -m black "${files[@]}"
# Ensure the sample configuration file conforms to style checks.
./scripts-dev/config-lint.sh
# Catch any common programming mistakes in Python code.
# --quiet suppresses the update check.
ruff --quiet "${files[@]}"
# Catch any common programming mistakes in Rust code.
#
# --bins, --examples, --lib, --tests combined explicitly disable checking
# the benchmarks, which can fail due to `#![feature]` macros not being
# allowed on the stable rust toolchain (rustc error E0554).
#
# --allow-staged and --allow-dirty suppress clippy raising errors
# for uncommitted files. Only needed when using --fix.
#
# -D warnings denies the "warnings" lint, promoting all warnings to errors.
#
# Using --fix has a tendency to cause subsequent runs of clippy to recompile
# rust code, which can slow down this script. Thus we run clippy without --fix
# first which is quick, and then re-run it with --fix if an error was found.
if ! cargo-clippy --bins --examples --lib --tests -- -D warnings > /dev/null 2>&1; then
cargo-clippy \
--bins --examples --lib --tests --allow-staged --allow-dirty --fix -- -D warnings
fi
# Ensure the formatting of Rust code.
cargo-fmt
# Ensure all Pydantic models use strict types.
./scripts-dev/check_pydantic_models.py lint
# Ensure type hints are correct.
mypy
+1 -1
@@ -438,7 +438,7 @@ def _upload(gh_token: Optional[str]) -> None:
repo = get_repo_and_check_clean_checkout()
tag = repo.tag(f"refs/tags/{tag_name}")
if repo.head.commit != tag.commit:
click.echo("Tag {tag_name} (tag.commit) is not currently checked out!")
click.echo(f"Tag {tag_name} ({tag.commit}) is not currently checked out!")
click.get_current_context().abort()
# Query all the assets corresponding to this release.
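The fix above is the classic missing `f` prefix: without it, Python prints the braces literally instead of interpolating. A minimal illustration (tag name made up):

tag_name = "v1.0.0"
print("Tag {tag_name} is not currently checked out!")   # literal "{tag_name}", no interpolation
print(f"Tag {tag_name} is not currently checked out!")  # f-string substitutes the value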
-1
@@ -7,7 +7,6 @@ from __future__ import annotations
from typing import (
Any,
Callable,
Generic,
Iterable,
Iterator,
List,
-2
@@ -5,10 +5,8 @@
from __future__ import annotations
from typing import (
AbstractSet,
Any,
Callable,
Generic,
Hashable,
Iterable,
Iterator,
+1
@@ -1,2 +1,3 @@
def sum_as_string(a: int, b: int) -> str: ...
def get_rust_file_digest() -> str: ...
def reset_logging_config() -> None: ...
+25 -2
@@ -1,4 +1,18 @@
from typing import Any, Collection, Dict, Mapping, Optional, Sequence, Tuple, Union
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Collection, Dict, Mapping, Optional, Sequence, Set, Tuple, Union
from synapse.types import JsonDict
@@ -29,8 +43,11 @@ class FilteredPushRules:
self,
push_rules: PushRules,
enabled_map: Dict[str, bool],
msc3664_enabled: bool,
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...
@@ -40,6 +57,9 @@ class PushRuleEvaluator:
def __init__(
self,
flattened_keys: Mapping[str, str],
has_mentions: bool,
user_mentions: Set[str],
room_mention: bool,
room_member_count: int,
sender_power_level: Optional[int],
notification_power_levels: Mapping[str, int],
@@ -54,3 +74,6 @@ class PushRuleEvaluator:
user_id: Optional[str],
display_name: Optional[str],
) -> Collection[Union[Mapping, str]]: ...
def matches(
self, condition: JsonDict, user_id: Optional[str], display_name: Optional[str]
) -> bool: ...
+4
@@ -51,6 +51,7 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.notifier import ReplicationNotifier
from synapse.storage.database import DatabasePool, LoggingTransaction, make_conn
from synapse.storage.databases.main import PushRuleStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
@@ -260,6 +261,9 @@ class MockHomeserver:
def should_send_federation(self) -> bool:
return False
def get_replication_notifier(self) -> ReplicationNotifier:
return ReplicationNotifier()
class Porter:
def __init__(
+11
@@ -17,6 +17,8 @@
"""Contains constants from the specification."""
import enum
from typing_extensions import Final
# the max size of a (canonical-json-encoded) event
@@ -231,6 +233,9 @@ class EventContentFields:
# The authorising user for joining a restricted room.
AUTHORISING_USER: Final = "join_authorised_via_users_server"
# Use for mentioning users.
MSC3952_MENTIONS: Final = "org.matrix.msc3952.mentions"
# an unspecced field added to to-device messages to identify them uniquely-ish
TO_DEVICE_MSGID: Final = "org.matrix.msgid"
@@ -249,6 +254,7 @@ class RoomEncryptionAlgorithms:
class AccountDataTypes:
DIRECT: Final = "m.direct"
IGNORED_USER_LIST: Final = "m.ignored_user_list"
TAG: Final = "m.tag"
class HistoryVisibility:
@@ -289,3 +295,8 @@ class ApprovalNoticeMedium:
NONE = "org.matrix.msc3866.none"
EMAIL = "org.matrix.msc3866.email"
class Direction(enum.Enum):
BACKWARDS = "b"
FORWARDS = "f"
+2 -2
View File
@@ -252,9 +252,9 @@ class FilterCollection:
return self._room_timeline_filter.unread_thread_notifications
async def filter_presence(
self, events: Iterable[UserPresenceState]
self, presence_states: Iterable[UserPresenceState]
) -> List[UserPresenceState]:
return await self._presence_filter.filter(events)
return await self._presence_filter.filter(presence_states)
async def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
return await self._account_data.filter(events)
+31 -1
View File
@@ -35,6 +35,7 @@ from synapse.storage.databases.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)
from synapse.storage.databases.main.client_ips import ClientIpWorkerStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.event_federation import EventFederationWorkerStore
@@ -43,6 +44,7 @@ from synapse.storage.databases.main.event_push_actions import (
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.filtering import FilteringWorkerStore
from synapse.storage.databases.main.profile import ProfileWorkerStore
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.registration import RegistrationWorkerStore
@@ -54,7 +56,7 @@ from synapse.storage.databases.main.state import StateGroupWorkerStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
from synapse.storage.databases.main.user_erasure_store import UserErasureWorkerStore
from synapse.types import StateMap
from synapse.types import JsonDict, StateMap
from synapse.util import SYNAPSE_VERSION
from synapse.util.logcontext import LoggingContext
@@ -63,6 +65,7 @@ logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
FilteringWorkerStore,
ClientIpWorkerStore,
DeviceWorkerStore,
TagsWorkerStore,
DeviceInboxWorkerStore,
@@ -82,6 +85,7 @@ class AdminCmdSlavedStore(
EventsWorkerStore,
RegistrationWorkerStore,
RoomWorkerStore,
ProfileWorkerStore,
):
def __init__(
self,
@@ -192,6 +196,32 @@ class FileExfiltrationWriter(ExfiltrationWriter):
for event in state.values():
print(json.dumps(event), file=f)
def write_profile(self, profile: JsonDict) -> None:
user_directory = os.path.join(self.base_directory, "user_data")
os.makedirs(user_directory, exist_ok=True)
profile_file = os.path.join(user_directory, "profile")
with open(profile_file, "a") as f:
print(json.dumps(profile), file=f)
def write_devices(self, devices: List[JsonDict]) -> None:
user_directory = os.path.join(self.base_directory, "user_data")
os.makedirs(user_directory, exist_ok=True)
device_file = os.path.join(user_directory, "devices")
for device in devices:
with open(device_file, "a") as f:
print(json.dumps(device), file=f)
def write_connections(self, connections: List[JsonDict]) -> None:
user_directory = os.path.join(self.base_directory, "user_data")
os.makedirs(user_directory, exist_ok=True)
connection_file = os.path.join(user_directory, "connections")
for connection in connections:
with open(connection_file, "a") as f:
print(json.dumps(connection), file=f)
def finished(self) -> str:
return self.base_directory
+19 -2
@@ -110,6 +110,8 @@ def _worker_entrypoint(
and then kick off the worker's main() function.
"""
from synapse.util.stringutils import strtobool
sys.argv = args
# reset the custom signal handlers that we installed, so that the children start
@@ -117,9 +119,24 @@ def _worker_entrypoint(
for sig, handler in _original_signal_handlers.items():
signal.signal(sig, handler)
from twisted.internet.epollreactor import EPollReactor
# Install the asyncio reactor if the
# SYNAPSE_COMPLEMENT_FORKING_LAUNCHER_ASYNC_IO_REACTOR is set to 1. The
# SYNAPSE_ASYNC_IO_REACTOR variable would be used, but then causes
# synapse/__init__.py to also try to install an asyncio reactor.
if strtobool(
os.environ.get("SYNAPSE_COMPLEMENT_FORKING_LAUNCHER_ASYNC_IO_REACTOR", "0")
):
import asyncio
from twisted.internet.asyncioreactor import AsyncioSelectorReactor
reactor = AsyncioSelectorReactor(asyncio.get_event_loop())
proxy_reactor._install_real_reactor(reactor)
else:
from twisted.internet.epollreactor import EPollReactor
proxy_reactor._install_real_reactor(EPollReactor())
proxy_reactor._install_real_reactor(EPollReactor())
func()
+3 -7
@@ -199,6 +199,9 @@ class GenericWorkerServer(HomeServer):
"A 'media' listener is configured but the media"
" repository is disabled. Ignoring."
)
elif name == "health":
# Skip loading, health resource is always included
continue
if name == "openid" and "federation" not in res.names:
# Only load the openid resource separately if federation resource
@@ -279,13 +282,6 @@ def start(config_options: List[str]) -> None:
"synapse.app.user_dir",
)
if config.experimental.faster_joins_enabled:
raise ConfigError(
"You have enabled the experimental `faster_joins` config option, but it is "
"not compatible with worker deployments yet. Please disable `faster_joins` "
"or run Synapse as a single process deployment instead."
)
synapse.events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
+3
@@ -96,6 +96,9 @@ class SynapseHomeServer(HomeServer):
# Skip loading openid resource if federation is defined
# since federation resource will include openid
continue
if name == "health":
# Skip loading, health resource is always included
continue
resources.update(self._configure_named_resource(name, res.compress))
additional_resources = listener_config.http_options.additional_resources
+2 -2
@@ -16,7 +16,7 @@
import logging
import re
from enum import Enum
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Pattern
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Pattern, Sequence
import attr
from netaddr import IPSet
@@ -377,7 +377,7 @@ class AppServiceTransaction:
self,
service: ApplicationService,
id: int,
events: List[EventBase],
events: Sequence[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_keys_count: TransactionOneTimeKeysCount,
+12 -2
@@ -14,7 +14,17 @@
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Mapping, Optional, Tuple
from typing import (
TYPE_CHECKING,
Any,
Dict,
Iterable,
List,
Mapping,
Optional,
Sequence,
Tuple,
)
from prometheus_client import Counter
from typing_extensions import TypeGuard
@@ -259,7 +269,7 @@ class ApplicationServiceApi(SimpleHttpClient):
async def push_bulk(
self,
service: "ApplicationService",
events: List[EventBase],
events: Sequence[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
one_time_keys_count: TransactionOneTimeKeysCount,
+2 -1
@@ -57,6 +57,7 @@ from typing import (
Iterable,
List,
Optional,
Sequence,
Set,
Tuple,
)
@@ -364,7 +365,7 @@ class _TransactionController:
async def send(
self,
service: ApplicationService,
events: List[EventBase],
events: Sequence[EventBase],
ephemeral: Optional[List[JsonDict]] = None,
to_device_messages: Optional[List[JsonDict]] = None,
one_time_keys_count: Optional[TransactionOneTimeKeysCount] = None,
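The recurring List[EventBase] to Sequence[EventBase] change in these appservice interfaces is about variance: Sequence is read-only and covariant, so callers may pass tuples or other immutable sequences and the callee cannot (per the type checker) mutate them. A small sketch of the difference, using ints for brevity:

from typing import List, Sequence

def push_list(events: List[int]) -> None: ...
def push_seq(events: Sequence[int]) -> None: ...

events = (1, 2, 3)   # a tuple of "events"
push_seq(events)     # accepted: a tuple is a Sequence
push_list(events)    # flagged by mypy: a tuple is not a List (though it runs at runtime)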
+50 -22
@@ -174,15 +174,29 @@ class Config:
@staticmethod
def parse_size(value: Union[str, int]) -> int:
if isinstance(value, int):
"""Interpret `value` as a number of bytes.
If an integer is provided it is treated as bytes and is unchanged.
String byte sizes can have a suffix of 'K' or 'M', representing kibibytes and
mebibytes respectively. No suffix is understood as a plain byte count.
Raises:
TypeError: if given something other than an integer or a string
ValueError: if given a string not of the form described above.
"""
if type(value) is int:
return value
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
suffix = value[-1]
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
return int(value) * size
elif type(value) is str:
sizes = {"K": 1024, "M": 1024 * 1024}
size = 1
suffix = value[-1]
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
return int(value) * size
else:
raise TypeError(f"Bad byte size {value!r}")
@staticmethod
def parse_duration(value: Union[str, int]) -> int:
@@ -198,22 +212,36 @@ class Config:
Returns:
The number of milliseconds in the duration.
Raises:
TypeError: if given something other than an integer or a string
ValueError: if given a string not of the form described above.
"""
if isinstance(value, int):
if type(value) is int:
return value
second = 1000
minute = 60 * second
hour = 60 * minute
day = 24 * hour
week = 7 * day
year = 365 * day
sizes = {"s": second, "m": minute, "h": hour, "d": day, "w": week, "y": year}
size = 1
suffix = value[-1]
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
return int(value) * size
elif type(value) is str:
second = 1000
minute = 60 * second
hour = 60 * minute
day = 24 * hour
week = 7 * day
year = 365 * day
sizes = {
"s": second,
"m": minute,
"h": hour,
"d": day,
"w": week,
"y": year,
}
size = 1
suffix = value[-1]
if suffix in sizes:
value = value[:-1]
size = sizes[suffix]
return int(value) * size
else:
raise TypeError(f"Bad duration {value!r}")
@staticmethod
def abspath(file_path: str) -> str:
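Worked examples for the two parsers above, following their docstrings (values chosen for illustration):

assert Config.parse_size(2048) == 2048             # ints are taken as bytes
assert Config.parse_size("10K") == 10 * 1024       # kibibytes
assert Config.parse_size("2M") == 2 * 1024 * 1024  # mebibytes

assert Config.parse_duration(500) == 500           # ints are already milliseconds
assert Config.parse_duration("2s") == 2 * 1000
assert Config.parse_duration("1w") == 7 * 24 * 60 * 60 * 1000
# Config.parse_duration(1.5) now raises TypeError("Bad duration 1.5")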
+4 -6
@@ -1,5 +1,3 @@
from __future__ import annotations
import argparse
from typing import (
Any,
@@ -20,7 +18,7 @@ from typing import (
import jinja2
from synapse.config import (
from synapse.config import ( # noqa: F401
account_validity,
api,
appservice,
@@ -169,7 +167,7 @@ class RootConfig:
self, section_name: Literal["caches"]
) -> cache.CacheConfig: ...
@overload
def reload_config_section(self, section_name: str) -> Config: ...
def reload_config_section(self, section_name: str) -> "Config": ...
class Config:
root: RootConfig
@@ -202,9 +200,9 @@ def find_config_files(search_paths: List[str]) -> List[str]: ...
class ShardedWorkerHandlingConfig:
instances: List[str]
def __init__(self, instances: List[str]) -> None: ...
def should_handle(self, instance_name: str, key: str) -> bool: ...
def should_handle(self, instance_name: str, key: str) -> bool: ... # noqa: F811
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
def get_instance(self, key: str) -> str: ...
def get_instance(self, key: str) -> str: ... # noqa: F811
def read_file(file_path: Any, config_path: Iterable[str]) -> str: ...
+2 -2
@@ -126,7 +126,7 @@ class CacheConfig(Config):
cache_config = config.get("caches") or {}
self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
if not isinstance(self.global_factor, (int, float)):
if type(self.global_factor) not in (int, float):
raise ConfigError("caches.global_factor must be a number.")
# Load cache factors from the config
@@ -151,7 +151,7 @@ class CacheConfig(Config):
)
for cache, factor in individual_factors.items():
if not isinstance(factor, (int, float)):
if type(factor) not in (int, float):
raise ConfigError(
"caches.per_cache_factors.%s must be a number" % (cache,)
)
+41 -2
@@ -17,6 +17,7 @@ from typing import Any, Optional
import attr
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.config import ConfigError
from synapse.config._base import Config
from synapse.types import JsonDict
@@ -74,12 +75,16 @@ class ExperimentalConfig(Config):
)
# MSC3706 (server-side support for partial state in /send_join responses)
# Synapse will always serve partial state responses to requests using the stable
# query parameter `omit_members`. If this flag is set, Synapse will also serve
# partial state responses to requests using the unstable query parameter
# `org.matrix.msc3706.partial_state`.
self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)
# experimental support for faster joins over federation
# (MSC2775, MSC3706, MSC3895)
# requires a target server with msc3706_enabled enabled.
self.faster_joins_enabled: bool = experimental.get("faster_joins", False)
# requires a target server that can provide a partial join response (MSC3706)
self.faster_joins_enabled: bool = experimental.get("faster_joins", True)
# MSC3720 (Account status endpoint)
self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
@@ -93,6 +98,9 @@ class ExperimentalConfig(Config):
# MSC2815 (allow room moderators to view redacted event content)
self.msc2815_enabled: bool = experimental.get("msc2815_enabled", False)
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
@@ -127,6 +135,24 @@ class ExperimentalConfig(Config):
"msc3886_endpoint", None
)
# MSC3890: Remotely silence local notifications
# Note: This option requires "experimental_features.msc3391_enabled" to be
# set to "true", in order to communicate account data deletions to clients.
self.msc3890_enabled: bool = experimental.get("msc3890_enabled", False)
if self.msc3890_enabled and not self.msc3391_enabled:
raise ConfigError(
"Option 'experimental_features.msc3391' must be set to 'true' to "
"enable 'experimental_features.msc3890'. MSC3391 functionality is "
"required to communicate account data deletions to clients."
)
# MSC3381: Polls.
# In practice, supporting polls in Synapse only requires an implementation of
# MSC3930: Push rules for MSC3381 polls; which is what this option enables.
self.msc3381_polls_enabled: bool = experimental.get(
"msc3381_polls_enabled", False
)
# MSC3912: Relation-based redactions.
self.msc3912_enabled: bool = experimental.get("msc3912_enabled", False)
@@ -139,3 +165,16 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3925: do not replace events with their edits
self.msc3925_inhibit_edit = experimental.get("msc3925_inhibit_edit", False)
# MSC3952: Intentional mentions
self.msc3952_intentional_mentions = experimental.get(
"msc3952_intentional_mentions", False
)
# MSC3958: Do not generate notifications for edits.
self.msc3958_supress_edit_notifs = experimental.get(
"msc3958_supress_edit_notifs", False
)
+24 -18
@@ -34,6 +34,7 @@ from twisted.logger import (
from synapse.logging.context import LoggingContextFilter
from synapse.logging.filter import MetadataFilter
from synapse.synapse_rust import reset_logging_config
from synapse.types import JsonDict
from ..util import SYNAPSE_VERSION
@@ -200,24 +201,6 @@ def _setup_stdlib_logging(
"""
Set up Python standard library logging.
"""
if log_config_path is None:
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
else:
# Load the logging configuration.
_load_logging_config(log_config_path)
# We add a log record factory that runs all messages through the
# LoggingContextFilter so that we get the context *at the time we log*
@@ -237,6 +220,26 @@ def _setup_stdlib_logging(
logging.setLogRecordFactory(factory)
# Configure the logger with the initial configuration.
if log_config_path is None:
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
else:
# Load the logging configuration.
_load_logging_config(log_config_path)
# Route Twisted's native logging through to the standard library logging
# system.
observer = STDLibLogObserver()
@@ -294,6 +297,9 @@ def _load_logging_config(log_config_path: str) -> None:
logging.config.dictConfig(log_config)
# Blow away the pyo3-log cache so that it reloads the configuration.
reset_logging_config()
def _reload_logging_config(log_config_path: Optional[str]) -> None:
"""
+2 -2
@@ -151,7 +151,7 @@ DEFAULT_IP_RANGE_BLACKLIST = [
"fec0::/10",
]
DEFAULT_ROOM_VERSION = "9"
DEFAULT_ROOM_VERSION = "10"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
@@ -904,7 +904,7 @@ def parse_listener_def(num: int, listener: Any) -> ListenerConfig:
raise ConfigError(DIRECT_TCP_ERROR, ("listeners", str(num), "type"))
port = listener.get("port")
if not isinstance(port, int):
if type(port) is not int:
raise ConfigError("Listener configuration is lacking a valid 'port' option")
tls = listener.get("tls", False)
+45 -20
@@ -154,17 +154,21 @@ class Keyring:
if key_fetchers is None:
key_fetchers = (
# Fetch keys from the database.
StoreKeyFetcher(hs),
# Fetch keys from a configured Perspectives server.
PerspectivesKeyFetcher(hs),
# Fetch keys from the origin server directly.
ServerKeyFetcher(hs),
)
self._key_fetchers = key_fetchers
self._server_queue: BatchingQueue[
self._fetch_keys_queue: BatchingQueue[
_FetchKeyRequest, Dict[str, Dict[str, FetchKeyResult]]
] = BatchingQueue(
"keyring_server",
clock=hs.get_clock(),
# The method called to fetch each key
process_batch_callback=self._inner_fetch_key_requests,
)
@@ -287,7 +291,7 @@ class Keyring:
minimum_valid_until_ts=verify_request.minimum_valid_until_ts,
key_ids=list(key_ids_to_find),
)
found_keys_by_server = await self._server_queue.add_to_queue(
found_keys_by_server = await self._fetch_keys_queue.add_to_queue(
key_request, key=verify_request.server_name
)
@@ -352,7 +356,17 @@ class Keyring:
async def _inner_fetch_key_requests(
self, requests: List[_FetchKeyRequest]
) -> Dict[str, Dict[str, FetchKeyResult]]:
"""Processing function for the queue of `_FetchKeyRequest`."""
"""Processing function for the queue of `_FetchKeyRequest`.
Takes a list of key fetch requests, de-duplicates them and then carries out
each request by invoking self._inner_fetch_key_request.
Args:
requests: A list of requests for homeserver verify keys.
Returns:
{server name: {key id: fetch key result}}
"""
logger.debug("Starting fetch for %s", requests)
@@ -397,8 +411,23 @@ class Keyring:
async def _inner_fetch_key_request(
self, verify_request: _FetchKeyRequest
) -> Dict[str, FetchKeyResult]:
"""Attempt to fetch the given key by calling each key fetcher one by
one.
"""Attempt to fetch the given key by calling each key fetcher one by one.
If a key is found, check whether its `valid_until_ts` attribute satisfies the
`minimum_valid_until_ts` attribute of the `verify_request`. If it does, we
refrain from asking subsequent fetchers for that key.
Even if the above check fails, we still return the found key - the caller may
still find the invalid key result useful. In this case, we continue to ask
subsequent fetchers for the invalid key, in case they return a valid result
for it. This can happen when fetching a stale key result from the database,
before querying the origin server for an up-to-date result.
Args:
verify_request: The request for a verify key. Can include multiple key IDs.
Returns:
A map of {key_id: the key fetch result}.
"""
logger.debug("Starting fetch for %s", verify_request)
@@ -420,25 +449,21 @@ class Keyring:
if not key:
continue
# If we already have a result for the given key ID we keep the
# If we already have a result for the given key ID, we keep the
# one with the highest `valid_until_ts`.
existing_key = found_keys.get(key_id)
if existing_key:
if key.valid_until_ts <= existing_key.valid_until_ts:
continue
# We always store the returned key even if it doesn't the
# `minimum_valid_until_ts` requirement, as some verification
# requests may still be able to be satisfied by it.
#
# We still keep looking for the key from other fetchers in that
# case though.
found_keys[key_id] = key
if key.valid_until_ts < verify_request.minimum_valid_until_ts:
if existing_key and existing_key.valid_until_ts > key.valid_until_ts:
continue
missing_key_ids.discard(key_id)
# Check if this key's expiry timestamp is valid for the verify request.
if key.valid_until_ts >= verify_request.minimum_valid_until_ts:
# Stop looking for this key from subsequent fetchers.
missing_key_ids.discard(key_id)
# We always store the returned key even if it doesn't meet the
# `minimum_valid_until_ts` requirement, as some verification
# requests may still be able to be satisfied by it.
found_keys[key_id] = key
return found_keys
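The rewritten loop keeps, per key ID, the result with the highest valid_until_ts, and only stops asking further fetchers once a result satisfies the request's minimum_valid_until_ts. A toy illustration of the keep-the-freshest rule (timestamps invented):

found = {}
for key_id, valid_until_ts in [("ed25519:a", 100), ("ed25519:a", 300), ("ed25519:a", 200)]:
    existing = found.get(key_id)
    if existing is not None and existing >= valid_until_ts:
        continue  # we already hold a result that is valid for longer
    found[key_id] = valid_until_ts
assert found == {"ed25519:a": 300}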
+4 -3
@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import collections.abc
import logging
import typing
from typing import (
@@ -874,11 +875,11 @@ def _check_power_levels(
"kick",
"invite",
}:
if not isinstance(v, int):
if type(v) is not int:
raise SynapseError(400, f"{v!r} must be an integer.")
if k in {"events", "notifications", "users"}:
if not isinstance(v, dict) or not all(
isinstance(v, int) for v in v.values()
if not isinstance(v, collections.abc.Mapping) or not all(
type(v) is int for v in v.values()
):
raise SynapseError(
400,
+29 -11
@@ -403,6 +403,14 @@ class EventClientSerializer:
clients.
"""
def __init__(self, inhibit_replacement_via_edits: bool = False):
"""
Args:
inhibit_replacement_via_edits: If this is set to True, then events are
never replaced by their edits.
"""
self._inhibit_replacement_via_edits = inhibit_replacement_via_edits
def serialize_event(
self,
event: Union[JsonDict, EventBase],
@@ -422,6 +430,8 @@ class EventClientSerializer:
into the event.
apply_edits: Whether the content of the event should be modified to reflect
any replacement in `bundle_aggregations[<event_id>].replace`.
See also the `inhibit_replacement_via_edits` constructor arg: if that is
set to True, then this argument is ignored.
Returns:
The serialized event
"""
@@ -495,7 +505,8 @@ class EventClientSerializer:
again for additional events in a recursive manner.
serialized_event: The serialized event which may be modified.
apply_edits: Whether the content of the event should be modified to reflect
any replacement in `aggregations.replace`.
any replacement in `aggregations.replace` (subject to the
`inhibit_replacement_via_edits` constructor arg).
"""
# We have already checked that aggregations exist for this event.
@@ -518,15 +529,21 @@ class EventClientSerializer:
if event_aggregations.replace:
# If there is an edit, optionally apply it to the event.
edit = event_aggregations.replace
if apply_edits:
if apply_edits and not self._inhibit_replacement_via_edits:
self._apply_edit(event, serialized_event, edit)
# Include information about it in the relations dict.
serialized_aggregations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
#
# Matrix spec v1.5 (https://spec.matrix.org/v1.5/client-server-api/#server-side-aggregation-of-mreplace-relationships)
# said that we should only include the `event_id`, `origin_server_ts` and
# `sender` of the edit; however MSC3925 proposes extending it to the whole
# of the edit, which is what we do here.
serialized_aggregations[RelationTypes.REPLACE] = self.serialize_event(
edit,
time_now,
config=config,
apply_edits=False,
)
# Include any threaded replies to this event.
if event_aggregations.thread:
@@ -588,10 +605,11 @@ class EventClientSerializer:
_PowerLevel = Union[str, int]
PowerLevelsContent = Mapping[str, Union[_PowerLevel, Mapping[str, _PowerLevel]]]
def copy_and_fixup_power_levels_contents(
old_power_levels: Mapping[str, Union[_PowerLevel, Mapping[str, _PowerLevel]]]
old_power_levels: PowerLevelsContent,
) -> Dict[str, Union[int, Dict[str, int]]]:
"""Copy the content of a power_levels event, unfreezing frozendicts along the way.
@@ -630,10 +648,10 @@ def _copy_power_level_value_as_integer(
) -> None:
"""Set `power_levels[key]` to the integer represented by `old_value`.
:raises TypeError: if `old_value` is not an integer, nor a base-10 string
:raises TypeError: if `old_value` is neither an integer nor a base-10 string
representation of an integer.
"""
if isinstance(old_value, int):
if type(old_value) is int:
power_levels[key] = old_value
return
@@ -661,7 +679,7 @@ def validate_canonicaljson(value: Any) -> None:
* Floats
* NaN, Infinity, -Infinity
"""
if isinstance(value, int):
if type(value) is int:
if value < CANONICALJSON_MIN_INT or CANONICALJSON_MAX_INT < value:
raise SynapseError(400, "JSON integer out of range", Codes.BAD_JSON)
+2 -2
@@ -139,7 +139,7 @@ class EventValidator:
max_lifetime = event.content.get("max_lifetime")
if min_lifetime is not None:
if not isinstance(min_lifetime, int):
if type(min_lifetime) is not int:
raise SynapseError(
code=400,
msg="'min_lifetime' must be an integer",
@@ -147,7 +147,7 @@ class EventValidator:
)
if max_lifetime is not None:
if not isinstance(max_lifetime, int):
if type(max_lifetime) is not int:
raise SynapseError(
code=400,
msg="'max_lifetime' must be an integer",
+1 -1
@@ -280,7 +280,7 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
_strip_unsigned_values(pdu_json)
depth = pdu_json["depth"]
if not isinstance(depth, int):
if type(depth) is not int:
raise SynapseError(400, "Depth %r not an intger" % (depth,), Codes.BAD_JSON)
if depth < 0:
+53 -17
@@ -19,6 +19,7 @@ import itertools
import logging
from typing import (
TYPE_CHECKING,
AbstractSet,
Awaitable,
Callable,
Collection,
@@ -37,7 +38,7 @@ from typing import (
import attr
from prometheus_client import Counter
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.constants import Direction, EventContentFields, EventTypes, Membership
from synapse.api.errors import (
CodeMessageException,
Codes,
@@ -110,8 +111,9 @@ class SendJoinResult:
# True if 'state' elides non-critical membership events
partial_state: bool
# if 'partial_state' is set, a list of the servers in the room (otherwise empty)
servers_in_room: List[str]
# If 'partial_state' is set, a set of the servers in the room (otherwise empty).
# Always contains the server we joined off.
servers_in_room: AbstractSet[str]
class FederationClient(FederationBase):
@@ -1014,7 +1016,11 @@ class FederationClient(FederationBase):
)
async def send_join(
self, destinations: Iterable[str], pdu: EventBase, room_version: RoomVersion
self,
destinations: Iterable[str],
pdu: EventBase,
room_version: RoomVersion,
partial_state: bool = True,
) -> SendJoinResult:
"""Sends a join event to one of a list of homeservers.
@@ -1027,6 +1033,10 @@ class FederationClient(FederationBase):
pdu: event to be sent
room_version: the version of the room (according to the server that
did the make_join)
partial_state: whether to ask the remote server to omit membership state
events from the response. If the remote server complies,
`partial_state` in the send join result will be set. Defaults to
`True`.
Returns:
The result of the send join request.
@@ -1037,7 +1047,9 @@ class FederationClient(FederationBase):
"""
async def send_request(destination: str) -> SendJoinResult:
response = await self._do_send_join(room_version, destination, pdu)
response = await self._do_send_join(
room_version, destination, pdu, omit_members=partial_state
)
# If an event was returned (and expected to be returned):
#
@@ -1142,18 +1154,32 @@ class FederationClient(FederationBase):
% (auth_chain_create_events,)
)
if response.partial_state and not response.servers_in_room:
raise InvalidResponseError(
"partial_state was set, but no servers were listed in the room"
)
servers_in_room = None
if response.servers_in_room is not None:
servers_in_room = set(response.servers_in_room)
if response.members_omitted:
if not servers_in_room:
raise InvalidResponseError(
"members_omitted was set, but no servers were listed in the room"
)
if not partial_state:
raise InvalidResponseError(
"members_omitted was set, but we asked for full state"
)
# `servers_in_room` is supposed to be a complete list.
# Fix things up in case the remote homeserver is badly behaved.
servers_in_room.add(destination)
return SendJoinResult(
event=event,
state=signed_state,
auth_chain=signed_auth,
origin=destination,
partial_state=response.partial_state,
servers_in_room=response.servers_in_room or [],
partial_state=response.members_omitted,
servers_in_room=servers_in_room or frozenset(),
)
# MSC3083 defines additional error codes for room joins.
@@ -1177,7 +1203,11 @@ class FederationClient(FederationBase):
)
async def _do_send_join(
self, room_version: RoomVersion, destination: str, pdu: EventBase
self,
room_version: RoomVersion,
destination: str,
pdu: EventBase,
omit_members: bool,
) -> SendJoinResponse:
time_now = self._clock.time_msec()
@@ -1188,6 +1218,7 @@ class FederationClient(FederationBase):
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
omit_members=omit_members,
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
@@ -1660,7 +1691,12 @@ class FederationClient(FederationBase):
return result
async def timestamp_to_event(
self, *, destinations: List[str], room_id: str, timestamp: int, direction: str
self,
*,
destinations: List[str],
room_id: str,
timestamp: int,
direction: Direction,
) -> Optional["TimestampToEventResponse"]:
"""
Calls each remote federating server from `destinations` asking for their closest
@@ -1673,7 +1709,7 @@ class FederationClient(FederationBase):
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
direction: indicates whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
@@ -1718,7 +1754,7 @@ class FederationClient(FederationBase):
return None
async def _timestamp_to_event_from_destination(
self, destination: str, room_id: str, timestamp: int, direction: str
self, destination: str, room_id: str, timestamp: int, direction: Direction
) -> "TimestampToEventResponse":
"""
Calls a remote federating server at `destination` asking for their
@@ -1731,7 +1767,7 @@ class FederationClient(FederationBase):
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
direction: indicates whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
@@ -1844,7 +1880,7 @@ class TimestampToEventResponse:
)
origin_server_ts = d.get("origin_server_ts")
if not isinstance(origin_server_ts, int):
if type(origin_server_ts) is not int:
raise ValueError(
"Invalid response: 'origin_server_ts' must be a int but received %r"
% origin_server_ts
+18 -4
@@ -34,7 +34,13 @@ from prometheus_client import Counter, Gauge, Histogram
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.constants import EduTypes, EventContentFields, EventTypes, Membership
from synapse.api.constants import (
Direction,
EduTypes,
EventContentFields,
EventTypes,
Membership,
)
from synapse.api.errors import (
AuthError,
Codes,
@@ -62,7 +68,9 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.logging.opentracing import (
SynapseTags,
log_kv,
set_tag,
start_active_span_from_edu,
tag_args,
trace,
@@ -216,7 +224,7 @@ class FederationServer(FederationBase):
return 200, res
async def on_timestamp_to_event_request(
self, origin: str, room_id: str, timestamp: int, direction: str
self, origin: str, room_id: str, timestamp: int, direction: Direction
) -> Tuple[int, Dict[str, Any]]:
"""When we receive a federated `/timestamp_to_event` request,
handle all of the logic for validating and fetching the event.
@@ -226,7 +234,7 @@ class FederationServer(FederationBase):
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
direction: indicates whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
@@ -678,6 +686,10 @@ class FederationServer(FederationBase):
room_id: str,
caller_supports_partial_state: bool = False,
) -> Dict[str, Any]:
set_tag(
SynapseTags.SEND_JOIN_RESPONSE_IS_PARTIAL_STATE,
caller_supports_partial_state,
)
await self._room_member_handler._join_rate_per_room_limiter.ratelimit( # type: ignore[has-type]
requester=None,
key=room_id,
@@ -725,10 +737,12 @@ class FederationServer(FederationBase):
"state": [p.get_pdu_json(time_now) for p in state_events],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain_events],
"org.matrix.msc3706.partial_state": caller_supports_partial_state,
"members_omitted": caller_supports_partial_state,
}
if servers_in_room is not None:
resp["org.matrix.msc3706.servers_in_room"] = list(servers_in_room)
resp["servers_in_room"] = list(servers_in_room)
return resp
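While MSC3706 stabilises, the response carries both the unstable and the stable field names, and (as the parser below notes) the stable ones win on the receiving side. A sketch of the resulting shape, with illustrative values:

resp = {
    "event": {},    # the signed join event, elided here
    "state": [],    # room state, possibly with membership events omitted
    "auth_chain": [],
    "org.matrix.msc3706.partial_state": True,             # unstable (legacy) name
    "members_omitted": True,                              # stable name, same value
    "org.matrix.msc3706.servers_in_room": ["example.org"],
    "servers_in_room": ["example.org"],
}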
@@ -1500,7 +1514,7 @@ def _get_event_ids_for_partial_state_join(
prev_state_ids: StateMap[str],
summary: Dict[str, MemberSummary],
) -> Collection[str]:
"""Calculate state to be retuned in a partial_state send_join
"""Calculate state to be returned in a partial_state send_join
Args:
join_event: the join event being send_joined
+1 -1
@@ -447,7 +447,7 @@ class FederationSender(AbstractFederationSender):
)
)
if len(partial_state_destinations) > 0:
if partial_state_destinations is not None:
destinations = partial_state_destinations
if destinations is None:
+38 -11
@@ -32,7 +32,7 @@ from typing import (
import attr
import ijson
from synapse.api.constants import Membership
from synapse.api.constants import Direction, Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.api.urls import (
@@ -102,6 +102,10 @@ class TransportLayerClient:
destination,
path=path,
args={"event_id": event_id},
# This can take a looooooong time for large rooms. Give this a generous
# timeout of 10 minutes to avoid the partial state resync timing out early
# and trying a bunch of servers who haven't seen our join yet.
timeout=600_000,
parser=_StateParser(room_version),
)
@@ -165,7 +169,7 @@ class TransportLayerClient:
)
async def timestamp_to_event(
self, destination: str, room_id: str, timestamp: int, direction: str
self, destination: str, room_id: str, timestamp: int, direction: Direction
) -> Union[JsonDict, List]:
"""
Calls a remote federating server at `destination` asking for their
@@ -176,7 +180,7 @@ class TransportLayerClient:
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
direction: indicates whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
@@ -190,7 +194,7 @@ class TransportLayerClient:
room_id,
)
args = {"ts": [str(timestamp)], "dir": [direction]}
args = {"ts": [str(timestamp)], "dir": [direction.value]}
remote_response = await self.client.get_json(
destination, path=path, args=args, try_trailing_slash_on_400=True
@@ -351,12 +355,16 @@ class TransportLayerClient:
room_id: str,
event_id: str,
content: JsonDict,
omit_members: bool,
) -> "SendJoinResponse":
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)
query_params: Dict[str, str] = {}
if self._faster_joins_enabled:
# lazy-load state on join
query_params["org.matrix.msc3706.partial_state"] = "true"
query_params["org.matrix.msc3706.partial_state"] = (
"true" if omit_members else "false"
)
query_params["omit_members"] = "true" if omit_members else "false"
return await self.client.put_json(
destination=destination,
@@ -794,7 +802,7 @@ class SendJoinResponse:
event: Optional[EventBase] = None
# The room state is incomplete
partial_state: bool = False
members_omitted: bool = False
# List of servers in the room
servers_in_room: Optional[List[str]] = None
@@ -834,16 +842,18 @@ def _event_list_parser(
@ijson.coroutine
def _partial_state_parser(response: SendJoinResponse) -> Generator[None, Any, None]:
def _members_omitted_parser(response: SendJoinResponse) -> Generator[None, Any, None]:
"""Helper function for use with `ijson.items_coro`
Parses the partial_state field in send_join responses
Parses the members_omitted field in send_join responses
"""
while True:
val = yield
if not isinstance(val, bool):
raise TypeError("partial_state must be a boolean")
response.partial_state = val
raise TypeError(
"members_omitted (formerly org.matrix.msc370c.partial_state) must be a boolean"
)
response.members_omitted = val
@ijson.coroutine
@@ -904,11 +914,19 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
if not v1_api:
self._coros.append(
ijson.items_coro(
_partial_state_parser(self._response),
_members_omitted_parser(self._response),
"org.matrix.msc3706.partial_state",
use_float="True",
)
)
# The stable field name comes last, so it "wins" if the fields disagree
self._coros.append(
ijson.items_coro(
_members_omitted_parser(self._response),
"members_omitted",
use_float="True",
)
)
self._coros.append(
ijson.items_coro(
@@ -918,6 +936,15 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
)
)
# Again, stable field name comes last
self._coros.append(
ijson.items_coro(
_servers_in_room_parser(self._response),
"servers_in_room",
use_float="True",
)
)
def write(self, data: bytes) -> int:
for c in self._coros:
c.send(data)
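Each of these parser coroutines consumes the streamed JSON response chunk by chunk: ijson.items_coro filters for one field prefix and forwards matching values into the wrapped coroutine. A minimal, self-contained sketch of the mechanism outside Synapse:

import ijson

@ijson.coroutine
def collect(out):
    while True:
        val = yield
        out.append(val)

results = []
coro = ijson.items_coro(collect(results), "members_omitted")
for chunk in [b'{"members_omitted"', b': true, "state": []}']:
    coro.send(chunk)   # feed the document incrementally, as write() does above
coro.close()
assert results == [True]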
@@ -26,7 +26,7 @@ from typing import (
from typing_extensions import Literal
from synapse.api.constants import EduTypes
from synapse.api.constants import Direction, EduTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.api.urls import FEDERATION_UNSTABLE_PREFIX, FEDERATION_V2_PREFIX
@@ -234,9 +234,10 @@ class FederationTimestampLookupServlet(BaseFederationServerServlet):
room_id: str,
) -> Tuple[int, JsonDict]:
timestamp = parse_integer_from_args(query, "ts", required=True)
direction = parse_string_from_args(
query, "dir", default="f", allowed_values=["f", "b"], required=True
direction_str = parse_string_from_args(
query, "dir", allowed_values=["f", "b"], required=True
)
direction = Direction(direction_str)
return await self.handler.on_timestamp_to_event_request(
origin, room_id, timestamp, direction
@@ -422,7 +423,7 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self._msc3706_enabled = hs.config.experimental.msc3706_enabled
self._read_msc3706_query_param = hs.config.experimental.msc3706_enabled
async def on_PUT(
self,
@@ -436,10 +437,16 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
# match those given in content
partial_state = False
if self._msc3706_enabled:
# The stable query parameter wins, if it disagrees with the unstable
# parameter for some reason.
stable_param = parse_boolean_from_args(query, "omit_members", default=None)
if stable_param is not None:
partial_state = stable_param
elif self._read_msc3706_query_param:
partial_state = parse_boolean_from_args(
query, "org.matrix.msc3706.partial_state", default=False
)
result = await self.handler.on_send_join_request(
origin, content, room_id, caller_supports_partial_state=partial_state
)
+10 -5
@@ -14,8 +14,9 @@
# limitations under the License.
import logging
import random
from typing import TYPE_CHECKING, Awaitable, Callable, Collection, List, Optional, Tuple
from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional, Tuple
from synapse.api.constants import AccountDataTypes
from synapse.replication.http.account_data import (
ReplicationAddRoomAccountDataRestServlet,
ReplicationAddTagRestServlet,
@@ -25,7 +26,7 @@ from synapse.replication.http.account_data import (
ReplicationRemoveUserAccountDataRestServlet,
)
from synapse.streams import EventSource
from synapse.types import JsonDict, StreamKeyType, UserID
from synapse.types import JsonDict, StrCollection, StreamKeyType, UserID
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -313,7 +314,7 @@ class AccountDataEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
def get_current_key(self, direction: str = "f") -> int:
def get_current_key(self) -> int:
return self.store.get_max_account_data_stream_id()
async def get_new_events(
@@ -321,7 +322,7 @@ class AccountDataEventSource(EventSource[int, JsonDict]):
user: UserID,
from_key: int,
limit: int,
room_ids: Collection[str],
room_ids: StrCollection,
is_guest: bool,
explicit_room_id: Optional[str] = None,
) -> Tuple[List[JsonDict], int]:
@@ -335,7 +336,11 @@ class AccountDataEventSource(EventSource[int, JsonDict]):
for room_id, room_tags in tags.items():
results.append(
{"type": "m.tag", "content": {"tags": room_tags}, "room_id": room_id}
{
"type": AccountDataTypes.TAG,
"content": {"tags": room_tags},
"room_id": room_id,
}
)
(
+45 -2
@@ -16,7 +16,7 @@ import abc
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Set
from synapse.api.constants import Membership
from synapse.api.constants import Direction, Membership
from synapse.events import EventBase
from synapse.types import JsonDict, RoomStreamToken, StateMap, UserID
from synapse.visibility import filter_events_for_client
@@ -30,6 +30,7 @@ logger = logging.getLogger(__name__)
class AdminHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self._device_handler = hs.get_device_handler()
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
self._msc3866_enabled = hs.config.experimental.msc3866.enabled
@@ -197,7 +198,7 @@ class AdminHandler:
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = await self.store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction="f"
room_id, from_key, to_key, limit=100, direction=Direction.FORWARDS
)
if not events:
break
@@ -247,6 +248,21 @@ class AdminHandler:
)
writer.write_state(room_id, event_id, state)
# Get the user profile
profile = await self.get_user(UserID.from_string(user_id))
if profile is not None:
writer.write_profile(profile)
# Get all devices the user has
devices = await self._device_handler.get_devices_by_user(user_id)
writer.write_devices(devices)
# Get all connections the user has
connections = await self.get_whois(UserID.from_string(user_id))
writer.write_connections(
connections["devices"][""]["sessions"][0]["connections"]
)
return writer.finished()
@@ -297,6 +313,33 @@ class ExfiltrationWriter(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
def write_profile(self, profile: JsonDict) -> None:
"""Write the profile of a user.
Args:
profile: The user profile.
"""
raise NotImplementedError()
@abc.abstractmethod
def write_devices(self, devices: List[JsonDict]) -> None:
"""Write the devices of a user.
Args:
devices: The list of devices.
"""
raise NotImplementedError()
@abc.abstractmethod
def write_connections(self, connections: List[JsonDict]) -> None:
"""Write the connections of a user.
Args:
connections: The list of connections / sessions.
"""
raise NotImplementedError()
@abc.abstractmethod
def finished(self) -> Any:
"""Called when all data has successfully been exported and written.
+16 -4
@@ -18,7 +18,6 @@ from http import HTTPStatus
from typing import (
TYPE_CHECKING,
Any,
Collection,
Dict,
Iterable,
List,
@@ -45,6 +44,7 @@ from synapse.metrics.background_process_metrics import (
)
from synapse.types import (
JsonDict,
StrCollection,
StreamKeyType,
StreamToken,
UserID,
@@ -146,7 +146,7 @@ class DeviceWorkerHandler:
@cancellable
async def get_device_changes_in_shared_rooms(
self, user_id: str, room_ids: Collection[str], from_token: StreamToken
self, user_id: str, room_ids: StrCollection, from_token: StreamToken
) -> Set[str]:
"""Get the set of users whose devices have changed who share a room with
the given user.
@@ -346,6 +346,7 @@ class DeviceHandler(DeviceWorkerHandler):
super().__init__(hs)
self.federation_sender = hs.get_federation_sender()
self._account_data_handler = hs.get_account_data_handler()
self._storage_controllers = hs.get_storage_controllers()
self.device_list_updater = DeviceListUpdater(hs, self)
@@ -502,7 +503,7 @@ class DeviceHandler(DeviceWorkerHandler):
else:
raise
# Delete access tokens and e2e keys for each device. Not optimised as it is not
# Delete data specific to each device. Not optimised as it is not
# considered as part of a critical path.
for device_id in device_ids:
await self._auth_handler.delete_access_tokens_for_user(
@@ -512,6 +513,14 @@ class DeviceHandler(DeviceWorkerHandler):
user_id=user_id, device_id=device_id
)
if self.hs.config.experimental.msc3890_enabled:
# Remove any local notification settings for this device in accordance
# with MSC3890.
await self._account_data_handler.remove_account_data_for_user(
user_id,
f"org.matrix.msc3890.local_notification_settings.{device_id}",
)
await self.notify_device_update(user_id, device_ids)
async def update_device(self, user_id: str, device_id: str, content: dict) -> None:
@@ -542,7 +551,7 @@ class DeviceHandler(DeviceWorkerHandler):
@trace
@measure_func("notify_device_update")
async def notify_device_update(
self, user_id: str, device_ids: Collection[str]
self, user_id: str, device_ids: StrCollection
) -> None:
"""Notify that a user's device(s) has changed. Pokes the notifier, and
remote servers if the user is local.
@@ -850,6 +859,7 @@ class DeviceHandler(DeviceWorkerHandler):
known_hosts_at_join = await self.store.get_partial_state_servers_at_join(
room_id
)
assert known_hosts_at_join is not None
potentially_changed_hosts.difference_update(known_hosts_at_join)
potentially_changed_hosts.discard(self.server_name)
@@ -965,6 +975,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.device_handler = device_handler
self._notifier = hs.get_notifier()
self._remote_edu_linearizer = Linearizer(name="remote_device_list")
@@ -1045,6 +1056,7 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
user_id,
device_id,
)
self._notifier.notify_replication()
room_ids = await self.store.get_rooms_for_user(user_id)
if not room_ids:
+4 -4
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, List, Mapping, Optional, Union
from typing import TYPE_CHECKING, List, Mapping, Optional, Union
from synapse import event_auth
from synapse.api.constants import (
@@ -29,7 +29,7 @@ from synapse.event_auth import (
)
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap, get_domain_from_id
from synapse.types import StateMap, StrCollection, get_domain_from_id
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -290,7 +290,7 @@ class EventAuthHandler:
async def get_rooms_that_allow_join(
self, state_ids: StateMap[str]
) -> Collection[str]:
) -> StrCollection:
"""
Generate a list of rooms in which membership allows access to a room.
@@ -331,7 +331,7 @@ class EventAuthHandler:
return result
async def is_user_in_rooms(self, room_ids: Collection[str], user_id: str) -> bool:
async def is_user_in_rooms(self, room_ids: StrCollection, user_id: str) -> bool:
"""
Check whether a user is a member of any of the provided rooms.
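The new StrCollection alias (used here in place of Collection[str]) guards against a classic typing trap: a bare str is itself a Collection[str], so a lone room ID would type-check and then be iterated character by character. Roughly, the alias is a union of list, tuple and set types that excludes plain strings (the exact definition lives in synapse.types):

from typing import Collection

def is_user_in_rooms_sketch(room_ids: Collection[str]) -> None:
    for room_id in room_ids:
        print(room_id)

is_user_in_rooms_sketch("!room:example.org")    # type-checks, but iterates single characters
is_user_in_rooms_sketch(["!room:example.org"])  # what was actually intended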
+239 -99
@@ -22,11 +22,12 @@ from enum import Enum
from http import HTTPStatus
from typing import (
TYPE_CHECKING,
Collection,
AbstractSet,
Dict,
Iterable,
List,
Optional,
Set,
Tuple,
Union,
)
@@ -47,7 +48,6 @@ from synapse.api.errors import (
FederationError,
FederationPullAttemptBackoffError,
HttpResponseException,
LimitExceededError,
NotFoundError,
RequestSendFailed,
SynapseError,
@@ -70,7 +70,7 @@ from synapse.replication.http.federation import (
)
from synapse.storage.databases.main.events import PartialStateConflictError
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.types import JsonDict, get_domain_from_id
from synapse.types import JsonDict, StrCollection, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.util.async_helpers import Linearizer
from synapse.util.retryutils import NotRetryingDestination
@@ -171,12 +171,29 @@ class FederationHandler:
self.third_party_event_rules = hs.get_third_party_event_rules()
# Tracks running partial state syncs by room ID.
# Partial state syncs currently only run on the main process, so it's okay to
# track them in-memory for now.
self._active_partial_state_syncs: Set[str] = set()
# Tracks partial state syncs we may want to restart.
# A dictionary mapping room IDs to (initial destination, other destinations)
# tuples.
self._partial_state_syncs_maybe_needing_restart: Dict[
str, Tuple[Optional[str], AbstractSet[str]]
] = {}
# A lock guarding the partial state flag for rooms.
# When the lock is held for a given room, no other concurrent code may
# partial state or un-partial state the room.
self._is_partial_state_room_linearizer = Linearizer(
name="_is_partial_state_room_linearizer"
)
# if this is the main process, fire off a background process to resume
# any partial-state-resync operations which were in flight when we
# were shut down.
if not hs.config.worker.worker_app:
run_as_background_process(
"resume_sync_partial_state_room", self._resume_sync_partial_state_room
"resume_sync_partial_state_room", self._resume_partial_state_room_sync
)
@trace
@@ -420,7 +437,7 @@ class FederationHandler:
)
)
async def try_backfill(domains: Collection[str]) -> bool:
async def try_backfill(domains: StrCollection) -> bool:
# TODO: Should we try multiple of these at a time?
# Number of contacted remote homeservers that have denied our backfill
@@ -587,7 +604,23 @@ class FederationHandler:
self._federation_event_handler.room_queues[room_id] = []
await self._clean_room_for_join(room_id)
is_host_joined = await self.store.is_host_joined(room_id, self.server_name)
if not is_host_joined:
# We may have old forward extremities lying around if the homeserver left
# the room completely in the past. Clear them out.
#
# Note that this check-then-clear is subject to races where
# * the homeserver is in the room and stops being in the room just after
# the check. We won't reset the forward extremities, but that's okay,
# since they will be almost up to date.
# * the homeserver is not in the room and starts being in the room just
# after the check. This can't happen, since `RoomMemberHandler` has a
# linearizer lock which prevents concurrent remote joins into the same
# room.
# In short, the races either have an acceptable outcome or should be
# impossible.
await self._clean_room_for_join(room_id)
try:
# Try the host we successfully got a response to /make_join/
@@ -599,94 +632,116 @@ class FederationHandler:
except ValueError:
pass
ret = await self.federation_client.send_join(
host_list, event, room_version_obj
)
async with self._is_partial_state_room_linearizer.queue(room_id):
already_partial_state_room = await self.store.is_partial_state_room(
room_id
)
event = ret.event
origin = ret.origin
state = ret.state
auth_chain = ret.auth_chain
auth_chain.sort(key=lambda e: e.depth)
ret = await self.federation_client.send_join(
host_list,
event,
room_version_obj,
# Perform a full join when we are already in the room and it is a
# full state room, since we are not allowed to persist a partial
# state join event in a full state room. In the future, we could
# optimize this by always performing a partial state join and
# computing the state ourselves or retrieving it from the remote
# homeserver if necessary.
#
# There's a race where we leave the room, then perform a full join
# anyway. This should end up being fast anyway, since we would
# already have the full room state and auth chain persisted.
partial_state=not is_host_joined or already_partial_state_room,
)
logger.debug("do_invite_join auth_chain: %s", auth_chain)
logger.debug("do_invite_join state: %s", state)
event = ret.event
origin = ret.origin
state = ret.state
auth_chain = ret.auth_chain
auth_chain.sort(key=lambda e: e.depth)
logger.debug("do_invite_join event: %s", event)
logger.debug("do_invite_join auth_chain: %s", auth_chain)
logger.debug("do_invite_join state: %s", state)
# if this is the first time we've joined this room, it's time to add
# a row to `rooms` with the correct room version. If there's already a
# row there, we should override it, since it may have been populated
# based on an invite request which lied about the room version.
#
# federation_client.send_join has already checked that the room
# version in the received create event is the same as room_version_obj,
# so we can rely on it now.
#
await self.store.upsert_room_on_join(
room_id=room_id,
room_version=room_version_obj,
state_events=state,
)
logger.debug("do_invite_join event: %s", event)
if ret.partial_state:
# Mark the room as having partial state.
# The background process is responsible for unmarking this flag,
# even if the join fails.
await self.store.store_partial_state_room(
# if this is the first time we've joined this room, it's time to add
# a row to `rooms` with the correct room version. If there's already a
# row there, we should override it, since it may have been populated
# based on an invite request which lied about the room version.
#
# federation_client.send_join has already checked that the room
# version in the received create event is the same as room_version_obj,
# so we can rely on it now.
#
await self.store.upsert_room_on_join(
room_id=room_id,
servers=ret.servers_in_room,
device_lists_stream_id=self.store.get_device_stream_token(),
joined_via=origin,
room_version=room_version_obj,
state_events=state,
)
try:
max_stream_id = (
await self._federation_event_handler.process_remote_join(
origin,
room_id,
auth_chain,
state,
event,
room_version_obj,
partial_state=ret.partial_state,
)
)
except PartialStateConflictError as e:
# The homeserver was already in the room and it is no longer partial
# stated. We ought to be doing a local join instead. Turn the error into
# a 429, as a hint to the client to try again.
# TODO(faster_joins): `_should_perform_remote_join` suggests that we may
# do a remote join for restricted rooms even if we have full state.
logger.error(
"Room %s was un-partial stated while processing remote join.",
room_id,
)
raise LimitExceededError(msg=e.msg, errcode=e.errcode, retry_after_ms=0)
else:
# Record the join event id for future use (when we finish the full
# join). We have to do this after persisting the event to keep foreign
# key constraints intact.
if ret.partial_state:
await self.store.write_partial_state_rooms_join_event_id(
room_id, event.event_id
)
finally:
# Always kick off the background process that asynchronously fetches
# state for the room.
# If the join failed, the background process is responsible for
# cleaning up — including unmarking the room as a partial state room.
if ret.partial_state:
# Kick off the process of asynchronously fetching the state for this
# room.
run_as_background_process(
desc="sync_partial_state_room",
func=self._sync_partial_state_room,
initial_destination=origin,
other_destinations=ret.servers_in_room,
if ret.partial_state and not already_partial_state_room:
# Mark the room as having partial state.
# The background process is responsible for unmarking this flag,
# even if the join fails.
# TODO(faster_joins):
# We may want to reset the partial state info if it's from an
# old, failed partial state join.
# https://github.com/matrix-org/synapse/issues/13000
await self.store.store_partial_state_room(
room_id=room_id,
servers=ret.servers_in_room,
device_lists_stream_id=self.store.get_device_stream_token(),
joined_via=origin,
)
try:
max_stream_id = (
await self._federation_event_handler.process_remote_join(
origin,
room_id,
auth_chain,
state,
event,
room_version_obj,
partial_state=ret.partial_state,
)
)
except PartialStateConflictError:
# This should be impossible, since we hold the linearizer lock guarding
# the room's partial-state flag.
logger.error(
"Room %s was un-partial stated while processing remote join.",
room_id,
)
raise
else:
# Record the join event id for future use (when we finish the full
# join). We have to do this after persisting the event to keep
# foreign key constraints intact.
if ret.partial_state and not already_partial_state_room:
# TODO(faster_joins):
# We may want to reset the partial state info if it's from
# an old, failed partial state join.
# https://github.com/matrix-org/synapse/issues/13000
await self.store.write_partial_state_rooms_join_event_id(
room_id, event.event_id
)
finally:
# Always kick off the background process that asynchronously fetches
# state for the room.
# If the join failed, the background process is responsible for
# cleaning up — including unmarking the room as a partial state
# room.
if ret.partial_state:
# Kick off the process of asynchronously fetching the state for
# this room.
self._start_partial_state_room_sync(
initial_destination=origin,
other_destinations=ret.servers_in_room,
room_id=room_id,
)
# We wait here until this instance has seen the events come down
# replication (if we're using replication) as the below uses caches.
await self._replication.wait_for_stream_position(
@@ -1660,24 +1715,104 @@ class FederationHandler:
# well.
return None
async def _resume_sync_partial_state_room(self) -> None:
async def _resume_partial_state_room_sync(self) -> None:
"""Resumes resyncing of all partial-state rooms after a restart."""
assert not self.config.worker.worker_app
partial_state_rooms = await self.store.get_partial_state_room_resync_info()
for room_id, resync_info in partial_state_rooms.items():
run_as_background_process(
desc="sync_partial_state_room",
func=self._sync_partial_state_room,
self._start_partial_state_room_sync(
initial_destination=resync_info.joined_via,
other_destinations=resync_info.servers_in_room,
room_id=room_id,
)
def _start_partial_state_room_sync(
self,
initial_destination: Optional[str],
other_destinations: AbstractSet[str],
room_id: str,
) -> None:
"""Starts the background process to resync the state of a partial state room,
if it is not already running.
Args:
initial_destination: the initial homeserver to pull the state from
other_destinations: other homeservers to try to pull the state from, if
`initial_destination` is unavailable
room_id: room to be resynced
"""
async def _sync_partial_state_room_wrapper() -> None:
if room_id in self._active_partial_state_syncs:
# Another local user has joined the room while there is already a
# partial state sync running. This implies that there is a new join
# event to un-partial state. We might find ourselves in one of a few
# scenarios:
# 1. There is an existing partial state sync. The partial state sync
# un-partial states the new join event before completing and all is
# well.
# 2. Before the latest join, the homeserver was no longer in the room
# and there is an existing partial state sync from our previous
# membership of the room. The partial state sync may have:
# a) succeeded, but not yet terminated. The room will not be
# un-partial stated again unless we restart the partial state
# sync.
# b) failed, because we were no longer in the room and remote
# homeservers were refusing our requests, but not yet
# terminated. After the latest join, remote homeservers may
# start answering our requests again, so we should restart the
# partial state sync.
# In the cases where we would want to restart the partial state sync,
# the room would have the partial state flag when the partial state sync
# terminates.
self._partial_state_syncs_maybe_needing_restart[room_id] = (
initial_destination,
other_destinations,
)
return
self._active_partial_state_syncs.add(room_id)
try:
await self._sync_partial_state_room(
initial_destination=initial_destination,
other_destinations=other_destinations,
room_id=room_id,
)
finally:
# Read the room's partial state flag while we still hold the claim to
# being the active partial state sync (so that another partial state
# sync can't come along and mess with it under us).
# Normally, the partial state flag will be gone. If it isn't, then we
# may find ourselves in scenario 2a or 2b as described in the comment
# above, where we want to restart the partial state sync.
is_still_partial_state_room = await self.store.is_partial_state_room(
room_id
)
self._active_partial_state_syncs.remove(room_id)
if room_id in self._partial_state_syncs_maybe_needing_restart:
(
restart_initial_destination,
restart_other_destinations,
) = self._partial_state_syncs_maybe_needing_restart.pop(room_id)
if is_still_partial_state_room:
self._start_partial_state_room_sync(
initial_destination=restart_initial_destination,
other_destinations=restart_other_destinations,
room_id=room_id,
)
run_as_background_process(
desc="sync_partial_state_room", func=_sync_partial_state_room_wrapper
)
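The wrapper above implements a reusable pattern: at most one active background sync per room, with late arrivals recording a restart request that the active sync honours on exit if the room is still partial-stated. A self-contained asyncio sketch of that pattern (illustrative names; the real code also threads the destinations through the restart):

import asyncio
from typing import Awaitable, Callable, Dict, Set

active: Set[str] = set()
restart_requested: Dict[str, bool] = {}

def start_sync(
    room_id: str,
    sync: Callable[[str], Awaitable[None]],
    still_partial: Callable[[str], bool],
) -> None:
    async def wrapper() -> None:
        if room_id in active:
            # Defer to the running sync; it will restart us if needed.
            restart_requested[room_id] = True
            return
        active.add(room_id)
        try:
            await sync(room_id)
        finally:
            # Read the flag while we still hold the "active" claim.
            needs_restart = still_partial(room_id)
            active.discard(room_id)
            if restart_requested.pop(room_id, False) and needs_restart:
                start_sync(room_id, sync, still_partial)

    # Fire and forget, as run_as_background_process does in Synapse.
    asyncio.ensure_future(wrapper())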
async def _sync_partial_state_room(
self,
initial_destination: Optional[str],
other_destinations: Collection[str],
other_destinations: AbstractSet[str],
room_id: str,
) -> None:
"""Background process to resync the state of a partial-state room
@@ -1688,6 +1823,12 @@ class FederationHandler:
`initial_destination` is unavailable
room_id: room to be resynced
"""
# Assume that we run on the main process for now.
# TODO(faster_joins,multiple workers)
# When moving the sync to workers, we need to ensure that
# * `_start_partial_state_room_sync` still prevents duplicate resyncs
# * `_is_partial_state_room_linearizer` correctly guards partial state flags
# for rooms between the workers doing remote joins and resync.
assert not self.config.worker.worker_app
# TODO(faster_joins): do we need to lock to avoid races? What happens if other
@@ -1725,20 +1866,19 @@ class FederationHandler:
logger.info("Handling any pending device list updates")
await self._device_handler.handle_room_un_partial_stated(room_id)
logger.info("Clearing partial-state flag for %s", room_id)
success = await self.store.clear_partial_state_room(room_id)
if success:
async with self._is_partial_state_room_linearizer.queue(room_id):
logger.info("Clearing partial-state flag for %s", room_id)
new_stream_id = await self.store.clear_partial_state_room(room_id)
if new_stream_id is not None:
logger.info("State resync complete for %s", room_id)
self._storage_controllers.state.notify_room_un_partial_stated(
room_id
)
# Poke the notifier so that other workers see the write to
# the un-partial-stated rooms stream.
self._notifier.notify_replication()
# TODO(faster_joins) update room stats and user directory?
# https://github.com/matrix-org/synapse/issues/12814
# https://github.com/matrix-org/synapse/issues/12815
await self._notifier.on_un_partial_stated_room(
room_id, new_stream_id
)
return
# we raced against more events arriving with partial state. Go round
@@ -1809,9 +1949,9 @@ class FederationHandler:
def _prioritise_destinations_for_partial_state_resync(
initial_destination: Optional[str],
other_destinations: Collection[str],
other_destinations: AbstractSet[str],
room_id: str,
) -> Collection[str]:
) -> StrCollection:
"""Work out the order in which we should ask servers to resync events.
If an `initial_destination` is given, it takes top priority. Otherwise
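Many of the signature changes in this diff swap `Collection[str]` for `StrCollection`. The alias lives in synapse.types and, quoted from memory (so treat as approximate), looks like this; its point is to exclude bare `str`, which would otherwise satisfy `Collection[str]` one character at a time:

from typing import AbstractSet, List, Tuple, Union

# A collection of strings that deliberately does not admit `str` itself.
StrCollection = Union[Tuple[str, ...], List[str], AbstractSet[str]]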
+7 -2
@@ -80,6 +80,7 @@ from synapse.types import (
PersistedEventPosition,
RoomStreamToken,
StateMap,
StrCollection,
UserID,
get_domain_from_id,
)
@@ -615,7 +616,7 @@ class FederationEventHandler:
@trace
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
self, dest: str, room_id: str, limit: int, extremities: StrCollection
) -> None:
"""Trigger a backfill request to `dest` for the given `room_id`
@@ -1565,7 +1566,7 @@ class FederationEventHandler:
@trace
@tag_args
async def _get_events_and_persist(
self, destination: str, room_id: str, event_ids: Collection[str]
self, destination: str, room_id: str, event_ids: StrCollection
) -> None:
"""Fetch the given events from a server, and persist them as outliers.
@@ -2259,6 +2260,10 @@ class FederationEventHandler:
event_and_contexts, backfilled=backfilled
)
# After persistence we always need to notify replication there may
# be new data.
self._notifier.notify_replication()
if self._ephemeral_messages_enabled:
for event in events:
# If there's an expiry timestamp on the event, schedule its expiry.
+18 -4
@@ -15,7 +15,13 @@
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple, cast
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import (
AccountDataTypes,
Direction,
EduTypes,
EventTypes,
Membership,
)
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.events.utils import SerializeEventConfig
@@ -57,7 +63,13 @@ class InitialSyncHandler:
self.validator = EventValidator()
self.snapshot_cache: ResponseCache[
Tuple[
str, Optional[StreamToken], Optional[StreamToken], str, int, bool, bool
str,
Optional[StreamToken],
Optional[StreamToken],
Direction,
int,
bool,
bool,
]
] = ResponseCache(hs.get_clock(), "initial_sync_cache")
self._event_serializer = hs.get_event_client_serializer()
@@ -239,7 +251,7 @@ class InitialSyncHandler:
tags = tags_by_room.get(event.room_id)
if tags:
account_data_events.append(
{"type": "m.tag", "content": {"tags": tags}}
{"type": AccountDataTypes.TAG, "content": {"tags": tags}}
)
account_data = account_data_by_room.get(event.room_id, {})
@@ -326,7 +338,9 @@ class InitialSyncHandler:
account_data_events = []
tags = await self.store.get_tags_for_room(user_id, room_id)
if tags:
account_data_events.append({"type": "m.tag", "content": {"tags": tags}})
account_data_events.append(
{"type": AccountDataTypes.TAG, "content": {"tags": tags}}
)
account_data = await self.store.get_account_data_for_room(user_id, room_id)
for account_data_type, content in account_data.items():
+20 -7
@@ -377,7 +377,7 @@ class MessageHandler:
"""
expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
if not isinstance(expiry_ts, int) or event.is_state():
if type(expiry_ts) is not int or event.is_state():
return
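The switch from `isinstance` to an exact type check above matters because `bool` is a subclass of `int`, so `isinstance(True, int)` passes and a boolean could masquerade as an expiry timestamp. A two-line demonstration:

assert isinstance(True, int)   # surprising but true: bool subclasses int
assert type(True) is not int   # the exact check the new code relies on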
# _schedule_expiry_for_event won't actually schedule anything if there's already
@@ -1531,12 +1531,23 @@ class EventCreationHandler:
external federation senders don't have to recalculate it themselves.
"""
for event, _ in events_and_context:
if not self._external_cache.is_enabled():
return
if not self._external_cache.is_enabled():
return
# If external cache is enabled we should always have this.
assert self._external_cache_joined_hosts_updates is not None
# If external cache is enabled we should always have this.
assert self._external_cache_joined_hosts_updates is not None
for event, event_context in events_and_context:
if event_context.partial_state:
# To populate the cache for a partial-state event, we either have to
# block until full state, which the code below does, or change the
# meaning of cache values to be the list of hosts to which we plan to
# send events and calculate that instead.
#
# The federation senders don't use the external cache when sending
# events in partial-state rooms anyway, so let's not bother populating
# the cache.
continue
# We actually store two mappings, event ID -> prev state group,
# state group -> joined hosts, which is much more space efficient
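The two-mapping layout described in the comment above can be sketched with plain dicts (an illustrative, in-memory stand-in for the external cache): many events share a state group, so they share a single joined-hosts entry instead of each storing the full host list.

from typing import Dict, Set

event_to_prev_state_group: Dict[str, int] = {}
state_group_to_joined_hosts: Dict[int, Set[str]] = {}

def joined_hosts_for_event(event_id: str) -> Set[str]:
    # Two small lookups instead of one large event_id -> hosts table.
    state_group = event_to_prev_state_group[event_id]
    return state_group_to_joined_hosts[state_group]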
@@ -1928,7 +1939,9 @@ class EventCreationHandler:
if event.type == EventTypes.Message:
# We don't want to block sending messages on any presence code. This
# matters as sometimes presence code can take a while.
run_in_background(self._bump_active_time, requester.user)
run_as_background_process(
"bump_presence_active_time", self._bump_active_time, requester.user
)
async def _notify() -> None:
try:
+6 -6
@@ -13,13 +13,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, Dict, List, Optional, Set
from typing import TYPE_CHECKING, Dict, List, Optional, Set
import attr
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import Direction, EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.api.filtering import Filter
from synapse.events.utils import SerializeEventConfig
@@ -28,7 +28,7 @@ from synapse.logging.opentracing import trace
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.rest.admin._base import assert_user_is_admin
from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, Requester, StreamKeyType
from synapse.types import JsonDict, Requester, StrCollection, StreamKeyType
from synapse.types.state import StateFilter
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.stringutils import random_string
@@ -391,7 +391,7 @@ class PaginationHandler:
"""
return self._delete_by_id.get(delete_id)
def get_delete_ids_by_room(self, room_id: str) -> Optional[Collection[str]]:
def get_delete_ids_by_room(self, room_id: str) -> Optional[StrCollection]:
"""Get all active delete ids by room
Args:
@@ -448,7 +448,7 @@ class PaginationHandler:
if pagin_config.from_token:
from_token = pagin_config.from_token
elif pagin_config.direction == "f":
elif pagin_config.direction == Direction.FORWARDS:
from_token = (
await self.hs.get_event_sources().get_start_token_for_pagination(
room_id
@@ -476,7 +476,7 @@ class PaginationHandler:
room_id, requester, allow_departed_users=True
)
if pagin_config.direction == "b":
if pagin_config.direction == Direction.BACKWARDS:
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
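The `Direction` enum replacing the "f"/"b" string literals keeps the same single-character values, so it stays wire-compatible with the `dir` query parameter. Its definition in synapse.api.constants is, to the best of my recollection:

import enum

class Direction(enum.Enum):
    BACKWARDS = "b"
    FORWARDS = "f"

# Comparisons become typo-proof: a mistyped name is now a NameError or an
# enum lookup failure rather than a silently-false string comparison.
assert Direction("b") is Direction.BACKWARDS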
+17 -6
@@ -64,7 +64,13 @@ from synapse.replication.tcp.commands import ClearUserSyncsCommand
from synapse.replication.tcp.streams import PresenceFederationStream, PresenceStream
from synapse.storage.databases.main import DataStore
from synapse.streams import EventSource
from synapse.types import JsonDict, StreamKeyType, UserID, get_domain_from_id
from synapse.types import (
JsonDict,
StrCollection,
StreamKeyType,
UserID,
get_domain_from_id,
)
from synapse.util.async_helpers import Linearizer
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
@@ -320,7 +326,7 @@ class BasePresenceHandler(abc.ABC):
for destination, host_states in hosts_to_states.items():
self._federation.send_presence_to_destinations(host_states, [destination])
async def send_full_presence_to_users(self, user_ids: Collection[str]) -> None:
async def send_full_presence_to_users(self, user_ids: StrCollection) -> None:
"""
Adds to the list of users who should receive a full snapshot of presence
upon their next sync. Note that this only works for local users.
@@ -1601,7 +1607,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
# Having a default limit doesn't match the EventSource API, but some
# callers do not provide it. It is unused in this class.
limit: int = 0,
room_ids: Optional[Collection[str]] = None,
room_ids: Optional[StrCollection] = None,
is_guest: bool = False,
explicit_room_id: Optional[str] = None,
include_offline: bool = True,
@@ -1688,7 +1694,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
# The set of users that we're interested in and that have had a presence update.
# We'll actually pull the presence updates for these users at the end.
interested_and_updated_users: Collection[str]
interested_and_updated_users: StrCollection
if from_key is not None:
# First get all users that have had a presence update
@@ -2120,7 +2126,7 @@ class PresenceFederationQueue:
# stream_id, destinations, user_ids)`. We don't store the full states
# for efficiency, and remote workers will already have the full states
# cached.
self._queue: List[Tuple[int, int, Collection[str], Set[str]]] = []
self._queue: List[Tuple[int, int, StrCollection, Set[str]]] = []
self._next_id = 1
@@ -2142,7 +2148,7 @@ class PresenceFederationQueue:
self._queue = self._queue[index:]
def send_presence_to_destinations(
self, states: Collection[UserPresenceState], destinations: Collection[str]
self, states: Collection[UserPresenceState], destinations: StrCollection
) -> None:
"""Send the presence states to the given destinations.
@@ -2155,6 +2161,11 @@ class PresenceFederationQueue:
# This should only be called on a presence writer.
assert self._presence_writer
if not states or not destinations:
# Ignore calls which either don't have any new states or don't need
# to be sent anywhere.
return
if self._federation:
self._federation.send_presence_to_destinations(
states=states,
+1 -1
@@ -315,5 +315,5 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
return events, to_key
def get_current_key(self, direction: str = "f") -> int:
def get_current_key(self) -> int:
return self.store.get_max_receipt_stream_id()
+6 -2
@@ -17,7 +17,7 @@ from typing import TYPE_CHECKING, Collection, Dict, FrozenSet, Iterable, List, O
import attr
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.constants import Direction, EventTypes, RelationTypes
from synapse.api.errors import SynapseError
from synapse.events import EventBase, relation_from_event
from synapse.logging.context import make_deferred_yieldable, run_in_background
@@ -413,7 +413,11 @@ class RelationsHandler:
# Attempt to find another event to use as the latest event.
potential_events, _ = await self._main_store.get_relations_for_event(
event_id, event, room_id, RelationTypes.THREAD, direction="f"
event_id,
event,
room_id,
RelationTypes.THREAD,
direction=Direction.FORWARDS,
)
# Filter out ignored users.
+8 -15
@@ -20,22 +20,14 @@ import random
import string
from collections import OrderedDict
from http import HTTPStatus
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Collection,
Dict,
List,
Optional,
Tuple,
)
from typing import TYPE_CHECKING, Any, Awaitable, Dict, List, Optional, Tuple
import attr
from typing_extensions import TypedDict
import synapse.events.snapshot
from synapse.api.constants import (
Direction,
EventContentFields,
EventTypes,
GuestAccess,
@@ -72,6 +64,7 @@ from synapse.types import (
RoomID,
RoomStreamToken,
StateMap,
StrCollection,
StreamKeyType,
StreamToken,
UserID,
@@ -1495,7 +1488,7 @@ class TimestampLookupHandler:
requester: Requester,
room_id: str,
timestamp: int,
direction: str,
direction: Direction,
) -> Tuple[str, int]:
"""Find the closest event to the given timestamp in the given direction.
If we can't find an event locally or the event we have locally is next to a gap,
@@ -1506,7 +1499,7 @@ class TimestampLookupHandler:
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
direction: indicates whether we should navigate forward
or backward from the given timestamp to find the closest event.
Returns:
@@ -1541,13 +1534,13 @@ class TimestampLookupHandler:
local_event_id, allow_none=False, allow_rejected=False
)
if direction == "f":
if direction == Direction.FORWARDS:
# We only need to check for a backward gap if we're looking forwards
# to ensure there is nothing in between.
is_event_next_to_backward_gap = (
await self.store.is_event_next_to_backward_gap(local_event)
)
elif direction == "b":
elif direction == Direction.BACKWARDS:
# We only need to check for a forward gap if we're looking backwards
# to ensure there is nothing in between
is_event_next_to_forward_gap = (
@@ -1644,7 +1637,7 @@ class RoomEventSource(EventSource[RoomStreamToken, EventBase]):
user: UserID,
from_key: RoomStreamToken,
limit: int,
room_ids: Collection[str],
room_ids: StrCollection,
is_guest: bool,
explicit_room_id: Optional[str] = None,
) -> Tuple[List[EventBase], RoomStreamToken]:
+2 -2
@@ -36,7 +36,7 @@ from synapse.api.errors import (
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.events import EventBase
from synapse.types import JsonDict, Requester
from synapse.types import JsonDict, Requester, StrCollection
from synapse.util.caches.response_cache import ResponseCache
if TYPE_CHECKING:
@@ -870,7 +870,7 @@ class _RoomQueueEntry:
# The room ID of this entry.
room_id: str
# The server to query if the room is not known locally.
via: Sequence[str]
via: StrCollection
# The minimum number of hops necessary to get to this room (compared to the
# originally requested room).
depth: int = 0
+4 -4
@@ -14,7 +14,7 @@
import itertools
import logging
from typing import TYPE_CHECKING, Collection, Dict, Iterable, List, Optional, Set, Tuple
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Set, Tuple
import attr
from unpaddedbase64 import decode_base64, encode_base64
@@ -23,7 +23,7 @@ from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import NotFoundError, SynapseError
from synapse.api.filtering import Filter
from synapse.events import EventBase
from synapse.types import JsonDict, StreamKeyType, UserID
from synapse.types import JsonDict, StrCollection, StreamKeyType, UserID
from synapse.types.state import StateFilter
from synapse.visibility import filter_events_for_client
@@ -418,7 +418,7 @@ class SearchHandler:
async def _search_by_rank(
self,
user: UserID,
room_ids: Collection[str],
room_ids: StrCollection,
search_term: str,
keys: Iterable[str],
search_filter: Filter,
@@ -491,7 +491,7 @@ class SearchHandler:
async def _search_by_recent(
self,
user: UserID,
room_ids: Collection[str],
room_ids: StrCollection,
search_term: str,
keys: Iterable[str],
search_filter: Filter,
+5 -4
@@ -20,7 +20,6 @@ from typing import (
Any,
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
@@ -47,6 +46,7 @@ from synapse.http.server import respond_with_html, respond_with_redirect
from synapse.http.site import SynapseRequest
from synapse.types import (
JsonDict,
StrCollection,
UserID,
contains_invalid_mxid_characters,
create_requester,
@@ -141,7 +141,8 @@ class UserAttributes:
confirm_localpart: bool = False
display_name: Optional[str] = None
picture: Optional[str] = None
emails: Collection[str] = attr.Factory(list)
# mypy thinks these are incompatible for some reason.
emails: StrCollection = attr.Factory(list) # type: ignore[assignment]
@attr.s(slots=True, auto_attribs=True)
@@ -159,7 +160,7 @@ class UsernameMappingSession:
# attributes returned by the ID mapper
display_name: Optional[str]
emails: Collection[str]
emails: StrCollection
# An optional dictionary of extra attributes to be provided to the client in the
# login response.
@@ -174,7 +175,7 @@ class UsernameMappingSession:
# choices made by the user
chosen_localpart: Optional[str] = None
use_display_name: bool = True
emails_to_use: Collection[str] = ()
emails_to_use: StrCollection = ()
terms_accepted_version: Optional[str] = None
+213 -148
@@ -17,7 +17,6 @@ from typing import (
TYPE_CHECKING,
AbstractSet,
Any,
Collection,
Dict,
FrozenSet,
List,
@@ -31,7 +30,12 @@ from typing import (
import attr
from prometheus_client import Counter
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.constants import (
AccountDataTypes,
EventContentFields,
EventTypes,
Membership,
)
from synapse.api.filtering import FilterCollection
from synapse.api.presence import UserPresenceState
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
@@ -57,6 +61,7 @@ from synapse.types import (
Requester,
RoomStreamToken,
StateMap,
StrCollection,
StreamKeyType,
StreamToken,
UserID,
@@ -285,7 +290,7 @@ class SyncHandler:
expiry_ms=LAZY_LOADED_MEMBERS_CACHE_MAX_AGE,
)
self.rooms_to_exclude = hs.config.server.rooms_to_exclude_from_sync
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
async def wait_for_sync_for_user(
self,
@@ -1174,7 +1179,7 @@ class SyncHandler:
async def _find_missing_partial_state_memberships(
self,
room_id: str,
members_to_fetch: Collection[str],
members_to_fetch: StrCollection,
events_with_membership_auth: Mapping[str, EventBase],
found_state_ids: StateMap[str],
) -> StateMap[str]:
@@ -1335,7 +1340,10 @@ class SyncHandler:
membership_change_events = []
if since_token:
membership_change_events = await self.store.get_membership_changes_for_user(
user_id, since_token.room_key, now_token.room_key, self.rooms_to_exclude
user_id,
since_token.room_key,
now_token.room_key,
self.rooms_to_exclude_globally,
)
mem_last_change_by_room_id: Dict[str, EventBase] = {}
@@ -1370,12 +1378,49 @@ class SyncHandler:
else:
mutable_joined_room_ids.discard(room_id)
# Tweak the set of rooms to return to the client for eager (non-lazy) syncs.
mutable_rooms_to_exclude = set(self.rooms_to_exclude_globally)
if not sync_config.filter_collection.lazy_load_members():
# Non-lazy syncs should never include partially stated rooms.
# Exclude all partially stated rooms from this sync.
results = await self.store.is_partial_state_room_batched(
mutable_joined_room_ids
)
mutable_rooms_to_exclude.update(
room_id
for room_id, is_partial_state in results.items()
if is_partial_state
)
# Incremental eager syncs should additionally include rooms that
# - we are joined to
# - are fully stated
# - became fully stated at some point during the sync period
# (These rooms will have been omitted during a previous eager sync.)
forced_newly_joined_room_ids: Set[str] = set()
if since_token and not sync_config.filter_collection.lazy_load_members():
un_partial_stated_rooms = (
await self.store.get_un_partial_stated_rooms_between(
since_token.un_partial_stated_rooms_key,
now_token.un_partial_stated_rooms_key,
mutable_joined_room_ids,
)
)
results = await self.store.is_partial_state_room_batched(
un_partial_stated_rooms
)
forced_newly_joined_room_ids.update(
room_id
for room_id, is_partial_state in results.items()
if not is_partial_state
)
# Now we have our list of joined room IDs, exclude as configured and freeze
joined_room_ids = frozenset(
(
room_id
for room_id in mutable_joined_room_ids
if room_id not in self.rooms_to_exclude
if room_id not in mutable_rooms_to_exclude
)
)
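The exclusion logic above reduces to a small filter: fold the partial-state rooms into the globally configured exclusions, then freeze the remaining joined rooms. A sketch with illustrative names:

from typing import AbstractSet, FrozenSet, Mapping, Set

def rooms_to_send(
    joined: AbstractSet[str],
    globally_excluded: AbstractSet[str],
    partial_state_by_room: Mapping[str, bool],
) -> FrozenSet[str]:
    excluded: Set[str] = set(globally_excluded)
    excluded.update(r for r, partial in partial_state_by_room.items() if partial)
    return frozenset(r for r in joined if r not in excluded)

assert rooms_to_send({"!a", "!b"}, set(), {"!a": False, "!b": True}) == frozenset({"!a"})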
@@ -1392,6 +1437,8 @@ class SyncHandler:
since_token=since_token,
now_token=now_token,
joined_room_ids=joined_room_ids,
excluded_room_ids=frozenset(mutable_rooms_to_exclude),
forced_newly_joined_room_ids=frozenset(forced_newly_joined_room_ids),
membership_change_events=membership_change_events,
)
@@ -1401,39 +1448,67 @@ class SyncHandler:
sync_result_builder
)
logger.debug("Fetching room data")
(
newly_joined_rooms,
newly_joined_or_invited_or_knocked_users,
newly_left_rooms,
newly_left_users,
) = await self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
# Presence data is included if the server has it enabled and not filtered out.
include_presence_data = bool(
self.hs_config.server.use_presence
and not sync_config.filter_collection.blocks_all_presence()
)
# Device list updates are sent if a since token is provided.
include_device_list_updates = bool(since_token and since_token.device_list_key)
block_all_presence_data = (
since_token is None and sync_config.filter_collection.blocks_all_presence()
)
if self.hs_config.server.use_presence and not block_all_presence_data:
logger.debug("Fetching presence data")
await self._generate_sync_entry_for_presence(
sync_result_builder,
# If we do not care about the rooms or things which depend on the room
# data (namely presence and device list updates), then we can skip
# this process completely.
device_lists = DeviceListUpdates()
if (
not sync_result_builder.sync_config.filter_collection.blocks_all_rooms()
or include_presence_data
or include_device_list_updates
):
logger.debug("Fetching room data")
# Note that _generate_sync_entry_for_rooms sets sync_result_builder.joined, which
# is used in calculate_user_changes below.
(
newly_joined_rooms,
newly_joined_or_invited_or_knocked_users,
newly_left_rooms,
) = await self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
# Work out which users have joined or left rooms we're in. We use this
# to build the presence and device_list parts of the sync response in
# `_generate_sync_entry_for_presence` and
# `_generate_sync_entry_for_device_list` respectively.
if include_presence_data or include_device_list_updates:
# This uses the sync_result_builder.joined which is set in
# `_generate_sync_entry_for_rooms`, if that didn't find any joined
# rooms for some reason it is a no-op.
(
newly_joined_or_invited_or_knocked_users,
newly_left_users,
) = sync_result_builder.calculate_user_changes()
if include_presence_data:
logger.debug("Fetching presence data")
await self._generate_sync_entry_for_presence(
sync_result_builder,
newly_joined_rooms,
newly_joined_or_invited_or_knocked_users,
)
if include_device_list_updates:
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
logger.debug("Fetching to-device data")
await self._generate_sync_entry_for_to_device(sync_result_builder)
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_or_knocked_users=newly_joined_or_invited_or_knocked_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
logger.debug("Fetching OTK data")
device_id = sync_config.device_id
one_time_keys_count: JsonDict = {}
@@ -1502,6 +1577,7 @@ class SyncHandler:
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
assert since_token is not None
# Take a copy since these fields will be mutated later.
newly_joined_or_invited_or_knocked_users = set(
@@ -1509,92 +1585,85 @@ class SyncHandler:
)
newly_left_users = set(newly_left_users)
if since_token and since_token.device_list_key:
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
# We want to figure out what user IDs the client should refetch
# device keys for, and which users we aren't going to track changes
# for anymore.
#
# For the first step we check:
# a. if any users we share a room with have updated their devices,
# and
# b. we also check if we've joined any new rooms, or if a user has
# joined a room we're in.
#
# For the second step we just find any users we no longer share a
# room with by looking at all users that have left a room plus users
# that were in a room we've left.
users_that_have_changed = set()
users_that_have_changed = set()
joined_rooms = sync_result_builder.joined_room_ids
joined_rooms = sync_result_builder.joined_room_ids
# Step 1a, check for changes in devices of users we share a room
# with
#
# We do this in two different ways depending on what we have cached.
# If we already have a list of all the users that have changed since
# the last sync then it's likely more efficient to compare the rooms
# they're in with the rooms the syncing user is in.
#
# If we don't have that info cached then we get all the users that
# share a room with our user and check if those users have changed.
cache_result = self.store.get_cached_device_list_changes(
since_token.device_list_key
)
if cache_result.hit:
changed_users = cache_result.entities
# Step 1a, check for changes in devices of users we share a room
# with
#
# We do this in two different ways depending on what we have cached.
# If we already have a list of all the users that have changed since
# the last sync then it's likely more efficient to compare the rooms
# they're in with the rooms the syncing user is in.
#
# If we don't have that info cached then we get all the users that
# share a room with our user and check if those users have changed.
cache_result = self.store.get_cached_device_list_changes(
since_token.device_list_key
)
if cache_result.hit:
changed_users = cache_result.entities
result = await self.store.get_rooms_for_users(changed_users)
result = await self.store.get_rooms_for_users(changed_users)
for changed_user_id, entries in result.items():
# Check if the changed user shares any rooms with the user,
# or if the changed user is the syncing user (as we always
# want to include device list updates of their own devices).
if user_id == changed_user_id or any(
rid in joined_rooms for rid in entries
):
users_that_have_changed.add(changed_user_id)
else:
users_that_have_changed = (
await self._device_handler.get_device_changes_in_shared_rooms(
user_id,
sync_result_builder.joined_room_ids,
from_token=since_token,
)
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = (
await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_rooms for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(
changed=users_that_have_changed, left=newly_left_users
)
for changed_user_id, entries in result.items():
# Check if the changed user shares any rooms with the user,
# or if the changed user is the syncing user (as we always
# want to include device list updates of their own devices).
if user_id == changed_user_id or any(
rid in joined_rooms for rid in entries
):
users_that_have_changed.add(changed_user_id)
else:
return DeviceListUpdates()
users_that_have_changed = (
await self._device_handler.get_device_changes_in_shared_rooms(
user_id,
sync_result_builder.joined_room_ids,
from_token=since_token,
)
)
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.store.get_users_in_room(room_id)
newly_joined_or_invited_or_knocked_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_or_knocked_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
)
users_that_have_changed.update(user_signatures_changed)
# Now find users that we no longer track
for room_id in newly_left_rooms:
left_users = await self.store.get_users_in_room(room_id)
newly_left_users.update(left_users)
# Remove any users that we still share a room with.
left_users_rooms = await self.store.get_rooms_for_users(newly_left_users)
for user_id, entries in left_users_rooms.items():
if any(rid in joined_rooms for rid in entries):
newly_left_users.discard(user_id)
return DeviceListUpdates(changed=users_that_have_changed, left=newly_left_users)
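The "users we no longer track" step above is set algebra: start from everyone in the rooms we just left, then drop anyone with whom we still share some room. A standalone sketch with illustrative names:

from typing import AbstractSet, Dict, Set

def users_no_longer_tracked(
    users_in_newly_left_rooms: AbstractSet[str],
    rooms_for_user: Dict[str, Set[str]],
    our_joined_rooms: AbstractSet[str],
) -> Set[str]:
    candidates = set(users_in_newly_left_rooms)
    for user_id, their_rooms in rooms_for_user.items():
        # Keep tracking anyone who still shares a room with us.
        if user_id in candidates and their_rooms & our_joined_rooms:
            candidates.discard(user_id)
    return candidates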
@trace
async def _generate_sync_entry_for_to_device(
@@ -1671,6 +1740,7 @@ class SyncHandler:
since_token = sync_result_builder.since_token
if since_token and not sync_result_builder.full_state:
# TODO Do not fetch room account data if it will be unused.
(
global_account_data,
account_data_by_room,
@@ -1687,6 +1757,7 @@ class SyncHandler:
sync_config.user
)
else:
# TODO Do not fetch room account data if it will be unused.
(
global_account_data,
account_data_by_room,
@@ -1769,7 +1840,7 @@ class SyncHandler:
self,
sync_result_builder: "SyncResultBuilder",
account_data_by_room: Dict[str, Dict[str, JsonDict]],
) -> Tuple[AbstractSet[str], AbstractSet[str], AbstractSet[str], AbstractSet[str]]:
) -> Tuple[AbstractSet[str], AbstractSet[str]]:
"""Generates the rooms portion of the sync response. Populates the
`sync_result_builder` with the result.
@@ -1782,30 +1853,21 @@ class SyncHandler:
account_data_by_room: Dictionary of per room account data
Returns:
Returns a 4-tuple describing rooms the user has joined or left, and users who've
joined or left any rooms the user is in. This gets used later in
`_generate_sync_entry_for_device_list`.
Returns a 2-tuple describing rooms the user has joined or left.
Its entries are:
- newly_joined_rooms
- newly_joined_or_invited_or_knocked_users
- newly_left_rooms
- newly_left_users
"""
# If the request doesn't care about rooms then nothing to do!
if sync_result_builder.sync_config.filter_collection.blocks_all_rooms():
return set(), set(), set(), set()
since_token = sync_result_builder.since_token
user_id = sync_result_builder.sync_config.user.to_string()
# 1. Start by fetching all ephemeral events in rooms we've joined (if required).
user_id = sync_result_builder.sync_config.user.to_string()
block_all_room_ephemeral = (
since_token is None
and sync_result_builder.sync_config.filter_collection.blocks_all_room_ephemeral()
sync_result_builder.sync_config.filter_collection.blocks_all_rooms()
or sync_result_builder.sync_config.filter_collection.blocks_all_room_ephemeral()
)
if block_all_room_ephemeral:
ephemeral_by_room: Dict[str, List[JsonDict]] = {}
else:
@@ -1828,19 +1890,21 @@ class SyncHandler:
)
if not tags_by_room:
logger.debug("no-oping sync")
return set(), set(), set(), set()
return set(), set()
# 3. Work out which rooms need reporting in the sync response.
ignored_users = await self.store.ignored_users(user_id)
if since_token:
room_changes = await self._get_rooms_changed(
room_changes = await self._get_room_changes_for_incremental_sync(
sync_result_builder, ignored_users
)
tags_by_room = await self.store.get_updated_tags(
user_id, since_token.account_data_key
)
else:
room_changes = await self._get_all_rooms(sync_result_builder, ignored_users)
room_changes = await self._get_room_changes_for_initial_sync(
sync_result_builder, ignored_users
)
tags_by_room = await self.store.get_tags_for_user(user_id)
log_kv({"rooms_changed": len(room_changes.room_entries)})
@@ -1855,6 +1919,7 @@ class SyncHandler:
# joined or archived).
async def handle_room_entries(room_entry: "RoomSyncResultBuilder") -> None:
logger.debug("Generating room entry for %s", room_entry.room_id)
# Note that this mutates sync_result_builder.{joined,archived}.
await self._generate_room_entry(
sync_result_builder,
room_entry,
@@ -1871,20 +1936,7 @@ class SyncHandler:
sync_result_builder.invited.extend(invited)
sync_result_builder.knocked.extend(knocked)
# 5. Work out which users have joined or left rooms we're in. We use this
# to build the device_list part of the sync response in
# `_generate_sync_entry_for_device_list`.
(
newly_joined_or_invited_or_knocked_users,
newly_left_users,
) = sync_result_builder.calculate_user_changes()
return (
set(newly_joined_rooms),
newly_joined_or_invited_or_knocked_users,
set(newly_left_rooms),
newly_left_users,
)
return set(newly_joined_rooms), set(newly_left_rooms)
async def _have_rooms_changed(
self, sync_result_builder: "SyncResultBuilder"
@@ -1899,7 +1951,7 @@ class SyncHandler:
assert since_token
if membership_change_events:
if membership_change_events or sync_result_builder.forced_newly_joined_room_ids:
return True
stream_id = since_token.room_key.stream
@@ -1908,7 +1960,7 @@ class SyncHandler:
return True
return False
async def _get_rooms_changed(
async def _get_room_changes_for_incremental_sync(
self,
sync_result_builder: "SyncResultBuilder",
ignored_users: FrozenSet[str],
@@ -1946,7 +1998,9 @@ class SyncHandler:
for event in membership_change_events:
mem_change_events_by_room_id.setdefault(event.room_id, []).append(event)
newly_joined_rooms: List[str] = []
newly_joined_rooms: List[str] = list(
sync_result_builder.forced_newly_joined_room_ids
)
newly_left_rooms: List[str] = []
room_entries: List[RoomSyncResultBuilder] = []
invited: List[InvitedSyncResult] = []
@@ -2152,7 +2206,7 @@ class SyncHandler:
newly_left_rooms,
)
async def _get_all_rooms(
async def _get_room_changes_for_initial_sync(
self,
sync_result_builder: "SyncResultBuilder",
ignored_users: FrozenSet[str],
@@ -2177,7 +2231,7 @@ class SyncHandler:
room_list = await self.store.get_rooms_for_local_user_where_membership_is(
user_id=user_id,
membership_list=Membership.LIST,
excluded_rooms=self.rooms_to_exclude,
excluded_rooms=sync_result_builder.excluded_room_ids,
)
room_entries = []
@@ -2335,7 +2389,9 @@ class SyncHandler:
account_data_events = []
if tags is not None:
account_data_events.append({"type": "m.tag", "content": {"tags": tags}})
account_data_events.append(
{"type": AccountDataTypes.TAG, "content": {"tags": tags}}
)
for account_data_type, content in account_data.items():
account_data_events.append(
@@ -2546,6 +2602,13 @@ class SyncResultBuilder:
since_token: The token supplied by user, or None.
now_token: The token to sync up to.
joined_room_ids: List of rooms the user is joined to
excluded_room_ids: Set of room ids we should omit from the /sync response.
forced_newly_joined_room_ids:
Rooms that should be presented in the /sync response as if they were
newly joined during the sync period, even if that's not the case.
(This is useful if the room was previously excluded from a /sync response,
and now the client should be made aware of it.)
Only used by incremental syncs.
# The following mirror the fields in a sync response
presence
@@ -2562,6 +2625,8 @@ class SyncResultBuilder:
since_token: Optional[StreamToken]
now_token: StreamToken
joined_room_ids: FrozenSet[str]
excluded_room_ids: FrozenSet[str]
forced_newly_joined_room_ids: FrozenSet[str]
membership_change_events: List[EventBase]
presence: List[UserPresenceState] = attr.Factory(list)
+4 -1
@@ -44,6 +44,7 @@ from twisted.internet.interfaces import (
IAddress,
IDelayedCall,
IHostResolution,
IReactorCore,
IReactorPluggableNameResolver,
IReactorTime,
IResolutionReceiver,
@@ -226,7 +227,9 @@ class _IPBlacklistingResolver:
return recv
@implementer(ISynapseReactor)
# ISynapseReactor implies IReactorCore, but explicitly marking this as an implementer
# of IReactorCore seems to keep mypy-zope happier.
@implementer(IReactorCore, ISynapseReactor)
class BlacklistingReactorWrapper:
"""
A Reactor wrapper which will prevent DNS resolution to blacklisted IP
+1 -2
@@ -38,7 +38,6 @@ from twisted.web.iweb import IAgent, IBodyProducer, IPolicyForHTTPS, IResponse
from synapse.http import redact_uri
from synapse.http.connectproxyclient import HTTPConnectProxyEndpoint, ProxyCredentials
from synapse.types import ISynapseReactor
logger = logging.getLogger(__name__)
@@ -84,7 +83,7 @@ class ProxyAgent(_AgentBase):
def __init__(
self,
reactor: IReactorCore,
proxy_reactor: Optional[ISynapseReactor] = None,
proxy_reactor: Optional[IReactorCore] = None,
contextFactory: Optional[IPolicyForHTTPS] = None,
connectTimeout: Optional[float] = None,
bindAddress: Optional[bytes] = None,
+70
@@ -13,6 +13,7 @@
# limitations under the License.
""" This module contains base REST classes for constructing REST servlets. """
import enum
import logging
from http import HTTPStatus
from typing import (
@@ -362,6 +363,7 @@ def parse_string(
request: Request,
name: str,
*,
default: Optional[str] = None,
required: bool = False,
allowed_values: Optional[Iterable[str]] = None,
encoding: str = "ascii",
@@ -413,6 +415,74 @@ def parse_string(
)
EnumT = TypeVar("EnumT", bound=enum.Enum)
@overload
def parse_enum(
request: Request,
name: str,
E: Type[EnumT],
default: EnumT,
) -> EnumT:
...
@overload
def parse_enum(
request: Request,
name: str,
E: Type[EnumT],
*,
required: Literal[True],
) -> EnumT:
...
def parse_enum(
request: Request,
name: str,
E: Type[EnumT],
default: Optional[EnumT] = None,
required: bool = False,
) -> Optional[EnumT]:
"""
Parse an enum parameter from the request query string.
Note that the enum *must only have string values*.
Args:
request: the twisted HTTP request.
name: the name of the query parameter.
E: the enum which represents valid values
default: enum value to use if the parameter is absent, defaults to None.
required: whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
Returns:
An enum value.
Raises:
SynapseError if the parameter is absent and required, or if the
parameter is present but is not one of the allowed values.
"""
# Assert the enum values are strings.
assert all(
isinstance(e.value, str) for e in E
), "parse_enum only works with string values"
str_value = parse_string(
request,
name,
default=default.value if default is not None else None,
required=required,
allowed_values=[e.value for e in E],
)
if str_value is None:
return None
return E(str_value)
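A standalone demonstration of the parse_enum contract, with the twisted Request plumbing replaced by a plain dict of query args (everything below is a made-up stand-in, not Synapse API):

import enum
from typing import Dict, Optional, Type, TypeVar

EnumT = TypeVar("EnumT", bound=enum.Enum)

def parse_enum_from(
    args: Dict[str, str], name: str, E: Type[EnumT], default: Optional[EnumT] = None
) -> Optional[EnumT]:
    raw = args.get(name)
    if raw is None:
        return default
    if raw not in [e.value for e in E]:
        # The real helper raises a 400 SynapseError via allowed_values.
        raise ValueError(f"{name} must be one of {[e.value for e in E]}")
    return E(raw)

class Direction(enum.Enum):
    BACKWARDS = "b"
    FORWARDS = "f"

assert parse_enum_from({"dir": "b"}, "dir", Direction) is Direction.BACKWARDS
assert parse_enum_from({}, "dir", Direction, Direction.FORWARDS) is Direction.FORWARDS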
def _parse_string_value(
value: bytes,
allowed_values: Optional[Iterable[str]],
+5
@@ -322,6 +322,11 @@ class SynapseTags:
# The name of the external cache
CACHE_NAME = "cache.name"
# Boolean. Present on /v2/send_join requests, omitted from all others.
# True iff partial state was requested and we provided (or intended to provide)
# partial state in the response.
SEND_JOIN_RESPONSE_IS_PARTIAL_STATE = "send_join.partial_state_response"
# Used to tag function arguments
#
# Tag a named arg. The name of the argument should be appended to this prefix.

Some files were not shown because too many files have changed in this diff.