Compare commits

17 Commits

Author SHA1 Message Date
Sean Quah
2e2f5c043e Merge branch 'develop' into squah/leave_space_admin_api 2021-12-08 10:47:09 +00:00
Sean Quah
07580acdc0 Merge branch 'develop' into squah/leave_space_admin_api 2021-11-30 12:18:02 +00:00
Sean Quah
64c56177a4 Add tests for remote spaces 2021-11-19 16:31:43 +00:00
Sean Quah
94586c596f Add flag to control whether remote spaces are processed 2021-11-19 15:25:43 +00:00
Sean Quah
856656a8ee Leave rooms in a deterministic order 2021-11-18 20:01:36 +00:00
Sean Quah
e5751a6350 Revert "Convert strings in synapse.api.constants to enums or Final"
This reverts commit b43d085472.
2021-11-18 16:06:38 +00:00
Sean Quah
b105beafdb Fix bug/redundant code to do with handling of the root space id 2021-11-18 16:00:00 +00:00
Sean Quah
f77da61ce8 Add missing type hints to new tests 2021-11-18 15:43:02 +00:00
Sean Quah
8627a456e3 Refer to "spaces" instead of "rooms" 2021-11-18 15:42:56 +00:00
Sean Quah
b43d085472 Convert strings in synapse.api.constants to enums or Final
This change makes mypy type the constants as `Literal`s instead of
`str`s, allowing code of the following form to pass mypy:
```py
def do_something(
    membership: Literal[Membership.JOIN, Membership.LEAVE], ...
):
    ...

do_something(Membership.JOIN, ...)
```
2021-11-16 13:55:20 +00:00
Sean Quah
ab89c60702 Add newsfile 2021-11-16 13:55:02 +00:00
Sean Quah
b8c228ae98 Add tests for admin API to remove a local user from a space 2021-11-16 13:45:03 +00:00
Sean Quah
3371ec0b85 Add documentation for new admin API to remove a local user from a space 2021-11-16 13:45:03 +00:00
Sean Quah
75be1be9d5 Add admin API to remove a local user from a space 2021-11-16 13:45:03 +00:00
Sean Quah
f004687410 Add tests for RoomHierarchyHandler 2021-11-16 13:45:03 +00:00
Sean Quah
98873d7be3 Add RoomHierarchyHandler for walking space hierarchies 2021-11-16 13:45:03 +00:00
Sean Quah
17bc6167d6 Annotate string constants in synapse.api.constants with Final
This change makes mypy complain if the constants are ever reassigned,
and, more usefully, makes mypy type them as `Literal`s instead of `str`s,
allowing code of the following form to pass mypy:
```py
def do_something(membership: Literal["join", "leave"], ...): ...

do_something(Membership.JOIN, ...)
```
2021-11-16 13:37:51 +00:00
312 changed files with 5425 additions and 7400 deletions

View File

@@ -8,7 +8,6 @@
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off)
* [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct
(run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

View File

@@ -76,7 +76,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10"]
python-version: ["3.6", "3.7", "3.8", "3.9", "3.10"]
database: ["sqlite"]
toxenv: ["py"]
include:
@@ -85,9 +85,9 @@ jobs:
toxenv: "py-noextras"
# Oldest Python with PostgreSQL
- python-version: "3.7"
- python-version: "3.6"
database: "postgres"
postgres-version: "10"
postgres-version: "9.6"
toxenv: "py"
# Newest Python with newest PostgreSQL
@@ -167,7 +167,7 @@ jobs:
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["pypy-3.7"]
python-version: ["pypy-3.6"]
steps:
- uses: actions/checkout@v2
@@ -291,8 +291,8 @@ jobs:
strategy:
matrix:
include:
- python-version: "3.7"
postgres-version: "10"
- python-version: "3.6"
postgres-version: "9.6"
- python-version: "3.10"
postgres-version: "14"
@@ -366,8 +366,6 @@ jobs:
# Build initial Synapse image
- run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
working-directory: synapse
env:
DOCKER_BUILDKIT: 1
# Build a ready-to-run Synapse image based on the initial image above.
# This new image includes a config file, keys for signing and TLS, and
@@ -376,8 +374,7 @@ jobs:
working-directory: complement/dockerfiles
# Run Complement
- run: set -o pipefail && go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
shell: bash
- run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
env:
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
working-directory: complement

.gitignore vendored (4 changes)
View File

@@ -50,7 +50,3 @@ __pycache__/
# docs
book/
# complement
/complement-master
/master.tar.gz

View File

@@ -1,181 +1,6 @@
Synapse 1.50.1 (2022-01-18)
===========================
This release fixes a bug in Synapse 1.50.0 that could prevent clients from being able to connect to Synapse if the `webclient` resource was enabled. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).
Bugfixes
--------
- Fix a bug introduced in Synapse 1.50.0rc1 that could cause Matrix clients to be unable to connect to Synapse instances with the `webclient` resource enabled. ([\#11764](https://github.com/matrix-org/synapse/issues/11764))
Synapse 1.50.0 (2022-01-18)
===========================
**This release contains a critical bug that may prevent clients from being able to connect.
As such, it is not recommended to upgrade to 1.50.0. Instead, please upgrade straight
to 1.50.1. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).**
Please note that we now only support Python 3.7+ and PostgreSQL 10+ (if applicable), because Python 3.6 and PostgreSQL 9.6 have reached end-of-life.
No significant changes since 1.50.0rc2.
Synapse 1.50.0rc2 (2022-01-14)
Synapse 1.49.0rc1 (2021-12-07)
==============================
This release candidate fixes a federation-breaking regression introduced in Synapse 1.50.0rc1.
Bugfixes
--------
- Fix a bug introduced in Synapse v1.0.0 whereby some device list updates would not be sent to remote homeservers if there were too many to send at once. ([\#11729](https://github.com/matrix-org/synapse/issues/11729))
- Fix a bug introduced in Synapse v1.50.0rc1 whereby outbound federation could fail because too many EDUs were produced for device updates. ([\#11730](https://github.com/matrix-org/synapse/issues/11730))
Improved Documentation
----------------------
- Document that the minimum supported PostgreSQL version is now 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725))
Internal Changes
----------------
- Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s. ([\#11714](https://github.com/matrix-org/synapse/issues/11714))
Synapse 1.50.0rc1 (2022-01-05)
==============================
Features
--------
- Allow guests to send state events per [MSC3419](https://github.com/matrix-org/matrix-doc/pull/3419). ([\#11378](https://github.com/matrix-org/synapse/issues/11378))
- Add experimental support for part of [MSC3202](https://github.com/matrix-org/matrix-doc/pull/3202): allowing application services to masquerade as specific devices. ([\#11538](https://github.com/matrix-org/synapse/issues/11538))
- Add admin API to get users' account data. ([\#11664](https://github.com/matrix-org/synapse/issues/11664))
- Include the room topic in the stripped state included with invites and knocking. ([\#11666](https://github.com/matrix-org/synapse/issues/11666))
- Send and handle cross-signing messages using the stable prefix. ([\#10520](https://github.com/matrix-org/synapse/issues/10520))
- Support unprefixed versions of fallback key property names. ([\#11541](https://github.com/matrix-org/synapse/issues/11541))
Bugfixes
--------
- Fix a long-standing bug where relations from other rooms could be included in the bundled aggregations of an event. ([\#11516](https://github.com/matrix-org/synapse/issues/11516))
- Fix a long-standing bug which could cause `AssertionError`s to be written to the log when Synapse was restarted after purging events from the database. ([\#11536](https://github.com/matrix-org/synapse/issues/11536), [\#11642](https://github.com/matrix-org/synapse/issues/11642))
- Fix a bug introduced in Synapse 1.17.0 where a pusher created for an email with capital letters would fail to be created. ([\#11547](https://github.com/matrix-org/synapse/issues/11547))
- Fix a long-standing bug where responses included bundled aggregations when they should not, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11592](https://github.com/matrix-org/synapse/issues/11592), [\#11623](https://github.com/matrix-org/synapse/issues/11623))
- Fix a long-standing bug that some unknown endpoints would return HTML error pages instead of JSON `M_UNRECOGNIZED` errors. ([\#11602](https://github.com/matrix-org/synapse/issues/11602))
- Fix a bug introduced in Synapse 1.19.3 which could sometimes cause `AssertionError`s when backfilling rooms over federation. ([\#11632](https://github.com/matrix-org/synapse/issues/11632))
Improved Documentation
----------------------
- Update Synapse install command for FreeBSD as the package is now prefixed with `py38`. Contributed by @itchychips. ([\#11267](https://github.com/matrix-org/synapse/issues/11267))
- Document the usage of refresh tokens. ([\#11427](https://github.com/matrix-org/synapse/issues/11427))
- Add details for how to configure a TURN server when behind a NAT. Contributed by @AndrewFerr. ([\#11553](https://github.com/matrix-org/synapse/issues/11553))
- Add references for using Postgres to the Docker documentation. ([\#11640](https://github.com/matrix-org/synapse/issues/11640))
- Fix the documentation link in newly-generated configuration files. ([\#11678](https://github.com/matrix-org/synapse/issues/11678))
- Correct the documentation for `nginx` to use a case-sensitive url pattern. Fixes an error introduced in v1.21.0. ([\#11680](https://github.com/matrix-org/synapse/issues/11680))
- Clarify SSO mapping provider documentation by writing `def` or `async def` before the names of methods, as appropriate. ([\#11681](https://github.com/matrix-org/synapse/issues/11681))
Deprecations and Removals
-------------------------
- Replace `mock` package by its standard library version. ([\#11588](https://github.com/matrix-org/synapse/issues/11588))
- Drop support for Python 3.6 and Ubuntu 18.04. ([\#11633](https://github.com/matrix-org/synapse/issues/11633))
Internal Changes
----------------
- Allow specific, experimental events to be created without `prev_events`. Used by [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716). ([\#11243](https://github.com/matrix-org/synapse/issues/11243))
- A test helper (`wait_for_background_updates`) no longer depends on classes defining a `store` property. ([\#11331](https://github.com/matrix-org/synapse/issues/11331))
- Add type hints to `synapse.appservice`. ([\#11360](https://github.com/matrix-org/synapse/issues/11360))
- Add missing type hints to `synapse.config` module. ([\#11480](https://github.com/matrix-org/synapse/issues/11480))
- Add test to ensure we share the same `state_group` across the whole historical batch when using the [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) `/batch_send` endpoint. ([\#11487](https://github.com/matrix-org/synapse/issues/11487))
- Refactor `tests.util.setup_test_homeserver` and `tests.server.setup_test_homeserver`. ([\#11503](https://github.com/matrix-org/synapse/issues/11503))
- Move `glob_to_regex` and `re_word_boundary` to `matrix-python-common`. ([\#11505](https://github.com/matrix-org/synapse/issues/11505), [\#11687](https://github.com/matrix-org/synapse/issues/11687))
- Use `HTTPStatus` constants in place of literals in `tests.rest.client.test_auth`. ([\#11520](https://github.com/matrix-org/synapse/issues/11520))
- Add a receipt types constant for `m.read`. ([\#11531](https://github.com/matrix-org/synapse/issues/11531))
- Clean up `synapse.rest.admin`. ([\#11535](https://github.com/matrix-org/synapse/issues/11535))
- Add missing `errcode` to `parse_string` and `parse_boolean`. ([\#11542](https://github.com/matrix-org/synapse/issues/11542))
- Use `HTTPStatus` constants in place of literals in `synapse.http`. ([\#11543](https://github.com/matrix-org/synapse/issues/11543))
- Add missing type hints to storage classes. ([\#11546](https://github.com/matrix-org/synapse/issues/11546), [\#11549](https://github.com/matrix-org/synapse/issues/11549), [\#11551](https://github.com/matrix-org/synapse/issues/11551), [\#11555](https://github.com/matrix-org/synapse/issues/11555), [\#11575](https://github.com/matrix-org/synapse/issues/11575), [\#11589](https://github.com/matrix-org/synapse/issues/11589), [\#11594](https://github.com/matrix-org/synapse/issues/11594), [\#11652](https://github.com/matrix-org/synapse/issues/11652), [\#11653](https://github.com/matrix-org/synapse/issues/11653), [\#11654](https://github.com/matrix-org/synapse/issues/11654), [\#11657](https://github.com/matrix-org/synapse/issues/11657))
- Fix an inaccurate and misleading comment in the `/sync` code. ([\#11550](https://github.com/matrix-org/synapse/issues/11550))
- Add missing type hints to `synapse.logging.context`. ([\#11556](https://github.com/matrix-org/synapse/issues/11556))
- Stop populating unused database column `state_events.prev_state`. ([\#11558](https://github.com/matrix-org/synapse/issues/11558))
- Minor efficiency improvements in event persistence. ([\#11560](https://github.com/matrix-org/synapse/issues/11560))
- Add some safety checks that storage functions are used correctly. ([\#11564](https://github.com/matrix-org/synapse/issues/11564), [\#11580](https://github.com/matrix-org/synapse/issues/11580))
- Make `get_device` return `None` if the device doesn't exist rather than raising an exception. ([\#11565](https://github.com/matrix-org/synapse/issues/11565))
- Split the HTML parsing code from the URL preview resource code. ([\#11566](https://github.com/matrix-org/synapse/issues/11566))
- Remove redundant `COALESCE()`s around `COUNT()`s in database queries. ([\#11570](https://github.com/matrix-org/synapse/issues/11570))
- Add missing type hints to `synapse.http`. ([\#11571](https://github.com/matrix-org/synapse/issues/11571))
- Add [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) and [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) to `/versions` -> `unstable_features` to detect server support. ([\#11582](https://github.com/matrix-org/synapse/issues/11582))
- Add type hints to `synapse/tests/rest/admin`. ([\#11590](https://github.com/matrix-org/synapse/issues/11590))
- Drop end-of-life Python 3.6 and Postgres 9.6 from CI. ([\#11595](https://github.com/matrix-org/synapse/issues/11595))
- Update black version and run it on all the files. ([\#11596](https://github.com/matrix-org/synapse/issues/11596))
- Add opentracing type stubs and fix associated mypy errors. ([\#11603](https://github.com/matrix-org/synapse/issues/11603), [\#11622](https://github.com/matrix-org/synapse/issues/11622))
- Improve OpenTracing support for requests which use a `ResponseCache`. ([\#11607](https://github.com/matrix-org/synapse/issues/11607))
- Improve OpenTracing support for incoming HTTP requests. ([\#11618](https://github.com/matrix-org/synapse/issues/11618))
- A number of improvements to opentracing support. ([\#11619](https://github.com/matrix-org/synapse/issues/11619))
- Refactor the way that the `outlier` flag is set on events received over federation. ([\#11634](https://github.com/matrix-org/synapse/issues/11634))
- Improve the error messages from `get_create_event_for_room`. ([\#11638](https://github.com/matrix-org/synapse/issues/11638))
- Remove redundant `get_current_events_token` method. ([\#11643](https://github.com/matrix-org/synapse/issues/11643))
- Convert `namedtuples` to `attrs`. ([\#11665](https://github.com/matrix-org/synapse/issues/11665), [\#11574](https://github.com/matrix-org/synapse/issues/11574))
- Update the `/capabilities` response to include whether support for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440) is available. ([\#11690](https://github.com/matrix-org/synapse/issues/11690))
- Send the `Accept` header in HTTP requests made using `SimpleHttpClient.get_json`. ([\#11677](https://github.com/matrix-org/synapse/issues/11677))
- Work around Mjolnir compatibility issue by adding an import for `glob_to_regex` in `synapse.util`, where it moved from. ([\#11696](https://github.com/matrix-org/synapse/issues/11696))
Synapse 1.49.2 (2021-12-21)
===========================
This release fixes a regression introduced in Synapse 1.49.0 which could cause `/sync` requests to take significantly longer. This would particularly affect "initial" syncs for users participating in a large number of rooms, and in extreme cases, could make it impossible for such users to log in on a new client.
**Note:** in line with our [deprecation policy](https://matrix-org.github.io/synapse/latest/deprecation_policy.html) for platform dependencies, this will be the last release to support Python 3.6 and PostgreSQL 9.6, both of which have now reached upstream end-of-life. Synapse will require Python 3.7+ and PostgreSQL 10+.
**Note:** We will also stop producing packages for Ubuntu 18.04 (Bionic Beaver) after this release, as it uses Python 3.6.
Bugfixes
--------
- Fix a performance regression in `/sync` handling, introduced in 1.49.0. ([\#11583](https://github.com/matrix-org/synapse/issues/11583))
Internal Changes
----------------
- Work around a build problem on Debian Buster. ([\#11625](https://github.com/matrix-org/synapse/issues/11625))
Synapse 1.49.1 (2021-12-21)
===========================
Not released due to problems building the debian packages.
Synapse 1.49.0 (2021-12-14)
===========================
No significant changes since version 1.49.0rc1.
Support for Ubuntu 21.04 ends next month on the 20th of January
---------------------------------------------------------------
For users of Ubuntu 21.04 (Hirsute Hippo), please be aware that [upstream support for this version of Ubuntu will end next month][Ubuntu2104EOL].
We will stop producing packages for Ubuntu 21.04 after upstream support ends.
[Ubuntu2104EOL]: https://lists.ubuntu.com/archives/ubuntu-announce/2021-December/000275.html
The wiki has been migrated to the documentation website
-------------------------------------------------------
We've decided to move the existing, somewhat stagnant pages from the GitHub wiki
to the [documentation website](https://matrix-org.github.io/synapse/latest/).
@@ -191,9 +16,6 @@ requests](https://github.com/matrix-org/synapse/pulls). Please visit [#synapse-d
if you need help with the process!
Synapse 1.49.0rc1 (2021-12-07)
==============================
Features
--------

changelog.d/10520.misc Normal file (1 change)
View File

@@ -0,0 +1 @@
Send and handle cross-signing messages using the stable prefix.

changelog.d/11331.misc Normal file (1 change)
View File

@@ -0,0 +1 @@
A test helper (`wait_for_background_updates`) no longer depends on classes defining a `store` property.

View File

@@ -0,0 +1 @@
Add an admin API endpoint to force a local user to leave all non-public rooms in a space.

View File

@@ -1,2 +0,0 @@
Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events
received over federation.

View File

@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

View File

@@ -1 +0,0 @@
Remove the `"password_hash"` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html).

View File

@@ -1 +0,0 @@
Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).

View File

@@ -1 +0,0 @@
Fix a performance regression in `/sync` handling, introduced in 1.49.0.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices.

View File

@@ -1 +0,0 @@
Fix an error when getting the federation status of a destination server even if no error has occurred. This admin API was newly introduced in Synapse 1.49.0.

View File

@@ -1 +0,0 @@
Avoid database access in the JSON serialization process.

View File

@@ -1 +0,0 @@
Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).

View File

@@ -1 +0,0 @@
Fix `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields which have been present since Synapse 1.49.0.

View File

@@ -1 +0,0 @@
Fix preview of some gif URLs (like tenor.com). Contributed by Philippe Daouadi.

View File

@@ -1 +0,0 @@
Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room.

View File

@@ -1 +0,0 @@
Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users.

View File

@@ -1 +0,0 @@
Remove the unstable `/send_relation` endpoint.

View File

@@ -1 +0,0 @@
Run `pyupgrade --py37-plus --keep-percent-format` on Synapse.

View File

@@ -1 +0,0 @@
Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using ZeroSSL certificate instead. This bypasses client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr.

View File

@@ -1 +0,0 @@
Use buildkit's cache feature to speed up docker builds.

View File

@@ -1 +0,0 @@
Use `auto_attribs` and native type hints for attrs classes.

View File

@@ -1 +0,0 @@
Remove debug logging for #4422, which has been closed since Synapse 0.99.

View File

@@ -1 +0,0 @@
Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This has existed since the introduction of the API in Synapse v1.41.0.

View File

@@ -1 +0,0 @@
Remove fallback code for Python 2.

View File

@@ -1 +0,0 @@
Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic.

View File

@@ -1 +0,0 @@
Add the option to write sqlite test dbs to disk when running tests.

View File

@@ -1 +0,0 @@
Improve Complement test output for GitHub Actions.

View File

@@ -1 +0,0 @@
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.

View File

@@ -1 +0,0 @@
Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s.

View File

@@ -1 +0,0 @@
Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide.

View File

@@ -1 +0,0 @@
Fix docstring on `add_account_data_for_user`.

View File

@@ -1 +0,0 @@
Complement environment variable name change and update `.gitignore`.

View File

@@ -1 +0,0 @@
Simplify calculation of prometheus metrics for garbage collection.

View File

@@ -1 +0,0 @@
Improve accuracy of `python_twisted_reactor_tick_time` prometheus metric.

View File

@@ -1 +0,0 @@
Remove `python_twisted_reactor_pending_calls` prometheus metric.

View File

@@ -1 +0,0 @@
Document that the minimum supported PostgreSQL version is now 10.

View File

@@ -1 +0,0 @@
Fix typo in demo docs: differnt.

View File

@@ -1 +0,0 @@
Make the list rooms admin api sort stable. Contributed by Daniël Sonck.

View File

@@ -1 +0,0 @@
Update room spec url in config files.

View File

@@ -1 +0,0 @@
Mention python3-venv and libpq-dev dependencies in contribution guide.

View File

@@ -1 +0,0 @@
Minor efficiency improvements when inserting many values into the database.

View File

@@ -1 +0,0 @@
Invite PR authors to give themselves credit in the changelog.

View File

@@ -1 +0,0 @@
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.

View File

@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

View File

@@ -1 +0,0 @@
Update documentation for configuring login with facebook.

View File

@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

View File

@@ -1 +0,0 @@
Remove `log_function` utility function and its uses.

View File

@@ -1 +0,0 @@
Use `auto_attribs` and native type hints for attrs classes.

View File

@@ -14,7 +14,6 @@ services:
# failure
restart: unless-stopped
# See the readme for a full documentation of the environment settings
# NOTE: You must edit homeserver.yaml to use postgres, it defaults to sqlite
environment:
- SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
volumes:

View File

@@ -92,6 +92,22 @@ new PromConsole.Graph({
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
name: "[[job]]-[[index]]",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Calls"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>

debian/changelog vendored (42 changes)
View File

@@ -1,45 +1,3 @@
matrix-synapse-py3 (1.50.1) stable; urgency=medium
* New synapse release 1.50.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jan 2022 16:06:26 +0000
matrix-synapse-py3 (1.50.0) stable; urgency=medium
* New synapse release 1.50.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jan 2022 10:40:38 +0000
matrix-synapse-py3 (1.50.0~rc2) stable; urgency=medium
* New synapse release 1.50.0~rc2.
-- Synapse Packaging team <packages@matrix.org> Fri, 14 Jan 2022 11:18:06 +0000
matrix-synapse-py3 (1.50.0~rc1) stable; urgency=medium
* New synapse release 1.50.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Jan 2022 12:36:17 +0000
matrix-synapse-py3 (1.49.2) stable; urgency=medium
* New synapse release 1.49.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Dec 2021 17:31:03 +0000
matrix-synapse-py3 (1.49.1) stable; urgency=medium
* New synapse release 1.49.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Dec 2021 11:07:30 +0000
matrix-synapse-py3 (1.49.0) stable; urgency=medium
* New synapse release 1.49.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Dec 2021 12:39:46 +0000
matrix-synapse-py3 (1.49.0~rc1) stable; urgency=medium
* New synapse release 1.49.0~rc1.

View File

@@ -22,5 +22,5 @@ Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}
Also note that when joining a public room on a different HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.

View File

@@ -1,17 +1,14 @@
# Dockerfile to build the matrixdotorg/synapse docker images.
#
# Note that it uses features which are only available in BuildKit - see
# https://docs.docker.com/go/buildkit/ for more information.
#
# To build the image, run `docker build` command from the root of the
# synapse repository:
#
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile .
# docker build -f docker/Dockerfile .
#
# There is an optional PYTHON_VERSION build argument which sets the
# version of python to build against: for example:
#
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 .
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=3.8
@@ -22,16 +19,7 @@ ARG PYTHON_VERSION=3.8
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
# install the OS build deps
#
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt, to improve rebuild speeds on
# slow connections.
#
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y \
build-essential \
libffi-dev \
libjpeg-dev \
@@ -56,8 +44,7 @@ COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --prefix="/install" --no-warn-script-location \
RUN pip install --prefix="/install" --no-warn-script-location \
/synapse[all]
# Copy over the rest of the project
@@ -79,10 +66,7 @@ LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/syna
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y \
curl \
gosu \
libjpeg62-turbo \

View File

@@ -16,7 +16,7 @@ ARG distro=""
### Stage 0: build a dh-virtualenv
###
# This is only really needed on focal, since other distributions we
# This is only really needed on bionic and focal, since other distributions we
# care about have a recent version of dh-virtualenv by default. Unfortunately,
# it looks like focal is going to be with us for a while.
#
@@ -36,8 +36,9 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
wget
# fetch and unpack the package
# TODO: Upgrade to 1.2.2 once bionic is dropped (1.2.2 requires debhelper 12; bionic has only 11)
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/refs/tags/1.2.2.tar.gz
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
# install its build deps. We do another apt-cache-update here, because we might
@@ -85,12 +86,12 @@ RUN apt-get update -qq -o Acquire::Languages=none \
libpq-dev \
xmlsec1
COPY --from=builder /dh-virtualenv_1.2.2-1_all.deb /
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /
# install dhvirtualenv. Update the apt cache again first, in case we got a
# cached cache from docker the first time.
RUN apt-get update -qq -o Acquire::Languages=none \
&& apt-get install -yq /dh-virtualenv_1.2.2-1_all.deb
&& apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb
WORKDIR /synapse/source
ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]

View File

@@ -68,10 +68,6 @@ The following environment variables are supported in `generate` mode:
directories. If unset, and no user is set via `docker run --user`, defaults
to `991`, `991`.
## Postgres
By default the config will use SQLite. See the [docs on using Postgres](https://github.com/matrix-org/synapse/blob/develop/docs/postgres.md) for more info on how to use Postgres. Until this section is improved [this issue](https://github.com/matrix-org/synapse/issues/8304) may provide useful information.
## Running synapse
Once you have a valid configuration file, you can start synapse as follows:

View File

@@ -30,7 +30,6 @@
- [SSO Mapping Providers](sso_mapping_providers.md)
- [Password Auth Providers](password_auth_providers.md)
- [JSON Web Tokens](jwt.md)
- [Refresh Tokens](usage/configuration/user_authentication/refresh_tokens.md)
- [Registration Captcha](CAPTCHA_SETUP.md)
- [Application Services](application_services.md)
- [Server Notices](server_notices.md)
@@ -62,6 +61,7 @@
- [Registration Tokens](usage/administration/admin_api/registration_tokens.md)
- [Manipulate Room Membership](admin_api/room_membership.md)
- [Rooms](admin_api/rooms.md)
- [Spaces](usage/administration/admin_api/spaces.md)
- [Server Notices](admin_api/server_notices.md)
- [Statistics](admin_api/statistics.md)
- [Users](admin_api/user_admin_api.md)

View File

@@ -15,10 +15,9 @@ server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
```jsonc
```json
{
"name": "@user:example.com",
"displayname": "User", // can be null if not set
"displayname": "User",
"threepids": [
{
"medium": "email",
@@ -33,11 +32,11 @@ It returns a JSON body like the following:
"validated_at": 1586458409743
}
],
"avatar_url": "<avatar_url>", // can be null if not set
"is_guest": 0,
"avatar_url": "<avatar_url>",
"admin": 0,
"deactivated": 0,
"shadow_banned": 0,
"password_hash": "$2b$12$p9B4GkqYdRTPGD",
"creation_ts": 1560432506,
"appservice_id": null,
"consent_server_notice_sent": null,
@@ -481,81 +480,6 @@ The following fields are returned in the JSON response body:
- `joined_rooms` - An array of `room_id`.
- `total` - Number of rooms.
## Account Data
Gets information about account data for a specific `user_id`.
The API is:
```
GET /_synapse/admin/v1/users/<user_id>/accountdata
```
A response body like the following is returned:
```json
{
"account_data": {
"global": {
"m.secret_storage.key.LmIGHTg5W": {
"algorithm": "m.secret_storage.v1.aes-hmac-sha2",
"iv": "fwjNZatxg==",
"mac": "eWh9kNnLWZUNOgnc="
},
"im.vector.hide_profile": {
"hide_profile": true
},
"org.matrix.preview_urls": {
"disable": false
},
"im.vector.riot.breadcrumb_rooms": {
"rooms": [
"!LxcBDAsDUVAfJDEo:matrix.org",
"!MAhRxqasbItjOqxu:matrix.org"
]
},
"m.accepted_terms": {
"accepted": [
"https://example.org/somewhere/privacy-1.2-en.html",
"https://example.org/somewhere/terms-2.0-en.html"
]
},
"im.vector.setting.breadcrumbs": {
"recent_rooms": [
"!MAhRxqasbItqxuEt:matrix.org",
"!ZtSaPCawyWtxiImy:matrix.org"
]
}
},
"rooms": {
"!GUdfZSHUJibpiVqHYd:matrix.org": {
"m.fully_read": {
"event_id": "$156334540fYIhZ:matrix.org"
}
},
"!tOZwOOiqwCYQkLhV:matrix.org": {
"m.fully_read": {
"event_id": "$xjsIyp4_NaVl2yPvIZs_k1Jl8tsC_Sp23wjqXPno"
}
}
}
}
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - fully qualified: for example, `@user:server.com`.
**Response**
The following fields are returned in the JSON response body:
- `account_data` - A map containing the account data for the user
- `global` - A map containing the global account data for the user
- `rooms` - A map containing the account data per room for the user
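For illustration, a request to this endpoint might look like the following (hedged sketch: the server name, user ID, and `$ACCESS_TOKEN` are placeholders, not values from this change):

```sh
# Fetch the global and per-room account data for @user:server.com,
# authenticating with a server admin's access token.
curl --header "Authorization: Bearer $ACCESS_TOKEN" \
    "https://server.com/_synapse/admin/v1/users/@user:server.com/accountdata"
```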
## User media
### List media uploaded by a user

View File

@@ -14,8 +14,8 @@ i.e. when a version reaches End of Life Synapse will withdraw support for that
version in future releases.
Details on the upstream support life cycles for Python and PostgreSQL are
documented at [https://endoflife.date/python](https://endoflife.date/python) and
[https://endoflife.date/postgresql](https://endoflife.date/postgresql).
documented at https://endoflife.date/python and
https://endoflife.date/postgresql.
Context

View File

@@ -20,9 +20,7 @@ recommended for development. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
on Windows is not officially supported.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough.
Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
@@ -171,27 +169,6 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
By default, tests will use an in-memory SQLite database for test data. For additional
help with debugging, one can use an on-disk SQLite database file instead, in order to
review database state during and after running tests. This can be done by setting
the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the
database state to be stored in a file named `test.db` under the trial process'
working directory. Typically, this ends up being `_trial_temp/test.db`. For example:
```sh
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests
```
The database file can then be inspected with:
```sh
sqlite3 _trial_temp/test.db
```
Note that the database file is cleared at the beginning of each test run. Thus it
will always only contain the data generated by the *last run test*. Though generally
when debugging, one is only running a single test anyway.
### Running tests under PostgreSQL
Invoking `trial` as above will use an in-memory SQLite database. This is great for

View File

@@ -35,12 +35,7 @@ When Synapse is asked to preview a URL it does the following:
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
1. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
2. Convert the oEmbed response to an Open Graph response.
3. Override any Open Graph data from the HTML with data from oEmbed.
4. If an image exists in the Open Graph response:
3. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
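A tiny, hedged sketch of the override behaviour described in step 3 above (the dictionaries contain made-up values and are not Synapse's internal representation): Open Graph values derived from the oEmbed response take precedence over those parsed from the HTML.

```py
# Open Graph data parsed from the page's HTML (illustrative values only).
og_from_html = {"og:title": "Title from <meta> tags", "og:description": "HTML description"}

# Open Graph data converted from the autodiscovered oEmbed response.
og_from_oembed = {"og:title": "Title from oEmbed", "og:image": "mxc://example.com/thumbnail"}

# oEmbed-derived values override the HTML-derived ones where they overlap.
merged_og = {**og_from_html, **og_from_oembed}
assert merged_og["og:title"] == "Title from oEmbed"
```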

View File

@@ -390,6 +390,9 @@ oidc_providers:
### Facebook
Like GitHub, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
one, so it requires a little more configuration.
0. You will need a Facebook developer account. You can register for one
[here](https://developers.facebook.com/async/registration/).
1. On the [apps](https://developers.facebook.com/apps/) page of the developer
@@ -409,28 +412,24 @@ Synapse config:
idp_name: Facebook
idp_brand: "facebook" # optional: styling hint for clients
discover: false
issuer: "https://www.facebook.com"
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "email"]
authorization_endpoint: "https://facebook.com/dialog/oauth"
token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token"
jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
authorization_endpoint: https://facebook.com/dialog/oauth
token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
user_profile_method: "userinfo_endpoint"
userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
user_mapping_provider:
config:
subject_claim: "id"
display_name_template: "{{ user.name }}"
email_template: "{{ '{{ user.email }}' }}"
```
Relevant documents:
* [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow)
* [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/)
* [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user)
Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration),
but it has a `response_types_supported` which excludes "code" (which we rely on, and
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.
* https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
* Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
* Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
### Gitea

View File

@@ -1,6 +1,6 @@
# Using Postgres
Synapse supports PostgreSQL versions 10 or later.
Synapse supports PostgreSQL versions 9.6 or later.
## Install postgres client libraries

View File

@@ -63,7 +63,7 @@ server {
server_name matrix.example.com;
location ~ ^(/_matrix|/_synapse/client) {
location ~* ^(\/_matrix|\/_synapse\/client) {
# note: do not add a path (even a single /) after the port in `proxy_pass`,
# otherwise nginx will canonicalise the URI and cause signature verification
# errors.

View File

@@ -37,7 +37,7 @@
# Server admins can expand Synapse's functionality with external modules.
#
# See https://matrix-org.github.io/synapse/latest/modules/index.html for more
# See https://matrix-org.github.io/synapse/latest/modules.html for more
# documentation on how to configure or create custom modules for Synapse.
#
modules:
@@ -164,7 +164,7 @@ presence:
# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
# https://matrix.org/docs/spec/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".
@@ -1488,7 +1488,6 @@ room_prejoin_state:
# - m.room.encryption
# - m.room.name
# - m.room.create
# - m.room.topic
#
# Uncomment the following to disable these defaults (so that only the event
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
@@ -1503,21 +1502,6 @@ room_prejoin_state:
#additional_event_types:
# - org.example.custom.event.type
# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true
# A list of application service config files to use
#
@@ -1885,13 +1869,10 @@ saml2_config:
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint, or to rely on the data returned in the id_token from the
# token_endpoint.
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to

View File

@@ -164,7 +164,7 @@ xbps-install -S synapse
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py38-matrix-synapse`
- Packages: `pkg install py37-matrix-synapse`
#### OpenBSD

View File

@@ -49,12 +49,12 @@ comment these options out and use those specified by the module instead.
A custom mapping provider must specify the following methods:
* `def __init__(self, parsed_config)`
* `__init__(self, parsed_config)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
* `def parse_config(config)`
* `parse_config(config)`
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - A `dict` representing the parsed content of the
@@ -63,13 +63,13 @@ A custom mapping provider must specify the following methods:
any option values they need here.
- Whatever is returned will be passed back to the user mapping provider module's
`__init__` method during construction.
* `def get_remote_user_id(self, userinfo)`
* `get_remote_user_id(self, userinfo)`
- Arguments:
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
information from.
- This method must return a string, which is the unique, immutable identifier
for the user. Commonly the `sub` claim of the response.
* `async def map_user_attributes(self, userinfo, token, failures)`
* `map_user_attributes(self, userinfo, token, failures)`
- This method must be async.
- Arguments:
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
@@ -91,7 +91,7 @@ A custom mapping provider must specify the following methods:
during a user's first login. Once a localpart has been associated with a
remote user ID (see `get_remote_user_id`) it cannot be updated.
- `displayname`: An optional string, the display name for the user.
* `async def get_extra_attributes(self, userinfo, token)`
* `get_extra_attributes(self, userinfo, token)`
- This method must be async.
- Arguments:
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
@@ -125,15 +125,15 @@ comment these options out and use those specified by the module instead.
A custom mapping provider must specify the following methods:
* `def __init__(self, parsed_config, module_api)`
* `__init__(self, parsed_config, module_api)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
- `module_api` - a `synapse.module_api.ModuleApi` object which provides the
stable API available for extension modules.
* `def parse_config(config)`
- **This method should have the `@staticmethod` decoration.**
* `parse_config(config)`
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - A `dict` representing the parsed content of the
`saml_config.user_mapping_provider.config` homeserver config option.
@@ -141,15 +141,15 @@ A custom mapping provider must specify the following methods:
any option values they need here.
- Whatever is returned will be passed back to the user mapping provider module's
`__init__` method during construction.
* `def get_saml_attributes(config)`
- **This method should have the `@staticmethod` decoration.**
* `get_saml_attributes(config)`
- This method should have the `@staticmethod` decoration.
- Arguments:
- `config` - A object resulting from a call to `parse_config`.
- Returns a tuple of two sets. The first set equates to the SAML auth
response attributes that are required for the module to function, whereas
the second set consists of those attributes which can be used if available,
but are not necessary.
* `def get_remote_user_id(self, saml_response, client_redirect_url)`
* `get_remote_user_id(self, saml_response, client_redirect_url)`
- Arguments:
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
information from.
@@ -157,7 +157,7 @@ A custom mapping provider must specify the following methods:
redirected to.
- This method must return a string, which is the unique, immutable identifier
for the user. Commonly the `uid` claim of the response.
* `def saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
- Arguments:
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
information from.
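To make the OIDC method list above more concrete, here is a minimal, hedged sketch of a custom mapping provider (the class name, the `localpart_claim` option, and the claim fallbacks are illustrative choices, not part of Synapse's API; only the method names and signatures come from the documentation above):

```py
import attr


@attr.s(auto_attribs=True, frozen=True)
class MappingConfig:
    # Illustrative option: which userinfo claim to use for the localpart.
    localpart_claim: str = "preferred_username"


class ExampleOidcMappingProvider:
    def __init__(self, parsed_config):
        # `parsed_config` is whatever parse_config() returned.
        self._config = parsed_config

    @staticmethod
    def parse_config(config):
        # `config` is the dict from `user_mapping_provider.config` in the
        # homeserver config for this OIDC provider.
        return MappingConfig(**config)

    def get_remote_user_id(self, userinfo):
        # Unique, immutable identifier for the user; commonly the "sub" claim.
        return userinfo["sub"]

    async def map_user_attributes(self, userinfo, token, failures):
        # `failures` counts earlier attempts whose localpart collided with an
        # existing user, so append a suffix to disambiguate.
        localpart = userinfo.get(self._config.localpart_claim) or userinfo["sub"]
        if failures:
            localpart += str(failures)
        return {
            "localpart": localpart,
            "displayname": userinfo.get("name"),
        }

    async def get_extra_attributes(self, userinfo, token):
        # Extra attributes to be returned to the client during login.
        return {}
```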

View File

@@ -15,8 +15,8 @@ The following sections describe how to install [coturn](<https://github.com/cotu
For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint with a public IP.
Hosting TURN behind NAT requires port forwarding and for the NAT gateway to have a public IP.
However, even with appropriate configuration, NAT is known to cause issues and to often not work.
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and to often not work.
## `coturn` setup
@@ -103,23 +103,7 @@ This will install and start a systemd service called `coturn`.
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# recommended additional local peers to block, to mitigate external access to internal services.
# https://www.rtcsec.com/article/slack-webrtc-turn-compromise-and-bug-bounty/#how-to-fix-an-open-turn-relay-to-address-this-vulnerability
no-multicast-peers
denied-peer-ip=0.0.0.0-0.255.255.255
denied-peer-ip=100.64.0.0-100.127.255.255
denied-peer-ip=127.0.0.0-127.255.255.255
denied-peer-ip=169.254.0.0-169.254.255.255
denied-peer-ip=192.0.0.0-192.0.0.255
denied-peer-ip=192.0.2.0-192.0.2.255
denied-peer-ip=192.88.99.0-192.88.99.255
denied-peer-ip=198.18.0.0-198.19.255.255
denied-peer-ip=198.51.100.0-198.51.100.255
denied-peer-ip=203.0.113.0-203.0.113.255
denied-peer-ip=240.0.0.0-255.255.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
# this should be one of the turn server's listening IPs
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
@@ -137,58 +121,34 @@ This will install and start a systemd service called `coturn`.
# TLS private key file
pkey=/path/to/privkey.pem
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
#no-tls
#no-dtls
```
In this case, replace the `turn:` schemes in the `turn_uris` settings below
In this case, replace the `turn:` schemes in the `turn_uri` settings below
with `turns:`.
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (By default: 3478 and 5349 for TURN
traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
for the UDP relay.)
1. If your TURN server is behind NAT, the NAT gateway must have an external,
publicly-reachable IP address. You must configure coturn to advertise that
address to connecting clients:
1. We do not recommend running a TURN server behind NAT, and are not aware of
anyone doing so successfully.
If you want to try it anyway, you will at least need to tell coturn its
external IP address:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
external-ip=192.88.99.1
```
You may optionally limit the TURN server to listen only on the local
address that is mapped by NAT to the external address:
... and your NAT gateway must forward all of the relayed ports directly
(eg, port 56789 on the external IP must always be forwarded to port
56789 on the internal IP).
```
listening-ip=INTERNAL_TURNSERVER_IPv4_ADDRESS
```
If your NAT gateway is reachable over both IPv4 and IPv6, you may
configure coturn to advertise each available address:
```
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
external-ip=EXTERNAL_NAT_IPv6_ADDRESS
```
When advertising an external IPv6 address, ensure that the firewall and
network settings of the system running your TURN server are configured to
accept IPv6 traffic, and that the TURN server is listening on the local
IPv6 address that is mapped by NAT to the external IPv6 address.
If you get this working, let us know!
1. (Re)start the turn server:
@@ -256,16 +216,15 @@ connecting". Unfortunately, troubleshooting this can be tricky.
Here are a few things to try:
* Check that your TURN server is not behind NAT. As above, we're not aware of
anyone who has successfully set this up.
* Check that you have opened your firewall to allow TCP and UDP traffic to the
TURN ports (normally 3478 and 5349).
* Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).
* Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted)
TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
voice & video WebRTC traffic is always encrypted.)
* Some WebRTC implementations (notably, that of Google Chrome) appear to get
confused by TURN servers which are reachable over IPv6 (this appears to be
an unexpected side-effect of its handling of multiple IP addresses as
@@ -275,18 +234,6 @@ Here are a few things to try:
Try removing any AAAA records for your TURN server, so that it is only
reachable over IPv4.
* If your TURN server is behind NAT:
* double-check that your NAT gateway is correctly forwarding all TURN
ports (normally 3478 & 5349 for TCP & UDP TURN traffic, and 49152-65535 for the UDP
relay) to the NAT-internal address of your TURN server. If advertising
both IPv4 and IPv6 external addresses via the `external-ip` option, ensure
that the NAT is forwarding both IPv4 and IPv6 traffic to the IPv4 and IPv6
internal addresses of your TURN server. When in doubt, remove AAAA records
for your TURN server and specify only an IPv4 address as your `external-ip`.
* ensure that your TURN server uses the NAT gateway as its default route.
* Enable more verbose logging in coturn via the `verbose` setting:
```

View File

@@ -85,17 +85,6 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.50.0
## Dropping support for old Python and Postgres versions
In line with our [deprecation policy](deprecation_policy.md),
we've dropped support for Python 3.6 and PostgreSQL 9.6, as they are no
longer supported upstream.
This release of Synapse requires Python 3.7+ and PostgreSQL 10+.
# Upgrading to v1.47.0
## Removal of old Room Admin API

View File

@@ -0,0 +1,57 @@
# Spaces API
This API allows a server administrator to manage spaces.
## Remove local user
This API forces a local user to leave all non-public rooms in a space.
The space itself is always left, regardless of whether it is public.
The operation may succeed partially if the user fails to leave some rooms.
The API is:
```
DELETE /_synapse/admin/v1/rooms/<room_id>/hierarchy/members/<user_id>
```
with an optional body of:
```json
{
"include_remote_spaces": true,
}
```
`include_remote_spaces` controls whether to process subspaces that the
local homeserver is not participating in. The listings of such subspaces
have to be retrieved over federation and their accuracy cannot be
guaranteed.
Returning:
```json
{
"left_rooms": ["!room1:example.net", "!room2:example.net", ...],
"inaccessible_rooms": ["!subspace1:example.net", ...],
"failed_rooms": {
"!room4:example.net": "Failed to leave room.",
...
}
}
```
`left_rooms`: A list of rooms that the user has been made to leave.
`inaccessible_rooms`: A list of rooms and spaces that the local
homeserver is not in, and may have not been fully processed. Rooms may
appear here if:
* The room is a space that the local homeserver is not in, and so its
full list of child rooms could not be determined.
* The room is inaccessible to the local homeserver, and it is not
known whether the room is a subspace containing further rooms.
`failed_rooms`: A dictionary of errors encountered when leaving rooms.
The keys of the dictionary are room IDs and the values of the dictionary
are error messages.
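For illustration, the endpoint documented above could be invoked with `curl` along these lines (hedged sketch: the homeserver name, room ID, user ID, and `$ACCESS_TOKEN` are placeholders, not values from this change):

```sh
# Force @alice:example.com to leave every non-public room in the space
# !space:example.com, also processing subspaces only known over federation.
# The room ID is URL-encoded ("!" -> "%21", ":" -> "%3A").
curl --request DELETE \
    --header "Authorization: Bearer $ACCESS_TOKEN" \
    --header "Content-Type: application/json" \
    --data '{"include_remote_spaces": true}' \
    "https://example.com/_synapse/admin/v1/rooms/%21space%3Aexample.com/hierarchy/members/@alice:example.com"
```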

View File

@@ -1,139 +0,0 @@
# Refresh Tokens
Synapse supports refresh tokens since version 1.49 (some earlier versions had support for an earlier, experimental draft of [MSC2918] which is not compatible).
[MSC2918]: https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens
## Background and motivation
Synapse users' sessions are identified by **access tokens**; access tokens are
issued to users on login. Each session gets a unique access token which identifies
it; the access token must be kept secret as it grants access to the user's account.
Traditionally, these access tokens were eternally valid (at least until the user
explicitly chose to log out).
In some cases, it may be desirable for these access tokens to expire so that the
potential damage caused by leaking an access token is reduced.
On the other hand, forcing a user to re-authenticate (log in again) often might
be too much of an inconvenience.
**Refresh tokens** are a mechanism to avoid some of this inconvenience whilst
still getting most of the benefits of short access token lifetimes.
Refresh tokens are also a concept present in OAuth 2 — further reading is available
[here](https://datatracker.ietf.org/doc/html/rfc6749#section-1.5).
When refresh tokens are in use, both an access token and a refresh token will be
issued to users on login. The access token will expire after a predetermined amount
of time, but otherwise works in the same way as before. When the access token is
close to expiring (or has expired), the user's client should present the homeserver
(Synapse) with the refresh token.
The homeserver will then generate a new access token and refresh token for the user
and return them. The old refresh token is invalidated and cannot be used again*.
Finally, refresh tokens also make it possible for sessions to be logged out if they
are inactive for too long, before the session naturally ends; see the configuration
guide below.
*To prevent issues if clients lose connection half-way through refreshing a token,
the refresh token is only invalidated once the new access token has been used at
least once. For all intents and purposes, the above simplification is sufficient.
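For illustration, the request and response bodies of the refresh exchange defined by [MSC2918] look roughly as follows (the token values here are placeholders). The client presents its current refresh token:
```json
{
    "refresh_token": "<old_refresh_token>"
}
```
and, if it is valid, receives a new access token and refresh token together with the new access token's lifetime:
```json
{
    "access_token": "<new_access_token>",
    "refresh_token": "<new_refresh_token>",
    "expires_in_ms": 300000
}
```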
## Caveats
There are some caveats:
* If a third party gets both your access token and refresh token, they will be able to
continue to enjoy access to your session.
* This is still an improvement because you (the user) will notice when *your*
session expires and you're not able to use your refresh token.
That would be a giveaway that someone else has compromised your session.
You would be able to log in again and terminate that session.
Previously (with long-lived access tokens), a third party that has your access
token could go undetected for a very long time.
* Clients need to implement support for refresh tokens in order for them to be a
useful mechanism.
* It is up to homeserver administrators if they want to issue long-lived access
tokens to clients not implementing refresh tokens.
* For compatibility, it is likely that they should, at least until client support
is widespread.
* Users with clients that support refresh tokens will still benefit from the
added security; it's not possible to downgrade a session to using long-lived
access tokens so this effectively gives users the choice.
* In a closed environment where all users use known clients, this may not be
an issue as the homeserver administrator can know if the clients have refresh
token support. In that case, the non-refreshable access token lifetime
may be set to a short duration so that a similar level of security is provided.
## Configuration Guide
The following configuration options, in the `registration` section, are related:
* `session_lifetime`: maximum length of a session, even if it's refreshed.
In other words, the client must log in again after this time period.
In most cases, this can be unset (infinite) or set to a long time (years or months).
* `refreshable_access_token_lifetime`: lifetime of access tokens that are created
by clients supporting refresh tokens.
This should be short; a good value might be 5 minutes (`5m`).
* `nonrefreshable_access_token_lifetime`: lifetime of access tokens that are created
by clients which don't support refresh tokens.
Make this short if you want to effectively force use of refresh tokens.
Make this long if you don't want to inconvenience users of clients which don't
support refresh tokens (by forcing them to frequently re-authenticate using
login credentials).
* `refresh_token_lifetime`: lifetime of refresh tokens.
In other words, the client must refresh within this time period to maintain its session.
Unless you want to log inactive sessions out, it is often fine to use a long
value here or even leave it unset (infinite).
Beware that making it too short will inconvenience clients that do not connect
very often, including mobile clients and clients of infrequent users (by making
it more difficult for them to refresh in time, which may force them to
re-authenticate using login credentials).
**Note:** All four options above only apply when tokens are created (by logging in or refreshing).
Changes to these settings do not apply retroactively.
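As a minimal sketch (the values are illustrative only, not recommendations), a server that wants to push clients towards refresh tokens might use:
```yaml
session_lifetime: 1y
refreshable_access_token_lifetime: 5m
nonrefreshable_access_token_lifetime: 1h
refresh_token_lifetime: 30d
```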
### Using refresh token expiry to log out inactive sessions
If you'd like to force sessions to be logged out upon inactivity, you can enable
refreshable access token expiry and refresh token expiry.
This works because a client must refresh at least once within a period of
`refresh_token_lifetime` in order to maintain valid credentials to access the
account.
(It's suggested that `refresh_token_lifetime` should be longer than
`refreshable_access_token_lifetime` and this section assumes that to be the case
for simplicity.)
Note: this will only affect sessions using refresh tokens. You may wish to
set a short `nonrefreshable_access_token_lifetime` to prevent this being bypassed
by clients that do not support refresh tokens.
#### Choosing values that guarantee permitting some inactivity
It may be desirable to permit some short periods of inactivity, for example to
accommodate brief outages in client connectivity.
The following model aims to provide guidance for choosing `refresh_token_lifetime`
and `refreshable_access_token_lifetime` to satisfy requirements of the form:
1. inactivity longer than `L` **MUST** cause the session to be logged out; and
2. inactivity shorter than `S` **MUST NOT** cause the session to be logged out.
This model makes the weakest assumption that all active clients will refresh as
needed to maintain an active access token, but no sooner.
*In reality, clients may refresh more often than this model assumes, but the
above requirements will still hold.*
To satisfy the above model,
* `refresh_token_lifetime` should be set to `L`; and
* `refreshable_access_token_lifetime` should be set to `L - S`.
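For example, assuming the requirements are `L = 7d` (a week of inactivity must log the session out) and `S = 1d` (up to a day of inactivity must be tolerated), this gives:
```yaml
refresh_token_lifetime: 7d
refreshable_access_token_lifetime: 6d
```
In the worst case a client's access token expires six days after its last refresh, leaving one further day in which a refresh is still accepted; after seven days without refreshing, the refresh token has certainly expired and the session is logged out.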

View File

@@ -25,9 +25,14 @@ exclude = (?x)
^(
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/__init__.py
|synapse/storage/databases/main/account_data.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/devices.py
|synapse/storage/databases/main/e2e_room_keys.py
|synapse/storage/databases/main/end_to_end_keys.py
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/event_push_actions.py
|synapse/storage/databases/main/events_bg_updates.py
|synapse/storage/databases/main/group_server.py
|synapse/storage/databases/main/metrics.py
|synapse/storage/databases/main/monthly_active_users.py
@@ -35,9 +40,12 @@ exclude = (?x)
|synapse/storage/databases/main/purge_events.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/receipts.py
|synapse/storage/databases/main/room.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/databases/main/search.py
|synapse/storage/databases/main/state.py
|synapse/storage/databases/main/stats.py
|synapse/storage/databases/main/transactions.py
|synapse/storage/databases/main/user_directory.py
|synapse/storage/schema/
@@ -99,6 +107,7 @@ exclude = (?x)
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/state/test_v2.py
|tests/storage/test_account_data.py
|tests/storage/test_background_update.py
|tests/storage/test_base.py
|tests/storage/test_client_ips.py
@@ -136,9 +145,6 @@ disallow_untyped_defs = True
[mypy-synapse.app.*]
disallow_untyped_defs = True
[mypy-synapse.appservice.*]
disallow_untyped_defs = True
[mypy-synapse.config._base]
disallow_untyped_defs = True
@@ -157,12 +163,6 @@ disallow_untyped_defs = False
[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.http.server]
disallow_untyped_defs = True
[mypy-synapse.logging.context]
disallow_untyped_defs = True
[mypy-synapse.metrics.*]
disallow_untyped_defs = True
@@ -181,48 +181,24 @@ disallow_untyped_defs = True
[mypy-synapse.state.*]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.account_data]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.client_ips]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.e2e_room_keys]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.end_to_end_keys]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.event_push_actions]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.events_bg_updates]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.events_worker]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.room]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.profile]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.stats]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.transactions]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True
@@ -247,9 +223,6 @@ disallow_untyped_defs = True
[mypy-tests.storage.test_user_directory]
disallow_untyped_defs = True
[mypy-tests.rest.admin.*]
disallow_untyped_defs = True
[mypy-tests.rest.client.test_directory]
disallow_untyped_defs = True
@@ -313,6 +286,9 @@ ignore_missing_imports = True
[mypy-netaddr]
ignore_missing_imports = True
[mypy-opentracing]
ignore_missing_imports = True
[mypy-parameterized.*]
ignore_missing_imports = True

View File

@@ -35,7 +35,7 @@
showcontent = true
[tool.black]
target-version = ['py37', 'py38', 'py39', 'py310']
target-version = ['py36']
exclude = '''
(

View File

@@ -24,6 +24,7 @@ DISTS = (
"debian:bullseye",
"debian:bookworm",
"debian:sid",
"ubuntu:bionic", # 18.04 LTS (our EOL forced by Py36 on 2021-12-23)
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:hirsute", # 21.04 (EOL 2022-01-05)
"ubuntu:impish", # 21.10 (EOL 2022-07)

View File

@@ -42,8 +42,8 @@ echo "--------------------------"
echo
matched=0
for f in $(git diff --diff-filter=d --name-only FETCH_HEAD... -- changelog.d); do
# check that any added newsfiles on this branch end with a full stop.
for f in $(git diff --name-only FETCH_HEAD... -- changelog.d); do
# check that any modified newsfiles on this branch end with a full stop.
lastchar=$(tr -d '\n' < "$f" | tail -c 1)
if [ "$lastchar" != '.' ] && [ "$lastchar" != '!' ]; then
echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2

View File

@@ -23,9 +23,6 @@
# Exit if a line returns a non-zero exit code
set -e
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# Change to the repository root
cd "$(dirname $0)/.."
@@ -50,7 +47,7 @@ if [[ -n "$WORKERS" ]]; then
COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
# And provide some more configuration to complement.
export COMPLEMENT_CA=true
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25
export COMPLEMENT_VERSION_CHECK_ITERATIONS=500
else
export COMPLEMENT_BASE_IMAGE=complement-synapse
COMPLEMENT_DOCKERFILE=Synapse.Dockerfile
@@ -68,5 +65,4 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...

View File

@@ -96,7 +96,7 @@ CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
# We pin black so that our tests don't start failing on new releases.
CONDITIONAL_REQUIREMENTS["lint"] = [
"isort==5.7.0",
"black==21.12b0",
"black==21.6b0",
"flake8-comprehensions",
"flake8-bugbear==21.3.2",
"flake8",
@@ -107,7 +107,6 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
"mypy-zope==0.3.2",
"types-bleach>=4.1.0",
"types-jsonschema>=3.2.0",
"types-opentracing>=2.4.2",
"types-Pillow>=8.3.4",
"types-pyOpenSSL>=20.0.7",
"types-PyYAML>=5.4.10",
@@ -120,7 +119,9 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
# Tests assume that all optional dependencies are installed.
#
# parameterized_class decorator was introduced in parameterized 0.7.0
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0"]
#
# We use `mock` library as that backports `AsyncMock` to Python 3.6
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0", "mock>=4.0.0"]
CONDITIONAL_REQUIREMENTS["dev"] = (
CONDITIONAL_REQUIREMENTS["lint"]
@@ -162,6 +163,7 @@ setup(
"Topic :: Communications :: Chat",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",

View File

@@ -17,12 +17,11 @@
from typing import Any, List, Optional, Type, Union
from twisted.internet import protocol
from twisted.internet.defer import Deferred
class RedisProtocol(protocol.Protocol):
def publish(self, channel: str, message: bytes): ...
def ping(self) -> "Deferred[None]": ...
def set(
async def ping(self) -> None: ...
async def set(
self,
key: str,
value: Any,
@@ -30,8 +29,8 @@ class RedisProtocol(protocol.Protocol):
pexpire: Optional[int] = None,
only_if_not_exists: bool = False,
only_if_exists: bool = False,
) -> "Deferred[None]": ...
def get(self, key: str) -> "Deferred[Any]": ...
) -> None: ...
async def get(self, key: str) -> Any: ...
class SubscriberProtocol(RedisProtocol):
def __init__(self, *args, **kwargs): ...

View File

@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.50.1"
__version__ = "1.49.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -46,9 +46,7 @@ class UserInfo:
ips: List[str] = attr.Factory(list)
def get_recent_users(
txn: LoggingTransaction, since_ms: int, exclude_app_service: bool
) -> List[UserInfo]:
def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
"""Fetches recently registered users and some info on them."""
sql = """
@@ -58,9 +56,6 @@ def get_recent_users(
AND deactivated = 0
"""
if exclude_app_service:
sql += " AND appservice_id IS NULL"
txn.execute(sql, (since_ms / 1000,))
user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]
@@ -118,7 +113,7 @@ def main() -> None:
"-e",
"--exclude-emails",
action="store_true",
help="Exclude users that have validated email addresses.",
help="Exclude users that have validated email addresses",
)
parser.add_argument(
"-u",
@@ -126,12 +121,6 @@ def main() -> None:
action="store_true",
help="Only print user IDs that match.",
)
parser.add_argument(
"-a",
"--exclude-app-service",
help="Exclude appservice users.",
action="store_true",
)
config = ReviewConfig()
@@ -144,7 +133,6 @@ def main() -> None:
since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
exclude_users_with_email = config_args.exclude_emails
exclude_users_with_appservice = config_args.exclude_app_service
include_context = not config_args.only_users
for database_config in config.database.databases:
@@ -155,7 +143,7 @@ def main() -> None:
with make_conn(database_config, engine, "review_recent_signups") as db_conn:
# This generates a type of Cursor, not LoggingTransaction.
user_infos = get_recent_users(db_conn.cursor(), since_ms, exclude_users_with_appservice) # type: ignore[arg-type]
user_infos = get_recent_users(db_conn.cursor(), since_ms) # type: ignore[arg-type]
for user_info in user_infos:
if exclude_users_with_email and user_info.emails:

View File

@@ -32,7 +32,7 @@ from synapse.appservice import ApplicationService
from synapse.events import EventBase
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import Requester, StateMap, UserID, create_requester
from synapse.util.caches.lrucache import LruCache
@@ -71,7 +71,6 @@ class Auth:
self._auth_blocking = AuthBlocking(self.hs)
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
self._macaroon_secret_key = hs.config.key.macaroon_secret_key
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
@@ -150,53 +149,13 @@ class Auth:
is invalid.
AuthError if access is denied for the user in the access token
"""
parent_span = active_span()
with start_active_span("get_user_by_req"):
requester = await self._wrapped_get_user_by_req(
request, allow_guest, rights, allow_expired
)
if parent_span:
if requester.authenticated_entity in self._force_tracing_for_users:
# request tracing is enabled for this user, so we need to force it
# tracing on for the parent span (which will be the servlet span).
#
# It's too late for the get_user_by_req span to inherit the setting,
# so we also force it on for that.
force_tracing()
force_tracing(parent_span)
parent_span.set_tag(
"authenticated_entity", requester.authenticated_entity
)
parent_span.set_tag("user_id", requester.user.to_string())
if requester.device_id is not None:
parent_span.set_tag("device_id", requester.device_id)
if requester.app_service is not None:
parent_span.set_tag("appservice_id", requester.app_service.id)
return requester
async def _wrapped_get_user_by_req(
self,
request: SynapseRequest,
allow_guest: bool,
rights: str,
allow_expired: bool,
) -> Requester:
"""Helper for get_user_by_req
Once get_user_by_req has set up the opentracing span, this does the actual work.
"""
try:
ip_addr = request.getClientIP()
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
(
user_id,
device_id,
app_service,
) = await self._get_appservice_user_id_and_device_id(request)
user_id, app_service = await self._get_appservice_user_id(request)
if user_id and app_service:
if ip_addr and self._track_appservice_user_ips:
await self.store.insert_client_ip(
@@ -204,16 +163,18 @@ class Auth:
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id="dummy-device"
if device_id is None
else device_id, # stubbed
device_id="dummy-device", # stubbed
)
requester = create_requester(
user_id, app_service=app_service, device_id=device_id
)
requester = create_requester(user_id, app_service=app_service)
request.requester = user_id
if user_id in self._force_tracing_for_users:
opentracing.force_tracing()
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("user_id", user_id)
opentracing.set_tag("appservice_id", app_service.id)
return requester
user_info = await self.get_user_by_access_token(
@@ -247,18 +208,6 @@ class Auth:
user_agent=user_agent,
device_id=device_id,
)
# Track also the puppeted user client IP if enabled and the user is puppeting
if (
user_info.user_id != user_info.token_owner
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=user_info.user_id,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=device_id,
)
if is_guest and not allow_guest:
raise AuthError(
@@ -283,6 +232,13 @@ class Auth:
)
request.requester = requester
if user_info.token_owner in self._force_tracing_for_users:
opentracing.force_tracing()
opentracing.set_tag("authenticated_entity", user_info.token_owner)
opentracing.set_tag("user_id", user_info.user_id)
if device_id:
opentracing.set_tag("device_id", device_id)
return requester
except KeyError:
raise MissingClientTokenError()
@@ -318,81 +274,33 @@ class Auth:
403, "Application service has not registered this user (%s)" % user_id
)
async def _get_appservice_user_id_and_device_id(
async def _get_appservice_user_id(
self, request: Request
) -> Tuple[Optional[str], Optional[str], Optional[ApplicationService]]:
"""
Given a request, reads the request parameters to determine:
- whether it's an application service that's making this request
- what user the application service should be treated as controlling
(the user_id URI parameter allows an application service to masquerade
any applicable user in its namespace)
- what device the application service should be treated as controlling
(the device_id[^1] URI parameter allows an application service to masquerade
as any device that exists for the relevant user)
[^1] Unstable and provided by MSC3202.
Must use `org.matrix.msc3202.device_id` in place of `device_id` for now.
Returns:
3-tuple of
(user ID?, device ID?, application service?)
Postconditions:
- If an application service is returned, so is a user ID
- A user ID is never returned without an application service
- A device ID is never returned without a user ID or an application service
- The returned application service, if present, is permitted to control the
returned user ID.
- The returned device ID, if present, has been checked to be a valid device ID
for the returned user ID.
"""
DEVICE_ID_ARG_NAME = b"org.matrix.msc3202.device_id"
) -> Tuple[Optional[str], Optional[ApplicationService]]:
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(request)
)
if app_service is None:
return None, None, None
return None, None
if app_service.ip_range_whitelist:
ip_address = IPAddress(request.getClientIP())
if ip_address not in app_service.ip_range_whitelist:
return None, None, None
return None, None
# This will always be set by the time Twisted calls us.
assert request.args is not None
if b"user_id" in request.args:
effective_user_id = request.args[b"user_id"][0].decode("utf8")
await self.validate_appservice_can_control_user_id(
app_service, effective_user_id
)
else:
effective_user_id = app_service.sender
if b"user_id" not in request.args:
return app_service.sender, app_service
effective_device_id: Optional[str] = None
user_id = request.args[b"user_id"][0].decode("utf8")
await self.validate_appservice_can_control_user_id(app_service, user_id)
if (
self.hs.config.experimental.msc3202_device_masquerading_enabled
and DEVICE_ID_ARG_NAME in request.args
):
effective_device_id = request.args[DEVICE_ID_ARG_NAME][0].decode("utf8")
# We only just set this so it can't be None!
assert effective_device_id is not None
device_opt = await self.store.get_device(
effective_user_id, effective_device_id
)
if device_opt is None:
# For now, use 400 M_EXCLUSIVE if the device doesn't exist.
# This is an open thread of discussion on MSC3202 as of 2021-12-09.
raise AuthError(
400,
f"Application service trying to use a device that doesn't exist ('{effective_device_id}' for {effective_user_id})",
Codes.EXCLUSIVE,
)
if app_service.sender == user_id:
return app_service.sender, app_service
return effective_user_id, effective_device_id, app_service
return user_id, app_service
async def get_user_by_access_token(
self,

View File

@@ -253,9 +253,5 @@ class GuestAccess:
FORBIDDEN: Final = "forbidden"
class ReceiptTypes:
READ: Final = "m.read"
class ReadReceiptEventFields:
MSC2285_HIDDEN: Final = "org.matrix.msc2285.hidden"

View File

@@ -351,7 +351,8 @@ class Filter:
True if the event matches the filter.
"""
# We usually get the full "events" as dictionaries coming through,
# except for presence which actually gets passed around as its own type.
# except for presence which actually gets passed around as its own
# namedtuple type.
if isinstance(event, UserPresenceState):
user_id = event.user_id
field_matchers = {

View File

@@ -46,41 +46,41 @@ class RoomDisposition:
UNSTABLE = "unstable"
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class RoomVersion:
"""An object which describes the unique attributes of a room version."""
identifier: str # the identifier for this version
disposition: str # one of the RoomDispositions
event_format: int # one of the EventFormatVersions
state_res: int # one of the StateResolutionVersions
enforce_key_validity: bool
identifier = attr.ib(type=str) # the identifier for this version
disposition = attr.ib(type=str) # one of the RoomDispositions
event_format = attr.ib(type=int) # one of the EventFormatVersions
state_res = attr.ib(type=int) # one of the StateResolutionVersions
enforce_key_validity = attr.ib(type=bool)
# Before MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth: bool
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
# * Floats
# * NaN, Infinity, -Infinity
strict_canonicaljson: bool
strict_canonicaljson = attr.ib(type=bool)
# MSC2209: Check 'notifications' key while verifying
# m.room.power_levels auth rules.
limit_notifications_power_levels: bool
limit_notifications_power_levels = attr.ib(type=bool)
# MSC2174/MSC2176: Apply updated redaction rules algorithm.
msc2176_redaction_rules: bool
msc2176_redaction_rules = attr.ib(type=bool)
# MSC3083: Support the 'restricted' join_rule.
msc3083_join_rules: bool
msc3083_join_rules = attr.ib(type=bool)
# MSC3375: Support for the proper redaction rules for MSC3083. This mustn't
# be enabled if MSC3083 is not.
msc3375_redaction_rules: bool
msc3375_redaction_rules = attr.ib(type=bool)
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking: bool
msc2403_knocking = attr.ib(type=bool)
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical: bool
msc2716_historical = attr.ib(type=bool)
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
msc2716_redactions: bool
msc2716_redactions = attr.ib(type=bool)
class RoomVersions:

View File

@@ -60,7 +60,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor
@@ -159,7 +159,6 @@ def start_reactor(
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
install_gc_manager()
run_command()
# make sure that we run the reactor with the sentinel log context,

View File

@@ -11,14 +11,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
from enum import Enum
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Pattern
import attr
from netaddr import IPSet
from typing import TYPE_CHECKING, Iterable, List, Match, Optional
from synapse.api.constants import EventTypes
from synapse.events import EventBase
@@ -37,13 +33,6 @@ class ApplicationServiceState(Enum):
UP = "up"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class Namespace:
exclusive: bool
group_id: Optional[str]
regex: Pattern[str]
class ApplicationService:
"""Defines an application service. This definition is mostly what is
provided to the /register AS API.
@@ -61,17 +50,17 @@ class ApplicationService:
def __init__(
self,
token: str,
hostname: str,
id: str,
sender: str,
url: Optional[str] = None,
namespaces: Optional[JsonDict] = None,
hs_token: Optional[str] = None,
protocols: Optional[Iterable[str]] = None,
rate_limited: bool = True,
ip_range_whitelist: Optional[IPSet] = None,
supports_ephemeral: bool = False,
token,
hostname,
id,
sender,
url=None,
namespaces=None,
hs_token=None,
protocols=None,
rate_limited=True,
ip_range_whitelist=None,
supports_ephemeral=False,
):
self.token = token
self.url = (
@@ -96,33 +85,27 @@ class ApplicationService:
self.rate_limited = rate_limited
def _check_namespaces(
self, namespaces: Optional[JsonDict]
) -> Dict[str, List[Namespace]]:
def _check_namespaces(self, namespaces):
# Sanity check that it is of the form:
# {
# users: [ {regex: "[A-z]+.*", exclusive: true}, ...],
# aliases: [ {regex: "[A-z]+.*", exclusive: true}, ...],
# rooms: [ {regex: "[A-z]+.*", exclusive: true}, ...],
# }
if namespaces is None:
if not namespaces:
namespaces = {}
result: Dict[str, List[Namespace]] = {}
for ns in ApplicationService.NS_LIST:
result[ns] = []
if ns not in namespaces:
namespaces[ns] = []
continue
if not isinstance(namespaces[ns], list):
if type(namespaces[ns]) != list:
raise ValueError("Bad namespace value for '%s'" % ns)
for regex_obj in namespaces[ns]:
if not isinstance(regex_obj, dict):
raise ValueError("Expected dict regex for ns '%s'" % ns)
exclusive = regex_obj.get("exclusive")
if not isinstance(exclusive, bool):
if not isinstance(regex_obj.get("exclusive"), bool):
raise ValueError("Expected bool for 'exclusive' in ns '%s'" % ns)
group_id = regex_obj.get("group_id")
if group_id:
@@ -143,26 +126,22 @@ class ApplicationService:
)
regex = regex_obj.get("regex")
if not isinstance(regex, str):
if isinstance(regex, str):
regex_obj["regex"] = re.compile(regex) # Pre-compile regex
else:
raise ValueError("Expected string for 'regex' in ns '%s'" % ns)
return namespaces
# Pre-compile regex.
result[ns].append(Namespace(exclusive, group_id, re.compile(regex)))
return result
def _matches_regex(
self, namespace_key: str, test_string: str
) -> Optional[Namespace]:
for namespace in self.namespaces[namespace_key]:
if namespace.regex.match(test_string):
return namespace
def _matches_regex(self, test_string: str, namespace_key: str) -> Optional[Match]:
for regex_obj in self.namespaces[namespace_key]:
if regex_obj["regex"].match(test_string):
return regex_obj
return None
def _is_exclusive(self, namespace_key: str, test_string: str) -> bool:
namespace = self._matches_regex(namespace_key, test_string)
if namespace:
return namespace.exclusive
def _is_exclusive(self, ns_key: str, test_string: str) -> bool:
regex_obj = self._matches_regex(test_string, ns_key)
if regex_obj:
return regex_obj["exclusive"]
return False
async def _matches_user(
@@ -281,15 +260,15 @@ class ApplicationService:
def is_interested_in_user(self, user_id: str) -> bool:
return (
bool(self._matches_regex(ApplicationService.NS_USERS, user_id))
bool(self._matches_regex(user_id, ApplicationService.NS_USERS))
or user_id == self.sender
)
def is_interested_in_alias(self, alias: str) -> bool:
return bool(self._matches_regex(ApplicationService.NS_ALIASES, alias))
return bool(self._matches_regex(alias, ApplicationService.NS_ALIASES))
def is_interested_in_room(self, room_id: str) -> bool:
return bool(self._matches_regex(ApplicationService.NS_ROOMS, room_id))
return bool(self._matches_regex(room_id, ApplicationService.NS_ROOMS))
def is_exclusive_user(self, user_id: str) -> bool:
return (
@@ -306,14 +285,14 @@ class ApplicationService:
def is_exclusive_room(self, room_id: str) -> bool:
return self._is_exclusive(ApplicationService.NS_ROOMS, room_id)
def get_exclusive_user_regexes(self) -> List[Pattern[str]]:
def get_exclusive_user_regexes(self):
"""Get the list of regexes used to determine if a user is exclusively
registered by the AS
"""
return [
namespace.regex
for namespace in self.namespaces[ApplicationService.NS_USERS]
if namespace.exclusive
regex_obj["regex"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if regex_obj["exclusive"]
]
def get_groups_for_user(self, user_id: str) -> Iterable[str]:
@@ -326,15 +305,15 @@ class ApplicationService:
An iterable that yields group_id strings.
"""
return (
namespace.group_id
for namespace in self.namespaces[ApplicationService.NS_USERS]
if namespace.group_id and namespace.regex.match(user_id)
regex_obj["group_id"]
for regex_obj in self.namespaces[ApplicationService.NS_USERS]
if "group_id" in regex_obj and regex_obj["regex"].match(user_id)
)
def is_rate_limited(self) -> bool:
return self.rate_limited
def __str__(self) -> str:
def __str__(self):
# copy dictionary and redact token fields so they don't get logged
dict_copy = self.__dict__.copy()
dict_copy["token"] = "<redacted>"

View File

@@ -12,8 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
import urllib
from typing import TYPE_CHECKING, List, Optional, Tuple
from prometheus_client import Counter
@@ -53,7 +53,7 @@ HOUR_IN_MS = 60 * 60 * 1000
APP_SERVICE_PREFIX = "/_matrix/app/unstable"
def _is_valid_3pe_metadata(info: JsonDict) -> bool:
def _is_valid_3pe_metadata(info):
if "instances" not in info:
return False
if not isinstance(info["instances"], list):
@@ -61,7 +61,7 @@ def _is_valid_3pe_metadata(info: JsonDict) -> bool:
return True
def _is_valid_3pe_result(r: JsonDict, field: str) -> bool:
def _is_valid_3pe_result(r, field):
if not isinstance(r, dict):
return False
@@ -93,13 +93,9 @@ class ApplicationServiceApi(SimpleHttpClient):
hs.get_clock(), "as_protocol_meta", timeout_ms=HOUR_IN_MS
)
async def query_user(self, service: "ApplicationService", user_id: str) -> bool:
async def query_user(self, service, user_id):
if service.url is None:
return False
# This is required by the configuration.
assert service.hs_token is not None
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
try:
response = await self.get_json(uri, {"access_token": service.hs_token})
@@ -113,13 +109,9 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning("query_user to %s threw exception %s", uri, ex)
return False
async def query_alias(self, service: "ApplicationService", alias: str) -> bool:
async def query_alias(self, service, alias):
if service.url is None:
return False
# This is required by the configuration.
assert service.hs_token is not None
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
try:
response = await self.get_json(uri, {"access_token": service.hs_token})
@@ -133,13 +125,7 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning("query_alias to %s threw exception %s", uri, ex)
return False
async def query_3pe(
self,
service: "ApplicationService",
kind: str,
protocol: str,
fields: Dict[bytes, List[bytes]],
) -> List[JsonDict]:
async def query_3pe(self, service, kind, protocol, fields):
if kind == ThirdPartyEntityKind.USER:
required_field = "userid"
elif kind == ThirdPartyEntityKind.LOCATION:
@@ -219,14 +205,11 @@ class ApplicationServiceApi(SimpleHttpClient):
events: List[EventBase],
ephemeral: List[JsonDict],
txn_id: Optional[int] = None,
) -> bool:
):
if service.url is None:
return True
# This is required by the configuration.
assert service.hs_token is not None
serialized_events = self._serialize(service, events)
events = self._serialize(service, events)
if txn_id is None:
logger.warning(
@@ -238,12 +221,9 @@ class ApplicationServiceApi(SimpleHttpClient):
# Never send ephemeral events to appservices that do not support it
if service.supports_ephemeral:
body = {
"events": serialized_events,
"de.sorunome.msc2409.ephemeral": ephemeral,
}
body = {"events": events, "de.sorunome.msc2409.ephemeral": ephemeral}
else:
body = {"events": serialized_events}
body = {"events": events}
try:
await self.put_json(
@@ -258,7 +238,7 @@ class ApplicationServiceApi(SimpleHttpClient):
[event.get("event_id") for event in events],
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(serialized_events))
sent_events_counter.labels(service.id).inc(len(events))
return True
except CodeMessageException as e:
logger.warning(
@@ -280,9 +260,7 @@ class ApplicationServiceApi(SimpleHttpClient):
failed_transactions_counter.labels(service.id).inc()
return False
def _serialize(
self, service: "ApplicationService", events: Iterable[EventBase]
) -> List[JsonDict]:
def _serialize(self, service, events):
time_now = self.clock.time_msec()
return [
serialize_event(

View File

@@ -48,19 +48,13 @@ This is all tied together by the AppServiceScheduler which DIs the required
components.
"""
import logging
from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional, Set
from typing import List, Optional
from synapse.appservice import ApplicationService, ApplicationServiceState
from synapse.appservice.api import ApplicationServiceApi
from synapse.events import EventBase
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict
from synapse.util import Clock
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -78,7 +72,7 @@ class ApplicationServiceScheduler:
case is a simple array.
"""
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.as_api = hs.get_application_service_api()
@@ -86,7 +80,7 @@ class ApplicationServiceScheduler:
self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
async def start(self) -> None:
async def start(self):
logger.info("Starting appservice scheduler")
# check for any DOWN ASes and start recoverers for them.
@@ -97,14 +91,12 @@ class ApplicationServiceScheduler:
for service in services:
self.txn_ctrl.start_recoverer(service)
def submit_event_for_as(
self, service: ApplicationService, event: EventBase
) -> None:
def submit_event_for_as(self, service: ApplicationService, event: EventBase):
self.queuer.enqueue_event(service, event)
def submit_ephemeral_events_for_as(
self, service: ApplicationService, events: List[JsonDict]
) -> None:
):
self.queuer.enqueue_ephemeral(service, events)
@@ -116,18 +108,16 @@ class _ServiceQueuer:
appservice at a given time.
"""
def __init__(self, txn_ctrl: "_TransactionController", clock: Clock):
# dict of {service_id: [events]}
self.queued_events: Dict[str, List[EventBase]] = {}
# dict of {service_id: [events]}
self.queued_ephemeral: Dict[str, List[JsonDict]] = {}
def __init__(self, txn_ctrl, clock):
self.queued_events = {} # dict of {service_id: [events]}
self.queued_ephemeral = {} # dict of {service_id: [events]}
# the appservices which currently have a transaction in flight
self.requests_in_flight: Set[str] = set()
self.requests_in_flight = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
def _start_background_request(self, service: ApplicationService) -> None:
def _start_background_request(self, service):
# start a sender for this appservice if we don't already have one
if service.id in self.requests_in_flight:
return
@@ -136,17 +126,15 @@ class _ServiceQueuer:
"as-sender-%s" % (service.id,), self._send_request, service
)
def enqueue_event(self, service: ApplicationService, event: EventBase) -> None:
def enqueue_event(self, service: ApplicationService, event: EventBase):
self.queued_events.setdefault(service.id, []).append(event)
self._start_background_request(service)
def enqueue_ephemeral(
self, service: ApplicationService, events: List[JsonDict]
) -> None:
def enqueue_ephemeral(self, service: ApplicationService, events: List[JsonDict]):
self.queued_ephemeral.setdefault(service.id, []).extend(events)
self._start_background_request(service)
async def _send_request(self, service: ApplicationService) -> None:
async def _send_request(self, service: ApplicationService):
# sanity-check: we shouldn't get here if this service already has a sender
# running.
assert service.id not in self.requests_in_flight
@@ -180,15 +168,20 @@ class _TransactionController:
if a transaction fails.
(Note we have only have one of these in the homeserver.)
Args:
clock (synapse.util.Clock):
store (synapse.storage.DataStore):
as_api (synapse.appservice.api.ApplicationServiceApi):
"""
def __init__(self, clock: Clock, store: DataStore, as_api: ApplicationServiceApi):
def __init__(self, clock, store, as_api):
self.clock = clock
self.store = store
self.as_api = as_api
# map from service id to recoverer instance
self.recoverers: Dict[str, "_Recoverer"] = {}
self.recoverers = {}
# for UTs
self.RECOVERER_CLASS = _Recoverer
@@ -198,7 +191,7 @@ class _TransactionController:
service: ApplicationService,
events: List[EventBase],
ephemeral: Optional[List[JsonDict]] = None,
) -> None:
):
try:
txn = await self.store.create_appservice_txn(
service=service, events=events, ephemeral=ephemeral or []
@@ -214,7 +207,7 @@ class _TransactionController:
logger.exception("Error creating appservice transaction")
run_in_background(self._on_txn_fail, service)
async def on_recovered(self, recoverer: "_Recoverer") -> None:
async def on_recovered(self, recoverer):
logger.info(
"Successfully recovered application service AS ID %s", recoverer.service.id
)
@@ -224,18 +217,18 @@ class _TransactionController:
recoverer.service, ApplicationServiceState.UP
)
async def _on_txn_fail(self, service: ApplicationService) -> None:
async def _on_txn_fail(self, service):
try:
await self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
self.start_recoverer(service)
except Exception:
logger.exception("Error starting AS recoverer")
def start_recoverer(self, service: ApplicationService) -> None:
def start_recoverer(self, service):
"""Start a Recoverer for the given service
Args:
service:
service (synapse.appservice.ApplicationService):
"""
logger.info("Starting recoverer for AS ID %s", service.id)
assert service.id not in self.recoverers
@@ -264,14 +257,7 @@ class _Recoverer:
callback (callable[_Recoverer]): called once the service recovers.
"""
def __init__(
self,
clock: Clock,
store: DataStore,
as_api: ApplicationServiceApi,
service: ApplicationService,
callback: Callable[["_Recoverer"], Awaitable[None]],
):
def __init__(self, clock, store, as_api, service, callback):
self.clock = clock
self.store = store
self.as_api = as_api
@@ -279,8 +265,8 @@ class _Recoverer:
self.callback = callback
self.backoff_counter = 1
def recover(self) -> None:
def _retry() -> None:
def recover(self):
def _retry():
run_as_background_process(
"as-recoverer-%s" % (self.service.id,), self.retry
)
@@ -289,13 +275,13 @@ class _Recoverer:
logger.info("Scheduling retries on %s in %fs", self.service.id, delay)
self.clock.call_later(delay, _retry)
def _backoff(self) -> None:
def _backoff(self):
# cap the backoff to be around 8.5min => (2^9) = 512 secs
if self.backoff_counter < 9:
self.backoff_counter += 1
self.recover()
async def retry(self) -> None:
async def retry(self):
logger.info("Starting retries on %s", self.service.id)
try:
while True:

View File

@@ -29,7 +29,6 @@ class ApiConfig(Config):
def read_config(self, config: JsonDict, **kwargs):
validate_config(_MAIN_SCHEMA, config, ())
self.room_prejoin_state = list(self._get_prejoin_state_types(config))
self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)
def generate_config_section(cls, **kwargs) -> str:
formatted_default_state_types = "\n".join(
@@ -60,21 +59,6 @@ class ApiConfig(Config):
#
#additional_event_types:
# - org.example.custom.event.type
# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true
""" % {
"formatted_default_state_types": formatted_default_state_types
}
@@ -123,8 +107,6 @@ _DEFAULT_PREJOIN_STATE_TYPES = [
EventTypes.Name,
# Per MSC1772.
EventTypes.Create,
# Per MSC3173.
EventTypes.Topic,
]
@@ -154,8 +136,5 @@ _MAIN_SCHEMA = {
"properties": {
"room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
"room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
"track_puppeted_user_ips": {
"type": "boolean",
},
},
}

View File

@@ -147,7 +147,8 @@ def _load_appservice(
# protocols check
protocols = as_info.get("protocols")
if protocols:
if not isinstance(protocols, list):
# Because strings are lists in python
if isinstance(protocols, str) or not isinstance(protocols, list):
raise KeyError("Optional 'protocols' must be a list if present.")
for p in protocols:
if not isinstance(p, str):

View File

@@ -55,19 +55,19 @@ https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class EmailSubjectConfig:
message_from_person_in_room: str
message_from_person: str
messages_from_person: str
messages_in_room: str
messages_in_room_and_others: str
messages_from_person_and_others: str
invite_from_person: str
invite_from_person_to_room: str
invite_from_person_to_space: str
password_reset: str
email_validation: str
message_from_person_in_room = attr.ib(type=str)
message_from_person = attr.ib(type=str)
messages_from_person = attr.ib(type=str)
messages_in_room = attr.ib(type=str)
messages_in_room_and_others = attr.ib(type=str)
messages_from_person_and_others = attr.ib(type=str)
invite_from_person = attr.ib(type=str)
invite_from_person_to_room = attr.ib(type=str)
invite_from_person_to_space = attr.ib(type=str)
password_reset = attr.ib(type=str)
email_validation = attr.ib(type=str)
class EmailConfig(Config):

View File

@@ -32,7 +32,7 @@ class ExperimentalConfig(Config):
# MSC3026 (busy presence state)
self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)
# MSC2716 (importing historical messages)
# MSC2716 (backfill existing history)
self.msc2716_enabled: bool = experimental.get("msc2716_enabled", False)
# MSC2285 (hidden read receipts)
@@ -49,8 +49,3 @@ class ExperimentalConfig(Config):
# MSC3030 (Jump to date API endpoint)
self.msc3030_enabled: bool = experimental.get("msc3030_enabled", False)
# The portion of MSC3202 which is related to device masquerading.
self.msc3202_device_masquerading_enabled: bool = experimental.get(
"msc3202_device_masquerading", False
)

View File

@@ -16,14 +16,12 @@
import hashlib
import logging
import os
from typing import Any, Dict, Iterator, List, Optional
from typing import Any, Dict
import attr
import jsonschema
from signedjson.key import (
NACL_ED25519,
SigningKey,
VerifyKey,
decode_signing_key_base64,
decode_verify_key_bytes,
generate_signing_key,
@@ -33,7 +31,6 @@ from signedjson.key import (
)
from unpaddedbase64 import decode_base64
from synapse.types import JsonDict
from synapse.util.stringutils import random_string, random_string_with_symbols
from ._base import Config, ConfigError
@@ -84,13 +81,14 @@ To suppress this warning and continue using 'matrix.org', admins should set
logger = logging.getLogger(__name__)
@attr.s(slots=True, auto_attribs=True)
@attr.s
class TrustedKeyServer:
# name of the server.
server_name: str
# string: name of the server.
server_name = attr.ib()
# map from key id to key object, or None to disable signature verification.
verify_keys: Optional[Dict[str, VerifyKey]] = None
# dict[str,VerifyKey]|None: map from key id to key object, or None to disable
# signature verification.
verify_keys = attr.ib(default=None)
class KeyConfig(Config):
@@ -281,15 +279,15 @@ class KeyConfig(Config):
% locals()
)
def read_signing_keys(self, signing_key_path: str, name: str) -> List[SigningKey]:
def read_signing_keys(self, signing_key_path, name):
"""Read the signing keys in the given path.
Args:
signing_key_path
name: Associated config key name
signing_key_path (str)
name (str): Associated config key name
Returns:
The signing keys read from the given path.
list[SigningKey]
"""
signing_keys = self.read_file(signing_key_path, name)
@@ -298,9 +296,7 @@ class KeyConfig(Config):
except Exception as e:
raise ConfigError("Error reading %s: %s" % (name, str(e)))
def read_old_signing_keys(
self, old_signing_keys: Optional[JsonDict]
) -> Dict[str, VerifyKey]:
def read_old_signing_keys(self, old_signing_keys):
if old_signing_keys is None:
return {}
keys = {}
@@ -344,7 +340,7 @@ class KeyConfig(Config):
write_signing_keys(signing_key_file, (key,))
def _perspectives_to_key_servers(config: JsonDict) -> Iterator[JsonDict]:
def _perspectives_to_key_servers(config):
"""Convert old-style 'perspectives' configs into new-style 'trusted_key_servers'
Returns an iterable of entries to add to trusted_key_servers.
@@ -406,9 +402,7 @@ TRUSTED_KEY_SERVERS_SCHEMA = {
}
def _parse_key_servers(
key_servers: List[Any], federation_verify_certificates: bool
) -> Iterator[TrustedKeyServer]:
def _parse_key_servers(key_servers, federation_verify_certificates):
try:
jsonschema.validate(key_servers, TRUSTED_KEY_SERVERS_SCHEMA)
except jsonschema.ValidationError as e:
@@ -450,7 +444,7 @@ def _parse_key_servers(
yield result
def _assert_keyserver_has_verify_keys(trusted_key_server: TrustedKeyServer) -> None:
def _assert_keyserver_has_verify_keys(trusted_key_server):
if not trusted_key_server.verify_keys:
raise ConfigError(INSECURE_NOTARY_ERROR)

View File

@@ -22,12 +22,10 @@ from ._base import Config, ConfigError
@attr.s
class MetricsFlags:
known_servers: bool = attr.ib(
default=False, validator=attr.validators.instance_of(bool)
)
known_servers = attr.ib(default=False, validator=attr.validators.instance_of(bool))
@classmethod
def all_off(cls) -> "MetricsFlags":
def all_off(cls):
"""
Instantiate the flags with all options set to off.
"""

View File

@@ -37,7 +37,7 @@ class ModulesConfig(Config):
# Server admins can expand Synapse's functionality with external modules.
#
# See https://matrix-org.github.io/synapse/latest/modules/index.html for more
# See https://matrix-org.github.io/synapse/latest/modules.html for more
# documentation on how to configure or create custom modules for Synapse.
#
modules:

View File

@@ -148,13 +148,10 @@ class OIDCConfig(Config):
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint, or to rely on the data returned in the id_token from the
# token_endpoint.
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to

View File

@@ -14,11 +14,10 @@
import logging
import os
from collections import namedtuple
from typing import Dict, List, Tuple
from urllib.request import getproxies_environment # type: ignore
import attr
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
@@ -45,20 +44,18 @@ THUMBNAIL_SIZE_YAML = """\
HTTP_PROXY_SET_WARNING = """\
The Synapse config url_preview_ip_range_blacklist will be ignored as an HTTP(s) proxy is configured."""
ThumbnailRequirement = namedtuple(
"ThumbnailRequirement", ["width", "height", "method", "media_type"]
)
@attr.s(frozen=True, slots=True, auto_attribs=True)
class ThumbnailRequirement:
width: int
height: int
method: str
media_type: str
@attr.s(frozen=True, slots=True, auto_attribs=True)
class MediaStorageProviderConfig:
store_local: bool # Whether to store newly uploaded local files
store_remote: bool # Whether to store newly downloaded remote files
store_synchronous: bool # Whether to wait for successful storage for local uploads
MediaStorageProviderConfig = namedtuple(
"MediaStorageProviderConfig",
(
"store_local", # Whether to store newly uploaded local files
"store_remote", # Whether to store newly downloaded remote files
"store_synchronous", # Whether to wait for successful storage for local uploads
),
)
def parse_thumbnail_requirements(
@@ -69,10 +66,11 @@ def parse_thumbnail_requirements(
method, and thumbnail media type to precalculate
Args:
thumbnail_sizes: List of dicts with "width", "height", and "method" keys
thumbnail_sizes(list): List of dicts with "width", "height", and
"method" keys
Returns:
Dictionary mapping from media type string to list of ThumbnailRequirement.
Dictionary mapping from media type string to list of
ThumbnailRequirement tuples.
"""
requirements: Dict[str, List[ThumbnailRequirement]] = {}
for size in thumbnail_sizes:

Some files were not shown because too many files have changed in this diff.