Compare commits

66 commits: rei/moh-co ... anoa/log_e

| SHA1 |
|---|
| 20fc57683c |
| d1e6333f12 |
| 7ad7a47e5a |
| 20d4418485 |
| 15ffc4143c |
| 9eab71aa93 |
| 68acb0a29d |
| fd05a3ed03 |
| 9d0098595e |
| ab12c909a2 |
| d93ec0a0ba |
| 251b5567ec |
| 47961ea855 |
| 4ec0a309cf |
| 3ba9389699 |
| d8be9924ef |
| cefd4b87a3 |
| 86615aa965 |
| b0352f9c08 |
| 6a78ede569 |
| 6b241f5286 |
| e7da1ced24 |
| 18862f20b5 |
| 904bb04409 |
| 422e33fabf |
| 867443472c |
| 8e8a00829f |
| 3e0536cd2a |
| d70169bf9b |
| 4ca8fcdd5a |
| b602ba194b |
| b9632046fb |
| 5ff5f17377 |
| 0c40c619aa |
| 20c6d85c6e |
| 10a88ba91c |
| b92a2ff797 |
| 2560b1b6b2 |
| 22abfca8d9 |
| 1b1aed38e3 |
| 2185b28184 |
| 99ba5ae7b7 |
| d41c4654db |
| 338e70c617 |
| 7c3408d1a8 |
| ffd227c382 |
| c43dd4d01b |
| 3be63654e4 |
| 8e57584a58 |
| d3cf0730f8 |
| 2bb4bd1269 |
| 6a04767439 |
| 6bf81a7a61 |
| 7fe7c45438 |
| 5cc41f1b05 |
| 99e7fb1d52 |
| 6c68e874b1 |
| 201c48c8de |
| e87540abb1 |
| 70ce9aea71 |
| 2ef1fea8d2 |
| c9eb678b73 |
| feb3e006d7 |
| 3b51c763ba |
| d8f94eeec2 |
| 88a78c6577 |
.github/PULL_REQUEST_TEMPLATE.md (1 line changed, vendored)

@@ -8,6 +8,7 @@
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off)
* [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct
  (run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
.github/workflows/tests.yml (5 lines changed, vendored)

@@ -366,6 +366,8 @@ jobs:
      # Build initial Synapse image
      - run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
        working-directory: synapse
        env:
          DOCKER_BUILDKIT: 1

      # Build a ready-to-run Synapse image based on the initial image above.
      # This new image includes a config file, keys for signing and TLS, and

@@ -374,7 +376,8 @@ jobs:
        working-directory: complement/dockerfiles

      # Run Complement
      - run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
      - run: set -o pipefail && go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
        shell: bash
        env:
          COMPLEMENT_BASE_IMAGE: complement-synapse:latest
        working-directory: complement
.gitignore (4 lines changed, vendored)

@@ -50,3 +50,7 @@ __pycache__/

# docs
book/

# complement
/complement-master
/master.tar.gz
CHANGES.md (52 lines changed)

@@ -1,8 +1,53 @@
Synapse 1.50.0rc1 (2022-01-05)
==============================
Synapse 1.50.1 (2022-01-18)
===========================

This release fixes a bug in Synapse 1.50.0 that could prevent clients from being able to connect to Synapse if the `webclient` resource was enabled. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).

Bugfixes
--------

- Fix a bug introduced in Synapse 1.50.0rc1 that could cause Matrix clients to be unable to connect to Synapse instances with the `webclient` resource enabled. ([\#11764](https://github.com/matrix-org/synapse/issues/11764))


Synapse 1.50.0 (2022-01-18)
===========================

**This release contains a critical bug that may prevent clients from being able to connect.
As such, it is not recommended to upgrade to 1.50.0. Instead, please upgrade straight to
to 1.50.1. Further details are available in [this issue](https://github.com/matrix-org/synapse/issues/11763).**

Please note that we now only support Python 3.7+ and PostgreSQL 10+ (if applicable), because Python 3.6 and PostgreSQL 9.6 have reached end-of-life.

No significant changes since 1.50.0rc2.


Synapse 1.50.0rc2 (2022-01-14)
==============================

This release candidate fixes a federation-breaking regression introduced in Synapse 1.50.0rc1.

Bugfixes
--------

- Fix a bug introduced in Synapse v1.0.0 whereby some device list updates would not be sent to remote homeservers if there were too many to send at once. ([\#11729](https://github.com/matrix-org/synapse/issues/11729))
- Fix a bug introduced in Synapse v1.50.0rc1 whereby outbound federation could fail because too many EDUs were produced for device updates. ([\#11730](https://github.com/matrix-org/synapse/issues/11730))


Improved Documentation
----------------------

- Document that now the minimum supported PostgreSQL version is 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725))


Internal Changes
----------------

- Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s. ([\#11714](https://github.com/matrix-org/synapse/issues/11714))


Synapse 1.50.0rc1 (2022-01-05)
==============================


Features
--------

@@ -42,6 +87,7 @@ Deprecations and Removals
-------------------------

- Replace `mock` package by its standard library version. ([\#11588](https://github.com/matrix-org/synapse/issues/11588))
- Drop support for Python 3.6 and Ubuntu 18.04. ([\#11633](https://github.com/matrix-org/synapse/issues/11633))


Internal Changes

@@ -77,13 +123,13 @@ Internal Changes
- Improve OpenTracing support for requests which use a `ResponseCache`. ([\#11607](https://github.com/matrix-org/synapse/issues/11607))
- Improve OpenTracing support for incoming HTTP requests. ([\#11618](https://github.com/matrix-org/synapse/issues/11618))
- A number of improvements to opentracing support. ([\#11619](https://github.com/matrix-org/synapse/issues/11619))
- Drop support for Python 3.6 and Ubuntu 18.04. ([\#11633](https://github.com/matrix-org/synapse/issues/11633))
- Refactor the way that the `outlier` flag is set on events received over federation. ([\#11634](https://github.com/matrix-org/synapse/issues/11634))
- Improve the error messages from `get_create_event_for_room`. ([\#11638](https://github.com/matrix-org/synapse/issues/11638))
- Remove redundant `get_current_events_token` method. ([\#11643](https://github.com/matrix-org/synapse/issues/11643))
- Convert `namedtuples` to `attrs`. ([\#11665](https://github.com/matrix-org/synapse/issues/11665), [\#11574](https://github.com/matrix-org/synapse/issues/11574))
- Update the `/capabilities` response to include whether support for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440) is available. ([\#11690](https://github.com/matrix-org/synapse/issues/11690))
- Send the `Accept` header in HTTP requests made using `SimpleHttpClient.get_json`. ([\#11677](https://github.com/matrix-org/synapse/issues/11677))
- Work around Mjolnir compatibility issue by adding an import for `glob_to_regex` in `synapse.util`, where it moved from. ([\#11696](https://github.com/matrix-org/synapse/issues/11696))


Synapse 1.49.2 (2021-12-21)
changelog.d/11530.bugfix (new file)
Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events
received over federation.

changelog.d/11561.feature (new file)
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

changelog.d/11576.feature (new file)
Remove the `"password_hash"` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html).

changelog.d/11577.feature (new file)
Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).

changelog.d/11587.bugfix (new file)
Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices.

changelog.d/11593.bugfix (new file)
Fix an error in to get federation status of a destination server even if no error has occurred. This admin API was new introduced in Synapse 1.49.0.

changelog.d/11612.misc (new file)
Avoid database access in the JSON serialization process.

changelog.d/11659.bugfix (new file)
Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).

changelog.d/11667.bugfix (new file)
Fix `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields which have been present since Synapse 1.49.0.

changelog.d/11669.bugfix (new file)
Fix preview of some gif URLs (like tenor.com). Contributed by Philippe Daouadi.

changelog.d/11672.feature (new file)
Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room.

changelog.d/11675.feature (new file)
Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users.

changelog.d/11682.removal (new file)
Remove the unstable `/send_relation` endpoint.

changelog.d/11685.misc (new file)
Run `pyupgrade --py37-plus --keep-percent-format` on Synapse.

changelog.d/11686.doc (new file)
Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using ZeroSSL certificate instead. This bypasses client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contibuted by @AndrewFerr.

changelog.d/11691.misc (new file)
Use buildkit's cache feature to speed up docker builds.

changelog.d/11692.misc (new file)
Use `auto_attribs` and native type hints for attrs classes.

changelog.d/11693.misc (new file)
Remove debug logging for #4422, which has been closed since Synapse 0.99.

changelog.d/11695.bugfix (new file)
Fix a bug where the only the first 50 rooms from a space were returned from the `/hierarchy` API. This has existed since the introduction of the API in Synapse v1.41.0.

(deleted file)
@@ -1 +0,0 @@
Work around Mjolnir compatibility issue by adding an import for `glob_to_regex` in `synapse.util`, where it moved from.

changelog.d/11699.misc (new file)
Remove fallback code for Python 2.

changelog.d/11701.misc (new file)
Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic.

changelog.d/11702.misc (new file)
Add the option to write sqlite test dbs to disk when running tests.

changelog.d/11707.misc (new file)
Improve Complement test output for Gitub Actions.

changelog.d/11710.bugfix (new file)
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.

changelog.d/11714.misc (new file)
Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s.

changelog.d/11715.doc (new file)
Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide.

changelog.d/11716.misc (new file)
Fix docstring on `add_account_data_for_user`.

changelog.d/11718.misc (new file)
Complement environment variable name change and update `.gitignore`.

changelog.d/11723.misc (new file)
Simplify calculation of prometheus metrics for garbage collection.

changelog.d/11724.misc (new file)
Improve accuracy of `python_twisted_reactor_tick_time` prometheus metric.

changelog.d/11724.removal (new file)
Remove `python_twisted_reactor_pending_calls` prometheus metric.

changelog.d/11725.doc (new file)
Document that now the minimum supported PostgreSQL version is 10.

changelog.d/11735.doc (new file)
Fix typo in demo docs: differnt.

changelog.d/11737.bugfix (new file)
Make the list rooms admin api sort stable. Contributed by Daniël Sonck.

changelog.d/11739.doc (new file)
Update room spec url in config files.

changelog.d/11740.doc (new file)
Mention python3-venv and libpq-dev dependencies in contribution guide.

changelog.d/11742.misc (new file)
Minor efficiency improvements when inserting many values into the database.

changelog.d/11744.misc (new file)
Invite PR authors to give themselves credit in the changelog.

changelog.d/11745.bugfix (new file)
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.

changelog.d/11749.feature (new file)
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

changelog.d/11755.doc (new file)
Update documentation for configuring login with facebook.

changelog.d/11757.feature (new file)
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.

changelog.d/11761.misc (new file)
Remove `log_function` utility function and its uses.

changelog.d/11768.misc (new file)
Use `auto_attribs` and native type hints for attrs classes.
@@ -92,22 +92,6 @@ new PromConsole.Graph({
})
</script>

<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
  node: document.querySelector("#reactor_pending_calls"),
  expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
  name: "[[job]]-[[index]]",
  min: 0,
  renderer: "line",
  height: 150,
  yAxisFormatter: PromConsole.NumberFormatter.humanize,
  yHoverFormatter: PromConsole.NumberFormatter.humanize,
  yTitle: "Pending Calls"
})
</script>

<h1>Storage</h1>

<h3>Queries</h3>
debian/changelog (18 lines changed, vendored)

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.50.1) stable; urgency=medium

  * New synapse release 1.50.1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 18 Jan 2022 16:06:26 +0000

matrix-synapse-py3 (1.50.0) stable; urgency=medium

  * New synapse release 1.50.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 18 Jan 2022 10:40:38 +0000

matrix-synapse-py3 (1.50.0~rc2) stable; urgency=medium

  * New synapse release 1.50.0~rc2.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 14 Jan 2022 11:18:06 +0000

matrix-synapse-py3 (1.50.0~rc1) stable; urgency=medium

  * New synapse release 1.50.0~rc1.
@@ -22,5 +22,5 @@ Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}


Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
Also note that when joining a public room on a different HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
@@ -1,14 +1,17 @@
# Dockerfile to build the matrixdotorg/synapse docker images.
#
# Note that it uses features which are only available in BuildKit - see
# https://docs.docker.com/go/buildkit/ for more information.
#
# To build the image, run `docker build` command from the root of the
# synapse repository:
#
#    docker build -f docker/Dockerfile .
#    DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile .
#
# There is an optional PYTHON_VERSION build argument which sets the
# version of python to build against: for example:
#
#    docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#    DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 .
#

ARG PYTHON_VERSION=3.8

@@ -19,7 +22,16 @@ ARG PYTHON_VERSION=3.8
FROM docker.io/python:${PYTHON_VERSION}-slim as builder

# install the OS build deps
RUN apt-get update && apt-get install -y \
#
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt, to improve rebuild speeds on
# slow connections.
#
RUN \
   --mount=type=cache,target=/var/cache/apt,sharing=locked \
   --mount=type=cache,target=/var/lib/apt,sharing=locked \
  apt-get update && apt-get install -y \
    build-essential \
    libffi-dev \
    libjpeg-dev \

@@ -44,7 +56,8 @@ COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN pip install --prefix="/install" --no-warn-script-location \
RUN --mount=type=cache,target=/root/.cache/pip \
  pip install --prefix="/install" --no-warn-script-location \
    /synapse[all]

# Copy over the rest of the project

@@ -66,7 +79,10 @@ LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/syna
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'

RUN apt-get update && apt-get install -y \
RUN \
   --mount=type=cache,target=/var/cache/apt,sharing=locked \
   --mount=type=cache,target=/var/lib/apt,sharing=locked \
  apt-get update && apt-get install -y \
    curl \
    gosu \
    libjpeg62-turbo \
@@ -15,9 +15,10 @@ server admin: [Admin API](../usage/administration/admin_api)

It returns a JSON body like the following:

```json
```jsonc
{
    "displayname": "User",
    "name": "@user:example.com",
    "displayname": "User", // can be null if not set
    "threepids": [
        {
            "medium": "email",

@@ -32,11 +33,11 @@ It returns a JSON body like the following:
            "validated_at": 1586458409743
        }
    ],
    "avatar_url": "<avatar_url>",
    "avatar_url": "<avatar_url>", // can be null if not set
    "is_guest": 0,
    "admin": 0,
    "deactivated": 0,
    "shadow_banned": 0,
    "password_hash": "$2b$12$p9B4GkqYdRTPGD",
    "creation_ts": 1560432506,
    "appservice_id": null,
    "consent_server_notice_sent": null,
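For context, a minimal sketch of querying this Users Admin API endpoint from Python follows. The homeserver URL and token are placeholders; the path is the documented `GET /_synapse/admin/v2/users/<user_id>` route, and the `requests` usage is illustrative rather than taken from this diff.

```python
# Sketch only: query the Users Admin API and note the fields this change
# affects. The server URL and ADMIN_TOKEN below are placeholders.
import requests

ADMIN_TOKEN = "syt_..."  # an admin user's access token (placeholder)
resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v2/users/@user:example.com",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
user = resp.json()
assert "password_hash" not in user  # removed from the response (#11576)
print(user["name"], user.get("displayname"))  # displayname may be None
```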
@@ -20,7 +20,9 @@ recommended for development. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
on Windows is not officially supported.

The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough.

Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`.

The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).

@@ -169,6 +171,27 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```

By default, tests will use an in-memory SQLite database for test data. For additional
help with debugging, one can use an on-disk SQLite database file instead, in order to
review database state during and after running tests. This can be done by setting
the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the
database state to be stored in a file named `test.db` under the trial process'
working directory. Typically, this ends up being `_trial_temp/test.db`. For example:

```sh
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests
```

The database file can then be inspected with:

```sh
sqlite3 _trial_temp/test.db
```

Note that the database file is cleared at the beginning of each test run. Thus it
will always only contain the data generated by the *last run test*. Though generally
when debugging, one is only running a single test anyway.

### Running tests under PostgreSQL

Invoking `trial` as above will use an in-memory SQLite database. This is great for
@@ -35,7 +35,12 @@ When Synapse is asked to preview a URL it does the following:
5. If the media is HTML:
   1. Decodes the HTML via the stored file.
   2. Generates an Open Graph response from the HTML.
   3. If an image exists in the Open Graph response:
   3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
      1. Downloads the URL and stores it into a file via the media storage provider
         and saves the local media metadata.
      2. Convert the oEmbed response to an Open Graph response.
      3. Override any Open Graph data from the HTML with data from oEmbed.
   4. If an image exists in the Open Graph response:
      1. Downloads the URL and stores it into a file via the media storage
         provider and saves the local media metadata.
      2. Generates thumbnails.
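The override rule from step 3.3 is easier to see in miniature. The sketch below is illustrative only, not Synapse's actual implementation: oEmbed-derived Open Graph data wins over data parsed from the HTML.

```python
# Illustrative only: merge Open Graph data from the HTML with data from an
# autodiscovered oEmbed response, letting the oEmbed values take precedence.
from typing import Dict, Optional


def merge_open_graph(
    html_og: Dict[str, str], oembed_og: Optional[Dict[str, str]]
) -> Dict[str, str]:
    og = dict(html_og)         # step 2: OG response generated from the HTML
    if oembed_og is not None:  # step 3: a JSON oEmbed URL was autodiscovered
        og.update(oembed_og)   # step 3.3: oEmbed overrides the HTML data
    return og


print(merge_open_graph({"og:title": "From HTML"}, {"og:title": "From oEmbed"}))
# {'og:title': 'From oEmbed'}
```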
@@ -390,9 +390,6 @@ oidc_providers:

### Facebook

Like Github, Facebook provide a custom OAuth2 API rather than an OIDC-compliant
one so requires a little more configuration.

0. You will need a Facebook developer account. You can register for one
   [here](https://developers.facebook.com/async/registration/).
1. On the [apps](https://developers.facebook.com/apps/) page of the developer

@@ -412,24 +409,28 @@ Synapse config:
    idp_name: Facebook
    idp_brand: "facebook"  # optional: styling hint for clients
    discover: false
    issuer: "https://facebook.com"
    issuer: "https://www.facebook.com"
    client_id: "your-client-id" # TO BE FILLED
    client_secret: "your-client-secret" # TO BE FILLED
    scopes: ["openid", "email"]
    authorization_endpoint: https://facebook.com/dialog/oauth
    token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
    user_profile_method: "userinfo_endpoint"
    userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
    authorization_endpoint: "https://facebook.com/dialog/oauth"
    token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token"
    jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
    user_mapping_provider:
      config:
        subject_claim: "id"
        display_name_template: "{{ user.name }}"
        email_template: "{{ '{{ user.email }}' }}"
```

Relevant documents:
* https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
* Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
* Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
* [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow)
* [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/)
* [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user)

Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration),
but it has a `response_types_supported` which excludes "code" (which we rely on, and
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.

### Gitea
@@ -1,6 +1,6 @@
# Using Postgres

Synapse supports PostgreSQL versions 9.6 or later.
Synapse supports PostgreSQL versions 10 or later.

## Install postgres client libraries
@@ -164,7 +164,7 @@ presence:
# The default room version for newly created rooms.
#
# Known room versions are listed here:
#   https://matrix.org/docs/spec/#complete-list-of-room-versions
#   https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".

@@ -1503,6 +1503,21 @@ room_prejoin_state:
#additional_event_types:
#  - org.example.custom.event.type

# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true


# A list of application service config files to use
#

@@ -1870,10 +1885,13 @@ saml2_config:
#      Defaults to false. Avoid this in production.
#
#   user_profile_method: Whether to fetch the user profile from the userinfo
#      endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#      endpoint, or to rely on the data returned in the id_token from the
#      token_endpoint.
#
#      Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
#      included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
#      Valid values are: 'auto' or 'userinfo_endpoint'.
#
#      Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
#      not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
#      userinfo endpoint.
#
#   allow_existing_users: set to 'true' to allow a user logging in via OIDC to
@@ -137,6 +137,10 @@ This will install and start a systemd service called `coturn`.

    # TLS private key file
    pkey=/path/to/privkey.pem

    # Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
    #no-tls
    #no-dtls
    ```

    In this case, replace the `turn:` schemes in the `turn_uris` settings below

@@ -145,6 +149,14 @@ This will install and start a systemd service called `coturn`.
    We recommend that you only try to set up TLS/DTLS once you have set up a
    basic installation and got it working.

    NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
    not work with any Matrix client that uses Chromium's WebRTC library. This
    currently includes Element Android & iOS; for more details, see their
    [respective](https://github.com/vector-im/element-android/issues/1533)
    [issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
    [WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
    Consider using a ZeroSSL certificate for your TURN server as a working alternative.

1. Ensure your firewall allows traffic into the TURN server on the ports
   you've configured it to listen on (By default: 3478 and 5349 for TURN
   traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535

@@ -250,6 +262,10 @@ Here are a few things to try:
* Check that you have opened your firewall to allow UDP traffic to the UDP
  relay ports (49152-65535 by default).

* Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted)
  TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
  voice & video WebRTC traffic is always encrypted.)

* Some WebRTC implementations (notably, that of Google Chrome) appear to get
  confused by TURN servers which are reachable over IPv6 (this appears to be
  an unexpected side-effect of its handling of multiple IP addresses as
@@ -23,6 +23,9 @@
# Exit if a line returns a non-zero exit code
set -e

# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1

# Change to the repository root
cd "$(dirname $0)/.."

@@ -47,7 +50,7 @@ if [[ -n "$WORKERS" ]]; then
  COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
  # And provide some more configuration to complement.
  export COMPLEMENT_CA=true
  export COMPLEMENT_VERSION_CHECK_ITERATIONS=500
  export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25
else
  export COMPLEMENT_BASE_IMAGE=complement-synapse
  COMPLEMENT_DOCKERFILE=Synapse.Dockerfile

@@ -65,4 +68,5 @@ if [[ -n "$1" ]]; then
fi

# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
@@ -47,7 +47,7 @@ try:
except ImportError:
    pass

__version__ = "1.50.0rc1"
__version__ = "1.50.1"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when
@@ -46,7 +46,9 @@ class UserInfo:
    ips: List[str] = attr.Factory(list)


def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
def get_recent_users(
    txn: LoggingTransaction, since_ms: int, exclude_app_service: bool
) -> List[UserInfo]:
    """Fetches recently registered users and some info on them."""

    sql = """

@@ -56,6 +58,9 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
            AND deactivated = 0
    """

    if exclude_app_service:
        sql += " AND appservice_id IS NULL"

    txn.execute(sql, (since_ms / 1000,))

    user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]

@@ -113,7 +118,7 @@ def main() -> None:
        "-e",
        "--exclude-emails",
        action="store_true",
        help="Exclude users that have validated email addresses",
        help="Exclude users that have validated email addresses.",
    )
    parser.add_argument(
        "-u",

@@ -121,6 +126,12 @@ def main() -> None:
        action="store_true",
        help="Only print user IDs that match.",
    )
    parser.add_argument(
        "-a",
        "--exclude-app-service",
        help="Exclude appservice users.",
        action="store_true",
    )

    config = ReviewConfig()

@@ -133,6 +144,7 @@ def main() -> None:

    since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
    exclude_users_with_email = config_args.exclude_emails
    exclude_users_with_appservice = config_args.exclude_app_service
    include_context = not config_args.only_users

    for database_config in config.database.databases:

@@ -143,7 +155,7 @@ def main() -> None:

        with make_conn(database_config, engine, "review_recent_signups") as db_conn:
            # This generates a type of Cursor, not LoggingTransaction.
            user_infos = get_recent_users(db_conn.cursor(), since_ms)  # type: ignore[arg-type]
            user_infos = get_recent_users(db_conn.cursor(), since_ms, exclude_users_with_appservice)  # type: ignore[arg-type]

        for user_info in user_infos:
            if exclude_users_with_email and user_info.emails:
@@ -71,6 +71,7 @@ class Auth:
        self._auth_blocking = AuthBlocking(self.hs)

        self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
        self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
        self._macaroon_secret_key = hs.config.key.macaroon_secret_key
        self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users

@@ -246,6 +247,18 @@ class Auth:
                user_agent=user_agent,
                device_id=device_id,
            )
            # Track also the puppeted user client IP if enabled and the user is puppeting
            if (
                user_info.user_id != user_info.token_owner
                and self._track_puppeted_user_ips
            ):
                await self.store.insert_client_ip(
                    user_id=user_info.user_id,
                    access_token=access_token,
                    ip=ip_addr,
                    user_agent=user_agent,
                    device_id=device_id,
                )

        if is_guest and not allow_guest:
            raise AuthError(
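Reduced to its core, the new branch adds one extra predicate. The toy restatement below uses simplified names and is not the actual Synapse code; it only shows when the second `insert_client_ip` call fires.

```python
# Toy restatement of the gating condition above, with simplified names.
def should_track_puppeted_ip(
    user_id: str, token_owner: str, track_puppeted_user_ips: bool
) -> bool:
    # A second insert_client_ip happens only when the request is puppeted
    # (acting user differs from the token's owner) and the flag is enabled.
    return user_id != token_owner and track_puppeted_user_ips


assert should_track_puppeted_ip("@alice:example.com", "@admin:example.com", True)
assert not should_track_puppeted_ip("@alice:example.com", "@alice:example.com", True)
assert not should_track_puppeted_ip("@alice:example.com", "@admin:example.com", False)
```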
@@ -46,41 +46,41 @@ class RoomDisposition:
    UNSTABLE = "unstable"


@attr.s(slots=True, frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomVersion:
    """An object which describes the unique attributes of a room version."""

    identifier = attr.ib(type=str)  # the identifier for this version
    disposition = attr.ib(type=str)  # one of the RoomDispositions
    event_format = attr.ib(type=int)  # one of the EventFormatVersions
    state_res = attr.ib(type=int)  # one of the StateResolutionVersions
    enforce_key_validity = attr.ib(type=bool)
    identifier: str  # the identifier for this version
    disposition: str  # one of the RoomDispositions
    event_format: int  # one of the EventFormatVersions
    state_res: int  # one of the StateResolutionVersions
    enforce_key_validity: bool

    # Before MSC2432, m.room.aliases had special auth rules and redaction rules
    special_case_aliases_auth = attr.ib(type=bool)
    special_case_aliases_auth: bool
    # Strictly enforce canonicaljson, do not allow:
    # * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
    # * Floats
    # * NaN, Infinity, -Infinity
    strict_canonicaljson = attr.ib(type=bool)
    strict_canonicaljson: bool
    # MSC2209: Check 'notifications' key while verifying
    # m.room.power_levels auth rules.
    limit_notifications_power_levels = attr.ib(type=bool)
    limit_notifications_power_levels: bool
    # MSC2174/MSC2176: Apply updated redaction rules algorithm.
    msc2176_redaction_rules = attr.ib(type=bool)
    msc2176_redaction_rules: bool
    # MSC3083: Support the 'restricted' join_rule.
    msc3083_join_rules = attr.ib(type=bool)
    msc3083_join_rules: bool
    # MSC3375: Support for the proper redaction rules for MSC3083. This mustn't
    # be enabled if MSC3083 is not.
    msc3375_redaction_rules = attr.ib(type=bool)
    msc3375_redaction_rules: bool
    # MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
    # m.room.membership event with membership 'knock'.
    msc2403_knocking = attr.ib(type=bool)
    msc2403_knocking: bool
    # MSC2716: Adds m.room.power_levels -> content.historical field to control
    # whether "insertion", "chunk", "marker" events can be sent
    msc2716_historical = attr.ib(type=bool)
    msc2716_historical: bool
    # MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
    msc2716_redactions = attr.ib(type=bool)
    msc2716_redactions: bool


class RoomVersions:
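The same mechanical conversion recurs in several files below. A self-contained sketch of the pattern, using a made-up `Point` class rather than one from Synapse:

```python
# With auto_attribs=True, plain type annotations replace explicit
# attr.ib(type=...) declarations; behaviour is otherwise unchanged.
import attr


@attr.s(slots=True, frozen=True)
class PointOld:
    x = attr.ib(type=int)
    y = attr.ib(type=int)


@attr.s(slots=True, frozen=True, auto_attribs=True)
class PointNew:
    x: int  # becomes an attr.ib automatically
    y: int


assert PointOld(1, 2) == PointOld(1, 2)
assert PointNew(1, 2).x == PointOld(1, 2).x == 1
```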
@@ -60,7 +60,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import register_threadpool
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor

@@ -159,6 +159,7 @@ def start_reactor(
        change_resource_limit(soft_file_limit)
        if gc_thresholds:
            gc.set_threshold(*gc_thresholds)
        install_gc_manager()
        run_command()

    # make sure that we run the reactor with the sentinel log context,
@@ -27,7 +27,6 @@ import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import (
    CLIENT_API_PREFIX,
    FEDERATION_PREFIX,
    LEGACY_MEDIA_PREFIX,
    MEDIA_R0_PREFIX,

@@ -193,7 +192,13 @@ class SynapseHomeServer(HomeServer):

        resources.update(
            {
                CLIENT_API_PREFIX: client_resource,
                "/_matrix/client/api/v1": client_resource,
                "/_matrix/client/r0": client_resource,
                "/_matrix/client/v1": client_resource,
                "/_matrix/client/v3": client_resource,
                "/_matrix/client/unstable": client_resource,
                "/_matrix/client/v2_alpha": client_resource,
                "/_matrix/client/versions": client_resource,
                "/.well-known": well_known_resource(self),
                "/_synapse/admin": AdminRestResource(self),
                **build_synapse_client_resource_tree(self),
@@ -29,6 +29,7 @@ class ApiConfig(Config):
    def read_config(self, config: JsonDict, **kwargs):
        validate_config(_MAIN_SCHEMA, config, ())
        self.room_prejoin_state = list(self._get_prejoin_state_types(config))
        self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)

    def generate_config_section(cls, **kwargs) -> str:
        formatted_default_state_types = "\n".join(

@@ -59,6 +60,21 @@ class ApiConfig(Config):
        #
        #additional_event_types:
        #  - org.example.custom.event.type

        # We record the IP address of clients used to access the API for various
        # reasons, including displaying it to the user in the "Where you're signed in"
        # dialog.
        #
        # By default, when puppeting another user via the admin API, the client IP
        # address is recorded against the user who created the access token (ie, the
        # admin user), and *not* the puppeted user.
        #
        # Uncomment the following to also record the IP address against the puppeted
        # user. (This also means that the puppeted user will count as an "active" user
        # for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
        # above.)
        #
        #track_puppeted_user_ips: true
        """ % {
            "formatted_default_state_types": formatted_default_state_types
        }

@@ -138,5 +154,8 @@ _MAIN_SCHEMA = {
    "properties": {
        "room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
        "room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
        "track_puppeted_user_ips": {
            "type": "boolean",
        },
    },
}
@@ -55,19 +55,19 @@ https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""


@attr.s(slots=True, frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class EmailSubjectConfig:
    message_from_person_in_room = attr.ib(type=str)
    message_from_person = attr.ib(type=str)
    messages_from_person = attr.ib(type=str)
    messages_in_room = attr.ib(type=str)
    messages_in_room_and_others = attr.ib(type=str)
    messages_from_person_and_others = attr.ib(type=str)
    invite_from_person = attr.ib(type=str)
    invite_from_person_to_room = attr.ib(type=str)
    invite_from_person_to_space = attr.ib(type=str)
    password_reset = attr.ib(type=str)
    email_validation = attr.ib(type=str)
    message_from_person_in_room: str
    message_from_person: str
    messages_from_person: str
    messages_in_room: str
    messages_in_room_and_others: str
    messages_from_person_and_others: str
    invite_from_person: str
    invite_from_person_to_room: str
    invite_from_person_to_space: str
    password_reset: str
    email_validation: str


class EmailConfig(Config):
@@ -148,10 +148,13 @@ class OIDCConfig(Config):
        #      Defaults to false. Avoid this in production.
        #
        #   user_profile_method: Whether to fetch the user profile from the userinfo
        #      endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
        #      endpoint, or to rely on the data returned in the id_token from the
        #      token_endpoint.
        #
        #      Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
        #      included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
        #      Valid values are: 'auto' or 'userinfo_endpoint'.
        #
        #      Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
        #      not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
        #      userinfo endpoint.
        #
        #   allow_existing_users: set to 'true' to allow a user logging in via OIDC to
@@ -200,8 +200,8 @@ class HttpListenerConfig:
    """Object describing the http-specific parts of the config of a listener"""

    x_forwarded: bool = False
    resources: List[HttpResourceConfig] = attr.ib(factory=list)
    additional_resources: Dict[str, dict] = attr.ib(factory=dict)
    resources: List[HttpResourceConfig] = attr.Factory(list)
    additional_resources: Dict[str, dict] = attr.Factory(dict)
    tag: Optional[str] = None


@@ -883,7 +883,7 @@ class ServerConfig(Config):
        # The default room version for newly created rooms.
        #
        # Known room versions are listed here:
        #   https://matrix.org/docs/spec/#complete-list-of-room-versions
        #   https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
        #
        # For example, for room version 1, default_room_version should be set
        # to "1".
@@ -51,12 +51,12 @@ def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
    return obj


@attr.s
@attr.s(auto_attribs=True)
class InstanceLocationConfig:
    """The host and port to talk to an instance via HTTP replication."""

    host = attr.ib(type=str)
    port = attr.ib(type=int)
    host: str
    port: int


@attr.s

@@ -77,34 +77,28 @@ class WriterLocations:
    can only be a single instance.
    """

    events = attr.ib(
    events: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
    typing = attr.ib(
    typing: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
    to_device = attr.ib(
    to_device: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
    account_data = attr.ib(
    account_data: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
    receipts = attr.ib(
    receipts: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
    presence = attr.ib(
    presence: List[str] = attr.ib(
        default=["master"],
        type=List[str],
        converter=_instance_to_list_converter,
    )
@@ -58,7 +58,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


@attr.s(slots=True, cmp=False)
@attr.s(slots=True, frozen=True, cmp=False, auto_attribs=True)
class VerifyJsonRequest:
    """
    A request to verify a JSON object.

@@ -78,10 +78,10 @@ class VerifyJsonRequest:
        key_ids: The set of key_ids to that could be used to verify the JSON object
    """

    server_name = attr.ib(type=str)
    get_json_object = attr.ib(type=Callable[[], JsonDict])
    minimum_valid_until_ts = attr.ib(type=int)
    key_ids = attr.ib(type=List[str])
    server_name: str
    get_json_object: Callable[[], JsonDict]
    minimum_valid_until_ts: int
    key_ids: List[str]

    @staticmethod
    def from_json_object(

@@ -124,7 +124,7 @@ class KeyLookupError(ValueError):
    pass


@attr.s(slots=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _FetchKeyRequest:
    """A request for keys for a given server.

@@ -138,9 +138,9 @@ class _FetchKeyRequest:
        key_ids: The IDs of the keys to attempt to fetch
    """

    server_name = attr.ib(type=str)
    minimum_valid_until_ts = attr.ib(type=int)
    key_ids = attr.ib(type=List[str])
    server_name: str
    minimum_valid_until_ts: int
    key_ids: List[str]


class Keyring:
@@ -28,7 +28,7 @@ if TYPE_CHECKING:
    from synapse.storage.databases.main import DataStore


@attr.s(slots=True)
@attr.s(slots=True, auto_attribs=True)
class EventContext:
    """
    Holds information relevant to persisting an event

@@ -103,15 +103,15 @@ class EventContext:
        accessed via get_prev_state_ids.
    """

    rejected = attr.ib(default=False, type=Union[bool, str])
    _state_group = attr.ib(default=None, type=Optional[int])
    state_group_before_event = attr.ib(default=None, type=Optional[int])
    prev_group = attr.ib(default=None, type=Optional[int])
    delta_ids = attr.ib(default=None, type=Optional[StateMap[str]])
    app_service = attr.ib(default=None, type=Optional[ApplicationService])
    rejected: Union[bool, str] = False
    _state_group: Optional[int] = None
    state_group_before_event: Optional[int] = None
    prev_group: Optional[int] = None
    delta_ids: Optional[StateMap[str]] = None
    app_service: Optional[ApplicationService] = None

    _current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
    _prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
    _current_state_ids: Optional[StateMap[str]] = None
    _prev_state_ids: Optional[StateMap[str]] = None

    @staticmethod
    def with_state(
@@ -14,17 +14,7 @@
|
||||
# limitations under the License.
|
||||
import collections.abc
|
||||
import re
|
||||
from typing import (
|
||||
TYPE_CHECKING,
|
||||
Any,
|
||||
Callable,
|
||||
Dict,
|
||||
Iterable,
|
||||
List,
|
||||
Mapping,
|
||||
Optional,
|
||||
Union,
|
||||
)
|
||||
from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Union
|
||||
|
||||
from frozendict import frozendict
|
||||
|
||||
@@ -32,14 +22,10 @@ from synapse.api.constants import EventContentFields, EventTypes, RelationTypes
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.api.room_versions import RoomVersion
|
||||
from synapse.types import JsonDict
|
||||
from synapse.util.async_helpers import yieldable_gather_results
|
||||
from synapse.util.frozenutils import unfreeze
|
||||
|
||||
from . import EventBase
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
|
||||
# (?<!stuff) matches if the current position in the string is not preceded
|
||||
# by a match for 'stuff'.
|
||||
@@ -385,17 +371,12 @@ class EventClientSerializer:
|
||||
clients.
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
self.store = hs.get_datastore()
|
||||
self._msc1849_enabled = hs.config.experimental.msc1849_enabled
|
||||
self._msc3440_enabled = hs.config.experimental.msc3440_enabled
|
||||
|
||||
    async def serialize_event(
    def serialize_event(
        self,
        event: Union[JsonDict, EventBase],
        time_now: int,
        *,
        bundle_aggregations: bool = False,
        bundle_aggregations: Optional[Dict[str, JsonDict]] = None,
        **kwargs: Any,
    ) -> JsonDict:
        """Serializes a single event.
@@ -418,66 +399,41 @@ class EventClientSerializer:
        serialized_event = serialize_event(event, time_now, **kwargs)

        # Check if there are any bundled aggregations to include with the event.
        #
        # Do not bundle aggregations if any of the following are true:
        #
        # * Support is disabled via the configuration or the caller.
        # * The event is a state event.
        # * The event has been redacted.
        if (
            self._msc1849_enabled
            and bundle_aggregations
            and not event.is_state()
            and not event.internal_metadata.is_redacted()
        ):
            await self._injected_bundled_aggregations(event, time_now, serialized_event)
        if bundle_aggregations:
            event_aggregations = bundle_aggregations.get(event.event_id)
            if event_aggregations:
                self._injected_bundled_aggregations(
                    event,
                    time_now,
                    bundle_aggregations[event.event_id],
                    serialized_event,
                )

        return serialized_event

    async def _injected_bundled_aggregations(
        self, event: EventBase, time_now: int, serialized_event: JsonDict
    def _injected_bundled_aggregations(
        self,
        event: EventBase,
        time_now: int,
        aggregations: JsonDict,
        serialized_event: JsonDict,
    ) -> None:
        """Potentially injects bundled aggregations into the unsigned portion of the serialized event.

        Args:
            event: The event being serialized.
            time_now: The current time in milliseconds.
            aggregations: The bundled aggregations to serialize.
            serialized_event: The serialized event which may be modified.
        """
        # Do not bundle aggregations for an event which represents an edit or an
        # annotation. It does not make sense for them to have related events.
        relates_to = event.content.get("m.relates_to")
        if isinstance(relates_to, (dict, frozendict)):
            relation_type = relates_to.get("rel_type")
            if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
                return
        # Make a copy in case the object is cached.
        aggregations = aggregations.copy()

        event_id = event.event_id
        room_id = event.room_id

        # The bundled aggregations to include.
        aggregations = {}

        annotations = await self.store.get_aggregation_groups_for_event(
            event_id, room_id
        )
        if annotations.chunk:
            aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()

        references = await self.store.get_relations_for_event(
            event_id, room_id, RelationTypes.REFERENCE, direction="f"
        )
        if references.chunk:
            aggregations[RelationTypes.REFERENCE] = references.to_dict()

        edit = None
        if event.type == EventTypes.Message:
            edit = await self.store.get_applicable_edit(event_id, room_id)

        if edit:
        if RelationTypes.REPLACE in aggregations:
            # If there is an edit, replace the content, preserving existing
            # relations.
            edit = aggregations[RelationTypes.REPLACE]

            # Ensure we take copies of the edit content, otherwise we risk modifying
            # the original event.
@@ -502,27 +458,19 @@ class EventClientSerializer:
            }

        # If this event is the start of a thread, include a summary of the replies.
        if self._msc3440_enabled:
            (
                thread_count,
                latest_thread_event,
            ) = await self.store.get_thread_summary(event_id, room_id)
            if latest_thread_event:
                aggregations[RelationTypes.THREAD] = {
                    # Don't bundle aggregations as this could recurse forever.
                    "latest_event": await self.serialize_event(
                        latest_thread_event, time_now, bundle_aggregations=False
                    ),
                    "count": thread_count,
                }
        if RelationTypes.THREAD in aggregations:
            # Serialize the latest thread event.
            latest_thread_event = aggregations[RelationTypes.THREAD]["latest_event"]

        # If any bundled aggregations were found, include them.
        if aggregations:
            serialized_event["unsigned"].setdefault("m.relations", {}).update(
                aggregations
            # Don't bundle aggregations as this could recurse forever.
            aggregations[RelationTypes.THREAD]["latest_event"] = self.serialize_event(
                latest_thread_event, time_now, bundle_aggregations=None
            )

    async def serialize_events(
        # Include the bundled aggregations in the event.
        serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations)

    def serialize_events(
        self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
    ) -> List[JsonDict]:
        """Serializes multiple events.
@@ -535,9 +483,9 @@ class EventClientSerializer:
        Returns:
            The list of serialized events
        """
        return await yieldable_gather_results(
            self.serialize_event, events, time_now=time_now, **kwargs
        )
        return [
            self.serialize_event(event, time_now=time_now, **kwargs) for event in events
        ]


def copy_power_levels_contents(
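These hunks turn `serialize_event` from an async method that looked up each event's aggregations itself into a plain synchronous method that is handed a precomputed map of event ID to aggregations. A minimal standalone sketch of the new calling pattern, using toy stand-ins rather than Synapse's real `EventClientSerializer` and storage classes:

```python
from typing import Any, Dict, List, Optional

JsonDict = Dict[str, Any]


def serialize_event(
    event: JsonDict,
    time_now: int,  # kept only to mirror the real signature
    *,
    bundle_aggregations: Optional[Dict[str, JsonDict]] = None,
) -> JsonDict:
    # Purely synchronous: any aggregations were already fetched by the caller.
    serialized: JsonDict = {"event_id": event["event_id"], "unsigned": {}}
    if bundle_aggregations:
        event_aggregations = bundle_aggregations.get(event["event_id"])
        if event_aggregations:
            serialized["unsigned"].setdefault("m.relations", {}).update(
                event_aggregations
            )
    return serialized


# One batched lookup up front (hard-coded here), then a plain comprehension:
events: List[JsonDict] = [{"event_id": "$a"}, {"event_id": "$b"}]
aggregations = {"$a": {"m.annotation": {"chunk": []}}}
serialized = [serialize_event(e, 0, bundle_aggregations=aggregations) for e in events]
assert "m.relations" in serialized[0]["unsigned"]
assert serialized[1]["unsigned"] == {}
```

Batching the store lookup up front means the serializer no longer needs to await anything, which is what allows `serialize_events` to become a plain list comprehension instead of `yieldable_gather_results`.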
@@ -230,6 +230,10 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
    # origin, etc etc)
    assert_params_in_dict(pdu_json, ("type", "depth"))

    # Strip any unauthorized values from "unsigned" if they exist
    if "unsigned" in pdu_json:
        _strip_unsigned_values(pdu_json)

    depth = pdu_json["depth"]
    if not isinstance(depth, int):
        raise SynapseError(400, "Depth %r not an integer" % (depth,), Codes.BAD_JSON)
@@ -245,3 +249,24 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB

    event = make_event_from_dict(pdu_json, room_version)
    return event


def _strip_unsigned_values(pdu_dict: JsonDict) -> None:
    """
    Strip any unsigned values unless specifically allowed, as defined by the whitelist.

    pdu_dict: the json dict to strip values from. Note that the dict is mutated by
    this function.
    """
    unsigned = pdu_dict["unsigned"]

    if not isinstance(unsigned, dict):
        pdu_dict["unsigned"] = {}

    if pdu_dict["type"] == "m.room.member":
        whitelist = ["knock_room_state", "invite_room_state", "age"]
    else:
        whitelist = ["age"]

    filtered_unsigned = {k: v for k, v in unsigned.items() if k in whitelist}
    pdu_dict["unsigned"] = filtered_unsigned
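The whitelist filter above mutates the PDU in place. A self-contained sketch of the same idea; note that the hunk as shown would still iterate the old non-dict `unsigned` after replacing it, so this version adds an early return (that `return` is our addition, not part of the diff):

```python
from typing import Any, Dict

JsonDict = Dict[str, Any]


def strip_unsigned_values(pdu_dict: JsonDict) -> None:
    """Keep only whitelisted keys under "unsigned"; mutates pdu_dict in place."""
    unsigned = pdu_dict.get("unsigned", {})
    if not isinstance(unsigned, dict):
        # Junk "unsigned" is replaced wholesale; returning here avoids
        # iterating a non-dict below (this early return is our addition).
        pdu_dict["unsigned"] = {}
        return

    if pdu_dict["type"] == "m.room.member":
        whitelist = ["knock_room_state", "invite_room_state", "age"]
    else:
        whitelist = ["age"]

    pdu_dict["unsigned"] = {k: v for k, v in unsigned.items() if k in whitelist}


pdu = {
    "type": "m.room.member",
    "unsigned": {"age": 1234, "invite_room_state": [], "replaces_state": "$old"},
}
strip_unsigned_values(pdu)
assert pdu["unsigned"] == {"age": 1234, "invite_room_state": []}
```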
@@ -56,7 +56,6 @@ from synapse.api.room_versions import (
from synapse.events import EventBase, builder
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.transport.client import SendJoinResponse
from synapse.logging.utils import log_function
from synapse.types import JsonDict, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@@ -144,7 +143,6 @@ class FederationClient(FederationBase):
        if destination_dict:
            self.pdu_destination_tried[event_id] = destination_dict

    @log_function
    async def make_query(
        self,
        destination: str,
@@ -178,7 +176,6 @@ class FederationClient(FederationBase):
            ignore_backoff=ignore_backoff,
        )

    @log_function
    async def query_client_keys(
        self, destination: str, content: JsonDict, timeout: int
    ) -> JsonDict:
@@ -196,7 +193,6 @@ class FederationClient(FederationBase):
            destination, content, timeout
        )

    @log_function
    async def query_user_devices(
        self, destination: str, user_id: str, timeout: int = 30000
    ) -> JsonDict:
@@ -208,7 +204,6 @@ class FederationClient(FederationBase):
            destination, user_id, timeout
        )

    @log_function
    async def claim_client_keys(
        self, destination: str, content: JsonDict, timeout: int
    ) -> JsonDict:

@@ -58,7 +58,6 @@ from synapse.logging.context import (
    run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.utils import log_function
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.http.federation import (
    ReplicationFederationSendEduRestServlet,
@@ -859,7 +858,6 @@ class FederationServer(FederationBase):
        res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
        return 200, res

    @log_function
    async def on_query_client_keys(
        self, origin: str, content: Dict[str, str]
    ) -> Tuple[int, Dict[str, Any]]:
@@ -940,7 +938,6 @@ class FederationServer(FederationBase):

        return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}

    @log_function
    async def on_openid_userinfo(self, token: str) -> Optional[str]:
        ts_now_ms = self._clock.time_msec()
        return await self.store.get_user_id_for_open_id_token(token, ts_now_ms)

@@ -23,7 +23,6 @@ import logging
from typing import Optional, Tuple

from synapse.federation.units import Transaction
from synapse.logging.utils import log_function
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict

@@ -36,7 +35,6 @@ class TransactionActions:
    def __init__(self, datastore: DataStore):
        self.store = datastore

    @log_function
    async def have_responded(
        self, origin: str, transaction: Transaction
    ) -> Optional[Tuple[int, JsonDict]]:
@@ -53,7 +51,6 @@ class TransactionActions:

        return await self.store.get_received_txn_response(transaction_id, origin)

    @log_function
    async def set_response(
        self, origin: str, transaction: Transaction, code: int, response: JsonDict
    ) -> None:

@@ -607,18 +607,18 @@ class PerDestinationQueue:
        self._pending_pdus = []


@attr.s(slots=True)
@attr.s(slots=True, auto_attribs=True)
class _TransactionQueueManager:
    """A helper async context manager for pulling stuff off the queues and
    tracking what was last successfully sent, etc.
    """

    queue = attr.ib(type=PerDestinationQueue)
    queue: PerDestinationQueue

    _device_stream_id = attr.ib(type=Optional[int], default=None)
    _device_list_id = attr.ib(type=Optional[int], default=None)
    _last_stream_ordering = attr.ib(type=Optional[int], default=None)
    _pdus = attr.ib(type=List[EventBase], factory=list)
    _device_stream_id: Optional[int] = None
    _device_list_id: Optional[int] = None
    _last_stream_ordering: Optional[int] = None
    _pdus: List[EventBase] = attr.Factory(list)

    async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
        # First we calculate the EDUs we want to send, if any.

@@ -44,7 +44,6 @@ from synapse.api.urls import (
from synapse.events import EventBase, make_event_from_dict
from synapse.federation.units import Transaction
from synapse.http.matrixfederationclient import ByteParser
from synapse.logging.utils import log_function
from synapse.types import JsonDict

logger = logging.getLogger(__name__)
@@ -62,7 +61,6 @@ class TransportLayerClient:
        self.server_name = hs.hostname
        self.client = hs.get_federation_http_client()

    @log_function
    async def get_room_state_ids(
        self, destination: str, room_id: str, event_id: str
    ) -> JsonDict:
@@ -88,7 +86,6 @@ class TransportLayerClient:
            try_trailing_slash_on_400=True,
        )

    @log_function
    async def get_event(
        self, destination: str, event_id: str, timeout: Optional[int] = None
    ) -> JsonDict:
@@ -111,7 +108,6 @@ class TransportLayerClient:
            destination, path=path, timeout=timeout, try_trailing_slash_on_400=True
        )

    @log_function
    async def backfill(
        self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
    ) -> Optional[JsonDict]:
@@ -149,7 +145,6 @@ class TransportLayerClient:
            destination, path=path, args=args, try_trailing_slash_on_400=True
        )

    @log_function
    async def timestamp_to_event(
        self, destination: str, room_id: str, timestamp: int, direction: str
    ) -> Union[JsonDict, List]:
@@ -185,7 +180,6 @@ class TransportLayerClient:

        return remote_response

    @log_function
    async def send_transaction(
        self,
        transaction: Transaction,
@@ -234,7 +228,6 @@ class TransportLayerClient:
            try_trailing_slash_on_400=True,
        )

    @log_function
    async def make_query(
        self,
        destination: str,
@@ -254,7 +247,6 @@ class TransportLayerClient:
            ignore_backoff=ignore_backoff,
        )

    @log_function
    async def make_membership_event(
        self,
        destination: str,
@@ -317,7 +309,6 @@ class TransportLayerClient:
            ignore_backoff=ignore_backoff,
        )

    @log_function
    async def send_join_v1(
        self,
        room_version: RoomVersion,
@@ -336,7 +327,6 @@ class TransportLayerClient:
            max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
        )

    @log_function
    async def send_join_v2(
        self,
        room_version: RoomVersion,
@@ -355,7 +345,6 @@ class TransportLayerClient:
            max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
        )

    @log_function
    async def send_leave_v1(
        self, destination: str, room_id: str, event_id: str, content: JsonDict
    ) -> Tuple[int, JsonDict]:
@@ -372,7 +361,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def send_leave_v2(
        self, destination: str, room_id: str, event_id: str, content: JsonDict
    ) -> JsonDict:
@@ -389,7 +377,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def send_knock_v1(
        self,
        destination: str,
@@ -423,7 +410,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content
        )

    @log_function
    async def send_invite_v1(
        self, destination: str, room_id: str, event_id: str, content: JsonDict
    ) -> Tuple[int, JsonDict]:
@@ -433,7 +419,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def send_invite_v2(
        self, destination: str, room_id: str, event_id: str, content: JsonDict
    ) -> JsonDict:
@@ -443,7 +428,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def get_public_rooms(
        self,
        remote_server: str,
@@ -516,7 +500,6 @@ class TransportLayerClient:

        return response

    @log_function
    async def exchange_third_party_invite(
        self, destination: str, room_id: str, event_dict: JsonDict
    ) -> JsonDict:
@@ -526,7 +509,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=event_dict
        )

    @log_function
    async def get_event_auth(
        self, destination: str, room_id: str, event_id: str
    ) -> JsonDict:
@@ -534,7 +516,6 @@ class TransportLayerClient:

        return await self.client.get_json(destination=destination, path=path)

    @log_function
    async def query_client_keys(
        self, destination: str, query_content: JsonDict, timeout: int
    ) -> JsonDict:
@@ -576,7 +557,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=query_content, timeout=timeout
        )

    @log_function
    async def query_user_devices(
        self, destination: str, user_id: str, timeout: int
    ) -> JsonDict:
@@ -616,7 +596,6 @@ class TransportLayerClient:
            destination=destination, path=path, timeout=timeout
        )

    @log_function
    async def claim_client_keys(
        self, destination: str, query_content: JsonDict, timeout: int
    ) -> JsonDict:
@@ -655,7 +634,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=query_content, timeout=timeout
        )

    @log_function
    async def get_missing_events(
        self,
        destination: str,
@@ -680,7 +658,6 @@ class TransportLayerClient:
            timeout=timeout,
        )

    @log_function
    async def get_group_profile(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -694,7 +671,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def update_group_profile(
        self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -716,7 +692,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_group_summary(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -730,7 +705,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_rooms_in_group(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -798,7 +772,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_users_in_group(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -812,7 +785,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_invited_users_in_group(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -826,7 +798,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def accept_group_invite(
        self, destination: str, group_id: str, user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -837,7 +808,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    def join_group(
        self, destination: str, group_id: str, user_id: str, content: JsonDict
    ) -> Awaitable[JsonDict]:
@@ -848,7 +818,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def invite_to_group(
        self,
        destination: str,
@@ -868,7 +837,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def invite_to_group_notification(
        self, destination: str, group_id: str, user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -882,7 +850,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def remove_user_from_group(
        self,
        destination: str,
@@ -902,7 +869,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def remove_user_from_group_notification(
        self, destination: str, group_id: str, user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -916,7 +882,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def renew_group_attestation(
        self, destination: str, group_id: str, user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -930,7 +895,6 @@ class TransportLayerClient:
            destination=destination, path=path, data=content, ignore_backoff=True
        )

    @log_function
    async def update_group_summary_room(
        self,
        destination: str,
@@ -959,7 +923,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def delete_group_summary_room(
        self,
        destination: str,
@@ -986,7 +949,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_group_categories(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -1000,7 +962,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_group_category(
        self, destination: str, group_id: str, requester_user_id: str, category_id: str
    ) -> JsonDict:
@@ -1014,7 +975,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def update_group_category(
        self,
        destination: str,
@@ -1034,7 +994,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def delete_group_category(
        self, destination: str, group_id: str, requester_user_id: str, category_id: str
    ) -> JsonDict:
@@ -1048,7 +1007,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_group_roles(
        self, destination: str, group_id: str, requester_user_id: str
    ) -> JsonDict:
@@ -1062,7 +1020,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def get_group_role(
        self, destination: str, group_id: str, requester_user_id: str, role_id: str
    ) -> JsonDict:
@@ -1076,7 +1033,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def update_group_role(
        self,
        destination: str,
@@ -1096,7 +1052,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def delete_group_role(
        self, destination: str, group_id: str, requester_user_id: str, role_id: str
    ) -> JsonDict:
@@ -1110,7 +1065,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def update_group_summary_user(
        self,
        destination: str,
@@ -1136,7 +1090,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def set_group_join_policy(
        self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
    ) -> JsonDict:
@@ -1151,7 +1104,6 @@ class TransportLayerClient:
            ignore_backoff=True,
        )

    @log_function
    async def delete_group_summary_user(
        self,
        destination: str,
@@ -77,7 +77,7 @@ class AccountDataHandler:
    async def add_account_data_for_user(
        self, user_id: str, account_data_type: str, content: JsonDict
    ) -> int:
        """Add some account_data to a room for a user.
        """Add some global account_data for a user.

        Args:
            user_id: The user to add a tag for.
@@ -55,21 +55,47 @@ class AdminHandler:

    async def get_user(self, user: UserID) -> Optional[JsonDict]:
        """Function to get user details"""
        ret = await self.store.get_user_by_id(user.to_string())
        if ret:
            profile = await self.store.get_profileinfo(user.localpart)
            threepids = await self.store.user_get_threepids(user.to_string())
            external_ids = [
                ({"auth_provider": auth_provider, "external_id": external_id})
                for auth_provider, external_id in await self.store.get_external_ids_by_user(
                    user.to_string()
                )
            ]
            ret["displayname"] = profile.display_name
            ret["avatar_url"] = profile.avatar_url
            ret["threepids"] = threepids
            ret["external_ids"] = external_ids
        return ret
        user_info_dict = await self.store.get_user_by_id(user.to_string())
        if user_info_dict is None:
            return None

        # Restrict returned information to a known set of fields. This prevents additional
        # fields added to get_user_by_id from modifying Synapse's external API surface.
        user_info_to_return = {
            "name",
            "admin",
            "deactivated",
            "shadow_banned",
            "creation_ts",
            "appservice_id",
            "consent_server_notice_sent",
            "consent_version",
            "user_type",
            "is_guest",
        }

        # Restrict returned keys to a known set.
        user_info_dict = {
            key: value
            for key, value in user_info_dict.items()
            if key in user_info_to_return
        }

        # Add additional user metadata
        profile = await self.store.get_profileinfo(user.localpart)
        threepids = await self.store.user_get_threepids(user.to_string())
        external_ids = [
            ({"auth_provider": auth_provider, "external_id": external_id})
            for auth_provider, external_id in await self.store.get_external_ids_by_user(
                user.to_string()
            )
        ]
        user_info_dict["displayname"] = profile.display_name
        user_info_dict["avatar_url"] = profile.avatar_url
        user_info_dict["threepids"] = threepids
        user_info_dict["external_ids"] = external_ids

        return user_info_dict

    async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:
        """Write all data we have on the user to the given writer.
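The rewritten `get_user` narrows whatever `get_user_by_id` returns down to a fixed set of keys before decorating the result. The core move is a dict comprehension over an allow-list; a small illustration with a hypothetical, abridged field set (the real set in the hunk above is longer):

```python
from typing import Any, Dict, Set

# Hypothetical, abridged allow-list for illustration only.
USER_INFO_FIELDS: Set[str] = {"name", "admin", "deactivated", "creation_ts"}


def restrict_keys(info: Dict[str, Any], allowed: Set[str]) -> Dict[str, Any]:
    """Return a copy of `info` containing only the allowed keys."""
    return {key: value for key, value in info.items() if key in allowed}


row = {"name": "@alice:example.com", "admin": 0, "password_hash": "..."}
public = restrict_keys(row, USER_INFO_FIELDS)
assert "password_hash" not in public and public["name"] == "@alice:example.com"
```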
@@ -168,25 +168,25 @@ def login_id_phone_to_thirdparty(identifier: JsonDict) -> Dict[str, str]:
    }


@attr.s(slots=True)
@attr.s(slots=True, auto_attribs=True)
class SsoLoginExtraAttributes:
    """Data we track about SAML2 sessions"""

    # time the session was created, in milliseconds
    creation_time = attr.ib(type=int)
    extra_attributes = attr.ib(type=JsonDict)
    creation_time: int
    extra_attributes: JsonDict


@attr.s(slots=True, frozen=True)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class LoginTokenAttributes:
    """Data we store in a short-term login token"""

    user_id = attr.ib(type=str)
    user_id: str

    auth_provider_id = attr.ib(type=str)
    auth_provider_id: str
    """The SSO Identity Provider that the user authenticated with, to get this token."""

    auth_provider_session_id = attr.ib(type=Optional[str])
    auth_provider_session_id: Optional[str]
    """The session ID advertised by the SSO Identity Provider."""
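Several hunks in this changeset perform the same mechanical attrs migration: `attr.ib(type=...)` fields become plain annotated attributes under `@attr.s(auto_attribs=True)`. The two spellings produce equivalent classes, as a quick check with illustrative field names shows:

```python
import attr


# Old style: fields declared with attr.ib(type=...).
@attr.s(slots=True, frozen=True)
class LoginTokenOld:
    user_id = attr.ib(type=str)
    auth_provider_id = attr.ib(type=str)


# New style: auto_attribs turns annotated class attributes into fields.
@attr.s(slots=True, frozen=True, auto_attribs=True)
class LoginTokenNew:
    user_id: str
    auth_provider_id: str


old = LoginTokenOld("@alice:example.com", "oidc")
new = LoginTokenNew("@alice:example.com", "oidc")
assert attr.asdict(old) == attr.asdict(new)
```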
@@ -948,8 +948,16 @@ class DeviceListUpdater:
            devices = []
            ignore_devices = True
        else:
            prev_stream_id = await self.store.get_device_list_last_stream_id_for_remote(
                user_id
            )
            cached_devices = await self.store.get_cached_devices_for_user(user_id)
            if cached_devices == {d["device_id"]: d for d in devices}:

            # To ensure that a user with no devices is cached, we skip the resync only
            # if we have a stream_id from previously writing a cache entry.
            if prev_stream_id is not None and cached_devices == {
                d["device_id"]: d for d in devices
            }:
                logging.info(
                    "Skipping device list resync for %s, as our cache matches already",
                    user_id,
@@ -1321,14 +1321,14 @@ def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool:
    return old_key == new_key_copy


@attr.s(slots=True)
@attr.s(slots=True, auto_attribs=True)
class SignatureListItem:
    """An item in the signature list as used by upload_signatures_for_device_keys."""

    signing_key_id = attr.ib(type=str)
    target_user_id = attr.ib(type=str)
    target_device_id = attr.ib(type=str)
    signature = attr.ib(type=JsonDict)
    signing_key_id: str
    target_user_id: str
    target_device_id: str
    signature: JsonDict


class SigningKeyEduUpdater:
@@ -20,7 +20,6 @@ from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.utils import log_function
from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, UserID
from synapse.visibility import filter_events_for_client
@@ -43,7 +42,6 @@ class EventStreamHandler:
        self._server_notices_sender = hs.get_server_notices_sender()
        self._event_serializer = hs.get_event_client_serializer()

    @log_function
    async def get_stream(
        self,
        auth_user_id: str,
@@ -119,7 +117,7 @@ class EventStreamHandler:

            events.extend(to_add)

            chunks = await self._event_serializer.serialize_events(
            chunks = self._event_serializer.serialize_events(
                events,
                time_now,
                as_client_event=as_client_event,
@@ -51,7 +51,6 @@ from synapse.logging.context import (
    preserve_fn,
    run_in_background,
)
from synapse.logging.utils import log_function
from synapse.replication.http.federation import (
    ReplicationCleanRoomRestServlet,
    ReplicationStoreRoomOnOutlierMembershipRestServlet,
@@ -556,7 +555,6 @@ class FederationHandler:

        run_in_background(self._handle_queued_pdus, room_queue)

    @log_function
    async def do_knock(
        self,
        target_hosts: List[str],
@@ -928,7 +926,6 @@ class FederationHandler:

        return event

    @log_function
    async def on_make_knock_request(
        self, origin: str, room_id: str, user_id: str
    ) -> EventBase:
@@ -1039,7 +1036,6 @@ class FederationHandler:
        else:
            return []

    @log_function
    async def on_backfill_request(
        self, origin: str, room_id: str, pdu_list: List[str], limit: int
    ) -> List[EventBase]:
@@ -1056,7 +1052,6 @@ class FederationHandler:

        return events

    @log_function
    async def get_persisted_pdu(
        self, origin: str, event_id: str
    ) -> Optional[EventBase]:
@@ -1118,7 +1113,6 @@ class FederationHandler:

        return missing_events

    @log_function
    async def exchange_third_party_invite(
        self, sender_user_id: str, target_user_id: str, room_id: str, signed: JsonDict
    ) -> None:
@@ -56,7 +56,6 @@ from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.logging.context import nested_logging_context, run_in_background
from synapse.logging.utils import log_function
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.replication.http.federation import (
@@ -275,7 +274,6 @@ class FederationEventHandler:

        await self._process_received_pdu(origin, pdu, state=None)

    @log_function
    async def on_send_membership_event(
        self, origin: str, event: EventBase
    ) -> Tuple[EventBase, EventContext]:
@@ -472,7 +470,6 @@ class FederationEventHandler:

        return await self.persist_events_and_notify(room_id, [(event, context)])

    @log_function
    async def backfill(
        self, dest: str, room_id: str, limit: int, extremities: Collection[str]
    ) -> None:
@@ -170,7 +170,7 @@ class InitialSyncHandler:
                d["inviter"] = event.sender

                invite_event = await self.store.get_event(event.event_id)
                d["invite"] = await self._event_serializer.serialize_event(
                d["invite"] = self._event_serializer.serialize_event(
                    invite_event,
                    time_now,
                    as_client_event=as_client_event,
@@ -222,7 +222,7 @@ class InitialSyncHandler:

            d["messages"] = {
                "chunk": (
                    await self._event_serializer.serialize_events(
                    self._event_serializer.serialize_events(
                        messages,
                        time_now=time_now,
                        as_client_event=as_client_event,
@@ -232,7 +232,7 @@ class InitialSyncHandler:
                "end": await end_token.to_string(self.store),
            }

            d["state"] = await self._event_serializer.serialize_events(
            d["state"] = self._event_serializer.serialize_events(
                current_state.values(),
                time_now=time_now,
                as_client_event=as_client_event,
@@ -376,16 +376,14 @@ class InitialSyncHandler:
            "messages": {
                "chunk": (
                    # Don't bundle aggregations as this is a deprecated API.
                    await self._event_serializer.serialize_events(messages, time_now)
                    self._event_serializer.serialize_events(messages, time_now)
                ),
                "start": await start_token.to_string(self.store),
                "end": await end_token.to_string(self.store),
            },
            "state": (
                # Don't bundle aggregations as this is a deprecated API.
                await self._event_serializer.serialize_events(
                    room_state.values(), time_now
                )
                self._event_serializer.serialize_events(room_state.values(), time_now)
            ),
            "presence": [],
            "receipts": [],
@@ -404,7 +402,7 @@ class InitialSyncHandler:
        # TODO: These concurrently
        time_now = self.clock.time_msec()
        # Don't bundle aggregations as this is a deprecated API.
        state = await self._event_serializer.serialize_events(
        state = self._event_serializer.serialize_events(
            current_state.values(), time_now
        )

@@ -480,7 +478,7 @@ class InitialSyncHandler:
            "messages": {
                "chunk": (
                    # Don't bundle aggregations as this is a deprecated API.
                    await self._event_serializer.serialize_events(messages, time_now)
                    self._event_serializer.serialize_events(messages, time_now)
                ),
                "start": await start_token.to_string(self.store),
                "end": await end_token.to_string(self.store),
@@ -246,7 +246,7 @@ class MessageHandler:
            room_state = room_state_events[membership_event_id]

        now = self.clock.time_msec()
        events = await self._event_serializer.serialize_events(room_state.values(), now)
        events = self._event_serializer.serialize_events(room_state.values(), now)
        return events

    async def get_joined_members(self, requester: Requester, room_id: str) -> dict:
@@ -537,14 +537,16 @@ class PaginationHandler:
        state_dict = await self.store.get_events(list(state_ids.values()))
        state = state_dict.values()

        aggregations = await self.store.get_bundled_aggregations(events, user_id)

        time_now = self.clock.time_msec()

        chunk = {
            "chunk": (
                await self._event_serializer.serialize_events(
                self._event_serializer.serialize_events(
                    events,
                    time_now,
                    bundle_aggregations=True,
                    bundle_aggregations=aggregations,
                    as_client_event=as_client_event,
                )
            ),
@@ -553,7 +555,7 @@ class PaginationHandler:
        }

        if state:
            chunk["state"] = await self._event_serializer.serialize_events(
            chunk["state"] = self._event_serializer.serialize_events(
                state, time_now, as_client_event=as_client_event
            )
@@ -55,7 +55,6 @@ from synapse.api.presence import UserPresenceState
from synapse.appservice import ApplicationService
from synapse.events.presence_router import PresenceRouter
from synapse.logging.context import run_in_background
from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.presence import (
@@ -1542,7 +1541,6 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
        self.clock = hs.get_clock()
        self.store = hs.get_datastore()

    @log_function
    async def get_new_events(
        self,
        user: UserID,
@@ -979,16 +979,18 @@ class RegistrationHandler:
        if (
            self.hs.config.email.email_enable_notifs
            and self.hs.config.email.email_notif_for_new_users
            and token
        ):
            # Pull the ID of the access token back out of the db
            # It would really make more sense for this to be passed
            # up when the access token is saved, but that's quite an
            # invasive change I'd rather do separately.
            user_tuple = await self.store.get_user_by_access_token(token)
            # The token better still exist.
            assert user_tuple
            token_id = user_tuple.token_id
            if token:
                user_tuple = await self.store.get_user_by_access_token(token)
                # The token better still exist.
                assert user_tuple
                token_id = user_tuple.token_id
            else:
                token_id = None

            await self.pusher_pool.add_pusher(
                user_id=user_id,
@@ -393,7 +393,9 @@ class RoomCreationHandler:
        user_id = requester.user.to_string()

        if not await self.spam_checker.user_may_create_room(user_id):
            raise SynapseError(403, "You are not permitted to create rooms")
            raise SynapseError(
                403, "You are not permitted to create rooms", Codes.FORBIDDEN
            )

        creation_content: JsonDict = {
            "room_version": new_room_version.identifier,
@@ -685,7 +687,9 @@ class RoomCreationHandler:
                invite_3pid_list,
            )
        ):
            raise SynapseError(403, "You are not permitted to create rooms")
            raise SynapseError(
                403, "You are not permitted to create rooms", Codes.FORBIDDEN
            )

        if ratelimit:
            await self.request_ratelimiter.ratelimit(requester)
@@ -1177,6 +1181,22 @@ class RoomContextHandler:
        # `filtered` rather than the event we retrieved from the datastore.
        results["event"] = filtered[0]

        # Fetch the aggregations.
        aggregations = await self.store.get_bundled_aggregations(
            [results["event"]], user.to_string()
        )
        aggregations.update(
            await self.store.get_bundled_aggregations(
                results["events_before"], user.to_string()
            )
        )
        aggregations.update(
            await self.store.get_bundled_aggregations(
                results["events_after"], user.to_string()
            )
        )
        results["aggregations"] = aggregations

        if results["events_after"]:
            last_event_id = results["events_after"][-1].event_id
        else:
@@ -153,6 +153,9 @@ class RoomSummaryHandler:
        rooms_result: List[JsonDict] = []
        events_result: List[JsonDict] = []

        if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
            max_rooms_per_space = MAX_ROOMS_PER_SPACE

        while room_queue and len(rooms_result) < MAX_ROOMS:
            queue_entry = room_queue.popleft()
            room_id = queue_entry.room_id
@@ -167,7 +170,7 @@ class RoomSummaryHandler:
            # The client-specified max_rooms_per_space limit doesn't apply to the
            # room_id specified in the request, so we ignore it if this is the
            # first room we are processing.
            max_children = max_rooms_per_space if processed_rooms else None
            max_children = max_rooms_per_space if processed_rooms else MAX_ROOMS

            if is_in_room:
                room_entry = await self._summarize_local_room(
@@ -209,7 +212,7 @@ class RoomSummaryHandler:
                # Before returning to the client, remove the allowed_room_ids
                # and allowed_spaces keys.
                room.pop("allowed_room_ids", None)
                room.pop("allowed_spaces", None)
                room.pop("allowed_spaces", None)  # historical

                rooms_result.append(room)
                events.extend(room_entry.children_state_events)
@@ -395,7 +398,7 @@ class RoomSummaryHandler:
                None,
                room_id,
                suggested_only,
                # TODO Handle max children.
                # Do not limit the maximum children.
                max_children=None,
            )

@@ -525,6 +528,10 @@ class RoomSummaryHandler:
        rooms_result: List[JsonDict] = []
        events_result: List[JsonDict] = []

        # Set a limit on the number of rooms to return.
        if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
            max_rooms_per_space = MAX_ROOMS_PER_SPACE

        while room_queue and len(rooms_result) < MAX_ROOMS:
            room_id = room_queue.popleft()
            if room_id in processed_rooms:
@@ -583,7 +590,9 @@ class RoomSummaryHandler:

        # Iterate through each child and potentially add it, but not its children,
        # to the response.
        for child_room in root_room_entry.children_state_events:
        for child_room in itertools.islice(
            root_room_entry.children_state_events, MAX_ROOMS_PER_SPACE
        ):
            room_id = child_room.get("state_key")
            assert isinstance(room_id, str)
            # If the room is unknown, skip it.
@@ -633,8 +642,8 @@ class RoomSummaryHandler:
            suggested_only: True if only suggested children should be returned.
                Otherwise, all children are returned.
            max_children:
                The maximum number of children rooms to include. This is capped
                to a server-set limit.
                The maximum number of children rooms to include. A value of None
                means no limit.

        Returns:
            A room entry if the room should be returned. None, otherwise.
@@ -656,8 +665,13 @@ class RoomSummaryHandler:
        # we only care about suggested children
        child_events = filter(_is_suggested_child_event, child_events)

        if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
            max_children = MAX_ROOMS_PER_SPACE
        # TODO max_children is legacy code for the /spaces endpoint.
        if max_children is not None:
            child_iter: Iterable[EventBase] = itertools.islice(
                child_events, max_children
            )
        else:
            child_iter = child_events

        stripped_events: List[JsonDict] = [
            {
@@ -668,7 +682,7 @@ class RoomSummaryHandler:
                "sender": e.sender,
                "origin_server_ts": e.origin_server_ts,
            }
            for e in itertools.islice(child_events, max_children)
            for e in child_iter
        ]
        return _RoomEntry(room_id, room_entry, stripped_events)

@@ -988,12 +1002,14 @@ class RoomSummaryHandler:
        "canonical_alias": stats["canonical_alias"],
        "num_joined_members": stats["joined_members"],
        "avatar_url": stats["avatar"],
        # plural join_rules is a documentation error but kept for historical
        # purposes. Should match /publicRooms.
        "join_rules": stats["join_rules"],
        "join_rule": stats["join_rules"],
        "world_readable": (
            stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
        ),
        "guest_can_join": stats["guest_access"] == "can_join",
        "creation_ts": create_event.origin_server_ts,
        "room_type": create_event.content.get(EventContentFields.ROOM_TYPE),
    }
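The room-summary hunks replace the clamp-to-`MAX_ROOMS_PER_SPACE` logic with `itertools.islice`, treating `None` as "no limit". A tiny helper capturing that pattern (the helper name is invented for illustration):

```python
import itertools
from typing import Iterable, Iterator, Optional, TypeVar

T = TypeVar("T")


def maybe_limit(items: Iterable[T], max_items: Optional[int]) -> Iterator[T]:
    """Lazily cap an iterable; None means unlimited (like max_children=None)."""
    if max_items is None:
        return iter(items)
    return itertools.islice(items, max_items)


assert list(maybe_limit(range(10), 3)) == [0, 1, 2]
assert list(maybe_limit(range(3), None)) == [0, 1, 2]
```

Because `islice` is lazy, the cap costs nothing when the child list is already short, and no intermediate list is built when it is long.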
@@ -420,10 +420,10 @@ class SearchHandler:
        time_now = self.clock.time_msec()

        for context in contexts.values():
            context["events_before"] = await self._event_serializer.serialize_events(
            context["events_before"] = self._event_serializer.serialize_events(
                context["events_before"], time_now
            )
            context["events_after"] = await self._event_serializer.serialize_events(
            context["events_after"] = self._event_serializer.serialize_events(
                context["events_after"], time_now
            )

@@ -441,9 +441,7 @@ class SearchHandler:
            results.append(
                {
                    "rank": rank_map[e.event_id],
                    "result": (
                        await self._event_serializer.serialize_event(e, time_now)
                    ),
                    "result": self._event_serializer.serialize_event(e, time_now),
                    "context": contexts.get(e.event_id, {}),
                }
            )
@@ -457,7 +455,7 @@ class SearchHandler:
        if state_results:
            s = {}
            for room_id, state_events in state_results.items():
                s[room_id] = await self._event_serializer.serialize_events(
                s[room_id] = self._event_serializer.serialize_events(
                    state_events, time_now
                )
@@ -126,45 +126,45 @@ class SsoIdentityProvider(Protocol):
        raise NotImplementedError()


@attr.s
@attr.s(auto_attribs=True)
class UserAttributes:
    # the localpart of the mxid that the mapper has assigned to the user.
    # if `None`, the mapper has not picked a userid, and the user should be prompted to
    # enter one.
    localpart = attr.ib(type=Optional[str])
    display_name = attr.ib(type=Optional[str], default=None)
    emails = attr.ib(type=Collection[str], default=attr.Factory(list))
    localpart: Optional[str]
    display_name: Optional[str] = None
    emails: Collection[str] = attr.Factory(list)


@attr.s(slots=True)
@attr.s(slots=True, auto_attribs=True)
class UsernameMappingSession:
    """Data we track about SSO sessions"""

    # A unique identifier for this SSO provider, e.g. "oidc" or "saml".
    auth_provider_id = attr.ib(type=str)
    auth_provider_id: str

    # user ID on the IdP server
    remote_user_id = attr.ib(type=str)
    remote_user_id: str

    # attributes returned by the ID mapper
    display_name = attr.ib(type=Optional[str])
    emails = attr.ib(type=Collection[str])
    display_name: Optional[str]
    emails: Collection[str]

    # An optional dictionary of extra attributes to be provided to the client in the
    # login response.
    extra_login_attributes = attr.ib(type=Optional[JsonDict])
    extra_login_attributes: Optional[JsonDict]

    # where to redirect the client back to
    client_redirect_url = attr.ib(type=str)
    client_redirect_url: str

    # expiry time for the session, in milliseconds
    expiry_time_ms = attr.ib(type=int)
    expiry_time_ms: int

    # choices made by the user
    chosen_localpart = attr.ib(type=Optional[str], default=None)
    use_display_name = attr.ib(type=bool, default=True)
    emails_to_use = attr.ib(type=Collection[str], default=())
    terms_accepted_version = attr.ib(type=Optional[str], default=None)
    chosen_localpart: Optional[str] = None
    use_display_name: bool = True
    emails_to_use: Collection[str] = ()
    terms_accepted_version: Optional[str] = None


# the HTTP cookie used to track the mapping session id
@@ -60,10 +60,6 @@ if TYPE_CHECKING:

logger = logging.getLogger(__name__)

# Debug logger for https://github.com/matrix-org/synapse/issues/4422
issue4422_logger = logging.getLogger("synapse.handler.sync.4422_debug")


# Counts the number of times we returned a non-empty sync. `type` is one of
# "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is
# "true" or "false" depending on if the request asked for lazy loaded members or
@@ -102,6 +98,9 @@ class TimelineBatch:
    prev_batch: StreamToken
    events: List[EventBase]
    limited: bool
    # A mapping of event ID to the bundled aggregations for the above events.
    # This is only calculated if limited is true.
    bundled_aggregations: Optional[Dict[str, Dict[str, Any]]] = None

    def __bool__(self) -> bool:
        """Make the result appear empty if there are no updates. This is used
@@ -634,10 +633,19 @@ class SyncHandler:

        prev_batch_token = now_token.copy_and_replace("room_key", room_key)

        # Don't bother to bundle aggregations if the timeline is unlimited,
        # as clients will have all the necessary information.
        bundled_aggregations = None
        if limited or newly_joined_room:
            bundled_aggregations = await self.store.get_bundled_aggregations(
                recents, sync_config.user.to_string()
            )

        return TimelineBatch(
            events=recents,
            prev_batch=prev_batch_token,
            limited=limited or newly_joined_room,
            bundled_aggregations=bundled_aggregations,
        )

    async def get_state_after_event(
@@ -1161,13 +1169,8 @@ class SyncHandler:

        num_events = 0

        # debug for https://github.com/matrix-org/synapse/issues/4422
        # debug for https://github.com/matrix-org/synapse/issues/9424
        for joined_room in sync_result_builder.joined:
            room_id = joined_room.room_id
            if room_id in newly_joined_rooms:
                issue4422_logger.debug(
                    "Sync result for newly joined room %s: %r", room_id, joined_room
                )
            num_events += len(joined_room.timeline.events)

        log_kv(
@@ -1740,18 +1743,6 @@ class SyncHandler:
            old_mem_ev_id, allow_none=True
        )

        # debug for #4422
        if has_join:
            prev_membership = None
            if old_mem_ev:
                prev_membership = old_mem_ev.membership
            issue4422_logger.debug(
                "Previous membership for room %s with join: %s (event %s)",
                room_id,
                prev_membership,
                old_mem_ev_id,
            )

        if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
            newly_joined_rooms.append(room_id)

@@ -1893,13 +1884,6 @@ class SyncHandler:
                upto_token=since_token,
            )

            if newly_joined:
                # debugging for https://github.com/matrix-org/synapse/issues/4422
                issue4422_logger.debug(
                    "RoomSyncResultBuilder events for newly joined room %s: %r",
                    room_id,
                    entry.events,
                )
            room_entries.append(entry)

        return _RoomChanges(
@@ -2077,14 +2061,6 @@ class SyncHandler:
        # `_load_filtered_recents` can't find any events the user should see
        # (e.g. due to having ignored the sender of the last 50 events).

        if newly_joined:
            # debug for https://github.com/matrix-org/synapse/issues/4422
            issue4422_logger.debug(
                "Timeline events after filtering in newly-joined room %s: %r",
                room_id,
                batch,
            )

        # When we join the room (or the client requests full_state), we should
        # send down any existing tags. Usually the user won't have tags in a
        # newly joined room, unless either a) they've joined before or b) the
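The sync hunks fetch bundled aggregations once per timeline batch, and only when the batch is limited or the room is newly joined. A rough sketch of that control flow with stand-in types (not Synapse's real `TimelineBatch` or store):

```python
import asyncio
from typing import Any, Dict, List, Optional


async def get_bundled_aggregations(events: List[str], user_id: str) -> Dict[str, Any]:
    # Stand-in for the real store query; returns event_id -> aggregations.
    return {event_id: {} for event_id in events}


async def load_timeline(
    recents: List[str], limited: bool, newly_joined: bool, user_id: str
) -> Dict[str, Any]:
    # Only pay for the aggregation lookup when the timeline is limited (or the
    # room is newly joined); an unlimited timeline already carries everything.
    bundled: Optional[Dict[str, Any]] = None
    if limited or newly_joined:
        bundled = await get_bundled_aggregations(recents, user_id)
    return {
        "events": recents,
        "limited": limited or newly_joined,
        "bundled_aggregations": bundled,
    }


print(asyncio.run(load_timeline(["$e1"], True, False, "@alice:example.com")))
```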
@@ -32,9 +32,9 @@ class ProxyConnectError(ConnectError):
    pass


@attr.s
@attr.s(auto_attribs=True)
class ProxyCredentials:
    username_password = attr.ib(type=bytes)
    username_password: bytes

    def as_proxy_authorization_value(self) -> bytes:
        """
Some files were not shown because too many files have changed in this diff.