
Compare commits


2 Commits

Author         SHA1        Message                                                   Date
Andrew Morgan  36f37acf53  1.50.2                                                    2022-01-24 13:37:20 +00:00
Andrew Morgan  8ff465d206  Fix logic for dropping old events in fed queue (#11806)  2022-01-24 13:35:50 +00:00
                           Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
                           Co-authored-by: Richard van der Hoff <richard@matrix.org>
180 changed files with 1613 additions and 2200 deletions


@@ -8,7 +8,6 @@
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry (see the example after this checklist).
* [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off)
* [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct
(run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
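Taken together, these rules mean a changelog entry is a single markdown sentence in a file named after the pull request. For illustration, a hypothetical `changelog.d/1234.bugfix` (the PR number is made up) could contain:

```
Fix a bug where the `example_option` setting was ignored. Contributed by @github_username.
```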


@@ -366,8 +366,6 @@ jobs:
# Build initial Synapse image
- run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
working-directory: synapse
env:
DOCKER_BUILDKIT: 1
# Build a ready-to-run Synapse image based on the initial image above.
# This new image includes a config file, keys for signing and TLS, and
@@ -376,8 +374,7 @@ jobs:
working-directory: complement/dockerfiles
# Run Complement
- run: set -o pipefail && go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
shell: bash
- run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
env:
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
working-directory: complement

.gitignore (4 changes)

@@ -50,7 +50,3 @@ __pycache__/
# docs
book/
# complement
/complement-master
/master.tar.gz


@@ -1,3 +1,12 @@
Synapse 1.50.2 (2022-01-24)
===========================

Bugfixes
--------

- Fix a bug introduced in Synapse 1.40.0 that caused Synapse to fail to process incoming federation traffic after handling a large amount of events in a v1 room. ([\#11806](https://github.com/matrix-org/synapse/issues/11806))

Synapse 1.50.1 (2022-01-18)
===========================


@@ -1,2 +0,0 @@
Fix a long-standing issue which could cause Synapse to incorrectly accept data in the unsigned field of events
received over federation.


@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.


@@ -1 +0,0 @@
Remove the `"password_hash"` field from the response dictionaries of the [Users Admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html).


@@ -1 +0,0 @@
Include whether the requesting user has participated in a thread when generating a summary for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).


@@ -1 +0,0 @@
Fix a performance regression in `/sync` handling, introduced in 1.49.0.


@@ -1 +0,0 @@
Fix a long-standing bug where Synapse wouldn't cache a response indicating that a remote user has no devices.


@@ -1 +0,0 @@
Fix an error that occurred when getting the federation status of a destination server even if no error had occurred. This admin API was newly introduced in Synapse 1.49.0.


@@ -1 +0,0 @@
Avoid database access in the JSON serialization process.


@@ -1 +0,0 @@
Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).


@@ -1 +0,0 @@
Fix `/_matrix/client/v1/room/{roomId}/hierarchy` endpoint returning incorrect fields which have been present since Synapse 1.49.0.


@@ -1 +0,0 @@
Fix preview of some gif URLs (like tenor.com). Contributed by Philippe Daouadi.


@@ -1 +0,0 @@
Return an `M_FORBIDDEN` error code instead of `M_UNKNOWN` when a spam checker module prevents a user from creating a room.


@@ -1 +0,0 @@
Add a flag to the `synapse_review_recent_signups` script to ignore and filter appservice users.


@@ -1 +0,0 @@
Remove the unstable `/send_relation` endpoint.


@@ -1 +0,0 @@
Run `pyupgrade --py37-plus --keep-percent-format` on Synapse.


@@ -1 +0,0 @@
Warn against using a Let's Encrypt certificate for TLS/DTLS TURN server client connections, and suggest using ZeroSSL certificate instead. This bypasses client-side connectivity errors caused by WebRTC libraries that reject Let's Encrypt certificates. Contributed by @AndrewFerr.


@@ -1 +0,0 @@
Use buildkit's cache feature to speed up docker builds.


@@ -1 +0,0 @@
Use `auto_attribs` and native type hints for attrs classes.


@@ -1 +0,0 @@
Remove debug logging for #4422, which has been closed since Synapse 0.99.


@@ -1 +0,0 @@
Fix a bug where only the first 50 rooms from a space were returned from the `/hierarchy` API. This has existed since the introduction of the API in Synapse v1.41.0.


@@ -1 +0,0 @@
Remove fallback code for Python 2.


@@ -1 +0,0 @@
Add a test for [an edge case](https://github.com/matrix-org/synapse/pull/11532#discussion_r769104461) in the `/sync` logic.


@@ -1 +0,0 @@
Add the option to write sqlite test dbs to disk when running tests.


@@ -1 +0,0 @@
Improve Complement test output for GitHub Actions.


@@ -1 +0,0 @@
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.


@@ -1 +0,0 @@
Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s.


@@ -1 +0,0 @@
Document the new `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable in the contributing guide.


@@ -1 +0,0 @@
Fix docstring on `add_account_data_for_user`.


@@ -1 +0,0 @@
Complement environment variable name change and update `.gitignore`.


@@ -1 +0,0 @@
Simplify calculation of prometheus metrics for garbage collection.


@@ -1 +0,0 @@
Improve accuracy of `python_twisted_reactor_tick_time` prometheus metric.


@@ -1 +0,0 @@
Remove `python_twisted_reactor_pending_calls` prometheus metric.


@@ -1 +0,0 @@
Document that the minimum supported PostgreSQL version is now 10.


@@ -1 +0,0 @@
Fix typo in demo docs: differnt.


@@ -1 +0,0 @@
Make the list rooms admin API sort stable. Contributed by Daniël Sonck.


@@ -1 +0,0 @@
Update room spec url in config files.


@@ -1 +0,0 @@
Mention python3-venv and libpq-dev dependencies in contribution guide.


@@ -1 +0,0 @@
Minor efficiency improvements when inserting many values into the database.


@@ -1 +0,0 @@
Invite PR authors to give themselves credit in the changelog.


@@ -1 +0,0 @@
Fix a bug introduced in Synapse v1.18.0 where password reset and address validation emails would not be sent if their subject was configured to use the 'app' template variable. Contributed by @br4nnigan.


@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.


@@ -1 +0,0 @@
Update documentation for configuring login with facebook.


@@ -1 +0,0 @@
Add `track_puppeted_user_ips` config flag to record client IP addresses against puppeted users, and include the puppeted users in monthly active user counts.


@@ -1 +0,0 @@
Remove `log_function` utility function and its uses.


@@ -1 +0,0 @@
Use `auto_attribs` and native type hints for attrs classes.


@@ -92,6 +92,22 @@ new PromConsole.Graph({
})
</script>
<h3>Pending calls per tick</h3>
<div id="reactor_pending_calls"></div>
<script>
new PromConsole.Graph({
node: document.querySelector("#reactor_pending_calls"),
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
name: "[[job]]-[[index]]",
min: 0,
renderer: "line",
height: 150,
yAxisFormatter: PromConsole.NumberFormatter.humanize,
yHoverFormatter: PromConsole.NumberFormatter.humanize,
yTitle: "Pending Calls"
})
</script>
<h1>Storage</h1>
<h3>Queries</h3>

debian/changelog (6 changes)

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.50.2) stable; urgency=medium

  * New synapse release 1.50.2.

 -- Synapse Packaging team <packages@matrix.org>  Mon, 24 Jan 2022 13:37:11 +0000

matrix-synapse-py3 (1.50.1) stable; urgency=medium

  * New synapse release 1.50.1.


@@ -22,5 +22,5 @@ Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}
Also note that when joining a public room on a different HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.


@@ -1,17 +1,14 @@
# Dockerfile to build the matrixdotorg/synapse docker images.
#
# Note that it uses features which are only available in BuildKit - see
# https://docs.docker.com/go/buildkit/ for more information.
#
# To build the image, run `docker build` command from the root of the
# synapse repository:
#
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile .
# docker build -f docker/Dockerfile .
#
# There is an optional PYTHON_VERSION build argument which sets the
# version of python to build against: for example:
#
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.9 .
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=3.8
@@ -22,16 +19,7 @@ ARG PYTHON_VERSION=3.8
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
# install the OS build deps
#
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt, to improve rebuild speeds on
# slow connections.
#
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y \
build-essential \
libffi-dev \
libjpeg-dev \
@@ -56,8 +44,7 @@ COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
# used while you develop on the source
#
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --prefix="/install" --no-warn-script-location \
RUN pip install --prefix="/install" --no-warn-script-location \
/synapse[all]
# Copy over the rest of the project
@@ -79,10 +66,7 @@ LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/syna
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
LABEL org.opencontainers.image.licenses='Apache-2.0'
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
RUN apt-get update && apt-get install -y \
curl \
gosu \
libjpeg62-turbo \


@@ -15,10 +15,9 @@ server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
```jsonc
```json
{
"name": "@user:example.com",
"displayname": "User", // can be null if not set
"displayname": "User",
"threepids": [
{
"medium": "email",
@@ -33,11 +32,11 @@ It returns a JSON body like the following:
"validated_at": 1586458409743
}
],
"avatar_url": "<avatar_url>", // can be null if not set
"is_guest": 0,
"avatar_url": "<avatar_url>",
"admin": 0,
"deactivated": 0,
"shadow_banned": 0,
"password_hash": "$2b$12$p9B4GkqYdRTPGD",
"creation_ts": 1560432506,
"appservice_id": null,
"consent_server_notice_sent": null,


@@ -20,9 +20,7 @@ recommended for development. More information about WSL can be found at
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
on Windows is not officially supported.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough.
Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`.
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
@@ -171,27 +169,6 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
```
By default, tests will use an in-memory SQLite database for test data. For additional
help with debugging, one can use an on-disk SQLite database file instead, in order to
review database state during and after running tests. This can be done by setting
the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the
database state to be stored in a file named `test.db` under the trial process'
working directory. Typically, this ends up being `_trial_temp/test.db`. For example:
```sh
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests
```
The database file can then be inspected with:
```sh
sqlite3 _trial_temp/test.db
```
Note that the database file is cleared at the beginning of each test run. Thus it
will always only contain the data generated by the *last run test*. Though generally
when debugging, one is only running a single test anyway.
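If shelling out to `sqlite3` is inconvenient, the persisted file can also be inspected from Python's standard library. A small sketch (the query simply lists whatever tables the last test run left behind):

```python
import sqlite3

# Open the database written by SYNAPSE_TEST_PERSIST_SQLITE_DB=1.
con = sqlite3.connect("_trial_temp/test.db")
con.row_factory = sqlite3.Row

# List the tables created by the last test run.
tables = [
    row["name"]
    for row in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
    )
]
print(tables)
con.close()
```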
### Running tests under PostgreSQL
Invoking `trial` as above will use an in-memory SQLite database. This is great for


@@ -35,12 +35,7 @@ When Synapse is asked to preview a URL it does the following:
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
1. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
2. Convert the oEmbed response to an Open Graph response.
3. Override any Open Graph data from the HTML with data from oEmbed (a sketch of this merge follows the list).
4. If an image exists in the Open Graph response:
3. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
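The override in step 3.3 amounts to a dictionary merge in which oEmbed-derived values win over HTML-derived ones. A minimal sketch of that logic, with illustrative Open Graph fields:

```python
def merge_open_graph(from_html: dict, from_oembed: dict) -> dict:
    """Combine Open Graph data, letting oEmbed values override HTML ones."""
    merged = dict(from_html)
    merged.update(from_oembed)
    return merged

html_og = {"og:title": "Page title", "og:description": "From <meta> tags"}
oembed_og = {"og:title": "Richer title", "og:image": "https://example.com/thumb.png"}

# The oEmbed title replaces the HTML title; other fields are kept.
print(merge_open_graph(html_og, oembed_og))
```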


@@ -390,6 +390,9 @@ oidc_providers:
### Facebook
Like GitHub, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
one, so it requires a little more configuration.
0. You will need a Facebook developer account. You can register for one
[here](https://developers.facebook.com/async/registration/).
1. On the [apps](https://developers.facebook.com/apps/) page of the developer
@@ -409,28 +412,24 @@ Synapse config:
idp_name: Facebook
idp_brand: "facebook" # optional: styling hint for clients
discover: false
issuer: "https://www.facebook.com"
issuer: "https://facebook.com"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "email"]
authorization_endpoint: "https://facebook.com/dialog/oauth"
token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token"
jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
authorization_endpoint: https://facebook.com/dialog/oauth
token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
user_profile_method: "userinfo_endpoint"
userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
user_mapping_provider:
config:
subject_claim: "id"
display_name_template: "{{ user.name }}"
email_template: "{{ '{{ user.email }}' }}"
```
Relevant documents:
* [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow)
* [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/)
* [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user)
Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration),
but it has a `response_types_supported` which excludes "code" (which we rely on, and
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
so we have to disable discovery and configure the URIs manually.
* https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
* Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
* Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
### Gitea


@@ -164,7 +164,7 @@ presence:
# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
# https://matrix.org/docs/spec/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".
@@ -1503,21 +1503,6 @@ room_prejoin_state:
#additional_event_types:
# - org.example.custom.event.type
# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true
# A list of application service config files to use
#
@@ -1885,13 +1870,10 @@ saml2_config:
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint, or to rely on the data returned in the id_token from the
# token_endpoint.
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to


@@ -137,10 +137,6 @@ This will install and start a systemd service called `coturn`.
# TLS private key file
pkey=/path/to/privkey.pem
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
#no-tls
#no-dtls
```
In this case, replace the `turn:` schemes in the `turn_uris` settings below
@@ -149,14 +145,6 @@ This will install and start a systemd service called `coturn`.
We recommend that you only try to set up TLS/DTLS once you have set up a
basic installation and got it working.
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
not work with any Matrix client that uses Chromium's WebRTC library. This
currently includes Element Android & iOS; for more details, see their
[respective](https://github.com/vector-im/element-android/issues/1533)
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (By default: 3478 and 5349 for TURN
traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
@@ -262,10 +250,6 @@ Here are a few things to try:
* Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).
* Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted)
TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
voice & video WebRTC traffic is always encrypted.)
* Some WebRTC implementations (notably, that of Google Chrome) appear to get
confused by TURN servers which are reachable over IPv6 (this appears to be
an unexpected side-effect of its handling of multiple IP addresses as


@@ -23,9 +23,6 @@
# Exit if a line returns a non-zero exit code
set -e
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
# Change to the repository root
cd "$(dirname $0)/.."
@@ -50,7 +47,7 @@ if [[ -n "$WORKERS" ]]; then
COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
# And provide some more configuration to complement.
export COMPLEMENT_CA=true
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=25
export COMPLEMENT_VERSION_CHECK_ITERATIONS=500
else
export COMPLEMENT_BASE_IMAGE=complement-synapse
COMPLEMENT_DOCKERFILE=Synapse.Dockerfile
@@ -68,5 +65,4 @@ if [[ -n "$1" ]]; then
fi
# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...


@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.50.1"
__version__ = "1.50.2"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -46,9 +46,7 @@ class UserInfo:
ips: List[str] = attr.Factory(list)
def get_recent_users(
txn: LoggingTransaction, since_ms: int, exclude_app_service: bool
) -> List[UserInfo]:
def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
"""Fetches recently registered users and some info on them."""
sql = """
@@ -58,9 +56,6 @@ def get_recent_users(
AND deactivated = 0
"""
if exclude_app_service:
sql += " AND appservice_id IS NULL"
txn.execute(sql, (since_ms / 1000,))
user_infos = [UserInfo(user_id, creation_ts) for user_id, creation_ts in txn]
@@ -118,7 +113,7 @@ def main() -> None:
"-e",
"--exclude-emails",
action="store_true",
help="Exclude users that have validated email addresses.",
help="Exclude users that have validated email addresses",
)
parser.add_argument(
"-u",
@@ -126,12 +121,6 @@ def main() -> None:
action="store_true",
help="Only print user IDs that match.",
)
parser.add_argument(
"-a",
"--exclude-app-service",
help="Exclude appservice users.",
action="store_true",
)
config = ReviewConfig()
@@ -144,7 +133,6 @@ def main() -> None:
since_ms = time.time() * 1000 - Config.parse_duration(config_args.since)
exclude_users_with_email = config_args.exclude_emails
exclude_users_with_appservice = config_args.exclude_app_service
include_context = not config_args.only_users
for database_config in config.database.databases:
@@ -155,7 +143,7 @@ def main() -> None:
with make_conn(database_config, engine, "review_recent_signups") as db_conn:
# This generates a type of Cursor, not LoggingTransaction.
user_infos = get_recent_users(db_conn.cursor(), since_ms, exclude_users_with_appservice) # type: ignore[arg-type]
user_infos = get_recent_users(db_conn.cursor(), since_ms) # type: ignore[arg-type]
for user_info in user_infos:
if exclude_users_with_email and user_info.emails:


@@ -71,7 +71,6 @@ class Auth:
self._auth_blocking = AuthBlocking(self.hs)
self._track_appservice_user_ips = hs.config.appservice.track_appservice_user_ips
self._track_puppeted_user_ips = hs.config.api.track_puppeted_user_ips
self._macaroon_secret_key = hs.config.key.macaroon_secret_key
self._force_tracing_for_users = hs.config.tracing.force_tracing_for_users
@@ -247,18 +246,6 @@ class Auth:
user_agent=user_agent,
device_id=device_id,
)
# Track also the puppeted user client IP if enabled and the user is puppeting
if (
user_info.user_id != user_info.token_owner
and self._track_puppeted_user_ips
):
await self.store.insert_client_ip(
user_id=user_info.user_id,
access_token=access_token,
ip=ip_addr,
user_agent=user_agent,
device_id=device_id,
)
if is_guest and not allow_guest:
raise AuthError(


@@ -46,41 +46,41 @@ class RoomDisposition:
UNSTABLE = "unstable"
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class RoomVersion:
"""An object which describes the unique attributes of a room version."""
identifier: str # the identifier for this version
disposition: str # one of the RoomDispositions
event_format: int # one of the EventFormatVersions
state_res: int # one of the StateResolutionVersions
enforce_key_validity: bool
identifier = attr.ib(type=str) # the identifier for this version
disposition = attr.ib(type=str) # one of the RoomDispositions
event_format = attr.ib(type=int) # one of the EventFormatVersions
state_res = attr.ib(type=int) # one of the StateResolutionVersions
enforce_key_validity = attr.ib(type=bool)
# Before MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth: bool
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
# * Floats
# * NaN, Infinity, -Infinity
strict_canonicaljson: bool
strict_canonicaljson = attr.ib(type=bool)
# MSC2209: Check 'notifications' key while verifying
# m.room.power_levels auth rules.
limit_notifications_power_levels: bool
limit_notifications_power_levels = attr.ib(type=bool)
# MSC2174/MSC2176: Apply updated redaction rules algorithm.
msc2176_redaction_rules: bool
msc2176_redaction_rules = attr.ib(type=bool)
# MSC3083: Support the 'restricted' join_rule.
msc3083_join_rules: bool
msc3083_join_rules = attr.ib(type=bool)
# MSC3375: Support for the proper redaction rules for MSC3083. This mustn't
# be enabled if MSC3083 is not.
msc3375_redaction_rules: bool
msc3375_redaction_rules = attr.ib(type=bool)
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
msc2403_knocking: bool
msc2403_knocking = attr.ib(type=bool)
# MSC2716: Adds m.room.power_levels -> content.historical field to control
# whether "insertion", "chunk", "marker" events can be sent
msc2716_historical: bool
msc2716_historical = attr.ib(type=bool)
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
msc2716_redactions: bool
msc2716_redactions = attr.ib(type=bool)
class RoomVersions:
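For readers unfamiliar with the two attrs styles being swapped in this hunk: `auto_attribs=True` derives fields from native type annotations, while the older style declares each field with `attr.ib(type=...)`. A minimal sketch of the equivalence (class names here are illustrative):

```python
import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class VersionNew:
    identifier: str                 # field declared via a native annotation
    enforce_key_validity: bool

@attr.s(slots=True, frozen=True)
class VersionOld:
    identifier = attr.ib(type=str)  # field declared explicitly
    enforce_key_validity = attr.ib(type=bool)

# Both styles generate equivalent __init__, __eq__, etc.
assert VersionNew("1", True) == VersionNew("1", True)
assert VersionOld("1", True) == VersionOld("1", True)
```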


@@ -60,7 +60,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor
@@ -159,7 +159,6 @@ def start_reactor(
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
install_gc_manager()
run_command()
# make sure that we run the reactor with the sentinel log context,


@@ -29,7 +29,6 @@ class ApiConfig(Config):
def read_config(self, config: JsonDict, **kwargs):
validate_config(_MAIN_SCHEMA, config, ())
self.room_prejoin_state = list(self._get_prejoin_state_types(config))
self.track_puppeted_user_ips = config.get("track_puppeted_user_ips", False)
def generate_config_section(cls, **kwargs) -> str:
formatted_default_state_types = "\n".join(
@@ -60,21 +59,6 @@ class ApiConfig(Config):
#
#additional_event_types:
# - org.example.custom.event.type
# We record the IP address of clients used to access the API for various
# reasons, including displaying it to the user in the "Where you're signed in"
# dialog.
#
# By default, when puppeting another user via the admin API, the client IP
# address is recorded against the user who created the access token (ie, the
# admin user), and *not* the puppeted user.
#
# Uncomment the following to also record the IP address against the puppeted
# user. (This also means that the puppeted user will count as an "active" user
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
# above.)
#
#track_puppeted_user_ips: true
""" % {
"formatted_default_state_types": formatted_default_state_types
}
@@ -154,8 +138,5 @@ _MAIN_SCHEMA = {
"properties": {
"room_prejoin_state": _ROOM_PREJOIN_STATE_CONFIG_SCHEMA,
"room_invite_state_types": _ROOM_INVITE_STATE_TYPES_SCHEMA,
"track_puppeted_user_ips": {
"type": "boolean",
},
},
}


@@ -55,19 +55,19 @@ https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class EmailSubjectConfig:
message_from_person_in_room: str
message_from_person: str
messages_from_person: str
messages_in_room: str
messages_in_room_and_others: str
messages_from_person_and_others: str
invite_from_person: str
invite_from_person_to_room: str
invite_from_person_to_space: str
password_reset: str
email_validation: str
message_from_person_in_room = attr.ib(type=str)
message_from_person = attr.ib(type=str)
messages_from_person = attr.ib(type=str)
messages_in_room = attr.ib(type=str)
messages_in_room_and_others = attr.ib(type=str)
messages_from_person_and_others = attr.ib(type=str)
invite_from_person = attr.ib(type=str)
invite_from_person_to_room = attr.ib(type=str)
invite_from_person_to_space = attr.ib(type=str)
password_reset = attr.ib(type=str)
email_validation = attr.ib(type=str)
class EmailConfig(Config):


@@ -148,13 +148,10 @@ class OIDCConfig(Config):
# Defaults to false. Avoid this in production.
#
# user_profile_method: Whether to fetch the user profile from the userinfo
# endpoint, or to rely on the data returned in the id_token from the
# token_endpoint.
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Valid values are: 'auto' or 'userinfo_endpoint'.
#
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
# userinfo endpoint.
#
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to


@@ -200,8 +200,8 @@ class HttpListenerConfig:
"""Object describing the http-specific parts of the config of a listener"""
x_forwarded: bool = False
resources: List[HttpResourceConfig] = attr.Factory(list)
additional_resources: Dict[str, dict] = attr.Factory(dict)
resources: List[HttpResourceConfig] = attr.ib(factory=list)
additional_resources: Dict[str, dict] = attr.ib(factory=dict)
tag: Optional[str] = None
@@ -883,7 +883,7 @@ class ServerConfig(Config):
# The default room version for newly created rooms.
#
# Known room versions are listed here:
# https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
# https://matrix.org/docs/spec/#complete-list-of-room-versions
#
# For example, for room version 1, default_room_version should be set
# to "1".


@@ -51,12 +51,12 @@ def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
return obj
@attr.s(auto_attribs=True)
@attr.s
class InstanceLocationConfig:
"""The host and port to talk to an instance via HTTP replication."""
host: str
port: int
host = attr.ib(type=str)
port = attr.ib(type=int)
@attr.s
@@ -77,28 +77,34 @@ class WriterLocations:
can only be a single instance.
"""
events: List[str] = attr.ib(
events = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
typing: List[str] = attr.ib(
typing = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
to_device: List[str] = attr.ib(
to_device = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
account_data: List[str] = attr.ib(
account_data = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
receipts: List[str] = attr.ib(
receipts = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)
presence: List[str] = attr.ib(
presence = attr.ib(
default=["master"],
type=List[str],
converter=_instance_to_list_converter,
)


@@ -58,7 +58,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
@attr.s(slots=True, frozen=True, cmp=False, auto_attribs=True)
@attr.s(slots=True, cmp=False)
class VerifyJsonRequest:
"""
A request to verify a JSON object.
@@ -78,10 +78,10 @@ class VerifyJsonRequest:
key_ids: The set of key_ids to that could be used to verify the JSON object
"""
server_name: str
get_json_object: Callable[[], JsonDict]
minimum_valid_until_ts: int
key_ids: List[str]
server_name = attr.ib(type=str)
get_json_object = attr.ib(type=Callable[[], JsonDict])
minimum_valid_until_ts = attr.ib(type=int)
key_ids = attr.ib(type=List[str])
@staticmethod
def from_json_object(
@@ -124,7 +124,7 @@ class KeyLookupError(ValueError):
pass
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True)
class _FetchKeyRequest:
"""A request for keys for a given server.
@@ -138,9 +138,9 @@ class _FetchKeyRequest:
key_ids: The IDs of the keys to attempt to fetch
"""
server_name: str
minimum_valid_until_ts: int
key_ids: List[str]
server_name = attr.ib(type=str)
minimum_valid_until_ts = attr.ib(type=int)
key_ids = attr.ib(type=List[str])
class Keyring:


@@ -28,7 +28,7 @@ if TYPE_CHECKING:
from synapse.storage.databases.main import DataStore
@attr.s(slots=True, auto_attribs=True)
@attr.s(slots=True)
class EventContext:
"""
Holds information relevant to persisting an event
@@ -103,15 +103,15 @@ class EventContext:
accessed via get_prev_state_ids.
"""
rejected: Union[bool, str] = False
_state_group: Optional[int] = None
state_group_before_event: Optional[int] = None
prev_group: Optional[int] = None
delta_ids: Optional[StateMap[str]] = None
app_service: Optional[ApplicationService] = None
rejected = attr.ib(default=False, type=Union[bool, str])
_state_group = attr.ib(default=None, type=Optional[int])
state_group_before_event = attr.ib(default=None, type=Optional[int])
prev_group = attr.ib(default=None, type=Optional[int])
delta_ids = attr.ib(default=None, type=Optional[StateMap[str]])
app_service = attr.ib(default=None, type=Optional[ApplicationService])
_current_state_ids: Optional[StateMap[str]] = None
_prev_state_ids: Optional[StateMap[str]] = None
_current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
_prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
@staticmethod
def with_state(


@@ -14,7 +14,17 @@
# limitations under the License.
import collections.abc
import re
from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Union
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Mapping,
Optional,
Union,
)
from frozendict import frozendict
@@ -22,10 +32,14 @@ from synapse.api.constants import EventContentFields, EventTypes, RelationTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.types import JsonDict
from synapse.util.async_helpers import yieldable_gather_results
from synapse.util.frozenutils import unfreeze
from . import EventBase
if TYPE_CHECKING:
from synapse.server import HomeServer
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.
@@ -371,12 +385,17 @@ class EventClientSerializer:
clients.
"""
def serialize_event(
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self._msc1849_enabled = hs.config.experimental.msc1849_enabled
self._msc3440_enabled = hs.config.experimental.msc3440_enabled
async def serialize_event(
self,
event: Union[JsonDict, EventBase],
time_now: int,
*,
bundle_aggregations: Optional[Dict[str, JsonDict]] = None,
bundle_aggregations: bool = False,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.
@@ -399,41 +418,66 @@ class EventClientSerializer:
serialized_event = serialize_event(event, time_now, **kwargs)
# Check if there are any bundled aggregations to include with the event.
if bundle_aggregations:
event_aggregations = bundle_aggregations.get(event.event_id)
if event_aggregations:
self._injected_bundled_aggregations(
event,
time_now,
bundle_aggregations[event.event_id],
serialized_event,
)
#
# Do not bundle aggregations if any of the following are true:
#
# * Support is disabled via the configuration or the caller.
# * The event is a state event.
# * The event has been redacted.
if (
self._msc1849_enabled
and bundle_aggregations
and not event.is_state()
and not event.internal_metadata.is_redacted()
):
await self._injected_bundled_aggregations(event, time_now, serialized_event)
return serialized_event
def _injected_bundled_aggregations(
self,
event: EventBase,
time_now: int,
aggregations: JsonDict,
serialized_event: JsonDict,
async def _injected_bundled_aggregations(
self, event: EventBase, time_now: int, serialized_event: JsonDict
) -> None:
"""Potentially injects bundled aggregations into the unsigned portion of the serialized event.
Args:
event: The event being serialized.
time_now: The current time in milliseconds
aggregations: The bundled aggregation to serialize.
serialized_event: The serialized event which may be modified.
"""
# Make a copy in-case the object is cached.
aggregations = aggregations.copy()
# Do not bundle aggregations for an event which represents an edit or an
# annotation. It does not make sense for them to have related events.
relates_to = event.content.get("m.relates_to")
if isinstance(relates_to, (dict, frozendict)):
relation_type = relates_to.get("rel_type")
if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
return
if RelationTypes.REPLACE in aggregations:
event_id = event.event_id
room_id = event.room_id
# The bundled aggregations to include.
aggregations = {}
annotations = await self.store.get_aggregation_groups_for_event(
event_id, room_id
)
if annotations.chunk:
aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()
references = await self.store.get_relations_for_event(
event_id, room_id, RelationTypes.REFERENCE, direction="f"
)
if references.chunk:
aggregations[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id, room_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
edit = aggregations[RelationTypes.REPLACE]
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
@@ -458,19 +502,27 @@ class EventClientSerializer:
}
# If this event is the start of a thread, include a summary of the replies.
if RelationTypes.THREAD in aggregations:
# Serialize the latest thread event.
latest_thread_event = aggregations[RelationTypes.THREAD]["latest_event"]
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id, room_id)
if latest_thread_event:
aggregations[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=False
),
"count": thread_count,
}
# Don't bundle aggregations as this could recurse forever.
aggregations[RelationTypes.THREAD]["latest_event"] = self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=None
# If any bundled aggregations were found, include them.
if aggregations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(
aggregations
)
# Include the bundled aggregations in the event.
serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations)
def serialize_events(
async def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
) -> List[JsonDict]:
"""Serializes multiple events.
@@ -483,9 +535,9 @@ class EventClientSerializer:
Returns:
The list of serialized events
"""
return [
self.serialize_event(event, time_now=time_now, **kwargs) for event in events
]
return await yieldable_gather_results(
self.serialize_event, events, time_now=time_now, **kwargs
)
def copy_power_levels_contents(
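The async variant of `serialize_events` shown above gathers one serialization coroutine per event instead of looping sequentially. Roughly the plain-asyncio analogue of that pattern (the serializer body is a placeholder):

```python
import asyncio

async def serialize_event(event: dict, time_now: int) -> dict:
    # Placeholder for the real, now-async serializer.
    return {"event_id": event["event_id"], "origin_server_ts": time_now}

async def serialize_events(events: list, time_now: int) -> list:
    # One coroutine per event; results come back in input order,
    # similar to yieldable_gather_results.
    return await asyncio.gather(
        *(serialize_event(e, time_now) for e in events)
    )

events = [{"event_id": "$a"}, {"event_id": "$b"}]
print(asyncio.run(serialize_events(events, 1_642_000_000_000)))
```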


@@ -230,10 +230,6 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
# origin, etc etc)
assert_params_in_dict(pdu_json, ("type", "depth"))
# Strip any unauthorized values from "unsigned" if they exist
if "unsigned" in pdu_json:
_strip_unsigned_values(pdu_json)
depth = pdu_json["depth"]
if not isinstance(depth, int):
raise SynapseError(400, "Depth %r not an integer" % (depth,), Codes.BAD_JSON)
@@ -249,24 +245,3 @@ def event_from_pdu_json(pdu_json: JsonDict, room_version: RoomVersion) -> EventB
event = make_event_from_dict(pdu_json, room_version)
return event
def _strip_unsigned_values(pdu_dict: JsonDict) -> None:
"""
Strip any unsigned values unless specifically allowed, as defined by the whitelist.
pdu: the json dict to strip values from. Note that the dict is mutated by this
function
"""
unsigned = pdu_dict["unsigned"]
if not isinstance(unsigned, dict):
pdu_dict["unsigned"] = {}
if pdu_dict["type"] == "m.room.member":
whitelist = ["knock_room_state", "invite_room_state", "age"]
else:
whitelist = ["age"]
filtered_unsigned = {k: v for k, v in unsigned.items() if k in whitelist}
pdu_dict["unsigned"] = filtered_unsigned


@@ -56,6 +56,7 @@ from synapse.api.room_versions import (
from synapse.events import EventBase, builder
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.transport.client import SendJoinResponse
from synapse.logging.utils import log_function
from synapse.types import JsonDict, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@@ -143,6 +144,7 @@ class FederationClient(FederationBase):
if destination_dict:
self.pdu_destination_tried[event_id] = destination_dict
@log_function
async def make_query(
self,
destination: str,
@@ -176,6 +178,7 @@ class FederationClient(FederationBase):
ignore_backoff=ignore_backoff,
)
@log_function
async def query_client_keys(
self, destination: str, content: JsonDict, timeout: int
) -> JsonDict:
@@ -193,6 +196,7 @@ class FederationClient(FederationBase):
destination, content, timeout
)
@log_function
async def query_user_devices(
self, destination: str, user_id: str, timeout: int = 30000
) -> JsonDict:
@@ -204,6 +208,7 @@ class FederationClient(FederationBase):
destination, user_id, timeout
)
@log_function
async def claim_client_keys(
self, destination: str, content: JsonDict, timeout: int
) -> JsonDict:


@@ -58,6 +58,7 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.utils import log_function
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
@@ -858,6 +859,7 @@ class FederationServer(FederationBase):
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
return 200, res
@log_function
async def on_query_client_keys(
self, origin: str, content: Dict[str, str]
) -> Tuple[int, Dict[str, Any]]:
@@ -938,6 +940,7 @@ class FederationServer(FederationBase):
return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
@log_function
async def on_openid_userinfo(self, token: str) -> Optional[str]:
ts_now_ms = self._clock.time_msec()
return await self.store.get_user_id_for_open_id_token(token, ts_now_ms)


@@ -23,6 +23,7 @@ import logging
from typing import Optional, Tuple
from synapse.federation.units import Transaction
from synapse.logging.utils import log_function
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict
@@ -35,6 +36,7 @@ class TransactionActions:
def __init__(self, datastore: DataStore):
self.store = datastore
@log_function
async def have_responded(
self, origin: str, transaction: Transaction
) -> Optional[Tuple[int, JsonDict]]:
@@ -51,6 +53,7 @@ class TransactionActions:
return await self.store.get_received_txn_response(transaction_id, origin)
@log_function
async def set_response(
self, origin: str, transaction: Transaction, code: int, response: JsonDict
) -> None:


@@ -607,18 +607,18 @@ class PerDestinationQueue:
self._pending_pdus = []
@attr.s(slots=True, auto_attribs=True)
@attr.s(slots=True)
class _TransactionQueueManager:
"""A helper async context manager for pulling stuff off the queues and
tracking what was last successfully sent, etc.
"""
queue: PerDestinationQueue
queue = attr.ib(type=PerDestinationQueue)
_device_stream_id: Optional[int] = None
_device_list_id: Optional[int] = None
_last_stream_ordering: Optional[int] = None
_pdus: List[EventBase] = attr.Factory(list)
_device_stream_id = attr.ib(type=Optional[int], default=None)
_device_list_id = attr.ib(type=Optional[int], default=None)
_last_stream_ordering = attr.ib(type=Optional[int], default=None)
_pdus = attr.ib(type=List[EventBase], factory=list)
async def __aenter__(self) -> Tuple[List[EventBase], List[Edu]]:
# First we calculate the EDUs we want to send, if any.


@@ -44,6 +44,7 @@ from synapse.api.urls import (
from synapse.events import EventBase, make_event_from_dict
from synapse.federation.units import Transaction
from synapse.http.matrixfederationclient import ByteParser
from synapse.logging.utils import log_function
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -61,6 +62,7 @@ class TransportLayerClient:
self.server_name = hs.hostname
self.client = hs.get_federation_http_client()
@log_function
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
) -> JsonDict:
@@ -86,6 +88,7 @@ class TransportLayerClient:
try_trailing_slash_on_400=True,
)
@log_function
async def get_event(
self, destination: str, event_id: str, timeout: Optional[int] = None
) -> JsonDict:
@@ -108,6 +111,7 @@ class TransportLayerClient:
destination, path=path, timeout=timeout, try_trailing_slash_on_400=True
)
@log_function
async def backfill(
self, destination: str, room_id: str, event_tuples: Collection[str], limit: int
) -> Optional[JsonDict]:
@@ -145,6 +149,7 @@ class TransportLayerClient:
destination, path=path, args=args, try_trailing_slash_on_400=True
)
@log_function
async def timestamp_to_event(
self, destination: str, room_id: str, timestamp: int, direction: str
) -> Union[JsonDict, List]:
@@ -180,6 +185,7 @@ class TransportLayerClient:
return remote_response
@log_function
async def send_transaction(
self,
transaction: Transaction,
@@ -228,6 +234,7 @@ class TransportLayerClient:
try_trailing_slash_on_400=True,
)
@log_function
async def make_query(
self,
destination: str,
@@ -247,6 +254,7 @@ class TransportLayerClient:
ignore_backoff=ignore_backoff,
)
@log_function
async def make_membership_event(
self,
destination: str,
@@ -309,6 +317,7 @@ class TransportLayerClient:
ignore_backoff=ignore_backoff,
)
@log_function
async def send_join_v1(
self,
room_version: RoomVersion,
@@ -327,6 +336,7 @@ class TransportLayerClient:
max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
)
@log_function
async def send_join_v2(
self,
room_version: RoomVersion,
@@ -345,6 +355,7 @@ class TransportLayerClient:
max_response_size=MAX_RESPONSE_SIZE_SEND_JOIN,
)
@log_function
async def send_leave_v1(
self, destination: str, room_id: str, event_id: str, content: JsonDict
) -> Tuple[int, JsonDict]:
@@ -361,6 +372,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def send_leave_v2(
self, destination: str, room_id: str, event_id: str, content: JsonDict
) -> JsonDict:
@@ -377,6 +389,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def send_knock_v1(
self,
destination: str,
@@ -410,6 +423,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content
)
@log_function
async def send_invite_v1(
self, destination: str, room_id: str, event_id: str, content: JsonDict
) -> Tuple[int, JsonDict]:
@@ -419,6 +433,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def send_invite_v2(
self, destination: str, room_id: str, event_id: str, content: JsonDict
) -> JsonDict:
@@ -428,6 +443,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def get_public_rooms(
self,
remote_server: str,
@@ -500,6 +516,7 @@ class TransportLayerClient:
return response
@log_function
async def exchange_third_party_invite(
self, destination: str, room_id: str, event_dict: JsonDict
) -> JsonDict:
@@ -509,6 +526,7 @@ class TransportLayerClient:
destination=destination, path=path, data=event_dict
)
@log_function
async def get_event_auth(
self, destination: str, room_id: str, event_id: str
) -> JsonDict:
@@ -516,6 +534,7 @@ class TransportLayerClient:
return await self.client.get_json(destination=destination, path=path)
@log_function
async def query_client_keys(
self, destination: str, query_content: JsonDict, timeout: int
) -> JsonDict:
@@ -557,6 +576,7 @@ class TransportLayerClient:
destination=destination, path=path, data=query_content, timeout=timeout
)
@log_function
async def query_user_devices(
self, destination: str, user_id: str, timeout: int
) -> JsonDict:
@@ -596,6 +616,7 @@ class TransportLayerClient:
destination=destination, path=path, timeout=timeout
)
@log_function
async def claim_client_keys(
self, destination: str, query_content: JsonDict, timeout: int
) -> JsonDict:
@@ -634,6 +655,7 @@ class TransportLayerClient:
destination=destination, path=path, data=query_content, timeout=timeout
)
@log_function
async def get_missing_events(
self,
destination: str,
@@ -658,6 +680,7 @@ class TransportLayerClient:
timeout=timeout,
)
@log_function
async def get_group_profile(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -671,6 +694,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def update_group_profile(
self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
@@ -692,6 +716,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_group_summary(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -705,6 +730,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_rooms_in_group(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -772,6 +798,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_users_in_group(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -785,6 +812,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_invited_users_in_group(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -798,6 +826,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def accept_group_invite(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
@@ -808,6 +837,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
def join_group(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> Awaitable[JsonDict]:
@@ -818,6 +848,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def invite_to_group(
self,
destination: str,
@@ -837,6 +868,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def invite_to_group_notification(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
@@ -850,6 +882,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def remove_user_from_group(
self,
destination: str,
@@ -869,6 +902,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def remove_user_from_group_notification(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
@@ -882,6 +916,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def renew_group_attestation(
self, destination: str, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
@@ -895,6 +930,7 @@ class TransportLayerClient:
destination=destination, path=path, data=content, ignore_backoff=True
)
@log_function
async def update_group_summary_room(
self,
destination: str,
@@ -923,6 +959,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def delete_group_summary_room(
self,
destination: str,
@@ -949,6 +986,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_group_categories(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -962,6 +1000,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_group_category(
self, destination: str, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
@@ -975,6 +1014,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def update_group_category(
self,
destination: str,
@@ -994,6 +1034,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def delete_group_category(
self, destination: str, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
@@ -1007,6 +1048,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_group_roles(
self, destination: str, group_id: str, requester_user_id: str
) -> JsonDict:
@@ -1020,6 +1062,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def get_group_role(
self, destination: str, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
@@ -1033,6 +1076,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def update_group_role(
self,
destination: str,
@@ -1052,6 +1096,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def delete_group_role(
self, destination: str, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
@@ -1065,6 +1110,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def update_group_summary_user(
self,
destination: str,
@@ -1090,6 +1136,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def set_group_join_policy(
self, destination: str, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
@@ -1104,6 +1151,7 @@ class TransportLayerClient:
ignore_backoff=True,
)
@log_function
async def delete_group_summary_user(
self,
destination: str,


@@ -77,7 +77,7 @@ class AccountDataHandler:
async def add_account_data_for_user(
self, user_id: str, account_data_type: str, content: JsonDict
) -> int:
"""Add some global account_data for a user.
"""Add some account_data to a room for a user.
Args:
user_id: The user to add a tag for.


@@ -55,47 +55,21 @@ class AdminHandler:
async def get_user(self, user: UserID) -> Optional[JsonDict]:
"""Function to get user details"""
user_info_dict = await self.store.get_user_by_id(user.to_string())
if user_info_dict is None:
return None
# Restrict returned information to a known set of fields. This prevents additional
# fields added to get_user_by_id from modifying Synapse's external API surface.
user_info_to_return = {
"name",
"admin",
"deactivated",
"shadow_banned",
"creation_ts",
"appservice_id",
"consent_server_notice_sent",
"consent_version",
"user_type",
"is_guest",
}
# Restrict returned keys to a known set.
user_info_dict = {
key: value
for key, value in user_info_dict.items()
if key in user_info_to_return
}
# Add additional user metadata
profile = await self.store.get_profileinfo(user.localpart)
threepids = await self.store.user_get_threepids(user.to_string())
external_ids = [
({"auth_provider": auth_provider, "external_id": external_id})
for auth_provider, external_id in await self.store.get_external_ids_by_user(
user.to_string()
)
]
user_info_dict["displayname"] = profile.display_name
user_info_dict["avatar_url"] = profile.avatar_url
user_info_dict["threepids"] = threepids
user_info_dict["external_ids"] = external_ids
return user_info_dict
ret = await self.store.get_user_by_id(user.to_string())
if ret:
profile = await self.store.get_profileinfo(user.localpart)
threepids = await self.store.user_get_threepids(user.to_string())
external_ids = [
({"auth_provider": auth_provider, "external_id": external_id})
for auth_provider, external_id in await self.store.get_external_ids_by_user(
user.to_string()
)
]
ret["displayname"] = profile.display_name
ret["avatar_url"] = profile.avatar_url
ret["threepids"] = threepids
ret["external_ids"] = external_ids
return ret
async def export_user_data(self, user_id: str, writer: "ExfiltrationWriter") -> Any:
"""Write all data we have on the user to the given writer.

View File

@@ -168,25 +168,25 @@ def login_id_phone_to_thirdparty(identifier: JsonDict) -> Dict[str, str]:
}
@attr.s(slots=True, auto_attribs=True)
@attr.s(slots=True)
class SsoLoginExtraAttributes:
"""Data we track about SAML2 sessions"""
# time the session was created, in milliseconds
creation_time: int
extra_attributes: JsonDict
creation_time = attr.ib(type=int)
extra_attributes = attr.ib(type=JsonDict)
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class LoginTokenAttributes:
"""Data we store in a short-term login token"""
user_id: str
user_id = attr.ib(type=str)
auth_provider_id: str
auth_provider_id = attr.ib(type=str)
"""The SSO Identity Provider that the user authenticated with, to get this token."""
auth_provider_session_id: Optional[str]
auth_provider_session_id = attr.ib(type=Optional[str])
"""The session ID advertised by the SSO Identity Provider."""

View File

@@ -948,16 +948,8 @@ class DeviceListUpdater:
devices = []
ignore_devices = True
else:
prev_stream_id = await self.store.get_device_list_last_stream_id_for_remote(
user_id
)
cached_devices = await self.store.get_cached_devices_for_user(user_id)
# To ensure that a user with no devices is cached, we skip the resync only
# if we have a stream_id from previously writing a cache entry.
if prev_stream_id is not None and cached_devices == {
d["device_id"]: d for d in devices
}:
if cached_devices == {d["device_id"]: d for d in devices}:
logging.info(
"Skipping device list resync for %s, as our cache matches already",
user_id,
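
The longer side of this hunk explains the subtlety in a comment: an empty cached device list and a never-written cache look identical unless a previously recorded stream ID proves an entry exists. A sketch of that guard, reusing the store method names visible in the hunk but with simplified types:

```python
from typing import Iterable, Mapping

async def can_skip_resync(store, user_id: str, devices: Iterable[Mapping]) -> bool:
    # Only trust the cache if we have previously written an entry for this
    # user: a stream_id of None means "never cached", even when the cached
    # device dict is (correctly) empty.
    prev_stream_id = await store.get_device_list_last_stream_id_for_remote(user_id)
    cached_devices = await store.get_cached_devices_for_user(user_id)
    return prev_stream_id is not None and cached_devices == {
        d["device_id"]: d for d in devices
    }
```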

View File

@@ -1321,14 +1321,14 @@ def _one_time_keys_match(old_key_json: str, new_key: JsonDict) -> bool:
return old_key == new_key_copy
@attr.s(slots=True, auto_attribs=True)
@attr.s(slots=True)
class SignatureListItem:
"""An item in the signature list as used by upload_signatures_for_device_keys."""
signing_key_id: str
target_user_id: str
target_device_id: str
signature: JsonDict
signing_key_id = attr.ib(type=str)
target_user_id = attr.ib(type=str)
target_device_id = attr.ib(type=str)
signature = attr.ib(type=JsonDict)
class SigningKeyEduUpdater:

View File

@@ -20,6 +20,7 @@ from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.utils import log_function
from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, UserID
from synapse.visibility import filter_events_for_client
@@ -42,6 +43,7 @@ class EventStreamHandler:
self._server_notices_sender = hs.get_server_notices_sender()
self._event_serializer = hs.get_event_client_serializer()
@log_function
async def get_stream(
self,
auth_user_id: str,
@@ -117,7 +119,7 @@ class EventStreamHandler:
events.extend(to_add)
chunks = self._event_serializer.serialize_events(
chunks = await self._event_serializer.serialize_events(
events,
time_now,
as_client_event=as_client_event,
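
This hunk, and several like it in later files, toggles the event serializer between a plain function and a coroutine; the only call-site difference is whether the result is awaited. A toy, runnable illustration with stand-in serializers rather than Synapse's real one:

```python
import asyncio
from typing import List

class PlainSerializer:
    def serialize_events(self, events: List[str], time_now: int) -> List[dict]:
        return [{"event_id": e, "ts": time_now} for e in events]

class CoroutineSerializer:
    async def serialize_events(self, events: List[str], time_now: int) -> List[dict]:
        # An async serializer is free to await I/O (e.g. database reads)
        # per event; callers must use `await`.
        return [{"event_id": e, "ts": time_now} for e in events]

async def main() -> None:
    events = ["$one", "$two"]
    direct = PlainSerializer().serialize_events(events, 0)
    awaited = await CoroutineSerializer().serialize_events(events, 0)
    assert direct == awaited

asyncio.run(main())
```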

View File

@@ -51,6 +51,7 @@ from synapse.logging.context import (
preserve_fn,
run_in_background,
)
from synapse.logging.utils import log_function
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationStoreRoomOnOutlierMembershipRestServlet,
@@ -555,6 +556,7 @@ class FederationHandler:
run_in_background(self._handle_queued_pdus, room_queue)
@log_function
async def do_knock(
self,
target_hosts: List[str],
@@ -926,6 +928,7 @@ class FederationHandler:
return event
@log_function
async def on_make_knock_request(
self, origin: str, room_id: str, user_id: str
) -> EventBase:
@@ -1036,6 +1039,7 @@ class FederationHandler:
else:
return []
@log_function
async def on_backfill_request(
self, origin: str, room_id: str, pdu_list: List[str], limit: int
) -> List[EventBase]:
@@ -1052,6 +1056,7 @@ class FederationHandler:
return events
@log_function
async def get_persisted_pdu(
self, origin: str, event_id: str
) -> Optional[EventBase]:
@@ -1113,6 +1118,7 @@ class FederationHandler:
return missing_events
@log_function
async def exchange_third_party_invite(
self, sender_user_id: str, target_user_id: str, room_id: str, signed: JsonDict
) -> None:

View File

@@ -56,6 +56,7 @@ from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.federation.federation_client import InvalidResponseError
from synapse.logging.context import nested_logging_context, run_in_background
from synapse.logging.utils import log_function
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.replication.http.federation import (
@@ -274,6 +275,7 @@ class FederationEventHandler:
await self._process_received_pdu(origin, pdu, state=None)
@log_function
async def on_send_membership_event(
self, origin: str, event: EventBase
) -> Tuple[EventBase, EventContext]:
@@ -470,6 +472,7 @@ class FederationEventHandler:
return await self.persist_events_and_notify(room_id, [(event, context)])
@log_function
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Collection[str]
) -> None:

View File

@@ -170,7 +170,7 @@ class InitialSyncHandler:
d["inviter"] = event.sender
invite_event = await self.store.get_event(event.event_id)
d["invite"] = self._event_serializer.serialize_event(
d["invite"] = await self._event_serializer.serialize_event(
invite_event,
time_now,
as_client_event=as_client_event,
@@ -222,7 +222,7 @@ class InitialSyncHandler:
d["messages"] = {
"chunk": (
self._event_serializer.serialize_events(
await self._event_serializer.serialize_events(
messages,
time_now=time_now,
as_client_event=as_client_event,
@@ -232,7 +232,7 @@ class InitialSyncHandler:
"end": await end_token.to_string(self.store),
}
d["state"] = self._event_serializer.serialize_events(
d["state"] = await self._event_serializer.serialize_events(
current_state.values(),
time_now=time_now,
as_client_event=as_client_event,
@@ -376,14 +376,16 @@ class InitialSyncHandler:
"messages": {
"chunk": (
# Don't bundle aggregations as this is a deprecated API.
self._event_serializer.serialize_events(messages, time_now)
await self._event_serializer.serialize_events(messages, time_now)
),
"start": await start_token.to_string(self.store),
"end": await end_token.to_string(self.store),
},
"state": (
# Don't bundle aggregations as this is a deprecated API.
self._event_serializer.serialize_events(room_state.values(), time_now)
await self._event_serializer.serialize_events(
room_state.values(), time_now
)
),
"presence": [],
"receipts": [],
@@ -402,7 +404,7 @@ class InitialSyncHandler:
# TODO: These concurrently
time_now = self.clock.time_msec()
# Don't bundle aggregations as this is a deprecated API.
state = self._event_serializer.serialize_events(
state = await self._event_serializer.serialize_events(
current_state.values(), time_now
)
@@ -478,7 +480,7 @@ class InitialSyncHandler:
"messages": {
"chunk": (
# Don't bundle aggregations as this is a deprecated API.
self._event_serializer.serialize_events(messages, time_now)
await self._event_serializer.serialize_events(messages, time_now)
),
"start": await start_token.to_string(self.store),
"end": await end_token.to_string(self.store),

View File

@@ -246,7 +246,7 @@ class MessageHandler:
room_state = room_state_events[membership_event_id]
now = self.clock.time_msec()
events = self._event_serializer.serialize_events(room_state.values(), now)
events = await self._event_serializer.serialize_events(room_state.values(), now)
return events
async def get_joined_members(self, requester: Requester, room_id: str) -> dict:
@@ -277,7 +277,7 @@ class MessageHandler:
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if False and requester.app_service and user_id not in users_with_profile:
if requester.app_service and user_id not in users_with_profile:
for uid in users_with_profile:
if requester.app_service.is_interested_in_user(uid):
break
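
The loop above short-circuits as soon as the application service is found to be interested in any room member. Condensed into a helper, with a toy stand-in class (none of these names appear in the diff):

```python
from typing import Iterable, Set

class ToyAppService:
    def __init__(self, interested_users: Set[str]) -> None:
        self._interested = interested_users

    def is_interested_in_user(self, user_id: str) -> bool:
        # Real application services match namespace regexes; a set lookup
        # is enough for illustration.
        return user_id in self._interested

def may_see_members(app_service: ToyAppService, members: Iterable[str]) -> bool:
    return any(app_service.is_interested_in_user(uid) for uid in members)

assert may_see_members(ToyAppService({"@bot:hs"}), ["@alice:hs", "@bot:hs"])
```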

View File

@@ -537,16 +537,14 @@ class PaginationHandler:
state_dict = await self.store.get_events(list(state_ids.values()))
state = state_dict.values()
aggregations = await self.store.get_bundled_aggregations(events, user_id)
time_now = self.clock.time_msec()
chunk = {
"chunk": (
self._event_serializer.serialize_events(
await self._event_serializer.serialize_events(
events,
time_now,
bundle_aggregations=aggregations,
bundle_aggregations=True,
as_client_event=as_client_event,
)
),
@@ -555,7 +553,7 @@ class PaginationHandler:
}
if state:
chunk["state"] = self._event_serializer.serialize_events(
chunk["state"] = await self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
)
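
The two sides of this hunk differ in who fetches bundled aggregations: a `bundle_aggregations=True` flag that lets the serializer touch the database, versus one batched store call whose result is handed to the serializer. A self-contained toy of the batched shape (both function bodies are stand-ins, not Synapse code):

```python
import asyncio
from typing import Dict, List

async def get_bundled_aggregations(events: List[str], user_id: str) -> Dict[str, dict]:
    # Stand-in for one batched store lookup covering every event at once.
    return {event_id: {"m.relations": {}} for event_id in events}

def serialize_events(events: List[str], bundle_aggregations: Dict[str, dict]) -> List[dict]:
    # The serializer performs no I/O: it only reads the mapping it was given.
    return [
        {"event_id": e, "unsigned": bundle_aggregations.get(e, {})}
        for e in events
    ]

async def main() -> None:
    events = ["$1", "$2"]
    aggregations = await get_bundled_aggregations(events, "@user:example.org")
    print(serialize_events(events, bundle_aggregations=aggregations))

asyncio.run(main())
```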

View File

@@ -55,6 +55,7 @@ from synapse.api.presence import UserPresenceState
from synapse.appservice import ApplicationService
from synapse.events.presence_router import PresenceRouter
from synapse.logging.context import run_in_background
from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.presence import (
@@ -1541,6 +1542,7 @@ class PresenceEventSource(EventSource[int, UserPresenceState]):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
@log_function
async def get_new_events(
self,
user: UserID,

View File

@@ -979,18 +979,16 @@ class RegistrationHandler:
if (
self.hs.config.email.email_enable_notifs
and self.hs.config.email.email_notif_for_new_users
and token
):
# Pull the ID of the access token back out of the db
# It would really make more sense for this to be passed
# up when the access token is saved, but that's quite an
# invasive change I'd rather do separately.
if token:
user_tuple = await self.store.get_user_by_access_token(token)
# The token better still exist.
assert user_tuple
token_id = user_tuple.token_id
else:
token_id = None
user_tuple = await self.store.get_user_by_access_token(token)
# The token better still exist.
assert user_tuple
token_id = user_tuple.token_id
await self.pusher_pool.add_pusher(
user_id=user_id,

View File

@@ -393,9 +393,7 @@ class RoomCreationHandler:
user_id = requester.user.to_string()
if not await self.spam_checker.user_may_create_room(user_id):
raise SynapseError(
403, "You are not permitted to create rooms", Codes.FORBIDDEN
)
raise SynapseError(403, "You are not permitted to create rooms")
creation_content: JsonDict = {
"room_version": new_room_version.identifier,
@@ -687,9 +685,7 @@ class RoomCreationHandler:
invite_3pid_list,
)
):
raise SynapseError(
403, "You are not permitted to create rooms", Codes.FORBIDDEN
)
raise SynapseError(403, "You are not permitted to create rooms")
if ratelimit:
await self.request_ratelimiter.ratelimit(requester)
@@ -1181,22 +1177,6 @@ class RoomContextHandler:
# `filtered` rather than the event we retrieved from the datastore.
results["event"] = filtered[0]
# Fetch the aggregations.
aggregations = await self.store.get_bundled_aggregations(
[results["event"]], user.to_string()
)
aggregations.update(
await self.store.get_bundled_aggregations(
results["events_before"], user.to_string()
)
)
aggregations.update(
await self.store.get_bundled_aggregations(
results["events_after"], user.to_string()
)
)
results["aggregations"] = aggregations
if results["events_after"]:
last_event_id = results["events_after"][-1].event_id
else:

View File

@@ -82,7 +82,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.event_auth_handler = hs.get_event_auth_handler()
self.member_linearizer: Linearizer = Linearizer(name="member")
self.member_limiter = Linearizer(max_count=10, name="member_as_limiter")
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
@@ -483,43 +482,24 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)
as_id = object()
if requester.app_service:
as_id = requester.app_service.id
then = self.clock.time_msec()
with (await self.member_limiter.queue(as_id)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took to long to process")
with (await self.member_linearizer.queue(key)):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took to long to process")
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
with (await self.member_linearizer.queue(key)):
result = await self.update_membership_locked(
requester,
target,
room_id,
action,
txn_id=txn_id,
remote_room_hosts=remote_room_hosts,
third_party_signed=third_party_signed,
ratelimit=ratelimit,
content=content,
new_room=new_room,
require_consent=require_consent,
outlier=outlier,
historical=historical,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
return result
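
The longer variant above measures how long the request sat in the queue and gives up past an 80 second budget, on the theory that a reverse proxy such as haproxy would already have abandoned it. A generic sketch of that fail-fast check (the budget value and exception type are illustrative):

```python
import time

REQUEST_BUDGET_SECONDS = 80.0  # illustrative, mirrors the 80 * 1000 ms check

def check_request_budget(start: float) -> None:
    """Fail fast if an upstream proxy would already have timed the request out."""
    if time.monotonic() - start > REQUEST_BUDGET_SECONDS:
        raise TimeoutError("took too long to process")

start = time.monotonic()
# ... wait in a queue, then:
check_request_budget(start)
```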

View File

@@ -153,9 +153,6 @@ class RoomSummaryHandler:
rooms_result: List[JsonDict] = []
events_result: List[JsonDict] = []
if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
max_rooms_per_space = MAX_ROOMS_PER_SPACE
while room_queue and len(rooms_result) < MAX_ROOMS:
queue_entry = room_queue.popleft()
room_id = queue_entry.room_id
@@ -170,7 +167,7 @@ class RoomSummaryHandler:
# The client-specified max_rooms_per_space limit doesn't apply to the
# room_id specified in the request, so we ignore it if this is the
# first room we are processing.
max_children = max_rooms_per_space if processed_rooms else MAX_ROOMS
max_children = max_rooms_per_space if processed_rooms else None
if is_in_room:
room_entry = await self._summarize_local_room(
@@ -212,7 +209,7 @@ class RoomSummaryHandler:
# Before returning to the client, remove the allowed_room_ids
# and allowed_spaces keys.
room.pop("allowed_room_ids", None)
room.pop("allowed_spaces", None) # historical
room.pop("allowed_spaces", None)
rooms_result.append(room)
events.extend(room_entry.children_state_events)
@@ -398,7 +395,7 @@ class RoomSummaryHandler:
None,
room_id,
suggested_only,
# Do not limit the maximum children.
# TODO Handle max children.
max_children=None,
)
@@ -528,10 +525,6 @@ class RoomSummaryHandler:
rooms_result: List[JsonDict] = []
events_result: List[JsonDict] = []
# Set a limit on the number of rooms to return.
if max_rooms_per_space is None or max_rooms_per_space > MAX_ROOMS_PER_SPACE:
max_rooms_per_space = MAX_ROOMS_PER_SPACE
while room_queue and len(rooms_result) < MAX_ROOMS:
room_id = room_queue.popleft()
if room_id in processed_rooms:
@@ -590,9 +583,7 @@ class RoomSummaryHandler:
# Iterate through each child and potentially add it, but not its children,
# to the response.
for child_room in itertools.islice(
root_room_entry.children_state_events, MAX_ROOMS_PER_SPACE
):
for child_room in root_room_entry.children_state_events:
room_id = child_room.get("state_key")
assert isinstance(room_id, str)
# If the room is unknown, skip it.
@@ -642,8 +633,8 @@ class RoomSummaryHandler:
suggested_only: True if only suggested children should be returned.
Otherwise, all children are returned.
max_children:
The maximum number of child rooms to include. A value of None
means no limit.
The maximum number of child rooms to include. This is capped
to a server-set limit.
Returns:
A room entry if the room should be returned. None, otherwise.
@@ -665,13 +656,8 @@ class RoomSummaryHandler:
# we only care about suggested children
child_events = filter(_is_suggested_child_event, child_events)
# TODO max_children is legacy code for the /spaces endpoint.
if max_children is not None:
child_iter: Iterable[EventBase] = itertools.islice(
child_events, max_children
)
else:
child_iter = child_events
if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
max_children = MAX_ROOMS_PER_SPACE
stripped_events: List[JsonDict] = [
{
@@ -682,7 +668,7 @@ class RoomSummaryHandler:
"sender": e.sender,
"origin_server_ts": e.origin_server_ts,
}
for e in child_iter
for e in itertools.islice(child_events, max_children)
]
return _RoomEntry(room_id, room_entry, stripped_events)
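
Both variants in this file end up clamping the client-supplied limit to a server-side maximum before slicing the child events. A condensed, runnable version of that clamp (the real value of MAX_ROOMS_PER_SPACE is not shown in this diff; 50 is a placeholder):

```python
import itertools
from typing import Iterable, List, Optional

MAX_ROOMS_PER_SPACE = 50  # placeholder value for illustration

def cap_children(child_events: Iterable[dict], max_children: Optional[int]) -> List[dict]:
    # None, or any value above the server cap, falls back to the cap itself.
    if max_children is None or max_children > MAX_ROOMS_PER_SPACE:
        max_children = MAX_ROOMS_PER_SPACE
    return list(itertools.islice(child_events, max_children))

assert len(cap_children(({"i": i} for i in range(100)), None)) == 50
```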
@@ -1002,14 +988,12 @@ class RoomSummaryHandler:
"canonical_alias": stats["canonical_alias"],
"num_joined_members": stats["joined_members"],
"avatar_url": stats["avatar"],
# plural join_rules is a documentation error but kept for historical
# purposes. Should match /publicRooms.
"join_rules": stats["join_rules"],
"join_rule": stats["join_rules"],
"world_readable": (
stats["history_visibility"] == HistoryVisibility.WORLD_READABLE
),
"guest_can_join": stats["guest_access"] == "can_join",
"creation_ts": create_event.origin_server_ts,
"room_type": create_event.content.get(EventContentFields.ROOM_TYPE),
}

View File

@@ -420,10 +420,10 @@ class SearchHandler:
time_now = self.clock.time_msec()
for context in contexts.values():
context["events_before"] = self._event_serializer.serialize_events(
context["events_before"] = await self._event_serializer.serialize_events(
context["events_before"], time_now
)
context["events_after"] = self._event_serializer.serialize_events(
context["events_after"] = await self._event_serializer.serialize_events(
context["events_after"], time_now
)
@@ -441,7 +441,9 @@ class SearchHandler:
results.append(
{
"rank": rank_map[e.event_id],
"result": self._event_serializer.serialize_event(e, time_now),
"result": (
await self._event_serializer.serialize_event(e, time_now)
),
"context": contexts.get(e.event_id, {}),
}
)
@@ -455,7 +457,7 @@ class SearchHandler:
if state_results:
s = {}
for room_id, state_events in state_results.items():
s[room_id] = self._event_serializer.serialize_events(
s[room_id] = await self._event_serializer.serialize_events(
state_events, time_now
)

View File

@@ -126,45 +126,45 @@ class SsoIdentityProvider(Protocol):
raise NotImplementedError()
@attr.s(auto_attribs=True)
@attr.s
class UserAttributes:
# the localpart of the mxid that the mapper has assigned to the user.
# if `None`, the mapper has not picked a userid, and the user should be prompted to
# enter one.
localpart: Optional[str]
display_name: Optional[str] = None
emails: Collection[str] = attr.Factory(list)
localpart = attr.ib(type=Optional[str])
display_name = attr.ib(type=Optional[str], default=None)
emails = attr.ib(type=Collection[str], default=attr.Factory(list))
@attr.s(slots=True, auto_attribs=True)
@attr.s(slots=True)
class UsernameMappingSession:
"""Data we track about SSO sessions"""
# A unique identifier for this SSO provider, e.g. "oidc" or "saml".
auth_provider_id: str
auth_provider_id = attr.ib(type=str)
# user ID on the IdP server
remote_user_id: str
remote_user_id = attr.ib(type=str)
# attributes returned by the ID mapper
display_name: Optional[str]
emails: Collection[str]
display_name = attr.ib(type=Optional[str])
emails = attr.ib(type=Collection[str])
# An optional dictionary of extra attributes to be provided to the client in the
# login response.
extra_login_attributes: Optional[JsonDict]
extra_login_attributes = attr.ib(type=Optional[JsonDict])
# where to redirect the client back to
client_redirect_url: str
client_redirect_url = attr.ib(type=str)
# expiry time for the session, in milliseconds
expiry_time_ms: int
expiry_time_ms = attr.ib(type=int)
# choices made by the user
chosen_localpart: Optional[str] = None
use_display_name: bool = True
emails_to_use: Collection[str] = ()
terms_accepted_version: Optional[str] = None
chosen_localpart = attr.ib(type=Optional[str], default=None)
use_display_name = attr.ib(type=bool, default=True)
emails_to_use = attr.ib(type=Collection[str], default=())
terms_accepted_version = attr.ib(type=Optional[str], default=None)
# the HTTP cookie used to track the mapping session id

View File

@@ -60,6 +60,10 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# Debug logger for https://github.com/matrix-org/synapse/issues/4422
issue4422_logger = logging.getLogger("synapse.handler.sync.4422_debug")
# Counts the number of times we returned a non-empty sync. `type` is one of
# "initial_sync", "full_state_sync" or "incremental_sync", `lazy_loaded` is
# "true" or "false" depending on if the request asked for lazy loaded members or
@@ -98,9 +102,6 @@ class TimelineBatch:
prev_batch: StreamToken
events: List[EventBase]
limited: bool
# A mapping of event ID to the bundled aggregations for the above events.
# This is only calculated if limited is true.
bundled_aggregations: Optional[Dict[str, Dict[str, Any]]] = None
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
@@ -633,19 +634,10 @@ class SyncHandler:
prev_batch_token = now_token.copy_and_replace("room_key", room_key)
# Don't bother to bundle aggregations if the timeline is unlimited,
# as clients will have all the necessary information.
bundled_aggregations = None
if limited or newly_joined_room:
bundled_aggregations = await self.store.get_bundled_aggregations(
recents, sync_config.user.to_string()
)
return TimelineBatch(
events=recents,
prev_batch=prev_batch_token,
limited=limited or newly_joined_room,
bundled_aggregations=bundled_aggregations,
)
async def get_state_after_event(
@@ -1169,8 +1161,13 @@ class SyncHandler:
num_events = 0
# debug for https://github.com/matrix-org/synapse/issues/9424
# debug for https://github.com/matrix-org/synapse/issues/4422
for joined_room in sync_result_builder.joined:
room_id = joined_room.room_id
if room_id in newly_joined_rooms:
issue4422_logger.debug(
"Sync result for newly joined room %s: %r", room_id, joined_room
)
num_events += len(joined_room.timeline.events)
log_kv(
@@ -1743,6 +1740,18 @@ class SyncHandler:
old_mem_ev_id, allow_none=True
)
# debug for #4422
if has_join:
prev_membership = None
if old_mem_ev:
prev_membership = old_mem_ev.membership
issue4422_logger.debug(
"Previous membership for room %s with join: %s (event %s)",
room_id,
prev_membership,
old_mem_ev_id,
)
if not old_mem_ev or old_mem_ev.membership != Membership.JOIN:
newly_joined_rooms.append(room_id)
@@ -1884,6 +1893,13 @@ class SyncHandler:
upto_token=since_token,
)
if newly_joined:
# debugging for https://github.com/matrix-org/synapse/issues/4422
issue4422_logger.debug(
"RoomSyncResultBuilder events for newly joined room %s: %r",
room_id,
entry.events,
)
room_entries.append(entry)
return _RoomChanges(
@@ -2061,6 +2077,14 @@ class SyncHandler:
# `_load_filtered_recents` can't find any events the user should see
# (e.g. due to having ignored the sender of the last 50 events).
if newly_joined:
# debug for https://github.com/matrix-org/synapse/issues/4422
issue4422_logger.debug(
"Timeline events after filtering in newly-joined room %s: %r",
room_id,
batch,
)
# When we join the room (or the client requests full_state), we should
# send down any existing tags. Usually the user won't have tags in a
# newly joined room, unless either a) they've joined before or b) the
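
The issue4422_logger lines in this file follow a useful pattern: a separately named child logger dedicated to one investigation, so verbose output can be enabled for exactly this code path. A minimal sketch:

```python
import logging

# A dedicated logger name means "synapse.handler.sync.4422_debug" can be set
# to DEBUG in the logging config without flooding the main sync logger.
issue_logger = logging.getLogger("synapse.handler.sync.4422_debug")

def log_newly_joined(room_id: str, events: list) -> None:
    issue_logger.debug("Timeline events in newly-joined room %s: %r", room_id, events)
```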

View File

@@ -32,9 +32,9 @@ class ProxyConnectError(ConnectError):
pass
@attr.s(auto_attribs=True)
@attr.s
class ProxyCredentials:
username_password: bytes
username_password = attr.ib(type=bytes)
def as_proxy_authorization_value(self) -> bytes:
"""

View File

@@ -123,37 +123,37 @@ class ByteParser(ByteWriteable, Generic[T], abc.ABC):
pass
@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class MatrixFederationRequest:
method: str
method = attr.ib(type=str)
"""HTTP method
"""
path: str
path = attr.ib(type=str)
"""HTTP path
"""
destination: str
destination = attr.ib(type=str)
"""The remote server to send the HTTP request to.
"""
json: Optional[JsonDict] = None
json = attr.ib(default=None, type=Optional[JsonDict])
"""JSON to send in the body.
"""
json_callback: Optional[Callable[[], JsonDict]] = None
json_callback = attr.ib(default=None, type=Optional[Callable[[], JsonDict]])
"""A callback to generate the JSON.
"""
query: Optional[dict] = None
query = attr.ib(default=None, type=Optional[dict])
"""Query arguments.
"""
txn_id: Optional[str] = None
txn_id = attr.ib(default=None, type=Optional[str])
"""Unique ID for this request (for logging)
"""
uri: bytes = attr.ib(init=False)
uri = attr.ib(init=False, type=bytes)
"""The URI of this request
"""

Some files were not shown because too many files have changed in this diff.