
Compare commits


32 Commits

Author SHA1 Message Date
Andrew Morgan
4c781a1662 Changelog 2020-11-13 16:23:00 +00:00
Andrew Morgan
55163a0c80 Add knock membership events to stats generation 2020-11-13 16:23:00 +00:00
Andrew Morgan
a2ab56a068 Implement locally rescinding a federated knock
As mentioned in the MSC, a user can rescind (take back) a knock while it
is pending by sending a leave event to the room. This will set their
membership to leave instead of knock.

Now, this functionality already worked before this commit for rooms that
the homeserver was already in. What didn't work was:

* Rescinding a knock over federation to a room with active homeservers
* Rescinding a knock over federation to a room with inactive homeservers

This commit addresses the second bullet point, and leaves the first
bullet point as a TODO (as it is an edge case and it is not immediately
obvious how it would be done).

What this commit does is crib off the same functionality as locally
rejecting an invite. That occurs when we are unable to contact the
homeserver that originally sent us an invite. Instead an out-of-band
leave membership event will be generated and sent to clients locally.

The same is happening here. You can mostly ignore the new
generate_local_out_of_band_membership methods; those are just some
structural bits to allow us to call that method from RoomMemberHandler.

The real meat of this commit is moving about and adding some logic in
`update_membership_locked`, specifically for when we're updating a
user's membership to "leave". There was already some code in there to
check whether the room to send the leave to was a room the homeserver is
not currently a part of. In that case, we'd remotely reject the invite.
This commit just extends that to also rescind knocks if the user's
membership in the room is currently "knock". We skip the remote attempt
for now and go straight to generating a local leave membership event.
2020-11-13 16:23:00 +00:00
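
To make the decision flow above concrete, here is a minimal, self-contained sketch (not the actual body of `update_membership_locked`); the helper passed in as `generate_local_leave` stands in for the new generate_local_out_of_band_membership method, and all parameter names are illustrative assumptions.

```python
# Sketch of the "rescind knock locally" decision described above; the real
# logic lives in update_membership_locked and its helpers.
from typing import Awaitable, Callable, Optional, Tuple

async def maybe_rescind_knock_locally(
    host_in_room: bool,
    current_membership: Optional[str],
    generate_local_leave: Callable[[], Awaitable[Tuple[str, int]]],
) -> Optional[Tuple[str, int]]:
    """Return (event_id, stream_id) of a locally generated leave, or None."""
    if host_in_room:
        # We are in the room, so the leave can be sent through the normal path.
        return None
    if current_membership == "knock":
        # Not in the room and the user has a pending knock: skip the remote
        # attempt for now and generate a local out-of-band leave event, just
        # like locally rejecting an invite.
        return await generate_local_leave()
    # Other cases (e.g. rejecting an invite remotely) are handled elsewhere.
    return None
```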
Andrew Morgan
9bdd420f9b Generalise _locally_reject_invite
We'll be using this to rescind knocks locally too, so generalise the method.
2020-11-13 16:23:00 +00:00
Andrew Morgan
f10ee502c7 Add some handy db methods for knocking
This just adds some database methods that we'll need to use when
implementing rescinding of knocks for clients. They are equivalent to
their invite-related counterparts.
2020-11-13 16:23:00 +00:00
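
As a rough illustration of what an invite-style counterpart for knocks might look like, here is a hedged sketch; the method name, table name, and columns below are illustrative assumptions, not necessarily the ones added in this commit.

```python
# Hypothetical knock counterpart of an invite lookup; names and schema are
# illustrative only.
import sqlite3
from typing import Optional, Tuple

def get_knock_for_local_user_in_room(
    conn: sqlite3.Connection, user_id: str, room_id: str
) -> Optional[Tuple[str, str]]:
    """Return (event_id, membership) for a pending local knock, if any."""
    return conn.execute(
        "SELECT event_id, membership FROM local_current_membership "
        "WHERE user_id = ? AND room_id = ? AND membership = 'knock'",
        (user_id, room_id),
    ).fetchone()
```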
Andrew Morgan
fa7f378bba Auto-add displaynames to knock events if they're missing
Tiny commit to just bring knocking up to feature parity.
2020-11-13 16:23:00 +00:00
Andrew Morgan
3dbe05d022 Extend sync to inform clients about the progress of their knocks
So we've got federation support so that homeservers can communicate
knocking information between them - but how does that information
actually get down to the client? The client knows that it knocked
successfully from a 200 in its original request, but what else does it
need? This commit adds a new "knock" section to /sync (in addition to
"invite", "join", and "leave") to help give the client the information
it needs.

The new "knock" section is used for sending down the stripped state
events we collected earlier. The client will use these to display the
room and its metadata in a little "pending knocks" section or similar.

This is all this commit adds. If the user's knock has been accepted or
rejected, they will receive that information in the "join" or "leave"
sections of /sync.

Most of this code is just cribbing off the invite and join sync code yet
again, with some minor differences. For instance, we don't need to
exclude knock events from sync if the sender is in your ignore list, as
you are the only one who can send knocks for yourself.

The structure of the "knock" dict in sync is modeled after "invite", as
clients also receive stripped state in that. The structure can be viewed
in the linked MSC.
2020-11-13 16:23:00 +00:00
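
For orientation, a hand-written example of roughly how the new section is shaped, following MSC2403's description of stripped state under the knocked room; the exact key names and the abbreviated event bodies here should be checked against the MSC rather than taken as Synapse's literal output.

```python
# Hand-written illustration of the new "knock" section in a /sync response
# (abbreviated; real responses carry full stripped state events).
sync_fragment = {
    "rooms": {
        "knock": {
            "!abcdefg:example.org": {
                "knock_state": {
                    "events": [
                        {
                            "type": "m.room.name",
                            "state_key": "",
                            "sender": "@alice:example.org",
                            "content": {"name": "Some Room"},
                        },
                        {
                            "type": "m.room.member",
                            "state_key": "@knocker:other.example.org",
                            "sender": "@knocker:other.example.org",
                            "content": {"membership": "knock"},
                        },
                    ]
                }
            }
        }
    }
}
```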
Andrew Morgan
09e1942b04 Send stripped state events back to the knocking homeserver
Here we finally send the stripped state events back to the knocking
homeserver, which then ingests and stores those events.

A future commit will actually start sending those events down /sync to
the relevant user.
2020-11-13 16:23:00 +00:00
Andrew Morgan
a4dafd407d Federation: make_knock and send_knock implementations
Most of this is explained in the linked MSC (and don't miss the sequence
diagram in the MSC comments), but roughly knocking takes inspiration from
room joins and room invites. This commit is the room join stuff.

First, the knocking homeserver makes a request to the make_knock
endpoint on another homeserver. The other homeserver will send back a
knock event that is valid for the knocking user and the room that they
are knocking on. The knocking homeserver will sign the event and send it
back, after which the other homeserver takes that event and sends it
into the room on the knocking homeserver's behalf.

It's worth noting that accepting and rejecting knocks both happen over
the existing room invite/leave flows. A homeserver rescinding its knock
is likewise just sending a leave.

Once the event has been inserted into the room, the homeserver that's in
the room will send back a 200 and an empty JSON dict to the knocker to
confirm everything went well. In a future commit, this dict will
instead be filled with some stripped state events from the room, which
the knocking homeserver will pass back to the knocking user.

And yes, the logging statements in this commit are intentional. They're
consistent with the rest of the file :)
2020-11-13 16:22:56 +00:00
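
A high-level sketch of the two-step exchange described above, from the knocking homeserver's point of view; `transport` and `signer` are hypothetical stand-ins rather than Synapse's actual federation client, and the endpoint comments reflect the servlets added later in this comparison.

```python
# Sketch of the knock handshake: template the event remotely, sign it
# locally, then hand it back to be sent into the room on our behalf.
async def knock_via_federation(transport, signer, room_id: str, user_id: str) -> dict:
    # Step 1: ask a homeserver in the room to build a knock event template
    # for this user (GET /make_knock).
    response = await transport.make_knock(room_id, user_id)
    proto_event = response["event"]

    # Step 2: sign the templated membership event with our server's keys.
    signed_event = signer.sign_event(proto_event)

    # Step 3: send the signed event back (PUT /send_knock); the resident
    # server inserts it into the room and replies with a JSON dict (empty
    # for now, later carrying stripped state events).
    return await transport.send_knock(room_id, signed_event)
```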
Andrew Morgan
fa1de1d858 Rename maybe_store_room_on_{invite,outlier_membership}
There's a handy function called maybe_store_room_on_invite which allows us to
create an entry in the rooms table for a room (and its version) that we
aren't joined to yet, but which we can reference when ingesting events about it.

This is currently used for invites where we receive some stripped state about
the room and pass it down via /sync to the client, without us being in the
room yet.

There is a similar requirement for knocking, where we will eventually do the
same thing and need an entry in the rooms table as well. Thus, reusing this
function works; however, its name needs to be generalised a bit.

So that is what this commit does.
2020-11-11 17:19:13 +00:00
Andrew Morgan
63c767ac10 Add room_knock_state_types config option
This option serves the same purpose as the existing
room_invite_state_types option, which defines what state events are sent
over to a user that is invited to a room. This information is necessary
for the user - who isn't in the room yet - to get some metadata about
the room in order to display it in a pretty fashion in the user's
pending-invites list.

It includes information such as the room's name, avatar, topic,
canonical alias, room encryption state etc. as well as the invite
membership event which the invited user's homeserver can reference.

This new option works in exactly the same way, except the state is sent by a
homeserver in the room to the knocker during the knock process. This option
will actually be utilised in a later commit.
2020-11-11 16:46:17 +00:00
Andrew Morgan
c4ff5ddd00 Add CS /_matrix/client/r0/knock/{roomIdOrAlias} endpoint
We're ditching the usual idea of having two endpoints for each
membership-related action, as per the MSC. Thus knocking only gets the
more powerful variant (the one that supports room aliases as well as
IDs). The reason is also optional.

The other small change is just to ensure displaynames get added to the
content of this particular membership event.
2020-11-11 16:40:41 +00:00
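
A hedged usage sketch of the new endpoint using the `requests` library; the homeserver URL, access token, and alias are placeholders, and whether extra query parameters (such as candidate server names) are needed is not shown here.

```python
# Illustrative client-side call to the knock endpoint; values are placeholders.
from typing import Optional
import requests

def knock_on_room(
    homeserver: str, access_token: str, room_id_or_alias: str,
    reason: Optional[str] = None,
) -> dict:
    url = f"{homeserver}/_matrix/client/r0/knock/{room_id_or_alias}"
    body = {"reason": reason} if reason is not None else {}
    resp = requests.post(
        url, json=body,
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```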
Andrew Morgan
6bd57f4bf3 Update the event auth rules for knocking
Hopefully most of these changes are explained through the added comments and error
messages. The changes are also described conceptually in the MSC:
https://github.com/Sorunome/matrix-doc/blob/soru/knock/proposals/2403-knock.md#join-rules
2020-11-11 16:40:38 +00:00
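
The gist of the added auth branch, condensed from the diff further down in this comparison (error messages copied from there); this is a standalone paraphrase, not the full `_is_membership_change_allowed` implementation.

```python
# Condensed paraphrase of the new knock checks in the membership auth rules.
class AuthError(Exception):
    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code

def check_knock(
    join_rule: str, sender: str, target_user_id: str,
    target_in_room: bool, target_banned: bool,
) -> None:
    if join_rule != "knock":
        raise AuthError(403, "You don't have permission to knock")
    if target_user_id != sender:
        raise AuthError(403, "You cannot knock for other users")
    if target_in_room:
        raise AuthError(403, "You cannot knock on a room you are already in")
    if target_banned:
        raise AuthError(403, "You are banned from this room")
```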
Andrew Morgan
f7d11de227 Add xyz.amorgan.knock /versions string 2020-11-10 19:04:49 +00:00
Andrew Morgan
8b79ea03e5 Add new experimental room version for knocking 2020-11-10 19:04:10 +00:00
Marcus Schopen
c059413001 Notes on SSO logins and media_repository worker (#8701)
If SSO login (e.g. SAML) is used in a multi-worker setup, it should be mentioned that currently all SAML logins must run on the same worker; see https://github.com/matrix-org/synapse/issues/7530

Also, if you are using different ports (for example 443 and 8448) in a reverse proxy for client and federation traffic, the path `/_matrix/media` on both the client and federation ports must point to the listener of the `media_repository` worker. Otherwise, a remote server fetching a media object over the federation port will get a 404 for `/_matrix/media`; see https://github.com/matrix-org/synapse/issues/8695
2020-11-06 14:33:07 +00:00
Andrew Morgan
2a6b685294 Add documentation about documentation to CONTRIBUTING.md (#8714)
This PR adds some documentation that:

* Describes who the audience is for the `docs/`, `docs/dev/` and `docs/admin/` directories, as well as for Synapse's wiki page.
* Stresses that we'd like all documentation to be written in markdown.
2020-11-06 11:59:22 +00:00
Richard van der Hoff
fb56dfdccd Fix SIGHUP handler (#8697)
Fixes:

```
builtins.TypeError: _reload_logging_config() takes 1 positional argument but 2 were given
```
2020-11-06 11:42:07 +00:00
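
A minimal sketch of the registration pattern after the fix (a reproduction of the idea, not the Synapse module): the hook runner no longer force-prepends the homeserver, so callbacks that need it, such as `refresh_certificate`, get it bound at registration time instead.

```python
# Minimal sketch of the SIGHUP callback registry after the fix.
_sighup_callbacks = []

def register_sighup(func, *args, **kwargs):
    # Arguments the callback needs (e.g. the homeserver) are bound here.
    _sighup_callbacks.append((func, args, kwargs))

def handle_sighup():
    for func, args, kwargs in _sighup_callbacks:
        # Previously this was func(hs, *args, **kwargs), which passed the
        # homeserver twice to callbacks registered with it already bound,
        # producing "takes 1 positional argument but 2 were given".
        func(*args, **kwargs)

def refresh_certificate(hs):
    print("refreshing certificates for", hs)

register_sighup(refresh_certificate, "my-homeserver")
handle_sighup()
```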
Dirk Klimpel
c3119d1536 Add an admin API for users' media statistics (#8700)
Add `GET /_synapse/admin/v1/statistics/users/media` to get statistics about local media usage by users.
Related to #6094
It is the first API for statistics.
The goal is to avoid or reduce the usage of SQL queries like those in [Wiki analyzing Synapse](https://github.com/matrix-org/synapse/wiki/SQL-for-analyzing-Synapse-PostgreSQL-database-stats)

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2020-11-05 18:59:12 +00:00
Dirk Klimpel
e4676bd877 Add displayname to Shared-Secret Registration for admins (#8722)
Add `displayname` to Shared-Secret Registration for admins to `POST /_synapse/admin/v1/register`
2020-11-05 13:55:45 +00:00
Andrew Morgan
6abb1ad0be Consolidate purge table lists to prevent desynchronisation (#8713)
I idly noticed that these lists were out of sync with each other, causing us to miss a table in a test case (`local_invites`). Let's consolidate these lists instead to prevent this from happening in the future.
2020-11-04 11:26:05 +00:00
Dirk Klimpel
4fda58ddd2 Remove the "draft" status of the Room Details Admin API (#8702)
Fixes #8550
2020-11-03 12:48:25 +00:00
Erik Johnston
243d427fbc Block clients from sending server ACLs that lock the local server out. (#8708)
Fixes #4042
2020-11-03 12:13:48 +00:00
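
The essence of the guard, based on the validator change shown later in this comparison; `server_matches_acl_event` is the existing helper the diff imports, and the standalone wrapper below is only a simplified sketch.

```python
# Simplified sketch of the new validation step for m.room.server_acl events.
class SynapseError(Exception):
    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code

def validate_server_acl(event, local_server_name: str, server_matches_acl_event) -> None:
    """Reject client-submitted ACL events that would deny the local server."""
    if event.get("type") != "m.room.server_acl":
        return
    if not server_matches_acl_event(local_server_name, event):
        raise SynapseError(
            400, "Can't create an ACL event that denies the local server"
        )
```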
Erik Johnston
4b09b7438e Document how to set up multiple event persisters (#8706) 2020-11-03 10:27:11 +00:00
Matthew Hodgson
d04c2d19b3 grammar 2020-11-02 21:22:36 +00:00
Andrew Morgan
e89bd3ea92 Improve error messages of non-str displayname/avatar_url (#8705)
This PR fixes two things:

* Corrects the copy/paste error of telling the client their displayname is wrong when they are submitting an `avatar_url`.
* Returns a `M_INVALID_PARAM` instead of `M_UNKNOWN` for non-str type parameters.

Reported by @t3chguy.
2020-11-02 18:01:09 +00:00
David Baker
59cc2472b3 Add base pushrule to notify for jitsi conferences (#8286)
This could be customised to trigger a different kind of notification in the future, but for now it's a normal non-highlight one.
2020-11-02 16:36:14 +00:00
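
For orientation, roughly what an underride push rule for this could look like; the rule ID, condition keys, and patterns below are assumptions for illustration rather than a copy of the rule added in this PR.

```python
# Assumed shape of a non-highlight push rule for Jitsi widget events;
# identifiers and patterns are illustrative only.
jitsi_push_rule = {
    "rule_id": ".im.vector.jitsi",
    "default": True,
    "enabled": True,
    "conditions": [
        {"kind": "event_match", "key": "type", "pattern": "im.vector.modular.widgets"},
        {"kind": "event_match", "key": "content.type", "pattern": "jitsi"},
        {"kind": "event_match", "key": "state_key", "pattern": "*"},
    ],
    "actions": ["notify", {"set_tweak": "highlight", "value": False}],
}
```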
Dan Callahan
ca39e67f3d Use Python 3.8 in Docker images by default (#8698)
This bumps us closer to current Python without going all the way to 3.9.

Fixes #8674

Signed-off-by: Dan Callahan <danc@element.io>
2020-11-02 16:33:06 +00:00
Erik Johnston
1eb9de90c0 Improve start time by adding index to e2e_cross_signing_keys (#8694)
We do a `SELECT MAX(stream_id) FROM e2e_cross_signing_keys` on startup.
2020-11-02 13:55:56 +00:00
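
To illustrate why the index helps: `MAX(stream_id)` can be answered straight from an index on that column instead of a table scan. The snippet below is a generic sqlite3 illustration with an approximate column list, not Synapse's actual schema delta or background-update code.

```python
# Generic illustration: an index on stream_id serves SELECT MAX(stream_id)
# without scanning the whole table. Column list is approximate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE e2e_cross_signing_keys "
    "(user_id TEXT, keytype TEXT, keydata TEXT, stream_id BIGINT)"
)
conn.execute(
    "CREATE INDEX e2e_cross_signing_keys_stream_idx "
    "ON e2e_cross_signing_keys (stream_id)"
)
print(conn.execute("SELECT MAX(stream_id) FROM e2e_cross_signing_keys").fetchone())
```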
Matthew Hodgson
11fd90a2b7 typo 2020-11-02 13:33:56 +00:00
Andrew Morgan
26b46796ea Fix typos in systemd-with-workers doc 2020-11-02 12:56:16 +00:00
Andrew Morgan
305545682d Fix typo in workers doc 2020-11-02 12:36:18 +00:00
63 changed files with 2094 additions and 429 deletions

View File

@@ -156,6 +156,24 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
## Documentation
There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.
New files added to both folders should be written in [Github-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
should be made to migrate existing documents to markdown where possible.
Some documentation also exists in [Synapse's Github
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.
## Sign off
In order to have a concrete record that your contribution is intentional

1
changelog.d/6739.feature Normal file
View File

@@ -0,0 +1 @@
Implement "room knocking" as per MSC2403. Contributed by Sorunome and anoa.

1
changelog.d/8286.feature Normal file
View File

@@ -0,0 +1 @@
Add a push rule that highlights when a jitsi conference is created in a room.

1
changelog.d/8694.misc Normal file
View File

@@ -0,0 +1 @@
Improve start time by adding an index to `e2e_cross_signing_keys.stream_id`.

1
changelog.d/8697.misc Normal file
View File

@@ -0,0 +1 @@
Re-organize the structured logging code to separate the TCP transport handling from the JSON formatting.

1
changelog.d/8698.misc Normal file
View File

@@ -0,0 +1 @@
Use Python 3.8 in Docker images by default.

1
changelog.d/8700.feature Normal file
View File

@@ -0,0 +1 @@
Add an admin API for local user media statistics. Contributed by @dklimpel.

1
changelog.d/8701.doc Normal file
View File

@@ -0,0 +1 @@
Notes on SSO logins and media_repository worker.

1
changelog.d/8702.misc Normal file
View File

@@ -0,0 +1 @@
Remove the "draft" status of the Room Details Admin API.

1
changelog.d/8705.misc Normal file
View File

@@ -0,0 +1 @@
Improve the error returned when a non-string displayname or avatar_url is used when updating a user's profile.

1
changelog.d/8706.doc Normal file
View File

@@ -0,0 +1 @@
Document experimental support for running multiple event persisters.

1
changelog.d/8708.misc Normal file
View File

@@ -0,0 +1 @@
Block attempts by clients to send server ACLs, or redactions of server ACLs, that would result in the local server being blocked from the room.

1
changelog.d/8713.misc Normal file
View File

@@ -0,0 +1 @@
Consolidate duplicated lists of purged tables that are checked in tests.

1
changelog.d/8714.doc Normal file
View File

@@ -0,0 +1 @@
Add information regarding the various sources of, and expected contributions to, Synapse's documentation to `CONTRIBUTING.md`.

1
changelog.d/8722.feature Normal file
View File

@@ -0,0 +1 @@
Add `displayname` to Shared-Secret Registration for admins.

View File

@@ -11,7 +11,7 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=3.7
ARG PYTHON_VERSION=3.8
###
### Stage 0: builder

View File

@@ -18,7 +18,8 @@ To fetch the nonce, you need to request one from the API::
Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, False by default), and a HMAC digest of the content.
(optional, False by default), and a HMAC digest of the content. Also you can
set the displayname (optional, ``username`` by default).
As an example::
@@ -26,6 +27,7 @@ As an example::
> {
"nonce": "thisisanonce",
"username": "pepper_roni",
"displayname": "Pepper Roni",
"password": "pizza",
"admin": true,
"mac": "mac_digest_here"

View File

@@ -265,12 +265,10 @@ Response:
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.
# DRAFT: Room Details API
# Room Details API
The Room Details admin API allows server admins to get all details of a room.
This API is still a draft and details might change!
The following fields are possible in the JSON response body:
* `room_id` - The ID of the room.

View File

@@ -0,0 +1,83 @@
# Users' media usage statistics
Returns information about all local media usage of users. Gives the
possibility to filter them by time and user.
The API is:
```
GET /_synapse/admin/v1/statistics/users/media
```
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [README.rst](README.rst).
A response body like the following is returned:
```json
{
"users": [
{
"displayname": "foo_user_0",
"media_count": 2,
"media_length": 134,
"user_id": "@foo_user_0:test"
},
{
"displayname": "foo_user_1",
"media_count": 2,
"media_length": 134,
"user_id": "@foo_user_1:test"
}
],
"next_token": 3,
"total": 10
}
```
To paginate, check for `next_token` and if present, call the endpoint
again with `from` set to the value of `next_token`. This will return a new page.
If the endpoint does not return a `next_token` then there are no more
reports to paginate through.
**Parameters**
The following parameters should be set in the URL:
* `limit`: string representing a positive integer - Is optional but is
used for pagination, denoting the maximum number of items to return
in this call. Defaults to `100`.
* `from`: string representing a positive integer - Is optional but used for pagination,
denoting the offset in the returned results. This should be treated as an opaque value
and not explicitly set to anything other than the return value of `next_token` from a
previous call. Defaults to `0`.
* `order_by` - string - The method in which to sort the returned list of users. Valid values are:
- `user_id` - Users are ordered alphabetically by `user_id`. This is the default.
- `displayname` - Users are ordered alphabetically by `displayname`.
- `media_length` - Users are ordered by the total size of uploaded media in bytes.
Smallest to largest.
- `media_count` - Users are ordered by number of uploaded media. Smallest to largest.
* `from_ts` - string representing a positive integer - Considers only
files created at this timestamp or later. Unix timestamp in ms.
* `until_ts` - string representing a positive integer - Considers only
files created at this timestamp or earlier. Unix timestamp in ms.
* `search_term` - string - Filter users by their user ID localpart **or** displayname.
The search term can be found in any part of the string.
Defaults to no filtering.
* `dir` - string - Direction of order. Either `f` for forwards or `b` for backwards.
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
**Response**
The following fields are returned in the JSON response body:
* `users` - An array of objects, each containing information
about the user and their local media. Objects contain the following fields:
- `displayname` - string - Displayname of this user.
- `media_count` - integer - Number of uploaded media by this user.
- `media_length` - integer - Size of uploaded media in bytes by this user.
- `user_id` - string - Fully-qualified user ID (ex. `@user:server.com`).
* `next_token` - integer - Opaque value used for pagination. See above.
* `total` - integer - Total number of users after filtering.
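
As a usage illustration (not part of the committed documentation), a small pagination loop following the `next_token` convention described above; the base URL and admin token are placeholders.

```python
# Illustrative pagination over the media statistics admin API.
import requests

def fetch_all_media_stats(base_url: str, admin_token: str) -> list:
    users = []
    params = {"limit": "100", "from": "0", "order_by": "media_length"}
    headers = {"Authorization": f"Bearer {admin_token}"}
    while True:
        resp = requests.get(
            f"{base_url}/_synapse/admin/v1/statistics/users/media",
            params=params, headers=headers, timeout=30,
        )
        resp.raise_for_status()
        body = resp.json()
        users.extend(body["users"])
        if "next_token" not in body:
            return users
        params["from"] = str(body["next_token"])
```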

View File

@@ -205,7 +205,7 @@ GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.
The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
can be used to retrieve information on the authenticated user. As the Synaspse
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.

View File

@@ -1395,6 +1395,17 @@ metrics_flags:
# - "m.room.encryption"
# - "m.room.name"
# A list of event types from a room that will be given to users when they
# knock on the room. This allows clients to display information about the
# room that they've knocked on, without actually being in the room yet.
#
#room_knock_state_types:
# - "m.room.join_rules"
# - "m.room.canonical_alias"
# - "m.room.avatar"
# - "m.room.encryption"
# - "m.room.name"
# A list of application service config files to use
#

View File

@@ -37,10 +37,10 @@ synapse master process to be started as part of the `matrix-synapse.target`
target.
1. For each worker process to be enabled, run `systemctl enable
matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
should be a corresponding configuration file
should be a corresponding configuration file.
`/etc/matrix-synapse/workers/<worker_name>.yaml`.
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`/
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
## Usage

View File

@@ -116,7 +116,7 @@ public internet; it has no authentication and is unencrypted.
### Worker configuration
In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unqiue name for the worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
@@ -262,6 +262,9 @@ using):
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
Ensure that all SSO logins go to a single process (usually the main process).
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530).
#### Load balancing
@@ -302,7 +305,7 @@ Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)
Currently support streams are `events` and `typing`.
Currently supported streams are `events` and `typing`.
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
@@ -319,6 +322,18 @@ stream_writers:
events: event_persister1
```
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:
```yaml
stream_writers:
events:
- event_persister1
- event_persister2
```
#### Background tasks
There is also *experimental* support for moving background tasks to a separate
@@ -408,6 +423,8 @@ and you must configure a single instance to run the background tasks, e.g.:
media_instance_running_background_jobs: "media-repository-1"
```
Note that if a reverse proxy is used , then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
### `synapse.app.user_dir`
Handles searches in the user directory. It can handle REST endpoints matching

View File

@@ -13,6 +13,7 @@ files =
synapse/config,
synapse/event_auth.py,
synapse/events/builder.py,
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/federation,
synapse/handlers/_base.py,

View File

@@ -1,227 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import subprocess
import sys
from typing import Optional
import click
import git
from packaging import version
from redbaron import RedBaron
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote.
"""
if ref_name in repo.refs:
return repo.refs[ref_name]
elif ref_name in repo.remote().refs:
return repo.remote().refs[ref_name]
else:
return None
def update_branch(repo: git.Repo):
"""Ensure branch is up to date if it has a remote
"""
if repo.active_branch.tracking_branch():
repo.git.merge(repo.active_branch.tracking_branch().name)
@click.command()
def release():
"""Main release command
"""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
click.secho("Updating git repo...")
repo.remote().fetch()
# Parse the AST and load the `__version__` node so that we can edit it
# later.
with open("synapse/__init__.py") as f:
red = RedBaron(f.read())
version_node = None
for node in red:
if node.type != "assignment":
continue
if node.target.type != "name":
continue
if node.target.value != "__version__":
continue
version_node = node
break
if not version_node:
print("Failed to find '__version__' definition in synapse/__init__.py")
sys.exit(1)
# Parse the current version.
current_version = version.parse(version_node.value.value.strip('"'))
assert isinstance(current_version, version.Version)
# Figure out what sort of release we're doing and calcuate the new version.
rc = click.confirm("RC", default=True)
if current_version.pre:
# If the current version is an RC we don't need to bump any of the
# version numbers (other than the RC number).
base_version = "{}.{}.{}".format(
current_version.major, current_version.minor, current_version.micro,
)
if rc:
new_version = "{}.{}.{}rc{}".format(
current_version.major,
current_version.minor,
current_version.micro,
current_version.pre[1] + 1,
)
else:
new_version = base_version
else:
# If this is a new release cycle then we need to know if its a major
# version bump or a hotfix.
release_type = click.prompt(
"Release type",
type=click.Choice(("major", "hotfix")),
show_choices=True,
default="major",
)
if release_type == "major":
base_version = new_version = "{}.{}.{}".format(
current_version.major, current_version.minor + 1, 0,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major, current_version.minor + 1, 0,
)
else:
base_version = new_version = "{}.{}.{}".format(
current_version.major, current_version.minor, current_version.micro + 1,
)
if rc:
new_version = "{}.{}.{}rc1".format(
current_version.major,
current_version.minor,
current_version.micro + 1,
)
# Confirm the calculated version is OK.
if not click.confirm(f"Create new version: {new_version}?", default=True):
click.get_current_context().abort()
# Switch to the release branch.
release_branch_name = f"release-v{base_version}"
release_branch = find_ref(repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
# If the release branch only exists on the remote we check it out
# locally.
repo.git.checkout(release_branch_name)
release_branch = repo.active_branch
else:
# If a branch doesn't exist we create one. We ask which one branch it
# should be based off, defaulting to sensible values depending on the
# release type.
if current_version.is_prerelease:
default = release_branch_name
elif release_type == "major":
default = "develop"
else:
default = "master"
branch_name = click.prompt(
"Which branch should release be based off of?", default=default
)
base_branch = find_ref(repo, branch_name)
if not base_branch:
print(f"Could not find base branch {branch_name}!")
click.get_current_context().abort()
# Checkout the base branch and ensure its up to date
repo.head.reference = base_branch
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Create the new release branch
release_branch = repo.create_head(release_branch_name, commit=base_branch)
# Switch to the release branch and ensure its up to date.
repo.git.checkout(release_branch_name)
update_branch(repo)
# Update the `__version__` variable and write it back to the file.
version_node.value = '"' + new_version + '"'
with open("synapse/__init__.py", "w") as f:
f.write(red.dumps())
# Generate changelgs
subprocess.run("python3 -m towncrier", shell=True)
# Generate debian changelogs if its not an RC.
if not rc:
subprocess.run(
f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
)
subprocess.run('dch -M -r -D stable ""', shell=True)
# Show the user the changes and ask if they want to edit the change log.
repo.git.add("-u")
subprocess.run("git diff --cached", shell=True)
if click.confirm("Edit changelog?", default=False):
click.edit(filename="CHANGES.md")
# Commit the changes.
repo.git.add("-u")
repo.git.commit(f"-m {new_version}")
# We give the option to bail here in case the user wants to make sure things
# are OK before pushing.
if not click.confirm("Push branch to github?", default=True):
print("")
print("Run when ready to push:")
print("")
print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
print("")
sys.exit(0)
# Otherwise, push and open the changelog in the browser.
repo.git.push(f"-u {repo.remote().name} {repo.active_branch.name}")
click.launch(
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
)
if __name__ == "__main__":
release()

View File

@@ -40,6 +40,7 @@ from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
from synapse.storage.databases.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
)
@@ -174,6 +175,7 @@ class Store(
StateBackgroundUpdateStore,
MainStateBackgroundUpdateStore,
UserDirectoryBackgroundUpdateStore,
EndToEndKeyBackgroundStore,
StatsStore,
):
def execute(self, f, *args, **kwargs):

View File

@@ -57,16 +57,19 @@ class RoomVersion:
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
# Before MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool)
# Strictly enforce canonicaljson, do not allow:
# * Integers outside the range of [-2 ^ 53 + 1, 2 ^ 53 - 1]
# * Floats
# * NaN, Infinity, -Infinity
strict_canonicaljson = attr.ib(type=bool)
# bool: MSC2209: Check 'notifications' key while verifying
# MSC2209: Check 'notifications' key while verifying
# m.room.power_levels auth rules.
limit_notifications_power_levels = attr.ib(type=bool)
# MSC2403: Allows join_rules to be set to 'knock', changes auth rules to allow sending
# m.room.membership event with membership 'knock'.
allow_knocking = attr.ib(type=bool)
class RoomVersions:
@@ -79,6 +82,7 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
allow_knocking=False,
)
V2 = RoomVersion(
"2",
@@ -89,6 +93,7 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
allow_knocking=False,
)
V3 = RoomVersion(
"3",
@@ -99,6 +104,7 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
allow_knocking=False,
)
V4 = RoomVersion(
"4",
@@ -109,6 +115,7 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
allow_knocking=False,
)
V5 = RoomVersion(
"5",
@@ -119,6 +126,7 @@ class RoomVersions:
special_case_aliases_auth=True,
strict_canonicaljson=False,
limit_notifications_power_levels=False,
allow_knocking=False,
)
V6 = RoomVersion(
"6",
@@ -129,6 +137,18 @@ class RoomVersions:
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
allow_knocking=False,
)
MSC2403_DEV = RoomVersion(
"xyz.amorgan.knock",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
allow_knocking=True,
)
@@ -141,5 +161,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.V6,
RoomVersions.MSC2403_DEV,
)
} # type: Dict[str, RoomVersion]

View File

@@ -49,7 +49,6 @@ def register_sighup(func, *args, **kwargs):
Args:
func (function): Function to be called when sent a SIGHUP signal.
Will be called with a single default argument, the homeserver.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
_sighup_callbacks.append((func, args, kwargs))
@@ -251,13 +250,13 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
sdnotify(b"RELOADING=1")
for i, args, kwargs in _sighup_callbacks:
i(hs, *args, **kwargs)
i(*args, **kwargs)
sdnotify(b"READY=1")
signal.signal(signal.SIGHUP, handle_sighup)
register_sighup(refresh_certificate)
register_sighup(refresh_certificate, hs)
# Load the certificate from disk.
refresh_certificate(hs)

View File

@@ -1,4 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -21,15 +22,18 @@ class ApiConfig(Config):
section = "api"
def read_config(self, config, **kwargs):
default_room_state_types = [
EventTypes.JoinRules,
EventTypes.CanonicalAlias,
EventTypes.RoomAvatar,
EventTypes.RoomEncryption,
EventTypes.Name,
]
self.room_invite_state_types = config.get(
"room_invite_state_types",
[
EventTypes.JoinRules,
EventTypes.CanonicalAlias,
EventTypes.RoomAvatar,
EventTypes.RoomEncryption,
EventTypes.Name,
],
"room_invite_state_types", default_room_state_types
)
self.room_knock_state_types = config.get(
"room_knock_state_types", default_room_state_types
)
def generate_config_section(cls, **kwargs):
@@ -44,6 +48,17 @@ class ApiConfig(Config):
# - "{RoomAvatar}"
# - "{RoomEncryption}"
# - "{Name}"
# A list of event types from a room that will be given to users when they
# knock on the room. This allows clients to display information about the
# room that they've knocked on, without actually being in the room yet.
#
#room_knock_state_types:
# - "{JoinRules}"
# - "{CanonicalAlias}"
# - "{RoomAvatar}"
# - "{RoomEncryption}"
# - "{Name}"
""".format(
**vars(EventTypes)
)

View File

@@ -161,6 +161,7 @@ def check(
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Auth events: %s", [a.event_id for a in auth_events.values()])
# 5. If type is m.room.membership
if event.type == EventTypes.Member:
_is_membership_change_allowed(event, auth_events)
logger.debug("Allowing! %s", event)
@@ -247,6 +248,7 @@ def _is_membership_change_allowed(
caller_in_room = caller and caller.membership == Membership.JOIN
caller_invited = caller and caller.membership == Membership.INVITE
caller_knocked = caller and caller.membership == Membership.KNOCK
# get info about the target
key = (EventTypes.Member, target_user_id)
@@ -289,9 +291,12 @@ def _is_membership_change_allowed(
raise AuthError(403, "%s is banned from the room" % (target_user_id,))
return
if Membership.JOIN != membership:
# Require the user to be in the room for membership changes other than join/knocking
if Membership.JOIN != membership and Membership.KNOCK != membership:
# If the user has been invited or has knocked, they are allowed to change their
# membership event to leave
if (
caller_invited
(caller_invited or caller_knocked)
and Membership.LEAVE == membership
and target_user_id == event.user_id
):
@@ -324,7 +329,7 @@ def _is_membership_change_allowed(
raise AuthError(403, "You are banned from this room")
elif join_rule == JoinRules.PUBLIC:
pass
elif join_rule == JoinRules.INVITE:
elif join_rule in [JoinRules.INVITE, JoinRules.KNOCK]:
if not caller_in_room and not caller_invited:
raise AuthError(403, "You are not invited to this room.")
else:
@@ -343,6 +348,15 @@ def _is_membership_change_allowed(
elif Membership.BAN == membership:
if user_level < ban_level or user_level <= target_level:
raise AuthError(403, "You don't have permission to ban")
elif Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK:
raise AuthError(403, "You don't have permission to knock")
elif target_user_id != event.user_id:
raise AuthError(403, "You cannot knock for other users")
elif target_in_room:
raise AuthError(403, "You cannot knock on a room you are already in")
elif target_banned:
raise AuthError(403, "You are banned from this room")
else:
raise AuthError(500, "Unknown membership %s" % membership)
@@ -699,7 +713,7 @@ def auth_types_for_event(event: EventBase) -> Set[Tuple[str, str]]:
if event.type == EventTypes.Member:
membership = event.content["membership"]
if membership in [Membership.JOIN, Membership.INVITE]:
if membership in [Membership.JOIN, Membership.INVITE, Membership.KNOCK]:
auth_types.add((EventTypes.JoinRules, ""))
auth_types.add((EventTypes.Member, event.state_key))

View File

@@ -228,6 +228,7 @@ def format_event_for_client_v1(d):
"replaces_state",
"prev_content",
"invite_room_state",
"knock_room_state",
)
for key in copy_keys:
if key in d["unsigned"]:
@@ -311,6 +312,10 @@ def serialize_event(
# If this is an invite for somebody else, then we don't care about the
# invite_room_state as that's meant solely for the invitee. Other clients
# will already have the state since they're in the room.
#
# Note that is_invite only seems to be set to False in the case of appservices
# that are not interested in the target user of the invite.
# TODO: Figure out what to do here w.r.t knocking
if not is_invite:
d["unsigned"].pop("invite_room_state", None)

View File

@@ -13,20 +13,26 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Union
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions
from synapse.config.homeserver import HomeServerConfig
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.events.utils import validate_canonicaljson
from synapse.federation.federation_server import server_matches_acl_event
from synapse.types import EventID, RoomID, UserID
class EventValidator:
def validate_new(self, event, config):
def validate_new(self, event: EventBase, config: HomeServerConfig):
"""Validates the event has roughly the right format
Args:
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
event: The event to validate.
config: The homeserver's configuration.
"""
self.validate_builder(event)
@@ -76,12 +82,18 @@ class EventValidator:
if event.type == EventTypes.Retention:
self._validate_retention(event)
def _validate_retention(self, event):
if event.type == EventTypes.ServerACL:
if not server_matches_acl_event(config.server_name, event):
raise SynapseError(
400, "Can't create an ACL event that denies the local server"
)
def _validate_retention(self, event: EventBase):
"""Checks that an event that defines the retention policy for a room respects the
format enforced by the spec.
Args:
event (FrozenEvent): The event to validate.
event: The event to validate.
"""
if not event.is_state():
raise SynapseError(code=400, msg="must be a state event")
@@ -116,13 +128,10 @@ class EventValidator:
errcode=Codes.BAD_JSON,
)
def validate_builder(self, event):
def validate_builder(self, event: Union[EventBase, EventBuilder]):
"""Validates that the builder/event has roughly the right format. Only
checks values that we expect a proto event to have, rather than all the
fields an event would have
Args:
event (EventBuilder|FrozenEvent)
"""
strings = ["room_id", "sender", "type"]

View File

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyrignt 2020 Sorunome
# Copyrignt 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -539,7 +541,7 @@ class FederationClient(FederationBase):
RuntimeError: if no servers were reachable.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
valid_memberships = {Membership.JOIN, Membership.LEAVE, Membership.KNOCK}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s"
@@ -879,6 +881,62 @@ class FederationClient(FederationBase):
# content.
return resp[1]
async def send_knock(self, destinations: List[str], pdu: EventBase) -> JsonDict:
"""Attempts to send a knock event to given a list of servers. Iterates
through the list until one attempt succeeds.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
Args:
destinations: A list of candidate homeservers which are likely to be
participating in the room.
pdu: The event to be sent.
Returns:
The remote homeserver return some state from the room. The response
dictionary is in the form:
{"knock_state_events": [<state event dict>, ...]}
The list of state events may be empty.
Raises:
SynapseError: If the chosen remote server returns a 3xx/4xx code.
RuntimeError: If no servers were reachable.
"""
async def send_request(destination: str) -> JsonDict:
return await self._do_send_knock(destination, pdu)
return await self._try_destination_list(
"send_knock", destinations, send_request
)
async def _do_send_knock(self, destination: str, pdu: EventBase) -> JsonDict:
"""Send a knock event to a remote homeserver.
Args:
destination: The homeserver to send to.
pdu: The event to send.
Returns:
The remote homeserver can optionally return some state from the room. The response
dictionary is in the form:
{"knock_state_events": [<state event dict>, ...]}
The list of state events may be empty.
"""
time_now = self._clock.time_msec()
return await self.transport_layer.send_knock_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
def get_public_rooms(
self,
remote_server: str,

View File

@@ -118,6 +118,8 @@ class FederationServer(FederationBase):
hs, "fed_txn_handler", timeout_ms=30000
) # type: ResponseCache[Tuple[str, str]]
self._room_knock_state_types = hs.config.room_knock_state_types
self.transaction_actions = TransactionActions(self.store)
self.registry = hs.get_federation_registry()
@@ -565,6 +567,40 @@ class FederationServer(FederationBase):
await self.handler.on_send_leave_request(origin, pdu)
return {}
async def on_make_knock_request(self, origin: str, room_id: str, user_id: str):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_knock_request(origin, room_id, user_id)
room_version = await self.store.get_room_version_id(room_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
async def on_send_knock_request(self, origin: str, content: JsonDict, room_id: str):
logger.debug("on_send_knock_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_knock_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
# Handle the event, and retrieve the EventContext
event_context = await self.handler.on_send_knock_request(origin, pdu)
# Retrieve stripped state events from the room and send them back to the remote
# server. This will allow the remote server's clients to display information
# related to the room while the knock request is pending.
stripped_room_state = await self.store.get_stripped_room_state_from_event_context(
event_context, self._room_knock_state_types
)
return {"knock_state_events": stripped_room_state}
async def on_event_auth(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:

View File

@@ -1,6 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2020 Sorunome
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -26,6 +28,7 @@ from synapse.api.urls import (
FEDERATION_V2_PREFIX,
)
from synapse.logging.utils import log_function
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -209,7 +212,7 @@ class TransportLayerClient:
Fails with ``FederationDeniedError`` if the remote destination
is not in our federation whitelist
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
valid_memberships = {Membership.JOIN, Membership.LEAVE, Membership.KNOCK}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s"
@@ -293,6 +296,36 @@ class TransportLayerClient:
return response
@log_function
async def send_knock_v1(
self, destination: str, room_id: str, event_id: str, content: JsonDict,
) -> JsonDict:
"""
Sends an signed knock membership event to a remote server. This is the second
step knocking after make_knock.
Args:
destination: The remote homeserver.
room_id: The ID of the room to knock on.
event_id: The ID of the knock membership event that we're sending.
content: The knock membership event that we're sending. Note that this is not the
`content` field of the membership event, but the entire signed membership event
itself represented as a JSON dict.
Returns:
The remote homeserver can optionally return some state from the room. The response
dictionary is in the form:
{"knock_state_events": [<state event dict>, ...]}
The list of state events may be empty.
"""
path = _create_v1_path("/send_knock/%s/%s", room_id, event_id)
return await self.client.put_json(
destination=destination, path=path, data=content
)
@log_function
async def send_invite_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/invite/%s/%s", room_id, event_id)

View File

@@ -1,7 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -542,6 +543,22 @@ class FederationV2SendLeaveServlet(BaseFederationServlet):
return 200, content
class FederationMakeKnockServlet(BaseFederationServlet):
PATH = "/make_knock/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
async def on_GET(self, origin, content, query, context, user_id):
content = await self.handler.on_make_knock_request(origin, context, user_id)
return 200, content
class FederationV1MakeKnockServlet(BaseFederationServlet):
PATH = "/send_knock/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_knock_request(origin, content, room_id)
return 200, content
class FederationEventAuthServlet(BaseFederationServlet):
PATH = "/event_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
@@ -1389,11 +1406,13 @@ FEDERATION_SERVLET_CLASSES = (
FederationQueryServlet,
FederationMakeJoinServlet,
FederationMakeLeaveServlet,
FederationMakeKnockServlet,
FederationEventServlet,
FederationV1SendJoinServlet,
FederationV2SendJoinServlet,
FederationV1SendLeaveServlet,
FederationV2SendLeaveServlet,
FederationV1MakeKnockServlet,
FederationV1InviteServlet,
FederationV2InviteServlet,
FederationGetMissingEventsServlet,

View File

@@ -1,7 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -67,7 +68,7 @@ from synapse.replication.http.devices import ReplicationUserDevicesResyncRestSer
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationFederationSendEventsRestServlet,
ReplicationStoreRoomOnInviteRestServlet,
ReplicationStoreRoomOnOutlierMembershipRestServlet,
)
from synapse.state import StateResolutionStore
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
@@ -152,12 +153,14 @@ class FederationHandler(BaseHandler):
self._user_device_resync = ReplicationUserDevicesResyncRestServlet.make_client(
hs
)
self._maybe_store_room_on_invite = ReplicationStoreRoomOnInviteRestServlet.make_client(
self._maybe_store_room_on_outlier_membership = ReplicationStoreRoomOnOutlierMembershipRestServlet.make_client(
hs
)
else:
self._device_list_updater = hs.get_device_handler().device_list_updater
self._maybe_store_room_on_invite = self.store.maybe_store_room_on_invite
self._maybe_store_room_on_outlier_membership = (
self.store.maybe_store_room_on_outlier_membership
)
# When joining a room we need to queue any events for that room up.
# For each room, a list of (pdu, origin) tuples.
@@ -1436,6 +1439,69 @@ class FederationHandler(BaseHandler):
run_in_background(self._handle_queued_pdus, room_queue)
@log_function
async def do_knock(
self, target_hosts: List[str], room_id: str, knockee: str, content: JsonDict,
) -> Tuple[str, int]:
"""Sends the knock to the remote server.
This first triggers a /make_knock request that returns a partial
event that we can fill out and sign. This is then sent to the
remote server via /send_knock.
Knock events must be signed by the knockee's server before distributing.
Args:
target_hosts: A list of hosts that we want to try knocking through.
room_id: The ID of the room to knock on.
knockee: The ID of the user who is knocking.
content: The content of the knock event.
Returns:
A tuple of (event ID, stream ID).
Raises:
SynapseError: If the chosen remote server returns a 3xx/4xx code.
RuntimeError: If no servers were reachable.
"""
logger.debug("Knocking on room %s on behalf of user %s", room_id, knockee)
# Ask the remote server to create a valid knock event for us. Once received,
# we sign the event
origin, event, event_format_version = await self._make_and_verify_event(
target_hosts, room_id, knockee, "knock", content,
)
# Record the room ID and its version so that we have a record of the room
await self._maybe_store_room_on_outlier_membership(
room_id=event.room_id, room_version=event_format_version
)
# Initially try the host that we successfully called /make_knock on
try:
target_hosts.remove(origin)
target_hosts.insert(0, origin)
except ValueError:
pass
# Send the signed event back to the room, and potentially receive some
# further information about the room in the form of partial state events
stripped_room_state = await self.federation_client.send_knock(
target_hosts, event
)
# Store any stripped room state events in the "unsigned" key of the event.
# This is a bit of a hack and is cribbing off of invites. Basically we
# store the room state here and retrieve it again when this event appears
# in the invitee's sync stream. It is stripped out for all other local users.
event.unsigned["knock_room_state"] = stripped_room_state["knock_state_events"]
context = await self.state_handler.compute_event_context(event)
stream_id = await self.persist_events_and_notify(
event.room_id, [(event, context)]
)
return event.event_id, stream_id
async def _handle_queued_pdus(self, room_queue):
"""Process PDUs which got queued up while we were busy send_joining.
@@ -1617,7 +1683,7 @@ class FederationHandler(BaseHandler):
# keep a record of the room version, if we don't yet know it.
# (this may get overwritten if we later get a different room version in a
# join dance).
await self._maybe_store_room_on_invite(
await self._maybe_store_room_on_outlier_membership(
room_id=event.room_id, room_version=room_version
)
@@ -1772,6 +1838,117 @@ class FederationHandler(BaseHandler):
return None
@log_function
async def on_make_knock_request(
self, origin: str, room_id: str, user_id: str
) -> EventBase:
"""We've received a /make_knock/ request, so we create a partial
knock event for the room and return that. We do *not* persist or
process it until the other server has signed it and sent it back.
Args:
origin: The (verified) server name of the requesting server.
room_id: The room to create the knock event in.
user_id: The user to create the knock for.
Returns:
The partial knock event.
"""
if get_domain_from_id(user_id) != origin:
logger.info(
"Get /make_knock request for user %r from different origin %s, ignoring",
user_id,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
room_version = await self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(
room_version,
{
"type": EventTypes.Member,
"content": {"membership": Membership.KNOCK},
"room_id": room_id,
"sender": user_id,
"state_key": user_id,
},
)
event, context = await self.event_creation_handler.create_new_client_event(
builder=builder
)
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
logger.warning("Creation of knock %s forbidden by third-party rules", event)
raise SynapseError(
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_knock_request`
await self.auth.check_from_context(
room_version, event, context, do_sig_check=False
)
except AuthError as e:
logger.warning("Failed to create new knock %r because %s", event, e)
raise e
return event
@log_function
async def on_send_knock_request(self, origin: str, pdu: EventBase) -> EventContext:
"""
We have received a knock event for a room. Verify that event and send it into the room
on the knocking homeserver's behalf.
Args:
origin: The remote homeserver of the knocking user.
pdu: The knocking member event that has been signed by the remote homeserver.
Returns:
The context of the event after inserting it into the room graph.
"""
event = pdu
logger.debug(
"on_send_knock_request: Got event: %s, signatures: %s",
event.event_id,
event.signatures,
)
if get_domain_from_id(event.sender) != origin:
logger.info(
"Got /send_knock request for user %r from different origin %s",
event.sender,
origin,
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
event.internal_metadata.outlier = False
context = await self._handle_new_event(origin, event)
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
logger.info("Sending of knock %s forbidden by third-party rules", event)
raise SynapseError(
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
logger.debug(
"on_send_knock_request: After _handle_new_event: %s, sigs: %s",
event.event_id,
event.signatures,
)
return context
async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event.
"""

View File

@@ -1,7 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyrignt 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -492,7 +493,7 @@ class EventCreationHandler:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership in {Membership.JOIN, Membership.INVITE}:
if membership in {Membership.JOIN, Membership.INVITE, Membership.KNOCK}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content
@@ -914,8 +915,8 @@ class EventCreationHandler:
room_version = await self.store.get_room_version_id(event.room_id)
if event.internal_metadata.is_out_of_band_membership():
# the only sort of out-of-band-membership events we expect to see here
# are invite rejections we have generated ourselves.
# the only sort of out-of-band-membership events we expect to see here are
# invite rejections and rescinded knocks that we have generated ourselves.
assert event.type == EventTypes.Member
assert event.content["membership"] == Membership.LEAVE
else:

View File

@@ -189,7 +189,9 @@ class ProfileHandler(BaseHandler):
)
if not isinstance(new_displayname, str):
raise SynapseError(400, "Invalid displayname")
raise SynapseError(
400, "'displayname' must be a string", errcode=Codes.INVALID_PARAM
)
if len(new_displayname) > MAX_DISPLAYNAME_LEN:
raise SynapseError(
@@ -273,7 +275,9 @@ class ProfileHandler(BaseHandler):
)
if not isinstance(new_avatar_url, str):
raise SynapseError(400, "Invalid displayname")
raise SynapseError(
400, "'avatar_url' must be a string", errcode=Codes.INVALID_PARAM
)
if len(new_avatar_url) > MAX_AVATAR_URL_LEN:
raise SynapseError(

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2016-2020 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -110,6 +111,20 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
async def _remote_knock(
self, remote_room_hosts: List[str], room_id: str, user: UserID, content: dict,
) -> Tuple[str, int]:
"""Try and knock on a room that this server is not in
Args:
remote_room_hosts: List of servers that we can try knocking through.
room_id: Room that we are trying to knock on.
user: User who is trying to knock.
content: A dict that should be used as the content of the knock event.
"""
raise NotImplementedError()
@abc.abstractmethod
async def remote_reject_invite(
self,
@@ -133,6 +148,28 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"""
raise NotImplementedError()
@abc.abstractmethod
async def generate_local_out_of_band_membership(
self,
previous_membership_event: EventBase,
txn_id: Optional[str],
requester: Requester,
content: JsonDict,
):
"""Generate a local leave event for a room
Args:
previous_membership_event: the previous membership event for this user
txn_id: optional transaction ID supplied by the client
requester: user making the request, according to the access token
content: additional content to include in the leave event.
Normally an empty dict.
Returns:
A tuple containing (event_id, stream_id of the leave event)
"""
raise NotImplementedError()
@abc.abstractmethod
async def _user_left_room(self, target: UserID, room_id: str) -> None:
"""Notifies distributor on master process that the user has left the
@@ -518,40 +555,77 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
invite = await self.store.get_invite_for_local_user_in_room(
user_id=target.to_string(), room_id=room_id
) # type: Optional[RoomsForUser]
if not invite:
if invite:
logger.info(
"%s rejects invite to %s from %s",
target,
room_id,
invite.sender,
)
if not self.hs.is_mine_id(invite.sender):
# send the rejection to the inviter's HS (with fallback to
# local event)
return await self.remote_reject_invite(
invite.event_id, txn_id, requester, content,
)
# the inviter was on our server, but has now left. Carry on
# with the normal rejection codepath, which will also send the
# rejection out to any other servers we believe are still in the room.
# thanks to overzealous cleaning up of event_forward_extremities in
# `delete_old_current_state_events`, it's possible to end up with no
# forward extremities here. If that happens, let's just hang the
# rejection off the invite event.
#
# see: https://github.com/matrix-org/synapse/issues/7139
if len(latest_event_ids) == 0:
latest_event_ids = [invite.event_id]
else:
# or perhaps this is a remote room that a local user has knocked on
knock = await self.store.get_knock_for_local_user_in_room(
user_id=target.to_string(), room_id=room_id
) # type: Optional[RoomsForUser]
if knock:
# TODO: We don't yet support rescinding knocks over federation
# as we don't know which homeserver to send it to. An obvious
# candidate is the remote homeserver we originally knocked through,
# however we don't currently store that information.
# Just rescind the knock locally
knock_event = await self.store.get_event(knock.event_id)
return await self.generate_local_out_of_band_membership(
knock_event, txn_id, requester, content
)
logger.info(
"%s sent a leave request to %s, but that is not an active room "
"on this server, and there is no pending invite",
"on this server, and there is no pending knock",
target,
room_id,
)
raise SynapseError(404, "Not a known room")
logger.info(
"%s rejects invite to %s from %s", target, room_id, invite.sender
elif effective_membership_state == Membership.KNOCK:
if not is_host_in_room:
# The knock needs to be sent over federation instead
remote_room_hosts.append(room_id.split(":", 1)[1])
content["membership"] = Membership.KNOCK
profile = self.profile_handler
if "displayname" not in content:
content["displayname"] = await profile.get_displayname(target)
if "avatar_url" not in content:
content["avatar_url"] = await profile.get_avatar_url(target)
return await self._remote_knock(
remote_room_hosts, room_id, target, content
)
if not self.hs.is_mine_id(invite.sender):
# send the rejection to the inviter's HS (with fallback to
# local event)
return await self.remote_reject_invite(
invite.event_id, txn_id, requester, content,
)
# the inviter was on our server, but has now left. Carry on
# with the normal rejection codepath, which will also send the
# rejection out to any other servers we believe are still in the room.
# thanks to overzealous cleaning up of event_forward_extremities in
# `delete_old_current_state_events`, it's possible to end up with no
# forward extremities here. If that happens, let's just hang the
# rejection off the invite event.
#
# see: https://github.com/matrix-org/synapse/issues/7139
if len(latest_event_ids) == 0:
latest_event_ids = [invite.event_id]
return await self._local_membership_update(
requester=requester,
target=target,
@@ -1104,32 +1178,34 @@ class RoomMemberMasterHandler(RoomMemberHandler):
#
logger.warning("Failed to reject invite: %s", e)
return await self._locally_reject_invite(
return await self.generate_local_out_of_band_membership(
invite_event, txn_id, requester, content
)
async def _locally_reject_invite(
async def generate_local_out_of_band_membership(
self,
invite_event: EventBase,
previous_membership_event: EventBase,
txn_id: Optional[str],
requester: Requester,
content: JsonDict,
) -> Tuple[str, int]:
"""Generate a local invite rejection
):
"""Generate a local leave event for a room
This is called after we fail to reject an invite via a remote server. It
generates an out-of-band membership event locally.
This can be called after we e.g fail to reject an invite via a remote server.
It generates an out-of-band membership event locally.
Args:
invite_event: the invite to be rejected
previous_membership_event: the previous membership event for this user
txn_id: optional transaction ID supplied by the client
requester: user making the rejection request, according to the access token
content: additional content to include in the rejection event.
requester: user making the request, according to the access token
content: additional content to include in the leave event.
Normally an empty dict.
"""
room_id = invite_event.room_id
target_user = invite_event.state_key
Returns:
A tuple containing (event_id, stream_id of the leave event)
"""
room_id = previous_membership_event.room_id
target_user = previous_membership_event.state_key
content["membership"] = Membership.LEAVE
@@ -1141,12 +1217,12 @@ class RoomMemberMasterHandler(RoomMemberHandler):
"state_key": target_user,
}
# the auth events for the new event are the same as that of the invite, plus
# the invite itself.
# the auth events for the new event are the same as that of the previous event, plus
# the event itself.
#
# the prev_events are just the invite.
prev_event_ids = [invite_event.event_id]
auth_event_ids = invite_event.auth_event_ids() + prev_event_ids
# the prev_events consist solely of the previous membership event.
prev_event_ids = [previous_membership_event.event_id]
auth_event_ids = previous_membership_event.auth_event_ids() + prev_event_ids
event, context = await self.event_creation_handler.create_event(
requester,
@@ -1166,6 +1242,32 @@ class RoomMemberMasterHandler(RoomMemberHandler):
return result_event.event_id, result_event.internal_metadata.stream_ordering
async def _remote_knock(
self, remote_room_hosts: List[str], room_id: str, user: UserID, content: dict,
) -> Tuple[str, int]:
"""Sends a knock to a room. Attempts to do so via one remote out of a given list.
Args:
remote_room_hosts: A list of homeservers to try knocking through.
room_id: The ID of the room to knock on.
user: The user to knock on behalf of.
content: The content of the knock event.
Returns:
A tuple of (event ID, stream ID).
"""
# filter ourselves out of remote_room_hosts
remote_room_hosts = [
host for host in remote_room_hosts if host != self.hs.hostname
]
if len(remote_room_hosts) == 0:
raise SynapseError(404, "No known servers")
return await self.federation_handler.do_knock(
remote_room_hosts, room_id, user.to_string(), content=content
)
async def _user_left_room(self, target: UserID, room_id: str) -> None:
"""Implements RoomMemberHandler._user_left_room
"""

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,13 +18,14 @@ import logging
from typing import List, Optional, Tuple
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.handlers.room_member import RoomMemberHandler
from synapse.replication.http.membership import (
ReplicationRemoteJoinRestServlet as ReplRemoteJoin,
ReplicationRemoteRejectInviteRestServlet as ReplRejectInvite,
ReplicationUserJoinedLeftRoomRestServlet as ReplJoinedLeft,
)
from synapse.types import Requester, UserID
from synapse.types import JsonDict, Requester, UserID
logger = logging.getLogger(__name__)
@@ -79,6 +81,47 @@ class RoomMemberWorkerHandler(RoomMemberHandler):
)
return ret["event_id"], ret["stream_id"]
async def generate_local_out_of_band_membership(
self,
previous_membership_event: EventBase,
txn_id: Optional[str],
requester: Requester,
content: JsonDict,
):
"""Generate a local leave event for a room
Args:
previous_membership_event: the previous membership event for this user
txn_id: optional transaction ID supplied by the client
requester: user making the request, according to the access token
content: additional content to include in the leave event.
Normally an empty dict.
Returns:
A tuple containing (event_id, stream_id of the leave event)
"""
ret = await self.generate_local_out_of_band_membership(
previous_membership_event=previous_membership_event,
txn_id=txn_id,
requester=requester,
content=content,
)
return ret["event_id"], ret["stream_id"]
async def _remote_knock(
self, remote_room_hosts: List[str], room_id: str, user: UserID, content: dict,
) -> Tuple[str, int]:
"""Sends a knock to a room.
Implements RoomMemberHandler._remote_knock
"""
return await self._remote_knock(
remote_room_hosts=remote_room_hosts,
room_id=room_id,
user=user,
content=content,
)
async def _user_left_room(self, target: UserID, room_id: str) -> None:
"""Implements RoomMemberHandler._user_left_room
"""

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
# Copyright 2020 Sorunome
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -225,6 +226,8 @@ class StatsHandler:
room_stats_delta["left_members"] -= 1
elif prev_membership == Membership.BAN:
room_stats_delta["banned_members"] -= 1
elif prev_membership == Membership.KNOCK:
room_stats_delta["knocked_members"] -= 1
else:
raise ValueError(
"%r is not a valid prev_membership" % (prev_membership,)
@@ -246,6 +249,8 @@ class StatsHandler:
room_stats_delta["left_members"] += 1
elif membership == Membership.BAN:
room_stats_delta["banned_members"] += 1
elif membership == Membership.KNOCK:
room_stats_delta["knocked_members"] += 1
else:
raise ValueError("%r is not a valid membership" % (membership,))

View File

@@ -149,6 +149,16 @@ class InvitedSyncResult:
return True
@attr.s(slots=True, frozen=True)
class KnockedSyncResult:
room_id = attr.ib(type=str)
knock = attr.ib(type=EventBase)
def __bool__(self) -> bool:
"""Knocked rooms should always be reported to the client"""
return True
@attr.s(slots=True, frozen=True)
class GroupsSyncResult:
join = attr.ib(type=JsonDict)
@@ -182,6 +192,7 @@ class _RoomChanges:
room_entries = attr.ib(type=List["RoomSyncResultBuilder"])
invited = attr.ib(type=List[InvitedSyncResult])
knocked = attr.ib(type=List[KnockedSyncResult])
newly_joined_rooms = attr.ib(type=List[str])
newly_left_rooms = attr.ib(type=List[str])
@@ -195,6 +206,7 @@ class SyncResult:
account_data: List of account_data events for the user.
joined: JoinedSyncResult for each joined room.
invited: InvitedSyncResult for each invited room.
knocked: KnockedSyncResult for each knocked on room.
archived: ArchivedSyncResult for each archived room.
to_device: List of direct messages for the device.
device_lists: List of user_ids whose devices have changed
@@ -210,6 +222,7 @@ class SyncResult:
account_data = attr.ib(type=List[JsonDict])
joined = attr.ib(type=List[JoinedSyncResult])
invited = attr.ib(type=List[InvitedSyncResult])
knocked = attr.ib(type=List[KnockedSyncResult])
archived = attr.ib(type=List[ArchivedSyncResult])
to_device = attr.ib(type=List[JsonDict])
device_lists = attr.ib(type=DeviceLists)
@@ -226,6 +239,7 @@ class SyncResult:
self.presence
or self.joined
or self.invited
or self.knocked
or self.archived
or self.account_data
or self.to_device
@@ -997,7 +1011,7 @@ class SyncHandler:
res = await self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_or_invited_users, _, _ = res
newly_joined_rooms, newly_joined_or_invited_or_knocking_users, _, _ = res
_, _, newly_left_rooms, newly_left_users = res
block_all_presence_data = (
@@ -1006,7 +1020,9 @@ class SyncHandler:
if self.hs_config.use_presence and not block_all_presence_data:
logger.debug("Fetching presence data")
await self._generate_sync_entry_for_presence(
sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
sync_result_builder,
newly_joined_rooms,
newly_joined_or_invited_or_knocking_users,
)
logger.debug("Fetching to-device data")
@@ -1015,7 +1031,7 @@ class SyncHandler:
device_lists = await self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_or_invited_users=newly_joined_or_invited_users,
newly_joined_or_invited_or_knocking_users=newly_joined_or_invited_or_knocking_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
@@ -1049,6 +1065,7 @@ class SyncHandler:
account_data=sync_result_builder.account_data,
joined=sync_result_builder.joined,
invited=sync_result_builder.invited,
knocked=sync_result_builder.knocked,
archived=sync_result_builder.archived,
to_device=sync_result_builder.to_device,
device_lists=device_lists,
@@ -1108,7 +1125,7 @@ class SyncHandler:
self,
sync_result_builder: "SyncResultBuilder",
newly_joined_rooms: Set[str],
newly_joined_or_invited_users: Set[str],
newly_joined_or_invited_or_knocking_users: Set[str],
newly_left_rooms: Set[str],
newly_left_users: Set[str],
) -> DeviceLists:
@@ -1117,8 +1134,9 @@ class SyncHandler:
Args:
sync_result_builder
newly_joined_rooms: Set of rooms user has joined since previous sync
newly_joined_or_invited_users: Set of users that have joined or
been invited to a room since previous sync.
newly_joined_or_invited_or_knocking_users: Set of users that have joined,
been invited to a room or are knocking on a room since
previous sync.
newly_left_rooms: Set of rooms user has left since previous sync
newly_left_users: Set of users that have left a room we're in since
previous sync
@@ -1129,7 +1147,9 @@ class SyncHandler:
# We're going to mutate these fields, so lets copy them rather than
# assume they won't get used later.
newly_joined_or_invited_users = set(newly_joined_or_invited_users)
newly_joined_or_invited_or_knocking_users = set(
newly_joined_or_invited_or_knocking_users
)
newly_left_users = set(newly_left_users)
if since_token and since_token.device_list_key:
@@ -1168,11 +1188,11 @@ class SyncHandler:
# Step 1b, check for newly joined rooms
for room_id in newly_joined_rooms:
joined_users = await self.state.get_current_users_in_room(room_id)
newly_joined_or_invited_users.update(joined_users)
newly_joined_or_invited_or_knocking_users.update(joined_users)
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
users_that_have_changed.update(newly_joined_or_invited_users)
users_that_have_changed.update(newly_joined_or_invited_or_knocking_users)
user_signatures_changed = await self.store.get_users_whose_signatures_changed(
user_id, since_token.device_list_key
@@ -1417,6 +1437,7 @@ class SyncHandler:
room_entries = room_changes.room_entries
invited = room_changes.invited
knocked = room_changes.knocked
newly_joined_rooms = room_changes.newly_joined_rooms
newly_left_rooms = room_changes.newly_left_rooms
@@ -1437,9 +1458,10 @@ class SyncHandler:
await concurrently_execute(handle_room_entries, room_entries, 10)
sync_result_builder.invited.extend(invited)
sync_result_builder.knocked.extend(knocked)
# Now we want to get any newly joined or invited users
newly_joined_or_invited_users = set()
# Now we want to get any newly joined, invited or knocking users
newly_joined_or_invited_or_knocking_users = set()
newly_left_users = set()
if since_token:
for joined_sync in sync_result_builder.joined:
@@ -1451,19 +1473,22 @@ class SyncHandler:
if (
event.membership == Membership.JOIN
or event.membership == Membership.INVITE
or event.membership == Membership.KNOCK
):
newly_joined_or_invited_users.add(event.state_key)
newly_joined_or_invited_or_knocking_users.add(
event.state_key
)
else:
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership == Membership.JOIN:
newly_left_users.add(event.state_key)
newly_left_users -= newly_joined_or_invited_users
newly_left_users -= newly_joined_or_invited_or_knocking_users
return (
set(newly_joined_rooms),
newly_joined_or_invited_users,
newly_joined_or_invited_or_knocking_users,
set(newly_left_rooms),
newly_left_users,
)
@@ -1519,6 +1544,7 @@ class SyncHandler:
newly_left_rooms = []
room_entries = []
invited = []
knocked = []
for room_id, events in mem_change_events_by_room_id.items():
logger.debug(
"Membership changes in %s: [%s]",
@@ -1598,9 +1624,17 @@ class SyncHandler:
should_invite = non_joins[-1].membership == Membership.INVITE
if should_invite:
if event.sender not in ignored_users:
room_sync = InvitedSyncResult(room_id, invite=non_joins[-1])
if room_sync:
invited.append(room_sync)
invite_room_sync = InvitedSyncResult(room_id, invite=non_joins[-1])
if invite_room_sync:
invited.append(invite_room_sync)
# Only bother if our latest membership in the room is knock (and we haven't
# been accepted/rejected in the meantime).
should_knock = non_joins[-1].membership == Membership.KNOCK
if should_knock:
knock_room_sync = KnockedSyncResult(room_id, knock=non_joins[-1])
if knock_room_sync:
knocked.append(knock_room_sync)
# Always include leave/ban events. Just take the last one.
# TODO: How do we handle ban -> leave in same batch?
@@ -1704,7 +1738,9 @@ class SyncHandler:
)
room_entries.append(entry)
return _RoomChanges(room_entries, invited, newly_joined_rooms, newly_left_rooms)
return _RoomChanges(
room_entries, invited, knocked, newly_joined_rooms, newly_left_rooms,
)
async def _get_all_rooms(
self, sync_result_builder: "SyncResultBuilder", ignored_users: FrozenSet[str]
@@ -1724,6 +1760,7 @@ class SyncHandler:
membership_list = (
Membership.INVITE,
Membership.KNOCK,
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
@@ -1735,6 +1772,7 @@ class SyncHandler:
room_entries = []
invited = []
knocked = []
for event in room_list:
if event.membership == Membership.JOIN:
@@ -1754,8 +1792,11 @@ class SyncHandler:
continue
invite = await self.store.get_event(event.event_id)
invited.append(InvitedSyncResult(room_id=event.room_id, invite=invite))
elif event.membership == Membership.KNOCK:
knock = await self.store.get_event(event.event_id)
knocked.append(KnockedSyncResult(room_id=event.room_id, knock=knock))
elif event.membership in (Membership.LEAVE, Membership.BAN):
# Always send down rooms we were banned or kicked from.
# Always send down rooms we were banned from or kicked from.
if not sync_config.filter_collection.include_leave:
if event.membership == Membership.LEAVE:
if user_id == event.sender:
@@ -1776,7 +1817,7 @@ class SyncHandler:
)
)
return _RoomChanges(room_entries, invited, [], [])
return _RoomChanges(room_entries, invited, knocked, [], [])
async def _generate_room_entry(
self,
@@ -2065,6 +2106,7 @@ class SyncResultBuilder:
account_data (list)
joined (list[JoinedSyncResult])
invited (list[InvitedSyncResult])
knocked (list[KnockedSyncResult])
archived (list[ArchivedSyncResult])
groups (GroupsSyncResult|None)
to_device (list)
@@ -2080,6 +2122,7 @@ class SyncResultBuilder:
account_data = attr.ib(type=List[JsonDict], default=attr.Factory(list))
joined = attr.ib(type=List[JoinedSyncResult], default=attr.Factory(list))
invited = attr.ib(type=List[InvitedSyncResult], default=attr.Factory(list))
knocked = attr.ib(type=List[KnockedSyncResult], default=attr.Factory(list))
archived = attr.ib(type=List[ArchivedSyncResult], default=attr.Factory(list))
groups = attr.ib(type=Optional[GroupsSyncResult], default=None)
to_device = attr.ib(type=List[JsonDict], default=attr.Factory(list))
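One consequence of threading `knocked` through SyncResult above: a pending knock on its own is enough to make a sync response non-empty, because KnockedSyncResult is always truthy. A toy standalone model of that check (not the real class, just the shape of SyncResult.__bool__):

class FakeSyncResult:
    def __init__(self, knocked):
        self.presence = []
        self.joined = []
        self.invited = []
        self.knocked = knocked
        self.archived = []
        self.account_data = []
        self.to_device = []

    def __bool__(self):
        # same structure as SyncResult.__bool__ above, minus fields we don't model
        return bool(
            self.presence
            or self.joined
            or self.invited
            or self.knocked
            or self.archived
            or self.account_data
            or self.to_device
        )

assert bool(FakeSyncResult(knocked=["a pending knock"]))
assert not bool(FakeSyncResult(knocked=[]))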

View File

@@ -498,6 +498,30 @@ BASE_APPEND_UNDERRIDE_RULES = [
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
{
"rule_id": "global/underride/.im.vector.jitsi",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "im.vector.modular.widgets",
"_id": "_type_modular_widgets",
},
{
"kind": "event_match",
"key": "content.type",
"pattern": "jitsi",
"_id": "_content_type_jitsi",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "*",
"_id": "_is_state_event",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
},
]
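For reference, a hand-written example of an event the new .im.vector.jitsi underride rule above is meant to match: a state event of type "im.vector.modular.widgets" whose content.type is "jitsi" (the state_key pattern "*" only requires that a state_key be present). All values below are illustrative:

jitsi_widget_event = {
    "type": "im.vector.modular.widgets",
    "state_key": "jitsi_widget_1",
    "sender": "@alice:example.org",
    "content": {
        "type": "jitsi",
        "name": "Video call",
    },
}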

View File

@@ -254,20 +254,20 @@ class ReplicationCleanRoomRestServlet(ReplicationEndpoint):
return 200, {}
class ReplicationStoreRoomOnInviteRestServlet(ReplicationEndpoint):
class ReplicationStoreRoomOnOutlierMembershipRestServlet(ReplicationEndpoint):
"""Called to clean up any data in DB for a given room, ready for the
server to join the room.
Request format:
POST /_synapse/replication/store_room_on_invite/:room_id/:txn_id
POST /_synapse/replication/store_room_on_outlier_membership/:room_id/:txn_id
{
"room_version": "1",
}
"""
NAME = "store_room_on_invite"
NAME = "store_room_on_outlier_membership"
PATH_ARGS = ("room_id",)
def __init__(self, hs):
@@ -282,7 +282,7 @@ class ReplicationStoreRoomOnInviteRestServlet(ReplicationEndpoint):
async def _handle_request(self, request, room_id):
content = parse_json_object_from_request(request)
room_version = KNOWN_ROOM_VERSIONS[content["room_version"]]
await self.store.maybe_store_room_on_invite(room_id, room_version)
await self.store.maybe_store_room_on_outlier_membership(room_id, room_version)
return 200, {}
@@ -291,4 +291,4 @@ def register_servlets(hs, http_server):
ReplicationFederationSendEduRestServlet(hs).register(http_server)
ReplicationGetQueryRestServlet(hs).register(http_server)
ReplicationCleanRoomRestServlet(hs).register(http_server)
ReplicationStoreRoomOnInviteRestServlet(hs).register(http_server)
ReplicationStoreRoomOnOutlierMembershipRestServlet(hs).register(http_server)

View File

@@ -39,6 +39,7 @@ from synapse.rest.client.v2_alpha import (
filter,
groups,
keys,
knock,
notifications,
openid,
password_policy,
@@ -121,6 +122,7 @@ class ClientRestResource(JsonResource):
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
password_policy.register_servlets(hs, client_resource)
knock.register_servlets(hs, client_resource)
# moving to /_synapse/admin
admin.register_servlets_for_client_rest_resource(hs, client_resource)

View File

@@ -47,6 +47,7 @@ from synapse.rest.admin.rooms import (
ShutdownRoomRestServlet,
)
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.statistics import UserMediaStatisticsRestServlet
from synapse.rest.admin.users import (
AccountValidityRenewServlet,
DeactivateAccountRestServlet,
@@ -227,6 +228,7 @@ def register_servlets(hs, http_server):
DeviceRestServlet(hs).register(http_server)
DevicesRestServlet(hs).register(http_server)
DeleteDevicesRestServlet(hs).register(http_server)
UserMediaStatisticsRestServlet(hs).register(http_server)
EventReportDetailRestServlet(hs).register(http_server)
EventReportsRestServlet(hs).register(http_server)
PushersRestServlet(hs).register(http_server)

View File

@@ -0,0 +1,122 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Dirk Klimpel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
from synapse.storage.databases.main.stats import UserSortOrder
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class UserMediaStatisticsRestServlet(RestServlet):
"""
Get statistics about uploaded media by users.
"""
PATTERNS = admin_patterns("/statistics/users/media$")
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
order_by = parse_string(
request, "order_by", default=UserSortOrder.USER_ID.value
)
if order_by not in (
UserSortOrder.MEDIA_LENGTH.value,
UserSortOrder.MEDIA_COUNT.value,
UserSortOrder.USER_ID.value,
UserSortOrder.DISPLAYNAME.value,
):
raise SynapseError(
400,
"Unknown value for order_by: %s" % (order_by,),
errcode=Codes.INVALID_PARAM,
)
start = parse_integer(request, "from", default=0)
if start < 0:
raise SynapseError(
400,
"Query parameter from must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
limit = parse_integer(request, "limit", default=100)
if limit < 0:
raise SynapseError(
400,
"Query parameter limit must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
from_ts = parse_integer(request, "from_ts", default=0)
if from_ts < 0:
raise SynapseError(
400,
"Query parameter from_ts must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
until_ts = parse_integer(request, "until_ts")
if until_ts is not None:
if until_ts < 0:
raise SynapseError(
400,
"Query parameter until_ts must be a string representing a positive integer.",
errcode=Codes.INVALID_PARAM,
)
if until_ts <= from_ts:
raise SynapseError(
400,
"Query parameter until_ts must be greater than from_ts.",
errcode=Codes.INVALID_PARAM,
)
search_term = parse_string(request, "search_term")
if search_term == "":
raise SynapseError(
400,
"Query parameter search_term cannot be an empty string.",
errcode=Codes.INVALID_PARAM,
)
direction = parse_string(request, "dir", default="f")
if direction not in ("f", "b"):
raise SynapseError(
400, "Unknown direction: %s" % (direction,), errcode=Codes.INVALID_PARAM
)
users_media, total = await self.store.get_users_media_usage_paginate(
start, limit, from_ts, until_ts, order_by, direction, search_term
)
ret = {"users": users_media, "total": total}
if (start + limit) < total:
ret["next_token"] = start + len(users_media)
return 200, ret
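A hypothetical request against the admin endpoint defined above. The URL is the one exercised by the tests further down; the `requests` package and the example hostname and token are assumptions for illustration:

import requests

resp = requests.get(
    "https://synapse.example.org/_synapse/admin/v1/statistics/users/media",
    params={"order_by": "media_count", "dir": "b", "limit": 10},
    headers={"Authorization": "Bearer <admin access token>"},
)
resp.raise_for_status()
body = resp.json()
# body looks like {"users": [...], "total": <int>}, plus "next_token" when more
# rows remain (see the pagination tests further down).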

View File

@@ -412,6 +412,7 @@ class UserRegisterServlet(RestServlet):
admin = body.get("admin", None)
user_type = body.get("user_type", None)
displayname = body.get("displayname", None)
if user_type is not None and user_type not in UserTypes.ALL_USER_TYPES:
raise SynapseError(400, "Invalid user type")
@@ -448,6 +449,7 @@ class UserRegisterServlet(RestServlet):
password_hash=password_hash,
admin=bool(admin),
user_type=user_type,
default_display_name=displayname,
by_admin=True,
)

View File

@@ -0,0 +1,103 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Sorunome
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from twisted.web.server import Request
from synapse.api.errors import SynapseError
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.logging.opentracing import set_tag
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.types import JsonDict, RoomAlias, RoomID
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from ._base import client_patterns
logger = logging.getLogger(__name__)
class TransactionRestServlet(RestServlet):
def __init__(self, hs: "HomeServer"):
super(TransactionRestServlet, self).__init__()
self.txns = HttpTransactionCache(hs)
class KnockRoomAliasServlet(TransactionRestServlet):
"""
POST /knock/{roomIdOrAlias}
"""
PATTERNS = client_patterns("/knock/(?P<room_identifier>[^/]*)")
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.room_member_handler = hs.get_room_member_handler()
self.auth = hs.get_auth()
async def on_POST(
self, request: Request, room_identifier: str, txn_id: Optional[str] = None,
) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
content = parse_json_object_from_request(request)
event_content = None
if "reason" in content:
event_content = {"reason": content["reason"]}
if RoomID.is_valid(room_identifier):
room_id = room_identifier
try:
remote_room_hosts = [
x.decode("ascii") for x in request.args[b"server_name"]
] # type: Optional[List[str]]
except Exception:
remote_room_hosts = None
elif RoomAlias.is_valid(room_identifier):
handler = self.room_member_handler
room_alias = RoomAlias.from_string(room_identifier)
room_id_obj, remote_room_hosts = await handler.lookup_room_alias(room_alias)
room_id = room_id_obj.to_string()
else:
raise SynapseError(
400, "%s was not legal room ID or room alias" % (room_identifier,)
)
await self.room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
action="knock",
txn_id=txn_id,
third_party_signed=None,
remote_room_hosts=remote_room_hosts,
content=event_content,
)
return 200, {"room_id": room_id}
def on_PUT(self, request: Request, room_identifier: str, txn_id: str):
set_tag("txn_id", txn_id)
return self.txns.fetch_or_execute_request(
request, self.on_POST, request, room_identifier, txn_id
)
def register_servlets(hs, http_server):
KnockRoomAliasServlet(hs).register(http_server)
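A hypothetical client-side call against the servlet registered above. The /_matrix/client/r0 prefix is assumed from the default client_patterns releases, the `requests` package is assumed to be available, and the hostname, room ID and token are placeholders:

import requests
from urllib.parse import quote

room_id = "!pmRWdTzoBBKYQXSpoe:example.org"   # made-up room ID
resp = requests.post(
    "https://synapse.example.org/_matrix/client/r0/knock/" + quote(room_id, safe=""),
    params={"server_name": "example.org"},     # hint for which server to knock through
    headers={"Authorization": "Bearer <access token>"},
    json={"reason": "Hello, may I join?"},
)
print(resp.json())   # {"room_id": "!pmRWdTzoBBKYQXSpoe:example.org"} on success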

View File

@@ -15,6 +15,7 @@
import itertools
import logging
from typing import Any, Callable, Dict, List
from synapse.api.constants import PresenceState
from synapse.api.errors import Codes, StoreError, SynapseError
@@ -24,7 +25,7 @@ from synapse.events.utils import (
format_event_raw,
)
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.sync import SyncConfig
from synapse.handlers.sync import KnockedSyncResult, SyncConfig
from synapse.http.servlet import RestServlet, parse_boolean, parse_integer, parse_string
from synapse.types import StreamToken
from synapse.util import json_decoder
@@ -212,6 +213,10 @@ class SyncRestServlet(RestServlet):
sync_result.invited, time_now, access_token_id, event_formatter
)
knocked = await self.encode_knocked(
sync_result.knocked, time_now, access_token_id, event_formatter
)
archived = await self.encode_archived(
sync_result.archived,
time_now,
@@ -229,7 +234,12 @@ class SyncRestServlet(RestServlet):
"left": list(sync_result.device_lists.left),
},
"presence": SyncRestServlet.encode_presence(sync_result.presence, time_now),
"rooms": {"join": joined, "invite": invited, "leave": archived},
"rooms": {
"join": joined,
"invite": invited,
"knock": knocked,
"leave": archived,
},
"groups": {
"join": sync_result.groups.join,
"invite": sync_result.groups.invite,
@@ -295,7 +305,7 @@ class SyncRestServlet(RestServlet):
Args:
rooms(list[synapse.handlers.sync.InvitedSyncResult]): list of
sync results for rooms this user is joined to
sync results for rooms this user is invited to
time_now(int): current time - used as a baseline for age
calculations
token_id(int): ID of the user's auth token - used for namespacing
@@ -324,6 +334,52 @@ class SyncRestServlet(RestServlet):
return invited
async def encode_knocked(
self,
rooms: List[KnockedSyncResult],
time_now: int,
token_id: int,
event_formatter: Callable[[Dict], Dict],
) -> Dict[str, Dict[str, Any]]:
"""
Encode the rooms we've knocked on in a sync result.
Args:
rooms: list of sync results for rooms this user is knocking on
time_now: current time - used as a baseline for age calculations
token_id: ID of the user's auth token - used for namespacing of transaction IDs
event_formatter: function to convert from federation format to client format
Returns:
The list of rooms the user has knocked on, in our response format.
"""
knocked = {}
for room in rooms:
knock = await self._event_serializer.serialize_event(
room.knock, time_now, token_id=token_id, event_format=event_formatter,
)
# Extract the `unsigned` key from the knock event.
# This is where we (cheekily) store the knock state events
unsigned = knock.setdefault("unsigned", {})
# Extract the stripped room state from the unsigned dict
# This is for clients to get a little bit of information about
# the room they've knocked on, without revealing any sensitive information
knocked_state = list(unsigned.pop("knock_room_state", []))
# Append the actual knock membership event itself as well
# TODO: I *believe* this is just for the client's sake of tracking its membership
# state in each room, but I could be wrong. This certainly doesn't seem like it
# could have any negative effects besides resource usage
knocked_state.append(knock)
# Build the `knock_state` dictionary, which will contain the state of the
# room that the client has knocked on
knocked[room.room_id] = {"knock_state": {"events": knocked_state}}
return knocked
async def encode_archived(
self, rooms, time_now, token_id, event_fields, event_formatter
):
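Putting encode_knocked together with the "rooms" dictionary above, the client-visible shape of the new section looks roughly like the hand-written example below. The event contents are illustrative, not produced by Synapse:

example_rooms_section = {
    "join": {},
    "invite": {},
    "leave": {},
    "knock": {
        "!pmRWdTzoBBKYQXSpoe:example.org": {
            "knock_state": {
                "events": [
                    # stripped state events popped out of the knock's unsigned dict...
                    {"type": "m.room.name", "state_key": "",
                     "content": {"name": "Some room"}},
                    # ...followed by the serialised knock membership event itself
                    {"type": "m.room.member", "state_key": "@alice:example.org",
                     "sender": "@alice:example.org",
                     "content": {"membership": "knock"}},
                ]
            }
        }
    },
}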

View File

@@ -81,6 +81,8 @@ class VersionsRestServlet(RestServlet):
"io.element.e2ee_forced.public": self.e2ee_forced_public,
"io.element.e2ee_forced.private": self.e2ee_forced_private,
"io.element.e2ee_forced.trusted_private": self.e2ee_forced_trusted_private,
# Implements additional endpoints and features as described in MSC2403
"xyz.amorgan.knock": True,
},
},
)
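A minimal sketch of how a client might feature-detect knocking support from the flag advertised above, via the standard /_matrix/client/versions endpoint. The `requests` package and the example hostname are assumptions:

import requests

versions = requests.get("https://synapse.example.org/_matrix/client/versions").json()
supports_knock = versions.get("unstable_features", {}).get("xyz.amorgan.knock", False)
print("knocking supported:", supports_knock)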

View File

@@ -24,7 +24,7 @@ from twisted.enterprise.adbapi import Connection
from synapse.logging.opentracing import log_kv, set_tag, trace
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage.database import make_in_list_sql_clause
from synapse.storage.database import DatabasePool, make_in_list_sql_clause
from synapse.storage.types import Cursor
from synapse.types import JsonDict
from synapse.util import json_encoder
@@ -33,6 +33,7 @@ from synapse.util.iterutils import batch_iter
if TYPE_CHECKING:
from synapse.handlers.e2e_keys import SignatureListItem
from synapse.server import HomeServer
@attr.s(slots=True)
@@ -47,7 +48,20 @@ class DeviceKeyLookupResult:
keys = attr.ib(type=Optional[JsonDict])
class EndToEndKeyWorkerStore(SQLBaseStore):
class EndToEndKeyBackgroundStore(SQLBaseStore):
def __init__(self, database: DatabasePool, db_conn: Connection, hs: "HomeServer"):
super().__init__(database, db_conn, hs)
self.db_pool.updates.register_background_index_update(
"e2e_cross_signing_keys_idx",
index_name="e2e_cross_signing_keys_stream_idx",
table="e2e_cross_signing_keys",
columns=["stream_id"],
unique=True,
)
class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore):
async def get_e2e_device_keys_for_federation_query(
self, user_id: str
) -> Tuple[int, List[JsonDict]]:

View File

@@ -1240,13 +1240,16 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore, SearchStore):
logger.error("store_room with room_id=%s failed: %s", room_id, e)
raise StoreError(500, "Problem creating room.")
async def maybe_store_room_on_invite(self, room_id: str, room_version: RoomVersion):
async def maybe_store_room_on_outlier_membership(
self, room_id: str, room_version: RoomVersion
):
"""
When we receive an invite over federation, store the version of the room if we
don't already know the room version.
When we receive an invite, a knock or any other event over federation that may relate
to a room we are not in, store the version of the room if we don't already know the
room version.
"""
await self.db_pool.simple_upsert(
desc="maybe_store_room_on_invite",
desc="maybe_store_room_on_outlier_membership",
table="rooms",
keyvalues={"room_id": room_id},
values={},

View File

@@ -272,6 +272,21 @@ class RoomMemberWorkerStore(EventsWorkerStore):
user_id, [Membership.INVITE]
)
@cached()
async def get_knocked_rooms_for_local_user(self, user_id: str) -> List[RoomsForUser]:
"""Get all the rooms the *local* user currently has a "knock" membership in.
Args:
user_id: The user ID.
Returns:
A list of RoomsForUser.
"""
return await self.get_rooms_for_local_user_where_membership_is(
user_id, [Membership.KNOCK]
)
async def get_invite_for_local_user_in_room(
self, user_id: str, room_id: str
) -> Optional[RoomsForUser]:
@@ -290,6 +305,24 @@ class RoomMemberWorkerStore(EventsWorkerStore):
return invite
return None
async def get_knock_for_local_user_in_room(
self, user_id: str, room_id: str
) -> Optional[RoomsForUser]:
"""Gets the knock for the given *local* user and room.
Args:
user_id: The user ID to find the knock of.
room_id: The room the user knocked on.
Returns:
Either a RoomsForUser or None if no knock was found.
"""
knocks = await self.get_knocked_rooms_for_local_user(user_id)
for knock in knocks:
if knock.room_id == room_id:
return knock
return None
async def get_rooms_for_local_user_where_membership_is(
self, user_id: str, membership_list: Collection[str]
) -> List[RoomsForUser]:

View File

@@ -0,0 +1,17 @@
/* Copyright 2020 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
INSERT INTO background_updates (update_name, progress_json) VALUES
('e2e_cross_signing_keys_idx', '{}');

View File

@@ -0,0 +1,17 @@
/* Copyright 2020 Sorunome
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
ALTER TABLE room_stats_current ADD knocked_members INT NOT NULL DEFAULT '0';
ALTER TABLE room_stats_historical ADD knocked_members BIGINT NOT NULL DEFAULT '0';

View File

@@ -16,15 +16,18 @@
import logging
from collections import Counter
from enum import Enum
from itertools import chain
from typing import Any, Dict, List, Optional, Tuple
from twisted.internet.defer import DeferredLock
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import StoreError
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.state_deltas import StateDeltasStore
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict
from synapse.util.caches.descriptors import cached
logger = logging.getLogger(__name__)
@@ -38,6 +41,7 @@ ABSOLUTE_STATS_FIELDS = {
"current_state_events",
"joined_members",
"invited_members",
"knocked_members",
"left_members",
"banned_members",
"local_users_in_room",
@@ -59,6 +63,23 @@ TYPE_TO_TABLE = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user
TYPE_TO_ORIGIN_TABLE = {"room": ("rooms", "room_id"), "user": ("users", "name")}
class UserSortOrder(Enum):
"""
Enum to define the sorting method used when returning users
with get_users_media_usage_paginate
MEDIA_LENGTH = ordered by size of uploaded media. Smallest to largest.
MEDIA_COUNT = ordered by number of uploaded media. Smallest to largest.
USER_ID = ordered alphabetically by `user_id`.
DISPLAYNAME = ordered alphabetically by `displayname`
"""
MEDIA_LENGTH = "media_length"
MEDIA_COUNT = "media_count"
USER_ID = "user_id"
DISPLAYNAME = "displayname"
class StatsStore(StateDeltasStore):
def __init__(self, database: DatabasePool, db_conn, hs):
super().__init__(database, db_conn, hs)
@@ -882,3 +903,110 @@ class StatsStore(StateDeltasStore):
complete_with_stream_id=pos,
absolute_field_overrides={"joined_rooms": joined_rooms},
)
async def get_users_media_usage_paginate(
self,
start: int,
limit: int,
from_ts: Optional[int] = None,
until_ts: Optional[int] = None,
order_by: Optional[UserSortOrder] = UserSortOrder.USER_ID.value,
direction: Optional[str] = "f",
search_term: Optional[str] = None,
) -> Tuple[List[JsonDict], int]:
"""Function to retrieve a paginated list of users and their uploaded local media
(size and number). This will return a json list of users and the
total number of users matching the filter criteria.
Args:
start: offset to begin the query from
limit: number of rows to retrieve
from_ts: request only media that are created later than this timestamp (ms)
until_ts: request only media that are created earlier than this timestamp (ms)
order_by: the sort order of the returned list
direction: sort ascending or descending
search_term: a string to filter user names by
Returns:
A list of user dicts and an integer representing the total number of
users that exist given this query
"""
def get_users_media_usage_paginate_txn(txn):
filters = []
args = [self.hs.config.server_name]
if search_term:
filters.append("(lmr.user_id LIKE ? OR displayname LIKE ?)")
args.extend(["@%" + search_term + "%:%", "%" + search_term + "%"])
if from_ts:
filters.append("created_ts >= ?")
args.extend([from_ts])
if until_ts:
filters.append("created_ts <= ?")
args.extend([until_ts])
# Set ordering
if UserSortOrder(order_by) == UserSortOrder.MEDIA_LENGTH:
order_by_column = "media_length"
elif UserSortOrder(order_by) == UserSortOrder.MEDIA_COUNT:
order_by_column = "media_count"
elif UserSortOrder(order_by) == UserSortOrder.USER_ID:
order_by_column = "lmr.user_id"
elif UserSortOrder(order_by) == UserSortOrder.DISPLAYNAME:
order_by_column = "displayname"
else:
raise StoreError(
500, "Incorrect value for order_by provided: %s" % order_by
)
if direction == "b":
order = "DESC"
else:
order = "ASC"
where_clause = "WHERE " + " AND ".join(filters) if len(filters) > 0 else ""
sql_base = """
FROM local_media_repository as lmr
LEFT JOIN profiles AS p ON lmr.user_id = '@' || p.user_id || ':' || ?
{}
GROUP BY lmr.user_id, displayname
""".format(
where_clause
)
# SQLite does not support SELECT COUNT(*) OVER()
sql = """
SELECT COUNT(*) FROM (
SELECT lmr.user_id
{sql_base}
) AS count_user_ids
""".format(
sql_base=sql_base,
)
txn.execute(sql, args)
count = txn.fetchone()[0]
sql = """
SELECT
lmr.user_id,
displayname,
COUNT(lmr.user_id) as media_count,
SUM(media_length) as media_length
{sql_base}
ORDER BY {order_by_column} {order}
LIMIT ? OFFSET ?
""".format(
sql_base=sql_base, order_by_column=order_by_column, order=order,
)
args += [limit, start]
txn.execute(sql, args)
users = self.db_pool.cursor_to_dict(txn)
return users, count
return await self.db_pool.runInteraction(
"get_users_media_usage_paginate_txn", get_users_media_usage_paginate_txn
)

View File

@@ -154,3 +154,60 @@ class EventCreationTestCase(unittest.HomeserverTestCase):
# Check that we've deduplicated the events.
self.assertEqual(len(events), 2)
self.assertEqual(events[0].event_id, events[1].event_id)
class ServerAclValidationTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
login.register_servlets,
room.register_servlets,
]
def prepare(self, reactor, clock, hs):
self.user_id = self.register_user("tester", "foobar")
self.access_token = self.login("tester", "foobar")
self.room_id = self.helper.create_room_as(self.user_id, tok=self.access_token)
def test_allow_server_acl(self):
"""Test that sending an ACL that blocks everyone but ourselves works.
"""
self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)
def test_deny_server_acl_block_ourselves(self):
"""Test that sending an ACL that blocks ourselves does not work.
"""
self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={},
tok=self.access_token,
expect_code=400,
)
def test_deny_redact_server_acl(self):
"""Test that attempting to redact an ACL is blocked.
"""
body = self.helper.send_state(
self.room_id,
EventTypes.ServerACL,
body={"allow": [self.hs.hostname]},
tok=self.access_token,
expect_code=200,
)
event_id = body["event_id"]
# Redaction of event should fail.
path = "/_matrix/client/r0/rooms/%s/redact/%s" % (self.room_id, event_id)
request, channel = self.make_request(
"POST", path, content={}, access_token=self.access_token
)
self.render(request)
self.assertEqual(int(channel.result["code"]), 403)

View File

@@ -531,40 +531,7 @@ class DeleteRoomTestCase(unittest.HomeserverTestCase):
def _is_purged(self, room_id):
"""Test that the following tables have been purged of all rows related to the room.
"""
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
for table in PURGE_TABLES:
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@@ -633,39 +600,7 @@ class PurgeRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
# Test that the following tables have been purged of all rows related to the room.
for table in (
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
):
for table in PURGE_TABLES:
count = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table=table,
@@ -1500,3 +1435,39 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
self.render(request)
self.assertEquals(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
PURGE_TABLES = [
"current_state_events",
"event_backward_extremities",
"event_forward_extremities",
"event_json",
"event_push_actions",
"event_search",
"events",
"group_rooms",
"public_room_list_stream",
"receipts_graph",
"receipts_linearized",
"room_aliases",
"room_depth",
"room_memberships",
"room_stats_state",
"room_stats_current",
"room_stats_historical",
"room_stats_earliest_token",
"rooms",
"stream_ordering_to_exterm",
"users_in_public_rooms",
"users_who_share_private_rooms",
"appservice_room_list",
"e2e_room_keys",
"event_push_summary",
"pusher_throttle",
"group_summary_rooms",
"local_invites",
"room_account_data",
"room_tags",
# "state_groups", # Current impl leaves orphaned state groups around.
"state_groups_state",
]

View File

@@ -0,0 +1,485 @@
# -*- coding: utf-8 -*-
# Copyright 2020 Dirk Klimpel
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from binascii import unhexlify
from typing import Any, Dict, List, Optional
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client.v1 import login
from tests import unittest
class UserMediaStatisticsTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
]
def prepare(self, reactor, clock, hs):
self.store = hs.get_datastore()
self.media_repo = hs.get_media_repository_resource()
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
self.other_user = self.register_user("user", "pass")
self.other_user_tok = self.login("user", "pass")
self.url = "/_synapse/admin/v1/statistics/users/media"
def test_no_auth(self):
"""
Try to list users without authentication.
"""
request, channel = self.make_request("GET", self.url, b"{}")
self.render(request)
self.assertEqual(401, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
def test_requester_is_no_admin(self):
"""
If the user is not a server admin, an error 403 is returned.
"""
request, channel = self.make_request(
"GET", self.url, json.dumps({}), access_token=self.other_user_tok,
)
self.render(request)
self.assertEqual(403, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_invalid_parameter(self):
"""
If parameters are invalid, an error is returned.
"""
# unknown order_by
request, channel = self.make_request(
"GET", self.url + "?order_by=bar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative from
request, channel = self.make_request(
"GET", self.url + "?from=-5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative limit
request, channel = self.make_request(
"GET", self.url + "?limit=-5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative from_ts
request, channel = self.make_request(
"GET", self.url + "?from_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# negative until_ts
request, channel = self.make_request(
"GET", self.url + "?until_ts=-1234", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# until_ts smaller from_ts
request, channel = self.make_request(
"GET",
self.url + "?from_ts=10&until_ts=5",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# empty search term
request, channel = self.make_request(
"GET", self.url + "?search_term=", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
# invalid search order
request, channel = self.make_request(
"GET", self.url + "?dir=bar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
def test_limit(self):
"""
Testing list of media with limit
"""
self._create_users_with_media(10, 2)
request, channel = self.make_request(
"GET", self.url + "?limit=5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 10)
self.assertEqual(len(channel.json_body["users"]), 5)
self.assertEqual(channel.json_body["next_token"], 5)
self._check_fields(channel.json_body["users"])
def test_from(self):
"""
Testing list of media with a defined starting point (from)
"""
self._create_users_with_media(20, 2)
request, channel = self.make_request(
"GET", self.url + "?from=5", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(len(channel.json_body["users"]), 15)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["users"])
def test_limit_and_from(self):
"""
Testing list of media with a defined starting point and limit
"""
self._create_users_with_media(20, 2)
request, channel = self.make_request(
"GET", self.url + "?from=5&limit=10", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
self.assertEqual(channel.json_body["next_token"], 15)
self.assertEqual(len(channel.json_body["users"]), 10)
self._check_fields(channel.json_body["users"])
def test_next_token(self):
"""
Testing that `next_token` appears at the right place
"""
number_users = 20
self._create_users_with_media(number_users, 3)
# `next_token` does not appear
# Number of results is the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=20", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does not appear
# Maximum number of results is larger than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=21", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), number_users)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does appear
# Maximum number of results is smaller than the number of entries
request, channel = self.make_request(
"GET", self.url + "?limit=19", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 19)
self.assertEqual(channel.json_body["next_token"], 19)
# Set `from` to the value of `next_token` to request the remaining entries
# Check `next_token` does not appear
request, channel = self.make_request(
"GET", self.url + "?from=19", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], number_users)
self.assertEqual(len(channel.json_body["users"]), 1)
self.assertNotIn("next_token", channel.json_body)
def test_no_media(self):
"""
Tests that a normal lookup for statistics succeeds
if users have not created any media
"""
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(0, channel.json_body["total"])
self.assertEqual(0, len(channel.json_body["users"]))
def test_order_by(self):
"""
Testing ordering of the list with the `order_by` parameter
"""
# create users
self.register_user("user_a", "pass", displayname="UserZ")
userA_tok = self.login("user_a", "pass")
self._create_media(userA_tok, 1)
self.register_user("user_b", "pass", displayname="UserY")
userB_tok = self.login("user_b", "pass")
self._create_media(userB_tok, 3)
self.register_user("user_c", "pass", displayname="UserX")
userC_tok = self.login("user_c", "pass")
self._create_media(userC_tok, 2)
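# Expected values for the orderings below (each upload is the 67-byte test image
# created by _create_media, so media_length is the number of uploads times 67):
#   user_a (UserZ): media_count 1, media_length  67
#   user_b (UserY): media_count 3, media_length 201
#   user_c (UserX): media_count 2, media_length 134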
# order by user_id
self._order_test("user_id", ["@user_a:test", "@user_b:test", "@user_c:test"])
self._order_test(
"user_id", ["@user_a:test", "@user_b:test", "@user_c:test"], "f",
)
self._order_test(
"user_id", ["@user_c:test", "@user_b:test", "@user_a:test"], "b",
)
# order by displayname
self._order_test(
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"]
)
self._order_test(
"displayname", ["@user_c:test", "@user_b:test", "@user_a:test"], "f",
)
self._order_test(
"displayname", ["@user_a:test", "@user_b:test", "@user_c:test"], "b",
)
# order by media_length
self._order_test(
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"],
)
self._order_test(
"media_length", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
)
self._order_test(
"media_length", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
)
# order by media_count
self._order_test(
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"],
)
self._order_test(
"media_count", ["@user_a:test", "@user_c:test", "@user_b:test"], "f",
)
self._order_test(
"media_count", ["@user_b:test", "@user_c:test", "@user_a:test"], "b",
)
def test_from_until_ts(self):
"""
Testing filter by time with parameters `from_ts` and `until_ts`
"""
# create media earlier than `ts1` to ensure that `from_ts` is working
self._create_media(self.other_user_tok, 3)
self.pump(1)
ts1 = self.clock.time_msec()
# list all media when filter is not set
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
# filter media starting at `ts1` after creating first media
# result is 0
request, channel = self.make_request(
"GET", self.url + "?from_ts=%s" % (ts1,), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 0)
self._create_media(self.other_user_tok, 3)
self.pump(1)
ts2 = self.clock.time_msec()
# create media after `ts2` to ensure that `until_ts` is working
self._create_media(self.other_user_tok, 3)
# filter media between `ts1` and `ts2`
request, channel = self.make_request(
"GET",
self.url + "?from_ts=%s&until_ts=%s" % (ts1, ts2),
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 3)
# filter media created at or before `ts2`
request, channel = self.make_request(
"GET", self.url + "?until_ts=%s" % (ts2,), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["media_count"], 6)
def test_search_term(self):
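"""
Testing filtering the list by `search_term` (matched against user ID and displayname)
"""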
self._create_users_with_media(20, 1)
# check that all users are returned when no filter is set
request, channel = self.make_request(
"GET", self.url, access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 20)
# filter for users 1 and 10-19 by `user_id`
request, channel = self.make_request(
"GET",
self.url + "?search_term=foo_user_1",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 11)
# filter for this user by `displayname`
request, channel = self.make_request(
"GET",
self.url + "?search_term=bar_user_10",
access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["users"][0]["displayname"], "bar_user_10")
self.assertEqual(channel.json_body["total"], 1)
# filter and get empty result
request, channel = self.make_request(
"GET", self.url + "?search_term=foobar", access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(channel.json_body["total"], 0)
def _create_users_with_media(self, number_users: int, media_per_user: int):
"""
Create a given number of users, each with a given number of media
Args:
number_users: Number of users to be created
media_per_user: Number of media to be created for each user
"""
for i in range(number_users):
self.register_user("foo_user_%s" % i, "pass", displayname="bar_user_%s" % i)
user_tok = self.login("foo_user_%s" % i, "pass")
self._create_media(user_tok, media_per_user)
def _create_media(self, user_token: str, number_media: int):
"""
Create a number of media for a specific user
Args:
user_token: Access token of the user
number_media: Number of media to be created for the user
"""
upload_resource = self.media_repo.children[b"upload"]
for i in range(number_media):
# file size is 67 bytes
image_data = unhexlify(
b"89504e470d0a1a0a0000000d4948445200000001000000010806"
b"0000001f15c4890000000a49444154789c63000100000500010d"
b"0a2db40000000049454e44ae426082"
)
# Upload the media via the media repository's upload resource
self.helper.upload_media(
upload_resource, image_data, tok=user_token, expect_code=200
)
def _check_fields(self, content: List[Dict[str, Any]]):
"""Checks that all attributes are present in content
Args:
content: List of entries to check
"""
for c in content:
self.assertIn("user_id", c)
self.assertIn("displayname", c)
self.assertIn("media_count", c)
self.assertIn("media_length", c)
def _order_test(
self, order_type: str, expected_user_list: List[str], dir: Optional[str] = None
):
"""Request the list of users in a certain order. Assert that order is what
we expect
Args:
order_type: The type of ordering to give the server
expected_user_list: The list of user_ids in the order we expect to get
back from the server
dir: The direction of ordering to give the server
"""
url = self.url + "?order_by=%s" % (order_type,)
if dir is not None and dir in ("b", "f"):
url += "&dir=%s" % (dir,)
request, channel = self.make_request(
"GET", url.encode("ascii"), access_token=self.admin_user_tok,
)
self.render(request)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], len(expected_user_list))
returned_order = [row["user_id"] for row in channel.json_body["users"]]
self.assertListEqual(expected_user_list, returned_order)
self._check_fields(channel.json_body["users"])
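For reference, a minimal sketch (illustrative values only) of the response shape the assertions above check: the `total`, the optional `next_token`, and a `users` list whose entries carry the four fields verified by `_check_fields`; the sizes assume the 67-byte upload produced by `_create_media`:

example_response = {
    "users": [
        {
            "user_id": "@foo_user_0:test",
            "displayname": "bar_user_0",
            "media_count": 2,
            "media_length": 134,  # two uploads of the 67-byte test image
        },
        # ... further users ...
    ],
    "next_token": 5,  # only present when more entries remain
    "total": 10,
}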

View File

@@ -24,7 +24,7 @@ from mock import Mock
import synapse.rest.admin
from synapse.api.constants import UserTypes
from synapse.api.errors import Codes, HttpResponseException, ResourceLimitError
-from synapse.rest.client.v1 import login, room
+from synapse.rest.client.v1 import login, profile, room
from synapse.rest.client.v2_alpha import sync
from tests import unittest
@@ -34,7 +34,10 @@ from tests.unittest import override_config
class UserRegisterTestCase(unittest.HomeserverTestCase):
-    servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]
+    servlets = [
+        synapse.rest.admin.register_servlets_for_client_rest_resource,
+        profile.register_servlets,
+    ]
def make_homeserver(self, reactor, clock):
@@ -325,6 +328,120 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
self.assertEqual(400, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Invalid user type", channel.json_body["error"])
def test_displayname(self):
"""
Test that the displayname of a new user is set
"""
# set no displayname
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob1\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{"nonce": nonce, "username": "bob1", "password": "abc123", "mac": want_mac}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob1:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob1:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob1", channel.json_body["displayname"])
# displayname is None
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob2\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob2",
"displayname": None,
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob2:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob2:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("bob2", channel.json_body["displayname"])
# displayname is empty
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob3\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob3",
"displayname": "",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob3:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob3:test/displayname")
self.render(request)
self.assertEqual(404, int(channel.result["code"]), msg=channel.result["body"])
# set displayname
request, channel = self.make_request("GET", self.url)
self.render(request)
nonce = channel.json_body["nonce"]
want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(nonce.encode("ascii") + b"\x00bob4\x00abc123\x00notadmin")
want_mac = want_mac.hexdigest()
body = json.dumps(
{
"nonce": nonce,
"username": "bob4",
"displayname": "Bob's Name",
"password": "abc123",
"mac": want_mac,
}
)
request, channel = self.make_request("POST", self.url, body.encode("utf8"))
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("@bob4:test", channel.json_body["user_id"])
request, channel = self.make_request("GET", "/profile/@bob4:test/displayname")
self.render(request)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual("Bob's Name", channel.json_body["displayname"])
@override_config(
{"limit_usage_by_mau": True, "max_mau_value": 2, "mau_trial_days": 0}
)

View File

@@ -546,18 +546,24 @@ class HomeserverTestCase(TestCase):
return result
-    def register_user(self, username, password, admin=False):
+    def register_user(
+        self,
+        username: str,
+        password: str,
+        admin: Optional[bool] = False,
+        displayname: Optional[str] = None,
+    ) -> str:
"""
Register a user. Requires the Admin API be registered.
Args:
-            username (bytes/unicode): The user part of the new user.
-            password (bytes/unicode): The password of the new user.
-            admin (bool): Whether the user should be created as an admin
-                or not.
+            username: The user part of the new user.
+            password: The password of the new user.
+            admin: Whether the user should be created as an admin or not.
+            displayname: The displayname of the new user.
        Returns:
-            The MXID of the new user (unicode).
+            The MXID of the new user.
"""
self.hs.config.registration_shared_secret = "shared"
@@ -581,6 +587,7 @@ class HomeserverTestCase(TestCase):
{
"nonce": nonce,
"username": username,
"displayname": displayname,
"password": password,
"admin": admin,
"mac": want_mac,