Merge branch 'develop' into anoa/blacklist_ip_ranges
* develop: (45 commits)
URL preview blacklisting fixes (#5155)
Revert 085ae346ac
Add a DUMMY stage to captcha-only registration flow
Make Prometheus snippet less confusing on the metrics collection doc (#4288)
Set syslog identifiers in systemd units (#5023)
Run Black on the tests again (#5170)
Add AllowEncodedSlashes to apache (#5068)
remove instructions for jessie installation (#5164)
Run `black` on per_destination_queue
Limit the number of EDUs in transactions to 100 as expected by receiver (#5138)
Fix bogus imports in tests (#5154)
add options to require an access_token to GET /profile and /publicRooms on CS API (#5083)
Do checks on aliases for incoming m.room.aliases events (#5128)
Remove the requirement to authenticate for /admin/server_version. (#5122)
Fix spelling in server notices admin API docs (#5142)
Fix sample config
0.99.3.2
include disco in deb build target list
changelog
Debian: we now need libpq-dev.
...
CHANGES.md (20 lines changed)
@@ -1,3 +1,23 @@
Synapse 0.99.3.2 (2019-05-03)
=============================

Internal Changes
----------------

- Ensure that we have `urllib3` <1.25, to resolve incompatibility with `requests`. ([\#5135](https://github.com/matrix-org/synapse/issues/5135))


Synapse 0.99.3.1 (2019-05-03)
=============================

Security update
---------------

This release includes two security fixes:

- Switch to using a cryptographically-secure random number generator for token strings, ensuring they cannot be predicted by an attacker. Thanks to @opnsec for identifying and responsibly disclosing this issue! ([\#5133](https://github.com/matrix-org/synapse/issues/5133))
- Blacklist 0.0.0.0 and :: by default for URL previews. Thanks to @opnsec for identifying and responsibly disclosing this issue too! ([\#5134](https://github.com/matrix-org/synapse/issues/5134))


Synapse 0.99.3 (2019-04-01)
===========================
INSTALL.md (21 lines changed)
@@ -257,9 +257,8 @@ https://github.com/spantaleev/matrix-docker-ansible-deploy
#### Matrix.org packages

Matrix.org provides Debian/Ubuntu packages of the latest stable version of
Synapse via https://packages.matrix.org/debian/. To use them:
Synapse via https://packages.matrix.org/debian/. They are available for Debian
9 (Stretch), Ubuntu 16.04 (Xenial), and later. To use them:

For Debian 9 (Stretch), Ubuntu 16.04 (Xenial), and later:

```
sudo apt install -y lsb-release wget apt-transport-https
@@ -270,19 +269,6 @@ sudo apt update
sudo apt install matrix-synapse-py3
```

For Debian 8 (Jessie):

```
sudo apt install -y lsb-release wget apt-transport-https
sudo wget -O /etc/apt/trusted.gpg.d/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=5586CCC0CBBBEFC7A25811ADF473DD4473365DE1] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
sudo tee /etc/apt/sources.list.d/matrix-org.list
sudo apt update
sudo apt install matrix-synapse-py3
```

The fingerprint of the repository signing key is AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058.

**Note**: if you followed a previous version of these instructions which
recommended using `apt-key add` to add an old key from
`https://matrix.org/packages/debian/`, you should note that this key has been
@@ -290,6 +276,9 @@ revoked. You should remove the old key with `sudo apt-key remove
C35EB17E1EAE708E6603A9B3AD0592FE47F0DF61`, and follow the above instructions to
update your configuration.

The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.

#### Downstream Debian/Ubuntu packages
changelog.d/5023.feature (new file, 3 lines)
@@ -0,0 +1,3 @@
Configure the example systemd units to have a log identifier of `matrix-synapse`
instead of the executable name, `python`.
Contributed by Christoph Müller.

changelog.d/5037.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Work around a bug in Twisted where attempting too many concurrent DNS requests could cause it to hang due to running out of file descriptors.

changelog.d/5083.feature (new file, 1 line)
@@ -0,0 +1 @@
Add a configuration option to require authentication on /publicRooms and /profile endpoints.

changelog.d/5104.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix the ratelimiting on third party invites.

changelog.d/5116.feature (new file, 1 line)
@@ -0,0 +1 @@
Add time-based account expiration.

changelog.d/5119.feature (new file, 1 line)
@@ -0,0 +1 @@
Move admin APIs to `/_synapse/admin/v1`. (The old paths are retained for backwards-compatibility, for now).

changelog.d/5120.misc (new file, 1 line)
@@ -0,0 +1 @@
Factor out an "assert_requester_is_admin" function.

changelog.d/5121.feature (new file, 1 line)
@@ -0,0 +1 @@
Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work.

changelog.d/5122.misc (new file, 1 line)
@@ -0,0 +1 @@
Remove the requirement to authenticate for /admin/server_version.

changelog.d/5124.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Add some missing limitations to room alias creation.

changelog.d/5128.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Add some missing limitations to room alias creation.

changelog.d/5138.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Limit the number of EDUs in transactions to 100 as expected by the receiver. Thanks to @superboum for this work!

changelog.d/5142.feature (new file, 1 line)
@@ -0,0 +1 @@
Implement an admin API for sending server notices. Many thanks to @krombel who provided a foundation for this work.

changelog.d/5154.bugfix (new file, 1 line)
@@ -0,0 +1 @@
Fix bogus imports in unit tests.

changelog.d/5155.misc (new file, 1 line)
@@ -0,0 +1 @@
Prevent an exception from being raised in an IResolutionReceiver and use a more generic error message for blacklisted URL previews.

changelog.d/5170.misc (new file, 1 line)
@@ -0,0 +1 @@
Run `black` on the tests directory.
@@ -12,6 +12,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/
|
||||
ExecReload=/bin/kill -HUP $MAINPID
|
||||
Restart=always
|
||||
RestartSec=3
|
||||
SyslogIdentifier=matrix-synapse-%i
|
||||
|
||||
[Install]
|
||||
WantedBy=matrix-synapse.service
|
||||
|
||||
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
|
||||
ExecReload=/bin/kill -HUP $MAINPID
|
||||
Restart=always
|
||||
RestartSec=3
|
||||
SyslogIdentifier=matrix-synapse
|
||||
|
||||
[Install]
|
||||
WantedBy=matrix.target
|
||||
|
||||
@@ -22,10 +22,10 @@ Group=nogroup
|
||||
|
||||
WorkingDirectory=/opt/synapse
|
||||
ExecStart=/opt/synapse/env/bin/python -m synapse.app.homeserver --config-path=/opt/synapse/homeserver.yaml
|
||||
SyslogIdentifier=matrix-synapse
|
||||
|
||||
# adjust the cache factor if necessary
|
||||
# Environment=SYNAPSE_CACHE_FACTOR=2.0
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
||||
|
||||
19
debian/changelog
vendored
19
debian/changelog
vendored
@@ -1,3 +1,22 @@
|
||||
matrix-synapse-py3 (0.99.3.2+nmu1) UNRELEASED; urgency=medium
|
||||
|
||||
[ Christoph Müller ]
|
||||
* Configure the systemd units to have a log identifier of `matrix-synapse`
|
||||
|
||||
-- Christoph Müller <iblzm@hotmail.de> Wed, 17 Apr 2019 16:17:32 +0200
|
||||
|
||||
matrix-synapse-py3 (0.99.3.2) stable; urgency=medium
|
||||
|
||||
* New synapse release 0.99.3.2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Fri, 03 May 2019 18:56:20 +0100
|
||||
|
||||
matrix-synapse-py3 (0.99.3.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 0.99.3.1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Fri, 03 May 2019 16:02:43 +0100
|
||||
|
||||
matrix-synapse-py3 (0.99.3) stable; urgency=medium
|
||||
|
||||
[ Richard van der Hoff ]
|
||||
|
||||
1
debian/matrix-synapse.service
vendored
1
debian/matrix-synapse.service
vendored
@@ -11,6 +11,7 @@ ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --confi
|
||||
ExecReload=/bin/kill -HUP $MAINPID
|
||||
Restart=always
|
||||
RestartSec=3
|
||||
SyslogIdentifier=matrix-synapse
|
||||
|
||||
[Install]
|
||||
WantedBy=multi-user.target
|
||||
|
||||
@@ -57,7 +57,8 @@ RUN apt-get update -qq -o Acquire::Languages=none \
|
||||
python3-pip \
|
||||
python3-setuptools \
|
||||
python3-venv \
|
||||
sqlite3
|
||||
sqlite3 \
|
||||
libpq-dev
|
||||
|
||||
COPY --from=builder /dh-virtualenv_1.1-1_all.deb /
|
||||
|
||||
|
||||
@@ -13,7 +13,7 @@ This API extends the validity of an account by as much time as configured in the
|
||||
|
||||
The API is::
|
||||
|
||||
POST /_matrix/client/unstable/account_validity/send_mail
|
||||
POST /_synapse/admin/v1/account_validity/validity
|
||||
|
||||
with the following body:
|
||||
|
||||
|
||||
@@ -8,7 +8,7 @@ being deleted.
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_matrix/client/r0/admin/delete_group/<group_id>
|
||||
POST /_synapse/admin/v1/delete_group/<group_id>
|
||||
```
|
||||
|
||||
including an `access_token` of a server admin.
|
||||
|
||||
@@ -4,7 +4,7 @@ This API gets a list of known media in a room.
|
||||
|
||||
The API is:
|
||||
```
|
||||
GET /_matrix/client/r0/admin/room/<room_id>/media
|
||||
GET /_synapse/admin/v1/room/<room_id>/media
|
||||
```
|
||||
including an `access_token` of a server admin.
|
||||
|
||||
|
||||
@@ -10,7 +10,7 @@ paginate further back in the room from the point being purged from.
|
||||
|
||||
The API is:
|
||||
|
||||
``POST /_matrix/client/r0/admin/purge_history/<room_id>[/<event_id>]``
|
||||
``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
|
||||
|
||||
including an ``access_token`` of a server admin.
|
||||
|
||||
@@ -49,7 +49,7 @@ Purge status query
|
||||
|
||||
It is possible to poll for updates on recent purges with a second API;
|
||||
|
||||
``GET /_matrix/client/r0/admin/purge_history_status/<purge_id>``
|
||||
``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
|
||||
|
||||
(again, with a suitable ``access_token``). This API returns a JSON body like
|
||||
the following:
|
||||
|
||||
@@ -6,7 +6,7 @@ media.
|
||||
|
||||
The API is::
|
||||
|
||||
POST /_matrix/client/r0/admin/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
|
||||
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
|
||||
|
||||
{}
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@ is not enabled.
|
||||
|
||||
To fetch the nonce, you need to request one from the API::
|
||||
|
||||
> GET /_matrix/client/r0/admin/register
|
||||
> GET /_synapse/admin/v1/register
|
||||
|
||||
< {"nonce": "thisisanonce"}
|
||||
|
||||
@@ -22,7 +22,7 @@ body containing the nonce, username, password, whether they are an admin
|
||||
|
||||
As an example::
|
||||
|
||||
> POST /_matrix/client/r0/admin/register
|
||||
> POST /_synapse/admin/v1/register
|
||||
> {
|
||||
"nonce": "thisisanonce",
|
||||
"username": "pepper_roni",
|
||||
|
||||
docs/admin_api/server_notices.md (new file, 48 lines)
@@ -0,0 +1,48 @@
# Server Notices

The API to send notices is as follows:

```
POST /_synapse/admin/v1/send_server_notice
```

or:

```
PUT /_synapse/admin/v1/send_server_notice/{txnId}
```

You will need to authenticate with an access token for an admin user.

When using the `PUT` form, retransmissions with the same transaction ID will be
ignored in the same way as with `PUT
/_matrix/client/r0/rooms/{roomId}/send/{eventType}/{txnId}`.

The request body should look something like the following:

```json
{
    "user_id": "@target_user:server_name",
    "content": {
        "msgtype": "m.text",
        "body": "This is my message"
    }
}
```

You can optionally include the following additional parameters:

* `type`: the type of event. Defaults to `m.room.message`.
* `state_key`: Setting this will result in a state event being sent.


Once the notice has been sent, the API will return the following response:

```json
{
    "event_id": "<event_id>"
}
```

Note that server notices must be enabled in `homeserver.yaml` before this API
can be used. See [server_notices.md](../server_notices.md) for more information.
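For illustration, the notice described above can be sent with a few lines of Python; the homeserver URL and admin access token below are placeholders, and the request body matches the example in the document:

```python
# Hedged sketch: send a server notice via the new admin API using `requests`.
# The base URL and ADMIN_TOKEN are placeholders for your deployment.
import requests

ADMIN_TOKEN = "MDAx..."  # access token of a server admin (placeholder)

resp = requests.post(
    "http://localhost:8008/_synapse/admin/v1/send_server_notice",
    headers={"Authorization": "Bearer %s" % ADMIN_TOKEN},
    json={
        "user_id": "@target_user:server_name",
        "content": {"msgtype": "m.text", "body": "This is my message"},
    },
)
print(resp.json())  # e.g. {"event_id": "<event_id>"}
```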
@@ -5,7 +5,7 @@ This API returns information about a specific user account.
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_matrix/client/r0/admin/whois/<user_id>
|
||||
GET /_synapse/admin/v1/whois/<user_id>
|
||||
|
||||
including an ``access_token`` of a server admin.
|
||||
|
||||
@@ -50,7 +50,7 @@ references to it).
|
||||
|
||||
The api is::
|
||||
|
||||
POST /_matrix/client/r0/admin/deactivate/<user_id>
|
||||
POST /_synapse/admin/v1/deactivate/<user_id>
|
||||
|
||||
with a body of:
|
||||
|
||||
@@ -73,7 +73,7 @@ Changes the password of another user.
|
||||
|
||||
The api is::
|
||||
|
||||
POST /_matrix/client/r0/admin/reset_password/<user_id>
|
||||
POST /_synapse/admin/v1/reset_password/<user_id>
|
||||
|
||||
with a body of:
|
||||
|
||||
|
||||
@@ -8,9 +8,7 @@ contains Synapse version information).
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_matrix/client/r0/admin/server_version
|
||||
|
||||
including an ``access_token`` of a server admin.
|
||||
GET /_synapse/admin/v1/server_version
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
|
||||
@@ -48,7 +48,10 @@ How to monitor Synapse metrics using Prometheus
|
||||
- job_name: "synapse"
|
||||
metrics_path: "/_synapse/metrics"
|
||||
static_configs:
|
||||
- targets: ["my.server.here:9092"]
|
||||
- targets: ["my.server.here:port"]
|
||||
|
||||
where ``my.server.here`` is the IP address of Synapse, and ``port`` is the listener port
|
||||
configured with the ``metrics`` resource.
|
||||
|
||||
If your prometheus is older than 1.5.2, you will need to replace
|
||||
``static_configs`` in the above with ``target_groups``.
|
||||
|
||||
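Before pointing Prometheus at the homeserver, it can be handy to confirm the metrics listener answers; a minimal sketch, assuming the ``metrics`` listener is on port 9092 of ``my.server.here``:

```python
# Minimal check that the metrics listener serves /_synapse/metrics.
# Host and port are placeholders for your `metrics` listener configuration.
import urllib.request

with urllib.request.urlopen("http://my.server.here:9092/_synapse/metrics") as resp:
    body = resp.read().decode("utf-8")

print(body.splitlines()[0])  # first line of the Prometheus text-format output
```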
@@ -69,6 +69,7 @@ Let's assume that we expect clients to connect to our server at
|
||||
SSLEngine on
|
||||
ServerName matrix.example.com;
|
||||
|
||||
AllowEncodedSlashes NoDecode
|
||||
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
|
||||
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
|
||||
</VirtualHost>
|
||||
@@ -77,6 +78,7 @@ Let's assume that we expect clients to connect to our server at
|
||||
SSLEngine on
|
||||
ServerName example.com;
|
||||
|
||||
AllowEncodedSlashes NoDecode
|
||||
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
|
||||
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
|
||||
</VirtualHost>
|
||||
|
||||
@@ -69,6 +69,20 @@ pid_file: DATADIR/homeserver.pid
|
||||
#
|
||||
#use_presence: false
|
||||
|
||||
# Whether to require authentication to retrieve profile data (avatars,
|
||||
# display names) of other users through the client API. Defaults to
|
||||
# 'false'. Note that profile data is also available via the federation
|
||||
# API, so this setting is of limited value if federation is enabled on
|
||||
# the server.
|
||||
#
|
||||
#require_auth_for_profile_requests: true
|
||||
|
||||
# If set to 'true', requires authentication to access the server's
|
||||
# public rooms directory through the client API, and forbids any other
|
||||
# homeserver to fetch it via federation. Defaults to 'false'.
|
||||
#
|
||||
#restrict_public_rooms_to_local_users: true
|
||||
|
||||
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
|
||||
#
|
||||
#gc_thresholds: [700, 10, 10]
|
||||
@@ -154,8 +168,8 @@ federation_ip_range_blacklist:
|
||||
#
|
||||
# Valid resource names are:
|
||||
#
|
||||
# client: the client-server API (/_matrix/client). Also implies 'media' and
|
||||
# 'static'.
|
||||
# client: the client-server API (/_matrix/client), and the synapse admin
|
||||
# API (/_synapse/admin). Also implies 'media' and 'static'.
|
||||
#
|
||||
# consent: user consent forms (/_matrix/consent). See
|
||||
# docs/consent_tracking.md.
|
||||
@@ -257,6 +271,11 @@ listeners:
|
||||
# Used by phonehome stats to group together related servers.
|
||||
#server_context: context
|
||||
|
||||
# Whether to require a user to be in the room to add an alias to it.
|
||||
# Defaults to 'true'.
|
||||
#
|
||||
#require_membership_for_aliases: false
|
||||
|
||||
|
||||
## TLS ##
|
||||
|
||||
@@ -561,11 +580,12 @@ uploads_path: "DATADIR/uploads"
|
||||
# height: 600
|
||||
# method: scale
|
||||
|
||||
# Is the preview URL API enabled? If enabled, you *must* specify
|
||||
# an explicit url_preview_ip_range_blacklist of IPs that the spider is
|
||||
# denied from accessing.
|
||||
# Is the preview URL API enabled?
|
||||
#
|
||||
#url_preview_enabled: false
|
||||
# 'false' by default: uncomment the following to enable it (and specify a
|
||||
# url_preview_ip_range_blacklist blacklist).
|
||||
#
|
||||
#url_preview_enabled: true
|
||||
|
||||
# List of IP address CIDR ranges that the URL preview spider is denied
|
||||
# from accessing. There are no defaults: you must explicitly
|
||||
@@ -575,6 +595,12 @@ uploads_path: "DATADIR/uploads"
|
||||
# synapse to issue arbitrary GET requests to your internal services,
|
||||
# causing serious security issues.
|
||||
#
|
||||
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
|
||||
# listed here, since they correspond to unroutable addresses.)
|
||||
#
|
||||
# This must be specified if url_preview_enabled is set. It is recommended that
|
||||
# you uncomment the following list as a starting point.
|
||||
#
|
||||
#url_preview_ip_range_blacklist:
|
||||
# - '127.0.0.0/8'
|
||||
# - '10.0.0.0/8'
|
||||
@@ -585,7 +611,7 @@ uploads_path: "DATADIR/uploads"
|
||||
# - '::1/128'
|
||||
# - 'fe80::/64'
|
||||
# - 'fc00::/7'
|
||||
#
|
||||
|
||||
# List of IP address CIDR ranges that the URL preview spider is allowed
|
||||
# to access even if they are specified in url_preview_ip_range_blacklist.
|
||||
# This is useful for specifying exceptions to wide-ranging blacklisted
|
||||
|
||||
@@ -1,5 +1,4 @@
|
||||
Server Notices
|
||||
==============
|
||||
# Server Notices
|
||||
|
||||
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
|
||||
channel whereby server administrators can send messages to users on the server.
|
||||
@@ -11,8 +10,7 @@ they may also find a use for features such as "Message of the day".
|
||||
This is a feature specific to Synapse, but it uses standard Matrix
|
||||
communication mechanisms, so should work with any Matrix client.
|
||||
|
||||
User experience
|
||||
---------------
|
||||
## User experience
|
||||
|
||||
When the user is first sent a server notice, they will get an invitation to a
|
||||
room (typically called 'Server Notices', though this is configurable in
|
||||
@@ -29,8 +27,7 @@ levels.
|
||||
Having joined the room, the user can leave the room if they want. Subsequent
|
||||
server notices will then cause a new room to be created.
|
||||
|
||||
Synapse configuration
|
||||
---------------------
|
||||
## Synapse configuration
|
||||
|
||||
Server notices come from a specific user id on the server. Server
|
||||
administrators are free to choose the user id - something like `server` is
|
||||
@@ -58,17 +55,7 @@ room which will be created.
|
||||
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
|
||||
displayname and avatar of the Server Notices user.
|
||||
|
||||
Sending notices
|
||||
---------------
|
||||
## Sending notices
|
||||
|
||||
As of the current version of synapse, there is no convenient interface for
|
||||
sending notices (other than the automated ones sent as part of consent
|
||||
tracking).
|
||||
|
||||
In the meantime, it is possible to test this feature using the manhole. Having
|
||||
gone into the manhole as described in [manhole.md](manhole.md), a notice can be
|
||||
sent with something like:
|
||||
|
||||
```
|
||||
>>> hs.get_server_notices_manager().send_notice('@user:server.com', {'msgtype':'m.text', 'body':'foo'})
|
||||
```
|
||||
To send server notices to users you can use the
|
||||
[admin_api](admin_api/server_notices.md).
|
||||
|
||||
@@ -24,6 +24,7 @@ DISTS = (
|
||||
"ubuntu:xenial",
|
||||
"ubuntu:bionic",
|
||||
"ubuntu:cosmic",
|
||||
"ubuntu:disco",
|
||||
)
|
||||
|
||||
DESC = '''\
|
||||
|
||||
@@ -27,4 +27,4 @@ try:
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__version__ = "0.99.3"
|
||||
__version__ = "0.99.3.2"
|
||||
|
||||
@@ -556,7 +556,7 @@ class Auth(object):
|
||||
""" Check if the given user is a local server admin.
|
||||
|
||||
Args:
|
||||
user (str): mxid of user to check
|
||||
user (UserID): user to check
|
||||
|
||||
Returns:
|
||||
bool: True if the user is an admin
|
||||
|
||||
@@ -20,6 +20,9 @@
|
||||
# the "depth" field on events is limited to 2**63 - 1
|
||||
MAX_DEPTH = 2**63 - 1
|
||||
|
||||
# the maximum length for a room alias is 255 characters
|
||||
MAX_ALIAS_LENGTH = 255
|
||||
|
||||
|
||||
class Membership(object):
|
||||
|
||||
|
||||
@@ -22,13 +22,14 @@ import traceback
|
||||
import psutil
|
||||
from daemonize import Daemonize
|
||||
|
||||
from twisted.internet import error, reactor
|
||||
from twisted.internet import defer, error, reactor
|
||||
from twisted.protocols.tls import TLSMemoryBIOFactory
|
||||
|
||||
import synapse
|
||||
from synapse.app import check_bind_error
|
||||
from synapse.crypto import context_factory
|
||||
from synapse.util import PreserveLoggingContext
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.rlimit import change_resource_limit
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
||||
@@ -99,6 +100,8 @@ def start_reactor(
|
||||
logger (logging.Logger): logger instance to pass to Daemonize
|
||||
"""
|
||||
|
||||
install_dns_limiter(reactor)
|
||||
|
||||
def run():
|
||||
# make sure that we run the reactor with the sentinel log context,
|
||||
# otherwise other PreserveLoggingContext instances will get confused
|
||||
@@ -312,3 +315,81 @@ def setup_sentry(hs):
|
||||
name = hs.config.worker_name if hs.config.worker_name else "master"
|
||||
scope.set_tag("worker_app", app)
|
||||
scope.set_tag("worker_name", name)
|
||||
|
||||
|
||||
def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
|
||||
"""Replaces the resolver with one that limits the number of in flight DNS
|
||||
requests.
|
||||
|
||||
This is to work around https://twistedmatrix.com/trac/ticket/9620, where we
|
||||
can run out of file descriptors and loop forever if we attempt to do too
|
||||
many DNS queries at once
|
||||
"""
|
||||
new_resolver = _LimitedHostnameResolver(
|
||||
reactor.nameResolver, max_dns_requests_in_flight,
|
||||
)
|
||||
|
||||
reactor.installNameResolver(new_resolver)
|
||||
|
||||
|
||||
class _LimitedHostnameResolver(object):
|
||||
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups.
|
||||
"""
|
||||
|
||||
def __init__(self, resolver, max_dns_requests_in_flight):
|
||||
self._resolver = resolver
|
||||
self._limiter = Linearizer(
|
||||
name="dns_client_limiter", max_count=max_dns_requests_in_flight,
|
||||
)
|
||||
|
||||
def resolveHostName(self, resolutionReceiver, hostName, portNumber=0,
|
||||
addressTypes=None, transportSemantics='TCP'):
|
||||
# Note this is happening deep within the reactor, so we don't need to
|
||||
# worry about log contexts.
|
||||
|
||||
# We need this function to return `resolutionReceiver` so we do all the
|
||||
# actual logic involving deferreds in a separate function.
|
||||
self._resolve(
|
||||
resolutionReceiver, hostName, portNumber,
|
||||
addressTypes, transportSemantics,
|
||||
)
|
||||
|
||||
return resolutionReceiver
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _resolve(self, resolutionReceiver, hostName, portNumber=0,
|
||||
addressTypes=None, transportSemantics='TCP'):
|
||||
|
||||
with (yield self._limiter.queue(())):
|
||||
# resolveHostName doesn't return a Deferred, so we need to hook into
|
||||
# the receiver interface to get told when resolution has finished.
|
||||
|
||||
deferred = defer.Deferred()
|
||||
receiver = _DeferredResolutionReceiver(resolutionReceiver, deferred)
|
||||
|
||||
self._resolver.resolveHostName(
|
||||
receiver, hostName, portNumber,
|
||||
addressTypes, transportSemantics,
|
||||
)
|
||||
|
||||
yield deferred
|
||||
|
||||
|
||||
class _DeferredResolutionReceiver(object):
|
||||
"""Wraps a IResolutionReceiver and simply resolves the given deferred when
|
||||
resolution is complete
|
||||
"""
|
||||
|
||||
def __init__(self, receiver, deferred):
|
||||
self._receiver = receiver
|
||||
self._deferred = deferred
|
||||
|
||||
def resolutionBegan(self, resolutionInProgress):
|
||||
self._receiver.resolutionBegan(resolutionInProgress)
|
||||
|
||||
def addressResolved(self, address):
|
||||
self._receiver.addressResolved(address)
|
||||
|
||||
def resolutionComplete(self):
|
||||
self._deferred.callback(())
|
||||
self._receiver.resolutionComplete()
|
||||
|
||||
@@ -62,6 +62,7 @@ from synapse.python_dependencies import check_requirements
|
||||
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
|
||||
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
|
||||
from synapse.rest import ClientRestResource
|
||||
from synapse.rest.admin import AdminRestResource
|
||||
from synapse.rest.key.v2 import KeyApiV2Resource
|
||||
from synapse.rest.media.v0.content_repository import ContentRepoResource
|
||||
from synapse.rest.well_known import WellKnownResource
|
||||
@@ -180,6 +181,7 @@ class SynapseHomeServer(HomeServer):
|
||||
"/_matrix/client/v2_alpha": client_resource,
|
||||
"/_matrix/client/versions": client_resource,
|
||||
"/.well-known/matrix/client": WellKnownResource(self),
|
||||
"/_synapse/admin": AdminRestResource(self),
|
||||
})
|
||||
|
||||
if self.get_config().saml2_enabled:
|
||||
|
||||
@@ -186,17 +186,21 @@ class ContentRepositoryConfig(Config):
|
||||
except ImportError:
|
||||
raise ConfigError(MISSING_NETADDR)
|
||||
|
||||
if "url_preview_ip_range_blacklist" in config:
|
||||
self.url_preview_ip_range_blacklist = IPSet(
|
||||
config["url_preview_ip_range_blacklist"]
|
||||
)
|
||||
else:
|
||||
if "url_preview_ip_range_blacklist" not in config:
|
||||
raise ConfigError(
|
||||
"For security, you must specify an explicit target IP address "
|
||||
"blacklist in url_preview_ip_range_blacklist for url previewing "
|
||||
"to work"
|
||||
)
|
||||
|
||||
self.url_preview_ip_range_blacklist = IPSet(
|
||||
config["url_preview_ip_range_blacklist"]
|
||||
)
|
||||
|
||||
# we always blacklist '0.0.0.0' and '::', which are supposed to be
|
||||
# unroutable addresses.
|
||||
self.url_preview_ip_range_blacklist.update(['0.0.0.0', '::'])
|
||||
|
||||
self.url_preview_ip_range_whitelist = IPSet(
|
||||
config.get("url_preview_ip_range_whitelist", ())
|
||||
)
|
||||
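To make the behaviour above concrete, here is a small sketch of how the resulting blacklist responds to lookups; the ranges are example values, and `netaddr` is the library the config code already imports:

```python
# Illustrative only: mirrors how the blacklist built above treats addresses.
from netaddr import IPAddress, IPSet

blacklist = IPSet(["127.0.0.0/8", "10.0.0.0/8"])   # example configured ranges
blacklist.update(["0.0.0.0", "::"])                # always added, as in the code above

print(IPAddress("10.1.2.3") in blacklist)       # True  - preview would be refused
print(IPAddress("0.0.0.0") in blacklist)        # True  - unroutable, always blocked
print(IPAddress("93.184.216.34") in blacklist)  # False - not in any blacklisted range
```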
@@ -260,11 +264,12 @@ class ContentRepositoryConfig(Config):
|
||||
#thumbnail_sizes:
|
||||
%(formatted_thumbnail_sizes)s
|
||||
|
||||
# Is the preview URL API enabled? If enabled, you *must* specify
|
||||
# an explicit url_preview_ip_range_blacklist of IPs that the spider is
|
||||
# denied from accessing.
|
||||
# Is the preview URL API enabled?
|
||||
#
|
||||
#url_preview_enabled: false
|
||||
# 'false' by default: uncomment the following to enable it (and specify a
|
||||
# url_preview_ip_range_blacklist blacklist).
|
||||
#
|
||||
#url_preview_enabled: true
|
||||
|
||||
# List of IP address CIDR ranges that the URL preview spider is denied
|
||||
# from accessing. There are no defaults: you must explicitly
|
||||
@@ -274,6 +279,12 @@ class ContentRepositoryConfig(Config):
|
||||
# synapse to issue arbitrary GET requests to your internal services,
|
||||
# causing serious security issues.
|
||||
#
|
||||
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
|
||||
# listed here, since they correspond to unroutable addresses.)
|
||||
#
|
||||
# This must be specified if url_preview_enabled is set. It is recommended that
|
||||
# you uncomment the following list as a starting point.
|
||||
#
|
||||
#url_preview_ip_range_blacklist:
|
||||
# - '127.0.0.0/8'
|
||||
# - '10.0.0.0/8'
|
||||
@@ -284,7 +295,7 @@ class ContentRepositoryConfig(Config):
|
||||
# - '::1/128'
|
||||
# - 'fe80::/64'
|
||||
# - 'fc00::/7'
|
||||
#
|
||||
|
||||
# List of IP address CIDR ranges that the URL preview spider is allowed
|
||||
# to access even if they are specified in url_preview_ip_range_blacklist.
|
||||
# This is useful for specifying exceptions to wide-ranging blacklisted
|
||||
|
||||
@@ -74,6 +74,19 @@ class ServerConfig(Config):
|
||||
# master, potentially causing inconsistency.
|
||||
self.enable_media_repo = config.get("enable_media_repo", True)
|
||||
|
||||
# Whether to require authentication to retrieve profile data (avatars,
|
||||
# display names) of other users through the client API.
|
||||
self.require_auth_for_profile_requests = config.get(
|
||||
"require_auth_for_profile_requests", False,
|
||||
)
|
||||
|
||||
# If set to 'True', requires authentication to access the server's
|
||||
# public rooms directory through the client API, and forbids any other
|
||||
# homeserver to fetch it via federation.
|
||||
self.restrict_public_rooms_to_local_users = config.get(
|
||||
"restrict_public_rooms_to_local_users", False,
|
||||
)
|
||||
|
||||
# whether to enable search. If disabled, new entries will not be inserted
|
||||
# into the search tables and they will not be indexed. Users will receive
|
||||
# errors when attempting to search for messages.
|
||||
@@ -154,6 +167,12 @@ class ServerConfig(Config):
|
||||
# sending out any replication updates.
|
||||
self.replication_torture_level = config.get("replication_torture_level")
|
||||
|
||||
# Whether to require a user to be in the room to add an alias to it.
|
||||
# Defaults to True.
|
||||
self.require_membership_for_aliases = config.get(
|
||||
"require_membership_for_aliases", True,
|
||||
)
|
||||
|
||||
self.listeners = []
|
||||
for listener in config.get("listeners", []):
|
||||
if not isinstance(listener.get("port", None), int):
|
||||
@@ -341,6 +360,20 @@ class ServerConfig(Config):
|
||||
#
|
||||
#use_presence: false
|
||||
|
||||
# Whether to require authentication to retrieve profile data (avatars,
|
||||
# display names) of other users through the client API. Defaults to
|
||||
# 'false'. Note that profile data is also available via the federation
|
||||
# API, so this setting is of limited value if federation is enabled on
|
||||
# the server.
|
||||
#
|
||||
#require_auth_for_profile_requests: true
|
||||
|
||||
# If set to 'true', requires authentication to access the server's
|
||||
# public rooms directory through the client API, and forbids any other
|
||||
# homeserver to fetch it via federation. Defaults to 'false'.
|
||||
#
|
||||
#restrict_public_rooms_to_local_users: true
|
||||
|
||||
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
|
||||
#
|
||||
#gc_thresholds: [700, 10, 10]
|
||||
@@ -426,8 +459,8 @@ class ServerConfig(Config):
|
||||
#
|
||||
# Valid resource names are:
|
||||
#
|
||||
# client: the client-server API (/_matrix/client). Also implies 'media' and
|
||||
# 'static'.
|
||||
# client: the client-server API (/_matrix/client), and the synapse admin
|
||||
# API (/_synapse/admin). Also implies 'media' and 'static'.
|
||||
#
|
||||
# consent: user consent forms (/_matrix/consent). See
|
||||
# docs/consent_tracking.md.
|
||||
@@ -528,6 +561,11 @@ class ServerConfig(Config):
|
||||
|
||||
# Used by phonehome stats to group together related servers.
|
||||
#server_context: context
|
||||
|
||||
# Whether to require a user to be in the room to add an alias to it.
|
||||
# Defaults to 'true'.
|
||||
#
|
||||
#require_membership_for_aliases: false
|
||||
""" % locals()
|
||||
|
||||
def read_arguments(self, args):
|
||||
|
||||
@@ -187,7 +187,9 @@ class EventContext(object):
|
||||
|
||||
Returns:
|
||||
Deferred[dict[(str, str), str]|None]: Returns None if state_group
|
||||
is None, which happens when the associated event is an outlier.
|
||||
is None, which happens when the associated event is an outlier.
|
||||
Maps a (type, state_key) to the event ID of the state event matching
|
||||
this tuple.
|
||||
"""
|
||||
|
||||
if not self._fetching_state_deferred:
|
||||
@@ -205,7 +207,9 @@ class EventContext(object):
|
||||
|
||||
Returns:
|
||||
Deferred[dict[(str, str), str]|None]: Returns None if state_group
|
||||
is None, which happens when the associated event is an outlier.
|
||||
is None, which happens when the associated event is an outlier.
|
||||
Maps a (type, state_key) to the event ID of the state event matching
|
||||
this tuple.
|
||||
"""
|
||||
|
||||
if not self._fetching_state_deferred:
|
||||
|
||||
@@ -15,8 +15,8 @@
|
||||
|
||||
from six import string_types
|
||||
|
||||
from synapse.api.constants import EventTypes, Membership
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.api.room_versions import EventFormatVersions
|
||||
from synapse.types import EventID, RoomID, UserID
|
||||
|
||||
@@ -56,6 +56,17 @@ class EventValidator(object):
|
||||
if not isinstance(getattr(event, s), string_types):
|
||||
raise SynapseError(400, "'%s' not a string type" % (s,))
|
||||
|
||||
if event.type == EventTypes.Aliases:
|
||||
if "aliases" in event.content:
|
||||
for alias in event.content["aliases"]:
|
||||
if len(alias) > MAX_ALIAS_LENGTH:
|
||||
raise SynapseError(
|
||||
400,
|
||||
("Can't create aliases longer than"
|
||||
" %d characters" % (MAX_ALIAS_LENGTH,)),
|
||||
Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
def validate_builder(self, event):
|
||||
"""Validates that the builder/event has roughly the right format. Only
|
||||
checks values that we expect a proto event to have, rather than all the
|
||||
|
||||
@@ -33,12 +33,14 @@ from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.storage import UserPresenceState
|
||||
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
|
||||
|
||||
# This is defined in the Matrix spec and enforced by the receiver.
|
||||
MAX_EDUS_PER_TRANSACTION = 100
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
sent_edus_counter = Counter(
|
||||
"synapse_federation_client_sent_edus",
|
||||
"Total number of EDUs successfully sent",
|
||||
"synapse_federation_client_sent_edus", "Total number of EDUs successfully sent"
|
||||
)
|
||||
|
||||
sent_edus_by_type = Counter(
|
||||
@@ -58,6 +60,7 @@ class PerDestinationQueue(object):
|
||||
destination (str): the server_name of the destination that we are managing
|
||||
transmission for.
|
||||
"""
|
||||
|
||||
def __init__(self, hs, transaction_manager, destination):
|
||||
self._server_name = hs.hostname
|
||||
self._clock = hs.get_clock()
|
||||
@@ -68,17 +71,17 @@ class PerDestinationQueue(object):
|
||||
self.transmission_loop_running = False
|
||||
|
||||
# a list of tuples of (pending pdu, order)
|
||||
self._pending_pdus = [] # type: list[tuple[EventBase, int]]
|
||||
self._pending_edus = [] # type: list[Edu]
|
||||
self._pending_pdus = [] # type: list[tuple[EventBase, int]]
|
||||
self._pending_edus = [] # type: list[Edu]
|
||||
|
||||
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
|
||||
# based on their key (e.g. typing events by room_id)
|
||||
# Map of (edu_type, key) -> Edu
|
||||
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
|
||||
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
|
||||
|
||||
# Map of user_id -> UserPresenceState of pending presence to be sent to this
|
||||
# destination
|
||||
self._pending_presence = {} # type: dict[str, UserPresenceState]
|
||||
self._pending_presence = {} # type: dict[str, UserPresenceState]
|
||||
|
||||
# room_id -> receipt_type -> user_id -> receipt_dict
|
||||
self._pending_rrs = {}
|
||||
@@ -120,9 +123,7 @@ class PerDestinationQueue(object):
|
||||
Args:
|
||||
states (iterable[UserPresenceState]): presence to send
|
||||
"""
|
||||
self._pending_presence.update({
|
||||
state.user_id: state for state in states
|
||||
})
|
||||
self._pending_presence.update({state.user_id: state for state in states})
|
||||
self.attempt_new_transaction()
|
||||
|
||||
def queue_read_receipt(self, receipt):
|
||||
@@ -132,14 +133,9 @@ class PerDestinationQueue(object):
|
||||
Args:
|
||||
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
|
||||
"""
|
||||
self._pending_rrs.setdefault(
|
||||
receipt.room_id, {},
|
||||
).setdefault(
|
||||
self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
|
||||
receipt.receipt_type, {}
|
||||
)[receipt.user_id] = {
|
||||
"event_ids": receipt.event_ids,
|
||||
"data": receipt.data,
|
||||
}
|
||||
)[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
|
||||
|
||||
def flush_read_receipts_for_room(self, room_id):
|
||||
# if we don't have any read-receipts for this room, it may be that we've already
|
||||
@@ -170,10 +166,7 @@ class PerDestinationQueue(object):
|
||||
# request at which point pending_pdus just keeps growing.
|
||||
# we need application-layer timeouts of some flavour of these
|
||||
# requests
|
||||
logger.debug(
|
||||
"TX [%s] Transaction already in progress",
|
||||
self._destination
|
||||
)
|
||||
logger.debug("TX [%s] Transaction already in progress", self._destination)
|
||||
return
|
||||
|
||||
logger.debug("TX [%s] Starting transaction loop", self._destination)
|
||||
@@ -197,7 +190,8 @@ class PerDestinationQueue(object):
|
||||
pending_pdus = []
|
||||
while True:
|
||||
device_message_edus, device_stream_id, dev_list_id = (
|
||||
yield self._get_new_device_messages()
|
||||
# We have to keep 2 free slots for presence and rr_edus
|
||||
yield self._get_new_device_messages(MAX_EDUS_PER_TRANSACTION - 2)
|
||||
)
|
||||
|
||||
# BEGIN CRITICAL SECTION
|
||||
@@ -216,19 +210,9 @@ class PerDestinationQueue(object):
|
||||
|
||||
pending_edus = []
|
||||
|
||||
pending_edus.extend(self._get_rr_edus(force_flush=False))
|
||||
|
||||
# We can only include at most 100 EDUs per transactions
|
||||
pending_edus.extend(self._pop_pending_edus(100 - len(pending_edus)))
|
||||
|
||||
pending_edus.extend(
|
||||
self._pending_edus_keyed.values()
|
||||
)
|
||||
|
||||
self._pending_edus_keyed = {}
|
||||
|
||||
pending_edus.extend(device_message_edus)
|
||||
|
||||
# rr_edus and pending_presence take at most one slot each
|
||||
pending_edus.extend(self._get_rr_edus(force_flush=False))
|
||||
pending_presence = self._pending_presence
|
||||
self._pending_presence = {}
|
||||
if pending_presence:
|
||||
@@ -248,9 +232,23 @@ class PerDestinationQueue(object):
|
||||
)
|
||||
)
|
||||
|
||||
pending_edus.extend(device_message_edus)
|
||||
pending_edus.extend(
|
||||
self._pop_pending_edus(MAX_EDUS_PER_TRANSACTION - len(pending_edus))
|
||||
)
|
||||
while (
|
||||
len(pending_edus) < MAX_EDUS_PER_TRANSACTION
|
||||
and self._pending_edus_keyed
|
||||
):
|
||||
_, val = self._pending_edus_keyed.popitem()
|
||||
pending_edus.append(val)
|
||||
|
||||
if pending_pdus:
|
||||
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
|
||||
self._destination, len(pending_pdus))
|
||||
logger.debug(
|
||||
"TX [%s] len(pending_pdus_by_dest[dest]) = %d",
|
||||
self._destination,
|
||||
len(pending_pdus),
|
||||
)
|
||||
|
||||
if not pending_pdus and not pending_edus:
|
||||
logger.debug("TX [%s] Nothing to send", self._destination)
|
||||
@@ -259,7 +257,7 @@ class PerDestinationQueue(object):
|
||||
|
||||
# if we've decided to send a transaction anyway, and we have room, we
|
||||
# may as well send any pending RRs
|
||||
if len(pending_edus) < 100:
|
||||
if len(pending_edus) < MAX_EDUS_PER_TRANSACTION:
|
||||
pending_edus.extend(self._get_rr_edus(force_flush=True))
|
||||
|
||||
# END CRITICAL SECTION
|
||||
@@ -303,22 +301,25 @@ class PerDestinationQueue(object):
|
||||
except HttpResponseException as e:
|
||||
logger.warning(
|
||||
"TX [%s] Received %d response to transaction: %s",
|
||||
self._destination, e.code, e,
|
||||
self._destination,
|
||||
e.code,
|
||||
e,
|
||||
)
|
||||
except RequestSendFailed as e:
|
||||
logger.warning("TX [%s] Failed to send transaction: %s", self._destination, e)
|
||||
logger.warning(
|
||||
"TX [%s] Failed to send transaction: %s", self._destination, e
|
||||
)
|
||||
|
||||
for p, _ in pending_pdus:
|
||||
logger.info("Failed to send event %s to %s", p.event_id,
|
||||
self._destination)
|
||||
logger.info(
|
||||
"Failed to send event %s to %s", p.event_id, self._destination
|
||||
)
|
||||
except Exception:
|
||||
logger.exception(
|
||||
"TX [%s] Failed to send transaction",
|
||||
self._destination,
|
||||
)
|
||||
logger.exception("TX [%s] Failed to send transaction", self._destination)
|
||||
for p, _ in pending_pdus:
|
||||
logger.info("Failed to send event %s to %s", p.event_id,
|
||||
self._destination)
|
||||
logger.info(
|
||||
"Failed to send event %s to %s", p.event_id, self._destination
|
||||
)
|
||||
finally:
|
||||
# We want to be *very* sure we clear this after we stop processing
|
||||
self.transmission_loop_running = False
|
||||
@@ -346,27 +347,13 @@ class PerDestinationQueue(object):
|
||||
return pending_edus
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _get_new_device_messages(self):
|
||||
last_device_stream_id = self._last_device_stream_id
|
||||
to_device_stream_id = self._store.get_to_device_stream_token()
|
||||
contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
|
||||
self._destination, last_device_stream_id, to_device_stream_id
|
||||
)
|
||||
edus = [
|
||||
Edu(
|
||||
origin=self._server_name,
|
||||
destination=self._destination,
|
||||
edu_type="m.direct_to_device",
|
||||
content=content,
|
||||
)
|
||||
for content in contents
|
||||
]
|
||||
|
||||
def _get_new_device_messages(self, limit):
|
||||
last_device_list = self._last_device_list_stream_id
|
||||
# Will return at most 20 entries
|
||||
now_stream_id, results = yield self._store.get_devices_by_remote(
|
||||
self._destination, last_device_list
|
||||
)
|
||||
edus.extend(
|
||||
edus = [
|
||||
Edu(
|
||||
origin=self._server_name,
|
||||
destination=self._destination,
|
||||
@@ -374,5 +361,26 @@ class PerDestinationQueue(object):
|
||||
content=content,
|
||||
)
|
||||
for content in results
|
||||
]
|
||||
|
||||
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
|
||||
|
||||
last_device_stream_id = self._last_device_stream_id
|
||||
to_device_stream_id = self._store.get_to_device_stream_token()
|
||||
contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
|
||||
self._destination,
|
||||
last_device_stream_id,
|
||||
to_device_stream_id,
|
||||
limit - len(edus),
|
||||
)
|
||||
edus.extend(
|
||||
Edu(
|
||||
origin=self._server_name,
|
||||
destination=self._destination,
|
||||
edu_type="m.direct_to_device",
|
||||
content=content,
|
||||
)
|
||||
for content in contents
|
||||
)
|
||||
|
||||
defer.returnValue((edus, stream_id, now_stream_id))
|
||||
|
||||
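The net effect of the changes above is a fixed per-transaction EDU budget; the sketch below restates that budgeting in simplified form (function and variable names are illustrative, not the real API):

```python
# Simplified, illustrative restatement of the EDU budgeting in the diff above.
MAX_EDUS_PER_TRANSACTION = 100  # defined in the Matrix spec, enforced by the receiver

def assemble_edus(device_edus, unkeyed_edus, keyed_edus, rr_edu=None, presence_edu=None):
    # Device messages were fetched with a limit of MAX_EDUS_PER_TRANSACTION - 2,
    # keeping two slots free for read receipts and presence.
    assert len(device_edus) <= MAX_EDUS_PER_TRANSACTION - 2

    pending = []
    if rr_edu is not None:          # read receipts take at most one slot
        pending.append(rr_edu)
    if presence_edu is not None:    # presence takes at most one slot
        pending.append(presence_edu)

    pending.extend(device_edus)
    # Unkeyed EDUs fill whatever room is left...
    pending.extend(unkeyed_edus[: MAX_EDUS_PER_TRANSACTION - len(pending)])
    # ...and keyed EDUs are popped until the transaction is full.
    while len(pending) < MAX_EDUS_PER_TRANSACTION and keyed_edus:
        pending.append(keyed_edus.pop())

    return pending
```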
@@ -716,8 +716,17 @@ class PublicRoomList(BaseFederationServlet):
|
||||
|
||||
PATH = "/publicRooms"
|
||||
|
||||
def __init__(self, handler, authenticator, ratelimiter, server_name, deny_access):
|
||||
super(PublicRoomList, self).__init__(
|
||||
handler, authenticator, ratelimiter, server_name,
|
||||
)
|
||||
self.deny_access = deny_access
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, origin, content, query):
|
||||
if self.deny_access:
|
||||
raise FederationDeniedError(origin)
|
||||
|
||||
limit = parse_integer_from_args(query, "limit", 0)
|
||||
since_token = parse_string_from_args(query, "since", None)
|
||||
include_all_networks = parse_boolean_from_args(
|
||||
@@ -1417,6 +1426,7 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N
|
||||
authenticator=authenticator,
|
||||
ratelimiter=ratelimiter,
|
||||
server_name=hs.hostname,
|
||||
deny_access=hs.config.restrict_public_rooms_to_local_users,
|
||||
).register(resource)
|
||||
|
||||
if "group_server" in servlet_groups:
|
||||
|
||||
@@ -19,7 +19,7 @@ import string
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import EventTypes
|
||||
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes
|
||||
from synapse.api.errors import (
|
||||
AuthError,
|
||||
CodeMessageException,
|
||||
@@ -43,8 +43,10 @@ class DirectoryHandler(BaseHandler):
|
||||
self.state = hs.get_state_handler()
|
||||
self.appservice_handler = hs.get_application_service_handler()
|
||||
self.event_creation_handler = hs.get_event_creation_handler()
|
||||
self.store = hs.get_datastore()
|
||||
self.config = hs.config
|
||||
self.enable_room_list_search = hs.config.enable_room_list_search
|
||||
self.require_membership = hs.config.require_membership_for_aliases
|
||||
|
||||
self.federation = hs.get_federation_client()
|
||||
hs.get_federation_registry().register_query_handler(
|
||||
@@ -83,7 +85,7 @@ class DirectoryHandler(BaseHandler):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def create_association(self, requester, room_alias, room_id, servers=None,
|
||||
send_event=True):
|
||||
send_event=True, check_membership=True):
|
||||
"""Attempt to create a new alias
|
||||
|
||||
Args:
|
||||
@@ -93,6 +95,8 @@ class DirectoryHandler(BaseHandler):
|
||||
servers (list[str]|None): List of servers that other servers
|
||||
should try and join via
|
||||
send_event (bool): Whether to send an updated m.room.aliases event
|
||||
check_membership (bool): Whether to check if the user is in the room
|
||||
before the alias can be set (if the server's config requires it).
|
||||
|
||||
Returns:
|
||||
Deferred
|
||||
@@ -100,6 +104,13 @@ class DirectoryHandler(BaseHandler):
|
||||
|
||||
user_id = requester.user.to_string()
|
||||
|
||||
if len(room_alias.to_string()) > MAX_ALIAS_LENGTH:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Can't create aliases longer than %s characters" % MAX_ALIAS_LENGTH,
|
||||
Codes.INVALID_PARAM,
|
||||
)
|
||||
|
||||
service = requester.app_service
|
||||
if service:
|
||||
if not service.is_interested_in_alias(room_alias.to_string()):
|
||||
@@ -108,6 +119,14 @@ class DirectoryHandler(BaseHandler):
|
||||
" this kind of alias.", errcode=Codes.EXCLUSIVE
|
||||
)
|
||||
else:
|
||||
if self.require_membership and check_membership:
|
||||
rooms_for_user = yield self.store.get_rooms_for_user(user_id)
|
||||
if room_id not in rooms_for_user:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You must be in the room to create an alias for it",
|
||||
)
|
||||
|
||||
if not self.spam_checker.user_may_create_room_alias(user_id, room_alias):
|
||||
raise AuthError(
|
||||
403, "This user is not permitted to create this alias",
|
||||
|
||||
@@ -228,6 +228,7 @@ class EventCreationHandler(object):
|
||||
self.ratelimiter = hs.get_ratelimiter()
|
||||
self.notifier = hs.get_notifier()
|
||||
self.config = hs.config
|
||||
self.require_membership_for_aliases = hs.config.require_membership_for_aliases
|
||||
|
||||
self.send_event_to_master = ReplicationSendEventRestServlet.make_client(hs)
|
||||
|
||||
@@ -336,6 +337,35 @@ class EventCreationHandler(object):
|
||||
prev_events_and_hashes=prev_events_and_hashes,
|
||||
)
|
||||
|
||||
# In an ideal world we wouldn't need the second part of this condition. However,
|
||||
# this behaviour isn't spec'd yet, meaning we should be able to deactivate this
|
||||
# behaviour. Another reason is that this code is also evaluated each time a new
|
||||
# m.room.aliases event is created, which includes hitting a /directory route.
|
||||
# Therefore not including this condition here would render the similar one in
|
||||
# synapse.handlers.directory pointless.
|
||||
if builder.type == EventTypes.Aliases and self.require_membership_for_aliases:
|
||||
# Ideally we'd do the membership check in event_auth.check(), which
|
||||
# describes a spec'd algorithm for authenticating events received over
|
||||
# federation as well as those created locally. As of room v3, aliases events
|
||||
# can be created by users that are not in the room, therefore we have to
|
||||
# tolerate them in event_auth.check().
|
||||
prev_state_ids = yield context.get_prev_state_ids(self.store)
|
||||
prev_event_id = prev_state_ids.get((EventTypes.Member, event.sender))
|
||||
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
|
||||
if not prev_event or prev_event.membership != Membership.JOIN:
|
||||
logger.warning(
|
||||
("Attempt to send `m.room.aliases` in room %s by user %s but"
|
||||
" membership is %s"),
|
||||
event.room_id,
|
||||
event.sender,
|
||||
prev_event.membership if prev_event else None,
|
||||
)
|
||||
|
||||
raise AuthError(
|
||||
403,
|
||||
"You must be in the room to create an alias for it",
|
||||
)
|
||||
|
||||
self.validator.validate_new(event)
|
||||
|
||||
defer.returnValue((event, context))
|
||||
|
||||
@@ -53,6 +53,7 @@ class BaseProfileHandler(BaseHandler):
|
||||
@defer.inlineCallbacks
|
||||
def get_profile(self, user_id):
|
||||
target_user = UserID.from_string(user_id)
|
||||
|
||||
if self.hs.is_mine(target_user):
|
||||
try:
|
||||
displayname = yield self.store.get_profile_displayname(
|
||||
@@ -283,6 +284,48 @@ class BaseProfileHandler(BaseHandler):
|
||||
room_id, str(e)
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def check_profile_query_allowed(self, target_user, requester=None):
|
||||
"""Checks whether a profile query is allowed. If the
|
||||
'require_auth_for_profile_requests' config flag is set to True and a
|
||||
'requester' is provided, the query is only allowed if the two users
|
||||
share a room.
|
||||
|
||||
Args:
|
||||
target_user (UserID): The owner of the queried profile.
|
||||
requester (None|UserID): The user querying for the profile.
|
||||
|
||||
Raises:
|
||||
SynapseError(403): The two users share no room, or one user couldn't
|
||||
be found to be in any room the server is in, and therefore the query
|
||||
is denied.
|
||||
"""
|
||||
# Implementation of MSC1301: don't allow looking up profiles if the
|
||||
# requester isn't in the same room as the target. We expect requester to
|
||||
# be None when this function is called outside of a profile query, e.g.
|
||||
# when building a membership event. In this case, we must allow the
|
||||
# lookup.
|
||||
if not self.hs.config.require_auth_for_profile_requests or not requester:
|
||||
return
|
||||
|
||||
try:
|
||||
requester_rooms = yield self.store.get_rooms_for_user(
|
||||
requester.to_string()
|
||||
)
|
||||
target_user_rooms = yield self.store.get_rooms_for_user(
|
||||
target_user.to_string(),
|
||||
)
|
||||
|
||||
# Check if the room lists have no elements in common.
|
||||
if requester_rooms.isdisjoint(target_user_rooms):
|
||||
raise SynapseError(403, "Profile isn't available", Codes.FORBIDDEN)
|
||||
except StoreError as e:
|
||||
if e.code == 404:
|
||||
# This likely means that one of the users doesn't exist,
|
||||
# so we act as if we couldn't find the profile.
|
||||
raise SynapseError(403, "Profile isn't available", Codes.FORBIDDEN)
|
||||
raise
|
||||
|
||||
|
||||
class MasterProfileHandler(BaseProfileHandler):
|
||||
PROFILE_UPDATE_MS = 60 * 1000
|
||||
|
||||
@@ -402,7 +402,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
yield directory_handler.create_association(
|
||||
requester, RoomAlias.from_string(alias),
|
||||
new_room_id, servers=(self.hs.hostname, ),
|
||||
send_event=False,
|
||||
send_event=False, check_membership=False,
|
||||
)
|
||||
logger.info("Moved alias %s to new room", alias)
|
||||
except SynapseError as e:
|
||||
@@ -538,6 +538,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
room_alias=room_alias,
|
||||
servers=[self.hs.hostname],
|
||||
send_event=False,
|
||||
check_membership=False,
|
||||
)
|
||||
|
||||
preset_config = config.get(
|
||||
|
||||
@@ -33,6 +33,8 @@ from synapse.types import RoomID, UserID
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.distributor import user_joined_room, user_left_room
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
id_server_scheme = "https://"
|
||||
@@ -72,6 +74,11 @@ class RoomMemberHandler(object):
|
||||
self._server_notices_mxid = self.config.server_notices_mxid
|
||||
self._enable_lookup = hs.config.enable_3pid_lookup
|
||||
|
||||
# This is only used to get at ratelimit function, and
|
||||
# maybe_kick_guest_users. It's fine there are multiple of these as
|
||||
# it doesn't store state.
|
||||
self.base_handler = BaseHandler(hs)
|
||||
|
||||
@abc.abstractmethod
|
||||
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
|
||||
"""Try and join a room that this server is not in
|
||||
@@ -703,6 +710,10 @@ class RoomMemberHandler(object):
|
||||
Codes.FORBIDDEN,
|
||||
)
|
||||
|
||||
# We need to rate limit *before* we send out any 3PID invites, so we
|
||||
# can't just rely on the standard ratelimiting of events.
|
||||
yield self.base_handler.ratelimit(requester)
|
||||
|
||||
invitee = yield self._lookup_3pid(
|
||||
id_server, medium, address
|
||||
)
|
||||
|
||||
@@ -69,6 +69,14 @@ REQUIREMENTS = [
|
||||
"attrs>=17.4.0",
|
||||
|
||||
"netaddr>=0.7.18",
|
||||
|
||||
# requests is a transitive dep of treq, and urllib3 is a transitive dep
|
||||
# of requests, as well as of sentry-sdk.
|
||||
#
|
||||
# As of requests 2.21, requests does not yet support urllib3 1.25.
|
||||
# (If we do not pin it here, pip will give us the latest urllib3
|
||||
# due to the dep via sentry-sdk.)
|
||||
"urllib3<1.25",
|
||||
]
|
||||
|
||||
CONDITIONAL_REQUIREMENTS = {
|
||||
|
||||
@@ -13,11 +13,10 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import synapse.rest.admin
|
||||
from synapse.http.server import JsonResource
|
||||
from synapse.rest.client import versions
|
||||
from synapse.rest.client.v1 import (
|
||||
admin,
|
||||
directory,
|
||||
events,
|
||||
initial_sync,
|
||||
@@ -58,8 +57,14 @@ from synapse.rest.client.v2_alpha import (
|
||||
|
||||
|
||||
class ClientRestResource(JsonResource):
|
||||
"""A resource for version 1 of the matrix client API."""
|
||||
"""Matrix Client API REST resource.
|
||||
|
||||
This gets mounted at various points under /_matrix/client, including:
|
||||
* /_matrix/client/r0
|
||||
* /_matrix/client/api/v1
|
||||
* /_matrix/client/unstable
|
||||
* etc
|
||||
"""
|
||||
def __init__(self, hs):
|
||||
JsonResource.__init__(self, hs, canonical_json=False)
|
||||
self.register_servlets(self, hs)
|
||||
@@ -82,7 +87,6 @@ class ClientRestResource(JsonResource):
|
||||
presence.register_servlets(hs, client_resource)
|
||||
directory.register_servlets(hs, client_resource)
|
||||
voip.register_servlets(hs, client_resource)
|
||||
admin.register_servlets(hs, client_resource)
|
||||
pusher.register_servlets(hs, client_resource)
|
||||
push_rule.register_servlets(hs, client_resource)
|
||||
logout.register_servlets(hs, client_resource)
|
||||
@@ -111,3 +115,8 @@ class ClientRestResource(JsonResource):
|
||||
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
|
||||
capabilities.register_servlets(hs, client_resource)
|
||||
account_validity.register_servlets(hs, client_resource)
|
||||
|
||||
# moving to /_synapse/admin
|
||||
synapse.rest.admin.register_servlets_for_client_rest_resource(
|
||||
hs, client_resource
|
||||
)
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2018 New Vector Ltd
|
||||
# Copyright 2018-2019 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -18,6 +18,7 @@ import hashlib
|
||||
import hmac
|
||||
import logging
|
||||
import platform
|
||||
import re
|
||||
|
||||
from six import text_type
|
||||
from six.moves import http_client
|
||||
@@ -27,39 +28,56 @@ from twisted.internet import defer
|
||||
import synapse
|
||||
from synapse.api.constants import Membership, UserTypes
|
||||
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
|
||||
from synapse.http.server import JsonResource
|
||||
from synapse.http.servlet import (
|
||||
RestServlet,
|
||||
assert_params_in_dict,
|
||||
parse_integer,
|
||||
parse_json_object_from_request,
|
||||
parse_string,
|
||||
)
|
||||
from synapse.rest.admin._base import assert_requester_is_admin, assert_user_is_admin
|
||||
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
|
||||
from synapse.types import UserID, create_requester
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
||||
from .base import ClientV1RestServlet, client_path_patterns
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class UsersRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/users/(?P<user_id>[^/]*)")
|
||||
def historical_admin_path_patterns(path_regex):
    """Returns the list of patterns for an admin endpoint, including historical ones

    This is a backwards-compatibility hack. Previously, the Admin API was exposed at
    various paths under /_matrix/client. This function returns a list of patterns
    matching those paths (as well as the new one), so that existing scripts which rely
    on the endpoints being available there are not broken.

    Note that this should only be used for existing endpoints: new ones should just
    register for the /_synapse/admin path.
    """
    return list(
        re.compile(prefix + path_regex)
        for prefix in (
            "^/_synapse/admin/v1",
            "^/_matrix/client/api/v1/admin",
            "^/_matrix/client/unstable/admin",
            "^/_matrix/client/r0/admin"
        )
    )
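# A rough usage sketch (illustrative only, not part of this commit): the patterns
# returned for a given path_regex match both the new /_synapse/admin prefix and the
# legacy /_matrix/client prefixes listed above. For example:
#
#   patterns = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)")
#   assert any(p.match("/_synapse/admin/v1/users/@alice:example.com") for p in patterns)
#   assert any(p.match("/_matrix/client/r0/admin/users/@alice:example.com") for p in patterns)
#
# "@alice:example.com" is a made-up user ID used only for illustration.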
class UsersRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns("/users/(?P<user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(UsersRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.handlers = hs.get_handlers()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request, user_id):
|
||||
target_user = UserID.from_string(user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
|
||||
# To allow all users to get the users list
|
||||
# if not is_admin and target_user != auth_user:
|
||||
# raise AuthError(403, "You are not a server admin")
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
if not self.hs.is_mine(target_user):
|
||||
raise SynapseError(400, "Can only users a local user")
|
||||
@@ -69,37 +87,30 @@ class UsersRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, ret))
|
||||
|
||||
|
||||
class VersionServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/server_version")
|
||||
class VersionServlet(RestServlet):
|
||||
PATTERNS = (re.compile("^/_synapse/admin/v1/server_version$"), )
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
|
||||
ret = {
|
||||
def __init__(self, hs):
|
||||
self.res = {
|
||||
'server_version': get_version_string(synapse),
|
||||
'python_version': platform.python_version(),
|
||||
}
|
||||
|
||||
defer.returnValue((200, ret))
|
||||
def on_GET(self, request):
|
||||
return 200, self.res
|
||||
|
||||
|
||||
class UserRegisterServlet(ClientV1RestServlet):
|
||||
class UserRegisterServlet(RestServlet):
|
||||
"""
|
||||
Attributes:
|
||||
NONCE_TIMEOUT (int): Seconds until a generated nonce won't be accepted
|
||||
nonces (dict[str, int]): The nonces that we will accept. A dict of
|
||||
nonce to the time it was generated, in int seconds.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/register")
|
||||
PATTERNS = historical_admin_path_patterns("/register")
|
||||
NONCE_TIMEOUT = 60
|
||||
|
||||
def __init__(self, hs):
|
||||
super(UserRegisterServlet, self).__init__(hs)
|
||||
self.handlers = hs.get_handlers()
|
||||
self.reactor = hs.get_reactor()
|
||||
self.nonces = {}
|
||||
@@ -226,11 +237,12 @@ class UserRegisterServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, result))
|
||||
|
||||
|
||||
class WhoisRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/whois/(?P<user_id>[^/]*)")
|
||||
class WhoisRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns("/whois/(?P<user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(WhoisRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.handlers = hs.get_handlers()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@@ -238,10 +250,9 @@ class WhoisRestServlet(ClientV1RestServlet):
|
||||
target_user = UserID.from_string(user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
auth_user = requester.user
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin and target_user != auth_user:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
if target_user != auth_user:
|
||||
yield assert_user_is_admin(self.auth, auth_user)
|
||||
|
||||
if not self.hs.is_mine(target_user):
|
||||
raise SynapseError(400, "Can only whois a local user")
|
||||
@@ -251,20 +262,16 @@ class WhoisRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, ret))
|
||||
|
||||
|
||||
class PurgeMediaCacheRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/purge_media_cache")
|
||||
class PurgeMediaCacheRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns("/purge_media_cache")
|
||||
|
||||
def __init__(self, hs):
|
||||
self.media_repository = hs.get_media_repository()
|
||||
super(PurgeMediaCacheRestServlet, self).__init__(hs)
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
before_ts = parse_integer(request, "before_ts", required=True)
|
||||
logger.info("before_ts: %r", before_ts)
|
||||
@@ -274,9 +281,9 @@ class PurgeMediaCacheRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, ret))
|
||||
|
||||
|
||||
class PurgeHistoryRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns(
|
||||
"/admin/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?"
|
||||
class PurgeHistoryRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns(
|
||||
"/purge_history/(?P<room_id>[^/]*)(/(?P<event_id>[^/]+))?"
|
||||
)
|
||||
|
||||
def __init__(self, hs):
|
||||
@@ -285,17 +292,13 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
|
||||
Args:
|
||||
hs (synapse.server.HomeServer)
|
||||
"""
|
||||
super(PurgeHistoryRestServlet, self).__init__(hs)
|
||||
self.pagination_handler = hs.get_pagination_handler()
|
||||
self.store = hs.get_datastore()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request, room_id, event_id):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
body = parse_json_object_from_request(request, allow_empty_body=True)
|
||||
|
||||
@@ -371,9 +374,9 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
|
||||
}))
|
||||
|
||||
|
||||
class PurgeHistoryStatusRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns(
|
||||
"/admin/purge_history_status/(?P<purge_id>[^/]+)"
|
||||
class PurgeHistoryStatusRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns(
|
||||
"/purge_history_status/(?P<purge_id>[^/]+)"
|
||||
)
|
||||
|
||||
def __init__(self, hs):
|
||||
@@ -382,16 +385,12 @@ class PurgeHistoryStatusRestServlet(ClientV1RestServlet):
|
||||
Args:
|
||||
hs (synapse.server.HomeServer)
|
||||
"""
|
||||
super(PurgeHistoryStatusRestServlet, self).__init__(hs)
|
||||
self.pagination_handler = hs.get_pagination_handler()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request, purge_id):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
purge_status = self.pagination_handler.get_purge_status(purge_id)
|
||||
if purge_status is None:
|
||||
@@ -400,15 +399,16 @@ class PurgeHistoryStatusRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, purge_status.asdict()))
|
||||
|
||||
|
||||
class DeactivateAccountRestServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/deactivate/(?P<target_user_id>[^/]*)")
|
||||
class DeactivateAccountRestServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns("/deactivate/(?P<target_user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(DeactivateAccountRestServlet, self).__init__(hs)
|
||||
self._deactivate_account_handler = hs.get_deactivate_account_handler()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request, target_user_id):
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
body = parse_json_object_from_request(request, allow_empty_body=True)
|
||||
erase = body.get("erase", False)
|
||||
if not isinstance(erase, bool):
|
||||
@@ -419,11 +419,6 @@ class DeactivateAccountRestServlet(ClientV1RestServlet):
|
||||
)
|
||||
|
||||
UserID.from_string(target_user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
|
||||
result = yield self._deactivate_account_handler.deactivate_account(
|
||||
target_user_id, erase,
|
||||
@@ -438,13 +433,13 @@ class DeactivateAccountRestServlet(ClientV1RestServlet):
|
||||
}))
|
||||
|
||||
|
||||
class ShutdownRoomRestServlet(ClientV1RestServlet):
|
||||
class ShutdownRoomRestServlet(RestServlet):
|
||||
"""Shuts down a room by removing all local users from the room and blocking
|
||||
all future invites and joins to the room. Any local aliases will be repointed
|
||||
to a new room created by `new_room_user_id` and kicked users will be auto
|
||||
joined to the new room.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/shutdown_room/(?P<room_id>[^/]+)")
|
||||
PATTERNS = historical_admin_path_patterns("/shutdown_room/(?P<room_id>[^/]+)")
|
||||
|
||||
DEFAULT_MESSAGE = (
|
||||
"Sharing illegal content on this server is not permitted and rooms in"
|
||||
@@ -452,19 +447,18 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
|
||||
)
|
||||
|
||||
def __init__(self, hs):
|
||||
super(ShutdownRoomRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.store = hs.get_datastore()
|
||||
self.state = hs.get_state_handler()
|
||||
self._room_creation_handler = hs.get_room_creation_handler()
|
||||
self.event_creation_handler = hs.get_event_creation_handler()
|
||||
self.room_member_handler = hs.get_room_member_handler()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request, room_id):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_user_is_admin(self.auth, requester.user)
|
||||
|
||||
content = parse_json_object_from_request(request)
|
||||
assert_params_in_dict(content, ["new_room_user_id"])
|
||||
@@ -564,22 +558,20 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
|
||||
}))
|
||||
|
||||
|
||||
class QuarantineMediaInRoom(ClientV1RestServlet):
|
||||
class QuarantineMediaInRoom(RestServlet):
|
||||
"""Quarantines all media in a room so that no one can download it via
|
||||
this server.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/quarantine_media/(?P<room_id>[^/]+)")
|
||||
PATTERNS = historical_admin_path_patterns("/quarantine_media/(?P<room_id>[^/]+)")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(QuarantineMediaInRoom, self).__init__(hs)
|
||||
self.store = hs.get_datastore()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request, room_id):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_user_is_admin(self.auth, requester.user)
|
||||
|
||||
num_quarantined = yield self.store.quarantine_media_ids_in_room(
|
||||
room_id, requester.user.to_string(),
|
||||
@@ -588,13 +580,12 @@ class QuarantineMediaInRoom(ClientV1RestServlet):
|
||||
defer.returnValue((200, {"num_quarantined": num_quarantined}))
|
||||
|
||||
|
||||
class ListMediaInRoom(ClientV1RestServlet):
|
||||
class ListMediaInRoom(RestServlet):
|
||||
"""Lists all of the media in a given room.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/room/(?P<room_id>[^/]+)/media")
|
||||
PATTERNS = historical_admin_path_patterns("/room/(?P<room_id>[^/]+)/media")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(ListMediaInRoom, self).__init__(hs)
|
||||
self.store = hs.get_datastore()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@@ -609,11 +600,11 @@ class ListMediaInRoom(ClientV1RestServlet):
|
||||
defer.returnValue((200, {"local": local_mxcs, "remote": remote_mxcs}))
|
||||
|
||||
|
||||
class ResetPasswordRestServlet(ClientV1RestServlet):
|
||||
class ResetPasswordRestServlet(RestServlet):
|
||||
"""Post request to allow an administrator reset password for a user.
|
||||
This needs user to have administrator access in Synapse.
|
||||
Example:
|
||||
http://localhost:8008/_matrix/client/api/v1/admin/reset_password/
|
||||
http://localhost:8008/_synapse/admin/v1/reset_password/
|
||||
@user:to_reset_password?access_token=admin_access_token
|
||||
JsonBodyToSend:
|
||||
{
|
||||
@@ -622,11 +613,10 @@ class ResetPasswordRestServlet(ClientV1RestServlet):
|
||||
Returns:
|
||||
200 OK with empty object if success otherwise an error.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/reset_password/(?P<target_user_id>[^/]*)")
|
||||
PATTERNS = historical_admin_path_patterns("/reset_password/(?P<target_user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
super(ResetPasswordRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self._set_password_handler = hs.get_set_password_handler()
|
||||
@@ -636,12 +626,10 @@ class ResetPasswordRestServlet(ClientV1RestServlet):
|
||||
"""Post request to allow an administrator reset password for a user.
|
||||
This needs user to have administrator access in Synapse.
|
||||
"""
|
||||
UserID.from_string(target_user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
yield assert_user_is_admin(self.auth, requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
UserID.from_string(target_user_id)
|
||||
|
||||
params = parse_json_object_from_request(request)
|
||||
assert_params_in_dict(params, ["new_password"])
|
||||
@@ -653,20 +641,19 @@ class ResetPasswordRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, {}))
|
||||
|
||||
|
||||
class GetUsersPaginatedRestServlet(ClientV1RestServlet):
|
||||
class GetUsersPaginatedRestServlet(RestServlet):
|
||||
"""Get request to get specific number of users from Synapse.
|
||||
This needs user to have administrator access in Synapse.
|
||||
Example:
|
||||
http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
|
||||
http://localhost:8008/_synapse/admin/v1/users_paginate/
|
||||
@admin:user?access_token=admin_access_token&start=0&limit=10
|
||||
Returns:
|
||||
200 OK with json object {list[dict[str, Any]], count} or empty object.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/users_paginate/(?P<target_user_id>[^/]*)")
|
||||
PATTERNS = historical_admin_path_patterns("/users_paginate/(?P<target_user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
super(GetUsersPaginatedRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.handlers = hs.get_handlers()
|
||||
@@ -676,16 +663,9 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
|
||||
"""Get request to get specific number of users from Synapse.
|
||||
This needs user to have administrator access in Synapse.
|
||||
"""
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
target_user = UserID.from_string(target_user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
|
||||
# To allow all users to get the users list
|
||||
# if not is_admin and target_user != auth_user:
|
||||
# raise AuthError(403, "You are not a server admin")
|
||||
|
||||
if not self.hs.is_mine(target_user):
|
||||
raise SynapseError(400, "Can only users a local user")
|
||||
@@ -706,7 +686,7 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
|
||||
"""Post request to get specific number of users from Synapse..
|
||||
This needs user to have administrator access in Synapse.
|
||||
Example:
|
||||
http://localhost:8008/_matrix/client/api/v1/admin/users_paginate/
|
||||
http://localhost:8008/_synapse/admin/v1/users_paginate/
|
||||
@admin:user?access_token=admin_access_token
|
||||
JsonBodyToSend:
|
||||
{
|
||||
@@ -716,12 +696,8 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
|
||||
Returns:
|
||||
200 OK with json object {list[dict[str, Any]], count} or empty object.
|
||||
"""
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
UserID.from_string(target_user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
|
||||
order = "name" # order by name in user table
|
||||
params = parse_json_object_from_request(request)
|
||||
@@ -736,21 +712,20 @@ class GetUsersPaginatedRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, ret))
|
||||
|
||||
|
||||
class SearchUsersRestServlet(ClientV1RestServlet):
|
||||
class SearchUsersRestServlet(RestServlet):
|
||||
"""Get request to search user table for specific users according to
|
||||
search term.
|
||||
This needs user to have administrator access in Synapse.
|
||||
Example:
|
||||
http://localhost:8008/_matrix/client/api/v1/admin/search_users/
|
||||
http://localhost:8008/_synapse/admin/v1/search_users/
|
||||
@admin:user?access_token=admin_access_token&term=alice
|
||||
Returns:
|
||||
200 OK with json object {list[dict[str, Any]], count} or empty object.
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/search_users/(?P<target_user_id>[^/]*)")
|
||||
PATTERNS = historical_admin_path_patterns("/search_users/(?P<target_user_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
super(SearchUsersRestServlet, self).__init__(hs)
|
||||
self.hs = hs
|
||||
self.auth = hs.get_auth()
|
||||
self.handlers = hs.get_handlers()
|
||||
@@ -761,12 +736,9 @@ class SearchUsersRestServlet(ClientV1RestServlet):
|
||||
search term.
|
||||
This needs user to have a administrator access in Synapse.
|
||||
"""
|
||||
target_user = UserID.from_string(target_user_id)
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
target_user = UserID.from_string(target_user_id)
|
||||
|
||||
# To allow all users to get the users list
|
||||
# if not is_admin and target_user != auth_user:
|
||||
@@ -784,23 +756,20 @@ class SearchUsersRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, ret))
|
||||
|
||||
|
||||
class DeleteGroupAdminRestServlet(ClientV1RestServlet):
|
||||
class DeleteGroupAdminRestServlet(RestServlet):
|
||||
"""Allows deleting of local groups
|
||||
"""
|
||||
PATTERNS = client_path_patterns("/admin/delete_group/(?P<group_id>[^/]*)")
|
||||
PATTERNS = historical_admin_path_patterns("/delete_group/(?P<group_id>[^/]*)")
|
||||
|
||||
def __init__(self, hs):
|
||||
super(DeleteGroupAdminRestServlet, self).__init__(hs)
|
||||
self.group_server = hs.get_groups_server_handler()
|
||||
self.is_mine_id = hs.is_mine_id
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request, group_id):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_user_is_admin(self.auth, requester.user)
|
||||
|
||||
if not self.is_mine_id(group_id):
|
||||
raise SynapseError(400, "Can only delete local groups")
|
||||
@@ -809,27 +778,21 @@ class DeleteGroupAdminRestServlet(ClientV1RestServlet):
|
||||
defer.returnValue((200, {}))
|
||||
|
||||
|
||||
class AccountValidityRenewServlet(ClientV1RestServlet):
|
||||
PATTERNS = client_path_patterns("/admin/account_validity/validity$")
|
||||
class AccountValidityRenewServlet(RestServlet):
|
||||
PATTERNS = historical_admin_path_patterns("/account_validity/validity$")
|
||||
|
||||
def __init__(self, hs):
|
||||
"""
|
||||
Args:
|
||||
hs (synapse.server.HomeServer): server
|
||||
"""
|
||||
super(AccountValidityRenewServlet, self).__init__(hs)
|
||||
|
||||
self.hs = hs
|
||||
self.account_activity_handler = hs.get_account_validity_handler()
|
||||
self.auth = hs.get_auth()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_POST(self, request):
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
is_admin = yield self.auth.is_server_admin(requester.user)
|
||||
|
||||
if not is_admin:
|
||||
raise AuthError(403, "You are not a server admin")
|
||||
yield assert_requester_is_admin(self.auth, request)
|
||||
|
||||
body = parse_json_object_from_request(request)
|
||||
|
||||
@@ -846,8 +809,27 @@ class AccountValidityRenewServlet(ClientV1RestServlet):
|
||||
}
|
||||
defer.returnValue((200, res))
|
||||
|
||||
########################################################################################
|
||||
#
|
||||
# please don't add more servlets here: this file is already long and unwieldy. Put
|
||||
# them in separate files within the 'admin' package.
|
||||
#
|
||||
########################################################################################
|
||||
|
||||
def register_servlets(hs, http_server):
|
||||
|
||||
class AdminRestResource(JsonResource):
|
||||
"""The REST resource which gets mounted at /_synapse/admin"""
|
||||
|
||||
def __init__(self, hs):
|
||||
JsonResource.__init__(self, hs, canonical_json=False)
|
||||
|
||||
register_servlets_for_client_rest_resource(hs, self)
|
||||
SendServerNoticeServlet(hs).register(self)
|
||||
VersionServlet(hs).register(self)
|
||||
|
||||
|
||||
def register_servlets_for_client_rest_resource(hs, http_server):
|
||||
"""Register only the servlets which need to be exposed on /_matrix/client/xxx"""
|
||||
WhoisRestServlet(hs).register(http_server)
|
||||
PurgeMediaCacheRestServlet(hs).register(http_server)
|
||||
PurgeHistoryStatusRestServlet(hs).register(http_server)
|
||||
@@ -861,6 +843,7 @@ def register_servlets(hs, http_server):
|
||||
QuarantineMediaInRoom(hs).register(http_server)
|
||||
ListMediaInRoom(hs).register(http_server)
|
||||
UserRegisterServlet(hs).register(http_server)
|
||||
VersionServlet(hs).register(http_server)
|
||||
DeleteGroupAdminRestServlet(hs).register(http_server)
|
||||
AccountValidityRenewServlet(hs).register(http_server)
|
||||
# don't add more things here: new servlets should only be exposed on
|
||||
# /_synapse/admin so should not go here. Instead register them in AdminRestResource.
|
||||
59
synapse/rest/admin/_base.py
Normal file
@@ -0,0 +1,59 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer

from synapse.api.errors import AuthError


@defer.inlineCallbacks
def assert_requester_is_admin(auth, request):
    """Verify that the requester is an admin user

    WARNING: MAKE SURE YOU YIELD ON THE RESULT!

    Args:
        auth (synapse.api.auth.Auth):
        request (twisted.web.server.Request): incoming request

    Returns:
        Deferred

    Raises:
        AuthError if the requester is not an admin
    """
    requester = yield auth.get_user_by_req(request)
    yield assert_user_is_admin(auth, requester.user)


@defer.inlineCallbacks
def assert_user_is_admin(auth, user_id):
    """Verify that the given user is an admin user

    WARNING: MAKE SURE YOU YIELD ON THE RESULT!

    Args:
        auth (synapse.api.auth.Auth):
        user_id (UserID):

    Returns:
        Deferred

    Raises:
        AuthError if the user is not an admin
    """

    is_admin = yield auth.is_server_admin(user_id)
    if not is_admin:
        raise AuthError(403, "You are not a server admin")
100
synapse/rest/admin/server_notice_servlet.py
Normal file
@@ -0,0 +1,100 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import re

from twisted.internet import defer

from synapse.api.constants import EventTypes
from synapse.api.errors import SynapseError
from synapse.http.servlet import (
    RestServlet,
    assert_params_in_dict,
    parse_json_object_from_request,
)
from synapse.rest.admin import assert_requester_is_admin
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.types import UserID


class SendServerNoticeServlet(RestServlet):
    """Servlet which will send a server notice to a given user

    POST /_synapse/admin/v1/send_server_notice
    {
        "user_id": "@target_user:server_name",
        "content": {
            "msgtype": "m.text",
            "body": "This is my message"
        }
    }

    returns:

    {
        "event_id": "$1895723857jgskldgujpious"
    }
    """
    def __init__(self, hs):
        """
        Args:
            hs (synapse.server.HomeServer): server
        """
        self.hs = hs
        self.auth = hs.get_auth()
        self.txns = HttpTransactionCache(hs)
        self.snm = hs.get_server_notices_manager()

    def register(self, json_resource):
        PATTERN = "^/_synapse/admin/v1/send_server_notice"
        json_resource.register_paths(
            "POST",
            (re.compile(PATTERN + "$"), ),
            self.on_POST,
        )
        json_resource.register_paths(
            "PUT",
            (re.compile(PATTERN + "/(?P<txn_id>[^/]*)$",), ),
            self.on_PUT,
        )

    @defer.inlineCallbacks
    def on_POST(self, request, txn_id=None):
        yield assert_requester_is_admin(self.auth, request)
        body = parse_json_object_from_request(request)
        assert_params_in_dict(body, ("user_id", "content"))
        event_type = body.get("type", EventTypes.Message)
        state_key = body.get("state_key")

        if not self.snm.is_enabled():
            raise SynapseError(400, "Server notices are not enabled on this server")

        user_id = body["user_id"]
        UserID.from_string(user_id)
        if not self.hs.is_mine_id(user_id):
            raise SynapseError(400, "Server notices can only be sent to local users")

        event = yield self.snm.send_notice(
            user_id=body["user_id"],
            type=event_type,
            state_key=state_key,
            event_content=body["content"],
        )

        defer.returnValue((200, {"event_id": event.event_id}))

    def on_PUT(self, request, txn_id):
        return self.txns.fetch_or_execute_request(
            request, self.on_POST, request, txn_id,
        )
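# A brief note on the PUT variant above (an illustrative sketch, based on how
# HttpTransactionCache is used elsewhere in Synapse, not wording from this commit):
# PUT /_synapse/admin/v1/send_server_notice/{txn_id} lets a caller retry safely,
# because fetch_or_execute_request replays the cached response for a txn_id it has
# already handled instead of sending a second notice. For example:
#
#   PUT /_synapse/admin/v1/send_server_notice/12345
#   {"user_id": "@target_user:server_name", "content": {"msgtype": "m.text", "body": "hi"}}
#
# Repeating the same PUT with txn_id 12345 should return the original event_id rather
# than creating a new notice; the txn_id value here is made up for illustration.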
@@ -31,11 +31,17 @@ class ProfileDisplaynameRestServlet(ClientV1RestServlet):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request, user_id):
|
||||
requester_user = None
|
||||
|
||||
if self.hs.config.require_auth_for_profile_requests:
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
requester_user = requester.user
|
||||
|
||||
user = UserID.from_string(user_id)
|
||||
|
||||
displayname = yield self.profile_handler.get_displayname(
|
||||
user,
|
||||
)
|
||||
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
|
||||
|
||||
displayname = yield self.profile_handler.get_displayname(user)
|
||||
|
||||
ret = {}
|
||||
if displayname is not None:
|
||||
@@ -74,11 +80,17 @@ class ProfileAvatarURLRestServlet(ClientV1RestServlet):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request, user_id):
|
||||
requester_user = None
|
||||
|
||||
if self.hs.config.require_auth_for_profile_requests:
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
requester_user = requester.user
|
||||
|
||||
user = UserID.from_string(user_id)
|
||||
|
||||
avatar_url = yield self.profile_handler.get_avatar_url(
|
||||
user,
|
||||
)
|
||||
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
|
||||
|
||||
avatar_url = yield self.profile_handler.get_avatar_url(user)
|
||||
|
||||
ret = {}
|
||||
if avatar_url is not None:
|
||||
@@ -116,14 +128,18 @@ class ProfileRestServlet(ClientV1RestServlet):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_GET(self, request, user_id):
|
||||
requester_user = None
|
||||
|
||||
if self.hs.config.require_auth_for_profile_requests:
|
||||
requester = yield self.auth.get_user_by_req(request)
|
||||
requester_user = requester.user
|
||||
|
||||
user = UserID.from_string(user_id)
|
||||
|
||||
displayname = yield self.profile_handler.get_displayname(
|
||||
user,
|
||||
)
|
||||
avatar_url = yield self.profile_handler.get_avatar_url(
|
||||
user,
|
||||
)
|
||||
yield self.profile_handler.check_profile_query_allowed(user, requester_user)
|
||||
|
||||
displayname = yield self.profile_handler.get_displayname(user)
|
||||
avatar_url = yield self.profile_handler.get_avatar_url(user)
|
||||
|
||||
ret = {}
|
||||
if displayname is not None:
|
||||
|
||||
@@ -301,6 +301,12 @@ class PublicRoomListRestServlet(ClientV1RestServlet):
|
||||
try:
|
||||
yield self.auth.get_user_by_req(request, allow_guest=True)
|
||||
except AuthError as e:
|
||||
# Option to allow servers to require auth when accessing
|
||||
# /publicRooms via CS API. This is especially helpful in private
|
||||
# federations.
|
||||
if self.hs.config.restrict_public_rooms_to_local_users:
|
||||
raise
|
||||
|
||||
# We allow people to not be authed if they're just looking at our
|
||||
# room list, but require auth when we proxy the request.
|
||||
# In both cases we call the auth function, as that has the side
|
||||
|
||||
@@ -118,7 +118,7 @@ class DeviceInboxWorkerStore(SQLBaseStore):
|
||||
defer.returnValue(count)
|
||||
|
||||
def get_new_device_msgs_for_remote(
|
||||
self, destination, last_stream_id, current_stream_id, limit=100
|
||||
self, destination, last_stream_id, current_stream_id, limit
|
||||
):
|
||||
"""
|
||||
Args:
|
||||
|
||||
@@ -24,14 +24,19 @@ _string_with_symbols = (
    string.digits + string.ascii_letters + ".,;:^&*-_+=#~@"
)

# random_string and random_string_with_symbols are used for a range of things,
# some cryptographically important, some less so. We use SystemRandom to make sure
# we get cryptographically-secure randoms.
rand = random.SystemRandom()


def random_string(length):
    return ''.join(random.choice(string.ascii_letters) for _ in range(length))
    return ''.join(rand.choice(string.ascii_letters) for _ in range(length))


def random_string_with_symbols(length):
    return ''.join(
        random.choice(_string_with_symbols) for _ in range(length)
        rand.choice(_string_with_symbols) for _ in range(length)
    )

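# For context (an editorial sketch, not part of the diff): random.SystemRandom draws
# from the operating system's CSPRNG via os.urandom, so token strings produced by these
# helpers can no longer be predicted from the state of Python's default Mersenne
# Twister generator. A minimal standalone equivalent:
#
#   import random, string
#   rand = random.SystemRandom()
#   token = ''.join(rand.choice(string.ascii_letters) for _ in range(24))
#
# The length of 24 is an arbitrary value chosen only for this example.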
@@ -109,7 +109,6 @@ class FilteringTestCase(unittest.TestCase):
|
||||
"event_format": "client",
|
||||
"event_fields": ["type", "content", "sender"],
|
||||
},
|
||||
|
||||
# a single backslash should be permitted (though it is debatable whether
|
||||
# it should be permitted before anything other than `.`, and what that
|
||||
# actually means)
|
||||
|
||||
@@ -10,19 +10,19 @@ class TestRatelimiter(unittest.TestCase):
|
||||
key="test_id", time_now_s=0, rate_hz=0.1, burst_count=1
|
||||
)
|
||||
self.assertTrue(allowed)
|
||||
self.assertEquals(10., time_allowed)
|
||||
self.assertEquals(10.0, time_allowed)
|
||||
|
||||
allowed, time_allowed = limiter.can_do_action(
|
||||
key="test_id", time_now_s=5, rate_hz=0.1, burst_count=1
|
||||
)
|
||||
self.assertFalse(allowed)
|
||||
self.assertEquals(10., time_allowed)
|
||||
self.assertEquals(10.0, time_allowed)
|
||||
|
||||
allowed, time_allowed = limiter.can_do_action(
|
||||
key="test_id", time_now_s=10, rate_hz=0.1, burst_count=1
|
||||
)
|
||||
self.assertTrue(allowed)
|
||||
self.assertEquals(20., time_allowed)
|
||||
self.assertEquals(20.0, time_allowed)
|
||||
|
||||
def test_pruning(self):
|
||||
limiter = Ratelimiter()
|
||||
|
||||
@@ -25,16 +25,18 @@ from tests.unittest import HomeserverTestCase
|
||||
class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||
def make_homeserver(self, reactor, clock):
|
||||
hs = self.setup_test_homeserver(
|
||||
http_client=None, homeserverToUse=FederationReaderServer,
|
||||
http_client=None, homeserverToUse=FederationReaderServer
|
||||
)
|
||||
return hs
|
||||
|
||||
@parameterized.expand([
|
||||
(["federation"], "auth_fail"),
|
||||
([], "no_resource"),
|
||||
(["openid", "federation"], "auth_fail"),
|
||||
(["openid"], "auth_fail"),
|
||||
])
|
||||
@parameterized.expand(
|
||||
[
|
||||
(["federation"], "auth_fail"),
|
||||
([], "no_resource"),
|
||||
(["openid", "federation"], "auth_fail"),
|
||||
(["openid"], "auth_fail"),
|
||||
]
|
||||
)
|
||||
def test_openid_listener(self, names, expectation):
|
||||
"""
|
||||
Test different openid listener configurations.
|
||||
@@ -53,17 +55,14 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||
# Grab the resource from the site that was told to listen
|
||||
site = self.reactor.tcpServers[0][1]
|
||||
try:
|
||||
self.resource = (
|
||||
site.resource.children[b"_matrix"].children[b"federation"]
|
||||
)
|
||||
self.resource = site.resource.children[b"_matrix"].children[b"federation"]
|
||||
except KeyError:
|
||||
if expectation == "no_resource":
|
||||
return
|
||||
raise
|
||||
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/federation/v1/openid/userinfo",
|
||||
"GET", "/_matrix/federation/v1/openid/userinfo"
|
||||
)
|
||||
self.render(request)
|
||||
|
||||
@@ -74,16 +73,18 @@ class FederationReaderOpenIDListenerTests(HomeserverTestCase):
|
||||
class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
|
||||
def make_homeserver(self, reactor, clock):
|
||||
hs = self.setup_test_homeserver(
|
||||
http_client=None, homeserverToUse=SynapseHomeServer,
|
||||
http_client=None, homeserverToUse=SynapseHomeServer
|
||||
)
|
||||
return hs
|
||||
|
||||
@parameterized.expand([
|
||||
(["federation"], "auth_fail"),
|
||||
([], "no_resource"),
|
||||
(["openid", "federation"], "auth_fail"),
|
||||
(["openid"], "auth_fail"),
|
||||
])
|
||||
@parameterized.expand(
|
||||
[
|
||||
(["federation"], "auth_fail"),
|
||||
([], "no_resource"),
|
||||
(["openid", "federation"], "auth_fail"),
|
||||
(["openid"], "auth_fail"),
|
||||
]
|
||||
)
|
||||
def test_openid_listener(self, names, expectation):
|
||||
"""
|
||||
Test different openid listener configurations.
|
||||
@@ -102,17 +103,14 @@ class SynapseHomeserverOpenIDListenerTests(HomeserverTestCase):
|
||||
# Grab the resource from the site that was told to listen
|
||||
site = self.reactor.tcpServers[0][1]
|
||||
try:
|
||||
self.resource = (
|
||||
site.resource.children[b"_matrix"].children[b"federation"]
|
||||
)
|
||||
self.resource = site.resource.children[b"_matrix"].children[b"federation"]
|
||||
except KeyError:
|
||||
if expectation == "no_resource":
|
||||
return
|
||||
raise
|
||||
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
"/_matrix/federation/v1/openid/userinfo",
|
||||
"GET", "/_matrix/federation/v1/openid/userinfo"
|
||||
)
|
||||
self.render(request)
|
||||
|
||||
|
||||
@@ -45,13 +45,7 @@ class ConfigGenerationTestCase(unittest.TestCase):
|
||||
)
|
||||
|
||||
self.assertSetEqual(
|
||||
set(
|
||||
[
|
||||
"homeserver.yaml",
|
||||
"lemurs.win.log.config",
|
||||
"lemurs.win.signing.key",
|
||||
]
|
||||
),
|
||||
set(["homeserver.yaml", "lemurs.win.log.config", "lemurs.win.signing.key"]),
|
||||
set(os.listdir(self.dir)),
|
||||
)
|
||||
|
||||
|
||||
@@ -22,7 +22,8 @@ from tests import unittest
|
||||
|
||||
class RoomDirectoryConfigTestCase(unittest.TestCase):
|
||||
def test_alias_creation_acl(self):
|
||||
config = yaml.safe_load("""
|
||||
config = yaml.safe_load(
|
||||
"""
|
||||
alias_creation_rules:
|
||||
- user_id: "*bob*"
|
||||
alias: "*"
|
||||
@@ -38,43 +39,49 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
|
||||
action: "allow"
|
||||
|
||||
room_list_publication_rules: []
|
||||
""")
|
||||
"""
|
||||
)
|
||||
|
||||
rd_config = RoomDirectoryConfig()
|
||||
rd_config.read_config(config)
|
||||
|
||||
self.assertFalse(rd_config.is_alias_creation_allowed(
|
||||
user_id="@bob:example.com",
|
||||
room_id="!test",
|
||||
alias="#test:example.com",
|
||||
))
|
||||
self.assertFalse(
|
||||
rd_config.is_alias_creation_allowed(
|
||||
user_id="@bob:example.com", room_id="!test", alias="#test:example.com"
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_alias_creation_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
alias="#unofficial_st:example.com",
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_alias_creation_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
alias="#unofficial_st:example.com",
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_alias_creation_allowed(
|
||||
user_id="@foobar:example.com",
|
||||
room_id="!test",
|
||||
alias="#test:example.com",
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_alias_creation_allowed(
|
||||
user_id="@foobar:example.com",
|
||||
room_id="!test",
|
||||
alias="#test:example.com",
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_alias_creation_allowed(
|
||||
user_id="@gah:example.com",
|
||||
room_id="!test",
|
||||
alias="#goo:example.com",
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_alias_creation_allowed(
|
||||
user_id="@gah:example.com", room_id="!test", alias="#goo:example.com"
|
||||
)
|
||||
)
|
||||
|
||||
self.assertFalse(rd_config.is_alias_creation_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
alias="#test:example.com",
|
||||
))
|
||||
self.assertFalse(
|
||||
rd_config.is_alias_creation_allowed(
|
||||
user_id="@test:example.com", room_id="!test", alias="#test:example.com"
|
||||
)
|
||||
)
|
||||
|
||||
def test_room_publish_acl(self):
|
||||
config = yaml.safe_load("""
|
||||
config = yaml.safe_load(
|
||||
"""
|
||||
alias_creation_rules: []
|
||||
|
||||
room_list_publication_rules:
|
||||
@@ -92,55 +99,66 @@ class RoomDirectoryConfigTestCase(unittest.TestCase):
|
||||
action: "allow"
|
||||
- room_id: "!test-deny"
|
||||
action: "deny"
|
||||
""")
|
||||
"""
|
||||
)
|
||||
|
||||
rd_config = RoomDirectoryConfig()
|
||||
rd_config.read_config(config)
|
||||
|
||||
self.assertFalse(rd_config.is_publishing_room_allowed(
|
||||
user_id="@bob:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#test:example.com"],
|
||||
))
|
||||
self.assertFalse(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@bob:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#test:example.com"],
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#unofficial_st:example.com"],
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#unofficial_st:example.com"],
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_publishing_room_allowed(
|
||||
user_id="@foobar:example.com",
|
||||
room_id="!test",
|
||||
aliases=[],
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@foobar:example.com", room_id="!test", aliases=[]
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_publishing_room_allowed(
|
||||
user_id="@gah:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#goo:example.com"],
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@gah:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#goo:example.com"],
|
||||
)
|
||||
)
|
||||
|
||||
self.assertFalse(rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#test:example.com"],
|
||||
))
|
||||
self.assertFalse(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#test:example.com"],
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_publishing_room_allowed(
|
||||
user_id="@foobar:example.com",
|
||||
room_id="!test-deny",
|
||||
aliases=[],
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@foobar:example.com", room_id="!test-deny", aliases=[]
|
||||
)
|
||||
)
|
||||
|
||||
self.assertFalse(rd_config.is_publishing_room_allowed(
|
||||
user_id="@gah:example.com",
|
||||
room_id="!test-deny",
|
||||
aliases=[],
|
||||
))
|
||||
self.assertFalse(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@gah:example.com", room_id="!test-deny", aliases=[]
|
||||
)
|
||||
)
|
||||
|
||||
self.assertTrue(rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#unofficial_st:example.com", "#blah:example.com"],
|
||||
))
|
||||
self.assertTrue(
|
||||
rd_config.is_publishing_room_allowed(
|
||||
user_id="@test:example.com",
|
||||
room_id="!test",
|
||||
aliases=["#unofficial_st:example.com", "#blah:example.com"],
|
||||
)
|
||||
)
|
||||
|
||||
@@ -19,7 +19,6 @@ from tests import unittest
|
||||
|
||||
|
||||
class ServerConfigTestCase(unittest.TestCase):
|
||||
|
||||
def test_is_threepid_reserved(self):
|
||||
user1 = {'medium': 'email', 'address': 'user1@example.com'}
|
||||
user2 = {'medium': 'email', 'address': 'user2@example.com'}
|
||||
|
||||
@@ -26,7 +26,6 @@ class TestConfig(TlsConfig):
|
||||
|
||||
|
||||
class TLSConfigTests(TestCase):
|
||||
|
||||
def test_warn_self_signed(self):
|
||||
"""
|
||||
Synapse will give a warning when it loads a self-signed certificate.
|
||||
@@ -34,7 +33,8 @@ class TLSConfigTests(TestCase):
|
||||
config_dir = self.mktemp()
|
||||
os.mkdir(config_dir)
|
||||
with open(os.path.join(config_dir, "cert.pem"), 'w') as f:
|
||||
f.write("""-----BEGIN CERTIFICATE-----
|
||||
f.write(
|
||||
"""-----BEGIN CERTIFICATE-----
|
||||
MIID6DCCAtACAws9CjANBgkqhkiG9w0BAQUFADCBtzELMAkGA1UEBhMCVFIxDzAN
|
||||
BgNVBAgMBsOHb3J1bTEUMBIGA1UEBwwLQmHFn21ha8OnxLExEjAQBgNVBAMMCWxv
|
||||
Y2FsaG9zdDEcMBoGA1UECgwTVHdpc3RlZCBNYXRyaXggTGFiczEkMCIGA1UECwwb
|
||||
@@ -56,11 +56,12 @@ I8OtG1xGwcok53lyDuuUUDexnK4O5BkjKiVlNPg4HPim5Kuj2hRNFfNt/F2BVIlj
|
||||
iZupikC5MT1LQaRwidkSNxCku1TfAyueiBwhLnFwTmIGNnhuDCutEVAD9kFmcJN2
|
||||
SznugAcPk4doX2+rL+ila+ThqgPzIkwTUHtnmjI0TI6xsDUlXz5S3UyudrE2Qsfz
|
||||
s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
|
||||
-----END CERTIFICATE-----""")
|
||||
-----END CERTIFICATE-----"""
|
||||
)
|
||||
|
||||
config = {
|
||||
"tls_certificate_path": os.path.join(config_dir, "cert.pem"),
|
||||
"tls_fingerprints": []
|
||||
"tls_fingerprints": [],
|
||||
}
|
||||
|
||||
t = TestConfig()
|
||||
@@ -75,5 +76,5 @@ s4niecZKPBizL6aucT59CsunNmmb5Glq8rlAcU+1ZTZZzGYqVYhF6axB9Qg=
|
||||
"Self-signed TLS certificates will not be accepted by "
|
||||
"Synapse 1.0. Please either provide a valid certificate, "
|
||||
"or use Synapse's ACME support to provision one."
|
||||
)
|
||||
),
|
||||
)
|
||||
|
||||
@@ -169,7 +169,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
|
||||
self.http_client.post_json.return_value = defer.Deferred()
|
||||
|
||||
res_deferreds_2 = kr.verify_json_objects_for_server(
|
||||
[("server10", json1, )]
|
||||
[("server10", json1)]
|
||||
)
|
||||
res_deferreds_2[0].addBoth(self.check_context, None)
|
||||
yield logcontext.make_deferred_yieldable(res_deferreds_2[0])
|
||||
@@ -345,6 +345,7 @@ def _verify_json_for_server(keyring, server_name, json_object):
|
||||
"""thin wrapper around verify_json_for_server which makes sure it is wrapped
|
||||
with the patched defer.inlineCallbacks.
|
||||
"""
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def v():
|
||||
rv1 = yield keyring.verify_json_for_server(server_name, json_object)
|
||||
|
||||
@@ -33,11 +33,15 @@ class FederationSenderTestCases(HomeserverTestCase):
|
||||
mock_state_handler = self.hs.get_state_handler()
|
||||
mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
|
||||
|
||||
mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
|
||||
mock_send_transaction = (
|
||||
self.hs.get_federation_transport_client().send_transaction
|
||||
)
|
||||
mock_send_transaction.return_value = defer.succeed({})
|
||||
|
||||
sender = self.hs.get_federation_sender()
|
||||
receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
|
||||
receipt = ReadReceipt(
|
||||
"room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
|
||||
)
|
||||
self.successResultOf(sender.send_read_receipt(receipt))
|
||||
|
||||
self.pump()
|
||||
@@ -46,21 +50,24 @@ class FederationSenderTestCases(HomeserverTestCase):
|
||||
mock_send_transaction.assert_called_once()
|
||||
json_cb = mock_send_transaction.call_args[0][1]
|
||||
data = json_cb()
|
||||
self.assertEqual(data['edus'], [
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['event_id'],
|
||||
'data': {'ts': 1234},
|
||||
},
|
||||
},
|
||||
self.assertEqual(
|
||||
data['edus'],
|
||||
[
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['event_id'],
|
||||
'data': {'ts': 1234},
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
},
|
||||
])
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
def test_send_receipts_with_backoff(self):
|
||||
"""Send two receipts in quick succession; the second should be flushed, but
|
||||
@@ -68,11 +75,15 @@ class FederationSenderTestCases(HomeserverTestCase):
|
||||
mock_state_handler = self.hs.get_state_handler()
|
||||
mock_state_handler.get_current_hosts_in_room.return_value = ["test", "host2"]
|
||||
|
||||
mock_send_transaction = self.hs.get_federation_transport_client().send_transaction
|
||||
mock_send_transaction = (
|
||||
self.hs.get_federation_transport_client().send_transaction
|
||||
)
|
||||
mock_send_transaction.return_value = defer.succeed({})
|
||||
|
||||
sender = self.hs.get_federation_sender()
|
||||
receipt = ReadReceipt("room_id", "m.read", "user_id", ["event_id"], {"ts": 1234})
|
||||
receipt = ReadReceipt(
|
||||
"room_id", "m.read", "user_id", ["event_id"], {"ts": 1234}
|
||||
)
|
||||
self.successResultOf(sender.send_read_receipt(receipt))
|
||||
|
||||
self.pump()
|
||||
@@ -81,25 +92,30 @@ class FederationSenderTestCases(HomeserverTestCase):
|
||||
mock_send_transaction.assert_called_once()
|
||||
json_cb = mock_send_transaction.call_args[0][1]
|
||||
data = json_cb()
|
||||
self.assertEqual(data['edus'], [
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['event_id'],
|
||||
'data': {'ts': 1234},
|
||||
},
|
||||
},
|
||||
self.assertEqual(
|
||||
data['edus'],
|
||||
[
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['event_id'],
|
||||
'data': {'ts': 1234},
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
},
|
||||
])
|
||||
}
|
||||
],
|
||||
)
|
||||
mock_send_transaction.reset_mock()
|
||||
|
||||
# send the second RR
|
||||
receipt = ReadReceipt("room_id", "m.read", "user_id", ["other_id"], {"ts": 1234})
|
||||
receipt = ReadReceipt(
|
||||
"room_id", "m.read", "user_id", ["other_id"], {"ts": 1234}
|
||||
)
|
||||
self.successResultOf(sender.send_read_receipt(receipt))
|
||||
self.pump()
|
||||
mock_send_transaction.assert_not_called()
|
||||
@@ -111,18 +127,21 @@ class FederationSenderTestCases(HomeserverTestCase):
|
||||
mock_send_transaction.assert_called_once()
|
||||
json_cb = mock_send_transaction.call_args[0][1]
|
||||
data = json_cb()
|
||||
self.assertEqual(data['edus'], [
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['other_id'],
|
||||
'data': {'ts': 1234},
|
||||
},
|
||||
},
|
||||
self.assertEqual(
|
||||
data['edus'],
|
||||
[
|
||||
{
|
||||
'edu_type': 'm.receipt',
|
||||
'content': {
|
||||
'room_id': {
|
||||
'm.read': {
|
||||
'user_id': {
|
||||
'event_ids': ['other_id'],
|
||||
'data': {'ts': 1234},
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
},
|
||||
},
|
||||
])
|
||||
}
|
||||
],
|
||||
)
|
||||
|
||||
@@ -115,11 +115,7 @@ class TestCreateAliasACL(unittest.HomeserverTestCase):
|
||||
# We cheekily override the config to add custom alias creation rules
|
||||
config = {}
|
||||
config["alias_creation_rules"] = [
|
||||
{
|
||||
"user_id": "*",
|
||||
"alias": "#unofficial_*",
|
||||
"action": "allow",
|
||||
}
|
||||
{"user_id": "*", "alias": "#unofficial_*", "action": "allow"}
|
||||
]
|
||||
config["room_list_publication_rules"] = []
|
||||
|
||||
@@ -162,9 +158,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
|
||||
room_id = self.helper.create_room_as(self.user_id)
|
||||
|
||||
request, channel = self.make_request(
|
||||
"PUT",
|
||||
b"directory/list/room/%s" % (room_id.encode('ascii'),),
|
||||
b'{}',
|
||||
"PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEquals(200, channel.code, channel.result)
|
||||
@@ -179,10 +173,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
|
||||
self.directory_handler.enable_room_list_search = True
|
||||
|
||||
# Room list is enabled so we should get some results
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
b"publicRooms",
|
||||
)
|
||||
request, channel = self.make_request("GET", b"publicRooms")
|
||||
self.render(request)
|
||||
self.assertEquals(200, channel.code, channel.result)
|
||||
self.assertTrue(len(channel.json_body["chunk"]) > 0)
|
||||
@@ -191,10 +182,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
|
||||
self.directory_handler.enable_room_list_search = False
|
||||
|
||||
# Room list disabled so we should get no results
|
||||
request, channel = self.make_request(
|
||||
"GET",
|
||||
b"publicRooms",
|
||||
)
|
||||
request, channel = self.make_request("GET", b"publicRooms")
|
||||
self.render(request)
|
||||
self.assertEquals(200, channel.code, channel.result)
|
||||
self.assertTrue(len(channel.json_body["chunk"]) == 0)
|
||||
@@ -202,9 +190,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase):
|
||||
# Room list disabled so we shouldn't be allowed to publish rooms
|
||||
room_id = self.helper.create_room_as(self.user_id)
|
||||
request, channel = self.make_request(
|
||||
"PUT",
|
||||
b"directory/list/room/%s" % (room_id.encode('ascii'),),
|
||||
b'{}',
|
||||
"PUT", b"directory/list/room/%s" % (room_id.encode('ascii'),), b'{}'
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEquals(403, channel.code, channel.result)
|
||||
|
||||
@@ -36,7 +36,7 @@ room_keys = {
|
||||
"first_message_index": 1,
|
||||
"forwarded_count": 1,
|
||||
"is_verified": False,
|
||||
"session_data": "SSBBTSBBIEZJU0gK"
|
||||
"session_data": "SSBBTSBBIEZJU0gK",
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -47,15 +47,13 @@ room_keys = {
class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(E2eRoomKeysHandlerTestCase, self).__init__(*args, **kwargs)
self.hs = None # type: synapse.server.HomeServer
self.hs = None # type: synapse.server.HomeServer
self.handler = None # type: synapse.handlers.e2e_keys.E2eRoomKeysHandler

@defer.inlineCallbacks
def setUp(self):
self.hs = yield utils.setup_test_homeserver(
self.addCleanup,
handlers=None,
replication_layer=mock.Mock(),
self.addCleanup, handlers=None, replication_layer=mock.Mock()
)
self.handler = synapse.handlers.e2e_room_keys.E2eRoomKeysHandler(self.hs)
self.local_user = "@boris:" + self.hs.hostname
@@ -88,67 +86,86 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_create_version(self):
"""Check that we can create and then retrieve versions.
"""
res = yield self.handler.create_version(self.local_user, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
res = yield self.handler.create_version(
self.local_user,
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
)
self.assertEqual(res, "1")

# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
self.assertDictEqual(res, {
"version": "1",
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
self.assertDictEqual(
res,
{
"version": "1",
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)

# check we can retrieve it as a specific version
res = yield self.handler.get_version_info(self.local_user, "1")
self.assertDictEqual(res, {
"version": "1",
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
self.assertDictEqual(
res,
{
"version": "1",
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
},
)

# upload a new one...
res = yield self.handler.create_version(self.local_user, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
})
res = yield self.handler.create_version(
self.local_user,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
},
)
self.assertEqual(res, "2")

# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
self.assertDictEqual(res, {
"version": "2",
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
})
self.assertDictEqual(
res,
{
"version": "2",
"algorithm": "m.megolm_backup.v1",
"auth_data": "second_version_auth_data",
},
)

@defer.inlineCallbacks
def test_update_version(self):
"""Check that we can update versions.
"""
version = yield self.handler.create_version(self.local_user, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
version = yield self.handler.create_version(
self.local_user,
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
)
self.assertEqual(version, "1")

res = yield self.handler.update_version(self.local_user, version, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version
})
res = yield self.handler.update_version(
self.local_user,
version,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version,
},
)
self.assertDictEqual(res, {})

# check we can retrieve it as the current version
res = yield self.handler.get_version_info(self.local_user)
self.assertDictEqual(res, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version
})
self.assertDictEqual(
res,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": version,
},
)

@defer.inlineCallbacks
def test_update_missing_version(self):
@@ -156,11 +173,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""
res = None
try:
yield self.handler.update_version(self.local_user, "1", {
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": "1"
})
yield self.handler.update_version(
self.local_user,
"1",
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": "1",
},
)
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 404)
@@ -170,29 +191,37 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
"""Check that we get a 400 if the version in the body is missing or
doesn't match
"""
version = yield self.handler.create_version(self.local_user, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
version = yield self.handler.create_version(
self.local_user,
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
)
self.assertEqual(version, "1")

res = None
try:
yield self.handler.update_version(self.local_user, version, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data"
})
yield self.handler.update_version(
self.local_user,
version,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
},
)
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 400)

res = None
try:
yield self.handler.update_version(self.local_user, version, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": "incorrect"
})
yield self.handler.update_version(
self.local_user,
version,
{
"algorithm": "m.megolm_backup.v1",
"auth_data": "revised_first_version_auth_data",
"version": "incorrect",
},
)
except errors.SynapseError as e:
res = e.code
self.assertEqual(res, 400)
@@ -223,10 +252,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
def test_delete_version(self):
"""Check that we can create and then delete versions.
"""
res = yield self.handler.create_version(self.local_user, {
"algorithm": "m.megolm_backup.v1",
"auth_data": "first_version_auth_data",
})
res = yield self.handler.create_version(
self.local_user,
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
)
self.assertEqual(res, "1")

# check we can delete it
@@ -255,16 +284,14 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
def test_get_missing_room_keys(self):
|
||||
"""Check we get an empty response from an empty backup
|
||||
"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||
self.assertDictEqual(res, {
|
||||
"rooms": {}
|
||||
})
|
||||
self.assertDictEqual(res, {"rooms": {}})
|
||||
|
||||
# TODO: test the locking semantics when uploading room_keys,
|
||||
# although this is probably best done in sytest
|
||||
@@ -275,7 +302,9 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
"""
|
||||
res = None
|
||||
try:
|
||||
yield self.handler.upload_room_keys(self.local_user, "no_version", room_keys)
|
||||
yield self.handler.upload_room_keys(
|
||||
self.local_user, "no_version", room_keys
|
||||
)
|
||||
except errors.SynapseError as e:
|
||||
res = e.code
|
||||
self.assertEqual(res, 404)
|
||||
@@ -285,10 +314,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
"""Check that we get a 404 on uploading keys when an nonexistent version
|
||||
is specified
|
||||
"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
res = None
|
||||
@@ -304,16 +333,19 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
def test_upload_room_keys_wrong_version(self):
|
||||
"""Check that we get a 403 on uploading keys for an old version
|
||||
"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "second_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "second_version_auth_data",
|
||||
},
|
||||
)
|
||||
self.assertEqual(version, "2")
|
||||
|
||||
res = None
|
||||
@@ -327,10 +359,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
def test_upload_room_keys_insert(self):
|
||||
"""Check that we can insert and retrieve keys for a session
|
||||
"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||
@@ -340,18 +372,13 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
|
||||
# check getting room_keys for a given room
|
||||
res = yield self.handler.get_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org"
|
||||
self.local_user, version, room_id="!abc:matrix.org"
|
||||
)
|
||||
self.assertDictEqual(res, room_keys)
|
||||
|
||||
# check getting room_keys for a given session_id
|
||||
res = yield self.handler.get_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
session_id="c0ff33",
|
||||
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||
)
|
||||
self.assertDictEqual(res, room_keys)
|
||||
|
||||
@@ -359,10 +386,10 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
def test_upload_room_keys_merge(self):
|
||||
"""Check that we can upload a new room_key for an existing session and
|
||||
have it correctly merged"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||
@@ -378,7 +405,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||
self.assertEqual(
|
||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
||||
"SSBBTSBBIEZJU0gK"
|
||||
"SSBBTSBBIEZJU0gK",
|
||||
)
|
||||
|
||||
# test that marking the session as verified however /does/ replace it
|
||||
@@ -387,8 +414,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
|
||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||
self.assertEqual(
|
||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
||||
"new"
|
||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
|
||||
)
|
||||
|
||||
# test that a session with a higher forwarded_count doesn't replace one
|
||||
@@ -399,8 +425,7 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
|
||||
res = yield self.handler.get_room_keys(self.local_user, version)
|
||||
self.assertEqual(
|
||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'],
|
||||
"new"
|
||||
res['rooms']['!abc:matrix.org']['sessions']['c0ff33']['session_data'], "new"
|
||||
)
|
||||
|
||||
# TODO: check edge cases as well as the common variations here
|
||||
@@ -409,56 +434,36 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
|
||||
def test_delete_room_keys(self):
|
||||
"""Check that we can insert and delete keys for a session
|
||||
"""
|
||||
version = yield self.handler.create_version(self.local_user, {
|
||||
"algorithm": "m.megolm_backup.v1",
|
||||
"auth_data": "first_version_auth_data",
|
||||
})
|
||||
version = yield self.handler.create_version(
|
||||
self.local_user,
|
||||
{"algorithm": "m.megolm_backup.v1", "auth_data": "first_version_auth_data"},
|
||||
)
|
||||
self.assertEqual(version, "1")
|
||||
|
||||
# check for bulk-delete
|
||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||
yield self.handler.delete_room_keys(self.local_user, version)
|
||||
res = yield self.handler.get_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
session_id="c0ff33",
|
||||
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||
)
|
||||
self.assertDictEqual(res, {
|
||||
"rooms": {}
|
||||
})
|
||||
self.assertDictEqual(res, {"rooms": {}})
|
||||
|
||||
# check for bulk-delete per room
|
||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||
yield self.handler.delete_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
self.local_user, version, room_id="!abc:matrix.org"
|
||||
)
|
||||
res = yield self.handler.get_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
session_id="c0ff33",
|
||||
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||
)
|
||||
self.assertDictEqual(res, {
|
||||
"rooms": {}
|
||||
})
|
||||
self.assertDictEqual(res, {"rooms": {}})
|
||||
|
||||
# check for bulk-delete per session
|
||||
yield self.handler.upload_room_keys(self.local_user, version, room_keys)
|
||||
yield self.handler.delete_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
session_id="c0ff33",
|
||||
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||
)
|
||||
res = yield self.handler.get_room_keys(
|
||||
self.local_user,
|
||||
version,
|
||||
room_id="!abc:matrix.org",
|
||||
session_id="c0ff33",
|
||||
self.local_user, version, room_id="!abc:matrix.org", session_id="c0ff33"
|
||||
)
|
||||
self.assertDictEqual(res, {
|
||||
"rooms": {}
|
||||
})
|
||||
self.assertDictEqual(res, {"rooms": {}})
|
||||
|
||||
@@ -424,8 +424,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
def make_homeserver(self, reactor, clock):
|
||||
hs = self.setup_test_homeserver(
|
||||
"server", http_client=None,
|
||||
federation_sender=Mock(),
|
||||
"server", http_client=None, federation_sender=Mock()
|
||||
)
|
||||
return hs
|
||||
|
||||
@@ -457,7 +456,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
# Mark test2 as online, test will be offline with a last_active of 0
|
||||
self.presence_handler.set_state(
|
||||
UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE},
|
||||
UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE}
|
||||
)
|
||||
self.reactor.pump([0]) # Wait for presence updates to be handled
|
||||
|
||||
@@ -506,13 +505,13 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
# Mark test as online
|
||||
self.presence_handler.set_state(
|
||||
UserID.from_string("@test:server"), {"presence": PresenceState.ONLINE},
|
||||
UserID.from_string("@test:server"), {"presence": PresenceState.ONLINE}
|
||||
)
|
||||
|
||||
# Mark test2 as online, test will be offline with a last_active of 0.
|
||||
# Note we don't join them to the room yet
|
||||
self.presence_handler.set_state(
|
||||
UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE},
|
||||
UserID.from_string("@test2:server"), {"presence": PresenceState.ONLINE}
|
||||
)
|
||||
|
||||
# Add servers to the room
|
||||
@@ -541,8 +540,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
|
||||
)
|
||||
self.assertEqual(expected_state.state, PresenceState.ONLINE)
|
||||
self.federation_sender.send_presence_to_destinations.assert_called_once_with(
|
||||
destinations=set(("server2", "server3")),
|
||||
states=[expected_state]
|
||||
destinations=set(("server2", "server3")), states=[expected_state]
|
||||
)
|
||||
|
||||
def _add_new_user(self, room_id, user_id):
|
||||
@@ -565,7 +563,7 @@ class PresenceJoinTestCase(unittest.HomeserverTestCase):
|
||||
type=EventTypes.Member,
|
||||
sender=user_id,
|
||||
state_key=user_id,
|
||||
content={"membership": Membership.JOIN}
|
||||
content={"membership": Membership.JOIN},
|
||||
)
|
||||
|
||||
prev_event_ids = self.get_success(
|
||||
|
||||
@@ -64,20 +64,22 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
mock_federation_client.put_json.return_value = defer.succeed((200, "OK"))
|
||||
|
||||
hs = self.setup_test_homeserver(
|
||||
datastore=(Mock(
|
||||
spec=[
|
||||
# Bits that Federation needs
|
||||
"prep_send_transaction",
|
||||
"delivered_txn",
|
||||
"get_received_txn_response",
|
||||
"set_received_txn_response",
|
||||
"get_destination_retry_timings",
|
||||
"get_devices_by_remote",
|
||||
# Bits that user_directory needs
|
||||
"get_user_directory_stream_pos",
|
||||
"get_current_state_deltas",
|
||||
]
|
||||
)),
|
||||
datastore=(
|
||||
Mock(
|
||||
spec=[
|
||||
# Bits that Federation needs
|
||||
"prep_send_transaction",
|
||||
"delivered_txn",
|
||||
"get_received_txn_response",
|
||||
"set_received_txn_response",
|
||||
"get_destination_retry_timings",
|
||||
"get_devices_by_remote",
|
||||
# Bits that user_directory needs
|
||||
"get_user_directory_stream_pos",
|
||||
"get_current_state_deltas",
|
||||
]
|
||||
)
|
||||
),
|
||||
notifier=Mock(),
|
||||
http_client=mock_federation_client,
|
||||
keyring=mock_keyring,
|
||||
@@ -87,7 +89,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
def prepare(self, reactor, clock, hs):
|
||||
# the tests assume that we are starting at unix time 1000
|
||||
reactor.pump((1000, ))
|
||||
reactor.pump((1000,))
|
||||
|
||||
mock_notifier = hs.get_notifier()
|
||||
self.on_new_event = mock_notifier.on_new_event
|
||||
@@ -114,6 +116,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
def check_joined_room(room_id, user_id):
|
||||
if user_id not in [u.to_string() for u in self.room_members]:
|
||||
raise AuthError(401, "User is not in the room")
|
||||
|
||||
hs.get_auth().check_joined_room = check_joined_room
|
||||
|
||||
def get_joined_hosts_for_room(room_id):
|
||||
@@ -123,6 +126,7 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
def get_current_users_in_room(room_id):
|
||||
return set(str(u) for u in self.room_members)
|
||||
|
||||
hs.get_state_handler().get_current_users_in_room = get_current_users_in_room
|
||||
|
||||
self.datastore.get_user_directory_stream_pos.return_value = (
|
||||
@@ -141,21 +145,16 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 0)
|
||||
|
||||
self.successResultOf(self.handler.started_typing(
|
||||
target_user=U_APPLE,
|
||||
auth_user=U_APPLE,
|
||||
room_id=ROOM_ID,
|
||||
timeout=20000,
|
||||
))
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 1, rooms=[ROOM_ID])]
|
||||
self.successResultOf(
|
||||
self.handler.started_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=20000
|
||||
)
|
||||
)
|
||||
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 1)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=0
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
@@ -170,12 +169,11 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
def test_started_typing_remote_send(self):
|
||||
self.room_members = [U_APPLE, U_ONION]
|
||||
|
||||
self.successResultOf(self.handler.started_typing(
|
||||
target_user=U_APPLE,
|
||||
auth_user=U_APPLE,
|
||||
room_id=ROOM_ID,
|
||||
timeout=20000,
|
||||
))
|
||||
self.successResultOf(
|
||||
self.handler.started_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=20000
|
||||
)
|
||||
)
|
||||
|
||||
put_json = self.hs.get_http_client().put_json
|
||||
put_json.assert_called_once_with(
|
||||
@@ -216,14 +214,10 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
self.render(request)
|
||||
self.assertEqual(channel.code, 200)
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 1, rooms=[ROOM_ID])]
|
||||
)
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 1)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=0
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
@@ -247,14 +241,14 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 0)
|
||||
|
||||
self.successResultOf(self.handler.stopped_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID
|
||||
))
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 1, rooms=[ROOM_ID])]
|
||||
self.successResultOf(
|
||||
self.handler.stopped_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID
|
||||
)
|
||||
)
|
||||
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
|
||||
|
||||
put_json = self.hs.get_http_client().put_json
|
||||
put_json.assert_called_once_with(
|
||||
"farm",
|
||||
@@ -274,18 +268,10 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
)
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 1)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=0
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
{
|
||||
"type": "m.typing",
|
||||
"room_id": ROOM_ID,
|
||||
"content": {"user_ids": []},
|
||||
}
|
||||
],
|
||||
[{"type": "m.typing", "room_id": ROOM_ID, "content": {"user_ids": []}}],
|
||||
)
|
||||
|
||||
def test_typing_timeout(self):
|
||||
@@ -293,22 +279,17 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 0)
|
||||
|
||||
self.successResultOf(self.handler.started_typing(
|
||||
target_user=U_APPLE,
|
||||
auth_user=U_APPLE,
|
||||
room_id=ROOM_ID,
|
||||
timeout=10000,
|
||||
))
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 1, rooms=[ROOM_ID])]
|
||||
self.successResultOf(
|
||||
self.handler.started_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=10000
|
||||
)
|
||||
)
|
||||
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 1, rooms=[ROOM_ID])])
|
||||
self.on_new_event.reset_mock()
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 1)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=0
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
@@ -320,45 +301,30 @@ class TypingNotificationsTestCase(unittest.HomeserverTestCase):
|
||||
],
|
||||
)
|
||||
|
||||
self.reactor.pump([16, ])
|
||||
self.reactor.pump([16])
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 2, rooms=[ROOM_ID])]
|
||||
)
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 2, rooms=[ROOM_ID])])
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 2)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=1
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=1)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
{
|
||||
"type": "m.typing",
|
||||
"room_id": ROOM_ID,
|
||||
"content": {"user_ids": []},
|
||||
}
|
||||
],
|
||||
[{"type": "m.typing", "room_id": ROOM_ID, "content": {"user_ids": []}}],
|
||||
)
|
||||
|
||||
# SYN-230 - see if we can still set after timeout
|
||||
|
||||
self.successResultOf(self.handler.started_typing(
|
||||
target_user=U_APPLE,
|
||||
auth_user=U_APPLE,
|
||||
room_id=ROOM_ID,
|
||||
timeout=10000,
|
||||
))
|
||||
|
||||
self.on_new_event.assert_has_calls(
|
||||
[call('typing_key', 3, rooms=[ROOM_ID])]
|
||||
self.successResultOf(
|
||||
self.handler.started_typing(
|
||||
target_user=U_APPLE, auth_user=U_APPLE, room_id=ROOM_ID, timeout=10000
|
||||
)
|
||||
)
|
||||
|
||||
self.on_new_event.assert_has_calls([call('typing_key', 3, rooms=[ROOM_ID])])
|
||||
self.on_new_event.reset_mock()
|
||||
|
||||
self.assertEquals(self.event_source.get_current_key(), 3)
|
||||
events = self.event_source.get_new_events(
|
||||
room_ids=[ROOM_ID], from_key=0
|
||||
)
|
||||
events = self.event_source.get_new_events(room_ids=[ROOM_ID], from_key=0)
|
||||
self.assertEquals(
|
||||
events[0],
|
||||
[
|
||||
|
||||
@@ -14,8 +14,9 @@
|
||||
# limitations under the License.
|
||||
from mock import Mock
|
||||
|
||||
import synapse.rest.admin
|
||||
from synapse.api.constants import UserTypes
|
||||
from synapse.rest.client.v1 import admin, login, room
|
||||
from synapse.rest.client.v1 import login, room
|
||||
from synapse.rest.client.v2_alpha import user_directory
|
||||
from synapse.storage.roommember import ProfileInfo
|
||||
|
||||
@@ -29,7 +30,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
|
||||
|
||||
servlets = [
|
||||
login.register_servlets,
|
||||
admin.register_servlets,
|
||||
synapse.rest.admin.register_servlets_for_client_rest_resource,
|
||||
room.register_servlets,
|
||||
]
|
||||
|
||||
@@ -327,7 +328,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
|
||||
user_directory.register_servlets,
|
||||
room.register_servlets,
|
||||
login.register_servlets,
|
||||
admin.register_servlets,
|
||||
synapse.rest.admin.register_servlets_for_client_rest_resource,
|
||||
]
|
||||
|
||||
def make_homeserver(self, reactor, clock):
|
||||
@@ -351,9 +352,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
|
||||
|
||||
# Assert user directory is not empty
|
||||
request, channel = self.make_request(
|
||||
"POST",
|
||||
b"user_directory/search",
|
||||
b'{"search_term":"user2"}',
|
||||
"POST", b"user_directory/search", b'{"search_term":"user2"}'
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEquals(200, channel.code, channel.result)
|
||||
@@ -362,9 +361,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
|
||||
# Disable user directory and check search returns nothing
|
||||
self.config.user_directory_search_enabled = False
|
||||
request, channel = self.make_request(
|
||||
"POST",
|
||||
b"user_directory/search",
|
||||
b'{"search_term":"user2"}',
|
||||
"POST", b"user_directory/search", b'{"search_term":"user2"}'
|
||||
)
|
||||
self.render(request)
|
||||
self.assertEquals(200, channel.code, channel.result)
|
||||
|
||||
@@ -24,14 +24,12 @@ def get_test_cert_file():
|
||||
#
|
||||
# openssl req -x509 -newkey rsa:4096 -keyout server.pem -out server.pem -days 36500 \
|
||||
# -nodes -subj '/CN=testserv'
|
||||
return os.path.join(
|
||||
os.path.dirname(__file__),
|
||||
'server.pem',
|
||||
)
|
||||
return os.path.join(os.path.dirname(__file__), 'server.pem')
|
||||
|
||||
|
||||
class ServerTLSContext(object):
|
||||
"""A TLS Context which presents our test cert."""
|
||||
|
||||
def __init__(self):
|
||||
self.filename = get_test_cert_file()
|
||||
|
||||
|
||||
@@ -79,12 +79,12 @@ class MatrixFederationAgentTests(TestCase):
|
||||
# stubbing that out here.
|
||||
client_protocol = client_factory.buildProtocol(None)
|
||||
client_protocol.makeConnection(
|
||||
FakeTransport(server_tls_protocol, self.reactor, client_protocol),
|
||||
FakeTransport(server_tls_protocol, self.reactor, client_protocol)
|
||||
)
|
||||
|
||||
# tell the server tls protocol to send its stuff back to the client, too
|
||||
server_tls_protocol.makeConnection(
|
||||
FakeTransport(client_protocol, self.reactor, server_tls_protocol),
|
||||
FakeTransport(client_protocol, self.reactor, server_tls_protocol)
|
||||
)
|
||||
|
||||
# give the reactor a pump to get the TLS juices flowing.
|
||||
@@ -125,7 +125,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
_check_logcontext(context)
|
||||
|
||||
def _handle_well_known_connection(
|
||||
self, client_factory, expected_sni, content, response_headers={},
|
||||
self, client_factory, expected_sni, content, response_headers={}
|
||||
):
|
||||
"""Handle an outgoing HTTPs connection: wire it up to a server, check that the
|
||||
request is for a .well-known, and send the response.
|
||||
@@ -139,8 +139,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
"""
|
||||
# make the connection for .well-known
|
||||
well_known_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=expected_sni,
|
||||
client_factory, expected_sni=expected_sni
|
||||
)
|
||||
# check the .well-known request and send a response
|
||||
self.assertEqual(len(well_known_server.requests), 1)
|
||||
@@ -154,17 +153,14 @@ class MatrixFederationAgentTests(TestCase):
|
||||
"""
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/.well-known/matrix/server')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'testserv'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
|
||||
# send back a response
|
||||
for k, v in headers.items():
|
||||
request.setHeader(k, v)
|
||||
request.write(content)
|
||||
request.finish()
|
||||
|
||||
self.reactor.pump((0.1, ))
|
||||
self.reactor.pump((0.1,))
|
||||
|
||||
def test_get(self):
|
||||
"""
|
||||
@@ -184,18 +180,14 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8448)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b"testserv",
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=b"testserv")
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'testserv:8448']
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'testserv:8448']
|
||||
)
|
||||
content = request.content.read()
|
||||
self.assertEqual(content, b'')
|
||||
@@ -244,19 +236,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8448)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=None,
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=None)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'1.2.3.4'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'1.2.3.4'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -285,19 +271,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8448)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=None,
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=None)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'[::1]'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'[::1]'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -326,19 +306,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 80)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=None,
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=None)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'[::1]:80'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'[::1]:80'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -377,7 +351,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# now there should be a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.testserv",
|
||||
b"_matrix._tcp.testserv"
|
||||
)
|
||||
|
||||
# we should fall back to a direct connection
|
||||
@@ -387,19 +361,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8448)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'testserv',
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=b'testserv')
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'testserv'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -427,13 +395,14 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 443)
|
||||
|
||||
self._handle_well_known_connection(
|
||||
client_factory, expected_sni=b"testserv",
|
||||
client_factory,
|
||||
expected_sni=b"testserv",
|
||||
content=b'{ "m.server": "target-server" }',
|
||||
)
|
||||
|
||||
# there should be a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.target-server",
|
||||
b"_matrix._tcp.target-server"
|
||||
)
|
||||
|
||||
# now we should get a connection to the target server
|
||||
@@ -444,8 +413,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'target-server',
|
||||
client_factory, expected_sni=b'target-server'
|
||||
)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
@@ -453,8 +421,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'target-server'],
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
|
||||
)
|
||||
|
||||
# finish the request
|
||||
@@ -490,8 +457,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 443)
|
||||
|
||||
redirect_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b"testserv",
|
||||
client_factory, expected_sni=b"testserv"
|
||||
)
|
||||
|
||||
# send a 302 redirect
|
||||
@@ -500,7 +466,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
request.redirect(b'https://testserv/even_better_known')
|
||||
request.finish()
|
||||
|
||||
self.reactor.pump((0.1, ))
|
||||
self.reactor.pump((0.1,))
|
||||
|
||||
# now there should be another connection
|
||||
clients = self.reactor.tcpClients
|
||||
@@ -510,8 +476,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 443)
|
||||
|
||||
well_known_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b"testserv",
|
||||
client_factory, expected_sni=b"testserv"
|
||||
)
|
||||
|
||||
self.assertEqual(len(well_known_server.requests), 1, "No request after 302")
|
||||
@@ -521,11 +486,11 @@ class MatrixFederationAgentTests(TestCase):
|
||||
request.write(b'{ "m.server": "target-server" }')
|
||||
request.finish()
|
||||
|
||||
self.reactor.pump((0.1, ))
|
||||
self.reactor.pump((0.1,))
|
||||
|
||||
# there should be a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.target-server",
|
||||
b"_matrix._tcp.target-server"
|
||||
)
|
||||
|
||||
# now we should get a connection to the target server
|
||||
@@ -536,8 +501,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'target-server',
|
||||
client_factory, expected_sni=b'target-server'
|
||||
)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
@@ -545,8 +509,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'target-server'],
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
|
||||
)
|
||||
|
||||
# finish the request
|
||||
@@ -585,12 +548,12 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 443)
|
||||
|
||||
self._handle_well_known_connection(
|
||||
client_factory, expected_sni=b"testserv", content=b'NOT JSON',
|
||||
client_factory, expected_sni=b"testserv", content=b'NOT JSON'
|
||||
)
|
||||
|
||||
# now there should be a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.testserv",
|
||||
b"_matrix._tcp.testserv"
|
||||
)
|
||||
|
||||
# we should fall back to a direct connection
|
||||
@@ -600,19 +563,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8448)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'testserv',
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=b'testserv')
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'testserv'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -635,7 +592,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# the request for a .well-known will have failed with a DNS lookup error.
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.testserv",
|
||||
b"_matrix._tcp.testserv"
|
||||
)
|
||||
|
||||
# Make sure treq is trying to connect
|
||||
@@ -646,19 +603,13 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 8443)
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'testserv',
|
||||
)
|
||||
http_server = self._make_connection(client_factory, expected_sni=b'testserv')
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
request = http_server.requests[0]
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'testserv'],
|
||||
)
|
||||
self.assertEqual(request.requestHeaders.getRawHeaders(b'host'), [b'testserv'])
|
||||
|
||||
# finish the request
|
||||
request.finish()
|
||||
@@ -685,17 +636,18 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(port, 443)
|
||||
|
||||
self.mock_resolver.resolve_service.side_effect = lambda _: [
|
||||
Server(host=b"srvtarget", port=8443),
|
||||
Server(host=b"srvtarget", port=8443)
|
||||
]
|
||||
|
||||
self._handle_well_known_connection(
|
||||
client_factory, expected_sni=b"testserv",
|
||||
client_factory,
|
||||
expected_sni=b"testserv",
|
||||
content=b'{ "m.server": "target-server" }',
|
||||
)
|
||||
|
||||
# there should be a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.target-server",
|
||||
b"_matrix._tcp.target-server"
|
||||
)
|
||||
|
||||
# now we should get a connection to the target of the SRV record
|
||||
@@ -706,8 +658,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'target-server',
|
||||
client_factory, expected_sni=b'target-server'
|
||||
)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
@@ -715,8 +666,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'target-server'],
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'target-server']
|
||||
)
|
||||
|
||||
# finish the request
|
||||
@@ -757,7 +707,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# now there should have been a SRV lookup
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.xn--bcher-kva.com",
|
||||
b"_matrix._tcp.xn--bcher-kva.com"
|
||||
)
|
||||
|
||||
# We should fall back to port 8448
|
||||
@@ -769,8 +719,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'xn--bcher-kva.com',
|
||||
client_factory, expected_sni=b'xn--bcher-kva.com'
|
||||
)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
@@ -778,8 +727,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'xn--bcher-kva.com'],
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'xn--bcher-kva.com']
|
||||
)
|
||||
|
||||
# finish the request
|
||||
@@ -801,7 +749,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertNoResult(test_d)
|
||||
|
||||
self.mock_resolver.resolve_service.assert_called_once_with(
|
||||
b"_matrix._tcp.xn--bcher-kva.com",
|
||||
b"_matrix._tcp.xn--bcher-kva.com"
|
||||
)
|
||||
|
||||
# Make sure treq is trying to connect
|
||||
@@ -813,8 +761,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
|
||||
# make a test server, and wire up the client
|
||||
http_server = self._make_connection(
|
||||
client_factory,
|
||||
expected_sni=b'xn--bcher-kva.com',
|
||||
client_factory, expected_sni=b'xn--bcher-kva.com'
|
||||
)
|
||||
|
||||
self.assertEqual(len(http_server.requests), 1)
|
||||
@@ -822,8 +769,7 @@ class MatrixFederationAgentTests(TestCase):
|
||||
self.assertEqual(request.method, b'GET')
|
||||
self.assertEqual(request.path, b'/foo/bar')
|
||||
self.assertEqual(
|
||||
request.requestHeaders.getRawHeaders(b'host'),
|
||||
[b'xn--bcher-kva.com'],
|
||||
request.requestHeaders.getRawHeaders(b'host'), [b'xn--bcher-kva.com']
|
||||
)
|
||||
|
||||
# finish the request
|
||||
@@ -897,67 +843,70 @@ class TestCachePeriodFromHeaders(TestCase):
|
||||
# uppercase
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'Cache-Control': [b'foo, Max-Age = 100, bar']}),
|
||||
), 100,
|
||||
Headers({b'Cache-Control': [b'foo, Max-Age = 100, bar']})
|
||||
),
|
||||
100,
|
||||
)
|
||||
|
||||
# missing value
|
||||
self.assertIsNone(_cache_period_from_headers(
|
||||
Headers({b'Cache-Control': [b'max-age=, bar']}),
|
||||
))
|
||||
self.assertIsNone(
|
||||
_cache_period_from_headers(Headers({b'Cache-Control': [b'max-age=, bar']}))
|
||||
)
|
||||
|
||||
# hackernews: bogus due to semicolon
|
||||
self.assertIsNone(_cache_period_from_headers(
|
||||
Headers({b'Cache-Control': [b'private; max-age=0']}),
|
||||
))
|
||||
self.assertIsNone(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'Cache-Control': [b'private; max-age=0']})
|
||||
)
|
||||
)
|
||||
|
||||
# github
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'Cache-Control': [b'max-age=0, private, must-revalidate']}),
|
||||
), 0,
|
||||
Headers({b'Cache-Control': [b'max-age=0, private, must-revalidate']})
|
||||
),
|
||||
0,
|
||||
)
|
||||
|
||||
# google
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'cache-control': [b'private, max-age=0']}),
|
||||
), 0,
|
||||
Headers({b'cache-control': [b'private, max-age=0']})
|
||||
),
|
||||
0,
|
||||
)
|
||||
|
||||
def test_expires(self):
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT']}),
|
||||
time_now=lambda: 1548833700
|
||||
), 33,
|
||||
time_now=lambda: 1548833700,
|
||||
),
|
||||
33,
|
||||
)
|
||||
|
||||
# cache-control overrides expires
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({
|
||||
b'cache-control': [b'max-age=10'],
|
||||
b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT']
|
||||
}),
|
||||
time_now=lambda: 1548833700
|
||||
), 10,
|
||||
Headers(
|
||||
{
|
||||
b'cache-control': [b'max-age=10'],
|
||||
b'Expires': [b'Wed, 30 Jan 2019 07:35:33 GMT'],
|
||||
}
|
||||
),
|
||||
time_now=lambda: 1548833700,
|
||||
),
|
||||
10,
|
||||
)
|
||||
|
||||
# invalid expires means immediate expiry
|
||||
self.assertEqual(
|
||||
_cache_period_from_headers(
|
||||
Headers({b'Expires': [b'0']}),
|
||||
), 0,
|
||||
)
|
||||
self.assertEqual(_cache_period_from_headers(Headers({b'Expires': [b'0']})), 0)
|
||||
|
||||
|
||||
def _check_logcontext(context):
|
||||
current = LoggingContext.current_context()
|
||||
if current is not context:
|
||||
raise AssertionError(
|
||||
"Expected logcontext %s but was %s" % (context, current),
|
||||
)
|
||||
raise AssertionError("Expected logcontext %s but was %s" % (context, current))
|
||||
|
||||
|
||||
def _build_test_server():
|
||||
@@ -973,7 +922,7 @@ def _build_test_server():
|
||||
server_factory.log = _log_request
|
||||
|
||||
server_tls_factory = TLSMemoryBIOFactory(
|
||||
ServerTLSContext(), isClient=False, wrappedFactory=server_factory,
|
||||
ServerTLSContext(), isClient=False, wrappedFactory=server_factory
|
||||
)
|
||||
|
||||
return server_tls_factory.buildProtocol(None)
|
||||
@@ -987,6 +936,7 @@ def _log_request(request):
|
||||
@implementer(IPolicyForHTTPS)
|
||||
class TrustingTLSPolicyForHTTPS(object):
|
||||
"""An IPolicyForHTTPS which doesn't do any certificate verification"""
|
||||
|
||||
def creatorForNetloc(self, hostname, port):
|
||||
certificateOptions = OpenSSLCertificateOptions()
|
||||
return ClientTLSOptions(hostname, certificateOptions.getContext())
|
||||
|
||||
@@ -68,9 +68,7 @@ class SrvResolverTestCase(unittest.TestCase):
|
||||
|
||||
dns_client_mock.lookupService.assert_called_once_with(service_name)
|
||||
|
||||
result_deferred.callback(
|
||||
([answer_srv], None, None)
|
||||
)
|
||||
result_deferred.callback(([answer_srv], None, None))
|
||||
|
||||
servers = self.successResultOf(test_d)
|
||||
|
||||
@@ -112,7 +110,7 @@ class SrvResolverTestCase(unittest.TestCase):
|
||||
|
||||
cache = {service_name: [entry]}
|
||||
resolver = SrvResolver(
|
||||
dns_client=dns_client_mock, cache=cache, get_time=clock.time,
|
||||
dns_client=dns_client_mock, cache=cache, get_time=clock.time
|
||||
)
|
||||
|
||||
servers = yield resolver.resolve_service(service_name)
|
||||
@@ -168,11 +166,13 @@ class SrvResolverTestCase(unittest.TestCase):
|
||||
self.assertNoResult(resolve_d)
|
||||
|
||||
# returning a single "." should make the lookup fail with a ConnectError
|
||||
lookup_deferred.callback((
|
||||
[dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"."))],
|
||||
None,
|
||||
None,
|
||||
))
|
||||
lookup_deferred.callback(
|
||||
(
|
||||
[dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"."))],
|
||||
None,
|
||||
None,
|
||||
)
|
||||
)
|
||||
|
||||
self.failureResultOf(resolve_d, ConnectError)
|
||||
|
||||
@@ -191,14 +191,16 @@ class SrvResolverTestCase(unittest.TestCase):
|
||||
resolve_d = resolver.resolve_service(service_name)
|
||||
self.assertNoResult(resolve_d)
|
||||
|
||||
lookup_deferred.callback((
|
||||
[
|
||||
dns.RRHeader(type=dns.A, payload=dns.Record_A()),
|
||||
dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"host")),
|
||||
],
|
||||
None,
|
||||
None,
|
||||
))
|
||||
lookup_deferred.callback(
|
||||
(
|
||||
[
|
||||
dns.RRHeader(type=dns.A, payload=dns.Record_A()),
|
||||
dns.RRHeader(type=dns.SRV, payload=dns.Record_SRV(target=b"host")),
|
||||
],
|
||||
None,
|
||||
None,
|
||||
)
|
||||
)
|
||||
|
||||
servers = self.successResultOf(resolve_d)
|
||||
|
||||
|
||||
@@ -38,9 +38,7 @@ from tests.unittest import HomeserverTestCase
|
||||
def check_logcontext(context):
|
||||
current = LoggingContext.current_context()
|
||||
if current is not context:
|
||||
raise AssertionError(
|
||||
"Expected logcontext %s but was %s" % (context, current),
|
||||
)
|
||||
raise AssertionError("Expected logcontext %s but was %s" % (context, current))
|
||||
|
||||
|
||||
class FederationClientTests(HomeserverTestCase):
|
||||
@@ -56,6 +54,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
"""
|
||||
happy-path test of a GET request
|
||||
"""
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def do_request():
|
||||
with LoggingContext("one") as context:
|
||||
@@ -177,8 +176,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
|
||||
self.assertIsInstance(f.value, RequestSendFailed)
|
||||
self.assertIsInstance(
|
||||
f.value.inner_exception,
|
||||
(ConnectingCancelledError, TimeoutError),
|
||||
f.value.inner_exception, (ConnectingCancelledError, TimeoutError)
|
||||
)
|
||||
|
||||
def test_client_connect_no_response(self):
|
||||
@@ -287,9 +285,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
Once the client gets the headers, _request returns successfully.
|
||||
"""
|
||||
request = MatrixFederationRequest(
|
||||
method="GET",
|
||||
destination="testserv:8008",
|
||||
path="foo/bar",
|
||||
method="GET", destination="testserv:8008", path="foo/bar"
|
||||
)
|
||||
d = self.cl._send_request(request, timeout=10000)
|
||||
|
||||
@@ -329,8 +325,10 @@ class FederationClientTests(HomeserverTestCase):
|
||||
|
||||
# Send it the HTTP response
|
||||
client.dataReceived(
|
||||
(b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n"
|
||||
b"Server: Fake\r\n\r\n")
|
||||
(
|
||||
b"HTTP/1.1 200 OK\r\nContent-Type: application/json\r\n"
|
||||
b"Server: Fake\r\n\r\n"
|
||||
)
|
||||
)
|
||||
|
||||
# Push by enough to time it out
|
||||
@@ -345,9 +343,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
requiring a trailing slash. We need to retry the request with a
|
||||
trailing slash. Workaround for Synapse <= v0.99.3, explained in #3622.
|
||||
"""
|
||||
d = self.cl.get_json(
|
||||
"testserv:8008", "foo/bar", try_trailing_slash_on_400=True,
|
||||
)
|
||||
d = self.cl.get_json("testserv:8008", "foo/bar", try_trailing_slash_on_400=True)
|
||||
|
||||
# Send the request
|
||||
self.pump()
|
||||
@@ -400,9 +396,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
|
||||
See test_client_requires_trailing_slashes() for context.
|
||||
"""
|
||||
d = self.cl.get_json(
|
||||
"testserv:8008", "foo/bar", try_trailing_slash_on_400=True,
|
||||
)
|
||||
d = self.cl.get_json("testserv:8008", "foo/bar", try_trailing_slash_on_400=True)
|
||||
|
||||
# Send the request
|
||||
self.pump()
|
||||
@@ -439,10 +433,7 @@ class FederationClientTests(HomeserverTestCase):
|
||||
self.failureResultOf(d)
|
||||
|
||||
def test_client_sends_body(self):
|
||||
self.cl.post_json(
|
||||
"testserv:8008", "foo/bar", timeout=10000,
|
||||
data={"a": "b"}
|
||||
)
|
||||
self.cl.post_json("testserv:8008", "foo/bar", timeout=10000, data={"a": "b"})
|
||||
|
||||
self.pump()
|
||||
|
||||
|
||||
@@ -45,7 +45,9 @@ def do_patch():
|
||||
except Exception:
|
||||
if LoggingContext.current_context() != start_context:
|
||||
err = "%s changed context from %s to %s on exception" % (
|
||||
f, start_context, LoggingContext.current_context()
|
||||
f,
|
||||
start_context,
|
||||
LoggingContext.current_context(),
|
||||
)
|
||||
print(err, file=sys.stderr)
|
||||
raise Exception(err)
|
||||
@@ -54,7 +56,9 @@ def do_patch():
|
||||
if not isinstance(res, Deferred) or res.called:
|
||||
if LoggingContext.current_context() != start_context:
|
||||
err = "%s changed context from %s to %s" % (
|
||||
f, start_context, LoggingContext.current_context()
|
||||
f,
|
||||
start_context,
|
||||
LoggingContext.current_context(),
|
||||
)
|
||||
# print the error to stderr because otherwise all we
|
||||
# see in travis-ci is the 500 error
|
||||
@@ -66,9 +70,7 @@ def do_patch():
|
||||
err = (
|
||||
"%s returned incomplete deferred in non-sentinel context "
|
||||
"%s (start was %s)"
|
||||
) % (
|
||||
f, LoggingContext.current_context(), start_context,
|
||||
)
|
||||
) % (f, LoggingContext.current_context(), start_context)
|
||||
print(err, file=sys.stderr)
|
||||
raise Exception(err)
|
||||
|
||||
@@ -76,7 +78,9 @@ def do_patch():
|
||||
if LoggingContext.current_context() != start_context:
|
||||
err = "%s completion of %s changed context from %s to %s" % (
|
||||
"Failure" if isinstance(r, Failure) else "Success",
|
||||
f, start_context, LoggingContext.current_context(),
|
||||
f,
|
||||
start_context,
|
||||
LoggingContext.current_context(),
|
||||
)
|
||||
print(err, file=sys.stderr)
|
||||
raise Exception(err)
|
||||
|
||||
@@ -19,7 +19,8 @@ import pkg_resources

from twisted.internet.defer import Deferred

from synapse.rest.client.v1 import admin, login, room
import synapse.rest.admin
from synapse.rest.client.v1 import login, room

from tests.unittest import HomeserverTestCase

@@ -33,7 +34,7 @@ class EmailPusherTests(HomeserverTestCase):

skip = "No Jinja installed" if not load_jinja2_templates else None
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]

@@ -17,7 +17,8 @@ from mock import Mock

from twisted.internet.defer import Deferred

from synapse.rest.client.v1 import admin, login, room
import synapse.rest.admin
from synapse.rest.client.v1 import login, room
from synapse.util.logcontext import make_deferred_yieldable

from tests.unittest import HomeserverTestCase
@@ -32,7 +33,7 @@ class HTTPPusherTests(HomeserverTestCase):

skip = "No Jinja installed" if not load_jinja2_templates else None
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
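The pusher hunks above show a pattern that repeats through most of the remaining test files: the admin servlets have moved out of `synapse.rest.client.v1.admin`, and tests now register them with `synapse.rest.admin.register_servlets_for_client_rest_resource`. A minimal test case in the new style (the test body itself is illustrative only; the imports and `servlets` list mirror what the diff introduces):

```
import synapse.rest.admin
from synapse.rest.client.v1 import login, room

from tests.unittest import HomeserverTestCase


class ExampleTestCase(HomeserverTestCase):
    # Register the relocated admin servlets alongside the usual client ones.
    servlets = [
        synapse.rest.admin.register_servlets_for_client_rest_resource,
        room.register_servlets,
        login.register_servlets,
    ]

    def test_example(self):
        # Illustrative use of the standard HomeserverTestCase helpers.
        self.register_user("alice", "password")
        self.login("alice", "password")
```

In the existing tests only the `servlets` entry and the import change; the rest of each test case is untouched apart from black reformatting.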
@@ -74,21 +74,18 @@ class BaseSlavedStoreTestCase(unittest.HomeserverTestCase):
self.assertEqual(
master_result,
expected_result,
"Expected master result to be %r but was %r" % (
expected_result, master_result
),
"Expected master result to be %r but was %r"
% (expected_result, master_result),
)
self.assertEqual(
slaved_result,
expected_result,
"Expected slave result to be %r but was %r" % (
expected_result, slaved_result
),
"Expected slave result to be %r but was %r"
% (expected_result, slaved_result),
)
self.assertEqual(
master_result,
slaved_result,
"Slave result %r does not match master result %r" % (
slaved_result, master_result
),
"Slave result %r does not match master result %r"
% (slaved_result, master_result),
)

@@ -234,10 +234,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
type="m.room.member", sender=USER_ID_2, key=USER_ID_2, membership="join"
)
msg, msgctx = self.build_event()
self.get_success(self.master_store.persist_events([
(j2, j2ctx),
(msg, msgctx),
]))
self.get_success(self.master_store.persist_events([(j2, j2ctx), (msg, msgctx)]))
self.replicate()

event_source = RoomEventSource(self.hs)
@@ -257,15 +254,13 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
#
# First, we get a list of the rooms we are joined to
joined_rooms = self.get_success(
self.slaved_store.get_rooms_for_user_with_stream_ordering(
USER_ID_2,
),
self.slaved_store.get_rooms_for_user_with_stream_ordering(USER_ID_2)
)

# Then, we get a list of the events since the last sync
membership_changes = self.get_success(
self.slaved_store.get_membership_changes_for_user(
USER_ID_2, prev_token, current_token,
USER_ID_2, prev_token, current_token
)
)

@@ -298,9 +293,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
self.master_store.persist_events([(event, context)], backfilled=True)
)
else:
self.get_success(
self.master_store.persist_event(event, context)
)
self.get_success(self.master_store.persist_event(event, context))

return event

@@ -359,9 +352,7 @@ class SlavedEventStoreTestCase(BaseSlavedStoreTestCase):
)
else:
state_handler = self.hs.get_state_handler()
context = self.get_success(state_handler.compute_event_context(
event
))
context = self.get_success(state_handler.compute_event_context(event))

self.master_store.add_push_actions_to_staging(
event.event_id, {user_id: actions for user_id, actions in push_actions}

@@ -22,6 +22,7 @@ from tests.server import FakeTransport

class BaseStreamTestCase(unittest.HomeserverTestCase):
"""Base class for tests of the replication streams"""

def prepare(self, reactor, clock, hs):
# build a replication server
server_factory = ReplicationStreamProtocolFactory(self.hs)
@@ -52,6 +53,7 @@ class BaseStreamTestCase(unittest.HomeserverTestCase):

class TestReplicationClientHandler(object):
"""Drop-in for ReplicationClientHandler which just collects RDATA rows"""

def __init__(self):
self.received_rdata_rows = []

@@ -69,6 +71,4 @@ class TestReplicationClientHandler(object):

def on_rdata(self, stream_name, token, rows):
for r in rows:
self.received_rdata_rows.append(
(stream_name, token, r)
)
self.received_rdata_rows.append((stream_name, token, r))
14
tests/rest/admin/__init__.py
Normal file
@@ -0,0 +1,14 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
@@ -19,50 +19,37 @@ import json

from mock import Mock

import synapse.rest.admin
from synapse.api.constants import UserTypes
from synapse.rest.client.v1 import admin, events, login, room
from synapse.http.server import JsonResource
from synapse.rest.admin import VersionServlet
from synapse.rest.client.v1 import events, login, room
from synapse.rest.client.v2_alpha import groups

from tests import unittest

class VersionTestCase(unittest.HomeserverTestCase):
url = '/_synapse/admin/v1/server_version'

servlets = [
admin.register_servlets,
login.register_servlets,
]

url = '/_matrix/client/r0/admin/server_version'
def create_test_json_resource(self):
resource = JsonResource(self.hs)
VersionServlet(self.hs).register(resource)
return resource

def test_version_string(self):
self.register_user("admin", "pass", admin=True)
self.admin_token = self.login("admin", "pass")

request, channel = self.make_request("GET", self.url,
access_token=self.admin_token)
request, channel = self.make_request("GET", self.url, shorthand=False)
self.render(request)

self.assertEqual(200, int(channel.result["code"]),
msg=channel.result["body"])
self.assertEqual({'server_version', 'python_version'},
set(channel.json_body.keys()))

def test_inaccessible_to_non_admins(self):
self.register_user("unprivileged-user", "pass", admin=False)
user_token = self.login("unprivileged-user", "pass")

request, channel = self.make_request("GET", self.url,
access_token=user_token)
self.render(request)

self.assertEqual(403, int(channel.result['code']),
msg=channel.result['body'])
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])
self.assertEqual(
{'server_version', 'python_version'}, set(channel.json_body.keys())
)

class UserRegisterTestCase(unittest.HomeserverTestCase):

servlets = [admin.register_servlets]
servlets = [synapse.rest.admin.register_servlets_for_client_rest_resource]

def make_homeserver(self, reactor, clock):
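`VersionTestCase` above moves the endpoint to `/_synapse/admin/v1/server_version` and now fetches it without an access token, expecting a 200 whose body carries exactly the `server_version` and `python_version` keys; the old non-admin 403 test is gone along with the auth requirement. Against a homeserver assumed to be listening on localhost:8008, the same check outside the test harness is just:

```
import json
from urllib.request import urlopen

# No access token needed; the response carries the two keys the test asserts:
# "server_version" and "python_version". The host/port are an assumption.
with urlopen("http://localhost:8008/_synapse/admin/v1/server_version") as resp:
    print(json.load(resp))
```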
@@ -213,9 +200,7 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
nonce = channel.json_body["nonce"]

want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)
want_mac.update(
nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin"
)
want_mac.update(nonce.encode('ascii') + b"\x00bob\x00abc123\x00admin")
want_mac = want_mac.hexdigest()

body = json.dumps(
@@ -343,11 +328,13 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):
#

# Invalid user_type
body = json.dumps({
"nonce": nonce(),
"username": "a",
"password": "1234",
"user_type": "invalid"}
body = json.dumps(
{
"nonce": nonce(),
"username": "a",
"password": "1234",
"user_type": "invalid",
}
)
request, channel = self.make_request("POST", self.url, body.encode('utf8'))
self.render(request)
@@ -358,7 +345,7 @@ class UserRegisterTestCase(unittest.HomeserverTestCase):

class ShutdownRoomTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
events.register_servlets,
room.register_servlets,
@@ -370,9 +357,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
hs.config.user_consent_version = "1"

consent_uri_builder = Mock()
consent_uri_builder.build_user_consent_uri.return_value = (
"http://example.com"
)
consent_uri_builder.build_user_consent_uri.return_value = "http://example.com"
self.event_creation_handler._consent_uri_builder = consent_uri_builder

self.store = hs.get_datastore()
@@ -384,9 +369,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
self.other_user_token = self.login("user", "pass")

# Mark the admin user as having consented
self.get_success(
self.store.user_set_consent_version(self.admin_user, "1"),
)
self.get_success(self.store.user_set_consent_version(self.admin_user, "1"))

def test_shutdown_room_consent(self):
"""Test that we can shutdown rooms with local users who have not
@@ -398,9 +381,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
room_id = self.helper.create_room_as(self.other_user, tok=self.other_user_token)

# Assert one user in room
users_in_room = self.get_success(
self.store.get_users_in_room(room_id),
)
users_in_room = self.get_success(self.store.get_users_in_room(room_id))
self.assertEqual([self.other_user], users_in_room)

# Enable require consent to send events
@@ -408,8 +389,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):

# Assert that the user is getting consent error
self.helper.send(
room_id,
body="foo", tok=self.other_user_token, expect_code=403,
room_id, body="foo", tok=self.other_user_token, expect_code=403
)

# Test that the admin can still send shutdown
@@ -425,9 +405,7 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

# Assert there is now no longer anyone in the room
users_in_room = self.get_success(
self.store.get_users_in_room(room_id),
)
users_in_room = self.get_success(self.store.get_users_in_room(room_id))
self.assertEqual([], users_in_room)

@unittest.DEBUG
@@ -472,30 +450,26 @@ class ShutdownRoomTestCase(unittest.HomeserverTestCase):

url = "rooms/%s/initialSync" % (room_id,)
request, channel = self.make_request(
"GET",
url.encode('ascii'),
access_token=self.admin_user_tok,
"GET", url.encode('ascii'), access_token=self.admin_user_tok
)
self.render(request)
self.assertEqual(
expect_code, int(channel.result["code"]), msg=channel.result["body"],
expect_code, int(channel.result["code"]), msg=channel.result["body"]
)

url = "events?timeout=0&room_id=" + room_id
request, channel = self.make_request(
"GET",
url.encode('ascii'),
access_token=self.admin_user_tok,
"GET", url.encode('ascii'), access_token=self.admin_user_tok
)
self.render(request)
self.assertEqual(
expect_code, int(channel.result["code"]), msg=channel.result["body"],
expect_code, int(channel.result["code"]), msg=channel.result["body"]
)

class DeleteGroupTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
groups.register_servlets,
]
@@ -515,15 +489,11 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
"POST",
"/create_group".encode('ascii'),
access_token=self.admin_user_tok,
content={
"localpart": "test",
}
content={"localpart": "test"},
)

self.render(request)
self.assertEqual(
200, int(channel.result["code"]), msg=channel.result["body"],
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

group_id = channel.json_body["group_id"]

@@ -533,27 +503,17 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):

url = "/groups/%s/admin/users/invite/%s" % (group_id, self.other_user)
request, channel = self.make_request(
"PUT",
url.encode('ascii'),
access_token=self.admin_user_tok,
content={}
"PUT", url.encode('ascii'), access_token=self.admin_user_tok, content={}
)
self.render(request)
self.assertEqual(
200, int(channel.result["code"]), msg=channel.result["body"],
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

url = "/groups/%s/self/accept_invite" % (group_id,)
request, channel = self.make_request(
"PUT",
url.encode('ascii'),
access_token=self.other_user_token,
content={}
"PUT", url.encode('ascii'), access_token=self.other_user_token, content={}
)
self.render(request)
self.assertEqual(
200, int(channel.result["code"]), msg=channel.result["body"],
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

# Check other user knows they're in the group
self.assertIn(group_id, self._get_groups_user_is_in(self.admin_user_tok))
@@ -565,15 +525,11 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):
"POST",
url.encode('ascii'),
access_token=self.admin_user_tok,
content={
"localpart": "test",
}
content={"localpart": "test"},
)

self.render(request)
self.assertEqual(
200, int(channel.result["code"]), msg=channel.result["body"],
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

# Check group returns 404
self._check_group(group_id, expect_code=404)
@@ -589,28 +545,22 @@ class DeleteGroupTestCase(unittest.HomeserverTestCase):

url = "/groups/%s/profile" % (group_id,)
request, channel = self.make_request(
"GET",
url.encode('ascii'),
access_token=self.admin_user_tok,
"GET", url.encode('ascii'), access_token=self.admin_user_tok
)

self.render(request)
self.assertEqual(
expect_code, int(channel.result["code"]), msg=channel.result["body"],
expect_code, int(channel.result["code"]), msg=channel.result["body"]
)

def _get_groups_user_is_in(self, access_token):
"""Returns the list of groups the user is in (given their access token)
"""
request, channel = self.make_request(
"GET",
"/joined_groups".encode('ascii'),
access_token=access_token,
"GET", "/joined_groups".encode('ascii'), access_token=access_token
)

self.render(request)
self.assertEqual(
200, int(channel.result["code"]), msg=channel.result["body"],
)
self.assertEqual(200, int(channel.result["code"]), msg=channel.result["body"])

return channel.json_body["groups"]
@@ -15,8 +15,9 @@

import os

import synapse.rest.admin
from synapse.api.urls import ConsentURIBuilder
from synapse.rest.client.v1 import admin, login, room
from synapse.rest.client.v1 import login, room
from synapse.rest.consent import consent_resource

from tests import unittest
@@ -31,7 +32,7 @@ except Exception:
class ConsentResourceTestCase(unittest.HomeserverTestCase):
skip = "No Jinja installed" if not load_jinja2_templates else None
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]

@@ -15,7 +15,8 @@

import json

from synapse.rest.client.v1 import admin, login, room
import synapse.rest.admin
from synapse.rest.client.v1 import login, room

from tests import unittest

@@ -23,7 +24,7 @@ from tests import unittest
class IdentityTestCase(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
@@ -43,7 +44,7 @@ class IdentityTestCase(unittest.HomeserverTestCase):
tok = self.login("kermit", "monkey")

request, channel = self.make_request(
b"POST", "/createRoom", b"{}", access_token=tok,
b"POST", "/createRoom", b"{}", access_token=tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)
@@ -55,11 +56,9 @@ class IdentityTestCase(unittest.HomeserverTestCase):
"address": "test@example.com",
}
request_data = json.dumps(params)
request_url = (
"/rooms/%s/invite" % (room_id)
).encode('ascii')
request_url = ("/rooms/%s/invite" % (room_id)).encode('ascii')
request, channel = self.make_request(
b"POST", request_url, request_data, access_token=tok,
b"POST", request_url, request_data, access_token=tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"403", channel.result)
150
tests/rest/client/v1/test_directory.py
Normal file
@@ -0,0 +1,150 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import json

from synapse.rest import admin
from synapse.rest.client.v1 import directory, login, room
from synapse.types import RoomAlias
from synapse.util.stringutils import random_string

from tests import unittest

class DirectoryTestCase(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets_for_client_rest_resource,
directory.register_servlets,
login.register_servlets,
room.register_servlets,
]

def make_homeserver(self, reactor, clock):
config = self.default_config()
config.require_membership_for_aliases = True

self.hs = self.setup_test_homeserver(config=config)

return self.hs

def prepare(self, reactor, clock, homeserver):
self.room_owner = self.register_user("room_owner", "test")
self.room_owner_tok = self.login("room_owner", "test")

self.room_id = self.helper.create_room_as(
self.room_owner, tok=self.room_owner_tok
)

self.user = self.register_user("user", "test")
self.user_tok = self.login("user", "test")

def test_state_event_not_in_room(self):
self.ensure_user_left_room()
self.set_alias_via_state_event(403)

def test_directory_endpoint_not_in_room(self):
self.ensure_user_left_room()
self.set_alias_via_directory(403)

def test_state_event_in_room_too_long(self):
self.ensure_user_joined_room()
self.set_alias_via_state_event(400, alias_length=256)

def test_directory_in_room_too_long(self):
self.ensure_user_joined_room()
self.set_alias_via_directory(400, alias_length=256)

def test_state_event_in_room(self):
self.ensure_user_joined_room()
self.set_alias_via_state_event(200)

def test_directory_in_room(self):
self.ensure_user_joined_room()
self.set_alias_via_directory(200)

def test_room_creation_too_long(self):
url = "/_matrix/client/r0/createRoom"

# We use deliberately a localpart under the length threshold so
# that we can make sure that the check is done on the whole alias.
data = {"room_alias_name": random_string(256 - len(self.hs.hostname))}
request_data = json.dumps(data)
request, channel = self.make_request(
"POST", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, 400, channel.result)

def test_room_creation(self):
url = "/_matrix/client/r0/createRoom"

# Check with an alias of allowed length. There should already be
# a test that ensures it works in test_register.py, but let's be
# as cautious as possible here.
data = {"room_alias_name": random_string(5)}
request_data = json.dumps(data)
request, channel = self.make_request(
"POST", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)

def set_alias_via_state_event(self, expected_code, alias_length=5):
url = "/_matrix/client/r0/rooms/%s/state/m.room.aliases/%s" % (
self.room_id,
self.hs.hostname,
)

data = {"aliases": [self.random_alias(alias_length)]}
request_data = json.dumps(data)

request, channel = self.make_request(
"PUT", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)

def set_alias_via_directory(self, expected_code, alias_length=5):
url = "/_matrix/client/r0/directory/room/%s" % self.random_alias(alias_length)
data = {"room_id": self.room_id}
request_data = json.dumps(data)

request, channel = self.make_request(
"PUT", url, request_data, access_token=self.user_tok
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)

def random_alias(self, length):
return RoomAlias(random_string(length), self.hs.hostname).to_string()

def ensure_user_left_room(self):
self.ensure_membership("leave")

def ensure_user_joined_room(self):
self.ensure_membership("join")

def ensure_membership(self, membership):
try:
if membership == "leave":
self.helper.leave(room=self.room_id, user=self.user, tok=self.user_tok)
if membership == "join":
self.helper.join(room=self.room_id, user=self.user, tok=self.user_tok)
except AssertionError:
# We don't care whether the leave request didn't return a 200 (e.g.
# if the user isn't already in the room), because we only want to
# make sure the user isn't in the room.
pass
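The new `test_directory.py` pins down two behaviours: with `require_membership_for_aliases` enabled, only users joined to a room may create aliases for it (403 otherwise), and aliases whose full `#localpart:server` form is too long are rejected with a 400, both via the directory API and via `m.room.aliases` state events. A hedged sketch of the length check; the exact cap of 255 characters is an assumption, since the tests above only show that 256-character localparts fail while 5-character ones pass:

```
MAX_ALIAS_LENGTH = 255  # assumed cap; the tests only pin down "256+ fails"


def check_alias_length(alias):
    """Reject aliases like '#something:example.com' that exceed the cap."""
    if len(alias) > MAX_ALIAS_LENGTH:
        raise ValueError(
            "Alias %r is %d characters, above the %d character limit"
            % (alias, len(alias), MAX_ALIAS_LENGTH)
        )


check_alias_length("#short:example.com")  # fine
# check_alias_length("#" + "a" * 300 + ":example.com")  # would raise
```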
@@ -17,7 +17,8 @@

from mock import Mock, NonCallableMock

from synapse.rest.client.v1 import admin, events, login, room
import synapse.rest.admin
from synapse.rest.client.v1 import events, login, room

from tests import unittest

@@ -28,7 +29,7 @@ class EventStreamPermissionsTestCase(unittest.HomeserverTestCase):
servlets = [
events.register_servlets,
room.register_servlets,
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
]

@@ -1,6 +1,7 @@
import json

from synapse.rest.client.v1 import admin, login
import synapse.rest.admin
from synapse.rest.client.v1 import login

from tests import unittest

@@ -10,7 +11,7 @@ LOGIN_URL = b"/_matrix/client/r0/login"
class LoginRestServletTestCase(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
]

@@ -36,10 +37,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit" + str(i),
},
"identifier": {"type": "m.id.user", "user": "kermit" + str(i)},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -56,14 +54,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)

self.reactor.advance(retry_after_ms / 1000.)
self.reactor.advance(retry_after_ms / 1000.0)

params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit" + str(i),
},
"identifier": {"type": "m.id.user", "user": "kermit" + str(i)},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -81,10 +76,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit",
},
"identifier": {"type": "m.id.user", "user": "kermit"},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -101,14 +93,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)

self.reactor.advance(retry_after_ms / 1000.)
self.reactor.advance(retry_after_ms / 1000.0)

params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit",
},
"identifier": {"type": "m.id.user", "user": "kermit"},
"password": "monkey",
}
request_data = json.dumps(params)
@@ -126,10 +115,7 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
for i in range(0, 6):
params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit",
},
"identifier": {"type": "m.id.user", "user": "kermit"},
"password": "notamonkey",
}
request_data = json.dumps(params)
@@ -146,14 +132,11 @@ class LoginRestServletTestCase(unittest.HomeserverTestCase):
# than 1min.
self.assertTrue(retry_after_ms < 6000)

self.reactor.advance(retry_after_ms / 1000.)
self.reactor.advance(retry_after_ms / 1000.0)

params = {
"type": "m.login.password",
"identifier": {
"type": "m.id.user",
"user": "kermit",
},
"identifier": {"type": "m.id.user", "user": "kermit"},
"password": "notamonkey",
}
request_data = json.dumps(params)

@@ -20,7 +20,8 @@ from twisted.internet import defer

import synapse.types
from synapse.api.errors import AuthError, SynapseError
from synapse.rest.client.v1 import profile
from synapse.rest import admin
from synapse.rest.client.v1 import login, profile, room

from tests import unittest

@@ -42,6 +43,7 @@ class ProfileTestCase(unittest.TestCase):
"set_displayname",
"get_avatar_url",
"set_avatar_url",
"check_profile_query_allowed",
]
)

@@ -155,3 +157,77 @@ class ProfileTestCase(unittest.TestCase):
self.assertEquals(mocked_set.call_args[0][0].localpart, "1234ABCD")
self.assertEquals(mocked_set.call_args[0][1].user.localpart, "1234ABCD")
self.assertEquals(mocked_set.call_args[0][2], "http://my.server/pic.gif")

class ProfilesRestrictedTestCase(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets_for_client_rest_resource,
login.register_servlets,
profile.register_servlets,
room.register_servlets,
]

def make_homeserver(self, reactor, clock):

config = self.default_config()
config.require_auth_for_profile_requests = True
self.hs = self.setup_test_homeserver(config=config)

return self.hs

def prepare(self, reactor, clock, hs):
# User owning the requested profile.
self.owner = self.register_user("owner", "pass")
self.owner_tok = self.login("owner", "pass")
self.profile_url = "/profile/%s" % (self.owner)

# User requesting the profile.
self.requester = self.register_user("requester", "pass")
self.requester_tok = self.login("requester", "pass")

self.room_id = self.helper.create_room_as(self.owner, tok=self.owner_tok)

def test_no_auth(self):
self.try_fetch_profile(401)

def test_not_in_shared_room(self):
self.ensure_requester_left_room()

self.try_fetch_profile(403, access_token=self.requester_tok)

def test_in_shared_room(self):
self.ensure_requester_left_room()

self.helper.join(room=self.room_id, user=self.requester, tok=self.requester_tok)

self.try_fetch_profile(200, self.requester_tok)

def try_fetch_profile(self, expected_code, access_token=None):
self.request_profile(expected_code, access_token=access_token)

self.request_profile(
expected_code, url_suffix="/displayname", access_token=access_token
)

self.request_profile(
expected_code, url_suffix="/avatar_url", access_token=access_token
)

def request_profile(self, expected_code, url_suffix="", access_token=None):
request, channel = self.make_request(
"GET", self.profile_url + url_suffix, access_token=access_token
)
self.render(request)
self.assertEqual(channel.code, expected_code, channel.result)

def ensure_requester_left_room(self):
try:
self.helper.leave(
room=self.room_id, user=self.requester, tok=self.requester_tok
)
except AssertionError:
# We don't care whether the leave request didn't return a 200 (e.g.
# if the user isn't already in the room), because we only want to
# make sure the user isn't in the room.
pass
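`ProfilesRestrictedTestCase` exercises the new `require_auth_for_profile_requests` option: an unauthenticated GET of `/profile/...` gets a 401, an authenticated user who shares no room with the profile owner gets a 403, and a user in a shared room gets a 200. A condensed, hypothetical version of that decision; the real check goes through the handlers and the datastore, not a set passed in by hand:

```
def check_profile_access(requester, owner, shared_room_members):
    """Return the HTTP status a profile request should get when
    require_auth_for_profile_requests is enabled.

    `shared_room_members` is a hypothetical set of user IDs that share a
    room with the owner; it stands in for the datastore lookup.
    """
    if requester is None:
        return 401  # no access token at all
    if requester != owner and requester not in shared_room_members:
        return 403  # authenticated, but no shared room
    return 200


assert check_profile_access(None, "@owner:test", set()) == 401
assert check_profile_access("@requester:test", "@owner:test", set()) == 403
assert check_profile_access("@requester:test", "@owner:test", {"@requester:test"}) == 200
```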
@@ -22,8 +22,9 @@ from six.moves.urllib import parse as urlparse

from twisted.internet import defer

import synapse.rest.admin
from synapse.api.constants import Membership
from synapse.rest.client.v1 import admin, login, room
from synapse.rest.client.v1 import login, room

from tests import unittest

@@ -803,7 +804,7 @@ class RoomMessageListTestCase(RoomBase):

class RoomSearchTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
@@ -903,3 +904,35 @@ class RoomSearchTestCase(unittest.HomeserverTestCase):
self.assertEqual(
context["profile_info"][self.other_user_id]["displayname"], "otheruser"
)

class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):

servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]

def make_homeserver(self, reactor, clock):

self.url = b"/_matrix/client/r0/publicRooms"

config = self.default_config()
config.restrict_public_rooms_to_local_users = True
self.hs = self.setup_test_homeserver(config=config)

return self.hs

def test_restricted_no_auth(self):
request, channel = self.make_request("GET", self.url)
self.render(request)
self.assertEqual(channel.code, 401, channel.result)

def test_restricted_auth(self):
self.register_user("user", "pass")
tok = self.login("user", "pass")

request, channel = self.make_request("GET", self.url, access_token=tok)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
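`PublicRoomsRestrictedTestCase` covers the companion option, `restrict_public_rooms_to_local_users`: `/publicRooms` answers 401 without an access token and 200 with one. The same behaviour can be poked at with stdlib HTTP calls, assuming a local homeserver with the option enabled and a valid token for a local user:

```
from urllib.error import HTTPError
from urllib.request import urlopen

# Assumptions: homeserver at localhost:8008 with
# restrict_public_rooms_to_local_users enabled; TOKEN is a valid access token.
BASE = "http://localhost:8008/_matrix/client/r0/publicRooms"
TOKEN = "<access token>"

try:
    urlopen(BASE)
except HTTPError as e:
    print("without a token:", e.code)  # expect 401

with urlopen(BASE + "?access_token=" + TOKEN) as resp:
    print("with a token:", resp.status)  # expect 200 and a JSON room list
```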
@@ -16,8 +16,8 @@

from twisted.internet.defer import succeed

import synapse.rest.admin
from synapse.api.constants import LoginType
from synapse.rest.client.v1 import admin
from synapse.rest.client.v2_alpha import auth, register

from tests import unittest
@@ -27,7 +27,7 @@ class FallbackAuthTests(unittest.HomeserverTestCase):

servlets = [
auth.register_servlets,
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
register.register_servlets,
]
hijack_auth = False

@@ -12,9 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import synapse.rest.admin
from synapse.api.room_versions import DEFAULT_ROOM_VERSION, KNOWN_ROOM_VERSIONS
from synapse.rest.client.v1 import admin, login
from synapse.rest.client.v1 import login
from synapse.rest.client.v2_alpha import capabilities

from tests import unittest
@@ -23,7 +23,7 @@ from tests import unittest
class CapabilitiesTestCase(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
capabilities.register_servlets,
login.register_servlets,
]

@@ -4,10 +4,11 @@ import os

import pkg_resources

import synapse.rest.admin
from synapse.api.constants import LoginType
from synapse.api.errors import Codes
from synapse.appservice import ApplicationService
from synapse.rest.client.v1 import admin, login
from synapse.rest.client.v1 import login
from synapse.rest.client.v2_alpha import account_validity, register, sync

from tests import unittest
@@ -40,11 +41,10 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
as_token = "i_am_an_app_service"

appservice = ApplicationService(
as_token, self.hs.config.server_name,
as_token,
self.hs.config.server_name,
id="1234",
namespaces={
"users": [{"regex": r"@as_user.*", "exclusive": True}],
},
namespaces={"users": [{"regex": r"@as_user.*", "exclusive": True}]},
)

self.hs.get_datastore().services_cache.append(appservice)
@@ -56,10 +56,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
self.render(request)

self.assertEquals(channel.result["code"], b"200", channel.result)
det_data = {
"user_id": user_id,
"home_server": self.hs.hostname,
}
det_data = {"user_id": user_id, "home_server": self.hs.hostname}
self.assertDictContainsSubset(det_data, channel.json_body)

def test_POST_appservice_registration_invalid(self):
@@ -127,10 +124,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)

det_data = {
"home_server": self.hs.hostname,
"device_id": "guest_device",
}
det_data = {"home_server": self.hs.hostname, "device_id": "guest_device"}
self.assertEquals(channel.result["code"], b"200", channel.result)
self.assertDictContainsSubset(det_data, channel.json_body)

@@ -158,7 +152,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
else:
self.assertEquals(channel.result["code"], b"200", channel.result)

self.reactor.advance(retry_after_ms / 1000.)
self.reactor.advance(retry_after_ms / 1000.0)

request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)
@@ -186,7 +180,7 @@ class RegisterRestServletTestCase(unittest.HomeserverTestCase):
else:
self.assertEquals(channel.result["code"], b"200", channel.result)

self.reactor.advance(retry_after_ms / 1000.)
self.reactor.advance(retry_after_ms / 1000.0)

request, channel = self.make_request(b"POST", self.url + b"?kind=guest", b"{}")
self.render(request)
@@ -198,7 +192,7 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):

servlets = [
register.register_servlets,
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
sync.register_servlets,
account_validity.register_servlets,
@@ -220,23 +214,19 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):

# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
request, channel = self.make_request(
b"GET", "/sync", access_token=tok,
)
request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)

self.assertEquals(channel.result["code"], b"200", channel.result)

self.reactor.advance(datetime.timedelta(weeks=1).total_seconds())

request, channel = self.make_request(
b"GET", "/sync", access_token=tok,
)
request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)

self.assertEquals(channel.result["code"], b"403", channel.result)
self.assertEquals(
channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result,
channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result
)

def test_manual_renewal(self):
@@ -252,21 +242,17 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
admin_tok = self.login("admin", "adminpassword")

url = "/_matrix/client/unstable/admin/account_validity/validity"
params = {
"user_id": user_id,
}
params = {"user_id": user_id}
request_data = json.dumps(params)
request, channel = self.make_request(
b"POST", url, request_data, access_token=admin_tok,
b"POST", url, request_data, access_token=admin_tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)

# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
request, channel = self.make_request(
b"GET", "/sync", access_token=tok,
)
request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)

@@ -285,20 +271,18 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
}
request_data = json.dumps(params)
request, channel = self.make_request(
b"POST", url, request_data, access_token=admin_tok,
b"POST", url, request_data, access_token=admin_tok
)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)

# The specific endpoint doesn't matter, all we need is an authenticated
# endpoint.
request, channel = self.make_request(
b"GET", "/sync", access_token=tok,
)
request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"403", channel.result)
self.assertEquals(
channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result,
channel.json_body["errcode"], Codes.EXPIRED_ACCOUNT, channel.result
)

@@ -307,7 +291,7 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
skip = "No Jinja installed" if not load_jinja2_templates else None
servlets = [
register.register_servlets,
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
login.register_servlets,
sync.register_servlets,
account_validity.register_servlets,
@@ -357,10 +341,15 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# We need to manually add an email address otherwise the handler will do
# nothing.
now = self.hs.clock.time_msec()
self.get_success(self.store.user_add_threepid(
user_id=user_id, medium="email", address="kermit@example.com",
validated_at=now, added_at=now,
))
self.get_success(
self.store.user_add_threepid(
user_id=user_id,
medium="email",
address="kermit@example.com",
validated_at=now,
added_at=now,
)
)

# Move 6 days forward. This should trigger a renewal email to be sent.
self.reactor.advance(datetime.timedelta(days=6).total_seconds())
@@ -378,9 +367,7 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# our access token should be denied from now, otherwise they should
# succeed.
self.reactor.advance(datetime.timedelta(days=3).total_seconds())
request, channel = self.make_request(
b"GET", "/sync", access_token=tok,
)
request, channel = self.make_request(b"GET", "/sync", access_token=tok)
self.render(request)
self.assertEquals(channel.result["code"], b"200", channel.result)

@@ -392,13 +379,19 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
# We need to manually add an email address otherwise the handler will do
# nothing.
now = self.hs.clock.time_msec()
self.get_success(self.store.user_add_threepid(
user_id=user_id, medium="email", address="kermit@example.com",
validated_at=now, added_at=now,
))
self.get_success(
self.store.user_add_threepid(
user_id=user_id,
medium="email",
address="kermit@example.com",
validated_at=now,
added_at=now,
)
)

request, channel = self.make_request(
b"POST", "/_matrix/client/unstable/account_validity/send_mail",
b"POST",
"/_matrix/client/unstable/account_validity/send_mail",
access_token=tok,
)
self.render(request)
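The account-validity tests above show the moving parts: once a registered user's validity period lapses, `/sync` starts returning 403 with the `Codes.EXPIRED_ACCOUNT` error code, and an admin can push the expiry forward by POSTing to `/_matrix/client/unstable/admin/account_validity/validity` with the target `user_id`. A minimal sketch of that admin renewal call, with server URL, token, and user ID as placeholders:

```
import json
from urllib.request import Request, urlopen

# Assumptions: homeserver at localhost:8008, ADMIN_TOKEN is an access token
# for a server admin, USER_ID is the account whose validity should be renewed.
ADMIN_TOKEN = "<admin access token>"
USER_ID = "@kermit:example.com"

url = (
    "http://localhost:8008/_matrix/client/unstable/admin/account_validity/validity"
    "?access_token=" + ADMIN_TOKEN
)
req = Request(
    url,
    data=json.dumps({"user_id": USER_ID}).encode("utf8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urlopen(req) as resp:
    print(resp.status, json.load(resp))
```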
@@ -15,7 +15,8 @@

from mock import Mock

from synapse.rest.client.v1 import admin, login, room
import synapse.rest.admin
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v2_alpha import sync

from tests import unittest
@@ -72,7 +73,7 @@ class FilterTestCase(unittest.HomeserverTestCase):
class SyncTypingTests(unittest.HomeserverTestCase):

servlets = [
admin.register_servlets,
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
sync.register_servlets,
Some files were not shown because too many files have changed in this diff.