Compare commits
42 Commits: v1.18.0 ... erikj/remo

Commits:

ac2aad2123, 14ddce892f, 481f76c7aa, 5d92a1428c, 6812509807, 2a89ce8cd4,
b6c6fb7950, 3b415e23a5, db5970ac6d, d1008fe949, 394be6a0e6, faba873d4b,
9b3ab57acd, 18de00adb4, e2a4ba6f9b, 6d4b790021, 0a7fb24716, 606805bf06,
3aa36b782c, c978f6c451, 4cce8ef74e, b3a97d6dac, 3950ae51ef, a53e0160a2,
d90087cffa, 3a00bd1378, f23c77389d, 8dff4a1242, 2184f61fae, 3345c166a4,
e866e3b896, 8a25332d94, 2c1e1b153d, 8078dec3be, 3857de2194, 349119a340,
aaf9ce72a0, c4ce0da6fe, 68626ff8e9, 8553f46498, 5f65e62681, 8144bc26a7
INSTALL.md (109 lines changed)

@@ -1,10 +1,12 @@
 - [Choosing your server name](#choosing-your-server-name)
+- [Picking a database engine](#picking-a-database-engine)
 - [Installing Synapse](#installing-synapse)
 - [Installing from source](#installing-from-source)
 - [Platform-Specific Instructions](#platform-specific-instructions)
 - [Prebuilt packages](#prebuilt-packages)
 - [Setting up Synapse](#setting-up-synapse)
 - [TLS certificates](#tls-certificates)
+- [Client Well-Known URI](#client-well-known-uri)
 - [Email](#email)
 - [Registering a user](#registering-a-user)
 - [Setting up a TURN server](#setting-up-a-turn-server)
@@ -27,6 +29,25 @@ that your email address is probably `user@example.com` rather than
 `user@email.example.com`) - but doing so may require more advanced setup: see
 [Setting up Federation](docs/federate.md).
 
+# Picking a database engine
+
+Synapse offers two database engines:
+ * [PostgreSQL](https://www.postgresql.org)
+ * [SQLite](https://sqlite.org/)
+
+Almost all installations should opt to use PostgreSQL. Advantages include:
+
+* significant performance improvements due to the superior threading and
+  caching model, smarter query optimiser
+* allowing the DB to be run on separate hardware
+
+For information on how to install and use PostgreSQL, please see
+[docs/postgres.md](docs/postgres.md)
+
+By default Synapse uses SQLite and in doing so trades performance for convenience.
+SQLite is only recommended in Synapse for testing purposes or for servers with
+light workloads.
+
 # Installing Synapse
 
 ## Installing from source
@@ -234,9 +255,9 @@ for a number of platforms.
 
 There is an official synapse image available at
 https://hub.docker.com/r/matrixdotorg/synapse which can be used with
-the docker-compose file available at [contrib/docker](contrib/docker). Further information on
-this including configuration options is available in the README on
-hub.docker.com.
+the docker-compose file available at [contrib/docker](contrib/docker). Further
+information on this including configuration options is available in the README
+on hub.docker.com.
 
 Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
 Dockerfile to automate a synapse server in a single Docker image, at

@@ -244,7 +265,8 @@ https://hub.docker.com/r/avhost/docker-matrix/tags/
 
 Slavi Pantaleev has created an Ansible playbook,
 which installs the official Docker image of Matrix Synapse
-along with many other Matrix-related services (Postgres database, riot-web, coturn, mxisd, SSL support, etc.).
+along with many other Matrix-related services (Postgres database, Element, coturn,
+ma1sd, SSL support, etc.).
 For more details, see
 https://github.com/spantaleev/matrix-docker-ansible-deploy
@@ -277,22 +299,27 @@ The fingerprint of the repository signing key (as shown by `gpg
 /usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
 `AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
 
-#### Downstream Debian/Ubuntu packages
+#### Downstream Debian packages
 
-For `buster` and `sid`, Synapse is available in the Debian repositories and
-it should be possible to install it with simply:
+We do not recommend using the packages from the default Debian `buster`
+repository at this time, as they are old and suffer from known security
+vulnerabilities. You can install the latest version of Synapse from
+[our repository](#matrixorg-packages) or from `buster-backports`. Please
+see the [Debian documentation](https://backports.debian.org/Instructions/)
+for information on how to use backports.
+
+If you are using Debian `sid` or testing, Synapse is available in the default
+repositories and it should be possible to install it simply with:
 
 ```
 sudo apt install matrix-synapse
 ```
 
-There is also a version of `matrix-synapse` in `stretch-backports`. Please see
-the [Debian documentation on
-backports](https://backports.debian.org/Instructions/) for information on how
-to use them.
 
 #### Downstream Ubuntu packages
 
-We do not recommend using the packages in downstream Ubuntu at this time, as
-they are old and suffer from known security vulnerabilities.
+We do not recommend using the packages in the default Ubuntu repository
+at this time, as they are old and suffer from known security vulnerabilities.
+The latest version of Synapse can be installed from [our repository](#matrixorg-packages).
 
 ### Fedora
@@ -419,6 +446,60 @@ so, you will need to edit `homeserver.yaml`, as follows:
 
 For a more detailed guide to configuring your server for federation, see
 [federate.md](docs/federate.md).
 
+## Client Well-Known URI
+
+Setting up the client Well-Known URI is optional but if you set it up, it will
+allow users to enter their full username (e.g. `@user:<server_name>`) into clients
+which support well-known lookup to automatically configure the homeserver and
+identity server URLs. This is useful so that users don't have to memorize or think
+about the actual homeserver URL you are using.
+
+The URL `https://<server_name>/.well-known/matrix/client` should return JSON in
+the following format.
+
+```
+{
+  "m.homeserver": {
+    "base_url": "https://<matrix.example.com>"
+  }
+}
+```
+
+It can optionally contain identity server information as well.
+
+```
+{
+  "m.homeserver": {
+    "base_url": "https://<matrix.example.com>"
+  },
+  "m.identity_server": {
+    "base_url": "https://<identity.example.com>"
+  }
+}
+```
+
+To work in browser based clients, the file must be served with the appropriate
+Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
+`Access-Control-Allow-Origin: *` which would allow all browser based clients to
+view it.
+
+In nginx this would be something like:
+
+```
+location /.well-known/matrix/client {
+    return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
+    add_header Content-Type application/json;
+    add_header Access-Control-Allow-Origin *;
+}
+```
+You should also ensure the `public_baseurl` option in `homeserver.yaml` is set
+correctly. `public_baseurl` should be set to the URL that clients will use to
+connect to your server. This is the same URL you put for the `m.homeserver`
+`base_url` above.
+
+```
+public_baseurl: "https://<matrix.example.com>"
+```
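
As a quick check that the well-known file above is being served correctly, request it with `curl -i`; both the JSON body and the CORS header should appear (hostname illustrative):

```bash
# Expect HTTP 200, a Content-Type of application/json and
# Access-Control-Allow-Origin: * in the response headers.
curl -i "https://example.com/.well-known/matrix/client"
```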
 
 ## Email
@@ -437,7 +518,7 @@ email will be disabled.
 
 ## Registering a user
 
-The easiest way to create a new user is to do so from a client like [Riot](https://riot.im).
+The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
 
 Alternatively you can do so from the command line if you have installed via pip.
README.rst (43 lines changed)

@@ -45,7 +45,7 @@ which handle:
 - Eventually-consistent cryptographically secure synchronisation of room
   state across a global open network of federated servers and services
 - Sending and receiving extensible messages in a room with (optional)
-  end-to-end encryption[1]
+  end-to-end encryption
 - Inviting, joining, leaving, kicking, banning room members
 - Managing user accounts (registration, login, logout)
 - Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
@@ -82,9 +82,6 @@ at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
 
 Thanks for using Matrix!
 
-[1] End-to-end encryption is currently in beta: `blog post <https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-encryption-security-assessment-released-and-implemented-cross-platform-on-riot-at-last>`_.
 
 Support
 =======
@@ -115,12 +112,11 @@ Unless you are running a test instance of Synapse on your local machine, in
 general, you will need to enable TLS support before you can successfully
 connect from a client: see `<INSTALL.md#tls-certificates>`_.
 
-An easy way to get started is to login or register via Riot at
-https://riot.im/app/#/login or https://riot.im/app/#/register respectively.
+An easy way to get started is to login or register via Element at
+https://app.element.io/#/login or https://app.element.io/#/register respectively.
 You will need to change the server you are logging into from ``matrix.org``
 and instead specify a Homeserver URL of ``https://<server_name>:8448``
 (or just ``https://<server_name>`` if you are using a reverse proxy).
 (Leave the identity server as the default - see `Identity servers`_.)
 If you prefer to use another client, refer to our
 `client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
@@ -137,7 +133,7 @@ it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
 recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)
 
 Once ``enable_registration`` is set to ``true``, it is possible to register a
-user via `riot.im <https://riot.im/app/#/register>`_ or other Matrix clients.
+user via a Matrix client.
 
 Your new user name will be formed partly from the ``server_name``, and partly
 from a localpart you specify when you create the account. Your name will take
@@ -183,30 +179,6 @@ versions of synapse.
 
 .. _UPGRADE.rst: UPGRADE.rst
 
-Using PostgreSQL
-================
-
-Synapse offers two database engines:
- * `PostgreSQL <https://www.postgresql.org>`_
- * `SQLite <https://sqlite.org/>`_
-
-Almost all installations should opt to use PostgreSQL. Advantages include:
-
-* significant performance improvements due to the superior threading and
-  caching model, smarter query optimiser
-* allowing the DB to be run on separate hardware
-* allowing basic active/backup high-availability with a "hot spare" synapse
-  pointing at the same DB master, as well as enabling DB replication in
-  synapse itself.
-
-For information on how to install and use PostgreSQL, please see
-`docs/postgres.md <docs/postgres.md>`_.
-
-By default Synapse uses SQLite and in doing so trades performance for convenience.
-SQLite is only recommended in Synapse for testing purposes or for servers with
-light workloads.
 
 .. _reverse-proxy:
 
 Using a reverse proxy with Synapse
@@ -255,10 +227,9 @@ email address.
 
 Password reset
 ==============
 
-If a user has registered an email address to their account using an identity
-server, they can request a password-reset token via clients such as Riot.
-
-A manual password reset can be done via direct database access as follows.
+Users can reset their password through their client. Alternatively, a server admin
+can reset a user's password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
+or by directly editing the database as shown below.
 
 First calculate the hash of the new password::
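
As an illustration of the admin API route referenced above, a reset might look like the following sketch. The endpoint and body follow the linked `user_admin_api.rst` documentation; the hostname, token and user ID are placeholders:

```bash
# Reset a user's password via the Synapse admin API
# (requires an access token belonging to a server admin).
curl -X POST \
  -H "Authorization: Bearer <admin_access_token>" \
  -H "Content-Type: application/json" \
  -d '{"new_password": "correct-horse-battery-staple", "logout_devices": true}' \
  "https://matrix.example.com/_synapse/admin/v1/reset_password/@alice:example.com"
```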
changelog.d (34 new files, one line each)

changelog.d/7314.misc: Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH.
changelog.d/7736.feature: Add unread messages count to sync responses, as specified in [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654).
changelog.d/7899.doc: Document how to set up a Client Well-Known file and fix several pieces of outdated documentation.
changelog.d/7902.feature: Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus.
changelog.d/7936.misc: Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.
changelog.d/7947.misc: Convert various parts of the codebase to async/await.
changelog.d/7948.misc: Convert various parts of the codebase to async/await.
changelog.d/7949.misc: Convert various parts of the codebase to async/await.
changelog.d/7951.misc: Convert various parts of the codebase to async/await.
changelog.d/7952.misc: Move some database-related log lines from the default logger to the database/transaction loggers.
changelog.d/7963.misc: Convert various parts of the codebase to async/await.
changelog.d/7964.feature: Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel.
changelog.d/7965.misc: Add a script to detect source code files using non-unix line terminators.
changelog.d/7970.misc: Add a script to detect source code files using non-unix line terminators.
changelog.d/7971.misc: Log the SAML session ID during creation.
changelog.d/7973.misc: Convert various parts of the codebase to async/await.
changelog.d/7975.misc: Convert various parts of the codebase to async/await.
changelog.d/7976.misc: Convert various parts of the codebase to async/await.
changelog.d/7977.bugfix: Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory.
changelog.d/7978.bugfix: Fix a long-standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured.
changelog.d/7979.misc: Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.
changelog.d/7980.bugfix: Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0.
changelog.d/7981.misc: Convert various parts of the codebase to async/await.
changelog.d/7987.misc: Convert various parts of the codebase to async/await.
changelog.d/7989.misc: Convert various parts of the codebase to async/await.
changelog.d/7990.doc: Improve workers docs.
changelog.d/7992.doc: Fix typo in `docs/workers.md`.
changelog.d/7996.bugfix: Fix various comments and minor discrepancies in server notices code.
changelog.d/7998.doc: Add documentation for how to undo a room shutdown.
changelog.d/7999.bugfix: Fix a long-standing bug where HTTP HEAD requests resulted in a 400 error.
changelog.d/8001.misc: Remove redundant and unreliable signature check for v1 Identity Service lookup responses.
changelog.d/8003.misc: Convert various parts of the codebase to async/await.
changelog.d/8008.feature: Add rate limiting to users joining rooms.
changelog.d/8025.bugfix: Fix bug where state (e.g. power levels) would reset incorrectly when receiving an event from a remote server.
contrib/cmdclient/console.py

@@ -609,13 +609,15 @@ class SynapseCmd(cmd.Cmd):
 
     @defer.inlineCallbacks
     def _do_event_stream(self, timeout):
-        res = yield self.http_client.get_json(
-            self._url() + "/events",
-            {
-                "access_token": self._tok(),
-                "timeout": str(timeout),
-                "from": self.event_stream_token,
-            },
-        )
+        res = yield defer.ensureDeferred(
+            self.http_client.get_json(
+                self._url() + "/events",
+                {
+                    "access_token": self._tok(),
+                    "timeout": str(timeout),
+                    "from": self.event_stream_token,
+                },
+            )
+        )
         print(json.dumps(res, indent=4))
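
Most of the code changes in this compare follow the same mechanical pattern as the hunk above: an `@defer.inlineCallbacks` generator becomes an `async def`, `yield` becomes `await`, and any caller still written with inlineCallbacks wraps the coroutine in `defer.ensureDeferred`. A minimal sketch of the pattern, with hypothetical names:

```python
from twisted.internet import defer


class ProfileHandler:
    """Sketch of the async/await conversion pattern used throughout this compare."""

    def __init__(self, store):
        self.store = store

    # After conversion: a native coroutine; every `yield` became `await`.
    async def get_profile(self, user_id):
        return await self.store.get_profile(user_id)

    # Callers still written with @defer.inlineCallbacks cannot `yield` a bare
    # coroutine, so they wrap it in a Deferred with defer.ensureDeferred,
    # exactly what the hunk above does for http_client.get_json.
    @defer.inlineCallbacks
    def legacy_caller(self, user_id):
        profile = yield defer.ensureDeferred(self.get_profile(user_id))
        return profile
```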
debian/changelog (vendored, 10 lines changed)

@@ -1,3 +1,13 @@
+matrix-synapse-py3 (1.xx.0) stable; urgency=medium
+
+  [ Synapse Packaging team ]
+  * New synapse release 1.xx.0.
+
+  [ Aaron Raimist ]
+  * Fix outdated documentation for SYNAPSE_CACHE_FACTOR
+
+ -- Synapse Packaging team <packages@matrix.org>  XXXXX
+
 matrix-synapse-py3 (1.18.0) stable; urgency=medium
 
   * New synapse release 1.18.0.
debian/matrix-synapse.default (vendored, 2 lines changed)

@@ -1,2 +1,2 @@
 # Specify environment variables used when running Synapse
-# SYNAPSE_CACHE_FACTOR=1 (default)
+# SYNAPSE_CACHE_FACTOR=0.5 (default)
debian/synctl.ronn (vendored, 27 lines changed)

@@ -46,19 +46,20 @@ Configuration file may be generated as follows:
 
 ## ENVIRONMENT
 
   * `SYNAPSE_CACHE_FACTOR`:
-    Synapse's architecture is quite RAM hungry currently - a lot of
-    recent room data and metadata is deliberately cached in RAM in
-    order to speed up common requests. This will be improved in
-    future, but for now the easiest way to either reduce the RAM usage
-    (at the risk of slowing things down) is to set the
-    SYNAPSE_CACHE_FACTOR environment variable. Roughly speaking, a
-    SYNAPSE_CACHE_FACTOR of 1.0 will max out at around 3-4GB of
-    resident memory - this is what we currently run the matrix.org
-    on. The default setting is currently 0.1, which is probably around
-    a ~700MB footprint. You can dial it down further to 0.02 if
-    desired, which targets roughly ~512MB. Conversely you can dial it
-    up if you need performance for lots of users and have a box with a
-    lot of RAM.
+    Synapse's architecture is quite RAM hungry currently - we deliberately
+    cache a lot of recent room data and metadata in RAM in order to speed up
+    common requests. We'll improve this in the future, but for now the easiest
+    way to reduce the RAM usage (at the risk of slowing things down)
+    is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
+    variable. The default is 0.5, which can be decreased to reduce RAM usage
+    in memory-constrained environments, or increased if performance starts to
+    degrade.
+
+    However, degraded performance due to a low cache factor, common on
+    machines with slow disks, often leads to explosions in memory use due to
+    backlogged requests. In this case, reducing the cache factor will make
+    things worse. Instead, try increasing it drastically. 2.0 is a good
+    starting value.
 
 ## COPYRIGHT
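
For a Debian package installation, this variable is typically set in `/etc/default/matrix-synapse`, the file updated earlier in this diff; a sketch:

```bash
# /etc/default/matrix-synapse
# Raise the cache factor above the 0.5 default on a machine with ample RAM
# where performance has started to degrade, per the guidance above.
SYNAPSE_CACHE_FACTOR=2.0
```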
@@ -10,5 +10,16 @@
 # homeserver.yaml. Instead, if you are starting from scratch, please generate
 # a fresh config using Synapse by following the instructions in INSTALL.md.
 
+# Configuration options that take a time period can be set using a number
+# followed by a letter. Letters have the following meanings:
+# s = second
+# m = minute
+# h = hour
+# d = day
+# w = week
+# y = year
+# For example, setting redaction_retention_period: 5m would remove redacted
+# messages from the database after 5 minutes, rather than 5 months.
 
 ################################################################################
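
To make the shorthand concrete, here are two duration-typed options using it: the one named in the comment above, plus one other such option (values illustrative):

```yaml
# Prune the content of redacted messages from the database after one week.
redaction_retention_period: 1w

# Drop user IP / user agent log entries older than 28 days.
user_ips_max_age: 28d
```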
@@ -369,7 +369,9 @@ to the new room will have power level `-10` by default, and thus be unable to speak.
 
 If `block` is `True` it prevents new joins to the old room.
 
-This API will remove all trace of the old room from your database after removing
-all local users.
+If `purge` is `true` (the default), all traces of the old room will
+be removed from your database after removing all local users. If you do not want
+this to happen, set `purge` to `false`.
 Depending on the amount of history being purged a call to the API may take
 several minutes or longer.

@@ -388,7 +390,8 @@ with a body of:
     "new_room_user_id": "@someuser:example.com",
     "room_name": "Content Violation Notification",
     "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
-    "block": true
+    "block": true,
+    "purge": true
 }
 ```

@@ -430,8 +433,10 @@ The following JSON body parameters are available:
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down. Defaults to `Sharing illegal content on this server
   is not permitted and rooms in violation will be blocked.`
-* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing future attempts to
-  join the room. Defaults to `false`.
+* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
+  future attempts to join the room. Defaults to `false`.
+* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
+  Defaults to `true`.
 
 The JSON body must not be empty. The body must be at least `{}`.
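
Putting the body parameters together with the endpoint named in changelog.d/7964.feature, a delete-and-purge request might look like this sketch; the hostname, token and room ID are placeholders:

```bash
curl -X POST \
  -H "Authorization: Bearer <admin_access_token>" \
  -H "Content-Type: application/json" \
  -d '{
        "new_room_user_id": "@someuser:example.com",
        "room_name": "Content Violation Notification",
        "message": "Bad Room has been shutdown due to content violations on this server.",
        "block": true,
        "purge": true
      }' \
  "https://matrix.example.com/_synapse/admin/v1/rooms/<room_id>/delete"
```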
@@ -33,7 +33,7 @@ You will need to authenticate with an access token for an admin user.
 
 * `message` - Optional. A string containing the first message that will be sent as
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down.
 
 If not specified, the default value of `room_name` is "Content Violation
 Notification". The default value of `message` is "Sharing illegal content on
 this server is not permitted and rooms in violation will be blocked."

@@ -72,3 +72,23 @@ Response:
     "new_room_id": "!newroomid:example.com",
 },
 ```
 
+## Undoing room shutdowns
+
+*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
+the structure can and does change without notice.
+
+First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
+never happened - work has to be done to move forward instead of resetting the past.
+
+1. For safety reasons, it is recommended to shut down Synapse prior to continuing.
+2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
+   * For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
+   * The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
+3. Restart Synapse (required).
+
+You will have to manually handle, if you so choose, the following:
+
+* Aliases that would have been redirected to the Content Violation room.
+* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
+* Removal of the Content Violation room if desired.
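
Spelling out the transactional form suggested in step 2 above (the room ID is illustrative):

```sql
BEGIN;
DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';
-- The DELETE should report exactly 1 row. If it does, commit:
COMMIT;
-- Otherwise, abandon the change with: ROLLBACK;
```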
docs/metrics-howto.md

@@ -27,7 +27,7 @@
 different thread to Synapse. This can make it more resilient to
 heavy load meaning metrics cannot be retrieved, and can be exposed
 to just internal networks more easily. The served metrics are available
-over HTTP only, and will be available at `/`.
+over HTTP only, and will be available at `/_synapse/metrics`.
 
 Add a new listener to homeserver.yaml:
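
The hunk ends just before the example; a standalone metrics listener of the kind described might look like the following sketch (port and bind address are illustrative):

```yaml
listeners:
  # Serve Prometheus metrics on a dedicated port; expose this only to
  # internal networks, and scrape it at /_synapse/metrics.
  - port: 9000
    type: metrics
    bind_addresses: ['127.0.0.1']
```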
docs/postgres.md

@@ -188,6 +188,9 @@ to do step 2.
 
 It is safe to kill the port script at any time and restart it.
 
+Note that the database may take up significantly more (25% - 100% more)
+space on disk after porting to Postgres.
 
 ### Using the port script
 
 Firstly, shut down the currently running synapse server and copy its
docs/sample_config.yaml

@@ -10,6 +10,17 @@
 # homeserver.yaml. Instead, if you are starting from scratch, please generate
 # a fresh config using Synapse by following the instructions in INSTALL.md.
 
+# Configuration options that take a time period can be set using a number
+# followed by a letter. Letters have the following meanings:
+# s = second
+# m = minute
+# h = hour
+# d = day
+# w = week
+# y = year
+# For example, setting redaction_retention_period: 5m would remove redacted
+# messages from the database after 5 minutes, rather than 5 months.
 
 ################################################################################
 
 # Configuration file for Synapse.
@@ -314,6 +325,10 @@ limit_remote_rooms:
 #
 #complexity_error: "This room is too complex."
 
+# allow server admins to join complex rooms. Default is false.
+#
+#admins_can_join: true
 
 # Whether to require a user to be in the room to add an alias to it.
 # Defaults to 'true'.
 #
@@ -731,6 +746,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 # - one for ratelimiting redactions by room admins. If this is not explicitly
 #   set then it uses the same ratelimiting as per rc_message. This is useful
 #   to allow room admins to deal with abuse quickly.
+# - two for ratelimiting number of rooms a user can join, "local" for when
+#   users are joining rooms the server is already in (this is cheap) vs
+#   "remote" for when users are trying to join rooms not on the server (which
+#   can be more expensive)
 #
 # The defaults are as shown below.
 #

@@ -756,6 +775,14 @@ log_config: "CONFDIR/SERVERNAME.log.config"
 #rc_admin_redaction:
 #  per_second: 1
 #  burst_count: 50
+#
+#rc_joins:
+#  local:
+#    per_second: 0.1
+#    burst_count: 3
+#  remote:
+#    per_second: 0.01
+#    burst_count: 3
 
 
 # Ratelimiting settings for incoming federation
@@ -1145,24 +1172,6 @@ account_validity:
 #
 #default_identity_server: https://matrix.org
 
-# The list of identity servers trusted to verify third party
-# identifiers by this server.
-#
-# Also defines the ID server which will be called when an account is
-# deactivated (one will be picked arbitrarily).
-#
-# Note: This option is deprecated. Since v0.99.4, Synapse has tracked which identity
-# server a 3PID has been bound to. For 3PIDs bound before then, Synapse runs a
-# background migration script, informing itself that the identity server all of its
-# 3PIDs have been bound to is likely one of the below.
-#
-# As of Synapse v1.4.0, all other functionality of this option has been deprecated, and
-# it is now solely used for the purposes of the background migration script, and can be
-# removed once it has run.
-#trusted_third_party_id_servers:
-#  - matrix.org
-#  - vector.im
 
 # Handle threepid (email/phone etc) registration and password resets through a set of
 # *trusted* identity servers. Note that this allows the configured identity server to
 # reset passwords for accounts!
docs/workers.md

@@ -1,10 +1,10 @@
 # Scaling synapse via workers
 
-For small instances it recommended to run Synapse in monolith mode (the
-default). For larger instances where performance is a concern it can be helpful
-to split out functionality into multiple separate python processes. These
-processes are called 'workers', and are (eventually) intended to scale
-horizontally independently.
+For small instances it is recommended to run Synapse in the default monolith mode.
+For larger instances where performance is a concern it can be helpful to split
+out functionality into multiple separate python processes. These processes are
+called 'workers', and are (eventually) intended to scale horizontally
+independently.
 
 Synapse's worker support is under active development and subject to change as
 we attempt to rapidly scale ever larger Synapse instances. However we are
@@ -23,29 +23,30 @@ The processes communicate with each other via a Synapse-specific protocol called
 feeds streams of newly written data between processes so they can be kept in
 sync with the database state.
 
-Additionally, processes may make HTTP requests to each other. Typically this is
-used for operations which need to wait for a reply - such as sending an event.
+When configured to do so, Synapse uses a
+[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
+stream between all configured Synapse processes. Additionally, processes may
+make HTTP requests to each other, primarily for operations which need to wait
+for a reply ─ such as sending an event.
 
-As of Synapse v1.13.0, it is possible to configure Synapse to send replication
-via a [Redis pub/sub channel](https://redis.io/topics/pubsub), and is now the
-recommended way of configuring replication. This is an alternative to the old
-direct TCP connections to the main process: rather than all the workers
-connecting to the main process, all the workers and the main process connect to
-Redis, which relays replication commands between processes. This can give a
-significant cpu saving on the main process and will be a prerequisite for
-upcoming performance improvements.
+Redis support was added in v1.13.0 with it becoming the recommended method in
+v1.18.0. It replaced the old direct TCP connections (which are deprecated as of
+v1.18.0) to the main process. With Redis, rather than all the workers connecting
+to the main process, all the workers and the main process connect to Redis,
+which relays replication commands between processes. This can give a significant
+CPU saving on the main process and will be a prerequisite for upcoming
+performance improvements.
 
-(See the [Architectural diagram](#architectural-diagram) section at the end for
-a visualisation of what this looks like)
+See the [Architectural diagram](#architectural-diagram) section at the end for
+a visualisation of what this looks like.
 ## Setting up workers
 
-A Redis server is required to manage the communication between the processes.
-(The older direct TCP connections are now deprecated.) The Redis server
-should be installed following the normal procedure for your distribution (e.g.
-`apt install redis-server` on Debian). It is safe to use an existing Redis
-deployment if you have one.
+The Redis server should be installed following the normal procedure for your
+distribution (e.g. `apt install redis-server` on Debian). It is safe to use an
+existing Redis deployment if you have one.
 
 Once installed, check that Redis is running and accessible from the host running
 Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
@@ -65,8 +66,9 @@ https://hub.docker.com/r/matrixdotorg/synapse/.
 
 To make effective use of the workers, you will need to configure an HTTP
 reverse-proxy such as nginx or haproxy, which will direct incoming requests to
-the correct worker, or to the main synapse instance. See [reverse_proxy.md](reverse_proxy.md)
-for information on setting up a reverse proxy.
+the correct worker, or to the main synapse instance. See
+[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
+proxy.
 
 To enable workers you should create a configuration file for each worker
 process. Each worker configuration file inherits the configuration of the shared

@@ -75,8 +77,12 @@ that worker, e.g. the HTTP listener that it provides (if any); logging
 configuration; etc. You should minimise the number of overrides though to
 maintain a usable config.
 
-Next you need to add both a HTTP replication listener and redis config to the
-shared Synapse configuration file (`homeserver.yaml`). For example:
+### Shared Configuration
+
+Next you need to add both a HTTP replication listener, used for HTTP requests
+between processes, and redis config to the shared Synapse configuration file
+(`homeserver.yaml`). For example:
 
 ```yaml
 # extend the existing `listeners` section. This defines the ports that the
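
The configuration block is cut short in this extraction. A minimal shared configuration matching the surrounding description, a replication listener plus a redis section, might be the following sketch (port and address are illustrative):

```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
  # The HTTP replication port, used for requests between processes.
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
      - names: [replication]

redis:
  enabled: true
```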
@@ -98,6 +104,9 @@ See the sample config for the full documentation of each option.
 
 Under **no circumstances** should the replication listener be exposed to the
 public internet; it has no authentication and is unencrypted.
 
+### Worker Configuration
 
 In the config file for each worker, you must specify the type of worker
 application (`worker_app`), and you should specify a unique name for the worker
 (`worker_name`). The currently available worker applications are listed below.
@@ -278,7 +287,7 @@ instance_map:
     host: localhost
     port: 8034
 
-streams_writers:
+stream_writers:
   events: event_persister1
 ```
scripts-dev/check_line_terminators.sh (new executable file, 34 lines)

@@ -0,0 +1,34 @@
#!/bin/bash
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script checks that line terminators in all repository files (excluding
# those in the .git directory) feature unix line terminators.
#
# Usage:
#
#   ./check_line_terminators.sh
#
# The script will emit exit code 1 if any files that do not use unix line
# terminators are found, 0 otherwise.

# cd to the root of the repository
cd `dirname $0`/..

# Find and print files with non-unix line terminators
if find . -path './.git/*' -prune -o -type f -print0 | xargs -0 grep -I -l $'\r$'; then
    echo -e '\e[31mERROR: found files with CRLF line endings. See above.\e[39m'
    exit 1
fi
scripts/synapse_port_db

@@ -69,7 +69,7 @@ logger = logging.getLogger("synapse_port_db")
 
 BOOLEAN_COLUMNS = {
-    "events": ["processed", "outlier", "contains_url"],
+    "events": ["processed", "outlier", "contains_url", "count_as_unread"],
     "rooms": ["is_public"],
     "event_edges": ["is_state"],
     "presence_list": ["accepted"],
synapse/__init__.py

@@ -17,6 +17,7 @@
 """ This is a reference implementation of a Matrix homeserver.
 """
 
+import json
 import os
 import sys

@@ -25,6 +26,9 @@ if sys.version_info < (3, 5):
     print("Synapse requires Python 3.5 or above.")
     sys.exit(1)
 
+# Twisted and canonicaljson will fail to import when this file is executed to
+# get the __version__ during a fresh install. That's OK and subsequent calls to
+# actually start Synapse will import these libraries fine.
 try:
     from twisted.internet import protocol
     from twisted.internet.protocol import Factory

@@ -36,6 +40,14 @@ try:
 except ImportError:
     pass
 
+# Use the standard library json implementation instead of simplejson.
+try:
+    from canonicaljson import set_json_library
+
+    set_json_library(json)
+except ImportError:
+    pass
 
 __version__ = "1.18.0"
 
 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
synapse/api/auth.py

@@ -82,7 +82,7 @@ class Auth(object):
 
     @defer.inlineCallbacks
     def check_from_context(self, room_version: str, event, context, do_sig_check=True):
-        prev_state_ids = yield context.get_prev_state_ids()
+        prev_state_ids = yield defer.ensureDeferred(context.get_prev_state_ids())
        auth_events_ids = yield self.compute_auth_events(
            event, prev_state_ids, for_verification=True
        )

synapse/app/generic_worker.py

@@ -628,7 +628,7 @@ class GenericWorkerServer(HomeServer):
 
        self.get_tcp_replication().start_replication(self)
 
-    def remove_pusher(self, app_id, push_key, user_id):
+    async def remove_pusher(self, app_id, push_key, user_id):
        self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
 
    def build_replication_data_handler(self):

synapse/app/homeserver.py

@@ -380,13 +380,12 @@ def setup(config_options):
 
    hs.setup_master()
 
-    @defer.inlineCallbacks
-    def do_acme():
+    async def do_acme() -> bool:
        """
        Reprovision an ACME certificate, if it's required.
 
        Returns:
-            Deferred[bool]: Whether the cert has been updated.
+            Whether the cert has been updated.
        """
        acme = hs.get_acme_handler()

@@ -405,7 +404,7 @@ def setup(config_options):
        provision = True
 
        if provision:
-            yield acme.provision_certificate()
+            await acme.provision_certificate()
 
        return provision

@@ -415,7 +414,7 @@ def setup(config_options):
        Provision a certificate from ACME, if required, and reload the TLS
        certificate if it's renewed.
        """
-        reprovisioned = yield do_acme()
+        reprovisioned = yield defer.ensureDeferred(do_acme())
        if reprovisioned:
            _base.refresh_certificate(hs)

@@ -427,8 +426,8 @@ def setup(config_options):
        acme = hs.get_acme_handler()
        # Start up the webservices which we will respond to ACME
        # challenges with, and then provision.
-        yield acme.start_listening()
-        yield do_acme()
+        yield defer.ensureDeferred(acme.start_listening())
+        yield defer.ensureDeferred(do_acme())
 
        # Check if it needs to be reprovisioned every day.
        hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)
synapse/appservice/__init__.py

@@ -15,11 +15,9 @@
 import logging
 import re
 
-from twisted.internet import defer
 
 from synapse.api.constants import EventTypes
 from synapse.types import GroupID, get_domain_from_id
-from synapse.util.caches.descriptors import cachedInlineCallbacks
+from synapse.util.caches.descriptors import cached
 
 logger = logging.getLogger(__name__)

@@ -43,7 +41,7 @@ class AppServiceTransaction(object):
        Args:
            as_api(ApplicationServiceApi): The API to use to send.
        Returns:
-            A Deferred which resolves to True if the transaction was sent.
+            An Awaitable which resolves to True if the transaction was sent.
        """
        return as_api.push_bulk(
            service=self.service, events=self.events, txn_id=self.id

@@ -172,8 +170,7 @@ class ApplicationService(object):
                return regex_obj["exclusive"]
        return False
 
-    @defer.inlineCallbacks
-    def _matches_user(self, event, store):
+    async def _matches_user(self, event, store):
        if not event:
            return False

@@ -188,12 +185,12 @@ class ApplicationService(object):
        if not store:
            return False
 
-        does_match = yield self._matches_user_in_member_list(event.room_id, store)
+        does_match = await self._matches_user_in_member_list(event.room_id, store)
        return does_match
 
-    @cachedInlineCallbacks(num_args=1, cache_context=True)
-    def _matches_user_in_member_list(self, room_id, store, cache_context):
-        member_list = yield store.get_users_in_room(
+    @cached(num_args=1, cache_context=True)
+    async def _matches_user_in_member_list(self, room_id, store, cache_context):
+        member_list = await store.get_users_in_room(
            room_id, on_invalidate=cache_context.invalidate
        )

@@ -208,35 +205,33 @@ class ApplicationService(object):
            return self.is_interested_in_room(event.room_id)
        return False
 
-    @defer.inlineCallbacks
-    def _matches_aliases(self, event, store):
+    async def _matches_aliases(self, event, store):
        if not store or not event:
            return False
 
-        alias_list = yield store.get_aliases_for_room(event.room_id)
+        alias_list = await store.get_aliases_for_room(event.room_id)
        for alias in alias_list:
            if self.is_interested_in_alias(alias):
                return True
        return False
 
-    @defer.inlineCallbacks
-    def is_interested(self, event, store=None):
+    async def is_interested(self, event, store=None) -> bool:
        """Check if this service is interested in this event.
 
        Args:
            event(Event): The event to check.
            store(DataStore)
        Returns:
-            bool: True if this service would like to know about this event.
+            True if this service would like to know about this event.
        """
        # Do cheap checks first
        if self._matches_room_id(event):
            return True
 
-        if (yield self._matches_aliases(event, store)):
+        if await self._matches_aliases(event, store):
            return True
 
-        if (yield self._matches_user(event, store)):
+        if await self._matches_user(event, store):
            return True
 
        return False
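
The `cachedInlineCallbacks` to `cached` change above is the decorator-level counterpart of the same conversion: `@cached` memoises the result of a coroutine method keyed on its arguments. A small sketch of the shape, using the names from the diff above (the final membership check is a plausible completion, not the verbatim file contents):

```python
from synapse.util.caches.descriptors import cached


class ApplicationServiceSketch:
    # `num_args=1` keys the cache on room_id only; `cache_context=True`
    # passes a handle whose `invalidate` callback evicts this cache entry
    # when the underlying store data (room membership here) changes.
    @cached(num_args=1, cache_context=True)
    async def _matches_user_in_member_list(self, room_id, store, cache_context):
        member_list = await store.get_users_in_room(
            room_id, on_invalidate=cache_context.invalidate
        )
        return any(self.is_interested_in_user(u) for u in member_list)
```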
synapse/appservice/api.py

@@ -93,13 +93,12 @@ class ApplicationServiceApi(SimpleHttpClient):
            hs, "as_protocol_meta", timeout_ms=HOUR_IN_MS
        )
 
-    @defer.inlineCallbacks
-    def query_user(self, service, user_id):
+    async def query_user(self, service, user_id):
        if service.url is None:
            return False
        uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
        try:
-            response = yield self.get_json(uri, {"access_token": service.hs_token})
+            response = await self.get_json(uri, {"access_token": service.hs_token})
            if response is not None:  # just an empty json object
                return True
        except CodeMessageException as e:

@@ -110,14 +109,12 @@ class ApplicationServiceApi(SimpleHttpClient):
            logger.warning("query_user to %s threw exception %s", uri, ex)
        return False
 
-    @defer.inlineCallbacks
-    def query_alias(self, service, alias):
+    async def query_alias(self, service, alias):
        if service.url is None:
            return False
        uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
        response = None
        try:
-            response = yield self.get_json(uri, {"access_token": service.hs_token})
+            response = await self.get_json(uri, {"access_token": service.hs_token})
            if response is not None:  # just an empty json object
                return True
        except CodeMessageException as e:

@@ -128,8 +125,7 @@ class ApplicationServiceApi(SimpleHttpClient):
            logger.warning("query_alias to %s threw exception %s", uri, ex)
        return False
 
-    @defer.inlineCallbacks
-    def query_3pe(self, service, kind, protocol, fields):
+    async def query_3pe(self, service, kind, protocol, fields):
        if kind == ThirdPartyEntityKind.USER:
            required_field = "userid"
        elif kind == ThirdPartyEntityKind.LOCATION:

@@ -146,7 +142,7 @@ class ApplicationServiceApi(SimpleHttpClient):
            urllib.parse.quote(protocol),
        )
        try:
-            response = yield self.get_json(uri, fields)
+            response = await self.get_json(uri, fields)
            if not isinstance(response, list):
                logger.warning(
                    "query_3pe to %s returned an invalid response %r", uri, response

@@ -202,8 +198,7 @@ class ApplicationServiceApi(SimpleHttpClient):
        key = (service.id, protocol)
        return self.protocol_meta_cache.wrap(key, _get)
 
-    @defer.inlineCallbacks
-    def push_bulk(self, service, events, txn_id=None):
+    async def push_bulk(self, service, events, txn_id=None):
        if service.url is None:
            return True

@@ -218,7 +213,7 @@ class ApplicationServiceApi(SimpleHttpClient):
        uri = service.url + ("/transactions/%s" % urllib.parse.quote(txn_id))
        try:
-            yield self.put_json(
+            await self.put_json(
                uri=uri,
                json_body={"events": events},
                args={"access_token": service.hs_token},
synapse/appservice/scheduler.py

@@ -50,8 +50,6 @@ components.
 """
 import logging
 
-from twisted.internet import defer
 
 from synapse.appservice import ApplicationServiceState
 from synapse.logging.context import run_in_background
 from synapse.metrics.background_process_metrics import run_as_background_process

@@ -73,12 +71,11 @@ class ApplicationServiceScheduler(object):
        self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
        self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
 
-    @defer.inlineCallbacks
-    def start(self):
+    async def start(self):
        logger.info("Starting appservice scheduler")
 
        # check for any DOWN ASes and start recoverers for them.
-        services = yield self.store.get_appservices_by_state(
+        services = await self.store.get_appservices_by_state(
            ApplicationServiceState.DOWN
        )

@@ -117,8 +114,7 @@ class _ServiceQueuer(object):
            "as-sender-%s" % (service.id,), self._send_request, service
        )
 
-    @defer.inlineCallbacks
-    def _send_request(self, service):
+    async def _send_request(self, service):
        # sanity-check: we shouldn't get here if this service already has a sender
        # running.
        assert service.id not in self.requests_in_flight

@@ -130,7 +126,7 @@ class _ServiceQueuer(object):
        if not events:
            return
        try:
-            yield self.txn_ctrl.send(service, events)
+            await self.txn_ctrl.send(service, events)
        except Exception:
            logger.exception("AS request failed")
        finally:

@@ -162,36 +158,33 @@ class _TransactionController(object):
        # for UTs
        self.RECOVERER_CLASS = _Recoverer
 
-    @defer.inlineCallbacks
-    def send(self, service, events):
+    async def send(self, service, events):
        try:
-            txn = yield self.store.create_appservice_txn(service=service, events=events)
-            service_is_up = yield self._is_service_up(service)
+            txn = await self.store.create_appservice_txn(service=service, events=events)
+            service_is_up = await self._is_service_up(service)
            if service_is_up:
-                sent = yield txn.send(self.as_api)
+                sent = await txn.send(self.as_api)
                if sent:
-                    yield txn.complete(self.store)
+                    await txn.complete(self.store)
                else:
                    run_in_background(self._on_txn_fail, service)
        except Exception:
            logger.exception("Error creating appservice transaction")
            run_in_background(self._on_txn_fail, service)
 
-    @defer.inlineCallbacks
-    def on_recovered(self, recoverer):
+    async def on_recovered(self, recoverer):
        logger.info(
            "Successfully recovered application service AS ID %s", recoverer.service.id
        )
        self.recoverers.pop(recoverer.service.id)
        logger.info("Remaining active recoverers: %s", len(self.recoverers))
-        yield self.store.set_appservice_state(
+        await self.store.set_appservice_state(
            recoverer.service, ApplicationServiceState.UP
        )
 
-    @defer.inlineCallbacks
-    def _on_txn_fail(self, service):
+    async def _on_txn_fail(self, service):
        try:
-            yield self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
+            await self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
            self.start_recoverer(service)
        except Exception:
            logger.exception("Error starting AS recoverer")

@@ -211,9 +204,8 @@ class _TransactionController(object):
        recoverer.recover()
        logger.info("Now %i active recoverers", len(self.recoverers))
 
-    @defer.inlineCallbacks
-    def _is_service_up(self, service):
-        state = yield self.store.get_appservice_state(service)
+    async def _is_service_up(self, service):
+        state = await self.store.get_appservice_state(service)
        return state == ApplicationServiceState.UP or state is None

@@ -254,25 +246,24 @@ class _Recoverer(object):
            self.backoff_counter += 1
        self.recover()
 
-    @defer.inlineCallbacks
-    def retry(self):
+    async def retry(self):
        logger.info("Starting retries on %s", self.service.id)
        try:
            while True:
-                txn = yield self.store.get_oldest_unsent_txn(self.service)
+                txn = await self.store.get_oldest_unsent_txn(self.service)
                if not txn:
                    # nothing left: we're done!
-                    self.callback(self)
+                    await self.callback(self)
                    return
 
                logger.info(
                    "Retrying transaction %s for AS ID %s", txn.id, txn.service.id
                )
-                sent = yield txn.send(self.as_api)
+                sent = await txn.send(self.as_api)
                if not sent:
                    break
 
-                yield txn.complete(self.store)
+                await txn.complete(self.store)
 
                # reset the backoff counter and then process the next transaction
                self.backoff_counter = 1
synapse/config/ratelimiting.py

@@ -93,6 +93,15 @@ class RatelimitConfig(Config):
        if rc_admin_redaction:
            self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)
 
+        self.rc_joins_local = RateLimitConfig(
+            config.get("rc_joins", {}).get("local", {}),
+            defaults={"per_second": 0.1, "burst_count": 3},
+        )
+        self.rc_joins_remote = RateLimitConfig(
+            config.get("rc_joins", {}).get("remote", {}),
+            defaults={"per_second": 0.01, "burst_count": 3},
+        )
 
    def generate_config_section(self, **kwargs):
        return """\
        ## Ratelimiting ##

@@ -118,6 +127,10 @@ class RatelimitConfig(Config):
        # - one for ratelimiting redactions by room admins. If this is not explicitly
        #   set then it uses the same ratelimiting as per rc_message. This is useful
        #   to allow room admins to deal with abuse quickly.
+        # - two for ratelimiting number of rooms a user can join, "local" for when
+        #   users are joining rooms the server is already in (this is cheap) vs
+        #   "remote" for when users are trying to join rooms not on the server (which
+        #   can be more expensive)
        #
        # The defaults are as shown below.
        #

@@ -143,6 +156,14 @@ class RatelimitConfig(Config):
        #rc_admin_redaction:
        #  per_second: 1
        #  burst_count: 50
+        #
+        #rc_joins:
+        #  local:
+        #    per_second: 0.1
+        #    burst_count: 3
+        #  remote:
+        #    per_second: 0.01
+        #    burst_count: 3
 
 
        # Ratelimiting settings for incoming federation
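
An operator wanting different limits than the defaults shown above would uncomment and override the new section in `homeserver.yaml`, for example (values illustrative):

```yaml
rc_joins:
  local:
    per_second: 0.05   # halve the default rate for cheap local joins
    burst_count: 3
  remote:
    per_second: 0.01
    burst_count: 2     # allow a smaller burst of expensive remote joins
```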
synapse/config/registration.py

@@ -333,24 +333,6 @@ class RegistrationConfig(Config):
        #
        #default_identity_server: https://matrix.org
 
-        # The list of identity servers trusted to verify third party
-        # identifiers by this server.
-        #
-        # Also defines the ID server which will be called when an account is
-        # deactivated (one will be picked arbitrarily).
-        #
-        # Note: This option is deprecated. Since v0.99.4, Synapse has tracked which identity
-        # server a 3PID has been bound to. For 3PIDs bound before then, Synapse runs a
-        # background migration script, informing itself that the identity server all of its
-        # 3PIDs have been bound to is likely one of the below.
-        #
-        # As of Synapse v1.4.0, all other functionality of this option has been deprecated, and
-        # it is now solely used for the purposes of the background migration script, and can be
-        # removed once it has run.
-        #trusted_third_party_id_servers:
-        #  - matrix.org
-        #  - vector.im
 
        # Handle threepid (email/phone etc) registration and password resets through a set of
        # *trusted* identity servers. Note that this allows the configured identity server to
        # reset passwords for accounts!
@@ -439,6 +439,9 @@ class ServerConfig(Config):
validator=attr.validators.instance_of(str),
default=ROOM_COMPLEXITY_TOO_GREAT,
)
admins_can_join = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)

self.limit_remote_rooms = LimitRemoteRoomsConfig(
**(config.get("limit_remote_rooms") or {})
@@ -893,6 +896,10 @@ class ServerConfig(Config):
#
#complexity_error: "This room is too complex."

# allow server admins to join complex rooms. Default is false.
#
#admins_can_join: true

# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
@@ -223,8 +223,7 @@ class Keyring(object):

return results

@defer.inlineCallbacks
def _start_key_lookups(self, verify_requests):
async def _start_key_lookups(self, verify_requests):
"""Sets off the key fetches for each verify request

Once each fetch completes, verify_request.key_ready will be resolved.
@@ -245,7 +244,7 @@ class Keyring(object):
server_to_request_ids.setdefault(server_name, set()).add(request_id)

# Wait for any previous lookups to complete before proceeding.
yield self.wait_for_previous_lookups(server_to_request_ids.keys())
await self.wait_for_previous_lookups(server_to_request_ids.keys())

# take out a lock on each of the servers by sticking a Deferred in
# key_downloads
@@ -283,15 +282,14 @@ class Keyring(object):
except Exception:
logger.exception("Error starting key lookups")

@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names):
async def wait_for_previous_lookups(self, server_names) -> None:
"""Waits for any previous key lookups for the given servers to finish.

Args:
server_names (Iterable[str]): list of servers which we want to look up

Returns:
Deferred[None]: resolves once all key lookups for the given servers have
Resolves once all key lookups for the given servers have
completed. Follows the synapse rules of logcontext preservation.
"""
loop_count = 1
@@ -309,7 +307,7 @@ class Keyring(object):
loop_count,
)
with PreserveLoggingContext():
yield defer.DeferredList((w[1] for w in wait_on))
await defer.DeferredList((w[1] for w in wait_on))

loop_count += 1

@@ -326,44 +324,44 @@ class Keyring(object):

remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called}

@defer.inlineCallbacks
def do_iterations():
with Measure(self.clock, "get_server_verify_keys"):
for f in self._key_fetchers:
if not remaining_requests:
return
yield self._attempt_key_fetches_with_fetcher(f, remaining_requests)

# look for any requests which weren't satisfied
with PreserveLoggingContext():
for verify_request in remaining_requests:
verify_request.key_ready.errback(
SynapseError(
401,
"No key for %s with ids in %s (min_validity %i)"
% (
verify_request.server_name,
verify_request.key_ids,
verify_request.minimum_valid_until_ts,
),
Codes.UNAUTHORIZED,
)
async def do_iterations():
try:
with Measure(self.clock, "get_server_verify_keys"):
for f in self._key_fetchers:
if not remaining_requests:
return
await self._attempt_key_fetches_with_fetcher(
f, remaining_requests
)

def on_err(err):
# we don't really expect to get here, because any errors should already
# have been caught and logged. But if we do, let's log the error and make
# sure that all of the deferreds are resolved.
logger.error("Unexpected error in _get_server_verify_keys: %s", err)
with PreserveLoggingContext():
for verify_request in remaining_requests:
if not verify_request.key_ready.called:
verify_request.key_ready.errback(err)
# look for any requests which weren't satisfied
with PreserveLoggingContext():
for verify_request in remaining_requests:
verify_request.key_ready.errback(
SynapseError(
401,
"No key for %s with ids in %s (min_validity %i)"
% (
verify_request.server_name,
verify_request.key_ids,
verify_request.minimum_valid_until_ts,
),
Codes.UNAUTHORIZED,
)
)
except Exception as err:
# we don't really expect to get here, because any errors should already
# have been caught and logged. But if we do, let's log the error and make
# sure that all of the deferreds are resolved.
logger.error("Unexpected error in _get_server_verify_keys: %s", err)
with PreserveLoggingContext():
for verify_request in remaining_requests:
if not verify_request.key_ready.called:
verify_request.key_ready.errback(err)

run_in_background(do_iterations).addErrback(on_err)
run_in_background(do_iterations)
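In the `do_iterations` rewrite just above, failure handling moves from an `on_err` errback attached to the background Deferred into a `try`/`except` inside the coroutine itself, so `run_in_background` no longer needs an `.addErrback(...)`. A minimal sketch of that move, with hypothetical `work`/`handle` functions standing in for the real logic:

```python
from twisted.internet import defer

async def work():
    raise RuntimeError("boom")

def handle(err):
    print("unexpected error:", err)

# Before: run the coroutine in the background and attach an errback.
def kick_off_old():
    defer.ensureDeferred(work()).addErrback(lambda f: handle(f.value))

# After: fold the error handling into the coroutine itself, so the
# background Deferred can never fail and needs no errback.
async def work_wrapped():
    try:
        await work()
    except Exception as err:
        handle(err)

def kick_off_new():
    defer.ensureDeferred(work_wrapped())
```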
@defer.inlineCallbacks
def _attempt_key_fetches_with_fetcher(self, fetcher, remaining_requests):
async def _attempt_key_fetches_with_fetcher(self, fetcher, remaining_requests):
"""Use a key fetcher to attempt to satisfy some key requests

Args:
@@ -390,7 +388,7 @@ class Keyring(object):
verify_request.minimum_valid_until_ts,
)

results = yield fetcher.get_keys(missing_keys)
results = await fetcher.get_keys(missing_keys)

completed = []
for verify_request in remaining_requests:
@@ -423,7 +421,7 @@ class Keyring(object):


class KeyFetcher(object):
def get_keys(self, keys_to_fetch):
async def get_keys(self, keys_to_fetch):
"""
Args:
keys_to_fetch (dict[str, dict[str, int]]):
@@ -442,8 +440,7 @@ class StoreKeyFetcher(KeyFetcher):
def __init__(self, hs):
self.store = hs.get_datastore()

@defer.inlineCallbacks
def get_keys(self, keys_to_fetch):
async def get_keys(self, keys_to_fetch):
"""see KeyFetcher.get_keys"""

keys_to_fetch = (
@@ -452,7 +449,7 @@ class StoreKeyFetcher(KeyFetcher):
for key_id in keys_for_server.keys()
)

res = yield self.store.get_server_verify_keys(keys_to_fetch)
res = await self.store.get_server_verify_keys(keys_to_fetch)
keys = {}
for (server_name, key_id), key in res.items():
keys.setdefault(server_name, {})[key_id] = key
@@ -464,8 +461,7 @@ class BaseV2KeyFetcher(object):
self.store = hs.get_datastore()
self.config = hs.get_config()

@defer.inlineCallbacks
def process_v2_response(self, from_server, response_json, time_added_ms):
async def process_v2_response(self, from_server, response_json, time_added_ms):
"""Parse a 'Server Keys' structure from the result of a /key request

This is used to parse either the entirety of the response from
@@ -537,7 +533,7 @@ class BaseV2KeyFetcher(object):

key_json_bytes = encode_canonical_json(response_json)

yield make_deferred_yieldable(
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
@@ -567,14 +563,12 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
self.client = hs.get_http_client()
self.key_servers = self.config.key_servers

@defer.inlineCallbacks
def get_keys(self, keys_to_fetch):
async def get_keys(self, keys_to_fetch):
"""see KeyFetcher.get_keys"""

@defer.inlineCallbacks
def get_key(key_server):
async def get_key(key_server):
try:
result = yield self.get_server_verify_key_v2_indirect(
result = await self.get_server_verify_key_v2_indirect(
keys_to_fetch, key_server
)
return result
@@ -592,7 +586,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):

return {}

results = yield make_deferred_yieldable(
results = await make_deferred_yieldable(
defer.gatherResults(
[run_in_background(get_key, server) for server in self.key_servers],
consumeErrors=True,
@@ -606,8 +600,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):

return union_of_keys

@defer.inlineCallbacks
def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
async def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
"""
Args:
keys_to_fetch (dict[str, dict[str, int]]):
@@ -617,7 +610,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
the keys

Returns:
Deferred[dict[str, dict[str, synapse.storage.keys.FetchKeyResult]]]: map
dict[str, dict[str, synapse.storage.keys.FetchKeyResult]]: map
from server_name -> key_id -> FetchKeyResult

Raises:
@@ -632,7 +625,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
)

try:
query_response = yield self.client.post_json(
query_response = await self.client.post_json(
destination=perspective_name,
path="/_matrix/key/v2/query",
data={
@@ -668,7 +661,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
try:
self._validate_perspectives_response(key_server, response)

processed_response = yield self.process_v2_response(
processed_response = await self.process_v2_response(
perspective_name, response, time_added_ms=time_now_ms
)
except KeyLookupError as e:
@@ -687,7 +680,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
)
keys.setdefault(server_name, {}).update(processed_response)

yield self.store.store_server_verify_keys(
await self.store.store_server_verify_keys(
perspective_name, time_now_ms, added_keys
)

@@ -739,24 +732,23 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
self.clock = hs.get_clock()
self.client = hs.get_http_client()

def get_keys(self, keys_to_fetch):
async def get_keys(self, keys_to_fetch):
"""
Args:
keys_to_fetch (dict[str, iterable[str]]):
the keys to be fetched. server_name -> key_ids

Returns:
Deferred[dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]]:
dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]:
map from server_name -> key_id -> FetchKeyResult
"""

results = {}

@defer.inlineCallbacks
def get_key(key_to_fetch_item):
async def get_key(key_to_fetch_item):
server_name, key_ids = key_to_fetch_item
try:
keys = yield self.get_server_verify_key_v2_direct(server_name, key_ids)
keys = await self.get_server_verify_key_v2_direct(server_name, key_ids)
results[server_name] = keys
except KeyLookupError as e:
logger.warning(
@@ -765,12 +757,11 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
except Exception:
logger.exception("Error getting keys %s from %s", key_ids, server_name)

return yieldable_gather_results(get_key, keys_to_fetch.items()).addCallback(
lambda _: results
)
return await yieldable_gather_results(
get_key, keys_to_fetch.items()
).addCallback(lambda _: results)

@defer.inlineCallbacks
def get_server_verify_key_v2_direct(self, server_name, key_ids):
async def get_server_verify_key_v2_direct(self, server_name, key_ids):
"""

Args:
@@ -792,7 +783,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):

time_now_ms = self.clock.time_msec()
try:
response = yield self.client.get_json(
response = await self.client.get_json(
destination=server_name,
path="/_matrix/key/v2/server/"
+ urllib.parse.quote(requested_key_id),
@@ -823,12 +814,12 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
% (server_name, response["server_name"])
)

response_keys = yield self.process_v2_response(
response_keys = await self.process_v2_response(
from_server=server_name,
response_json=response,
time_added_ms=time_now_ms,
)
yield self.store.store_server_verify_keys(
await self.store.store_server_verify_keys(
server_name,
time_now_ms,
((server_name, key_id, key) for key_id, key in response_keys.items()),
@@ -838,22 +829,18 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
return keys


@defer.inlineCallbacks
def _handle_key_deferred(verify_request):
async def _handle_key_deferred(verify_request) -> None:
"""Waits for the key to become available, and then performs a verification

Args:
verify_request (VerifyJsonRequest):

Returns:
Deferred[None]

Raises:
SynapseError if there was a problem performing the verification
"""
server_name = verify_request.server_name
with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.key_ready
_, key_id, verify_key = await verify_request.key_ready

json_object = verify_request.json_object
@@ -17,8 +17,6 @@ from typing import Optional
import attr
from nacl.signing import SigningKey

from twisted.internet import defer

from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import (
@@ -95,31 +93,30 @@ class EventBuilder(object):
def is_state(self):
return self._state_key is not None

@defer.inlineCallbacks
def build(self, prev_event_ids):
async def build(self, prev_event_ids):
"""Transform into a fully signed and hashed event

Args:
prev_event_ids (list[str]): The event IDs to use as the prev events

Returns:
Deferred[FrozenEvent]
FrozenEvent
"""

state_ids = yield defer.ensureDeferred(
self._state.get_current_state_ids(self.room_id, prev_event_ids)
state_ids = await self._state.get_current_state_ids(
self.room_id, prev_event_ids
)
auth_ids = yield self._auth.compute_auth_events(self, state_ids)
auth_ids = await self._auth.compute_auth_events(self, state_ids)

format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1:
auth_events = yield self._store.add_event_hashes(auth_ids)
prev_events = yield self._store.add_event_hashes(prev_event_ids)
auth_events = await self._store.add_event_hashes(auth_ids)
prev_events = await self._store.add_event_hashes(prev_event_ids)
else:
auth_events = auth_ids
prev_events = prev_event_ids

old_depth = yield self._store.get_max_depth_of(prev_event_ids)
old_depth = await self._store.get_max_depth_of(prev_event_ids)
depth = old_depth + 1

# we cap depth of generated events, to ensure that they are not

@@ -12,17 +12,19 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional, Union
from typing import TYPE_CHECKING, Optional, Union

import attr
from frozendict import frozendict

from twisted.internet import defer

from synapse.appservice import ApplicationService
from synapse.events import EventBase
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import StateMap

if TYPE_CHECKING:
from synapse.storage.data_stores.main import DataStore


@attr.s(slots=True)
class EventContext:
@@ -129,8 +131,7 @@ class EventContext:
delta_ids=delta_ids,
)

@defer.inlineCallbacks
def serialize(self, event, store):
async def serialize(self, event: EventBase, store: "DataStore") -> dict:
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`

@@ -146,7 +147,7 @@ class EventContext:
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_ids = yield self.get_prev_state_ids()
prev_state_ids = await self.get_prev_state_ids()
prev_state_id = prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
@@ -214,8 +215,7 @@ class EventContext:

return self._state_group

@defer.inlineCallbacks
def get_current_state_ids(self):
async def get_current_state_ids(self) -> Optional[StateMap[str]]:
"""
Gets the room state map, including this event - ie, the state in ``state_group``

@@ -224,32 +224,31 @@ class EventContext:
``rejected`` is set.

Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
Returns None if state_group is None, which happens when the associated
event is an outlier.

Maps a (type, state_key) to the event ID of the state event matching
this tuple.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")

yield self._ensure_fetched()
await self._ensure_fetched()
return self._current_state_ids

@defer.inlineCallbacks
def get_prev_state_ids(self):
async def get_prev_state_ids(self):
"""
Gets the room state map, excluding this event.

For a non-state event, this will be the same as get_current_state_ids().

Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
dict[(str, str), str]|None: Returns None if state_group
is None, which happens when the associated event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
yield self._ensure_fetched()
await self._ensure_fetched()
return self._prev_state_ids

def get_cached_current_state_ids(self):
@@ -269,8 +268,8 @@ class EventContext:

return self._current_state_ids

def _ensure_fetched(self):
return defer.succeed(None)
async def _ensure_fetched(self):
return None
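The base `_ensure_fetched` above stops returning `defer.succeed(None)` and becomes a trivial coroutine; the `_AsyncEventContextImpl` override in the next hunk keeps a single shared Deferred so that concurrent callers wait on one fill. A rough sketch of that shape, using plain `defer.ensureDeferred` where the real code uses Synapse's logcontext-aware `run_in_background`/`make_deferred_yieldable` helpers:

```python
from twisted.internet import defer

class Eager:
    async def _ensure_fetched(self):
        # nothing to load; the old code returned defer.succeed(None)
        return None

class Lazy:
    def __init__(self, load_coro_fn):
        self._load = load_coro_fn
        self._fetching = None  # shared Deferred, created on first call

    async def _ensure_fetched(self):
        if self._fetching is None:
            # start the load once; later callers await the same Deferred,
            # whose callbacks pass the result through to each awaiter
            self._fetching = defer.ensureDeferred(self._load())
        return await self._fetching
```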
@attr.s(slots=True)
@@ -303,21 +302,20 @@ class _AsyncEventContextImpl(EventContext):
_event_state_key = attr.ib(default=None)
_fetching_state_deferred = attr.ib(default=None)

def _ensure_fetched(self):
async def _ensure_fetched(self):
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(self._fill_out_state)

return make_deferred_yieldable(self._fetching_state_deferred)
return await make_deferred_yieldable(self._fetching_state_deferred)

@defer.inlineCallbacks
def _fill_out_state(self):
async def _fill_out_state(self):
"""Called to populate the _current_state_ids and _prev_state_ids
attributes by loading from the database.
"""
if self.state_group is None:
return

self._current_state_ids = yield self._storage.state.get_state_ids_for_group(
self._current_state_ids = await self._storage.state.get_state_ids_for_group(
self.state_group
)
if self._event_state_key is not None:

@@ -13,7 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from twisted.internet import defer
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.types import Requester


class ThirdPartyEventRules(object):
@@ -39,76 +41,79 @@ class ThirdPartyEventRules(object):
config=config, http_client=hs.get_simple_http_client()
)

@defer.inlineCallbacks
def check_event_allowed(self, event, context):
async def check_event_allowed(
self, event: EventBase, context: EventContext
) -> bool:
"""Check if a provided event should be allowed in the given context.

Args:
event (synapse.events.EventBase): The event to be checked.
context (synapse.events.snapshot.EventContext): The context of the event.
event: The event to be checked.
context: The context of the event.

Returns:
defer.Deferred[bool]: True if the event should be allowed, False if not.
True if the event should be allowed, False if not.
"""
if self.third_party_rules is None:
return True

prev_state_ids = yield context.get_prev_state_ids()
prev_state_ids = await context.get_prev_state_ids()

# Retrieve the state events from the database.
state_events = {}
for key, event_id in prev_state_ids.items():
state_events[key] = yield self.store.get_event(event_id, allow_none=True)
state_events[key] = await self.store.get_event(event_id, allow_none=True)

ret = yield self.third_party_rules.check_event_allowed(event, state_events)
ret = await self.third_party_rules.check_event_allowed(event, state_events)
return ret

@defer.inlineCallbacks
def on_create_room(self, requester, config, is_requester_admin):
async def on_create_room(
self, requester: Requester, config: dict, is_requester_admin: bool
) -> bool:
"""Intercept requests to create room to allow, deny or update the
request config.

Args:
requester (Requester)
config (dict): The creation config from the client.
is_requester_admin (bool): If the requester is an admin
requester
config: The creation config from the client.
is_requester_admin: If the requester is an admin

Returns:
defer.Deferred[bool]: Whether room creation is allowed or denied.
Whether room creation is allowed or denied.
"""

if self.third_party_rules is None:
return True

ret = yield self.third_party_rules.on_create_room(
ret = await self.third_party_rules.on_create_room(
requester, config, is_requester_admin
)
return ret

@defer.inlineCallbacks
def check_threepid_can_be_invited(self, medium, address, room_id):
async def check_threepid_can_be_invited(
self, medium: str, address: str, room_id: str
) -> bool:
"""Check if a provided 3PID can be invited in the given room.

Args:
medium (str): The 3PID's medium.
address (str): The 3PID's address.
room_id (str): The room we want to invite the threepid to.
medium: The 3PID's medium.
address: The 3PID's address.
room_id: The room we want to invite the threepid to.

Returns:
defer.Deferred[bool], True if the 3PID can be invited, False if not.
True if the 3PID can be invited, False if not.
"""

if self.third_party_rules is None:
return True

state_ids = yield self.store.get_filtered_current_state_ids(room_id)
room_state_events = yield self.store.get_events(state_ids.values())
state_ids = await self.store.get_filtered_current_state_ids(room_id)
room_state_events = await self.store.get_events(state_ids.values())

state_events = {}
for key, event_id in state_ids.items():
state_events[key] = room_state_events[event_id]

ret = yield self.third_party_rules.check_threepid_can_be_invited(
ret = await self.third_party_rules.check_threepid_can_be_invited(
medium, address, state_events
)
return ret
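With these hooks now declared `async` and the loaded module's results awaited directly, a configured third-party rules module must return awaitables from each callback. A hypothetical minimal module shaped to match this calling convention (the policy itself is purely illustrative):

```python
class ExampleRulesModule:
    """Hypothetical third-party rules module; the names match the hooks
    that ThirdPartyEventRules invokes, but the checks are toy examples."""

    def __init__(self, config, http_client):
        self.config = config
        self.http_client = http_client

    async def check_event_allowed(self, event, state_events):
        # toy policy: disallow room name changes
        return event.type != "m.room.name"

    async def on_create_room(self, requester, config, is_requester_admin):
        # allow all room creations
        return True

    async def check_threepid_can_be_invited(self, medium, address, state_events):
        # toy policy: only email 3PIDs may be invited
        return medium == "email"
```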
@@ -18,8 +18,6 @@ from typing import Any, Mapping, Union

from frozendict import frozendict

from twisted.internet import defer

from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
@@ -337,8 +335,9 @@ class EventClientSerializer(object):
hs.config.experimental_msc1849_support_enabled
)

@defer.inlineCallbacks
def serialize_event(self, event, time_now, bundle_aggregations=True, **kwargs):
async def serialize_event(
self, event, time_now, bundle_aggregations=True, **kwargs
):
"""Serializes a single event.

Args:
@@ -348,7 +347,7 @@ class EventClientSerializer(object):
**kwargs: Arguments to pass to `serialize_event`

Returns:
Deferred[dict]: The serialized event
dict: The serialized event
"""
# To handle the case of presence events and the like
if not isinstance(event, EventBase):
@@ -363,8 +362,8 @@ class EventClientSerializer(object):
if not event.internal_metadata.is_redacted() and (
self.experimental_msc1849_support_enabled and bundle_aggregations
):
annotations = yield self.store.get_aggregation_groups_for_event(event_id)
references = yield self.store.get_relations_for_event(
annotations = await self.store.get_aggregation_groups_for_event(event_id)
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)

@@ -378,7 +377,7 @@ class EventClientSerializer(object):

edit = None
if event.type == EventTypes.Message:
edit = yield self.store.get_applicable_edit(event_id)
edit = await self.store.get_applicable_edit(event_id)

if edit:
# If there is an edit replace the content, preserving existing

@@ -135,7 +135,7 @@ class FederationClient(FederationBase):
and try the request anyway.

Returns:
a Deferred which will eventually yield a JSON object from the
a Awaitable which will eventually yield a JSON object from the
response
"""
sent_queries_counter.labels(query_type).inc()
@@ -157,7 +157,7 @@ class FederationClient(FederationBase):
content (dict): The query content.

Returns:
a Deferred which will eventually yield a JSON object from the
an Awaitable which will eventually yield a JSON object from the
response
"""
sent_queries_counter.labels("client_device_keys").inc()
@@ -180,7 +180,7 @@ class FederationClient(FederationBase):
content (dict): The query content.

Returns:
a Deferred which will eventually yield a JSON object from the
an Awaitable which will eventually yield a JSON object from the
response
"""
sent_queries_counter.labels("client_one_time_keys").inc()
@@ -900,7 +900,7 @@ class FederationClient(FederationBase):
party instance

Returns:
Deferred[Dict[str, Any]]: The response from the remote server, or None if
Awaitable[Dict[str, Any]]: The response from the remote server, or None if
`remote_server` is the same as the local server_name

Raises:

@@ -288,8 +288,7 @@ class FederationSender(object):
for destination in destinations:
self._get_per_destination_queue(destination).send_pdu(pdu, order)

@defer.inlineCallbacks
def send_read_receipt(self, receipt: ReadReceipt):
async def send_read_receipt(self, receipt: ReadReceipt) -> None:
"""Send a RR to any other servers in the room

Args:
@@ -330,9 +329,7 @@ class FederationSender(object):
room_id = receipt.room_id

# Work out which remote servers should be poked and poke them.
domains = yield defer.ensureDeferred(
self.state.get_current_hosts_in_room(room_id)
)
domains = await self.state.get_current_hosts_in_room(room_id)
domains = [
d
for d in domains
@@ -387,8 +384,7 @@ class FederationSender(object):
queue.flush_read_receipts_for_room(room_id)

@preserve_fn  # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states: List[UserPresenceState]):
async def send_presence(self, states: List[UserPresenceState]):
"""Send the new presence states to the appropriate destinations.

This actually queues up the presence states ready for sending and
@@ -423,7 +419,7 @@ class FederationSender(object):
if not states_map:
break

yield self._process_presence_inner(list(states_map.values()))
await self._process_presence_inner(list(states_map.values()))
except Exception:
logger.exception("Error sending presence states to servers")
finally:
@@ -450,14 +446,11 @@ class FederationSender(object):
self._get_per_destination_queue(destination).send_presence(states)

@measure_func("txnqueue._process_presence")
@defer.inlineCallbacks
def _process_presence_inner(self, states: List[UserPresenceState]):
async def _process_presence_inner(self, states: List[UserPresenceState]):
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
"""
hosts_and_states = yield defer.ensureDeferred(
get_interested_remotes(self.store, states, self.state)
)
hosts_and_states = await get_interested_remotes(self.store, states, self.state)

for destinations, states in hosts_and_states:
for destination in destinations:
@@ -18,8 +18,6 @@ import logging
import urllib
from typing import Any, Dict, Optional

from twisted.internet import defer

from synapse.api.constants import Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.api.urls import (
@@ -51,7 +49,7 @@ class TransportLayerClient(object):
event_id (str): The event we want the context at.

Returns:
Deferred: Results in a dict received from the remote homeserver.
Awaitable: Results in a dict received from the remote homeserver.
"""
logger.debug("get_room_state_ids dest=%s, room=%s", destination, room_id)

@@ -75,7 +73,7 @@ class TransportLayerClient(object):
giving up. None indicates no timeout.

Returns:
Deferred: Results in a dict received from the remote homeserver.
Awaitable: Results in a dict received from the remote homeserver.
"""
logger.debug("get_pdu dest=%s, event_id=%s", destination, event_id)

@@ -96,7 +94,7 @@ class TransportLayerClient(object):
limit (int)

Returns:
Deferred: Results in a dict received from the remote homeserver.
Awaitable: Results in a dict received from the remote homeserver.
"""
logger.debug(
"backfill dest=%s, room_id=%s, event_tuples=%r, limit=%s",
@@ -118,16 +116,15 @@ class TransportLayerClient(object):
destination, path=path, args=args, try_trailing_slash_on_400=True
)

@defer.inlineCallbacks
@log_function
def send_transaction(self, transaction, json_data_callback=None):
async def send_transaction(self, transaction, json_data_callback=None):
""" Sends the given Transaction to its destination

Args:
transaction (Transaction)

Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body.

Fails with ``HTTPRequestException`` if we get an HTTP response
@@ -154,7 +151,7 @@ class TransportLayerClient(object):

path = _create_v1_path("/send/%s", transaction.transaction_id)

response = yield self.client.put_json(
response = await self.client.put_json(
transaction.destination,
path=path,
data=json_data,
@@ -166,14 +163,13 @@ class TransportLayerClient(object):

return response

@defer.inlineCallbacks
@log_function
def make_query(
async def make_query(
self, destination, query_type, args, retry_on_dns_fail, ignore_backoff=False
):
path = _create_v1_path("/query/%s", query_type)

content = yield self.client.get_json(
content = await self.client.get_json(
destination=destination,
path=path,
args=args,
@@ -184,9 +180,10 @@ class TransportLayerClient(object):

return content

@defer.inlineCallbacks
@log_function
def make_membership_event(self, destination, room_id, user_id, membership, params):
async def make_membership_event(
self, destination, room_id, user_id, membership, params
):
"""Asks a remote server to build and sign us a membership event

Note that this does not append any events to any graphs.
@@ -200,7 +197,7 @@ class TransportLayerClient(object):
request.

Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
Succeeds when we get a 2xx HTTP response. The result
will be the decoded JSON body (ie, the new event).

Fails with ``HTTPRequestException`` if we get an HTTP response
@@ -231,7 +228,7 @@ class TransportLayerClient(object):
ignore_backoff = True
retry_on_dns_fail = True

content = yield self.client.get_json(
content = await self.client.get_json(
destination=destination,
path=path,
args=params,
@@ -242,34 +239,31 @@ class TransportLayerClient(object):

return content

@defer.inlineCallbacks
@log_function
def send_join_v1(self, destination, room_id, event_id, content):
async def send_join_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_join/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination, path=path, data=content
)

return response

@defer.inlineCallbacks
@log_function
def send_join_v2(self, destination, room_id, event_id, content):
async def send_join_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination, path=path, data=content
)

return response

@defer.inlineCallbacks
@log_function
def send_leave_v1(self, destination, room_id, event_id, content):
async def send_leave_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_leave/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination,
path=path,
data=content,
@@ -282,12 +276,11 @@ class TransportLayerClient(object):

return response

@defer.inlineCallbacks
@log_function
def send_leave_v2(self, destination, room_id, event_id, content):
async def send_leave_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_leave/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination,
path=path,
data=content,
@@ -300,31 +293,28 @@ class TransportLayerClient(object):

return response

@defer.inlineCallbacks
@log_function
def send_invite_v1(self, destination, room_id, event_id, content):
async def send_invite_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/invite/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination, path=path, data=content, ignore_backoff=True
)

return response

@defer.inlineCallbacks
@log_function
def send_invite_v2(self, destination, room_id, event_id, content):
async def send_invite_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/invite/%s/%s", room_id, event_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination, path=path, data=content, ignore_backoff=True
)

return response

@defer.inlineCallbacks
@log_function
def get_public_rooms(
async def get_public_rooms(
self,
remote_server: str,
limit: Optional[int] = None,
@@ -355,7 +345,7 @@ class TransportLayerClient(object):
data["filter"] = search_filter

try:
response = yield self.client.post_json(
response = await self.client.post_json(
destination=remote_server, path=path, data=data, ignore_backoff=True
)
except HttpResponseException as e:
@@ -381,7 +371,7 @@ class TransportLayerClient(object):
args["since"] = [since_token]

try:
response = yield self.client.get_json(
response = await self.client.get_json(
destination=remote_server, path=path, args=args, ignore_backoff=True
)
except HttpResponseException as e:
@@ -396,29 +386,26 @@ class TransportLayerClient(object):

return response

@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(self, destination, room_id, event_dict):
async def exchange_third_party_invite(self, destination, room_id, event_dict):
path = _create_v1_path("/exchange_third_party_invite/%s", room_id)

response = yield self.client.put_json(
response = await self.client.put_json(
destination=destination, path=path, data=event_dict
)

return response

@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
async def get_event_auth(self, destination, room_id, event_id):
path = _create_v1_path("/event_auth/%s/%s", room_id, event_id)

content = yield self.client.get_json(destination=destination, path=path)
content = await self.client.get_json(destination=destination, path=path)

return content

@defer.inlineCallbacks
@log_function
def query_client_keys(self, destination, query_content, timeout):
async def query_client_keys(self, destination, query_content, timeout):
"""Query the device keys for a list of user ids hosted on a remote
server.

@@ -453,14 +440,13 @@ class TransportLayerClient(object):
"""
path = _create_v1_path("/user/keys/query")

content = yield self.client.post_json(
content = await self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
return content

@defer.inlineCallbacks
@log_function
def query_user_devices(self, destination, user_id, timeout):
async def query_user_devices(self, destination, user_id, timeout):
"""Query the devices for a user id hosted on a remote server.

Response:
@@ -493,14 +479,13 @@ class TransportLayerClient(object):
"""
path = _create_v1_path("/user/devices/%s", user_id)

content = yield self.client.get_json(
content = await self.client.get_json(
destination=destination, path=path, timeout=timeout
)
return content

@defer.inlineCallbacks
@log_function
def claim_client_keys(self, destination, query_content, timeout):
async def claim_client_keys(self, destination, query_content, timeout):
"""Claim one-time keys for a list of devices hosted on a remote server.

Request:
@@ -532,14 +517,13 @@ class TransportLayerClient(object):

path = _create_v1_path("/user/keys/claim")

content = yield self.client.post_json(
content = await self.client.post_json(
destination=destination, path=path, data=query_content, timeout=timeout
)
return content

@defer.inlineCallbacks
@log_function
def get_missing_events(
async def get_missing_events(
self,
destination,
room_id,
@@ -551,7 +535,7 @@ class TransportLayerClient(object):
):
path = _create_v1_path("/get_missing_events/%s", room_id)

content = yield self.client.post_json(
content = await self.client.post_json(
destination=destination,
path=path,
data={
@@ -41,8 +41,6 @@ from typing import Tuple

from signedjson.sign import sign_json

from twisted.internet import defer

from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id
@@ -72,8 +70,9 @@ class GroupAttestationSigning(object):
self.server_name = hs.hostname
self.signing_key = hs.signing_key

@defer.inlineCallbacks
def verify_attestation(self, attestation, group_id, user_id, server_name=None):
async def verify_attestation(
self, attestation, group_id, user_id, server_name=None
):
"""Verifies that the given attestation matches the given parameters.

An optional server_name can be supplied to explicitly set which server's
@@ -102,7 +101,7 @@ class GroupAttestationSigning(object):
if valid_until_ms < now:
raise SynapseError(400, "Attestation expired")

yield self.keyring.verify_json_for_server(
await self.keyring.verify_json_for_server(
server_name, attestation, now, "Group attestation"
)

@@ -142,8 +141,7 @@ class GroupAttestionRenewer(object):
self._start_renew_attestations, 30 * 60 * 1000
)

@defer.inlineCallbacks
def on_renew_attestation(self, group_id, user_id, content):
async def on_renew_attestation(self, group_id, user_id, content):
"""When a remote updates an attestation
"""
attestation = content["attestation"]
@@ -151,11 +149,11 @@ class GroupAttestionRenewer(object):
if not self.is_mine_id(group_id) and not self.is_mine_id(user_id):
raise SynapseError(400, "Neither user not group are on this server")

yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
attestation, user_id=user_id, group_id=group_id
)

yield self.store.update_remote_attestion(group_id, user_id, attestation)
await self.store.update_remote_attestion(group_id, user_id, attestation)

return {}

@@ -172,8 +170,7 @@ class GroupAttestionRenewer(object):
now + UPDATE_ATTESTATION_TIME_MS
)

@defer.inlineCallbacks
def _renew_attestation(group_user: Tuple[str, str]):
async def _renew_attestation(group_user: Tuple[str, str]):
group_id, user_id = group_user
try:
if not self.is_mine_id(group_id):
@@ -186,16 +183,16 @@ class GroupAttestionRenewer(object):
user_id,
group_id,
)
yield self.store.remove_attestation_renewal(group_id, user_id)
await self.store.remove_attestation_renewal(group_id, user_id)
return

attestation = self.attestations.create_attestation(group_id, user_id)

yield self.transport_client.renew_group_attestation(
await self.transport_client.renew_group_attestation(
destination, group_id, user_id, content={"attestation": attestation}
)

yield self.store.update_attestation_renewal(
await self.store.update_attestation_renewal(
group_id, user_id, attestation
)
except (RequestSendFailed, HttpResponseException) as e:
@@ -17,7 +17,6 @@ import logging

import twisted
import twisted.internet.error
from twisted.internet import defer
from twisted.web import server, static
from twisted.web.resource import Resource

@@ -41,8 +40,7 @@ class AcmeHandler(object):
self.reactor = hs.get_reactor()
self._acme_domain = hs.config.acme_domain

@defer.inlineCallbacks
def start_listening(self):
async def start_listening(self):
from synapse.handlers import acme_issuing_service

# Configure logging for txacme, if you need to debug
@@ -82,18 +80,17 @@ class AcmeHandler(object):
self._issuer._registered = False

try:
yield self._issuer._ensure_registered()
await self._issuer._ensure_registered()
except Exception:
logger.error(ACME_REGISTER_FAIL_ERROR)
raise

@defer.inlineCallbacks
def provision_certificate(self):
async def provision_certificate(self):

logger.warning("Reprovisioning %s", self._acme_domain)

try:
yield self._issuer.issue_cert(self._acme_domain)
await self._issuer.issue_cert(self._acme_domain)
except Exception:
logger.exception("Fail!")
raise

@@ -27,7 +27,6 @@ from synapse.metrics import (
event_processing_loop_room_count,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util import log_failure
from synapse.util.metrics import Measure

logger = logging.getLogger(__name__)
@@ -100,10 +99,11 @@ class ApplicationServicesHandler(object):

if not self.started_scheduler:

def start_scheduler():
return self.scheduler.start().addErrback(
log_failure, "Application Services Failure"
)
async def start_scheduler():
try:
return self.scheduler.start()
except Exception:
logger.error("Application Services Failure")

run_as_background_process("as_scheduler", start_scheduler)
self.started_scheduler = True
@@ -2242,8 +2242,6 @@ class FederationHandler(BaseHandler):
at the event's position in the DAG, though occasionally (eg if the
event is an outlier), may be the auth events claimed by the remote
server.

Also NB that this function adds entries to it.
Returns:
updated context object
"""
@@ -2251,7 +2249,7 @@ class FederationHandler(BaseHandler):
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]

try:
context = await self._update_auth_events_and_context_for_auth(
context = await self._fetch_missing_auth_events(
origin, event, context, auth_events
)
except Exception:
@@ -2273,7 +2271,7 @@ class FederationHandler(BaseHandler):

return context

async def _update_auth_events_and_context_for_auth(
async def _fetch_missing_auth_events(
self,
origin: str,
event: EventBase,
@@ -2282,14 +2280,8 @@ class FederationHandler(BaseHandler):
) -> EventContext:
"""Helper for do_auth. See there for docs.

Checks whether a given event has the expected auth events. If it
doesn't then we talk to the remote server to compare state to see if
we can come to a consensus (e.g. if one server missed some valid
state).

This attempts to resolve any potential divergence of state between
servers, but is not essential and so failures should not block further
processing of the event.
Checks and fetches if there are any auth events that we don't have,
and if so fetch them.

Args:
origin:
@@ -2304,8 +2296,6 @@ class FederationHandler(BaseHandler):
event is an outlier), may be the auth events claimed by the remote
server.

Also NB that this function adds entries to it.

Returns:
updated context
"""
@@ -2363,141 +2353,14 @@ class FederationHandler(BaseHandler):
"do_auth %s missing_auth: %s", event.event_id, e.event_id
)
await self._handle_new_event(origin, e, auth_events=auth)

if e.event_id in event_auth_events:
auth_events[(e.type, e.state_key)] = e
except AuthError:
pass

except Exception:
logger.exception("Failed to get auth chain")

if event.internal_metadata.is_outlier():
# XXX: given that, for an outlier, we'll be working with the
# event's *claimed* auth events rather than those we calculated:
# (a) is there any point in this test, since different_auth below will
# obviously be empty
# (b) alternatively, why don't we do it earlier?
logger.info("Skipping auth_event fetch for outlier")
return context

different_auth = event_auth_events.difference(
e.event_id for e in auth_events.values()
)

if not different_auth:
return context

logger.info(
"auth_events refers to events which are not in our calculated auth "
"chain: %s",
different_auth,
)

# XXX: currently this checks for redactions but I'm not convinced that is
# necessary?
different_events = await self.store.get_events_as_list(different_auth)

for d in different_events:
if d.room_id != event.room_id:
logger.warning(
"Event %s refers to auth_event %s which is in a different room",
event.event_id,
d.event_id,
)

# don't attempt to resolve the claimed auth events against our own
# in this case: just use our own auth events.
#
# XXX: should we reject the event in this case? It feels like we should,
# but then shouldn't we also do so if we've failed to fetch any of the
# auth events?
return context

# now we state-resolve between our own idea of the auth events, and the remote's
# idea of them.

local_state = auth_events.values()
remote_auth_events = dict(auth_events)
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()

room_version = await self.store.get_room_version_id(event.room_id)
new_state = await self.state_handler.resolve_events(
room_version, (local_state, remote_state), event
)

logger.info(
"After state res: updating auth_events with new state %s",
{
(d.type, d.state_key): d.event_id
for d in new_state.values()
if auth_events.get((d.type, d.state_key)) != d
},
)

auth_events.update(new_state)

context = await self._update_context_for_auth_events(
event, context, auth_events
)

return context

async def _update_context_for_auth_events(
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
) -> EventContext:
"""Update the state_ids in an event context after auth event resolution,
storing the changes as a new state group.

Args:
event: The event we're handling the context for

context: initial event context

auth_events: Events to update in the event context.

Returns:
new event context
"""
# exclude the state key of the new event from the current_state in the context.
if event.is_state():
event_key = (event.type, event.state_key)  # type: Optional[Tuple[str, str]]
else:
event_key = None
state_updates = {
k: a.event_id for k, a in auth_events.items() if k != event_key
}

current_state_ids = await context.get_current_state_ids()
current_state_ids = dict(current_state_ids)

current_state_ids.update(state_updates)

prev_state_ids = await context.get_prev_state_ids()
prev_state_ids = dict(prev_state_ids)

prev_state_ids.update({k: a.event_id for k, a in auth_events.items()})

# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = await self.state_store.store_state_group(
event.event_id,
event.room_id,
prev_group=prev_group,
delta_ids=state_updates,
current_state_ids=current_state_ids,
)

return EventContext.with_state(
state_group=state_group,
state_group_before_event=context.state_group_before_event,
current_state_ids=current_state_ids,
prev_state_ids=prev_state_ids,
prev_group=prev_group,
delta_ids=state_updates,
)

async def construct_auth_difference(
self, local_auth: Iterable[EventBase], remote_auth: Iterable[EventBase]
) -> Dict:
@@ -23,39 +23,32 @@ logger = logging.getLogger(__name__)


 def _create_rerouter(func_name):
-    """Returns a function that looks at the group id and calls the function
+    """Returns an async function that looks at the group id and calls the function
     on federation or the local group server if the group is local
     """

-    def f(self, group_id, *args, **kwargs):
+    async def f(self, group_id, *args, **kwargs):
         if self.is_mine_id(group_id):
-            return getattr(self.groups_server_handler, func_name)(
+            return await getattr(self.groups_server_handler, func_name)(
                 group_id, *args, **kwargs
             )
         else:
             destination = get_domain_from_id(group_id)
-            d = getattr(self.transport_client, func_name)(
-                destination, group_id, *args, **kwargs
-            )
-
-            # Capture errors returned by the remote homeserver and
-            # re-throw specific errors as SynapseErrors. This is so
-            # when the remote end responds with things like 403 Not
-            # In Group, we can communicate that to the client instead
-            # of a 500.
-            def http_response_errback(failure):
-                failure.trap(HttpResponseException)
-                e = failure.value
+            try:
+                return await getattr(self.transport_client, func_name)(
+                    destination, group_id, *args, **kwargs
+                )
+            except HttpResponseException as e:
+                # Capture errors returned by the remote homeserver and
+                # re-throw specific errors as SynapseErrors. This is so
+                # when the remote end responds with things like 403 Not
+                # In Group, we can communicate that to the client instead
+                # of a 500.
                 raise e.to_synapse_error()
-
-            def request_failed_errback(failure):
-                failure.trap(RequestSendFailed)
+            except RequestSendFailed:
                 raise SynapseError(502, "Failed to contact group server")
-
-            d.addErrback(http_response_errback)
-            d.addErrback(request_failed_errback)
-            return d

     return f

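The hunk above is one instance of a pattern that recurs throughout this branch: Deferred-plus-errback error translation is folded into a plain `try`/`except` once the wrapper becomes a coroutine. A minimal standalone sketch of the resulting shape (the exception class here is a stand-in, not Synapse's own):

```python
import asyncio


class HttpResponseException(Exception):
    """Stand-in for an HTTP error returned by a remote server."""

    def to_synapse_error(self):
        return RuntimeError("403 Not In Group")


async def call_remote():
    # pretend the remote homeserver responded with an HTTP error
    raise HttpResponseException()


async def rerouted_call():
    # error translation now reads linearly, instead of being
    # registered as errbacks on a Deferred after the fact
    try:
        return await call_remote()
    except HttpResponseException as e:
        raise e.to_synapse_error()


try:
    asyncio.run(rerouted_call())
except RuntimeError as e:
    print(e)  # 403 Not In Group
```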
@@ -22,14 +22,10 @@ import urllib.parse
 from typing import Awaitable, Callable, Dict, List, Optional, Tuple

 from canonicaljson import json
-from signedjson.key import decode_verify_key_bytes
-from signedjson.sign import verify_signed_json
-from unpaddedbase64 import decode_base64

 from twisted.internet.error import TimeoutError

 from synapse.api.errors import (
     AuthError,
     CodeMessageException,
     Codes,
     HttpResponseException,
@@ -628,9 +624,9 @@ class IdentityHandler(BaseHandler):
             )

             if "mxid" in data:
-                if "signatures" not in data:
-                    raise AuthError(401, "No signatures on 3pid binding")
-                await self._verify_any_signature(data, id_server)
+                # note: we used to verify the identity server's signature here, but no longer
+                # require or validate it. See the following for context:
+                # https://github.com/matrix-org/synapse/issues/5253#issuecomment-666246950
                 return data["mxid"]
         except TimeoutError:
             raise SynapseError(500, "Timed out contacting identity server")
@@ -751,30 +747,6 @@ class IdentityHandler(BaseHandler):
         mxid = lookup_results["mappings"].get(lookup_value)
         return mxid

-    async def _verify_any_signature(self, data, server_hostname):
-        if server_hostname not in data["signatures"]:
-            raise AuthError(401, "No signature from server %s" % (server_hostname,))
-        for key_name, signature in data["signatures"][server_hostname].items():
-            try:
-                key_data = await self.blacklisting_http_client.get_json(
-                    "%s%s/_matrix/identity/api/v1/pubkey/%s"
-                    % (id_server_scheme, server_hostname, key_name)
-                )
-            except TimeoutError:
-                raise SynapseError(500, "Timed out contacting identity server")
-            if "public_key" not in key_data:
-                raise AuthError(
-                    401, "No public key named %s from %s" % (key_name, server_hostname)
-                )
-            verify_signed_json(
-                data,
-                server_hostname,
-                decode_verify_key_bytes(
-                    key_name, decode_base64(key_data["public_key"])
-                ),
-            )
-            return

     async def ask_id_server_for_third_party_invite(
         self,
         requester: Requester,

@@ -548,7 +548,7 @@ class RegistrationHandler(BaseHandler):
             address (str|None): the IP address used to perform the registration.

         Returns:
-            Deferred
+            Awaitable
         """
         if self.hs.config.worker_app:
             return self._register_client(

@@ -22,7 +22,8 @@ from unpaddedbase64 import encode_base64

 from synapse import types
 from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
-from synapse.api.errors import AuthError, Codes, SynapseError
+from synapse.api.errors import AuthError, Codes, LimitExceededError, SynapseError
+from synapse.api.ratelimiting import Ratelimiter
 from synapse.api.room_versions import EventFormatVersions
 from synapse.crypto.event_signing import compute_event_reference_hash
 from synapse.events import EventBase
@@ -77,6 +78,17 @@ class RoomMemberHandler(object):
         if self._is_on_event_persistence_instance:
             self.persist_event_storage = hs.get_storage().persistence

+        self._join_rate_limiter_local = Ratelimiter(
+            clock=self.clock,
+            rate_hz=hs.config.ratelimiting.rc_joins_local.per_second,
+            burst_count=hs.config.ratelimiting.rc_joins_local.burst_count,
+        )
+        self._join_rate_limiter_remote = Ratelimiter(
+            clock=self.clock,
+            rate_hz=hs.config.ratelimiting.rc_joins_remote.per_second,
+            burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count,
+        )
+
         # This is only used to get at ratelimit function, and
         # maybe_kick_guest_users. It's fine there are multiple of these as
         # it doesn't store state.
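The two `Ratelimiter` instances above gate local and remote joins separately. Judging from the call sites in the next hunk, `can_do_action` returns an `(allowed, time_allowed)` pair, where `time_allowed` is the earliest clock time at which the action would next be permitted. A rough standalone sketch of that contract (an illustration of the interface only, not Synapse's implementation):

```python
import time


class Ratelimiter:
    """Toy token-bucket limiter: at most `burst_count` actions per key,
    draining at `rate_hz` actions per second."""

    def __init__(self, rate_hz, burst_count):
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        self._buckets = {}  # key -> (action_count, last_update_time)

    def can_do_action(self, key):
        now = time.time()
        count, last = self._buckets.get(key, (0.0, now))
        # drain the bucket according to the time elapsed since last use
        count = max(0.0, count - (now - last) * self.rate_hz)
        if count + 1.0 <= self.burst_count:
            allowed, count, time_allowed = True, count + 1.0, now
        else:
            allowed = False
            # earliest time at which the bucket will have drained enough
            time_allowed = now + (count + 1.0 - self.burst_count) / self.rate_hz
        self._buckets[key] = (count, now)
        return allowed, time_allowed


limiter = Ratelimiter(rate_hz=0.1, burst_count=3)
for _ in range(4):
    allowed, time_allowed = limiter.can_do_action("@alice:example.com")
    print(allowed, round(time_allowed - time.time(), 1))
# the fourth call prints "False 10.0"; the handler converts that gap
# into LimitExceededError(retry_after_ms=...)
```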
@@ -441,7 +453,28 @@ class RoomMemberHandler(object):
                 # so don't really fit into the general auth process.
                 raise AuthError(403, "Guest access not allowed")

-            if not is_host_in_room:
+            if is_host_in_room:
+                time_now_s = self.clock.time()
+                allowed, time_allowed = self._join_rate_limiter_local.can_do_action(
+                    requester.user.to_string(),
+                )
+
+                if not allowed:
+                    raise LimitExceededError(
+                        retry_after_ms=int(1000 * (time_allowed - time_now_s))
+                    )
+
+            else:
+                time_now_s = self.clock.time()
+                allowed, time_allowed = self._join_rate_limiter_remote.can_do_action(
+                    requester.user.to_string(),
+                )
+
+                if not allowed:
+                    raise LimitExceededError(
+                        retry_after_ms=int(1000 * (time_allowed - time_now_s))
+                    )
+
                 inviter = await self._get_inviter(target.to_string(), room_id)
                 if inviter and not self.hs.is_mine(inviter):
                     remote_room_hosts.append(inviter.domain)
@@ -469,26 +502,39 @@ class RoomMemberHandler(object):
                 user_id=target.to_string(), room_id=room_id
             )  # type: Optional[RoomsForUser]
             if not invite:
+                logger.info(
+                    "%s sent a leave request to %s, but that is not an active room "
+                    "on this server, and there is no pending invite",
+                    target,
+                    room_id,
+                )
+
                 raise SynapseError(404, "Not a known room")

             logger.info(
                 "%s rejects invite to %s from %s", target, room_id, invite.sender
             )

-            if self.hs.is_mine_id(invite.sender):
-                # the inviter was on our server, but has now left. Carry on
-                # with the normal rejection codepath.
-                #
-                # This is a bit of a hack, because the room might still be
-                # active on other servers.
-                pass
-            else:
+            if not self.hs.is_mine_id(invite.sender):
                 # send the rejection to the inviter's HS (with fallback to
                 # local event)
                 return await self.remote_reject_invite(
                     invite.event_id, txn_id, requester, content,
                 )
+
+            # the inviter was on our server, but has now left. Carry on
+            # with the normal rejection codepath, which will also send the
+            # rejection out to any other servers we believe are still in the room.

+        # thanks to overzealous cleaning up of event_forward_extremities in
+        # `delete_old_current_state_events`, it's possible to end up with no
+        # forward extremities here. If that happens, let's just hang the
+        # rejection off the invite event.
+        #
+        # see: https://github.com/matrix-org/synapse/issues/7139
+        if len(latest_event_ids) == 0:
+            latest_event_ids = [invite.event_id]

         return await self._local_membership_update(
             requester=requester,
             target=target,
@@ -952,7 +998,11 @@ class RoomMemberMasterHandler(RoomMemberHandler):
         if len(remote_room_hosts) == 0:
             raise SynapseError(404, "No known servers")

-        if self.hs.config.limit_remote_rooms.enabled:
+        check_complexity = self.hs.config.limit_remote_rooms.enabled
+        if check_complexity and self.hs.config.limit_remote_rooms.admins_can_join:
+            check_complexity = not await self.hs.auth.is_server_admin(user)
+
+        if check_complexity:
             # Fetch the room complexity
             too_complex = await self._is_remote_room_too_complex(
                 room_id, remote_room_hosts
@@ -975,7 +1025,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):

         # Check the room we just joined wasn't too large, if we didn't fetch the
         # complexity of it before.
-        if self.hs.config.limit_remote_rooms.enabled:
+        if check_complexity:
             if too_complex is False:
                 # We checked, and we're under the limit.
                 return event_id, stream_id

@@ -96,6 +96,9 @@ class SamlHandler:
             relay_state=client_redirect_url
         )

+        # Since SAML sessions timeout it is useful to log when they were created.
+        logger.info("Initiating a new SAML session: %s" % (reqid,))
+
         now = self._clock.time_msec()
         self._outstanding_requests_dict[reqid] = Saml2SessionData(
             creation_time=now, ui_auth_session_id=ui_auth_session_id,

@@ -232,7 +232,7 @@ class StatsHandler:

                 if membership == prev_membership:
                     pass  # noop
-                if membership == Membership.JOIN:
+                elif membership == Membership.JOIN:
                     room_stats_delta["joined_members"] += 1
                 elif membership == Membership.INVITE:
                     room_stats_delta["invited_members"] += 1
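The one-word change above (`if` → `elif`) matters: with the original `if`, a transition where `membership == prev_membership == Membership.JOIN` would fall through the noop branch and still bump `joined_members`. A minimal illustration of the difference (simplified counters, not the real StatsHandler):

```python
JOIN = "join"


def delta(membership, prev_membership):
    joined = 0
    if membership == prev_membership:
        pass  # noop: membership did not actually change
    elif membership == JOIN:
        joined += 1
    return joined


# a join -> join "transition" must not inflate the joined count
assert delta(JOIN, JOIN) == 0
# a genuine join still counts
assert delta(JOIN, "leave") == 1
print("ok")
```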
@@ -103,6 +103,7 @@ class JoinedSyncResult:
     account_data = attr.ib(type=List[JsonDict])
     unread_notifications = attr.ib(type=JsonDict)
     summary = attr.ib(type=Optional[JsonDict])
+    unread_count = attr.ib(type=int)

     def __nonzero__(self) -> bool:
         """Make the result appear empty if there are no updates. This is used
@@ -1886,6 +1887,10 @@ class SyncHandler(object):

         if room_builder.rtype == "joined":
             unread_notifications = {}  # type: Dict[str, str]
+
+            unread_count = await self.store.get_unread_message_count_for_user(
+                room_id, sync_config.user.to_string(),
+            )
             room_sync = JoinedSyncResult(
                 room_id=room_id,
                 timeline=batch,
@@ -1894,6 +1899,7 @@ class SyncHandler(object):
                 account_data=account_data_events,
                 unread_notifications=unread_notifications,
                 summary=summary,
+                unread_count=unread_count,
             )

             if room_sync or always_include:

@@ -395,7 +395,9 @@ class SimpleHttpClient(object):
         if 200 <= response.code < 300:
             return json.loads(body.decode("utf-8"))
         else:
-            raise HttpResponseException(response.code, response.phrase, body)
+            raise HttpResponseException(
+                response.code, response.phrase.decode("ascii", errors="replace"), body
+            )

     @defer.inlineCallbacks
     def post_json_get_json(self, uri, post_json, headers=None):
@@ -436,7 +438,9 @@ class SimpleHttpClient(object):
         if 200 <= response.code < 300:
             return json.loads(body.decode("utf-8"))
         else:
-            raise HttpResponseException(response.code, response.phrase, body)
+            raise HttpResponseException(
+                response.code, response.phrase.decode("ascii", errors="replace"), body
+            )

     @defer.inlineCallbacks
     def get_json(self, uri, args={}, headers=None):
@@ -509,7 +513,9 @@ class SimpleHttpClient(object):
         if 200 <= response.code < 300:
             return json.loads(body.decode("utf-8"))
         else:
-            raise HttpResponseException(response.code, response.phrase, body)
+            raise HttpResponseException(
+                response.code, response.phrase.decode("ascii", errors="replace"), body
+            )

     @defer.inlineCallbacks
     def get_raw(self, uri, args={}, headers=None):
@@ -544,7 +550,9 @@ class SimpleHttpClient(object):
         if 200 <= response.code < 300:
             return body
         else:
-            raise HttpResponseException(response.code, response.phrase, body)
+            raise HttpResponseException(
+                response.code, response.phrase.decode("ascii", errors="replace"), body
+            )

     # XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
     # The two should be factored out.
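All four hunks above make the same fix: under Python 3, `response.phrase` from Twisted is `bytes`, so interpolating it straight into an exception message mixes bytes into text, and decoding it with `errors="replace"` keeps a malformed reason phrase from raising a secondary `UnicodeDecodeError`. A quick demonstration of the decode behaviour:

```python
# Twisted hands back the HTTP reason phrase as bytes under Python 3.
phrase = b"Internal Server Error"
print("%s" % (phrase,))        # "b'Internal Server Error'" - wrong
print(phrase.decode("ascii"))  # "Internal Server Error"

# errors="replace" tolerates a reason phrase that isn't valid ASCII
garbled = b"Erreur interne du serveur \xe9"
print(garbled.decode("ascii", errors="replace"))  # ends with U+FFFD
```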
@@ -121,8 +121,7 @@ class MatrixFederationRequest(object):
         return self.json


-@defer.inlineCallbacks
-def _handle_json_response(reactor, timeout_sec, request, response):
+async def _handle_json_response(reactor, timeout_sec, request, response):
     """
     Reads the JSON body of a response, with a timeout

@@ -141,7 +140,7 @@ def _handle_json_response(reactor, timeout_sec, request, response):
         d = treq.json_content(response)
         d = timeout_deferred(d, timeout=timeout_sec, reactor=reactor)

-        body = yield make_deferred_yieldable(d)
+        body = await make_deferred_yieldable(d)
     except TimeoutError as e:
         logger.warning(
             "{%s} [%s] Timed out reading response", request.txn_id, request.destination,
@@ -224,8 +223,7 @@ class MatrixFederationHttpClient(object):

         self._cooperator = Cooperator(scheduler=schedule)

-    @defer.inlineCallbacks
-    def _send_request_with_optional_trailing_slash(
+    async def _send_request_with_optional_trailing_slash(
         self, request, try_trailing_slash_on_400=False, **send_request_args
     ):
         """Wrapper for _send_request which can optionally retry the request
@@ -246,10 +244,10 @@ class MatrixFederationHttpClient(object):
             (except 429).

         Returns:
-            Deferred[Dict]: Parsed JSON response body.
+            Dict: Parsed JSON response body.
         """
         try:
-            response = yield self._send_request(request, **send_request_args)
+            response = await self._send_request(request, **send_request_args)
         except HttpResponseException as e:
             # Received an HTTP error > 300. Check if it meets the requirements
             # to retry with a trailing slash
@@ -265,12 +263,11 @@ class MatrixFederationHttpClient(object):
             logger.info("Retrying request with trailing slash")
             request.path += "/"

-            response = yield self._send_request(request, **send_request_args)
+            response = await self._send_request(request, **send_request_args)

         return response

-    @defer.inlineCallbacks
-    def _send_request(
+    async def _send_request(
         self,
         request,
         retry_on_dns_fail=True,
@@ -311,7 +308,7 @@ class MatrixFederationHttpClient(object):
             backoff_on_404 (bool): Back off if we get a 404

         Returns:
-            Deferred[twisted.web.client.Response]: resolves with the HTTP
+            twisted.web.client.Response: resolves with the HTTP
             response object on success.

         Raises:
@@ -335,7 +332,7 @@ class MatrixFederationHttpClient(object):
         ):
             raise FederationDeniedError(request.destination)

-        limiter = yield synapse.util.retryutils.get_retry_limiter(
+        limiter = await synapse.util.retryutils.get_retry_limiter(
             request.destination,
             self.clock,
             self._store,
@@ -433,7 +430,7 @@ class MatrixFederationHttpClient(object):
                         reactor=self.reactor,
                     )

-                    response = yield request_deferred
+                    response = await request_deferred
                 except TimeoutError as e:
                     raise RequestSendFailed(e, can_retry=True) from e
                 except DNSLookupError as e:
@@ -447,6 +444,7 @@ class MatrixFederationHttpClient(object):
                 ).inc()

                 set_tag(tags.HTTP_STATUS_CODE, response.code)
+                response_phrase = response.phrase.decode("ascii", errors="replace")

                 if 200 <= response.code < 300:
                     logger.debug(
@@ -454,7 +452,7 @@ class MatrixFederationHttpClient(object):
                         request.txn_id,
                         request.destination,
                         response.code,
-                        response.phrase.decode("ascii", errors="replace"),
+                        response_phrase,
                     )
                     pass
                 else:
@@ -463,7 +461,7 @@ class MatrixFederationHttpClient(object):
                         request.txn_id,
                         request.destination,
                         response.code,
-                        response.phrase.decode("ascii", errors="replace"),
+                        response_phrase,
                     )
                     # :'(
                     # Update transactions table?
@@ -473,7 +471,7 @@ class MatrixFederationHttpClient(object):
                     )

                     try:
-                        body = yield make_deferred_yieldable(d)
+                        body = await make_deferred_yieldable(d)
                     except Exception as e:
                         # Eh, we're already going to raise an exception so lets
                         # ignore if this fails.
@@ -487,7 +485,7 @@ class MatrixFederationHttpClient(object):
                         )
                         body = None

-                    e = HttpResponseException(response.code, response.phrase, body)
+                    e = HttpResponseException(response.code, response_phrase, body)

                     # Retry if the error is a 429 (Too Many Requests),
                     # otherwise just raise a standard HttpResponseException
@@ -527,7 +525,7 @@ class MatrixFederationHttpClient(object):
                         delay,
                     )

-                    yield self.clock.sleep(delay)
+                    await self.clock.sleep(delay)
                     retries_left -= 1
                 else:
                     raise
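`make_deferred_yieldable` survives on both sides of these hunks because the conversion does not remove Deferreds from the stack; it only changes how the caller consumes them. A native coroutine can `await` a Twisted `Deferred` directly, which is what makes this mechanical `yield` → `await` rewrite possible. A minimal sketch of that bridge (plain Twisted, without Synapse's logcontext wrapper):

```python
from twisted.internet import defer, task


async def fetch(reactor):
    # a Deferred that fires after a short delay, standing in for
    # treq.json_content / timeout_deferred in the hunks above
    d = task.deferLater(reactor, 0.01, lambda: {"ok": True})
    body = await d  # coroutines can await Deferreds directly
    print(body)
    return body


def main(reactor):
    # ensureDeferred bridges the coroutine back into Deferred-land
    return defer.ensureDeferred(fetch(reactor))


task.react(main)
```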
@@ -590,8 +588,7 @@ class MatrixFederationHttpClient(object):
         )
         return auth_headers

-    @defer.inlineCallbacks
-    def put_json(
+    async def put_json(
         self,
         destination,
         path,
@@ -635,7 +632,7 @@ class MatrixFederationHttpClient(object):
             enabled.

         Returns:
-            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
+            dict|list: Succeeds when we get a 2xx HTTP response. The
             result will be the decoded JSON body.

         Raises:
@@ -657,7 +654,7 @@ class MatrixFederationHttpClient(object):
             json=data,
         )

-        response = yield self._send_request_with_optional_trailing_slash(
+        response = await self._send_request_with_optional_trailing_slash(
             request,
             try_trailing_slash_on_400,
             backoff_on_404=backoff_on_404,
@@ -666,14 +663,13 @@ class MatrixFederationHttpClient(object):
             timeout=timeout,
         )

-        body = yield _handle_json_response(
+        body = await _handle_json_response(
             self.reactor, self.default_timeout, request, response
         )

         return body

-    @defer.inlineCallbacks
-    def post_json(
+    async def post_json(
         self,
         destination,
         path,
@@ -706,7 +702,7 @@ class MatrixFederationHttpClient(object):

             args (dict): query params
         Returns:
-            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
+            dict|list: Succeeds when we get a 2xx HTTP response. The
             result will be the decoded JSON body.

         Raises:
@@ -724,7 +720,7 @@ class MatrixFederationHttpClient(object):
             method="POST", destination=destination, path=path, query=args, json=data
         )

-        response = yield self._send_request(
+        response = await self._send_request(
             request,
             long_retries=long_retries,
             timeout=timeout,
@@ -736,13 +732,12 @@ class MatrixFederationHttpClient(object):
         else:
             _sec_timeout = self.default_timeout

-        body = yield _handle_json_response(
+        body = await _handle_json_response(
             self.reactor, _sec_timeout, request, response
         )
         return body

-    @defer.inlineCallbacks
-    def get_json(
+    async def get_json(
         self,
         destination,
         path,
@@ -774,7 +769,7 @@ class MatrixFederationHttpClient(object):
             response we should try appending a trailing slash to the end of
             the request. Workaround for #3622 in Synapse <= v0.99.3.
         Returns:
-            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
+            dict|list: Succeeds when we get a 2xx HTTP response. The
             result will be the decoded JSON body.

         Raises:
@@ -791,7 +786,7 @@ class MatrixFederationHttpClient(object):
             method="GET", destination=destination, path=path, query=args
         )

-        response = yield self._send_request_with_optional_trailing_slash(
+        response = await self._send_request_with_optional_trailing_slash(
             request,
             try_trailing_slash_on_400,
             backoff_on_404=False,
@@ -800,14 +795,13 @@ class MatrixFederationHttpClient(object):
             timeout=timeout,
         )

-        body = yield _handle_json_response(
+        body = await _handle_json_response(
             self.reactor, self.default_timeout, request, response
         )

         return body

-    @defer.inlineCallbacks
-    def delete_json(
+    async def delete_json(
         self,
         destination,
         path,
@@ -835,7 +829,7 @@ class MatrixFederationHttpClient(object):

             args (dict): query params
         Returns:
-            Deferred[dict|list]: Succeeds when we get a 2xx HTTP response. The
+            dict|list: Succeeds when we get a 2xx HTTP response. The
             result will be the decoded JSON body.

         Raises:
@@ -852,20 +846,19 @@ class MatrixFederationHttpClient(object):
             method="DELETE", destination=destination, path=path, query=args
         )

-        response = yield self._send_request(
+        response = await self._send_request(
             request,
             long_retries=long_retries,
             timeout=timeout,
             ignore_backoff=ignore_backoff,
         )

-        body = yield _handle_json_response(
+        body = await _handle_json_response(
             self.reactor, self.default_timeout, request, response
         )
         return body

-    @defer.inlineCallbacks
-    def get_file(
+    async def get_file(
         self,
         destination,
         path,
@@ -885,7 +878,7 @@ class MatrixFederationHttpClient(object):
             and try the request anyway.

         Returns:
-            Deferred[tuple[int, dict]]: Resolves with an (int,dict) tuple of
+            tuple[int, dict]: Resolves with an (int,dict) tuple of
             the file length and a dict of the response headers.

         Raises:
@@ -902,7 +895,7 @@ class MatrixFederationHttpClient(object):
             method="GET", destination=destination, path=path, query=args
         )

-        response = yield self._send_request(
+        response = await self._send_request(
             request, retry_on_dns_fail=retry_on_dns_fail, ignore_backoff=ignore_backoff
         )

@@ -911,7 +904,7 @@ class MatrixFederationHttpClient(object):
         try:
             d = _readBodyToFile(response, output_stream, max_size)
             d.addTimeout(self.default_timeout, self.reactor)
-            length = yield make_deferred_yieldable(d)
+            length = await make_deferred_yieldable(d)
         except Exception as e:
             logger.warning(
                 "{%s} [%s] Error reading response: %s",

@@ -242,10 +242,12 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
         no appropriate method exists. Can be overriden in sub classes for
         different routing.
         """
+        # Treat HEAD requests as GET requests.
+        request_method = request.method.decode("ascii")
+        if request_method == "HEAD":
+            request_method = "GET"

-        method_handler = getattr(
-            self, "_async_render_%s" % (request.method.decode("ascii"),), None
-        )
+        method_handler = getattr(self, "_async_render_%s" % (request_method,), None)
         if method_handler:
             raw_callback_return = method_handler(request)

@@ -362,11 +364,15 @@ class JsonResource(DirectServeJsonResource):
             A tuple of the callback to use, the name of the servlet, and the
             key word arguments to pass to the callback
         """
+        # Treat HEAD requests as GET requests.
         request_path = request.path.decode("ascii")
+        request_method = request.method
+        if request_method == b"HEAD":
+            request_method = b"GET"

         # Loop through all the registered callbacks to check if the method
         # and path regex match
-        for path_entry in self.path_regexs.get(request.method, []):
+        for path_entry in self.path_regexs.get(request_method, []):
             m = path_entry.pattern.match(request_path)
             if m:
                 # We found a match!
@@ -579,7 +585,7 @@ def set_cors_headers(request: Request):
     """
     request.setHeader(b"Access-Control-Allow-Origin", b"*")
     request.setHeader(
-        b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
+        b"Access-Control-Allow-Methods", b"GET, HEAD, POST, PUT, DELETE, OPTIONS"
    )
     request.setHeader(
         b"Access-Control-Allow-Headers",

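Both resources now route HEAD through the GET handler instead of failing to find a handler. This should be safe because Twisted suppresses the response body for HEAD requests at the write layer, so the GET handler can run unchanged and only the status line and headers go out, which is what RFC 7231 requires of HEAD. A condensed sketch of the routing rule:

```python
def choose_handler_method(method: bytes) -> bytes:
    # Treat HEAD requests as GET requests: a HEAD response must carry
    # the same headers the equivalent GET would, just without the body.
    if method == b"HEAD":
        return b"GET"
    return method


assert choose_handler_method(b"HEAD") == b"GET"
assert choose_handler_method(b"POST") == b"POST"
print("ok")
```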
@@ -15,8 +15,6 @@

 import logging

-from twisted.internet import defer
-
 from synapse.util.metrics import Measure

 from .bulk_push_rule_evaluator import BulkPushRuleEvaluator
@@ -37,7 +35,6 @@ class ActionGenerator(object):
         # event stream, so we just run the rules for a client with no profile
         # tag (ie. we just need all the users).

-    @defer.inlineCallbacks
-    def handle_push_actions_for_event(self, event, context):
+    async def handle_push_actions_for_event(self, event, context):
         with Measure(self.clock, "action_for_event_by_user"):
-            yield self.bulk_evaluator.action_for_event_by_user(event, context)
+            await self.bulk_evaluator.action_for_event_by_user(event, context)

@@ -19,8 +19,6 @@ from collections import namedtuple

 from prometheus_client import Counter

-from twisted.internet import defer
-
 from synapse.api.constants import EventTypes, Membership
 from synapse.event_auth import get_user_power_level
 from synapse.state import POWER_KEY
@@ -70,8 +68,7 @@ class BulkPushRuleEvaluator(object):
             resizable=False,
         )

-    @defer.inlineCallbacks
-    def _get_rules_for_event(self, event, context):
+    async def _get_rules_for_event(self, event, context):
         """This gets the rules for all users in the room at the time of the event,
         as well as the push rules for the invitee if the event is an invite.

@@ -79,19 +76,19 @@ class BulkPushRuleEvaluator(object):
             dict of user_id -> push_rules
         """
         room_id = event.room_id
-        rules_for_room = yield self._get_rules_for_room(room_id)
+        rules_for_room = await self._get_rules_for_room(room_id)

-        rules_by_user = yield rules_for_room.get_rules(event, context)
+        rules_by_user = await rules_for_room.get_rules(event, context)

         # if this event is an invite event, we may need to run rules for the user
         # who's been invited, otherwise they won't get told they've been invited
         if event.type == "m.room.member" and event.content["membership"] == "invite":
             invited = event.state_key
             if invited and self.hs.is_mine_id(invited):
-                has_pusher = yield self.store.user_has_pusher(invited)
+                has_pusher = await self.store.user_has_pusher(invited)
                 if has_pusher:
                     rules_by_user = dict(rules_by_user)
-                    rules_by_user[invited] = yield self.store.get_push_rules_for_user(
+                    rules_by_user[invited] = await self.store.get_push_rules_for_user(
                         invited
                     )

@@ -114,20 +111,19 @@ class BulkPushRuleEvaluator(object):
             self.room_push_rule_cache_metrics,
         )

-    @defer.inlineCallbacks
-    def _get_power_levels_and_sender_level(self, event, context):
-        prev_state_ids = yield context.get_prev_state_ids()
+    async def _get_power_levels_and_sender_level(self, event, context):
+        prev_state_ids = await context.get_prev_state_ids()
         pl_event_id = prev_state_ids.get(POWER_KEY)
         if pl_event_id:
             # fastpath: if there's a power level event, that's all we need, and
             # not having a power level event is an extreme edge case
-            pl_event = yield self.store.get_event(pl_event_id)
+            pl_event = await self.store.get_event(pl_event_id)
             auth_events = {POWER_KEY: pl_event}
         else:
-            auth_events_ids = yield self.auth.compute_auth_events(
+            auth_events_ids = await self.auth.compute_auth_events(
                 event, prev_state_ids, for_verification=False
             )
-            auth_events = yield self.store.get_events(auth_events_ids)
+            auth_events = await self.store.get_events(auth_events_ids)
             auth_events = {(e.type, e.state_key): e for e in auth_events.values()}

         sender_level = get_user_power_level(event.sender, auth_events)
@@ -136,23 +132,19 @@ class BulkPushRuleEvaluator(object):

         return pl_event.content if pl_event else {}, sender_level

-    @defer.inlineCallbacks
-    def action_for_event_by_user(self, event, context):
+    async def action_for_event_by_user(self, event, context) -> None:
         """Given an event and context, evaluate the push rules and insert the
         results into the event_push_actions_staging table.
-
-        Returns:
-            Deferred
         """
-        rules_by_user = yield self._get_rules_for_event(event, context)
+        rules_by_user = await self._get_rules_for_event(event, context)
         actions_by_user = {}

-        room_members = yield self.store.get_joined_users_from_context(event, context)
+        room_members = await self.store.get_joined_users_from_context(event, context)

         (
             power_levels,
             sender_power_level,
-        ) = yield self._get_power_levels_and_sender_level(event, context)
+        ) = await self._get_power_levels_and_sender_level(event, context)

         evaluator = PushRuleEvaluatorForEvent(
             event, len(room_members), sender_power_level, power_levels
@@ -165,7 +157,7 @@ class BulkPushRuleEvaluator(object):
                 continue

             if not event.is_state():
-                is_ignored = yield self.store.is_ignored_by(event.sender, uid)
+                is_ignored = await self.store.is_ignored_by(event.sender, uid)
                 if is_ignored:
                     continue

@@ -197,7 +189,7 @@ class BulkPushRuleEvaluator(object):
         # Mark in the DB staging area the push actions for users who should be
         # notified for this event. (This will then get handled when we persist
         # the event)
-        yield self.store.add_push_actions_to_staging(event.event_id, actions_by_user)
+        await self.store.add_push_actions_to_staging(event.event_id, actions_by_user)


 def _condition_checker(evaluator, conditions, uid, display_name, cache):
@@ -274,8 +266,7 @@ class RulesForRoom(object):
         # to self around in the callback.
         self.invalidate_all_cb = _Invalidation(rules_for_room_cache, room_id)

-    @defer.inlineCallbacks
-    def get_rules(self, event, context):
+    async def get_rules(self, event, context):
         """Given an event context return the rules for all users who are
         currently in the room.
         """
@@ -286,7 +277,7 @@ class RulesForRoom(object):
             self.room_push_rule_cache_metrics.inc_hits()
             return self.rules_by_user

-        with (yield self.linearizer.queue(())):
+        with (await self.linearizer.queue(())):
             if state_group and self.state_group == state_group:
                 logger.debug("Using cached rules for %r", self.room_id)
                 self.room_push_rule_cache_metrics.inc_hits()
@@ -304,9 +295,7 @@ class RulesForRoom(object):

                 push_rules_delta_state_cache_metric.inc_hits()
             else:
-                current_state_ids = yield defer.ensureDeferred(
-                    context.get_current_state_ids()
-                )
+                current_state_ids = await context.get_current_state_ids()
                 push_rules_delta_state_cache_metric.inc_misses()

             push_rules_state_size_counter.inc(len(current_state_ids))
@@ -353,7 +342,7 @@ class RulesForRoom(object):
                 # If we have some memebr events we haven't seen, look them up
                 # and fetch push rules for them if appropriate.
                 logger.debug("Found new member events %r", missing_member_event_ids)
-                yield self._update_rules_with_member_event_ids(
+                await self._update_rules_with_member_event_ids(
                     ret_rules_by_user, missing_member_event_ids, state_group, event
                 )
             else:
@@ -371,8 +360,7 @@ class RulesForRoom(object):
             )
         return ret_rules_by_user

-    @defer.inlineCallbacks
-    def _update_rules_with_member_event_ids(
+    async def _update_rules_with_member_event_ids(
         self, ret_rules_by_user, member_event_ids, state_group, event
     ):
         """Update the partially filled rules_by_user dict by fetching rules for
@@ -388,7 +376,7 @@ class RulesForRoom(object):
         """
         sequence = self.sequence

-        rows = yield self.store.get_membership_from_event_ids(member_event_ids.values())
+        rows = await self.store.get_membership_from_event_ids(member_event_ids.values())

         members = {row["event_id"]: (row["user_id"], row["membership"]) for row in rows}

@@ -410,7 +398,7 @@ class RulesForRoom(object):

         logger.debug("Joined: %r", interested_in_user_ids)

-        if_users_with_pushers = yield self.store.get_if_users_have_pushers(
+        if_users_with_pushers = await self.store.get_if_users_have_pushers(
             interested_in_user_ids, on_invalidate=self.invalidate_all_cb
         )

@@ -420,7 +408,7 @@ class RulesForRoom(object):

         logger.debug("With pushers: %r", user_ids)

-        users_with_receipts = yield self.store.get_users_with_read_receipts_in_room(
+        users_with_receipts = await self.store.get_users_with_read_receipts_in_room(
             self.room_id, on_invalidate=self.invalidate_all_cb
         )

@@ -431,7 +419,7 @@ class RulesForRoom(object):
             if uid in interested_in_user_ids:
                 user_ids.add(uid)

-        rules_by_user = yield self.store.bulk_get_push_rules(
+        rules_by_user = await self.store.bulk_get_push_rules(
             user_ids, on_invalidate=self.invalidate_all_cb
         )

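One detail worth noting in `get_rules`: the lock is still taken with `with (await self.linearizer.queue(())):` rather than `async with`. The awaited call returns an ordinary synchronous context manager representing a slot in the queue, so only the acquisition is asynchronous; release on exit stays synchronous. A toy sketch of that shape (my own stand-in `Linearizer`, not Synapse's):

```python
import asyncio
from contextlib import contextmanager


class Linearizer:
    """Toy queue lock: `queue()` is awaited to acquire, and yields a
    plain (synchronous) context manager that releases on exit."""

    def __init__(self):
        self._lock = asyncio.Lock()

    async def queue(self, key):
        await self._lock.acquire()  # the only asynchronous step

        @contextmanager
        def ctx():
            try:
                yield
            finally:
                self._lock.release()

        return ctx()


async def worker(linearizer, name):
    with (await linearizer.queue(())):
        print(name, "holds the lock")
        await asyncio.sleep(0.01)


async def main():
    linearizer = Linearizer()
    await asyncio.gather(*(worker(linearizer, i) for i in range(3)))


asyncio.run(main())
```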
@@ -17,7 +17,6 @@ import logging

 from prometheus_client import Counter

-from twisted.internet import defer
 from twisted.internet.error import AlreadyCalled, AlreadyCancelled

 from synapse.api.constants import EventTypes
@@ -128,12 +127,11 @@ class HttpPusher(object):
         # but currently that's the only type of receipt anyway...
         run_as_background_process("http_pusher.on_new_receipts", self._update_badge)

-    @defer.inlineCallbacks
-    def _update_badge(self):
+    async def _update_badge(self):
         # XXX as per https://github.com/matrix-org/matrix-doc/issues/2627, this seems
         # to be largely redundant. perhaps we can remove it.
-        badge = yield push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)
-        yield self._send_badge(badge)
+        badge = await push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)
+        await self._send_badge(badge)

     def on_timer(self):
         self._start_processing()
@@ -152,8 +150,7 @@ class HttpPusher(object):

         run_as_background_process("httppush.process", self._process)

-    @defer.inlineCallbacks
-    def _process(self):
+    async def _process(self):
         # we should never get here if we are already processing
         assert not self._is_processing

@@ -164,7 +161,7 @@ class HttpPusher(object):
             while True:
                 starting_max_ordering = self.max_stream_ordering
                 try:
-                    yield self._unsafe_process()
+                    await self._unsafe_process()
                 except Exception:
                     logger.exception("Exception processing notifs")
                 if self.max_stream_ordering == starting_max_ordering:
@@ -172,8 +169,7 @@ class HttpPusher(object):
         finally:
             self._is_processing = False

-    @defer.inlineCallbacks
-    def _unsafe_process(self):
+    async def _unsafe_process(self):
         """
         Looks for unset notifications and dispatch them, in order
         Never call this directly: use _process which will only allow this to
@@ -181,7 +177,7 @@ class HttpPusher(object):
         """

         fn = self.store.get_unread_push_actions_for_user_in_range_for_http
-        unprocessed = yield fn(
+        unprocessed = await fn(
             self.user_id, self.last_stream_ordering, self.max_stream_ordering
         )

@@ -203,13 +199,13 @@ class HttpPusher(object):
                     "app_display_name": self.app_display_name,
                 },
             ):
-                processed = yield self._process_one(push_action)
+                processed = await self._process_one(push_action)

             if processed:
                 http_push_processed_counter.inc()
                 self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
                 self.last_stream_ordering = push_action["stream_ordering"]
-                pusher_still_exists = yield self.store.update_pusher_last_stream_ordering_and_success(
+                pusher_still_exists = await self.store.update_pusher_last_stream_ordering_and_success(
                     self.app_id,
                     self.pushkey,
                     self.user_id,
@@ -224,14 +220,14 @@ class HttpPusher(object):

                 if self.failing_since:
                     self.failing_since = None
-                    yield self.store.update_pusher_failing_since(
+                    await self.store.update_pusher_failing_since(
                         self.app_id, self.pushkey, self.user_id, self.failing_since
                     )
             else:
                 http_push_failed_counter.inc()
                 if not self.failing_since:
                     self.failing_since = self.clock.time_msec()
-                    yield self.store.update_pusher_failing_since(
+                    await self.store.update_pusher_failing_since(
                         self.app_id, self.pushkey, self.user_id, self.failing_since
                     )

@@ -250,7 +246,7 @@ class HttpPusher(object):
                     )
                     self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
                     self.last_stream_ordering = push_action["stream_ordering"]
-                    pusher_still_exists = yield self.store.update_pusher_last_stream_ordering(
+                    pusher_still_exists = await self.store.update_pusher_last_stream_ordering(
                         self.app_id,
                         self.pushkey,
                         self.user_id,
@@ -263,7 +259,7 @@ class HttpPusher(object):
                         return

                     self.failing_since = None
-                    yield self.store.update_pusher_failing_since(
+                    await self.store.update_pusher_failing_since(
                         self.app_id, self.pushkey, self.user_id, self.failing_since
                     )
                 else:
@@ -276,18 +272,17 @@ class HttpPusher(object):
                     )
                     break

-    @defer.inlineCallbacks
-    def _process_one(self, push_action):
+    async def _process_one(self, push_action):
         if "notify" not in push_action["actions"]:
             return True

         tweaks = push_rule_evaluator.tweaks_for_actions(push_action["actions"])
-        badge = yield push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)
+        badge = await push_tools.get_badge_count(self.hs.get_datastore(), self.user_id)

-        event = yield self.store.get_event(push_action["event_id"], allow_none=True)
+        event = await self.store.get_event(push_action["event_id"], allow_none=True)
         if event is None:
             return True  # It's been redacted
-        rejected = yield self.dispatch_push(event, tweaks, badge)
+        rejected = await self.dispatch_push(event, tweaks, badge)
         if rejected is False:
             return False

@@ -301,11 +296,10 @@ class HttpPusher(object):
                 )
             else:
                 logger.info("Pushkey %s was rejected: removing", pk)
-                yield self.hs.remove_pusher(self.app_id, pk, self.user_id)
+                await self.hs.remove_pusher(self.app_id, pk, self.user_id)
         return True

-    @defer.inlineCallbacks
-    def _build_notification_dict(self, event, tweaks, badge):
+    async def _build_notification_dict(self, event, tweaks, badge):
         priority = "low"
         if (
             event.type == EventTypes.Encrypted
@@ -335,7 +329,7 @@ class HttpPusher(object):
             }
             return d

-        ctx = yield push_tools.get_context_for_event(
+        ctx = await push_tools.get_context_for_event(
             self.storage, self.state_handler, event, self.user_id
         )

@@ -377,13 +371,12 @@ class HttpPusher(object):

         return d

-    @defer.inlineCallbacks
-    def dispatch_push(self, event, tweaks, badge):
-        notification_dict = yield self._build_notification_dict(event, tweaks, badge)
+    async def dispatch_push(self, event, tweaks, badge):
+        notification_dict = await self._build_notification_dict(event, tweaks, badge)
         if not notification_dict:
             return []
         try:
-            resp = yield self.http_client.post_json_get_json(
+            resp = await self.http_client.post_json_get_json(
                 self.url, notification_dict
             )
         except Exception as e:
@@ -400,8 +393,7 @@ class HttpPusher(object):
         rejected = resp["rejected"]
         return rejected

-    @defer.inlineCallbacks
-    def _send_badge(self, badge):
+    async def _send_badge(self, badge):
         """
         Args:
             badge (int): number of unread messages
@@ -424,7 +416,7 @@ class HttpPusher(object):
             }
         }
         try:
-            yield self.http_client.post_json_get_json(self.url, d)
+            await self.http_client.post_json_get_json(self.url, d)
             http_badges_processed_counter.inc()
         except Exception as e:
             logger.warning(

@@ -16,8 +16,6 @@
 import logging
 import re

-from twisted.internet import defer
-
 from synapse.api.constants import EventTypes

 logger = logging.getLogger(__name__)
@@ -29,8 +27,7 @@ ALIAS_RE = re.compile(r"^#.*:.+$")
 ALL_ALONE = "Empty Room"


-@defer.inlineCallbacks
-def calculate_room_name(
+async def calculate_room_name(
     store,
     room_state_ids,
     user_id,
@@ -53,7 +50,7 @@ def calculate_room_name(
     """
     # does it have a name?
     if (EventTypes.Name, "") in room_state_ids:
-        m_room_name = yield store.get_event(
+        m_room_name = await store.get_event(
             room_state_ids[(EventTypes.Name, "")], allow_none=True
         )
         if m_room_name and m_room_name.content and m_room_name.content["name"]:
@@ -61,7 +58,7 @@ def calculate_room_name(

     # does it have a canonical alias?
     if (EventTypes.CanonicalAlias, "") in room_state_ids:
-        canon_alias = yield store.get_event(
+        canon_alias = await store.get_event(
             room_state_ids[(EventTypes.CanonicalAlias, "")], allow_none=True
         )
         if (
@@ -81,7 +78,7 @@ def calculate_room_name(

     my_member_event = None
     if (EventTypes.Member, user_id) in room_state_ids:
-        my_member_event = yield store.get_event(
+        my_member_event = await store.get_event(
             room_state_ids[(EventTypes.Member, user_id)], allow_none=True
         )

@@ -90,7 +87,7 @@ def calculate_room_name(
         and my_member_event.content["membership"] == "invite"
     ):
         if (EventTypes.Member, my_member_event.sender) in room_state_ids:
-            inviter_member_event = yield store.get_event(
+            inviter_member_event = await store.get_event(
                 room_state_ids[(EventTypes.Member, my_member_event.sender)],
                 allow_none=True,
             )
@@ -107,7 +104,7 @@ def calculate_room_name(
     # we're going to have to generate a name based on who's in the room,
     # so find out who is in the room that isn't the user.
     if EventTypes.Member in room_state_bytype_ids:
-        member_events = yield store.get_events(
+        member_events = await store.get_events(
             list(room_state_bytype_ids[EventTypes.Member].values())
         )
         all_members = [

@@ -13,53 +13,40 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.internet import defer
-
 from synapse.push.presentable_names import calculate_room_name, name_from_member_event
 from synapse.storage import Storage


-@defer.inlineCallbacks
-def get_badge_count(store, user_id):
-    invites = yield store.get_invited_rooms_for_local_user(user_id)
-    joins = yield store.get_rooms_for_user(user_id)
-
-    my_receipts_by_room = yield store.get_receipts_for_user(user_id, "m.read")
+async def get_badge_count(store, user_id):
+    invites = await store.get_invited_rooms_for_local_user(user_id)
+    joins = await store.get_rooms_for_user(user_id)

     badge = len(invites)

     for room_id in joins:
-        if room_id in my_receipts_by_room:
-            last_unread_event_id = my_receipts_by_room[room_id]
-
-            notifs = yield (
-                store.get_unread_event_push_actions_by_room_for_user(
-                    room_id, user_id, last_unread_event_id
-                )
-            )
-            # return one badge count per conversation, as count per
-            # message is so noisy as to be almost useless
-            badge += 1 if notifs["notify_count"] else 0
+        unread_count = await store.get_unread_message_count_for_user(room_id, user_id)
+        # return one badge count per conversation, as count per
+        # message is so noisy as to be almost useless
+        badge += 1 if unread_count else 0
     return badge


-@defer.inlineCallbacks
-def get_context_for_event(storage: Storage, state_handler, ev, user_id):
+async def get_context_for_event(storage: Storage, state_handler, ev, user_id):
     ctx = {}

-    room_state_ids = yield storage.state.get_state_ids_for_event(ev.event_id)
+    room_state_ids = await storage.state.get_state_ids_for_event(ev.event_id)

     # we no longer bother setting room_alias, and make room_name the
     # human-readable name instead, be that m.room.name, an alias or
     # a list of people in the room
-    name = yield calculate_room_name(
+    name = await calculate_room_name(
         storage.main, room_state_ids, user_id, fallback_to_single_member=False
     )
     if name:
         ctx["name"] = name

     sender_state_event_id = room_state_ids[("m.room.member", ev.sender)]
-    sender_state_event = yield storage.main.get_event(sender_state_event_id)
+    sender_state_event = await storage.main.get_event(sender_state_event_id)
     ctx["sender_display_name"] = name_from_member_event(sender_state_event)

     return ctx

@@ -19,8 +19,6 @@ from typing import TYPE_CHECKING, Dict, Union

 from prometheus_client import Gauge

-from twisted.internet import defer
-
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.push import PusherConfigException
 from synapse.push.emailpusher import EmailPusher
@@ -52,7 +50,7 @@ class PusherPool:
     Note that it is expected that each pusher will have its own 'processing' loop which
     will send out the notifications in the background, rather than blocking until the
     notifications are sent; accordingly Pusher.on_started, Pusher.on_new_notifications and
-    Pusher.on_new_receipts are not expected to return deferreds.
+    Pusher.on_new_receipts are not expected to return awaitables.
     """

     def __init__(self, hs: "HomeServer"):
@@ -77,8 +75,7 @@ class PusherPool:
             return
         run_as_background_process("start_pushers", self._start_pushers)

-    @defer.inlineCallbacks
-    def add_pusher(
+    async def add_pusher(
         self,
         user_id,
         access_token,
@@ -94,7 +91,7 @@ class PusherPool:
         """Creates a new pusher and adds it to the pool

         Returns:
-            Deferred[EmailPusher|HttpPusher]
+            EmailPusher|HttpPusher
         """

         time_now_msec = self.clock.time_msec()
@@ -124,9 +121,9 @@ class PusherPool:
         # create the pusher setting last_stream_ordering to the current maximum
         # stream ordering in event_push_actions, so it will process
         # pushes from this point onwards.
-        last_stream_ordering = yield self.store.get_latest_push_action_stream_ordering()
+        last_stream_ordering = await self.store.get_latest_push_action_stream_ordering()

-        yield self.store.add_pusher(
+        await self.store.add_pusher(
             user_id=user_id,
             access_token=access_token,
             kind=kind,
@@ -140,15 +137,14 @@ class PusherPool:
             last_stream_ordering=last_stream_ordering,
             profile_tag=profile_tag,
         )
-        pusher = yield self.start_pusher_by_id(app_id, pushkey, user_id)
+        pusher = await self.start_pusher_by_id(app_id, pushkey, user_id)

         return pusher

-    @defer.inlineCallbacks
-    def remove_pushers_by_app_id_and_pushkey_not_user(
+    async def remove_pushers_by_app_id_and_pushkey_not_user(
         self, app_id, pushkey, not_user_id
     ):
-        to_remove = yield self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)
+        to_remove = await self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)
         for p in to_remove:
             if p["user_name"] != not_user_id:
                 logger.info(
@@ -157,10 +153,9 @@ class PusherPool:
                     pushkey,
                     p["user_name"],
                 )
-                yield self.remove_pusher(p["app_id"], p["pushkey"], p["user_name"])
+                await self.remove_pusher(p["app_id"], p["pushkey"], p["user_name"])

-    @defer.inlineCallbacks
-    def remove_pushers_by_access_token(self, user_id, access_tokens):
+    async def remove_pushers_by_access_token(self, user_id, access_tokens):
         """Remove the pushers for a given user corresponding to a set of
         access_tokens.

@@ -173,7 +168,7 @@ class PusherPool:
             return

         tokens = set(access_tokens)
-        for p in (yield self.store.get_pushers_by_user_id(user_id)):
+        for p in await self.store.get_pushers_by_user_id(user_id):
             if p["access_token"] in tokens:
                 logger.info(
                     "Removing pusher for app id %s, pushkey %s, user %s",
@@ -181,16 +176,15 @@ class PusherPool:
                     p["pushkey"],
                     p["user_name"],
                 )
-                yield self.remove_pusher(p["app_id"], p["pushkey"], p["user_name"])
+                await self.remove_pusher(p["app_id"], p["pushkey"], p["user_name"])

-    @defer.inlineCallbacks
-    def on_new_notifications(self, min_stream_id, max_stream_id):
+    async def on_new_notifications(self, min_stream_id, max_stream_id):
         if not self.pushers:
             # nothing to do here.
             return

         try:
-            users_affected = yield self.store.get_push_action_users_in_range(
+            users_affected = await self.store.get_push_action_users_in_range(
                 min_stream_id, max_stream_id
             )

@@ -202,8 +196,7 @@ class PusherPool:
         except Exception:
             logger.exception("Exception in pusher on_new_notifications")

-    @defer.inlineCallbacks
-    def on_new_receipts(self, min_stream_id, max_stream_id, affected_room_ids):
+    async def on_new_receipts(self, min_stream_id, max_stream_id, affected_room_ids):
         if not self.pushers:
             # nothing to do here.
             return
@@ -211,7 +204,7 @@ class PusherPool:
         try:
             # Need to subtract 1 from the minimum because the lower bound here
             # is not inclusive
-            users_affected = yield self.store.get_users_sent_receipts_between(
+            users_affected = await self.store.get_users_sent_receipts_between(
                 min_stream_id - 1, max_stream_id
             )

@@ -223,12 +216,11 @@ class PusherPool:
         except Exception:
             logger.exception("Exception in pusher on_new_receipts")

-    @defer.inlineCallbacks
-    def start_pusher_by_id(self, app_id, pushkey, user_id):
+    async def start_pusher_by_id(self, app_id, pushkey, user_id):
         """Look up the details for the given pusher, and start it

         Returns:
-            Deferred[EmailPusher|HttpPusher|None]: The pusher started, if any
+            EmailPusher|HttpPusher|None: The pusher started, if any
         """
         if not self._should_start_pushers:
             return
@@ -236,7 +228,7 @@ class PusherPool:
         if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
             return

-        resultlist = yield self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)
+        resultlist = await self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)

         pusher_dict = None
         for r in resultlist:
@@ -245,34 +237,29 @@ class PusherPool:

         pusher = None
         if pusher_dict:
-            pusher = yield self._start_pusher(pusher_dict)
+            pusher = await self._start_pusher(pusher_dict)

         return pusher

-    @defer.inlineCallbacks
-    def _start_pushers(self):
+    async def _start_pushers(self) -> None:
         """Start all the pushers
-
-        Returns:
-            Deferred
         """
-        pushers = yield self.store.get_all_pushers()
+        pushers = await self.store.get_all_pushers()

         # Stagger starting up the pushers so we don't completely drown the
         # process on start up.
-        yield concurrently_execute(self._start_pusher, pushers, 10)
+        await concurrently_execute(self._start_pusher, pushers, 10)

         logger.info("Started pushers")

-    @defer.inlineCallbacks
-    def _start_pusher(self, pusherdict):
+    async def _start_pusher(self, pusherdict):
         """Start the given pusher

         Args:
             pusherdict (dict): dict with the values pulled from the db table

         Returns:
-            Deferred[EmailPusher|HttpPusher]
+            EmailPusher|HttpPusher
         """
         if not self._pusher_shard_config.should_handle(
             self._instance_name, pusherdict["user_name"]
@@ -315,7 +302,7 @@ class PusherPool:
         user_id = pusherdict["user_name"]
         last_stream_ordering = pusherdict["last_stream_ordering"]
         if last_stream_ordering:
-            have_notifs = yield self.store.get_if_maybe_push_in_range_for_user(
+            have_notifs = await self.store.get_if_maybe_push_in_range_for_user(
                 user_id, last_stream_ordering
            )
         else:
@@ -327,8 +314,7 @@ class PusherPool:

         return p

-    @defer.inlineCallbacks
-    def remove_pusher(self, app_id, pushkey, user_id):
+    async def remove_pusher(self, app_id, pushkey, user_id):
         appid_pushkey = "%s:%s" % (app_id, pushkey)

         byuser = self.pushers.get(user_id, {})
@@ -340,6 +326,6 @@ class PusherPool:

         synapse_pushers.labels(type(pusher).__name__, pusher.app_id).dec()

-        yield self.store.delete_pusher_by_app_id_pushkey_user_id(
+        await self.store.delete_pusher_by_app_id_pushkey_user_id(
             app_id, pushkey, user_id
         )

@@ -43,7 +43,7 @@ REQUIREMENTS = [
     "jsonschema>=2.5.1",
     "frozendict>=1",
     "unpaddedbase64>=1.1.0",
-    "canonicaljson>=1.1.3",
+    "canonicaljson>=1.2.0",
     # we use the type definitions added in signedjson 1.1.
     "signedjson>=1.1.0",
     "pynacl>=1.2.1",

@@ -20,8 +20,6 @@ import urllib
 from inspect import signature
 from typing import Dict, List, Tuple
 
-from twisted.internet import defer
-
 from synapse.api.errors import (
     CodeMessageException,
     HttpResponseException,
@@ -101,7 +99,7 @@ class ReplicationEndpoint(object):
         assert self.METHOD in ("PUT", "POST", "GET")
 
     @abc.abstractmethod
-    def _serialize_payload(**kwargs):
+    async def _serialize_payload(**kwargs):
         """Static method that is called when creating a request.
 
         Concrete implementations should have explicit parameters (rather than
@@ -110,9 +108,8 @@ class ReplicationEndpoint(object):
         argument list.
 
         Returns:
-            Deferred[dict]|dict: If POST/PUT request then dictionary must be
-                JSON serialisable, otherwise must be appropriate for adding as
-                query args.
+            dict: If POST/PUT request then dictionary must be JSON serialisable,
+                otherwise must be appropriate for adding as query args.
         """
         return {}
 
@@ -144,8 +141,7 @@ class ReplicationEndpoint(object):
         instance_map = hs.config.worker.instance_map
 
         @trace(opname="outgoing_replication_request")
-        @defer.inlineCallbacks
-        def send_request(instance_name="master", **kwargs):
+        async def send_request(instance_name="master", **kwargs):
             if instance_name == local_instance_name:
                 raise Exception("Trying to send HTTP request to self")
             if instance_name == "master":
@@ -159,7 +155,7 @@ class ReplicationEndpoint(object):
                     "Instance %r not in 'instance_map' config" % (instance_name,)
                 )
 
-            data = yield cls._serialize_payload(**kwargs)
+            data = await cls._serialize_payload(**kwargs)
 
             url_args = [
                 urllib.parse.quote(kwargs[name], safe="") for name in cls.PATH_ARGS
@@ -197,7 +193,7 @@ class ReplicationEndpoint(object):
                 headers = {}  # type: Dict[bytes, List[bytes]]
                 inject_active_span_byte_dict(headers, None, check_destination=False)
                 try:
-                    result = yield request_func(uri, data, headers=headers)
+                    result = await request_func(uri, data, headers=headers)
                     break
                 except CodeMessageException as e:
                     if e.code != 504 or not cls.RETRY_ON_TIMEOUT:
@@ -207,7 +203,7 @@ class ReplicationEndpoint(object):
 
                     # If we timed out we probably don't need to worry about backing
                     # off too much, but lets just wait a little anyway.
-                    yield clock.sleep(1)
+                    await clock.sleep(1)
                 except HttpResponseException as e:
                     # We convert to SynapseError as we know that it was a SynapseError
                     # on the master process that we should send to the client. (And
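Because the abstract `_serialize_payload` is now declared `async`, every override in the servlets below is converted too, even the trivially synchronous ones, so that `send_request` can uniformly `await cls._serialize_payload(**kwargs)`. A simplified, hypothetical endpoint showing the resulting shape (the class name, `NAME`, and payload fields are invented for illustration; `parse_json_object_from_request` is Synapse's standard JSON helper):

```python
from synapse.http.servlet import parse_json_object_from_request
from synapse.replication.http._base import ReplicationEndpoint


class ReplicationExampleRestServlet(ReplicationEndpoint):
    NAME = "example"
    PATH_ARGS = ("user_id",)

    @staticmethod
    async def _serialize_payload(user_id, reason):
        # No I/O happens here, but the method is async anyway so the base
        # class can await every implementation the same way.
        return {"reason": reason}

    async def _handle_request(self, request, user_id):
        content = parse_json_object_from_request(request)
        return 200, {"user_id": user_id, "reason": content["reason"]}
```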
@@ -60,7 +60,7 @@ class ReplicationUserDevicesResyncRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()
 
     @staticmethod
-    def _serialize_payload(user_id):
+    async def _serialize_payload(user_id):
         return {}
 
     async def _handle_request(self, request, user_id):
@@ -15,8 +15,6 @@
 
 import logging
 
-from twisted.internet import defer
-
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.events import make_event_from_dict
 from synapse.events.snapshot import EventContext
@@ -67,8 +65,7 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
         self.federation_handler = hs.get_handlers().federation_handler
 
     @staticmethod
-    @defer.inlineCallbacks
-    def _serialize_payload(store, event_and_contexts, backfilled):
+    async def _serialize_payload(store, event_and_contexts, backfilled):
         """
         Args:
             store
@@ -78,7 +75,7 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
         """
         event_payloads = []
         for event, context in event_and_contexts:
-            serialized_context = yield context.serialize(event, store)
+            serialized_context = await context.serialize(event, store)
 
             event_payloads.append(
                 {
@@ -154,7 +151,7 @@ class ReplicationFederationSendEduRestServlet(ReplicationEndpoint):
         self.registry = hs.get_federation_registry()
 
     @staticmethod
-    def _serialize_payload(edu_type, origin, content):
+    async def _serialize_payload(edu_type, origin, content):
         return {"origin": origin, "content": content}
 
     async def _handle_request(self, request, edu_type):
@@ -197,7 +194,7 @@ class ReplicationGetQueryRestServlet(ReplicationEndpoint):
         self.registry = hs.get_federation_registry()
 
     @staticmethod
-    def _serialize_payload(query_type, args):
+    async def _serialize_payload(query_type, args):
         """
         Args:
             query_type (str)
@@ -238,7 +235,7 @@ class ReplicationCleanRoomRestServlet(ReplicationEndpoint):
         self.store = hs.get_datastore()
 
     @staticmethod
-    def _serialize_payload(room_id, args):
+    async def _serialize_payload(room_id, args):
         """
         Args:
             room_id (str)
@@ -273,7 +270,7 @@ class ReplicationStoreRoomOnInviteRestServlet(ReplicationEndpoint):
         self.store = hs.get_datastore()
 
     @staticmethod
-    def _serialize_payload(room_id, room_version):
+    async def _serialize_payload(room_id, room_version):
         return {"room_version": room_version.identifier}
 
     async def _handle_request(self, request, room_id):
@@ -36,7 +36,7 @@ class RegisterDeviceReplicationServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()
 
     @staticmethod
-    def _serialize_payload(user_id, device_id, initial_display_name, is_guest):
+    async def _serialize_payload(user_id, device_id, initial_display_name, is_guest):
         """
         Args:
             device_id (str|None): Device ID to use, if None a new one is
@@ -52,7 +52,9 @@ class ReplicationRemoteJoinRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()
 
     @staticmethod
-    def _serialize_payload(requester, room_id, user_id, remote_room_hosts, content):
+    async def _serialize_payload(
+        requester, room_id, user_id, remote_room_hosts, content
+    ):
         """
         Args:
             requester(Requester)
@@ -112,7 +114,7 @@ class ReplicationRemoteRejectInviteRestServlet(ReplicationEndpoint):
         self.member_handler = hs.get_room_member_handler()
 
     @staticmethod
-    def _serialize_payload(  # type: ignore
+    async def _serialize_payload(  # type: ignore
         invite_event_id: str,
         txn_id: Optional[str],
         requester: Requester,
@@ -174,7 +176,7 @@ class ReplicationUserJoinedLeftRoomRestServlet(ReplicationEndpoint):
         self.distributor = hs.get_distributor()
 
     @staticmethod
-    def _serialize_payload(room_id, user_id, change):
+    async def _serialize_payload(room_id, user_id, change):
         """
         Args:
             room_id (str)
@@ -50,7 +50,7 @@ class ReplicationBumpPresenceActiveTime(ReplicationEndpoint):
         self._presence_handler = hs.get_presence_handler()
 
     @staticmethod
-    def _serialize_payload(user_id):
+    async def _serialize_payload(user_id):
         return {}
 
     async def _handle_request(self, request, user_id):
@@ -92,7 +92,7 @@ class ReplicationPresenceSetState(ReplicationEndpoint):
         self._presence_handler = hs.get_presence_handler()
 
     @staticmethod
-    def _serialize_payload(user_id, state, ignore_status_msg=False):
+    async def _serialize_payload(user_id, state, ignore_status_msg=False):
         return {
             "state": state,
             "ignore_status_msg": ignore_status_msg,
@@ -34,7 +34,7 @@ class ReplicationRegisterServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()
 
     @staticmethod
-    def _serialize_payload(
+    async def _serialize_payload(
         user_id,
         password_hash,
         was_guest,
@@ -105,7 +105,7 @@ class ReplicationPostRegisterActionsServlet(ReplicationEndpoint):
         self.registration_handler = hs.get_registration_handler()
 
     @staticmethod
-    def _serialize_payload(user_id, auth_result, access_token):
+    async def _serialize_payload(user_id, auth_result, access_token):
         """
         Args:
             user_id (str): The user ID that consented
@@ -15,8 +15,6 @@
 
 import logging
 
-from twisted.internet import defer
-
 from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
 from synapse.events import make_event_from_dict
 from synapse.events.snapshot import EventContext
@@ -62,8 +60,7 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):
         self.clock = hs.get_clock()
 
     @staticmethod
-    @defer.inlineCallbacks
-    def _serialize_payload(
+    async def _serialize_payload(
         event_id, store, event, context, requester, ratelimit, extra_users
     ):
         """
@@ -77,7 +74,7 @@ class ReplicationSendEventRestServlet(ReplicationEndpoint):
             extra_users (list(UserID)): Any extra users to notify about event
         """
 
-        serialized_context = yield context.serialize(event, store)
+        serialized_context = await context.serialize(event, store)
 
         payload = {
             "event": event.get_pdu_json(),
@@ -54,7 +54,7 @@ class ReplicationGetStreamUpdates(ReplicationEndpoint):
         self.streams = hs.get_replication_streams()
 
     @staticmethod
-    def _serialize_payload(stream_name, from_token, upto_token):
+    async def _serialize_payload(stream_name, from_token, upto_token):
         return {"from_token": from_token, "upto_token": upto_token}
 
     async def _handle_request(self, request, stream_name):
@@ -103,6 +103,14 @@ class DeleteRoomRestServlet(RestServlet):
                 Codes.BAD_JSON,
             )
 
+        purge = content.get("purge", True)
+        if not isinstance(purge, bool):
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Param 'purge' must be a boolean, if given",
+                Codes.BAD_JSON,
+            )
+
         ret = await self.room_shutdown_handler.shutdown_room(
             room_id=room_id,
             new_room_user_id=content.get("new_room_user_id"),
@@ -113,7 +121,8 @@ class DeleteRoomRestServlet(RestServlet):
         )
 
         # Purge room
-        await self.pagination_handler.purge_room(room_id)
+        if purge:
+            await self.pagination_handler.purge_room(room_id)
 
         return (200, ret)
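The new `purge` parameter above defaults to `True`, preserving the old behaviour of the shutdown-and-purge admin API; passing `false` shuts the room down but leaves its history in the database. A sketch of a request with purging disabled, assuming the endpoint's usual `/_synapse/admin/v1/rooms/<room_id>/delete` mount point and placeholder credentials:

```python
import requests

# Hypothetical homeserver, room ID and admin token, for illustration only.
resp = requests.post(
    "https://homeserver.example.com/_synapse/admin/v1"
    "/rooms/%21somewhere%3Aexample.com/delete",
    headers={"Authorization": "Bearer <admin_access_token>"},
    json={"new_room_user_id": "@admin:example.com", "purge": False},
)
print(resp.status_code, resp.json())
```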
@@ -444,7 +444,7 @@ class RoomMemberListRestServlet(RestServlet):
 
     async def on_GET(self, request, room_id):
         # TODO support Pagination stream API (limit/tokens)
-        requester = await self.auth.get_user_by_req(request)
+        requester = await self.auth.get_user_by_req(request, allow_guest=True)
         handler = self.message_handler
 
         # request the state as of a given event, as identified by a stream token,
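The `allow_guest=True` change above means guest users may now list a room's members. A sketch of the client call this unblocks, again with placeholder values:

```python
import requests

# Hypothetical homeserver, room ID and guest token, for illustration only.
resp = requests.get(
    "https://homeserver.example.com/_matrix/client/r0"
    "/rooms/%21somewhere%3Aexample.com/members",
    headers={"Authorization": "Bearer <guest_access_token>"},
)
print(resp.json())
```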
@@ -426,6 +426,7 @@ class SyncRestServlet(RestServlet):
         result["ephemeral"] = {"events": ephemeral_events}
         result["unread_notifications"] = room.unread_notifications
         result["summary"] = room.summary
+        result["org.matrix.msc2654.unread_count"] = room.unread_count
 
         return result
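The `org.matrix.msc2654.unread_count` field added above is the experimental, unstable-prefixed per-room unread count from MSC2654, reported alongside `unread_notifications`. A trimmed sketch of where it would appear in a `/sync` response (room ID and counts are illustrative):

```json
{
  "rooms": {
    "join": {
      "!somewhere:example.com": {
        "unread_notifications": {"highlight_count": 0, "notification_count": 2},
        "org.matrix.msc2654.unread_count": 3
      }
    }
  }
}
```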
Some files were not shown because too many files have changed in this diff.