Compare commits

116 Commits

Author SHA1 Message Date
Travis Ralston
177f2b838c changelog 2019-05-22 19:16:02 -06:00
Travis Ralston
f9d7d3aa89 Remove m.relates_to from events if the client set it to null
It appears as though the Python code only checks whether the key exists in the dictionary, not whether it holds a useful value. This means that when clients submit (otherwise valid) requests with `m.relates_to: null` and Synapse later reads the event, it hits a `None` reference error on access.

This is easier than guarding all the places where it could be None.
2019-05-22 19:14:10 -06:00
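A minimal sketch of the guard described above, assuming a plain dict of event content (illustrative only, not the actual Synapse patch):

```
def strip_null_relation(content):
    """Drop an explicit `m.relates_to: null`, so code that only checks
    whether the key is present doesn't later trip over a None value."""
    if content.get("m.relates_to") is None:
        content.pop("m.relates_to", None)
    return content

# {"body": "hi", "m.relates_to": None} -> {"body": "hi"}
```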
Richard van der Hoff
1a94de60e8 Run black on synapse.crypto.keyring (#5232) 2019-05-22 18:39:33 +01:00
Neil Johnson
73f1de31d1 Merge branch 'master' into develop 2019-05-22 17:59:43 +01:00
Neil Johnson
3d5bba581b 0.99.5.1 2019-05-22 17:52:44 +01:00
Neil Johnson
006bd8f4f6 Revert "0.99.5"
This reverts commit c31e375ade.
2019-05-22 17:49:53 +01:00
Neil Johnson
c31e375ade 0.99.5 2019-05-22 17:45:44 +01:00
Marcus Hoffmann
62388a1e44 remove urllib3 pin (#5230)
requests 2.22.0 has been released, supporting urllib3 1.25.2

Signed-off-by: Marcus Hoffmann <bubu@bubu1.eu>
2019-05-22 16:48:12 +01:00
Neil Johnson
ae5521be9c Merge branch 'master' into develop 2019-05-22 15:56:55 +01:00
Neil Johnson
8031a6f3d5 0.99.5 2019-05-22 15:40:28 +01:00
Neil Johnson
66b75e2d81 Neilj/ensure get profileinfo available in client reader slaved store (#5213)
* expose SlavedProfileStore to ClientReaderSlavedStore
2019-05-22 13:55:32 +01:00
Steffen
2dfbeea66f Update README.md (#5222)
Add missing backslash
2019-05-22 12:53:16 +01:00
Richard van der Hoff
b898a5600a Merge branch 'master' into develop 2019-05-22 11:38:27 +01:00
Richard van der Hoff
e26e6b3230 update changelog 2019-05-21 17:37:19 +01:00
Amber Brown
4a30e4acb4 Room Statistics (#4338) 2019-05-21 11:36:50 -05:00
Richard van der Hoff
f3ff64e000 Merge commit 'f4c80d70f' into release-v0.99.5 2019-05-21 17:35:31 +01:00
Erik Johnston
f4c80d70f8 Merge pull request #5203 from matrix-org/erikj/aggregate_by_sender
Only count aggregations from distinct senders
2019-05-21 17:10:48 +01:00
Erik Johnston
9526aa96a6 Merge pull request #5212 from matrix-org/erikj/deny_multiple_reactions
Block attempts to annotate the same event twice
2019-05-21 17:08:14 +01:00
Erik Johnston
9259cd4bee Newsfile 2019-05-21 17:06:21 +01:00
Richard van der Hoff
8aed6d87ff Fix spelling in changelog 2019-05-21 16:58:22 +01:00
Richard van der Hoff
959550b645 0.99.5rc1 2019-05-21 16:51:49 +01:00
Erik Johnston
44b8ba484e Fix words 2019-05-21 16:51:45 +01:00
Richard van der Hoff
17f6804837 Introduce room v4 which updates event ID format. (#5217)
Implements https://github.com/matrix-org/matrix-doc/pull/2002.
2019-05-21 16:22:54 +01:00
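For illustration, a hedged sketch of the MSC1884-style event ID format that room v4 adopts: the ID is `$` followed by the unpadded URL-safe base64 of the event's reference hash (room v3 used the standard base64 alphabet). The helper below is illustrative; the real hash is computed over the redacted, canonically serialised event.

```
import base64
import hashlib

def v4_style_event_id(reference_hash):
    """'$' + unpadded URL-safe base64 of the event reference hash."""
    return "$" + base64.urlsafe_b64encode(reference_hash).decode("ascii").rstrip("=")

# Illustrative input only; Synapse derives the hash from the redacted event JSON.
print(v4_style_event_id(hashlib.sha256(b"example").digest()))
```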
Richard van der Hoff
c4aef549ad Exclude soft-failed events from fwd-extremity candidates. (#5146)
When considering the candidates to be forward-extremities, we must exclude soft
failures.

Hopefully fixes #5090.
2019-05-21 16:10:54 +01:00
Richard van der Hoff
bab3eddac4 Pin eliot to <1.8 on python 3.5.2 (#5218)
* Pin eliot to <1.8 on python 3.5.2

Fixes https://github.com/matrix-org/synapse/issues/5199

* Add support for 'markers' to python_dependencies

* tell xargs not to strip quotes
2019-05-21 15:58:01 +01:00
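The "markers" mentioned here are PEP 508 environment markers. A hedged example of such a conditional requirement follows; the exact constraint string used in python_dependencies may differ.

```
from packaging.requirements import Requirement

# Only pin eliot on interpreters affected by the bug; the marker spelling
# below is illustrative, not copied from python_dependencies.py.
req = Requirement("eliot<1.8; python_version < '3.5.3'")
print(req.name, req.specifier, req.marker)
print(req.marker.evaluate())  # True only on the affected Python versions
```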
Brendan Abolivier
6a5a70edf0 Merge pull request #5204 from matrix-org/babolivier/account_validity_expiration_date
Add startup background job for account validity
2019-05-21 14:55:15 +01:00
Brendan Abolivier
384122efa8 Doc 2019-05-21 14:39:36 +01:00
Richard van der Hoff
04d53794d6 Fix error handling for rooms whose versions are unknown. (#5219)
If we remove support for a particular room version, we should behave more
gracefully. This should make client requests fail with a 400 rather than a 500,
and will ignore individual PDUs in a federation transaction, rather than the
whole transaction.
2019-05-21 13:47:25 +01:00
Brendan Abolivier
5ceee46c6b Do the select and insert in a single transaction 2019-05-21 13:38:51 +01:00
Erik Johnston
0620dd49db Newsfile 2019-05-20 17:40:24 +01:00
Erik Johnston
c7ec06e8a6 Block attempts to annotate the same event twice 2019-05-20 17:39:05 +01:00
Richard van der Hoff
24b93b9c76 Revert "expose SlavedProfileStore to ClientReaderSlavedStore (#5200)"
This reverts commit ce5bcefc60.

This caused:

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/synapse/src/synapse/app/client_reader.py", line 32, in <module>
    from synapse.replication.slave.storage import SlavedProfileStore
ImportError: cannot import name 'SlavedProfileStore' from 'synapse.replication.slave.storage' (/home/synapse/src/synapse/replication/slave/storage/__init__.py)
error starting synapse.app.client_reader('/home/synapse/config/workers/client_reader.yaml') (exit code: 1); see above for logs
```
2019-05-20 16:21:34 +01:00
Richard van der Hoff
5206648a4a Add a test room version which updates event ID format (#5210)
Implements MSC1884
2019-05-20 15:54:42 +01:00
Erik Johnston
edef6d29ae Merge pull request #5211 from matrix-org/erikj/fixup_reaction_constants
Rename relation types to match MSC
2019-05-20 14:52:29 +01:00
Erik Johnston
d642178654 Newsfile 2019-05-20 14:32:16 +01:00
Erik Johnston
1dff859d6a Rename relation types to match MSC 2019-05-20 14:31:19 +01:00
Erik Johnston
57ba3451b6 Merge pull request #5209 from matrix-org/erikj/reactions_base
Land basic reaction and edit support.
2019-05-20 14:06:40 +01:00
Erik Johnston
06671057b6 Newsfile 2019-05-20 12:39:07 +01:00
Erik Johnston
9ad246e6d2 Merge pull request #5207 from matrix-org/erikj/reactions_redactions
Correctly update aggregation counts after redaction
2019-05-20 12:36:06 +01:00
Erik Johnston
2ac9c965dd Fixup comments 2019-05-20 12:32:26 +01:00
Erik Johnston
935af0da38 Correctly update aggregation counts after redaction 2019-05-20 12:09:27 +01:00
Erik Johnston
210cb6dae2 Merge pull request #5195 from matrix-org/erikj/edits
Add basic editing support
2019-05-20 12:06:19 +01:00
ReidAnderson
3787133c9e Limit UserIds to a length that fits in a state key (#5198) 2019-05-20 11:20:08 +01:00
Brendan Abolivier
99c4ec1eef Changelog 2019-05-17 19:38:41 +01:00
Brendan Abolivier
ad5b4074e1 Add startup background job for account validity
If account validity is enabled in the server's configuration, this job will run at startup as a background job and will stick an expiration date to any registered account missing one.
2019-05-17 19:37:31 +01:00
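A hedged sketch of that startup job (table and column names are illustrative, not necessarily Synapse's actual schema); as a later commit notes, the select and inserts happen in a single transaction:

```
def add_missing_expirations(txn, validity_period_ms, now_ms):
    """Give every account without an expiration date one, in one transaction."""
    txn.execute(
        "SELECT u.name FROM users AS u "
        "LEFT JOIN account_validity AS av ON u.name = av.user_id "
        "WHERE av.user_id IS NULL"
    )
    for (user_id,) in txn.fetchall():
        txn.execute(
            "INSERT INTO account_validity (user_id, expiration_ts_ms) VALUES (?, ?)",
            (user_id, now_ms + validity_period_ms),
        )
```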
Erik Johnston
b63cc325a9 Only count aggregations from distinct senders
As a user isn't allowed to send a single emoji more than once.
2019-05-17 18:03:10 +01:00
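In other words, an annotation is counted once per sender. A small illustrative sketch (the input shape is assumed, not Synapse's storage format):

```
from collections import defaultdict

def count_annotations(annotations):
    """annotations: iterable of (sender, aggregation_key) pairs."""
    senders_by_key = defaultdict(set)
    for sender, key in annotations:
        senders_by_key[key].add(sender)
    return {key: len(senders) for key, senders in senders_by_key.items()}

# The duplicate reaction from @a:hs is only counted once:
print(count_annotations([("@a:hs", "👍"), ("@a:hs", "👍"), ("@b:hs", "👍")]))  # {'👍': 2}
```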
Erik Johnston
d4ca533d70 Make tests use different user for each reaction it sends
As users aren't allowed to react with the same emoji more than once.
2019-05-17 18:03:05 +01:00
bytepoets-blo
291e1eea5e fix mapping of return values for get_or_register_3pid_guest (#5177)
* fix mapping of return values for get_or_register_3pid_guest
2019-05-17 17:27:14 +01:00
Erik Johnston
85ece3df46 Merge pull request #5191 from matrix-org/erikj/refactor_pagination_bounds
Make generating SQL bounds for pagination generic
2019-05-17 17:24:36 +01:00
Erik Johnston
8dd9cca8ea Spelling and clarifications 2019-05-17 16:40:51 +01:00
Erik Johnston
5dbff34509 Fixup based on review comments 2019-05-17 15:48:04 +01:00
Neil Johnson
ce5bcefc60 expose SlavedProfileStore to ClientReaderSlavedStore (#5200)
* expose SlavedProfileStore to ClientReaderSlavedStore
2019-05-17 13:27:19 +01:00
Richard van der Hoff
afb463fb7a Some vagrant hackery for testing the debs 2019-05-17 12:56:46 +01:00
Richard van der Hoff
da5ef0bb42 Merge remote-tracking branch 'origin/master' into develop 2019-05-17 12:39:48 +01:00
Richard van der Hoff
7ce1f97a13 Stop telling people to install the optional dependencies. (#5197)
* Stop telling people to install the optional dependencies.

They're optional.

Also update the postgres docs a bit for clarity(?)
2019-05-17 12:38:03 +01:00
Brendan Abolivier
fdeac1e984 Merge pull request #5196 from matrix-org/babolivier/per_room_profiles
Add an option to disable per-room profiles
2019-05-17 12:10:49 +01:00
PauRE
f89f688a55 Fix image orientation when generating thumbnail (#5039) 2019-05-16 19:04:26 +01:00
David Baker
07cff7b121 Merge pull request #5174 from matrix-org/dbkr/add_dummy_flow_to_recaptcha_only
Re-order registration stages to do msisdn & email auth last
2019-05-16 17:27:39 +01:00
Erik Johnston
d46aab3fa8 Add basic editing support 2019-05-16 16:54:45 +01:00
Erik Johnston
5c39d262c0 Merge pull request #5192 from matrix-org/erikj/relations_aggregations
Add relation aggregation APIs
2019-05-16 16:54:05 +01:00
Erik Johnston
895179a4dc Update docstring 2019-05-16 16:41:05 +01:00
Brendan Abolivier
8f9ce1a8a2 Lint 2019-05-16 15:25:54 +01:00
Brendan Abolivier
cc8c139a39 Lint 2019-05-16 15:20:59 +01:00
Brendan Abolivier
a5fe16c5a7 Changelog + sample config 2019-05-16 15:11:37 +01:00
Brendan Abolivier
efdc55db75 Forgot copyright 2019-05-16 15:10:24 +01:00
Brendan Abolivier
54a582ed44 Add test case 2019-05-16 15:09:16 +01:00
Brendan Abolivier
cd32375846 Add option to disable per-room profiles 2019-05-16 14:34:28 +01:00
Erik Johnston
7a7eba8302 Move parsing of tokens out of storage layer 2019-05-16 14:26:23 +01:00
Erik Johnston
2c662ddde4 Indirect tuple conversion 2019-05-16 14:21:39 +01:00
Erik Johnston
95f3fcda3c Check that event is visible in new APIs 2019-05-16 14:19:06 +01:00
Matthew Hodgson
4a6d5de98c Make /sync attempt to return device updates for both joined and invited users (#3484) 2019-05-16 13:23:43 +01:00
David Baker
fafb936de5 Merge pull request #5187 from matrix-org/dbkr/only_check_threepid_not_in_use_if_actually_registering
Only check 3pids not in use when registering
2019-05-16 10:58:09 +01:00
Erik Johnston
b5c62c6b26 Fix relations in worker mode 2019-05-16 10:38:13 +01:00
Erik Johnston
33453419b0 Add cache to relations 2019-05-16 10:02:14 +01:00
Erik Johnston
a0603523d2 Add aggregations API 2019-05-16 09:37:20 +01:00
Erik Johnston
f201a30244 Merge pull request #5186 from matrix-org/erikj/simple_pagination
Add simple relations API
2019-05-16 09:34:12 +01:00
David Baker
cd0faba7cd Make newsfile clearer 2019-05-15 20:53:48 +01:00
Amber Brown
f1e5b41388 Make all the rate limiting options more consistent (#5181) 2019-05-15 12:06:04 -05:00
Richard van der Hoff
5f027a315f Drop support for v2_alpha API prefix (#5190) 2019-05-15 17:37:46 +01:00
Erik Johnston
5be34fc3e3 Actually check for None rather falsey 2019-05-15 17:30:23 +01:00
Erik Johnston
e6459c26b4 Actually implement idempotency 2019-05-15 17:28:33 +01:00
Richard van der Hoff
1757e2d7c3 Merge branch 'master' into develop 2019-05-15 14:09:30 +01:00
Erik Johnston
5fb72e6888 Newsfile 2019-05-15 13:36:51 +01:00
Erik Johnston
b50641e357 Add simple pagination API 2019-05-15 13:36:51 +01:00
Erik Johnston
efe3c7977a Add simple send_relation API and track in DB 2019-05-15 13:36:51 +01:00
Erik Johnston
a9fc71c372 Merge branch 'erikj/refactor_pagination_bounds' into erikj/reactions_base 2019-05-15 13:36:29 +01:00
Erik Johnston
7155162844 Newsfile 2019-05-15 11:33:22 +01:00
Erik Johnston
54d77107c1 Make generating SQL bounds for pagination generic
This will allow us to reuse the same structure when we paginate e.g.
relations
2019-05-15 11:30:05 +01:00
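A hedged sketch of what a generic pagination bound looks like for Synapse's (topological_ordering, stream_ordering) tuple tokens; the helper name and signature here are invented for illustration:

```
def pagination_bound(direction, columns, token):
    """Build a WHERE fragment bounding a two-column ordering tuple."""
    op = "<" if direction == "b" else ">"  # paginating backwards vs forwards
    col1, col2 = columns
    v1, v2 = token
    clause = "(%s %s ? OR (%s = ? AND %s %s ?))" % (col1, op, col1, col2, op)
    return clause, [v1, v1, v2]

clause, args = pagination_bound("b", ("topological_ordering", "stream_ordering"), (100, 2500))
print(clause)
# (topological_ordering < ? OR (topological_ordering = ? AND stream_ordering < ?))
```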
Erik Johnston
0aba6c8251 Merge pull request #5183 from matrix-org/erikj/async_serialize_event
Allow client event serialization to be async
2019-05-15 10:36:30 +01:00
Erik Johnston
d94544051b Merge pull request #5184 from matrix-org/erikj/expose_get_events_as_array
Expose DataStore._get_events as get_events_as_list
2019-05-15 10:17:38 +01:00
Erik Johnston
99c7dae087 Merge pull request #5185 from matrix-org/erikj/fix_config_ratelimiting
Use correct config option for ratelimiting in tests
2019-05-15 09:54:15 +01:00
Erik Johnston
8ed2f182f7 Update docstring with correct return type
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-05-15 09:52:52 +01:00
Erik Johnston
52ddc6c0ed Update docstring with correct type
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-05-15 09:52:15 +01:00
David Baker
efefb5bda2 Have I got newsfile for you 2019-05-14 19:18:42 +01:00
David Baker
6ca88c4693 Only check 3pids not in use when registering
We checked that 3pids were not already in use before we checked if
we were going to return the account previously registered in the
same UI auth session, in which case the 3pids will definitely
be in use.

https://github.com/vector-im/riot-web/issues/9586
2019-05-14 19:04:59 +01:00
Richard van der Hoff
daa2fb6317 comment about user_joined_room 2019-05-14 18:53:09 +01:00
Erik Johnston
495e859e58 Merge branch 'erikj/fix_config_ratelimiting' into erikj/test 2019-05-14 14:42:47 +01:00
Erik Johnston
db3046f565 Newsfile 2019-05-14 14:39:27 +01:00
Erik Johnston
dc4f6d1b01 Use correct config option for ratelimiting in tests 2019-05-14 14:37:40 +01:00
Erik Johnston
ae69a6aa9d Merge branch 'erikj/async_serialize_event' into erikj/reactions_rebase 2019-05-14 14:09:33 +01:00
Erik Johnston
53788a447f Newsfile 2019-05-14 13:41:36 +01:00
Erik Johnston
4fb44fb5b9 Expose DataStore._get_events as get_events_as_list
This is in preparation for reaction work which requires it.
2019-05-14 13:37:44 +01:00
Erik Johnston
a80e6b53f9 Newsfile 2019-05-14 13:12:23 +01:00
Erik Johnston
b54b03f9e1 Allow client event serialization to be async 2019-05-14 11:58:01 +01:00
Amber Brown
df2ebd75d3 Migrate all tests to use the dict-based config format instead of hanging items off HomeserverConfig (#5171) 2019-05-13 15:01:14 -05:00
Andrew Morgan
5a4b328f52 Add ability to blacklist ip ranges for federation traffic (#5043) 2019-05-13 19:05:06 +01:00
David Baker
822072b1bb Terms might not be the last stage 2019-05-13 16:10:26 +01:00
David Baker
516a5fb64b Merge remote-tracking branch 'origin/develop' into dbkr/add_dummy_flow_to_recaptcha_only 2019-05-13 15:54:25 +01:00
David Baker
9e99143c47 Merge remote-tracking branch 'origin/develop' into dbkr/add_dummy_flow_to_recaptcha_only 2019-05-13 15:37:03 +01:00
David Baker
8782bfb783 And now I realise why the test is failing... 2019-05-13 15:34:11 +01:00
David Baker
c9f811c5d4 Update changelog 2019-05-10 14:01:19 +01:00
David Baker
04299132af Re-order flows so that email auth is done last
It's more natural for the user if the bit that takes them away
from the registration flow comes last. Adding the dummy stage allows
us to do the stages in this order without the ambiguity.
2019-05-10 13:58:03 +01:00
David Baker
7a3eb8657d Thanks, automated grammar pedantry. 2019-05-10 11:18:35 +01:00
David Baker
9c61dce3c8 Comment 2019-05-10 11:14:55 +01:00
David Baker
a18f93279e Add changelog entry 2019-05-10 11:11:59 +01:00
David Baker
8714ff6d51 Add a DUMMY stage to captcha-only registration flow
This allows the client to complete the email stage last, which is more
natural for the user. Without this stage, once the client had completed
the recaptcha (and terms, if enabled) stages, the registration request
would complete because a whole flow had now been satisfied, even if the
client was intending to complete the flow that is the same except with
email auth at the end.

Adding a dummy auth stage to the recaptcha-only flow means it's
always unambiguous which flow the client was trying to complete.
Longer term we should think about changing the protocol so the
client explicitly says which flow it's trying to complete.

vector-im/riot-web#9586
2019-05-10 11:09:53 +01:00
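A hedged sketch of the ambiguity described above, using the standard Matrix UIA stage names (the flow-matching logic here is simplified, not Synapse's implementation):

```
flows = [
    ["m.login.recaptcha", "m.login.dummy"],           # no email requested
    ["m.login.recaptcha", "m.login.email.identity"],  # email verified last
]

def registration_complete(completed_stages, flows):
    """Registration finishes once the completed stages cover a whole flow."""
    return any(set(flow) <= set(completed_stages) for flow in flows)

# With the dummy stage, completing only the recaptcha matches neither flow,
# so the client can still go on to do email auth without ambiguity.
print(registration_complete(["m.login.recaptcha"], flows))  # False
```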
112 changed files with 4523 additions and 848 deletions

View File

@@ -1,3 +1,54 @@
Synapse 0.99.5.1 (2019-05-22)
=============================

No significant changes.


Synapse 0.99.5 (2019-05-22)
===========================

No significant changes.


Synapse 0.99.5rc1 (2019-05-21)
==============================

Features
--------

- Add ability to blacklist IP ranges for the federation client. ([\#5043](https://github.com/matrix-org/synapse/issues/5043))
- Ratelimiting configuration for clients sending messages and the federation server has been altered to match login ratelimiting. The old configuration names will continue working. Check the sample config for details of the new names. ([\#5181](https://github.com/matrix-org/synapse/issues/5181))
- Drop support for the undocumented /_matrix/client/v2_alpha API prefix. ([\#5190](https://github.com/matrix-org/synapse/issues/5190))
- Add an option to disable per-room profiles. ([\#5196](https://github.com/matrix-org/synapse/issues/5196))
- Stick an expiration date to any registered user missing one at startup if account validity is enabled. ([\#5204](https://github.com/matrix-org/synapse/issues/5204))
- Add experimental support for relations (aka reactions and edits). ([\#5209](https://github.com/matrix-org/synapse/issues/5209), [\#5211](https://github.com/matrix-org/synapse/issues/5211), [\#5203](https://github.com/matrix-org/synapse/issues/5203), [\#5212](https://github.com/matrix-org/synapse/issues/5212))
- Add a room version 4 which uses a new event ID format, as per [MSC2002](https://github.com/matrix-org/matrix-doc/pull/2002). ([\#5210](https://github.com/matrix-org/synapse/issues/5210), [\#5217](https://github.com/matrix-org/synapse/issues/5217))

Bugfixes
--------

- Fix image orientation when generating thumbnails (needs pillow>=4.3.0). Contributed by Pau Rodriguez-Estivill. ([\#5039](https://github.com/matrix-org/synapse/issues/5039))
- Exclude soft-failed events from forward-extremity candidates: fixes "No forward extremities left!" error. ([\#5146](https://github.com/matrix-org/synapse/issues/5146))
- Re-order stages in registration flows such that msisdn and email verification are done last. ([\#5174](https://github.com/matrix-org/synapse/issues/5174))
- Fix 3pid guest invites. ([\#5177](https://github.com/matrix-org/synapse/issues/5177))
- Fix a bug where the register endpoint would fail with M_THREEPID_IN_USE instead of returning an account previously registered in the same session. ([\#5187](https://github.com/matrix-org/synapse/issues/5187))
- Prevent registration for user ids that are too long to fit into a state key. Contributed by Reid Anderson. ([\#5198](https://github.com/matrix-org/synapse/issues/5198))
- Fix incompatibility between ACME support and Python 3.5.2. ([\#5218](https://github.com/matrix-org/synapse/issues/5218))
- Fix error handling for rooms whose versions are unknown. ([\#5219](https://github.com/matrix-org/synapse/issues/5219))

Internal Changes
----------------

- Make /sync attempt to return device updates for both joined and invited users. Note that this doesn't currently work correctly due to other bugs. ([\#3484](https://github.com/matrix-org/synapse/issues/3484))
- Update tests to consistently be configured via the same code that is used when loading from configuration files. ([\#5171](https://github.com/matrix-org/synapse/issues/5171), [\#5185](https://github.com/matrix-org/synapse/issues/5185))
- Allow client event serialization to be async. ([\#5183](https://github.com/matrix-org/synapse/issues/5183))
- Expose DataStore._get_events as get_events_as_list. ([\#5184](https://github.com/matrix-org/synapse/issues/5184))
- Make generating SQL bounds for pagination generic. ([\#5191](https://github.com/matrix-org/synapse/issues/5191))
- Stop telling people to install the optional dependencies by default. ([\#5197](https://github.com/matrix-org/synapse/issues/5197))

Synapse 0.99.4 (2019-05-15)
===========================

View File

@@ -35,7 +35,7 @@ virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install matrix-synapse[all]
pip install matrix-synapse
```
This will download Synapse from [PyPI](https://pypi.org/project/matrix-synapse)
@@ -48,7 +48,7 @@ update flag:
```
source ~/synapse/env/bin/activate
pip install -U matrix-synapse[all]
pip install -U matrix-synapse
```
Before you can start Synapse, you will need to generate a configuration

1
changelog.d/4338.feature Normal file
View File

@@ -0,0 +1 @@
Synapse now more efficiently collates room statistics.

1
changelog.d/5200.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix worker registration bug caused by ClientReaderSlavedStore being unable to see get_profileinfo.

1
changelog.d/5230.misc Normal file
View File

@@ -0,0 +1 @@
Remove urllib3 pin as requests 2.22.0 has been released supporting urllib3 1.25.2.

1
changelog.d/5232.misc Normal file
View File

@@ -0,0 +1 @@
Run black on synapse.crypto.keyring.

1
changelog.d/5239.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix 500 Internal Server Error when sending an event with `m.relates_to: null`.

6
debian/changelog vendored
View File

@@ -1,3 +1,9 @@
matrix-synapse-py3 (0.99.5.1) stable; urgency=medium

  * New synapse release 0.99.5.1.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 22 May 2019 16:22:24 +0000

matrix-synapse-py3 (0.99.4) stable; urgency=medium
[ Christoph Müller ]

2
debian/test/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
.vagrant
*.log

23
debian/test/provision.sh vendored Normal file
View File

@@ -0,0 +1,23 @@
#!/bin/bash
#
# provisioning script for vagrant boxes for testing the matrix-synapse debs.
#
# Will install the most recent matrix-synapse-py3 deb for this platform from
# the /debs directory.
set -e
apt-get update
apt-get install -y lsb-release
deb=`ls /debs/matrix-synapse-py3_*+$(lsb_release -cs)*.deb | sort | tail -n1`
debconf-set-selections <<EOF
matrix-synapse matrix-synapse/report-stats boolean false
matrix-synapse matrix-synapse/server-name string localhost:18448
EOF
dpkg -i "$deb"
sed -i -e '/port: 8...$/{s/8448/18448/; s/8008/18008/}' -e '$aregistration_shared_secret: secret' /etc/matrix-synapse/homeserver.yaml
systemctl restart matrix-synapse

13
debian/test/stretch/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,13 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
ver = `cd ../../..; dpkg-parsechangelog -S Version`.strip()
Vagrant.configure("2") do |config|
config.vm.box = "debian/stretch64"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder "../../../../debs", "/debs", type: "nfs"
config.vm.provision "shell", path: "../provision.sh"
end

10
debian/test/xenial/Vagrantfile vendored Normal file
View File

@@ -0,0 +1,10 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
config.vm.box = "ubuntu/xenial64"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder "../../../../debs", "/debs"
config.vm.provision "shell", path: "../provision.sh"
end

View File

@@ -161,7 +161,7 @@ specify values for `SYNAPSE_CONFIG_PATH`, `SYNAPSE_SERVER_NAME` and
example:
```
docker run -it --rm
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
-e SYNAPSE_SERVER_NAME=my.matrix.host \

View File

@@ -3,6 +3,28 @@ Using Postgres
Postgres version 9.4 or later is known to work.
Install postgres client libraries
=================================
Synapse will require the python postgres client library in order to connect to
a postgres database.
* If you are using the `matrix.org debian/ubuntu
packages <../INSTALL.md#matrixorg-packages>`_,
the necessary libraries will already be installed.
* For other pre-built packages, please consult the documentation from the
relevant package.
* If you installed synapse `in a virtualenv
<../INSTALL.md#installing-from-source>`_, you can install the library with::
~/synapse/env/bin/pip install matrix-synapse[postgres]
(substituting the path to your virtualenv for ``~/synapse/env``, if you used a
different path). You will require the postgres development files. These are in
the ``libpq-dev`` package on Debian-derived distributions.
Set up database
===============
@@ -26,29 +48,6 @@ encoding use, e.g.::
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
Set up client in Debian/Ubuntu
===========================
Postgres support depends on the postgres python connector ``psycopg2``. In the
virtual env::
sudo apt-get install libpq-dev
pip install psycopg2
Set up client in RHEL/CentOs 7
==============================
Make sure you have the appropriate version of postgres-devel installed. For a
postgres 9.4, use the postgres 9.4 packages from
[here](https://wiki.postgresql.org/wiki/YUM_Installation).
As with Debian/Ubuntu, postgres support depends on the postgres python connector
``psycopg2``. In the virtual env::
sudo yum install postgresql-devel libpqxx-devel.x86_64
export PATH=/usr/pgsql-9.4/bin/:$PATH
pip install psycopg2
Tuning Postgres
===============

View File

@@ -115,6 +115,24 @@ pid_file: DATADIR/homeserver.pid
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -258,6 +276,12 @@ listeners:
#
#require_membership_for_aliases: false
# Whether to allow per-room membership profiles through the send of membership
# events with profile information that differ from the target's global profile.
# Defaults to 'true'.
#
#allow_per_room_profiles: false
## TLS ##
@@ -428,21 +452,15 @@ log_config: "CONFDIR/SERVERNAME.log.config"
## Ratelimiting ##
# Number of messages a client can send per second
#
#rc_messages_per_second: 0.2
# Number of message a client can send before being throttled
#
#rc_message_burst_count: 10.0
# Ratelimiting settings for registration and login.
# Ratelimiting settings for client actions (registration, login, messaging).
#
# Each ratelimiting configuration is made of two parameters:
# - per_second: number of requests a client can send per second.
# - burst_count: number of requests a client can send before being throttled.
#
# Synapse currently uses the following configurations:
# - one for messages that ratelimits sending based on the account the client
# is using
# - one for registration that ratelimits registration requests based on the
# client's IP address.
# - one for login that ratelimits login requests based on the client's IP
@@ -455,6 +473,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
#
# The defaults are as shown below.
#
#rc_message:
# per_second: 0.2
# burst_count: 10
#
#rc_registration:
# per_second: 0.17
# burst_count: 3
@@ -470,29 +492,28 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# per_second: 0.17
# burst_count: 3
# The federation window size in milliseconds
#
#federation_rc_window_size: 1000
# The number of federation requests from a single server in a window
# before the server will delay processing the request.
# Ratelimiting settings for incoming federation
#
#federation_rc_sleep_limit: 10
# The duration in milliseconds to delay processing events from
# remote servers by if they go over the sleep limit.
# The rc_federation configuration is made up of the following settings:
# - window_size: window size in milliseconds
# - sleep_limit: number of federation requests from a single server in
# a window before the server will delay processing the request.
# - sleep_delay: duration in milliseconds to delay processing events
# from remote servers by if they go over the sleep limit.
# - reject_limit: maximum number of concurrent federation requests
# allowed from a single server
# - concurrent: number of federation requests to concurrently process
# from a single server
#
#federation_rc_sleep_delay: 500
# The maximum number of concurrent federation requests allowed
# from a single server
# The defaults are as shown below.
#
#federation_rc_reject_limit: 50
# The number of federation requests to concurrently process from a
# single server
#
#federation_rc_concurrent: 3
#rc_federation:
# window_size: 1000
# sleep_limit: 10
# sleep_delay: 500
# reject_limit: 50
# concurrent: 3
# Target outgoing federation transaction frequency for sending read-receipts,
# per-room.
@@ -726,6 +747,14 @@ uploads_path: "DATADIR/uploads"
# link. ``%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
# This means that, if a validity period is set, and Synapse is restarted (it will
# then derive an expiration date from the current validity period), and some time
# after that the validity period changes and Synapse is restarted, the users'
# expiration dates won't be updated unless their account is manually renewed.
#
#account_validity:
# enabled: True
# period: 6w
@@ -1124,6 +1153,22 @@ password_config:
#
# Local statistics collection. Used in populating the room directory.
#
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
# Server Notices room configuration
#
# Uncomment this section to enable a room which can be used to send notices

View File

@@ -27,4 +27,4 @@ try:
except ImportError:
pass
__version__ = "0.99.4"
__version__ = "0.99.5.1"

View File

@@ -23,6 +23,9 @@ MAX_DEPTH = 2**63 - 1
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255
# the maximum length for a user id is 255 characters
MAX_USERID_LENGTH = 255
class Membership(object):
@@ -76,6 +79,7 @@ class EventTypes(object):
RoomHistoryVisibility = "m.room.history_visibility"
CanonicalAlias = "m.room.canonical_alias"
Encryption = "m.room.encryption"
RoomAvatar = "m.room.avatar"
RoomEncryption = "m.room.encryption"
GuestAccess = "m.room.guest_access"
@@ -116,3 +120,11 @@ class UserTypes(object):
"""
SUPPORT = "support"
ALL_USER_TYPES = (SUPPORT,)
class RelationTypes(object):
"""The types of relations known to this server.
"""
ANNOTATION = "m.annotation"
REPLACE = "m.replace"
REFERENCE = "m.reference"

View File

@@ -328,9 +328,23 @@ class RoomKeysVersionError(SynapseError):
self.current_version = current_version
class IncompatibleRoomVersionError(SynapseError):
"""A server is trying to join a room whose version it does not support."""
class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does
not support."""
def __init__(self):
super(UnsupportedRoomVersionError, self).__init__(
code=400,
msg="Homeserver does not support this room version",
errcode=Codes.UNSUPPORTED_ROOM_VERSION,
)
class IncompatibleRoomVersionError(SynapseError):
"""A server is trying to join a room whose version it does not support.
Unlike UnsupportedRoomVersionError, it is specific to the case of the make_join
failing.
"""
def __init__(self, room_version):
super(IncompatibleRoomVersionError, self).__init__(
code=400,

View File

@@ -19,13 +19,15 @@ class EventFormatVersions(object):
"""This is an internal enum for tracking the version of the event format,
independently from the room version.
"""
V1 = 1 # $id:server format
V2 = 2 # MSC1659-style $hash format: introduced for room v3
V1 = 1 # $id:server event id format
V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
V3 = 3 # MSC1884-style $hash format: introduced for room v4
KNOWN_EVENT_FORMAT_VERSIONS = {
EventFormatVersions.V1,
EventFormatVersions.V2,
EventFormatVersions.V3,
}
@@ -75,6 +77,12 @@ class RoomVersions(object):
EventFormatVersions.V2,
StateResolutionVersions.V2,
)
V4 = RoomVersion(
"4",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
)
# the version we will give rooms which are created on this server
@@ -87,5 +95,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V2,
RoomVersions.V3,
RoomVersions.STATE_V2_TEST,
RoomVersions.V4,
)
} # type: dict[str, RoomVersion]

View File

@@ -22,8 +22,7 @@ from six.moves.urllib.parse import urlencode
from synapse.config import ConfigError
CLIENT_PREFIX = "/_matrix/client/api/v1"
CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha"
CLIENT_API_PREFIX = "/_matrix/client"
FEDERATION_PREFIX = "/_matrix/federation"
FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"

View File

@@ -38,6 +38,7 @@ from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
@@ -81,6 +82,7 @@ class ClientReaderSlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedClientIpStore,
BaseSlavedStore,
):

View File

@@ -13,6 +13,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .api import ApiConfig
from .appservice import AppServiceConfig
from .captcha import CaptchaConfig
@@ -36,20 +37,41 @@ from .saml2_config import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .stats import StatsConfig
from .tls import TlsConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
class HomeServerConfig(ServerConfig, TlsConfig, DatabaseConfig, LoggingConfig,
RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
VoipConfig, RegistrationConfig, MetricsConfig, ApiConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,
ConsentConfig,
ServerNoticesConfig, RoomDirectoryConfig,
):
class HomeServerConfig(
ServerConfig,
TlsConfig,
DatabaseConfig,
LoggingConfig,
RatelimitConfig,
ContentRepositoryConfig,
CaptchaConfig,
VoipConfig,
RegistrationConfig,
MetricsConfig,
ApiConfig,
AppServiceConfig,
KeyConfig,
SAML2Config,
CasConfig,
JWTConfig,
PasswordConfig,
EmailConfig,
WorkerConfig,
PasswordAuthProviderConfig,
PushConfig,
SpamCheckerConfig,
GroupsConfig,
UserDirectoryConfig,
ConsentConfig,
StatsConfig,
ServerNoticesConfig,
RoomDirectoryConfig,
):
pass

View File

@@ -16,16 +16,56 @@ from ._base import Config
class RateLimitConfig(object):
def __init__(self, config):
self.per_second = config.get("per_second", 0.17)
self.burst_count = config.get("burst_count", 3.0)
def __init__(self, config, defaults={"per_second": 0.17, "burst_count": 3.0}):
self.per_second = config.get("per_second", defaults["per_second"])
self.burst_count = config.get("burst_count", defaults["burst_count"])
class FederationRateLimitConfig(object):
_items_and_default = {
"window_size": 10000,
"sleep_limit": 10,
"sleep_delay": 500,
"reject_limit": 50,
"concurrent": 3,
}
def __init__(self, **kwargs):
for i in self._items_and_default.keys():
setattr(self, i, kwargs.get(i) or self._items_and_default[i])
class RatelimitConfig(Config):
def read_config(self, config):
self.rc_messages_per_second = config.get("rc_messages_per_second", 0.2)
self.rc_message_burst_count = config.get("rc_message_burst_count", 10.0)
# Load the new-style messages config if it exists. Otherwise fall back
# to the old method.
if "rc_message" in config:
self.rc_message = RateLimitConfig(
config["rc_message"], defaults={"per_second": 0.2, "burst_count": 10.0}
)
else:
self.rc_message = RateLimitConfig(
{
"per_second": config.get("rc_messages_per_second", 0.2),
"burst_count": config.get("rc_message_burst_count", 10.0),
}
)
# Load the new-style federation config, if it exists. Otherwise, fall
# back to the old method.
if "federation_rc" in config:
self.rc_federation = FederationRateLimitConfig(**config["rc_federation"])
else:
self.rc_federation = FederationRateLimitConfig(
**{
"window_size": config.get("federation_rc_window_size"),
"sleep_limit": config.get("federation_rc_sleep_limit"),
"sleep_delay": config.get("federation_rc_sleep_delay"),
"reject_limit": config.get("federation_rc_reject_limit"),
"concurrent": config.get("federation_rc_concurrent"),
}
)
self.rc_registration = RateLimitConfig(config.get("rc_registration", {}))
@@ -33,38 +73,26 @@ class RatelimitConfig(Config):
self.rc_login_address = RateLimitConfig(rc_login_config.get("address", {}))
self.rc_login_account = RateLimitConfig(rc_login_config.get("account", {}))
self.rc_login_failed_attempts = RateLimitConfig(
rc_login_config.get("failed_attempts", {}),
rc_login_config.get("failed_attempts", {})
)
self.federation_rc_window_size = config.get("federation_rc_window_size", 1000)
self.federation_rc_sleep_limit = config.get("federation_rc_sleep_limit", 10)
self.federation_rc_sleep_delay = config.get("federation_rc_sleep_delay", 500)
self.federation_rc_reject_limit = config.get("federation_rc_reject_limit", 50)
self.federation_rc_concurrent = config.get("federation_rc_concurrent", 3)
self.federation_rr_transactions_per_room_per_second = config.get(
"federation_rr_transactions_per_room_per_second", 50,
"federation_rr_transactions_per_room_per_second", 50
)
def default_config(self, **kwargs):
return """\
## Ratelimiting ##
# Number of messages a client can send per second
#
#rc_messages_per_second: 0.2
# Number of message a client can send before being throttled
#
#rc_message_burst_count: 10.0
# Ratelimiting settings for registration and login.
# Ratelimiting settings for client actions (registration, login, messaging).
#
# Each ratelimiting configuration is made of two parameters:
# - per_second: number of requests a client can send per second.
# - burst_count: number of requests a client can send before being throttled.
#
# Synapse currently uses the following configurations:
# - one for messages that ratelimits sending based on the account the client
# is using
# - one for registration that ratelimits registration requests based on the
# client's IP address.
# - one for login that ratelimits login requests based on the client's IP
@@ -77,6 +105,10 @@ class RatelimitConfig(Config):
#
# The defaults are as shown below.
#
#rc_message:
# per_second: 0.2
# burst_count: 10
#
#rc_registration:
# per_second: 0.17
# burst_count: 3
@@ -92,29 +124,28 @@ class RatelimitConfig(Config):
# per_second: 0.17
# burst_count: 3
# The federation window size in milliseconds
#
#federation_rc_window_size: 1000
# The number of federation requests from a single server in a window
# before the server will delay processing the request.
# Ratelimiting settings for incoming federation
#
#federation_rc_sleep_limit: 10
# The duration in milliseconds to delay processing events from
# remote servers by if they go over the sleep limit.
# The rc_federation configuration is made up of the following settings:
# - window_size: window size in milliseconds
# - sleep_limit: number of federation requests from a single server in
# a window before the server will delay processing the request.
# - sleep_delay: duration in milliseconds to delay processing events
# from remote servers by if they go over the sleep limit.
# - reject_limit: maximum number of concurrent federation requests
# allowed from a single server
# - concurrent: number of federation requests to concurrently process
# from a single server
#
#federation_rc_sleep_delay: 500
# The maximum number of concurrent federation requests allowed
# from a single server
# The defaults are as shown below.
#
#federation_rc_reject_limit: 50
# The number of federation requests to concurrently process from a
# single server
#
#federation_rc_concurrent: 3
#rc_federation:
# window_size: 1000
# sleep_limit: 10
# sleep_delay: 500
# reject_limit: 50
# concurrent: 3
# Target outgoing federation transaction frequency for sending read-receipts,
# per-room.

View File

@@ -123,6 +123,14 @@ class RegistrationConfig(Config):
# link. ``%%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
# This means that, if a validity period is set, and Synapse is restarted (it will
# then derive an expiration date from the current validity period), and some time
# after that the validity period changes and Synapse is restarted, the users'
# expiration dates won't be updated unless their account is manually renewed.
#
#account_validity:
# enabled: True
# period: 6w

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,6 +18,8 @@
import logging
import os.path
from netaddr import IPSet
from synapse.http.endpoint import parse_and_validate_server_name
from synapse.python_dependencies import DependencyException, check_requirements
@@ -98,6 +101,11 @@ class ServerConfig(Config):
"block_non_admin_invites", False,
)
# Whether to enable experimental MSC1849 (aka relations) support
self.experimental_msc1849_support_enabled = config.get(
"experimental_msc1849_support_enabled", False,
)
# Options to control access by tracking MAU
self.limit_usage_by_mau = config.get("limit_usage_by_mau", False)
self.max_mau_value = 0
@@ -137,6 +145,24 @@ class ServerConfig(Config):
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
self.federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", [],
)
# Attempt to create an IPSet from the given ranges
try:
self.federation_ip_range_blacklist = IPSet(
self.federation_ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in "
"federation_ip_range_blacklist: %s" % e
)
if self.public_baseurl is not None:
if self.public_baseurl[-1] != '/':
self.public_baseurl += '/'
@@ -153,6 +179,10 @@ class ServerConfig(Config):
"require_membership_for_aliases", True,
)
# Whether to allow per-room membership profiles through the send of membership
# events with profile information that differ from the target's global profile.
self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)
self.listeners = []
for listener in config.get("listeners", []):
if not isinstance(listener.get("port", None), int):
@@ -386,6 +416,24 @@ class ServerConfig(Config):
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -528,6 +576,12 @@ class ServerConfig(Config):
# Defaults to 'true'.
#
#require_membership_for_aliases: false
# Whether to allow per-room membership profiles through the send of membership
# events with profile information that differ from the target's global profile.
# Defaults to 'true'.
#
#allow_per_room_profiles: false
""" % locals()
def read_arguments(self, args):

60
synapse/config/stats.py Normal file
View File

@@ -0,0 +1,60 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import division
import sys
from ._base import Config
class StatsConfig(Config):
"""Stats Configuration
Configuration for the behaviour of synapse's stats engine
"""
def read_config(self, config):
self.stats_enabled = True
self.stats_bucket_size = 86400
self.stats_retention = sys.maxsize
stats_config = config.get("stats", None)
if stats_config:
self.stats_enabled = stats_config.get("enabled", self.stats_enabled)
self.stats_bucket_size = (
self.parse_duration(stats_config.get("bucket_size", "1d")) / 1000
)
self.stats_retention = (
self.parse_duration(
stats_config.get("retention", "%ds" % (sys.maxsize,))
)
/ 1000
)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Local statistics collection. Used in populating the room directory.
#
# 'bucket_size' controls how large each statistics timeslice is. It can
# be defined in a human readable short form -- e.g. "1d", "1y".
#
# 'retention' controls how long historical statistics will be kept for.
# It can be defined in a human readable short form -- e.g. "1d", "1y".
#
#
#stats:
# enabled: true
# bucket_size: 1d
# retention: 1y
"""

View File

@@ -56,9 +56,9 @@ from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
VerifyKeyRequest = namedtuple("VerifyRequest", (
"server_name", "key_ids", "json_object", "deferred"
))
VerifyKeyRequest = namedtuple(
"VerifyRequest", ("server_name", "key_ids", "json_object", "deferred")
)
"""
A request for a verify key to verify a JSON object.
@@ -96,9 +96,7 @@ class Keyring(object):
def verify_json_for_server(self, server_name, json_object):
return logcontext.make_deferred_yieldable(
self.verify_json_objects_for_server(
[(server_name, json_object)]
)[0]
self.verify_json_objects_for_server([(server_name, json_object)])[0]
)
def verify_json_objects_for_server(self, server_and_json):
@@ -130,18 +128,15 @@ class Keyring(object):
if not key_ids:
return defer.fail(
SynapseError(
400,
"Not signed by %s" % (server_name,),
Codes.UNAUTHORIZED,
400, "Not signed by %s" % (server_name,), Codes.UNAUTHORIZED
)
)
logger.debug("Verifying for %s with key_ids %s",
server_name, key_ids)
logger.debug("Verifying for %s with key_ids %s", server_name, key_ids)
# add the key request to the queue, but don't start it off yet.
verify_request = VerifyKeyRequest(
server_name, key_ids, json_object, defer.Deferred(),
server_name, key_ids, json_object, defer.Deferred()
)
verify_requests.append(verify_request)
@@ -179,15 +174,13 @@ class Keyring(object):
# any other lookups until we have finished.
# The deferreds are called with no logcontext.
server_to_deferred = {
rq.server_name: defer.Deferred()
for rq in verify_requests
rq.server_name: defer.Deferred() for rq in verify_requests
}
# We want to wait for any previous lookups to complete before
# proceeding.
yield self.wait_for_previous_lookups(
[rq.server_name for rq in verify_requests],
server_to_deferred,
[rq.server_name for rq in verify_requests], server_to_deferred
)
# Actually start fetching keys.
@@ -216,9 +209,7 @@ class Keyring(object):
return res
for verify_request in verify_requests:
verify_request.deferred.addBoth(
remove_deferreds, verify_request,
)
verify_request.deferred.addBoth(remove_deferreds, verify_request)
except Exception:
logger.exception("Error starting key lookups")
@@ -248,7 +239,8 @@ class Keyring(object):
break
logger.info(
"Waiting for existing lookups for %s to complete [loop %i]",
[w[0] for w in wait_on], loop_count,
[w[0] for w in wait_on],
loop_count,
)
with PreserveLoggingContext():
yield defer.DeferredList((w[1] for w in wait_on))
@@ -335,13 +327,14 @@ class Keyring(object):
with PreserveLoggingContext():
for verify_request in requests_missing_keys:
verify_request.deferred.errback(SynapseError(
401,
"No key for %s with id %s" % (
verify_request.server_name, verify_request.key_ids,
),
Codes.UNAUTHORIZED,
))
verify_request.deferred.errback(
SynapseError(
401,
"No key for %s with id %s"
% (verify_request.server_name, verify_request.key_ids),
Codes.UNAUTHORIZED,
)
)
def on_err(err):
with PreserveLoggingContext():
@@ -383,25 +376,26 @@ class Keyring(object):
)
defer.returnValue(result)
except KeyLookupError as e:
logger.warning(
"Key lookup failed from %r: %s", perspective_name, e,
)
logger.warning("Key lookup failed from %r: %s", perspective_name, e)
except Exception as e:
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e),
type(e).__name__,
str(e),
)
defer.returnValue({})
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(get_key, p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError))
results = yield logcontext.make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(get_key, p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
union_of_keys = {}
for result in results:
@@ -412,32 +406,30 @@ class Keyring(object):
@defer.inlineCallbacks
def get_keys_from_server(self, server_name_and_key_ids):
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(
self.get_server_verify_key_v2_direct,
server_name,
key_ids,
)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError))
results = yield logcontext.make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.get_server_verify_key_v2_direct, server_name, key_ids
)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
merged = {}
for result in results:
merged.update(result)
defer.returnValue({
server_name: keys
for server_name, keys in merged.items()
if keys
})
defer.returnValue(
{server_name: keys for server_name, keys in merged.items() if keys}
)
@defer.inlineCallbacks
def get_server_verify_key_v2_indirect(self, server_names_and_key_ids,
perspective_name,
perspective_keys):
def get_server_verify_key_v2_indirect(
self, server_names_and_key_ids, perspective_name, perspective_keys
):
# TODO(mark): Set the minimum_valid_until_ts to that needed by
# the events being validated or the current time if validating
# an incoming request.
@@ -448,9 +440,7 @@ class Keyring(object):
data={
u"server_keys": {
server_name: {
key_id: {
u"minimum_valid_until_ts": 0
} for key_id in key_ids
key_id: {u"minimum_valid_until_ts": 0} for key_id in key_ids
}
for server_name, key_ids in server_names_and_key_ids
}
@@ -458,21 +448,19 @@ class Keyring(object):
long_retries=True,
)
except (NotRetryingDestination, RequestSendFailed) as e:
raise_from(
KeyLookupError("Failed to connect to remote server"), e,
)
raise_from(KeyLookupError("Failed to connect to remote server"), e)
except HttpResponseException as e:
raise_from(
KeyLookupError("Remote server returned an error"), e,
)
raise_from(KeyLookupError("Remote server returned an error"), e)
keys = {}
responses = query_response["server_keys"]
for response in responses:
if (u"signatures" not in response
or perspective_name not in response[u"signatures"]):
if (
u"signatures" not in response
or perspective_name not in response[u"signatures"]
):
raise KeyLookupError(
"Key response not signed by perspective server"
" %r" % (perspective_name,)
@@ -482,9 +470,7 @@ class Keyring(object):
for key_id in response[u"signatures"][perspective_name]:
if key_id in perspective_keys:
verify_signed_json(
response,
perspective_name,
perspective_keys[key_id]
response, perspective_name, perspective_keys[key_id]
)
verified = True
@@ -494,7 +480,7 @@ class Keyring(object):
" known key, signed with: %r, known keys: %r",
perspective_name,
list(response[u"signatures"][perspective_name]),
list(perspective_keys)
list(perspective_keys),
)
raise KeyLookupError(
"Response not signed with a known key for perspective"
@@ -508,18 +494,20 @@ class Keyring(object):
keys.setdefault(server_name, {}).update(processed_response)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(
self.store_keys,
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
)
for server_name, response_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError))
yield logcontext.make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.store_keys,
server_name=server_name,
from_server=perspective_name,
verify_keys=response_keys,
)
for server_name, response_keys in keys.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
defer.returnValue(keys)
@@ -534,26 +522,26 @@ class Keyring(object):
try:
response = yield self.client.get_json(
destination=server_name,
path="/_matrix/key/v2/server/" + urllib.parse.quote(requested_key_id),
path="/_matrix/key/v2/server/"
+ urllib.parse.quote(requested_key_id),
ignore_backoff=True,
)
except (NotRetryingDestination, RequestSendFailed) as e:
raise_from(
KeyLookupError("Failed to connect to remote server"), e,
)
raise_from(KeyLookupError("Failed to connect to remote server"), e)
except HttpResponseException as e:
raise_from(
KeyLookupError("Remote server returned an error"), e,
)
raise_from(KeyLookupError("Remote server returned an error"), e)
if (u"signatures" not in response
or server_name not in response[u"signatures"]):
if (
u"signatures" not in response
or server_name not in response[u"signatures"]
):
raise KeyLookupError("Key response not signed by remote server")
if response["server_name"] != server_name:
raise KeyLookupError("Expected a response for server %r not %r" % (
server_name, response["server_name"]
))
raise KeyLookupError(
"Expected a response for server %r not %r"
% (server_name, response["server_name"])
)
response_keys = yield self.process_v2_response(
from_server=server_name,
@@ -564,16 +552,12 @@ class Keyring(object):
keys.update(response_keys)
yield self.store_keys(
server_name=server_name,
from_server=server_name,
verify_keys=keys,
server_name=server_name, from_server=server_name, verify_keys=keys
)
defer.returnValue({server_name: keys})
@defer.inlineCallbacks
def process_v2_response(
self, from_server, response_json, requested_ids=[],
):
def process_v2_response(self, from_server, response_json, requested_ids=[]):
"""Parse a 'Server Keys' structure from the result of a /key request
This is used to parse either the entirety of the response from
@@ -627,20 +611,13 @@ class Keyring(object):
for key_id in response_json["signatures"].get(server_name, {}):
if key_id not in response_json["verify_keys"]:
raise KeyLookupError(
"Key response must include verification keys for all"
" signatures"
"Key response must include verification keys for all" " signatures"
)
if key_id in verify_keys:
verify_signed_json(
response_json,
server_name,
verify_keys[key_id]
)
verify_signed_json(response_json, server_name, verify_keys[key_id])
signed_key_json = sign_json(
response_json,
self.config.server_name,
self.config.signing_key[0],
response_json, self.config.server_name, self.config.signing_key[0]
)
signed_key_json_bytes = encode_canonical_json(signed_key_json)
@@ -653,21 +630,23 @@ class Keyring(object):
response_keys.update(verify_keys)
response_keys.update(old_verify_keys)
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(
self.store.store_server_keys_json,
server_name=server_name,
key_id=key_id,
from_server=from_server,
ts_now_ms=time_now_ms,
ts_expires_ms=ts_valid_until_ms,
key_json_bytes=signed_key_json_bytes,
)
for key_id in updated_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError))
yield logcontext.make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.store.store_server_keys_json,
server_name=server_name,
key_id=key_id,
from_server=from_server,
ts_now_ms=time_now_ms,
ts_expires_ms=ts_valid_until_ms,
key_json_bytes=signed_key_json_bytes,
)
for key_id in updated_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
defer.returnValue(response_keys)
@@ -681,16 +660,21 @@ class Keyring(object):
A deferred that completes when the keys are stored.
"""
# TODO(markjh): Store whether the keys have expired.
return logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(
self.store.store_server_verify_key,
server_name, server_name, key.time_added, key
)
for key_id, key in verify_keys.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError))
return logcontext.make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self.store.store_server_verify_key,
server_name,
server_name,
key.time_added,
key,
)
for key_id, key in verify_keys.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
@defer.inlineCallbacks
@@ -713,17 +697,19 @@ def _handle_key_deferred(verify_request):
except KeyLookupError as e:
logger.warn(
"Failed to download keys for %s: %s %s",
server_name, type(e).__name__, str(e),
server_name,
type(e).__name__,
str(e),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
502, "Error downloading keys for %s" % (server_name,), Codes.UNAUTHORIZED
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e),
server_name,
type(e).__name__,
str(e),
)
raise SynapseError(
401,
@@ -733,22 +719,24 @@ def _handle_key_deferred(verify_request):
json_object = verify_request.json_object
logger.debug("Got key %s %s:%s for server %s, verifying" % (
key_id, verify_key.alg, verify_key.version, server_name,
))
logger.debug(
"Got key %s %s:%s for server %s, verifying"
% (key_id, verify_key.alg, verify_key.version, server_name)
)
try:
verify_signed_json(json_object, server_name, verify_key)
except SignatureVerifyException as e:
logger.debug(
"Error verifying signature for %s:%s:%s with key %s: %s",
server_name, verify_key.alg, verify_key.version,
server_name,
verify_key.alg,
verify_key.version,
encode_verify_key_base64(verify_key),
str(e),
)
raise SynapseError(
401,
"Invalid signature for server %s with key %s:%s: %s" % (
server_name, verify_key.alg, verify_key.version, str(e),
),
"Invalid signature for server %s with key %s:%s: %s"
% (server_name, verify_key.alg, verify_key.version, str(e)),
Codes.UNAUTHORIZED,
)


@@ -21,6 +21,7 @@ import six
from unpaddedbase64 import encode_base64
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
@@ -335,13 +336,32 @@ class FrozenEventV2(EventBase):
return self.__repr__()
def __repr__(self):
return "<FrozenEventV2 event_id='%s', type='%s', state_key='%s'>" % (
return "<%s event_id='%s', type='%s', state_key='%s'>" % (
self.__class__.__name__,
self.event_id,
self.get("type", None),
self.get("state_key", None),
)
class FrozenEventV3(FrozenEventV2):
"""FrozenEventV3, which differs from FrozenEventV2 only in the event_id format"""
format_version = EventFormatVersions.V3 # All events of this type are V3
@property
def event_id(self):
# We have to import this here as otherwise we get an import loop which
# is hard to break.
from synapse.crypto.event_signing import compute_event_reference_hash
if self._event_id:
return self._event_id
self._event_id = "$" + encode_base64(
compute_event_reference_hash(self)[1], urlsafe=True
)
return self._event_id
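A minimal sketch of how an event ID in this new format is derived from the event's reference hash, assuming a version of unpaddedbase64 that supports the urlsafe flag used above (the input bytes are placeholders):

import hashlib
from unpaddedbase64 import encode_base64

# Stand-in for compute_event_reference_hash(event)[1]: the sha256 digest of the
# redacted, canonically-encoded event.
reference_hash = hashlib.sha256(b"<canonical redacted event json>").digest()
event_id = "$" + encode_base64(reference_hash, urlsafe=True)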
def room_version_to_event_format(room_version):
"""Converts a room version string to the event format
@@ -350,12 +370,15 @@ def room_version_to_event_format(room_version):
Returns:
int
Raises:
UnsupportedRoomVersionError if the room version is unknown
"""
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
# We should have already checked version, so this should not happen
raise RuntimeError("Unrecognized room version %s" % (room_version,))
# this can happen if support is withdrawn for a room version
raise UnsupportedRoomVersionError()
return v.event_format
@@ -376,6 +399,8 @@ def event_type_from_format_version(format_version):
return FrozenEvent
elif format_version == EventFormatVersions.V2:
return FrozenEventV2
elif format_version == EventFormatVersions.V3:
return FrozenEventV3
else:
raise Exception(
"No event format %r" % (format_version,)


@@ -18,6 +18,7 @@ import attr
from twisted.internet import defer
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import (
KNOWN_EVENT_FORMAT_VERSIONS,
KNOWN_ROOM_VERSIONS,
@@ -178,9 +179,8 @@ class EventBuilderFactory(object):
"""
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
raise Exception(
"No event format defined for version %r" % (room_version,)
)
# this can happen if support is withdrawn for a room version
raise UnsupportedRoomVersionError()
return self.for_room_version(v, key_values)
def for_room_version(self, room_version, key_values):


@@ -19,7 +19,10 @@ from six import string_types
from frozendict import frozendict
from synapse.api.constants import EventTypes
from twisted.internet import defer
from synapse.api.constants import EventTypes, RelationTypes
from synapse.util.async_helpers import yieldable_gather_results
from . import EventBase
@@ -311,3 +314,92 @@ def serialize_event(e, time_now_ms, as_client_event=True,
d = only_fields(d, only_event_fields)
return d
class EventClientSerializer(object):
"""Serializes events that are to be sent to clients.
This is used for bundling extra information with any events to be sent to
clients.
"""
def __init__(self, hs):
self.store = hs.get_datastore()
self.experimental_msc1849_support_enabled = (
hs.config.experimental_msc1849_support_enabled
)
@defer.inlineCallbacks
def serialize_event(self, event, time_now, **kwargs):
"""Serializes a single event.
Args:
event (EventBase)
time_now (int): The current time in milliseconds
**kwargs: Arguments to pass to `serialize_event`
Returns:
Deferred[dict]: The serialized event
"""
# To handle the case of presence events and the like
if not isinstance(event, EventBase):
defer.returnValue(event)
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event
if self.experimental_msc1849_support_enabled:
annotations = yield self.store.get_aggregation_groups_for_event(
event_id,
)
references = yield self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f",
)
if annotations.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.ANNOTATION] = annotations.to_dict()
if references.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = yield self.store.get_applicable_edit(event_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
relations = event.content.get("m.relates_to")
serialized_event["content"] = edit.content.get("m.new_content", {})
if relations:
serialized_event["content"]["m.relates_to"] = relations
else:
serialized_event["content"].pop("m.relates_to", None)
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
}
defer.returnValue(serialized_event)
def serialize_events(self, events, time_now, **kwargs):
"""Serializes multiple events.
Args:
events (iter[EventBase])
time_now (int): The current time in milliseconds
**kwargs: Arguments to pass to `serialize_event`
Returns:
Deferred[list[dict]]: The list of serialized events
"""
return yieldable_gather_results(
self.serialize_event, events,
time_now=time_now, **kwargs
)
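To illustrate what the serializer above bundles, a hedged sketch of the unsigned section a client might see when the MSC1849 flag is on; the relation type strings and chunk fields are placeholders based on the aggregation shapes used above:

serialized_event["unsigned"]["m.relations"] = {
    "m.annotation": {                      # RelationTypes.ANNOTATION
        "chunk": [{"type": "m.reaction", "key": "👍", "count": 3}],
    },
    "m.reference": {                       # RelationTypes.REFERENCE
        "chunk": [{"event_id": "$some_reply"}],
    },
    "m.replace": {"event_id": "$latest_edit"},   # RelationTypes.REPLACE
}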


@@ -33,6 +33,7 @@ from synapse.api.errors import (
IncompatibleRoomVersionError,
NotFoundError,
SynapseError,
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.crypto.event_signing import compute_event_signature
@@ -198,11 +199,22 @@ class FederationServer(FederationBase):
try:
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
try:
format_ver = room_version_to_event_format(room_version)
except UnsupportedRoomVersionError:
# this can happen if support for a given room version is withdrawn,
# in which case we may still receive events for that room.
logger.info(
"Ignoring PDU for room %s with unknown version %s",
room_id,
room_version,
)
continue
event = event_from_pdu_json(p, format_ver)
pdus_by_room.setdefault(room_id, []).append(event)


@@ -63,11 +63,7 @@ class TransportLayerServer(JsonResource):
self.authenticator = Authenticator(hs)
self.ratelimiter = FederationRateLimiter(
self.clock,
window_size=hs.config.federation_rc_window_size,
sleep_limit=hs.config.federation_rc_sleep_limit,
sleep_msec=hs.config.federation_rc_sleep_delay,
reject_limit=hs.config.federation_rc_reject_limit,
concurrent_requests=hs.config.federation_rc_concurrent,
config=hs.config.rc_federation,
)
self.register_servlets()


@@ -90,8 +90,8 @@ class BaseHandler(object):
messages_per_second = override.messages_per_second
burst_count = override.burst_count
else:
messages_per_second = self.hs.config.rc_messages_per_second
burst_count = self.hs.config.rc_message_burst_count
messages_per_second = self.hs.config.rc_message.per_second
burst_count = self.hs.config.rc_message.burst_count
allowed, time_allowed = self.ratelimiter.can_do_action(
user_id, time_now,


@@ -21,7 +21,6 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.types import UserID
from synapse.util.logutils import log_function
from synapse.visibility import filter_events_for_client
@@ -50,6 +49,7 @@ class EventStreamHandler(BaseHandler):
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
self._server_notices_sender = hs.get_server_notices_sender()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
@log_function
@@ -120,9 +120,9 @@ class EventStreamHandler(BaseHandler):
time_now = self.clock.time_msec()
chunks = [
serialize_event(e, time_now, as_client_event) for e in events
]
chunks = yield self._event_serializer.serialize_events(
events, time_now, as_client_event=as_client_event,
)
chunk = {
"chunk": chunks,


@@ -1916,6 +1916,11 @@ class FederationHandler(BaseHandler):
event.room_id, latest_event_ids=extrem_ids,
)
logger.debug(
"Doing soft-fail check for %s: state %s",
event.event_id, current_state_ids,
)
# Now check if event pass auth against said current state
auth_types = auth_types_for_event(event)
current_state_ids = [
@@ -1932,7 +1937,7 @@ class FederationHandler(BaseHandler):
self.auth.check(room_version, event, auth_events=current_auth_events)
except AuthError as e:
logger.warn(
"Failed current state auth resolution for %r because %s",
"Soft-failing %r because %s",
event, e,
)
event.internal_metadata.soft_failed = True


@@ -19,7 +19,6 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.handlers.presence import format_user_presence_state
from synapse.streams.config import PaginationConfig
@@ -43,6 +42,7 @@ class InitialSyncHandler(BaseHandler):
self.clock = hs.get_clock()
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self._event_serializer = hs.get_event_client_serializer()
def snapshot_all_rooms(self, user_id=None, pagin_config=None,
as_client_event=True, include_archived=False):
@@ -138,7 +138,9 @@ class InitialSyncHandler(BaseHandler):
d["inviter"] = event.sender
invite_event = yield self.store.get_event(event.event_id)
d["invite"] = serialize_event(invite_event, time_now, as_client_event)
d["invite"] = yield self._event_serializer.serialize_event(
invite_event, time_now, as_client_event,
)
rooms_ret.append(d)
@@ -185,18 +187,21 @@ class InitialSyncHandler(BaseHandler):
time_now = self.clock.time_msec()
d["messages"] = {
"chunk": [
serialize_event(m, time_now, as_client_event)
for m in messages
],
"chunk": (
yield self._event_serializer.serialize_events(
messages, time_now=time_now,
as_client_event=as_client_event,
)
),
"start": start_token.to_string(),
"end": end_token.to_string(),
}
d["state"] = [
serialize_event(c, time_now, as_client_event)
for c in current_state.values()
]
d["state"] = yield self._event_serializer.serialize_events(
current_state.values(),
time_now=time_now,
as_client_event=as_client_event
)
account_data_events = []
tags = tags_by_room.get(event.room_id)
@@ -337,11 +342,15 @@ class InitialSyncHandler(BaseHandler):
"membership": membership,
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"chunk": (yield self._event_serializer.serialize_events(
messages, time_now,
)),
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": [serialize_event(s, time_now) for s in room_state.values()],
"state": (yield self._event_serializer.serialize_events(
room_state.values(), time_now,
)),
"presence": [],
"receipts": [],
})
@@ -355,10 +364,9 @@ class InitialSyncHandler(BaseHandler):
# TODO: These concurrently
time_now = self.clock.time_msec()
state = [
serialize_event(x, time_now)
for x in current_state.values()
]
state = yield self._event_serializer.serialize_events(
current_state.values(), time_now,
)
now_token = yield self.hs.get_event_sources().get_current_token()
@@ -425,7 +433,9 @@ class InitialSyncHandler(BaseHandler):
ret = {
"room_id": room_id,
"messages": {
"chunk": [serialize_event(m, time_now) for m in messages],
"chunk": (yield self._event_serializer.serialize_events(
messages, time_now,
)),
"start": start_token.to_string(),
"end": end_token.to_string(),
},


@@ -22,7 +22,7 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from synapse.api.constants import EventTypes, Membership
from synapse.api.constants import EventTypes, Membership, RelationTypes
from synapse.api.errors import (
AuthError,
Codes,
@@ -32,7 +32,6 @@ from synapse.api.errors import (
)
from synapse.api.room_versions import RoomVersions
from synapse.api.urls import ConsentURIBuilder
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.replication.http.send_event import ReplicationSendEventRestServlet
from synapse.storage.state import StateFilter
@@ -57,6 +56,7 @@ class MessageHandler(object):
self.clock = hs.get_clock()
self.state = hs.get_state_handler()
self.store = hs.get_datastore()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def get_room_data(self, user_id=None, room_id=None,
@@ -164,9 +164,10 @@ class MessageHandler(object):
room_state = room_state[membership_event_id]
now = self.clock.time_msec()
defer.returnValue(
[serialize_event(c, now) for c in room_state.values()]
events = yield self._event_serializer.serialize_events(
room_state.values(), now,
)
defer.returnValue(events)
@defer.inlineCallbacks
def get_joined_members(self, requester, room_id):
@@ -600,6 +601,20 @@ class EventCreationHandler(object):
self.validator.validate_new(event)
# If this event is an annotation then we check that the sender
# can't annotate the same way twice (e.g. stops users from liking an
# event multiple times).
relation = event.content.get("m.relates_to", {})
if relation.get("rel_type") == RelationTypes.ANNOTATION:
relates_to = relation["event_id"]
aggregation_key = relation["key"]
already_exists = yield self.store.has_user_annotated_event(
relates_to, event.type, aggregation_key, event.sender,
)
if already_exists:
raise SynapseError(400, "Can't send same reaction twice")
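For context, a hedged example of the content this check inspects; an annotation relates to its target through rel_type m.annotation plus an aggregation key, so the (target event, key, sender) triple can be deduplicated:

reaction_content = {
    "m.relates_to": {
        "rel_type": "m.annotation",    # RelationTypes.ANNOTATION
        "event_id": "$target_event",   # the event being reacted to
        "key": "👍",                   # the aggregation key
    }
}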
logger.debug(
"Created event %s",
event.event_id,


@@ -20,7 +20,6 @@ from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.events.utils import serialize_event
from synapse.storage.state import StateFilter
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import ReadWriteLock
@@ -78,6 +77,7 @@ class PaginationHandler(object):
self._purges_in_progress_by_room = set()
# map from purge id to PurgeStatus
self._purges_by_id = {}
self._event_serializer = hs.get_event_client_serializer()
def start_purge_history(self, room_id, token,
delete_local_events=False):
@@ -278,18 +278,22 @@ class PaginationHandler(object):
time_now = self.clock.time_msec()
chunk = {
"chunk": [
serialize_event(e, time_now, as_client_event)
for e in events
],
"chunk": (
yield self._event_serializer.serialize_events(
events, time_now,
as_client_event=as_client_event,
)
),
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
}
if state:
chunk["state"] = [
serialize_event(e, time_now, as_client_event)
for e in state
]
chunk["state"] = (
yield self._event_serializer.serialize_events(
state, time_now,
as_client_event=as_client_event,
)
)
defer.returnValue(chunk)


@@ -19,7 +19,7 @@ import logging
from twisted.internet import defer
from synapse import types
from synapse.api.constants import LoginType
from synapse.api.constants import MAX_USERID_LENGTH, LoginType
from synapse.api.errors import (
AuthError,
Codes,
@@ -123,6 +123,15 @@ class RegistrationHandler(BaseHandler):
self.check_user_id_not_appservice_exclusive(user_id)
if len(user_id) > MAX_USERID_LENGTH:
raise SynapseError(
400,
"User ID may not be longer than %s characters" % (
MAX_USERID_LENGTH,
),
Codes.INVALID_USERNAME
)
users = yield self.store.get_users_by_id_case_insensitive(user_id)
if users:
if not guest_access_token:


@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -73,6 +74,7 @@ class RoomMemberHandler(object):
self.spam_checker = hs.get_spam_checker()
self._server_notices_mxid = self.config.server_notices_mxid
self._enable_lookup = hs.config.enable_3pid_lookup
self.allow_per_room_profiles = self.config.allow_per_room_profiles
# This is only used to get at ratelimit function, and
# maybe_kick_guest_users. It's fine there are multiple of these as
@@ -357,6 +359,13 @@ class RoomMemberHandler(object):
# later on.
content = dict(content)
if not self.allow_per_room_profiles:
# Strip profile data, knowing that new profile data will be added to the
# event's content in event_creation_handler.create_event() using the target's
# global profile.
content.pop("displayname", None)
content.pop("avatar_url", None)
effective_membership_state = action
if action in ["kick", "unban"]:
effective_membership_state = "leave"
@@ -935,7 +944,7 @@ class RoomMemberHandler(object):
}
if self.config.invite_3pid_guest:
guest_access_token, guest_user_id = yield self.get_or_register_3pid_guest(
guest_user_id, guest_access_token = yield self.get_or_register_3pid_guest(
requester=requester,
medium=medium,
address=address,


@@ -23,7 +23,6 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.api.filtering import Filter
from synapse.events.utils import serialize_event
from synapse.storage.state import StateFilter
from synapse.visibility import filter_events_for_client
@@ -36,6 +35,7 @@ class SearchHandler(BaseHandler):
def __init__(self, hs):
super(SearchHandler, self).__init__(hs)
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def get_old_rooms_from_upgraded_room(self, room_id):
@@ -401,14 +401,16 @@ class SearchHandler(BaseHandler):
time_now = self.clock.time_msec()
for context in contexts.values():
context["events_before"] = [
serialize_event(e, time_now)
for e in context["events_before"]
]
context["events_after"] = [
serialize_event(e, time_now)
for e in context["events_after"]
]
context["events_before"] = (
yield self._event_serializer.serialize_events(
context["events_before"], time_now,
)
)
context["events_after"] = (
yield self._event_serializer.serialize_events(
context["events_after"], time_now,
)
)
state_results = {}
if include_state:
@@ -422,14 +424,13 @@ class SearchHandler(BaseHandler):
# We're now about to serialize the events. We should not make any
# blocking calls after this. Otherwise the 'age' will be wrong
results = [
{
results = []
for e in allowed_events:
results.append({
"rank": rank_map[e.event_id],
"result": serialize_event(e, time_now),
"result": (yield self._event_serializer.serialize_event(e, time_now)),
"context": contexts.get(e.event_id, {}),
}
for e in allowed_events
]
})
rooms_cat_res = {
"results": results,
@@ -438,10 +439,13 @@ class SearchHandler(BaseHandler):
}
if state_results:
rooms_cat_res["state"] = {
room_id: [serialize_event(e, time_now) for e in state]
for room_id, state in state_results.items()
}
s = {}
for room_id, state in state_results.items():
s[room_id] = yield self._event_serializer.serialize_events(
state, time_now,
)
rooms_cat_res["state"] = s
if room_groups and "room_id" in group_keys:
rooms_cat_res.setdefault("groups", {})["room_id"] = room_groups

synapse/handlers/stats.py (new file, 325 lines)

@@ -0,0 +1,325 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.handlers.state_deltas import StateDeltasHandler
from synapse.metrics import event_processing_positions
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import UserID
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
class StatsHandler(StateDeltasHandler):
"""Handles keeping the *_stats tables updated with a simple time-series of
information about the users, rooms and media on the server, such that admins
have some idea of who is consuming their resources.
Heavily derived from UserDirectoryHandler
"""
def __init__(self, hs):
super(StatsHandler, self).__init__(hs)
self.hs = hs
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id
self.stats_bucket_size = hs.config.stats_bucket_size
# The current position in the current_state_delta stream
self.pos = None
# Guard to ensure we only process deltas one at a time
self._is_processing = False
if hs.config.stats_enabled:
self.notifier.add_replication_callback(self.notify_new_event)
# We kick this off so that we don't have to wait for a change before
# we start populating stats
self.clock.call_later(0, self.notify_new_event)
def notify_new_event(self):
"""Called when there may be more deltas to process
"""
if not self.hs.config.stats_enabled:
return
if self._is_processing:
return
@defer.inlineCallbacks
def process():
try:
yield self._unsafe_process()
finally:
self._is_processing = False
self._is_processing = True
run_as_background_process("stats.notify_new_event", process)
@defer.inlineCallbacks
def _unsafe_process(self):
# If self.pos is None then it means we haven't fetched it from the DB yet
if self.pos is None:
self.pos = yield self.store.get_stats_stream_pos()
# If still None then the initial background update hasn't happened yet
if self.pos is None:
defer.returnValue(None)
# Loop round handling deltas until we're up to date
while True:
with Measure(self.clock, "stats_delta"):
deltas = yield self.store.get_current_state_deltas(self.pos)
if not deltas:
return
logger.info("Handling %d state deltas", len(deltas))
yield self._handle_deltas(deltas)
self.pos = deltas[-1]["stream_id"]
yield self.store.update_stats_stream_pos(self.pos)
event_processing_positions.labels("stats").set(self.pos)
@defer.inlineCallbacks
def _handle_deltas(self, deltas):
"""
Called with the state deltas to process
"""
for delta in deltas:
typ = delta["type"]
state_key = delta["state_key"]
room_id = delta["room_id"]
event_id = delta["event_id"]
stream_id = delta["stream_id"]
prev_event_id = delta["prev_event_id"]
logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
token = yield self.store.get_earliest_token_for_room_stats(room_id)
# If the earliest token to begin from is larger than our current
# stream ID, skip processing this delta.
if token is not None and token >= stream_id:
logger.debug(
"Ignoring: %s as earlier than this room's initial ingestion event",
event_id,
)
continue
if event_id is None and prev_event_id is None:
# Errr...
continue
event_content = {}
if event_id is not None:
event_content = (yield self.store.get_event(event_id)).content or {}
# quantise time to the nearest bucket
now = yield self.store.get_received_ts(event_id)
now = (now // 1000 // self.stats_bucket_size) * self.stats_bucket_size
if typ == EventTypes.Member:
# we could use _get_key_change here but it's a bit inefficient
# given we're not testing for a specific result; might as well
# just grab the prev_membership and membership strings and
# compare them.
prev_event_content = {}
if prev_event_id is not None:
prev_event_content = (
yield self.store.get_event(prev_event_id)
).content
membership = event_content.get("membership", Membership.LEAVE)
prev_membership = prev_event_content.get("membership", Membership.LEAVE)
if prev_membership == membership:
continue
if prev_membership == Membership.JOIN:
yield self.store.update_stats_delta(
now, "room", room_id, "joined_members", -1
)
elif prev_membership == Membership.INVITE:
yield self.store.update_stats_delta(
now, "room", room_id, "invited_members", -1
)
elif prev_membership == Membership.LEAVE:
yield self.store.update_stats_delta(
now, "room", room_id, "left_members", -1
)
elif prev_membership == Membership.BAN:
yield self.store.update_stats_delta(
now, "room", room_id, "banned_members", -1
)
else:
err = "%s is not a valid prev_membership" % (repr(prev_membership),)
logger.error(err)
raise ValueError(err)
if membership == Membership.JOIN:
yield self.store.update_stats_delta(
now, "room", room_id, "joined_members", +1
)
elif membership == Membership.INVITE:
yield self.store.update_stats_delta(
now, "room", room_id, "invited_members", +1
)
elif membership == Membership.LEAVE:
yield self.store.update_stats_delta(
now, "room", room_id, "left_members", +1
)
elif membership == Membership.BAN:
yield self.store.update_stats_delta(
now, "room", room_id, "banned_members", +1
)
else:
err = "%s is not a valid membership" % (repr(membership),)
logger.error(err)
raise ValueError(err)
user_id = state_key
if self.is_mine_id(user_id):
# update user_stats as it's one of our users
public = yield self._is_public_room(room_id)
if membership == Membership.LEAVE:
yield self.store.update_stats_delta(
now,
"user",
user_id,
"public_rooms" if public else "private_rooms",
-1,
)
elif membership == Membership.JOIN:
yield self.store.update_stats_delta(
now,
"user",
user_id,
"public_rooms" if public else "private_rooms",
+1,
)
elif typ == EventTypes.Create:
# Newly created room. Add it with all blank portions.
yield self.store.update_room_state(
room_id,
{
"join_rules": None,
"history_visibility": None,
"encryption": None,
"name": None,
"topic": None,
"avatar": None,
"canonical_alias": None,
},
)
elif typ == EventTypes.JoinRules:
yield self.store.update_room_state(
room_id, {"join_rules": event_content.get("join_rule")}
)
is_public = yield self._get_key_change(
prev_event_id, event_id, "join_rule", JoinRules.PUBLIC
)
if is_public is not None:
yield self.update_public_room_stats(now, room_id, is_public)
elif typ == EventTypes.RoomHistoryVisibility:
yield self.store.update_room_state(
room_id,
{"history_visibility": event_content.get("history_visibility")},
)
is_public = yield self._get_key_change(
prev_event_id, event_id, "history_visibility", "world_readable"
)
if is_public is not None:
yield self.update_public_room_stats(now, room_id, is_public)
elif typ == EventTypes.Encryption:
yield self.store.update_room_state(
room_id, {"encryption": event_content.get("algorithm")}
)
elif typ == EventTypes.Name:
yield self.store.update_room_state(
room_id, {"name": event_content.get("name")}
)
elif typ == EventTypes.Topic:
yield self.store.update_room_state(
room_id, {"topic": event_content.get("topic")}
)
elif typ == EventTypes.RoomAvatar:
yield self.store.update_room_state(
room_id, {"avatar": event_content.get("url")}
)
elif typ == EventTypes.CanonicalAlias:
yield self.store.update_room_state(
room_id, {"canonical_alias": event_content.get("alias")}
)
@defer.inlineCallbacks
def update_public_room_stats(self, ts, room_id, is_public):
"""
Increment/decrement a user's number of public rooms when a room they are
in changes to/from public visibility.
Args:
ts (int): Timestamp in seconds
room_id (str)
is_public (bool)
"""
# For now, blindly iterate over all local users in the room so that
# we can handle the whole problem of copying buckets over as needed
user_ids = yield self.store.get_users_in_room(room_id)
for user_id in user_ids:
if self.hs.is_mine(UserID.from_string(user_id)):
yield self.store.update_stats_delta(
ts, "user", user_id, "public_rooms", +1 if is_public else -1
)
yield self.store.update_stats_delta(
ts, "user", user_id, "private_rooms", -1 if is_public else +1
)
@defer.inlineCallbacks
def _is_public_room(self, room_id):
join_rules = yield self.state.get_current_state(room_id, EventTypes.JoinRules)
history_visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility
)
if (join_rules and join_rules.content.get("join_rule") == JoinRules.PUBLIC) or (
(
history_visibility
and history_visibility.content.get("history_visibility")
== "world_readable"
)
):
defer.returnValue(True)
else:
defer.returnValue(False)
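As a worked example of the bucket quantisation used earlier in this handler (now = (now // 1000 // stats_bucket_size) * stats_bucket_size), assuming purely for illustration a bucket size of 86400, i.e. one day in seconds:

stats_bucket_size = 86400          # assumed: one day, in seconds
received_ts_ms = 1558540000123     # event timestamp in milliseconds
bucket = (received_ts_ms // 1000 // stats_bucket_size) * stats_bucket_size
# 1558540000123 ms -> 1558540000 s -> bucket 1558483200 (the preceding midnight UTC)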


@@ -934,7 +934,7 @@ class SyncHandler(object):
res = yield self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_users, _, _ = res
newly_joined_rooms, newly_joined_or_invited_users, _, _ = res
_, _, newly_left_rooms, newly_left_users = res
block_all_presence_data = (
@@ -943,7 +943,7 @@ class SyncHandler(object):
)
if self.hs_config.use_presence and not block_all_presence_data:
yield self._generate_sync_entry_for_presence(
sync_result_builder, newly_joined_rooms, newly_joined_users
sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
)
yield self._generate_sync_entry_for_to_device(sync_result_builder)
@@ -951,7 +951,7 @@ class SyncHandler(object):
device_lists = yield self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_users=newly_joined_users,
newly_joined_or_invited_users=newly_joined_or_invited_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
@@ -1036,7 +1036,8 @@ class SyncHandler(object):
@measure_func("_generate_sync_entry_for_device_list")
@defer.inlineCallbacks
def _generate_sync_entry_for_device_list(self, sync_result_builder,
newly_joined_rooms, newly_joined_users,
newly_joined_rooms,
newly_joined_or_invited_users,
newly_left_rooms, newly_left_users):
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
@@ -1050,7 +1051,7 @@ class SyncHandler(object):
# share a room with?
for room_id in newly_joined_rooms:
joined_users = yield self.state.get_current_users_in_room(room_id)
newly_joined_users.update(joined_users)
newly_joined_or_invited_users.update(joined_users)
for room_id in newly_left_rooms:
left_users = yield self.state.get_current_users_in_room(room_id)
@@ -1058,7 +1059,7 @@ class SyncHandler(object):
# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
changed.update(newly_joined_users)
changed.update(newly_joined_or_invited_users)
if not changed and not newly_left_users:
defer.returnValue(DeviceLists(
@@ -1176,7 +1177,7 @@ class SyncHandler(object):
@defer.inlineCallbacks
def _generate_sync_entry_for_presence(self, sync_result_builder, newly_joined_rooms,
newly_joined_users):
newly_joined_or_invited_users):
"""Generates the presence portion of the sync response. Populates the
`sync_result_builder` with the result.
@@ -1184,8 +1185,9 @@ class SyncHandler(object):
sync_result_builder(SyncResultBuilder)
newly_joined_rooms(list): List of rooms that the user has joined
since the last sync (or empty if an initial sync)
newly_joined_users(list): List of users that have joined rooms
since the last sync (or empty if an initial sync)
newly_joined_or_invited_users(list): List of users that have joined
or been invited to rooms since the last sync (or empty if an initial
sync)
"""
now_token = sync_result_builder.now_token
sync_config = sync_result_builder.sync_config
@@ -1211,7 +1213,7 @@ class SyncHandler(object):
"presence_key", presence_key
)
extra_users_ids = set(newly_joined_users)
extra_users_ids = set(newly_joined_or_invited_users)
for room_id in newly_joined_rooms:
users = yield self.state.get_current_users_in_room(room_id)
extra_users_ids.update(users)
@@ -1243,7 +1245,8 @@ class SyncHandler(object):
Returns:
Deferred(tuple): Returns a 4-tuple of
`(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
`(newly_joined_rooms, newly_joined_or_invited_users,
newly_left_rooms, newly_left_users)`
"""
user_id = sync_result_builder.sync_config.user.to_string()
block_all_room_ephemeral = (
@@ -1314,8 +1317,8 @@ class SyncHandler(object):
sync_result_builder.invited.extend(invited)
# Now we want to get any newly joined users
newly_joined_users = set()
# Now we want to get any newly joined or invited users
newly_joined_or_invited_users = set()
newly_left_users = set()
if since_token:
for joined_sync in sync_result_builder.joined:
@@ -1324,19 +1327,22 @@ class SyncHandler(object):
)
for event in it:
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
newly_joined_users.add(event.state_key)
if (
event.membership == Membership.JOIN or
event.membership == Membership.INVITE
):
newly_joined_or_invited_users.add(event.state_key)
else:
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership == Membership.JOIN:
newly_left_users.add(event.state_key)
newly_left_users -= newly_joined_users
newly_left_users -= newly_joined_or_invited_users
defer.returnValue((
newly_joined_rooms,
newly_joined_users,
newly_joined_or_invited_users,
newly_left_rooms,
newly_left_users,
))
@@ -1381,7 +1387,7 @@ class SyncHandler(object):
where:
room_entries is a list [RoomSyncResultBuilder]
invited_rooms is a list [InvitedSyncResult]
newly_joined rooms is a list[str] of room ids
newly_joined_rooms is a list[str] of room ids
newly_left_rooms is a list[str] of room ids
"""
user_id = sync_result_builder.sync_config.user.to_string()
@@ -1422,7 +1428,7 @@ class SyncHandler(object):
if room_id in sync_result_builder.joined_room_ids and non_joins:
# Always include if the user (re)joined the room, especially
# important so that device list changes are calculated correctly.
# If there are non join member events, but we are still in the room,
# If there are non-join member events, but we are still in the room,
# then the user must have left and joined
newly_joined_rooms.append(room_id)


@@ -165,7 +165,8 @@ class BlacklistingAgentWrapper(Agent):
ip_address, self._ip_whitelist, self._ip_blacklist
):
logger.info(
"Blocking access to %s because of blacklist" % (ip_address,)
"Blocking access to %s due to blacklist" %
(ip_address,)
)
e = SynapseError(403, "IP address blocked by IP blacklist entry")
return defer.fail(Failure(e))
@@ -263,9 +264,6 @@ class SimpleHttpClient(object):
uri (str): URI to query.
data (bytes): Data to send in the request body, if applicable.
headers (t.w.http_headers.Headers): Request headers.
Raises:
SynapseError: If the IP is blacklisted.
"""
# A small wrapper around self.agent.request() so we can easily attach
# counters to it


@@ -27,9 +27,11 @@ import treq
from canonicaljson import encode_canonical_json
from prometheus_client import Counter
from signedjson.sign import sign_json
from zope.interface import implementer
from twisted.internet import defer, protocol
from twisted.internet.error import DNSLookupError
from twisted.internet.interfaces import IReactorPluggableNameResolver
from twisted.internet.task import _EPSILON, Cooperator
from twisted.web._newclient import ResponseDone
from twisted.web.http_headers import Headers
@@ -44,6 +46,7 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.http import QuieterFileBodyProducer
from synapse.http.client import BlacklistingAgentWrapper, IPBlacklistingResolver
from synapse.http.federation.matrix_federation_agent import MatrixFederationAgent
from synapse.util.async_helpers import timeout_deferred
from synapse.util.logcontext import make_deferred_yieldable
@@ -172,19 +175,44 @@ class MatrixFederationHttpClient(object):
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
reactor = hs.get_reactor()
real_reactor = hs.get_reactor()
# We need to use a DNS resolver which filters out blacklisted IP
# addresses, to prevent DNS rebinding.
nameResolver = IPBlacklistingResolver(
real_reactor, None, hs.config.federation_ip_range_blacklist,
)
@implementer(IReactorPluggableNameResolver)
class Reactor(object):
def __getattr__(_self, attr):
if attr == "nameResolver":
return nameResolver
else:
return getattr(real_reactor, attr)
self.reactor = Reactor()
self.agent = MatrixFederationAgent(
hs.get_reactor(),
self.reactor,
tls_client_options_factory,
)
# Use a BlacklistingAgentWrapper to prevent circumventing the IP
# blacklist via IP literals in server names
self.agent = BlacklistingAgentWrapper(
self.agent, self.reactor,
ip_blacklist=hs.config.federation_ip_range_blacklist,
)
self.clock = hs.get_clock()
self._store = hs.get_datastore()
self.version_string_bytes = hs.version_string.encode('ascii')
self.default_timeout = 60
def schedule(x):
reactor.callLater(_EPSILON, x)
self.reactor.callLater(_EPSILON, x)
self._cooperator = Cooperator(scheduler=schedule)
@@ -370,7 +398,7 @@ class MatrixFederationHttpClient(object):
request_deferred = timeout_deferred(
request_deferred,
timeout=_sec_timeout,
reactor=self.hs.get_reactor(),
reactor=self.reactor,
)
response = yield request_deferred
@@ -397,7 +425,7 @@ class MatrixFederationHttpClient(object):
d = timeout_deferred(
d,
timeout=_sec_timeout,
reactor=self.hs.get_reactor(),
reactor=self.reactor,
)
try:
@@ -586,7 +614,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
self.hs.get_reactor(), self.default_timeout, request, response,
self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -645,7 +673,7 @@ class MatrixFederationHttpClient(object):
_sec_timeout = self.default_timeout
body = yield _handle_json_response(
self.hs.get_reactor(), _sec_timeout, request, response,
self.reactor, _sec_timeout, request, response,
)
defer.returnValue(body)
@@ -704,7 +732,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
self.hs.get_reactor(), self.default_timeout, request, response,
self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -753,7 +781,7 @@ class MatrixFederationHttpClient(object):
)
body = yield _handle_json_response(
self.hs.get_reactor(), self.default_timeout, request, response,
self.reactor, self.default_timeout, request, response,
)
defer.returnValue(body)
@@ -801,7 +829,7 @@ class MatrixFederationHttpClient(object):
try:
d = _readBodyToFile(response, output_stream, max_size)
d.addTimeout(self.default_timeout, self.hs.get_reactor())
d.addTimeout(self.default_timeout, self.reactor)
length = yield make_deferred_yieldable(d)
except Exception as e:
logger.warn(


@@ -16,7 +16,12 @@
import logging
from pkg_resources import DistributionNotFound, VersionConflict, get_distribution
from pkg_resources import (
DistributionNotFound,
Requirement,
VersionConflict,
get_provider,
)
logger = logging.getLogger(__name__)
@@ -53,7 +58,7 @@ REQUIREMENTS = [
"pyasn1-modules>=0.0.7",
"daemonize>=2.3.1",
"bcrypt>=3.1.0",
"pillow>=3.1.2",
"pillow>=4.3.0",
"sortedcontainers>=1.4.4",
"psutil>=2.0.0",
"pymacaroons>=0.13.0",
@@ -69,14 +74,6 @@ REQUIREMENTS = [
"attrs>=17.4.0",
"netaddr>=0.7.18",
# requests is a transitive dep of treq, and urlib3 is a transitive dep
# of requests, as well as of sentry-sdk.
#
# As of requests 2.21, requests does not yet support urllib3 1.25.
# (If we do not pin it here, pip will give us the latest urllib3
# due to the dep via sentry-sdk.)
"urllib3<1.25",
]
CONDITIONAL_REQUIREMENTS = {
@@ -91,7 +88,13 @@ CONDITIONAL_REQUIREMENTS = {
# ACME support is required to provision TLS certificates from authorities
# that use the protocol, such as Let's Encrypt.
"acme": ["txacme>=0.9.2"],
"acme": [
"txacme>=0.9.2",
# txacme depends on eliot. Eliot 1.8.0 is incompatible with
# python 3.5.2, as per https://github.com/itamarst/eliot/issues/418
'eliot<1.8.0;python_version<"3.5.3"',
],
"saml2": ["pysaml2>=4.5.0"],
"systemd": ["systemd-python>=231"],
@@ -125,10 +128,10 @@ class DependencyException(Exception):
@property
def dependencies(self):
for i in self.args[0]:
yield '"' + i + '"'
yield "'" + i + "'"
def check_requirements(for_feature=None, _get_distribution=get_distribution):
def check_requirements(for_feature=None):
deps_needed = []
errors = []
@@ -139,7 +142,7 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
for dependency in reqs:
try:
_get_distribution(dependency)
_check_requirement(dependency)
except VersionConflict as e:
deps_needed.append(dependency)
errors.append(
@@ -157,7 +160,7 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
for dependency in OPTS:
try:
_get_distribution(dependency)
_check_requirement(dependency)
except VersionConflict as e:
deps_needed.append(dependency)
errors.append(
@@ -175,6 +178,23 @@ def check_requirements(for_feature=None, _get_distribution=get_distribution):
raise DependencyException(deps_needed)
def _check_requirement(dependency_string):
"""Parses a dependency string, and checks if the specified requirement is installed
Raises:
VersionConflict if the requirement is installed, but with the wrong version
DistributionNotFound if nothing is found to provide the requirement
"""
req = Requirement.parse(dependency_string)
# first check if the markers specify that this requirement needs installing
if req.marker is not None and not req.marker.evaluate():
# not required for this environment
return
get_provider(req)
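A minimal sketch of how the environment marker on the new eliot pin behaves under this helper, assuming a setuptools recent enough to expose PEP 508 markers via pkg_resources (which the helper above already relies on):

from pkg_resources import Requirement

req = Requirement.parse('eliot<1.8.0;python_version<"3.5.3"')
if req.marker is not None and not req.marker.evaluate():
    print("marker not satisfied on this interpreter; requirement skipped")
else:
    print("requirement applies:", req)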
if __name__ == "__main__":
import sys


@@ -23,6 +23,7 @@ from synapse.replication.tcp.streams.events import (
from synapse.storage.event_federation import EventFederationWorkerStore
from synapse.storage.event_push_actions import EventPushActionsWorkerStore
from synapse.storage.events_worker import EventsWorkerStore
from synapse.storage.relations import RelationsWorkerStore
from synapse.storage.roommember import RoomMemberWorkerStore
from synapse.storage.signatures import SignatureWorkerStore
from synapse.storage.state import StateGroupWorkerStore
@@ -52,6 +53,7 @@ class SlavedEventStore(EventFederationWorkerStore,
EventsWorkerStore,
SignatureWorkerStore,
UserErasureWorkerStore,
RelationsWorkerStore,
BaseSlavedStore):
def __init__(self, db_conn, hs):
@@ -89,7 +91,7 @@ class SlavedEventStore(EventFederationWorkerStore,
for row in rows:
self.invalidate_caches_for_event(
-token, row.event_id, row.room_id, row.type, row.state_key,
row.redacts,
row.redacts, row.relates_to,
backfilled=True,
)
return super(SlavedEventStore, self).process_replication_rows(
@@ -102,7 +104,7 @@ class SlavedEventStore(EventFederationWorkerStore,
if row.type == EventsStreamEventRow.TypeId:
self.invalidate_caches_for_event(
token, data.event_id, data.room_id, data.type, data.state_key,
data.redacts,
data.redacts, data.relates_to,
backfilled=False,
)
elif row.type == EventsStreamCurrentStateRow.TypeId:
@@ -114,7 +116,8 @@ class SlavedEventStore(EventFederationWorkerStore,
raise Exception("Unknown events stream row type %s" % (row.type, ))
def invalidate_caches_for_event(self, stream_ordering, event_id, room_id,
etype, state_key, redacts, backfilled):
etype, state_key, redacts, relates_to,
backfilled):
self._invalidate_get_event_cache(event_id)
self.get_latest_event_ids_in_room.invalidate((room_id,))
@@ -136,3 +139,8 @@ class SlavedEventStore(EventFederationWorkerStore,
state_key, stream_ordering
)
self.get_invited_rooms_for_user.invalidate((state_key,))
if relates_to:
self.get_relations_for_event.invalidate_many((relates_to,))
self.get_aggregation_groups_for_event.invalidate_many((relates_to,))
self.get_applicable_edit.invalidate((relates_to,))


@@ -32,6 +32,7 @@ BackfillStreamRow = namedtuple("BackfillStreamRow", (
"type", # str
"state_key", # str, optional
"redacts", # str, optional
"relates_to", # str, optional
))
PresenceStreamRow = namedtuple("PresenceStreamRow", (
"user_id", # str


@@ -80,11 +80,12 @@ class BaseEventsStreamRow(object):
class EventsStreamEventRow(BaseEventsStreamRow):
TypeId = "ev"
event_id = attr.ib() # str
room_id = attr.ib() # str
type = attr.ib() # str
state_key = attr.ib() # str, optional
redacts = attr.ib() # str, optional
event_id = attr.ib() # str
room_id = attr.ib() # str
type = attr.ib() # str
state_key = attr.ib() # str, optional
redacts = attr.ib() # str, optional
relates_to = attr.ib() # str, optional
@attr.s(slots=True, frozen=True)


@@ -44,6 +44,7 @@ from synapse.rest.client.v2_alpha import (
read_marker,
receipts,
register,
relations,
report_event,
room_keys,
room_upgrade_rest_servlet,
@@ -115,6 +116,7 @@ class ClientRestResource(JsonResource):
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)
account_validity.register_servlets(hs, client_resource)
relations.register_servlets(hs, client_resource)
# moving to /_synapse/admin
synapse.rest.admin.register_servlets_for_client_rest_resource(


@@ -19,7 +19,7 @@
import logging
import re
from synapse.api.urls import CLIENT_PREFIX
from synapse.api.urls import CLIENT_API_PREFIX
from synapse.http.servlet import RestServlet
from synapse.rest.client.transactions import HttpTransactionCache
@@ -36,12 +36,12 @@ def client_path_patterns(path_regex, releases=(0,), include_in_unstable=True):
Returns:
SRE_Pattern
"""
patterns = [re.compile("^" + CLIENT_PREFIX + path_regex)]
patterns = [re.compile("^" + CLIENT_API_PREFIX + "/api/v1" + path_regex)]
if include_in_unstable:
unstable_prefix = CLIENT_PREFIX.replace("/api/v1", "/unstable")
unstable_prefix = CLIENT_API_PREFIX + "/unstable"
patterns.append(re.compile("^" + unstable_prefix + path_regex))
for release in releases:
new_prefix = CLIENT_PREFIX.replace("/api/v1", "/r%d" % release)
new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
patterns.append(re.compile("^" + new_prefix + path_regex))
return patterns


@@ -19,7 +19,6 @@ import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.events.utils import serialize_event
from synapse.streams.config import PaginationConfig
from .base import ClientV1RestServlet, client_path_patterns
@@ -84,6 +83,7 @@ class EventRestServlet(ClientV1RestServlet):
super(EventRestServlet, self).__init__(hs)
self.clock = hs.get_clock()
self.event_handler = hs.get_event_handler()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request, event_id):
@@ -92,7 +92,8 @@ class EventRestServlet(ClientV1RestServlet):
time_now = self.clock.time_msec()
if event:
defer.returnValue((200, serialize_event(event, time_now)))
event = yield self._event_serializer.serialize_event(event, time_now)
defer.returnValue((200, event))
else:
defer.returnValue((404, "Event not found."))


@@ -26,7 +26,7 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.filtering import Filter
from synapse.events.utils import format_event_for_client_v2, serialize_event
from synapse.events.utils import format_event_for_client_v2
from synapse.http.servlet import (
assert_params_in_dict,
parse_integer,
@@ -201,6 +201,11 @@ class RoomSendEventRestServlet(ClientV1RestServlet):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
content = parse_json_object_from_request(request)
# Pull out the relationship early if the client sent us something
# which cannot possibly be processed by us.
if content.get("m.relates_to", "not None") is None:
del content["m.relates_to"]
event_dict = {
"type": event_type,
"content": content,
@@ -537,6 +542,7 @@ class RoomEventServlet(ClientV1RestServlet):
super(RoomEventServlet, self).__init__(hs)
self.clock = hs.get_clock()
self.event_handler = hs.get_event_handler()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request, room_id, event_id):
@@ -545,7 +551,8 @@ class RoomEventServlet(ClientV1RestServlet):
time_now = self.clock.time_msec()
if event:
defer.returnValue((200, serialize_event(event, time_now)))
event = yield self._event_serializer.serialize_event(event, time_now)
defer.returnValue((200, event))
else:
defer.returnValue((404, "Event not found."))
@@ -559,6 +566,7 @@ class RoomEventContextServlet(ClientV1RestServlet):
super(RoomEventContextServlet, self).__init__(hs)
self.clock = hs.get_clock()
self.room_context_handler = hs.get_room_context_handler()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request, room_id, event_id):
@@ -588,16 +596,18 @@ class RoomEventContextServlet(ClientV1RestServlet):
)
time_now = self.clock.time_msec()
results["events_before"] = [
serialize_event(event, time_now) for event in results["events_before"]
]
results["event"] = serialize_event(results["event"], time_now)
results["events_after"] = [
serialize_event(event, time_now) for event in results["events_after"]
]
results["state"] = [
serialize_event(event, time_now) for event in results["state"]
]
results["events_before"] = yield self._event_serializer.serialize_events(
results["events_before"], time_now,
)
results["event"] = yield self._event_serializer.serialize_event(
results["event"], time_now,
)
results["events_after"] = yield self._event_serializer.serialize_events(
results["events_after"], time_now,
)
results["state"] = yield self._event_serializer.serialize_events(
results["state"], time_now,
)
defer.returnValue((200, results))


@@ -21,13 +21,12 @@ import re
from twisted.internet import defer
from synapse.api.errors import InteractiveAuthIncompleteError
from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX
from synapse.api.urls import CLIENT_API_PREFIX
logger = logging.getLogger(__name__)
def client_v2_patterns(path_regex, releases=(0,),
v2_alpha=True,
unstable=True):
"""Creates a regex compiled client path with the correct client path
prefix.
@@ -39,13 +38,11 @@ def client_v2_patterns(path_regex, releases=(0,),
SRE_Pattern
"""
patterns = []
if v2_alpha:
patterns.append(re.compile("^" + CLIENT_V2_ALPHA_PREFIX + path_regex))
if unstable:
unstable_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/unstable")
unstable_prefix = CLIENT_API_PREFIX + "/unstable"
patterns.append(re.compile("^" + unstable_prefix + path_regex))
for release in releases:
new_prefix = CLIENT_V2_ALPHA_PREFIX.replace("/v2_alpha", "/r%d" % release)
new_prefix = CLIENT_API_PREFIX + "/r%d" % (release,)
patterns.append(re.compile("^" + new_prefix + path_regex))
return patterns
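As an illustration of the paths this now produces, assuming CLIENT_API_PREFIX is "/_matrix/client" as defined in synapse.api.urls:

CLIENT_API_PREFIX = "/_matrix/client"   # assumed value from synapse.api.urls
patterns = [
    "^" + CLIENT_API_PREFIX + "/unstable" + "/sync$",   # ^/_matrix/client/unstable/sync$
    "^" + CLIENT_API_PREFIX + "/r0" + "/sync$",         # ^/_matrix/client/r0/sync$
]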


@@ -19,7 +19,7 @@ from twisted.internet import defer
from synapse.api.constants import LoginType
from synapse.api.errors import SynapseError
from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX
from synapse.api.urls import CLIENT_API_PREFIX
from synapse.http.server import finish_request
from synapse.http.servlet import RestServlet, parse_string
@@ -139,8 +139,8 @@ class AuthRestServlet(RestServlet):
if stagetype == LoginType.RECAPTCHA:
html = RECAPTCHA_TEMPLATE % {
'session': session,
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA
'myurl': "%s/r0/auth/%s/fallback/web" % (
CLIENT_API_PREFIX, LoginType.RECAPTCHA
),
'sitekey': self.hs.config.recaptcha_public_key,
}
@@ -159,8 +159,8 @@ class AuthRestServlet(RestServlet):
self.hs.config.public_baseurl,
self.hs.config.user_consent_version,
),
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
'myurl': "%s/r0/auth/%s/fallback/web" % (
CLIENT_API_PREFIX, LoginType.TERMS
),
}
html_bytes = html.encode("utf8")
@@ -203,8 +203,8 @@ class AuthRestServlet(RestServlet):
else:
html = RECAPTCHA_TEMPLATE % {
'session': session,
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.RECAPTCHA
'myurl': "%s/r0/auth/%s/fallback/web" % (
CLIENT_API_PREFIX, LoginType.RECAPTCHA
),
'sitekey': self.hs.config.recaptcha_public_key,
}
@@ -240,8 +240,8 @@ class AuthRestServlet(RestServlet):
self.hs.config.public_baseurl,
self.hs.config.user_consent_version,
),
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
'myurl': "%s/r0/auth/%s/fallback/web" % (
CLIENT_API_PREFIX, LoginType.TERMS
),
}
html_bytes = html.encode("utf8")


@@ -30,7 +30,7 @@ logger = logging.getLogger(__name__)
class DevicesRestServlet(RestServlet):
PATTERNS = client_v2_patterns("/devices$", v2_alpha=False)
PATTERNS = client_v2_patterns("/devices$")
def __init__(self, hs):
"""
@@ -56,7 +56,7 @@ class DeleteDevicesRestServlet(RestServlet):
API for bulk deletion of devices. Accepts a JSON object with a devices
key which lists the device_ids to delete. Requires user interactive auth.
"""
PATTERNS = client_v2_patterns("/delete_devices", v2_alpha=False)
PATTERNS = client_v2_patterns("/delete_devices")
def __init__(self, hs):
super(DeleteDevicesRestServlet, self).__init__()
@@ -95,7 +95,7 @@ class DeleteDevicesRestServlet(RestServlet):
class DeviceRestServlet(RestServlet):
PATTERNS = client_v2_patterns("/devices/(?P<device_id>[^/]*)$", v2_alpha=False)
PATTERNS = client_v2_patterns("/devices/(?P<device_id>[^/]*)$")
def __init__(self, hs):
"""


@@ -17,10 +17,7 @@ import logging
from twisted.internet import defer
from synapse.events.utils import (
format_event_for_client_v2_without_room_id,
serialize_event,
)
from synapse.events.utils import format_event_for_client_v2_without_room_id
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from ._base import client_v2_patterns
@@ -36,6 +33,7 @@ class NotificationsServlet(RestServlet):
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request):
@@ -69,11 +67,11 @@ class NotificationsServlet(RestServlet):
"profile_tag": pa["profile_tag"],
"actions": pa["actions"],
"ts": pa["received_ts"],
"event": serialize_event(
"event": (yield self._event_serializer.serialize_event(
notif_events[pa["event_id"]],
self.clock.time_msec(),
event_format=format_event_for_client_v2_without_room_id,
),
)),
}
if pa["room_id"] not in receipts_by_room:


@@ -31,6 +31,7 @@ from synapse.api.errors import (
SynapseError,
UnrecognizedRequestError,
)
from synapse.config.ratelimiting import FederationRateLimitConfig
from synapse.config.server import is_threepid_reserved
from synapse.http.servlet import (
RestServlet,
@@ -153,16 +154,18 @@ class UsernameAvailabilityRestServlet(RestServlet):
self.registration_handler = hs.get_registration_handler()
self.ratelimiter = FederationRateLimiter(
hs.get_clock(),
# Time window of 2s
window_size=2000,
# Artificially delay requests if rate > sleep_limit/window_size
sleep_limit=1,
# Amount of artificial delay to apply
sleep_msec=1000,
# Error with 429 if more than reject_limit requests are queued
reject_limit=1,
# Allow 1 request at a time
concurrent_requests=1,
FederationRateLimitConfig(
# Time window of 2s
window_size=2000,
# Artificially delay requests if rate > sleep_limit/window_size
sleep_limit=1,
# Amount of artificial delay to apply
sleep_msec=1000,
# Error with 429 if more than reject_limit requests are queued
reject_limit=1,
# Allow 1 request at a time
concurrent_requests=1,
)
)
@defer.inlineCallbacks
@@ -345,18 +348,22 @@ class RegisterRestServlet(RestServlet):
if self.hs.config.enable_registration_captcha:
# only support 3PIDless registration if no 3PIDs are required
if not require_email and not require_msisdn:
flows.extend([[LoginType.RECAPTCHA]])
# Also add a dummy flow here, otherwise if a client completes
# recaptcha first we'll assume they were going for this flow
# and complete the request, when they could have been trying to
# complete one of the flows with email/msisdn auth.
flows.extend([[LoginType.RECAPTCHA, LoginType.DUMMY]])
# only support the email-only flow if we don't require MSISDN 3PIDs
if not require_msisdn:
flows.extend([[LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA]])
flows.extend([[LoginType.RECAPTCHA, LoginType.EMAIL_IDENTITY]])
if show_msisdn:
# only support the MSISDN-only flow if we don't require email 3PIDs
if not require_email:
flows.extend([[LoginType.MSISDN, LoginType.RECAPTCHA]])
flows.extend([[LoginType.RECAPTCHA, LoginType.MSISDN]])
# always let users provide both MSISDN & email
flows.extend([
[LoginType.MSISDN, LoginType.EMAIL_IDENTITY, LoginType.RECAPTCHA],
[LoginType.RECAPTCHA, LoginType.MSISDN, LoginType.EMAIL_IDENTITY],
])
else:
# only support 3PIDless registration if no 3PIDs are required
@@ -379,7 +386,15 @@ class RegisterRestServlet(RestServlet):
if self.hs.config.user_consent_at_registration:
new_flows = []
for flow in flows:
flow.append(LoginType.TERMS)
inserted = False
# m.login.terms should go near the end but before msisdn or email auth
for i, stage in enumerate(flow):
if stage == LoginType.EMAIL_IDENTITY or stage == LoginType.MSISDN:
flow.insert(i, LoginType.TERMS)
inserted = True
break
if not inserted:
flow.append(LoginType.TERMS)
flows.extend(new_flows)
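The insertion rule above can be illustrated with a small standalone sketch (the helper name is made up for illustration): m.login.terms is placed just before the first email or msisdn stage, and appended if the flow has neither.

def add_terms_stage(flow):
    # Insert "m.login.terms" before the first 3PID stage, else append it.
    threepid_stages = ("m.login.email.identity", "m.login.msisdn")
    for i, stage in enumerate(flow):
        if stage in threepid_stages:
            flow.insert(i, "m.login.terms")
            return flow
    flow.append("m.login.terms")
    return flow

# ["m.login.recaptcha", "m.login.email.identity"]
# becomes ["m.login.recaptcha", "m.login.terms", "m.login.email.identity"]
add_terms_stage(["m.login.recaptcha", "m.login.email.identity"])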
auth_result, params, session_id = yield self.auth_handler.check_auth(
@@ -391,13 +406,6 @@ class RegisterRestServlet(RestServlet):
# the user-facing checks will probably already have happened in
# /register/email/requestToken when we requested a 3pid, but that's not
# guaranteed.
#
# Also check that we're not trying to register a 3pid that's already
# been registered.
#
# This has probably happened in /register/email/requestToken as well,
# but if a user hits this endpoint twice then clicks on each link from
# the two activation emails, they would register the same 3pid twice.
if auth_result:
for login_type in [LoginType.EMAIL_IDENTITY, LoginType.MSISDN]:
@@ -413,17 +421,6 @@ class RegisterRestServlet(RestServlet):
Codes.THREEPID_DENIED,
)
existingUid = yield self.store.get_user_id_by_threepid(
medium, address,
)
if existingUid is not None:
raise SynapseError(
400,
"%s is already in use" % medium,
Codes.THREEPID_IN_USE,
)
if registered_user_id is not None:
logger.info(
"Already registered user ID %r for this session",
@@ -446,6 +443,28 @@ class RegisterRestServlet(RestServlet):
if auth_result:
threepid = auth_result.get(LoginType.EMAIL_IDENTITY)
# Also check that we're not trying to register a 3pid that's already
# been registered.
#
# This has probably happened in /register/email/requestToken as well,
# but if a user hits this endpoint twice then clicks on each link from
# the two activation emails, they would register the same 3pid twice.
for login_type in [LoginType.EMAIL_IDENTITY, LoginType.MSISDN]:
if login_type in auth_result:
medium = auth_result[login_type]['medium']
address = auth_result[login_type]['address']
existingUid = yield self.store.get_user_id_by_threepid(
medium, address,
)
if existingUid is not None:
raise SynapseError(
400,
"%s is already in use" % medium,
Codes.THREEPID_IN_USE,
)
(registered_user_id, _) = yield self.registration_handler.register(
localpart=desired_username,
password=new_password,

View File

@@ -0,0 +1,338 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This class implements the proposed relation APIs from MSC 1849.
Since the MSC has not been approved all APIs here are unstable and may change at
any time to reflect changes in the MSC.
"""
import logging
from twisted.internet import defer
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.errors import SynapseError
from synapse.http.servlet import (
RestServlet,
parse_integer,
parse_json_object_from_request,
parse_string,
)
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.storage.relations import AggregationPaginationToken, RelationPaginationToken
from ._base import client_v2_patterns
logger = logging.getLogger(__name__)
class RelationSendServlet(RestServlet):
"""Helper API for sending events that have relation data.
Example API shape to send a 👍 reaction to a room:
POST /rooms/!foo/send_relation/$bar/m.annotation/m.reaction?key=%F0%9F%91%8D
{}
{
"event_id": "$foobar"
}
"""
PATTERN = (
"/rooms/(?P<room_id>[^/]*)/send_relation"
"/(?P<parent_id>[^/]*)/(?P<relation_type>[^/]*)/(?P<event_type>[^/]*)"
)
def __init__(self, hs):
super(RelationSendServlet, self).__init__()
self.auth = hs.get_auth()
self.event_creation_handler = hs.get_event_creation_handler()
self.txns = HttpTransactionCache(hs)
def register(self, http_server):
http_server.register_paths(
"POST",
client_v2_patterns(self.PATTERN + "$", releases=()),
self.on_PUT_or_POST,
)
http_server.register_paths(
"PUT",
client_v2_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()),
self.on_PUT,
)
def on_PUT(self, request, *args, **kwargs):
return self.txns.fetch_or_execute_request(
request, self.on_PUT_or_POST, request, *args, **kwargs
)
@defer.inlineCallbacks
def on_PUT_or_POST(
self, request, room_id, parent_id, relation_type, event_type, txn_id=None
):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
if event_type == EventTypes.Member:
# Adding relations to a membership event is meaningless, so we just deny it
# at the CS API rather than trying to handle it correctly.
raise SynapseError(400, "Cannot send member events with relations")
content = parse_json_object_from_request(request)
aggregation_key = parse_string(request, "key", encoding="utf-8")
content["m.relates_to"] = {
"event_id": parent_id,
"key": aggregation_key,
"rel_type": relation_type,
}
event_dict = {
"type": event_type,
"content": content,
"room_id": room_id,
"sender": requester.user.to_string(),
}
event = yield self.event_creation_handler.create_and_send_nonmember_event(
requester, event_dict=event_dict, txn_id=txn_id
)
defer.returnValue((200, {"event_id": event.event_id}))
class RelationPaginationServlet(RestServlet):
"""API to paginate relations on an event by topological ordering, optionally
filtered by relation type and event type.
"""
PATTERNS = client_v2_patterns(
"/rooms/(?P<room_id>[^/]*)/relations/(?P<parent_id>[^/]*)"
"(/(?P<relation_type>[^/]*)(/(?P<event_type>[^/]*))?)?$",
releases=(),
)
def __init__(self, hs):
super(RelationPaginationServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.clock = hs.get_clock()
self._event_serializer = hs.get_event_client_serializer()
self.event_handler = hs.get_event_handler()
@defer.inlineCallbacks
def on_GET(self, request, room_id, parent_id, relation_type=None, event_type=None):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
yield self.auth.check_in_room_or_world_readable(
room_id, requester.user.to_string()
)
# This checks that a) the event exists and b) the user is allowed to
# view it.
yield self.event_handler.get_event(requester.user, room_id, parent_id)
limit = parse_integer(request, "limit", default=5)
from_token = parse_string(request, "from")
to_token = parse_string(request, "to")
if from_token:
from_token = RelationPaginationToken.from_string(from_token)
if to_token:
to_token = RelationPaginationToken.from_string(to_token)
result = yield self.store.get_relations_for_event(
event_id=parent_id,
relation_type=relation_type,
event_type=event_type,
limit=limit,
from_token=from_token,
to_token=to_token,
)
events = yield self.store.get_events_as_list(
[c["event_id"] for c in result.chunk]
)
now = self.clock.time_msec()
events = yield self._event_serializer.serialize_events(events, now)
return_value = result.to_dict()
return_value["chunk"] = events
defer.returnValue((200, return_value))
class RelationAggregationPaginationServlet(RestServlet):
"""API to paginate aggregation groups of relations, e.g. paginate the
types and counts of the reactions on the events.
Example request and response:
GET /rooms/{room_id}/aggregations/{parent_id}
{
chunk: [
{
"type": "m.reaction",
"key": "👍",
"count": 3
}
]
}
"""
PATTERNS = client_v2_patterns(
"/rooms/(?P<room_id>[^/]*)/aggregations/(?P<parent_id>[^/]*)"
"(/(?P<relation_type>[^/]*)(/(?P<event_type>[^/]*))?)?$",
releases=(),
)
def __init__(self, hs):
super(RelationAggregationPaginationServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.event_handler = hs.get_event_handler()
@defer.inlineCallbacks
def on_GET(self, request, room_id, parent_id, relation_type=None, event_type=None):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
yield self.auth.check_in_room_or_world_readable(
room_id, requester.user.to_string()
)
# This checks that a) the event exists and b) the user is allowed to
# view it.
yield self.event_handler.get_event(requester.user, room_id, parent_id)
if relation_type not in (RelationTypes.ANNOTATION, None):
raise SynapseError(400, "Relation type must be 'annotation'")
limit = parse_integer(request, "limit", default=5)
from_token = parse_string(request, "from")
to_token = parse_string(request, "to")
if from_token:
from_token = AggregationPaginationToken.from_string(from_token)
if to_token:
to_token = AggregationPaginationToken.from_string(to_token)
res = yield self.store.get_aggregation_groups_for_event(
event_id=parent_id,
event_type=event_type,
limit=limit,
from_token=from_token,
to_token=to_token,
)
defer.returnValue((200, res.to_dict()))
class RelationAggregationGroupPaginationServlet(RestServlet):
"""API to paginate within an aggregation group of relations, e.g. paginate
all the 👍 reactions on an event.
Example request and response:
GET /rooms/{room_id}/aggregations/{parent_id}/m.annotation/m.reaction/👍
{
chunk: [
{
"type": "m.reaction",
"content": {
"m.relates_to": {
"rel_type": "m.annotation",
"key": "👍"
}
}
},
...
]
}
"""
PATTERNS = client_v2_patterns(
"/rooms/(?P<room_id>[^/]*)/aggregations/(?P<parent_id>[^/]*)"
"/(?P<relation_type>[^/]*)/(?P<event_type>[^/]*)/(?P<key>[^/]*)$",
releases=(),
)
def __init__(self, hs):
super(RelationAggregationGroupPaginationServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.clock = hs.get_clock()
self._event_serializer = hs.get_event_client_serializer()
self.event_handler = hs.get_event_handler()
@defer.inlineCallbacks
def on_GET(self, request, room_id, parent_id, relation_type, event_type, key):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
yield self.auth.check_in_room_or_world_readable(
room_id, requester.user.to_string()
)
# This checks that a) the event exists and b) the user is allowed to
# view it.
yield self.event_handler.get_event(requester.user, room_id, parent_id)
if relation_type != RelationTypes.ANNOTATION:
raise SynapseError(400, "Relation type must be 'annotation'")
limit = parse_integer(request, "limit", default=5)
from_token = parse_string(request, "from")
to_token = parse_string(request, "to")
if from_token:
from_token = RelationPaginationToken.from_string(from_token)
if to_token:
to_token = RelationPaginationToken.from_string(to_token)
result = yield self.store.get_relations_for_event(
event_id=parent_id,
relation_type=relation_type,
event_type=event_type,
aggregation_key=key,
limit=limit,
from_token=from_token,
to_token=to_token,
)
events = yield self.store.get_events_as_list(
[c["event_id"] for c in result.chunk]
)
now = self.clock.time_msec()
events = yield self._event_serializer.serialize_events(events, now)
return_value = result.to_dict()
return_value["chunk"] = events
defer.returnValue((200, return_value))
def register_servlets(hs, http_server):
RelationSendServlet(hs).register(http_server)
RelationPaginationServlet(hs).register(http_server)
RelationAggregationPaginationServlet(hs).register(http_server)
RelationAggregationGroupPaginationServlet(hs).register(http_server)
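For orientation, a hedged client-side sketch of the send-relation endpoint defined above; the homeserver URL, room and event IDs, and access token are placeholders, and the unstable prefix is assumed because the servlet is registered with releases=():

import requests  # illustrative client code, not part of Synapse

HS = "https://example.com"
ROOM = "!foo:example.com"
PARENT = "$bar"

resp = requests.post(
    "%s/_matrix/client/unstable/rooms/%s/send_relation/%s/m.annotation/m.reaction"
    % (HS, ROOM, PARENT),
    params={"key": "👍"},
    headers={"Authorization": "Bearer <access_token>"},
    json={},
)
print(resp.json())  # e.g. {"event_id": "$foobar"}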

View File

@@ -50,7 +50,6 @@ class RoomUpgradeRestServlet(RestServlet):
PATTERNS = client_v2_patterns(
# /rooms/$roomid/upgrade
"/rooms/(?P<room_id>[^/]*)/upgrade$",
v2_alpha=False,
)
def __init__(self, hs):

View File

@@ -29,7 +29,6 @@ logger = logging.getLogger(__name__)
class SendToDeviceRestServlet(servlet.RestServlet):
PATTERNS = client_v2_patterns(
"/sendToDevice/(?P<message_type>[^/]*)/(?P<txn_id>[^/]*)$",
v2_alpha=False
)
def __init__(self, hs):

View File

@@ -26,7 +26,6 @@ from synapse.api.filtering import DEFAULT_FILTER_COLLECTION, FilterCollection
from synapse.events.utils import (
format_event_for_client_v2_without_room_id,
format_event_raw,
serialize_event,
)
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.sync import SyncConfig
@@ -86,6 +85,7 @@ class SyncRestServlet(RestServlet):
self.filtering = hs.get_filtering()
self.presence_handler = hs.get_presence_handler()
self._server_notices_sender = hs.get_server_notices_sender()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
def on_GET(self, request):
@@ -168,14 +168,14 @@ class SyncRestServlet(RestServlet):
)
time_now = self.clock.time_msec()
response_content = self.encode_response(
response_content = yield self.encode_response(
time_now, sync_result, requester.access_token_id, filter
)
defer.returnValue((200, response_content))
@staticmethod
def encode_response(time_now, sync_result, access_token_id, filter):
@defer.inlineCallbacks
def encode_response(self, time_now, sync_result, access_token_id, filter):
if filter.event_format == 'client':
event_formatter = format_event_for_client_v2_without_room_id
elif filter.event_format == 'federation':
@@ -183,24 +183,24 @@ class SyncRestServlet(RestServlet):
else:
raise Exception("Unknown event format %s" % (filter.event_format, ))
joined = SyncRestServlet.encode_joined(
joined = yield self.encode_joined(
sync_result.joined, time_now, access_token_id,
filter.event_fields,
event_formatter,
)
invited = SyncRestServlet.encode_invited(
invited = yield self.encode_invited(
sync_result.invited, time_now, access_token_id,
event_formatter,
)
archived = SyncRestServlet.encode_archived(
archived = yield self.encode_archived(
sync_result.archived, time_now, access_token_id,
filter.event_fields,
event_formatter,
)
return {
defer.returnValue({
"account_data": {"events": sync_result.account_data},
"to_device": {"events": sync_result.to_device},
"device_lists": {
@@ -222,7 +222,7 @@ class SyncRestServlet(RestServlet):
},
"device_one_time_keys_count": sync_result.device_one_time_keys_count,
"next_batch": sync_result.next_batch.to_string(),
}
})
@staticmethod
def encode_presence(events, time_now):
@@ -239,8 +239,8 @@ class SyncRestServlet(RestServlet):
]
}
@staticmethod
def encode_joined(rooms, time_now, token_id, event_fields, event_formatter):
@defer.inlineCallbacks
def encode_joined(self, rooms, time_now, token_id, event_fields, event_formatter):
"""
Encode the joined rooms in a sync result
@@ -261,15 +261,15 @@ class SyncRestServlet(RestServlet):
"""
joined = {}
for room in rooms:
joined[room.room_id] = SyncRestServlet.encode_room(
joined[room.room_id] = yield self.encode_room(
room, time_now, token_id, joined=True, only_fields=event_fields,
event_formatter=event_formatter,
)
return joined
defer.returnValue(joined)
@staticmethod
def encode_invited(rooms, time_now, token_id, event_formatter):
@defer.inlineCallbacks
def encode_invited(self, rooms, time_now, token_id, event_formatter):
"""
Encode the invited rooms in a sync result
@@ -289,7 +289,7 @@ class SyncRestServlet(RestServlet):
"""
invited = {}
for room in rooms:
invite = serialize_event(
invite = yield self._event_serializer.serialize_event(
room.invite, time_now, token_id=token_id,
event_format=event_formatter,
is_invite=True,
@@ -302,10 +302,10 @@ class SyncRestServlet(RestServlet):
"invite_state": {"events": invited_state}
}
return invited
defer.returnValue(invited)
@staticmethod
def encode_archived(rooms, time_now, token_id, event_fields, event_formatter):
@defer.inlineCallbacks
def encode_archived(self, rooms, time_now, token_id, event_fields, event_formatter):
"""
Encode the archived rooms in a sync result
@@ -326,17 +326,17 @@ class SyncRestServlet(RestServlet):
"""
joined = {}
for room in rooms:
joined[room.room_id] = SyncRestServlet.encode_room(
joined[room.room_id] = yield self.encode_room(
room, time_now, token_id, joined=False,
only_fields=event_fields,
event_formatter=event_formatter,
)
return joined
defer.returnValue(joined)
@staticmethod
@defer.inlineCallbacks
def encode_room(
room, time_now, token_id, joined,
self, room, time_now, token_id, joined,
only_fields, event_formatter,
):
"""
@@ -355,9 +355,10 @@ class SyncRestServlet(RestServlet):
Returns:
dict[str, object]: the room, encoded in our response format
"""
def serialize(event):
return serialize_event(
event, time_now, token_id=token_id,
def serialize(events):
return self._event_serializer.serialize_events(
events, time_now=time_now,
token_id=token_id,
event_format=event_formatter,
only_event_fields=only_fields,
)
@@ -376,8 +377,8 @@ class SyncRestServlet(RestServlet):
event.event_id, room.room_id, event.room_id,
)
serialized_state = [serialize(e) for e in state_events]
serialized_timeline = [serialize(e) for e in timeline_events]
serialized_state = yield serialize(state_events)
serialized_timeline = yield serialize(timeline_events)
account_data = room.account_data
@@ -397,7 +398,7 @@ class SyncRestServlet(RestServlet):
result["unread_notifications"] = room.unread_notifications
result["summary"] = room.summary
return result
defer.returnValue(result)
def register_servlets(hs, http_server):

View File

@@ -444,6 +444,9 @@ class MediaRepository(object):
)
return
if thumbnailer.transpose_method is not None:
m_width, m_height = thumbnailer.transpose()
if t_method == "crop":
t_byte_source = thumbnailer.crop(t_width, t_height, t_type)
elif t_method == "scale":
@@ -578,6 +581,12 @@ class MediaRepository(object):
)
return
if thumbnailer.transpose_method is not None:
m_width, m_height = yield logcontext.defer_to_thread(
self.hs.get_reactor(),
thumbnailer.transpose
)
# We deduplicate the thumbnail sizes by ignoring the cropped versions if
# they have the same dimensions of a scaled one.
thumbnails = {}

View File

@@ -108,6 +108,7 @@ class FileStorageProviderBackend(StorageProvider):
"""
def __init__(self, hs, config):
self.hs = hs
self.cache_directory = hs.config.media_store_path
self.base_directory = config

View File

@@ -20,6 +20,17 @@ import PIL.Image as Image
logger = logging.getLogger(__name__)
EXIF_ORIENTATION_TAG = 0x0112
EXIF_TRANSPOSE_MAPPINGS = {
2: Image.FLIP_LEFT_RIGHT,
3: Image.ROTATE_180,
4: Image.FLIP_TOP_BOTTOM,
5: Image.TRANSPOSE,
6: Image.ROTATE_270,
7: Image.TRANSVERSE,
8: Image.ROTATE_90
}
class Thumbnailer(object):
@@ -31,6 +42,30 @@ class Thumbnailer(object):
def __init__(self, input_path):
self.image = Image.open(input_path)
self.width, self.height = self.image.size
self.transpose_method = None
try:
# We don't use ImageOps.exif_transpose since it crashes with big EXIF
image_exif = self.image._getexif()
if image_exif is not None:
image_orientation = image_exif.get(EXIF_ORIENTATION_TAG)
self.transpose_method = EXIF_TRANSPOSE_MAPPINGS.get(image_orientation)
except Exception as e:
# A lot of parsing errors can happen when parsing EXIF
logger.info("Error parsing image EXIF information: %s", e)
def transpose(self):
"""Transpose the image using its EXIF Orientation tag
Returns:
Tuple[int, int]: (width, height) containing the new image size in pixels.
"""
if self.transpose_method is not None:
self.image = self.image.transpose(self.transpose_method)
self.width, self.height = self.image.size
self.transpose_method = None
# We don't need EXIF any more
self.image.info["exif"] = None
return self.image.size
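A standalone sketch of the same EXIF-orientation handling, runnable outside Synapse; it assumes Pillow is installed and that "photo.jpg" is a JPEG carrying EXIF data:

import PIL.Image as Image

EXIF_ORIENTATION_TAG = 0x0112
EXIF_TRANSPOSE_MAPPINGS = {
    2: Image.FLIP_LEFT_RIGHT,
    3: Image.ROTATE_180,
    4: Image.FLIP_TOP_BOTTOM,
    5: Image.TRANSPOSE,
    6: Image.ROTATE_270,
    7: Image.TRANSVERSE,
    8: Image.ROTATE_90,
}

image = Image.open("photo.jpg")  # placeholder path
exif = image._getexif()          # None if the image has no EXIF data
method = EXIF_TRANSPOSE_MAPPINGS.get((exif or {}).get(EXIF_ORIENTATION_TAG))
if method is not None:
    image = image.transpose(method)  # undo the rotation stored by the camera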
def aspect(self, max_width, max_height):
"""Calculate the largest size that preserves aspect ratio which

View File

@@ -35,6 +35,7 @@ from synapse.crypto import context_factory
from synapse.crypto.keyring import Keyring
from synapse.events.builder import EventBuilderFactory
from synapse.events.spamcheck import SpamChecker
from synapse.events.utils import EventClientSerializer
from synapse.federation.federation_client import FederationClient
from synapse.federation.federation_server import (
FederationHandlerRegistry,
@@ -71,6 +72,7 @@ from synapse.handlers.room_list import RoomListHandler
from synapse.handlers.room_member import RoomMemberMasterHandler
from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
from synapse.handlers.set_password import SetPasswordHandler
from synapse.handlers.stats import StatsHandler
from synapse.handlers.sync import SyncHandler
from synapse.handlers.typing import TypingHandler
from synapse.handlers.user_directory import UserDirectoryHandler
@@ -138,6 +140,7 @@ class HomeServer(object):
'acme_handler',
'auth_handler',
'device_handler',
'stats_handler',
'e2e_keys_handler',
'e2e_room_keys_handler',
'event_handler',
@@ -185,10 +188,12 @@ class HomeServer(object):
'sendmail',
'registration_handler',
'account_validity_handler',
'event_client_serializer',
]
REQUIRED_ON_MASTER_STARTUP = [
"user_directory_handler",
"stats_handler"
]
# This is overridden in derived application classes
@@ -472,6 +477,9 @@ class HomeServer(object):
def build_secrets(self):
return Secrets()
def build_stats_handler(self):
return StatsHandler(self)
def build_spam_checker(self):
return SpamChecker(self)
@@ -511,6 +519,9 @@ class HomeServer(object):
def build_account_validity_handler(self):
return AccountValidityHandler(self)
def build_event_client_serializer(self):
return EventClientSerializer(self)
def remove_pusher(self, app_id, push_key, user_id):
return self.get_pusherpool().remove_pusher(app_id, push_key, user_id)

View File

@@ -49,11 +49,13 @@ from .pusher import PusherStore
from .receipts import ReceiptsStore
from .registration import RegistrationStore
from .rejections import RejectionsStore
from .relations import RelationsStore
from .room import RoomStore
from .roommember import RoomMemberStore
from .search import SearchStore
from .signatures import SignatureStore
from .state import StateStore
from .stats import StatsStore
from .stream import StreamStore
from .tags import TagsStore
from .transactions import TransactionStore
@@ -99,6 +101,8 @@ class DataStore(
GroupServerStore,
UserErasureStore,
MonthlyActiveUsersStore,
StatsStore,
RelationsStore,
):
def __init__(self, db_conn, hs):
self.hs = hs

View File

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -227,6 +229,8 @@ class SQLBaseStore(object):
# A set of tables that are not safe to use native upserts in.
self._unsafe_to_upsert_tables = set(UNIQUE_INDEX_BACKGROUND_UPDATES.keys())
self._account_validity = self.hs.config.account_validity
# We add the user_directory_search table to the blacklist on SQLite
# because the existing search table does not have an index, making it
# unsafe to use native upserts.
@@ -243,6 +247,14 @@ class SQLBaseStore(object):
self._check_safe_to_upsert,
)
if self._account_validity.enabled:
self._clock.call_later(
0.0,
run_as_background_process,
"account_validity_set_expiration_dates",
self._set_expiration_date_when_missing,
)
@defer.inlineCallbacks
def _check_safe_to_upsert(self):
"""
@@ -275,6 +287,52 @@ class SQLBaseStore(object):
self._check_safe_to_upsert,
)
@defer.inlineCallbacks
def _set_expiration_date_when_missing(self):
"""
Retrieves the list of registered users that don't have an expiration date, and
adds an expiration date for each of them.
"""
def select_users_with_no_expiration_date_txn(txn):
"""Retrieves the list of registered users with no expiration date from the
database.
"""
sql = (
"SELECT users.name FROM users"
" LEFT JOIN account_validity ON (users.name = account_validity.user_id)"
" WHERE account_validity.user_id is NULL;"
)
txn.execute(sql, [])
res = self.cursor_to_dict(txn)
if res:
for user in res:
self.set_expiration_date_for_user_txn(txn, user["name"])
yield self.runInteraction(
"get_users_with_no_expiration_date",
select_users_with_no_expiration_date_txn,
)
def set_expiration_date_for_user_txn(self, txn, user_id):
"""Sets an expiration date to the account with the given user ID.
Args:
user_id (str): User ID to set an expiration date for.
"""
now_ms = self._clock.time_msec()
expiration_ts = now_ms + self._account_validity.period
self._simple_insert_txn(
txn,
"account_validity",
values={
"user_id": user_id,
"expiration_ts_ms": expiration_ts,
"email_sent": False,
},
)
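For scale, a worked example of the expiration arithmetic above; the period is hypothetical and would come from the account_validity config in practice:

period_ms = 30 * 24 * 60 * 60 * 1000  # assumed 30-day validity period
now_ms = 1558000000000                # example clock reading, in milliseconds
expiration_ts = now_ms + period_ms    # 1560592000000, stored as expiration_ts_ms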
def start_profiling(self):
self._previous_loop_ts = self._clock.time_msec()

View File

@@ -302,7 +302,7 @@ class ApplicationServiceTransactionWorkerStore(
event_ids = json.loads(entry["event_ids"])
events = yield self._get_events(event_ids)
events = yield self.get_events_as_list(event_ids)
defer.returnValue(
AppServiceTransaction(service=service, id=entry["txn_id"], events=events)
@@ -358,7 +358,7 @@ class ApplicationServiceTransactionWorkerStore(
"get_new_events_for_appservice", get_new_events_for_appservice_txn
)
events = yield self._get_events(event_ids)
events = yield self.get_events_as_list(event_ids)
defer.returnValue((upper_bound, events))

View File

@@ -45,7 +45,7 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
"""
return self.get_auth_chain_ids(
event_ids, include_given=include_given
).addCallback(self._get_events)
).addCallback(self.get_events_as_list)
def get_auth_chain_ids(self, event_ids, include_given=False):
"""Get auth events for given event_ids. The events *must* be state events.
@@ -316,7 +316,7 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
event_list,
limit,
)
.addCallback(self._get_events)
.addCallback(self.get_events_as_list)
.addCallback(lambda l: sorted(l, key=lambda e: -e.depth))
)
@@ -382,7 +382,7 @@ class EventFederationWorkerStore(EventsWorkerStore, SignatureWorkerStore, SQLBas
latest_events,
limit,
)
events = yield self._get_events(ids)
events = yield self.get_events_as_list(ids)
defer.returnValue(events)
def _get_missing_events(self, txn, room_id, earliest_events, latest_events, limit):

View File

@@ -575,10 +575,11 @@ class EventsStore(
def _get_events(txn, batch):
sql = """
SELECT prev_event_id
SELECT prev_event_id, internal_metadata
FROM event_edges
INNER JOIN events USING (event_id)
LEFT JOIN rejections USING (event_id)
LEFT JOIN event_json USING (event_id)
WHERE
prev_event_id IN (%s)
AND NOT events.outlier
@@ -588,7 +589,11 @@ class EventsStore(
)
txn.execute(sql, batch)
results.extend(r[0] for r in txn)
results.extend(
r[0]
for r in txn
if not json.loads(r[1]).get("soft_failed")
)
for chunk in batch_iter(event_ids, 100):
yield self.runInteraction("_get_events_which_are_prevs", _get_events, chunk)
@@ -1325,6 +1330,9 @@ class EventsStore(
txn, event.room_id, event.redacts
)
# Remove from relations table.
self._handle_redaction(txn, event.redacts)
# Update the event_forward_extremities, event_backward_extremities and
# event_edges tables.
self._handle_mult_prev_events(
@@ -1351,6 +1359,8 @@ class EventsStore(
# Insert into the event_search table.
self._store_guest_access_txn(txn, event)
self._handle_event_relations(txn, event)
# Insert into the room_memberships table.
self._store_room_members_txn(
txn,
@@ -1655,10 +1665,11 @@ class EventsStore(
def get_all_new_forward_event_rows(txn):
sql = (
"SELECT e.stream_ordering, e.event_id, e.room_id, e.type,"
" state_key, redacts"
" state_key, redacts, relates_to_id"
" FROM events AS e"
" LEFT JOIN redactions USING (event_id)"
" LEFT JOIN state_events USING (event_id)"
" LEFT JOIN event_relations USING (event_id)"
" WHERE ? < stream_ordering AND stream_ordering <= ?"
" ORDER BY stream_ordering ASC"
" LIMIT ?"
@@ -1673,11 +1684,12 @@ class EventsStore(
sql = (
"SELECT event_stream_ordering, e.event_id, e.room_id, e.type,"
" state_key, redacts"
" state_key, redacts, relates_to_id"
" FROM events AS e"
" INNER JOIN ex_outlier_stream USING (event_id)"
" LEFT JOIN redactions USING (event_id)"
" LEFT JOIN state_events USING (event_id)"
" LEFT JOIN event_relations USING (event_id)"
" WHERE ? < event_stream_ordering"
" AND event_stream_ordering <= ?"
" ORDER BY event_stream_ordering DESC"
@@ -1698,10 +1710,11 @@ class EventsStore(
def get_all_new_backfill_event_rows(txn):
sql = (
"SELECT -e.stream_ordering, e.event_id, e.room_id, e.type,"
" state_key, redacts"
" state_key, redacts, relates_to_id"
" FROM events AS e"
" LEFT JOIN redactions USING (event_id)"
" LEFT JOIN state_events USING (event_id)"
" LEFT JOIN event_relations USING (event_id)"
" WHERE ? > stream_ordering AND stream_ordering >= ?"
" ORDER BY stream_ordering ASC"
" LIMIT ?"
@@ -1716,11 +1729,12 @@ class EventsStore(
sql = (
"SELECT -event_stream_ordering, e.event_id, e.room_id, e.type,"
" state_key, redacts"
" state_key, redacts, relates_to_id"
" FROM events AS e"
" INNER JOIN ex_outlier_stream USING (event_id)"
" LEFT JOIN redactions USING (event_id)"
" LEFT JOIN state_events USING (event_id)"
" LEFT JOIN event_relations USING (event_id)"
" WHERE ? > event_stream_ordering"
" AND event_stream_ordering >= ?"
" ORDER BY event_stream_ordering DESC"

View File

@@ -103,7 +103,7 @@ class EventsWorkerStore(SQLBaseStore):
Returns:
Deferred : A FrozenEvent.
"""
events = yield self._get_events(
events = yield self.get_events_as_list(
[event_id],
check_redacted=check_redacted,
get_prev_content=get_prev_content,
@@ -142,7 +142,7 @@ class EventsWorkerStore(SQLBaseStore):
Returns:
Deferred : Dict from event_id to event.
"""
events = yield self._get_events(
events = yield self.get_events_as_list(
event_ids,
check_redacted=check_redacted,
get_prev_content=get_prev_content,
@@ -152,13 +152,32 @@ class EventsWorkerStore(SQLBaseStore):
defer.returnValue({e.event_id: e for e in events})
@defer.inlineCallbacks
def _get_events(
def get_events_as_list(
self,
event_ids,
check_redacted=True,
get_prev_content=False,
allow_rejected=False,
):
"""Get events from the database and return in a list in the same order
as given by `event_ids` arg.
Args:
event_ids (list): The event_ids of the events to fetch
check_redacted (bool): If True, check if event has been redacted
and redact it.
get_prev_content (bool): If True and event is a state event,
include the previous states content in the unsigned field.
allow_rejected (bool): If True return rejected events.
Returns:
Deferred[list[EventBase]]: List of events fetched from the database. The
events are in the same order as `event_ids` arg.
Note that the returned list may be smaller than the list of event
IDs if not all events could be fetched.
"""
if not event_ids:
defer.returnValue([])
@@ -202,21 +221,22 @@ class EventsWorkerStore(SQLBaseStore):
#
# The problem is that we end up at this point when an event
# which has been redacted is pulled out of the database by
# _enqueue_events, because _enqueue_events needs to check the
# redaction before it can cache the redacted event. So obviously,
# calling get_event to get the redacted event out of the database
# gives us an infinite loop.
# _enqueue_events, because _enqueue_events needs to check
# the redaction before it can cache the redacted event. So
# obviously, calling get_event to get the redacted event out
# of the database gives us an infinite loop.
#
# For now (quick hack to fix during 0.99 release cycle), we just
# go and fetch the relevant row from the db, but it would be nice
# to think about how we can cache this rather than hit the db
# every time we access a redaction event.
# For now (quick hack to fix during 0.99 release cycle), we
# just go and fetch the relevant row from the db, but it
# would be nice to think about how we can cache this rather
# than hit the db every time we access a redaction event.
#
# One thought on how to do this:
# 1. split _get_events up so that it is divided into (a) get the
# rawish event from the db/cache, (b) do the redaction/rejection
# filtering
# 2. have _get_event_from_row just call the first half of that
# 1. split get_events_as_list up so that it is divided into
# (a) get the rawish event from the db/cache, (b) do the
# redaction/rejection filtering
# 2. have _get_event_from_row just call the first half of
# that
orig_sender = yield self._simple_select_one_onecol(
table="events",
@@ -591,3 +611,27 @@ class EventsWorkerStore(SQLBaseStore):
return res
return self.runInteraction("get_rejection_reasons", f)
def _get_total_state_event_counts_txn(self, txn, room_id):
"""
See get_total_state_event_counts.
"""
sql = "SELECT COUNT(*) FROM state_events WHERE room_id=?"
txn.execute(sql, (room_id,))
row = txn.fetchone()
return row[0] if row else 0
def get_total_state_event_counts(self, room_id):
"""
Gets the total number of state events in a room.
Args:
room_id (str)
Returns:
Deferred[int]
"""
return self.runInteraction(
"get_total_state_event_counts",
self._get_total_state_event_counts_txn, room_id
)

View File

@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -725,17 +727,7 @@ class RegistrationStore(
raise StoreError(400, "User ID already taken.", errcode=Codes.USER_IN_USE)
if self._account_validity.enabled:
now_ms = self.clock.time_msec()
expiration_ts = now_ms + self._account_validity.period
self._simple_insert_txn(
txn,
"account_validity",
values={
"user_id": user_id,
"expiration_ts_ms": expiration_ts,
"email_sent": False,
}
)
self.set_expiration_date_for_user_txn(txn, user_id)
if token:
# it's possible for this to get a conflict, but only for a single user

View File

@@ -0,0 +1,476 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import attr
from twisted.internet import defer
from synapse.api.constants import RelationTypes
from synapse.api.errors import SynapseError
from synapse.storage._base import SQLBaseStore
from synapse.storage.stream import generate_pagination_where_clause
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
logger = logging.getLogger(__name__)
@attr.s
class PaginationChunk(object):
"""Returned by relation pagination APIs.
Attributes:
chunk (list): The rows returned by pagination
next_batch (Any|None): Token to fetch next set of results with, if
None then there are no more results.
prev_batch (Any|None): Token to fetch previous set of results with, if
None then there are no previous results.
"""
chunk = attr.ib()
next_batch = attr.ib(default=None)
prev_batch = attr.ib(default=None)
def to_dict(self):
d = {"chunk": self.chunk}
if self.next_batch:
d["next_batch"] = self.next_batch.to_string()
if self.prev_batch:
d["prev_batch"] = self.prev_batch.to_string()
return d
@attr.s(frozen=True, slots=True)
class RelationPaginationToken(object):
"""Pagination token for relation pagination API.
As the results are ordered by topological ordering, we can use the
`topological_ordering` and `stream_ordering` fields of the events at the
boundaries of the chunk as pagination tokens.
Attributes:
topological (int): The topological ordering of the boundary event
stream (int): The stream ordering of the boundary event.
"""
topological = attr.ib()
stream = attr.ib()
@staticmethod
def from_string(string):
try:
t, s = string.split("-")
return RelationPaginationToken(int(t), int(s))
except ValueError:
raise SynapseError(400, "Invalid token")
def to_string(self):
return "%d-%d" % (self.topological, self.stream)
def as_tuple(self):
return attr.astuple(self)
@attr.s(frozen=True, slots=True)
class AggregationPaginationToken(object):
"""Pagination token for relation aggregation pagination API.
As the results are ordered by count and then MAX(stream_ordering) of the
aggregation groups, we can just use them as our pagination token.
Attributes:
count (int): The count of relations in the boundary group.
stream (int): The MAX stream ordering in the boundary group.
"""
count = attr.ib()
stream = attr.ib()
@staticmethod
def from_string(string):
try:
c, s = string.split("-")
return AggregationPaginationToken(int(c), int(s))
except ValueError:
raise SynapseError(400, "Invalid token")
def to_string(self):
return "%d-%d" % (self.count, self.stream)
def as_tuple(self):
return attr.astuple(self)
class RelationsWorkerStore(SQLBaseStore):
@cached(tree=True)
def get_relations_for_event(
self,
event_id,
relation_type=None,
event_type=None,
aggregation_key=None,
limit=5,
direction="b",
from_token=None,
to_token=None,
):
"""Get a list of relations for an event, ordered by topological ordering.
Args:
event_id (str): Fetch events that relate to this event ID.
relation_type (str|None): Only fetch events with this relation
type, if given.
event_type (str|None): Only fetch events with this event type, if
given.
aggregation_key (str|None): Only fetch events with this aggregation
key, if given.
limit (int): Only fetch the most recent `limit` events.
direction (str): Whether to fetch the most recent first (`"b"`) or
the oldest first (`"f"`).
from_token (RelationPaginationToken|None): Fetch rows from the given
token, or from the start if None.
to_token (RelationPaginationToken|None): Fetch rows up to the given
token, or up to the end if None.
Returns:
Deferred[PaginationChunk]: List of event IDs that match relations
requested. The rows are of the form `{"event_id": "..."}`.
"""
where_clause = ["relates_to_id = ?"]
where_args = [event_id]
if relation_type is not None:
where_clause.append("relation_type = ?")
where_args.append(relation_type)
if event_type is not None:
where_clause.append("type = ?")
where_args.append(event_type)
if aggregation_key:
where_clause.append("aggregation_key = ?")
where_args.append(aggregation_key)
pagination_clause = generate_pagination_where_clause(
direction=direction,
column_names=("topological_ordering", "stream_ordering"),
from_token=attr.astuple(from_token) if from_token else None,
to_token=attr.astuple(to_token) if to_token else None,
engine=self.database_engine,
)
if pagination_clause:
where_clause.append(pagination_clause)
if direction == "b":
order = "DESC"
else:
order = "ASC"
sql = """
SELECT event_id, topological_ordering, stream_ordering
FROM event_relations
INNER JOIN events USING (event_id)
WHERE %s
ORDER BY topological_ordering %s, stream_ordering %s
LIMIT ?
""" % (
" AND ".join(where_clause),
order,
order,
)
def _get_recent_references_for_event_txn(txn):
txn.execute(sql, where_args + [limit + 1])
last_topo_id = None
last_stream_id = None
events = []
for row in txn:
events.append({"event_id": row[0]})
last_topo_id = row[1]
last_stream_id = row[2]
next_batch = None
if len(events) > limit and last_topo_id and last_stream_id:
next_batch = RelationPaginationToken(last_topo_id, last_stream_id)
return PaginationChunk(
chunk=list(events[:limit]), next_batch=next_batch, prev_batch=from_token
)
return self.runInteraction(
"get_recent_references_for_event", _get_recent_references_for_event_txn
)
@cached(tree=True)
def get_aggregation_groups_for_event(
self,
event_id,
event_type=None,
limit=5,
direction="b",
from_token=None,
to_token=None,
):
"""Get a list of annotations on the event, grouped by event type and
aggregation key, sorted by count.
This is used e.g. to get which reactions have happened on an event and
how many of each there are.
Args:
event_id (str): Fetch events that relate to this event ID.
event_type (str|None): Only fetch events with this event type, if
given.
limit (int): Only fetch the `limit` groups.
direction (str): Whether to fetch the highest count first (`"b"`) or
the lowest count first (`"f"`).
from_token (AggregationPaginationToken|None): Fetch rows from the
given token, or from the start if None.
to_token (AggregationPaginationToken|None): Fetch rows up to the
given token, or up to the end if None.
Returns:
Deferred[PaginationChunk]: List of groups of annotations that
match. Each row is a dict with `type`, `key` and `count` fields.
"""
where_clause = ["relates_to_id = ?", "relation_type = ?"]
where_args = [event_id, RelationTypes.ANNOTATION]
if event_type:
where_clause.append("type = ?")
where_args.append(event_type)
having_clause = generate_pagination_where_clause(
direction=direction,
column_names=("COUNT(*)", "MAX(stream_ordering)"),
from_token=attr.astuple(from_token) if from_token else None,
to_token=attr.astuple(to_token) if to_token else None,
engine=self.database_engine,
)
if direction == "b":
order = "DESC"
else:
order = "ASC"
if having_clause:
having_clause = "HAVING " + having_clause
else:
having_clause = ""
sql = """
SELECT type, aggregation_key, COUNT(DISTINCT sender), MAX(stream_ordering)
FROM event_relations
INNER JOIN events USING (event_id)
WHERE {where_clause}
GROUP BY relation_type, type, aggregation_key
{having_clause}
ORDER BY COUNT(*) {order}, MAX(stream_ordering) {order}
LIMIT ?
""".format(
where_clause=" AND ".join(where_clause),
order=order,
having_clause=having_clause,
)
def _get_aggregation_groups_for_event_txn(txn):
txn.execute(sql, where_args + [limit + 1])
next_batch = None
events = []
for row in txn:
events.append({"type": row[0], "key": row[1], "count": row[2]})
next_batch = AggregationPaginationToken(row[2], row[3])
if len(events) <= limit:
next_batch = None
return PaginationChunk(
chunk=list(events[:limit]), next_batch=next_batch, prev_batch=from_token
)
return self.runInteraction(
"get_aggregation_groups_for_event", _get_aggregation_groups_for_event_txn
)
@cachedInlineCallbacks()
def get_applicable_edit(self, event_id):
"""Get the most recent edit (if any) that has happened for the given
event.
Correctly handles checking whether edits were allowed to happen.
Args:
event_id (str): The original event ID
Returns:
Deferred[EventBase|None]: Returns the most recent edit, if any.
"""
# We only allow edits for `m.room.message` events that have the same sender
# and event type. We can't assert these things during regular event auth so
# we have to do the checks post hoc.
# Fetches latest edit that has the same type and sender as the
# original, and is an `m.room.message`.
sql = """
SELECT edit.event_id FROM events AS edit
INNER JOIN event_relations USING (event_id)
INNER JOIN events AS original ON
original.event_id = relates_to_id
AND edit.type = original.type
AND edit.sender = original.sender
WHERE
relates_to_id = ?
AND relation_type = ?
AND edit.type = 'm.room.message'
ORDER by edit.origin_server_ts DESC, edit.event_id DESC
LIMIT 1
"""
def _get_applicable_edit_txn(txn):
txn.execute(sql, (event_id, RelationTypes.REPLACE))
row = txn.fetchone()
if row:
return row[0]
edit_id = yield self.runInteraction(
"get_applicable_edit", _get_applicable_edit_txn
)
if not edit_id:
return
edit_event = yield self.get_event(edit_id, allow_none=True)
defer.returnValue(edit_event)
def has_user_annotated_event(self, parent_id, event_type, aggregation_key, sender):
"""Check if a user has already annotated an event with the same key
(e.g. already liked an event).
Args:
parent_id (str): The event being annotated
event_type (str): The event type of the annotation
aggregation_key (str): The aggregation key of the annotation
sender (str): The sender of the annotation
Returns:
Deferred[bool]
"""
sql = """
SELECT 1 FROM event_relations
INNER JOIN events USING (event_id)
WHERE
relates_to_id = ?
AND relation_type = ?
AND type = ?
AND sender = ?
AND aggregation_key = ?
LIMIT 1;
"""
def _get_if_user_has_annotated_event(txn):
txn.execute(
sql,
(
parent_id,
RelationTypes.ANNOTATION,
event_type,
sender,
aggregation_key,
),
)
return bool(txn.fetchone())
return self.runInteraction(
"get_if_user_has_annotated_event", _get_if_user_has_annotated_event
)
class RelationsStore(RelationsWorkerStore):
def _handle_event_relations(self, txn, event):
"""Handles inserting relation data during peristence of events
Args:
txn
event (EventBase)
"""
relation = event.content.get("m.relates_to")
if not relation:
# No relations
return
rel_type = relation.get("rel_type")
if rel_type not in (
RelationTypes.ANNOTATION,
RelationTypes.REFERENCE,
RelationTypes.REPLACE,
):
# Unknown relation type
return
parent_id = relation.get("event_id")
if not parent_id:
# Invalid relation
return
aggregation_key = relation.get("key")
self._simple_insert_txn(
txn,
table="event_relations",
values={
"event_id": event.event_id,
"relates_to_id": parent_id,
"relation_type": rel_type,
"aggregation_key": aggregation_key,
},
)
txn.call_after(self.get_relations_for_event.invalidate_many, (parent_id,))
txn.call_after(
self.get_aggregation_groups_for_event.invalidate_many, (parent_id,)
)
if rel_type == RelationTypes.REPLACE:
txn.call_after(self.get_applicable_edit.invalidate, (parent_id,))
def _handle_redaction(self, txn, redacted_event_id):
"""Handles receiving a redaction and checking whether we need to remove
any redacted relations from the database.
Args:
txn
redacted_event_id (str): The event that was redacted.
"""
self._simple_delete_txn(
txn,
table="event_relations",
keyvalues={
"event_id": redacted_event_id,
}
)
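The pagination tokens defined above serialise to simple "a-b" strings; a small round-trip sketch, assuming the new synapse.storage.relations module is importable:

from synapse.storage.relations import (
    AggregationPaginationToken,
    RelationPaginationToken,
)

token = RelationPaginationToken.from_string("3-7")
assert (token.topological, token.stream) == (3, 7)
assert token.to_string() == "3-7"

# Aggregation tokens carry (count, max stream ordering) instead.
assert AggregationPaginationToken(2, 11).to_string() == "2-11"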

View File

@@ -142,6 +142,38 @@ class RoomMemberWorkerStore(EventsWorkerStore):
return self.runInteraction("get_room_summary", _get_room_summary_txn)
def _get_user_count_in_room_txn(self, txn, room_id, membership):
"""
See get_user_count_in_room.
"""
sql = (
"SELECT count(*) FROM room_memberships as m"
" INNER JOIN current_state_events as c"
" ON m.event_id = c.event_id "
" AND m.room_id = c.room_id "
" AND m.user_id = c.state_key"
" WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?"
)
txn.execute(sql, (room_id, membership))
row = txn.fetchone()
return row[0]
def get_user_count_in_room(self, room_id, membership):
"""
Get the user count in a room with a particular membership.
Args:
room_id (str)
membership (Membership)
Returns:
Deferred[int]
"""
return self.runInteraction(
"get_users_in_room", self._get_user_count_in_room_txn, room_id, membership
)
@cached()
def get_invited_rooms_for_user(self, user_id):
""" Get all the rooms the user is invited to

View File

@@ -0,0 +1,27 @@
/* Copyright 2019 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Tracks related events, like reactions, replies, edits, etc. Note that things
-- in this table are not necessarily "valid", e.g. it may contain edits from
-- people who don't have power to edit other people's events.
CREATE TABLE IF NOT EXISTS event_relations (
event_id TEXT NOT NULL,
relates_to_id TEXT NOT NULL,
relation_type TEXT NOT NULL,
aggregation_key TEXT
);
CREATE UNIQUE INDEX event_relations_id ON event_relations(event_id);
CREATE INDEX event_relations_relates ON event_relations(relates_to_id, relation_type, aggregation_key);
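To make the schema concrete, a hypothetical mapping from a reaction event's m.relates_to content onto an event_relations row (all IDs are placeholders):

relation = {  # content["m.relates_to"] of the annotating event
    "rel_type": "m.annotation",
    "event_id": "$parent:example.com",
    "key": "👍",
}
row = {
    "event_id": "$reaction:example.com",    # the annotating event itself
    "relates_to_id": relation["event_id"],  # the event being annotated
    "relation_type": relation["rel_type"],
    "aggregation_key": relation["key"],
}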

View File

@@ -0,0 +1,80 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
CREATE TABLE stats_stream_pos (
Lock CHAR(1) NOT NULL DEFAULT 'X' UNIQUE, -- Makes sure this table only has one row.
stream_id BIGINT,
CHECK (Lock='X')
);
INSERT INTO stats_stream_pos (stream_id) VALUES (null);
CREATE TABLE user_stats (
user_id TEXT NOT NULL,
ts BIGINT NOT NULL,
bucket_size INT NOT NULL,
public_rooms INT NOT NULL,
private_rooms INT NOT NULL
);
CREATE UNIQUE INDEX user_stats_user_ts ON user_stats(user_id, ts);
CREATE TABLE room_stats (
room_id TEXT NOT NULL,
ts BIGINT NOT NULL,
bucket_size INT NOT NULL,
current_state_events INT NOT NULL,
joined_members INT NOT NULL,
invited_members INT NOT NULL,
left_members INT NOT NULL,
banned_members INT NOT NULL,
state_events INT NOT NULL
);
CREATE UNIQUE INDEX room_stats_room_ts ON room_stats(room_id, ts);
-- cache of current room state; useful for the publicRooms list
CREATE TABLE room_state (
room_id TEXT NOT NULL,
join_rules TEXT,
history_visibility TEXT,
encryption TEXT,
name TEXT,
topic TEXT,
avatar TEXT,
canonical_alias TEXT
-- get aliases straight from the right table
);
CREATE UNIQUE INDEX room_state_room ON room_state(room_id);
CREATE TABLE room_stats_earliest_token (
room_id TEXT NOT NULL,
token BIGINT NOT NULL
);
CREATE UNIQUE INDEX room_stats_earliest_token_idx ON room_stats_earliest_token(room_id);
-- Set up staging tables
INSERT INTO background_updates (update_name, progress_json) VALUES
('populate_stats_createtables', '{}');
-- Run through each room and update stats
INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
('populate_stats_process_rooms', '{}', 'populate_stats_createtables');
-- Clean up staging tables
INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
('populate_stats_cleanup', '{}', 'populate_stats_process_rooms');

View File

@@ -460,7 +460,7 @@ class SearchStore(BackgroundUpdateStore):
results = list(filter(lambda row: row["room_id"] in room_ids, results))
events = yield self._get_events([r["event_id"] for r in results])
events = yield self.get_events_as_list([r["event_id"] for r in results])
event_map = {ev.event_id: ev for ev in events}
@@ -605,7 +605,7 @@ class SearchStore(BackgroundUpdateStore):
results = list(filter(lambda row: row["room_id"] in room_ids, results))
events = yield self._get_events([r["event_id"] for r in results])
events = yield self.get_events_as_list([r["event_id"] for r in results])
event_map = {ev.event_id: ev for ev in events}

View File

@@ -84,10 +84,16 @@ class StateDeltasStore(SQLBaseStore):
"get_current_state_deltas", get_current_state_deltas_txn
)
def get_max_stream_id_in_current_state_deltas(self):
return self._simple_select_one_onecol(
def _get_max_stream_id_in_current_state_deltas_txn(self, txn):
return self._simple_select_one_onecol_txn(
txn,
table="current_state_delta_stream",
keyvalues={},
retcol="COALESCE(MAX(stream_id), -1)",
desc="get_max_stream_id_in_current_state_deltas",
)
def get_max_stream_id_in_current_state_deltas(self):
return self.runInteraction(
"get_max_stream_id_in_current_state_deltas",
self._get_max_stream_id_in_current_state_deltas_txn,
)

synapse/storage/stats.py (new file, 450 lines)
View File

@@ -0,0 +1,450 @@
# -*- coding: utf-8 -*-
# Copyright 2018, 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.storage.state_deltas import StateDeltasStore
from synapse.util.caches.descriptors import cached
logger = logging.getLogger(__name__)
# these fields track absolutes (e.g. total number of rooms on the server)
ABSOLUTE_STATS_FIELDS = {
"room": (
"current_state_events",
"joined_members",
"invited_members",
"left_members",
"banned_members",
"state_events",
),
"user": ("public_rooms", "private_rooms"),
}
TYPE_TO_ROOM = {"room": ("room_stats", "room_id"), "user": ("user_stats", "user_id")}
TEMP_TABLE = "_temp_populate_stats"
class StatsStore(StateDeltasStore):
def __init__(self, db_conn, hs):
super(StatsStore, self).__init__(db_conn, hs)
self.server_name = hs.hostname
self.clock = self.hs.get_clock()
self.stats_enabled = hs.config.stats_enabled
self.stats_bucket_size = hs.config.stats_bucket_size
self.register_background_update_handler(
"populate_stats_createtables", self._populate_stats_createtables
)
self.register_background_update_handler(
"populate_stats_process_rooms", self._populate_stats_process_rooms
)
self.register_background_update_handler(
"populate_stats_cleanup", self._populate_stats_cleanup
)
@defer.inlineCallbacks
def _populate_stats_createtables(self, progress, batch_size):
if not self.stats_enabled:
yield self._end_background_update("populate_stats_createtables")
defer.returnValue(1)
# Get all the rooms that we want to process.
def _make_staging_area(txn):
sql = (
"CREATE TABLE IF NOT EXISTS "
+ TEMP_TABLE
+ "_rooms(room_id TEXT NOT NULL, events BIGINT NOT NULL)"
)
txn.execute(sql)
sql = (
"CREATE TABLE IF NOT EXISTS "
+ TEMP_TABLE
+ "_position(position TEXT NOT NULL)"
)
txn.execute(sql)
# Get rooms we want to process from the database
sql = """
SELECT room_id, count(*) FROM current_state_events
GROUP BY room_id
"""
txn.execute(sql)
rooms = [{"room_id": x[0], "events": x[1]} for x in txn.fetchall()]
self._simple_insert_many_txn(txn, TEMP_TABLE + "_rooms", rooms)
del rooms
new_pos = yield self.get_max_stream_id_in_current_state_deltas()
yield self.runInteraction("populate_stats_temp_build", _make_staging_area)
yield self._simple_insert(TEMP_TABLE + "_position", {"position": new_pos})
self.get_earliest_token_for_room_stats.invalidate_all()
yield self._end_background_update("populate_stats_createtables")
defer.returnValue(1)
@defer.inlineCallbacks
def _populate_stats_cleanup(self, progress, batch_size):
"""
Update the stats stream position, then clean up the old staging tables.
"""
if not self.stats_enabled:
yield self._end_background_update("populate_stats_cleanup")
defer.returnValue(1)
position = yield self._simple_select_one_onecol(
TEMP_TABLE + "_position", None, "position"
)
yield self.update_stats_stream_pos(position)
def _delete_staging_area(txn):
txn.execute("DROP TABLE IF EXISTS " + TEMP_TABLE + "_rooms")
txn.execute("DROP TABLE IF EXISTS " + TEMP_TABLE + "_position")
yield self.runInteraction("populate_stats_cleanup", _delete_staging_area)
yield self._end_background_update("populate_stats_cleanup")
defer.returnValue(1)
@defer.inlineCallbacks
def _populate_stats_process_rooms(self, progress, batch_size):
if not self.stats_enabled:
yield self._end_background_update("populate_stats_process_rooms")
defer.returnValue(1)
# If we don't have any progress recorded yet, delete everything.
if not progress:
yield self.delete_all_stats()
def _get_next_batch(txn):
# Only fetch 250 rooms, so we don't fetch too many at once, even
# if those 250 rooms have fewer than batch_size state events.
sql = """
SELECT room_id, events FROM %s_rooms
ORDER BY events DESC
LIMIT 250
""" % (
TEMP_TABLE,
)
txn.execute(sql)
rooms_to_work_on = txn.fetchall()
if not rooms_to_work_on:
return None
# Get how many rooms are left to process, so we can report how far
# through the processing we are
txn.execute("SELECT COUNT(*) FROM " + TEMP_TABLE + "_rooms")
progress["remaining"] = txn.fetchone()[0]
return rooms_to_work_on
rooms_to_work_on = yield self.runInteraction(
"populate_stats_temp_read", _get_next_batch
)
# No more rooms -- complete the transaction.
if not rooms_to_work_on:
yield self._end_background_update("populate_stats_process_rooms")
defer.returnValue(1)
logger.info(
"Processing the next %d rooms of %d remaining",
len(rooms_to_work_on),
progress["remaining"],
)
# Number of state events we've processed by going through each room
processed_event_count = 0
for room_id, event_count in rooms_to_work_on:
current_state_ids = yield self.get_current_state_ids(room_id)
join_rules = yield self.get_event(
current_state_ids.get((EventTypes.JoinRules, "")), allow_none=True
)
history_visibility = yield self.get_event(
current_state_ids.get((EventTypes.RoomHistoryVisibility, "")),
allow_none=True,
)
encryption = yield self.get_event(
current_state_ids.get((EventTypes.RoomEncryption, "")), allow_none=True
)
name = yield self.get_event(
current_state_ids.get((EventTypes.Name, "")), allow_none=True
)
topic = yield self.get_event(
current_state_ids.get((EventTypes.Topic, "")), allow_none=True
)
avatar = yield self.get_event(
current_state_ids.get((EventTypes.RoomAvatar, "")), allow_none=True
)
canonical_alias = yield self.get_event(
current_state_ids.get((EventTypes.CanonicalAlias, "")), allow_none=True
)
def _or_none(x, arg):
if x:
return x.content.get(arg)
return None
yield self.update_room_state(
room_id,
{
"join_rules": _or_none(join_rules, "join_rule"),
"history_visibility": _or_none(
history_visibility, "history_visibility"
),
"encryption": _or_none(encryption, "algorithm"),
"name": _or_none(name, "name"),
"topic": _or_none(topic, "topic"),
"avatar": _or_none(avatar, "url"),
"canonical_alias": _or_none(canonical_alias, "alias"),
},
)
now = self.hs.get_reactor().seconds()
# quantise the timestamp down to the start of the current bucket
now = (now // self.stats_bucket_size) * self.stats_bucket_size
def _fetch_data(txn):
# Get the current token of the room
current_token = self._get_max_stream_id_in_current_state_deltas_txn(txn)
current_state_events = len(current_state_ids)
joined_members = self._get_user_count_in_room_txn(
txn, room_id, Membership.JOIN
)
invited_members = self._get_user_count_in_room_txn(
txn, room_id, Membership.INVITE
)
left_members = self._get_user_count_in_room_txn(
txn, room_id, Membership.LEAVE
)
banned_members = self._get_user_count_in_room_txn(
txn, room_id, Membership.BAN
)
total_state_events = self._get_total_state_event_counts_txn(
txn, room_id
)
self._update_stats_txn(
txn,
"room",
room_id,
now,
{
"bucket_size": self.stats_bucket_size,
"current_state_events": current_state_events,
"joined_members": joined_members,
"invited_members": invited_members,
"left_members": left_members,
"banned_members": banned_members,
"state_events": total_state_events,
},
)
self._simple_insert_txn(
txn,
"room_stats_earliest_token",
{"room_id": room_id, "token": current_token},
)
yield self.runInteraction("update_room_stats", _fetch_data)
# We've finished a room. Delete it from the table.
yield self._simple_delete_one(TEMP_TABLE + "_rooms", {"room_id": room_id})
# Update the remaining counter.
progress["remaining"] -= 1
yield self.runInteraction(
"populate_stats",
self._background_update_progress_txn,
"populate_stats_process_rooms",
progress,
)
processed_event_count += event_count
if processed_event_count > batch_size:
# Don't process any more rooms, we've hit our batch size.
defer.returnValue(processed_event_count)
defer.returnValue(processed_event_count)
def delete_all_stats(self):
"""
Delete all statistics records.
"""
def _delete_all_stats_txn(txn):
txn.execute("DELETE FROM room_state")
txn.execute("DELETE FROM room_stats")
txn.execute("DELETE FROM room_stats_earliest_token")
txn.execute("DELETE FROM user_stats")
return self.runInteraction("delete_all_stats", _delete_all_stats_txn)
def get_stats_stream_pos(self):
return self._simple_select_one_onecol(
table="stats_stream_pos",
keyvalues={},
retcol="stream_id",
desc="stats_stream_pos",
)
def update_stats_stream_pos(self, stream_id):
return self._simple_update_one(
table="stats_stream_pos",
keyvalues={},
updatevalues={"stream_id": stream_id},
desc="update_stats_stream_pos",
)
def update_room_state(self, room_id, fields):
"""
Args:
room_id (str)
fields (dict[str:Any])
"""
return self._simple_upsert(
table="room_state",
keyvalues={"room_id": room_id},
values=fields,
desc="update_room_state",
)
def get_deltas_for_room(self, room_id, start, size=100):
"""
Get statistics deltas for a given room.
Args:
room_id (str)
start (int): Pagination start. Number of entries, not timestamp.
size (int): How many entries to return.
Returns:
Deferred[list[dict]], where the dict has the keys of
ABSOLUTE_STATS_FIELDS["room"] and "ts".
"""
return self._simple_select_list_paginate(
"room_stats",
{"room_id": room_id},
"ts",
start,
size,
retcols=(list(ABSOLUTE_STATS_FIELDS["room"]) + ["ts"]),
order_direction="DESC",
)
def get_all_room_state(self):
return self._simple_select_list(
"room_state", None, retcols=("name", "topic", "canonical_alias")
)
@cached()
def get_earliest_token_for_room_stats(self, room_id):
"""
Fetch the "earliest token". This is used by the room stats delta
processor to ignore deltas that have been processed between the
start of the background task and any particular room's stats
being calculated.
Returns:
Deferred[int]
"""
return self._simple_select_one_onecol(
"room_stats_earliest_token",
{"room_id": room_id},
retcol="token",
allow_none=True,
)
def update_stats(self, stats_type, stats_id, ts, fields):
table, id_col = TYPE_TO_ROOM[stats_type]
return self._simple_upsert(
table=table,
keyvalues={id_col: stats_id, "ts": ts},
values=fields,
desc="update_stats",
)
def _update_stats_txn(self, txn, stats_type, stats_id, ts, fields):
table, id_col = TYPE_TO_ROOM[stats_type]
return self._simple_upsert_txn(
txn, table=table, keyvalues={id_col: stats_id, "ts": ts}, values=fields
)
def update_stats_delta(self, ts, stats_type, stats_id, field, value):
def _update_stats_delta(txn):
table, id_col = TYPE_TO_ROOM[stats_type]
sql = (
"SELECT * FROM %s"
" WHERE %s=? and ts=("
" SELECT MAX(ts) FROM %s"
" WHERE %s=?"
")"
) % (table, id_col, table, id_col)
txn.execute(sql, (stats_id, stats_id))
rows = self.cursor_to_dict(txn)
if len(rows) == 0:
# silently skip as we don't have anything to apply a delta to yet.
# this tries to minimise any race between the initial sync and
# subsequent deltas arriving.
return
current_ts = ts
latest_ts = rows[0]["ts"]
if current_ts < latest_ts:
# This one is in the past, but we're just encountering it now.
# Mark it as part of the current bucket.
current_ts = latest_ts
elif ts != latest_ts:
# we have to copy our absolute counters over to the new entry.
values = {
key: rows[0][key] for key in ABSOLUTE_STATS_FIELDS[stats_type]
}
values[id_col] = stats_id
values["ts"] = ts
values["bucket_size"] = self.stats_bucket_size
self._simple_insert_txn(txn, table=table, values=values)
# actually update the new value
if field in ABSOLUTE_STATS_FIELDS[stats_type]:
self._simple_update_txn(
txn,
table=table,
keyvalues={id_col: stats_id, "ts": current_ts},
updatevalues={field: value},
)
else:
sql = ("UPDATE %s SET %s=%s+? WHERE %s=? AND ts=?") % (
table,
field,
field,
id_col,
)
txn.execute(sql, (value, stats_id, current_ts))
return self.runInteraction("update_stats_delta", _update_stats_delta)
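As a quick aside on the bucketing used in `_populate_stats_process_rooms` above: timestamps are snapped down to the start of their bucket before stats rows are written, so all deltas within a bucket share one row. A minimal standalone sketch (the bucket size and timestamp below are made-up illustrative values, not taken from the diff):

# snap a timestamp down to the start of its stats bucket
stats_bucket_size = 86400  # assumed: one day, in seconds
now = 90061                # example timestamp, roughly 25 hours in
bucket_start = (now // stats_bucket_size) * stats_bucket_size
assert bucket_start == 86400  # i.e. the start of the second day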

View File

@@ -64,59 +64,135 @@ _EventDictReturn = namedtuple(
)
def lower_bound(token, engine, inclusive=False):
inclusive = "=" if inclusive else ""
if token.topological is None:
return "(%d <%s %s)" % (token.stream, inclusive, "stream_ordering")
else:
if isinstance(engine, PostgresEngine):
# Postgres doesn't optimise ``(x < a) OR (x=a AND y<b)`` as well
# as it optimises ``(x,y) < (a,b)`` on multicolumn indexes. So we
# use the latter form when running against postgres.
return "((%d,%d) <%s (%s,%s))" % (
token.topological,
token.stream,
inclusive,
"topological_ordering",
"stream_ordering",
def generate_pagination_where_clause(
direction, column_names, from_token, to_token, engine,
):
"""Creates an SQL expression to bound the columns by the pagination
tokens.
For example creates an SQL expression like:
(6, 7) >= (topological_ordering, stream_ordering)
AND (5, 3) < (topological_ordering, stream_ordering)
would be generated for dir=b, from_token=(6, 7) and to_token=(5, 3).
Note that tokens are considered to be after the row they are in, e.g. if
a row A has a token T, then we consider A to be before T. This convention
is important when figuring out inequalities for the generated SQL, and
produces the following result:
- If paginating forwards then we exclude any rows matching the from
token, but include those that match the to token.
- If paginating backwards then we include any rows matching the from
token, but exclude those that match the to token.
Args:
direction (str): Whether we're paginating backwards("b") or
forwards ("f").
column_names (tuple[str, str]): The column names to bound. Must *not*
be user defined as these get inserted directly into the SQL
statement without escapes.
from_token (tuple[int, int]|None): The start point for the pagination.
This is an exclusive minimum bound if direction is "f", and an
inclusive maximum bound if direction is "b".
to_token (tuple[int, int]|None): The end point for the pagination.
This is an inclusive maximum bound if direction is "f", and an
exclusive minimum bound if direction is "b".
engine: The database engine to generate the clauses for
Returns:
str: The sql expression
"""
assert direction in ("b", "f")
where_clause = []
if from_token:
where_clause.append(
_make_generic_sql_bound(
bound=">=" if direction == "b" else "<",
column_names=column_names,
values=from_token,
engine=engine,
)
return "(%d < %s OR (%d = %s AND %d <%s %s))" % (
token.topological,
"topological_ordering",
token.topological,
"topological_ordering",
token.stream,
inclusive,
"stream_ordering",
)
def upper_bound(token, engine, inclusive=True):
inclusive = "=" if inclusive else ""
if token.topological is None:
return "(%d >%s %s)" % (token.stream, inclusive, "stream_ordering")
else:
if isinstance(engine, PostgresEngine):
# Postgres doesn't optimise ``(x > a) OR (x=a AND y>b)`` as well
# as it optimises ``(x,y) > (a,b)`` on multicolumn indexes. So we
# use the latter form when running against postgres.
return "((%d,%d) >%s (%s,%s))" % (
token.topological,
token.stream,
inclusive,
"topological_ordering",
"stream_ordering",
if to_token:
where_clause.append(
_make_generic_sql_bound(
bound="<" if direction == "b" else ">=",
column_names=column_names,
values=to_token,
engine=engine,
)
return "(%d > %s OR (%d = %s AND %d >%s %s))" % (
token.topological,
"topological_ordering",
token.topological,
"topological_ordering",
token.stream,
inclusive,
"stream_ordering",
)
return " AND ".join(where_clause)
def _make_generic_sql_bound(bound, column_names, values, engine):
"""Create an SQL expression that bounds the given column names by the
values, e.g. create the equivalent of `(1, 2) < (col1, col2)`.
Only works with two columns.
Older versions of SQLite don't support that syntax so we have to expand it
out manually.
Args:
bound (str): The comparison operator to use. One of ">", "<", ">=",
"<=", where the values are on the left and columns on the right.
column_names (tuple[str, str]): The column names. Must *not* be user defined
as these get inserted directly into the SQL statement without
escapes.
values (tuple[int|None, int]): The values to bound the columns by. If
the first value is None then only creates a bound on the second
column.
engine: The database engine to generate the SQL for
Returns:
str
"""
assert bound in (">", "<", ">=", "<=")
name1, name2 = column_names
val1, val2 = values
if val1 is None:
val2 = int(val2)
return "(%d %s %s)" % (val2, bound, name2)
val1 = int(val1)
val2 = int(val2)
if isinstance(engine, PostgresEngine):
# Postgres doesn't optimise ``(x < a) OR (x=a AND y<b)`` as well
# as it optimises ``(x,y) < (a,b)`` on multicolumn indexes. So we
# use the latter form when running against postgres.
return "((%d,%d) %s (%s,%s))" % (
val1, val2,
bound,
name1, name2,
)
# We want to generate queries of e.g. the form:
#
# (val1 < name1 OR (val1 = name1 AND val2 <= name2))
#
# which is equivalent to (val1, val2) < (name1, name2)
return """(
{val1:d} {strict_bound} {name1}
OR ({val1:d} = {name1} AND {val2:d} {bound} {name2})
)""".format(
name1=name1,
val1=val1,
name2=name2,
val2=val2,
strict_bound=bound[0],  # the first comparison must always be the strict form of the bound
bound=bound,
)
def filter_to_clause(event_filter):
# NB: This may create SQL clauses that don't optimise well (and we don't
@@ -319,7 +395,9 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
rows = yield self.runInteraction("get_room_events_stream_for_room", f)
ret = yield self._get_events([r.event_id for r in rows], get_prev_content=True)
ret = yield self.get_events_as_list([
r.event_id for r in rows], get_prev_content=True,
)
self._set_before_and_after(ret, rows, topo_order=from_id is None)
@@ -367,7 +445,9 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
rows = yield self.runInteraction("get_membership_changes_for_user", f)
ret = yield self._get_events([r.event_id for r in rows], get_prev_content=True)
ret = yield self.get_events_as_list(
[r.event_id for r in rows], get_prev_content=True,
)
self._set_before_and_after(ret, rows, topo_order=False)
@@ -394,7 +474,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
)
logger.debug("stream before")
events = yield self._get_events(
events = yield self.get_events_as_list(
[r.event_id for r in rows], get_prev_content=True
)
logger.debug("stream after")
@@ -580,11 +660,11 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
event_filter,
)
events_before = yield self._get_events(
events_before = yield self.get_events_as_list(
[e for e in results["before"]["event_ids"]], get_prev_content=True
)
events_after = yield self._get_events(
events_after = yield self.get_events_as_list(
[e for e in results["after"]["event_ids"]], get_prev_content=True
)
@@ -697,7 +777,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
"get_all_new_events_stream", get_all_new_events_stream_txn
)
events = yield self._get_events(event_ids)
events = yield self.get_events_as_list(event_ids)
defer.returnValue((upper_bound, events))
@@ -758,20 +838,16 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
args = [False, room_id]
if direction == 'b':
order = "DESC"
bounds = upper_bound(from_token, self.database_engine)
if to_token:
bounds = "%s AND %s" % (
bounds,
lower_bound(to_token, self.database_engine),
)
else:
order = "ASC"
bounds = lower_bound(from_token, self.database_engine)
if to_token:
bounds = "%s AND %s" % (
bounds,
upper_bound(to_token, self.database_engine),
)
bounds = generate_pagination_where_clause(
direction=direction,
column_names=("topological_ordering", "stream_ordering"),
from_token=from_token,
to_token=to_token,
engine=self.database_engine,
)
filter_clause, filter_args = filter_to_clause(event_filter)
@@ -849,7 +925,7 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
event_filter,
)
events = yield self._get_events(
events = yield self.get_events_as_list(
[r.event_id for r in rows], get_prev_content=True
)
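To make the new pagination bounds concrete, here is a minimal standalone sketch of the expanded two-column comparison that `_make_generic_sql_bound` falls back to on SQLite. It is an illustration of the same string construction, not the Synapse function itself:

def expanded_two_column_bound(bound, column_names, values):
    # Build the equivalent of (val1, val2) <bound> (name1, name2) as the
    # two-term OR form, which older SQLite versions need.
    name1, name2 = column_names
    val1, val2 = values
    return "(%d %s %s OR (%d = %s AND %d %s %s))" % (
        val1, bound[0], name1, val1, name1, val2, bound, name2,
    )

print(expanded_two_column_bound("<=", ("topological_ordering", "stream_ordering"), (5, 3)))
# (5 < topological_ordering OR (5 = topological_ordering AND 3 <= stream_ordering))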

View File

@@ -156,6 +156,25 @@ def concurrently_execute(func, args, limit):
], consumeErrors=True)).addErrback(unwrapFirstError)
def yieldable_gather_results(func, iter, *args, **kwargs):
"""Executes the function with each argument concurrently.
Args:
func (func): Function to execute that returns a Deferred
iter (iter): An iterable that yields items that get passed as the first
argument to the function
*args: Arguments to be passed to each call to func
Returns:
Deferred[list]: Resolved when all functions have been invoked, or errors if
one of the function calls fails.
"""
return logcontext.make_deferred_yieldable(defer.gatherResults([
run_in_background(func, item, *args, **kwargs)
for item in iter
], consumeErrors=True)).addErrback(unwrapFirstError)
class Linearizer(object):
"""Limits concurrent access to resources based on a key. Useful to ensure
only a few things happen at a time on a given resource.
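A hedged usage sketch for the new `yieldable_gather_results` helper above (the `double` function and the values are made up for illustration; the point is that the per-item function is run concurrently for every item and the results come back in order):

from twisted.internet import defer

@defer.inlineCallbacks
def double_all(values):
    def double(x):
        # hypothetical per-item function returning a Deferred
        return defer.succeed(x * 2)

    results = yield yieldable_gather_results(double, values)
    defer.returnValue(results)  # double_all([1, 2, 3]) resolves to [2, 4, 6]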

View File

@@ -27,6 +27,7 @@ def user_left_room(distributor, user, room_id):
distributor.fire("user_left_room", user=user, room_id=room_id)
# XXX: this is no longer used. We should probably kill it.
def user_joined_room(distributor, user, room_id):
distributor.fire("user_joined_room", user=user, room_id=room_id)

View File

@@ -30,31 +30,14 @@ logger = logging.getLogger(__name__)
class FederationRateLimiter(object):
def __init__(self, clock, window_size, sleep_limit, sleep_msec,
reject_limit, concurrent_requests):
def __init__(self, clock, config):
"""
Args:
clock (Clock)
window_size (int): The window size in milliseconds.
sleep_limit (int): The number of requests received in the last
`window_size` milliseconds before we artificially start
delaying processing of requests.
sleep_msec (int): The number of milliseconds to delay processing
of incoming requests by.
reject_limit (int): The maximum number of requests that can be
queued for processing before we start rejecting requests with
a 429 Too Many Requests response.
concurrent_requests (int): The number of concurrent requests to
process.
config (FederationRateLimitConfig)
"""
self.clock = clock
self.window_size = window_size
self.sleep_limit = sleep_limit
self.sleep_msec = sleep_msec
self.reject_limit = reject_limit
self.concurrent_requests = concurrent_requests
self._config = config
self.ratelimiters = {}
def ratelimit(self, host):
@@ -76,25 +59,25 @@ class FederationRateLimiter(object):
host,
_PerHostRatelimiter(
clock=self.clock,
window_size=self.window_size,
sleep_limit=self.sleep_limit,
sleep_msec=self.sleep_msec,
reject_limit=self.reject_limit,
concurrent_requests=self.concurrent_requests,
config=self._config,
)
).ratelimit()
class _PerHostRatelimiter(object):
def __init__(self, clock, window_size, sleep_limit, sleep_msec,
reject_limit, concurrent_requests):
def __init__(self, clock, config):
"""
Args:
clock (Clock)
config (FederationRateLimitConfig)
"""
self.clock = clock
self.window_size = window_size
self.sleep_limit = sleep_limit
self.sleep_sec = sleep_msec / 1000.0
self.reject_limit = reject_limit
self.concurrent_requests = concurrent_requests
self.window_size = config.window_size
self.sleep_limit = config.sleep_limit
self.sleep_sec = config.sleep_delay / 1000.0
self.reject_limit = config.reject_limit
self.concurrent_requests = config.concurrent
# request_id objects for requests which have been slept
self.sleeping_requests = set()
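The attributes read from the config object above suggest the shape that `FederationRateLimitConfig` needs to have. A minimal stand-in, e.g. for tests, might look like this (the class name and values are illustrative assumptions; only the attribute names are taken from the code above):

class StubFederationRateLimitConfig(object):
    window_size = 1000   # ms
    sleep_limit = 10     # requests per window before we start sleeping
    sleep_delay = 500    # ms to delay slept requests by
    reject_limit = 50    # queued requests before we return 429s
    concurrent = 3       # requests processed concurrently per host

# limiter = FederationRateLimiter(clock, StubFederationRateLimitConfig())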

View File

@@ -37,8 +37,12 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
hs_config = self.default_config("test")
# some of the tests rely on us having a user consent version
hs_config.user_consent_version = "test_consent_version"
hs_config.max_mau_value = 50
hs_config["user_consent"] = {
"version": "test_consent_version",
"template_dir": ".",
}
hs_config["max_mau_value"] = 50
hs_config["limit_usage_by_mau"] = True
hs = self.setup_test_homeserver(config=hs_config, expire_access_token=True)
return hs
@@ -224,3 +228,10 @@ class RegistrationTestCase(unittest.HomeserverTestCase):
def test_register_not_support_user(self):
res = self.get_success(self.handler.register(localpart='user'))
self.assertFalse(self.store.is_support_user(res[0]))
def test_invalid_user_id_length(self):
invalid_user_id = "x" * 256
self.get_failure(
self.handler.register(localpart=invalid_user_id),
SynapseError
)

View File

@@ -0,0 +1,251 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from mock import Mock
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.rest import admin
from synapse.rest.client.v1 import login, room
from tests import unittest
class StatsRoomTests(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
def prepare(self, reactor, clock, hs):
self.store = hs.get_datastore()
self.handler = self.hs.get_stats_handler()
def _add_background_updates(self):
"""
Add the background updates we need to run.
"""
# Ugh, have to reset this flag
self.store._all_done = False
self.get_success(
self.store._simple_insert(
"background_updates",
{"update_name": "populate_stats_createtables", "progress_json": "{}"},
)
)
self.get_success(
self.store._simple_insert(
"background_updates",
{
"update_name": "populate_stats_process_rooms",
"progress_json": "{}",
"depends_on": "populate_stats_createtables",
},
)
)
self.get_success(
self.store._simple_insert(
"background_updates",
{
"update_name": "populate_stats_cleanup",
"progress_json": "{}",
"depends_on": "populate_stats_process_rooms",
},
)
)
def test_initial_room(self):
"""
The background updates will build the table from scratch.
"""
r = self.get_success(self.store.get_all_room_state())
self.assertEqual(len(r), 0)
# Disable stats
self.hs.config.stats_enabled = False
self.handler.stats_enabled = False
u1 = self.register_user("u1", "pass")
u1_token = self.login("u1", "pass")
room_1 = self.helper.create_room_as(u1, tok=u1_token)
self.helper.send_state(
room_1, event_type="m.room.topic", body={"topic": "foo"}, tok=u1_token
)
# Stats disabled, shouldn't have done anything
r = self.get_success(self.store.get_all_room_state())
self.assertEqual(len(r), 0)
# Enable stats
self.hs.config.stats_enabled = True
self.handler.stats_enabled = True
# Do the initial population of the stats tables via the background updates
self._add_background_updates()
while not self.get_success(self.store.has_completed_background_updates()):
self.get_success(self.store.do_next_background_update(100), by=0.1)
r = self.get_success(self.store.get_all_room_state())
self.assertEqual(len(r), 1)
self.assertEqual(r[0]["topic"], "foo")
def test_initial_earliest_token(self):
"""
Ingestion via notify_new_event will ignore tokens that the background
update has already processed.
"""
self.reactor.advance(86401)
self.hs.config.stats_enabled = False
self.handler.stats_enabled = False
u1 = self.register_user("u1", "pass")
u1_token = self.login("u1", "pass")
u2 = self.register_user("u2", "pass")
u2_token = self.login("u2", "pass")
u3 = self.register_user("u3", "pass")
u3_token = self.login("u3", "pass")
room_1 = self.helper.create_room_as(u1, tok=u1_token)
self.helper.send_state(
room_1, event_type="m.room.topic", body={"topic": "foo"}, tok=u1_token
)
# Begin the ingestion by creating the temp tables. This will also store
# the position that the deltas should begin at, once they take over.
self.hs.config.stats_enabled = True
self.handler.stats_enabled = True
self.store._all_done = False
self.get_success(self.store.update_stats_stream_pos(None))
self.get_success(
self.store._simple_insert(
"background_updates",
{"update_name": "populate_stats_createtables", "progress_json": "{}"},
)
)
while not self.get_success(self.store.has_completed_background_updates()):
self.get_success(self.store.do_next_background_update(100), by=0.1)
# Now, before the table is actually ingested, add some more events.
self.helper.invite(room=room_1, src=u1, targ=u2, tok=u1_token)
self.helper.join(room=room_1, user=u2, tok=u2_token)
# Now do the initial ingestion.
self.get_success(
self.store._simple_insert(
"background_updates",
{"update_name": "populate_stats_process_rooms", "progress_json": "{}"},
)
)
self.get_success(
self.store._simple_insert(
"background_updates",
{
"update_name": "populate_stats_cleanup",
"progress_json": "{}",
"depends_on": "populate_stats_process_rooms",
},
)
)
self.store._all_done = False
while not self.get_success(self.store.has_completed_background_updates()):
self.get_success(self.store.do_next_background_update(100), by=0.1)
self.reactor.advance(86401)
# Now add some more events, triggering ingestion. Because of the stream
# position being set to before the events sent in the middle, a simpler
# implementation would reprocess those events, and say there were four
# users, not three.
self.helper.invite(room=room_1, src=u1, targ=u3, tok=u1_token)
self.helper.join(room=room_1, user=u3, tok=u3_token)
# Get the deltas! There should be two -- day 1, and day 2.
r = self.get_success(self.store.get_deltas_for_room(room_1, 0))
# The oldest has 2 joined members
self.assertEqual(r[-1]["joined_members"], 2)
# The newest has 3
self.assertEqual(r[0]["joined_members"], 3)
def test_incorrect_state_transition(self):
"""
If the state transition is not one of (JOIN, INVITE, LEAVE, BAN) to
(JOIN, INVITE, LEAVE, BAN), an error is raised.
"""
events = {
"a1": {"membership": Membership.LEAVE},
"a2": {"membership": "not a real thing"},
}
def get_event(event_id):
m = Mock()
m.content = events[event_id]
d = defer.Deferred()
self.reactor.callLater(0.0, d.callback, m)
return d
def get_received_ts(event_id):
return defer.succeed(1)
self.store.get_received_ts = get_received_ts
self.store.get_event = get_event
deltas = [
{
"type": EventTypes.Member,
"state_key": "some_user",
"room_id": "room",
"event_id": "a1",
"prev_event_id": "a2",
"stream_id": "bleb",
}
]
f = self.get_failure(self.handler._handle_deltas(deltas), ValueError)
self.assertEqual(
f.value.args[0], "'not a real thing' is not a valid prev_membership"
)
# And the other way...
deltas = [
{
"type": EventTypes.Member,
"state_key": "some_user",
"room_id": "room",
"event_id": "a2",
"prev_event_id": "a1",
"stream_id": "bleb",
}
]
f = self.get_failure(self.handler._handle_deltas(deltas), ValueError)
self.assertEqual(
f.value.args[0], "'not a real thing' is not a valid membership"
)

View File

@@ -37,7 +37,7 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.update_user_directory = True
config["update_user_directory"] = True
return self.setup_test_homeserver(config=config)
def prepare(self, reactor, clock, hs):
@@ -333,7 +333,7 @@ class TestUserDirSearchDisabled(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.update_user_directory = True
config["update_user_directory"] = True
hs = self.setup_test_homeserver(config=config)
self.config = hs.config

View File

@@ -54,7 +54,9 @@ class MatrixFederationAgentTests(TestCase):
self.agent = MatrixFederationAgent(
reactor=self.reactor,
tls_client_options_factory=ClientTLSOptionsFactory(default_config("test")),
tls_client_options_factory=ClientTLSOptionsFactory(
default_config("test", parse=True)
),
_well_known_tls_policy=TrustingTLSPolicyForHTTPS(),
_srv_resolver=self.mock_resolver,
_well_known_cache=self.well_known_cache,

View File

@@ -15,6 +15,8 @@
from mock import Mock
from netaddr import IPSet
from twisted.internet import defer
from twisted.internet.defer import TimeoutError
from twisted.internet.error import ConnectingCancelledError, DNSLookupError
@@ -209,6 +211,75 @@ class FederationClientTests(HomeserverTestCase):
self.assertIsInstance(f.value, RequestSendFailed)
self.assertIsInstance(f.value.inner_exception, ResponseNeverReceived)
def test_client_ip_range_blacklist(self):
"""Ensure that Synapse does not try to connect to blacklisted IPs"""
# Set up the ip_range blacklist
self.hs.config.federation_ip_range_blacklist = IPSet([
"127.0.0.0/8",
"fe80::/64",
])
self.reactor.lookups["internal"] = "127.0.0.1"
self.reactor.lookups["internalv6"] = "fe80:0:0:0:0:8a2e:370:7337"
self.reactor.lookups["fine"] = "10.20.30.40"
cl = MatrixFederationHttpClient(self.hs, None)
# Try making a GET request to a blacklisted IPv4 address
# ------------------------------------------------------
# Make the request
d = cl.get_json("internal:8008", "foo/bar", timeout=10000)
# Nothing happened yet
self.assertNoResult(d)
self.pump(1)
# Check that it was unable to resolve the address
clients = self.reactor.tcpClients
self.assertEqual(len(clients), 0)
f = self.failureResultOf(d)
self.assertIsInstance(f.value, RequestSendFailed)
self.assertIsInstance(f.value.inner_exception, DNSLookupError)
# Try making a POST request to a blacklisted IPv6 address
# -------------------------------------------------------
# Make the request
d = cl.post_json("internalv6:8008", "foo/bar", timeout=10000)
# Nothing has happened yet
self.assertNoResult(d)
# Move the reactor forwards
self.pump(1)
# Check that it was unable to resolve the address
clients = self.reactor.tcpClients
self.assertEqual(len(clients), 0)
# Check that it was due to a blacklisted DNS lookup
f = self.failureResultOf(d, RequestSendFailed)
self.assertIsInstance(f.value.inner_exception, DNSLookupError)
# Try making a GET request to a non-blacklisted IPv4 address
# ----------------------------------------------------------
# Make the request
d = cl.post_json("fine:8008", "foo/bar", timeout=10000)
# Nothing has happened yet
self.assertNoResult(d)
# Move the reactor forwards
self.pump(1)
# Check that it was able to resolve the address
clients = self.reactor.tcpClients
self.assertNotEqual(len(clients), 0)
# Connection will still fail as this IP address does not resolve to anything
f = self.failureResultOf(d, RequestSendFailed)
self.assertIsInstance(f.value.inner_exception, ConnectingCancelledError)
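For context on the blacklist exercised above: the config value is a netaddr IPSet, and simple membership tests decide whether a resolved address is allowed. A tiny standalone sketch using the same ranges as the test:

from netaddr import IPSet

blacklist = IPSet(["127.0.0.0/8", "fe80::/64"])
assert "127.0.0.1" in blacklist          # the "internal" lookup above
assert "10.20.30.40" not in blacklist    # the "fine" lookup above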
def test_client_gets_headers(self):
"""
Once the client gets the headers, _request returns successfully.

View File

@@ -52,22 +52,26 @@ class EmailPusherTests(HomeserverTestCase):
return d
config = self.default_config()
config.email_enable_notifs = True
config.start_pushers = True
config.email_template_dir = os.path.abspath(
pkg_resources.resource_filename('synapse', 'res/templates')
)
config.email_notif_template_html = "notif_mail.html"
config.email_notif_template_text = "notif_mail.txt"
config.email_smtp_host = "127.0.0.1"
config.email_smtp_port = 20
config.require_transport_security = False
config.email_smtp_user = None
config.email_smtp_pass = None
config.email_app_name = "Matrix"
config.email_notif_from = "test@example.com"
config.email_riot_base_url = None
config["email"] = {
"enable_notifs": True,
"template_dir": os.path.abspath(
pkg_resources.resource_filename('synapse', 'res/templates')
),
"expiry_template_html": "notice_expiry.html",
"expiry_template_text": "notice_expiry.txt",
"notif_template_html": "notif_mail.html",
"notif_template_text": "notif_mail.txt",
"smtp_host": "127.0.0.1",
"smtp_port": 20,
"require_transport_security": False,
"smtp_user": None,
"smtp_pass": None,
"app_name": "Matrix",
"notif_from": "test@example.com",
"riot_base_url": None,
}
config["public_baseurl"] = "aaa"
config["start_pushers"] = True
hs = self.setup_test_homeserver(config=config, sendmail=sendmail)

View File

@@ -54,7 +54,7 @@ class HTTPPusherTests(HomeserverTestCase):
m.post_json_get_json = post_json_get_json
config = self.default_config()
config.start_pushers = True
config["start_pushers"] = True
hs = self.setup_test_homeserver(config=config, simple_http_client=m)

View File

@@ -42,15 +42,18 @@ class ConsentResourceTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.user_consent_version = "1"
config.public_baseurl = ""
config.form_secret = "123abc"
config["public_baseurl"] = "aaaa"
config["form_secret"] = "123abc"
# Make some temporary templates...
temp_consent_path = self.mktemp()
os.mkdir(temp_consent_path)
os.mkdir(os.path.join(temp_consent_path, 'en'))
config.user_consent_template_dir = os.path.abspath(temp_consent_path)
config["user_consent"] = {
"version": "1",
"template_dir": os.path.abspath(temp_consent_path),
}
with open(os.path.join(temp_consent_path, "en/1.html"), 'w') as f:
f.write("{{version}},{{has_consented}}")

View File

@@ -32,7 +32,7 @@ class IdentityTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.enable_3pid_lookup = False
config["enable_3pid_lookup"] = False
self.hs = self.setup_test_homeserver(config=config)
return self.hs

View File

@@ -34,7 +34,7 @@ class DirectoryTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.require_membership_for_aliases = True
config["require_membership_for_aliases"] = True
self.hs = self.setup_test_homeserver(config=config)

View File

@@ -36,9 +36,9 @@ class EventStreamPermissionsTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.enable_registration_captcha = False
config.enable_registration = True
config.auto_join_rooms = []
config["enable_registration_captcha"] = False
config["enable_registration"] = True
config["auto_join_rooms"] = []
hs = self.setup_test_homeserver(
config=config, ratelimiter=NonCallableMock(spec_set=["can_do_action"])

View File

@@ -171,7 +171,7 @@ class ProfilesRestrictedTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
config.require_auth_for_profile_requests = True
config["require_auth_for_profile_requests"] = True
self.hs = self.setup_test_homeserver(config=config)
return self.hs

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -24,7 +25,7 @@ from twisted.internet import defer
import synapse.rest.admin
from synapse.api.constants import Membership
from synapse.rest.client.v1 import login, room
from synapse.rest.client.v1 import login, profile, room
from tests import unittest
@@ -919,7 +920,7 @@ class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):
self.url = b"/_matrix/client/r0/publicRooms"
config = self.default_config()
config.restrict_public_rooms_to_local_users = True
config["restrict_public_rooms_to_local_users"] = True
self.hs = self.setup_test_homeserver(config=config)
return self.hs
@@ -936,3 +937,70 @@ class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):
request, channel = self.make_request("GET", self.url, access_token=tok)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
class PerRoomProfilesForbiddenTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
profile.register_servlets,
]
def make_homeserver(self, reactor, clock):
config = self.default_config()
config["allow_per_room_profiles"] = False
self.hs = self.setup_test_homeserver(config=config)
return self.hs
def prepare(self, reactor, clock, homeserver):
self.user_id = self.register_user("test", "test")
self.tok = self.login("test", "test")
# Set a profile for the test user
self.displayname = "test user"
data = {
"displayname": self.displayname,
}
request_data = json.dumps(data)
request, channel = self.make_request(
"PUT",
"/_matrix/client/r0/profile/%s/displayname" % (self.user_id,),
request_data,
access_token=self.tok,
)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
self.room_id = self.helper.create_room_as(self.user_id, tok=self.tok)
def test_per_room_profile_forbidden(self):
data = {
"membership": "join",
"displayname": "other test user"
}
request_data = json.dumps(data)
request, channel = self.make_request(
"PUT",
"/_matrix/client/r0/rooms/%s/state/m.room.member/%s" % (
self.room_id, self.user_id,
),
request_data,
access_token=self.tok,
)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
event_id = channel.json_body["event_id"]
request, channel = self.make_request(
"GET",
"/_matrix/client/r0/rooms/%s/event/%s" % (self.room_id, event_id),
access_token=self.tok,
)
self.render(request)
self.assertEqual(channel.code, 200, channel.result)
res_displayname = channel.json_body["content"]["displayname"]
self.assertEqual(res_displayname, self.displayname, channel.result)

View File

@@ -127,3 +127,20 @@ class RestHelper(object):
)
return channel.json_body
def send_state(self, room_id, event_type, body, tok, expect_code=200):
path = "/_matrix/client/r0/rooms/%s/state/%s" % (room_id, event_type)
if tok:
path = path + "?access_token=%s" % tok
request, channel = make_request(
self.hs.get_reactor(), "PUT", path, json.dumps(body).encode('utf8')
)
render(request, self.resource, self.hs.get_reactor())
assert int(channel.result["code"]) == expect_code, (
"Expected: %d, got: %d, resp: %r"
% (expect_code, int(channel.result["code"]), channel.result["body"])
)
return channel.json_body
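For reference, the stats tests earlier in this diff use the new helper like so:

self.helper.send_state(
    room_1, event_type="m.room.topic", body={"topic": "foo"}, tok=u1_token
)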

View File

@@ -36,9 +36,9 @@ class FallbackAuthTests(unittest.HomeserverTestCase):
config = self.default_config()
config.enable_registration_captcha = True
config.recaptcha_public_key = "brokencake"
config.registrations_require_3pid = []
config["enable_registration_captcha"] = True
config["recaptcha_public_key"] = "brokencake"
config["registrations_require_3pid"] = []
hs = self.setup_test_homeserver(config=config)
return hs
@@ -92,7 +92,14 @@ class FallbackAuthTests(unittest.HomeserverTestCase):
self.assertEqual(len(self.recaptcha_attempts), 1)
self.assertEqual(self.recaptcha_attempts[0][0]["response"], "a")
# Now we have fulfilled the recaptcha fallback step, we can then send a
# also complete the dummy auth
request, channel = self.make_request(
"POST", "register", {"auth": {"session": session, "type": "m.login.dummy"}}
)
self.render(request)
# Now we should have fulfilled a complete auth flow, including
# the recaptcha fallback step, we can then send a
# request to the register API with the session in the authdict.
request, channel = self.make_request(
"POST", "register", {"auth": {"session": session}}

View File

@@ -1,3 +1,20 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import json
import os
@@ -201,9 +218,11 @@ class AccountValidityTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
# Test for account expiring after a week.
config.enable_registration = True
config.account_validity.enabled = True
config.account_validity.period = 604800000 # Time in ms for 1 week
config["enable_registration"] = True
config["account_validity"] = {
"enabled": True,
"period": 604800000, # Time in ms for 1 week
}
self.hs = self.setup_test_homeserver(config=config)
return self.hs
@@ -299,14 +318,17 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
def make_homeserver(self, reactor, clock):
config = self.default_config()
# Test for account expiring after a week and renewal emails being sent 2
# days before expiry.
config.enable_registration = True
config.account_validity.enabled = True
config.account_validity.renew_by_email_enabled = True
config.account_validity.period = 604800000 # Time in ms for 1 week
config.account_validity.renew_at = 172800000 # Time in ms for 2 days
config.account_validity.renew_email_subject = "Renew your account"
config["enable_registration"] = True
config["account_validity"] = {
"enabled": True,
"period": 604800000, # Time in ms for 1 week
"renew_at": 172800000, # Time in ms for 2 days
"renew_by_email_enabled": True,
"renew_email_subject": "Renew your account",
}
# Email config.
self.email_attempts = []
@@ -315,17 +337,23 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
self.email_attempts.append((args, kwargs))
return
config.email_template_dir = os.path.abspath(
pkg_resources.resource_filename('synapse', 'res/templates')
)
config.email_expiry_template_html = "notice_expiry.html"
config.email_expiry_template_text = "notice_expiry.txt"
config.email_smtp_host = "127.0.0.1"
config.email_smtp_port = 20
config.require_transport_security = False
config.email_smtp_user = None
config.email_smtp_pass = None
config.email_notif_from = "test@example.com"
config["email"] = {
"enable_notifs": True,
"template_dir": os.path.abspath(
pkg_resources.resource_filename('synapse', 'res/templates')
),
"expiry_template_html": "notice_expiry.html",
"expiry_template_text": "notice_expiry.txt",
"notif_template_html": "notif_mail.html",
"notif_template_text": "notif_mail.txt",
"smtp_host": "127.0.0.1",
"smtp_port": 20,
"require_transport_security": False,
"smtp_user": None,
"smtp_pass": None,
"notif_from": "test@example.com",
}
config["public_baseurl"] = "aaa"
self.hs = self.setup_test_homeserver(config=config, sendmail=sendmail)
@@ -398,3 +426,41 @@ class AccountValidityRenewalByEmailTestCase(unittest.HomeserverTestCase):
self.assertEquals(channel.result["code"], b"200", channel.result)
self.assertEqual(len(self.email_attempts), 1)
class AccountValidityBackgroundJobTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
]
def make_homeserver(self, reactor, clock):
self.validity_period = 10
config = self.default_config()
config["enable_registration"] = True
config["account_validity"] = {
"enabled": False,
}
self.hs = self.setup_test_homeserver(config=config)
self.hs.config.account_validity.period = self.validity_period
self.store = self.hs.get_datastore()
return self.hs
def test_background_job(self):
"""
Tests whether the account validity startup background job does the right thing,
which is attaching an expiration date to every account that doesn't already have
one.
"""
user_id = self.register_user("kermit", "user")
now_ms = self.hs.clock.time_msec()
self.get_success(self.store._set_expiration_date_when_missing())
res = self.get_success(self.store.get_expiration_ts_for_user(user_id))
self.assertEqual(res, now_ms + self.validity_period)
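The arithmetic checked by the assertion above is simply "expiration equals the time of the backfill plus the configured validity period". As a throwaway sketch with made-up numbers:

validity_period_ms = 10
now_ms = 1000000  # hypothetical clock reading when the background job runs
expected_expiration_ts = now_ms + validity_period_ms
assert expected_expiration_ts == 1000010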

Some files were not shown because too many files have changed in this diff Show More