
Compare commits


188 Commits

Author SHA1 Message Date
Erik Johnston
230474b620 Actually fix exceptions 2018-11-29 11:46:28 +00:00
Erik Johnston
cf09912280 Don't log ERROR when no profile exists 2018-11-29 11:32:03 +00:00
Matthew Hodgson
cd317a1910 Merge pull request #4235 from matrix-org/travis/fix-auto-invite-errors
Catch room profile errors and anything else that can go wrong
2018-11-28 18:53:22 -08:00
Travis Ralston
11a168442d Catch room profile errors and anything else that can go wrong
Fixes an issue where things become unhappy when the room profile for a user is missing.
2018-11-28 08:57:56 -07:00
Travis Ralston
e8d99369bc Merge pull request #4218 from matrix-org/travis/account-merging
Proof of concept for auto-accepting invites on merged accounts
2018-11-22 09:00:25 -07:00
Travis Ralston
921469383e Use run_as_background_process 2018-11-22 08:50:05 -07:00
Travis Ralston
ccbf6bb222 Safer execution 2018-11-22 08:47:35 -07:00
Travis Ralston
c68d510564 Preserve log contexts in the room_member_handler 2018-11-21 13:21:21 -07:00
Travis Ralston
ce1b393682 Proof of concept for auto-accepting invites
This is for demonstration purposes only. In practice this would actually look up the right profile and use the right thing, not to mention be in a more reasonable location.
2018-11-21 13:03:35 -07:00
Neil Johnson
78ba0e7ab8 Remove riot.im from the list of trusted Identity Servers in the default configuration (#4207) 2018-11-20 12:29:25 +01:00
Richard van der Hoff
416c671474 Merge pull request #4204 from matrix-org/rav/logcontext_leak_fixes
Fix some logcontext leaks
2018-11-20 12:19:19 +01:00
Amber Brown
31425d82a3 Merge remote-tracking branch 'origin/master' into develop 2018-11-19 12:55:25 -06:00
Amber Brown
678ad155a2 Merge tag 'v0.33.9'
Features
--------

- Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled.
([\#4004](https://github.com/matrix-org/synapse/issues/4004), [\#4133](https://github.com/matrix-org/synapse/issues/4133),
[\#4142](https://github.com/matrix-org/synapse/issues/4142), [\#4184](https://github.com/matrix-org/synapse/issues/4184))
- Support for replacing rooms with new ones ([\#4091](https://github.com/matrix-org/synapse/issues/4091), [\#4099](https://github.com/matrix-org/synapse/issues/4099),
[\#4100](https://github.com/matrix-org/synapse/issues/4100), [\#4101](https://github.com/matrix-org/synapse/issues/4101))

Bugfixes
--------

- Fix exceptions when using the email mailer on Python 3. ([\#4095](https://github.com/matrix-org/synapse/issues/4095))
- Fix e2e key backup with more than 9 backup versions ([\#4113](https://github.com/matrix-org/synapse/issues/4113))
- Searches that request profile info now no longer fail with a 500. ([\#4122](https://github.com/matrix-org/synapse/issues/4122))
- fix return code of empty key backups ([\#4123](https://github.com/matrix-org/synapse/issues/4123))
- If the typing stream ID goes backwards (as on a worker when the master restarts), the worker's typing handler will no longer erroneously report rooms containing new
typing events. ([\#4127](https://github.com/matrix-org/synapse/issues/4127))
- Fix table lock of device_lists_remote_cache which could freeze the application ([\#4132](https://github.com/matrix-org/synapse/issues/4132))
- Fix exception when using state res v2 algorithm ([\#4135](https://github.com/matrix-org/synapse/issues/4135))
- Generating the user consent URI no longer fails on Python 3. ([\#4140](https://github.com/matrix-org/synapse/issues/4140),
[\#4163](https://github.com/matrix-org/synapse/issues/4163))
- Loading URL previews from the DB cache on Postgres will no longer cause Unicode type errors when responding to the request, and URL previews will no longer fail if
the remote server returns a Content-Type header with the charset in quotes. ([\#4157](https://github.com/matrix-org/synapse/issues/4157))
- The hash_password script now works on Python 3. ([\#4161](https://github.com/matrix-org/synapse/issues/4161))
- Fix noop checks when updating device keys, reducing spurious device list update notifications. ([\#4164](https://github.com/matrix-org/synapse/issues/4164))

Deprecations and Removals
-------------------------

- The disused and un-specced identicon generator has been removed. ([\#4106](https://github.com/matrix-org/synapse/issues/4106))
- The obsolete and non-functional /pull federation endpoint has been removed. ([\#4118](https://github.com/matrix-org/synapse/issues/4118))
- The deprecated v1 key exchange endpoints have been removed. ([\#4119](https://github.com/matrix-org/synapse/issues/4119))
- Synapse will no longer fetch keys using the fallback deprecated v1 key exchange method and will now always use v2.
([\#4120](https://github.com/matrix-org/synapse/issues/4120))

Internal Changes
----------------

- Fix build of Docker image with docker-compose ([\#3778](https://github.com/matrix-org/synapse/issues/3778))
- Delete unreferenced state groups during history purge ([\#4006](https://github.com/matrix-org/synapse/issues/4006))
- The "Received rdata" log messages on workers is now logged at DEBUG, not INFO. ([\#4108](https://github.com/matrix-org/synapse/issues/4108))
- Reduce replication traffic for device lists ([\#4109](https://github.com/matrix-org/synapse/issues/4109))
- Fix `synapse_replication_tcp_protocol_*_commands` metric label to be full command name, rather than just the first character
([\#4110](https://github.com/matrix-org/synapse/issues/4110))
- Log some bits about room creation ([\#4121](https://github.com/matrix-org/synapse/issues/4121))
- Fix `tox` failure on old systems ([\#4124](https://github.com/matrix-org/synapse/issues/4124))
- Add STATE_V2_TEST room version ([\#4128](https://github.com/matrix-org/synapse/issues/4128))
- Clean up event accesses and tests ([\#4137](https://github.com/matrix-org/synapse/issues/4137))
- The default logging config will now set an explicit log file encoding of UTF-8. ([\#4138](https://github.com/matrix-org/synapse/issues/4138))
- Add helper functions for getting prev and auth events of an event ([\#4139](https://github.com/matrix-org/synapse/issues/4139))
- Add some tests for the HTTP pusher. ([\#4149](https://github.com/matrix-org/synapse/issues/4149))
- add purge_history.sh and purge_remote_media.sh scripts to contrib/ ([\#4155](https://github.com/matrix-org/synapse/issues/4155))
- HTTP tests have been refactored to contain less boilerplate. ([\#4156](https://github.com/matrix-org/synapse/issues/4156))
- Drop incoming events from federation for unknown rooms ([\#4165](https://github.com/matrix-org/synapse/issues/4165))
2018-11-19 12:54:29 -06:00
Amber Brown
47e26f5a4d towncrier 2018-11-19 12:43:14 -06:00
Amber Brown
d102e19e47 version 2018-11-19 12:42:49 -06:00
Amber Brown
80cac86b2c Fix fallback auth on Python 3 (#4197) 2018-11-19 12:27:33 -06:00
Richard van der Hoff
0c05da2e2e changelog 2018-11-19 17:07:42 +00:00
Richard van der Hoff
828f18bd8b Fix logcontext leak in test_url_preview 2018-11-19 17:07:01 +00:00
Richard van der Hoff
a267c2e3ed Fix logcontext leak in http pusher test 2018-11-19 17:07:01 +00:00
Richard van der Hoff
884a561447 Fix some tests which leaked logcontexts 2018-11-19 17:07:01 +00:00
Richard van der Hoff
f5faf6bc14 Fix logcontext leak in EmailPusher 2018-11-19 17:07:01 +00:00
Richard van der Hoff
10cdf519aa Merge pull request #4182 from aaronraimist/update-issue-template
Add a pull request template and add multiple issue templates
2018-11-19 14:24:30 +01:00
Richard van der Hoff
65b793c5a1 Merge pull request #4200 from aaronraimist/vacuum-full-note
Add a note saying you need to manually reclaim disk space
2018-11-19 14:19:51 +01:00
Aaron Raimist
cc2cf2da97 Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-11-18 12:42:08 -06:00
Aaron Raimist
f6cbef6332 Add a note saying you need to manually reclaim disk space
People keep asking why their database hasn't gotten smaller after using this API.

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-11-18 12:38:04 -06:00
Amber Brown
4285c818ec Merge pull request #4193 from kivikakk/add-openbsd-prereq
add jpeg to OpenBSD prereq list
2018-11-17 14:27:53 -06:00
Ashe Connor
ceca3b2f30 add changelog.d entry 2018-11-17 15:01:02 +11:00
Ashe Connor
9548dd9586 add jpeg to OpenBSD prereq list
Signed-off-by: Ashe Connor <ashe@kivikakk.ee>
2018-11-17 14:57:20 +11:00
Travis Ralston
0bb273db07 Merge pull request #4192 from matrix-org/travis/fix-consent-urls
Remove duplicate slashes in generated consent URLs
2018-11-16 09:40:50 -07:00
Travis Ralston
3da9781c98 Fix the terms UI auth tests
By setting the config value directly, we skip the block that adds the slash automatically for us.
2018-11-15 23:00:28 -07:00
Travis Ralston
d75db3df59 Changelog 2018-11-15 20:44:57 -07:00
Travis Ralston
ab4526a153 Remove duplicate slashes in generated consent URLs 2018-11-15 20:41:53 -07:00
Amber Brown
8b1affe7d5 Fix Content-Disposition in media repository (#4176) 2018-11-15 15:55:58 -06:00
Travis Ralston
835779f7fb Add option to track MAU stats (but not limit people) (#3830) 2018-11-15 18:08:27 +00:00
Amber Brown
df758e155d Use <meta> tags to discover the per-page encoding of html previews (#4183) 2018-11-15 11:05:08 -06:00
Amber Brown
a51288e5d6 Add a coveragerc (#4180) 2018-11-15 10:50:08 -06:00
Neil Johnson
b5d92d4d46 Merge pull request #4188 from matrix-org/rav/readme-update-1
Update README for #1491 fix
2018-11-15 13:06:41 +00:00
Richard van der Hoff
4f8bb633c7 Update README for #1491 fix 2018-11-15 10:03:36 +00:00
Neil Johnson
bf648c37e7 release 0.33.9rc1 2018-11-14 11:45:52 +00:00
Richard van der Hoff
4b60c969d8 Merge pull request #4184 from matrix-org/rav/fix_public_consent
Fix an internal server error when viewing the public privacy policy
2018-11-14 11:32:43 +00:00
Richard van der Hoff
0c4dc6fd76 changelog 2018-11-14 10:48:08 +00:00
Richard van der Hoff
c1efcd7c6a Add a test for the public T&Cs form 2018-11-14 10:46:27 +00:00
Richard van der Hoff
83a5f459aa Fix an internal server error when viewing the public privacy policy 2018-11-14 10:21:07 +00:00
David Baker
0869566ad3 Merge pull request #4113 from matrix-org/dbkr/e2e_backup_versions_are_numbers
Make e2e backup versions numeric in the DB
2018-11-14 07:55:48 +00:00
Aaron Raimist
924c82ca16 Fix case
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-11-13 22:12:07 -06:00
Aaron Raimist
5d02704822 Add SUPPORT.md
https://help.github.com/articles/adding-support-resources-to-your-project/
2018-11-13 21:57:10 -06:00
Aaron Raimist
9ca1215582 Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-11-13 21:46:48 -06:00
Aaron Raimist
d86826277d Add a pull request template and add multiple issue templates
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-11-13 21:43:40 -06:00
David Baker
bca3b91c2d Merge remote-tracking branch 'origin/develop' into dbkr/e2e_backup_versions_are_numbers 2018-11-09 18:35:02 +00:00
Erik Johnston
db5a1c059a Merge pull request #4166 from matrix-org/erikj/drop_unknown_events
Drop incoming events from federation for unknown rooms
2018-11-09 17:59:34 +00:00
Erik Johnston
dc59ad5334 Remove hack to support rejoining rooms 2018-11-09 14:58:09 +00:00
David Baker
d44dea0223 pep8 2018-11-09 14:38:31 +00:00
David Baker
4f93abd62d add docs 2018-11-09 13:25:38 +00:00
Erik Johnston
30dd27afff Simplify to always drop events if server isn't in the room 2018-11-09 11:36:45 +00:00
Richard van der Hoff
3cecf5340d Update synapse/federation/federation_server.py
Co-Authored-By: erikjohnston <erikj@jki.re>
2018-11-09 11:28:25 +00:00
Richard van der Hoff
9bce065a53 Update synapse/federation/federation_server.py
Co-Authored-By: erikjohnston <erikj@jki.re>
2018-11-09 11:28:22 +00:00
David Baker
d3fa6194f7 Remove unnecessary str() 2018-11-09 11:11:31 +00:00
Brendan Abolivier
0f3f0a64bf Merge pull request #4168 from matrix-org/babolivier/federation-client-content-type
Add a Content-Type header on POST requests to the federation client script
2018-11-09 11:00:55 +00:00
Brendan Abolivier
91d96759c9 Add a Content-Type header on POST requests to the federation client 2018-11-09 10:41:34 +00:00
Erik Johnston
7b22421a7b Merge pull request #4164 from matrix-org/erikj/fix_device_comparison
Fix noop checks when updating device keys
2018-11-08 14:37:20 +00:00
Erik Johnston
abaa93c158 Add test to assert set_e2e_device_keys correctly returns False on no-op 2018-11-08 14:06:44 +00:00
Richard van der Hoff
c70809a275 Merge pull request #4163 from matrix-org/rav/fix_consent_on_py3
Fix encoding error for consent form on python3
2018-11-08 12:48:51 +00:00
Erik Johnston
5ebed18692 Lets convert bytes to unicode instead 2018-11-08 12:33:13 +00:00
Erik Johnston
94896d7ffe Newsfile 2018-11-08 12:30:25 +00:00
Erik Johnston
06c3d8050f Newsfile 2018-11-08 12:18:41 +00:00
Erik Johnston
b1a22b24ab Fix noop checks when updating device keys
Clients often reupload their device keys (for some reason), so it's
important for the server to check for no-ops before sending out device
list update notifications.

The check is broken on Python 3 because comparing bytes and
unicode always fails, and we write bytes to the DB but get unicode
back when we read.
2018-11-08 12:18:38 +00:00
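A minimal illustration of the failure mode described above (not Synapse's actual code; the JSON value is made up):

```python
# On Python 3, bytes and str never compare equal, so a key read back
# from the DB as one type never matches a re-upload of the other type;
# every upload then looks like a change and triggers a notification.
stored = b'{"curve25519:AAAA": "key+material"}'   # written to the DB as bytes
uploaded = '{"curve25519:AAAA": "key+material"}'  # the re-upload, read as str

assert stored != uploaded  # always unequal on Python 3

def is_noop(old, new):
    # compare after decoding bytes to unicode, as the fix above does
    if isinstance(old, bytes):
        old = old.decode("utf-8")
    if isinstance(new, bytes):
        new = new.decode("utf-8")
    return old == new

assert is_noop(stored, uploaded)
```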
Erik Johnston
9417986f77 Drop PDUs of unknown rooms
When we receive events over federation we will need to know the room
version to be able to correctly handle them, e.g. once we start changing
event formats. Currently, we attempt to handle events in unknown rooms.
2018-11-08 12:11:20 +00:00
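A minimal sketch of the policy this commit describes, with a dict standing in for the homeserver's datastore (an assumption for illustration only):

```python
import logging

logger = logging.getLogger("federation")

# rooms this server participates in, mapped to their room version
known_rooms = {"!abc:example.com": "1"}

def should_handle_pdu(room_id, event_id):
    # without a known room version we cannot interpret the event
    # correctly, so drop PDUs for rooms we know nothing about
    if room_id not in known_rooms:
        logger.debug("Ignoring PDU %s for unknown room %s", event_id, room_id)
        return False
    return True

assert should_handle_pdu("!abc:example.com", "$evt1")
assert not should_handle_pdu("!unknown:example.com", "$evt2")
```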
Richard van der Hoff
0a1fc52971 fix parse_string docstring 2018-11-08 11:12:29 +00:00
Richard van der Hoff
de6223836e changelog 2018-11-08 11:06:28 +00:00
hera
2b075fb03a Fix encoding error for consent form on python3
The form was rendering this as "b'01234....'".

-- richvdh
2018-11-08 11:05:39 +00:00
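A one-line illustration of the bug, with a hypothetical value:

```python
userhmac = b"0123456789abcdef"
print("h=%s" % userhmac)                  # h=b'0123456789abcdef'  (the bug)
print("h=%s" % userhmac.decode("ascii"))  # h=0123456789abcdef    (the fix)
```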
Amber Brown
264cb14402 Port hash_password to Python 3 (#4161)
* port hash_password

* changelog
2018-11-08 04:57:28 +11:00
Amber Brown
b3708830b8 Fix URL preview bugs (type error when loading cache from db, content-type including quotes) (#4157) 2018-11-08 01:37:43 +11:00
Richard van der Hoff
c8ba79327b Merge pull request #4155 from rubo77/purge-api
add purge_history.sh and purge_remote_media.sh scripts
2018-11-07 14:06:41 +00:00
rubo77
2904d133f3 add purge_history.sh and purge_remote_media.sh scripts to contrib/purge_api/
Signed-off-by: Ruben Barkow <github@r.z11.de>
2018-11-07 14:02:41 +01:00
Amber Brown
e62f7f17b3 Remove some boilerplate in tests (#4156) 2018-11-07 03:00:00 +11:00
Travis Ralston
0f5e51f726 Add config variables for enabling terms auth and the policy name (#4142)
So people can still collect consent the old way if they want to.
2018-11-06 10:32:34 +00:00
Hubert Chathi
f1087106cf handle empty backups according to latest spec proposal (#4123)
fixes #4056
2018-11-05 17:59:29 -05:00
Amber Brown
efdcbbe46b Tests for user consent resource (#4140) 2018-11-06 05:53:44 +11:00
Amber Brown
5a63589e80 Add some tests for the HTTP pusher (#4149) 2018-11-06 05:53:24 +11:00
Erik Johnston
bc80b3f454 Add helpers for getting prev and auth events (#4139)
* Add helpers for getting prev and auth events

This is in preparation for allowing the event format to change between
room versions.
2018-11-06 00:35:15 +11:00
Amber Brown
0467384d2f Set the encoding to UTF8 in the default logconfig (#4138) 2018-11-03 02:28:07 +11:00
Erik Johnston
90d713b8c6 Merge pull request #4137 from matrix-org/erikj/clean_up_events
Clean up event accesses and tests
2018-11-02 14:12:49 +00:00
Erik Johnston
76cd7de108 Newsfile 2018-11-02 13:45:56 +00:00
Erik Johnston
b86d05a279 Clean up event accesses and tests
This is in preparation to refactor FrozenEvent to support different
event formats for different room versions
2018-11-02 13:44:14 +00:00
Amber Brown
cb7a6b2379 Fix typing being reset causing infinite syncs (#4127) 2018-11-03 00:19:23 +11:00
Richard van der Hoff
efb9343c8c Merge pull request #4132 from matrix-org/rav/fix_device_list_locking
Fix locked upsert on device_lists_remote_cache
2018-11-02 10:50:53 +00:00
Richard van der Hoff
00f12e00f8 Merge pull request #4133 from matrix-org/travis/fix-terms-auth
Fix logic error that prevented guests from seeing the privacy policy
2018-11-02 10:50:43 +00:00
Erik Johnston
b199534518 Merge pull request #4135 from matrix-org/erikj/fix_state_res_none
Fix None exception in state res v2
2018-11-02 10:45:57 +00:00
Richard van der Hoff
1cc6671ec4 changelog 2018-11-02 10:36:13 +00:00
Richard van der Hoff
350f654e7b Add unique indexes to a couple of tables
The indexes on device_lists_remote_extremeties can be unique, and they
therefore should, to ensure that the db remains consistent.
2018-11-02 10:36:13 +00:00
Richard van der Hoff
50e328d1e7 Remove redundant database locks for device list updates
We can rely on the application-level per-user linearizer.
2018-11-02 10:36:13 +00:00
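A hedged sketch of the per-user linearizer idea using asyncio locks (an illustration, not Synapse's `Linearizer` implementation):

```python
import asyncio
from collections import defaultdict

# one lock per user: writes for the same user are serialised, writes
# for different users run in parallel, so no table-level DB lock is needed
user_locks = defaultdict(asyncio.Lock)

async def update_device_list(user_id, write):
    async with user_locks[user_id]:
        await write()

async def main():
    async def write():
        await asyncio.sleep(0.01)

    await asyncio.gather(
        update_device_list("@alice:example.com", write),
        update_device_list("@alice:example.com", write),  # waits for the first
        update_device_list("@bob:example.com", write),    # runs concurrently
    )

asyncio.run(main())
```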
Erik Johnston
f05d97e283 Newsfile 2018-11-02 10:32:06 +00:00
Erik Johnston
54aec35867 Fix None exception in state res v2 2018-11-02 10:29:19 +00:00
Travis Ralston
552f090f62 Changelog 2018-11-01 16:51:11 -06:00
Travis Ralston
642505abc3 Fix logic error that prevented guests from seeing the privacy policy 2018-11-01 16:48:32 -06:00
Richard van der Hoff
3149d55b7d Merge pull request #3778 from z3ntu/patch-1
Fix build of Docker image with docker-compose
2018-11-01 17:34:56 +00:00
Travis Ralston
c68aab1536 Merge pull request #4004 from matrix-org/travis/login-terms
Add m.login.terms to the registration flow
2018-11-01 11:03:38 -06:00
Erik Johnston
1b21e771d0 Merge pull request #4128 from matrix-org/erikj/state_res_v2_version
Add STATE_V2_TEST room version
2018-11-01 13:17:57 +00:00
Erik Johnston
62d683161e Newsfile 2018-11-01 11:44:44 +00:00
Erik Johnston
b3dd6fa981 Add STATE_V2_TEST room version 2018-11-01 11:43:46 +00:00
Amber Brown
073d400b84 Merge branch 'master' into develop 2018-11-01 21:32:12 +11:00
Amber Brown
907e6da5be Merge branch 'release-v0.33.8' 2018-11-01 21:31:46 +11:00
Travis Ralston
a8c9faa9a2 The tests also need a version parameter 2018-10-31 13:28:08 -06:00
Travis Ralston
a8d41c6aff Include a version query string arg for the consent route 2018-10-31 13:19:28 -06:00
Travis Ralston
d1e7b9c44c Merge branch 'develop' into travis/login-terms 2018-10-31 13:15:14 -06:00
Richard van der Hoff
1729ba1650 Merge pull request #4101 from matrix-org/rav/aliases_for_upgrades
Attempt to move room aliases on room upgrades
2018-10-31 17:52:18 +00:00
Richard van der Hoff
4ecb8b7de8 Merge pull request #4125 from MazeChaZer/fix-typo-in-docker-compose
Fix typo in docker-compose.yml
2018-10-31 15:55:01 +00:00
Richard van der Hoff
0f8591a5a8 Avoid else clause on exception for clarity 2018-10-31 15:43:57 +00:00
Richard van der Hoff
94c7fadc98 Attempt to move room aliases on room upgrades 2018-10-31 15:43:57 +00:00
Richard van der Hoff
9b827c40ca Log some bits about event creation (#4121)
I found these helpful in debugging my room upgrade tests.
2018-10-31 15:42:23 +00:00
Richard van der Hoff
60f128a401 Merge pull request #4124 from matrix-org/rav/fix_tox
Attempt to fix tox installs
2018-10-31 15:41:55 +00:00
Jonas Schürmann
e3758c8c92 Fix typo in docker-compose.yml
Signed-off-by: Jonas Schürmann <jonasschuermann@aol.de>
2018-10-31 15:46:47 +01:00
Amber Brown
916efc8249 Remove fetching keys via the deprecated v1 kex method (#4120) 2018-10-31 23:14:39 +11:00
Amber Brown
f79f454485 Remove deprecated v1 key exchange endpoint (#4119) 2018-10-31 22:29:02 +11:00
Richard van der Hoff
a2d8bff0dc changelog 2018-10-30 21:21:05 +00:00
Richard van der Hoff
0f6ec6d1ae Attempt to fix tox installs
It seems that, at some point, the ability to run tox on old servers (with old
setuptools) got broken - and it was only working on our Jenkins instance by
dint of reusing the tox environments.

Let's try to get tox to do the right thing, and remove the guff from
jenkins/prepare_synapse.sh.

(There is a separate question about whether the jenkins builds should be using
tox to prepare the virtualenv at all here, but that is somewhat orthogonal).
2018-10-30 21:00:31 +00:00
Amber Brown
3bade14ec0 Fix search 500ing (#4122) 2018-10-31 04:33:41 +11:00
Amber Brown
2e223a8c22 Remove the unused /pull federation API (#4118) 2018-10-31 04:24:59 +11:00
Erik Johnston
0794504bce Merge pull request #4006 from matrix-org/erikj/purge_state_groups
Delete unreferenced state groups during purge
2018-10-30 16:58:22 +00:00
Amber Brown
0dce9e1379 Write some tests for the email pusher (#4095) 2018-10-30 23:55:43 +11:00
David Baker
e0934acdbb Cast to int here too 2018-10-30 11:12:23 +00:00
David Baker
12941f5f8b Cast backup version to int when querying 2018-10-30 11:01:07 +00:00
David Baker
2f0f911c52 Convert version back to a string 2018-10-30 10:35:18 +00:00
David Baker
4eacf0f200 news fragment 2018-10-30 10:05:51 +00:00
David Baker
64fa557f80 Try & make it work on postgres 2018-10-30 09:51:04 +00:00
David Baker
563f9b61b1 Make e2e backup versions numeric in the DB
We were doing max(version) which does not do what we wanted
on a column of type TEXT.
2018-10-29 21:01:22 +00:00
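A quick illustration of why `max(version)` breaks on a TEXT column once there are more than 9 versions (the bug fixed by #4113 above):

```python
versions = ["1", "9", "10"]
print(max(versions))                  # '9'  -- lexicographic string comparison
print(max(int(v) for v in versions))  # 10   -- numeric comparison, as intended
```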
Erik Johnston
169851b412 Merge pull request #4109 from matrix-org/erikj/repl_devices
A couple of replication fixes for device lists
2018-10-29 18:16:48 +00:00
Erik Johnston
00fdfbc213 Merge pull request #4111 from matrix-org/erikj/repl_names
Erikj/repl names
2018-10-29 18:16:03 +00:00
Erik Johnston
4f0fa7a120 Newsfile 2018-10-29 18:15:42 +00:00
Erik Johnston
39f419868f Newsfile 2018-10-29 17:38:09 +00:00
Erik Johnston
88e5ffe6fe Deduplicate device updates sent over replication
We currently send several kHz of device list updates over replication
occasionally, which often causes the replication streams to lag and then
get dropped.

A lot of those updates will actually be duplicates, since we don't send
e.g. device_ids across replication, so let's deduplicate it when we pull
them out of the database.
2018-10-29 17:34:34 +00:00
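A minimal sketch of the deduplication described above; the tuple layout is illustrative, not Synapse's actual replication row format:

```python
# keep only the most recent update per (user_id, destination) pair
rows = [
    ("@alice:example.com", "remote.example.org", 1),
    ("@alice:example.com", "remote.example.org", 2),  # duplicate pair
    ("@bob:example.com", "remote.example.org", 3),
]

latest = {}
for user_id, destination, stream_id in rows:
    key = (user_id, destination)
    latest[key] = max(stream_id, latest.get(key, 0))

deduped = sorted((u, d, s) for (u, d), s in latest.items())
print(deduped)  # two rows instead of three
```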
Erik Johnston
a163b748a5 Don't truncate command name in metrics 2018-10-29 17:34:21 +00:00
Erik Johnston
ad88460e0d Move _find_unreferenced_groups 2018-10-29 14:24:19 +00:00
Erik Johnston
664b192a3b Fix set operations thinko 2018-10-29 14:21:43 +00:00
Erik Johnston
f4f223aa44 Don't make temporary list 2018-10-29 14:01:49 +00:00
Erik Johnston
b2399f6281 Make SQL a bit cleaner 2018-10-29 14:01:11 +00:00
Amber Brown
4cd1c9f2ff Delete the disused & unspecced identicon functionality (#4106) 2018-10-29 23:57:24 +11:00
Richard van der Hoff
7fbfea062e Merge pull request #4100 from matrix-org/rav/room_upgrade_avatar
Remember to copy the avatar on room upgrades
2018-10-29 12:49:21 +00:00
Richard van der Hoff
56ca578f77 Merge pull request #4099 from matrix-org/rav/upgrade_odd_pls
Better handling of odd PLs during room upgrades
2018-10-29 12:48:51 +00:00
Richard van der Hoff
bf33eed609 Merge pull request #4091 from matrix-org/rav/room_version_upgrades
Room version upgrade support
2018-10-29 12:47:20 +00:00
Amber Brown
c4b3698a80 Make the replication logger quieter (#4108) 2018-10-29 22:59:44 +11:00
Richard van der Hoff
db24d7f15e Better handling of odd PLs during room upgrades
Fixes handling of rooms where we have permission to send the tombstone, but not
other state. We need to (a) fail more gracefully when we can't send the PLs in
the old room, and (b) not set the PLs in the new room until we are done with
the other stuff.
2018-10-27 00:54:26 +01:00
Richard van der Hoff
5caf79b312 Remember to copy the avatar on room upgrades 2018-10-26 23:56:40 +01:00
Richard van der Hoff
54bbe71867 optimise state copying 2018-10-26 22:51:34 +01:00
Richard van der Hoff
193cadc988 Address review comments
Improve comments, get old room state from the context we already have
2018-10-26 17:10:30 +01:00
Erik Johnston
03e634dad4 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/purge_state_groups 2018-10-26 16:22:45 +01:00
Richard van der Hoff
474810d9d5 fix broken test
This test stubbed out some stuff in a very weird way. I have no idea why. It broke.
2018-10-25 23:15:03 +01:00
Richard van der Hoff
68c0ce62d8 changelog 2018-10-25 19:18:25 +01:00
Richard van der Hoff
e6babc27d5 restrict PLs in old room 2018-10-25 19:18:25 +01:00
Richard van der Hoff
3a263bf3ae copy state 2018-10-25 19:18:25 +01:00
Richard van der Hoff
1b9f253e20 preserve PLs 2018-10-25 19:10:24 +01:00
Richard van der Hoff
4cda300058 preserve room visibility 2018-10-25 19:10:24 +01:00
Richard van der Hoff
0f7d1c9906 Basic initial support for room upgrades
Currently just creates a new, empty, room, and sends a tombstone in the old
room.
2018-10-25 19:10:24 +01:00
Richard van der Hoff
e1948175ee Allow power_level_content_override=None for _send_events_for_new_room 2018-10-25 19:10:24 +01:00
Richard van der Hoff
7f7b2cd3de Make room_member_handler a member of RoomCreationHandler
... to save passing it into `_send_events_for_new_room`
2018-10-25 19:10:18 +01:00
Richard van der Hoff
871c4abfec Factor _generate_room_id out of create_room
we're going to need this for room upgrades.
2018-10-25 18:23:09 +01:00
Travis Ralston
a5468eaadf pep8 2018-10-24 13:54:38 -06:00
Travis Ralston
81880beff4 It helps to import things 2018-10-24 13:32:13 -06:00
Travis Ralston
4acb6fe8a3 Move test to where the other integration tests are 2018-10-24 13:24:24 -06:00
Travis Ralston
9283987f7e Fix test
Debug tests

Try printing the channel

fix

Import and use six

Remove debugging

Disable captcha

Add some mocks

Define the URL

Fix the clock?

Less rendering?

use the other render

Complete the dummy auth stage

Fix last stage of the test

Remove mocks we don't need
2018-10-24 13:23:08 -06:00
Travis Ralston
54def42c19 Merge branch 'develop' into travis/login-terms 2018-10-24 13:22:59 -06:00
Erik Johnston
67f7b9cb50 pep8 2018-10-19 16:06:59 +01:00
Erik Johnston
056f099126 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/purge_state_groups 2018-10-19 15:48:59 +01:00
Erik Johnston
47a9da28ca Batch process handling state groups 2018-10-19 15:48:15 +01:00
Travis Ralston
dba84fa69c Fix terms UI auth test 2018-10-18 12:45:21 -06:00
Travis Ralston
88c5ffec33 Test for terms UI auth 2018-10-18 12:35:30 -06:00
Travis Ralston
49a044aa5f Merge branch 'develop' into travis/login-terms 2018-10-18 09:57:58 -06:00
Travis Ralston
a8ed93a4b5 pep8 2018-10-15 16:10:29 -06:00
Travis Ralston
442734ff9e Ensure the terms params are actually provided 2018-10-15 14:56:13 -06:00
Travis Ralston
762a0982aa Python is hard 2018-10-15 14:46:09 -06:00
Travis Ralston
f293d124b6 Merge branch 'develop' into travis/login-terms 2018-10-15 14:44:32 -06:00
Travis Ralston
dd99db846d Update login terms structure for the proposed language support 2018-10-12 18:03:27 -06:00
Travis Ralston
5119818e9d Rely on the lack of ?u to represent public access
also general cleanup
2018-10-12 18:03:17 -06:00
Travis Ralston
22a2004428 Update documentation and templates for new consent 2018-10-12 17:53:14 -06:00
Travis Ralston
7ede650956 Merge branch 'develop' into travis/login-terms 2018-10-12 16:24:07 -06:00
Erik Johnston
67a1e315cc Fix up comments 2018-10-12 13:49:48 +01:00
Erik Johnston
d9f3db5081 Newsfile 2018-10-04 16:03:08 +01:00
Erik Johnston
4917ff5523 Add state_group index to event_to_state_groups
This is needed to efficiently check for unreferenced state groups during
purge.
2018-10-04 16:03:08 +01:00
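A hedged sketch of what "unreferenced" means here; the mappings are illustrative stand-ins for the event-to-state-group and state-group-edge tables:

```python
# a state group is unreferenced when no event points at it and no other
# state group builds on it as a delta base
def find_unreferenced(state_groups, event_to_state_group, prev_state_group):
    referenced = set(event_to_state_group.values()) | set(prev_state_group.values())
    return [sg for sg in state_groups if sg not in referenced]

groups = [1, 2, 3]
event_to_sg = {"$evt1": 2}  # an event still references group 2
prev_sg = {2: 1}            # group 2 is a delta on top of group 1
print(find_unreferenced(groups, event_to_sg, prev_sg))  # [3] is safe to purge
```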
Erik Johnston
17d585753f Delete unreferened state groups during purge 2018-10-04 16:03:06 +01:00
Travis Ralston
158d6c75b6 Changelog 2018-10-03 17:54:08 -06:00
Travis Ralston
537d0b7b36 Use a flag rather than a new route for the public policy
This also means that the template now has optional parameters, which will need to be documented somehow.
2018-10-03 17:50:11 -06:00
Travis Ralston
f9d34a763c Auto-consent to the privacy policy if the user registered with terms 2018-10-03 17:39:45 -06:00
Travis Ralston
dfcad5fad5 Make the terms flow requried 2018-10-03 17:39:00 -06:00
Travis Ralston
3099d96dba Flesh out the fallback auth for terms 2018-10-03 17:39:00 -06:00
Travis Ralston
149c4f1765 Supply params for terms auth stage
As per https://github.com/matrix-org/matrix-doc/pull/1692
2018-10-03 15:57:42 -06:00
Travis Ralston
fd99787162 Incorporate Dave's work for GDPR login flows
As per https://github.com/vector-im/riot-web/issues/7168#issuecomment-419996117
2018-10-03 15:57:42 -06:00
Luca Weiss
f8825748dd changelog.d entry somehow got lost 2018-09-11 12:56:31 +02:00
Luca Weiss
a40802bcbc Fix build of Docker image with docker-compose
... and fix a typo
2018-09-11 12:11:22 +02:00
125 changed files with 3631 additions and 1149 deletions

.coveragerc

@@ -0,0 +1,12 @@
[run]
branch = True
parallel = True
source = synapse

[paths]
source=
    coverage

[report]
precision = 2
ignore_errors = True


@@ -1,3 +1,9 @@
---
name: Bug report
about: Create a report to help us improve
---
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
@@ -11,38 +17,50 @@ the necessary data to fix your issue.
You can also preview your report before submitting it. You may remove sections
that aren't relevant to your particular case.
Text between <!-- and --> marks will be invisible in the report.
-->
### Description
Describe here the problem that you are experiencing, or the feature you are requesting.
<!-- Describe here the problem that you are experiencing -->
### Steps to reproduce
- For bugs, list the steps
- list the steps
- that reproduce the bug
- using hyphens as bullet points
<!--
Describe how what happens differs from what you expected.
<!-- If you can identify any relevant log snippets from _homeserver.log_, please include
If you can identify any relevant log snippets from _homeserver.log_, please include
those (please be careful to remove any personal or private data). Please surround them with
``` (three backticks, on a line on their own), so that they are formatted legibly. -->
``` (three backticks, on a line on their own), so that they are formatted legibly.
-->
### Version information
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
- **Homeserver**: Was this issue identified on matrix.org or another homeserver?
<!-- Was this issue identified on matrix.org or another homeserver? -->
- **Homeserver**:
If not matrix.org:
- **Version**: What version of Synapse is running? <!--
<!--
What version of Synapse is running?
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Install method**: package manager/git clone/pip
- **Platform**: Tell us about the environment in which your homeserver is operating
- distro, hardware, if it's running in a vm/container, etc.
- **Version**:
- **Install method**:
<!-- examples: package manager/git clone/pip -->
- **Platform**:
<!--
Tell us about the environment in which your homeserver is operating
distro, hardware, if it's running in a vm/container, etc.
-->


@@ -0,0 +1,9 @@
---
name: Feature request
about: Suggest an idea for this project
---
**Description:**
<!-- Describe here the feature you are requesting. -->


@@ -0,0 +1,9 @@
---
name: Support request
about: I need support for Synapse
---
# Please ask for support in [**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org)
## Don't file an issue as a support request.

.github/PULL_REQUEST_TEMPLATE.md

@@ -0,0 +1,7 @@
### Pull Request Checklist
<!-- Please read CONTRIBUTING.rst before submitting your pull request -->
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](CONTRIBUTING.rst#changelog)
* [ ] Pull request includes a [sign off](CONTRIBUTING.rst#sign-off)

.github/SUPPORT.md

@@ -0,0 +1,3 @@
[**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org) is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html
It can also be accessed via the IRC bridge at irc://irc.freenode.net/matrix or on the web here: https://webchat.freenode.net/?channels=matrix


@@ -23,6 +23,9 @@ branches:
- develop
- /^release-v/
# When running the tox environments that call Twisted Trial, we can pass the -j
# flag to run the tests concurrently. We set this to 2 for CPU bound tests
# (SQLite) and 4 for I/O bound tests (PostgreSQL).
matrix:
fast_finish: true
include:
@@ -33,10 +36,10 @@ matrix:
env: TOX_ENV="pep8,check_isort"
- python: 2.7
env: TOX_ENV=py27
env: TOX_ENV=py27 TRIAL_FLAGS="-j 2"
- python: 2.7
env: TOX_ENV=py27-old
env: TOX_ENV=py27-old TRIAL_FLAGS="-j 2"
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
@@ -44,10 +47,10 @@ matrix:
- postgresql
- python: 3.5
env: TOX_ENV=py35
env: TOX_ENV=py35 TRIAL_FLAGS="-j 2"
- python: 3.6
env: TOX_ENV=py36
env: TOX_ENV=py36 TRIAL_FLAGS="-j 2"
- python: 3.6
env: TOX_ENV=py36-postgres TRIAL_FLAGS="-j 4"


@@ -1,3 +1,64 @@
Synapse 0.33.9 (2018-11-19)
===========================
No significant changes.
Synapse 0.33.9rc1 (2018-11-14)
==============================
Features
--------
- Include flags to optionally add `m.login.terms` to the registration flow when consent tracking is enabled. ([\#4004](https://github.com/matrix-org/synapse/issues/4004), [\#4133](https://github.com/matrix-org/synapse/issues/4133), [\#4142](https://github.com/matrix-org/synapse/issues/4142), [\#4184](https://github.com/matrix-org/synapse/issues/4184))
- Support for replacing rooms with new ones ([\#4091](https://github.com/matrix-org/synapse/issues/4091), [\#4099](https://github.com/matrix-org/synapse/issues/4099), [\#4100](https://github.com/matrix-org/synapse/issues/4100), [\#4101](https://github.com/matrix-org/synapse/issues/4101))
Bugfixes
--------
- Fix exceptions when using the email mailer on Python 3. ([\#4095](https://github.com/matrix-org/synapse/issues/4095))
- Fix e2e key backup with more than 9 backup versions ([\#4113](https://github.com/matrix-org/synapse/issues/4113))
- Searches that request profile info now no longer fail with a 500. ([\#4122](https://github.com/matrix-org/synapse/issues/4122))
- fix return code of empty key backups ([\#4123](https://github.com/matrix-org/synapse/issues/4123))
- If the typing stream ID goes backwards (as on a worker when the master restarts), the worker's typing handler will no longer erroneously report rooms containing new typing events. ([\#4127](https://github.com/matrix-org/synapse/issues/4127))
- Fix table lock of device_lists_remote_cache which could freeze the application ([\#4132](https://github.com/matrix-org/synapse/issues/4132))
- Fix exception when using state res v2 algorithm ([\#4135](https://github.com/matrix-org/synapse/issues/4135))
- Generating the user consent URI no longer fails on Python 3. ([\#4140](https://github.com/matrix-org/synapse/issues/4140), [\#4163](https://github.com/matrix-org/synapse/issues/4163))
- Loading URL previews from the DB cache on Postgres will no longer cause Unicode type errors when responding to the request, and URL previews will no longer fail if the remote server returns a Content-Type header with the charset in quotes. ([\#4157](https://github.com/matrix-org/synapse/issues/4157))
- The hash_password script now works on Python 3. ([\#4161](https://github.com/matrix-org/synapse/issues/4161))
- Fix noop checks when updating device keys, reducing spurious device list update notifications. ([\#4164](https://github.com/matrix-org/synapse/issues/4164))
Deprecations and Removals
-------------------------
- The disused and un-specced identicon generator has been removed. ([\#4106](https://github.com/matrix-org/synapse/issues/4106))
- The obsolete and non-functional /pull federation endpoint has been removed. ([\#4118](https://github.com/matrix-org/synapse/issues/4118))
- The deprecated v1 key exchange endpoints have been removed. ([\#4119](https://github.com/matrix-org/synapse/issues/4119))
- Synapse will no longer fetch keys using the fallback deprecated v1 key exchange method and will now always use v2. ([\#4120](https://github.com/matrix-org/synapse/issues/4120))
Internal Changes
----------------
- Fix build of Docker image with docker-compose ([\#3778](https://github.com/matrix-org/synapse/issues/3778))
- Delete unreferenced state groups during history purge ([\#4006](https://github.com/matrix-org/synapse/issues/4006))
- The "Received rdata" log messages on workers is now logged at DEBUG, not INFO. ([\#4108](https://github.com/matrix-org/synapse/issues/4108))
- Reduce replication traffic for device lists ([\#4109](https://github.com/matrix-org/synapse/issues/4109))
- Fix `synapse_replication_tcp_protocol_*_commands` metric label to be full command name, rather than just the first character ([\#4110](https://github.com/matrix-org/synapse/issues/4110))
- Log some bits about room creation ([\#4121](https://github.com/matrix-org/synapse/issues/4121))
- Fix `tox` failure on old systems ([\#4124](https://github.com/matrix-org/synapse/issues/4124))
- Add STATE_V2_TEST room version ([\#4128](https://github.com/matrix-org/synapse/issues/4128))
- Clean up event accesses and tests ([\#4137](https://github.com/matrix-org/synapse/issues/4137))
- The default logging config will now set an explicit log file encoding of UTF-8. ([\#4138](https://github.com/matrix-org/synapse/issues/4138))
- Add helper functions for getting prev and auth events of an event ([\#4139](https://github.com/matrix-org/synapse/issues/4139))
- Add some tests for the HTTP pusher. ([\#4149](https://github.com/matrix-org/synapse/issues/4149))
- add purge_history.sh and purge_remote_media.sh scripts to contrib/ ([\#4155](https://github.com/matrix-org/synapse/issues/4155))
- HTTP tests have been refactored to contain less boilerplate. ([\#4156](https://github.com/matrix-org/synapse/issues/4156))
- Drop incoming events from federation for unknown rooms ([\#4165](https://github.com/matrix-org/synapse/issues/4165))
Synapse 0.33.8 (2018-11-01)
===========================


@@ -34,6 +34,7 @@ prune .github
prune demo/etc
prune docker
prune .circleci
prune .coveragerc
exclude jenkins*
recursive-exclude jenkins *.sh


@@ -142,7 +142,7 @@ Installing prerequisites on openSUSE::
Installing prerequisites on OpenBSD::
doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
libxslt
libxslt jpeg
To install the Synapse homeserver run::
@@ -729,9 +729,10 @@ port:
.. __: `key_management`_
* Synapse does not currently support SNI on the federation protocol
(`bug #1491 <https://github.com/matrix-org/synapse/issues/1491>`_), which
means that using name-based virtual hosting is unreliable.
* Until v0.33.3, Synapse did not support SNI on the federation port
(`bug #1491 <https://github.com/matrix-org/synapse/issues/1491>`_). This bug
is now fixed, but means that federating with older servers can be unreliable
when using name-based virtual hosting.
Furthermore, a number of the normal reasons for using a reverse-proxy do not
apply:

changelog.d/3830.feature

@@ -0,0 +1 @@
Add option to track MAU stats (but not limit people)

changelog.d/4176.bugfix

@@ -0,0 +1 @@
The media repository now no longer fails to decode UTF-8 filenames when downloading remote media.

changelog.d/4180.misc

@@ -0,0 +1 @@
A coveragerc file, as well as the py36-coverage tox target, have been added.

changelog.d/4182.misc

@@ -0,0 +1 @@
Add a GitHub pull request template and add multiple issue templates

changelog.d/4183.bugfix

@@ -0,0 +1 @@
URL previews now correctly decode non-UTF-8 text if the page contains a `<meta http-equiv="Content-Type"` header.

changelog.d/4188.misc

@@ -0,0 +1 @@
Update README to reflect the fact that #1491 is fixed

changelog.d/4192.bugfix

@@ -0,0 +1 @@
Fix an issue where public consent URLs had two slashes.

changelog.d/4193.misc

@@ -0,0 +1 @@
Add missing `jpeg` package prerequisite for OpenBSD in README.

changelog.d/4197.bugfix

@@ -0,0 +1 @@
Fallback auth now accepts the session parameter on Python 3.

changelog.d/4200.misc

@@ -0,0 +1 @@
Add a note saying you need to manually reclaim disk space after using the Purge History API

changelog.d/4204.misc

@@ -0,0 +1 @@
Fix logcontext leaks in EmailPusher and in tests

changelog.d/4207.bugfix

@@ -0,0 +1 @@
Remove riot.im from the list of trusted Identity Servers in the default configuration


@@ -6,9 +6,11 @@ version: '3'
services:
synapse:
build: ../..
build:
context: ../..
dockerfile: docker/Dockerfile
image: docker.io/matrixdotorg/synapse:latest
# Since snyapse does not retry to connect to the database, restart upon
# Since synapse does not retry to connect to the database, restart upon
# failure
restart: unless-stopped
# See the readme for a full documentation of the environment settings
@@ -47,4 +49,4 @@ services:
# You may store the database tables in a local folder..
- ./schemas:/var/lib/postgresql/data
# .. or store them on some high performance storage for better results
# - /path/to/ssd/storage:/var/lib/postfesql/data
# - /path/to/ssd/storage:/var/lib/postgresql/data


@@ -0,0 +1,16 @@
Purge history API examples
==========================
# `purge_history.sh`
A bash script that uses the [purge history API](/docs/admin_api/README.rst) to
purge all messages in a list of rooms up to a certain event. You can select a
timeframe or a number of messages that you want to keep in the room.
Just configure the variables DOMAIN, ADMIN, ROOMS_ARRAY and TIME at the top of
the script.
# `purge_remote_media.sh`
A bash script that uses the [purge remote media API](/docs/admin_api/README.rst) to
purge all old cached remote media.


@@ -0,0 +1,141 @@
#!/bin/bash
# this script will use the api:
# https://github.com/matrix-org/synapse/blob/master/docs/admin_api/purge_history_api.rst
#
# It will purge all messages in a list of rooms up to a certain event
###################################################################################################
# define your domain and admin user
###################################################################################################
DOMAIN=yourserver.tld
# add this user as admin in your home server:
ADMIN="@you_admin_username:$DOMAIN"
API_URL="$DOMAIN:8008/_matrix/client/r0"
###################################################################################################
#choose the rooms to prune old messages from (add a free comment at the end)
###################################################################################################
# you can get the room_id e.g. from your Riot client's "View Source" button on each message
ROOMS_ARRAY=(
'!DgvjtOljKujDBrxyHk:matrix.org#riot:matrix.org'
'!QtykxKocfZaZOUrTwp:matrix.org#Matrix HQ'
)
# ALTERNATIVELY:
# you can select all the rooms that are not encrypted and loop over the result:
# SELECT room_id FROM rooms WHERE room_id NOT IN (SELECT DISTINCT room_id FROM events WHERE type ='m.room.encrypted')
# or
# select all rooms with at least 100 members:
# SELECT q.room_id FROM (select count(*) as numberofusers, room_id FROM current_state_events WHERE type ='m.room.member'
# GROUP BY room_id) AS q LEFT JOIN room_aliases a ON q.room_id=a.room_id WHERE q.numberofusers > 100 ORDER BY numberofusers desc
###################################################################################################
# evaluate the EVENT_ID before which should be pruned
###################################################################################################
# choose a time before which the messages should be pruned:
TIME='12 months ago'
# ALTERNATIVELY:
# a certain time:
# TIME='2016-08-31 23:59:59'
# creates a timestamp from the given time string:
UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
# ALTERNATIVELY:
# prune all messages that are older than 1000 messages ago:
# LAST_MESSAGES=1000
# SQL_GET_EVENT="SELECT event_id from events WHERE type='m.room.message' AND room_id ='$ROOM' ORDER BY received_ts DESC LIMIT 1 offset $(($LAST_MESSAGES - 1))"
# ALTERNATIVELY:
# select the EVENT_ID manually:
#EVENT_ID='$1471814088343495zpPNI:matrix.org' # an example event from 21st of Aug 2016 by Matthew
###################################################################################################
# make the admin user a server admin in the database with
###################################################################################################
# psql -A -t --dbname=synapse -c "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
###################################################################################################
# database function
###################################################################################################
sql (){
# for sqlite3:
#sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
# for postgres:
psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
}
###################################################################################################
# get an access token
###################################################################################################
# for example externally by watching Riot in your browser's network inspector
# or internally on the server locally, use this:
TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
AUTH="Authorization: Bearer $TOKEN"
###################################################################################################
# check if your TOKEN works. For example, this should succeed:
###################################################################################################
# $ curl --header "$AUTH" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
###################################################################################################
# finally start pruning the room:
###################################################################################################
POSTDATA='{"delete_local_events":"true"}' # this will really delete local events, so the messages in the room really disappear unless they are restored by remote federation
for ROOM in "${ROOMS_ARRAY[@]}"; do
echo "########################################### $(date) ################# "
echo "pruning room: $ROOM ..."
ROOM=${ROOM%#*}
#set -x
echo "check for alias in db..."
# for postgres:
sql "SELECT * FROM room_aliases WHERE room_id='$ROOM'"
echo "get event..."
# for postgres:
EVENT_ID=$(sql "SELECT event_id FROM events WHERE type='m.room.message' AND received_ts<'$UNIX_TIMESTAMP' AND room_id='$ROOM' ORDER BY received_ts DESC LIMIT 1;")
if [ "$EVENT_ID" == "" ]; then
echo "no event $TIME"
else
echo "event: $EVENT_ID"
SLEEP=2
set -x
# call purge
OUT=$(curl --header "$AUTH" -s -d "$POSTDATA" -X POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
if [ "$PURGE_ID" == "" ]; then
# probably the history purge is already in progress for $ROOM
: "continuing with next room"
else
while : ; do
# get status of purge and sleep longer each time if still active
sleep $SLEEP
STATUS=$(curl --header "$AUTH" -s -X GET "$API_URL/admin/purge_history_status/$PURGE_ID" |grep status|cut -d'"' -f4)
: "$ROOM --> Status: $STATUS"
[[ "$STATUS" == "active" ]] || break
SLEEP=$((SLEEP + 1))
done
fi
set +x
sleep 1
fi
done
###################################################################################################
# additionally
###################################################################################################
# to benefit from pruning large amounts of data, you need to call VACUUM to free the unused space.
# This can take a very long time (hours) and the server has to be stopped while you do so:
# $ synctl stop
# $ sqlite3 -line homeserver.db "vacuum;"
# $ synctl start
# This could be set, so you don't need to prune every time after deleting some rows:
# $ sqlite3 homeserver.db "PRAGMA auto_vacuum = FULL;"
# be cautious, it could make the database somewhat slow if there are a lot of deletions
exit


@@ -0,0 +1,54 @@
#!/bin/bash
DOMAIN=yourserver.tld
# add this user as admin in your home server:
ADMIN="@you_admin_username:$DOMAIN"
API_URL="$DOMAIN:8008/_matrix/client/r0"
# choose a time before which the messages should be pruned:
# TIME='2016-08-31 23:59:59'
TIME='12 months ago'
# creates a timestamp from the given time string:
UNIX_TIMESTAMP=$(date +%s%3N --date='TZ="UTC+2" '"$TIME")
###################################################################################################
# database function
###################################################################################################
sql (){
# for sqlite3:
#sqlite3 homeserver.db "pragma busy_timeout=20000;$1" | awk '{print $2}'
# for postgres:
psql -A -t --dbname=synapse -c "$1" | grep -v 'Pager'
}
###############################################################################
# make the admin user a server admin in the database with
###############################################################################
# sql "UPDATE users SET admin=1 WHERE name LIKE '$ADMIN'"
###############################################################################
# get an access token
###############################################################################
# for example externally by watching Riot in your browser's network inspector
# or internally on the server locally, use this:
TOKEN=$(sql "SELECT token FROM access_tokens WHERE user_id='$ADMIN' ORDER BY id DESC LIMIT 1")
###############################################################################
# check if your TOKEN works. For example, this should succeed:
###############################################################################
# curl --header "Authorization: Bearer $TOKEN" "$API_URL/rooms/$ROOM/state/m.room.power_levels"
###############################################################################
# optional check size before
###############################################################################
# echo calculate used storage before ...
# du -shc ../.synapse/media_store/*
###############################################################################
# finally start pruning media:
###############################################################################
set -x # for debugging the generated string
curl --header "Authorization: Bearer $TOKEN" -v -X POST "$API_URL/admin/purge_media_cache/?before_ts=$UNIX_TIMESTAMP"


@@ -150,10 +150,12 @@ enable_group_creation: true
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
## Metrics ###


@@ -61,3 +61,11 @@ the following:
}
The status will be one of ``active``, ``complete``, or ``failed``.
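For instance, a small polling loop (a sketch; the `requests` dependency and helper name are assumptions, and the URL shape follows the contrib scripts above):

```python
import time
import requests

def wait_for_purge(api_url, purge_id, token):
    # poll the purge status endpoint until it leaves the "active" state
    headers = {"Authorization": "Bearer %s" % token}
    while True:
        resp = requests.get(
            "%s/admin/purge_history_status/%s" % (api_url, purge_id),
            headers=headers,
        )
        status = resp.json()["status"]
        if status != "active":
            return status  # "complete" or "failed"
        time.sleep(2)
```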
Reclaim disk space (Postgres)
-----------------------------
To reclaim the disk space and return it to the operating system, you need to run
`VACUUM FULL;` on the database.
https://www.postgresql.org/docs/current/sql-vacuum.html


@@ -31,7 +31,7 @@ Note that the templates must be stored under a name giving the language of the
template - currently this must always be `en` (for "English");
internationalisation support is intended for the future.
The template for the policy itself should be versioned and named according to
the version: for example `1.0.html`. The version of the policy which the user
has agreed to is stored in the database.
@@ -85,6 +85,37 @@ Once this is complete, and the server has been restarted, try visiting
an error "Missing string query parameter 'u'". It is now possible to manually
construct URIs where users can give their consent.
### Enabling consent tracking at registration
1. Add the following to your configuration:
```yaml
user_consent:
require_at_registration: true
policy_name: "Privacy Policy" # or whatever you'd like to call the policy
```
2. In your consent templates, make use of the `public_version` variable to
see if an unauthenticated user is viewing the page. This is typically
wrapped around the form that would be used to actually agree to the document:
```
{% if not public_version %}
<!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% endif %}
```
3. Restart Synapse to apply the changes.
Visiting `https://<server>/_matrix/consent` should now give you a view of the privacy
document. This is what users will be able to see when registering for accounts.
### Constructing the consent URI
It may be useful to manually construct the "consent URI" for a given user - for
@@ -106,6 +137,12 @@ query parameters:
`https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
Note that not providing a `u` parameter will be interpreted as wanting to view
the document from an unauthenticated perspective, such as prior to registration.
Therefore, the `h` parameter is not required in this scenario. To enable this
behaviour, set `require_at_registration` to `true` in your `user_consent` config.
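A sketch of constructing the consent URI programmatically. Synapse's consent docs describe `h` as an HMAC-SHA256 of the `u` value keyed by the homeserver's `form_secret`; treat the exact encoding here as an assumption:

```python
import hmac
from hashlib import sha256

form_secret = "<form_secret from homeserver.yaml>"  # placeholder
user = "someuser"

h = hmac.new(form_secret.encode("utf-8"), user.encode("utf-8"), sha256).hexdigest()
print("https://<server>/_matrix/consent?u=%s&h=%s" % (user, h))
```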
Sending users a server notice asking them to agree to the policy
----------------------------------------------------------------


@@ -12,12 +12,15 @@
<p>
All your base are belong to us.
</p>
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% if not public_version %}
<!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% endif %}
{% endif %}
</body>
</html>


@@ -14,22 +14,3 @@ fi
# set up the virtualenv
tox -e py27 --notest -v
TOX_BIN=$TOX_DIR/py27/bin
# cryptography 2.2 requires setuptools >= 18.5.
#
# older versions of virtualenv (?) give us a virtualenv with the same version
# of setuptools as is installed on the system python (and tox runs virtualenv
# under python3, so we get the version of setuptools that is installed on that).
#
# anyway, make sure that we have a recent enough setuptools.
$TOX_BIN/pip install 'setuptools>=18.5'
# we also need a semi-recent version of pip, because old ones fail to install
# the "enum34" dependency of cryptography.
$TOX_BIN/pip install 'pip>=10'
{ python synapse/python_dependencies.py
echo lxml
} | xargs $TOX_BIN/pip install


@@ -154,10 +154,15 @@ def request_json(method, origin_name, origin_key, destination, path, content):
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
headers = {"Host": destination, "Authorization": authorization_headers[0]}
if method == "POST":
headers["Content-Type"] = "application/json"
result = s.request(
method=method,
url=dest,
headers={"Host": destination, "Authorization": authorization_headers[0]},
headers=headers,
verify=False,
data=content,
)
@@ -203,7 +208,7 @@ def main():
parser.add_argument(
"-X",
"--method",
help="HTTP method to use for the request. Defaults to GET if --data is"
help="HTTP method to use for the request. Defaults to GET if --body is"
"unspecified, POST if it is.",
)


@@ -1,39 +0,0 @@
#!/usr/bin/env perl
use strict;
use warnings;
use DBI;
use DBD::SQLite;
use JSON;
use Getopt::Long;
my $db; # = "homeserver.db";
my $server = "http://localhost:8008";
my $size = 320;
GetOptions("db|d=s", \$db,
"server|s=s", \$server,
"width|w=i", \$size) or usage();
usage() unless $db;
my $dbh = DBI->connect("dbi:SQLite:dbname=$db","","") || die $DBI::errstr;
my $res = $dbh->selectall_arrayref("select token, name from access_tokens, users where access_tokens.user_id = users.id group by user_id") || die $DBI::errstr;
foreach (@$res) {
my ($token, $mxid) = ($_->[0], $_->[1]);
my ($user_id) = ($mxid =~ m/@(.*):/);
my ($url) = $dbh->selectrow_array("select avatar_url from profiles where user_id=?", undef, $user_id);
if (!$url || $url =~ /#auto$/) {
`curl -s -o tmp.png "$server/_matrix/media/v1/identicon?name=${mxid}&width=$size&height=$size"`;
my $json = `curl -s -X POST -H "Content-Type: image/png" -T "tmp.png" $server/_matrix/media/v1/upload?access_token=$token`;
my $content_uri = from_json($json)->{content_uri};
`curl -X PUT -H "Content-Type: application/json" --data '{ "avatar_url": "${content_uri}#auto"}' $server/_matrix/client/api/v1/profile/${mxid}/avatar_url?access_token=$token`;
}
}
sub usage {
die "usage: ./make-identicons.pl\n\t-d database [e.g. homeserver.db]\n\t-s homeserver (default: http://localhost:8008)\n\t-w identicon size in pixels (default 320)";
}

View File

@@ -3,13 +3,15 @@
import argparse
import getpass
import sys
import unicodedata
import bcrypt
import yaml
bcrypt_rounds=12
bcrypt_rounds = 12
password_pepper = ""
def prompt_for_pass():
password = getpass.getpass("Password: ")
@@ -23,19 +25,27 @@ def prompt_for_pass():
return password
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Calculate the hash of a new password, so that passwords"
" can be reset")
description=(
"Calculate the hash of a new password, so that passwords can be reset"
)
)
parser.add_argument(
"-p", "--password",
"-p",
"--password",
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-c", "--config",
"-c",
"--config",
type=argparse.FileType('r'),
help="Path to server config file. Used to read in bcrypt_rounds and password_pepper.",
help=(
"Path to server config file. "
"Used to read in bcrypt_rounds and password_pepper."
),
)
args = parser.parse_args()
@@ -49,4 +59,21 @@ if __name__ == "__main__":
if not password:
password = prompt_for_pass()
print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))
# On Python 2, make sure we decode it to Unicode before we normalise it
if isinstance(password, bytes):
try:
password = password.decode(sys.stdin.encoding)
except UnicodeDecodeError:
print(
"ERROR! Your password is not decodable using your terminal encoding (%s)."
% (sys.stdin.encoding,)
)
pw = unicodedata.normalize("NFKC", password)
hashed = bcrypt.hashpw(
pw.encode('utf8') + password_pepper.encode("utf8"),
bcrypt.gensalt(bcrypt_rounds),
).decode('ascii')
print(hashed)
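
For reference, a hash produced by this script can be checked with the inverse
recipe. A minimal sketch, assuming the same NFKC normalisation and pepper
handling as above (`stored_hash` is a placeholder for a hash you generated):

    import unicodedata

    import bcrypt

    def check_password(password, stored_hash, password_pepper=""):
        # Normalise exactly as when hashing, then append the pepper before
        # handing the bytes to bcrypt.
        pw = unicodedata.normalize("NFKC", password)
        return bcrypt.checkpw(
            pw.encode("utf8") + password_pepper.encode("utf8"),
            stored_hash.encode("ascii"),
        )

    # e.g. check_password("s3kr1t", stored_hash) -> True or False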

View File

@@ -27,4 +27,4 @@ try:
except ImportError:
pass
__version__ = "0.33.8"
__version__ = "0.33.9"

View File

@@ -51,6 +51,7 @@ class LoginType(object):
EMAIL_IDENTITY = u"m.login.email.identity"
MSISDN = u"m.login.msisdn"
RECAPTCHA = u"m.login.recaptcha"
TERMS = u"m.login.terms"
DUMMY = u"m.login.dummy"
# Only for C/S API v1
@@ -61,6 +62,7 @@ class LoginType(object):
class EventTypes(object):
Member = "m.room.member"
Create = "m.room.create"
Tombstone = "m.room.tombstone"
JoinRules = "m.room.join_rules"
PowerLevels = "m.room.power_levels"
Aliases = "m.room.aliases"
@@ -101,6 +103,7 @@ class ThirdPartyEntityKind(object):
class RoomVersions(object):
V1 = "1"
VDH_TEST = "vdh-test-version"
STATE_V2_TEST = "state-v2-test"
# the version we will give rooms which are created on this server
@@ -108,7 +111,11 @@ DEFAULT_ROOM_VERSION = RoomVersions.V1
# vdh-test-version is a placeholder to get room versioning support working and tested
# until we have a working v2.
KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
KNOWN_ROOM_VERSIONS = {
RoomVersions.V1,
RoomVersions.VDH_TEST,
RoomVersions.STATE_V2_TEST,
}
ServerNoticeMsgType = "m.server_notice"
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"

View File

@@ -28,7 +28,6 @@ FEDERATION_PREFIX = "/_matrix/federation/v1"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
CONTENT_REPO_PREFIX = "/_matrix/content"
SERVER_KEY_PREFIX = "/_matrix/key/v1"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"

View File

@@ -37,7 +37,6 @@ from synapse.api.urls import (
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
SERVER_KEY_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
@@ -59,7 +58,6 @@ from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirem
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
@@ -236,10 +234,7 @@ class SynapseHomeServer(HomeServer):
)
if name in ["keys", "federation"]:
resources.update({
SERVER_KEY_PREFIX: LocalKey(self),
SERVER_KEY_V2_PREFIX: KeyApiV2Resource(self),
})
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "webclient":
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
@@ -540,7 +535,7 @@ def run(hs):
current_mau_count = 0
reserved_count = 0
store = hs.get_datastore()
if hs.config.limit_usage_by_mau:
if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
current_mau_count = yield store.get_monthly_active_count()
reserved_count = yield store.get_registered_reserved_users_count()
current_mau_gauge.set(float(current_mau_count))

View File

@@ -226,7 +226,15 @@ class SynchrotronPresence(object):
class SynchrotronTyping(object):
def __init__(self, hs):
self._latest_room_serial = 0
self._reset()
def _reset(self):
"""
Reset the typing handler's data caches.
"""
# map room IDs to serial numbers
self._room_serials = {}
# map room IDs to sets of users currently typing
self._room_typing = {}
def stream_positions(self):
@@ -236,6 +244,12 @@ class SynchrotronTyping(object):
return {"typing": self._latest_room_serial}
def process_replication_rows(self, token, rows):
if self._latest_room_serial > token:
# The master has gone backwards. To prevent inconsistent data, just
# clear everything.
self._reset()
# Set the latest serial token to whatever the server gave us.
self._latest_room_serial = token
for row in rows:
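
The guard above generalises to any worker-side cache fed by a replication
stream: if the master's stream token ever moves backwards (typically because
the master restarted), state derived from the stream may be stale, so the safe
response is to drop it and start over. A minimal sketch of the pattern, with
hypothetical names rather than Synapse's actual classes:

    class StreamFollower(object):
        """Follows a replicated stream, rebuilding caches on token resets."""

        def __init__(self):
            self._latest_token = 0
            self._reset()

        def _reset(self):
            # Everything here is derived from the stream, so it can safely
            # be rebuilt from later rows.
            self._cache = {}

        def process_replication_rows(self, token, rows):
            if self._latest_token > token:
                # The master has gone backwards: clear everything rather
                # than serve inconsistent data.
                self._reset()
            self._latest_token = token
            for key, value in rows:
                self._cache[key] = value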

View File

@@ -42,6 +42,14 @@ DEFAULT_CONFIG = """\
# until the user consents to the privacy policy. The value of the setting is
# used as the text of the error.
#
# 'require_at_registration', if enabled, will add a step to the registration
# process, similar to how captcha works. Users will be required to accept the
# policy before their account is created.
#
# 'policy_name' is the display name of the policy users will see when registering
# for an account. Has no effect unless `require_at_registration` is enabled.
# Defaults to "Privacy Policy".
#
# user_consent:
# template_dir: res/templates/privacy
# version: 1.0
@@ -54,6 +62,8 @@ DEFAULT_CONFIG = """\
# block_events_error: >-
# To continue using this homeserver you must review and agree to the
# terms and conditions at %(consent_uri)s
# require_at_registration: False
# policy_name: Privacy Policy
#
"""
@@ -67,6 +77,8 @@ class ConsentConfig(Config):
self.user_consent_server_notice_content = None
self.user_consent_server_notice_to_guests = False
self.block_events_without_consent_error = None
self.user_consent_at_registration = False
self.user_consent_policy_name = "Privacy Policy"
def read_config(self, config):
consent_config = config.get("user_consent")
@@ -83,6 +95,12 @@ class ConsentConfig(Config):
self.user_consent_server_notice_to_guests = bool(consent_config.get(
"send_server_notice_to_guests", False,
))
self.user_consent_at_registration = bool(consent_config.get(
"require_at_registration", False,
))
self.user_consent_policy_name = consent_config.get(
"policy_name", "Privacy Policy",
)
def default_config(self, **kwargs):
return DEFAULT_CONFIG

View File

@@ -50,6 +50,7 @@ handlers:
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
console:
class: logging.StreamHandler
formatter: precise

View File

@@ -93,10 +93,12 @@ class RegistrationConfig(Config):
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
trusted_third_party_id_servers:
- matrix.org
- vector.im
- riot.im
# Users who register on this homeserver will automatically be joined
# to these rooms

View File

@@ -77,6 +77,7 @@ class ServerConfig(Config):
self.max_mau_value = config.get(
"max_mau_value", 0,
)
self.mau_stats_only = config.get("mau_stats_only", False)
self.mau_limits_reserved_threepids = config.get(
"mau_limit_reserved_threepids", []
@@ -372,6 +373,11 @@ class ServerConfig(Config):
# max_mau_value: 50
# mau_trial_days: 2
#
# If enabled, the metrics for the number of monthly active users will
# be populated, but no one will be limited. If limit_usage_by_mau
# is true, this option is implied to be true.
# mau_stats_only: False
#
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
#

View File

@@ -15,6 +15,8 @@
import logging
from six.moves import urllib
from canonicaljson import json
from twisted.internet import defer, reactor
@@ -28,15 +30,15 @@ from synapse.util import logcontext
logger = logging.getLogger(__name__)
KEY_API_V1 = b"/_matrix/key/v1/"
KEY_API_V2 = "/_matrix/key/v2/server/%s"
@defer.inlineCallbacks
def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
def fetch_server_key(server_name, tls_client_options_factory, key_id):
"""Fetch the keys for a remote server."""
factory = SynapseKeyClientFactory()
factory.path = path
factory.path = KEY_API_V2 % (urllib.parse.quote(key_id), )
factory.host = server_name
endpoint = matrix_federation_endpoint(
reactor, server_name, tls_client_options_factory, timeout=30

View File

@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 New Vector Ltd.
# Copyright 2017, 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -18,8 +18,6 @@ import hashlib
import logging
from collections import namedtuple
from six.moves import urllib
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
@@ -395,32 +393,13 @@ class Keyring(object):
@defer.inlineCallbacks
def get_keys_from_server(self, server_name_and_key_ids):
@defer.inlineCallbacks
def get_key(server_name, key_ids):
keys = None
try:
keys = yield self.get_server_verify_key_v2_direct(
server_name, key_ids
)
except Exception as e:
logger.info(
"Unable to get key %r for %r directly: %s %s",
key_ids, server_name,
type(e).__name__, str(e),
)
if not keys:
keys = yield self.get_server_verify_key_v1_direct(
server_name, key_ids
)
keys = {server_name: keys}
defer.returnValue(keys)
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
run_in_background(get_key, server_name, key_ids)
run_in_background(
self.get_server_verify_key_v2_direct,
server_name,
key_ids,
)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
@@ -525,10 +504,7 @@ class Keyring(object):
continue
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_client_options_factory,
path=("/_matrix/key/v2/server/%s" % (
urllib.parse.quote(requested_key_id),
)).encode("ascii"),
server_name, self.hs.tls_client_options_factory, requested_key_id
)
if (u"signatures" not in response
@@ -657,78 +633,6 @@ class Keyring(object):
defer.returnValue(results)
@defer.inlineCallbacks
def get_server_verify_key_v1_direct(self, server_name, key_ids):
"""Finds a verification key for the server with one of the key ids.
Args:
server_name (str): The name of the server to fetch a key for.
keys_ids (list of str): The key_ids to check for.
"""
# Try to fetch the key from the remote server.
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_client_options_factory
)
# Check the response.
x509_certificate_bytes = crypto.dump_certificate(
crypto.FILETYPE_ASN1, tls_certificate
)
if ("signatures" not in response
or server_name not in response["signatures"]):
raise KeyLookupError("Key response not signed by remote server")
if "tls_certificate" not in response:
raise KeyLookupError("Key response missing TLS certificate")
tls_certificate_b64 = response["tls_certificate"]
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
raise KeyLookupError("TLS certificate doesn't match")
# Cache the result in the datastore.
time_now_ms = self.clock.time_msec()
verify_keys = {}
for key_id, key_base64 in response["verify_keys"].items():
if is_signing_algorithm_supported(key_id):
key_bytes = decode_base64(key_base64)
verify_key = decode_verify_key_bytes(key_id, key_bytes)
verify_key.time_added = time_now_ms
verify_keys[key_id] = verify_key
for key_id in response["signatures"][server_name]:
if key_id not in response["verify_keys"]:
raise KeyLookupError(
"Key response must include verification keys for all"
" signatures"
)
if key_id in verify_keys:
verify_signed_json(
response,
server_name,
verify_keys[key_id]
)
yield self.store.store_server_certificate(
server_name,
server_name,
time_now_ms,
tls_certificate,
)
yield self.store_keys(
server_name=server_name,
from_server=server_name,
verify_keys=verify_keys,
)
defer.returnValue(verify_keys)
def store_keys(self, server_name, from_server, verify_keys):
"""Store a collection of verify keys for a given server
Args:

View File

@@ -200,11 +200,11 @@ def _is_membership_change_allowed(event, auth_events):
membership = event.content["membership"]
# Check if this is the room creator joining:
if len(event.prev_events) == 1 and Membership.JOIN == membership:
if len(event.prev_event_ids()) == 1 and Membership.JOIN == membership:
# Get room creation event:
key = (EventTypes.Create, "", )
create = auth_events.get(key)
if create and event.prev_events[0][0] == create.event_id:
if create and event.prev_event_ids()[0] == create.event_id:
if create.content["creator"] == event.state_key:
return

View File

@@ -159,6 +159,24 @@ class EventBase(object):
def keys(self):
return six.iterkeys(self._event_dict)
def prev_event_ids(self):
"""Returns the list of prev event IDs. The order matches the order
specified in the event, though there is no meaning to it.
Returns:
list[str]: The list of event IDs of this event's prev_events
"""
return [e for e, _ in self.prev_events]
def auth_event_ids(self):
"""Returns the list of auth event IDs. The order matches the order
specified in the event, though there is no meaning to it.
Returns:
list[str]: The list of event IDs of this event's auth_events
"""
return [e for e, _ in self.auth_events]
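
To see what these helpers buy, recall that `prev_events` and `auth_events` are
stored as lists of `(event_id, hashes)` pairs, which callers previously
unpacked by hand. A standalone sketch with a simplified event shape (not the
real `EventBase`):

    class Event(object):
        def __init__(self, prev_events, auth_events):
            self.prev_events = prev_events  # list of (event_id, hashes)
            self.auth_events = auth_events  # list of (event_id, hashes)

        def prev_event_ids(self):
            return [e for e, _ in self.prev_events]

        def auth_event_ids(self):
            return [e for e, _ in self.auth_events]

    ev = Event(
        prev_events=[("$p1:hs", {"sha256": "h1"})],
        auth_events=[("$a1:hs", {"sha256": "h2"}), ("$a2:hs", {"sha256": "h3"})],
    )
    assert ev.prev_event_ids() == ["$p1:hs"]
    assert ev.auth_event_ids() == ["$a1:hs", "$a2:hs"]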
class FrozenEvent(EventBase):
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):

View File

@@ -162,8 +162,30 @@ class FederationServer(FederationBase):
p["age_ts"] = request_time - int(p["age"])
del p["age"]
# We try and pull out an event ID so that if later checks fail we
# can log something sensible. We don't mandate an event ID here in
# case future event formats get rid of the key.
possible_event_id = p.get("event_id", "<Unknown>")
# Now we get the room ID so that we can check that we know the
# version of the room.
room_id = p.get("room_id")
if not room_id:
logger.info(
"Ignoring PDU as does not have a room_id. Event ID: %s",
possible_event_id,
)
continue
try:
# In future we will actually use the room version to parse the
# PDU into an event.
yield self.store.get_room_version(room_id)
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
event = event_from_pdu_json(p)
room_id = event.room_id
pdus_by_room.setdefault(room_id, []).append(event)
pdu_results = {}
@@ -323,11 +345,6 @@ class FederationServer(FederationBase):
else:
defer.returnValue((404, ""))
@defer.inlineCallbacks
@log_function
def on_pull_request(self, origin, versions):
raise NotImplementedError("Pull transactions not implemented")
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc()

View File

@@ -183,9 +183,7 @@ class TransactionQueue(object):
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_current_hosts_in_room(
event.room_id, latest_event_ids=[
prev_id for prev_id, _ in event.prev_events
],
event.room_id, latest_event_ids=event.prev_event_ids(),
)
except Exception:
logger.exception(

View File

@@ -362,14 +362,6 @@ class FederationSendServlet(BaseFederationServlet):
defer.returnValue((code, response))
class FederationPullServlet(BaseFederationServlet):
PATH = "/pull/"
# This is for when someone asks us for everything since version X
def on_GET(self, origin, content, query):
return self.handler.on_pull_request(query["origin"][0], query["v"])
class FederationEventServlet(BaseFederationServlet):
PATH = "/event/(?P<event_id>[^/]*)/"
@@ -1261,7 +1253,6 @@ class FederationGroupsSettingJoinPolicyServlet(BaseFederationServlet):
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationPullServlet,
FederationEventServlet,
FederationStateServlet,
FederationStateIdsServlet,

View File

@@ -117,9 +117,6 @@ class Transaction(JsonEncodedObject):
"Require 'transaction_id' to construct a Transaction"
)
for p in pdus:
p.transaction_id = kwargs["transaction_id"]
kwargs["pdus"] = [p.get_pdu_json() for p in pdus]
return Transaction(**kwargs)

View File

@@ -59,6 +59,7 @@ class AuthHandler(BaseHandler):
LoginType.EMAIL_IDENTITY: self._check_email_identity,
LoginType.MSISDN: self._check_msisdn,
LoginType.DUMMY: self._check_dummy_auth,
LoginType.TERMS: self._check_terms_auth,
}
self.bcrypt_rounds = hs.config.bcrypt_rounds
@@ -431,6 +432,9 @@ class AuthHandler(BaseHandler):
def _check_dummy_auth(self, authdict, _):
return defer.succeed(True)
def _check_terms_auth(self, authdict, _):
return defer.succeed(True)
@defer.inlineCallbacks
def _check_threepid(self, medium, authdict):
if 'threepid_creds' not in authdict:
@@ -462,6 +466,22 @@ class AuthHandler(BaseHandler):
def _get_params_recaptcha(self):
return {"public_key": self.hs.config.recaptcha_public_key}
def _get_params_terms(self):
return {
"policies": {
"privacy_policy": {
"version": self.hs.config.user_consent_version,
"en": {
"name": self.hs.config.user_consent_policy_name,
"url": "%s_matrix/consent?v=%s" % (
self.hs.config.public_baseurl,
self.hs.config.user_consent_version,
),
},
},
},
}
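
During registration these params are surfaced to the client inside the
user-interactive auth response for the m.login.terms stage. Roughly, a client
would see something like this sketch (values assumed for illustration):

    {
        "flows": [{"stages": ["m.login.terms"]}],
        "params": {
            "m.login.terms": {
                "policies": {
                    "privacy_policy": {
                        "version": "1.0",
                        "en": {
                            "name": "Privacy Policy",
                            "url": "https://hs.example.com/_matrix/consent?v=1.0",
                        },
                    },
                },
            },
        },
        "session": "<opaque session id>",
    }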
def _auth_dict_for_flows(self, flows, session):
public_flows = []
for f in flows:
@@ -469,6 +489,7 @@ class AuthHandler(BaseHandler):
get_params = {
LoginType.RECAPTCHA: self._get_params_recaptcha,
LoginType.TERMS: self._get_params_terms,
}
params = {}

View File

@@ -138,9 +138,30 @@ class DirectoryHandler(BaseHandler):
)
@defer.inlineCallbacks
def delete_association(self, requester, room_alias):
# association deletion for human users
def delete_association(self, requester, room_alias, send_event=True):
"""Remove an alias from the directory
(this is only meant for human users; AS users should call
delete_appservice_association)
Args:
requester (Requester):
room_alias (RoomAlias):
send_event (bool): Whether to send an updated m.room.aliases event.
Note that, if we delete the canonical alias, we will always attempt
to send an m.room.canonical_alias event
Returns:
Deferred[unicode]: room id that the alias used to point to
Raises:
NotFoundError: if the alias doesn't exist
AuthError: if the user doesn't have perms to delete the alias (ie, the user
is neither the creator of the alias, nor a server admin).
SynapseError: if the alias belongs to an AS
"""
user_id = requester.user.to_string()
try:
@@ -168,10 +189,11 @@ class DirectoryHandler(BaseHandler):
room_id = yield self._delete_association(room_alias)
try:
yield self.send_room_alias_update_event(
requester,
room_id
)
if send_event:
yield self.send_room_alias_update_event(
requester,
room_id
)
yield self._update_canonical_alias(
requester,
@@ -237,10 +259,8 @@ class DirectoryHandler(BaseHandler):
servers = result["servers"]
if not room_id:
raise SynapseError(
404,
raise NotFoundError(
"Room alias %s not found" % (room_alias.to_string(),),
Codes.NOT_FOUND
)
users = yield self.state.get_current_user_in_room(room_id)
@@ -280,10 +300,8 @@ class DirectoryHandler(BaseHandler):
"servers": result.servers,
})
else:
raise SynapseError(
404,
raise NotFoundError(
"Room alias %r not found" % (room_alias.to_string(),),
Codes.NOT_FOUND
)
@defer.inlineCallbacks

View File

@@ -19,7 +19,7 @@ from six import iteritems
from twisted.internet import defer
from synapse.api.errors import RoomKeysVersionError, StoreError, SynapseError
from synapse.api.errors import NotFoundError, RoomKeysVersionError, StoreError
from synapse.util.async_helpers import Linearizer
logger = logging.getLogger(__name__)
@@ -55,6 +55,8 @@ class E2eRoomKeysHandler(object):
room_id(string): room ID to get keys for, or None to get keys for all rooms
session_id(string): session ID to get keys for, or None to get keys for all
sessions
Raises:
NotFoundError: if the backup version does not exist
Returns:
A deferred list of dicts giving the session_data and message metadata for
these room keys.
@@ -63,13 +65,19 @@ class E2eRoomKeysHandler(object):
# we deliberately take the lock to get keys so that changing the version
# works atomically
with (yield self._upload_linearizer.queue(user_id)):
# make sure the backup version exists
try:
yield self.store.get_e2e_room_keys_version_info(user_id, version)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown backup version")
else:
raise
results = yield self.store.get_e2e_room_keys(
user_id, version, room_id, session_id
)
if results['rooms'] == {}:
raise SynapseError(404, "No room_keys found")
defer.returnValue(results)
@defer.inlineCallbacks
@@ -120,7 +128,7 @@ class E2eRoomKeysHandler(object):
}
Raises:
SynapseError: with code 404 if there are no versions defined
NotFoundError: if there are no versions defined
RoomKeysVersionError: if the uploaded version is not the current version
"""
@@ -134,7 +142,7 @@ class E2eRoomKeysHandler(object):
version_info = yield self.store.get_e2e_room_keys_version_info(user_id)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Version '%s' not found" % (version,))
raise NotFoundError("Version '%s' not found" % (version,))
else:
raise
@@ -148,7 +156,7 @@ class E2eRoomKeysHandler(object):
raise RoomKeysVersionError(current_version=version_info['version'])
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Version '%s' not found" % (version,))
raise NotFoundError("Version '%s' not found" % (version,))
else:
raise

View File

@@ -48,13 +48,14 @@ from synapse.crypto.event_signing import (
compute_event_signature,
)
from synapse.events.validator import EventValidator
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationFederationSendEventsRestServlet,
)
from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
from synapse.state import StateResolutionStore, resolve_events_with_store
from synapse.types import UserID, get_domain_from_id
from synapse.types import UserID, create_requester, get_domain_from_id
from synapse.util import logcontext, unwrapFirstError
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room
@@ -105,6 +106,7 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore() # type: synapse.storage.DataStore
self.federation_client = hs.get_federation_client()
self.state_handler = hs.get_state_handler()
@@ -202,27 +204,22 @@ class FederationHandler(BaseHandler):
self.room_queues[room_id].append((pdu, origin))
return
# If we're no longer in the room just ditch the event entirely. This
# is probably an old server that has come back and thinks we're still
# in the room (or we've been rejoined to the room by a state reset).
# If we're not in the room just ditch the event entirely. This is
# probably an old server that has come back and thinks we're still in
# the room (or we've been rejoined to the room by a state reset).
#
# If we were never in the room then maybe our database got vaped and
# we should check if we *are* in fact in the room. If we are then we
# can magically rejoin the room.
# Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version.
is_in_room = yield self.auth.check_host_in_room(
room_id,
self.server_name
)
if not is_in_room:
was_in_room = yield self.store.was_host_joined(
pdu.room_id, self.server_name,
logger.info(
"[%s %s] Ignoring PDU from %s as we're not in the room",
room_id, event_id, origin,
)
if was_in_room:
logger.info(
"[%s %s] Ignoring PDU from %s as we've left the room",
room_id, event_id, origin,
)
defer.returnValue(None)
defer.returnValue(None)
state = None
auth_chain = []
@@ -239,7 +236,7 @@ class FederationHandler(BaseHandler):
room_id, event_id, min_depth,
)
prevs = {e_id for e_id, _ in pdu.prev_events}
prevs = set(pdu.prev_event_ids())
seen = yield self.store.have_seen_events(prevs)
if min_depth and pdu.depth < min_depth:
@@ -557,86 +554,54 @@ class FederationHandler(BaseHandler):
room_id, event_id, event,
)
# FIXME (erikj): Awful hack to make the case where we are not currently
# in the room work
# If state and auth_chain are None, then we don't need to do this check
# as we already know we have enough state in the DB to handle this
# event.
if state and auth_chain and not event.internal_metadata.is_outlier():
is_in_room = yield self.auth.check_host_in_room(
room_id,
self.server_name
)
else:
is_in_room = True
event_ids = set()
if state:
event_ids |= {e.event_id for e in state}
if auth_chain:
event_ids |= {e.event_id for e in auth_chain}
seen_ids = yield self.store.have_seen_events(event_ids)
if state and auth_chain is not None:
# If we have any state or auth_chain given to us by the replication
# layer, then we should handle them (if we haven't before.)
event_infos = []
for e in itertools.chain(auth_chain, state):
if e.event_id in seen_ids:
continue
e.internal_metadata.outlier = True
auth_ids = e.auth_event_ids()
auth = {
(e.type, e.state_key): e for e in auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
}
event_infos.append({
"event": e,
"auth_events": auth,
})
seen_ids.add(e.event_id)
if not is_in_room:
logger.info(
"[%s %s] Got event for room we're not in",
room_id, event_id,
"[%s %s] persisting newly-received auth/state events %s",
room_id, event_id, [e["event"].event_id for e in event_infos]
)
yield self._handle_new_events(origin, event_infos)
try:
yield self._persist_auth_tree(
origin, auth_chain, state, event
)
except AuthError as e:
raise FederationError(
"ERROR",
e.code,
e.msg,
affected=event_id,
)
else:
event_ids = set()
if state:
event_ids |= {e.event_id for e in state}
if auth_chain:
event_ids |= {e.event_id for e in auth_chain}
seen_ids = yield self.store.have_seen_events(event_ids)
if state and auth_chain is not None:
# If we have any state or auth_chain given to us by the replication
# layer, then we should handle them (if we haven't before.)
event_infos = []
for e in itertools.chain(auth_chain, state):
if e.event_id in seen_ids:
continue
e.internal_metadata.outlier = True
auth_ids = [e_id for e_id, _ in e.auth_events]
auth = {
(e.type, e.state_key): e for e in auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
}
event_infos.append({
"event": e,
"auth_events": auth,
})
seen_ids.add(e.event_id)
logger.info(
"[%s %s] persisting newly-received auth/state events %s",
room_id, event_id, [e["event"].event_id for e in event_infos]
)
yield self._handle_new_events(origin, event_infos)
try:
context = yield self._handle_new_event(
origin,
event,
state=state,
)
except AuthError as e:
raise FederationError(
"ERROR",
e.code,
e.msg,
affected=event.event_id,
)
try:
context = yield self._handle_new_event(
origin,
event,
state=state,
)
except AuthError as e:
raise FederationError(
"ERROR",
e.code,
e.msg,
affected=event.event_id,
)
room = yield self.store.get_room(room_id)
@@ -726,7 +691,7 @@ class FederationHandler(BaseHandler):
edges = [
ev.event_id
for ev in events
if set(e_id for e_id, _ in ev.prev_events) - event_ids
if set(ev.prev_event_ids()) - event_ids
]
logger.info(
@@ -753,7 +718,7 @@ class FederationHandler(BaseHandler):
required_auth = set(
a_id
for event in events + list(state_events.values()) + list(auth_events.values())
for a_id, _ in event.auth_events
for a_id in event.auth_event_ids()
)
auth_events.update({
e_id: event_map[e_id] for e_id in required_auth if e_id in event_map
@@ -769,7 +734,7 @@ class FederationHandler(BaseHandler):
auth_events.update(ret_events)
required_auth.update(
a_id for event in ret_events.values() for a_id, _ in event.auth_events
a_id for event in ret_events.values() for a_id in event.auth_event_ids()
)
missing_auth = required_auth - set(auth_events)
@@ -796,7 +761,7 @@ class FederationHandler(BaseHandler):
required_auth.update(
a_id
for event in results if event
for a_id, _ in event.auth_events
for a_id in event.auth_event_ids()
)
missing_auth = required_auth - set(auth_events)
@@ -816,7 +781,7 @@ class FederationHandler(BaseHandler):
"auth_events": {
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in a.auth_events
for a_id in a.auth_event_ids()
if a_id in auth_events
}
})
@@ -828,7 +793,7 @@ class FederationHandler(BaseHandler):
"auth_events": {
(auth_events[a_id].type, auth_events[a_id].state_key):
auth_events[a_id]
for a_id, _ in event_map[e_id].auth_events
for a_id in event_map[e_id].auth_event_ids()
if a_id in auth_events
}
})
@@ -1041,17 +1006,17 @@ class FederationHandler(BaseHandler):
Raises:
SynapseError if the event does not pass muster
"""
if len(ev.prev_events) > 20:
if len(ev.prev_event_ids()) > 20:
logger.warn("Rejecting event %s which has %i prev_events",
ev.event_id, len(ev.prev_events))
ev.event_id, len(ev.prev_event_ids()))
raise SynapseError(
http_client.BAD_REQUEST,
"Too many prev_events",
)
if len(ev.auth_events) > 10:
if len(ev.auth_event_ids()) > 10:
logger.warn("Rejecting event %s which has %i auth_events",
ev.event_id, len(ev.auth_events))
ev.event_id, len(ev.auth_event_ids()))
raise SynapseError(
http_client.BAD_REQUEST,
"Too many auth_events",
@@ -1076,7 +1041,7 @@ class FederationHandler(BaseHandler):
def on_event_auth(self, event_id):
event = yield self.store.get_event(event_id)
auth = yield self.store.get_auth_chain(
[auth_id for auth_id, _ in event.auth_events],
[auth_id for auth_id in event.auth_event_ids()],
include_given=True
)
defer.returnValue([e for e in auth])
@@ -1337,8 +1302,37 @@ class FederationHandler(BaseHandler):
context = yield self.state_handler.compute_event_context(event)
yield self.persist_events_and_notify([(event, context)])
sender = UserID.from_string(event.sender)
target = UserID.from_string(event.state_key)
if (sender.localpart == target.localpart):
run_as_background_process(
"_auto_accept_invite",
self._auto_accept_invite,
sender, target, event.room_id,
)
defer.returnValue(event)
@defer.inlineCallbacks
def _auto_accept_invite(self, sender, target, room_id):
joined = False
for attempt in range(0, 10):
try:
yield self.hs.get_room_member_handler().update_membership(
requester=create_requester(target.to_string()),
target=target,
room_id=room_id,
action="join",
)
joined = True
break
except Exception:
# We're going to retry, but we should log the error
logger.exception("Error auto-accepting invite on attempt %d" % attempt)
yield self.clock.sleep(1)
if not joined:
logger.error("Giving up on trying to auto-accept invite: too many attempts")
@defer.inlineCallbacks
def do_remotely_reject_invite(self, target_hosts, room_id, user_id):
origin, event = yield self._make_and_verify_event(
@@ -1698,7 +1692,7 @@ class FederationHandler(BaseHandler):
missing_auth_events = set()
for e in itertools.chain(auth_events, state, [event]):
for e_id, _ in e.auth_events:
for e_id in e.auth_event_ids():
if e_id not in event_map:
missing_auth_events.add(e_id)
@@ -1717,7 +1711,7 @@ class FederationHandler(BaseHandler):
for e in itertools.chain(auth_events, state, [event]):
auth_for_e = {
(event_map[e_id].type, event_map[e_id].state_key): event_map[e_id]
for e_id, _ in e.auth_events
for e_id in e.auth_event_ids()
if e_id in event_map
}
if create_event:
@@ -1785,10 +1779,10 @@ class FederationHandler(BaseHandler):
# This is a hack to fix some old rooms where the initial join event
# didn't reference the create event in its auth events.
if event.type == EventTypes.Member and not event.auth_events:
if len(event.prev_events) == 1 and event.depth < 5:
if event.type == EventTypes.Member and not event.auth_event_ids():
if len(event.prev_event_ids()) == 1 and event.depth < 5:
c = yield self.store.get_event(
event.prev_events[0][0],
event.prev_event_ids()[0],
allow_none=True,
)
if c and c.type == EventTypes.Create:
@@ -1835,7 +1829,7 @@ class FederationHandler(BaseHandler):
# Now get the current auth_chain for the event.
local_auth_chain = yield self.store.get_auth_chain(
[auth_id for auth_id, _ in event.auth_events],
[auth_id for auth_id in event.auth_event_ids()],
include_given=True
)
@@ -1891,7 +1885,7 @@ class FederationHandler(BaseHandler):
"""
# Check if we have all the auth events.
current_state = set(e.event_id for e in auth_events.values())
event_auth_events = set(e_id for e_id, _ in event.auth_events)
event_auth_events = set(event.auth_event_ids())
if event.is_state():
event_key = (event.type, event.state_key)
@@ -1935,7 +1929,7 @@ class FederationHandler(BaseHandler):
continue
try:
auth_ids = [e_id for e_id, _ in e.auth_events]
auth_ids = e.auth_event_ids()
auth = {
(e.type, e.state_key): e for e in remote_auth_chain
if e.event_id in auth_ids or e.type == EventTypes.Create
@@ -1956,7 +1950,7 @@ class FederationHandler(BaseHandler):
pass
have_events = yield self.store.get_seen_events_with_rejections(
[e_id for e_id, _ in event.auth_events]
event.auth_event_ids()
)
seen_events = set(have_events.keys())
except Exception:
@@ -2058,7 +2052,7 @@ class FederationHandler(BaseHandler):
continue
try:
auth_ids = [e_id for e_id, _ in ev.auth_events]
auth_ids = ev.auth_event_ids()
auth = {
(e.type, e.state_key): e
for e in result["auth_chain"]
@@ -2250,7 +2244,7 @@ class FederationHandler(BaseHandler):
missing_remote_ids = [e.event_id for e in missing_remotes]
base_remote_rejected = list(missing_remotes)
for e in missing_remotes:
for e_id, _ in e.auth_events:
for e_id in e.auth_event_ids():
if e_id in missing_remote_ids:
try:
base_remote_rejected.remove(e)

View File

@@ -427,6 +427,9 @@ class EventCreationHandler(object):
if event.is_state():
prev_state = yield self.deduplicate_state_event(event, context)
if prev_state is not None:
logger.info(
"Not bothering to persist duplicate state event %s", event.event_id,
)
defer.returnValue(prev_state)

View File

@@ -50,7 +50,6 @@ class RegistrationHandler(BaseHandler):
self._auth_handler = hs.get_auth_handler()
self.profile_handler = hs.get_profile_handler()
self.user_directory_handler = hs.get_user_directory_handler()
self.room_creation_handler = self.hs.get_room_creation_handler()
self.captcha_client = CaptchaServerHttpClient(hs)
self._next_generated_user_id = None
@@ -241,7 +240,10 @@ class RegistrationHandler(BaseHandler):
else:
# create room expects the localpart of the room alias
room_alias_localpart = room_alias.localpart
yield self.room_creation_handler.create_room(
# getting the RoomCreationHandler during init gives a dependency
# loop
yield self.hs.get_room_creation_handler().create_room(
fake_requester,
config={
"preset": "public_chat",
@@ -254,9 +256,6 @@ class RegistrationHandler(BaseHandler):
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
# We used to generate default identicons here, but nowadays
# we want clients to generate their own as part of their branding
# rather than there being consistent matrix-wide ones, so we don't.
defer.returnValue((user_id, token))
@defer.inlineCallbacks

View File

@@ -21,7 +21,7 @@ import math
import string
from collections import OrderedDict
from six import string_types
from six import iteritems, string_types
from twisted.internet import defer
@@ -32,10 +32,11 @@ from synapse.api.constants import (
JoinRules,
RoomCreationPreset,
)
from synapse.api.errors import AuthError, Codes, StoreError, SynapseError
from synapse.api.errors import AuthError, Codes, NotFoundError, StoreError, SynapseError
from synapse.storage.state import StateFilter
from synapse.types import RoomAlias, RoomID, RoomStreamToken, StreamToken, UserID
from synapse.util import stringutils
from synapse.util.async_helpers import Linearizer
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -73,6 +74,334 @@ class RoomCreationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
# linearizer to stop two upgrades happening at once
self._upgrade_linearizer = Linearizer("room_upgrade_linearizer")
@defer.inlineCallbacks
def upgrade_room(self, requester, old_room_id, new_version):
"""Replace a room with a new room with a different version
Args:
requester (synapse.types.Requester): the user requesting the upgrade
old_room_id (unicode): the id of the room to be replaced
new_version (unicode): the new room version to use
Returns:
Deferred[unicode]: the new room id
"""
yield self.ratelimit(requester)
user_id = requester.user.to_string()
with (yield self._upgrade_linearizer.queue(old_room_id)):
# start by allocating a new room id
r = yield self.store.get_room(old_room_id)
if r is None:
raise NotFoundError("Unknown room id %s" % (old_room_id,))
new_room_id = yield self._generate_room_id(
creator_id=user_id, is_public=r["is_public"],
)
logger.info("Creating new room %s to replace %s", new_room_id, old_room_id)
# we create and auth the tombstone event before properly creating the new
# room, to check our user has perms in the old room.
tombstone_event, tombstone_context = (
yield self.event_creation_handler.create_event(
requester, {
"type": EventTypes.Tombstone,
"state_key": "",
"room_id": old_room_id,
"sender": user_id,
"content": {
"body": "This room has been replaced",
"replacement_room": new_room_id,
}
},
token_id=requester.access_token_id,
)
)
yield self.auth.check_from_context(tombstone_event, tombstone_context)
yield self.clone_exiting_room(
requester,
old_room_id=old_room_id,
new_room_id=new_room_id,
new_room_version=new_version,
tombstone_event_id=tombstone_event.event_id,
)
# now send the tombstone
yield self.event_creation_handler.send_nonmember_event(
requester, tombstone_event, tombstone_context,
)
old_room_state = yield tombstone_context.get_current_state_ids(self.store)
# update any aliases
yield self._move_aliases_to_new_room(
requester, old_room_id, new_room_id, old_room_state,
)
# and finally, shut down the PLs in the old room, and update them in the new
# room.
yield self._update_upgraded_room_pls(
requester, old_room_id, new_room_id, old_room_state,
)
defer.returnValue(new_room_id)
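
Clients drive this via the room upgrade endpoint whose servlet is registered
further down in this changeset. A sketch of the call, assuming the unstable
client-API prefix the servlet used at the time (URL, token and room ID are
placeholders):

    import requests

    hs = "https://hs.example.com"
    access_token = "<access token>"
    room_id = "!oldroom:hs.example.com"

    resp = requests.post(
        "%s/_matrix/client/unstable/rooms/%s/upgrade" % (hs, room_id),
        params={"access_token": access_token},
        json={"new_version": "state-v2-test"},
    )
    # On success the body names the replacement room, e.g.
    # {"replacement_room": "!newroom:hs.example.com"}
    print(resp.json())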
@defer.inlineCallbacks
def _update_upgraded_room_pls(
self, requester, old_room_id, new_room_id, old_room_state,
):
"""Send updated power levels in both rooms after an upgrade
Args:
requester (synapse.types.Requester): the user requesting the upgrade
old_room_id (unicode): the id of the room to be replaced
new_room_id (unicode): the id of the replacement room
old_room_state (dict[tuple[str, str], str]): the state map for the old room
Returns:
Deferred
"""
old_room_pl_event_id = old_room_state.get((EventTypes.PowerLevels, ""))
if old_room_pl_event_id is None:
logger.warning(
"Not supported: upgrading a room with no PL event. Not setting PLs "
"in old room.",
)
return
old_room_pl_state = yield self.store.get_event(old_room_pl_event_id)
# we try to stop regular users from speaking by setting the PL required
# to send regular events and invites to 'Moderator' level. That's normally
# 50, but if the default PL in a room is 50 or more, then we set the
# required PL above that.
pl_content = dict(old_room_pl_state.content)
users_default = int(pl_content.get("users_default", 0))
restricted_level = max(users_default + 1, 50)
updated = False
for v in ("invite", "events_default"):
current = int(pl_content.get(v, 0))
if current < restricted_level:
logger.info(
"Setting level for %s in %s to %i (was %i)",
v, old_room_id, restricted_level, current,
)
pl_content[v] = restricted_level
updated = True
else:
logger.info(
"Not setting level for %s (already %i)",
v, current,
)
if updated:
try:
yield self.event_creation_handler.create_and_send_nonmember_event(
requester, {
"type": EventTypes.PowerLevels,
"state_key": '',
"room_id": old_room_id,
"sender": requester.user.to_string(),
"content": pl_content,
}, ratelimit=False,
)
except AuthError as e:
logger.warning("Unable to update PLs in old room: %s", e)
logger.info("Setting correct PLs in new room")
yield self.event_creation_handler.create_and_send_nonmember_event(
requester, {
"type": EventTypes.PowerLevels,
"state_key": '',
"room_id": new_room_id,
"sender": requester.user.to_string(),
"content": old_room_pl_state.content,
}, ratelimit=False,
)
@defer.inlineCallbacks
def clone_exiting_room(
self, requester, old_room_id, new_room_id, new_room_version,
tombstone_event_id,
):
"""Populate a new room based on an old room
Args:
requester (synapse.types.Requester): the user requesting the upgrade
old_room_id (unicode): the id of the room to be replaced
new_room_id (unicode): the id to give the new room (should already have been
created with _generate_room_id())
new_room_version (unicode): the new room version to use
tombstone_event_id (unicode|str): the ID of the tombstone event in the old
room.
Returns:
Deferred[None]
"""
user_id = requester.user.to_string()
if not self.spam_checker.user_may_create_room(user_id):
raise SynapseError(403, "You are not permitted to create rooms")
creation_content = {
"room_version": new_room_version,
"predecessor": {
"room_id": old_room_id,
"event_id": tombstone_event_id,
}
}
initial_state = dict()
types_to_copy = (
(EventTypes.JoinRules, ""),
(EventTypes.Name, ""),
(EventTypes.Topic, ""),
(EventTypes.RoomHistoryVisibility, ""),
(EventTypes.GuestAccess, ""),
(EventTypes.RoomAvatar, ""),
)
old_room_state_ids = yield self.store.get_filtered_current_state_ids(
old_room_id, StateFilter.from_types(types_to_copy),
)
# map from event_id to BaseEvent
old_room_state_events = yield self.store.get_events(old_room_state_ids.values())
for k, old_event_id in iteritems(old_room_state_ids):
old_event = old_room_state_events.get(old_event_id)
if old_event:
initial_state[k] = old_event.content
yield self._send_events_for_new_room(
requester,
new_room_id,
# we expect to override all the presets with initial_state, so this is
# somewhat arbitrary.
preset_config=RoomCreationPreset.PRIVATE_CHAT,
invite_list=[],
initial_state=initial_state,
creation_content=creation_content,
)
# XXX invites/joins
# XXX 3pid invites
@defer.inlineCallbacks
def _move_aliases_to_new_room(
self, requester, old_room_id, new_room_id, old_room_state,
):
directory_handler = self.hs.get_handlers().directory_handler
aliases = yield self.store.get_aliases_for_room(old_room_id)
# check to see if we have a canonical alias.
canonical_alias = None
canonical_alias_event_id = old_room_state.get((EventTypes.CanonicalAlias, ""))
if canonical_alias_event_id:
canonical_alias_event = yield self.store.get_event(canonical_alias_event_id)
if canonical_alias_event:
canonical_alias = canonical_alias_event.content.get("alias", "")
# first we try to remove the aliases from the old room (we suppress sending
# the room_aliases event until the end).
#
# Note that we'll only be able to remove aliases that (a) aren't owned by an AS,
# and (b) were created by the user, unless the user is a server admin.
#
# This is probably correct - given we don't allow such aliases to be deleted
# normally, it would be odd to allow it in the case of doing a room upgrade -
# but it makes the upgrade less effective, and you have to wonder why a room
# admin can't remove aliases that point to that room anyway.
# (cf https://github.com/matrix-org/synapse/issues/2360)
#
removed_aliases = []
for alias_str in aliases:
alias = RoomAlias.from_string(alias_str)
try:
yield directory_handler.delete_association(
requester, alias, send_event=False,
)
removed_aliases.append(alias_str)
except SynapseError as e:
logger.warning(
"Unable to remove alias %s from old room: %s",
alias, e,
)
# if we didn't find any aliases, or couldn't remove any, we can skip the rest
# of this.
if not removed_aliases:
return
try:
# this can fail if, for some reason, our user doesn't have perms to send
# m.room.aliases events in the old room (note that we've already checked that
# they have perms to send a tombstone event, so that's not terribly likely).
#
# If that happens, it's regrettable, but we should carry on: it's the same
# as when you remove an alias from the directory normally - it just means that
# the aliases event gets out of sync with the directory
# (cf https://github.com/vector-im/riot-web/issues/2369)
yield directory_handler.send_room_alias_update_event(
requester, old_room_id,
)
except AuthError as e:
logger.warning(
"Failed to send updated alias event on old room: %s", e,
)
# we can now add any aliases we successfully removed to the new room.
for alias in removed_aliases:
try:
yield directory_handler.create_association(
requester, RoomAlias.from_string(alias),
new_room_id, servers=(self.hs.hostname, ),
send_event=False,
)
logger.info("Moved alias %s to new room", alias)
except SynapseError as e:
# I'm not really expecting this to happen, but it could if the spam
# checking module decides it shouldn't, or similar.
logger.error(
"Error adding alias %s to new room: %s",
alias, e,
)
try:
if canonical_alias and (canonical_alias in removed_aliases):
yield self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.CanonicalAlias,
"state_key": "",
"room_id": new_room_id,
"sender": requester.user.to_string(),
"content": {"alias": canonical_alias, },
},
ratelimit=False
)
yield directory_handler.send_room_alias_update_event(
requester, new_room_id,
)
except SynapseError as e:
# again I'm not really expecting this to fail, but if it does, I'd rather
# we returned the new room to the client at this point.
logger.error(
"Unable to send updated alias events in new room: %s", e,
)
@defer.inlineCallbacks
def create_room(self, requester, config, ratelimit=True,
@@ -165,28 +494,7 @@ class RoomCreationHandler(BaseHandler):
visibility = config.get("visibility", None)
is_public = visibility == "public"
# autogen room IDs and try to create it. We may clash, so just
# try a few times till one goes through, giving up eventually.
attempts = 0
room_id = None
while attempts < 5:
try:
random_string = stringutils.random_string(18)
gen_room_id = RoomID(
random_string,
self.hs.hostname,
)
yield self.store.store_room(
room_id=gen_room_id.to_string(),
room_creator_user_id=user_id,
is_public=is_public
)
room_id = gen_room_id.to_string()
break
except StoreError:
attempts += 1
if not room_id:
raise StoreError(500, "Couldn't generate a room ID.")
room_id = yield self._generate_room_id(creator_id=user_id, is_public=is_public)
if room_alias:
directory_handler = self.hs.get_handlers().directory_handler
@@ -216,18 +524,15 @@ class RoomCreationHandler(BaseHandler):
# override any attempt to set room versions via the creation_content
creation_content["room_version"] = room_version
room_member_handler = self.hs.get_room_member_handler()
yield self._send_events_for_new_room(
requester,
room_id,
room_member_handler,
preset_config=preset_config,
invite_list=invite_list,
initial_state=initial_state,
creation_content=creation_content,
room_alias=room_alias,
power_level_content_override=config.get("power_level_content_override", {}),
power_level_content_override=config.get("power_level_content_override"),
creator_join_profile=creator_join_profile,
)
@@ -263,7 +568,7 @@ class RoomCreationHandler(BaseHandler):
if is_direct:
content["is_direct"] = is_direct
yield room_member_handler.update_membership(
yield self.room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
room_id,
@@ -301,14 +606,13 @@ class RoomCreationHandler(BaseHandler):
self,
creator, # A Requester object.
room_id,
room_member_handler,
preset_config,
invite_list,
initial_state,
creation_content,
room_alias,
power_level_content_override,
creator_join_profile,
room_alias=None,
power_level_content_override=None,
creator_join_profile=None,
):
def create(etype, content, **kwargs):
e = {
@@ -324,6 +628,7 @@ class RoomCreationHandler(BaseHandler):
@defer.inlineCallbacks
def send(etype, content, **kwargs):
event = create(etype, content, **kwargs)
logger.info("Sending %s in new room", etype)
yield self.event_creation_handler.create_and_send_nonmember_event(
creator,
event,
@@ -346,7 +651,8 @@ class RoomCreationHandler(BaseHandler):
content=creation_content,
)
yield room_member_handler.update_membership(
logger.info("Sending %s in new room", EventTypes.Member)
yield self.room_member_handler.update_membership(
creator,
creator.user,
room_id,
@@ -388,7 +694,8 @@ class RoomCreationHandler(BaseHandler):
for invitee in invite_list:
power_level_content["users"][invitee] = 100
power_level_content.update(power_level_content_override)
if power_level_content_override:
power_level_content.update(power_level_content_override)
yield send(
etype=EventTypes.PowerLevels,
@@ -427,6 +734,30 @@ class RoomCreationHandler(BaseHandler):
content=content,
)
@defer.inlineCallbacks
def _generate_room_id(self, creator_id, is_public):
# autogen room IDs and try to create it. We may clash, so just
# try a few times till one goes through, giving up eventually.
attempts = 0
while attempts < 5:
try:
random_string = stringutils.random_string(18)
gen_room_id = RoomID(
random_string,
self.hs.hostname,
).to_string()
if isinstance(gen_room_id, bytes):
gen_room_id = gen_room_id.decode('utf-8')
yield self.store.store_room(
room_id=gen_room_id,
room_creator_user_id=creator_id,
is_public=is_public,
)
defer.returnValue(gen_room_id)
except StoreError:
attempts += 1
raise StoreError(500, "Couldn't generate a room ID.")
class RoomContextHandler(object):
def __init__(self, hs):

View File

@@ -28,8 +28,9 @@ from twisted.internet import defer
import synapse.server
import synapse.types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.types import RoomID, UserID
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.types import RoomAlias, RoomID, UserID
from synapse.util import logcontext
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
@@ -416,6 +417,10 @@ class RoomMemberHandler(object):
ret = yield self._remote_join(
requester, remote_room_hosts, room_id, target, content
)
logcontext.run_in_background(
self._send_merged_user_invites,
requester, room_id,
)
defer.returnValue(ret)
elif effective_membership_state == Membership.LEAVE:
@@ -450,8 +455,58 @@ class RoomMemberHandler(object):
prev_events_and_hashes=prev_events_and_hashes,
content=content,
)
if effective_membership_state == Membership.JOIN:
logcontext.run_in_background(
self._send_merged_user_invites,
requester, room_id,
)
defer.returnValue(res)
@defer.inlineCallbacks
def _send_merged_user_invites(self, requester, room_id):
try:
profile_alias = "#_profile_%s:%s" % (
requester.user.localpart, self.hs.hostname,
)
profile_alias = RoomAlias.from_string(profile_alias)
try:
profile_room_id, remote_room_hosts = yield self.lookup_room_alias(
profile_alias,
)
except NotFoundError:
logger.info(
"Not sending merged invites as %s does not exists",
profile_alias
)
return
linked_accounts = yield self.state_handler.get_current_state(
room_id=profile_room_id.to_string(),
event_type="m.linked_accounts",
state_key="",
)
if not linked_accounts or not linked_accounts.content['all_children']:
return
for child_id in linked_accounts.content['all_children']:
child = UserID.from_string(child_id)
if self.hs.is_mine(child) or child_id == requester.user.to_string():
# TODO: Handle auto-invite for local users (not a priority)
continue
try:
yield self.update_membership(
requester=requester,
target=child,
room_id=room_id,
action="invite",
)
except Exception:
logger.exception("Failed to invite %s to %s", child_id, room_id)
except Exception:
logger.exception(
"Failed to send invites to children of %s in %s",
requester.user.to_string(), room_id,
)
@defer.inlineCallbacks
def send_membership_event(
self,
@@ -578,7 +633,7 @@ class RoomMemberHandler(object):
mapping = yield directory_handler.get_association(room_alias)
if not mapping:
raise SynapseError(404, "No such room alias")
raise NotFoundError("No such room alias")
room_id = mapping["room_id"]
servers = mapping["servers"]

View File

@@ -63,11 +63,8 @@ class TypingHandler(object):
self._member_typing_until = {} # clock time we expect to stop
self._member_last_federation_poke = {}
# map room IDs to serial numbers
self._room_serials = {}
self._latest_room_serial = 0
# map room IDs to sets of users currently typing
self._room_typing = {}
self._reset()
# caches which room_ids changed at which serials
self._typing_stream_change_cache = StreamChangeCache(
@@ -79,6 +76,15 @@ class TypingHandler(object):
5000,
)
def _reset(self):
"""
Reset the typing handler's data caches.
"""
# map room IDs to serial numbers
self._room_serials = {}
# map room IDs to sets of users currently typing
self._room_typing = {}
def _handle_timeouts(self):
logger.info("Checking for typing timeouts")

View File

@@ -468,13 +468,13 @@ def set_cors_headers(request):
Args:
request (twisted.web.http.Request): The http request to add CORs to.
"""
request.setHeader("Access-Control-Allow-Origin", "*")
request.setHeader(b"Access-Control-Allow-Origin", b"*")
request.setHeader(
"Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS"
b"Access-Control-Allow-Methods", b"GET, POST, PUT, DELETE, OPTIONS"
)
request.setHeader(
"Access-Control-Allow-Headers",
"Origin, X-Requested-With, Content-Type, Accept, Authorization"
b"Access-Control-Allow-Headers",
b"Origin, X-Requested-With, Content-Type, Accept, Authorization"
)

View File

@@ -121,16 +121,15 @@ def parse_string(request, name, default=None, required=False,
Args:
request: the twisted HTTP request.
name (bytes/unicode): the name of the query parameter.
default (bytes/unicode|None): value to use if the parameter is absent,
name (bytes|unicode): the name of the query parameter.
default (bytes|unicode|None): value to use if the parameter is absent,
defaults to None. Must be bytes if encoding is None.
required (bool): whether to raise a 400 SynapseError if the
parameter is absent, defaults to False.
allowed_values (list[bytes/unicode]): List of allowed values for the
allowed_values (list[bytes|unicode]): List of allowed values for the
string, or None if any value is allowed, defaults to None. Must be
the same type as name, if given.
encoding: The encoding to decode the name to, and decode the string
content with.
encoding (str|None): The encoding to decode the string content with.
Returns:
bytes/unicode|None: A string value or the default. Unicode if encoding

View File

@@ -85,7 +85,10 @@ class EmailPusher(object):
self.timed_call = None
def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
if self.max_stream_ordering:
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
else:
self.max_stream_ordering = max_stream_ordering
self._start_processing()
def on_new_receipts(self, min_stream_id, max_stream_id):

View File

@@ -311,10 +311,10 @@ class HttpPusher(object):
]
}
}
if event.type == 'm.room.member':
if event.type == 'm.room.member' and event.is_state():
d['notification']['membership'] = event.content['membership']
d['notification']['user_is_target'] = event.state_key == self.user_id
if self.hs.config.push_include_content and 'content' in event:
if self.hs.config.push_include_content and event.content:
d['notification']['content'] = event.content
# We no longer send aliases separately, instead, we send the human

View File

@@ -26,7 +26,6 @@ import bleach
import jinja2
from twisted.internet import defer
from twisted.mail.smtp import sendmail
from synapse.api.constants import EventTypes
from synapse.api.errors import StoreError
@@ -37,6 +36,7 @@ from synapse.push.presentable_names import (
)
from synapse.types import UserID
from synapse.util.async_helpers import concurrently_execute
from synapse.util.logcontext import make_deferred_yieldable
from synapse.visibility import filter_events_for_client
logger = logging.getLogger(__name__)
@@ -85,6 +85,7 @@ class Mailer(object):
self.notif_template_html = notif_template_html
self.notif_template_text = notif_template_text
self.sendmail = self.hs.get_sendmail()
self.store = self.hs.get_datastore()
self.macaroon_gen = self.hs.get_macaroon_generator()
self.state_handler = self.hs.get_state_handler()
@@ -191,17 +192,17 @@ class Mailer(object):
multipart_msg.attach(html_part)
logger.info("Sending email push notification to %s" % email_address)
# logger.debug(html_text)
yield sendmail(
yield make_deferred_yieldable(self.sendmail(
self.hs.config.email_smtp_host,
raw_from, raw_to, multipart_msg.as_string(),
raw_from, raw_to, multipart_msg.as_string().encode('utf8'),
reactor=self.hs.get_reactor(),
port=self.hs.config.email_smtp_port,
requireAuthentication=self.hs.config.email_smtp_user is not None,
username=self.hs.config.email_smtp_user,
password=self.hs.config.email_smtp_pass,
requireTransportSecurity=self.hs.config.require_transport_security
)
))
@defer.inlineCallbacks
def get_room_vars(self, room_id, user_id, notifs, notif_events, room_state_ids):
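A minimal sketch of the logcontext pattern applied above, assuming Synapse's make_deferred_yieldable behaves as its name suggests (the wrapper function is invented): sendmail hands back a plain Twisted Deferred, which must be wrapped before being yielded from inlineCallbacks code so the logcontext is restored on completion, and the message body is encoded because SMTP wants bytes on Python 3.

    from twisted.internet import defer
    from synapse.util.logcontext import make_deferred_yieldable

    @defer.inlineCallbacks
    def send_with_context(sendmail, host, raw_from, raw_to, msg, **kwargs):
        # Hypothetical helper: wrap the logcontext-free Deferred so our
        # logcontext comes back when the send completes.
        yield make_deferred_yieldable(
            sendmail(host, raw_from, raw_to, msg.encode("utf8"), **kwargs)
        )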
@@ -333,7 +334,7 @@ class Mailer(object):
notif_events, user_id, reason):
if len(notifs_by_room) == 1:
# Only one room has new stuff
room_id = notifs_by_room.keys()[0]
room_id = list(notifs_by_room.keys())[0]
# If the room has some kind of name, use it, but we don't
# want the generated-from-names one here otherwise we'll
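The list() wrapper above is the usual Python 3 fix for indexing into dict keys; a toy reproduction (room ID invented):

    notifs_by_room = {"!abc:example.com": ["a notification"]}
    try:
        notifs_by_room.keys()[0]    # Python 3: a keys view is not subscriptable
    except TypeError:
        pass
    room_id = list(notifs_by_room.keys())[0]    # what the fix does
    assert room_id == "!abc:example.com"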


@@ -124,7 +124,7 @@ class PushRuleEvaluatorForEvent(object):
# XXX: optimisation: cache our pattern regexps
if condition['key'] == 'content.body':
body = self._event["content"].get("body", None)
body = self._event.content.get("body", None)
if not body:
return False
@@ -140,7 +140,7 @@ class PushRuleEvaluatorForEvent(object):
if not display_name:
return False
body = self._event["content"].get("body", None)
body = self._event.content.get("body", None)
if not body:
return False


@@ -51,7 +51,6 @@ REQUIREMENTS = {
"daemonize>=2.3.1": ["daemonize"],
"bcrypt>=3.1.0": ["bcrypt>=3.1.0"],
"pillow>=3.1.2": ["PIL"],
"pydenticon>=0.2": ["pydenticon"],
"sortedcontainers>=1.4.4": ["sortedcontainers"],
"psutil>=2.0.0": ["psutil>=2.0.0"],
"pysaml2>=3.0.0": ["saml2"],


@@ -106,7 +106,7 @@ class ReplicationClientHandler(object):
Can be overridden in subclasses to handle more.
"""
logger.info("Received rdata %s -> %s", stream_name, token)
logger.debug("Received rdata %s -> %s", stream_name, token)
return self.store.process_replication_rows(stream_name, token, rows)
def on_position(self, stream_name, token):


@@ -656,7 +656,7 @@ tcp_inbound_commands = LaterGauge(
"",
["command", "name"],
lambda: {
(k[0], p.name,): count
(k, p.name,): count
for p in connected_connections
for k, count in iteritems(p.inbound_commands_counter)
},
@@ -667,7 +667,7 @@ tcp_outbound_commands = LaterGauge(
"",
["command", "name"],
lambda: {
(k[0], p.name,): count
(k, p.name,): count
for p in connected_connections
for k, count in iteritems(p.outbound_commands_counter)
},
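The k[0] -> k change above fixes the metric labels: the counter keys are apparently command-name strings, so k[0] took only the first character and merged distinct commands under one label. A toy reproduction (counts invented):

    inbound_commands_counter = {"RDATA": 3, "REPLICATE": 1}
    old = {(k[0], "conn1"): n for k, n in inbound_commands_counter.items()}
    assert set(old) == {("R", "conn1")}   # both commands collapse into "R"
    new = {(k, "conn1"): n for k, n in inbound_commands_counter.items()}
    assert ("RDATA", "conn1") in new and ("REPLICATE", "conn1") in new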


@@ -47,6 +47,7 @@ from synapse.rest.client.v2_alpha import (
register,
report_event,
room_keys,
room_upgrade_rest_servlet,
sendtodevice,
sync,
tags,
@@ -116,3 +117,4 @@ class ClientRestResource(JsonResource):
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
groups.register_servlets(hs, client_resource)
room_upgrade_rest_servlet.register_servlets(hs, client_resource)


@@ -21,7 +21,7 @@ from synapse.api.constants import LoginType
from synapse.api.errors import SynapseError
from synapse.api.urls import CLIENT_V2_ALPHA_PREFIX
from synapse.http.server import finish_request
from synapse.http.servlet import RestServlet
from synapse.http.servlet import RestServlet, parse_string
from ._base import client_v2_patterns
@@ -68,6 +68,29 @@ function captchaDone() {
</html>
"""
TERMS_TEMPLATE = """
<html>
<head>
<title>Authentication</title>
<meta name='viewport' content='width=device-width, initial-scale=1,
user-scalable=no, minimum-scale=1.0, maximum-scale=1.0'>
<link rel="stylesheet" href="/_matrix/static/client/register/style.css">
</head>
<body>
<form id="registrationForm" method="post" action="%(myurl)s">
<div>
<p>
Please click the button below if you agree to the
<a href="%(terms_url)s">privacy policy of this homeserver.</a>
</p>
<input type="hidden" name="session" value="%(session)s" />
<input type="submit" value="Agree" />
</div>
</form>
</body>
</html>
"""
SUCCESS_TEMPLATE = """
<html>
<head>
@@ -108,16 +131,12 @@ class AuthRestServlet(RestServlet):
self.auth_handler = hs.get_auth_handler()
self.registration_handler = hs.get_handlers().registration_handler
@defer.inlineCallbacks
def on_GET(self, request, stagetype):
yield
session = parse_string(request, "session")
if not session:
raise SynapseError(400, "No session supplied")
if stagetype == LoginType.RECAPTCHA:
if ('session' not in request.args or
len(request.args['session']) == 0):
raise SynapseError(400, "No session supplied")
session = request.args["session"][0]
html = RECAPTCHA_TEMPLATE % {
'session': session,
'myurl': "%s/auth/%s/fallback/web" % (
@@ -132,25 +151,44 @@ class AuthRestServlet(RestServlet):
request.write(html_bytes)
finish_request(request)
defer.returnValue(None)
return None
elif stagetype == LoginType.TERMS:
html = TERMS_TEMPLATE % {
'session': session,
'terms_url': "%s_matrix/consent?v=%s" % (
self.hs.config.public_baseurl,
self.hs.config.user_consent_version,
),
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
),
}
html_bytes = html.encode("utf8")
request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
return None
else:
raise SynapseError(404, "Unknown auth stage type")
@defer.inlineCallbacks
def on_POST(self, request, stagetype):
yield
if stagetype == "m.login.recaptcha":
if ('g-recaptcha-response' not in request.args or
len(request.args['g-recaptcha-response'])) == 0:
raise SynapseError(400, "No captcha response supplied")
if ('session' not in request.args or
len(request.args['session'])) == 0:
raise SynapseError(400, "No session supplied")
session = request.args['session'][0]
session = parse_string(request, "session")
if not session:
raise SynapseError(400, "No session supplied")
if stagetype == LoginType.RECAPTCHA:
response = parse_string(request, "g-recaptcha-response")
if not response:
raise SynapseError(400, "No captcha response supplied")
authdict = {
'response': request.args['g-recaptcha-response'][0],
'response': response,
'session': session,
}
@@ -178,6 +216,41 @@ class AuthRestServlet(RestServlet):
request.write(html_bytes)
finish_request(request)
defer.returnValue(None)
elif stagetype == LoginType.TERMS:
if ('session' not in request.args or
len(request.args['session'])) == 0:
raise SynapseError(400, "No session supplied")
session = request.args['session'][0]
authdict = {'session': session}
success = yield self.auth_handler.add_oob_auth(
LoginType.TERMS,
authdict,
self.hs.get_ip_from_request(request)
)
if success:
html = SUCCESS_TEMPLATE
else:
html = TERMS_TEMPLATE % {
'session': session,
'terms_url': "%s_matrix/consent?v=%s" % (
self.hs.config.public_baseurl,
self.hs.config.user_consent_version,
),
'myurl': "%s/auth/%s/fallback/web" % (
CLIENT_V2_ALPHA_PREFIX, LoginType.TERMS
),
}
html_bytes = html.encode("utf8")
request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
defer.returnValue(None)
else:
raise SynapseError(404, "Unknown auth stage type")


@@ -359,6 +359,13 @@ class RegisterRestServlet(RestServlet):
[LoginType.MSISDN, LoginType.EMAIL_IDENTITY]
])
# Append m.login.terms to all flows if we're requiring consent
if self.hs.config.user_consent_at_registration:
new_flows = []
for flow in flows:
flow.append(LoginType.TERMS)
flows.extend(new_flows)
auth_result, params, session_id = yield self.auth_handler.check_auth(
flows, body, self.hs.get_ip_from_request(request)
)
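For illustration, the effect of the consent branch above on the advertised flows (flow contents invented); note that new_flows stays empty in the hunk, so the in-place append is what actually adds the stage:

    flows = [["m.login.email.identity"], ["m.login.msisdn"]]
    for flow in flows:
        flow.append("m.login.terms")
    assert flows == [
        ["m.login.email.identity", "m.login.terms"],
        ["m.login.msisdn", "m.login.terms"],
    ]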
@@ -445,6 +452,12 @@ class RegisterRestServlet(RestServlet):
params.get("bind_msisdn")
)
if auth_result and LoginType.TERMS in auth_result:
logger.info("%s has consented to the privacy policy" % registered_user_id)
yield self.store.user_set_consent_version(
registered_user_id, self.hs.config.user_consent_version,
)
defer.returnValue((200, return_dict))
def on_OPTIONS(self, _):


@@ -17,7 +17,7 @@ import logging
from twisted.internet import defer
from synapse.api.errors import Codes, SynapseError
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.http.servlet import (
RestServlet,
parse_json_object_from_request,
@@ -208,10 +208,25 @@ class RoomKeysServlet(RestServlet):
user_id, version, room_id, session_id
)
# Convert room_keys to the right format to return.
if session_id:
room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
# If the client requests a specific session, but that session was
# not backed up, then return an M_NOT_FOUND.
if room_keys['rooms'] == {}:
raise NotFoundError("No room_keys found")
else:
room_keys = room_keys['rooms'][room_id]['sessions'][session_id]
elif room_id:
room_keys = room_keys['rooms'][room_id]
# If the client requests all sessions from a room, but no sessions
# are found, then return an empty result rather than an error, so
# that clients don't have to handle an error condition, and an
# empty result is valid. (Similarly if the client requests all
# sessions from the backup, but in that case, room_keys is already
# in the right format, so we don't need to do anything about it.)
if room_keys['rooms'] == {}:
room_keys = {'sessions': {}}
else:
room_keys = room_keys['rooms'][room_id]
defer.returnValue((200, room_keys))


@@ -0,0 +1,89 @@
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import KNOWN_ROOM_VERSIONS
from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
parse_json_object_from_request,
)
from ._base import client_v2_patterns
logger = logging.getLogger(__name__)
class RoomUpgradeRestServlet(RestServlet):
"""Handler for room uprade requests.
Handles requests of the form:
POST /_matrix/client/r0/rooms/$roomid/upgrade HTTP/1.1
Content-Type: application/json
{
"new_version": "2",
}
Creates a new room and shuts down the old one. Returns the ID of the new room.
Args:
hs (synapse.server.HomeServer):
"""
PATTERNS = client_v2_patterns(
# /rooms/$roomid/upgrade
"/rooms/(?P<room_id>[^/]*)/upgrade$",
v2_alpha=False,
)
def __init__(self, hs):
super(RoomUpgradeRestServlet, self).__init__()
self._hs = hs
self._room_creation_handler = hs.get_room_creation_handler()
self._auth = hs.get_auth()
@defer.inlineCallbacks
def on_POST(self, request, room_id):
requester = yield self._auth.get_user_by_req(request)
content = parse_json_object_from_request(request)
assert_params_in_dict(content, ("new_version", ))
new_version = content["new_version"]
if new_version not in KNOWN_ROOM_VERSIONS:
raise SynapseError(
400,
"Your homeserver does not support this room version",
Codes.UNSUPPORTED_ROOM_VERSION,
)
new_room_id = yield self._room_creation_handler.upgrade_room(
requester, room_id, new_version
)
ret = {
"replacement_room": new_room_id,
}
defer.returnValue((200, ret))
def register_servlets(hs, http_server):
RoomUpgradeRestServlet(hs).register(http_server)
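A hedged usage sketch for the new endpoint (server name, room ID and token are placeholders; the requests library is used purely for illustration):

    import requests

    resp = requests.post(
        "https://hs.example.com/_matrix/client/r0/rooms/%21old%3Aexample.com/upgrade",
        params={"access_token": "<access token>"},
        json={"new_version": "2"},
    )
    # On success the body looks like {"replacement_room": "!...:hs.example.com"}
    print(resp.json())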


@@ -137,27 +137,36 @@ class ConsentResource(Resource):
request (twisted.web.http.Request):
"""
version = parse_string(request, "v",
default=self._default_consent_version)
username = parse_string(request, "u", required=True)
userhmac = parse_string(request, "h", required=True, encoding=None)
version = parse_string(request, "v", default=self._default_consent_version)
username = parse_string(request, "u", required=False, default="")
userhmac = None
has_consented = False
public_version = username == ""
if not public_version:
userhmac_bytes = parse_string(request, "h", required=True, encoding=None)
self._check_hash(username, userhmac)
self._check_hash(username, userhmac_bytes)
if username.startswith('@'):
qualified_user_id = username
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
if username.startswith('@'):
qualified_user_id = username
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
u = yield self.store.get_user_by_id(qualified_user_id)
if u is None:
raise NotFoundError("Unknown user")
u = yield self.store.get_user_by_id(qualified_user_id)
if u is None:
raise NotFoundError("Unknown user")
has_consented = u["consent_version"] == version
userhmac = userhmac_bytes.decode("ascii")
try:
self._render_template(
request, "%s.html" % (version,),
user=username, userhmac=userhmac, version=version,
has_consented=(u["consent_version"] == version),
user=username,
userhmac=userhmac,
version=version,
has_consented=has_consented,
public_version=public_version,
)
except TemplateNotFound:
raise NotFoundError("Unknown policy version")
@@ -223,7 +232,7 @@ class ConsentResource(Resource):
key=self._hmac_secret,
msg=userid.encode('utf-8'),
digestmod=sha256,
).hexdigest()
).hexdigest().encode('ascii')
if not compare_digest(want_mac, userhmac):
raise SynapseError(http_client.FORBIDDEN, "HMAC incorrect")
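The trailing .encode('ascii') above exists because hmac.compare_digest refuses to compare str with bytes on Python 3; a toy reproduction (key and user ID invented):

    import hmac
    from hashlib import sha256

    want_mac = hmac.new(
        key=b"form_secret", msg=b"@user:example.com", digestmod=sha256,
    ).hexdigest().encode("ascii")
    userhmac = want_mac              # what the query string supplies (bytes)
    assert hmac.compare_digest(want_mac, userhmac)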


@@ -1,14 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


@@ -1,92 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from unpaddedbase64 import encode_base64
from OpenSSL import crypto
from twisted.web.resource import Resource
from synapse.http.server import respond_with_json_bytes
logger = logging.getLogger(__name__)
class LocalKey(Resource):
"""HTTP resource containing encoding the TLS X.509 certificate and NACL
signature verification keys for this server::
GET /key HTTP/1.1
HTTP/1.1 200 OK
Content-Type: application/json
{
"server_name": "this.server.example.com"
"verify_keys": {
"algorithm:version": # base64 encoded NACL verification key.
},
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
"signatures": {
"this.server.example.com": {
"algorithm:version": # NACL signature for this server.
}
}
}
"""
def __init__(self, hs):
self.response_body = encode_canonical_json(
self.response_json_object(hs.config)
)
Resource.__init__(self)
@staticmethod
def response_json_object(server_config):
verify_keys = {}
for key in server_config.signing_key:
verify_key_bytes = key.verify_key.encode()
key_id = "%s:%s" % (key.alg, key.version)
verify_keys[key_id] = encode_base64(verify_key_bytes)
x509_certificate_bytes = crypto.dump_certificate(
crypto.FILETYPE_ASN1,
server_config.tls_certificate
)
json_object = {
u"server_name": server_config.server_name,
u"verify_keys": verify_keys,
u"tls_certificate": encode_base64(x509_certificate_bytes)
}
for key in server_config.signing_key:
json_object = sign_json(
json_object,
server_config.server_name,
key,
)
return json_object
def render_GET(self, request):
return respond_with_json_bytes(
request, 200, self.response_body,
)
def getChild(self, name, request):
if name == b'':
return self


@@ -16,6 +16,7 @@
import logging
import os
from six import PY3
from six.moves import urllib
from twisted.internet import defer
@@ -48,26 +49,21 @@ def parse_media_id(request):
return server_name, media_id, file_name
except Exception:
raise SynapseError(
404,
"Invalid media id token %r" % (request.postpath,),
Codes.UNKNOWN,
404, "Invalid media id token %r" % (request.postpath,), Codes.UNKNOWN
)
def respond_404(request):
respond_with_json(
request, 404,
cs_error(
"Not found %r" % (request.postpath,),
code=Codes.NOT_FOUND,
),
send_cors=True
request,
404,
cs_error("Not found %r" % (request.postpath,), code=Codes.NOT_FOUND),
send_cors=True,
)
@defer.inlineCallbacks
def respond_with_file(request, media_type, file_path,
file_size=None, upload_name=None):
def respond_with_file(request, media_type, file_path, file_size=None, upload_name=None):
logger.debug("Responding with %r", file_path)
if os.path.isfile(file_path):
@@ -97,31 +93,26 @@ def add_file_headers(request, media_type, file_size, upload_name):
file_size (int): Size in bytes of the media, if known.
upload_name (str): The name of the requested file, if any.
"""
def _quote(x):
return urllib.parse.quote(x.encode("utf-8"))
request.setHeader(b"Content-Type", media_type.encode("UTF-8"))
if upload_name:
if is_ascii(upload_name):
disposition = ("inline; filename=%s" % (_quote(upload_name),)).encode("ascii")
disposition = "inline; filename=%s" % (_quote(upload_name),)
else:
disposition = (
"inline; filename*=utf-8''%s" % (_quote(upload_name),)).encode("ascii")
disposition = "inline; filename*=utf-8''%s" % (_quote(upload_name),)
request.setHeader(b"Content-Disposition", disposition)
request.setHeader(b"Content-Disposition", disposition.encode('ascii'))
# cache for at least a day.
# XXX: we might want to turn this off for data we don't want to
# recommend caching as it's sensitive or private - or at least
# select private. don't bother setting Expires as all our
# clients are smart enough to be happy with Cache-Control
request.setHeader(
b"Cache-Control", b"public,max-age=86400,s-maxage=86400"
)
request.setHeader(
b"Content-Length", b"%d" % (file_size,)
)
request.setHeader(b"Cache-Control", b"public,max-age=86400,s-maxage=86400")
request.setHeader(b"Content-Length", b"%d" % (file_size,))
@defer.inlineCallbacks
@@ -153,6 +144,7 @@ class Responder(object):
Responder is a context manager which *must* be used, so that any resources
held can be cleaned up.
"""
def write_to_consumer(self, consumer):
"""Stream response into consumer
@@ -186,9 +178,18 @@ class FileInfo(object):
thumbnail_method (str)
thumbnail_type (str): Content type of thumbnail, e.g. image/png
"""
def __init__(self, server_name, file_id, url_cache=False,
thumbnail=False, thumbnail_width=None, thumbnail_height=None,
thumbnail_method=None, thumbnail_type=None):
def __init__(
self,
server_name,
file_id,
url_cache=False,
thumbnail=False,
thumbnail_width=None,
thumbnail_height=None,
thumbnail_method=None,
thumbnail_type=None,
):
self.server_name = server_name
self.file_id = file_id
self.url_cache = url_cache
@@ -197,3 +198,74 @@ class FileInfo(object):
self.thumbnail_height = thumbnail_height
self.thumbnail_method = thumbnail_method
self.thumbnail_type = thumbnail_type
def get_filename_from_headers(headers):
"""
Get the filename of the downloaded file by inspecting the
Content-Disposition HTTP header.
Args:
headers (twisted.web.http_headers.Headers): The HTTP
request headers.
Returns:
A Unicode string of the filename, or None.
"""
content_disposition = headers.get(b"Content-Disposition", [b''])
# No header, bail out.
if not content_disposition[0]:
return
# dict of unicode: bytes, corresponding to the key value sections of the
# Content-Disposition header.
params = {}
parts = content_disposition[0].split(b";")
for i in parts:
# Split into key-value pairs, if able
# We don't care about things like `inline`, so throw it out
if b"=" not in i:
continue
key, value = i.strip().split(b"=")
params[key.decode('ascii')] = value
upload_name = None
# First check if there is a valid UTF-8 filename
upload_name_utf8 = params.get("filename*", None)
if upload_name_utf8:
if upload_name_utf8.lower().startswith(b"utf-8''"):
upload_name_utf8 = upload_name_utf8[7:]
# We have a filename*= section. This MUST be ASCII, and any UTF-8
# bytes are %-quoted.
if PY3:
try:
# Once it is decoded, we can then unquote the %-encoded
# parts strictly into a unicode string.
upload_name = urllib.parse.unquote(
upload_name_utf8.decode('ascii'), errors="strict"
)
except UnicodeDecodeError:
# Incorrect UTF-8.
pass
else:
# On Python 2, we first unquote the %-encoded parts and then
# decode it strictly using UTF-8.
try:
upload_name = urllib.parse.unquote(upload_name_utf8).decode('utf8')
except UnicodeDecodeError:
pass
# If there isn't, check for an ASCII name.
if not upload_name:
upload_name_ascii = params.get("filename", None)
if upload_name_ascii and is_ascii(upload_name_ascii):
# Make sure there's no %-quoted bytes. If there is, reject it as
# non-valid ASCII.
if b"%" not in upload_name_ascii:
upload_name = upload_name_ascii.decode('ascii')
# This may be None here, indicating we did not find a matching name.
return upload_name
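A hedged usage sketch for the helper above (the header value is invented): it expects the dict-like headers mapping used in this module, with bytes names mapped to lists of raw bytes values.

    headers = {b"Content-Disposition": [b"inline; filename*=utf-8''caf%C3%A9.png"]}
    # On Python 3 the RFC 5987 encoded filename is unquoted and decoded.
    assert get_filename_from_headers(headers) == u"caf\u00e9.png"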


@@ -1,68 +0,0 @@
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from pydenticon import Generator
from twisted.web.resource import Resource
from synapse.http.servlet import parse_integer
FOREGROUND = [
"rgb(45,79,255)",
"rgb(254,180,44)",
"rgb(226,121,234)",
"rgb(30,179,253)",
"rgb(232,77,65)",
"rgb(49,203,115)",
"rgb(141,69,170)"
]
BACKGROUND = "rgb(224,224,224)"
SIZE = 5
class IdenticonResource(Resource):
isLeaf = True
def __init__(self):
Resource.__init__(self)
self.generator = Generator(
SIZE, SIZE, foreground=FOREGROUND, background=BACKGROUND,
)
def generate_identicon(self, name, width, height):
v_padding = width % SIZE
h_padding = height % SIZE
top_padding = v_padding // 2
left_padding = h_padding // 2
bottom_padding = v_padding - top_padding
right_padding = h_padding - left_padding
width -= v_padding
height -= h_padding
padding = (top_padding, bottom_padding, left_padding, right_padding)
identicon = self.generator.generate(
name, width, height, padding=padding
)
return identicon
def render_GET(self, request):
name = "/".join(request.postpath)
width = parse_integer(request, "width", default=96)
height = parse_integer(request, "height", default=96)
identicon_bytes = self.generate_identicon(name, width, height)
request.setHeader(b"Content-Type", b"image/png")
request.setHeader(
b"Cache-Control", b"public,max-age=86400,s-maxage=86400"
)
return identicon_bytes


@@ -14,14 +14,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi
import errno
import logging
import os
import shutil
from six import PY3, iteritems
from six.moves.urllib import parse as urlparse
from six import iteritems
import twisted.internet.error
import twisted.web.http
@@ -34,18 +32,21 @@ from synapse.api.errors import (
NotFoundError,
SynapseError,
)
from synapse.http.matrixfederationclient import MatrixFederationHttpClient
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util import logcontext
from synapse.util.async_helpers import Linearizer
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import is_ascii, random_string
from synapse.util.stringutils import random_string
from ._base import FileInfo, respond_404, respond_with_responder
from ._base import (
FileInfo,
get_filename_from_headers,
respond_404,
respond_with_responder,
)
from .config_resource import MediaConfigResource
from .download_resource import DownloadResource
from .filepath import MediaFilePaths
from .identicon_resource import IdenticonResource
from .media_storage import MediaStorage
from .preview_url_resource import PreviewUrlResource
from .storage_provider import StorageProviderWrapper
@@ -63,7 +64,7 @@ class MediaRepository(object):
def __init__(self, hs):
self.hs = hs
self.auth = hs.get_auth()
self.client = MatrixFederationHttpClient(hs)
self.client = hs.get_http_client()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.store = hs.get_datastore()
@@ -398,39 +399,9 @@ class MediaRepository(object):
yield finish()
media_type = headers[b"Content-Type"][0].decode('ascii')
upload_name = get_filename_from_headers(headers)
time_now_ms = self.clock.time_msec()
content_disposition = headers.get(b"Content-Disposition", None)
if content_disposition:
_, params = cgi.parse_header(content_disposition[0].decode('ascii'),)
upload_name = None
# First check if there is a valid UTF-8 filename
upload_name_utf8 = params.get("filename*", None)
if upload_name_utf8:
if upload_name_utf8.lower().startswith("utf-8''"):
upload_name = upload_name_utf8[7:]
# If there isn't check for an ascii name.
if not upload_name:
upload_name_ascii = params.get("filename", None)
if upload_name_ascii and is_ascii(upload_name_ascii):
upload_name = upload_name_ascii
if upload_name:
if PY3:
upload_name = urlparse.unquote(upload_name)
else:
upload_name = urlparse.unquote(upload_name.encode('ascii'))
try:
if isinstance(upload_name, bytes):
upload_name = upload_name.decode("utf-8")
except UnicodeDecodeError:
upload_name = None
else:
upload_name = None
logger.info("Stored remote media in file %r", fname)
yield self.store.store_cached_remote_media(
@@ -769,7 +740,6 @@ class MediaRepositoryResource(Resource):
self.putChild(b"thumbnail", ThumbnailResource(
hs, media_repo, media_repo.media_storage,
))
self.putChild(b"identicon", IdenticonResource())
if hs.config.url_preview_enabled:
self.putChild(b"preview_url", PreviewUrlResource(
hs, media_repo, media_repo.media_storage,


@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi
import datetime
import errno
import fnmatch
@@ -24,6 +24,7 @@ import shutil
import sys
import traceback
import six
from six import string_types
from six.moves import urllib_parse as urlparse
@@ -42,15 +43,19 @@ from synapse.http.server import (
)
from synapse.http.servlet import parse_integer, parse_string
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.rest.media.v1._base import get_filename_from_headers
from synapse.util.async_helpers import ObservableDeferred
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.stringutils import is_ascii, random_string
from synapse.util.stringutils import random_string
from ._base import FileInfo
logger = logging.getLogger(__name__)
_charset_match = re.compile(br"<\s*meta[^>]*charset\s*=\s*([a-z0-9-]+)", flags=re.I)
_content_type_match = re.compile(r'.*; *charset="?(.*?)"?(;|$)', flags=re.I)
class PreviewUrlResource(Resource):
isLeaf = True
@@ -98,7 +103,7 @@ class PreviewUrlResource(Resource):
# XXX: if get_user_by_req fails, what should we do in an async render?
requester = yield self.auth.get_user_by_req(request)
url = parse_string(request, "url")
if "ts" in request.args:
if b"ts" in request.args:
ts = parse_integer(request, "ts")
else:
ts = self.clock.time_msec()
@@ -180,7 +185,12 @@ class PreviewUrlResource(Resource):
cache_result["expires_ts"] > ts and
cache_result["response_code"] / 100 == 2
):
defer.returnValue(cache_result["og"])
# It may be stored as text in the database, not as bytes (such as
# PostgreSQL). If so, encode it back before handing it on.
og = cache_result["og"]
if isinstance(og, six.text_type):
og = og.encode('utf8')
defer.returnValue(og)
return
media_info = yield self._download_url(url, user)
@@ -213,15 +223,28 @@ class PreviewUrlResource(Resource):
elif _is_html(media_info['media_type']):
# TODO: somehow stop a big HTML tree from exploding synapse's RAM
file = open(media_info['filename'])
body = file.read()
file.close()
with open(media_info['filename'], 'rb') as file:
body = file.read()
# clobber the encoding from the content-type, or default to utf-8
# XXX: this overrides any <meta/> or XML charset headers in the body
# which may pose problems, but so far seems to work okay.
match = re.match(r'.*; *charset=(.*?)(;|$)', media_info['media_type'], re.I)
encoding = match.group(1) if match else "utf-8"
encoding = None
# Let's try and figure out if it has an encoding set in a meta tag.
# Limit it to the first 1kb, since it ought to be in the meta tags
# at the top.
match = _charset_match.search(body[:1000])
# If we find a match, it should take precedence over the
# Content-Type header, so set it here.
if match:
encoding = match.group(1).decode('ascii')
# If we don't find a match, we'll look at the HTTP Content-Type, and
# if that doesn't exist, we'll fall back to UTF-8.
if not encoding:
match = _content_type_match.match(
media_info['media_type']
)
encoding = match.group(1) if match else "utf-8"
og = decode_and_calc_og(body, media_info['uri'], encoding)
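A self-contained demonstration of the precedence implemented above, restating the two regexes from the top of this file's hunk (the sample body and content type are invented):

    import re

    _charset_match = re.compile(br"<\s*meta[^>]*charset\s*=\s*([a-z0-9-]+)", flags=re.I)
    _content_type_match = re.compile(r'.*; *charset="?(.*?)"?(;|$)', flags=re.I)

    body = b"<html><head><meta charset=shift-jis></head><body></body></html>"
    media_type = 'text/html; charset="iso-8859-1"'

    match = _charset_match.search(body[:1000])
    assert match.group(1) == b"shift-jis"         # the <meta> tag wins
    match = _content_type_match.match(media_type)
    assert match.group(1) == "iso-8859-1"         # Content-Type is the fallback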
@@ -313,31 +336,7 @@ class PreviewUrlResource(Resource):
media_type = "application/octet-stream"
time_now_ms = self.clock.time_msec()
content_disposition = headers.get(b"Content-Disposition", None)
if content_disposition:
_, params = cgi.parse_header(content_disposition[0],)
download_name = None
# First check if there is a valid UTF-8 filename
download_name_utf8 = params.get("filename*", None)
if download_name_utf8:
if download_name_utf8.lower().startswith("utf-8''"):
download_name = download_name_utf8[7:]
# If there isn't check for an ascii name.
if not download_name:
download_name_ascii = params.get("filename", None)
if download_name_ascii and is_ascii(download_name_ascii):
download_name = download_name_ascii
if download_name:
download_name = urlparse.unquote(download_name)
try:
download_name = download_name.decode("utf-8")
except UnicodeDecodeError:
download_name = None
else:
download_name = None
download_name = get_filename_from_headers(headers)
yield self.store.store_local_media(
media_id=file_id,


@@ -23,6 +23,7 @@ import abc
import logging
from twisted.enterprise import adbapi
from twisted.mail.smtp import sendmail
from twisted.web.client import BrowserLikePolicyForHTTPS
from synapse.api.auth import Auth
@@ -174,6 +175,7 @@ class HomeServer(object):
'message_handler',
'pagination_handler',
'room_context_handler',
'sendmail',
]
# This is overridden in derived application classes
@@ -269,6 +271,9 @@ class HomeServer(object):
def build_room_creation_handler(self):
return RoomCreationHandler(self)
def build_sendmail(self):
return sendmail
def build_state_handler(self):
return StateHandler(self)


@@ -7,6 +7,9 @@ import synapse.handlers.auth
import synapse.handlers.deactivate_account
import synapse.handlers.device
import synapse.handlers.e2e_keys
import synapse.handlers.room
import synapse.handlers.room_member
import synapse.handlers.message
import synapse.handlers.set_password
import synapse.rest.media.v1.media_repository
import synapse.server_notices.server_notices_manager
@@ -50,6 +53,9 @@ class HomeServer(object):
def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
pass
def get_room_member_handler(self) -> synapse.handlers.room_member.RoomMemberHandler:
pass
def get_event_creation_handler(self) -> synapse.handlers.message.EventCreationHandler:
pass


@@ -261,7 +261,7 @@ class StateHandler(object):
logger.debug("calling resolve_state_groups from compute_event_context")
entry = yield self.resolve_state_groups_for_events(
event.room_id, [e for e, _ in event.prev_events],
event.room_id, event.prev_event_ids(),
)
prev_state_ids = entry.state
@@ -607,7 +607,7 @@ def resolve_events_with_store(room_version, state_sets, event_map, state_res_sto
return v1.resolve_events_with_store(
state_sets, event_map, state_res_store.get_events,
)
elif room_version == RoomVersions.VDH_TEST:
elif room_version in (RoomVersions.VDH_TEST, RoomVersions.STATE_V2_TEST):
return v2.resolve_events_with_store(
state_sets, event_map, state_res_store,
)


@@ -53,6 +53,10 @@ def resolve_events_with_store(state_sets, event_map, state_res_store):
logger.debug("Computing conflicted state")
# We use event_map as a cache, so if it's None we need to initialize it
if event_map is None:
event_map = {}
# First split up the un/conflicted state
unconflicted_state, conflicted_state = _seperate(state_sets)
@@ -155,7 +159,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):
event = yield _get_event(event_id, event_map, state_res_store)
pl = None
for aid, _ in event.auth_events:
for aid in event.auth_event_ids():
aev = yield _get_event(aid, event_map, state_res_store)
if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
pl = aev
@@ -163,7 +167,7 @@ def _get_power_level_for_sender(event_id, event_map, state_res_store):
if pl is None:
# Couldn't find power level. Check if they're the creator of the room
for aid, _ in event.auth_events:
for aid in event.auth_event_ids():
aev = yield _get_event(aid, event_map, state_res_store)
if (aev.type, aev.state_key) == (EventTypes.Create, ""):
if aev.content.get("creator") == event.sender:
@@ -295,7 +299,7 @@ def _add_event_and_auth_chain_to_graph(graph, event_id, event_map,
graph.setdefault(eid, set())
event = yield _get_event(eid, event_map, state_res_store)
for aid, _ in event.auth_events:
for aid in event.auth_event_ids():
if aid in auth_diff:
if aid not in graph:
state.append(aid)
@@ -365,7 +369,7 @@ def _iterative_auth_checks(event_ids, base_state, event_map, state_res_store):
event = event_map[event_id]
auth_events = {}
for aid, _ in event.auth_events:
for aid in event.auth_event_ids():
ev = yield _get_event(aid, event_map, state_res_store)
if ev.rejected_reason is None:
@@ -413,9 +417,9 @@ def _mainline_sort(event_ids, resolved_power_event_id, event_map,
while pl:
mainline.append(pl)
pl_ev = yield _get_event(pl, event_map, state_res_store)
auth_events = pl_ev.auth_events
auth_events = pl_ev.auth_event_ids()
pl = None
for aid, _ in auth_events:
for aid in auth_events:
ev = yield _get_event(aid, event_map, state_res_store)
if (ev.type, ev.state_key) == (EventTypes.PowerLevels, ""):
pl = aid
@@ -460,10 +464,10 @@ def _get_mainline_depth_for_event(event, mainline_map, event_map, state_res_stor
if depth is not None:
defer.returnValue(depth)
auth_events = event.auth_events
auth_events = event.auth_event_ids()
event = None
for aid, _ in auth_events:
for aid in auth_events:
aev = yield _get_event(aid, event_map, state_res_store)
if (aev.type, aev.state_key) == (EventTypes.PowerLevels, ""):
event = aev


@@ -22,14 +22,19 @@ from twisted.internet import defer
from synapse.api.errors import StoreError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.background_updates import BackgroundUpdateStore
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
from ._base import Cache, SQLBaseStore, db_to_json
from ._base import Cache, db_to_json
logger = logging.getLogger(__name__)
DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
"drop_device_list_streams_non_unique_indexes"
)
class DeviceStore(SQLBaseStore):
class DeviceStore(BackgroundUpdateStore):
def __init__(self, db_conn, hs):
super(DeviceStore, self).__init__(db_conn, hs)
@@ -52,6 +57,30 @@ class DeviceStore(SQLBaseStore):
columns=["user_id", "device_id"],
)
# create a unique index on device_lists_remote_cache
self.register_background_index_update(
"device_lists_remote_cache_unique_idx",
index_name="device_lists_remote_cache_unique_id",
table="device_lists_remote_cache",
columns=["user_id", "device_id"],
unique=True,
)
# And one on device_lists_remote_extremeties
self.register_background_index_update(
"device_lists_remote_extremeties_unique_idx",
index_name="device_lists_remote_extremeties_unique_idx",
table="device_lists_remote_extremeties",
columns=["user_id"],
unique=True,
)
# once they complete, we can remove the old non-unique indexes.
self.register_background_update_handler(
DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES,
self._drop_device_list_streams_non_unique_indexes,
)
@defer.inlineCallbacks
def store_device(self, user_id, device_id,
initial_device_display_name):
@@ -239,7 +268,19 @@ class DeviceStore(SQLBaseStore):
def update_remote_device_list_cache_entry(self, user_id, device_id, content,
stream_id):
"""Updates a single user's device in the cache.
"""Updates a single device in the cache of a remote user's devicelist.
Note: assumes that we are the only thread that can be updating this user's
device list.
Args:
user_id (str): User to update device list for
device_id (str): ID of the device being updated
content (dict): new data on this device
stream_id (int): the version of the device list
Returns:
Deferred[None]
"""
return self.runInteraction(
"update_remote_device_list_cache_entry",
@@ -272,7 +313,11 @@ class DeviceStore(SQLBaseStore):
},
values={
"content": json.dumps(content),
}
},
# we don't need to lock, because we assume we are the only thread
# updating this user's devices.
lock=False,
)
txn.call_after(self._get_cached_user_device.invalidate, (user_id, device_id,))
@@ -289,11 +334,26 @@ class DeviceStore(SQLBaseStore):
},
values={
"stream_id": stream_id,
}
},
# again, we can assume we are the only thread updating this user's
# extremity.
lock=False,
)
def update_remote_device_list_cache(self, user_id, devices, stream_id):
"""Replace the cache of the remote user's devices.
"""Replace the entire cache of the remote user's devices.
Note: assumes that we are the only thread that can be updating this user's
device list.
Args:
user_id (str): User to update device list for
devices (list[dict]): list of device objects supplied over federation
stream_id (int): the version of the device list
Returns:
Deferred[None]
"""
return self.runInteraction(
"update_remote_device_list_cache",
@@ -338,7 +398,11 @@ class DeviceStore(SQLBaseStore):
},
values={
"stream_id": stream_id,
}
},
# we don't need to lock, because we can assume we are the only thread
# updating this user's extremity.
lock=False,
)
def get_devices_by_remote(self, destination, from_stream_id):
@@ -589,10 +653,14 @@ class DeviceStore(SQLBaseStore):
combined list of changes to devices, and which destinations need to be
poked. `destination` may be None if no destinations need to be poked.
"""
# We do a group by here as there can be a large number of duplicate
# entries, since we throw away device IDs.
sql = """
SELECT stream_id, user_id, destination FROM device_lists_stream
SELECT MAX(stream_id) AS stream_id, user_id, destination
FROM device_lists_stream
LEFT JOIN device_lists_outbound_pokes USING (stream_id, user_id, device_id)
WHERE ? < stream_id AND stream_id <= ?
GROUP BY user_id, destination
"""
return self._execute(
"get_all_device_list_changes_for_remotes", None,
@@ -718,3 +786,19 @@ class DeviceStore(SQLBaseStore):
"_prune_old_outbound_device_pokes",
_prune_txn,
)
@defer.inlineCallbacks
def _drop_device_list_streams_non_unique_indexes(self, progress, batch_size):
def f(conn):
txn = conn.cursor()
txn.execute(
"DROP INDEX IF EXISTS device_lists_remote_cache_id"
)
txn.execute(
"DROP INDEX IF EXISTS device_lists_remote_extremeties_id"
)
txn.close()
yield self.runWithConnection(f)
yield self._end_background_update(DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES)
defer.returnValue(1)


@@ -118,6 +118,11 @@ class EndToEndRoomKeyStore(SQLBaseStore):
these room keys.
"""
try:
version = int(version)
except ValueError:
defer.returnValue({'rooms': {}})
keyvalues = {
"user_id": user_id,
"version": version,
@@ -212,14 +217,23 @@ class EndToEndRoomKeyStore(SQLBaseStore):
Raises:
StoreError: with code 404 if there are no e2e_room_keys_versions present
Returns:
A deferred dict giving the info metadata for this backup version
A deferred dict giving the info metadata for this backup version, with
fields including:
version(str)
algorithm(str)
auth_data(object): opaque dict supplied by the client
"""
def _get_e2e_room_keys_version_info_txn(txn):
if version is None:
this_version = self._get_current_version(txn, user_id)
else:
this_version = version
try:
this_version = int(version)
except ValueError:
# Our versions are all ints so if we can't convert it to an integer,
# it isn't there.
raise StoreError(404, "No row found")
result = self._simple_select_one_txn(
txn,
@@ -236,6 +250,7 @@ class EndToEndRoomKeyStore(SQLBaseStore):
),
)
result["auth_data"] = json.loads(result["auth_data"])
result["version"] = str(result["version"])
return result
return self.runInteraction(


@@ -40,7 +40,10 @@ class EndToEndKeyStore(SQLBaseStore):
allow_none=True,
)
new_key_json = encode_canonical_json(device_keys)
# In py3 we need old_key_json to match new_key_json type. The DB
# returns unicode while encode_canonical_json returns bytes.
new_key_json = encode_canonical_json(device_keys).decode("utf-8")
if old_key_json == new_key_json:
return False
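The added .decode("utf-8") fixes a Python 3 type mismatch; a toy reproduction (the key JSON is invented):

    from canonicaljson import encode_canonical_json

    old_key_json = u'{"a":1}'                        # the DB hands back unicode
    new_key_json = encode_canonical_json({"a": 1})   # bytes: b'{"a":1}'
    assert old_key_json != new_key_json              # str never equals bytes
    assert old_key_json == new_key_json.decode("utf-8")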


@@ -477,7 +477,7 @@ class EventFederationStore(EventFederationWorkerStore):
"is_state": False,
}
for ev in events
for e_id, _ in ev.prev_events
for e_id in ev.prev_event_ids()
],
)
@@ -510,7 +510,7 @@ class EventFederationStore(EventFederationWorkerStore):
txn.executemany(query, [
(e_id, ev.room_id, e_id, ev.room_id, e_id, ev.room_id, False)
for ev in events for e_id, _ in ev.prev_events
for ev in events for e_id in ev.prev_event_ids()
if not ev.internal_metadata.is_outlier()
])


@@ -38,6 +38,7 @@ from synapse.state import StateResolutionStore
from synapse.storage.background_updates import BackgroundUpdateStore
from synapse.storage.event_federation import EventFederationStore
from synapse.storage.events_worker import EventsWorkerStore
from synapse.storage.state import StateGroupWorkerStore
from synapse.types import RoomStreamToken, get_domain_from_id
from synapse.util import batch_iter
from synapse.util.async_helpers import ObservableDeferred
@@ -205,7 +206,8 @@ def _retry_on_integrity_error(func):
# inherits from EventFederationStore so that we can call _update_backward_extremities
# and _handle_mult_prev_events (though arguably those could both be moved in here)
class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore):
class EventsStore(StateGroupWorkerStore, EventFederationStore, EventsWorkerStore,
BackgroundUpdateStore):
EVENT_ORIGIN_SERVER_TS_NAME = "event_origin_server_ts"
EVENT_FIELDS_SENDER_URL_UPDATE_NAME = "event_fields_sender_url"
@@ -414,7 +416,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
)
if len_1:
all_single_prev_not_state = all(
len(event.prev_events) == 1
len(event.prev_event_ids()) == 1
and not event.is_state()
for event, ctx in ev_ctx_rm
)
@@ -438,7 +440,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
# guess this by looking at the prev_events and checking
# if they match the current forward extremities.
for ev, _ in ev_ctx_rm:
prev_event_ids = set(e for e, _ in ev.prev_events)
prev_event_ids = set(ev.prev_event_ids())
if latest_event_ids == prev_event_ids:
state_delta_reuse_delta_counter.inc()
break
@@ -549,7 +551,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
result.difference_update(
e_id
for event in new_events
for e_id, _ in event.prev_events
for e_id in event.prev_event_ids()
)
# Finally, remove any events which are prev_events of any existing events.
@@ -867,7 +869,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
"auth_id": auth_id,
}
for event, _ in events_and_contexts
for auth_id, _ in event.auth_events
for auth_id in event.auth_event_ids()
if event.is_state()
],
)
@@ -2034,55 +2036,37 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
logger.info("[purge] finding redundant state groups")
# Get all state groups that are only referenced by events that are
# to be deleted.
# This works by first getting state groups that we may want to delete,
# joining against event_to_state_groups to get events that use that
# state group, then left joining against events_to_purge again. Any
# state group where the left join produce *no nulls* are referenced
# only by events that are going to be purged.
# Get all state groups that are referenced by events that are to be
# deleted. We then go and check if they are referenced by other events
# or state groups, and if not we delete them.
txn.execute("""
SELECT state_group FROM
(
SELECT DISTINCT state_group FROM events_to_purge
INNER JOIN event_to_state_groups USING (event_id)
) AS sp
INNER JOIN event_to_state_groups USING (state_group)
LEFT JOIN events_to_purge AS ep USING (event_id)
GROUP BY state_group
HAVING SUM(CASE WHEN ep.event_id IS NULL THEN 1 ELSE 0 END) = 0
SELECT DISTINCT state_group FROM events_to_purge
INNER JOIN event_to_state_groups USING (event_id)
""")
state_rows = txn.fetchall()
logger.info("[purge] found %i redundant state groups", len(state_rows))
referenced_state_groups = set(sg for sg, in txn)
logger.info(
"[purge] found %i referenced state groups",
len(referenced_state_groups),
)
# make a set of the redundant state groups, so that we can look them up
# efficiently
state_groups_to_delete = set([sg for sg, in state_rows])
logger.info("[purge] finding state groups that can be deleted")
# Now we get all the state groups that rely on these state groups
logger.info("[purge] finding state groups which depend on redundant"
" state groups")
remaining_state_groups = []
for i in range(0, len(state_rows), 100):
chunk = [sg for sg, in state_rows[i:i + 100]]
# look for state groups whose prev_state_group is one we are about
# to delete
rows = self._simple_select_many_txn(
txn,
table="state_group_edges",
column="prev_state_group",
iterable=chunk,
retcols=["state_group"],
keyvalues={},
state_groups_to_delete, remaining_state_groups = (
self._find_unreferenced_groups_during_purge(
txn, referenced_state_groups,
)
remaining_state_groups.extend(
row["state_group"] for row in rows
)
# exclude state groups we are about to delete: no point in
# updating them
if row["state_group"] not in state_groups_to_delete
)
logger.info(
"[purge] found %i state groups to delete",
len(state_groups_to_delete),
)
logger.info(
"[purge] de-delta-ing %i remaining state groups",
len(remaining_state_groups),
)
# Now we turn the state groups that reference to-be-deleted state
# groups to non delta versions.
@@ -2127,11 +2111,11 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
logger.info("[purge] removing redundant state groups")
txn.executemany(
"DELETE FROM state_groups_state WHERE state_group = ?",
state_rows
((sg,) for sg in state_groups_to_delete),
)
txn.executemany(
"DELETE FROM state_groups WHERE id = ?",
state_rows
((sg,) for sg in state_groups_to_delete),
)
logger.info("[purge] removing events from event_to_state_groups")
@@ -2227,6 +2211,85 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
logger.info("[purge] done")
def _find_unreferenced_groups_during_purge(self, txn, state_groups):
"""Used when purging history to figure out which state groups can be
deleted and which need to be de-delta'ed (due to one of their prev groups
being scheduled for deletion).
Args:
txn
state_groups (set[int]): Set of state groups referenced by events
that are going to be deleted.
Returns:
tuple[set[int], set[int]]: The set of state groups that can be
deleted and the set of state groups that need to be de-delta'ed
"""
# Graph of state group -> previous group
graph = {}
# Set of state groups that we have found to be referenced by events
# that are not about to be purged
referenced_groups = set()
# Set of state groups we've already seen
state_groups_seen = set(state_groups)
# Set of state groups to handle next.
next_to_search = set(state_groups)
while next_to_search:
# We bound size of groups we're looking up at once, to stop the
# SQL query getting too big
if len(next_to_search) < 100:
current_search = next_to_search
next_to_search = set()
else:
current_search = set(itertools.islice(next_to_search, 100))
next_to_search -= current_search
# Check if state groups are referenced
sql = """
SELECT DISTINCT state_group FROM event_to_state_groups
LEFT JOIN events_to_purge AS ep USING (event_id)
WHERE state_group IN (%s) AND ep.event_id IS NULL
""" % (",".join("?" for _ in current_search),)
txn.execute(sql, list(current_search))
referenced = set(sg for sg, in txn)
referenced_groups |= referenced
# We don't continue iterating up the state group graphs for state
# groups that are referenced.
current_search -= referenced
rows = self._simple_select_many_txn(
txn,
table="state_group_edges",
column="prev_state_group",
iterable=current_search,
keyvalues={},
retcols=("prev_state_group", "state_group",),
)
prevs = set(row["state_group"] for row in rows)
# We don't bother re-handling groups we've already seen
prevs -= state_groups_seen
next_to_search |= prevs
state_groups_seen |= prevs
for row in rows:
# Note: Each state group can have at most one prev group
graph[row["state_group"]] = row["prev_state_group"]
to_delete = state_groups_seen - referenced_groups
to_dedelta = set()
for sg in referenced_groups:
prev_sg = graph.get(sg)
if prev_sg and prev_sg in to_delete:
to_dedelta.add(sg)
return to_delete, to_dedelta
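A toy walk-through of the return values above (state group IDs invented): with a prev-group chain 1 <- 2 <- 3 where only group 2 is still referenced by surviving events, group 2 must be de-delta'ed because its predecessor is deleted.

    graph = {3: 2, 2: 1}        # state_group -> prev_state_group
    referenced_groups = {2}     # still referenced by non-purged events
    state_groups_seen = {1, 2, 3}

    to_delete = state_groups_seen - referenced_groups
    to_dedelta = {sg for sg in referenced_groups if graph.get(sg) in to_delete}
    assert to_delete == {1, 3}
    assert to_dedelta == {2}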
@defer.inlineCallbacks
def is_event_after(self, event_id1, event_id2):
"""Returns True if event_id1 is after event_id2 in the stream


@@ -96,37 +96,38 @@ class MonthlyActiveUsersStore(SQLBaseStore):
txn.execute(sql, query_args)
# If MAU user count still exceeds the MAU threshold, then delete on
# a least recently active basis.
# Note it is not possible to write this query using OFFSET due to
# incompatibilities in how sqlite and postgres support the feature.
# sqlite requires 'LIMIT -1 OFFSET ?', the LIMIT must be present
# While Postgres does not require 'LIMIT', but also does not support
# negative LIMIT values. So there is no way to write it that both can
# support
safe_guard = self.hs.config.max_mau_value - len(self.reserved_users)
# Must be greater than zero for postgres
safe_guard = safe_guard if safe_guard > 0 else 0
query_args = [safe_guard]
if self.hs.config.limit_usage_by_mau:
# If MAU user count still exceeds the MAU threshold, then delete on
# a least recently active basis.
# Note it is not possible to write this query using OFFSET due to
# incompatibilities in how sqlite and postgres support the feature.
# sqlite requires 'LIMIT -1 OFFSET ?' (the LIMIT must be present),
# while Postgres does not require 'LIMIT' but does not support
# negative LIMIT values. So there is no way to write it that both
# can support.
safe_guard = self.hs.config.max_mau_value - len(self.reserved_users)
# Must be greater than zero for postgres
safe_guard = safe_guard if safe_guard > 0 else 0
query_args = [safe_guard]
base_sql = """
DELETE FROM monthly_active_users
WHERE user_id NOT IN (
SELECT user_id FROM monthly_active_users
ORDER BY timestamp DESC
LIMIT ?
base_sql = """
DELETE FROM monthly_active_users
WHERE user_id NOT IN (
SELECT user_id FROM monthly_active_users
ORDER BY timestamp DESC
LIMIT ?
)
"""
# Need if/else since 'AND user_id NOT IN ({})' fails on Postgres
# when len(reserved_users) == 0. Works fine on sqlite.
if len(self.reserved_users) > 0:
query_args.extend(self.reserved_users)
sql = base_sql + """ AND user_id NOT IN ({})""".format(
','.join(questionmarks)
)
"""
# Need if/else since 'AND user_id NOT IN ({})' fails on Postgres
# when len(reserved_users) == 0. Works fine on sqlite.
if len(self.reserved_users) > 0:
query_args.extend(self.reserved_users)
sql = base_sql + """ AND user_id NOT IN ({})""".format(
','.join(questionmarks)
)
else:
sql = base_sql
txn.execute(sql, query_args)
else:
sql = base_sql
txn.execute(sql, query_args)
yield self.runInteraction("reap_monthly_active_users", _reap_users)
# It seems poor to invalidate the whole cache, Postgres supports
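A worked example of the safeguard arithmetic above (configuration values invented): the limit on rows to keep is reduced by the number of reserved users, which the NOT IN clause protects from deletion, and is clamped at zero because Postgres rejects negative LIMIT values.

    max_mau_value = 50
    reserved_users = ["@support:hs.example.com", "@admin:hs.example.com"]
    safe_guard = max_mau_value - len(reserved_users)
    safe_guard = safe_guard if safe_guard > 0 else 0   # Postgres needs LIMIT >= 0
    assert safe_guard == 48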
@@ -252,8 +253,7 @@ class MonthlyActiveUsersStore(SQLBaseStore):
Args:
user_id(str): the user_id to query
"""
if self.hs.config.limit_usage_by_mau:
if self.hs.config.limit_usage_by_mau or self.hs.config.mau_stats_only:
# Trial users and guests should not be included as part of MAU group
is_guest = yield self.is_guest(user_id)
if is_guest:
@@ -271,8 +271,14 @@ class MonthlyActiveUsersStore(SQLBaseStore):
# but only update if we have not previously seen the user for
# LAST_SEEN_GRANULARITY ms
if last_seen_timestamp is None:
count = yield self.get_monthly_active_count()
if count < self.hs.config.max_mau_value:
# In the case where mau_stats_only is True and limit_usage_by_mau is
# False, there is no point in checking get_monthly_active_count - it
# adds no value and will break the logic if max_mau_value is exceeded.
if not self.hs.config.limit_usage_by_mau:
yield self.upsert_monthly_active_user(user_id)
else:
count = yield self.get_monthly_active_count()
if count < self.hs.config.max_mau_value:
yield self.upsert_monthly_active_user(user_id)
elif now - last_seen_timestamp > LAST_SEEN_GRANULARITY:
yield self.upsert_monthly_active_user(user_id)


@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)
# Remember to update this number every time a change is made to database
# schema files, so the users will be informed on server restarts.
SCHEMA_VERSION = 51
SCHEMA_VERSION = 52
dir_path = os.path.abspath(os.path.dirname(__file__))


@@ -47,7 +47,7 @@ class RoomWorkerStore(SQLBaseStore):
Args:
room_id (str): The ID of the room to retrieve.
Returns:
A namedtuple containing the room information, or an empty list.
A dict containing the room information, or None if the room is unknown.
"""
return self._simple_select_one(
table="rooms",


@@ -20,9 +20,6 @@ CREATE TABLE device_lists_remote_cache (
content TEXT NOT NULL
);
CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
-- The last update we got for a user. Empty if we're not receiving updates for
-- that user.
CREATE TABLE device_lists_remote_extremeties (
@@ -30,7 +27,11 @@ CREATE TABLE device_lists_remote_extremeties (
stream_id TEXT NOT NULL
);
CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
-- we used to create non-unique indexes on these tables, but as of update 52 we create
-- unique indexes concurrently:
--
-- CREATE INDEX device_lists_remote_cache_id ON device_lists_remote_cache(user_id, device_id);
-- CREATE INDEX device_lists_remote_extremeties_id ON device_lists_remote_extremeties(user_id, stream_id);
-- Stream of device lists updates. Includes both local and remotes


@@ -0,0 +1,19 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- This is needed to efficiently check for unreferenced state groups during
-- purge. Added event_to_state_groups(state_group) index
INSERT into background_updates (update_name, progress_json)
VALUES ('event_to_state_groups_sg_index', '{}');
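The index pays off in the purge path, which repeatedly asks whether a state group is still referenced by any event; without an index on event_to_state_groups(state_group) each check is a sequential scan. A sketch of that check (the query text is illustrative, not Synapse's exact SQL):

def state_group_is_referenced(txn, state_group):
    # fast with the new index on event_to_state_groups(state_group)
    txn.execute(
        "SELECT 1 FROM event_to_state_groups WHERE state_group = ? LIMIT 1",
        (state_group,),
    )
    return txn.fetchone() is not None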


@@ -0,0 +1,36 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- register a background update which will create a unique index on
-- device_lists_remote_cache
INSERT into background_updates (update_name, progress_json)
VALUES ('device_lists_remote_cache_unique_idx', '{}');
-- and one on device_lists_remote_extremeties
INSERT into background_updates (update_name, progress_json, depends_on)
VALUES (
'device_lists_remote_extremeties_unique_idx', '{}',
-- doesn't really depend on this, but we need to make sure both happen
-- before we drop the old indexes.
'device_lists_remote_cache_unique_idx'
);
-- once they complete, we can drop the old indexes.
INSERT into background_updates (update_name, progress_json, depends_on)
VALUES (
'drop_device_list_streams_non_unique_indexes', '{}',
'device_lists_remote_extremeties_unique_idx'
);
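The depends_on column serializes the three updates: both unique indexes must finish before the drop runs, even though the second index does not logically depend on the first. A hedged sketch of the handler a store might register for the final update; the body is inferred from the update's name, not copied from Synapse:

from twisted.internet import defer

DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES = (
    "drop_device_list_streams_non_unique_indexes"
)

def register_drop_handler(store):
    @defer.inlineCallbacks
    def drop_old_indexes(progress, batch_size):
        def f(conn):
            # assumed body: remove the indexes superseded by the unique ones
            txn = conn.cursor()
            txn.execute("DROP INDEX IF EXISTS device_lists_remote_cache_id")
            txn.execute("DROP INDEX IF EXISTS device_lists_remote_extremeties_id")
            txn.close()

        yield store.runWithConnection(f)
        yield store._end_background_update(
            DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES
        )
        defer.returnValue(1)

    store.register_background_update_handler(
        DROP_DEVICE_LIST_STREAMS_NON_UNIQUE_INDEXES, drop_old_indexes
    )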


@@ -0,0 +1,53 @@
/* Copyright 2018 New Vector Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
/* Change version column to an integer so we can do MAX() sensibly
*/
CREATE TABLE e2e_room_keys_versions_new (
user_id TEXT NOT NULL,
version BIGINT NOT NULL,
algorithm TEXT NOT NULL,
auth_data TEXT NOT NULL,
deleted SMALLINT DEFAULT 0 NOT NULL
);
INSERT INTO e2e_room_keys_versions_new
SELECT user_id, CAST(version as BIGINT), algorithm, auth_data, deleted FROM e2e_room_keys_versions;
DROP TABLE e2e_room_keys_versions;
ALTER TABLE e2e_room_keys_versions_new RENAME TO e2e_room_keys_versions;
CREATE UNIQUE INDEX e2e_room_keys_versions_idx ON e2e_room_keys_versions(user_id, version);
/* Change e2e_room_keys to match
*/
CREATE TABLE e2e_room_keys_new (
user_id TEXT NOT NULL,
room_id TEXT NOT NULL,
session_id TEXT NOT NULL,
version BIGINT NOT NULL,
first_message_index INT,
forwarded_count INT,
is_verified BOOLEAN,
session_data TEXT NOT NULL
);
INSERT INTO e2e_room_keys_new
SELECT user_id, room_id, session_id, CAST(version as BIGINT), first_message_index, forwarded_count, is_verified, session_data FROM e2e_room_keys;
DROP TABLE e2e_room_keys;
ALTER TABLE e2e_room_keys_new RENAME TO e2e_room_keys;
CREATE UNIQUE INDEX e2e_room_keys_idx ON e2e_room_keys(user_id, room_id, session_id);
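The CAST matters because MAX() over a TEXT column compares lexicographically, so backup version "9" sorts above "10" and allocating a tenth backup version misbehaves. A self-contained sqlite3 demonstration (standalone, not Synapse code):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE v_text (version TEXT)")
conn.execute("CREATE TABLE v_int  (version BIGINT)")
for v in ("9", "10"):
    conn.execute("INSERT INTO v_text VALUES (?)", (v,))
    conn.execute("INSERT INTO v_int  VALUES (?)", (int(v),))

print(conn.execute("SELECT MAX(version) FROM v_text").fetchone()[0])  # '9' (wrong)
print(conn.execute("SELECT MAX(version) FROM v_int").fetchone()[0])   # 10  (right)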


@@ -1257,6 +1257,7 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
     STATE_GROUP_DEDUPLICATION_UPDATE_NAME = "state_group_state_deduplication"
     STATE_GROUP_INDEX_UPDATE_NAME = "state_group_state_type_index"
     CURRENT_STATE_INDEX_UPDATE_NAME = "current_state_members_idx"
+    EVENT_STATE_GROUP_INDEX_UPDATE_NAME = "event_to_state_groups_sg_index"
 
     def __init__(self, db_conn, hs):
         super(StateStore, self).__init__(db_conn, hs)
@@ -1275,6 +1276,12 @@ class StateStore(StateGroupWorkerStore, BackgroundUpdateStore):
             columns=["state_key"],
             where_clause="type='m.room.member'",
         )
+        self.register_background_index_update(
+            self.EVENT_STATE_GROUP_INDEX_UPDATE_NAME,
+            index_name="event_to_state_groups_sg_index",
+            table="event_to_state_groups",
+            columns=["state_group"],
+        )
 
     def _store_event_state_mappings_txn(self, txn, events_and_contexts):
         state_groups = {}


@@ -169,8 +169,8 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         self.assertEqual(res, 404)
 
     @defer.inlineCallbacks
-    def test_get_missing_room_keys(self):
-        """Check that we get a 404 on querying missing room_keys
+    def test_get_missing_backup(self):
+        """Check that we get a 404 on querying missing backup
         """
         res = None
         try:
@@ -179,19 +179,20 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             res = e.code
         self.assertEqual(res, 404)
 
-        # check we also get a 404 even if the version is valid
+    @defer.inlineCallbacks
+    def test_get_missing_room_keys(self):
+        """Check we get an empty response from an empty backup
+        """
         version = yield self.handler.create_version(self.local_user, {
             "algorithm": "m.megolm_backup.v1",
             "auth_data": "first_version_auth_data",
         })
         self.assertEqual(version, "1")
 
-        res = None
-        try:
-            yield self.handler.get_room_keys(self.local_user, version)
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(self.local_user, version)
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
     # TODO: test the locking semantics when uploading room_keys,
     # although this is probably best done in sytest
@@ -345,17 +346,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
         # check for bulk-delete
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
         yield self.handler.delete_room_keys(self.local_user, version)
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
         # check for bulk-delete per room
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -364,17 +363,15 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             version,
             room_id="!abc:matrix.org",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
 
         # check for bulk-delete per session
         yield self.handler.upload_room_keys(self.local_user, version, room_keys)
@@ -384,14 +381,12 @@ class E2eRoomKeysHandlerTestCase(unittest.TestCase):
             room_id="!abc:matrix.org",
             session_id="c0ff33",
         )
-        res = None
-        try:
-            yield self.handler.get_room_keys(
-                self.local_user,
-                version,
-                room_id="!abc:matrix.org",
-                session_id="c0ff33",
-            )
-        except errors.SynapseError as e:
-            res = e.code
-        self.assertEqual(res, 404)
+        res = yield self.handler.get_room_keys(
+            self.local_user,
+            version,
+            room_id="!abc:matrix.org",
+            session_id="c0ff33",
+        )
+        self.assertDictEqual(res, {
+            "rooms": {}
+        })
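Taken together, the rewritten tests pin down the new contract: querying a missing backup version is still a 404, but querying an existing, empty backup returns an empty structure. A shape-only sketch of that behaviour (illustrative, not Synapse's real handler):

def get_room_keys_response(version_exists, stored_rooms):
    # a missing backup version is still an error (Synapse raises a 404
    # SynapseError there); this stand-in raises LookupError instead
    if not version_exists:
        raise LookupError("404: unknown backup version")
    # an empty backup is not an error: it is just an empty set of rooms
    return {"rooms": stored_rooms or {}}

assert get_room_keys_response(True, {}) == {"rooms": {}}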

tests/push/__init__.py (new, empty file)

Some files were not shown because too many files have changed in this diff.