
Compare commits


693 Commits

Author SHA1 Message Date
Erik Johnston
897d976c1e Merge pull request #3755 from matrix-org/erikj/fix_server_notice_tags
Fix up tagging server notice rooms.
2018-08-24 15:04:41 +01:00
Erik Johnston
45fc0ead10 Newsfile 2018-08-24 15:04:29 +01:00
Erik Johnston
cdd24449ee Ensure we wake up /sync when we add tag to notice room 2018-08-24 14:50:03 +01:00
Erik Johnston
14d49c51db Make content of tag an empty object rather than null 2018-08-24 14:44:16 +01:00
Erik Johnston
84b4e76fed Merge pull request #3754 from matrix-org/erikj/fix_whitelist
Allow federation_domain_whitelist to be empty list
2018-08-24 12:23:39 +01:00
Erik Johnston
c780d84d66 Newsfile 2018-08-24 12:13:50 +01:00
Erik Johnston
1d67b13674 Fix bug when federation_domain_whitelist is an empty list
Outbound federation was incorrectly allowed when the config option was
set to an empty list
2018-08-24 12:13:12 +01:00
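The pitfall behind this fix is Python truthiness: an empty list is falsy, so a naive check cannot distinguish "block everything" from "no whitelist configured". A minimal sketch with a hypothetical helper, not Synapse's actual code:

```python
def is_domain_allowed(domain, federation_domain_whitelist):
    # Buggy version:
    #   if federation_domain_whitelist:
    #       return domain in federation_domain_whitelist
    #   return True   # an empty list falls through and allows everything!

    # Fixed version: only a missing (None) whitelist allows all domains.
    if federation_domain_whitelist is None:
        return True
    return domain in federation_domain_whitelist

assert is_domain_allowed("matrix.org", None)        # no whitelist configured
assert not is_domain_allowed("matrix.org", [])      # empty list blocks all
```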
Erik Johnston
92d50e3c2a Merge pull request #3753 from matrix-org/erikj/fix_no_server_noticse
Fix bug where we broke sync when using limit_usage_by_mau
2018-08-24 11:56:08 +01:00
Richard van der Hoff
e94cdbaecf Merge pull request #3751 from matrix-org/rav/twisted_17
Pin to twisted 17.1 or later
2018-08-24 11:55:52 +01:00
Erik Johnston
9b1bc593c5 Newsfile 2018-08-24 11:36:39 +01:00
Erik Johnston
7f147d623b Fix bug where we broke sync when using limit_usage_by_mau
We assumed that we always had server notices configured, but that is
not always true
2018-08-24 11:33:50 +01:00
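A guard of the shape implied by the fix (class and attribute names illustrative, not the actual Synapse code): bail out early when no server-notices user is configured rather than assuming one always exists.

```python
class ResourceLimitsServerNotices(object):
    def __init__(self, server_notices_mxid):
        self._server_notices_mxid = server_notices_mxid  # may be None

    def maybe_send_server_notice(self, user_id):
        if self._server_notices_mxid is None:
            # Server notices are not configured on this HS; nothing to do.
            return
        ...  # build and send the notice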
Erik Johnston
15e8dd2ccc Merge pull request #3749 from matrix-org/erikj/add_trial_users
Implement trial users
2018-08-24 10:10:58 +01:00
Richard van der Hoff
05fe8a6c1c changelog 2018-08-24 10:04:52 +01:00
Richard van der Hoff
f584d6108f Pin to twisted 17.1 or later
Fixes https://github.com/matrix-org/synapse/issues/3741.
2018-08-24 10:02:31 +01:00
Erik Johnston
28d7b546cb Newsfile 2018-08-23 19:18:52 +01:00
Erik Johnston
f2cbbda956 Unit tests 2018-08-23 19:17:19 +01:00
Erik Johnston
cd77270a66 Implement trial users 2018-08-23 19:17:19 +01:00
Erik Johnston
242a0483eb Merge pull request #3747 from matrix-org/erikj/fix_multiple_sends_notice
Fix bug where we resent "limit exceeded" server notices
2018-08-23 18:10:08 +01:00
Erik Johnston
60cf1c6e16 Newsfile 2018-08-23 16:34:32 +01:00
Erik Johnston
7e6e588e60 Fix bug where we resent "limit exceeded" server notices
This was due to a bug where we mutated a cached event's contents
2018-08-23 16:21:20 +01:00
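A generic illustration of the cause (not Synapse's actual event cache): handing out the cached object itself lets a caller's edit leak back into the cache, so later checks see the mutated contents. Returning a copy keeps the cache pristine.

```python
import copy

_event_cache = {"$event1": {"type": "m.room.message", "content": {"limit": 5}}}

def get_event_buggy(event_id):
    return _event_cache[event_id]                 # caller can corrupt the cache

def get_event_fixed(event_id):
    return copy.deepcopy(_event_cache[event_id])  # caller gets a private copy

ev = get_event_fixed("$event1")
ev["content"]["limit"] = 99
assert _event_cache["$event1"]["content"]["limit"] == 5  # cache intact
```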
Travis Ralston
1ca2744621 Merge pull request #3734 from matrix-org/travis/worker-docs
Reference that the federation_reader needs the HTTP replication port set
2018-08-23 07:51:46 -06:00
Erik Johnston
dddb5aa7bb Merge pull request #3746 from matrix-org/erikj/mau_fixups
Fix MAU invalidation due to missing yield
2018-08-23 10:55:14 +01:00
Erik Johnston
4eb8408ed2 Newsfile 2018-08-23 10:46:13 +01:00
Erik Johnston
c5842dff1a Actually run the tests 2018-08-23 10:35:54 +01:00
Erik Johnston
7a0da69eee Add missing yield 2018-08-23 10:28:12 +01:00
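The "missing yield" here is the classic Twisted inlineCallbacks pitfall: without `yield`, the Deferred is fired and forgotten, so the work (here, cache invalidation) may not have completed and errors are silently dropped. A minimal sketch with a stand-in store:

```python
from twisted.internet import defer

class FakeStore(object):
    @defer.inlineCallbacks
    def invalidate_mau_cache(self):
        yield defer.succeed(None)  # pretend to hit the database

@defer.inlineCallbacks
def update_mau(store):
    # Buggy version: store.invalidate_mau_cache()  -- the Deferred is
    # never awaited, so completion and failures are invisible.
    yield store.invalidate_mau_cache()  # fixed: actually wait for it

d = update_mau(FakeStore())
```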
Erik Johnston
a5806aba27 Merge pull request #3680 from matrix-org/neilj/server_notices_on_blocking
server notices on resource limit blocking
2018-08-22 17:18:28 +01:00
Erik Johnston
fd2dbf1836 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-22 17:06:10 +01:00
Erik Johnston
9643a6f7f2 Update notice format 2018-08-22 17:00:29 +01:00
Richard van der Hoff
c7181dcc6c Merge branch 'master' into develop 2018-08-22 14:37:11 +01:00
Richard van der Hoff
74854a9719 Use recaptcha_ajax.js directly from Google
This was originally done in commit c75b71a397,
but got reverted on this branch due to the PR (#3677) being based on the wrong
branch.

We're ready to merge this to master now, so let's make it match
release-v0.33.3.
2018-08-22 14:30:49 +01:00
Richard van der Hoff
48fec67536 Merge tag 'v0.33.3'
Features
--------

- Add support for the SNI extension to federation TLS connections. Thanks to @vojeroen! ([\#3439](https://github.com/matrix-org/synapse/issues/3439))
- Add /_media/r0/config ([\#3184](https://github.com/matrix-org/synapse/issues/3184))
- speed up /members API and add `at` and `membership` params as per MSC1227 ([\#3568](https://github.com/matrix-org/synapse/issues/3568))
- implement `summary` block in /sync response as per MSC688 ([\#3574](https://github.com/matrix-org/synapse/issues/3574))
- Add lazy-loading support to /messages as per MSC1227 ([\#3589](https://github.com/matrix-org/synapse/issues/3589))
- Add ability to limit number of monthly active users on the server ([\#3633](https://github.com/matrix-org/synapse/issues/3633))
- Support more federation endpoints on workers ([\#3653](https://github.com/matrix-org/synapse/issues/3653))
- Basic support for room versioning ([\#3654](https://github.com/matrix-org/synapse/issues/3654))
- Ability to disable client/server Synapse via conf toggle ([\#3655](https://github.com/matrix-org/synapse/issues/3655))
- Ability to whitelist specific threepids against monthly active user limiting ([\#3662](https://github.com/matrix-org/synapse/issues/3662))
- Add some metrics for the appservice and federation event sending loops ([\#3664](https://github.com/matrix-org/synapse/issues/3664))
- Where server is disabled, block ability for locked out users to read new messages ([\#3670](https://github.com/matrix-org/synapse/issues/3670))
- set admin uri via config, to be used in error messages where the user should contact the administrator ([\#3687](https://github.com/matrix-org/synapse/issues/3687))
- Synapse's presence functionality can now be disabled with the "use_presence" configuration option. ([\#3694](https://github.com/matrix-org/synapse/issues/3694))
- For resource limit blocked users, prevent writing into rooms ([\#3708](https://github.com/matrix-org/synapse/issues/3708))

Bugfixes
--------

- Fix occasional glitches in the synapse_event_persisted_position metric ([\#3658](https://github.com/matrix-org/synapse/issues/3658))
- Fix bug on deleting 3pid when using identity servers that don't support unbind API ([\#3661](https://github.com/matrix-org/synapse/issues/3661))
- Make the tests pass on Twisted < 18.7.0 ([\#3676](https://github.com/matrix-org/synapse/issues/3676))
- Don’t ship recaptcha_ajax.js, use it directly from Google ([\#3677](https://github.com/matrix-org/synapse/issues/3677))
- Fixes test_reap_monthly_active_users so it passes under postgres ([\#3681](https://github.com/matrix-org/synapse/issues/3681))
- Fix mau blocking calculation bug on login ([\#3689](https://github.com/matrix-org/synapse/issues/3689))
- Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users ([\#3692](https://github.com/matrix-org/synapse/issues/3692))
- Improve HTTP request logging to include all requests ([\#3700](https://github.com/matrix-org/synapse/issues/3700))
- Avoid timing out requests while we are streaming back the response ([\#3701](https://github.com/matrix-org/synapse/issues/3701))
- Support more federation endpoints on workers ([\#3705](https://github.com/matrix-org/synapse/issues/3705), [\#3713](https://github.com/matrix-org/synapse/issues/3713))
- Fix "Starting db txn 'get_all_updated_receipts' from sentinel context" warning ([\#3710](https://github.com/matrix-org/synapse/issues/3710))
- Fix bug where `state_cache` cache factor ignored environment variables ([\#3719](https://github.com/matrix-org/synapse/issues/3719))
- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs ([\#3723](https://github.com/matrix-org/synapse/issues/3723))
- Fix bug introduced in v0.33.3rc1 which made the ToS give a 500 error ([\#3732](https://github.com/matrix-org/synapse/issues/3732))

Deprecations and Removals
-------------------------

- The Shared-Secret registration method of the legacy v1/register REST endpoint has been removed. For a replacement, please see [the admin/register API documentation](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/register_api.rst). ([\#3703](https://github.com/matrix-org/synapse/issues/3703))

Internal Changes
----------------

- The test suite now can run under PostgreSQL. ([\#3423](https://github.com/matrix-org/synapse/issues/3423))
- Refactor HTTP replication endpoints to reduce code duplication ([\#3632](https://github.com/matrix-org/synapse/issues/3632))
- Tests now correctly execute on Python 3. ([\#3647](https://github.com/matrix-org/synapse/issues/3647))
- Sytests can now be run inside a Docker container. ([\#3660](https://github.com/matrix-org/synapse/issues/3660))
- Port over enough to Python 3 to allow the sytests to start. ([\#3668](https://github.com/matrix-org/synapse/issues/3668))
- Update docker base image from alpine 3.7 to 3.8. ([\#3669](https://github.com/matrix-org/synapse/issues/3669))
- Rename synapse.util.async to synapse.util.async_helpers to mitigate async becoming a keyword on Python 3.7. ([\#3678](https://github.com/matrix-org/synapse/issues/3678))
- Synapse's tests are now formatted with the black autoformatter. ([\#3679](https://github.com/matrix-org/synapse/issues/3679))
- Implemented a new testing base class to reduce test boilerplate. ([\#3684](https://github.com/matrix-org/synapse/issues/3684))
- Rename MAU prometheus metrics ([\#3690](https://github.com/matrix-org/synapse/issues/3690))
- add new error type ResourceLimit ([\#3707](https://github.com/matrix-org/synapse/issues/3707))
- Logcontexts for replication command handlers ([\#3709](https://github.com/matrix-org/synapse/issues/3709))
- Update admin register API documentation to reference a real user ID. ([\#3712](https://github.com/matrix-org/synapse/issues/3712))
2018-08-22 14:28:55 +01:00
Richard van der Hoff
3504982cb7 changelog for 0.33.3 2018-08-22 14:07:44 +01:00
Richard van der Hoff
4e5a4549b6 bump version to 0.33.3 2018-08-22 14:07:10 +01:00
Richard van der Hoff
9b7d9d8ba0 Update attributions and PR links in changelog 2018-08-22 14:06:20 +01:00
Erik Johnston
db10f553ba Merge pull request #3724 from Half-Shot/hs/guest-fetch-event
Allow guests to use /rooms/:roomId/event/:eventId
2018-08-22 13:41:08 +01:00
Erik Johnston
764030cf63 Merge pull request #3659 from matrix-org/erikj/split_profiles
Allow profile updates to happen on workers
2018-08-22 11:35:55 +01:00
Erik Johnston
8432e2ebd7 Rename WorkerProfileHandler to BaseProfileHandler 2018-08-22 10:13:40 +01:00
Erik Johnston
a81f140880 Add assert to ensure handler is only run on master 2018-08-22 10:11:21 +01:00
Erik Johnston
47b25ba5f3 Remove redundant vars 2018-08-22 10:09:05 +01:00
Erik Johnston
3bf8bab8f9 Merge pull request #3673 from matrix-org/erikj/refactor_state_handler
Refactor state module to support multiple room versions
2018-08-22 10:04:55 +01:00
Richard van der Hoff
a4cf660a32 Merge pull request #3735 from matrix-org/travis/federation-spelling
limt -> limit
2018-08-22 09:34:21 +01:00
Richard van der Hoff
0d568ff403 Merge remote-tracking branch 'origin/release-v0.33.3' into develop 2018-08-22 09:15:44 +01:00
Richard van der Hoff
d7585a4c83 Merge pull request #3732 from matrix-org/rav/fix_gdpr_consent
Fix 500 error from /consent form
2018-08-22 09:15:06 +01:00
Travis Ralston
365f588d98 Create 3734.misc 2018-08-21 23:38:38 -06:00
Travis Ralston
eb7be75a10 Create 3735.misc 2018-08-21 23:38:06 -06:00
Travis Ralston
dd0ac1614c Reference that the federation_reader needs the HTTP replication port set 2018-08-21 23:35:50 -06:00
Matthew Hodgson
bb81e78ec6 Split the state_group_cache in two (#3726)
Splits the state_group_cache in two.

One half contains normal state events; the other contains member events.

The idea is that the lazyloading common case of: "I want a subset of member events plus all of the other state" can be accomplished efficiently by splitting the cache into two, and asking for "all events" from the non-members cache, and "just these keys" from the members cache. This means we can avoid having to make DictionaryCache aware of this sort of complicated query, whilst letting LL requests benefit from the caching.

Previously we were unable to sensibly use the caching and had to pull all state from the DB irrespective of the filtering, which made things slow.  Hopefully fixes https://github.com/matrix-org/synapse/issues/3720.
2018-08-22 00:56:37 +02:00
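A toy model of the split described above (illustration only; the real code works on DictionaryCache entries keyed by state group): member state lives in its own cache, so a lazy-loading request combines "all non-member state" with "just these members".

```python
state = {
    ("m.room.create", ""): "$create",
    ("m.room.member", "@a:x"): "$mem_a",
    ("m.room.member", "@b:x"): "$mem_b",
}

non_member_cache = {k: v for k, v in state.items() if k[0] != "m.room.member"}
member_cache = {k: v for k, v in state.items() if k[0] == "m.room.member"}

def lazy_load_state(wanted_members):
    result = dict(non_member_cache)   # "all events" from the non-members cache
    for user_id in wanted_members:    # "just these keys" from the members cache
        key = ("m.room.member", user_id)
        if key in member_cache:
            result[key] = member_cache[key]
    return result

print(lazy_load_state(["@a:x"]))  # create event plus only @a:x's membership
```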
Richard van der Hoff
afb4b490a4 changelog 2018-08-21 23:19:14 +01:00
Richard van der Hoff
f7bf181a90 fix another consent encoding fail 2018-08-21 23:14:25 +01:00
Richard van der Hoff
f7baff6f7b Fix 500 error from /consent form
Fixes #3731
2018-08-21 22:47:07 +01:00
Richard van der Hoff
a52f276990 Merge tag 'v0.33.3rc2' into develop
Bugfixes
--------

- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs
([\#3723](https://github.com/matrix-org/synapse/issues/3723))
2018-08-21 20:30:09 +01:00
Erik Johnston
46c832eaac Merge pull request #3727 from matrix-org/erikj/dont_error_on_missing_keys
Don't log exceptions when failing to fetch server keys
2018-08-21 17:07:20 +01:00
Erik Johnston
2b1a4b2596 Merge pull request #3722 from matrix-org/erikj/bg_process_iteration
LaterGauge needs to call thread safe functions
2018-08-21 16:47:34 +01:00
Erik Johnston
cd6937fb26 Fix typo 2018-08-21 16:28:10 +01:00
Erik Johnston
c2c153dd3b Log more detail when we fail to authenticate request 2018-08-21 11:42:49 +01:00
Erik Johnston
79d3b4689e Newsfile 2018-08-21 11:21:48 +01:00
Erik Johnston
808d8e06aa Don't log exceptions when failing to fetch server keys
Not being able to resolve or connect to remote servers is an expected
error, so we shouldn't log at ERROR with stacktraces.
2018-08-21 11:19:26 +01:00
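A generic Python sketch of the logging policy described above (exception types simplified): expected network failures get a WARNING without a stack trace; anything unexpected keeps the full traceback.

```python
import logging

logger = logging.getLogger(__name__)

def fetch_server_keys(server_name, fetch):
    try:
        return fetch(server_name)
    except ConnectionError as e:      # expected: remote unreachable
        logger.warning("Error fetching keys from %s: %s", server_name, e)
        return None
    except Exception:                 # unexpected: keep the stack trace
        logger.exception("Failed to fetch keys from %s", server_name)
        raise
```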
Erik Johnston
3f6762f0bb isort 2018-08-21 09:38:38 +01:00
Amber Brown
3b5b64ac99 changelog 2018-08-21 03:48:55 +10:00
Amber Brown
23d7e63a4a Merge pull request #3723 from matrix-org/rav/fix_logcontext_disaster
Fix exceptions when a connection is closed before we read the headers
2018-08-21 03:47:52 +10:00
Will Hunt
8bd585b09b 3724.feature 2018-08-20 18:27:42 +01:00
Richard van der Hoff
012d612f9d changelog 2018-08-20 18:26:27 +01:00
Will Hunt
f89f6b7c09 Allow guests to access /rooms/:roomId/event/:eventId 2018-08-20 18:25:54 +01:00
Richard van der Hoff
be6527325a Fix exceptions when a connection is closed before we read the headers
This fixes bugs introduced in #3700, by making sure that we behave sanely
when an incoming connection is closed before the headers are read.
2018-08-20 18:21:10 +01:00
Richard van der Hoff
55e6bdf287 Robustness fix for logcontext filter
Make the logcontext filter not explode if it somehow ends up with a logcontext
of None, since that infinite-loops the whole logging system.
2018-08-20 18:20:07 +01:00
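A minimal sketch of the defensive filter, assuming a thread-local "current logcontext" like Synapse's: if the context is somehow None, tag the record with a sentinel instead of raising inside the logging machinery (which would recurse into more logging).

```python
import logging
import threading

_context = threading.local()  # stand-in for the current logcontext

class LogContextFilter(logging.Filter):
    """Tag records with the current request; never raise from filter()."""

    def filter(self, record):
        context = getattr(_context, "current", None)
        # The fix: a missing/None context gets a sentinel value rather
        # than an AttributeError deep inside the logging system.
        record.request = context.request if context is not None else "sentinel"
        return True
```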
Erik Johnston
e2c0aa2c26 Newsfile 2018-08-20 17:40:59 +01:00
Erik Johnston
b01a755498 Make the in flight requests metrics thread safe 2018-08-20 17:27:52 +01:00
Erik Johnston
1058d14127 Make the in flight background process metrics thread safe 2018-08-20 17:27:24 +01:00
Amber Brown
80bf7d3580 changelog 2018-08-21 00:01:14 +10:00
Amber Brown
9a2f960736 version 2018-08-21 00:00:19 +10:00
Amber Brown
324525f40c Port over enough to get some sytests running on Python 3 (#3668) 2018-08-20 23:54:49 +10:00
Erik Johnston
4d664278af Merge branch 'develop' of github.com:matrix-org/synapse into erikj/refactor_state_handler 2018-08-20 14:49:43 +01:00
Erik Johnston
8dee601054 Remove redundant room_version checks 2018-08-20 14:48:53 +01:00
Erik Johnston
e21c368b8b Revert spurious change 2018-08-20 13:54:51 +01:00
Erik Johnston
cf6f9a8b53 Merge pull request #3719 from matrix-org/erikj/use_cache_fact
Use get_cache_factor_for function for `state_cache`
2018-08-20 13:33:35 +01:00
Erik Johnston
48a910e128 Newsfile 2018-08-20 13:33:20 +01:00
Erik Johnston
f2a48d87df Use get_cache_factor_for function for state_cache
This allows the cache factor for `state_cache` to be individually
specified in the environment
2018-08-20 13:01:46 +01:00
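The per-cache environment variable naming below is an assumption inferred from the commit message; a sketch of the lookup it enables, where a cache-specific variable overrides the global factor:

```python
import os

def get_cache_factor_for(cache_name, default=0.5):
    env_var = "SYNAPSE_CACHE_FACTOR_" + cache_name.upper()
    factor = os.environ.get(env_var) or os.environ.get("SYNAPSE_CACHE_FACTOR")
    return float(factor) if factor is not None else default

# e.g. SYNAPSE_CACHE_FACTOR_STATE_CACHE=2.0 would scale only `state_cache`.
```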
Erik Johnston
2aa7cc6a46 Merge pull request #3713 from matrix-org/erikj/fixup_fed_logging
Fix logging bug in EDU handling over replication
2018-08-20 10:51:45 +01:00
Neil Johnson
e07970165f rename error code 2018-08-18 14:39:45 +01:00
Neil Johnson
c5171bf171 special case server_notices_mxid 2018-08-18 12:33:07 +01:00
Neil Johnson
ba1fbf7d5b special case server_notices_mxid 2018-08-18 12:31:08 +01:00
Richard van der Hoff
3cef867cc1 Merge pull request #3709 from matrix-org/rav/logcontext_for_replication_commands
Logcontexts for replication command handlers
2018-08-17 16:22:07 +01:00
Richard van der Hoff
c144252a8c Merge pull request #3710 from matrix-org/rav/logcontext_for_pusher_updates
Fix logcontexts for running pushers
2018-08-17 16:21:49 +01:00
Amber Brown
c334ca67bb Integrate presence from hotfixes (#3694) 2018-08-18 01:08:45 +10:00
Amber Brown
04f5d2db62 Remove v1/register's broken shared secret functionality (#3703) 2018-08-18 00:55:01 +10:00
Erik Johnston
ab822a2d1f Add some fixmes 2018-08-17 15:31:50 +01:00
Erik Johnston
91cdb6de08 Call UserDirectoryHandler methods directly
Turns out that the user directory handling is fairly racy as a bunch
of stuff assumes that the processing happens on master, which it doesn't
when there is a synapse.app.user_dir worker. So let's just call the
function directly until we actually get round to fixing it, since it
doesn't make the situation any worse.
2018-08-17 15:26:13 +01:00
Neil Johnson
d49b77404b clean up, no functional changes 2018-08-17 15:21:34 +01:00
Richard van der Hoff
63260397c6 Merge pull request #3701 from matrix-org/rav/use_producer_for_responses
Use a producer to stream back responses
2018-08-17 14:58:45 +01:00
Richard van der Hoff
3f8709ffe4 Merge pull request #3700 from matrix-org/rav/wait_for_producers
Refactor request logging code
2018-08-17 14:57:45 +01:00
Neil Johnson
3ee57bdcbb Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-17 14:34:10 +01:00
Neil Johnson
4c22b4047b Merge pull request #3707 from matrix-org/neilj/limit_exceeded_error
add new error type ResourceLimit
2018-08-17 13:33:54 +00:00
Erik Johnston
782689bd40 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_profiles 2018-08-17 14:15:48 +01:00
Erik Johnston
ca87ad1def Split ProfileHandler into master and worker 2018-08-17 14:15:14 +01:00
Neil Johnson
b5f638f1f4 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-17 14:04:15 +01:00
Neil Johnson
9fd161c6fb Merge branch 'neilj/limit_exceeded_error' of github.com:matrix-org/synapse into neilj/limit_exceeded_error 2018-08-17 13:58:40 +01:00
Neil Johnson
0195dfbf52 server limits config docs 2018-08-17 13:58:25 +01:00
Neil Johnson
69c49d3fa3 Merge branch 'develop' into neilj/limit_exceeded_error 2018-08-17 12:44:26 +00:00
Neil Johnson
a2d872e7b3 Merge pull request #3708 from matrix-org/neilj/resource_Limit_block_event_creation
Neilj/resource limit block event creation
2018-08-17 12:42:59 +00:00
Amber Brown
fa27073b14 Merge pull request #3712 from matrix-org/travis/register-admin-docs
Update the admin register documentation to return a real user ID
2018-08-17 22:31:02 +10:00
Erik Johnston
38f708a2bb Remote profile cache should remain in master worker 2018-08-17 11:37:42 +01:00
Erik Johnston
73737bd0f9 Newsfile 2018-08-17 11:14:05 +01:00
Erik Johnston
3b2dcfff78 Fix logging bug in EDU handling over replication 2018-08-17 11:11:06 +01:00
Neil Johnson
521d369e7a remove errant yield 2018-08-17 10:12:11 +01:00
Travis Ralston
b99a0f3941 Create 3712.misc 2018-08-17 02:47:31 -06:00
Travis Ralston
a8ffc27db7 Update the admin register documentation to return a real user ID
Presumably this is the intention anyway. I've also updated the domain part to be something more along the lines of what people might expect.
2018-08-17 02:46:25 -06:00
Richard van der Hoff
4c9da1440f changelog 2018-08-17 00:50:36 +01:00
Richard van der Hoff
d9efd87d55 changelog 2018-08-17 00:49:22 +01:00
Richard van der Hoff
0e8d78f6aa Logcontexts for replication command handlers
Run the handlers for replication commands as background processes. This should
improve the visibility in our metrics, and reduce the number of "running db
transaction from sentinel context" warnings.

Ideally it means converting the things that fire off deferreds into the night
into things that actually return a Deferred when they are done. I've made a bit
of a stab at this, but it will probably be leaky.
2018-08-17 00:43:43 +01:00
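The general shape of the change (the helper below is a stand-in for Synapse's real `run_as_background_process`, whose details are assumed): each handler gets its own background process/logcontext, and the handler returns a Deferred instead of firing work off into the night.

```python
from twisted.internet import defer

def run_as_background_process(desc, func, *args):
    """Stand-in: the real helper gives `func` a fresh logcontext and
    records metrics against `desc`."""
    return defer.maybeDeferred(func, *args)

def handle_rdata(stream_name, token, rows):
    return len(rows)  # placeholder for real row handling

def on_rdata(stream_name, token, rows):
    # Return the Deferred so callers (and metrics) know when the work
    # is actually done, rather than fire-and-forget.
    return run_as_background_process(
        "replication-" + stream_name, handle_rdata, stream_name, token, rows
    )
```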
Richard van der Hoff
66f7dc8c87 Fix logcontexts for running pushers
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.

Instead, give them their own "background process" logcontexts.
2018-08-17 00:32:39 +01:00
Neil Johnson
7e51342196 For resource limit blocked users, prevent writing into rooms 2018-08-16 23:05:20 +01:00
Neil Johnson
bcfeb44afe call reap on startup and fix under-reaping bug 2018-08-16 22:55:32 +01:00
Neil Johnson
372bf073c1 block event creation and room creation on hitting resource limits 2018-08-16 21:25:16 +01:00
Neil Johnson
7edd11623d add new error type ResourceLimit 2018-08-16 21:04:30 +01:00
Neil Johnson
13ad9930c8 add new error type ResourceLimit 2018-08-16 18:02:02 +01:00
Neil Johnson
51b17ec566 flake8 2018-08-16 17:32:22 +01:00
Neil Johnson
3c1080b6e4 refactor for readability, and reuse caching for setting tags 2018-08-16 17:02:04 +01:00
Neil Johnson
eff3ae3b9a add room tagging 2018-08-16 15:48:34 +01:00
Erik Johnston
b4d6db5c4a Merge pull request #3705 from matrix-org/erikj/fix_inbound_fed_worker
Fix inbound federation on reader worker
2018-08-16 15:42:50 +01:00
Erik Johnston
aae86a81ef Newsfile 2018-08-16 15:41:48 +01:00
Erik Johnston
7d5b1a60a3 Fix inbound federation on reader worker
Inbound federation requires calculating push, which in turn relies on
having access to account data.
2018-08-16 15:32:20 +01:00
Neil Johnson
bfb6c58624 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-16 15:10:16 +01:00
Neil Johnson
a675f9c556 check for room state before deciding on action 2018-08-16 14:53:35 +01:00
Will Hunt
c151b32b1d Add GET media/v1/config (#3184) 2018-08-16 14:23:38 +01:00
Matthew Hodgson
762a758fea lazyload aware /messages (#3589) 2018-08-16 14:22:47 +01:00
Neil Johnson
25d2b5d55f Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-16 11:37:17 +01:00
Neil Johnson
df1e4f259f WIP impl, committing to get feedback 2018-08-16 11:10:53 +01:00
Neil Johnson
c055c91655 fix case where empty string state check is evaluated as False 2018-08-16 11:10:19 +01:00
Matthew Hodgson
3f543dc021 initial cut at a room summary API (#3574) 2018-08-16 09:46:50 +01:00
Richard van der Hoff
859989f958 Merge pull request #3686 from matrix-org/rav/changelog_links_to_prs
Changelog should link to PRs
2018-08-15 22:22:38 +01:00
Neil Johnson
8cfad2e686 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-15 17:20:38 +01:00
Neil Johnson
81d727efa9 Merge pull request #3689 from matrix-org/neilj/fix_off_by_1+maus
Fix Mau off by one errors
2018-08-15 16:19:41 +00:00
Neil Johnson
da7fe43899 Fix mau blocking calculation bug on login 2018-08-15 17:03:30 +01:00
Richard van der Hoff
e2e846628d Remove incorrectly re-added changelog files
These were removed for the 0.33.2 release, but were incorrectly re-added in
2018-08-15 16:38:46 +01:00
Matthew Hodgson
2f78f432c4 speed up /members and add at= and membership params (#3568) 2018-08-15 16:35:22 +01:00
Neil Johnson
fc5d937550 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-15 16:31:40 +01:00
Neil Johnson
86a00e05e1 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-15 16:27:08 +01:00
Richard van der Hoff
d82fa0e9a6 changelog 2018-08-15 15:12:35 +01:00
Amber Brown
a87af25fbb Fix the tests 2018-08-15 15:12:23 +01:00
Neil Johnson
eabc5f8271 wip cut at sending resource server notices 2018-08-15 15:04:52 +01:00
Neil Johnson
c24fc9797b add new event types 2018-08-15 15:04:30 +01:00
Richard van der Hoff
afcd655ab6 Use a producer to stream back responses
The problem with dumping all of the json response into the Request object at
once is that doing so starts the timeout for the next request to be received:
so if it takes longer than 60s to stream back the response to the client, the
client never gets it.

The correct solution is to use a Producer; then the timeout is only started
once all of the content is sent over the TCP connection.
2018-08-15 15:04:16 +01:00
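A minimal sketch of the producer pattern described above, using a plain Twisted pull producer rather than Synapse's actual class: the body is written one chunk at a time as the transport asks for more, so the request timeout only covers delivery of the last chunk, not one giant synchronous write.

```python
from zope.interface import implementer
from twisted.internet.interfaces import IPullProducer

@implementer(IPullProducer)
class JsonBodyProducer(object):
    """Write `body` (bytes) to `request` chunk by chunk, on demand."""

    def __init__(self, request, body, chunk_size=65536):
        self.request = request
        self.body = body
        self.chunk_size = chunk_size
        request.registerProducer(self, False)  # False = pull producer

    def resumeProducing(self):
        # Called by the transport whenever it is ready for more data.
        chunk, self.body = (self.body[:self.chunk_size],
                            self.body[self.chunk_size:])
        self.request.write(chunk)
        if not self.body:
            self.request.unregisterProducer()
            self.request.finish()

    def stopProducing(self):
        self.body = b""  # connection went away; drop the rest
```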
Erik Johnston
dc56c47dc0 Merge pull request #3653 from matrix-org/erikj/split_federation
Move more federation APIs to workers
2018-08-15 14:59:02 +01:00
Neil Johnson
4601129c44 Merge pull request #3687 from matrix-org/neilj/admin_email
support admin_email config and pass through into blocking errors,
2018-08-15 13:52:25 +00:00
Erik Johnston
ef184caf30 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_federation 2018-08-15 14:25:46 +01:00
Erik Johnston
488ffe6fdb Use federation handler function rather than duplicate
This involves renaming _persist_events to be a public function.
2018-08-15 14:17:18 +01:00
Erik Johnston
773db62a22 Rename slave TransactionStore to SlaveTransactionStore 2018-08-15 14:17:06 +01:00
Neil Johnson
39176f27f7 remove changelog referencing another PR 2018-08-15 13:53:51 +01:00
Richard van der Hoff
2de813a611 changelog 2018-08-15 13:49:03 +01:00
Richard van der Hoff
eaaa2248ff Refactor request logging code
This commit moves a bunch of the logic for deciding when to log the receipt and
completion of HTTP requests into SynapseRequest, rather than in the request
handling wrappers.

Advantages of this are:
 * we get logs for *all* requests (including OPTIONS and HEADs), rather than
   just those that end up hitting handlers we've remembered to decorate
   correctly.

 * when a request handler wires up a Producer (as the media stuff does
   currently, and as other things will do soon), we log at the point that all
   of the traffic has been sent to the client.
2018-08-15 13:47:52 +01:00
Neil Johnson
87a824bad1 clean up AuthError 2018-08-15 11:58:03 +01:00
Neil Johnson
c4eb97518f Merge branch 'neilj/update_limits_error_codes' of github.com:matrix-org/synapse into neilj/admin_email 2018-08-15 11:53:01 +01:00
Neil Johnson
55afba0fc5 update admin email to uri 2018-08-15 11:41:18 +01:00
Neil Johnson
75c663c7b9 update error codes 2018-08-15 11:27:48 +01:00
Neil Johnson
1c5e690a6b Merge pull request #3690 from matrix-org/neilj/change_prometheus_mau_metric_name
combine mau metrics into one group
2018-08-15 09:53:59 +00:00
Erik Johnston
fef2e65d12 Merge pull request #3667 from matrix-org/erikj/fixup_unbind
Don't fail requests to unbind 3pids for non supporting ID servers
2018-08-15 10:32:12 +01:00
Neil Johnson
596ce63576 update resource limit error codes 2018-08-15 10:23:28 +01:00
Neil Johnson
b8429c7c81 update error codes for resource limiting 2018-08-15 10:19:48 +01:00
Neil Johnson
ab035bdeac replace admin_email with admin_uri for greater flexibility 2018-08-15 10:16:41 +01:00
Neil Johnson
19b433e3f4 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/admin_email 2018-08-14 17:44:46 +01:00
Neil Johnson
b586b8b986 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 17:43:22 +01:00
Neil Johnson
70e48cbbb1 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/change_prometheus_mau_metric_name 2018-08-14 17:40:59 +01:00
Neil Johnson
aa3220df6a Merge pull request #3692 from matrix-org/neil/fix_postgres_test_initialise_reserved_users
Neil/fix postgres test initialise reserved users
2018-08-14 16:40:11 +00:00
Neil Johnson
7277216d01 fix setup_test_homeserver to be postgres compatible 2018-08-14 17:14:39 +01:00
Neil Johnson
cd0c749c4f Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users 2018-08-14 17:06:51 +01:00
Neil Johnson
9ecbaf8ba8 adding missing yield 2018-08-14 16:55:28 +01:00
Neil Johnson
1522ed9c07 in case max_mau is less than I think 2018-08-14 16:52:30 +01:00
Neil Johnson
629f390035 Rename MAU prometheus metric 2018-08-14 16:38:05 +01:00
Neil Johnson
e5962f845c pep8 2018-08-14 16:36:14 +01:00
Neil Johnson
e7d091fb86 combine mau metrics into one group 2018-08-14 16:26:55 +01:00
Neil Johnson
414d54b61a Merge pull request #3670 from matrix-org/neilj/mau_sync_block
Block ability to read via sync if mau limit exceeded
2018-08-14 15:21:31 +00:00
Neil Johnson
2545993ce4 make comments clearer 2018-08-14 15:48:12 +01:00
Neil Johnson
06b331ff40 fix off by 1 errors 2018-08-14 15:28:15 +01:00
Neil Johnson
8f9a7eb58d support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-14 15:11:54 +01:00
Neil Johnson
c74c71128d remove blank line 2018-08-14 15:06:24 +01:00
Neil Johnson
ed4bc3d2fc fix off by 1s on mau 2018-08-14 15:04:48 +01:00
Neil Johnson
bd92c8eaa7 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:56:45 +01:00
Neil Johnson
99ebaed8e6 Update register.py
remove comments
2018-08-14 14:55:55 +01:00
Neil Johnson
9b5bf3d858 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:51:38 +01:00
Neil Johnson
e25d87d97b Merge branch 'neilj/mau_sync_block' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:32:18 +01:00
Neil Johnson
e2c9fe0a6a backout ability to pass in event type to server notices 2018-08-14 13:32:56 +01:00
Amber Brown
614e6d517d Dockerised sytest (#3660) 2018-08-14 21:30:34 +10:00
Amber Brown
0bdc362598 Merge pull request #3681 from matrix-org/neilj/fix_reap_users_in_postgres
Neilj/fix reap users in postgres
2018-08-14 21:29:35 +10:00
Amber Brown
591bf87c6a Merge remote-tracking branch 'origin/develop' into neilj/fix_reap_users_in_postgres 2018-08-14 20:56:23 +10:00
Amber Brown
bdfbd934d6 Implement a new test baseclass to cut down on boilerplate (#3684) 2018-08-14 20:53:43 +10:00
Neil Johnson
9b75c78b4d support server notice state events for resource limits 2018-08-14 11:20:41 +01:00
Jan Christian Grünhage
8f0c430ca4 Merge pull request #3669 from matrix-org/jcgruenhage/update-docker-base
update docker base-image to alpine 3.8
2018-08-14 11:43:59 +02:00
Neil Johnson
63417c31e9 fix typo 2018-08-13 22:36:52 +01:00
Neil Johnson
1f24d8681b set admin email via config 2018-08-13 21:26:43 +01:00
Neil Johnson
f4b49152e2 support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-13 21:09:47 +01:00
Richard van der Hoff
b6d55588e1 Changelog should link to PRs
The changelog makes much more sense if it consistently links to PRs, rather
than mostly PRs with the occasional issue #.
2018-08-13 18:56:55 +01:00
Richard van der Hoff
8ba2dac6ea Name changelog after PR, not bug 2018-08-13 18:54:01 +01:00
Richard van der Hoff
50bcaf1a27 Merge pull request #3677 from andrewshadura/master
Use recaptcha_ajax.js directly from Google
2018-08-13 18:52:56 +01:00
Richard van der Hoff
cd32c19a60 Merge pull request #3685 from matrix-org/revert-3677-master
Revert "Use recaptcha_ajax.js directly from Google"
2018-08-13 18:49:50 +01:00
Richard van der Hoff
34f51babc5 Revert "Use recaptcha_ajax.js directly from Google" 2018-08-13 18:48:03 +01:00
Richard van der Hoff
8226e5597a Merge pull request #3677 from andrewshadura/master
Use recaptcha_ajax.js directly from Google
2018-08-13 18:47:51 +01:00
Neil Johnson
ce7de9ae6b Revert "support admin_email config and pass through into blocking errors, return AuthError in all cases"
This reverts commit 0d43f991a1.
2018-08-13 18:06:18 +01:00
Neil Johnson
0d43f991a1 support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-13 18:00:23 +01:00
Amber Brown
99dd975dae Run tests under PostgreSQL (#3423) 2018-08-13 16:47:46 +10:00
Neil Johnson
ac205a54b2 Fixes test_reap_monthly_active_users so it passes under postgres 2018-08-11 22:50:09 +01:00
Neil Johnson
31fa743567 fix sqlite/postgres incompatibility in reap_monthly_active_users 2018-08-11 22:38:34 +01:00
Neil Johnson
c79c4c0a7d server notices for resource limit blocking 2018-08-10 15:25:47 +01:00
Neil Johnson
6c6aba76e1 implementation of server notices to alert on hitting resource limits 2018-08-10 15:12:59 +01:00
Amber Brown
a001038b92 Merge pull request #3679 from matrix-org/hawkowl/blackify-tests
Blackify the tests
2018-08-11 00:12:56 +10:00
Amber Brown
807449d8f2 fix up a forced long line 2018-08-11 00:01:43 +10:00
Richard van der Hoff
c31793a784 Merge branch 'rav/fix_linearizer_cancellation' into develop 2018-08-10 14:57:27 +01:00
Richard van der Hoff
c08f9d95b2 log *after* reloading log config
... because logging *before* reloading means the log message gets lost in the old MemoryLogger
2018-08-10 14:56:48 +01:00
Amber Brown
dd16e7dfcc changelog 2018-08-10 23:54:46 +10:00
black
8b3d9b6b19 Run black. 2018-08-10 23:54:09 +10:00
Amber Brown
b37c472419 Rename async to async_helpers because async is a keyword on Python 3.7 (#3678) 2018-08-10 23:50:21 +10:00
Andrej Shadura
c75b71a397 Use recaptcha_ajax.js directly from Google
The script recaptcha_ajax.js contains minified non-free code which
we probably cannot redistribute. Since it talks to Google servers
anyway, it is better to just download it from Google directly instead
of shipping it.

This fixes #1932.
2018-08-10 13:36:14 +02:00
Richard van der Hoff
3c0213a217 Merge pull request #3439 from vojeroen/send_sni_for_federation_requests
send SNI for federation requests
2018-08-10 12:23:54 +01:00
Richard van der Hoff
178ab76ac0 changelog 2018-08-10 11:05:23 +01:00
Richard van der Hoff
638d35ef08 Fix linearizer cancellation on twisted < 18.7
Turns out that cancellation of inlineDeferreds didn't really work properly
until Twisted 18.7. This commit refactors Linearizer.queue to avoid
inlineCallbacks.
2018-08-10 10:59:09 +01:00
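A toy contrast of the two shapes (not the real Linearizer): cancelling the Deferred returned by an `@inlineCallbacks` function has to interrupt a running generator, which was unreliable before Twisted 18.7; building the same contract from plain Deferreds avoids the generator frame entirely.

```python
from twisted.internet import defer

# Before (shape only): waiting implemented inside a generator.
@defer.inlineCallbacks
def wait_turn_generator(prev):
    yield prev
    defer.returnValue("our turn")

# After (shape only): a plain callback chain, no generator involved.
def wait_turn_callbacks(prev):
    d = defer.Deferred()
    prev.addCallback(lambda _: d.callback("our turn"))
    return d
```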
Neil Johnson
01021c812f wip at implementing MSC 7075 2018-08-09 22:16:00 +01:00
Jeroen
2e9c73e8ca more generic conversion of str/bytes to unicode 2018-08-09 21:31:26 +02:00
Jeroen
64899341dc include private functions from twisted 2018-08-09 21:04:22 +02:00
Neil Johnson
885ea9c602 rename _user_last_seen_monthly_active 2018-08-09 18:02:12 +01:00
Neil Johnson
c1f9dec92a fix errant parenthesis 2018-08-09 17:43:26 +01:00
Neil Johnson
04df714259 fix imports 2018-08-09 17:41:52 +01:00
Neil Johnson
09cf130898 only block on sync where user is not part of the mau cohort 2018-08-09 17:39:12 +01:00
Erik Johnston
5075e444f4 Newsfile 2018-08-09 14:58:49 +01:00
Erik Johnston
3e19beb941 Fix tests 2018-08-09 14:58:49 +01:00
Erik Johnston
bb99b1f550 Add fast path in state res for zero prev events 2018-08-09 14:58:49 +01:00
Erik Johnston
ce6db0e547 Choose state algorithm based on room version 2018-08-09 14:58:47 +01:00
Erik Johnston
152c0aa58e Add constants for room versions 2018-08-09 14:55:47 +01:00
Erik Johnston
119451dcd1 Refactor state module
We split out the actual state resolution algorithm to prepare for having
multiple versions.
2018-08-09 14:55:47 +01:00
Neil Johnson
c6b28fb479 Where server is disabled, block ability for locked out users to read new messages 2018-08-09 12:45:58 +01:00
Jan Christian Grünhage
d967653705 update docker base-image to alpine 3.8 2018-08-09 13:31:10 +02:00
Neil Johnson
69ce057ea6 block sync if auth checks fail 2018-08-09 12:26:27 +01:00
Neil Johnson
3dce9050cf Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sync_block 2018-08-09 11:42:02 +01:00
Neil Johnson
0ad98e38d0 Merge pull request #3655 from matrix-org/neilj/disable_hs
Flag to disable HS without disabling federation
2018-08-09 10:41:43 +00:00
Neil Johnson
a5ef110749 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sync_block 2018-08-09 11:40:37 +01:00
Erik Johnston
a6c813761a Docstrings 2018-08-09 10:41:08 +01:00
Erik Johnston
54a9bea88c Newsfile 2018-08-09 10:39:29 +01:00
Erik Johnston
5c6226707d Update docs/workers.rst 2018-08-09 10:37:42 +01:00
Erik Johnston
8876ce7f77 Newsfile 2018-08-09 10:37:42 +01:00
Erik Johnston
b179537f2a Move clean_room_for_join to master 2018-08-09 10:37:38 +01:00
Erik Johnston
72d1902bbe Fixup doc comments 2018-08-09 10:23:49 +01:00
Amber Brown
984376745b Merge branch 'master' into develop 2018-08-09 19:23:47 +10:00
Amber Brown
67dbe4c899 0.33.2 changelog 2018-08-09 19:21:07 +10:00
Amber Brown
dafe90a6be 0.33.2 2018-08-09 19:20:41 +10:00
Amber Brown
3d6f9fba19 fix the changelog & template 2018-08-09 19:19:30 +10:00
Erik Johnston
484a0ebdfc Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_profiles 2018-08-09 10:16:29 +01:00
Erik Johnston
5785b93711 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_federation 2018-08-09 10:16:16 +01:00
Erik Johnston
bf7598f582 Log when we 3pid/unbind request fails 2018-08-09 10:09:56 +01:00
Erik Johnston
2bdafaf3c1 Merge pull request #3632 from matrix-org/erikj/refactor_repl_servlet
Add helper base class for generating new replication endpoints
2018-08-09 10:06:23 +01:00
Erik Johnston
62564797f5 Fixup wording and remove dead code 2018-08-09 09:56:10 +01:00
Amber Brown
2511f3f8a0 Test fixes for Python 3 (#3647) 2018-08-09 12:22:01 +10:00
Richard van der Hoff
bb89c84614 Merge pull request #3664 from matrix-org/rav/federation_metrics
more metrics for the federation and appservice senders
2018-08-08 23:16:58 +01:00
Jeroen
d5c0ce4cad updated docstring for ServerContextFactory 2018-08-08 19:25:01 +02:00
Neil Johnson
e92fb00f32 sync auth blocking 2018-08-08 17:54:49 +01:00
Neil Johnson
839a317c96 fix pep8 too many lines 2018-08-08 17:39:04 +01:00
Neil Johnson
5298d79fb5 Merge branch 'develop' into neilj/disable_hs 2018-08-08 16:13:03 +00:00
Richard van der Hoff
8521ae13e3 Merge pull request #3654 from matrix-org/rav/room_versions
Support for room versioning
2018-08-08 17:10:53 +01:00
Neil Johnson
d2f3ef98ac Merge branch 'develop' into neilj/disable_hs 2018-08-08 15:55:47 +00:00
Neil Johnson
54685d294d Ability to disable client/server Synapse via conf toggle 2018-08-08 15:38:54 +01:00
Neil Johnson
cc187debf3 Merge pull request #3662 from matrix-org/neilj/reserved_users
Reserved users for MAU limits
2018-08-08 14:33:17 +00:00
Neil Johnson
2b5baebeba Merge branch 'develop' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-08 13:47:07 +01:00
Neil Johnson
be59910b93 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/reserved_users 2018-08-08 13:45:21 +01:00
Neil Johnson
990fe9fc23 Merge pull request #3633 from matrix-org/neilj/mau_tracker
API for monthly_active_users table
2018-08-08 12:44:15 +00:00
Neil Johnson
312ae74746 typos 2018-08-08 13:33:16 +01:00
Neil Johnson
d92675a486 Ability to whitelist specific threepids against monthly active user limiting 2018-08-08 12:06:42 +01:00
Erik Johnston
360ba89c50 Don't fail requests to unbind 3pids for non supporting ID servers
Older identity servers may not support the unbind 3pid request, so we
shouldn't fail the requests if we received one of 400/404/501. The
request still fails if we receive e.g. 500 responses, allowing clients
to retry requests on transient identity server errors that otherwise do
support the API.

Fixes #3661
2018-08-08 12:06:18 +01:00
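A sketch of the tolerance rule described above (the transport function is hypothetical): 400/404/501 from the identity server means "unbind not supported", which is tolerated; 5xx still fails so clients can retry transient errors.

```python
def try_unbind_threepid(post_to_id_server, id_server, threepid):
    status, body = post_to_id_server(id_server, "/unbind", threepid)
    if status in (400, 404, 501):
        # Older identity servers don't implement unbind; don't fail.
        return False
    if status >= 500:
        raise RuntimeError("identity server error %d; retry later" % status)
    return True
```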
Neil Johnson
7f3d897e7a mock config.max_mau_value 2018-08-08 11:46:23 +01:00
Erik Johnston
bebe325e6c Rename POST param to METHOD 2018-08-08 10:36:18 +01:00
Erik Johnston
5011417632 Fixup logging and docstrings 2018-08-08 10:29:58 +01:00
Richard van der Hoff
e6d73b8582 changelog 2018-08-07 22:14:35 +01:00
Richard van der Hoff
bab94da79c fix metric name 2018-08-07 22:11:45 +01:00
Richard van der Hoff
53bca4690b more metrics for the federation and appservice senders 2018-08-07 19:09:48 +01:00
Neil Johnson
ef3589063a prevent total number of reserved users being too large 2018-08-07 17:59:27 +01:00
Neil Johnson
e8eba2b4e3 implement reserved users for mau limits 2018-08-07 17:49:43 +01:00
Richard van der Hoff
e5d2c67844 Merge pull request #3658 from matrix-org/rav/fix_event_persisted_position_metrics
Fix occasional glitches in the synapse_event_persisted_position metric
2018-08-07 14:48:08 +01:00
Richard van der Hoff
3523f5432a Don't expose default_room_version as config opt 2018-08-07 12:51:57 +01:00
Richard van der Hoff
9b92720d88 fix event lag graph 2018-08-07 12:27:34 +01:00
Amber Brown
23cd86554e fix changelog 2018-08-07 21:20:42 +10:00
Amber Brown
848431be1d fix for rc1 2018-08-07 21:13:22 +10:00
Amber Brown
cf8ef11f35 changelog 2018-08-07 21:11:24 +10:00
Amber Brown
3b9662339b version 2018-08-07 21:10:52 +10:00
Erik Johnston
f81f421086 Update workers.rst with new paths 2018-08-07 10:51:35 +01:00
Erik Johnston
cd9765805e Allow ratelimiting on workers 2018-08-07 10:50:28 +01:00
Erik Johnston
495cb100d1 Allow profile changes to happen on workers 2018-08-07 10:50:26 +01:00
Richard van der Hoff
865d07cd38 changelog 2018-08-07 10:02:01 +01:00
Richard van der Hoff
ca9bc1f4fe Fix occasional glitches in the synapse_event_persisted_position metric
Every so often this metric glitched to a negative number. I'm assuming it was
due to backfilled events.
2018-08-07 10:00:03 +01:00
Neil Johnson
a74b25faaa WIP building out mau reserved users 2018-08-06 23:25:25 +01:00
Neil Johnson
fbe255f9a4 add default mau_limits_reserved_threepids 2018-08-06 23:24:54 +01:00
Neil Johnson
7daa8a78c5 load mau limit threepids 2018-08-06 22:55:05 +01:00
Neil Johnson
89834d9a29 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 21:56:56 +01:00
Neil Johnson
e54794f5b6 Fix postgres compatibility bug 2018-08-06 21:55:54 +01:00
Neil Johnson
7bcf126b18 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 21:39:44 +01:00
Neil Johnson
1911c037cb update comments to reflect new sig 2018-08-06 18:01:46 +01:00
Neil Johnson
33bd07d062 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/disable_hs 2018-08-06 17:52:28 +01:00
Neil Johnson
16d78be315 make use of _simple_select_one_onecol, improved comments 2018-08-06 17:51:15 +01:00
Richard van der Hoff
7f3f108561 changelog 2018-08-06 16:30:52 +01:00
Richard van der Hoff
19a17068f1 Check m.room.create for sane room_versions 2018-08-06 16:11:24 +01:00
Erik Johnston
96a9a29645 Pull in necessary stores in federation_reader 2018-08-06 15:23:57 +01:00
Erik Johnston
62ace05c45 Move necessary storage functions to worker classes 2018-08-06 15:23:38 +01:00
Erik Johnston
1e2bed9656 Import all functions from TransactionStore 2018-08-06 15:23:38 +01:00
Erik Johnston
a3f5bf79a0 Add EDU/query handling over replication 2018-08-06 15:23:31 +01:00
Erik Johnston
e26dbd82ef Add replication APIs for persisting federation events 2018-08-06 15:02:28 +01:00
Erik Johnston
051a99c400 Fix isort 2018-08-06 14:29:31 +01:00
Richard van der Hoff
f900d50824 include known room versions in outgoing make_joins 2018-08-06 13:45:37 +01:00
Neil Johnson
42c6823827 disable HS from config 2018-08-04 22:07:04 +01:00
Neil Johnson
e40a510fbf py3 fix 2018-08-03 23:19:13 +01:00
Neil Johnson
d08296f9f2 remove unused import 2018-08-03 23:00:16 +01:00
Neil Johnson
886be75ad1 bug fixes 2018-08-03 22:29:03 +01:00
Will Hunt
16d9701892 Return M_NOT_FOUND when a profile could not be found. (#3596) 2018-08-03 19:08:05 +01:00
Neil Johnson
e10830e976 wip commit - tests failing 2018-08-03 17:55:50 +01:00
Richard van der Hoff
3777fa26aa sanity check response from make_join 2018-08-03 16:08:32 +01:00
Richard van der Hoff
0d63d93ca8 Enforce compatibility when processing make_join requests
Reject make_join requests from servers which do not support the room version.

Also include the room version in the response.
2018-08-03 16:08:32 +01:00
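An illustrative check (the joining server advertises its supported versions via `ver` query parameters, as in the "include known room versions in outgoing make_joins" commit below; the exact response format here is an assumption):

```python
def on_make_join(room_version, remote_supported_versions):
    # `remote_supported_versions` comes from the request's `ver` params.
    if room_version not in remote_supported_versions:
        return 400, {
            "errcode": "M_INCOMPATIBLE_ROOM_VERSION",
            "error": "Room version %s not supported" % (room_version,),
            "room_version": room_version,
        }
    # Otherwise include the room version alongside the event template.
    return 200, {"room_version": room_version, "event": {}}
```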
Richard van der Hoff
0ca459ea33 Basic support for room versioning
This is the first tranche of support for room versioning. It includes:
 * setting the default room version in the config file
 * new room_version param on the createRoom API
 * storing the version of newly-created rooms in the m.room.create event
 * fishing the version of existing rooms out of the m.room.create event
2018-08-03 16:08:32 +01:00
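As a sketch of the storage scheme described above: the version rides in the `m.room.create` event content, and is "fished out" with a default for rooms that predate versioning.

```python
create_event = {
    "type": "m.room.create",
    "state_key": "",
    "content": {
        "creator": "@alice:example.com",
        "room_version": "1",  # set from config or the createRoom param
    },
}

def get_room_version(create_event, default="1"):
    # Rooms created before versioning have no room_version key.
    return create_event["content"].get("room_version", default)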
Richard van der Hoff
15c1ae45e5 Docstrings for BaseFederationServlet
... to save me reverse-engineering this stuff again.
2018-08-03 16:08:32 +01:00
Neil Johnson
5593ff6773 fix (lots of) py3 test failures 2018-08-03 14:59:17 +01:00
Neil Johnson
b2aab04d2c fix py3 test failure 2018-08-03 14:12:56 +01:00
Neil Johnson
da90337d89 Add ability to limit number of monthly active users on the server 2018-08-03 14:05:20 +01:00
Neil Johnson
950807d93a fix caching and tests 2018-08-03 13:49:53 +01:00
Michael Kaye
df7f9871c1 Merge pull request #3644 from matrix-org/michaelkaye/refactor_docker_locations_v2
Refactor Dockerfile location
2018-08-03 13:44:35 +01:00
Neil Johnson
897c51d274 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-03 13:40:47 +01:00
Erik Johnston
cb298ff623 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/refactor_repl_servlet 2018-08-03 09:25:15 +01:00
Michael Kaye
42960aa047 Update README.md
Link to contrib/docker
2018-08-03 09:16:01 +01:00
Michael Kaye
c3f596180f Update README.md
Link to docker/README.md
2018-08-03 09:15:19 +01:00
Michael Kaye
637b11b9ed Update README.rst
Link to contrib/docker
2018-08-03 09:13:54 +01:00
Michael Kaye
feacd13932 Update README.rst
wrap at 80ish
2018-08-03 09:10:41 +01:00
Neil Johnson
c0affa7b4f update generate_monthly_active_users, and reap_monthly_active_users 2018-08-02 23:03:01 +01:00
Neil Johnson
5e2f7b8084 typo 2018-08-02 22:47:48 +01:00
Neil Johnson
4ecb4bdac9 wip attempt at caching 2018-08-02 22:41:05 +01:00
Amber Brown
8a86d7ea96 Merge pull request #3645 from matrix-org/michaelkaye/mention_newsfragment
Mention the word "newsfragment" in CONTRIBUTING.rst
2018-08-03 04:40:18 +10:00
Michael Kaye
70baa61e95 newsfragment 2018-08-02 18:58:43 +01:00
Michael Kaye
4c4a123bd0 Mention the word "newsfragment" in CONTRIBUTING.rst 2018-08-02 18:53:44 +01:00
Michael Kaye
489949879e Add news entry 2018-08-02 18:49:50 +01:00
Michael Kaye
1758f4e1c7 Address SPAG issues 2018-08-02 18:21:34 +01:00
Michael Kaye
26a37f3d4d Do not include docker files in python build 2018-08-02 18:21:34 +01:00
Michael Kaye
0d25724419 Refactor docker locations and README.
This addresses #3224
2018-08-02 18:21:32 +01:00
Richard van der Hoff
1fa98495d0 Merge pull request #3639 from matrix-org/rav/refactor_error_handling
Clean up handling of errors from outbound requests
2018-08-02 17:38:24 +01:00
Richard van der Hoff
bdae8f2e68 Merge pull request #3638 from matrix-org/rav/refactor_federation_client_exception_handling
Factor out exception handling in federation_client
2018-08-02 17:37:46 +01:00
Neil Johnson
74b1d46ad9 do mau checks based on monthly_active_users table 2018-08-02 16:57:35 +01:00
Neil Johnson
9180061b49 remove unused count_monthly_users 2018-08-02 15:55:29 +01:00
Richard van der Hoff
704c3e6239 Merge branch 'master' into develop 2018-08-02 15:43:30 +01:00
Richard van der Hoff
43ecfe0b10 Merge tag 'v0.33.1'
Synapse 0.33.1 (2018-08-02)
===========================

SECURITY FIXES
--------------

- Fix a potential issue where servers could request events for rooms they have not joined. (`#3641 <https://github.com/matrix-org/synapse/issues/3641>`_)
- Fix a potential issue where users could see events in private rooms before they joined. (`#3642 <https://github.com/matrix-org/synapse/issues/3642>`_)
2018-08-02 15:40:44 +01:00
Richard van der Hoff
c2a83349f0 changelog: this is a security release 2018-08-02 15:35:42 +01:00
Richard van der Hoff
db1f33fb36 fix changelog typos 2018-08-02 15:33:53 +01:00
Richard van der Hoff
14a4e7d5a4 Prepare 0.33.1 2018-08-02 15:31:04 +01:00
Richard van der Hoff
50d9d97408 Merge pull request #3642 from matrix-org/rav/another_room_id_check
Check room visibility for /event/ requests
2018-08-02 15:21:59 +01:00
Richard van der Hoff
8cefc690c9 changelogs 2018-08-02 15:11:19 +01:00
Richard van der Hoff
0bf5ec0db7 Check room visibility for /event/ requests
Make sure that the user has permission to view the requested event for
/event/{eventId} and /room/{roomId}/event/{eventId} requests.

Also check that the event is in the given room for
/room/{roomId}/event/{eventId}, for sanity.
2018-08-02 15:03:27 +01:00
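A generic sketch of the rule (not Synapse's actual helpers): respond "not found" both for unknown events and for events the caller may not see, so existence is not leaked, and sanity-check the claimed room.

```python
def get_event_for_user(event, user_can_see, requested_room_id=None):
    """`event` is a dict like {"room_id": ...}; `user_can_see` is a
    predicate standing in for the real visibility filtering."""
    if event is None or not user_can_see(event):
        raise LookupError("Event not found.")  # don't leak existence
    if requested_room_id is not None and event["room_id"] != requested_room_id:
        raise LookupError("Event not found.")  # sanity: wrong room claimed
    return event
```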
Richard van der Hoff
a937497cf5 Merge pull request #3641 from matrix-org/rav/room_id_check
Validation for events/rooms in fed requests
2018-08-02 14:22:05 +01:00
Richard van der Hoff
a013404292 changelog 2018-08-02 14:00:29 +01:00
Richard van der Hoff
14fa9d4d92 Avoid extra db lookups
Since we're about to look up the events themselves anyway, we can skip the
extra db queries here.
2018-08-02 13:55:51 +01:00
Neil Johnson
c4ffbecb68 fix test, update constructor call 2018-08-02 13:51:05 +01:00
Richard van der Hoff
0a65450d04 Validation for events/rooms in fed requests
When we get a federation request which refers to an event id, make sure that
said event is in the room the caller claims it is in.

(patch supplied by @turt2live)
2018-08-02 13:48:40 +01:00
Neil Johnson
00f99f74b1 insertion into monthly_active_users 2018-08-02 13:47:19 +01:00
Neil Johnson
4a6725d9d1 Merge branch 'neilj/mau_tracker' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-02 11:04:18 +01:00
Neil Johnson
165e067033 Revert "change monthly_active_users table to be a single column"
This reverts commit ec716a35b2.
2018-08-02 10:59:58 +01:00
Erik Johnston
40c1c59cf4 Merge pull request #3621 from matrix-org/erikj/split_fed_store
Split out DB writes in federation handler
2018-08-02 10:41:42 +01:00
Neil Johnson
08281fe6b7 self.db_conn unused 2018-08-01 23:26:24 +01:00
Neil Johnson
c21d82bab3 normalise reaping query 2018-08-01 23:24:38 +01:00
Neil Johnson
ec716a35b2 change monthly_active_users table to be a single column 2018-08-01 17:54:37 +01:00
Neil Johnson
d766f26de9 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_tracker 2018-08-01 17:49:41 +01:00
Neil Johnson
085435e13a Merge pull request #3630 from matrix-org/neilj/mau_sign_in_log_in_limits
Initial impl of capping MAU
2018-08-01 15:58:45 +00:00
Richard van der Hoff
b8d7d3996b Merge pull request #3620 from fuzzmz/return-404-room-not-found
return 404 if room not found
2018-08-01 16:34:32 +01:00
Richard van der Hoff
908be65e64 changelog 2018-08-01 16:23:31 +01:00
Neil Johnson
b7f203a566 count_monthly_users is now async 2018-08-01 16:17:42 +01:00
Neil Johnson
7ff44d9215 improve clarity 2018-08-01 16:17:00 +01:00
Richard van der Hoff
38b98e5a98 changelog 2018-08-01 16:07:49 +01:00
Richard van der Hoff
01e93f48ed Kill off MatrixCodeMessageException
This code brings the SimpleHttpClient into line with the
MatrixFederationHttpClient by having it raise HttpResponseExceptions when a
request fails (rather than trying to parse for matrix errors and maybe raising
MatrixCodeMessageException).

Then, whenever we were checking for MatrixCodeMessageException and turning them
into SynapseErrors, we now need to check for HttpResponseExceptions and call
to_synapse_error.
2018-08-01 16:02:46 +01:00
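A simplified, self-contained rendering of the classes named above (constructor details assumed; the real `to_synapse_error` returns a ProxiedRequestError carrying extra metadata):

```python
import json

class SynapseError(Exception):
    """Stand-in for Synapse's matrix-level error class."""
    def __init__(self, code, msg, errcode):
        super(SynapseError, self).__init__(msg)
        self.code, self.errcode = code, errcode

class HttpResponseException(Exception):
    """Raised by the HTTP client on a non-2xx response."""
    def __init__(self, code, msg, response_body):
        super(HttpResponseException, self).__init__(msg)
        self.code, self.response_body = code, response_body

    def to_synapse_error(self):
        # Pull the matrix errcode out of the body where possible,
        # falling back to M_UNKNOWN.
        try:
            body = json.loads(self.response_body)
            return SynapseError(self.code, body["error"], body["errcode"])
        except Exception:
            return SynapseError(self.code, str(self), "M_UNKNOWN")
```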
Richard van der Hoff
018d75a148 Refactor code for turning HttpResponseException into SynapseError
This commit replaces SynapseError.from_http_response_exception with
HttpResponseException.to_synapse_error.

The new method actually returns a ProxiedRequestError, which allows us to pass
through additional metadata from the API call.
2018-08-01 16:02:46 +01:00
Richard van der Hoff
fa7dc889f1 Be more careful which errors we send back over the C-S API
We really shouldn't be sending all CodeMessageExceptions back over the C-S API;
it will include things like 401s which we shouldn't proxy.

That means that we need to explicitly turn a few HttpResponseExceptions into
SynapseErrors in the federation layer.

The effect of the latter is that the matrix errcode will get passed through
correctly to calling clients, which might help with some of the random
M_UNKNOWN errors when trying to join rooms.
2018-08-01 16:02:38 +01:00
Richard van der Hoff
c82ccd3027 Factor out exception handling in federation_client
Factor out the error handling from make_membership_event, send_join, and
send_leave, so that it can be shared.
2018-08-01 16:01:04 +01:00
Amber Brown
da7785147d Python 3: Convert some unicode/bytes uses (#3569) 2018-08-02 00:54:06 +10:00
Neil Johnson
c480c4c962 fix isort 2018-08-01 14:25:58 +01:00
Neil Johnson
6eed16d8a2 fix test for py3 2018-08-01 14:02:10 +01:00
Neil Johnson
303f1c851f Merge branch 'develop' of github.com:matrix-org/synapse into neilj/mau_sign_in_log_in_limits 2018-08-01 13:42:50 +01:00
Erik Johnston
a6d7b74915 update docs 2018-08-01 13:39:14 +01:00
Erik Johnston
4b256b9271 _persist_auth_tree no longer returns anything 2018-08-01 13:39:07 +01:00
Neil Johnson
4e5ac901dd clean up 2018-08-01 12:03:57 +01:00
Neil Johnson
f9f5559971 fix comment 2018-08-01 12:03:42 +01:00
Neil Johnson
4e6e00152c fix known broken test 2018-08-01 11:48:37 +01:00
Neil Johnson
0aba3d361a count_monthly_users() async 2018-08-01 11:47:58 +01:00
Neil Johnson
2c54f1c225 remove need to plot limit_usage_by_mau 2018-08-01 11:46:59 +01:00
Jan Christian Grünhage
c4842e16cb Merge pull request #3543 from bebehei/docker
Improvements for Docker usage
2018-08-01 11:32:45 +02:00
Richard van der Hoff
6e63d6868c Update 2952.bugfix 2018-08-01 10:31:22 +01:00
Richard van der Hoff
f49147d14f Merge pull request #3634 from matrix-org/rav/wtf_is_a_replication_layer
rename replication_layer to federation_client
2018-08-01 10:29:29 +01:00
Richard van der Hoff
cab782c17e Merge pull request #3384 from matrix-org/rav/rewrite_cachedlist_decorator
Rewrite cache list decorator
2018-08-01 10:28:56 +01:00
Neil Johnson
6023cdd227 remove errant print 2018-08-01 10:27:17 +01:00
Neil Johnson
7931393495 make count_monthly_users async synapse/handlers/auth.py 2018-08-01 10:21:56 +01:00
Neil Johnson
c507fa15ce only need to loop if mau limiting is enabled 2018-08-01 10:20:42 +01:00
Travis Ralston
37be52ac34 limt -> limit 2018-07-31 16:29:09 -06:00
Serban Constantin
70af98e361 return NotFoundError if room not found
Per the Client-Server API[0] we should return
`M_NOT_FOUND` if the room isn't found, instead
of a generic SynapseError.

This ensures that the /directory/list API returns
404 for room not found instead of 400.

[0]: https://matrix.org/docs/spec/client_server/unstable.html#get-matrix-client-r0-directory-list-room-roomid

Signed-off-by: Serban Constantin <serban.constantin@gmail.com>
2018-07-31 21:47:23 +03:00
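A simplified sketch of the typed error (Synapse's real NotFoundError similarly carries `M_NOT_FOUND`, so the HTTP layer maps it to a 404 rather than a generic 400):

```python
class SynapseError(Exception):
    def __init__(self, code, msg, errcode="M_UNKNOWN"):
        super(SynapseError, self).__init__(msg)
        self.code, self.errcode = code, errcode

class NotFoundError(SynapseError):
    def __init__(self, msg="Not found"):
        super(NotFoundError, self).__init__(404, msg, errcode="M_NOT_FOUND")

def get_room_visibility(rooms, room_id):
    if room_id not in rooms:
        raise NotFoundError("Room not found")
    return {"visibility": rooms[room_id]}
```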
Travis Ralston
5e2ee64660 Merge pull request #3628 from turt2live/travis/goodby-pdu-failures
Remove pdu_failures from transactions
2018-07-31 12:13:09 -06:00
Richard van der Hoff
1841672c85 changelog 2018-07-31 18:26:54 +01:00
Neil Johnson
6ef983ce5c api into monthly_active_users table 2018-07-31 16:36:24 +01:00
Richard van der Hoff
bdbdceeafa rename replication_layer to federation_client
I have HAD ENOUGH of trying to remember wtf a replication layer is in terms of
classes.
2018-07-31 15:44:05 +01:00
Erik Johnston
16bd63f32f Newsfile 2018-07-31 14:48:43 +01:00
Erik Johnston
443da003bc Use new helper base class for membership requests 2018-07-31 14:32:23 +01:00
Erik Johnston
729b672823 Use new helper base class for ReplicationSendEventRestServlet 2018-07-31 14:32:23 +01:00
Erik Johnston
d81602b75a Add helper base class for generating new replication endpoints
This will hopefully reduce the boilerplate required to implement new
internal HTTP requests.
2018-07-31 14:32:20 +01:00
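The shape of the helper, as a hedged sketch (hook names are assumptions based on later Synapse code): subclasses declare a name and path arguments, plus hooks to serialize the payload on the calling worker and handle it on the master.

```python
class ReplicationEndpoint(object):
    """Sketch of the base class for internal replication HTTP endpoints."""

    NAME = None      # e.g. "send_event"; forms part of the endpoint URL
    PATH_ARGS = ()   # e.g. ("event_id",)

    @staticmethod
    def _serialize_payload(**kwargs):
        """Called on the requesting worker: turn kwargs into a JSON dict."""
        raise NotImplementedError()

    def _handle_request(self, request, **path_args):
        """Called on the master: perform the write, return (code, body)."""
        raise NotImplementedError()
```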
Richard van der Hoff
5de936caa1 Merge pull request #3612 from matrix-org/rav/store_heirarchy
Make EventStore inherit from EventFederationStore
2018-07-31 13:44:04 +01:00
Neil Johnson
5bb39b1e0c mau limits 2018-07-31 13:22:14 +01:00
Neil Johnson
df2235e7fa coding style 2018-07-31 13:16:20 +01:00
Richard van der Hoff
0bc9b9e397 reinstate explicit include of EventsWorkerStore 2018-07-31 13:11:04 +01:00
Neil Johnson
7d05406a07 fix user_ips counting 2018-07-31 12:03:23 +01:00
Richard van der Hoff
82977477e3 Merge pull request #3629 from ptman/patch-1
Add some documentation for using the dashboard
2018-07-31 11:31:52 +01:00
Paul Tötterman
9c14c2b561 Add some documentation for using the dashboard 2018-07-31 12:48:37 +03:00
Richard van der Hoff
6aab397ada synapse grafana dashboard 2018-07-31 09:45:58 +01:00
Amber Brown
52384f2ee5 Merge pull request #3626 from krombel/only_import_secrets_when_available
Only import secrets when available
2018-07-31 08:58:24 +10:00
Travis Ralston
e908b86832 Remove pdu_failures from transactions
The field is never read from, and none of the opportunities given to populate it are utilized. It should be very safe to remove this.
2018-07-30 16:28:47 -06:00
Krombel
254e8267e2 Only import secrets when available
secrets was introduced in Python 3.6, so the module is not available
in 3.5 and before.

This now checks for the current running version and only tries using
secrets if the version is 3.6 or above

Signed-Off-By: Matthias Kesler <krombel@krombel.de>
2018-07-30 23:59:02 +02:00
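The version gate described above looks roughly like this (a sketch; the `token_bytes` helper name is illustrative):

```
import sys

if sys.version_info[0:2] >= (3, 6):
    import secrets

    def token_bytes(nbytes=32):
        # cryptographically strong randomness from the stdlib
        return secrets.token_bytes(nbytes)
else:
    import os

    def token_bytes(nbytes=32):
        # fallback for Python 3.5 and earlier
        return os.urandom(nbytes)
```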
Neil Johnson
21276ff846 remove errant logging 2018-07-30 22:42:12 +01:00
Neil Johnson
fef7e58ac6 actually close conn 2018-07-30 22:29:44 +01:00
Neil Johnson
cefac79c10 monthly_active_tests 2018-07-30 22:08:09 +01:00
Neil Johnson
9b13817e06 factor out metrics from __init__ to app/homeserver 2018-07-30 22:07:07 +01:00
Neil Johnson
251e6c1210 limit register and sign in on number of monthly users 2018-07-30 15:55:57 +01:00
Erik Johnston
143f1a2532 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_fed_store 2018-07-30 09:56:18 +01:00
Jeroen
2903e65aff fix isort 2018-07-29 19:47:08 +02:00
Richard van der Hoff
a8cbce0ced fix invalidation 2018-07-27 16:17:17 +01:00
Matthew Hodgson
e9b2d047f6 make /context lazyload & filter aware (#3567)
make /context lazyload & filter aware.
2018-07-27 15:12:50 +01:00
Erik Johnston
ad7cd95a78 Newsfile 2018-07-27 15:02:57 +01:00
Richard van der Hoff
f102c05856 Rewrite cache list decorator
Because it was complicated and annoyed me. I suspect this will be more
efficient too.
2018-07-27 13:47:04 +01:00
Jeroen
8e3f75b39a fix accidental removal of hs 2018-07-27 12:17:31 +02:00
Richard van der Hoff
b0b5566f36 Merge pull request #3391 from t3chguy/t3chguy/default_inviter_display_name_3pid
if inviter_display_name == ""||None then default to inviter MXID
2018-07-27 10:08:39 +01:00
Richard van der Hoff
781c2bd21a Merge pull request #3469 from DJViking/master
Add instructions for install on OpenSUSE and SLES
2018-07-27 09:21:41 +01:00
Richard van der Hoff
7041cd872b Merge branch 'develop' into send_sni_for_federation_requests 2018-07-27 09:17:11 +01:00
Richard van der Hoff
d42455cdc9 Merge pull request #3616 from matrix-org/travis/event_id_send_leave
Update the send_leave path to be an event_id
2018-07-26 23:25:17 +01:00
Richard van der Hoff
65c8dee900 Merge pull request #3614 from matrix-org/rav/stop_populating_event_content
Stop populating events.content
2018-07-26 22:54:08 +01:00
Matthew Hodgson
a75231b507 Deduplicate redundant lazy-loaded members (#3331)
* attempt at deduplicating lazy-loaded members

as per the proposal; we can deduplicate redundant lazy-loaded members
which are sent in the same sync sequence. we do this heuristically
rather than requiring the client to somehow tell us which members it
has chosen to cache, by instead caching the last N members sent to
a client, and not sending them again.  For now we hardcode N to 100.
Each cache for a given (user,device) tuple is in turn cached for up to
X minutes (to avoid the caches building up).  For now we hardcode X to 30.

* add include_redundant_members filter option & make it work

* remove stale todo

* add tests for _get_some_state_from_cache

* incorporate review
2018-07-26 22:51:30 +01:00
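A toy version of the heuristic this commit describes, with hypothetical names: remember a bounded, time-limited set of the member event IDs already sent to each (user, device) pair, and filter them out of later syncs. Here `member_events` is a list of member event IDs.

```
import time
from collections import OrderedDict

N_MEMBERS = 100          # the hardcoded "N" from the commit message
TTL_SECONDS = 30 * 60    # the hardcoded "X" from the commit message

_caches = {}  # (user_id, device_id) -> (expiry_ts, OrderedDict of event_ids)

def filter_redundant_members(user_id, device_id, member_events):
    now = time.time()
    expiry, seen = _caches.get((user_id, device_id), (0, OrderedDict()))
    if expiry < now:
        seen = OrderedDict()  # cache expired; start afresh
    to_send = [ev for ev in member_events if ev not in seen]
    for ev in member_events:
        seen[ev] = True
        seen.move_to_end(ev)
        if len(seen) > N_MEMBERS:
            seen.popitem(last=False)  # drop the oldest remembered member
    _caches[(user_id, device_id)] = (now + TTL_SECONDS, seen)
    return to_send
```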
Richard van der Hoff
9e68b1bd2d Merge pull request #3613 from matrix-org/rav/stop_using_event_edges_room_id
Remove some redundant joins on event_edges.room_id
2018-07-26 22:31:01 +01:00
Travis Ralston
49254d43a6 Create 3616.misc 2018-07-26 14:45:48 -06:00
Travis Ralston
7d32f0d745 Update the send_leave path to be an event_id
It's still not used; however, the parameter is an event ID, not a transaction ID.
2018-07-26 14:41:59 -06:00
Richard van der Hoff
ef9d51b081 Merge pull request #3610 from matrix-org/rav/fix_looping_calls
Fix some looping_call calls which were broken in #3604
2018-07-26 14:55:57 +01:00
Richard van der Hoff
85531a06a2 changelog 2018-07-26 14:55:29 +01:00
Richard van der Hoff
51d7df1915 Create the column nullable
There's no real point in ever making the column non-nullable, and doing so
breaks the sytests.
2018-07-26 14:54:04 +01:00
Richard van der Hoff
5c1d301fd9 Stop populating events.content
This field is no longer read from, so we should stop populating it. Once we're
happy that this doesn't break everything, and a rollback is unlikely, we can
think about dropping the column.
2018-07-26 14:43:02 +01:00
Richard van der Hoff
cf78eaebad changelog 2018-07-26 13:22:40 +01:00
Richard van der Hoff
bd4b25f4d0 Remove some redundant joins on event_edges.room_id
We've long passed the point where it's possible to have the same event_id in
different tables, so these join conditions are redundant: we can just join on
event_id.

event_edges is of non-trivial size, and the room_id column is wasteful, so
let's stop reading from it. In future, we can stop writing to it, and then drop
it.
2018-07-26 13:19:08 +01:00
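Illustrative query shapes only, not the actual Synapse SQL: since an event_id now identifies a row on its own, the join condition can drop the room_id column.

```
# before: join on both event_id and the redundant room_id column
"""
SELECT e.event_id
FROM events AS e
INNER JOIN event_edges AS edge
    ON edge.event_id = e.event_id AND edge.room_id = e.room_id
"""

# after: event_ids are unique across rooms, so event_id alone suffices
"""
SELECT e.event_id
FROM events AS e
INNER JOIN event_edges AS edge ON edge.event_id = e.event_id
"""
```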
Richard van der Hoff
1b4d73fa52 comment on event_edges 2018-07-26 12:53:51 +01:00
Richard van der Hoff
a15ed52267 changelog 2018-07-26 12:53:18 +01:00
Richard van der Hoff
21e878ebb6 Make EventStore inherit from EventFederationStore
(since it uses methods therein)

Turns out that we had a bunch of things which were incorrectly importing
EventWorkerStore from events.py rather than events_worker.py, which broke once
I removed the import into events.py.
2018-07-26 12:48:51 +01:00
Richard van der Hoff
03751a6420 Fix some looping_call calls which were broken in #3604
It turns out that looping_call does check the deferred returned by its
callback, and (at least in the case of client_ips), we were relying on this,
and I broke it in #3604.

Update run_as_background_process to return the deferred, and make sure we
return it to clock.looping_call.
2018-07-26 11:48:08 +01:00
Matthew Hodgson
1bcd0490c2 Merge pull request #2970 from matrix-org/matthew/filter_members
Implement the lazy_load_members room state filter parameter
2018-07-26 00:03:01 +01:00
Matthew Hodgson
bc7944e6d2 switch missing_types to be a bool 2018-07-25 23:36:31 +01:00
Travis Ralston
a4fe9d2d36 Merge pull request #3609 from matrix-org/travis/doc-typo-2
Fix a minor documentation typo in on_make_leave
2018-07-25 16:09:45 -06:00
Travis Ralston
6185650f9c Create 3609.misc 2018-07-25 15:46:56 -06:00
Travis Ralston
d8e65ed7e1 Fix a minor documentation typo in on_make_leave 2018-07-25 15:44:41 -06:00
Matthew Hodgson
2565804030 Merge branch 'develop' into matthew/filter_members 2018-07-25 17:27:49 +01:00
Matthew Hodgson
0620d27f4d flake8 2018-07-25 17:21:17 +01:00
Matthew Hodgson
0a7ee0ab8b add tests for _get_some_state_from_cache 2018-07-25 16:33:52 +01:00
Matthew Hodgson
7d9fb88617 incorporate more review. 2018-07-25 16:33:50 +01:00
Erik Johnston
78a691d005 Split out DB writes in federation handler
This will allow us to easily add an internal replication API to proxy
these requests to master, so that we can move federation APIs to
workers.
2018-07-25 16:22:56 +01:00
Erik Johnston
3849f7f69f Merge pull request #3603 from matrix-org/erikj/handle_outliers
Correctly handle outliers during persist events
2018-07-25 13:24:04 +01:00
Richard van der Hoff
cee1ae1b72 Merge pull request #3606 from matrix-org/rav/logcontext_fixes_once_more
Fix another logcontext leak in _persist_events
2018-07-25 11:56:00 +01:00
Richard van der Hoff
32b30e15f2 Merge pull request #3607 from matrix-org/rav/fix_persist_events_integrity_error
Fix occasional 'tuple index out of range' error
2018-07-25 11:55:45 +01:00
Richard van der Hoff
4081bd1560 Merge pull request #3605 from matrix-org/rav/fix_update_remote_profile_cache
Fix updating of cached remote profiles
2018-07-25 11:54:46 +01:00
Richard van der Hoff
1bfb5bed1d Merge pull request #3604 from matrix-org/rav/background_process_fixes
Wrap a number of things that run in the background
2018-07-25 11:54:33 +01:00
Erik Johnston
7780a7b47c Actually fix it by adding continue 2018-07-25 11:13:20 +01:00
Richard van der Hoff
1be94440d3 Fix occasional 'tuple index out of range' error
This fixes a bug in _delete_existing_rows_txn which was introduced in #3435
(though it's been on matrix-org-hotfixes for *years*). This code is only called
when there is some sort of conflict the first time we try to persist an event,
so it only happens rarely. Still, the exceptions are annoying.
2018-07-25 11:05:58 +01:00
Richard van der Hoff
07defd5fe6 Fix another logcontext leak in _persist_events
We need to run the errback in the sentinel context to avoid losing our own
context.

Also: add logging to runInteraction to help identify where "Starting db
connection from sentinel context" warnings are coming from
2018-07-25 10:53:23 +01:00
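A sketch of the pattern, assuming the `PreserveLoggingContext` helper from `synapse.util.logcontext` of this era; the function name is illustrative:

```
from synapse.util.logcontext import PreserveLoggingContext

def fail_in_sentinel_context(deferred, failure):
    # Enter the sentinel context before firing the errback, so downstream
    # callbacks don't run in (and then discard) our logcontext; our own
    # context is restored when the block exits.
    with PreserveLoggingContext():
        deferred.errback(failure)
```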
Richard van der Hoff
9c237a7ab5 changelog 2018-07-25 10:46:01 +01:00
Richard van der Hoff
55acd6856c Fix updating of cached remote profiles
_update_remote_profile_cache was missing its `defer.inlineCallbacks`, so when
it was called, would just return a generator object, without actually running
any of the method body.
2018-07-25 10:34:48 +01:00
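The bug class in miniature: calling a generator function that lacks `@defer.inlineCallbacks` merely builds a generator object and runs none of its body (demo names are mine):

```
from twisted.internet import defer

def broken_update():
    yield defer.succeed(None)
    print("never reached")

@defer.inlineCallbacks
def fixed_update():
    yield defer.succeed(None)
    print("cache updated")

broken_update()  # returns a generator; prints nothing
fixed_update()   # runs the body and prints "cache updated"
```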
Richard van der Hoff
f59be4eb0e Fix unit tests
on_notifier_poke no longer runs synchronously, so we have to do a different hack
to make sure that the replication data has been sent. Let's actually listen for
its arrival.
2018-07-25 10:30:36 +01:00
Erik Johnston
a297ff2b16 Fix typo in conditional 2018-07-25 09:48:01 +01:00
Richard van der Hoff
3f11d84534 Changelog 2018-07-25 09:43:25 +01:00
Richard van der Hoff
371da42ae4 Wrap a number of things that run in the background
This will reduce the number of "Starting db connection from sentinel context"
warnings, and will help with our metrics.
2018-07-25 09:41:12 +01:00
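A sketch of the wrapper usage; the forked method and description string are illustrative:

```
from synapse.metrics.background_process_metrics import run_as_background_process

def on_new_event(self):
    # fire-and-forget, but with its own logcontext and resource metrics
    # instead of running in the sentinel context
    run_as_background_process("process_event_queue", self._process_event_queue)
```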
Erik Johnston
0b300d323a Newsfile 2018-07-25 09:39:45 +01:00
Erik Johnston
ec56121b0d Correctly handle outliers during persist events
We incorrectly asserted that all contexts must have a non-None state
group, without considering outliers. This would usually be fine, as the
assertion would never be hit, as there is a shortcut during persistence
if the forward extremities don't change.

However, if the outlier is being persisted with non-outlier events, the
function would be called and the assertion would be hit.

Fixes #3601
2018-07-25 09:35:02 +01:00
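Read together with "Actually fix it by adding continue" above, the fix is roughly the following (the loop shape is illustrative; `is_outlier` is real Synapse event metadata):

```
def new_forward_extremities(events_and_contexts):
    extremities = set()
    for event, context in events_and_contexts:
        if event.internal_metadata.is_outlier():
            # outliers legitimately have no state group; skip, don't assert
            continue
        assert context.state_group is not None
        extremities.add(event.event_id)
    return extremities
```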
Matthew Hodgson
cb5c37a57c handle the edge case for _get_some_state_from_cache where types is [] 2018-07-24 20:34:45 +01:00
Michael Telatynski
38eaa5280d add changelog entry for PR#3391
Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2018-07-24 17:25:42 +01:00
Erik Johnston
1e5dbdcbb1 Merge pull request #3597 from matrix-org/erikj/did_forget
Fix client_reader worker being able to handle /context requests
2018-07-24 17:20:39 +01:00
Erik Johnston
1674a85238 Move newsfile 2018-07-24 17:20:27 +01:00
Michael Telatynski
87951d3891 Merge branch 'develop' of github.com:matrix-org/synapse into t3chguy/default_inviter_display_name_3pid 2018-07-24 17:17:46 +01:00
Erik Johnston
f14c866e37 Newsfile 2018-07-24 16:51:19 +01:00
Erik Johnston
8b8c4f34a3 Replace usage of get_current_token with StreamToken.START
This allows us to handle /context/ requests on the client_reader worker
without having to pull in all the various stream handlers (e.g.
presence, typing, pushers, etc.). The only thing the token gets used for
is pagination, and that ignores everything but the room portion of the
token.
2018-07-24 16:49:17 +01:00
Erik Johnston
3188973857 Pull out did_forget to worker store 2018-07-24 16:49:14 +01:00
Erik Johnston
60a1d147a7 Merge pull request #3595 from matrix-org/erikj/use_deltas
Use deltas to calculate current state deltas
2018-07-24 15:41:18 +01:00
Erik Johnston
709c309b0e Expand on docstring comment about return value 2018-07-24 15:12:50 +01:00
Erik Johnston
8f65ab98d2 Remove unnecessary iteritems 2018-07-24 15:08:01 +01:00
Erik Johnston
ed0dd68731 Fixup comment (and indent) 2018-07-24 14:31:38 +01:00
Erik Johnston
f33c596533 Newsfile 2018-07-24 14:28:04 +01:00
Erik Johnston
811ac73a42 Don't fetch state from the database unless needed 2018-07-24 14:18:23 +01:00
Erik Johnston
a79410e7b8 Have _get_new_state_after_events return delta
If we have a delta from the existing to new current state, then we can
reuse that rather than manually working it out by fetching both lots of
state.
2018-07-24 14:14:17 +01:00
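The idea in miniature (hypothetical helpers): a state delta is just the changed `(event_type, state_key) -> event_id` entries, and applying one is much cheaper than fetching and diffing two full state maps.

```
def calculate_delta(old_state, new_state):
    # entries of {(event_type, state_key): event_id} that changed
    return {k: v for k, v in new_state.items() if old_state.get(k) != v}

def apply_delta(current_state, delta):
    updated = dict(current_state)
    updated.update(delta)
    return updated
```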
Richard van der Hoff
f7b133fc26 Merge pull request #3594 from matrix-org/richvdh-patch-1
Update ISSUE_TEMPLATE.md
2018-07-24 14:13:15 +01:00
Richard van der Hoff
81946db9cf Merge pull request #3587 from matrix-org/rav/better_exception_logging
Improve logging for exceptions when handling PDUs
2018-07-24 14:12:12 +01:00
Richard van der Hoff
a321f78991 Merge pull request #3586 from matrix-org/rav/optimise_resolve_state_groups
Fixes and optimisations for resolve_state_groups
2018-07-24 14:11:45 +01:00
Richard van der Hoff
93b0722c50 Merge pull request #3583 from matrix-org/rav/remove_who_forgot_in_room
Remove redundant checks on room forgottenness
2018-07-24 14:11:11 +01:00
Matthew Hodgson
454f59b7ad Merge branch 'develop' into matthew/filter_members 2018-07-24 14:03:37 +01:00
Matthew Hodgson
1a01a5b964 clarify comment on p_ids 2018-07-24 14:03:15 +01:00
Erik Johnston
223341205e Don't require to_delete to have event_ids 2018-07-24 14:02:40 +01:00
Matthew Hodgson
e22700c3dd consider non-filter_type types as wildcards, thus missing from the state-group-cache 2018-07-24 13:59:07 +01:00
Richard van der Hoff
7417951117 Update ISSUE_TEMPLATE.md
request backticks for logs
2018-07-24 13:46:39 +01:00
Matthew Hodgson
eb1d911ab7 rather than adding ll_ids, remove them from p_ids 2018-07-24 13:40:49 +01:00
Erik Johnston
0a8e4f3af9 Merge pull request #3592 from matrix-org/erikj/speed_up_calculate_state_delta
Speed up _calculate_state_delta
2018-07-24 13:39:58 +01:00
Matthew Hodgson
d19fba3655 Merge branch 'develop' into matthew/filter_members 2018-07-24 12:39:54 +01:00
Matthew Hodgson
cd241d6bda incorporate more review 2018-07-24 12:39:40 +01:00
Richard van der Hoff
30bfed5aa5 Merge remote-tracking branch 'origin/develop' into rav/remove_who_forgot_in_room 2018-07-24 11:46:09 +01:00
Erik Johnston
97acd385a3 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/speed_up_calculate_state_delta 2018-07-24 11:32:13 +01:00
Erik Johnston
0fa73e4a63 Remove unnecessary if 2018-07-24 11:19:23 +01:00
Erik Johnston
2581eb3e1d Newsfile 2018-07-24 11:14:28 +01:00
Richard van der Hoff
69292de6cc Merge pull request #3591 from matrix-org/rav/logcontext_fixes
Logcontext fixes
2018-07-24 11:11:27 +01:00
Erik Johnston
ff5426f6b8 Speed up _calculate_state_delta 2018-07-24 10:55:11 +01:00
Richard van der Hoff
a678145010 Merge branch 'develop' into rav/logcontext_fixes 2018-07-24 10:43:30 +01:00
Erik Johnston
d436ad332c Merge pull request #3555 from matrix-org/erikj/client_apis_move
Make client_reader support some more read only APIs
2018-07-24 10:42:28 +01:00
Richard van der Hoff
2601ee28bd Merge pull request #3590 from matrix-org/rav/persist_events_metrics
Add some measure blocks to persist_events
2018-07-24 10:41:51 +01:00
Erik Johnston
536bc63a4e Merge branch 'develop' into erikj/client_apis_move 2018-07-24 09:57:05 +01:00
Richard van der Hoff
cf2d15c6a9 another couple of logcontext leaks 2018-07-24 00:57:48 +01:00
Richard van der Hoff
c6e66821a9 Changelog 2018-07-24 00:38:39 +01:00
Richard van der Hoff
8dff6e0322 Logcontext fixes
Fix some random logcontext leaks.
2018-07-24 00:37:17 +01:00
Richard van der Hoff
30957a941a changelog 2018-07-24 00:24:38 +01:00
Richard van der Hoff
69fb5dbdab fix idiocy 2018-07-24 00:04:44 +01:00
Richard van der Hoff
1938cffaea Add some measure blocks to persist_events
... to help us figure out where 40% of CPU is going
2018-07-23 23:48:19 +01:00
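A sketch of what a measure block looks like, using `Measure` from `synapse.util.metrics`; the phase names and helper methods are illustrative:

```
from twisted.internet import defer

from synapse.util.metrics import Measure

@defer.inlineCallbacks
def persist(self, events):
    with Measure(self.clock, "persist_events.compute_state"):
        state = yield self._compute_state(events)
    with Measure(self.clock, "persist_events.write"):
        yield self._write_events(events, state)
```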
Matthew Hodgson
efcdacad7d handle case where types is [] on postgres correctly 2018-07-23 22:41:05 +01:00
Matthew Hodgson
004a83b43a changelog 2018-07-23 22:32:35 +01:00
Richard van der Hoff
d8709df739 changelog 2018-07-23 22:16:22 +01:00
Richard van der Hoff
ce0c18dec5 Improve logging for exceptions when handling PDUs
when we get an exception handling a federation PDU, log the whole stacktrace.
2018-07-23 22:13:19 +01:00
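The plain-Python version of the change: `logger.exception` records the message and the full stacktrace, where logging `str(e)` records only the message (`process_pdu` here is a stand-in):

```
import logging

logger = logging.getLogger(__name__)

def process_pdu(pdu):
    raise RuntimeError("boom")  # stand-in for the real handler

def handle_pdu(pdu):
    try:
        process_pdu(pdu)
    except Exception:
        # logs the message *and* the whole stacktrace
        logger.exception("Failed to handle PDU %s", pdu)
```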
Richard van der Hoff
c1f80effbe Handle delta_ids being None in _update_context_for_auth_events
it's easier to create the new state group as a delta from the existing one.

(There's an outside chance this will help with
https://github.com/matrix-org/synapse/issues/3364)
2018-07-23 22:06:50 +01:00
Matthew Hodgson
adfe29ec0b Merge branch 'develop' into matthew/filter_members 2018-07-23 19:21:37 +01:00
Matthew Hodgson
254fb430d1 incorporate review 2018-07-23 19:21:20 +01:00
Richard van der Hoff
cc99256e90 newsfile 2018-07-23 19:10:50 +01:00
Richard van der Hoff
5c705f70c9 Fixes and optimisations for resolve_state_groups
First of all, fix the logic which looks for identical input state groups so
that we actually use them. This turned out to be most easily done by factoring
the relevant code out to a separate function so that we could do an early
return.

Secondly, avoid building the whole `conflicted_state` dict (which was only ever
used as a boolean flag).

Thirdly, replace the construction of the `state` dict (which mapped from keys
to events that set them), with an optimistic construction of the resolution
result assuming there will be no conflicts. This should be no slower than
building the old `state` dict, and:
  - in the conflicted case, we'll short-cut it, saving part of the work
  - in the unconflicted case, it saves rebuilding the resolution from the
    `state` dict.

Finally, do a couple of s/values/itervalues/.
2018-07-23 19:10:11 +01:00
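A condensed sketch of the strategy described above (names illustrative, and the slow path is a placeholder): short-circuit identical inputs, then build the resolution optimistically and bail out to full resolution only if a conflict actually appears.

```
def full_resolution(state_groups):
    raise NotImplementedError  # placeholder for the auth-rule based slow path

def resolve(state_groups):
    # state_groups: list of {(event_type, state_key): event_id} dicts
    first = state_groups[0]
    if all(group == first for group in state_groups[1:]):
        return first  # identical input groups: nothing to resolve

    resolved = dict(first)  # optimistic: assume no conflicts
    for group in state_groups[1:]:
        for key, event_id in group.items():
            if resolved.setdefault(key, event_id) != event_id:
                return full_resolution(state_groups)  # conflict: short-cut out
    return resolved
```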
Erik Johnston
f559119de0 Merge pull request #3584 from matrix-org/erikj/use_cached
Only get cached state from context in persist_event
2018-07-23 17:52:45 +01:00
Erik Johnston
8b9f164fff Comments 2018-07-23 17:43:01 +01:00
Erik Johnston
2d5bba151b Newsfile 2018-07-23 17:29:32 +01:00
Erik Johnston
50c60e5fad Only get cached state from context in persist_event
We don't want to bother pulling the current state out of the DB
until we know we have to. Checking the context for state is just an
optimisation.
2018-07-23 17:21:40 +01:00
Richard van der Hoff
4f5cc8e4e7 Merge remote-tracking branch 'origin/develop' into rav/remove_who_forgot_in_room 2018-07-23 17:15:12 +01:00
Richard van der Hoff
dae6dc1e77 Remove redundant checks on room forgottenness
Fixes #3550
2018-07-23 17:13:34 +01:00
Erik Johnston
a646bdc670 Merge pull request #3582 from matrix-org/erikj/fixup_stateless
Fix missing attributes on workers.
2018-07-23 16:44:42 +01:00
Erik Johnston
9f41ad491d Newsfile 2018-07-23 16:31:46 +01:00
Erik Johnston
0faa3223cd Fix missing attributes on workers.
This was missed during the transition from attribute to getter for
getting state from context.
2018-07-23 16:28:00 +01:00
Erik Johnston
37e87611bc Merge pull request #3581 from matrix-org/erikj/fixup_stateless
Fix EventContext when using workers
2018-07-23 16:06:59 +01:00
Erik Johnston
a4d24781bf Newsfile 2018-07-23 15:28:51 +01:00
Erik Johnston
999bcf9d01 Fix EventContext when using workers
We were:
  1. Not correctly setting all attributes
  2. Using defer.inlineCallbacks in a non-generator
2018-07-23 15:24:21 +01:00
Erik Johnston
9c294ea864 Merge pull request #3579 from matrix-org/erikj/stateless_contexts_4
Add concept of StatelessContext, take 4.
2018-07-23 15:14:39 +01:00
Erik Johnston
4797ed000e Update docstrings to make sense 2018-07-23 15:05:56 +01:00
Richard van der Hoff
726a0b1e64 Merge branch 'develop' into erikj/stateless_contexts_4 2018-07-23 14:44:27 +01:00
Erik Johnston
f0a1b8e4cd Merge pull request #3577 from matrix-org/erikj/cleanup_context
Refactor EventContext to accept state during init
2018-07-23 14:41:37 +01:00
Erik Johnston
8fbe418777 Fix unit tests 2018-07-23 13:33:49 +01:00
Erik Johnston
0b0b24cb82 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/client_apis_move 2018-07-23 13:21:15 +01:00
Erik Johnston
4fc52b1037 Update docs/workers.rst 2018-07-23 13:20:43 +01:00
Erik Johnston
f3182bb1d0 Newsfile 2018-07-23 13:19:24 +01:00
Erik Johnston
027bc01a1b Add support for updating state 2018-07-23 13:17:25 +01:00
Erik Johnston
e42510ba63 Use new getters 2018-07-23 13:17:22 +01:00
Erik Johnston
440b8845b5 Make EventContext lazy load state 2018-07-23 12:56:56 +01:00
Erik Johnston
842cdece42 pep8 2018-07-23 12:31:35 +01:00
Erik Johnston
959f4b9074 Newsfile 2018-07-23 12:22:59 +01:00
Erik Johnston
acbfdc3442 Refactor EventContext to accept state during init 2018-07-23 12:20:55 +01:00
Matthew Hodgson
354a99c968 Merge pull request #3520 from matrix-org/matthew/sync_deleted_devices
Announce deleted devices explicitly over federation.
2018-07-23 10:16:05 +01:00
Matthew Hodgson
9b34f3ea3a Merge branch 'develop' into matthew/sync_deleted_devices 2018-07-23 10:03:28 +01:00
Matthew Hodgson
c1bf2b587e add trailing comma 2018-07-23 09:56:23 +01:00
Amber Brown
3132b89f12 Make the rest of the .iterwhatever go away (#3562) 2018-07-21 15:47:18 +10:00
Erik Johnston
5c88bb722f Move PaginationHandler to its own file 2018-07-20 15:32:23 +01:00
Erik Johnston
0ecf68aedc Move check_in_room_or_world_readable to Auth 2018-07-20 15:30:59 +01:00
Richard van der Hoff
ff48ab8527 Merge pull request #3572 from matrix-org/rav/linearizer_cancellation
Test and fix support for cancellation in Linearizer
2018-07-20 14:44:02 +01:00
Richard van der Hoff
5c30cb709a Changelog 2018-07-20 14:01:36 +01:00
Richard van der Hoff
3d6df84658 Test and fix support for cancellation in Linearizer 2018-07-20 13:59:55 +01:00
Richard van der Hoff
4f67623674 Merge pull request #3571 from matrix-org/rav/limiter_fixes
A set of improvements to the Limiter
2018-07-20 13:53:03 +01:00
Amber Brown
e1a237eaab Admin API for creating new users (#3415) 2018-07-20 22:41:13 +10:00
Richard van der Hoff
683f4058c1 changelogs 2018-07-20 13:16:39 +01:00
Richard van der Hoff
7c712f95bb Combine Limiter and Linearizer
Linearizer was effectively a Limiter with max_count=1, so rather than
maintaining two sets of code, let's combine them.
2018-07-20 13:11:43 +01:00
Richard van der Hoff
8462c26485 Improvements to the Limiter
* give them names, to improve logging
* use a deque rather than a list for efficiency
2018-07-20 12:50:27 +01:00
Richard van der Hoff
d7275eecf3 Add a sleep to the Limiter to fix stack overflows.
Fixes #3570
2018-07-20 12:37:12 +01:00
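An illustrative miniature of the combined class these three commits describe: `max_count=1` behaves like the old Linearizer, the name feeds logging, a deque gives O(1) waiter pops, and waking the next waiter on a fresh reactor turn (rather than synchronously) is the "sleep" that prevents deep waiter chains from overflowing the stack. This is simplified; the real class hands back a context manager.

```
import collections

from twisted.internet import defer, reactor

class Limiter(object):
    def __init__(self, name, max_count=1):  # max_count=1 ~ Linearizer
        self.name = name
        self.max_count = max_count
        self.active = 0
        self.waiters = collections.deque()

    @defer.inlineCallbacks
    def acquire(self):
        if self.active >= self.max_count:
            d = defer.Deferred()
            self.waiters.append(d)
            yield d  # park until a release wakes us
        self.active += 1

    def release(self):
        self.active -= 1
        if self.waiters:
            d = self.waiters.popleft()
            # wake the waiter on the next reactor turn, not synchronously,
            # so release() never recurses into the waiter's continuation
            reactor.callLater(0, d.callback, None)
```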
Matthew Hodgson
650daf5628 make test work 2018-07-19 20:49:44 +01:00
Matthew Hodgson
1fa4f7e03e first cut of a UT for testing state store (untested) 2018-07-19 20:19:32 +01:00
Matthew Hodgson
2f558300cc fix thinkos; unbreak tests 2018-07-19 19:22:27 +01:00
Matthew Hodgson
bcaec2915a incorporate review 2018-07-19 19:03:50 +01:00
Matthew Hodgson
924eb34d94 add a filtered_types param to limit filtering to specific types 2018-07-19 18:32:02 +01:00
Richard van der Hoff
7044af3298 Merge pull request #3564 from matrix-org/hawkowl/markdown
Switch changes to markdown & flip content on for miscs
2018-07-19 17:31:59 +01:00
Richard van der Hoff
5f3658baf5 Merge pull request #3377 from Valodim/note-affinity
document that the affinity package is required for the cpu_affinity setting
2018-07-19 14:35:06 +01:00
Amber Brown
fd9b08873c Merge remote-tracking branch 'origin/develop' into hawkowl/markdown 2018-07-19 21:48:28 +10:00
Richard van der Hoff
11d592290c Merge branch 'master' into develop 2018-07-19 12:44:04 +01:00
Amber Brown
87086ac69d fixes 2018-07-19 21:42:40 +10:00
Amber Brown
7814e4cc86 fix up weird translation things 2018-07-19 21:37:57 +10:00
Richard van der Hoff
6284f579bf Update r0.33.0 release notes
(mostly just clarifications)
2018-07-19 12:37:55 +01:00
Amber Brown
7cf76c9a09 changelog 2018-07-19 21:26:30 +10:00
Amber Brown
37af0d2a13 make pyproject point at it 2018-07-19 21:25:47 +10:00
Amber Brown
252f80094c rst -> md 2018-07-19 21:25:33 +10:00
Amber Brown
9a8acdc720 Merge branch 'master' into develop 2018-07-19 21:16:44 +10:00
Amber Brown
d69decd5c7 0.33.0 final changelog 2018-07-19 21:12:15 +10:00
Amber Brown
38f53399a2 0.33 final 2018-07-19 21:11:40 +10:00
Amber Brown
13d501c773 update changelogs 2018-07-19 21:11:24 +10:00
Amber Brown
ce0545eca1 Revert "0.33.0rc1 changelog"
This reverts commit 21d3b87943.
2018-07-19 21:03:15 +10:00
Amber Brown
95ccb6e2ec Don't spew errors because we can't save metrics (#3563) 2018-07-19 20:58:18 +10:00
Richard van der Hoff
ba6477feac Merge pull request #3507 from Peetz0r/patch-1
Changed http links to https
2018-07-19 11:55:40 +01:00
Matthew Hodgson
be3adfc331 merge develop pydoc for _get_state_for_groups 2018-07-19 11:26:04 +01:00
Matthew Hodgson
9e40834f74 yes, we do need to invalidate the device_id_exists_cache when deleting a remote device 2018-07-19 11:15:10 +01:00
Richard van der Hoff
f1a15ea206 revert 00bc979
... we've fixed the things that caused the warnings, so we should reinstate the
warning.
2018-07-19 11:14:20 +01:00
Richard van der Hoff
1ffb7bec20 Merge remote-tracking branch 'origin/release-v0.33.0' into develop 2018-07-19 11:12:33 +01:00
Richard van der Hoff
2de3d994f3 Merge pull request #3561 from matrix-org/rav/disable_logcontext_warning
Disable logcontext warning
2018-07-19 11:05:57 +01:00
Richard van der Hoff
c754e006f4 Merge pull request #3556 from matrix-org/rav/background_processes
Run things as background processes
2018-07-19 11:04:18 +01:00
Amber Brown
a97c845271 Move v1-only APIs into their own module & isolate deprecated ones (#3460) 2018-07-19 20:03:33 +10:00
Matthew Hodgson
c0685f67c0 spell out that include_deleted_devices requires include_all_devices 2018-07-19 10:59:02 +01:00
Richard van der Hoff
18a2b2c0b4 changelog 2018-07-19 10:54:39 +01:00
Richard van der Hoff
00bc979137 Disable logcontext warning
Temporary workaround to #3518 while we release 0.33.0.
2018-07-19 10:51:15 +01:00
Benedikt Heine
f1dd89fe86 [Docker] Use sorted multiline package lists
This matches Docker best practices.

Signed-off-by: Benedikt Heine <bebe@bebehei.de>
2018-07-19 11:38:58 +02:00
Erik Johnston
6f62a6ef21 Merge pull request #3554 from matrix-org/erikj/response_metrics_code
Add response code to response timer metrics
2018-07-19 10:25:18 +01:00
Amber Brown
77091d7e8e Merge pull request #3559 from matrix-org/rav/pep8_config
add config for pep8
2018-07-19 13:17:07 +10:00
Richard van der Hoff
eed24893fa changelog 2018-07-18 22:12:19 +01:00
Richard van der Hoff
3f9e649f17 add config for pep8
Since, for better or worse, we seem to have configured isort to generate
89-character lines, pycharm is now complaining at me that our lines are too
long.

So, let's configure pep8 to behave consistently with flake8.
2018-07-18 20:55:37 +01:00
Richard van der Hoff
8c69b735e3 Make Distributor run its processes as a background process
This is more involved than it might otherwise be, because the current
implementation just drops its logcontexts and runs everything in the sentinel
context.

It turns out that we aren't actually using a bunch of the functionality here
(notably suppress_failures and the fact that Distributor.fire returns a
deferred), so the easiest way to fix this is actually by simplifying a bunch of
code.
2018-07-18 20:55:05 +01:00
Richard van der Hoff
08436c556a changelog 2018-07-18 20:55:05 +01:00
Richard van der Hoff
667fba68f3 Run things as background processes
This fixes #3518, and ensures that we get useful logs and metrics for lots of
things that happen in the background.

(There are certainly more things that happen in the background; these are just
the common ones I've found running a single-process synapse locally).
2018-07-18 20:55:05 +01:00
Erik Johnston
3a993a660d Newsfile 2018-07-18 15:39:49 +01:00
Erik Johnston
9b596177ae Add some room read only APIs to client_reader 2018-07-18 15:33:03 +01:00
Erik Johnston
bacdf0cbf9 Move RoomContextHandler out of Handlers
This is in preparation for moving GET /context/ to a worker
2018-07-18 15:33:03 +01:00
Erik Johnston
8cb8df55e9 Split MessageHandler into read only and writers
This will let us call the read only parts from workers, and so be able
to move some APIs off of master, e.g. the `/state` API.
2018-07-18 15:33:03 +01:00
Richard van der Hoff
65d6a0e477 Merge pull request #3553 from matrix-org/rav/background_process_tracking
Resource tracking for background processes
2018-07-18 14:27:50 +01:00
Erik Johnston
5bd0a47fcd pep8 2018-07-18 14:19:00 +01:00
Erik Johnston
00845c49d2 Newsfile 2018-07-18 14:03:16 +01:00
Erik Johnston
e45a46b6e4 Add response code to response timer metrics 2018-07-18 13:59:36 +01:00
Richard van der Hoff
dab00faa83 Merge pull request #3367 from matrix-org/rav/drop_re_signing_hacks
Remove event re-signing hacks
2018-07-18 12:46:27 +01:00
David Baker
c91a44572e Merge pull request #3514 from matrix-org/dbkr/turn_dont_add_defaults
Comment dummy TURN parameters in default config
2018-07-18 11:56:18 +01:00
Richard van der Hoff
92aecd557b changelog 2018-07-18 11:29:48 +01:00
Richard van der Hoff
6e3fc657b4 Resource tracking for background processes
This introduces a mechanism for tracking resource usage by background
processes, along with an example of how it will be used.

This will help address #3518, but more importantly will give us better insights
into things which are happening but not being shown up by the request metrics.

We *could* do this with Measure blocks, but:
 - I think having them pulled out as a completely separate metric class will
   make it easier to distinguish top-level processes from those which are
   nested.

 - I want to be able to report on in-flight background processes, and I don't
   think we want to do this for *all* Measure blocks.
2018-07-18 10:50:33 +01:00
Amber Brown
21d3b87943 0.33.0rc1 changelog 2018-07-18 12:53:32 +10:00
Amber Brown
5f3d02f6eb bump to 0.33.0rc1 2018-07-18 12:52:56 +10:00
Richard van der Hoff
0aed3fc346 Merge pull request #3546 from matrix-org/rav/fix_erasure_over_federation
Fix visibility of events from erased users over federation
2018-07-17 15:16:45 +01:00
Richard van der Hoff
79eb339c66 add a comment 2018-07-17 14:53:34 +01:00
Richard van der Hoff
4a11df5b64 changelog 2018-07-17 14:24:29 +01:00
Richard van der Hoff
d897be6a98 Fix visibility of events from erased users over federation 2018-07-17 14:02:07 +01:00
Richard van der Hoff
9c04b4abf9 Merge pull request #3541 from matrix-org/rav/optimize_filter_events_for_server
Refactor and optimze filter_events_for_server
2018-07-17 14:01:39 +01:00
Richard van der Hoff
94440ae994 fix imports 2018-07-17 11:51:26 +01:00
Amber Brown
bc006b3c9d Refactor REST API tests to use explicit reactors (#3351) 2018-07-17 20:43:18 +10:00
Erik Johnston
c7320a5564 Merge pull request #3544 from matrix-org/erikj/fixup_stream_cache
Fix perf regression in PR #3530
2018-07-17 11:16:59 +01:00
Richard van der Hoff
2172a3d8cb add a comment 2018-07-17 11:13:57 +01:00
Erik Johnston
b2aa05a8d6 Use efficient .intersection 2018-07-17 11:07:04 +01:00
Erik Johnston
850238b4ef Add unit test 2018-07-17 10:59:02 +01:00
Erik Johnston
9952d18e4d Newsfile 2018-07-17 10:31:51 +01:00
Erik Johnston
547b1355d3 Fix perf regression in PR #3530
The get_entities_changed function was changed to return all changed
entities since the given stream position, rather than only those changed
from a given list of entities. This resulted in the function incorrectly
returning large numbers of entities that, for example, caused large
increases in database usage.
2018-07-17 10:27:51 +01:00
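A sketch of the corrected semantics (the cache accessors are hypothetical): intersect the caller's entity list with the set of entities known to have changed since the stream position, instead of returning every changed entity.

```
def get_entities_changed(cache, entities, stream_pos):
    if not cache.has_info_for(stream_pos):  # hypothetical guard
        return set(entities)                 # cache can't answer; assume all changed
    # efficient set intersection, per the "Use efficient .intersection" commit
    return cache.changed_since(stream_pos).intersection(entities)
```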
Benedikt Heine
fe089b13cb [Docker] Build docker image via compose
It's much easier to build the image via docker-compose instead of an
error-prone low-level docker call.

Signed-off-by: Benedikt Heine <bebe@bebehei.de>
2018-07-17 09:24:33 +02:00
Amber Brown
3fe0938b76 Merge pull request #3530 from matrix-org/erikj/stream_cache
Don't return unknown entities in get_entities_changed
2018-07-17 13:44:46 +10:00
Amber Brown
fe10dd9fb2 Merge pull request #3540 from krombel/enforce_isort
check isort by travis
2018-07-17 13:41:59 +10:00
Krombel
9677b1d1c0 rename 'isort' to 'check_isort' as requested 2018-07-16 16:03:41 +02:00
Richard van der Hoff
2731bf7ac3 Changelog 2018-07-16 14:12:25 +01:00
Richard van der Hoff
09e29fb58b Attempt to make _filter_events_for_server more efficient 2018-07-16 14:06:09 +01:00
Richard van der Hoff
15b13b537f Add a test which profiles filter_events_for_server in a large room 2018-07-16 14:06:09 +01:00
Krombel
78a9ddcf9a rerun isort with latest version 2018-07-16 14:23:25 +02:00
Richard van der Hoff
ea69d35651 Move filter_events_for_server out of FederationHandler
for easier unit testing.
2018-07-16 13:06:24 +01:00
Krombel
4a27000548 check isort by travis 2018-07-16 13:57:33 +02:00
Jeroen
505530f36a Merge remote-tracking branch 'upstream/develop' into send_sni_for_federation_requests
# Conflicts:
#	synapse/crypto/context_factory.py
2018-07-14 20:24:46 +02:00
Erik Johnston
bc832f822f Fixup unit test 2018-07-13 17:03:04 +01:00
Erik Johnston
5f263b607e Newsfile 2018-07-13 15:47:25 +01:00
Erik Johnston
77b692e65d Don't return unknown entities in get_entities_changed
The stream cache keeps track of all entities that have changed since
a particular stream position, so get_entities_changed does not need to
return unknown entities when given a larger stream position.

This makes it consistent with the behaviour of has_entity_changed.
2018-07-13 15:26:10 +01:00
Matthew Hodgson
37c4fba0ac changelog 2018-07-12 11:45:33 +01:00
Matthew Hodgson
12ec58301f shift to using an explicit deleted flag on m.device_list_update EDUs
and generally make it work.
2018-07-12 11:39:43 +01:00
Matthew Hodgson
5797f5542b WIP to announce deleted devices over federation
Previously we queued up the poke correctly when the device was deleted,
but then the actual EDU wouldn't get sent, as the device was no longer known.
Instead, we now send EDUs for deleted devices too if there's a poke for them.
2018-07-12 01:32:39 +01:00
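Roughly the shape of an `m.device_list_update` EDU carrying the explicit `deleted` flag described above; treat the values as illustrative rather than a complete example:

```
edu_content = {
    "user_id": "@alice:example.org",
    "device_id": "JLAFKJWSCS",
    "stream_id": 6,
    "prev_id": [5],
    "deleted": True,  # tells remote servers to drop the device
}
```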
David Baker
1b5425527c I failed to correctly guess the PR number 2018-07-11 16:02:29 +01:00
David Baker
36f4fd3e1e Comment dummy TURN parameters in default config
This default config is parsed and used as a base before the actual
config is overlaid, so with these values not commented out, the
code to detect when no turn params were set and refuse to generate
credentials was never firing because the dummy default was always set.
2018-07-11 15:49:29 +01:00
Peter
2ef3f84945 change http links to https
Especially useful for the debian repo, as this makes it easier to get the key in a secure way
2018-07-10 21:14:22 +02:00
Jeroen
b5e157d895 Merge branch 'develop' into send_sni_for_federation_requests
# Conflicts:
#	synapse/http/endpoint.py
2018-07-09 08:51:11 +02:00
Richard van der Hoff
c4e7ad0e0f Add changelog 2018-07-04 09:15:45 +01:00
Richard van der Hoff
a4ab491371 Merge branch 'develop' into rav/drop_re_signing_hacks 2018-07-04 07:13:38 +01:00
Sverre Moe
ff65916108 Add instructions for install on OpenSUSE and SLES 2018-07-01 00:06:51 +02:00
Jeroen
95341a8f6f take idna implementation from twisted 2018-06-26 21:15:14 +02:00
Jeroen
26651b0d6a towncrier changelog 2018-06-26 21:10:45 +02:00
Jeroen
b7f34ee348 allow self-signed certificates 2018-06-26 20:41:05 +02:00
Jeroen
07b4f88de9 formatting changes for pep8 2018-06-25 12:31:16 +02:00
Jeroen
3d605853c8 send SNI for federation requests 2018-06-24 22:38:43 +02:00
Michael Telatynski
94700e55fa if inviter_display_name == ""||None then default to inviter MXID
to prevent email invite from "None"
2018-06-13 10:31:01 +01:00
Matthew Hodgson
c96d882a02 Merge branch 'develop' into matthew/filter_members 2018-06-10 12:26:14 +03:00
Vincent Breitmoser
b800834351 add note that the affinity package is required for the cpu_affinity setting 2018-06-09 22:50:29 +02:00
Richard van der Hoff
8503dd0047 Remove event re-signing hacks
These "temporary fixes" have been here three and a half years, and I can't find
any events in the matrix.org database where the calculated signature differs
from what's in the db. It's time for them to go away.
2018-06-07 16:08:29 +01:00
Matthew Hodgson
28f09fcdd5 Merge branch 'develop' into matthew/filter_members 2018-06-04 00:09:17 +03:00
Matthew Hodgson
5f6122fe10 more comments 2018-06-04 00:08:52 +03:00
Matthew Hodgson
9bbb9f5556 add lazy_load_members to the filter json schema 2018-05-29 04:26:10 +01:00
Matthew Hodgson
8df7bad839 pep8 2018-05-29 02:49:36 +01:00
Matthew Hodgson
b69ff33d9e disable CPUMetrics if no /proc/self/stat
fixes build on macOS again
2018-05-29 02:22:27 +01:00
Matthew Hodgson
5e6b31f0da fix dumb typo 2018-05-29 02:22:09 +01:00
Matthew Hodgson
a6c8f7c875 add pydoc 2018-05-29 01:09:55 +01:00
Matthew Hodgson
7a6df013cc merge develop 2018-05-29 00:25:22 +01:00
Matthew Hodgson
b2f2282947 make lazy_load_members configurable in filters 2018-03-19 01:15:13 +00:00
Matthew Hodgson
478af0f720 reshuffle todo & comments 2018-03-19 01:00:12 +00:00
Matthew Hodgson
366f730bf6 only get member state IDs for incremental syncs if we're filtering 2018-03-18 21:40:43 +00:00
Matthew Hodgson
0b56290f0b remove stale import 2018-03-16 01:45:49 +00:00
Matthew Hodgson
fc5397fdf5 remove debug 2018-03-16 01:44:55 +00:00
Matthew Hodgson
4f0493c850 fix tsm search again 2018-03-16 01:43:37 +00:00
Matthew Hodgson
f7dcc404f2 add state_ids for timeline entries 2018-03-16 01:37:53 +00:00
Matthew Hodgson
5b3b3aada8 simplify timeline_start_members 2018-03-16 01:17:34 +00:00
Erik Johnston
bf49d2dca8 Replace some ujson with simplejson to make it work 2018-03-16 00:55:44 +00:00
Matthew Hodgson
3bc5bd2d22 make incr syncs work 2018-03-16 00:52:04 +00:00
Matthew Hodgson
056a6df546 Merge branch 'develop' into matthew/filter_members 2018-03-14 15:38:05 +00:00
Matthew Hodgson
9f77001e27 pep8 2018-03-14 00:07:47 +00:00
Matthew Hodgson
4d0cfef6ee add copyright to nudge CI 2018-03-14 00:02:20 +00:00
Matthew Hodgson
c9d72e4571 oops 2018-03-13 23:46:45 +00:00
Matthew Hodgson
ccca02846d make it work 2018-03-13 22:31:41 +00:00
Matthew Hodgson
f0f9a0605b remove comment now #2969 is fixed 2018-03-13 22:12:15 +00:00
Matthew Hodgson
12350e3f9a merge proper fix to bug 2969 2018-03-13 22:11:58 +00:00
Matthew Hodgson
14a9d2f73d ensure we always include the members for a given timeline block 2018-03-13 22:03:42 +00:00
Matthew Hodgson
afbf4d3dcc typo 2018-03-13 19:48:04 +00:00
Matthew Hodgson
865377a70d disable optimisation for searching for state groups
when type filter includes wildcards on state_key
2018-03-13 19:46:04 +00:00
Matthew Hodgson
b2aba9e430 build where_clause sanely 2018-03-13 18:13:44 +00:00
Matthew Hodgson
52f7e23c72 PR feedbackz 2018-03-13 18:07:55 +00:00
Matthew Hodgson
1b1c137771 fix bug #2926 2018-03-13 17:52:52 +00:00
Matthew Hodgson
fdedcd1f4d correctly handle None state_keys
and fix include_other_types thinko
2018-03-12 01:39:06 +00:00
Matthew Hodgson
97c0496cfa fix sqlite where clause 2018-03-12 00:27:06 +00:00
Matthew Hodgson
8713365265 typos 2018-03-11 20:10:25 +00:00
Matthew Hodgson
9b334b3f97 WIP experiment in lazyloading room members 2018-03-11 20:01:41 +00:00
309 changed files with 20021 additions and 10454 deletions

.circleci/config.yml Normal file

@@ -0,0 +1,48 @@
version: 2
jobs:
  sytestpy2:
    machine: true
    steps:
      - checkout
      - run: docker pull matrixdotorg/sytest-synapsepy2
      - run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
      - store_artifacts:
          path: ~/project/logs
          destination: logs
  sytestpy2postgres:
    machine: true
    steps:
      - checkout
      - run: docker pull matrixdotorg/sytest-synapsepy2
      - run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
      - store_artifacts:
          path: ~/project/logs
          destination: logs
  sytestpy3:
    machine: true
    steps:
      - checkout
      - run: docker pull matrixdotorg/sytest-synapsepy3
      - run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy3
      - store_artifacts:
          path: ~/project/logs
          destination: logs
  sytestpy3postgres:
    machine: true
    steps:
      - checkout
      - run: docker pull matrixdotorg/sytest-synapsepy3
      - run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
      - store_artifacts:
          path: ~/project/logs
          destination: logs
workflows:
  version: 2
  build:
    jobs:
      - sytestpy2
      - sytestpy2postgres
      # Currently broken while the Python 3 port is incomplete
      # - sytestpy3
      # - sytestpy3postgres


@@ -3,3 +3,6 @@ Dockerfile
.gitignore
demo/etc
tox.ini
synctl
.git/*
.tox/*


@@ -27,8 +27,9 @@ Describe here the problem that you are experiencing, or the feature you are requ
Describe how what happens differs from what you expected.
If you can identify any relevant log snippets from _homeserver.log_, please include
those here (please be careful to remove any personal or private data):
<!-- If you can identify any relevant log snippets from _homeserver.log_, please include
those (please be careful to remove any personal or private data). Please surround them with
``` (three backticks, on a line on their own), so that they are formatted legibly. -->
### Version information


@@ -8,6 +8,9 @@ before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
services:
- postgresql
matrix:
fast_finish: true
include:
@@ -20,12 +23,22 @@ matrix:
- python: 2.7
env: TOX_ENV=py27
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=check_isort
- python: 3.6
env: TOX_ENV=check-newsfragment
allow_failures:
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
install:
- pip install tox


@@ -62,4 +62,7 @@ Christoph Witzany <christoph at web.crofting.com>
* Add LDAP support for authentication
Pierre Jaury <pierre at jaury.eu>
* Docker packaging
Serban Constantin <serban.constantin at gmail dot com>
* Small bug fix

CHANGES.md Normal file

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -30,11 +30,11 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
Code style
@@ -51,22 +51,22 @@ makes it horribly hard to review otherwise.
Changelog
~~~~~~~~~
All changes, even minor ones, need a corresponding changelog
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``issuenumberOrPR.type``. The type can be
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).
For example, a fix for a bug reported in #1234 would have its
changelog entry in ``changelog.d/1234.bugfix``, and contain content
like "The security levels of Florbs are now validated when
received over federation. Contributed by Jane Matrix".
For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix".
Attribution
~~~~~~~~~~~
@@ -125,7 +125,7 @@ the contribution or otherwise have the right to contribute it to Matrix::
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::


@@ -1,19 +0,0 @@
FROM docker.io/python:2-alpine3.7

RUN apk add --no-cache --virtual .nacl_deps su-exec build-base libffi-dev zlib-dev libressl-dev libjpeg-turbo-dev linux-headers postgresql-dev libxslt-dev

COPY . /synapse

# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
    && pip install --upgrade pip setuptools psycopg2 lxml \
    && mkdir -p /synapse/cache \
    && pip install -f /synapse/cache --upgrade --process-dependency-links . \
    && mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \
    && rm -rf setup.py setup.cfg synapse

VOLUME ["/data"]

EXPOSE 8008/tcp 8448/tcp

ENTRYPOINT ["/start.py"]


@@ -2,6 +2,7 @@ include synctl
include LICENSE
include VERSION
include *.rst
include *.md
include demo/README
include demo/demo.tls.dh
include demo/*.py
@@ -34,3 +35,5 @@ recursive-include changelog.d *
prune .github
prune demo/etc
prune docker
prune .circleci


@@ -71,7 +71,7 @@ We'd like to invite you to join #matrix:matrix.org (via
https://matrix.org/docs/projects/try-matrix-now.html), run a homeserver, take a look
at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
`APIs <https://matrix.org/docs/api>`_ and `Client SDKs
<http://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
<https://matrix.org/docs/projects/try-matrix-now.html#client-sdks>`_.
Thanks for using Matrix!
@@ -157,12 +157,19 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
There is an official synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this including configuration options is available in `contrib/docker/README.md`.
There is an official synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
this including configuration options is available in the README on
hub.docker.com.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy
tested with VirtualBox/AWS/DigitalOcean - see
https://github.com/EMnify/matrix-synapse-auto-deploy
for details.
Configuring synapse
@@ -283,7 +290,7 @@ Connecting to Synapse from a client
The easiest way to try out your new Synapse installation is by connecting to it
from a web client. The easiest option is probably the one at
http://riot.im/app. You will need to specify a "Custom server" when you log on
https://riot.im/app. You will need to specify a "Custom server" when you log on
or register: set this to ``https://domain.tld`` if you setup a reverse proxy
following the recommended setup, or ``https://localhost:8448`` - remember to specify the
port (``:8448``) if not ``:443`` unless you changed the configuration. (Leave the identity
@@ -329,7 +336,7 @@ Security Note
=============
Matrix serves raw user generated data in some APIs - specifically the `content
repository endpoints <http://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
repository endpoints <https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid>`_.
Whilst we have tried to mitigate against possible XSS attacks (e.g.
https://github.com/matrix-org/synapse/pull/1021) we recommend running
@@ -348,7 +355,7 @@ Platform-Specific Instructions
Debian
------
Matrix provides official Debian packages via apt from http://matrix.org/packages/debian/.
Matrix provides official Debian packages via apt from https://matrix.org/packages/debian/.
Note that these packages do not include a client - choose one from
https://matrix.org/docs/projects/try-matrix-now.html (or build your own with one of our SDKs :)
@@ -362,6 +369,19 @@ Synapse is in the Fedora repositories as ``matrix-synapse``::
Oleg Girko provides Fedora RPMs at
https://obs.infoserver.lv/project/monitor/matrix-synapse
OpenSUSE
--------
Synapse is in the OpenSUSE repositories as ``matrix-synapse``::
sudo zypper install matrix-synapse
SUSE Linux Enterprise Server
----------------------------
Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository at
https://download.opensuse.org/repositories/openSUSE:/Backports:/SLE-15/standard/
ArchLinux
---------
@@ -524,7 +544,7 @@ Troubleshooting Running
-----------------------
If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
to manually upgrade PyNaCL, as synapse uses NaCl (http://nacl.cr.yp.to/) for
to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
encryption and digital signatures.
Unfortunately PyNACL currently has a few issues
(https://github.com/pyca/pynacl/issues/53) and
@@ -672,8 +692,8 @@ useful just for development purposes. See `<demo/README>`_.
Using PostgreSQL
================
As of Synapse 0.9, `PostgreSQL <http://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <http://sqlite.org/>`_ database that Synapse has
As of Synapse 0.9, `PostgreSQL <https://www.postgresql.org>`_ is supported as an
alternative to the `SQLite <https://sqlite.org/>`_ database that Synapse has
traditionally used for convenience and simplicity.
The advantages of Postgres include:
@@ -697,7 +717,7 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`HAProxy <http://www.haproxy.org/>`_ in front of Synapse. One advantage of
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.


@@ -1 +0,0 @@
Enforce the specified API for report_event


@@ -1 +0,0 @@
Include CPU time from database threads in request/block metrics.


@@ -1 +0,0 @@
Add CPU metrics for _fetch_event_list


@@ -1 +0,0 @@
Reduce database consumption when processing large numbers of receipts


@@ -1 +0,0 @@
Cache optimisation for /sync requests


@@ -1 +0,0 @@
Fix queued federation requests being processed in the wrong order


@@ -1 +0,0 @@
refactor: use parse_{string,integer} and assert's from http.servlet for deduplication


changelog.d/3659.feature Normal file

@@ -0,0 +1 @@
Support profile API endpoints on workers

changelog.d/3673.misc Normal file

@@ -0,0 +1 @@
Refactor state module to support multiple room versions

changelog.d/3680.feature Normal file

@@ -0,0 +1 @@
Server notices for resource limit blocking

changelog.d/3722.bugfix Normal file

@@ -0,0 +1 @@
Fix error collecting prometheus metrics when run on dedicated thread due to threading concurrency issues

changelog.d/3724.feature Normal file

@@ -0,0 +1 @@
Allow guests to use /rooms/:roomId/event/:eventId

changelog.d/3726.misc Normal file

@@ -0,0 +1 @@
Split the state_group_cache into member and non-member state events (and so speed up LL /sync)

changelog.d/3727.misc Normal file

@@ -0,0 +1 @@
Log failure to authenticate remote servers as warnings (without stack traces)

changelog.d/3734.misc Normal file

@@ -0,0 +1 @@
Reference the need for an HTTP replication port when using the federation_reader worker

changelog.d/3735.misc Normal file

@@ -0,0 +1 @@
Fix minor spelling error in federation client documentation.

changelog.d/3746.misc Normal file

@@ -0,0 +1 @@
Fix MAU cache invalidation due to missing yield

changelog.d/3747.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we resent "limit exceeded" server notices repeatedly

changelog.d/3749.feature Normal file

@@ -0,0 +1 @@
Add mau_trial_days config param, so that users only get counted as MAU after N days.

changelog.d/3751.feature Normal file

@@ -0,0 +1 @@
Require twisted 17.1 or later (fixes [#3741](https://github.com/matrix-org/synapse/issues/3741)).

changelog.d/3753.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we broke sync when using limit_usage_by_mau but hadn't configured server notices

changelog.d/3754.bugfix Normal file

@@ -0,0 +1 @@
Fix 'federation_domain_whitelist' such that an empty list correctly blocks all outbound federation traffic

changelog.d/3755.bugfix Normal file

@@ -0,0 +1 @@
Fix tagging of server notice rooms


@@ -1,29 +1,5 @@
# Synapse Docker
The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
database server or a TURN server; you should run these separately.
If you run a Postgres server, you should simply include it in the same Compose
project or set the proper environment variables and the image will automatically
use that server.
## Build
Build the docker image with the `docker build` command from the root of the synapse repository.
```
docker build -t docker.io/matrixdotorg/synapse .
```
The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
You may have a local Python wheel cache available, in which case copy the relevant packages in the ``cache/`` directory at the root of the project.
## Run
This image is designed to run either with an automatically generated configuration
file or with a custom configuration that requires manual editing.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
@@ -60,94 +36,6 @@ Then, customize your configuration and run the server:
docker-compose up -d
```
### Without Compose
### More information
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
-d \
--name synapse \
-v ${DATA_PATH}:/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
docker.io/matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on large but low-
performance HDD storage, while other files could be stored on high-performance
endpoints.
In order to set up an application service, simply create an ``appservices``
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually. No other environment variable is required.
Otherwise, a dynamic configuration file will be used. The following environment
variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project which helps us to get funding.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guest joining this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
  URIs to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
  registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
  to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server, if any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server, if any.
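Putting a few of these together, a minimal Compose service definition might pass the settings above as follows (the server name and password are placeholders to adapt):
```
# Sketch of a Compose service using the variables documented above.
synapse:
  image: docker.io/matrixdotorg/synapse:latest
  environment:
    - SYNAPSE_SERVER_NAME=my.matrix.host
    - SYNAPSE_REPORT_STATS=yes
    - POSTGRES_PASSWORD=changeme
  volumes:
    - ./data:/data
```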
For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)


@@ -6,6 +6,7 @@ version: '3'
services:
synapse:
build: ../..
image: docker.io/matrixdotorg/synapse:latest
# Since synapse does not retry connecting to the database, restart upon
# failure


@@ -0,0 +1,6 @@
# Using the Synapse Grafana dashboard
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse (see the scrape-config sketch after this list). https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
3. Set up additional recording rules
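For step 1, a minimal Prometheus scrape job might look like the sketch below; the target address is a placeholder, and the metrics listener must already be enabled in your homeserver configuration:
```
# prometheus.yml fragment (illustrative)
scrape_configs:
  - job_name: "synapse"
    metrics_path: "/_synapse/metrics"
    static_configs:
      - targets: ["my.matrix.host:9090"]
```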

contrib/grafana/synapse.json (diff suppressed because the file is too large)

docker/Dockerfile

@@ -0,0 +1,35 @@
FROM docker.io/python:2-alpine3.8
RUN apk add --no-cache --virtual .nacl_deps \
build-base \
libffi-dev \
libjpeg-turbo-dev \
libressl-dev \
libxslt-dev \
linux-headers \
postgresql-dev \
su-exec \
zlib-dev
COPY . /synapse
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
&& pip install --upgrade \
lxml \
pip \
psycopg2 \
setuptools \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/docker/start.py /synapse/docker/conf / \
&& rm -rf \
setup.cfg \
setup.py \
synapse
VOLUME ["/data"]
EXPOSE 8008/tcp 8448/tcp
ENTRYPOINT ["/start.py"]

docker/README.md

@@ -0,0 +1,124 @@
# Synapse Docker
This Docker image will run Synapse as a single process. It does not provide a
database server or a TURN server; you should run these separately.
## Run
We do not currently offer a `latest` image, as this has somewhat undefined semantics.
We instead release only tagged versions so upgrading between releases is entirely
within your control.
### Using docker-compose (easier)
This image is designed to run either with an automatically generated configuration
file or with a custom configuration that requires manual editing.
An easy way to make use of this image is via docker-compose. See the
[contrib/docker](../contrib/docker)
section of the synapse project for examples.
### Without Compose (harder)
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline, and you may need
to add parameters to the `docker run` command to account for the network
situation with your postgres database.
```
docker run \
-d \
--name synapse \
-v ${DATA_PATH}:/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
docker.io/matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
You are free to use separate volumes depending on the storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on large but
low-performance HDD storage while other files could be stored on
high-performance endpoints.
To set up an application service, simply create an ``appservices``
directory in the data volume and write the application service YAML
configuration file there. Multiple application services are supported.
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually. No other environment variable is required.
Otherwise, a dynamic configuration file will be used. The following environment
variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
* ``SYNAPSE_REPORT_STATS`` (mandatory, ``yes`` or ``no``), enable anonymous
  statistics reporting back to the Matrix project, which helps us to get funding.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_CACHE_FACTOR``, the cache factor [default `0.5`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
  URIs to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
  registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
  to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use postgresql instead of sqlite3. [default: `db` which is useful when using a container on the same docker network in a compose file where the postgres service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If this is set then postgres will be used instead of sqlite3.** [default: none] **NOTE**: You are highly encouraged to use postgresql! Please use the compose file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default: `matrix`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname of the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default ``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server, if any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail server, if any.
## Build
Build the docker image with the `docker build` command from the root of the synapse repository.
```
docker build -t docker.io/matrixdotorg/synapse . -f docker/Dockerfile
```
The `-t` option sets the image tag. Official images are tagged `matrixdotorg/synapse:<version>` where `<version>` is the same as the release tag in the synapse git repository.
You may have a local Python wheel cache available, in which case copy the
relevant packages into the ``cache/`` directory at the root of the project.


@@ -0,0 +1,63 @@
Shared-Secret Registration
==========================
This API allows for the creation of users in an administrative and
non-interactive way. This is generally used for bootstrapping a Synapse
instance with administrator accounts.
To authenticate yourself to the server, you will need both the shared secret
(``registration_shared_secret`` in the homeserver configuration), and a
one-time nonce. If the registration shared secret is not configured, this API
is not enabled.
To fetch the nonce, you need to request one from the API::
> GET /_matrix/client/r0/admin/register
< {"nonce": "thisisanonce"}
Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
body containing the nonce, username, password, whether they are an admin
(optional, False by default), and an HMAC digest of the content.
As an example::
> POST /_matrix/client/r0/admin/register
> {
"nonce": "thisisanonce",
"username": "pepper_roni",
"password": "pizza",
"admin": true,
"mac": "mac_digest_here"
}
< {
"access_token": "token_here",
"user_id": "@pepper_roni:localhost",
"home_server": "test",
"device_id": "device_id_here"
}
The MAC is the hex digest output of the HMAC-SHA1 algorithm, with the key being
the shared secret and the content being the nonce, user, password, and either
the string "admin" or "notadmin", each separated by NULs. For an example of
generation in Python::
  import hmac, hashlib

  def generate_mac(nonce, user, password, admin=False):
      mac = hmac.new(
          key=shared_secret,
          digestmod=hashlib.sha1,
      )

      mac.update(nonce.encode('utf8'))
      mac.update(b"\x00")
      mac.update(user.encode('utf8'))
      mac.update(b"\x00")
      mac.update(password.encode('utf8'))
      mac.update(b"\x00")
      mac.update(b"admin" if admin else b"notadmin")

      return mac.hexdigest()
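To illustrate the whole flow end to end, the following sketch (our own illustration, not part of the official API docs) fetches a nonce and completes the registration using the third-party ``requests`` library; the homeserver URL, shared secret, and credentials are placeholders::

  # Illustrative only: fetch a nonce, compute the MAC, register a user.
  import hashlib
  import hmac

  import requests  # assumed to be installed

  base = "http://localhost:8008"        # placeholder homeserver URL
  shared_secret = b"the_shared_secret"  # placeholder

  nonce = requests.get(base + "/_matrix/client/r0/admin/register").json()["nonce"]

  mac = hmac.new(key=shared_secret, digestmod=hashlib.sha1)
  for part in (nonce, "pepper_roni", "pizza"):
      mac.update(part.encode("utf8"))
      mac.update(b"\x00")
  mac.update(b"admin")

  resp = requests.post(base + "/_matrix/client/r0/admin/register", json={
      "nonce": nonce,
      "username": "pepper_roni",
      "password": "pizza",
      "admin": True,
      "mac": mac.hexdigest(),
  })
  print(resp.json())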


@@ -74,7 +74,7 @@ replication endpoints that it's talking to on the main synapse process.
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.
Currently, only the ``event_creator`` worker requires specifying
Currently, the ``event_creator`` and ``federation_reader`` workers require specifying
``worker_replication_http_port``.
For instance::
@@ -173,10 +173,23 @@ endpoints matching the following regular expressions::
^/_matrix/federation/v1/backfill/
^/_matrix/federation/v1/get_missing_events/
^/_matrix/federation/v1/publicRooms
^/_matrix/federation/v1/query/
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/send/
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
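As a sketch only (reverse-proxy configuration is deployment specific), an nginx fragment routing federation traffic to a ``federation_reader`` worker listening on port 8083 might look like::

  # Illustrative: the port is a placeholder; in a real deployment you would
  # match only the endpoints listed above, and remember that
  # /_matrix/federation/v1/send/ must reach a single worker instance.
  location /_matrix/federation/v1/ {
      proxy_pass http://localhost:8083;
  }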
``synapse.app.federation_sender``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -206,6 +219,10 @@ Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~
@@ -224,6 +241,14 @@ regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
If ``use_presence`` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
This "stub" presence handler will pass through ``GET`` request but make the
``PUT`` effectively a no-op.
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
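For instance, a minimal ``frontend_proxy`` worker configuration pointing back at the main process might contain (addresses are placeholders)::

  worker_app: synapse.app.frontend_proxy
  worker_main_http_uri: http://127.0.0.1:8008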
@@ -240,6 +265,7 @@ Handles some event creation. It can handle REST endpoints matching::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.


@@ -1,5 +1,30 @@
[tool.towncrier]
package = "synapse"
filename = "CHANGES.rst"
filename = "CHANGES.md"
directory = "changelog.d"
issue_format = "`#{issue} <https://github.com/matrix-org/synapse/issues/{issue}>`_"
issue_format = "[\\#{issue}](https://github.com/matrix-org/synapse/issues/{issue})"
[[tool.towncrier.type]]
directory = "feature"
name = "Features"
showcontent = true
[[tool.towncrier.type]]
directory = "bugfix"
name = "Bugfixes"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
showcontent = true
[[tool.towncrier.type]]
directory = "removal"
name = "Deprecations and Removals"
showcontent = true
[[tool.towncrier.type]]
directory = "misc"
name = "Internal Changes"
showcontent = true


@@ -26,11 +26,37 @@ import yaml
def request_registration(user, password, server_location, shared_secret, admin=False):
req = urllib2.Request(
"%s/_matrix/client/r0/admin/register" % (server_location,),
headers={'Content-Type': 'application/json'}
)
try:
if sys.version_info[:3] >= (2, 7, 9):
# As of version 2.7.9, urllib2 now checks SSL certs
import ssl
f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23))
else:
f = urllib2.urlopen(req)
body = f.read()
f.close()
nonce = json.loads(body)["nonce"]
except urllib2.HTTPError as e:
print "ERROR! Received %d %s" % (e.code, e.reason,)
if 400 <= e.code < 500:
if e.info().type == "application/json":
resp = json.load(e)
if "error" in resp:
print resp["error"]
sys.exit(1)
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(nonce)
mac.update("\x00")
mac.update(user)
mac.update("\x00")
mac.update(password)
@@ -40,10 +66,10 @@ def request_registration(user, password, server_location, shared_secret, admin=F
mac = mac.hexdigest()
data = {
"user": user,
"nonce": nonce,
"username": user,
"password": password,
"mac": mac,
"type": "org.matrix.login.shared_secret",
"admin": admin,
}
@@ -52,7 +78,7 @@ def request_registration(user, password, server_location, shared_secret, admin=F
print "Sending registration request..."
req = urllib2.Request(
"%s/_matrix/client/api/v1/register" % (server_location,),
"%s/_matrix/client/r0/admin/register" % (server_location,),
data=json.dumps(data),
headers={'Content-Type': 'application/json'}
)


@@ -14,12 +14,17 @@ ignore =
pylint.cfg
tox.ini
[flake8]
[pep8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik doesn't like it.
# E203 is contrary to PEP8.
# W503 requires that binary operators be at the end, not start, of lines. Erik
# doesn't like it. E203 is contrary to PEP8.
ignore = W503,E203
[flake8]
# note that flake8 inherits the "ignore" settings from "pep8" (because it uses
# pep8 to do those checks), but not the "max-line-length" setting
max-line-length = 90
[isort]
line_length = 89
not_skip = __init__.py
@@ -31,3 +36,4 @@ known_compat = mock,six
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true
combine_as_imports=true


@@ -17,4 +17,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.32.2"
__version__ = "0.33.3"


@@ -25,7 +25,7 @@ from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
@@ -65,8 +65,9 @@ class Auth(object):
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
event, prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
@@ -210,9 +211,9 @@ class Auth(object):
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent",
default=[b""]
)[0]
)[0].decode('ascii', 'surrogateescape')
if user and access_token and ip_addr:
self.store.insert_client_ip(
yield self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
@@ -251,10 +252,10 @@ class Auth(object):
if ip_address not in app_service.ip_range_whitelist:
defer.returnValue((None, None))
if "user_id" not in request.args:
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
user_id = request.args["user_id"][0]
user_id = request.args[b"user_id"][0].decode('utf8')
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
@@ -544,7 +545,8 @@ class Auth(object):
@defer.inlineCallbacks
def add_auth_events(self, builder, context):
auth_ids = yield self.compute_auth_events(builder, context.prev_state_ids)
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_ids = yield self.compute_auth_events(builder, prev_state_ids)
auth_events_entries = yield self.store.add_event_hashes(
auth_ids
@@ -680,7 +682,7 @@ class Auth(object):
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
query_params = request.args.get(b"access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -696,7 +698,7 @@ class Auth(object):
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
@@ -718,9 +720,9 @@ class Auth(object):
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode('ascii')
else:
raise AuthError(
token_not_found_http_status,
@@ -736,4 +738,81 @@ class Auth(object):
errcode=Codes.MISSING_TOKEN
)
return query_params[0]
return query_params[0].decode('ascii')
@defer.inlineCallbacks
def check_in_room_or_world_readable(self, room_id, user_id):
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Returns:
Deferred[tuple[str, str|None]]: Resolves to the current membership of
the user in the room and the membership event ID of the user. If
the user is not in the room and never has been, then
`(Membership.JOIN, None)` is returned.
"""
try:
# check_user_was_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
except AuthError:
visibility = yield self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility and
visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
)
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
Args:
user_id(str|None): If present, checks for presence against existing
MAU cohort
"""
# Never fail an auth check for the server notices users
# This can be a problem where event creation is prohibited due to blocking
if user_id == self.hs.config.server_notices_mxid:
return
if self.hs.config.hs_disabled:
raise ResourceLimitError(
403, self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_uri=self.hs.config.admin_uri,
limit_type=self.hs.config.hs_disabled_limit_type
)
if self.hs.config.limit_usage_by_mau is True:
# If the user is already part of the MAU cohort or a trial user
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
is_trial = yield self.store.is_trial_user(user_id)
if is_trial:
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise ResourceLimitError(
403, "Monthly Active User Limit Exceeded",
admin_uri=self.hs.config.admin_uri,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type="monthly_active_user"
)


@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -68,7 +69,6 @@ class EventTypes(object):
RoomHistoryVisibility = "m.room.history_visibility"
CanonicalAlias = "m.room.canonical_alias"
Encryption = "m.room.encryption"
RoomAvatar = "m.room.avatar"
GuestAccess = "m.room.guest_access"
@@ -78,6 +78,7 @@ class EventTypes(object):
Name = "m.room.name"
ServerACL = "m.room.server_acl"
Pinned = "m.room.pinned_events"
class RejectedReason(object):
@@ -95,3 +96,19 @@ class RoomCreationPreset(object):
class ThirdPartyEntityKind(object):
USER = "user"
LOCATION = "location"
class RoomVersions(object):
V1 = "1"
VDH_TEST = "vdh-test-version"
# the version we will give rooms which are created on this server
DEFAULT_ROOM_VERSION = RoomVersions.V1
# vdh-test-version is a placeholder to get room versioning support working and tested
# until we have a working v2.
KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
ServerNoticeMsgType = "m.server_notice"
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -55,6 +56,9 @@ class Codes(object):
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
RESOURCE_LIMIT_EXCEEDED = "M_RESOURCE_LIMIT_EXCEEDED"
UNSUPPORTED_ROOM_VERSION = "M_UNSUPPORTED_ROOM_VERSION"
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
class CodeMessageException(RuntimeError):
@@ -69,20 +73,6 @@ class CodeMessageException(RuntimeError):
self.code = code
self.msg = msg
def error_dict(self):
return cs_error(self.msg)
class MatrixCodeMessageException(CodeMessageException):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
super(MatrixCodeMessageException, self).__init__(code, msg)
self.errcode = errcode
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
@@ -108,38 +98,28 @@ class SynapseError(CodeMessageException):
self.errcode,
)
@classmethod
def from_http_response_exception(cls, err):
"""Make a SynapseError based on an HTTPResponseException
This is useful when a proxied request has failed, and we need to
decide how to map the failure onto a matrix error to send back to the
client.
class ProxiedRequestError(SynapseError):
"""An error from a general matrix endpoint, eg. from a proxied Matrix API call.
An attempt is made to parse the body of the http response as a matrix
error. If that succeeds, the errcode and error message from the body
are used as the errcode and error message in the new synapse error.
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
super(ProxiedRequestError, self).__init__(
code, msg, errcode
)
if additional_fields is None:
self._additional_fields = {}
else:
self._additional_fields = dict(additional_fields)
Otherwise, the errcode is set to M_UNKNOWN, and the error message is
set to the reason code from the HTTP response.
Args:
err (HttpResponseException):
Returns:
SynapseError:
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json.loads(err.response)
except ValueError:
j = {}
errcode = j.get('errcode', Codes.UNKNOWN)
errmsg = j.get('error', err.msg)
res = SynapseError(err.code, errmsg, errcode)
return res
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
**self._additional_fields
)
class ConsentNotGivenError(SynapseError):
@@ -251,6 +231,30 @@ class AuthError(SynapseError):
super(AuthError, self).__init__(*args, **kwargs)
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.
For instance, the monthly active user limit for the server has been exceeded
"""
def __init__(
self, code, msg,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_uri=None,
limit_type=None,
):
self.admin_uri = admin_uri
self.limit_type = limit_type
super(ResourceLimitError, self).__init__(code, msg, errcode=errcode)
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
admin_uri=self.admin_uri,
limit_type=self.limit_type
)
class EventSizeError(SynapseError):
"""An error raised when an event is too big."""
@@ -308,12 +312,25 @@ class LimitExceededError(SynapseError):
)
def cs_exception(exception):
if isinstance(exception, CodeMessageException):
return exception.error_dict()
else:
logger.error("Unknown exception type: %s", type(exception))
return {}
class IncompatibleRoomVersionError(SynapseError):
"""A server is trying to join a room whose version it does not support."""
def __init__(self, room_version):
super(IncompatibleRoomVersionError, self).__init__(
code=400,
msg="Your homeserver does not support the features required to "
"join this room",
errcode=Codes.INCOMPATIBLE_ROOM_VERSION,
)
self._room_version = room_version
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
room_version=self._room_version,
)
def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
@@ -372,7 +389,7 @@ class HttpResponseException(CodeMessageException):
Represents an HTTP-level failure of an outbound request
Attributes:
response (str): body of response
response (bytes): body of response
"""
def __init__(self, code, msg, response):
"""
@@ -380,7 +397,39 @@ class HttpResponseException(CodeMessageException):
Args:
code (int): HTTP status code
msg (str): reason phrase from HTTP response status line
response (str): body of response
response (bytes): body of response
"""
super(HttpResponseException, self).__init__(code, msg)
self.response = response
def to_synapse_error(self):
"""Make a SynapseError based on an HTTPResponseException
This is useful when a proxied request has failed, and we need to
decide how to map the failure onto a matrix error to send back to the
client.
An attempt is made to parse the body of the http response as a matrix
error. If that succeeds, the errcode and error message from the body
are used as the errcode and error message in the new synapse error.
Otherwise, the errcode is set to M_UNKNOWN, and the error message is
set to the reason code from the HTTP response.
Returns:
SynapseError:
"""
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json.loads(self.response)
except ValueError:
j = {}
if not isinstance(j, dict):
j = {}
errcode = j.pop('errcode', Codes.UNKNOWN)
errmsg = j.pop('error', self.msg)
return ProxiedRequestError(self.code, errmsg, errcode, j)


@@ -113,7 +113,13 @@ ROOM_EVENT_FILTER_SCHEMA = {
},
"contains_url": {
"type": "boolean"
}
},
"lazy_load_members": {
"type": "boolean"
},
"include_redundant_members": {
"type": "boolean"
},
}
}
@@ -261,6 +267,12 @@ class FilterCollection(object):
def ephemeral_limit(self):
return self._room_ephemeral_filter.limit()
def lazy_load_members(self):
return self._room_state_filter.lazy_load_members()
def include_redundant_members(self):
return self._room_state_filter.include_redundant_members()
def filter_presence(self, events):
return self._presence_filter.filter(events)
@@ -417,6 +429,12 @@ class Filter(object):
def limit(self):
return self.filter_json.get("limit", 10)
def lazy_load_members(self):
return self.filter_json.get("lazy_load_members", False)
def include_redundant_members(self):
return self.filter_json.get("include_redundant_members", False)
def _matches_wildcard(actual_value, filter_value):
if filter_value.endswith("*"):


@@ -72,7 +72,7 @@ class Ratelimiter(object):
return allowed, time_allowed
def prune_message_counts(self, time_now_s):
for user_id in self.message_counts.keys():
for user_id in list(self.message_counts.keys()):
message_count, time_start, msg_rate_hz = (
self.message_counts[user_id]
)


@@ -140,7 +140,7 @@ def listen_metrics(bind_addresses, port):
logger.info("Metrics now reporting on %s:%d", host, port)
def listen_tcp(bind_addresses, port, factory, backlog=50):
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
@@ -156,7 +156,9 @@ def listen_tcp(bind_addresses, port, factory, backlog=50):
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
def listen_ssl(
bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
):
"""
Create an SSL socket for a port and several addresses
"""


@@ -117,8 +117,9 @@ class ASReplicationHandler(ReplicationClientHandler):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()


@@ -31,6 +31,7 @@ from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -38,9 +39,15 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.room import PublicRoomListRestServlet
from synapse.rest.client.v1.room import (
JoinedRoomMemberListRestServlet,
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomMemberListRestServlet,
RoomStateRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
@@ -52,13 +59,14 @@ logger = logging.getLogger("synapse.app.client_reader")
class ClientReaderSlavedStore(
SlavedAccountDataStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
TransactionStore,
SlavedTransactionStore,
SlavedClientIpStore,
BaseSlavedStore,
):
@@ -82,7 +90,13 @@ class ClientReaderServer(HomeServer):
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
RoomMemberListRestServlet(self).register(resource)
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -154,11 +168,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = ClientReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,

View File

@@ -43,8 +43,13 @@ from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
ProfileRestServlet,
)
from synapse.rest.client.v1.room import (
JoinRoomAliasServlet,
RoomMembershipRestServlet,
@@ -53,6 +58,7 @@ from synapse.rest.client.v1.room import (
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
@@ -62,8 +68,11 @@ logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
UserDirectoryStore,
DirectoryStore,
TransactionStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
@@ -101,6 +110,9 @@ class EventCreatorServer(HomeServer):
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -174,11 +186,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = EventCreatorServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -32,11 +32,17 @@ from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -49,11 +55,17 @@ logger = logging.getLogger("synapse.app.federation_reader")
class FederationReaderSlavedStore(
SlavedAccountDataStore,
SlavedProfileStore,
SlavedApplicationServiceStore,
SlavedPusherStore,
SlavedPushRuleStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
TransactionStore,
SlavedTransactionStore,
BaseSlavedStore,
):
pass
@@ -143,11 +155,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = FederationReaderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -36,11 +36,11 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.async import Linearizer
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
@@ -50,7 +50,7 @@ logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, TransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedDeviceInboxStore, SlavedTransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
@@ -144,8 +144,9 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
yield super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
@@ -186,11 +187,13 @@ def start(config_options):
config.send_federation = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ps = FederationSenderServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -38,6 +38,7 @@ from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.base import ClientV1RestServlet, client_path_patterns
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -49,6 +50,35 @@ from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.frontend_proxy")
class PresenceStatusStubServlet(ClientV1RestServlet):
PATTERNS = client_path_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__(hs)
self.http_client = hs.get_simple_http_client()
self.auth = hs.get_auth()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_GET(self, request, user_id):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.get_json(
self.main_uri + request.uri,
headers=headers,
)
defer.returnValue((200, result))
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
@@ -135,6 +165,12 @@ class FrontendProxyServer(HomeServer):
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -153,7 +189,8 @@ class FrontendProxyServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
reactor=self.get_reactor()
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -208,11 +245,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -18,6 +18,10 @@ import logging
import os
import sys
from six import iteritems
from prometheus_client import Gauge
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.web.resource import EncodingResourceWrapper, NoResource
@@ -47,6 +51,7 @@ from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, check_requirements
@@ -297,6 +302,11 @@ class SynapseHomeServer(HomeServer):
quit_with_error(e.message)
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
def setup(config_options):
"""
Args:
@@ -328,6 +338,7 @@ def setup(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
database_engine = create_engine(config.database_config)
config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection
@@ -336,6 +347,7 @@ def setup(config_options):
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
@@ -425,6 +437,9 @@ def run(hs):
# currently either 0 or 1
stats_process = []
def start_phone_stats_home():
return run_as_background_process("phone_stats_home", phone_stats_home)
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
@@ -442,7 +457,7 @@ def run(hs):
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in daily_user_type_results.iteritems():
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
@@ -453,7 +468,7 @@ def run(hs):
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in r30_results.iteritems():
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
@@ -496,16 +511,41 @@ def run(hs):
)
def generate_user_daily_visit_stats():
hs.get_datastore().generate_user_daily_visits()
return run_as_background_process(
"generate_user_daily_visits",
hs.get_datastore().generate_user_daily_visits,
)
# Rather than update on per session basis, batch up the requests.
# If you increase the loop period, the accuracy of user_daily_visits
# table will decrease
clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastore().reap_monthly_active_users, 1000 * 60 * 60
)
hs.get_datastore().reap_monthly_active_users()
@defer.inlineCallbacks
def generate_monthly_active_users():
count = 0
if hs.config.limit_usage_by_mau:
count = yield hs.get_datastore().get_monthly_active_count()
current_mau_gauge.set(float(count))
max_mau_gauge.set(float(hs.config.max_mau_value))
hs.get_datastore().initialise_reserved_users(
hs.config.mau_limits_reserved_threepids
)
generate_monthly_active_users()
if hs.config.limit_usage_by_mau:
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
# End of monthly active user settings
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
clock.looping_call(start_phone_stats_home, 3 * 60 * 60 * 1000)
# We need to defer this init for the cases that we daemonize
# otherwise the process ID we get is that of the non-daemon process
@@ -513,7 +553,7 @@ def run(hs):
# We wait 5 minutes to send the first set of stats as the server can
# be quite busy the first few minutes
clock.call_later(5 * 60, phone_stats_home)
clock.call_later(5 * 60, start_phone_stats_home)
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)


@@ -34,7 +34,7 @@ from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import TransactionStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
@@ -52,7 +52,7 @@ class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
TransactionStore,
SlavedTransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
):
@@ -155,11 +155,13 @@ def start(config_options):
database_engine = create_engine(config.database_config)
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ss = MediaRepositoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -148,8 +148,9 @@ class PusherReplicationHandler(ReplicationClientHandler):
self.pusher_pool = hs.get_pusherpool()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
@@ -162,11 +163,11 @@ class PusherReplicationHandler(ReplicationClientHandler):
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:


@@ -55,7 +55,6 @@ from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.presence import UserPresenceState
from synapse.storage.roommember import RoomMemberStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
@@ -81,9 +80,7 @@ class SynchrotronSlavedStore(
RoomStore,
BaseSlavedStore,
):
did_forget = (
RoomMemberStore.__dict__["did_forget"]
)
pass
UPDATE_SYNCING_USERS_MS = 10 * 1000
@@ -117,7 +114,10 @@ class SynchrotronPresence(object):
logger.info("Presence process_id is %r", self.process_id)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
@@ -214,10 +214,13 @@ class SynchrotronPresence(object):
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
if self.hs.config.use_presence:
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
else:
return set()
class SynchrotronTyping(object):
@@ -335,8 +338,9 @@ class SyncReplicationHandler(ReplicationClientHandler):
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):


@@ -25,6 +25,8 @@ import subprocess
import sys
import time
from six import iteritems
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
@@ -173,7 +175,7 @@ def main():
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in cache_factors.iteritems():
for cache_name, factor in iteritems(cache_factors):
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []


@@ -169,8 +169,9 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
yield super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
@@ -214,11 +215,13 @@ def start(config_options):
config.update_user_directory = True
tls_server_context_factory = context_factory.ServerContextFactory(config)
tls_client_options_factory = context_factory.ClientTLSOptionsFactory(config)
ps = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
tls_server_context_factory=tls_server_context_factory,
tls_client_options_factory=tls_client_options_factory,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,


@@ -168,7 +168,8 @@ def setup_logging(config, use_worker_options=False):
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3,
encoding='utf8'
)
def sighup(signum, stack):
@@ -193,9 +194,8 @@ def setup_logging(config, use_worker_options=False):
def sighup(signum, stack):
# it might be better to use a file watcher or something for this.
logging.info("Reloading log config from %s due to SIGHUP",
log_config)
load_log_config()
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()


@@ -49,6 +49,9 @@ class ServerConfig(Config):
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to enable user presence.
self.use_presence = config.get("use_presence", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
@@ -67,6 +70,31 @@ class ServerConfig(Config):
"block_non_admin_invites", False,
)
# Options to control access by tracking MAU
self.limit_usage_by_mau = config.get("limit_usage_by_mau", False)
self.max_mau_value = 0
if self.limit_usage_by_mau:
self.max_mau_value = config.get(
"max_mau_value", 0,
)
self.mau_limits_reserved_threepids = config.get(
"mau_limit_reserved_threepids", []
)
self.mau_trial_days = config.get(
"mau_trial_days", 0,
)
# Options to disable HS
self.hs_disabled = config.get("hs_disabled", False)
self.hs_disabled_message = config.get("hs_disabled_message", "")
self.hs_disabled_limit_type = config.get("hs_disabled_limit_type", "")
# Admin uri to direct users at should their instance become blocked
# due to resource constraints
self.admin_uri = config.get("admin_uri", None)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
federation_domain_whitelist = config.get(
@@ -209,6 +237,8 @@ class ServerConfig(Config):
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# This setting requires the affinity package to be installed!
#
# cpu_affinity: 0xFFFFFFFF
# Whether to serve a web client from the HTTP/HTTPS root resource.
@@ -228,6 +258,9 @@ class ServerConfig(Config):
# hard limit.
soft_file_limit: 0
# Set to false to disable presence tracking on this homeserver.
use_presence: true
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
@@ -319,6 +352,33 @@ class ServerConfig(Config):
# - port: 9000
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Homeserver blocking
#
# How to reach the server admin, used in ResourceLimitError
# admin_uri: 'mailto:admin@server.com'
#
# Global block config
#
# hs_disabled: False
# hs_disabled_message: 'Human readable reason for why the HS is blocked'
# hs_disabled_limit_type: 'error code(str), to help clients decode reason'
#
# Monthly Active User Blocking
#
# Enables monthly active user checking
# limit_usage_by_mau: False
# max_mau_value: 50
# mau_trial_days: 2
#
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
#
# mau_limit_reserved_threepids:
# - medium: 'email'
# address: 'reserved_user@example.com'
""" % locals()
def read_arguments(self, args):

View File

@@ -1,46 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
import sys
class StatsConfig(Config):
"""Stats Configuration
Configuration for the behaviour of synapse's stats engine
"""
def read_config(self, config):
self.stats_enable = False
self.stats_bucket_size = 86400
self.stats_retention = sys.maxint
stats_config = config.get("stats", None)
if stats_config:
self.stats_enable = stats_config.get("enable", self.stats_enable)
self.stats_bucket_size = stats_config.get(
"bucket_size", self.stats_bucket_size
)
self.stats_retention = stats_config.get("retention", self.stats_retention)
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# Stats configuration
#
# stats:
# enable: false
# bucket_size: 86400 # 1 day
# retention: 31536000 # 1 year
"""

View File

@@ -30,10 +30,10 @@ class VoipConfig(Config):
## Turn ##
# The public URIs of the TURN server to give to clients
turn_uris: []
#turn_uris: []
# The shared secret used to compute passwords for the TURN server
turn_shared_secret: "YOUR_SHARED_SECRET"
#turn_shared_secret: "YOUR_SHARED_SECRET"
# The Username and password if the TURN server needs them and
# does not use a token

View File

@@ -11,19 +11,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from zope.interface import implementer
from OpenSSL import SSL, crypto
from twisted.internet import ssl
from twisted.internet._sslverify import _defaultCurveName
from twisted.internet.interfaces import IOpenSSLClientConnectionCreator
from twisted.internet.ssl import CertificateOptions, ContextFactory
from twisted.python.failure import Failure
logger = logging.getLogger(__name__)
class ServerContextFactory(ssl.ContextFactory):
class ServerContextFactory(ContextFactory):
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
connections and to make connections to remote servers."""
connections."""
def __init__(self, config):
self._context = SSL.Context(SSL.SSLv23_METHOD)
@@ -48,3 +51,78 @@ class ServerContextFactory(ssl.ContextFactory):
def getContext(self):
return self._context
def _idnaBytes(text):
"""
Convert some text typed by a human into some ASCII bytes. This is a
copy of twisted.internet._idna._idnaBytes. For documentation, see the
twisted documentation.
"""
try:
import idna
except ImportError:
return text.encode("idna")
else:
return idna.encode(text)
def _tolerateErrors(wrapped):
"""
Wrap up an info_callback for pyOpenSSL so that if something goes wrong
the error is immediately logged and the connection is dropped if possible.
This is a copy of twisted.internet._sslverify._tolerateErrors. For
documentation, see the twisted documentation.
"""
def infoCallback(connection, where, ret):
try:
return wrapped(connection, where, ret)
except: # noqa: E722, taken from the twisted implementation
f = Failure()
logger.exception("Error during info_callback")
connection.get_app_data().failVerification(f)
return infoCallback
@implementer(IOpenSSLClientConnectionCreator)
class ClientTLSOptions(object):
"""
Client creator for TLS without certificate identity verification. This is a
copy of twisted.internet._sslverify.ClientTLSOptions with the identity
verification left out. For documentation, see the twisted documentation.
"""
def __init__(self, hostname, ctx):
self._ctx = ctx
self._hostname = hostname
self._hostnameBytes = _idnaBytes(hostname)
ctx.set_info_callback(
_tolerateErrors(self._identityVerifyingInfoCallback)
)
def clientConnectionForTLS(self, tlsProtocol):
context = self._ctx
connection = SSL.Connection(context, None)
connection.set_app_data(tlsProtocol)
return connection
def _identityVerifyingInfoCallback(self, connection, where, ret):
if where & SSL.SSL_CB_HANDSHAKE_START:
connection.set_tlsext_host_name(self._hostnameBytes)
class ClientTLSOptionsFactory(object):
"""Factory for Twisted ClientTLSOptions that are used to make connections
to remote servers for federation."""
def __init__(self, config):
# We don't use config options yet
pass
def get_options(self, host):
return ClientTLSOptions(
host.decode('utf-8'),
CertificateOptions(verify=False).getContext()
)
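The _idnaBytes helper above prefers the idna package and falls back to Python's built-in "idna" codec when it isn't installed; the same fallback as a standalone function:

def idna_bytes(text):
    try:
        import idna
    except ImportError:
        # Built-in codec: adequate for plain hostnames, less strict
        # than the idna package.
        return text.encode("idna")
    else:
        return idna.encode(text)

print(idna_bytes(u"b\xfccher.example"))  # b'xn--bcher-kva.example'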

View File

@@ -18,7 +18,9 @@ import logging
from canonicaljson import json
from twisted.internet import defer, reactor
from twisted.internet.error import ConnectError
from twisted.internet.protocol import Factory
from twisted.names.error import DomainError
from twisted.web.http import HTTPClient
from synapse.http.endpoint import matrix_federation_endpoint
@@ -30,14 +32,14 @@ KEY_API_V1 = b"/_matrix/key/v1/"
@defer.inlineCallbacks
def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
"""Fetch the keys for a remote server."""
factory = SynapseKeyClientFactory()
factory.path = path
factory.host = server_name
endpoint = matrix_federation_endpoint(
reactor, server_name, ssl_context_factory, timeout=30
reactor, server_name, tls_client_options_factory, timeout=30
)
for i in range(5):
@@ -47,12 +49,14 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
logger.warn("Error getting key for %r: %s", server_name, e)
if e.status.startswith("4"):
# Don't retry for 4xx responses.
raise IOError("Cannot get key for %r" % server_name)
except (ConnectError, DomainError) as e:
logger.warn("Error getting key for %r: %s", server_name, e)
except Exception as e:
logger.exception(e)
logger.exception("Error getting key for %r", server_name)
raise IOError("Cannot get key for %r" % server_name)

View File

@@ -512,7 +512,7 @@ class Keyring(object):
continue
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_server_context_factory,
server_name, self.hs.tls_client_options_factory,
path=(b"/_matrix/key/v2/server/%s" % (
urllib.quote(requested_key_id),
)).encode("ascii"),
@@ -655,7 +655,7 @@ class Keyring(object):
# Try to fetch the key from the remote server.
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_server_context_factory
server_name, self.hs.tls_client_options_factory
)
# Check the response.

View File

@@ -20,7 +20,7 @@ from signedjson.key import decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.constants import KNOWN_ROOM_VERSIONS, EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.types import UserID, get_domain_from_id
@@ -83,6 +83,14 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
403,
"Creation event's room_id domain does not match sender's"
)
room_version = event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
raise AuthError(
403,
"room appears to have unsupported version %s" % (
room_version,
))
# FIXME
logger.debug("Allowing! %s", event)
return
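The new auth check in miniature, with KNOWN_ROOM_VERSIONS stubbed out: the create event's content carries the room version, defaulting to "1" for rooms that predate versioning.

KNOWN_ROOM_VERSIONS = {"1"}  # stand-in for synapse.api.constants

def check_create_event(content):
    room_version = content.get("room_version", "1")
    if room_version not in KNOWN_ROOM_VERSIONS:
        raise ValueError(
            "room appears to have unsupported version %s" % (room_version,)
        )

check_create_event({})                     # ok: defaults to "1"
check_create_event({"room_version": "1"})  # ok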

View File

@@ -13,22 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from six import iteritems
from frozendict import frozendict
from twisted.internet import defer
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
class EventContext(object):
"""
Attributes:
current_state_ids (dict[(str, str), str]):
The current state map including the current event.
(type, state_key) -> event_id
prev_state_ids (dict[(str, str), str]):
The current state map excluding the current event.
(type, state_key) -> event_id
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
@@ -45,38 +41,77 @@ class EventContext(object):
prev_state_events (?): XXX: is this ever set to anything other than
the empty list?
_current_state_ids (dict[(str, str), str]|None):
The current state map including the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_prev_state_ids (dict[(str, str), str]|None):
The current state map excluding the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
_event_type (str): The type of the event the context is associated with.
Only set when state has not been fetched yet.
_event_state_key (str|None): The state_key of the event the context is
associated with. Only set when state has not been fetched yet.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
Only set when state has not been fetched yet.
"""
__slots__ = [
"current_state_ids",
"prev_state_ids",
"state_group",
"rejected",
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
"_current_state_ids",
"_prev_state_ids",
"_prev_state_id",
"_event_type",
"_event_state_key",
"_fetching_state_deferred",
]
def __init__(self):
# The current state including the current event
self.current_state_ids = None
# The current state excluding the current event
self.prev_state_ids = None
self.state_group = None
self.prev_state_events = []
self.rejected = False
self.app_service = None
@staticmethod
def with_state(state_group, current_state_ids, prev_state_ids,
prev_group=None, delta_ids=None):
context = EventContext()
# The current state including the current event
context._current_state_ids = current_state_ids
# The current state excluding the current event
context._prev_state_ids = prev_state_ids
context.state_group = state_group
context._prev_state_id = None
context._event_type = None
context._event_state_key = None
context._fetching_state_deferred = defer.succeed(None)
# A previously persisted state group and a delta between that
# and this state.
self.prev_group = None
self.delta_ids = None
context.prev_group = prev_group
context.delta_ids = delta_ids
self.prev_state_events = None
return context
self.app_service = None
def serialize(self, event):
@defer.inlineCallbacks
def serialize(self, event, store):
"""Converts self to a type that can be serialized as JSON, and then
deserialized by `deserialize`
@@ -92,11 +127,12 @@ class EventContext(object):
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_id = self.prev_state_ids.get((event.type, event.state_key))
prev_state_ids = yield self.get_prev_state_ids(store)
prev_state_id = prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
return {
defer.returnValue({
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
@@ -106,10 +142,9 @@ class EventContext(object):
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None
}
})
@staticmethod
@defer.inlineCallbacks
def deserialize(store, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
@@ -122,32 +157,115 @@ class EventContext(object):
EventContext
"""
context = EventContext()
context.state_group = input["state_group"]
context.rejected = input["rejected"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.prev_state_events = input["prev_state_events"]
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id = input["prev_state_id"]
event_type = input["event_type"]
event_state_key = input["event_state_key"]
context._prev_state_id = input["prev_state_id"]
context._event_type = input["event_type"]
context._event_state_key = input["event_state_key"]
context.current_state_ids = yield store.get_state_ids_for_group(
context.state_group,
)
if prev_state_id and event_state_key:
context.prev_state_ids = dict(context.current_state_ids)
context.prev_state_ids[(event_type, event_state_key)] = prev_state_id
else:
context.prev_state_ids = context.current_state_ids
context._current_state_ids = None
context._prev_state_ids = None
context._fetching_state_deferred = None
context.state_group = input["state_group"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.rejected = input["rejected"]
context.prev_state_events = input["prev_state_events"]
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
defer.returnValue(context)
return context
@defer.inlineCallbacks
def get_current_state_ids(self, store):
"""Gets the current state IDs
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store,
)
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._current_state_ids)
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
"""Gets the prev state IDs
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store,
)
yield make_deferred_yieldable(self._fetching_state_deferred)
defer.returnValue(self._prev_state_ids)
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.
Returns:
dict[(str, str), str]|None: Returns None if we haven't cached the
state or if state_group is None, which happens when the associated
event is an outlier.
"""
return self._current_state_ids
@defer.inlineCallbacks
def _fill_out_state(self, store):
"""Called to populate the _current_state_ids and _prev_state_ids
attributes by loading from the database.
"""
if self.state_group is None:
return
self._current_state_ids = yield store.get_state_ids_for_group(
self.state_group,
)
if self._prev_state_id and self._event_state_key is not None:
self._prev_state_ids = dict(self._current_state_ids)
key = (self._event_type, self._event_state_key)
self._prev_state_ids[key] = self._prev_state_id
else:
self._prev_state_ids = self._current_state_ids
@defer.inlineCallbacks
def update_state(self, state_group, prev_state_ids, current_state_ids,
prev_group, delta_ids):
"""Replace the state in the context
"""
# We need to make sure we wait for any ongoing fetching of state
# to complete so that the updated state doesn't get clobbered
if self._fetching_state_deferred:
yield make_deferred_yieldable(self._fetching_state_deferred)
self.state_group = state_group
self._prev_state_ids = prev_state_ids
self.prev_group = prev_group
self._current_state_ids = current_state_ids
self.delta_ids = delta_ids
# We need to ensure that we've marked the state as fetched
self._fetching_state_deferred = defer.succeed(None)
def _encode_state_dict(state_dict):
@@ -159,7 +277,7 @@ def _encode_state_dict(state_dict):
return [
(etype, state_key, v)
for (etype, state_key), v in state_dict.iteritems()
for (etype, state_key), v in iteritems(state_dict)
]
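The refactored EventContext memoises its state fetch behind a single Deferred, so concurrent callers share one database hit and update_state can wait for any in-flight fetch before replacing the state. A minimal sketch of that pattern (the store API is assumed):

from twisted.internet import defer

class LazyState(object):
    def __init__(self, state_group):
        self.state_group = state_group
        self._state = None
        self._fetching = None

    @defer.inlineCallbacks
    def get_state(self, store):
        if not self._fetching:
            # First caller starts the fetch; later callers share it.
            self._fetching = self._fill_out_state(store)
        yield self._fetching
        defer.returnValue(self._state)

    @defer.inlineCallbacks
    def _fill_out_state(self, store):
        if self.state_group is None:
            return
        self._state = yield store.get_state_ids_for_group(self.state_group)

    @defer.inlineCallbacks
    def update_state(self, state):
        # Wait for any ongoing fetch so it can't clobber the new state.
        if self._fetching:
            yield self._fetching
        self._state = state
        self._fetching = defer.succeed(None)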

View File

@@ -25,7 +25,7 @@ from prometheus_client import Counter
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.constants import KNOWN_ROOM_VERSIONS, EventTypes, Membership
from synapse.api.errors import (
CodeMessageException,
FederationDeniedError,
@@ -48,6 +48,13 @@ sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["t
PDU_RETRY_TIME_MS = 1 * 60 * 1000
class InvalidResponseError(RuntimeError):
"""Helper for _try_destination_list: indicates that the server returned a response
we couldn't parse
"""
pass
class FederationClient(FederationBase):
def __init__(self, hs):
super(FederationClient, self).__init__(hs)
@@ -458,8 +465,63 @@ class FederationClient(FederationBase):
defer.returnValue(signed_auth)
@defer.inlineCallbacks
def _try_destination_list(self, description, destinations, callback):
"""Try an operation on a series of servers, until it succeeds
Args:
description (unicode): description of the operation we're doing, for logging
destinations (Iterable[unicode]): list of server_names to try
callback (callable): Function to run for each server. Passed a single
argument: the server_name to try. May return a deferred.
If the callback raises a CodeMessageException with a 300/400 code,
attempts to perform the operation stop immediately and the exception is
reraised.
Otherwise, if the callback raises an Exception the error is logged and the
next server tried. Normally the stacktrace is logged but this is
suppressed if the exception is an InvalidResponseError.
Returns:
The [Deferred] result of callback, if it succeeds
Raises:
SynapseError if the chosen remote server returns a 300/400 code.
RuntimeError if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
try:
res = yield callback(destination)
defer.returnValue(res)
except InvalidResponseError as e:
logger.warn(
"Failed to %s via %s: %s",
description, destination, e,
)
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
else:
logger.warn(
"Failed to %s via %s: %i %s",
description, destination, e.code, e.message,
)
except Exception:
logger.warn(
"Failed to %s via %s",
description, destination, exc_info=1,
)
raise RuntimeError("Failed to %s via any server" % (description, ))
def make_membership_event(self, destinations, room_id, user_id, membership,
content={},):
content, params):
"""
Creates an m.room.member event, with context, without participating in the room.
@@ -475,13 +537,15 @@ class FederationClient(FederationBase):
user_id (str): The user whose membership is being evented.
membership (str): The "membership" property of the event. Must be
one of "join" or "leave".
content (object): Any additional data to put into the content field
content (dict): Any additional data to put into the content field
of the event.
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
Return:
Deferred: resolves to a tuple of (origin (str), event (object))
where origin is the remote homeserver which generated the event.
Fails with a ``CodeMessageException`` if the chosen remote server
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
@@ -492,50 +556,37 @@ class FederationClient(FederationBase):
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
for destination in destinations:
if destination == self.server_name:
continue
try:
ret = yield self.transport_layer.make_membership_event(
destination, room_id, user_id, membership
)
@defer.inlineCallbacks
def send_request(destination):
ret = yield self.transport_layer.make_membership_event(
destination, room_id, user_id, membership, params,
)
pdu_dict = ret["event"]
pdu_dict = ret.get("event", None)
if not isinstance(pdu_dict, dict):
raise InvalidResponseError("Bad 'event' field in response")
logger.debug("Got response to make_%s: %s", membership, pdu_dict)
logger.debug("Got response to make_%s: %s", membership, pdu_dict)
pdu_dict["content"].update(content)
pdu_dict["content"].update(content)
# The protoevent received over the JSON wire may not have all
# the required fields. Let's just gloss over that because
# there's some we never care about
if "prev_state" not in pdu_dict:
pdu_dict["prev_state"] = []
ev = builder.EventBuilder(pdu_dict)
defer.returnValue(
(destination, ev)
)
break
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
except Exception as e:
logger.warn(
"Failed to make_%s via %s: %s",
membership, destination, e.message
)
defer.returnValue(
(destination, ev)
)
raise RuntimeError("Failed to send to any server.")
return self._try_destination_list(
"make_" + membership, destinations, send_request,
)
@defer.inlineCallbacks
def send_join(self, destinations, pdu):
"""Sends a join event to one of a list of homeservers.
@@ -552,103 +603,111 @@ class FederationClient(FederationBase):
giving the server the event was sent to, ``state`` (?) and
``auth_chain``.
Fails with a ``CodeMessageException`` if the chosen remote server
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
try:
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
def check_authchain_validity(signed_auth_chain):
for e in signed_auth_chain:
if e.type == EventTypes.Create:
create_event = e
break
else:
raise InvalidResponseError(
"no %s in auth chain" % (EventTypes.Create,),
)
logger.debug("Got content: %s", content)
# the room version should be sane.
room_version = create_event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
# This shouldn't be possible, because the remote server should have
# rejected the join attempt during make_join.
raise InvalidResponseError(
"room appears to have unsupported version %s" % (
room_version,
))
state = [
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
auth_chain = [
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
logger.debug("Got content: %s", content)
pdus = {
p.event_id: p
for p in itertools.chain(state, auth_chain)
}
state = [
event_from_pdu_json(p, outlier=True)
for p in content.get("state", [])
]
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, list(pdus.values()),
outlier=True,
)
auth_chain = [
event_from_pdu_json(p, outlier=True)
for p in content.get("auth_chain", [])
]
valid_pdus_map = {
p.event_id: p
for p in valid_pdus
}
pdus = {
p.event_id: p
for p in itertools.chain(state, auth_chain)
}
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
signed_state = [
copy.copy(valid_pdus_map[p.event_id])
for p in state
if p.event_id in valid_pdus_map
]
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, list(pdus.values()),
outlier=True,
)
signed_auth = [
valid_pdus_map[p.event_id]
for p in auth_chain
if p.event_id in valid_pdus_map
]
valid_pdus_map = {
p.event_id: p
for p in valid_pdus
}
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
signed_state = [
copy.copy(valid_pdus_map[p.event_id])
for p in state
if p.event_id in valid_pdus_map
]
auth_chain.sort(key=lambda e: e.depth)
signed_auth = [
valid_pdus_map[p.event_id]
for p in auth_chain
if p.event_id in valid_pdus_map
]
defer.returnValue({
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
})
except CodeMessageException as e:
if not 500 <= e.code < 600:
raise
else:
logger.exception(
"Failed to send_join via %s: %s",
destination, e.message
)
except Exception as e:
logger.exception(
"Failed to send_join via %s: %s",
destination, e.message
)
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
raise RuntimeError("Failed to send to any server.")
check_authchain_validity(signed_auth)
defer.returnValue({
"state": signed_state,
"auth_chain": signed_auth,
"origin": destination,
})
return self._try_destination_list("send_join", destinations, send_request)
@defer.inlineCallbacks
def send_invite(self, destination, room_id, event_id, pdu):
time_now = self._clock.time_msec()
code, content = yield self.transport_layer.send_invite(
destination=destination,
room_id=room_id,
event_id=event_id,
content=pdu.get_pdu_json(time_now),
)
try:
code, content = yield self.transport_layer.send_invite(
destination=destination,
room_id=room_id,
event_id=event_id,
content=pdu.get_pdu_json(time_now),
)
except HttpResponseException as e:
if e.code == 403:
raise e.to_synapse_error()
raise
pdu_dict = content["event"]
@@ -663,7 +722,6 @@ class FederationClient(FederationBase):
defer.returnValue(pdu)
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
"""Sends a leave event to one of a list of homeservers.
@@ -680,35 +738,25 @@ class FederationClient(FederationBase):
Return:
Deferred: resolves to None.
Fails with a ``CodeMessageException`` if the chosen remote server
returns a non-200 code.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
"""
for destination in destinations:
if destination == self.server_name:
continue
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
try:
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
logger.debug("Got content: %s", content)
defer.returnValue(None)
logger.debug("Got content: %s", content)
defer.returnValue(None)
except CodeMessageException:
raise
except Exception as e:
logger.exception(
"Failed to send_leave via %s: %s",
destination, e.message
)
raise RuntimeError("Failed to send to any server.")
return self._try_destination_list("send_leave", destinations, send_request)
def get_public_rooms(self, destination, limit=None, since_token=None,
search_filter=None, include_all_networks=False,
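A trimmed, standalone version of the _try_destination_list helper that the membership calls above now share (the 300/400 short-circuit on HttpResponseException is elided here):

from twisted.internet import defer

@defer.inlineCallbacks
def try_destination_list(description, destinations, callback, server_name):
    for destination in destinations:
        if destination == server_name:
            continue  # never ask ourselves
        try:
            res = yield defer.maybeDeferred(callback, destination)
            defer.returnValue(res)
        except Exception as e:
            # Log and try the next server.
            print("Failed to %s via %s: %s" % (description, destination, e))
    raise RuntimeError("Failed to %s via any server" % (description,))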

View File

@@ -24,16 +24,27 @@ from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError, FederationError, NotFoundError, SynapseError
from synapse.api.errors import (
AuthError,
FederationError,
IncompatibleRoomVersionError,
NotFoundError,
SynapseError,
)
from synapse.crypto.event_signing import compute_event_signature
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.http.endpoint import parse_server_name
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
)
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
@@ -60,8 +71,8 @@ class FederationServer(FederationBase):
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self._server_linearizer = async.Linearizer("fed_server")
self._transaction_linearizer = async.Linearizer("fed_txn_handler")
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
self.transaction_actions = TransactionActions(self.store)
@@ -186,10 +197,14 @@ class FederationServer(FederationBase):
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.exception("Failed to handle PDU %s", event_id)
logger.error(
"Failed to handle PDU %s: %s",
event_id, f.getTraceback().rstrip(),
)
yield async.concurrently_execute(
yield concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(),
TRANSACTION_CONCURRENCY_LIMIT,
)
@@ -202,10 +217,6 @@ class FederationServer(FederationBase):
edu.content
)
pdu_failures = getattr(transaction, "pdu_failures", [])
for failure in pdu_failures:
logger.info("Got failure %r", failure)
response = {
"pdus": pdu_results,
}
@@ -322,12 +333,21 @@ class FederationServer(FederationBase):
defer.returnValue((200, resp))
@defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id):
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
if room_version not in supported_versions:
logger.warn("Room version %s not in %s", room_version, supported_versions)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = yield self.handler.on_make_join_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
defer.returnValue({
"event": pdu.get_pdu_json(time_now),
"room_version": room_version,
})
@defer.inlineCallbacks
def on_invite_request(self, origin, content):
@@ -425,6 +445,7 @@ class FederationServer(FederationBase):
ret = yield self.handler.on_query_auth(
origin,
event_id,
room_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
@@ -743,6 +764,8 @@ class FederationHandlerRegistry(object):
if edu_type in self.edu_handlers:
raise KeyError("Already have an EDU handler for %s" % (edu_type,))
logger.info("Registering federation EDU handler for %r", edu_type)
self.edu_handlers[edu_type] = handler
def register_query_handler(self, query_type, handler):
@@ -761,6 +784,8 @@ class FederationHandlerRegistry(object):
"Already have a Query handler for %s" % (query_type,)
)
logger.info("Registering federation query handler for %r", query_type)
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
@@ -783,3 +808,49 @@ class FederationHandlerRegistry(object):
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
"""A FederationHandlerRegistry for worker processes.
When receiving an EDU or query, it checks whether an appropriate handler
has been registered on the worker; if not, it calls out to the master
process.
"""
def __init__(self, hs):
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self.clock = hs.get_clock()
self._get_query_client = ReplicationGetQueryRestServlet.make_client(hs)
self._send_edu = ReplicationFederationSendEduRestServlet.make_client(hs)
super(ReplicationFederationHandlerRegistry, self).__init__()
def on_edu(self, edu_type, origin, content):
"""Overrides FederationHandlerRegistry
"""
handler = self.edu_handlers.get(edu_type)
if handler:
return super(ReplicationFederationHandlerRegistry, self).on_edu(
edu_type, origin, content,
)
return self._send_edu(
edu_type=edu_type,
origin=origin,
content=content,
)
def on_query(self, query_type, args):
"""Overrides FederationHandlerRegistry
"""
handler = self.query_handlers.get(query_type)
if handler:
return handler(args)
return self._get_query_client(
query_type=query_type,
args=args,
)
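The negotiation as a toy function (names are stand-ins): the joining server advertises which room versions it supports, and make_join is refused with an incompatible-version error when the room's version isn't among them.

class IncompatibleRoomVersion(Exception):
    def __init__(self, room_version):
        self.room_version = room_version

def on_make_join(room_version, supported_versions):
    if room_version not in supported_versions:
        raise IncompatibleRoomVersion(room_version)
    return {"room_version": room_version}

print(on_make_join("1", ["1", "2"]))  # ok
# on_make_join("3", ["1"]) would raise IncompatibleRoomVersion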

View File

@@ -62,8 +62,6 @@ class FederationRemoteSendQueue(object):
self.edus = SortedDict() # stream position -> Edu
self.failures = SortedDict() # stream position -> (destination, Failure)
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
@@ -79,7 +77,7 @@ class FederationRemoteSendQueue(object):
for queue_name in [
"presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
"edus", "failures", "device_messages", "pos_time",
"edus", "device_messages", "pos_time",
]:
register(queue_name, getattr(self, queue_name))
@@ -149,12 +147,6 @@ class FederationRemoteSendQueue(object):
for key in keys[:i]:
del self.edus[key]
# Delete things out of failure map
keys = self.failures.keys()
i = self.failures.bisect_left(position_to_delete)
for key in keys[:i]:
del self.failures[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = self.device_messages.bisect_left(position_to_delete)
@@ -204,13 +196,6 @@ class FederationRemoteSendQueue(object):
self.notifier.on_new_replication_data()
def send_failure(self, failure, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
self.failures[pos] = (destination, str(failure))
self.notifier.on_new_replication_data()
def send_device_messages(self, destination):
"""As per TransactionQueue"""
pos = self._next_pos()
@@ -285,17 +270,6 @@ class FederationRemoteSendQueue(object):
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed failures
i = self.failures.bisect_right(from_token)
j = self.failures.bisect_right(to_token) + 1
failures = self.failures.items()[i:j]
for (pos, (destination, failure)) in failures:
rows.append((pos, FailureRow(
destination=destination,
failure=failure,
)))
# Fetch changed device messages
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
@@ -417,34 +391,6 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", (
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class FailureRow(BaseFederationRow, namedtuple("FailureRow", (
"destination", # str
"failure",
))):
"""Streams failures to a remote server. Failures are issued when there was
something wrong with a transaction the remote sent us, e.g. it included
an event that was invalid.
"""
TypeId = "f"
@staticmethod
def from_data(data):
return FailureRow(
destination=data["destination"],
failure=data["failure"],
)
def to_data(self):
return {
"destination": self.destination,
"failure": self.failure,
}
def add_to_buffer(self, buff):
buff.failures.setdefault(self.destination, []).append(self.failure)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", (
"destination", # str
))):
@@ -471,7 +417,6 @@ TypeToRow = {
PresenceRow,
KeyedEduRow,
EduRow,
FailureRow,
DeviceRow,
)
}
@@ -481,7 +426,6 @@ ParsedFederationStreamData = namedtuple("ParsedFederationStreamData", (
"presence", # list(UserPresenceState)
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"failures", # dict of destination -> [failures]
"device_destinations", # set of destinations
))
@@ -503,7 +447,6 @@ def process_rows_for_federation(transaction_queue, rows):
presence=[],
keyed_edus={},
edus={},
failures={},
device_destinations=set(),
)
@@ -532,9 +475,5 @@ def process_rows_for_federation(transaction_queue, rows):
edu.destination, edu.edu_type, edu.content, key=None,
)
for destination, failure_list in iteritems(buff.failures):
for failure in failure_list:
transaction_queue.send_failure(destination, failure)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)

View File

@@ -26,11 +26,14 @@ from synapse.api.errors import FederationDeniedError, HttpResponseException
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
from synapse.metrics import (
LaterGauge,
event_processing_loop_counter,
event_processing_loop_room_count,
events_processed_counter,
sent_edus_counter,
sent_transactions_counter,
)
from synapse.util import PreserveLoggingContext, logcontext
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util import logcontext
from synapse.util.metrics import measure_func
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
@@ -55,6 +58,7 @@ class TransactionQueue(object):
"""
def __init__(self, hs):
self.hs = hs
self.server_name = hs.hostname
self.store = hs.get_datastore()
@@ -115,9 +119,6 @@ class TransactionQueue(object):
),
)
# destination -> list of tuple(failure, deferred)
self.pending_failures_by_dest = {}
# destination -> stream_id of last successfully sent to-device message.
# NB: may be a long or an int.
self.last_device_stream_id_by_dest = {}
@@ -165,10 +166,11 @@ class TransactionQueue(object):
if self._is_processing:
return
# fire off a processing loop in the background. It's likely it will
# outlast the current request, so run it in the sentinel logcontext.
with PreserveLoggingContext():
self._process_event_queue_loop()
# fire off a processing loop in the background
run_as_background_process(
"process_event_queue_for_federation",
self._process_event_queue_loop,
)
@defer.inlineCallbacks
def _process_event_queue_loop(self):
@@ -254,7 +256,13 @@ class TransactionQueue(object):
synapse.metrics.event_processing_last_ts.labels(
"federation_sender").set(ts)
events_processed_counter.inc(len(events))
event_processing_loop_room_count.labels(
"federation_sender"
).inc(len(events_by_room))
event_processing_loop_counter.labels("federation_sender").inc()
synapse.metrics.event_processing_positions.labels(
"federation_sender").set(next_token)
@@ -301,6 +309,9 @@ class TransactionQueue(object):
Args:
states (list(UserPresenceState))
"""
if not self.hs.config.use_presence:
# No-op if presence is disabled.
return
# First we queue up the new presence by user ID, so multiple presence
# updates in quick succession are correctly handled
@@ -380,19 +391,6 @@ class TransactionQueue(object):
self._attempt_new_transaction(destination)
def send_failure(self, failure, destination):
if destination == self.server_name or destination == "localhost":
return
if not self.can_send_to(destination):
return
self.pending_failures_by_dest.setdefault(
destination, []
).append(failure)
self._attempt_new_transaction(destination)
def send_device_messages(self, destination):
if destination == self.server_name or destination == "localhost":
return
@@ -432,14 +430,11 @@ class TransactionQueue(object):
logger.debug("TX [%s] Starting transaction loop", destination)
# Drop the logcontext before starting the transaction. It doesn't
# really make sense to log all the outbound transactions against
# whatever path led us to this point: that's pretty arbitrary really.
#
# (this also means we can fire off _perform_transaction without
# yielding)
with logcontext.PreserveLoggingContext():
self._transaction_transmission_loop(destination)
run_as_background_process(
"federation_transaction_transmission_loop",
self._transaction_transmission_loop,
destination,
)
@defer.inlineCallbacks
def _transaction_transmission_loop(self, destination):
@@ -470,7 +465,6 @@ class TransactionQueue(object):
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
pending_edus = self.pending_edus_by_dest.pop(destination, [])
pending_presence = self.pending_presence_by_dest.pop(destination, {})
pending_failures = self.pending_failures_by_dest.pop(destination, [])
pending_edus.extend(
self.pending_edus_keyed_by_dest.pop(destination, {}).values()
@@ -498,7 +492,7 @@ class TransactionQueue(object):
logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
destination, len(pending_pdus))
if not pending_pdus and not pending_edus and not pending_failures:
if not pending_pdus and not pending_edus:
logger.debug("TX [%s] Nothing to send", destination)
self.last_device_stream_id_by_dest[destination] = (
device_stream_id
@@ -508,7 +502,7 @@ class TransactionQueue(object):
# END CRITICAL SECTION
success = yield self._send_new_transaction(
destination, pending_pdus, pending_edus, pending_failures,
destination, pending_pdus, pending_edus,
)
if success:
sent_transactions_counter.inc()
@@ -585,14 +579,12 @@ class TransactionQueue(object):
@measure_func("_send_new_transaction")
@defer.inlineCallbacks
def _send_new_transaction(self, destination, pending_pdus, pending_edus,
pending_failures):
def _send_new_transaction(self, destination, pending_pdus, pending_edus):
# Sort based on the order field
pending_pdus.sort(key=lambda t: t[1])
pdus = [x[0] for x in pending_pdus]
edus = pending_edus
failures = [x.get_dict() for x in pending_failures]
success = True
@@ -602,11 +594,10 @@ class TransactionQueue(object):
logger.debug(
"TX [%s] {%s} Attempting new transaction"
" (pdus: %d, edus: %d, failures: %d)",
" (pdus: %d, edus: %d)",
destination, txn_id,
len(pdus),
len(edus),
len(failures)
)
logger.debug("TX [%s] Persisting transaction...", destination)
@@ -618,7 +609,6 @@ class TransactionQueue(object):
destination=destination,
pdus=pdus,
edus=edus,
pdu_failures=failures,
)
self._next_txn_id += 1
@@ -628,12 +618,11 @@ class TransactionQueue(object):
logger.debug("TX [%s] Persisted transaction", destination)
logger.info(
"TX [%s] {%s} Sending transaction [%s],"
" (PDUs: %d, EDUs: %d, failures: %d)",
" (PDUs: %d, EDUs: %d)",
destination, txn_id,
transaction.transaction_id,
len(pdus),
len(edus),
len(failures),
)
# Actually send the transaction
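Both loops above move from PreserveLoggingContext to run_as_background_process; the real helper also manages logcontexts and Prometheus metrics, but the fire-and-forget shape looks like this stand-in:

from twisted.internet import defer

def run_as_background_process(desc, func, *args):
    # Stand-in: start the work and hand back its Deferred without
    # making the caller wait on it.
    return defer.maybeDeferred(func, *args)

def transmission_loop(destination):
    print("draining transaction queue for %s" % (destination,))

run_as_background_process(
    "federation_transaction_transmission_loop",
    transmission_loop, "remote.example.com",
)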

View File

@@ -106,7 +106,7 @@ class TransportLayerClient(object):
dest (str)
room_id (str)
event_tuples (list)
limt (int)
limit (int)
Returns:
Deferred: Results in a dict received from the remote homeserver.
@@ -195,7 +195,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def make_membership_event(self, destination, room_id, user_id, membership):
def make_membership_event(self, destination, room_id, user_id, membership, params):
"""Asks a remote server to build and sign us a membership event
Note that this does not append any events to any graphs.
@@ -205,6 +205,8 @@ class TransportLayerClient(object):
room_id (str): room to join/leave
user_id (str): user to be joined/left
membership (str): one of join/leave
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
Returns:
Deferred: Succeeds when we get a 2xx HTTP response. The result
@@ -241,6 +243,7 @@ class TransportLayerClient(object):
content = yield self.client.get_json(
destination=destination,
path=path,
args=params,
retry_on_dns_fail=retry_on_dns_fail,
timeout=20000,
ignore_backoff=ignore_backoff,

View File

@@ -165,7 +165,7 @@ def _parse_auth_header(header_bytes):
param_dict = dict(kv.split("=") for kv in params)
def strip_quotes(value):
if value.startswith(b"\""):
if value.startswith("\""):
return value[1:-1]
else:
return value
@@ -190,6 +190,41 @@ def _parse_auth_header(header_bytes):
class BaseFederationServlet(object):
"""Abstract base class for federation servlet classes.
The servlet object should have a PATH attribute which takes the form of a regexp to
match against the request path (excluding the /federation/v1 prefix).
The servlet should also implement one or more of on_GET, on_POST, on_PUT, to match
the appropriate HTTP method. These methods have the signature:
on_<METHOD>(self, origin, content, query, **kwargs)
With arguments:
origin (unicode|None): The authenticated server_name of the calling server,
unless REQUIRE_AUTH is set to False and authentication failed.
content (unicode|None): decoded json body of the request. None if the
request was a GET.
query (dict[bytes, list[bytes]]): Query params from the request. url-decoded
(ie, '+' and '%xx' are decoded) but note that it is *not* utf8-decoded
yet.
**kwargs (dict[unicode, unicode]): the dict mapping keys to path
components as specified in the path match regexp.
Returns:
Deferred[(int, object)|None]: either (response code, response object) to
return a JSON response, or None if the request has already been handled.
Raises:
SynapseError: to return an error code
Exception: other exceptions will be caught, logged, and a 500 will be
returned.
"""
REQUIRE_AUTH = True
def __init__(self, handler, authenticator, ratelimiter, server_name):
@@ -204,6 +239,18 @@ class BaseFederationServlet(object):
@defer.inlineCallbacks
@functools.wraps(func)
def new_func(request, *args, **kwargs):
""" A callback which can be passed to HttpServer.RegisterPaths
Args:
request (twisted.web.http.Request):
*args: unused?
**kwargs (dict[unicode, unicode]): the dict mapping keys to path
components as specified in the path match regexp.
Returns:
Deferred[(int, object)|None]: (response code, response object) as returned
by the callback method. None if the request has already been handled.
"""
content = None
if request.method in ["PUT", "POST"]:
# TODO: Handle other method types? other content types?
@@ -214,10 +261,10 @@ class BaseFederationServlet(object):
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
logger.exception("authenticate_request failed")
logger.warn("authenticate_request failed: missing authentication")
raise
except Exception:
logger.exception("authenticate_request failed")
except Exception as e:
logger.warn("authenticate_request failed: %s", e)
raise
if origin:
@@ -283,11 +330,10 @@ class FederationSendServlet(BaseFederationServlet):
)
logger.info(
"Received txn %s from %s. (PDUs: %d, EDUs: %d, failures: %d)",
"Received txn %s from %s. (PDUs: %d, EDUs: %d)",
transaction_id, origin,
len(transaction_data.get("pdus", [])),
len(transaction_data.get("edus", [])),
len(transaction_data.get("failures", [])),
)
# We should ideally be getting this from the security layer.
@@ -385,9 +431,31 @@ class FederationMakeJoinServlet(BaseFederationServlet):
PATH = "/make_join/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
@defer.inlineCallbacks
def on_GET(self, origin, content, query, context, user_id):
def on_GET(self, origin, _content, query, context, user_id):
"""
Args:
origin (unicode): The authenticated server_name of the calling server
_content (None): (GETs don't have bodies)
query (dict[bytes, list[bytes]]): Query params from the request.
**kwargs (dict[unicode, unicode]): the dict mapping keys to path
components as specified in the path match regexp.
Returns:
Deferred[(int, object)|None]: either (response code, response object) to
return a JSON response, or None if the request has already been handled.
"""
versions = query.get(b'ver')
if versions is not None:
supported_versions = [v.decode("utf-8") for v in versions]
else:
supported_versions = ["1"]
content = yield self.handler.on_make_join_request(
origin, context, user_id,
supported_versions=supported_versions,
)
defer.returnValue((200, content))
@@ -404,10 +472,10 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
class FederationSendLeaveServlet(BaseFederationServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<txid>[^/]*)"
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
@defer.inlineCallbacks
def on_PUT(self, origin, content, query, room_id, txid):
def on_PUT(self, origin, content, query, room_id, event_id):
content = yield self.handler.on_send_leave_request(origin, content)
defer.returnValue((200, content))
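Query parameters reach the servlet as bytes, so each advertised version is utf-8 decoded, and an absent parameter falls back to room version "1"; in isolation:

query = {b'ver': [b'1', b'2']}  # as parsed from ?ver=1&ver=2

versions = query.get(b'ver')
if versions is not None:
    supported_versions = [v.decode("utf-8") for v in versions]
else:
    supported_versions = ["1"]

print(supported_versions)  # ['1', '2']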

View File

@@ -73,7 +73,6 @@ class Transaction(JsonEncodedObject):
"previous_ids",
"pdus",
"edus",
"pdu_failures",
]
internal_keys = [

View File

@@ -43,6 +43,7 @@ from signedjson.sign import sign_json
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id
from synapse.util.logcontext import run_in_background
@@ -129,7 +130,7 @@ class GroupAttestionRenewer(object):
self.attestations = hs.get_groups_attestation_signing()
self._renew_attestations_loop = self.clock.looping_call(
self._renew_attestations, 30 * 60 * 1000,
self._start_renew_attestations, 30 * 60 * 1000,
)
@defer.inlineCallbacks
@@ -151,6 +152,9 @@ class GroupAttestionRenewer(object):
defer.returnValue({})
def _start_renew_attestations(self):
return run_as_background_process("renew_attestations", self._renew_attestations)
@defer.inlineCallbacks
def _renew_attestations(self):
"""Called periodically to check if we need to update any of our attestations

View File

@@ -17,9 +17,7 @@ from .admin import AdminHandler
from .directory import DirectoryHandler
from .federation import FederationHandler
from .identity import IdentityHandler
from .message import MessageHandler
from .register import RegistrationHandler
from .room import RoomContextHandler
from .search import SearchHandler
@@ -44,10 +42,8 @@ class Handlers(object):
def __init__(self, hs):
self.registration_handler = RegistrationHandler(hs)
self.message_handler = MessageHandler(hs)
self.federation_handler = FederationHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.admin_handler = AdminHandler(hs)
self.identity_handler = IdentityHandler(hs)
self.search_handler = SearchHandler(hs)
self.room_context_handler = RoomContextHandler(hs)

View File

@@ -112,8 +112,9 @@ class BaseHandler(object):
guest_access = event.content.get("guest_access", "forbidden")
if guest_access != "can_join":
if context:
current_state_ids = yield context.get_current_state_ids(self.store)
current_state = yield self.store.get_events(
list(context.current_state_ids.values())
list(current_state_ids.values())
)
else:
current_state = yield self.state_handler.get_current_state(

View File

@@ -23,6 +23,11 @@ from twisted.internet import defer
import synapse
from synapse.api.constants import EventTypes
from synapse.metrics import (
event_processing_loop_counter,
event_processing_loop_room_count,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.metrics import Measure
@@ -106,7 +111,9 @@ class ApplicationServicesHandler(object):
yield self._check_user_exists(event.state_key)
if not self.started_scheduler:
self.scheduler.start().addErrback(log_failure)
def start_scheduler():
return self.scheduler.start().addErrback(log_failure)
run_as_background_process("as_scheduler", start_scheduler)
self.started_scheduler = True
# Fork off pushes to these services
@@ -133,6 +140,12 @@ class ApplicationServicesHandler(object):
events_processed_counter.inc(len(events))
event_processing_loop_room_count.labels(
"appservice_sender"
).inc(len(events_by_room))
event_processing_loop_counter.labels("appservice_sender").inc()
synapse.metrics.event_processing_lag.labels(
"appservice_sender").set(now - ts)
synapse.metrics.event_processing_last_ts.labels(

View File

@@ -15,6 +15,7 @@
# limitations under the License.
import logging
import unicodedata
import attr
import bcrypt
@@ -519,6 +520,7 @@ class AuthHandler(BaseHandler):
"""
logger.info("Logging in user %s on device %s", user_id, device_id)
access_token = yield self.issue_access_token(user_id, device_id)
yield self.auth.check_auth_blocking(user_id)
# the device *should* have been registered before we got here; however,
# it's possible we raced against a DELETE operation. The thing we
@@ -626,6 +628,7 @@ class AuthHandler(BaseHandler):
# special case to check for "password" for the check_password interface
# for the auth providers
password = login_submission.get("password")
if login_type == LoginType.PASSWORD:
if not self._password_enabled:
raise SynapseError(400, "Password login has been disabled.")
@@ -707,9 +710,10 @@ class AuthHandler(BaseHandler):
multiple inexact matches.
Args:
user_id (str): complete @user:id
user_id (unicode): complete @user:id
password (unicode): the provided password
Returns:
(str) the canonical_user_id, or None if unknown user / bad password
(unicode) the canonical_user_id, or None if unknown user / bad password
"""
lookupres = yield self._find_user_id_and_pwd_hash(user_id)
if not lookupres:
@@ -728,15 +732,18 @@ class AuthHandler(BaseHandler):
device_id)
defer.returnValue(access_token)
@defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token):
auth_api = self.hs.get_auth()
user_id = None
try:
macaroon = pymacaroons.Macaroon.deserialize(login_token)
user_id = auth_api.get_user_id_from_macaroon(macaroon)
auth_api.validate_macaroon(macaroon, "login", True, user_id)
return user_id
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
yield self.auth.check_auth_blocking(user_id)
defer.returnValue(user_id)
@defer.inlineCallbacks
def delete_access_token(self, access_token):
@@ -821,12 +828,26 @@ class AuthHandler(BaseHandler):
@defer.inlineCallbacks
def delete_threepid(self, user_id, medium, address):
"""Attempts to unbind the 3pid on the identity servers and deletes it
from the local database.
Args:
user_id (str)
medium (str)
address (str)
Returns:
Deferred[bool]: Returns True if successfully unbound the 3pid on
the identity server, False if identity server doesn't support the
unbind API.
"""
# 'Canonicalise' email addresses as per above
if medium == 'email':
address = address.lower()
identity_handler = self.hs.get_handlers().identity_handler
yield identity_handler.unbind_threepid(
result = yield identity_handler.try_unbind_threepid(
user_id,
{
'medium': medium,
@@ -834,10 +855,10 @@ class AuthHandler(BaseHandler):
},
)
ret = yield self.store.user_delete_threepid(
yield self.store.user_delete_threepid(
user_id, medium, address,
)
defer.returnValue(ret)
defer.returnValue(result)
def _save_session(self, session):
# TODO: Persistent storage
@@ -849,14 +870,19 @@ class AuthHandler(BaseHandler):
"""Computes a secure hash of password.
Args:
password (str): Password to hash.
password (unicode): Password to hash.
Returns:
Deferred(str): Hashed password.
Deferred(unicode): Hashed password.
"""
def _do_hash():
return bcrypt.hashpw(password.encode('utf8') + self.hs.config.password_pepper,
bcrypt.gensalt(self.bcrypt_rounds))
# Normalise the Unicode in the password
pw = unicodedata.normalize("NFKC", password)
return bcrypt.hashpw(
pw.encode('utf8') + self.hs.config.password_pepper.encode("utf8"),
bcrypt.gensalt(self.bcrypt_rounds),
).decode('ascii')
return make_deferred_yieldable(
threads.deferToThreadPool(
@@ -868,16 +894,19 @@ class AuthHandler(BaseHandler):
"""Validates that self.hash(password) == stored_hash.
Args:
password (str): Password to hash.
stored_hash (str): Expected hash value.
password (unicode): Password to hash.
stored_hash (unicode): Expected hash value.
Returns:
Deferred(bool): Whether self.hash(password) == stored_hash.
"""
def _do_validate_hash():
# Normalise the Unicode in the password
pw = unicodedata.normalize("NFKC", password)
return bcrypt.checkpw(
password.encode('utf8') + self.hs.config.password_pepper,
pw.encode('utf8') + self.hs.config.password_pepper.encode("utf8"),
stored_hash.encode('utf8')
)
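The new normalisation step matters because the same visible password can arrive as different Unicode code-point sequences (a precomposed "é" versus "e" plus a combining accent, say), and bcrypt compares raw bytes. A standalone sketch of the hash/verify pair, with the pepper value invented:

import unicodedata

import bcrypt

pepper = "pepper-from-config"  # illustrative

def hash_password(password, rounds=12):
    pw = unicodedata.normalize("NFKC", password)
    return bcrypt.hashpw(
        pw.encode("utf8") + pepper.encode("utf8"),
        bcrypt.gensalt(rounds),
    ).decode("ascii")

def validate_hash(password, stored_hash):
    pw = unicodedata.normalize("NFKC", password)
    return bcrypt.checkpw(
        pw.encode("utf8") + pepper.encode("utf8"),
        stored_hash.encode("utf8"),
    )

# Both spellings of "café" verify against either hash:
assert validate_hash(u"caf\u00e9", hash_password(u"cafe\u0301"))

The other fix in this hunk is encoding the pepper itself, so a non-empty password_pepper no longer mixes bytes and str under Python 3.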

synapse/handlers/deactivate_account.py View File

@@ -51,7 +51,8 @@ class DeactivateAccountHandler(BaseHandler):
erase_data (bool): whether to GDPR-erase the user's data
Returns:
Deferred
Deferred[bool]: True if identity server supports removing
threepids, otherwise False.
"""
# FIXME: Theoretically there is a race here wherein user resets
# password using threepid.
@@ -60,16 +61,22 @@ class DeactivateAccountHandler(BaseHandler):
# leave the user still active so they can try again.
# Ideally we would prevent password resets and then do this in the
# background thread.
# This will be set to false if the identity server doesn't support
# unbinding
identity_server_supports_unbinding = True
threepids = yield self.store.user_get_threepids(user_id)
for threepid in threepids:
try:
yield self._identity_handler.unbind_threepid(
result = yield self._identity_handler.try_unbind_threepid(
user_id,
{
'medium': threepid['medium'],
'address': threepid['address'],
},
)
identity_server_supports_unbinding &= result
except Exception:
# Do we want this to be a fatal error or should we carry on?
logger.exception("Failed to remove threepid from ID server")
@@ -103,6 +110,8 @@ class DeactivateAccountHandler(BaseHandler):
# parts users from rooms (if it isn't already running)
self._start_user_parting()
defer.returnValue(identity_server_supports_unbinding)
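Because the result is accumulated with &= across every third-party identifier, a single identity server lacking the unbind API flips the return value to False. A caller can surface that distinction to the client, for example (a usage sketch; the response field mirrors the client-server deactivation API):

from twisted.internet import defer


@defer.inlineCallbacks
def on_deactivate_request(deactivate_handler, user_id):
    all_unbound = yield deactivate_handler.deactivate_account(
        user_id, erase_data=False,
    )
    defer.returnValue((200, {
        "id_server_unbind_result": "success" if all_unbound else "no-support",
    }))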
def _start_user_parting(self):
"""
Start the process that goes through the table of users

synapse/handlers/device.py View File

@@ -23,7 +23,7 @@ from synapse.api.constants import EventTypes
from synapse.api.errors import FederationDeniedError
from synapse.types import RoomStreamToken, get_domain_from_id
from synapse.util import stringutils
from synapse.util.async import Linearizer
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.metrics import measure_func
from synapse.util.retryutils import NotRetryingDestination

synapse/handlers/events.py View File

@@ -19,10 +19,12 @@ import random
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.types import UserID
from synapse.util.logutils import log_function
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -129,11 +131,13 @@ class EventStreamHandler(BaseHandler):
class EventHandler(BaseHandler):
@defer.inlineCallbacks
def get_event(self, user, event_id):
def get_event(self, user, room_id, event_id):
"""Retrieve a single specified event.
Args:
user (synapse.types.UserID): The user requesting the event
room_id (str|None): The expected room id. We'll return None if the
event's room does not match.
event_id (str): The event ID to obtain.
Returns:
dict: An event, or None if there is no event matching this ID.
@@ -142,13 +146,26 @@ class EventHandler(BaseHandler):
AuthError if the user does not have the rights to inspect this
event.
"""
event = yield self.store.get_event(event_id)
event = yield self.store.get_event(event_id, check_room_id=room_id)
if not event:
defer.returnValue(None)
return
if hasattr(event, "room_id"):
yield self.auth.check_joined_room(event.room_id, user.to_string())
users = yield self.store.get_users_in_room(event.room_id)
is_peeking = user.to_string() not in users
filtered = yield filter_events_for_client(
self.store,
user.to_string(),
[event],
is_peeking=is_peeking
)
if not filtered:
raise AuthError(
403,
"You don't have permission to access that event."
)
defer.returnValue(event)

synapse/handlers/federation.py View File

@@ -21,8 +21,8 @@ import logging
import sys
import six
from six import iteritems
from six.moves import http_client
from six import iteritems, itervalues
from six.moves import http_client, zip
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
@@ -30,7 +30,12 @@ from unpaddedbase64 import decode_base64
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership, RejectedReason
from synapse.api.constants import (
KNOWN_ROOM_VERSIONS,
EventTypes,
Membership,
RejectedReason,
)
from synapse.api.errors import (
AuthError,
CodeMessageException,
@@ -43,17 +48,21 @@ from synapse.crypto.event_signing import (
add_hashes_and_signatures,
compute_event_signature,
)
from synapse.events.utils import prune_event
from synapse.events.validator import EventValidator
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationFederationSendEventsRestServlet,
)
from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
from synapse.state import resolve_events_with_factory
from synapse.types import UserID, get_domain_from_id
from synapse.util import logcontext, unwrapFirstError
from synapse.util.async import Linearizer
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room
from synapse.util.frozenutils import unfreeze
from synapse.util.logutils import log_function
from synapse.util.metrics import measure_func
from synapse.util.retryutils import NotRetryingDestination
from synapse.visibility import filter_events_for_server
from ._base import BaseHandler
@@ -77,7 +86,7 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.store = hs.get_datastore()
self.replication_layer = hs.get_federation_client()
self.federation_client = hs.get_federation_client()
self.state_handler = hs.get_state_handler()
self.server_name = hs.hostname
self.keyring = hs.get_keyring()
@@ -87,6 +96,18 @@ class FederationHandler(BaseHandler):
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self._send_events_to_master = (
ReplicationFederationSendEventsRestServlet.make_client(hs)
)
self._notify_user_membership_change = (
ReplicationUserJoinedLeftRoomRestServlet.make_client(hs)
)
self._clean_room_for_join_client = (
ReplicationCleanRoomRestServlet.make_client(hs)
)
# When joining a room we need to queue any events for that room up
self.room_queues = {}
@@ -256,7 +277,7 @@ class FederationHandler(BaseHandler):
# know about
for p in prevs - seen:
state, got_auth_chain = (
yield self.replication_layer.get_state_for_room(
yield self.federation_client.get_state_for_room(
origin, pdu.room_id, p
)
)
@@ -270,8 +291,9 @@ class FederationHandler(BaseHandler):
ev_ids, get_prev_content=False, check_redacted=False
)
room_version = yield self.store.get_room_version(pdu.room_id)
state_map = yield resolve_events_with_factory(
state_groups, {pdu.event_id: pdu}, fetch
room_version, state_groups, {pdu.event_id: pdu}, fetch
)
state = (yield self.store.get_events(state_map.values())).values()
@@ -339,7 +361,7 @@ class FederationHandler(BaseHandler):
#
# see https://github.com/matrix-org/synapse/pull/1744
missing_events = yield self.replication_layer.get_missing_events(
missing_events = yield self.federation_client.get_missing_events(
origin,
pdu.room_id,
earliest_events_ids=list(latest),
@@ -401,7 +423,7 @@ class FederationHandler(BaseHandler):
)
try:
event_stream_id, max_stream_id = yield self._persist_auth_tree(
yield self._persist_auth_tree(
origin, auth_chain, state, event
)
except AuthError as e:
@@ -445,7 +467,7 @@ class FederationHandler(BaseHandler):
yield self._handle_new_events(origin, event_infos)
try:
context, event_stream_id, max_stream_id = yield self._handle_new_event(
context = yield self._handle_new_event(
origin,
event,
state=state,
@@ -470,24 +492,16 @@ class FederationHandler(BaseHandler):
except StoreError:
logger.exception("Failed to store room.")
extra_users = []
if event.type == EventTypes.Member:
target_user_id = event.state_key
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
# Only fire user_joined_room if the user has actually
# joined the room. Don't bother if the user is just
# changing their profile info.
newly_joined = True
prev_state_id = context.prev_state_ids.get(
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_id = prev_state_ids.get(
(event.type, event.state_key)
)
if prev_state_id:
@@ -499,138 +513,7 @@ class FederationHandler(BaseHandler):
if newly_joined:
user = UserID.from_string(event.state_key)
yield user_joined_room(self.distributor, user, event.room_id)
@measure_func("_filter_events_for_server")
@defer.inlineCallbacks
def _filter_events_for_server(self, server_name, room_id, events):
"""Filter the given events for the given server, redacting those the
server can't see.
Assumes the server is currently in the room.
Returns
list[FrozenEvent]
"""
# First let's check to see if all the events have a history visibility
# of "shared" or "world_readable". If that's the case then we don't
# need to check membership (as we know the server is in the room).
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
(EventTypes.RoomHistoryVisibility, ""),
)
)
visibility_ids = set()
for sids in event_to_state_ids.itervalues():
hist = sids.get((EventTypes.RoomHistoryVisibility, ""))
if hist:
visibility_ids.add(hist)
# If we failed to find any history visibility events then the default
# is "shared" visiblity.
if not visibility_ids:
defer.returnValue(events)
event_map = yield self.store.get_events(visibility_ids)
all_open = all(
e.content.get("history_visibility") in (None, "shared", "world_readable")
for e in event_map.itervalues()
)
if all_open:
defer.returnValue(events)
# Ok, so we're dealing with events that have non-trivial visibility
# rules, so we need to also get the memberships of the room.
event_to_state_ids = yield self.store.get_state_ids_for_events(
frozenset(e.event_id for e in events),
types=(
(EventTypes.RoomHistoryVisibility, ""),
(EventTypes.Member, None),
)
)
# We only want to pull out member events that correspond to the
# server's domain.
def check_match(id):
try:
return server_name == get_domain_from_id(id)
except Exception:
return False
# Parses mapping `event_id -> (type, state_key) -> state event_id`
# to get all state ids that we're interested in.
event_map = yield self.store.get_events([
e_id
for key_to_eid in list(event_to_state_ids.values())
for key, e_id in key_to_eid.items()
if key[0] != EventTypes.Member or check_match(key[1])
])
event_to_state = {
e_id: {
key: event_map[inner_e_id]
for key, inner_e_id in key_to_eid.iteritems()
if inner_e_id in event_map
}
for e_id, key_to_eid in event_to_state_ids.iteritems()
}
erased_senders = yield self.store.are_users_erased(
e.sender for e in events,
)
def redact_disallowed(event, state):
# if the sender has been gdpr17ed, always return a redacted
# copy of the event.
if erased_senders[event.sender]:
logger.info(
"Sender of %s has been erased, redacting",
event.event_id,
)
return prune_event(event)
if not state:
return event
history = state.get((EventTypes.RoomHistoryVisibility, ''), None)
if history:
visibility = history.content.get("history_visibility", "shared")
if visibility in ["invited", "joined"]:
# We now loop through all state events looking for
# membership states for the requesting server to determine
# if the server is either in the room or has been invited
# into the room.
for ev in state.itervalues():
if ev.type != EventTypes.Member:
continue
try:
domain = get_domain_from_id(ev.state_key)
except Exception:
continue
if domain != server_name:
continue
memtype = ev.membership
if memtype == Membership.JOIN:
return event
elif memtype == Membership.INVITE:
if visibility == "invited":
return event
else:
return prune_event(event)
return event
defer.returnValue([
redact_disallowed(e, event_to_state[e.event_id])
for e in events
])
yield self.user_joined_room(user, event.room_id)
@log_function
@defer.inlineCallbacks
@@ -651,7 +534,7 @@ class FederationHandler(BaseHandler):
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
events = yield self.replication_layer.backfill(
events = yield self.federation_client.backfill(
dest,
room_id,
limit=limit,
@@ -699,7 +582,7 @@ class FederationHandler(BaseHandler):
state_events = {}
events_to_state = {}
for e_id in edges:
state, auth = yield self.replication_layer.get_state_for_room(
state, auth = yield self.federation_client.get_state_for_room(
destination=dest,
room_id=room_id,
event_id=e_id
@@ -741,7 +624,7 @@ class FederationHandler(BaseHandler):
results = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(
self.replication_layer.get_pdu,
self.federation_client.get_pdu,
[dest],
event_id,
outlier=True,
@@ -863,7 +746,7 @@ class FederationHandler(BaseHandler):
"""
joined_users = [
(state_key, int(event.depth))
for (e_type, state_key), event in state.iteritems()
for (e_type, state_key), event in iteritems(state)
if e_type == EventTypes.Member
and event.membership == Membership.JOIN
]
@@ -880,7 +763,7 @@ class FederationHandler(BaseHandler):
except Exception:
pass
return sorted(joined_domains.iteritems(), key=lambda d: d[1])
return sorted(joined_domains.items(), key=lambda d: d[1])
curr_domains = get_domains_from_state(curr_state)
@@ -943,7 +826,7 @@ class FederationHandler(BaseHandler):
tried_domains = set(likely_domains)
tried_domains.add(self.server_name)
event_ids = list(extremities.iterkeys())
event_ids = list(extremities.keys())
logger.debug("calling resolve_state_groups in _maybe_backfill")
resolve = logcontext.preserve_fn(
@@ -959,15 +842,15 @@ class FederationHandler(BaseHandler):
states = dict(zip(event_ids, [s.state for s in states]))
state_map = yield self.store.get_events(
[e_id for ids in states.itervalues() for e_id in ids.itervalues()],
[e_id for ids in itervalues(states) for e_id in itervalues(ids)],
get_prev_content=False
)
states = {
key: {
k: state_map[e_id]
for k, e_id in state_dict.iteritems()
for k, e_id in iteritems(state_dict)
if e_id in state_map
} for key, state_dict in states.iteritems()
} for key, state_dict in iteritems(states)
}
for e_id, _ in sorted_extremeties_tuple:
@@ -1022,7 +905,7 @@ class FederationHandler(BaseHandler):
Invites must be signed by the invitee's server before distribution.
"""
pdu = yield self.replication_layer.send_invite(
pdu = yield self.federation_client.send_invite(
destination=target_host,
room_id=event.room_id,
event_id=event.event_id,
@@ -1038,16 +921,6 @@ class FederationHandler(BaseHandler):
[auth_id for auth_id, _ in event.auth_events],
include_given=True
)
for event in auth:
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
defer.returnValue([e for e in auth])
@log_function
@@ -1072,6 +945,9 @@ class FederationHandler(BaseHandler):
joinee,
"join",
content,
params={
"ver": KNOWN_ROOM_VERSIONS,
},
)
# This shouldn't happen, because the RoomMemberHandler has a
@@ -1081,7 +957,7 @@ class FederationHandler(BaseHandler):
self.room_queues[room_id] = []
yield self.store.clean_room_for_join(room_id)
yield self._clean_room_for_join(room_id)
handled_events = set()
@@ -1094,7 +970,7 @@ class FederationHandler(BaseHandler):
target_hosts.insert(0, origin)
except ValueError:
pass
ret = yield self.replication_layer.send_join(target_hosts, event)
ret = yield self.federation_client.send_join(target_hosts, event)
origin = ret["origin"]
state = ret["state"]
@@ -1120,15 +996,10 @@ class FederationHandler(BaseHandler):
# FIXME
pass
event_stream_id, max_stream_id = yield self._persist_auth_tree(
yield self._persist_auth_tree(
origin, auth_chain, state, event
)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[joinee]
)
logger.debug("Finished joining %s to %s", joinee, room_id)
finally:
room_queue = self.room_queues[room_id]
@@ -1223,7 +1094,7 @@ class FederationHandler(BaseHandler):
# would introduce the danger of backwards-compatibility problems.
event.internal_metadata.send_on_behalf_of = origin
context, event_stream_id, max_stream_id = yield self._handle_new_event(
context = yield self._handle_new_event(
origin, event
)
@@ -1233,25 +1104,17 @@ class FederationHandler(BaseHandler):
event.signatures,
)
extra_users = []
if event.type == EventTypes.Member:
target_user_id = event.state_key
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
if event.type == EventTypes.Member:
if event.content["membership"] == Membership.JOIN:
user = UserID.from_string(event.state_key)
yield user_joined_room(self.distributor, user, event.room_id)
yield self.user_joined_room(user, event.room_id)
state_ids = list(context.prev_state_ids.values())
prev_state_ids = yield context.get_prev_state_ids(self.store)
state_ids = list(prev_state_ids.values())
auth_chain = yield self.store.get_auth_chain(state_ids)
state = yield self.store.get_events(list(context.prev_state_ids.values()))
state = yield self.store.get_events(list(prev_state_ids.values()))
defer.returnValue({
"state": list(state.values()),
@@ -1313,17 +1176,7 @@ class FederationHandler(BaseHandler):
)
context = yield self.state_handler.compute_event_context(event)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
)
target_user = UserID.from_string(event.state_key)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[target_user],
)
yield self.persist_events_and_notify([(event, context)])
defer.returnValue(event)
@@ -1348,35 +1201,26 @@ class FederationHandler(BaseHandler):
except ValueError:
pass
yield self.replication_layer.send_leave(
yield self.federation_client.send_leave(
target_hosts,
event
)
context = yield self.state_handler.compute_event_context(event)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
)
target_user = UserID.from_string(event.state_key)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=[target_user],
)
yield self.persist_events_and_notify([(event, context)])
defer.returnValue(event)
@defer.inlineCallbacks
def _make_and_verify_event(self, target_hosts, room_id, user_id, membership,
content={},):
origin, pdu = yield self.replication_layer.make_membership_event(
content={}, params=None):
origin, pdu = yield self.federation_client.make_membership_event(
target_hosts,
room_id,
user_id,
membership,
content,
params=params,
)
logger.debug("Got response to make_%s: %s", membership, pdu)
@@ -1416,7 +1260,7 @@ class FederationHandler(BaseHandler):
@log_function
def on_make_leave_request(self, room_id, user_id):
""" We've received a /make_leave/ request, so we create a partial
join event for the room and return that. We do *not* persist or
leave event for the room and return that. We do *not* persist or
process it until the other server has signed it and sent it back.
"""
builder = self.event_builder_factory.new({
@@ -1455,7 +1299,7 @@ class FederationHandler(BaseHandler):
event.internal_metadata.outlier = False
context, event_stream_id, max_stream_id = yield self._handle_new_event(
yield self._handle_new_event(
origin, event
)
@@ -1465,22 +1309,17 @@ class FederationHandler(BaseHandler):
event.signatures,
)
extra_users = []
if event.type == EventTypes.Member:
target_user_id = event.state_key
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id, extra_users=extra_users
)
defer.returnValue(None)
@defer.inlineCallbacks
def get_state_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
event = yield self.store.get_event(
event_id, allow_none=False, check_room_id=room_id,
)
state_groups = yield self.store.get_state_groups(
room_id, [event_id]
)
@@ -1491,8 +1330,7 @@ class FederationHandler(BaseHandler):
(e.type, e.state_key): e for e in state
}
event = yield self.store.get_event(event_id)
if event and event.is_state():
if event.is_state():
# Get previous state
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
@@ -1503,18 +1341,6 @@ class FederationHandler(BaseHandler):
del results[(event.type, event.state_key)]
res = list(results.values())
for event in res:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
if self.is_mine_id(event.event_id):
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
defer.returnValue(res)
else:
defer.returnValue([])
@@ -1523,6 +1349,10 @@ class FederationHandler(BaseHandler):
def get_state_ids_for_pdu(self, room_id, event_id):
"""Returns the state at the event. i.e. not including said event.
"""
event = yield self.store.get_event(
event_id, allow_none=False, check_room_id=room_id,
)
state_groups = yield self.store.get_state_groups_ids(
room_id, [event_id]
)
@@ -1531,8 +1361,7 @@ class FederationHandler(BaseHandler):
_, state = state_groups.items().pop()
results = state
event = yield self.store.get_event(event_id)
if event and event.is_state():
if event.is_state():
# Get previous state
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
@@ -1558,7 +1387,7 @@ class FederationHandler(BaseHandler):
limit
)
events = yield self._filter_events_for_server(origin, room_id, events)
events = yield filter_events_for_server(self.store, origin, events)
defer.returnValue(events)
@@ -1586,18 +1415,6 @@ class FederationHandler(BaseHandler):
)
if event:
if self.is_mine_id(event.event_id):
# FIXME: This is a temporary workaround where we occasionally
# return events slightly differently than when they were
# originally signed
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
in_room = yield self.auth.check_host_in_room(
event.room_id,
origin
@@ -1605,8 +1422,8 @@ class FederationHandler(BaseHandler):
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield self._filter_events_for_server(
origin, event.room_id, [event]
events = yield filter_events_for_server(
self.store, origin, [event],
)
event = events[0]
defer.returnValue(event)
@@ -1633,9 +1450,8 @@ class FederationHandler(BaseHandler):
event, context
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event,
context=context,
yield self.persist_events_and_notify(
[(event, context)],
backfilled=backfilled,
)
except: # noqa: E722, as we reraise the exception this is fine.
@@ -1648,15 +1464,7 @@ class FederationHandler(BaseHandler):
six.reraise(tp, value, tb)
if not backfilled:
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
logcontext.run_in_background(
self.pusher_pool.on_new_notifications,
event_stream_id, max_stream_id,
)
defer.returnValue((context, event_stream_id, max_stream_id))
defer.returnValue(context)
@defer.inlineCallbacks
def _handle_new_events(self, origin, event_infos, backfilled=False):
@@ -1664,6 +1472,8 @@ class FederationHandler(BaseHandler):
should not depend on one another, e.g. this should be used to persist
a bunch of outliers, but not a chunk of individual events that depend
on each other for state calculations.
Notifies about the events where appropriate.
"""
contexts = yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
@@ -1678,10 +1488,10 @@ class FederationHandler(BaseHandler):
], consumeErrors=True,
))
yield self.store.persist_events(
yield self.persist_events_and_notify(
[
(ev_info["event"], context)
for ev_info, context in itertools.izip(event_infos, contexts)
for ev_info, context in zip(event_infos, contexts)
],
backfilled=backfilled,
)
@@ -1690,7 +1500,8 @@ class FederationHandler(BaseHandler):
def _persist_auth_tree(self, origin, auth_events, state, event):
"""Checks the auth chain is valid (and passes auth checks) for the
state and event. Then persists the auth chain and state atomically.
Persists the event seperately.
Persists the event separately. Notifies about the persisted events
where appropriate.
Will attempt to fetch missing auth events.
@@ -1701,8 +1512,7 @@ class FederationHandler(BaseHandler):
event (Event)
Returns:
2-tuple of (event_stream_id, max_stream_id) from the persist_event
call for `event`
Deferred
"""
events_to_context = {}
for e in itertools.chain(auth_events, state):
@@ -1728,7 +1538,7 @@ class FederationHandler(BaseHandler):
missing_auth_events.add(e_id)
for e_id in missing_auth_events:
m_ev = yield self.replication_layer.get_pdu(
m_ev = yield self.federation_client.get_pdu(
[origin],
e_id,
outlier=True,
@@ -1766,7 +1576,7 @@ class FederationHandler(BaseHandler):
raise
events_to_context[e.event_id].rejected = RejectedReason.AUTH_ERROR
yield self.store.persist_events(
yield self.persist_events_and_notify(
[
(e, events_to_context[e.event_id])
for e in itertools.chain(auth_events, state)
@@ -1777,12 +1587,10 @@ class FederationHandler(BaseHandler):
event, old_state=state
)
event_stream_id, max_stream_id = yield self.store.persist_event(
event, new_event_context,
yield self.persist_events_and_notify(
[(event, new_event_context)],
)
defer.returnValue((event_stream_id, max_stream_id))
@defer.inlineCallbacks
def _prep_event(self, origin, event, state=None, auth_events=None):
"""
@@ -1801,8 +1609,9 @@ class FederationHandler(BaseHandler):
)
if not auth_events:
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
event, prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
@@ -1838,8 +1647,19 @@ class FederationHandler(BaseHandler):
defer.returnValue(context)
@defer.inlineCallbacks
def on_query_auth(self, origin, event_id, remote_auth_chain, rejects,
def on_query_auth(self, origin, event_id, room_id, remote_auth_chain, rejects,
missing):
in_room = yield self.auth.check_host_in_room(
room_id,
origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
event = yield self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
# Just go through and process each event in `remote_auth_chain`. We
# don't want to fall into the trap of `missing` being wrong.
for e in remote_auth_chain:
@@ -1849,7 +1669,6 @@ class FederationHandler(BaseHandler):
pass
# Now get the current auth_chain for the event.
event = yield self.store.get_event(event_id)
local_auth_chain = yield self.store.get_auth_chain(
[auth_id for auth_id, _ in event.auth_events],
include_given=True
@@ -1862,15 +1681,6 @@ class FederationHandler(BaseHandler):
local_auth_chain, remote_auth_chain
)
for event in ret["auth_chain"]:
event.signatures.update(
compute_event_signature(
event,
self.hs.hostname,
self.hs.config.signing_key[0]
)
)
logger.debug("on_query_auth returning: %s", ret)
defer.returnValue(ret)
@@ -1896,8 +1706,8 @@ class FederationHandler(BaseHandler):
min_depth=min_depth,
)
missing_events = yield self._filter_events_for_server(
origin, room_id, missing_events,
missing_events = yield filter_events_for_server(
self.store, origin, missing_events,
)
defer.returnValue(missing_events)
@@ -1946,7 +1756,7 @@ class FederationHandler(BaseHandler):
logger.info("Missing auth: %s", missing_auth)
# If we don't have all the auth events, we need to get them.
try:
remote_auth_chain = yield self.replication_layer.get_event_auth(
remote_auth_chain = yield self.federation_client.get_event_auth(
origin, event.room_id, event.event_id
)
@@ -2019,7 +1829,10 @@ class FederationHandler(BaseHandler):
(d.type, d.state_key): d for d in different_events if d
})
room_version = yield self.store.get_room_version(event.room_id)
new_state = self.state_handler.resolve_events(
room_version,
[list(local_view.values()), list(remote_view.values())],
event
)
@@ -2051,9 +1864,10 @@ class FederationHandler(BaseHandler):
break
if do_resolution:
prev_state_ids = yield context.get_prev_state_ids(self.store)
# 1. Get what we think is the auth chain.
auth_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids
event, prev_state_ids
)
local_auth_chain = yield self.store.get_auth_chain(
auth_ids, include_given=True
@@ -2061,7 +1875,7 @@ class FederationHandler(BaseHandler):
try:
# 2. Get remote difference.
result = yield self.replication_layer.query_auth(
result = yield self.federation_client.query_auth(
origin,
event.room_id,
event.event_id,
@@ -2143,21 +1957,34 @@ class FederationHandler(BaseHandler):
k: a.event_id for k, a in iteritems(auth_events)
if k != event_key
}
context.current_state_ids = dict(context.current_state_ids)
context.current_state_ids.update(state_updates)
if context.delta_ids is not None:
context.delta_ids = dict(context.delta_ids)
context.delta_ids.update(state_updates)
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
current_state_ids = yield context.get_current_state_ids(self.store)
current_state_ids = dict(current_state_ids)
current_state_ids.update(state_updates)
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = dict(prev_state_ids)
prev_state_ids.update({
k: a.event_id for k, a in iteritems(auth_events)
})
context.state_group = yield self.store.store_state_group(
# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = yield self.store.store_state_group(
event.event_id,
event.room_id,
prev_group=context.prev_group,
delta_ids=context.delta_ids,
current_state_ids=context.current_state_ids,
prev_group=prev_group,
delta_ids=state_updates,
current_state_ids=current_state_ids,
)
yield context.update_state(
state_group=state_group,
current_state_ids=current_state_ids,
prev_state_ids=prev_state_ids,
prev_group=prev_group,
delta_ids=state_updates,
)
@defer.inlineCallbacks
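The rewritten block no longer mutates the context's dicts in place; it derives the merged state, stores a new state group expressed as a delta against the previous group, and hands everything back through update_state. As a mental model, a state group is either a full snapshot or a pointer to a parent plus a delta, and resolution walks the chain; a toy illustration, not Synapse's actual storage schema:

def resolve_state_group(group_id, groups):
    # groups: id -> {"prev": parent id or None,
    #                "delta": {(type, state_key): event_id},
    #                "state": full map, present only on root groups}
    deltas = []
    g = groups[group_id]
    while g["prev"] is not None:
        deltas.append(g["delta"])
        g = groups[g["prev"]]
    state = dict(g["state"])  # the root group holds a complete snapshot
    for delta in reversed(deltas):  # apply oldest first
        state.update(delta)
    return state

groups = {
    1: {"prev": None, "state": {("m.room.create", ""): "$create"}},
    2: {"prev": 1, "delta": {("m.room.member", "@bob:hs"): "$join"}},
}
assert len(resolve_state_group(2, groups)) == 2

Storing deltas keeps state storage roughly proportional to what changed per event rather than to the full room state.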
@@ -2347,7 +2174,7 @@ class FederationHandler(BaseHandler):
yield member_handler.send_membership_event(None, event, context)
else:
destinations = set(x.split(":", 1)[-1] for x in (sender_user_id, room_id))
yield self.replication_layer.forward_third_party_invite(
yield self.federation_client.forward_third_party_invite(
destinations,
room_id,
event_dict,
@@ -2397,7 +2224,8 @@ class FederationHandler(BaseHandler):
event.content["third_party_invite"]["signed"]["token"]
)
original_invite = None
original_invite_id = context.prev_state_ids.get(key)
prev_state_ids = yield context.get_prev_state_ids(self.store)
original_invite_id = prev_state_ids.get(key)
if original_invite_id:
original_invite = yield self.store.get_event(
original_invite_id, allow_none=True
@@ -2439,7 +2267,8 @@ class FederationHandler(BaseHandler):
signed = event.content["third_party_invite"]["signed"]
token = signed["token"]
invite_event_id = context.prev_state_ids.get(
prev_state_ids = yield context.get_prev_state_ids(self.store)
invite_event_id = prev_state_ids.get(
(EventTypes.ThirdPartyInvite, token,)
)
@@ -2489,7 +2318,7 @@ class FederationHandler(BaseHandler):
for revocation.
"""
try:
response = yield self.hs.get_simple_http_client().get_json(
response = yield self.http_client.get_json(
url,
{"public_key": public_key}
)
@@ -2500,3 +2329,91 @@ class FederationHandler(BaseHandler):
)
if "valid" not in response or not response["valid"]:
raise AuthError(403, "Third party certificate was invalid")
@defer.inlineCallbacks
def persist_events_and_notify(self, event_and_contexts, backfilled=False):
"""Persists events and tells the notifier/pushers about them, if
necessary.
Args:
event_and_contexts(list[tuple[FrozenEvent, EventContext]])
backfilled (bool): Whether these events are a result of
backfilling or not
Returns:
Deferred
"""
if self.config.worker_app:
yield self._send_events_to_master(
store=self.store,
event_and_contexts=event_and_contexts,
backfilled=backfilled
)
else:
max_stream_id = yield self.store.persist_events(
event_and_contexts,
backfilled=backfilled,
)
if not backfilled: # Never notify for backfilled events
for event, _ in event_and_contexts:
self._notify_persisted_event(event, max_stream_id)
def _notify_persisted_event(self, event, max_stream_id):
"""Checks to see if notifier/pushers should be notified about the
event or not.
Args:
event (FrozenEvent)
max_stream_id (int): The max_stream_id returned by persist_events
"""
extra_users = []
if event.type == EventTypes.Member:
target_user_id = event.state_key
# We notify for memberships if it's an invite for one of our
# users
if event.internal_metadata.is_outlier():
if event.membership != Membership.INVITE:
if not self.is_mine_id(target_user_id):
return
target_user = UserID.from_string(target_user_id)
extra_users.append(target_user)
elif event.internal_metadata.is_outlier():
return
event_stream_id = event.internal_metadata.stream_ordering
self.notifier.on_new_room_event(
event, event_stream_id, max_stream_id,
extra_users=extra_users
)
self.pusher_pool.on_new_notifications(
event_stream_id, max_stream_id,
)
def _clean_room_for_join(self, room_id):
"""Called to clean up any data in DB for a given room, ready for the
server to join the room.
Args:
room_id (str)
"""
if self.config.worker_app:
return self._clean_room_for_join_client(room_id)
else:
return self.store.clean_room_for_join(room_id)
def user_joined_room(self, user, room_id):
"""Called when a new user has joined the room
"""
if self.config.worker_app:
return self._notify_user_membership_change(
room_id=room_id,
user_id=user.to_string(),
change="joined",
)
else:
return user_joined_room(self.distributor, user, room_id)

synapse/handlers/identity.py View File

@@ -26,7 +26,7 @@ from twisted.internet import defer
from synapse.api.errors import (
CodeMessageException,
Codes,
MatrixCodeMessageException,
HttpResponseException,
SynapseError,
)
@@ -85,7 +85,6 @@ class IdentityHandler(BaseHandler):
)
defer.returnValue(None)
data = {}
try:
data = yield self.http_client.get_json(
"https://%s%s" % (
@@ -94,11 +93,9 @@ class IdentityHandler(BaseHandler):
),
{'sid': creds['sid'], 'client_secret': client_secret}
)
except MatrixCodeMessageException as e:
except HttpResponseException as e:
logger.info("getValidated3pid failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
data = json.loads(e.msg)
raise e.to_synapse_error()
if 'medium' in data:
defer.returnValue(data)
@@ -136,19 +133,23 @@ class IdentityHandler(BaseHandler):
)
logger.debug("bound threepid %r to %s", creds, mxid)
except CodeMessageException as e:
data = json.loads(e.msg)
data = json.loads(e.msg) # XXX WAT?
defer.returnValue(data)
@defer.inlineCallbacks
def unbind_threepid(self, mxid, threepid):
"""
Removes a binding from an identity server
def try_unbind_threepid(self, mxid, threepid):
"""Removes a binding from an identity server
Args:
mxid (str): Matrix user ID of binding to be removed
threepid (dict): Dict with medium & address of binding to be removed
Raises:
SynapseError: If we failed to contact the identity server
Returns:
Deferred[bool]: True on success, otherwise False
Deferred[bool]: True on success, otherwise False if the identity
server doesn't support unbinding
"""
logger.debug("unbinding threepid %r from %s", threepid, mxid)
if not self.trusted_id_servers:
@@ -178,11 +179,21 @@ class IdentityHandler(BaseHandler):
content=content,
destination_is=id_server,
)
yield self.http_client.post_json_get_json(
url,
content,
headers,
)
try:
yield self.http_client.post_json_get_json(
url,
content,
headers,
)
except HttpResponseException as e:
if e.code in (400, 404, 501,):
# The remote server probably doesn't support unbinding (yet)
logger.warn("Received %d response while unbinding threepid", e.code)
defer.returnValue(False)
else:
logger.error("Failed to unbind threepid on identity server: %s", e)
raise SynapseError(502, "Failed to contact identity server")
defer.returnValue(True)
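The status-code triage above, treating 400, 404 and 501 as "the remote end predates this API" rather than as a hard failure, is the standard way to stay compatible while a federation of servers upgrades at different speeds. Stripped of Synapse's signed-request plumbing, the logic reduces to the following sketch (plain requests used purely for illustration):

import requests

def try_unbind_sketch(url, body, headers=None):
    resp = requests.post(url, json=body, headers=headers or {})
    if resp.status_code in (400, 404, 501):
        # Remote identity server probably doesn't implement unbind yet.
        return False
    if resp.status_code >= 400:
        raise RuntimeError("identity server returned %d" % resp.status_code)
    return True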
@defer.inlineCallbacks
@@ -209,12 +220,9 @@ class IdentityHandler(BaseHandler):
params
)
defer.returnValue(data)
except MatrixCodeMessageException as e:
logger.info("Proxied requestToken failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
except HttpResponseException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e
raise e.to_synapse_error()
@defer.inlineCallbacks
def requestMsisdnToken(
@@ -244,9 +252,6 @@ class IdentityHandler(BaseHandler):
params
)
defer.returnValue(data)
except MatrixCodeMessageException as e:
logger.info("Proxied requestToken failed with Matrix error: %r", e)
raise SynapseError(e.code, e.msg, e.errcode)
except CodeMessageException as e:
except HttpResponseException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e
raise e.to_synapse_error()

synapse/handlers/initial_sync.py View File

@@ -25,7 +25,7 @@ from synapse.handlers.presence import format_user_presence_state
from synapse.streams.config import PaginationConfig
from synapse.types import StreamToken, UserID
from synapse.util import unwrapFirstError
from synapse.util.async import concurrently_execute
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.visibility import filter_events_for_client
@@ -148,13 +148,15 @@ class InitialSyncHandler(BaseHandler):
try:
if event.membership == Membership.JOIN:
room_end_token = now_token.room_key
deferred_room_state = self.state_handler.get_current_state(
event.room_id
deferred_room_state = run_in_background(
self.state_handler.get_current_state,
event.room_id,
)
elif event.membership == Membership.LEAVE:
room_end_token = "s%d" % (event.stream_ordering,)
deferred_room_state = self.store.get_state_for_events(
[event.event_id], None
deferred_room_state = run_in_background(
self.store.get_state_for_events,
[event.event_id], None,
)
deferred_room_state.addCallback(
lambda states: states[event.event_id]
@@ -370,6 +372,10 @@ class InitialSyncHandler(BaseHandler):
@defer.inlineCallbacks
def get_presence():
# If presence is disabled, return an empty list
if not self.hs.config.use_presence:
defer.returnValue([])
states = yield presence_handler.get_states(
[m.user_id for m in room_members],
as_event=True,
@@ -387,19 +393,21 @@ class InitialSyncHandler(BaseHandler):
receipts = []
defer.returnValue(receipts)
presence, receipts, (messages, token) = yield defer.gatherResults(
[
run_in_background(get_presence),
run_in_background(get_receipts),
run_in_background(
self.store.get_recent_events_for_room,
room_id,
limit=limit,
end_token=now_token.room_key,
)
],
consumeErrors=True,
).addErrback(unwrapFirstError)
presence, receipts, (messages, token) = yield make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(get_presence),
run_in_background(get_receipts),
run_in_background(
self.store.get_recent_events_for_room,
room_id,
limit=limit,
end_token=now_token.room_key,
)
],
consumeErrors=True,
).addErrback(unwrapFirstError),
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking,
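The reworked gather above serves two purposes: run_in_background starts each fetch without blocking, and make_deferred_yieldable re-attaches Synapse's log context when the combined deferred is yielded. Minus the logcontext bookkeeping, the underlying Twisted machinery is just defer.gatherResults (stub fetchers for illustration):

from twisted.internet import defer

def get_presence():
    return defer.succeed([])             # stub

def get_receipts():
    return defer.succeed([])             # stub

def get_messages():
    return defer.succeed(([], "token"))  # stub

@defer.inlineCallbacks
def load_room_snapshot():
    presence, receipts, (messages, token) = yield defer.gatherResults(
        [
            defer.maybeDeferred(get_presence),
            defer.maybeDeferred(get_receipts),
            defer.maybeDeferred(get_messages),
        ],
        consumeErrors=True,  # stop failed siblings logging unhandled errors
    )
    defer.returnValue((presence, receipts, messages, token))

On failure, gatherResults wraps the first error in a FirstError, which is what the unwrapFirstError errback in the hunk unpacks.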

synapse/handlers/message.py View File

@@ -23,21 +23,25 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from twisted.python.failure import Failure
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import AuthError, Codes, ConsentNotGivenError, SynapseError
from synapse.api.errors import (
AuthError,
Codes,
ConsentNotGivenError,
NotFoundError,
SynapseError,
)
from synapse.api.urls import ConsentURIBuilder
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
from synapse.replication.http.send_event import send_event_to_master
from synapse.types import RoomAlias, RoomStreamToken, UserID
from synapse.util.async import Limiter, ReadWriteLock
from synapse.replication.http.send_event import ReplicationSendEventRestServlet
from synapse.types import RoomAlias, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.logcontext import run_in_background
from synapse.util.metrics import measure_func
from synapse.util.stringutils import random_string
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -45,234 +49,15 @@ from ._base import BaseHandler
logger = logging.getLogger(__name__)
class PurgeStatus(object):
"""Object tracking the status of a purge request
This class contains information on the progress of a purge request, for
return by get_purge_status.
Attributes:
status (int): Tracks whether this request has completed. One of
STATUS_{ACTIVE,COMPLETE,FAILED}
class MessageHandler(object):
"""Contains some read only APIs to get state about a room
"""
STATUS_ACTIVE = 0
STATUS_COMPLETE = 1
STATUS_FAILED = 2
STATUS_TEXT = {
STATUS_ACTIVE: "active",
STATUS_COMPLETE: "complete",
STATUS_FAILED: "failed",
}
def __init__(self):
self.status = PurgeStatus.STATUS_ACTIVE
def asdict(self):
return {
"status": PurgeStatus.STATUS_TEXT[self.status]
}
class MessageHandler(BaseHandler):
def __init__(self, hs):
super(MessageHandler, self).__init__(hs)
self.hs = hs
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
self.clock = hs.get_clock()
self.pagination_lock = ReadWriteLock()
self._purges_in_progress_by_room = set()
# map from purge id to PurgeStatus
self._purges_by_id = {}
def start_purge_history(self, room_id, token,
delete_local_events=False):
"""Start off a history purge on a room.
Args:
room_id (str): The room to purge from
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
Returns:
str: unique ID for this purge transaction.
"""
if room_id in self._purges_in_progress_by_room:
raise SynapseError(
400,
"History purge already in progress for %s" % (room_id, ),
)
purge_id = random_string(16)
# we log the purge_id here so that it can be tied back to the
# request id in the log lines.
logger.info("[purge] starting purge_id %s", purge_id)
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, token, delete_local_events,
)
return purge_id
@defer.inlineCallbacks
def _purge_history(self, purge_id, room_id, token,
delete_local_events):
"""Carry out a history purge on a room.
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
Returns:
Deferred
"""
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, token, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
finally:
self._purges_in_progress_by_room.discard(room_id)
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
self.hs.get_reactor().callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
Args:
purge_id (str): purge_id returned by start_purge_history
Returns:
PurgeStatus|None
"""
return self._purges_by_id.get(purge_id)
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
as_client_event=True, event_filter=None):
"""Get messages in a room.
Args:
requester (Requester): The user requesting messages.
room_id (str): The room they want messages from.
pagin_config (synapse.api.streams.PaginationConfig): The pagination
config rules to apply, if any.
as_client_event (bool): True to get events in client-server format.
event_filter (Filter): Filter to apply to results or None
Returns:
dict: Pagination API results
"""
user_id = requester.user.to_string()
if pagin_config.from_token:
room_token = pagin_config.from_token.room_key
else:
pagin_config.from_token = (
yield self.hs.get_event_sources().get_current_token_for_room(
room_id=room_id
)
)
room_token = pagin_config.from_token.room_key
room_token = RoomStreamToken.parse(room_token)
pagin_config.from_token = pagin_config.from_token.copy_and_replace(
"room_key", str(room_token)
)
source_config = pagin_config.get_source_config("room")
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token(
room_id, room_token.stream
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
)
events, next_key = yield self.store.paginate_room_events(
room_id=room_id,
from_key=source_config.from_key,
to_key=source_config.to_key,
direction=source_config.direction,
limit=source_config.limit,
event_filter=event_filter,
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if not events:
defer.returnValue({
"chunk": [],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
})
if event_filter:
events = event_filter.filter(events)
events = yield filter_events_for_client(
self.store,
user_id,
events,
is_peeking=(member_event_id is None),
)
time_now = self.clock.time_msec()
chunk = {
"chunk": [
serialize_event(e, time_now, as_client_event)
for e in events
],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
}
defer.returnValue(chunk)
self.state = hs.get_state_handler()
self.store = hs.get_datastore()
@defer.inlineCallbacks
def get_room_data(self, user_id=None, room_id=None,
@@ -286,12 +71,12 @@ class MessageHandler(BaseHandler):
Raises:
SynapseError if something went wrong.
"""
membership, membership_event_id = yield self._check_in_room_or_world_readable(
membership, membership_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
if membership == Membership.JOIN:
data = yield self.state_handler.get_current_state(
data = yield self.state.get_current_state(
room_id, event_type, state_key
)
elif membership == Membership.LEAVE:
@@ -304,53 +89,85 @@ class MessageHandler(BaseHandler):
defer.returnValue(data)
@defer.inlineCallbacks
def _check_in_room_or_world_readable(self, room_id, user_id):
try:
# check_user_was_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
defer.returnValue((member_event.membership, member_event.event_id))
return
except AuthError:
visibility = yield self.state_handler.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility and
visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
)
@defer.inlineCallbacks
def get_state_events(self, user_id, room_id, is_guest=False):
def get_state_events(
self, user_id, room_id, types=None, filtered_types=None,
at_token=None, is_guest=False,
):
"""Retrieve all state events for a given room. If the user is
joined to the room then return the current state. If the user has
left the room return the state events from when they left.
left the room return the state events from when they left. If an explicit
'at' parameter is passed, return the state events as of that event, if
visible.
Args:
user_id(str): The user requesting state events.
room_id(str): The room ID to get all state events from.
types(list[(str, str|None)]|None): List of (type, state_key) tuples
which are used to filter the state fetched. If `state_key` is None,
all events are returned of the given type.
May be None, which matches any key.
filtered_types(list[str]|None): Only apply filtering via `types` to this
list of event types. Other types of events are returned unfiltered.
If None, `types` filtering is applied to all events.
at_token(StreamToken|None): the stream token of the point at which we are
requesting the state. If the user is not allowed to view the state as of that
stream token, we raise a 403 SynapseError. If None, returns the current
state based on the current_state_events table.
is_guest(bool): whether this user is a guest
Returns:
A list of dicts representing state events. [{}, {}, {}]
"""
membership, membership_event_id = yield self._check_in_room_or_world_readable(
room_id, user_id
)
Raises:
NotFoundError (404) if the at token does not yield an event
if membership == Membership.JOIN:
room_state = yield self.state_handler.get_current_state(room_id)
elif membership == Membership.LEAVE:
room_state = yield self.store.get_state_for_events(
[membership_event_id], None
AuthError (403) if the user doesn't have permission to view
members of this room.
"""
if at_token:
# FIXME this claims to get the state at a stream position, but
# get_recent_events_for_room operates by topo ordering. This therefore
# does not reliably give you the state at the given stream position.
# (https://github.com/matrix-org/synapse/issues/3305)
last_events, _ = yield self.store.get_recent_events_for_room(
room_id, end_token=at_token.room_key, limit=1,
)
room_state = room_state[membership_event_id]
if not last_events:
raise NotFoundError("Can't find event for token %s" % (at_token, ))
visible_events = yield filter_events_for_client(
self.store, user_id, last_events,
)
event = last_events[0]
if visible_events:
room_state = yield self.store.get_state_for_events(
[event.event_id], types, filtered_types=filtered_types,
)
room_state = room_state[event.event_id]
else:
raise AuthError(
403,
"User %s not allowed to view events in room %s at token %s" % (
user_id, room_id, at_token,
)
)
else:
membership, membership_event_id = (
yield self.auth.check_in_room_or_world_readable(
room_id, user_id,
)
)
if membership == Membership.JOIN:
state_ids = yield self.store.get_filtered_current_state_ids(
room_id, types, filtered_types=filtered_types,
)
room_state = yield self.store.get_events(state_ids.values())
elif membership == Membership.LEAVE:
room_state = yield self.store.get_state_for_events(
[membership_event_id], types, filtered_types=filtered_types,
)
room_state = room_state[membership_event_id]
now = self.clock.time_msec()
defer.returnValue(
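The types/filtered_types semantics documented above are easy to get backwards, so here is a standalone helper that mirrors the described behaviour on plain tuples (an illustration of the contract, not the storage-layer implementation the hunk delegates to):

def filter_state(events, types, filtered_types=None):
    # events: (event_type, state_key) pairs; types: wanted (type, state_key)
    # pairs where a None state_key matches any key; filtered_types: only
    # these event types are subject to `types`, everything else passes.
    if types is None:
        return list(events)
    out = []
    for ev_type, state_key in events:
        if filtered_types is not None and ev_type not in filtered_types:
            out.append((ev_type, state_key))  # exempt from filtering
            continue
        for want_type, want_key in types:
            if ev_type == want_type and want_key in (None, state_key):
                out.append((ev_type, state_key))
                break
    return out

state = [
    ("m.room.member", "@a:hs"),
    ("m.room.member", "@b:hs"),
    ("m.room.name", ""),
]
# Keep only @a's membership but let non-member events through untouched:
assert filter_state(state, [("m.room.member", "@a:hs")], ["m.room.member"]) == [
    ("m.room.member", "@a:hs"),
    ("m.room.name", ""),
]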
@@ -373,7 +190,7 @@ class MessageHandler(BaseHandler):
if not requester.app_service:
# We check AS auth after fetching the room membership, as it
# requires us to pull out all joined members anyway.
membership, _ = yield self._check_in_room_or_world_readable(
membership, _ = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
if membership != Membership.JOIN:
@@ -418,7 +235,7 @@ class EventCreationHandler(object):
self.notifier = hs.get_notifier()
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self.send_event_to_master = ReplicationSendEventRestServlet.make_client(hs)
# This is only used to get at ratelimit function, and maybe_kick_guest_users
self.base_handler = BaseHandler(hs)
@@ -427,7 +244,7 @@ class EventCreationHandler(object):
# We arbitrarily limit concurrent event creation for a room to 5.
# This is to stop us from diverging history *too* much.
self.limiter = Limiter(max_count=5)
self.limiter = Linearizer(max_count=5, name="room_event_creation_limit")
self.action_generator = hs.get_action_generator()
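The hunk swaps the bespoke Limiter for a named Linearizer with max_count=5, i.e. at most five in-flight event creations per room. The same shape can be had from Twisted's stock DeferredSemaphore; a sketch, not the Synapse class:

from twisted.internet import defer

room_limiters = {}

def limiter_for(room_id):
    # Lazily create one 5-token semaphore per room.
    return room_limiters.setdefault(room_id, defer.DeferredSemaphore(5))

@defer.inlineCallbacks
def create_event_limited(room_id, build_fn):
    sem = limiter_for(room_id)
    yield sem.acquire()
    try:
        result = yield defer.maybeDeferred(build_fn)
    finally:
        sem.release()
    defer.returnValue(result)

Capping concurrency per room bounds how far parallel sends can fork the room's event graph, which is the "diverging history" the comment above refers to.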
@@ -459,10 +276,14 @@ class EventCreationHandler(object):
where *hashes* is a map from algorithm to hash.
If None, they will be requested from the database.
Raises:
ResourceLimitError if the server is blocked because some resource
limit has been exceeded
Returns:
Tuple of created event (FrozenEvent), Context
"""
yield self.auth.check_auth_blocking(requester.user.to_string())
builder = self.event_builder_factory.new(event_dict)
self.validator.validate_new(builder)
@@ -630,7 +451,8 @@ class EventCreationHandler(object):
If so, returns the version of the event in context.
Otherwise, returns None.
"""
prev_event_id = context.prev_state_ids.get((event.type, event.state_key))
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_event_id = prev_state_ids.get((event.type, event.state_key))
prev_event = yield self.store.get_event(prev_event_id, allow_none=True)
if not prev_event:
return
@@ -752,8 +574,8 @@ class EventCreationHandler(object):
event = builder.build()
logger.debug(
"Created event %s with state: %s",
event.event_id, context.prev_state_ids,
"Created event %s",
event.event_id,
)
defer.returnValue(
@@ -805,11 +627,9 @@ class EventCreationHandler(object):
try:
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield send_event_to_master(
self.hs.get_clock(),
self.http_client,
host=self.config.worker_replication_host,
port=self.config.worker_replication_http_port,
yield self.send_event_to_master(
event_id=event.event_id,
store=self.store,
requester=requester,
event=event,
context=context,
@@ -884,9 +704,11 @@ class EventCreationHandler(object):
e.sender == event.sender
)
current_state_ids = yield context.get_current_state_ids(self.store)
state_to_include_ids = [
e_id
for k, e_id in iteritems(context.current_state_ids)
for k, e_id in iteritems(current_state_ids)
if k[0] in self.hs.config.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
@@ -922,8 +744,9 @@ class EventCreationHandler(object):
)
if event.type == EventTypes.Redaction:
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.auth.compute_auth_events(
event, context.prev_state_ids, for_verification=True,
event, prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
@@ -943,21 +766,20 @@ class EventCreationHandler(object):
"You don't have permission to redact events"
)
if event.type == EventTypes.Create and context.prev_state_ids:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
if event.type == EventTypes.Create:
prev_state_ids = yield context.get_prev_state_ids(self.store)
if prev_state_ids:
raise AuthError(
403,
"Changing the room create event is forbidden",
)
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event, context=context
)
# this intentionally does not yield: we don't care about the result
# and don't need to wait for it.
run_in_background(
self.pusher_pool.on_new_notifications,
event_stream_id, max_stream_id
self.pusher_pool.on_new_notifications(
event_stream_id, max_stream_id,
)
def _notify():

synapse/handlers/pagination.py View File

@@ -0,0 +1,298 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2017 - 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.events.utils import serialize_event
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import ReadWriteLock
from synapse.util.logcontext import run_in_background
from synapse.util.stringutils import random_string
from synapse.visibility import filter_events_for_client
logger = logging.getLogger(__name__)
class PurgeStatus(object):
"""Object tracking the status of a purge request
This class contains information on the progress of a purge request, for
return by get_purge_status.
Attributes:
status (int): Tracks whether this request has completed. One of
STATUS_{ACTIVE,COMPLETE,FAILED}
"""
STATUS_ACTIVE = 0
STATUS_COMPLETE = 1
STATUS_FAILED = 2
STATUS_TEXT = {
STATUS_ACTIVE: "active",
STATUS_COMPLETE: "complete",
STATUS_FAILED: "failed",
}
def __init__(self):
self.status = PurgeStatus.STATUS_ACTIVE
def asdict(self):
return {
"status": PurgeStatus.STATUS_TEXT[self.status]
}
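# Illustration (not part of this file): the purge admin API can serialise
# this tracker via asdict(), e.g.
#
#     status = PurgeStatus()
#     status.status = PurgeStatus.STATUS_COMPLETE
#     status.asdict()  # -> {"status": "complete"}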
class PaginationHandler(object):
"""Handles pagination and purge history requests.
    These are in the same handler because we need to block clients from
    paginating while a purge is in progress.
"""
def __init__(self, hs):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.clock = hs.get_clock()
self.pagination_lock = ReadWriteLock()
self._purges_in_progress_by_room = set()
# map from purge id to PurgeStatus
self._purges_by_id = {}
def start_purge_history(self, room_id, token,
delete_local_events=False):
"""Start off a history purge on a room.
Args:
room_id (str): The room to purge from
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
Returns:
str: unique ID for this purge transaction.
"""
if room_id in self._purges_in_progress_by_room:
raise SynapseError(
400,
"History purge already in progress for %s" % (room_id, ),
)
purge_id = random_string(16)
# we log the purge_id here so that it can be tied back to the
# request id in the log lines.
logger.info("[purge] starting purge_id %s", purge_id)
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, token, delete_local_events,
)
return purge_id
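# Illustration (not part of this file): a hypothetical caller, such as a
# purge-history admin servlet, kicks the purge off and later polls it:
#
#     purge_id = pagination_handler.start_purge_history(room_id, token)
#     # and later:
#     status = pagination_handler.get_purge_status(purge_id)
#     if status is not None:
#         status.asdict()  # {"status": "active"}, "complete" or "failed"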
@defer.inlineCallbacks
def _purge_history(self, purge_id, room_id, token,
delete_local_events):
"""Carry out a history purge on a room.
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
Returns:
Deferred
"""
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, token, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
logger.error("[purge] failed: %s", Failure().getTraceback().rstrip())
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_FAILED
finally:
self._purges_in_progress_by_room.discard(room_id)
# remove the purge from the list 24 hours after it completes
def clear_purge():
del self._purges_by_id[purge_id]
self.hs.get_reactor().callLater(24 * 3600, clear_purge)
def get_purge_status(self, purge_id):
"""Get the current status of an active purge
Args:
purge_id (str): purge_id returned by start_purge_history
Returns:
PurgeStatus|None
"""
return self._purges_by_id.get(purge_id)
@defer.inlineCallbacks
def get_messages(self, requester, room_id=None, pagin_config=None,
as_client_event=True, event_filter=None):
"""Get messages in a room.
Args:
requester (Requester): The user requesting messages.
room_id (str): The room they want messages from.
pagin_config (synapse.api.streams.PaginationConfig): The pagination
config rules to apply, if any.
as_client_event (bool): True to get events in client-server format.
event_filter (Filter): Filter to apply to results or None
Returns:
dict: Pagination API results
"""
user_id = requester.user.to_string()
if pagin_config.from_token:
room_token = pagin_config.from_token.room_key
else:
pagin_config.from_token = (
yield self.hs.get_event_sources().get_current_token_for_room(
room_id=room_id
)
)
room_token = pagin_config.from_token.room_key
room_token = RoomStreamToken.parse(room_token)
pagin_config.from_token = pagin_config.from_token.copy_and_replace(
"room_key", str(room_token)
)
source_config = pagin_config.get_source_config("room")
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
if source_config.direction == 'b':
# if we're going backwards, we might need to backfill. This
# requires that we have a topo token.
if room_token.topological:
max_topo = room_token.topological
else:
max_topo = yield self.store.get_max_topological_token(
room_id, room_token.stream
)
if membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
leave_token = yield self.store.get_topological_token_for_event(
member_event_id
)
leave_token = RoomStreamToken.parse(leave_token)
if leave_token.topological < max_topo:
source_config.from_key = str(leave_token)
yield self.hs.get_handlers().federation_handler.maybe_backfill(
room_id, max_topo
)
events, next_key = yield self.store.paginate_room_events(
room_id=room_id,
from_key=source_config.from_key,
to_key=source_config.to_key,
direction=source_config.direction,
limit=source_config.limit,
event_filter=event_filter,
)
next_token = pagin_config.from_token.copy_and_replace(
"room_key", next_key
)
if not events:
defer.returnValue({
"chunk": [],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
})
if event_filter:
events = event_filter.filter(events)
events = yield filter_events_for_client(
self.store,
user_id,
events,
is_peeking=(member_event_id is None),
)
state = None
if event_filter and event_filter.lazy_load_members():
# TODO: remove redundant members
types = [
(EventTypes.Member, state_key)
for state_key in set(
event.sender # FIXME: we also care about invite targets etc.
for event in events
)
]
state_ids = yield self.store.get_state_ids_for_event(
events[0].event_id, types=types,
)
if state_ids:
state = yield self.store.get_events(list(state_ids.values()))
if state:
state = yield filter_events_for_client(
self.store,
user_id,
state.values(),
is_peeking=(member_event_id is None),
)
time_now = self.clock.time_msec()
chunk = {
"chunk": [
serialize_event(e, time_now, as_client_event)
for e in events
],
"start": pagin_config.from_token.to_string(),
"end": next_token.to_string(),
}
if state:
chunk["state"] = [
serialize_event(e, time_now, as_client_event)
for e in state
]
defer.returnValue(chunk)
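Taken together, get_messages is the backend for the client /messages endpoint. A minimal sketch of a caller, assuming PaginationConfig.from_request from synapse.streams.config and a get_pagination_handler accessor on the HomeServer; the servlet details are illustrative:

    from twisted.internet import defer

    from synapse.streams.config import PaginationConfig

    class RoomMessageListServlet(object):
        """Hypothetical /messages servlet, trimmed to the pagination call."""

        def __init__(self, hs):
            self.auth = hs.get_auth()
            self.pagination_handler = hs.get_pagination_handler()

        @defer.inlineCallbacks
        def on_GET(self, request, room_id):
            requester = yield self.auth.get_user_by_req(request, allow_guest=True)
            pagination_config = PaginationConfig.from_request(
                request, default_limit=10,
            )
            msgs = yield self.pagination_handler.get_messages(
                room_id=room_id,
                requester=requester,
                pagin_config=pagination_config,
                as_client_event=True,
                event_filter=None,
            )
            defer.returnValue((200, msgs))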

Some files were not shown because too many files have changed in this diff.