Compare commits


538 Commits

Author SHA1 Message Date
Amber Brown
92faeb2a3f changelog 2018-10-04 22:38:10 +10:00
Amber Brown
dd59dfc51f full version 2018-10-04 22:37:55 +10:00
Richard van der Hoff
e5f080d6a7 Merge pull request #4002 from matrix-org/rav/pin_prometheus
Pin to prometheus_client<0.4 to avoid renaming all of our metrics
2018-10-03 17:42:08 +01:00
Richard van der Hoff
a59d899668 Pin to prometheus_client<0.4 to avoid renaming all of our metrics 2018-10-03 17:20:15 +01:00
Richard van der Hoff
055fe3589e Merge pull request #3998 from matrix-org/michaelkaye/build_docker_images_of_rc_tags
Docker build all tags starting vX.Y.Z, including release candidates
2018-10-03 17:12:52 +01:00
Michael Kaye
d1b7c0ca05 Docker build all tags starting vX.Y.Z, inc RCs
Note the regex must match the complete string anyway,
so the leading ^ was useless.
2018-10-03 11:52:05 +01:00
Amber Brown
ed763aeba8 some changelog fixes 2018-10-03 02:27:41 +10:00
Amber Brown
ccce6f26a6 changelog 2018-10-03 02:16:21 +10:00
Amber Brown
da8f82008d version 2018-10-03 02:15:48 +10:00
Erik Johnston
25baf3b2ac Merge pull request #3991 from matrix-org/erikj/fix_pop_retry_cache
Fix bug when invalidating destination retry timings
2018-10-02 16:42:28 +01:00
Erik Johnston
9c834b8ee9 Add tests 2018-10-02 16:22:39 +01:00
Amber Brown
058a4c665e Remove Jenkins & other old dev junk (#3988) 2018-10-03 00:59:11 +10:00
Erik Johnston
8f614f842d Newsfile 2018-10-02 15:54:31 +01:00
Erik Johnston
7258d081a5 Fix bug when invalidating destination retry timings 2018-10-02 15:47:57 +01:00
Richard van der Hoff
2b8d28b095 Merge pull request #3960 from matrix-org/rav/fix_missing_create_event_error
Fix error handling for missing auth_event
2018-10-02 13:57:54 +01:00
Amber Brown
7232917f12 Disable frozen dicts by default (#3987) 2018-10-02 22:53:47 +10:00
Erik Johnston
0f7033fb98 Merge pull request #3990 from matrix-org/erikj/fix_logging_lost_connection
Fix error when logging incomplete requests
2018-10-02 14:10:05 +02:00
Erik Johnston
eeb755d17b Newsfile 2018-10-02 12:06:25 +01:00
Erik Johnston
334e075dd8 Fix error when logging incomplete requests
If a connection is lost before a request is read from Request, Twisted
sets `method` (and `uri`) attributes to dummy values. These dummy values
have incorrect types (i.e. they're not bytes), and so things like
`__repr__` would raise an exception.

To fix this we add a helper method that returns the method with a
consistent type.
2018-10-02 12:06:22 +01:00
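As a rough illustration of the fix described in the commit above, a helper along these lines normalises the value before it reaches `__repr__` or the logs. The name `get_method` and the exact dummy handling are assumptions, not necessarily synapse's code:

```python
def get_method(request):
    """Return the request method as a str, even if Twisted left a dummy value.

    When the connection is lost before the request line is read, Twisted sets
    `method`/`uri` to placeholders that are not bytes, which used to break
    __repr__ and logging.
    """
    method = request.method
    if isinstance(method, bytes):
        return method.decode("ascii")
    return str(method)
```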
Richard van der Hoff
8c41b0ca66 Merge pull request #3989 from matrix-org/rav/better_stacktraces
Avoid reraise, to improve stacktraces
2018-10-02 10:47:38 +01:00
Erik Johnston
bc29946809 Merge pull request #3986 from matrix-org/erikj/fix_sync_with_redacted_state
Fix lazy loaded sync with rejected state events
2018-10-02 11:38:35 +02:00
Richard van der Hoff
8174c6725b Avoid reraise, to improve stacktraces 2018-10-01 18:50:34 +01:00
Richard van der Hoff
5908a8f6f5 Merge branch 'develop' into rav/fix_missing_create_event_error 2018-10-01 17:16:20 +01:00
Amber Brown
abdc141c2b Update instructions to point to pip install (#3985) 2018-10-02 01:08:38 +10:00
Richard van der Hoff
b5b93f45d5 Merge pull request #3968 from matrix-org/rav/fix_federation_errors
Fix exceptions when handling incoming transactions
2018-10-01 15:54:24 +01:00
Amber Brown
6e05fd032c Fix userconsent on Python 3 (#3938) 2018-10-02 00:11:58 +10:00
Erik Johnston
5fa27eac78 Newsfile 2018-10-01 14:24:41 +01:00
Erik Johnston
82f922b4af Fix lazy loaded sync with rejected state events
In particular, we assume that the name and canonical alias events in
the state have not been rejected. In practice this may not be the case
(though we should probably think about fixing that) so let's ensure that
we gracefully handle that case, rather than 404'ing the sync request
like we do now.
2018-10-01 14:19:40 +01:00
Erik Johnston
8f5c23d0cd Merge pull request #3933 from matrix-org/erikj/destination_retry_cache
Add a five minute cache to get_destination_retry_timings
2018-10-01 14:20:30 +02:00
Richard van der Hoff
53c5fa4e6c Further reduce the size of the docker image (#3972)
Rewrite the dockerfile as a multistage build: this means we can get rid of a whole load of cruft which we don't need.
2018-10-01 12:29:17 +01:00
Erik Johnston
4f3e3ac192 Correctly match 'dict.pop' api 2018-10-01 12:25:27 +01:00
Erik Johnston
8ea887856c Don't update eviction metrics on explicit removal 2018-10-01 12:00:58 +01:00
Bruno Windels
9a8bbc9a59 add --no-admin flag to registration script (#3836) 2018-09-28 13:36:56 +01:00
Richard van der Hoff
3deaad2fb4 Merge pull request #3964 from matrix-org/rav/remove_localhost_checks
remove spurious federation checks on localhost
2018-09-28 13:35:47 +01:00
Richard van der Hoff
26557c9db4 Merge pull request #3980 from matrix-org/rav/remove_broken_cache_call
Remove redundant call to start_get_pdu_cache
2018-09-28 13:13:19 +01:00
Richard van der Hoff
965154d60a Fix complete fail to do the right thing 2018-09-28 12:45:54 +01:00
Jan Christian Grünhage
5304449738 build python 3 docker images on circle CI (#3976) 2018-09-28 12:36:35 +01:00
Richard van der Hoff
19475cf337 Remove redundant call to start_get_pdu_cache
I think this got forgotten in #3932. We were getting away with it because it
was the last call in this function.
2018-09-28 12:01:23 +01:00
Richard van der Hoff
9c8cec5dab Merge remote-tracking branch 'origin/develop' into erikj/destination_retry_cache 2018-09-28 10:51:09 +01:00
Richard van der Hoff
f094f715cf Merge remote-tracking branch 'origin/develop' into rav/fix_federation_errors 2018-09-27 15:18:21 +01:00
Richard van der Hoff
36c62a67c4 Merge pull request #3794 from matrix-org/erikj/faster_typing
Improve performance of getting typing updates for replication
2018-09-27 15:14:26 +01:00
Richard van der Hoff
b5c976347b Merge pull request #3946 from matrix-org/michaelkaye/automate_docker_hub_upload
Build and push docker image to hub automatically
2018-09-27 15:07:41 +01:00
Richard van der Hoff
e1e3e77bfd Merge pull request #3967 from matrix-org/rav/federation_handler_cleanups
Clarifications in FederationHandler
2018-09-27 15:06:59 +01:00
Amber Brown
6de3884e5e Merge pull request #3965 from matrix-org/rav/notify_app_services_bg_process
Run notify_app_services as a bg process
2018-09-27 23:40:30 +10:00
Amber Brown
a512e637ac Merge pull request #3970 from schnuffle/develop-py3
Replaced all occurrences of e.message with str(e)
2018-09-27 23:39:45 +10:00
Amber Brown
b3064532d0 Run our oldest supported configuration in CI (#3952) 2018-09-27 23:21:54 +10:00
Amber Brown
861c063ebc Update 3970.bugfix 2018-09-27 22:47:01 +10:00
Richard van der Hoff
484a9b8c81 Remove redundant, failing, test
This test didn't do what it claimed to do, and what it claimed to do was the
same as test_cant_hide_direct_ancestors anyway.

This stuff is tested by sytest anyway.
2018-09-27 13:11:23 +01:00
Schnuffle
a873896096 Added changelog fragment 2018-09-27 14:01:44 +02:00
Schnuffle
dc5db01ff2 Replaced all occurrences of e.message with str(e)
Signed-off-by: Schnuffle  <schnuffle@github.com>
2018-09-27 13:38:50 +02:00
Richard van der Hoff
51d33d5178 Merge pull request #3961 from matrix-org/neilj/lock_mau_upserts
fix #3854 MAU transaction errors
2018-09-27 12:31:37 +01:00
Michael Kaye
74bbdd0412 Do the changelog.d dance 2018-09-27 12:01:43 +01:00
Michael Kaye
8607397ea7 Make a ":latest" tag, and a SHA1 commit ID one too.
Latest is horrible and makes debugging what has happened anywhere a
nightmare. We push a latest because of demand for it, but we'll also
push a SHA1 commit id so those wanting to know what they're running
(and be able to roll back if required) can use those instead.

Note that latest here is defined as "most recent master commit" not
"most recent released version", as the actual semantics of making latest
correct while still being able to build bugfixed releases of previous
versions is just ARGH. So we define it as "master" not "latest release".
2018-09-27 12:01:41 +01:00
Michael Kaye
a3c11f7320 Also, don't run this job on any branches 2018-09-27 12:01:09 +01:00
Michael Kaye
73a089f461 Make username configurable in the UI too 2018-09-27 12:00:40 +01:00
Michael Kaye
3852c2bc39 Fix typo 2018-09-27 12:00:01 +01:00
Michael Kaye
0d36fe3563 Build and push docker image to hub 2018-09-27 11:59:17 +01:00
Richard van der Hoff
948a6d8776 changelog 2018-09-27 11:37:39 +01:00
Richard van der Hoff
333bee27f5 Include event when resolving state for missing prevs
If we have a forward extremity for a room as `E`, and you receive `A`, `B`,
s.t. `A -> B -> E`, and `B` also points to an unknown event `X`, then we need
to do state res between `X` and `E`.

When that happens, we need to make sure we include `X` in the state that goes
into the state res alg.

Fixes #3934.
2018-09-27 11:37:39 +01:00
Richard van der Hoff
bd61c82bdf Include state from remote servers in pdu handling
If we've fetched state events from remote servers in order to resolve the state
for a new event, we need to actually pass those events into
resolve_events_with_factory (so that it can do the state res) and then persist
the ones we need - otherwise other bits of the codebase get confused about why
we have state groups pointing to non-existent events.
2018-09-27 11:37:39 +01:00
Richard van der Hoff
a215b698c4 Fix "unhashable type: 'list'" exception in federation handling
get_state_groups returns a map from state_group_id to a list of FrozenEvents,
so was very much the wrong thing to be putting as one of the entries in the
list passed to resolve_events_with_factory (which expects maps from
(event_type, state_key) to event id).

We actually want get_state_groups_ids().values() rather than
get_state_groups().

This fixes the main problem in #3923, but there are other problems with this
bit of code which get discovered once you do so.
2018-09-27 11:37:39 +01:00
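A hedged illustration of the shape mismatch described in the commit above; the return shapes are taken from the commit message, and the example values are invented:

```python
# What get_state_groups() returns: state_group_id -> list of FrozenEvents.
# Lists of events are the wrong shape (and unhashable) for state resolution.
state_group_events = {42: ["<FrozenEvent $create>", "<FrozenEvent $member>"]}

# What get_state_groups_ids() returns: state_group_id -> {(type, state_key): event_id},
# which is the shape resolve_events_with_factory expects.
state_group_ids = {
    42: {
        ("m.room.create", ""): "$create:example.org",
        ("m.room.member", "@alice:example.org"): "$member:example.org",
    },
}

# So the fix is to feed the resolver the values of get_state_groups_ids():
state_maps = list(state_group_ids.values())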
Richard van der Hoff
28223841e0 more comments 2018-09-27 11:31:51 +01:00
Richard van der Hoff
ad8e137062 changelog 2018-09-27 11:31:51 +01:00
Richard van der Hoff
e3c159863d Clarifications in FederationHandler
* add some comments on things that look a bit bogus
* rename this `state` variable to avoid confusion with the `state` used
  elsewhere in this function. (There was no actual conflict, but it was
  a confusing bit of spaghetti.)
2018-09-27 11:31:51 +01:00
Richard van der Hoff
92abd3d6d6 Merge pull request #3966 from matrix-org/rav/rx_txn_logging_2
Logging improvements
2018-09-27 11:27:46 +01:00
Richard van der Hoff
4a15a3e4d5 Include eventid in log lines when processing incoming federation transactions (#3959)
when processing incoming transactions, it can be hard to see what's going on,
because we process a bunch of stuff in parallel, and because we may end up
recursively working our way through a chain of three or four events.

This commit creates a way to use logcontexts to add the relevant event ids to
the log lines.
2018-09-27 11:25:34 +01:00
Richard van der Hoff
ae6ad4cf41 docstrings and unittests for storage.state (#3958)
I spent ages trying to figure out how I was going mad...
2018-09-27 11:22:25 +01:00
Amber Brown
2c695fd1aa Merge pull request #3963 from matrix-org/rav/get_state_for_room_docstring
fix docstring for FederationClient.get_state_for_room
2018-09-27 18:06:38 +10:00
Richard van der Hoff
e70b4ce069 Logging improvements
Some logging tweaks to help with debugging incoming federation transactions
2018-09-26 17:36:14 +01:00
Richard van der Hoff
3c37c7e45b changelog 2018-09-26 17:01:30 +01:00
Richard van der Hoff
a87d419a85 Run notify_app_services as a bg process
This ensures that its resource usage metrics get recorded somewhere rather than
getting lost.

(It also fixes an error when called from a nested logging context which
completes before the bg process)
2018-09-26 17:00:40 +01:00
Richard van der Hoff
0c4a99ea2d changelog 2018-09-26 16:54:54 +01:00
Richard van der Hoff
9453c65948 remove spurious federation checks on localhost
There's really no point in checking for destinations called "localhost" because
there is nothing stopping people creating other DNS entries which point to
127.0.0.1. The right fix for this is
https://github.com/matrix-org/synapse/issues/3953.

Blocking localhost, on the other hand, means that you get a surprise when
trying to connect a test server on localhost to an existing server (with a
'normal' server_name).
2018-09-26 16:53:52 +01:00
Richard van der Hoff
ab59f3d8da changelog 2018-09-26 16:52:24 +01:00
Richard van der Hoff
607eec0456 fix docstring for FederationClient.get_state_for_room
trivial fixes for docstring
2018-09-26 16:52:24 +01:00
Neil Johnson
d514608b5c towncrier 2018-09-26 16:19:53 +01:00
Neil Johnson
28781b65e7 fix #3854 2018-09-26 16:16:41 +01:00
Amber Brown
d4e0861ff9 Reduce the load on our CI (#3957)
* changelog

* reduce circleci config

* plus a handy script

* fix regex
2018-09-27 00:23:21 +10:00
Richard van der Hoff
a5e70b31a1 changelog 2018-09-26 14:41:14 +01:00
Richard van der Hoff
8afddf7afe Fix error handling for missing auth_event
When we were authorizing an event, if there was no `m.room.create` in its
auth_events, we would raise a SynapseError with a cryptic message, which then
meant that we would bail out of processing any incoming events, rather than
storing a rejection for the faulty event and moving on.

We should treat the absent event the same as any other auth failure, by
raising an AuthError, so that the event is marked as rejected.
2018-09-26 14:40:16 +01:00
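A minimal sketch of the behaviour change being described, with a hypothetical helper name; `AuthError` is the real exception class from `synapse.api.errors`:

```python
from synapse.api.errors import AuthError

def check_create_event_present(auth_events):
    # Hypothetical helper: a missing m.room.create is now an ordinary auth
    # failure, so the offending event gets marked as rejected, rather than a
    # SynapseError that aborts processing of the whole batch of incoming events.
    if ("m.room.create", "") not in auth_events:
        raise AuthError(403, "No create event in auth events")
```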
Richard van der Hoff
bf01efb864 Merge branch 'develop' into rav/hacky_cache_factor_fix 2018-09-26 13:24:07 +01:00
Erik Johnston
8834396b8a Actually set cache factors in workers 2018-09-26 13:23:02 +01:00
Richard van der Hoff
4e8276a34a Merge pull request #3956 from matrix-org/rav/fix_expiring_cache_len
Fix ExpiringCache.__len__ to be accurate
2018-09-26 13:10:13 +01:00
Richard van der Hoff
5b4028fa78 Merge branch 'rav/fix_expiring_cache_len' into erikj/destination_retry_cache 2018-09-26 12:55:53 +01:00
Richard van der Hoff
7ee94fc1ba Log which cache is throwing exceptions 2018-09-26 12:43:08 +01:00
Amber Brown
66a1d57adb Merge pull request #3948 from matrix-org/rav/no_symlink_synctl
Move synctl into top dir to avoid a symlink
2018-09-26 21:41:58 +10:00
Amber Brown
c2185f14d7 Merge pull request #3924 from matrix-org/rav/clean_up_on_receive_pdu
Comments and interface cleanup for on_receive_pdu
2018-09-26 21:41:26 +10:00
Richard van der Hoff
8ac9fa7375 changelog 2018-09-26 12:35:10 +01:00
Erik Johnston
3baf6e1667 Fix ExpiringCache.__len__ to be accurate
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.

(This appears to have been a longstanding bug, but one which has been made more
of a problem by #3932 and #3933).

(This was originally done by Erik as part of #3933. I'm cherry-picking it
because really it's a fix in its own right)
2018-09-26 12:32:29 +01:00
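A stripped-down sketch of the idea (this is not synapse's actual `ExpiringCache`): recount on every call instead of maintaining a running estimate that can drift negative.

```python
class TinyExpiringCache:
    def __init__(self, iterable=False):
        # If iterable=True, each cached value is itself a collection and we
        # count its members; otherwise we just count the keys.
        self.iterable = iterable
        self._cache = {}

    def __len__(self):
        # Always compute the length from scratch rather than tracking an estimate.
        if self.iterable:
            return sum(len(value) for value in self._cache.values())
        return len(self._cache)
```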
Richard van der Hoff
f65163627f Merge pull request #3911 from matrix-org/jcgruenhage/docker-support-python3
make python 3 work in the docker container
2018-09-25 15:18:09 +01:00
Jan Christian Grünhage
df55a943ca Update Dockerfile 2018-09-25 14:33:38 +02:00
Jan Christian Grünhage
e7fa032126 Update .dockerignore 2018-09-25 14:33:19 +02:00
Richard van der Hoff
0649306fde Update grafana dashboard 2018-09-25 13:29:28 +01:00
Richard van der Hoff
a1cd37390f Merge remote-tracking branch 'origin/develop' into erikj/destination_retry_cache 2018-09-25 12:03:54 +01:00
Richard van der Hoff
4c3e7eeec5 Merge pull request #3932 from matrix-org/erikj/auto_start_expiring_caches
Fix some instances of ExpiringCache not expiring cache items
2018-09-25 12:02:57 +01:00
Jérémy Farnaud
6cf261930a added "media-src: 'self'" to CSP for resources (#3578)
Synapse doesn’t allow media resources to be played directly in
Chrome. It is a problem for users on other networks (e.g. IRC)
communicating with Matrix users through a gateway. The gateway sends
them the raw URL for the resource when a Matrix user uploads a video
and the video cannot be played directly in Chrome using that URL.

Chrome argues it is not authorized to play the video because of the
Content Security Policy. Chrome checks for the "media-src" policy which
is missing, and defaults to the "default-src" policy which is "none".

As Synapse already sends "object-src: 'self'" I thought it wouldn’t be
a problem to add "media-src: 'self'" to the CSP to fix this problem.
2018-09-25 11:55:02 +01:00
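For illustration, the header change described above might look like this on a Twisted response object; only the `object-src`/`media-src` directives are taken from the commit message, and the rest of the real policy is elided:

```python
def set_media_csp(request):
    # Hedged sketch: send media-src 'self' alongside the existing object-src
    # 'self' so Chrome will play uploaded media straight from the repository URL.
    request.setHeader(
        b"Content-Security-Policy",
        b"object-src 'self'; media-src 'self';",
    )
```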
Richard van der Hoff
94f7befc31 Merge pull request #3925 from matrix-org/erikj/fix_producers_unregistered
Fix spurious exceptions when client closes connection
2018-09-25 11:52:06 +01:00
Richard van der Hoff
bd469adaa9 changelog 2018-09-25 11:22:17 +01:00
Richard van der Hoff
c53336986d Move synctl into top dir to avoid a symlink
symlinks apparently break setuptools on python3 and alpine
(https://bugs.python.org/issue31940), so let's stop using a symlink and just
use the file directly.
2018-09-25 11:19:27 +01:00
Richard van der Hoff
e4e96486a9 Merge pull request #3947 from matrix-org/rav/attr_version
We require attrs 16.0.0
2018-09-25 11:16:42 +01:00
Richard van der Hoff
45a1053d44 changelog 2018-09-25 10:45:34 +01:00
Richard van der Hoff
a9d84f4e44 We require attrs 16.0.0
Ref: https://github.com/matrix-org/synapse/issues/3945
2018-09-25 10:43:39 +01:00
Matthew Hodgson
787d22ed6c Only lazy load self-members on initial sync
Given we have disabled lazy loading for incr syncs in #3840, we can make self-LL more efficient by only doing it on initial sync.  Also adds a bounds check for if/when we change our mind, so that we don't try to include LL members on sync responses with no timeline.
2018-09-25 00:49:26 +01:00
Amber Brown
fbe5ba25f6 Merge branch 'master' into develop 2018-09-25 03:10:01 +10:00
Amber Brown
5121ae97f5 Merge tag 'v0.33.5.1'
Internal Changes
----------------

- Fix incompatibility with older Twisted version in tests. Thanks
  @OlegGirko!
([\#3940](https://github.com/matrix-org/synapse/issues/3940))
2018-09-25 03:09:30 +10:00
Amber Brown
fc691ca97c changelog 2018-09-25 02:54:59 +10:00
Amber Brown
6b6cb32297 bump version 2018-09-25 02:54:34 +10:00
Amber Brown
e37c221b97 changelog for 3940 2018-09-25 02:51:55 +10:00
Oleg Girko
7d3f639844 Fix compatibility issue with older Twisted in tests.
Older Twisted (18.4.0) returns TimeoutError instead of
ConnectingCancelledError when connection times out.
This change allows tests to be compatible with this behaviour.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2018-09-25 02:51:18 +10:00
Amber Brown
04eed80a73 Merge branch 'master' into develop 2018-09-24 23:42:25 +10:00
Amber Brown
829213523e Merge tag 'v0.33.5'
Features
--------

- Python 3.5 and 3.6 support is now in beta.
([\#3576](https://github.com/matrix-org/synapse/issues/3576))
- Implement `event_format` filter param in `/sync`
([\#3790](https://github.com/matrix-org/synapse/issues/3790))
- Add synapse_admin_mau:registered_reserved_users metric to expose
number of real reserved users
([\#3846](https://github.com/matrix-org/synapse/issues/3846))

Bugfixes
--------

- Remove connection ID for replication prometheus metrics, as it creates
a large number of new series.
([\#3788](https://github.com/matrix-org/synapse/issues/3788))
- guest users should not be part of mau total
([\#3800](https://github.com/matrix-org/synapse/issues/3800))
- Bump dependency on pyopenssl 16.x, to avoid incompatibility with
recent Twisted.
([\#3804](https://github.com/matrix-org/synapse/issues/3804))
- Fix existing room tags not coming down sync when joining a room
([\#3810](https://github.com/matrix-org/synapse/issues/3810))
- Fix jwt import check
([\#3824](https://github.com/matrix-org/synapse/issues/3824))
- fix VOIP crashes under Python 3 (#3821)
([\#3835](https://github.com/matrix-org/synapse/issues/3835))
- Fix manhole so that it works with latest openssh clients
([\#3841](https://github.com/matrix-org/synapse/issues/3841))
- Fix outbound requests occasionally wedging, which can result in
federation breaking between servers.
([\#3845](https://github.com/matrix-org/synapse/issues/3845))
- Show heroes if room name/canonical alias has been deleted
([\#3851](https://github.com/matrix-org/synapse/issues/3851))
- Fix handling of redacted events from federation
([\#3859](https://github.com/matrix-org/synapse/issues/3859))
-  ([\#3874](https://github.com/matrix-org/synapse/issues/3874))
- Mitigate outbound federation randomly becoming wedged
([\#3875](https://github.com/matrix-org/synapse/issues/3875))

Internal Changes
----------------

- CircleCI tests now run on the potential merge of a PR.
([\#3704](https://github.com/matrix-org/synapse/issues/3704))
- http/ is now ported to Python 3.
([\#3771](https://github.com/matrix-org/synapse/issues/3771))
- Improve human readable error messages for threepid
registration/account update
([\#3789](https://github.com/matrix-org/synapse/issues/3789))
- Make /sync slightly faster by avoiding needless copies
([\#3795](https://github.com/matrix-org/synapse/issues/3795))
- handlers/ is now ported to Python 3.
([\#3803](https://github.com/matrix-org/synapse/issues/3803))
- Limit the number of PDUs/EDUs per federation transaction
([\#3805](https://github.com/matrix-org/synapse/issues/3805))
- Only start postgres instance for postgres tests on Travis CI
([\#3806](https://github.com/matrix-org/synapse/issues/3806))
- tests/ is now ported to Python 3.
([\#3808](https://github.com/matrix-org/synapse/issues/3808))
- crypto/ is now ported to Python 3.
([\#3822](https://github.com/matrix-org/synapse/issues/3822))
- rest/ is now ported to Python 3.
([\#3823](https://github.com/matrix-org/synapse/issues/3823))
- add some logging for the keyring queue
([\#3826](https://github.com/matrix-org/synapse/issues/3826))
- speed up lazy loading by 2-3x
([\#3827](https://github.com/matrix-org/synapse/issues/3827))
- Improved Dockerfile to remove build requirements after building,
reducing the image size.
([\#3834](https://github.com/matrix-org/synapse/issues/3834))
- Disable lazy loading for incremental syncs for now
([\#3840](https://github.com/matrix-org/synapse/issues/3840))
- federation/ is now ported to Python 3.
([\#3847](https://github.com/matrix-org/synapse/issues/3847))
- Log when we retry outbound requests
([\#3853](https://github.com/matrix-org/synapse/issues/3853))
- Removed some excess logging messages.
([\#3855](https://github.com/matrix-org/synapse/issues/3855))
- Speed up purge history for rooms that have been previously purged
([\#3856](https://github.com/matrix-org/synapse/issues/3856))
- Refactor some HTTP timeout code.
([\#3857](https://github.com/matrix-org/synapse/issues/3857))
- Fix running merged builds on CircleCI
([\#3858](https://github.com/matrix-org/synapse/issues/3858))
- Fix typo in replication stream exception.
([\#3860](https://github.com/matrix-org/synapse/issues/3860))
- Add in flight real time metrics for Measure blocks
([\#3871](https://github.com/matrix-org/synapse/issues/3871))
- Disable buffering and automatic retrying in treq requests to prevent
timeouts. ([\#3872](https://github.com/matrix-org/synapse/issues/3872))
- mention jemalloc in the README
([\#3877](https://github.com/matrix-org/synapse/issues/3877))
- Remove unmaintained "nuke-room-from-db.sh" script
([\#3888](https://github.com/matrix-org/synapse/issues/3888))
2018-09-24 23:41:35 +10:00
Amber Brown
e3aa2c0b75 towncrier 2018-09-24 23:40:27 +10:00
Amber Brown
e302f40e20 update version 2018-09-24 23:40:05 +10:00
Erik Johnston
19dc676d1a Fix ExpiringCache.__len__ to be accurate
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.
2018-09-21 16:25:42 +01:00
Erik Johnston
5230bc1471 Newsfile 2018-09-21 15:05:37 +01:00
Erik Johnston
fdd1a62e8d Add a five minute cache to get_destination_retry_timings
Hopefully helps with #3931
2018-09-21 14:56:12 +01:00
Erik Johnston
79eded1ae4 Make ExpiringCache slightly more performant 2018-09-21 14:52:21 +01:00
Erik Johnston
d3f80cbc9c Newsfile 2018-09-21 14:24:39 +01:00
Erik Johnston
8601c24287 Fix some instances of ExpiringCache not expiring cache items
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.

This PR removes `start` from ExpiringCache, and automatically starts the
background reaping process on creation instead.
2018-09-21 14:19:46 +01:00
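As a self-contained sketch of the pattern (deliberately not synapse's implementation, which uses its own clock and background-process helpers): schedule the reaper in the constructor so callers cannot forget to call `start()`.

```python
import threading
import time

class AutoExpiringCache:
    def __init__(self, expiry_seconds):
        self._cache = {}  # key -> (insert_time, value)
        self._expiry = expiry_seconds
        # Start background reaping immediately on creation; there is no
        # separate start() for callers to forget.
        reaper = threading.Thread(target=self._reap_loop, daemon=True)
        reaper.start()

    def __setitem__(self, key, value):
        self._cache[key] = (time.time(), value)

    def get(self, key, default=None):
        entry = self._cache.get(key)
        return entry[1] if entry else default

    def _reap_loop(self):
        while True:
            time.sleep(self._expiry / 2)
            cutoff = time.time() - self._expiry
            for key, (ts, _) in list(self._cache.items()):
                if ts < cutoff:
                    self._cache.pop(key, None)
```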
Erik Johnston
ad53a5497d Merge pull request #3927 from matrix-org/erikj/handle_background_errors
Handle exceptions thrown by background tasks
2018-09-21 09:26:30 +01:00
Matthew Hodgson
a2ddaa90f2 Always LL ourselves if we're in a room to simplify clients (#3916)
Should fix https://github.com/vector-im/riot-web/issues/7209
2018-09-20 21:21:54 +01:00
Erik Johnston
94ae1dea3c Add missing logger 2018-09-20 17:05:34 +01:00
Erik Johnston
168491c412 Newsfile 2018-09-20 16:16:52 +01:00
Erik Johnston
9ea408441f Handle exceptions thrown by background tasks
Fixes #3921
2018-09-20 16:15:21 +01:00
Jan Christian Grünhage
3af7a95a9d add changelog 2018-09-20 14:55:11 +02:00
Jan Christian Grünhage
8dfb33d325 make python 3 work in the docker container 2018-09-20 14:55:11 +02:00
Erik Johnston
13f6f1624b Newsfile 2018-09-20 13:52:09 +01:00
Erik Johnston
b28a7ed503 Fix spurious exceptions when client closes connection
If an HTTP handler throws an exception while processing a request we
automatically write a JSON error response. If the handler had already
started writing a response, Twisted throws an exception.

We should check for this case and simply abort the connection if there
was an error after the response had started being written.
2018-09-20 13:44:20 +01:00
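A hedged sketch of the guard being described (the helper name is hypothetical; `startedWriting` and `transport.loseConnection()` are standard Twisted request/transport attributes):

```python
import json

def respond_with_json_error(request, code, msg):
    if request.startedWriting:
        # The handler already began streaming a response, so a clean JSON
        # error is no longer possible - just drop the connection.
        if request.transport is not None:
            request.transport.loseConnection()
        return
    request.setResponseCode(code)
    request.setHeader(b"Content-Type", b"application/json")
    request.write(json.dumps({"error": msg}).encode("utf-8"))
    request.finish()
```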
Neil Johnson
23b53b4ef8 Merge pull request #3868 from matrix-org/neilj/fix_room_invite_mail_links
Neilj/fix room invite mail links
2018-09-20 13:32:38 +01:00
Richard van der Hoff
6bb9a4b1d6 changelog 2018-09-20 13:15:20 +01:00
Richard van der Hoff
703de4ec13 Comments and interface cleanup for on_receive_pdu
Add some informative comments about what's going on here.

Also, `sent_to_us_directly` and `get_missing` were doing the same thing (apart
from in `_handle_queued_pdus`, which looks like a bug), so let's get rid of
`get_missing` and use `sent_to_us_directly` consistently.
2018-09-20 13:06:55 +01:00
Amber Brown
1f3f5fcf52 Fix client IPs being broken on Python 3 (#3908) 2018-09-20 20:14:34 +10:00
Erik Johnston
3fd68d533b Merge pull request #3914 from matrix-org/erikj/remove_retry_cache
Remove get_destination_retry_timings cache
2018-09-20 10:54:49 +01:00
Amber Brown
741571cf22 Add a way to run tests in PostgreSQL in Docker (#3699) 2018-09-20 18:12:45 +10:00
Amber Brown
aeca5a5ed5 Add a regression test for logging on failed connections (#3912) 2018-09-20 16:28:18 +10:00
Richard van der Hoff
642199570c Improve the logging when handling a federation transaction (#3904)
Let's try to rationalise the logging that happens when we are processing an
incoming transaction, to make it easier to figure out what is going wrong when
they take ages. In particular:

- make everything start with a [room_id event_id] prefix
- make sure we log a warning when catching exceptions rather than just turning
  them into other, more cryptic, exceptions.
2018-09-19 17:28:18 +01:00
Erik Johnston
ce846bb620 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/faster_typing 2018-09-19 15:08:36 +01:00
Erik Johnston
1a24d4effa Newsfile 2018-09-19 15:03:05 +01:00
Erik Johnston
bbab6ebfd9 Fix up changelog and remove spurious comment 2018-09-19 14:45:14 +01:00
Erik Johnston
392a54128c pep8 2018-09-19 14:37:49 +01:00
Erik Johnston
83ee5592a5 Newsfile 2018-09-19 14:22:59 +01:00
Erik Johnston
b9158ac2bf Remove get_destination_retry_timings cache
Currently we rely on the master to invalidate this cache promptly.
However, after having moved most federation endpoints off of master this
no longer happens, causing outbound federation to get blackholed.

Fixes #3798
2018-09-19 14:22:57 +01:00
Erik Johnston
cb016baa37 Merge pull request #3910 from matrix-org/erikj/update_timeout
Update to use new timeout function everywhere.
2018-09-19 11:45:11 +01:00
Erik Johnston
80d2d50f47 Fixup 2018-09-19 11:19:47 +01:00
Erik Johnston
9407bcf37a Replace custom DeferredTimeoutError with defer.TimeoutError 2018-09-19 11:07:29 +01:00
Erik Johnston
6c48aa0256 Run canceller first to allow it to generate correct error 2018-09-19 11:07:27 +01:00
Erik Johnston
6bbe3d5732 Newsfile 2018-09-19 10:43:13 +01:00
Erik Johnston
a334e1cace Update to use new timeout function everywhere.
The existing deferred timeout helper function (and the one into twisted)
suffer from a bug when a deferred's canceller throws an exception, #3842.

The new helper function doesn't suffer from this problem.
2018-09-19 10:39:40 +01:00
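A rough sketch of such a helper, under the assumption (per this commit and the later "Fix timeout function" one) that the key trick is to resolve our own deferred before calling the possibly-throwing canceller:

```python
from twisted.internet import defer

def timeout_deferred(deferred, timeout, reactor):
    """Return a new Deferred that errbacks with TimeoutError after `timeout` seconds.

    Illustrative only: cancel() is called last and wrapped in try/except so a
    misbehaving canceller (see #3842) cannot leave the caller hanging.
    """
    new_d = defer.Deferred()
    timed_out = [False]

    def time_it_out():
        timed_out[0] = True
        if not new_d.called:
            new_d.errback(defer.TimeoutError("Timed out"))
        try:
            deferred.cancel()
        except Exception:
            pass

    delayed_call = reactor.callLater(timeout, time_it_out)

    def on_result(result):
        if not timed_out[0]:
            delayed_call.cancel()
            new_d.callback(result)

    def on_failure(failure):
        if not timed_out[0]:
            delayed_call.cancel()
            new_d.errback(failure)

    deferred.addCallbacks(on_result, on_failure)
    return new_d
```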
Richard van der Hoff
05b9937cd7 update changelog for #3909 2018-09-19 09:17:54 +01:00
Amber Brown
47c02e6332 Merge pull request #3909 from turt2live/travis/fix-logging-1
Fix matrixfederationclient.py logging: Destination is a string
2018-09-19 18:14:47 +10:00
Amber Brown
3f0d8e6b09 Remove documentation referencing Cygwin (#3873) 2018-09-19 18:14:30 +10:00
Amber Brown
3d6b24fb1b Merge pull request #3907 from matrix-org/rav/set_sni_to_server_name
Set SNI to the server_name, not whatever was in the SRV record
2018-09-19 17:59:33 +10:00
Amber Brown
f773ecbd61 Merge pull request #3903 from matrix-org/rav/increase_get_missing_events_timeout
Bump timeout on get_missing_events request
2018-09-19 17:57:48 +10:00
Travis Ralston
13b31a9baf Changelog 2018-09-18 15:30:25 -06:00
Travis Ralston
35aec19f0a Destination is a string 2018-09-18 15:29:30 -06:00
Richard van der Hoff
38ead946a9 Merge remote-tracking branch 'origin/develop' into neilj/fix_room_invite_mail_links 2018-09-18 19:02:45 +01:00
Richard van der Hoff
a219ce8726 Use directory server for room joins (#3899)
When we do a join, always try the server we used for the alias lookup first.

Fixes #2418
2018-09-18 18:27:37 +01:00
Richard van der Hoff
31c15dcb80 Refactor matrixfederationclient to fix logging (#3906)
We want to wait until we have read the response body before we log the request
as complete, otherwise a confusing thing happens where the request appears to
have completed, but we later fail it.

To do this, we factor the salient details of a request out to a separate
object, which can then keep track of the txn_id, so that it can be logged.
2018-09-18 18:17:15 +01:00
Amber Brown
c600886d47 Merge pull request #3894 from matrix-org/hs/phone_home_py_version
Add python_version phone home stat
2018-09-19 02:40:04 +10:00
Richard van der Hoff
edabc18938 changelog 2018-09-18 17:04:20 +01:00
Richard van der Hoff
b3097396e7 Set SNI to the server_name, not whatever was in the SRV record
Fixes #3843
2018-09-18 17:01:12 +01:00
Richard van der Hoff
8565d9a9fe changelog 2018-09-18 15:05:26 +01:00
Richard van der Hoff
550007cb0e Bump timeout on get_missing_events request 2018-09-18 15:02:51 +01:00
Richard van der Hoff
286d6930b7 Merge pull request #3879 from matrix-org/matthew/fix-autojoin
don't ratelimit autojoins
2018-09-18 13:07:01 +01:00
Richard van der Hoff
892432c818 Merge pull request #3882 from SimmyD/max_upload_docker_var
Add variable for changing the max upload size in Docker container
2018-09-18 13:05:09 +01:00
Richard van der Hoff
1e09a1d48a Merge pull request #3889 from matrix-org/rav/404_on_remove_unknown_alias
Return a 404 when deleting unknown room alias
2018-09-18 12:59:30 +01:00
Richard van der Hoff
ad95ec12ca Merge pull request #3895 from matrix-org/rav/decode_bytes_in_metrics
Fix more b'abcd' noise in metrics
2018-09-18 12:58:49 +01:00
Richard van der Hoff
1f4296efcf Merge pull request #3897 from aaronraimist/synaspse-typo
Fix typo in README, synaspse -> synapse
2018-09-18 11:37:09 +01:00
Aaron Raimist
505abb38f0 Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:07:19 -05:00
Aaron Raimist
4ecdf73fc7 Fix typo in README, synaspse -> synapse
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:05:23 -05:00
Will Hunt
2b41f2ed76 Create 3894.feature 2018-09-17 17:48:48 +01:00
Will Hunt
5baa087312 typo 2018-09-17 17:37:56 +01:00
Will Hunt
b58714789f make pip happy? 2018-09-17 17:35:54 +01:00
Richard van der Hoff
06f2dbbb5d changelog 2018-09-17 17:18:26 +01:00
Richard van der Hoff
ac80cb08fe Fix more b'abcd' noise in metrics 2018-09-17 17:16:50 +01:00
Will Hunt
9a1cceeca9 Use a string for versions 2018-09-17 17:09:06 +01:00
Richard van der Hoff
c7131baefc Merge pull request #3892 from matrix-org/rav/decode_bytes_in_request_logs
Fix some b'abcd' noise in logs and metrics
2018-09-17 17:07:10 +01:00
Richard van der Hoff
f75b9961c6 Reinstate missing null check 2018-09-17 16:52:02 +01:00
Will Hunt
2b39494cd5 Add python_version phone home stat 2018-09-17 16:35:18 +01:00
Richard van der Hoff
0cb7afff35 changelog 2018-09-17 16:17:25 +01:00
Richard van der Hoff
f00a9d2636 Fix some b'abcd' noise in logs and metrics
Python 3 compatibility: make sure that we decode some byte sequences before we
use them to create log lines and metrics labels.
2018-09-17 16:15:42 +01:00
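The Python 3 pitfall being fixed, in miniature (standard library only):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

method = b"GET"  # e.g. a value read straight off the wire under Python 3

logger.info("Handling %s request", method)                  # -> "Handling b'GET' request"
logger.info("Handling %s request", method.decode("ascii"))  # -> "Handling GET request"
```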
Richard van der Hoff
c9c50284d7 README: run python_dependencies with -m
... to stop things which try to import `types` getting `synapse.types` instead
2018-09-17 15:58:47 +01:00
Amber Brown
1e70f1dbab changelog 2018-09-17 22:36:56 +10:00
Amber Brown
b56ef14629 update python 3 changelog 2018-09-17 22:36:41 +10:00
Amber Brown
fe88907d04 version 2018-09-17 22:33:22 +10:00
Richard van der Hoff
c1ae6b1bce changelog 2018-09-17 13:21:08 +01:00
Richard van der Hoff
85a43f4167 Return a 404 when deleting unknown room alias
As per https://github.com/matrix-org/matrix-doc/issues/1675

Fixes https://github.com/matrix-org/synapse/issues/2782
2018-09-17 13:19:00 +01:00
Amber Brown
c6363f7269 Merge pull request #3888 from matrix-org/rav/nuke_nuke_rooms
Remove nuke-room-from-db.sh script
2018-09-17 21:30:25 +10:00
Richard van der Hoff
2a8996b67d changelog 2018-09-17 11:51:38 +01:00
Richard van der Hoff
e7b3b4d8c2 Remove nuke-room-from-db.sh script
The problem with this script is that it is largely untested, entirely
unmaintained, and running it is likely to make your synapse blow up in
exciting ways.

For example, it leaves a bunch of tables with dead values in it, like
event_to_state_groups.

Having it here sends a message that it is a supported part of
synapse, which is absolutely not the case.
2018-09-17 11:42:42 +01:00
Simon Dwyer
0e46ff6904 Add the ability to change MAX_UPLOAD_SIZE for the docker container via environment variables.
Signed-off-by: Simon Dwyer <simon@thedwyers.co>
2018-09-16 13:33:33 +10:00
Simon Dwyer
da864a92c9 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer
f472abd792 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer
9c749a6b61 Added 'MAX_UPLOAD_SIZE' variable and set default to "10M" 2018-09-16 13:12:57 +10:00
Matthew Hodgson
c71b93f2a4 changelog 2018-09-15 22:28:28 +01:00
Matthew Hodgson
d42d79e3c3 don't ratelimit autojoins 2018-09-15 22:27:41 +01:00
Matthew Hodgson
9d13ff4da8 missing changelog 2018-09-15 21:04:10 +01:00
Vincent Breitmoser
c8642720c9 mention libjemalloc in readme (#3877) 2018-09-15 21:03:27 +01:00
Erik Johnston
24efb2a70d Fix timeout function
Turns out deferred.cancel sometimes throws, so we call it last to ensure
that we always resolve the new deferred.
2018-09-15 11:38:39 +01:00
Erik Johnston
c30cfff572 Merge pull request #3875 from matrix-org/erikj/extra_timeouts
Add an awful secondary timeout to fix wedged requests
2018-09-14 19:56:11 +01:00
Erik Johnston
335b23a078 Newsfile 2018-09-14 19:25:58 +01:00
Erik Johnston
fcfe7a850d Add an awful secondary timeout to fix wedged requests
This is an attempt to mitigate #3842 by adding yet-another-timeout
2018-09-14 19:23:07 +01:00
Matthew Hodgson
024be6cf18 don't filter membership events based on history visibility (#3874)
don't filter membership events based on history visibility
as we will already have filtered the messages in the timeline, and state events
are always visible.

and because @erikjohnston said so.
2018-09-14 18:12:52 +01:00
Erik Johnston
3e6e94fe9f Merge pull request #3872 from matrix-org/hawkowl/timeouts-2
timeouts 2: electric boogaloo
2018-09-14 16:58:44 +01:00
Erik Johnston
8b3652831c Merge pull request #3871 from matrix-org/erikj/in_flight_block_metrics
Add in flight real time metrics for Measure blocks
2018-09-14 16:40:31 +01:00
Amber Brown
bc9af88a2d fix 2018-09-15 00:26:00 +10:00
Amber Brown
90f8e606e2 changelog 2018-09-15 00:23:55 +10:00
Erik Johnston
d0f6c1ce21 Remove spurious comment 2018-09-14 15:12:36 +01:00
Erik Johnston
9e2f9a7b57 Measure outbound requests 2018-09-14 15:11:26 +01:00
Erik Johnston
941ac0f085 Newsfile 2018-09-14 15:08:37 +01:00
Erik Johnston
f6e82dcddb Tests 2018-09-14 15:08:37 +01:00
Erik Johnston
0a81038ea0 Add in flight real time metrics for Measure blocks 2018-09-14 15:08:37 +01:00
Travis Ralston
ad9198cc34 Merge pull request #3860 from matrix-org/travis/typo-1
Fix minor typo in exception
2018-09-13 16:32:48 -06:00
Neil Johnson
bef1d6f3bf towncrier 2018-09-13 22:53:35 +01:00
Neil Johnson
7de1989ea2 fix link for case that config.email_riot_base_url is set 2018-09-13 22:43:50 +01:00
Travis Ralston
984db8bb08 Create 3860.misc 2018-09-13 12:06:04 -06:00
Amber Brown
c971aa7b9d fix 2018-09-14 03:57:02 +10:00
Amber Brown
8f08d848f5 fix 2018-09-14 03:53:56 +10:00
Travis Ralston
f1a7264663 Fix minor typo in exception 2018-09-13 11:51:12 -06:00
Amber Brown
7c33ab76da redact better 2018-09-14 03:45:34 +10:00
Amber Brown
63755fa4c2 we do that higher up 2018-09-14 03:21:47 +10:00
Amber Brown
73884ebac5 Merge remote-tracking branch 'origin/develop' into hawkowl/timeouts-2 2018-09-14 03:11:25 +10:00
Amber Brown
7c27c4d51c merge (#3576) 2018-09-14 03:11:11 +10:00
Amber Brown
1c3f4d9ca5 buffer? 2018-09-14 03:09:13 +10:00
Erik Johnston
6c0f8d9d50 Merge pull request #3856 from matrix-org/erikj/speed_up_purge
Make purge history slightly faster
2018-09-13 16:14:46 +01:00
Erik Johnston
ed5331a627 comment 2018-09-13 16:10:56 +01:00
Amber Brown
cb64fe2cb7 Merge pull request #3859 from matrix-org/erikj/add_iterkeys
Fix handling of redacted events from federation
2018-09-14 01:06:44 +10:00
Erik Johnston
13193a6e2b Newsfile 2018-09-13 15:46:45 +01:00
Amber Brown
3126b88d35 fix circleci merged builds (#3858)
* fix

* changelog
2018-09-14 00:44:31 +10:00
Erik Johnston
89a76d1889 Fix handling of redacted events from federation
If we receive an event that doesn't pass their content hash check (e.g.
due to already being redacted) then we hit a bug which causes an
exception to be raised, which then promptly stops the event (and
request) from being processed.

This affects all sorts of federation APIs, including joining rooms with
a redacted state event.
2018-09-13 15:44:12 +01:00
Amber Brown
bfa0b759e0 Attempt to figure out what's going on with timeouts (#3857) 2018-09-14 00:15:51 +10:00
Erik Johnston
9cbd0094f0 pep8 2018-09-13 15:15:35 +01:00
Erik Johnston
9dbe38ea7d Create indices after insertion 2018-09-13 15:05:52 +01:00
Erik Johnston
93139a1fb8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/speed_up_purge 2018-09-13 12:57:09 +01:00
Erik Johnston
e7cd7cb0f0 Newsfile 2018-09-13 12:55:40 +01:00
Erik Johnston
c857f5ef9b Make purge history slightly faster
Don't pull out events that are outliers and won't be deleted, as nothing
should happen to them.
2018-09-13 12:48:10 +01:00
Amber Brown
b7d2fb5eb9 Remove some superfluous logging (#3855) 2018-09-13 19:59:32 +10:00
Neil Johnson
f30a303590 Merge pull request #3846 from matrix-org/neilj/expose-registered-users
expose number of real reserved users
2018-09-12 17:14:04 +01:00
Matthew Hodgson
2ac1abbc7e show heroes if a room has a 'deleted' name/canonical_alias (#3851) 2018-09-12 17:11:05 +01:00
Erik Johnston
fa0d464fa4 Merge pull request #3853 from matrix-org/erikj/log_outbound_each_time
Log outbound requests when we retry
2018-09-12 16:55:40 +01:00
Matthew Hodgson
0403cf0783 argh pep8 2018-09-12 16:54:28 +01:00
Matthew Hodgson
0e200e366d correctly log gappy sync metrics 2018-09-12 16:47:20 +01:00
Matthew Hodgson
11bfc2af1c fix logline 2018-09-12 16:45:42 +01:00
Erik Johnston
3db016b641 Newsfile 2018-09-12 16:25:18 +01:00
Neil Johnson
8decd6233d improve naming 2018-09-12 16:22:15 +01:00
Erik Johnston
8c5b84441b Log outbound requests when we retry 2018-09-12 16:22:14 +01:00
Erik Johnston
54f8616d2c Merge pull request #3841 from matrix-org/erikj/manhole_key_length
Change the manhole SSH key to have more bits
2018-09-12 14:33:39 +01:00
Amber Brown
65cd8ccc79 Add JUnit summaries to CircleCI as well as merged runs (#3704) 2018-09-12 23:29:21 +10:00
Amber Brown
7ca097f77e Port federation/ to py3 (#3847) 2018-09-12 23:23:32 +10:00
Neil Johnson
5cea4e16c7 towncrier 2018-09-12 12:03:31 +01:00
Neil Johnson
0ddf486724 expose number of real reserved users 2018-09-12 11:58:52 +01:00
Amber Brown
546aee7e52 Merge pull request #3835 from krombel/fix_3821
fix VOIP crashes under Python 3
2018-09-12 20:44:18 +10:00
Amber Brown
33716c4aea Merge pull request #3826 from matrix-org/rav/logging_for_keyring
add some logging for the keyring queue
2018-09-12 20:43:47 +10:00
Amber Brown
bc635026c5 Merge pull request #3824 from matrix-org/rav/fix_jwt_import
Fix jwt import check
2018-09-12 20:41:57 +10:00
Amber Brown
02aa41809b Port rest/ to Python 3 (#3823) 2018-09-12 20:41:31 +10:00
Amber Brown
8fd93b5eea Port crypto/ to Python 3 (#3822) 2018-09-12 20:16:31 +10:00
Amber Brown
4073f73edc Merge pull request #3845 from matrix-org/erikj/timeout_reads
Timeout reading body for outbound HTTP requests
2018-09-12 20:16:08 +10:00
Erik Johnston
649c647955 Newsfile 2018-09-12 11:03:42 +01:00
Erik Johnston
4084a774a8 Timeout reading body for outbound HTTP requests 2018-09-12 10:10:20 +01:00
Matthew Hodgson
b041115415 Speed up lazy loading (#3827)
* speed up room summaries by pulling their data from room_memberships rather than room state
* disable LL for incr syncs, and log incr sync stats  (#3840)
2018-09-12 00:50:39 +01:00
Erik Johnston
9a68778ac2 Newsfile 2018-09-11 10:44:40 +01:00
Erik Johnston
9e05c8d309 Change the manhole SSH key to have more bits
Newer versions of the openssh client refuse to connect due to the old
key's length.
2018-09-11 10:42:10 +01:00
Erik Johnston
037a06e8f0 Merge pull request #3834 from mvgorcum/develop
Remove build requirements after building docker image
2018-09-10 16:53:25 +01:00
Jan Christian Grünhage
af10fa6536 add runtime dependencies 2018-09-10 17:39:49 +02:00
Mathijs van Gorcum
e957428a15 Newsfile 2018-09-10 14:53:05 +00:00
Mathijs van Gorcum
e586916cda Move COPY before RUN and merge RUNs 2018-09-10 14:02:42 +00:00
Krombel
3572a206d3 add changelog 2018-09-10 14:33:08 +02:00
Krombel
7bc22539ff fix VOIP crashes under Python 3 (#3821) 2018-09-10 14:30:08 +02:00
Mathijs van Gorcum
b7e7712f07 Remove build requirements after building 2018-09-10 12:21:42 +00:00
Richard van der Hoff
1e4c7fff5f changelog 2018-09-07 16:37:30 +01:00
Amber Brown
9a5ea511b5 Merge pull request #3810 from matrix-org/erikj/send_tags_down_sync_on_join
Send existing room tags down sync on join
2018-09-07 23:28:42 +10:00
Richard van der Hoff
e33a538af3 Merge pull request #3783 from cwmke/develop
Add apache vhost config to Readme
2018-09-07 14:25:23 +01:00
Richard van der Hoff
edda9f5cac changelog 2018-09-07 14:23:35 +01:00
Richard van der Hoff
b8ad756bd0 Fix jwt import check
This handy code attempted to check that we could import jwt, but utterly failed
to check it was the right jwt.

Fixes https://github.com/matrix-org/synapse/issues/3793
2018-09-07 14:20:54 +01:00
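One way to implement the kind of check the message above describes, as a hedged sketch (the attribute probe and error message are assumptions, not necessarily synapse's exact fix): importing `jwt` can succeed even when an unrelated package is installed, so also verify it exposes the PyJWT API.

```python
try:
    import jwt
except ImportError:
    jwt = None

# PyJWT exposes encode()/decode(); an unrelated package named "jwt" may not.
if jwt is None or not hasattr(jwt, "decode"):
    raise Exception("jwt_login requires the PyJWT library to be installed")
```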
Amber Brown
771d213ac5 Merge branch 'master' into develop 2018-09-07 21:45:38 +10:00
Amber Brown
b60749a1ec changelog 2018-09-07 21:41:57 +10:00
Amber Brown
6febd8e8f7 version 2018-09-07 21:40:57 +10:00
Richard van der Hoff
cd7ef43872 clearer logging when things fail, too 2018-09-06 23:56:47 +01:00
Richard van der Hoff
806964b5de add some logging for the keyring queue
why is it so damn slow?
2018-09-06 18:51:06 +01:00
Amber Brown
52ec6e9dfa Port tests/ to Python 3 (#3808) 2018-09-07 02:58:18 +10:00
Neil Johnson
c5440b2ca0 Merge pull request #3800 from matrix-org/neilj/remove-guests-from-mau-count
guest users should not be part of mau total
2018-09-06 17:45:55 +01:00
Neil Johnson
84a750e0c3 ensure guests never enter mau list 2018-09-06 17:22:53 +01:00
Erik Johnston
7298efd361 Newsfile 2018-09-06 17:13:33 +01:00
Erik Johnston
f60c9e2a01 Don't send empty tags list down sync 2018-09-06 17:01:41 +01:00
Erik Johnston
7baf66ef5d Send existing room tags down sync on join
When a user joined a room any existing tags were not sent down the sync
stream. Ordinarily this isn't a problem because the user needs to be in
the room to have set tags in it, however synapse will sometimes add tags
for a user to a room, e.g. for server notices, which need to come down
sync.
2018-09-06 16:46:51 +01:00
Amber Brown
654324eded Merge pull request #3805 from matrix-org/erikj/limit_transaction_pdus_edus
Limit the number of PDUs/EDUs per federation transaction
2018-09-07 01:33:31 +10:00
Amber Brown
1d371fc5b3 Merge pull request #3806 from matrix-org/erikj/limit_postgres_travis
Only start postgres instance for postgres tests on Travis CI
2018-09-07 01:06:21 +10:00
Erik Johnston
b07a2cbee9 Spelling 2018-09-06 15:53:02 +01:00
Amber Brown
70fd75cd1d Merge pull request #3788 from matrix-org/erikj/remove_conn_id
Remove conn_id from repl prometheus metrics
2018-09-07 00:48:19 +10:00
Erik Johnston
5d848992bf Newsfile 2018-09-06 15:28:46 +01:00
Erik Johnston
3b4223aa23 Only start postgres instance for postgres tests on Travis CI 2018-09-06 15:27:54 +01:00
Erik Johnston
28f5bfdcf7 Newsfile 2018-09-06 15:25:58 +01:00
Amber Brown
ee7c8bd2b5 Merge pull request #3795 from matrix-org/erikj/faster_sync_state
Use iter* during sync state calculations
2018-09-07 00:24:43 +10:00
Erik Johnston
6707a3212c Limit the number of PDUs/EDUs per federation transaction 2018-09-06 15:23:55 +01:00
Amber Brown
135f3b4390 Merge pull request #3804 from matrix-org/rav/fix_openssl_dep
bump dep on pyopenssl to 16.x
2018-09-07 00:23:39 +10:00
Amber Brown
2608ebc04c Port handlers/ to Python 3 (#3803) 2018-09-07 00:22:23 +10:00
Erik Johnston
599f65bb89 Merge branch 'master' of github.com:matrix-org/synapse into release-v0.33.4 2018-09-06 14:13:58 +01:00
Erik Johnston
417e7077aa Bump version and changelog 2018-09-06 14:08:55 +01:00
Erik Johnston
d64b24dfe6 Merge tag 'v0.33.3.1' into release-v0.33.4
Synapse 0.33.3.1 (2018-09-06)
=============================

SECURITY FIXES
--------------

- Fix an issue where event signatures were not always correctly validated ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
- Fix an issue where server_acls could be circumvented for incoming events ([\#3796](https://github.com/matrix-org/synapse/issues/3796))

Internal Changes
----------------

- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
2018-09-06 14:08:33 +01:00
Richard van der Hoff
4f8baab0c4 Merge branch 'master' into develop 2018-09-06 13:05:22 +01:00
Richard van der Hoff
b3c2ebba32 changelog 2018-09-06 12:55:17 +01:00
Richard van der Hoff
625542878d bump dep on pyopenssl to 16.x 2018-09-06 12:53:15 +01:00
Richard van der Hoff
2fd17b5ad1 Merge tag 'v0.33.3.1'
Synapse 0.33.3.1 (2018-09-06)
=============================

SECURITY FIXES
--------------

- Fix an issue where event signatures were not always correctly validated ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
- Fix an issue where server_acls could be circumvented for incoming events ([\#3796](https://github.com/matrix-org/synapse/issues/3796))

Internal Changes
----------------

- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
2018-09-06 12:31:35 +01:00
Amber Brown
10587f7f32 Merge pull request #3802 from matrix-org/jcgruenhage/docker-unignore-synctl
remove synctl from .dockerignore
2018-09-06 19:47:33 +10:00
Richard van der Hoff
80189ed27c prepare v0.33.3.1 2018-09-06 10:26:23 +01:00
Jan Christian Grünhage
0cd7b209e2 Create 3802.misc 2018-09-06 10:24:59 +01:00
Jan Christian Grünhage
78d1042c10 remove synctl from .dockerignore 2018-09-06 10:24:59 +01:00
Jan Christian Grünhage
af3125226d Create 3802.misc 2018-09-06 10:46:00 +02:00
Jan Christian Grünhage
9c8cd855da remove synctl from .dockerignore 2018-09-06 10:39:55 +02:00
Neil Johnson
92657be7d0 towncrier 2018-09-05 22:33:45 +01:00
Neil Johnson
61b05727fa guest users should not be part of mau total 2018-09-05 22:30:36 +01:00
Richard van der Hoff
dfba1d843d Merge pull request #3790 from matrix-org/rav/respect_event_format_in_filter
Implement 'event_format' filter param in /sync
2018-09-05 16:24:14 +01:00
Erik Johnston
2254790ae4 Newsfile 2018-09-05 16:21:18 +01:00
Erik Johnston
7419764351 Use iter* during sync state calculations 2018-09-05 16:19:50 +01:00
Amber Brown
2d2828dcbc Port http/ to Python 3 (#3771) 2018-09-06 00:10:47 +10:00
Richard van der Hoff
c127c8d042 Fix origin handling for pushed transactions
Use the actual origin for push transactions, rather than whatever the remote
server claimed.
2018-09-05 13:08:07 +01:00
Richard van der Hoff
804dd41e18 Check that signatures on events are valid
We should check that both the sender's server, and the server which created the
event_id (which may be different from whatever the remote server has told us
the origin is), have signed the event.
2018-09-05 13:08:07 +01:00
Erik Johnston
5f02017aea Improve performance of getting typing updates for replication
Fetching the list of all new typing notifications involved iterating
over all rooms and comparing their serial. Let's move to using a stream
change cache, like we do for other streams.
2018-09-05 10:20:40 +01:00
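A generic sketch of the idea (this is not synapse's `StreamChangeCache`): remember the stream position at which each room last changed, so "what changed since X?" no longer means scanning every room's serial.

```python
class SimpleStreamChangeCache:
    def __init__(self):
        self._last_change = {}  # room_id -> stream position of its last change

    def entity_has_changed(self, room_id, stream_pos):
        # Record that this room's typing state changed at stream_pos.
        self._last_change[room_id] = stream_pos

    def get_entities_changed_since(self, stream_pos):
        # Only rooms recorded as changing after stream_pos need to be sent
        # to replication clients.
        return [r for r, pos in self._last_change.items() if pos > stream_pos]
```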
Richard van der Hoff
c91bd295f5 changelog 2018-09-04 15:23:19 +01:00
Richard van der Hoff
87c18d12ee Implement 'event_format' filter param in /sync
This has been specced and part-implemented; let's implement it for /sync (but
no other endpoints yet :/).
2018-09-04 15:20:09 +01:00
Neil Johnson
a6cf7d9d9a Merge pull request #3789 from matrix-org/neilj/improve_threepid_error_strings
improve human readable error messages
2018-09-04 13:16:00 +00:00
Amber Brown
7e9ced4178 version and towncrier 2018-09-04 21:12:04 +10:00
Neil Johnson
70fc599ede towncrier 2018-09-04 12:08:27 +01:00
Neil Johnson
bae37cd811 improve human readable error message 2018-09-04 12:07:00 +01:00
Neil Johnson
c42f7fd7b9 improve human readable error messages 2018-09-04 12:03:17 +01:00
Erik Johnston
3e242dc149 Remove conn_id 2018-09-04 11:45:52 +01:00
Erik Johnston
87b111f96a Newsfile 2018-09-03 17:26:15 +01:00
Erik Johnston
b13836da7f Remove conn_id from repl prometheus metrics
`conn_id` gets set to a random string, and so we end up filling up
prometheus with tonnes of data series, which is bad.
2018-09-03 17:22:49 +01:00
Amber Brown
77055dba92 Fix tests on postgresql (#3740) 2018-09-04 02:21:48 +10:00
Erik Johnston
567363e497 Merge pull request #3737 from matrix-org/erikj/remove_redundant_state_func
Remove unnecessary resolve_events_with_state_map
2018-09-03 16:19:41 +01:00
Erik Johnston
06ee2b7cc5 Newsfile 2018-09-03 15:43:01 +01:00
Amber Brown
30cf40ff30 Merge pull request #3378 from NickEckardt/develop
matrix-synapse-auto-deploy is no longer maintained.
2018-09-03 22:47:23 +10:00
Amber Brown
905f8de673 Create 3378.misc 2018-09-03 22:17:06 +10:00
Amber Brown
4fc4b881c5 Merge branch 'develop' into develop 2018-09-03 21:08:35 +10:00
Colin W
81942c109d Remove end '/'s 2018-09-02 23:28:03 -05:00
Colin W
a395f1ddb3 Update readme on develop branch 2018-09-02 17:30:07 -05:00
Neil Johnson
99178f8602 Merge pull request #3777 from matrix-org/neilj/fix_register_user_registration
fix bug where preserved threepid user comes to sign up and server is …
2018-08-31 16:31:48 +00:00
Neil Johnson
301cb60d0b assert rather than warn 2018-08-31 17:29:35 +01:00
Neil Johnson
0b01281e77 move threepid checker to config, add missing yields 2018-08-31 17:11:11 +01:00
Neil Johnson
e8e540630e fix reference to is_threepid_reserved 2018-08-31 16:09:15 +01:00
Neil Johnson
a796bdd35e typo 2018-08-31 15:57:33 +01:00
Neil Johnson
09f3cf1a7e ensure post registration auth checks do not fail erroneously 2018-08-31 15:42:51 +01:00
Neil Johnson
3d6aa06577 news fragment 2018-08-31 10:56:37 +01:00
Neil Johnson
ea068d6f3c fix bug where preserved threepid user comes to sign up and server is mau blocked 2018-08-31 10:49:14 +01:00
Amber Brown
14e4d4f4bf Port storage/ to Python 3 (#3725) 2018-08-31 00:19:58 +10:00
Richard van der Hoff
475253a88e Merge pull request #3764 from matrix-org/rav/close_db_conn_after_init
Make sure that we close db connections opened during init
2018-08-30 10:47:27 +01:00
Erik Johnston
7f0399586d Merge pull request #3768 from krombel/fix_3445
fix #3445 - do not use itervalues() on SortedDict()
2018-08-29 16:29:57 +01:00
Krombel
8c0c51ecb3 changelog 2018-08-29 16:49:26 +02:00
Krombel
79a8a347a6 fix #3445
itervalues(d) calls d.itervalues() [PY2] and d.values() [PY3]
but SortedDict only implements d.values()
2018-08-29 16:28:25 +02:00
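In miniature, the incompatibility described above (using the real `sortedcontainers` package):

```python
from sortedcontainers import SortedDict

d = SortedDict({1: "a", 2: "b"})

# Works on both Python 2 and 3: SortedDict implements values().
values = list(d.values())

# six.itervalues(d) would dispatch to d.itervalues() on Python 2, which
# SortedDict does not implement - hence the fix is to call values() directly.
```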
Amber Brown
82276a18d1 Update CONTRIBUTING to clarify miscs & Markdown (#3730) 2018-08-29 19:06:59 +10:00
Matthew Hodgson
b1580f50fe don't return non-LL-member state in incremental sync state blocks (#3760)
don't return non-LL-member state in incremental sync state blocks
2018-08-28 23:25:58 +01:00
Richard van der Hoff
414fa36f3e Fix up tests 2018-08-28 17:21:05 +01:00
Richard van der Hoff
32eb1dedd2 use abc.abstractproperty
This gives clearer messages when someone gets it wrong
2018-08-28 17:10:43 +01:00
Richard van der Hoff
71990b3cae changelog 2018-08-28 13:42:20 +01:00
Richard van der Hoff
0b07f02e19 Make sure that we close db connections opened during init
We should explicitly close any db connections we open, because failing to do so
can block other transactions as per
https://github.com/matrix-org/synapse/issues/3682.

Let's also try to factor out some of the boilerplate by having server classes
define their datastore class rather than duplicating the whole of `setup`.
2018-08-28 13:39:49 +01:00
Erik Johnston
9fbaed325f Merge pull request #3758 from matrix-org/erikj/admin_contact
Change admin_uri to admin_contact in config and errors
2018-08-24 17:15:32 +01:00
Erik Johnston
9db2476991 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/admin_contact 2018-08-24 17:00:37 +01:00
Erik Johnston
69f7f1418f Newsfile 2018-08-24 16:54:06 +01:00
Erik Johnston
48b04c4a4c Merge pull request #3756 from matrix-org/erikj/fix_tags_server_notices
Fix checking if service notice room is already tagged
2018-08-24 16:53:11 +01:00
Erik Johnston
08abe8e13c Newsfile 2018-08-24 16:52:52 +01:00
Erik Johnston
05077e06fa Change admin_uri to admin_contact in config and errors 2018-08-24 16:51:27 +01:00
Erik Johnston
01a5a8b9e3 Fix checking if service notice room is already tagged
This manifested in synapse repeatedly setting the tag for the room
2018-08-24 16:22:37 +01:00
Erik Johnston
897d976c1e Merge pull request #3755 from matrix-org/erikj/fix_server_notice_tags
Fix up tagging server notice rooms.
2018-08-24 15:04:41 +01:00
Erik Johnston
45fc0ead10 Newsfile 2018-08-24 15:04:29 +01:00
Erik Johnston
cdd24449ee Ensure we wake up /sync when we add tag to notice room 2018-08-24 14:50:03 +01:00
Erik Johnston
14d49c51db Make content of tag an empty object rather than null 2018-08-24 14:44:16 +01:00
Erik Johnston
84b4e76fed Merge pull request #3754 from matrix-org/erikj/fix_whitelist
Allow federation_domain_whitelist to be an empty list
2018-08-24 12:23:39 +01:00
Erik Johnston
c780d84d66 Newsfile 2018-08-24 12:13:50 +01:00
Erik Johnston
1d67b13674 Fix bug when federation_domain_whitelist is an empty list
Outbound federation was incorrectly allowed when the config option was
set to an empty list
2018-08-24 12:13:12 +01:00
Erik Johnston
92d50e3c2a Merge pull request #3753 from matrix-org/erikj/fix_no_server_noticse
Fix bug where we broke sync when using limit_usage_by_mau
2018-08-24 11:56:08 +01:00
Richard van der Hoff
e94cdbaecf Merge pull request #3751 from matrix-org/rav/twisted_17
Pin to twisted 17.1 or later
2018-08-24 11:55:52 +01:00
Erik Johnston
9b1bc593c5 Newsfile 2018-08-24 11:36:39 +01:00
Erik Johnston
7f147d623b Fix bug where we broke sync when using limit_usage_by_mau
We assumed that we always had service notices configured, but that is
not always true
2018-08-24 11:33:50 +01:00
Erik Johnston
15e8dd2ccc Merge pull request #3749 from matrix-org/erikj/add_trial_users
Implement trial users
2018-08-24 10:10:58 +01:00
Richard van der Hoff
05fe8a6c1c changelog 2018-08-24 10:04:52 +01:00
Richard van der Hoff
f584d6108f Pin to twisted 17.1 or later
Fixes https://github.com/matrix-org/synapse/issues/3741.
2018-08-24 10:02:31 +01:00
Erik Johnston
28d7b546cb Newsfile 2018-08-23 19:18:52 +01:00
Erik Johnston
f2cbbda956 Unit tests 2018-08-23 19:17:19 +01:00
Erik Johnston
cd77270a66 Implement trial users 2018-08-23 19:17:19 +01:00
Erik Johnston
242a0483eb Merge pull request #3747 from matrix-org/erikj/fix_multiple_sends_notice
Fix bug where we resent "limit exceeded" server notices
2018-08-23 18:10:08 +01:00
Erik Johnston
60cf1c6e16 Newsfile 2018-08-23 16:34:32 +01:00
Erik Johnston
7e6e588e60 Fix bug where we resent "limit exceeded" server notices
This was due to a bug where we mutated a cached event's contents
2018-08-23 16:21:20 +01:00
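A standalone illustration of the cached-object-mutation pitfall described above (this is not Synapse's event cache): handing out the cached dict directly means any caller that edits it silently rewrites the cache for everyone else.

```python
import copy

_event_cache = {
    "$notice1:example.com": {"type": "m.room.message", "content": {"body": "limit exceeded"}},
}


def get_event(event_id):
    # Dangerous: callers can mutate the cached entry in place.
    return _event_cache[event_id]


def get_event_copy(event_id):
    # Safer: hand out a copy (or use immutable/frozen events).
    return copy.deepcopy(_event_cache[event_id])


ev = get_event_copy("$notice1:example.com")
ev["content"]["body"] = "edited locally"
assert _event_cache["$notice1:example.com"]["content"]["body"] == "limit exceeded"
```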
Travis Ralston
1ca2744621 Merge pull request #3734 from matrix-org/travis/worker-docs
Reference that the federation_reader needs the HTTP replication port set
2018-08-23 07:51:46 -06:00
Erik Johnston
dddb5aa7bb Merge pull request #3746 from matrix-org/erikj/mau_fixups
Fix MAU invalidation due to missing yield
2018-08-23 10:55:14 +01:00
Erik Johnston
4eb8408ed2 Newsfile 2018-08-23 10:46:13 +01:00
Erik Johnston
c5842dff1a Actually run the tests 2018-08-23 10:35:54 +01:00
Erik Johnston
7a0da69eee Add missing yield 2018-08-23 10:28:12 +01:00
Erik Johnston
a5806aba27 Merge pull request #3680 from matrix-org/neilj/server_notices_on_blocking
server notices on resource limit blocking
2018-08-22 17:18:28 +01:00
Erik Johnston
fd2dbf1836 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-22 17:06:10 +01:00
Erik Johnston
9643a6f7f2 Update notice format 2018-08-22 17:00:29 +01:00
Erik Johnston
5c261107c9 Remove unnecessary resolve_events_with_state_map
We only ever used the synchronous resolve_events_with_state_map in one
place, which is trivial to replace with the async version.
2018-08-22 15:41:15 +01:00
Richard van der Hoff
c7181dcc6c Merge branch 'master' into develop 2018-08-22 14:37:11 +01:00
Richard van der Hoff
74854a9719 Use recaptcha_ajax.js directly from Google
This was originally done in commit c75b71a397,
but got reverted on this branch due to the PR (#3677) being based on the wrong
branch.

We're ready to merge this to master now, so let's make it match
release-v0.33.3.
2018-08-22 14:30:49 +01:00
Richard van der Hoff
48fec67536 Merge tag 'v0.33.3'
Features
--------

- Add support for the SNI extension to federation TLS connections. Thanks to @vojeroen! ([\#3439](https://github.com/matrix-org/synapse/issues/3439))
- Add /_media/r0/config ([\#3184](https://github.com/matrix-org/synapse/issues/3184))
- speed up /members API and add `at` and `membership` params as per MSC1227 ([\#3568](https://github.com/matrix-org/synapse/issues/3568))
- implement `summary` block in /sync response as per MSC688 ([\#3574](https://github.com/matrix-org/synapse/issues/3574))
- Add lazy-loading support to /messages as per MSC1227 ([\#3589](https://github.com/matrix-org/synapse/issues/3589))
- Add ability to limit number of monthly active users on the server ([\#3633](https://github.com/matrix-org/synapse/issues/3633))
- Support more federation endpoints on workers ([\#3653](https://github.com/matrix-org/synapse/issues/3653))
- Basic support for room versioning ([\#3654](https://github.com/matrix-org/synapse/issues/3654))
- Ability to disable client/server Synapse via conf toggle ([\#3655](https://github.com/matrix-org/synapse/issues/3655))
- Ability to whitelist specific threepids against monthly active user limiting ([\#3662](https://github.com/matrix-org/synapse/issues/3662))
- Add some metrics for the appservice and federation event sending loops ([\#3664](https://github.com/matrix-org/synapse/issues/3664))
- Where server is disabled, block ability for locked out users to read new messages ([\#3670](https://github.com/matrix-org/synapse/issues/3670))
- set admin uri via config, to be used in error messages where the user should contact the administrator ([\#3687](https://github.com/matrix-org/synapse/issues/3687))
- Synapse's presence functionality can now be disabled with the "use_presence" configuration option. ([\#3694](https://github.com/matrix-org/synapse/issues/3694))
- For resource limit blocked users, prevent writing into rooms ([\#3708](https://github.com/matrix-org/synapse/issues/3708))

Bugfixes
--------

- Fix occasional glitches in the synapse_event_persisted_position metric ([\#3658](https://github.com/matrix-org/synapse/issues/3658))
- Fix bug on deleting 3pid when using identity servers that don't support unbind API ([\#3661](https://github.com/matrix-org/synapse/issues/3661))
- Make the tests pass on Twisted < 18.7.0 ([\#3676](https://github.com/matrix-org/synapse/issues/3676))
- Don’t ship recaptcha_ajax.js, use it directly from Google ([\#3677](https://github.com/matrix-org/synapse/issues/3677))
- Fixes test_reap_monthly_active_users so it passes under postgres ([\#3681](https://github.com/matrix-org/synapse/issues/3681))
- Fix mau blocking calculation bug on login ([\#3689](https://github.com/matrix-org/synapse/issues/3689))
- Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users ([\#3692](https://github.com/matrix-org/synapse/issues/3692))
- Improve HTTP request logging to include all requests ([\#3700](https://github.com/matrix-org/synapse/issues/3700))
- Avoid timing out requests while we are streaming back the response ([\#3701](https://github.com/matrix-org/synapse/issues/3701))
- Support more federation endpoints on workers ([\#3705](https://github.com/matrix-org/synapse/issues/3705), [\#3713](https://github.com/matrix-org/synapse/issues/3713))
- Fix "Starting db txn 'get_all_updated_receipts' from sentinel context" warning ([\#3710](https://github.com/matrix-org/synapse/issues/3710))
- Fix bug where `state_cache` cache factor ignored environment variables ([\#3719](https://github.com/matrix-org/synapse/issues/3719))
- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs ([\#3723](https://github.com/matrix-org/synapse/issues/3723))
- Fix bug introduced in v0.33.3rc1 which made the ToS give a 500 error ([\#3732](https://github.com/matrix-org/synapse/issues/3732))

Deprecations and Removals
-------------------------

- The Shared-Secret registration method of the legacy v1/register REST endpoint has been removed. For a replacement, please see [the admin/register API documentation](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/register_api.rst). ([\#3703](https://github.com/matrix-org/synapse/issues/3703))

Internal Changes
----------------

- The test suite now can run under PostgreSQL. ([\#3423](https://github.com/matrix-org/synapse/issues/3423))
- Refactor HTTP replication endpoints to reduce code duplication ([\#3632](https://github.com/matrix-org/synapse/issues/3632))
- Tests now correctly execute on Python 3. ([\#3647](https://github.com/matrix-org/synapse/issues/3647))
- Sytests can now be run inside a Docker container. ([\#3660](https://github.com/matrix-org/synapse/issues/3660))
- Port over enough to Python 3 to allow the sytests to start. ([\#3668](https://github.com/matrix-org/synapse/issues/3668))
- Update docker base image from alpine 3.7 to 3.8. ([\#3669](https://github.com/matrix-org/synapse/issues/3669))
- Rename synapse.util.async to synapse.util.async_helpers to mitigate async becoming a keyword on Python 3.7. ([\#3678](https://github.com/matrix-org/synapse/issues/3678))
- Synapse's tests are now formatted with the black autoformatter. ([\#3679](https://github.com/matrix-org/synapse/issues/3679))
- Implemented a new testing base class to reduce test boilerplate. ([\#3684](https://github.com/matrix-org/synapse/issues/3684))
- Rename MAU prometheus metrics ([\#3690](https://github.com/matrix-org/synapse/issues/3690))
- add new error type ResourceLimit ([\#3707](https://github.com/matrix-org/synapse/issues/3707))
- Logcontexts for replication command handlers ([\#3709](https://github.com/matrix-org/synapse/issues/3709))
- Update admin register API documentation to reference a real user ID. ([\#3712](https://github.com/matrix-org/synapse/issues/3712))
2018-08-22 14:28:55 +01:00
Richard van der Hoff
3504982cb7 changelog for 0.33.3 2018-08-22 14:07:44 +01:00
Richard van der Hoff
4e5a4549b6 bump version to 0.33.3 2018-08-22 14:07:10 +01:00
Richard van der Hoff
9b7d9d8ba0 Update attributions and PR links in changelog 2018-08-22 14:06:20 +01:00
Erik Johnston
db10f553ba Merge pull request #3724 from Half-Shot/hs/guest-fetch-event
Allow guests to use /rooms/:roomId/event/:eventId
2018-08-22 13:41:08 +01:00
Erik Johnston
764030cf63 Merge pull request #3659 from matrix-org/erikj/split_profiles
Allow profile updates to happen on workers
2018-08-22 11:35:55 +01:00
Erik Johnston
8432e2ebd7 Rename WorkerProfileHandler to BaseProfileHandler 2018-08-22 10:13:40 +01:00
Erik Johnston
a81f140880 Add assert to ensure handler is only run on master 2018-08-22 10:11:21 +01:00
Erik Johnston
47b25ba5f3 Remove redundant vars 2018-08-22 10:09:05 +01:00
Erik Johnston
3bf8bab8f9 Merge pull request #3673 from matrix-org/erikj/refactor_state_handler
Refactor state module to support multiple room versions
2018-08-22 10:04:55 +01:00
Richard van der Hoff
a4cf660a32 Merge pull request #3735 from matrix-org/travis/federation-spelling
limt -> limit
2018-08-22 09:34:21 +01:00
Richard van der Hoff
0d568ff403 Merge remote-tracking branch 'origin/release-v0.33.3' into develop 2018-08-22 09:15:44 +01:00
Richard van der Hoff
d7585a4c83 Merge pull request #3732 from matrix-org/rav/fix_gdpr_consent
Fix 500 error from /consent form
2018-08-22 09:15:06 +01:00
Travis Ralston
365f588d98 Create 3734.misc 2018-08-21 23:38:38 -06:00
Travis Ralston
eb7be75a10 Create 3735.misc 2018-08-21 23:38:06 -06:00
Travis Ralston
dd0ac1614c Reference that the federation_reader needs the HTTP replication port set 2018-08-21 23:35:50 -06:00
Matthew Hodgson
bb81e78ec6 Split the state_group_cache in two (#3726)
Splits the state_group_cache in two.

One half contains normal state events; the other contains member events.

The idea is that the lazyloading common case of: "I want a subset of member events plus all of the other state" can be accomplished efficiently by splitting the cache into two, and asking for "all events" from the non-members cache, and "just these keys" from the members cache.  This means we can avoid having to make DictionaryCache aware of these sort of complicated queries, whilst letting LL requests benefit from the caching.

Previously we were unable to sensibly use the caching and had to pull all state from the DB irrespective of the filtering, which made things slow.  Hopefully fixes https://github.com/matrix-org/synapse/issues/3720.
2018-08-22 00:56:37 +02:00
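A toy sketch of the split-cache idea using plain dicts (the real DictionaryCache-backed implementation is considerably more involved): keep member events and other state in separate maps, then answer a lazy-loading request as "all non-member state, plus just these members".

```python
def split_state(state):
    """Split a state map {(event_type, state_key): event_id} into
    member and non-member halves."""
    members, non_members = {}, {}
    for key, event_id in state.items():
        (members if key[0] == "m.room.member" else non_members)[key] = event_id
    return members, non_members


state = {
    ("m.room.create", ""): "$create",
    ("m.room.name", ""): "$name",
    ("m.room.member", "@alice:example.com"): "$alice",
    ("m.room.member", "@bob:example.com"): "$bob",
}
member_cache, other_cache = split_state(state)

# Lazy-loading /sync: all of the non-member state, plus only the wanted members.
wanted_members = {("m.room.member", "@alice:example.com")}
result = dict(other_cache)
result.update({k: v for k, v in member_cache.items() if k in wanted_members})
assert ("m.room.member", "@bob:example.com") not in result
```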
Richard van der Hoff
afb4b490a4 changelog 2018-08-21 23:19:14 +01:00
Richard van der Hoff
f7bf181a90 fix another consent encoding fail 2018-08-21 23:14:25 +01:00
Richard van der Hoff
f7baff6f7b Fix 500 error from /consent form
Fixes #3731
2018-08-21 22:47:07 +01:00
Richard van der Hoff
a52f276990 Merge tag 'v0.33.3rc2' into develop
Bugfixes
--------

- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs
([\#3723](https://github.com/matrix-org/synapse/issues/3723))
2018-08-21 20:30:09 +01:00
Erik Johnston
46c832eaac Merge pull request #3727 from matrix-org/erikj/dont_error_on_missing_keys
Don't log exceptions when failing to fetch server keys
2018-08-21 17:07:20 +01:00
Erik Johnston
2b1a4b2596 Merge pull request #3722 from matrix-org/erikj/bg_process_iteration
LaterGauge needs to call thread safe functions
2018-08-21 16:47:34 +01:00
Erik Johnston
cd6937fb26 Fix typo 2018-08-21 16:28:10 +01:00
Erik Johnston
c2c153dd3b Log more detail when we fail to authenticate request 2018-08-21 11:42:49 +01:00
Erik Johnston
79d3b4689e Newsfile 2018-08-21 11:21:48 +01:00
Erik Johnston
808d8e06aa Don't log exceptions when failing to fetch server keys
Not being able to resolve or connect to remote servers is an expected
error, so we shouldn't log at ERROR with stacktraces.
2018-08-21 11:19:26 +01:00
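A generic sketch of the logging change, with a ConnectionError standing in for the real resolution/connection failures (illustrative code, not Synapse's keyring):

```python
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger(__name__)


def fetch_server_key(server_name, do_fetch):
    try:
        return do_fetch(server_name)
    except ConnectionError as e:
        # Failing to resolve or connect to a remote server is expected from
        # time to time, so log a warning (no exc_info) rather than an ERROR
        # with a full stack trace.
        logger.warning("Error fetching keys for %s: %s", server_name, e)
        return None


def flaky_fetch(name):
    raise ConnectionError("DNS lookup failed for " + name)


fetch_server_key("unreachable.example.com", flaky_fetch)
```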
Erik Johnston
3f6762f0bb isort 2018-08-21 09:38:38 +01:00
Amber Brown
3b5b64ac99 changelog 2018-08-21 03:48:55 +10:00
Amber Brown
23d7e63a4a Merge pull request #3723 from matrix-org/rav/fix_logcontext_disaster
Fix exceptions when a connection is closed before we read the headers
2018-08-21 03:47:52 +10:00
Will Hunt
8bd585b09b 3724.feature 2018-08-20 18:27:42 +01:00
Richard van der Hoff
012d612f9d changelog 2018-08-20 18:26:27 +01:00
Will Hunt
f89f6b7c09 Allow guests to access /rooms/:roomId/event/:eventId 2018-08-20 18:25:54 +01:00
Richard van der Hoff
be6527325a Fix exceptions when a connection is closed before we read the headers
This fixes bugs introduced in #3700, by making sure that we behave sanely
when an incoming connection is closed before the headers are read.
2018-08-20 18:21:10 +01:00
Richard van der Hoff
55e6bdf287 Robustness fix for logcontext filter
Make the logcontext filter not explode if it somehow ends up with a logcontext
of None, since that infinite-loops the whole logging system.
2018-08-20 18:20:07 +01:00
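A simplified, self-contained version of the kind of defensive filter described here (Synapse's actual LoggingContextFilter differs in detail): if no logcontext is available, fall back to a placeholder instead of raising, since an exception inside a logging filter recurses back into the logging system.

```python
import logging
import threading

_context = threading.local()  # toy stand-in for Synapse's logcontext tracking


class ContextFilter(logging.Filter):
    def filter(self, record):
        context = getattr(_context, "current", None)
        # With no context (or a context without a request), use "none" rather
        # than blowing up while formatting the log record.
        record.request = getattr(context, "request", None) or "none"
        return True


handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request)s - %(message)s"))
handler.addFilter(ContextFilter())

log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("still logs cleanly even with no logcontext set")
```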
Erik Johnston
e2c0aa2c26 Newsfile 2018-08-20 17:40:59 +01:00
Erik Johnston
b01a755498 Make the in flight requests metrics thread safe 2018-08-20 17:27:52 +01:00
Erik Johnston
1058d14127 Make the in flight background process metrics thread safe 2018-08-20 17:27:24 +01:00
Amber Brown
80bf7d3580 changelog 2018-08-21 00:01:14 +10:00
Amber Brown
9a2f960736 version 2018-08-21 00:00:19 +10:00
Amber Brown
324525f40c Port over enough to get some sytests running on Python 3 (#3668) 2018-08-20 23:54:49 +10:00
Erik Johnston
4d664278af Merge branch 'develop' of github.com:matrix-org/synapse into erikj/refactor_state_handler 2018-08-20 14:49:43 +01:00
Erik Johnston
8dee601054 Remove redundant room_version checks 2018-08-20 14:48:53 +01:00
Erik Johnston
e21c368b8b Revert spurious change 2018-08-20 13:54:51 +01:00
Erik Johnston
cf6f9a8b53 Merge pull request #3719 from matrix-org/erikj/use_cache_fact
Use get_cache_factor_for function for `state_cache`
2018-08-20 13:33:35 +01:00
Erik Johnston
48a910e128 Newsfile 2018-08-20 13:33:20 +01:00
Erik Johnston
f2a48d87df Use get_cache_factor_for function for state_cache
This allows the cache factor for `state_cache` to be individually
specified in the environment
2018-08-20 13:01:46 +01:00
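A sketch of a per-cache factor lookup; the environment variable names here are assumptions for illustration, not necessarily the exact ones Synapse reads:

```python
import os


def get_cache_factor_for(cache_name, default=0.5):
    """Per-cache size factor, falling back to a global factor (names assumed)."""
    env_var = "SYNAPSE_CACHE_FACTOR_" + cache_name.upper()
    if env_var in os.environ:
        return float(os.environ[env_var])
    return float(os.environ.get("SYNAPSE_CACHE_FACTOR", default))


os.environ["SYNAPSE_CACHE_FACTOR_STATE_CACHE"] = "2.0"
assert get_cache_factor_for("state_cache") == 2.0        # individually overridden
assert get_cache_factor_for("get_users_in_room") == 0.5  # falls back to the global factor
```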
Erik Johnston
2aa7cc6a46 Merge pull request #3713 from matrix-org/erikj/fixup_fed_logging
Fix logging bug in EDU handling over replication
2018-08-20 10:51:45 +01:00
Neil Johnson
e07970165f rename error code 2018-08-18 14:39:45 +01:00
Neil Johnson
c5171bf171 special case server_notices_mxid 2018-08-18 12:33:07 +01:00
Neil Johnson
ba1fbf7d5b special case server_notices_mxid 2018-08-18 12:31:08 +01:00
Richard van der Hoff
3cef867cc1 Merge pull request #3709 from matrix-org/rav/logcontext_for_replication_commands
Logcontexts for replication command handlers
2018-08-17 16:22:07 +01:00
Richard van der Hoff
c144252a8c Merge pull request #3710 from matrix-org/rav/logcontext_for_pusher_updates
Fix logcontexts for running pushers
2018-08-17 16:21:49 +01:00
Amber Brown
c334ca67bb Integrate presence from hotfixes (#3694) 2018-08-18 01:08:45 +10:00
Amber Brown
04f5d2db62 Remove v1/register's broken shared secret functionality (#3703) 2018-08-18 00:55:01 +10:00
Erik Johnston
ab822a2d1f Add some fixmes 2018-08-17 15:31:50 +01:00
Erik Johnston
91cdb6de08 Call UserDirectoryHandler methods directly
Turns out that the user directory handling is fairly racey as a bunch
of stuff assumes that the processing happens on master, which it doesn't
when there is a synapse.app.user_dir worker. So lets just call the
function directly until we actually get round to fixing it, since it
doesn't make the situation any worse.
2018-08-17 15:26:13 +01:00
Neil Johnson
d49b77404b clean up, no functional changes 2018-08-17 15:21:34 +01:00
Richard van der Hoff
63260397c6 Merge pull request #3701 from matrix-org/rav/use_producer_for_responses
Use a producer to stream back responses
2018-08-17 14:58:45 +01:00
Richard van der Hoff
3f8709ffe4 Merge pull request #3700 from matrix-org/rav/wait_for_producers
Refactor request logging code
2018-08-17 14:57:45 +01:00
Neil Johnson
3ee57bdcbb Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-17 14:34:10 +01:00
Neil Johnson
4c22b4047b Merge pull request #3707 from matrix-org/neilj/limit_exceeded_error
add new error type ResourceLimit
2018-08-17 13:33:54 +00:00
Erik Johnston
782689bd40 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_profiles 2018-08-17 14:15:48 +01:00
Erik Johnston
ca87ad1def Split ProfileHandler into master and worker 2018-08-17 14:15:14 +01:00
Neil Johnson
b5f638f1f4 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-17 14:04:15 +01:00
Neil Johnson
9fd161c6fb Merge branch 'neilj/limit_exceeded_error' of github.com:matrix-org/synapse into neilj/limit_exceeded_error 2018-08-17 13:58:40 +01:00
Neil Johnson
0195dfbf52 server limits config docs 2018-08-17 13:58:25 +01:00
Neil Johnson
69c49d3fa3 Merge branch 'develop' into neilj/limit_exceeded_error 2018-08-17 12:44:26 +00:00
Neil Johnson
a2d872e7b3 Merge pull request #3708 from matrix-org/neilj/resource_Limit_block_event_creation
Neilj/resource limit block event creation
2018-08-17 12:42:59 +00:00
Amber Brown
fa27073b14 Merge pull request #3712 from matrix-org/travis/register-admin-docs
Update the admin register documentation to return a real user ID
2018-08-17 22:31:02 +10:00
Erik Johnston
38f708a2bb Remote profile cache should remain in master worker 2018-08-17 11:37:42 +01:00
Erik Johnston
73737bd0f9 Newsfile 2018-08-17 11:14:05 +01:00
Erik Johnston
3b2dcfff78 Fix logging bug in EDU handling over replication 2018-08-17 11:11:06 +01:00
Neil Johnson
521d369e7a remove errant yield 2018-08-17 10:12:11 +01:00
Travis Ralston
b99a0f3941 Create 3712.misc 2018-08-17 02:47:31 -06:00
Travis Ralston
a8ffc27db7 Update the admin register documentation to return a real user ID
Presumably this is the intention anyways. I've also updated the domain part to be something more along the lines of what people might expect.
2018-08-17 02:46:25 -06:00
Richard van der Hoff
4c9da1440f changelog 2018-08-17 00:50:36 +01:00
Richard van der Hoff
d9efd87d55 changelog 2018-08-17 00:49:22 +01:00
Richard van der Hoff
0e8d78f6aa Logcontexts for replication command handlers
Run the handlers for replication commands as background processes. This should
improve the visibility in our metrics, and reduce the number of "running db
transaction from sentinel context" warnings.

Ideally it means converting the things that fire off deferreds into the night
into things that actually return a Deferred when they are done. I've made a bit
of a stab at this, but it will probably be leaky.
2018-08-17 00:43:43 +01:00
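Synapse's actual change uses Twisted and its background-process helpers; the asyncio sketch below only illustrates the shape of the fix, which is to keep a handle on each handler's completion instead of firing it off unawaited:

```python
import asyncio


async def handle_replication_command(cmd):
    # Stand-in for a replication command handler doing real (awaitable) work.
    await asyncio.sleep(0.01)
    print("handled", cmd)


async def on_command(cmd, background_tasks):
    # Run the handler as a tracked background task rather than "into the
    # night", so completion and failures can be observed and attributed.
    background_tasks.append(asyncio.ensure_future(handle_replication_command(cmd)))


async def main():
    tasks = []
    for cmd in ("RDATA events 1", "POSITION events 2"):
        await on_command(cmd, tasks)
    await asyncio.gather(*tasks)  # handlers actually report when they are done


asyncio.run(main())
```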
Richard van der Hoff
66f7dc8c87 Fix logcontexts for running pushers
First of all, avoid resetting the logcontext before running the pushers, to fix
the "Starting db txn 'get_all_updated_receipts' from sentinel context" warning.

Instead, give them their own "background process" logcontexts.
2018-08-17 00:32:39 +01:00
Neil Johnson
7e51342196 For resource limit blocked users, prevent writing into rooms 2018-08-16 23:05:20 +01:00
Neil Johnson
bcfeb44afe call reap on start up and fix under reaping bug 2018-08-16 22:55:32 +01:00
Neil Johnson
372bf073c1 block event creation and room creation on hitting resource limits 2018-08-16 21:25:16 +01:00
Neil Johnson
7edd11623d add new error type ResourceLimit 2018-08-16 21:04:30 +01:00
Neil Johnson
13ad9930c8 add new error type ResourceLimit 2018-08-16 18:02:02 +01:00
Neil Johnson
51b17ec566 flake8 2018-08-16 17:32:22 +01:00
Neil Johnson
3c1080b6e4 refactor for readability, and reuse caching for setting tags 2018-08-16 17:02:04 +01:00
Neil Johnson
eff3ae3b9a add room tagging 2018-08-16 15:48:34 +01:00
Erik Johnston
b4d6db5c4a Merge pull request #3705 from matrix-org/erikj/fix_inbound_fed_worker
Fix inbound federation on reader worker
2018-08-16 15:42:50 +01:00
Erik Johnston
aae86a81ef Newsfile 2018-08-16 15:41:48 +01:00
Erik Johnston
7d5b1a60a3 Fix inbound federation on reader worker
Inbound federation requires calculating push, which in turn relies on
having access to account data.
2018-08-16 15:32:20 +01:00
Neil Johnson
bfb6c58624 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-16 15:10:16 +01:00
Neil Johnson
a675f9c556 check for room state before deciding on action 2018-08-16 14:53:35 +01:00
Will Hunt
c151b32b1d Add GET media/v1/config (#3184) 2018-08-16 14:23:38 +01:00
Matthew Hodgson
762a758fea lazyload aware /messages (#3589) 2018-08-16 14:22:47 +01:00
Neil Johnson
25d2b5d55f Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-16 11:37:17 +01:00
Neil Johnson
df1e4f259f WIP impl commiting to get feedback 2018-08-16 11:10:53 +01:00
Neil Johnson
c055c91655 fix case where empty string state check is evaluated as False 2018-08-16 11:10:19 +01:00
Matthew Hodgson
3f543dc021 initial cut at a room summary API (#3574) 2018-08-16 09:46:50 +01:00
Richard van der Hoff
859989f958 Merge pull request #3686 from matrix-org/rav/changelog_links_to_prs
Changelog should link to PRs
2018-08-15 22:22:38 +01:00
Neil Johnson
8cfad2e686 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-15 17:20:38 +01:00
Neil Johnson
81d727efa9 Merge pull request #3689 from matrix-org/neilj/fix_off_by_1+maus
Fix Mau off by one errors
2018-08-15 16:19:41 +00:00
Neil Johnson
da7fe43899 Fix mau blocking calculation bug on login 2018-08-15 17:03:30 +01:00
Richard van der Hoff
e2e846628d Remove incorrectly re-added changelog files
These were removed for the 0.33.2 release, but were incorrectly re-added in
2018-08-15 16:38:46 +01:00
Matthew Hodgson
2f78f432c4 speed up /members and add at= and membership params (#3568) 2018-08-15 16:35:22 +01:00
Neil Johnson
fc5d937550 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/server_notices_on_blocking 2018-08-15 16:31:40 +01:00
Neil Johnson
86a00e05e1 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-15 16:27:08 +01:00
Richard van der Hoff
d82fa0e9a6 changelog 2018-08-15 15:12:35 +01:00
Amber Brown
a87af25fbb Fix the tests 2018-08-15 15:12:23 +01:00
Neil Johnson
eabc5f8271 wip cut at sending resource server notices 2018-08-15 15:04:52 +01:00
Neil Johnson
c24fc9797b add new event types 2018-08-15 15:04:30 +01:00
Richard van der Hoff
afcd655ab6 Use a producer to stream back responses
The problem with dumping all of the json response into the Request object at
once is that doing so starts the timeout for the next request to be received:
so if it takes longer than 60s to stream back the response to the client, the
client never gets it.

The correct solution is to use a Producer; then the timeout is only started
once all of the content is sent over the TCP connection.
2018-08-15 15:04:16 +01:00
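A plain-Python sketch of the pull-producer pattern: the method names mirror Twisted's IPullProducer, but the request object here is a fake so the snippet stays self-contained, and in real Twisted the reactor (not a while loop) drives `resumeProducing()`.

```python
class FakeRequest:
    """Minimal stand-in for a Twisted Request: accepts writes from a producer."""

    def __init__(self):
        self.chunks = []
        self.producer = None

    def registerProducer(self, producer, streaming):
        self.producer = producer

    def write(self, data):
        self.chunks.append(data)

    def unregisterProducer(self):
        self.producer = None


class JsonChunkProducer:
    """Pull-producer sketch: send the encoded body in chunks when asked,
    instead of dumping it into the request in one go."""

    def __init__(self, request, body, chunk_size=8):
        self.request = request
        self.body = body
        self.pos = 0
        self.chunk_size = chunk_size

    def resumeProducing(self):
        chunk = self.body[self.pos:self.pos + self.chunk_size]
        self.pos += self.chunk_size
        if chunk:
            self.request.write(chunk)
        else:
            self.request.unregisterProducer()  # finished: only now is the response "done"

    def stopProducing(self):
        self.pos = len(self.body)


req = FakeRequest()
producer = JsonChunkProducer(req, b'{"ok": true, "chunked": true}')
req.registerProducer(producer, streaming=False)
while req.producer is not None:  # in Twisted, the reactor drives this loop
    producer.resumeProducing()
print(b"".join(req.chunks))
```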
Richard van der Hoff
2de813a611 changelog 2018-08-15 13:49:03 +01:00
Richard van der Hoff
eaaa2248ff Refactor request logging code
This commit moves a bunch of the logic for deciding when to log the receipt and
completion of HTTP requests into SynapseRequest, rather than in the request
handling wrappers.

Advantages of this are:
 * we get logs for *all* requests (including OPTIONS and HEADs), rather than
   just those that end up hitting handlers we've remembered to decorate
   correctly.

 * when a request handler wires up a Producer (as the media stuff does
   currently, and as other things will do soon), we log at the point that all
   of the traffic has been sent to the client.
2018-08-15 13:47:52 +01:00
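A minimal sketch of logging receipt and completion from the request object itself (illustrative names only, not Synapse's SynapseRequest):

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("request_logger")


class LoggedRequest:
    """Sketch: the request logs its own receipt and completion, so every
    request (including OPTIONS/HEAD) gets logged, and completion is only
    recorded once the whole response has actually been sent."""

    def __init__(self, method, path):
        self.method, self.path = method, path
        self.start = time.monotonic()
        logger.info("Received request: %s %s", method, path)

    def finished(self, code, sent_bytes):
        elapsed = time.monotonic() - self.start
        logger.info("Processed request: %s %s -> %d (%d bytes, %.3fs)",
                    self.method, self.path, code, sent_bytes, elapsed)


r = LoggedRequest("GET", "/_matrix/client/versions")
r.finished(200, 137)
```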
Neil Johnson
b586b8b986 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 17:43:22 +01:00
Neil Johnson
06b331ff40 fix off by 1 errors 2018-08-14 15:28:15 +01:00
Neil Johnson
8f9a7eb58d support admin_email config and pass through into blocking errors, return AuthError in all cases 2018-08-14 15:11:54 +01:00
Neil Johnson
ed4bc3d2fc fix off by 1s on mau 2018-08-14 15:04:48 +01:00
Neil Johnson
bd92c8eaa7 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:56:45 +01:00
Neil Johnson
9b5bf3d858 Merge branch 'neilj/admin_email' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:51:38 +01:00
Neil Johnson
e25d87d97b Merge branch 'neilj/mau_sync_block' of github.com:matrix-org/synapse into neilj/fix_off_by_1+maus 2018-08-14 14:32:18 +01:00
Neil Johnson
e2c9fe0a6a backout ability to pass in event type to server notices 2018-08-14 13:32:56 +01:00
Neil Johnson
9b75c78b4d support server notice state events for resource limits 2018-08-14 11:20:41 +01:00
Neil Johnson
63417c31e9 fix typo 2018-08-13 22:36:52 +01:00
Richard van der Hoff
b6d55588e1 Changelog should link to PRs
The changelog makes much more sense if it consistently links to PRs, rather
than mostly PRs with the occasional issue #.
2018-08-13 18:56:55 +01:00
Richard van der Hoff
cd32c19a60 Merge pull request #3685 from matrix-org/revert-3677-master
Revert "Use recaptcha_ajax.js directly from Google"
2018-08-13 18:49:50 +01:00
Richard van der Hoff
34f51babc5 Revert "Use recaptcha_ajax.js directly from Google" 2018-08-13 18:48:03 +01:00
Richard van der Hoff
8226e5597a Merge pull request #3677 from andrewshadura/master
Use recaptcha_ajax.js directly from Google
2018-08-13 18:47:51 +01:00
Neil Johnson
c79c4c0a7d server notices for resource limit blocking 2018-08-10 15:25:47 +01:00
Neil Johnson
6c6aba76e1 implementation of server notices to alert on hitting resource limits 2018-08-10 15:12:59 +01:00
Neil Johnson
01021c812f wip at implementing MSC 7075 2018-08-09 22:16:00 +01:00
Erik Johnston
5075e444f4 Newsfile 2018-08-09 14:58:49 +01:00
Erik Johnston
3e19beb941 Fix tests 2018-08-09 14:58:49 +01:00
Erik Johnston
bb99b1f550 Add fast path in state res for zero prev events 2018-08-09 14:58:49 +01:00
Erik Johnston
ce6db0e547 Choose state algorithm based on room version 2018-08-09 14:58:47 +01:00
Erik Johnston
152c0aa58e Add constants for room versions 2018-08-09 14:55:47 +01:00
Erik Johnston
119451dcd1 Refactor state module
We split out the actual state resolution algorithm to prepare for having
multiple versions.
2018-08-09 14:55:47 +01:00
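A toy sketch of dispatching state resolution on room version, which is the shape this refactor enables; the function and map names are made up and the "algorithm" is a trivial placeholder:

```python
def resolve_events_v1(state_sets):
    # Placeholder for the original state-resolution algorithm.
    resolved = {}
    for state in state_sets:
        resolved.update(state)
    return resolved


# Keeping the algorithms behind a per-room-version map is what makes it easy
# to slot in a second algorithm later without touching the callers.
STATE_RESOLUTION_ALGORITHMS = {
    "1": resolve_events_v1,
}


def resolve_events(room_version, state_sets):
    try:
        algorithm = STATE_RESOLUTION_ALGORITHMS[room_version]
    except KeyError:
        raise ValueError("Unrecognised room version %r" % (room_version,))
    return algorithm(state_sets)


print(resolve_events("1", [{("m.room.name", ""): "$name"}, {("m.room.create", ""): "$create"}]))
```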
Erik Johnston
a6c813761a Docstrings 2018-08-09 10:41:08 +01:00
Erik Johnston
54a9bea88c Newsfile 2018-08-09 10:39:29 +01:00
Erik Johnston
484a0ebdfc Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_profiles 2018-08-09 10:16:29 +01:00
Erik Johnston
f81f421086 Update workers.rst with new paths 2018-08-07 10:51:35 +01:00
Erik Johnston
cd9765805e Allow ratelimiting on workers 2018-08-07 10:50:28 +01:00
Erik Johnston
495cb100d1 Allow profile changes to happen on workers 2018-08-07 10:50:26 +01:00
Travis Ralston
37be52ac34 limt -> limit 2018-07-31 16:29:09 -06:00
Nicholas Eckardt
76c80e3fdf The project matrix-synapse-auto-deploy does not seem to be maintained anymore.
It has been over a year since any code has been committed. Some of the relevant links in
the documentation are broken, but since no pull requests are being accepted, they
won't get fixed. We should probably remove this from the README.
2018-06-10 18:24:12 -07:00
256 changed files with 8326 additions and 3911 deletions


@@ -1,5 +1,27 @@
version: 2
jobs:
dockerhubuploadrelease:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG} .
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1} .
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1} matrixdotorg/synapse:latest
- run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1}-py3 matrixdotorg/synapse:latest-py3
- run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}
- run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}-py3
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3
sytestpy2:
machine: true
steps:
@@ -9,6 +31,8 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2postgres:
machine: true
steps:
@@ -18,15 +42,45 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2merged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2postgresmerged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs hawkowl/sytestpy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3postgres:
machine: true
steps:
@@ -36,13 +90,76 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3merged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3postgresmerged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
workflows:
version: 2
build:
jobs:
- sytestpy2
- sytestpy2postgres
# Currently broken while the Python 3 port is incomplete
# - sytestpy3
# - sytestpy3postgres
- sytestpy2:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2merged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy2postgresmerged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy3merged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy3postgresmerged:
filters:
branches:
ignore: /develop|master|release-.*/
- dockerhubuploadrelease:
filters:
tags:
only: /v[0-9].[0-9]+.[0-9]+.*/
branches:
ignore: /.*/
- dockerhubuploadlatest:
filters:
branches:
only: master

.circleci/merge_base_branch.sh Executable file

@@ -0,0 +1,34 @@
#!/usr/bin/env bash
set -e
# CircleCI doesn't give CIRCLE_PR_NUMBER in the environment for non-forked PRs. Wonderful.
# In this case, we just need to do some ~shell magic~ to strip it out of the PULL_REQUEST URL.
echo 'export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-${CIRCLE_PULL_REQUEST##*/}}"' >> $BASH_ENV
source $BASH_ENV
if [[ -z "${CIRCLE_PR_NUMBER}" ]]
then
echo "Can't figure out what the PR number is! Assuming merge target is develop."
# It probably hasn't had a PR opened yet. Since all PRs land on develop, we
# can probably assume it's based on it and will be merged into it.
GITBASE="develop"
else
# Get the reference, using the GitHub API
GITBASE=`curl -q https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
fi
# Show what we are before
git show -s
# Set up username so it can do a merge
git config --global user.email bot@matrix.org
git config --global user.name "A robot"
# Fetch and merge. If it doesn't work, it will raise due to set -e.
git fetch -u origin $GITBASE
git merge --no-edit origin/$GITBASE
# Show what we are after.
git show -s


@@ -3,6 +3,5 @@ Dockerfile
.gitignore
demo/etc
tox.ini
synctl
.git/*
.tox/*

.gitignore vendored

@@ -1,9 +1,11 @@
*.pyc
.*.swp
*~
*.lock
.DS_Store
_trial_temp/
_trial_temp*/
logs/
dbs/
*.egg
@@ -44,6 +46,7 @@ media_store/
build/
venv/
venv*/
*venv/
localhost-800*/
static/client/register/register_config.js


@@ -8,9 +8,6 @@ before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
services:
- postgresql
matrix:
fast_finish: true
include:
@@ -23,22 +20,31 @@ matrix:
- python: 2.7
env: TOX_ENV=py27
- python: 2.7
env: TOX_ENV=py27-old
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
services:
- postgresql
- python: 3.5
env: TOX_ENV=py35
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=py36-postgres TRIAL_FLAGS="-j 4"
services:
- postgresql
- python: 3.6
env: TOX_ENV=check_isort
- python: 3.6
env: TOX_ENV=check-newsfragment
allow_failures:
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
install:
- pip install tox


@@ -1,3 +1,313 @@
Synapse 0.33.6 (2018-10-04)
===========================
Internal Changes
----------------
- Pin to prometheus_client<0.4 to avoid renaming all of our metrics ([\#4002](https://github.com/matrix-org/synapse/issues/4002))
Synapse 0.33.6rc1 (2018-10-03)
==============================
Features
--------
- Add the ability to change MAX_UPLOAD_SIZE via the Docker container's environment variables. ([\#3883](https://github.com/matrix-org/synapse/issues/3883))
- Report "python_version" in the phone home stats ([\#3894](https://github.com/matrix-org/synapse/issues/3894))
- Always LL ourselves if we're in a room ([\#3916](https://github.com/matrix-org/synapse/issues/3916))
- Include eventid in log lines when processing incoming federation transactions ([\#3959](https://github.com/matrix-org/synapse/issues/3959))
- Remove spurious check which made 'localhost' servers not work ([\#3964](https://github.com/matrix-org/synapse/issues/3964))
Bugfixes
--------
- Fix problem when playing media from Chrome using direct URL (thanks @remjey!) ([\#3578](https://github.com/matrix-org/synapse/issues/3578))
- support registering regular users non-interactively with register_new_matrix_user script ([\#3836](https://github.com/matrix-org/synapse/issues/3836))
- Fix broken invite email links for self hosted riots ([\#3868](https://github.com/matrix-org/synapse/issues/3868))
- Don't ratelimit autojoins ([\#3879](https://github.com/matrix-org/synapse/issues/3879))
- Fix 500 error when deleting unknown room alias ([\#3889](https://github.com/matrix-org/synapse/issues/3889))
- Fix some b'abcd' noise in logs and metrics ([\#3892](https://github.com/matrix-org/synapse/issues/3892), [\#3895](https://github.com/matrix-org/synapse/issues/3895))
- When we join a room, always try the server we used for the alias lookup first, to avoid unresponsive and out-of-date servers. ([\#3899](https://github.com/matrix-org/synapse/issues/3899))
- Fix incorrect server-name indication for outgoing federation requests ([\#3907](https://github.com/matrix-org/synapse/issues/3907))
- Fix adding client IPs to the database failing on Python 3. ([\#3908](https://github.com/matrix-org/synapse/issues/3908))
- Fix bug where things occasionally were not being timed out correctly. ([\#3910](https://github.com/matrix-org/synapse/issues/3910))
- Fix bug where outbound federation would stop talking to some servers when using workers ([\#3914](https://github.com/matrix-org/synapse/issues/3914))
- Fix some instances of ExpiringCache not expiring cache items ([\#3932](https://github.com/matrix-org/synapse/issues/3932), [\#3980](https://github.com/matrix-org/synapse/issues/3980))
- Fix out-of-bounds error when LLing yourself ([\#3936](https://github.com/matrix-org/synapse/issues/3936))
- Sending server notices regarding user consent now works on Python 3. ([\#3938](https://github.com/matrix-org/synapse/issues/3938))
- Fix exceptions from metrics handler ([\#3956](https://github.com/matrix-org/synapse/issues/3956))
- Fix error message for events with m.room.create missing from auth_events ([\#3960](https://github.com/matrix-org/synapse/issues/3960))
- Fix errors due to concurrent monthly_active_user upserts ([\#3961](https://github.com/matrix-org/synapse/issues/3961))
- Fix exceptions when processing incoming events over federation ([\#3968](https://github.com/matrix-org/synapse/issues/3968))
- Replaced all occurrences of e.message with str(e). Contributed by Schnuffle ([\#3970](https://github.com/matrix-org/synapse/issues/3970))
- Fix lazy loaded sync in the presence of rejected state events ([\#3986](https://github.com/matrix-org/synapse/issues/3986))
- Fix error when logging incomplete HTTP requests ([\#3990](https://github.com/matrix-org/synapse/issues/3990))
Internal Changes
----------------
- Unit tests can now be run under PostgreSQL in Docker using ``test_postgresql.sh``. ([\#3699](https://github.com/matrix-org/synapse/issues/3699))
- Speed up calculation of typing updates for replication ([\#3794](https://github.com/matrix-org/synapse/issues/3794))
- Remove documentation regarding installation on Cygwin; the use of WSL is recommended instead. ([\#3873](https://github.com/matrix-org/synapse/issues/3873))
- Fix typo in README, synaspse -> synapse ([\#3897](https://github.com/matrix-org/synapse/issues/3897))
- Increase the timeout when filling missing events in federation requests ([\#3903](https://github.com/matrix-org/synapse/issues/3903))
- Improve the logging when handling a federation transaction ([\#3904](https://github.com/matrix-org/synapse/issues/3904), [\#3966](https://github.com/matrix-org/synapse/issues/3966))
- Improve logging of outbound federation requests ([\#3906](https://github.com/matrix-org/synapse/issues/3906), [\#3909](https://github.com/matrix-org/synapse/issues/3909))
- Fix the docker image building on python 3 ([\#3911](https://github.com/matrix-org/synapse/issues/3911))
- Add a regression test for logging failed HTTP requests on Python 3. ([\#3912](https://github.com/matrix-org/synapse/issues/3912))
- Comments and interface cleanup for on_receive_pdu ([\#3924](https://github.com/matrix-org/synapse/issues/3924))
- Fix spurious exceptions when remote http client closes connection ([\#3925](https://github.com/matrix-org/synapse/issues/3925))
- Log exceptions thrown by background tasks ([\#3927](https://github.com/matrix-org/synapse/issues/3927))
- Add a cache to get_destination_retry_timings ([\#3933](https://github.com/matrix-org/synapse/issues/3933), [\#3991](https://github.com/matrix-org/synapse/issues/3991))
- Automate pushes to docker hub ([\#3946](https://github.com/matrix-org/synapse/issues/3946))
- Require attrs 16.0.0 or later ([\#3947](https://github.com/matrix-org/synapse/issues/3947))
- Fix incompatibility with python3 on alpine ([\#3948](https://github.com/matrix-org/synapse/issues/3948))
- Run the test suite on the oldest supported versions of our dependencies in CI. ([\#3952](https://github.com/matrix-org/synapse/issues/3952))
- CircleCI now only runs merged jobs on PRs, and commit jobs on develop, master, and release branches. ([\#3957](https://github.com/matrix-org/synapse/issues/3957))
- Fix docstrings and add tests for state store methods ([\#3958](https://github.com/matrix-org/synapse/issues/3958))
- fix docstring for FederationClient.get_state_for_room ([\#3963](https://github.com/matrix-org/synapse/issues/3963))
- Run notify_app_services as a bg process ([\#3965](https://github.com/matrix-org/synapse/issues/3965))
- Clarifications in FederationHandler ([\#3967](https://github.com/matrix-org/synapse/issues/3967))
- Further reduce the docker image size ([\#3972](https://github.com/matrix-org/synapse/issues/3972))
- Build py3 docker images for docker hub too ([\#3976](https://github.com/matrix-org/synapse/issues/3976))
- Updated the installation instructions to point to the matrix-synapse package on PyPI. ([\#3985](https://github.com/matrix-org/synapse/issues/3985))
- Disable USE_FROZEN_DICTS for unittests by default. ([\#3987](https://github.com/matrix-org/synapse/issues/3987))
- Remove unused Jenkins and development related files from the repo. ([\#3988](https://github.com/matrix-org/synapse/issues/3988))
- Improve stacktraces in certain exceptions in the logs ([\#3989](https://github.com/matrix-org/synapse/issues/3989))
Synapse 0.33.5.1 (2018-09-25)
=============================
Internal Changes
----------------
- Fix incompatibility with older Twisted version in tests. Thanks @OlegGirko! ([\#3940](https://github.com/matrix-org/synapse/issues/3940))
Synapse 0.33.5 (2018-09-24)
===========================
No significant changes.
Synapse 0.33.5rc1 (2018-09-17)
==============================
Features
--------
- Python 3.5 and 3.6 support is now in beta. ([\#3576](https://github.com/matrix-org/synapse/issues/3576))
- Implement `event_format` filter param in `/sync` ([\#3790](https://github.com/matrix-org/synapse/issues/3790))
- Add synapse_admin_mau:registered_reserved_users metric to expose number of real reserved users ([\#3846](https://github.com/matrix-org/synapse/issues/3846))
Bugfixes
--------
- Remove connection ID for replication prometheus metrics, as it creates a large number of new series. ([\#3788](https://github.com/matrix-org/synapse/issues/3788))
- guest users should not be part of mau total ([\#3800](https://github.com/matrix-org/synapse/issues/3800))
- Bump dependency on pyopenssl 16.x, to avoid incompatibility with recent Twisted. ([\#3804](https://github.com/matrix-org/synapse/issues/3804))
- Fix existing room tags not coming down sync when joining a room ([\#3810](https://github.com/matrix-org/synapse/issues/3810))
- Fix jwt import check ([\#3824](https://github.com/matrix-org/synapse/issues/3824))
- fix VOIP crashes under Python 3 (#3821) ([\#3835](https://github.com/matrix-org/synapse/issues/3835))
- Fix manhole so that it works with latest openssh clients ([\#3841](https://github.com/matrix-org/synapse/issues/3841))
- Fix outbound requests occasionally wedging, which can result in federation breaking between servers. ([\#3845](https://github.com/matrix-org/synapse/issues/3845))
- Show heroes if room name/canonical alias has been deleted ([\#3851](https://github.com/matrix-org/synapse/issues/3851))
- Fix handling of redacted events from federation ([\#3859](https://github.com/matrix-org/synapse/issues/3859))
- ([\#3874](https://github.com/matrix-org/synapse/issues/3874))
- Mitigate outbound federation randomly becoming wedged ([\#3875](https://github.com/matrix-org/synapse/issues/3875))
Internal Changes
----------------
- CircleCI tests now run on the potential merge of a PR. ([\#3704](https://github.com/matrix-org/synapse/issues/3704))
- http/ is now ported to Python 3. ([\#3771](https://github.com/matrix-org/synapse/issues/3771))
- Improve human readable error messages for threepid registration/account update ([\#3789](https://github.com/matrix-org/synapse/issues/3789))
- Make /sync slightly faster by avoiding needless copies ([\#3795](https://github.com/matrix-org/synapse/issues/3795))
- handlers/ is now ported to Python 3. ([\#3803](https://github.com/matrix-org/synapse/issues/3803))
- Limit the number of PDUs/EDUs per federation transaction ([\#3805](https://github.com/matrix-org/synapse/issues/3805))
- Only start postgres instance for postgres tests on Travis CI ([\#3806](https://github.com/matrix-org/synapse/issues/3806))
- tests/ is now ported to Python 3. ([\#3808](https://github.com/matrix-org/synapse/issues/3808))
- crypto/ is now ported to Python 3. ([\#3822](https://github.com/matrix-org/synapse/issues/3822))
- rest/ is now ported to Python 3. ([\#3823](https://github.com/matrix-org/synapse/issues/3823))
- add some logging for the keyring queue ([\#3826](https://github.com/matrix-org/synapse/issues/3826))
- speed up lazy loading by 2-3x ([\#3827](https://github.com/matrix-org/synapse/issues/3827))
- Improved Dockerfile to remove build requirements after building reducing the image size. ([\#3834](https://github.com/matrix-org/synapse/issues/3834))
- Disable lazy loading for incremental syncs for now ([\#3840](https://github.com/matrix-org/synapse/issues/3840))
- federation/ is now ported to Python 3. ([\#3847](https://github.com/matrix-org/synapse/issues/3847))
- Log when we retry outbound requests ([\#3853](https://github.com/matrix-org/synapse/issues/3853))
- Removed some excess logging messages. ([\#3855](https://github.com/matrix-org/synapse/issues/3855))
- Speed up purge history for rooms that have been previously purged ([\#3856](https://github.com/matrix-org/synapse/issues/3856))
- Refactor some HTTP timeout code. ([\#3857](https://github.com/matrix-org/synapse/issues/3857))
- Fix running merged builds on CircleCI ([\#3858](https://github.com/matrix-org/synapse/issues/3858))
- Fix typo in replication stream exception. ([\#3860](https://github.com/matrix-org/synapse/issues/3860))
- Add in flight real time metrics for Measure blocks ([\#3871](https://github.com/matrix-org/synapse/issues/3871))
- Disable buffering and automatic retrying in treq requests to prevent timeouts. ([\#3872](https://github.com/matrix-org/synapse/issues/3872))
- mention jemalloc in the README ([\#3877](https://github.com/matrix-org/synapse/issues/3877))
- Remove unmaintained "nuke-room-from-db.sh" script ([\#3888](https://github.com/matrix-org/synapse/issues/3888))
Synapse 0.33.4 (2018-09-07)
===========================
Internal Changes
----------------
- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
Synapse 0.33.4rc2 (2018-09-06)
==============================
Pull in security fixes from v0.33.3.1
Synapse 0.33.3.1 (2018-09-06)
=============================
SECURITY FIXES
--------------
- Fix an issue where event signatures were not always correctly validated ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
- Fix an issue where server_acls could be circumvented for incoming events ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
Internal Changes
----------------
- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
Synapse 0.33.4rc1 (2018-09-04)
==============================
Features
--------
- Support profile API endpoints on workers ([\#3659](https://github.com/matrix-org/synapse/issues/3659))
- Server notices for resource limit blocking ([\#3680](https://github.com/matrix-org/synapse/issues/3680))
- Allow guests to use /rooms/:roomId/event/:eventId ([\#3724](https://github.com/matrix-org/synapse/issues/3724))
- Add mau_trial_days config param, so that users only get counted as MAU after N days. ([\#3749](https://github.com/matrix-org/synapse/issues/3749))
- Require twisted 17.1 or later (fixes [#3741](https://github.com/matrix-org/synapse/issues/3741)). ([\#3751](https://github.com/matrix-org/synapse/issues/3751))
Bugfixes
--------
- Fix error collecting prometheus metrics when run on dedicated thread due to threading concurrency issues ([\#3722](https://github.com/matrix-org/synapse/issues/3722))
- Fix bug where we resent "limit exceeded" server notices repeatedly ([\#3747](https://github.com/matrix-org/synapse/issues/3747))
- Fix bug where we broke sync when using limit_usage_by_mau but hadn't configured server notices ([\#3753](https://github.com/matrix-org/synapse/issues/3753))
- Fix 'federation_domain_whitelist' such that an empty list correctly blocks all outbound federation traffic ([\#3754](https://github.com/matrix-org/synapse/issues/3754))
- Fix tagging of server notice rooms ([\#3755](https://github.com/matrix-org/synapse/issues/3755), [\#3756](https://github.com/matrix-org/synapse/issues/3756))
- Fix 'admin_uri' config variable and error parameter to be 'admin_contact' to match the spec. ([\#3758](https://github.com/matrix-org/synapse/issues/3758))
- Don't return non-LL-member state in incremental sync state blocks ([\#3760](https://github.com/matrix-org/synapse/issues/3760))
- Fix bug in sending presence over federation ([\#3768](https://github.com/matrix-org/synapse/issues/3768))
- Fix bug where preserved threepid user comes to sign up and server is mau blocked ([\#3777](https://github.com/matrix-org/synapse/issues/3777))
Internal Changes
----------------
- Removed the link to the unmaintained matrix-synapse-auto-deploy project from the readme. ([\#3378](https://github.com/matrix-org/synapse/issues/3378))
- Refactor state module to support multiple room versions ([\#3673](https://github.com/matrix-org/synapse/issues/3673))
- The synapse.storage module has been ported to Python 3. ([\#3725](https://github.com/matrix-org/synapse/issues/3725))
- Split the state_group_cache into member and non-member state events (and so speed up LL /sync) ([\#3726](https://github.com/matrix-org/synapse/issues/3726))
- Log failure to authenticate remote servers as warnings (without stack traces) ([\#3727](https://github.com/matrix-org/synapse/issues/3727))
- The CONTRIBUTING guidelines have been updated to mention our use of Markdown and that .misc files have content. ([\#3730](https://github.com/matrix-org/synapse/issues/3730))
- Reference the need for an HTTP replication port when using the federation_reader worker ([\#3734](https://github.com/matrix-org/synapse/issues/3734))
- Fix minor spelling error in federation client documentation. ([\#3735](https://github.com/matrix-org/synapse/issues/3735))
- Remove redundant state resolution function ([\#3737](https://github.com/matrix-org/synapse/issues/3737))
- The test suite now passes on PostgreSQL. ([\#3740](https://github.com/matrix-org/synapse/issues/3740))
- Fix MAU cache invalidation due to missing yield ([\#3746](https://github.com/matrix-org/synapse/issues/3746))
- Make sure that we close db connections opened during init ([\#3764](https://github.com/matrix-org/synapse/issues/3764))
Synapse 0.33.3 (2018-08-22)
===========================
Bugfixes
--------
- Fix bug introduced in v0.33.3rc1 which made the ToS give a 500 error ([\#3732](https://github.com/matrix-org/synapse/issues/3732))
Synapse 0.33.3rc2 (2018-08-21)
==============================
Bugfixes
--------
- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs ([\#3723](https://github.com/matrix-org/synapse/issues/3723))
Synapse 0.33.3rc1 (2018-08-21)
==============================
Features
--------
- Add support for the SNI extension to federation TLS connections. Thanks to @vojeroen! ([\#3439](https://github.com/matrix-org/synapse/issues/3439))
- Add /_media/r0/config ([\#3184](https://github.com/matrix-org/synapse/issues/3184))
- speed up /members API and add `at` and `membership` params as per MSC1227 ([\#3568](https://github.com/matrix-org/synapse/issues/3568))
- implement `summary` block in /sync response as per MSC688 ([\#3574](https://github.com/matrix-org/synapse/issues/3574))
- Add lazy-loading support to /messages as per MSC1227 ([\#3589](https://github.com/matrix-org/synapse/issues/3589))
- Add ability to limit number of monthly active users on the server ([\#3633](https://github.com/matrix-org/synapse/issues/3633))
- Support more federation endpoints on workers ([\#3653](https://github.com/matrix-org/synapse/issues/3653))
- Basic support for room versioning ([\#3654](https://github.com/matrix-org/synapse/issues/3654))
- Ability to disable client/server Synapse via conf toggle ([\#3655](https://github.com/matrix-org/synapse/issues/3655))
- Ability to whitelist specific threepids against monthly active user limiting ([\#3662](https://github.com/matrix-org/synapse/issues/3662))
- Add some metrics for the appservice and federation event sending loops ([\#3664](https://github.com/matrix-org/synapse/issues/3664))
- Where server is disabled, block ability for locked out users to read new messages ([\#3670](https://github.com/matrix-org/synapse/issues/3670))
- set admin uri via config, to be used in error messages where the user should contact the administrator ([\#3687](https://github.com/matrix-org/synapse/issues/3687))
- Synapse's presence functionality can now be disabled with the "use_presence" configuration option. ([\#3694](https://github.com/matrix-org/synapse/issues/3694))
- For resource limit blocked users, prevent writing into rooms ([\#3708](https://github.com/matrix-org/synapse/issues/3708))
Bugfixes
--------
- Fix occasional glitches in the synapse_event_persisted_position metric ([\#3658](https://github.com/matrix-org/synapse/issues/3658))
- Fix bug on deleting 3pid when using identity servers that don't support unbind API ([\#3661](https://github.com/matrix-org/synapse/issues/3661))
- Make the tests pass on Twisted < 18.7.0 ([\#3676](https://github.com/matrix-org/synapse/issues/3676))
- Don't ship recaptcha_ajax.js, use it directly from Google ([\#3677](https://github.com/matrix-org/synapse/issues/3677))
- Fixes test_reap_monthly_active_users so it passes under postgres ([\#3681](https://github.com/matrix-org/synapse/issues/3681))
- Fix mau blocking calculation bug on login ([\#3689](https://github.com/matrix-org/synapse/issues/3689))
- Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users ([\#3692](https://github.com/matrix-org/synapse/issues/3692))
- Improve HTTP request logging to include all requests ([\#3700](https://github.com/matrix-org/synapse/issues/3700))
- Avoid timing out requests while we are streaming back the response ([\#3701](https://github.com/matrix-org/synapse/issues/3701))
- Support more federation endpoints on workers ([\#3705](https://github.com/matrix-org/synapse/issues/3705), [\#3713](https://github.com/matrix-org/synapse/issues/3713))
- Fix "Starting db txn 'get_all_updated_receipts' from sentinel context" warning ([\#3710](https://github.com/matrix-org/synapse/issues/3710))
- Fix bug where `state_cache` cache factor ignored environment variables ([\#3719](https://github.com/matrix-org/synapse/issues/3719))
Deprecations and Removals
-------------------------
- The Shared-Secret registration method of the legacy v1/register REST endpoint has been removed. For a replacement, please see [the admin/register API documentation](https://github.com/matrix-org/synapse/blob/master/docs/admin_api/register_api.rst). ([\#3703](https://github.com/matrix-org/synapse/issues/3703))
Internal Changes
----------------
- The test suite now can run under PostgreSQL. ([\#3423](https://github.com/matrix-org/synapse/issues/3423))
- Refactor HTTP replication endpoints to reduce code duplication ([\#3632](https://github.com/matrix-org/synapse/issues/3632))
- Tests now correctly execute on Python 3. ([\#3647](https://github.com/matrix-org/synapse/issues/3647))
- Sytests can now be run inside a Docker container. ([\#3660](https://github.com/matrix-org/synapse/issues/3660))
- Port over enough to Python 3 to allow the sytests to start. ([\#3668](https://github.com/matrix-org/synapse/issues/3668))
- Update docker base image from alpine 3.7 to 3.8. ([\#3669](https://github.com/matrix-org/synapse/issues/3669))
- Rename synapse.util.async to synapse.util.async_helpers to mitigate async becoming a keyword on Python 3.7. ([\#3678](https://github.com/matrix-org/synapse/issues/3678))
- Synapse's tests are now formatted with the black autoformatter. ([\#3679](https://github.com/matrix-org/synapse/issues/3679))
- Implemented a new testing base class to reduce test boilerplate. ([\#3684](https://github.com/matrix-org/synapse/issues/3684))
- Rename MAU prometheus metrics ([\#3690](https://github.com/matrix-org/synapse/issues/3690))
- add new error type ResourceLimit ([\#3707](https://github.com/matrix-org/synapse/issues/3707))
- Logcontexts for replication command handlers ([\#3709](https://github.com/matrix-org/synapse/issues/3709))
- Update admin register API documentation to reference a real user ID. ([\#3712](https://github.com/matrix-org/synapse/issues/3712))
Synapse 0.33.2 (2018-08-09)
===========================
@@ -24,7 +334,7 @@ Features
Bugfixes
--------
- Make /directory/list API return 404 for room not found instead of 400 ([\#2952](https://github.com/matrix-org/synapse/issues/2952))
- Make /directory/list API return 404 for room not found instead of 400. Thanks to @fuzzmz! ([\#3620](https://github.com/matrix-org/synapse/issues/3620))
- Default inviter_display_name to mxid for email invites ([\#3391](https://github.com/matrix-org/synapse/issues/3391))
- Don't generate TURN credentials if no TURN config options are set ([\#3514](https://github.com/matrix-org/synapse/issues/3514))
- Correctly announce deleted devices over federation ([\#3520](https://github.com/matrix-org/synapse/issues/3520))

View File

@@ -30,12 +30,28 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an adminstrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Travis CI
<https://travis-ci.org/matrix-org/synapse>`_ for continuous integration. All
pull requests to synapse get automatically tested by Travis and CircleCI.
If your change breaks the build, this will be shown in GitHub, so please
keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
- ``tox -e py27`` (requires tox to be installed by ``pip install tox``) for
SQLite-backed Synapse on Python 2.7.
- ``tox -e py35`` for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py27-postgres`` for PostgreSQL-backed Synapse on Python 2.7
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 2.7
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
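For example, a minimal first run on a fresh checkout (assuming only ``pip``
is available, and picking the SQLite/Python 2.7 environment named above)
might look like::

    pip install tox
    tox -e py27

The same pattern applies to the other environments in the list.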
Docker images are available for running the integration tests (SyTest) locally,
see the `documentation in the SyTest repo
<https://github.com/matrix-org/sytest/blob/develop/docker/README.md>`_ for more
information.
Code style
~~~~~~~~~~
@@ -56,17 +72,18 @@ entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``issuenumberOrPR.type``. The type can be
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).
the file is your changelog entry, which can contain Markdown
formatting. Adding credits to the changelog is encouraged; we value
your contributions and would like to have you shouted out in the
release notes!
For example, a fix for a bug reported in #1234 would have its
changelog entry in ``changelog.d/1234.bugfix``, and contain content
like "The security levels of Florbs are now validated when
recieved over federation. Contributed by Jane Matrix".
For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix".
Attribution
~~~~~~~~~~~
@@ -76,7 +93,8 @@ AUTHORS.rst file for the project in question. Please feel free to include a
change to AUTHORS.rst in your pull request to list yourself and a short
description of the area(s) you've worked on. Also, we sometimes have swag to
give away to contributors - if you feel that Matrix-branded apparel is missing
from your life, please mail us your shipping address to matrix at matrix.org and we'll try to fix it :)
from your life, please mail us your shipping address to matrix at matrix.org and
we'll try to fix it :)
Sign off
~~~~~~~~
@@ -125,7 +143,7 @@ the contribution or otherwise have the right to contribute it to Matrix::
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
@@ -143,4 +161,9 @@ flag to ``git commit``, which uses the name and email set in your
Conclusion
~~~~~~~~~~
That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!

View File

@@ -23,12 +23,9 @@ recursive-include synapse/static *.gif
recursive-include synapse/static *.html
recursive-include synapse/static *.js
exclude jenkins.sh
exclude jenkins*.sh
exclude jenkins*
exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh
exclude test_postgresql.sh
include pyproject.toml
recursive-include changelog.d *
@@ -37,3 +34,6 @@ prune .github
prune demo/etc
prune docker
prune .circleci
exclude jenkins*
recursive-exclude jenkins *.sh

MAP.rst
View File

@@ -1,35 +0,0 @@
Directory Structure
===================
Warning: this may be a bit stale...
::
.
├── cmdclient Basic CLI python Matrix client
├── demo Scripts for running standalone Matrix demos
├── docs All doc, including the draft Matrix API spec
│   ├── client-server The client-server Matrix API spec
│   ├── model Domain-specific elements of the Matrix API spec
│   ├── server-server The server-server model of the Matrix API spec
│   └── sphinx The internal API doc of the Synapse homeserver
├── experiments Early experiments of using Synapse's internal APIs
├── graph Visualisation of Matrix's distributed message store
├── synapse The reference Matrix homeserver implementation
│   ├── api Common building blocks for the APIs
│   │   ├── events Definition of state representation Events
│   │   └── streams Definition of streamable Event objects
│   ├── app The __main__ entry point for the homeserver
│   ├── crypto The PKI client/server used for secure federation
│   │   └── resource PKI helper objects (e.g. keys)
│   ├── federation Server-server state replication logic
│   ├── handlers The main business logic of the homeserver
│   ├── http Wrappers around Twisted's HTTP server & client
│   ├── rest Servlet-style RESTful API
│   ├── storage Persistence subsystem (currently only sqlite3)
│   │   └── schema sqlite persistence schema
│   └── util Synapse-specific utilities
├── tests Unit tests for the Synapse homeserver
└── webclient Basic AngularJS Matrix web client

View File

@@ -81,7 +81,7 @@ Thanks for using Matrix!
Synapse Installation
====================
Synapse is the reference python/twisted Matrix homeserver implementation.
Synapse is the reference Python/Twisted Matrix homeserver implementation.
System requirements:
@@ -91,12 +91,13 @@ System requirements:
Installing from source
----------------------
(Prebuilt packages are available for some platforms - see `Platform-Specific
Instructions`_.)
Synapse is written in python but some of the libraries it uses are written in
C. So before we can install synapse itself we need a working C compiler and the
header files for python C extensions.
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions.
Installing prerequisites on Ubuntu or Debian::
@@ -143,21 +144,27 @@ Installing prerequisites on OpenBSD::
doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
libxslt
To install the synapse homeserver run::
To install the Synapse homeserver run::
virtualenv -p python2.7 ~/.synapse
source ~/.synapse/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install https://github.com/matrix-org/synapse/tarball/master
pip install matrix-synapse
This installs synapse, along with the libraries it uses, into a virtual
This installs Synapse, along with the libraries it uses, into a virtual
environment under ``~/.synapse``. Feel free to pick a different directory
if you prefer.
This Synapse installation can then be later upgraded by using pip again with the
update flag::
source ~/.synapse/bin/activate
pip install -U matrix-synapse
In case of problems, please see the _`Troubleshooting` section below.
There is an official Synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
this including configuration options is available in the README on
@@ -167,12 +174,7 @@ Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see
https://github.com/EMnify/matrix-synapse-auto-deploy
for details.
Configuring synapse
Configuring Synapse
-------------------
Before you can start Synapse, you will need to generate a configuration
@@ -254,26 +256,6 @@ Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
Running Synapse
===============
@@ -449,8 +431,7 @@ settings require a slightly more difficult installation process.
using the ``.`` command, rather than ``bash``'s ``source``.
5) Optionally, use ``pip`` to install ``lxml``, which Synapse needs to parse
webpages for their titles.
6) Use ``pip`` to install this repository: ``pip install
https://github.com/matrix-org/synapse/tarball/master``
6) Use ``pip`` to install this repository: ``pip install matrix-synapse``
7) Optionally, change ``_synapse``'s shell to ``/bin/false`` to reduce the
chance of a compromised Synapse server being used to take over your box.
@@ -464,37 +445,13 @@ https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-
Windows Install
---------------
Synapse can be installed on Cygwin. It requires the following Cygwin packages:
- gcc
- git
- libffi-devel
- openssl (and openssl-devel, python-openssl)
- python
- python-setuptools
The content repository requires additional packages and will be unable to process
uploads without them:
- libjpeg8
- libjpeg8-devel
- zlib
If you choose to install Synapse without these packages, you will need to reinstall
``pillow`` for changes to be applied, e.g. ``pip uninstall pillow`` ``pip install
pillow --user``
Troubleshooting:
- You may need to upgrade ``setuptools`` to get this to work correctly:
``pip install setuptools --upgrade``.
- You may encounter errors indicating that ``ffi.h`` is missing, even with
``libffi-devel`` installed. If you do, copy the ``.h`` files:
``cp /usr/lib/libffi-3.0.13/include/*.h /usr/include``
- You may need to install libsodium from source in order to install PyNacl. If
you do, you may need to create a symlink to ``libsodium.a`` so ``ld`` can find
it: ``ln -s /usr/local/lib/libsodium.a /usr/lib/libsodium.a``
If you wish to run or develop Synapse on Windows, the Windows Subsystem For
Linux provides a Linux environment on Windows 10 which is capable of using the
Debian, Fedora, or source installation methods. More information about WSL can
be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
for Windows Server.
Troubleshooting
===============
@@ -502,7 +459,7 @@ Troubleshooting
Troubleshooting Installation
----------------------------
Synapse requires pip 1.7 or later, so if your OS provides too old a version you
Synapse requires pip 8 or later, so if your OS provides too old a version you
may need to manually upgrade it::
sudo pip install --upgrade pip
@@ -537,28 +494,6 @@ failing, e.g.::
pip install twisted
On OS X, if you encounter clang: error: unknown argument: '-mno-fused-madd' you
will need to export CFLAGS=-Qunused-arguments.
Troubleshooting Running
-----------------------
If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
encryption and digital signatures.
Unfortunately PyNACL currently has a few issues
(https://github.com/pyca/pynacl/issues/53) and
(https://github.com/pyca/pynacl/issues/79) that mean it may not install
correctly, causing all tests to fail with errors about missing "sodium.h". To
fix try re-installing from PyPI or directly from
(https://github.com/pyca/pynacl)::
# Install from PyPI
pip install --user --upgrade --force pynacl
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
Running out of File Handles
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -747,6 +682,18 @@ so an example nginx configuration might look like::
}
}
and an example apache configuration may look like::
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
<Location /_matrix>
ProxyPass http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse http://127.0.0.1:8008/_matrix
</Location>
</VirtualHost>
You will also want to set ``bind_addresses: ['127.0.0.1']`` and ``x_forwarded: true``
for port 8008 in ``homeserver.yaml`` to ensure that client IP addresses are
recorded correctly.
@@ -901,7 +848,7 @@ to install using pip and a virtualenv::
virtualenv -p python2.7 env
source env/bin/activate
python synapse/python_dependencies.py | xargs pip install
python -m synapse.python_dependencies | xargs pip install
pip install lxml mock
This will run a process of downloading and installing all the needed
@@ -956,5 +903,13 @@ variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall memory use, and especially in terms of giving back RAM
to the OS. To use it, the library must simply be put in the LD_PRELOAD
environment variable when launching Synapse. On Debian, this can be done
by installing the ``libjemalloc1`` package and adding this line to
``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
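Put together, a sketch of those two steps on a Debian system (using the
package name and path given above) might be::

    sudo apt-get install libjemalloc1
    echo 'LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1' | sudo tee -a /etc/default/matrix-synapse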
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys

View File

@@ -18,7 +18,7 @@ instructions that may be required are listed later in this document.
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
pip install --upgrade --process-dependency-links matrix-synapse
# restart synapse
synctl restart
@@ -48,11 +48,11 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
Upgrading to v0.27.3
====================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.
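If Synapse was installed into a virtualenv as described in the installation
instructions, a minimal sketch for adding the optional module (assuming the
default ``~/.synapse`` location) is::

    source ~/.synapse/bin/activate
    pip install psutil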

View File

@@ -1 +0,0 @@
Add support for the SNI extension to federation TLS connections

View File

@@ -1 +0,0 @@
The test suite now can run under PostgreSQL.

View File

@@ -1 +0,0 @@
Refactor HTTP replication endpoints to reduce code duplication

View File

@@ -1 +0,0 @@
Add ability to limit number of monthly active users on the server

View File

@@ -1 +0,0 @@
Tests now correctly execute on Python 3.

View File

@@ -1 +0,0 @@
Support more federation endpoints on workers

View File

@@ -1 +0,0 @@
Basic support for room versioning

View File

@@ -1 +0,0 @@
Ability to disable client/server Synapse via conf toggle

View File

@@ -1 +0,0 @@
Fix occasional glitches in the synapse_event_persisted_position metric

View File

@@ -1 +0,0 @@
Sytests can now be run inside a Docker container.

View File

@@ -1 +0,0 @@
Fix bug on deleting 3pid when using identity servers that don't support unbind API

View File

@@ -1 +0,0 @@
Ability to whitelist specific threepids against monthly active user limiting

View File

@@ -1 +0,0 @@
Add some metrics for the appservice and federation event sending loops

View File

@@ -1 +0,0 @@
Update docker base image from alpine 3.7 to 3.8.

View File

@@ -1 +0,0 @@
Where server is disabled, block ability for locked out users to read new messages

View File

@@ -1 +0,0 @@
Make the tests pass on Twisted < 18.7.0

View File

@@ -1 +0,0 @@
Dont ship recaptcha_ajax.js, use it directly from Google

View File

@@ -1 +0,0 @@
Rename synapse.util.async to synapse.util.async_helpers to mitigate async becoming a keyword on Python 3.7.

View File

@@ -1 +0,0 @@
Synapse's tests are now formatted with the black autoformatter.

View File

@@ -1 +0,0 @@
Fixes test_reap_monthly_active_users so it passes under postgres

View File

@@ -1 +0,0 @@
Implemented a new testing base class to reduce test boilerplate.

View File

@@ -1 +0,0 @@
set admin uri via config, to be used in error messages where the user should contact the administrator

View File

@@ -1 +0,0 @@
Rename MAU prometheus metrics

View File

@@ -1 +0,0 @@
Fix missing yield in synapse.storage.monthly_active_users.initialise_reserved_users

File diff suppressed because it is too large

View File

@@ -1,6 +1,13 @@
FROM docker.io/python:2-alpine3.8
ARG PYTHON_VERSION=2
RUN apk add --no-cache --virtual .nacl_deps \
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
# install the OS build deps
RUN apk add \
build-base \
libffi-dev \
libjpeg-turbo-dev \
@@ -8,25 +15,46 @@ RUN apk add --no-cache --virtual .nacl_deps \
libxslt-dev \
linux-headers \
postgresql-dev \
su-exec \
zlib-dev
COPY . /synapse
# build things which have slow build steps, before we copy synapse, so that
# the layer can be cached.
#
# (we really just care about caching a wheel here, as the "pip install" below
# will install them again.)
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
&& pip install --upgrade \
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography \
msgpack-python \
pillow \
pynacl
# now install synapse and all of the python deps to /install.
COPY . /synapse
RUN pip install --prefix="/install" --no-warn-script-location \
lxml \
pip \
psycopg2 \
setuptools \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/docker/start.py /synapse/docker/conf / \
&& rm -rf \
setup.cfg \
setup.py \
synapse
/synapse
###
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libressl \
libxslt \
libpq \
zlib \
su-exec
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf
VOLUME ["/data"]

docker/Dockerfile-pgtests Normal file
View File

@@ -0,0 +1,12 @@
# Use the Sytest image that comes with a lot of the build dependencies
# pre-installed
FROM matrixdotorg/sytest:latest
# The Sytest image doesn't come with python, so install that
RUN apt-get -qq install -y python python-dev python-pip
# We need tox to run the tests in run_pg_tests.sh
RUN pip install tox
ADD run_pg_tests.sh /pg_tests.sh
ENTRYPOINT /pg_tests.sh

View File

@@ -88,6 +88,7 @@ variables are available for configuration:
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
uris to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size [default `10M`].
Shared secrets, that will be initialized to random values if not set:

View File

@@ -85,7 +85,7 @@ federation_rc_concurrent: 3
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "10M"
max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "10M" }}"
max_image_pixels: "32M"
dynamic_thumbnails: false

docker/run_pg_tests.sh Executable file
View File

@@ -0,0 +1,20 @@
#!/bin/bash
# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the
# caller script). It will set up the database, run it, and then use the tox
# configuration to run the tests.
set -e
# Set PGUSER so Synapse's tests know what user to connect to the database with
export PGUSER=postgres
# Initialise & start the database
su -c '/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="en_US.UTF-8" --lc-ctype="en_US.UTF-8" --username=postgres' postgres
su -c '/usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start' postgres
# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=/tmp -e py27-postgres

View File

@@ -5,6 +5,7 @@ import os
import sys
import subprocess
import glob
import codecs
# Utility functions
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
@@ -23,7 +24,7 @@ def generate_secrets(environ, secrets):
with open(filename) as handle: value = handle.read()
else:
print("Generating a random secret for {}".format(name))
value = os.urandom(32).encode("hex")
value = codecs.encode(os.urandom(32), "hex").decode()
with open(filename, "w") as handle: handle.write(value)
environ[secret] = value

View File

@@ -33,7 +33,7 @@ As an example::
< {
"access_token": "token_here",
"user_id": "@pepper_roni@test",
"user_id": "@pepper_roni:localhost",
"home_server": "test",
"device_id": "device_id_here"
}

View File

@@ -74,7 +74,7 @@ replication endpoints that it's talking to on the main synapse process.
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.
Currently, only the ``event_creator`` worker requires specifying
Currently, the ``event_creator`` and ``federation_reader`` workers require specifying
``worker_replication_http_port``.
For instance::
@@ -241,6 +241,14 @@ regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
If ``use_presence`` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions::
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
This "stub" presence handler will pass through ``GET`` request but make the
``PUT`` effectively a no-op.
It will proxy any requests it cannot handle to the main synapse instance. It
must therefore be configured with the location of the main instance, via
the ``worker_main_http_uri`` setting in the frontend_proxy worker configuration
@@ -257,6 +265,7 @@ Handles some event creation. It can handle REST endpoints matching::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.

View File

@@ -1,23 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \

View File

@@ -1,20 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \

View File

@@ -1,22 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
export PEP8SUFFIX="--output-file=violations.flake8.log"
rm .coverage* || echo "No coverage files to remove"
tox -e packaging -e pep8

View File

@@ -1,18 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -1,30 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decided whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
tox -e py27

View File

@@ -1,44 +0,0 @@
#! /bin/bash
# This clones a project from github into a named subdirectory
# If the project has a branch with the same name as this branch
# then it will checkout that branch after cloning.
# Otherwise it will checkout "origin/develop."
# The first argument is the name of the directory to checkout
# the branch into.
# The second argument is the URL of the remote repository to checkout.
# Usually something like https://github.com/matrix-org/sytest.git
set -eux
NAME=$1
PROJECT=$2
BASE=".$NAME-base"
# Update our mirror.
if [ ! -d ".$NAME-base" ]; then
# Create a local mirror of the source repository.
# This saves us from having to download the entire repository
# when this script is next run.
git clone "$PROJECT" "$BASE" --mirror
else
# Fetch any updates from the source repository.
(cd "$BASE"; git fetch -p)
fi
# Remove the existing repository so that we have a clean copy
rm -rf "$NAME"
# Cloning with --shared means that we will share portions of the
# .git directory with our local mirror.
git clone "$BASE" "$NAME" --shared
# Jenkins may have supplied us with the name of the branch in the
# environment. Otherwise we will have to guess based on the current
# commit.
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
cd "$NAME"
# check out the relevant branch
git checkout "${GIT_BRANCH}" || (
echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop"
git checkout "origin/develop"
)

View File

@@ -31,5 +31,5 @@ $TOX_BIN/pip install 'setuptools>=18.5'
$TOX_BIN/pip install 'pip>=10'
{ python synapse/python_dependencies.py
echo lxml psycopg2
echo lxml
} | xargs $TOX_BIN/pip install

View File

@@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
/* Copyright 2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);

View File

@@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);

View File

@@ -21,4 +21,4 @@ try:
verifier.verify(macaroon, key)
print "Signature is correct"
except Exception as e:
print e.message
print str(e)

View File

@@ -0,0 +1,9 @@
#!/bin/bash
set -e
# Fetch the current GitHub issue number, add one to it -- presto! The likely
# next PR number.
CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER

View File

@@ -1,57 +0,0 @@
#!/bin/bash
## CAUTION:
## This script will remove (hopefully) all trace of the given room ID from
## your homeserver.db
## Do not run it lightly.
set -e
if [ "$1" == "-h" ] || [ "$1" == "" ]; then
echo "Call with ROOM_ID as first option and then pipe it into the database. So for instance you might run"
echo " nuke-room-from-db.sh <room_id> | sqlite3 homeserver.db"
echo "or"
echo " nuke-room-from-db.sh <room_id> | psql --dbname=synapse"
exit
fi
ROOMID="$1"
cat <<EOF
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search WHERE room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF

View File

@@ -133,7 +133,7 @@ def register_new_user(user, password, server_location, shared_secret, admin):
print "Passwords do not match"
sys.exit(1)
if not admin:
if admin is None:
admin = raw_input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin = True
@@ -160,10 +160,16 @@ if __name__ == "__main__":
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
admin_group = parser.add_mutually_exclusive_group()
admin_group.add_argument(
"-a", "--admin",
action="store_true",
help="Register new user as an admin. Will prompt if omitted.",
help="Register new user as an admin. Will prompt if --no-admin is not set either.",
)
admin_group.add_argument(
"--no-admin",
action="store_true",
help="Register new user as a regular user. Will prompt if --admin is not set either.",
)
group = parser.add_mutually_exclusive_group(required=True)
@@ -197,4 +203,8 @@ if __name__ == "__main__":
else:
secret = args.shared_secret
register_new_user(args.user, args.password, args.server_url, secret, args.admin)
admin = None
if args.admin or args.no_admin:
admin = args.admin
register_new_user(args.user, args.password, args.server_url, secret, admin)

View File

@@ -17,13 +17,14 @@ ignore =
[pep8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik
# doesn't like it. E203 is contrary to PEP8.
ignore = W503,E203
# doesn't like it. E203 is contrary to PEP8. E731 is silly.
ignore = W503,E203,E731
[flake8]
# note that flake8 inherits the "ignore" settings from "pep8" (because it uses
# pep8 to do those checks), but not the "max-line-length" setting
max-line-length = 90
ignore=W503,E203,E731
[isort]
line_length = 89

View File

@@ -17,4 +17,14 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.33.2"
try:
from twisted.internet import protocol
from twisted.internet.protocol import Factory
from twisted.names.dns import DNSDatagramProtocol
protocol.Factory.noisy = False
Factory.noisy = False
DNSDatagramProtocol.noisy = False
except ImportError:
pass
__version__ = "0.33.6"

View File

@@ -25,7 +25,8 @@ from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
@@ -211,7 +212,7 @@ class Auth(object):
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent",
default=[b""]
)[0]
)[0].decode('ascii', 'surrogateescape')
if user and access_token and ip_addr:
yield self.store.insert_client_ip(
user_id=user.to_string(),
@@ -682,7 +683,7 @@ class Auth(object):
Returns:
bool: False if no access_token was given, True otherwise.
"""
query_params = request.args.get("access_token")
query_params = request.args.get(b"access_token")
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
return bool(query_params) or bool(auth_headers)
@@ -698,7 +699,7 @@ class Auth(object):
401 since some of the old clients depended on auth errors returning
403.
Returns:
str: The access_token
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
"""
@@ -720,9 +721,9 @@ class Auth(object):
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
parts = auth_headers[0].split(" ")
if parts[0] == "Bearer" and len(parts) == 2:
return parts[1]
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode('ascii')
else:
raise AuthError(
token_not_found_http_status,
@@ -738,7 +739,7 @@ class Auth(object):
errcode=Codes.MISSING_TOKEN
)
return query_params[0]
return query_params[0].decode('ascii')
@defer.inlineCallbacks
def check_in_room_or_world_readable(self, room_id, user_id):
@@ -775,31 +776,56 @@ class Auth(object):
)
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None):
def check_auth_blocking(self, user_id=None, threepid=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
Args:
user_id(str|None): If present, checks for presence against existing
MAU cohort
threepid(dict|None): If present, checks for presence against configured
reserved threepid. Used in cases where the user is trying register
with a MAU blocked server, normally they would be rejected but their
threepid is on the reserved list. user_id and
threepid should never be set at the same time.
"""
# Never fail an auth check for the server notices users
# This can be a problem where event creation is prohibited due to blocking
if user_id == self.hs.config.server_notices_mxid:
return
if self.hs.config.hs_disabled:
raise AuthError(
raise ResourceLimitError(
403, self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEED,
admin_uri=self.hs.config.admin_uri,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self.hs.config.admin_contact,
limit_type=self.hs.config.hs_disabled_limit_type
)
if self.hs.config.limit_usage_by_mau is True:
# If the user is already part of the MAU cohort
assert not (user_id and threepid)
# If the user is already part of the MAU cohort or a trial user
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
is_trial = yield self.store.is_trial_user(user_id)
if is_trial:
return
elif threepid:
# If the user does not exist yet, but is signing up with a
# reserved threepid then pass auth check
if is_threepid_reserved(self.hs.config, threepid):
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise AuthError(
403, "Monthly Active User Limits AU Limit Exceeded",
admin_uri=self.hs.config.admin_uri,
errcode=Codes.RESOURCE_LIMIT_EXCEED
raise ResourceLimitError(
403, "Monthly Active User Limit Exceeded",
admin_contact=self.hs.config.admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type="monthly_active_user"
)

View File

@@ -78,6 +78,7 @@ class EventTypes(object):
Name = "m.room.name"
ServerACL = "m.room.server_acl"
Pinned = "m.room.pinned_events"
class RejectedReason(object):
@@ -97,9 +98,17 @@ class ThirdPartyEntityKind(object):
LOCATION = "location"
class RoomVersions(object):
V1 = "1"
VDH_TEST = "vdh-test-version"
# the version we will give rooms which are created on this server
DEFAULT_ROOM_VERSION = "1"
DEFAULT_ROOM_VERSION = RoomVersions.V1
# vdh-test-version is a placeholder to get room versioning support working and tested
# until we have a working v2.
KNOWN_ROOM_VERSIONS = {"1", "vdh-test-version"}
KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
ServerNoticeMsgType = "m.server_notice"
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"

View File

@@ -56,7 +56,7 @@ class Codes(object):
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
RESOURCE_LIMIT_EXCEED = "M_RESOURCE_LIMIT_EXCEED"
RESOURCE_LIMIT_EXCEEDED = "M_RESOURCE_LIMIT_EXCEEDED"
UNSUPPORTED_ROOM_VERSION = "M_UNSUPPORTED_ROOM_VERSION"
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
@@ -224,15 +224,34 @@ class NotFoundError(SynapseError):
class AuthError(SynapseError):
"""An error raised when there was a problem authorising an event."""
def __init__(self, code, msg, errcode=Codes.FORBIDDEN, admin_uri=None):
self.admin_uri = admin_uri
super(AuthError, self).__init__(code, msg, errcode=errcode)
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.FORBIDDEN
super(AuthError, self).__init__(*args, **kwargs)
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.
For instance, the monthly active user limit for the server has been exceeded
"""
def __init__(
self, code, msg,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=None,
limit_type=None,
):
self.admin_contact = admin_contact
self.limit_type = limit_type
super(ResourceLimitError, self).__init__(code, msg, errcode=errcode)
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
admin_uri=self.admin_uri,
admin_contact=self.admin_contact,
limit_type=self.limit_type
)

View File

@@ -226,7 +226,7 @@ class Filtering(object):
jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
format_checker=FormatChecker())
except jsonschema.ValidationError as e:
raise SynapseError(400, e.message)
raise SynapseError(400, str(e))
class FilterCollection(object):
@@ -251,6 +251,7 @@ class FilterCollection(object):
"include_leave", False
)
self.event_fields = filter_json.get("event_fields", [])
self.event_format = filter_json.get("event_format", "client")
def __repr__(self):
return "<FilterCollection %s>" % (json.dumps(self._filter_json),)

View File

@@ -72,7 +72,7 @@ class Ratelimiter(object):
return allowed, time_allowed
def prune_message_counts(self, time_now_s):
for user_id in self.message_counts.keys():
for user_id in list(self.message_counts.keys()):
message_count, time_start, msg_rate_hz = (
self.message_counts[user_id]
)

View File

@@ -64,7 +64,7 @@ class ConsentURIBuilder(object):
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id,
msg=user_id.encode('ascii'),
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (

View File

@@ -24,7 +24,7 @@ try:
python_dependencies.check_requirements()
except python_dependencies.MissingRequirementError as e:
message = "\n".join([
"Missing Requirement: %s" % (e.message,),
"Missing Requirement: %s" % (str(e),),
"To install run:",
" pip install --upgrade --force \"%s\"" % (e.dependency,),
"",

View File

@@ -140,7 +140,7 @@ def listen_metrics(bind_addresses, port):
logger.info("Metrics now reporting on %s:%d", host, port)
def listen_tcp(bind_addresses, port, factory, backlog=50):
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
"""
Create a TCP socket for a port and several addresses
"""
@@ -156,7 +156,9 @@ def listen_tcp(bind_addresses, port, factory, backlog=50):
check_bind_error(e, address, bind_addresses)
def listen_ssl(bind_addresses, port, factory, context_factory, backlog=50):
def listen_ssl(
bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
):
"""
Create an SSL socket for a port and several addresses
"""

View File

@@ -51,10 +51,7 @@ class AppserviceSlaveStore(
class AppserviceServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = AppserviceSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = AppserviceSlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -117,8 +114,9 @@ class ASReplicationHandler(ReplicationClientHandler):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
@@ -138,7 +136,7 @@ def start(config_options):
"Synapse appservice", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.appservice"
@@ -174,7 +172,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)

View File

@@ -74,10 +74,7 @@ class ClientReaderSlavedStore(
class ClientReaderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = ClientReaderSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -156,7 +153,7 @@ def start(config_options):
"Synapse client reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.client_reader"
@@ -184,7 +181,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)

View File

@@ -45,6 +45,11 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
ProfileRestServlet,
)
from synapse.rest.client.v1.room import (
JoinRoomAliasServlet,
RoomMembershipRestServlet,
@@ -53,6 +58,7 @@ from synapse.rest.client.v1.room import (
)
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
@@ -62,6 +68,9 @@ logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
UserDirectoryStore,
DirectoryStore,
SlavedTransactionStore,
SlavedProfileStore,
@@ -81,10 +90,7 @@ class EventCreatorSlavedStore(
class EventCreatorServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = EventCreatorSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = EventCreatorSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -101,6 +107,9 @@ class EventCreatorServer(HomeServer):
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -160,7 +169,7 @@ def start(config_options):
"Synapse event creator", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.event_creator"
@@ -190,7 +199,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)

View File

@@ -32,6 +32,7 @@ from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -54,6 +55,7 @@ logger = logging.getLogger("synapse.app.federation_reader")
class FederationReaderSlavedStore(
SlavedAccountDataStore,
SlavedProfileStore,
SlavedApplicationServiceStore,
SlavedPusherStore,
@@ -70,10 +72,7 @@ class FederationReaderSlavedStore(
class FederationReaderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = FederationReaderSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -141,7 +140,7 @@ def start(config_options):
"Synapse federation reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_reader"
@@ -169,7 +168,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)

View File

@@ -78,10 +78,7 @@ class FederationSenderSlaveStore(
class FederationSenderServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = FederationSenderSlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -144,8 +141,9 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(FederationSenderReplicationHandler, self).on_rdata(
yield super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
@@ -162,7 +160,7 @@ def start(config_options):
"Synapse federation sender", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_sender"
@@ -203,7 +201,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-federation-sender", config)

View File

@@ -38,6 +38,7 @@ from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.base import ClientV1RestServlet, client_path_patterns
from synapse.rest.client.v2_alpha._base import client_v2_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
@@ -49,6 +50,35 @@ from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.frontend_proxy")
class PresenceStatusStubServlet(ClientV1RestServlet):
PATTERNS = client_path_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__(hs)
self.http_client = hs.get_simple_http_client()
self.auth = hs.get_auth()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_GET(self, request, user_id):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.get_json(
self.main_uri + request.uri,
headers=headers,
)
defer.returnValue((200, result))
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
defer.returnValue((200, {}))
class KeyUploadServlet(RestServlet):
PATTERNS = client_v2_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
@@ -118,10 +148,7 @@ class FrontendProxySlavedStore(
class FrontendProxyServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = FrontendProxySlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -135,6 +162,12 @@ class FrontendProxyServer(HomeServer):
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
@@ -153,7 +186,8 @@ class FrontendProxyServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
reactor=self.get_reactor()
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -194,7 +228,7 @@ def start(config_options):
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
@@ -224,7 +258,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -62,7 +62,7 @@ from synapse.rest.key.v1.server_key_resource import LocalKey
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage import are_all_users_on_domain
from synapse.storage import DataStore, are_all_users_on_domain
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
@@ -111,6 +111,8 @@ def build_resource_for_web_client(hs):
class SynapseHomeServer(HomeServer):
DATASTORE_CLASS = DataStore
def _listener_http(self, config, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
@@ -299,12 +301,16 @@ class SynapseHomeServer(HomeServer):
try:
database_engine.check_database(db_conn.cursor())
except IncorrectDatabaseSetup as e:
quit_with_error(e.message)
quit_with_error(str(e))
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
registered_reserved_users_mau_gauge = Gauge(
"synapse_admin_mau:registered_reserved_users",
"Registered users with reserved threepids"
)
def setup(config_options):
@@ -322,7 +328,7 @@ def setup(config_options):
config_options,
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if not config:
@@ -356,13 +362,13 @@ def setup(config_options):
logger.info("Preparing database: %s...", config.database_config['name'])
try:
db_conn = hs.get_db_conn(run_new_connection=False)
prepare_database(db_conn, database_engine, config=config)
database_engine.on_new_connection(db_conn)
with hs.get_db_conn(run_new_connection=False) as db_conn:
prepare_database(db_conn, database_engine, config=config)
database_engine.on_new_connection(db_conn)
hs.run_startup_checks(db_conn, database_engine)
hs.run_startup_checks(db_conn, database_engine)
db_conn.commit()
db_conn.commit()
except UpgradeDatabaseException:
sys.stderr.write(
"\nFailed to upgrade database.\n"
@@ -378,10 +384,8 @@ def setup(config_options):
def start():
hs.get_pusherpool().start()
hs.get_state_handler().start_caching()
hs.get_datastore().start_profiling()
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
reactor.callWhenRunning(start)
@@ -451,6 +455,10 @@ def run(hs):
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
@@ -525,13 +533,18 @@ def run(hs):
clock.looping_call(
hs.get_datastore().reap_monthly_active_users, 1000 * 60 * 60
)
hs.get_datastore().reap_monthly_active_users()
@defer.inlineCallbacks
def generate_monthly_active_users():
count = 0
current_mau_count = 0
reserved_count = 0
store = hs.get_datastore()
if hs.config.limit_usage_by_mau:
count = yield hs.get_datastore().get_monthly_active_count()
current_mau_gauge.set(float(count))
current_mau_count = yield store.get_monthly_active_count()
reserved_count = yield store.get_registered_reserved_users_count()
current_mau_gauge.set(float(current_mau_count))
registered_reserved_users_mau_gauge.set(float(reserved_count))
max_mau_gauge.set(float(hs.config.max_mau_value))
hs.get_datastore().initialise_reserved_users(


@@ -60,10 +60,7 @@ class MediaRepositorySlavedStore(
class MediaRepositoryServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = MediaRepositorySlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -136,7 +133,7 @@ def start(config_options):
"Synapse media repository", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.media_repository"
@@ -171,7 +168,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -78,10 +78,7 @@ class PusherSlaveStore(
class PusherServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = PusherSlaveStore
def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
@@ -148,8 +145,9 @@ class PusherReplicationHandler(ReplicationClientHandler):
self.pusher_pool = hs.get_pusherpool()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
@@ -162,11 +160,11 @@ class PusherReplicationHandler(ReplicationClientHandler):
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(
self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
@@ -193,7 +191,7 @@ def start(config_options):
"Synapse pusher", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.pusher"
@@ -230,7 +228,6 @@ def start(config_options):
def start():
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -114,7 +114,10 @@ class SynchrotronPresence(object):
logger.info("Presence process_id is %r", self.process_id)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
self.hs.get_tcp_replication().send_user_sync(user_id, is_syncing, last_sync_ms)
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
@@ -211,10 +214,13 @@ class SynchrotronPresence(object):
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
if self.hs.config.use_presence:
return [
user_id for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
else:
return set()
class SynchrotronTyping(object):
@@ -243,10 +249,7 @@ class SynchrotronApplicationService(object):
class SynchrotronServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = SynchrotronSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -332,8 +335,9 @@ class SyncReplicationHandler(ReplicationClientHandler):
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
yield super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
@@ -406,7 +410,7 @@ def start(config_options):
"Synapse synchrotron", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.synchrotron"
@@ -431,7 +435,6 @@ def start(config_options):
def start():
ss.get_datastore().start_profiling()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -1,284 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import collections
import errno
import glob
import os
import os.path
import signal
import subprocess
import sys
import time
from six import iteritems
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
YELLOW = "\x1b[1;33m"
RED = "\x1b[1;31m"
NORMAL = "\x1b[m"
def pid_running(pid):
try:
os.kill(pid, 0)
return True
except OSError as err:
if err.errno == errno.EPERM:
return True
return False
def write(message, colour=NORMAL, stream=sys.stdout):
if colour == NORMAL:
stream.write(message + "\n")
else:
stream.write(colour + message + NORMAL + "\n")
def abort(message, colour=RED, stream=sys.stderr):
write(message, colour, stream)
sys.exit(1)
def start(configfile):
write("Starting ...")
args = SYNAPSE
args.extend(["--daemonize", "-c", configfile])
try:
subprocess.check_call(args)
write("started synapse.app.homeserver(%r)" %
(configfile,), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting (exit code: %d); see above for logs" % e.returncode,
colour=RED,
)
def start_worker(app, configfile, worker_configfile):
args = [
"python", "-B",
"-m", app,
"-c", configfile,
"-c", worker_configfile
]
try:
subprocess.check_call(args)
write("started %s(%r)" % (app, worker_configfile), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting %s(%r) (exit code: %d); see above for logs" % (
app, worker_configfile, e.returncode,
),
colour=RED,
)
def stop(pidfile, app):
if os.path.exists(pidfile):
pid = int(open(pidfile).read())
try:
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
except OSError as err:
if err.errno == errno.ESRCH:
write("%s not running" % (app,), colour=YELLOW)
elif err.errno == errno.EPERM:
abort("Cannot stop %s: Operation not permitted" % (app,))
else:
abort("Cannot stop %s: Unknown error" % (app,))
Worker = collections.namedtuple("Worker", [
"app", "configfile", "pidfile", "cache_factor"
])
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"action",
choices=["start", "stop", "restart"],
help="whether to start, stop or restart the synapse",
)
parser.add_argument(
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
metavar="WORKERCONFIG",
help="start or stop a single worker",
)
parser.add_argument(
"-a", "--all-processes",
metavar="WORKERCONFIGDIR",
help="start or stop all the workers in the given directory"
" and the main synapse process",
)
options = parser.parse_args()
if options.worker and options.all_processes:
write(
'Cannot use "--worker" with "--all-processes"',
stream=sys.stderr
)
sys.exit(1)
configfile = options.configfile
if not os.path.exists(configfile):
write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), options.configfile
),
stream=sys.stderr,
)
sys.exit(1)
with open(configfile) as stream:
config = yaml.load(stream)
pidfile = config["pid_file"]
cache_factor = config.get("synctl_cache_factor")
start_stop_synapse = True
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in iteritems(cache_factors):
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
worker_configfile = options.worker
if not os.path.exists(worker_configfile):
write(
"No worker config found at %r" % (worker_configfile,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.append(worker_configfile)
if options.all_processes:
# To start the main synapse with -a you need to add a worker file
# with worker_app == "synapse.app.homeserver"
start_stop_synapse = False
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
"No worker config directory found at %r" % (worker_configdir,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.extend(sorted(glob.glob(
os.path.join(worker_configdir, "*.yaml")
)))
workers = []
for worker_configfile in worker_configfiles:
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
if worker_app == "synapse.app.homeserver":
# We need to special case all of this to pick up options that may
# be set in the main config file or in this worker config file.
worker_pidfile = (
worker_config.get("pid_file")
or pidfile
)
worker_cache_factor = worker_config.get("synctl_cache_factor") or cache_factor
daemonize = worker_config.get("daemonize") or config.get("daemonize")
assert daemonize, "Main process must have daemonize set to true"
# The master process doesn't support using worker_* config.
for key in worker_config:
if key == "worker_app": # But we allow worker_app
continue
assert not key.startswith("worker_"), \
"Main process cannot use worker_* config"
else:
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
))
action = options.action
if action == "stop" or action == "restart":
for worker in workers:
stop(worker.pidfile, worker.app)
if start_stop_synapse:
stop(pidfile, "synapse.app.homeserver")
# Wait for synapse to actually shutdown before starting it again
if action == "restart":
running_pids = []
if start_stop_synapse and os.path.exists(pidfile):
running_pids.append(int(open(pidfile).read()))
for worker in workers:
if os.path.exists(worker.pidfile):
running_pids.append(int(open(worker.pidfile).read()))
if len(running_pids) > 0:
write("Waiting for process to exit before restarting...")
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
write("All processes exited; now restarting...")
if action == "start" or action == "restart":
if start_stop_synapse:
# Check if synapse is already running
if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())):
abort("synapse.app.homeserver already running")
start(configfile)
for worker in workers:
if worker.cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(worker.cache_factor)
start_worker(worker.app, configfile, worker.configfile)
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
else:
os.environ.pop("SYNAPSE_CACHE_FACTOR", None)
if __name__ == "__main__":
main()


@@ -94,10 +94,7 @@ class UserDirectorySlaveStore(
class UserDirectoryServer(HomeServer):
def setup(self):
logger.info("Setting up.")
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
logger.info("Finished setting up.")
DATASTORE_CLASS = UserDirectorySlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
@@ -169,8 +166,9 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
super(UserDirectoryReplicationHandler, self).on_rdata(
yield super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == "current_state_deltas":
@@ -190,7 +188,7 @@ def start(config_options):
"Synapse user directory", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
@@ -231,7 +229,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -13,7 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import urllib
from six.moves import urllib
from prometheus_client import Counter
@@ -98,7 +99,7 @@ class ApplicationServiceApi(SimpleHttpClient):
def query_user(self, service, user_id):
if service.url is None:
defer.returnValue(False)
uri = service.url + ("/users/%s" % urllib.quote(user_id))
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
response = None
try:
response = yield self.get_json(uri, {
@@ -119,7 +120,7 @@ class ApplicationServiceApi(SimpleHttpClient):
def query_alias(self, service, alias):
if service.url is None:
defer.returnValue(False)
uri = service.url + ("/rooms/%s" % urllib.quote(alias))
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
response = None
try:
response = yield self.get_json(uri, {
@@ -153,7 +154,7 @@ class ApplicationServiceApi(SimpleHttpClient):
service.url,
APP_SERVICE_PREFIX,
kind,
urllib.quote(protocol)
urllib.parse.quote(protocol)
)
try:
response = yield self.get_json(uri, fields)
@@ -188,7 +189,7 @@ class ApplicationServiceApi(SimpleHttpClient):
uri = "%s%s/thirdparty/protocol/%s" % (
service.url,
APP_SERVICE_PREFIX,
urllib.quote(protocol)
urllib.parse.quote(protocol)
)
try:
info = yield self.get_json(uri, {})
@@ -228,7 +229,7 @@ class ApplicationServiceApi(SimpleHttpClient):
txn_id = str(txn_id)
uri = service.url + ("/transactions/%s" %
urllib.quote(txn_id))
urllib.parse.quote(txn_id))
try:
yield self.put_json(
uri=uri,
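
Illustrative sketch (not part of the diff): the recurring urllib change in this file swaps the Python 2-only "import urllib" for "from six.moves import urllib", which exposes the same quoting function as urllib.parse.quote on both Python 2 and 3.

    from six.moves import urllib

    # Works identically on Python 2 and 3: '@' and ':' are percent-encoded.
    assert urllib.parse.quote("@user:example.com") == "%40user%3Aexample.com"
    # Passing safe="" also encodes "/", the variant used by _create_path in a
    # later hunk of this diff.
    assert urllib.parse.quote("a/b", "") == "a%2Fb"
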


@@ -25,7 +25,7 @@ if __name__ == "__main__":
try:
config = HomeServerConfig.load_config("", sys.argv[3:])
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
print (getattr(config, key))


@@ -21,7 +21,7 @@ from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .groups import GroupsConfig
from .jwt import JWTConfig
from .jwt_config import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig


@@ -168,7 +168,8 @@ def setup_logging(config, use_worker_options=False):
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3,
encoding='utf8'
)
def sighup(signum, stack):
@@ -226,7 +227,22 @@ def setup_logging(config, use_worker_options=False):
#
# However this may not be too much of a problem if we are just writing to a file.
observer = STDLibLogObserver()
def _log(event):
if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
if event["log_text"].startswith("(UDP Port "):
return
if event["log_text"].startswith("Timing out client"):
return
return observer(event)
globalLogBeginner.beginLoggingTo(
[observer],
[_log],
redirectStandardIO=not config.no_redirect_stdio,
)


@@ -49,6 +49,9 @@ class ServerConfig(Config):
# "disable" federation
self.send_federation = config.get("send_federation", True)
# Whether to enable user presence.
self.use_presence = config.get("use_presence", True)
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
@@ -74,17 +77,23 @@ class ServerConfig(Config):
self.max_mau_value = config.get(
"max_mau_value", 0,
)
self.mau_limits_reserved_threepids = config.get(
"mau_limit_reserved_threepids", []
)
self.mau_trial_days = config.get(
"mau_trial_days", 0,
)
# Options to disable HS
self.hs_disabled = config.get("hs_disabled", False)
self.hs_disabled_message = config.get("hs_disabled_message", "")
self.hs_disabled_limit_type = config.get("hs_disabled_limit_type", "")
# Admin uri to direct users at should their instance become blocked
# due to resource constraints
self.admin_uri = config.get("admin_uri", None)
self.admin_contact = config.get("admin_contact", None)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
@@ -249,6 +258,9 @@ class ServerConfig(Config):
# hard limit.
soft_file_limit: 0
# Set to false to disable presence tracking on this homeserver.
use_presence: true
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
# gc_thresholds: [700, 10, 10]
@@ -340,6 +352,33 @@ class ServerConfig(Config):
# - port: 9000
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Homeserver blocking
#
# How to reach the server admin, used in ResourceLimitError
# admin_contact: 'mailto:admin@server.com'
#
# Global block config
#
# hs_disabled: False
# hs_disabled_message: 'Human readable reason for why the HS is blocked'
# hs_disabled_limit_type: 'error code(str), to help clients decode reason'
#
# Monthly Active User Blocking
#
# Enables monthly active user checking
# limit_usage_by_mau: False
# max_mau_value: 50
# mau_trial_days: 2
#
# Sometimes the server admin will want to ensure certain accounts are
# never blocked by mau checking. These accounts are specified here.
#
# mau_limit_reserved_threepids:
# - medium: 'email'
# address: 'reserved_user@example.com'
""" % locals()
def read_arguments(self, args):
@@ -365,6 +404,23 @@ class ServerConfig(Config):
" service on the given port.")
def is_threepid_reserved(config, threepid):
"""Check the threepid against the reserved threepid config
Args:
config(ServerConfig) - to access server config attributes
threepid(dict) - The threepid to test for
Returns:
boolean Is the threepid undertest reserved_user
"""
for tp in config.mau_limits_reserved_threepids:
if (threepid['medium'] == tp['medium']
and threepid['address'] == tp['address']):
return True
return False
def read_gc_thresholds(thresholds):
"""Reads the three integer thresholds for garbage collection. Ensures that
the thresholds are integers if thresholds are supplied.
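
Illustrative sketch (not part of the diff): a self-contained check of how the reserved-threepid option and the new is_threepid_reserved helper fit together. The function body is copied from the hunk above; the FakeConfig stand-in and the addresses are invented for illustration.

    from collections import namedtuple

    def is_threepid_reserved(config, threepid):
        for tp in config.mau_limits_reserved_threepids:
            if (threepid['medium'] == tp['medium']
                    and threepid['address'] == tp['address']):
                return True
        return False

    FakeConfig = namedtuple("FakeConfig", ["mau_limits_reserved_threepids"])
    config = FakeConfig(mau_limits_reserved_threepids=[
        {"medium": "email", "address": "reserved_user@example.com"},
    ])

    assert is_threepid_reserved(
        config, {"medium": "email", "address": "reserved_user@example.com"})
    assert not is_threepid_reserved(
        config, {"medium": "email", "address": "other@example.com"})
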


@@ -123,6 +123,6 @@ class ClientTLSOptionsFactory(object):
def get_options(self, host):
return ClientTLSOptions(
host.decode('utf-8'),
host,
CertificateOptions(verify=False).getContext()
)


@@ -18,7 +18,9 @@ import logging
from canonicaljson import json
from twisted.internet import defer, reactor
from twisted.internet.error import ConnectError
from twisted.internet.protocol import Factory
from twisted.names.error import DomainError
from twisted.web.http import HTTPClient
from synapse.http.endpoint import matrix_federation_endpoint
@@ -47,12 +49,14 @@ def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
server_response, server_certificate = yield protocol.remote_key
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.exception("Error getting key for %r" % (server_name,))
if e.status.startswith("4"):
logger.warn("Error getting key for %r: %s", server_name, e)
if e.status.startswith(b"4"):
# Don't retry for 4xx responses.
raise IOError("Cannot get key for %r" % server_name)
except (ConnectError, DomainError) as e:
logger.warn("Error getting key for %r: %s", server_name, e)
except Exception as e:
logger.exception(e)
logger.exception("Error getting key for %r", server_name)
raise IOError("Cannot get key for %r" % server_name)
@@ -78,6 +82,12 @@ class SynapseKeyClientProtocol(HTTPClient):
self._peer = self.transport.getPeer()
logger.debug("Connected to %s", self._peer)
if not isinstance(self.path, bytes):
self.path = self.path.encode('ascii')
if not isinstance(self.host, bytes):
self.host = self.host.encode('ascii')
self.sendCommand(b"GET", self.path)
if self.host:
self.sendHeader(b"Host", self.host)


@@ -16,9 +16,10 @@
import hashlib
import logging
import urllib
from collections import namedtuple
from six.moves import urllib
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
@@ -40,6 +41,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.crypto.keyclient import fetch_server_key
from synapse.util import logcontext, unwrapFirstError
from synapse.util.logcontext import (
LoggingContext,
PreserveLoggingContext,
preserve_fn,
run_in_background,
@@ -216,23 +218,34 @@ class Keyring(object):
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
loop_count = 1
while True:
wait_on = [
self.key_downloads[server_name]
(server_name, self.key_downloads[server_name])
for server_name in server_names
if server_name in self.key_downloads
]
if wait_on:
with PreserveLoggingContext():
yield defer.DeferredList(wait_on)
else:
if not wait_on:
break
logger.info(
"Waiting for existing lookups for %s to complete [loop %i]",
[w[0] for w in wait_on], loop_count,
)
with PreserveLoggingContext():
yield defer.DeferredList((w[1] for w in wait_on))
loop_count += 1
ctx = LoggingContext.current_context()
def rm(r, server_name_):
self.key_downloads.pop(server_name_, None)
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name_)
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
logger.debug("Got key lookup lock on %s", server_name)
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
@@ -432,7 +445,7 @@ class Keyring(object):
# an incoming request.
query_response = yield self.client.post_json(
destination=perspective_name,
path=b"/_matrix/key/v2/query",
path="/_matrix/key/v2/query",
data={
u"server_keys": {
server_name: {
@@ -513,8 +526,8 @@ class Keyring(object):
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_client_options_factory,
path=(b"/_matrix/key/v2/server/%s" % (
urllib.quote(requested_key_id),
path=("/_matrix/key/v2/server/%s" % (
urllib.parse.quote(requested_key_id),
)).encode("ascii"),
)


@@ -98,9 +98,9 @@ def check(event, auth_events, do_sig_check=True, do_size_check=True):
creation_event = auth_events.get((EventTypes.Create, ""), None)
if not creation_event:
raise SynapseError(
raise AuthError(
403,
"Room %r does not exist" % (event.room_id,)
"No create event in auth events",
)
creating_domain = get_domain_from_id(event.room_id)


@@ -13,13 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from distutils.util import strtobool
import six
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting
# a dict to frozen_dicts is expensive.
USE_FROZEN_DICTS = True
# bugs where we accidentally share e.g. signature dicts. However, converting a
# dict to frozen_dicts is expensive.
#
# NOTE: This is overridden by the configuration by the Synapse worker apps, but
# for the sake of tests, it is set here while it cannot be configured on the
# homeserver object itself.
USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))
class _EventInternalMetadata(object):
@@ -147,6 +156,9 @@ class EventBase(object):
def items(self):
return list(self._event_dict.items())
def keys(self):
return six.iterkeys(self._event_dict)
class FrozenEvent(EventBase):
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
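
Illustrative sketch (not part of the diff): the USE_FROZEN_DICTS default above is now read from the SYNAPSE_USE_FROZEN_DICTS environment variable via distutils' strtobool, so tests can flip it without code changes. Spellings other than the usual true/false forms make strtobool raise ValueError.

    import os
    from distutils.util import strtobool

    # strtobool maps the usual spellings to 0/1 and raises ValueError otherwise.
    assert strtobool("0") == strtobool("false") == strtobool("no") == 0
    assert strtobool("1") == strtobool("true") == strtobool("yes") == 1

    # The module default mirrors this pattern; exporting the variable before
    # starting the process (or a test run) turns frozen dicts back on.
    os.environ["SYNAPSE_USE_FROZEN_DICTS"] = "true"
    assert strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0")) == 1
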


@@ -13,17 +13,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from collections import namedtuple
import six
from twisted.internet import defer
from twisted.internet.defer import DeferredList
from synapse.api.constants import MAX_DEPTH
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import FrozenEvent
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_dict
from synapse.types import get_domain_from_id
from synapse.util import logcontext, unwrapFirstError
logger = logging.getLogger(__name__)
@@ -133,34 +136,45 @@ class FederationBase(object):
* throws a SynapseError if the signature check failed.
The deferreds run their callbacks in the sentinel logcontext.
"""
redacted_pdus = [
prune_event(pdu)
for pdu in pdus
]
deferreds = self.keyring.verify_json_objects_for_server([
(p.origin, p.get_pdu_json())
for p in redacted_pdus
])
deferreds = _check_sigs_on_pdus(self.keyring, pdus)
ctx = logcontext.LoggingContext.current_context()
def callback(_, pdu, redacted):
def callback(_, pdu):
with logcontext.PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
# let's try to distinguish between failures because the event was
# redacted (which are somewhat expected) vs actual ball-tampering
# incidents.
#
# This is just a heuristic, so we just assume that if the keys are
# about the same between the redacted and received events, then the
# received event was probably a redacted copy (but we then use our
# *actual* redacted copy to be on the safe side.)
redacted_event = prune_event(pdu)
if (
set(redacted_event.keys()) == set(pdu.keys()) and
set(six.iterkeys(redacted_event.content))
== set(six.iterkeys(pdu.content))
):
logger.info(
"Event %s seems to have been redacted; using our redacted "
"copy",
pdu.event_id,
)
else:
logger.warning(
"Event %s content has been tampered, redacting",
pdu.event_id, pdu.get_pdu_json(),
)
return redacted_event
if self.spam_checker.check_event_for_spam(pdu):
logger.warn(
"Event contains spam, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return redacted
return prune_event(pdu)
return pdu
@@ -168,21 +182,121 @@ class FederationBase(object):
failure.trap(SynapseError)
with logcontext.PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s",
pdu.event_id,
"Signature check failed for %s: %s",
pdu.event_id, failure.getErrorMessage(),
)
return failure
for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):
for deferred, pdu in zip(deferreds, pdus):
deferred.addCallbacks(
callback, errback,
callbackArgs=[pdu, redacted],
callbackArgs=[pdu],
errbackArgs=[pdu],
)
return deferreds
class PduToCheckSig(namedtuple("PduToCheckSig", [
"pdu", "redacted_pdu_json", "event_id_domain", "sender_domain", "deferreds",
])):
pass
def _check_sigs_on_pdus(keyring, pdus):
"""Check that the given events are correctly signed
Args:
keyring (synapse.crypto.Keyring): keyring object to do the checks
pdus (Collection[EventBase]): the events to be checked
Returns:
List[Deferred]: a Deferred for each event in pdus, which will either succeed if
the signatures are valid, or fail (with a SynapseError) if not.
"""
# (currently this is written assuming the v1 room structure; we'll probably want a
# separate function for checking v2 rooms)
# we want to check that the event is signed by:
#
# (a) the server which created the event_id
#
# (b) the sender's server.
#
# - except in the case of invites created from a 3pid invite, which are exempt
# from this check, because the sender has to match that of the original 3pid
# invite, but the event may come from a different HS, for reasons that I don't
# entirely grok (why do the senders have to match? and if they do, why doesn't the
# joining server ask the inviting server to do the switcheroo with
# exchange_third_party_invite?).
#
# That's pretty awful, since redacting such an invite will render it invalid
# (because it will then look like a regular invite without a valid signature),
# and signatures are *supposed* to be valid whether or not an event has been
# redacted. But this isn't the worst of the ways that 3pid invites are broken.
#
# let's start by getting the domain for each pdu, and flattening the event back
# to JSON.
pdus_to_check = [
PduToCheckSig(
pdu=p,
redacted_pdu_json=prune_event(p).get_pdu_json(),
event_id_domain=get_domain_from_id(p.event_id),
sender_domain=get_domain_from_id(p.sender),
deferreds=[],
)
for p in pdus
]
# first make sure that the event is signed by the event_id's domain
deferreds = keyring.verify_json_objects_for_server([
(p.event_id_domain, p.redacted_pdu_json)
for p in pdus_to_check
])
for p, d in zip(pdus_to_check, deferreds):
p.deferreds.append(d)
# now let's look for events where the sender's domain is different to the
# event id's domain (normally only the case for joins/leaves), and add additional
# checks.
pdus_to_check_sender = [
p for p in pdus_to_check
if p.sender_domain != p.event_id_domain and not _is_invite_via_3pid(p.pdu)
]
more_deferreds = keyring.verify_json_objects_for_server([
(p.sender_domain, p.redacted_pdu_json)
for p in pdus_to_check_sender
])
for p, d in zip(pdus_to_check_sender, more_deferreds):
p.deferreds.append(d)
# replace lists of deferreds with single Deferreds
return [_flatten_deferred_list(p.deferreds) for p in pdus_to_check]
def _flatten_deferred_list(deferreds):
"""Given a list of one or more deferreds, either return the single deferred, or
combine into a DeferredList.
"""
if len(deferreds) > 1:
return DeferredList(deferreds, fireOnOneErrback=True, consumeErrors=True)
else:
assert len(deferreds) == 1
return deferreds[0]
def _is_invite_via_3pid(event):
return (
event.type == EventTypes.Member
and event.membership == Membership.INVITE
and "third_party_invite" in event.content
)
def event_from_pdu_json(pdu_json, outlier=False):
"""Construct a FrozenEvent from an event json received over federation


@@ -66,6 +66,14 @@ class FederationClient(FederationBase):
self.state = hs.get_state_handler()
self.transport_layer = hs.get_federation_transport_client()
self._get_pdu_cache = ExpiringCache(
cache_name="get_pdu_cache",
clock=self._clock,
max_len=1000,
expiry_ms=120 * 1000,
reset_expiry_on_get=False,
)
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
now = self._clock.time_msec()
@@ -82,17 +90,6 @@ class FederationClient(FederationBase):
if destination_dict:
self.pdu_destination_tried[event_id] = destination_dict
def start_get_pdu_cache(self):
self._get_pdu_cache = ExpiringCache(
cache_name="get_pdu_cache",
clock=self._clock,
max_len=1000,
expiry_ms=120 * 1000,
reset_expiry_on_get=False,
)
self._get_pdu_cache.start()
@log_function
def make_query(self, destination, query_type, args,
retry_on_dns_fail=False, ignore_backoff=False):
@@ -212,8 +209,6 @@ class FederationClient(FederationBase):
Will attempt to get the PDU from each destination in the list until
one succeeds.
This will persist the PDU locally upon receipt.
Args:
destinations (list): Which home servers to query
event_id (str): event to fetch
@@ -229,10 +224,9 @@ class FederationClient(FederationBase):
# TODO: Rate limit the number of times we try and get the same event.
if self._get_pdu_cache:
ev = self._get_pdu_cache.get(event_id)
if ev:
defer.returnValue(ev)
ev = self._get_pdu_cache.get(event_id)
if ev:
defer.returnValue(ev)
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
@@ -271,10 +265,10 @@ class FederationClient(FederationBase):
event_id, destination, e,
)
except NotRetryingDestination as e:
logger.info(e.message)
logger.info(str(e))
continue
except FederationDeniedError as e:
logger.info(e.message)
logger.info(str(e))
continue
except Exception as e:
pdu_attempts[destination] = now
@@ -285,7 +279,7 @@ class FederationClient(FederationBase):
)
continue
if self._get_pdu_cache is not None and signed_pdu:
if signed_pdu:
self._get_pdu_cache[event_id] = signed_pdu
defer.returnValue(signed_pdu)
@@ -293,8 +287,7 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
@log_function
def get_state_for_room(self, destination, room_id, event_id):
"""Requests all of the `current` state PDUs for a given room from
a remote home server.
"""Requests all of the room state at a given event from a remote home server.
Args:
destination (str): The remote homeserver to query for the state.
@@ -302,9 +295,10 @@ class FederationClient(FederationBase):
event_id (str): The id of the event we want the state at.
Returns:
Deferred: Results in a list of PDUs.
Deferred[Tuple[List[EventBase], List[EventBase]]]:
A list of events in the state, and a list of events in the auth chain
for the given event.
"""
try:
# First we try and ask for just the IDs, as thats far quicker if
# we have most of the state and auth_chain already.
@@ -510,7 +504,7 @@ class FederationClient(FederationBase):
else:
logger.warn(
"Failed to %s via %s: %i %s",
description, destination, e.code, e.message,
description, destination, e.code, e.args[0],
)
except Exception:
logger.warn(
@@ -875,7 +869,7 @@ class FederationClient(FederationBase):
except Exception as e:
logger.exception(
"Failed to send_third_party_invite via %s: %s",
destination, e.message
destination, str(e)
)
raise RuntimeError("Failed to send to any server.")


@@ -46,6 +46,7 @@ from synapse.replication.http.federation import (
from synapse.types import get_domain_from_id
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logcontext import nested_logging_context
from synapse.util.logutils import log_function
# when processing incoming transactions, we try to handle multiple rooms in
@@ -99,7 +100,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
@log_function
def on_incoming_transaction(self, transaction_data):
def on_incoming_transaction(self, origin, transaction_data):
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
@@ -108,34 +109,33 @@ class FederationServer(FederationBase):
if not transaction.transaction_id:
raise Exception("Transaction missing transaction_id")
if not transaction.origin:
raise Exception("Transaction missing origin")
logger.debug("[%s] Got transaction", transaction.transaction_id)
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (yield self._transaction_linearizer.queue(
(transaction.origin, transaction.transaction_id),
(origin, transaction.transaction_id),
)):
result = yield self._handle_incoming_transaction(
transaction, request_time,
origin, transaction, request_time,
)
defer.returnValue(result)
@defer.inlineCallbacks
def _handle_incoming_transaction(self, transaction, request_time):
def _handle_incoming_transaction(self, origin, transaction, request_time):
""" Process an incoming transaction and return the HTTP response
Args:
origin (unicode): the server making the request
transaction (Transaction): incoming transaction
request_time (int): timestamp that the HTTP request arrived at
Returns:
Deferred[(int, object)]: http response code and body
"""
response = yield self.transaction_actions.have_responded(transaction)
response = yield self.transaction_actions.have_responded(origin, transaction)
if response:
logger.debug(
@@ -149,7 +149,7 @@ class FederationServer(FederationBase):
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(transaction.origin)
origin_host, _ = parse_server_name(origin)
pdus_by_room = {}
@@ -188,21 +188,22 @@ class FederationServer(FederationBase):
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
try:
yield self._handle_received_pdu(
transaction.origin, pdu
)
pdu_results[event_id] = {}
except FederationError as e:
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.error(
"Failed to handle PDU %s: %s",
event_id, f.getTraceback().rstrip(),
)
with nested_logging_context(event_id):
try:
yield self._handle_received_pdu(
origin, pdu
)
pdu_results[event_id] = {}
except FederationError as e:
logger.warn("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
pdu_results[event_id] = {"error": str(e)}
logger.error(
"Failed to handle PDU %s: %s",
event_id, f.getTraceback().rstrip(),
)
yield concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(),
@@ -212,7 +213,7 @@ class FederationServer(FederationBase):
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
yield self.received_edu(
transaction.origin,
origin,
edu.edu_type,
edu.content
)
@@ -224,6 +225,7 @@ class FederationServer(FederationBase):
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(
origin,
transaction,
200, response
)
@@ -618,7 +620,7 @@ class FederationServer(FederationBase):
)
yield self.handler.on_receive_pdu(
origin, pdu, get_missing=True, sent_to_us_directly=True,
origin, pdu, sent_to_us_directly=True,
)
def __str__(self):
@@ -838,9 +840,9 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
)
return self._send_edu(
edu_type=edu_type,
origin=origin,
content=content,
edu_type=edu_type,
origin=origin,
content=content,
)
def on_query(self, query_type, args):
@@ -851,6 +853,6 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
return handler(args)
return self._get_query_client(
query_type=query_type,
args=args,
query_type=query_type,
args=args,
)


@@ -36,7 +36,7 @@ class TransactionActions(object):
self.store = datastore
@log_function
def have_responded(self, transaction):
def have_responded(self, origin, transaction):
""" Have we already responded to a transaction with the same id and
origin?
@@ -50,11 +50,11 @@ class TransactionActions(object):
"transaction_id")
return self.store.get_received_txn_response(
transaction.transaction_id, transaction.origin
transaction.transaction_id, origin
)
@log_function
def set_response(self, transaction, code, response):
def set_response(self, origin, transaction, code, response):
""" Persist how we responded to a transaction.
Returns:
@@ -66,7 +66,7 @@ class TransactionActions(object):
return self.store.set_received_txn_response(
transaction.transaction_id,
transaction.origin,
origin,
code,
response,
)


@@ -32,7 +32,7 @@ Events are replicated via a separate events stream.
import logging
from collections import namedtuple
from six import iteritems, itervalues
from six import iteritems
from sortedcontainers import SortedDict
@@ -117,7 +117,7 @@ class FederationRemoteSendQueue(object):
user_ids = set(
user_id
for uids in itervalues(self.presence_changed)
for uids in self.presence_changed.values()
for user_id in uids
)


@@ -58,6 +58,7 @@ class TransactionQueue(object):
"""
def __init__(self, hs):
self.hs = hs
self.server_name = hs.hostname
self.store = hs.get_datastore()
@@ -136,26 +137,6 @@ class TransactionQueue(object):
self._processing_pending_presence = False
def can_send_to(self, destination):
"""Can we send messages to the given server?
We can't send messages to ourselves. If we are running on localhost
then we can only federation with other servers running on localhost.
Otherwise we only federate with servers on a public domain.
Args:
destination(str): The server we are possibly trying to send to.
Returns:
bool: True if we can send to the server.
"""
if destination == self.server_name:
return False
if self.server_name.startswith("localhost"):
return destination.startswith("localhost")
else:
return not destination.startswith("localhost")
def notify_new_events(self, current_id):
"""This gets called when we have some new events we might want to
send out to other servers.
@@ -278,10 +259,7 @@ class TransactionQueue(object):
self._order += 1
destinations = set(destinations)
destinations = set(
dest for dest in destinations if self.can_send_to(dest)
)
destinations.discard(self.server_name)
logger.debug("Sending to: %s", str(destinations))
if not destinations:
@@ -308,6 +286,9 @@ class TransactionQueue(object):
Args:
states (list(UserPresenceState))
"""
if not self.hs.config.use_presence:
# No-op if presence is disabled.
return
# First we queue up the new presence by user ID, so multiple presence
# updates in quick succession are correctly handled
@@ -354,7 +335,7 @@ class TransactionQueue(object):
for destinations, states in hosts_and_states:
for destination in destinations:
if not self.can_send_to(destination):
if destination == self.server_name:
continue
self.pending_presence_by_dest.setdefault(
@@ -373,7 +354,8 @@ class TransactionQueue(object):
content=content,
)
if not self.can_send_to(destination):
if destination == self.server_name:
logger.info("Not sending EDU to ourselves")
return
sent_edus_counter.inc()
@@ -388,10 +370,8 @@ class TransactionQueue(object):
self._attempt_new_transaction(destination)
def send_device_messages(self, destination):
if destination == self.server_name or destination == "localhost":
return
if not self.can_send_to(destination):
if destination == self.server_name:
logger.info("Not sending device update to ourselves")
return
self._attempt_new_transaction(destination)
@@ -459,7 +439,19 @@ class TransactionQueue(object):
# pending_transactions flag.
pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
# We can only include at most 50 PDUs per transactions
pending_pdus, leftover_pdus = pending_pdus[:50], pending_pdus[50:]
if leftover_pdus:
self.pending_pdus_by_dest[destination] = leftover_pdus
pending_edus = self.pending_edus_by_dest.pop(destination, [])
# We can only include at most 100 EDUs per transactions
pending_edus, leftover_edus = pending_edus[:100], pending_edus[100:]
if leftover_edus:
self.pending_edus_by_dest[destination] = leftover_edus
pending_presence = self.pending_presence_by_dest.pop(destination, {})
pending_edus.extend(


@@ -15,7 +15,8 @@
# limitations under the License.
import logging
import urllib
from six.moves import urllib
from twisted.internet import defer
@@ -106,7 +107,7 @@ class TransportLayerClient(object):
dest (str)
room_id (str)
event_tuples (list)
limt (int)
limit (int)
Returns:
Deferred: Results in a dict received from the remote homeserver.
@@ -951,4 +952,4 @@ def _create_path(prefix, path, *args):
Returns:
str
"""
return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
return prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)


@@ -90,8 +90,8 @@ class Authenticator(object):
@defer.inlineCallbacks
def authenticate_request(self, request, content):
json_request = {
"method": request.method,
"uri": request.uri,
"method": request.method.decode('ascii'),
"uri": request.uri.decode('ascii'),
"destination": self.server_name,
"signatures": {},
}
@@ -252,7 +252,7 @@ class BaseFederationServlet(object):
by the callback method. None if the request has already been handled.
"""
content = None
if request.method in ["PUT", "POST"]:
if request.method in [b"PUT", b"POST"]:
# TODO: Handle other method types? other content types?
content = parse_json_object_from_request(request)
@@ -261,10 +261,10 @@ class BaseFederationServlet(object):
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
logger.exception("authenticate_request failed")
logger.warn("authenticate_request failed: missing authentication")
raise
except Exception:
logger.exception("authenticate_request failed")
except Exception as e:
logger.warn("authenticate_request failed: %s", e)
raise
if origin:
@@ -353,7 +353,7 @@ class FederationSendServlet(BaseFederationServlet):
try:
code, response = yield self.handler.on_incoming_transaction(
transaction_data
origin, transaction_data,
)
except Exception:
logger.exception("on_incoming_transaction failed")
@@ -386,7 +386,7 @@ class FederationStateServlet(BaseFederationServlet):
return self.handler.on_context_state_request(
origin,
context,
query.get("event_id", [None])[0],
parse_string_from_args(query, "event_id", None),
)
@@ -397,7 +397,7 @@ class FederationStateIdsServlet(BaseFederationServlet):
return self.handler.on_state_ids_request(
origin,
room_id,
query.get("event_id", [None])[0],
parse_string_from_args(query, "event_id", None),
)
@@ -405,14 +405,12 @@ class FederationBackfillServlet(BaseFederationServlet):
PATH = "/backfill/(?P<context>[^/]*)/"
def on_GET(self, origin, content, query, context):
versions = query["v"]
limits = query["limit"]
versions = [x.decode('ascii') for x in query[b"v"]]
limit = parse_integer_from_args(query, "limit", None)
if not limits:
if not limit:
return defer.succeed((400, {"error": "Did not include limit param"}))
limit = int(limits[-1])
return self.handler.on_backfill_request(origin, context, versions, limit)
@@ -423,7 +421,7 @@ class FederationQueryServlet(BaseFederationServlet):
def on_GET(self, origin, content, query, query_type):
return self.handler.on_query_request(
query_type,
{k: v[0].decode("utf-8") for k, v in query.items()}
{k.decode('utf8'): v[0].decode("utf-8") for k, v in query.items()}
)
@@ -630,14 +628,14 @@ class OpenIdUserInfo(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
token = query.get("access_token", [None])[0]
token = query.get(b"access_token", [None])[0]
if token is None:
defer.returnValue((401, {
"errcode": "M_MISSING_TOKEN", "error": "Access Token required"
}))
return
user_id = yield self.handler.on_openid_userinfo(token)
user_id = yield self.handler.on_openid_userinfo(token.decode('ascii'))
if user_id is None:
defer.returnValue((401, {
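
Illustrative sketch (not part of the diff): several servlet changes in this file exist because, under Python 3, Twisted hands the parsed query string to the servlet as a dict keyed and valued by bytes, so lookups need byte-string keys and the values need decoding (which the parse_*_from_args helpers used above presumably wrap up with validation). The literal values below are invented.

    query = {b"access_token": [b"secret"], b"v": [b"$ev1", b"$ev2"]}

    token = query.get(b"access_token", [None])[0]          # b"secret"
    token = token.decode("ascii") if token is not None else None

    versions = [x.decode("ascii") for x in query[b"v"]]    # ["$ev1", "$ev2"]
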


@@ -520,7 +520,7 @@ class AuthHandler(BaseHandler):
"""
logger.info("Logging in user %s on device %s", user_id, device_id)
access_token = yield self.issue_access_token(user_id, device_id)
yield self.auth.check_auth_blocking()
yield self.auth.check_auth_blocking(user_id)
# the device *should* have been registered before we got here; however,
# it's possible we raced against a DELETE operation. The thing we
@@ -734,7 +734,6 @@ class AuthHandler(BaseHandler):
@defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token):
yield self.auth.check_auth_blocking()
auth_api = self.hs.get_auth()
user_id = None
try:
@@ -743,6 +742,7 @@ class AuthHandler(BaseHandler):
auth_api.validate_macaroon(macaroon, "login", True, user_id)
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
yield self.auth.check_auth_blocking(user_id)
defer.returnValue(user_id)
@defer.inlineCallbacks
@@ -895,22 +895,24 @@ class AuthHandler(BaseHandler):
Args:
password (unicode): Password to hash.
stored_hash (unicode): Expected hash value.
stored_hash (bytes): Expected hash value.
Returns:
Deferred(bool): Whether self.hash(password) == stored_hash.
"""
def _do_validate_hash():
# Normalise the Unicode in the password
pw = unicodedata.normalize("NFKC", password)
return bcrypt.checkpw(
pw.encode('utf8') + self.hs.config.password_pepper.encode("utf8"),
stored_hash.encode('utf8')
stored_hash
)
if stored_hash:
if not isinstance(stored_hash, bytes):
stored_hash = stored_hash.encode('ascii')
return make_deferred_yieldable(
threads.deferToThreadPool(
self.hs.get_reactor(),
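
Illustrative sketch (not part of the diff): the validate_hash change above is another bytes/str fix; bcrypt.checkpw wants both arguments as bytes, while a hash read back from the database may arrive as text. The password and pepper values are invented.

    import bcrypt

    pepper = b"example_pepper"
    stored_hash = bcrypt.hashpw(b"s3cret" + pepper, bcrypt.gensalt())  # bytes

    # Simulate a hash that was persisted and read back as a text string.
    stored_hash = stored_hash.decode("ascii")

    if not isinstance(stored_hash, bytes):
        stored_hash = stored_hash.encode("ascii")
    assert bcrypt.checkpw(b"s3cret" + pepper, stored_hash)
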
