Compare commits

469 Commits

Author SHA1 Message Date
Amber Brown
3b0a85fc8e changelog 2018-10-29 21:54:01 +11:00
Amber Brown
2b791865c4 version bump 2018-10-29 21:52:52 +11:00
Amber Brown
77d70a7646 Port register_new_matrix_user to Python 3 and add tests (#4085) 2018-10-26 22:05:22 +11:00
Richard van der Hoff
6cb2e2448a Merge pull request #4089 from dekonnection/master
Make Docker image listen on ipv6 as well as ipv4
2018-10-25 19:23:37 +01:00
Cédric Laudrel
379376e5e6 Make Docker image listen on ipv6 as well as ipv4
Signed-off-by: Cédric Laudrel <dek@iono.me>
2018-10-25 20:03:47 +02:00
Erik Johnston
cb53ce9d64 Refactor state group lookup to reduce DB hits (#4011)
Currently when fetching state groups from the data store we make two
hits to the database: once for members and once for non-members (unless
the request is filtered to one or the other). This adds needless load to the
database, so this PR refactors the lookup to make only a single database
hit.
2018-10-25 17:49:55 +01:00
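The refactor above is easiest to see as a change of query shape. Below is a minimal sketch, assuming a `state_groups_state` table of `(state_group, type, state_key, event_id)` rows and a plain DB-API cursor; synapse's real data-store layer is more involved:

```python
def get_state_for_groups(txn, groups, types=None):
    # One query fetches member and non-member state rows together...
    sql = (
        "SELECT state_group, type, state_key, event_id"
        " FROM state_groups_state"
        " WHERE state_group IN (%s)" % ",".join("?" * len(groups))
    )
    txn.execute(sql, list(groups))
    results = {group: {} for group in groups}
    for state_group, etype, state_key, event_id in txn.fetchall():
        # ...and any member/non-member filter is applied here in Python,
        # rather than with a second round-trip to the database.
        if types is not None and (etype, state_key) not in types:
            continue
        results[state_group][(etype, state_key)] = event_id
    return results
```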
Richard van der Hoff
e5da60d75d Merge remote-tracking branch 'origin/master' into develop 2018-10-25 17:08:09 +01:00
Erik Johnston
c85e063302 Merge pull request #4051 from matrix-org/erikj/alias_disallow_list
Add config option to control alias creation
2018-10-25 17:04:59 +01:00
Neil Johnson
95ad128851 Merge pull request #4081 from matrix-org/neilj/fix_mau_init
fix race condition in calling initialise_reserved_users
2018-10-25 16:33:40 +01:00
Neil Johnson
fcbd488e9a add new line 2018-10-25 16:13:43 +01:00
Erik Johnston
b94a43d5b5 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/alias_disallow_list 2018-10-25 15:25:31 +01:00
Erik Johnston
e5481b22aa Use allow/deny 2018-10-25 15:25:21 +01:00
Neil Johnson
c99b6c66bf Merge pull request #3975 from matrix-org/matthew/autocreate_autojoin
Autocreate autojoin rooms
2018-10-25 15:00:40 +01:00
Neil Johnson
f8fe98812b improve comments 2018-10-25 14:58:59 +01:00
Neil Johnson
f7f487e14c Merge branch 'develop' of github.com:matrix-org/synapse into matthew/autocreate_autojoin 2018-10-25 14:40:06 +01:00
Richard van der Hoff
edd2d82809 oops, run the check_isort build 2018-10-25 01:06:39 +01:00
Richard van der Hoff
46f98a6a29 Only cache the wheels 2018-10-25 01:00:58 +01:00
Richard van der Hoff
fc33e81323 Combine the pep8 and check_isort builds into one
there's really no point spinning up two separate jobs for these.
2018-10-25 00:59:49 +01:00
Richard van der Hoff
77d3b5772f disable coverage checking
I don't think we ever use this, and it slows things down. If we want to use it,
we should just do so on a couple of builds rather than all of them.
2018-10-25 00:36:00 +01:00
Neil Johnson
ea69a84bbb fix style inconsistencies 2018-10-24 17:18:08 +01:00
Neil Johnson
663d9db8e7 commit transaction before closing 2018-10-24 17:17:30 +01:00
Neil Johnson
07126e43a4 Merge branch 'develop' of github.com:matrix-org/synapse into neilj/fix_mau_init 2018-10-24 16:25:39 +01:00
Neil Johnson
9ec2186586 isort 2018-10-24 16:09:21 +01:00
Neil Johnson
9532caf6ef remove trailing white space 2018-10-24 16:08:25 +01:00
Richard van der Hoff
83d9ca7122 only fetch develop for check-newsfragments 2018-10-24 15:32:18 +01:00
Richard van der Hoff
480d98c91f Disable newsfragment checks on branch builds 2018-10-24 15:31:00 +01:00
Richard van der Hoff
ab96ee29c9 reduce git clone depth 2018-10-24 15:30:54 +01:00
Richard van der Hoff
0f4fb537ce fix branch regexp 2018-10-24 15:09:05 +01:00
Richard van der Hoff
3e438bfec8 also build on release branches 2018-10-24 15:04:55 +01:00
Richard van der Hoff
56a05583ae Disable travis-ci branch builds for most branches
(We really don't need to kick off 10 builds for the branch as well as 10 for
the PR)
2018-10-24 14:43:55 +01:00
Richard van der Hoff
94a49e0636 fix tuple
Co-Authored-By: neilisfragile <neil@matrix.org>
2018-10-24 14:39:23 +01:00
Richard van der Hoff
9f72c209ee Update changelog.d/3975.feature
Co-Authored-By: neilisfragile <neil@matrix.org>
2018-10-24 14:37:36 +01:00
Richard van der Hoff
78e8d4c3a5 Merge pull request #4083 from matrix-org/rav/fix_event_filter_validation
Allow backslashes in event field filters
2018-10-24 12:05:19 +01:00
Richard van der Hoff
3ad359e5be Merge remote-tracking branch 'origin/develop' into rav/fix_event_filter_validation 2018-10-24 11:23:49 +01:00
Richard van der Hoff
7328039117 Merge pull request #4082 from matrix-org/rav/fix_pep8
Fix a number of flake8 errors
2018-10-24 11:23:35 +01:00
Erik Johnston
3904cbf307 Merge pull request #4040 from matrix-org/erikj/states_res_v2_rebase
Add v2 state resolution algorithm
2018-10-24 11:12:12 +01:00
Richard van der Hoff
7e07d25ed6 Allow backslashes in event field filters
Fixes a bug introduced in https://github.com/matrix-org/synapse/pull/1783 which
meant that single backslashes were not allowed in event field filters.

The intention here is to allow single-backslashes, but disallow
double-backslashes.
2018-10-24 11:11:24 +01:00
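A toy validator that captures the stated intent, assuming the rule is exactly "a backslash may escape one following non-backslash character"; the grammar synapse actually uses for field filters may differ:

```python
import re

# Accept field paths where each backslash escapes a single non-backslash
# character; reject double backslashes and bare trailing backslashes.
FIELD_RE = re.compile(r"\A(?:\\[^\\]|[^\\])+\Z")

assert FIELD_RE.match(r"content\.body")       # single backslash: allowed
assert not FIELD_RE.match("content\\\\body")  # double backslash: rejected
```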
Richard van der Hoff
ef771cc4c2 Fix a number of flake8 errors
Broadly three things here:

* disable W504 which seems a bit whacko
* remove a bunch of `as e` expressions from exception handlers that don't use
  them
* use `r""` for strings which include backslashes

Also, we don't use pep8 any more, so we can get rid of the duplicate config
there.
2018-10-24 10:39:03 +01:00
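Two of those fixes in miniature (illustrative snippets, not synapse code):

```python
import re

# Raw-string fix: "\d" in a normal string is an invalid escape sequence
# (flake8 W605), so patterns containing backslashes become r"" strings.
TXN_ID_RE = re.compile(r"\d+")

# Unused-binding fix: drop "as e" when the handler never touches it.
def parse_port(value):
    try:
        return int(value)
    except ValueError:  # previously "except ValueError as e:" with e unused
        return None
```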
Erik Johnston
b313b9b009 isort 2018-10-24 10:02:41 +01:00
Erik Johnston
47a9ba435d Use match rather than search 2018-10-24 09:54:57 +01:00
Richard van der Hoff
e0b9d5f0af Merge pull request #4075 from matrix-org/rav/fix_pusher_logcontexts
Clean up the way logcontexts and threads work in the pushers
2018-10-24 09:53:57 +01:00
Erik Johnston
dacbeb2e03 Comment 2018-10-24 09:47:49 +01:00
Erik Johnston
810715f79a Rename resolve_events_with_factory 2018-10-24 09:44:22 +01:00
Erik Johnston
cb23aa4c42 Merge pull request #4063 from matrix-org/erikj/cleaup_alias_creation
Clean up room alias creation
2018-10-24 09:40:27 +01:00
Richard van der Hoff
c573794b22 Fix start_pushers vs _start_pushers confusion 2018-10-24 09:24:55 +01:00
Richard van der Hoff
e564306e31 sanity-check the is_processing flag
... and rename it, for even more sanity
2018-10-24 09:23:33 +01:00
Neil Johnson
a67d8ace9b remove errant exception and style 2018-10-23 17:44:39 +01:00
Travis Ralston
43c3f0b02f Merge pull request #3969 from turt2live/travis/fix-federated-group-requests
Handle HttpResponseException more safely for federated groups
2018-10-23 10:41:04 -06:00
Travis Ralston
3e704822be Comments help 2018-10-23 10:25:31 -06:00
Neil Johnson
329d18b39c remove white space 2018-10-23 15:27:20 +01:00
Neil Johnson
6105c6101f fix race condition in calling initialise_reserved_users 2018-10-23 15:24:58 +01:00
Richard van der Hoff
b3f6dddad2 Give some more things logcontexts (#4077) 2018-10-23 14:29:17 +01:00
Richard van der Hoff
5c445114d3 Correctly account for cpu usage by background threads (#4074)
Wrap calls to deferToThread() in a thing which uses a child logcontext to
attribute CPU usage to the right request.

While we're in the area, remove the logcontext_tracer stuff, which is never
used, and afaik doesn't work.

Fixes #4064
2018-10-23 13:12:32 +01:00
Richard van der Hoff
1fe6bbb555 Merge pull request #3698 from spantaleev/add-matrix-docker-ansible-deploy
Add information about the matrix-docker-ansible-deploy playbook
2018-10-23 10:11:58 +01:00
Richard van der Hoff
047ac0cbba Merge pull request #4072 from steamp0rt/patch-1
Add Caddy example to README
2018-10-23 09:39:55 +01:00
Richard van der Hoff
6340141300 README.rst: fix minor grammar 2018-10-22 16:17:27 +01:00
Richard van der Hoff
abd9914683 Changelog 2018-10-22 16:12:11 +01:00
Richard van der Hoff
026cd91ac8 Run PusherPool.start as a background process
We don't do anything with the result, so this is needed to give this code a
logcontext.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
f749607c91 Make on_started synchronous too
This brings it into line with on_new_notifications and on_new_receipts. It
requires a little bit of hoop-jumping in EmailPusher to load the throttle
params before the first loop.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
e7a16c6210 Remove redundant run_as_background_process() from pusherpool
`on_new_notifications` and `on_new_receipts` in `HttpPusher` and `EmailPusher`
now always return synchronously, so we can remove the `defer.gatherResults` on
their results, and the `run_as_background_process` wrappers can be removed too
because the PusherPool methods will now complete quickly enough.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
c7273c11bc Give pushers their own background logcontext
Each pusher has its own loop which runs for as long as it has work to do. This
should run in its own background thread with its own logcontext, as other
similar loops elsewhere in the system do - which means that CPU usage is
consistently attributed to that loop, rather than to whatever request happened
to start the loop.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
5110f4e425 move get_all_pushers call down
simplifies the interface to _start_pushers
2018-10-22 16:12:11 +01:00
Richard van der Hoff
04277d0ed8 Factor PusherPool._start_pusher out of _start_pushers
... and use it from start_pusher_by_id. This mostly simplifies
start_pusher_by_id.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
3e8b02c939 Rename _refresh_pusher
This is public (or at least, called from outside the class), so ought to have a
better name.
2018-10-22 16:12:11 +01:00
Richard van der Hoff
7aea00069c Merge pull request #4076 from matrix-org/rav/fix_init_logcontexts
Run MAU queries as background processes
2018-10-22 14:46:59 +01:00
Richard van der Hoff
911db96658 Merge pull request #4073 from matrix-org/rav/require_psutil
Make psutil an explicit dependency
2018-10-22 12:33:21 +01:00
Matthew Hodgson
058934b1cf uh, Matrix is called Matrix these days... 2018-10-21 12:18:30 +01:00
Richard van der Hoff
a6f421e812 Run MAU queries as background processes
Fixes #3820
2018-10-20 02:14:35 +01:00
Amber Brown
e1728dfcbe Make scripts/ and scripts-dev/ pass pyflakes (and the rest of the codebase on py3) (#4068) 2018-10-20 11:16:55 +11:00
steamport
5c3d6ea9c7 Whoops! 2018-10-19 22:00:27 +00:00
steamport
3f357583ce I HATE RST 2018-10-19 21:59:39 +00:00
steamport
9c2f99a3b7 Fix 2018-10-19 21:59:14 +00:00
steamport
08760b0d9a Fix formatting. 2018-10-19 21:57:28 +00:00
steamport
b85fe45f46 Add CL 2018-10-19 21:55:38 +00:00
Richard van der Hoff
e5b52d0f94 Make psutil an explicit dependency
As of #4027, we require psutil to be installed, so it should be in our
dependency list. We can also remove some of the conditional import code
introduced by #992.

Fixes #4062.
2018-10-19 21:51:15 +01:00
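The shape of the change, sketched (the real work is in synapse's dependency metadata, not in a snippet like this): with psutil a hard requirement, the old conditional import collapses to a plain one.

```python
# Before, from the #992 era:
#     try:
#         import psutil
#     except ImportError:
#         psutil = None
# After, with psutil guaranteed to be installed:
import psutil

def get_memory_rss():
    # callers no longer need a psutil-is-None fallback path
    return psutil.Process().memory_info().rss
```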
Richard van der Hoff
81d4f51524 Merge branch 'rav/fix_email_templates_4065' into develop 2018-10-19 21:37:32 +01:00
Richard van der Hoff
593389a077 Remove notes on fallback for email_templates
This fallback didn't work, and was removed in #4069.
2018-10-19 21:35:57 +01:00
steamport
eba48c0f16 Add Caddy example to README 2018-10-19 19:58:28 +00:00
Richard van der Hoff
f62c597d14 Merge pull request #4069 from matrix-org/rav/fix_email_templates_4065
Calculate absolute path for email templates
2018-10-19 16:33:54 +01:00
Richard van der Hoff
cc325c7069 Calculate absolute path for email templates 2018-10-19 14:01:59 +01:00
Amber Brown
e404ba9aac Fix manhole on py3 (pt 2) (#4067) 2018-10-19 22:26:00 +11:00
Amber Brown
b69216f768 Make the metrics less racy (#4061) 2018-10-19 21:45:45 +11:00
Richard van der Hoff
6a4d01ee94 Merge pull request #4060 from matrix-org/hawkowl/ssh-key-py3
Make manhole work on Python 3 again
2018-10-19 10:23:44 +01:00
Erik Johnston
3c580c2b47 Add tests for alias creation rules 2018-10-19 10:22:45 +01:00
Erik Johnston
1b4bf232b9 Add tests for config generation 2018-10-19 10:22:45 +01:00
Erik Johnston
9fafdfa97d Anchor returned regex to start and end of string 2018-10-19 10:22:45 +01:00
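What anchoring buys, sketched with a hypothetical glob-to-regex helper (the alias rules in synapse may be implemented differently):

```python
import re

def glob_to_anchored_regex(glob):
    # translate * and ?, then pin the pattern to the whole string
    pattern = re.escape(glob).replace(r"\*", ".*").replace(r"\?", ".")
    return re.compile("^" + pattern + "$")

rule = glob_to_anchored_regex("#foo*:example.com")
assert rule.search("#foobar:example.com")
assert not rule.search("bad#foobar:example.com")  # would match if unanchored
```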
Erik Johnston
f9d6c677ea Newsfile 2018-10-19 10:22:45 +01:00
Erik Johnston
084046456e Add config option to control alias creation 2018-10-19 10:22:45 +01:00
Erik Johnston
0d31109ed5 Newsfile 2018-10-19 10:14:29 +01:00
Erik Johnston
74e7617083 Clean up room alias creation 2018-10-19 10:11:56 +01:00
Amber Brown
1d17fc52ae changelog 2018-10-19 09:27:10 +11:00
Amber Brown
a36b0ec195 make a bytestring 2018-10-19 09:24:00 +11:00
Amber Brown
6190abe8da Merge pull request #4057 from matrix-org/rav/use_correct_python
Use the right python when starting workers
2018-10-19 09:09:08 +11:00
Richard van der Hoff
c69026a758 Use the right python when starting workers
We should use the same python to start the workers as we do for the main
synapse (ie, the same one used to run synctl.)
2018-10-18 21:06:30 +01:00
Richard van der Hoff
2baebace6a Merge branch 'master' into develop 2018-10-18 17:29:57 +01:00
Richard van der Hoff
c00f4d237b Add warnings about the upgrade to 0.33.7 2018-10-18 17:17:39 +01:00
Richard van der Hoff
03287c350e remove redundant changelog file
this change has been released
2018-10-18 15:09:34 +01:00
Richard van der Hoff
c632bc8654 Merge branch 'master' into develop 2018-10-18 15:07:03 +01:00
Richard van der Hoff
926da4dda8 0.33.7 2018-10-18 14:57:32 +01:00
Erik Johnston
e77f24d80a Merge pull request #4049 from matrix-org/erikj/synctl_colour
Only colourise synctl output when attached to tty
2018-10-18 10:51:14 +01:00
Richard van der Hoff
8c2b8d7f0b fix changelog formatting 2018-10-17 17:42:12 +01:00
Richard van der Hoff
c7d0f34a3c v0.33.7rc2 2018-10-17 17:40:19 +01:00
Richard van der Hoff
52e3d3813b prep changelog 2018-10-17 17:40:01 +01:00
Richard van der Hoff
0fd2321629 Fix incorrect truncation in get_missing_events
It's quite important that get_missing_events returns the *latest* events in the
room; however we were pulling event ids out of the database until we got *at
least* 10, and then taking the *earliest* of the results.

We also shouldn't really be relying on depth, and should be checking the
room_id.
2018-10-17 17:35:26 +01:00
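The bug reduces to keeping the wrong end of a sorted list. A toy illustration (not the real storage code):

```python
def truncate_to_latest(event_ids_oldest_first, limit=10):
    # the old logic gathered ids until it had *at least* `limit` of them,
    # then kept the earliest; callers need the tail: the latest events
    return event_ids_oldest_first[-limit:]

assert truncate_to_latest(list(range(15))) == list(range(5, 15))
```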
Richard van der Hoff
f1bfe6167a Merge pull request #4052 from matrix-org/rav/ship_resources_as_package_data
Ship the email templates as package_data
2018-10-17 17:34:22 +01:00
Erik Johnston
4e726783ea Merge pull request #4050 from matrix-org/erikj/fix_py37_iteration
Fix bug where we raised StopIteration in a generator
2018-10-17 17:07:19 +01:00
Erik Johnston
a5aea15a6b Assume isatty is always defined, and catch AttributeError. Also don't bother checking colour==Normal 2018-10-17 17:06:49 +01:00
Richard van der Hoff
c8f2c19991 Put the warning blob at the top of the file 2018-10-17 16:56:22 +01:00
Richard van der Hoff
1519572961 Ship the email templates as package_data
move the example email templates into the synapse package so that they can be
used as package data, which should mean that all of the packaging mechanisms
(pip, docker, debian, arch, etc) should now come with the example templates.

In order to grandfather in people who relied on the templates being in the old
place, check for that situation and fall back to using the defaults if the
templates directory does not exist.
2018-10-17 16:46:02 +01:00
Erik Johnston
3a5d8d5891 Newsfile 2018-10-17 16:12:45 +01:00
Erik Johnston
f6a0a02a62 Fix bug where we raised StopIteration in a generator
This made python 3.7 unhappy
2018-10-17 16:10:52 +01:00
Erik Johnston
1af16acd4c Newsfile 2018-10-17 13:48:09 +01:00
Erik Johnston
df33c164de Only colourise synctl output when attached to tty 2018-10-17 13:47:53 +01:00
Will Hunt
d6a7797dd1 Fix roomlist since tokens on Python 3 (#4046)
Thanks @Half-Shot !!!
2018-10-17 23:04:55 +11:00
Richard van der Hoff
6ec9d8ba0a Merge pull request #4045 from matrix-org/rav/fix_get_missing_events
Fix incorrect truncation in get_missing_events
2018-10-17 11:40:03 +01:00
Neil Johnson
c6584f4b5f clean up config error logic and imports 2018-10-17 11:36:41 +01:00
Amber Brown
80736fd8ed Merge pull request #4041 from matrix-org/rav/run_tests_in_docker
run the circle builds in docker containers
2018-10-17 20:10:44 +11:00
Richard van der Hoff
fc0f13dd03 Fix incorrect truncation in get_missing_events
It's quite important that get_missing_events returns the *latest* events in the
room; however we were pulling event ids out of the database until we got *at
least* 10, and then taking the *earliest* of the results.

We also shouldn't really be relying on depth, and should be checking the
room_id.
2018-10-16 21:10:04 +01:00
Richard van der Hoff
10405153c2 Use wget rather than curl
the docker image doesn't have wget
2018-10-16 16:47:26 +01:00
Richard van der Hoff
017eb9d17a changelog 2018-10-16 16:31:47 +01:00
Erik Johnston
4a28d3d36f Update event_auth table for rejected events 2018-10-16 16:28:42 +01:00
Erik Johnston
15133477ee Fix up use of resolve_events_with_factory 2018-10-16 16:28:42 +01:00
Erik Johnston
fc954960e9 Newsfile 2018-10-16 16:28:42 +01:00
Erik Johnston
947c7443eb Add some state res v2 tests 2018-10-16 16:28:39 +01:00
Erik Johnston
6bd856caa2 Use event.sender rather than alias event.user_id 2018-10-16 16:16:13 +01:00
Erik Johnston
e238013c44 Add v2 state res algorithm.
We hook this up to the vdh test room version.
2018-10-16 16:16:13 +01:00
Richard van der Hoff
a94967bc5f run the circle builds in docker containers
Docker containers spin up faster than entire VMs.
2018-10-16 15:29:08 +01:00
Richard van der Hoff
b8a5b0097c Various cleanups in the federation client code (#4031)
- Improve logging: log things in the right order, include destination and txids
  in all log lines, don't log successful responses twice

- Fix the docstring on TransportLayerClient.send_transaction

- Don't use treq.request, which is overcomplicated for our purposes: just use a
  twisted.web.client.Agent.

- simplify the logic for setting up the bodyProducer

- fix bytes/str confusions
2018-10-16 10:44:49 +01:00
Amber Brown
24bc15eab4 update changelog 2018-10-15 22:23:13 +11:00
Amber Brown
4e50fe3edb update changelog 2018-10-15 22:22:49 +11:00
Amber Brown
f726f2dc6c version bump 2018-10-15 22:21:45 +11:00
David Baker
03c11032c3 Merge pull request #4019 from matrix-org/dbkr/e2e_backups
E2E backups
2018-10-15 10:19:43 +01:00
Amber Brown
f9ce1b4eb0 Merge pull request #4033 from intelfx/py37-hotfixes
py3: python3.7 hotfixes
2018-10-15 20:13:19 +11:00
Slavi Pantaleev
c187638ee9 Add information about the matrix-docker-ansible-deploy playbook
Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2018-10-14 21:50:18 +03:00
Ivan Shapovalov
06bc8d2fe5 synapse/app: frontend_proxy.py: actually make workers work on py3 2018-10-14 20:08:39 +03:00
Ivan Shapovalov
fb216a22db synapse/visibility.py: fix SyntaxError on py3.7 2018-10-14 20:08:17 +03:00
Neil Johnson
1ccafb0c5e no need to join room if creator 2018-10-13 21:14:21 +01:00
Travis Ralston
164f8e4843 isort 2018-10-12 15:11:59 -06:00
Travis Ralston
7bb651de6a More sane handling of group errors and pep8 2018-10-12 14:53:30 -06:00
Travis Ralston
e3586f7c06 Merge branch 'develop' into travis/fix-federated-group-requests 2018-10-12 14:49:58 -06:00
Neil Johnson
a2bfb778c8 improve auto room join logic, comments and tests 2018-10-12 18:17:36 +01:00
David Baker
a45f2c3a00 missed one 2018-10-12 14:33:55 +01:00
Amber Brown
381d2cfdf0 Make workers work on Py3 (#4027) 2018-10-13 00:14:08 +11:00
David Baker
8c0ff0287a Linting soothes the savage PEP8 monster 2018-10-12 13:47:43 +01:00
David Baker
306361b31b Misc PR feedback bits 2018-10-12 11:48:56 +01:00
David Baker
bddfad253a Don't mangle exceptions 2018-10-12 11:48:02 +01:00
David Baker
86ef9760a7 Split /room_keys/version into 2 servlets 2018-10-12 11:35:08 +01:00
David Baker
83e72bb2f0 PR feedback pt. 1 2018-10-12 11:26:18 +01:00
Richard van der Hoff
8ddd0f273c Comments on get_all_new_events_stream
just some docstrings to clarify the behaviour here
2018-10-12 09:55:41 +01:00
Erik Johnston
e97d93948d Merge pull request #4022 from matrix-org/erikj/metrics_lazy_loaded_count
Add metric to count lazy member sync requests
2018-10-10 14:39:18 +01:00
Erik Johnston
7e561b5c1a Add description to counter metric 2018-10-10 11:41:15 +01:00
Erik Johnston
49840f5ab2 Update newsfile 2018-10-10 11:31:02 +01:00
Erik Johnston
3cbe8331e6 Track number of non-empty sync responses instead 2018-10-10 11:23:17 +01:00
Erik Johnston
395276b405 Append _total to metric and fix up spelling 2018-10-10 09:24:39 +01:00
David Baker
b8d9e108be Fix mergefail 2018-10-09 18:04:21 +01:00
Erik Johnston
20733857ab Newsfile 2018-10-09 14:18:16 +01:00
Erik Johnston
bdc27d6716 Add metric to count lazy member sync requests 2018-10-09 14:17:52 +01:00
David Baker
d34657e1f2 Add changelog 2018-10-09 11:15:54 +01:00
David Baker
d3464ce708 isort 2018-10-09 10:33:59 +01:00
Erik Johnston
9eb1a79100 Merge pull request #4008 from matrix-org/erikj/log_looping_exceptions
Log looping call exceptions
2018-10-09 10:12:10 +01:00
David Baker
dc045ef202 Merge remote-tracking branch 'origin/develop' into dbkr/e2e_backups 2018-10-09 10:05:02 +01:00
Richard van der Hoff
2418e7811a Merge pull request #4017 from matrix-org/rav/optimise_filter_events_for_server
Optimisation for filter_events_for_server
2018-10-09 09:48:56 +01:00
David Baker
f4a4dbcad1 Apparently this blank line is Very Important 2018-10-09 09:47:04 +01:00
David Baker
0c905ee015 be python3 compatible 2018-10-09 09:39:13 +01:00
Erik Johnston
6982320572 Remove unnecessary extra function call layer 2018-10-08 14:06:19 +01:00
Richard van der Hoff
495975e231 Optimisation for filter_events_for_server
We're better off hashing just the event_id than the whole ((type, state_key),
event_id) tuple - so use a dict instead of a set.

Also, iteritems > items.
2018-10-08 13:46:52 +01:00
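The data-structure swap, schematically (the shapes here are illustrative):

```python
# Before: a set of ((type, state_key), event_id) tuples, so every
# membership test hashes the whole nested tuple.
# After: a dict keyed on event_id alone - a short string, cheap to hash -
# with the (type, state_key) kept as the value.
def index_member_events(state_maps):
    member_events = {}
    for state_map in state_maps:
        for key, event_id in state_map.items():
            member_events[event_id] = key  # hash only the event_id
    return member_events
```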
Erik Johnston
8a1817f0d2 Use errback pattern and catch async failures 2018-10-08 13:29:47 +01:00
David Baker
497444f1fd Don't reuse backup versions
Since we don't actually delete the keys, just mark the versions
as deleted in the db rather than actually deleting them, then we
won't reuse versions.

Fixes https://github.com/vector-im/riot-web/issues/7448
2018-10-05 15:08:36 +01:00
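A minimal sketch of the soft-delete scheme, assuming a `deleted` flag column; the table and column names are illustrative rather than synapse's actual schema:

```python
def delete_version(txn, user_id, version):
    # flag the version instead of removing the row, so MAX(version) still
    # sees it and the next created version gets a fresh number
    txn.execute(
        "UPDATE e2e_room_keys_versions SET deleted = 1"
        " WHERE user_id = ? AND version = ?",
        (user_id, version),
    )

def next_version(txn, user_id):
    txn.execute(
        "SELECT COALESCE(MAX(version), 0) + 1 FROM e2e_room_keys_versions"
        " WHERE user_id = ?",
        (user_id,),
    )
    return txn.fetchone()[0]
```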
Erik Johnston
8164f6daf3 Newsfile 2018-10-05 11:25:09 +01:00
Erik Johnston
f7199e8734 Log looping call exceptions
If a looping call function errors, then it kills the loop entirely.
Currently it throws away the exception logs, so we should make it
actually log them.

Fixes #3929
2018-10-05 11:24:12 +01:00
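The pattern behind the fix, as a hedged sketch against Twisted's LoopingCall (synapse wraps this in its own helper):

```python
import logging

from twisted.internet.task import LoopingCall

logger = logging.getLogger(__name__)

def start_logged_looping_call(f, interval_seconds):
    call = LoopingCall(f)
    d = call.start(interval_seconds)
    # without this errback, an exception kills the loop *and* the
    # traceback is silently discarded
    d.addErrback(
        lambda fail: logger.error("Looping call died: %s", fail.getTraceback())
    )
    return call
```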
Neil Johnson
ed82043efb Merge branch 'develop' into matthew/autocreate_autojoin 2018-10-04 17:26:59 +01:00
Neil Johnson
2dadc092b8 move logic into register, fix room alias localpart bug, tests 2018-10-04 17:00:27 +01:00
Richard van der Hoff
c6dbd216e6 Merge pull request #3995 from matrix-org/rav/no_deextrem_outliers
Fix bug in forward_extremity update logic
2018-10-04 16:27:05 +01:00
Amber Brown
d86794325f Merge branch 'master' into develop 2018-10-04 22:41:52 +10:00
Amber Brown
92faeb2a3f changelog 2018-10-04 22:38:10 +10:00
Amber Brown
dd59dfc51f full version 2018-10-04 22:37:55 +10:00
Richard van der Hoff
e5f080d6a7 Merge pull request #4002 from matrix-org/rav/pin_prometheus
Pin to prometheus_client<0.4 to avoid renaming all of our metrics
2018-10-03 17:42:08 +01:00
Richard van der Hoff
a59d899668 Pin to prometheus_client<0.4 to avoid renaming all of our metrics 2018-10-03 17:20:15 +01:00
Richard van der Hoff
055fe3589e Merge pull request #3998 from matrix-org/michaelkaye/build_docker_images_of_rc_tags
Docker build all tags starting vX.Y.Z, including release candidates
2018-10-03 17:12:52 +01:00
Erik Johnston
8935ec5a93 Merge pull request #3999 from matrix-org/erikj/fix_3pid_invite_rejetion
Fix handling of rejected threepid invites
2018-10-03 14:47:41 +01:00
Erik Johnston
81e2813948 Merge pull request #3996 from matrix-org/erikj/fix_bg_iteration
Fix exception in background metrics collection
2018-10-03 14:14:38 +01:00
Erik Johnston
52e6e815be Sanitise error messages when user doesn't have permission to invite 2018-10-03 14:13:07 +01:00
Erik Johnston
01afcfc4e9 Merge pull request #3997 from matrix-org/erikj/fix_profile_error_handling
Fix exception handling in fetching remote profiles
2018-10-03 14:11:09 +01:00
Erik Johnston
93a8603904 Newsfile 2018-10-03 11:59:05 +01:00
Erik Johnston
69e857853f Fix handling of rejected threepid invites 2018-10-03 11:57:30 +01:00
Michael Kaye
d1b7c0ca05 Docker build all tags starting vX.Y.Z, inc RCs
Note the regex must match the complete string anyway,
so the leading ^ was useless.
2018-10-03 11:52:05 +01:00
Erik Johnston
6e0c66f651 Newsfile 2018-10-03 11:40:02 +01:00
Erik Johnston
495a9d06bb Fix exception handling in fetching remote profiles 2018-10-03 11:34:30 +01:00
Erik Johnston
c69faf8c4a Newsfile 2018-10-03 11:29:44 +01:00
Erik Johnston
7c570bff74 Fix exception in background metrics collection
We attempted to iterate through a list on a separate thread without
doing the necessary copying.
2018-10-03 11:28:01 +01:00
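The underlying pattern, with hypothetical names: snapshot the shared list under a lock before the metrics thread iterates it.

```python
import threading

_processes = []  # mutated by other threads
_lock = threading.Lock()

def collect_metrics():
    with _lock:
        snapshot = list(_processes)  # the copy that was missing
    for process in snapshot:
        # safe even if another thread appends to _processes meanwhile
        process.update_metrics()  # hypothetical per-process hook
```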
Richard van der Hoff
9693625e55 actually exclude outliers 2018-10-03 10:19:41 +01:00
Richard van der Hoff
2a4ea3baa8 fix newsfile name 2018-10-03 07:09:25 +01:00
Richard van der Hoff
3e39783d5d remove debugging 2018-10-02 23:44:14 +01:00
Richard van der Hoff
ae61ade891 Fix bug in forward_extremity update logic
An event does not stop being a forward_extremity just because an outlier or
rejected event refers to it.
2018-10-02 23:36:08 +01:00
Amber Brown
ed763aeba8 some changelog fixes 2018-10-03 02:27:41 +10:00
Amber Brown
ccce6f26a6 changelog 2018-10-03 02:16:21 +10:00
Amber Brown
da8f82008d version 2018-10-03 02:15:48 +10:00
Erik Johnston
25baf3b2ac Merge pull request #3991 from matrix-org/erikj/fix_pop_retry_cache
Fix bug when invalidating destination retry timings
2018-10-02 16:42:28 +01:00
Erik Johnston
9c834b8ee9 Add tests 2018-10-02 16:22:39 +01:00
Amber Brown
058a4c665e Remove Jenkins & other old dev junk (#3988) 2018-10-03 00:59:11 +10:00
Erik Johnston
8f614f842d Newsfile 2018-10-02 15:54:31 +01:00
Erik Johnston
7258d081a5 Fix bug when invalidating destination retry timings 2018-10-02 15:47:57 +01:00
Richard van der Hoff
2b8d28b095 Merge pull request #3960 from matrix-org/rav/fix_missing_create_event_error
Fix error handling for missing auth_event
2018-10-02 13:57:54 +01:00
Amber Brown
7232917f12 Disable frozen dicts by default (#3987) 2018-10-02 22:53:47 +10:00
Erik Johnston
0f7033fb98 Merge pull request #3990 from matrix-org/erikj/fix_logging_lost_connection
Fix error when logging incomplete requests
2018-10-02 14:10:05 +02:00
Erik Johnston
eeb755d17b Newsfile 2018-10-02 12:06:25 +01:00
Erik Johnston
334e075dd8 Fix error when logging incomplete requests
If a connection is lost before a request is read from Request, Twisted
sets `method` (and `uri`) attributes to dummy values. These dummy values
have incorrect types (i.e. they're not bytes), and so things like
`__repr__` would raise an exception.

To fix this we add a helper method to return the method with a
consistent type.
2018-10-02 12:06:22 +01:00
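A sketch of such a helper; synapse's actual accessor lives on its Request subclass, and the dummy-value handling here is an assumption:

```python
def get_request_method(request):
    # Request.method is normally bytes; after a dropped connection Twisted
    # substitutes a placeholder of a different type, so normalise to str
    method = request.method
    if isinstance(method, bytes):
        return method.decode("ascii", errors="replace")
    return str(method)
```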
Richard van der Hoff
8c41b0ca66 Merge pull request #3989 from matrix-org/rav/better_stacktraces
Avoid reraise, to improve stacktraces
2018-10-02 10:47:38 +01:00
Erik Johnston
bc29946809 Merge pull request #3986 from matrix-org/erikj/fix_sync_with_redacted_state
Fix lazy loaded sync with rejected state events
2018-10-02 11:38:35 +02:00
Richard van der Hoff
8174c6725b Avoid reraise, to improve stacktraces 2018-10-01 18:50:34 +01:00
Richard van der Hoff
5908a8f6f5 Merge branch 'develop' into rav/fix_missing_create_event_error 2018-10-01 17:16:20 +01:00
Amber Brown
abdc141c2b Update instructions to point to pip install (#3985) 2018-10-02 01:08:38 +10:00
Richard van der Hoff
b5b93f45d5 Merge pull request #3968 from matrix-org/rav/fix_federation_errors
Fix exceptions when handling incoming transactions
2018-10-01 15:54:24 +01:00
Amber Brown
6e05fd032c Fix userconsent on Python 3 (#3938) 2018-10-02 00:11:58 +10:00
Erik Johnston
5fa27eac78 Newsfile 2018-10-01 14:24:41 +01:00
Erik Johnston
82f922b4af Fix lazy loaded sync with rejected state events
In particular, we assume that the name and canonical alias events in
the state have not been rejected. In practice this may not be the case
(though we should probably think about fixing that) so lets ensure that
we gracefully handle that case, rather than 404'ing the sync request
like we do now.
2018-10-01 14:19:40 +01:00
Erik Johnston
8f5c23d0cd Merge pull request #3933 from matrix-org/erikj/destination_retry_cache
Add a five minute cache to get_destination_retry_timings
2018-10-01 14:20:30 +02:00
Richard van der Hoff
53c5fa4e6c Further reduce the size of the docker image (#3972)
Rewrite the dockerfile as a multistage build: this means we can get rid of a whole load of cruft which we don't need.
2018-10-01 12:29:17 +01:00
Erik Johnston
4f3e3ac192 Correctly match 'dict.pop' api 2018-10-01 12:25:27 +01:00
Erik Johnston
8ea887856c Don't update eviction metrics on explicit removal 2018-10-01 12:00:58 +01:00
Matthew Hodgson
faa462ef79 changelog 2018-09-29 02:21:01 +01:00
Matthew Hodgson
23b6a0537f emit room aliases event 2018-09-29 02:19:37 +01:00
Matthew Hodgson
5b68f29f48 fix thinkos 2018-09-29 02:14:40 +01:00
Matthew Hodgson
8f646f2d04 fix UTs 2018-09-28 15:37:28 +01:00
Bruno Windels
9a8bbc9a59 add --no-admin flag to registration script (#3836) 2018-09-28 13:36:56 +01:00
Richard van der Hoff
3deaad2fb4 Merge pull request #3964 from matrix-org/rav/remove_localhost_checks
remove spurious federation checks on localhost
2018-09-28 13:35:47 +01:00
Richard van der Hoff
26557c9db4 Merge pull request #3980 from matrix-org/rav/remove_broken_cache_call
Remove redundant call to start_get_pdu_cache
2018-09-28 13:13:19 +01:00
Richard van der Hoff
965154d60a Fix complete fail to do the right thing 2018-09-28 12:45:54 +01:00
Jan Christian Grünhage
5304449738 build python 3 docker images on circle CI (#3976) 2018-09-28 12:36:35 +01:00
Richard van der Hoff
19475cf337 Remove redundant call to start_get_pdu_cache
I think this got forgotten in #3932. We were getting away with it because it
was the last call in this function.
2018-09-28 12:01:23 +01:00
Richard van der Hoff
9c8cec5dab Merge remote-tracking branch 'origin/develop' into erikj/destination_retry_cache 2018-09-28 10:51:09 +01:00
Matthew Hodgson
07340cdaca untested stab at autocreating autojoin rooms 2018-09-28 01:42:53 +01:00
Richard van der Hoff
f094f715cf Merge remote-tracking branch 'origin/develop' into rav/fix_federation_errors 2018-09-27 15:18:21 +01:00
Richard van der Hoff
36c62a67c4 Merge pull request #3794 from matrix-org/erikj/faster_typing
Improve performance of getting typing updates for replication
2018-09-27 15:14:26 +01:00
Richard van der Hoff
b5c976347b Merge pull request #3946 from matrix-org/michaelkaye/automate_docker_hub_upload
Build and push docker image to hub automatically
2018-09-27 15:07:41 +01:00
Richard van der Hoff
e1e3e77bfd Merge pull request #3967 from matrix-org/rav/federation_handler_cleanups
Clarifications in FederationHandler
2018-09-27 15:06:59 +01:00
Amber Brown
6de3884e5e Merge pull request #3965 from matrix-org/rav/notify_app_services_bg_process
Run notify_app_services as a bg process
2018-09-27 23:40:30 +10:00
Amber Brown
a512e637ac Merge pull request #3970 from schnuffle/develop-py3
Replaced all occurrences of e.message with str(e)
2018-09-27 23:39:45 +10:00
Amber Brown
b3064532d0 Run our oldest supported configuration in CI (#3952) 2018-09-27 23:21:54 +10:00
Amber Brown
861c063ebc Update 3970.bugfix 2018-09-27 22:47:01 +10:00
Richard van der Hoff
484a9b8c81 Remove redundant, failing, test
This test didn't do what it claimed to do, and what it claimed to do was the
same as test_cant_hide_direct_ancestors anyway.

This stuff is tested by sytest anyway.
2018-09-27 13:11:23 +01:00
Schnuffle
a873896096 Added changelog fragment 2018-09-27 14:01:44 +02:00
Schnuffle
dc5db01ff2 Replaced all occurrences of e.message with str(e)
Signed-off-by: Schnuffle  <schnuffle@github.com>
2018-09-27 13:38:50 +02:00
Richard van der Hoff
51d33d5178 Merge pull request #3961 from matrix-org/neilj/lock_mau_upserts
fix #3854 MAU transaction errors
2018-09-27 12:31:37 +01:00
Michael Kaye
74bbdd0412 Do the changelog.d dance 2018-09-27 12:01:43 +01:00
Michael Kaye
8607397ea7 Make a ":latest" tag, and a SHA1 commit ID one too.
Latest is horrible and makes debugging what has happened anywhere a
nightmare. We push a latest because of demand for it, but we'll also
push a SHA1 commit id so those wanting to know what they're running
(and be able to roll back if required) can use those instead.

Note that latest here is defined as "most recent master commit" not
"most recent released version", as the actual semantics of making latest
correct while still being able to build bugfixed releases of previous
versions is just ARGH. So we define it as "master" not "latest release".
2018-09-27 12:01:41 +01:00
Michael Kaye
a3c11f7320 Also, don't run this job on any branches 2018-09-27 12:01:09 +01:00
Michael Kaye
73a089f461 Make username configurable in the UI too 2018-09-27 12:00:40 +01:00
Michael Kaye
3852c2bc39 Fix typo 2018-09-27 12:00:01 +01:00
Michael Kaye
0d36fe3563 Build and push docker image to hub 2018-09-27 11:59:17 +01:00
Richard van der Hoff
948a6d8776 changelog 2018-09-27 11:37:39 +01:00
Richard van der Hoff
333bee27f5 Include event when resolving state for missing prevs
If we have a forward extremity for a room as `E`, and you receive `A`, `B`,
s.t. `A -> B -> E`, and `B` also points to an unknown event `X`, then we need
to do state res between `X` and `E`.

When that happens, we need to make sure we include `X` in the state that goes
into the state res alg.

Fixes #3934.
2018-09-27 11:37:39 +01:00
Richard van der Hoff
bd61c82bdf Include state from remote servers in pdu handling
If we've fetched state events from remote servers in order to resolve the state
for a new event, we need to actually pass those events into
resolve_events_with_factory (so that it can do the state res) and then persist
the ones we need - otherwise other bits of the codebase get confused about why
we have state groups pointing to non-existent events.
2018-09-27 11:37:39 +01:00
Richard van der Hoff
a215b698c4 Fix "unhashable type: 'list'" exception in federation handling
get_state_groups returns a map from state_group_id to a list of FrozenEvents,
so was very much the wrong thing to be putting as one of the entries in the
list passed to resolve_events_with_factory (which expects maps from
(event_type, state_key) to event id).

We actually want get_state_groups_ids().values() rather than
get_state_groups().

This fixes the main problem in #3923, but there are other problems with this
bit of code which get discovered once you do so.
2018-09-27 11:37:39 +01:00
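The type confusion is clearest written out. Schematic shapes, with invented event ids:

```python
# get_state_groups()     -> {state_group_id: [FrozenEvent, ...]}
# get_state_groups_ids() -> {state_group_id: {(type, state_key): event_id}}
state_groups_ids = {
    42: {
        ("m.room.create", ""): "$create",
        ("m.room.member", "@alice:example.com"): "$alice_join",
    },
}

# resolve_events_with_factory expects maps from (type, state_key) to
# event id - i.e. the *values* of get_state_groups_ids(), not lists of
# FrozenEvents:
state_sets = list(state_groups_ids.values())
```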
Richard van der Hoff
28223841e0 more comments 2018-09-27 11:31:51 +01:00
Richard van der Hoff
ad8e137062 changelog 2018-09-27 11:31:51 +01:00
Richard van der Hoff
e3c159863d Clarifications in FederationHandler
* add some comments on things that look a bit bogus
* rename this `state` variable to avoid confusion with the `state` used
  elsewhere in this function. (There was no actual conflict, but it was
  a confusing bit of spaghetti.)
2018-09-27 11:31:51 +01:00
Richard van der Hoff
92abd3d6d6 Merge pull request #3966 from matrix-org/rav/rx_txn_logging_2
Logging improvements
2018-09-27 11:27:46 +01:00
Richard van der Hoff
4a15a3e4d5 Include eventid in log lines when processing incoming federation transactions (#3959)
when processing incoming transactions, it can be hard to see what's going on,
because we process a bunch of stuff in parallel, and because we may end up
recursively working our way through a chain of three or four events.

This commit creates a way to use logcontexts to add the relevant event ids to
the log lines.
2018-09-27 11:25:34 +01:00
Richard van der Hoff
ae6ad4cf41 docstrings and unittests for storage.state (#3958)
I spent ages trying to figure out how I was going mad...
2018-09-27 11:22:25 +01:00
Amber Brown
2c695fd1aa Merge pull request #3963 from matrix-org/rav/get_state_for_room_docstring
fix docstring for FederationClient.get_state_for_room
2018-09-27 18:06:38 +10:00
Travis Ralston
82fa31799c Remove debugging statement 2018-09-26 14:01:02 -06:00
Travis Ralston
2a7b3439de Changelog 2018-09-26 13:51:34 -06:00
Travis Ralston
b4c3bc1734 Handle HttpResponseException more safely for federated groups 2018-09-26 13:48:04 -06:00
Travis Ralston
219606a6ed Fix exception documentation in matrixfederationclient.py 2018-09-26 13:26:27 -06:00
Richard van der Hoff
e70b4ce069 Logging improvements
Some logging tweaks to help with debugging incoming federation transactions
2018-09-26 17:36:14 +01:00
Richard van der Hoff
3c37c7e45b changelog 2018-09-26 17:01:30 +01:00
Richard van der Hoff
a87d419a85 Run notify_app_services as a bg process
This ensures that its resource usage metrics get recorded somewhere rather than
getting lost.

(It also fixes an error when called from a nested logging context which
completes before the bg process)
2018-09-26 17:00:40 +01:00
Richard van der Hoff
0c4a99ea2d changelog 2018-09-26 16:54:54 +01:00
Richard van der Hoff
9453c65948 remove spurious federation checks on localhost
There's really no point in checking for destinations called "localhost" because
there is nothing stopping people creating other DNS entries which point to
127.0.0.1. The right fix for this is
https://github.com/matrix-org/synapse/issues/3953.

Blocking localhost, on the other hand, means that you get a surprise when
trying to connect a test server on localhost to an existing server (with a
'normal' server_name).
2018-09-26 16:53:52 +01:00
Richard van der Hoff
ab59f3d8da changelog 2018-09-26 16:52:24 +01:00
Richard van der Hoff
607eec0456 fix docstring for FederationClient.get_state_for_room
trivial fixes for docstring
2018-09-26 16:52:24 +01:00
Neil Johnson
d514608b5c towncrier 2018-09-26 16:19:53 +01:00
Neil Johnson
28781b65e7 fix #3854 2018-09-26 16:16:41 +01:00
Amber Brown
d4e0861ff9 Reduce the load on our CI (#3957)
* changelog

* reduce circleci config

* plus a handy script

* fix regex
2018-09-27 00:23:21 +10:00
Richard van der Hoff
a5e70b31a1 changelog 2018-09-26 14:41:14 +01:00
Richard van der Hoff
8afddf7afe Fix error handling for missing auth_event
When we were authorizing an event, if there was no `m.room.create` in its
auth_events, we would raise a SynapseError with a cryptic message, which then
meant that we would bail out of processing any incoming events, rather than
storing a rejection for the faulty event and moving on.

We should treat the absent event the same as any other auth failure, by
raising an AuthError, so that the event is marked as rejected.
2018-09-26 14:40:16 +01:00
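Schematically (the exception classes below stand in for synapse's real ones):

```python
class AuthError(Exception):
    """Auth failure: the event is stored as rejected and processing of
    the remaining incoming events continues."""

class SynapseError(Exception):
    """Internal error: processing of the whole batch bails out."""

def check_has_create_event(auth_events):
    if ("m.room.create", "") not in auth_events:
        # before: a SynapseError with a cryptic message
        raise AuthError("No create event in auth events")
```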
Richard van der Hoff
bf01efb864 Merge branch 'develop' into rav/hacky_cache_factor_fix 2018-09-26 13:24:07 +01:00
Erik Johnston
8834396b8a Actually set cache factors in workers 2018-09-26 13:23:02 +01:00
Richard van der Hoff
4e8276a34a Merge pull request #3956 from matrix-org/rav/fix_expiring_cache_len
Fix ExpiringCache.__len__ to be accurate
2018-09-26 13:10:13 +01:00
Richard van der Hoff
5b4028fa78 Merge branch 'rav/fix_expiring_cache_len' into erikj/destination_retry_cache 2018-09-26 12:55:53 +01:00
Richard van der Hoff
7ee94fc1ba Log which cache is throwing exceptions 2018-09-26 12:43:08 +01:00
Amber Brown
66a1d57adb Merge pull request #3948 from matrix-org/rav/no_symlink_synctl
Move synctl into top dir to avoid a symlink
2018-09-26 21:41:58 +10:00
Amber Brown
c2185f14d7 Merge pull request #3924 from matrix-org/rav/clean_up_on_receive_pdu
Comments and interface cleanup for on_receive_pdu
2018-09-26 21:41:26 +10:00
Richard van der Hoff
8ac9fa7375 changelog 2018-09-26 12:35:10 +01:00
Erik Johnston
3baf6e1667 Fix ExpiringCache.__len__ to be accurate
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.

(This appears to have been a longstanding bug, but one which has been made more
of a problem by #3932 and #3933).

(This was originally done by Erik as part of #3933. I'm cherry-picking it
because really it's a fix in its own right)
2018-09-26 12:32:29 +01:00
Richard van der Hoff
f65163627f Merge pull request #3911 from matrix-org/jcgruenhage/docker-support-python3
make python 3 work in the docker container
2018-09-25 15:18:09 +01:00
Jan Christian Grünhage
df55a943ca Update Dockerfile 2018-09-25 14:33:38 +02:00
Jan Christian Grünhage
e7fa032126 Update .dockerignore 2018-09-25 14:33:19 +02:00
Richard van der Hoff
0649306fde Update grafana dashboard 2018-09-25 13:29:28 +01:00
Richard van der Hoff
a1cd37390f Merge remote-tracking branch 'origin/develop' into erikj/destination_retry_cache 2018-09-25 12:03:54 +01:00
Richard van der Hoff
4c3e7eeec5 Merge pull request #3932 from matrix-org/erikj/auto_start_expiring_caches
Fix some instances of ExpiringCache not expiring cache items
2018-09-25 12:02:57 +01:00
Jérémy Farnaud
6cf261930a added "media-src: 'self'" to CSP for resources (#3578)
Synapse doesn’t allow for media resources to be played directly from
Chrome. It is a problem for users on other networks (e.g. IRC)
communicating with Matrix users through a gateway. The gateway sends
them the raw URL for the resource when a Matrix user uploads a video
and the video cannot be played directly in Chrome using that URL.

Chrome argues it is not authorized to play the video because of the
Content Security Policy. Chrome checks for the "media-src" policy which
is missing, and defaults to the "default-src" policy which is "none".

As Synapse already sends "object-src: 'self'" I thought it wouldn’t be
a problem to add "media-src: 'self'" to the CSP to fix this problem.
2018-09-25 11:55:02 +01:00
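The addition in schematic form, using Twisted's header API (the surrounding media-repository code is elided, and the exact policy string is illustrative):

```python
def add_csp_header(request):
    request.setHeader(
        b"Content-Security-Policy",
        # media-src 'self' is the new directive, alongside the existing
        # object-src 'self'; default-src stays 'none'
        b"default-src 'none'; object-src 'self'; media-src 'self';",
    )
```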
Richard van der Hoff
94f7befc31 Merge pull request #3925 from matrix-org/erikj/fix_producers_unregistered
Fix spurious exceptions when client closes connection
2018-09-25 11:52:06 +01:00
Richard van der Hoff
bd469adaa9 changelog 2018-09-25 11:22:17 +01:00
Richard van der Hoff
c53336986d Move synctl into top dir to avoid a symlink
symlinks apparently break setuptools on python3 and alpine
(https://bugs.python.org/issue31940), so let's stop using a symlink and just
use the file directly.
2018-09-25 11:19:27 +01:00
Richard van der Hoff
e4e96486a9 Merge pull request #3947 from matrix-org/rav/attr_version
We require attrs 16.0.0
2018-09-25 11:16:42 +01:00
Richard van der Hoff
45a1053d44 changelog 2018-09-25 10:45:34 +01:00
Richard van der Hoff
a9d84f4e44 We require attrs 16.0.0
Ref: https://github.com/matrix-org/synapse/issues/3945
2018-09-25 10:43:39 +01:00
Matthew Hodgson
787d22ed6c Only lazy load self-members on initial sync
Given we have disabled lazy loading for incr syncs in #3840, we can make self-LL more efficient by only doing it on initial sync.  Also adds a bounds check for if/when we change our mind, so that we don't try to include LL members on sync responses with no timeline.
2018-09-25 00:49:26 +01:00
Amber Brown
fbe5ba25f6 Merge branch 'master' into develop 2018-09-25 03:10:01 +10:00
Amber Brown
5121ae97f5 Merge tag 'v0.33.5.1'
Internal Changes
----------------

- Fix incompatibility with older Twisted version in tests. Thanks
  @OlegGirko!
([\#3940](https://github.com/matrix-org/synapse/issues/3940))
2018-09-25 03:09:30 +10:00
Amber Brown
fc691ca97c changelog 2018-09-25 02:54:59 +10:00
Amber Brown
6b6cb32297 bump version 2018-09-25 02:54:34 +10:00
Amber Brown
e37c221b97 changelog for 3940 2018-09-25 02:51:55 +10:00
Oleg Girko
7d3f639844 Fix compatibility issue with older Twisted in tests.
Older Twisted (18.4.0) returns TimeoutError instead of
ConnectingCancelledError when a connection times out.
This change allows tests to be compatible with this behaviour.

Signed-off-by: Oleg Girko <ol@infoserver.lv>
2018-09-25 02:51:18 +10:00
Amber Brown
04eed80a73 Merge branch 'master' into develop 2018-09-24 23:42:25 +10:00
Amber Brown
829213523e Merge tag 'v0.33.5'
Features
--------

- Python 3.5 and 3.6 support is now in beta.
([\#3576](https://github.com/matrix-org/synapse/issues/3576))
- Implement `event_format` filter param in `/sync`
([\#3790](https://github.com/matrix-org/synapse/issues/3790))
- Add synapse_admin_mau:registered_reserved_users metric to expose
number of real reserved users
([\#3846](https://github.com/matrix-org/synapse/issues/3846))

Bugfixes
--------

- Remove connection ID for replication prometheus metrics, as it creates
a large number of new series.
([\#3788](https://github.com/matrix-org/synapse/issues/3788))
- guest users should not be part of mau total
([\#3800](https://github.com/matrix-org/synapse/issues/3800))
- Bump dependency on pyopenssl 16.x, to avoid incompatibility with
recent Twisted.
([\#3804](https://github.com/matrix-org/synapse/issues/3804))
- Fix existing room tags not coming down sync when joining a room
([\#3810](https://github.com/matrix-org/synapse/issues/3810))
- Fix jwt import check
([\#3824](https://github.com/matrix-org/synapse/issues/3824))
- fix VOIP crashes under Python 3 (#3821)
([\#3835](https://github.com/matrix-org/synapse/issues/3835))
- Fix manhole so that it works with latest openssh clients
([\#3841](https://github.com/matrix-org/synapse/issues/3841))
- Fix outbound requests occasionally wedging, which can result in
federation breaking between servers.
([\#3845](https://github.com/matrix-org/synapse/issues/3845))
- Show heroes if room name/canonical alias has been deleted
([\#3851](https://github.com/matrix-org/synapse/issues/3851))
- Fix handling of redacted events from federation
([\#3859](https://github.com/matrix-org/synapse/issues/3859))
-  ([\#3874](https://github.com/matrix-org/synapse/issues/3874))
- Mitigate outbound federation randomly becoming wedged
([\#3875](https://github.com/matrix-org/synapse/issues/3875))

Internal Changes
----------------

- CircleCI tests now run on the potential merge of a PR.
([\#3704](https://github.com/matrix-org/synapse/issues/3704))
- http/ is now ported to Python 3.
([\#3771](https://github.com/matrix-org/synapse/issues/3771))
- Improve human readable error messages for threepid
registration/account update
([\#3789](https://github.com/matrix-org/synapse/issues/3789))
- Make /sync slightly faster by avoiding needless copies
([\#3795](https://github.com/matrix-org/synapse/issues/3795))
- handlers/ is now ported to Python 3.
([\#3803](https://github.com/matrix-org/synapse/issues/3803))
- Limit the number of PDUs/EDUs per federation transaction
([\#3805](https://github.com/matrix-org/synapse/issues/3805))
- Only start postgres instance for postgres tests on Travis CI
([\#3806](https://github.com/matrix-org/synapse/issues/3806))
- tests/ is now ported to Python 3.
([\#3808](https://github.com/matrix-org/synapse/issues/3808))
- crypto/ is now ported to Python 3.
([\#3822](https://github.com/matrix-org/synapse/issues/3822))
- rest/ is now ported to Python 3.
([\#3823](https://github.com/matrix-org/synapse/issues/3823))
- add some logging for the keyring queue
([\#3826](https://github.com/matrix-org/synapse/issues/3826))
- speed up lazy loading by 2-3x
([\#3827](https://github.com/matrix-org/synapse/issues/3827))
- Improved Dockerfile to remove build requirements after building
reducing the image size.
([\#3834](https://github.com/matrix-org/synapse/issues/3834))
- Disable lazy loading for incremental syncs for now
([\#3840](https://github.com/matrix-org/synapse/issues/3840))
- federation/ is now ported to Python 3.
([\#3847](https://github.com/matrix-org/synapse/issues/3847))
- Log when we retry outbound requests
([\#3853](https://github.com/matrix-org/synapse/issues/3853))
- Removed some excess logging messages.
([\#3855](https://github.com/matrix-org/synapse/issues/3855))
- Speed up purge history for rooms that have been previously purged
([\#3856](https://github.com/matrix-org/synapse/issues/3856))
- Refactor some HTTP timeout code.
([\#3857](https://github.com/matrix-org/synapse/issues/3857))
- Fix running merged builds on CircleCI
([\#3858](https://github.com/matrix-org/synapse/issues/3858))
- Fix typo in replication stream exception.
([\#3860](https://github.com/matrix-org/synapse/issues/3860))
- Add in flight real time metrics for Measure blocks
([\#3871](https://github.com/matrix-org/synapse/issues/3871))
- Disable buffering and automatic retrying in treq requests to prevent
timeouts. ([\#3872](https://github.com/matrix-org/synapse/issues/3872))
- mention jemalloc in the README
([\#3877](https://github.com/matrix-org/synapse/issues/3877))
- Remove unmaintained "nuke-room-from-db.sh" script
([\#3888](https://github.com/matrix-org/synapse/issues/3888))
2018-09-24 23:41:35 +10:00
Amber Brown
e3aa2c0b75 towncrier 2018-09-24 23:40:27 +10:00
Amber Brown
e302f40e20 update version 2018-09-24 23:40:05 +10:00
Erik Johnston
19dc676d1a Fix ExpiringCache.__len__ to be accurate
It used to try and produce an estimate, which was sometimes negative.
This caused metrics to be sad, so let's always just calculate it from
scratch.
2018-09-21 16:25:42 +01:00
Erik Johnston
5230bc1471 Newsfile 2018-09-21 15:05:37 +01:00
Erik Johnston
fdd1a62e8d Add a five minute cache to get_destination_retry_timings
Hopefully helps with #3931
2018-09-21 14:56:12 +01:00
Erik Johnston
79eded1ae4 Make ExpiringCache slightly more performant 2018-09-21 14:52:21 +01:00
Erik Johnston
d3f80cbc9c Newsfile 2018-09-21 14:24:39 +01:00
Erik Johnston
8601c24287 Fix some instances of ExpiringCache not expiring cache items
ExpiringCache required that `start()` be called before it would actually
start expiring entries. A number of places didn't do that.

This PR removes `start` from ExpiringCache, and automatically starts the
background reaping process on creation instead.
2018-09-21 14:19:46 +01:00
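A cut-down sketch of the constructor change (illustrative, not synapse's full ExpiringCache):

```python
from twisted.internet.task import LoopingCall

class ExpiringCache:
    def __init__(self, expiry_ms):
        self._cache = {}
        self._expiry_ms = expiry_ms
        # formerly a separate start() that call sites had to remember;
        # now the reaper begins as soon as the cache exists
        self._reaper = LoopingCall(self._prune_expired)
        self._reaper.start(expiry_ms / 1000.0 / 2)

    def _prune_expired(self):
        pass  # drop entries older than self._expiry_ms (elided)
```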
Erik Johnston
ad53a5497d Merge pull request #3927 from matrix-org/erikj/handle_background_errors
Handle exceptions thrown by background tasks
2018-09-21 09:26:30 +01:00
Matthew Hodgson
a2ddaa90f2 Always LL ourselves if we're in a room to simplify clients (#3916)
Should fix https://github.com/vector-im/riot-web/issues/7209
2018-09-20 21:21:54 +01:00
Erik Johnston
94ae1dea3c Add missing logger 2018-09-20 17:05:34 +01:00
Erik Johnston
168491c412 Newsfile 2018-09-20 16:16:52 +01:00
Erik Johnston
9ea408441f Handle exceptions thrown by background tasks
Fixes #3921
2018-09-20 16:15:21 +01:00
Jan Christian Grünhage
3af7a95a9d add changelog 2018-09-20 14:55:11 +02:00
Jan Christian Grünhage
8dfb33d325 make python 3 work in the docker container 2018-09-20 14:55:11 +02:00
Erik Johnston
13f6f1624b Newsfile 2018-09-20 13:52:09 +01:00
Erik Johnston
b28a7ed503 Fix spurious exceptions when client closes connection
If an HTTP handler throws an exception while processing a request we
automatically write a JSON error response. If the handler had already
started writing a response twisted throws an exception.

We should check for this case and simply abort the connection if there
was an error after the response had started being written.
2018-09-20 13:44:20 +01:00
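The guard, sketched against Twisted's Request API:

```python
def respond_with_error(request, error_bytes):
    if request.startedWriting:
        # a response is already part-written: sending another would make
        # twisted raise, so just drop the connection instead
        request.transport.abortConnection()
    else:
        request.setResponseCode(500)
        request.setHeader(b"Content-Type", b"application/json")
        request.write(error_bytes)
        request.finish()
```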
Neil Johnson
23b53b4ef8 Merge pull request #3868 from matrix-org/neilj/fix_room_invite_mail_links
Neilj/fix room invite mail links
2018-09-20 13:32:38 +01:00
Richard van der Hoff
6bb9a4b1d6 changelog 2018-09-20 13:15:20 +01:00
Richard van der Hoff
703de4ec13 Comments and interface cleanup for on_receive_pdu
Add some informative comments about what's going on here.

Also, `sent_to_us_directly` and `get_missing` were doing the same thing (apart
from in `_handle_queued_pdus`, which looks like a bug), so let's get rid of
`get_missing` and use `sent_to_us_directly` consistently.
2018-09-20 13:06:55 +01:00
Amber Brown
1f3f5fcf52 Fix client IPs being broken on Python 3 (#3908) 2018-09-20 20:14:34 +10:00
Erik Johnston
3fd68d533b Merge pull request #3914 from matrix-org/erikj/remove_retry_cache
Remove get_destination_retry_timings cache
2018-09-20 10:54:49 +01:00
Amber Brown
741571cf22 Add a way to run tests in PostgreSQL in Docker (#3699) 2018-09-20 18:12:45 +10:00
Amber Brown
aeca5a5ed5 Add a regression test for logging on failed connections (#3912) 2018-09-20 16:28:18 +10:00
Richard van der Hoff
642199570c Improve the logging when handling a federation transaction (#3904)
Let's try to rationalise the logging that happens when we are processing an
incoming transaction, to make it easier to figure out what is going wrong when
they take ages. In particular:

- make everything start with a [room_id event_id] prefix
- make sure we log a warning when catching exceptions rather than just turning
  them into other, more cryptic, exceptions.
2018-09-19 17:28:18 +01:00
Erik Johnston
ce846bb620 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/faster_typing 2018-09-19 15:08:36 +01:00
Erik Johnston
1a24d4effa Newsfile 2018-09-19 15:03:05 +01:00
Erik Johnston
bbab6ebfd9 Fix up changelog and remove spurious comment 2018-09-19 14:45:14 +01:00
Erik Johnston
392a54128c pep8 2018-09-19 14:37:49 +01:00
Erik Johnston
83ee5592a5 Newsfile 2018-09-19 14:22:59 +01:00
Erik Johnston
b9158ac2bf Remove get_destination_retry_timings cache
Currently we rely on the master to invalidate this cache promptly.
However, after having moved most federation endpoints off of master this
no longer happens, causing outbound federation to get blackholed.

Fixes #3798
2018-09-19 14:22:57 +01:00
Erik Johnston
cb016baa37 Merge pull request #3910 from matrix-org/erikj/update_timeout
Update to use new timeout function everywhere.
2018-09-19 11:45:11 +01:00
Erik Johnston
80d2d50f47 Fixup 2018-09-19 11:19:47 +01:00
Erik Johnston
9407bcf37a Replace custom DeferredTimeoutError with defer.TimeoutError 2018-09-19 11:07:29 +01:00
Erik Johnston
6c48aa0256 Run canceller first to allow it to generate correct error 2018-09-19 11:07:27 +01:00
Erik Johnston
6bbe3d5732 Newsfile 2018-09-19 10:43:13 +01:00
Erik Johnston
a334e1cace Update to use new timeout function everywhere.
The existing deferred timeout helper function (and the one into twisted)
suffer from a bug when a deferred's canceller throws an exception, #3842.

The new helper function doesn't suffer from this problem.
2018-09-19 10:39:40 +01:00
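A condensed sketch of such a helper. The essential difference from the buggy version is that an exception raised by `deferred.cancel()` is contained instead of propagated:

```python
from twisted.internet import defer
from twisted.python import failure

def timeout_deferred(deferred, timeout_seconds, reactor):
    new_d = defer.Deferred()

    def time_it_out():
        try:
            deferred.cancel()
        except Exception:
            pass  # a throwing canceller (#3842) must not break the timeout
        if not new_d.called:
            new_d.errback(defer.TimeoutError("timed out"))

    delayed_call = reactor.callLater(timeout_seconds, time_it_out)

    def convert(value):
        if delayed_call.active():
            delayed_call.cancel()
        if not new_d.called:
            if isinstance(value, failure.Failure):
                new_d.errback(value)
            else:
                new_d.callback(value)

    deferred.addBoth(convert)
    return new_d
```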
Richard van der Hoff
05b9937cd7 update changelog for #3909 2018-09-19 09:17:54 +01:00
Amber Brown
47c02e6332 Merge pull request #3909 from turt2live/travis/fix-logging-1
Fix matrixfederationclient.py logging: Destination is a string
2018-09-19 18:14:47 +10:00
Amber Brown
3f0d8e6b09 Remove documentation referencing Cygwin (#3873) 2018-09-19 18:14:30 +10:00
Amber Brown
3d6b24fb1b Merge pull request #3907 from matrix-org/rav/set_sni_to_server_name
Set SNI to the server_name, not whatever was in the SRV record
2018-09-19 17:59:33 +10:00
Amber Brown
f773ecbd61 Merge pull request #3903 from matrix-org/rav/increase_get_missing_events_timeout
Bump timeout on get_missing_events request
2018-09-19 17:57:48 +10:00
Travis Ralston
13b31a9baf Changelog 2018-09-18 15:30:25 -06:00
Travis Ralston
35aec19f0a Destination is a string 2018-09-18 15:29:30 -06:00
Richard van der Hoff
38ead946a9 Merge remote-tracking branch 'origin/develop' into neilj/fix_room_invite_mail_links 2018-09-18 19:02:45 +01:00
Richard van der Hoff
a219ce8726 Use directory server for room joins (#3899)
When we do a join, always try the server we used for the alias lookup first.

Fixes #2418
2018-09-18 18:27:37 +01:00
Richard van der Hoff
31c15dcb80 Refactor matrixfederationclient to fix logging (#3906)
We want to wait until we have read the response body before we log the request
as complete, otherwise a confusing thing happens where the request appears to
have completed, but we later fail it.

To do this, we factor the salient details of a request out to a separate
object, which can then keep track of the txn_id, so that it can be logged.
2018-09-18 18:17:15 +01:00
Amber Brown
c600886d47 Merge pull request #3894 from matrix-org/hs/phone_home_py_version
Add python_version phone home stat
2018-09-19 02:40:04 +10:00
Richard van der Hoff
edabc18938 changelog 2018-09-18 17:04:20 +01:00
Richard van der Hoff
b3097396e7 Set SNI to the server_name, not whatever was in the SRV record
Fixes #3843
2018-09-18 17:01:12 +01:00
Richard van der Hoff
8565d9a9fe changelog 2018-09-18 15:05:26 +01:00
Richard van der Hoff
550007cb0e Bump timeout on get_missing_events request 2018-09-18 15:02:51 +01:00
Richard van der Hoff
286d6930b7 Merge pull request #3879 from matrix-org/matthew/fix-autojoin
don't ratelimit autojoins
2018-09-18 13:07:01 +01:00
Richard van der Hoff
892432c818 Merge pull request #3882 from SimmyD/max_upload_docker_var
Add variable for changing the max upload size in Docker container
2018-09-18 13:05:09 +01:00
Richard van der Hoff
1e09a1d48a Merge pull request #3889 from matrix-org/rav/404_on_remove_unknown_alias
Return a 404 when deleting unknown room alias
2018-09-18 12:59:30 +01:00
Richard van der Hoff
ad95ec12ca Merge pull request #3895 from matrix-org/rav/decode_bytes_in_metrics
Fix more b'abcd' noise in metrics
2018-09-18 12:58:49 +01:00
Richard van der Hoff
1f4296efcf Merge pull request #3897 from aaronraimist/synaspse-typo
Fix typo in README, synaspse -> synapse
2018-09-18 11:37:09 +01:00
Aaron Raimist
505abb38f0 Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:07:19 -05:00
Aaron Raimist
4ecdf73fc7 Fix typo in README, synaspse -> synapse
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:05:23 -05:00
Will Hunt
2b41f2ed76 Create 3894.feature 2018-09-17 17:48:48 +01:00
Will Hunt
5baa087312 typo 2018-09-17 17:37:56 +01:00
Will Hunt
b58714789f make pip happy? 2018-09-17 17:35:54 +01:00
Richard van der Hoff
06f2dbbb5d changelog 2018-09-17 17:18:26 +01:00
Richard van der Hoff
ac80cb08fe Fix more b'abcd' noise in metrics 2018-09-17 17:16:50 +01:00
Will Hunt
9a1cceeca9 Use a string for versions 2018-09-17 17:09:06 +01:00
Richard van der Hoff
c7131baefc Merge pull request #3892 from matrix-org/rav/decode_bytes_in_request_logs
Fix some b'abcd' noise in logs and metrics
2018-09-17 17:07:10 +01:00
Richard van der Hoff
f75b9961c6 Reinstate missing null check 2018-09-17 16:52:02 +01:00
Will Hunt
2b39494cd5 Add python_version phone home stat 2018-09-17 16:35:18 +01:00
Richard van der Hoff
0cb7afff35 changelog 2018-09-17 16:17:25 +01:00
Richard van der Hoff
f00a9d2636 Fix some b'abcd' noise in logs and metrics
Python 3 compatibility: make sure that we decode some byte sequences before we
use them to create log lines and metrics labels.
2018-09-17 16:15:42 +01:00
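The Python 3 gotcha in miniature::

    method = b"GET"
    print("method=%s" % (method,))                  # method=b'GET'
    print("method=%s" % (method.decode("ascii"),))  # method=GET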
Richard van der Hoff
c9c50284d7 README: run python_dependencies with -m
... to stop things which try to import `types` getting `synapse.types` instead
2018-09-17 15:58:47 +01:00
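Concretely (the second form is the one the README diff below now recommends)::

    # Running the script by path puts the synapse/ directory itself on
    # sys.path, so a library's "import types" can pick up synapse/types.py:
    python synapse/python_dependencies.py | xargs pip install

    # Running it as a module keeps sys.path sane:
    python -m synapse.python_dependencies | xargs pip install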
Amber Brown
1e70f1dbab changelog 2018-09-17 22:36:56 +10:00
Amber Brown
b56ef14629 update python 3 changelog 2018-09-17 22:36:41 +10:00
Amber Brown
fe88907d04 version 2018-09-17 22:33:22 +10:00
Richard van der Hoff
c1ae6b1bce changelog 2018-09-17 13:21:08 +01:00
Richard van der Hoff
85a43f4167 Return a 404 when deleting unknown room alias
As per https://github.com/matrix-org/matrix-doc/issues/1675

Fixes https://github.com/matrix-org/synapse/issues/2782
2018-09-17 13:19:00 +01:00
Amber Brown
c6363f7269 Merge pull request #3888 from matrix-org/rav/nuke_nuke_rooms
Remove nuke-room-from-db.sh script
2018-09-17 21:30:25 +10:00
Richard van der Hoff
2a8996b67d changelog 2018-09-17 11:51:38 +01:00
Richard van der Hoff
e7b3b4d8c2 Remove nuke-room-from-db.sh script
The problem with this script is that it is largely untested, entirely
unmaintained, and running it is likely to make your synapse blow up in
exciting ways.

For example, it leaves a bunch of tables with dead values in it, like
event_to_state_groups.

Having it here sends a message that it is a supported part of
synapse, which is absolutely not the case.
2018-09-17 11:42:42 +01:00
Simon Dwyer
0e46ff6904 Add the ability to change MAX_UPLOAD_SIZE for the Docker container via environment variables.
Signed-off-by: Simon Dwyer <simon@thedwyers.co>
2018-09-16 13:33:33 +10:00
Simon Dwyer
da864a92c9 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer
f472abd792 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer
9c749a6b61 Added 'MAX_UPLOAD_SIZE' variable and set default to "10M" 2018-09-16 13:12:57 +10:00
Matthew Hodgson
c71b93f2a4 changelog 2018-09-15 22:28:28 +01:00
Matthew Hodgson
d42d79e3c3 don't ratelimit autojoins 2018-09-15 22:27:41 +01:00
Matthew Hodgson
9d13ff4da8 missing changelog 2018-09-15 21:04:10 +01:00
Vincent Breitmoser
c8642720c9 mention libjemalloc in readme (#3877) 2018-09-15 21:03:27 +01:00
Erik Johnston
24efb2a70d Fix timeout function
Turns out deferred.cancel sometimes throws, so we do that last to ensure
that we always resolve the new deferred.
2018-09-15 11:38:39 +01:00
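Why deferred.cancel() can throw at all: a Deferred's canceller is arbitrary user code, and Twisted propagates its exceptions. A minimal, hypothetical reproduction::

    from twisted.internet import defer

    def bad_canceller(d):
        raise RuntimeError("canceller blew up")

    d = defer.Deferred(bad_canceller)
    try:
        d.cancel()  # the RuntimeError propagates out of cancel()
    except RuntimeError as e:
        print("cancel() raised: %s" % (e,))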
Erik Johnston
c30cfff572 Merge pull request #3875 from matrix-org/erikj/extra_timeouts
Add an awful secondary timeout to fix wedged requests
2018-09-14 19:56:11 +01:00
Erik Johnston
335b23a078 Newsfile 2018-09-14 19:25:58 +01:00
Erik Johnston
fcfe7a850d Add an awful secondary timeout to fix wedged requests
This is an attempt to mitigate #3842 by adding yet-another-timeout
2018-09-14 19:23:07 +01:00
Matthew Hodgson
024be6cf18 don't filter membership events based on history visibility (#3874)
don't filter membership events based on history visibility
as we will already have filtered the messages in the timeline, and state events
are always visible.

and because @erikjohnston said so.
2018-09-14 18:12:52 +01:00
Erik Johnston
3e6e94fe9f Merge pull request #3872 from matrix-org/hawkowl/timeouts-2
timeouts 2: electric boogaloo
2018-09-14 16:58:44 +01:00
Erik Johnston
8b3652831c Merge pull request #3871 from matrix-org/erikj/in_flight_block_metrics
Add in flight real time metrics for Measure blocks
2018-09-14 16:40:31 +01:00
Amber Brown
bc9af88a2d fix 2018-09-15 00:26:00 +10:00
Amber Brown
90f8e606e2 changelog 2018-09-15 00:23:55 +10:00
Erik Johnston
d0f6c1ce21 Remove spurious comment 2018-09-14 15:12:36 +01:00
Erik Johnston
9e2f9a7b57 Measure outbound requests 2018-09-14 15:11:26 +01:00
Erik Johnston
941ac0f085 Newsfile 2018-09-14 15:08:37 +01:00
Erik Johnston
f6e82dcddb Tests 2018-09-14 15:08:37 +01:00
Erik Johnston
0a81038ea0 Add in flight real time metrics for Measure blocks 2018-09-14 15:08:37 +01:00
Travis Ralston
ad9198cc34 Merge pull request #3860 from matrix-org/travis/typo-1
Fix minor typo in exception
2018-09-13 16:32:48 -06:00
Neil Johnson
bef1d6f3bf towncrier 2018-09-13 22:53:35 +01:00
Neil Johnson
7de1989ea2 fix link for case that config.email_riot_base_url is set 2018-09-13 22:43:50 +01:00
Travis Ralston
984db8bb08 Create 3860.misc 2018-09-13 12:06:04 -06:00
Amber Brown
c971aa7b9d fix 2018-09-14 03:57:02 +10:00
Amber Brown
8f08d848f5 fix 2018-09-14 03:53:56 +10:00
Travis Ralston
f1a7264663 Fix minor typo in exception 2018-09-13 11:51:12 -06:00
Amber Brown
7c33ab76da redact better 2018-09-14 03:45:34 +10:00
Amber Brown
63755fa4c2 we do that higher up 2018-09-14 03:21:47 +10:00
Amber Brown
73884ebac5 Merge remote-tracking branch 'origin/develop' into hawkowl/timeouts-2 2018-09-14 03:11:25 +10:00
Amber Brown
7c27c4d51c merge (#3576) 2018-09-14 03:11:11 +10:00
Amber Brown
1c3f4d9ca5 buffer? 2018-09-14 03:09:13 +10:00
David Baker
bc74925c5b WIP e2e key backups
Continues from uhoreg's branch

This just fixed the errcode on /room_keys/version if no backup and
updates the schema delta to be on the latest so it gets run
2018-09-13 17:02:59 +01:00
Erik Johnston
6c0f8d9d50 Merge pull request #3856 from matrix-org/erikj/speed_up_purge
Make purge history slightly faster
2018-09-13 16:14:46 +01:00
Erik Johnston
ed5331a627 comment 2018-09-13 16:10:56 +01:00
Amber Brown
cb64fe2cb7 Merge pull request #3859 from matrix-org/erikj/add_iterkeys
Fix handling of redacted events from federation
2018-09-14 01:06:44 +10:00
Erik Johnston
13193a6e2b Newsfile 2018-09-13 15:46:45 +01:00
Amber Brown
3126b88d35 fix circleci merged builds (#3858)
* fix

* changelog
2018-09-14 00:44:31 +10:00
Erik Johnston
89a76d1889 Fix handling of redacted events from federation
If we receive an event that doesn't pass their content hash check (e.g.
due to already being redacted) then we hit a bug which causes an
exception to be raised, which then promptly stops the event (and
request) from being processed.

This affects all sorts of federation APIs, including joining rooms with
a redacted state event.
2018-09-13 15:44:12 +01:00
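The shape of the fix, heavily simplified (helper names here are assumptions, not Synapse's actual API)::

    def check_and_maybe_prune(pdu, compute_content_hash, prune_event):
        """If the PDU fails its content hash check -- e.g. because we were
        sent an already-redacted copy -- fall back to the pruned (redacted)
        form instead of raising and aborting the whole request."""
        expected = pdu.hashes.get("sha256")
        if compute_content_hash(pdu) != expected:
            return prune_event(pdu)
        return pdu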
Amber Brown
bfa0b759e0 Attempt to figure out what's going on with timeouts (#3857) 2018-09-14 00:15:51 +10:00
Erik Johnston
9cbd0094f0 pep8 2018-09-13 15:15:35 +01:00
Erik Johnston
9dbe38ea7d Create indices after insertion 2018-09-13 15:05:52 +01:00
Erik Johnston
93139a1fb8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/speed_up_purge 2018-09-13 12:57:09 +01:00
Erik Johnston
e7cd7cb0f0 Newsfile 2018-09-13 12:55:40 +01:00
Erik Johnston
c857f5ef9b Make purge history slightly faster
Don't pull out events that are outliers and won't be deleted, as nothing
should happen to them.
2018-09-13 12:48:10 +01:00
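Roughly, the saved work (table and column names follow Synapse's schema, but the query itself is illustrative)::

    # Outliers are events held outside the room DAG; purge won't touch
    # them, so don't bother selecting them in the first place.
    sql = """
        SELECT event_id FROM events
        WHERE room_id = ? AND topological_ordering < ?
            AND NOT outlier
    """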
Amber Brown
b7d2fb5eb9 Remove some superfluous logging (#3855) 2018-09-13 19:59:32 +10:00
Hubert Chathi
3801b8aa03 try to make flake8 and isort happy 2018-09-06 11:35:19 -04:00
Erik Johnston
5f02017aea Improve performance of getting typing updates for replication
Fetching the list of all new typing notifications involved iterating
over all rooms and comparing their serial. Lets move to using a stream
change cache, like we do for other streams.
2018-09-05 10:20:40 +01:00
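Sketch of the new lookup, modelled on Synapse's StreamChangeCache interface (treat the exact method names as assumptions)::

    def get_all_typing_updates(cache, room_serials, last_id, current_id):
        # Ask the stream change cache which rooms changed since last_id,
        # instead of iterating every room and comparing serials.
        changed_rooms = cache.get_all_entities_changed(last_id)
        if changed_rooms is None:
            # The cache doesn't go back that far; fall back to a full scan.
            changed_rooms = room_serials
        return [
            (room_serials[room_id], room_id)
            for room_id in changed_rooms
            if last_id < room_serials.get(room_id, 0) <= current_id
        ]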
Hubert Chathi
16a31c6fce update to newer Synapse APIs 2018-08-24 22:51:25 -04:00
Hubert Chathi
83caead95a Merge branch 'develop' into e2e_backups 2018-08-24 11:44:26 -04:00
Hubert Chathi
42a394caa2 allow session_data to be any JSON instead of just a string 2018-08-21 14:51:34 -04:00
Hubert Chathi
8550a7e9c2 allow auth_data to be any JSON instead of a string 2018-08-21 10:38:00 -04:00
Matthew Hodgson
4f7064f6b5 missing import 2018-08-12 19:14:31 -04:00
Matthew Hodgson
f0cede5556 missing import 2018-08-12 19:14:31 -04:00
Matthew Hodgson
54ac18e832 use parse_string 2018-08-12 19:14:31 -04:00
Matthew Hodgson
66a4ca1d28 404 nicely if you try to interact with a missing current version 2018-08-12 19:14:31 -04:00
Matthew Hodgson
72788cf9c1 support DELETE /version with no args 2018-08-12 19:14:31 -04:00
Matthew Hodgson
edc427a351 flake8 2018-08-12 19:14:31 -04:00
Matthew Hodgson
fe87890b18 implement remaining tests and make them work 2018-08-12 19:14:31 -04:00
Matthew Hodgson
f6a3067868 linting 2018-08-12 19:14:31 -04:00
Matthew Hodgson
15d513f16f fix idiocies and so make tests pass 2018-08-12 19:14:31 -04:00
Matthew Hodgson
174be586e5 first cut at a UT 2018-08-12 19:14:31 -04:00
Matthew Hodgson
b5eee511c7 don't needlessly return user_id 2018-08-12 19:14:31 -04:00
Matthew Hodgson
93d174bcc4 improve docstring 2018-08-12 19:14:31 -04:00
Matthew Hodgson
5e42c45c96 switch get_current_version_info back to being get_version_info 2018-08-12 19:14:31 -04:00
Matthew Hodgson
982edca380 fix flakes 2018-08-12 19:14:31 -04:00
Matthew Hodgson
234611f347 fix typos 2018-08-12 19:14:31 -04:00
Matthew Hodgson
14b3da63a3 add a tonne of docstring; make upload_room_keys properly assert version 2018-08-12 19:14:31 -04:00
Matthew Hodgson
9f0791b7bd add a tonne of docstring; make upload_room_keys properly assert version 2018-08-12 19:14:31 -04:00
Matthew Hodgson
9f500cb39e more docstring for the e2e_room_keys rest 2018-08-12 19:14:31 -04:00
Matthew Hodgson
8d14598e90 add storage docstring; remove unused set_e2e_room_keys 2018-08-12 19:14:31 -04:00
Matthew Hodgson
ca0b052307 fix factoring out of _should_replace_room_key 2018-08-12 19:14:31 -04:00
Matthew Hodgson
cac0253799 rename room_key_version table correctly, and fix opt args 2018-08-12 19:14:31 -04:00
Matthew Hodgson
0abb205b47 blindly incorporate PR review - needs testing & fixing 2018-08-12 19:14:31 -04:00
Matthew Hodgson
69e51c7ba4 make /room_keys/version work 2018-08-12 19:14:31 -04:00
Matthew Hodgson
8ae64b270f implement /room_keys/version too (untested) 2018-08-12 19:14:31 -04:00
Matthew Hodgson
cf1e2000f6 document the API 2018-08-12 19:13:09 -04:00
Matthew Hodgson
6b8c07abc2 make it work and fix pep8 2018-08-12 19:13:09 -04:00
Matthew Hodgson
0bc4627a73 interim WIP checkin; doesn't build yet 2018-08-12 18:23:10 -04:00
Matthew Hodgson
53ace904b2 total WIP skeleton for /room_keys API 2018-08-12 18:23:10 -04:00
229 changed files with 9160 additions and 3789 deletions

View File

@@ -1,114 +1,172 @@
version: 2
jobs:
dockerhubuploadrelease:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG} .
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1} .
- run: docker build -f docker/Dockerfile -t matrixdotorg/synapse:${CIRCLE_SHA1}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1} matrixdotorg/synapse:latest
- run: docker tag matrixdotorg/synapse:${CIRCLE_SHA1}-py3 matrixdotorg/synapse:latest-py3
- run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}
- run: docker push matrixdotorg/synapse:${CIRCLE_SHA1}-py3
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3
sytestpy2:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
- run: /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy2postgres:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy2merged:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
- run: /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy2postgresmerged:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy3:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs hawkowl/sytestpy3
- run: /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy3postgres:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy3merged:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs hawkowl/sytestpy3
- run: /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
sytestpy3postgresmerged:
machine: true
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: ~/project/logs
path: /logs
destination: logs
- store_test_results:
path: logs
path: /logs
workflows:
version: 2
build:
jobs:
- sytestpy2
- sytestpy2postgres
- sytestpy2:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2merged:
filters:
branches:
ignore: /develop|master/
ignore: /develop|master|release-.*/
- sytestpy2postgresmerged:
filters:
branches:
ignore: /develop|master/
# Currently broken while the Python 3 port is incomplete
# - sytestpy3
# - sytestpy3postgres
ignore: /develop|master|release-.*/
- sytestpy3merged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy3postgresmerged:
filters:
branches:
ignore: /develop|master|release-.*/
- dockerhubuploadrelease:
filters:
tags:
only: /v[0-9].[0-9]+.[0-9]+.*/
branches:
ignore: /.*/
- dockerhubuploadlatest:
filters:
branches:
only: master

View File

@@ -4,24 +4,31 @@ set -e
# CircleCI doesn't give CIRCLE_PR_NUMBER in the environment for non-forked PRs. Wonderful.
# In this case, we just need to do some ~shell magic~ to strip it out of the PULL_REQUEST URL.
echo 'export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-${CIRCLE_PULL_REQUEST##*/}}"' >> "$BASH_ENV"
echo 'export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-${CIRCLE_PULL_REQUEST##*/}}"' >> $BASH_ENV
source $BASH_ENV
if [[ -z "${CIRCLE_PR_NUMBER}" ]]
then
echo "Can't figure out what the PR number is!"
exit 1
fi
echo "Can't figure out what the PR number is! Assuming merge target is develop."
# Get the reference, using the GitHub API
GITBASE=`curl -q https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
# It probably hasn't had a PR opened yet. Since all PRs land on develop, we
# can probably assume it's based on it and will be merged into it.
GITBASE="develop"
else
# Get the reference, using the GitHub API
GITBASE=`wget -O- https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
fi
# Show what we are before
git show -s
# Set up username so it can do a merge
git config --global user.email bot@matrix.org
git config --global user.name "A robot"
# Fetch and merge. If it doesn't work, it will raise due to set -e.
git fetch -u origin $GITBASE
git merge --no-edit origin/$GITBASE
# Show what we are after.
git show -s

.gitignore (vendored): 2 changes
View File

@@ -1,9 +1,11 @@
*.pyc
.*.swp
*~
*.lock
.DS_Store
_trial_temp/
_trial_temp*/
logs/
dbs/
*.egg

View File

@@ -1,12 +1,27 @@
sudo: false
language: python
# tell travis to cache ~/.cache/pip
cache: pip
cache:
directories:
# we only bother to cache the wheels; parts of the http cache get
# invalidated every build (because they get served with a max-age of 600
# seconds), which means that we end up re-uploading the whole cache for
every build, which is time-consuming. In any case, it's not obvious that
# downloading the cache from S3 would be much faster than downloading the
# originals from pypi.
#
- $HOME/.cache/pip/wheels
before_script:
- git remote set-branches --add origin develop
- git fetch origin develop
# don't clone the whole repo history, one commit will do
git:
depth: 1
# only build branches we care about (PRs are built separately)
branches:
only:
- master
- develop
- /^release-v/
matrix:
fast_finish: true
@@ -14,25 +29,39 @@ matrix:
- python: 2.7
env: TOX_ENV=packaging
- python: 2.7
env: TOX_ENV=pep8
- python: 3.6
env: TOX_ENV="pep8,check_isort"
- python: 2.7
env: TOX_ENV=py27
- python: 2.7
env: TOX_ENV=py27-old
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"
services:
- postgresql
- python: 3.5
env: TOX_ENV=py35
- python: 3.6
env: TOX_ENV=py36
- python: 3.6
env: TOX_ENV=check_isort
env: TOX_ENV=py36-postgres TRIAL_FLAGS="-j 4"
services:
- postgresql
- python: 3.6
- # we only need to check for the newsfragment if it's a PR build
if: type = pull_request
python: 3.6
env: TOX_ENV=check-newsfragment
script:
- git remote set-branches --add origin develop
- git fetch origin develop
- tox -e $TOX_ENV
install:
- pip install tox

View File

@@ -1,3 +1,259 @@
Synapse 0.33.8rc1 (2018-10-29)
==============================
Features
--------
- Servers with auto-join rooms will now automatically create those rooms when the first user registers ([\#3975](https://github.com/matrix-org/synapse/issues/3975))
- Add config option to control alias creation ([\#4051](https://github.com/matrix-org/synapse/issues/4051))
- The register_new_matrix_user script is now ported to Python 3. ([\#4085](https://github.com/matrix-org/synapse/issues/4085))
- Configure Docker image to listen on both ipv4 and ipv6. ([\#4089](https://github.com/matrix-org/synapse/issues/4089))
Bugfixes
--------
- Fix HTTP error response codes for federated group requests. ([\#3969](https://github.com/matrix-org/synapse/issues/3969))
- Fix issue where Python 3 users couldn't paginate /publicRooms ([\#4046](https://github.com/matrix-org/synapse/issues/4046))
- Fix URL previewing to work in Python 3.7 ([\#4050](https://github.com/matrix-org/synapse/issues/4050))
- synctl will use the right python executable to run worker processes ([\#4057](https://github.com/matrix-org/synapse/issues/4057))
- Manhole now works again on Python 3, instead of failing with a "couldn't match all kex parts" error when connecting. ([\#4060](https://github.com/matrix-org/synapse/issues/4060), [\#4067](https://github.com/matrix-org/synapse/issues/4067))
- Fix some metrics being racy and causing exceptions when polled by Prometheus. ([\#4061](https://github.com/matrix-org/synapse/issues/4061))
- Fix bug which prevented email notifications from being sent unless an absolute path was given for `email_templates`. ([\#4068](https://github.com/matrix-org/synapse/issues/4068))
- Correctly account for cpu usage by background threads ([\#4074](https://github.com/matrix-org/synapse/issues/4074))
- Fix race condition where config defined reserved users were not being added to
the monthly active user list prior to the homeserver reactor firing up ([\#4081](https://github.com/matrix-org/synapse/issues/4081))
- Fix bug which prevented backslashes being used in event field filters ([\#4083](https://github.com/matrix-org/synapse/issues/4083))
Internal Changes
----------------
- Add information about the [matrix-docker-ansible-deploy](https://github.com/spantaleev/matrix-docker-ansible-deploy) playbook ([\#3698](https://github.com/matrix-org/synapse/issues/3698))
- Add initial implementation of new state resolution algorithm ([\#3786](https://github.com/matrix-org/synapse/issues/3786))
- Reduce database load when fetching state groups ([\#4011](https://github.com/matrix-org/synapse/issues/4011))
- Various cleanups in the federation client code ([\#4031](https://github.com/matrix-org/synapse/issues/4031))
- Run the CircleCI builds in docker containers ([\#4041](https://github.com/matrix-org/synapse/issues/4041))
- Only colourise synctl output when attached to tty ([\#4049](https://github.com/matrix-org/synapse/issues/4049))
- Refactor room alias creation code ([\#4063](https://github.com/matrix-org/synapse/issues/4063))
- Make the Python scripts in the top-level scripts folders meet pep8 and pass flake8. ([\#4068](https://github.com/matrix-org/synapse/issues/4068))
- The README now contains example for the Caddy web server. Contributed by steamp0rt. ([\#4072](https://github.com/matrix-org/synapse/issues/4072))
- Add psutil as an explicit dependency ([\#4073](https://github.com/matrix-org/synapse/issues/4073))
- Clean up threading and logcontexts in pushers ([\#4075](https://github.com/matrix-org/synapse/issues/4075))
- Correctly manage logcontexts during startup to fix some "Unexpected logging context" warnings ([\#4076](https://github.com/matrix-org/synapse/issues/4076))
- Give some more things logcontexts ([\#4077](https://github.com/matrix-org/synapse/issues/4077))
- Clean up some bits of code which were flagged by the linter ([\#4082](https://github.com/matrix-org/synapse/issues/4082))
Synapse 0.33.7 (2018-10-18)
===========================
**Warning**: This release removes the example email notification templates from
`res/templates` (they are now internal to the python package). This should only
affect you if you (a) deploy your Synapse instance from a git checkout or a
github snapshot URL, and (b) have email notifications enabled.
If you have email notifications enabled, you should ensure that
`email.template_dir` is either configured to point at a directory where you
have installed customised templates, or leave it unset to use the default
templates.
Synapse 0.33.7rc2 (2018-10-17)
==============================
Features
--------
- Ship the example email templates as part of the package ([\#4052](https://github.com/matrix-org/synapse/issues/4052))
Bugfixes
--------
- Fix bug which made get_missing_events return too few events ([\#4045](https://github.com/matrix-org/synapse/issues/4045))
Synapse 0.33.7rc1 (2018-10-15)
==============================
Features
--------
- Add support for end-to-end key backup (MSC1687) ([\#4019](https://github.com/matrix-org/synapse/issues/4019))
Bugfixes
--------
- Fix bug in event persistence logic which caused 'NoneType is not iterable' ([\#3995](https://github.com/matrix-org/synapse/issues/3995))
- Fix exception in background metrics collection ([\#3996](https://github.com/matrix-org/synapse/issues/3996))
- Fix exception handling in fetching remote profiles ([\#3997](https://github.com/matrix-org/synapse/issues/3997))
- Fix handling of rejected threepid invites ([\#3999](https://github.com/matrix-org/synapse/issues/3999))
- Workers now start on Python 3. ([\#4027](https://github.com/matrix-org/synapse/issues/4027))
- Synapse now starts on Python 3.7. ([\#4033](https://github.com/matrix-org/synapse/issues/4033))
Internal Changes
----------------
- Log exceptions in looping calls ([\#4008](https://github.com/matrix-org/synapse/issues/4008))
- Optimisation for serving federation requests ([\#4017](https://github.com/matrix-org/synapse/issues/4017))
- Add metric to count number of non-empty sync responses ([\#4022](https://github.com/matrix-org/synapse/issues/4022))
Synapse 0.33.6 (2018-10-04)
===========================
Internal Changes
----------------
- Pin to prometheus_client<0.4 to avoid renaming all of our metrics ([\#4002](https://github.com/matrix-org/synapse/issues/4002))
Synapse 0.33.6rc1 (2018-10-03)
==============================
Features
--------
- Add the ability to change MAX_UPLOAD_SIZE for the Docker container via environment variables. ([\#3883](https://github.com/matrix-org/synapse/issues/3883))
- Report "python_version" in the phone home stats ([\#3894](https://github.com/matrix-org/synapse/issues/3894))
- Always LL ourselves if we're in a room ([\#3916](https://github.com/matrix-org/synapse/issues/3916))
- Include eventid in log lines when processing incoming federation transactions ([\#3959](https://github.com/matrix-org/synapse/issues/3959))
- Remove spurious check which made 'localhost' servers not work ([\#3964](https://github.com/matrix-org/synapse/issues/3964))
Bugfixes
--------
- Fix problem when playing media from Chrome using direct URL (thanks @remjey!) ([\#3578](https://github.com/matrix-org/synapse/issues/3578))
- support registering regular users non-interactively with register_new_matrix_user script ([\#3836](https://github.com/matrix-org/synapse/issues/3836))
- Fix broken invite email links for self hosted riots ([\#3868](https://github.com/matrix-org/synapse/issues/3868))
- Don't ratelimit autojoins ([\#3879](https://github.com/matrix-org/synapse/issues/3879))
- Fix 500 error when deleting unknown room alias ([\#3889](https://github.com/matrix-org/synapse/issues/3889))
- Fix some b'abcd' noise in logs and metrics ([\#3892](https://github.com/matrix-org/synapse/issues/3892), [\#3895](https://github.com/matrix-org/synapse/issues/3895))
- When we join a room, always try the server we used for the alias lookup first, to avoid unresponsive and out-of-date servers. ([\#3899](https://github.com/matrix-org/synapse/issues/3899))
- Fix incorrect server-name indication for outgoing federation requests ([\#3907](https://github.com/matrix-org/synapse/issues/3907))
- Fix adding client IPs to the database failing on Python 3. ([\#3908](https://github.com/matrix-org/synapse/issues/3908))
- Fix bug where things occasionally were not being timed out correctly. ([\#3910](https://github.com/matrix-org/synapse/issues/3910))
- Fix bug where outbound federation would stop talking to some servers when using workers ([\#3914](https://github.com/matrix-org/synapse/issues/3914))
- Fix some instances of ExpiringCache not expiring cache items ([\#3932](https://github.com/matrix-org/synapse/issues/3932), [\#3980](https://github.com/matrix-org/synapse/issues/3980))
- Fix out-of-bounds error when LLing yourself ([\#3936](https://github.com/matrix-org/synapse/issues/3936))
- Sending server notices regarding user consent now works on Python 3. ([\#3938](https://github.com/matrix-org/synapse/issues/3938))
- Fix exceptions from metrics handler ([\#3956](https://github.com/matrix-org/synapse/issues/3956))
- Fix error message for events with m.room.create missing from auth_events ([\#3960](https://github.com/matrix-org/synapse/issues/3960))
- Fix errors due to concurrent monthly_active_user upserts ([\#3961](https://github.com/matrix-org/synapse/issues/3961))
- Fix exceptions when processing incoming events over federation ([\#3968](https://github.com/matrix-org/synapse/issues/3968))
- Replaced all occurrences of e.message with str(e). Contributed by Schnuffle ([\#3970](https://github.com/matrix-org/synapse/issues/3970))
- Fix lazy loaded sync in the presence of rejected state events ([\#3986](https://github.com/matrix-org/synapse/issues/3986))
- Fix error when logging incomplete HTTP requests ([\#3990](https://github.com/matrix-org/synapse/issues/3990))
Internal Changes
----------------
- Unit tests can now be run under PostgreSQL in Docker using ``test_postgresql.sh``. ([\#3699](https://github.com/matrix-org/synapse/issues/3699))
- Speed up calculation of typing updates for replication ([\#3794](https://github.com/matrix-org/synapse/issues/3794))
- Remove documentation regarding installation on Cygwin, the use of WSL is recommended instead. ([\#3873](https://github.com/matrix-org/synapse/issues/3873))
- Fix typo in README, synaspse -> synapse ([\#3897](https://github.com/matrix-org/synapse/issues/3897))
- Increase the timeout when filling missing events in federation requests ([\#3903](https://github.com/matrix-org/synapse/issues/3903))
- Improve the logging when handling a federation transaction ([\#3904](https://github.com/matrix-org/synapse/issues/3904), [\#3966](https://github.com/matrix-org/synapse/issues/3966))
- Improve logging of outbound federation requests ([\#3906](https://github.com/matrix-org/synapse/issues/3906), [\#3909](https://github.com/matrix-org/synapse/issues/3909))
- Fix the docker image building on python 3 ([\#3911](https://github.com/matrix-org/synapse/issues/3911))
- Add a regression test for logging failed HTTP requests on Python 3. ([\#3912](https://github.com/matrix-org/synapse/issues/3912))
- Comments and interface cleanup for on_receive_pdu ([\#3924](https://github.com/matrix-org/synapse/issues/3924))
- Fix spurious exceptions when remote http client closes connection ([\#3925](https://github.com/matrix-org/synapse/issues/3925))
- Log exceptions thrown by background tasks ([\#3927](https://github.com/matrix-org/synapse/issues/3927))
- Add a cache to get_destination_retry_timings ([\#3933](https://github.com/matrix-org/synapse/issues/3933), [\#3991](https://github.com/matrix-org/synapse/issues/3991))
- Automate pushes to docker hub ([\#3946](https://github.com/matrix-org/synapse/issues/3946))
- Require attrs 16.0.0 or later ([\#3947](https://github.com/matrix-org/synapse/issues/3947))
- Fix incompatibility with python3 on alpine ([\#3948](https://github.com/matrix-org/synapse/issues/3948))
- Run the test suite on the oldest supported versions of our dependencies in CI. ([\#3952](https://github.com/matrix-org/synapse/issues/3952))
- CircleCI now only runs merged jobs on PRs, and commit jobs on develop, master, and release branches. ([\#3957](https://github.com/matrix-org/synapse/issues/3957))
- Fix docstrings and add tests for state store methods ([\#3958](https://github.com/matrix-org/synapse/issues/3958))
- fix docstring for FederationClient.get_state_for_room ([\#3963](https://github.com/matrix-org/synapse/issues/3963))
- Run notify_app_services as a bg process ([\#3965](https://github.com/matrix-org/synapse/issues/3965))
- Clarifications in FederationHandler ([\#3967](https://github.com/matrix-org/synapse/issues/3967))
- Further reduce the docker image size ([\#3972](https://github.com/matrix-org/synapse/issues/3972))
- Build py3 docker images for docker hub too ([\#3976](https://github.com/matrix-org/synapse/issues/3976))
- Updated the installation instructions to point to the matrix-synapse package on PyPI. ([\#3985](https://github.com/matrix-org/synapse/issues/3985))
- Disable USE_FROZEN_DICTS for unittests by default. ([\#3987](https://github.com/matrix-org/synapse/issues/3987))
- Remove unused Jenkins and development related files from the repo. ([\#3988](https://github.com/matrix-org/synapse/issues/3988))
- Improve stacktraces in certain exceptions in the logs ([\#3989](https://github.com/matrix-org/synapse/issues/3989))
Synapse 0.33.5.1 (2018-09-25)
=============================
Internal Changes
----------------
- Fix incompatibility with older Twisted version in tests. Thanks @OlegGirko! ([\#3940](https://github.com/matrix-org/synapse/issues/3940))
Synapse 0.33.5 (2018-09-24)
===========================
No significant changes.
Synapse 0.33.5rc1 (2018-09-17)
==============================
Features
--------
- Python 3.5 and 3.6 support is now in beta. ([\#3576](https://github.com/matrix-org/synapse/issues/3576))
- Implement `event_format` filter param in `/sync` ([\#3790](https://github.com/matrix-org/synapse/issues/3790))
- Add synapse_admin_mau:registered_reserved_users metric to expose number of real reserved users ([\#3846](https://github.com/matrix-org/synapse/issues/3846))
Bugfixes
--------
- Remove connection ID for replication prometheus metrics, as it creates a large number of new series. ([\#3788](https://github.com/matrix-org/synapse/issues/3788))
- guest users should not be part of mau total ([\#3800](https://github.com/matrix-org/synapse/issues/3800))
- Bump dependency on pyopenssl 16.x, to avoid incompatibility with recent Twisted. ([\#3804](https://github.com/matrix-org/synapse/issues/3804))
- Fix existing room tags not coming down sync when joining a room ([\#3810](https://github.com/matrix-org/synapse/issues/3810))
- Fix jwt import check ([\#3824](https://github.com/matrix-org/synapse/issues/3824))
- fix VOIP crashes under Python 3 (#3821) ([\#3835](https://github.com/matrix-org/synapse/issues/3835))
- Fix manhole so that it works with latest openssh clients ([\#3841](https://github.com/matrix-org/synapse/issues/3841))
- Fix outbound requests occasionally wedging, which can result in federation breaking between servers. ([\#3845](https://github.com/matrix-org/synapse/issues/3845))
- Show heroes if room name/canonical alias has been deleted ([\#3851](https://github.com/matrix-org/synapse/issues/3851))
- Fix handling of redacted events from federation ([\#3859](https://github.com/matrix-org/synapse/issues/3859))
- Don't filter membership events based on history visibility ([\#3874](https://github.com/matrix-org/synapse/issues/3874))
- Mitigate outbound federation randomly becoming wedged ([\#3875](https://github.com/matrix-org/synapse/issues/3875))
Internal Changes
----------------
- CircleCI tests now run on the potential merge of a PR. ([\#3704](https://github.com/matrix-org/synapse/issues/3704))
- http/ is now ported to Python 3. ([\#3771](https://github.com/matrix-org/synapse/issues/3771))
- Improve human readable error messages for threepid registration/account update ([\#3789](https://github.com/matrix-org/synapse/issues/3789))
- Make /sync slightly faster by avoiding needless copies ([\#3795](https://github.com/matrix-org/synapse/issues/3795))
- handlers/ is now ported to Python 3. ([\#3803](https://github.com/matrix-org/synapse/issues/3803))
- Limit the number of PDUs/EDUs per federation transaction ([\#3805](https://github.com/matrix-org/synapse/issues/3805))
- Only start postgres instance for postgres tests on Travis CI ([\#3806](https://github.com/matrix-org/synapse/issues/3806))
- tests/ is now ported to Python 3. ([\#3808](https://github.com/matrix-org/synapse/issues/3808))
- crypto/ is now ported to Python 3. ([\#3822](https://github.com/matrix-org/synapse/issues/3822))
- rest/ is now ported to Python 3. ([\#3823](https://github.com/matrix-org/synapse/issues/3823))
- add some logging for the keyring queue ([\#3826](https://github.com/matrix-org/synapse/issues/3826))
- speed up lazy loading by 2-3x ([\#3827](https://github.com/matrix-org/synapse/issues/3827))
- Improved Dockerfile to remove build requirements after building reducing the image size. ([\#3834](https://github.com/matrix-org/synapse/issues/3834))
- Disable lazy loading for incremental syncs for now ([\#3840](https://github.com/matrix-org/synapse/issues/3840))
- federation/ is now ported to Python 3. ([\#3847](https://github.com/matrix-org/synapse/issues/3847))
- Log when we retry outbound requests ([\#3853](https://github.com/matrix-org/synapse/issues/3853))
- Removed some excess logging messages. ([\#3855](https://github.com/matrix-org/synapse/issues/3855))
- Speed up purge history for rooms that have been previously purged ([\#3856](https://github.com/matrix-org/synapse/issues/3856))
- Refactor some HTTP timeout code. ([\#3857](https://github.com/matrix-org/synapse/issues/3857))
- Fix running merged builds on CircleCI ([\#3858](https://github.com/matrix-org/synapse/issues/3858))
- Fix typo in replication stream exception. ([\#3860](https://github.com/matrix-org/synapse/issues/3860))
- Add in flight real time metrics for Measure blocks ([\#3871](https://github.com/matrix-org/synapse/issues/3871))
- Disable buffering and automatic retrying in treq requests to prevent timeouts. ([\#3872](https://github.com/matrix-org/synapse/issues/3872))
- mention jemalloc in the README ([\#3877](https://github.com/matrix-org/synapse/issues/3877))
- Remove unmaintained "nuke-room-from-db.sh" script ([\#3888](https://github.com/matrix-org/synapse/issues/3888))
Synapse 0.33.4 (2018-09-07)
===========================

View File

@@ -30,12 +30,28 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Jenkins <http://matrix.org/jenkins>`_ and
`Travis <https://travis-ci.org/matrix-org/synapse>`_ for continuous
integration. All pull requests to synapse get automatically tested by Travis;
the Jenkins builds require an administrator to start them. If your change
breaks the build, this will be shown in github, so please keep an eye on the
pull request for feedback.
We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Travis CI
<https://travis-ci.org/matrix-org/synapse>`_ for continuous integration. All
pull requests to synapse get automatically tested by Travis and CircleCI.
If your change breaks the build, this will be shown in GitHub, so please
keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
- ``tox -e py27`` (requires tox to be installed by ``pip install tox``) for
SQLite-backed Synapse on Python 2.7.
- ``tox -e py35`` for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py27-postgres`` for PostgreSQL-backed Synapse on Python 2.7
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 2.7
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the `documentation in the SyTest repo
<https://github.com/matrix-org/sytest/blob/develop/docker/README.md>`_ for more
information.
Code style
~~~~~~~~~~
@@ -77,7 +93,8 @@ AUTHORS.rst file for the project in question. Please feel free to include a
change to AUTHORS.rst in your pull request to list yourself and a short
description of the area(s) you've worked on. Also, we sometimes have swag to
give away to contributors - if you feel that Matrix-branded apparel is missing
from your life, please mail us your shipping address to matrix at matrix.org and we'll try to fix it :)
from your life, please mail us your shipping address to matrix at matrix.org and
we'll try to fix it :)
Sign off
~~~~~~~~
@@ -144,4 +161,9 @@ flag to ``git commit``, which uses the name and email set in your
Conclusion
~~~~~~~~~~
That's it! Matrix is a very open and collaborative project as you might expect given our obsession with open communication. If we're going to successfully matrix together all the fragmented communication technologies out there we are reliant on contributions and collaboration from the community to do so. So please get involved - and we hope you have as much fun hacking on Matrix as we do!
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!

View File

@@ -12,23 +12,20 @@ recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.py
recursive-include docs *
recursive-include res *
recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
recursive-include synapse/res *
recursive-include synapse/static *.css
recursive-include synapse/static *.gif
recursive-include synapse/static *.html
recursive-include synapse/static *.js
exclude jenkins.sh
exclude jenkins*.sh
exclude jenkins*
exclude Dockerfile
exclude .dockerignore
recursive-exclude jenkins *.sh
exclude test_postgresql.sh
include pyproject.toml
recursive-include changelog.d *
@@ -37,3 +34,6 @@ prune .github
prune demo/etc
prune docker
prune .circleci
exclude jenkins*
recursive-exclude jenkins *.sh

MAP.rst (deleted): 35 changes
View File

@@ -1,35 +0,0 @@
Directory Structure
===================
Warning: this may be a bit stale...
::
.
├── cmdclient Basic CLI python Matrix client
├── demo Scripts for running standalone Matrix demos
├── docs All doc, including the draft Matrix API spec
│   ├── client-server The client-server Matrix API spec
│   ├── model Domain-specific elements of the Matrix API spec
│   ├── server-server The server-server model of the Matrix API spec
│   └── sphinx The internal API doc of the Synapse homeserver
├── experiments Early experiments of using Synapse's internal APIs
├── graph Visualisation of Matrix's distributed message store
├── synapse The reference Matrix homeserver implementation
│   ├── api Common building blocks for the APIs
│   │   ├── events Definition of state representation Events
│   │   └── streams Definition of streamable Event objects
│   ├── app The __main__ entry point for the homeserver
│   ├── crypto The PKI client/server used for secure federation
│   │   └── resource PKI helper objects (e.g. keys)
│   ├── federation Server-server state replication logic
│   ├── handlers The main business logic of the homeserver
│   ├── http Wrappers around Twisted's HTTP server & client
│   ├── rest Servlet-style RESTful API
│   ├── storage Persistence subsystem (currently only sqlite3)
│   │   └── schema sqlite persistence schema
│   └── util Synapse-specific utilities
├── tests Unit tests for the Synapse homeserver
└── webclient Basic AngularJS Matrix web client

View File

@@ -81,7 +81,7 @@ Thanks for using Matrix!
Synapse Installation
====================
Synapse is the reference python/twisted Matrix homeserver implementation.
Synapse is the reference Python/Twisted Matrix homeserver implementation.
System requirements:
@@ -91,12 +91,13 @@ System requirements:
Installing from source
----------------------
(Prebuilt packages are available for some platforms - see `Platform-Specific
Instructions`_.)
Synapse is written in python but some of the libraries it uses are written in
C. So before we can install synapse itself we need a working C compiler and the
header files for python C extensions.
Synapse is written in Python but some of the libraries it uses are written in
C. So before we can install Synapse itself we need a working C compiler and the
header files for Python C extensions.
Installing prerequisites on Ubuntu or Debian::
@@ -143,21 +144,27 @@ Installing prerequisites on OpenBSD::
doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
libxslt
To install the synapse homeserver run::
To install the Synapse homeserver run::
virtualenv -p python2.7 ~/.synapse
source ~/.synapse/bin/activate
pip install --upgrade pip
pip install --upgrade setuptools
pip install https://github.com/matrix-org/synapse/tarball/master
pip install matrix-synapse
This installs synapse, along with the libraries it uses, into a virtual
This installs Synapse, along with the libraries it uses, into a virtual
environment under ``~/.synapse``. Feel free to pick a different directory
if you prefer.
This Synapse installation can then be upgraded later by using pip again with the
update flag::
source ~/.synapse/bin/activate
pip install -U matrix-synapse
In case of problems, please see the _`Troubleshooting` section below.
There is an official synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with
the docker-compose file available at `contrib/docker <contrib/docker>`_. Further information on
this including configuration options is available in the README on
@@ -167,7 +174,13 @@ Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/
Configuring synapse
Slavi Pantaleev has created an Ansible playbook,
which installs the official Docker image of Matrix Synapse
along with many other Matrix-related services (Postgres database, riot-web, coturn, mxisd, SSL support, etc.).
For more details, see
https://github.com/spantaleev/matrix-docker-ansible-deploy
Configuring Synapse
-------------------
Before you can start Synapse, you will need to generate a configuration
@@ -249,26 +262,6 @@ Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See `<docs/turn-howto.rst>`_ for details.
IPv6
----
As of Synapse 0.19 we finally support IPv6, many thanks to @kyrias and @glyph
for providing PR #1696.
However, for federation to work on hosts with IPv6 DNS servers you **must**
be running Twisted 17.1.0 or later - see https://github.com/matrix-org/synapse/issues/1002
for details. We can't make Synapse depend on Twisted 17.1 by default
yet as it will break most older distributions (see https://github.com/matrix-org/synapse/pull/1909)
so if you are using operating system dependencies you'll have to install your
own Twisted 17.1 package via pip or backports etc.
If you're running in a virtualenv then pip should have installed the newest
Twisted automatically, but if your virtualenv is old you will need to manually
upgrade to a newer Twisted dependency via:
pip install Twisted>=17.1.0
Running Synapse
===============
@@ -444,8 +437,7 @@ settings require a slightly more difficult installation process.
using the ``.`` command, rather than ``bash``'s ``source``.
5) Optionally, use ``pip`` to install ``lxml``, which Synapse needs to parse
webpages for their titles.
6) Use ``pip`` to install this repository: ``pip install
https://github.com/matrix-org/synapse/tarball/master``
6) Use ``pip`` to install this repository: ``pip install matrix-synapse``
7) Optionally, change ``_synapse``'s shell to ``/bin/false`` to reduce the
chance of a compromised Synapse server being used to take over your box.
@@ -459,37 +451,13 @@ https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-
Windows Install
---------------
Synapse can be installed on Cygwin. It requires the following Cygwin packages:
- gcc
- git
- libffi-devel
- openssl (and openssl-devel, python-openssl)
- python
- python-setuptools
The content repository requires additional packages and will be unable to process
uploads without them:
- libjpeg8
- libjpeg8-devel
- zlib
If you choose to install Synapse without these packages, you will need to reinstall
``pillow`` for changes to be applied, e.g. ``pip uninstall pillow`` ``pip install
pillow --user``
Troubleshooting:
- You may need to upgrade ``setuptools`` to get this to work correctly:
``pip install setuptools --upgrade``.
- You may encounter errors indicating that ``ffi.h`` is missing, even with
``libffi-devel`` installed. If you do, copy the ``.h`` files:
``cp /usr/lib/libffi-3.0.13/include/*.h /usr/include``
- You may need to install libsodium from source in order to install PyNacl. If
you do, you may need to create a symlink to ``libsodium.a`` so ``ld`` can find
it: ``ln -s /usr/local/lib/libsodium.a /usr/lib/libsodium.a``
If you wish to run or develop Synapse on Windows, the Windows Subsystem For
Linux provides a Linux environment on Windows 10 which is capable of using the
Debian, Fedora, or source installation methods. More information about WSL can
be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
for Windows Server.
Troubleshooting
===============
@@ -497,7 +465,7 @@ Troubleshooting
Troubleshooting Installation
----------------------------
Synapse requires pip 1.7 or later, so if your OS provides too old a version you
Synapse requires pip 8 or later, so if your OS provides too old a version you
may need to manually upgrade it::
sudo pip install --upgrade pip
@@ -532,28 +500,6 @@ failing, e.g.::
pip install twisted
On OS X, if you encounter clang: error: unknown argument: '-mno-fused-madd' you
will need to export CFLAGS=-Qunused-arguments.
Troubleshooting Running
-----------------------
If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
to manually upgrade PyNaCL, as synapse uses NaCl (https://nacl.cr.yp.to/) for
encryption and digital signatures.
Unfortunately PyNACL currently has a few issues
(https://github.com/pyca/pynacl/issues/53) and
(https://github.com/pyca/pynacl/issues/79) that mean it may not install
correctly, causing all tests to fail with errors about missing "sodium.h". To
fix try re-installing from PyPI or directly from
(https://github.com/pyca/pynacl)::
# Install from PyPI
pip install --user --upgrade --force pynacl
# Install from github
pip install --user https://github.com/pyca/pynacl/tarball/master
Running out of File Handles
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -711,7 +657,8 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_ or
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
@@ -742,7 +689,15 @@ so an example nginx configuration might look like::
}
}
and an example apache configuration may look like::
an example Caddy configuration might look like::
matrix.example.com {
proxy /_matrix http://localhost:8008 {
transparent
}
}
and an example Apache configuration might look like::
<VirtualHost *:443>
SSLEngine on
@@ -908,7 +863,7 @@ to install using pip and a virtualenv::
virtualenv -p python2.7 env
source env/bin/activate
python synapse/python_dependencies.py | xargs pip install
python -m synapse.python_dependencies | xargs pip install
pip install lxml mock
This will run a process of downloading and installing all the needed
@@ -963,5 +918,13 @@ variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall amount, and especially in terms of giving back RAM
to the OS. To use it, the library must simply be put in the LD_PRELOAD
environment variable when launching Synapse. On Debian, this can be done
by installing the ``libjemalloc1`` package and adding this line to
``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys

View File

@@ -18,7 +18,7 @@ instructions that may be required are listed later in this document.
.. code:: bash
pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
pip install --upgrade --process-dependency-links matrix-synapse
# restart synapse
synctl restart
@@ -48,11 +48,24 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to $NEXT_VERSION
Upgrading to v0.33.7
====================
This release removes the example email notification templates from
``res/templates`` (they are now internal to the python package). This should
only affect you if you (a) deploy your Synapse instance from a git checkout or
a github snapshot URL, and (b) have email notifications enabled.
If you have email notifications enabled, you should ensure that
``email.template_dir`` is either configured to point at a directory where you
have installed customised templates, or leave it unset to use the default
templates.
Upgrading to v0.27.3
====================
This release expands the anonymous usage stats sent if the opt-in
``report_stats`` configuration is set to ``true``. We now capture RSS memory
and cpu use at a very coarse level. This requires administrators to install
the optional ``psutil`` python module.

View File

@@ -1 +0,0 @@
CircleCI tests now run on the potential merge of a PR.

View File

@@ -1 +0,0 @@
http/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
Remove connection ID for replication prometheus metrics, as it creates a large number of new series.

View File

@@ -1 +0,0 @@
Improve human readable error messages for threepid registration/account update

View File

@@ -1 +0,0 @@
Implement `event_format` filter param in `/sync`

View File

@@ -1 +0,0 @@
Make /sync slightly faster by avoiding needless copies

View File

@@ -1 +0,0 @@
guest users should not be part of mau total

View File

@@ -1 +0,0 @@
handlers/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
Bump dependency on pyopenssl 16.x, to avoid incompatibility with recent Twisted.

View File

@@ -1 +0,0 @@
Limit the number of PDUs/EDUs per federation transaction

View File

@@ -1 +0,0 @@
Only start postgres instance for postgres tests on Travis CI

View File

@@ -1 +0,0 @@
tests/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
Fix existing room tags not coming down sync when joining a room

View File

@@ -1 +0,0 @@
crypto/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
rest/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
Fix jwt import check

View File

@@ -1 +0,0 @@
add some logging for the keyring queue

View File

@@ -1 +0,0 @@
speed up lazy loading by 2-3x

View File

@@ -1 +0,0 @@
Improved Dockerfile to remove build requirements after building reducing the image size.

View File

@@ -1 +0,0 @@
fix VOIP crashes under Python 3 (#3821)

View File

@@ -1 +0,0 @@
Disable lazy loading for incremental syncs for now

View File

@@ -1 +0,0 @@
Fix manhole so that it works with latest openssh clients

View File

@@ -1 +0,0 @@
Fix outbound requests occasionally wedging, which can result in federation breaking between servers.

View File

@@ -1 +0,0 @@
Add synapse_admin_mau:registered_reserved_users metric to expose number of real reaserved users

View File

@@ -1 +0,0 @@
federation/ is now ported to Python 3.

View File

@@ -1 +0,0 @@
Show heroes if room name/canonical alias has been deleted

View File

@@ -1 +0,0 @@
Log when we retry outbound requests

File diff suppressed because it is too large

View File

@@ -1,8 +1,13 @@
FROM docker.io/python:2-alpine3.8
ARG PYTHON_VERSION=2
COPY . /synapse
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
RUN apk add --no-cache --virtual .build_deps \
# install the OS build deps
RUN apk add \
build-base \
libffi-dev \
libjpeg-turbo-dev \
@@ -10,30 +15,47 @@ RUN apk add --no-cache --virtual .build_deps \
libxslt-dev \
linux-headers \
postgresql-dev \
zlib-dev \
&& cd /synapse \
&& apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libressl \
libxslt \
libpq \
zlib \
su-exec \
&& pip install --upgrade \
zlib-dev
# build things which have slow build steps, before we copy synapse, so that
# the layer can be cached.
#
# (we really just care about caching a wheel here, as the "pip install" below
# will install them again.)
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography \
msgpack-python \
pillow \
pynacl
# now install synapse and all of the python deps to /install.
COPY . /synapse
RUN pip install --prefix="/install" --no-warn-script-location \
lxml \
pip \
psycopg2 \
setuptools \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/docker/start.py /synapse/docker/conf / \
&& rm -rf \
setup.cfg \
setup.py \
synapse \
&& apk del .build_deps
/synapse
###
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libressl \
libxslt \
libpq \
zlib \
su-exec
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf
VOLUME ["/data"]
EXPOSE 8008/tcp 8448/tcp
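As a usage sketch, the ``PYTHON_VERSION`` build argument above selects the base
image for both stages; the file path, tag, and build-arg value below are
assumptions (the Dockerfile is taken to live at ``docker/Dockerfile``, as the
neighbouring files suggest):

.. code:: bash

# build a Python 3 variant of the image from the repository root
docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 -t synapse:py3 .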

docker/Dockerfile-pgtests Normal file
View File

@@ -0,0 +1,12 @@
# Use the Sytest image that comes with a lot of the build dependencies
# pre-installed
FROM matrixdotorg/sytest:latest
# The Sytest image doesn't come with python, so install that
RUN apt-get -qq install -y python python-dev python-pip
# We need tox to run the tests in run_pg_tests.sh
RUN pip install tox
ADD run_pg_tests.sh /pg_tests.sh
ENTRYPOINT /pg_tests.sh
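A sketch of building the test image, assuming it is run from the repository
root (the image tag is illustrative):

.. code:: bash

docker build -f docker/Dockerfile-pgtests -t synapse-pg-tests docker/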

View File

@@ -88,6 +88,7 @@ variables are available for configuration:
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
URIs to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size [default `10M`].
Shared secrets, which will be initialized to random values if not set:
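For instance, building on ``SYNAPSE_MAX_UPLOAD_SIZE`` above, a hypothetical
invocation (the image tag, server name, and host paths are all illustrative):

.. code:: bash

docker run -d \
    -e SYNAPSE_SERVER_NAME=matrix.example.com \
    -e SYNAPSE_MAX_UPLOAD_SIZE=50M \
    -v /srv/synapse-data:/data \
    -p 8008:8008 \
    docker.io/matrixdotorg/synapse:latest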

View File

@@ -21,7 +21,7 @@ listeners:
{% if not SYNAPSE_NO_TLS %}
-
port: 8448
bind_addresses: ['0.0.0.0']
bind_addresses: ['::']
type: http
tls: true
x_forwarded: false
@@ -34,7 +34,7 @@ listeners:
- port: 8008
tls: false
bind_addresses: ['0.0.0.0']
bind_addresses: ['::']
type: http
x_forwarded: false
@@ -85,7 +85,7 @@ federation_rc_concurrent: 3
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "10M"
max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "10M" }}"
max_image_pixels: "32M"
dynamic_thumbnails: false
@@ -211,7 +211,9 @@ email:
require_transport_security: False
notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
app_name: Matrix
template_dir: res/templates
# if template_dir is unset, uses the example templates that are part of
# the Synapse distribution.
#template_dir: res/templates
notif_template_html: notif_mail.html
notif_template_text: notif_mail.txt
notif_for_new_users: True

docker/run_pg_tests.sh Executable file
View File

@@ -0,0 +1,20 @@
#!/bin/bash
# This script runs the PostgreSQL tests inside a Docker container. It expects
# the relevant source files to be mounted into /src (done automatically by the
# caller script). It will set up the database, run it, and then use the tox
# configuration to run the tests.
set -e
# Set PGUSER so Synapse's tests know what user to connect to the database with
export PGUSER=postgres
# Initialise & start the database
su -c '/usr/lib/postgresql/9.6/bin/initdb -D /var/lib/postgresql/data -E "UTF-8" --lc-collate="en_US.UTF-8" --lc-ctype="en_US.UTF-8" --username=postgres' postgres
su -c '/usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start' postgres
# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=/tmp -e py27-postgres
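A sketch of invoking this via the image built from ``docker/Dockerfile-pgtests``
above, with a synapse checkout mounted at ``/src`` (the tag matches the
illustrative one used earlier):

.. code:: bash

docker run --rm -v $(pwd):/src synapse-pg-tests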

View File

@@ -5,6 +5,7 @@ import os
import sys
import subprocess
import glob
import codecs
# Utility functions
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
@@ -23,7 +24,7 @@ def generate_secrets(environ, secrets):
with open(filename) as handle: value = handle.read()
else:
print("Generating a random secret for {}".format(name))
value = os.urandom(32).encode("hex")
value = codecs.encode(os.urandom(32), "hex").decode()
with open(filename, "w") as handle: handle.write(value)
environ[secret] = value
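The ``codecs`` change is needed because ``bytes.encode("hex")`` exists only on
Python 2; a minimal sketch of the portable form used above:

.. code:: python

import codecs
import os

# hex-encode 32 random bytes; works the same on Python 2 and 3
value = codecs.encode(os.urandom(32), "hex").decode()
print(value)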

View File

@@ -1,23 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
export HAPROXY_BIN=/home/haproxy/haproxy-1.6.11/haproxy
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \
--haproxy \

View File

@@ -1,20 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./jenkins/clone.sh dendron https://github.com/matrix-org/dendron.git
./dendron/jenkins/build_dendron.sh
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \
--dendron $WORKSPACE/dendron/bin/dendron \

View File

@@ -1,22 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
export PEP8SUFFIX="--output-file=violations.flake8.log"
rm .coverage* || echo "No coverage files to remove"
tox -e packaging -e pep8

View File

@@ -1,18 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/prep_sytest_for_postgres.sh
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export WORKSPACE
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
./jenkins/prepare_synapse.sh
./jenkins/clone.sh sytest https://github.com/matrix-org/sytest.git
./sytest/jenkins/install_and_run.sh \
--python $WORKSPACE/.tox/py27/bin/python \
--synapse-directory $WORKSPACE \

View File

@@ -1,30 +0,0 @@
#!/bin/bash
set -eux
: ${WORKSPACE:="$(pwd)"}
export PYTHONDONTWRITEBYTECODE=yep
export SYNAPSE_CACHE_FACTOR=1
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Write coverage reports to a separate file for each process
export COVERAGE_OPTS="-p"
export DUMP_COVERAGE_COMMAND="coverage help"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decide whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
rm .coverage* || echo "No coverage files to remove"
tox --notest -e py27
TOX_BIN=$WORKSPACE/.tox/py27/bin
python synapse/python_dependencies.py | xargs -n1 $TOX_BIN/pip install
$TOX_BIN/pip install lxml
tox -e py27

View File

@@ -1,44 +0,0 @@
#! /bin/bash
# This clones a project from github into a named subdirectory
# If the project has a branch with the same name as this branch
# then it will checkout that branch after cloning.
# Otherwise it will checkout "origin/develop."
# The first argument is the name of the directory to checkout
# the branch into.
# The second argument is the URL of the remote repository to checkout.
# Usually something like https://github.com/matrix-org/sytest.git
set -eux
NAME=$1
PROJECT=$2
BASE=".$NAME-base"
# Update our mirror.
if [ ! -d ".$NAME-base" ]; then
# Create a local mirror of the source repository.
# This saves us from having to download the entire repository
# when this script is next run.
git clone "$PROJECT" "$BASE" --mirror
else
# Fetch any updates from the source repository.
(cd "$BASE"; git fetch -p)
fi
# Remove the existing repository so that we have a clean copy
rm -rf "$NAME"
# Cloning with --shared means that we will share portions of the
# .git directory with our local mirror.
git clone "$BASE" "$NAME" --shared
# Jenkins may have supplied us with the name of the branch in the
# environment. Otherwise we will have to guess based on the current
# commit.
: ${GIT_BRANCH:="origin/$(git rev-parse --abbrev-ref HEAD)"}
cd "$NAME"
# check out the relevant branch
git checkout "${GIT_BRANCH}" || (
echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop"
git checkout "origin/develop"
)

View File

@@ -1,21 +1,20 @@
from synapse.events import FrozenEvent
from synapse.api.auth import Auth
from mock import Mock
from __future__ import print_function
import argparse
import itertools
import json
import sys
from mock import Mock
from synapse.api.auth import Auth
from synapse.events import FrozenEvent
def check_auth(auth, auth_chain, events):
auth_chain.sort(key=lambda e: e.depth)
auth_map = {
e.event_id: e
for e in auth_chain
}
auth_map = {e.event_id: e for e in auth_chain}
create_events = {}
for e in auth_chain:
@@ -25,31 +24,26 @@ def check_auth(auth, auth_chain, events):
for e in itertools.chain(auth_chain, events):
auth_events_list = [auth_map[i] for i, _ in e.auth_events]
auth_events = {
(e.type, e.state_key): e
for e in auth_events_list
}
auth_events = {(e.type, e.state_key): e for e in auth_events_list}
auth_events[("m.room.create", "")] = create_events[e.room_id]
try:
auth.check(e, auth_events=auth_events)
except Exception as ex:
print "Failed:", e.event_id, e.type, e.state_key
print "Auth_events:", auth_events
print ex
print json.dumps(e.get_dict(), sort_keys=True, indent=4)
print("Failed:", e.event_id, e.type, e.state_key)
print("Auth_events:", auth_events)
print(ex)
print(json.dumps(e.get_dict(), sort_keys=True, indent=4))
# raise
print "Success:", e.event_id, e.type, e.state_key
print("Success:", e.event_id, e.type, e.state_key)
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'json',
nargs='?',
type=argparse.FileType('r'),
default=sys.stdin,
'json', nargs='?', type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()

View File

@@ -1,10 +1,15 @@
from synapse.crypto.event_signing import *
from unpaddedbase64 import encode_base64
import argparse
import hashlib
import sys
import json
import logging
import sys
from unpaddedbase64 import encode_base64
from synapse.crypto.event_signing import (
check_event_content_hash,
compute_event_reference_hash,
)
class dictobj(dict):
@@ -24,27 +29,26 @@ class dictobj(dict):
def main():
parser = argparse.ArgumentParser()
parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'),
default=sys.stdin)
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()
logging.basicConfig()
event_json = dictobj(json.load(args.input_json))
algorithms = {
"sha256": hashlib.sha256,
}
algorithms = {"sha256": hashlib.sha256}
for alg_name in event_json.hashes:
if check_event_content_hash(event_json, algorithms[alg_name]):
print "PASS content hash %s" % (alg_name,)
print("PASS content hash %s" % (alg_name,))
else:
print "FAIL content hash %s" % (alg_name,)
print("FAIL content hash %s" % (alg_name,))
for algorithm in algorithms.values():
name, h_bytes = compute_event_reference_hash(event_json, algorithm)
print "Reference hash %s: %s" % (name, encode_base64(h_bytes))
print("Reference hash %s: %s" % (name, encode_base64(h_bytes)))
if __name__=="__main__":
if __name__ == "__main__":
main()

View File

@@ -1,15 +1,15 @@
from signedjson.sign import verify_signed_json
import argparse
import json
import logging
import sys
import urllib2
import dns.resolver
from signedjson.key import decode_verify_key_bytes, write_signing_keys
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
import urllib2
import json
import sys
import dns.resolver
import pprint
import argparse
import logging
def get_targets(server_name):
if ":" in server_name:
@@ -23,6 +23,7 @@ def get_targets(server_name):
except dns.resolver.NXDOMAIN:
yield (server_name, 8448)
def get_server_keys(server_name, target, port):
url = "https://%s:%i/_matrix/key/v1" % (target, port)
keys = json.load(urllib2.urlopen(url))
@@ -33,12 +34,14 @@ def get_server_keys(server_name, target, port):
verify_keys[key_id] = verify_key
return verify_keys
def main():
parser = argparse.ArgumentParser()
parser.add_argument("signature_name")
parser.add_argument("input_json", nargs="?", type=argparse.FileType('r'),
default=sys.stdin)
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()
logging.basicConfig()
@@ -48,24 +51,23 @@ def main():
for target, port in get_targets(server_name):
try:
keys = get_server_keys(server_name, target, port)
print "Using keys from https://%s:%s/_matrix/key/v1" % (target, port)
print("Using keys from https://%s:%s/_matrix/key/v1" % (target, port))
write_signing_keys(sys.stdout, keys.values())
break
except:
except Exception:
logging.exception("Error talking to %s:%s", target, port)
json_to_check = json.load(args.input_json)
print "Checking JSON:"
print("Checking JSON:")
for key_id in json_to_check["signatures"][args.signature_name]:
try:
key = keys[key_id]
verify_signed_json(json_to_check, args.signature_name, key)
print "PASS %s" % (key_id,)
except:
print("PASS %s" % (key_id,))
except Exception:
logging.exception("Check for key %s failed" % (key_id,))
print "FAIL %s" % (key_id,)
print("FAIL %s" % (key_id,))
if __name__ == '__main__':
main()

View File

@@ -1,13 +1,21 @@
import hashlib
import json
import sys
import time
import six
import psycopg2
import yaml
import sys
import json
import time
import hashlib
from unpaddedbase64 import encode_base64
from canonicaljson import encode_canonical_json
from signedjson.key import read_signing_keys
from signedjson.sign import sign_json
from canonicaljson import encode_canonical_json
from unpaddedbase64 import encode_base64
if six.PY2:
db_type = six.moves.builtins.buffer
else:
db_type = memoryview
def select_v1_keys(connection):
@@ -39,7 +47,9 @@ def select_v2_json(connection):
cursor.close()
results = {}
for server_name, key_id, key_json in rows:
results.setdefault(server_name, {})[key_id] = json.loads(str(key_json).decode("utf-8"))
results.setdefault(server_name, {})[key_id] = json.loads(
str(key_json).decode("utf-8")
)
return results
@@ -47,10 +57,7 @@ def convert_v1_to_v2(server_name, valid_until, keys, certificate):
return {
"old_verify_keys": {},
"server_name": server_name,
"verify_keys": {
key_id: {"key": key}
for key_id, key in keys.items()
},
"verify_keys": {key_id: {"key": key} for key_id, key in keys.items()},
"valid_until_ts": valid_until,
"tls_fingerprints": [fingerprint(certificate)],
}
@@ -65,7 +72,7 @@ def rows_v2(server, json):
valid_until = json["valid_until_ts"]
key_json = encode_canonical_json(json)
for key_id in json["verify_keys"]:
yield (server, key_id, "-", valid_until, valid_until, buffer(key_json))
yield (server, key_id, "-", valid_until, valid_until, db_type(key_json))
def main():
@@ -87,7 +94,7 @@ def main():
result = {}
for server in keys:
if not server in json:
if server not in json:
v2_json = convert_v1_to_v2(
server, valid_until, keys[server], certificates[server]
)
@@ -96,10 +103,7 @@ def main():
yaml.safe_dump(result, sys.stdout, default_flow_style=False)
rows = list(
row for server, json in result.items()
for row in rows_v2(server, json)
)
rows = list(row for server, json in result.items() for row in rows_v2(server, json))
cursor = connection.cursor()
cursor.executemany(
@@ -107,7 +111,7 @@ def main():
" server_name, key_id, from_server,"
" ts_added_ms, ts_valid_until_ms, key_json"
") VALUES (%s, %s, %s, %s, %s, %s)",
rows
rows,
)
connection.commit()

View File

@@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
/* Copyright 2016 OpenMarket Ltd
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);

View File

@@ -1,33 +0,0 @@
#!/usr/bin/perl -pi
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
$copyright = <<EOT;
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
EOT
s/^(# -\*- coding: utf-8 -\*-\n)?/$1$copyright/ if ($. == 1);

View File

@@ -1,8 +1,16 @@
#! /usr/bin/python
from __future__ import print_function
import argparse
import ast
import os
import re
import sys
import yaml
class DefinitionVisitor(ast.NodeVisitor):
def __init__(self):
super(DefinitionVisitor, self).__init__()
@@ -42,15 +50,18 @@ def non_empty(defs):
functions = {name: non_empty(f) for name, f in defs['def'].items()}
classes = {name: non_empty(f) for name, f in defs['class'].items()}
result = {}
if functions: result['def'] = functions
if classes: result['class'] = classes
if functions:
result['def'] = functions
if classes:
result['class'] = classes
names = defs['names']
uses = []
for name in names.get('Load', ()):
if name not in names.get('Param', ()) and name not in names.get('Store', ()):
uses.append(name)
uses.extend(defs['attrs'])
if uses: result['uses'] = uses
if uses:
result['uses'] = uses
result['names'] = names
result['attrs'] = defs['attrs']
return result
@@ -95,7 +106,6 @@ def used_names(prefix, item, defs, names):
if __name__ == '__main__':
import sys, os, argparse, re
parser = argparse.ArgumentParser(description='Find definitions.')
parser.add_argument(
@@ -105,24 +115,28 @@ if __name__ == '__main__':
"--ignore", action="append", metavar="REGEXP", help="Ignore a pattern"
)
parser.add_argument(
"--pattern", action="append", metavar="REGEXP",
help="Search for a pattern"
"--pattern", action="append", metavar="REGEXP", help="Search for a pattern"
)
parser.add_argument(
"directories", nargs='+', metavar="DIR",
help="Directories to search for definitions"
"directories",
nargs='+',
metavar="DIR",
help="Directories to search for definitions",
)
parser.add_argument(
"--referrers", default=0, type=int,
help="Include referrers up to the given depth"
"--referrers",
default=0,
type=int,
help="Include referrers up to the given depth",
)
parser.add_argument(
"--referred", default=0, type=int,
help="Include referred down to the given depth"
"--referred",
default=0,
type=int,
help="Include referred down to the given depth",
)
parser.add_argument(
"--format", default="yaml",
help="Output format, one of 'yaml' or 'dot'"
"--format", default="yaml", help="Output format, one of 'yaml' or 'dot'"
)
args = parser.parse_args()
@@ -162,7 +176,7 @@ if __name__ == '__main__':
for used_by in entry.get("used", ()):
referrers.add(used_by)
for name, definition in names.items():
if not name in referrers:
if name not in referrers:
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
@@ -176,7 +190,7 @@ if __name__ == '__main__':
for uses in entry.get("uses", ()):
referred.add(uses)
for name, definition in names.items():
if not name in referred:
if name not in referred:
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
@@ -185,12 +199,12 @@ if __name__ == '__main__':
if args.format == 'yaml':
yaml.dump(result, sys.stdout, default_flow_style=False)
elif args.format == 'dot':
print "digraph {"
print("digraph {")
for name, entry in result.items():
print name
print(name)
for used_by in entry.get("used", ()):
if used_by in result:
print used_by, "->", name
print "}"
print(used_by, "->", name)
print("}")
else:
raise ValueError("Unknown format %r" % (args.format))

View File

@@ -1,8 +1,11 @@
#!/usr/bin/env python2
import pymacaroons
from __future__ import print_function
import sys
import pymacaroons
if len(sys.argv) == 1:
sys.stderr.write("usage: %s macaroon [key]\n" % (sys.argv[0],))
sys.exit(1)
@@ -11,14 +14,14 @@ macaroon_string = sys.argv[1]
key = sys.argv[2] if len(sys.argv) > 2 else None
macaroon = pymacaroons.Macaroon.deserialize(macaroon_string)
print macaroon.inspect()
print(macaroon.inspect())
print ""
print("")
verifier = pymacaroons.Verifier()
verifier.satisfy_general(lambda c: True)
try:
verifier.verify(macaroon, key)
print "Signature is correct"
print("Signature is correct")
except Exception as e:
print e.message
print(str(e))

View File

@@ -18,21 +18,21 @@
from __future__ import print_function
import argparse
import base64
import json
import sys
from urlparse import urlparse, urlunparse
import nacl.signing
import json
import base64
import requests
import sys
from requests.adapters import HTTPAdapter
import srvlookup
import yaml
from requests.adapters import HTTPAdapter
# uncomment the following to enable debug logging of http requests
#from httplib import HTTPConnection
#HTTPConnection.debuglevel = 1
# from httplib import HTTPConnection
# HTTPConnection.debuglevel = 1
def encode_base64(input_bytes):
"""Encode bytes as a base64 string without any padding."""
@@ -58,15 +58,15 @@ def decode_base64(input_string):
def encode_canonical_json(value):
return json.dumps(
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unnecessary white space.
separators=(',',':'),
# Sort the keys of dictionaries.
sort_keys=True,
# Encode the resulting unicode as UTF-8 bytes.
).encode("UTF-8")
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unnecessary white space.
separators=(',', ':'),
# Sort the keys of dictionaries.
sort_keys=True,
# Encode the resulting unicode as UTF-8 bytes.
).encode("UTF-8")
def sign_json(json_object, signing_key, signing_name):
@@ -88,6 +88,7 @@ def sign_json(json_object, signing_key, signing_name):
NACL_ED25519 = "ed25519"
def decode_signing_key_base64(algorithm, version, key_base64):
"""Decode a base64 encoded signing key
Args:
@@ -143,14 +144,12 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers = []
for key, sig in signed_json["signatures"][origin_name].items():
header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
origin_name, key, sig,
)
header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (origin_name, key, sig)
authorization_headers.append(bytes(header))
print ("Authorization: %s" % header, file=sys.stderr)
print("Authorization: %s" % header, file=sys.stderr)
dest = "matrix://%s%s" % (destination, path)
print ("Requesting %s" % dest, file=sys.stderr)
print("Requesting %s" % dest, file=sys.stderr)
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
@@ -158,10 +157,7 @@ def request_json(method, origin_name, origin_key, destination, path, content):
result = s.request(
method=method,
url=dest,
headers={
"Host": destination,
"Authorization": authorization_headers[0]
},
headers={"Host": destination, "Authorization": authorization_headers[0]},
verify=False,
data=content,
)
@@ -171,50 +167,50 @@ def request_json(method, origin_name, origin_key, destination, path, content):
def main():
parser = argparse.ArgumentParser(
description=
"Signs and sends a federation request to a matrix homeserver",
description="Signs and sends a federation request to a matrix homeserver"
)
parser.add_argument(
"-N", "--server-name",
"-N",
"--server-name",
help="Name to give as the local homeserver. If unspecified, will be "
"read from the config file.",
"read from the config file.",
)
parser.add_argument(
"-k", "--signing-key-path",
"-k",
"--signing-key-path",
help="Path to the file containing the private ed25519 key to sign the "
"request with.",
"request with.",
)
parser.add_argument(
"-c", "--config",
"-c",
"--config",
default="homeserver.yaml",
help="Path to server config file. Ignored if --server-name and "
"--signing-key-path are both given.",
"--signing-key-path are both given.",
)
parser.add_argument(
"-d", "--destination",
"-d",
"--destination",
default="matrix.org",
help="name of the remote homeserver. We will do SRV lookups and "
"connect appropriately.",
"connect appropriately.",
)
parser.add_argument(
"-X", "--method",
"-X",
"--method",
help="HTTP method to use for the request. Defaults to GET if --data is"
"unspecified, POST if it is."
"unspecified, POST if it is.",
)
parser.add_argument(
"--body",
help="Data to send as the body of the HTTP request"
)
parser.add_argument("--body", help="Data to send as the body of the HTTP request")
parser.add_argument(
"path",
help="request path. We will add '/_matrix/federation/v1/' to this."
"path", help="request path. We will add '/_matrix/federation/v1/' to this."
)
args = parser.parse_args()
@@ -227,13 +223,15 @@ def main():
result = request_json(
args.method,
args.server_name, key, args.destination,
args.server_name,
key,
args.destination,
"/_matrix/federation/v1/" + args.path,
content=args.body,
)
json.dump(result, sys.stdout)
print ("")
print("")
def read_args_from_config(args):
@@ -253,7 +251,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
return s, 8448
if ":" in s:
out = s.rsplit(":",1)
out = s.rsplit(":", 1)
try:
port = int(out[1])
except ValueError:
@@ -263,7 +261,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
except:
except Exception:
return s, 8448
def get_connection(self, url, proxies=None):
@@ -272,10 +270,9 @@ class MatrixConnectionAdapter(HTTPAdapter):
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlunparse((
"https", netloc, parsed.path, parsed.params, parsed.query,
parsed.fragment,
))
url = urlunparse(
("https", netloc, parsed.path, parsed.params, parsed.query, parsed.fragment)
)
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)

View File

@@ -1,23 +1,31 @@
from synapse.storage.pdu import PduStore
from synapse.storage.signatures import SignatureStore
from synapse.storage._base import SQLBaseStore
from synapse.federation.units import Pdu
from synapse.crypto.event_signing import (
add_event_pdu_content_hash, compute_pdu_event_reference_hash
)
from synapse.api.events.utils import prune_pdu
from unpaddedbase64 import encode_base64, decode_base64
from canonicaljson import encode_canonical_json
from __future__ import print_function
import sqlite3
import sys
from unpaddedbase64 import decode_base64, encode_base64
from synapse.crypto.event_signing import (
add_event_pdu_content_hash,
compute_pdu_event_reference_hash,
)
from synapse.federation.units import Pdu
from synapse.storage._base import SQLBaseStore
from synapse.storage.pdu import PduStore
from synapse.storage.signatures import SignatureStore
class Store(object):
_get_pdu_tuples = PduStore.__dict__["_get_pdu_tuples"]
_get_pdu_content_hashes_txn = SignatureStore.__dict__["_get_pdu_content_hashes_txn"]
_get_prev_pdu_hashes_txn = SignatureStore.__dict__["_get_prev_pdu_hashes_txn"]
_get_pdu_origin_signatures_txn = SignatureStore.__dict__["_get_pdu_origin_signatures_txn"]
_get_pdu_origin_signatures_txn = SignatureStore.__dict__[
"_get_pdu_origin_signatures_txn"
]
_store_pdu_content_hash_txn = SignatureStore.__dict__["_store_pdu_content_hash_txn"]
_store_pdu_reference_hash_txn = SignatureStore.__dict__["_store_pdu_reference_hash_txn"]
_store_pdu_reference_hash_txn = SignatureStore.__dict__[
"_store_pdu_reference_hash_txn"
]
_store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"]
_simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]
@@ -26,9 +34,7 @@ store = Store()
def select_pdus(cursor):
cursor.execute(
"SELECT pdu_id, origin FROM pdus ORDER BY depth ASC"
)
cursor.execute("SELECT pdu_id, origin FROM pdus ORDER BY depth ASC")
ids = cursor.fetchall()
@@ -41,23 +47,30 @@ def select_pdus(cursor):
for pdu in pdus:
try:
if pdu.prev_pdus:
print "PROCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus
print("PROCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
for pdu_id, origin, hashes in pdu.prev_pdus:
ref_alg, ref_hsh = reference_hashes[(pdu_id, origin)]
hashes[ref_alg] = encode_base64(ref_hsh)
store._store_prev_pdu_hash_txn(cursor, pdu.pdu_id, pdu.origin, pdu_id, origin, ref_alg, ref_hsh)
print "SUCCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus
store._store_prev_pdu_hash_txn(
cursor, pdu.pdu_id, pdu.origin, pdu_id, origin, ref_alg, ref_hsh
)
print("SUCCESS", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
pdu = add_event_pdu_content_hash(pdu)
ref_alg, ref_hsh = compute_pdu_event_reference_hash(pdu)
reference_hashes[(pdu.pdu_id, pdu.origin)] = (ref_alg, ref_hsh)
store._store_pdu_reference_hash_txn(cursor, pdu.pdu_id, pdu.origin, ref_alg, ref_hsh)
store._store_pdu_reference_hash_txn(
cursor, pdu.pdu_id, pdu.origin, ref_alg, ref_hsh
)
for alg, hsh_base64 in pdu.hashes.items():
print alg, hsh_base64
store._store_pdu_content_hash_txn(cursor, pdu.pdu_id, pdu.origin, alg, decode_base64(hsh_base64))
print(alg, hsh_base64)
store._store_pdu_content_hash_txn(
cursor, pdu.pdu_id, pdu.origin, alg, decode_base64(hsh_base64)
)
except Exception:
print("FAILED_", pdu.pdu_id, pdu.origin, pdu.prev_pdus)
except:
print "FAILED_", pdu.pdu_id, pdu.origin, pdu.prev_pdus
def main():
conn = sqlite3.connect(sys.argv[1])
@@ -65,5 +78,6 @@ def main():
select_pdus(cursor)
conn.commit()
if __name__=='__main__':
if __name__ == '__main__':
main()

View File

@@ -1,18 +1,17 @@
#! /usr/bin/python
import ast
import argparse
import ast
import os
import sys
import yaml
PATTERNS_V1 = []
PATTERNS_V2 = []
RESULT = {
"v1": PATTERNS_V1,
"v2": PATTERNS_V2,
}
RESULT = {"v1": PATTERNS_V1, "v2": PATTERNS_V2}
class CallVisitor(ast.NodeVisitor):
def visit_Call(self, node):
@@ -21,7 +20,6 @@ class CallVisitor(ast.NodeVisitor):
else:
return
if name == "client_path_patterns":
PATTERNS_V1.append(node.args[0].s)
elif name == "client_v2_patterns":
@@ -42,8 +40,10 @@ def find_patterns_in_file(filepath):
parser = argparse.ArgumentParser(description='Find url patterns.')
parser.add_argument(
"directories", nargs='+', metavar="DIR",
help="Directories to search for definitions"
"directories",
nargs='+',
metavar="DIR",
help="Directories to search for definitions",
)
args = parser.parse_args()

View File

@@ -0,0 +1,9 @@
#!/bin/bash
set -e
# Fetch the current GitHub issue number, add one to it -- presto! The likely
# next PR number.
CURRENT_NUMBER=`curl -s "https://api.github.com/repos/matrix-org/synapse/issues?state=all&per_page=1" | jq -r ".[0].number"`
CURRENT_NUMBER=$((CURRENT_NUMBER+1))
echo $CURRENT_NUMBER

View File

@@ -1,57 +0,0 @@
#!/bin/bash
## CAUTION:
## This script will remove (hopefully) all trace of the given room ID from
## your homeserver.db
## Do not run it lightly.
set -e
if [ "$1" == "-h" ] || [ "$1" == "" ]; then
echo "Call with ROOM_ID as first option and then pipe it into the database. So for instance you might run"
echo " nuke-room-from-db.sh <room_id> | sqlite3 homeserver.db"
echo "or"
echo " nuke-room-from-db.sh <room_id> | psql --dbname=synapse"
exit
fi
ROOMID="$1"
cat <<EOF
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search WHERE room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF

View File

@@ -1,8 +1,9 @@
import requests
import collections
import json
import sys
import time
import json
import requests
Entry = collections.namedtuple("Entry", "name position rows")
@@ -30,11 +31,11 @@ def parse_response(content):
def replicate(server, streams):
return parse_response(requests.get(
server + "/_synapse/replication",
verify=False,
params=streams
).content)
return parse_response(
requests.get(
server + "/_synapse/replication", verify=False, params=streams
).content
)
def main():
@@ -45,16 +46,16 @@ def main():
try:
streams = {
row.name: row.position
for row in replicate(server, {"streams":"-1"})["streams"].rows
for row in replicate(server, {"streams": "-1"})["streams"].rows
}
except requests.exceptions.ConnectionError as e:
except requests.exceptions.ConnectionError:
time.sleep(0.1)
while True:
try:
results = replicate(server, streams)
except:
sys.stdout.write("connection_lost("+ repr(streams) + ")\n")
except Exception:
sys.stdout.write("connection_lost(" + repr(streams) + ")\n")
break
for update in results.values():
for row in update.rows:
@@ -62,6 +63,5 @@ def main():
streams[update.name] = update.position
if __name__=='__main__':
if __name__ == '__main__':
main()

View File

@@ -1,12 +1,10 @@
#!/usr/bin/env python
import argparse
import getpass
import sys
import bcrypt
import getpass
import yaml
bcrypt_rounds=12
@@ -52,4 +50,3 @@ if __name__ == "__main__":
password = prompt_for_pass()
print bcrypt.hashpw(password + password_pepper, bcrypt.gensalt(bcrypt_rounds))

View File

@@ -36,12 +36,9 @@ from __future__ import print_function
import argparse
import logging
import sys
import os
import shutil
import sys
from synapse.rest.media.v1.filepath import MediaFilePaths
@@ -77,24 +74,23 @@ def move_media(origin_server, file_id, src_paths, dest_paths):
if not os.path.exists(original_file):
logger.warn(
"Original for %s/%s (%s) does not exist",
origin_server, file_id, original_file,
origin_server,
file_id,
original_file,
)
else:
mkdir_and_move(
original_file,
dest_paths.remote_media_filepath(origin_server, file_id),
original_file, dest_paths.remote_media_filepath(origin_server, file_id)
)
# now look for thumbnails
original_thumb_dir = src_paths.remote_media_thumbnail_dir(
origin_server, file_id,
)
original_thumb_dir = src_paths.remote_media_thumbnail_dir(origin_server, file_id)
if not os.path.exists(original_thumb_dir):
return
mkdir_and_move(
original_thumb_dir,
dest_paths.remote_media_thumbnail_dir(origin_server, file_id)
dest_paths.remote_media_thumbnail_dir(origin_server, file_id),
)
@@ -109,24 +105,16 @@ def mkdir_and_move(original_file, dest_file):
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__,
formatter_class = argparse.RawDescriptionHelpFormatter,
)
parser.add_argument(
"-v", action='store_true', help='enable debug logging')
parser.add_argument(
"src_repo",
help="Path to source content repo",
)
parser.add_argument(
"dest_repo",
help="Path to source content repo",
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument("-v", action='store_true', help='enable debug logging')
parser.add_argument("src_repo", help="Path to source content repo")
parser.add_argument("dest_repo", help="Path to source content repo")
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s",
}
logging.basicConfig(**logging_config)

View File

@@ -14,187 +14,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import getpass
import hashlib
import hmac
import json
import sys
import urllib2
import yaml
def request_registration(user, password, server_location, shared_secret, admin=False):
req = urllib2.Request(
"%s/_matrix/client/r0/admin/register" % (server_location,),
headers={'Content-Type': 'application/json'}
)
try:
if sys.version_info[:3] >= (2, 7, 9):
# As of version 2.7.9, urllib2 now checks SSL certs
import ssl
f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23))
else:
f = urllib2.urlopen(req)
body = f.read()
f.close()
nonce = json.loads(body)["nonce"]
except urllib2.HTTPError as e:
print "ERROR! Received %d %s" % (e.code, e.reason,)
if 400 <= e.code < 500:
if e.info().type == "application/json":
resp = json.load(e)
if "error" in resp:
print resp["error"]
sys.exit(1)
mac = hmac.new(
key=shared_secret,
digestmod=hashlib.sha1,
)
mac.update(nonce)
mac.update("\x00")
mac.update(user)
mac.update("\x00")
mac.update(password)
mac.update("\x00")
mac.update("admin" if admin else "notadmin")
mac = mac.hexdigest()
data = {
"nonce": nonce,
"username": user,
"password": password,
"mac": mac,
"admin": admin,
}
server_location = server_location.rstrip("/")
print "Sending registration request..."
req = urllib2.Request(
"%s/_matrix/client/r0/admin/register" % (server_location,),
data=json.dumps(data),
headers={'Content-Type': 'application/json'}
)
try:
if sys.version_info[:3] >= (2, 7, 9):
# As of version 2.7.9, urllib2 now checks SSL certs
import ssl
f = urllib2.urlopen(req, context=ssl.SSLContext(ssl.PROTOCOL_SSLv23))
else:
f = urllib2.urlopen(req)
f.read()
f.close()
print "Success."
except urllib2.HTTPError as e:
print "ERROR! Received %d %s" % (e.code, e.reason,)
if 400 <= e.code < 500:
if e.info().type == "application/json":
resp = json.load(e)
if "error" in resp:
print resp["error"]
sys.exit(1)
def register_new_user(user, password, server_location, shared_secret, admin):
if not user:
try:
default_user = getpass.getuser()
except:
default_user = None
if default_user:
user = raw_input("New user localpart [%s]: " % (default_user,))
if not user:
user = default_user
else:
user = raw_input("New user localpart: ")
if not user:
print "Invalid user name"
sys.exit(1)
if not password:
password = getpass.getpass("Password: ")
if not password:
print "Password cannot be blank."
sys.exit(1)
confirm_password = getpass.getpass("Confirm password: ")
if password != confirm_password:
print "Passwords do not match"
sys.exit(1)
if not admin:
admin = raw_input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin = True
else:
admin = False
request_registration(user, password, server_location, shared_secret, bool(admin))
from synapse._scripts.register_new_matrix_user import main
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Used to register new users with a given home server when"
" registration has been disabled. The home server must be"
" configured with the 'registration_shared_secret' option"
" set.",
)
parser.add_argument(
"-u", "--user",
default=None,
help="Local part of the new user. Will prompt if omitted.",
)
parser.add_argument(
"-p", "--password",
default=None,
help="New password for user. Will prompt if omitted.",
)
parser.add_argument(
"-a", "--admin",
action="store_true",
help="Register new user as an admin. Will prompt if omitted.",
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
"-c", "--config",
type=argparse.FileType('r'),
help="Path to server config file. Used to read in shared secret.",
)
group.add_argument(
"-k", "--shared-secret",
help="Shared secret as defined in server config file.",
)
parser.add_argument(
"server_url",
default="https://localhost:8448",
nargs='?',
help="URL to use to talk to the home server. Defaults to "
" 'https://localhost:8448'.",
)
args = parser.parse_args()
if "config" in args and args.config:
config = yaml.safe_load(args.config)
secret = config.get("registration_shared_secret", None)
if not secret:
print "No 'registration_shared_secret' defined in config."
sys.exit(1)
else:
secret = args.shared_secret
register_new_user(args.user, args.password, args.server_url, secret, args.admin)
main()

View File

@@ -15,23 +15,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer, reactor
from twisted.enterprise import adbapi
from synapse.storage._base import LoggingTransaction, SQLBaseStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
import argparse
import curses
import logging
import sys
import time
import traceback
import yaml
from six import string_types
import yaml
from twisted.enterprise import adbapi
from twisted.internet import defer, reactor
from synapse.storage._base import LoggingTransaction, SQLBaseStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
logger = logging.getLogger("synapse_port_db")
@@ -105,6 +105,7 @@ class Store(object):
*All* database interactions should go through this object.
"""
def __init__(self, db_pool, engine):
self.db_pool = db_pool
self.database_engine = engine
@@ -135,7 +136,8 @@ class Store(object):
txn = conn.cursor()
return func(
LoggingTransaction(txn, desc, self.database_engine, [], []),
*args, **kwargs
*args,
**kwargs
)
except self.database_engine.module.DatabaseError as e:
if self.database_engine.is_deadlock(e):
@@ -158,22 +160,20 @@ class Store(object):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()
return self.runInteraction("execute_sql", r)
def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (
table,
", ".join(k for k in headers),
", ".join("%s" for _ in headers)
", ".join("%s" for _ in headers),
)
try:
txn.executemany(sql, rows)
except:
logger.exception(
"Failed to insert: %s",
table,
)
except Exception:
logger.exception("Failed to insert: %s", table)
raise
@@ -206,7 +206,7 @@ class Porter(object):
"table_name": table,
"forward_rowid": 1,
"backward_rowid": 0,
}
},
)
forward_chunk = 1
@@ -221,10 +221,10 @@ class Porter(object):
table, forward_chunk, backward_chunk
)
else:
def delete_all(txn):
txn.execute(
"DELETE FROM port_from_sqlite3 WHERE table_name = %s",
(table,)
"DELETE FROM port_from_sqlite3 WHERE table_name = %s", (table,)
)
txn.execute("TRUNCATE %s CASCADE" % (table,))
@@ -232,11 +232,7 @@ class Porter(object):
yield self.postgres_store._simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
"forward_rowid": 1,
"backward_rowid": 0,
}
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
forward_chunk = 1
@@ -251,12 +247,16 @@ class Porter(object):
)
@defer.inlineCallbacks
def handle_table(self, table, postgres_size, table_size, forward_chunk,
backward_chunk):
def handle_table(
self, table, postgres_size, table_size, forward_chunk, backward_chunk
):
logger.info(
"Table %s: %i/%i (rows %i-%i) already ported",
table, postgres_size, table_size,
backward_chunk+1, forward_chunk-1,
table,
postgres_size,
table_size,
backward_chunk + 1,
forward_chunk - 1,
)
if not table_size:
@@ -271,7 +271,9 @@ class Porter(object):
return
if table in (
"user_directory", "user_directory_search", "users_who_share_rooms",
"user_directory",
"user_directory_search",
"users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenerate
@@ -283,37 +285,35 @@ class Porter(object):
# We need to make sure there is a single row, `(X, null), as that is
# what synapse expects to be there.
yield self.postgres_store._simple_insert(
table=table,
values={"stream_id": None},
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
return
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?"
% (table,)
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?" % (table,)
)
backward_select = (
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?"
% (table,)
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?" % (table,)
)
do_forward = [True]
do_backward = [True]
while True:
def r(txn):
forward_rows = []
backward_rows = []
if do_forward[0]:
txn.execute(forward_select, (forward_chunk, self.batch_size,))
txn.execute(forward_select, (forward_chunk, self.batch_size))
forward_rows = txn.fetchall()
if not forward_rows:
do_forward[0] = False
if do_backward[0]:
txn.execute(backward_select, (backward_chunk, self.batch_size,))
txn.execute(backward_select, (backward_chunk, self.batch_size))
backward_rows = txn.fetchall()
if not backward_rows:
do_backward[0] = False
@@ -325,9 +325,7 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.runInteraction(
"select", r
)
headers, frows, brows = yield self.sqlite_store.runInteraction("select", r)
if frows or brows:
if frows:
@@ -339,9 +337,7 @@ class Porter(object):
rows = self._convert_rows(table, headers, rows)
def insert(txn):
self.postgres_store.insert_many_txn(
txn, table, headers[1:], rows
)
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
self.postgres_store._simple_update_one_txn(
txn,
@@ -362,8 +358,9 @@ class Porter(object):
return
@defer.inlineCallbacks
def handle_search_table(self, postgres_size, table_size, forward_chunk,
backward_chunk):
def handle_search_table(
self, postgres_size, table_size, forward_chunk, backward_chunk
):
select = (
"SELECT es.rowid, es.*, e.origin_server_ts, e.stream_ordering"
" FROM event_search as es"
@@ -373,8 +370,9 @@ class Porter(object):
)
while True:
def r(txn):
txn.execute(select, (forward_chunk, self.batch_size,))
txn.execute(select, (forward_chunk, self.batch_size))
rows = txn.fetchall()
headers = [column[0] for column in txn.description]
@@ -402,18 +400,21 @@ class Porter(object):
else:
rows_dict.append(d)
txn.executemany(sql, [
(
row["event_id"],
row["room_id"],
row["key"],
row["sender"],
row["value"],
row["origin_server_ts"],
row["stream_ordering"],
)
for row in rows_dict
])
txn.executemany(
sql,
[
(
row["event_id"],
row["room_id"],
row["key"],
row["sender"],
row["value"],
row["origin_server_ts"],
row["stream_ordering"],
)
for row in rows_dict
],
)
self.postgres_store._simple_update_one_txn(
txn,
@@ -437,7 +438,8 @@ class Porter(object):
def setup_db(self, db_config, database_engine):
db_conn = database_engine.module.connect(
**{
k: v for k, v in db_config.get("args", {}).items()
k: v
for k, v in db_config.get("args", {}).items()
if not k.startswith("cp_")
}
)
@@ -450,13 +452,11 @@ class Porter(object):
def run(self):
try:
sqlite_db_pool = adbapi.ConnectionPool(
self.sqlite_config["name"],
**self.sqlite_config["args"]
self.sqlite_config["name"], **self.sqlite_config["args"]
)
postgres_db_pool = adbapi.ConnectionPool(
self.postgres_config["name"],
**self.postgres_config["args"]
self.postgres_config["name"], **self.postgres_config["args"]
)
sqlite_engine = create_engine(sqlite_config)
@@ -465,9 +465,7 @@ class Porter(object):
self.sqlite_store = Store(sqlite_db_pool, sqlite_engine)
self.postgres_store = Store(postgres_db_pool, postgres_engine)
yield self.postgres_store.execute(
postgres_engine.check_database
)
yield self.postgres_store.execute(postgres_engine.check_database)
# Step 1. Set up databases.
self.progress.set_state("Preparing SQLite3")
@@ -477,6 +475,7 @@ class Porter(object):
self.setup_db(postgres_config, postgres_engine)
self.progress.set_state("Creating port tables")
def create_port_table(txn):
txn.execute(
"CREATE TABLE IF NOT EXISTS port_from_sqlite3 ("
@@ -501,10 +500,9 @@ class Porter(object):
)
try:
yield self.postgres_store.runInteraction(
"alter_table", alter_table
)
except Exception as e:
yield self.postgres_store.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
yield self.postgres_store.runInteraction(
@@ -514,11 +512,7 @@ class Porter(object):
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store._simple_select_onecol(
table="sqlite_master",
keyvalues={
"type": "table",
},
retcol="name",
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = yield self.postgres_store._simple_select_onecol(
@@ -545,18 +539,14 @@ class Porter(object):
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[
self.handle_table(*res)
for res in setup_res
],
consumeErrors=True,
[self.handle_table(*res) for res in setup_res], consumeErrors=True
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
self.progress.done()
except:
except Exception:
global end_error_exec_info
end_error_exec_info = sys.exc_info()
logger.exception("")
@@ -566,9 +556,7 @@ class Porter(object):
def _convert_rows(self, table, headers, rows):
bool_col_names = BOOLEAN_COLUMNS.get(table, [])
bool_cols = [
i for i, h in enumerate(headers) if h in bool_col_names
]
bool_cols = [i for i, h in enumerate(headers) if h in bool_col_names]
class BadValueException(Exception):
pass
@@ -577,18 +565,21 @@ class Porter(object):
if j in bool_cols:
return bool(col)
elif isinstance(col, string_types) and "\0" in col:
logger.warn("DROPPING ROW: NUL value in table %s col %s: %r", table, headers[j], col)
raise BadValueException();
logger.warn(
"DROPPING ROW: NUL value in table %s col %s: %r",
table,
headers[j],
col,
)
raise BadValueException()
return col
outrows = []
for i, row in enumerate(rows):
try:
outrows.append(tuple(
conv(j, col)
for j, col in enumerate(row)
if j > 0
))
outrows.append(
tuple(conv(j, col) for j, col in enumerate(row) if j > 0)
)
except BadValueException:
pass
@@ -616,9 +607,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = yield self.sqlite_store.runInteraction(
"select", r,
)
headers, rows = yield self.sqlite_store.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -639,7 +628,7 @@ class Porter(object):
txn.execute(
"SELECT rowid FROM sent_transactions WHERE ts >= ?"
" ORDER BY rowid ASC LIMIT 1",
(yesterday,)
(yesterday,),
)
rows = txn.fetchall()
@@ -657,21 +646,17 @@ class Porter(object):
"table_name": "sent_transactions",
"forward_rowid": next_chunk,
"backward_rowid": 0,
}
},
)
def get_sent_table_size(txn):
txn.execute(
"SELECT count(*) FROM sent_transactions"
" WHERE ts >= ?",
(yesterday,)
"SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,)
)
size, = txn.fetchone()
return int(size)
remaining_count = yield self.sqlite_store.execute(
get_sent_table_size
)
remaining_count = yield self.sqlite_store.execute(get_sent_table_size)
total_count = remaining_count + inserted_rows
@@ -680,13 +665,11 @@ class Porter(object):
@defer.inlineCallbacks
def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = yield self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,),
forward_chunk,
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
)
brows = yield self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,),
backward_chunk,
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
)
defer.returnValue(frows[0][0] + brows[0][0])
@@ -694,7 +677,7 @@ class Porter(object):
@defer.inlineCallbacks
def _get_already_ported_count(self, table):
rows = yield self.postgres_store.execute_sql(
"SELECT count(*) FROM %s" % (table,),
"SELECT count(*) FROM %s" % (table,)
)
defer.returnValue(rows[0][0])
@@ -717,22 +700,21 @@ class Porter(object):
def _setup_state_group_id_seq(self):
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
next_id = txn.fetchone()[0]+1
txn.execute(
"ALTER SEQUENCE state_group_id_seq RESTART WITH %s",
(next_id,),
)
next_id = txn.fetchone()[0] + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.runInteraction("setup_state_group_id_seq", r)
##############################################
###### The following is simply UI stuff ######
# The following is simply UI stuff
##############################################
class Progress(object):
"""Used to report progress of the port
"""
def __init__(self):
self.tables = {}
@@ -758,6 +740,7 @@ class Progress(object):
class CursesProgress(Progress):
"""Reports progress to a curses window
"""
def __init__(self, stdscr):
self.stdscr = stdscr
@@ -801,7 +784,7 @@ class CursesProgress(Progress):
duration = int(now) - int(self.start_time)
minutes, seconds = divmod(duration, 60)
duration_str = '%02dm %02ds' % (minutes, seconds,)
duration_str = '%02dm %02ds' % (minutes, seconds)
if self.finished:
status = "Time spent: %s (Done!)" % (duration_str,)
@@ -814,16 +797,12 @@ class CursesProgress(Progress):
est_remaining_str = '%02dm %02ds remaining' % divmod(est_remaining, 60)
else:
est_remaining_str = "Unknown"
status = (
"Time spent: %s (est. remaining: %s)"
% (duration_str, est_remaining_str,)
status = "Time spent: %s (est. remaining: %s)" % (
duration_str,
est_remaining_str,
)
self.stdscr.addstr(
0, 0,
status,
curses.A_BOLD,
)
self.stdscr.addstr(0, 0, status, curses.A_BOLD)
max_len = max([len(t) for t in self.tables.keys()])
@@ -831,9 +810,7 @@ class CursesProgress(Progress):
middle_space = 1
items = self.tables.items()
items.sort(
key=lambda i: (i[1]["perc"], i[0]),
)
items.sort(key=lambda i: (i[1]["perc"], i[0]))
for i, (table, data) in enumerate(items):
if i + 2 >= rows:
@@ -844,9 +821,7 @@ class CursesProgress(Progress):
color = curses.color_pair(2) if perc == 100 else curses.color_pair(1)
self.stdscr.addstr(
i + 2, left_margin + max_len - len(table),
table,
curses.A_BOLD | color,
i + 2, left_margin + max_len - len(table), table, curses.A_BOLD | color
)
size = 20
@@ -857,15 +832,13 @@ class CursesProgress(Progress):
)
self.stdscr.addstr(
i + 2, left_margin + max_len + middle_space,
i + 2,
left_margin + max_len + middle_space,
"%s %3d%% (%d/%d)" % (progress, perc, data["num_done"], data["total"]),
)
if self.finished:
self.stdscr.addstr(
rows - 1, 0,
"Press any key to exit...",
)
self.stdscr.addstr(rows - 1, 0, "Press any key to exit...")
self.stdscr.refresh()
self.last_update = time.time()
@@ -877,29 +850,25 @@ class CursesProgress(Progress):
def set_state(self, state):
self.stdscr.clear()
self.stdscr.addstr(
0, 0,
state + "...",
curses.A_BOLD,
)
self.stdscr.addstr(0, 0, state + "...", curses.A_BOLD)
self.stdscr.refresh()
class TerminalProgress(Progress):
"""Just prints progress to the terminal
"""
def update(self, table, num_done):
super(TerminalProgress, self).update(table, num_done)
data = self.tables[table]
print "%s: %d%% (%d/%d)" % (
table, data["perc"],
data["num_done"], data["total"],
print(
"%s: %d%% (%d/%d)" % (table, data["perc"], data["num_done"], data["total"])
)
def set_state(self, state):
print state + "..."
print(state + "...")
##############################################
@@ -909,34 +878,38 @@ class TerminalProgress(Progress):
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="A script to port an existing synapse SQLite database to"
" a new PostgreSQL database."
" a new PostgreSQL database."
)
parser.add_argument("-v", action='store_true')
parser.add_argument(
"--sqlite-database", required=True,
"--sqlite-database",
required=True,
help="The snapshot of the SQLite database file. This must not be"
" currently used by a running synapse server"
" currently used by a running synapse server",
)
parser.add_argument(
"--postgres-config", type=argparse.FileType('r'), required=True,
help="The database config file for the PostgreSQL database"
"--postgres-config",
type=argparse.FileType('r'),
required=True,
help="The database config file for the PostgreSQL database",
)
parser.add_argument(
"--curses", action='store_true',
help="display a curses based progress UI"
"--curses", action='store_true', help="display a curses based progress UI"
)
parser.add_argument(
"--batch-size", type=int, default=1000,
"--batch-size",
type=int,
default=1000,
help="The number of rows to select from the SQLite table each"
" iteration [default=1000]",
" iteration [default=1000]",
)
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s"
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s",
}
if args.curses:


@@ -14,17 +14,16 @@ ignore =
pylint.cfg
tox.ini
[pep8]
max-line-length = 90
# W503 requires that binary operators be at the end, not start, of lines. Erik
# doesn't like it. E203 is contrary to PEP8. E731 is silly.
ignore = W503,E203,E731
[flake8]
# note that flake8 inherits the "ignore" settings from "pep8" (because it uses
# pep8 to do those checks), but not the "max-line-length" setting
max-line-length = 90
ignore=W503,E203,E731
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
ignore=W503,W504,E203,E731
[isort]
line_length = 89


@@ -1,6 +1,8 @@
#!/usr/bin/env python
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2014-2017 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2017-2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -86,7 +88,7 @@ setup(
name="matrix-synapse",
version=version,
packages=find_packages(exclude=["tests", "tests.*"]),
description="Reference Synapse Home Server",
description="Reference homeserver for the Matrix decentralised comms protocol",
install_requires=dependencies['requirements'](include_conditional=True).keys(),
dependency_links=dependencies["DEPENDENCY_LINKS"].values(),
include_package_data=True,


@@ -17,4 +17,14 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.33.4"
try:
from twisted.internet import protocol
from twisted.internet.protocol import Factory
from twisted.names.dns import DNSDatagramProtocol
protocol.Factory.noisy = False
Factory.noisy = False
DNSDatagramProtocol.noisy = False
except ImportError:
pass
__version__ = "0.33.8rc1"



@@ -0,0 +1,215 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
import argparse
import getpass
import hashlib
import hmac
import logging
import sys
from six.moves import input
import requests as _requests
import yaml
def request_registration(
user,
password,
server_location,
shared_secret,
admin=False,
requests=_requests,
_print=print,
exit=sys.exit,
):
url = "%s/_matrix/client/r0/admin/register" % (server_location,)
# Get the nonce
r = requests.get(url, verify=False)
if r.status_code != 200:
_print("ERROR! Received %d %s" % (r.status_code, r.reason))
if 400 <= r.status_code < 500:
try:
_print(r.json()["error"])
except Exception:
pass
return exit(1)
nonce = r.json()["nonce"]
mac = hmac.new(key=shared_secret.encode('utf8'), digestmod=hashlib.sha1)
mac.update(nonce.encode('utf8'))
mac.update(b"\x00")
mac.update(user.encode('utf8'))
mac.update(b"\x00")
mac.update(password.encode('utf8'))
mac.update(b"\x00")
mac.update(b"admin" if admin else b"notadmin")
mac = mac.hexdigest()
data = {
"nonce": nonce,
"username": user,
"password": password,
"mac": mac,
"admin": admin,
}
_print("Sending registration request...")
r = requests.post(url, json=data, verify=False)
if r.status_code != 200:
_print("ERROR! Received %d %s" % (r.status_code, r.reason))
if 400 <= r.status_code < 500:
try:
_print(r.json()["error"])
except Exception:
pass
return exit(1)
_print("Success!")
def register_new_user(user, password, server_location, shared_secret, admin):
if not user:
try:
default_user = getpass.getuser()
except Exception:
default_user = None
if default_user:
user = input("New user localpart [%s]: " % (default_user,))
if not user:
user = default_user
else:
user = input("New user localpart: ")
if not user:
print("Invalid user name")
sys.exit(1)
if not password:
password = getpass.getpass("Password: ")
if not password:
print("Password cannot be blank.")
sys.exit(1)
confirm_password = getpass.getpass("Confirm password: ")
if password != confirm_password:
print("Passwords do not match")
sys.exit(1)
if admin is None:
admin = input("Make admin [no]: ")
if admin in ("y", "yes", "true"):
admin = True
else:
admin = False
request_registration(user, password, server_location, shared_secret, bool(admin))
def main():
logging.captureWarnings(True)
parser = argparse.ArgumentParser(
description="Used to register new users with a given home server when"
" registration has been disabled. The home server must be"
" configured with the 'registration_shared_secret' option"
" set."
)
parser.add_argument(
"-u",
"--user",
default=None,
help="Local part of the new user. Will prompt if omitted.",
)
parser.add_argument(
"-p",
"--password",
default=None,
help="New password for user. Will prompt if omitted.",
)
admin_group = parser.add_mutually_exclusive_group()
admin_group.add_argument(
"-a",
"--admin",
action="store_true",
help=(
"Register new user as an admin. "
"Will prompt if --no-admin is not set either."
),
)
admin_group.add_argument(
"--no-admin",
action="store_true",
help=(
"Register new user as a regular user. "
"Will prompt if --admin is not set either."
),
)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument(
"-c",
"--config",
type=argparse.FileType('r'),
help="Path to server config file. Used to read in shared secret.",
)
group.add_argument(
"-k", "--shared-secret", help="Shared secret as defined in server config file."
)
parser.add_argument(
"server_url",
default="https://localhost:8448",
nargs='?',
help="URL to use to talk to the home server. Defaults to "
" 'https://localhost:8448'.",
)
args = parser.parse_args()
if "config" in args and args.config:
config = yaml.safe_load(args.config)
secret = config.get("registration_shared_secret", None)
if not secret:
print("No 'registration_shared_secret' defined in config.")
sys.exit(1)
else:
secret = args.shared_secret
admin = None
if args.admin or args.no_admin:
admin = args.admin
register_new_user(args.user, args.password, args.server_url, secret, admin)
if __name__ == "__main__":
main()

View File

@@ -59,6 +59,7 @@ class Codes(object):
RESOURCE_LIMIT_EXCEEDED = "M_RESOURCE_LIMIT_EXCEEDED"
UNSUPPORTED_ROOM_VERSION = "M_UNSUPPORTED_ROOM_VERSION"
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
class CodeMessageException(RuntimeError):
@@ -312,6 +313,20 @@ class LimitExceededError(SynapseError):
)
class RoomKeysVersionError(SynapseError):
"""A client has tried to upload to a non-current version of the room_keys store
"""
def __init__(self, current_version):
"""
Args:
current_version (str): the current version of the store they should have used
"""
super(RoomKeysVersionError, self).__init__(
403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION
)
self.current_version = current_version
class IncompatibleRoomVersionError(SynapseError):
"""A server is trying to join a room whose version it does not support."""


@@ -172,7 +172,10 @@ USER_FILTER_SCHEMA = {
# events a lot easier as we can then use a negative lookbehind
# assertion to split '\.' If we allowed \\ then it would
# incorrectly split '\\.' See synapse.events.utils.serialize_event
"pattern": "^((?!\\\).)*$"
#
# Note that because this is a regular expression, we have to escape
# each backslash in the pattern.
"pattern": r"^((?!\\\\).)*$"
}
}
},
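To make the escaping comment above concrete: the pattern contains two regex-level backslashes, so it rejects any field name containing the two-character sequence \\. A quick illustration (not from the source):

import re

PATTERN = r"^((?!\\\\).)*$"

assert re.match(PATTERN, r"m\.room\.message")     # escaped dots are allowed
assert re.match(PATTERN, "content.body")          # plain dots are allowed
assert re.match(PATTERN, r"weird\\name") is None  # a backslash pair is not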
@@ -226,7 +229,7 @@ class Filtering(object):
jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
format_checker=FormatChecker())
except jsonschema.ValidationError as e:
raise SynapseError(400, e.message)
raise SynapseError(400, str(e))
class FilterCollection(object):


@@ -64,7 +64,7 @@ class ConsentURIBuilder(object):
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id,
msg=user_id.encode('ascii'),
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (


@@ -24,7 +24,7 @@ try:
python_dependencies.check_requirements()
except python_dependencies.MissingRequirementError as e:
message = "\n".join([
"Missing Requirement: %s" % (e.message,),
"Missing Requirement: %s" % (str(e),),
"To install run:",
" pip install --upgrade --force \"%s\"" % (e.dependency,),
"",


@@ -17,6 +17,7 @@ import gc
import logging
import sys
import psutil
from daemonize import Daemonize
from twisted.internet import error, reactor
@@ -24,12 +25,6 @@ from twisted.internet import error, reactor
from synapse.util import PreserveLoggingContext
from synapse.util.rlimit import change_resource_limit
try:
import affinity
except Exception:
affinity = None
logger = logging.getLogger(__name__)
@@ -89,15 +84,20 @@ def start_reactor(
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
if not affinity:
quit_with_error(
"Missing package 'affinity' required for cpu_affinity\n"
"option\n\n"
"Install by running:\n\n"
" pip install affinity\n\n"
)
logger.info("Setting CPU affinity to %s" % cpu_affinity)
affinity.set_process_affinity_mask(0, cpu_affinity)
# Turn the bitmask into bits, reverse it so we go from 0 up
mask_to_bits = bin(cpu_affinity)[2:][::-1]
cpus = []
cpu_num = 0
for i in mask_to_bits:
if i == "1":
cpus.append(cpu_num)
cpu_num += 1
p = psutil.Process()
p.cpu_affinity(cpus)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
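The bitmask conversion above can be checked in isolation; a small sketch (the psutil call at the end mirrors the code above and is only indicative):

def mask_to_cpus(cpu_affinity):
    # bin(0b1010) -> "0b1010"; strip the "0b" prefix and reverse the
    # string so that index 0 corresponds to CPU 0.
    bits = bin(cpu_affinity)[2:][::-1]
    return [cpu for cpu, bit in enumerate(bits) if bit == "1"]

assert mask_to_cpus(0b1010) == [1, 3]
assert mask_to_cpus(0x1) == [0]

# then, as above: psutil.Process().cpu_affinity(mask_to_cpus(0b1010))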


@@ -136,7 +136,7 @@ def start(config_options):
"Synapse appservice", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.appservice"
@@ -172,7 +172,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -153,7 +153,7 @@ def start(config_options):
"Synapse client reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.client_reader"
@@ -181,7 +181,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -169,7 +169,7 @@ def start(config_options):
"Synapse event creator", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.event_creator"
@@ -178,6 +178,9 @@ def start(config_options):
setup_logging(config, use_worker_options=True)
# This should only be done on the user directory worker or the master
config.update_user_directory = False
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
@@ -199,7 +202,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -140,7 +140,7 @@ def start(config_options):
"Synapse federation reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_reader"
@@ -168,7 +168,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -160,7 +160,7 @@ def start(config_options):
"Synapse federation sender", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_sender"
@@ -201,7 +201,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)
_base.start_worker_reactor("synapse-federation-sender", config)


@@ -68,7 +68,7 @@ class PresenceStatusStubServlet(ClientV1RestServlet):
"Authorization": auth_headers,
}
result = yield self.http_client.get_json(
self.main_uri + request.uri,
self.main_uri + request.uri.decode('ascii'),
headers=headers,
)
defer.returnValue((200, result))
@@ -125,7 +125,7 @@ class KeyUploadServlet(RestServlet):
"Authorization": auth_headers,
}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri,
self.main_uri + request.uri.decode('ascii'),
body,
headers=headers,
)
@@ -228,7 +228,7 @@ def start(config_options):
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
@@ -258,7 +258,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -20,6 +20,7 @@ import sys
from six import iteritems
import psutil
from prometheus_client import Gauge
from twisted.application import service
@@ -301,7 +302,7 @@ class SynapseHomeServer(HomeServer):
try:
database_engine.check_database(db_conn.cursor())
except IncorrectDatabaseSetup as e:
quit_with_error(e.message)
quit_with_error(str(e))
# Gauges to expose monthly active user control metrics
@@ -328,7 +329,7 @@ def setup(config_options):
config_options,
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if not config:
@@ -384,10 +385,8 @@ def setup(config_options):
def start():
hs.get_pusherpool().start()
hs.get_state_handler().start_caching()
hs.get_datastore().start_profiling()
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
reactor.callWhenRunning(start)
@@ -457,6 +456,10 @@ def run(hs):
stats["homeserver"] = hs.config.server_name
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
@@ -500,7 +503,6 @@ def run(hs):
def performance_stats_init():
try:
import psutil
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
@@ -508,12 +510,9 @@ def run(hs):
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (ImportError, AttributeError):
logger.warn(
"report_stats enabled but psutil is not installed or incorrect version."
" Disabling reporting of memory/cpu stats."
" Ensuring psutil is available will help matrix.org track performance"
" changes across releases."
except AttributeError:
logger.warning(
"Unable to read memory/cpu stats. Disabling reporting."
)
def generate_user_daily_visit_stats():
@@ -528,10 +527,13 @@ def run(hs):
clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
# monthly active user limiting functionality
clock.looping_call(
hs.get_datastore().reap_monthly_active_users, 1000 * 60 * 60
)
hs.get_datastore().reap_monthly_active_users()
def reap_monthly_active_users():
return run_as_background_process(
"reap_monthly_active_users",
hs.get_datastore().reap_monthly_active_users,
)
clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)
reap_monthly_active_users()
@defer.inlineCallbacks
def generate_monthly_active_users():
@@ -545,12 +547,15 @@ def run(hs):
registered_reserved_users_mau_gauge.set(float(reserved_count))
max_mau_gauge.set(float(hs.config.max_mau_value))
hs.get_datastore().initialise_reserved_users(
hs.config.mau_limits_reserved_threepids
)
generate_monthly_active_users()
def start_generate_monthly_active_users():
return run_as_background_process(
"generate_monthly_active_users",
generate_monthly_active_users,
)
start_generate_monthly_active_users()
if hs.config.limit_usage_by_mau:
clock.looping_call(generate_monthly_active_users, 5 * 60 * 1000)
clock.looping_call(start_generate_monthly_active_users, 5 * 60 * 1000)
# End of monthly active user settings
if hs.config.report_stats:
@@ -566,7 +571,7 @@ def run(hs):
clock.call_later(5 * 60, start_phone_stats_home)
if hs.config.daemonize and hs.config.print_pidfile:
print (hs.config.pid_file)
print(hs.config.pid_file)
_base.start_reactor(
"synapse-homeserver",


@@ -133,7 +133,7 @@ def start(config_options):
"Synapse media repository", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.media_repository"
@@ -168,7 +168,6 @@ def start(config_options):
ss.start_listening(config.worker_listeners)
def start():
ss.get_state_handler().start_caching()
ss.get_datastore().start_profiling()
reactor.callWhenRunning(start)


@@ -28,6 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
@@ -49,31 +50,31 @@ class PusherSlaveStore(
SlavedAccountDataStore
):
update_pusher_last_stream_ordering_and_success = (
DataStore.update_pusher_last_stream_ordering_and_success.__func__
__func__(DataStore.update_pusher_last_stream_ordering_and_success)
)
update_pusher_failing_since = (
DataStore.update_pusher_failing_since.__func__
__func__(DataStore.update_pusher_failing_since)
)
update_pusher_last_stream_ordering = (
DataStore.update_pusher_last_stream_ordering.__func__
__func__(DataStore.update_pusher_last_stream_ordering)
)
get_throttle_params_by_room = (
DataStore.get_throttle_params_by_room.__func__
__func__(DataStore.get_throttle_params_by_room)
)
set_throttle_params = (
DataStore.set_throttle_params.__func__
__func__(DataStore.set_throttle_params)
)
get_time_of_last_push_action_before = (
DataStore.get_time_of_last_push_action_before.__func__
__func__(DataStore.get_time_of_last_push_action_before)
)
get_profile_displayname = (
DataStore.get_profile_displayname.__func__
__func__(DataStore.get_profile_displayname)
)
@@ -160,11 +161,11 @@ class PusherReplicationHandler(ReplicationClientHandler):
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
self.pusher_pool.on_new_notifications(
yield self.pusher_pool.on_new_notifications(
token, token,
)
elif stream_name == "receipts":
self.pusher_pool.on_new_receipts(
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
@@ -182,7 +183,7 @@ class PusherReplicationHandler(ReplicationClientHandler):
def start_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return self.pusher_pool._refresh_pusher(app_id, pushkey, user_id)
return self.pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)
def start(config_options):
@@ -191,7 +192,7 @@ def start(config_options):
"Synapse pusher", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.pusher"
@@ -228,7 +229,6 @@ def start(config_options):
def start():
ps.get_pusherpool().start()
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -33,7 +33,7 @@ from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -147,7 +147,7 @@ class SynchrotronPresence(object):
and haven't come back yet. If there are, poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in self.users_going_offline.items():
for user_id, last_sync_ms in list(self.users_going_offline.items()):
if now - last_sync_ms > 10 * 1000:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
@@ -156,9 +156,9 @@ class SynchrotronPresence(object):
# TODO How's this supposed to work?
pass
get_states = PresenceHandler.get_states.__func__
get_state = PresenceHandler.get_state.__func__
current_state_for_users = PresenceHandler.current_state_for_users.__func__
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)
current_state_for_users = __func__(PresenceHandler.current_state_for_users)
def user_syncing(self, user_id, affect_presence):
if affect_presence:
@@ -208,7 +208,7 @@ class SynchrotronPresence(object):
) for row in rows]
for state in states:
self.user_to_current_state[row.user_id] = state
self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
@@ -410,7 +410,7 @@ def start(config_options):
"Synapse synchrotron", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.synchrotron"
@@ -435,7 +435,6 @@ def start(config_options):
def start():
ss.get_datastore().start_profiling()
ss.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -1,284 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import collections
import errno
import glob
import os
import os.path
import signal
import subprocess
import sys
import time
from six import iteritems
import yaml
SYNAPSE = [sys.executable, "-B", "-m", "synapse.app.homeserver"]
GREEN = "\x1b[1;32m"
YELLOW = "\x1b[1;33m"
RED = "\x1b[1;31m"
NORMAL = "\x1b[m"
def pid_running(pid):
try:
os.kill(pid, 0)
return True
except OSError as err:
if err.errno == errno.EPERM:
return True
return False
def write(message, colour=NORMAL, stream=sys.stdout):
if colour == NORMAL:
stream.write(message + "\n")
else:
stream.write(colour + message + NORMAL + "\n")
def abort(message, colour=RED, stream=sys.stderr):
write(message, colour, stream)
sys.exit(1)
def start(configfile):
write("Starting ...")
args = SYNAPSE
args.extend(["--daemonize", "-c", configfile])
try:
subprocess.check_call(args)
write("started synapse.app.homeserver(%r)" %
(configfile,), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting (exit code: %d); see above for logs" % e.returncode,
colour=RED,
)
def start_worker(app, configfile, worker_configfile):
args = [
"python", "-B",
"-m", app,
"-c", configfile,
"-c", worker_configfile
]
try:
subprocess.check_call(args)
write("started %s(%r)" % (app, worker_configfile), colour=GREEN)
except subprocess.CalledProcessError as e:
write(
"error starting %s(%r) (exit code: %d); see above for logs" % (
app, worker_configfile, e.returncode,
),
colour=RED,
)
def stop(pidfile, app):
if os.path.exists(pidfile):
pid = int(open(pidfile).read())
try:
os.kill(pid, signal.SIGTERM)
write("stopped %s" % (app,), colour=GREEN)
except OSError as err:
if err.errno == errno.ESRCH:
write("%s not running" % (app,), colour=YELLOW)
elif err.errno == errno.EPERM:
abort("Cannot stop %s: Operation not permitted" % (app,))
else:
abort("Cannot stop %s: Unknown error" % (app,))
Worker = collections.namedtuple("Worker", [
"app", "configfile", "pidfile", "cache_factor"
])
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"action",
choices=["start", "stop", "restart"],
help="whether to start, stop or restart the synapse",
)
parser.add_argument(
"configfile",
nargs="?",
default="homeserver.yaml",
help="the homeserver config file, defaults to homeserver.yaml",
)
parser.add_argument(
"-w", "--worker",
metavar="WORKERCONFIG",
help="start or stop a single worker",
)
parser.add_argument(
"-a", "--all-processes",
metavar="WORKERCONFIGDIR",
help="start or stop all the workers in the given directory"
" and the main synapse process",
)
options = parser.parse_args()
if options.worker and options.all_processes:
write(
'Cannot use "--worker" with "--all-processes"',
stream=sys.stderr
)
sys.exit(1)
configfile = options.configfile
if not os.path.exists(configfile):
write(
"No config file found\n"
"To generate a config file, run '%s -c %s --generate-config"
" --server-name=<server name>'\n" % (
" ".join(SYNAPSE), options.configfile
),
stream=sys.stderr,
)
sys.exit(1)
with open(configfile) as stream:
config = yaml.load(stream)
pidfile = config["pid_file"]
cache_factor = config.get("synctl_cache_factor")
start_stop_synapse = True
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
cache_factors = config.get("synctl_cache_factors", {})
for cache_name, factor in iteritems(cache_factors):
os.environ["SYNAPSE_CACHE_FACTOR_" + cache_name.upper()] = str(factor)
worker_configfiles = []
if options.worker:
start_stop_synapse = False
worker_configfile = options.worker
if not os.path.exists(worker_configfile):
write(
"No worker config found at %r" % (worker_configfile,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.append(worker_configfile)
if options.all_processes:
# To start the main synapse with -a you need to add a worker file
# with worker_app == "synapse.app.homeserver"
start_stop_synapse = False
worker_configdir = options.all_processes
if not os.path.isdir(worker_configdir):
write(
"No worker config directory found at %r" % (worker_configdir,),
stream=sys.stderr,
)
sys.exit(1)
worker_configfiles.extend(sorted(glob.glob(
os.path.join(worker_configdir, "*.yaml")
)))
workers = []
for worker_configfile in worker_configfiles:
with open(worker_configfile) as stream:
worker_config = yaml.load(stream)
worker_app = worker_config["worker_app"]
if worker_app == "synapse.app.homeserver":
# We need to special case all of this to pick up options that may
# be set in the main config file or in this worker config file.
worker_pidfile = (
worker_config.get("pid_file")
or pidfile
)
worker_cache_factor = worker_config.get("synctl_cache_factor") or cache_factor
daemonize = worker_config.get("daemonize") or config.get("daemonize")
assert daemonize, "Main process must have daemonize set to true"
# The master process doesn't support using worker_* config.
for key in worker_config:
if key == "worker_app": # But we allow worker_app
continue
assert not key.startswith("worker_"), \
"Main process cannot use worker_* config"
else:
worker_pidfile = worker_config["worker_pid_file"]
worker_daemonize = worker_config["worker_daemonize"]
assert worker_daemonize, "In config %r: expected '%s' to be True" % (
worker_configfile, "worker_daemonize")
worker_cache_factor = worker_config.get("synctl_cache_factor")
workers.append(Worker(
worker_app, worker_configfile, worker_pidfile, worker_cache_factor,
))
action = options.action
if action == "stop" or action == "restart":
for worker in workers:
stop(worker.pidfile, worker.app)
if start_stop_synapse:
stop(pidfile, "synapse.app.homeserver")
# Wait for synapse to actually shutdown before starting it again
if action == "restart":
running_pids = []
if start_stop_synapse and os.path.exists(pidfile):
running_pids.append(int(open(pidfile).read()))
for worker in workers:
if os.path.exists(worker.pidfile):
running_pids.append(int(open(worker.pidfile).read()))
if len(running_pids) > 0:
write("Waiting for process to exit before restarting...")
for running_pid in running_pids:
while pid_running(running_pid):
time.sleep(0.2)
write("All processes exited; now restarting...")
if action == "start" or action == "restart":
if start_stop_synapse:
# Check if synapse is already running
if os.path.exists(pidfile) and pid_running(int(open(pidfile).read())):
abort("synapse.app.homeserver already running")
start(configfile)
for worker in workers:
if worker.cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(worker.cache_factor)
start_worker(worker.app, configfile, worker.configfile)
if cache_factor:
os.environ["SYNAPSE_CACHE_FACTOR"] = str(cache_factor)
else:
os.environ.pop("SYNAPSE_CACHE_FACTOR", None)
if __name__ == "__main__":
main()


@@ -188,7 +188,7 @@ def start(config_options):
"Synapse user directory", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
@@ -229,7 +229,6 @@ def start(config_options):
def start():
ps.get_datastore().start_profiling()
ps.get_state_handler().start_caching()
reactor.callWhenRunning(start)


@@ -25,10 +25,10 @@ if __name__ == "__main__":
try:
config = HomeServerConfig.load_config("", sys.argv[3:])
except ConfigError as e:
sys.stderr.write("\n" + e.message + "\n")
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
print (getattr(config, key))
print(getattr(config, key))
sys.exit(0)
else:
sys.stderr.write("Unknown command %r\n" % (action,))


@@ -106,10 +106,7 @@ class Config(object):
@classmethod
def check_file(cls, file_path, config_name):
if file_path is None:
raise ConfigError(
"Missing config for %s."
% (config_name,)
)
raise ConfigError("Missing config for %s." % (config_name,))
try:
os.stat(file_path)
except OSError as e:
@@ -128,9 +125,7 @@ class Config(object):
if e.errno != errno.EEXIST:
raise
if not os.path.isdir(dir_path):
raise ConfigError(
"%s is not a directory" % (dir_path,)
)
raise ConfigError("%s is not a directory" % (dir_path,))
return dir_path
@classmethod
@@ -156,21 +151,20 @@ class Config(object):
return results
def generate_config(
self,
config_dir_path,
server_name,
is_generating_file,
report_stats=None,
self, config_dir_path, server_name, is_generating_file, report_stats=None
):
default_config = "# vim:ft=yaml\n"
default_config += "\n\n".join(dedent(conf) for conf in self.invoke_all(
"default_config",
config_dir_path=config_dir_path,
server_name=server_name,
is_generating_file=is_generating_file,
report_stats=report_stats,
))
default_config += "\n\n".join(
dedent(conf)
for conf in self.invoke_all(
"default_config",
config_dir_path=config_dir_path,
server_name=server_name,
is_generating_file=is_generating_file,
report_stats=report_stats,
)
)
config = yaml.load(default_config)
@@ -178,23 +172,22 @@ class Config(object):
@classmethod
def load_config(cls, description, argv):
config_parser = argparse.ArgumentParser(
description=description,
)
config_parser = argparse.ArgumentParser(description=description)
config_parser.add_argument(
"-c", "--config-path",
"-c",
"--config-path",
action="append",
metavar="CONFIG_FILE",
help="Specify config file. Can be given multiple times and"
" may specify directories containing *.yaml files."
" may specify directories containing *.yaml files.",
)
config_parser.add_argument(
"--keys-directory",
metavar="DIRECTORY",
help="Where files such as certs and signing keys are stored when"
" their location is given explicitly in the config."
" Defaults to the directory containing the last config file",
" their location is given explicitly in the config."
" Defaults to the directory containing the last config file",
)
config_args = config_parser.parse_args(argv)
@@ -203,9 +196,7 @@ class Config(object):
obj = cls()
obj.read_config_files(
config_files,
keys_directory=config_args.keys_directory,
generate_keys=False,
config_files, keys_directory=config_args.keys_directory, generate_keys=False
)
return obj
@@ -213,38 +204,38 @@ class Config(object):
def load_or_generate_config(cls, description, argv):
config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument(
"-c", "--config-path",
"-c",
"--config-path",
action="append",
metavar="CONFIG_FILE",
help="Specify config file. Can be given multiple times and"
" may specify directories containing *.yaml files."
" may specify directories containing *.yaml files.",
)
config_parser.add_argument(
"--generate-config",
action="store_true",
help="Generate a config file for the server name"
help="Generate a config file for the server name",
)
config_parser.add_argument(
"--report-stats",
action="store",
help="Whether the generated config reports anonymized usage statistics",
choices=["yes", "no"]
choices=["yes", "no"],
)
config_parser.add_argument(
"--generate-keys",
action="store_true",
help="Generate any missing key files then exit"
help="Generate any missing key files then exit",
)
config_parser.add_argument(
"--keys-directory",
metavar="DIRECTORY",
help="Used with 'generate-*' options to specify where files such as"
" certs and signing keys should be stored in, unless explicitly"
" specified in the config."
" certs and signing keys should be stored in, unless explicitly"
" specified in the config.",
)
config_parser.add_argument(
"-H", "--server-name",
help="The server name to generate a config file for"
"-H", "--server-name", help="The server name to generate a config file for"
)
config_args, remaining_args = config_parser.parse_known_args(argv)
@@ -257,8 +248,8 @@ class Config(object):
if config_args.generate_config:
if config_args.report_stats is None:
config_parser.error(
"Please specify either --report-stats=yes or --report-stats=no\n\n" +
MISSING_REPORT_STATS_SPIEL
"Please specify either --report-stats=yes or --report-stats=no\n\n"
+ MISSING_REPORT_STATS_SPIEL
)
if not config_files:
config_parser.error(
@@ -287,26 +278,32 @@ class Config(object):
config_dir_path=config_dir_path,
server_name=server_name,
report_stats=(config_args.report_stats == "yes"),
is_generating_file=True
is_generating_file=True,
)
obj.invoke_all("generate_files", config)
config_file.write(config_str)
print((
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
) % (config_path, server_name))
print(
(
"A config file has been generated in %r for server name"
" %r with corresponding SSL keys and self-signed"
" certificates. Please review this file and customise it"
" to your needs."
)
% (config_path, server_name)
)
print(
"If this server name is incorrect, you will need to"
" regenerate the SSL certificates"
)
return
else:
print((
"Config file %r already exists. Generating any missing key"
" files."
) % (config_path,))
print(
(
"Config file %r already exists. Generating any missing key"
" files."
)
% (config_path,)
)
generate_keys = True
parser = argparse.ArgumentParser(
@@ -338,8 +335,7 @@ class Config(object):
return obj
def read_config_files(self, config_files, keys_directory=None,
generate_keys=False):
def read_config_files(self, config_files, keys_directory=None, generate_keys=False):
if not keys_directory:
keys_directory = os.path.dirname(config_files[-1])
@@ -364,8 +360,9 @@ class Config(object):
if "report_stats" not in config:
raise ConfigError(
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS + "\n" +
MISSING_REPORT_STATS_SPIEL
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS
+ "\n"
+ MISSING_REPORT_STATS_SPIEL
)
if generate_keys:
@@ -399,16 +396,16 @@ def find_config_files(search_paths):
for entry in os.listdir(config_path):
entry_path = os.path.join(config_path, entry)
if not os.path.isfile(entry_path):
print (
"Found subdirectory in config directory: %r. IGNORING."
) % (entry_path, )
err = "Found subdirectory in config directory: %r. IGNORING."
print(err % (entry_path,))
continue
if not entry.endswith(".yaml"):
print (
"Found file in config directory that does not"
" end in '.yaml': %r. IGNORING."
) % (entry_path, )
err = (
"Found file in config directory that does not end in "
"'.yaml': %r. IGNORING."
)
print(err % (entry_path,))
continue
files.append(entry_path)


@@ -13,10 +13,18 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
# This file can't be called email.py because if it is, we cannot:
import email.utils
import logging
import os
from ._base import Config
import pkg_resources
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
class EmailConfig(Config):
@@ -38,7 +46,6 @@ class EmailConfig(Config):
"smtp_host",
"smtp_port",
"notif_from",
"template_dir",
"notif_template_html",
"notif_template_text",
]
@@ -62,9 +69,26 @@ class EmailConfig(Config):
self.email_smtp_host = email_config["smtp_host"]
self.email_smtp_port = email_config["smtp_port"]
self.email_notif_from = email_config["notif_from"]
self.email_template_dir = email_config["template_dir"]
self.email_notif_template_html = email_config["notif_template_html"]
self.email_notif_template_text = email_config["notif_template_text"]
template_dir = email_config.get("template_dir")
# we need an absolute path, because we change directory after starting (and
# we don't yet know what auxiliary templates like mail.css we will need).
# (Note that loading as package_resources with jinja.PackageLoader doesn't
# work for the same reason.)
if not template_dir:
template_dir = pkg_resources.resource_filename(
'synapse', 'res/templates'
)
template_dir = os.path.abspath(template_dir)
for f in self.email_notif_template_text, self.email_notif_template_html:
p = os.path.join(template_dir, f)
if not os.path.isfile(p):
raise ConfigError("Unable to find email template file %s" % (p, ))
self.email_template_dir = template_dir
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
@@ -113,7 +137,9 @@ class EmailConfig(Config):
# require_transport_security: False
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
# template_dir: res/templates
# # if template_dir is unset, uses the example templates that are part of
# # the Synapse distribution.
# #template_dir: res/templates
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
# notif_for_new_users: True


@@ -31,6 +31,7 @@ from .push import PushConfig
from .ratelimiting import RatelimitConfig
from .registration import RegistrationConfig
from .repository import ContentRepositoryConfig
from .room_directory import RoomDirectoryConfig
from .saml2 import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
@@ -49,7 +50,7 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,
ConsentConfig,
ServerNoticesConfig,
ServerNoticesConfig, RoomDirectoryConfig,
):
pass


@@ -227,7 +227,22 @@ def setup_logging(config, use_worker_options=False):
#
# However this may not be too much of a problem if we are just writing to a file.
observer = STDLibLogObserver()
def _log(event):
if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
if event["log_text"].startswith("(UDP Port "):
return
if event["log_text"].startswith("Timing out client"):
return
return observer(event)
globalLogBeginner.beginLoggingTo(
[observer],
[_log],
redirectStandardIO=not config.no_redirect_stdio,
)


@@ -15,10 +15,10 @@
from distutils.util import strtobool
from synapse.config._base import Config, ConfigError
from synapse.types import RoomAlias
from synapse.util.stringutils import random_string_with_symbols
from ._base import Config
class RegistrationConfig(Config):
@@ -44,6 +44,10 @@ class RegistrationConfig(Config):
)
self.auto_join_rooms = config.get("auto_join_rooms", [])
for room_alias in self.auto_join_rooms:
if not RoomAlias.is_valid(room_alias):
raise ConfigError('Invalid auto_join_rooms entry %s' % (room_alias,))
self.autocreate_auto_join_rooms = config.get("autocreate_auto_join_rooms", True)
def default_config(self, **kwargs):
registration_shared_secret = random_string_with_symbols(50)
@@ -98,6 +102,13 @@ class RegistrationConfig(Config):
# to these rooms
#auto_join_rooms:
# - "#example:example.com"
# Where auto_join_rooms is specified, setting this flag ensures that
# the rooms exist by creating them when the first user registers on the
# homeserver.
# Setting this to false means that if the rooms are not manually created,
# users cannot be auto-joined, since the rooms do not exist.
autocreate_auto_join_rooms: true
""" % locals()
def add_arguments(self, parser):


@@ -178,7 +178,7 @@ class ContentRepositoryConfig(Config):
def default_config(self, **kwargs):
media_store = self.default_path("media_store")
uploads_path = self.default_path("uploads")
return """
return r"""
# Directory where uploaded images and attachments are stored.
media_store_path: "%(media_store)s"


@@ -0,0 +1,102 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.util import glob_to_regex
from ._base import Config, ConfigError
class RoomDirectoryConfig(Config):
def read_config(self, config):
alias_creation_rules = config["alias_creation_rules"]
self._alias_creation_rules = [
_AliasRule(rule)
for rule in alias_creation_rules
]
def default_config(self, config_dir_path, server_name, **kwargs):
return """
# The `alias_creation_rules` option controls who is allowed to create
# aliases on this server.
#
# The format of this option is a list of rules that contain globs that
# match against user_id and the new alias (fully qualified with server
# name). The action in the first rule that matches is taken, which can
# currently either be "allow" or "deny".
#
# If no rules match, the request is denied.
alias_creation_rules:
- user_id: "*"
alias: "*"
action: allow
"""
def is_alias_creation_allowed(self, user_id, alias):
"""Checks if the given user is allowed to create the given alias
Args:
user_id (str)
alias (str)
Returns:
boolean: True if the user is allowed to create the alias
"""
for rule in self._alias_creation_rules:
if rule.matches(user_id, alias):
return rule.action == "allow"
return False
class _AliasRule(object):
def __init__(self, rule):
action = rule["action"]
user_id = rule["user_id"]
alias = rule["alias"]
if action in ("allow", "deny"):
self.action = action
else:
raise ConfigError(
"alias_creation_rules rules can only have action of 'allow'"
" or 'deny'"
)
try:
self._user_id_regex = glob_to_regex(user_id)
self._alias_regex = glob_to_regex(alias)
except Exception as e:
raise ConfigError("Failed to parse glob into regex: %s", e)
def matches(self, user_id, alias):
"""Tests if this rule matches the given user_id and alias.
Args:
user_id (str)
alias (str)
Returns:
boolean
"""
# Note: The regexes are anchored at both ends
if not self._user_id_regex.match(user_id):
return False
if not self._alias_regex.match(alias):
return False
return True
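Putting the two classes above together, a rough usage sketch; the rule set here is invented for illustration and assumes RoomDirectoryConfig is importable as defined above:

# First matching rule wins: the admin may create any alias, everyone else none.
cfg = RoomDirectoryConfig()
cfg.read_config({
    "alias_creation_rules": [
        {"user_id": "@admin:example.com", "alias": "*", "action": "allow"},
        {"user_id": "*", "alias": "*", "action": "deny"},
    ]
})

assert cfg.is_alias_creation_allowed("@admin:example.com", "#foo:example.com")
assert not cfg.is_alias_creation_allowed("@bob:example.com", "#foo:example.com")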


@@ -55,7 +55,7 @@ def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
raise IOError("Cannot get key for %r" % server_name)
except (ConnectError, DomainError) as e:
logger.warn("Error getting key for %r: %s", server_name, e)
except Exception as e:
except Exception:
logger.exception("Error getting key for %r", server_name)
raise IOError("Cannot get key for %r" % server_name)

Some files were not shown because too many files have changed in this diff.