
Compare commits


325 Commits

Author SHA1 Message Date
Andrew Morgan
8b0d5b171e Make changelog slightly more readable 2019-07-22 13:15:35 +01:00
Andrew Morgan
54437c48ca 1.2.0rc1 2019-07-22 12:59:04 +01:00
Jorik Schellekens
826e6ec3bd Opentracing Documentation (#5703)
* Opentracing survival guide

* Update decorator names in doc

* Doc cleanup

These are all alterations made as a result of review comments on #5703;
mostly typo fixes and clarifications. The most interesting
changes are:

- Split developer and user docs into two sections
- Add a high level description of OpenTracing

* newsfile

* Move contributor-specific info to docstring.

* Sample config.

* Trailing whitespace.

* Update 5703.misc

* Apply suggestions from code review

Mostly just rewording parts of the docs for clarity.

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-22 11:15:21 +01:00
Neil Johnson
a3e40bd5b4 towncrier 2019-07-18 16:02:09 +01:00
Neil Johnson
cfc00068bd enable aggregations support by default 2019-07-18 15:56:59 +01:00
Richard van der Hoff
82345bc09a Clean up opentracing configuration options (#5712)
Clean up config settings and dead code.

This is mostly about cleaning up the config format, to bring it into line with our conventions. In particular:
* There should be a blank line after `## Section ##` headings
 * There should be a blank line between each config setting
 * There should be a `#`-only line between a comment and the setting it describes
 * We don't really do the `#  #` style commenting-out of whole sections if we can help it
 * rename `tracer_enabled` to `enabled`

While we're here, do more config parsing upfront, which makes it easier to use
later on.

Also removes redundant code from LogContextScopeManager.

Also changes the changelog fragment to a `feature` - it's exciting!
2019-07-18 15:06:54 +01:00
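For illustration, a hypothetical config fragment following the conventions listed above (not the actual Synapse sample config; the setting names come from the commit message):

```yaml
## Opentracing ##

# Whether to enable the tracer. Note the setting is now `enabled`, not
# `tracer_enabled`.
#
#enabled: true
```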
Amber Brown
7ad1d76356 Support Prometheus_client 0.4.0+ (#5636) 2019-07-18 23:57:15 +10:00
Andrew Morgan
b2a382efdb Remove the ability to query relations when the original event was redacted. (#5629)
Fixes #5594

Forbid viewing relations on an event once it has been redacted.
2019-07-18 14:41:42 +01:00
Richard van der Hoff
fa8271c5ac Convert synapse.federation.transport.server to async (#5689)
* Convert BaseFederationServlet._wrap to async

Empirically, this fixes some lost stacktraces. It should be safe because the
wrapped function is called from JsonResource._async_render, which is already
async.

* Convert the rest of synapse.federation.transport.server to async

We may as well do the whole file while we're here.

* changelog

* flake8
2019-07-18 11:46:47 +01:00
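As a rough sketch of the kind of conversion described (simplified and hypothetical, not the real servlet code): an inlineCallbacks-style wrapper becomes a native coroutine, which keeps stack traces intact across awaits.

```python
import asyncio

class Servlet:
    # Before (Twisted style, stack traces easily lost):
    #
    #     @defer.inlineCallbacks
    #     def _wrap(self, request):
    #         yield self.authenticate_request(request)
    #         response = yield self.func(request)
    #         defer.returnValue(response)

    # After (native coroutine):
    async def _wrap(self, request):
        await self.authenticate_request(request)
        return await self.func(request)

    async def authenticate_request(self, request):
        return "other.example.com"  # stand-in for the real federation auth

    async def func(self, request):
        return 200, {"ok": True}

print(asyncio.run(Servlet()._wrap(request=None)))
```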
Richard van der Hoff
9c70a02a9c Ignore redactions of m.room.create events (#5701) 2019-07-17 19:08:02 +01:00
Richard van der Hoff
1def298119 Improve Depends specs in debian package. (#5675)
This is basically a contrived way of adding a `Recommends` on `libpq5`, to fix #5653.

The way this is supposed to happen in debhelper is to run
`dh_shlibdeps`, which in turn runs `dpkg-shlibdeps`, which spits things out
into `debian/<package>.substvars` whence they can later be included by
`control`.

Previously, we had disabled `dh_shlibdeps`, mostly because `dpkg-shlibdeps`
gets confused about PIL's interdependent objects, but that's not really the
right thing to do and there is another way to work around that.

Since we don't always use postgres, we don't necessarily want a hard Depends on
libpq5, so I've actually ended up adding an explicit invocation of
`dpkg-shlibdeps` for `psycopg2`.

I've also updated the build-depends list for the package, which was missing a
couple of entries.
2019-07-17 17:47:07 +01:00
Richard van der Hoff
2091c91fde More refactoring in get_events_as_list (#5707)
We can now use `_get_events_from_cache_or_db` rather than going right back to
the database, which means that (a) we can benefit from caching, and (b) it
opens the way forward to more extensive checks on the original event.

We now always require the original event to exist before we will serve up a
redaction.
2019-07-17 17:34:13 +01:00
Richard van der Hoff
375162b3c3 Fix redaction authentication (#5700)
Ensures that redactions are correctly authenticated for recent room versions.

There are a few things going on here:

 * `_fetch_event_rows` is updated to return a dict rather than a list of rows.

 * Rather than returning multiple copies of an event which was redacted
   multiple times, it returns the redactions as a list within the dict.

 * It also returns the actual rejection reason, rather than merely the fact
   that it was rejected, so that we don't have to query the table again in
   `_get_event_from_row`.

 * The redaction handling is factored out of `_get_event_from_row`, and now
   checks if any of the redactions are valid.
2019-07-17 16:52:02 +01:00
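To make the new return shape concrete, here is a hedged sketch of the before and after (field names are illustrative, not the exact storage schema):

```python
# Before: a list of rows, with the event duplicated once per redaction, and
# only a boolean "rejected" flag.
rows_before = [
    {"event_id": "$event1", "redacted_by": "$redaction1", "rejected": True},
    {"event_id": "$event1", "redacted_by": "$redaction2", "rejected": True},
]

# After: a dict keyed by event_id, with all redactions gathered into a list
# and the actual rejection reason carried along, so no second query is
# needed in _get_event_from_row.
rows_after = {
    "$event1": {
        "event_id": "$event1",
        "redactions": ["$redaction1", "$redaction2"],
        "rejected_reason": "auth_error",
    },
}
```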
Richard van der Hoff
65c5592b8e Refactor get_events_as_list (#5699)
A couple of changes here:

* get rid of a redundant `allow_rejected` condition - we should already have filtered out any rejected
  events before we get to that point in the code, and the redundancy is confusing. Instead, let's stick in
  an assertion just to make double-sure we aren't leaking rejected events by mistake.

* factor out a `_get_events_from_cache_or_db` method, which is going to be important for a 
  forthcoming fix to redactions.
2019-07-17 16:49:19 +01:00
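A rough sketch of the refactor described, under the assumption that events are plain dicts and the cache is a dict (the real storage layer is considerably more involved):

```python
import asyncio

class StubDb:
    def __init__(self, events):
        self._events = events

    async def fetch_events(self, event_ids):
        return {eid: self._events[eid] for eid in event_ids if eid in self._events}

async def _get_events_from_cache_or_db(event_ids, cache, db):
    # (a) benefit from caching
    found = {eid: cache[eid] for eid in event_ids if eid in cache}
    missing = [eid for eid in event_ids if eid not in found]
    if missing:
        # (b) fall back to the database for the rest
        found.update(await db.fetch_events(missing))
    return found

async def get_events_as_list(event_ids, cache, db):
    event_map = await _get_events_from_cache_or_db(event_ids, cache, db)
    events = [event_map[eid] for eid in event_ids if eid in event_map]
    # rejected events should already have been filtered out upstream;
    # assert rather than silently re-filter, so a leak shows up loudly
    assert all(ev.get("rejected_reason") is None for ev in events)
    return events

cache = {"$a": {"event_id": "$a", "rejected_reason": None}}
db = StubDb({"$b": {"event_id": "$b", "rejected_reason": None}})
print(asyncio.run(get_events_as_list(["$a", "$b"], cache, db)))
```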
Erik Johnston
c831c5b2bb Merge pull request #5597 from matrix-org/erikj/admin_api_cmd
Create basic admin command app
2019-07-16 15:59:36 +01:00
Erik Johnston
5ed7853bb0 Remove pointless description 2019-07-16 11:45:57 +01:00
Erik Johnston
f44354e17f Clean up arg name and remove lying comment 2019-07-16 11:39:40 +01:00
Erik Johnston
d0d479c1af Fix typo in synapse/app/admin_cmd.py
Co-Authored-By: Aaron Raimist <aaron@raim.ist>
2019-07-16 09:52:56 +01:00
Erik Johnston
03cc8c4b5d Fix invoking add_argument from homeserver.py 2019-07-15 14:25:05 +01:00
Erik Johnston
eca4f5ac73 s/exfiltrate_user_data/export_user_data/ 2019-07-15 14:17:28 +01:00
Erik Johnston
1b2067f53d Add FileExfiltrationWriter 2019-07-15 14:15:22 +01:00
Erik Johnston
e8c53b07f2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/admin_api_cmd 2019-07-15 14:13:22 +01:00
Erik Johnston
c8f35d8d38 Use set_defaults(func=) style 2019-07-15 14:09:35 +01:00
Erik Johnston
fdefb9e29a Move creation of ArgumentParser to caller 2019-07-15 14:09:35 +01:00
Erik Johnston
37b524f971 Fix up comments 2019-07-15 14:09:35 +01:00
Erik Johnston
823e13ddf4 Change add_arguments to be a static method 2019-07-15 14:09:33 +01:00
Andrew Morgan
18c516698e Return a different error from Invalid Password when a user is deactivated (#5674)
Return `This account has been deactivated` instead of `Invalid password` when a user is deactivated.
2019-07-15 11:45:29 +01:00
Erik Johnston
d86321300a Merge pull request #5589 from matrix-org/erikj/admin_exfiltrate_data
Add basic function to get all data for a user out of synapse
2019-07-15 10:04:02 +01:00
Richard van der Hoff
d336b51331 Add a docker type to the towncrier configuration (#5673)
... and certain other changelog-related fixes
2019-07-12 17:27:07 +01:00
Richard van der Hoff
5f158ec039 Implement access token expiry (#5660)
Record how long an access token is valid for, and raise a soft-logout once it
expires.
2019-07-12 17:26:02 +01:00
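A toy sketch of the behaviour described (hypothetical names; Synapse's real auth and storage code differ): record a validity period when issuing the token, and raise a soft-logout error once it has lapsed.

```python
import time

class InvalidClientTokenError(Exception):
    def __init__(self, msg, soft_logout=False):
        super().__init__(msg)
        self.soft_logout = soft_logout

tokens = {}  # token -> expiry timestamp in ms, or None for "never expires"

def issue_token(token, validity_period_ms=None):
    now_ms = int(time.time() * 1000)
    tokens[token] = now_ms + validity_period_ms if validity_period_ms else None

def check_token(token):
    expiry = tokens[token]
    if expiry is not None and int(time.time() * 1000) > expiry:
        # soft logout: the client must reauthenticate, but shouldn't throw
        # away local state such as E2E keys
        raise InvalidClientTokenError("Access token has expired", soft_logout=True)

issue_token("abc", validity_period_ms=3600 * 1000)
check_token("abc")  # fine for the next hour
```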
Erik Johnston
db0a50bc40 Fixup docstrings 2019-07-12 16:59:59 +01:00
Andrew Morgan
24aa0e0a5b fix typo: backgroud -> background 2019-07-12 15:29:40 +01:00
Richard van der Hoff
4c17a87606 fix changelog name 2019-07-12 11:47:24 +01:00
Ulrik Günther
d445b3ae57 Update reverse_proxy.rst (#5397)
Updates reverse_proxy.rst with information about nginx's URI normalisation.
2019-07-12 11:46:18 +01:00
Slavi Pantaleev
59f15309ca Add missing space in default logging file format generated by the Docker image (#5620)
This adds a missing space, without which log lines appear uglier.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2019-07-12 11:43:42 +01:00
Slavi Pantaleev
f369164761 Upgrade Alpine Linux used in the Docker image (3.8 -> 3.10) (#5619)
Alpine Linux 3.8 is still supported, but it seems like
it's quite outdated now.

While Python should be the same on both, all other libraries, etc.,
are much newer in Alpine 3.9 and 3.10.

Signed-off-by: Slavi Pantaleev <slavi@devture.com>
2019-07-12 11:38:25 +01:00
Richard van der Hoff
6bb0357c94 Add a mechanism for per-test configs (#5657)
It's useful to be able to tweak the homeserver config to be used for each
test. This PR adds a mechanism to do so.
2019-07-12 10:16:23 +01:00
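One way such a mechanism can look, as a hedged sketch (the decorator name and plumbing here are assumptions, not necessarily the PR's actual interface): a decorator stashes the overrides on the test method, and the harness merges them into the default homeserver config.

```python
def override_config(extra_config):
    def decorator(func):
        func._extra_config = extra_config
        return func
    return decorator

class RegistrationTestCase:
    @override_config({"enable_registration": False})
    def test_registration_disabled(self):
        pass

def config_for(test_method, default_config):
    config = dict(default_config)
    config.update(getattr(test_method, "_extra_config", {}))
    return config

tc = RegistrationTestCase()
print(config_for(tc.test_registration_disabled, {"enable_registration": True}))
# -> {'enable_registration': False}
```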
Amber Brown
a83577d64f Use /src for checking out synapse during sytests (#5664) 2019-07-11 23:43:41 +10:00
Lrizika
39e9839a04 Improved docs on setting up Postgresql (#5661)
Added a note that synapse_user needs a database it can access before it can authenticate.
Noted that you'll need to enable password auth, with a link to the pg_hba.conf docs.
2019-07-11 14:31:36 +01:00
Andrew Morgan
78a1cd36b5 small typo fix (#5655) 2019-07-11 13:33:23 +01:00
Richard van der Hoff
0a4001eba1 Clean up exception handling for access_tokens (#5656)
First of all, let's get rid of `TOKEN_NOT_FOUND_HTTP_STATUS`. It was a hack we
did at one point when it was possible to return either a 403 or a 401 if the
creds were missing. We always return a 401 in these cases now (thankfully), so
it's not needed.

Let's also stop abusing `AuthError` for these cases. Honestly they have nothing
that relates them to the other places that `AuthError` is used, other than the
fact that they are loosely under the 'Auth' banner. It makes no sense for them
to share exception classes.

Instead, let's add a couple of new exception classes: `InvalidClientTokenError`
and `MissingClientTokenError`, for the `M_UNKNOWN_TOKEN` and `M_MISSING_TOKEN`
cases respectively - and an `InvalidClientCredentialsError` base class for the
two of them.
2019-07-11 11:06:23 +01:00
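The hierarchy described, in miniature (class names and error codes are from the commit message; the bodies are illustrative rather than the real synapse.api.errors code):

```python
class SynapseError(Exception):
    def __init__(self, code, msg, errcode):
        super().__init__(msg)
        self.code = code
        self.errcode = errcode

class InvalidClientCredentialsError(SynapseError):
    """Base class for both cases: always a 401, never a 403."""
    def __init__(self, msg, errcode):
        super().__init__(401, msg, errcode)

class MissingClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Missing access token"):
        super().__init__(msg, "M_MISSING_TOKEN")

class InvalidClientTokenError(InvalidClientCredentialsError):
    def __init__(self, msg="Invalid access token"):
        super().__init__(msg, "M_UNKNOWN_TOKEN")
```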
Jorik Schellekens
38a6d3eea7 Add basic opentracing support (#5544)
* Configure and initialise tracer

Includes config options for the tracer and sets up JaegerClient.

* Scope manager using LogContexts

We piggy-back our tracer scopes by using log contexts. The current log
context gives us the current scope. If a new scope is created, we create
a stack of scopes in the context.

* jaeger is a dependency now

* Carrier inject and extraction for Twisted Headers

* Trace federation requests on the way in and out.

The span is created in _started_processing and closed in
_finished_processing because we need a meaningful log context.

* Create logcontext for new scope.

Instead of having a stack of scopes in a logcontext we create a new
context for a new scope if the current logcontext already has a scope.

* Remove scope from logcontext if logcontext is top level

* Disable tracer if not configured

* typo

* Remove dependence on jaeger internals

* bools

* Set service name

* Explicitly state that the tracer is disabled

* Black is the new black

* Newsfile

* Code style

* Use the new config setup.

* Generate config.

* Copyright

* Rename config to opentracing

* Remove user whitelisting

* Empty whitelist by default

* Use ConfigError instead of RuntimeError

* Use isinstance

* Use tag constants for opentracing.

* Remove debug comment; no need to explicitly record the error

* Two errors a "s(c)entry"

* Docstrings!

* Remove debugging brainslip

* Homeserver whitelisting

* Better opentracing config comment

* linting

* Include worker name in service_name

* Make opentracing an optional dependency

* Neater config retrieval

* Clean up dummy tags

* Instantiate tracing as object instead of global class

* Include opentracing as a homeserver member.

* Thread opentracing to the request level

* Reference opentracing through hs

* Instantiate dummy opentracing for tests.

* About to revert, just keeping the unfinished changes just in case

* Revert back to global state, commit number:

9ce4a3d9067bf9889b86c360c05ac88618b85c4f

* Use class level methods in tracerutils

* Start and stop requests spans in a place where we
have access to the authenticated entity

* Seen it, isort it

* Make sure to close the active span.

* I'm getting black and blue from this.

* Logger formatting

Co-Authored-By: Erik Johnston <erik@matrix.org>

* Outdated comment

* Import opentracing at the top

* Return a contextmanager

* Start tracing client requests from the servlet

* Return noop context manager if not tracing

* Explicitly say that these are federation requests

* Include servlet name in client requests

* Use context manager

* Move opentracing to logging/

* Seen it, isort it again!

* Ignore twisted return exceptions on context exit

* Escape the scope

* Scopes should be entered to make them useful.

* Nicer decorator names

* Just one init, init?

* Don't need to close something that isn't open

* Docs make you smarter
2019-07-11 10:36:03 +01:00
Richard van der Hoff
1890cfcf82 Inline issue_access_token (#5659)
this is only used in one place, so it's clearer if we inline it and reduce the
API surface.

Also, fixes a buglet where we would create an access token even if we were
about to block the user (we would never return the AT, so the user could never
use it, but it was still created and added to the db.)
2019-07-11 04:10:07 +10:00
Brendan Abolivier
8ab3444fdf Merge pull request #5658 from matrix-org/babolivier/is-json
Send 3PID bind requests as JSON data
2019-07-10 17:01:26 +01:00
Richard van der Hoff
953dbb7980 Remove access-token support from RegistrationStore.register (#5642)
The 'token' param is no longer used anywhere except the tests, so let's kill
that off too.
2019-07-10 16:26:49 +01:00
Brendan Abolivier
b2a2e96ea6 Typo 2019-07-10 15:56:21 +01:00
Brendan Abolivier
351d9bd317 Rename changelog file 2019-07-10 15:48:50 +01:00
Brendan Abolivier
f77e997619 Send 3PID bind requests as JSON data 2019-07-10 15:46:42 +01:00
Andrew Morgan
f281714583 Don't bundle aggregations when retrieving the original event (#5654)
A fix for PR #5626, which returned the original event content as part of a call to /relations.

The only problem was that we were attempting to aggregate the relations on top of it when we did so. We now set bundle_aggregations to False in the get_event call.

We also do this when pulling the relation events as well, because edits of edits are not something we'd like to support here.
2019-07-10 14:43:11 +01:00
Andrew Morgan
3dd61d12cd Add a linting script (#5627)
Add a dev script to cover all the different linting steps.
2019-07-10 14:03:18 +01:00
Bruno Windels
4d122d295c Correct pep517 flag in readme (#5651) 2019-07-10 13:55:24 +01:00
Brendan Abolivier
65434da75d Merge pull request #5638 from matrix-org/babolivier/invite-json
Use JSON when querying the IS's /store-invite endpoint
2019-07-09 18:48:38 +01:00
Hubert Chathi
7b3bc755a3 remove unused and unnecessary check for FederationDeniedError (#5645)
FederationDeniedError is a subclass of SynapseError, which is a subclass of
CodeMessageException, so if e is a FederationDeniedError, then this check for
FederationDeniedError will never be reached since it will be caught by the
check for CodeMessageException above.  The check for CodeMessageException does
almost the same thing as this check (since FederationDeniedError initialises
with code=403 and msg="Federation denied with %s."), so may as well just keep
allowing it to handle this case.
2019-07-09 18:37:39 +01:00
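The shadowing described above, reduced to a runnable miniature (the real hierarchy has SynapseError in between, omitted here for brevity):

```python
class CodeMessageException(Exception):
    pass

class FederationDeniedError(CodeMessageException):
    pass

try:
    raise FederationDeniedError()
except CodeMessageException:
    print("always taken: a FederationDeniedError is a CodeMessageException")
except FederationDeniedError:
    print("never reached, hence the check was dead code")
```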
Andrew Morgan
d88421ab03 Include the original event in /relations (#5626)
When asking for the relations of an event, include the original event in the response. This will mostly be used for efficiently showing edit history, but could be useful in other circumstances.
2019-07-09 13:43:08 +01:00
Brendan Abolivier
af67c7c1de Merge pull request #5644 from matrix-org/babolivier/profile-allow-self
Allow newly-registered users to look up their own profiles
2019-07-09 10:25:40 +01:00
Richard van der Hoff
824707383b Remove access-token support from RegistrationHandler.register (#5641)
Nothing uses this now, so we can remove the dead code, and clean up the
API.

Since we're changing the shape of the return value anyway, we take the
opportunity to give the method a better name.
2019-07-08 19:01:08 +01:00
Brendan Abolivier
73cb716b3c Lint 2019-07-08 17:44:20 +01:00
Brendan Abolivier
5e01e9ac19 Add test case 2019-07-08 17:41:16 +01:00
Brendan Abolivier
f3615a8aa5 Changelog 2019-07-08 17:31:58 +01:00
Brendan Abolivier
7556851665 Allow newly-registered users to look up their own profiles
When a user creates an account and the 'require_auth_for_profile_requests' config flag is set, a client that performed the registration and then tries to look up the newly-created profile will be denied, because the user doesn't share a room with themselves yet.
2019-07-08 17:31:00 +01:00
Richard van der Hoff
43d175d17a Unblacklist some user_directory sytests (#5637)
Fixes https://github.com/matrix-org/synapse/issues/2306
2019-07-09 02:15:17 +10:00
Richard van der Hoff
b70e080b59 Better logging for auto-join. (#5643)
It was pretty unclear what was going on, so I've added a couple of log lines.
2019-07-08 17:14:51 +01:00
Brendan Abolivier
57eacee4f4 Merge branch 'develop' into babolivier/invite-json 2019-07-08 15:49:23 +01:00
Brendan Abolivier
c142e5d16a Changelog 2019-07-08 15:28:38 +01:00
Richard van der Hoff
4b1f7febc7 Update ModuleApi to avoid register(generate_token=True) (#5640)
* Update ModuleApi to avoid register(generate_token=True)

This is the only place this is still used, so I'm trying to kill it off.

* changelog
2019-07-08 23:55:34 +10:00
Richard van der Hoff
f9e99f9534 Factor out some redundant code in the login impl (#5639)
* Factor out some redundant code in the login impl

Also fixes a redundant access_token which was generated during jwt login.

* changelog
2019-07-08 23:54:22 +10:00
Richard van der Hoff
1af2fcd492 Move get_or_create_user to test code (#5628)
This is only used in tests, so...
2019-07-08 23:52:26 +10:00
Brendan Abolivier
f05c7d62bc Lint 2019-07-08 14:29:27 +01:00
Brendan Abolivier
1a807dfe68 Use application/json when querying the IS's /store-invite endpoint 2019-07-08 14:19:39 +01:00
Andrew Morgan
589d43d9cd Add a few more common environment directory names to black exclusion (#5630)
* Add a few more common environment directory names to black exclusion

* Add changelog
2019-07-08 21:53:33 +10:00
J. Ryan Stinnett
9b1b79f3f5 Add default push rule to ignore reactions (#5623)
This adds a default push rule following the proposal in
[MSC2153](https://github.com/matrix-org/matrix-doc/pull/2153).

See also https://github.com/vector-im/riot-web/issues/10208
See also https://github.com/matrix-org/matrix-js-sdk/pull/976
2019-07-05 17:37:52 +01:00
Andrew Morgan
ad8b909ce9 Add origin_server_ts and sender fields to m.replace (#5613)
Riot team would like some extra fields as part of m.replace, so here you go.

Fixes: #5598
2019-07-05 17:20:02 +01:00
Richard van der Hoff
80cc82a445 Remove support for invite_3pid_guest. (#5625)
This has never been documented, and I'm not sure it's ever been used outside
sytest.

It's quite a lot of poorly-maintained code, so I'd like to get rid of it.

For now I haven't removed the database table; I suggest we leave that for a
future clearout.
2019-07-05 16:47:58 +01:00
Erik Johnston
b4f5416dd9 pep8 2019-07-05 14:41:29 +01:00
Erik Johnston
eadb13d2e9 Remove FileExfiltrationWriter 2019-07-05 14:15:00 +01:00
Erik Johnston
7f0d8e4288 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/admin_exfiltrate_data 2019-07-05 14:08:21 +01:00
Erik Johnston
9ccea16d45 Assume key existence. Update docstrings 2019-07-05 14:07:56 +01:00
Richard van der Hoff
a6a776f3d8 remove dead transaction persist code (#5622)
this hasn't done anything for years
2019-07-05 12:59:42 +01:00
Richard van der Hoff
9481707a52 Fixes to the federation rate limiter (#5621)
- Put the default window_size back to 1000ms (broken by #5181)
- Make the `rc_federation` config actually do something
- fix an off-by-one error in the 'concurrent' limit
- Avoid creating an unused `_PerHostRatelimiter` object for every single
  incoming request
2019-07-05 11:10:19 +01:00
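A simplified sketch of two of those fixes (hypothetical, far from the real Twisted-based limiter): create the per-host limiter lazily, and use >= so the concurrent limit isn't off by one.

```python
class FederationRateLimiter:
    def __init__(self, concurrent_limit=3, window_size_ms=1000):
        self.concurrent_limit = concurrent_limit
        self.window_size_ms = window_size_ms  # back to 1000ms by default
        self._per_host = {}

    def ratelimit(self, host):
        # lazy creation: no unused per-host limiter for every single request
        limiter = self._per_host.get(host)
        if limiter is None:
            limiter = self._per_host[host] = _PerHostRatelimiter(self)
        return limiter

class _PerHostRatelimiter:
    def __init__(self, parent):
        self.parent = parent
        self.current_processing = 0

    def start_request(self):
        # ">=" rather than ">": with a limit of 3, the 4th concurrent
        # request must wait
        if self.current_processing >= self.parent.concurrent_limit:
            raise RuntimeError("over the concurrent limit; queue the request")
        self.current_processing += 1

limiter = FederationRateLimiter(concurrent_limit=1)
limiter.ratelimit("remote.example.com").start_request()
```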
Andrew Morgan
0e5434264f Make errors about email password resets much clearer (#5616)
The runtime errors that dealt with local email password resets talked about config options that users may not even have in their config file yet (if upgrading). Instead, the cryptic errors are now replaced with hopefully much more helpful ones.
2019-07-05 10:44:12 +01:00
Amber Brown
1ee268d33d Improve the backwards compatibility re-exports of synapse.logging.context (#5617)
* Improve the backwards compatibility re-exports of synapse.logging.context.

* reexport logformatter too
2019-07-05 02:32:02 +10:00
Andrew Morgan
ee91ac179c Add a sytest blacklist file (#5611)
* Add a sytest blacklist file

* Add changelog

* Add blacklist to manifest
2019-07-05 01:24:13 +10:00
Erik Johnston
822a0f0435 Merge branch 'master' of github.com:matrix-org/synapse into develop 2019-07-04 14:00:27 +01:00
Erik Johnston
54283f3ed4 Update changelog 2019-07-04 11:55:07 +01:00
Erik Johnston
20332b278d 1.1.0 2019-07-04 11:44:09 +01:00
Erik Johnston
c061d4f237 Fixup from review comments. 2019-07-04 11:41:06 +01:00
Erik Johnston
f6608a8805 Merge pull request #5615 from matrix-org/anoa/fix_changelog_email_resets
Suggest people use a config file for Docker instead of env vars
2019-07-04 11:40:28 +01:00
Andrew Morgan
426854e7bc Suggest people use a config file for Docker instead of env vars 2019-07-04 11:25:49 +01:00
Amber Brown
463b072b12 Move logging utilities out of the side drawer of util/ and into logging/ (#5606) 2019-07-04 00:07:04 +10:00
Erik Johnston
d0b849c86d Apply comment fixups from code review
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-07-03 15:03:38 +01:00
Richard van der Hoff
cb8d568cf9 Fix 'utime went backwards' errors on daemonization. (#5609)
* Fix 'utime went backwards' errors on daemonization.

Fixes #5608

* remove spurious debug
2019-07-03 22:40:45 +10:00
Richard van der Hoff
463d5a8fde 1.1.0rc2 2019-07-03 10:51:47 +01:00
Richard van der Hoff
91753cae59 Fix a number of "Starting txn from sentinel context" warnings (#5605)
Fixes #5602, #5603
2019-07-03 09:31:27 +01:00
Andrew Morgan
c7b48bd42d Remove SMTP_* env var functionality from docker conf (#5596)
Removes any `SMTP_*` docker container environment variables from having any effect on the default config.

Fixes https://github.com/matrix-org/synapse/issues/5430
2019-07-03 07:14:48 +01:00
Amber Brown
0ee9076ffe Fix media repo breaking (#5593) 2019-07-02 19:01:28 +01:00
Erik Johnston
10fe904d88 Newsfile 2019-07-02 17:21:27 +01:00
Erik Johnston
9f3c0a8556 Add basic admin cmd app 2019-07-02 17:12:48 +01:00
Erik Johnston
65dd5543f6 Newsfile 2019-07-02 12:10:23 +01:00
Erik Johnston
8ee69f299c Add basic function to get all data for a user out of synapse 2019-07-02 12:09:04 +01:00
Richard van der Hoff
f8b52eb8c5 tweak changelog 2019-07-02 12:03:39 +01:00
Richard van der Hoff
7085e8c0fb Merge remote-tracking branch 'origin/master' into release-v1.1.0 2019-07-02 11:56:17 +01:00
Richard van der Hoff
a0fa4641c4 prepare v1.1.0rc1 2019-07-02 11:51:55 +01:00
Richard van der Hoff
6eecb6e500 Complete the SAML2 implementation (#5422)
* SAML2 Improvements and redirect stuff

Signed-off-by: Alexander Trost <galexrt@googlemail.com>

* Code cleanups and simplifications.

Also: share the saml client between redirect and response handlers.

* changelog

* Revert redundant changes to static js

* Move all the saml stuff out to a centralised handler

* Add support for tracking SAML2 sessions.

This allows us to correctly handle `allow_unsolicited: False`.

* update sample config

* cleanups

* update sample config

* rename BaseSSORedirectServlet for consistency

* Address review comments
2019-07-02 11:18:11 +01:00
Erik Johnston
c3863ad6bf Merge pull request #5587 from matrix-org/erikj/fix_synctl
Fix --no-daemonize flag for synctl
2019-07-02 11:17:55 +01:00
Erik Johnston
8134c49cad Newsfile 2019-07-02 10:36:04 +01:00
Amir Zarrinkafsh
de8077a164 Add ability to set timezone for Docker container (#5383)
Signed-off-by: Amir Zarrinkafsh <nightah@me.com>
2019-07-02 10:31:06 +01:00
PauRE
948488e115 Fix JWT login with new users (#5586)
Signed-off-by: Pau Rodriguez-Estivill <prodrigestivill@gmail.com>
2019-07-02 10:25:37 +01:00
Erik Johnston
9ceb4f0889 Fix --no-daemonize flag for synctl 2019-07-02 10:14:38 +01:00
Amber Brown
b4914681a5 fix async/await consentresource (#5585)
Fixes #5582
2019-07-01 23:33:52 +10:00
Richard van der Hoff
b4fd86a9b4 Merge branch 'develop' into rav/saml2_client 2019-07-01 14:21:03 +01:00
Richard van der Hoff
3bcb13edd0 Address review comments 2019-07-01 12:13:22 +01:00
Erik Johnston
04196a4dae Merge pull request #5507 from matrix-org/erikj/presence_sync_tighloop
Fix sync tightloop bug.
2019-07-01 11:43:10 +01:00
Erik Johnston
915280f1ed Fixup comment 2019-07-01 10:22:42 +01:00
Amber Brown
f40a7dc41f Make the http server handle coroutine-making REST servlets (#5475) 2019-06-29 17:06:55 +10:00
Brendan Abolivier
c7ff297dde Merge pull request #5576 from matrix-org/babolivier/3pid-invite-ratelimit
Don't update the ratelimiter before sending a 3PID invite
2019-06-28 17:43:48 +01:00
Brendan Abolivier
15d9fc31bd Only ratelimit when sending the email
If we do it the other way around, an event can arrive while or after the email is being sent, and the 3PID invite event will get ratelimited.
2019-06-28 16:04:05 +01:00
Brendan Abolivier
b339f6489f Changelog 2019-06-28 15:24:59 +01:00
Brendan Abolivier
01d0f8e701 Don't update the ratelimiter before sending a 3PID invite
This would cause emails to be sent, but Synapse to respond with a 429 when creating the event. The client would then retry, and with bad timing the same scenario would happen again. In my testing, a single invite ended up sending me 10 emails because of this.
2019-06-28 15:22:16 +01:00
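A toy sketch of the corrected ordering (the Ratelimiter API here is invented for illustration): check the limit first without recording anything, send the email, and only record the action once the send has actually happened.

```python
import time

class Ratelimiter:
    def __init__(self, min_interval_s):
        self.min_interval_s = min_interval_s
        self.last_action = {}

    def check(self, key):
        last = self.last_action.get(key)
        if last is not None and time.monotonic() - last < self.min_interval_s:
            raise RuntimeError("M_LIMIT_EXCEEDED (429)")

    def record(self, key):
        self.last_action[key] = time.monotonic()

def send_3pid_invite(limiter, user_id, send_email):
    limiter.check(user_id)   # may 429, but before any email goes out
    send_email()
    limiter.record(user_id)  # count the action only after the email is sent

limiter = Ratelimiter(min_interval_s=1.0)
send_3pid_invite(limiter, "@alice:example.com", lambda: print("email sent"))
```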
Amber Brown
071150ce19 Don't log GC 0s at INFO (#5557) 2019-06-28 21:45:33 +10:00
Amber Brown
be3b901ccd Update the TLS cipher string and provide configurability for TLS on outgoing federation (#5550) 2019-06-28 18:19:09 +10:00
Daniel Hoffend
9646a593ac Added possibility to disable local password authentication (#5092)
Signed-off-by: Daniel Hoffend <dh@dotlan.net>
2019-06-27 18:37:29 +01:00
Silke Hofstra
457b8e4c4d Include systemd-python in Debian package to allow logging to journal (#5261)
Signed-off-by: Silke Hofstra <silke@slxh.eu>
2019-06-27 18:26:41 +01:00
Andrew Morgan
c548dbc4b1 Make it clearer that the template dir is relative to synapse's root dir (#5543)
Helps address #5444
2019-06-27 18:20:17 +01:00
Erik Johnston
e79ec03165 Merge pull request #5559 from matrix-org/erikj/refactor_changed_devices
Refactor devices changed query to pull less from DB
2019-06-27 16:53:15 +01:00
Erik Johnston
729f5a4fb6 Review comments 2019-06-27 16:06:23 +01:00
Richard van der Hoff
555b6fa0d5 Docker image: Add a migrate_config mode (#5567)
... to help people escape env var hell
2019-06-27 13:52:40 +01:00
Richard van der Hoff
1ddc7b39c9 Docker image: open the non-TLS port by default. (#5568)
There's not much point in binding to localhost when it's in a docker container.
2019-06-27 13:50:10 +01:00
Richard van der Hoff
2f7ebc2a55 Deprecate the env var way of running the docker image (#5566)
This is mostly a documentation change, but also adds a default value for
SYNAPSE_CONFIG_PATH, so that running from the generated config is the default,
and will Just Work provided your config is in the right place.
2019-06-27 13:49:48 +01:00
PauRE
856ea04eb3 Fix JWT login (#5555)
* Fix JWT login with register

Signed-off-by: Pau Rodriguez-Estivill <prodrigestivill@gmail.com>

* Add pyjwt conditional dependency

Signed-off-by: Pau Rodriguez-Estivill <prodrigestivill@gmail.com>

* Added changelog file

Signed-off-by: Pau Rodriguez-Estivill <prodrigestivill@gmail.com>

* Improved changelog description

Signed-off-by: Pau Rodriguez-Estivill <prodrigestivill@gmail.com>
2019-06-27 12:02:41 +01:00
Richard van der Hoff
b4db70e167 Merge pull request #5565 from matrix-org/rav/docker/fix_log_config
Docker: generate our own log config
2019-06-27 11:19:37 +01:00
Richard van der Hoff
b2d2617c0d Reduce the amount of stuff we send in the docker context (#5564)
this makes docker builds a bit faster.
2019-06-27 11:18:51 +01:00
Richard van der Hoff
b1b8a24b63 Merge pull request #5563 from matrix-org/rav/docker/data_dir
Docker image: add support for SYNAPSE_DATA_DIR parameter
2019-06-27 11:17:44 +01:00
Richard van der Hoff
53faa6a429 Merge pull request #5562 from matrix-org/rav/docker/no-generate-keys
Docker: only run --generate-keys when generating config on-the-fly.
2019-06-27 11:17:21 +01:00
Richard van der Hoff
02aeb5a98a Merge pull request #5561 from matrix-org/rav/docker/refactor
Refactor the docker/start.py script
2019-06-27 11:16:37 +01:00
Richard van der Hoff
42a3619ef4 Merge pull request #5570 from almereyda/patch-2
Update purge_api README
2019-06-27 00:58:17 +01:00
Richard van der Hoff
a11475f396 fix changelog 2019-06-27 00:57:59 +01:00
Richard van der Hoff
47fa836abb Merge pull request #5313 from twrist/patch-1
Update HAProxy example rules
2019-06-27 00:53:48 +01:00
Richard van der Hoff
79b9d9076d rename BaseSSORedirectServlet for consistency 2019-06-27 00:46:57 +01:00
Richard van der Hoff
dde4118341 update sample config 2019-06-27 00:41:04 +01:00
Richard van der Hoff
28db0ae537 cleanups 2019-06-27 00:37:41 +01:00
Richard van der Hoff
a0acfcc73e update sample config 2019-06-26 23:56:28 +01:00
Richard van der Hoff
36f4953dec Add support for tracking SAML2 sessions.
This allows us to correctly handle `allow_unsolicited: False`.
2019-06-26 23:50:55 +01:00
jon r
536820e572 Create 5570.misc
Signed-off-by: Jon Richter <jon@allmende.io>
2019-06-27 00:13:34 +02:00
jon r
db56383b24 Update purge_api README
This points the reverse links at the intended location.
2019-06-27 00:08:18 +02:00
Richard van der Hoff
3705322103 Move all the saml stuff out to a centralised handler 2019-06-26 22:52:02 +01:00
Richard van der Hoff
0ade403f55 Revert redundant changes to static js 2019-06-26 22:46:23 +01:00
Richard van der Hoff
a4daa899ec Merge branch 'develop' into rav/saml2_client 2019-06-26 22:34:41 +01:00
Erik Johnston
82028d723b Move changelog 2019-06-26 19:39:49 +01:00
Erik Johnston
8624db3194 Refactor and comment sync device list code 2019-06-26 19:39:49 +01:00
Erik Johnston
f335e77d53 Use batch_iter and correct docstring 2019-06-26 19:39:46 +01:00
Erik Johnston
806a06daf2 Rename get_users_whose_devices_changed 2019-06-26 19:39:19 +01:00
Richard van der Hoff
b051cefc75 Merge pull request #5552 from matrix-org/rav/github_templates
Update github templates
2019-06-26 16:15:41 +01:00
Richard van der Hoff
befa116b31 changelog 2019-06-26 15:52:00 +01:00
Richard van der Hoff
28e30c6581 Docker: generate our own log config
When running under docker, we want to use docker's own logging stuff rather
than losing the logs somewhere on the container's filesystem, so let's use log
configs that spit logs out to stdout instead.
2019-06-26 15:48:38 +01:00
Richard van der Hoff
6347dc1bed Add support for SYNAPSE_CONFIG_DIR 2019-06-26 15:48:38 +01:00
Richard van der Hoff
a0f2921ccf changelog 2019-06-26 15:41:10 +01:00
Richard van der Hoff
7e433beb65 Docker image: add support for SYNAPSE_DATA_DIR parameter
Fixes #4830.
2019-06-26 15:38:08 +01:00
Richard van der Hoff
c58a6e6108 document supported env vars for docker 'generate' option 2019-06-26 15:38:08 +01:00
Richard van der Hoff
7c453472e4 changelog 2019-06-26 15:32:55 +01:00
Richard van der Hoff
a5fba9c27c Docker: only run --generate-keys when generating config on-the-fly.
We don't want to generate any missing configs when running from a precanned
config.

(There's a strong argument that we don't want to do this at all, since
generating a new signing key on each invocation sounds disastrous, but I don't
fancy unpicking that for now.)
2019-06-26 15:31:19 +01:00
Richard van der Hoff
a1732bbff9 improve logging for generate_config_from_template 2019-06-26 15:31:19 +01:00
Richard van der Hoff
043ab6da13 changelog 2019-06-26 15:28:28 +01:00
Richard van der Hoff
2d91988799 Improve docs on choosing server_name (#5558)
Fixes #4901
2019-06-26 15:15:04 +01:00
Erik Johnston
508c3ce3d7 Newsfile 2019-06-26 12:03:49 +01:00
Erik Johnston
a2f6d31a63 Refactor get_user_ids_changed to pull less from DB
When a client asks for users whose devices have changed since a token, we
used to pull *all* changed users from the database since that token, which
could easily be thousands of rows for old tokens.

This PR changes this to only check for changes for users the client is
actually interested in.

Fixes #5553
2019-06-26 12:03:44 +01:00
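In sketch form (hypothetical shapes; the real query is SQL against the device list stream), the change is to filter by the users the client shares rooms with, rather than scanning everything since the token:

```python
def users_whose_devices_changed(stream_rows, from_token, interested_users):
    # stream_rows: (stream_id, user_id) pairs; previously we would fetch
    # every row with stream_id > from_token and intersect afterwards
    interested = set(interested_users)
    return {
        user_id
        for stream_id, user_id in stream_rows
        if stream_id > from_token and user_id in interested
    }

rows = [(5, "@a:hs"), (8, "@b:hs"), (9, "@c:hs")]
print(users_whose_devices_changed(rows, 6, ["@b:hs"]))  # {'@b:hs'}
```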
Amber Brown
0e97284dfa Remove & changelog (#5548) 2019-06-26 04:36:34 +10:00
Andrew Morgan
3eb8c7b0eb Merge branch 'master' into develop
* master:
  Fix broken link in MSC1711 FAQ
  Update changelog to better explain password reset change (#5545)
2019-06-25 18:08:56 +01:00
Andrew Morgan
52a4a90d05 Merge branch 'release-v1.0.0'
* release-v1.0.0:
  Update changelog to better explain password reset change (#5545)
2019-06-25 18:08:41 +01:00
Richard van der Hoff
5375c3a9b8 isort 2019-06-25 15:30:19 +01:00
Richard van der Hoff
3f24e4dce7 Add a main() function 2019-06-25 15:30:19 +01:00
Richard van der Hoff
b1fddb7f69 Factor out a run_generate_config function 2019-06-25 15:30:19 +01:00
Richard van der Hoff
a52e1a3b6c Factor out "generate_config_from_template"
... and inline generate_secrets
2019-06-25 15:30:19 +01:00
Andrew Morgan
ef8c62758c Prevent multiple upgrades on the same room at once (#5051)
Closes #4583

Does slightly less than #5045, which prevented a room from being upgraded multiple times, one after another. This PR still allows that, but just prevents two from happening at the same time.

Mostly just to mitigate the fact that servers are slow and it can take a moment for the room upgrade to actually complete. We don't want people sending another request to upgrade the room when really they just thought the first didn't go through.
2019-06-25 14:19:21 +01:00
Richard van der Hoff
c8cb186260 Fix broken link in MSC1711 FAQ 2019-06-25 12:27:56 +01:00
Richard van der Hoff
a5222b386e changelog 2019-06-25 12:24:47 +01:00
Richard van der Hoff
62e361a90f Update github templates 2019-06-25 12:24:30 +01:00
Richard van der Hoff
1c4a38e377 Update SUPPORT.md 2019-06-25 12:24:23 +01:00
Richard van der Hoff
6fa36c22fa Merge branch 'develop' of github.com:matrix-org/synapse into develop 2019-06-25 12:21:32 +01:00
Richard van der Hoff
a47f012501 Grafana dashboard updates 2019-06-25 11:51:51 +01:00
Richard van der Hoff
abd334d27b Add extremities graphs to grafana dashboard 2019-06-25 11:51:32 +01:00
Richard van der Hoff
dd1c722a39 format json for grafana dashboard 2019-06-25 08:59:19 +01:00
Richard van der Hoff
fe2d876e2a Increase default log level for docker image to INFO. (#5547)
Fixes #3370.
2019-06-25 14:38:38 +10:00
Richard van der Hoff
f817fc9ad5 Update docker image to use Python 3.7. (#5546)
Python 3.7 is apparently faster than 3.6, and should be mature enough.
2019-06-25 14:20:53 +10:00
Andrew Morgan
78c00ad3ff Update changelog to better explain password reset change (#5545)
Updates the v1.0.0 changelog to provide more information to those upgrading to v1.0 about why they may receive errors or find that their password reset abilities have been disabled.

Helps address #5444
2019-06-24 17:52:15 +01:00
Andrew Morgan
28604ab03d Add info about black to code_style.rst (#5537)
Fixes #5533

Adds information about how to install and run black on the codebase.
2019-06-24 17:48:05 +01:00
Richard van der Hoff
4ac7ef4b67 Merge pull request #5524 from matrix-org/rav/new_cmdline_options
Add --data-dir and --open-private-ports options.
2019-06-24 17:25:57 +01:00
Richard van der Hoff
af8a962905 Merge pull request #5523 from matrix-org/rav/arg_defaults
Stop conflating generated config and default config
2019-06-24 17:24:35 +01:00
Richard van der Hoff
e59a8cd2e5 Merge pull request #5499 from matrix-org/rav/cleanup_metrics
Cleanups and sanity-checking in cpu and db metrics
2019-06-24 17:12:54 +01:00
Brendan Abolivier
deb4fe6ef3 Merge pull request #5534 from matrix-org/babolivier/federation-publicrooms
Split public rooms directory auth config in two
2019-06-24 16:08:02 +01:00
Brendan Abolivier
bfe84e051e Split public rooms directory auth config in two 2019-06-24 15:42:31 +01:00
Erik Johnston
25433f212d Merge pull request #5531 from matrix-org/erikj/workers_pagination_token
Fix /messages on workers when no from param specified.
2019-06-24 15:30:10 +01:00
Richard van der Hoff
e6b2ccbb51 changelog 2019-06-24 14:15:34 +01:00
Richard van der Hoff
3f8a252dd8 Add "--open-private-ports" cmdline option
This is helpful when generating a config file for running synapse under docker.
2019-06-24 14:15:34 +01:00
Richard van der Hoff
6a92b06cbb Add --data-directory commandline argument
We don't necessarily want to put the data in the cwd.
2019-06-24 14:15:34 +01:00
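The argparse side of these two options is straightforward; a minimal sketch (the surrounding Synapse config plumbing is omitted, and the help texts are paraphrased):

```python
import argparse

parser = argparse.ArgumentParser(description="generate a synapse config")
parser.add_argument(
    "--data-directory",
    metavar="DIRECTORY",
    help="where to put generated data files, rather than the cwd",
)
parser.add_argument(
    "--open-private-ports",
    action="store_true",
    help="do not bind listeners to localhost only, e.g. under docker",
)

args = parser.parse_args(["--open-private-ports"])
print(args.data_directory, args.open_private_ports)  # None True
```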
Richard van der Hoff
c783294549 changelog 2019-06-24 14:14:52 +01:00
Richard van der Hoff
16b52642e2 Don't load the generated config as the default.
It's too confusing.
2019-06-24 14:14:52 +01:00
Richard van der Hoff
7c2f8881a9 Ensure that all config options have sensible defaults
This will enable us to skip the unintuitive behaviour where the generated
config and default config are the same thing.
2019-06-24 14:14:52 +01:00
Richard van der Hoff
cf7aef1114 Merge pull request #5516 from matrix-org/rav/acme_key_path
Allow configuration of the path used for ACME account keys.
2019-06-24 14:14:20 +01:00
Richard van der Hoff
5d9c101551 changelog 2019-06-24 13:51:22 +01:00
Richard van der Hoff
367a8263b3 Remove unused Config.config_dir_path attribute
This is no longer used and only serves to confuse.
2019-06-24 13:51:22 +01:00
Richard van der Hoff
edea4bb5be Allow configuration of the path used for ACME account keys.
Because sticking it in the same place as the config isn't necessarily the right
thing to do.
2019-06-24 13:51:22 +01:00
Richard van der Hoff
c3c6b00d95 Pass config_dir_path and data_dir_path into Config.read_config. (#5522)
* Pull config_dir_path and data_dir_path calculation out of read_config_files

* Pass config_dir_path and data_dir_path into read_config
2019-06-24 11:34:45 +01:00
Richard van der Hoff
21bf4318b5 Factor acme bits out to a separate file (#5521)
This makes some of the conditional-import hoop-jumping easier.
2019-06-24 11:33:56 +01:00
Richard van der Hoff
1793de6c6d black 2019-06-24 11:16:13 +01:00
Erik Johnston
14aff5cc0d Newsfile 2019-06-24 10:22:34 +01:00
Erik Johnston
dddf20e8e1 Fix /messages on workers when no from param specified.
If no `from` param is specified, we calculate and use the "current
token", which includes typing, presence, etc. These are unused during
pagination and are not available on workers, so we simply don't
calculate them.
2019-06-24 10:06:51 +01:00
Richard van der Hoff
dc94773e60 Avoid raising exceptions in metrics
Sentry will catch the errors if they happen, so that should be good enough, and
won't make things explode if we hit the error condition.
2019-06-24 10:01:16 +01:00
Richard van der Hoff
5097aee740 Merge branch 'develop' into rav/cleanup_metrics 2019-06-24 10:00:13 +01:00
Richard van der Hoff
c753c098dd Merge pull request #5498 from matrix-org/rav/fix_clock_reversal
Use monotonic clock where possible for metrics
2019-06-24 09:55:12 +01:00
Richard van der Hoff
6cda36777b Drop support for cpu_affinity (#5525)
This has no useful purpose on python3, and is generally a source of confusion.
2019-06-22 11:01:55 +10:00
Richard van der Hoff
e1a795758c Improve help and cmdline option names for --generate-config options (#5512)
* group the arguments together into a group
* add new names "--generate-missing-config" and "--config-directory" for
  existing cmdline options "--generate-keys" and "--keys-dir", which better
  reflect their purposes.
2019-06-21 18:50:43 +01:00
Richard van der Hoff
03cea2b0fe Refactor Config parser and add some comments. (#5511)
Add some comments, and simplify `read_config_files`.
2019-06-21 17:43:38 +01:00
Richard van der Hoff
37933a3bf8 Improve logging when generating config files (#5510)
Make it a bit clearer what's going on.
2019-06-21 17:14:56 +01:00
Andrew Morgan
5d6644efe0 Only import jinja2 when needed (#5514)
Fixes https://github.com/matrix-org/synapse/issues/5431

`jinja2` was being imported even when it wasn't strictly necessary, which made it required to run Synapse even if the functionality that needed it wasn't enabled. This was causing new Synapse installations to crash on startup.

Email modules are now required.
2019-06-21 16:52:09 +01:00
Richard van der Hoff
0e8b35f7b0 Fix "Unexpected entry in 'full_schemas'" log warning (#5509)
There is a README.txt which always sets off this warning, which is a bit
alarming when you first start synapse. I don't think we need to warn about
this.
2019-06-21 15:11:57 +01:00
Richard van der Hoff
2f8491daef Fix logging error when a tampered event is detected. (#5500) 2019-06-21 15:11:42 +01:00
Erik Johnston
8fecb5fcbf Merge pull request #5513 from matrix-org/erikj/fix_messages_token
Fix /messages on worker when no token supplied
2019-06-21 14:36:03 +01:00
Erik Johnston
2b0bde935a Newsfile 2019-06-21 14:16:03 +01:00
Erik Johnston
7eadb74056 Fix /messages on worker when no token supplied 2019-06-21 14:15:59 +01:00
Erik Johnston
5f8a612af1 Merge pull request #5505 from matrix-org/erikj/messages_worker
Support pagination API in client_reader worker
2019-06-21 13:20:46 +01:00
Erik Johnston
60b912cf0d Update docs/workers.rst
E_TOO_MANY_NEGATIVES

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-06-21 11:54:03 +01:00
Erik Johnston
8d452e0ca5 Newsfile 2019-06-21 11:13:13 +01:00
Erik Johnston
8181e290a9 Fix sync tightloop bug.
If, for some reason, presence updates take a while to persist then it
can trigger clients to tightloop calling `/sync` due to the presence
handler returning updates but not advancing the stream token.

Fixes #5503.
2019-06-21 11:10:27 +01:00
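The tightloop, in miniature (a toy model, not the real presence handler): if a source returns updates without advancing the token, the client immediately re-syncs from the same position and gets the same answer forever. The fix is to make sure the returned token covers everything returned.

```python
def presence_updates_since(token, persisted_up_to, updates):
    # buggy version: `return [u for (pos, u) in updates if pos > token], token`
    # returns updates but echoes the old token, so /sync never advances.
    new_token = max(token, persisted_up_to)
    changed = [u for (pos, u) in updates if token < pos <= new_token]
    return changed, new_token

updates = [(3, "presence-1"), (5, "presence-2")]
print(presence_updates_since(2, 5, updates))  # (['presence-1', 'presence-2'], 5)
```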
Erik Johnston
626a26dbb8 Newsfile 2019-06-21 10:46:11 +01:00
Erik Johnston
f3ab533374 Support pagination API in client_reader worker 2019-06-21 10:43:12 +01:00
Erik Johnston
7456698241 Merge pull request #5476 from matrix-org/erikj/histogram_extremities
Add metrics for length of new extremities persisted.
2019-06-21 09:58:54 +01:00
Neil Johnson
f47969f42a Improve email notification logging (#5502) 2019-06-20 15:14:26 +01:00
Erik Johnston
f8bd30af2a Black 2019-06-20 13:10:06 +01:00
Erik Johnston
45f28a9d2f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/histogram_extremities 2019-06-20 11:59:14 +01:00
Amber Brown
32e7c9e7f2 Run Black. (#5482) 2019-06-20 19:32:02 +10:00
Richard van der Hoff
ae4d97bf05 changelog 2019-06-19 21:18:38 +01:00
Richard van der Hoff
fe641df770 Sanity-checking for metrics updates
Check that our clocks go forward.
2019-06-19 21:18:38 +01:00
Richard van der Hoff
f682af052d Simplify PerformanceCounters.update interface
we already have the duration for the update, so may as well use it rather than
passing extra params around and recalculating it.
2019-06-19 21:18:38 +01:00
Richard van der Hoff
68128d5626 Remove unused _get_event_counters
This has been redundant since cdb3757942.
2019-06-19 21:14:25 +01:00
Richard van der Hoff
15bf32e062 Use monotonic clock where possible for metrics
Fixes intermittent errors observed on Apple hardware which were caused by
time.clock() appearing to go backwards when called from different threads.

Also fixes a bug where database activity times were logged as 1/1000 of their
correct ratio due to confusion between milliseconds and seconds.
2019-06-19 21:09:43 +01:00
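Both halves of that fix in one small sketch: measure durations with a monotonic clock, and convert seconds to milliseconds exactly once, explicitly.

```python
import time

def timed(fn, *args):
    start = time.monotonic()  # monotonic: never appears to go backwards,
    result = fn(*args)        # unlike time.clock() across threads
    elapsed_ms = (time.monotonic() - start) * 1000.0  # one explicit s->ms step
    return result, elapsed_ms

print(timed(sum, range(1000)))
```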
Erik Johnston
7dcf984075 Merge pull request #5042 from matrix-org/erikj/fix_get_missing_events_error
Handle the case of `get_missing_events` failing
2019-06-19 13:20:09 +01:00
Erik Johnston
e0be8d7016 Merge pull request #5480 from matrix-org/erikj/extremities_dummy_events
Add experimental option to reduce extremities.
2019-06-19 13:19:18 +01:00
Erik Johnston
65787b0f7c Add descriptions and remove redundant set(..) 2019-06-19 11:49:39 +01:00
Richard van der Hoff
ceb2fa60a5 Merge pull request #5490 from matrix-org/rav/xmlsec_in_docker
Include xmlsec in the docker image
2019-06-19 11:33:06 +01:00
Erik Johnston
554609288b Run as background process and fix comments 2019-06-19 11:33:03 +01:00
Brendan Abolivier
10f9e35b43 Merge pull request #5493 from matrix-org/babolivier/deactivate_bg_job_typo
Fix typo in deactivation background job
2019-06-19 11:32:38 +01:00
Brendan Abolivier
092e36457a Fix typo in deactivation background job 2019-06-19 11:09:41 +01:00
Richard van der Hoff
8fcd2ca907 Merge pull request #4276 from Ralith/performance-advice
Improve advice regarding poor performance
2019-06-18 23:29:58 +01:00
David Baker
f2d2ae03da Add some logging to 3pid invite sig verification (#5015)
I had to add quite a lot of logging to diagnose a problem with 3pid
invites - we only logged the one failure which isn't all that
informative.

NB. I'm not convinced the logic of this loop is right: I think it
should just accept a single valid signature from a trusted source
rather than fail if *any* signature is invalid. Also it should
probably not skip the rest of middle loop if a check fails? However,
I'm deliberately not changing the logic here.
2019-06-18 22:51:24 +01:00
Richard van der Hoff
b093cfb02c changelog 2019-06-18 22:36:40 +01:00
Richard van der Hoff
8e7ef3a023 Include xmlsec in the docker image
Fixes #5467.
2019-06-18 22:35:19 +01:00
Richard van der Hoff
b36de88066 README.rst: fix header level 2019-06-18 18:32:51 +01:00
Erik Johnston
2b20d0fb59 Fix logline 2019-06-18 16:12:53 +01:00
Erik Johnston
19b80fe68a Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fix_get_missing_events_error 2019-06-18 16:11:43 +01:00
Erik Johnston
fc51e21326 Newsfile 2019-06-18 15:03:12 +01:00
Erik Johnston
b42f90470f Add experimental option to reduce extremities.
Adds new config option `cleanup_extremities_with_dummy_events` which
periodically sends dummy events to rooms with more than 10 extremities.

THIS IS REALLY EXPERIMENTAL.
2019-06-18 15:02:18 +01:00
Erik Johnston
16a3124b76 Only count non-cache state resolution 2019-06-18 13:02:06 +01:00
Erik Johnston
c9385dd238 Use consistent buckets 2019-06-18 12:43:41 +01:00
cclauss
82d9d524bd Fix seven contrib files with Python syntax errors (#5446)
* Fix seven contrib files with Python syntax errors

Signed-off-by: cclauss <cclauss@me.com>
2019-06-18 03:21:30 +10:00
Brendan Abolivier
d6328e03fd Merge pull request #5477 from matrix-org/babolivier/third_party_rules_3pid
Add third party rules hook for 3PID invites
2019-06-17 18:08:31 +01:00
Brendan Abolivier
33ea87be39 Make check_threepid_can_be_invited async 2019-06-17 17:39:38 +01:00
Brendan Abolivier
9ce4220d6c Changelog 2019-06-17 17:39:09 +01:00
Brendan Abolivier
112cf5a73a Add third party rules hook for 3PID invites 2019-06-17 17:39:09 +01:00
Erik Johnston
8353ddd951 Merge pull request #5479 from matrix-org/erikj/add_create_room_hook_develop
Add third party rules hook into create room
2019-06-17 17:30:05 +01:00
Jorik Schellekens
160c52d0d4 Merge pull request #5478 from matrix-org/joriks/demo_python3
Joriks/demo python3
2019-06-17 17:13:01 +01:00
Erik Johnston
4c4877bbc1 Newsfile 2019-06-17 17:02:53 +01:00
Erik Johnston
ff88d36dcb Add metric fo number of state groups in resolution 2019-06-17 17:02:53 +01:00
Erik Johnston
499d4a32cd Add metrics for len of new extremities persisted.
Of new events being persisted add metrics for total size of forward
extremities and number of unchanged, "stale" extremities.
2019-06-17 17:02:48 +01:00
Jorik Schellekens
25d16fea78 Changelog 2019-06-17 16:50:31 +01:00
Erik Johnston
187d2837a9 Add third party rules hook into create room 2019-06-17 16:41:19 +01:00
Erik Johnston
2d6308a043 Newsfile 2019-06-17 16:41:19 +01:00
Jorik Schellekens
839f9b9231 One shot demo server startup
Configure the demo servers to use untrusted
TLS certs so that they can communicate with each other.

This configuration makes them very unsafe, so I've added warnings about
it in the readme.
2019-06-17 16:24:28 +01:00
Amber Brown
eba7caf09f Remove Postgres 9.4 support (#5448) 2019-06-18 00:59:00 +10:00
Erik Johnston
6840ebeef8 Merge pull request #5385 from matrix-org/erikj/reduce_http_exceptions
Handle HttpResponseException when using federation client.
2019-06-17 13:54:47 +01:00
Erik Johnston
dd927b29e1 Merge pull request #5388 from matrix-org/erikj/fix_email_push
Fix email notifications for unnamed rooms with multiple people
2019-06-17 13:54:35 +01:00
Erik Johnston
414d2ca3a6 Merge pull request #5389 from matrix-org/erikj/renew_attestations_on_master
Only start background group attestation renewals on master
2019-06-17 13:54:29 +01:00
Amber Brown
97d7e4c7b7 Move SyTest to Buildkite (#5459)
Including workers!
2019-06-17 21:08:15 +10:00
Erik Johnston
a9dab970b8 Merge pull request #5464 from matrix-org/erikj/3pid_remote_invite_state
Fix 3PID invite room state over federation.
2019-06-17 10:18:28 +01:00
Brendan Abolivier
f12e1f029c Merge pull request #5440 from matrix-org/babolivier/third_party_event_rules
Allow server admins to define implementations of extra rules for allowing or denying incoming events
2019-06-14 19:37:59 +01:00
Erik Johnston
9ca4ae7131 Merge pull request #5461 from matrix-org/erikj/histograms_are_cumalitive
Prometheus histograms are cumulative
2019-06-14 18:21:42 +01:00
Brendan Abolivier
f874b16b2e Add plugin APIs for implementations of custom event rules. 2019-06-14 18:16:03 +01:00
Brendan Abolivier
14db086428 Merge pull request #5465 from matrix-org/babolivier/fix_deactivation_bg_job
Fix background job for deactivated flag
2019-06-14 18:12:56 +01:00
Brendan Abolivier
5cec6d1845 Fix changelog 2019-06-14 17:18:21 +01:00
Brendan Abolivier
4024520ff8 Changelog 2019-06-14 16:38:44 +01:00
Erik Johnston
3c9bb86fde Newsfile 2019-06-14 16:19:11 +01:00
Erik Johnston
304a1376c2 Fix 3PID invite room state over federation.
Fixes a bug where, when a user exchanged a 3PID invite for a proper invite over
federation, it did not include the `invite_room_state` key.

This was due to synapse incorrectly sending out two invite requests.
2019-06-14 16:19:11 +01:00
Brendan Abolivier
e0b77b004d Fix background job for deactivated flag 2019-06-14 16:00:45 +01:00
Brendan Abolivier
9b14a810d2 Merge pull request #5462 from matrix-org/babolivier/account_validity_deactivated_accounts_2
Don't send renewal emails to deactivated users (second attempt)
2019-06-14 15:35:31 +01:00
Jorik Schellekens
1e7864c929 Merge pull request #5460 from matrix-org/joriks/demo_python3
Use python3 in the demo
2019-06-14 15:23:57 +01:00
Brendan Abolivier
6d56a694f4 Don't send renewal emails to deactivated users 2019-06-14 15:05:56 +01:00
Erik Johnston
e9344e0dee Merge pull request #5390 from matrix-org/erikj/dont_log_on_fail_to_get_file
Don't log exception when failing to fetch remote content.
2019-06-14 14:25:14 +01:00
Erik Johnston
9fd4f83f1a Newsfile 2019-06-14 14:19:37 +01:00
Jorik Schellekens
cc7cc853b1 Changelog 2019-06-14 14:07:47 +01:00
Erik Johnston
3ed595e327 Prometheus histograms are cumulative 2019-06-14 14:07:32 +01:00
Brendan Abolivier
d0530382ee Track deactivated accounts in the database (#5378) 2019-06-14 13:18:24 +01:00
Jorik Schellekens
d8db29c481 Use python3 in the demo 2019-06-14 13:03:46 +01:00
Erik Johnston
f03f8b7f4c Merge pull request #5458 from matrix-org/hawkowl/fix-prometheus
Fix Prometheus erroring after the extremities monitoring
2019-06-14 12:59:02 +01:00
Amber H. Brown
b2a6f90a67 changelog 2019-06-14 21:10:21 +10:00
Amber H. Brown
a10c8dae85 fix prometheus rendering error 2019-06-14 21:09:33 +10:00
Neil Johnson
4f68188d0b Change to absolute path for contrib/docker
Because this file is reproduced on Docker Hub, and relative paths don't work there.
2019-06-13 16:42:36 +01:00
Richard van der Hoff
b59a4eba64 Updates to the federation_client script (#5447)
* py3 fixes for federation_client
* .well-known support for federation_client
2019-06-13 14:49:25 +01:00
Richard van der Hoff
5c15039e06 Clean up code for sending federation EDUs. (#5381)
This code confused the hell out of me today. Split _get_new_device_messages
into its two (unrelated) parts.
2019-06-13 13:52:08 +01:00
Amber Brown
6312d6cc7c Expose statistics on extrems to prometheus (#5384) 2019-06-13 22:40:52 +10:00
Amber Brown
09e9a26b71 Remove Python 2.7 support. (#5425)
* remove 2.7 from CI and publishing

* fill out classifiers and also make it not be installed on 3.5

* some minor bumps so that the old deps work on python 3.5
2019-06-12 21:31:59 +10:00
Erik Johnston
7e68691ce9 Merge branch 'master' of github.com:matrix-org/synapse into develop 2019-06-11 17:25:16 +01:00
Erik Johnston
97174780ce 1.0.0 2019-06-11 17:10:01 +01:00
Erik Johnston
9532eb55ec Merge pull request #5424 from matrix-org/erikj/change_password_reset_links
Change password reset links to /_matrix.
2019-06-11 13:29:42 +01:00
Erik Johnston
a766c41d25 Bump bleach version so that tests can run on old deps. 2019-06-11 12:34:18 +01:00
Neil Johnson
426218323b Neilj/improve federation docs (#5419)
Add FAQ questions to federate.md. Add a health warning making it clear that the 1711 upgrade FAQ is now out of date.
2019-06-11 12:17:43 +01:00
Erik Johnston
453aaaadc0 Newsfile 2019-06-11 11:34:38 +01:00
Erik Johnston
10383e6e6f Change password reset links to /_matrix. 2019-06-11 11:34:33 +01:00
Erik Johnston
5bc9484537 Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into develop 2019-06-11 10:37:43 +01:00
Andrew Morgan
2ddc13577c Don't warn user about password reset disabling through config code (#5387)
Moves the warning about password resets being disabled to the point where a user actually tries to reset their password. Is this an appropriate place for it to happen?

Also removed the disabling of msisdn password resets when you don't have an email config, as that just doesn't make sense.

Also change the error a user receives upon disabled passwords to specify that only email-based password reset is disabled.
2019-06-11 00:25:07 +01:00
Neil Johnson
94dac0f3e5 add monthly active users to phonehome stats (#5252)
* add monthly active users to phonehome stats
2019-06-10 23:33:59 +01:00
Benjamin Saunders
047486a384 Improve advice regarding poor performance
Signed-off-by: Benjamin Saunders <ben.e.saunders@gmail.com>
2019-06-09 15:20:28 -07:00
Erik Johnston
5009d988da Newsfile 2019-06-07 12:39:12 +01:00
Erik Johnston
95d38afe96 Don't log exception when failing to fetch remote content.
In particular, let's not log stack traces when we stop processing
because the response body was too large.
2019-06-07 12:39:10 +01:00
Erik Johnston
2cca90dd40 Newsfile 2019-06-07 12:26:59 +01:00
Erik Johnston
837340bdce Only start background group attestation renewals on master 2019-06-07 12:25:06 +01:00
Erik Johnston
a099926fcc Newsfile 2019-06-07 12:15:33 +01:00
Erik Johnston
2ebeda48b2 Add test 2019-06-07 12:15:33 +01:00
Erik Johnston
8182a1cfb5 Refactor email tests 2019-06-07 12:15:33 +01:00
Erik Johnston
928d1ccd73 Fix email notifications for large unnamed rooms.
When we try to calculate a description for a room with no name but
multiple other users, we throw an exception (due to trying to subscript
the result of `dict.values()`).
2019-06-07 12:15:28 +01:00
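The underlying gotcha is easy to reproduce: in Python 3, dict.values() returns a view, which can't be subscripted.

```python
members = {"@alice:hs": "Alice", "@bob:hs": "Bob"}

# TypeError: 'dict_values' object is not subscriptable
#   first = members.values()[0]

first = list(members.values())[0]  # materialise the view first
print(first)
```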
Erik Johnston
6745b7de6d Handle failing to talk to master over replication 2019-06-07 10:47:31 +01:00
Erik Johnston
a2419b27fe Newsfile 2019-06-07 10:31:53 +01:00
Erik Johnston
a46ef1e3a4 Handle HttpResponseException when using federation client.
Otherwise we just log exceptions everywhere.
2019-06-07 10:29:35 +01:00
Ike Johnson
0df5b41759 Create 5313.misc 2019-06-02 23:23:58 +08:00
Ike Johnson
145f57897d Update HAProxy example rules
These new rules allow a user to route only Matrix traffic, letting them run Matrix on the domain without affecting their existing websites.
2019-06-02 23:10:27 +08:00
Erik Johnston
42e1aa5b2c Newsfile 2019-04-10 10:43:47 +01:00
Erik Johnston
c132c8e505 Handle the case of get_missing_events failing
Currently if a call to `/get_missing_events` fails we log an exception
and stop processing the top level event we received over federation.
Instead let's try and handle it sensibly given it is a somewhat expected
failure mode.
2019-04-10 10:39:54 +01:00
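In outline (a hedged sketch, not the actual federation handler), the sensible handling is to catch the failure, log it without a full stack trace, and continue without the gap-fill:

```python
import logging

logger = logging.getLogger(__name__)

def get_missing_events_or_continue(call_get_missing_events):
    try:
        return call_get_missing_events()
    except Exception as e:
        # a somewhat expected failure mode: the remote may be unreachable or
        # refuse the request, so warn and carry on processing the event
        logger.warning("/get_missing_events failed: %s", e)
        return []

def failing_call():
    raise RuntimeError("remote server unreachable")

print(get_missing_events_or_continue(failing_call))  # []
```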
478 changed files with 24741 additions and 14030 deletions


@@ -1,21 +0,0 @@
version: '3.1'

services:
  postgres:
    image: postgres:9.4
    environment:
      POSTGRES_PASSWORD: postgres

  testenv:
    image: python:2.7
    depends_on:
      - postgres
    env_file: .env
    environment:
      SYNAPSE_POSTGRES_HOST: postgres
      SYNAPSE_POSTGRES_USER: postgres
      SYNAPSE_POSTGRES_PASSWORD: postgres
    working_dir: /app
    volumes:
      - ..:/app


@@ -1,21 +0,0 @@
version: '3.1'

services:
  postgres:
    image: postgres:9.5
    environment:
      POSTGRES_PASSWORD: postgres

  testenv:
    image: python:2.7
    depends_on:
      - postgres
    env_file: .env
    environment:
      SYNAPSE_POSTGRES_HOST: postgres
      SYNAPSE_POSTGRES_USER: postgres
      SYNAPSE_POSTGRES_PASSWORD: postgres
    working_dir: /app
    volumes:
      - ..:/app

View File

@@ -1,21 +0,0 @@
version: '3.1'

services:
  postgres:
    image: postgres:9.4
    environment:
      POSTGRES_PASSWORD: postgres

  testenv:
    image: python:3.5
    depends_on:
      - postgres
    env_file: .env
    environment:
      SYNAPSE_POSTGRES_HOST: postgres
      SYNAPSE_POSTGRES_USER: postgres
      SYNAPSE_POSTGRES_PASSWORD: postgres
    working_dir: /app
    volumes:
      - ..:/app

.buildkite/format_tap.py Normal file
View File

@@ -0,0 +1,33 @@
import sys
from tap.parser import Parser
from tap.line import Result, Unknown, Diagnostic

out = ["### TAP Output for " + sys.argv[2]]

p = Parser()

in_error = False

for line in p.parse_file(sys.argv[1]):
    if isinstance(line, Result):
        if in_error:
            out.append("")
            out.append("</pre></code></details>")
            out.append("")
            out.append("----")
            out.append("")
        in_error = False

        if not line.ok and not line.todo:
            in_error = True

            out.append("FAILURE Test #%d: ``%s``" % (line.number, line.description))
            out.append("")
            out.append("<details><summary>Show log</summary><code><pre>")

    elif isinstance(line, Diagnostic) and in_error:
        out.append(line.text)

if out:
    for line in out[:-3]:
        print(line)
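For context, a minimal invocation sketch (the file name and label here are hypothetical; the script reads a TAP results file as its first argument, takes a heading label as its second, needs the `tap.py` package installed, and prints a Markdown fragment to stdout):

    python .buildkite/format_tap.py results.tap "py35 / SQLite"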

View File

@@ -1,22 +1,21 @@
#!/usr/bin/env bash
set -e
set -ex
# CircleCI doesn't give CIRCLE_PR_NUMBER in the environment for non-forked PRs. Wonderful.
# In this case, we just need to do some ~shell magic~ to strip it out of the PULL_REQUEST URL.
echo 'export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-${CIRCLE_PULL_REQUEST##*/}}"' >> $BASH_ENV
source $BASH_ENV
if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs|release-.*)$ ]]; then
echo "Not merging forward, as this is a release branch"
exit 0
fi
if [[ -z "${CIRCLE_PR_NUMBER}" ]]
then
echo "Can't figure out what the PR number is! Assuming merge target is develop."
if [[ -z $BUILDKITE_PULL_REQUEST_BASE_BRANCH ]]; then
echo "Not a pull request, or hasn't had a PR opened yet..."
# It probably hasn't had a PR opened yet. Since all PRs land on develop, we
# can probably assume it's based on it and will be merged into it.
GITBASE="develop"
else
# Get the reference, using the GitHub API
GITBASE=`wget -O- https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
GITBASE=$BUILDKITE_PULL_REQUEST_BASE_BRANCH
fi
# Show what we are before

View File

@@ -2,10 +2,11 @@ env:
CODECOV_TOKEN: "2dd7eb9b-0eda-45fe-a47c-9b5ac040045f"
steps:
- command:
- "python -m pip install tox"
- "tox -e pep8"
label: "\U0001F9F9 PEP-8"
- "tox -e check_codestyle"
label: "\U0001F9F9 Check Style"
plugins:
- docker#v3.0.1:
image: "python:3.6"
@@ -46,15 +47,16 @@ steps:
- wait
- command:
- "python -m pip install tox"
- "tox -e py27,codecov"
label: ":python: 2.7 / SQLite"
- "tox -e py35-old,codecov"
label: ":python: 3.5 / SQLite / Old Deps"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:2.7"
image: "python:3.5"
propagate-environment: true
retry:
automatic:
@@ -114,74 +116,6 @@ steps:
- exit_status: 2
limit: 2
- command:
- "python -m pip install tox"
- "tox -e py27-old,codecov"
label: ":python: 2.7 / SQLite / Old Deps"
env:
TRIAL_FLAGS: "-j 2"
plugins:
- docker#v3.0.1:
image: "python:2.7"
propagate-environment: true
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: ":python: 2.7 / :postgres: 9.4"
env:
TRIAL_FLAGS: "-j 4"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py27-postgres,codecov'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py27.pg94.yaml
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: ":python: 2.7 / :postgres: 9.5"
env:
TRIAL_FLAGS: "-j 4"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py27-postgres,codecov'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py27.pg95.yaml
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: ":python: 3.5 / :postgres: 9.4"
env:
TRIAL_FLAGS: "-j 4"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
plugins:
- docker-compose#v2.1.0:
run: testenv
config:
- .buildkite/docker-compose.py35.pg94.yaml
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: ":python: 3.5 / :postgres: 9.5"
env:
TRIAL_FLAGS: "-j 4"
@@ -232,3 +166,67 @@ steps:
limit: 2
- exit_status: 2
limit: 2
- label: "SyTest - :python: 3.5 / SQLite / Monolith"
agents:
queue: "medium"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: "SyTest - :python: 3.5 / :postgres: 9.6 / Monolith"
agents:
queue: "medium"
env:
POSTGRES: "1"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2
- label: "SyTest - :python: 3.5 / :postgres: 9.6 / Workers"
agents:
queue: "medium"
env:
POSTGRES: "1"
WORKERS: "1"
command:
- "bash .buildkite/merge_base_branch.sh"
- "bash /synapse_sytest.sh"
plugins:
- docker#v3.0.1:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
soft_fail: true
retry:
automatic:
- exit_status: -1
limit: 2
- exit_status: 2
limit: 2

View File

@@ -4,160 +4,23 @@ jobs:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG}-py2 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py2
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest-py2 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py2
- run: docker push matrixdotorg/synapse:latest-py3
sytestpy2:
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy2postgres:
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy2merged:
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy2postgresmerged:
docker:
- image: matrixdotorg/sytest-synapsepy2
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy3:
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy3postgres:
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy3merged:
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
sytestpy3postgresmerged:
docker:
- image: matrixdotorg/sytest-synapsepy3
working_directory: /src
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: POSTGRES=1 /synapse_sytest.sh
- store_artifacts:
path: /logs
destination: logs
- store_test_results:
path: /logs
workflows:
version: 2
build:
jobs:
- sytestpy2:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy3postgres:
filters:
branches:
only: /develop|master|release-.*/
- sytestpy2merged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy2postgresmerged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy3merged:
filters:
branches:
ignore: /develop|master|release-.*/
- sytestpy3postgresmerged:
filters:
branches:
ignore: /develop|master|release-.*/
- dockerhubuploadrelease:
filters:
tags:

View File

@@ -1,9 +1,13 @@
Dockerfile
.travis.yml
.gitignore
demo/etc
tox.ini
.git/*
.tox/*
debian/matrix-synapse/
debian/matrix-synapse-*/
# ignore everything by default
*
# things to include
!docker
!scripts
!synapse
!MANIFEST.in
!README.rst
!setup.py
!synctl
**/__pycache__

View File

@@ -4,6 +4,7 @@ about: I need support for Synapse
---
# Please ask for support in [**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org)
Please don't file github issues asking for support.
## Don't file an issue as a support request.
Instead, please join [`#synapse:matrix.org`](https://matrix.to/#/#synapse:matrix.org)
(from a matrix.org account if necessary), and ask there.

.github/SUPPORT.md vendored
View File

@@ -1,3 +1,3 @@
[**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org) is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html
It can also be accessed via the IRC bridge at irc://irc.freenode.net/matrix or on the web here: https://webchat.freenode.net/?channels=matrix
[**#synapse:matrix.org**](https://matrix.to/#/#synapse:matrix.org) is the official support room for
Synapse, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html.
Please ask for support there, rather than filing github issues.

View File

@@ -72,3 +72,6 @@ Jason Robinson <jasonr at matrix.org>
Joseph Weston <joseph at weston.cloud>
+ Add admin API for querying HS version
Benjamin Saunders <ben.e.saunders at gmail dot com>
* Documentation improvements

View File

@@ -1,3 +1,239 @@
Synapse 1.2.0rc1 (2019-07-22)
=============================
Features
--------
- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default. ([\#5714](https://github.com/matrix-org/synapse/issues/5714))
Bugfixes
--------
- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))
- Ignore redactions of m.room.create events. ([\#5701](https://github.com/matrix-org/synapse/issues/5701))
Updates to the Docker image
---------------------------
- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))
Improved Documentation
----------------------
- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to set up the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))
Deprecations and Removals
-------------------------
- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))
Internal Changes
----------------
- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))
Synapse 1.1.0 (2019-07-04)
==========================
As of v1.1.0, Synapse no longer supports Python 2 or Postgres version 9.4.
See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
This release also deprecates the use of environment variables to configure the
docker image. See the [docker README](https://github.com/matrix-org/synapse/blob/release-v1.1.0/docker/README.md#legacy-dynamic-configuration-file-support)
for more details.
No changes since 1.1.0rc2.
Synapse 1.1.0rc2 (2019-07-03)
=============================
Bugfixes
--------
- Fix regression in 1.1rc1 where OPTIONS requests to the media repo would fail. ([\#5593](https://github.com/matrix-org/synapse/issues/5593))
- Removed the `SYNAPSE_SMTP_*` docker container environment variables. Using these environment variables prevented the docker container from starting in Synapse v1.0, even though they didn't actually allow any functionality anyway. ([\#5596](https://github.com/matrix-org/synapse/issues/5596))
- Fix a number of "Starting txn from sentinel context" warnings. ([\#5605](https://github.com/matrix-org/synapse/issues/5605))
Internal Changes
----------------
- Update github templates. ([\#5552](https://github.com/matrix-org/synapse/issues/5552))
Synapse 1.1.0rc1 (2019-07-02)
=============================
As of v1.1.0, Synapse no longer supports Python 2 or Postgres version 9.4.
See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
Features
--------
- Added the possibility to disable local password authentication. Contributed by Daniel Hoffend. ([\#5092](https://github.com/matrix-org/synapse/issues/5092))
- Add monthly active users to phonehome stats. ([\#5252](https://github.com/matrix-org/synapse/issues/5252))
- Allow expired user to trigger renewal email sending manually. ([\#5363](https://github.com/matrix-org/synapse/issues/5363))
- Statistics on forward extremities per room are now exposed via Prometheus. ([\#5384](https://github.com/matrix-org/synapse/issues/5384), [\#5458](https://github.com/matrix-org/synapse/issues/5458), [\#5461](https://github.com/matrix-org/synapse/issues/5461))
- Add --no-daemonize option to run synapse in the foreground, per issue #4130. Contributed by Soham Gumaste. ([\#5412](https://github.com/matrix-org/synapse/issues/5412), [\#5587](https://github.com/matrix-org/synapse/issues/5587))
- Fully support SAML2 authentication. Contributed by [Alexander Trost](https://github.com/galexrt) - thank you! ([\#5422](https://github.com/matrix-org/synapse/issues/5422))
- Allow server admins to define implementations of extra rules for allowing or denying incoming events. ([\#5440](https://github.com/matrix-org/synapse/issues/5440), [\#5474](https://github.com/matrix-org/synapse/issues/5474), [\#5477](https://github.com/matrix-org/synapse/issues/5477))
- Add support for handling pagination APIs on client reader worker. ([\#5505](https://github.com/matrix-org/synapse/issues/5505), [\#5513](https://github.com/matrix-org/synapse/issues/5513), [\#5531](https://github.com/matrix-org/synapse/issues/5531))
- Improve help and cmdline option names for --generate-config options. ([\#5512](https://github.com/matrix-org/synapse/issues/5512))
- Allow configuration of the path used for ACME account keys. ([\#5516](https://github.com/matrix-org/synapse/issues/5516), [\#5521](https://github.com/matrix-org/synapse/issues/5521), [\#5522](https://github.com/matrix-org/synapse/issues/5522))
- Add --data-dir and --open-private-ports options. ([\#5524](https://github.com/matrix-org/synapse/issues/5524))
- Split public rooms directory auth config in two settings, in order to manage client auth independently from the federation part of it. Obsoletes the "restrict_public_rooms_to_local_users" configuration setting. If "restrict_public_rooms_to_local_users" is set in the config, Synapse will act as if both new options are enabled, i.e. require authentication through the client API and deny federation requests. ([\#5534](https://github.com/matrix-org/synapse/issues/5534))
- The minimum TLS version used for outgoing federation requests can now be set with `federation_client_minimum_tls_version`. ([\#5550](https://github.com/matrix-org/synapse/issues/5550))
- Optimise devices changed query to not pull unnecessary rows from the database, reducing database load. ([\#5559](https://github.com/matrix-org/synapse/issues/5559))
- Add new metrics for number of forward extremities being persisted and number of state groups involved in resolution. ([\#5476](https://github.com/matrix-org/synapse/issues/5476))
Bugfixes
--------
- Fix bug processing incoming events over federation if call to `/get_missing_events` fails. ([\#5042](https://github.com/matrix-org/synapse/issues/5042))
- Prevent more than one room upgrade happening simultaneously on the same room. ([\#5051](https://github.com/matrix-org/synapse/issues/5051))
- Fix a bug where running synapse_port_db would cause the account validity feature to fail because it didn't set the type of the email_sent column to boolean. ([\#5325](https://github.com/matrix-org/synapse/issues/5325))
- Warn about disabling email-based password resets when a reset occurs, and remove warning when someone attempts a phone-based reset. ([\#5387](https://github.com/matrix-org/synapse/issues/5387))
- Fix email notifications for unnamed rooms with multiple people. ([\#5388](https://github.com/matrix-org/synapse/issues/5388))
- Fix exceptions in federation reader worker caused by attempting to renew attestations, which should only happen on master worker. ([\#5389](https://github.com/matrix-org/synapse/issues/5389))
- Fix handling of failures fetching remote content to not log failures as exceptions. ([\#5390](https://github.com/matrix-org/synapse/issues/5390))
- Fix a bug where deactivated users could receive renewal emails if the account validity feature is on. ([\#5394](https://github.com/matrix-org/synapse/issues/5394))
- Fix missing invite state after exchanging 3PID invites over federation. ([\#5464](https://github.com/matrix-org/synapse/issues/5464))
- Fix intermittent exceptions on Apple hardware. Also fix bug that caused database activity times to be under-reported in log lines. ([\#5498](https://github.com/matrix-org/synapse/issues/5498))
- Fix logging error when a tampered event is detected. ([\#5500](https://github.com/matrix-org/synapse/issues/5500))
- Fix bug where clients could tight-loop calling `/sync` for a period. ([\#5507](https://github.com/matrix-org/synapse/issues/5507))
- Fix bug with `jinja2` preventing Synapse from starting. Users who had this problem should now simply need to run `pip install matrix-synapse`. ([\#5514](https://github.com/matrix-org/synapse/issues/5514))
- Fix a regression where homeservers on private IP addresses were incorrectly blacklisted. ([\#5523](https://github.com/matrix-org/synapse/issues/5523))
- Fixed m.login.jwt using unregistered user_id and added pyjwt>=1.6.4 as a conditional dependency for JWT support. Contributed by Pau Rodriguez-Estivill. ([\#5555](https://github.com/matrix-org/synapse/issues/5555), [\#5586](https://github.com/matrix-org/synapse/issues/5586))
- Fix a bug that would cause invited users to receive several emails for a single 3PID invite in case the inviter is rate limited. ([\#5576](https://github.com/matrix-org/synapse/issues/5576))
Updates to the Docker image
---------------------------
- Add ability to change the Docker container's [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) with the `TZ` variable. ([\#5383](https://github.com/matrix-org/synapse/issues/5383))
- Update docker image to use Python 3.7. ([\#5546](https://github.com/matrix-org/synapse/issues/5546))
- Deprecate the use of environment variables for configuration, and make the use of a static configuration the default. ([\#5561](https://github.com/matrix-org/synapse/issues/5561), [\#5562](https://github.com/matrix-org/synapse/issues/5562), [\#5566](https://github.com/matrix-org/synapse/issues/5566), [\#5567](https://github.com/matrix-org/synapse/issues/5567))
- Increase default log level for docker image to INFO. It can still be changed by editing the generated log.config file. ([\#5547](https://github.com/matrix-org/synapse/issues/5547))
- Send synapse logs to the docker logging system, by default. ([\#5565](https://github.com/matrix-org/synapse/issues/5565))
- Open the non-TLS port by default. ([\#5568](https://github.com/matrix-org/synapse/issues/5568))
- Fix failure to start under docker with SAML support enabled. ([\#5490](https://github.com/matrix-org/synapse/issues/5490))
- Use a sensible location for data files when generating a config file. ([\#5563](https://github.com/matrix-org/synapse/issues/5563))
Deprecations and Removals
-------------------------
- Python 2.7 is no longer a supported platform. Synapse now requires Python 3.5+ to run. ([\#5425](https://github.com/matrix-org/synapse/issues/5425))
- PostgreSQL 9.4 is no longer supported. Synapse requires Postgres 9.5 or above for Postgres support. ([\#5448](https://github.com/matrix-org/synapse/issues/5448))
- Remove support for cpu_affinity setting. ([\#5525](https://github.com/matrix-org/synapse/issues/5525))
Improved Documentation
----------------------
- Improve README section on performance troubleshooting. ([\#4276](https://github.com/matrix-org/synapse/issues/4276))
- Add information about how to install and run `black` on the codebase to code_style.rst. ([\#5537](https://github.com/matrix-org/synapse/issues/5537))
- Improve install docs on choosing server_name. ([\#5558](https://github.com/matrix-org/synapse/issues/5558))
Internal Changes
----------------
- Add logging to 3pid invite signature verification. ([\#5015](https://github.com/matrix-org/synapse/issues/5015))
- Update example haproxy config to a more compatible setup. ([\#5313](https://github.com/matrix-org/synapse/issues/5313))
- Track deactivated accounts in the database. ([\#5378](https://github.com/matrix-org/synapse/issues/5378), [\#5465](https://github.com/matrix-org/synapse/issues/5465), [\#5493](https://github.com/matrix-org/synapse/issues/5493))
- Clean up code for sending federation EDUs. ([\#5381](https://github.com/matrix-org/synapse/issues/5381))
- Add a sponsor button to the repo. ([\#5382](https://github.com/matrix-org/synapse/issues/5382), [\#5386](https://github.com/matrix-org/synapse/issues/5386))
- Don't log non-200 responses from federation queries as exceptions. ([\#5383](https://github.com/matrix-org/synapse/issues/5383))
- Update Python syntax in contrib/ to Python 3. ([\#5446](https://github.com/matrix-org/synapse/issues/5446))
- Update federation_client dev script to support `.well-known` and work with python3. ([\#5447](https://github.com/matrix-org/synapse/issues/5447))
- SyTest has been moved to Buildkite. ([\#5459](https://github.com/matrix-org/synapse/issues/5459))
- Demo script now uses python3. ([\#5460](https://github.com/matrix-org/synapse/issues/5460))
- Synapse can now handle RestServlets that return coroutines. ([\#5475](https://github.com/matrix-org/synapse/issues/5475), [\#5585](https://github.com/matrix-org/synapse/issues/5585))
- The demo servers talk to each other again. ([\#5478](https://github.com/matrix-org/synapse/issues/5478))
- Add an EXPERIMENTAL config option to try and periodically clean up extremities by sending dummy events. ([\#5480](https://github.com/matrix-org/synapse/issues/5480))
- Synapse's codebase is now formatted by `black`. ([\#5482](https://github.com/matrix-org/synapse/issues/5482))
- Some cleanups and sanity-checking in the CPU and database metrics. ([\#5499](https://github.com/matrix-org/synapse/issues/5499))
- Improve email notification logging. ([\#5502](https://github.com/matrix-org/synapse/issues/5502))
- Fix "Unexpected entry in 'full_schemas'" log warning. ([\#5509](https://github.com/matrix-org/synapse/issues/5509))
- Improve logging when generating config files. ([\#5510](https://github.com/matrix-org/synapse/issues/5510))
- Refactor and clean up Config parser for maintainability. ([\#5511](https://github.com/matrix-org/synapse/issues/5511))
- Make the config clearer in that `email.template_dir` is relative to Synapse's root directory, not the `synapse/` folder within it. ([\#5543](https://github.com/matrix-org/synapse/issues/5543))
- Update v1.0.0 release changelog to include more information about changes to password resets. ([\#5545](https://github.com/matrix-org/synapse/issues/5545))
- Remove non-functioning check_event_hash.py dev script. ([\#5548](https://github.com/matrix-org/synapse/issues/5548))
- Synapse will now only allow TLS v1.2 connections when serving federation, if it terminates TLS. As Synapse's allowed ciphers could previously only be used with TLS v1.2, this does not change behaviour. ([\#5550](https://github.com/matrix-org/synapse/issues/5550))
- Logging when running GC collection on generation 0 is now at the DEBUG level, not INFO. ([\#5557](https://github.com/matrix-org/synapse/issues/5557))
- Reduce the amount of stuff we send in the docker context. ([\#5564](https://github.com/matrix-org/synapse/issues/5564))
- Point the reverse links in the Purge History contrib scripts at the intended location. ([\#5570](https://github.com/matrix-org/synapse/issues/5570))
Synapse 1.0.0 (2019-06-11)
==========================
Bugfixes
--------
- Fix bug where attempting to send transactions with large number of EDUs can fail. ([\#5418](https://github.com/matrix-org/synapse/issues/5418))
Improved Documentation
----------------------
- Expand the federation guide to include relevant content from the MSC1711 FAQ ([\#5419](https://github.com/matrix-org/synapse/issues/5419))
Internal Changes
----------------
- Move password reset links to /_matrix/client/unstable namespace. ([\#5424](https://github.com/matrix-org/synapse/issues/5424))
Synapse 1.0.0rc3 (2019-06-10)
=============================
@@ -31,7 +267,11 @@ Features
- Add a script to generate new signing-key files. ([\#5361](https://github.com/matrix-org/synapse/issues/5361))
- Update upgrade and installation guides ahead of 1.0. ([\#5371](https://github.com/matrix-org/synapse/issues/5371))
- Replace the `perspectives` configuration section with `trusted_key_servers`, and make validating the signatures on responses optional (since TLS will do this job for us). ([\#5374](https://github.com/matrix-org/synapse/issues/5374))
- Add ability to perform password reset via email without trusting the identity server. ([\#5377](https://github.com/matrix-org/synapse/issues/5377))
- Add ability to perform password reset via email without trusting the identity server. **As a result of this PR, password resets will now be disabled on the default configuration.**
Password reset emails are now sent from the homeserver by default, instead of the identity server. To enable this functionality, ensure `email` and `public_baseurl` config options are filled out.
If you would like to re-enable password resets being sent from the identity server (warning: this is dangerous! See [#5345](https://github.com/matrix-org/synapse/pull/5345)), set `email.trust_identity_server_for_password_resets` to true. ([\#5377](https://github.com/matrix-org/synapse/issues/5377))
- Set default room version to v4. ([\#5379](https://github.com/matrix-org/synapse/issues/5379))

View File

@@ -30,21 +30,19 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Travis CI
<https://travis-ci.org/matrix-org/synapse>`_ for continuous integration. All
pull requests to synapse get automatically tested by Travis and CircleCI.
If your change breaks the build, this will be shown in GitHub, so please
keep an eye on the pull request for feedback.
We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
continuous integration. Buildkite builds need to be authorised by a
maintainer. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
- ``tox -e py27`` (requires tox to be installed by ``pip install tox``) for
SQLite-backed Synapse on Python 2.7.
- ``tox -e py35`` for SQLite-backed Synapse on Python 3.5.
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py27-postgres`` for PostgreSQL-backed Synapse on Python 2.7
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 2.7
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
@@ -71,13 +69,21 @@ All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes).
To create a changelog entry, make a new file in the ``changelog.d`` directory
named in the format of ``PRnumber.type``. The type can be one of the following:
The content of the file is your changelog entry, which can contain Markdown
formatting. The entry should end with a full stop ('.') for consistency.
* ``feature``.
* ``bugfix``.
* ``docker`` (for updates to the Docker image).
* ``doc`` (for updates to the documentation).
* ``removal`` (also used for deprecations).
* ``misc`` (for internal-only changes).
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our `changelog
<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
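As a minimal sketch, with a hypothetical PR number and description::

    echo "Fix a bug in the widget frobnicator." > changelog.d/1234.bugfix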
Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!

View File

@@ -1,14 +1,31 @@
* [Installing Synapse](#installing-synapse)
* [Installing from source](#installing-from-source)
* [Platform-Specific Instructions](#platform-specific-instructions)
* [Troubleshooting Installation](#troubleshooting-installation)
* [Prebuilt packages](#prebuilt-packages)
* [Setting up Synapse](#setting-up-synapse)
* [TLS certificates](#tls-certificates)
* [Email](#email)
* [Registering a user](#registering-a-user)
* [Setting up a TURN server](#setting-up-a-turn-server)
* [URL previews](#url-previews)
- [Choosing your server name](#choosing-your-server-name)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Troubleshooting Installation](#troubleshooting-installation)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
- [Email](#email)
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
# Choosing your server name
It is important to choose the name for your server before you install Synapse,
because it cannot be changed later.
The server name determines the "domain" part of user-ids for users on your
server: these will all be of the format `@user:my.domain.name`. It also
determines how other matrix servers will reach yours for federation.
For a test configuration, set this to the hostname of your server. For a more
production-ready setup, you will probably want to specify your domain
(`example.com`) rather than a matrix-specific hostname here (in the same way
that your email address is probably `user@example.com` rather than
`user@email.example.com`) - but doing so may require more advanced setup: see
[Setting up Federation](docs/federate.md).
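For example, a sketch of generating a config with a domain-style server name, using `example.com` as a placeholder (this mirrors the install command shown later in this file):

```
python -m synapse.app.homeserver \
    --server-name example.com \
    --config-path homeserver.yaml \
    --generate-config \
    --report-stats=no
```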
# Installing Synapse
@@ -64,16 +81,7 @@ python -m synapse.app.homeserver \
--report-stats=[yes|no]
```
... substituting an appropriate value for `--server-name`. The server name
determines the "domain" part of user-ids for users on your server: these will
all be of the format `@user:my.domain.name`. It also determines how other
matrix servers will reach yours for Federation. For a test configuration,
set this to the hostname of your server. For a more production-ready setup, you
will probably want to specify your domain (`example.com`) rather than a
matrix-specific hostname here (in the same way that your email address is
probably `user@example.com` rather than `user@email.example.com`) - but
doing so may require more advanced setup: see [Setting up Federation](docs/federate.md).
Beware that the server name cannot be changed later.
... substituting an appropriate value for `--server-name`.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your Home Server to
@@ -86,9 +94,6 @@ different. See the
[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
for more information on key management.)
You will need to give Synapse a TLS certificate before it will start - see [TLS
certificates](#tls-certificates).
To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. `~/synapse`), and::
@@ -395,8 +400,9 @@ To configure Synapse to expose an HTTPS port, you will need to edit
instance, if using certbot, use `fullchain.pem` as your certificate, not
`cert.pem`).
For those of you upgrading your TLS certificate for Synapse 1.0 compliance,
please take a look at [our guide](docs/MSC1711_certificates_FAQ.md#configuring-certificates-for-compatibility-with-synapse-100).
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md)
## Email

View File

@@ -7,6 +7,7 @@ include demo/README
include demo/demo.tls.dh
include demo/*.py
include demo/*.sh
include sytest-blacklist
recursive-include synapse/storage/schema *.sql
recursive-include synapse/storage/schema *.sql.postgres

View File

@@ -272,7 +272,7 @@ to install using pip and a virtualenv::
virtualenv -p python3 env
source env/bin/activate
python -m pip install --no-pep-517 -e .[all]
python -m pip install --no-use-pep517 -e .[all]
This will run a process of downloading and installing all the needed
dependencies into a virtual env.
@@ -340,8 +340,11 @@ log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #synapse:matrix.org if
you see this failure mode so we can help debug it, however.
Help!! Synapse eats all my RAM!
-------------------------------
Help!! Synapse is slow and eats all my RAM/CPU!
-----------------------------------------------
First, ensure you are running the latest version of Synapse, using Python 3
with a PostgreSQL database.
Synapse's architecture is quite RAM hungry currently - we deliberately
cache a lot of recent room data and metadata in RAM in order to speed up
@@ -352,14 +355,29 @@ variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
However, degraded performance due to a low cache factor, common on
machines with slow disks, often leads to explosions in memory use due to
backlogged requests. In this case, reducing the cache factor will make
things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
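As a sketch, assuming the cache factor is controlled by the
``SYNAPSE_CACHE_FACTOR`` environment variable referenced just above::

    # Raise the cache factor before starting Synapse on a machine with
    # slow disks, where a low factor can cause request backlogs.
    export SYNAPSE_CACHE_FACTOR=2.0
    synctl restart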
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall amount, and especially in terms of giving back RAM
to the OS. To use it, the library must simply be put in the LD_PRELOAD
environment variable when launching Synapse. On Debian, this can be done
by installing the ``libjemalloc1`` package and adding this line to
``/etc/default/matrix-synapse``::
improvement in overall memory use, and especially in terms of giving back
RAM to the OS. To use it, the library must simply be put in the
LD_PRELOAD environment variable when launching Synapse. On Debian, this
can be done by installing the ``libjemalloc1`` package and adding this
line to ``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
This can make a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.
If you're encountering high CPU use by the Synapse process itself, you
may be affected by a bug with presence tracking that leads to a
massive excess of outgoing federation requests (see `discussion
<https://github.com/matrix-org/synapse/issues/3971>`_). If metrics
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
``use_presence: false`` in the Synapse config file.
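A minimal sketch of the workaround, assuming a YAML config file at
``homeserver.yaml``::

    # Add the setting (or edit it in place if already present), then
    # restart Synapse.
    echo "use_presence: false" >> homeserver.yaml
    synctl restart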

View File

@@ -49,6 +49,40 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v1.2.0
===================
Some counter metrics have been renamed, with the old names deprecated. See
`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.
Upgrading to v1.1.0
===================
Synapse v1.1.0 removes support for older Python and PostgreSQL versions, as
outlined in `our deprecation notice <https://matrix.org/blog/2019/04/08/synapse-deprecating-postgres-9-4-and-python-2-x>`_.
Minimum Python Version
----------------------
Synapse v1.1.0 has a minimum Python requirement of Python 3.5. Python 3.6 or
Python 3.7 are recommended as they have improved internal string handling,
significantly reducing memory usage.
If you use current versions of the Matrix.org-distributed Debian packages or
Docker images, action is not required.
If you install Synapse in a Python virtual environment, please see "Upgrading to
v0.34.0" for notes on setting up a new virtualenv under Python 3.
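For virtualenv-based installs, a sketch of moving to Python 3 (paths are
illustrative)::

    # Recreate the virtualenv under Python 3, then reinstall Synapse.
    virtualenv -p python3 ~/synapse/env
    source ~/synapse/env/bin/activate
    pip install --upgrade matrix-synapse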
Minimum PostgreSQL Version
--------------------------
If using PostgreSQL under Synapse, you will need to use PostgreSQL 9.5 or above.
Please see the
`PostgreSQL documentation <https://www.postgresql.org/docs/11/upgrading.html>`_
for more details on upgrading your database.
Upgrading to v1.0
=================
@@ -71,11 +105,11 @@ server in a closed federation. This can be done in one of two ways:-
* Configure a whitelist of server domains to trust via ``federation_certificate_verification_whitelist``.
See the `sample configuration file <docs/sample_config.yaml>`_
for more details on these settings.
Email
-----
When a user requests a password reset, Synapse will send an email to the
user to confirm the request.
Previous versions of Synapse delegated the job of sending this email to an

View File

@@ -1 +0,0 @@
Fix a bug where running synapse_port_db would cause the account validity feature to fail because it didn't set the type of the email_sent column to boolean.

View File

@@ -1 +0,0 @@
Allow expired user to trigger renewal email sending manually.

View File

@@ -1 +0,0 @@
Add a sponsor button to the repo.

View File

@@ -1 +0,0 @@
Add a sponsor button to the repo.

View File

@@ -1 +0,0 @@
Add --no-daemonize option to run synapse in the foreground, per issue #4130. Contributed by Soham Gumaste.

View File

@@ -1 +0,0 @@
Fix bug where attempting to send transactions with large number of EDUs can fail.

View File

@@ -1 +0,0 @@
Fully support SAML2 authentication. Contributed by [Alexander Trost](https://github.com/galexrt) - thank you!

View File

@@ -15,6 +15,7 @@
# limitations under the License.
""" Starts a synapse client console. """
from __future__ import print_function
from twisted.internet import reactor, defer, threads
from http import TwistedHttpClient
@@ -36,9 +37,8 @@ from signedjson.sign import verify_signed_json, SignatureVerifyException
CONFIG_JSON = "cmdclient_config.json"
TRUSTED_ID_SERVERS = [
'localhost:8001'
]
TRUSTED_ID_SERVERS = ["localhost:8001"]
class SynapseCmd(cmd.Cmd):
@@ -58,7 +58,7 @@ class SynapseCmd(cmd.Cmd):
"token": token,
"verbose": "on",
"complete_usernames": "on",
"send_delivery_receipts": "on"
"send_delivery_receipts": "on",
}
self.path_prefix = "/_matrix/client/api/v1"
self.event_stream_token = "END"
@@ -109,7 +109,7 @@ class SynapseCmd(cmd.Cmd):
by using $. E.g. 'config roomid room1' then 'raw get /rooms/$roomid'.
"""
if len(line) == 0:
print json.dumps(self.config, indent=4)
print(json.dumps(self.config, indent=4))
return
try:
@@ -119,12 +119,11 @@ class SynapseCmd(cmd.Cmd):
config_rules = [ # key, valid_values
("verbose", ["on", "off"]),
("complete_usernames", ["on", "off"]),
("send_delivery_receipts", ["on", "off"])
("send_delivery_receipts", ["on", "off"]),
]
for key, valid_vals in config_rules:
if key == args["key"] and args["val"] not in valid_vals:
print "%s value must be one of %s" % (args["key"],
valid_vals)
print("%s value must be one of %s" % (args["key"], valid_vals))
return
# toggle the http client verbosity
@@ -133,11 +132,11 @@ class SynapseCmd(cmd.Cmd):
# assign the new config
self.config[args["key"]] = args["val"]
print json.dumps(self.config, indent=4)
print(json.dumps(self.config, indent=4))
save_config(self.config)
except Exception as e:
print e
print(e)
def do_register(self, line):
"""Registers for a new account: "register <userid> <noupdate>"
@@ -153,33 +152,32 @@ class SynapseCmd(cmd.Cmd):
pwd = getpass.getpass("Type a password for this user: ")
pwd2 = getpass.getpass("Retype the password: ")
if pwd != pwd2 or len(pwd) == 0:
print "Password mismatch."
print("Password mismatch.")
pwd = None
else:
password = pwd
body = {
"type": "m.login.password"
}
body = {"type": "m.login.password"}
if "userid" in args:
body["user"] = args["userid"]
if password:
body["password"] = password
reactor.callFromThread(self._do_register, body,
"noupdate" not in args)
reactor.callFromThread(self._do_register, body, "noupdate" not in args)
@defer.inlineCallbacks
def _do_register(self, data, update_config):
# check the registration flows
url = self._url() + "/register"
json_res = yield self.http_client.do_request("GET", url)
print json.dumps(json_res, indent=4)
print(json.dumps(json_res, indent=4))
passwordFlow = None
for flow in json_res["flows"]:
if flow["type"] == "m.login.recaptcha" or ("stages" in flow and "m.login.recaptcha" in flow["stages"]):
print "Unable to register: Home server requires captcha."
if flow["type"] == "m.login.recaptcha" or (
"stages" in flow and "m.login.recaptcha" in flow["stages"]
):
print("Unable to register: Home server requires captcha.")
return
if flow["type"] == "m.login.password" and "stages" not in flow:
passwordFlow = flow
@@ -189,7 +187,7 @@ class SynapseCmd(cmd.Cmd):
return
json_res = yield self.http_client.do_request("POST", url, data=data)
print json.dumps(json_res, indent=4)
print(json.dumps(json_res, indent=4))
if update_config and "user_id" in json_res:
self.config["user"] = json_res["user_id"]
self.config["token"] = json_res["access_token"]
@@ -201,9 +199,7 @@ class SynapseCmd(cmd.Cmd):
"""
try:
args = self._parse(line, ["user_id"], force_keys=True)
can_login = threads.blockingCallFromThread(
reactor,
self._check_can_login)
can_login = threads.blockingCallFromThread(reactor, self._check_can_login)
if can_login:
p = getpass.getpass("Enter your password: ")
user = args["user_id"]
@@ -211,29 +207,25 @@ class SynapseCmd(cmd.Cmd):
domain = self._domain()
if domain:
user = "@" + user + ":" + domain
reactor.callFromThread(self._do_login, user, p)
#print " got %s " % p
# print " got %s " % p
except Exception as e:
print e
print(e)
@defer.inlineCallbacks
def _do_login(self, user, password):
path = "/login"
data = {
"user": user,
"password": password,
"type": "m.login.password"
}
data = {"user": user, "password": password, "type": "m.login.password"}
url = self._url() + path
json_res = yield self.http_client.do_request("POST", url, data=data)
print json_res
print(json_res)
if "access_token" in json_res:
self.config["user"] = user
self.config["token"] = json_res["access_token"]
save_config(self.config)
print "Login successful."
print("Login successful.")
@defer.inlineCallbacks
def _check_can_login(self):
@@ -242,18 +234,19 @@ class SynapseCmd(cmd.Cmd):
# submitting!
url = self._url() + path
json_res = yield self.http_client.do_request("GET", url)
print json_res
print(json_res)
if "flows" not in json_res:
print "Failed to find any login flows."
print("Failed to find any login flows.")
defer.returnValue(False)
flow = json_res["flows"][0] # assume first is the one we want.
if ("type" not in flow or "m.login.password" != flow["type"] or
"stages" in flow):
flow = json_res["flows"][0] # assume first is the one we want.
if "type" not in flow or "m.login.password" != flow["type"] or "stages" in flow:
fallback_url = self._url() + "/login/fallback"
print ("Unable to login via the command line client. Please visit "
"%s to login." % fallback_url)
print(
"Unable to login via the command line client. Please visit "
"%s to login." % fallback_url
)
defer.returnValue(False)
defer.returnValue(True)
@@ -263,21 +256,33 @@ class SynapseCmd(cmd.Cmd):
<clientSecret> A string of characters generated when requesting an email that you'll supply in subsequent calls to identify yourself
<sendAttempt> The number of times the user has requested an email. Leave this the same between requests to retry the request at the transport level. Increment it to request that the email be sent again.
"""
args = self._parse(line, ['address', 'clientSecret', 'sendAttempt'])
args = self._parse(line, ["address", "clientSecret", "sendAttempt"])
postArgs = {'email': args['address'], 'clientSecret': args['clientSecret'], 'sendAttempt': args['sendAttempt']}
postArgs = {
"email": args["address"],
"clientSecret": args["clientSecret"],
"sendAttempt": args["sendAttempt"],
}
reactor.callFromThread(self._do_emailrequest, postArgs)
@defer.inlineCallbacks
def _do_emailrequest(self, args):
url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/requestToken"
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/requestToken"
)
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print json_res
if 'sid' in json_res:
print "Token sent. Your session ID is %s" % (json_res['sid'])
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
print(json_res)
if "sid" in json_res:
print("Token sent. Your session ID is %s" % (json_res["sid"]))
def do_emailvalidate(self, line):
"""Validate and associate a third party ID
@@ -285,39 +290,56 @@ class SynapseCmd(cmd.Cmd):
<token> The token sent to your third party identifier address
<clientSecret> The same clientSecret you supplied in requestToken
"""
args = self._parse(line, ['sid', 'token', 'clientSecret'])
args = self._parse(line, ["sid", "token", "clientSecret"])
postArgs = { 'sid' : args['sid'], 'token' : args['token'], 'clientSecret': args['clientSecret'] }
postArgs = {
"sid": args["sid"],
"token": args["token"],
"clientSecret": args["clientSecret"],
}
reactor.callFromThread(self._do_emailvalidate, postArgs)
@defer.inlineCallbacks
def _do_emailvalidate(self, args):
url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/submitToken"
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/submitToken"
)
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print json_res
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
print(json_res)
def do_3pidbind(self, line):
"""Validate and associate a third party ID
<sid> The session ID (sid) given to you in the response to requestToken
<clientSecret> The same clientSecret you supplied in requestToken
"""
args = self._parse(line, ['sid', 'clientSecret'])
args = self._parse(line, ["sid", "clientSecret"])
postArgs = { 'sid' : args['sid'], 'clientSecret': args['clientSecret'] }
postArgs['mxid'] = self.config["user"]
postArgs = {"sid": args["sid"], "clientSecret": args["clientSecret"]}
postArgs["mxid"] = self.config["user"]
reactor.callFromThread(self._do_3pidbind, postArgs)
@defer.inlineCallbacks
def _do_3pidbind(self, args):
url = self._identityServerUrl()+"/_matrix/identity/api/v1/3pid/bind"
url = self._identityServerUrl() + "/_matrix/identity/api/v1/3pid/bind"
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print json_res
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
print(json_res)
def do_join(self, line):
"""Joins a room: "join <roomid>" """
@@ -325,7 +347,7 @@ class SynapseCmd(cmd.Cmd):
args = self._parse(line, ["roomid"], force_keys=True)
self._do_membership_change(args["roomid"], "join", self._usr())
except Exception as e:
print e
print(e)
def do_joinalias(self, line):
try:
@@ -333,7 +355,7 @@ class SynapseCmd(cmd.Cmd):
path = "/join/%s" % urllib.quote(args["roomname"])
reactor.callFromThread(self._run_and_pprint, "POST", path, {})
except Exception as e:
print e
print(e)
def do_topic(self, line):
""""topic [set|get] <roomid> [<newtopic>]"
@@ -343,26 +365,24 @@ class SynapseCmd(cmd.Cmd):
try:
args = self._parse(line, ["action", "roomid", "topic"])
if "action" not in args or "roomid" not in args:
print "Must specify set|get and a room ID."
print("Must specify set|get and a room ID.")
return
if args["action"].lower() not in ["set", "get"]:
print "Must specify set|get, not %s" % args["action"]
print("Must specify set|get, not %s" % args["action"])
return
path = "/rooms/%s/topic" % urllib.quote(args["roomid"])
if args["action"].lower() == "set":
if "topic" not in args:
print "Must specify a new topic."
print("Must specify a new topic.")
return
body = {
"topic": args["topic"]
}
body = {"topic": args["topic"]}
reactor.callFromThread(self._run_and_pprint, "PUT", path, body)
elif args["action"].lower() == "get":
reactor.callFromThread(self._run_and_pprint, "GET", path)
except Exception as e:
print e
print(e)
def do_invite(self, line):
"""Invite a user to a room: "invite <userid> <roomid>" """
@@ -373,49 +393,64 @@ class SynapseCmd(cmd.Cmd):
reactor.callFromThread(self._do_invite, args["roomid"], user_id)
except Exception as e:
print e
print(e)
@defer.inlineCallbacks
def _do_invite(self, roomid, userstring):
if (not userstring.startswith('@') and
self._is_on("complete_usernames")):
url = self._identityServerUrl()+"/_matrix/identity/api/v1/lookup"
if not userstring.startswith("@") and self._is_on("complete_usernames"):
url = self._identityServerUrl() + "/_matrix/identity/api/v1/lookup"
json_res = yield self.http_client.do_request("GET", url, qparams={'medium':'email','address':userstring})
json_res = yield self.http_client.do_request(
"GET", url, qparams={"medium": "email", "address": userstring}
)
mxid = None
if 'mxid' in json_res and 'signatures' in json_res:
url = self._identityServerUrl()+"/_matrix/identity/api/v1/pubkey/ed25519"
if "mxid" in json_res and "signatures" in json_res:
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/pubkey/ed25519"
)
pubKey = None
pubKeyObj = yield self.http_client.do_request("GET", url)
if 'public_key' in pubKeyObj:
pubKey = nacl.signing.VerifyKey(pubKeyObj['public_key'], encoder=nacl.encoding.HexEncoder)
if "public_key" in pubKeyObj:
pubKey = nacl.signing.VerifyKey(
pubKeyObj["public_key"], encoder=nacl.encoding.HexEncoder
)
else:
print "No public key found in pubkey response!"
print("No public key found in pubkey response!")
sigValid = False
if pubKey:
for signame in json_res['signatures']:
for signame in json_res["signatures"]:
if signame not in TRUSTED_ID_SERVERS:
print "Ignoring signature from untrusted server %s" % (signame)
print(
"Ignoring signature from untrusted server %s"
% (signame)
)
else:
try:
verify_signed_json(json_res, signame, pubKey)
sigValid = True
print "Mapping %s -> %s correctly signed by %s" % (userstring, json_res['mxid'], signame)
print(
"Mapping %s -> %s correctly signed by %s"
% (userstring, json_res["mxid"], signame)
)
break
except SignatureVerifyException as e:
print "Invalid signature from %s" % (signame)
print e
print("Invalid signature from %s" % (signame))
print(e)
if sigValid:
print "Resolved 3pid %s to %s" % (userstring, json_res['mxid'])
mxid = json_res['mxid']
print("Resolved 3pid %s to %s" % (userstring, json_res["mxid"]))
mxid = json_res["mxid"]
else:
print "Got association for %s but couldn't verify signature" % (userstring)
print(
"Got association for %s but couldn't verify signature"
% (userstring)
)
if not mxid:
mxid = "@" + userstring + ":" + self._domain()
@@ -428,18 +463,17 @@ class SynapseCmd(cmd.Cmd):
args = self._parse(line, ["roomid"], force_keys=True)
self._do_membership_change(args["roomid"], "leave", self._usr())
except Exception as e:
print e
print(e)
def do_send(self, line):
"""Sends a message. "send <roomid> <body>" """
args = self._parse(line, ["roomid", "body"])
txn_id = "txn%s" % int(time.time())
path = "/rooms/%s/send/m.room.message/%s" % (urllib.quote(args["roomid"]),
txn_id)
body_json = {
"msgtype": "m.text",
"body": args["body"]
}
path = "/rooms/%s/send/m.room.message/%s" % (
urllib.quote(args["roomid"]),
txn_id,
)
body_json = {"msgtype": "m.text", "body": args["body"]}
reactor.callFromThread(self._run_and_pprint, "PUT", path, body_json)
def do_list(self, line):
@@ -453,10 +487,10 @@ class SynapseCmd(cmd.Cmd):
"""
args = self._parse(line, ["type", "roomid", "qp"])
if not "type" in args or not "roomid" in args:
print "Must specify type and room ID."
print("Must specify type and room ID.")
return
if args["type"] not in ["members", "messages"]:
print "Unrecognised type: %s" % args["type"]
print("Unrecognised type: %s" % args["type"])
return
room_id = args["roomid"]
path = "/rooms/%s/%s" % (urllib.quote(room_id), args["type"])
@@ -468,11 +502,10 @@ class SynapseCmd(cmd.Cmd):
key_value = key_value_str.split("=")
qp[key_value[0]] = key_value[1]
except:
print "Bad query param: %s" % key_value
print("Bad query param: %s" % key_value)
return
reactor.callFromThread(self._run_and_pprint, "GET", path,
query_params=qp)
reactor.callFromThread(self._run_and_pprint, "GET", path, query_params=qp)
def do_create(self, line):
"""Creates a room.
@@ -508,14 +541,22 @@ class SynapseCmd(cmd.Cmd):
args = self._parse(line, ["method", "path", "data"])
# sanity check
if "method" not in args or "path" not in args:
print "Must specify path and method."
print("Must specify path and method.")
return
args["method"] = args["method"].upper()
valid_methods = ["PUT", "GET", "POST", "DELETE",
"XPUT", "XGET", "XPOST", "XDELETE"]
valid_methods = [
"PUT",
"GET",
"POST",
"DELETE",
"XPUT",
"XGET",
"XPOST",
"XDELETE",
]
if args["method"] not in valid_methods:
print "Unsupported method: %s" % args["method"]
print("Unsupported method: %s" % args["method"])
return
if "data" not in args:
@@ -524,7 +565,7 @@ class SynapseCmd(cmd.Cmd):
try:
args["data"] = json.loads(args["data"])
except Exception as e:
print "Data is not valid JSON. %s" % e
print("Data is not valid JSON. %s" % e)
return
qp = {"access_token": self._tok()}
@@ -540,10 +581,13 @@ class SynapseCmd(cmd.Cmd):
except:
pass
reactor.callFromThread(self._run_and_pprint, args["method"],
args["path"],
args["data"],
query_params=qp)
reactor.callFromThread(
self._run_and_pprint,
args["method"],
args["path"],
args["data"],
query_params=qp,
)
def do_stream(self, line):
"""Stream data from the server: "stream <longpoll timeout ms>" """
@@ -553,26 +597,29 @@ class SynapseCmd(cmd.Cmd):
try:
timeout = int(args["timeout"])
except ValueError:
print "Timeout must be in milliseconds."
print("Timeout must be in milliseconds.")
return
reactor.callFromThread(self._do_event_stream, timeout)
@defer.inlineCallbacks
def _do_event_stream(self, timeout):
res = yield self.http_client.get_json(
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token
})
print json.dumps(res, indent=4)
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token,
},
)
print(json.dumps(res, indent=4))
if "chunk" in res:
for event in res["chunk"]:
if (event["type"] == "m.room.message" and
self._is_on("send_delivery_receipts") and
event["user_id"] != self._usr()): # not sent by us
if (
event["type"] == "m.room.message"
and self._is_on("send_delivery_receipts")
and event["user_id"] != self._usr()
): # not sent by us
self._send_receipt(event, "d")
# update the position in the stream
@@ -580,18 +627,28 @@ class SynapseCmd(cmd.Cmd):
self.event_stream_token = res["end"]
def _send_receipt(self, event, feedback_type):
path = ("/rooms/%s/messages/%s/%s/feedback/%s/%s" %
(urllib.quote(event["room_id"]), event["user_id"], event["msg_id"],
self._usr(), feedback_type))
path = "/rooms/%s/messages/%s/%s/feedback/%s/%s" % (
urllib.quote(event["room_id"]),
event["user_id"],
event["msg_id"],
self._usr(),
feedback_type,
)
data = {}
reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data,
alt_text="Sent receipt for %s" % event["msg_id"])
reactor.callFromThread(
self._run_and_pprint,
"PUT",
path,
data=data,
alt_text="Sent receipt for %s" % event["msg_id"],
)
def _do_membership_change(self, roomid, membership, userid):
path = "/rooms/%s/state/m.room.member/%s" % (urllib.quote(roomid), urllib.quote(userid))
data = {
"membership": membership
}
path = "/rooms/%s/state/m.room.member/%s" % (
urllib.quote(roomid),
urllib.quote(userid),
)
data = {"membership": membership}
reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data)
def do_displayname(self, line):
@@ -644,15 +701,20 @@ class SynapseCmd(cmd.Cmd):
for i, arg in enumerate(line_args):
for config_key in self.config:
if ("$" + config_key) in arg:
arg = arg.replace("$" + config_key,
self.config[config_key])
arg = arg.replace("$" + config_key, self.config[config_key])
line_args[i] = arg
return dict(zip(keys, line_args))
@defer.inlineCallbacks
def _run_and_pprint(self, method, path, data=None,
query_params={"access_token": None}, alt_text=None):
def _run_and_pprint(
self,
method,
path,
data=None,
query_params={"access_token": None},
alt_text=None,
):
""" Runs an HTTP request and pretty prints the output.
Args:
@@ -665,31 +727,31 @@ class SynapseCmd(cmd.Cmd):
if "access_token" in query_params:
query_params["access_token"] = self._tok()
json_res = yield self.http_client.do_request(method, url,
data=data,
qparams=query_params)
json_res = yield self.http_client.do_request(
method, url, data=data, qparams=query_params
)
if alt_text:
print alt_text
print(alt_text)
else:
print json.dumps(json_res, indent=4)
print(json.dumps(json_res, indent=4))
def save_config(config):
with open(CONFIG_JSON, 'w') as out:
with open(CONFIG_JSON, "w") as out:
json.dump(config, out)
def main(server_url, identity_server_url, username, token, config_path):
print "Synapse command line client"
print "==========================="
print "Server: %s" % server_url
print "Type 'help' to get started."
print "Close this console with CTRL+C then CTRL+D."
print("Synapse command line client")
print("===========================")
print("Server: %s" % server_url)
print("Type 'help' to get started.")
print("Close this console with CTRL+C then CTRL+D.")
if not username or not token:
print "- 'register <username>' - Register an account"
print "- 'stream' - Connect to the event stream"
print "- 'create <roomid>' - Create a room"
print "- 'send <roomid> <message>' - Send a message"
print("- 'register <username>' - Register an account")
print("- 'stream' - Connect to the event stream")
print("- 'create <roomid>' - Create a room")
print("- 'send <roomid> <message>' - Send a message")
http_client = TwistedHttpClient()
# the command line client
@@ -699,13 +761,13 @@ def main(server_url, identity_server_url, username, token, config_path):
global CONFIG_JSON
CONFIG_JSON = config_path # bit cheeky, but just overwrite the global
try:
with open(config_path, 'r') as config:
with open(config_path, "r") as config:
syn_cmd.config = json.load(config)
try:
http_client.verbose = "on" == syn_cmd.config["verbose"]
except:
pass
print "Loaded config from %s" % config_path
print("Loaded config from %s" % config_path)
except:
pass
@@ -716,27 +778,37 @@ def main(server_url, identity_server_url, username, token, config_path):
reactor.run()
if __name__ == '__main__':
if __name__ == "__main__":
parser = argparse.ArgumentParser("Starts a synapse client.")
parser.add_argument(
"-s", "--server", dest="server", default="http://localhost:8008",
help="The URL of the home server to talk to.")
"-s",
"--server",
dest="server",
default="http://localhost:8008",
help="The URL of the home server to talk to.",
)
parser.add_argument(
"-i", "--identity-server", dest="identityserver", default="http://localhost:8090",
help="The URL of the identity server to talk to.")
"-i",
"--identity-server",
dest="identityserver",
default="http://localhost:8090",
help="The URL of the identity server to talk to.",
)
parser.add_argument(
"-u", "--username", dest="username",
help="Your username on the server.")
"-u", "--username", dest="username", help="Your username on the server."
)
parser.add_argument("-t", "--token", dest="token", help="Your access token.")
parser.add_argument(
"-t", "--token", dest="token",
help="Your access token.")
parser.add_argument(
"-c", "--config", dest="config", default=CONFIG_JSON,
help="The location of the config.json file to read from.")
"-c",
"--config",
dest="config",
default=CONFIG_JSON,
help="The location of the config.json file to read from.",
)
args = parser.parse_args()
if not args.server:
print "You must supply a server URL to communicate with."
print("You must supply a server URL to communicate with.")
parser.print_help()
sys.exit(1)


@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function
from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers
from twisted.internet import defer, reactor
@@ -72,9 +73,7 @@ class TwistedHttpClient(HttpClient):
@defer.inlineCallbacks
def put_json(self, url, data):
response = yield self._create_put_request(
url,
data,
headers_dict={"Content-Type": ["application/json"]}
url, data, headers_dict={"Content-Type": ["application/json"]}
)
body = yield readBody(response)
defer.returnValue((response.code, body))
@@ -94,40 +93,34 @@ class TwistedHttpClient(HttpClient):
"""
if "Content-Type" not in headers_dict:
raise defer.error(
RuntimeError("Must include Content-Type header for PUTs"))
raise defer.error(RuntimeError("Must include Content-Type header for PUTs"))
return self._create_request(
"PUT",
url,
producer=_JsonProducer(json_data),
headers_dict=headers_dict
"PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict
)
def _create_get_request(self, url, headers_dict={}):
""" Wrapper of _create_request to issue a GET request
"""
return self._create_request(
"GET",
url,
headers_dict=headers_dict
)
return self._create_request("GET", url, headers_dict=headers_dict)
@defer.inlineCallbacks
def do_request(self, method, url, data=None, qparams=None, jsonreq=True, headers={}):
def do_request(
self, method, url, data=None, qparams=None, jsonreq=True, headers={}
):
if qparams:
url = "%s?%s" % (url, urllib.urlencode(qparams, True))
if jsonreq:
prod = _JsonProducer(data)
headers['Content-Type'] = ["application/json"];
headers["Content-Type"] = ["application/json"]
else:
prod = _RawProducer(data)
if method in ["POST", "PUT"]:
response = yield self._create_request(method, url,
producer=prod,
headers_dict=headers)
response = yield self._create_request(
method, url, producer=prod, headers_dict=headers
)
else:
response = yield self._create_request(method, url)
@@ -141,27 +134,24 @@ class TwistedHttpClient(HttpClient):
headers_dict["User-Agent"] = ["Synapse Cmd Client"]
retries_left = 5
print "%s to %s with headers %s" % (method, url, headers_dict)
print("%s to %s with headers %s" % (method, url, headers_dict))
if self.verbose and producer:
if "password" in producer.data:
temp = producer.data["password"]
producer.data["password"] = "[REDACTED]"
print json.dumps(producer.data, indent=4)
print(json.dumps(producer.data, indent=4))
producer.data["password"] = temp
else:
print json.dumps(producer.data, indent=4)
print(json.dumps(producer.data, indent=4))
while True:
try:
response = yield self.agent.request(
method,
url.encode("UTF8"),
Headers(headers_dict),
producer
method, url.encode("UTF8"), Headers(headers_dict), producer
)
break
except Exception as e:
print "uh oh: %s" % e
print("uh oh: %s" % e)
if retries_left:
yield self.sleep(2 ** (5 - retries_left))
retries_left -= 1
@@ -169,8 +159,8 @@ class TwistedHttpClient(HttpClient):
raise e
if self.verbose:
print "Status %s %s" % (response.code, response.phrase)
print pformat(list(response.headers.getAllRawHeaders()))
print("Status %s %s" % (response.code, response.phrase))
print(pformat(list(response.headers.getAllRawHeaders())))
defer.returnValue(response)
def sleep(self, seconds):
@@ -178,6 +168,7 @@ class TwistedHttpClient(HttpClient):
reactor.callLater(seconds, d.callback, seconds)
return d
class _RawProducer(object):
def __init__(self, data):
self.data = data
@@ -194,9 +185,11 @@ class _RawProducer(object):
def stopProducing(self):
pass
class _JsonProducer(object):
""" Used by the twisted http client to create the HTTP body from json
"""
def __init__(self, jsn):
self.data = jsn
self.body = json.dumps(jsn).encode("utf8")


@@ -1,5 +1,9 @@
# Synapse Docker
FIXME: this is out-of-date as of
https://github.com/matrix-org/synapse/issues/5518. Contributions to bring it up
to date would be welcome.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including


@@ -1,7 +1,7 @@
# Example log_config file for synapse. To enable, point `log_config` to it in
# Example log_config file for synapse. To enable, point `log_config` to it in
# `homeserver.yaml`, and restart synapse.
#
# This configuration will produce similar results to the defaults within
# This configuration will produce similar results to the defaults within
# synapse, but can be edited to give more flexibility.
version: 1
@@ -12,7 +12,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
@@ -35,7 +35,7 @@ handlers:
root:
level: INFO
handlers: [console] # to use file handler instead, switch to [file]
loggers:
synapse:
level: INFO


@@ -19,13 +19,13 @@ from curses.ascii import isprint
from twisted.internet import reactor
class CursesStdIO():
class CursesStdIO:
def __init__(self, stdscr, callback=None):
self.statusText = "Synapse test app -"
self.searchText = ''
self.searchText = ""
self.stdscr = stdscr
self.logLine = ''
self.logLine = ""
self.callback = callback
@@ -71,8 +71,7 @@ class CursesStdIO():
i = 0
index = len(self.lines) - 1
while i < (self.rows - 3) and index >= 0:
self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index],
curses.A_NORMAL)
self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index], curses.A_NORMAL)
i = i + 1
index = index - 1
@@ -85,15 +84,13 @@ class CursesStdIO():
raise RuntimeError("TextTooLongError")
self.stdscr.addstr(
self.rows - 2, 0,
text + ' ' * (self.cols - len(text)),
curses.A_STANDOUT)
self.rows - 2, 0, text + " " * (self.cols - len(text)), curses.A_STANDOUT
)
def printLogLine(self, text):
self.stdscr.addstr(
0, 0,
text + ' ' * (self.cols - len(text)),
curses.A_STANDOUT)
0, 0, text + " " * (self.cols - len(text)), curses.A_STANDOUT
)
def doRead(self):
""" Input is ready! """
@@ -105,7 +102,7 @@ class CursesStdIO():
elif c == curses.KEY_ENTER or c == 10:
text = self.searchText
self.searchText = ''
self.searchText = ""
self.print_line(">> %s" % text)
@@ -122,11 +119,13 @@ class CursesStdIO():
return
self.searchText = self.searchText + chr(c)
self.stdscr.addstr(self.rows - 1, 0,
self.searchText + (' ' * (
self.cols - len(self.searchText) - 2)))
self.stdscr.addstr(
self.rows - 1,
0,
self.searchText + (" " * (self.cols - len(self.searchText) - 2)),
)
self.paintStatus(self.statusText + ' %d' % len(self.searchText))
self.paintStatus(self.statusText + " %d" % len(self.searchText))
self.stdscr.move(self.rows - 1, len(self.searchText))
self.stdscr.refresh()
@@ -143,7 +142,6 @@ class CursesStdIO():
class Callback(object):
def __init__(self, stdio):
self.stdio = stdio
@@ -152,7 +150,7 @@ class Callback(object):
def main(stdscr):
screen = CursesStdIO(stdscr) # create Screen object
screen = CursesStdIO(stdscr) # create Screen object
callback = Callback(screen)
@@ -164,5 +162,5 @@ def main(stdscr):
screen.close()
if __name__ == '__main__':
if __name__ == "__main__":
curses.wrapper(main)


@@ -28,9 +28,7 @@ Currently assumes the local address is localhost:<port>
"""
from synapse.federation import (
ReplicationHandler
)
from synapse.federation import ReplicationHandler
from synapse.federation.units import Pdu
@@ -38,7 +36,7 @@ from synapse.util import origin_from_ucid
from synapse.app.homeserver import SynapseHomeServer
#from synapse.util.logutils import log_function
# from synapse.logging.utils import log_function
from twisted.internet import reactor, defer
from twisted.python import log
@@ -83,7 +81,7 @@ class InputOutput(object):
room_name, = m.groups()
self.print_line("%s joining %s" % (self.user, room_name))
self.server.join_room(room_name, self.user, self.user)
#self.print_line("OK.")
# self.print_line("OK.")
return
m = re.match("^invite (\S+) (\S+)$", line)
@@ -92,7 +90,7 @@ class InputOutput(object):
room_name, invitee = m.groups()
self.print_line("%s invited to %s" % (invitee, room_name))
self.server.invite_to_room(room_name, self.user, invitee)
#self.print_line("OK.")
# self.print_line("OK.")
return
m = re.match("^send (\S+) (.*)$", line)
@@ -101,7 +99,7 @@ class InputOutput(object):
room_name, body = m.groups()
self.print_line("%s send to %s" % (self.user, room_name))
self.server.send_message(room_name, self.user, body)
#self.print_line("OK.")
# self.print_line("OK.")
return
m = re.match("^backfill (\S+)$", line)
@@ -125,7 +123,6 @@ class InputOutput(object):
class IOLoggerHandler(logging.Handler):
def __init__(self, io):
logging.Handler.__init__(self)
self.io = io
@@ -142,6 +139,7 @@ class Room(object):
""" Used to store (in memory) the current membership state of a room, and
which home servers we should send PDUs associated with the room to.
"""
def __init__(self, room_name):
self.room_name = room_name
self.invited = set()
@@ -175,6 +173,7 @@ class HomeServer(ReplicationHandler):
""" A very basic home server implentation that allows people to join a
room and then invite other people.
"""
def __init__(self, server_name, replication_layer, output):
self.server_name = server_name
self.replication_layer = replication_layer
@@ -197,26 +196,27 @@ class HomeServer(ReplicationHandler):
elif pdu.content["membership"] == "invite":
self._on_invite(pdu.origin, pdu.context, pdu.state_key)
else:
self.output.print_line("#%s (unrec) %s = %s" %
(pdu.context, pdu.pdu_type, json.dumps(pdu.content))
self.output.print_line(
"#%s (unrec) %s = %s"
% (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
)
#def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
# def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
#if "joinee" in pdu.content:
#self._on_join(pdu.context, pdu.content["joinee"])
#elif "invitee" in pdu.content:
#self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
# if "joinee" in pdu.content:
# self._on_join(pdu.context, pdu.content["joinee"])
# elif "invitee" in pdu.content:
# self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
def _on_message(self, pdu):
""" We received a message
"""
self.output.print_line("#%s %s %s" %
(pdu.context, pdu.content["sender"], pdu.content["body"])
)
self.output.print_line(
"#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"])
)
def _on_join(self, context, joinee):
""" Someone has joined a room, either a remote user or a local user
@@ -224,9 +224,7 @@ class HomeServer(ReplicationHandler):
room = self._get_or_create_room(context)
room.add_participant(joinee)
self.output.print_line("#%s %s %s" %
(context, joinee, "*** JOINED")
)
self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED"))
def _on_invite(self, origin, context, invitee):
""" Someone has been invited
@@ -234,9 +232,7 @@ class HomeServer(ReplicationHandler):
room = self._get_or_create_room(context)
room.add_invited(invitee)
self.output.print_line("#%s %s %s" %
(context, invitee, "*** INVITED")
)
self.output.print_line("#%s %s %s" % (context, invitee, "*** INVITED"))
if not room.have_got_metadata and origin is not self.server_name:
logger.debug("Get room state")
@@ -272,14 +268,14 @@ class HomeServer(ReplicationHandler):
try:
pdu = Pdu.create_new(
context=room_name,
pdu_type="sy.room.member",
is_state=True,
state_key=joinee,
content={"membership": "join"},
origin=self.server_name,
destinations=destinations,
)
context=room_name,
pdu_type="sy.room.member",
is_state=True,
state_key=joinee,
content={"membership": "join"},
origin=self.server_name,
destinations=destinations,
)
yield self.replication_layer.send_pdu(pdu)
except Exception as e:
logger.exception(e)
@@ -318,21 +314,21 @@ class HomeServer(ReplicationHandler):
return self.replication_layer.backfill(dest, room_name, limit)
def _get_room_remote_servers(self, room_name):
return [i for i in self.joined_rooms.setdefault(room_name,).servers]
return [i for i in self.joined_rooms.setdefault(room_name).servers]
def _get_or_create_room(self, room_name):
return self.joined_rooms.setdefault(room_name, Room(room_name))
def get_servers_for_context(self, context):
return defer.succeed(
self.joined_rooms.setdefault(context, Room(context)).servers
)
self.joined_rooms.setdefault(context, Room(context)).servers
)
def main(stdscr):
parser = argparse.ArgumentParser()
parser.add_argument('user', type=str)
parser.add_argument('-v', '--verbose', action='count')
parser.add_argument("user", type=str)
parser.add_argument("-v", "--verbose", action="count")
args = parser.parse_args()
user = args.user
@@ -342,8 +338,9 @@ def main(stdscr):
root_logger = logging.getLogger()
formatter = logging.Formatter('%(asctime)s - %(name)s - %(lineno)d - '
'%(levelname)s - %(message)s')
formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(lineno)d - " "%(levelname)s - %(message)s"
)
if not os.path.exists("logs"):
os.makedirs("logs")
fh = logging.FileHandler("logs/%s" % user)

File diff suppressed because one or more lines are too long


@@ -1,3 +1,5 @@
from __future__ import print_function
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -48,7 +50,7 @@ def make_graph(pdus, room, filename_prefix):
c = colors.pop()
color_map[o] = c
except:
print "Run out of colours!"
print("Run out of colours!")
color_map[o] = "black"
graph = pydot.Dot(graph_name="Test")
@@ -57,9 +59,9 @@ def make_graph(pdus, room, filename_prefix):
name = make_name(pdu.get("pdu_id"), pdu.get("origin"))
pdu_map[name] = pdu
t = datetime.datetime.fromtimestamp(
float(pdu["ts"]) / 1000
).strftime('%Y-%m-%d %H:%M:%S,%f')
t = datetime.datetime.fromtimestamp(float(pdu["ts"]) / 1000).strftime(
"%Y-%m-%d %H:%M:%S,%f"
)
label = (
"<"
@@ -79,11 +81,7 @@ def make_graph(pdus, room, filename_prefix):
"depth": pdu.get("depth"),
}
node = pydot.Node(
name=name,
label=label,
color=color_map[pdu.get("origin")]
)
node = pydot.Node(name=name, label=label, color=color_map[pdu.get("origin")])
node_map[name] = node
graph.add_node(node)
@@ -93,7 +91,7 @@ def make_graph(pdus, room, filename_prefix):
end_name = make_name(i, o)
if end_name not in node_map:
print "%s not in nodes" % end_name
print("%s not in nodes" % end_name)
continue
edge = pydot.Edge(node_map[start_name], node_map[end_name])
@@ -107,14 +105,13 @@ def make_graph(pdus, room, filename_prefix):
if prev_state_name in node_map:
state_edge = pydot.Edge(
node_map[start_name], node_map[prev_state_name],
style='dotted'
node_map[start_name], node_map[prev_state_name], style="dotted"
)
graph.add_edge(state_edge)
graph.write('%s.dot' % filename_prefix, format='raw', prog='dot')
# graph.write_png("%s.png" % filename_prefix, prog='dot')
graph.write_svg("%s.svg" % filename_prefix, prog='dot')
graph.write("%s.dot" % filename_prefix, format="raw", prog="dot")
# graph.write_png("%s.png" % filename_prefix, prog='dot')
graph.write_svg("%s.svg" % filename_prefix, prog="dot")
def get_pdus(host, room):
@@ -130,15 +127,14 @@ def get_pdus(host, room):
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by talking "
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
)
parser.add_argument(
"-p", "--prefix", dest="prefix",
help="String to prefix output files with"
"-p", "--prefix", dest="prefix", help="String to prefix output files with"
)
parser.add_argument('host')
parser.add_argument('room')
parser.add_argument("host")
parser.add_argument("room")
args = parser.parse_args()


@@ -36,10 +36,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
args = [room_id]
if limit:
sql += (
" ORDER BY topological_ordering DESC, stream_ordering DESC "
"LIMIT ?"
)
sql += " ORDER BY topological_ordering DESC, stream_ordering DESC " "LIMIT ?"
args.append(limit)
@@ -56,9 +53,8 @@ def make_graph(db_name, room_id, file_prefix, limit):
for event in events:
c = conn.execute(
"SELECT state_group FROM event_to_state_groups "
"WHERE event_id = ?",
(event.event_id,)
"SELECT state_group FROM event_to_state_groups " "WHERE event_id = ?",
(event.event_id,),
)
res = c.fetchone()
@@ -69,7 +65,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
t = datetime.datetime.fromtimestamp(
float(event.origin_server_ts) / 1000
).strftime('%Y-%m-%d %H:%M:%S,%f')
).strftime("%Y-%m-%d %H:%M:%S,%f")
content = json.dumps(unfreeze(event.get_dict()["content"]))
@@ -93,10 +89,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
"state_group": state_group,
}
node = pydot.Node(
name=event.event_id,
label=label,
)
node = pydot.Node(name=event.event_id, label=label)
node_map[event.event_id] = node
graph.add_node(node)
@@ -106,10 +99,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
try:
end_node = node_map[prev_id]
except:
end_node = pydot.Node(
name=prev_id,
label="<<b>%s</b>>" % (prev_id,),
)
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node
graph.add_node(end_node)
@@ -121,36 +111,33 @@ def make_graph(db_name, room_id, file_prefix, limit):
if len(event_ids) <= 1:
continue
cluster = pydot.Cluster(
str(group),
label="<State Group: %s>" % (str(group),)
)
cluster = pydot.Cluster(str(group), label="<State Group: %s>" % (str(group),))
for event_id in event_ids:
cluster.add_node(node_map[event_id])
graph.add_subgraph(cluster)
graph.write('%s.dot' % file_prefix, format='raw', prog='dot')
graph.write_svg("%s.svg" % file_prefix, prog='dot')
graph.write("%s.dot" % file_prefix, format="raw", prog="dot")
graph.write_svg("%s.svg" % file_prefix, prog="dot")
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by talking "
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
)
parser.add_argument(
"-p", "--prefix", dest="prefix",
"-p",
"--prefix",
dest="prefix",
help="String to prefix output files with",
default="graph_output"
default="graph_output",
)
parser.add_argument(
"-l", "--limit",
help="Only retrieve the last N events.",
)
parser.add_argument('db')
parser.add_argument('room')
parser.add_argument("-l", "--limit", help="Only retrieve the last N events.")
parser.add_argument("db")
parser.add_argument("room")
args = parser.parse_args()


@@ -1,3 +1,5 @@
from __future__ import print_function
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -26,22 +28,22 @@ from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print "Reading lines"
print("Reading lines")
with open(file_name) as f:
lines = f.readlines()
print "Read lines"
print("Read lines")
events = [FrozenEvent(json.loads(line)) for line in lines]
print "Loaded events."
print("Loaded events.")
events.sort(key=lambda e: e.depth)
print "Sorted events"
print("Sorted events")
if limit:
events = events[-int(limit):]
events = events[-int(limit) :]
node_map = {}
@@ -50,12 +52,12 @@ def make_graph(file_name, room_id, file_prefix, limit):
for event in events:
t = datetime.datetime.fromtimestamp(
float(event.origin_server_ts) / 1000
).strftime('%Y-%m-%d %H:%M:%S,%f')
).strftime("%Y-%m-%d %H:%M:%S,%f")
content = json.dumps(unfreeze(event.get_dict()["content"]), indent=4)
content = content.replace("\n", "<br/>\n")
print content
print(content)
content = []
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
@@ -66,15 +68,16 @@ def make_graph(file_name, room_id, file_prefix, limit):
value = json.dumps(value)
content.append(
"<b>%s</b>: %s," % (
cgi.escape(key, quote=True).encode("ascii", 'xmlcharrefreplace'),
cgi.escape(value, quote=True).encode("ascii", 'xmlcharrefreplace'),
"<b>%s</b>: %s,"
% (
cgi.escape(key, quote=True).encode("ascii", "xmlcharrefreplace"),
cgi.escape(value, quote=True).encode("ascii", "xmlcharrefreplace"),
)
)
content = "<br/>\n".join(content)
print content
print(content)
label = (
"<"
@@ -94,25 +97,19 @@ def make_graph(file_name, room_id, file_prefix, limit):
"depth": event.depth,
}
node = pydot.Node(
name=event.event_id,
label=label,
)
node = pydot.Node(name=event.event_id, label=label)
node_map[event.event_id] = node
graph.add_node(node)
print "Created Nodes"
print("Created Nodes")
for event in events:
for prev_id, _ in event.prev_events:
try:
end_node = node_map[prev_id]
except:
end_node = pydot.Node(
name=prev_id,
label="<<b>%s</b>>" % (prev_id,),
)
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node
graph.add_node(end_node)
@@ -120,33 +117,33 @@ def make_graph(file_name, room_id, file_prefix, limit):
edge = pydot.Edge(node_map[event.event_id], end_node)
graph.add_edge(edge)
print "Created edges"
print("Created edges")
graph.write('%s.dot' % file_prefix, format='raw', prog='dot')
graph.write("%s.dot" % file_prefix, format="raw", prog="dot")
print "Created Dot"
print("Created Dot")
graph.write_svg("%s.svg" % file_prefix, prog='dot')
graph.write_svg("%s.svg" % file_prefix, prog="dot")
print("Created svg")
print "Created svg"
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by reading "
"from a file with line deliminated events. \n"
"Requires pydot."
"from a file with line deliminated events. \n"
"Requires pydot."
)
parser.add_argument(
"-p", "--prefix", dest="prefix",
"-p",
"--prefix",
dest="prefix",
help="String to prefix output files with",
default="graph_output"
default="graph_output",
)
parser.add_argument(
"-l", "--limit",
help="Only retrieve the last N events.",
)
parser.add_argument('event_file')
parser.add_argument('room')
parser.add_argument("-l", "--limit", help="Only retrieve the last N events.")
parser.add_argument("event_file")
parser.add_argument("room")
args = parser.parse_args()


@@ -8,8 +8,9 @@ we set the remote SDP at which point the stream ends. Our video never gets to
the bridge.
Requires:
npm install jquery jsdom
npm install jquery jsdom
"""
from __future__ import print_function
import gevent
import grequests
@@ -19,24 +20,25 @@ import urllib
import subprocess
import time
#ACCESS_TOKEN="" #
# ACCESS_TOKEN="" #
MATRIXBASE = 'https://matrix.org/_matrix/client/api/v1/'
MYUSERNAME = '@davetest:matrix.org'
MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
MYUSERNAME = "@davetest:matrix.org"
HTTPBIND = 'https://meet.jit.si/http-bind'
#HTTPBIND = 'https://jitsi.vuc.me/http-bind'
#ROOMNAME = "matrix"
HTTPBIND = "https://meet.jit.si/http-bind"
# HTTPBIND = 'https://jitsi.vuc.me/http-bind'
# ROOMNAME = "matrix"
ROOMNAME = "pibble"
HOST="guest.jit.si"
#HOST="jitsi.vuc.me"
HOST = "guest.jit.si"
# HOST="jitsi.vuc.me"
TURNSERVER="turn.guest.jit.si"
#TURNSERVER="turn.jitsi.vuc.me"
TURNSERVER = "turn.guest.jit.si"
# TURNSERVER="turn.jitsi.vuc.me"
ROOMDOMAIN = "meet.jit.si"
# ROOMDOMAIN="conference.jitsi.vuc.me"
ROOMDOMAIN="meet.jit.si"
#ROOMDOMAIN="conference.jitsi.vuc.me"
class TrivialMatrixClient:
def __init__(self, access_token):
@@ -45,38 +47,50 @@ class TrivialMatrixClient:
def getEvent(self):
while True:
url = MATRIXBASE+'events?access_token='+self.access_token+"&timeout=60000"
url = (
MATRIXBASE
+ "events?access_token="
+ self.access_token
+ "&timeout=60000"
)
if self.token:
url += "&from="+self.token
url += "&from=" + self.token
req = grequests.get(url)
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print "incoming from matrix",obj
if 'end' not in obj:
print("incoming from matrix", obj)
if "end" not in obj:
continue
self.token = obj['end']
if len(obj['chunk']):
return obj['chunk'][0]
self.token = obj["end"]
if len(obj["chunk"]):
return obj["chunk"][0]
def joinRoom(self, roomId):
url = MATRIXBASE+'rooms/'+roomId+'/join?access_token='+self.access_token
print url
headers={ 'Content-Type': 'application/json' }
req = grequests.post(url, headers=headers, data='{}')
url = MATRIXBASE + "rooms/" + roomId + "/join?access_token=" + self.access_token
print(url)
headers = {"Content-Type": "application/json"}
req = grequests.post(url, headers=headers, data="{}")
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print "response: ",obj
print("response: ", obj)
def sendEvent(self, roomId, evType, event):
url = MATRIXBASE+'rooms/'+roomId+'/send/'+evType+'?access_token='+self.access_token
print url
print json.dumps(event)
headers={ 'Content-Type': 'application/json' }
url = (
MATRIXBASE
+ "rooms/"
+ roomId
+ "/send/"
+ evType
+ "?access_token="
+ self.access_token
)
print(url)
print(json.dumps(event))
headers = {"Content-Type": "application/json"}
req = grequests.post(url, headers=headers, data=json.dumps(event))
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print "response: ",obj
print("response: ", obj)
xmppClients = {}
@@ -85,39 +99,40 @@ xmppClients = {}
def matrixLoop():
while True:
ev = matrixCli.getEvent()
print ev
if ev['type'] == 'm.room.member':
print 'membership event'
if ev['membership'] == 'invite' and ev['state_key'] == MYUSERNAME:
roomId = ev['room_id']
print "joining room %s" % (roomId)
print(ev)
if ev["type"] == "m.room.member":
print("membership event")
if ev["membership"] == "invite" and ev["state_key"] == MYUSERNAME:
roomId = ev["room_id"]
print("joining room %s" % (roomId))
matrixCli.joinRoom(roomId)
elif ev['type'] == 'm.room.message':
if ev['room_id'] in xmppClients:
print "already have a bridge for that user, ignoring"
elif ev["type"] == "m.room.message":
if ev["room_id"] in xmppClients:
print("already have a bridge for that user, ignoring")
continue
print "got message, connecting"
xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev['type'] == 'm.call.invite':
print "Incoming call"
#sdp = ev['content']['offer']['sdp']
#print "sdp: %s" % (sdp)
#xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
#gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev['type'] == 'm.call.answer':
print "Call answered"
sdp = ev['content']['answer']['sdp']
if ev['room_id'] not in xmppClients:
print "We didn't have a call for that room"
print("got message, connecting")
xmppClients[ev["room_id"]] = TrivialXmppClient(ev["room_id"], ev["user_id"])
gevent.spawn(xmppClients[ev["room_id"]].xmppLoop)
elif ev["type"] == "m.call.invite":
print("Incoming call")
# sdp = ev['content']['offer']['sdp']
# print "sdp: %s" % (sdp)
# xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
# gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev["type"] == "m.call.answer":
print("Call answered")
sdp = ev["content"]["answer"]["sdp"]
if ev["room_id"] not in xmppClients:
print("We didn't have a call for that room")
continue
# should probably check call ID too
xmppCli = xmppClients[ev['room_id']]
xmppCli = xmppClients[ev["room_id"]]
xmppCli.sendAnswer(sdp)
elif ev['type'] == 'm.call.hangup':
if ev['room_id'] in xmppClients:
xmppClients[ev['room_id']].stop()
del xmppClients[ev['room_id']]
elif ev["type"] == "m.call.hangup":
if ev["room_id"] in xmppClients:
xmppClients[ev["room_id"]].stop()
del xmppClients[ev["room_id"]]
class TrivialXmppClient:
def __init__(self, matrixRoom, userId):
@@ -131,130 +146,155 @@ class TrivialXmppClient:
def nextRid(self):
self.rid += 1
return '%d' % (self.rid)
return "%d" % (self.rid)
def sendIq(self, xml):
fullXml = "<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s'>%s</body>" % (self.nextRid(), self.sid, xml)
#print "\t>>>%s" % (fullXml)
fullXml = (
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s'>%s</body>"
% (self.nextRid(), self.sid, xml)
)
# print "\t>>>%s" % (fullXml)
return self.xmppPoke(fullXml)
def xmppPoke(self, xml):
headers = {'Content-Type': 'application/xml'}
headers = {"Content-Type": "application/xml"}
req = grequests.post(HTTPBIND, verify=False, headers=headers, data=xml)
resps = grequests.map([req])
obj = BeautifulSoup(resps[0].content)
return obj
def sendAnswer(self, answer):
print "sdp from matrix client",answer
p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--sdp'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print("sdp from matrix client", answer)
p = subprocess.Popen(
["node", "unjingle/unjingle.js", "--sdp"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
)
jingle, out_err = p.communicate(answer)
jingle = jingle % {
'tojid': self.callfrom,
'action': 'session-accept',
'initiator': self.callfrom,
'responder': self.jid,
'sid': self.callsid
"tojid": self.callfrom,
"action": "session-accept",
"initiator": self.callfrom,
"responder": self.jid,
"sid": self.callsid,
}
print "answer jingle from sdp",jingle
print("answer jingle from sdp", jingle)
res = self.sendIq(jingle)
print "reply from answer: ",res
print("reply from answer: ", res)
self.ssrcs = {}
jingleSoup = BeautifulSoup(jingle)
for cont in jingleSoup.iq.jingle.findAll('content'):
for cont in jingleSoup.iq.jingle.findAll("content"):
if cont.description:
self.ssrcs[cont['name']] = cont.description['ssrc']
print "my ssrcs:",self.ssrcs
self.ssrcs[cont["name"]] = cont.description["ssrc"]
print("my ssrcs:", self.ssrcs)
gevent.joinall([
gevent.spawn(self.advertiseSsrcs)
])
gevent.joinall([gevent.spawn(self.advertiseSsrcs)])
def advertiseSsrcs(self):
time.sleep(7)
print "SSRC spammer started"
time.sleep(7)
print("SSRC spammer started")
while self.running:
ssrcMsg = "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>" % { 'tojid': "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid), 'nick': self.userId, 'assrc': self.ssrcs['audio'], 'vssrc': self.ssrcs['video'] }
ssrcMsg = (
"<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>"
% {
"tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
"nick": self.userId,
"assrc": self.ssrcs["audio"],
"vssrc": self.ssrcs["video"],
}
)
res = self.sendIq(ssrcMsg)
print "reply from ssrc announce: ",res
print("reply from ssrc announce: ", res)
time.sleep(10)
def xmppLoop(self):
self.matrixCallId = time.time()
res = self.xmppPoke("<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' to='%s' xml:lang='en' wait='60' hold='1' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>" % (self.nextRid(), HOST))
res = self.xmppPoke(
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' to='%s' xml:lang='en' wait='60' hold='1' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>"
% (self.nextRid(), HOST)
)
print res
self.sid = res.body['sid']
print "sid %s" % (self.sid)
print(res)
self.sid = res.body["sid"]
print("sid %s" % (self.sid))
res = self.sendIq("<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='ANONYMOUS'/>")
res = self.sendIq(
"<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='ANONYMOUS'/>"
)
res = self.xmppPoke("<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s' to='%s' xml:lang='en' xmpp:restart='true' xmlns:xmpp='urn:xmpp:xbosh'/>" % (self.nextRid(), self.sid, HOST))
res = self.xmppPoke(
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s' to='%s' xml:lang='en' xmpp:restart='true' xmlns:xmpp='urn:xmpp:xbosh'/>"
% (self.nextRid(), self.sid, HOST)
)
res = self.sendIq("<iq type='set' id='_bind_auth_2' xmlns='jabber:client'><bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'/></iq>")
print res
res = self.sendIq(
"<iq type='set' id='_bind_auth_2' xmlns='jabber:client'><bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'/></iq>"
)
print(res)
self.jid = res.body.iq.bind.jid.string
print "jid: %s" % (self.jid)
self.shortJid = self.jid.split('-')[0]
print("jid: %s" % (self.jid))
self.shortJid = self.jid.split("-")[0]
res = self.sendIq("<iq type='set' id='_session_auth_2' xmlns='jabber:client'><session xmlns='urn:ietf:params:xml:ns:xmpp-session'/></iq>")
res = self.sendIq(
"<iq type='set' id='_session_auth_2' xmlns='jabber:client'><session xmlns='urn:ietf:params:xml:ns:xmpp-session'/></iq>"
)
#randomthing = res.body.iq['to']
#whatsitpart = randomthing.split('-')[0]
# randomthing = res.body.iq['to']
# whatsitpart = randomthing.split('-')[0]
#print "other random bind thing: %s" % (randomthing)
# print "other random bind thing: %s" % (randomthing)
# advertise presence to the jitsi room, with our nick
res = self.sendIq("<iq type='get' to='%s' xmlns='jabber:client' id='1:sendIQ'><services xmlns='urn:xmpp:extdisco:1'><service host='%s'/></services></iq><presence to='%s@%s/d98f6c40' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%s</nick></presence>" % (HOST, TURNSERVER, ROOMNAME, ROOMDOMAIN, self.userId))
self.muc = {'users': []}
for p in res.body.findAll('presence'):
res = self.sendIq(
"<iq type='get' to='%s' xmlns='jabber:client' id='1:sendIQ'><services xmlns='urn:xmpp:extdisco:1'><service host='%s'/></services></iq><presence to='%s@%s/d98f6c40' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%s</nick></presence>"
% (HOST, TURNSERVER, ROOMNAME, ROOMDOMAIN, self.userId)
)
self.muc = {"users": []}
for p in res.body.findAll("presence"):
u = {}
u['shortJid'] = p['from'].split('/')[1]
u["shortJid"] = p["from"].split("/")[1]
if p.c and p.c.nick:
u['nick'] = p.c.nick.string
self.muc['users'].append(u)
print "muc: ",self.muc
u["nick"] = p.c.nick.string
self.muc["users"].append(u)
print("muc: ", self.muc)
# wait for stuff
while True:
print "waiting..."
print("waiting...")
res = self.sendIq("")
print "got from stream: ",res
print("got from stream: ", res)
if res.body.iq:
jingles = res.body.iq.findAll('jingle')
jingles = res.body.iq.findAll("jingle")
if len(jingles):
self.callfrom = res.body.iq['from']
self.callfrom = res.body.iq["from"]
self.handleInvite(jingles[0])
elif 'type' in res.body and res.body['type'] == 'terminate':
elif "type" in res.body and res.body["type"] == "terminate":
self.running = False
del xmppClients[self.matrixRoom]
return
return
def handleInvite(self, jingle):
self.initiator = jingle['initiator']
self.callsid = jingle['sid']
p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--jingle'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print "raw jingle invite",str(jingle)
self.initiator = jingle["initiator"]
self.callsid = jingle["sid"]
p = subprocess.Popen(
["node", "unjingle/unjingle.js", "--jingle"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
)
print("raw jingle invite", str(jingle))
sdp, out_err = p.communicate(str(jingle))
print "transformed remote offer sdp",sdp
print("transformed remote offer sdp", sdp)
inviteEvent = {
'offer': {
'type': 'offer',
'sdp': sdp
},
'call_id': self.matrixCallId,
'version': 0,
'lifetime': 30000
"offer": {"type": "offer", "sdp": sdp},
"call_id": self.matrixCallId,
"version": 0,
"lifetime": 30000,
}
matrixCli.sendEvent(self.matrixRoom, 'm.call.invite', inviteEvent)
matrixCli.sendEvent(self.matrixRoom, "m.call.invite", inviteEvent)
matrixCli = TrivialMatrixClient(ACCESS_TOKEN)
gevent.joinall([
gevent.spawn(matrixLoop)
])
matrixCli = TrivialMatrixClient(ACCESS_TOKEN) # Undefined name
gevent.joinall([gevent.spawn(matrixLoop)])


@@ -3,7 +3,7 @@ Purge history API examples
# `purge_history.sh`
A bash file that uses the [purge history API](/docs/admin_api/README.rst) to
A bash file that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
purge all messages in a list of rooms up to a certain event. You can select a
timeframe or a number of messages that you want to keep in the room.
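For reference, the call the script wraps can also be issued directly. Below is a minimal Python sketch, assuming a server admin's access token and the `/_synapse/admin/v1/purge_history` endpoint; paths and parameters vary between Synapse versions, so treat the names here as illustrative rather than definitive:

```python
# Minimal sketch of calling the purge history admin API directly.
# Requires the third-party `requests` package. HS, ROOM_ID and
# ADMIN_TOKEN are hypothetical placeholders; substitute your own.
import requests

HS = "https://matrix.example.com"      # hypothetical homeserver URL
ROOM_ID = "!someroom:example.com"      # hypothetical room ID
ADMIN_TOKEN = "MDAx..."                # a server admin's access token

# Purge everything before a given timestamp (in ms), keeping events
# sent by local users (delete_local_events=False).
resp = requests.post(
    "%s/_synapse/admin/v1/purge_history/%s" % (HS, ROOM_ID),
    params={"access_token": ADMIN_TOKEN},
    json={"purge_up_to_ts": 1563000000000, "delete_local_events": False},
)
print(resp.status_code, resp.json())
```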
@@ -12,5 +12,5 @@ the script.
# `purge_remote_media.sh`
A bash file that uses the [purge history API](/docs/admin_api/README.rst) to
A bash file that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
purge all old cached remote media.
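Similarly, a minimal Python sketch of purging the remote media cache directly, assuming your Synapse version exposes the `/_synapse/admin/v1/purge_media_cache` endpoint:

```python
# Minimal sketch of purging old cached remote media directly.
# `before_ts` is a unix timestamp in milliseconds; remote media
# last accessed before it is deleted. Placeholders are hypothetical.
import time

import requests

HS = "https://matrix.example.com"   # hypothetical homeserver URL
ADMIN_TOKEN = "MDAx..."             # a server admin's access token

# Delete cached remote media older than 30 days.
before_ts = int((time.time() - 30 * 24 * 3600) * 1000)
resp = requests.post(
    "%s/_synapse/admin/v1/purge_media_cache" % HS,
    params={"access_token": ADMIN_TOKEN, "before_ts": before_ts},
    json={},
)
print(resp.status_code, resp.json())
```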


@@ -1,34 +1,40 @@
#!/usr/bin/env python
from __future__ import print_function
from argparse import ArgumentParser
import json
import requests
import sys
import urllib
try:
raw_input
except NameError: # Python 3
raw_input = input
def _mkurl(template, kws):
for key in kws:
template = template.replace(key, kws[key])
return template
def main(hs, room_id, access_token, user_id_prefix, why):
if not why:
why = "Automated kick."
print "Kicking members on %s in room %s matching %s" % (hs, room_id, user_id_prefix)
print(
"Kicking members on %s in room %s matching %s" % (hs, room_id, user_id_prefix)
)
room_state_url = _mkurl(
"$HS/_matrix/client/api/v1/rooms/$ROOM/state?access_token=$TOKEN",
{
"$HS": hs,
"$ROOM": room_id,
"$TOKEN": access_token
}
{"$HS": hs, "$ROOM": room_id, "$TOKEN": access_token},
)
print "Getting room state => %s" % room_state_url
print("Getting room state => %s" % room_state_url)
res = requests.get(room_state_url)
print "HTTP %s" % res.status_code
print("HTTP %s" % res.status_code)
state_events = res.json()
if "error" in state_events:
print "FATAL"
print state_events
print("FATAL")
print(state_events)
return
kick_list = []
@@ -44,47 +50,40 @@ def main(hs, room_id, access_token, user_id_prefix, why):
kick_list.append(event["state_key"])
if len(kick_list) == 0:
print "No user IDs match the prefix '%s'" % user_id_prefix
print("No user IDs match the prefix '%s'" % user_id_prefix)
return
print "The following user IDs will be kicked from %s" % room_name
print("The following user IDs will be kicked from %s" % room_name)
for uid in kick_list:
print uid
print(uid)
doit = raw_input("Continue? [Y]es\n")
if len(doit) > 0 and doit.lower() == 'y':
print "Kicking members..."
if len(doit) > 0 and doit.lower() == "y":
print("Kicking members...")
# encode them all
kick_list = [urllib.quote(uid) for uid in kick_list]
for uid in kick_list:
kick_url = _mkurl(
"$HS/_matrix/client/api/v1/rooms/$ROOM/state/m.room.member/$UID?access_token=$TOKEN",
{
"$HS": hs,
"$UID": uid,
"$ROOM": room_id,
"$TOKEN": access_token
}
{"$HS": hs, "$UID": uid, "$ROOM": room_id, "$TOKEN": access_token},
)
kick_body = {
"membership": "leave",
"reason": why
}
print "Kicking %s" % uid
kick_body = {"membership": "leave", "reason": why}
print("Kicking %s" % uid)
res = requests.put(kick_url, data=json.dumps(kick_body))
if res.status_code != 200:
print "ERROR: HTTP %s" % res.status_code
print("ERROR: HTTP %s" % res.status_code)
if res.json().get("error"):
print "ERROR: JSON %s" % res.json()
print("ERROR: JSON %s" % res.json())
if __name__ == "__main__":
parser = ArgumentParser("Kick members in a room matching a certain user ID prefix.")
parser.add_argument("-u","--user-id",help="The user ID prefix e.g. '@irc_'")
parser.add_argument("-t","--token",help="Your access_token")
parser.add_argument("-r","--room",help="The room ID to kick members in")
parser.add_argument("-s","--homeserver",help="The base HS url e.g. http://matrix.org")
parser.add_argument("-w","--why",help="Reason for the kick. Optional.")
parser.add_argument("-u", "--user-id", help="The user ID prefix e.g. '@irc_'")
parser.add_argument("-t", "--token", help="Your access_token")
parser.add_argument("-r", "--room", help="The room ID to kick members in")
parser.add_argument(
"-s", "--homeserver", help="The base HS url e.g. http://matrix.org"
)
parser.add_argument("-w", "--why", help="Reason for the kick. Optional.")
args = parser.parse_args()
if not args.room or not args.token or not args.user_id or not args.homeserver:
parser.print_help()


@@ -8,7 +8,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:


@@ -43,7 +43,7 @@ dh_virtualenv \
--preinstall="mock" \
--extra-pip-arg="--no-cache-dir" \
--extra-pip-arg="--compile" \
--extras="all"
--extras="all,systemd"
PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"

debian/changelog

@@ -1,3 +1,29 @@
matrix-synapse-py3 (1.1.0-1) UNRELEASED; urgency=medium
[ Amber Brown ]
* Update logging config defaults to match API changes in Synapse.
[ Richard van der Hoff ]
* Add Recommends and Depends for some libraries which you probably want.
-- Erik Johnston <erikj@rae> Thu, 04 Jul 2019 13:59:02 +0100
matrix-synapse-py3 (1.1.0) stable; urgency=medium
[ Silke Hofstra ]
* Include systemd-python to allow logging to the systemd journal.
[ Synapse Packaging team ]
* New synapse release 1.1.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 04 Jul 2019 11:43:41 +0100
matrix-synapse-py3 (1.0.0) stable; urgency=medium
* New synapse release 1.0.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Jun 2019 17:09:53 +0100
matrix-synapse-py3 (0.99.5.2) stable; urgency=medium
* New synapse release 0.99.5.2.

debian/control

@@ -2,16 +2,20 @@ Source: matrix-synapse-py3
Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
Build-Depends:
debhelper (>= 9),
dh-systemd,
dh-virtualenv (>= 1.1),
libsystemd-dev,
libpq-dev,
lsb-release,
python3-dev,
python3,
python3-setuptools,
python3-pip,
python3-venv,
libsqlite3-dev,
tar,
Standards-Version: 3.9.8
Homepage: https://github.com/matrix-org/synapse
@@ -28,9 +32,12 @@ Depends:
debconf,
python3-distutils|libpython3-stdlib (<< 3.6),
${misc:Depends},
${shlibs:Depends},
${synapse:pydepends},
# some of our scripts use perl, but none of them are important,
# so we put perl:Depends in Suggests rather than Depends.
Recommends:
${shlibs1:Recommends},
Suggests:
sqlite3,
${perl:Depends},

debian/log.yaml

@@ -7,7 +7,7 @@ formatters:
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:

debian/rules

@@ -3,15 +3,29 @@
# Build Debian package using https://github.com/spotify/dh-virtualenv
#
# assume we only have one package
PACKAGE_NAME:=`dh_listpackages`
override_dh_systemd_enable:
dh_systemd_enable --name=matrix-synapse
override_dh_installinit:
dh_installinit --name=matrix-synapse
# we don't really want to strip the symbols from our object files.
override_dh_strip:
override_dh_shlibdeps:
# make the postgres package's dependencies a recommendation
# rather than a hard dependency.
find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
-pshlibs1 -dRecommends
# all the other dependencies can be normal 'Depends' requirements,
# except for PIL's, which is self-contained and which confuses
# dpkg-shlibdeps.
dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2
override_dh_virtualenv:
./debian/build_virtualenv


@@ -1,9 +1,13 @@
DO NOT USE THESE DEMO SERVERS IN PRODUCTION
Requires you to have done:
python setup.py develop
The demo start.sh will start three synapse servers on ports 8080, 8081 and 8082, with host names localhost:$port. This can be easily changed to `hostname`:$port in start.sh if required.
It will also start a web server on port 8000 pointed at the webclient.
To enable the servers to communicate untrusted ssl certs are used. In order to do this the servers do not check the certs
and are configured in a highly insecure way. Do not use these configuration files in production.
stop.sh will stop the synapse servers and the webclient.


@@ -21,14 +21,76 @@ for port in 8080 8081 8082; do
pushd demo/$port
#rm $DIR/etc/$port.config
python -m synapse.app.homeserver \
python3 -m synapse.app.homeserver \
--generate-config \
-H "localhost:$https_port" \
--config-path "$DIR/etc/$port.config" \
--report-stats no
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
echo 'enable_registration: true' >> $DIR/etc/$port.config
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
# accidentally bork me with your fancy settings.
listeners=$(cat <<-PORTLISTENERS
# Configure server to listen on both $https_port and $port
# This overrides some of the default settings above
listeners:
- port: $https_port
type: http
tls: true
resources:
- names: [client, federation]
- port: $port
tls: false
bind_addresses: ['::1', '127.0.0.1']
type: http
x_forwarded: true
resources:
- names: [client, federation]
compress: false
PORTLISTENERS
)
echo "${listeners}" >> $DIR/etc/$port.config
# Disable tls for the servers
printf '\n\n# Disable tls on the servers.' >> $DIR/etc/$port.config
echo '# DO NOT USE IN PRODUCTION' >> $DIR/etc/$port.config
echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true' >> $DIR/etc/$port.config
echo 'federation_verify_certificates: false' >> $DIR/etc/$port.config
# Set tls paths
echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\"" >> $DIR/etc/$port.config
echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\"" >> $DIR/etc/$port.config
# Generate tls keys
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
# Ignore keys from the trusted keys server
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
echo ' - server_name: "matrix.org"' >> $DIR/etc/$port.config
echo ' accept_keys_insecurely: true' >> $DIR/etc/$port.config
# Reduce the blacklist
blacklist=$(cat <<-BLACK
# Set the blacklist so that it doesn't include 127.0.0.1
federation_ip_range_blacklist:
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
BLACK
)
echo "${blacklist}" >> $DIR/etc/$port.config
fi
# Check script parameters
if [ $# -eq 1 ]; then
@@ -55,7 +117,7 @@ for port in 8080 8081 8082; do
echo "report_stats: false" >> $DIR/etc/$port.config
fi
python -m synapse.app.homeserver \
python3 -m synapse.app.homeserver \
--config-path "$DIR/etc/$port.config" \
-D \
-vv \


@@ -6,23 +6,25 @@ import cgi, logging
from daemonize import Daemonize
class SimpleHTTPRequestHandlerWithPOST(SimpleHTTPServer.SimpleHTTPRequestHandler):
UPLOAD_PATH = "upload"
"""
Accept all post request as file upload
"""
def do_POST(self):
path = os.path.join(self.UPLOAD_PATH, os.path.basename(self.path))
length = self.headers['content-length']
length = self.headers["content-length"]
data = self.rfile.read(int(length))
with open(path, 'wb') as fh:
with open(path, "wb") as fh:
fh.write(data)
self.send_response(200)
self.send_header('Content-Type', 'application/json')
self.send_header("Content-Type", "application/json")
self.end_headers()
# Return the absolute path of the uploaded file
@@ -33,30 +35,25 @@ def setup():
parser = argparse.ArgumentParser()
parser.add_argument("directory")
parser.add_argument("-p", "--port", dest="port", type=int, default=8080)
parser.add_argument('-P', "--pid-file", dest="pid", default="web.pid")
parser.add_argument("-P", "--pid-file", dest="pid", default="web.pid")
args = parser.parse_args()
# Get absolute path to directory to serve, as daemonize changes to '/'
os.chdir(args.directory)
dr = os.getcwd()
httpd = BaseHTTPServer.HTTPServer(
('', args.port),
SimpleHTTPRequestHandlerWithPOST
)
httpd = BaseHTTPServer.HTTPServer(("", args.port), SimpleHTTPRequestHandlerWithPOST)
def run():
os.chdir(dr)
httpd.serve_forever()
daemon = Daemonize(
app="synapse-webclient",
pid=args.pid,
action=run,
auto_close_fds=False,
)
app="synapse-webclient", pid=args.pid, action=run, auto_close_fds=False
)
daemon.start()
if __name__ == '__main__':
if __name__ == "__main__":
setup()


@@ -11,12 +11,12 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=2
ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder
# install the OS build deps
@@ -55,8 +55,9 @@ RUN pip install --prefix="/install" --no-warn-script-location \
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10
# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
@@ -64,7 +65,9 @@ RUN apk add --no-cache --virtual .runtime_deps \
libxslt \
libpq \
zlib \
su-exec
su-exec \
tzdata \
xmlsec
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py


@@ -43,6 +43,9 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
FROM ${distro}
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \


@@ -3,10 +3,10 @@
FROM matrixdotorg/sytest:latest
# The Sytest image doesn't come with python, so install that
RUN apt-get -qq install -y python python-dev python-pip
RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
# We need tox to run the tests in run_pg_tests.sh
RUN pip install tox
RUN python3 -m pip install tox
ADD run_pg_tests.sh /pg_tests.sh
ENTRYPOINT /pg_tests.sh


@@ -6,39 +6,11 @@ postgres database.
The image also does *not* provide a TURN server.
## Run
### Using docker-compose (easier)
This image is designed to run either with an automatically generated
configuration file or with a custom configuration that requires manual editing.
An easy way to make use of this image is via docker-compose. See the
[contrib/docker](../contrib/docker) section of the synapse project for
examples.
### Without Compose (harder)
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
-d \
--name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
-p 8448:8448 \
matrixdotorg/synapse:latest
```
## Volumes
The image expects a single volume, located at ``/data``, that will hold:
By default, the image expects a single volume, located at ``/data``, that will hold:
* configuration files;
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
@@ -53,129 +25,106 @@ In order to setup an application service, simply create an ``appservices``
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
## TLS certificates
## Generating a configuration file
Synapse requires a valid TLS certificate. You can do one of the following:
The first step is to generate a valid config file. To do this, you can run the
image with the `generate` commandline option.
* Provide your own certificate and key (as
`${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.crt` and
`${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.key`, or elsewhere by providing an
entire config as `${SYNAPSE_CONFIG_PATH}`). In this case, you should forward
traffic to port 8448 in the container, for example with `-p 443:8448`.
* Use a reverse proxy to terminate incoming TLS, and forward the plain http
traffic to port 8008 in the container. In this case you should set `-e
SYNAPSE_NO_TLS=1`.
* Use the ACME (Let's Encrypt) support built into Synapse. This requires
`${SYNAPSE_SERVER_NAME}` port 80 to be forwarded to port 8009 in the
container, for example with `-p 80:8009`. To enable it in the docker
container, set `-e SYNAPSE_ACME=1`.
If you don't do any of these, Synapse will fail to start with an error similar to:
synapse.config._base.ConfigError: Error accessing file '/data/<server_name>.tls.crt' (config for tls_certificate): No such file or directory
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually: see [Generating a config
file](#generating-a-config-file).
Otherwise, a dynamic configuration file will be used.
### Environment variables used to build a dynamic configuration file
The following environment variables are used to build the configuration file
when ``SYNAPSE_CONFIG_PATH`` is not set.
* ``SYNAPSE_SERVER_NAME`` (mandatory), the server public hostname.
* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project which helps us to get funding.
* `SYNAPSE_NO_TLS` (accepts `true`, `false`, `on`, `off`, `1`, `0`, `yes`, `no`): disable
TLS in Synapse (use this if you run your own TLS-capable reverse proxy). Defaults
to `false` (i.e., TLS is enabled by default).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guests to join this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
uris to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size
[default `10M`].
* ``SYNAPSE_ACME``: set this to enable the ACME certificate renewal support.
Shared secrets, that will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY``, secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres
database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use
postgresql instead of sqlite3. [default: `db` which is useful when using a
container on the same docker network in a compose file where the postgres
service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If
this is set then postgres will be used instead of sqlite3.** [default: none]
**NOTE**: You are highly encouraged to use postgresql! Please use the compose
file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default:
`synapse`]
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default
``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if
any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail
server if any.
### Generating a config file
It is possible to generate a basic configuration file for use with
`SYNAPSE_CONFIG_PATH` using the `generate` commandline option. You will need to
specify values for `SYNAPSE_CONFIG_PATH`, `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS`, and mount a docker volume to store the data on. For
example:
You will need to specify values for the `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS` environment variables, and mount a docker volume to store
the configuration on. For example:
```
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest generate
```
This will generate a `homeserver.yaml` in (typically)
`/var/lib/docker/volumes/synapse-data/_data`, which you can then customise and
use with:
For information on picking a suitable server name, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md.
The above command will generate a `homeserver.yaml` in (typically)
`/var/lib/docker/volumes/synapse-data/_data`. You should check this file, and
customise it to your needs.
The following environment variables are supported in `generate` mode:
* `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
* `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
anonymous statistics reporting.
* `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
and event signing key) will be stored. Defaults to `/data`.
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
such as the database and media store. Defaults to `/data`.
* `UID`, `GID`: the user id and group id to use for creating the data
directories. Defaults to `991`, `991`.
## Running synapse
Once you have a valid configuration file, you can start synapse as follows:
```
docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
-p 8008:8008 \
matrixdotorg/synapse:latest
```
You can then check that it has started correctly with:
```
docker logs synapse
```
If all is well, you should now be able to connect to http://localhost:8008 and
see a confirmation message.
The following environment variables are supported in run mode:
* `SYNAPSE_CONFIG_DIR`: where additional config files are stored. Defaults to
`/data`.
* `SYNAPSE_CONFIG_PATH`: path to the config file. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
## TLS support
The default configuration exposes a single HTTP port: http://localhost:8008. It
is suitable for local testing, but for any practical use, you will either need
to use a reverse proxy, or configure Synapse to expose an HTTPS port.
For documentation on using a reverse proxy, see
https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
For more information on enabling TLS support in synapse itself, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md#tls-certificates. Of
course, you will need to expose the TLS port from the container with a `-p`
argument to `docker run`.
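As an illustration (a sketch only; it assumes you have configured an HTTPS
listener on port 8448 in your `homeserver.yaml` and placed the certificate and
key inside the data volume), exposing the TLS port could look like:
```
docker run -d --name synapse \
    --mount type=volume,src=synapse-data,dst=/data \
    -e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
    -p 8008:8008 \
    -p 8448:8448 \
    matrixdotorg/synapse:latest
```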
## Legacy dynamic configuration file support
For backwards-compatibility only, the docker image supports creating a dynamic
configuration file based on environment variables. This is now deprecated, but
is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is
not given).
To migrate from a dynamic configuration file to a static one, run the docker
container once with the environment variables set, and `migrate_config`
commandline option. For example:
```
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest migrate_config
```
This will generate the same configuration file as the legacy mode used, but
will store it in `/data/homeserver.yaml` instead of a temporary location. You
can then use it as shown above at [Running synapse](#running-synapse).


@@ -21,7 +21,7 @@ server_name: "{{ SYNAPSE_SERVER_NAME }}"
pid_file: /homeserver.pid
web_client: False
soft_file_limit: 0
log_config: "/compiled/log.config"
log_config: "{{ SYNAPSE_LOG_CONFIG }}"
## Ports ##
@@ -207,22 +207,3 @@ perspectives:
password_config:
enabled: true
{% if SYNAPSE_SMTP_HOST %}
email:
enable_notifs: false
smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
smtp_user: "{{ SYNAPSE_SMTP_USER }}"
smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
require_transport_security: False
notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
app_name: Matrix
# if template_dir is unset, uses the example templates that are part of
# the Synapse distribution.
#template_dir: res/templates
notif_template_html: notif_mail.html
notif_template_text: notif_mail.txt
notif_for_new_users: True
riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
{% endif %}


@@ -2,11 +2,11 @@ version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.util.logcontext.LoggingContextFilter
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
@@ -16,14 +16,11 @@ handlers:
filters: [context]
loggers:
synapse:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
level: INFO
root:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
handlers: [console]


@@ -17,4 +17,4 @@ su -c '/usr/lib/postgresql/9.6/bin/pg_ctl -w -D /var/lib/postgresql/data start'
# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=/tmp -e py27-postgres
tox --workdir=/tmp -e py35-postgres


@@ -1,89 +1,243 @@
#!/usr/local/bin/python
import jinja2
import os
import sys
import subprocess
import glob
import codecs
import glob
import os
import subprocess
import sys
import jinja2
# Utility functions
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
def log(txt):
print(txt, file=sys.stderr)
def check_arguments(environ, args):
for argument in args:
if argument not in environ:
print("Environment variable %s is mandatory, exiting." % argument)
sys.exit(2)
def generate_secrets(environ, secrets):
def error(txt):
log(txt)
sys.exit(2)
def convert(src, dst, environ):
"""Generate a file from a template
Args:
src (str): path to input file
dst (str): path to file to write
environ (dict): environment dictionary, for replacement mappings.
"""
with open(src) as infile:
template = infile.read()
rendered = jinja2.Template(template).render(**environ)
with open(dst, "w") as outfile:
outfile.write(rendered)
def generate_config_from_template(config_dir, config_path, environ, ownership):
"""Generate a homeserver.yaml from environment variables
Args:
config_dir (str): where to put generated config files
config_path (str): where to put the main config file
environ (dict): environment dictionary
ownership (str): "<user>:<group>" string which will be used to set
ownership of the generated configs
"""
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error(
"Environment variable '%s' is mandatory when generating a config file."
% (v,)
)
# populate some params from data files (if they exist, else create new ones)
environ = environ.copy()
secrets = {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
}
for name, secret in secrets.items():
if secret not in environ:
filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
# if the file already exists, load in the existing value; otherwise,
# generate a new secret and write it to a file
if os.path.exists(filename):
with open(filename) as handle: value = handle.read()
log("Reading %s from %s" % (secret, filename))
with open(filename) as handle:
value = handle.read()
else:
print("Generating a random secret for {}".format(name))
log("Generating a random secret for {}".format(secret))
value = codecs.encode(os.urandom(32), "hex").decode()
with open(filename, "w") as handle: handle.write(value)
with open(filename, "w") as handle:
handle.write(value)
environ[secret] = value
# Prepare the configuration
mode = sys.argv[1] if len(sys.argv) > 1 else None
environ = os.environ.copy()
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
args = ["python", "-m", "synapse.app.homeserver"]
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists(config_dir):
os.mkdir(config_dir)
# In generate mode, generate a configuration, missing keys, then exit
if mode == "generate":
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
args += [
"--server-name", environ["SYNAPSE_SERVER_NAME"],
"--report-stats", environ["SYNAPSE_REPORT_STATS"],
"--config-path", environ["SYNAPSE_CONFIG_PATH"],
"--generate-config"
# Convert SYNAPSE_NO_TLS to boolean if exists
if "SYNAPSE_NO_TLS" in environ:
tlsanswerstring = str.lower(environ["SYNAPSE_NO_TLS"])
if tlsanswerstring in ("true", "on", "1", "yes"):
environ["SYNAPSE_NO_TLS"] = True
else:
if tlsanswerstring in ("false", "off", "0", "no"):
environ["SYNAPSE_NO_TLS"] = False
else:
error(
'Environment variable "SYNAPSE_NO_TLS" found but value "'
+ tlsanswerstring
+ '" unrecognized; exiting.'
)
if "SYNAPSE_LOG_CONFIG" not in environ:
environ["SYNAPSE_LOG_CONFIG"] = config_dir + "/log.config"
log("Generating synapse config file " + config_path)
convert("/conf/homeserver.yaml", config_path, environ)
log_config_file = environ["SYNAPSE_LOG_CONFIG"]
log("Generating log config file " + log_config_file)
convert("/conf/log.config", log_config_file, environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
# Hopefully we already have a signing key, but generate one if not.
subprocess.check_output(
[
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
# tell synapse to put generated keys in /data rather than /compiled
"--keys-directory",
config_dir,
"--generate-keys",
]
)
def run_generate_config(environ, ownership):
"""Run synapse with a --generate-config param to generate a template config file
Args:
environ (dict): env var dict
ownership (str): "userid:groupid" arg for chmod
Never returns.
"""
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error("Environment variable '%s' is mandatory in `generate` mode." % (v,))
server_name = environ["SYNAPSE_SERVER_NAME"]
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
if not os.path.exists(log_config_file):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = [
"python",
"-m",
"synapse.app.homeserver",
"--server-name",
server_name,
"--report-stats",
environ["SYNAPSE_REPORT_STATS"],
"--config-path",
config_path,
"--config-directory",
config_dir,
"--data-directory",
data_dir,
"--generate-config",
"--open-private-ports",
]
# log("running %s" % (args, ))
os.execv("/usr/local/bin/python", args)
# In normal mode, generate missing keys if any, then run synapse
else:
if "SYNAPSE_CONFIG_PATH" in environ:
config_path = environ["SYNAPSE_CONFIG_PATH"]
else:
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
generate_secrets(environ, {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
})
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists("/compiled"): os.mkdir("/compiled")
def main(args, environ):
mode = args[1] if len(args) > 1 else None
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
# In generate mode, generate a configuration and missing keys, then exit
if mode == "generate":
return run_generate_config(environ, ownership)
if mode == "migrate_config":
# generate a config based on environment vars.
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
return generate_config_from_template(
config_dir, config_path, environ, ownership
)
if mode is not None:
error("Unknown execution mode '%s'" % (mode,))
if "SYNAPSE_SERVER_NAME" in environ:
# backwards-compatibility generate-a-config-on-the-fly mode
if "SYNAPSE_CONFIG_PATH" in environ:
error(
"SYNAPSE_SERVER_NAME and SYNAPSE_CONFIG_PATH are mutually exclusive "
"except in `generate` or `migrate_config` mode."
)
config_path = "/compiled/homeserver.yaml"
# Convert SYNAPSE_NO_TLS to boolean if exists
if "SYNAPSE_NO_TLS" in environ:
tlsanswerstring = str.lower(environ["SYNAPSE_NO_TLS"])
if tlsanswerstring in ("true", "on", "1", "yes"):
environ["SYNAPSE_NO_TLS"] = True
else:
if tlsanswerstring in ("false", "off", "0", "no"):
environ["SYNAPSE_NO_TLS"] = False
else:
print("Environment variable \"SYNAPSE_NO_TLS\" found but value \"" + tlsanswerstring + "\" unrecognized; exiting.")
sys.exit(2)
log(
"Generating config file '%s' on-the-fly from environment variables.\n"
"Note that this mode is deprecated. You can migrate to a static config\n"
"file by running with 'migrate_config'. See the README for more details."
% (config_path,)
)
convert("/conf/homeserver.yaml", config_path, environ)
convert("/conf/log.config", "/compiled/log.config", environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
generate_config_from_template("/compiled", config_path, environ, ownership)
else:
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
if not os.path.exists(config_path):
error(
"Config file '%s' does not exist. You should either create a new "
"config file by running with the `generate` argument (and then edit "
"the resulting file before restarting) or specify the path to an "
"existing config file with the SYNAPSE_CONFIG_PATH variable."
% (config_path,)
)
log("Starting synapse with config file " + config_path)
args += [
"--config-path", config_path,
# tell synapse to put any generated keys in /data rather than /compiled
"--keys-directory", "/data",
args = [
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
]
os.execv("/sbin/su-exec", args)
# Generate missing keys and start synapse
subprocess.check_output(args + ["--generate-keys"])
os.execv("/sbin/su-exec", ["su-exec", ownership] + args)
if __name__ == "__main__":
main(sys.argv, os.environ)


@@ -1,5 +1,22 @@
# MSC1711 Certificates FAQ
## Historical Note
This document was originally written to guide server admins through the upgrade
path towards Synapse 1.0. Specifically,
[MSC1711](https://github.com/matrix-org/matrix-doc/blob/master/proposals/1711-x509-for-federation.md)
required that all servers present valid TLS certificates on their federation
API. Admins were encouraged to achieve compliance from version 0.99.0 (released
in February 2019) ahead of version 1.0 (released June 2019), which began
enforcing the certificate checks.
Much of what follows is now outdated, since most admins will have already
upgraded; however, it may be of use to those with old installs returning to the
project.
If you are setting up a server from scratch you almost certainly should look at
the [installation guide](../INSTALL.md) instead.
## Introduction
The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It
supports the r0.1 release of the server to server specification, but is
compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well


@@ -1,43 +1,60 @@
- Everything should comply with PEP8. Code should pass
``pep8 --max-line-length=100`` without any warnings.
# Code Style
- **Indenting**:
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
in code.
- NEVER tabs. 4 spaces to indent.
The necessary tools are detailed below.
- follow PEP8; either hanging indent or multiline-visual indent depending
on the size and shape of the arguments and what makes more sense to the
author. In other words, both this::
## Formatting tools
print("I am a fish %s" % "moo")
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all committed code is properly
formatted.
and this::
First install ``black`` with::
print("I am a fish %s" %
"moo")
pip install --upgrade black
and this::
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
print(
"I am a fish %s" %
"moo",
)
black . --exclude="\.tox|build|env"
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes
most sense for their function invocation. (e.g. if they want to add
comments per-argument, or put expressions in the arguments, or group
related arguments together, or want to deliberately extend or preserve
vertical/horizontal space)
- **flake8**
- **Line length**:
``flake8`` is a code checking tool. We require code to pass ``flake8`` before being merged into the codebase.
Max line length is 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
Install ``flake8`` with::
Use parentheses instead of ``\`` for line continuation wherever possible
(which is pretty much everywhere).
pip install --upgrade flake8
Check all application and test code with::
flake8 synapse tests
- **isort**
``isort`` ensures imports are nicely formatted, and can suggest and
auto-fix issues such as double-importing.
Install ``isort`` with::
pip install --upgrade isort
Auto-fix imports with::
isort -rc synapse tests
``-rc`` means to recursively search the given directories.
It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
- **Naming**:
@@ -46,26 +63,6 @@
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Blank lines**:
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- **Whitespace**:
There should be spaces where spaces should be and not where there shouldn't
be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with `sphinx
@@ -76,7 +73,7 @@
- **Imports**:
- Prefer to import classes and functions than packages or modules.
- Prefer to import classes and functions rather than packages or modules.
Example::


@@ -14,9 +14,9 @@ up and will work provided you set the ``server_name`` to match your
machine's public DNS hostname, and provide Synapse with a TLS certificate
which is valid for your ``server_name``.
Once you have completed the steps necessary to federate, you should be able to
join a room via federation. (A good place to start is ``#synapse:matrix.org`` - a
room for Synapse admins.)
Once federation has been configured, you should be able to join a room over
federation. A good place to start is ``#synapse:matrix.org`` - a room for
Synapse admins.
## Delegation
@@ -98,6 +98,77 @@ _matrix._tcp.<server_name>``. In our example, we would expect this:
Note that the target of a SRV record cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
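As an illustration (the names here are hypothetical), an SRV record delegating
federation for `example.com` to a Synapse instance at `synapse.example.com`,
listening on port 8448, could look like:
```
_matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.
```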
### Delegation FAQ
#### When do I need a SRV record or .well-known URI?
If your homeserver listens on the default federation port (8448), and your
`server_name` points to the host that your homeserver runs on, you do not need an SRV
record or `.well-known/matrix/server` URI.
For instance, if you registered `example.com` and pointed its DNS A record at a
fresh server, you could install Synapse on that host,
giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is enabled,
it would automatically generate a valid TLS certificate for you via Let's Encrypt
and no SRV record or .well-known URI would be needed.
This is the common case, although you can add an SRV record or
`.well-known/matrix/server` URI for completeness if you wish.
**However**, if your server does not listen on port 8448, or if your `server_name`
does not point to the host that your homeserver runs on, you will need to let
other servers know how to find it. The way to do this is via .well-known or an
SRV record.
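For example, a minimal response served from
`https://<server_name>/.well-known/matrix/server` (a sketch with a hypothetical
target host) would be:
```
{
    "m.server": "synapse.example.com:443"
}
```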
#### I have created a .well-known URI. Do I still need an SRV record?
As of Synapse 0.99, Synapse will first check for the existence of a .well-known
URI and follow any delegation it suggests. It will only then check for the
existence of an SRV record.
That means that the SRV record will often be redundant. However, you should
remember that there may still be older versions of Synapse in the federation
which do not understand .well-known URIs, so if you removed your SRV record
you would no longer be able to federate with them.
It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
earlier will follow the SRV record (and not care about the invalid
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
correct certificate chain.
#### Can I manage my own certificates rather than having Synapse renew certificates itself?
Yes, you are welcome to manage your certificates yourself. Synapse will only
attempt to obtain certificates from Let's Encrypt if you configure it to do
so. The only requirement is that there is a valid TLS certificate present for
federation endpoints.
#### Do you still recommend against using a reverse proxy on the federation port?
We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
reverse proxy.
#### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
Practically speaking, this is no longer necessary.
If you are using a reverse proxy for all of your TLS traffic, then you can set
`no_tls: True` in the Synapse config. In that case, the only reason Synapse
needs the certificate is to populate a legacy `tls_fingerprints` field in the
federation API. This is ignored by Synapse 0.99.0 and later, and the only time
pre-0.99 Synapses will check it is when attempting to fetch the server keys -
and generally this is delegated via `matrix.org`, which will be running a modern
version of Synapse.
#### Do I need the same certificate for the client and federation port?
No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy. However, Synapse will use the
same certificate on any ports where TLS is configured.
## Troubleshooting
You can use the [federation tester](


@@ -1,4 +1,4 @@
Log contexts
Log Contexts
============
.. contents::
@@ -12,7 +12,7 @@ record.
Logcontexts are also used for CPU and database accounting, so that we can track
which requests were responsible for high CPU use or database activity.
The ``synapse.util.logcontext`` module provides a facilities for managing the
The ``synapse.logging.context`` module provides facilities for managing the
current log context (as well as providing the ``LoggingContextFilter`` class).
Deferreds make the whole thing complicated, so this document describes how it
@@ -27,19 +27,19 @@ found them:
.. code:: python
from synapse.util import logcontext # omitted from future snippets
from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = logcontext.LoggingContext()
request_context = context.LoggingContext()
calling_context = logcontext.LoggingContext.current_context()
logcontext.LoggingContext.set_current_context(request_context)
calling_context = context.LoggingContext.current_context()
context.LoggingContext.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
logcontext.LoggingContext.set_current_context(calling_context)
context.LoggingContext.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id
@@ -51,7 +51,7 @@ written much more succinctly as:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
@@ -74,7 +74,7 @@ blocking operation, and returns a deferred:
@defer.inlineCallbacks
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
yield do_request_handling()
logger.debug("finished")
@@ -179,7 +179,7 @@ though, we need to make up a new Deferred, or we get a Deferred back from
external code. We need to make it follow our rules.
The easy way to do it is with a combination of ``defer.inlineCallbacks``, and
``logcontext.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
``context.PreserveLoggingContext``. Suppose we want to implement ``sleep``,
which returns a deferred which will run its callbacks after a given number of
seconds. That might look like:
@@ -204,13 +204,13 @@ That doesn't follow the rules, but we can fix it by wrapping it with
This technique works equally for external functions which return deferreds,
or deferreds we have made ourselves.
You can also use ``logcontext.make_deferred_yieldable``, which just does the
You can also use ``context.make_deferred_yieldable``, which just does the
boilerplate for you, so the above could be written:
.. code:: python
def sleep(seconds):
return logcontext.make_deferred_yieldable(get_sleep_deferred(seconds))
return context.make_deferred_yieldable(get_sleep_deferred(seconds))
Fire-and-forget
@@ -279,7 +279,7 @@ Obviously that option means that the operations done in
that might be fixed by setting a different logcontext via a ``with
LoggingContext(...)`` in ``background_operation``).
The second option is to use ``logcontext.run_in_background``, which wraps a
The second option is to use ``context.run_in_background``, which wraps a
function so that it doesn't reset the logcontext even when it returns an
incomplete deferred, and adds a callback to the returned deferred to reset the
logcontext. In other words, it turns a function that follows the Synapse rules
@@ -293,7 +293,7 @@ It can be used like this:
def do_request_handling():
yield foreground_operation()
logcontext.run_in_background(background_operation)
context.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
@@ -332,7 +332,7 @@ gathered:
result = yield defer.gatherResults([d1, d2])
In this case particularly, though, option two, of using
``logcontext.preserve_fn`` almost certainly makes more sense, so that
``context.preserve_fn`` almost certainly makes more sense, so that
``operation1`` and ``operation2`` are both logged against the original
logcontext. This looks like:
@@ -340,8 +340,8 @@ logcontext. This looks like:
@defer.inlineCallbacks
def do_request_handling():
d1 = logcontext.preserve_fn(operation1)()
d2 = logcontext.preserve_fn(operation2)()
d1 = context.preserve_fn(operation1)()
d2 = context.preserve_fn(operation2)()
with PreserveLoggingContext():
result = yield defer.gatherResults([d1, d2])
@@ -381,7 +381,7 @@ off the background process, and then leave the ``with`` block to wait for it:
.. code:: python
def handle_request(request_id):
with logcontext.LoggingContext() as request_context:
with context.LoggingContext() as request_context:
request_context.request = request_id
d = do_request_handling()
@@ -414,7 +414,7 @@ runs its callbacks in the original logcontext, all is happy.
The business of a Deferred which runs its callbacks in the original logcontext
isn't hard to achieve — we have it today, in the shape of
``logcontext._PreservingContextDeferred``:
``context._PreservingContextDeferred``:
.. code:: python


@@ -59,6 +59,108 @@ How to monitor Synapse metrics using Prometheus
Restart Prometheus.
Renaming of metrics & deprecation of old names in 1.2
-----------------------------------------------------
Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
upstream ``prometheus_client``. The old names are considered deprecated and will
be removed in a future version of Synapse.
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| New Name | Old Name |
+=============================================================================+=======================================================================+
| python_gc_objects_collected_total | python_gc_objects_collected |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_collections_total | python_gc_collections |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| process_cpu_seconds_total | process_cpu_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_start_count_total | synapse_background_process_start_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
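When updating dashboards or alerting rules after upgrading, it should
generally suffice to switch each query over to the new metric name. For
example (an illustrative PromQL query, not taken from any shipped dashboard),
a rate over the old counter::

    rate(synapse_federation_server_received_pdus[5m])

becomes::

    rate(synapse_federation_server_received_pdus_total[5m])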
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------

docs/opentracing.rst

@@ -0,0 +1,100 @@
===========
OpenTracing
===========
Background
----------
OpenTracing is a semi-standard being adopted by a number of distributed tracing
platforms. It is a common API for facilitating vendor-agnostic tracing
instrumentation. That is, we can use the OpenTracing API and select one of a
number of tracer implementations to do the heavy lifting in the background.
Our currently selected implementation is Jaeger.
OpenTracing is a tool which gives an insight into the causal relationship of
work done in and between servers. The servers each track events and report them
to a centralised server - in Synapse's case: Jaeger. The basic unit used to
represent events is the span. The span roughly represents a single piece of work
that was done and the time at which it occurred. A span can have child spans,
meaning that the work of the child had to be completed for the parent span to
complete, or it can have follow-on spans which represent work that is undertaken
as a result of the parent but is not depended on by the parent in order to
finish.
Since this is undertaken in a distributed environment, a request to another
server, such as an RPC or a simple GET, can be considered a span (a unit of
work) for the local server. This causal link is what OpenTracing aims to
capture and visualise. In order to do this, metadata about the local server's
span, i.e. the 'span context', needs to be included with the request to the
remote server.
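As a rough illustration (a minimal sketch using the generic ``opentracing``
Python API rather than Synapse's own helpers), injecting a span context into
the headers of an outgoing HTTP request looks something like:

.. code-block:: python

    import opentracing
    from opentracing.propagation import Format

    def inject_context_into_headers(span, headers):
        # Serialise the span's context into the carrier (a dict of HTTP
        # headers) so that the remote server can continue the trace.
        opentracing.global_tracer().inject(span.context, Format.HTTP_HEADERS, headers)

    # Hypothetical usage: `active_span` is whatever span covers the current work.
    # headers = {}
    # inject_context_into_headers(active_span, headers)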
It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.
For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.
For more information about Jaeger's implementation see
https://www.jaegertracing.io/docs/
======================
Setting up OpenTracing
======================
To receive OpenTracing spans, start up a Jaeger server. This can be done
using docker like so:
.. code-block:: bash
docker run -d --name jaeger \
-p 6831:6831/udp \
-p 6832:6832/udp \
-p 5778:5778 \
-p 16686:16686 \
-p 14268:14268 \
jaegertracing/all-in-one:1.13
Latest documentation is probably at
https://www.jaegertracing.io/docs/1.13/getting-started/
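Once the container is running, the all-in-one image serves the Jaeger query
UI on port 16686, so a quick sanity check (a suggestion, not taken from
Jaeger's docs) is:

.. code-block:: bash

    # exits non-zero while the UI is not yet reachable
    curl -sf http://localhost:16686/ > /dev/null && echo "Jaeger UI is up"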
Enable OpenTracing in Synapse
-----------------------------
OpenTracing is not enabled by default. It must be enabled in the homeserver
config by uncommenting the config options under ``opentracing`` as shown in
the `sample config <./sample_config.yaml>`_. For example:
.. code-block:: yaml
opentracing:
enabled: true
homeserver_whitelist:
- "mytrustedhomeserver.org"
- "*.myotherhomeservers.com"
Homeserver whitelisting
-----------------------
The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
span contexts to another homeserver (a rough sketch of this matching is given
below).
Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque IDs, it can lead to
two problems, namely:

- If the span context is marked as sampled by the sending homeserver, the
  receiver will sample it. Therefore two homeservers with wildly different
  sampling policies could incur higher sampling counts than intended.

- Sending servers can attach arbitrary data to spans, known as 'baggage'. For
  safety this has been disabled in Synapse, but that doesn't prevent another
  server from sending you baggage, which will be logged in OpenTracing's logs.
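As a rough sketch of what "union of regular expressions" means here (the
helper below is hypothetical, not Synapse's actual implementation):

.. code-block:: python

    import re

    def build_whitelist_matcher(patterns):
        # Join the configured patterns into one regex; a server_name is
        # whitelisted if it fully matches any one of them.
        combined = "|".join("(?:%s)" % p for p in patterns)
        regex = re.compile(combined)
        return lambda server_name: regex.fullmatch(server_name) is not None

    is_whitelisted = build_whitelist_matcher(
        ["mytrustedhomeserver\\.org", ".*\\.myotherhomeservers\\.com"]
    )
    assert is_whitelisted("sub.myotherhomeservers.com")
    assert not is_whitelisted("attacker.example.com")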
==================
Configuring Jaeger
==================
Sampling strategies can be set as in this document:
https://www.jaegertracing.io/docs/1.13/sampling/
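For example, a strategies file passed to Jaeger might look like the following
(field names follow the Jaeger sampling documentation; the service name and
rates are assumptions):

.. code-block:: json

    {
        "service_strategies": [
            {"service": "synapse", "type": "probabilistic", "param": 0.5}
        ],
        "default_strategy": {"type": "probabilistic", "param": 0.1}
    }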


@@ -1,7 +1,7 @@
Using Postgres
--------------
Postgres version 9.4 or later is known to work.
Postgres version 9.5 or later is known to work.
Install postgres client libraries
=================================
@@ -11,12 +11,14 @@ a postgres database.
* If you are using the `matrix.org debian/ubuntu
packages <../INSTALL.md#matrixorg-packages>`_,
the necessary libraries will already be installed.
the necessary python library will already be installed, but you will need to
ensure the low-level postgres library is installed, which you can do with
``apt install libpq5``.
* For other pre-built packages, please consult the documentation from the
relevant package.
* If you installed synapse `in a virtualenv
* If you installed synapse `in a virtualenv
<../INSTALL.md#installing-from-source>`_, you can install the library with::
~/synapse/env/bin/pip install matrix-synapse[postgres]
@@ -34,9 +36,14 @@ Assuming your PostgreSQL database user is called ``postgres``, create a user
su - postgres
createuser --pwprompt synapse_user
The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
Before you can authenticate with the ``synapse_user``, you must create a
database that it can access. To create a database, first connect to the database
with your database user::
su - postgres
psql
and then run::
CREATE DATABASE synapse
ENCODING 'UTF8'
@@ -46,7 +53,13 @@ encoding use, e.g.::
OWNER synapse_user;
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
``synapse_user`` user (which must already have been created as above).
Note that the PostgreSQL database *must* have the correct encoding set (as
shown above), otherwise it will not be able to store UTF8 strings.
You may need to enable password authentication so ``synapse_user`` can connect
to the database. See https://www.postgresql.org/docs/11/auth-pg-hba-conf.html.
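As a point of reference (the values here are illustrative), the matching
``database`` section of ``homeserver.yaml`` for such a database typically
looks like::

    database:
        name: psycopg2
        args:
            user: synapse_user
            password: secretpassword
            database: synapse
            host: localhost
            cp_min: 5
            cp_max: 10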
Tuning Postgres
===============


@@ -48,6 +48,8 @@ Let's assume that we expect clients to connect to our server at
proxy_set_header X-Forwarded-For $remote_addr;
}
}
Do not add a ``/`` after the port in ``proxy_pass``, otherwise nginx will canonicalise/normalise the URI.
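In other words, the ``proxy_pass`` line should leave the URI untouched, e.g.
(a minimal sketch; adjust the port to match your listener configuration)::

    location /_matrix {
        # No trailing slash after the port: Matrix URIs can contain
        # encoded slashes, which must be passed through unmodified.
        proxy_pass http://localhost:8008;
    }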
* Caddy::
@@ -89,8 +91,10 @@ Let's assume that we expect clients to connect to our server at
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
# Matrix client traffic
acl matrix hdr(host) -i matrix.example.com
use_backend matrix if matrix
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-path path_beg /_matrix
use_backend matrix if matrix-host matrix-path
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1


@@ -23,29 +23,6 @@ server_name: "SERVERNAME"
#
pid_file: DATADIR/homeserver.pid
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# This setting requires the affinity package to be installed!
#
#cpu_affinity: 0xFFFFFFFF
# The path to the web client which will be served at /_matrix/client/
# if 'webclient' is configured under the 'listeners' configuration.
#
@@ -77,11 +54,15 @@ pid_file: DATADIR/homeserver.pid
#
#require_auth_for_profile_requests: true
# If set to 'true', requires authentication to access the server's
# public rooms directory through the client API, and forbids any other
# homeserver to fetch it via federation. Defaults to 'false'.
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
#
#restrict_public_rooms_to_local_users: true
#allow_public_rooms_without_auth: false
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
#
#allow_public_rooms_over_federation: false
# The default room version for newly created rooms.
#
@@ -232,7 +213,7 @@ listeners:
- names: [client, federation]
compress: false
# example additonal_resources:
# example additional_resources:
#
#additional_resources:
# "/_matrix/my/custom/endpoint":
@@ -336,6 +317,15 @@ listeners:
#
#federation_verify_certificates: false
# The minimum TLS version that will be used for outbound federation requests.
#
# Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
# that setting this value higher than `1.2` will prevent federation to most
# of the public Matrix network: only configure it to `1.3` if you have an
# entirely private federation setup and you can ensure TLS 1.3 support.
#
#federation_client_minimum_tls_version: 1.2
# Skip federation certificate verification on the following whitelist
# of domains.
#
@@ -425,6 +415,13 @@ acme:
#
#domain: matrix.example.com
# file to use for the account key. This will be generated if it doesn't
# exist.
#
# If unspecified, we will use CONFDIR/client.key.
#
account_key_file: DATADIR/acme_account.key
# List of allowed TLS fingerprints for this server to publish along
# with the signing keys for this server. Other matrix servers that
# make HTTPS requests to this server will check that the TLS
@@ -789,6 +786,17 @@ uploads_path: "DATADIR/uploads"
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
@@ -1000,6 +1008,12 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# so it is not normally necessary to specify them unless you need to
# override them.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
# the IdP to use an ACS location of
# https://<server>:<port>/_matrix/saml2/authn_response.
#
#saml2_config:
# sp_config:
# # point this to the IdP's metadata. You can use either a local file or
@@ -1009,7 +1023,15 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# remote:
# - url: https://our_idp/metadata.xml
#
# # The rest of sp_config is just used to generate our metadata xml, and you
# # By default, the user has to go to our login page first. If you'd like to
# # allow IdP-initiated login, set 'allow_unsolicited: True' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: True
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need it, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
@@ -1032,6 +1054,12 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# # separate pysaml2 configuration file:
# #
# config_path: "CONFDIR/sp_conf.py"
#
# # the lifetime of a SAML session. This defines how long a user has to
# # complete the authentication process, if allow_unsolicited is unset.
# # The default is 5 minutes.
# #
# # saml_session_lifetime: 5m
@@ -1058,6 +1086,12 @@ password_config:
#
#enabled: false
# Uncomment to disable authentication against the local password
# database. This is ignored if `enabled` is false, and is only useful
# if you have other password_providers.
#
#localdb_enabled: false
# Uncomment and change to a secret random string for extra security.
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#
@@ -1082,11 +1116,13 @@ password_config:
# app_name: Matrix
#
# # Enable email notifications by default
# #
# notif_for_new_users: True
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Enable sending password reset emails via the configured, trusted
@@ -1099,16 +1135,22 @@ password_config:
# #
# # If this option is set to false and SMTP options have not been
# # configured, resetting user passwords via email will be disabled
# #
# #trust_identity_server_for_password_resets: false
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# #
# #template_dir: res/templates
#
@@ -1351,3 +1393,40 @@ password_config:
# alias: "*"
# room_id: "*"
# action: allow
# Server admins can define a Python module that implements extra rules for
# allowing or denying incoming events. In order to work, this module needs to
# override the methods defined in synapse/events/third_party_rules.py.
#
# This feature is designed to be used in closed federations only, where each
# participating server enforces the same rules.
#
#third_party_event_rules:
# module: "my_custom_project.SuperRulesSet"
# config:
# example_option: 'things'
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers with which we wish to send and receive span
# contexts and span baggage. See docs/opentracing.rst.
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"


@@ -18,226 +18,220 @@ import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('..'))
sys.path.insert(0, os.path.abspath(".."))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinxcontrib.napoleon',
"sphinx.ext.autodoc",
"sphinx.ext.intersphinx",
"sphinx.ext.coverage",
"sphinx.ext.ifconfig",
"sphinxcontrib.napoleon",
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
templates_path = ["_templates"]
# The suffix of source filenames.
source_suffix = '.rst'
source_suffix = ".rst"
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
master_doc = "index"
# General information about the project.
project = u'Synapse'
copyright = u'Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd'
project = "Synapse"
copyright = (
"Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd"
)
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.0'
version = "1.0"
# The full version, including alpha/beta/rc tags.
release = '1.0'
release = "1.0"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
exclude_patterns = ["_build"]
# The reST default role (used for this markup: `text`) to use for all
# documents.
#default_role = None
# default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
pygments_style = "sphinx"
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
# keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
html_theme = "default"
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
html_static_path = ["_static"]
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
#html_extra_path = []
# html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Synapsedoc'
htmlhelp_basename = "Synapsedoc"
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
('index', 'Synapse.tex', u'Synapse Documentation',
u'TNG', 'manual'),
]
latex_documents = [("index", "Synapse.tex", "Synapse Documentation", "TNG", "manual")]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'synapse', u'Synapse Documentation',
[u'TNG'], 1)
]
man_pages = [("index", "synapse", "Synapse Documentation", ["TNG"], 1)]
# If true, show URL addresses after external links.
#man_show_urls = False
# man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
@@ -246,26 +240,32 @@ man_pages = [
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
('index', 'Synapse', u'Synapse Documentation',
u'TNG', 'Synapse', 'One line description of project.',
'Miscellaneous'),
(
"index",
"Synapse",
"Synapse Documentation",
"TNG",
"Synapse",
"One line description of project.",
"Miscellaneous",
)
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
# texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
#texinfo_no_detailmenu = False
# texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'http://docs.python.org/': None}
intersphinx_mapping = {"http://docs.python.org/": None}
napoleon_include_special_with_doc = True
napoleon_use_ivar = True


@@ -239,6 +239,13 @@ be routed to the same instance::
^/_matrix/client/(r0|unstable)/register$
Pagination requests can also be handled, but all requests for a given room
must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
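For example, with nginx in front of the workers, such a routing rule might
look like the following sketch (the upstream name is an assumption)::

    location ~ ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$ {
        proxy_pass http://pagination_worker;
    }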
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~


@@ -14,6 +14,11 @@
name = "Bugfixes"
showcontent = true
[[tool.towncrier.type]]
directory = "docker"
name = "Updates to the Docker image"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"
@@ -28,3 +33,24 @@
directory = "misc"
name = "Internal Changes"
showcontent = true
[tool.black]
target-version = ['py34']
exclude = '''
(
/(
\.eggs # exclude a few common directories in the
| \.git # root of the project
| \.tox
| \.venv
| \.env
| env
| _build
| _trial_temp.*
| build
| dist
| debian
)/
)
'''


@@ -39,11 +39,11 @@ def check_auth(auth, auth_chain, events):
print("Success:", e.event_id, e.type, e.state_key)
if __name__ == '__main__':
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
'json', nargs='?', type=argparse.FileType('r'), default=sys.stdin
"json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
)
args = parser.parse_args()


@@ -1,54 +0,0 @@
import argparse
import hashlib
import json
import logging
import sys
from unpaddedbase64 import encode_base64
from synapse.crypto.event_signing import (
check_event_content_hash,
compute_event_reference_hash,
)
class dictobj(dict):
def __init__(self, *args, **kargs):
dict.__init__(self, *args, **kargs)
self.__dict__ = self
def get_dict(self):
return dict(self)
def get_full_dict(self):
return dict(self)
def get_pdu_json(self):
return dict(self)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()
logging.basicConfig()
event_json = dictobj(json.load(args.input_json))
algorithms = {"sha256": hashlib.sha256}
for alg_name in event_json.hashes:
if check_event_content_hash(event_json, algorithms[alg_name]):
print("PASS content hash %s" % (alg_name,))
else:
print("FAIL content hash %s" % (alg_name,))
for algorithm in algorithms.values():
name, h_bytes = compute_event_reference_hash(event_json, algorithm)
print("Reference hash %s: %s" % (name, encode_base64(h_bytes)))
if __name__ == "__main__":
main()


@@ -1,4 +1,3 @@
import argparse
import json
import logging
@@ -40,7 +39,7 @@ def main():
parser = argparse.ArgumentParser()
parser.add_argument("signature_name")
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
"input_json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
)
args = parser.parse_args()
@@ -69,5 +68,5 @@ def main():
print("FAIL %s" % (key_id,))
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -116,5 +116,5 @@ def main():
connection.commit()
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -19,10 +19,10 @@ class DefinitionVisitor(ast.NodeVisitor):
self.names = {}
self.attrs = set()
self.definitions = {
'def': self.functions,
'class': self.classes,
'names': self.names,
'attrs': self.attrs,
"def": self.functions,
"class": self.classes,
"names": self.names,
"attrs": self.attrs,
}
def visit_Name(self, node):
@@ -47,23 +47,23 @@ class DefinitionVisitor(ast.NodeVisitor):
def non_empty(defs):
functions = {name: non_empty(f) for name, f in defs['def'].items()}
classes = {name: non_empty(f) for name, f in defs['class'].items()}
functions = {name: non_empty(f) for name, f in defs["def"].items()}
classes = {name: non_empty(f) for name, f in defs["class"].items()}
result = {}
if functions:
result['def'] = functions
result["def"] = functions
if classes:
result['class'] = classes
names = defs['names']
result["class"] = classes
names = defs["names"]
uses = []
for name in names.get('Load', ()):
if name not in names.get('Param', ()) and name not in names.get('Store', ()):
for name in names.get("Load", ()):
if name not in names.get("Param", ()) and name not in names.get("Store", ()):
uses.append(name)
uses.extend(defs['attrs'])
uses.extend(defs["attrs"])
if uses:
result['uses'] = uses
result['names'] = names
result['attrs'] = defs['attrs']
result["uses"] = uses
result["names"] = names
result["attrs"] = defs["attrs"]
return result
@@ -81,33 +81,33 @@ def definitions_in_file(filepath):
def defined_names(prefix, defs, names):
for name, funcs in defs.get('def', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
for name, funcs in defs.get("def", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
for name, funcs in defs.get('class', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
for name, funcs in defs.get("class", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
def used_names(prefix, item, defs, names):
for name, funcs in defs.get('def', {}).items():
for name, funcs in defs.get("def", {}).items():
used_names(prefix + name + ".", name, funcs, names)
for name, funcs in defs.get('class', {}).items():
for name, funcs in defs.get("class", {}).items():
used_names(prefix + name + ".", name, funcs, names)
path = prefix.rstrip('.')
for used in defs.get('uses', ()):
path = prefix.rstrip(".")
for used in defs.get("uses", ()):
if used in names:
if item:
names[item].setdefault('uses', []).append(used)
names[used].setdefault('used', {}).setdefault(item, []).append(path)
names[item].setdefault("uses", []).append(used)
names[used].setdefault("used", {}).setdefault(item, []).append(path)
if __name__ == '__main__':
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Find definitions.')
parser = argparse.ArgumentParser(description="Find definitions.")
parser.add_argument(
"--unused", action="store_true", help="Only list unused definitions"
)
@@ -119,7 +119,7 @@ if __name__ == '__main__':
)
parser.add_argument(
"directories",
nargs='+',
nargs="+",
metavar="DIR",
help="Directories to search for definitions",
)
@@ -164,7 +164,7 @@ if __name__ == '__main__':
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
if args.unused and definition.get('used'):
if args.unused and definition.get("used"):
continue
result[name] = definition
@@ -196,9 +196,9 @@ if __name__ == '__main__':
continue
result[name] = definition
if args.format == 'yaml':
if args.format == "yaml":
yaml.dump(result, sys.stdout, default_flow_style=False)
elif args.format == 'dot':
elif args.format == "dot":
print("digraph {")
for name, entry in result.items():
print(name)


@@ -21,7 +21,8 @@ import argparse
import base64
import json
import sys
from urlparse import urlparse, urlunparse
from six.moves.urllib import parse as urlparse
import nacl.signing
import requests
@@ -62,7 +63,7 @@ def encode_canonical_json(value):
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unecessary white space.
separators=(',', ':'),
separators=(",", ":"),
# Sort the keys of dictionaries.
sort_keys=True,
# Encode the resulting unicode as UTF-8 bytes.
@@ -144,8 +145,8 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers = []
for key, sig in signed_json["signatures"][origin_name].items():
header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (origin_name, key, sig)
authorization_headers.append(bytes(header))
header = 'X-Matrix origin=%s,key="%s",sig="%s"' % (origin_name, key, sig)
authorization_headers.append(header.encode("ascii"))
print("Authorization: %s" % header, file=sys.stderr)
dest = "matrix://%s%s" % (destination, path)
@@ -160,11 +161,7 @@ def request_json(method, origin_name, origin_key, destination, path, content):
headers["Content-Type"] = "application/json"
result = s.request(
method=method,
url=dest,
headers=headers,
verify=False,
data=content,
method=method, url=dest, headers=headers, verify=False, data=content
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
@@ -240,18 +237,18 @@ def main():
def read_args_from_config(args):
with open(args.config, 'r') as fh:
with open(args.config, "r") as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config['server_name']
args.server_name = config["server_name"]
if not args.signing_key_path:
args.signing_key_path = config['signing_key_path']
args.signing_key_path = config["signing_key_path"]
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s):
if s[-1] == ']':
def lookup(s, skip_well_known=False):
if s[-1] == "]":
# ipv6 literal (with no port)
return s, 8448
@@ -263,19 +260,49 @@ class MatrixConnectionAdapter(HTTPAdapter):
raise ValueError("Invalid host:port '%s'" % s)
return out[0], port
# try a .well-known lookup
if not skip_well_known:
well_known = MatrixConnectionAdapter.get_well_known(s)
if well_known:
return MatrixConnectionAdapter.lookup(well_known, skip_well_known=True)
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
return srv.host, srv.port
except Exception:
return s, 8448
@staticmethod
def get_well_known(server_name):
uri = "https://%s/.well-known/matrix/server" % (server_name,)
print("fetching %s" % (uri,), file=sys.stderr)
try:
resp = requests.get(uri)
if resp.status_code != 200:
print("%s gave %i" % (uri, resp.status_code), file=sys.stderr)
return None
parsed_well_known = resp.json()
if not isinstance(parsed_well_known, dict):
raise Exception("not a dict")
if "m.server" not in parsed_well_known:
raise Exception("Missing key 'm.server'")
new_name = parsed_well_known["m.server"]
print("well-known lookup gave %s" % (new_name,), file=sys.stderr)
return new_name
except Exception as e:
print("Invalid response from %s: %s" % (uri, e), file=sys.stderr)
return None
def get_connection(self, url, proxies=None):
parsed = urlparse(url)
parsed = urlparse.urlparse(url)
(host, port) = self.lookup(parsed.netloc)
netloc = "%s:%d" % (host, port)
print("Connecting to %s" % (netloc,), file=sys.stderr)
url = urlunparse(
url = urlparse.urlunparse(
("https", netloc, parsed.path, parsed.params, parsed.query, parsed.fragment)
)
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)


@@ -79,5 +79,5 @@ def main():
conn.commit()
if __name__ == '__main__':
if __name__ == "__main__":
main()

scripts-dev/lint.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/sh
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements
# flake8 - lints and finds mistakes
# black - opinionated code formatter
set -e
isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts
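# A typical invocation from the repository root, assuming isort, flake8 and
# black are already installed in the active environment (an illustrative
# note, not part of the script itself):
#
#   ./scripts-dev/lint.sh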


@@ -35,11 +35,11 @@ def find_patterns_in_file(filepath):
find_patterns_in_code(f.read())
parser = argparse.ArgumentParser(description='Find url patterns.')
parser = argparse.ArgumentParser(description="Find url patterns.")
parser.add_argument(
"directories",
nargs='+',
nargs="+",
metavar="DIR",
help="Directories to search for definitions",
)


@@ -63,5 +63,5 @@ def main():
streams[update.name] = update.position
if __name__ == '__main__':
if __name__ == "__main__":
main()


@@ -1,4 +1,4 @@
#!/usr/bin/env python
#!/usr/bin/env python3
import argparse
import shutil


@@ -16,7 +16,7 @@
import argparse
import sys
from signedjson.key import write_signing_keys, generate_signing_key
from signedjson.key import generate_signing_key, write_signing_keys
from synapse.util.stringutils import random_string
@@ -24,14 +24,14 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-o", "--output_file",
type=argparse.FileType('w'),
"-o",
"--output_file",
type=argparse.FileType("w"),
default=sys.stdout,
help="Where to write the output to",
)
args = parser.parse_args()
key_id = "a_" + random_string(4)
key = generate_signing_key(key_id),
key = (generate_signing_key(key_id),)
write_signing_keys(args.output_file, key)


@@ -50,7 +50,7 @@ def main(src_repo, dest_repo):
dest_paths = MediaFilePaths(dest_repo)
for line in sys.stdin:
line = line.strip()
parts = line.split('|')
parts = line.split("|")
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
@@ -107,7 +107,7 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument("-v", action='store_true', help='enable debug logging')
parser.add_argument("-v", action="store_true", help="enable debug logging")
parser.add_argument("src_repo", help="Path to source content repo")
parser.add_argument("dest_repo", help="Path to source content repo")
args = parser.parse_args()


@@ -15,18 +15,17 @@ ignore =
tox.ini
[flake8]
max-line-length = 90
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
ignore=W503,W504,E203,E731
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501
[isort]
line_length = 89
line_length = 88
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY


@@ -60,9 +60,12 @@ class TestCommand(Command):
pass
def run(self):
print ("""Synapse's tests cannot be run via setup.py. To run them, try:
print(
"""Synapse's tests cannot be run via setup.py. To run them, try:
PYTHONPATH="." trial tests
""")
"""
)
def read_file(path_segments):
"""Read a file from the package. Takes a list of strings to join to
@@ -84,9 +87,9 @@ version = exec_file(("synapse", "__init__.py"))["__version__"]
dependencies = exec_file(("synapse", "python_dependencies.py"))
long_description = read_file(("README.rst",))
REQUIREMENTS = dependencies['REQUIREMENTS']
CONDITIONAL_REQUIREMENTS = dependencies['CONDITIONAL_REQUIREMENTS']
ALL_OPTIONAL_REQUIREMENTS = dependencies['ALL_OPTIONAL_REQUIREMENTS']
REQUIREMENTS = dependencies["REQUIREMENTS"]
CONDITIONAL_REQUIREMENTS = dependencies["CONDITIONAL_REQUIREMENTS"]
ALL_OPTIONAL_REQUIREMENTS = dependencies["ALL_OPTIONAL_REQUIREMENTS"]
# Make `pip install matrix-synapse[all]` install all the optional dependencies.
CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
@@ -102,6 +105,16 @@ setup(
include_package_data=True,
zip_safe=False,
long_description=long_description,
python_requires="~=3.5",
classifiers=[
"Development Status :: 5 - Production/Stable",
"Topic :: Communications :: Chat",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={'test': TestCommand},
cmdclass={"test": TestCommand},
)


@@ -17,14 +17,22 @@
""" This is a reference implementation of a Matrix home server.
"""
import sys
# Check that we're not running on an unsupported Python version.
if sys.version_info < (3, 5):
print("Synapse requires Python 3.5 or above.")
sys.exit(1)
try:
from twisted.internet import protocol
from twisted.internet.protocol import Factory
from twisted.names.dns import DNSDatagramProtocol
protocol.Factory.noisy = False
Factory.noisy = False
DNSDatagramProtocol.noisy = False
except ImportError:
pass
__version__ = "1.0.0rc3"
__version__ = "1.2.0rc1"


@@ -57,18 +57,18 @@ def request_registration(
nonce = r.json()["nonce"]
mac = hmac.new(key=shared_secret.encode('utf8'), digestmod=hashlib.sha1)
mac = hmac.new(key=shared_secret.encode("utf8"), digestmod=hashlib.sha1)
mac.update(nonce.encode('utf8'))
mac.update(nonce.encode("utf8"))
mac.update(b"\x00")
mac.update(user.encode('utf8'))
mac.update(user.encode("utf8"))
mac.update(b"\x00")
mac.update(password.encode('utf8'))
mac.update(password.encode("utf8"))
mac.update(b"\x00")
mac.update(b"admin" if admin else b"notadmin")
if user_type:
mac.update(b"\x00")
mac.update(user_type.encode('utf8'))
mac.update(user_type.encode("utf8"))
mac = mac.hexdigest()
@@ -134,8 +134,9 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
else:
admin = False
request_registration(user, password, server_location, shared_secret,
bool(admin), user_type)
request_registration(
user, password, server_location, shared_secret, bool(admin), user_type
)
def main():
@@ -189,7 +190,7 @@ def main():
group.add_argument(
"-c",
"--config",
type=argparse.FileType('r'),
type=argparse.FileType("r"),
help="Path to server config file. Used to read in shared secret.",
)
@@ -200,7 +201,7 @@ def main():
parser.add_argument(
"server_url",
default="https://localhost:8448",
nargs='?',
nargs="?",
help="URL to use to talk to the home server. Defaults to "
" 'https://localhost:8448'.",
)
@@ -220,8 +221,9 @@ def main():
if args.admin or args.no_admin:
admin = args.admin
register_new_user(args.user, args.password, args.server_url, secret,
admin, args.user_type)
register_new_user(
args.user, args.password, args.server_url, secret, admin, args.user_type
)
if __name__ == "__main__":


@@ -25,7 +25,13 @@ from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
ResourceLimitError,
)
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
@@ -36,8 +42,11 @@ logger = logging.getLogger(__name__)
AuthEventTypes = (
EventTypes.Create, EventTypes.Member, EventTypes.PowerLevels,
EventTypes.JoinRules, EventTypes.RoomHistoryVisibility,
EventTypes.Create,
EventTypes.Member,
EventTypes.PowerLevels,
EventTypes.JoinRules,
EventTypes.RoomHistoryVisibility,
EventTypes.ThirdPartyInvite,
)
@@ -54,12 +63,12 @@ class Auth(object):
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
"""
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("cache", "token_cache", self.token_cache)
@@ -70,15 +79,12 @@ class Auth(object):
def check_from_context(self, room_version, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.compute_auth_events(
event, prev_state_ids, for_verification=True,
event, prev_state_ids, for_verification=True
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in itervalues(auth_events)
}
auth_events = {(e.type, e.state_key): e for e in itervalues(auth_events)}
self.check(
room_version, event,
auth_events=auth_events, do_sig_check=do_sig_check,
room_version, event, auth_events=auth_events, do_sig_check=do_sig_check
)
def check(self, room_version, event, auth_events, do_sig_check=True):
@@ -115,15 +121,10 @@ class Auth(object):
the room.
"""
if current_state:
member = current_state.get(
(EventTypes.Member, user_id),
None
)
member = current_state.get((EventTypes.Member, user_id), None)
else:
member = yield self.state.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
self._check_joined_room(member, user_id, room_id)
@@ -143,23 +144,17 @@ class Auth(object):
the room. This will be the leave event if they have left the room.
"""
member = yield self.state.get_current_state(
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
membership = member.membership if member else None
if membership not in (Membership.JOIN, Membership.LEAVE):
raise AuthError(403, "User %s not in room %s" % (
user_id, room_id
))
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
if membership == Membership.LEAVE:
forgot = yield self.store.did_forget(user_id, room_id)
if forgot:
raise AuthError(403, "User %s not in room %s" % (
user_id, room_id
))
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
defer.returnValue(member)
@@ -171,9 +166,9 @@ class Auth(object):
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
raise AuthError(403, "User %s not in room %s (%s)" % (
user_id, room_id, repr(member)
))
raise AuthError(
403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
)
def can_federate(self, event, auth_events):
creation_event = auth_events.get((EventTypes.Create, ""))
@@ -185,11 +180,7 @@ class Auth(object):
@defer.inlineCallbacks
def get_user_by_req(
self,
request,
allow_guest=False,
rights="access",
allow_expired=False,
self, request, allow_guest=False, rights="access", allow_expired=False
):
""" Get a registered user's ID.
@@ -203,19 +194,17 @@ class Auth(object):
Returns:
defer.Deferred: resolves to a ``synapse.types.Requester`` object
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
# Can optionally look elsewhere in the request (e.g. headers)
try:
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent",
default=[b""]
)[0].decode('ascii', 'surrogateescape')
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
access_token = self.get_access_token_from_request(request)
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
@@ -243,11 +232,12 @@ class Auth(object):
if self._account_validity.enabled and not allow_expired:
user_id = user.to_string()
expiration_ts = yield self.store.get_expiration_ts_for_user(user_id)
if expiration_ts is not None and self.clock.time_msec() >= expiration_ts:
if (
expiration_ts is not None
and self.clock.time_msec() >= expiration_ts
):
raise AuthError(
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
403, "User account has expired", errcode=Codes.EXPIRED_ACCOUNT
)
# device_id may not be present if get_user_by_access_token has been
@@ -265,26 +255,25 @@ class Auth(object):
if is_guest and not allow_guest:
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
403,
"Guest access not allowed",
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
)
request.authenticated_entity = user.to_string()
defer.returnValue(synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service)
defer.returnValue(
synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.",
errcode=Codes.MISSING_TOKEN
)
raise MissingClientTokenError()
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
self.get_access_token_from_request(request)
)
if app_service is None:
defer.returnValue((None, None))
@@ -297,20 +286,14 @@ class Auth(object):
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
user_id = request.args[b"user_id"][0].decode('utf8')
user_id = request.args[b"user_id"][0].decode("utf8")
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
if not app_service.is_interested_in_user(user_id):
raise AuthError(
403,
"Application service cannot masquerade as this user."
)
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(
403,
"Application service has not registered this user"
)
raise AuthError(403, "Application service has not registered this user")
defer.returnValue((user_id, app_service))
@defer.inlineCallbacks
@@ -328,13 +311,25 @@ class Auth(object):
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
"""
if rights == "access":
# first look in the database
r = yield self._look_up_user_by_access_token(token)
if r:
valid_until_ms = r["valid_until_ms"]
if (
valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
# soft-logout the user.
raise InvalidClientTokenError(
msg="Access token has expired", soft_logout=True
)
defer.returnValue(r)
# otherwise it needs to be a valid macaroon
@@ -346,11 +341,7 @@ class Auth(object):
if not guest:
# non-guest access tokens must be in the database
logger.warning("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError()
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -365,16 +356,10 @@ class Auth(object):
# guest tokens.
stored_user = yield self.store.get_user_by_id(user_id)
if not stored_user:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unknown user_id %s" % user_id,
errcode=Codes.UNKNOWN_TOKEN
)
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Guest access token used for regular user",
errcode=Codes.UNKNOWN_TOKEN
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
ret = {
"user": user,
@@ -401,10 +386,7 @@ class Auth(object):
ValueError,
) as e:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
raise InvalidClientTokenError("Invalid macaroon passed.")
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
@@ -441,14 +423,10 @@ class Auth(object):
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
raise InvalidClientTokenError("Invalid macaroon passed.")
if not has_expiry and rights == "access":
self.token_cache[token] = (user_id, guest)
@@ -467,16 +445,14 @@ class Auth(object):
(str) user id
Raises:
AuthError if there is no user_id caveat in the macaroon
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix):]
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN
)
return caveat.caveat_id[len(user_prefix) :]
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
"""
@@ -522,7 +498,7 @@ class Auth(object):
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix):])
expiry = int(caveat[len(prefix) :])
now = self.hs.get_clock().time_msec()
return now < expiry
@@ -540,28 +516,18 @@ class Auth(object):
"token_id": ret.get("token_id", None),
"is_guest": False,
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
defer.returnValue(user_info)
def get_appservice_by_req(self, request):
try:
token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN
)
request.authenticated_entity = service.sender
return defer.succeed(service)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token."
)
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
@@ -581,19 +547,19 @@ class Auth(object):
auth_ids = []
key = (EventTypes.PowerLevels, "", )
key = (EventTypes.PowerLevels, "")
power_level_event_id = current_state_ids.get(key)
if power_level_event_id:
auth_ids.append(power_level_event_id)
key = (EventTypes.JoinRules, "", )
key = (EventTypes.JoinRules, "")
join_rule_event_id = current_state_ids.get(key)
key = (EventTypes.Member, event.sender, )
key = (EventTypes.Member, event.sender)
member_event_id = current_state_ids.get(key)
key = (EventTypes.Create, "", )
key = (EventTypes.Create, "")
create_event_id = current_state_ids.get(key)
if create_event_id:
auth_ids.append(create_event_id)
@@ -619,7 +585,7 @@ class Auth(object):
auth_ids.append(member_event_id)
if for_verification:
key = (EventTypes.Member, event.state_key, )
key = (EventTypes.Member, event.state_key)
existing_event_id = current_state_ids.get(key)
if existing_event_id:
auth_ids.append(existing_event_id)
@@ -628,7 +594,7 @@ class Auth(object):
if "third_party_invite" in event.content:
key = (
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"]
event.content["third_party_invite"]["signed"]["token"],
)
third_party_invite_id = current_state_ids.get(key)
if third_party_invite_id:
@@ -640,21 +606,6 @@ class Auth(object):
defer.returnValue(auth_ids)
def check_redaction(self, room_version, event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
True if the the sender is allowed to redact the target event if the
target event was created by them.
False if the sender is allowed to redact the target event with no
further checks.
Raises:
AuthError if the event sender is definitely not allowed to redact
the target event.
"""
return event_auth.check_redaction(room_version, event, auth_events)
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
"""Check if the user is allowed to edit the room's entry in the
@@ -684,7 +635,7 @@ class Auth(object):
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.Aliases, "", power_level_event,
EventTypes.Aliases, "", power_level_event
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
@@ -692,7 +643,7 @@ class Auth(object):
raise AuthError(
403,
"This server requires you to be a moderator in the room to"
" edit its room list entry"
" edit its room list entry",
)
@staticmethod
@@ -707,20 +658,16 @@ class Auth(object):
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request, token_not_found_http_status=401):
def get_access_token_from_request(request):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
MissingClientTokenError: If there isn't a single access_token in the
request
"""
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
@@ -729,36 +676,22 @@ class Auth(object):
# Try the get the access_token from a "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
raise MissingClientTokenError(
"Mixing Authorization headers and access_token query parameters."
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Too many Authorization headers.")
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode('ascii')
return parts[1].decode("ascii")
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Invalid Authorization header.")
else:
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN
)
raise MissingClientTokenError()
return query_params[0].decode('ascii')
return query_params[0].decode("ascii")
@defer.inlineCallbacks
def check_in_room_or_world_readable(self, room_id, user_id):
@@ -785,8 +718,8 @@ class Auth(object):
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility and
visibility.content["history_visibility"] == "world_readable"
visibility
and visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
@@ -820,10 +753,11 @@ class Auth(object):
if self.hs.config.hs_disabled:
raise ResourceLimitError(
403, self.hs.config.hs_disabled_message,
403,
self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self.hs.config.admin_contact,
limit_type=self.hs.config.hs_disabled_limit_type
limit_type=self.hs.config.hs_disabled_limit_type,
)
if self.hs.config.limit_usage_by_mau is True:
assert not (user_id and threepid)
@@ -848,8 +782,9 @@ class Auth(object):
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise ResourceLimitError(
403, "Monthly Active User Limit Exceeded",
403,
"Monthly Active User Limit Exceeded",
admin_contact=self.hs.config.admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type="monthly_active_user"
limit_type="monthly_active_user",
)

View File

@@ -18,7 +18,7 @@
"""Contains constants from the specification."""
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2**63 - 1
MAX_DEPTH = 2 ** 63 - 1
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255
@@ -30,39 +30,41 @@ MAX_USERID_LENGTH = 255
class Membership(object):
"""Represents the membership states of a user in a room."""
INVITE = u"invite"
JOIN = u"join"
KNOCK = u"knock"
LEAVE = u"leave"
BAN = u"ban"
INVITE = "invite"
JOIN = "join"
KNOCK = "knock"
LEAVE = "leave"
BAN = "ban"
LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
class PresenceState(object):
"""Represents the presence state of a user."""
OFFLINE = u"offline"
UNAVAILABLE = u"unavailable"
ONLINE = u"online"
OFFLINE = "offline"
UNAVAILABLE = "unavailable"
ONLINE = "online"
class JoinRules(object):
PUBLIC = u"public"
KNOCK = u"knock"
INVITE = u"invite"
PRIVATE = u"private"
PUBLIC = "public"
KNOCK = "knock"
INVITE = "invite"
PRIVATE = "private"
class LoginType(object):
PASSWORD = u"m.login.password"
EMAIL_IDENTITY = u"m.login.email.identity"
MSISDN = u"m.login.msisdn"
RECAPTCHA = u"m.login.recaptcha"
TERMS = u"m.login.terms"
DUMMY = u"m.login.dummy"
PASSWORD = "m.login.password"
EMAIL_IDENTITY = "m.login.email.identity"
MSISDN = "m.login.msisdn"
RECAPTCHA = "m.login.recaptcha"
TERMS = "m.login.terms"
DUMMY = "m.login.dummy"
# Only for C/S API v1
APPLICATION_SERVICE = u"m.login.application_service"
SHARED_SECRET = u"org.matrix.login.shared_secret"
APPLICATION_SERVICE = "m.login.application_service"
SHARED_SECRET = "org.matrix.login.shared_secret"
class EventTypes(object):
@@ -118,6 +120,7 @@ class UserTypes(object):
"""Allows for user type specific behaviour. With the benefit of hindsight
'admin' and 'guest' users should also be UserTypes. Normal users are type None
"""
SUPPORT = "support"
ALL_USER_TYPES = (SUPPORT,)
@@ -125,6 +128,7 @@ class UserTypes(object):
class RelationTypes(object):
"""The types of relations known to this server.
"""
ANNOTATION = "m.annotation"
REPLACE = "m.replace"
REFERENCE = "m.reference"

View File

@@ -70,6 +70,7 @@ class CodeMessageException(RuntimeError):
code (int): HTTP error code
msg (str): string describing the error
"""
def __init__(self, code, msg):
super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
self.code = code
@@ -83,6 +84,7 @@ class SynapseError(CodeMessageException):
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
"""Constructs a synapse error.
@@ -95,10 +97,7 @@ class SynapseError(CodeMessageException):
self.errcode = errcode
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
)
return cs_error(self.msg, self.errcode)
class ProxiedRequestError(SynapseError):
@@ -107,27 +106,23 @@ class ProxiedRequestError(SynapseError):
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
super(ProxiedRequestError, self).__init__(
code, msg, errcode
)
super(ProxiedRequestError, self).__init__(code, msg, errcode)
if additional_fields is None:
self._additional_fields = {}
else:
self._additional_fields = dict(additional_fields)
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
**self._additional_fields
)
return cs_error(self.msg, self.errcode, **self._additional_fields)
class ConsentNotGivenError(SynapseError):
"""The error returned to the client when the user has not consented to the
privacy policy.
"""
def __init__(self, msg, consent_uri):
"""Constructs a ConsentNotGivenError
@@ -136,22 +131,33 @@ class ConsentNotGivenError(SynapseError):
consent_uri (str): The URL where the user can give their consent
"""
super(ConsentNotGivenError, self).__init__(
code=http_client.FORBIDDEN,
msg=msg,
errcode=Codes.CONSENT_NOT_GIVEN
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.CONSENT_NOT_GIVEN
)
self._consent_uri = consent_uri
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
consent_uri=self._consent_uri
return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)
class UserDeactivatedError(SynapseError):
"""The error returned to the client when the user attempted to access an
authenticated endpoint, but the account has been deactivated.
"""
def __init__(self, msg):
"""Constructs a UserDeactivatedError
Args:
msg (str): The human-readable error message
"""
super(UserDeactivatedError, self).__init__(
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
@@ -190,15 +196,17 @@ class InteractiveAuthIncompleteError(Exception):
result (dict): the server response to the request, which should be
passed back to the client
"""
def __init__(self, result):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete",
"Interactive auth not yet complete"
)
self.result = result
class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.UNRECOGNIZED
@@ -207,25 +215,20 @@ class UnrecognizedRequestError(SynapseError):
message = "Unrecognized request"
else:
message = args[0]
super(UnrecognizedRequestError, self).__init__(
400,
message,
**kwargs
)
super(UnrecognizedRequestError, self).__init__(400, message, **kwargs)
class NotFoundError(SynapseError):
"""An error indicating we can't find the thing you asked for"""
def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
super(NotFoundError, self).__init__(
404,
msg,
errcode=errcode
)
super(NotFoundError, self).__init__(404, msg, errcode=errcode)
class AuthError(SynapseError):
"""An error raised when there was a problem authorising an event."""
"""An error raised when there was a problem authorising an event, and at various
other poorly-defined times.
"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
@@ -233,13 +236,51 @@ class AuthError(SynapseError):
super(AuthError, self).__init__(*args, **kwargs)
class InvalidClientCredentialsError(SynapseError):
"""An error raised when there was a problem with the authorisation credentials
in a client request.
https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens:
When credentials are required but missing or invalid, the HTTP call will
return with a status of 401 and the error code, M_MISSING_TOKEN or
M_UNKNOWN_TOKEN respectively.
"""
def __init__(self, msg, errcode):
super().__init__(code=401, msg=msg, errcode=errcode)
class MissingClientTokenError(InvalidClientCredentialsError):
"""Raised when we couldn't find the access token in a request"""
def __init__(self, msg="Missing access token"):
super().__init__(msg=msg, errcode="M_MISSING_TOKEN")
class InvalidClientTokenError(InvalidClientCredentialsError):
"""Raised when we didn't understand the access token in a request"""
def __init__(self, msg="Unrecognised access token", soft_logout=False):
super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
self._soft_logout = soft_logout
def error_dict(self):
d = super().error_dict()
d["soft_logout"] = self._soft_logout
return d
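Assuming error_dict() is what ends up serialised into the HTTP body (as with the other SynapseError subclasses), the new soft_logout flag reaches clients roughly as follows; cs_error here is a stand-in that merges the message and errcode with extra fields:

# Sketch of the response body implied by InvalidClientTokenError.error_dict().
def cs_error(msg, code, **kwargs):
    err = {"error": msg, "errcode": code}
    err.update(kwargs)
    return err

body = cs_error("Unrecognised access token", "M_UNKNOWN_TOKEN", soft_logout=False)
print(body)
# {'error': 'Unrecognised access token', 'errcode': 'M_UNKNOWN_TOKEN',
#  'soft_logout': False}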
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.
For instance, the monthly active user limit for the server has been exceeded
"""
def __init__(
self, code, msg,
self,
code,
msg,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=None,
limit_type=None,
@@ -253,7 +294,7 @@ class ResourceLimitError(SynapseError):
self.msg,
self.errcode,
admin_contact=self.admin_contact,
limit_type=self.limit_type
limit_type=self.limit_type,
)
@@ -268,6 +309,7 @@ class EventSizeError(SynapseError):
class EventStreamError(SynapseError):
"""An error raised when there a problem with the event stream."""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.BAD_PAGINATION
@@ -276,47 +318,53 @@ class EventStreamError(SynapseError):
class LoginError(SynapseError):
"""An error raised when there was a problem logging in."""
pass
class StoreError(SynapseError):
"""An error raised when there was a problem storing some data."""
pass
class InvalidCaptchaError(SynapseError):
def __init__(self, code=400, msg="Invalid captcha.", error_url=None,
errcode=Codes.CAPTCHA_INVALID):
def __init__(
self,
code=400,
msg="Invalid captcha.",
error_url=None,
errcode=Codes.CAPTCHA_INVALID,
):
super(InvalidCaptchaError, self).__init__(code, msg, errcode)
self.error_url = error_url
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
error_url=self.error_url,
)
return cs_error(self.msg, self.errcode, error_url=self.error_url)
class LimitExceededError(SynapseError):
"""A client has sent too many requests and is being throttled.
"""
def __init__(self, code=429, msg="Too Many Requests", retry_after_ms=None,
errcode=Codes.LIMIT_EXCEEDED):
def __init__(
self,
code=429,
msg="Too Many Requests",
retry_after_ms=None,
errcode=Codes.LIMIT_EXCEEDED,
):
super(LimitExceededError, self).__init__(code, msg, errcode)
self.retry_after_ms = retry_after_ms
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
retry_after_ms=self.retry_after_ms,
)
return cs_error(self.msg, self.errcode, retry_after_ms=self.retry_after_ms)
class RoomKeysVersionError(SynapseError):
"""A client has tried to upload to a non-current version of the room_keys store
"""
def __init__(self, current_version):
"""
Args:
@@ -331,6 +379,7 @@ class RoomKeysVersionError(SynapseError):
class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does
not support."""
def __init__(self):
super(UnsupportedRoomVersionError, self).__init__(
code=400,
@@ -354,22 +403,19 @@ class IncompatibleRoomVersionError(SynapseError):
Unlike UnsupportedRoomVersionError, it is specific to the case of the make_join
failing.
"""
def __init__(self, room_version):
super(IncompatibleRoomVersionError, self).__init__(
code=400,
msg="Your homeserver does not support the features required to "
"join this room",
"join this room",
errcode=Codes.INCOMPATIBLE_ROOM_VERSION,
)
self._room_version = room_version
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
room_version=self._room_version,
)
return cs_error(self.msg, self.errcode, room_version=self._room_version)
class RequestSendFailed(RuntimeError):
@@ -380,11 +426,11 @@ class RequestSendFailed(RuntimeError):
networking (e.g. DNS failures, connection timeouts etc), versus unexpected
errors (like programming errors).
"""
def __init__(self, inner_exception, can_retry):
super(RequestSendFailed, self).__init__(
"Failed to send request: %s: %s" % (
type(inner_exception).__name__, inner_exception,
)
"Failed to send request: %s: %s"
% (type(inner_exception).__name__, inner_exception)
)
self.inner_exception = inner_exception
self.can_retry = can_retry
@@ -428,7 +474,7 @@ class FederationError(RuntimeError):
self.affected = affected
self.source = source
msg = "%s %s: %s" % (level, code, reason,)
msg = "%s %s: %s" % (level, code, reason)
super(FederationError, self).__init__(msg)
def get_dict(self):
@@ -448,6 +494,7 @@ class HttpResponseException(CodeMessageException):
Attributes:
response (bytes): body of response
"""
def __init__(self, code, msg, response):
"""
@@ -486,7 +533,7 @@ class HttpResponseException(CodeMessageException):
if not isinstance(j, dict):
j = {}
errcode = j.pop('errcode', Codes.UNKNOWN)
errmsg = j.pop('error', self.msg)
errcode = j.pop("errcode", Codes.UNKNOWN)
errmsg = j.pop("error", self.msg)
return ProxiedRequestError(self.code, errmsg, errcode, j)
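Beyond the quote-style change, the pop-based conversion is worth spelling out: errcode and error are lifted out of the remote JSON body, and whatever remains travels as additional fields on the proxied error. A sketch with a hypothetical body:

# Hypothetical remote error body, converted per the logic above.
j = {"errcode": "M_FORBIDDEN", "error": "not allowed", "room_id": "!abc:example.com"}
errcode = j.pop("errcode", "M_UNKNOWN")
errmsg = j.pop("error", "fallback message")
print(errcode, errmsg, j)
# M_FORBIDDEN not allowed {'room_id': '!abc:example.com'}
# -> the leftover dict becomes additional_fields on ProxiedRequestError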

View File

@@ -28,117 +28,55 @@ FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
"limit": {"type": "number"},
"senders": {"$ref": "#/definitions/user_id_array"},
"not_senders": {"$ref": "#/definitions/user_id_array"},
# TODO: We don't limit event type values but we probably should...
# check types are valid event types
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
}
}
"types": {"type": "array", "items": {"type": "string"}},
"not_types": {"type": "array", "items": {"type": "string"}},
},
}
ROOM_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"ephemeral": {
"$ref": "#/definitions/room_event_filter"
},
"include_leave": {
"type": "boolean"
},
"state": {
"$ref": "#/definitions/room_event_filter"
},
"timeline": {
"$ref": "#/definitions/room_event_filter"
},
"account_data": {
"$ref": "#/definitions/room_event_filter"
},
}
"not_rooms": {"$ref": "#/definitions/room_id_array"},
"rooms": {"$ref": "#/definitions/room_id_array"},
"ephemeral": {"$ref": "#/definitions/room_event_filter"},
"include_leave": {"type": "boolean"},
"state": {"$ref": "#/definitions/room_event_filter"},
"timeline": {"$ref": "#/definitions/room_event_filter"},
"account_data": {"$ref": "#/definitions/room_event_filter"},
},
}
ROOM_EVENT_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"contains_url": {
"type": "boolean"
},
"lazy_load_members": {
"type": "boolean"
},
"include_redundant_members": {
"type": "boolean"
},
}
"limit": {"type": "number"},
"senders": {"$ref": "#/definitions/user_id_array"},
"not_senders": {"$ref": "#/definitions/user_id_array"},
"types": {"type": "array", "items": {"type": "string"}},
"not_types": {"type": "array", "items": {"type": "string"}},
"rooms": {"$ref": "#/definitions/room_id_array"},
"not_rooms": {"$ref": "#/definitions/room_id_array"},
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
},
}
USER_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {
"type": "string",
"format": "matrix_user_id"
}
"items": {"type": "string", "format": "matrix_user_id"},
}
ROOM_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {
"type": "string",
"format": "matrix_room_id"
}
"items": {"type": "string", "format": "matrix_room_id"},
}
USER_FILTER_SCHEMA = {
@@ -150,22 +88,13 @@ USER_FILTER_SCHEMA = {
"user_id_array": USER_ID_ARRAY_SCHEMA,
"filter": FILTER_SCHEMA,
"room_filter": ROOM_FILTER_SCHEMA,
"room_event_filter": ROOM_EVENT_FILTER_SCHEMA
"room_event_filter": ROOM_EVENT_FILTER_SCHEMA,
},
"properties": {
"presence": {
"$ref": "#/definitions/filter"
},
"account_data": {
"$ref": "#/definitions/filter"
},
"room": {
"$ref": "#/definitions/room_filter"
},
"event_format": {
"type": "string",
"enum": ["client", "federation"]
},
"presence": {"$ref": "#/definitions/filter"},
"account_data": {"$ref": "#/definitions/filter"},
"room": {"$ref": "#/definitions/room_filter"},
"event_format": {"type": "string", "enum": ["client", "federation"]},
"event_fields": {
"type": "array",
"items": {
@@ -177,26 +106,25 @@ USER_FILTER_SCHEMA = {
#
# Note that because this is a regular expression, we have to escape
# each backslash in the pattern.
"pattern": r"^((?!\\\\).)*$"
}
}
"pattern": r"^((?!\\\\).)*$",
},
},
},
"additionalProperties": False
"additionalProperties": False,
}
@FormatChecker.cls_checks('matrix_room_id')
@FormatChecker.cls_checks("matrix_room_id")
def matrix_room_id_validator(room_id_str):
return RoomID.from_string(room_id_str)
@FormatChecker.cls_checks('matrix_user_id')
@FormatChecker.cls_checks("matrix_user_id")
def matrix_user_id_validator(user_id_str):
return UserID.from_string(user_id_str)
class Filtering(object):
def __init__(self, hs):
super(Filtering, self).__init__()
self.store = hs.get_datastore()
@@ -228,8 +156,9 @@ class Filtering(object):
# individual top-level key e.g. public_user_data. Filters are made of
# many definitions.
try:
jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
format_checker=FormatChecker())
jsonschema.validate(
user_filter_json, USER_FILTER_SCHEMA, format_checker=FormatChecker()
)
except jsonschema.ValidationError as e:
raise SynapseError(400, str(e))
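As a quick sanity check of the consolidated schema style above, a trimmed copy validates the same way; unknown top-level keys are rejected because additionalProperties is False (a sketch using jsonschema directly, without the Matrix format checkers):

import jsonschema

# Trimmed, self-contained copy of the FILTER_SCHEMA shape above.
schema = {
    "additionalProperties": False,
    "type": "object",
    "properties": {
        "limit": {"type": "number"},
        "types": {"type": "array", "items": {"type": "string"}},
    },
}

jsonschema.validate({"limit": 10, "types": ["m.room.message"]}, schema)  # passes
try:
    jsonschema.validate({"bogus": 1}, schema)
except jsonschema.ValidationError:
    print("rejected: additionalProperties is False")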
@@ -240,10 +169,9 @@ class FilterCollection(object):
room_filter_json = self._filter_json.get("room", {})
self._room_filter = Filter({
k: v for k, v in room_filter_json.items()
if k in ("rooms", "not_rooms")
})
self._room_filter = Filter(
{k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
)
self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
self._room_state_filter = Filter(room_filter_json.get("state", {}))
@@ -252,9 +180,7 @@ class FilterCollection(object):
self._presence_filter = Filter(filter_json.get("presence", {}))
self._account_data = Filter(filter_json.get("account_data", {}))
self.include_leave = filter_json.get("room", {}).get(
"include_leave", False
)
self.include_leave = filter_json.get("room", {}).get("include_leave", False)
self.event_fields = filter_json.get("event_fields", [])
self.event_format = filter_json.get("event_format", "client")
@@ -299,22 +225,22 @@ class FilterCollection(object):
def blocks_all_presence(self):
return (
self._presence_filter.filters_all_types() or
self._presence_filter.filters_all_senders()
self._presence_filter.filters_all_types()
or self._presence_filter.filters_all_senders()
)
def blocks_all_room_ephemeral(self):
return (
self._room_ephemeral_filter.filters_all_types() or
self._room_ephemeral_filter.filters_all_senders() or
self._room_ephemeral_filter.filters_all_rooms()
self._room_ephemeral_filter.filters_all_types()
or self._room_ephemeral_filter.filters_all_senders()
or self._room_ephemeral_filter.filters_all_rooms()
)
def blocks_all_room_timeline(self):
return (
self._room_timeline_filter.filters_all_types() or
self._room_timeline_filter.filters_all_senders() or
self._room_timeline_filter.filters_all_rooms()
self._room_timeline_filter.filters_all_types()
or self._room_timeline_filter.filters_all_senders()
or self._room_timeline_filter.filters_all_rooms()
)
@@ -375,12 +301,7 @@ class Filter(object):
# check if there is a string url field in the content for filtering purposes
contains_url = isinstance(content.get("url"), text_type)
return self.check_fields(
room_id,
sender,
ev_type,
contains_url,
)
return self.check_fields(room_id, sender, ev_type, contains_url)
def check_fields(self, room_id, sender, event_type, contains_url):
"""Checks whether the filter matches the given event fields.
@@ -391,7 +312,7 @@ class Filter(object):
literal_keys = {
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(event_type, v)
"types": lambda v: _matches_wildcard(event_type, v),
}
for name, match_func in literal_keys.items():

View File

@@ -44,29 +44,25 @@ class Ratelimiter(object):
"""
self.prune_message_counts(time_now_s)
message_count, time_start, _ignored = self.message_counts.get(
key, (0., time_now_s, None),
key, (0.0, time_now_s, None)
)
time_delta = time_now_s - time_start
sent_count = message_count - time_delta * rate_hz
if sent_count < 0:
allowed = True
time_start = time_now_s
message_count = 1.
elif sent_count > burst_count - 1.:
message_count = 1.0
elif sent_count > burst_count - 1.0:
allowed = False
else:
allowed = True
message_count += 1
if update:
self.message_counts[key] = (
message_count, time_start, rate_hz
)
self.message_counts[key] = (message_count, time_start, rate_hz)
if rate_hz > 0:
time_allowed = (
time_start + (message_count - burst_count + 1) / rate_hz
)
time_allowed = time_start + (message_count - burst_count + 1) / rate_hz
if time_allowed < time_now_s:
time_allowed = time_now_s
else:
@@ -76,9 +72,7 @@ class Ratelimiter(object):
def prune_message_counts(self, time_now_s):
for key in list(self.message_counts.keys()):
message_count, time_start, rate_hz = (
self.message_counts[key]
)
message_count, time_start, rate_hz = self.message_counts[key]
time_delta = time_now_s - time_start
if message_count - time_delta * rate_hz > 0:
break
@@ -92,5 +86,5 @@ class Ratelimiter(object):
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s)),
retry_after_ms=int(1000 * (time_allowed - time_now_s))
)
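The reformatted arithmetic above is a standard leaky bucket: sent_count = message_count - time_delta * rate_hz is the bucket level after draining, and a request is throttled once the level exceeds burst_count - 1. A worked, self-contained sketch with hypothetical numbers (0.2 actions/second, burst of 3):

# Worked example of the rate-limiting arithmetic above (hypothetical numbers).
rate_hz, burst_count = 0.2, 3.0
message_count, time_start = 0.0, 0.0

def try_action(now):
    global message_count, time_start
    sent_count = message_count - (now - time_start) * rate_hz
    if sent_count < 0:
        time_start, message_count = now, 1.0  # bucket fully drained: reset
        return True
    if sent_count > burst_count - 1.0:
        return False  # burst exhausted: throttled
    message_count += 1
    return True

print([try_action(t) for t in (0, 0, 0, 0, 10)])
# [True, True, True, False, True]: three burst actions, one rejection,
# then ten seconds of draining at 0.2/s admits another.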

View File

@@ -19,9 +19,10 @@ class EventFormatVersions(object):
"""This is an internal enum for tracking the version of the event format,
independently from the room version.
"""
V1 = 1 # $id:server event id format
V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
V3 = 3 # MSC1884-style $hash format: introduced for room v4
V1 = 1 # $id:server event id format
V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
V3 = 3 # MSC1884-style $hash format: introduced for room v4
KNOWN_EVENT_FORMAT_VERSIONS = {
@@ -33,8 +34,9 @@ KNOWN_EVENT_FORMAT_VERSIONS = {
class StateResolutionVersions(object):
"""Enum to identify the state resolution algorithms"""
V1 = 1 # room v1 state res
V2 = 2 # MSC1442 state res: room v2 and later
V1 = 1 # room v1 state res
V2 = 2 # MSC1442 state res: room v2 and later
class RoomDisposition(object):
@@ -46,10 +48,10 @@ class RoomDisposition(object):
class RoomVersion(object):
"""An object which describes the unique attributes of a room version."""
identifier = attr.ib() # str; the identifier for this version
disposition = attr.ib() # str; one of the RoomDispositions
event_format = attr.ib() # int; one of the EventFormatVersions
state_res = attr.ib() # int; one of the StateResolutionVersions
identifier = attr.ib() # str; the identifier for this version
disposition = attr.ib() # str; one of the RoomDispositions
event_format = attr.ib() # int; one of the EventFormatVersions
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
@@ -92,11 +94,12 @@ class RoomVersions(object):
KNOWN_ROOM_VERSIONS = {
v.identifier: v for v in (
v.identifier: v
for v in (
RoomVersions.V1,
RoomVersions.V2,
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
)
} # type: dict[str, RoomVersion]
} # type: dict[str, RoomVersion]
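Per the "# type: dict[str, RoomVersion]" comment, the comprehension above keys each RoomVersion by its identifier, so an unsupported version is simply a lookup miss. A sketch with stand-in objects:

# Stand-in for the comprehension above; the real values are RoomVersion
# instances, and the real failure path raises UnsupportedRoomVersionError.
class RV:
    def __init__(self, identifier):
        self.identifier = identifier

KNOWN = {v.identifier: v for v in (RV("1"), RV("2"), RV("3"), RV("4"), RV("5"))}

def get_room_version(identifier):
    version = KNOWN.get(identifier)
    if version is None:
        raise ValueError("Unsupported room version: %s" % (identifier,))
    return version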

View File

@@ -42,13 +42,9 @@ class ConsentURIBuilder(object):
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
raise ConfigError(
"form_secret not set in config",
)
raise ConfigError("form_secret not set in config")
if hs_config.public_baseurl is None:
raise ConfigError(
"public_baseurl not set in config",
)
raise ConfigError("public_baseurl not set in config")
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
@@ -64,15 +60,10 @@ class ConsentURIBuilder(object):
(str) the URI where the user can do consent
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id.encode('ascii'),
digestmod=sha256,
key=self._hmac_secret, msg=user_id.encode("ascii"), digestmod=sha256
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
self._public_baseurl,
urlencode({
"u": user_id,
"h": mac
}),
urlencode({"u": user_id, "h": mac}),
)
return consent_uri
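The change above is formatting-only; the construction itself is a plain HMAC-SHA256 of the user ID under form_secret, appended as query parameters. A runnable sketch with a hypothetical secret and base URL:

import hmac
from hashlib import sha256
from urllib.parse import urlencode

form_secret = b"not-a-real-secret"           # hypothetical
public_baseurl = "https://hs.example.com/"   # hypothetical
user_id = "@alice:example.com"

mac = hmac.new(
    key=form_secret, msg=user_id.encode("ascii"), digestmod=sha256
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
    public_baseurl,
    urlencode({"u": user_id, "h": mac}),
)
print(consent_uri)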

View File

@@ -43,7 +43,7 @@ def check_bind_error(e, address, bind_addresses):
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
"""
if address == '0.0.0.0' and '::' in bind_addresses:
logger.warn('Failed to listen on 0.0.0.0, continuing because listening on [::]')
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warn("Failed to listen on 0.0.0.0, continuing because listening on [::]")
else:
raise e

View File

@@ -19,7 +19,6 @@ import signal
import sys
import traceback
import psutil
from daemonize import Daemonize
from twisted.internet import defer, error, reactor
@@ -28,7 +27,7 @@ from twisted.protocols.tls import TLSMemoryBIOFactory
import synapse
from synapse.app import check_bind_error
from synapse.crypto import context_factory
from synapse.util import PreserveLoggingContext
from synapse.logging.context import PreserveLoggingContext
from synapse.util.async_helpers import Linearizer
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
@@ -49,7 +48,7 @@ def register_sighup(func):
_sighup_callbacks.append(func)
def start_worker_reactor(appname, config):
def start_worker_reactor(appname, config, run_command=reactor.run):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
@@ -58,6 +57,7 @@ def start_worker_reactor(appname, config):
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
run_command (Callable[]): callable that actually runs the reactor
"""
logger = logging.getLogger(config.worker_app)
@@ -68,21 +68,21 @@ def start_worker_reactor(appname, config):
gc_thresholds=config.gc_thresholds,
pid_file=config.worker_pid_file,
daemonize=config.worker_daemonize,
cpu_affinity=config.worker_cpu_affinity,
print_pidfile=config.print_pidfile,
logger=logger,
run_command=run_command,
)
def start_reactor(
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
print_pidfile,
logger,
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
print_pidfile,
logger,
run_command=reactor.run,
):
""" Run the reactor in the main process
@@ -95,64 +95,53 @@ def start_reactor(
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
print_pidfile (bool): whether to print the pid file, if daemonize is True
logger (logging.Logger): logger instance to pass to Daemonize
run_command (Callable[]): callable that actually runs the reactor
"""
install_dns_limiter(reactor)
def run():
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
# Turn the bitmask into bits, reverse it so we go from 0 up
mask_to_bits = bin(cpu_affinity)[2:][::-1]
logger.info("Running")
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
run_command()
cpus = []
cpu_num = 0
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
# and complain when they see the logcontext arbitrarily swapping
# between the sentinel and `run` logcontexts.
#
# We also need to drop the logcontext before forking if we're daemonizing,
# otherwise the cputime metrics get confused about the per-thread resource usage
# appearing to go backwards.
with PreserveLoggingContext():
if daemonize:
if print_pidfile:
print(pid_file)
for i in mask_to_bits:
if i == "1":
cpus.append(cpu_num)
cpu_num += 1
p = psutil.Process()
p.cpu_affinity(cpus)
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
if daemonize:
if print_pidfile:
print(pid_file)
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
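The new run_command hook lets one-shot processes swap the long-running reactor.run for task.react, which tears the reactor down once a deferred resolves (the admin_cmd module later in this diff does exactly this). A minimal sketch of the pattern:

from twisted.internet import defer, task

@defer.inlineCallbacks
def run(_reactor):
    yield defer.succeed(None)  # stand-in for the actual one-shot command

# e.g. start_worker_reactor("synapse-admin-cmd", config,
#                           run_command=lambda: task.react(run))
task.react(run)  # runs the reactor until `run` completes, then exits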
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + '\n')
sys.stderr.write("*" * line_length + "\n")
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + '\n')
sys.stderr.write("*" * line_length + "\n")
sys.exit(1)
@@ -160,8 +149,7 @@ def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
from synapse.metrics import RegistryProxy, start_http_server
for host in bind_addresses:
logger.info("Starting metrics listener on %s:%d", host, port)
@@ -178,14 +166,7 @@ def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
r = []
for address in bind_addresses:
try:
r.append(
reactor.listenTCP(
port,
factory,
backlog,
address
)
)
r.append(reactor.listenTCP(port, factory, backlog, address))
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
@@ -205,13 +186,7 @@ def listen_ssl(
for address in bind_addresses:
try:
r.append(
reactor.listenSSL(
port,
factory,
context_factory,
backlog,
address
)
reactor.listenSSL(port, factory, context_factory, backlog, address)
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
@@ -243,15 +218,13 @@ def refresh_certificate(hs):
if isinstance(i.factory, TLSMemoryBIOFactory):
addr = i.getHost()
logger.info(
"Replacing TLS context factory on [%s]:%i", addr.host, addr.port,
"Replacing TLS context factory on [%s]:%i", addr.host, addr.port
)
# We want to replace TLS factories with a new one, with the new
# TLS configuration. We do this by reaching in and pulling out
# the wrappedFactory, and then re-wrapping it.
i.factory = TLSMemoryBIOFactory(
hs.tls_server_context_factory,
False,
i.factory.wrappedFactory
hs.tls_server_context_factory, False, i.factory.wrappedFactory
)
logger.info("Context factories updated.")
@@ -267,6 +240,7 @@ def start(hs, listeners=None):
try:
# Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"):
def handle_sighup(*args, **kwargs):
for i in _sighup_callbacks:
i(hs)
@@ -278,6 +252,9 @@ def start(hs, listeners=None):
# Load the certificate from disk.
refresh_certificate(hs)
# Start the tracer
synapse.logging.opentracing.init_tracer(hs.config)
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().start_profiling()
@@ -302,10 +279,8 @@ def setup_sentry(hs):
return
import sentry_sdk
sentry_sdk.init(
dsn=hs.config.sentry_dsn,
release=get_version_string(synapse),
)
sentry_sdk.init(dsn=hs.config.sentry_dsn, release=get_version_string(synapse))
# We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope:
@@ -326,7 +301,7 @@ def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
many DNS queries at once
"""
new_resolver = _LimitedHostnameResolver(
reactor.nameResolver, max_dns_requests_in_flight,
reactor.nameResolver, max_dns_requests_in_flight
)
reactor.installNameResolver(new_resolver)
@@ -339,11 +314,17 @@ class _LimitedHostnameResolver(object):
def __init__(self, resolver, max_dns_requests_in_flight):
self._resolver = resolver
self._limiter = Linearizer(
name="dns_client_limiter", max_count=max_dns_requests_in_flight,
name="dns_client_limiter", max_count=max_dns_requests_in_flight
)
def resolveHostName(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
def resolveHostName(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
# We need this function to return `resolutionReceiver` so we do all the
# actual logic involving deferreds in a separate function.
@@ -363,8 +344,14 @@ class _LimitedHostnameResolver(object):
return resolutionReceiver
@defer.inlineCallbacks
def _resolve(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
def _resolve(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
with (yield self._limiter.queue(())):
# resolveHostName doesn't return a Deferred, so we need to hook into
@@ -374,8 +361,7 @@ class _LimitedHostnameResolver(object):
receiver = _DeferredResolutionReceiver(resolutionReceiver, deferred)
self._resolver.resolveHostName(
receiver, hostName, portNumber,
addressTypes, transportSemantics,
receiver, hostName, portNumber, addressTypes, transportSemantics
)
yield deferred
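The Linearizer with max_count acts as a counting semaphore over in-flight DNS lookups. The same shape in plain asyncio, offered as a sketch of the idea rather than of the Twisted implementation:

import asyncio

_sem = asyncio.Semaphore(100)  # mirrors max_dns_requests_in_flight

async def limited_resolve(hostname, port=443):
    async with _sem:  # at most 100 lookups outstanding at once
        loop = asyncio.get_running_loop()
        return await loop.getaddrinfo(hostname, port)

# asyncio.run(limited_resolve("example.com"))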

synapse/app/admin_cmd.py (new file, 264 lines)
View File

@@ -0,0 +1,264 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import tempfile
from canonicaljson import json
from twisted.internet import defer, task
import synapse
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
pass
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
def _listen_http(self, listener_config):
pass
def start_listening(self, listeners):
pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):
"""Export data for a user.
Args:
hs (HomeServer)
args (argparse.Namespace)
"""
user_id = args.user_id
directory = args.output_directory
res = yield hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
print(res)
class FileExfiltrationWriter(ExfiltrationWriter):
"""An ExfiltrationWriter that writes the users data to a directory.
Returns the directory location on completion.
Note: This writes to disk on the main reactor thread.
Args:
user_id (str): The user whose data is being exfiltrated.
directory (str|None): The directory to write the data to, if None then
will write to a temporary directory.
"""
def __init__(self, user_id, directory=None):
self.user_id = user_id
if directory:
self.base_directory = directory
else:
self.base_directory = tempfile.mkdtemp(
prefix="synapse-exfiltrate__%s__" % (user_id,)
)
os.makedirs(self.base_directory, exist_ok=True)
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id, events):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
with open(events_file, "a") as f:
for event in events:
print(json.dumps(event.get_pdu_json()), file=f)
def write_state(self, room_id, event_id, state):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
state_directory = os.path.join(room_directory, "state")
os.makedirs(state_directory, exist_ok=True)
event_file = os.path.join(state_directory, event_id)
with open(event_file, "a") as f:
for event in state.values():
print(json.dumps(event.get_pdu_json()), file=f)
def write_invite(self, room_id, event, state):
self.write_events(room_id, [event])
# We write the invite state somewhere else as they aren't full events
# and are only a subset of the state at the event.
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
invite_state = os.path.join(room_directory, "invite_state")
with open(invite_state, "a") as f:
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
return self.base_directory
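Given the writer above, each room's events land as one JSON document per line under rooms/<room_id>/events. A sketch of reading an export back (base directory hypothetical):

import json
import os

base_directory = "/tmp/synapse-exfiltrate-example"  # hypothetical
room_id = "!room:example.com"

events_file = os.path.join(base_directory, "rooms", room_id, "events")
if os.path.exists(events_file):
    with open(events_file) as f:
        events = [json.loads(line) for line in f]
    print("recovered %d events" % len(events))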
def start(config_options):
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
subparser = parser.add_subparsers(
title="Admin Commands",
required=True,
dest="command",
metavar="<admin_command>",
help="The admin command to perform.",
)
export_data_parser = subparser.add_parser(
"export-data", help="Export all data for a user"
)
export_data_parser.add_argument("user_id", help="User to extra data from")
export_data_parser.add_argument(
"--output-directory",
action="store",
metavar="DIRECTORY",
required=False,
help="The directory to store the exported data in. Must be empty. Defaults"
" to creating a temp directory.",
)
export_data_parser.set_defaults(func=export_data_command)
try:
config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if config.worker_app is not None:
assert config.worker_app == "synapse.app.admin_cmd"
# Update the config with some basic overrides so that we don't have to specify
# a full worker config.
config.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
and not config.worker_log_file
and not config.worker_log_config
):
# Since we're meant to be run as a "command" let's not redirect stdio
# unless we've actually set log config.
config.no_redirect_stdio = True
# Explicitly disable background processes
config.update_user_directory = False
config.start_pushers = False
config.send_federation = False
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = AdminCmdServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
# We use task.react as the basic run command as it correctly handles tearing
# down the reactor when the deferreds resolve and setting the return value.
# We also make sure that `_base.start` gets run before we actually run the
# command.
@defer.inlineCallbacks
def run(_reactor):
with LoggingContext("command"):
yield _base.start(ss, [])
yield args.func(ss, args)
_base.start_worker_reactor(
"synapse-admin-cmd", config, run_command=lambda: task.react(run)
)
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -26,8 +26,8 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -36,7 +36,6 @@ from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -44,7 +43,9 @@ logger = logging.getLogger("synapse.app.appservice")
class AppserviceSlaveStore(
DirectoryStore, SlavedEventStore, SlavedApplicationServiceStore,
DirectoryStore,
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
):
pass
@@ -74,7 +75,7 @@ class AppserviceServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
)
logger.info("Synapse appservice now listening on port %d", port)
@@ -88,18 +89,19 @@ class AppserviceServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -132,9 +134,7 @@ class ASReplicationHandler(ReplicationClientHandler):
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse appservice", config_options
)
config = HomeServerConfig.load_config("Synapse appservice", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -173,6 +173,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -27,8 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -37,6 +37,7 @@ from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
@@ -52,6 +53,7 @@ from synapse.rest.client.v1.room import (
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomMemberListRestServlet,
RoomMessageListRestServlet,
RoomStateRestServlet,
)
from synapse.rest.client.v1.voip import VoipRestServlet
@@ -62,7 +64,6 @@ from synapse.rest.client.versions import VersionsRestServlet
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -74,6 +75,7 @@ class ClientReaderSlavedStore(
SlavedDeviceStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedGroupServerStore,
SlavedAccountDataStore,
SlavedEventStore,
SlavedKeyStore,
@@ -109,6 +111,7 @@ class ClientReaderServer(HomeServer):
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
RoomMessageListRestServlet(self).register(resource)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
ThreepidRestServlet(self).register(resource)
@@ -118,9 +121,7 @@ class ClientReaderServer(HomeServer):
PushRuleRestServlet(self).register(resource)
VersionsRestServlet().register(resource)
resources.update({
"/_matrix/client": resource,
})
resources.update({"/_matrix/client": resource})
root_resource = create_resource_tree(resources, NoResource())
@@ -133,7 +134,7 @@ class ClientReaderServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -147,18 +148,19 @@ class ClientReaderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -170,9 +172,7 @@ class ClientReaderServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse client reader", config_options
)
config = HomeServerConfig.load_config("Synapse client reader", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -199,6 +199,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -27,8 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -59,7 +59,6 @@ from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -109,12 +108,14 @@ class EventCreatorServer(HomeServer):
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
@@ -127,7 +128,7 @@ class EventCreatorServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
)
logger.info("Synapse event creator now listening on port %d", port)
@@ -141,18 +142,19 @@ class EventCreatorServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -164,9 +166,7 @@ class EventCreatorServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse event creator", config_options
)
config = HomeServerConfig.load_config("Synapse event creator", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -198,6 +198,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-event-creator", config)
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -28,8 +28,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -48,7 +48,6 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -86,19 +85,18 @@ class FederationReaderServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
if name == "openid" and "federation" not in res["names"]:
# Only load the openid resource separately if federation resource
# is not specified since federation resource includes openid
# resource.
resources.update({
FEDERATION_PREFIX: TransportLayerServer(
self,
servlet_groups=["openid"],
),
})
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
@@ -115,7 +113,7 @@ class FederationReaderServer(HomeServer):
root_resource,
self.version_string,
),
reactor=self.get_reactor()
reactor=self.get_reactor(),
)
logger.info("Synapse federation reader now listening on port %d", port)
@@ -129,18 +127,19 @@ class FederationReaderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -181,6 +180,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -27,9 +27,9 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -44,7 +44,6 @@ from synapse.storage.engines import create_engine
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext, run_in_background
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -52,8 +51,13 @@ logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore, SlavedTransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
SlavedDeviceInboxStore,
SlavedTransactionStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedRegistrationStore,
SlavedDeviceStore,
SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
@@ -65,10 +69,7 @@ class FederationSenderSlaveStore(
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = (
"SELECT stream_id FROM federation_stream_position"
" WHERE type = ?"
)
sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
@@ -103,7 +104,7 @@ class FederationSenderServer(HomeServer):
listener_config,
root_resource,
self.version_string,
)
),
)
logger.info("Synapse federation_sender now listening on port %d", port)
@@ -117,18 +118,19 @@ class FederationSenderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -151,7 +153,9 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate()
args = super(
FederationSenderReplicationHandler, self
).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
@@ -203,6 +207,7 @@ class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
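[Editor's note: per the docstring above, this handler bridges the replication stream to the federation sender. A minimal sketch of that routing; only the receipts branch appears in this diff, the rest is illustrative:]

# Illustrative sketch: dispatch replication rows by stream name, handing
# receipts to a named background process as in the hunk below.
def process_replication_rows(self, stream_name, token, rows):
    if stream_name == ReceiptsStream.NAME:
        run_as_background_process(
            "process_receipts_for_federation", self._on_new_receipts, rows
        )
    # other streams (events, device lists, ...) are handled similarly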
@@ -241,7 +246,7 @@ class FederationSenderHandler(object):
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
run_as_background_process(
"process_receipts_for_federation", self._on_new_receipts, rows,
"process_receipts_for_federation", self._on_new_receipts, rows
)
@defer.inlineCallbacks
@@ -278,12 +283,14 @@ class FederationSenderHandler(object):
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(self.federation_position)
self.replication_client.send_federation_ack(
self.federation_position
)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])
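[Editor's note: the ACK bookkeeping reflowed in the hunk above is worth spelling out: the worker tells the master how far it has persisted so the master can drop its in-memory queues. In outline; the guard is an assumption, as the diff shows only the ACK and the _last_ack update:]

# Only ACK when the persisted position has actually advanced.
if self.federation_position != self._last_ack:
    self.replication_client.send_federation_ack(self.federation_position)
    self._last_ack = self.federation_position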


@@ -29,8 +29,8 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -41,7 +41,6 @@ from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -62,14 +61,11 @@ class PresenceStatusStubServlet(RestServlet):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {
"Authorization": auth_headers,
}
headers = {"Authorization": auth_headers}
try:
result = yield self.http_client.get_json(
self.main_uri + request.uri.decode('ascii'),
headers=headers,
self.main_uri + request.uri.decode("ascii"), headers=headers
)
except HttpResponseException as e:
raise e.to_synapse_error()
@@ -105,18 +101,19 @@ class KeyUploadServlet(RestServlet):
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
if requester.device_id is not None and device_id != requester.device_id:
logger.warning(
"Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id,
device_id,
)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400,
"To upload keys, you must pass device_id when authenticating"
400, "To upload keys, you must pass device_id when authenticating"
)
if body:
@@ -124,13 +121,9 @@ class KeyUploadServlet(RestServlet):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {
"Authorization": auth_headers,
}
headers = {"Authorization": auth_headers}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri.decode('ascii'),
body,
headers=headers,
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
defer.returnValue((200, result))
@@ -171,12 +164,14 @@ class FrontendProxyServer(HomeServer):
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
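[Editor's note: the frontend proxy's routing is just a dict from URL prefix to resource, which create_resource_tree mounts under a 404-by-default root. A toy version of the same idea; the JsonResource construction is an assumption based on the imports above:]

# Toy illustration: several client-API prefixes share one JsonResource,
# mounted under a NoResource root that 404s everything else.
resource = JsonResource(self, canonical_json=False)  # assumed constructor
resources = {
    "/_matrix/client/r0": resource,
    "/_matrix/client/unstable": resource,
}
root_resource = create_resource_tree(resources, NoResource())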
@@ -190,7 +185,7 @@ class FrontendProxyServer(HomeServer):
root_resource,
self.version_string,
),
reactor=self.get_reactor()
reactor=self.get_reactor(),
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -204,18 +199,19 @@ class FrontendProxyServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -227,9 +223,7 @@ class FrontendProxyServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
config = HomeServerConfig.load_config("Synapse frontend proxy", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -258,6 +252,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == '__main__':
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])


@@ -54,9 +54,9 @@ from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -72,7 +72,6 @@ from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.logcontext import LoggingContext
from synapse.util.manhole import manhole
from synapse.util.module_loader import load_module
from synapse.util.rlimit import change_resource_limit
@@ -101,13 +100,12 @@ class SynapseHomeServer(HomeServer):
# Skip loading openid resource if federation is defined
# since federation resource will include openid
continue
resources.update(self._configure_named_resource(
name, res.get("compress", False),
))
resources.update(
self._configure_named_resource(name, res.get("compress", False))
)
additional_resources = listener_config.get("additional_resources", {})
logger.debug("Configuring additional resources: %r",
additional_resources)
logger.debug("Configuring additional resources: %r", additional_resources)
module_api = ModuleApi(self, self.get_auth_handler())
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
@@ -174,60 +172,67 @@ class SynapseHomeServer(HomeServer):
if compress:
client_resource = gz_wrap(client_resource)
resources.update({
"/_matrix/client/api/v1": client_resource,
"/_synapse/password_reset": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
"/.well-known/matrix/client": WellKnownResource(self),
"/_synapse/admin": AdminRestResource(self),
})
resources.update(
{
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
"/.well-known/matrix/client": WellKnownResource(self),
"/_synapse/admin": AdminRestResource(self),
}
)
if self.get_config().saml2_enabled:
from synapse.rest.saml2 import SAML2Resource
resources["/_matrix/saml2"] = SAML2Resource(self)
if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource
consent_resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({
"/_matrix/consent": consent_resource,
})
resources.update({"/_matrix/consent": consent_resource})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
if name == "openid":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self, servlet_groups=["openid"]),
})
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
)
if name in ["static", "client"]:
resources.update({
STATIC_PREFIX: File(
os.path.join(os.path.dirname(synapse.__file__), "static")
),
})
resources.update(
{
STATIC_PREFIX: File(
os.path.join(os.path.dirname(synapse.__file__), "static")
)
}
)
if name in ["media", "federation", "client"]:
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update({
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
})
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
}
)
elif name == "media":
raise ConfigError(
"'media' resource conflicts with enable_media_repo=False",
"'media' resource conflicts with enable_media_repo=False"
)
if name in ["keys", "federation"]:
@@ -258,18 +263,14 @@ class SynapseHomeServer(HomeServer):
for listener in listeners:
if listener["type"] == "http":
self._listening_services.extend(
self._listener_http(config, listener)
)
self._listening_services.extend(self._listener_http(config, listener))
elif listener["type"] == "manhole":
listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix",
password="rabbithole",
globals={"hs": self},
)
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "replication":
services = listen_tcp(
@@ -278,16 +279,17 @@ class SynapseHomeServer(HomeServer):
ReplicationStreamProtocolFactory(self),
)
for s in services:
reactor.addSystemEventTrigger(
"before", "shutdown", s.stopListening,
)
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
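[Editor's note: the replication branch in the hunk above also registers each listening service for orderly shutdown; the underlying Twisted idiom in isolation:]

from twisted.internet import reactor

# Close replication listeners before the reactor shuts down.
for s in services:
    reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)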
@@ -313,7 +315,7 @@ current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
registered_reserved_users_mau_gauge = Gauge(
"synapse_admin_mau:registered_reserved_users",
"Registered users with reserved threepids"
"Registered users with reserved threepids",
)
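[Editor's note: only a trailing comma changes above, but for context these MAU metrics are ordinary prometheus_client gauges; defining and updating one looks like:]

from prometheus_client import Gauge

# Colons are legal in Prometheus metric names (conventionally reserved
# for derived metrics, as in these synapse_admin_mau:* gauges).
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
current_mau_gauge.set(123)  # e.g. from the periodic MAU count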
@@ -328,8 +330,7 @@ def setup(config_options):
"""
try:
config = HomeServerConfig.load_or_generate_config(
"Synapse Homeserver",
config_options,
"Synapse Homeserver", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
@@ -340,10 +341,7 @@ def setup(config_options):
# generating config files and shouldn't try to continue.
sys.exit(0)
synapse.config.logger.setup_logging(
config,
use_worker_options=False
)
synapse.config.logger.setup_logging(config, use_worker_options=False)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
@@ -358,7 +356,7 @@ def setup(config_options):
database_engine=database_engine,
)
logger.info("Preparing database: %s...", config.database_config['name'])
logger.info("Preparing database: %s...", config.database_config["name"])
try:
with hs.get_db_conn(run_new_connection=False) as db_conn:
@@ -376,7 +374,7 @@ def setup(config_options):
)
sys.exit(1)
logger.info("Database prepared in %s.", config.database_config['name'])
logger.info("Database prepared in %s.", config.database_config["name"])
hs.setup()
hs.setup_master()
@@ -392,9 +390,7 @@ def setup(config_options):
acme = hs.get_acme_handler()
# Check how long the certificate is active for.
cert_days_remaining = hs.config.is_disk_cert_valid(
allow_self_signed=False
)
cert_days_remaining = hs.config.is_disk_cert_valid(allow_self_signed=False)
# We want to reprovision if cert_days_remaining is None (meaning no
# certificate exists), or the days remaining number it returns
@@ -402,8 +398,8 @@ def setup(config_options):
provision = False
if (
cert_days_remaining is None or
cert_days_remaining < hs.config.acme_reprovision_threshold
cert_days_remaining is None
or cert_days_remaining < hs.config.acme_reprovision_threshold
):
provision = True
@@ -434,10 +430,7 @@ def setup(config_options):
yield do_acme()
# Check if it needs to be reprovisioned every day.
hs.get_clock().looping_call(
reprovision_acme,
24 * 60 * 60 * 1000
)
hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)
_base.start(hs, config.listeners)
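[Editor's note: Synapse's Clock.looping_call takes its interval in milliseconds (compare the hourly 1000 * 60 * 60 later in this file), so the collapsed call above schedules the ACME reprovision check once per day:]

# 24 h * 60 min * 60 s * 1000 ms = one day, in milliseconds.
ONE_DAY_MS = 24 * 60 * 60 * 1000
assert ONE_DAY_MS == 86400000
hs.get_clock().looping_call(reprovision_acme, ONE_DAY_MS)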
@@ -464,6 +457,7 @@ class SynapseService(service.Service):
A twisted Service class that will start synapse. Used to run synapse
via twistd and a .tac.
"""
def __init__(self, config):
self.config = config
@@ -480,6 +474,7 @@ class SynapseService(service.Service):
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
def profile(func):
from cProfile import Profile
from threading import current_thread
@@ -490,13 +485,14 @@ def run(hs):
func(*args, **kargs)
profile.disable()
ident = current_thread().ident
profile.dump_stats("/tmp/%s.%s.%i.pstat" % (
hs.hostname, func.__name__, ident
))
profile.dump_stats(
"/tmp/%s.%s.%i.pstat" % (hs.hostname, func.__name__, ident)
)
return profiled
from twisted.python.threadpool import ThreadPool
ThreadPool._worker = profile(ThreadPool._worker)
reactor.run = profile(reactor.run)
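[Editor's note: the PROFILE_SYNAPSE block above monkey-patches the reactor and thread pool with a cProfile decorator; a self-contained version of that decorator, with an illustrative dump path:]

from cProfile import Profile
from threading import current_thread

def profile(func):
    # Run func under cProfile and dump per-thread stats for later
    # inspection with pstats.
    def profiled(*args, **kwargs):
        p = Profile()
        p.enable()
        try:
            func(*args, **kwargs)
        finally:
            p.disable()
            p.dump_stats("/tmp/%s.%i.pstat" % (func.__name__, current_thread().ident))
    return profiled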
@@ -541,7 +537,10 @@ def run(hs):
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
stats[
"daily_active_rooms"
] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
@@ -565,8 +564,7 @@ def run(hs):
logger.info("Reporting stats to matrix.org: %s" % (stats,))
try:
yield hs.get_simple_http_client().put_json(
"https://matrix.org/report-usage-stats/push",
stats
"https://matrix.org/report-usage-stats/push", stats
)
except Exception as e:
logger.warn("Error reporting stats: %s", e)
@@ -581,14 +579,11 @@ def run(hs):
logger.info("report_stats can use psutil")
stats_process.append(process)
except (AttributeError):
logger.warning(
"Unable to read memory/cpu stats. Disabling reporting."
)
logger.warning("Unable to read memory/cpu stats. Disabling reporting.")
def generate_user_daily_visit_stats():
return run_as_background_process(
"generate_user_daily_visits",
hs.get_datastore().generate_user_daily_visits,
"generate_user_daily_visits", hs.get_datastore().generate_user_daily_visits
)
# Rather than update on per session basis, batch up the requests.
@@ -599,9 +594,9 @@ def run(hs):
# monthly active user limiting functionality
def reap_monthly_active_users():
return run_as_background_process(
"reap_monthly_active_users",
hs.get_datastore().reap_monthly_active_users,
"reap_monthly_active_users", hs.get_datastore().reap_monthly_active_users
)
clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)
reap_monthly_active_users()
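[Editor's note: run_as_background_process, threaded through these hunks, gives each periodic job its own logcontext and metrics. The wiring pattern, as used above, is a zero-argument wrapper fired by looping_call (interval again in milliseconds) plus one immediate run at startup:]

def reap_monthly_active_users():
    return run_as_background_process(
        "reap_monthly_active_users", hs.get_datastore().reap_monthly_active_users
    )

clock.looping_call(reap_monthly_active_users, 1000 * 60 * 60)  # hourly
reap_monthly_active_users()  # and once at startup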
@@ -619,8 +614,7 @@ def run(hs):
def start_generate_monthly_active_users():
return run_as_background_process(
"generate_monthly_active_users",
generate_monthly_active_users,
"generate_monthly_active_users", generate_monthly_active_users
)
start_generate_monthly_active_users()
@@ -646,7 +640,6 @@ def run(hs):
gc_thresholds=hs.config.gc_thresholds,
pid_file=hs.config.pid_file,
daemonize=hs.config.daemonize,
cpu_affinity=hs.config.cpu_affinity,
print_pidfile=hs.config.print_pidfile,
logger=logger,
)
@@ -660,5 +653,5 @@ def main():
run(hs)
if __name__ == '__main__':
if __name__ == "__main__":
main()
