Compare commits

170 Commits

Author SHA1 Message Date
Erik Johnston
11f5579129 Newsfile 2020-03-30 16:39:06 +01:00
Erik Johnston
22fc68762e Move command processing out of transport 2020-03-30 16:38:11 +01:00
Erik Johnston
4f21c33be3 Remove usage of "conn_id" for presence. (#7128)
* Remove `conn_id` usage for UserSyncCommand.

Each TCP replication connection is assigned a "conn_id", which is used
to give an ID to a remotely connected worker. In a Redis world, there
will no longer be a one-to-one mapping between connection and instance,
so instead we need to replace such usages with an ID generated by the
remote instances and included in the replication commands.

This really only affects UserSyncCommand.

* Add CLEAR_USER_SYNCS command that is sent on shutdown.

This should help with the case where a synchrotron gets restarted
gracefully, rather than relying on the 5 minute timeout.
2020-03-30 16:37:24 +01:00
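For illustration, a minimal sketch of replication commands that carry an instance-chosen ID, as described above. The class and field names are assumptions for the sketch, not Synapse's actual implementation.

```python
# Hypothetical sketch, not Synapse's real classes: commands identify the
# sending worker by an ID the worker itself chooses, rather than by a
# per-connection "conn_id" assigned by the receiving end.
from dataclasses import dataclass


@dataclass
class UserSyncCommand:
    instance_id: str   # chosen by the remote worker; survives reconnects
    user_id: str
    is_syncing: bool
    last_sync_ms: int

    def to_line(self) -> str:
        state = "start" if self.is_syncing else "end"
        return " ".join([self.instance_id, self.user_id, state, str(self.last_sync_ms)])


@dataclass
class ClearUserSyncsCommand:
    """Sent on graceful shutdown so the master can drop this instance's
    syncing users immediately instead of waiting for a timeout."""

    instance_id: str

    def to_line(self) -> str:
        return self.instance_id
```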
David Baker
07569f25d1 Merge pull request #7160 from matrix-org/dbkr/always_send_own_device_list_updates
Always send the user updates to their own device list
2020-03-30 14:34:28 +01:00
Andrew Morgan
104844c1e1 Add explanatory comment 2020-03-30 14:00:11 +01:00
Richard van der Hoff
6486c96b65 Merge pull request #7157 from matrix-org/rev.outbound_device_pokes_tests
Add tests for outbound device pokes
2020-03-30 13:59:07 +01:00
Patrick Cloke
c5f89fba55 Add developer documentation for running a local CAS server (#7147) 2020-03-30 07:28:42 -04:00
David Baker
7406477525 black 2020-03-30 10:18:33 +01:00
David Baker
9fc588e6dc Just add own user ID to the list we track device changes for 2020-03-30 10:11:26 +01:00
Richard van der Hoff
b7da598a61 Always whitelist the login fallback for SSO (#7153)
That fallback sets the redirect URL to itself (so it can process the login
token then return gracefully to the client). This would make it pointless to
ask the user for confirmation, since the URL the confirmation page would be
showing wouldn't be the client's.
2020-03-27 20:24:52 +00:00
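A rough sketch of the whitelisting behaviour described above; the path constant and function name are assumptions, not Synapse's actual identifiers.

```python
# Minimal sketch: the static login fallback is always treated as
# whitelisted, so no confirmation page is interposed for it.
from typing import List

LOGIN_FALLBACK_PREFIX = "/_matrix/static/client/login"  # assumed path constant


def needs_sso_confirmation(redirect_url: str, client_whitelist: List[str]) -> bool:
    """Return True if the SSO flow should show a confirmation page."""
    if redirect_url.startswith(LOGIN_FALLBACK_PREFIX):
        # The fallback redirects to itself to process the login token before
        # returning to the real client, so confirming this URL is pointless.
        return False
    return not any(redirect_url.startswith(prefix) for prefix in client_whitelist)
```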
Brendan Abolivier
84f7eaed16 Improve the UX of the login fallback when using SSO (#7152)
* Don't show the login forms if we're currently logging in with a
  password or a token.
* Submit the SSO login form directly, showing only a spinner to the
  user, in order to eliminate some of the clunkiness of SSO through this
  fallback.
2020-03-27 20:19:54 +00:00
Dirk Klimpel
fb69690761 Admin API to join users to a room. (#7051) 2020-03-27 19:16:43 +00:00
Dirk Klimpel
8327eb9280 Add options to prevent users from changing their profile. (#7096) 2020-03-27 19:15:23 +00:00
txt-file
ae219fb411 update debian installation instructions to recommend installing virtualenv instead of python3-virtualenv (#6892)
* change debian package from python3-virtualenv to virtualenv

The virtualenv package is needed for the virtualenv command. The
virtualenv package depends on python3-virtualenv (at least since
debian jessie) so there is no need to specify python3-virtualenv
additionally.

Signed-off-by: Vieno Hakkerinen <vieno@hakkerinen.eu>

* Add changelog

Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2020-03-27 15:02:00 +00:00
Andrew Morgan
12aa5a7fa7 Ensure is_verified on /_matrix/client/r0/room_keys/keys is a boolean (#7150) 2020-03-27 13:30:22 +00:00
David Vo
fbf0782c63 Only import sqlite3 when type checking (#7155)
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
2020-03-27 13:20:00 +00:00
David Baker
16ee97988a Fix undefined variable & remove debug logging 2020-03-27 12:39:54 +00:00
David Baker
a07e03ce90 black 2020-03-27 12:35:32 +00:00
David Baker
d9965fb8d6 changelog 2020-03-27 12:30:59 +00:00
David Baker
09cc058a4c Always send the user updates to their own device list
This will allow clients to notify users about new devices even if
the user isn't in any rooms (yet).
2020-03-27 12:26:47 +00:00
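A one-line illustration of the change; the function shape is an assumption, not the real handler code.

```python
# Sketch: the user's own ID is always included in the set of users to
# notify about their device-list change, even if they share no rooms yet.
from typing import Set


def users_to_notify_of_device_change(changed_user_id: str, users_who_share_a_room: Set[str]) -> Set[str]:
    return users_who_share_a_room | {changed_user_id}
```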
Richard van der Hoff
665630fcaa Add tests for outbound device pokes 2020-03-27 12:01:37 +00:00
Jason Robinson
7496d3d2f6 Merge pull request #7151 from matrix-org/jaywink/saml-redirect-fix
Allow RedirectResponse in SAML response handler
2020-03-26 22:10:31 +02:00
Patrick Cloke
fa4f12102d Refactor the CAS code (move the logic out of the REST layer to a handler) (#7136) 2020-03-26 15:05:26 -04:00
Jason Robinson
55ca6cf88c Update changelog.d/7151.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-03-26 20:35:50 +02:00
Nektarios Katakis
825fb5d0a5 Don't default to an invalid sqlite config if no database configuration is provided (#6573) 2020-03-26 17:13:14 +00:00
Jason Robinson
060e7dce09 Allow RedirectResponse in SAML response handler
Allow custom SAML handlers to redirect after processing an auth response.

Fixes #7149

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-03-26 19:02:35 +02:00
Dirk Klimpel
e8e2ddb60a Allow server admins to define and enforce a password policy (MSC2000). (#7118) 2020-03-26 16:51:13 +00:00
Patrick Cloke
1c1242acba Validate that the session is not modified during UI-Auth (#7068) 2020-03-26 07:39:34 -04:00
Aaron Raimist
6ca5e56fd1 Remove unused captcha_bypass_secret option (#7137)
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2020-03-25 17:49:34 +00:00
Erik Johnston
4cff617df1 Move catchup of replication streams to worker. (#7024)
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
2020-03-25 14:54:01 +00:00
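An illustrative sketch of the new catch-up flow; the class, method and store-helper names are assumptions, not the real replication code.

```python
# Sketch: on POSITION, the client fills the gap from its own database
# instead of the server replaying old RDATA rows on connect.
class StreamCatchUp:
    def __init__(self, store, handler):
        self.store = store      # assumed: exposes the two async methods used below
        self.handler = handler  # assumed: consumes individual stream rows

    async def on_POSITION(self, stream_name: str, server_token: int) -> None:
        local_token = await self.store.get_local_stream_token(stream_name)
        if local_token < server_token:
            rows = await self.store.get_replication_rows(
                stream_name, from_token=local_token, to_token=server_token
            )
            for token, row in rows:
                await self.handler.on_rdata(stream_name, token, row)
```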
Andrew Morgan
7bab642707 Various cleanups to INSTALL.md (#7141) 2020-03-25 13:56:40 +00:00
Erik Johnston
b1cfaf08af Merge pull request #7133 from matrix-org/erikj/fix_worker_startup
Fix starting workers when federation sending not split out.
2020-03-25 09:42:39 +00:00
Richard van der Hoff
28d9d6e8a9 Remove spurious "name" parameter to default_config
this is never set to anything other than "test", and is a source of unnecessary
boilerplate.
2020-03-24 18:33:49 +00:00
Richard van der Hoff
39230d2171 Clean up some LoggingContext stuff (#7120)
* Pull Sentinel out of LoggingContext

... and drop a few unnecessary references to it

* Factor out LoggingContext.current_context

move `current_context` and `set_context` out to top-level functions.

Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.

* move copy-to-parent into `stop`

this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.

* Replace `LoggingContext.alive` with `finished`

Turn `alive` into `finished` and make it a bit better defined.
2020-03-24 14:45:33 +00:00
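A simplified sketch of the refactor described above, with a plain thread-local standing in for the real mechanism; names are illustrative only.

```python
# Sketch: keep the "current" logging context in module-level functions
# rather than as classmethods on LoggingContext, with a sentinel value
# used when nothing has been set.
import threading

_thread_local = threading.local()
SENTINEL_CONTEXT = object()


def current_context():
    """Return the logging context for the current thread (or the sentinel)."""
    return getattr(_thread_local, "context", SENTINEL_CONTEXT)


def set_current_context(context):
    """Install `context` as the current context and return the previous one."""
    previous = current_context()
    _thread_local.context = context
    return previous
```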
Naugrimm
1fcf9c6f95 Fix CAS redirect url (#6634)
Build the same service URL when requesting the CAS ticket and when calling the proxyValidate URL.
2020-03-24 11:59:04 +00:00
Erik Johnston
d6828c129f Newsfile 2020-03-24 10:36:44 +00:00
Erik Johnston
c816072d47 Fix starting workers when federation sending not split out. 2020-03-24 10:35:00 +00:00
Patrick Cloke
190ab593b7 Use the proper error code when a canonical alias that does not exist is used. (#7109) 2020-03-23 15:21:54 -04:00
Kartikaya Gupta (kats)
e341518f92 Update pre-built package name for FreeBSD (#7107).
Signed-off-by: Kartikaya Gupta <kats@trevize.staktrace.com>
2020-03-23 15:31:02 +00:00
Richard van der Hoff
a564b92d37 Convert *StreamRow classes to inner classes (#7116)
This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.
2020-03-23 13:59:11 +00:00
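A sketch of the layout change using assumed names: the row type is declared inside the stream class that produces it.

```python
# Illustrative only, not the real stream definitions.
from collections import namedtuple


class ReceiptsStream:
    NAME = "receipts"

    # The row format lives right next to the stream it belongs to.
    ReceiptsStreamRow = namedtuple(
        "ReceiptsStreamRow",
        ("room_id", "receipt_type", "user_id", "event_id", "data"),
    )

    @classmethod
    def parse_row(cls, row):
        return cls.ReceiptsStreamRow(*row)
```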
Richard van der Hoff
5126cb1253 Merge branch 'master' into develop 2020-03-23 13:54:29 +00:00
Richard van der Hoff
229eb81498 Merge tag 'v1.12.0'
Synapse 1.12.0 (2020-03-23)
===========================

No significant changes since 1.12.0rc1.

Debian packages and Docker images are rebuilt using the latest versions of
dependency libraries, including Twisted 20.3.0. **Please see security advisory
below**.

Security advisory
-----------------

Synapse may be vulnerable to request-smuggling attacks when it is used with a
reverse-proxy. The vulnerabilities are fixed in Twisted 20.3.0, and are
described in
[CVE-2020-10108](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10108)
and
[CVE-2020-10109](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10109).
For a good introduction to this class of request-smuggling attacks, see
https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn.

We are not aware of these vulnerabilities being exploited in the wild, and
do not believe that they are exploitable with current versions of any reverse
proxies. Nevertheless, we recommend that all Synapse administrators ensure that
they have the latest version of the Twisted library, so that their
installation remains secure.

* Administrators using the [`matrix.org` Docker
  image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
  packages from
  `matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
  should ensure that they have version 1.12.0 installed: these images include
  Twisted 20.3.0.
* Administrators who have [installed Synapse from
  source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
  should upgrade Twisted within their virtualenv by running:
  ```sh
  <path_to_virtualenv>/bin/pip install 'Twisted>=20.3.0'
  ```
* Administrators who have installed Synapse from distribution packages should
  consult the information from their distributions.

The `matrix.org` Synapse instance was not vulnerable to these vulnerabilities.

Advance notice of change to the default `git` branch for Synapse
----------------------------------------------------------------

Currently, the default `git` branch for Synapse is `master`, which tracks the
latest release.

After the release of Synapse 1.13.0, we intend to change this default to
`develop`, which is the development tip. This is more consistent with common
practice and modern `git` usage.

Although we try to keep `develop` in a stable state, there may be occasions
where regressions creep in. Developers and distributors who have scripts which
run builds using the default branch of `Synapse` should therefore consider
pinning their scripts to `master`.

Synapse 1.12.0rc1 (2020-03-19)
==============================

Features
--------

- Changes related to room alias management ([MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432)):
  - Publishing/removing a room from the room directory now requires the user to have a power level capable of modifying the canonical alias, instead of the room aliases. ([\#6965](https://github.com/matrix-org/synapse/issues/6965))
  - Validate the `alt_aliases` property of canonical alias events. ([\#6971](https://github.com/matrix-org/synapse/issues/6971))
  - Users with a power level sufficient to modify the canonical alias of a room can now delete room aliases. ([\#6986](https://github.com/matrix-org/synapse/issues/6986))
  - Implement updated authorization rules and redaction rules for aliases events, from [MSC2261](https://github.com/matrix-org/matrix-doc/pull/2261) and [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#7037](https://github.com/matrix-org/synapse/issues/7037))
  - Stop sending m.room.aliases events during room creation and upgrade. ([\#6941](https://github.com/matrix-org/synapse/issues/6941))
  - Synapse no longer uses room alias events to calculate room names for push notifications. ([\#6966](https://github.com/matrix-org/synapse/issues/6966))
  - The room list endpoint no longer returns a list of aliases. ([\#6970](https://github.com/matrix-org/synapse/issues/6970))
  - Remove special handling of aliases events from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260) added in v1.10.0rc1. ([\#7034](https://github.com/matrix-org/synapse/issues/7034))
- Expose the `synctl`, `hash_password` and `generate_config` commands in the snapcraft package. Contributed by @devec0. ([\#6315](https://github.com/matrix-org/synapse/issues/6315))
- Check that server_name is correctly set before running database updates. ([\#6982](https://github.com/matrix-org/synapse/issues/6982))
- Break down monthly active users by `appservice_id` and emit via Prometheus. ([\#7030](https://github.com/matrix-org/synapse/issues/7030))
- Render a configurable and comprehensible error page if something goes wrong during the SAML2 authentication process. ([\#7058](https://github.com/matrix-org/synapse/issues/7058), [\#7067](https://github.com/matrix-org/synapse/issues/7067))
- Add an optional parameter to control whether other sessions are logged out when a user's password is modified. ([\#7085](https://github.com/matrix-org/synapse/issues/7085))
- Add prometheus metrics for the number of active pushers. ([\#7103](https://github.com/matrix-org/synapse/issues/7103), [\#7106](https://github.com/matrix-org/synapse/issues/7106))
- Improve performance when making HTTPS requests to sygnal, sydent, etc, by sharing the SSL context object between connections. ([\#7094](https://github.com/matrix-org/synapse/issues/7094))

Bugfixes
--------

- When a user's profile is updated via the admin API, also generate a displayname/avatar update for that user in each room. ([\#6572](https://github.com/matrix-org/synapse/issues/6572))
- Fix a couple of bugs in email configuration handling. ([\#6962](https://github.com/matrix-org/synapse/issues/6962))
- Fix an issue affecting worker-based deployments where replication would stop working, necessitating a full restart, after joining a large room. ([\#6967](https://github.com/matrix-org/synapse/issues/6967))
- Fix `duplicate key` error which was logged when rejoining a room over federation. ([\#6968](https://github.com/matrix-org/synapse/issues/6968))
- Prevent user from setting 'deactivated' to anything other than a bool on the v2 PUT /users Admin API. ([\#6990](https://github.com/matrix-org/synapse/issues/6990))
- Fix py35-old CI by using native tox package. ([\#7018](https://github.com/matrix-org/synapse/issues/7018))
- Fix a bug causing `org.matrix.dummy_event` to be included in responses from `/sync`. ([\#7035](https://github.com/matrix-org/synapse/issues/7035))
- Fix a bug that renders UTF-8 text files incorrectly when loaded from media. Contributed by @TheStranjer. ([\#7044](https://github.com/matrix-org/synapse/issues/7044))
- Fix a bug that would cause Synapse to respond with an error about event visibility if a client tried to request the state of a room at a given token. ([\#7066](https://github.com/matrix-org/synapse/issues/7066))
- Repair a data-corruption issue which was introduced in Synapse 1.10, and fixed in Synapse 1.11, and which could cause `/sync` to return with 404 errors about missing events and unknown rooms. ([\#7070](https://github.com/matrix-org/synapse/issues/7070))
- Fix a bug causing account validity renewal emails to be sent even if the feature is turned off in some cases. ([\#7074](https://github.com/matrix-org/synapse/issues/7074))

Improved Documentation
----------------------

- Updated CentOS8 install instructions. Contributed by Richard Kellner. ([\#6925](https://github.com/matrix-org/synapse/issues/6925))
- Fix `POSTGRES_INITDB_ARGS` in the `contrib/docker/docker-compose.yml` example docker-compose configuration. ([\#6984](https://github.com/matrix-org/synapse/issues/6984))
- Change date in [INSTALL.md](./INSTALL.md#tls-certificates) for last date of getting TLS certificates to November 2019. ([\#7015](https://github.com/matrix-org/synapse/issues/7015))
- Document that the fallback auth endpoints must be routed to the same worker node as the register endpoints. ([\#7048](https://github.com/matrix-org/synapse/issues/7048))

Deprecations and Removals
-------------------------

- Remove the unused query_auth federation endpoint per [MSC2451](https://github.com/matrix-org/matrix-doc/pull/2451). ([\#7026](https://github.com/matrix-org/synapse/issues/7026))

Internal Changes
----------------

- Add type hints to `logging/context.py`. ([\#6309](https://github.com/matrix-org/synapse/issues/6309))
- Add some clarifications to `README.md` in the database schema directory. ([\#6615](https://github.com/matrix-org/synapse/issues/6615))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6874](https://github.com/matrix-org/synapse/issues/6874), [\#6875](https://github.com/matrix-org/synapse/issues/6875), [\#6983](https://github.com/matrix-org/synapse/issues/6983), [\#7003](https://github.com/matrix-org/synapse/issues/7003))
- Improve performance of v2 state resolution for large rooms. ([\#6952](https://github.com/matrix-org/synapse/issues/6952), [\#7095](https://github.com/matrix-org/synapse/issues/7095))
- Reduce time spent doing GC, by freezing objects on startup. ([\#6953](https://github.com/matrix-org/synapse/issues/6953))
- Minor performance fixes to `get_auth_chain_ids`. ([\#6954](https://github.com/matrix-org/synapse/issues/6954))
- Don't record remote cross-signing keys in the `devices` table. ([\#6956](https://github.com/matrix-org/synapse/issues/6956))
- Use flake8-comprehensions to enforce good hygiene of list/set/dict comprehensions. ([\#6957](https://github.com/matrix-org/synapse/issues/6957))
- Merge worker apps together. ([\#6964](https://github.com/matrix-org/synapse/issues/6964), [\#7002](https://github.com/matrix-org/synapse/issues/7002), [\#7055](https://github.com/matrix-org/synapse/issues/7055), [\#7104](https://github.com/matrix-org/synapse/issues/7104))
- Remove redundant `store_room` call from `FederationHandler._process_received_pdu`. ([\#6979](https://github.com/matrix-org/synapse/issues/6979))
- Update warning for incorrect database collation/ctype to include link to documentation. ([\#6985](https://github.com/matrix-org/synapse/issues/6985))
- Add some type annotations to the database storage classes. ([\#6987](https://github.com/matrix-org/synapse/issues/6987))
- Port `synapse.handlers.presence` to async/await. ([\#6991](https://github.com/matrix-org/synapse/issues/6991), [\#7019](https://github.com/matrix-org/synapse/issues/7019))
- Add some type annotations to the federation base & client classes. ([\#6995](https://github.com/matrix-org/synapse/issues/6995))
- Port `synapse.rest.keys` to async/await. ([\#7020](https://github.com/matrix-org/synapse/issues/7020))
- Add a type check to `is_verified` when processing room keys. ([\#7045](https://github.com/matrix-org/synapse/issues/7045))
- Add type annotations and comments to the auth handler. ([\#7063](https://github.com/matrix-org/synapse/issues/7063))
2020-03-23 13:54:17 +00:00
Richard van der Hoff
88bb6c27e1 matrix.org was fine 2020-03-23 13:38:30 +00:00
Neil Johnson
066804f591 Update CHANGES.md 2020-03-23 13:36:16 +00:00
Richard van der Hoff
56b5f1d0ee changelog typos 2020-03-23 13:23:21 +00:00
Richard van der Hoff
a438950a00 1.12.0 changelog 2020-03-23 13:00:40 +00:00
Richard van der Hoff
2fa55c0cc6 1.12.0 2020-03-23 12:13:09 +00:00
Richard van der Hoff
b3cee0ce67 Fix processing of groups stream, and use symbolic names for streams (#7117)
`groups` != `receipts`

Introduced in #6964
2020-03-23 11:39:36 +00:00
Dionysis Grigoropoulos
96071eea8f Set Referrer-Policy to no-referrer for media (#7009) 2020-03-23 09:48:28 +00:00
Patrick Cloke
477c4f5b1c Clean-up some auth/login REST code (#7115) 2020-03-20 16:22:47 -04:00
Richard van der Hoff
c165c1233b Improve database configuration docs (#6988)
Attempts to clarify the sample config for databases, and add some stuff about
tcp keepalives to `postgres.md`.
2020-03-20 15:24:22 +00:00
Erik Johnston
fdb1344716 Remove concept of a non-limited stream. (#7011) 2020-03-20 14:40:47 +00:00
Patrick Cloke
caec7d4fa0 Convert some of the media REST code to async/await (#7110) 2020-03-20 07:20:02 -04:00
Patrick Cloke
c2db6599c8 Fix a bug in the federation API which could cause occasional "Failed to get PDU" errors (#7089). 2020-03-19 08:22:56 -04:00
Erik Johnston
a319cb1dd1 Change device list streams to have one row per ID (#7010)
* Add 'device_lists_outbound_pokes' as extra table.

This makes sure we check all the relevant tables to get the current max
stream ID.

Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.

* Change device lists stream to have one row per id.

This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.

* Change device list replication to match new semantics.

Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).

* Newsfile

* Remove handling of multiple rows per ID

* Fix worker handling

* Comments from review
2020-03-19 11:36:53 +00:00
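A sketch of the new replication semantics with an assumed row shape: one row per entity (a user ID or a remote host) rather than a batch of tuples per stream ID, which lets consumers process the stream incrementally.

```python
# Illustrative only, not the real stream row format.
from typing import List, Tuple


def build_device_list_rows(stream_id: int, entities: List[str]) -> List[Tuple[int, str]]:
    """Turn one device-list change into one (stream_id, entity) row per entity."""
    return [(stream_id, entity) for entity in entities]


# e.g. build_device_list_rows(42, ["@alice:example.org", "remote.example.com"])
# -> [(42, "@alice:example.org"), (42, "remote.example.com")]
```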
Richard van der Hoff
c8c926f9c9 more changelog 2020-03-19 11:26:51 +00:00
Richard van der Hoff
163f23785a changelog fixes 2020-03-19 11:25:32 +00:00
Richard van der Hoff
5aa6dff99e fix typo 2020-03-19 11:15:48 +00:00
Richard van der Hoff
e43e78b985 1.12.0rc1 2020-03-19 11:07:16 +00:00
Richard van der Hoff
782b811789 update grafana dashboard 2020-03-19 10:45:40 +00:00
Richard van der Hoff
e913823a22 Fix concurrent modification errors in pusher metrics (#7106)
add a lock to try to make this metric actually work
2020-03-19 10:28:49 +00:00
Richard van der Hoff
8c75667ad7 Add prometheus metrics for the number of active pushers (#7103) 2020-03-19 10:00:24 +00:00
Richard van der Hoff
443162e577 Move pusherpool startup into _base.setup (#7104)
This should be safe to do on all workers/masters because it is guarded by
a config option which will ensure it is only actually done on the worker
assigned as a pusher.
2020-03-19 09:48:45 +00:00
Erik Johnston
4a17a647a9 Improve get auth chain difference algorithm. (#7095)
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot of work if the state and auth chains are large.

Instead, let's try and fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
2020-03-18 16:46:41 +00:00
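For context, a toy in-memory version of the quantity being computed (the union of the state sets' auth chains minus their intersection). The change above alters how this is computed, fetching the chains concurrently and bailing out early once they converge, rather than materialising full chains like this sketch does.

```python
from typing import Callable, Iterable, Set


def auth_chain_difference(
    state_sets: Iterable[Set[str]],
    get_auth_parents: Callable[[str], Set[str]],
) -> Set[str]:
    """Union of the state sets' auth chains minus their intersection."""

    def auth_chain(events: Set[str]) -> Set[str]:
        seen: Set[str] = set()
        todo = set(events)
        while todo:
            event_id = todo.pop()
            for parent in get_auth_parents(event_id):
                if parent not in seen:
                    seen.add(parent)
                    todo.add(parent)
        return seen

    chains = [auth_chain(s) for s in state_sets]
    return set.union(*chains) - set.intersection(*chains)
```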
Patrick Cloke
88b41986db Add an option to the set password API to choose whether to logout other devices. (#7085) 2020-03-18 07:50:00 -04:00
Erik Johnston
6e6476ef07 Comments from review 2020-03-18 10:13:55 +00:00
Richard von Kellner
6d110ddea4 Update INSTALL.md: updated CentOS8 install instructions (#6925) 2020-03-17 21:48:23 +00:00
Richard van der Hoff
c37db0211e Share SSL contexts for non-federation requests (#7094)
Extends #5794 etc to the SimpleHttpClient so that it also applies to non-federation requests.

Fixes #7092.
2020-03-17 21:32:25 +00:00
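An illustrative sketch using the standard-library `ssl` module (Synapse's actual client goes through Twisted): build the client TLS context once and reuse it for every outbound HTTPS request instead of constructing a new one per connection.

```python
import ssl

_SHARED_CLIENT_TLS_CONTEXT = ssl.create_default_context()


def get_client_tls_context() -> ssl.SSLContext:
    """Return the process-wide client context; handshake state stays per-connection."""
    return _SHARED_CLIENT_TLS_CONTEXT
```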
Richard van der Hoff
4ce50519cd Update postgres.md
fix broken link
2020-03-17 18:08:43 +00:00
The Stranjer
5e477c1deb Set charset to utf-8 when adding headers for certain text content types (#7044)
Fixes #7043
2020-03-17 13:29:09 +00:00
Patrick Cloke
7581d30e9f Remove unused federation endpoint (query_auth) (#7026) 2020-03-17 08:04:49 -04:00
Patrick Cloke
60724c46b7 Remove special casing of m.room.aliases events (#7034) 2020-03-17 07:37:04 -04:00
Richard van der Hoff
6a35046363 Revert "Add options to disable setting profile info for prevent changes. (#7053)"
This reverts commit 54dd28621b, reversing
changes made to 6640460d05.
2020-03-17 11:25:01 +00:00
Brendan Abolivier
7df04ca0e6 Populate the room version from state events (#7070)
Fixes #7065 

This is basically the same as https://github.com/matrix-org/synapse/pull/6847 except it tries to populate events from `state_events` rather than `current_state_events`, since the latter might have been cleared from the state of some rooms too early, leaving them with a `NULL` room version.
2020-03-16 22:31:47 +00:00
Brendan Abolivier
beb19cf61a Fix buggy condition in account validity handler (#7074) 2020-03-16 12:16:30 +00:00
Brendan Abolivier
d8d91983bc Merge pull request #7067 from matrix-org/babolivier/saml_error_moar
Move the default SAML2 error HTML to a dedicated file
2020-03-13 19:53:19 +00:00
Brendan Abolivier
ebfcbbff9c Use innerText instead of innerHTML 2020-03-13 19:09:22 +00:00
Patrick Cloke
77d0a4507b Add type annotations and comments to auth handler (#7063) 2020-03-12 11:36:27 -04:00
Brendan Abolivier
0de9f9486a Lint 2020-03-11 20:39:18 +00:00
Brendan Abolivier
f9e98176bf Put the file in the templates directory 2020-03-11 20:31:42 +00:00
Brendan Abolivier
bd5e555b0d Merge pull request #7066 from matrix-org/babolivier/dummy_events_state
Skip the correct visibility checks when checking the visibility of the state at a given event
2020-03-11 20:07:58 +00:00
Brendan Abolivier
900bca9707 Update wording and config 2020-03-11 19:40:30 +00:00
Brendan Abolivier
e55a240681 Changelog 2020-03-11 19:37:04 +00:00
Brendan Abolivier
b8cfe79ffc Move the default SAML2 error HTML to a dedicated file
Also add some JS to it to process any error we might have in the URI
(see #6893).
2020-03-11 19:33:16 +00:00
Brendan Abolivier
8120a238a4 Refactor a bit 2020-03-11 18:49:41 +00:00
Brendan Abolivier
37a9873f63 Also don't fail on aliases events in this case 2020-03-11 18:43:41 +00:00
Brendan Abolivier
e38c44b418 Lint 2020-03-11 18:06:07 +00:00
Brendan Abolivier
1cde4cf3f1 Changelog 2020-03-11 18:03:56 +00:00
Brendan Abolivier
2dce68c651 Also don't filter out events sent by ignored users when checking state visibility 2020-03-11 17:53:22 +00:00
Brendan Abolivier
9c0775e86a Fix condition 2020-03-11 17:53:18 +00:00
Brendan Abolivier
69ce55c510 Don't filter out dummy events when we're checking the visibility of state 2020-03-11 17:52:54 +00:00
Brendan Abolivier
54dd28621b Add options to disable setting profile info for prevent changes. (#7053) 2020-03-10 22:23:01 +00:00
Dirk Klimpel
751d51dd12 Update sample_config.yaml 2020-03-10 21:41:25 +01:00
Dirk Klimpel
42ac4ca477 Update synapse/config/registration.py
Co-Authored-By: Brendan Abolivier <github@brendanabolivier.com>
2020-03-10 21:26:55 +01:00
Brendan Abolivier
6640460d05 Merge pull request #7058 from matrix-org/babolivier/saml_error_html
SAML2: render a comprehensible error page if something goes wrong
2020-03-10 18:42:15 +00:00
Brendan Abolivier
8f826f98ac Rephrase default message 2020-03-10 17:22:45 +00:00
Brendan Abolivier
dc6fb56c5f Hopefully mypy is happy now 2020-03-10 14:40:28 +00:00
Brendan Abolivier
fe593ef990 Attempt at appeasing the gods of mypy 2020-03-10 14:19:06 +00:00
Brendan Abolivier
5ec2077bf9 Lint 2020-03-10 14:04:20 +00:00
Brendan Abolivier
156f271867 Changelog 2020-03-10 14:01:24 +00:00
Brendan Abolivier
51c094c4ac Update sample config 2020-03-10 14:00:29 +00:00
Brendan Abolivier
6b0efe73e2 SAML2: render a comprehensible error page if something goes wrong
If an error happens while processing a SAML AuthN response, or a client
ends up doing a `GET` request to `/authn_response`, then render a
customisable error page rather than a confusing error.
2020-03-10 13:59:22 +00:00
dklimpel
39f6595b4a lint, fix tests 2020-03-09 22:13:20 +01:00
dklimpel
885134529f updates after review 2020-03-09 22:09:29 +01:00
dklimpel
7e5f40e771 fix tests 2020-03-09 21:00:36 +01:00
dklimpel
50ea178c20 lint 2020-03-09 19:57:04 +01:00
dklimpel
04f4b5f6f8 add tests 2020-03-09 19:51:31 +01:00
Brendan Abolivier
14b2ebe767 Merge pull request #7055 from matrix-org/babolivier/get_time_of_last_push_action_before
Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
2020-03-09 14:53:50 +00:00
Brendan Abolivier
f9e3a3f4d0 Changelog
It's the same as in #6964 since it's the most likely cause of the bug
and that change hasn't been released yet.
2020-03-09 14:21:01 +00:00
Brendan Abolivier
aee2bae952 Fix undefined room_id in make_summary_text
This would break notifications about un-named rooms when processing
notifications in a batch.
2020-03-09 14:10:19 +00:00
Brendan Abolivier
87c65576e0 Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
Fixes #7054

I also had a look at the rest of the functions in
`EventPushActionsStore` and in the push notifications send code and it
looks to me like there shouldn't be any other method with this issue in
this part of the codebase.
2020-03-09 13:58:38 +00:00
Patrick Cloke
06eb5cae08 Remove special auth and redaction rules for aliases events in experimental room ver. (#7037) 2020-03-09 08:58:25 -04:00
Patrick Cloke
66315d862f Update routing of fallback auth in the worker docs. (#7048) 2020-03-09 07:19:24 -04:00
Brendan Abolivier
bbf725e7da Merge pull request #7045 from matrix-org/babolivier/room_keys_check
Make sure that is_verified is a boolean when processing room keys
2020-03-09 09:54:48 +00:00
dklimpel
99bbe177b6 add disable_3pid_changes 2020-03-08 21:58:12 +01:00
dklimpel
20545a2199 lint2 2020-03-08 15:28:00 +01:00
dklimpel
ce460dc31c lint 2020-03-08 15:22:43 +01:00
dklimpel
fb078f921b changelog 2020-03-08 15:19:07 +01:00
dklimpel
1f5f3ae8b1 Add options to disable setting profile info for prevent changes. 2020-03-08 14:49:33 +01:00
Neil Pilgrim
2bff4457d9 Add type hints to logging/context.py (#6309)
* Add type hints to logging/context.py

Signed-off-by: neiljp (Neil Pilgrim) <github@kepier.clara.net>
2020-03-07 17:57:26 +00:00
Neil Johnson
1d66dce83e Break down monthly active users by appservice_id (#7030)
* Break down monthly active users by appservice_id and emit via prometheus.

Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2020-03-06 18:14:19 +00:00
Brendan Abolivier
54b78a0e3b Lint 2020-03-06 15:11:13 +00:00
Brendan Abolivier
297aaf4816 Mention the session ID in the error message 2020-03-06 15:07:41 +00:00
Brendan Abolivier
45df9d35a9 Lint 2020-03-06 11:10:52 +00:00
Brendan Abolivier
a27056d539 Changelog 2020-03-06 11:06:47 +00:00
Brendan Abolivier
80e580ae92 Make sure that is_verified is a boolean when processing room keys 2020-03-06 11:05:00 +00:00
Patrick Cloke
87972f07e5 Convert remote key resource REST layer to async/await. (#7020) 2020-03-05 11:29:56 -05:00
Richard van der Hoff
78a15b1f9d Store room_versions in EventBase objects (#6875)
This is a bit fiddly because it all has to be done in one fell swoop:

* Wherever we create a new event, pass in the room version (and check it matches the format version)
* When we prune an event, use the room version of the unpruned event to create the pruned version.
* When we pass an event over the replication protocol, pass the room version over alongside it, and use it when deserialising the event again.
2020-03-05 15:46:44 +00:00
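A simplified sketch of the idea, with illustrative types and field names rather than the real classes: the room version travels with the event, so pruning and (de)serialisation never have to look it up or guess the format.

```python
from dataclasses import dataclass
from typing import Any, Dict


@dataclass(frozen=True)
class RoomVersion:
    identifier: str
    event_format: int


@dataclass
class EventBase:
    event_dict: Dict[str, Any]
    room_version: RoomVersion


def prune_event(event: EventBase) -> EventBase:
    """Build the pruned copy using the unpruned event's own room version."""
    allowed_keys = {"type", "room_id", "sender", "state_key", "depth",
                    "prev_events", "auth_events", "origin_server_ts"}
    pruned = {k: v for k, v in event.event_dict.items() if k in allowed_keys}
    return EventBase(event_dict=pruned, room_version=event.room_version)
```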
Brendan Abolivier
fe678a0900 Merge pull request #7035 from matrix-org/babolivier/hide_dummy_events
Hide extremities dummy events from clients
2020-03-05 10:51:19 +00:00
Brendan Abolivier
83b6c69d3d Changelog 2020-03-04 17:29:09 +00:00
Brendan Abolivier
31a2116331 Hide extremities dummy events from clients 2020-03-04 17:28:13 +00:00
Patrick Cloke
13892776ef Allow deleting an alias if the user has sufficient power level (#6986) 2020-03-04 11:30:46 -05:00
Richard van der Hoff
8ef8fb2c1c Read the room version from database when fetching events (#6874)
This is a precursor to giving EventBase objects the knowledge of which room version they belong to.
2020-03-04 13:11:04 +00:00
Brendan Abolivier
43f874055d Merge branch 'master' into develop 2020-03-03 15:20:49 +00:00
Brendan Abolivier
6b0ef34706 Update debian changelog 2020-03-03 15:01:43 +00:00
Brendan Abolivier
fe6ab0439d Merge branch 'babolivier/v1.11.1-changelog' into 'release-v1.11.1'
v1.11.1

See merge request new-vector/synapse!6
2020-03-03 14:58:37 +00:00
Brendan Abolivier
fd983fad96 v1.11.1 2020-03-03 14:58:37 +00:00
Patrick Cloke
7dcbc33a1b Validate the alt_aliases property of canonical alias events (#6971) 2020-03-03 07:12:45 -05:00
Brendan Abolivier
6a8880b9c3 Merge branch 'babolivier/complete_sso_login_saml' into 'release-v1.11.1'
Fix wrong handler being used in SAML handler

See merge request new-vector/synapse!5
2020-03-03 11:29:07 +00:00
Brendan Abolivier
a0178df104 Fix wrong handler being used in SAML handler 2020-03-03 11:29:07 +00:00
Brendan Abolivier
6f67a8b570 Merge branch 'babolivier/sso_module_api' into 'release-v1.11.1'
Factor out complete_sso_login and expose it to the Module API

See merge request new-vector/synapse!4
2020-03-03 10:54:44 +00:00
Brendan Abolivier
65c73cdfec Factor out complete_sso_login and expose it to the Module API 2020-03-03 10:54:44 +00:00
Richard van der Hoff
809e8567f6 Merge branch 'rav/sso-confirm-whitelist' into 'release-v1.11.1'
Add a whitelist for the SSO confirmation step.

See merge request new-vector/synapse!3
2020-03-02 17:05:09 +00:00
Richard van der Hoff
b68041df3d Add a whitelist for the SSO confirmation step. 2020-03-02 17:05:09 +00:00
Erik Johnston
65a941d1f8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fixup_devices_stream 2020-03-02 16:55:55 +00:00
Erik Johnston
b29474e0aa Always return a deferred from get_current_state_deltas. (#7019)
This currently causes presence notify code to log exceptions when there
are no state changes to process. This doesn't actually cause any problems
as we'd simply do nothing anyway.
2020-03-02 16:52:15 +00:00
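A sketch of the pattern with assumed names: return a Deferred on the short-circuit path too, so callers always get the same type back.

```python
from twisted.internet import defer


def get_current_state_deltas(store, prev_stream_id: int, max_stream_id: int):
    if prev_stream_id == max_stream_id:
        # No state changes to process: still hand back a Deferred, not a bare value.
        return defer.succeed(([], max_stream_id))
    # Assumed: the store method itself returns a Deferred.
    return store.fetch_state_deltas(prev_stream_id, max_stream_id)
```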
Richard van der Hoff
27d099edd6 Merge remote-tracking branch 'origin/release-v1.11.1' into release-v1.11.1 2020-03-02 16:43:33 +00:00
Brendan Abolivier
2e7fad87d4 Merge branch 'anoabolivier/sso-confirm' into 'release-v1.11.1'
Add a confirmation step to the SSO login flow

See merge request new-vector/synapse!2
2020-03-02 16:36:32 +00:00
Brendan Abolivier
b2bd54a2e3 Add a confirmation step to the SSO login flow 2020-03-02 16:36:32 +00:00
Erik Johnston
3ab8e9c293 Fix py35-old CI by using native tox. (#7018)
I'm not really sure how this was going wrong, but this seems like the
right approach anyway.
2020-03-02 16:17:11 +00:00
Richard van der Hoff
174aaa1d62 remove spurious changelog 2020-03-02 14:53:56 +00:00
Richard van der Hoff
036c6cea07 Merge branch 'release-v1.11.1' into develop 2020-03-02 14:53:10 +00:00
Dirk Klimpel
bbeee33d63 Fix setting a user as an admin with the new API (#6928)
Fix #6910
2020-03-02 13:28:50 +00:00
Erik Johnston
e53744c737 Fix worker handling 2020-03-02 12:52:28 +00:00
Matthew Hodgson
cc7ab0d84a rst->md 2020-03-01 21:21:36 +00:00
Uday Bansal
e4ffb14d57 Fix last date for ACMEv1 install (#7015)
Support for getting TLS certificates through ACMEv1 ended in November 2019.

Signed-off-by: Uday Bansal <43824981+udaybansal19@users.noreply.github.com>
2020-02-29 23:37:23 +00:00
Sandro
d96ac97d29 Fix mounting of homeserver.yaml when it does not exist on host (#6913)
Signed-off-by: Sandro Jäckel <sandro.jaeckel@gmail.com>
2020-02-29 23:32:26 +00:00
Patrick Cloke
12d4259000 Add some type annotations to the federation base & client classes (#6995) 2020-02-28 07:31:07 -05:00
Erik Johnston
f70f44abc7 Remove handling of multiple rows per ID 2020-02-28 11:45:35 +00:00
Erik Johnston
59ad93d2a4 Newsfile 2020-02-28 11:27:37 +00:00
Erik Johnston
9ce4e344a8 Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
2020-02-28 11:25:34 +00:00
Erik Johnston
f5caa1864e Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
2020-02-28 11:21:25 +00:00
Erik Johnston
c3c6c0e622 Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.

Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
2020-02-28 11:15:11 +00:00
Dirk Klimpel
9b06d8f8a6 Fix setting a user as an admin with the new API (#6928)
Fix #6910
2020-02-28 09:58:05 +00:00
Patrick Cloke
ab0073a6c0 Merge remote-tracking branch 'origin/release-v1.11.1' into develop 2020-02-27 13:47:44 -05:00
Erik Johnston
2201bc9795 Don't refuse to start worker if media listener configured. (#7002)
Instead, let's just warn if the worker has a media listener configured but
has the media repository disabled.

Previously, non-media-repository workers would just ignore the media
listener.
2020-02-27 16:33:21 +00:00
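A sketch of the relaxed start-up check; the function and parameter names are assumptions, not the real config code.

```python
import logging
from typing import List

logger = logging.getLogger(__name__)


def filter_listener_resources(resources: List[str], media_repo_enabled: bool) -> List[str]:
    """Warn and drop the 'media' resource instead of refusing to start the worker."""
    if "media" in resources and not media_repo_enabled:
        logger.warning(
            "Worker has a 'media' listener configured but the media repository "
            "is disabled; ignoring the media listener resource."
        )
        return [r for r in resources if r != "media"]
    return resources
```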
Richard van der Hoff
cab4a52535 set worker_app for frontend proxy test (#7003)
to stop the federationhandler trying to do master stuff
2020-02-27 13:08:43 +00:00
James
b32ac60c22 Expose common commands via snap run interface to allow easier invocation (#6315)
Signed-off-by: James Hebden <james@ec0.io>
2020-02-27 12:47:40 +00:00
Richard van der Hoff
132b673dbe Add some type annotations in synapse.storage (#6987)
I cracked, and added some type definitions in synapse.storage.
2020-02-27 11:53:40 +00:00
Patrick Cloke
380122866f Cast a coroutine into a Deferred in the federation base (#6996)
Properly convert a coroutine into a Deferred in federation_base to fix an error when joining a room.
2020-02-26 11:32:13 -05:00
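A minimal illustration of the fix: Twisted-era callers expect a Deferred, so an async-def coroutine has to be wrapped with `ensureDeferred` before being handed back (returning the bare coroutine is what produced the `'coroutine' object has no attribute 'event_id'` error noted in the 1.11.1 changelog). The function names here are stand-ins, not the real federation code.

```python
from twisted.internet import defer


async def _check_sigs_and_hash(event_dict: dict) -> dict:
    # Stand-in for the real signature/hash checks.
    return event_dict


def check_event(event_dict: dict) -> "defer.Deferred":
    # ensureDeferred converts the coroutine into a Deferred that Twisted
    # callback machinery can consume.
    return defer.ensureDeferred(_check_sigs_and_hash(event_dict))
```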
243 changed files with 7367 additions and 3083 deletions


@@ -6,12 +6,7 @@
set -ex
apt-get update
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev
# workaround for https://github.com/jaraco/zipp/issues/40
python3.5 -m pip install 'setuptools>=34.4.0'
python3.5 -m pip install tox
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev tox
export LANG="C.UTF-8"


@@ -1,3 +1,155 @@
[CHANGES.md addition: the Synapse 1.12.0 (2020-03-23) and 1.12.0rc1 (2020-03-19) changelog entries, identical to the changelog quoted in the v1.12.0 merge commit above.]
Synapse 1.11.1 (2020-03-03)
===========================
This release includes a security fix impacting installations using Single Sign-On (i.e. SAML2 or CAS) for authentication. Administrators of such installations are encouraged to upgrade as soon as possible.
The release also includes fixes for a couple of other bugs.
Bugfixes
--------
- Add a confirmation step to the SSO login flow before redirecting users to the redirect URL. ([b2bd54a2](https://github.com/matrix-org/synapse/commit/b2bd54a2e31d9a248f73fadb184ae9b4cbdb49f9), [65c73cdf](https://github.com/matrix-org/synapse/commit/65c73cdfec1876a9fec2fd2c3a74923cd146fe0b), [a0178df1](https://github.com/matrix-org/synapse/commit/a0178df10422a76fd403b82d2b2a4ed28a9a9d1e))
- Fixed setting a user as an admin with the admin API `PUT /_synapse/admin/v2/users/<user_id>`. Contributed by @dklimpel. ([\#6910](https://github.com/matrix-org/synapse/issues/6910))
- Fix bug introduced in Synapse 1.11.0 which sometimes caused errors when joining rooms over federation, with `'coroutine' object has no attribute 'event_id'`. ([\#6996](https://github.com/matrix-org/synapse/issues/6996))
Synapse 1.11.0 (2020-02-21)
===========================


@@ -2,7 +2,6 @@
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Troubleshooting Installation](#troubleshooting-installation)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
@@ -10,6 +9,7 @@
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
- [Troubleshooting Installation](#troubleshooting-installation)
# Choosing your server name
@@ -70,7 +70,7 @@ pip install -U matrix-synapse
```
Before you can start Synapse, you will need to generate a configuration
file. To do this, run (in your virtualenv, as before)::
file. To do this, run (in your virtualenv, as before):
```
cd ~/synapse
@@ -84,22 +84,24 @@ python -m synapse.app.homeserver \
... substituting an appropriate value for `--server-name`.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your Home Server to
identify itself to other Home Servers, so don't lose or delete them. It would be
also generate a set of keys for you. These keys will allow your homeserver to
identify itself to other homeserver, so don't lose or delete them. It would be
wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your Home Server's keys, you may find that other Home Servers have the
change your homeserver's keys, you may find that other homeserver have the
old key cached. If you update the signing key, you should change the name of the
key in the `<server name>.signing.key` file (the second word) to something
different. See the
[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
for more information on key management.)
for more information on key management).
To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. `~/synapse`), and::
run (e.g. `~/synapse`), and:
cd ~/synapse
source env/bin/activate
synctl start
```
cd ~/synapse
source env/bin/activate
synctl start
```
### Platform-Specific Instructions
@@ -110,7 +112,7 @@ Installing prerequisites on Ubuntu or Debian:
```
sudo apt-get install build-essential python3-dev libffi-dev \
python3-pip python3-setuptools sqlite3 \
libssl-dev python3-virtualenv libjpeg-dev libxslt1-dev
libssl-dev virtualenv libjpeg-dev libxslt1-dev
```
#### ArchLinux
@@ -124,12 +126,21 @@ sudo pacman -S base-devel python python-pip \
#### CentOS/Fedora
Installing prerequisites on CentOS 7 or Fedora 25:
Installing prerequisites on CentOS 8 or Fedora>26:
```
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
sudo dnf groupinstall "Development Tools"
```
Installing prerequisites on CentOS 7 or Fedora<=25:
```
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python-virtualenv libffi-devel openssl-devel
python3-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
```
@@ -179,7 +190,7 @@ doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
There is currently no port for OpenBSD. Additionally, OpenBSD's security
settings require a slightly more difficult installation process.
XXX: I suspect this is out of date.
(XXX: I suspect this is out of date)
1. Create a new directory in `/usr/local` called `_synapse`. Also, create a
new user called `_synapse` and set that directory as the new user's home.
@@ -187,7 +198,7 @@ XXX: I suspect this is out of date.
write and execute permissions on the same memory space to be run from
`/usr/local`.
2. `su` to the new `_synapse` user and change to their home directory.
3. Create a new virtualenv: `virtualenv -p python2.7 ~/.synapse`
3. Create a new virtualenv: `virtualenv -p python3 ~/.synapse`
4. Source the virtualenv configuration located at
`/usr/local/_synapse/.synapse/bin/activate`. This is done in `ksh` by
using the `.` command, rather than `bash`'s `source`.
@@ -208,45 +219,6 @@ be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
for Windows Server.
### Troubleshooting Installation
XXX a bunch of this is no longer relevant.
Synapse requires pip 8 or later, so if your OS provides too old a version you
may need to manually upgrade it::
sudo pip install --upgrade pip
Installing may fail with `Could not find any downloads that satisfy the requirement pymacaroons-pynacl (from matrix-synapse==0.12.0)`.
You can fix this by manually upgrading pip and virtualenv::
sudo pip install --upgrade virtualenv
You can next rerun `virtualenv -p python3 synapse` to update the virtual env.
Installing may fail during installing virtualenv with `InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.`
You can fix this by manually installing ndg-httpsclient::
pip install --upgrade ndg-httpsclient
Installing may fail with `mock requires setuptools>=17.1. Aborting installation`.
You can fix this by upgrading setuptools::
pip install --upgrade setuptools
If pip crashes mid-installation for reason (e.g. lost terminal), pip may
refuse to run until you remove the temporary installation directory it
created. To reset the installation::
rm -rf /tmp/pip_install_matrix
pip seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.::
pip install twisted
## Prebuilt packages
As an alternative to installing from source, prebuilt packages are available
@@ -305,7 +277,7 @@ For `buster` and `sid`, Synapse is available in the Debian repositories and
it should be possible to install it with simply:
```
sudo apt install matrix-synapse
sudo apt install matrix-synapse
```
There is also a version of `matrix-synapse` in `stretch-backports`. Please see
@@ -366,15 +338,17 @@ sudo pip install py-bcrypt
Synapse can be found in the void repositories as 'synapse':
xbps-install -Su
xbps-install -S synapse
```
xbps-install -Su
xbps-install -S synapse
```
### FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py27-matrix-synapse`
- Packages: `pkg install py37-matrix-synapse`
### NixOS
@@ -411,6 +385,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
resources:
- names: [client, federation]
```
* You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You can either
point these settings at an existing certificate and key, or you can
@@ -418,7 +393,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
for having Synapse automatically provision and renew federation
certificates through ACME can be found at [ACME.md](docs/ACME.md).
Note that, as pointed out in that document, this feature will not
work with installs set up after November 2020.
work with installs set up after November 2019.
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
@@ -426,7 +401,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
`cert.pem`).
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md)
[federate.md](docs/federate.md).
## Email
@@ -473,7 +448,7 @@ on your server even if `enable_registration` is `false`.
## Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
## URL previews
@@ -482,10 +457,24 @@ turn it on you must enable the `url_preview_enabled: True` config parameter
and explicitly specify the IP ranges that Synapse is not allowed to spider for
previewing in the `url_preview_ip_range_blacklist` configuration parameter.
This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed. This in turn requires the libxml2 library to be available - on
This also requires the optional `lxml` and `netaddr` python dependencies to be
installed. This in turn requires the `libxml2` library to be available - on
Debian/Ubuntu this means `apt-get install libxml2-dev`, or equivalent for
your OS.
# Troubleshooting Installation
`pip` seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.:
```
pip install twisted
```
If you have any other problems, feel free to ask in
[#synapse:matrix.org](https://matrix.to/#/#synapse:matrix.org).

View File

@@ -1 +0,0 @@
When a user's profile is updated via the admin API, also generate a displayname/avatar update for that user in each room.

1
changelog.d/6573.bugfix Normal file
View File

@@ -0,0 +1 @@
Don't attempt to use an invalid sqlite config if no database configuration is provided. Contributed by @nekatak.

View File

@@ -1 +0,0 @@
Add some clarifications to `README.md` in the database schema directory.

1
changelog.d/6634.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix single-sign on with CAS systems: pass the same service URL when requesting the CAS ticket and when calling the `proxyValidate` URL. Contributed by @Naugrimm.

1
changelog.d/6892.doc Normal file
View File

@@ -0,0 +1 @@
Update Debian installation instructions to recommend installing the `virtualenv` package instead of `python3-virtualenv`.

View File

@@ -1 +0,0 @@
Stop sending m.room.aliases events during room creation and upgrade.

View File

@@ -1 +0,0 @@
Improve perf of v2 state res for large rooms.

View File

@@ -1 +0,0 @@
Reduce time spent doing GC by freezing objects on startup.

View File

@@ -1 +0,0 @@
Minor perf fixes to `get_auth_chain_ids`.

View File

@@ -1 +0,0 @@
Don't record remote cross-signing keys in the `devices` table.

View File

@@ -1 +0,0 @@
Use flake8-comprehensions to enforce good hygiene of list/set/dict comprehensions.

View File

@@ -1 +0,0 @@
Fix a couple of bugs in email configuration handling.

View File

@@ -1 +0,0 @@
Merge worker apps together.

View File

@@ -1 +0,0 @@
Publishing/removing a room from the room directory now requires the user to have a power level capable of modifying the canonical alias, instead of the room aliases.

View File

@@ -1 +0,0 @@
Synapse no longer uses room alias events to calculate room names for email notifications.

View File

@@ -1 +0,0 @@
Fix an issue affecting worker-based deployments where replication would stop working, necessitating a full restart, after joining a large room.

View File

@@ -1 +0,0 @@
Fix `duplicate key` error which was logged when rejoining a room over federation.

View File

@@ -1 +0,0 @@
The room list endpoint no longer returns a list of aliases.

View File

@@ -1 +0,0 @@
Remove redundant `store_room` call from `FederationHandler._process_received_pdu`.

View File

@@ -1 +0,0 @@
Check that server_name is correctly set before running database updates.

View File

@@ -1 +0,0 @@
Refactoring work in preparation for changing the event redaction algorithm.

View File

@@ -1 +0,0 @@
Fix `POSTGRES_INITDB_ARGS` in the `contrib/docker/docker-compose.yml` example docker-compose configuration.

View File

@@ -1 +0,0 @@
Update warning for incorrect database collation/ctype to include link to documentation.

1
changelog.d/6988.doc Normal file
View File

@@ -0,0 +1 @@
Improve the documentation for database configuration.

View File

@@ -1 +0,0 @@
Prevent user from setting 'deactivated' to anything other than a bool on the v2 PUT /users Admin API.

View File

@@ -1 +0,0 @@
Port `synapse.handlers.presence` to async/await.

1
changelog.d/7009.feature Normal file
View File

@@ -0,0 +1 @@
Set `Referrer-Policy` header to `no-referrer` on media downloads.

1
changelog.d/7010.misc Normal file
View File

@@ -0,0 +1 @@
Change device list streams to have one row per ID.

1
changelog.d/7011.misc Normal file
View File

@@ -0,0 +1 @@
Remove concept of a non-limited stream.

1
changelog.d/7024.misc Normal file
View File

@@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

1
changelog.d/7051.feature Normal file
View File

@@ -0,0 +1 @@
Add an Admin API `POST /_synapse/admin/v1/join/<roomIdOrAlias>` to join users to a room, similar to what `auto_join_rooms` does for newly created users.

1
changelog.d/7068.bugfix Normal file
View File

@@ -0,0 +1 @@
Ensure that a user-interactive authentication session is tied to a single request.

1
changelog.d/7089.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug in the federation API which could cause occasional "Failed to get PDU" errors.

1
changelog.d/7096.feature Normal file
View File

@@ -0,0 +1 @@
Add options to prevent users from changing their profile or associated 3PIDs.

1
changelog.d/7107.doc Normal file
View File

@@ -0,0 +1 @@
Update pre-built package name for FreeBSD.

1
changelog.d/7109.bugfix Normal file
View File

@@ -0,0 +1 @@
Return the proper error (`M_BAD_ALIAS`) when a non-existent canonical alias is provided.

1
changelog.d/7110.misc Normal file
View File

@@ -0,0 +1 @@
Convert some of synapse.rest.media to async/await.

1
changelog.d/7115.misc Normal file
View File

@@ -0,0 +1 @@
De-duplicate / remove unused REST code for login and auth.

1
changelog.d/7116.misc Normal file
View File

@@ -0,0 +1 @@
Convert `*StreamRow` classes to inner classes.

1
changelog.d/7117.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug which meant that groups updates were not correctly replicated between workers.

1
changelog.d/7118.feature Normal file
View File

@@ -0,0 +1 @@
Allow server admins to define and enforce a password policy (MSC2000).

1
changelog.d/7120.misc Normal file
View File

@@ -0,0 +1 @@
Clean up some LoggingContext code.

1
changelog.d/7128.misc Normal file
View File

@@ -0,0 +1 @@
Add explicit `instance_id` for USER_SYNC commands and remove implicit `conn_id` usage.

1
changelog.d/7133.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix starting workers when federation sending is not split out.

1
changelog.d/7134.misc Normal file
View File

@@ -0,0 +1 @@
Refactor replication command handling to reduce the difference between server and client.

1
changelog.d/7136.misc Normal file
View File

@@ -0,0 +1 @@
Refactored the CAS authentication logic to a separate class.

1
changelog.d/7137.removal Normal file
View File

@@ -0,0 +1 @@
Remove nonfunctional `captcha_bypass_secret` option from `homeserver.yaml`.

1
changelog.d/7141.doc Normal file
View File

@@ -0,0 +1 @@
Clean up INSTALL.md a bit.

1
changelog.d/7147.doc Normal file
View File

@@ -0,0 +1 @@
Add documentation for running a local CAS server for testing.

1
changelog.d/7150.bugfix Normal file
View File

@@ -0,0 +1 @@
Ensure `is_verified` is a boolean in responses to `GET /_matrix/client/r0/room_keys/keys`. Also warn the user if they forgot the `version` query param.

1
changelog.d/7151.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix an error page being shown when a custom SAML handler attempted to redirect while processing an auth response.

1
changelog.d/7152.feature Normal file
View File

@@ -0,0 +1 @@
Improve the support for SSO authentication on the login fallback page.

1
changelog.d/7153.feature Normal file
View File

@@ -0,0 +1 @@
Always whitelist the login fallback in the SSO configuration if `public_baseurl` is set.

1
changelog.d/7155.bugfix Normal file
View File

@@ -0,0 +1 @@
Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo.

1
changelog.d/7157.misc Normal file
View File

@@ -0,0 +1 @@
Add tests for outbound device pokes.

1
changelog.d/7160.feature Normal file
View File

@@ -0,0 +1 @@
Always send users their own device updates.

View File

@@ -15,10 +15,9 @@ services:
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
- SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
volumes:
# You may either store all the files in a local folder
- ./matrix-config/homeserver.yaml:/etc/homeserver.yaml
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data

View File

@@ -1,6 +1,6 @@
# Using the Synapse Grafana dashboard
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
3. Set up additional recording rules

View File

@@ -18,7 +18,7 @@
"gnetId": null,
"graphTooltip": 0,
"id": 1,
"iteration": 1561447718159,
"iteration": 1584612489167,
"links": [
{
"asDropdown": true,
@@ -34,6 +34,7 @@
"panels": [
{
"collapsed": false,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -52,12 +53,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 1
},
"hiddenSeries": false,
"id": 75,
"legend": {
"avg": false,
@@ -72,7 +75,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -151,6 +156,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 9,
@@ -158,6 +164,7 @@
"x": 12,
"y": 1
},
"hiddenSeries": false,
"id": 33,
"legend": {
"avg": false,
@@ -172,7 +179,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -302,12 +311,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"hiddenSeries": false,
"id": 107,
"legend": {
"avg": false,
@@ -322,7 +333,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -425,12 +438,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 19
},
"hiddenSeries": false,
"id": 118,
"legend": {
"avg": false,
@@ -445,7 +460,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -542,6 +559,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -1361,6 +1379,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -1732,6 +1751,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2439,6 +2459,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2635,6 +2656,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2650,11 +2672,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 61
"y": 33
},
"id": 79,
"legend": {
@@ -2670,6 +2693,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2684,8 +2710,13 @@
"expr": "sum(rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "txn rate",
"legendFormat": "successful txn rate",
"refId": "A"
},
{
"expr": "sum(rate(synapse_util_metrics_block_count{block_name=\"_send_new_transaction\",instance=\"$instance\"}[$bucket_size]) - ignoring (block_name) rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"legendFormat": "failed txn rate",
"refId": "B"
}
],
"thresholds": [],
@@ -2736,11 +2767,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 61
"y": 33
},
"id": 83,
"legend": {
@@ -2756,6 +2788,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2829,11 +2864,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 70
"y": 42
},
"id": 109,
"legend": {
@@ -2849,6 +2885,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2923,11 +2962,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 70
"y": 42
},
"id": 111,
"legend": {
@@ -2943,6 +2983,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3009,6 +3052,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3024,12 +3068,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"h": 8,
"w": 12,
"x": 0,
"y": 62
"y": 34
},
"hiddenSeries": false,
"id": 51,
"legend": {
"avg": false,
@@ -3044,6 +3090,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3112,6 +3161,95 @@
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 34
},
"hiddenSeries": false,
"id": 134,
"legend": {
"avg": false,
"current": false,
"hideZero": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "topk(10,synapse_pushers{job=~\"$job\",index=~\"$index\", instance=\"$instance\"})",
"legendFormat": "{{kind}} {{app_id}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Active pusher instances by app",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"repeat": null,
@@ -3120,6 +3258,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3523,6 +3662,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3540,6 +3680,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3562,6 +3703,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3630,6 +3774,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3652,6 +3797,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3720,6 +3868,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3742,6 +3891,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3810,6 +3962,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3832,6 +3985,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3921,6 +4077,7 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -4010,6 +4167,7 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -4076,6 +4234,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -4540,6 +4699,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5060,6 +5220,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5079,7 +5240,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 67
"y": 39
},
"id": 2,
"legend": {
@@ -5095,6 +5256,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5198,7 +5360,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 67
"y": 39
},
"id": 41,
"legend": {
@@ -5214,6 +5376,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5286,7 +5449,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 74
"y": 46
},
"id": 42,
"legend": {
@@ -5302,6 +5465,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5373,7 +5537,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 74
"y": 46
},
"id": 43,
"legend": {
@@ -5389,6 +5553,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5460,7 +5625,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 81
"y": 53
},
"id": 113,
"legend": {
@@ -5476,6 +5641,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5546,7 +5712,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 81
"y": 53
},
"id": 115,
"legend": {
@@ -5562,6 +5728,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5573,7 +5740,7 @@
"steppedLine": false,
"targets": [
{
"expr": "rate(synapse_replication_tcp_protocol_close_reason{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_replication_tcp_protocol_close_reason{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{job}}-{{index}} {{reason_type}}",
@@ -5628,6 +5795,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5643,11 +5811,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 13
"y": 40
},
"id": 67,
"legend": {
@@ -5663,7 +5832,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5679,7 +5850,7 @@
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{job}}-{{index}} ",
"legendFormat": "{{job}}-{{index}} {{name}}",
"refId": "A"
}
],
@@ -5731,11 +5902,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 13
"y": 40
},
"id": 71,
"legend": {
@@ -5751,7 +5923,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5819,11 +5993,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 22
"y": 49
},
"id": 121,
"interval": "",
@@ -5840,7 +6015,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5909,6 +6086,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -6607,7 +6785,7 @@
}
],
"refresh": "5m",
"schemaVersion": 18,
"schemaVersion": 22,
"style": "dark",
"tags": [
"matrix"
@@ -6616,7 +6794,7 @@
"list": [
{
"current": {
"tags": [],
"selected": true,
"text": "Prometheus",
"value": "Prometheus"
},
@@ -6638,6 +6816,7 @@
"auto_count": 100,
"auto_min": "30s",
"current": {
"selected": false,
"text": "auto",
"value": "$__auto_interval_bucket_size"
},
@@ -6719,9 +6898,9 @@
"allFormat": "regex wildcard",
"allValue": "",
"current": {
"text": "All",
"text": "synapse",
"value": [
"$__all"
"synapse"
]
},
"datasource": "$datasource",
@@ -6751,7 +6930,9 @@
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
"value": [
"$__all"
]
},
"datasource": "$datasource",
"definition": "",
@@ -6810,5 +6991,5 @@
"timezone": "",
"title": "Synapse",
"uid": "000000012",
"version": 10
"version": 19
}

12
debian/changelog vendored
View File

@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.12.0) stable; urgency=medium
* New synapse release 1.12.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 23 Mar 2020 12:13:03 +0000
matrix-synapse-py3 (1.11.1) stable; urgency=medium
* New synapse release 1.11.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Mar 2020 15:01:22 +0000
matrix-synapse-py3 (1.11.0) stable; urgency=medium
* New synapse release 1.11.0.

View File

@@ -0,0 +1,34 @@
# Edit Room Membership API
This API allows an administrator to join a user account with a given `user_id`
to a room with a given `room_id_or_alias`. You can only modify the membership of
local users. The server administrator must be in the room and have permission to
invite users.
## Parameters
The following parameters are available:
* `user_id` - Fully qualified user: for example, `@user:server.com`.
* `room_id_or_alias` - The room identifier or alias to join: for example,
`!636q39766251:server.com`.
## Usage
```
POST /_synapse/admin/v1/join/<room_id_or_alias>
{
"user_id": "@user:server.com"
}
```
Including an `access_token` of a server admin.
Response:
```
{
"room_id": "!636q39766251:server.com"
}
```
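For illustration, here is a minimal sketch of calling this endpoint from Python with the `requests` library; the homeserver URL, room ID and access token below are placeholders, not values taken from this repository:
```python
# Hypothetical example of the join request described above.
import requests

base_url = "https://homeserver.example.com"    # placeholder
room_id_or_alias = "!636q39766251:server.com"  # placeholder
admin_token = "ADMIN_ACCESS_TOKEN"             # placeholder

resp = requests.post(
    f"{base_url}/_synapse/admin/v1/join/{room_id_or_alias}",
    headers={"Authorization": f"Bearer {admin_token}"},
    json={"user_id": "@user:server.com"},
)
resp.raise_for_status()
print(resp.json())  # expected to contain {"room_id": "!636q39766251:server.com"}
```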

View File

@@ -38,6 +38,7 @@ The parameter ``threepids`` is optional.
The parameter ``avatar_url`` is optional.
The parameter ``admin`` is optional and defaults to 'false'.
The parameter ``deactivated`` is optional and defaults to 'false'.
The parameter ``password`` is optional. If provided the user's password is updated and all devices are logged out.
If the user already exists then optional parameters default to the current value.
List Accounts
@@ -168,11 +169,14 @@ with a body of:
.. code:: json
{
"new_password": "<secret>"
"new_password": "<secret>",
"logout_devices": true,
}
including an ``access_token`` of a server admin.
The parameter ``new_password`` is required.
The parameter ``logout_devices`` is optional and defaults to ``true``.
Get whether a user is a server administrator or not
===================================================

64
docs/dev/cas.md Normal file
View File

@@ -0,0 +1,64 @@
# How to test CAS as a developer without a server
The [django-mama-cas](https://github.com/jbittel/django-mama-cas) project is an
easy-to-run CAS implementation built on top of Django.
## Prerequisites
1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
3. Install Django and django-mama-cas:
```
python -m pip install "django<3" "django-mama-cas==2.4.0"
```
4. Create a Django project in the current directory:
```
django-admin startproject cas_test .
```
5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
6. Setup the SQLite database: `python manage.py migrate`
7. Create a user:
```
python manage.py createsuperuser
```
1. Use whatever you want as the username and password.
2. Leave the other fields blank.
8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
```
python manage.py runserver
```
You should now have a Django project configured to serve CAS authentication with
a single user created.
## Configure Synapse (and Riot) to use CAS
1. Modify your `homeserver.yaml` to enable CAS and point it to your locally
running Django test server:
```yaml
cas_config:
enabled: true
server_url: "http://localhost:8000"
service_url: "http://localhost:8081"
#displayname_attribute: name
#required_attributes:
# name: value
```
2. Restart Synapse.
Note that the above configuration assumes the homeserver is running on port 8081
and that the CAS server is on port 8000, both on localhost.
## Testing the configuration
Then in Riot:
1. Visit the login page with a Riot pointing at your homeserver.
2. Click the Single Sign-On button.
3. Login using the credentials created with `createsuperuser`.
4. You should be logged in.
If you want to repeat this process, you'll need to manually log out first:
1. http://localhost:8000/admin/
2. Click "logout" in the top right.

View File

@@ -18,9 +18,13 @@ To make Synapse (and therefore Riot) use it:
metadata:
local: ["samling.xml"]
```
5. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
```yaml
public_baseurl: http://localhost:8080/
```
6. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
the dependencies are installed and ready to go.
6. Restart Synapse.
7. Restart Synapse.
Then in Riot:

View File

@@ -29,14 +29,13 @@ from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = context.LoggingContext()
calling_context = context.LoggingContext.current_context()
context.LoggingContext.set_current_context(request_context)
calling_context = context.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
context.LoggingContext.set_current_context(calling_context)
context.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id

View File

@@ -72,8 +72,7 @@ underneath the database, or if a different version of the locale is used on any
replicas.
The safest way to fix the issue is to take a dump and recreate the database with
the correct `COLLATE` and `CTYPE` parameters (as per
[docs/postgres.md](docs/postgres.md)). It is also possible to change the
the correct `COLLATE` and `CTYPE` parameters (as shown above). It is also possible to change the
parameters on a live database and run a `REINDEX` on the entire database,
however extreme care must be taken to avoid database corruption.
@@ -105,19 +104,41 @@ of free memory the database host has available.
When you are ready to start using PostgreSQL, edit the `database`
section in your config file to match the following lines:
database:
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
```yaml
database:
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
```
All key/value pairs in `args` are passed to the `psycopg2.connect(..)`
function, except keys beginning with `cp_`, which are consumed by the
twisted adbapi connection pool.
twisted adbapi connection pool. See the [libpq
documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
for a list of options which can be passed.
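As a rough sketch of that split (illustrative values only, not Synapse's actual code), keys beginning with `cp_` go to the Twisted connection pool while the rest are handed to `psycopg2.connect()`:
```python
# Illustrative only: how the 'args' mapping is conceptually divided.
args = {
    "user": "synapse",
    "password": "secretpassword",
    "database": "synapse",
    "host": "localhost",
    "cp_min": 5,
    "cp_max": 10,
}

# 'cp_'-prefixed keys configure the Twisted adbapi connection pool ...
pool_args = {k: v for k, v in args.items() if k.startswith("cp_")}
# ... and everything else is passed straight through to psycopg2.connect().
connect_args = {k: v for k, v in args.items() if not k.startswith("cp_")}

print(pool_args)     # {'cp_min': 5, 'cp_max': 10}
print(connect_args)  # {'user': 'synapse', 'password': 'secretpassword', ...}
```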
You should consider tuning the `args.keepalives_*` options if there is any danger of
the connection between your homeserver and database dropping, otherwise Synapse
may block for an extended period while it waits for a response from the
database server. Example values might be:
```yaml
# seconds of inactivity after which TCP should send a keepalive message to the server
keepalives_idle: 10
# the number of seconds after which a TCP keepalive message that is not
# acknowledged by the server should be retransmitted
keepalives_interval: 10
# the number of TCP keepalives that can be lost before the client's connection
# to the server is considered dead
keepalives_count: 3
```
## Porting from SQLite

View File

@@ -578,13 +578,46 @@ acme:
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
# its data.
#
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
# * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
# * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
# * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
#
#
# Example SQLite configuration:
#
#database:
# name: sqlite3
# args:
# database: /path/to/homeserver.db
#
#
# Example Postgres configuration:
#
#database:
# name: psycopg2
# args:
# user: synapse
# password: secretpassword
# database: synapse
# host: localhost
# cp_min: 5
# cp_max: 10
#
# For more information on using Synapse with Postgres, see `docs/postgres.md`.
#
database:
# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
name: sqlite3
args:
# Path to the database
database: "DATADIR/homeserver.db"
database: DATADIR/homeserver.db
# Number of events to cache in memory.
#
@@ -839,10 +872,6 @@ media_store_path: "DATADIR/media_store"
#
#enable_registration_captcha: false
# A secret key used to bypass the captcha test entirely.
#
#captcha_bypass_secret: "YOUR_SECRET_HERE"
# The API endpoint to use for verifying m.login.recaptcha responses.
#
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"
@@ -1057,6 +1086,29 @@ account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Whether users are allowed to change their displayname after it has
# been initially set. Useful when provisioning users based on the
# contents of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_displayname: false
# Whether users are allowed to change their avatar after it has been
# initially set. Useful when provisioning users based on the contents
# of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_avatar_url: false
# Whether users can change the 3PIDs associated with their accounts
# (email address and msisdn).
#
# Defaults to 'true'
#
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms
#
@@ -1347,6 +1399,25 @@ saml2_config:
#
#grandfathered_mxid_source_attribute: upn
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# This template doesn't currently need any variable to render.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# Enable CAS for registration and login.
@@ -1360,6 +1431,60 @@ saml2_config:
# # name: value
# Additional settings to use with single sign-on systems such as SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
# have to confirm giving access to their account to the URL. Any client
# whose URL starts with an entry in the following list will not be subject
# to an additional confirmation step after the SSO login is completed.
#
# WARNING: An entry such as "https://my.client" is insecure, because it
# will also match "https://my.client.evil.site", exposing your users to
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# If public_baseurl is set, then the login fallback page (used by clients
# that don't natively support the required login flows) is whitelisted in
# addition to any URLs in this list.
#
# By default, this list is empty.
#
#client_whitelist:
# - https://riot.im/develop
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * server_name: the homeserver's name.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# The JWT needs to contain a globally unique "sub" (subject) claim.
#
#jwt_config:
@@ -1384,6 +1509,41 @@ password_config:
#
#pepper: "EVEN_MORE_SECRET"
# Define and enforce a password policy. Each parameter is optional.
# This is an implementation of MSC2000.
#
policy:
# Whether to enforce the password policy.
# Defaults to 'false'.
#
#enabled: true
# Minimum accepted length for a password.
# Defaults to 0.
#
#minimum_length: 15
# Whether a password must contain at least one digit.
# Defaults to 'false'.
#
#require_digit: true
# Whether a password must contain at least one symbol.
# A symbol is any character that's not a number or a letter.
# Defaults to 'false'.
#
#require_symbol: true
# Whether a password must contain at least one lowercase letter.
# Defaults to 'false'.
#
#require_lowercase: true
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true
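As a hedged illustration of what a policy like the one above implies (this is not Synapse's implementation; the error strings simply mirror the `M_PASSWORD_*` codes added elsewhere in this changeset):
```python
# Hypothetical sketch of enforcing the password policy options shown above.
import re

def check_password(password: str, policy: dict) -> None:
    if not policy.get("enabled", False):
        return
    if len(password) < policy.get("minimum_length", 0):
        raise ValueError("M_PASSWORD_TOO_SHORT")
    if policy.get("require_digit") and not re.search(r"\d", password):
        raise ValueError("M_PASSWORD_NO_DIGIT")
    if policy.get("require_symbol") and not re.search(r"[^a-zA-Z0-9]", password):
        raise ValueError("M_PASSWORD_NO_SYMBOL")
    if policy.get("require_lowercase") and not re.search(r"[a-z]", password):
        raise ValueError("M_PASSWORD_NO_LOWERCASE")
    if policy.get("require_uppercase") and not re.search(r"[A-Z]", password):
        raise ValueError("M_PASSWORD_NO_UPPERCASE")

# Passes a policy requiring 15+ characters and at least one digit.
check_password("Correct-horse-battery-1", {"enabled": True, "minimum_length": 15, "require_digit": True})
```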
# Configuration for sending emails from Synapse.
#

View File

@@ -14,16 +14,16 @@ example flow would be (where '>' indicates master to worker and
'<' worker to master flows):
> SERVER example.com
< REPLICATE events 53
< REPLICATE
> POSITION events 53
> RDATA events 54 ["$foo1:bar.com", ...]
> RDATA events 55 ["$foo4:bar.com", ...]
The example shows the server accepting a new connection and sending its
identity with the `SERVER` command, followed by the client asking to
subscribe to the `events` stream from the token `53`. The server then
periodically sends `RDATA` commands which have the format
`RDATA <stream_name> <token> <row>`, where the format of `<row>` is
defined by the individual streams.
The example shows the server accepting a new connection and sending its identity
with the `SERVER` command, followed by the client asking the server for the current
position of all streams. The server then periodically sends `RDATA` commands
which have the format `RDATA <stream_name> <token> <row>`, where the format of
`<row>` is defined by the individual streams.
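For a concrete (if simplified) view of that line format, here is a rough sketch of parsing a single `RDATA` line in Python; it is not Synapse's actual parser:
```python
# Illustrative parser for one replication line, e.g.
#   RDATA events 54 ["$foo1:bar.com", "state", null]
import json

def parse_rdata(line: str):
    command, stream_name, token, row = line.split(" ", 3)
    assert command == "RDATA"
    return stream_name, int(token), json.loads(row)

print(parse_rdata('RDATA events 54 ["$foo1:bar.com", "state", null]'))
# -> ('events', 54, ['$foo1:bar.com', 'state', None])
```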
Error reporting happens by either the client or server sending an ERROR
command, and usually the connection will be closed.
@@ -32,9 +32,6 @@ Since the protocol is a simple line-based one, it's possible to manually
connect to the server using a tool like netcat. A few things should be
noted when manually using the protocol:
- When subscribing to a stream using `REPLICATE`, the special token
`NOW` can be used to get all future updates. The special stream name
`ALL` can be used with `NOW` to subscribe to all available streams.
- The federation stream is only available if federation sending has
been disabled on the main process.
- The server will only time connections out that have sent a `PING`
@@ -91,9 +88,7 @@ The client:
- Sends a `NAME` command, allowing the server to associate a human
friendly name with the connection. This is optional.
- Sends a `PING` as above
- For each stream the client wishes to subscribe to it sends a
`REPLICATE` with the `stream_name` and token it wants to subscribe
from.
- Sends a `REPLICATE` to get the current position of all streams.
- On receipt of a `SERVER` command, checks that the server name
matches the expected server name.
@@ -140,9 +135,7 @@ the wire:
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE events 1
< REPLICATE backfill 1
< REPLICATE caches 1
< REPLICATE
> POSITION events 1
> POSITION backfill 1
> POSITION caches 1
@@ -181,9 +174,9 @@ client (C):
#### POSITION (S)
The position of the stream has been updated. Sent to the client
after all missing updates for a stream have been sent to the client
and they're now up to date.
On receipt of a POSITION command clients should check if they have missed any
updates, and if so then fetch them out of band. Sent in response to a
REPLICATE command (but can happen at any time).
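A hedged sketch of that client-side behaviour follows; the names and signature are invented for illustration and are not Synapse's API:
```python
# On POSITION, compare the advertised token with the last token we processed
# and fetch any missed updates out of band before resuming normal RDATA handling.
async def on_position(stream_name, new_token, last_tokens, fetch_missing):
    last_token = last_tokens.get(stream_name, new_token)
    if new_token > last_token:
        # e.g. ask the server for the rows in the range (last_token, new_token]
        await fetch_missing(stream_name, last_token, new_token)
    last_tokens[stream_name] = new_token
```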
#### ERROR (S, C)
@@ -199,25 +192,18 @@ client (C):
#### REPLICATE (C)
Asks the server to replicate a given stream. The syntax is:
```
REPLICATE <stream_name> <token>
```
Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
Asks the server for the current position of all streams.
#### USER_SYNC (C)
A user has started or stopped syncing
#### CLEAR_USER_SYNC (C)
The server should clear all associated user sync data from the worker.
This is used when a worker is shutting down.
#### FEDERATION_ACK (C)
Acknowledge receipt of some federation data

View File

@@ -273,6 +273,7 @@ Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
Pagination requests can also be handled, but all requests for the same room
must be routed to the same instance. Additionally, care must be taken to

View File

@@ -1,20 +1,31 @@
name: matrix-synapse
base: core18
version: git
version: git
summary: Reference Matrix homeserver
description: |
Synapse is the reference Matrix homeserver.
Matrix is a federated and decentralised instant messaging and VoIP system.
grade: stable
confinement: strict
grade: stable
confinement: strict
apps:
matrix-synapse:
matrix-synapse:
command: synctl --no-daemonize start $SNAP_COMMON/homeserver.yaml
stop-command: synctl -c $SNAP_COMMON stop
plugs: [network-bind, network]
daemon: simple
daemon: simple
hash-password:
command: hash_password
generate-config:
command: generate_config
generate-signing-key:
command: generate_signing_key.py
register-new-matrix-user:
command: register_new_matrix_user
plugs: [network]
synctl:
command: synctl
parts:
matrix-synapse:
source: .

View File

@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.11.0"
__version__ = "1.12.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -539,7 +539,7 @@ class Auth(object):
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id: str, user: UserID):
"""Check if the user is allowed to edit the room's entry in the
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
Args:
@@ -570,12 +570,7 @@ class Auth(object):
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
if user_level < send_level:
raise AuthError(
403,
"This server requires you to be a moderator in the room to"
" edit its room list entry",
)
return user_level >= send_level
@staticmethod
def has_access_token(request):

View File

@@ -64,8 +64,16 @@ class Codes(object):
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
PASSWORD_TOO_SHORT = "M_PASSWORD_TOO_SHORT"
PASSWORD_NO_DIGIT = "M_PASSWORD_NO_DIGIT"
PASSWORD_NO_UPPERCASE = "M_PASSWORD_NO_UPPERCASE"
PASSWORD_NO_LOWERCASE = "M_PASSWORD_NO_LOWERCASE"
PASSWORD_NO_SYMBOL = "M_PASSWORD_NO_SYMBOL"
PASSWORD_IN_DICTIONARY = "M_PASSWORD_IN_DICTIONARY"
WEAK_PASSWORD = "M_WEAK_PASSWORD"
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
BAD_ALIAS = "M_BAD_ALIAS"
class CodeMessageException(RuntimeError):
@@ -438,6 +446,20 @@ class IncompatibleRoomVersionError(SynapseError):
return cs_error(self.msg, self.errcode, room_version=self._room_version)
class PasswordRefusedError(SynapseError):
"""A password has been refused, either during password reset/change or registration.
"""
def __init__(
self,
msg="This password doesn't comply with the server's policy",
errcode=Codes.WEAK_PASSWORD,
):
super(PasswordRefusedError, self).__init__(
code=400, msg=msg, errcode=errcode,
)
class RequestSendFailed(RuntimeError):
"""Sending a HTTP request over federation failed due to not being able to
talk to the remote server for some reason.

View File

@@ -57,7 +57,7 @@ class RoomVersion(object):
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
# bool: before MSC2260, anyone was allowed to send an aliases event
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool, default=False)
@@ -102,12 +102,13 @@ class RoomVersions(object):
enforce_key_validity=True,
special_case_aliases_auth=True,
)
MSC2260_DEV = RoomVersion(
"org.matrix.msc2260",
MSC2432_DEV = RoomVersion(
"org.matrix.msc2432",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
)
@@ -119,6 +120,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.MSC2260_DEV,
RoomVersions.MSC2432_DEV,
)
} # type: Dict[str, RoomVersion]

View File

@@ -276,6 +276,7 @@ def start(hs, listeners=None):
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().db.start_profiling()
hs.get_pusherpool().start()
setup_sentry(hs)
setup_sdnotify(hs)

View File

@@ -64,13 +64,26 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import (
from synapse.replication.tcp.client import ReplicationClientFactory
from synapse.replication.tcp.commands import ClearUserSyncsCommand
from synapse.replication.tcp.handler import WorkerReplicationDataHandler
from synapse.replication.tcp.streams import (
AccountDataStream,
DeviceListsStream,
GroupServerStream,
PresenceStream,
PushersStream,
PushRulesStream,
ReceiptsStream,
TagAccountDataStream,
ToDeviceStream,
TypingStream,
)
from synapse.replication.tcp.streams.events import (
EventsStream,
EventsStreamEventRow,
EventsStreamRow,
)
from synapse.replication.tcp.streams.events import EventsStreamEventRow, EventsStreamRow
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@@ -113,7 +126,6 @@ from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
@@ -222,6 +234,7 @@ class GenericWorkerPresence(object):
self.user_to_num_current_syncs = {}
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.instance_id = hs.get_instance_id()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {state.user_id: state for state in active_presence}
@@ -234,13 +247,24 @@ class GenericWorkerPresence(object):
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
run_as_background_process,
"generic_presence.on_shutdown",
self._on_shutdown,
)
def _on_shutdown(self):
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_command(
ClearUserSyncsCommand(self.instance_id)
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
self.instance_id, user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
@@ -390,6 +414,9 @@ class GenericWorkerTyping(object):
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
def get_current_token(self) -> int:
return self._latest_room_serial
class GenericWorkerSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
@@ -494,20 +521,26 @@ class GenericWorkerServer(HomeServer):
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
elif name == "media":
media_repo = self.get_media_repository_resource()
if self.config.can_load_media_repo:
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}
)
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}
)
else:
logger.warning(
"A 'media' listener is configured but the media"
" repository is disabled. Ignoring."
)
if name == "openid" and "federation" not in res["names"]:
# Only load the openid resource separately if federation resource
@@ -566,25 +599,26 @@ class GenericWorkerServer(HomeServer):
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
factory = ReplicationClientFactory(self, self.config.worker_name)
host = self.config.worker_replication_host
port = self.config.worker_replication_port
self.get_reactor().connectTCP(host, port, factory)
def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def build_tcp_replication(self):
return GenericWorkerReplicationHandler(self)
def build_presence_handler(self):
return GenericWorkerPresence(self)
def build_typing_handler(self):
return GenericWorkerTyping(self)
def build_replication_data_handler(self):
return GenericWorkerReplicationHandler(self)
class GenericWorkerReplicationHandler(ReplicationClientHandler):
class GenericWorkerReplicationHandler(WorkerReplicationDataHandler):
def __init__(self, hs):
super(GenericWorkerReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
@@ -612,15 +646,12 @@ class GenericWorkerReplicationHandler(ReplicationClientHandler):
args.update(self.send_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
async def process_and_notify(self, stream_name, token, rows):
try:
if self.send_handler:
self.send_handler.process_replication_rows(stream_name, token, rows)
if stream_name == "events":
if stream_name == EventsStream.NAME:
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
@@ -643,43 +674,44 @@ class GenericWorkerReplicationHandler(ReplicationClientHandler):
)
await self.pusher_pool.on_new_notifications(token, token)
elif stream_name == "push_rules":
elif stream_name == PushRulesStream.NAME:
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows]
)
elif stream_name in ("account_data", "tag_account_data"):
elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "receipts":
elif stream_name == ReceiptsStream.NAME:
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows]
)
await self.pusher_pool.on_new_receipts(
token, token, {row.room_id for row in rows}
)
elif stream_name == "typing":
elif stream_name == TypingStream.NAME:
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows]
)
elif stream_name == "to_device":
elif stream_name == ToDeviceStream.NAME:
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == "device_lists":
elif stream_name == DeviceListsStream.NAME:
all_room_ids = set()
for row in rows:
room_ids = await self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
if row.entity.startswith("@"):
room_ids = await self.store.get_rooms_for_user(row.entity)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == "presence":
elif stream_name == PresenceStream.NAME:
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
elif stream_name == GroupServerStream.NAME:
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "pushers":
elif stream_name == PushersStream.NAME:
for row in rows:
if row.deleted:
self.stop_pusher(row.user_id, row.app_id, row.pushkey)
@@ -768,7 +800,10 @@ class FederationSenderHandler(object):
# ... as well as device updates and messages
elif stream_name == DeviceListsStream.NAME:
hosts = {row.destination for row in rows}
# The entities are either user IDs (starting with '@') whose devices
# have changed, or remote servers that we need to tell about
# changes.
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
@@ -783,7 +818,7 @@ class FederationSenderHandler(object):
async def _on_new_receipts(self, rows):
"""
Args:
rows (iterable[synapse.replication.tcp.streams.ReceiptsStreamRow]):
rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]):
new receipts to be processed
"""
for receipt in rows:
@@ -854,6 +889,9 @@ def start(config_options):
# Force the appservice to start since they will be disabled in the main config
config.notify_appservices = True
else:
# For other worker types we force this to off.
config.notify_appservices = False
if config.worker_app == "synapse.app.pusher":
if config.start_pushers:
@@ -867,6 +905,9 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
else:
# For other worker types we force this to off.
config.start_pushers = False
if config.worker_app == "synapse.app.user_dir":
if config.update_user_directory:
@@ -880,6 +921,9 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.update_user_directory = True
else:
# For other worker types we force this to off.
config.update_user_directory = False
if config.worker_app == "synapse.app.federation_sender":
if config.send_federation:
@@ -893,6 +937,9 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.send_federation = True
else:
# For other worker types we force this to off.
config.send_federation = False
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts

View File

@@ -298,6 +298,11 @@ class SynapseHomeServer(HomeServer):
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
current_mau_by_service_gauge = Gauge(
"synapse_admin_mau_current_mau_by_service",
"Current MAU by service",
["app_service"],
)
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
registered_reserved_users_mau_gauge = Gauge(
"synapse_admin_mau:registered_reserved_users",
@@ -403,7 +408,6 @@ def setup(config_options):
_base.start(hs, config.listeners)
hs.get_pusherpool().start()
hs.get_datastore().db.updates.start_doing_background_updates()
except Exception:
# Print the exception and bail out.
@@ -585,12 +589,20 @@ def run(hs):
@defer.inlineCallbacks
def generate_monthly_active_users():
current_mau_count = 0
current_mau_count_by_service = {}
reserved_users = ()
store = hs.get_datastore()
if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
current_mau_count = yield store.get_monthly_active_count()
current_mau_count_by_service = (
yield store.get_monthly_active_count_by_service()
)
reserved_users = yield store.get_registered_reserved_users()
current_mau_gauge.set(float(current_mau_count))
for app_service, count in current_mau_count_by_service.items():
current_mau_by_service_gauge.labels(app_service).set(float(count))
registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
max_mau_gauge.set(float(hs.config.max_mau_value))

View File

@@ -294,7 +294,6 @@ class RootConfig(object):
report_stats=None,
open_private_ports=False,
listeners=None,
database_conf=None,
tls_certificate_path=None,
tls_private_key_path=None,
acme_domain=None,
@@ -367,7 +366,6 @@ class RootConfig(object):
report_stats=report_stats,
open_private_ports=open_private_ports,
listeners=listeners,
database_conf=database_conf,
tls_certificate_path=tls_certificate_path,
tls_private_key_path=tls_private_key_path,
acme_domain=acme_domain,

View File

@@ -24,6 +24,7 @@ from synapse.config import (
server,
server_notices_config,
spam_checker,
sso,
stats,
third_party_event_rules,
tls,
@@ -57,6 +58,7 @@ class RootConfig:
key: key.KeyConfig
saml2: saml2_config.SAML2Config
cas: cas.CasConfig
sso: sso.SSOConfig
jwt: jwt_config.JWTConfig
password: password.PasswordConfig
email: emailconfig.EmailConfig

View File

@@ -24,7 +24,6 @@ class CaptchaConfig(Config):
self.enable_registration_captcha = config.get(
"enable_registration_captcha", False
)
self.captcha_bypass_secret = config.get("captcha_bypass_secret")
self.recaptcha_siteverify_api = config.get(
"recaptcha_siteverify_api",
"https://www.recaptcha.net/recaptcha/api/siteverify",
@@ -49,10 +48,6 @@ class CaptchaConfig(Config):
#
#enable_registration_captcha: false
# A secret key used to bypass the captcha test entirely.
#
#captcha_bypass_secret: "YOUR_SECRET_HERE"
# The API endpoint to use for verifying m.login.recaptcha responses.
#
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,14 +15,65 @@
# limitations under the License.
import logging
import os
from textwrap import indent
import yaml
from synapse.config._base import Config, ConfigError
logger = logging.getLogger(__name__)
NON_SQLITE_DATABASE_PATH_WARNING = """\
Ignoring 'database_path' setting: not using a sqlite3 database.
--------------------------------------------------------------------------------
"""
DEFAULT_CONFIG = """\
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
# its data.
#
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'args' gives options which are passed through to the database engine,
# except for options starting with 'cp_', which are used to configure the
# Twisted connection pool. For a reference to valid arguments, see:
# * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
# * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
# * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
#
#
# Example SQLite configuration:
#
#database:
# name: sqlite3
# args:
# database: /path/to/homeserver.db
#
#
# Example Postgres configuration:
#
#database:
# name: psycopg2
# args:
# user: synapse
# password: secretpassword
# database: synapse
# host: localhost
# cp_min: 5
# cp_max: 10
#
# For more information on using Synapse with Postgres, see `docs/postgres.md`.
#
database:
name: sqlite3
args:
database: %(database_path)s
# Number of events to cache in memory.
#
#event_cache_size: 10K
"""
class DatabaseConnectionConfig:
"""Contains the connection config for a particular database.
@@ -36,10 +88,12 @@ class DatabaseConnectionConfig:
"""
def __init__(self, name: str, db_config: dict):
if db_config["name"] not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_config["name"],))
db_engine = db_config.get("name", "sqlite3")
if db_config["name"] == "sqlite3":
if db_engine not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_engine,))
if db_engine == "sqlite3":
db_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)
@@ -56,6 +110,11 @@ class DatabaseConnectionConfig:
class DatabaseConfig(Config):
section = "database"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.databases = []
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(config.get("event_cache_size", "10K"))
@@ -76,12 +135,13 @@ class DatabaseConfig(Config):
multi_database_config = config.get("databases")
database_config = config.get("database")
database_path = config.get("database_path")
if multi_database_config and database_config:
raise ConfigError("Can't specify both 'database' and 'datbases' in config")
if multi_database_config:
if config.get("database_path"):
if database_path:
raise ConfigError("Can't specify 'database_path' with 'databases'")
self.databases = [
@@ -89,65 +149,55 @@ class DatabaseConfig(Config):
for name, db_conf in multi_database_config.items()
]
else:
if database_config is None:
database_config = {"name": "sqlite3", "args": {}}
if database_config:
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(config.get("database_path"))
if database_path:
if self.databases and self.databases[0].name != "sqlite3":
logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)
return
def generate_config_section(self, data_dir_path, database_conf, **kwargs):
if not database_conf:
database_path = os.path.join(data_dir_path, "homeserver.db")
database_conf = (
"""# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
args:
# Path to the database
database: "%(database_path)s"
"""
% locals()
)
else:
database_conf = indent(yaml.dump(database_conf), " " * 10).lstrip()
database_config = {"name": "sqlite3", "args": {}}
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(database_path)
return (
"""\
## Database ##
database:
%(database_conf)s
# Number of events to cache in memory.
#
#event_cache_size: 10K
"""
% locals()
)
def generate_config_section(self, data_dir_path, **kwargs):
return DEFAULT_CONFIG % {
"database_path": os.path.join(data_dir_path, "homeserver.db")
}
def read_arguments(self, args):
self.set_databasepath(args.database_path)
"""
Cases for the cli input:
- If no databases are configured and no database_path is set, raise.
- No databases and only database_path available ==> sqlite3 db.
- If there are multiple databases and a database_path, raise an error.
- If the database set in the config file is sqlite, then overwrite it
with the command line argument.
"""
if args.database_path is None:
if not self.databases:
raise ConfigError("No database config provided")
return
if len(self.databases) == 0:
database_config = {"name": "sqlite3", "args": {}}
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(args.database_path)
return
if self.get_single_database().name == "sqlite3":
self.set_databasepath(args.database_path)
else:
logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)
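The command-line precedence described in the docstring above, restated as a standalone sketch that works on plain config dicts rather than DatabaseConnectionConfig objects (illustrative only):

def apply_database_path(databases, database_path):
    # Mirrors the CLI cases listed in read_arguments' docstring.
    if database_path is None:
        if not databases:
            raise ValueError("No database config provided")
        return databases
    if not databases:
        # Only a database path was given: fall back to a single sqlite3 db.
        return [{"name": "sqlite3", "args": {"database": database_path}}]
    if len(databases) > 1:
        raise ValueError("Can't specify 'database_path' with multiple databases")
    if databases[0]["name"] == "sqlite3":
        databases[0]["args"]["database"] = database_path
    # For a non-sqlite database the path is ignored (the real code warns).
    return databases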
def set_databasepath(self, database_path):
if database_path is None:
return
if database_path != ":memory:":
database_path = self.abspath(database_path)
# We only support setting a database path if we have a single sqlite3
# database.
if len(self.databases) != 1:
raise ConfigError("Cannot specify 'database_path' with multiple databases")
database = self.get_single_database()
if database.config["name"] != "sqlite3":
# We don't raise here as we haven't done so before for this case.
logger.warn("Ignoring 'database_path' for non-sqlite3 database")
return
database.config["args"]["database"] = database_path
self.databases[0].config["args"]["database"] = database_path
@staticmethod
def add_arguments(parser):
@@ -162,7 +212,7 @@ class DatabaseConfig(Config):
def get_single_database(self) -> DatabaseConnectionConfig:
"""Returns the database if there is only one, useful for e.g. tests
"""
if len(self.databases) != 1:
if not self.databases:
raise Exception("More than one database exists")
return self.databases[0]

View File

@@ -38,6 +38,7 @@ from .saml2_config import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .sso import SSOConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
@@ -65,6 +66,7 @@ class HomeServerConfig(RootConfig):
KeyConfig,
SAML2Config,
CasConfig,
SSOConfig,
JWTConfig,
PasswordConfig,
EmailConfig,

View File

@@ -31,6 +31,10 @@ class PasswordConfig(Config):
self.password_localdb_enabled = password_config.get("localdb_enabled", True)
self.password_pepper = password_config.get("pepper", "")
# Password policy
self.password_policy = password_config.get("policy") or {}
self.password_policy_enabled = self.password_policy.get("enabled", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
password_config:
@@ -48,4 +52,39 @@ class PasswordConfig(Config):
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#
#pepper: "EVEN_MORE_SECRET"
# Define and enforce a password policy. Each parameter is optional.
# This is an implementation of MSC2000.
#
policy:
# Whether to enforce the password policy.
# Defaults to 'false'.
#
#enabled: true
# Minimum accepted length for a password.
# Defaults to 0.
#
#minimum_length: 15
# Whether a password must contain at least one digit.
# Defaults to 'false'.
#
#require_digit: true
# Whether a password must contain at least one symbol.
# A symbol is any character that's not a number or a letter.
# Defaults to 'false'.
#
#require_symbol: true
# Whether a password must contain at least one lowercase letter.
# Defaults to 'false'.
#
#require_lowercase: true
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true
"""

View File

@@ -129,6 +129,10 @@ class RegistrationConfig(Config):
raise ConfigError("Invalid auto_join_rooms entry %s" % (room_alias,))
self.autocreate_auto_join_rooms = config.get("autocreate_auto_join_rooms", True)
self.enable_set_displayname = config.get("enable_set_displayname", True)
self.enable_set_avatar_url = config.get("enable_set_avatar_url", True)
self.enable_3pid_changes = config.get("enable_3pid_changes", True)
self.disable_msisdn_registration = config.get(
"disable_msisdn_registration", False
)
@@ -330,6 +334,29 @@ class RegistrationConfig(Config):
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Whether users are allowed to change their displayname after it has
# been initially set. Useful when provisioning users based on the
# contents of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_displayname: false
# Whether users are allowed to change their avatar after it has been
# initially set. Useful when provisioning users based on the contents
# of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_avatar_url: false
# Whether users can change the 3PIDs associated with their accounts
# (email address and msisdn).
#
# Defaults to 'true'
#
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms
#

View File

@@ -15,6 +15,9 @@
# limitations under the License.
import logging
import os
import pkg_resources
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
@@ -160,6 +163,14 @@ class SAML2Config(Config):
saml2_config.get("saml_session_lifetime", "5m")
)
template_dir = saml2_config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.saml2_error_html_content = self.read_file(
os.path.join(template_dir, "saml_error.html"), "saml2_config.saml_error",
)
def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
):
@@ -325,6 +336,25 @@ class SAML2Config(Config):
# The default is 'uid'.
#
#grandfathered_mxid_source_attribute: upn
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# This template doesn't currently need any variable to render.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
""" % {
"config_dir_path": config_dir_path
}

synapse/config/sso.py (new file)
View File

@@ -0,0 +1,107 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict
import pkg_resources
from ._base import Config
class SSOConfig(Config):
"""SSO Configuration
"""
section = "sso"
def read_config(self, config, **kwargs):
sso_config = config.get("sso") or {} # type: Dict[str, Any]
# Pick a template directory in order of:
# * The sso-specific template_dir
# * /path/to/synapse/install/res/templates
template_dir = sso_config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.sso_redirect_confirm_template_dir = template_dir
self.sso_client_whitelist = sso_config.get("client_whitelist") or []
# Attempt to also whitelist the server's login fallback, since that fallback sets
# the redirect URL to itself (so it can process the login token then return
# gracefully to the client). This would make it pointless to ask the user for
# confirmation, since the URL the confirmation page would be showing wouldn't be
# the client's.
# public_baseurl is an optional setting, so we only add the fallback's URL to the
# list if it's provided (because we can't figure out what that URL is otherwise).
if self.public_baseurl:
login_fallback_url = self.public_baseurl + "_matrix/static/client/login"
self.sso_client_whitelist.append(login_fallback_url)
def generate_config_section(self, **kwargs):
return """\
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
# have to confirm giving access to their account to the URL. Any client
# whose URL starts with an entry in the following list will not be subject
# to an additional confirmation step after the SSO login is completed.
#
# WARNING: An entry such as "https://my.client" is insecure, because it
# will also match "https://my.client.evil.site", exposing your users to
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# If public_baseurl is set, then the login fallback page (used by clients
# that don't natively support the required login flows) is whitelisted in
# addition to any URLs in this list.
#
# By default, this list is empty.
#
#client_whitelist:
# - https://riot.im/develop
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * server_name: the homeserver's name.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
"""

View File

@@ -75,7 +75,7 @@ class ServerContextFactory(ContextFactory):
@implementer(IPolicyForHTTPS)
class ClientTLSOptionsFactory(object):
class FederationPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers for federation.
@@ -103,15 +103,15 @@ class ClientTLSOptionsFactory(object):
# let us do).
minTLS = _TLS_VERSION_MAP[config.federation_client_minimum_tls_version]
self._verify_ssl = CertificateOptions(
_verify_ssl = CertificateOptions(
trustRoot=trust_root, insecurelyLowerMinimumTo=minTLS
)
self._verify_ssl_context = self._verify_ssl.getContext()
self._verify_ssl_context.set_info_callback(self._context_info_cb)
self._verify_ssl_context = _verify_ssl.getContext()
self._verify_ssl_context.set_info_callback(_context_info_cb)
self._no_verify_ssl = CertificateOptions(insecurelyLowerMinimumTo=minTLS)
self._no_verify_ssl_context = self._no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(self._context_info_cb)
_no_verify_ssl = CertificateOptions(insecurelyLowerMinimumTo=minTLS)
self._no_verify_ssl_context = _no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(_context_info_cb)
def get_options(self, host: bytes):
@@ -136,23 +136,6 @@ class ClientTLSOptionsFactory(object):
return SSLClientConnectionCreator(host, ssl_context, should_verify)
@staticmethod
def _context_info_cb(ssl_connection, where, ret):
"""The 'information callback' for our openssl context object."""
# we assume that the app_data on the connection object has been set to
# a TLSMemoryBIOProtocol object. (This is done by SSLClientConnectionCreator)
tls_protocol = ssl_connection.get_app_data()
try:
# ... we further assume that SSLClientConnectionCreator has set the
# '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
tls_protocol._synapse_tls_verifier.verify_context_info_cb(
ssl_connection, where
)
except: # noqa: E722, taken from the twisted implementation
logger.exception("Error during info_callback")
f = Failure()
tls_protocol.failVerification(f)
def creatorForNetloc(self, hostname, port):
"""Implements the IPolicyForHTTPS interace so that this can be passed
directly to agents.
@@ -160,6 +143,43 @@ class ClientTLSOptionsFactory(object):
return self.get_options(hostname)
@implementer(IPolicyForHTTPS)
class RegularPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers, for purposes other than federation.
Always uses the same OpenSSL context object, which uses the default OpenSSL CA
trust root.
"""
def __init__(self):
trust_root = platformTrust()
self._ssl_context = CertificateOptions(trustRoot=trust_root).getContext()
self._ssl_context.set_info_callback(_context_info_cb)
def creatorForNetloc(self, hostname, port):
return SSLClientConnectionCreator(hostname, self._ssl_context, True)
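A policy object like RegularPolicyForHTTPS (or the federation variant) is handed straight to Twisted's Agent, which calls creatorForNetloc() for each host it connects to. A minimal usage sketch, assuming the class is importable from synapse.crypto.context_factory:

from twisted.internet import reactor
from twisted.web.client import Agent

from synapse.crypto.context_factory import RegularPolicyForHTTPS  # assumed import path

agent = Agent(reactor, contextFactory=RegularPolicyForHTTPS())
d = agent.request(b"GET", b"https://example.com/")  # returns a Deferred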
def _context_info_cb(ssl_connection, where, ret):
"""The 'information callback' for our openssl context objects.
Note: Once this is set as the info callback on a Context object, the Context should
only be used with the SSLClientConnectionCreator.
"""
# we assume that the app_data on the connection object has been set to
# a TLSMemoryBIOProtocol object. (This is done by SSLClientConnectionCreator)
tls_protocol = ssl_connection.get_app_data()
try:
# ... we further assume that SSLClientConnectionCreator has set the
# '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where)
except: # noqa: E722, taken from the twisted implementation
logger.exception("Error during info_callback")
f = Failure()
tls_protocol.failVerification(f)
@implementer(IOpenSSLClientConnectionCreator)
class SSLClientConnectionCreator(object):
"""Creates openssl connection objects for client connections.

View File

@@ -140,7 +140,7 @@ def compute_event_signature(
Returns:
a dictionary in the same format of an event's signatures field.
"""
redact_json = prune_event_dict(event_dict)
redact_json = prune_event_dict(room_version, event_dict)
redact_json.pop("age_ts", None)
redact_json.pop("unsigned", None)
if logger.isEnabledFor(logging.DEBUG):

View File

@@ -43,8 +43,8 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.logging.context import (
LoggingContext,
PreserveLoggingContext,
current_context,
make_deferred_yieldable,
preserve_fn,
run_in_background,
@@ -236,7 +236,7 @@ class Keyring(object):
"""
try:
ctx = LoggingContext.current_context()
ctx = current_context()
# map from server name to a set of outstanding request ids
server_to_request_ids = {}

View File

@@ -137,7 +137,7 @@ def check(
raise AuthError(403, "This room has been marked as unfederatable.")
# 4. If type is m.room.aliases
if event.type == EventTypes.Aliases:
if event.type == EventTypes.Aliases and room_version_obj.special_case_aliases_auth:
# 4a. If event has no state_key, reject
if not event.is_state():
raise AuthError(403, "Alias event must be a state event")
@@ -152,10 +152,8 @@ def check(
)
# 4c. Otherwise, allow.
# This is removed by https://github.com/matrix-org/matrix-doc/pull/2260
if room_version_obj.special_case_aliases_auth:
logger.debug("Allowing! %s", event)
return
logger.debug("Allowing! %s", event)
return
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Auth events: %s", [a.event_id for a in auth_events.values()])

View File

@@ -15,9 +15,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import os
from distutils.util import strtobool
from typing import Optional, Type
from typing import Dict, Optional, Type
import six
@@ -199,15 +200,25 @@ class _EventInternalMetadata(object):
return self._dict.get("redacted", False)
class EventBase(object):
class EventBase(metaclass=abc.ABCMeta):
@property
@abc.abstractmethod
def format_version(self) -> int:
"""The EventFormatVersion implemented by this event"""
...
def __init__(
self,
event_dict,
signatures={},
unsigned={},
internal_metadata_dict={},
rejected_reason=None,
event_dict: JsonDict,
room_version: RoomVersion,
signatures: Dict[str, Dict[str, str]],
unsigned: JsonDict,
internal_metadata_dict: JsonDict,
rejected_reason: Optional[str],
):
assert room_version.event_format == self.format_version
self.room_version = room_version
self.signatures = signatures
self.unsigned = unsigned
self.rejected_reason = rejected_reason
@@ -303,7 +314,13 @@ class EventBase(object):
class FrozenEvent(EventBase):
format_version = EventFormatVersions.V1 # All events of this type are V1
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
def __init__(
self,
event_dict: JsonDict,
room_version: RoomVersion,
internal_metadata_dict: JsonDict = {},
rejected_reason: Optional[str] = None,
):
event_dict = dict(event_dict)
# Signatures is a dict of dicts, and this is faster than doing a
@@ -326,8 +343,9 @@ class FrozenEvent(EventBase):
self._event_id = event_dict["event_id"]
super(FrozenEvent, self).__init__(
super().__init__(
frozen_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
internal_metadata_dict=internal_metadata_dict,
@@ -352,7 +370,13 @@ class FrozenEvent(EventBase):
class FrozenEventV2(EventBase):
format_version = EventFormatVersions.V2 # All events of this type are V2
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
def __init__(
self,
event_dict: JsonDict,
room_version: RoomVersion,
internal_metadata_dict: JsonDict = {},
rejected_reason: Optional[str] = None,
):
event_dict = dict(event_dict)
# Signatures is a dict of dicts, and this is faster than doing a
@@ -377,8 +401,9 @@ class FrozenEventV2(EventBase):
self._event_id = None
super(FrozenEventV2, self).__init__(
super().__init__(
frozen_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
internal_metadata_dict=internal_metadata_dict,
@@ -445,7 +470,7 @@ class FrozenEventV3(FrozenEventV2):
return self._event_id
def event_type_from_format_version(format_version: int) -> Type[EventBase]:
def _event_type_from_format_version(format_version: int) -> Type[EventBase]:
"""Returns the python type to use to construct an Event object for the
given event format version.
@@ -474,5 +499,5 @@ def make_event_from_dict(
rejected_reason: Optional[str] = None,
) -> EventBase:
"""Construct an EventBase from the given event dict"""
event_type = event_type_from_format_version(room_version.event_format)
return event_type(event_dict, internal_metadata_dict, rejected_reason)
event_type = _event_type_from_format_version(room_version.event_format)
return event_type(event_dict, room_version, internal_metadata_dict, rejected_reason)
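Callers now pass make_event_from_dict a full RoomVersion object instead of a bare format-version integer. A rough usage sketch with an abbreviated, made-up event dict:

from synapse.api.room_versions import RoomVersions
from synapse.events import make_event_from_dict

event_dict = {
    "type": "m.room.message",
    "room_id": "!room:example.com",
    "sender": "@alice:example.com",
    "content": {"msgtype": "m.text", "body": "hi"},
    # hashes / signatures elided for brevity
}

event = make_event_from_dict(event_dict, RoomVersions.V5)
assert event.room_version is RoomVersions.V5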

View File

@@ -23,6 +23,7 @@ from frozendict import frozendict
from twisted.internet import defer
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.room_versions import RoomVersion
from synapse.util.async_helpers import yieldable_gather_results
from . import EventBase
@@ -35,26 +36,20 @@ from . import EventBase
SPLIT_FIELD_REGEX = re.compile(r"(?<!\\)\.")
def prune_event(event):
def prune_event(event: EventBase) -> EventBase:
""" Returns a pruned version of the given event, which removes all keys we
don't know about or think could potentially be dodgy.
This is used when we "redact" an event. We want to remove all fields that
the user has specified, but we do want to keep necessary information like
type, state_key etc.
Args:
event (FrozenEvent)
Returns:
FrozenEvent
"""
pruned_event_dict = prune_event_dict(event.get_dict())
pruned_event_dict = prune_event_dict(event.room_version, event.get_dict())
from . import event_type_from_format_version
from . import make_event_from_dict
pruned_event = event_type_from_format_version(event.format_version)(
pruned_event_dict, event.internal_metadata.get_dict()
pruned_event = make_event_from_dict(
pruned_event_dict, event.room_version, event.internal_metadata.get_dict()
)
# Mark the event as redacted
@@ -63,15 +58,12 @@ def prune_event(event):
return pruned_event
def prune_event_dict(event_dict):
def prune_event_dict(room_version: RoomVersion, event_dict: dict) -> dict:
"""Redacts the event_dict in the same way as `prune_event`, except it
operates on dicts rather than event objects
Args:
event_dict (dict)
Returns:
dict: A copy of the pruned event dict
A copy of the pruned event dict
"""
allowed_keys = [
@@ -118,7 +110,7 @@ def prune_event_dict(event_dict):
"kick",
"redact",
)
elif event_type == EventTypes.Aliases:
elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
add_fields("history_visibility")

View File

@@ -15,31 +15,28 @@
# limitations under the License.
import logging
from collections import namedtuple
from typing import Iterable, List
import six
from twisted.internet import defer
from twisted.internet.defer import DeferredList
from twisted.internet.defer import Deferred, DeferredList
from twisted.python.failure import Failure
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.api.room_versions import EventFormatVersions, RoomVersion
from synapse.crypto.event_signing import check_event_content_hash
from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
LoggingContext,
PreserveLoggingContext,
current_context,
make_deferred_yieldable,
preserve_fn,
)
from synapse.types import JsonDict, get_domain_from_id
from synapse.util import unwrapFirstError
logger = logging.getLogger(__name__)
@@ -54,92 +51,25 @@ class FederationBase(object):
self.store = hs.get_datastore()
self._clock = hs.get_clock()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(
self, origin, pdus, room_version, outlier=False, include_none=False
):
"""Takes a list of PDUs and checks the signatures and hashs of each
one. If a PDU fails its signature check then we check if we have it in
the database and if not then request if from the originating server of
that PDU.
If a PDU fails its content hash check then it is redacted.
The given list of PDUs are not modified, instead the function returns
a new list.
Args:
origin (str)
pdu (list)
room_version (str)
outlier (bool): Whether the events are outliers or not
include_none (str): Whether to include None in the returned list
for events that have failed their checks
Returns:
Deferred : A list of PDUs that have valid signatures and hashes.
"""
deferreds = self._check_sigs_and_hashes(room_version, pdus)
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield make_deferred_yieldable(deferred)
except SynapseError:
res = None
if not res:
# Check local db.
res = yield self.store.get_event(
pdu.event_id, allow_rejected=True, allow_none=True
)
if not res and pdu.origin != origin:
try:
res = yield self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
room_version=room_version,
outlier=outlier,
timeout=10000,
)
except SynapseError:
pass
if not res:
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
return res
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
valid_pdus = yield make_deferred_yieldable(
defer.gatherResults(deferreds2, consumeErrors=True)
).addErrback(unwrapFirstError)
if include_none:
return valid_pdus
else:
return [p for p in valid_pdus if p]
def _check_sigs_and_hash(self, room_version, pdu):
def _check_sigs_and_hash(
self, room_version: RoomVersion, pdu: EventBase
) -> Deferred:
return make_deferred_yieldable(
self._check_sigs_and_hashes(room_version, [pdu])[0]
)
def _check_sigs_and_hashes(self, room_version, pdus):
def _check_sigs_and_hashes(
self, room_version: RoomVersion, pdus: List[EventBase]
) -> List[Deferred]:
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
room_version (str): The room version of the PDUs
pdus (list[FrozenEvent]): the events to be checked
room_version: The room version of the PDUs
pdus: the events to be checked
Returns:
list[Deferred]: for each input event, a deferred which:
For each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
@@ -148,9 +78,9 @@ class FederationBase(object):
"""
deferreds = _check_sigs_on_pdus(self.keyring, room_version, pdus)
ctx = LoggingContext.current_context()
ctx = current_context()
def callback(_, pdu):
def callback(_, pdu: EventBase):
with PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
# let's try to distinguish between failures because the event was
@@ -187,7 +117,7 @@ class FederationBase(object):
return pdu
def errback(failure, pdu):
def errback(failure: Failure, pdu: EventBase):
failure.trap(SynapseError)
with PreserveLoggingContext(ctx):
logger.warning(
@@ -213,16 +143,18 @@ class PduToCheckSig(
pass
def _check_sigs_on_pdus(keyring, room_version, pdus):
def _check_sigs_on_pdus(
keyring: Keyring, room_version: RoomVersion, pdus: Iterable[EventBase]
) -> List[Deferred]:
"""Check that the given events are correctly signed
Args:
keyring (synapse.crypto.Keyring): keyring object to do the checks
room_version (str): the room version of the PDUs
pdus (Collection[EventBase]): the events to be checked
keyring: keyring object to do the checks
room_version: the room version of the PDUs
pdus: the events to be checked
Returns:
List[Deferred]: a Deferred for each event in pdus, which will either succeed if
A Deferred for each event in pdus, which will either succeed if
the signatures are valid, or fail (with a SynapseError) if not.
"""
@@ -257,10 +189,6 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
for p in pdus
]
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
raise RuntimeError("Unrecognized room version %s" % (room_version,))
# First we check that the sender event is signed by the sender's domain
# (except if its a 3pid invite, in which case it may be sent by any server)
pdus_to_check_sender = [p for p in pdus_to_check if not _is_invite_via_3pid(p.pdu)]
@@ -270,7 +198,7 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
(
p.sender_domain,
p.redacted_pdu_json,
p.pdu.origin_server_ts if v.enforce_key_validity else 0,
p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
p.pdu.event_id,
)
for p in pdus_to_check_sender
@@ -293,7 +221,7 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
# event id's domain (normally only the case for joins/leaves), and add additional
# checks. Only do this if the room version has a concept of event ID domain
# (ie, the room version uses old-style non-hash event IDs).
if v.event_format == EventFormatVersions.V1:
if room_version.event_format == EventFormatVersions.V1:
pdus_to_check_event_id = [
p
for p in pdus_to_check
@@ -305,7 +233,7 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
(
get_domain_from_id(p.pdu.event_id),
p.redacted_pdu_json,
p.pdu.origin_server_ts if v.enforce_key_validity else 0,
p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
p.pdu.event_id,
)
for p in pdus_to_check_event_id
@@ -327,7 +255,7 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
return [_flatten_deferred_list(p.deferreds) for p in pdus_to_check]
def _flatten_deferred_list(deferreds):
def _flatten_deferred_list(deferreds: List[Deferred]) -> Deferred:
"""Given a list of deferreds, either return the single deferred,
combine into a DeferredList, or return an already resolved deferred.
"""
@@ -339,7 +267,7 @@ def _flatten_deferred_list(deferreds):
return defer.succeed(None)
def _is_invite_via_3pid(event):
def _is_invite_via_3pid(event: EventBase) -> bool:
return (
event.type == EventTypes.Member
and event.membership == Membership.INVITE

View File

@@ -33,6 +33,7 @@ from typing import (
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.defer import Deferred
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import (
@@ -51,7 +52,7 @@ from synapse.api.room_versions import (
)
from synapse.events import EventBase, builder
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.context import make_deferred_yieldable, preserve_fn
from synapse.logging.utils import log_function
from synapse.types import JsonDict
from synapse.util import unwrapFirstError
@@ -187,7 +188,7 @@ class FederationClient(FederationBase):
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
) -> List[EventBase]:
) -> Optional[List[EventBase]]:
"""Requests some more historic PDUs for the given room from the
given destination server.
@@ -199,9 +200,9 @@ class FederationClient(FederationBase):
"""
logger.debug("backfill extrem=%s", extremities)
# If there are no extremeties then we've (probably) reached the start.
# If there are no extremities then we've (probably) reached the start.
if not extremities:
return
return None
transaction_data = await self.transport_layer.backfill(
dest, room_id, extremities, limit
@@ -219,8 +220,7 @@ class FederationClient(FederationBase):
# FIXME: We should handle signature failures more gracefully.
pdus[:] = await make_deferred_yieldable(
defer.gatherResults(
self._check_sigs_and_hashes(room_version.identifier, pdus),
consumeErrors=True,
self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True,
).addErrback(unwrapFirstError)
)
@@ -284,15 +284,13 @@ class FederationClient(FederationBase):
pdu_list = [
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]
] # type: List[EventBase]
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(
room_version.identifier, pdu
)
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
break
@@ -345,6 +343,83 @@ class FederationClient(FederationBase):
return state_event_ids, auth_event_ids
async def _check_sigs_and_hash_and_fetch(
self,
origin: str,
pdus: List[EventBase],
room_version: RoomVersion,
outlier: bool = False,
include_none: bool = False,
) -> List[EventBase]:
"""Takes a list of PDUs and checks the signatures and hashs of each
one. If a PDU fails its signature check then we check if we have it in
the database and if not then request if from the originating server of
that PDU.
If a PDU fails its content hash check then it is redacted.
The given list of PDUs are not modified, instead the function returns
a new list.
Args:
origin
pdus
room_version
outlier: Whether the events are outliers or not
include_none: Whether to include None in the returned list
for events that have failed their checks
Returns:
A list of PDUs that have valid signatures and hashes.
"""
deferreds = self._check_sigs_and_hashes(room_version, pdus)
@defer.inlineCallbacks
def handle_check_result(pdu: EventBase, deferred: Deferred):
try:
res = yield make_deferred_yieldable(deferred)
except SynapseError:
res = None
if not res:
# Check local db.
res = yield self.store.get_event(
pdu.event_id, allow_rejected=True, allow_none=True
)
if not res and pdu.origin != origin:
try:
res = yield defer.ensureDeferred(
self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
room_version=room_version,
outlier=outlier,
timeout=10000,
)
)
except SynapseError:
pass
if not res:
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
return res
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
valid_pdus = await make_deferred_yieldable(
defer.gatherResults(deferreds2, consumeErrors=True)
).addErrback(unwrapFirstError)
if include_none:
return valid_pdus
else:
return [p for p in valid_pdus if p]
async def get_event_auth(self, destination, room_id, event_id):
res = await self.transport_layer.get_event_auth(destination, room_id, event_id)
@@ -356,7 +431,7 @@ class FederationClient(FederationBase):
]
signed_auth = await self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True, room_version=room_version.identifier
destination, auth_chain, outlier=True, room_version=room_version
)
signed_auth.sort(key=lambda e: e.depth)
@@ -583,7 +658,7 @@ class FederationClient(FederationBase):
destination,
list(pdus.values()),
outlier=True,
room_version=room_version.identifier,
room_version=room_version,
)
valid_pdus_map = {p.event_id: p for p in valid_pdus}
@@ -615,7 +690,7 @@ class FederationClient(FederationBase):
]
if auth_chain_create_events != [create_event.event_id]:
raise InvalidResponseError(
"Unexpected create event(s) in auth chain"
"Unexpected create event(s) in auth chain: %s"
% (auth_chain_create_events,)
)
@@ -678,7 +753,7 @@ class FederationClient(FederationBase):
pdu = event_from_pdu_json(pdu_dict, room_version)
# Check signatures are correct.
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
# FIXME: We should handle signature failures more gracefully.
@@ -870,7 +945,7 @@ class FederationClient(FederationBase):
]
signed_events = await self._check_sigs_and_hash_and_fetch(
destination, events, outlier=False, room_version=room_version.identifier
destination, events, outlier=False, room_version=room_version
)
except HttpResponseException as e:
if not e.code == 400:

View File

@@ -409,7 +409,7 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
@@ -425,7 +425,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -455,7 +455,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return {}
@@ -470,57 +470,6 @@ class FederationServer(FederationBase):
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
return 200, res
async def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
missing (list): A list of event_ids indicating what the other
side (`origin`) think we're missing.
rejects (dict): A mapping from event_id to a 2-tuple of reason
string and a proof (or None) of why the event was rejected.
The keys of this dict give the list of events the `origin` has
rejected.
Args:
origin (str)
content (dict)
event_id (str)
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version(room_id)
auth_chain = [
event_from_pdu_json(e, room_version) for e in content["auth_chain"]
]
signed_auth = await self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True, room_version=room_version.identifier
)
ret = await self.handler.on_query_auth(
origin,
event_id,
room_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [e.get_pdu_json(time_now) for e in ret["auth_chain"]],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
return 200, send_content
@log_function
def on_query_client_keys(self, origin, content):
return self.on_query_request("client_keys", content)
@@ -662,7 +611,7 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = await self.store.get_room_version_id(pdu.room_id)
room_version = await self.store.get_room_version(pdu.room_id)
# Check signature.
try:

View File

@@ -477,7 +477,7 @@ def process_rows_for_federation(transaction_queue, rows):
Args:
transaction_queue (FederationSender)
rows (list(synapse.replication.tcp.streams.FederationStreamRow))
rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow))
"""
# The federation stream contains a bunch of different types of

View File

@@ -499,4 +499,13 @@ class FederationSender(object):
self._get_per_destination_queue(destination).attempt_new_transaction()
def get_current_token(self) -> int:
# Dummy implementation for the case where the federation sender isn't
# offloaded to a worker.
return 0
async def get_replication_rows(
self, from_token, to_token, limit, federation_ack=None
):
# Dummy implementation for the case where the federation sender isn't
# offloaded to a worker.
return []

Some files were not shown because too many files have changed in this diff.