Compare commits

1 commit
v1.24.0-mu ... erikj/rele

| Author | SHA1 | Date |
|---|---|---|
|  | 0eaa6dd30e |  |
@@ -6,7 +6,7 @@
set -ex

apt-get update
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev tox

export LANG="C.UTF-8"
Binary file not shown.
@@ -5,30 +5,53 @@ jobs:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker
      - docker_prepare
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - docker_build:
          tag: -t matrixdotorg/synapse:v1.24.0
          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
          platforms: linux/amd64
      - docker_build:
          tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
          platforms: linux/amd64,linux/arm/v7,linux/arm64

  dockerhubuploadlatest:
    docker:
      - image: docker:git
    steps:
      - checkout
      - setup_remote_docker
      - docker_prepare
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - docker_build:
          tag: -t matrixdotorg/synapse:latest
          platforms: linux/amd64
      - docker_build:
          tag: -t matrixdotorg/synapse:latest
          platforms: linux/amd64,linux/arm/v7,linux/arm64

workflows:
  build:
    jobs:
      - dockerhubuploadrelease
      - dockerhubuploadrelease:
          filters:
            tags:
              only: /v[0-9].[0-9]+.[0-9]+.*/
            branches:
              ignore: /.*/
      - dockerhubuploadlatest:
          filters:
            branches:
              only: master

commands:
  docker_prepare:
    description: Sets up a remote docker server, downloads the buildx cli plugin, and enables multiarch images
    description: Downloads the buildx cli plugin and enables multiarch images
    parameters:
      buildx_version:
        type: string
        default: "v0.4.1"
    steps:
      - setup_remote_docker:
          # 19.03.13 was the most recent available on circleci at the time of
          # writing.
          version: 19.03.13
      - run: apk add --no-cache curl
      - run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
      - run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
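Note: `docker_build` above is a reusable CircleCI command defined elsewhere in this config and not shown in this hunk. Purely as a hedged sketch of what such a multi-arch build amounts to once the buildx plugin is installed (the builder name, build context, and the decision to push are assumptions; the tag and platform list are taken from the config):

```sh
# Log in so the built images can be pushed to Docker Hub.
docker login --username "$DOCKER_HUB_USERNAME" --password "$DOCKER_HUB_PASSWORD"

# Create and select a buildx builder that can target multiple architectures.
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap

# Build (and push) the image for the platforms listed in the config above.
docker buildx build \
    --platform linux/amd64,linux/arm/v7,linux/arm64 \
    --tag matrixdotorg/synapse:latest \
    --push .
```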
528  CHANGES.md

@@ -1,235 +1,3 @@

Synapse 1.24.0 (2020-12-09)
===========================

Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.

Security advisory
-----------------

The following issues are fixed in v1.23.1 and v1.24.0.

- There is a denial of service attack
  ([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
  against the federation APIs in which future events will not be correctly sent
  to other servers over federation. This affects all servers that participate in
  open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).

- Synapse may be affected by OpenSSL
  [CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
  Synapse administrators should ensure that they have the latest versions of
  the cryptography Python package installed.

To upgrade Synapse along with the cryptography package (example commands follow
the list below):

* Administrators using the [`matrix.org` Docker
  image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
  packages from
  `matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
  should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
  the updated packages.
* Administrators who have [installed Synapse from
  source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
  should upgrade the cryptography package within their virtualenv by running:

  ```sh
  <path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
  ```

* Administrators who have installed Synapse from distribution packages should
  consult the information from their distributions.
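Example commands for the first two upgrade paths above, as a sketch only: the image name comes from the Docker Hub link above, the Debian package name `matrix-synapse-py3` is the one used in the packaging example later in this changeset, and the exact steps depend on how the server is deployed.

```sh
# Docker image installs: pull a fixed image and recreate the container.
docker pull matrixdotorg/synapse:v1.24.0

# Debian/Ubuntu installs using the matrix.org packages: upgrade the package.
sudo apt-get update
sudo apt-get install matrix-synapse-py3
```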
Internal Changes
----------------

- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))


Synapse 1.24.0rc2 (2020-12-04)
==============================

Bugfixes
--------

- Fix a regression in v1.24.0rc1 which failed to allow SAML mapping providers which were unable to redirect users to an additional page. ([\#8878](https://github.com/matrix-org/synapse/issues/8878))


Internal Changes
----------------

- Add support for `prometheus_client` newer than 0.9.0. Contributed by Jordan Bancino. ([\#8875](https://github.com/matrix-org/synapse/issues/8875))


Synapse 1.24.0rc1 (2020-12-02)
==============================

Features
--------

- Add admin API for logging in as a user. ([\#8617](https://github.com/matrix-org/synapse/issues/8617))
- Allow specification of the SAML IdP if the metadata returns multiple IdPs. ([\#8630](https://github.com/matrix-org/synapse/issues/8630))
- Add support for re-trying generation of a localpart for OpenID Connect mapping providers. ([\#8801](https://github.com/matrix-org/synapse/issues/8801), [\#8855](https://github.com/matrix-org/synapse/issues/8855))
- Allow the `Date` header through CORS. Contributed by Nicolas Chamo. ([\#8804](https://github.com/matrix-org/synapse/issues/8804))
- Add a config option, `push.group_by_unread_count`, which controls whether unread message counts in push notifications are defined as "the number of rooms with unread messages" or "total unread messages". ([\#8820](https://github.com/matrix-org/synapse/issues/8820))
- Add `force_purge` option to delete-room admin api. ([\#8843](https://github.com/matrix-org/synapse/issues/8843))


Bugfixes
--------

- Fix a bug where appservices may be sent an excessive amount of read receipts and presence. Broke in v1.22.0. ([\#8744](https://github.com/matrix-org/synapse/issues/8744))
- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))
- Fix a bug where synctl could spawn duplicate copies of a worker. Contributed by Waylon Cude. ([\#8798](https://github.com/matrix-org/synapse/issues/8798))
- Allow per-room profiles to be used for the server notice user. ([\#8799](https://github.com/matrix-org/synapse/issues/8799))
- Fix a bug where logging could break after a call to SIGHUP. ([\#8817](https://github.com/matrix-org/synapse/issues/8817))
- Fix `register_new_matrix_user` failing with "Bad Request" when trailing slash is included in server URL. Contributed by @angdraug. ([\#8823](https://github.com/matrix-org/synapse/issues/8823))
- Fix a minor long-standing bug in login, where we would offer the `password` login type if a custom auth provider supported it, even if password login was disabled. ([\#8835](https://github.com/matrix-org/synapse/issues/8835))
- Fix a long-standing bug which caused Synapse to require unspecified parameters during user-interactive authentication. ([\#8848](https://github.com/matrix-org/synapse/issues/8848))
- Fix a bug introduced in v1.20.0 where the user-agent and IP address reported during user registration for CAS, OpenID Connect, and SAML were of the wrong form. ([\#8784](https://github.com/matrix-org/synapse/issues/8784))


Improved Documentation
----------------------

- Clarify the use case for a msisdn delegate. Contributed by Adrian Wannenmacher. ([\#8734](https://github.com/matrix-org/synapse/issues/8734))
- Remove extraneous comma from JSON example in User Admin API docs. ([\#8771](https://github.com/matrix-org/synapse/issues/8771))
- Update `turn-howto.md` with troubleshooting notes. ([\#8779](https://github.com/matrix-org/synapse/issues/8779))
- Fix the example on how to set the `Content-Type` header in nginx for the Client Well-Known URI. ([\#8793](https://github.com/matrix-org/synapse/issues/8793))
- Improve the documentation for the admin API to list all media in a room with respect to encrypted events. ([\#8795](https://github.com/matrix-org/synapse/issues/8795))
- Update the formatting of the `push` section of the homeserver config file to better align with the [code style guidelines](https://github.com/matrix-org/synapse/blob/develop/docs/code_style.md#configuration-file-format). ([\#8818](https://github.com/matrix-org/synapse/issues/8818))
- Improve documentation on how to configure Prometheus for workers. ([\#8822](https://github.com/matrix-org/synapse/issues/8822))
- Update example Prometheus console. ([\#8824](https://github.com/matrix-org/synapse/issues/8824))


Deprecations and Removals
-------------------------

- Remove old `/_matrix/client/*/admin` endpoints which were deprecated since Synapse 1.20.0. ([\#8785](https://github.com/matrix-org/synapse/issues/8785))
- Disable pretty printing JSON responses for curl. Users who want pretty-printed output should use [jq](https://stedolan.github.io/jq/) in combination with curl. Contributed by @tulir. ([\#8833](https://github.com/matrix-org/synapse/issues/8833))


Internal Changes
----------------

- Simplify the way the `HomeServer` object caches its internal attributes. ([\#8565](https://github.com/matrix-org/synapse/issues/8565), [\#8851](https://github.com/matrix-org/synapse/issues/8851))
- Add an example and documentation for clock skew to the SAML2 sample configuration to allow for clock/time difference between the homeserver and IdP. Contributed by @localguru. ([\#8731](https://github.com/matrix-org/synapse/issues/8731))
- Generalise `RoomMemberHandler._locally_reject_invite` to apply to more flows than just invite. ([\#8751](https://github.com/matrix-org/synapse/issues/8751))
- Generalise `RoomStore.maybe_store_room_on_invite` to handle other, non-invite membership events. ([\#8754](https://github.com/matrix-org/synapse/issues/8754))
- Refactor test utilities for injecting HTTP requests. ([\#8757](https://github.com/matrix-org/synapse/issues/8757), [\#8758](https://github.com/matrix-org/synapse/issues/8758), [\#8759](https://github.com/matrix-org/synapse/issues/8759), [\#8760](https://github.com/matrix-org/synapse/issues/8760), [\#8761](https://github.com/matrix-org/synapse/issues/8761), [\#8777](https://github.com/matrix-org/synapse/issues/8777))
- Consolidate logic between the OpenID Connect and SAML code. ([\#8765](https://github.com/matrix-org/synapse/issues/8765))
- Use `TYPE_CHECKING` instead of magic `MYPY` variable. ([\#8770](https://github.com/matrix-org/synapse/issues/8770))
- Add a commandline script to sign arbitrary json objects. ([\#8772](https://github.com/matrix-org/synapse/issues/8772))
- Minor log line improvements for the SSO mapping code used to generate Matrix IDs from SSO IDs. ([\#8773](https://github.com/matrix-org/synapse/issues/8773))
- Add additional error checking for OpenID Connect and SAML mapping providers. ([\#8774](https://github.com/matrix-org/synapse/issues/8774), [\#8800](https://github.com/matrix-org/synapse/issues/8800))
- Add type hints to HTTP abstractions. ([\#8806](https://github.com/matrix-org/synapse/issues/8806), [\#8812](https://github.com/matrix-org/synapse/issues/8812))
- Remove unnecessary function arguments and add typing to several membership replication classes. ([\#8809](https://github.com/matrix-org/synapse/issues/8809))
- Optimise the lookup for an invite from another homeserver when trying to reject it. ([\#8815](https://github.com/matrix-org/synapse/issues/8815))
- Add tests for `password_auth_provider`s. ([\#8819](https://github.com/matrix-org/synapse/issues/8819))
- Drop redundant database index on `event_json`. ([\#8845](https://github.com/matrix-org/synapse/issues/8845))
- Simplify `uk.half-shot.msc2778.login.application_service` login handler. ([\#8847](https://github.com/matrix-org/synapse/issues/8847))
- Refactor `password_auth_provider` support code. ([\#8849](https://github.com/matrix-org/synapse/issues/8849))
- Add missing `ordering` to background database updates. ([\#8850](https://github.com/matrix-org/synapse/issues/8850))
- Allow for specifying a room version when creating a room in unit tests via `RestHelper.create_room_as`. ([\#8854](https://github.com/matrix-org/synapse/issues/8854))


Synapse 1.23.0 (2020-11-18)
===========================

This release changes the way structured logging is configured. See the [upgrade notes](UPGRADE.rst#upgrading-to-v1230) for details.

**Note**: We are aware of a trivially exploitable denial of service vulnerability in versions of Synapse prior to 1.20.0. Complete details will be disclosed on Monday, November 23rd. If you have not upgraded recently, please do so.

Bugfixes
--------

- Fix a dependency versioning bug in the Dockerfile that prevented Synapse from starting. ([\#8767](https://github.com/matrix-org/synapse/issues/8767))


Synapse 1.23.0rc1 (2020-11-13)
==============================

Features
--------

- Add a push rule that highlights when a jitsi conference is created in a room. ([\#8286](https://github.com/matrix-org/synapse/issues/8286))
- Add an admin api to delete a single file or files that were not used for a defined time from server. Contributed by @dklimpel. ([\#8519](https://github.com/matrix-org/synapse/issues/8519))
- Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel. ([\#8539](https://github.com/matrix-org/synapse/issues/8539))
- Support generating structured logs via the standard logging configuration. ([\#8607](https://github.com/matrix-org/synapse/issues/8607), [\#8685](https://github.com/matrix-org/synapse/issues/8685))
- Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel. ([\#8610](https://github.com/matrix-org/synapse/issues/8610), [\#8689](https://github.com/matrix-org/synapse/issues/8689))
- Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel. ([\#8647](https://github.com/matrix-org/synapse/issues/8647))
- Add an admin API for local user media statistics. Contributed by @dklimpel. ([\#8700](https://github.com/matrix-org/synapse/issues/8700))
- Add `displayname` to Shared-Secret Registration for admins. ([\#8722](https://github.com/matrix-org/synapse/issues/8722))


Bugfixes
--------

- Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already. ([\#8455](https://github.com/matrix-org/synapse/issues/8455))
- Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information. ([\#8580](https://github.com/matrix-org/synapse/issues/8580))
- Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error. ([\#8620](https://github.com/matrix-org/synapse/issues/8620))
- Fix email notifications for invites without local state. ([\#8627](https://github.com/matrix-org/synapse/issues/8627))
- Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500. ([\#8628](https://github.com/matrix-org/synapse/issues/8628))
- Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded. ([\#8632](https://github.com/matrix-org/synapse/issues/8632))
- Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0. ([\#8643](https://github.com/matrix-org/synapse/issues/8643))
- Fix exception during handling multiple concurrent requests for remote media when using multiple media repositories. ([\#8682](https://github.com/matrix-org/synapse/issues/8682))
- Fix bug that prevented Synapse from recovering after losing connection to the database. ([\#8726](https://github.com/matrix-org/synapse/issues/8726))
- Fix bug where the `/_synapse/admin/v1/send_server_notice` API could send notices to non-notice rooms. ([\#8728](https://github.com/matrix-org/synapse/issues/8728))
- Fix the PostgreSQL port script failing when the DB has no backfilled events. Broke in v1.21.0. ([\#8729](https://github.com/matrix-org/synapse/issues/8729))
- Fix PostgreSQL port script to correctly handle foreign key constraints. Broke in v1.21.0. ([\#8730](https://github.com/matrix-org/synapse/issues/8730))
- Fix PostgreSQL port script so that it can be run again after a failure. Broke in v1.21.0. ([\#8755](https://github.com/matrix-org/synapse/issues/8755))


Improved Documentation
----------------------

- Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk. ([\#8582](https://github.com/matrix-org/synapse/issues/8582))
- Improve the sample configuration for single sign-on providers. ([\#8635](https://github.com/matrix-org/synapse/issues/8635))
- Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs. ([\#8657](https://github.com/matrix-org/synapse/issues/8657))
- Note support for Python 3.9. ([\#8665](https://github.com/matrix-org/synapse/issues/8665))
- Minor updates to docs on running tests. ([\#8666](https://github.com/matrix-org/synapse/issues/8666))
- Interlink prometheus/grafana documentation. ([\#8667](https://github.com/matrix-org/synapse/issues/8667))
- Notes on SSO logins and the media_repository worker. ([\#8701](https://github.com/matrix-org/synapse/issues/8701))
- Document experimental support for running multiple event persisters. ([\#8706](https://github.com/matrix-org/synapse/issues/8706))
- Add information regarding the various sources of, and expected contributions to, Synapse's documentation to `CONTRIBUTING.md`. ([\#8714](https://github.com/matrix-org/synapse/issues/8714))
- Migrate documentation `docs/admin_api/event_reports` to markdown. ([\#8742](https://github.com/matrix-org/synapse/issues/8742))
- Add some helpful hints to the README for new Synapse developers. Contributed by @chagai95. ([\#8746](https://github.com/matrix-org/synapse/issues/8746))


Internal Changes
----------------

- Optimise `/createRoom` with multiple invited users. ([\#8559](https://github.com/matrix-org/synapse/issues/8559))
- Implement and use an `@lru_cache` decorator. ([\#8595](https://github.com/matrix-org/synapse/issues/8595))
- Don't instantiate Requester directly. ([\#8614](https://github.com/matrix-org/synapse/issues/8614))
- Type hints for `RegistrationStore`. ([\#8615](https://github.com/matrix-org/synapse/issues/8615))
- Change schema to support access tokens belonging to one user but granting access to another. ([\#8616](https://github.com/matrix-org/synapse/issues/8616))
- Remove unused OPTIONS handlers. ([\#8621](https://github.com/matrix-org/synapse/issues/8621))
- Run `mypy` as part of the lint.sh script. ([\#8633](https://github.com/matrix-org/synapse/issues/8633))
- Correct Synapse's PyPI package name in the OpenID Connect installation instructions. ([\#8634](https://github.com/matrix-org/synapse/issues/8634))
- Catch exceptions during initialization of `password_providers`. Contributed by Nicolai Søborg. ([\#8636](https://github.com/matrix-org/synapse/issues/8636))
- Fix typos and spelling errors in the code. ([\#8639](https://github.com/matrix-org/synapse/issues/8639))
- Reduce number of OpenTracing spans started. ([\#8640](https://github.com/matrix-org/synapse/issues/8640), [\#8668](https://github.com/matrix-org/synapse/issues/8668), [\#8670](https://github.com/matrix-org/synapse/issues/8670))
- Add field `total` to device list in admin API. ([\#8644](https://github.com/matrix-org/synapse/issues/8644))
- Add more type hints to the application services code. ([\#8655](https://github.com/matrix-org/synapse/issues/8655), [\#8693](https://github.com/matrix-org/synapse/issues/8693))
- Tell Black to format code for Python 3.5. ([\#8664](https://github.com/matrix-org/synapse/issues/8664))
- Don't pull event from DB when handling replication traffic. ([\#8669](https://github.com/matrix-org/synapse/issues/8669))
- Abstract some invite-related code in preparation for landing knocking. ([\#8671](https://github.com/matrix-org/synapse/issues/8671), [\#8688](https://github.com/matrix-org/synapse/issues/8688))
- Clarify representation of events in logfiles. ([\#8679](https://github.com/matrix-org/synapse/issues/8679))
- Don't require `hiredis` package to be installed to run unit tests. ([\#8680](https://github.com/matrix-org/synapse/issues/8680))
- Fix typing info on cache call signature to accept `on_invalidate`. ([\#8684](https://github.com/matrix-org/synapse/issues/8684))
- Fail tests if they do not await coroutines. ([\#8690](https://github.com/matrix-org/synapse/issues/8690))
- Improve start time by adding an index to `e2e_cross_signing_keys.stream_id`. ([\#8694](https://github.com/matrix-org/synapse/issues/8694))
- Re-organize the structured logging code to separate the TCP transport handling from the JSON formatting. ([\#8697](https://github.com/matrix-org/synapse/issues/8697))
- Use Python 3.8 in Docker images by default. ([\#8698](https://github.com/matrix-org/synapse/issues/8698))
- Remove the "draft" status of the Room Details Admin API. ([\#8702](https://github.com/matrix-org/synapse/issues/8702))
- Improve the error returned when a non-string displayname or avatar_url is used when updating a user's profile. ([\#8705](https://github.com/matrix-org/synapse/issues/8705))
- Block attempts by clients to send server ACLs, or redactions of server ACLs, that would result in the local server being blocked from the room. ([\#8708](https://github.com/matrix-org/synapse/issues/8708))
- Add metrics that allow the local sysadmin to track 3PID `/requestToken` requests. ([\#8712](https://github.com/matrix-org/synapse/issues/8712))
- Consolidate duplicated lists of purged tables that are checked in tests. ([\#8713](https://github.com/matrix-org/synapse/issues/8713))
- Add some `mdui:UIInfo` element examples for `saml2_config` in the homeserver config. ([\#8718](https://github.com/matrix-org/synapse/issues/8718))
- Improve the error message returned when a remote server incorrectly sets the `Content-Type` header in response to a JSON request. ([\#8719](https://github.com/matrix-org/synapse/issues/8719))
- Speed up repeated state resolutions on the same room by caching event ID to auth event ID lookups. ([\#8752](https://github.com/matrix-org/synapse/issues/8752))


Synapse 1.22.1 (2020-10-30)
===========================
@@ -6455,8 +6223,8 @@ Changes in synapse 0.5.1 (2014-11-26)

See UPGRADES.rst for specific instructions on how to upgrade.

- Fix bug where we served up an Event that did not match its signatures.
- Fix regression where we no longer correctly handled the case where a homeserver receives an event for a room it doesn't recognise (but is in.)

Changes in synapse 0.5.0 (2014-11-19)
=====================================

@@ -6467,44 +6235,44 @@ This release also changes the internal database schemas and so requires servers

Homeserver:

- Add authentication and authorization to the federation protocol. Events are now signed by their originating homeservers.
- Implement the new authorization model for rooms.
- Split out web client into a separate repository: matrix-angular-sdk.
- Change the structure of PDUs.
- Fix bug where user could not join rooms via an alias containing 4-byte UTF-8 characters.
- Merge concept of PDUs and Events internally.
- Improve logging by adding request ids to log lines.
- Implement a very basic room initial sync API.
- Implement the new invite/join federation APIs.

Webclient:

- The webclient has been moved to a separate repository.

Changes in synapse 0.4.2 (2014-10-31)
=====================================

Homeserver:

- Fix bugs where we did not notify users of correct presence updates.
- Fix bug where we did not handle sub second event stream timeouts.

Webclient:

- Add ability to click on messages to see JSON.
- Add ability to redact messages.
- Add ability to view and edit all room state JSON.
- Handle incoming redactions.
- Improve feedback on errors.
- Fix bugs in mobile CSS.
- Fix bugs with desktop notifications.

Changes in synapse 0.4.1 (2014-10-17)
=====================================

Webclient:

- Fix bug with display of timestamps.

Changes in synapse 0.4.0 (2014-10-17)
=====================================

@@ -6517,8 +6285,8 @@ You will also need an updated syutil and config. See UPGRADES.rst.

Homeserver:

- Sign federation transactions to assert strong identity over federation.
- Rename timestamp keys in PDUs and events from 'ts' and 'hsob_ts' to 'origin_server_ts'.

Changes in synapse 0.3.4 (2014-09-25)
=====================================

@@ -6527,48 +6295,48 @@ This version adds support for using a TURN server. See docs/turn-howto.rst on ho

Homeserver:

- Add support for redaction of messages.
- Fix bug where inviting a user on a remote home server could take up to 20-30s.
- Implement a get current room state API.
- Add support specifying and retrieving turn server configuration.

Webclient:

- Add button to send messages to users from the home page.
- Add support for using TURN for VoIP calls.
- Show display name change messages.
- Fix bug where the client didn't get the state of a newly joined room until after it has been refreshed.
- Fix bugs with tab complete.
- Fix bug where holding down the down arrow caused chrome to chew 100% CPU.
- Fix bug where desktop notifications occasionally used "Undefined" as the display name.
- Fix more places where we sometimes saw room IDs incorrectly.
- Fix bug which caused lag when entering text in the text box.

Changes in synapse 0.3.3 (2014-09-22)
=====================================

Homeserver:

- Fix bug where you continued to get events for rooms you had left.

Webclient:

- Add support for video calls with basic UI.
- Fix bug where one to one chats were named after your display name rather than the other person's.
- Fix bug which caused lag when typing in the textarea.
- Refuse to run on browsers we know won't work.
- Trigger pagination when joining new rooms.
- Fix bug where we sometimes didn't display invitations in recents.
- Automatically join room when accepting a VoIP call.
- Disable outgoing and reject incoming calls on browsers we don't support VoIP in.
- Don't display desktop notifications for messages in the room you are non-idle and speaking in.

Changes in synapse 0.3.2 (2014-09-18)
=====================================

Webclient:

- Fix bug where an empty "bing words" list in old accounts didn't send notifications when it should have done.

Changes in synapse 0.3.1 (2014-09-18)
=====================================

@@ -6577,8 +6345,8 @@ This is a release to hotfix v0.3.0 to fix two regressions.

Webclient:

- Fix a regression where we sometimes displayed duplicate events.
- Fix a regression where we didn't immediately remove rooms you were banned in from the recents list.

Changes in synapse 0.3.0 (2014-09-18)
=====================================

@@ -6587,91 +6355,91 @@ See UPGRADE for information about changes to the client server API, including br

Homeserver:

- When a user changes their displayname or avatar the server will now update all their join states to reflect this.
- The server now adds "age" key to events to indicate how old they are. This is clock independent, so at no point does any server or webclient have to assume their clock is in sync with everyone else.
- Fix bug where we didn't correctly pull in missing PDUs.
- Fix bug where prev_content key wasn't always returned.
- Add support for password resets.

Webclient:

- Improve page content loading.
- Join/parts now trigger desktop notifications.
- Always show room aliases in the UI if one is present.
- No longer show user-count in the recents side panel.
- Add up & down arrow support to the text box for message sending to step through your sent history.
- Don't display notifications for our own messages.
- Emotes are now formatted correctly in desktop notifications.
- The recents list now differentiates between public & private rooms.
- Fix bug where when switching between rooms the pagination flickered before the view jumped to the bottom of the screen.
- Add bing word support.

Registration API:

- The registration API has been overhauled to function like the login API. In practice, this means registration requests must now include the following: 'type':'m.login.password'. See UPGRADE for more information on this.
- The 'user_id' key has been renamed to 'user' to better match the login API.
- There is an additional login type: 'm.login.email.identity'.
- The command client and web client have been updated to reflect these changes.

Changes in synapse 0.2.3 (2014-09-12)
=====================================

Homeserver:

- Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
- Fix bugs in the state conflict resolution where it was incorrectly rejecting events.

Webclient:

- Display room names and topics.
- Allow setting/editing of room names and topics.
- Display information about rooms on the main page.
- Handle ban and kick events in real time.
- VoIP UI and reliability improvements.
- Add glare support for VoIP.
- Improvements to initial startup speed.
- Don't display duplicate join events.
- Local echo of messages.
- Differentiate sending and sent of local echo.
- Various minor bug fixes.

Changes in synapse 0.2.2 (2014-09-06)
=====================================

Homeserver:

- When the server returns state events it now also includes the previous content.
- Add support for inviting people when creating a new room.
- Make the homeserver inform the room via m.room.aliases when a new alias is added for a room.
- Validate m.room.power_level events.

Webclient:

- Add support for captchas on registration.
- Handle m.room.aliases events.
- Asynchronously send messages and show a local echo.
- Inform the UI when a message failed to send.
- Only autoscroll on receiving a new message if the user was already at the bottom of the screen.
- Add support for ban/kick reasons.

Changes in synapse 0.2.1 (2014-09-03)
=====================================

Homeserver:

- Added support for signing up with a third party id.
- Add synctl scripts.
- Added rate limiting.
- Add option to change the external address the content repo uses.
- Presence bug fixes.

Webclient:

- Added support for signing up with a third party id.
- Added support for banning and kicking users.
- Added support for displaying and setting ops.
- Added support for room names.
- Fix bugs with room membership event display.

Changes in synapse 0.2.0 (2014-09-02)
=====================================

@@ -6680,36 +6448,36 @@ This update changes many configuration options, updates the database schema and

Homeserver:

- Require SSL for server-server connections.
- Add SSL listener for client-server connections.
- Add ability to use config files.
- Add support for kicking/banning and power levels.
- Allow setting of room names and topics on creation.
- Change presence to include last seen time of the user.
- Change url path prefix to /_matrix/...
- Bug fixes to presence.

Webclient:

- Reskin the CSS for registration and login.
- Various improvements to rooms CSS.
- Support changes in client-server API.
- Bug fixes to VOIP UI.
- Various bug fixes to handling of changes to room member list.

Changes in synapse 0.1.2 (2014-08-29)
=====================================

Webclient:

- Add basic call state UI for VoIP calls.

Changes in synapse 0.1.1 (2014-08-29)
=====================================

Homeserver:

- Fix bug that caused the event stream to not notify some clients about changes.

Changes in synapse 0.1.0 (2014-08-29)
=====================================

@@ -6718,22 +6486,26 @@ Presence has been reenabled in this release.

Homeserver:

- Update client to server API, including:
  - Use a more consistent url scheme.
  - Provide more useful information in the initial sync api.
- Change the presence handling to be much more efficient.
- Change the presence server to server API to not require explicit polling of all users who share a room with a user.
- Fix races in the event streaming logic.

Webclient:

- Update to use new client to server API.
- Add basic VOIP support.
- Add idle timers that change your status to away.
- Add recent rooms column when viewing a room.
- Various network efficiency improvements.
- Add basic mobile browser support.
- Add a settings page.

Changes in synapse 0.0.1 (2014-08-22)
=====================================

@@ -6742,26 +6514,26 @@ Presence has been disabled in this release due to a bug that caused the homeserv

Homeserver:

- Completely change the database schema to support generic event types.
- Improve presence reliability.
- Improve reliability of joining remote rooms.
- Fix bug where room join events were duplicated.
- Improve initial sync API to return more information to the client.
- Stop generating fake messages for room membership events.

Webclient:

- Add tab completion of names.
- Add ability to upload and send images.
- Add profile pages.
- Improve CSS layout of room.
- Disambiguate identical display names.
- Don't get remote users display names and avatars individually.
- Use the new initial sync API to reduce number of round trips to the homeserver.
- Change url scheme to use room aliases instead of room ids where known.
- Increase longpoll timeout.

Changes in synapse 0.0.0 (2014-08-13)
=====================================

- Initial alpha release
@@ -156,24 +156,6 @@ directory, you will need both a regular newsfragment *and* an entry in the

debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)

## Documentation

There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.

New files added to both folders should be written in [Github-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
should be made to migrate existing documents to markdown where possible.

Some documentation also exists in [Synapse's Github
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.

## Sign off

In order to have a concrete record that your contribution is intentional
@@ -487,7 +487,7 @@ In nginx this would be something like:

```
location /.well-known/matrix/client {
    return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
    default_type application/json;
    add_header Content-Type application/json;
    add_header Access-Control-Allow-Origin *;
}
```
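One way to sanity-check the well-known configuration above is to fetch the document and inspect the response headers; a minimal sketch, with the hostname as a placeholder:

```sh
# Fetch the client well-known document, printing response headers as well;
# the Content-Type header should come back as application/json.
curl --include "https://matrix.example.com/.well-known/matrix/client"
```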
16  README.rst

@@ -261,22 +261,18 @@ to install using pip and a virtualenv::

    pip install -e ".[all,test]"

This will run a process of downloading and installing all the needed
dependencies into a virtual env. If any dependencies fail to install,
try installing the failing modules individually::
dependencies into a virtual env.

    pip install -e "module-name"

Once this is done, you may wish to run Synapse's unit tests to
check that everything is installed correctly::
Once this is done, you may wish to run Synapse's unit tests, to
check that everything is installed as it should be::

    python -m twisted.trial tests

This should end with a 'PASSED' result (note that exact numbers will
differ)::
This should end with a 'PASSED' result::

    Ran 1337 tests in 716.064s
    Ran 1266 tests in 643.930s

    PASSED (skips=15, successes=1322)
    PASSED (skips=15, successes=1251)
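Putting the steps above together, a from-scratch developer install might look like the following sketch (the virtualenv location is an assumption; the commands themselves are the ones quoted above):

```sh
# Create and activate a virtualenv for development (location is an assumption).
python3 -m venv ./env
source ./env/bin/activate

# Install Synapse in editable mode with all optional and test dependencies.
pip install -e ".[all,test]"

# Run the unit tests; the run should finish with a 'PASSED' result.
python -m twisted.trial tests
```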
Running the Integration Tests
=============================
54  UPGRADE.rst

@@ -75,58 +75,6 @@ for example:

    wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb

Upgrading to v1.24.0
====================

Custom OpenID Connect mapping provider breaking change
------------------------------------------------------

This release allows the OpenID Connect mapping provider to perform normalisation
of the localpart of the Matrix ID. This allows for the mapping provider to
specify different algorithms, instead of the [default way](https://matrix.org/docs/spec/appendices#mapping-from-other-character-sets).

If your Synapse configuration uses a custom mapping provider
(`oidc_config.user_mapping_provider.module` is specified and not equal to
`synapse.handlers.oidc_handler.JinjaOidcMappingProvider`) then you *must* ensure
that `map_user_attributes` of the mapping provider performs some normalisation
of the `localpart` returned. To match previous behaviour you can use the
`map_username_to_mxid_localpart` function provided by Synapse. An example is
shown below:

.. code-block:: python

    from synapse.types import map_username_to_mxid_localpart

    class MyMappingProvider:
        def map_user_attributes(self, userinfo, token):
            # ... your custom logic ...
            sso_user_id = ...
            localpart = map_username_to_mxid_localpart(sso_user_id)

            return {"localpart": localpart}

Removal of historical Synapse Admin API
---------------------------------------

Historically, the Synapse Admin API has been accessible under:

* ``/_matrix/client/api/v1/admin``
* ``/_matrix/client/unstable/admin``
* ``/_matrix/client/r0/admin``
* ``/_synapse/admin/v1``

The endpoints with ``/_matrix/client/*`` prefixes have been removed as of v1.24.0.
The Admin API is now only accessible under:

* ``/_synapse/admin/v1``

The only exception is the `/admin/whois` endpoint, which is
`also available via the client-server API <https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>`_.

The deprecation of the old endpoints was announced with Synapse 1.20.0 (released
on 2020-09-22) and makes it easier for homeserver admins to lock down external
access to the Admin API endpoints.
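For illustration, an Admin API request now uses only the ``/_synapse/admin/v1`` prefix; a hedged example against the `/admin/whois` endpoint mentioned above (hostname, user ID, and access token are placeholders):

```sh
# Query the whois endpoint under the remaining /_synapse/admin/v1 prefix.
# The access token must belong to a server admin; all names are placeholders.
curl --header "Authorization: Bearer <access_token>" \
    "https://matrix.example.com/_synapse/admin/v1/whois/@someone:example.com"
```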
Upgrading to v1.23.0
====================

@@ -139,7 +87,7 @@ then it should be modified based on the `structured logging documentation
<https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>`_.

The ``structured`` and ``drains`` logging options are now deprecated and should
be replaced by standard logging configuration of ``handlers`` and ``formatters``.
be replaced by standard logging configuration of ``handlers`` and ``formatters`.

A future release of Synapse will make using ``structured: true`` an error.
changelog.d/8455.bugfix (new file): Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already.
changelog.d/8519.feature (new file): Add an admin api to delete a single file or files that were not used for a defined time from server. Contributed by @dklimpel.
changelog.d/8539.feature (new file): Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel.
changelog.d/8559.misc (new file): Optimise `/createRoom` with multiple invited users.
changelog.d/8580.bugfix (new file): Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information.
changelog.d/8582.doc (new file): Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk.
changelog.d/8595.misc (new file): Implement and use an @lru_cache decorator.
changelog.d/8607.feature (new file): Support generating structured logs via the standard logging configuration.
changelog.d/8610.feature (new file): Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel.
changelog.d/8614.misc (new file): Don't instantiate Requester directly.
changelog.d/8615.misc (new file): Type hints for `RegistrationStore`.
changelog.d/8616.misc (new file): Change schema to support access tokens belonging to one user but granting access to another.
changelog.d/8620.bugfix (new file): Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error.
changelog.d/8621.misc (new file): Remove unused OPTIONS handlers.
changelog.d/8627.bugfix (new file): Fix email notifications for invites without local state.
changelog.d/8628.bugfix (new file): Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500.
changelog.d/8632.bugfix (new file): Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded.
changelog.d/8633.misc (new file): Run `mypy` as part of the lint.sh script.
changelog.d/8634.misc (new file): Correct Synapse's PyPI package name in the OpenID Connect installation instructions.
changelog.d/8635.doc (new file): Improve the sample configuration for single sign-on providers.
changelog.d/8639.misc (new file): Fix typos and spelling errors in the code.
changelog.d/8640.misc (new file): Reduce number of OpenTracing spans started.
changelog.d/8643.bugfix (new file): Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0.
changelog.d/8644.misc (new file): Add field `total` to device list in admin API.
changelog.d/8647.feature (new file): Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel.
changelog.d/8655.misc (new file): Add more type hints to the application services code.
changelog.d/8657.doc (new file): Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs.
changelog.d/8664.misc (new file): Tell Black to format code for Python 3.5.
changelog.d/8665.doc (new file): Note support for Python 3.9.
changelog.d/8666.doc (new file): Minor updates to docs on running tests.
changelog.d/8667.doc (new file): Interlink prometheus/grafana documentation.
changelog.d/8668.misc (new file): Reduce number of OpenTracing spans started.
changelog.d/8669.misc (new file): Don't pull event from DB when handling replication traffic.
changelog.d/8670.misc (new file): Reduce number of OpenTracing spans started.
changelog.d/8671.misc (new file): Abstract some invite-related code in preparation for landing knocking.
changelog.d/8679.misc (new file): Clarify representation of events in logfiles.
changelog.d/8680.misc (new file): Don't require `hiredis` package to be installed to run unit tests.
changelog.d/8682.bugfix (new file): Fix exception during handling multiple concurrent requests for remote media when using multiple media repositories.
changelog.d/8684.misc (new file): Fix typing info on cache call signature to accept `on_invalidate`.
|
||||
1
changelog.d/8685.feature
Normal file
1
changelog.d/8685.feature
Normal file
@@ -0,0 +1 @@
|
||||
Support generating structured logs via the standard logging configuration.
|
||||
1
changelog.d/8688.misc
Normal file
1
changelog.d/8688.misc
Normal file
@@ -0,0 +1 @@
|
||||
Abstract some invite-related code in preparation for landing knocking.
|
||||
1
changelog.d/8689.feature
Normal file
1
changelog.d/8689.feature
Normal file
@@ -0,0 +1 @@
|
||||
Add an admin APIs to allow server admins to list users' pushers. Contributed by @dklimpel.
|
||||
1
changelog.d/8690.misc
Normal file
1
changelog.d/8690.misc
Normal file
@@ -0,0 +1 @@
|
||||
Fail tests if they do not await coroutines.
|
||||
1
changelog.d/8693.misc
Normal file
1
changelog.d/8693.misc
Normal file
@@ -0,0 +1 @@
|
||||
Add more type hints to the application services code.
|
||||
@@ -1 +0,0 @@
|
||||
Fix multiarch docker image builds.
|
||||
@@ -20,7 +20,6 @@ Add a new job to the main prometheus.conf file:
|
||||
```
|
||||
|
||||
### for Prometheus v2
|
||||
|
||||
Add a new job to the main prometheus.yml file:
|
||||
|
||||
```yaml
|
||||
@@ -30,17 +29,14 @@ Add a new job to the main prometheus.yml file:
|
||||
scheme: "https"
|
||||
|
||||
static_configs:
|
||||
- targets: ["my.server.here:port"]
|
||||
- targets: ['SERVER.LOCATION:PORT']
|
||||
```
|
||||
|
||||
An example of a Prometheus configuration with workers can be found in
|
||||
[metrics-howto.md](https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md).
|
||||
|
||||
To use `synapse.rules` add
|
||||
|
||||
```yaml
|
||||
rule_files:
|
||||
- "/PATH/TO/synapse-v2.rules"
|
||||
rule_files:
|
||||
- "/PATH/TO/synapse-v2.rules"
|
||||
```
|
||||
|
||||
Metrics are disabled by default when running synapse; they must be enabled
|
||||
|
||||
@@ -9,7 +9,7 @@
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#process_resource_utime"),
|
||||
expr: "rate(process_cpu_seconds_total[2m]) * 100",
|
||||
name: "[[job]]-[[index]]",
|
||||
name: "[[job]]",
|
||||
min: 0,
|
||||
max: 100,
|
||||
renderer: "line",
|
||||
@@ -22,12 +22,12 @@ new PromConsole.Graph({
|
||||
</script>
|
||||
|
||||
<h3>Memory</h3>
|
||||
<div id="process_resident_memory_bytes"></div>
|
||||
<div id="process_resource_maxrss"></div>
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#process_resident_memory_bytes"),
|
||||
expr: "process_resident_memory_bytes",
|
||||
name: "[[job]]-[[index]]",
|
||||
node: document.querySelector("#process_resource_maxrss"),
|
||||
expr: "process_psutil_rss:max",
|
||||
name: "Maxrss",
|
||||
min: 0,
|
||||
renderer: "line",
|
||||
height: 150,
|
||||
@@ -43,8 +43,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#process_fds"),
|
||||
expr: "process_open_fds",
|
||||
name: "[[job]]-[[index]]",
|
||||
expr: "process_open_fds{job='synapse'}",
|
||||
name: "FDs",
|
||||
min: 0,
|
||||
renderer: "line",
|
||||
height: 150,
|
||||
@@ -62,8 +62,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#reactor_total_time"),
|
||||
expr: "rate(python_twisted_reactor_tick_time_sum[2m])",
|
||||
name: "[[job]]-[[index]]",
|
||||
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / 1000",
|
||||
name: "time",
|
||||
max: 1,
|
||||
min: 0,
|
||||
renderer: "area",
|
||||
@@ -80,8 +80,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#reactor_average_time"),
|
||||
expr: "rate(python_twisted_reactor_tick_time_sum[2m]) / rate(python_twisted_reactor_tick_time_count[2m])",
|
||||
name: "[[job]]-[[index]]",
|
||||
expr: "rate(python_twisted_reactor_tick_time:total[2m]) / rate(python_twisted_reactor_tick_time:count[2m]) / 1000",
|
||||
name: "time",
|
||||
min: 0,
|
||||
renderer: "line",
|
||||
height: 150,
|
||||
@@ -97,14 +97,14 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#reactor_pending_calls"),
|
||||
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
|
||||
name: "[[job]]-[[index]]",
|
||||
expr: "rate(python_twisted_reactor_pending_calls:total[30s])/rate(python_twisted_reactor_pending_calls:count[30s])",
|
||||
name: "calls",
|
||||
min: 0,
|
||||
renderer: "line",
|
||||
height: 150,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yTitle: "Pending Calls"
|
||||
yTitle: "Pending Cals"
|
||||
})
|
||||
</script>
|
||||
|
||||
@@ -115,7 +115,7 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_storage_query_time"),
|
||||
expr: "sum(rate(synapse_storage_query_time_count[2m])) by (verb)",
|
||||
expr: "rate(synapse_storage_query_time:count[2m])",
|
||||
name: "[[verb]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
@@ -129,8 +129,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_storage_transaction_time"),
|
||||
expr: "topk(10, rate(synapse_storage_transaction_time_count[2m]))",
|
||||
name: "[[job]]-[[index]] [[desc]]",
|
||||
expr: "rate(synapse_storage_transaction_time:count[2m])",
|
||||
name: "[[desc]]",
|
||||
min: 0,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
@@ -140,12 +140,12 @@ new PromConsole.Graph({
|
||||
</script>
|
||||
|
||||
<h3>Transaction execution time</h3>
|
||||
<div id="synapse_storage_transactions_time_sec"></div>
|
||||
<div id="synapse_storage_transactions_time_msec"></div>
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_storage_transactions_time_sec"),
|
||||
expr: "rate(synapse_storage_transaction_time_sum[2m])",
|
||||
name: "[[job]]-[[index]] [[desc]]",
|
||||
node: document.querySelector("#synapse_storage_transactions_time_msec"),
|
||||
expr: "rate(synapse_storage_transaction_time:total[2m]) / 1000",
|
||||
name: "[[desc]]",
|
||||
min: 0,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
@@ -154,33 +154,34 @@ new PromConsole.Graph({
|
||||
})
|
||||
</script>
|
||||
|
||||
<h3>Average time waiting for database connection</h3>
|
||||
<div id="synapse_storage_avg_waiting_time"></div>
|
||||
<h3>Database scheduling latency</h3>
|
||||
<div id="synapse_storage_schedule_time"></div>
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_storage_avg_waiting_time"),
|
||||
expr: "rate(synapse_storage_schedule_time_sum[2m]) / rate(synapse_storage_schedule_time_count[2m])",
|
||||
name: "[[job]]-[[index]]",
|
||||
node: document.querySelector("#synapse_storage_schedule_time"),
|
||||
expr: "rate(synapse_storage_schedule_time:total[2m]) / 1000",
|
||||
name: "Total latency",
|
||||
min: 0,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "s",
|
||||
yTitle: "Time"
|
||||
yUnits: "s/s",
|
||||
yTitle: "Usage"
|
||||
})
|
||||
</script>
|
||||
|
||||
<h3>Cache request rate</h3>
|
||||
<div id="synapse_cache_request_rate"></div>
|
||||
<h3>Cache hit ratio</h3>
|
||||
<div id="synapse_cache_ratio"></div>
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_cache_request_rate"),
|
||||
expr: "rate(synapse_util_caches_cache:total[2m])",
|
||||
name: "[[job]]-[[index]] [[name]]",
|
||||
node: document.querySelector("#synapse_cache_ratio"),
|
||||
expr: "rate(synapse_util_caches_cache:total[2m]) * 100",
|
||||
name: "[[name]]",
|
||||
min: 0,
|
||||
max: 100,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yUnits: "rps",
|
||||
yTitle: "Cache request rate"
|
||||
yUnits: "%",
|
||||
yTitle: "Percentage"
|
||||
})
|
||||
</script>
|
||||
|
||||
@@ -190,7 +191,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_cache_size"),
|
||||
expr: "synapse_util_caches_cache:size",
|
||||
name: "[[job]]-[[index]] [[name]]",
|
||||
name: "[[name]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yUnits: "",
|
||||
@@ -205,8 +206,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_request_count_servlet"),
|
||||
expr: "rate(synapse_http_server_in_flight_requests_count[2m])",
|
||||
name: "[[job]]-[[index]] [[method]] [[servlet]]",
|
||||
expr: "rate(synapse_http_server_request_count:servlet[2m])",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "req/s",
|
||||
@@ -218,8 +219,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_request_count_servlet_minus_events"),
|
||||
expr: "rate(synapse_http_server_in_flight_requests_count{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
|
||||
name: "[[job]]-[[index]] [[method]] [[servlet]]",
|
||||
expr: "rate(synapse_http_server_request_count:servlet{servlet!=\"EventStreamRestServlet\", servlet!=\"SyncRestServlet\"}[2m])",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "req/s",
|
||||
@@ -232,8 +233,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_response_time_avg"),
|
||||
expr: "rate(synapse_http_server_response_time_seconds_sum[2m]) / rate(synapse_http_server_response_count[2m])",
|
||||
name: "[[job]]-[[index]] [[servlet]]",
|
||||
expr: "rate(synapse_http_server_response_time_seconds[2m]) / rate(synapse_http_server_response_count[2m]) / 1000",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "s/req",
|
||||
@@ -276,7 +277,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_response_ru_utime"),
|
||||
expr: "rate(synapse_http_server_response_ru_utime_seconds[2m])",
|
||||
name: "[[job]]-[[index]] [[servlet]]",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "s/s",
|
||||
@@ -291,7 +292,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_response_db_txn_duration"),
|
||||
expr: "rate(synapse_http_server_response_db_txn_duration_seconds[2m])",
|
||||
name: "[[job]]-[[index]] [[servlet]]",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "s/s",
|
||||
@@ -305,8 +306,8 @@ new PromConsole.Graph({
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_http_server_send_time_avg"),
|
||||
expr: "rate(synapse_http_server_response_time_seconds_sum{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m])",
|
||||
name: "[[job]]-[[index]] [[servlet]]",
|
||||
expr: "rate(synapse_http_server_response_time_second{servlet='RoomSendEventRestServlet'}[2m]) / rate(synapse_http_server_response_count{servlet='RoomSendEventRestServlet'}[2m]) / 1000",
|
||||
name: "[[servlet]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "s/req",
|
||||
@@ -322,7 +323,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_federation_client_sent"),
|
||||
expr: "rate(synapse_federation_client_sent[2m])",
|
||||
name: "[[job]]-[[index]] [[type]]",
|
||||
name: "[[type]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "req/s",
|
||||
@@ -336,7 +337,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_federation_server_received"),
|
||||
expr: "rate(synapse_federation_server_received[2m])",
|
||||
name: "[[job]]-[[index]] [[type]]",
|
||||
name: "[[type]]",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "req/s",
|
||||
@@ -366,7 +367,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_notifier_listeners"),
|
||||
expr: "synapse_notifier_listeners",
|
||||
name: "[[job]]-[[index]]",
|
||||
name: "listeners",
|
||||
min: 0,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanizeNoSmallPrefix,
|
||||
@@ -381,7 +382,7 @@ new PromConsole.Graph({
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#synapse_notifier_notified_events"),
|
||||
expr: "rate(synapse_notifier_notified_events[2m])",
|
||||
name: "[[job]]-[[index]]",
|
||||
name: "events",
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yUnits: "events/s",
|
||||
|
||||
12
debian/changelog
vendored
12
debian/changelog
vendored
@@ -1,15 +1,3 @@
|
||||
matrix-synapse-py3 (1.24.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.24.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:14:30 +0000
|
||||
|
||||
matrix-synapse-py3 (1.23.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.23.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 18 Nov 2020 11:41:28 +0000
|
||||
|
||||
matrix-synapse-py3 (1.22.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.22.1.
|
||||
|
||||
@@ -11,7 +11,7 @@
|
||||
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
|
||||
#
|
||||
|
||||
ARG PYTHON_VERSION=3.8
|
||||
ARG PYTHON_VERSION=3.7
|
||||
|
||||
###
|
||||
### Stage 0: builder
|
||||
@@ -36,8 +36,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
|
||||
frozendict \
|
||||
jaeger-client \
|
||||
opentracing \
|
||||
# Match the version constraints of Synapse
|
||||
"prometheus_client>=0.4.0" \
|
||||
prometheus-client \
|
||||
psycopg2 \
|
||||
pycparser \
|
||||
pyrsistent \
|
||||
|
||||
@@ -1,172 +0,0 @@
|
||||
# Show reported events
|
||||
|
||||
This API returns information about reported events.
|
||||
|
||||
The api is:
|
||||
```
|
||||
GET /_synapse/admin/v1/event_reports?from=0&limit=10
|
||||
```
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [README.rst](README.rst).
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```json
|
||||
{
|
||||
"event_reports": [
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"id": 2,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
},
|
||||
{
|
||||
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
|
||||
"id": 3,
|
||||
"reason": "bar",
|
||||
"score": -100,
|
||||
"received_ts": 1598889612059,
|
||||
"canonical_alias": "#alias2:matrix.org",
|
||||
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
|
||||
"name": "Your room name here",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@bar:matrix.org"
|
||||
}
|
||||
],
|
||||
"next_token": 2,
|
||||
"total": 4
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint again with `from`
|
||||
set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more reports to
|
||||
paginate through.
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
* `limit`: integer - Is optional but is used for pagination, denoting the maximum number
|
||||
of items to return in this call. Defaults to `100`.
|
||||
* `from`: integer - Is optional but used for pagination, denoting the offset in the
|
||||
returned results. This should be treated as an opaque value and not explicitly set to
|
||||
anything other than the return value of `next_token` from a previous call. Defaults to `0`.
|
||||
* `dir`: string - Direction of event report order. Whether to fetch the most recent
|
||||
first (`b`) or the oldest first (`f`). Defaults to `b`.
|
||||
* `user_id`: string - Is optional and filters to only return users with user IDs that
|
||||
contain this value. This is the user who reported the event and wrote the reason.
|
||||
* `room_id`: string - Is optional and filters to only return rooms with room IDs that
|
||||
contain this value.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `id`: integer - ID of event report.
|
||||
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
|
||||
report was sent.
|
||||
* `room_id`: string - The ID of the room in which the event being reported is located.
|
||||
* `name`: string - The name of the room.
|
||||
* `event_id`: string - The ID of the reported event.
|
||||
* `user_id`: string - This is the user who reported the event and wrote the reason.
|
||||
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
|
||||
* `score`: integer - Content is reported based upon a negative score, where -100 is
|
||||
"most offensive" and 0 is "inoffensive".
|
||||
* `sender`: string - This is the ID of the user who sent the original message/event that
|
||||
was reported.
|
||||
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
|
||||
have a canonical alias set.
|
||||
* `next_token`: integer - Indication for pagination. See above.
|
||||
* `total`: integer - Total number of event reports related to the query
|
||||
(`user_id` and `room_id`).
|
||||
|
||||
# Show details of a specific event report
|
||||
|
||||
This API returns information about a specific event report.
|
||||
|
||||
The api is:
|
||||
```
|
||||
GET /_synapse/admin/v1/event_reports/<report_id>
|
||||
```
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [README.rst](README.rst).
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```jsonc
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"event_json": {
|
||||
"auth_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
|
||||
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
|
||||
],
|
||||
"content": {
|
||||
"body": "matrix.org: This Week in Matrix",
|
||||
"format": "org.matrix.custom.html",
|
||||
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
|
||||
"msgtype": "m.notice"
|
||||
},
|
||||
"depth": 546,
|
||||
"hashes": {
|
||||
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
|
||||
},
|
||||
"origin": "matrix.org",
|
||||
"origin_server_ts": 1592291711430,
|
||||
"prev_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
|
||||
],
|
||||
"prev_state": [],
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"signatures": {
|
||||
"matrix.org": {
|
||||
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
|
||||
}
|
||||
},
|
||||
"type": "m.room.message",
|
||||
"unsigned": {
|
||||
"age_ts": 1592291711430,
|
||||
}
|
||||
},
|
||||
"id": <report_id>,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
}
|
||||
```
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
* `report_id`: string - The ID of the event report.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `id`: integer - ID of event report.
|
||||
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
|
||||
report was sent.
|
||||
* `room_id`: string - The ID of the room in which the event being reported is located.
|
||||
* `name`: string - The name of the room.
|
||||
* `event_id`: string - The ID of the reported event.
|
||||
* `user_id`: string - This is the user who reported the event and wrote the reason.
|
||||
* `reason`: string - Comment made by the `user_id` in this report. May be blank.
|
||||
* `score`: integer - Content is reported based upon a negative score, where -100 is
|
||||
"most offensive" and 0 is "inoffensive".
|
||||
* `sender`: string - This is the ID of the user who sent the original message/event that
|
||||
was reported.
|
||||
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
|
||||
have a canonical alias set.
|
||||
* `event_json`: object - Details of the original event that was reported.
|
||||
165
docs/admin_api/event_reports.rst
Normal file
165
docs/admin_api/event_reports.rst
Normal file
@@ -0,0 +1,165 @@
|
||||
Show reported events
|
||||
====================
|
||||
|
||||
This API returns information about reported events.
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_synapse/admin/v1/event_reports?from=0&limit=10
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see `README.rst <README.rst>`_.
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
.. code:: jsonc
|
||||
|
||||
{
|
||||
"event_reports": [
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"id": 2,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
},
|
||||
{
|
||||
"event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
|
||||
"id": 3,
|
||||
"reason": "bar",
|
||||
"score": -100,
|
||||
"received_ts": 1598889612059,
|
||||
"canonical_alias": "#alias2:matrix.org",
|
||||
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
|
||||
"name": "Your room name here",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@bar:matrix.org"
|
||||
}
|
||||
],
|
||||
"next_token": 2,
|
||||
"total": 4
|
||||
}
|
||||
|
||||
To paginate, check for ``next_token`` and if present, call the endpoint again
|
||||
with ``from`` set to the value of ``next_token``. This will return a new page.
|
||||
|
||||
If the endpoint does not return a ``next_token`` then there are no more
|
||||
reports to paginate through.
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
- ``limit``: integer - Is optional but is used for pagination,
|
||||
denoting the maximum number of items to return in this call. Defaults to ``100``.
|
||||
- ``from``: integer - Is optional but used for pagination,
|
||||
denoting the offset in the returned results. This should be treated as an opaque value and
|
||||
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
|
||||
Defaults to ``0``.
|
||||
- ``dir``: string - Direction of event report order. Whether to fetch the most recent first (``b``) or the
|
||||
oldest first (``f``). Defaults to ``b``.
|
||||
- ``user_id``: string - Is optional and filters to only return users with user IDs that contain this value.
|
||||
This is the user who reported the event and wrote the reason.
|
||||
- ``room_id``: string - Is optional and filters to only return rooms with room IDs that contain this value.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- ``id``: integer - ID of event report.
|
||||
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
|
||||
- ``room_id``: string - The ID of the room in which the event being reported is located.
|
||||
- ``name``: string - The name of the room.
|
||||
- ``event_id``: string - The ID of the reported event.
|
||||
- ``user_id``: string - This is the user who reported the event and wrote the reason.
|
||||
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
|
||||
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
|
||||
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
|
||||
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
|
||||
- ``next_token``: integer - Indication for pagination. See above.
|
||||
- ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).
|
||||
|
||||
Show details of a specific event report
|
||||
=======================================
|
||||
|
||||
This API returns information about a specific event report.
|
||||
|
||||
The api is::
|
||||
|
||||
GET /_synapse/admin/v1/event_reports/<report_id>
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see `README.rst <README.rst>`_.
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
.. code:: jsonc
|
||||
|
||||
{
|
||||
"event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
|
||||
"event_json": {
|
||||
"auth_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
|
||||
"$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
|
||||
],
|
||||
"content": {
|
||||
"body": "matrix.org: This Week in Matrix",
|
||||
"format": "org.matrix.custom.html",
|
||||
"formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
|
||||
"msgtype": "m.notice"
|
||||
},
|
||||
"depth": 546,
|
||||
"hashes": {
|
||||
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
|
||||
},
|
||||
"origin": "matrix.org",
|
||||
"origin_server_ts": 1592291711430,
|
||||
"prev_events": [
|
||||
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
|
||||
],
|
||||
"prev_state": [],
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"signatures": {
|
||||
"matrix.org": {
|
||||
"ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
|
||||
}
|
||||
},
|
||||
"type": "m.room.message",
|
||||
"unsigned": {
|
||||
"age_ts": 1592291711430,
|
||||
}
|
||||
},
|
||||
"id": <report_id>,
|
||||
"reason": "foo",
|
||||
"score": -100,
|
||||
"received_ts": 1570897107409,
|
||||
"canonical_alias": "#alias1:matrix.org",
|
||||
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
|
||||
"name": "Matrix HQ",
|
||||
"sender": "@foobar:matrix.org",
|
||||
"user_id": "@foo:matrix.org"
|
||||
}
|
||||
|
||||
**URL parameters:**
|
||||
|
||||
- ``report_id``: string - The ID of the event report.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- ``id``: integer - ID of event report.
|
||||
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
|
||||
- ``room_id``: string - The ID of the room in which the event being reported is located.
|
||||
- ``name``: string - The name of the room.
|
||||
- ``event_id``: string - The ID of the reported event.
|
||||
- ``user_id``: string - This is the user who reported the event and wrote the reason.
|
||||
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
|
||||
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
|
||||
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
|
||||
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
|
||||
- ``event_json``: object - Details of the original event that was reported.
|
||||
@@ -1,7 +1,6 @@
|
||||
# List all media in a room
|
||||
|
||||
This API gets a list of known media in a room.
|
||||
However, it only shows media from unencrypted events or rooms.
|
||||
|
||||
The API is:
|
||||
```
|
||||
|
||||
@@ -18,8 +18,7 @@ To fetch the nonce, you need to request one from the API::
|
||||
|
||||
Once you have the nonce, you can make a ``POST`` to the same URL with a JSON
|
||||
body containing the nonce, username, password, whether they are an admin
|
||||
(optional, False by default), and a HMAC digest of the content. Also you can
|
||||
set the displayname (optional, ``username`` by default).
|
||||
(optional, False by default), and a HMAC digest of the content.
|
||||
|
||||
As an example::
|
||||
|
||||
@@ -27,7 +26,6 @@ As an example::
|
||||
> {
|
||||
"nonce": "thisisanonce",
|
||||
"username": "pepper_roni",
|
||||
"displayname": "Pepper Roni",
|
||||
"password": "pizza",
|
||||
"admin": true,
|
||||
"mac": "mac_digest_here"
|
||||
|
||||
@@ -265,10 +265,12 @@ Response:
|
||||
Once the `next_token` parameter is no longer present, we know we've reached the
|
||||
end of the list.
|
||||
|
||||
# Room Details API
|
||||
# DRAFT: Room Details API
|
||||
|
||||
The Room Details admin API allows server admins to get all details of a room.
|
||||
|
||||
This API is still a draft and details might change!
|
||||
|
||||
The following fields are possible in the JSON response body:
|
||||
|
||||
* `room_id` - The ID of the room.
|
||||
@@ -382,7 +384,7 @@ the new room. Users on other servers will be unaffected.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
```json
|
||||
POST /_synapse/admin/v1/rooms/<room_id>/delete
|
||||
```
|
||||
|
||||
@@ -439,10 +441,6 @@ The following JSON body parameters are available:
|
||||
future attempts to join the room. Defaults to `false`.
|
||||
* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
|
||||
Defaults to `true`.
|
||||
* `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
|
||||
will force a purge to go ahead even if there are local users still in the room. Do not
|
||||
use this unless a regular `purge` operation fails, as it could leave those users'
|
||||
clients in a confused state.
|
||||
|
||||
The JSON body must not be empty. The body must be at least `{}`.
|
||||
|
||||
|
||||
@@ -1,83 +0,0 @@
|
||||
# Users' media usage statistics
|
||||
|
||||
Returns information about all local media usage of users. Gives the
|
||||
possibility to filter them by time and user.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/statistics/users/media
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [README.rst](README.rst).
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"users": [
|
||||
{
|
||||
"displayname": "foo_user_0",
|
||||
"media_count": 2,
|
||||
"media_length": 134,
|
||||
"user_id": "@foo_user_0:test"
|
||||
},
|
||||
{
|
||||
"displayname": "foo_user_1",
|
||||
"media_count": 2,
|
||||
"media_length": 134,
|
||||
"user_id": "@foo_user_1:test"
|
||||
}
|
||||
],
|
||||
"next_token": 3,
|
||||
"total": 10
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint
|
||||
again with `from` set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more
|
||||
reports to paginate through.
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
* `limit`: string representing a positive integer - Is optional but is
|
||||
used for pagination, denoting the maximum number of items to return
|
||||
in this call. Defaults to `100`.
|
||||
* `from`: string representing a positive integer - Is optional but used for pagination,
|
||||
denoting the offset in the returned results. This should be treated as an opaque value
|
||||
and not explicitly set to anything other than the return value of `next_token` from a
|
||||
previous call. Defaults to `0`.
|
||||
* `order_by` - string - The method in which to sort the returned list of users. Valid values are:
|
||||
- `user_id` - Users are ordered alphabetically by `user_id`. This is the default.
|
||||
- `displayname` - Users are ordered alphabetically by `displayname`.
|
||||
- `media_length` - Users are ordered by the total size of uploaded media in bytes.
|
||||
Smallest to largest.
|
||||
- `media_count` - Users are ordered by number of uploaded media. Smallest to largest.
|
||||
* `from_ts` - string representing a positive integer - Considers only
|
||||
files created at this timestamp or later. Unix timestamp in ms.
|
||||
* `until_ts` - string representing a positive integer - Considers only
|
||||
files created at this timestamp or earlier. Unix timestamp in ms.
|
||||
* `search_term` - string - Filter users by their user ID localpart **or** displayname.
|
||||
The search term can be found in any part of the string.
|
||||
Defaults to no filtering.
|
||||
* `dir` - string - Direction of order. Either `f` for forwards or `b` for backwards.
|
||||
Setting this value to `b` will reverse the above sort order. Defaults to `f`.
|
||||
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
* `users` - An array of objects, each containing information
|
||||
about the user and their local media. Objects contain the following fields:
|
||||
- `displayname` - string - Displayname of this user.
|
||||
- `media_count` - integer - Number of uploaded media by this user.
|
||||
- `media_length` - integer - Size of uploaded media in bytes by this user.
|
||||
- `user_id` - string - Fully-qualified user ID (ex. `@user:server.com`).
|
||||
* `next_token` - integer - Opaque value used for pagination. See above.
|
||||
* `total` - integer - Total number of users after filtering.
|
||||
@@ -176,13 +176,6 @@ The api is::
|
||||
|
||||
GET /_synapse/admin/v1/whois/<user_id>
|
||||
|
||||
and::
|
||||
|
||||
GET /_matrix/client/r0/admin/whois/<userId>
|
||||
|
||||
See also: `Client Server API Whois
|
||||
<https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid>`_
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see `README.rst <README.rst>`_.
|
||||
|
||||
@@ -261,7 +254,7 @@ with a body of:
|
||||
|
||||
{
|
||||
"new_password": "<secret>",
|
||||
"logout_devices": true
|
||||
"logout_devices": true,
|
||||
}
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
@@ -431,41 +424,6 @@ The following fields are returned in the JSON response body:
|
||||
- ``next_token``: integer - Indication for pagination. See above.
|
||||
- ``total`` - integer - Total number of media.
|
||||
|
||||
Login as a user
|
||||
===============
|
||||
|
||||
Get an access token that can be used to authenticate as that user. Useful for
|
||||
when admins wish to do actions on behalf of a user.
|
||||
|
||||
The API is::
|
||||
|
||||
POST /_synapse/admin/v1/users/<user_id>/login
|
||||
{}
|
||||
|
||||
An optional ``valid_until_ms`` field can be specified in the request body as an
|
||||
integer timestamp that specifies when the token should expire. By default tokens
|
||||
do not expire.
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
.. code:: json
|
||||
|
||||
{
|
||||
"access_token": "<opaque_access_token_string>"
|
||||
}
|
||||
|
||||
|
||||
This API does *not* generate a new device for the user, and so will not appear
|
||||
their ``/devices`` list, and in general the target user should not be able to
|
||||
tell they have been logged in as.
|
||||
|
||||
To expire the token call the standard ``/logout`` API with the token.
|
||||
|
||||
Note: The token will expire if the *admin* user calls ``/logout/all`` from any
|
||||
of their devices, but the token will *not* expire if the target user does the
|
||||
same.
|
||||
|
||||
|
||||
User devices
|
||||
============
|
||||
|
||||
|
||||
@@ -13,12 +13,10 @@
|
||||
can be enabled by adding the \"metrics\" resource to the existing
|
||||
listener as such:
|
||||
|
||||
```yaml
|
||||
resources:
|
||||
- names:
|
||||
- client
|
||||
- metrics
|
||||
```
|
||||
resources:
|
||||
- names:
|
||||
- client
|
||||
- metrics
|
||||
|
||||
This provides a simple way of adding metrics to your Synapse
|
||||
installation, and serves under `/_synapse/metrics`. If you do not
|
||||
@@ -33,13 +31,11 @@
|
||||
|
||||
Add a new listener to homeserver.yaml:
|
||||
|
||||
```yaml
|
||||
listeners:
|
||||
- type: metrics
|
||||
port: 9000
|
||||
bind_addresses:
|
||||
- '0.0.0.0'
|
||||
```
|
||||
listeners:
|
||||
- type: metrics
|
||||
port: 9000
|
||||
bind_addresses:
|
||||
- '0.0.0.0'
|
||||
|
||||
For both options, you will need to ensure that `enable_metrics` is
|
||||
set to `True`.
|
||||
@@ -51,13 +47,10 @@
|
||||
It needs to set the `metrics_path` to a non-default value (under
|
||||
`scrape_configs`):
|
||||
|
||||
```yaml
|
||||
- job_name: "synapse"
|
||||
scrape_interval: 15s
|
||||
metrics_path: "/_synapse/metrics"
|
||||
static_configs:
|
||||
- targets: ["my.server.here:port"]
|
||||
```
|
||||
- job_name: "synapse"
|
||||
metrics_path: "/_synapse/metrics"
|
||||
static_configs:
|
||||
- targets: ["my.server.here:port"]
|
||||
|
||||
where `my.server.here` is the IP address of Synapse, and `port` is
|
||||
the listener port configured with the `metrics` resource.
|
||||
@@ -67,8 +60,7 @@
|
||||
|
||||
1. Restart Prometheus.
|
||||
|
||||
1. Consider using the [grafana dashboard](https://github.com/matrix-org/synapse/tree/master/contrib/grafana/)
|
||||
and required [recording rules](https://github.com/matrix-org/synapse/tree/master/contrib/prometheus/)
|
||||
1. Consider using the [grafana dashboard](https://github.com/matrix-org/synapse/tree/master/contrib/grafana/) and required [recording rules](https://github.com/matrix-org/synapse/tree/master/contrib/prometheus/)
|
||||
|
||||
## Monitoring workers
|
||||
|
||||
@@ -84,9 +76,9 @@ To allow collecting metrics from a worker, you need to add a
|
||||
under `worker_listeners`:
|
||||
|
||||
```yaml
|
||||
- type: metrics
|
||||
bind_address: ''
|
||||
port: 9101
|
||||
- type: metrics
|
||||
bind_address: ''
|
||||
port: 9101
|
||||
```
|
||||
|
||||
The `bind_address` and `port` parameters should be set so that
|
||||
@@ -95,38 +87,6 @@ don't clash with an existing worker.
|
||||
With this example, the worker's metrics would then be available
|
||||
on `http://127.0.0.1:9101`.
|
||||
|
||||
Example Prometheus target for Synapse with workers:
|
||||
|
||||
```yaml
|
||||
- job_name: "synapse"
|
||||
scrape_interval: 15s
|
||||
metrics_path: "/_synapse/metrics"
|
||||
static_configs:
|
||||
- targets: ["my.server.here:port"]
|
||||
labels:
|
||||
instance: "my.server"
|
||||
job: "master"
|
||||
index: 1
|
||||
- targets: ["my.workerserver.here:port"]
|
||||
labels:
|
||||
instance: "my.server"
|
||||
job: "generic_worker"
|
||||
index: 1
|
||||
- targets: ["my.workerserver.here:port"]
|
||||
labels:
|
||||
instance: "my.server"
|
||||
job: "generic_worker"
|
||||
index: 2
|
||||
- targets: ["my.workerserver.here:port"]
|
||||
labels:
|
||||
instance: "my.server"
|
||||
job: "media_repository"
|
||||
index: 1
|
||||
```
|
||||
|
||||
Labels (`instance`, `job`, `index`) can be defined as anything.
|
||||
The labels are used to group graphs in grafana.
|
||||
|
||||
## Renaming of metrics & deprecation of old names in 1.2
|
||||
|
||||
Synapse 1.2 updates the Prometheus metrics to match the naming
|
||||
|
||||
@@ -205,7 +205,7 @@ GitHub is a bit special as it is not an OpenID Connect compliant provider, but
|
||||
just a regular OAuth2 provider.
|
||||
|
||||
The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
|
||||
can be used to retrieve information on the authenticated user. As the Synapse
|
||||
can be used to retrieve information on the authenticated user. As the Synaspse
|
||||
login mechanism needs an attribute to uniquely identify users, and that endpoint
|
||||
does not return a `sub` property, an alternative `subject_claim` has to be set.
|
||||
|
||||
|
||||
@@ -26,7 +26,6 @@ Password auth provider classes must provide the following methods:
|
||||
|
||||
It should perform any appropriate sanity checks on the provided
|
||||
configuration, and return an object which is then passed into
|
||||
`__init__`.
|
||||
|
||||
This method should have the `@staticmethod` decoration.
|
||||
|
||||
|
||||
@@ -1230,9 +1230,8 @@ account_validity:
|
||||
# email will be globally disabled.
|
||||
#
|
||||
# Additionally, if `msisdn` is not set, registration and password resets via msisdn
|
||||
# will be disabled regardless, and users will not be able to associate an msisdn
|
||||
# identifier to their account. This is due to Synapse currently not supporting
|
||||
# any method of sending SMS messages on its own.
|
||||
# will be disabled regardless. This is due to Synapse currently not supporting any
|
||||
# method of sending SMS messages on its own.
|
||||
#
|
||||
# To enable using an identity server for operations regarding a particular third-party
|
||||
# identifier type, set the value to the URL of that identity server as shown in the
|
||||
@@ -1546,12 +1545,6 @@ saml2_config:
|
||||
# remote:
|
||||
# - url: https://our_idp/metadata.xml
|
||||
|
||||
# Allowed clock difference in seconds between the homeserver and IdP.
|
||||
#
|
||||
# Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
|
||||
#
|
||||
#accepted_time_diff: 3
|
||||
|
||||
# By default, the user has to go to our login page first. If you'd like
|
||||
# to allow IdP-initiated login, set 'allow_unsolicited: true' in a
|
||||
# 'service.sp' section:
|
||||
@@ -1567,28 +1560,6 @@ saml2_config:
|
||||
#description: ["My awesome SP", "en"]
|
||||
#name: ["Test SP", "en"]
|
||||
|
||||
#ui_info:
|
||||
# display_name:
|
||||
# - lang: en
|
||||
# text: "Display Name is the descriptive name of your service."
|
||||
# description:
|
||||
# - lang: en
|
||||
# text: "Description should be a short paragraph explaining the purpose of the service."
|
||||
# information_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/terms-of-service"
|
||||
# privacy_statement_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/privacy-policy"
|
||||
# keywords:
|
||||
# - lang: en
|
||||
# text: ["Matrix", "Element"]
|
||||
# logo:
|
||||
# - lang: en
|
||||
# text: "https://example.com/logo.svg"
|
||||
# width: "200"
|
||||
# height: "80"
|
||||
|
||||
#organization:
|
||||
# name: Example com
|
||||
# display_name:
|
||||
@@ -1674,14 +1645,6 @@ saml2_config:
|
||||
# - attribute: department
|
||||
# value: "sales"
|
||||
|
||||
# If the metadata XML contains multiple IdP entities then the `idp_entityid`
|
||||
# option must be set to the entity to redirect users to.
|
||||
#
|
||||
# Most deployments only have a single IdP entity and so should omit this
|
||||
# option.
|
||||
#
|
||||
#idp_entityid: 'https://our_idp/entityid'
|
||||
|
||||
|
||||
# Enable OpenID Connect (OIDC) / OAuth 2.0 for registration and login.
|
||||
#
|
||||
@@ -2251,35 +2214,20 @@ password_providers:
|
||||
|
||||
|
||||
|
||||
## Push ##
|
||||
|
||||
push:
|
||||
# Clients requesting push notifications can either have the body of
|
||||
# the message sent in the notification poke along with other details
|
||||
# like the sender, or just the event ID and room ID (`event_id_only`).
|
||||
# If clients choose the former, this option controls whether the
|
||||
# notification request includes the content of the event (other details
|
||||
# like the sender are still included). For `event_id_only` push, it
|
||||
# has no effect.
|
||||
#
|
||||
# For modern android devices the notification content will still appear
|
||||
# because it is loaded by the app. iPhone, however will send a
|
||||
# notification saying only that a message arrived and who it came from.
|
||||
#
|
||||
# The default value is "true" to include message details. Uncomment to only
|
||||
# include the event ID and room ID in push notification payloads.
|
||||
#
|
||||
#include_content: false
|
||||
|
||||
# When a push notification is received, an unread count is also sent.
|
||||
# This number can either be calculated as the number of unread messages
|
||||
# for the user, or the number of *rooms* the user has unread messages in.
|
||||
#
|
||||
# The default value is "true", meaning push clients will see the number of
|
||||
# rooms with unread messages in them. Uncomment to instead send the number
|
||||
# of unread messages.
|
||||
#
|
||||
#group_unread_count_by_room: false
|
||||
# Clients requesting push notifications can either have the body of
|
||||
# the message sent in the notification poke along with other details
|
||||
# like the sender, or just the event ID and room ID (`event_id_only`).
|
||||
# If clients choose the former, this option controls whether the
|
||||
# notification request includes the content of the event (other details
|
||||
# like the sender are still included). For `event_id_only` push, it
|
||||
# has no effect.
|
||||
#
|
||||
# For modern android devices the notification content will still appear
|
||||
# because it is loaded by the app. iPhone, however will send a
|
||||
# notification saying only that a message arrived and who it came from.
|
||||
#
|
||||
#push:
|
||||
# include_content: true
|
||||
|
||||
|
||||
# Spam checkers are third-party modules that can block specific actions
|
||||
|
||||
@@ -15,15 +15,8 @@ where SAML mapping providers come into play.
|
||||
SSO mapping providers are currently supported for OpenID and SAML SSO
|
||||
configurations. Please see the details below for how to implement your own.
|
||||
|
||||
It is the responsibility of the mapping provider to normalise the SSO attributes
|
||||
and map them to a valid Matrix ID. The
|
||||
[specification for Matrix IDs](https://matrix.org/docs/spec/appendices#user-identifiers)
|
||||
has some information about what is considered valid. Alternately an easy way to
|
||||
ensure it is valid is to use a Synapse utility function:
|
||||
`synapse.types.map_username_to_mxid_localpart`.
|
||||
|
||||
External mapping providers are provided to Synapse in the form of an external
|
||||
Python module. You can retrieve this module from [PyPI](https://pypi.org) or elsewhere,
|
||||
Python module. You can retrieve this module from [PyPi](https://pypi.org) or elsewhere,
|
||||
but it must be importable via Synapse (e.g. it must be in the same virtualenv
|
||||
as Synapse). The Synapse config is then modified to point to the mapping provider
|
||||
(and optionally provide additional configuration for it).
|
||||
@@ -63,22 +56,13 @@ A custom mapping provider must specify the following methods:
|
||||
information from.
|
||||
- This method must return a string, which is the unique identifier for the
|
||||
user. Commonly the ``sub`` claim of the response.
|
||||
* `map_user_attributes(self, userinfo, token, failures)`
|
||||
* `map_user_attributes(self, userinfo, token)`
|
||||
- This method must be async.
|
||||
- Arguments:
|
||||
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
|
||||
information from.
|
||||
- `token` - A dictionary which includes information necessary to make
|
||||
further requests to the OpenID provider.
|
||||
- `failures` - An `int` that represents the amount of times the returned
|
||||
mxid localpart mapping has failed. This should be used
|
||||
to create a deduplicated mxid localpart which should be
|
||||
returned instead. For example, if this method returns
|
||||
`john.doe` as the value of `localpart` in the returned
|
||||
dict, and that is already taken on the homeserver, this
|
||||
method will be called again with the same parameters but
|
||||
with failures=1. The method should then return a different
|
||||
`localpart` value, such as `john.doe1`.
|
||||
- Returns a dictionary with two keys:
|
||||
- localpart: A required string, used to generate the Matrix ID.
|
||||
- displayname: An optional string, the display name for the user.
|
||||
@@ -168,13 +152,6 @@ A custom mapping provider must specify the following methods:
|
||||
the value of `mxid_localpart`.
|
||||
* `emails` - A list of emails for the new user. If not provided, will
|
||||
default to an empty list.
|
||||
|
||||
Alternatively it can raise a `synapse.api.errors.RedirectException` to
|
||||
redirect the user to another page. This is useful to prompt the user for
|
||||
additional information, e.g. if you want them to provide their own username.
|
||||
It is the responsibility of the mapping provider to either redirect back
|
||||
to `client_redirect_url` (including any additional information) or to
|
||||
complete registration using methods from the `ModuleApi`.
|
||||
|
||||
### Default SAML Mapping Provider
|
||||
|
||||
|
||||
@@ -37,10 +37,10 @@ synapse master process to be started as part of the `matrix-synapse.target`
|
||||
target.
|
||||
1. For each worker process to be enabled, run `systemctl enable
|
||||
matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
|
||||
should be a corresponding configuration file.
|
||||
should be a corresponding configuration file
|
||||
`/etc/matrix-synapse/workers/<worker_name>.yaml`.
|
||||
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
|
||||
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
|
||||
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`/
|
||||
|
||||
## Usage
|
||||
|
||||
|
||||
@@ -42,10 +42,10 @@ This will install and start a systemd service called `coturn`.
|
||||
|
||||
./configure
|
||||
|
||||
You may need to install `libevent2`: if so, you should do so in
|
||||
the way recommended by your operating system. You can ignore
|
||||
warnings about lack of database support: a database is unnecessary
|
||||
for this purpose.
|
||||
> You may need to install `libevent2`: if so, you should do so in
|
||||
> the way recommended by your operating system. You can ignore
|
||||
> warnings about lack of database support: a database is unnecessary
|
||||
> for this purpose.
|
||||
|
||||
1. Build and install it:
|
||||
|
||||
@@ -66,19 +66,6 @@ This will install and start a systemd service called `coturn`.
|
||||
|
||||
pwgen -s 64 1
|
||||
|
||||
A `realm` must be specified, but its value is somewhat arbitrary. (It is
|
||||
sent to clients as part of the authentication flow.) It is conventional to
|
||||
set it to be your server name.
|
||||
|
||||
1. You will most likely want to configure coturn to write logs somewhere. The
|
||||
easiest way is normally to send them to the syslog:
|
||||
|
||||
syslog
|
||||
|
||||
(in which case, the logs will be available via `journalctl -u coturn` on a
|
||||
systemd system). Alternatively, coturn can be configured to write to a
|
||||
logfile - check the example config file supplied with coturn.
|
||||
|
||||
1. Consider your security settings. TURN lets users request a relay which will
|
||||
connect to arbitrary IP addresses and ports. The following configuration is
|
||||
suggested as a minimum starting point:
|
||||
@@ -109,31 +96,11 @@ This will install and start a systemd service called `coturn`.
        # TLS private key file
        pkey=/path/to/privkey.pem

   In this case, replace the `turn:` schemes in the `turn_uri` settings below
   with `turns:`.

   We recommend that you only try to set up TLS/DTLS once you have set up a
   basic installation and got it working.

1. Ensure your firewall allows traffic into the TURN server on the ports
   you've configured it to listen on (By default: 3478 and 5349 for the TURN(s)
   traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
   for the UDP relay.)

1. We do not recommend running a TURN server behind NAT, and are not aware of
   anyone doing so successfully.

   If you want to try it anyway, you will at least need to tell coturn its
   external IP address:

        external-ip=192.88.99.1

   ... and your NAT gateway must forward all of the relayed ports directly
   (eg, port 56789 on the external IP must always be forwarded to port
   56789 on the internal IP).

   If you get this working, let us know!

1. (Re)start the turn server:

   * If you used the Debian package (or have set up a systemd unit yourself):
@@ -170,10 +137,9 @@ Your home server configuration file needs the following extra keys:
   without having gone through a CAPTCHA or similar to register a
   real account.

As an example, here is the relevant section of the config file for `matrix.org`. The
`turn_uris` are appropriate for TURN servers listening on the default ports, with no TLS.

    turn_uris: [ "turn:turn.matrix.org?transport=udp", "turn:turn.matrix.org?transport=tcp" ]
    turn_shared_secret: "n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
    turn_user_lifetime: 86400000
    turn_allow_guests: True
@@ -189,86 +155,5 @@ After updating the homeserver configuration, you must restart synapse:

```
systemctl restart synapse.service
```

... and then reload any clients (or wait an hour for them to refresh their
settings).

## Troubleshooting

The normal symptoms of a misconfigured TURN server are that calls between
devices on different networks ring, but get stuck at "call
connecting". Unfortunately, troubleshooting this can be tricky.

Here are a few things to try:

* Check that your TURN server is not behind NAT. As above, we're not aware of
  anyone who has successfully set this up.

* Check that you have opened your firewall to allow TCP and UDP traffic to the
  TURN ports (normally 3478 and 5349).

* Check that you have opened your firewall to allow UDP traffic to the UDP
  relay ports (49152-65535 by default).
* Some WebRTC implementations (notably, that of Google Chrome) appear to get
  confused by TURN servers which are reachable over IPv6 (this appears to be
  an unexpected side-effect of its handling of multiple IP addresses as
  defined by
  [`draft-ietf-rtcweb-ip-handling`](https://tools.ietf.org/html/draft-ietf-rtcweb-ip-handling-12)).

  Try removing any AAAA records for your TURN server, so that it is only
  reachable over IPv4.

* Enable more verbose logging in coturn via the `verbose` setting:

  ```
  verbose
  ```

  ... and then see if there are any clues in its logs.

* If you are using a browser-based client under Chrome, check
  `chrome://webrtc-internals/` for insights into the internals of the
  negotiation. On Firefox, check the "Connection Log" on `about:webrtc`.

  (Understanding the output is beyond the scope of this document!)
* There is a WebRTC test tool at
  https://webrtc.github.io/samples/src/content/peerconnection/trickle-ice/. To
  use it, you will need a username/password for your TURN server. You can
  either:

  * look for the `GET /_matrix/client/r0/voip/turnServer` request made by a
    matrix client to your homeserver in your browser's network inspector. In
    the response you should see `username` and `password`. Or:

  * Use the following shell commands (a Python equivalent of this credential
    calculation is sketched after this section):

    ```sh
    secret=staticAuthSecretHere

    u=$((`date +%s` + 3600)):test
    p=$(echo -n $u | openssl dgst -hmac $secret -sha1 -binary | base64)
    echo -e "username: $u\npassword: $p"
    ```
    Or:

  * Temporarily configure coturn to accept a static username/password. To do
    this, comment out `use-auth-secret` and `static-auth-secret` and add the
    following:

    ```
    lt-cred-mech
    user=username:password
    ```

    **Note**: these settings will not take effect unless `use-auth-secret`
    and `static-auth-secret` are disabled.

    Restart coturn after changing the configuration file.

    Remember to restore the original settings to go back to testing with
    Matrix clients!

  If the TURN server is working correctly, you should see at least one `relay`
  entry in the results.
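For reference, here is a minimal Python sketch of the same TURN REST API credential calculation performed by the shell snippet above (HMAC-SHA1 of `<expiry-timestamp>:<user>` keyed with the shared secret, base64-encoded). The secret and user values are placeholders.

```python
# Minimal sketch of coturn's use-auth-secret credential scheme:
# username = "<expiry-timestamp>:<user>", password = base64(HMAC-SHA1(secret, username)).
# The secret and user below are placeholders.
import base64
import hashlib
import hmac
import time


def turn_credentials(shared_secret: str, user: str = "test", ttl: int = 3600):
    username = "%d:%s" % (int(time.time()) + ttl, user)
    digest = hmac.new(
        shared_secret.encode("ascii"), username.encode("ascii"), hashlib.sha1
    ).digest()
    password = base64.b64encode(digest).decode("ascii")
    return username, password


if __name__ == "__main__":
    u, p = turn_credentials("staticAuthSecretHere")
    print("username: %s\npassword: %s" % (u, p))
```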
@@ -116,7 +116,7 @@ public internet; it has no authentication and is unencrypted.
### Worker configuration

In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
@@ -262,9 +262,6 @@ using):
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.

Ensure that all SSO logins go to a single process (usually the main process).
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530).

#### Load balancing
@@ -305,7 +302,7 @@ Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)

Currently supported streams are `events` and `typing`.

To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
@@ -322,18 +319,6 @@ stream_writers:
  events: event_persister1
```

The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:

```yaml
stream_writers:
    events:
        - event_persister1
        - event_persister2
```
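To make the sharding idea above concrete, here is a small, hypothetical illustration of mapping a room ID deterministically onto one of the configured event persisters (stable hashing over the writer list). This is only a sketch of the general technique, not Synapse's actual algorithm; the instance names come from the example configuration.

```python
# Hypothetical sketch: pick an event writer for a room by hashing the room ID
# over the configured writer list. Illustrative only; not Synapse's own code.
import hashlib
from typing import List


def pick_event_writer(room_id: str, writers: List[str]) -> str:
    # Use a stable hash (not Python's randomized hash()) so every process
    # agrees on the assignment across restarts.
    digest = hashlib.sha256(room_id.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % len(writers)
    return writers[index]


writers = ["event_persister1", "event_persister2"]
print(pick_event_writer("!someroom:example.com", writers))
```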
#### Background tasks
|
||||
|
||||
There is also *experimental* support for moving background tasks to a separate
|
||||
@@ -423,8 +408,6 @@ and you must configure a single instance to run the background tasks, e.g.:
|
||||
media_instance_running_background_jobs: "media-repository-1"
|
||||
```
|
||||
|
||||
Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for both inbound client and federation requests (if they are handled separately).
|
||||
|
||||
### `synapse.app.user_dir`
|
||||
|
||||
Handles searches in the user directory. It can handle REST endpoints matching
|
||||
|
||||
mypy.ini (9 lines changed)
@@ -8,13 +8,11 @@ show_traceback = True
|
||||
mypy_path = stubs
|
||||
warn_unreachable = True
|
||||
files =
|
||||
scripts-dev/sign_json,
|
||||
synapse/api,
|
||||
synapse/appservice,
|
||||
synapse/config,
|
||||
synapse/event_auth.py,
|
||||
synapse/events/builder.py,
|
||||
synapse/events/validator.py,
|
||||
synapse/events/spamcheck.py,
|
||||
synapse/federation,
|
||||
synapse/handlers/_base.py,
|
||||
@@ -38,17 +36,13 @@ files =
|
||||
synapse/handlers/presence.py,
|
||||
synapse/handlers/profile.py,
|
||||
synapse/handlers/read_marker.py,
|
||||
synapse/handlers/register.py,
|
||||
synapse/handlers/room.py,
|
||||
synapse/handlers/room_member.py,
|
||||
synapse/handlers/room_member_worker.py,
|
||||
synapse/handlers/saml_handler.py,
|
||||
synapse/handlers/sync.py,
|
||||
synapse/handlers/ui_auth,
|
||||
synapse/http/client.py,
|
||||
synapse/http/federation/matrix_federation_agent.py,
|
||||
synapse/http/federation/well_known_resolver.py,
|
||||
synapse/http/matrixfederationclient.py,
|
||||
synapse/http/server.py,
|
||||
synapse/http/site.py,
|
||||
synapse/logging,
|
||||
@@ -80,7 +74,6 @@ files =
|
||||
synapse/util/metrics.py,
|
||||
tests/replication,
|
||||
tests/test_utils,
|
||||
tests/handlers/test_password_providers.py,
|
||||
tests/rest/client/v2_alpha/test_auth.py,
|
||||
tests/util/test_stream_change_cache.py
|
||||
|
||||
@@ -111,7 +104,7 @@ ignore_missing_imports = True
|
||||
[mypy-opentracing]
|
||||
ignore_missing_imports = True
|
||||
|
||||
[mypy-OpenSSL.*]
|
||||
[mypy-OpenSSL]
|
||||
ignore_missing_imports = True
|
||||
|
||||
[mypy-netaddr]
|
||||
|
||||
scripts-dev/release.py (new file, 227 lines)
@@ -0,0 +1,227 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import subprocess
|
||||
import sys
|
||||
from typing import Optional
|
||||
|
||||
import click
|
||||
import git
|
||||
from packaging import version
|
||||
from redbaron import RedBaron
|
||||
|
||||
|
||||
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
|
||||
"""Find the branch/ref, looking first locally then in the remote.
|
||||
"""
|
||||
if ref_name in repo.refs:
|
||||
return repo.refs[ref_name]
|
||||
elif ref_name in repo.remote().refs:
|
||||
return repo.remote().refs[ref_name]
|
||||
else:
|
||||
return None
|
||||
|
||||
|
||||
def update_branch(repo: git.Repo):
|
||||
"""Ensure branch is up to date if it has a remote
|
||||
"""
|
||||
if repo.active_branch.tracking_branch():
|
||||
repo.git.merge(repo.active_branch.tracking_branch().name)
|
||||
|
||||
|
||||
@click.command()
|
||||
def release():
|
||||
"""Main release command
|
||||
"""
|
||||
|
||||
# Make sure we're in a git repo.
|
||||
try:
|
||||
repo = git.Repo()
|
||||
except git.InvalidGitRepositoryError:
|
||||
raise click.ClickException("Not in Synapse repo.")
|
||||
|
||||
if repo.is_dirty():
|
||||
raise click.ClickException("Uncommitted changes exist.")
|
||||
|
||||
click.secho("Updating git repo...")
|
||||
repo.remote().fetch()
|
||||
|
||||
# Parse the AST and load the `__version__` node so that we can edit it
|
||||
# later.
|
||||
with open("synapse/__init__.py") as f:
|
||||
red = RedBaron(f.read())
|
||||
|
||||
version_node = None
|
||||
for node in red:
|
||||
if node.type != "assignment":
|
||||
continue
|
||||
|
||||
if node.target.type != "name":
|
||||
continue
|
||||
|
||||
if node.target.value != "__version__":
|
||||
continue
|
||||
|
||||
version_node = node
|
||||
break
|
||||
|
||||
if not version_node:
|
||||
print("Failed to find '__version__' definition in synapse/__init__.py")
|
||||
sys.exit(1)
|
||||
|
||||
# Parse the current version.
|
||||
current_version = version.parse(version_node.value.value.strip('"'))
|
||||
assert isinstance(current_version, version.Version)
|
||||
|
||||
# Figure out what sort of release we're doing and calculate the new version.
|
||||
rc = click.confirm("RC", default=True)
|
||||
if current_version.pre:
|
||||
# If the current version is an RC we don't need to bump any of the
|
||||
# version numbers (other than the RC number).
|
||||
base_version = "{}.{}.{}".format(
|
||||
current_version.major, current_version.minor, current_version.micro,
|
||||
)
|
||||
|
||||
if rc:
|
||||
new_version = "{}.{}.{}rc{}".format(
|
||||
current_version.major,
|
||||
current_version.minor,
|
||||
current_version.micro,
|
||||
current_version.pre[1] + 1,
|
||||
)
|
||||
else:
|
||||
new_version = base_version
|
||||
else:
|
||||
# If this is a new release cycle then we need to know if its a major
|
||||
# version bump or a hotfix.
|
||||
release_type = click.prompt(
|
||||
"Release type",
|
||||
type=click.Choice(("major", "hotfix")),
|
||||
show_choices=True,
|
||||
default="major",
|
||||
)
|
||||
|
||||
if release_type == "major":
|
||||
base_version = new_version = "{}.{}.{}".format(
|
||||
current_version.major, current_version.minor + 1, 0,
|
||||
)
|
||||
if rc:
|
||||
new_version = "{}.{}.{}rc1".format(
|
||||
current_version.major, current_version.minor + 1, 0,
|
||||
)
|
||||
|
||||
else:
|
||||
base_version = new_version = "{}.{}.{}".format(
|
||||
current_version.major, current_version.minor, current_version.micro + 1,
|
||||
)
|
||||
if rc:
|
||||
new_version = "{}.{}.{}rc1".format(
|
||||
current_version.major,
|
||||
current_version.minor,
|
||||
current_version.micro + 1,
|
||||
)
|
||||
|
||||
# Confirm the calculated version is OK.
|
||||
if not click.confirm(f"Create new version: {new_version}?", default=True):
|
||||
click.get_current_context().abort()
|
||||
|
||||
# Switch to the release branch.
|
||||
release_branch_name = f"release-v{base_version}"
|
||||
release_branch = find_ref(repo, release_branch_name)
|
||||
if release_branch:
|
||||
if release_branch.is_remote():
|
||||
# If the release branch only exists on the remote we check it out
|
||||
# locally.
|
||||
repo.git.checkout(release_branch_name)
|
||||
release_branch = repo.active_branch
|
||||
else:
|
||||
# If a branch doesn't exist we create one. We ask which branch it
|
||||
# should be based off, defaulting to sensible values depending on the
|
||||
# release type.
|
||||
if current_version.is_prerelease:
|
||||
default = release_branch_name
|
||||
elif release_type == "major":
|
||||
default = "develop"
|
||||
else:
|
||||
default = "master"
|
||||
|
||||
branch_name = click.prompt(
|
||||
"Which branch should release be based off of?", default=default
|
||||
)
|
||||
|
||||
base_branch = find_ref(repo, branch_name)
|
||||
if not base_branch:
|
||||
print(f"Could not find base branch {branch_name}!")
|
||||
click.get_current_context().abort()
|
||||
|
||||
# Checkout the base branch and ensure its up to date
|
||||
repo.head.reference = base_branch
|
||||
repo.head.reset(index=True, working_tree=True)
|
||||
if not base_branch.is_remote():
|
||||
update_branch(repo)
|
||||
|
||||
# Create the new release branch
|
||||
release_branch = repo.create_head(release_branch_name, commit=base_branch)
|
||||
|
||||
# Switch to the release branch and ensure its up to date.
|
||||
repo.git.checkout(release_branch_name)
|
||||
update_branch(repo)
|
||||
|
||||
# Update the `__version__` variable and write it back to the file.
|
||||
version_node.value = '"' + new_version + '"'
|
||||
with open("synapse/__init__.py", "w") as f:
|
||||
f.write(red.dumps())
|
||||
|
||||
# Generate changelogs
|
||||
subprocess.run("python3 -m towncrier", shell=True)
|
||||
|
||||
# Generate debian changelogs if its not an RC.
|
||||
if not rc:
|
||||
subprocess.run(
|
||||
f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
|
||||
)
|
||||
subprocess.run('dch -M -r -D stable ""', shell=True)
|
||||
|
||||
# Show the user the changes and ask if they want to edit the change log.
|
||||
repo.git.add("-u")
|
||||
subprocess.run("git diff --cached", shell=True)
|
||||
|
||||
if click.confirm("Edit changelog?", default=False):
|
||||
click.edit(filename="CHANGES.md")
|
||||
|
||||
# Commit the changes.
|
||||
repo.git.add("-u")
|
||||
repo.git.commit(f"-m {new_version}")
|
||||
|
||||
# We give the option to bail here in case the user wants to make sure things
|
||||
# are OK before pushing.
|
||||
if not click.confirm("Push branch to github?", default=True):
|
||||
print("")
|
||||
print("Run when ready to push:")
|
||||
print("")
|
||||
print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
|
||||
print("")
|
||||
sys.exit(0)
|
||||
|
||||
# Otherwise, push and open the changelog in the browser.
|
||||
repo.git.push(f"-u {repo.remote().name} {repo.active_branch.name}")
|
||||
|
||||
click.launch(
|
||||
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
release()
|
||||
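As an aside on the version handling the new release script relies on: `packaging.version.Version` exposes the fields (`major`, `minor`, `micro`, `pre`, `is_prerelease`) that drive the RC/hotfix branching above. A small illustrative snippet, with a made-up version string:

```python
# Illustrative only: the packaging.version fields that scripts-dev/release.py
# reads when deciding how to bump. The version string here is made up.
from packaging import version

v = version.parse("1.24.0rc2")
assert isinstance(v, version.Version)

print(v.major, v.minor, v.micro)  # 1 24 0
print(v.pre)                      # ('rc', 2) -> the next RC would be rc3
print(v.is_prerelease)            # True
```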
@@ -1,127 +0,0 @@
|
||||
#!/usr/bin/env python
|
||||
#
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import argparse
|
||||
import json
|
||||
import sys
|
||||
from json import JSONDecodeError
|
||||
|
||||
import yaml
|
||||
from signedjson.key import read_signing_keys
|
||||
from signedjson.sign import sign_json
|
||||
|
||||
from synapse.util import json_encoder
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description="""Adds a signature to a JSON object.
|
||||
|
||||
Example usage:
|
||||
|
||||
$ scripts-dev/sign_json.py -N test -k localhost.signing.key "{}"
|
||||
{"signatures":{"test":{"ed25519:a_ZnZh":"LmPnml6iM0iR..."}}}
|
||||
""",
|
||||
formatter_class=argparse.RawDescriptionHelpFormatter,
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-N",
|
||||
"--server-name",
|
||||
help="Name to give as the local homeserver. If unspecified, will be "
|
||||
"read from the config file.",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-k",
|
||||
"--signing-key-path",
|
||||
help="Path to the file containing the private ed25519 key to sign the "
|
||||
"request with.",
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-c",
|
||||
"--config",
|
||||
default="homeserver.yaml",
|
||||
help=(
|
||||
"Path to synapse config file, from which the server name and/or signing "
|
||||
"key path will be read. Ignored if --server-name and --signing-key-path "
|
||||
"are both given."
|
||||
),
|
||||
)
|
||||
|
||||
input_args = parser.add_mutually_exclusive_group()
|
||||
|
||||
input_args.add_argument("input_data", nargs="?", help="Raw JSON to be signed.")
|
||||
|
||||
input_args.add_argument(
|
||||
"-i",
|
||||
"--input",
|
||||
type=argparse.FileType("r"),
|
||||
default=sys.stdin,
|
||||
help=(
|
||||
"A file from which to read the JSON to be signed. If neither --input nor "
|
||||
"input_data are given, JSON will be read from stdin."
|
||||
),
|
||||
)
|
||||
|
||||
parser.add_argument(
|
||||
"-o",
|
||||
"--output",
|
||||
type=argparse.FileType("w"),
|
||||
default=sys.stdout,
|
||||
help="Where to write the signed JSON. Defaults to stdout.",
|
||||
)
|
||||
|
||||
args = parser.parse_args()
|
||||
|
||||
if not args.server_name or not args.signing_key_path:
|
||||
read_args_from_config(args)
|
||||
|
||||
with open(args.signing_key_path) as f:
|
||||
key = read_signing_keys(f)[0]
|
||||
|
||||
json_to_sign = args.input_data
|
||||
if json_to_sign is None:
|
||||
json_to_sign = args.input.read()
|
||||
|
||||
try:
|
||||
obj = json.loads(json_to_sign)
|
||||
except JSONDecodeError as e:
|
||||
print("Unable to parse input as JSON: %s" % e, file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
if not isinstance(obj, dict):
|
||||
print("Input json was not an object", file=sys.stderr)
|
||||
sys.exit(1)
|
||||
|
||||
sign_json(obj, args.server_name, key)
|
||||
for c in json_encoder.iterencode(obj):
|
||||
args.output.write(c)
|
||||
args.output.write("\n")
|
||||
|
||||
|
||||
def read_args_from_config(args: argparse.Namespace) -> None:
|
||||
with open(args.config, "r") as fh:
|
||||
config = yaml.safe_load(fh)
|
||||
if not args.server_name:
|
||||
args.server_name = config["server_name"]
|
||||
if not args.signing_key_path:
|
||||
args.signing_key_path = config["signing_key_path"]
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
@@ -22,7 +22,7 @@ import logging
|
||||
import sys
|
||||
import time
|
||||
import traceback
|
||||
from typing import Dict, Optional, Set
|
||||
from typing import Optional
|
||||
|
||||
import yaml
|
||||
|
||||
@@ -40,7 +40,6 @@ from synapse.storage.database import DatabasePool, make_conn
|
||||
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
|
||||
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyBackgroundStore
|
||||
from synapse.storage.databases.main.events_bg_updates import (
|
||||
EventsBackgroundUpdatesStore,
|
||||
)
|
||||
@@ -175,7 +174,6 @@ class Store(
|
||||
StateBackgroundUpdateStore,
|
||||
MainStateBackgroundUpdateStore,
|
||||
UserDirectoryBackgroundUpdateStore,
|
||||
EndToEndKeyBackgroundStore,
|
||||
StatsStore,
|
||||
):
|
||||
def execute(self, f, *args, **kwargs):
|
||||
@@ -292,34 +290,6 @@ class Porter(object):
|
||||
|
||||
return table, already_ported, total_to_port, forward_chunk, backward_chunk
|
||||
|
||||
async def get_table_constraints(self) -> Dict[str, Set[str]]:
|
||||
"""Returns a map of tables that have foreign key constraints to tables they depend on.
|
||||
"""
|
||||
|
||||
def _get_constraints(txn):
|
||||
# We can pull the information about foreign key constraints out from
|
||||
# the postgres schema tables.
|
||||
sql = """
|
||||
SELECT DISTINCT
|
||||
tc.table_name,
|
||||
ccu.table_name AS foreign_table_name
|
||||
FROM
|
||||
information_schema.table_constraints AS tc
|
||||
INNER JOIN information_schema.constraint_column_usage AS ccu
|
||||
USING (table_schema, constraint_name)
|
||||
WHERE tc.constraint_type = 'FOREIGN KEY';
|
||||
"""
|
||||
txn.execute(sql)
|
||||
|
||||
results = {}
|
||||
for table, foreign_table in txn:
|
||||
results.setdefault(table, set()).add(foreign_table)
|
||||
return results
|
||||
|
||||
return await self.postgres_store.db_pool.runInteraction(
|
||||
"get_table_constraints", _get_constraints
|
||||
)
|
||||
|
||||
async def handle_table(
|
||||
self, table, postgres_size, table_size, forward_chunk, backward_chunk
|
||||
):
|
||||
@@ -619,18 +589,7 @@ class Porter(object):
|
||||
"create_port_table", create_port_table
|
||||
)
|
||||
|
||||
# Step 2. Set up sequences
|
||||
#
|
||||
# We do this before porting the tables so that even if we fail half
# way through, the postgres DB always has sequences that are greater
# than their respective tables. If we don't then creating the
# `DataStore` object will fail due to the inconsistency.
|
||||
self.progress.set_state("Setting up sequence generators")
|
||||
await self._setup_state_group_id_seq()
|
||||
await self._setup_user_id_seq()
|
||||
await self._setup_events_stream_seqs()
|
||||
|
||||
# Step 3. Get tables.
|
||||
# Step 2. Get tables.
|
||||
self.progress.set_state("Fetching tables")
|
||||
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
|
||||
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
|
||||
@@ -645,7 +604,7 @@ class Porter(object):
|
||||
tables = set(sqlite_tables) & set(postgres_tables)
|
||||
logger.info("Found %d tables", len(tables))
|
||||
|
||||
# Step 4. Figure out what still needs copying
|
||||
# Step 3. Figure out what still needs copying
|
||||
self.progress.set_state("Checking on port progress")
|
||||
setup_res = await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
@@ -658,43 +617,21 @@ class Porter(object):
|
||||
consumeErrors=True,
|
||||
)
|
||||
)
|
||||
# Map from table name to args passed to `handle_table`, i.e. a tuple
|
||||
# of: `postgres_size`, `table_size`, `forward_chunk`, `backward_chunk`.
|
||||
tables_to_port_info_map = {r[0]: r[1:] for r in setup_res}
|
||||
|
||||
# Step 5. Do the copying.
|
||||
#
|
||||
# This is slightly convoluted as we need to ensure tables are ported
|
||||
# in the correct order due to foreign key constraints.
|
||||
# Step 4. Do the copying.
|
||||
self.progress.set_state("Copying to postgres")
|
||||
|
||||
constraints = await self.get_table_constraints()
|
||||
tables_ported = set() # type: Set[str]
|
||||
|
||||
while tables_to_port_info_map:
|
||||
# Pulls out all tables that are still to be ported and which
|
||||
# only depend on tables that are already ported (if any).
|
||||
tables_to_port = [
|
||||
table
|
||||
for table in tables_to_port_info_map
|
||||
if not constraints.get(table, set()) - tables_ported
|
||||
]
|
||||
|
||||
await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
[
|
||||
run_in_background(
|
||||
self.handle_table,
|
||||
table,
|
||||
*tables_to_port_info_map.pop(table),
|
||||
)
|
||||
for table in tables_to_port
|
||||
],
|
||||
consumeErrors=True,
|
||||
)
|
||||
await make_deferred_yieldable(
|
||||
defer.gatherResults(
|
||||
[run_in_background(self.handle_table, *res) for res in setup_res],
|
||||
consumeErrors=True,
|
||||
)
|
||||
)
|
||||
|
||||
tables_ported.update(tables_to_port)
|
||||
# Step 5. Set up sequences
|
||||
self.progress.set_state("Setting up sequence generators")
|
||||
await self._setup_state_group_id_seq()
|
||||
await self._setup_user_id_seq()
|
||||
await self._setup_events_stream_seqs()
|
||||
|
||||
self.progress.done()
|
||||
except Exception as e:
|
||||
@@ -853,62 +790,45 @@ class Porter(object):
|
||||
|
||||
return done, remaining + done
|
||||
|
||||
async def _setup_state_group_id_seq(self):
|
||||
curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
|
||||
)
|
||||
|
||||
if not curr_id:
|
||||
return
|
||||
|
||||
def _setup_state_group_id_seq(self):
|
||||
def r(txn):
|
||||
txn.execute("SELECT MAX(id) FROM state_groups")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if not curr_id:
|
||||
return
|
||||
next_id = curr_id + 1
|
||||
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
|
||||
|
||||
await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
|
||||
|
||||
async def _setup_user_id_seq(self):
|
||||
curr_id = await self.sqlite_store.db_pool.runInteraction(
|
||||
"setup_user_id_seq", find_max_generated_user_id_localpart
|
||||
)
|
||||
return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
|
||||
|
||||
def _setup_user_id_seq(self):
|
||||
def r(txn):
|
||||
next_id = curr_id + 1
|
||||
next_id = find_max_generated_user_id_localpart(txn) + 1
|
||||
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
|
||||
|
||||
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
|
||||
|
||||
async def _setup_events_stream_seqs(self):
|
||||
"""Set the event stream sequences to the correct values.
|
||||
"""
|
||||
|
||||
# We get called before we've ported the events table, so we need to
|
||||
# fetch the current positions from the SQLite store.
|
||||
curr_forward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="events", keyvalues={}, retcol="MAX(stream_ordering)", allow_none=True
|
||||
)
|
||||
|
||||
curr_backward_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
|
||||
table="events",
|
||||
keyvalues={},
|
||||
retcol="MAX(-MIN(stream_ordering), 1)",
|
||||
allow_none=True,
|
||||
)
|
||||
|
||||
def _setup_events_stream_seqs_set_pos(txn):
|
||||
if curr_forward_id:
|
||||
def _setup_events_stream_seqs(self):
|
||||
def r(txn):
|
||||
txn.execute("SELECT MAX(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_stream_seq RESTART WITH %s",
|
||||
(curr_forward_id + 1,),
|
||||
"ALTER SEQUENCE events_stream_seq RESTART WITH %s", (next_id,)
|
||||
)
|
||||
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
|
||||
(curr_backward_id + 1,),
|
||||
)
|
||||
txn.execute("SELECT -MIN(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
|
||||
(next_id,),
|
||||
)
|
||||
|
||||
return await self.postgres_store.db_pool.runInteraction(
|
||||
"_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
|
||||
return self.postgres_store.db_pool.runInteraction(
|
||||
"_setup_events_stream_seqs", r
|
||||
)
|
||||
|
||||
|
||||
|
||||
@@ -48,7 +48,7 @@ try:
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__version__ = "1.24.0"
|
||||
__version__ = "1.22.1"
|
||||
|
||||
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
|
||||
# We import here so that we don't have to install a bunch of deps when
|
||||
|
||||
@@ -37,7 +37,7 @@ def request_registration(
|
||||
exit=sys.exit,
|
||||
):
|
||||
|
||||
url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)
|
||||
url = "%s/_matrix/client/r0/admin/register" % (server_location,)
|
||||
|
||||
# Get the nonce
|
||||
r = requests.get(url, verify=False)
|
||||
|
||||
@@ -14,12 +14,10 @@
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
from typing import Optional
|
||||
|
||||
from synapse.api.constants import LimitBlockingTypes, UserTypes
|
||||
from synapse.api.errors import Codes, ResourceLimitError
|
||||
from synapse.config.server import is_threepid_reserved
|
||||
from synapse.types import Requester
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -35,47 +33,24 @@ class AuthBlocking:
|
||||
self._max_mau_value = hs.config.max_mau_value
|
||||
self._limit_usage_by_mau = hs.config.limit_usage_by_mau
|
||||
self._mau_limits_reserved_threepids = hs.config.mau_limits_reserved_threepids
|
||||
self._server_name = hs.hostname
|
||||
|
||||
async def check_auth_blocking(
|
||||
self,
|
||||
user_id: Optional[str] = None,
|
||||
threepid: Optional[dict] = None,
|
||||
user_type: Optional[str] = None,
|
||||
requester: Optional[Requester] = None,
|
||||
):
|
||||
async def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
|
||||
"""Checks if the user should be rejected for some external reason,
|
||||
such as monthly active user limiting or global disable flag
|
||||
|
||||
Args:
|
||||
user_id: If present, checks for presence against existing
|
||||
user_id(str|None): If present, checks for presence against existing
|
||||
MAU cohort
|
||||
|
||||
threepid: If present, checks for presence against configured
|
||||
threepid(dict|None): If present, checks for presence against configured
|
||||
reserved threepid. Used in cases where the user is trying to register
|
||||
with a MAU blocked server, normally they would be rejected but their
|
||||
threepid is on the reserved list. user_id and
|
||||
threepid should never be set at the same time.
|
||||
|
||||
user_type: If present, is used to decide whether to check against
|
||||
user_type(str|None): If present, is used to decide whether to check against
|
||||
certain blocking reasons like MAU.
|
||||
|
||||
requester: If present, and the authenticated entity is a user, checks for
|
||||
presence against existing MAU cohort. Passing in both a `user_id` and
|
||||
`requester` is an error.
|
||||
"""
|
||||
if requester and user_id:
|
||||
raise Exception(
|
||||
"Passed in both 'user_id' and 'requester' to 'check_auth_blocking'"
|
||||
)
|
||||
|
||||
if requester:
|
||||
if requester.authenticated_entity.startswith("@"):
|
||||
user_id = requester.authenticated_entity
|
||||
elif requester.authenticated_entity == self._server_name:
|
||||
# We never block the server from doing actions on behalf of
|
||||
# users.
|
||||
return
|
||||
|
||||
# Never fail an auth check for the server notices users or support user
|
||||
# This can be a problem where event creation is prohibited due to blocking
|
||||
|
||||
@@ -32,7 +32,6 @@ from synapse.app.phone_stats_home import start_phone_stats_home
|
||||
from synapse.config.server import ListenerConfig
|
||||
from synapse.crypto import context_factory
|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.metrics.background_process_metrics import wrap_as_background_process
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.daemonize import daemonize_process
|
||||
from synapse.util.rlimit import change_resource_limit
|
||||
@@ -50,6 +49,7 @@ def register_sighup(func, *args, **kwargs):
|
||||
|
||||
Args:
|
||||
func (function): Function to be called when sent a SIGHUP signal.
|
||||
Will be called with a single default argument, the homeserver.
|
||||
*args, **kwargs: args and kwargs to be passed to the target function.
|
||||
"""
|
||||
_sighup_callbacks.append((func, args, kwargs))
|
||||
@@ -245,26 +245,19 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
|
||||
# Set up the SIGHUP machinery.
|
||||
if hasattr(signal, "SIGHUP"):
|
||||
|
||||
@wrap_as_background_process("sighup")
|
||||
def handle_sighup(*args, **kwargs):
|
||||
# Tell systemd our state, if we're using it. This will silently fail if
|
||||
# we're not using systemd.
|
||||
sdnotify(b"RELOADING=1")
|
||||
|
||||
for i, args, kwargs in _sighup_callbacks:
|
||||
i(*args, **kwargs)
|
||||
i(hs, *args, **kwargs)
|
||||
|
||||
sdnotify(b"READY=1")
|
||||
|
||||
# We defer running the sighup handlers until next reactor tick. This
|
||||
# is so that we're in a sane state, e.g. flushing the logs may fail
|
||||
# if the sighup happens in the middle of writing a log entry.
|
||||
def run_sighup(*args, **kwargs):
|
||||
hs.get_clock().call_later(0, handle_sighup, *args, **kwargs)
|
||||
signal.signal(signal.SIGHUP, handle_sighup)
|
||||
|
||||
signal.signal(signal.SIGHUP, run_sighup)
|
||||
|
||||
register_sighup(refresh_certificate, hs)
|
||||
register_sighup(refresh_certificate)
|
||||
|
||||
# Load the certificate from disk.
|
||||
refresh_certificate(hs)
|
||||
|
||||
@@ -21,11 +21,8 @@ class PushConfig(Config):
|
||||
section = "push"
|
||||
|
||||
def read_config(self, config, **kwargs):
|
||||
push_config = config.get("push") or {}
|
||||
push_config = config.get("push", {})
|
||||
self.push_include_content = push_config.get("include_content", True)
|
||||
self.push_group_unread_count_by_room = push_config.get(
|
||||
"group_unread_count_by_room", True
|
||||
)
|
||||
|
||||
pusher_instances = config.get("pusher_instances") or []
|
||||
self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
|
||||
@@ -52,33 +49,18 @@ class PushConfig(Config):
|
||||
|
||||
def generate_config_section(self, config_dir_path, server_name, **kwargs):
|
||||
return """
|
||||
## Push ##
|
||||
|
||||
push:
|
||||
# Clients requesting push notifications can either have the body of
|
||||
# the message sent in the notification poke along with other details
|
||||
# like the sender, or just the event ID and room ID (`event_id_only`).
|
||||
# If clients choose the former, this option controls whether the
|
||||
# notification request includes the content of the event (other details
|
||||
# like the sender are still included). For `event_id_only` push, it
|
||||
# has no effect.
|
||||
#
|
||||
# For modern android devices the notification content will still appear
|
||||
# because it is loaded by the app. iPhone, however will send a
|
||||
# notification saying only that a message arrived and who it came from.
|
||||
#
|
||||
# The default value is "true" to include message details. Uncomment to only
|
||||
# include the event ID and room ID in push notification payloads.
|
||||
#
|
||||
#include_content: false
|
||||
|
||||
# When a push notification is received, an unread count is also sent.
|
||||
# This number can either be calculated as the number of unread messages
|
||||
# for the user, or the number of *rooms* the user has unread messages in.
|
||||
#
|
||||
# The default value is "true", meaning push clients will see the number of
|
||||
# rooms with unread messages in them. Uncomment to instead send the number
|
||||
# of unread messages.
|
||||
#
|
||||
#group_unread_count_by_room: false
|
||||
# Clients requesting push notifications can either have the body of
|
||||
# the message sent in the notification poke along with other details
|
||||
# like the sender, or just the event ID and room ID (`event_id_only`).
|
||||
# If clients choose the former, this option controls whether the
|
||||
# notification request includes the content of the event (other details
|
||||
# like the sender are still included). For `event_id_only` push, it
|
||||
# has no effect.
|
||||
#
|
||||
# For modern android devices the notification content will still appear
|
||||
# because it is loaded by the app. iPhone, however will send a
|
||||
# notification saying only that a message arrived and who it came from.
|
||||
#
|
||||
#push:
|
||||
# include_content: true
|
||||
"""
|
||||
|
||||
@@ -347,9 +347,8 @@ class RegistrationConfig(Config):
|
||||
# email will be globally disabled.
|
||||
#
|
||||
# Additionally, if `msisdn` is not set, registration and password resets via msisdn
|
||||
# will be disabled regardless, and users will not be able to associate an msisdn
|
||||
# identifier to their account. This is due to Synapse currently not supporting
|
||||
# any method of sending SMS messages on its own.
|
||||
# will be disabled regardless. This is due to Synapse currently not supporting any
|
||||
# method of sending SMS messages on its own.
|
||||
#
|
||||
# To enable using an identity server for operations regarding a particular third-party
|
||||
# identifier type, set the value to the URL of that identity server as shown in the
|
||||
|
||||
@@ -90,8 +90,6 @@ class SAML2Config(Config):
|
||||
"grandfathered_mxid_source_attribute", "uid"
|
||||
)
|
||||
|
||||
self.saml2_idp_entityid = saml2_config.get("idp_entityid", None)
|
||||
|
||||
# user_mapping_provider may be None if the key is present but has no value
|
||||
ump_dict = saml2_config.get("user_mapping_provider") or {}
|
||||
|
||||
@@ -258,12 +256,6 @@ class SAML2Config(Config):
|
||||
# remote:
|
||||
# - url: https://our_idp/metadata.xml
|
||||
|
||||
# Allowed clock difference in seconds between the homeserver and IdP.
|
||||
#
|
||||
# Uncomment the below to increase the accepted time difference from 0 to 3 seconds.
|
||||
#
|
||||
#accepted_time_diff: 3
|
||||
|
||||
# By default, the user has to go to our login page first. If you'd like
|
||||
# to allow IdP-initiated login, set 'allow_unsolicited: true' in a
|
||||
# 'service.sp' section:
|
||||
@@ -279,28 +271,6 @@ class SAML2Config(Config):
|
||||
#description: ["My awesome SP", "en"]
|
||||
#name: ["Test SP", "en"]
|
||||
|
||||
#ui_info:
|
||||
# display_name:
|
||||
# - lang: en
|
||||
# text: "Display Name is the descriptive name of your service."
|
||||
# description:
|
||||
# - lang: en
|
||||
# text: "Description should be a short paragraph explaining the purpose of the service."
|
||||
# information_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/terms-of-service"
|
||||
# privacy_statement_url:
|
||||
# - lang: en
|
||||
# text: "https://example.com/privacy-policy"
|
||||
# keywords:
|
||||
# - lang: en
|
||||
# text: ["Matrix", "Element"]
|
||||
# logo:
|
||||
# - lang: en
|
||||
# text: "https://example.com/logo.svg"
|
||||
# width: "200"
|
||||
# height: "80"
|
||||
|
||||
#organization:
|
||||
# name: Example com
|
||||
# display_name:
|
||||
@@ -385,14 +355,6 @@ class SAML2Config(Config):
|
||||
# value: "staff"
|
||||
# - attribute: department
|
||||
# value: "sales"
|
||||
|
||||
# If the metadata XML contains multiple IdP entities then the `idp_entityid`
|
||||
# option must be set to the entity to redirect users to.
|
||||
#
|
||||
# Most deployments only have a single IdP entity and so should omit this
|
||||
# option.
|
||||
#
|
||||
#idp_entityid: 'https://our_idp/entityid'
|
||||
""" % {
|
||||
"config_dir_path": config_dir_path
|
||||
}
|
||||
|
||||
@@ -15,12 +15,13 @@
|
||||
# limitations under the License.
|
||||
|
||||
import inspect
|
||||
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple
|
||||
from typing import Any, Dict, List, Optional, Tuple
|
||||
|
||||
from synapse.spam_checker_api import RegistrationBehaviour
|
||||
from synapse.types import Collection
|
||||
|
||||
if TYPE_CHECKING:
|
||||
MYPY = False
|
||||
if MYPY:
|
||||
import synapse.events
|
||||
import synapse.server
|
||||
|
||||
|
||||
@@ -13,26 +13,20 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from typing import Union
|
||||
|
||||
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.api.room_versions import EventFormatVersions
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.events import EventBase
|
||||
from synapse.events.builder import EventBuilder
|
||||
from synapse.events.utils import validate_canonicaljson
|
||||
from synapse.federation.federation_server import server_matches_acl_event
|
||||
from synapse.types import EventID, RoomID, UserID
|
||||
|
||||
|
||||
class EventValidator:
|
||||
def validate_new(self, event: EventBase, config: HomeServerConfig):
|
||||
def validate_new(self, event, config):
|
||||
"""Validates the event has roughly the right format
|
||||
|
||||
Args:
|
||||
event: The event to validate.
|
||||
config: The homeserver's configuration.
|
||||
event (FrozenEvent): The event to validate.
|
||||
config (Config): The homeserver's configuration.
|
||||
"""
|
||||
self.validate_builder(event)
|
||||
|
||||
@@ -82,18 +76,12 @@ class EventValidator:
|
||||
if event.type == EventTypes.Retention:
|
||||
self._validate_retention(event)
|
||||
|
||||
if event.type == EventTypes.ServerACL:
|
||||
if not server_matches_acl_event(config.server_name, event):
|
||||
raise SynapseError(
|
||||
400, "Can't create an ACL event that denies the local server"
|
||||
)
|
||||
|
||||
def _validate_retention(self, event: EventBase):
|
||||
def _validate_retention(self, event):
|
||||
"""Checks that an event that defines the retention policy for a room respects the
|
||||
format enforced by the spec.
|
||||
|
||||
Args:
|
||||
event: The event to validate.
|
||||
event (FrozenEvent): The event to validate.
|
||||
"""
|
||||
if not event.is_state():
|
||||
raise SynapseError(code=400, msg="must be a state event")
|
||||
@@ -128,10 +116,13 @@ class EventValidator:
|
||||
errcode=Codes.BAD_JSON,
|
||||
)
|
||||
|
||||
def validate_builder(self, event: Union[EventBase, EventBuilder]):
|
||||
def validate_builder(self, event):
|
||||
"""Validates that the builder/event has roughly the right format. Only
|
||||
checks values that we expect a proto event to have, rather than all the
|
||||
fields an event would have
|
||||
|
||||
Args:
|
||||
event (EventBuilder|FrozenEvent)
|
||||
"""
|
||||
|
||||
strings = ["room_id", "sender", "type"]
|
||||
|
||||
@@ -49,7 +49,6 @@ from synapse.federation.federation_base import FederationBase, event_from_pdu_js
|
||||
from synapse.federation.persistence import TransactionActions
|
||||
from synapse.federation.units import Edu, Transaction
|
||||
from synapse.http.endpoint import parse_server_name
|
||||
from synapse.http.servlet import assert_params_in_dict
|
||||
from synapse.logging.context import (
|
||||
make_deferred_yieldable,
|
||||
nested_logging_context,
|
||||
@@ -392,7 +391,7 @@ class FederationServer(FederationBase):
|
||||
TRANSACTION_CONCURRENCY_LIMIT,
|
||||
)
|
||||
|
||||
async def on_room_state_request(
|
||||
async def on_context_state_request(
|
||||
self, origin: str, room_id: str, event_id: str
|
||||
) -> Tuple[int, Dict[str, Any]]:
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -515,12 +514,11 @@ class FederationServer(FederationBase):
|
||||
return {"event": ret_pdu.get_pdu_json(time_now)}
|
||||
|
||||
async def on_send_join_request(
|
||||
self, origin: str, content: JsonDict
|
||||
self, origin: str, content: JsonDict, room_id: str
|
||||
) -> Dict[str, Any]:
|
||||
logger.debug("on_send_join_request: content: %s", content)
|
||||
|
||||
assert_params_in_dict(content, ["room_id"])
|
||||
room_version = await self.store.get_room_version(content["room_id"])
|
||||
room_version = await self.store.get_room_version(room_id)
|
||||
pdu = event_from_pdu_json(content, room_version)
|
||||
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -549,11 +547,12 @@ class FederationServer(FederationBase):
|
||||
time_now = self._clock.time_msec()
|
||||
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
|
||||
|
||||
async def on_send_leave_request(self, origin: str, content: JsonDict) -> dict:
|
||||
async def on_send_leave_request(
|
||||
self, origin: str, content: JsonDict, room_id: str
|
||||
) -> dict:
|
||||
logger.debug("on_send_leave_request: content: %s", content)
|
||||
|
||||
assert_params_in_dict(content, ["room_id"])
|
||||
room_version = await self.store.get_room_version(content["room_id"])
|
||||
room_version = await self.store.get_room_version(room_id)
|
||||
pdu = event_from_pdu_json(content, room_version)
|
||||
|
||||
origin_host, _ = parse_server_name(origin)
|
||||
@@ -749,8 +748,12 @@ class FederationServer(FederationBase):
|
||||
)
|
||||
return ret
|
||||
|
||||
async def on_exchange_third_party_invite_request(self, event_dict: Dict):
|
||||
ret = await self.handler.on_exchange_third_party_invite_request(event_dict)
|
||||
async def on_exchange_third_party_invite_request(
|
||||
self, room_id: str, event_dict: Dict
|
||||
):
|
||||
ret = await self.handler.on_exchange_third_party_invite_request(
|
||||
room_id, event_dict
|
||||
)
|
||||
return ret
|
||||
|
||||
async def check_server_matches_acl(self, server_name: str, room_id: str):
|
||||
|
||||
@@ -440,13 +440,13 @@ class FederationEventServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationStateV1Servlet(BaseFederationServlet):
|
||||
PATH = "/state/(?P<room_id>[^/]*)/?"
|
||||
PATH = "/state/(?P<context>[^/]*)/?"
|
||||
|
||||
# This is when someone asks for all data for a given room.
|
||||
async def on_GET(self, origin, content, query, room_id):
|
||||
return await self.handler.on_room_state_request(
|
||||
# This is when someone asks for all data for a given context.
|
||||
async def on_GET(self, origin, content, query, context):
|
||||
return await self.handler.on_context_state_request(
|
||||
origin,
|
||||
room_id,
|
||||
context,
|
||||
parse_string_from_args(query, "event_id", None, required=False),
|
||||
)
|
||||
|
||||
@@ -463,16 +463,16 @@ class FederationStateIdsServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationBackfillServlet(BaseFederationServlet):
|
||||
PATH = "/backfill/(?P<room_id>[^/]*)/?"
|
||||
PATH = "/backfill/(?P<context>[^/]*)/?"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id):
|
||||
async def on_GET(self, origin, content, query, context):
|
||||
versions = [x.decode("ascii") for x in query[b"v"]]
|
||||
limit = parse_integer_from_args(query, "limit", None)
|
||||
|
||||
if not limit:
|
||||
return 400, {"error": "Did not include limit param"}
|
||||
|
||||
return await self.handler.on_backfill_request(origin, room_id, versions, limit)
|
||||
return await self.handler.on_backfill_request(origin, context, versions, limit)
|
||||
|
||||
|
||||
class FederationQueryServlet(BaseFederationServlet):
|
||||
@@ -487,9 +487,9 @@ class FederationQueryServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationMakeJoinServlet(BaseFederationServlet):
|
||||
PATH = "/make_join/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
|
||||
PATH = "/make_join/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, _content, query, room_id, user_id):
|
||||
async def on_GET(self, origin, _content, query, context, user_id):
|
||||
"""
|
||||
Args:
|
||||
origin (unicode): The authenticated server_name of the calling server
|
||||
@@ -511,16 +511,16 @@ class FederationMakeJoinServlet(BaseFederationServlet):
|
||||
supported_versions = ["1"]
|
||||
|
||||
content = await self.handler.on_make_join_request(
|
||||
origin, room_id, user_id, supported_versions=supported_versions
|
||||
origin, context, user_id, supported_versions=supported_versions
|
||||
)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationMakeLeaveServlet(BaseFederationServlet):
|
||||
PATH = "/make_leave/(?P<room_id>[^/]*)/(?P<user_id>[^/]*)"
|
||||
PATH = "/make_leave/(?P<context>[^/]*)/(?P<user_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id, user_id):
|
||||
content = await self.handler.on_make_leave_request(origin, room_id, user_id)
|
||||
async def on_GET(self, origin, content, query, context, user_id):
|
||||
content = await self.handler.on_make_leave_request(origin, context, user_id)
|
||||
return 200, content
|
||||
|
||||
|
||||
@@ -528,7 +528,7 @@ class FederationV1SendLeaveServlet(BaseFederationServlet):
|
||||
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
content = await self.handler.on_send_leave_request(origin, content)
|
||||
content = await self.handler.on_send_leave_request(origin, content, room_id)
|
||||
return 200, (200, content)
|
||||
|
||||
|
||||
@@ -538,43 +538,43 @@ class FederationV2SendLeaveServlet(BaseFederationServlet):
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
content = await self.handler.on_send_leave_request(origin, content)
|
||||
content = await self.handler.on_send_leave_request(origin, content, room_id)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationEventAuthServlet(BaseFederationServlet):
|
||||
PATH = "/event_auth/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/event_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_GET(self, origin, content, query, room_id, event_id):
|
||||
return await self.handler.on_event_auth(origin, room_id, event_id)
|
||||
async def on_GET(self, origin, content, query, context, event_id):
|
||||
return await self.handler.on_event_auth(origin, context, event_id)
|
||||
|
||||
|
||||
class FederationV1SendJoinServlet(BaseFederationServlet):
|
||||
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
content = await self.handler.on_send_join_request(origin, content)
|
||||
content = await self.handler.on_send_join_request(origin, content, context)
|
||||
return 200, (200, content)
|
||||
|
||||
|
||||
class FederationV2SendJoinServlet(BaseFederationServlet):
|
||||
PATH = "/send_join/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
content = await self.handler.on_send_join_request(origin, content)
|
||||
content = await self.handler.on_send_join_request(origin, content, context)
|
||||
return 200, content
|
||||
|
||||
|
||||
class FederationV1InviteServlet(BaseFederationServlet):
|
||||
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/invite/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# We don't get a room version, so we have to assume its EITHER v1 or
|
||||
# v2. This is "fine" as the only difference between V1 and V2 is the
|
||||
# state resolution algorithm, and we don't use that for processing
|
||||
@@ -589,12 +589,12 @@ class FederationV1InviteServlet(BaseFederationServlet):
|
||||
|
||||
|
||||
class FederationV2InviteServlet(BaseFederationServlet):
|
||||
PATH = "/invite/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
|
||||
PATH = "/invite/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
|
||||
|
||||
PREFIX = FEDERATION_V2_PREFIX
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id, event_id):
|
||||
# TODO(paul): assert that room_id/event_id parsed from path actually
|
||||
async def on_PUT(self, origin, content, query, context, event_id):
|
||||
# TODO(paul): assert that context/event_id parsed from path actually
|
||||
# match those given in content
|
||||
|
||||
room_version = content["room_version"]
|
||||
@@ -616,7 +616,9 @@ class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
|
||||
PATH = "/exchange_third_party_invite/(?P<room_id>[^/]*)"
|
||||
|
||||
async def on_PUT(self, origin, content, query, room_id):
|
||||
content = await self.handler.on_exchange_third_party_invite_request(content)
|
||||
content = await self.handler.on_exchange_third_party_invite_request(
|
||||
room_id, content
|
||||
)
|
||||
return 200, content
|
||||
|
||||
|
||||
|
||||
@@ -169,9 +169,7 @@ class BaseHandler:
|
||||
# and having homeservers have their own users leave keeps more
|
||||
# of that decision-making and control local to the guest-having
|
||||
# homeserver.
|
||||
requester = synapse.types.create_requester(
|
||||
target_user, is_guest=True, authenticated_entity=self.server_name
|
||||
)
|
||||
requester = synapse.types.create_requester(target_user, is_guest=True)
|
||||
handler = self.hs.get_room_member_handler()
|
||||
await handler.update_membership(
|
||||
requester,
|
||||
|
||||
@@ -226,7 +226,7 @@ class ApplicationServicesHandler:
|
||||
new_token: Optional[int],
|
||||
users: Collection[Union[str, UserID]],
|
||||
):
|
||||
logger.debug("Checking interested services for %s" % (stream_key))
|
||||
logger.info("Checking interested services for %s" % (stream_key))
|
||||
with Measure(self.clock, "notify_interested_services_ephemeral"):
|
||||
for service in services:
|
||||
# Only handle typing if we have the latest token
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 - 2016 OpenMarket Ltd
|
||||
# Copyright 2017 Vector Creations Ltd
|
||||
# Copyright 2019 - 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -26,7 +25,6 @@ from typing import (
|
||||
Dict,
|
||||
Iterable,
|
||||
List,
|
||||
Mapping,
|
||||
Optional,
|
||||
Tuple,
|
||||
Union,
|
||||
@@ -184,11 +182,11 @@ class AuthHandler(BaseHandler):
|
||||
account_handler = ModuleApi(hs, self)
|
||||
|
||||
self.password_providers = [
|
||||
PasswordProvider.load(module, config, account_handler)
|
||||
module(config=config, account_handler=account_handler)
|
||||
for module, config in hs.config.password_providers
|
||||
]
|
||||
|
||||
logger.info("Extra password_providers: %s", self.password_providers)
|
||||
logger.info("Extra password_providers: %r", self.password_providers)
|
||||
|
||||
self.hs = hs # FIXME better possibility to access registrationHandler later?
|
||||
self.macaroon_gen = hs.get_macaroon_generator()
|
||||
@@ -202,23 +200,15 @@ class AuthHandler(BaseHandler):
|
||||
# type in the list. (NB that the spec doesn't require us to do so and
|
||||
# clients which favour types that they don't understand over those that
|
||||
# they do are technically broken)
|
||||
|
||||
# start out by assuming PASSWORD is enabled; we will remove it later if not.
|
||||
login_types = []
|
||||
if hs.config.password_localdb_enabled:
|
||||
if self._password_enabled:
|
||||
login_types.append(LoginType.PASSWORD)
|
||||
|
||||
for provider in self.password_providers:
|
||||
if hasattr(provider, "get_supported_login_types"):
|
||||
for t in provider.get_supported_login_types().keys():
|
||||
if t not in login_types:
|
||||
login_types.append(t)
|
||||
|
||||
if not self._password_enabled:
|
||||
login_types.remove(LoginType.PASSWORD)
|
||||
|
||||
self._supported_login_types = login_types
|
||||
|
||||
# Login types and UI Auth types have a heavy overlap, but are not
|
||||
# necessarily identical. Login types have SSO (and other login types)
|
||||
# added in the rest layer, see synapse.rest.client.v1.login.LoginRestServerlet.on_GET.
|
||||
@@ -235,13 +225,6 @@ class AuthHandler(BaseHandler):
|
||||
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
|
||||
)
|
||||
|
||||
# Ratelimitier for failed /login attempts
|
||||
self._failed_login_attempts_ratelimiter = Ratelimiter(
|
||||
clock=hs.get_clock(),
|
||||
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
|
||||
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
|
||||
)
|
||||
|
||||
self._clock = self.hs.get_clock()
|
||||
|
||||
# Expire old UI auth sessions after a period of time.
|
||||
@@ -654,8 +637,14 @@ class AuthHandler(BaseHandler):
|
||||
res = await checker.check_auth(authdict, clientip=clientip)
|
||||
return res
|
||||
|
||||
# fall back to the v1 login flow
|
||||
canonical_id, _ = await self.validate_login(authdict)
|
||||
# build a v1-login-style dict out of the authdict and fall back to the
|
||||
# v1 code
|
||||
user_id = authdict.get("user")
|
||||
|
||||
if user_id is None:
|
||||
raise SynapseError(400, "", Codes.MISSING_PARAM)
|
||||
|
||||
(canonical_id, callback) = await self.validate_login(user_id, authdict)
|
||||
return canonical_id
|
||||
|
||||
def _get_params_recaptcha(self) -> dict:
|
||||
@@ -704,12 +693,8 @@ class AuthHandler(BaseHandler):
|
||||
}
|
||||
|
||||
async def get_access_token_for_user_id(
|
||||
self,
|
||||
user_id: str,
|
||||
device_id: Optional[str],
|
||||
valid_until_ms: Optional[int],
|
||||
puppets_user_id: Optional[str] = None,
|
||||
) -> str:
|
||||
self, user_id: str, device_id: Optional[str], valid_until_ms: Optional[int]
|
||||
):
|
||||
"""
|
||||
Creates a new access token for the user with the given user ID.
|
||||
|
||||
@@ -735,25 +720,13 @@ class AuthHandler(BaseHandler):
|
||||
fmt_expiry = time.strftime(
|
||||
" until %Y-%m-%d %H:%M:%S", time.localtime(valid_until_ms / 1000.0)
|
||||
)
|
||||
|
||||
if puppets_user_id:
|
||||
logger.info(
|
||||
"Logging in user %s as %s%s", user_id, puppets_user_id, fmt_expiry
|
||||
)
|
||||
else:
|
||||
logger.info(
|
||||
"Logging in user %s on device %s%s", user_id, device_id, fmt_expiry
|
||||
)
|
||||
logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry)
|
||||
|
||||
await self.auth.check_auth_blocking(user_id)
|
||||
|
||||
access_token = self.macaroon_gen.generate_access_token(user_id)
|
||||
await self.store.add_access_token_to_user(
|
||||
user_id=user_id,
|
||||
token=access_token,
|
||||
device_id=device_id,
|
||||
valid_until_ms=valid_until_ms,
|
||||
puppets_user_id=puppets_user_id,
|
||||
user_id, access_token, device_id, valid_until_ms
|
||||
)
|
||||
|
||||
# the device *should* have been registered before we got here; however,
|
||||
@@ -830,17 +803,17 @@ class AuthHandler(BaseHandler):
|
||||
return self._supported_login_types
|
||||
|
||||
async def validate_login(
|
||||
self, login_submission: Dict[str, Any], ratelimit: bool = False,
|
||||
self, username: str, login_submission: Dict[str, Any]
|
||||
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
|
||||
"""Authenticates the user for the /login API
|
||||
|
||||
Also used by the user-interactive auth flow to validate auth types which don't
|
||||
have an explicit UIA handler, including m.password.auth.
|
||||
Also used by the user-interactive auth flow to validate
|
||||
m.login.password auth types.
|
||||
|
||||
Args:
|
||||
username: username supplied by the user
|
||||
login_submission: the whole of the login submission
|
||||
(including 'type' and other relevant fields)
|
||||
ratelimit: whether to apply the failed_login_attempt ratelimiter
|
||||
Returns:
|
||||
A tuple of the canonical user id, and optional callback
|
||||
to be called once the access token and device id are issued
|
||||
@@ -849,160 +822,38 @@ class AuthHandler(BaseHandler):
|
||||
SynapseError if there was a problem with the request
|
||||
LoginError if there was an authentication problem.
|
||||
"""
|
||||
login_type = login_submission.get("type")
|
||||
if not isinstance(login_type, str):
|
||||
raise SynapseError(400, "Bad parameter: type", Codes.INVALID_PARAM)
|
||||
|
||||
# ideally, we wouldn't be checking the identifier unless we know we have a login
|
||||
# method which uses it (https://github.com/matrix-org/synapse/issues/8836)
|
||||
#
|
||||
# But the auth providers' check_auth interface requires a username, so in
|
||||
# practice we can only support login methods which we can map to a username
|
||||
# anyway.
|
||||
if username.startswith("@"):
|
||||
qualified_user_id = username
|
||||
else:
|
||||
qualified_user_id = UserID(username, self.hs.hostname).to_string()
|
||||
|
||||
login_type = login_submission.get("type")
|
||||
known_login_type = False
|
||||
|
||||
# special case to check for "password" for the check_password interface
|
||||
# for the auth providers
|
||||
password = login_submission.get("password")
|
||||
|
||||
if login_type == LoginType.PASSWORD:
|
||||
if not self._password_enabled:
|
||||
raise SynapseError(400, "Password login has been disabled.")
|
||||
if not isinstance(password, str):
|
||||
raise SynapseError(400, "Bad parameter: password", Codes.INVALID_PARAM)
|
||||
|
||||
# map old-school login fields into new-school "identifier" fields.
|
||||
identifier_dict = convert_client_dict_legacy_fields_to_identifier(
|
||||
login_submission
|
||||
)
|
||||
|
||||
# convert phone type identifiers to generic threepids
|
||||
if identifier_dict["type"] == "m.id.phone":
|
||||
identifier_dict = login_id_phone_to_thirdparty(identifier_dict)
|
||||
|
||||
# convert threepid identifiers to user IDs
|
||||
if identifier_dict["type"] == "m.id.thirdparty":
|
||||
address = identifier_dict.get("address")
|
||||
medium = identifier_dict.get("medium")
|
||||
|
||||
if medium is None or address is None:
|
||||
raise SynapseError(400, "Invalid thirdparty identifier")
|
||||
|
||||
# For emails, canonicalise the address.
|
||||
# We store all email addresses canonicalised in the DB.
|
||||
# (See add_threepid in synapse/handlers/auth.py)
|
||||
if medium == "email":
|
||||
try:
|
||||
address = canonicalise_email(address)
|
||||
except ValueError as e:
|
||||
raise SynapseError(400, str(e))
|
||||
|
||||
# We also apply account rate limiting using the 3PID as a key, as
|
||||
# otherwise using 3PID bypasses the ratelimiting based on user ID.
|
||||
if ratelimit:
|
||||
self._failed_login_attempts_ratelimiter.ratelimit(
|
||||
(medium, address), update=False
|
||||
)
|
||||
|
||||
# Check for login providers that support 3pid login types
|
||||
if login_type == LoginType.PASSWORD:
|
||||
# we've already checked that there is a (valid) password field
|
||||
assert isinstance(password, str)
|
||||
(
|
||||
canonical_user_id,
|
||||
callback_3pid,
|
||||
) = await self.check_password_provider_3pid(medium, address, password)
|
||||
if canonical_user_id:
|
||||
# Authentication through password provider and 3pid succeeded
|
||||
return canonical_user_id, callback_3pid
|
||||
|
||||
# No password providers were able to handle this 3pid
|
||||
# Check local store
|
||||
user_id = await self.hs.get_datastore().get_user_id_by_threepid(
|
||||
medium, address
|
||||
)
|
||||
if not user_id:
|
||||
logger.warning(
|
||||
"unknown 3pid identifier medium %s, address %r", medium, address
|
||||
)
|
||||
# We mark that we've failed to log in here, as
|
||||
# `check_password_provider_3pid` might have returned `None` due
|
||||
# to an incorrect password, rather than the account not
|
||||
# existing.
|
||||
#
|
||||
# If it returned None but the 3PID was bound then we won't hit
|
||||
# this code path, which is fine as then the per-user ratelimit
|
||||
# will kick in below.
|
||||
if ratelimit:
|
||||
self._failed_login_attempts_ratelimiter.can_do_action(
|
||||
(medium, address)
|
||||
)
|
||||
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
|
||||
|
||||
identifier_dict = {"type": "m.id.user", "user": user_id}
|
||||
|
||||
# by this point, the identifier should be an m.id.user: if it's anything
|
||||
# else, we haven't understood it.
|
||||
if identifier_dict["type"] != "m.id.user":
|
||||
raise SynapseError(400, "Unknown login identifier type")
|
||||
|
||||
username = identifier_dict.get("user")
|
||||
if not username:
|
||||
raise SynapseError(400, "User identifier is missing 'user' key")
|
||||
|
||||
if username.startswith("@"):
|
||||
qualified_user_id = username
|
||||
else:
|
||||
qualified_user_id = UserID(username, self.hs.hostname).to_string()
|
||||
|
||||
# Check if we've hit the failed ratelimit (but don't update it)
|
||||
if ratelimit:
|
||||
self._failed_login_attempts_ratelimiter.ratelimit(
|
||||
qualified_user_id.lower(), update=False
|
||||
)
|
||||
|
||||
try:
|
||||
return await self._validate_userid_login(username, login_submission)
|
||||
except LoginError:
|
||||
# The user has failed to log in, so we need to update the rate
|
||||
# limiter. Using `can_do_action` avoids us raising a ratelimit
|
||||
# exception and masking the LoginError. The actual ratelimiting
|
||||
# should have happened above.
|
||||
if ratelimit:
|
||||
self._failed_login_attempts_ratelimiter.can_do_action(
|
||||
qualified_user_id.lower()
|
||||
)
|
||||
raise
|
||||
|
||||
async def _validate_userid_login(
|
||||
self, username: str, login_submission: Dict[str, Any],
|
||||
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
|
||||
"""Helper for validate_login
|
||||
|
||||
Handles login, once we've mapped 3pids onto userids
|
||||
|
||||
Args:
|
||||
username: the username, from the identifier dict
|
||||
login_submission: the whole of the login submission
|
||||
(including 'type' and other relevant fields)
|
||||
Returns:
|
||||
A tuple of the canonical user id, and optional callback
|
||||
to be called once the access token and device id are issued
|
||||
Raises:
|
||||
StoreError if there was a problem accessing the database
|
||||
SynapseError if there was a problem with the request
|
||||
LoginError if there was an authentication problem.
|
||||
"""
|
||||
if username.startswith("@"):
|
||||
qualified_user_id = username
|
||||
else:
|
||||
qualified_user_id = UserID(username, self.hs.hostname).to_string()
|
||||
|
||||
login_type = login_submission.get("type")
|
||||
# we already checked that we have a valid login type
|
||||
assert isinstance(login_type, str)
|
||||
|
||||
known_login_type = False
|
||||
if not password:
|
||||
raise SynapseError(400, "Missing parameter: password")
|
||||
|
||||
for provider in self.password_providers:
|
||||
if hasattr(provider, "check_password") and login_type == LoginType.PASSWORD:
|
||||
known_login_type = True
|
||||
is_valid = await provider.check_password(qualified_user_id, password)
|
||||
if is_valid:
|
||||
return qualified_user_id, None
|
||||
|
||||
if not hasattr(provider, "get_supported_login_types") or not hasattr(
|
||||
provider, "check_auth"
|
||||
):
|
||||
# this password provider doesn't understand custom login types
|
||||
continue
|
||||
|
||||
supported_login_types = provider.get_supported_login_types()
|
||||
if login_type not in supported_login_types:
|
||||
# this password provider doesn't understand this login type
|
||||
@@ -1027,17 +878,15 @@ class AuthHandler(BaseHandler):
|
||||
|
||||
result = await provider.check_auth(username, login_type, login_dict)
|
||||
if result:
|
||||
if isinstance(result, str):
|
||||
result = (result, None)
|
||||
return result
|
||||
|
||||
if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
|
||||
known_login_type = True
|
||||
|
||||
# we've already checked that there is a (valid) password field
|
||||
password = login_submission["password"]
|
||||
assert isinstance(password, str)
|
||||
|
||||
canonical_user_id = await self._check_local_password(
|
||||
qualified_user_id, password
|
||||
qualified_user_id, password # type: ignore
|
||||
)
|
||||
|
||||
if canonical_user_id:
|
||||
@@ -1068,9 +917,19 @@ class AuthHandler(BaseHandler):
|
||||
unsuccessful, `user_id` and `callback` are both `None`.
|
||||
"""
|
||||
for provider in self.password_providers:
|
||||
result = await provider.check_3pid_auth(medium, address, password)
|
||||
if result:
|
||||
return result
|
||||
if hasattr(provider, "check_3pid_auth"):
|
||||
# This function is able to return a deferred that either
|
||||
# resolves None, meaning authentication failure, or upon
|
||||
# success, to a str (which is the user_id) or a tuple of
|
||||
# (user_id, callback_func), where callback_func should be run
|
||||
# after we've finished everything else
|
||||
result = await provider.check_3pid_auth(medium, address, password)
|
||||
if result:
|
||||
# Check if the return value is a str or a tuple
|
||||
if isinstance(result, str):
|
||||
# If it's a str, set callback function to None
|
||||
result = (result, None)
|
||||
return result
|
||||
|
||||
return None, None
|
||||
|
||||
@@ -1128,11 +987,16 @@ class AuthHandler(BaseHandler):
|
||||
|
||||
# see if any of our auth providers want to know about this
|
||||
for provider in self.password_providers:
|
||||
await provider.on_logged_out(
|
||||
user_id=user_info.user_id,
|
||||
device_id=user_info.device_id,
|
||||
access_token=access_token,
|
||||
)
|
||||
if hasattr(provider, "on_logged_out"):
|
||||
# This might return an awaitable, if it does block the log out
|
||||
# until it completes.
|
||||
result = provider.on_logged_out(
|
||||
user_id=user_info.user_id,
|
||||
device_id=user_info.device_id,
|
||||
access_token=access_token,
|
||||
)
|
||||
if inspect.isawaitable(result):
|
||||
await result
|
||||
|
||||
# delete pushers associated with this access token
|
||||
if user_info.token_id is not None:
|
||||
@@ -1161,10 +1025,11 @@ class AuthHandler(BaseHandler):
|
||||
|
||||
# see if any of our auth providers want to know about this
|
||||
for provider in self.password_providers:
|
||||
for token, token_id, device_id in tokens_and_devices:
|
||||
await provider.on_logged_out(
|
||||
user_id=user_id, device_id=device_id, access_token=token
|
||||
)
|
||||
if hasattr(provider, "on_logged_out"):
|
||||
for token, token_id, device_id in tokens_and_devices:
|
||||
await provider.on_logged_out(
|
||||
user_id=user_id, device_id=device_id, access_token=token
|
||||
)
|
||||
|
||||
# delete pushers associated with the access tokens
|
||||
await self.hs.get_pusherpool().remove_pushers_by_access_token(
|
||||
@@ -1488,127 +1353,3 @@ class MacaroonGenerator:
|
||||
macaroon.add_first_party_caveat("gen = 1")
|
||||
macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
|
||||
return macaroon
|
||||
|
||||
|
||||
class PasswordProvider:
|
||||
"""Wrapper for a password auth provider module
|
||||
|
||||
This class abstracts out all of the backwards-compatibility hacks for
|
||||
password providers, to provide a consistent interface.
|
||||
"""
|
||||
|
||||
@classmethod
|
||||
def load(cls, module, config, module_api: ModuleApi) -> "PasswordProvider":
|
||||
try:
|
||||
pp = module(config=config, account_handler=module_api)
|
||||
except Exception as e:
|
||||
logger.error("Error while initializing %r: %s", module, e)
|
||||
raise
|
||||
return cls(pp, module_api)
|
||||
|
||||
def __init__(self, pp, module_api: ModuleApi):
|
||||
self._pp = pp
|
||||
self._module_api = module_api
|
||||
|
||||
self._supported_login_types = {}
|
||||
|
||||
# grandfather in check_password support
|
||||
if hasattr(self._pp, "check_password"):
|
||||
self._supported_login_types[LoginType.PASSWORD] = ("password",)
|
||||
|
||||
g = getattr(self._pp, "get_supported_login_types", None)
|
||||
if g:
|
||||
self._supported_login_types.update(g())
|
||||
|
||||
def __str__(self):
|
||||
return str(self._pp)
|
||||
|
||||
def get_supported_login_types(self) -> Mapping[str, Iterable[str]]:
|
||||
"""Get the login types supported by this password provider
|
||||
|
||||
Returns a map from a login type identifier (such as m.login.password) to an
|
||||
iterable giving the fields which must be provided by the user in the submission
|
||||
to the /login API.
|
||||
|
||||
This wrapper adds m.login.password to the list if the underlying password
|
||||
provider supports the check_password() api.
|
||||
"""
|
||||
return self._supported_login_types
|
||||
|
||||
async def check_auth(
|
||||
self, username: str, login_type: str, login_dict: JsonDict
|
||||
) -> Optional[Tuple[str, Optional[Callable]]]:
|
||||
"""Check if the user has presented valid login credentials
|
||||
|
||||
This wrapper also calls check_password() if the underlying password provider
|
||||
supports the check_password() api and the login type is m.login.password.
|
||||
|
||||
Args:
|
||||
username: user id presented by the client. Either an MXID or an unqualified
|
||||
username.
|
||||
|
||||
login_type: the login type being attempted - one of the types returned by
|
||||
get_supported_login_types()
|
||||
|
||||
login_dict: the dictionary of login secrets passed by the client.
|
||||
|
||||
Returns: (user_id, callback) where `user_id` is the fully-qualified mxid of the
|
||||
user, and `callback` is an optional callback which will be called with the
|
||||
result from the /login call (including access_token, device_id, etc.)
|
||||
"""
|
||||
# first grandfather in a call to check_password
|
||||
if login_type == LoginType.PASSWORD:
|
||||
g = getattr(self._pp, "check_password", None)
|
||||
if g:
|
||||
qualified_user_id = self._module_api.get_qualified_user_id(username)
|
||||
is_valid = await self._pp.check_password(
|
||||
qualified_user_id, login_dict["password"]
|
||||
)
|
||||
if is_valid:
|
||||
return qualified_user_id, None
|
||||
|
||||
g = getattr(self._pp, "check_auth", None)
|
||||
if not g:
|
||||
return None
|
||||
result = await g(username, login_type, login_dict)
|
||||
|
||||
# Check if the return value is a str or a tuple
|
||||
if isinstance(result, str):
|
||||
# If it's a str, set callback function to None
|
||||
return result, None
|
||||
|
||||
return result
|
||||
|
||||
async def check_3pid_auth(
|
||||
self, medium: str, address: str, password: str
|
||||
) -> Optional[Tuple[str, Optional[Callable]]]:
|
||||
g = getattr(self._pp, "check_3pid_auth", None)
|
||||
if not g:
|
||||
return None
|
||||
|
||||
# This function is able to return a deferred that either
|
||||
# resolves None, meaning authentication failure, or upon
|
||||
# success, to a str (which is the user_id) or a tuple of
|
||||
# (user_id, callback_func), where callback_func should be run
|
||||
# after we've finished everything else
|
||||
result = await g(medium, address, password)
|
||||
|
||||
# Check if the return value is a str or a tuple
|
||||
if isinstance(result, str):
|
||||
# If it's a str, set callback function to None
|
||||
return result, None
|
||||
|
||||
return result
|
||||
|
||||
async def on_logged_out(
|
||||
self, user_id: str, device_id: Optional[str], access_token: str
|
||||
) -> None:
|
||||
g = getattr(self._pp, "on_logged_out", None)
|
||||
if not g:
|
||||
return
|
||||
|
||||
# This might return an awaitable, if it does block the log out
|
||||
# until it completes.
|
||||
result = g(user_id=user_id, device_id=device_id, access_token=access_token,)
|
||||
if inspect.isawaitable(result):
|
||||
await result
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
# limitations under the License.
|
||||
import logging
|
||||
import urllib
|
||||
from typing import TYPE_CHECKING, Dict, Optional, Tuple
|
||||
from typing import Dict, Optional, Tuple
|
||||
from xml.etree import ElementTree as ET
|
||||
|
||||
from twisted.web.client import PartialDownloadError
|
||||
@@ -23,9 +23,6 @@ from synapse.api.errors import Codes, LoginError
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.types import UserID, map_username_to_mxid_localpart
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.app.homeserver import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@@ -34,10 +31,10 @@ class CasHandler:
|
||||
Utility class for to handle the response from a CAS SSO service.
|
||||
|
||||
Args:
|
||||
hs
|
||||
hs (synapse.server.HomeServer)
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
def __init__(self, hs):
|
||||
self.hs = hs
|
||||
self._hostname = hs.hostname
|
||||
self._auth_handler = hs.get_auth_handler()
|
||||
@@ -203,57 +200,27 @@ class CasHandler:
|
||||
args["session"] = session
|
||||
username, user_display_name = await self._validate_ticket(ticket, args)
|
||||
|
||||
# Pull out the user-agent and IP from the request.
|
||||
user_agent = request.get_user_agent("")
|
||||
ip_address = self.hs.get_ip_from_request(request)
|
||||
|
||||
# Get the matrix ID from the CAS username.
|
||||
user_id = await self._map_cas_user_to_matrix_user(
|
||||
username, user_display_name, user_agent, ip_address
|
||||
)
|
||||
|
||||
if session:
|
||||
await self._auth_handler.complete_sso_ui_auth(
|
||||
user_id, session, request,
|
||||
)
|
||||
else:
|
||||
# If this not a UI auth request than there must be a redirect URL.
|
||||
assert client_redirect_url
|
||||
|
||||
await self._auth_handler.complete_sso_login(
|
||||
user_id, request, client_redirect_url
|
||||
)
|
||||
|
||||
async def _map_cas_user_to_matrix_user(
|
||||
self,
|
||||
remote_user_id: str,
|
||||
display_name: Optional[str],
|
||||
user_agent: str,
|
||||
ip_address: str,
|
||||
) -> str:
|
||||
"""
|
||||
Given a CAS username, retrieve the user ID for it and possibly register the user.
|
||||
|
||||
Args:
|
||||
remote_user_id: The username from the CAS response.
|
||||
display_name: The display name from the CAS response.
|
||||
user_agent: The user agent of the client making the request.
|
||||
ip_address: The IP address of the client making the request.
|
||||
|
||||
Returns:
|
||||
The user ID associated with this response.
|
||||
"""
|
||||
|
||||
localpart = map_username_to_mxid_localpart(remote_user_id)
|
||||
localpart = map_username_to_mxid_localpart(username)
|
||||
user_id = UserID(localpart, self._hostname).to_string()
|
||||
registered_user_id = await self._auth_handler.check_user_exists(user_id)
|
||||
|
||||
# If the user does not exist, register it.
|
||||
if not registered_user_id:
|
||||
registered_user_id = await self._registration_handler.register_user(
|
||||
localpart=localpart,
|
||||
default_display_name=display_name,
|
||||
user_agent_ips=[(user_agent, ip_address)],
|
||||
if session:
|
||||
await self._auth_handler.complete_sso_ui_auth(
|
||||
registered_user_id, session, request,
|
||||
)
|
||||
|
||||
return registered_user_id
|
||||
else:
|
||||
if not registered_user_id:
|
||||
# Pull out the user-agent and IP from the request.
|
||||
user_agent = request.get_user_agent("")
|
||||
ip_address = self.hs.get_ip_from_request(request)
|
||||
|
||||
registered_user_id = await self._registration_handler.register_user(
|
||||
localpart=localpart,
|
||||
default_display_name=user_display_name,
|
||||
user_agent_ips=(user_agent, ip_address),
|
||||
)
|
||||
|
||||
await self._auth_handler.complete_sso_login(
|
||||
registered_user_id, request, client_redirect_url
|
||||
)
|
||||
|
||||
@@ -39,7 +39,6 @@ class DeactivateAccountHandler(BaseHandler):
|
||||
self._room_member_handler = hs.get_room_member_handler()
|
||||
self._identity_handler = hs.get_identity_handler()
|
||||
self.user_directory_handler = hs.get_user_directory_handler()
|
||||
self._server_name = hs.hostname
|
||||
|
||||
# Flag that indicates whether the process to part users from rooms is running
|
||||
self._user_parter_running = False
|
||||
@@ -153,7 +152,7 @@ class DeactivateAccountHandler(BaseHandler):
|
||||
for room in pending_invites:
|
||||
try:
|
||||
await self._room_member_handler.update_membership(
|
||||
create_requester(user, authenticated_entity=self._server_name),
|
||||
create_requester(user),
|
||||
user,
|
||||
room.room_id,
|
||||
"leave",
|
||||
@@ -209,7 +208,7 @@ class DeactivateAccountHandler(BaseHandler):
|
||||
logger.info("User parter parting %r from %r", user_id, room_id)
|
||||
try:
|
||||
await self._room_member_handler.update_membership(
|
||||
create_requester(user, authenticated_entity=self._server_name),
|
||||
create_requester(user),
|
||||
user,
|
||||
room_id,
|
||||
"leave",
|
||||
|
||||
@@ -55,7 +55,6 @@ from synapse.events import EventBase
|
||||
from synapse.events.snapshot import EventContext
|
||||
from synapse.events.validator import EventValidator
|
||||
from synapse.handlers._base import BaseHandler
|
||||
from synapse.http.servlet import assert_params_in_dict
|
||||
from synapse.logging.context import (
|
||||
make_deferred_yieldable,
|
||||
nested_logging_context,
|
||||
@@ -68,7 +67,7 @@ from synapse.replication.http.devices import ReplicationUserDevicesResyncRestSer
|
||||
from synapse.replication.http.federation import (
|
||||
ReplicationCleanRoomRestServlet,
|
||||
ReplicationFederationSendEventsRestServlet,
|
||||
ReplicationStoreRoomOnOutlierMembershipRestServlet,
|
||||
ReplicationStoreRoomOnInviteRestServlet,
|
||||
)
|
||||
from synapse.state import StateResolutionStore
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
@@ -153,14 +152,12 @@ class FederationHandler(BaseHandler):
|
||||
self._user_device_resync = ReplicationUserDevicesResyncRestServlet.make_client(
|
||||
hs
|
||||
)
|
||||
self._maybe_store_room_on_outlier_membership = ReplicationStoreRoomOnOutlierMembershipRestServlet.make_client(
|
||||
self._maybe_store_room_on_invite = ReplicationStoreRoomOnInviteRestServlet.make_client(
|
||||
hs
|
||||
)
|
||||
else:
|
||||
self._device_list_updater = hs.get_device_handler().device_list_updater
|
||||
self._maybe_store_room_on_outlier_membership = (
|
||||
self.store.maybe_store_room_on_outlier_membership
|
||||
)
|
||||
self._maybe_store_room_on_invite = self.store.maybe_store_room_on_invite
|
||||
|
||||
# When joining a room we need to queue any events for that room up.
|
||||
# For each room, a list of (pdu, origin) tuples.
|
||||
@@ -1620,7 +1617,7 @@ class FederationHandler(BaseHandler):
|
||||
# keep a record of the room version, if we don't yet know it.
|
||||
# (this may get overwritten if we later get a different room version in a
|
||||
# join dance).
|
||||
await self._maybe_store_room_on_outlier_membership(
|
||||
await self._maybe_store_room_on_invite(
|
||||
room_id=event.room_id, room_version=room_version
|
||||
)
|
||||
|
||||
@@ -2689,7 +2686,7 @@ class FederationHandler(BaseHandler):
|
||||
)
|
||||
|
||||
async def on_exchange_third_party_invite_request(
|
||||
self, event_dict: JsonDict
|
||||
self, room_id: str, event_dict: JsonDict
|
||||
) -> None:
|
||||
"""Handle an exchange_third_party_invite request from a remote server
|
||||
|
||||
@@ -2697,11 +2694,12 @@ class FederationHandler(BaseHandler):
|
||||
into a normal m.room.member invite.
|
||||
|
||||
Args:
|
||||
event_dict: Dictionary containing the event body.
|
||||
room_id: The ID of the room.
|
||||
|
||||
event_dict (dict[str, Any]): Dictionary containing the event body.
|
||||
|
||||
"""
|
||||
assert_params_in_dict(event_dict, ["room_id"])
|
||||
room_version = await self.store.get_room_version_id(event_dict["room_id"])
|
||||
room_version = await self.store.get_room_version_id(room_id)
|
||||
|
||||
# NB: event_dict has a particular specced format we might need to fudge
|
||||
# if we change event formats too much.
|
||||
|
||||
@@ -354,8 +354,7 @@ class IdentityHandler(BaseHandler):
|
||||
raise SynapseError(500, "An error was encountered when sending the email")
|
||||
|
||||
token_expires = (
|
||||
self.hs.get_clock().time_msec()
|
||||
+ self.hs.config.email_validation_token_lifetime
|
||||
self.hs.clock.time_msec() + self.hs.config.email_validation_token_lifetime
|
||||
)
|
||||
|
||||
await self.store.start_or_continue_validation_session(
|
||||
|
||||
@@ -472,7 +472,7 @@ class EventCreationHandler:
|
||||
Returns:
|
||||
Tuple of created event, Context
|
||||
"""
|
||||
await self.auth.check_auth_blocking(requester=requester)
|
||||
await self.auth.check_auth_blocking(requester.user.to_string())
|
||||
|
||||
if event_dict["type"] == EventTypes.Create and event_dict["state_key"] == "":
|
||||
room_version = event_dict["content"]["room_version"]
|
||||
@@ -619,13 +619,7 @@ class EventCreationHandler:
|
||||
if requester.app_service is not None:
|
||||
return
|
||||
|
||||
user_id = requester.authenticated_entity
|
||||
if not user_id.startswith("@"):
|
||||
# The authenticated entity might not be a user, e.g. if it's the
|
||||
# server puppetting the user.
|
||||
return
|
||||
|
||||
user = UserID.from_string(user_id)
|
||||
user_id = requester.user.to_string()
|
||||
|
||||
# exempt the system notices user
|
||||
if (
|
||||
@@ -645,7 +639,9 @@ class EventCreationHandler:
|
||||
if u["consent_version"] == self.config.user_consent_version:
|
||||
return
|
||||
|
||||
consent_uri = self._consent_uri_builder.build_user_consent_uri(user.localpart)
|
||||
consent_uri = self._consent_uri_builder.build_user_consent_uri(
|
||||
requester.user.localpart
|
||||
)
|
||||
msg = self._block_events_without_consent_error % {"consent_uri": consent_uri}
|
||||
raise ConsentNotGivenError(msg=msg, consent_uri=consent_uri)
|
||||
|
||||
@@ -1142,9 +1138,6 @@ class EventCreationHandler:
|
||||
if original_event.room_id != event.room_id:
|
||||
raise SynapseError(400, "Cannot redact event from a different room")
|
||||
|
||||
if original_event.type == EventTypes.ServerACL:
|
||||
raise AuthError(403, "Redacting server ACL events is not permitted")
|
||||
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
auth_events_ids = self.auth.compute_auth_events(
|
||||
event, prev_state_ids, for_verification=True
|
||||
@@ -1256,7 +1249,7 @@ class EventCreationHandler:
|
||||
for user_id in members:
|
||||
if not self.hs.is_mine_id(user_id):
|
||||
continue
|
||||
requester = create_requester(user_id, authenticated_entity=self.server_name)
|
||||
requester = create_requester(user_id)
|
||||
try:
|
||||
event, context = await self.create_event(
|
||||
requester,
|
||||
@@ -1277,6 +1270,11 @@ class EventCreationHandler:
|
||||
requester, event, context, ratelimit=False, ignore_shadow_ban=True,
|
||||
)
|
||||
return True
|
||||
except ConsentNotGivenError:
|
||||
logger.info(
|
||||
"Failed to send dummy event into room %s for user %s due to "
|
||||
"lack of consent. Will try another user" % (room_id, user_id)
|
||||
)
|
||||
except AuthError:
|
||||
logger.info(
|
||||
"Failed to send dummy event into room %s for user %s due to "
|
||||
|
||||
@@ -12,7 +12,6 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import inspect
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, Tuple, TypeVar
|
||||
from urllib.parse import urlencode
|
||||
@@ -35,8 +34,7 @@ from typing_extensions import TypedDict
|
||||
from twisted.web.client import readBody
|
||||
|
||||
from synapse.config import ConfigError
|
||||
from synapse.handlers._base import BaseHandler
|
||||
from synapse.handlers.sso import MappingException, UserAttributes
|
||||
from synapse.http.server import respond_with_html
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
|
||||
@@ -85,12 +83,17 @@ class OidcError(Exception):
|
||||
return self.error
|
||||
|
||||
|
||||
class OidcHandler(BaseHandler):
|
||||
class MappingException(Exception):
|
||||
"""Used to catch errors when mapping the UserInfo object
|
||||
"""
|
||||
|
||||
|
||||
class OidcHandler:
|
||||
"""Handles requests related to the OpenID Connect login flow.
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
super().__init__(hs)
|
||||
self.hs = hs
|
||||
self._callback_url = hs.config.oidc_callback_url # type: str
|
||||
self._scopes = hs.config.oidc_scopes # type: List[str]
|
||||
self._user_profile_method = hs.config.oidc_user_profile_method # type: str
|
||||
@@ -117,13 +120,36 @@ class OidcHandler(BaseHandler):
|
||||
self._http_client = hs.get_proxied_http_client()
|
||||
self._auth_handler = hs.get_auth_handler()
|
||||
self._registration_handler = hs.get_registration_handler()
|
||||
self._datastore = hs.get_datastore()
|
||||
self._clock = hs.get_clock()
|
||||
self._hostname = hs.hostname # type: str
|
||||
self._server_name = hs.config.server_name # type: str
|
||||
self._macaroon_secret_key = hs.config.macaroon_secret_key
|
||||
self._error_template = hs.config.sso_error_template
|
||||
|
||||
# identifier for the external_ids table
|
||||
self._auth_provider_id = "oidc"
|
||||
|
||||
self._sso_handler = hs.get_sso_handler()
|
||||
def _render_error(
|
||||
self, request, error: str, error_description: Optional[str] = None
|
||||
) -> None:
|
||||
"""Render the error template and respond to the request with it.
|
||||
|
||||
This is used to show errors to the user. The template of this page can
|
||||
be found under `synapse/res/templates/sso_error.html`.
|
||||
|
||||
Args:
|
||||
request: The incoming request from the browser.
|
||||
We'll respond with an HTML page describing the error.
|
||||
error: A technical identifier for this error. Those include
|
||||
well-known OAuth2/OIDC error types like invalid_request or
|
||||
access_denied.
|
||||
error_description: A human-readable description of the error.
|
||||
"""
|
||||
html = self._error_template.render(
|
||||
error=error, error_description=error_description
|
||||
)
|
||||
respond_with_html(request, 400, html)
|
||||
|
||||
def _validate_metadata(self):
|
||||
"""Verifies the provider metadata.
|
||||
@@ -545,7 +571,7 @@ class OidcHandler(BaseHandler):
|
||||
|
||||
Since we might want to display OIDC-related errors in a user-friendly
|
||||
way, we don't raise SynapseError from here. Instead, we call
|
||||
``self._sso_handler.render_error`` which displays an HTML page for the error.
|
||||
``self._render_error`` which displays an HTML page for the error.
|
||||
|
||||
Most of the OpenID Connect logic happens here:
|
||||
|
||||
@@ -583,7 +609,7 @@ class OidcHandler(BaseHandler):
|
||||
if error != "access_denied":
|
||||
logger.error("Error from the OIDC provider: %s %s", error, description)
|
||||
|
||||
self._sso_handler.render_error(request, error, description)
|
||||
self._render_error(request, error, description)
|
||||
return
|
||||
|
||||
# otherwise, it is presumably a successful response. see:
|
||||
@@ -593,9 +619,7 @@ class OidcHandler(BaseHandler):
|
||||
session = request.getCookie(SESSION_COOKIE_NAME) # type: Optional[bytes]
|
||||
if session is None:
|
||||
logger.info("No session cookie found")
|
||||
self._sso_handler.render_error(
|
||||
request, "missing_session", "No session cookie found"
|
||||
)
|
||||
self._render_error(request, "missing_session", "No session cookie found")
|
||||
return
|
||||
|
||||
# Remove the cookie. There is a good chance that if the callback failed
|
||||
@@ -613,9 +637,7 @@ class OidcHandler(BaseHandler):
|
||||
# Check for the state query parameter
|
||||
if b"state" not in request.args:
|
||||
logger.info("State parameter is missing")
|
||||
self._sso_handler.render_error(
|
||||
request, "invalid_request", "State parameter is missing"
|
||||
)
|
||||
self._render_error(request, "invalid_request", "State parameter is missing")
|
||||
return
|
||||
|
||||
state = request.args[b"state"][0].decode()
|
||||
@@ -629,19 +651,17 @@ class OidcHandler(BaseHandler):
|
||||
) = self._verify_oidc_session_token(session, state)
|
||||
except MacaroonDeserializationException as e:
|
||||
logger.exception("Invalid session")
|
||||
self._sso_handler.render_error(request, "invalid_session", str(e))
|
||||
self._render_error(request, "invalid_session", str(e))
|
||||
return
|
||||
except MacaroonInvalidSignatureException as e:
|
||||
logger.exception("Could not verify session")
|
||||
self._sso_handler.render_error(request, "mismatching_session", str(e))
|
||||
self._render_error(request, "mismatching_session", str(e))
|
||||
return
|
||||
|
||||
# Exchange the code with the provider
|
||||
if b"code" not in request.args:
|
||||
logger.info("Code parameter is missing")
|
||||
self._sso_handler.render_error(
|
||||
request, "invalid_request", "Code parameter is missing"
|
||||
)
|
||||
self._render_error(request, "invalid_request", "Code parameter is missing")
|
||||
return
|
||||
|
||||
logger.debug("Exchanging code")
|
||||
@@ -650,7 +670,7 @@ class OidcHandler(BaseHandler):
|
||||
token = await self._exchange_code(code)
|
||||
except OidcError as e:
|
||||
logger.exception("Could not exchange code")
|
||||
self._sso_handler.render_error(request, e.error, e.error_description)
|
||||
self._render_error(request, e.error, e.error_description)
|
||||
return
|
||||
|
||||
logger.debug("Successfully obtained OAuth2 access token")
|
||||
@@ -663,7 +683,7 @@ class OidcHandler(BaseHandler):
|
||||
userinfo = await self._fetch_userinfo(token)
|
||||
except Exception as e:
|
||||
logger.exception("Could not fetch userinfo")
|
||||
self._sso_handler.render_error(request, "fetch_error", str(e))
|
||||
self._render_error(request, "fetch_error", str(e))
|
||||
return
|
||||
else:
|
||||
logger.debug("Extracting userinfo from id_token")
|
||||
@@ -671,7 +691,7 @@ class OidcHandler(BaseHandler):
|
||||
userinfo = await self._parse_id_token(token, nonce=nonce)
|
||||
except Exception as e:
|
||||
logger.exception("Invalid id_token")
|
||||
self._sso_handler.render_error(request, "invalid_token", str(e))
|
||||
self._render_error(request, "invalid_token", str(e))
|
||||
return
|
||||
|
||||
# Pull out the user-agent and IP from the request.
|
||||
@@ -685,7 +705,7 @@ class OidcHandler(BaseHandler):
|
||||
)
|
||||
except MappingException as e:
|
||||
logger.exception("Could not map user")
|
||||
self._sso_handler.render_error(request, "mapping_error", str(e))
|
||||
self._render_error(request, "mapping_error", str(e))
|
||||
return
|
||||
|
||||
# Mapping providers might not have get_extra_attributes: only call this
|
||||
@@ -750,7 +770,7 @@ class OidcHandler(BaseHandler):
|
||||
macaroon.add_first_party_caveat(
|
||||
"ui_auth_session_id = %s" % (ui_auth_session_id,)
|
||||
)
|
||||
now = self.clock.time_msec()
|
||||
now = self._clock.time_msec()
|
||||
expiry = now + duration_in_ms
|
||||
macaroon.add_first_party_caveat("time < %d" % (expiry,))
|
||||
|
||||
@@ -825,7 +845,7 @@ class OidcHandler(BaseHandler):
|
||||
if not caveat.startswith(prefix):
|
||||
return False
|
||||
expiry = int(caveat[len(prefix) :])
|
||||
now = self.clock.time_msec()
|
||||
now = self._clock.time_msec()
|
||||
return now < expiry
|
||||
|
||||
async def _map_userinfo_to_user(
|
||||
@@ -865,77 +885,71 @@ class OidcHandler(BaseHandler):
|
||||
# to be strings.
|
||||
remote_user_id = str(remote_user_id)
|
||||
|
||||
# Older mapping providers don't accept the `failures` argument, so we
|
||||
# try and detect support.
|
||||
mapper_signature = inspect.signature(
|
||||
self._user_mapping_provider.map_user_attributes
|
||||
)
|
||||
supports_failures = "failures" in mapper_signature.parameters
|
||||
|
||||
async def oidc_response_to_user_attributes(failures: int) -> UserAttributes:
|
||||
"""
|
||||
Call the mapping provider to map the OIDC userinfo and token to user attributes.
|
||||
|
||||
This is backwards compatibility for abstraction for the SSO handler.
|
||||
"""
|
||||
if supports_failures:
|
||||
attributes = await self._user_mapping_provider.map_user_attributes(
|
||||
userinfo, token, failures
|
||||
)
|
||||
else:
|
||||
# If the mapping provider does not support processing failures,
|
||||
# do not continually generate the same Matrix ID since it will
|
||||
# continue to already be in use. Note that the error raised is
|
||||
# arbitrary and will get turned into a MappingException.
|
||||
if failures:
|
||||
raise MappingException(
|
||||
"Mapping provider does not support de-duplicating Matrix IDs"
|
||||
)
|
||||
|
||||
attributes = await self._user_mapping_provider.map_user_attributes( # type: ignore
|
||||
userinfo, token
|
||||
)
|
||||
|
||||
return UserAttributes(**attributes)
|
||||
|
||||
async def grandfather_existing_users() -> Optional[str]:
|
||||
if self._allow_existing_users:
|
||||
# If allowing existing users we want to generate a single localpart
|
||||
# and attempt to match it.
|
||||
attributes = await oidc_response_to_user_attributes(failures=0)
|
||||
|
||||
user_id = UserID(attributes.localpart, self.server_name).to_string()
|
||||
users = await self.store.get_users_by_id_case_insensitive(user_id)
|
||||
if users:
|
||||
# If an existing matrix ID is returned, then use it.
|
||||
if len(users) == 1:
|
||||
previously_registered_user_id = next(iter(users))
|
||||
elif user_id in users:
|
||||
previously_registered_user_id = user_id
|
||||
else:
|
||||
# Do not attempt to continue generating Matrix IDs.
|
||||
raise MappingException(
|
||||
"Attempted to login as '{}' but it matches more than one user inexactly: {}".format(
|
||||
user_id, users
|
||||
)
|
||||
)
|
||||
|
||||
return previously_registered_user_id
|
||||
|
||||
return None
|
||||
|
||||
return await self._sso_handler.get_mxid_from_sso(
|
||||
logger.info(
|
||||
"Looking for existing mapping for user %s:%s",
|
||||
self._auth_provider_id,
|
||||
remote_user_id,
|
||||
user_agent,
|
||||
ip_address,
|
||||
oidc_response_to_user_attributes,
|
||||
grandfather_existing_users,
|
||||
)
|
||||
|
||||
registered_user_id = await self._datastore.get_user_by_external_id(
|
||||
self._auth_provider_id, remote_user_id,
|
||||
)
|
||||
|
||||
UserAttributeDict = TypedDict(
|
||||
"UserAttributeDict", {"localpart": str, "display_name": Optional[str]}
|
||||
if registered_user_id is not None:
|
||||
logger.info("Found existing mapping %s", registered_user_id)
|
||||
return registered_user_id
|
||||
|
||||
try:
|
||||
attributes = await self._user_mapping_provider.map_user_attributes(
|
||||
userinfo, token
|
||||
)
|
||||
except Exception as e:
|
||||
raise MappingException(
|
||||
"Could not extract user attributes from OIDC response: " + str(e)
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
"Retrieved user attributes from user mapping provider: %r", attributes
|
||||
)
|
||||
|
||||
if not attributes["localpart"]:
|
||||
raise MappingException("localpart is empty")
|
||||
|
||||
localpart = map_username_to_mxid_localpart(attributes["localpart"])
|
||||
|
||||
user_id = UserID(localpart, self._hostname).to_string()
|
||||
users = await self._datastore.get_users_by_id_case_insensitive(user_id)
|
||||
if users:
|
||||
if self._allow_existing_users:
|
||||
if len(users) == 1:
|
||||
registered_user_id = next(iter(users))
|
||||
elif user_id in users:
|
||||
registered_user_id = user_id
|
||||
else:
|
||||
raise MappingException(
|
||||
"Attempted to login as '{}' but it matches more than one user inexactly: {}".format(
|
||||
user_id, list(users.keys())
|
||||
)
|
||||
)
|
||||
else:
|
||||
# This mxid is taken
|
||||
raise MappingException("mxid '{}' is already taken".format(user_id))
|
||||
else:
|
||||
# It's the first time this user is logging in and the mapped mxid was
|
||||
# not taken, register the user
|
||||
registered_user_id = await self._registration_handler.register_user(
|
||||
localpart=localpart,
|
||||
default_display_name=attributes["display_name"],
|
||||
user_agent_ips=(user_agent, ip_address),
|
||||
)
|
||||
await self._datastore.record_user_external_id(
|
||||
self._auth_provider_id, remote_user_id, registered_user_id,
|
||||
)
|
||||
return registered_user_id
|
||||
|
||||
|
||||
UserAttribute = TypedDict(
|
||||
"UserAttribute", {"localpart": str, "display_name": Optional[str]}
|
||||
)
|
||||
C = TypeVar("C")
|
||||
|
||||
@@ -978,15 +992,13 @@ class OidcMappingProvider(Generic[C]):
|
||||
raise NotImplementedError()
|
||||
|
||||
async def map_user_attributes(
|
||||
self, userinfo: UserInfo, token: Token, failures: int
|
||||
) -> UserAttributeDict:
|
||||
self, userinfo: UserInfo, token: Token
|
||||
) -> UserAttribute:
|
||||
"""Map a `UserInfo` object into user attributes.
|
||||
|
||||
Args:
|
||||
userinfo: An object representing the user given by the OIDC provider
|
||||
token: A dict with the tokens returned by the provider
|
||||
failures: How many times a call to this function with this
|
||||
UserInfo has resulted in a failure.
|
||||
|
||||
Returns:
|
||||
A dict containing the ``localpart`` and (optionally) the ``display_name``
|
||||
@@ -1086,17 +1098,10 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
|
||||
return userinfo[self._config.subject_claim]
|
||||
|
||||
async def map_user_attributes(
|
||||
self, userinfo: UserInfo, token: Token, failures: int
|
||||
) -> UserAttributeDict:
|
||||
self, userinfo: UserInfo, token: Token
|
||||
) -> UserAttribute:
|
||||
localpart = self._config.localpart_template.render(user=userinfo).strip()
|
||||
|
||||
# Ensure only valid characters are included in the MXID.
|
||||
localpart = map_username_to_mxid_localpart(localpart)
|
||||
|
||||
# Append suffix integer if last call to this function failed to produce
|
||||
# a usable mxid.
|
||||
localpart += str(failures) if failures else ""
|
||||
|
||||
display_name = None # type: Optional[str]
|
||||
if self._config.display_name_template is not None:
|
||||
display_name = self._config.display_name_template.render(
|
||||
@@ -1106,7 +1111,7 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
|
||||
if display_name == "":
|
||||
display_name = None
|
||||
|
||||
return UserAttributeDict(localpart=localpart, display_name=display_name)
|
||||
return UserAttribute(localpart=localpart, display_name=display_name)
|
||||
|
||||
async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:
|
||||
extras = {} # type: Dict[str, str]
|
||||
|
||||
@@ -299,22 +299,17 @@ class PaginationHandler:
|
||||
"""
|
||||
return self._purges_by_id.get(purge_id)
|
||||
|
||||
async def purge_room(self, room_id: str, force: bool = False) -> None:
|
||||
"""Purge the given room from the database.
|
||||
|
||||
Args:
|
||||
room_id: room to be purged
|
||||
force: set true to skip checking for joined users.
|
||||
"""
|
||||
async def purge_room(self, room_id: str) -> None:
|
||||
"""Purge the given room from the database"""
|
||||
with await self.pagination_lock.write(room_id):
|
||||
# check we know about the room
|
||||
await self.store.get_room_version_id(room_id)
|
||||
|
||||
# first check that we have no users in this room
|
||||
if not force:
|
||||
joined = await self.store.is_host_joined(room_id, self._server_name)
|
||||
if joined:
|
||||
raise SynapseError(400, "Users are still joined to this room")
|
||||
joined = await self.store.is_host_joined(room_id, self._server_name)
|
||||
|
||||
if joined:
|
||||
raise SynapseError(400, "Users are still joined to this room")
|
||||
|
||||
await self.storage.purge_events.purge_room(room_id)
|
||||
|
||||
|
||||
@@ -25,7 +25,7 @@ The methods that define policy are:
|
||||
import abc
|
||||
import logging
|
||||
from contextlib import contextmanager
|
||||
from typing import TYPE_CHECKING, Dict, Iterable, List, Set, Tuple
|
||||
from typing import Dict, Iterable, List, Set, Tuple
|
||||
|
||||
from prometheus_client import Counter
|
||||
from typing_extensions import ContextManager
|
||||
@@ -46,7 +46,8 @@ from synapse.util.caches.descriptors import cached
|
||||
from synapse.util.metrics import Measure
|
||||
from synapse.util.wheel_timer import WheelTimer
|
||||
|
||||
if TYPE_CHECKING:
|
||||
MYPY = False
|
||||
if MYPY:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -189,9 +189,7 @@ class ProfileHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if not isinstance(new_displayname, str):
|
||||
raise SynapseError(
|
||||
400, "'displayname' must be a string", errcode=Codes.INVALID_PARAM
|
||||
)
|
||||
raise SynapseError(400, "Invalid displayname")
|
||||
|
||||
if len(new_displayname) > MAX_DISPLAYNAME_LEN:
|
||||
raise SynapseError(
|
||||
@@ -206,9 +204,7 @@ class ProfileHandler(BaseHandler):
|
||||
# the join event to update the displayname in the rooms.
|
||||
# This must be done by the target user himself.
|
||||
if by_admin:
|
||||
requester = create_requester(
|
||||
target_user, authenticated_entity=requester.authenticated_entity,
|
||||
)
|
||||
requester = create_requester(target_user)
|
||||
|
||||
await self.store.set_profile_displayname(
|
||||
target_user.localpart, displayname_to_set
|
||||
@@ -277,9 +273,7 @@ class ProfileHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if not isinstance(new_avatar_url, str):
|
||||
raise SynapseError(
|
||||
400, "'avatar_url' must be a string", errcode=Codes.INVALID_PARAM
|
||||
)
|
||||
raise SynapseError(400, "Invalid displayname")
|
||||
|
||||
if len(new_avatar_url) > MAX_AVATAR_URL_LEN:
|
||||
raise SynapseError(
|
||||
@@ -288,9 +282,7 @@ class ProfileHandler(BaseHandler):
|
||||
|
||||
# Same like set_displayname
|
||||
if by_admin:
|
||||
requester = create_requester(
|
||||
target_user, authenticated_entity=requester.authenticated_entity
|
||||
)
|
||||
requester = create_requester(target_user)
|
||||
|
||||
await self.store.set_profile_avatar_url(target_user.localpart, new_avatar_url)
|
||||
|
||||
|
||||
@@ -158,8 +158,7 @@ class ReceiptEventSource:
|
||||
if from_key == to_key:
|
||||
return [], to_key
|
||||
|
||||
# Fetch all read receipts for all rooms, up to a limit of 100. This is ordered
|
||||
# by most recent.
|
||||
# We first need to fetch all new receipts
|
||||
rooms_to_events = await self.store.get_linearized_receipts_for_all_rooms(
|
||||
from_key=from_key, to_key=to_key
|
||||
)
|
||||
|
||||
Some files were not shown because too many files have changed in this diff Show More
Reference in New Issue
Block a user