Compare commits

7 Commits

Author        | SHA1       | Message                                                            | Date
Erik Johnston | d76698ef30 | Up no output timeout                                               | 2021-02-18 14:00:59 +00:00
Erik Johnston | 48cc4f8903 | Try building lxml up front to avoid time outs                      | 2021-02-18 12:08:21 +00:00
Erik Johnston | 3fe29250c4 | Try building cryptography separately to avoid time outs            | 2021-02-18 10:17:09 +00:00
Erik Johnston | 1dd584b46d | Test circleci config                                               | 2021-02-17 17:46:29 +00:00
Erik Johnston | 6314645c05 | Newsfile                                                           | 2021-02-17 17:45:44 +00:00
Erik Johnston | 32b2c4c97f | Update circleci config to use cargo cache                          | 2021-02-17 17:45:44 +00:00
Erik Johnston | b64dadc497 | Add a Dockerfile that allows using a base image with a cargo cache | 2021-02-17 15:09:45 +00:00
171 changed files with 1102 additions and 3120 deletions

.circleci/config.yml

@@ -14,7 +14,7 @@ jobs:
platforms: linux/amd64
- docker_build:
tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
platforms: linux/amd64,linux/arm64
platforms: linux/amd64,linux/arm/v7,linux/arm64
dockerhubuploadlatest:
docker:
@@ -22,12 +22,12 @@ jobs:
steps:
- checkout
- docker_prepare
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
# - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
# for `latest`, we don't want the arm images to disappear, so don't update the tag
# until all of the platforms are built.
- docker_build:
tag: -t matrixdotorg/synapse:latest
platforms: linux/amd64,linux/arm64
tag: -t 127.0.0.1:5000/synapse:erikj-test
platforms: linux/amd64,linux/arm/v7,linux/arm64
workflows:
build:
@@ -41,7 +41,7 @@ workflows:
- dockerhubuploadlatest:
filters:
branches:
only: master
only: erikj/arm_docker_cache
commands:
docker_prepare:
@@ -52,9 +52,9 @@ commands:
default: "v0.4.1"
steps:
- setup_remote_docker:
# 19.03.13 was the most recent available on circleci at the time of
# 20.10.2 was the most recent available on circleci at the time of
# writing.
version: 19.03.13
version: 20.10.2
- run: apk add --no-cache curl
- run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
- run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx
@@ -64,7 +64,10 @@ commands:
# create a context named `builder` for the builds
- run: docker context create builder
# create a buildx builder using the new context, and set it as the default
- run: docker buildx create builder --use
- run: docker buildx create --driver docker-container --driver-opt network=host builder --use
# Start a registry so that we have somewhere to store our temporary docker
# images (as multi-arch builds don't work with the standard local docker store)
- run: docker run -d -p 127.0.0.1:5000:5000 --name registry registry:2
docker_build:
description: Builds and pushes images to dockerhub using buildx
@@ -75,4 +78,7 @@ commands:
tag:
type: string
steps:
- run: docker buildx build -f docker/Dockerfile --push --platform << parameters.platforms >> --label gitsha1=${CIRCLE_SHA1} << parameters.tag >> --progress=plain .
- run: docker buildx build -f docker/Dockerfile-cargo-cache --push -t 127.0.0.1:5000/cargo_cache --platform << parameters.platforms >> --progress=plain .
- run:
command: docker buildx build -f docker/Dockerfile --push --platform << parameters.platforms >> --label gitsha1=${CIRCLE_SHA1} << parameters.tag >> --build-arg BASE_IMAGE=127.0.0.1:5000/cargo_cache --build-arg CARGO_NET_OFFLINE=true --progress=plain .
no_output_timeout: 30m
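
For illustration, the flow these config changes set up can be reproduced by hand roughly as follows. This is a sketch rather than part of the diff: the image names, single target platform, and builder invocation are simplified from the commands above.

```sh
# Start a throwaway local registry to hold intermediate multi-arch images.
docker run -d -p 127.0.0.1:5000:5000 --name registry registry:2

# Create a buildx builder on the host network so it can reach that registry.
docker buildx create --driver docker-container --driver-opt network=host --use

# Build and push the cargo-cache base image first...
docker buildx build -f docker/Dockerfile-cargo-cache --push \
    -t 127.0.0.1:5000/cargo_cache --platform linux/arm/v7 .

# ...then build Synapse against it, with cargo forced offline so the
# pre-fetched index is used instead of hitting the network.
docker buildx build -f docker/Dockerfile --push \
    -t 127.0.0.1:5000/synapse:test --platform linux/arm/v7 \
    --build-arg BASE_IMAGE=127.0.0.1:5000/cargo_cache \
    --build-arg CARGO_NET_OFFLINE=true .
```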

.gitignore vendored

@@ -6,14 +6,13 @@
*.egg
*.egg-info
*.lock
*.py[cod]
*.pyc
*.snap
*.tac
_trial_temp/
_trial_temp*/
/out
.DS_Store
__pycache__/
# stuff that is likely to exist when you run a server locally
/*.db

CHANGELOG.md

@@ -1,171 +1,9 @@
Synapse 1.29.0 (2021-03-08)
===========================
Note that synapse now expects an `X-Forwarded-Proto` header when used with a reverse proxy. Please see [UPGRADE.rst](UPGRADE.rst#upgrading-to-v1290) for more details on this change.
No significant changes.
Synapse 1.29.0rc1 (2021-03-04)
==============================
Features
--------
- Add rate limiters to cross-user key sharing requests. ([\#8957](https://github.com/matrix-org/synapse/issues/8957))
- Add `order_by` to the admin API `GET /_synapse/admin/v1/users/<user_id>/media`. Contributed by @dklimpel. ([\#8978](https://github.com/matrix-org/synapse/issues/8978))
- Add some configuration settings to make users' profile data more private. ([\#9203](https://github.com/matrix-org/synapse/issues/9203))
- The `no_proxy` and `NO_PROXY` environment variables are now respected in proxied HTTP clients with the lowercase form taking precedence if both are present. Additionally, the lowercase `https_proxy` environment variable is now respected in proxied HTTP clients on top of existing support for the uppercase `HTTPS_PROXY` form and takes precedence if both are present. Contributed by Timothy Leung. ([\#9372](https://github.com/matrix-org/synapse/issues/9372))
- Add a configuration option, `user_directory.prefer_local_users`, which when enabled will make it more likely for users on the same server as you to appear above other users. ([\#9383](https://github.com/matrix-org/synapse/issues/9383), [\#9385](https://github.com/matrix-org/synapse/issues/9385))
- Add support for regenerating thumbnails if they have been deleted but the original image is still stored. ([\#9438](https://github.com/matrix-org/synapse/issues/9438))
- Add support for `X-Forwarded-Proto` header when using a reverse proxy. ([\#9472](https://github.com/matrix-org/synapse/issues/9472), [\#9501](https://github.com/matrix-org/synapse/issues/9501), [\#9512](https://github.com/matrix-org/synapse/issues/9512), [\#9539](https://github.com/matrix-org/synapse/issues/9539))
Bugfixes
--------
- Fix a bug where users' pushers were not all deleted when they deactivated their account. ([\#9285](https://github.com/matrix-org/synapse/issues/9285), [\#9516](https://github.com/matrix-org/synapse/issues/9516))
- Fix a bug where a lot of unnecessary presence updates were sent when joining a room. ([\#9402](https://github.com/matrix-org/synapse/issues/9402))
- Fix a bug that caused multiple calls to the experimental `shared_rooms` endpoint to return stale results. ([\#9416](https://github.com/matrix-org/synapse/issues/9416))
- Fix a bug in single sign-on which could cause a "No session cookie found" error. ([\#9436](https://github.com/matrix-org/synapse/issues/9436))
- Fix bug introduced in v1.27.0 where allowing a user to choose their own username when logging in via single sign-on did not work unless an `idp_icon` was defined. ([\#9440](https://github.com/matrix-org/synapse/issues/9440))
- Fix a bug introduced in v1.26.0 where some sequences were not properly configured when running `synapse_port_db`. ([\#9449](https://github.com/matrix-org/synapse/issues/9449))
- Fix deleting pushers when using sharded pushers. ([\#9465](https://github.com/matrix-org/synapse/issues/9465), [\#9466](https://github.com/matrix-org/synapse/issues/9466), [\#9479](https://github.com/matrix-org/synapse/issues/9479), [\#9536](https://github.com/matrix-org/synapse/issues/9536))
- Fix missing startup checks for the consistency of certain PostgreSQL sequences. ([\#9470](https://github.com/matrix-org/synapse/issues/9470))
- Fix a long-standing bug where the media repository could leak file descriptors while previewing media. ([\#9497](https://github.com/matrix-org/synapse/issues/9497))
- Properly purge the event chain cover index when purging history. ([\#9498](https://github.com/matrix-org/synapse/issues/9498))
- Fix missing chain cover index due to a schema delta not being applied correctly. Only affected servers that ran development versions. ([\#9503](https://github.com/matrix-org/synapse/issues/9503))
- Fix a bug introduced in v1.25.0 where `/_synapse/admin/join/` would fail when given a room alias. ([\#9506](https://github.com/matrix-org/synapse/issues/9506))
- Prevent presence background jobs from running when presence is disabled. ([\#9530](https://github.com/matrix-org/synapse/issues/9530))
- Fix rare edge case that caused a background update to fail if the server had rejected an event that had duplicate auth events. ([\#9537](https://github.com/matrix-org/synapse/issues/9537))
Improved Documentation
----------------------
- Update the example systemd config to propagate reloads to individual units. ([\#9463](https://github.com/matrix-org/synapse/issues/9463))
Internal Changes
----------------
- Add documentation and type hints to `parse_duration`. ([\#9432](https://github.com/matrix-org/synapse/issues/9432))
- Remove vestiges of `uploads_path` configuration setting. ([\#9462](https://github.com/matrix-org/synapse/issues/9462))
- Add a comment about systemd-python. ([\#9464](https://github.com/matrix-org/synapse/issues/9464))
- Test that we require validated email for email pushers. ([\#9496](https://github.com/matrix-org/synapse/issues/9496))
- Allow python to generate bytecode for synapse. ([\#9502](https://github.com/matrix-org/synapse/issues/9502))
- Fix incorrect type hints. ([\#9515](https://github.com/matrix-org/synapse/issues/9515), [\#9518](https://github.com/matrix-org/synapse/issues/9518))
- Add type hints to device and event report admin API. ([\#9519](https://github.com/matrix-org/synapse/issues/9519))
- Add type hints to user admin API. ([\#9521](https://github.com/matrix-org/synapse/issues/9521))
- Bump the versions of mypy and mypy-zope used for static type checking. ([\#9529](https://github.com/matrix-org/synapse/issues/9529))
Synapse 1.28.0 (2021-02-25)
===========================
Note that this release drops support for ARMv7 in the official Docker images, due to repeated problems building for ARMv7 (and the associated maintenance burden this entails).
This release also fixes the documentation included in v1.27.0 around the callback URI for SAML2 identity providers. If your server is configured to use single sign-on via a SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
Internal Changes
----------------
- Revert change in v1.28.0rc1 to remove the deprecated SAML endpoint. ([\#9474](https://github.com/matrix-org/synapse/issues/9474))
Synapse 1.28.0rc1 (2021-02-19)
==============================
Removal warning
---------------
The v1 list accounts API is deprecated and will be removed in a future release.
This API was undocumented and misleading. It can be replaced by the
[v2 list accounts API](https://github.com/matrix-org/synapse/blob/release-v1.28.0/docs/admin_api/user_admin_api.rst#list-accounts),
which has been available since Synapse 1.7.0 (2019-12-13).
Please check if you're using any scripts which use the admin API and replace
`GET /_synapse/admin/v1/users/<user_id>` with `GET /_synapse/admin/v2/users`.
Features
--------
- New admin API to get the context of an event: `/_synapse/admin/rooms/{roomId}/context/{eventId}`. ([\#9150](https://github.com/matrix-org/synapse/issues/9150))
- Further improvements to the user experience of registration via single sign-on. ([\#9300](https://github.com/matrix-org/synapse/issues/9300), [\#9301](https://github.com/matrix-org/synapse/issues/9301))
- Add hook to spam checker modules that allow checking file uploads and remote downloads. ([\#9311](https://github.com/matrix-org/synapse/issues/9311))
- Add support for receiving OpenID Connect authentication responses via form `POST`s rather than `GET`s. ([\#9376](https://github.com/matrix-org/synapse/issues/9376))
- Add the shadow-banning status to the admin API for user info. ([\#9400](https://github.com/matrix-org/synapse/issues/9400))
Bugfixes
--------
- Fix long-standing bug where sending email notifications would fail for rooms that the server had since left. ([\#9257](https://github.com/matrix-org/synapse/issues/9257))
- Fix bug introduced in Synapse 1.27.0rc1 which meant the "session expired" error page during SSO registration was badly formatted. ([\#9296](https://github.com/matrix-org/synapse/issues/9296))
- Assert a maximum length for some parameters for spec compliance. ([\#9321](https://github.com/matrix-org/synapse/issues/9321), [\#9393](https://github.com/matrix-org/synapse/issues/9393))
- Fix additional errors when previewing URLs: "AttributeError 'NoneType' object has no attribute 'xpath'" and "ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.". ([\#9333](https://github.com/matrix-org/synapse/issues/9333))
- Fix a bug causing Synapse to impose the wrong type constraints on fields when processing responses from appservices to `/_matrix/app/v1/thirdparty/user/{protocol}`. ([\#9361](https://github.com/matrix-org/synapse/issues/9361))
- Fix bug where Synapse would occasionally stop reconnecting to Redis after the connection was lost. ([\#9391](https://github.com/matrix-org/synapse/issues/9391))
- Fix a long-standing bug when upgrading a room: "TypeError: '>' not supported between instances of 'NoneType' and 'int'". ([\#9395](https://github.com/matrix-org/synapse/issues/9395))
- Reduce the amount of memory used when generating the URL preview of a file that is larger than the `max_spider_size`. ([\#9421](https://github.com/matrix-org/synapse/issues/9421))
- Fix a long-standing bug in the deduplication of old presence, resulting in no deduplication. ([\#9425](https://github.com/matrix-org/synapse/issues/9425))
- The `ui_auth.session_timeout` config option can now be specified in terms of number of seconds/minutes/etc. Contributed by Rishabh Arya. ([\#9426](https://github.com/matrix-org/synapse/issues/9426))
- Fix a bug introduced in v1.27.0: "TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType." related to the user directory. ([\#9428](https://github.com/matrix-org/synapse/issues/9428))
Updates to the Docker image
---------------------------
- Drop support for ARMv7 in Docker images. ([\#9433](https://github.com/matrix-org/synapse/issues/9433))
Improved Documentation
----------------------
- Reorganize CHANGELOG.md. ([\#9281](https://github.com/matrix-org/synapse/issues/9281))
- Add note to `auto_join_rooms` config option explaining existing rooms must be publicly joinable. ([\#9291](https://github.com/matrix-org/synapse/issues/9291))
- Correct name of Synapse's service file in TURN howto. ([\#9308](https://github.com/matrix-org/synapse/issues/9308))
- Fix the braces in the `oidc_providers` section of the sample config. ([\#9317](https://github.com/matrix-org/synapse/issues/9317))
- Update installation instructions on Fedora. ([\#9322](https://github.com/matrix-org/synapse/issues/9322))
- Add HTTP/2 support to the nginx example configuration. Contributed by David Vo. ([\#9390](https://github.com/matrix-org/synapse/issues/9390))
- Update docs for using Gitea as OpenID provider. ([\#9404](https://github.com/matrix-org/synapse/issues/9404))
- Document that pusher instances are shardable. ([\#9407](https://github.com/matrix-org/synapse/issues/9407))
- Fix erroneous documentation from v1.27.0 about updating the SAML2 callback URL. ([\#9434](https://github.com/matrix-org/synapse/issues/9434))
Deprecations and Removals
-------------------------
- Deprecate old admin API `GET /_synapse/admin/v1/users/<user_id>`. ([\#9429](https://github.com/matrix-org/synapse/issues/9429))
Internal Changes
----------------
- Fix 'object name reserved for internal use' errors with recent versions of SQLite. ([\#9003](https://github.com/matrix-org/synapse/issues/9003))
- Add experimental support for running Synapse with PyPy. ([\#9123](https://github.com/matrix-org/synapse/issues/9123))
- Deny access to additional IP addresses by default. ([\#9240](https://github.com/matrix-org/synapse/issues/9240))
- Update the `Cursor` type hints to better match PEP 249. ([\#9299](https://github.com/matrix-org/synapse/issues/9299))
- Add debug logging for SRV lookups. Contributed by @Bubu. ([\#9305](https://github.com/matrix-org/synapse/issues/9305))
- Improve logging for OIDC login flow. ([\#9307](https://github.com/matrix-org/synapse/issues/9307))
- Share the code for handling required attributes between the CAS and SAML handlers. ([\#9326](https://github.com/matrix-org/synapse/issues/9326))
- Clean up the code to load the metadata for OpenID Connect identity providers. ([\#9362](https://github.com/matrix-org/synapse/issues/9362))
- Convert tests to use `HomeserverTestCase`. ([\#9377](https://github.com/matrix-org/synapse/issues/9377), [\#9396](https://github.com/matrix-org/synapse/issues/9396))
- Update the version of black used to 20.8b1. ([\#9381](https://github.com/matrix-org/synapse/issues/9381))
- Allow OIDC config to override discovered values. ([\#9384](https://github.com/matrix-org/synapse/issues/9384))
- Remove some dead code from the acceptance of room invites path. ([\#9394](https://github.com/matrix-org/synapse/issues/9394))
- Clean up an unused method in the presence handler code. ([\#9408](https://github.com/matrix-org/synapse/issues/9408))
Synapse 1.27.0 (2021-02-16)
===========================
Note that this release includes a change in Synapse to use Redis as a cache ─ as well as a pub/sub mechanism ─ if Redis support is enabled for workers. No action is needed by server administrators, and we do not expect resource usage of the Redis instance to change dramatically.
This release also changes the callback URI for OpenID Connect (OIDC) and SAML2 identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 or SAML2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes the callback URI for OpenID Connect (OIDC) identity providers. If your server is configured to use single sign-on via an OIDC/OAuth2 IdP, you may need to make configuration changes. Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.
This release also changes escaping of variables in the HTML templates for SSO or email notifications. If you have customised these templates, please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes.

CONTRIBUTING.md

@@ -1,31 +1,4 @@
Welcome to Synapse
This document aims to get you started with contributing to this repo!
- [1. Who can contribute to Synapse?](#1-who-can-contribute-to-synapse)
- [2. What do I need?](#2-what-do-i-need)
- [3. Get the source.](#3-get-the-source)
- [4. Install the dependencies](#4-install-the-dependencies)
* [Under Unix (macOS, Linux, BSD, ...)](#under-unix-macos-linux-bsd-)
* [Under Windows](#under-windows)
- [5. Get in touch.](#5-get-in-touch)
- [6. Pick an issue.](#6-pick-an-issue)
- [7. Turn coffee and documentation into code and documentation!](#7-turn-coffee-and-documentation-into-code-and-documentation)
- [8. Test, test, test!](#8-test-test-test)
* [Run the linters.](#run-the-linters)
* [Run the unit tests.](#run-the-unit-tests)
* [Run the integration tests.](#run-the-integration-tests)
- [9. Submit your patch.](#9-submit-your-patch)
* [Changelog](#changelog)
+ [How do I know what to call the changelog file before I create the PR?](#how-do-i-know-what-to-call-the-changelog-file-before-i-create-the-pr)
+ [Debian changelog](#debian-changelog)
* [Sign off](#sign-off)
- [10. Turn feedback into better code.](#10-turn-feedback-into-better-code)
- [11. Find a new issue.](#11-find-a-new-issue)
- [Notes for maintainers on merging PRs etc](#notes-for-maintainers-on-merging-prs-etc)
- [Conclusion](#conclusion)
# 1. Who can contribute to Synapse?
# Contributing code to Synapse
Everyone is welcome to contribute code to [matrix.org
projects](https://github.com/matrix-org), provided that they are willing to
@@ -36,179 +9,70 @@ license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
[LICENSE](LICENSE)).
# 2. What do I need?
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
For some tests, you will need [a recent version of Docker](https://docs.docker.com/get-docker/).
# 3. Get the source.
## How to contribute
The preferred and easiest way to contribute changes is to fork the relevant
project on GitHub, and then [create a pull request](
project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.
Please base your changes on the `develop` branch.
Some other points to follow:
```sh
git clone git@github.com:YOUR_GITHUB_USER_NAME/synapse.git
git checkout develop
```
* Please base your changes on the `develop` branch.
If you need help getting started with git, this is beyond the scope of the document, but you
can find many good git tutorials on the web.
* Please follow the [code style requirements](#code-style).
# 4. Install the dependencies
* Please include a [changelog entry](#changelog) with each PR.
## Under Unix (macOS, Linux, BSD, ...)
* Please [sign off](#sign-off) your contribution.
Once you have installed Python 3 and added the source, please open a terminal and
setup a *virtualenv*, as follows:
* Please keep an eye on the pull request for feedback from the [continuous
integration system](#continuous-integration-and-testing) and try to fix any
errors that come up.
```sh
cd path/where/you/have/cloned/the/repository
python3 -m venv ./env
source ./env/bin/activate
pip install -e ".[all,lint,mypy,test]"
pip install tox
```
* If you need to [update your PR](#updating-your-pull-request), just add new
commits to your branch rather than rebasing.
This will install the developer dependencies for the project.
## Under Windows
TBD
# 5. Get in touch.
Join our developer community on Matrix: #synapse-dev:matrix.org !
# 6. Pick an issue.
Fix your favorite problem or perhaps find a [Good First Issue](https://github.com/matrix-org/synapse/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+First+Issue%22)
to work on.
# 7. Turn coffee and documentation into code and documentation!
## Code style
Synapse's code style is documented [here](docs/code_style.md). Please follow
it, including the conventions for the [sample configuration
file](docs/code_style.md#configuration-file-format).
There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.
Many of the conventions are enforced by scripts which are run as part of the
[continuous integration system](#continuous-integration-and-testing). To help
check if you have followed the code style, you can run `scripts-dev/lint.sh`
locally. You'll need python 3.6 or later, and to install a number of tools:
If you add new files to either of these folders, please use [GitHub-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/).
```
# Install the dependencies
pip install -e ".[lint,mypy]"
Some documentation also exists in [Synapse's GitHub
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.
# 8. Test, test, test!
<a name="test-test-test"></a>
While you're developing and before submitting a patch, you'll
want to test your code.
## Run the linters.
The linters look at your code and do two things:
- ensure that your code follows the coding style adopted by the project;
- catch a number of errors in your code.
They're pretty fast, don't hesitate!
```sh
source ./env/bin/activate
# Run the linter script
./scripts-dev/lint.sh
```
Note that this script *will modify your files* to fix styling errors.
Make sure that you have saved all your files.
**Note that the script does not just test/check, but also reformats code, so you
may wish to ensure any new code is committed first**.
If you wish to restrict the linters to only the files changed since the last commit
(much faster!), you can instead run:
By default, this script checks all files and can take some time; if you alter
only certain files, you might wish to specify paths as arguments to reduce the
run-time:
```sh
source ./env/bin/activate
./scripts-dev/lint.sh -d
```
Or if you know exactly which files you wish to lint, you can instead run:
```sh
source ./env/bin/activate
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
## Run the unit tests.
You can also provide the `-d` option, which will lint the files that have been
changed since the last git commit. This will often be significantly faster than
linting the whole codebase.
The unit tests run parts of Synapse, including your changes, to see if anything
was broken. They are slower than the linters but will typically catch more errors.
```sh
source ./env/bin/activate
trial tests
```
If you wish to only run *some* unit tests, you may specify
another module instead of `tests` - or a test class or a method:
```sh
source ./env/bin/activate
trial tests.rest.admin.test_room tests.handlers.test_admin.ExfiltrateData.test_invite
```
If your tests fail, you may wish to look at the logs:
```sh
less _trial_temp/test.log
```
## Run the integration tests.
The integration tests are a more comprehensive suite of tests. They
run a full version of Synapse, including your changes, to check if
anything was broken. They are slower than the unit tests but will
typically catch more errors.
The following command will let you run the integration test with the most common
configuration:
```sh
$ docker run --rm -it -v /path/where/you/have/cloned/the/repository\:/src:ro -v /path/to/where/you/want/logs\:/logs matrixdotorg/sytest-synapse:py37
```
This configuration should generally cover your needs. For more details about other configurations, see [documentation in the SyTest repo](https://github.com/matrix-org/sytest/blob/develop/docker/README.md).
# 9. Submit your patch.
Once you're happy with your patch, it's time to prepare a Pull Request.
To prepare a Pull Request, please:
1. verify that [all the tests pass](#test-test-test), including the coding style;
2. [sign off](#sign-off) your contribution;
3. `git push` your commit to your fork of Synapse;
4. on GitHub, [create the Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request);
5. add a [changelog entry](#changelog) and push it to your Pull Request;
6. for most contributors, that's all - however, if you are a member of the organization `matrix-org`, on GitHub, please request a review from `matrix.org / Synapse Core`.
Before pushing new changes, ensure they don't produce linting errors. Commit any
files that were corrected.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
## Changelog
@@ -292,6 +156,24 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
## Documentation
There is a growing amount of documentation located in the [docs](docs)
directory. This documentation is intended primarily for sysadmins running their
own Synapse instance, as well as developers interacting externally with
Synapse. [docs/dev](docs/dev) exists primarily to house documentation for
Synapse developers. [docs/admin_api](docs/admin_api) houses documentation
regarding Synapse's Admin API, which is used mostly by sysadmins and external
service developers.
New files added to both folders should be written in [Github-Flavoured
Markdown](https://guides.github.com/features/mastering-markdown/), and attempts
should be made to migrate existing documents to markdown where possible.
Some documentation also exists in [Synapse's Github
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.
## Sign off
In order to have a concrete record that your contribution is intentional
@@ -358,36 +240,47 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
## Continuous integration and testing
# 10. Turn feedback into better code.
[Buildkite](https://buildkite.com/matrix-dot-org/synapse) will automatically
run a series of checks and tests against any PR which is opened against the
project; if your change breaks the build, this will be shown in GitHub, with
links to the build results. If your build fails, please try to fix the errors
and update your branch.
Once the Pull Request is opened, you will see a few things:
To run unit tests in a local development environment, you can use:
1. our automated CI (Continuous Integration) pipeline will run (again) the linters, the unit tests, the integration tests and more;
2. one or more of the developers will take a look at your Pull Request and offer feedback.
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
From this point, you should:
Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
1. Look at the results of the CI pipeline.
- If there is any error, fix the error.
2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
3. Create a new commit with the changes.
- Please do NOT overwrite the history. New commits make the reviewer's life easier.
- Push these commits to your Pull Request.
4. Back to 1.
## Updating your pull request
Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!
If you decide to make changes to your pull request - perhaps to address issues
raised in a review, or to fix problems highlighted by [continuous
integration](#continuous-integration-and-testing) - just add new commits to your
branch, and push to GitHub. The pull request will automatically be updated.
# 11. Find a new issue.
Please **avoid** rebasing your branch, especially once the PR has been
reviewed: doing so makes it very difficult for a reviewer to see what has
changed since a previous review.
By now, you know the drill!
# Notes for maintainers on merging PRs etc
## Notes for maintainers on merging PRs etc
There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).
# Conclusion
## Conclusion
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully

UPGRADE.rst

@@ -85,44 +85,23 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.29.0
====================
Requirement for X-Forwarded-Proto header
----------------------------------------
When using Synapse with a reverse proxy (in particular, when using the
`x_forwarded` option on an HTTP listener), Synapse now expects to receive an
`X-Forwarded-Proto` header on incoming HTTP requests. If it is not set, Synapse
will log a warning on each received request.
To avoid the warning, administrators using a reverse proxy should ensure that
the reverse proxy sets `X-Forwarded-Proto` header to `https` or `http` to
indicate the protocol used by the client. See the `reverse proxy documentation
<docs/reverse_proxy.md>`_, where the example configurations have been updated to
show how to set this header.
(Users of `Caddy <https://caddyserver.com/>`_ are unaffected, since we believe it
sets `X-Forwarded-Proto` by default.)
Upgrading to v1.27.0
====================
Changes to callback URI for OAuth2 / OpenID Connect and SAML2
-------------------------------------------------------------
Changes to callback URI for OAuth2 / OpenID Connect
---------------------------------------------------
This version changes the URI used for callbacks from OAuth2 and SAML2 identity providers:
This version changes the URI used for callbacks from OAuth2 identity providers. If
your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
to the list of permitted "redirect URIs" at the identity provider.
* If your server is configured for single sign-on via an OpenID Connect or OAuth2 identity
provider, you will need to add ``[synapse public baseurl]/_synapse/client/oidc/callback``
to the list of permitted "redirect URIs" at the identity provider.
See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
Connect.
See `docs/openid.md <docs/openid.md>`_ for more information on setting up OpenID
Connect.
* If your server is configured for single sign-on via a SAML2 identity provider, you will
need to add ``[synapse public baseurl]/_synapse/client/saml2/authn_response`` as a permitted
"ACS location" (also known as "allowed callback URLs") at the identity provider.
(Note: a similar change is being made for SAML2; in this case the old URI
``[synapse public baseurl]/_matrix/saml2`` is being deprecated, but will continue to
work, so no immediate changes are required for existing installations.)
Changes to HTML templates
-------------------------

changelog.d/9003.misc Normal file

@@ -0,0 +1 @@
Fix 'object name reserved for internal use' errors with recent versions of SQLite.

changelog.d/9123.misc Normal file

@@ -0,0 +1 @@
Add experimental support for running Synapse with PyPy.

changelog.d/9150.feature Normal file

@@ -0,0 +1 @@
New API /_synapse/admin/rooms/{roomId}/context/{eventId}.

changelog.d/9240.misc Normal file

@@ -0,0 +1 @@
Deny access to additional IP addresses by default.

changelog.d/9257.bugfix Normal file

@@ -0,0 +1 @@
Fix long-standing bug where sending email push would fail for rooms that the server had since left.

changelog.d/9291.doc Normal file

@@ -0,0 +1 @@
Add note to `auto_join_rooms` config option explaining existing rooms must be publicly joinable.

changelog.d/9296.bugfix Normal file

@@ -0,0 +1 @@
Fix bug in Synapse 1.27.0rc1 which meant the "session expired" error page during SSO registration was badly formatted.

changelog.d/9299.misc Normal file

@@ -0,0 +1 @@
Update the `Cursor` type hints to better match PEP 249.

changelog.d/9300.feature Normal file

@@ -0,0 +1 @@
Further improvements to the user experience of registration via single sign-on.

changelog.d/9301.feature Normal file

@@ -0,0 +1 @@
Further improvements to the user experience of registration via single sign-on.

changelog.d/9305.misc Normal file

@@ -0,0 +1 @@
Add debug logging for SRV lookups. Contributed by @Bubu.

changelog.d/9307.misc Normal file

@@ -0,0 +1 @@
Improve logging for OIDC login flow.

changelog.d/9308.doc Normal file

@@ -0,0 +1 @@
Correct name of Synapse's service file in TURN howto.

changelog.d/9311.feature Normal file

@@ -0,0 +1 @@
Add hook to spam checker modules that allow checking file uploads and remote downloads.

changelog.d/9317.doc Normal file

@@ -0,0 +1 @@
Fix the braces in the `oidc_providers` section of the sample config.

changelog.d/9321.bugfix Normal file

@@ -0,0 +1 @@
Assert a maximum length for the `client_secret` parameter for spec compliance.

changelog.d/9322.doc Normal file

@@ -0,0 +1 @@
Update installation instructions on Fedora.

changelog.d/9326.misc Normal file

@@ -0,0 +1 @@
Share the code for handling required attributes between the CAS and SAML handlers.

changelog.d/9333.bugfix Normal file

@@ -0,0 +1 @@
Fix additional errors when previewing URLs: "AttributeError 'NoneType' object has no attribute 'xpath'" and "ValueError: Unicode strings with encoding declaration are not supported. Please use bytes input or XML fragments without declaration.".

changelog.d/9361.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug causing Synapse to impose the wrong type constraints on fields when processing responses from appservices to `/_matrix/app/v1/thirdparty/user/{protocol}`.

changelog.d/9362.misc Normal file

@@ -0,0 +1 @@
Clean up the code to load the metadata for OpenID Connect identity providers.

changelog.d/9376.feature Normal file

@@ -0,0 +1 @@
Add support for receiving OpenID Connect authentication responses via form `POST`s rather than `GET`s.

changelog.d/9377.misc Normal file

@@ -0,0 +1 @@
Convert tests to use `HomeserverTestCase`.

changelog.d/9381.misc Normal file

@@ -0,0 +1 @@
Update the version of black used to 20.8b1.

changelog.d/9384.misc Normal file

@@ -0,0 +1 @@
Allow OIDC config to override discovered values.

changelog.d/9391.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where Synapse would occasionally stop reconnecting after the connection was lost.

changelog.d/9394.misc Normal file

@@ -0,0 +1 @@
Remove some dead code from the acceptance of room invites path.

changelog.d/9395.bugfix Normal file

@@ -0,0 +1 @@
Fix a long-standing bug when upgrading a room: "TypeError: '>' not supported between instances of 'NoneType' and 'int'".

changelog.d/9396.misc Normal file

@@ -0,0 +1 @@
Convert tests to use `HomeserverTestCase`.

changelog.d/9404.doc Normal file

@@ -0,0 +1 @@
Update docs for using Gitea as OpenID provider.

changelog.d/9407.doc Normal file

@@ -0,0 +1 @@
Document that pusher instances are shardable.

changelog.d/9423.bugfix Normal file

@@ -0,0 +1 @@
Fix building docker images for 32-bit ARM.

debian/build_virtualenv

@@ -58,10 +58,10 @@ trap "rm -r $tmpdir" EXIT
cp -r tests "$tmpdir"
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
"${TARGET_PYTHON}" -B -m twisted.trial --reporter=text -j2 tests
# build the config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_config" \
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_config" \
--config-dir="/etc/matrix-synapse" \
--data-dir="/var/lib/matrix-synapse" |
perl -pe '
@@ -87,7 +87,7 @@ PYTHONPATH="$tmpdir" \
' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
# build the log config file
"${TARGET_PYTHON}" "${VIRTUALENV_DIR}/bin/generate_log_config" \
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
# add a dependency on the right version of python to substvars.

debian/changelog vendored

@@ -1,19 +1,3 @@
matrix-synapse-py3 (1.29.0) stable; urgency=medium
[ Jonathan de Jong ]
* Remove the python -B flag (don't generate bytecode) in scripts and documentation.
[ Synapse Packaging team ]
* New synapse release 1.29.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 08 Mar 2021 13:51:50 +0000
matrix-synapse-py3 (1.28.0) stable; urgency=medium
* New synapse release 1.28.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Feb 2021 10:21:57 +0000
matrix-synapse-py3 (1.27.0) stable; urgency=medium
[ Dan Callahan ]

debian/synctl.1 vendored

@@ -44,7 +44,7 @@ Configuration file may be generated as follows:
.
.nf
$ python \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
$ python \-B \-m synapse\.app\.homeserver \-c config\.yaml \-\-generate\-config \-\-server\-name=<server name>
.
.fi
.

debian/synctl.ronn vendored

@@ -41,7 +41,7 @@ process.
Configuration file may be generated as follows:
$ python -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
$ python -B -m synapse.app.homeserver -c config.yaml --generate-config --server-name=<server name>
## ENVIRONMENT

docker/Dockerfile

@@ -12,11 +12,13 @@
#
ARG PYTHON_VERSION=3.8
ARG BASE_IMAGE=docker.io/python:${PYTHON_VERSION}-slim
ARG CARGO_NET_OFFLINE=false
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
FROM ${BASE_IMAGE} as builder
# install the OS build deps
RUN apt-get update && apt-get install -y \
@@ -32,9 +34,16 @@ RUN apt-get update && apt-get install -y \
zlib1g-dev \
&& rm -rf /var/lib/apt/lists/*
ENV CARGO_NET_OFFLINE=${CARGO_NET_OFFLINE}
# Build dependencies that are not available as wheels, to speed up rebuilds
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography \
lxml
RUN pip install --prefix="/install" --no-warn-script-location \
cryptography
RUN pip install --prefix="/install" --no-warn-script-location \
frozendict \
jaeger-client \
opentracing \

docker/Dockerfile-cargo-cache

@@ -0,0 +1,21 @@
# A Dockerfile that caches the cargo index for the cryptography deps. This is
# mainly useful for multi-arch builds where fetching the index from the internet
# fails for 32-bit archs built on 64-bit platforms.
ARG PYTHON_VERSION=3.8
FROM --platform=$BUILDPLATFORM docker.io/python:${PYTHON_VERSION}-slim as builder
RUN apt-get update && apt-get install -y \
rustc \
&& rm -rf /var/lib/apt/lists/*
RUN pip download --no-binary cryptography --no-deps cryptography
RUN tar -xf cryptography*.tar.gz --wildcards cryptography*/src/rust/
RUN cd cryptography*/src/rust && cargo fetch
FROM docker.io/python:${PYTHON_VERSION}-slim
COPY --from=builder /root/.cargo /root/.cargo
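
Outside of CI, the same two-step build works with a plain `docker build`; a minimal sketch for a single-arch build, with hypothetical local image tags:

```sh
# Build the cache image, which fetches the cargo index at build time...
docker build -f docker/Dockerfile-cargo-cache -t cargo_cache .

# ...then use it as the base for the main image, keeping cargo offline so the
# cached index under /root/.cargo is used.
docker build -f docker/Dockerfile -t synapse:local \
    --build-arg BASE_IMAGE=cargo_cache \
    --build-arg CARGO_NET_OFFLINE=true .
```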

docker/README.md

@@ -11,6 +11,7 @@ The image also does *not* provide a TURN server.
By default, the image expects a single volume, located at ``/data``, that will hold:
* configuration files;
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
* the appservices configuration.
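
A typical invocation mounting that volume might look like the following sketch (the host path and published port are placeholders):

```sh
docker run -d --name synapse \
    -v /path/to/synapse/data:/data \
    -p 8008:8008 \
    matrixdotorg/synapse:latest
```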

docker/conf/homeserver.yaml

@@ -89,6 +89,7 @@ federation_rc_concurrent: 3
## Files ##
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "50M" }}"
max_image_pixels: "32M"
dynamic_thumbnails: false

docs/admin_api/user_admin_api.rst

@@ -29,9 +29,8 @@ It returns a JSON body like the following:
}
],
"avatar_url": "<avatar_url>",
"admin": 0,
"deactivated": 0,
"shadow_banned": 0,
"admin": false,
"deactivated": false,
"password_hash": "$2b$12$p9B4GkqYdRTPGD",
"creation_ts": 1560432506,
"appservice_id": null,
@@ -151,7 +150,6 @@ A JSON body is returned with the following shape:
"admin": 0,
"user_type": null,
"deactivated": 0,
"shadow_banned": 0,
"displayname": "<User One>",
"avatar_url": null
}, {
@@ -160,7 +158,6 @@ A JSON body is returned with the following shape:
"admin": 1,
"user_type": null,
"deactivated": 0,
"shadow_banned": 0,
"displayname": "<User Two>",
"avatar_url": "<avatar_url>"
}
@@ -265,7 +262,7 @@ The following actions are performed when deactivating an user:
- Reject all pending invites
- Remove all account validity information related to the user
The following additional actions are performed during deactivation if ``erase``
The following additional actions are performed during deactivation if``erase``
is set to ``true``:
- Remove the user's display name
@@ -379,12 +376,11 @@ The following fields are returned in the JSON response body:
- ``total`` - Number of rooms.
List media of a user
====================
List media of an user
================================
Gets a list of all local media that a specific ``user_id`` has created.
By default, the response is ordered by descending creation date and ascending media ID.
The newest media is on top. You can change the order with parameters
``order_by`` and ``dir``.
The response is ordered by creation date descending and media ID descending.
The newest media is on top.
The API is::
@@ -441,35 +437,6 @@ The following parameters should be set in the URL:
denoting the offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token`` from a previous call.
Defaults to ``0``.
- ``order_by`` - The method by which to sort the returned list of media.
If the ordered field has duplicates, the second order is always by ascending ``media_id``,
which guarantees a stable ordering. Valid values are:
- ``media_id`` - Media are ordered alphabetically by ``media_id``.
- ``upload_name`` - Media are ordered alphabetically by name the media was uploaded with.
- ``created_ts`` - Media are ordered by when the content was uploaded in ms.
Smallest to largest. This is the default.
- ``last_access_ts`` - Media are ordered by when the content was last accessed in ms.
Smallest to largest.
- ``media_length`` - Media are ordered by length of the media in bytes.
Smallest to largest.
- ``media_type`` - Media are ordered alphabetically by MIME-type.
- ``quarantined_by`` - Media are ordered alphabetically by the user ID that
initiated the quarantine request for this media.
- ``safe_from_quarantine`` - Media are ordered by the status if this media is safe
from quarantining.
- ``dir`` - Direction of media order. Either ``f`` for forwards or ``b`` for backwards.
Setting this value to ``b`` will reverse the above sort order. Defaults to ``f``.
If neither ``order_by`` nor ``dir`` is set, the default order is newest media on top
(corresponds to ``order_by`` = ``created_ts`` and ``dir`` = ``b``).
Caution. The database only has indexes on the columns ``media_id``,
``user_id`` and ``created_ts``. This means that if a different sort order is used
(``upload_name``, ``last_access_ts``, ``media_length``, ``media_type``,
``quarantined_by`` or ``safe_from_quarantine``), this can cause a large load on the
database, especially for large environments.
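
For example, a request combining these parameters to list a user's largest media items first might look like the following sketch (server name, user ID, and admin access token are placeholders, and the standard ``limit`` pagination parameter is assumed alongside ``order_by`` and ``dir``):

```sh
curl --header "Authorization: Bearer <admin_access_token>" \
    'https://matrix.example.com/_synapse/admin/v1/users/@alice:example.com/media?order_by=media_length&dir=b&limit=10'
```
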
**Response**

docs/reverse_proxy.md

@@ -9,24 +9,24 @@ of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
You should configure your reverse proxy to forward requests to `/_matrix` or
`/_synapse/client` to Synapse, and have it set the `X-Forwarded-For` and
`X-Forwarded-Proto` request headers.
You should remember that Matrix clients and other Matrix servers do not
necessarily need to connect to your server via the same server name or
port. Indeed, clients will use port 443 by default, whereas servers default to
port 8448. Where these are different, we refer to the 'client port' and the
'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
`nocanon`.
When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
refer to the 'client port' and the 'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
Endpoints that are part of the standardised Matrix specification are
located under `/_matrix`, whereas endpoints specific to Synapse are
located under `/_synapse/client`.
Let's assume that we expect clients to connect to our server at
`https://matrix.example.com`, and other servers to connect at
`https://example.com:8448`. The following sections detail the configuration of
@@ -40,21 +40,18 @@ the reverse proxy and the homeserver.
```
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
listen 443 ssl;
listen [::]:443 ssl;
# For the federation port
listen 8448 ssl http2 default_server;
listen [::]:8448 ssl http2 default_server;
listen 8448 ssl default_server;
listen [::]:8448 ssl default_server;
server_name matrix.example.com;
location ~* ^(\/_matrix|\/_synapse\/client) {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 50M;
@@ -105,7 +102,6 @@ example.com:8448 {
SSLEngine on
ServerName matrix.example.com;
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
@@ -117,7 +113,6 @@ example.com:8448 {
SSLEngine on
ServerName example.com;
RequestHeader set "X-Forwarded-Proto" expr=%{REQUEST_SCHEME}
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
@@ -139,9 +134,6 @@ example.com:8448 {
```
frontend https
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
@@ -152,10 +144,6 @@ frontend https
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
http-request set-header X-Forwarded-Proto https if { ssl_fc }
http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
http-request set-header X-Forwarded-For %[src]
default_backend matrix
backend matrix

docs/sample_config.yaml

@@ -101,14 +101,6 @@ pid_file: DATADIR/homeserver.pid
#
#limit_profile_requests_to_users_who_share_rooms: true
# Uncomment to prevent a user's profile data from being retrieved and
# displayed in a room until they have joined it. By default, a user's
# profile data is included in an invite event, regardless of the values
# of the above two settings, and whether or not the users share a server.
# Defaults to 'true'.
#
#include_profile_data_on_invite: false
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
@@ -707,12 +699,6 @@ acme:
# - matrix.org
# - example.com
# Uncomment to disable profile lookup over federation. By default, the
# Federation API allows other homeservers to obtain profile data of any user
# on this homeserver. Defaults to 'true'.
#
#allow_profile_lookup_over_federation: false
## Caching ##
@@ -2242,8 +2228,8 @@ password_config:
#require_uppercase: true
ui_auth:
# The amount of time to allow a user-interactive authentication session
# to be active.
# The number of milliseconds to allow a user-interactive authentication
# session to be active.
#
# This defaults to 0, meaning the user is queried for their credentials
# before every action, but this can be overridden to allow a single
@@ -2254,7 +2240,7 @@ ui_auth:
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
#session_timeout: "15s"
#session_timeout: 15000
# Configuration for sending emails from Synapse.
@@ -2544,35 +2530,19 @@ spam_checker:
# User Directory configuration
#
user_directory:
# Defines whether users can search the user directory. If false then
# empty responses are returned to all queries. Defaults to true.
#
# Uncomment to disable the user directory.
#
#enabled: false
# Defines whether to search all users visible to your HS when searching
# the user directory, rather than limiting to users visible in public
# rooms. Defaults to false.
#
# If you set it true, you'll have to rebuild the user_directory search
# indexes, see:
# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.
#
#search_all_users: true
# Defines whether to prefer local users in search query results.
# If True, local users are more likely to appear above remote users
# when searching the user directory. Defaults to false.
#
# Uncomment to prefer local over remote users in user directory search
# results.
#
#prefer_local_users: true
# 'enabled' defines whether users can search the user directory. If
# false then empty responses are returned to all queries. Defaults to
# true.
#
# 'search_all_users' defines whether to search all users visible to your HS
# when searching the user directory, rather than limiting to users visible
# in public rooms. Defaults to false. If you set it True, you'll have to
# rebuild the user_directory search indexes, see
# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
#
#user_directory:
# enabled: true
# search_all_users: false
# User Consent configuration

docs/spam_checker.md

@@ -25,7 +25,7 @@ well as some specific methods:
* `check_username_for_spam`
* `check_registration_for_spam`
The details of each of these methods (as well as their inputs and outputs)
The details of the each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
The `ModuleApi` class provides a way for the custom spam checker class to
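
For context, spam checker modules of this vintage are plain Python classes. A minimal sketch, assuming the `(config, api)` constructor convention and the `check_username_for_spam` callback named above (the class name and blocked word are hypothetical; see `synapse.events.spamcheck.SpamChecker` for the authoritative signatures):

```python
class ExampleSpamChecker:
    def __init__(self, config, api):
        # `config` is this module's parsed config; `api` is a ModuleApi
        # instance for talking back to the homeserver.
        self.config = config
        self.api = api

    async def check_username_for_spam(self, user_profile):
        # `user_profile` contains keys such as "user_id"; returning True
        # treats the user as spammy (e.g. hides them from directory search).
        return "forbidden" in user_profile["user_id"]
```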

docs/systemd-with-workers/system/matrix-synapse-worker@.service

@@ -4,7 +4,6 @@ AssertPathExists=/etc/matrix-synapse/workers/%i.yaml
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
ReloadPropagatedFrom=matrix-synapse.target
# if this is started at the same time as the main, let the main process start
# first, to initialise the database schema.

docs/systemd-with-workers/system/matrix-synapse.service

@@ -3,7 +3,6 @@ Description=Synapse master
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
ReloadPropagatedFrom=matrix-synapse.target
[Service]
Type=notify

docs/tcp_replication.md

@@ -220,6 +220,10 @@ Asks the server for the current position of all streams.
Acknowledge receipt of some federation data
#### REMOVE_PUSHER (C)
Inform the server a pusher should be removed
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.

docs/workers.md

@@ -276,8 +276,7 @@ using):
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
[#7530](https://github.com/matrix-org/synapse/issues/7530).
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
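
A minimal sketch of such a listener in the worker's config, with a placeholder port:

```yaml
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```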

mypy.ini

@@ -23,7 +23,6 @@ files =
synapse/events/validator.py,
synapse/events/spamcheck.py,
synapse/federation,
synapse/groups,
synapse/handlers,
synapse/http/client.py,
synapse/http/federation/matrix_federation_agent.py,

scripts/synapse_port_db

@@ -22,7 +22,7 @@ import logging
import sys
import time
import traceback
from typing import Dict, Iterable, Optional, Set
from typing import Dict, Optional, Set
import yaml
@@ -47,7 +47,6 @@ from synapse.storage.databases.main.events_bg_updates import (
from synapse.storage.databases.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.databases.main.pusher import PusherWorkerStore
from synapse.storage.databases.main.registration import (
RegistrationBackgroundUpdateStore,
find_max_generated_user_id_localpart,
@@ -178,7 +177,6 @@ class Store(
UserDirectoryBackgroundUpdateStore,
EndToEndKeyBackgroundStore,
StatsStore,
PusherWorkerStore,
):
def execute(self, f, *args, **kwargs):
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
@@ -631,13 +629,7 @@ class Porter(object):
await self._setup_state_group_id_seq()
await self._setup_user_id_seq()
await self._setup_events_stream_seqs()
await self._setup_sequence(
"device_inbox_sequence", ("device_inbox", "device_federation_outbox")
)
await self._setup_sequence(
"account_data_sequence", ("room_account_data", "room_tags_revisions", "account_data"))
await self._setup_sequence("receipts_sequence", ("receipts_linearized", ))
await self._setup_auth_chain_sequence()
await self._setup_device_inbox_seq()
# Step 3. Get tables.
self.progress.set_state("Fetching tables")
@@ -862,7 +854,7 @@ class Porter(object):
return done, remaining + done
async def _setup_state_group_id_seq(self) -> None:
async def _setup_state_group_id_seq(self):
curr_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="state_groups", keyvalues={}, retcol="MAX(id)", allow_none=True
)
@@ -876,7 +868,7 @@ class Porter(object):
await self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
async def _setup_user_id_seq(self) -> None:
async def _setup_user_id_seq(self):
curr_id = await self.sqlite_store.db_pool.runInteraction(
"setup_user_id_seq", find_max_generated_user_id_localpart
)
@@ -885,9 +877,9 @@ class Porter(object):
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
await self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
async def _setup_events_stream_seqs(self) -> None:
async def _setup_events_stream_seqs(self):
"""Set the event stream sequences to the correct values.
"""
@@ -916,46 +908,35 @@ class Porter(object):
(curr_backward_id + 1,),
)
await self.postgres_store.db_pool.runInteraction(
return await self.postgres_store.db_pool.runInteraction(
"_setup_events_stream_seqs", _setup_events_stream_seqs_set_pos,
)
async def _setup_sequence(self, sequence_name: str, stream_id_tables: Iterable[str]) -> None:
"""Set a sequence to the correct value.
async def _setup_device_inbox_seq(self):
"""Set the device inbox sequence to the correct value.
"""
current_stream_ids = []
for stream_id_table in stream_id_tables:
max_stream_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table=stream_id_table,
keyvalues={},
retcol="COALESCE(MAX(stream_id), 1)",
allow_none=True,
)
current_stream_ids.append(max_stream_id)
next_id = max(current_stream_ids) + 1
def r(txn):
sql = "ALTER SEQUENCE %s RESTART WITH" % (sequence_name, )
txn.execute(sql + " %s", (next_id, ))
await self.postgres_store.db_pool.runInteraction("_setup_%s" % (sequence_name,), r)
async def _setup_auth_chain_sequence(self) -> None:
curr_chain_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="event_auth_chains", keyvalues={}, retcol="MAX(chain_id)", allow_none=True
curr_local_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="device_inbox",
keyvalues={},
retcol="COALESCE(MAX(stream_id), 1)",
allow_none=True,
)
curr_federation_id = await self.sqlite_store.db_pool.simple_select_one_onecol(
table="device_federation_outbox",
keyvalues={},
retcol="COALESCE(MAX(stream_id), 1)",
allow_none=True,
)
next_id = max(curr_local_id, curr_federation_id) + 1
def r(txn):
txn.execute(
"ALTER SEQUENCE event_auth_chain_id RESTART WITH %s",
(curr_chain_id,),
"ALTER SEQUENCE device_inbox_sequence RESTART WITH %s", (next_id,)
)
await self.postgres_store.db_pool.runInteraction(
"_setup_event_auth_chain_id", r,
)
return self.postgres_store.db_pool.runInteraction("_setup_device_inbox_seq", r)
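Both the generic _setup_sequence and the per-sequence helpers above follow the same recipe: find the highest id already ported, then restart the Postgres sequence just past it so freshly inserted rows cannot collide. A condensed standalone sketch (psycopg2-style cursor; the table, column and sequence names are placeholders):

def reset_sequence(txn, table: str, column: str, sequence: str) -> None:
    # Identifiers cannot be bound as query parameters, hence the string
    # interpolation, mirroring the ALTER SEQUENCE calls above.
    txn.execute("SELECT COALESCE(MAX(%s), 0) FROM %s" % (column, table))
    (max_id,) = txn.fetchone()
    # Restart one past the largest ported id so new rows never collide.
    txn.execute("ALTER SEQUENCE %s RESTART WITH %%s" % (sequence,), (max_id + 1,))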
##############################################

View File

@@ -102,7 +102,7 @@ CONDITIONAL_REQUIREMENTS["lint"] = [
"flake8",
]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.812", "mypy-zope==0.2.11"]
CONDITIONAL_REQUIREMENTS["mypy"] = ["mypy==0.790", "mypy-zope==0.2.8"]
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.29.0"
__version__ = "1.27.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -27,11 +27,6 @@ MAX_ALIAS_LENGTH = 255
# the maximum length for a user id is 255 characters
MAX_USERID_LENGTH = 255
# The maximum length for a group id is 255 characters
MAX_GROUPID_LENGTH = 255
MAX_GROUP_CATEGORYID_LENGTH = 255
MAX_GROUP_ROLEID_LENGTH = 255
class Membership:
@@ -98,12 +93,9 @@ class EventTypes:
Retention = "m.room.retention"
Dummy = "org.matrix.dummy_event"
class EduTypes:
Presence = "m.presence"
RoomKeyRequest = "m.room_key_request"
Dummy = "org.matrix.dummy_event"
class RejectedReason:

View File

@@ -14,7 +14,7 @@
# limitations under the License.
from collections import OrderedDict
from typing import Hashable, Optional, Tuple
from typing import Any, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.types import Requester
@@ -42,9 +42,7 @@ class Ratelimiter:
# * How many times an action has occurred since a point in time
# * The point in time
# * The rate_hz of this particular entry. This can vary per request
self.actions = (
OrderedDict()
) # type: OrderedDict[Hashable, Tuple[float, int, float]]
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]
def can_requester_do_action(
self,
@@ -84,7 +82,7 @@ class Ratelimiter:
def can_do_action(
self,
key: Hashable,
key: Any,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
@@ -177,7 +175,7 @@ class Ratelimiter:
def ratelimit(
self,
key: Hashable,
key: Any,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,

View File

@@ -17,6 +17,8 @@ import sys
from synapse import python_dependencies # noqa: E402
sys.dont_write_bytecode = True
logger = logging.getLogger(__name__)
try:

View File

@@ -210,9 +210,7 @@ def start(config_options):
config.update_user_directory = False
config.run_background_tasks = False
config.start_pushers = False
config.pusher_shard_config.instances = []
config.send_federation = False
config.federation_shard_config.instances = []
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts

View File

@@ -23,7 +23,6 @@ from typing_extensions import ContextManager
from twisted.internet import address
from twisted.web.resource import IResource
from twisted.web.server import Request
import synapse
import synapse.events
@@ -191,7 +190,7 @@ class KeyUploadServlet(RestServlet):
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
async def on_POST(self, request: Request, device_id: Optional[str]):
async def on_POST(self, request, device_id):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
@@ -224,12 +223,10 @@ class KeyUploadServlet(RestServlet):
header: request.requestHeaders.getRawHeaders(header, [])
for header in (b"Authorization", b"User-Agent")
}
# Add the previous hop to the X-Forwarded-For header.
x_forwarded_for = request.requestHeaders.getRawHeaders(
b"X-Forwarded-For", []
)
# we use request.client here, since we want the previous hop, not the
# original client (as returned by request.getClientAddress()).
if isinstance(request.client, (address.IPv4Address, address.IPv6Address)):
previous_host = request.client.host.encode("ascii")
# If the header exists, add to the comma-separated list of the first
@@ -242,14 +239,6 @@ class KeyUploadServlet(RestServlet):
x_forwarded_for = [previous_host]
headers[b"X-Forwarded-For"] = x_forwarded_for
# Replicate the original X-Forwarded-Proto header. Note that
# XForwardedForRequest overrides isSecure() to give us the original protocol
# used by the client, as opposed to the protocol used by our upstream proxy
# - which is what we want here.
headers[b"X-Forwarded-Proto"] = [
b"https" if request.isSecure() else b"http"
]
try:
result = await self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
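What the removed lines did: when proxying a key upload to the main process, append the previous hop to X-Forwarded-For and record the original scheme in X-Forwarded-Proto, so the main process still sees the real client details. A minimal sketch of that header surgery (plain bytes and dicts, no Twisted):

def forwarded_headers(existing_xff: list, previous_host: bytes, was_https: bool) -> dict:
    headers = {}
    if existing_xff:
        # Append this hop to the comma-separated list in the first header.
        headers[b"X-Forwarded-For"] = [existing_xff[0] + b", " + previous_host]
        headers[b"X-Forwarded-For"].extend(existing_xff[1:])
    else:
        headers[b"X-Forwarded-For"] = [previous_host]
    # Preserve the protocol the client originally used.
    headers[b"X-Forwarded-Proto"] = [b"https" if was_https else b"http"]
    return headers

assert forwarded_headers([b"1.2.3.4"], b"5.6.7.8", True) == {
    b"X-Forwarded-For": [b"1.2.3.4, 5.6.7.8"],
    b"X-Forwarded-Proto": [b"https"],
}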
@@ -656,6 +645,9 @@ class GenericWorkerServer(HomeServer):
self.get_tcp_replication().start_replication(self)
async def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
@cache_in_self
def get_replication_data_handler(self):
return GenericWorkerReplicationHandler(self)
@@ -930,6 +922,22 @@ def start(config_options):
# For other worker types we force this to off.
config.appservice.notify_appservices = False
if config.worker_app == "synapse.app.pusher":
if config.server.start_pushers:
sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``start_pushers: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.server.start_pushers = True
else:
# For other worker types we force this to off.
config.server.start_pushers = False
if config.worker_app == "synapse.app.user_dir":
if config.server.update_user_directory:
sys.stderr.write(
@@ -946,6 +954,22 @@ def start(config_options):
# For other worker types we force this to off.
config.server.update_user_directory = False
if config.worker_app == "synapse.app.federation_sender":
if config.worker.send_federation:
sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``send_federation: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the federation sender to start since it will be disabled in the main config
config.worker.send_federation = True
else:
# For other worker types we force this to off.
config.worker.send_federation = False
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
hs = GenericWorkerServer(

View File

@@ -21,7 +21,7 @@ import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import Any, Iterable, List, MutableMapping, Optional, Union
from typing import Any, Iterable, List, MutableMapping, Optional
import attr
import jinja2
@@ -147,20 +147,7 @@ class Config:
return int(value) * size
@staticmethod
def parse_duration(value: Union[str, int]) -> int:
"""Convert a duration as a string or integer to a number of milliseconds.
If an integer is provided it is treated as milliseconds and is unchanged.
String durations can have a suffix of 's', 'm', 'h', 'd', 'w', or 'y'.
No suffix is treated as milliseconds.
Args:
value: The duration to parse.
Returns:
The number of milliseconds in the duration.
"""
def parse_duration(value):
if isinstance(value, int):
return value
second = 1000
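For context, the docstring being removed describes a contract along these lines; a self-contained sketch (the suffix table is an assumption consistent with the suffixes listed in the docstring, not necessarily Synapse's exact code):

def parse_duration(value):
    # Integers are already milliseconds.
    if isinstance(value, int):
        return value
    second = 1000
    sizes = {
        "s": second,
        "m": 60 * second,
        "h": 60 * 60 * second,
        "d": 24 * 60 * 60 * second,
        "w": 7 * 24 * 60 * 60 * second,
        "y": 365 * 24 * 60 * 60 * second,
    }
    size = 1
    suffix = value[-1]
    if suffix in sizes:
        value = value[:-1]
        size = sizes[suffix]
    return int(value) * size

assert parse_duration("15s") == 15000
assert parse_duration(15000) == 15000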
@@ -844,23 +831,22 @@ class ShardedWorkerHandlingConfig:
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key."""
# If no instances are defined we assume some other worker is handling
# this.
if not self.instances:
return False
# If zero or one instances are defined we always return true
if not self.instances or len(self.instances) == 1:
return True
return self._get_instance(key) == instance_name
return self.get_instance(key) == instance_name
def _get_instance(self, key: str) -> str:
def get_instance(self, key: str) -> str:
"""Get the instance responsible for handling the given key.
Note: For federation sending and pushers the config for which instance
is sending is known only to the sender instance, so we don't expose this
method by default.
Note: For things like federation sending the config for which instance
is sending is known only to the sender instance if there is only one.
Therefore `should_handle` should be used where possible.
"""
if not self.instances:
raise Exception("Unknown worker")
return "master"
if len(self.instances) == 1:
return self.instances[0]
@@ -877,21 +863,4 @@ class ShardedWorkerHandlingConfig:
return self.instances[remainder]
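The remainder above comes from hashing the key, so a given key is always routed to the same worker no matter which process does the lookup. A standalone sketch of that consistent-pick scheme (SHA-256 assumed):

from hashlib import sha256

def pick_instance(instances: list, key: str) -> str:
    # The hash, not the process, decides the owner, so every worker
    # agrees on the assignment without coordination.
    dest_hash = sha256(key.encode("utf8")).digest()
    dest_int = int.from_bytes(dest_hash, byteorder="little")
    return instances[dest_int % len(instances)]

# The same key always maps to the same instance:
assert pick_instance(["w1", "w2"], "!room:example.com") == pick_instance(["w1", "w2"], "!room:example.com")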
@attr.s
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
"""A version of `ShardedWorkerHandlingConfig` that is used for config
options where all instances know which instances are responsible for the
sharded work.
"""
def __attrs_post_init__(self):
# We require that `self.instances` is non-empty.
if not self.instances:
raise Exception("Got empty list of instances for shard config")
def get_instance(self, key: str) -> str:
"""Get the instance responsible for handling the given key."""
return self._get_instance(key)
__all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]

View File

@@ -149,6 +149,4 @@ class ShardedWorkerHandlingConfig:
instances: List[str]
def __init__(self, instances: List[str]) -> None: ...
def should_handle(self, instance_name: str, key: str) -> bool: ...
class RoutableShardedWorkerHandlingConfig(ShardedWorkerHandlingConfig):
def get_instance(self, key: str) -> str: ...

View File

@@ -37,9 +37,7 @@ class AuthConfig(Config):
# User-interactive authentication
ui_auth = config.get("ui_auth") or {}
self.ui_auth_session_timeout = self.parse_duration(
ui_auth.get("session_timeout", 0)
)
self.ui_auth_session_timeout = ui_auth.get("session_timeout", 0)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
@@ -95,8 +93,8 @@ class AuthConfig(Config):
#require_uppercase: true
ui_auth:
# The amount of time to allow a user-interactive authentication session
# to be active.
# The number of milliseconds to allow a user-interactive authentication
# session to be active.
#
# This defaults to 0, meaning the user is queried for their credentials
# before every action, but this can be overridden to allow a single
@@ -107,5 +105,5 @@ class AuthConfig(Config):
# Uncomment below to allow for credential validation to last for 15
# seconds.
#
#session_timeout: "15s"
#session_timeout: 15000
"""

View File

@@ -41,10 +41,6 @@ class FederationConfig(Config):
)
self.federation_metrics_domains = set(federation_metrics_domains)
self.allow_profile_lookup_over_federation = config.get(
"allow_profile_lookup_over_federation", True
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
## Federation ##
@@ -70,12 +66,6 @@ class FederationConfig(Config):
#federation_metrics_domains:
# - matrix.org
# - example.com
# Uncomment to disable profile lookup over federation. By default, the
# Federation API allows other homeservers to obtain profile data of any user
# on this homeserver. Defaults to 'true'.
#
#allow_profile_lookup_over_federation: false
"""

View File

@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from ._base import Config, ShardedWorkerHandlingConfig
class PushConfig(Config):
@@ -27,6 +27,9 @@ class PushConfig(Config):
"group_unread_count_by_room", True
)
pusher_instances = config.get("pusher_instances") or []
self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
# There was a 'redact_content' setting, but it was mistakenly read from the
# 'email' section. Check for the flag in the 'push' section, and log,
# but do not honour it to avoid nasty surprises when people upgrade.

View File

@@ -102,16 +102,6 @@ class RatelimitConfig(Config):
defaults={"per_second": 0.01, "burst_count": 3},
)
# Ratelimit cross-user key requests:
# * For local requests this is keyed by the sending device.
# * For requests received over federation this is keyed by the origin.
#
# Note that this isn't exposed in the configuration as it is obscure.
self.rc_key_requests = RateLimitConfig(
config.get("rc_key_requests", {}),
defaults={"per_second": 20, "burst_count": 100},
)
self.rc_3pid_validation = RateLimitConfig(
config.get("rc_3pid_validation") or {},
defaults={"per_second": 0.003, "burst_count": 5},

View File

@@ -206,6 +206,7 @@ class ContentRepositoryConfig(Config):
def generate_config_section(self, data_dir_path, **kwargs):
media_store = os.path.join(data_dir_path, "media_store")
uploads_path = os.path.join(data_dir_path, "uploads")
formatted_thumbnail_sizes = "".join(
THUMBNAIL_SIZE_YAML % s for s in DEFAULT_THUMBNAIL_SIZES

View File

@@ -263,12 +263,6 @@ class ServerConfig(Config):
False,
)
# Whether to retrieve and display profile data for a user when they
# are invited to a room
self.include_profile_data_on_invite = config.get(
"include_profile_data_on_invite", True
)
if "restrict_public_rooms_to_local_users" in config and (
"allow_public_rooms_without_auth" in config
or "allow_public_rooms_over_federation" in config
@@ -397,6 +391,7 @@ class ServerConfig(Config):
if self.public_baseurl is not None:
if self.public_baseurl[-1] != "/":
self.public_baseurl += "/"
self.start_pushers = config.get("start_pushers", True)
# (undocumented) option for torturing the worker-mode replication a bit,
# for testing. The value defines the number of milliseconds to pause before
@@ -853,14 +848,6 @@ class ServerConfig(Config):
#
#limit_profile_requests_to_users_who_share_rooms: true
# Uncomment to prevent a user's profile data from being retrieved and
# displayed in a room until they have joined it. By default, a user's
# profile data is included in an invite event, regardless of the values
# of the above two settings, and whether or not the users share a server.
# Defaults to 'true'.
#
#include_profile_data_on_invite: false
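The handler-side effect of this flag mirrors the EventCreationHandler change further down: it widens or narrows the set of membership events that get displayname/avatar attached. A toy sketch of that choice (hypothetical helper):

JOIN, INVITE = "join", "invite"

def membership_types_with_profile(include_profile_data_on_invite: bool) -> set:
    # Joins always carry profile data; invites only when the flag is on.
    return {JOIN, INVITE} if include_profile_data_on_invite else {JOIN}

assert membership_types_with_profile(False) == {JOIN}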
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.

View File

@@ -24,46 +24,32 @@ class UserDirectoryConfig(Config):
section = "userdirectory"
def read_config(self, config, **kwargs):
user_directory_config = config.get("user_directory") or {}
self.user_directory_search_enabled = user_directory_config.get("enabled", True)
self.user_directory_search_all_users = user_directory_config.get(
"search_all_users", False
)
self.user_directory_search_prefer_local_users = user_directory_config.get(
"prefer_local_users", False
)
self.user_directory_search_enabled = True
self.user_directory_search_all_users = False
user_directory_config = config.get("user_directory", None)
if user_directory_config:
self.user_directory_search_enabled = user_directory_config.get(
"enabled", True
)
self.user_directory_search_all_users = user_directory_config.get(
"search_all_users", False
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# User Directory configuration
#
user_directory:
# Defines whether users can search the user directory. If false then
# empty responses are returned to all queries. Defaults to true.
#
# Uncomment to disable the user directory.
#
#enabled: false
# Defines whether to search all users visible to your HS when searching
# the user directory, rather than limiting to users visible in public
# rooms. Defaults to false.
#
# If you set it true, you'll have to rebuild the user_directory search
# indexes, see:
# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.
#
#search_all_users: true
# Defines whether to prefer local users in search query results.
# If True, local users are more likely to appear above remote users
# when searching the user directory. Defaults to false.
#
# Uncomment to prefer local over remote users in user directory search
# results.
#
#prefer_local_users: true
# 'enabled' defines whether users can search the user directory. If
# false then empty responses are returned to all queries. Defaults to
# true.
#
# 'search_all_users' defines whether to search all users visible to your HS
# when searching the user directory, rather than limiting to users visible
# in public rooms. Defaults to false. If you set it to true, you'll have to
# rebuild the user_directory search indexes, see
# https://github.com/matrix-org/synapse/blob/master/docs/user_directory.md
#
#user_directory:
# enabled: true
# search_all_users: false
"""

View File

@@ -17,28 +17,9 @@ from typing import List, Union
import attr
from ._base import (
Config,
ConfigError,
RoutableShardedWorkerHandlingConfig,
ShardedWorkerHandlingConfig,
)
from ._base import Config, ConfigError, ShardedWorkerHandlingConfig
from .server import ListenerConfig, parse_listener_def
_FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR = """
The send_federation config option must be disabled in the main
synapse process before federation sending can be run in a separate worker.
Please add ``send_federation: false`` to the main config
"""
_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR = """
The start_pushers config option must be disabled in the main
synapse process before the pushers can be run in a separate worker.
Please add ``start_pushers: false`` to the main config
"""
def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
"""Helper for allowing parsing a string or list of strings to a config
@@ -122,7 +103,6 @@ class WorkerConfig(Config):
self.worker_replication_secret = config.get("worker_replication_secret", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.instance_name = self.worker_name or "master"
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
@@ -138,41 +118,12 @@ class WorkerConfig(Config):
)
)
# Handle federation sender configuration.
#
# There are two ways of configuring which instances handle federation
# sending:
# 1. The old way where "send_federation" is set to false and running a
# `synapse.app.federation_sender` worker app.
# 2. Specifying the workers sending federation in
# `federation_sender_instances`.
#
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
# "disable" federation
self.send_federation = config.get("send_federation", True)
send_federation = config.get("send_federation", True)
federation_sender_instances = config.get("federation_sender_instances")
if federation_sender_instances is None:
# Default to an empty list, which means "another, unknown, worker is
# responsible for it".
federation_sender_instances = []
# If no federation sender instances are set we check if
# `send_federation` is set, which means use master
if send_federation:
federation_sender_instances = ["master"]
if self.worker_app == "synapse.app.federation_sender":
if send_federation:
# If we're running federation senders, and not using
# `federation_sender_instances`, then we should have
# explicitly set `send_federation` to false.
raise ConfigError(
_FEDERATION_SENDER_WITH_SEND_FEDERATION_ENABLED_ERROR
)
federation_sender_instances = [self.worker_name]
self.send_federation = self.instance_name in federation_sender_instances
federation_sender_instances = config.get("federation_sender_instances") or []
self.federation_shard_config = ShardedWorkerHandlingConfig(
federation_sender_instances
)
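Condensing the two configuration styles the comments above describe: the sender list comes from federation_sender_instances when present, otherwise from the legacy send_federation flag, and this instance sends iff it is in the list. A standalone sketch with simplified error handling:

def resolve_federation_senders(config: dict, instance_name: str, worker_app):
    senders = config.get("federation_sender_instances")
    if senders is None:
        # Old style: the master sends unless send_federation is false.
        senders = ["master"] if config.get("send_federation", True) else []
        if worker_app == "synapse.app.federation_sender":
            if senders:
                raise ValueError("set send_federation: false in the main config")
            senders = [instance_name]
    return instance_name in senders, senders

# Legacy single-process setup: the master does the sending.
assert resolve_federation_senders({}, "master", None) == (True, ["master"])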
@@ -213,37 +164,7 @@ class WorkerConfig(Config):
"Must only specify one instance to handle `receipts` messages."
)
if len(self.writers.events) == 0:
raise ConfigError("Must specify at least one instance to handle `events`.")
self.events_shard_config = RoutableShardedWorkerHandlingConfig(
self.writers.events
)
# Handle sharded push
start_pushers = config.get("start_pushers", True)
pusher_instances = config.get("pusher_instances")
if pusher_instances is None:
# Default to an empty list, which means "another, unknown, worker is
# responsible for it".
pusher_instances = []
# If no pushers instances are set we check if `start_pushers` is
# set, which means use master
if start_pushers:
pusher_instances = ["master"]
if self.worker_app == "synapse.app.pusher":
if start_pushers:
# If we're running pushers, and not using
# `pusher_instances`, then we should have explicitly set
# `start_pushers` to false.
raise ConfigError(_PUSHER_WITH_START_PUSHERS_ENABLED_ERROR)
pusher_instances = [self.instance_name]
self.start_pushers = self.instance_name in pusher_instances
self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)
self.events_shard_config = ShardedWorkerHandlingConfig(self.writers.events)
# Whether this worker should run background tasks or not.
#

View File

@@ -34,7 +34,7 @@ from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import (
AuthError,
Codes,
@@ -44,7 +44,6 @@ from synapse.api.errors import (
SynapseError,
UnsupportedRoomVersionError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
@@ -870,13 +869,6 @@ class FederationHandlerRegistry:
# EDU received.
self._edu_type_to_instance = {} # type: Dict[str, List[str]]
# A rate limiter for incoming room key requests per origin.
self._room_key_request_rate_limiter = Ratelimiter(
clock=self.clock,
rate_hz=self.config.rc_key_requests.per_second,
burst_count=self.config.rc_key_requests.burst_count,
)
def register_edu_handler(
self, edu_type: str, handler: Callable[[str, JsonDict], Awaitable[None]]
):
@@ -925,15 +917,7 @@ class FederationHandlerRegistry:
self._edu_type_to_instance[edu_type] = instance_names
async def on_edu(self, edu_type: str, origin: str, content: dict):
if not self.config.use_presence and edu_type == EduTypes.Presence:
return
# If the incoming room key requests from a particular origin are over
# the limit, drop them.
if (
edu_type == EduTypes.RoomKeyRequest
and not self._room_key_request_rate_limiter.can_do_action(origin)
):
if not self.config.use_presence and edu_type == "m.presence":
return
# Check if we have a handler on this instance
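The behaviour being removed is a plain per-origin bucket check: if an origin has sent too many m.room_key_request EDUs recently, the EDU is silently dropped rather than rejected. A toy standalone version of such a check (time injected for testability; not Synapse's Ratelimiter):

class TokenBucket:
    def __init__(self, rate_hz: float, burst_count: int):
        self.rate_hz, self.burst_count = rate_hz, burst_count
        self.allowance, self.last = float(burst_count), 0.0

    def can_do_action(self, now: float) -> bool:
        # Refill in proportion to elapsed time, capped at the burst size.
        self.allowance = min(
            self.burst_count, self.allowance + (now - self.last) * self.rate_hz
        )
        self.last = now
        if self.allowance < 1:
            return False  # over the limit: the caller drops the EDU
        self.allowance -= 1
        return True

bucket = TokenBucket(rate_hz=20, burst_count=100)
assert all(bucket.can_do_action(0.0) for _ in range(100))
assert not bucket.can_do_action(0.0)  # burst exhausted, no time elapsed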

View File

@@ -474,7 +474,7 @@ class FederationSender:
self._processing_pending_presence = False
def send_presence_to_destinations(
self, states: Iterable[UserPresenceState], destinations: Iterable[str]
self, states: List[UserPresenceState], destinations: List[str]
) -> None:
"""Send the given presence states to the given destinations.
destinations (list[str])

View File

@@ -21,7 +21,6 @@ import re
from typing import Optional, Tuple, Type
import synapse
from synapse.api.constants import MAX_GROUP_CATEGORYID_LENGTH, MAX_GROUP_ROLEID_LENGTH
from synapse.api.errors import Codes, FederationDeniedError, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.api.urls import (
@@ -484,9 +483,10 @@ class FederationQueryServlet(BaseFederationServlet):
# This is when we receive a server-server Query
async def on_GET(self, origin, content, query, query_type):
args = {k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()}
args["origin"] = origin
return await self.handler.on_query_request(query_type, args)
return await self.handler.on_query_request(
query_type,
{k.decode("utf8"): v[0].decode("utf-8") for k, v in query.items()},
)
class FederationMakeJoinServlet(BaseFederationServlet):
@@ -1118,17 +1118,7 @@ class FederationGroupsSummaryRoomsServlet(BaseFederationServlet):
raise SynapseError(403, "requester_user_id doesn't match origin")
if category_id == "":
raise SynapseError(
400, "category_id cannot be empty string", Codes.INVALID_PARAM
)
if len(category_id) > MAX_GROUP_CATEGORYID_LENGTH:
raise SynapseError(
400,
"category_id may not be longer than %s characters"
% (MAX_GROUP_CATEGORYID_LENGTH,),
Codes.INVALID_PARAM,
)
raise SynapseError(400, "category_id cannot be empty string")
resp = await self.handler.update_group_summary_room(
group_id,
@@ -1194,14 +1184,6 @@ class FederationGroupsCategoryServlet(BaseFederationServlet):
if category_id == "":
raise SynapseError(400, "category_id cannot be empty string")
if len(category_id) > MAX_GROUP_CATEGORYID_LENGTH:
raise SynapseError(
400,
"category_id may not be longer than %s characters"
% (MAX_GROUP_CATEGORYID_LENGTH,),
Codes.INVALID_PARAM,
)
resp = await self.handler.upsert_group_category(
group_id, requester_user_id, category_id, content
)
@@ -1258,17 +1240,7 @@ class FederationGroupsRoleServlet(BaseFederationServlet):
raise SynapseError(403, "requester_user_id doesn't match origin")
if role_id == "":
raise SynapseError(
400, "role_id cannot be empty string", Codes.INVALID_PARAM
)
if len(role_id) > MAX_GROUP_ROLEID_LENGTH:
raise SynapseError(
400,
"role_id may not be longer than %s characters"
% (MAX_GROUP_ROLEID_LENGTH,),
Codes.INVALID_PARAM,
)
raise SynapseError(400, "role_id cannot be empty string")
resp = await self.handler.update_group_role(
group_id, requester_user_id, role_id, content
@@ -1313,14 +1285,6 @@ class FederationGroupsSummaryUsersServlet(BaseFederationServlet):
if role_id == "":
raise SynapseError(400, "role_id cannot be empty string")
if len(role_id) > MAX_GROUP_ROLEID_LENGTH:
raise SynapseError(
400,
"role_id may not be longer than %s characters"
% (MAX_GROUP_ROLEID_LENGTH,),
Codes.INVALID_PARAM,
)
resp = await self.handler.update_group_summary_user(
group_id,
requester_user_id,

View File

@@ -37,16 +37,13 @@ An attestation is a signed blob of json that looks like:
import logging
import random
from typing import TYPE_CHECKING, Optional, Tuple
from typing import Tuple
from signedjson.sign import sign_json
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, get_domain_from_id
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
from synapse.types import get_domain_from_id
logger = logging.getLogger(__name__)
@@ -66,19 +63,15 @@ UPDATE_ATTESTATION_TIME_MS = 1 * 24 * 60 * 60 * 1000
class GroupAttestationSigning:
"""Creates and verifies group attestations."""
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
self.keyring = hs.get_keyring()
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.signing_key = hs.signing_key
async def verify_attestation(
self,
attestation: JsonDict,
group_id: str,
user_id: str,
server_name: Optional[str] = None,
) -> None:
self, attestation, group_id, user_id, server_name=None
):
"""Verifies that the given attestation matches the given parameters.
An optional server_name can be supplied to explicitly set which server's
@@ -107,18 +100,16 @@ class GroupAttestationSigning:
if valid_until_ms < now:
raise SynapseError(400, "Attestation expired")
assert server_name is not None
await self.keyring.verify_json_for_server(
server_name, attestation, now, "Group attestation"
)
def create_attestation(self, group_id: str, user_id: str) -> JsonDict:
def create_attestation(self, group_id, user_id):
"""Create an attestation for the group_id and user_id with default
validity length.
"""
validity_period = DEFAULT_ATTESTATION_LENGTH_MS * random.uniform(
*DEFAULT_ATTESTATION_JITTER
)
validity_period = DEFAULT_ATTESTATION_LENGTH_MS
validity_period *= random.uniform(*DEFAULT_ATTESTATION_JITTER)
valid_until_ms = int(self.clock.time_msec() + validity_period)
return sign_json(
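Old and new spellings compute the same jittered validity window, so a group's attestations do not all expire at once. A standalone sketch (both constants here are assumptions; only UPDATE_ATTESTATION_TIME_MS is visible in this diff):

import random

DEFAULT_ATTESTATION_LENGTH_MS = 3 * 24 * 60 * 60 * 1000  # assumed: ~3 days
DEFAULT_ATTESTATION_JITTER = (0.9, 1.3)                   # assumed spread

def attestation_valid_until_ms(now_ms: int) -> int:
    # Scale the base validity by a random factor so renewals spread out.
    validity_period = DEFAULT_ATTESTATION_LENGTH_MS * random.uniform(
        *DEFAULT_ATTESTATION_JITTER
    )
    return int(now_ms + validity_period)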
@@ -135,7 +126,7 @@ class GroupAttestationSigning:
class GroupAttestionRenewer:
"""Responsible for sending and receiving attestation updates."""
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.assestations = hs.get_groups_attestation_signing()
@@ -148,9 +139,7 @@ class GroupAttestionRenewer:
self._start_renew_attestations, 30 * 60 * 1000
)
async def on_renew_attestation(
self, group_id: str, user_id: str, content: JsonDict
) -> JsonDict:
async def on_renew_attestation(self, group_id, user_id, content):
"""When a remote updates an attestation"""
attestation = content["attestation"]
@@ -165,10 +154,10 @@ class GroupAttestionRenewer:
return {}
def _start_renew_attestations(self) -> None:
def _start_renew_attestations(self):
return run_as_background_process("renew_attestations", self._renew_attestations)
async def _renew_attestations(self) -> None:
async def _renew_attestations(self):
"""Called periodically to check if we need to update any of our attestations"""
now = self.clock.time_msec()
@@ -177,7 +166,7 @@ class GroupAttestionRenewer:
now + UPDATE_ATTESTATION_TIME_MS
)
async def _renew_attestation(group_user: Tuple[str, str]) -> None:
async def _renew_attestation(group_user: Tuple[str, str]):
group_id, user_id = group_user
try:
if not self.is_mine_id(group_id):

View File

@@ -16,17 +16,12 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Optional
from synapse.api.errors import Codes, SynapseError
from synapse.handlers.groups_local import GroupsLocalHandler
from synapse.handlers.profile import MAX_AVATAR_URL_LEN, MAX_DISPLAYNAME_LEN
from synapse.types import GroupID, JsonDict, RoomID, UserID, get_domain_from_id
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from synapse.util.async_helpers import concurrently_execute
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
@@ -44,7 +39,7 @@ MAX_LONG_DESC_LEN = 10000
class GroupsServerWorkerHandler:
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
self.room_list_handler = hs.get_room_list_handler()
@@ -59,21 +54,16 @@ class GroupsServerWorkerHandler:
self.profile_handler = hs.get_profile_handler()
async def check_group_is_ours(
self,
group_id: str,
requester_user_id: str,
and_exists: bool = False,
and_is_admin: Optional[str] = None,
) -> Optional[dict]:
self, group_id, requester_user_id, and_exists=False, and_is_admin=None
):
"""Check that the group is ours, and optionally if it exists.
If group does exist then return group.
Args:
group_id: The group ID to check.
requester_user_id: The user ID of the requester.
and_exists: whether to also check if group exists
and_is_admin: whether to also check if given str is a user_id
group_id (str)
and_exists (bool): whether to also check if group exists
and_is_admin (str): whether to also check if given str is a user_id
that is an admin
"""
if not self.is_mine_id(group_id):
@@ -96,9 +86,7 @@ class GroupsServerWorkerHandler:
return group
async def get_group_summary(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_group_summary(self, group_id, requester_user_id):
"""Get the summary for a group as seen by requester_user_id.
The group summary consists of the profile of the room, and a curated
@@ -131,8 +119,6 @@ class GroupsServerWorkerHandler:
entry = await self.room_list_handler.generate_room_entry(
room_id, len(joined_users), with_alias=False, allow_private=True
)
if entry is None:
continue
entry = dict(entry) # so we don't change what's cached
entry.pop("room_id", None)
@@ -140,22 +126,22 @@ class GroupsServerWorkerHandler:
rooms.sort(key=lambda e: e.get("order", 0))
for user in users:
user_id = user["user_id"]
for entry in users:
user_id = entry["user_id"]
if not self.is_mine_id(requester_user_id):
attestation = await self.store.get_remote_attestation(group_id, user_id)
if not attestation:
continue
user["attestation"] = attestation
entry["attestation"] = attestation
else:
user["attestation"] = self.attestations.create_attestation(
entry["attestation"] = self.attestations.create_attestation(
group_id, user_id
)
user_profile = await self.profile_handler.get_profile_from_cache(user_id)
user.update(user_profile)
entry.update(user_profile)
users.sort(key=lambda e: e.get("order", 0))
@@ -178,43 +164,40 @@ class GroupsServerWorkerHandler:
"user": membership_info,
}
async def get_group_categories(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
categories = await self.store.get_group_categories(group_id=group_id)
return {"categories": categories}
async def get_group_category(
self, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
async def get_group_category(self, group_id, requester_user_id, category_id):
"""Get a specific category in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
return await self.store.get_group_category(
res = await self.store.get_group_category(
group_id=group_id, category_id=category_id
)
async def get_group_roles(self, group_id: str, requester_user_id: str) -> JsonDict:
logger.info("group %s", res)
return res
async def get_group_roles(self, group_id, requester_user_id):
"""Get all roles in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
roles = await self.store.get_group_roles(group_id=group_id)
return {"roles": roles}
async def get_group_role(
self, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
async def get_group_role(self, group_id, requester_user_id, role_id):
"""Get a specific role in a group (as seen by user)"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
return await self.store.get_group_role(group_id=group_id, role_id=role_id)
res = await self.store.get_group_role(group_id=group_id, role_id=role_id)
return res
async def get_group_profile(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_group_profile(self, group_id, requester_user_id):
"""Get the group profile as seen by requester_user_id"""
await self.check_group_is_ours(group_id, requester_user_id)
@@ -236,9 +219,7 @@ class GroupsServerWorkerHandler:
else:
raise SynapseError(404, "Unknown group")
async def get_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_users_in_group(self, group_id, requester_user_id):
"""Get the users in group as seen by requester_user_id.
The ordering is arbitrary at the moment
@@ -287,9 +268,7 @@ class GroupsServerWorkerHandler:
return {"chunk": chunk, "total_user_count_estimate": len(user_results)}
async def get_invited_users_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_invited_users_in_group(self, group_id, requester_user_id):
"""Get the users that have been invited to a group as seen by requester_user_id.
The ordering is arbitrary at the moment
@@ -319,9 +298,7 @@ class GroupsServerWorkerHandler:
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}
async def get_rooms_in_group(
self, group_id: str, requester_user_id: str
) -> JsonDict:
async def get_rooms_in_group(self, group_id, requester_user_id):
"""Get the rooms in group as seen by requester_user_id
This returns rooms in order of decreasing number of joined users
@@ -359,20 +336,15 @@ class GroupsServerWorkerHandler:
class GroupsServerHandler(GroupsServerWorkerHandler):
def __init__(self, hs: "HomeServer"):
def __init__(self, hs):
super().__init__(hs)
# Ensure attestations get renewed
hs.get_groups_attestation_renewer()
async def update_group_summary_room(
self,
group_id: str,
requester_user_id: str,
room_id: str,
category_id: str,
content: JsonDict,
) -> JsonDict:
self, group_id, requester_user_id, room_id, category_id, content
):
"""Add/update a room to the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -395,8 +367,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def delete_group_summary_room(
self, group_id: str, requester_user_id: str, room_id: str, category_id: str
) -> JsonDict:
self, group_id, requester_user_id, room_id, category_id
):
"""Remove a room from the summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -408,9 +380,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def set_group_join_policy(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
async def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
@@ -430,8 +400,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def update_group_category(
self, group_id: str, requester_user_id: str, category_id: str, content: JsonDict
) -> JsonDict:
self, group_id, requester_user_id, category_id, content
):
"""Add/Update a group category"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -449,9 +419,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def delete_group_category(
self, group_id: str, requester_user_id: str, category_id: str
) -> JsonDict:
async def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -463,9 +431,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def update_group_role(
self, group_id: str, requester_user_id: str, role_id: str, content: JsonDict
) -> JsonDict:
async def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -481,9 +447,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def delete_group_role(
self, group_id: str, requester_user_id: str, role_id: str
) -> JsonDict:
async def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -494,13 +458,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def update_group_summary_user(
self,
group_id: str,
requester_user_id: str,
user_id: str,
role_id: str,
content: JsonDict,
) -> JsonDict:
self, group_id, requester_user_id, user_id, role_id, content
):
"""Add/update a users entry in the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -521,8 +480,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def delete_group_summary_user(
self, group_id: str, requester_user_id: str, user_id: str, role_id: str
) -> JsonDict:
self, group_id, requester_user_id, user_id, role_id
):
"""Remove a user from the group summary"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -534,9 +493,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def update_group_profile(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> None:
async def update_group_profile(self, group_id, requester_user_id, content):
"""Update the group profile"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -567,9 +524,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
await self.store.update_group_profile(group_id, profile)
async def add_room_to_group(
self, group_id: str, requester_user_id: str, room_id: str, content: JsonDict
) -> JsonDict:
async def add_room_to_group(self, group_id, requester_user_id, room_id, content):
"""Add room to group"""
RoomID.from_string(room_id) # Ensure valid room id
@@ -584,13 +539,8 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def update_room_in_group(
self,
group_id: str,
requester_user_id: str,
room_id: str,
config_key: str,
content: JsonDict,
) -> JsonDict:
self, group_id, requester_user_id, room_id, config_key, content
):
"""Update room in group"""
RoomID.from_string(room_id) # Ensure valid room id
@@ -609,9 +559,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def remove_room_from_group(
self, group_id: str, requester_user_id: str, room_id: str
) -> JsonDict:
async def remove_room_from_group(self, group_id, requester_user_id, room_id):
"""Remove room from group"""
await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
@@ -621,16 +569,12 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def invite_to_group(
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
async def invite_to_group(self, group_id, user_id, requester_user_id, content):
"""Invite user to group"""
group = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
if not group:
raise SynapseError(400, "Group does not exist", errcode=Codes.BAD_STATE)
# TODO: Check if user knocked
@@ -653,9 +597,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot invites users to groups."
res = await groups_local.on_invite(group_id, user_id, content)
local_attestation = None
else:
@@ -691,7 +632,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
local_attestation=local_attestation,
remote_attestation=remote_attestation,
)
return {"state": "join"}
elif res["state"] == "invite":
await self.store.add_group_invite(group_id, user_id)
return {"state": "invite"}
@@ -700,17 +640,13 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
else:
raise SynapseError(502, "Unknown state returned by HS")
async def _add_user(
self, group_id: str, user_id: str, content: JsonDict
) -> Optional[JsonDict]:
async def _add_user(self, group_id, user_id, content):
"""Add a user to a group based on a content dict.
See accept_invite, join_group.
"""
if not self.hs.is_mine_id(user_id):
local_attestation = self.attestations.create_attestation(
group_id, user_id
) # type: Optional[JsonDict]
local_attestation = self.attestations.create_attestation(group_id, user_id)
remote_attestation = content["attestation"]
@@ -734,9 +670,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return local_attestation
async def accept_invite(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
async def accept_invite(self, group_id, requester_user_id, content):
"""User tries to accept an invite to the group.
This is different from them asking to join, and so should error if no
@@ -755,9 +689,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {"state": "join", "attestation": local_attestation}
async def join_group(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
async def join_group(self, group_id, requester_user_id, content):
"""User tries to join the group.
This will error if the group requires an invite/knock to join
@@ -766,8 +698,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
group_info = await self.check_group_is_ours(
group_id, requester_user_id, and_exists=True
)
if not group_info:
raise SynapseError(404, "Group does not exist", errcode=Codes.NOT_FOUND)
if group_info["join_policy"] != "open":
raise SynapseError(403, "Group is not publicly joinable")
@@ -775,9 +705,25 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {"state": "join", "attestation": local_attestation}
async def knock(self, group_id, requester_user_id, content):
"""A user requests becoming a member of the group"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
async def accept_knock(self, group_id, requester_user_id, content):
"""Accept a users knock to the room.
Errors if the user hasn't knocked, rather than inviting them.
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
raise NotImplementedError()
async def remove_user_from_group(
self, group_id: str, user_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
self, group_id, user_id, requester_user_id, content
):
"""Remove a user from the group; either a user is leaving or an admin
kicked them.
"""
@@ -799,9 +745,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if is_kick:
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot remove users from groups."
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
@@ -818,15 +761,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {}
async def create_group(
self, group_id: str, requester_user_id: str, content: JsonDict
) -> JsonDict:
async def create_group(self, group_id, requester_user_id, content):
group = await self.check_group_is_ours(group_id, requester_user_id)
logger.info("Attempting to create group with ID: %r", group_id)
# parsing the id into a GroupID validates it.
group_id_obj = GroupID.from_string(group_id)
group = await self.check_group_is_ours(group_id, requester_user_id)
if group:
raise SynapseError(400, "Group already exists")
@@ -871,7 +813,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
local_attestation = self.attestations.create_attestation(
group_id, requester_user_id
) # type: Optional[JsonDict]
)
else:
local_attestation = None
remote_attestation = None
@@ -894,14 +836,15 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {"group_id": group_id}
async def delete_group(self, group_id: str, requester_user_id: str) -> None:
async def delete_group(self, group_id, requester_user_id):
"""Deletes a group, kicking out all current members.
Only group admins or server admins can call this request
Args:
group_id: The group ID to delete.
requester_user_id: The user requesting to delete the group.
group_id (str)
requester_user_id (str)
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
@@ -924,9 +867,6 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
async def _kick_user_from_group(user_id):
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
assert isinstance(
groups_local, GroupsLocalHandler
), "Workers cannot kick users from groups."
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
@@ -958,7 +898,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
await self.store.delete_group(group_id)
def _parse_join_policy_from_contents(content: JsonDict) -> Optional[str]:
def _parse_join_policy_from_contents(content):
"""Given a content for a request, return the specified join policy or None"""
join_policy_dict = content.get("m.join_policy")
@@ -968,7 +908,7 @@ def _parse_join_policy_from_contents(content: JsonDict) -> Optional[str]:
return None
def _parse_join_policy_dict(join_policy_dict: JsonDict) -> str:
def _parse_join_policy_dict(join_policy_dict):
"""Given a dict for the "m.join_policy" config return the join policy specified"""
join_policy_type = join_policy_dict.get("type")
if not join_policy_type:
@@ -979,7 +919,7 @@ def _parse_join_policy_dict(join_policy_dict: JsonDict) -> str:
return join_policy_type
def _parse_visibility_from_contents(content: JsonDict) -> bool:
def _parse_visibility_from_contents(content):
"""Given a content for a request parse out whether the entity should be
public or not
"""
@@ -993,7 +933,7 @@ def _parse_visibility_from_contents(content: JsonDict) -> bool:
return is_public
def _parse_visibility_dict(visibility: JsonDict) -> bool:
def _parse_visibility_dict(visibility):
"""Given a dict for the "m.visibility" config return if the entity should
be public or not
"""

View File

@@ -36,7 +36,7 @@ import attr
import bcrypt
import pymacaroons
from twisted.web.server import Request
from twisted.web.http import Request
from synapse.api.constants import LoginType
from synapse.api.errors import (
@@ -481,7 +481,7 @@ class AuthHandler(BaseHandler):
sid = authdict["session"]
# Convert the URI and method to strings.
uri = request.uri.decode("utf-8") # type: ignore
uri = request.uri.decode("utf-8")
method = request.method.decode("utf-8")
# If there's no session ID, create a new session.

View File

@@ -120,11 +120,6 @@ class DeactivateAccountHandler(BaseHandler):
await self.store.user_set_password_hash(user_id, None)
# Most of the pushers will have been deleted when we logged out the
# associated devices above, but we still need to delete pushers not
# associated with devices, e.g. email pushers.
await self.store.delete_all_pushers_for_user(user_id)
# Add the user to a table of users pending deactivation (ie.
# removal from all the rooms they're a member of)
await self.store.add_user_pending_deactivation(user_id)

View File

@@ -16,9 +16,7 @@
import logging
from typing import TYPE_CHECKING, Any, Dict
from synapse.api.constants import EduTypes
from synapse.api.errors import SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
get_active_span_text_map,
@@ -27,7 +25,7 @@ from synapse.logging.opentracing import (
start_active_span,
)
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.types import JsonDict, Requester, UserID, get_domain_from_id
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util import json_encoder
from synapse.util.stringutils import random_string
@@ -80,12 +78,6 @@ class DeviceMessageHandler:
ReplicationUserDevicesResyncRestServlet.make_client(hs)
)
self._ratelimiter = Ratelimiter(
clock=hs.get_clock(),
rate_hz=hs.config.rc_key_requests.per_second,
burst_count=hs.config.rc_key_requests.burst_count,
)
async def on_direct_to_device_edu(self, origin: str, content: JsonDict) -> None:
local_messages = {}
sender_user_id = content["sender"]
@@ -176,27 +168,15 @@ class DeviceMessageHandler:
async def send_device_message(
self,
requester: Requester,
sender_user_id: str,
message_type: str,
messages: Dict[str, Dict[str, JsonDict]],
) -> None:
sender_user_id = requester.user.to_string()
set_tag("number_of_messages", len(messages))
set_tag("sender", sender_user_id)
local_messages = {}
remote_messages = {} # type: Dict[str, Dict[str, Dict[str, JsonDict]]]
for user_id, by_device in messages.items():
# Ratelimit local cross-user key requests by the sending device.
if (
message_type == EduTypes.RoomKeyRequest
and user_id != sender_user_id
and self._ratelimiter.can_do_action(
(sender_user_id, requester.device_id)
)
):
continue
# we use UserID.from_string to catch invalid user ids
if self.is_mine(UserID.from_string(user_id)):
messages_by_device = {

View File

@@ -17,7 +17,7 @@ import logging
import random
from typing import TYPE_CHECKING, Iterable, List, Optional
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
@@ -113,7 +113,7 @@ class EventStreamHandler(BaseHandler):
states = await presence_handler.get_states(users)
to_add.extend(
{
"type": EduTypes.Presence,
"type": EventTypes.Presence,
"content": format_user_presence_state(state, time_now),
}
for state in states

View File

@@ -18,7 +18,7 @@ from typing import TYPE_CHECKING, Optional, Tuple
from twisted.internet import defer
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.events.validator import EventValidator
from synapse.handlers.presence import format_user_presence_state
@@ -412,7 +412,7 @@ class InitialSyncHandler(BaseHandler):
return [
{
"type": EduTypes.Presence,
"type": EventTypes.Presence,
"content": format_user_presence_state(s, time_now),
}
for s in states

View File

@@ -387,12 +387,6 @@ class EventCreationHandler:
self.room_invite_state_types = self.hs.config.room_invite_state_types
self.membership_types_to_include_profile_data_in = (
{Membership.JOIN, Membership.INVITE}
if self.hs.config.include_profile_data_on_invite
else {Membership.JOIN}
)
self.send_event = ReplicationSendEventRestServlet.make_client(hs)
# This is only used to get at ratelimit function, and maybe_kick_guest_users
@@ -506,7 +500,7 @@ class EventCreationHandler:
membership = builder.content.get("membership", None)
target = UserID.from_string(builder.state_key)
if membership in self.membership_types_to_include_profile_data_in:
if membership in {Membership.JOIN, Membership.INVITE}:
# If event doesn't include a display name, add one.
profile = self.profile_handler
content = builder.content

View File

@@ -274,25 +274,22 @@ class PresenceHandler(BasePresenceHandler):
self.external_sync_linearizer = Linearizer(name="external_sync_linearizer")
if self._presence_enabled:
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
def run_timeout_handler():
return run_as_background_process(
"handle_presence_timeouts", self._handle_timeouts
)
self.clock.call_later(
30, self.clock.looping_call, run_timeout_handler, 5000
# Start a LoopingCall in 30s that fires every 5s.
# The initial delay is to allow disconnected clients a chance to
# reconnect before we treat them as offline.
def run_timeout_handler():
return run_as_background_process(
"handle_presence_timeouts", self._handle_timeouts
)
def run_persister():
return run_as_background_process(
"persist_presence_changes", self._persist_unpersisted_changes
)
self.clock.call_later(30, self.clock.looping_call, run_timeout_handler, 5000)
self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
def run_persister():
return run_as_background_process(
"persist_presence_changes", self._persist_unpersisted_changes
)
self.clock.call_later(60, self.clock.looping_call, run_persister, 60 * 1000)
LaterGauge(
"synapse_handlers_presence_wheel_timer_size",
@@ -302,7 +299,7 @@ class PresenceHandler(BasePresenceHandler):
)
# Used to handle sending of presence to newly joined users/servers
if self._presence_enabled:
if hs.config.use_presence:
self.notifier.add_replication_callback(self.notify_new_event)
# Presence is best effort and quickly heals itself, so lets just always
@@ -352,13 +349,10 @@ class PresenceHandler(BasePresenceHandler):
[self.user_to_current_state[user_id] for user_id in unpersisted]
)
async def _update_states(self, new_states: Iterable[UserPresenceState]) -> None:
async def _update_states(self, new_states):
"""Updates presence of users. Sets the appropriate timeouts. Pokes
the notifier and federation if and only if the changed presence state
should be sent to clients/servers.
Args:
new_states: The new user presence state updates to process.
"""
now = self.clock.time_msec()
@@ -374,7 +368,7 @@ class PresenceHandler(BasePresenceHandler):
new_states_dict = {}
for new_state in new_states:
new_states_dict[new_state.user_id] = new_state
new_states = new_states_dict.values()
new_state = new_states_dict.values()
for new_state in new_states:
user_id = new_state.user_id
@@ -663,6 +657,17 @@ class PresenceHandler(BasePresenceHandler):
self._push_to_remotes(states)
async def notify_for_states(self, state, stream_id):
parties = await get_interested_parties(self.store, [state])
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key",
stream_id,
rooms=room_ids_to_states.keys(),
users=[UserID.from_string(u) for u in users_to_states],
)
def _push_to_remotes(self, states):
"""Sends state updates to remote servers.
@@ -852,9 +857,6 @@ class PresenceHandler(BasePresenceHandler):
"""Process current state deltas to find new joins that need to be
handled.
"""
# A map of destination to a set of user state that they should receive
presence_destinations = {} # type: Dict[str, Set[UserPresenceState]]
for delta in deltas:
typ = delta["type"]
state_key = delta["state_key"]
@@ -864,7 +866,6 @@ class PresenceHandler(BasePresenceHandler):
logger.debug("Handling: %r %r, %s", typ, state_key, event_id)
# Drop any event that isn't a membership join
if typ != EventTypes.Member:
continue
@@ -887,38 +888,13 @@ class PresenceHandler(BasePresenceHandler):
# Ignore changes to join events.
continue
# Retrieve any user presence state updates that need to be sent as a result,
# and the destinations that need to receive it
destinations, user_presence_states = await self._on_user_joined_room(
room_id, state_key
)
await self._on_user_joined_room(room_id, state_key)
# Insert the destinations and respective updates into our destinations dict
for destination in destinations:
presence_destinations.setdefault(destination, set()).update(
user_presence_states
)
# Send out user presence updates for each destination
for destination, user_state_set in presence_destinations.items():
self.federation.send_presence_to_destinations(
destinations=[destination], states=user_state_set
)
async def _on_user_joined_room(
self, room_id: str, user_id: str
) -> Tuple[List[str], List[UserPresenceState]]:
async def _on_user_joined_room(self, room_id: str, user_id: str) -> None:
"""Called when we detect a user joining the room via the current state
delta stream. Returns the destinations that need to be updated and the
presence updates to send to them.
Args:
room_id: The ID of the room that the user has joined.
user_id: The ID of the user that has joined the room.
Returns:
A tuple of destinations and presence updates to send to them.
delta stream.
"""
if self.is_mine_id(user_id):
# If this is a local user then we need to send their presence
# out to hosts in the room (who don't already have it)
@@ -926,15 +902,15 @@ class PresenceHandler(BasePresenceHandler):
# TODO: We should be able to filter the hosts down to those that
# haven't previously seen the user
remote_hosts = await self.state.get_current_hosts_in_room(room_id)
state = await self.current_state_for_user(user_id)
hosts = await self.state.get_current_hosts_in_room(room_id)
# Filter out ourselves.
filtered_remote_hosts = [
host for host in remote_hosts if host != self.server_name
]
hosts = {host for host in hosts if host != self.server_name}
state = await self.current_state_for_user(user_id)
return filtered_remote_hosts, [state]
self.federation.send_presence_to_destinations(
states=[state], destinations=hosts
)
else:
# A remote user has joined the room, so we need to:
# 1. Check if this is a new server in the room
@@ -947,8 +923,6 @@ class PresenceHandler(BasePresenceHandler):
# TODO: Check that this is actually a new server joining the
# room.
remote_host = get_domain_from_id(user_id)
users = await self.state.get_current_users_in_room(room_id)
user_ids = list(filter(self.is_mine_id, users))
@@ -968,7 +942,10 @@ class PresenceHandler(BasePresenceHandler):
or state.status_msg is not None
]
return [remote_host], states
if states:
self.federation.send_presence_to_destinations(
states=states, destinations=[get_domain_from_id(user_id)]
)
def should_notify(old_state, new_state):
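The removed branch above batched presence updates per destination before sending, rather than sending once per join. A standalone sketch of that accumulation pattern, with the state and destination types simplified to strings:

    from typing import Dict, Iterable, List, Set, Tuple

    def batch_by_destination(
        joins: Iterable[Tuple[List[str], List[str]]],
    ) -> Dict[str, Set[str]]:
        """Collect, per destination, the set of presence states it should receive."""
        presence_destinations = {}  # type: Dict[str, Set[str]]
        for destinations, states in joins:
            for destination in destinations:
                # setdefault creates the set the first time a destination is seen.
                presence_destinations.setdefault(destination, set()).update(states)
        return presence_destinations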

View File

@@ -310,15 +310,6 @@ class ProfileHandler(BaseHandler):
await self._update_join_states(requester, target_user)
async def on_profile_query(self, args: JsonDict) -> JsonDict:
"""Handles federation profile query requests."""
if not self.hs.config.allow_profile_lookup_over_federation:
raise SynapseError(
403,
"Profile lookup over federation is disabled on this homeserver",
Codes.FORBIDDEN,
)
user = UserID.from_string(args["user_id"])
if not self.hs.is_mine(user):
raise SynapseError(400, "User is not hosted on this homeserver")

View File

@@ -31,8 +31,8 @@ from urllib.parse import urlencode
import attr
from typing_extensions import NoReturn, Protocol
from twisted.web.http import Request
from twisted.web.iweb import IRequest
from twisted.web.server import Request
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, NotFoundError, RedirectException, SynapseError

View File

@@ -143,10 +143,6 @@ class UserDirectoryHandler(StateDeltasHandler):
if self.pos is None:
self.pos = await self.store.get_user_directory_stream_pos()
# If still None then the initial background update hasn't happened yet.
if self.pos is None:
return None
# Loop round handling deltas until we're up to date
while True:
with Measure(self.clock, "user_dir_delta"):

View File

@@ -14,9 +14,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import re
from typing import Union
from twisted.internet import address, task
from twisted.internet import task
from twisted.web.client import FileBodyProducer
from twisted.web.iweb import IRequest
@@ -54,40 +53,6 @@ class QuieterFileBodyProducer(FileBodyProducer):
pass
def get_request_uri(request: IRequest) -> bytes:
"""Return the full URI that was requested by the client"""
return b"%s://%s%s" % (
b"https" if request.isSecure() else b"http",
_get_requested_host(request),
# despite its name, "request.uri" is only the path and query-string.
request.uri,
)
def _get_requested_host(request: IRequest) -> bytes:
hostname = request.getHeader(b"host")
if hostname:
return hostname
# no Host header, use the address/port that the request arrived on
host = request.getHost() # type: Union[address.IPv4Address, address.IPv6Address]
hostname = host.host.encode("ascii")
if request.isSecure() and host.port == 443:
# default port for https
return hostname
if not request.isSecure() and host.port == 80:
# default port for http
return hostname
return b"%s:%i" % (
hostname,
host.port,
)
def get_request_user_agent(request: IRequest, default: str = "") -> str:
"""Return the last User-Agent header, or the given default."""
# There could be raw utf-8 bytes in the User-Agent header.

View File

@@ -56,7 +56,7 @@ from twisted.web.client import (
)
from twisted.web.http import PotentialDataLoss
from twisted.web.http_headers import Headers
from twisted.web.iweb import UNKNOWN_LENGTH, IAgent, IBodyProducer, IResponse
from twisted.web.iweb import IAgent, IBodyProducer, IResponse
from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.http import QuieterFileBodyProducer, RequestTimedOutError, redact_uri
@@ -289,7 +289,8 @@ class SimpleHttpClient:
treq_args: Dict[str, Any] = {},
ip_whitelist: Optional[IPSet] = None,
ip_blacklist: Optional[IPSet] = None,
use_proxy: bool = False,
http_proxy: Optional[bytes] = None,
https_proxy: Optional[bytes] = None,
):
"""
Args:
@@ -299,8 +300,8 @@ class SimpleHttpClient:
we may not request.
ip_whitelist: The whitelisted IP addresses, that we can
request if it were otherwise caught in a blacklist.
use_proxy: Whether proxy settings should be discovered and used
from conventional environment variables.
http_proxy: proxy server to use for http connections. host[:port]
https_proxy: proxy server to use for https connections. host[:port]
"""
self.hs = hs
@@ -344,7 +345,8 @@ class SimpleHttpClient:
connectTimeout=15,
contextFactory=self.hs.get_http_client_context_factory(),
pool=pool,
use_proxy=use_proxy,
http_proxy=http_proxy,
https_proxy=https_proxy,
)
if self._ip_blacklist:
@@ -406,9 +408,6 @@ class SimpleHttpClient:
agent=self.agent,
data=body_producer,
headers=headers,
# Avoid buffering the body in treq since we do not reuse
# response bodies.
unbuffered=True,
**self._extra_treq_args,
) # type: defer.Deferred
@@ -703,6 +702,18 @@ class SimpleHttpClient:
resp_headers = dict(response.headers.getAllRawHeaders())
if (
b"Content-Length" in resp_headers
and max_size
and int(resp_headers[b"Content-Length"][0]) > max_size
):
logger.warning("Requested URL is too large > %r bytes" % (max_size,))
raise SynapseError(
502,
"Requested file is too large > %r bytes" % (max_size,),
Codes.TOO_LARGE,
)
if response.code > 299:
logger.warning("Got %d when downloading %s" % (response.code, url))
raise SynapseError(502, "Got error %d" % (response.code,), Codes.UNKNOWN)
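The added block above rejects a download up front when the server's declared Content-Length already exceeds the cap, instead of streaming the body and failing partway through. A generic sketch of that pre-check, decoupled from Synapse's response handling (names are illustrative):

    from typing import Dict, List, Optional

    def check_declared_length(
        resp_headers: Dict[bytes, List[bytes]], max_size: Optional[int]
    ) -> None:
        """Fail fast if the declared Content-Length exceeds max_size."""
        declared = resp_headers.get(b"Content-Length")
        if max_size and declared is not None and int(declared[0]) > max_size:
            raise ValueError("Requested file is too large > %r bytes" % (max_size,))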
@@ -748,32 +759,7 @@ class BodyExceededMaxSize(Exception):
"""The maximum allowed size of the HTTP body was exceeded."""
class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which immediately errors upon receiving data."""
def __init__(self, deferred: defer.Deferred):
self.deferred = deferred
def _maybe_fail(self):
"""
Report a max size exceed error and disconnect the first time this is called.
"""
if not self.deferred.called:
self.deferred.errback(BodyExceededMaxSize())
# Close the connection (forcefully) since all the data will get
# discarded anyway.
self.transport.abortConnection()
def dataReceived(self, data: bytes) -> None:
self._maybe_fail()
def connectionLost(self, reason: Failure) -> None:
self._maybe_fail()
class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which reads body to a stream, erroring if the body exceeds a maximum size."""
def __init__(
self, stream: BinaryIO, deferred: defer.Deferred, max_size: Optional[int]
):
@@ -794,9 +780,7 @@ class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
# in the meantime.
if self.max_size is not None and self.length >= self.max_size:
self.deferred.errback(BodyExceededMaxSize())
# Close the connection (forcefully) since all the data will get
# discarded anyway.
self.transport.abortConnection()
self.transport.loseConnection()
def connectionLost(self, reason: Failure) -> None:
# If the maximum size was already exceeded, there's nothing to do.
@@ -830,15 +814,8 @@ def read_body_with_max_size(
Returns:
A Deferred which resolves to the length of the read body.
"""
d = defer.Deferred()
# If the Content-Length header gives a size larger than the maximum allowed
# size, do not bother downloading the body.
if max_size is not None and response.length != UNKNOWN_LENGTH:
if response.length > max_size:
response.deliverBody(_DiscardBodyWithMaxSizeProtocol(d))
return d
response.deliverBody(_ReadBodyWithMaxSizeProtocol(stream, d, max_size))
return d
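Both protocols above enforce a body-size cap when used with response.deliverBody; the removed variant aborts the connection outright, since the remaining data would only be discarded. A minimal standalone sketch of the capped-read pattern (not the full Synapse implementation):

    from twisted.internet import defer, protocol

    class CappedBodyProtocol(protocol.Protocol):
        """Buffers at most max_size bytes, then forcefully aborts the connection."""

        def __init__(self, deferred: defer.Deferred, max_size: int):
            self.deferred = deferred
            self.max_size = max_size
            self.buffer = b""

        def dataReceived(self, data: bytes) -> None:
            self.buffer += data
            if len(self.buffer) > self.max_size:
                if not self.deferred.called:
                    self.deferred.errback(Exception("body exceeded max size"))
                # Forcefully close: the remaining data would be discarded anyway.
                self.transport.abortConnection()

        def connectionLost(self, reason) -> None:
            if not self.deferred.called:
                self.deferred.callback(self.buffer)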

View File

@@ -14,7 +14,7 @@
# limitations under the License.
import logging
import urllib.parse
from typing import Any, Generator, List, Optional
from typing import List, Optional
from netaddr import AddrFormatError, IPAddress, IPSet
from zope.interface import implementer
@@ -116,7 +116,7 @@ class MatrixFederationAgent:
uri: bytes,
headers: Optional[Headers] = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> Generator[defer.Deferred, Any, defer.Deferred]:
) -> defer.Deferred:
"""
Args:
method: HTTP method: GET/POST/etc
@@ -177,17 +177,17 @@ class MatrixFederationAgent:
# We need to make sure the host header is set to the netloc of the
# server and that a user-agent is provided.
if headers is None:
request_headers = Headers()
headers = Headers()
else:
request_headers = headers.copy()
headers = headers.copy()
if not request_headers.hasHeader(b"host"):
request_headers.addRawHeader(b"host", parsed_uri.netloc)
if not request_headers.hasHeader(b"user-agent"):
request_headers.addRawHeader(b"user-agent", self.user_agent)
if not headers.hasHeader(b"host"):
headers.addRawHeader(b"host", parsed_uri.netloc)
if not headers.hasHeader(b"user-agent"):
headers.addRawHeader(b"user-agent", self.user_agent)
res = yield make_deferred_yieldable(
self._agent.request(method, uri, request_headers, bodyProducer)
self._agent.request(method, uri, headers, bodyProducer)
)
return res
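Note the defensive copy in both versions of the hunk above: the agent duplicates the caller's Headers before adding defaults, so the caller's object is never mutated. A small sketch of that pattern on its own (the helper name is illustrative):

    from typing import Optional
    from twisted.web.http_headers import Headers

    def with_default_headers(
        headers: Optional[Headers], netloc: bytes, user_agent: bytes
    ) -> Headers:
        # Copy first: mutating the caller's Headers would be a surprising
        # side effect.
        out = Headers() if headers is None else headers.copy()
        if not out.hasHeader(b"host"):
            out.addRawHeader(b"host", netloc)
        if not out.hasHeader(b"user-agent"):
            out.addRawHeader(b"user-agent", user_agent)
        return out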

View File

@@ -1049,14 +1049,14 @@ def check_content_type_is_json(headers: Headers) -> None:
RequestSendFailed: if the Content-Type header is missing or isn't JSON
"""
content_type_headers = headers.getRawHeaders(b"Content-Type")
if content_type_headers is None:
c_type = headers.getRawHeaders(b"Content-Type")
if c_type is None:
raise RequestSendFailed(
RuntimeError("No Content-Type header received from remote server"),
can_retry=False,
)
c_type = content_type_headers[0].decode("ascii") # only the first header
c_type = c_type[0].decode("ascii") # only the first header
val, options = cgi.parse_header(c_type)
if val != "application/json":
raise RequestSendFailed(
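For reference, cgi.parse_header splits a header value into its main value and a dict of parameters, which is what the check above relies on (the cgi module was deprecated in Python 3.11, so this snippet assumes an older interpreter):

    import cgi

    val, options = cgi.parse_header("application/json; charset=utf-8")
    # val == "application/json", options == {"charset": "utf-8"}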

View File

@@ -14,7 +14,6 @@
# limitations under the License.
import logging
import re
from urllib.request import getproxies_environment, proxy_bypass_environment
from zope.interface import implementer
@@ -59,9 +58,6 @@ class ProxyAgent(_AgentBase):
pool (HTTPConnectionPool|None): connection pool to be used. If None, a
non-persistent pool instance will be created.
use_proxy (bool): Whether proxy settings should be discovered and used
from conventional environment variables.
"""
def __init__(
@@ -72,7 +68,8 @@ class ProxyAgent(_AgentBase):
connectTimeout=None,
bindAddress=None,
pool=None,
use_proxy=False,
http_proxy=None,
https_proxy=None,
):
_AgentBase.__init__(self, reactor, pool)
@@ -87,15 +84,6 @@ class ProxyAgent(_AgentBase):
if bindAddress is not None:
self._endpoint_kwargs["bindAddress"] = bindAddress
http_proxy = None
https_proxy = None
no_proxy = None
if use_proxy:
proxies = getproxies_environment()
http_proxy = proxies["http"].encode() if "http" in proxies else None
https_proxy = proxies["https"].encode() if "https" in proxies else None
no_proxy = proxies["no"] if "no" in proxies else None
self.http_proxy_endpoint = _http_proxy_endpoint(
http_proxy, self.proxy_reactor, **self._endpoint_kwargs
)
@@ -104,8 +92,6 @@ class ProxyAgent(_AgentBase):
https_proxy, self.proxy_reactor, **self._endpoint_kwargs
)
self.no_proxy = no_proxy
self._policy_for_https = contextFactory
self._reactor = reactor
@@ -153,28 +139,13 @@ class ProxyAgent(_AgentBase):
pool_key = (parsed_uri.scheme, parsed_uri.host, parsed_uri.port)
request_path = parsed_uri.originForm
should_skip_proxy = False
if self.no_proxy is not None:
should_skip_proxy = proxy_bypass_environment(
parsed_uri.host.decode(),
proxies={"no": self.no_proxy},
)
if (
parsed_uri.scheme == b"http"
and self.http_proxy_endpoint
and not should_skip_proxy
):
if parsed_uri.scheme == b"http" and self.http_proxy_endpoint:
# Cache *all* connections under the same key, since we are only
# connecting to a single destination, the proxy:
pool_key = ("http-proxy", self.http_proxy_endpoint)
endpoint = self.http_proxy_endpoint
request_path = uri
elif (
parsed_uri.scheme == b"https"
and self.https_proxy_endpoint
and not should_skip_proxy
):
elif parsed_uri.scheme == b"https" and self.https_proxy_endpoint:
endpoint = HTTPConnectProxyEndpoint(
self.proxy_reactor,
self.https_proxy_endpoint,

View File

@@ -21,7 +21,6 @@ import logging
import types
import urllib
from http import HTTPStatus
from inspect import isawaitable
from io import BytesIO
from typing import (
Any,
@@ -31,7 +30,6 @@ from typing import (
Iterable,
Iterator,
List,
Optional,
Pattern,
Tuple,
Union,
@@ -81,12 +79,10 @@ def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
"""Sends a JSON error response to clients."""
if f.check(SynapseError):
# mypy doesn't understand that f.check asserts the type.
exc = f.value # type: SynapseError # type: ignore
error_code = exc.code
error_dict = exc.error_dict()
error_code = f.value.code
error_dict = f.value.error_dict()
logger.info("%s SynapseError: %s - %s", request, error_code, exc.msg)
logger.info("%s SynapseError: %s - %s", request, error_code, f.value.msg)
else:
error_code = 500
error_dict = {"error": "Internal server error", "errcode": Codes.UNKNOWN}
@@ -95,7 +91,7 @@ def return_json_error(f: failure.Failure, request: SynapseRequest) -> None:
"Failed handle request via %r: %r",
request.request_metrics.name,
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
# Only respond with an error response if we haven't already started writing,
@@ -132,8 +128,7 @@ def return_html_error(
`{msg}` placeholders), or a jinja2 template
"""
if f.check(CodeMessageException):
# mypy doesn't understand that f.check asserts the type.
cme = f.value # type: CodeMessageException # type: ignore
cme = f.value
code = cme.code
msg = cme.msg
@@ -147,7 +142,7 @@ def return_html_error(
logger.error(
"Failed handle request %r",
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
else:
code = HTTPStatus.INTERNAL_SERVER_ERROR
@@ -156,7 +151,7 @@ def return_html_error(
logger.error(
"Failed handle request %r",
request,
exc_info=(f.type, f.value, f.getTracebackObject()), # type: ignore
exc_info=(f.type, f.value, f.getTracebackObject()),
)
if isinstance(error_template, str):
@@ -283,7 +278,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
raw_callback_return = method_handler(request)
# Is it synchronous? We'll allow this for now.
if isawaitable(raw_callback_return):
if isinstance(raw_callback_return, (defer.Deferred, types.CoroutineType)):
callback_return = await raw_callback_return
else:
callback_return = raw_callback_return # type: ignore
@@ -404,10 +399,8 @@ class JsonResource(DirectServeJsonResource):
A tuple of the callback to use, the name of the servlet, and the
key word arguments to pass to the callback
"""
# At this point the path must be bytes.
request_path_bytes = request.path # type: bytes # type: ignore
request_path = request_path_bytes.decode("ascii")
# Treat HEAD requests as GET requests.
request_path = request.path.decode("ascii")
request_method = request.method
if request_method == b"HEAD":
request_method = b"GET"
@@ -558,7 +551,7 @@ class _ByteProducer:
request: Request,
iterator: Iterator[bytes],
):
self._request = request # type: Optional[Request]
self._request = request
self._iterator = iterator
self._paused = False
@@ -570,7 +563,7 @@ class _ByteProducer:
"""
Send a list of bytes as a chunk of a response.
"""
if not data or not self._request:
if not data:
return
self._request.write(b"".join(data))

View File

@@ -14,12 +14,8 @@
import contextlib
import logging
import time
from typing import Optional, Type, Union
from typing import Optional, Union
import attr
from zope.interface import implementer
from twisted.internet.interfaces import IAddress
from twisted.python.failure import Failure
from twisted.web.server import Request, Site
@@ -57,7 +53,7 @@ class SynapseRequest(Request):
def __init__(self, channel, *args, **kw):
Request.__init__(self, channel, *args, **kw)
self.site = channel.site # type: SynapseSite
self.site = channel.site
self._channel = channel # this is used by the tests
self.start_time = 0.0
@@ -96,34 +92,25 @@ class SynapseRequest(Request):
def get_request_id(self):
return "%s-%i" % (self.get_method(), self.request_seq)
def get_redacted_uri(self) -> str:
"""Gets the redacted URI associated with the request (or placeholder if the URI
has not yet been received).
Note: This is necessary as the placeholder value in twisted is str
rather than bytes, so we need to sanitise `self.uri`.
Returns:
The redacted URI as a string.
"""
uri = self.uri # type: Union[bytes, str]
def get_redacted_uri(self):
uri = self.uri
if isinstance(uri, bytes):
uri = uri.decode("ascii", errors="replace")
uri = self.uri.decode("ascii", errors="replace")
return redact_uri(uri)
def get_method(self) -> str:
"""Gets the method associated with the request (or placeholder if method
has not yet been received).
def get_method(self):
"""Gets the method associated with the request (or placeholder if not
method has yet been received).
Note: This is necessary as the placeholder value in twisted is str
rather than bytes, so we need to sanitise `self.method`.
Returns:
The request method as a string.
str
"""
method = self.method # type: Union[bytes, str]
method = self.method
if isinstance(method, bytes):
return self.method.decode("ascii")
method = self.method.decode("ascii")
return method
def render(self, resrc):
@@ -346,78 +333,27 @@ class SynapseRequest(Request):
class XForwardedForRequest(SynapseRequest):
"""Request object which honours proxy headers
def __init__(self, *args, **kw):
SynapseRequest.__init__(self, *args, **kw)
Extends SynapseRequest to replace getClientIP, getClientAddress, and isSecure with
information from request headers.
"""
Add a layer on top of another request that only uses the value of an
X-Forwarded-For header as the result of C{getClientIP}.
"""
# the client IP and ssl flag, as extracted from the headers.
_forwarded_for = None # type: Optional[_XForwardedForAddress]
_forwarded_https = False # type: bool
def requestReceived(self, command, path, version):
# this method is called by the Channel once the full request has been
# received, to dispatch the request to a resource.
# We can use it to set the IP address and protocol according to the
# headers.
self._process_forwarded_headers()
return super().requestReceived(command, path, version)
def _process_forwarded_headers(self):
headers = self.requestHeaders.getRawHeaders(b"x-forwarded-for")
if not headers:
return
# for now, we just use the first x-forwarded-for header. Really, we ought
# to start from the client IP address, and check whether it is trusted; if it
# is, work backwards through the headers until we find an untrusted address.
# see https://github.com/matrix-org/synapse/issues/9471
self._forwarded_for = _XForwardedForAddress(
headers[0].split(b",")[0].strip().decode("ascii")
def getClientIP(self):
"""
@return: The client address (the first address) in the value of the
I{X-Forwarded-For header}. If the header is not present, return
C{b"-"}.
"""
return (
self.requestHeaders.getRawHeaders(b"x-forwarded-for", [b"-"])[0]
.split(b",")[0]
.strip()
.decode("ascii")
)
# if we got an x-forwarded-for header, also look for an x-forwarded-proto header
header = self.getHeader(b"x-forwarded-proto")
if header is not None:
self._forwarded_https = header.lower() == b"https"
else:
# this is done largely for backwards-compatibility so that people that
# haven't set an x-forwarded-proto header don't get a redirect loop.
logger.warning(
"forwarded request lacks an x-forwarded-proto header: assuming https"
)
self._forwarded_https = True
def isSecure(self):
if self._forwarded_https:
return True
return super().isSecure()
def getClientIP(self) -> str:
"""
Return the IP address of the client who submitted this request.
This method is deprecated. Use getClientAddress() instead.
"""
if self._forwarded_for is not None:
return self._forwarded_for.host
return super().getClientIP()
def getClientAddress(self) -> IAddress:
"""
Return the address of the client who submitted this request.
"""
if self._forwarded_for is not None:
return self._forwarded_for
return super().getClientAddress()
@implementer(IAddress)
@attr.s(frozen=True, slots=True)
class _XForwardedForAddress:
host = attr.ib(type=str)
class SynapseSite(Site):
"""
@@ -441,9 +377,7 @@ class SynapseSite(Site):
assert config.http_options is not None
proxied = config.http_options.x_forwarded
self.requestFactory = (
XForwardedForRequest if proxied else SynapseRequest
) # type: Type[Request]
self.requestFactory = XForwardedForRequest if proxied else SynapseRequest
self.access_logger = logging.getLogger(logger_name)
self.server_version_string = server_version_string.encode("ascii")
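Both versions above reduce to the same parsing rule: take the first (client-most) entry of a comma-separated X-Forwarded-For chain, falling back to a placeholder when the header is absent. A standalone sketch of that rule:

    from typing import Dict, List

    def client_ip_from_xff(headers: Dict[bytes, List[bytes]]) -> str:
        """Return the first address in X-Forwarded-For, or "-" if missing."""
        values = headers.get(b"x-forwarded-for", [b"-"])
        # The header is a comma-separated proxy chain; the first hop is the client.
        return values[0].split(b",")[0].strip().decode("ascii")

    assert client_ip_from_xff({b"x-forwarded-for": [b"10.0.0.1, 172.16.0.2"]}) == "10.0.0.1"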

View File

@@ -32,7 +32,7 @@ from twisted.internet.endpoints import (
TCP4ClientEndpoint,
TCP6ClientEndpoint,
)
from twisted.internet.interfaces import IPushProducer, IStreamClientEndpoint, ITransport
from twisted.internet.interfaces import IPushProducer, ITransport
from twisted.internet.protocol import Factory, Protocol
from twisted.python.failure import Failure
@@ -121,9 +121,7 @@ class RemoteHandler(logging.Handler):
try:
ip = ip_address(self.host)
if isinstance(ip, IPv4Address):
endpoint = TCP4ClientEndpoint(
_reactor, self.host, self.port
) # type: IStreamClientEndpoint
endpoint = TCP4ClientEndpoint(_reactor, self.host, self.port)
elif isinstance(ip, IPv6Address):
endpoint = TCP6ClientEndpoint(_reactor, self.host, self.port)
else:

View File

@@ -527,7 +527,7 @@ class ReactorLastSeenMetric:
REGISTRY.register(ReactorLastSeenMetric())
def runUntilCurrentTimer(reactor, func):
def runUntilCurrentTimer(func):
@functools.wraps(func)
def f(*args, **kwargs):
now = reactor.seconds()
@@ -590,14 +590,13 @@ def runUntilCurrentTimer(reactor, func):
try:
# Ensure the reactor has all the attributes we expect
reactor.seconds # type: ignore
reactor.runUntilCurrent # type: ignore
reactor._newTimedCalls # type: ignore
reactor.threadCallQueue # type: ignore
reactor.runUntilCurrent
reactor._newTimedCalls
reactor.threadCallQueue
# runUntilCurrent is called when we have pending calls. It is called once
# per iteration after fd polling.
reactor.runUntilCurrent = runUntilCurrentTimer(reactor, reactor.runUntilCurrent) # type: ignore
reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent)
# We manually run the GC each reactor tick so that we can get some metrics
# about time spent doing GC,
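runUntilCurrentTimer wraps an existing reactor method and patches it back in place so that every tick is measured. A minimal sketch of that wrap-and-patch pattern, with the Prometheus metric reduced to a print:

    import functools
    import time

    def timed(func):
        @functools.wraps(func)
        def f(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                # In Synapse this duration feeds a Prometheus metric instead.
                print("tick took %.6fs" % (time.monotonic() - start))
        return f

    # Patch the per-iteration hook, as the code above does on the real reactor:
    # reactor.runUntilCurrent = timed(reactor.runUntilCurrent)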

View File

@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Generator, Iterable, Optional, Tuple
from typing import TYPE_CHECKING, Iterable, Optional, Tuple
from twisted.internet import defer
@@ -307,7 +307,7 @@ class ModuleApi:
@defer.inlineCallbacks
def get_state_events_in_room(
self, room_id: str, types: Iterable[Tuple[str, Optional[str]]]
) -> Generator[defer.Deferred, Any, defer.Deferred]:
) -> defer.Deferred:
"""Gets current state events for the given room.
(This is exposed for compatibility with the old SpamCheckerApi. We should

View File

@@ -15,12 +15,11 @@
# limitations under the License.
import logging
import urllib.parse
from typing import TYPE_CHECKING, Any, Dict, Iterable, Optional, Union
from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
from prometheus_client import Counter
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from twisted.internet.interfaces import IDelayedCall
from synapse.api.constants import EventTypes
from synapse.events import EventBase
@@ -72,10 +71,9 @@ class HttpPusher(Pusher):
self.data = pusher_config.data
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.failing_since = pusher_config.failing_since
self.timed_call = None # type: Optional[IDelayedCall]
self.timed_call = None
self._is_processing = False
self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room
self._pusherpool = hs.get_pusherpool()
self.data = pusher_config.data
if self.data is None:
@@ -301,7 +299,7 @@ class HttpPusher(Pusher):
)
else:
logger.info("Pushkey %s was rejected: removing", pk)
await self._pusherpool.remove_pusher(self.app_id, pk, self.user_id)
await self.hs.remove_pusher(self.app_id, pk, self.user_id)
return True
async def _build_notification_dict(

View File

@@ -19,14 +19,12 @@ from typing import TYPE_CHECKING, Dict, Iterable, Optional
from prometheus_client import Gauge
from synapse.api.errors import Codes, SynapseError
from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
)
from synapse.push import Pusher, PusherConfig, PusherConfigException
from synapse.push.pusher import PusherFactory
from synapse.replication.http.push import ReplicationRemovePusherRestServlet
from synapse.types import JsonDict, RoomStreamToken
from synapse.util.async_helpers import concurrently_execute
@@ -60,6 +58,7 @@ class PusherPool:
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.pusher_factory = PusherFactory(hs)
self._should_start_pushers = hs.config.start_pushers
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
@@ -68,16 +67,6 @@ class PusherPool:
# We shard the handling of push notifications by user ID.
self._pusher_shard_config = hs.config.push.pusher_shard_config
self._instance_name = hs.get_instance_name()
self._should_start_pushers = (
self._instance_name in self._pusher_shard_config.instances
)
# We can only delete pushers on master.
self._remove_pusher_client = None
if hs.config.worker.worker_app:
self._remove_pusher_client = ReplicationRemovePusherRestServlet.make_client(
hs
)
# Record the last stream ID that we were poked about so we can get
# changes since then. We set this to the current max stream ID on
@@ -114,11 +103,6 @@ class PusherPool:
The newly created pusher.
"""
if kind == "email":
email_owner = await self.store.get_user_id_by_threepid("email", pushkey)
if email_owner != user_id:
raise SynapseError(400, "Email not found", Codes.THREEPID_NOT_FOUND)
time_now_msec = self.clock.time_msec()
# create the pusher setting last_stream_ordering to the current maximum
@@ -191,6 +175,9 @@ class PusherPool:
user_id: user to remove pushers for
access_tokens: access token *ids* to remove pushers for
"""
if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
return
tokens = set(access_tokens)
for p in await self.store.get_pushers_by_user_id(user_id):
if p.access_token in tokens:
@@ -393,12 +380,6 @@ class PusherPool:
synapse_pushers.labels(type(pusher).__name__, pusher.app_id).dec()
# We can only delete pushers on master.
if self._remove_pusher_client:
await self._remove_pusher_client(
app_id=app_id, pushkey=pushkey, user_id=user_id
)
else:
await self.store.delete_pusher_by_app_id_pushkey_user_id(
app_id, pushkey, user_id
)
await self.store.delete_pusher_by_app_id_pushkey_user_id(
app_id, pushkey, user_id
)
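pusher_shard_config.should_handle decides which worker owns a given user's pushers, so each user is handled by exactly one instance. A generic sketch of one way such a deterministic ownership check can work (hash-based; not necessarily Synapse's actual algorithm):

    import hashlib
    from typing import List

    def should_handle(instance_name: str, instances: List[str], user_id: str) -> bool:
        """True if this instance owns the given user's pushers."""
        if not instances:
            return False
        # A stable hash ensures every worker agrees on the owner.
        digest = hashlib.sha256(user_id.encode()).digest()
        index = int.from_bytes(digest[:8], "big") % len(instances)
        return instances[index] == instance_name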

View File

@@ -106,9 +106,6 @@ CONDITIONAL_REQUIREMENTS = {
"pysaml2>=4.5.0;python_version>='3.6'",
],
"oidc": ["authlib>=0.14.0"],
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
# `contrib/systemd/log_config.yaml`.
"systemd": ["systemd-python>=231"],
"url_preview": ["lxml>=3.5.0"],
"sentry": ["sentry-sdk>=0.7.2"],

Some files were not shown because too many files have changed in this diff.