Compare commits

35 Commits

Author SHA1 Message Date
Olivier Wilkinson (reivilibre) c747b4b37d AAAA# 2022-07-05 17:26:12 +01:00
Olivier Wilkinson (reivilibre) 7094bcf251 DBG cache all 2022-07-05 12:40:26 +01:00
Olivier Wilkinson (reivilibre) 5af3ce8bf8 DBG cache 2022-07-05 12:31:50 +01:00
Olivier Wilkinson (reivilibre) 7fd218ad8b DBG retry until fail 2022-07-05 12:18:26 +01:00
Olivier Wilkinson (reivilibre) 30627554d4 DBG set debug 2022-07-05 12:18:07 +01:00
Olivier Wilkinson (reivilibre) 3c41609a9e DBG limit 2022-07-05 11:59:48 +01:00
Olivier Wilkinson (reivilibre) 18695c7631 DBG 2022-07-05 11:37:14 +01:00
Olivier Wilkinson (reivilibre) c76da0d948 TMP flake debug code 2022-07-05 11:25:57 +01:00
Olivier Wilkinson (reivilibre) 3f203e1b10 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-07-04 12:01:09 +01:00
Olivier Wilkinson (reivilibre) ae7f8cccc2 Give -f a long option --fast as it's confusing otherwise (and I might remember it better) 2022-07-04 12:01:09 +01:00
Olivier Wilkinson (reivilibre) 37112f76f7 Add a --build-only argument to complement.sh 2022-07-04 12:01:09 +01:00
Olivier Wilkinson (reivilibre) f61931fb8a Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-07-01 16:55:14 +01:00
Olivier Wilkinson (reivilibre) ae1712dd71 Factor out Complement checking-out 2022-07-01 16:55:14 +01:00
Olivier Wilkinson (reivilibre) c1229e0218 Factor out some common commands into a script 2022-07-01 16:55:14 +01:00
reivilibre c04e25789e Enable Complement testing in the 'Twisted Trunk' CI runs. (#13079) 2022-07-01 15:42:49 +00:00
Richard van der Hoff fe910fb10e complement.sh: Permit skipping docker build (#13143)
Add a `-f` argument to `complement.sh` to skip the docker build
2022-07-01 12:33:59 +00:00
Andrew Morgan 5296c09473 Merge tag 'v1.62.0rc2' into develop
Synapse 1.62.0rc2 (2022-07-01)
==============================

Bugfixes
--------

- Fix unread counts for users on large servers. Introduced in v1.62.0rc1. ([\#13140](https://github.com/matrix-org/synapse/issues/13140))
- Fix DB performance when deleting old push notifications. Introduced in v1.62.0rc1. ([\#13141](https://github.com/matrix-org/synapse/issues/13141))
2022-07-01 12:29:23 +01:00
David Robertson d70ff5cc35 Extra validation for rest/client/account_data (#13148)
* Extra validation for rest/client/account_data

This is a fairly simple endpoint and we did pretty well here.

* Changelog
2022-07-01 11:04:56 +01:00
Richard van der Hoff 6da861ae69 _process_received_pdu: Improve exception handling (#13145)
`_check_event_auth` is expected to raise `AuthError`s, so no need to log it
again.
2022-07-01 10:52:10 +01:00
Richard van der Hoff 8c2825276f Skip waiting for full state for incoming events (#13144)
When we receive an event over federation during a faster join, there is no need
to wait for full state, since we have a whole reconciliation process designed
to take the partial state into account.
2022-07-01 10:19:27 +01:00
Andrew Morgan c0efc689cb Add documentation for phone home stats (#13086) 2022-06-30 22:12:28 +01:00
Jacek Kuśnierz 50f0e4028b Allow dependency errors to pass through (#13113)
Signed-off-by: Jacek Kusnierz <jacek.kusnierz@tum.de>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-06-30 19:48:04 +02:00
Patrick Cloke b0366853ca Merge remote-tracking branch 'origin/release-v1.62' into develop 2022-06-30 13:27:24 -04:00
Shay 046a6513bc Don't process /send requests for users who have hit their ratelimit (#13134) 2022-06-30 09:22:40 -07:00
Shay 8330fc9953 Cleanup references to sample config in the docs and redirect users to configuration manual (#13077) 2022-06-30 09:21:39 -07:00
Andrew Morgan 0ceb3af10b Add a link to the configuration manual from the homeserver sample config documentation page (#13139) 2022-06-30 15:59:11 +01:00
Patrick Cloke 6ad012ef89 More type hints for synapse.logging (#13103)
Completes type hints for synapse.logging.scopecontextmanager and (partially)
for synapse.logging.opentracing.
2022-06-30 13:05:06 +00:00
reivilibre 9667bad55d Improve startup times in Complement test runs against workers, particularly in CPU-constrained environments. (#13127)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-06-30 11:58:12 +00:00
David Robertson 09f6e43025 Actually typecheck tests.test_server (#13135) 2022-06-30 10:45:47 +01:00
David Teller 80c7a06777 Rate limiting invites per issuer (#13125)
Co-authored-by: reivilibre <oliverw@matrix.org>
2022-06-30 09:44:47 +00:00
Brendan Abolivier 4d3b8fb23f Don't actually one-line the SQL statements we send to the DB (#13129) 2022-06-30 10:43:24 +02:00
Šimon Brandner 13e359aec8 Implement MSC3827: Filtering of /publicRooms by room type (#13031)
Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>
2022-06-29 17:12:45 +00:00
Moritz Stückler e714b8a057 Fix documentation header for allow_public_rooms_over_federation (#13116)
Signed-off-by: Moritz Stückler <moritz.stueckler@gmail.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-06-29 18:41:39 +02:00
Erik Johnston 92a0c18ef0 Improve performance of getting unread counts in rooms (#13119) 2022-06-29 10:32:38 +00:00
jejo86 cdc0259449 Document the --report-stats argument (#13029)
Signed-off-by: jejo86 <28619134+jejo86@users.noreply.github.com>
2022-06-29 10:24:10 +01:00
84 changed files with 1123 additions and 638 deletions
+36
View File
@@ -0,0 +1,36 @@
#!/bin/sh
#
# Common commands to set up Complement's prerequisites in a GitHub Actions CI run.
#
# Must be called after Synapse has been checked out to `synapse/`.
#
set -eu
alias block='{ set +x; } 2>/dev/null; func() { echo "::group::$*"; set -x; }; func'
alias endblock='{ set +x; } 2>/dev/null; func() { echo "::endgroup::"; set -x; }; func'
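# `block`/`endblock` wrap sections of this script in GitHub Actions log groups
# (the `::group::`/`::endgroup::` workflow commands), so each section's output is
# collapsible in the CI log; the `{ set +x; } 2>/dev/null` prefix keeps the
# tracing toggle itself out of the trace output.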
block Set Go Version
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
endblock
block Install Complement Dependencies
sudo apt-get -qq update && sudo apt-get install -qqy libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
endblock
block Install custom gotestfmt template
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
endblock
block Check out Complement
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
synapse/.ci/scripts/checkout_complement.sh
endblock
+10 -385
View File
@@ -10,403 +10,37 @@ concurrency:
cancel-in-progress: true
jobs:
check-sampleconfig:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: pip install .
- run: scripts-dev/generate_sample_config.sh --check
- run: scripts-dev/config-lint.sh
check-schema-delta:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
- run: scripts-dev/check_schema_delta.py --force-colors
lint:
uses: "matrix-org/backend-meta/.github/workflows/python-poetry-ci.yml@v1"
with:
typechecking-extras: "all"
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
lint-newsfile:
if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v2
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment.sh
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
# Dummy step to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs: [lint, lint-crlf, lint-newsfile, check-sampleconfig, check-schema-delta]
runs-on: ubuntu-latest
steps:
- run: "true"
trial:
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["3.7", "3.8", "3.9", "3.10"]
database: ["sqlite"]
extras: ["all"]
include:
# Newest Python without optional deps
- python-version: "3.10"
extras: ""
# Oldest Python with PostgreSQL
- python-version: "3.7"
database: "postgres"
postgres-version: "10"
extras: "all"
# Newest Python with newest PostgreSQL
- python-version: "3.10"
database: "postgres"
postgres-version: "14"
extras: "all"
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.postgres-version }}
if: ${{ matrix.postgres-version }}
run: |
docker run -d -p 5432:5432 \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.postgres-version }}
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
extras: ${{ matrix.extras }}
- name: Await PostgreSQL
if: ${{ matrix.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
- run: poetry run trial --jobs=2 tests
env:
SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
SYNAPSE_POSTGRES_HOST: localhost
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
- name: Dump logs
# Logs are most useful when the command fails, always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-olddeps:
# Note: sqlite only; no postgres
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Test with old deps
uses: docker://ubuntu:focal # For old python and sqlite
# Note: focal seems to be using 3.8, but the oldest is 3.7?
# See https://github.com/matrix-org/synapse/issues/12343
with:
workdir: /github/workspace
entrypoint: .ci/scripts/test_old_deps.sh
- name: Dump logs
# Logs are most useful when the command fails, always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-pypy:
# Very slow; only run if the branch name includes 'pypy'
# Note: sqlite only; no postgres. Completely untested since poetry move.
if: ${{ contains(github.ref, 'pypy') && !failure() && !cancelled() }}
needs: linting-done
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["pypy-3.7"]
extras: ["all"]
steps:
- uses: actions/checkout@v2
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
extras: ${{ matrix.extras }}
- run: poetry run trial --jobs=2 tests
- name: Dump logs
# Logs are most useful when the command fails, always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
if: ${{ !failure() && !cancelled() }}
needs: linting-done
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:${{ matrix.sytest-tag }}
volumes:
- ${{ github.workspace }}:/src
env:
SYTEST_BRANCH: ${{ github.head_ref }}
POSTGRES: ${{ matrix.postgres && 1}}
MULTI_POSTGRES: ${{ (matrix.postgres == 'multi-postgres') && 1}}
WORKERS: ${{ matrix.workers && 1 }}
REDIS: ${{ matrix.redis && 1 }}
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
TOP: ${{ github.workspace }}
strategy:
fail-fast: false
matrix:
include:
- sytest-tag: focal
- sytest-tag: focal
postgres: postgres
- sytest-tag: testing
postgres: postgres
- sytest-tag: focal
postgres: multi-postgres
workers: workers
- sytest-tag: buster
postgres: multi-postgres
workers: workers
- sytest-tag: buster
postgres: postgres
workers: workers
redis: redis
steps:
- uses: actions/checkout@v2
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v2
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
export-data:
if: ${{ !failure() && !cancelled() }} # Allow previous steps to be skipped, but not fail
needs: [linting-done, portdb]
runs-on: ubuntu-latest
env:
TOP: ${{ github.workspace }}
services:
postgres:
image: postgres
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
portdb:
if: ${{ !failure() && !cancelled() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
env:
TOP: ${{ github.workspace }}
strategy:
matrix:
include:
- python-version: "3.7"
postgres-version: "10"
- python-version: "3.10"
postgres-version: "14"
services:
postgres:
image: postgres:${{ matrix.postgres-version }}
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
complement:
if: "${{ !failure() && !cancelled() }}"
needs: linting-done
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
- name: "Set Go Version"
run: |
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
- name: "Install Complement Dependencies"
run: |
sudo apt-get update && sudo apt-get install -y libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: "Install custom gotestfmt template"
run: |
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
- name: Checkout complement
run: synapse/.ci/scripts/checkout_complement.sh
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- run: |
set -o pipefail
POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
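# TMP flake-hunting debug (see the "DBG" commits above): build the images once,
# then re-run TestMembersLocal in a loop until it fails; /tmp/xxx holds the
# output of the failing run.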
synapse/scripts-dev/complement.sh --build-only
while :; do
POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -f -run TestMembersLocal -json 2>&1 | gotestfmt | tee /tmp/xxx
if grep FAIL /tmp/xxx; then
break
fi
done
# We only run the workers tests on `develop` for now, because they're too slow to wait for on PRs.
# Sadly, you can't have an `if` condition on the value of a matrix, so this is a temporary, separate job for now.
# GitHub Actions doesn't support YAML anchors, so it's full-on duplication for now.
complement-developonly:
if: "${{ !failure() && !cancelled() && (github.ref == 'refs/heads/develop') }}"
needs: linting-done
runs-on: ubuntu-latest
name: "Complement Workers (develop only)"
steps:
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
- name: "Set Go Version"
run: |
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
- name: "Install Complement Dependencies"
run: |
sudo apt-get -qq update && sudo apt-get install -qqy libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: "Install custom gotestfmt template"
run: |
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
- name: Checkout complement
run: synapse/.ci/scripts/checkout_complement.sh
- run: |
set -o pipefail
WORKERS=1 COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
@@ -414,15 +48,6 @@ jobs:
tests-done:
if: ${{ always() }}
needs:
- check-sampleconfig
- lint
- lint-crlf
- lint-newsfile
- trial
- trial-olddeps
- sytest
- export-data
- portdb
- complement
runs-on: ubuntu-latest
steps:
+46
View File
@@ -96,6 +96,51 @@ jobs:
/logs/results.tap
/logs/**/*.log*
complement:
if: "${{ !failure() && !cancelled() }}"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
# This step is specific to the 'Twisted trunk' test run:
- name: Patch dependencies
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==1.1.12
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock --no-update
# NOT IN 1.1.12 poetry lock --check
working-directory: synapse
- run: |
set -o pipefail
TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
# open an issue if the build fails, so we know about it.
open-issue:
if: failure()
@@ -103,6 +148,7 @@ jobs:
- mypy
- trial
- sytest
- complement
runs-on: ubuntu-latest
+1
View File
@@ -0,0 +1 @@
Add an explanation of the `--report-stats` argument to the docs.
+1
View File
@@ -0,0 +1 @@
Implement [MSC3827](https://github.com/matrix-org/matrix-spec-proposals/pull/3827): Filtering of /publicRooms by room type.
+3
View File
@@ -0,0 +1,3 @@
Clean up references to sample configuration and redirect users to the configuration manual instead.
+1
View File
@@ -0,0 +1 @@
Enable Complement testing in the 'Twisted Trunk' CI runs.
+1
View File
@@ -0,0 +1 @@
Add documentation for anonymised homeserver statistics collection.
+1
View File
@@ -0,0 +1 @@
Add missing type hints to `synapse.logging`.
+1
View File
@@ -0,0 +1 @@
Raise a `DependencyError` on missing dependencies instead of a `ConfigError`.
+1
View File
@@ -0,0 +1 @@
Fix wrong section header for `allow_public_rooms_over_federation` in the homeserver config documentation.
+1
View File
@@ -0,0 +1 @@
Reduce DB usage of `/sync` when a large number of unread messages have recently been sent in a room.
+1
View File
@@ -0,0 +1 @@
Add a rate limit for local users sending invites.
+1
View File
@@ -0,0 +1 @@
Improve startup times in Complement test runs against workers, particularly in CPU-constrained environments.
+1
View File
@@ -0,0 +1 @@
Only one-line SQL statements for logging and tracing.
+1
View File
@@ -0,0 +1 @@
Apply ratelimiting earlier in processing of /send request.
+1
View File
@@ -0,0 +1 @@
Enforce type annotations for `tests.test_server`.
+1
View File
@@ -0,0 +1 @@
Add a link to the configuration manual from the homeserver sample config documentation.
+1
View File
@@ -0,0 +1 @@
Add support to `complement.sh` for skipping the docker build.
+1
View File
@@ -0,0 +1 @@
Faster joins: skip waiting for full state when processing incoming events over federation.
+1
View File
@@ -0,0 +1 @@
Improve exception handling when processing events received over federation.
+1
View File
@@ -0,0 +1 @@
Improve validation logic in Synapse's REST endpoints.
+1
View File
@@ -0,0 +1 @@
Enable Complement testing in the 'Twisted Trunk' CI runs.
+1
View File
@@ -0,0 +1 @@
Add support to `complement.sh` for skipping the docker build.
+8 -1
View File
@@ -62,7 +62,13 @@ WORKDIR /synapse
# Copy just what we need to run `poetry export`...
COPY pyproject.toml poetry.lock /synapse/
RUN /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt
# If specified, we won't verify the hashes of dependencies.
# This is only needed if the hashes of dependencies cannot be checked for some
# reason, such as when a git repository is used directly as a dependency.
ARG TEST_ONLY_SKIP_DEP_HASH_VERIFICATION
RUN /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}
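# (Hypothetical example: pass `--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1`
# to `docker build` to enable this; complement.sh forwards the variable as a
# build-arg when it is set in the environment.)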
###
### Stage 1: builder
@@ -85,6 +91,7 @@ RUN \
openssl \
rustc \
zlib1g-dev \
git \
&& rm -rf /var/lib/apt/lists/*
# To speed up rebuilds, install all of the dependencies before we copy over
@@ -59,6 +59,9 @@ if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
synchrotron, \
appservice, \
pusher"
# Improve startup times by using a launcher based on fork()
export SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER=1
else
# Empty string here means 'main process only'
export SYNAPSE_WORKER_TYPES=""
@@ -1,3 +1,24 @@
{% if use_forking_launcher %}
[program:synapse_fork]
command=/usr/local/bin/python -m synapse.app.complement_fork_starter
{{ main_config_path }}
synapse.app.homeserver
--config-path="{{ main_config_path }}"
--config-path=/conf/workers/shared.yaml
{%- for worker in workers %}
-- {{ worker.app }}
--config-path="{{ main_config_path }}"
--config-path=/conf/workers/shared.yaml
--config-path=/conf/workers/{{ worker.name }}.yaml
{%- endfor %}
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=unexpected
exitcodes=0
{% else %}
[program:synapse_main]
command=/usr/local/bin/prefix-log /usr/local/bin/python -m synapse.app.homeserver
--config-path="{{ main_config_path }}"
@@ -13,7 +34,7 @@ autorestart=unexpected
exitcodes=0
{% for worker in workers %}
{% for worker in workers %}
[program:synapse_{{ worker.name }}]
command=/usr/local/bin/prefix-log /usr/local/bin/python -m {{ worker.app }}
--config-path="{{ main_config_path }}"
@@ -27,4 +48,5 @@ stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
{% endfor %}
{% endfor %}
{% endif %}
+4
View File
@@ -2,7 +2,11 @@ version: 1
formatters:
precise:
{% if include_worker_name_in_log_line %}
format: '{{ worker_name }} | %(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% else %}
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% endif %}
handlers:
{% if LOG_FILE_PATH %}
+7
View File
@@ -26,6 +26,9 @@
# * SYNAPSE_TLS_CERT: Path to a TLS certificate in PEM format.
# * SYNAPSE_TLS_KEY: Path to a TLS key. If this and SYNAPSE_TLS_CERT are specified,
# Nginx will be configured to serve TLS on port 8448.
# * SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER: Whether to use the forking launcher,
# only intended for usage in Complement at the moment.
# No stability guarantees are provided.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
@@ -525,6 +528,7 @@ def generate_worker_files(
"/etc/supervisor/conf.d/synapse.conf",
workers=worker_descriptors,
main_config_path=config_path,
use_forking_launcher=environ.get("SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER"),
)
# healthcheck config
@@ -560,6 +564,9 @@ def generate_worker_log_config(
log_config_filepath,
worker_name=worker_name,
**extra_log_template_args,
include_worker_name_in_log_line=environ.get(
"SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER"
),
)
return log_config_filepath
+5 -1
View File
@@ -110,7 +110,11 @@ def generate_config_from_template(
log_config_file = environ["SYNAPSE_LOG_CONFIG"]
log("Generating log config file " + log_config_file)
convert("/conf/log.config", log_config_file, environ)
convert(
"/conf/log.config",
log_config_file,
{**environ, "include_worker_name_in_log_line": False},
)
# Hopefully we already have a signing key, but generate one if not.
args = [
+1
View File
@@ -69,6 +69,7 @@
- [Federation](usage/administration/admin_api/federation.md)
- [Manhole](manhole.md)
- [Monitoring](metrics-howto.md)
- [Reporting Anonymised Statistics](usage/administration/monitoring/reporting_anonymised_statistics.md)
- [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
- [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
- [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
+2 -3
View File
@@ -124,9 +124,8 @@ Body parameters:
- `address` - string. Value of third-party ID.
belonging to a user.
- `external_ids` - array, optional. Allow setting the identifier of the external identity
provider for SSO (Single sign-on). Details in
[Sample Configuration File](../usage/configuration/homeserver_sample_config.html)
section `sso` and `oidc_providers`.
provider for SSO (Single sign-on). Details in the configuration manual under the
sections [sso](../usage/configuration/config_documentation.md#sso) and [oidc_providers](../usage/configuration/config_documentation.md#oidc_providers).
- `auth_provider` - string. ID of the external identity provider. Value of `idp_id`
in the homeserver configuration. Note that no error is raised if the provided
value is not in the homeserver configuration.
+36 -57
View File
@@ -70,82 +70,61 @@ on save as they take a while and can be very resource intensive.
- Avoid wildcard imports (`from synapse.types import *`) and
relative imports (`from .types import UserID`).
## Configuration file format
## Configuration code and documentation format
The [sample configuration file](./sample_config.yaml) acts as a
When adding a configuration option to the code, if several settings are grouped into a single dict, ensure that your code
correctly handles the top-level option being set to `None` (as it will be if no sub-options are enabled).
The [configuration manual](usage/configuration/config_documentation.md) acts as a
reference to Synapse's configuration options for server administrators.
Remember that many readers will be unfamiliar with YAML and server
administration in general, so that it is important that the file be as
easy to understand as possible, which includes following a consistent
format.
administration in general, so it is important that when you add
a configuration option the documentation be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
- Sections should be separated with a heading consisting of a single
line prefixed and suffixed with `##`. There should be **two** blank
lines before the section header, and **one** after.
- Each option should be listed in the file with the following format:
- A comment describing the setting. Each line of this comment
should be prefixed with a hash (`#`) and a space.
- Each option should be listed in the config manual with the following format:
- The name of the option, prefixed by `###`.
The comment should describe the default behaviour (ie, what
- A comment which describes the default behaviour (i.e. what
happens if the setting is omitted), as well as what the effect
will be if the setting is changed.
Often, the comment end with something like "uncomment the
following to <do action>".
- A line consisting of only `#`.
- A commented-out example setting, prefixed with only `#`.
- An example setting, using backticks to define the code block
For boolean (on/off) options, convention is that this example
should be the *opposite* to the default (so the comment will end
with "Uncomment the following to enable [or disable]
<feature>." For other options, the example should give some
non-default value which is likely to be useful to the reader.
should be the *opposite* to the default. For other options, the example should give
some non-default value which is likely to be useful to the reader.
- There should be a blank line between each option.
- Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in
comment lines starting `# #`, as this is hard to read and confusing
to edit. Instead, leave the top-level config option uncommented, and
follow the conventions above for sub-options. Ensure that your code
correctly handles the top-level option being set to `None` (as it
will be if no sub-options are enabled).
- Lines should be wrapped at 80 characters.
- Use two-space indents.
- `true` and `false` are spelt thus (as opposed to `True`, etc.)
- Use single quotes (`'`) rather than double-quotes (`"`) or backticks
(`` ` ``) to refer to configuration options.
- There should be a horizontal rule between each option, which can be achieved by adding `---` before and
after the option.
- `true` and `false` are spelt thus (as opposed to `True`, etc.)
Example:
---
### `modules`
Use the `module` sub-option to add a module under `modules` to extend functionality.
The `module` setting then has a sub-option, `config`, which can be used to define some configuration
for the `module`.
Defaults to none.
Example configuration:
```yaml
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
modules:
- module: my_super_module.MySuperClass
config:
do_thing: true
- module: my_other_super_module.SomeClass
config: {}
```
---
Note that the sample configuration is generated from the synapse code
and is maintained by a script, `scripts-dev/generate_sample_config.sh`.
Making sure that the output from this script matches the desired format
is left as an exercise for the reader!
+2 -3
View File
@@ -49,9 +49,8 @@ as follows:
* For other installation mechanisms, see the documentation provided by the
maintainer.
To enable the JSON web token integration, you should then add a `jwt_config` section
to your configuration file (or uncomment the `enabled: true` line in the
existing section). See [sample_config.yaml](./sample_config.yaml) for some
To enable the JSON web token integration, you should then add a `jwt_config` option
to your configuration file. See the [configuration manual](usage/configuration/config_documentation.md#jwt_config) for some
sample settings.
## How to test JWT as a developer
+4 -2
View File
@@ -13,8 +13,10 @@ environments where untrusted users have shell access.
## Configuring the manhole
To enable it, first uncomment the `manhole` listener configuration in
`homeserver.yaml`. The configuration is slightly different if you're using docker.
To enable it, first add the `manhole` listener configuration to your
`homeserver.yaml`. You can find information on how to do that
in the [configuration manual](usage/configuration/config_documentation.md#manhole_settings).
The configuration is slightly different if you're using docker.
#### Docker config
+9 -9
View File
@@ -49,9 +49,9 @@ clients.
## Server configuration
Support for this feature can be enabled and configured in the
`retention` section of the Synapse configuration file (see the
[sample file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518)).
Support for this feature can be enabled and configured by adding the
`retention` option in the Synapse configuration file (see the
[configuration manual](usage/configuration/config_documentation.md#retention)).
To enable support for message retention policies, set the setting
`enabled` in this section to `true`.
@@ -65,8 +65,8 @@ message retention policy configured in its state. This allows server
admins to ensure that messages are never kept indefinitely in a server's
database.
A default policy can be defined as such, in the `retention` section of
the configuration file:
A default policy can be defined by adding the `retention` option to
the configuration file and setting these sub-options:
```yaml
default_policy:
@@ -86,8 +86,8 @@ Purge jobs are the jobs that Synapse runs in the background to purge
expired events from the database. They are only run if support for
message retention policies is enabled in the server's configuration. If
no configuration for purge jobs is configured by the server admin,
Synapse will use a default configuration, which is described in the
[sample configuration file](https://github.com/matrix-org/synapse/blob/v1.36.0/docs/sample_config.yaml#L451-L518).
Synapse will use a default configuration, which is described in the
[configuration manual](usage/configuration/config_documentation.md#retention).
Some server admins might want a finer control on when events are removed
depending on an event's room's policy. This can be done by setting the
@@ -137,8 +137,8 @@ the server's database.
### Lifetime limits
Server admins can set limits on the values of `max_lifetime` to use when
purging old events in a room. These limits can be defined as such in the
`retention` section of the configuration file:
purging old events in a room. These limits can be defined under the
`retention` option in the configuration file:
```yaml
allowed_lifetime_min: 1d
+2 -2
View File
@@ -45,8 +45,8 @@ as follows:
maintainer.
To enable the OpenID integration, you should then add a section to the `oidc_providers`
setting in your configuration file (or uncomment one of the existing examples).
See [sample_config.yaml](./sample_config.yaml) for some sample settings, as well as
setting in your configuration file.
See the [configuration manual](usage/configuration/config_documentation.md#oidc_providers) for some sample settings, as well as
the text below for example configurations for specific providers.
## Sample configs
+2 -2
View File
@@ -66,8 +66,8 @@ in Synapse can be deactivated.
**NOTE**: This has an impact on security and is for testing purposes only!
To deactivate the certificate validation, the following setting must be made in
[homserver.yaml](../usage/configuration/homeserver_sample_config.md).
To deactivate the certificate validation, the following setting must be added to
your [homeserver.yaml](../usage/configuration/homeserver_sample_config.md).
```yaml
use_insecure_ssl_client_just_for_testing_do_not_use: true
+11 -7
View File
@@ -232,7 +232,9 @@ python -m synapse.app.homeserver \
--report-stats=[yes|no]
```
... substituting an appropriate value for `--server-name`.
... substituting an appropriate value for `--server-name` and choosing whether
or not to report usage statistics (hostname, Synapse version, uptime, total
users, etc.) to the developers via the `--report-stats` argument.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your homeserver to
@@ -405,11 +407,11 @@ The recommended way to do so is to set up a reverse proxy on port
Alternatively, you can configure Synapse to expose an HTTPS port. To do
so, you will need to edit `homeserver.yaml`, as follows:
- First, under the `listeners` section, uncomment the configuration for the
TLS-enabled listener. (Remove the hash sign (`#`) at the start of
each line). The relevant lines are like this:
- First, under the `listeners` option, add the configuration for the
TLS-enabled listener like so:
```yaml
listeners:
- port: 8448
type: http
tls: true
@@ -417,9 +419,11 @@ so, you will need to edit `homeserver.yaml`, as follows:
- names: [client, federation]
```
- You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You will need to manage
provisioning of these certificates yourself.
- You will also need to add the options `tls_certificate_path` and
`tls_private_key_path` to your configuration file. You will need to manage provisioning of
these certificates yourself.
- You can find more information about these options, as well as how to configure Synapse, in the
[configuration manual](../usage/configuration/config_documentation.md).
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
@@ -0,0 +1,81 @@
# Reporting Anonymised Statistics
When generating your Synapse configuration file, you are asked whether you
would like to report anonymised statistics to Matrix.org. These statistics
provide a glimpse into the number of Synapse homeservers
participating in the network, as well as statistics such as the number of
rooms being created and messages being sent. This feature is sometimes
affectionately called "phone-home" stats. Reporting
[is optional](../../configuration/config_documentation.md#report_stats)
and the reporting endpoint
[can be configured](../../configuration/config_documentation.md#report_stats_endpoint),
in case you would like to instead report statistics from a set of homeservers
to your own infrastructure.
This documentation aims to define the statistics available and the
homeserver configuration options that exist to tweak it.
## Available Statistics
The following statistics are sent to the configured reporting endpoint:
| Statistic Name | Type | Description |
|----------------------------|--------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `memory_rss` | int | The memory usage of the process (in kilobytes on Unix-based systems, bytes on MacOS). |
| `cpu_average` | int | CPU time in % of a single core (not % of all cores). |
| `homeserver` | string | The homeserver's server name. |
| `server_context` | string | An arbitrary string used to group statistics from a set of homeservers. |
| `timestamp` | int | The current time, represented as the number of seconds since the epoch. |
| `uptime_seconds` | int | The number of seconds since the homeserver was last started. |
| `python_version` | string | The Python version number in use (e.g. "3.7.1"). Taken from `sys.version_info`. |
| `total_users` | int | The number of registered users on the homeserver. |
| `total_nonbridged_users` | int | The number of users, excluding those created by an Application Service. |
| `daily_user_type_native` | int | The number of native users created in the last 24 hours. |
| `daily_user_type_guest` | int | The number of guest users created in the last 24 hours. |
| `daily_user_type_bridged` | int | The number of users created by Application Services in the last 24 hours. |
| `total_room_count` | int | The total number of rooms present on the homeserver. |
| `daily_active_users` | int | The number of unique users[^1] that have used the homeserver in the last 24 hours. |
| `monthly_active_users` | int | The number of unique users[^1] that have used the homeserver in the last 30 days. |
| `daily_active_rooms` | int | The number of rooms that have had a (state) event with the type `m.room.message` sent in them in the last 24 hours. |
| `daily_active_e2ee_rooms` | int | The number of rooms that have had a (state) event with the type `m.room.encrypted` sent in them in the last 24 hours. |
| `daily_messages` | int | The number of (state) events with the type `m.room.message` seen in the last 24 hours. |
| `daily_e2ee_messages` | int | The number of (state) events with the type `m.room.encrypted` seen in the last 24 hours. |
| `daily_sent_messages` | int | The number of (state) events sent by a local user with the type `m.room.message` seen in the last 24 hours. |
| `daily_sent_e2ee_messages` | int | The number of (state) events sent by a local user with the type `m.room.encrypted` seen in the last 24 hours. |
| `r30_users_all` | int | The number of 30 day retained users, defined as users who have created their accounts more than 30 days ago, where they were last seen at most 30 days ago and where those two timestamps are over 30 days apart. Includes clients that do not fit into the below r30 client types. |
| `r30_users_android` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Android" in the user agent string. |
| `r30_users_ios` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "iOS" in the user agent string. |
| `r30_users_electron` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Electron" in the user agent string. |
| `r30_users_web` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "Mozilla" or "Gecko" in the user agent string. |
| `r30v2_users_all` | int | The number of 30 day retained users, with a revised algorithm. Defined as users that appear more than once in the past 60 days, and have more than 30 days between the most and least recent appearances in the past 60 days. Includes clients that do not fit into the below r30 client types. |
| `r30v2_users_android` | int | The number of 30 day retained users, as defined above. Filtered only to clients with ("riot" or "element") and "android" (case-insensitive) in the user agent string. |
| `r30v2_users_ios` | int | The number of 30 day retained users, as defined above. Filtered only to clients with ("riot" or "element") and "ios" (case-insensitive) in the user agent string. |
| `r30v2_users_electron` | int | The number of 30 day retained users, as defined above. Filtered only to clients with ("riot" or "element") and "electron" (case-insensitive) in the user agent string. |
| `r30v2_users_web` | int | The number of 30 day retained users, as defined above. Filtered only to clients with "mozilla" or "gecko" (case-insensitive) in the user agent string. |
| `cache_factor` | int | The configured [`global factor`](../../configuration/config_documentation.md#caching) value for caching. |
| `event_cache_size` | int | The configured [`event_cache_size`](../../configuration/config_documentation.md#caching) value for caching. |
| `database_engine` | string | The database engine that is in use. Either "psycopg2" meaning PostgreSQL is in use, or "sqlite3" for SQLite3. |
| `database_server_version` | string | The version of the database server. Examples being "10.10" for PostgreSQL server version 10.10, and "3.38.5" for SQLite 3.38.5 installed on the system. |
| `log_level` | string | The log level in use. Examples are "INFO", "WARNING", "ERROR", "DEBUG", etc. |
[^1]: Native matrix users and guests are always counted. If the
[`track_puppeted_user_ips`](../../configuration/config_documentation.md#track_puppeted_user_ips)
option is set to `true`, "puppeted" users (users that an Application Service have performed
[an action on behalf of](https://spec.matrix.org/v1.3/application-service-api/#identity-assertion))
will also be counted. Note that an Application Service can "puppet" any user in their
[user namespace](https://spec.matrix.org/v1.3/application-service-api/#registration),
not only users that the Application Service has created. If this happens, the Application Service
will additionally be counted as a user (irrespective of `track_puppeted_user_ips`).
## Using a Custom Statistics Collection Server
If statistics reporting is enabled, the endpoint that Synapse sends metrics to is configured by the
[`report_stats_endpoint`](../../configuration/config_documentation.md#report_stats_endpoint) config
option. By default, statistics are sent to Matrix.org.
If you would like to set up your own statistics collection server and send metrics there, you may
consider using one of the following known implementations:
* [Matrix.org's Panopticon](https://github.com/matrix-org/panopticon)
* [Famedly's Barad-dûr](https://gitlab.com/famedly/company/devops/services/barad-dur)
@@ -317,7 +317,7 @@ Example configuration:
allow_public_rooms_without_auth: true
```
---
### `allow_public_rooms_without_auth`
### `allow_public_rooms_over_federation`
If set to true, allows any other homeserver to fetch the server's public
rooms directory via federation. Defaults to false.
@@ -2999,7 +2999,7 @@ This setting has the following sub-options:
* `localdb_enabled`: Set to false to disable authentication against the local password
database. This is ignored if `enabled` is false, and is only useful
if you have other `password_providers`. Defaults to true.
* `pepper`: Set the value here to a secret random string for extra security. # Uncomment and change to a secret random string for extra security.
* `pepper`: Set the value here to a secret random string for extra security.
DO NOT CHANGE THIS AFTER INITIAL SETUP!
* `policy`: Define and enforce a password policy, such as minimum lengths for passwords, etc.
Each parameter is optional. This is an implementation of MSC2000. Parameters are as follows:
@@ -9,6 +9,9 @@ a real homeserver.yaml. Instead, if you are starting from scratch, please genera
a fresh config using Synapse by following the instructions in
[Installation](../../setup/installation.md).
Documentation for all configuration options can be found in the
[Configuration Manual](./config_documentation.md).
```yaml
{{#include ../../sample_config.yaml}}
```
@@ -4,5 +4,5 @@ Synapse supports authenticating users via the [Central Authentication
Service protocol](https://en.wikipedia.org/wiki/Central_Authentication_Service)
(CAS) natively.
Please see the `cas_config` and `sso` sections of the [Synapse configuration
file](../../../configuration/homeserver_sample_config.md) for more details.
Please see the [cas_config](../../../configuration/config_documentation.md#cas_config) and [sso](../../../configuration/config_documentation.md#sso)
sections of the configuration manual for more details.
-4
View File
@@ -56,7 +56,6 @@ exclude = (?x)
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py
|tests/test_metrics.py
|tests/test_server.py
|tests/test_state.py
|tests/test_terms_auth.py
|tests/util/caches/test_cached_call.py
@@ -89,9 +88,6 @@ disallow_untyped_defs = False
[mypy-synapse.logging.opentracing]
disallow_untyped_defs = False
[mypy-synapse.logging.scopecontextmanager]
disallow_untyped_defs = False
[mypy-synapse.metrics._reactor_metrics]
disallow_untyped_defs = False
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
+70 -16
View File
@@ -14,12 +14,18 @@
# By default Synapse is run in monolith mode. This can be overridden by
# setting the WORKERS environment variable.
#
# A regular expression of test method names can be supplied as the first
# argument to the script. Complement will then only run those tests. If
# no regex is supplied, all tests are run. For example;
# You can optionally give a "-f" argument (for "fast") before any other arguments, to skip
# rebuilding the docker images, if you just want to rerun the tests.
#
# Remaining commandline arguments are passed through to `go test`. For example,
# you can supply a regular expression of test method names via the "-run"
# argument:
#
# ./complement.sh -run "TestOutboundFederation(Profile|Send)"
#
# Specifying TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 will cause `poetry export`
# to not emit any hashes when building the Docker image. This then means that
# you can use 'unverifiable' sources such as git repositories as dependencies.
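# For example, a quick local iteration against a git dependency might look like:
#   TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 ./complement.sh -f -run "TestOutboundFederation(Profile|Send)"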
# Exit if a line returns a non-zero exit code
set -e
@@ -32,6 +38,45 @@ echo_if_github() {
fi
}
# Helper to print out the usage instructions
usage() {
cat >&2 <<EOF
Usage: $0 [-f] <go test arguments>...
Run the complement test suite on Synapse.
-f, --fast
Skip rebuilding the docker images, and just use the most recent
'complement-synapse:latest' image
--build-only
Only build the Docker images. Don't actually run Complement.
For help on arguments to 'go test', run 'go help testflag'.
EOF
}
# parse our arguments
skip_docker_build=""
skip_complement_run=""
while [ $# -ge 1 ]; do
arg=$1
case "$arg" in
"-h")
usage
exit 1
;;
"-f"|"--fast")
skip_docker_build=1
;;
"--build-only")
skip_complement_run=1
;;
*)
# unknown arg: presumably an argument to gotest. break the loop.
break
esac
shift
done
# enable buildkit for the docker builds
export DOCKER_BUILDKIT=1
@@ -49,21 +94,30 @@ if [[ -z "$COMPLEMENT_DIR" ]]; then
echo "Checkout available at 'complement-${COMPLEMENT_REF}'"
fi
# Build the base Synapse image from the local checkout
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
docker build -t matrixdotorg/synapse -f "docker/Dockerfile" .
echo_if_github "::endgroup::"
if [ -z "$skip_docker_build" ]; then
# Build the base Synapse image from the local checkout
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
docker build -t matrixdotorg/synapse \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
-f "docker/Dockerfile" .
echo_if_github "::endgroup::"
# Build the workers docker image (from the base Synapse image we just built).
echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
docker build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
echo_if_github "::endgroup::"
# Build the workers docker image (from the base Synapse image we just built).
echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
docker build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
echo_if_github "::endgroup::"
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
docker build -t complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
docker build -t complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"
fi
if [ -n "$skip_complement_run" ]; then
echo "Skipping Complement run as requested."
exit
fi
export COMPLEMENT_BASE_IMAGE=complement-synapse
+3
View File
@@ -268,6 +268,9 @@ class MockHomeserver:
def get_instance_name(self) -> str:
return "master"
def should_send_federation(self) -> bool:
return False
class Porter:
def __init__(
+10
View File
@@ -259,3 +259,13 @@ class ReceiptTypes:
READ: Final = "m.read"
READ_PRIVATE: Final = "org.matrix.msc2285.read.private"
FULLY_READ: Final = "m.fully_read"
class PublicRoomsFilterFields:
"""Fields in the search filter for `/publicRooms` that we understand.
As defined in https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3publicrooms
"""
GENERIC_SEARCH_TERM: Final = "generic_search_term"
ROOM_TYPES: Final = "org.matrix.msc3827.room_types"
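# For illustration, a /publicRooms request body filtering on these fields might
# look like this (the room-types key uses the unstable MSC3827 prefix while the
# MSC is in review):
#   {"filter": {"generic_search_term": "matrix",
#               "org.matrix.msc3827.room_types": ["m.space"]}}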
+6 -2
View File
@@ -106,7 +106,9 @@ def register_sighup(func: Callable[P, None], *args: P.args, **kwargs: P.kwargs)
def start_worker_reactor(
appname: str,
config: HomeServerConfig,
run_command: Callable[[], None] = reactor.run,
# Use a lambda to avoid binding to a given reactor at import time.
# (needed when synapse.app.complement_fork_starter is being used)
run_command: Callable[[], None] = lambda: reactor.run(),
) -> None:
"""Run the reactor in the main process
@@ -141,7 +143,9 @@ def start_reactor(
daemonize: bool,
print_pidfile: bool,
logger: logging.Logger,
run_command: Callable[[], None] = reactor.run,
# Use a lambda to avoid binding to a given reactor at import time.
# (needed when synapse.app.complement_fork_starter is being used)
run_command: Callable[[], None] = lambda: reactor.run(),
) -> None:
"""Run the reactor in the main process
+190
View File
@@ -0,0 +1,190 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ## What this script does
#
# This script spawns multiple workers, whilst only going through the code loading
# process once. The net effect is that start-up time for a swarm of workers is
# reduced, particularly in CPU-constrained environments.
#
# Before the workers are spawned, the database is prepared in order to avoid the
# workers racing.
#
# ## Stability
#
# This script is only intended for use within the Synapse images for the
# Complement test suite.
# There are currently no stability guarantees whatsoever; especially not about:
# - whether it will continue to exist in future versions;
# - the format of its command-line arguments; or
# - any details about its behaviour or principles of operation.
#
# ## Usage
#
# The first argument should be the path to the database configuration, used to
# set up the database. The rest of the arguments are used as follows:
# Each worker is specified as an argument group (each argument group is
# separated by '--').
# The first argument in each argument group is the Python module name of the application
# to start. Further arguments are then passed to that module as-is.
#
# ## Example
#
# python -m synapse.app.complement_fork_starter path_to_db_config.yaml \
# synapse.app.homeserver [args..] -- \
# synapse.app.generic_worker [args..] -- \
# ...
# synapse.app.generic_worker [args..]
#
import argparse
import importlib
import itertools
import multiprocessing
import sys
from typing import Any, Callable, List
from twisted.internet.main import installReactor
class ProxiedReactor:
"""
Twisted tracks the 'installed' reactor as a global variable.
(Actually, it does some module trickery, but the effect is similar.)
The default EpollReactor is buggy if it's created before a process is
forked, then used in the child.
See https://twistedmatrix.com/trac/ticket/4759#comment:17.
However, importing certain Twisted modules will automatically create and
install a reactor if one hasn't already been installed.
It's not normally possible to re-install a reactor.
Given the goal of launching workers with fork() so that the code is only
imported once, this presents a conflict.
Our workaround is to 'install' this ProxiedReactor, which prevents Twisted
from creating and installing one, but which lets us replace the actual reactor
in use later on.
"""
def __init__(self) -> None:
self.___reactor_target: Any = None
def _install_real_reactor(self, new_reactor: Any) -> None:
"""
Install a real reactor for this ProxiedReactor to forward lookups onto.
This method is specific to our ProxiedReactor and should not clash with
any names used on an actual Twisted reactor.
"""
self.___reactor_target = new_reactor
def __getattr__(self, attr_name: str) -> Any:
return getattr(self.___reactor_target, attr_name)
def _worker_entrypoint(
func: Callable[[], None], proxy_reactor: ProxiedReactor, args: List[str]
) -> None:
"""
Entrypoint for a forked worker process.
We just need to set up the command-line arguments, create our real reactor
and then kick off the worker's main() function.
"""
sys.argv = args
from twisted.internet.epollreactor import EPollReactor
proxy_reactor._install_real_reactor(EPollReactor())
func()
def main() -> None:
"""
Entrypoint for the forking launcher.
"""
parser = argparse.ArgumentParser()
parser.add_argument("db_config", help="Path to database config file")
parser.add_argument(
"args",
nargs="...",
help="Argument groups separated by `--`. "
"The first argument of each group is a Synapse app name. "
"Subsequent arguments are passed through.",
)
ns = parser.parse_args()
# Split up the subsequent arguments into each worker's arguments;
# `--` is our delimiter of choice.
args_by_worker: List[List[str]] = [
list(args)
for cond, args in itertools.groupby(ns.args, lambda ele: ele != "--")
if cond and args
]
# Prevent Twisted from installing a shared reactor that all the workers will
# inherit when we fork(), by installing our own beforehand.
proxy_reactor = ProxiedReactor()
installReactor(proxy_reactor)
# Import the entrypoints for all the workers.
worker_functions = []
for worker_args in args_by_worker:
worker_module = importlib.import_module(worker_args[0])
worker_functions.append(worker_module.main)
# We need to prepare the database first as otherwise all the workers will
# try to create a schema version table and some will crash out.
from synapse._scripts import update_synapse_database
update_proc = multiprocessing.Process(
target=_worker_entrypoint,
args=(
update_synapse_database.main,
proxy_reactor,
[
"update_synapse_database",
"--database-config",
ns.db_config,
"--run-background-updates",
],
),
)
print("===== PREPARING DATABASE =====", file=sys.stderr)
update_proc.start()
update_proc.join()
print("===== PREPARED DATABASE =====", file=sys.stderr)
# At this point, we've imported all the main entrypoints for all the workers.
# Now we basically just fork() out to create the workers we need.
# Because we're using fork(), all the workers get a clone of this launcher's
# memory space and don't need to repeat the work of loading the code!
# Instead of using fork() directly, we use the multiprocessing library,
# which uses fork() on Unix platforms.
processes = []
for (func, worker_args) in zip(worker_functions, args_by_worker):
process = multiprocessing.Process(
target=_worker_entrypoint, args=(func, proxy_reactor, worker_args)
)
process.start()
processes.append(process)
# Be a good parent and wait for our children to die before exiting.
for process in processes:
process.join()
if __name__ == "__main__":
main()
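As a worked example of the '--' splitting performed in main() above (argument values are illustrative):

import itertools

argv_rest = [
    "synapse.app.homeserver", "-c", "hs.yaml",
    "--",
    "synapse.app.generic_worker", "-c", "worker1.yaml",
]
args_by_worker = [
    list(args)
    for cond, args in itertools.groupby(argv_rest, lambda ele: ele != "--")
    if cond and args
]
assert args_by_worker == [
    ["synapse.app.homeserver", "-c", "hs.yaml"],
    ["synapse.app.generic_worker", "-c", "worker1.yaml"],
]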
+2 -7
@@ -21,7 +21,7 @@ from typing import Any, Callable, Dict, Optional
import attr
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import check_requirements
from ._base import Config, ConfigError
@@ -159,12 +159,7 @@ class CacheConfig(Config):
self.track_memory_usage = cache_config.get("track_memory_usage", False)
if self.track_memory_usage:
try:
check_requirements("cache_memory")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
check_requirements("cache_memory")
expire_caches = cache_config.get("expire_caches", True)
cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")
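The same simplification recurs in several config hunks below: instead of catching DependencyException and re-wrapping it as a ConfigError, the config classes now call check_requirements() and let its own error propagate. A toy sketch of the shape of that change (check_requirements here is a stand-in, not Synapse's implementation):

def check_requirements(extra: str) -> None:
    # Stand-in: the real helper inspects installed packages and raises a
    # descriptive error only when the optional dependency is absent.
    raise RuntimeError(f"Missing dependencies for {extra}")

def load_cache_config(track_memory_usage: bool) -> None:
    if track_memory_usage:
        check_requirements("cache_memory")  # propagates directly on failure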
+1 -1
@@ -145,7 +145,7 @@ class EmailConfig(Config):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"Please consult the configuration manual at docs/usage/configuration/config_documentation.md for "
"details and update your config file."
)
+3
@@ -87,3 +87,6 @@ class ExperimentalConfig(Config):
# MSC3715: dir param on /relations.
self.msc3715_enabled: bool = experimental.get("msc3715_enabled", False)
# MSC3827: Filtering of /publicRooms by room type
self.msc3827_enabled: bool = experimental.get("msc3827_enabled", False)
+3 -14
@@ -15,14 +15,9 @@
from typing import Any
from synapse.types import JsonDict
from synapse.util.check_dependencies import check_requirements
from ._base import Config, ConfigError
MISSING_AUTHLIB = """Missing authlib library. This is required for jwt login.
Install by running:
pip install synapse[jwt]
"""
from ._base import Config
class JWTConfig(Config):
@@ -41,13 +36,7 @@ class JWTConfig(Config):
# that the claims exist on the JWT.
self.jwt_issuer = jwt_config.get("issuer")
self.jwt_audiences = jwt_config.get("audiences")
try:
from authlib.jose import JsonWebToken
JsonWebToken # To stop unused lint.
except ImportError:
raise ConfigError(MISSING_AUTHLIB)
check_requirements("jwt")
else:
self.jwt_enabled = False
self.jwt_secret = None
+2 -7
@@ -18,7 +18,7 @@ from typing import Any, Optional
import attr
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import check_requirements
from ._base import Config, ConfigError
@@ -57,12 +57,7 @@ class MetricsConfig(Config):
self.sentry_enabled = "sentry" in config
if self.sentry_enabled:
try:
check_requirements("sentry")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
check_requirements("sentry")
self.sentry_dsn = config["sentry"].get("dsn")
if not self.sentry_dsn:
+2 -8
@@ -24,7 +24,7 @@ from synapse.types import JsonDict
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_mxc_uri
from ..util.check_dependencies import DependencyException, check_requirements
from ..util.check_dependencies import check_requirements
from ._base import Config, ConfigError, read_file
DEFAULT_USER_MAPPING_PROVIDER = "synapse.handlers.oidc.JinjaOidcMappingProvider"
@@ -41,12 +41,7 @@ class OIDCConfig(Config):
if not self.oidc_providers:
return
try:
check_requirements("oidc")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
) from e
check_requirements("oidc")
# check we don't have any duplicate idp_ids now. (The SSO handler will also
# check for duplicates when the REST listeners get registered, but that happens
@@ -146,7 +141,6 @@ OIDC_PROVIDER_CONFIG_WITH_ID_SCHEMA = {
"allOf": [OIDC_PROVIDER_CONFIG_SCHEMA, {"required": ["idp_id", "idp_name"]}]
}
# the `oidc_providers` list can either be None (as it is in the default config), or
# a list of provider configs, each of which requires an explicit ID and name.
OIDC_PROVIDER_LIST_SCHEMA = {
+5
@@ -136,6 +136,11 @@ class RatelimitConfig(Config):
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_invites_per_issuer = RateLimitConfig(
config.get("rc_invites", {}).get("per_issuer", {}),
defaults={"per_second": 0.3, "burst_count": 10},
)
self.rc_third_party_invite = RateLimitConfig(
config.get("rc_third_party_invite", {}),
defaults={
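For illustration (hypothetical homeserver config), the nested lookup above degrades gracefully to the defaults given in the diff:

homeserver_config = {"rc_invites": {"per_room": {"per_second": 0.5}}}

per_issuer = homeserver_config.get("rc_invites", {}).get("per_issuer", {})
# per_issuer == {}, so RateLimitConfig falls back to the defaults shown
# above: per_second=0.3, burst_count=10.
assert per_issuer == {}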
+2 -8
@@ -21,7 +21,7 @@ import attr
from synapse.config.server import generate_ip_set
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import check_requirements
from synapse.util.module_loader import load_module
from ._base import Config, ConfigError
@@ -184,13 +184,7 @@ class ContentRepositoryConfig(Config):
)
self.url_preview_enabled = config.get("url_preview_enabled", False)
if self.url_preview_enabled:
try:
check_requirements("url_preview")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
check_requirements("url_preview")
proxy_env = getproxies_environment()
if "url_preview_ip_range_blacklist" not in config:
+2 -7
@@ -18,7 +18,7 @@ from typing import Any, List, Set
from synapse.config.sso import SsoAttributeRequirement
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import check_requirements
from synapse.util.module_loader import load_module, load_python_module
from ._base import Config, ConfigError
@@ -76,12 +76,7 @@ class SAML2Config(Config):
if not saml2_config.get("sp_config") and not saml2_config.get("config_path"):
return
try:
check_requirements("saml2")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
check_requirements("saml2")
self.saml2_enabled = True
+2 -7
@@ -15,7 +15,7 @@
from typing import Any, List, Set
from synapse.types import JsonDict
from synapse.util.check_dependencies import DependencyException, check_requirements
from synapse.util.check_dependencies import check_requirements
from ._base import Config, ConfigError
@@ -40,12 +40,7 @@ class TracerConfig(Config):
if not self.opentracer_enabled:
return
try:
check_requirements("opentracing")
except DependencyException as e:
raise ConfigError(
e.message # noqa: B306, DependencyException.message is a property
)
check_requirements("opentracing")
# The tracer is enabled so sanitize the config
+6 -7
@@ -1092,20 +1092,19 @@ class FederationEventHandler:
logger.debug("Processing event: %s", event)
assert not event.internal_metadata.outlier
context = await self._state_handler.compute_event_context(
event,
state_ids_before_event=state_ids,
)
try:
context = await self._state_handler.compute_event_context(
event,
state_ids_before_event=state_ids,
)
context = await self._check_event_auth(
origin,
event,
context,
)
except AuthError as e:
# FIXME richvdh 2021/10/07 I don't think this is reachable. Let's log it
# for now
logger.exception("Unexpected AuthError from _check_event_auth")
# This happens only if we couldn't find the auth events. We'll already have
# logged a warning, so now we just convert to a FederationError.
raise FederationError("ERROR", e.code, e.msg, affected=event.event_id)
if not backfilled and not context.rejected:
+3
@@ -903,6 +903,9 @@ class EventCreationHandler:
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
if ratelimit:
await self.request_ratelimiter.ratelimit(requester, update=False)
# We limit the number of concurrent event sends in a room so that we
# don't fork the DAG too much. If we don't limit then we can end up in
# a situation where event persistence can't keep up, causing
+20 -3
@@ -25,6 +25,7 @@ from synapse.api.constants import (
GuestAccess,
HistoryVisibility,
JoinRules,
PublicRoomsFilterFields,
)
from synapse.api.errors import (
Codes,
@@ -181,6 +182,7 @@ class RoomListHandler:
== HistoryVisibility.WORLD_READABLE,
"guest_can_join": room["guest_access"] == "can_join",
"join_rule": room["join_rules"],
"org.matrix.msc3827.room_type": room["room_type"],
}
# Filter out Nones rather than omitting the field altogether
@@ -239,7 +241,9 @@ class RoomListHandler:
response["chunk"] = results
response["total_room_count_estimate"] = await self.store.count_public_rooms(
network_tuple, ignore_non_federatable=from_federation
network_tuple,
ignore_non_federatable=from_federation,
search_filter=search_filter,
)
return response
@@ -508,8 +512,21 @@ class RoomListNextBatch:
def _matches_room_entry(room_entry: JsonDict, search_filter: dict) -> bool:
if search_filter and search_filter.get("generic_search_term", None):
generic_search_term = search_filter["generic_search_term"].upper()
"""Determines whether the given search filter matches a room entry returned over
federation.
Only used if the remote server does not support MSC2197 remote-filtered search, and
hence does not support MSC3827 filtering of `/publicRooms` by room type either.
In this case, we cannot apply the `room_type` filter since no `room_type` field is
returned.
"""
if search_filter and search_filter.get(
PublicRoomsFilterFields.GENERIC_SEARCH_TERM, None
):
generic_search_term = search_filter[
PublicRoomsFilterFields.GENERIC_SEARCH_TERM
].upper()
if generic_search_term in room_entry.get("name", "").upper():
return True
elif generic_search_term in room_entry.get("topic", "").upper():
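A hedged sketch of the case-insensitive fallback match applied above, using the same field names (values illustrative):

room_entry = {"name": "Matrix HQ", "topic": "Discussing the spec"}
term = "matrix".upper()
# Matched on the room name; the topic is tried the same way if this fails.
assert term in room_entry.get("name", "").upper()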
+18 -2
@@ -101,19 +101,33 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
burst_count=hs.config.ratelimiting.rc_joins_remote.burst_count,
)
# Ratelimiter for invites, keyed by room (across all issuers, all
# recipients).
self._invites_per_room_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_room.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_room.burst_count,
)
self._invites_per_user_limiter = Ratelimiter(
# Ratelimiter for invites, keyed by recipient (across all rooms, all
# issuers).
self._invites_per_recipient_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_user.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
)
# Ratelimiter for invites, keyed by issuer (across all rooms, all
# recipients).
self._invites_per_issuer_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_invites_per_issuer.per_second,
burst_count=hs.config.ratelimiting.rc_invites_per_issuer.burst_count,
)
self._third_party_invite_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
@@ -258,7 +272,9 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if room_id:
await self._invites_per_room_limiter.ratelimit(requester, room_id)
await self._invites_per_user_limiter.ratelimit(requester, invitee_user_id)
await self._invites_per_recipient_limiter.ratelimit(requester, invitee_user_id)
if requester is not None:
await self._invites_per_issuer_limiter.ratelimit(requester)
async def _local_membership_update(
self,
+3
@@ -271,6 +271,9 @@ class StatsHandler:
room_state["is_federatable"] = (
event_content.get(EventContentFields.FEDERATE, True) is True
)
room_type = event_content.get(EventContentFields.ROOM_TYPE)
if isinstance(room_type, str):
room_state["room_type"] = room_type
elif typ == EventTypes.JoinRules:
room_state["join_rules"] = event_content.get("join_rule")
elif typ == EventTypes.RoomHistoryVisibility:
+35 -26
@@ -164,6 +164,7 @@ Gotchas
with an active span?
"""
import contextlib
import enum
import inspect
import logging
import re
@@ -268,7 +269,7 @@ try:
_reporter: Reporter = attr.Factory(Reporter)
def set_process(self, *args, **kwargs):
def set_process(self, *args: Any, **kwargs: Any) -> None:
return self._reporter.set_process(*args, **kwargs)
def report_span(self, span: "opentracing.Span") -> None:
@@ -319,7 +320,11 @@ _homeserver_whitelist: Optional[Pattern[str]] = None
# Util methods
Sentinel = object()
class _Sentinel(enum.Enum):
# defining a sentinel in this way allows mypy to correctly handle the
# type of a dictionary lookup.
sentinel = object()
P = ParamSpec("P")
@@ -339,12 +344,12 @@ def only_if_tracing(func: Callable[P, R]) -> Callable[P, Optional[R]]:
return _only_if_tracing_inner
def ensure_active_span(message, ret=None):
def ensure_active_span(message: str, ret=None):
"""Executes the operation only if opentracing is enabled and there is an active span.
If there is no active span it logs message at the error level.
Args:
message (str): Message which fills in "There was no active span when trying to %s"
message: Message which fills in "There was no active span when trying to %s"
in the error log if there is no active span and opentracing is enabled.
ret (object): return value if opentracing is None or there is no active span.
@@ -402,7 +407,7 @@ def init_tracer(hs: "HomeServer") -> None:
config = JaegerConfig(
config=hs.config.tracing.jaeger_config,
service_name=f"{hs.config.server.server_name} {hs.get_instance_name()}",
scope_manager=LogContextScopeManager(hs.config),
scope_manager=LogContextScopeManager(),
metrics_factory=PrometheusMetricsFactory(),
)
@@ -451,15 +456,15 @@ def whitelisted_homeserver(destination: str) -> bool:
# Could use kwargs but I want these to be explicit
def start_active_span(
operation_name,
child_of=None,
references=None,
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
operation_name: str,
child_of: Optional[Union["opentracing.Span", "opentracing.SpanContext"]] = None,
references: Optional[List["opentracing.Reference"]] = None,
tags: Optional[Dict[str, str]] = None,
start_time: Optional[float] = None,
ignore_active_span: bool = False,
finish_on_close: bool = True,
*,
tracer=None,
tracer: Optional["opentracing.Tracer"] = None,
):
"""Starts an active opentracing span.
@@ -493,11 +498,11 @@ def start_active_span(
def start_active_span_follows_from(
operation_name: str,
contexts: Collection,
child_of=None,
child_of: Optional[Union["opentracing.Span", "opentracing.SpanContext"]] = None,
start_time: Optional[float] = None,
*,
inherit_force_tracing=False,
tracer=None,
inherit_force_tracing: bool = False,
tracer: Optional["opentracing.Tracer"] = None,
):
"""Starts an active opentracing span, with additional references to previous spans
@@ -540,7 +545,7 @@ def start_active_span_from_edu(
edu_content: Dict[str, Any],
operation_name: str,
references: Optional[List["opentracing.Reference"]] = None,
tags: Optional[Dict] = None,
tags: Optional[Dict[str, str]] = None,
start_time: Optional[float] = None,
ignore_active_span: bool = False,
finish_on_close: bool = True,
@@ -617,23 +622,27 @@ def set_operation_name(operation_name: str) -> None:
@only_if_tracing
def force_tracing(span=Sentinel) -> None:
def force_tracing(
span: Union["opentracing.Span", _Sentinel] = _Sentinel.sentinel
) -> None:
"""Force sampling for the active/given span and its children.
Args:
span: span to force tracing for. By default, the active span.
"""
if span is Sentinel:
span = opentracing.tracer.active_span
if span is None:
if isinstance(span, _Sentinel):
span_to_trace = opentracing.tracer.active_span
else:
span_to_trace = span
if span_to_trace is None:
logger.error("No active span in force_tracing")
return
span.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
span_to_trace.set_tag(opentracing.tags.SAMPLING_PRIORITY, 1)
# also set a bit of baggage, so that we have a way of figuring out if
# it is enabled later
span.set_baggage_item(SynapseBaggage.FORCE_TRACING, "1")
span_to_trace.set_baggage_item(SynapseBaggage.FORCE_TRACING, "1")
def is_context_forced_tracing(
@@ -789,7 +798,7 @@ def extract_text_map(carrier: Dict[str, str]) -> Optional["opentracing.SpanConte
# Tracing decorators
def trace(func=None, opname=None):
def trace(func=None, opname: Optional[str] = None):
"""
Decorator to trace a function.
Sets the operation name to that of the function's or that given
@@ -822,11 +831,11 @@ def trace(func=None, opname=None):
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result):
def call_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
def err_back(result):
def err_back(result: R) -> R:
scope.__exit__(None, None, None)
return result
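A self-contained sketch of the enum-sentinel pattern introduced in this file: because _Sentinel.sentinel has its own type, mypy can narrow a union with isinstance, which a bare object() sentinel does not allow.

import enum
from typing import Optional, Union

class _Sentinel(enum.Enum):
    sentinel = object()

def describe(span: Union[str, _Sentinel] = _Sentinel.sentinel) -> Optional[str]:
    if isinstance(span, _Sentinel):
        return None      # mypy knows `span` is _Sentinel on this branch
    return span.upper()  # ...and `str` on this one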
+19 -16
@@ -16,11 +16,15 @@ import logging
from types import TracebackType
from typing import Optional, Type
from opentracing import Scope, ScopeManager
from opentracing import Scope, ScopeManager, Span
import twisted
from synapse.logging.context import current_context, nested_logging_context
from synapse.logging.context import (
LoggingContext,
current_context,
nested_logging_context,
)
logger = logging.getLogger(__name__)
@@ -35,11 +39,11 @@ class LogContextScopeManager(ScopeManager):
but currently that doesn't work due to https://twistedmatrix.com/trac/ticket/10301.
"""
def __init__(self, config):
def __init__(self) -> None:
pass
@property
def active(self):
def active(self) -> Optional[Scope]:
"""
Returns the currently active Scope which can be used to access the
currently active Scope.span.
@@ -48,19 +52,18 @@ class LogContextScopeManager(ScopeManager):
Tracer.start_active_span() time.
Return:
(Scope) : the Scope that is active, or None if not
available.
The Scope that is active, or None if not available.
"""
ctx = current_context()
return ctx.scope
def activate(self, span, finish_on_close):
def activate(self, span: Span, finish_on_close: bool) -> Scope:
"""
Makes a Span active.
Args:
span (Span): the span that should become active.
finish_on_close (Boolean): whether Span should be automatically
finished when Scope.close() is called.
span: the span that should become active.
finish_on_close: whether Span should be automatically finished when
Scope.close() is called.
Returns:
Scope to control the end of the active period for
@@ -112,8 +115,8 @@ class _LogContextScope(Scope):
def __init__(
self,
manager: LogContextScopeManager,
span,
logcontext,
span: Span,
logcontext: LoggingContext,
enter_logcontext: bool,
finish_on_close: bool,
):
@@ -121,13 +124,13 @@ class _LogContextScope(Scope):
Args:
manager:
the manager that is responsible for this scope.
span (Span):
span:
the opentracing span which this scope represents the local
lifetime for.
logcontext (LogContext):
the logcontext to which this scope is attached.
logcontext:
the log context to which this scope is attached.
enter_logcontext:
if True the logcontext will be exited when the scope is finished
if True the log context will be exited when the scope is finished
finish_on_close:
if True finish the span when the scope is closed
"""
+17 -2
@@ -15,11 +15,11 @@
import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.errors import AuthError, NotFoundError, SynapseError
from synapse.api.errors import AuthError, Codes, NotFoundError, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
from synapse.types import JsonDict, RoomID
from ._base import client_patterns
@@ -104,6 +104,13 @@ class RoomAccountDataServlet(RestServlet):
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot add account data for other users.")
if not RoomID.is_valid(room_id):
raise SynapseError(
400,
f"{room_id} is not a valid room ID",
Codes.INVALID_PARAM,
)
body = parse_json_object_from_request(request)
if account_data_type == "m.fully_read":
@@ -111,6 +118,7 @@ class RoomAccountDataServlet(RestServlet):
405,
"Cannot set m.fully_read through this API."
" Use /rooms/!roomId:server.name/read_markers",
Codes.BAD_JSON,
)
await self.handler.add_account_data_to_room(
@@ -130,6 +138,13 @@ class RoomAccountDataServlet(RestServlet):
if user_id != requester.user.to_string():
raise AuthError(403, "Cannot get account data for other users.")
if not RoomID.is_valid(room_id):
raise SynapseError(
400,
f"{room_id} is not a valid room ID",
Codes.INVALID_PARAM,
)
event = await self.store.get_account_data_for_room_and_type(
user_id, room_id, account_data_type
)
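A minimal usage sketch of the validation added above (assuming RoomID.is_valid as imported in the diff):

from synapse.types import RoomID

assert RoomID.is_valid("!somewhere:example.org")
# Missing the '!' sigil, so rejected with M_INVALID_PARAM by the servlet above.
assert not RoomID.is_valid("somewhere:example.org")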
+2
@@ -95,6 +95,8 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc3026.busy_presence": self.config.experimental.msc3026_enabled,
# Supports receiving private read receipts as per MSC2285
"org.matrix.msc2285": self.config.experimental.msc2285_enabled,
# Supports filtering of /publicRooms by room type MSC3827
"org.matrix.msc3827": self.config.experimental.msc3827_enabled,
# Adds support for importing historical messages as per MSC2716
"org.matrix.msc2716": self.config.experimental.msc2716_enabled,
# Adds support for jump to date endpoints (/timestamp_to_event) as per MSC3030
+9 -3
@@ -249,8 +249,12 @@ class StateHandler:
partial_state = True
logger.debug("calling resolve_state_groups from compute_event_context")
# we've already taken into account partial state, so no need to wait for
# complete state here.
entry = await self.resolve_state_groups_for_events(
event.room_id, event.prev_event_ids()
event.room_id,
event.prev_event_ids(),
await_full_state=False,
)
state_ids_before_event = entry.state
@@ -335,7 +339,7 @@ class StateHandler:
@measure_func()
async def resolve_state_groups_for_events(
self, room_id: str, event_ids: Collection[str]
self, room_id: str, event_ids: Collection[str], await_full_state: bool = True
) -> _StateCacheEntry:
"""Given a list of event_ids this method fetches the state at each
event, resolves conflicts between them and returns them.
@@ -343,6 +347,8 @@ class StateHandler:
Args:
room_id
event_ids
await_full_state: if true, will block if we do not yet have complete
state at these events.
Returns:
The resolved state
@@ -350,7 +356,7 @@ class StateHandler:
logger.debug("resolve_state_groups event_ids %s", event_ids)
state_groups = await self._state_storage_controller.get_state_group_for_events(
event_ids
event_ids, await_full_state=await_full_state
)
state_group_ids = state_groups.values()
+4 -3
@@ -366,10 +366,11 @@ class LoggingTransaction:
*args: P.args,
**kwargs: P.kwargs,
) -> R:
sql = self._make_sql_one_line(sql)
# Generate a one-line version of the SQL to better log it.
one_line_sql = self._make_sql_one_line(sql)
# TODO(paul): Maybe use 'info' and 'debug' for values?
sql_logger.debug("[SQL] {%s} %s", self.name, sql)
sql_logger.debug("[SQL] {%s} %s", self.name, one_line_sql)
sql = self.database_engine.convert_param_style(sql)
if args:
@@ -389,7 +390,7 @@ class LoggingTransaction:
"db.query",
tags={
opentracing.tags.DATABASE_TYPE: "sql",
opentracing.tags.DATABASE_STATEMENT: sql,
opentracing.tags.DATABASE_STATEMENT: one_line_sql,
},
):
return func(sql, *args, **kwargs)
+1 -1
@@ -87,7 +87,6 @@ class DataStore(
RoomStore,
RoomBatchStore,
RegistrationStore,
StreamWorkerStore,
ProfileStore,
PresenceStore,
TransactionWorkerStore,
@@ -112,6 +111,7 @@ class DataStore(
SearchStore,
TagsStore,
AccountDataStore,
StreamWorkerStore,
OpenIdStore,
ClientIpWorkerStore,
DeviceStore,
@@ -25,8 +25,8 @@ from synapse.storage.database import (
LoggingDatabaseConnection,
LoggingTransaction,
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsWorkerStore
from synapse.storage.databases.main.stream import StreamWorkerStore
from synapse.util import json_encoder
from synapse.util.caches.descriptors import cached
@@ -122,7 +122,7 @@ def _deserialize_action(actions: str, is_highlight: bool) -> List[Union[dict, st
return DEFAULT_NOTIF_ACTION
class EventPushActionsWorkerStore(ReceiptsWorkerStore, EventsWorkerStore, SQLBaseStore):
class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBaseStore):
def __init__(
self,
database: DatabasePool,
@@ -218,7 +218,7 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, EventsWorkerStore, SQLBas
retcol="event_id",
)
stream_ordering = self.get_stream_id_for_event_txn(txn, event_id) # type: ignore[attr-defined]
stream_ordering = self.get_stream_id_for_event_txn(txn, event_id)
return self._get_unread_counts_by_pos_txn(
txn, room_id, user_id, stream_ordering
@@ -307,12 +307,22 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, EventsWorkerStore, SQLBas
actions that have been deleted from `event_push_actions` table.
"""
# If there have been no events in the room since the stream ordering,
# there can't be any push actions either.
if not self._events_stream_cache.has_entity_changed(room_id, stream_ordering):
return 0, 0
clause = ""
args = [user_id, room_id, stream_ordering]
if max_stream_ordering is not None:
clause = "AND ea.stream_ordering <= ?"
args.append(max_stream_ordering)
# If the max stream ordering is no greater than the min stream ordering,
# then obviously there are zero push actions in that range.
if max_stream_ordering <= stream_ordering:
return 0, 0
sql = f"""
SELECT
COUNT(CASE WHEN notif = 1 THEN 1 END),
+121 -5
@@ -32,12 +32,17 @@ from typing import (
import attr
from synapse.api.constants import EventContentFields, EventTypes, JoinRules
from synapse.api.constants import (
EventContentFields,
EventTypes,
JoinRules,
PublicRoomsFilterFields,
)
from synapse.api.errors import StoreError
from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.config.homeserver import HomeServerConfig
from synapse.events import EventBase
from synapse.storage._base import SQLBaseStore, db_to_json
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
@@ -199,10 +204,29 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
desc="get_public_room_ids",
)
def _construct_room_type_where_clause(
self, room_types: Union[List[Union[str, None]], None]
) -> Tuple[Union[str, None], List[str]]:
if not room_types or not self.config.experimental.msc3827_enabled:
return None, []
else:
# We use None when we want to get rooms without a type
is_null_clause = ""
if None in room_types:
is_null_clause = "OR room_type IS NULL"
room_types = [value for value in room_types if value is not None]
list_clause, args = make_in_list_sql_clause(
self.database_engine, "room_type", room_types
)
return f"({list_clause} {is_null_clause})", args
async def count_public_rooms(
self,
network_tuple: Optional[ThirdPartyInstanceID],
ignore_non_federatable: bool,
search_filter: Optional[dict],
) -> int:
"""Counts the number of public rooms as tracked in the room_stats_current
and room_stats_state table.
@@ -210,11 +234,20 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
Args:
network_tuple
ignore_non_federatable: If true, filters out non-federatable rooms
search_filter
"""
def _count_public_rooms_txn(txn: LoggingTransaction) -> int:
query_args = []
room_type_clause, args = self._construct_room_type_where_clause(
search_filter.get(PublicRoomsFilterFields.ROOM_TYPES, None)
if search_filter
else None
)
room_type_clause = f" AND {room_type_clause}" if room_type_clause else ""
query_args += args
if network_tuple:
if network_tuple.appservice_id:
published_sql = """
@@ -249,6 +282,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
OR join_rules = '{JoinRules.KNOCK_RESTRICTED}'
OR history_visibility = 'world_readable'
)
{room_type_clause}
AND joined_members > 0
"""
@@ -347,8 +381,12 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
if ignore_non_federatable:
where_clauses.append("is_federatable")
if search_filter and search_filter.get("generic_search_term", None):
search_term = "%" + search_filter["generic_search_term"] + "%"
if search_filter and search_filter.get(
PublicRoomsFilterFields.GENERIC_SEARCH_TERM, None
):
search_term = (
"%" + search_filter[PublicRoomsFilterFields.GENERIC_SEARCH_TERM] + "%"
)
where_clauses.append(
"""
@@ -365,6 +403,15 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
search_term.lower(),
]
room_type_clause, args = self._construct_room_type_where_clause(
search_filter.get(PublicRoomsFilterFields.ROOM_TYPES, None)
if search_filter
else None
)
if room_type_clause:
where_clauses.append(room_type_clause)
query_args += args
where_clause = ""
if where_clauses:
where_clause = " AND " + " AND ".join(where_clauses)
@@ -373,7 +420,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
sql = f"""
SELECT
room_id, name, topic, canonical_alias, joined_members,
avatar, history_visibility, guest_access, join_rules
avatar, history_visibility, guest_access, join_rules, room_type
FROM (
{published_sql}
) published
@@ -1166,6 +1213,7 @@ class _BackgroundUpdates:
POPULATE_ROOM_DEPTH_MIN_DEPTH2 = "populate_room_depth_min_depth2"
REPLACE_ROOM_DEPTH_MIN_DEPTH = "replace_room_depth_min_depth"
POPULATE_ROOMS_CREATOR_COLUMN = "populate_rooms_creator_column"
ADD_ROOM_TYPE_COLUMN = "add_room_type_column"
_REPLACE_ROOM_DEPTH_SQL_COMMANDS = (
@@ -1200,6 +1248,11 @@ class RoomBackgroundUpdateStore(SQLBaseStore):
self._background_add_rooms_room_version_column,
)
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.ADD_ROOM_TYPE_COLUMN,
self._background_add_room_type_column,
)
# BG updates to change the type of room_depth.min_depth
self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.POPULATE_ROOM_DEPTH_MIN_DEPTH2,
@@ -1569,6 +1622,69 @@ class RoomBackgroundUpdateStore(SQLBaseStore):
return batch_size
async def _background_add_room_type_column(
self, progress: JsonDict, batch_size: int
) -> int:
"""Background update to go and add room_type information to `room_stats_state`
table from `event_json` table.
"""
last_room_id = progress.get("room_id", "")
def _background_add_room_type_column_txn(
txn: LoggingTransaction,
) -> bool:
sql = """
SELECT state.room_id, json FROM event_json
INNER JOIN current_state_events AS state USING (event_id)
WHERE state.room_id > ? AND type = 'm.room.create'
ORDER BY state.room_id
LIMIT ?
"""
txn.execute(sql, (last_room_id, batch_size))
room_id_to_create_event_results = txn.fetchall()
new_last_room_id = None
for room_id, event_json in room_id_to_create_event_results:
event_dict = db_to_json(event_json)
room_type = event_dict.get("content", {}).get(
EventContentFields.ROOM_TYPE, None
)
if isinstance(room_type, str):
self.db_pool.simple_update_txn(
txn,
table="room_stats_state",
keyvalues={"room_id": room_id},
updatevalues={"room_type": room_type},
)
new_last_room_id = room_id
if new_last_room_id is None:
return True
self.db_pool.updates._background_update_progress_txn(
txn,
_BackgroundUpdates.ADD_ROOM_TYPE_COLUMN,
{"room_id": new_last_room_id},
)
return False
end = await self.db_pool.runInteraction(
"_background_add_room_type_column",
_background_add_room_type_column_txn,
)
if end:
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.ADD_ROOM_TYPE_COLUMN
)
return batch_size
class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
def __init__(
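A worked example of _construct_room_type_where_clause above, assuming SQLite-style placeholders from make_in_list_sql_clause (the exact clause text is illustrative):

room_types = ["m.space", None]
# None is stripped out and matched by the IS NULL arm instead, so rooms whose
# m.room.create event carries no room_type also match the filter:
expected_clause = "(room_type IN (?) OR room_type IS NULL)"
expected_args = ["m.space"]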
+1 -1
@@ -476,7 +476,7 @@ class RoomMemberWorkerStore(EventsWorkerStore):
return results_dict.get("membership"), results_dict.get("event_id")
@cached(max_entries=500000, iterable=True, prune_unread_entries=False)
@cached(max_entries=500000, iterable=True, prune_unread_entries=False, debug_invalidations=True)
async def get_rooms_for_user_with_stream_ordering(
self, user_id: str
) -> FrozenSet[GetRoomsForUserWithStreamOrdering]:
+8 -2
@@ -16,7 +16,7 @@
import logging
from enum import Enum
from itertools import chain
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, cast
from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple, Union, cast
from typing_extensions import Counter
@@ -238,6 +238,7 @@ class StatsStore(StateDeltasStore):
* avatar
* canonical_alias
* guest_access
* room_type
An is_federatable key can also be included with a boolean value.
@@ -263,6 +264,7 @@ class StatsStore(StateDeltasStore):
"avatar",
"canonical_alias",
"guest_access",
"room_type",
):
field = fields.get(col, sentinel)
if field is not sentinel and (not isinstance(field, str) or "\0" in field):
@@ -572,7 +574,7 @@ class StatsStore(StateDeltasStore):
state_event_map = await self.get_events(event_ids, get_prev_content=False) # type: ignore[attr-defined]
room_state = {
room_state: Dict[str, Union[None, bool, str]] = {
"join_rules": None,
"history_visibility": None,
"encryption": None,
@@ -581,6 +583,7 @@ class StatsStore(StateDeltasStore):
"avatar": None,
"canonical_alias": None,
"is_federatable": True,
"room_type": None,
}
for event in state_event_map.values():
@@ -604,6 +607,9 @@ class StatsStore(StateDeltasStore):
room_state["is_federatable"] = (
event.content.get(EventContentFields.FEDERATE, True) is True
)
room_type = event.content.get(EventContentFields.ROOM_TYPE)
if isinstance(room_type, str):
room_state["room_type"] = room_type
await self.update_room_state(room_id, room_state)
+20
@@ -46,10 +46,12 @@ from typing import (
Set,
Tuple,
cast,
overload,
)
import attr
from frozendict import frozendict
from typing_extensions import Literal
from twisted.internet import defer
@@ -795,6 +797,24 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
)
return RoomStreamToken(topo, stream_ordering)
@overload
def get_stream_id_for_event_txn(
self,
txn: LoggingTransaction,
event_id: str,
allow_none: Literal[False] = False,
) -> int:
...
@overload
def get_stream_id_for_event_txn(
self,
txn: LoggingTransaction,
event_id: str,
allow_none: bool = False,
) -> Optional[int]:
...
def get_stream_id_for_event_txn(
self,
txn: LoggingTransaction,
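The @overload pair above lets mypy treat the return as a plain int when allow_none is left False. A self-contained sketch of the same pattern (toy function, not Synapse's):

from typing import Literal, Optional, overload

@overload
def lookup(key: str, allow_none: Literal[False] = False) -> int: ...
@overload
def lookup(key: str, allow_none: bool = False) -> Optional[int]: ...
def lookup(key: str, allow_none: bool = False) -> Optional[int]:
    table = {"a": 1}
    if key not in table:
        if allow_none:
            return None
        raise KeyError(key)
    return table[key]

n: int = lookup("a")                        # checked as int, not Optional[int]
maybe: Optional[int] = lookup("b", allow_none=True)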
@@ -0,0 +1,19 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
ALTER TABLE room_stats_state ADD room_type TEXT;
INSERT INTO background_updates (update_name, progress_json)
VALUES ('add_room_type_column', '{}');
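Read together with the Python hunk earlier in this diff, the wiring is (a compressed sketch; the batching machinery itself is elided):

# 1. The SQL delta above queues an update named 'add_room_type_column'.
# 2. RoomBackgroundUpdateStore.__init__ binds that same name, via
#    _BackgroundUpdates.ADD_ROOM_TYPE_COLUMN, to
#    _background_add_room_type_column with register_background_update_handler.
# 3. The updates runner calls the handler repeatedly with (progress,
#    batch_size); the handler walks room_stats_state in room_id order and
#    calls _end_background_update() once a batch comes back empty.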
+9
@@ -15,6 +15,7 @@
# limitations under the License.
import enum
import logging
import threading
from typing import (
Callable,
@@ -66,6 +67,7 @@ class DeferredCache(Generic[KT, VT]):
"cache",
"thread",
"_pending_deferred_cache",
"debug_invalidations"
)
def __init__(
@@ -76,6 +78,7 @@ class DeferredCache(Generic[KT, VT]):
iterable: bool = False,
apply_cache_factor_from_config: bool = True,
prune_unread_entries: bool = True,
debug_invalidations: bool = False,
):
"""
Args:
@@ -119,6 +122,7 @@ class DeferredCache(Generic[KT, VT]):
)
self.thread: Optional[threading.Thread] = None
self.debug_invalidations = debug_invalidations
@property
def max_entries(self) -> int:
@@ -310,6 +314,9 @@ class DeferredCache(Generic[KT, VT]):
self.check_thread()
self.cache.del_multi(key)
if self.debug_invalidations:
logging.debug("Invalidating key %r in cache", key)
# if we have a pending lookup for this key, remove it from the
# _pending_deferred_cache, which will (a) stop it being returned
# for future queries and (b) stop it being persisted as a proper entry
@@ -326,6 +333,8 @@ class DeferredCache(Generic[KT, VT]):
entry.invalidate()
def invalidate_all(self) -> None:
if self.debug_invalidations:
logging.debug("Invalidating ALL keys")
self.check_thread()
self.cache.clear()
for entry in self._pending_deferred_cache.values():
+5
@@ -301,6 +301,7 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
cache_context: bool = False,
iterable: bool = False,
prune_unread_entries: bool = True,
debug_invalidations: bool = False,
):
super().__init__(
orig,
@@ -318,6 +319,7 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
self.tree = tree
self.iterable = iterable
self.prune_unread_entries = prune_unread_entries
self.debug_invalidations = debug_invalidations
def __get__(self, obj: Optional[Any], owner: Optional[Type]) -> Callable[..., Any]:
cache: DeferredCache[CacheKey, Any] = DeferredCache(
@@ -326,6 +328,7 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
tree=self.tree,
iterable=self.iterable,
prune_unread_entries=self.prune_unread_entries,
debug_invalidations=self.debug_invalidations,
)
get_cache_key = self.cache_key_builder
@@ -577,6 +580,7 @@ def cached(
cache_context: bool = False,
iterable: bool = False,
prune_unread_entries: bool = True,
debug_invalidations: bool = False,
) -> Callable[[F], _CachedFunction[F]]:
func = lambda orig: DeferredCacheDescriptor(
orig,
@@ -587,6 +591,7 @@ def cached(
cache_context=cache_context,
iterable=iterable,
prune_unread_entries=prune_unread_entries,
debug_invalidations=debug_invalidations,
)
return cast(Callable[[F], _CachedFunction[F]], func)
+1 -1
@@ -50,7 +50,7 @@ class LogContextScopeManagerTestCase(TestCase):
# global variables that power opentracing. We create our own tracer instance
# and test with it.
scope_manager = LogContextScopeManager({})
scope_manager = LogContextScopeManager()
config = jaeger_client.config.Config(
config={}, service_name="test", scope_manager=scope_manager
)
+89 -3
@@ -18,7 +18,7 @@
"""Tests REST events for /rooms paths."""
import json
from typing import Any, Dict, Iterable, List, Optional, Union
from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
from unittest.mock import Mock, call
from urllib import parse as urlparse
@@ -33,7 +33,9 @@ from synapse.api.constants import (
EventContentFields,
EventTypes,
Membership,
PublicRoomsFilterFields,
RelationTypes,
RoomTypes,
)
from synapse.api.errors import Codes, HttpResponseException
from synapse.handlers.pagination import PurgeStatus
@@ -1858,6 +1860,90 @@ class PublicRoomsRestrictedTestCase(unittest.HomeserverTestCase):
self.assertEqual(channel.code, 200, channel.result)
class PublicRoomsRoomTypeFilterTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets_for_client_rest_resource,
room.register_servlets,
login.register_servlets,
]
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
config = self.default_config()
config["allow_public_rooms_without_auth"] = True
config["experimental_features"] = {"msc3827_enabled": True}
self.hs = self.setup_test_homeserver(config=config)
self.url = b"/_matrix/client/r0/publicRooms"
return self.hs
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
user = self.register_user("alice", "pass")
self.token = self.login(user, "pass")
# Create a room
self.helper.create_room_as(
user,
is_public=True,
extra_content={"visibility": "public"},
tok=self.token,
)
# Create a space
self.helper.create_room_as(
user,
is_public=True,
extra_content={
"visibility": "public",
"creation_content": {EventContentFields.ROOM_TYPE: RoomTypes.SPACE},
},
tok=self.token,
)
def make_public_rooms_request(
self, room_types: Union[List[Union[str, None]], None]
) -> Tuple[List[Dict[str, Any]], int]:
channel = self.make_request(
"POST",
self.url,
{"filter": {PublicRoomsFilterFields.ROOM_TYPES: room_types}},
self.token,
)
chunk = channel.json_body["chunk"]
count = channel.json_body["total_room_count_estimate"]
self.assertEqual(len(chunk), count)
return chunk, count
def test_returns_both_rooms_and_spaces_if_no_filter(self) -> None:
chunk, count = self.make_public_rooms_request(None)
self.assertEqual(count, 2)
def test_returns_only_rooms_based_on_filter(self) -> None:
chunk, count = self.make_public_rooms_request([None])
self.assertEqual(count, 1)
self.assertEqual(chunk[0].get("org.matrix.msc3827.room_type", None), None)
def test_returns_only_space_based_on_filter(self) -> None:
chunk, count = self.make_public_rooms_request(["m.space"])
self.assertEqual(count, 1)
self.assertEqual(chunk[0].get("org.matrix.msc3827.room_type", None), "m.space")
def test_returns_both_rooms_and_space_based_on_filter(self) -> None:
chunk, count = self.make_public_rooms_request(["m.space", None])
self.assertEqual(count, 2)
def test_returns_both_rooms_and_spaces_if_array_is_empty(self) -> None:
chunk, count = self.make_public_rooms_request([])
self.assertEqual(count, 2)
class PublicRoomsTestRemoteSearchFallbackTestCase(unittest.HomeserverTestCase):
"""Test that we correctly fallback to local filtering if a remote server
doesn't support search.
@@ -1882,7 +1968,7 @@ class PublicRoomsTestRemoteSearchFallbackTestCase(unittest.HomeserverTestCase):
"Simple test for searching rooms over federation"
self.federation_client.get_public_rooms.return_value = make_awaitable({}) # type: ignore[attr-defined]
search_filter = {"generic_search_term": "foobar"}
search_filter = {PublicRoomsFilterFields.GENERIC_SEARCH_TERM: "foobar"}
channel = self.make_request(
"POST",
@@ -1911,7 +1997,7 @@ class PublicRoomsTestRemoteSearchFallbackTestCase(unittest.HomeserverTestCase):
make_awaitable({}),
)
search_filter = {"generic_search_term": "foobar"}
search_filter = {PublicRoomsFilterFields.GENERIC_SEARCH_TERM: "foobar"}
channel = self.make_request(
"POST",
+69
@@ -12,6 +12,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import json
from synapse.api.constants import RoomTypes
from synapse.rest import admin
from synapse.rest.client import login, room
from synapse.storage.databases.main.room import _BackgroundUpdates
@@ -91,3 +94,69 @@ class RoomBackgroundUpdateStoreTestCase(HomeserverTestCase):
)
)
self.assertEqual(room_creator_after, self.user_id)
def test_background_add_room_type_column(self):
"""Test that the background update to populate the `room_type` column in
`room_stats_state` works properly.
"""
# Create a room without a type
room_id = self._generate_room()
# Get event_id of the m.room.create event
event_id = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table="current_state_events",
keyvalues={
"room_id": room_id,
"type": "m.room.create",
},
retcol="event_id",
)
)
# Fake a room creation event with a room type
event = {
"content": {
"creator": "@user:server.org",
"room_version": "9",
"type": RoomTypes.SPACE,
},
"type": "m.room.create",
}
self.get_success(
self.store.db_pool.simple_update(
table="event_json",
keyvalues={"event_id": event_id},
updatevalues={"json": json.dumps(event)},
desc="test",
)
)
# Insert and run the background update
self.get_success(
self.store.db_pool.simple_insert(
"background_updates",
{
"update_name": _BackgroundUpdates.ADD_ROOM_TYPE_COLUMN,
"progress_json": "{}",
},
)
)
# ... and tell the DataStore that it hasn't finished all updates yet
self.store.db_pool.updates._all_done = False
# Now let's actually drive the updates to completion
self.wait_for_background_updates()
# Make sure the background update filled in the room type
room_type_after = self.get_success(
self.store.db_pool.simple_select_one_onecol(
table="room_stats_state",
keyvalues={"room_id": room_id},
retcol="room_type",
allow_none=True,
)
)
self.assertEqual(room_type_after, RoomTypes.SPACE)
+2
@@ -86,6 +86,8 @@ class EventPushActionsStoreTestCase(HomeserverTestCase):
event.internal_metadata.is_outlier.return_value = False
event.depth = stream
self.store._events_stream_cache.entity_has_changed(room_id, stream)
self.get_success(
self.store.db_pool.simple_insert(
table="events",
+3 -1
@@ -131,7 +131,9 @@ class _DummyStore:
async def get_room_version_id(self, room_id):
return RoomVersions.V1.identifier
async def get_state_group_for_events(self, event_ids):
async def get_state_group_for_events(
self, event_ids, await_full_state: bool = True
):
res = {}
for event in event_ids:
res[event] = self._event_to_state_group[event]