
Compare commits


22 Commits

Author SHA1 Message Date
Brendan Abolivier
dd62d6baac Correct docstring for DeleteMediaByDateSize 2022-10-19 16:58:20 +01:00
Will Hunt
04d7f56f53 Use backend-meta edition of issue triage workflow (#14230) 2022-10-19 11:41:25 +01:00
Eric Eastwood
fa8616e65c Fix MSC3030 /timestamp_to_event returning outliers that it has no idea whether they are near a gap or not (#14215)
Fix the MSC3030 `/timestamp_to_event` endpoint returning `outliers` that it has no idea whether they are near a gap or not (and is therefore unable to determine whether they are actually the closest events). The reason Synapse doesn't know whether an `outlier` is next to a gap is that our gap checks rely on entries in the `event_edges`, `event_forward_extremities`, and `event_backward_extremities` tables, which are [not filled in for `outliers`](2c63cdcc3f/docs/development/room-dag-concepts.md (outliers)).
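For illustration, here's a rough sketch of the kind of gap check involved, using the table and column names referenced above. This is a hand-written approximation, not Synapse's actual query:

```py
# Hedged sketch: an event is "next to a backward gap" if one of its
# prev_events is recorded as a backward extremity. `outliers` have no rows
# in `event_edges` or `event_backward_extremities`, so this question
# simply cannot be answered for them.
sql = """
    SELECT 1 FROM event_edges AS edge
    INNER JOIN event_backward_extremities AS extremity
        ON edge.prev_event_id = extremity.event_id
        AND edge.room_id = extremity.room_id
    WHERE edge.event_id = ?
    LIMIT 1
"""
```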

Also fixes the MSC3030 Complement `can_paginate_after_getting_remote_event_from_timestamp_to_event_endpoint` test flake. Although this showed up as flaky in Complement, if `sync_partial_state` raced and beat us to persist the event before `/timestamp_to_event` ran, then even retrying the failing `/context` request wouldn't work until we made this Synapse change. With this PR, Synapse will never return an `outlier` event, so that test will always go and ask over federation.

Fixes https://github.com/matrix-org/synapse/issues/13944


### Why did this fail before? Why was it flaky?

Sleuthing the server logs on the [CI failure](https://github.com/matrix-org/synapse/actions/runs/3149623842/jobs/5121449357#step:5:5805), it looks like `hs2:/timestamp_to_event` found `$NP6-oU7mIFVyhtKfGvfrEQX949hQX-T-gvuauG6eurU` as an `outlier` event locally. Then when we went and asked for it via `/context`, since it's an `outlier`, it was filtered out of the results -> `You don't have permission to access that event.`

This is reproducible when `sync_partial_state` races and persists `$NP6-oU7mIFVyhtKfGvfrEQX949hQX-T-gvuauG6eurU` as an `outlier` before we evaluate `get_event_for_timestamp(...)`. To consistently reproduce locally, just add a delay at the [start of `get_event_for_timestamp(...)`](cb20b885cb/synapse/handlers/room.py (L1470-L1496)) so it always runs after `sync_partial_state` completes.

```py
from twisted.internet import task as twisted_task

# Wait long enough for `sync_partial_state` to win the race and persist
# the event as an `outlier` before `get_event_for_timestamp(...)` runs.
# (Omitting the callable argument requires Twisted >= 21.2.)
d = twisted_task.deferLater(self.hs.get_reactor(), 3.5)
await d
```

In a run where it passes, `get_event_for_timestamp(...)` on `hs2` finds a different event locally, one which is next to a gap, so we request a closer one from `hs1`, which gets backfilled. And since the backfilled event is not an `outlier`, it's returned as expected during `/context`.

2022-10-18 19:46:25 -05:00
Aaron Raimist
2a76a7369f Fix hiding device names over federation (#10015)
Also don't include blank opentracing context in device list updates.

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2022-10-18 20:54:27 +00:00
Shay
1c777ef1e8 Fix docstring in EventContext (#14145) 2022-10-18 13:40:50 -07:00
MichaIng
06b0c4edfe Add aarch64 wheels to CI (#14212)
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-10-18 17:12:21 +00:00
dependabot[bot]
85aa0f513b Bump twisted from 22.4.0 to 22.8.0 (#14207)
* Bump twisted from 22.4.0 to 22.8.0

Bumps [twisted](https://github.com/twisted/twisted) from 22.4.0 to 22.8.0.
- [Release notes](https://github.com/twisted/twisted/releases)
- [Changelog](https://github.com/twisted/twisted/blob/trunk/NEWS.rst)
- [Commits](https://github.com/twisted/twisted/compare/twisted-22.4.0...twisted-22.8.0)

---
updated-dependencies:
- dependency-name: twisted
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-18 18:06:06 +01:00
Shay
847e2393f3 Preparatory work for adding power level event to batched events (#14214) 2022-10-18 09:58:47 -07:00
dependabot[bot]
2b940d2668 Bump pygithub from 1.55 to 1.56 (#14206)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-18 16:36:31 +00:00
dependabot[bot]
f91b547a07 Bump types-setuptools from 65.4.0.0 to 65.5.0.1 (#14208)
* Bump types-setuptools from 65.4.0.0 to 65.5.0.1

Bumps [types-setuptools](https://github.com/python/typeshed) from 65.4.0.0 to 65.5.0.1.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-setuptools
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-18 16:53:34 +01:00
Hugh Nimmo-Smith
4eaf3eb840 Implementation of HTTP 307 response for MSC3886 POST endpoint (#14018)
Co-authored-by: reivilibre <olivier@librepush.net>
Co-authored-by: Andrew Morgan <andrewm@element.io>
2022-10-18 15:52:25 +00:00
David Robertson
844ce47b9b Don't pin dev-deps in pyproject; use lower bounds (#14227)
* Don't pin dev-deps in pyproject; use lower bounds

This makes it slightly less tedious to update these things via
successive dependabot updates, by reducing the likelihood of a merge
conflict.
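
As a quick illustration of why lower bounds avoid the conflict (version numbers made up; the `packaging` library is used here as an assumption, purely for demonstration):

```py
from packaging.specifiers import SpecifierSet

# An exact pin admits only one version, so every dependabot bump must
# rewrite the specifier in pyproject.toml; a lower bound keeps admitting
# newer releases, so only the lockfile changes.
assert "22.6.0" not in SpecifierSet("==22.3.0")
assert "22.6.0" in SpecifierSet(">=22.3.0")
```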

* Changelog

* Changelog
2022-10-18 16:44:43 +01:00
David Robertson
b951d6bd4c Fixes to release-artifacts warnings (#14224) 2022-10-18 15:40:05 +00:00
Patrick Cloke
dbf18f514e Update the thread_id right before use (in case the bg update hasn't finished) (#14222)
This avoids running a forced update of null thread_id rows.

An index is added (in the background) to hopefully make this
easier in the future.
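
Condensed from the diff below, the shape of the approach (the helper and index names are Synapse's own, taken from the diff):

```py
# Build a partial index in the background so that rows still carrying a
# NULL thread_id are cheap to find and update just-in-time.
self.db_pool.updates.register_background_index_update(
    "event_push_actions_thread_id_null",
    index_name="event_push_actions_thread_id_null",
    table="event_push_actions",
    columns=["thread_id"],
    where_clause="thread_id IS NULL",
)
```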
2022-10-18 14:55:41 +00:00
Jonathan de Jong
e440f9674a Enable URL previews in complement homeserver config. (#14198) 2022-10-18 09:52:23 -04:00
David Robertson
8e50299d8b Fix track_memory_usage on poetry-core 1.3.x installations (#14221)
* Fix `track_memory_usage` on poetry-core 1.3.x installations

The same kind of problem as discussed in #14085:

1. we defined an extra with an underscore
2. we look it up at runtime with an underscore
3. but poetry-core 1.3.x installs it with a dash, causing (2) to fail.

Fix by using a dash everywhere.
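
A minimal illustration of the PEP 685 normalisation in play (the `packaging` helper is an assumption used here for demonstration; poetry-core has its own implementation):

```py
from packaging.utils import canonicalize_name

# PEP 685 normalises extra names the same way as package names: runs of
# `-`, `_` and `.` collapse to a single dash, so both spellings coincide.
assert canonicalize_name("cache_memory") == "cache-memory"
```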

* Changelog
2022-10-18 13:59:04 +01:00
David Robertson
a8677bc9b8 Deal with some GHA deprecation warnings (#14216) 2022-10-18 13:45:34 +01:00
realtyem
6c5082f3e0 Flush stdout/err in Dockerfile-workers before replacing the current process (#14195)
Also update `subprocess.check_output` to the slightly newer `subprocess.run`.
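
The gist of the flush change, as a minimal sketch (the supervisord invocation is illustrative):

```py
import os
import sys

def flush_buffers() -> None:
    # Python buffers stdout/stderr; os.exec*() replaces the process image
    # immediately, so anything still sitting in those buffers would be lost.
    sys.stdout.flush()
    sys.stderr.flush()

flush_buffers()
os.execv("/usr/local/bin/supervisord", ["supervisord"])  # argv illustrative
```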

Signed-off-by: Jason Little <realtyem@gmail.com>
2022-10-18 11:56:20 +00:00
David Robertson
c3a4780080 When restarting a partial join resync, prioritise the server which actioned a partial join (#14126) 2022-10-18 12:33:18 +01:00
Ivan Shapovalov
4af93bd7f6 Allow poetry-core 1.3.2 (#14217)
Signed-off-by: Ivan Shapovalov <intelfx@intelfx.name>
2022-10-18 10:38:58 +01:00
Andrew Morgan
dc02d9f8c5 Avoid checking the event cache when backfilling events (#14164) 2022-10-18 10:33:35 +01:00
Andrew Morgan
828b5502cf Remove _get_events_cache check optimisation from _have_seen_events_dict (#14161) 2022-10-18 10:33:21 +01:00
62 changed files with 743 additions and 331 deletions

.ci/scripts/calculate_jobs.py

@@ -18,6 +18,13 @@
import json
import os
def set_output(key: str, value: str):
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-output-parameter
with open(os.environ["GITHUB_OUTPUT"], "at") as f:
print(f"{key}={value}", file=f)
IS_PR = os.environ["GITHUB_REF"].startswith("refs/pull/")
# First calculate the various trial jobs.
@@ -81,7 +88,7 @@ print("::endgroup::")
test_matrix = json.dumps(
trial_sqlite_tests + trial_postgres_tests + trial_no_extra_tests
)
print(f"::set-output name=trial_test_matrix::{test_matrix}")
set_output("trial_test_matrix", test_matrix)
# First calculate the various sytest jobs.
@@ -125,4 +132,4 @@ print(json.dumps(sytest_tests, indent=4))
print("::endgroup::")
test_matrix = json.dumps(sytest_tests)
print(f"::set-output name=sytest_test_matrix::{test_matrix}")
set_output("sytest_test_matrix", test_matrix)

.github/workflows/docs.yaml

@@ -54,7 +54,7 @@ jobs:
esac
# finally, set the 'branch-version' var.
echo "::set-output name=branch-version::$branch"
echo "branch-version=$branch" >> "$GITHUB_OUTPUT"
# Deploy to the target directory.
- name: Deploy to gh pages

.github/workflows/release-artifacts.yml

@@ -27,6 +27,8 @@ jobs:
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.x'
- id: set-distros
run: |
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
@@ -34,7 +36,7 @@ jobs:
if [[ $GITHUB_REF == refs/tags/* ]]; then
dists=$(scripts-dev/build_debian_packages.py --show-dists-json)
fi
echo "::set-output name=distros::$dists"
echo "distros=$dists" >> "$GITHUB_OUTPUT"
# map the step outputs to job outputs
outputs:
distros: ${{ steps.set-distros.outputs.distros }}
@@ -70,6 +72,8 @@ jobs:
- name: Set up python
uses: actions/setup-python@v4
with:
python-version: '3.x'
- name: Build the packages
# see https://github.com/docker/build-push-action/issues/252
@@ -91,11 +95,14 @@ jobs:
path: debs/*
build-wheels:
name: Build wheels on ${{ matrix.os }}
name: Build wheels on ${{ matrix.os }} for ${{ matrix.arch }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, macos-10.15]
arch: [x86_64, aarch64]
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
is_pr:
- ${{ startsWith(github.ref, 'refs/pull/') }}
@@ -103,6 +110,12 @@ jobs:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-10.15"
# Don't build aarch64 wheels on mac.
- os: "macos-10.15"
arch: aarch64
# Don't build aarch64 wheels on PR CI.
- is_pr: true
arch: aarch64
steps:
- uses: actions/checkout@v3
@@ -116,11 +129,19 @@ jobs:
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.9.0 poetry==1.2.0
# Only build a single wheel in CI.
- name: Set env vars.
run: |
echo "CIBW_BUILD="cp37-manylinux_x86_64"" >> $GITHUB_ENV
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'
uses: docker/setup-qemu-action@v2
with:
platforms: arm64
- name: Build aarch64 wheels
if: matrix.arch == 'aarch64'
run: echo 'CIBW_ARCHS_LINUX=aarch64' >> $GITHUB_ENV
- name: Only build a single wheel on PR
if: startsWith(github.ref, 'refs/pull/')
run: echo "CIBW_BUILD="cp37-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
@@ -128,6 +149,9 @@ jobs:
# Skip testing for platforms which various libraries don't have wheels
# for, and so need extra build deps.
CIBW_TEST_SKIP: pp39-* *i686* *musl* pp37-macosx*
# Fix Rust OOM errors on emulated aarch64: https://github.com/rust-lang/cargo/issues/10583
CARGO_NET_GIT_FETCH_WITH_CLI: true
CIBW_ENVIRONMENT_PASS_LINUX: CARGO_NET_GIT_FETCH_WITH_CLI
- uses: actions/upload-artifact@v3
with:

.github/workflows/triage-incoming.yml

@@ -5,24 +5,11 @@ on:
types: [ opened ]
jobs:
add_new_issues:
name: Add new issues to the triage board
runs-on: ubuntu-latest
steps:
- uses: octokit/graphql-action@v2.x
id: add_to_project
with:
headers: '{"GraphQL-Features": "projects_next_graphql"}'
query: |
mutation add_to_project($projectid:ID!,$contentid:ID!) {
addProjectV2ItemById(input: {projectId: $projectid contentId: $contentid}) {
item {
id
}
}
}
projectid: ${{ env.PROJECT_ID }}
contentid: ${{ github.event.issue.node_id }}
env:
PROJECT_ID: "PVT_kwDOAIB0Bs4AFDdZ"
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
triage:
uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@v1
with:
project_id: 'PVT_kwDOAIB0Bs4AFDdZ'
content_id: ${{ github.event.issue.node_id }}
secrets:
github_access_token: ${{ secrets.ELEMENT_BOT_TOKEN }}

changelog.d/10015.bugfix Normal file

@@ -0,0 +1 @@
Prevent device names from appearing in device list updates when `allow_device_name_lookup_over_federation` is `false`.


@@ -0,0 +1 @@
Support for redirecting to an implementation of a [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886) HTTP rendezvous service.

changelog.d/14126.misc Normal file

@@ -0,0 +1 @@
Faster joins: prioritise the server we joined by when restarting a partial join resync.

changelog.d/14145.doc Normal file

@@ -0,0 +1,2 @@
Clarify comment on event contexts.

changelog.d/14195.docker Normal file

@@ -0,0 +1 @@
Fix pre-startup logging being lost when using the `Dockerfile-workers` image.

changelog.d/14198.misc Normal file

@@ -0,0 +1 @@
Enable url previews when testing with complement.

changelog.d/14206.misc Normal file

@@ -0,0 +1 @@
Bump pygithub from 1.55 to 1.56.

changelog.d/14207.misc Normal file

@@ -0,0 +1 @@
Bump twisted from 22.4.0 to 22.8.0.

changelog.d/14208.misc Normal file

@@ -0,0 +1 @@
Bump types-setuptools from 65.4.0.0 to 65.5.0.1.


@@ -0,0 +1 @@
Build and publish binary wheels for `aarch64` platforms.

changelog.d/14214.misc Normal file

@@ -0,0 +1 @@
When authenticating batched events, check for auth events in batch as well as DB.

changelog.d/14215.bugfix Normal file

@@ -0,0 +1 @@
Fix [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event` endpoint returning potentially inaccurate closest events with `outliers` present.

changelog.d/14216.misc Normal file

@@ -0,0 +1 @@
Update CI config to avoid GitHub Actions deprecation warnings.

changelog.d/14217.misc Normal file

@@ -0,0 +1 @@
Update dependency requirements to allow building with poetry-core 1.3.2.

changelog.d/14221.misc Normal file

@@ -0,0 +1 @@
Rename the `cache_memory` extra to `cache-memory`, for compatibility with poetry-core 1.3.0 and [PEP 685](https://peps.python.org/pep-0685/). From-source installations using this extra will need to install using the new name.


@@ -0,0 +1 @@
Support for thread-specific notifications & receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771) and [MSC3773](https://github.com/matrix-org/matrix-spec-proposals/pull/3773)).

changelog.d/14224.misc Normal file

@@ -0,0 +1 @@
Update CI config to avoid GitHub Actions deprecation warnings.

changelog.d/14227.misc Normal file

@@ -0,0 +1 @@
Specify dev-dependencies using lower bounds, to reduce the likelihood of a dependabot merge conflict. The lockfile continues to pin to specific versions.

changelog.d/14230.misc Normal file

@@ -0,0 +1 @@
Switch to using the `matrix-org/backend-meta` version of `triage-incoming` for new issues in CI.


@@ -12,6 +12,8 @@ trusted_key_servers: []
enable_registration: true
enable_registration_without_verification: true
bcrypt_rounds: 4
url_preview_enabled: true
url_preview_ip_range_blacklist: []
## Registration ##

docker/configure_workers_and_start.py

@@ -230,24 +230,19 @@ upstream {upstream_worker_type} {{
# Utility functions
def log(txt: str) -> None:
"""Log something to the stdout.
Args:
txt: The text to log.
"""
print(txt)
def error(txt: str) -> NoReturn:
"""Log something and exit with an error code.
Args:
txt: The text to log in error.
"""
log(txt)
print(txt, file=sys.stderr)
sys.exit(2)
def flush_buffers() -> None:
sys.stdout.flush()
sys.stderr.flush()
def convert(src: str, dst: str, **template_vars: object) -> None:
"""Generate a file from a template
@@ -328,7 +323,7 @@ def generate_base_homeserver_config() -> None:
# start.py already does this for us, so just call that.
# note that this script is copied in in the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
subprocess.run(["/usr/local/bin/python", "/start.py", "migrate_config"], check=True)
def generate_worker_files(
@@ -642,6 +637,7 @@ def main(args: List[str], environ: MutableMapping[str, str]) -> None:
# Start supervisord, which will start Synapse, all of the configured worker
# processes, redis, nginx etc. according to the config we created above.
log("Starting supervisord")
flush_buffers()
os.execle(
"/usr/local/bin/supervisord",
"supervisord",

docker/start.py

@@ -13,14 +13,19 @@ import jinja2
# Utility functions
def log(txt: str) -> None:
print(txt, file=sys.stderr)
print(txt)
def error(txt: str) -> NoReturn:
log(txt)
print(txt, file=sys.stderr)
sys.exit(2)
def flush_buffers() -> None:
sys.stdout.flush()
sys.stderr.flush()
def convert(src: str, dst: str, environ: Mapping[str, object]) -> None:
"""Generate a file from a template
@@ -131,10 +136,10 @@ def generate_config_from_template(
if ownership is not None:
log(f"Setting ownership on /data to {ownership}")
subprocess.check_output(["chown", "-R", ownership, "/data"])
subprocess.run(["chown", "-R", ownership, "/data"], check=True)
args = ["gosu", ownership] + args
subprocess.check_output(args)
subprocess.run(args, check=True)
def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) -> None:
@@ -158,7 +163,7 @@ def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) ->
if ownership is not None:
# make sure that synapse has perms to write to the data dir.
log(f"Setting ownership on {data_dir} to {ownership}")
subprocess.check_output(["chown", ownership, data_dir])
subprocess.run(["chown", ownership, data_dir], check=True)
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
@@ -185,6 +190,7 @@ def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) ->
"--open-private-ports",
]
# log("running %s" % (args, ))
flush_buffers()
os.execv(sys.executable, args)
@@ -267,8 +273,10 @@ running with 'migrate_config'. See the README for more details.
args = [sys.executable] + args
if ownership is not None:
args = ["gosu", ownership] + args
flush_buffers()
os.execve("/usr/sbin/gosu", args, environ)
else:
flush_buffers()
os.execve(sys.executable, args, environ)

poetry.lock generated

@@ -810,7 +810,7 @@ python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pygithub"
version = "1.55"
version = "1.56"
description = "Use the full Github API v3"
category = "dev"
optional = false
@@ -1308,11 +1308,11 @@ urllib3 = ">=1.26.0"
[[package]]
name = "twisted"
version = "22.4.0"
version = "22.8.0"
description = "An asynchronous networking framework written in Python"
category = "main"
optional = false
python-versions = ">=3.6.7"
python-versions = ">=3.7.1"
[package.dependencies]
attrs = ">=19.2.0"
@@ -1321,27 +1321,28 @@ constantly = ">=15.1"
hyperlink = ">=17.1.1"
idna = {version = ">=2.4", optional = true, markers = "extra == \"tls\""}
incremental = ">=21.3.0"
pyopenssl = {version = ">=16.0.0", optional = true, markers = "extra == \"tls\""}
pyopenssl = {version = ">=21.0.0", optional = true, markers = "extra == \"tls\""}
service-identity = {version = ">=18.1.0", optional = true, markers = "extra == \"tls\""}
twisted-iocpsupport = {version = ">=1.0.2,<2", markers = "platform_system == \"Windows\""}
typing-extensions = ">=3.6.5"
"zope.interface" = ">=4.4.2"
[package.extras]
all-non-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyopenssl (>=16.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
all-non-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
conch = ["appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "cryptography (>=2.6)", "pyasn1"]
conch-nacl = ["PyNaCl", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "cryptography (>=2.6)", "pyasn1"]
contextvars = ["contextvars (>=2.4,<3)"]
dev = ["coverage (>=6b1,<7)", "pydoctor (>=21.9.0,<21.10.0)", "pyflakes (>=2.2,<3.0)", "python-subunit (>=1.4,<2.0)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)", "twistedchecker (>=0.7,<1.0)"]
dev-release = ["pydoctor (>=21.9.0,<21.10.0)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)"]
dev = ["coverage (>=6b1,<7)", "pydoctor (>=22.7.0,<22.8.0)", "pyflakes (>=2.2,<3.0)", "python-subunit (>=1.4,<2.0)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)", "twistedchecker (>=0.7,<1.0)"]
dev-release = ["pydoctor (>=22.7.0,<22.8.0)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)"]
gtk-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pygobject", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
http2 = ["h2 (>=3.0,<5.0)", "priority (>=1.1.0,<2.0)"]
macos-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyobjc-core", "pyobjc-framework-CFNetwork", "pyobjc-framework-Cocoa", "pyopenssl (>=16.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
mypy = ["PyHamcrest (>=1.9.0)", "PyNaCl", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "coverage (>=6b1,<7)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "mypy (==0.930)", "mypy-zope (==0.3.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pydoctor (>=21.9.0,<21.10.0)", "pyflakes (>=2.2,<3.0)", "pyopenssl (>=16.0.0)", "pyserial (>=3.0)", "python-subunit (>=1.4,<2.0)", "pywin32 (!=226)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "service-identity (>=18.1.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)", "twistedchecker (>=0.7,<1.0)", "types-pyOpenSSL", "types-setuptools"]
osx-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyobjc-core", "pyobjc-framework-CFNetwork", "pyobjc-framework-Cocoa", "pyopenssl (>=16.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
macos-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyobjc-core", "pyobjc-framework-CFNetwork", "pyobjc-framework-Cocoa", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
mypy = ["PyHamcrest (>=1.9.0)", "PyNaCl", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "coverage (>=6b1,<7)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "mypy (==0.930)", "mypy-zope (==0.3.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pydoctor (>=22.7.0,<22.8.0)", "pyflakes (>=2.2,<3.0)", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "python-subunit (>=1.4,<2.0)", "pywin32 (!=226)", "readthedocs-sphinx-ext (>=2.1,<3.0)", "service-identity (>=18.1.0)", "sphinx (>=4.1.2,<6)", "sphinx-rtd-theme (>=0.5,<1.0)", "towncrier (>=19.2,<20.0)", "twistedchecker (>=0.7,<1.0)", "types-pyOpenSSL", "types-setuptools"]
osx-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyobjc-core", "pyobjc-framework-CFNetwork", "pyobjc-framework-Cocoa", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
serial = ["pyserial (>=3.0)", "pywin32 (!=226)"]
test = ["PyHamcrest (>=1.9.0)", "cython-test-exception-raiser (>=1.0.2,<2)"]
tls = ["idna (>=2.4)", "pyopenssl (>=16.0.0)", "service-identity (>=18.1.0)"]
windows-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyopenssl (>=16.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
tls = ["idna (>=2.4)", "pyopenssl (>=21.0.0)", "service-identity (>=18.1.0)"]
windows-platform = ["PyHamcrest (>=1.9.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.0.0)", "contextvars (>=2.4,<3)", "cryptography (>=2.6)", "cython-test-exception-raiser (>=1.0.2,<2)", "h2 (>=3.0,<5.0)", "idna (>=2.4)", "priority (>=1.1.0,<2.0)", "pyasn1", "pyopenssl (>=21.0.0)", "pyserial (>=3.0)", "pywin32 (!=226)", "pywin32 (!=226)", "service-identity (>=18.1.0)"]
[[package]]
name = "twisted-iocpsupport"
@@ -1479,7 +1480,7 @@ types-urllib3 = "<1.27"
[[package]]
name = "types-setuptools"
version = "65.4.0.0"
version = "65.5.0.1"
description = "Typing stubs for setuptools"
category = "dev"
optional = false
@@ -1632,7 +1633,7 @@ url-preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "327eb55e543f29feac9ca1a014f17c48fdf01a96bbed9ed9237dab787e9ac614"
content-hash = "9400cb5c92bb4648238f652f5e7f81df51cdcf9b7c69d645f35beaa4acb2f420"
[metadata.files]
attrs = [
@@ -2381,8 +2382,8 @@ pyflakes = [
{file = "pyflakes-2.4.0.tar.gz", hash = "sha256:05a85c2872edf37a4ed30b0cce2f6093e1d0581f8c19d7393122da7e25b2b24c"},
]
pygithub = [
{file = "PyGithub-1.55-py3-none-any.whl", hash = "sha256:2caf0054ea079b71e539741ae56c5a95e073b81fa472ce222e81667381b9601b"},
{file = "PyGithub-1.55.tar.gz", hash = "sha256:1bbfff9372047ff3f21d5cd8e07720f3dbfdaf6462fcaed9d815f528f1ba7283"},
{file = "PyGithub-1.56-py3-none-any.whl", hash = "sha256:d15f13d82165306da8a68aefc0f848a6f6432d5febbff13b60a94758ce3ef8b5"},
{file = "PyGithub-1.56.tar.gz", hash = "sha256:80c6d85cf0f9418ffeb840fd105840af694c4f17e102970badbaf678251f2a01"},
]
pygments = [
{file = "Pygments-2.11.2-py3-none-any.whl", hash = "sha256:44238f1b60a76d78fc8ca0528ee429702aae011c265fe6a8dd8b63049ae41c65"},
@@ -2694,8 +2695,8 @@ twine = [
{file = "twine-3.8.0.tar.gz", hash = "sha256:8efa52658e0ae770686a13b675569328f1fba9837e5de1867bfe5f46a9aefe19"},
]
twisted = [
{file = "Twisted-22.4.0-py3-none-any.whl", hash = "sha256:f9f7a91f94932477a9fc3b169d57f54f96c6e74a23d78d9ce54039a7f48928a2"},
{file = "Twisted-22.4.0.tar.gz", hash = "sha256:a047990f57dfae1e0bd2b7df2526d4f16dcdc843774dc108b78c52f2a5f13680"},
{file = "Twisted-22.8.0-py3-none-any.whl", hash = "sha256:8d4718d1e48dcc28933f8beb48dc71cfe77a125e37ad1eb7a3d0acc49baf6c99"},
{file = "Twisted-22.8.0.tar.gz", hash = "sha256:e5b60de39f2d1da153fbe1874d885fe3fcbdb21fcc446fa759a53e8fc3513bed"},
]
twisted-iocpsupport = [
{file = "twisted-iocpsupport-1.0.2.tar.gz", hash = "sha256:72068b206ee809c9c596b57b5287259ea41ddb4774d86725b19f35bf56aa32a9"},
@@ -2790,8 +2791,8 @@ types-requests = [
{file = "types_requests-2.28.11-py3-none-any.whl", hash = "sha256:af5f55e803cabcfb836dad752bd6d8a0fc8ef1cd84243061c0e27dee04ccf4fd"},
]
types-setuptools = [
{file = "types-setuptools-65.4.0.0.tar.gz", hash = "sha256:d9021d6a70690b34e7bd2947e7ab10167c646fbf062508cb56581be2e2a1615e"},
{file = "types_setuptools-65.4.0.0-py3-none-any.whl", hash = "sha256:ce178b3f7dbd6c0e67f8eee7ae29c1be280ade7e5188bdd9e620843de4060d85"},
{file = "types-setuptools-65.5.0.1.tar.gz", hash = "sha256:5b297081c8f1fbd992cd8b305a97ed96ee6ffc765e9115124029597dd10b8a71"},
{file = "types_setuptools-65.5.0.1-py3-none-any.whl", hash = "sha256:601d45b5e9979d2b931de5403aa11153626a1eadd1ce9727b21f24673ced5ceb"},
]
types-urllib3 = [
{file = "types-urllib3-1.26.10.tar.gz", hash = "sha256:a26898f530e6c3f43f25b907f2b884486868ffd56a9faa94cbf9b3eb6e165d6a"},

pyproject.toml

@@ -227,7 +227,7 @@ jwt = ["authlib"]
# (if it is not installed, we fall back to slow code.)
redis = ["txredisapi", "hiredis"]
# Required to use experimental `caches.track_memory_usage` config option.
cache_memory = ["pympler"]
cache-memory = ["pympler"]
test = ["parameterized", "idna"]
# The duplication here is awful. I hate hate hate hate hate it. However, for now I want
@@ -258,7 +258,7 @@ all = [
"jaeger-client", "opentracing",
# redis
"txredisapi", "hiredis",
# cache_memory
# cache-memory
"pympler",
# omitted:
# - test: it's useful to have this separate from dev deps in the olddeps job
@@ -267,10 +267,10 @@ all = [
[tool.poetry.dev-dependencies]
## We pin black so that our tests don't start failing on new releases.
isort = "==5.10.1"
black = "==22.3.0"
isort = ">=5.10.1"
black = ">=22.3.0"
flake8-comprehensions = "*"
flake8-bugbear = "==21.3.2"
flake8-bugbear = ">=21.3.2"
flake8 = "*"
# Typechecking
@@ -296,11 +296,11 @@ parameterized = ">=0.7.4"
idna = ">=2.5"
# The following are used by the release script
click = "==8.1.3"
click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
commonmark = "==0.9.1"
pygithub = "==1.55"
commonmark = ">=0.9.1"
pygithub = ">=1.55"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from #3425. Rationale unclear.
@@ -312,7 +312,7 @@ towncrier = ">=18.6.0rc1"
# system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=1.0.0,<=1.3.1", "setuptools_rust>=1.3,<=1.5.2"]
requires = ["poetry-core>=1.0.0,<=1.3.2", "setuptools_rust>=1.3,<=1.5.2"]
build-backend = "poetry.core.masonry.api"

synapse/config/cache.py

@@ -159,7 +159,7 @@ class CacheConfig(Config):
self.track_memory_usage = cache_config.get("track_memory_usage", False)
if self.track_memory_usage:
check_requirements("cache_memory")
check_requirements("cache-memory")
expire_caches = cache_config.get("expire_caches", True)
cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")

synapse/config/experimental.py

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any
from typing import Any, Optional
import attr
@@ -120,3 +120,8 @@ class ExperimentalConfig(Config):
# MSC3874: Filtering /messages with rel_types / not_rel_types.
self.msc3874_enabled: bool = experimental.get("msc3874_enabled", False)
# MSC3886: Simple client rendezvous capability
self.msc3886_endpoint: Optional[str] = experimental.get(
"msc3886_endpoint", None
)

synapse/config/server.py

@@ -207,6 +207,9 @@ class HttpListenerConfig:
additional_resources: Dict[str, dict] = attr.Factory(dict)
tag: Optional[str] = None
request_id_header: Optional[str] = None
# If true, the listener will return CORS response headers compatible with MSC3886:
# https://github.com/matrix-org/matrix-spec-proposals/pull/3886
experimental_cors_msc3886: bool = False
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -935,6 +938,7 @@ def parse_listener_def(num: int, listener: Any) -> ListenerConfig:
additional_resources=listener.get("additional_resources", {}),
tag=listener.get("tag"),
request_id_header=listener.get("request_id_header"),
experimental_cors_msc3886=listener.get("experimental_cors_msc3886", False),
)
return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)

synapse/event_auth.py

@@ -15,7 +15,18 @@
import logging
import typing
from typing import Any, Collection, Dict, Iterable, List, Optional, Set, Tuple, Union
from typing import (
Any,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
@@ -134,6 +145,7 @@ def validate_event_for_room_version(event: "EventBase") -> None:
async def check_state_independent_auth_rules(
store: _EventSourceStore,
event: "EventBase",
batched_auth_events: Optional[Mapping[str, "EventBase"]] = None,
) -> None:
"""Check that an event complies with auth rules that are independent of room state
@@ -143,6 +155,8 @@ async def check_state_independent_auth_rules(
Args:
store: the datastore; used to fetch the auth events for validation
event: the event being checked.
batched_auth_events: if the event being authed is part of a batch, any events
from the same batch that may be necessary to auth the current event
Raises:
AuthError if the checks fail
@@ -162,6 +176,9 @@ async def check_state_independent_auth_rules(
redact_behaviour=EventRedactBehaviour.as_is,
allow_rejected=True,
)
if batched_auth_events:
auth_events.update(batched_auth_events)
room_id = event.room_id
auth_dict: MutableStateMap[str] = {}
expected_auth_types = auth_types_for_event(event.room_version, event)

synapse/events/snapshot.py

@@ -65,7 +65,8 @@ class EventContext:
None does not necessarily mean that ``state_group`` does not have
a prev_group!
If the event is a state event, this is normally the same as ``prev_group``.
If the event is a state event, this is normally the same as
``state_group_before_event``.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.

synapse/handlers/device.py

@@ -937,7 +937,10 @@ class DeviceListUpdater:
# Check if we are partially joining any rooms. If so we need to store
# all device list updates so that we can handle them correctly once we
# know who is in the room.
partial_rooms = await self.store.get_partial_state_rooms_and_servers()
# TODO(faster joins): this fetches and processes a bunch of data that we don't
# use. Could be replaced by a tighter query e.g.
# SELECT EXISTS(SELECT 1 FROM partial_state_rooms)
partial_rooms = await self.store.get_partial_state_room_resync_info()
if partial_rooms:
await self.store.add_remote_device_list_to_pending(
user_id,

synapse/handlers/event_auth.py

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, List, Optional, Union
from typing import TYPE_CHECKING, Collection, List, Mapping, Optional, Union
from synapse import event_auth
from synapse.api.constants import (
@@ -29,7 +29,6 @@ from synapse.event_auth import (
)
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.events.snapshot import EventContext
from synapse.types import StateMap, get_domain_from_id
if TYPE_CHECKING:
@@ -51,12 +50,21 @@ class EventAuthHandler:
async def check_auth_rules_from_context(
self,
event: EventBase,
context: EventContext,
batched_auth_events: Optional[Mapping[str, EventBase]] = None,
) -> None:
"""Check an event passes the auth rules at its own auth events"""
await check_state_independent_auth_rules(self._store, event)
"""Check an event passes the auth rules at its own auth events
Args:
event: event to be authed
batched_auth_events: if the event being authed is part of a batch, any events
from the same batch that may be necessary to auth the current event
"""
await check_state_independent_auth_rules(
self._store, event, batched_auth_events
)
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self._store.get_events(auth_event_ids)
if batched_auth_events:
auth_events_by_id.update(batched_auth_events)
check_state_dependent_auth_rules(event, auth_events_by_id.values())
def compute_auth_events(

synapse/handlers/federation.py

@@ -632,6 +632,7 @@ class FederationHandler:
room_id=room_id,
servers=ret.servers_in_room,
device_lists_stream_id=self.store.get_device_stream_token(),
joined_via=origin,
)
try:
@@ -941,7 +942,7 @@ class FederationHandler:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_join_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
return event
async def on_invite_request(
@@ -1122,7 +1123,7 @@ class FederationHandler:
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_leave_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Failed to create new leave %r because %s", event, e)
raise e
@@ -1181,7 +1182,7 @@ class FederationHandler:
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_knock_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Failed to create new knock %r because %s", event, e)
raise e
@@ -1347,9 +1348,7 @@ class FederationHandler:
try:
validate_event_for_room_version(event)
await self._event_auth_handler.check_auth_rules_from_context(
event, context
)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Denying new third party invite %r because %s", event, e)
raise e
@@ -1399,7 +1398,7 @@ class FederationHandler:
try:
validate_event_for_room_version(event)
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Denying third party invite %r because %s", event, e)
raise e
@@ -1615,13 +1614,13 @@ class FederationHandler:
"""Resumes resyncing of all partial-state rooms after a restart."""
assert not self.config.worker.worker_app
partial_state_rooms = await self.store.get_partial_state_rooms_and_servers()
for room_id, servers_in_room in partial_state_rooms.items():
partial_state_rooms = await self.store.get_partial_state_room_resync_info()
for room_id, resync_info in partial_state_rooms.items():
run_as_background_process(
desc="sync_partial_state_room",
func=self._sync_partial_state_room,
initial_destination=None,
other_destinations=servers_in_room,
initial_destination=resync_info.joined_via,
other_destinations=resync_info.servers_in_room,
room_id=room_id,
)
@@ -1650,28 +1649,12 @@ class FederationHandler:
# really leave, that might mean we have difficulty getting the room state over
# federation.
# https://github.com/matrix-org/synapse/issues/12802
#
# TODO(faster_joins): we need some way of prioritising which homeservers in
# `other_destinations` to try first, otherwise we'll spend ages trying dead
# homeservers for large rooms.
# https://github.com/matrix-org/synapse/issues/12999
if initial_destination is None and len(other_destinations) == 0:
raise ValueError(
f"Cannot resync state of {room_id}: no destinations provided"
)
# Make an infinite iterator of destinations to try. Once we find a working
# destination, we'll stick with it until it flakes.
destinations: Collection[str]
if initial_destination is not None:
# Move `initial_destination` to the front of the list.
destinations = list(other_destinations)
if initial_destination in destinations:
destinations.remove(initial_destination)
destinations = [initial_destination] + destinations
else:
destinations = other_destinations
destinations = _prioritise_destinations_for_partial_state_resync(
initial_destination, other_destinations, room_id
)
destination_iter = itertools.cycle(destinations)
# `destination` is the current remote homeserver we're pulling from.
@@ -1769,3 +1752,29 @@ class FederationHandler:
room_id,
destination,
)
def _prioritise_destinations_for_partial_state_resync(
initial_destination: Optional[str],
other_destinations: Collection[str],
room_id: str,
) -> Collection[str]:
"""Work out the order in which we should ask servers to resync events.
If an `initial_destination` is given, it takes top priority. Otherwise
all servers are treated equally.
:raises ValueError: if no destination is provided at all.
"""
if initial_destination is None and len(other_destinations) == 0:
raise ValueError(f"Cannot resync state of {room_id}: no destinations provided")
if initial_destination is None:
return other_destinations
# Move `initial_destination` to the front of the list.
destinations = list(other_destinations)
if initial_destination in destinations:
destinations.remove(initial_destination)
destinations = [initial_destination] + destinations
return destinations

synapse/handlers/message.py

@@ -1360,8 +1360,16 @@ class EventCreationHandler:
else:
try:
validate_event_for_room_version(event)
# If we are persisting a batch of events the event(s) needed to auth the
# current event may be part of the batch and will not be in the DB yet
event_id_to_event = {e.event_id: e for e, _ in events_and_context}
batched_auth_events = {}
for event_id in event.auth_event_ids():
auth_event = event_id_to_event.get(event_id)
if auth_event:
batched_auth_events[event_id] = auth_event
await self._event_auth_handler.check_auth_rules_from_context(
event, context
event, batched_auth_events
)
except AuthError as err:
logger.warning("Denying new event %r because %s", event, err)

synapse/handlers/room.py

@@ -229,9 +229,7 @@ class RoomCreationHandler:
},
)
validate_event_for_room_version(tombstone_event)
await self._event_auth_handler.check_auth_rules_from_context(
tombstone_event, tombstone_context
)
await self._event_auth_handler.check_auth_rules_from_context(tombstone_event)
# Upgrade the room
#

synapse/handlers/sso.py

@@ -874,7 +874,7 @@ class SsoHandler:
)
async def handle_terms_accepted(
self, request: Request, session_id: str, terms_version: str
self, request: SynapseRequest, session_id: str, terms_version: str
) -> None:
"""Handle a request to the new-user 'consent' endpoint

synapse/http/server.py

@@ -19,6 +19,7 @@ import logging
import types
import urllib
from http import HTTPStatus
from http.client import FOUND
from inspect import isawaitable
from typing import (
TYPE_CHECKING,
@@ -339,7 +340,7 @@ class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
return callback_return
_unrecognised_request_handler(request)
return _unrecognised_request_handler(request)
@abc.abstractmethod
def _send_response(
@@ -598,7 +599,7 @@ class RootRedirect(resource.Resource):
class OptionsResource(resource.Resource):
"""Responds to OPTION requests for itself and all children."""
def render_OPTIONS(self, request: Request) -> bytes:
def render_OPTIONS(self, request: SynapseRequest) -> bytes:
request.setResponseCode(204)
request.setHeader(b"Content-Length", b"0")
@@ -763,7 +764,7 @@ def respond_with_json(
def respond_with_json_bytes(
request: Request,
request: SynapseRequest,
code: int,
json_bytes: bytes,
send_cors: bool = False,
@@ -859,7 +860,7 @@ def _write_bytes_to_request(request: Request, bytes_to_write: bytes) -> None:
_ByteProducer(request, bytes_generator)
def set_cors_headers(request: Request) -> None:
def set_cors_headers(request: SynapseRequest) -> None:
"""Set the CORS headers so that javascript running in a web browsers can
use this API
@@ -870,10 +871,20 @@ def set_cors_headers(request: Request) -> None:
request.setHeader(
b"Access-Control-Allow-Methods", b"GET, HEAD, POST, PUT, DELETE, OPTIONS"
)
request.setHeader(
b"Access-Control-Allow-Headers",
b"X-Requested-With, Content-Type, Authorization, Date",
)
if request.experimental_cors_msc3886:
request.setHeader(
b"Access-Control-Allow-Headers",
b"X-Requested-With, Content-Type, Authorization, Date, If-Match, If-None-Match",
)
request.setHeader(
b"Access-Control-Expose-Headers",
b"ETag, Location, X-Max-Bytes",
)
else:
request.setHeader(
b"Access-Control-Allow-Headers",
b"X-Requested-With, Content-Type, Authorization, Date",
)
def set_corp_headers(request: Request) -> None:
@@ -942,10 +953,25 @@ def set_clickjacking_protection_headers(request: Request) -> None:
request.setHeader(b"Content-Security-Policy", b"frame-ancestors 'none';")
def respond_with_redirect(request: Request, url: bytes) -> None:
"""Write a 302 response to the request, if it is still alive."""
def respond_with_redirect(
request: SynapseRequest, url: bytes, statusCode: int = FOUND, cors: bool = False
) -> None:
"""
Write a 302 (or other specified status code) response to the request, if it is still alive.
Args:
request: The http request to respond to.
url: The URL to redirect to.
statusCode: The HTTP status code to use for the redirect (defaults to 302).
cors: Whether to set CORS headers on the response.
"""
logger.debug("Redirect to %s", url.decode("utf-8"))
request.redirect(url)
if cors:
set_cors_headers(request)
request.setResponseCode(statusCode)
request.setHeader(b"location", url)
finish_request(request)

synapse/http/site.py

@@ -82,6 +82,7 @@ class SynapseRequest(Request):
self.reactor = site.reactor
self._channel = channel # this is used by the tests
self.start_time = 0.0
self.experimental_cors_msc3886 = site.experimental_cors_msc3886
# The requester, if authenticated. For federation requests this is the
# server name, for client requests this is the Requester object.
@@ -622,6 +623,8 @@ class SynapseSite(Site):
request_id_header = config.http_options.request_id_header
self.experimental_cors_msc3886 = config.http_options.experimental_cors_msc3886
def request_factory(channel: HTTPChannel, queued: bool) -> Request:
return request_class(
channel,

synapse/rest/__init__.py

@@ -44,6 +44,7 @@ from synapse.rest.client import (
receipts,
register,
relations,
rendezvous,
report_event,
room,
room_batch,
@@ -132,3 +133,4 @@ class ClientRestResource(JsonResource):
# unstable
mutual_rooms.register_servlets(hs, client_resource)
login_token_request.register_servlets(hs, client_resource)
rendezvous.register_servlets(hs, client_resource)

synapse/rest/admin/media.py

@@ -274,9 +274,7 @@ class DeleteMediaByID(RestServlet):
class DeleteMediaByDateSize(RestServlet):
"""Delete local media and local copies of remote media by
timestamp and size.
"""
"""Delete local media by timestamp and size."""
PATTERNS = admin_patterns("/media/(?P<server_name>[^/]*)/delete$")

synapse/rest/client/rendezvous.py Normal file

@@ -0,0 +1,74 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from http.client import TEMPORARY_REDIRECT
from typing import TYPE_CHECKING, Optional
from synapse.http.server import HttpServer, respond_with_redirect
from synapse.http.servlet import RestServlet
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
class RendezvousServlet(RestServlet):
"""
This is a placeholder implementation of [MSC3886](https://github.com/matrix-org/matrix-spec-proposals/pull/3886)
simple client rendezvous capability that is used by the "Sign in with QR" functionality.
This implementation only serves as a 307 redirect to a configured server rather than being a full implementation.
A module that implements the full functionality is available at: https://pypi.org/project/matrix-http-rendezvous-synapse/.
Request:
POST /rendezvous HTTP/1.1
Content-Type: ...
...
Response:
HTTP/1.1 307
Location: <configured endpoint>
"""
PATTERNS = client_patterns(
"/org.matrix.msc3886/rendezvous$", releases=[], v1=False, unstable=True
)
def __init__(self, hs: "HomeServer"):
super().__init__()
redirection_target: Optional[str] = hs.config.experimental.msc3886_endpoint
assert (
redirection_target is not None
), "Servlet is only registered if there is a redirection target"
self.endpoint = redirection_target.encode("utf-8")
async def on_POST(self, request: SynapseRequest) -> None:
respond_with_redirect(
request, self.endpoint, statusCode=TEMPORARY_REDIRECT, cors=True
)
# PUT, GET and DELETE are not implemented as they should be fulfilled by the redirect target.
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.experimental.msc3886_endpoint is not None:
RendezvousServlet(hs).register(http_server)

synapse/rest/client/versions.py

@@ -116,6 +116,9 @@ class VersionsRestServlet(RestServlet):
"org.matrix.msc3881": self.config.experimental.msc3881_enabled,
# Adds support for filtering /messages by event relation.
"org.matrix.msc3874": self.config.experimental.msc3874_enabled,
# Adds support for simple HTTP rendezvous as per MSC3886
"org.matrix.msc3886": self.config.experimental.msc3886_endpoint
is not None,
},
},
)

synapse/rest/key/v2/local_key_resource.py

@@ -20,9 +20,9 @@ from signedjson.sign import sign_json
from unpaddedbase64 import encode_base64
from twisted.web.resource import Resource
from twisted.web.server import Request
from synapse.http.server import respond_with_json_bytes
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
if TYPE_CHECKING:
@@ -99,7 +99,7 @@ class LocalKey(Resource):
json_object = sign_json(json_object, self.config.server.server_name, key)
return json_object
def render_GET(self, request: Request) -> Optional[int]:
def render_GET(self, request: SynapseRequest) -> Optional[int]:
time_now = self.clock.time_msec()
# Update the expiry time if less than half the interval remains.
if time_now + self.config.key.key_refresh_interval / 2 > self.valid_until_ts:

synapse/rest/synapse/client/new_user_consent.py

@@ -20,6 +20,7 @@ from synapse.api.errors import SynapseError
from synapse.handlers.sso import get_username_mapping_session_cookie_from_request
from synapse.http.server import DirectServeHtmlResource, respond_with_html
from synapse.http.servlet import parse_string
from synapse.http.site import SynapseRequest
from synapse.types import UserID
from synapse.util.templates import build_jinja_env
@@ -88,7 +89,7 @@ class NewUserConsentResource(DirectServeHtmlResource):
html = template.render(template_params)
respond_with_html(request, 200, html)
async def _async_render_POST(self, request: Request) -> None:
async def _async_render_POST(self, request: SynapseRequest) -> None:
try:
session_id = get_username_mapping_session_cookie_from_request(request)
except SynapseError as e:

synapse/rest/well_known.py

@@ -18,6 +18,7 @@ from twisted.web.resource import Resource
from twisted.web.server import Request
from synapse.http.server import set_cors_headers
from synapse.http.site import SynapseRequest
from synapse.types import JsonDict
from synapse.util import json_encoder
from synapse.util.stringutils import parse_server_name
@@ -63,7 +64,7 @@ class ClientWellKnownResource(Resource):
Resource.__init__(self)
self._well_known_builder = WellKnownBuilder(hs)
def render_GET(self, request: Request) -> bytes:
def render_GET(self, request: SynapseRequest) -> bytes:
set_cors_headers(request)
r = self._well_known_builder.get_well_known()
if not r:

synapse/storage/database.py

@@ -1658,7 +1658,7 @@ class DatabasePool:
table: string giving the table name
keyvalues: dict of column names and values to select the row with
retcol: string giving the name of the column to return
allow_none: If true, return None instead of failing if the SELECT
allow_none: If true, return None instead of raising StoreError if the SELECT
statement returns no rows
desc: description of the transaction, for logging and metrics
"""

synapse/storage/databases/main/devices.py

@@ -539,9 +539,11 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
"device_id": device_id,
"prev_id": [prev_id] if prev_id else [],
"stream_id": stream_id,
"org.matrix.opentracing_context": opentracing_context,
}
if opentracing_context != "{}":
result["org.matrix.opentracing_context"] = opentracing_context
prev_id = stream_id
if device is not None:
@@ -549,7 +551,11 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
if keys:
result["keys"] = keys
device_display_name = device.display_name
device_display_name = None
if (
self.hs.config.federation.allow_device_name_lookup_over_federation
):
device_display_name = device.display_name
if device_display_name:
result["device_display_name"] = device_display_name
else:

synapse/storage/databases/main/event_push_actions.py

@@ -294,6 +294,44 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
self._background_backfill_thread_id,
)
# Indexes which will be used to quickly make the thread_id column non-null.
self.db_pool.updates.register_background_index_update(
"event_push_actions_thread_id_null",
index_name="event_push_actions_thread_id_null",
table="event_push_actions",
columns=["thread_id"],
where_clause="thread_id IS NULL",
)
self.db_pool.updates.register_background_index_update(
"event_push_summary_thread_id_null",
index_name="event_push_summary_thread_id_null",
table="event_push_summary",
columns=["thread_id"],
where_clause="thread_id IS NULL",
)
# Check ASAP (and then later, every 1s) to see if we have finished
# background updates on the event_push_actions and event_push_summary tables.
self._clock.call_later(0.0, self._check_event_push_backfill_thread_id)
self._event_push_backfill_thread_id_done = False
@wrap_as_background_process("check_event_push_backfill_thread_id")
async def _check_event_push_backfill_thread_id(self) -> None:
"""
Has thread_id finished backfilling?
If not, we need to just-in-time update it so the queries work.
"""
done = await self.db_pool.updates.has_completed_background_update(
"event_push_backfill_thread_id"
)
if done:
self._event_push_backfill_thread_id_done = True
else:
# Reschedule to run.
self._clock.call_later(15.0, self._check_event_push_backfill_thread_id)
async def _background_backfill_thread_id(
self, progress: JsonDict, batch_size: int
) -> int:
@@ -526,6 +564,25 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
(ReceiptTypes.READ, ReceiptTypes.READ_PRIVATE),
)
# First ensure that the existing rows have an updated thread_id field.
if not self._event_push_backfill_thread_id_done:
txn.execute(
"""
UPDATE event_push_summary
SET thread_id = ?
WHERE room_id = ? AND user_id = ? AND thread_id is NULL
""",
(MAIN_TIMELINE, room_id, user_id),
)
txn.execute(
"""
UPDATE event_push_actions
SET thread_id = ?
WHERE room_id = ? AND user_id = ? AND thread_id is NULL
""",
(MAIN_TIMELINE, room_id, user_id),
)
# First we pull the counts from the summary table.
#
# We check that `last_receipt_stream_ordering` matches the stream ordering of the
@@ -1341,6 +1398,25 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
(room_id, user_id, stream_ordering, *thread_args),
)
# First ensure that the existing rows have an updated thread_id field.
if not self._event_push_backfill_thread_id_done:
txn.execute(
"""
UPDATE event_push_summary
SET thread_id = ?
WHERE room_id = ? AND user_id = ? AND thread_id is NULL
""",
(MAIN_TIMELINE, room_id, user_id),
)
txn.execute(
"""
UPDATE event_push_actions
SET thread_id = ?
WHERE room_id = ? AND user_id = ? AND thread_id is NULL
""",
(MAIN_TIMELINE, room_id, user_id),
)
# Fetch the notification counts between the stream ordering of the
# latest receipt and what was previously summarised.
unread_counts = self._get_notif_unread_count_for_user_room(
@@ -1475,6 +1551,19 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
rotate_to_stream_ordering: The new maximum event stream ordering to summarise.
"""
# Ensure that any new actions have an updated thread_id.
if not self._event_push_backfill_thread_id_done:
txn.execute(
"""
UPDATE event_push_actions
SET thread_id = ?
WHERE ? < stream_ordering AND stream_ordering <= ? AND thread_id IS NULL
""",
(MAIN_TIMELINE, old_rotate_stream_ordering, rotate_to_stream_ordering),
)
# XXX Do we need to update summaries here too?
# Calculate the new counts that should be upserted into event_push_summary
sql = """
SELECT user_id, room_id, thread_id,
@@ -1537,6 +1626,20 @@ class EventPushActionsWorkerStore(ReceiptsWorkerStore, StreamWorkerStore, SQLBas
logger.info("Rotating notifications, handling %d rows", len(summaries))
# Ensure that any updated threads have the proper thread_id.
if not self._event_push_backfill_thread_id_done:
txn.execute_batch(
"""
UPDATE event_push_summary
SET thread_id = ?
WHERE room_id = ? AND user_id = ? AND thread_id IS NULL
""",
[
(MAIN_TIMELINE, room_id, user_id)
for user_id, room_id, _ in summaries
],
)
self.db_pool.simple_upsert_many_txn(
txn,
table="event_push_summary",


@@ -1971,12 +1971,17 @@ class EventsWorkerStore(SQLBaseStore):
Args:
room_id: room where the event lives
event_id: event to check
event: event to check (can't be an `outlier`)
Returns:
Boolean indicating whether the event is next to a backward gap.
"""
assert not event.internal_metadata.is_outlier(), (
"is_event_next_to_backward_gap(...) can't be used with `outlier` events. "
"This function relies on `event_backward_extremities` which won't be filled in for `outliers`."
)
def is_event_next_to_backward_gap_txn(txn: LoggingTransaction) -> bool:
# If the event in question has any of its prev_events listed as a
# backward extremity, it's next to a gap.
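In other words, the check reduces to asking whether any of the event's `prev_events` is currently recorded as a backward extremity. A rough sketch of such a query (illustrative; not necessarily the exact SQL Synapse runs):

```py
# The event is next to a backward gap if one of its prev_events is a
# backward extremity, i.e. an event we know about but have not backfilled.
BACKWARD_GAP_SQL = """
    SELECT 1 FROM event_edges
    INNER JOIN event_backward_extremities AS extremity
        ON event_edges.prev_event_id = extremity.event_id
    WHERE event_edges.event_id = ? AND extremity.room_id = ?
    LIMIT 1
"""
```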
@@ -2026,12 +2031,17 @@ class EventsWorkerStore(SQLBaseStore):
Args:
room_id: room where the event lives
event_id: event to check
event: event to check (can't be an `outlier`)
Returns:
Boolean indicating whether the event is next to a forward gap.
"""
assert not event.internal_metadata.is_outlier(), (
"is_event_next_to_forward_gap(...) can't be used with `outlier` events. "
"This function relies on `event_edges` and `event_forward_extremities` which won't be filled in for `outliers`."
)
def is_event_next_to_gap_txn(txn: LoggingTransaction) -> bool:
# If the event in question is a forward extremity, we will just
# consider any potential forward gap as not a gap since it's one of
@@ -2112,13 +2122,33 @@ class EventsWorkerStore(SQLBaseStore):
The closest event_id, or None if we can't find any event in
the given direction.
"""
if direction == "b":
# Find closest event *before* a given timestamp. We use descending
# (which gives values largest to smallest) because we want the
# largest possible timestamp *before* the given timestamp.
comparison_operator = "<="
order = "DESC"
else:
# Find closest event *after* a given timestamp. We use ascending
# (which gives values smallest to largest) because we want the
# closest possible timestamp *after* the given timestamp.
comparison_operator = ">="
order = "ASC"
sql_template = """
sql_template = f"""
SELECT event_id FROM events
LEFT JOIN rejections USING (event_id)
WHERE
origin_server_ts %s ?
AND room_id = ?
room_id = ?
AND origin_server_ts {comparison_operator} ?
/**
* Make sure the event isn't an `outlier` because we have no way
* to later check whether it's next to a gap. `outliers` do not
* have entries in the `event_edges`, `event_forward_extremities`,
* and `event_backward_extremities` tables to check against
* (used by `is_event_next_to_backward_gap` and `is_event_next_to_forward_gap`).
*/
AND NOT outlier
/* Make sure event is not rejected */
AND rejections.event_id IS NULL
/**
@@ -2128,27 +2158,14 @@ class EventsWorkerStore(SQLBaseStore):
* Finally, we can tie-break based on when it was received on the server
* (`stream_ordering`).
*/
ORDER BY origin_server_ts %s, depth %s, stream_ordering %s
ORDER BY origin_server_ts {order}, depth {order}, stream_ordering {order}
LIMIT 1;
"""
def get_event_id_for_timestamp_txn(txn: LoggingTransaction) -> Optional[str]:
if direction == "b":
# Find closest event *before* a given timestamp. We use descending
# (which gives values largest to smallest) because we want the
# largest possible timestamp *before* the given timestamp.
comparison_operator = "<="
order = "DESC"
else:
# Find closest event *after* a given timestamp. We use ascending
# (which gives values smallest to largest) because we want the
# closest possible timestamp *after* the given timestamp.
comparison_operator = ">="
order = "ASC"
txn.execute(
sql_template % (comparison_operator, order, order, order),
(timestamp, room_id),
sql_template,
(room_id, timestamp),
)
row = txn.fetchone()
if row:


@@ -97,6 +97,12 @@ class RoomSortOrder(Enum):
STATE_EVENTS = "state_events"
@attr.s(slots=True, frozen=True, auto_attribs=True)
class PartialStateResyncInfo:
joined_via: Optional[str]
servers_in_room: List[str] = attr.ib(factory=list)
class RoomWorkerStore(CacheInvalidationWorkerStore):
def __init__(
self,
@@ -1160,17 +1166,29 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
desc="get_partial_state_servers_at_join",
)
async def get_partial_state_rooms_and_servers(
async def get_partial_state_room_resync_info(
self,
) -> Mapping[str, Collection[str]]:
"""Get all rooms containing events with partial state, and the servers known
to be in the room.
) -> Mapping[str, PartialStateResyncInfo]:
"""Get all rooms containing events with partial state, and the information
needed to restart a "resync" of those rooms.
Returns:
A dictionary of rooms with partial state, mapping room IDs to the
`PartialStateResyncInfo` needed to restart their resyncs.
"""
room_servers: Dict[str, List[str]] = {}
room_servers: Dict[str, PartialStateResyncInfo] = {}
rows = await self.db_pool.simple_select_list(
table="partial_state_rooms",
keyvalues={},
retcols=("room_id", "joined_via"),
desc="get_server_which_served_partial_join",
)
for row in rows:
room_id = row["room_id"]
joined_via = row["joined_via"]
room_servers[room_id] = PartialStateResyncInfo(joined_via=joined_via)
rows = await self.db_pool.simple_select_list(
"partial_state_rooms_servers",
@@ -1182,7 +1200,15 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
for row in rows:
room_id = row["room_id"]
server_name = row["server_name"]
room_servers.setdefault(room_id, []).append(server_name)
entry = room_servers.get(room_id)
if entry is None:
# There is a foreign key constraint which enforces that every room_id in
# partial_state_rooms_servers appears in partial_state_rooms. So we
# expect `entry` to be non-null. (This reasoning fails if we've
# partial-joined between the two SELECTs, but this is unlikely to happen
# in practice.)
continue
entry.servers_in_room.append(server_name)
return room_servers
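One plausible way a caller could consume this mapping is to try the `joined_via` server first when restarting a resync, falling back to the other servers known to be in the room. A hedged sketch (the helper is hypothetical, not Synapse's actual resync logic):

```py
from typing import List, Optional

def resync_candidate_servers(joined_via: Optional[str], servers_in_room: List[str]) -> List[str]:
    # Prefer the server we partial-joined via, then everyone else we know of.
    ordered = [joined_via] if joined_via is not None else []
    ordered.extend(s for s in servers_in_room if s != joined_via)
    return ordered
```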
@@ -1827,6 +1853,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
room_id: str,
servers: Collection[str],
device_lists_stream_id: int,
joined_via: str,
) -> None:
"""Mark the given room as containing events with partial state.
@@ -1842,6 +1869,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
servers: other servers known to be in the room
device_lists_stream_id: the device_lists stream ID at the time when we first
joined the room.
joined_via: the server name we requested a partial join from.
"""
await self.db_pool.runInteraction(
"store_partial_state_room",
@@ -1849,6 +1877,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
room_id,
servers,
device_lists_stream_id,
joined_via,
)
def _store_partial_state_room_txn(
@@ -1857,6 +1886,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
room_id: str,
servers: Collection[str],
device_lists_stream_id: int,
joined_via: str,
) -> None:
DatabasePool.simple_insert_txn(
txn,
@@ -1866,6 +1896,7 @@ class RoomStore(RoomBackgroundUpdateStore, RoomWorkerStore):
"device_lists_stream_id": device_lists_stream_id,
# To be updated later once the join event is persisted.
"join_event_id": None,
"joined_via": joined_via,
},
)
DatabasePool.simple_insert_many_txn(


@@ -1,29 +0,0 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Forces the background updates from 06thread_notifications.sql to run in the
-- foreground as code will now require those to be "done".
DELETE FROM background_updates WHERE update_name = 'event_push_backfill_thread_id';
-- Overwrite any null thread_id columns.
UPDATE event_push_actions_staging SET thread_id = 'main' WHERE thread_id IS NULL;
UPDATE event_push_actions SET thread_id = 'main' WHERE thread_id IS NULL;
UPDATE event_push_summary SET thread_id = 'main' WHERE thread_id IS NULL;
-- Do not run the event_push_summary_unique_index job if it is pending; the
-- thread_id field will be made required.
DELETE FROM background_updates WHERE update_name = 'event_push_summary_unique_index';
DROP INDEX IF EXISTS event_push_summary_unique_index;


@@ -0,0 +1,23 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- Allow there to be multiple summaries per user/room.
DROP INDEX IF EXISTS event_push_summary_unique_index;
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(7306, 'event_push_actions_thread_id_null', '{}', 'event_push_backfill_thread_id');
INSERT INTO background_updates (ordering, update_name, progress_json, depends_on) VALUES
(7306, 'event_push_summary_thread_id_null', '{}', 'event_push_backfill_thread_id');


@@ -1,101 +0,0 @@
/* Copyright 2022 The Matrix.org Foundation C.I.C
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-- SQLite doesn't support modifying the columns of an existing table, so the
-- tables must be recreated.
-- Create the new tables.
CREATE TABLE event_push_actions_staging_new (
event_id TEXT NOT NULL,
user_id TEXT NOT NULL,
actions TEXT NOT NULL,
notif SMALLINT NOT NULL,
highlight SMALLINT NOT NULL,
unread SMALLINT,
thread_id TEXT NOT NULL,
inserted_ts BIGINT
);
CREATE TABLE event_push_actions_new (
room_id TEXT NOT NULL,
event_id TEXT NOT NULL,
user_id TEXT NOT NULL,
profile_tag VARCHAR(32),
actions TEXT NOT NULL,
topological_ordering BIGINT,
stream_ordering BIGINT,
notif SMALLINT,
highlight SMALLINT,
unread SMALLINT,
thread_id TEXT NOT NULL,
CONSTRAINT event_id_user_id_profile_tag_uniqueness UNIQUE (room_id, event_id, user_id, profile_tag)
);
CREATE TABLE event_push_summary_new (
user_id TEXT NOT NULL,
room_id TEXT NOT NULL,
notif_count BIGINT NOT NULL,
stream_ordering BIGINT NOT NULL,
unread_count BIGINT,
last_receipt_stream_ordering BIGINT,
thread_id TEXT NOT NULL
);
-- Swap the indexes.
DROP INDEX IF EXISTS event_push_actions_staging_id;
CREATE INDEX event_push_actions_staging_id ON event_push_actions_staging_new(event_id);
DROP INDEX IF EXISTS event_push_actions_room_id_user_id;
DROP INDEX IF EXISTS event_push_actions_rm_tokens;
DROP INDEX IF EXISTS event_push_actions_stream_ordering;
DROP INDEX IF EXISTS event_push_actions_u_highlight;
DROP INDEX IF EXISTS event_push_actions_highlights_index;
CREATE INDEX event_push_actions_room_id_user_id ON event_push_actions_new(room_id, user_id);
CREATE INDEX event_push_actions_rm_tokens ON event_push_actions_new(user_id, room_id, topological_ordering, stream_ordering);
CREATE INDEX event_push_actions_stream_ordering ON event_push_actions_new(stream_ordering, user_id);
CREATE INDEX event_push_actions_u_highlight ON event_push_actions_new (user_id, stream_ordering);
CREATE INDEX event_push_actions_highlights_index ON event_push_actions_new (user_id, room_id, topological_ordering, stream_ordering);
-- Copy the data.
INSERT INTO event_push_actions_staging_new (event_id, user_id, actions, notif, highlight, unread, thread_id, inserted_ts)
SELECT event_id, user_id, actions, notif, highlight, unread, thread_id, inserted_ts
FROM event_push_actions_staging;
INSERT INTO event_push_actions_new (room_id, event_id, user_id, profile_tag, actions, topological_ordering, stream_ordering, notif, highlight, unread, thread_id)
SELECT room_id, event_id, user_id, profile_tag, actions, topological_ordering, stream_ordering, notif, highlight, unread, thread_id
FROM event_push_actions;
INSERT INTO event_push_summary_new (user_id, room_id, notif_count, stream_ordering, unread_count, last_receipt_stream_ordering, thread_id)
SELECT user_id, room_id, notif_count, stream_ordering, unread_count, last_receipt_stream_ordering, thread_id
FROM event_push_summary;
-- Drop the old tables.
DROP TABLE event_push_actions_staging;
DROP TABLE event_push_actions;
DROP TABLE event_push_summary;
-- Rename the tables.
ALTER TABLE event_push_actions_staging_new RENAME TO event_push_actions_staging;
ALTER TABLE event_push_actions_new RENAME TO event_push_actions;
ALTER TABLE event_push_summary_new RENAME TO event_push_summary;
-- Re-run background updates from 72/02event_push_actions_index.sql and
-- 72/06thread_notifications.sql.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(7307, 'event_push_summary_unique_index2', '{}')
ON CONFLICT (update_name) DO NOTHING;
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
(7307, 'event_push_actions_stream_highlight_index', '{}')
ON CONFLICT (update_name) DO NOTHING;


@@ -13,7 +13,6 @@
* limitations under the License.
*/
-- The columns can now be made non-nullable.
ALTER TABLE event_push_actions_staging ALTER COLUMN thread_id SET NOT NULL;
ALTER TABLE event_push_actions ALTER COLUMN thread_id SET NOT NULL;
ALTER TABLE event_push_summary ALTER COLUMN thread_id SET NOT NULL;
-- When we resync partial state, we prioritise doing so using the server we
-- partial-joined from. To do this we need to record which server that was!
ALTER TABLE partial_state_rooms ADD COLUMN joined_via TEXT;


@@ -153,6 +153,7 @@ class TerseJsonTestCase(LoggerCleanupMixin, TestCase):
site.site_tag = "test-site"
site.server_version_string = "Server v1"
site.reactor = Mock()
site.experimental_cors_msc3886 = False
request = SynapseRequest(FakeChannel(site, None), site)
# Call requestReceived to finish instantiating the object.
request.content = BytesIO()


@@ -0,0 +1,45 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.test.proto_helpers import MemoryReactor
from synapse.rest.client import rendezvous
from synapse.server import HomeServer
from synapse.util import Clock
from tests import unittest
from tests.unittest import override_config
endpoint = "/_matrix/client/unstable/org.matrix.msc3886/rendezvous"
class RendezvousServletTestCase(unittest.HomeserverTestCase):
servlets = [
rendezvous.register_servlets,
]
def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
self.hs = self.setup_test_homeserver()
return self.hs
def test_disabled(self) -> None:
channel = self.make_request("POST", endpoint, {}, access_token=None)
self.assertEqual(channel.code, 400)
@override_config({"experimental_features": {"msc3886_endpoint": "/asd"}})
def test_redirect(self) -> None:
channel = self.make_request("POST", endpoint, {}, access_token=None)
self.assertEqual(channel.code, 307)
self.assertEqual(channel.headers.getRawHeaders("Location"), ["/asd"])
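From a client's point of view, the redirect behaviour exercised above might look like the following (the hostname and the use of `requests` are illustrative assumptions, and `msc3886_endpoint` must be configured as in the test):

```py
import requests

resp = requests.post(
    "https://example.org/_matrix/client/unstable/org.matrix.msc3886/rendezvous",
    json={},
    allow_redirects=False,  # observe the 307 rather than following it
)
assert resp.status_code == 307
print(resp.headers["Location"])  # the configured msc3886_endpoint, e.g. "/asd"
```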


@@ -39,6 +39,8 @@ from synapse.api.constants import (
)
from synapse.api.errors import Codes, HttpResponseException
from synapse.appservice import ApplicationService
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.handlers.pagination import PurgeStatus
from synapse.rest import admin
from synapse.rest.client import account, directory, login, profile, register, room, sync
@@ -51,6 +53,7 @@ from tests import unittest
from tests.http.server._base import make_request_with_cancellation_test
from tests.storage.test_stream import PaginationTestCase
from tests.test_utils import make_awaitable
from tests.test_utils.event_injection import create_event
PATH_PREFIX = b"/_matrix/client/api/v1"
@@ -3486,3 +3489,65 @@ class ThreepidInviteTestCase(unittest.HomeserverTestCase):
)
self.assertEqual(channel.code, 400)
self.assertEqual(channel.json_body["errcode"], "M_MISSING_PARAM")
class TimestampLookupTestCase(unittest.HomeserverTestCase):
servlets = [
admin.register_servlets,
room.register_servlets,
login.register_servlets,
]
def default_config(self) -> JsonDict:
config = super().default_config()
config["experimental_features"] = {"msc3030_enabled": True}
return config
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self._storage_controllers = self.hs.get_storage_controllers()
self.room_owner = self.register_user("room_owner", "test")
self.room_owner_tok = self.login("room_owner", "test")
def _inject_outlier(self, room_id: str) -> EventBase:
event, _context = self.get_success(
create_event(
self.hs,
room_id=room_id,
type="m.test",
sender="@test_remote_user:remote",
)
)
event.internal_metadata.outlier = True
self.get_success(
self._storage_controllers.persistence.persist_event(
event, EventContext.for_outlier(self._storage_controllers)
)
)
return event
def test_no_outliers(self) -> None:
"""
Test to make sure `/timestamp_to_event` does not return `outlier` events.
We're unable to determine whether an `outlier` is next to a gap so we
don't know whether it's actually the closest event. Instead, let's just
ignore `outliers` with this endpoint.
This test really checks that we choose the non-`outlier` event behind the
`outlier`. Since the gap checking logic considers the latest message in the room
as *not* next to a gap, asking over federation does not come into play here.
"""
room_id = self.helper.create_room_as(self.room_owner, tok=self.room_owner_tok)
outlier_event = self._inject_outlier(room_id)
channel = self.make_request(
"GET",
f"/_matrix/client/unstable/org.matrix.msc3030/rooms/{room_id}/timestamp_to_event?dir=b&ts={outlier_event.origin_server_ts}",
access_token=self.room_owner_tok,
)
self.assertEqual(HTTPStatus.OK, channel.code, msg=channel.json_body)
# Make sure the outlier event is not returned
self.assertNotEqual(channel.json_body["event_id"], outlier_event.event_id)
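For reference, the endpoint under test can be exercised directly; a hedged example (hostname, room ID, and access token are placeholders):

```py
import requests

resp = requests.get(
    "https://example.org/_matrix/client/unstable/org.matrix.msc3030"
    "/rooms/!somewhere:example.org/timestamp_to_event",
    params={"dir": "b", "ts": 1666000000000},
    headers={"Authorization": "Bearer <access_token>"},
)
# After this change, the event_id returned here is never an outlier.
print(resp.json().get("event_id"))
```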


@@ -266,7 +266,12 @@ class FakeSite:
site_tag = "test"
access_logger = logging.getLogger("synapse.access.http.fake")
def __init__(self, resource: IResource, reactor: IReactorTime):
def __init__(
self,
resource: IResource,
reactor: IReactorTime,
experimental_cors_msc3886: bool = False,
):
"""
Args:
@@ -274,6 +279,7 @@ class FakeSite:
"""
self._resource = resource
self.reactor = reactor
self.experimental_cors_msc3886 = experimental_cors_msc3886
def getResourceFor(self, request):
return self._resource


@@ -222,13 +222,22 @@ class OptionsResourceTests(unittest.TestCase):
self.resource = OptionsResource()
self.resource.putChild(b"res", DummyResource())
def _make_request(self, method: bytes, path: bytes) -> FakeChannel:
def _make_request(
self, method: bytes, path: bytes, experimental_cors_msc3886: bool = False
) -> FakeChannel:
"""Create a request from the method/path and return a channel with the response."""
# Create a site and query for the resource.
site = SynapseSite(
"test",
"site_tag",
parse_listener_def(0, {"type": "http", "port": 0}),
parse_listener_def(
0,
{
"type": "http",
"port": 0,
"experimental_cors_msc3886": experimental_cors_msc3886,
},
),
self.resource,
"1.0",
max_request_body_size=4096,
@@ -239,25 +248,58 @@ class OptionsResourceTests(unittest.TestCase):
channel = make_request(self.reactor, site, method, path, shorthand=False)
return channel
def _check_cors_standard_headers(self, channel: FakeChannel) -> None:
# Ensure the correct CORS headers have been added
# as per https://spec.matrix.org/v1.4/client-server-api/#web-browser-clients
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Origin"),
[b"*"],
"has correct CORS Origin header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Methods"),
[b"GET, HEAD, POST, PUT, DELETE, OPTIONS"], # HEAD isn't in the spec
"has correct CORS Methods header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Headers"),
[b"X-Requested-With, Content-Type, Authorization, Date"],
"has correct CORS Headers header",
)
def _check_cors_msc3886_headers(self, channel: FakeChannel) -> None:
# Ensure the correct CORS headers have been added
# as per https://github.com/matrix-org/matrix-spec-proposals/blob/hughns/simple-rendezvous-capability/proposals/3886-simple-rendezvous-capability.md#cors
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Origin"),
[b"*"],
"has correct CORS Origin header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Methods"),
[b"GET, HEAD, POST, PUT, DELETE, OPTIONS"], # HEAD isn't in the spec
"has correct CORS Methods header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Allow-Headers"),
[
b"X-Requested-With, Content-Type, Authorization, Date, If-Match, If-None-Match"
],
"has correct CORS Headers header",
)
self.assertEqual(
channel.headers.getRawHeaders(b"Access-Control-Expose-Headers"),
[b"ETag, Location, X-Max-Bytes"],
"has correct CORS Expose Headers header",
)
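On the server side, satisfying these assertions amounts to attaching the MSC3886 header set to the response. A minimal sketch of what that could look like on a Twisted request (illustrative, not Synapse's actual implementation):

```py
def set_msc3886_cors_headers(request) -> None:
    # Values mirror the assertions above and MSC3886's CORS section.
    request.setHeader(b"Access-Control-Allow-Origin", b"*")
    request.setHeader(
        b"Access-Control-Allow-Methods", b"GET, HEAD, POST, PUT, DELETE, OPTIONS"
    )
    request.setHeader(
        b"Access-Control-Allow-Headers",
        b"X-Requested-With, Content-Type, Authorization, Date, If-Match, If-None-Match",
    )
    request.setHeader(b"Access-Control-Expose-Headers", b"ETag, Location, X-Max-Bytes")
```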
def test_unknown_options_request(self) -> None:
"""An OPTIONS requests to an unknown URL still returns 204 No Content."""
channel = self._make_request(b"OPTIONS", b"/foo/")
self.assertEqual(channel.code, 204)
self.assertNotIn("body", channel.result)
# Ensure the correct CORS headers have been added
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Origin"),
"has CORS Origin header",
)
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Methods"),
"has CORS Methods header",
)
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Headers"),
"has CORS Headers header",
)
self._check_cors_standard_headers(channel)
def test_known_options_request(self) -> None:
"""An OPTIONS requests to an known URL still returns 204 No Content."""
@@ -265,19 +307,17 @@ class OptionsResourceTests(unittest.TestCase):
self.assertEqual(channel.code, 204)
self.assertNotIn("body", channel.result)
# Ensure the correct CORS headers have been added
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Origin"),
"has CORS Origin header",
)
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Methods"),
"has CORS Methods header",
)
self.assertTrue(
channel.headers.hasHeader(b"Access-Control-Allow-Headers"),
"has CORS Headers header",
self._check_cors_standard_headers(channel)
def test_known_options_request_msc3886(self) -> None:
"""An OPTIONS requests to an known URL still returns 204 No Content."""
channel = self._make_request(
b"OPTIONS", b"/res/", experimental_cors_msc3886=True
)
self.assertEqual(channel.code, 204)
self.assertNotIn("body", channel.result)
self._check_cors_msc3886_headers(channel)
def test_unknown_request(self) -> None:
"""A non-OPTIONS request to an unknown URL should 404."""