Compare commits
12 Commits
rei/moh-co ... anoa/regre

| Author | SHA1 | Date |
|---|---|---|
| | b9c5d133b1 | |
| | b7041386a4 | |
| | 4ec0a309cf | |
| | 3ba9389699 | |
| | 422e33fabf | |
| | 867443472c | |
| | b602ba194b | |
| | 22abfca8d9 | |
| | 1b1aed38e3 | |
| | 2185b28184 | |
| | 7fe7c45438 | |
| | e87540abb1 | |

CHANGES.md (37 changes)
@@ -1,8 +1,38 @@
Synapse 1.50.0rc1 (2022-01-05)
==============================
Synapse 1.50.0 (2022-01-18)
===========================

Please note that we now only support Python 3.7+ and PostgreSQL 10+ (if applicable), because Python 3.6 and PostgreSQL 9.6 have reached end-of-life.

No significant changes since 1.50.0rc2.


Synapse 1.50.0rc2 (2022-01-14)
==============================

This release candidate fixes a federation-breaking regression introduced in Synapse 1.50.0rc1.

Bugfixes
--------

- Fix a bug introduced in Synapse v1.0.0 whereby some device list updates would not be sent to remote homeservers if there were too many to send at once. ([\#11729](https://github.com/matrix-org/synapse/issues/11729))
- Fix a bug introduced in Synapse v1.50.0rc1 whereby outbound federation could fail because too many EDUs were produced for device updates. ([\#11730](https://github.com/matrix-org/synapse/issues/11730))


Improved Documentation
----------------------

- Document that now the minimum supported PostgreSQL version is 10. ([\#11725](https://github.com/matrix-org/synapse/issues/11725))


Internal Changes
----------------

- Fix a typechecker problem related to our (ab)use of `nacl.signing.SigningKey`s. ([\#11714](https://github.com/matrix-org/synapse/issues/11714))


Synapse 1.50.0rc1 (2022-01-05)
==============================


Features
--------
@@ -42,6 +72,7 @@ Deprecations and Removals
-------------------------

- Replace `mock` package by its standard library version. ([\#11588](https://github.com/matrix-org/synapse/issues/11588))
- Drop support for Python 3.6 and Ubuntu 18.04. ([\#11633](https://github.com/matrix-org/synapse/issues/11633))


Internal Changes
@@ -77,13 +108,13 @@ Internal Changes
- Improve OpenTracing support for requests which use a `ResponseCache`. ([\#11607](https://github.com/matrix-org/synapse/issues/11607))
- Improve OpenTracing support for incoming HTTP requests. ([\#11618](https://github.com/matrix-org/synapse/issues/11618))
- A number of improvements to opentracing support. ([\#11619](https://github.com/matrix-org/synapse/issues/11619))
- Drop support for Python 3.6 and Ubuntu 18.04. ([\#11633](https://github.com/matrix-org/synapse/issues/11633))
- Refactor the way that the `outlier` flag is set on events received over federation. ([\#11634](https://github.com/matrix-org/synapse/issues/11634))
- Improve the error messages from `get_create_event_for_room`. ([\#11638](https://github.com/matrix-org/synapse/issues/11638))
- Remove redundant `get_current_events_token` method. ([\#11643](https://github.com/matrix-org/synapse/issues/11643))
- Convert `namedtuples` to `attrs`. ([\#11665](https://github.com/matrix-org/synapse/issues/11665), [\#11574](https://github.com/matrix-org/synapse/issues/11574))
- Update the `/capabilities` response to include whether support for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440) is available. ([\#11690](https://github.com/matrix-org/synapse/issues/11690))
- Send the `Accept` header in HTTP requests made using `SimpleHttpClient.get_json`. ([\#11677](https://github.com/matrix-org/synapse/issues/11677))
- Work around Mjolnir compatibility issue by adding an import for `glob_to_regex` in `synapse.util`, where it moved from. ([\#11696](https://github.com/matrix-org/synapse/issues/11696))


Synapse 1.49.2 (2021-12-21)
@@ -1 +0,0 @@
Fix a performance regression in `/sync` handling, introduced in 1.49.0.
@@ -1 +0,0 @@
Work around Mjolnir compatibility issue by adding an import for `glob_to_regex` in `synapse.util`, where it moved from.

changelog.d/11765.misc (new file, 1 change)
@@ -0,0 +1 @@
Add a unit test that checks both `client` and `webclient` resources will function when simultaneously enabled.

debian/changelog (vendored, 12 changes)
@@ -1,3 +1,15 @@
matrix-synapse-py3 (1.50.0) stable; urgency=medium

  * New synapse release 1.50.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 18 Jan 2022 10:40:38 +0000

matrix-synapse-py3 (1.50.0~rc2) stable; urgency=medium

  * New synapse release 1.50.0~rc2.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 14 Jan 2022 11:18:06 +0000

matrix-synapse-py3 (1.50.0~rc1) stable; urgency=medium

  * New synapse release 1.50.0~rc1.
@@ -1,6 +1,6 @@
# Using Postgres

Synapse supports PostgreSQL versions 9.6 or later.
Synapse supports PostgreSQL versions 10 or later.

## Install postgres client libraries
@@ -47,7 +47,7 @@ try:
except ImportError:
    pass

__version__ = "1.50.0rc1"
__version__ = "1.50.0"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when
@@ -277,7 +277,7 @@ class MessageHandler:
# If this is an AS, double check that they are allowed to see the members.
# This can either be because the AS user is in the room or because there
# is a user in the room that the AS is "interested in"
if False and requester.app_service and user_id not in users_with_profile:
if requester.app_service and user_id not in users_with_profile:
    for uid in users_with_profile:
        if requester.app_service.is_interested_in_user(uid):
            break
@@ -82,7 +82,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
self.event_auth_handler = hs.get_event_auth_handler()

self.member_linearizer: Linearizer = Linearizer(name="member")
self.member_limiter = Linearizer(max_count=10, name="member_as_limiter")

self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
@@ -483,43 +482,24 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)

as_id = object()
if requester.app_service:
    as_id = requester.app_service.id

then = self.clock.time_msec()

with (await self.member_limiter.queue(as_id)):
    diff = self.clock.time_msec() - then

    if diff > 80 * 1000:
        # haproxy would have timed the request out anyway...
        raise SynapseError(504, "took to long to process")

    with (await self.member_linearizer.queue(key)):
        diff = self.clock.time_msec() - then

        if diff > 80 * 1000:
            # haproxy would have timed the request out anyway...
            raise SynapseError(504, "took to long to process")

        result = await self.update_membership_locked(
            requester,
            target,
            room_id,
            action,
            txn_id=txn_id,
            remote_room_hosts=remote_room_hosts,
            third_party_signed=third_party_signed,
            ratelimit=ratelimit,
            content=content,
            new_room=new_room,
            require_consent=require_consent,
            outlier=outlier,
            historical=historical,
            prev_event_ids=prev_event_ids,
            auth_event_ids=auth_event_ids,
        )
with (await self.member_linearizer.queue(key)):
    result = await self.update_membership_locked(
        requester,
        target,
        room_id,
        action,
        txn_id=txn_id,
        remote_room_hosts=remote_room_hosts,
        third_party_signed=third_party_signed,
        ratelimit=ratelimit,
        content=content,
        new_room=new_room,
        require_consent=require_consent,
        outlier=outlier,
        historical=historical,
        prev_event_ids=prev_event_ids,
        auth_event_ids=auth_event_ids,
    )

return result
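The surviving code serializes all membership changes for a room through `member_linearizer`, entered with `with (await ...queue(key))` as shown above. As a rough illustration of the idea only, here is a toy stand-in built on `asyncio` (not Synapse's actual `Linearizer`): callers queueing on the same key run strictly one after another, while different keys proceed independently.

```python
import asyncio
from collections import defaultdict
from contextlib import asynccontextmanager


class SimpleLinearizer:
    """Toy stand-in for synapse.util.async_helpers.Linearizer:
    callers queueing on the same key run one at a time."""

    def __init__(self, name: str) -> None:
        self.name = name
        self._locks: dict = defaultdict(asyncio.Lock)

    @asynccontextmanager
    async def queue(self, key):
        async with self._locks[key]:
            yield


async def update_membership(linearizer: SimpleLinearizer, room_id: str) -> None:
    # Mirrors the shape of the code above: all membership updates for one
    # room are funnelled through the same queue entry.
    async with linearizer.queue((room_id,)):
        await asyncio.sleep(0.01)  # stand-in for update_membership_locked(...)


asyncio.run(update_membership(SimpleLinearizer(name="member"), "!room:example.org"))
```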
@@ -104,11 +104,6 @@ class HttpPusher(Pusher):
    "'url' must have a path of '/_matrix/push/v1/notify'"
)

url = url.replace(
    "https://matrix.org/_matrix/push/v1/notify",
    "http://10.103.0.7/_matrix/push/v1/notify",
)

self.url = url
self.http_client = hs.get_proxied_blacklisted_http_client()
self.data_minus_url = {}
@@ -50,7 +50,6 @@ from synapse.logging.context import (
    current_context,
    make_deferred_yieldable,
)
from synapse.logging.opentracing import trace
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.background_updates import BackgroundUpdater
@@ -105,20 +104,8 @@ def make_pool(
# Ensure we have a logging context so we can correctly track queries,
# etc.
with LoggingContext("db.on_new_connection"):
    # HACK Patch the connection's commit function so that we can see
    # how long it's taking from Jaeger.
    class NastyConnectionWrapper:
        def __init__(self, connection):
            self._connection = connection
            self.commit = trace(connection.commit, "db.conn.commit")

        def __getattr__(self, item):
            return getattr(self._connection, item)

    engine.on_new_connection(
        LoggingDatabaseConnection(
            NastyConnectionWrapper(conn), engine, "on_new_connection"
        )
        LoggingDatabaseConnection(conn, engine, "on_new_connection")
    )

connection_pool = adbapi.ConnectionPool(
@@ -37,7 +37,7 @@ logger = logging.getLogger(__name__)
# Number of msec of granularity to store the user IP 'last seen' time. Smaller
# times give more inserts into the database even for readonly API hits
# 120 seconds == 2 minutes
LAST_SEEN_GRANULARITY = 10 * 60 * 1000
LAST_SEEN_GRANULARITY = 120 * 1000


class DeviceLastConnectionInfo(TypedDict):
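The granularity constant controls how often the user-IP "last seen" time is rewritten: larger values mean fewer inserts for read-only API hits. A minimal sketch of that throttling idea (illustrative only; the in-memory cache and helper below are invented, not Synapse's `client_ips` code):

```python
import time

LAST_SEEN_GRANULARITY = 120 * 1000  # ms; matches the value restored above

# Hypothetical in-memory record of the last write per (user_id, ip) pair.
_last_written: dict = {}


def maybe_record_last_seen(user_id: str, ip: str) -> bool:
    """Return True if a database write should happen for this request.

    Writes are skipped while the previously stored 'last seen' time is
    within LAST_SEEN_GRANULARITY of now, so read-only API hits do not
    trigger an insert on every single request.
    """
    now_ms = int(time.time() * 1000)
    previous = _last_written.get((user_id, ip), 0)
    if now_ms - previous < LAST_SEEN_GRANULARITY:
        return False
    _last_written[(user_id, ip)] = now_ms
    return True
```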
@@ -191,7 +191,7 @@ class DeviceWorkerStore(SQLBaseStore):
@trace
async def get_device_updates_by_remote(
    self, destination: str, from_stream_id: int, limit: int
) -> Tuple[int, List[Tuple[str, dict]]]:
) -> Tuple[int, List[Tuple[str, JsonDict]]]:
    """Get a stream of device updates to send to the given remote server.

    Args:
@@ -200,9 +200,10 @@ class DeviceWorkerStore(SQLBaseStore):
        limit: Maximum number of device updates to return

    Returns:
        A mapping from the current stream id (ie, the stream id of the last
        update included in the response), and the list of updates, where
        each update is a pair of EDU type and EDU contents.
        - The current stream id (i.e. the stream id of the last update included
          in the response); and
        - The list of updates, where each update is a pair of EDU type and
          EDU contents.
    """
    now_stream_id = self.get_device_stream_token()
@@ -221,6 +222,9 @@ class DeviceWorkerStore(SQLBaseStore):
        limit,
    )

    # We need to ensure `updates` doesn't grow too big.
    # Currently: `len(updates) <= limit`.

    # Return an empty list if there are no updates
    if not updates:
        return now_stream_id, []
@@ -270,19 +274,50 @@ class DeviceWorkerStore(SQLBaseStore):
    # The most recent request's opentracing_context is used as the
    # context which created the Edu.

    # This is the stream ID that we will return for the consumer to resume
    # following this stream later.
    last_processed_stream_id = from_stream_id

    query_map = {}
    cross_signing_keys_by_user = {}
    for user_id, device_id, update_stream_id, update_context in updates:
        if (
        # Calculate the remaining length budget.
        # Note that, for now, each entry in `cross_signing_keys_by_user`
        # gives rise to two device updates in the result, so those cost twice
        # as much (and are the whole reason we need to separately calculate
        # the budget; we know len(updates) <= limit otherwise!)
        # N.B. len() on dicts is cheap since they store their size.
        remaining_length_budget = limit - (
            len(query_map) + 2 * len(cross_signing_keys_by_user)
        )
        assert remaining_length_budget >= 0

        is_master_key_update = (
            user_id in master_key_by_user
            and device_id == master_key_by_user[user_id]["device_id"]
        ):
            result = cross_signing_keys_by_user.setdefault(user_id, {})
            result["master_key"] = master_key_by_user[user_id]["key_info"]
        elif (
        )
        is_self_signing_key_update = (
            user_id in self_signing_key_by_user
            and device_id == self_signing_key_by_user[user_id]["device_id"]
        )

        is_cross_signing_key_update = (
            is_master_key_update or is_self_signing_key_update
        )

        if (
            is_cross_signing_key_update
            and user_id not in cross_signing_keys_by_user
        ):
            # This will give rise to 2 device updates.
            # If we don't have the budget, stop here!
            if remaining_length_budget < 2:
                break

            if is_master_key_update:
                result = cross_signing_keys_by_user.setdefault(user_id, {})
                result["master_key"] = master_key_by_user[user_id]["key_info"]
            elif is_self_signing_key_update:
                result = cross_signing_keys_by_user.setdefault(user_id, {})
                result["self_signing_key"] = self_signing_key_by_user[user_id][
                    "key_info"
@@ -290,24 +325,47 @@ class DeviceWorkerStore(SQLBaseStore):
        else:
            key = (user_id, device_id)

            if key not in query_map and remaining_length_budget < 1:
                # We don't have space for a new entry
                break

            previous_update_stream_id, _ = query_map.get(key, (0, None))

            if update_stream_id > previous_update_stream_id:
                # FIXME If this overwrites an older update, this discards the
                # previous OpenTracing context.
                # It might make it harder to track down issues using OpenTracing.
                # If there's a good reason why it doesn't matter, a comment here
                # about that would not hurt.
                query_map[key] = (update_stream_id, update_context)

        # As this update has been added to the response, advance the stream
        # position.
        last_processed_stream_id = update_stream_id

    # In the worst case scenario, each update is for a distinct user and is
    # added either to the query_map or to cross_signing_keys_by_user,
    # but not both:
    # len(query_map) + len(cross_signing_keys_by_user) <= len(updates) here,
    # so len(query_map) + len(cross_signing_keys_by_user) <= limit.

    results = await self._get_device_update_edus_by_remote(
        destination, from_stream_id, query_map
    )

    # add the updated cross-signing keys to the results list
    # len(results) <= len(query_map) here,
    # so len(results) + len(cross_signing_keys_by_user) <= limit.

    # Add the updated cross-signing keys to the results list
    for user_id, result in cross_signing_keys_by_user.items():
        result["user_id"] = user_id
        results.append(("m.signing_key_update", result))
        # also send the unstable version
        # FIXME: remove this when enough servers have upgraded
        # and remove the length budgeting above.
        results.append(("org.matrix.signing_key_update", result))

    return now_stream_id, results
    return last_processed_stream_id, results
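The length budgeting above guarantees the returned batch never exceeds `limit` EDUs, even though each cross-signing user contributes two entries (stable plus unstable-prefixed). A standalone sketch of the same arithmetic (a hypothetical helper for illustration, not the store method itself):

```python
from typing import Dict, List, Tuple


def budget_device_updates(
    updates: List[Tuple[str, str]],  # (user_id, device_id) in stream order
    cross_signing_device_ids: Dict[str, str],  # user_id -> cross-signing "device_id"
    limit: int,
) -> Tuple[Dict[Tuple[str, str], None], Dict[str, None]]:
    """Split updates into plain device updates and cross-signing updates,
    stopping as soon as emitting more EDUs would exceed `limit`.

    Each cross-signing user produces *two* EDUs, hence the factor of 2 in
    the budget, mirroring the diff above.
    """
    query_map: Dict[Tuple[str, str], None] = {}
    cross_signing_users: Dict[str, None] = {}

    for user_id, device_id in updates:
        remaining = limit - (len(query_map) + 2 * len(cross_signing_users))
        assert remaining >= 0

        if cross_signing_device_ids.get(user_id) == device_id:
            if user_id not in cross_signing_users:
                if remaining < 2:
                    break  # a pair of signing-key EDUs would not fit
                cross_signing_users[user_id] = None
        else:
            key = (user_id, device_id)
            if key not in query_map:
                if remaining < 1:
                    break
                query_map[key] = None

    # Invariant: len(query_map) + 2 * len(cross_signing_users) <= limit
    return query_map, cross_signing_users


qm, cs = budget_device_updates(
    [("@a:hs", "DEV1"), ("@a:hs", "master_key_id"), ("@b:hs", "DEV2")],
    {"@a:hs": "master_key_id"},
    limit=3,
)
assert len(qm) + 2 * len(cs) <= 3
```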
def _get_device_updates_by_remote_txn(
    self,
@@ -316,7 +374,7 @@ class DeviceWorkerStore(SQLBaseStore):
    from_stream_id: int,
    now_stream_id: int,
    limit: int,
):
) -> List[Tuple[str, str, int, Optional[str]]]:
    """Return device update information for a given remote destination

    Args:
@@ -327,7 +385,11 @@ class DeviceWorkerStore(SQLBaseStore):
        limit: Maximum number of device updates to return

    Returns:
        List: List of device updates
        List: List of device update tuples:
         - user_id
         - device_id
         - stream_id
         - opentracing_context
    """
    # get the list of device updates that need to be sent
    sql = """
@@ -351,15 +413,21 @@ class DeviceWorkerStore(SQLBaseStore):
    Args:
        destination: The host the device updates are intended for
        from_stream_id: The minimum stream_id to filter updates by, exclusive
        query_map (Dict[(str, str): (int, str|None)]): Dictionary mapping
            user_id/device_id to update stream_id and the relevant json-encoded
            opentracing context
        query_map: Dictionary mapping (user_id, device_id) to
            (update stream_id, the relevant json-encoded opentracing context)

    Returns:
        List of objects representing an device update EDU
        List of objects representing a device update EDU.

        Postconditions:
            The returned list has a length not exceeding that of the query_map:
            len(result) <= len(query_map)
    """
    devices = (
        await self.get_e2e_device_keys_and_signatures(
            # Because these are (user_id, device_id) tuples with all
            # device_ids not being None, the returned list's length will not
            # exceed that of query_map.
            query_map.keys(),
            include_all_devices=True,
            include_deleted_devices=True,
@@ -744,7 +744,7 @@ def _parse_query(database_engine, search_term):
results = re.findall(r"([\w\-]+)", search_term, re.UNICODE)

if isinstance(database_engine, PostgresEngine):
    return " & ".join(result for result in results)
    return " & ".join(result + ":*" for result in results)
elif isinstance(database_engine, Sqlite3Engine):
    return " & ".join(result + "*" for result in results)
else:
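On the PostgreSQL side, appending `:*` to each token asks `to_tsquery` for prefix matching, mirroring the trailing `*` already used for SQLite FTS. A small illustrative sketch of the resulting query string (a standalone helper, not Synapse's `_parse_query` itself):

```python
import re


def parse_user_search_query(search_term: str, postgres: bool = True) -> str:
    """Turn free text into a full-text-search query string.

    For PostgreSQL this produces input for to_tsquery(), where `foo:*`
    matches any lexeme starting with "foo"; for SQLite FTS the trailing
    `*` plays the same prefix-matching role.
    """
    tokens = re.findall(r"([\w\-]+)", search_term, re.UNICODE)
    suffix = ":*" if postgres else "*"
    return " & ".join(token + suffix for token in tokens)


# "bob mat" should now match a display name such as "Matthew", not just exact words.
assert parse_user_search_query("bob mat") == "bob:* & mat:*"
```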
@@ -14,6 +14,7 @@
import nacl.signing
import signedjson.types
from unpaddedbase64 import decode_base64

from synapse.api.room_versions import RoomVersions
@@ -35,7 +36,12 @@ HOSTNAME = "domain"
class EventSigningTestCase(unittest.TestCase):
    def setUp(self):
        self.signing_key = nacl.signing.SigningKey(SIGNING_KEY_SEED)
        # NB: `signedjson` expects `nacl.signing.SigningKey` instances which have been
        # monkeypatched to include new `alg` and `version` attributes. This is captured
        # by the `signedjson.types.SigningKey` protocol.
        self.signing_key: signedjson.types.SigningKey = nacl.signing.SigningKey(
            SIGNING_KEY_SEED
        )
        self.signing_key.alg = KEY_ALG
        self.signing_key.version = KEY_VER
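For context, `signedjson` only requires a key object exposing `alg`, `version`, and a `sign()` method, which is why attaching the two extra attributes to a `nacl.signing.SigningKey` satisfies its `SigningKey` protocol. A rough sketch of that usage (assumed typical `signedjson` calls, with a made-up seed; this is not the test above):

```python
import nacl.signing
import signedjson.key
import signedjson.sign

# A throwaway 32-byte seed; in the real test this comes from SIGNING_KEY_SEED.
seed = b"\x00" * 32

# Plain nacl key, plus the two attributes signedjson's SigningKey protocol expects.
signing_key = nacl.signing.SigningKey(seed)
signing_key.alg = "ed25519"
signing_key.version = "1"

# Sign a JSON object the same way Synapse signs events and federation requests.
signed = signedjson.sign.sign_json({"foo": "bar"}, "domain", signing_key)
print(signed["signatures"]["domain"])  # {"ed25519:1": "<base64 signature>"}

# Verification uses the corresponding verify key.
verify_key = signedjson.key.get_verify_key(signing_key)
signedjson.sign.verify_signed_json(signed, "domain", verify_key)
```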
tests/http/test_webclient.py (new file, 106 lines)
@@ -0,0 +1,106 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict

from twisted.web.resource import Resource

from synapse.app.homeserver import SynapseHomeServer
from synapse.config.server import HttpListenerConfig, HttpResourceConfig, ListenerConfig
from synapse.http.site import SynapseSite

from tests.server import make_request
from tests.unittest import HomeserverTestCase, create_resource_tree, override_config


class WebClientTests(HomeserverTestCase):
    @override_config(
        {
            "web_client_location": "https://example.org",
        }
    )
    def test_webclient_resolves_with_client_resource(self):
        """
        Tests that both client and webclient resources can be accessed simultaneously.

        This is a regression test created in response to https://github.com/matrix-org/synapse/issues/11763.
        """
        for resource_name_order_list in [
            ["webclient", "client"],
            ["client", "webclient"],
        ]:
            # Create a dictionary from path regex -> resource
            resource_dict: Dict[str, Resource] = {}

            for resource_name in resource_name_order_list:
                resource_dict.update(
                    SynapseHomeServer._configure_named_resource(self.hs, resource_name)
                )

            # Create a root resource which ties the above resources together into one
            root_resource = Resource()
            create_resource_tree(resource_dict, root_resource)

            # Create a site configured with this resource to make HTTP requests against
            listener_config = ListenerConfig(
                port=8008,
                bind_addresses=["127.0.0.1"],
                type="http",
                http_options=HttpListenerConfig(
                    resources=[HttpResourceConfig(names=resource_name_order_list)]
                ),
            )
            test_site = SynapseSite(
                logger_name="synapse.access.http.fake",
                site_tag=self.hs.config.server.server_name,
                config=listener_config,
                resource=root_resource,
                server_version_string="1",
                max_request_body_size=1234,
                reactor=self.reactor,
            )

            # Attempt to make requests to endpoints on both the webclient and client resources
            # on test_site.
            self._request_client_and_webclient_resources(test_site)

    def _request_client_and_webclient_resources(self, test_site: SynapseSite) -> None:
        """Make a request to an endpoint on both the webclient and client-server resources
        of the given SynapseSite.

        Args:
            test_site: The SynapseSite object to make requests against.
        """

        # Ensure that the *webclient* resource is behaving as expected (we get redirected to
        # the configured web_client_location)
        channel = make_request(
            self.reactor,
            site=test_site,
            method="GET",
            path="/_matrix/client",
        )
        self.assertEqual(channel.code, 302)
        self.assertEqual(
            channel.headers.getRawHeaders("Location"), ["https://example.org"]
        )

        # Ensure that a request to the *client* resource works.
        channel = make_request(
            self.reactor,
            site=test_site,
            method="GET",
            path="/_matrix/client/v3/login",
        )
        self.assertEqual(channel.code, 200)
        self.assertIn("flows", channel.json_body)
@@ -94,7 +94,7 @@ class DeviceStoreTestCase(HomeserverTestCase):
def test_get_device_updates_by_remote(self):
    device_ids = ["device_id1", "device_id2"]

    # Add two device updates with a single stream_id
    # Add two device updates with sequential `stream_id`s
    self.get_success(
        self.store.add_device_change_to_streams("user_id", device_ids, ["somehost"])
    )
@@ -107,6 +107,164 @@ class DeviceStoreTestCase(HomeserverTestCase):
    # Check original device_ids are contained within these updates
    self._check_devices_in_updates(device_ids, device_updates)

def test_get_device_updates_by_remote_can_limit_properly(self):
    """
    Tests that `get_device_updates_by_remote` returns an appropriate
    stream_id to resume fetching from (without skipping any results).
    """

    # Add some device updates with sequential `stream_id`s
    device_ids = [
        "device_id1",
        "device_id2",
        "device_id3",
        "device_id4",
        "device_id5",
    ]
    self.get_success(
        self.store.add_device_change_to_streams("user_id", device_ids, ["somehost"])
    )

    # Get device updates meant for this remote
    next_stream_id, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", -1, limit=3)
    )

    # Check the first three original device_ids are contained within these updates
    self._check_devices_in_updates(device_ids[:3], device_updates)

    # Get the next batch of device updates
    next_stream_id, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", next_stream_id, limit=3)
    )

    # Check the last two original device_ids are contained within these updates
    self._check_devices_in_updates(device_ids[3:], device_updates)

    # Add some more device updates to ensure it still resumes properly
    device_ids = ["device_id6", "device_id7"]
    self.get_success(
        self.store.add_device_change_to_streams("user_id", device_ids, ["somehost"])
    )

    # Get the next batch of device updates
    next_stream_id, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", next_stream_id, limit=3)
    )

    # Check the newly-added device_ids are contained within these updates
    self._check_devices_in_updates(device_ids, device_updates)

    # Check there are no more device updates left.
    _, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", next_stream_id, limit=3)
    )
    self.assertEqual(device_updates, [])

def test_get_device_updates_by_remote_cross_signing_key_updates(
    self,
) -> None:
    """
    Tests that `get_device_updates_by_remote` limits the length of the return value
    properly when cross-signing key updates are present.
    Current behaviour is that the cross-signing key updates will always come in pairs,
    even if that means leaving an earlier batch one EDU short of the limit.
    """

    assert self.hs.is_mine_id(
        "@user_id:test"
    ), "Test not valid: this MXID should be considered local"

    self.get_success(
        self.store.set_e2e_cross_signing_key(
            "@user_id:test",
            "master",
            {
                "keys": {
                    "ed25519:fakeMaster": "aaafakefakefake1AAAAAAAAAAAAAAAAAAAAAAAAAAA="
                },
                "signatures": {
                    "@user_id:test": {
                        "ed25519:fake2": "aaafakefakefake2AAAAAAAAAAAAAAAAAAAAAAAAAAA="
                    }
                },
            },
        )
    )
    self.get_success(
        self.store.set_e2e_cross_signing_key(
            "@user_id:test",
            "self_signing",
            {
                "keys": {
                    "ed25519:fakeSelfSigning": "aaafakefakefake3AAAAAAAAAAAAAAAAAAAAAAAAAAA="
                },
                "signatures": {
                    "@user_id:test": {
                        "ed25519:fake4": "aaafakefakefake4AAAAAAAAAAAAAAAAAAAAAAAAAAA="
                    }
                },
            },
        )
    )

    # Add some device updates with sequential `stream_id`s
    # Note that the public cross-signing keys occupy the same space as device IDs,
    # so also notify that those have updated.
    device_ids = [
        "device_id1",
        "device_id2",
        "fakeMaster",
        "fakeSelfSigning",
    ]

    self.get_success(
        self.store.add_device_change_to_streams(
            "@user_id:test", device_ids, ["somehost"]
        )
    )

    # Get device updates meant for this remote
    next_stream_id, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", -1, limit=3)
    )

    # Here we expect the device updates for `device_id1` and `device_id2`.
    # That means we only receive 2 updates this time around.
    # If we had a higher limit, we would expect to see the pair of
    # (unstable-prefixed & unprefixed) signing key updates for the device
    # represented by `fakeMaster` and `fakeSelfSigning`.
    # Our implementation only sends these two variants together, so we get
    # a short batch.
    self.assertEqual(len(device_updates), 2, device_updates)

    # Check the first two devices (device_id1, device_id2) came out.
    self._check_devices_in_updates(device_ids[:2], device_updates)

    # Get more device updates meant for this remote
    next_stream_id, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", next_stream_id, limit=3)
    )

    # The next 2 updates should be a cross-signing key update
    # (the master key update and the self-signing key update are combined into
    # one 'signing key update', but the cross-signing key update is emitted
    # twice, once with an unprefixed type and once again with an unstable-prefixed type)
    # (This is a temporary arrangement for backwards compatibility!)
    self.assertEqual(len(device_updates), 2, device_updates)
    self.assertEqual(
        device_updates[0][0], "m.signing_key_update", device_updates[0]
    )
    self.assertEqual(
        device_updates[1][0], "org.matrix.signing_key_update", device_updates[1]
    )

    # Check there are no more device updates left.
    _, device_updates = self.get_success(
        self.store.get_device_updates_by_remote("somehost", next_stream_id, limit=3)
    )
    self.assertEqual(device_updates, [])

def _check_devices_in_updates(self, expected_device_ids, device_updates):
    """Check that an specific device ids exist in a list of device update EDUs"""
    self.assertEqual(len(device_updates), len(expected_device_ids))