Merge remote-tracking branch 'origin/develop' into dmr/pyproject-poetry
changelog.d/12449.misc (new file, 1 line)
@@ -0,0 +1 @@
Use `poetry` to manage the virtualenv in debian packages.

changelog.d/12455.misc (new file, 1 line)
@@ -0,0 +1 @@
Reintroduce the list of targets to the linter script, to avoid linting unwanted local-only directories during development.

changelog.d/12457.doc (new file, 1 line)
@@ -0,0 +1 @@
Update worker documentation and replace old `federation_reader` with `generic_worker`.

changelog.d/12465.feature (new file, 1 line)
@@ -0,0 +1 @@
Enable processing of device list updates asynchronously.

debian/build_virtualenv (vendored, 12 lines changed)

@@ -32,21 +32,16 @@ esac

# Manually install Poetry and export a pip-compatible `requirements.txt`
# We need a Poetry pre-release as the export command is buggy in < 1.2
if [ -e requirements.txt ]; then
    # Guard against a possible future where requirements.txt lives in the repo.
    # Otherwise, calling `poetry export` below would silently clobber it.
    echo "requirements.txt already exists; aborting" 1>&2
    exit 1
fi
TEMP_VENV="$(mktemp -d)"
python3 -m venv "$TEMP_VENV"
source "$TEMP_VENV/bin/activate"
pip install -U pip
pip install poetry==1.2.0b1
poetry export --extras "all test" -o requirements.txt
deactivate
rm -rf "$TEMP_VENV"

# Use --no-deps to only install pinned versions in exported_requirements.txt,
# and to avoid https://github.com/pypa/pip/issues/9644
dh_virtualenv \
    --install-suffix "matrix-synapse" \
    --builtin-venv \

@@ -55,10 +50,11 @@ dh_virtualenv \

    --preinstall="lxml" \
    --preinstall="mock" \
    --preinstall="wheel" \
    --extra-pip-arg="--no-deps" \
    --extra-pip-arg="--no-cache-dir" \
    --extra-pip-arg="--compile" \
    --extras="all,systemd,test" \
    --requirements="requirements.txt"
    --requirements="exported_requirements.txt"

PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"

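The `--no-deps` pip argument above is only safe because `poetry export` produces a fully pinned, lockfile-style requirements file. As a rough illustration only (not part of the packaging scripts; the helper name and check are assumptions), a fully pinned export could be sanity-checked like this:

```python
# Hypothetical sanity check: every requirement line in a poetry export should
# pin an exact version, otherwise installing with `pip --no-deps` could leave
# transitive dependencies unresolved. Not part of the Debian packaging.
from pathlib import Path


def assert_fully_pinned(path: str = "exported_requirements.txt") -> None:
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blanks, comments and hash continuation lines ("--hash=...").
        if not line or line.startswith(("#", "--")):
            continue
        requirement = line.split(";")[0]  # drop environment markers
        if "==" not in requirement and " @ " not in requirement:
            raise ValueError(f"unpinned requirement: {line}")


if __name__ == "__main__":
    assert_fully_pinned()
```
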
debian/clean (vendored, new file, 1 line)
@@ -0,0 +1 @@
exported_requirements.txt

@@ -6,55 +6,36 @@ The Synapse codebase uses a number of code formatting tools in order to

quickly and automatically check for formatting (and sometimes logical)
errors in code.

The necessary tools are detailed below. They should be installed by poetry as part
of Synapse's dev dependencies. See [the contributing guide](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies) for instructions.
The necessary tools are:

- **black**
- [black](https://black.readthedocs.io/en/stable/), a source code formatter;
- [isort](https://pycqa.github.io/isort/), which organises each file's imports;
- [flake8](https://flake8.pycqa.org/en/latest/), which can spot common errors; and
- [mypy](https://mypy.readthedocs.io/en/stable/), a type checker.

The Synapse codebase uses [black](https://pypi.org/project/black/)
as an opinionated code formatter, ensuring all comitted code is
properly formatted.
Install them with:

Have `black` auto-format your code (it shouldn't change any
functionality) with:
```sh
pip install -e ".[lint,mypy]"
```

```sh
black .
```
The easiest way to run the lints is to invoke the linter script as follows.

- **flake8**

  `flake8` is a code checking tool. We require code to pass `flake8`
  before being merged into the codebase.

  Check all application and test code with:

  ```sh
  flake8 .
  ```

- **isort**

  `isort` ensures imports are nicely formatted, and can suggest and
  auto-fix issues such as double-importing.

  Auto-fix imports with:

  ```sh
  isort .
  ```
```sh
scripts-dev/lint.sh
```

It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient
development workflow. It is not, however, recommended to run `flake8` on
save as it takes a while and is very resource intensive.
development workflow. It is not, however, recommended to run `flake8` or `mypy`
on save as they take a while and can be very resource intensive.

## General rules

- **Naming**:
  - Use camel case for class and type names
  - Use underscores for functions and variables.
  - Use `CamelCase` for class and type names
  - Use underscores for `function_names` and `variable_names`.
- **Docstrings**: should follow the [google code
  style](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings).
  See the

@@ -10,15 +10,15 @@ See the folder [system](https://github.com/matrix-org/synapse/tree/develop/docs/

for the systemd unit files.

The folder [workers](https://github.com/matrix-org/synapse/tree/develop/docs/systemd-with-workers/workers/)
contains an example configuration for the `federation_reader` worker.
contains an example configuration for the `generic_worker` worker.

## Synapse configuration files

See [the worker documentation](../workers.md) for information on how to set up the
configuration files and reverse-proxy correctly.
Below is a sample `federation_reader` worker configuration file.
Below is a sample `generic_worker` worker configuration file.
```yaml
{{#include workers/federation_reader.yaml}}
{{#include workers/generic_worker.yaml}}
```

Systemd manages daemonization itself, so ensure that none of the configuration

@@ -61,9 +61,9 @@ systemctl stop matrix-synapse.target

# Restart the master alone
systemctl start matrix-synapse.service

# Restart a specific worker (eg. federation_reader); the master is
# Restart a specific worker (eg. generic_worker); the master is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
systemctl restart matrix-synapse-worker@generic_worker.service

# Add a new worker (assuming all configs are set up already)
systemctl enable matrix-synapse-worker@federation_writer.service

docs/systemd-with-workers/workers/federation_reader.yaml (deleted, 13 lines)
@@ -1,13 +0,0 @@

worker_app: synapse.app.federation_reader
worker_name: federation_reader1

worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8011
    resources:
      - names: [federation]

worker_log_config: /etc/matrix-synapse/federation-reader-log.yaml

docs/systemd-with-workers/workers/generic_worker.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@

worker_app: synapse.app.generic_worker
worker_name: generic_worker1

worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8011
    resources:
      - names: [client, federation]

worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml

@@ -146,12 +146,10 @@ worker_replication_host: 127.0.0.1

worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 8083
    resources:
      - names:
          - client
          - federation
  - type: http
    port: 8083
    resources:
      - names: [client, federation]

worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
```

@@ -79,8 +79,20 @@ else

  # If we were not asked to lint changed files, and no paths were found as a result,
  # then lint everything!
  if [[ -z ${files+x} ]]; then
    # Lint all source code files and directories
    files=( "." )
    # CI runs each linter on the entire checkout, e.g. `black .`. So don't
    # rely on this list to *find* lint targets if that misses a file; instead;
    # use it to exclude files from linters when this can't be done by config.
    #
    # To check which files the linters examine, use:
    #     black --verbose . 2>&1 | \grep -v ignored
    #     isort --show-files .
    #     flake8 --verbose .  # This isn't a great option
    #     mypy has explicit config in mypy.ini; there is also mypy --verbose
    files=(
        "synapse" "docker" "tests"
        "scripts-dev"
        "contrib" "setup.py" "synmark" "stubs" ".ci"
    )
  fi
fi

@@ -649,9 +649,13 @@ class DeviceHandler(DeviceWorkerHandler):

            return

        for user_id, device_id, room_id, stream_id, opentracing_context in rows:
            joined_user_ids = await self.store.get_users_in_room(room_id)
            hosts = {get_domain_from_id(u) for u in joined_user_ids}
            hosts.discard(self.server_name)
            hosts = set()

            # Ignore any users that aren't ours
            if self.hs.is_mine_id(user_id):
                joined_user_ids = await self.store.get_users_in_room(room_id)
                hosts = {get_domain_from_id(u) for u in joined_user_ids}
                hosts.discard(self.server_name)

            # Check if we've already sent this update to some hosts
            if current_stream_id == stream_id:

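Because the hunk above interleaves the old and new versions of the loop, here is a minimal standalone sketch of the apparent post-change behaviour: device list updates only fan out to remote servers when the user is local. The helper name and the crude domain parsing are illustrative assumptions, not Synapse's actual API.

```python
from typing import Iterable, Set


def destinations_for_device_update(
    user_id: str, joined_user_ids: Iterable[str], local_server_name: str
) -> Set[str]:
    """Hypothetical helper mirroring the apparent intent of the change above:
    device list updates for users that are not ours produce no destinations."""
    if not user_id.endswith(":" + local_server_name):
        return set()  # not our user: nothing to send
    # Crude stand-in for get_domain_from_id: everything after the first colon.
    hosts = {uid.split(":", 1)[1] for uid in joined_user_ids}
    hosts.discard(local_server_name)
    return hosts


# A remote user's update fans out to nobody; a local user's goes to other servers.
assert destinations_for_device_update("@u:other.example", ["@a:hs1", "@b:hs2"], "hs1") == set()
assert destinations_for_device_update("@u:hs1", ["@a:hs1", "@b:hs2"], "hs1") == {"hs2"}
```
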
@@ -1703,7 +1703,9 @@ class DeviceStore(DeviceWorkerStore, DeviceBackgroundUpdateStore):

                    next(stream_id_iterator),
                    user_id,
                    device_id,
                    False,
                    not self.hs.is_mine_id(
                        user_id
                    ),  # We only need to send out update for *our* users
                    now,
                    encoded_context if whitelisted_homeserver(destination) else "{}",
                )

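The value that was previously hard-coded to `False` appears to be the flag recording whether the outbound poke still needs sending; with this change, rows for users that are not local start out already marked, so the sender skips them. A toy sketch under that assumption (hypothetical names, not Synapse's actual schema):

```python
# Toy illustration only: rows for remote users are written as already "sent",
# so the federation sender never picks them up.
def outbound_poke_row(user_id: str, device_id: str, local_server: str) -> dict:
    is_mine = user_id.endswith(":" + local_server)
    return {
        "user_id": user_id,
        "device_id": device_id,
        # Previously always False; now remote users start out as "already sent".
        "sent": not is_mine,
    }


assert outbound_poke_row("@alice:hs1", "D1", "hs1")["sent"] is False
assert outbound_poke_row("@bob:hs2", "D1", "hs1")["sent"] is True
```
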
@@ -162,7 +162,9 @@ class FederationSenderDevicesTestCases(HomeserverTestCase):

    def make_homeserver(self, reactor, clock):
        return self.setup_test_homeserver(
            federation_transport_client=Mock(spec=["send_transaction"]),
            federation_transport_client=Mock(
                spec=["send_transaction", "query_user_devices"]
            ),
        )

    def default_config(self):

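The spec list on the mocked transport client has to grow because `unittest.mock.Mock(spec=[...])` rejects attribute access outside the listed names; the new test below calls `query_user_devices`, which the old spec would have refused. A small self-contained demonstration:

```python
# With spec=[...], a Mock only exposes the listed attribute names; anything
# else raises AttributeError instead of silently returning a child mock.
from unittest.mock import Mock

old = Mock(spec=["send_transaction"])
new = Mock(spec=["send_transaction", "query_user_devices"])

new.query_user_devices.return_value = {"devices": []}  # allowed by the new spec

try:
    old.query_user_devices  # not in the old spec
except AttributeError:
    print("old spec rejects query_user_devices, as expected")
```
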
@@ -218,6 +220,45 @@ class FederationSenderDevicesTestCases(HomeserverTestCase):

        self.assertEqual(len(self.edus), 1)
        self.check_device_update_edu(self.edus.pop(0), u1, "D2", stream_id)

    def test_dont_send_device_updates_for_remote_users(self):
        """Check that we don't send device updates for remote users"""

        # Send the server a device list EDU for the other user, this will cause
        # it to try and resync the device lists.
        self.hs.get_federation_transport_client().query_user_devices.return_value = (
            defer.succeed(
                {
                    "stream_id": "1",
                    "user_id": "@user2:host2",
                    "devices": [{"device_id": "D1"}],
                }
            )
        )

        self.get_success(
            self.hs.get_device_handler().device_list_updater.incoming_device_list_update(
                "host2",
                {
                    "user_id": "@user2:host2",
                    "device_id": "D1",
                    "stream_id": "1",
                    "prev_ids": [],
                },
            )
        )

        self.reactor.advance(1)

        # We shouldn't see an EDU for that update
        self.assertEqual(self.edus, [])

        # Check that we did successfully process the inbound EDU (otherwise this
        # test would pass if we failed to process the EDU)
        devices = self.get_success(
            self.hs.get_datastores().main.get_cached_devices_for_user("@user2:host2")
        )
        self.assertIn("D1", devices)

    def test_upload_signatures(self):
        """Uploading signatures on some devices should produce updates for that user"""

@@ -118,7 +118,7 @@ class DeviceStoreTestCase(HomeserverTestCase):

        device_ids = ["device_id1", "device_id2"]

        # Add two device updates with sequential `stream_id`s
        self.add_device_change("user_id", device_ids, "somehost")
        self.add_device_change("@user_id:test", device_ids, "somehost")

        # Get all device updates ever meant for this remote
        now_stream_id, device_updates = self.get_success(

@@ -142,7 +142,7 @@ class DeviceStoreTestCase(HomeserverTestCase):

            "device_id4",
            "device_id5",
        ]
        self.add_device_change("user_id", device_ids, "somehost")
        self.add_device_change("@user_id:test", device_ids, "somehost")

        # Get device updates meant for this remote
        next_stream_id, device_updates = self.get_success(

@@ -162,7 +162,7 @@ class DeviceStoreTestCase(HomeserverTestCase):

        # Add some more device updates to ensure it still resumes properly
        device_ids = ["device_id6", "device_id7"]
        self.add_device_change("user_id", device_ids, "somehost")
        self.add_device_change("@user_id:test", device_ids, "somehost")

        # Get the next batch of device updates
        next_stream_id, device_updates = self.get_success(
