
Compare commits


35 Commits

Author SHA1 Message Date
Erik Johnston 07fb52b5d5 event_rs stuff 2022-03-11 18:10:36 +00:00
Shay 26211fec24 Fix a bug in background updates wherein background updates are never run using the default batch size (#12157) 2022-03-07 09:44:33 -08:00
Patrick Cloke f63bedef07 Invalidate caches when an event with a relation is redacted. (#12121)
The caches for the target of the relation must be cleared
so that the bundled aggregations are re-calculated after
the redaction is processed.
2022-03-07 14:00:05 +00:00
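A minimal sketch of the idea behind this fix, assuming a cache keyed by the relation target's event ID (the helper names below are hypothetical, not Synapse's actual API):

```python
def on_redaction_processed(redacted_event: dict, aggregation_cache) -> None:
    # Matrix relations live under content["m.relates_to"]; the target of
    # the relation is the event whose bundled aggregations are now stale.
    relation = redacted_event.get("content", {}).get("m.relates_to")
    if relation and "event_id" in relation:
        # Hypothetical cache: dropping the entry forces the bundled
        # aggregations to be re-calculated on the next read.
        aggregation_cache.invalidate(relation["event_id"])
```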
Richard van der Hoff 0211f18d65 Switch the tests-done job to an Action (#12161)
I've factored it out for easier use in other workflows.
2022-03-07 12:24:06 +00:00
Richard van der Hoff 00a67f831a Merge remote-tracking branch 'origin/release-v1.54' into develop 2022-03-04 22:40:51 +00:00
David Robertson d2ef1a79cf Relax version guard for packaging (#12166)
It’s just occurred to me that #12088 pulled in the “packaging” package (~=21.3). I pulled in the newest version I had at the time.

I only use it for packaging.requirements.Requirement, which was added in packaging 16.1: https://github.com/pypa/packaging/releases/tag/16.1

https://pkgs.org/download/python3-packaging suggests that the oldest version we care about is 17.1 in Ubuntu Bionic. So I think with this bound we're hunky dory.
2022-03-04 22:40:24 +00:00
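For reference, `packaging.requirements.Requirement` parses PEP 508 dependency strings and has been available since packaging 16.1, which is why the lower bound can be relaxed:

```python
from packaging.requirements import Requirement

req = Requirement("packaging~=21.3")
print(req.name)       # packaging
print(req.specifier)  # ~=21.3
```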
Erik Johnston 0752ab7a36 Reduce to-device queries for /sync. (#12163) 2022-03-04 17:57:27 +00:00
Sean Quah 75574726a7 Add type hints for ObservableDeferred attributes (#12159)
Signed-off-by: Sean Quah <seanq@element.io>
2022-03-04 15:37:02 +00:00
Sean Quah 158e0937eb Add test for ObservableDeferred's cancellation behaviour (#12149)
Signed-off-by: Sean Quah <seanq@element.io>
2022-03-04 13:10:05 +00:00
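A hedged sketch of the behaviour under test, assuming the `synapse.util.async_helpers` location and the `observe()` API:

```python
from twisted.internet import defer

from synapse.util.async_helpers import ObservableDeferred

underlying = defer.Deferred()
observable = ObservableDeferred(underlying)

# Each caller gets its own Deferred from observe(), so one consumer
# cannot steal (or cancel) the result out from under the others.
first = observable.observe()
second = observable.observe()
first.addCallback(lambda r: print("first saw", r))
second.addCallback(lambda r: print("second saw", r))

underlying.callback("result")  # both observers fire with "result"
```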
Patrick Cloke cd1ae3d0b4 Remove backwards compatibility with RelationPaginationToken. (#12138) 2022-03-04 07:10:10 -05:00
David Robertson 36071d39f7 Changelog (#12153) 2022-03-04 12:01:51 +00:00
David Robertson 4aeb00ca20 Move synctl into synapse._scripts and expose as an entrypoint (#12140) 2022-03-04 11:58:49 +00:00
Erik Johnston 423cca9efe Spread out sending device lists to remote hosts (#12132) 2022-03-04 11:48:15 +00:00
Richard van der Hoff 87c230c27c Update client-visibility filtering for outlier events (#12155)
Avoid trying to get the state for outliers, which isn't a sensible thing to do.
2022-03-04 10:31:19 +00:00
Richard van der Hoff d56202b038 Fix type of events in StateGroupStorage and StateHandler (#12156)
We make multiple passes over this, so a regular iterable won't do.
2022-03-04 10:25:18 +00:00
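The distinction matters because a one-shot iterator is exhausted by a single pass:

```python
events = iter(["$event1", "$event2"])
assert list(events) == ["$event1", "$event2"]
assert list(events) == []  # the second pass sees nothing

# A Collection such as a list can be traversed any number of times:
events = ["$event1", "$event2"]
assert list(events) == list(events)
```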
Richard van der Hoff 8533c8b03d Avoid generating state groups for local out-of-band leaves (#12154)
If we locally generate a rejection for an invite received over federation, it
is stored as an outlier (because we probably don't have the state for the
room). However, currently we still generate a state group for it (even though
the state in that state group will be nonsense).

By setting the `outlier` param on `create_event`, we avoid the nonsensical
state.
2022-03-03 19:58:08 +00:00
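A toy illustration of the effect (not Synapse's implementation; only the `outlier` parameter is taken from the commit message):

```python
def create_event(event: dict, outlier: bool = False) -> dict:
    if outlier:
        # An outlier sits outside the room DAG: we don't know the state
        # at that point, so don't fabricate a state group for it.
        event["state_group"] = None
    else:
        event["state_group"] = compute_state_group(event)
    return event

def compute_state_group(event: dict) -> int:
    return 1  # placeholder for the real state-resolution machinery

rejection = {"type": "m.room.member", "content": {"membership": "leave"}}
assert create_event(rejection, outlier=True)["state_group"] is None
```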
Andrew Morgan fb0ffa9676 Rename various ApplicationServices interested methods (#11915) 2022-03-03 18:14:09 +00:00
David Robertson 9297d040a7 Detox, part 2 of N (#12152)
I've argued in #11537 that poetry and tox don't cooperate well at the
moment. (See also #12119.) Therefore I'm pruning away bits of tox to make the transition to poetry easier. This change removes the commands for coverage.

We don't use coverage in anger at the moment. It shouldn't be too hard to add coverage as a dev-dependency and reintroduce this if we really want it.
2022-03-03 17:14:09 +00:00
Dirk Klimpel 7e91107be1 Add type hints to tests/rest (#12146)
* Add type hints to `tests/rest`

* newsfile

* change import from `SigningKey`
2022-03-03 16:05:44 +00:00
Patrick Cloke 1d11b452b7 Use the proper serialization format when bundling aggregations. (#12090)
This ensures that the `latest_event` field of the bundled aggregation
for threads uses the same format as the other events in the response.
2022-03-03 10:43:06 -05:00
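A hedged sketch of the shape involved (field names follow MSC3440's bundled-aggregation format; the exact placement is an assumption, not taken from this commit):

```python
# The thread's latest_event must be serialized in the same format as
# any other event in the response - that is what this commit fixes.
parent_event = {
    "event_id": "$thread_root",
    "type": "m.room.message",
    "unsigned": {
        "m.relations": {
            "m.thread": {
                "count": 2,
                "latest_event": {
                    "type": "m.room.message",
                    "sender": "@alice:example.org",
                    "content": {"msgtype": "m.text", "body": "latest reply"},
                },
            }
        }
    },
}
```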
David Robertson cea1b58c4a Don't impose version checks on dev extras at runtime (#12129)
* Fix incorrect argument in test case

* Add copyright header

* Docstring and __all__

* Exclude dev dependencies

* Use changelog from #12088

* Include version in error messages

This will hopefully distinguish between the version of the source code
and the version of the distribution package that is installed.

* Linter script is your friend
2022-03-03 12:47:55 +00:00
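The version of the installed distribution (as opposed to the version constant in a source checkout) can be probed with `importlib.metadata`, or its backport on Python 3.7, as introduced in #12088:

```python
try:
    from importlib.metadata import PackageNotFoundError, version
except ImportError:  # Python 3.7: fall back to the backport
    from importlib_metadata import PackageNotFoundError, version

try:
    # Version of the installed *distribution*, which may differ from
    # the version constant baked into a source checkout.
    print(version("matrix-synapse"))
except PackageNotFoundError:
    print("matrix-synapse is not installed as a distribution")
```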
Eric Eastwood a511a890d7 Enable MSC2716 Complement tests in Synapse (#12145)
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-03-03 11:19:20 +00:00
Erik Johnston 61fd2a8f59 Limit the size of the aggregation_key (#12101)
There's no reason to let people use long keys.
2022-03-03 10:52:35 +00:00
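A minimal sketch of such a guard (the 500-character bound is an assumption for illustration; see the PR for the actual limit):

```python
MAX_AGGREGATION_KEY_LENGTH = 500  # assumed limit, for illustration only

def check_aggregation_key(aggregation_key: str) -> None:
    if len(aggregation_key) > MAX_AGGREGATION_KEY_LENGTH:
        raise ValueError("aggregation_key is too long")
```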
Eric Eastwood 31b125ccec Enable MSC3030 Complement tests in Synapse (#12144)
The Complement tests for MSC3030 are now merged: https://github.com/matrix-org/complement/pull/178

Synapse implementation: https://github.com/matrix-org/synapse/pull/9445
2022-03-03 11:45:23 +01:00
Brendan Abolivier ae8a616b49 Correctly register deactivation and profile update module callbacks (#12141) 2022-03-03 11:39:58 +01:00
David Robertson 11282ade1d Move the snapcraft configuration to contrib. (#12142)
* Move the `snapcraft` configuration to `contrib`.

We're happy for people to package this as a snap image if it's useful,
but we don't support or maintain it. I'd like to move the config to
`contrib` to reflect this state of affairs.

* Changelog
2022-03-02 19:22:44 +00:00
David Robertson 1fbe0316a9 Add suffixes to scripts in scripts-dev (#12137)
* Rename scripts-dev to have suffixes

* Update references to `scripts-dev`

* Changelog

* These scripts don't pass mypy
2022-03-02 18:00:26 +00:00
David Robertson 106959b3cf Remove unused mocks from test_typing (#12136)
* Remove unused mocks from `test_typing`

It's not clear what these do. `get_user_by_access_token` has the wrong
signature, including the return type. Tests all pass without these. I
think we should nuke them.

* Changelog

* Fixup imports
2022-03-02 17:24:52 +00:00
Dirk Klimpel 2ffaf30803 Add type hints to tests/rest/client (#12108)
* Add type hints to `tests/rest/client`

* newsfile

* fix imports

* add `test_account.py`

* Remove one type hint in `test_report_event.py`

* change `on_create_room` to `async`

* update new functions in `test_third_party_rules.py`

* Add `test_filter.py`

* add `test_rooms.py`

* change `assertEquals` to `assertEqual`

* lint
2022-03-02 16:34:14 +00:00
Andrew Morgan b4461e7d8a Enable complexity checking in complexity checking docs example (#11998) 2022-03-02 16:11:16 +00:00
Olivier Wilkinson (reivilibre) 594a07ede4 Merge tag 'v1.54.0rc1' into develop
Synapse 1.54.0rc1 (2022-03-02)
==============================

Please note that this will be the last release of Synapse that is compatible with Mjolnir 1.3.1 and earlier.
Administrators of servers which have the Mjolnir module installed are advised to upgrade Mjolnir to version 1.3.2 or later.

Features
--------

- Add support for [MSC3202](https://github.com/matrix-org/matrix-doc/pull/3202): sending one-time key counts and fallback key usage states to Application Services. ([\#11617](https://github.com/matrix-org/synapse/issues/11617))
- Improve the generated URL previews for some web pages. Contributed by @AndrewRyanChama. ([\#11985](https://github.com/matrix-org/synapse/issues/11985))
- Track cache invalidations in Prometheus metrics, as already happens for cache eviction based on size or time. ([\#12000](https://github.com/matrix-org/synapse/issues/12000))
- Implement experimental support for [MSC3720](https://github.com/matrix-org/matrix-doc/pull/3720) (account status endpoints). ([\#12001](https://github.com/matrix-org/synapse/issues/12001), [\#12067](https://github.com/matrix-org/synapse/issues/12067))
- Enable modules to set a custom display name when registering a user. ([\#12009](https://github.com/matrix-org/synapse/issues/12009))
- Advertise Matrix 1.1 and 1.2 support on `/_matrix/client/versions`. ([\#12020](https://github.com/matrix-org/synapse/issues/12020), [\#12022](https://github.com/matrix-org/synapse/issues/12022))
- Support only the stable identifier for [MSC3069](https://github.com/matrix-org/matrix-doc/pull/3069)'s `is_guest` on `/_matrix/client/v3/account/whoami`. ([\#12021](https://github.com/matrix-org/synapse/issues/12021))
- Use room version 9 as the default room version (per [MSC3589](https://github.com/matrix-org/matrix-doc/pull/3589)). ([\#12058](https://github.com/matrix-org/synapse/issues/12058))
- Add module callbacks to react to user deactivation status changes (i.e. deactivations and reactivations) and profile updates. ([\#12062](https://github.com/matrix-org/synapse/issues/12062))

Bugfixes
--------

- Fix a bug introduced in Synapse 1.48.0 where an edit of the latest event in a thread would not be properly applied to the thread summary. ([\#11992](https://github.com/matrix-org/synapse/issues/11992))
- Fix long-standing bug where the `get_rooms_for_user` cache was not correctly invalidated for remote users when the server left a room. ([\#11999](https://github.com/matrix-org/synapse/issues/11999))
- Fix a 500 error with Postgres when looking backwards with the [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) `/timestamp_to_event?dir=b` endpoint. ([\#12024](https://github.com/matrix-org/synapse/issues/12024))
- Properly fix a long-standing bug where wrong data could be inserted into the `event_search` table when using SQLite. This could block running `synapse_port_db` with an `argument of type 'int' is not iterable` error. This bug was partially fixed by a change in Synapse 1.44.0. ([\#12037](https://github.com/matrix-org/synapse/issues/12037))
- Fix slow performance of `/logout` in some cases where refresh tokens are in use. The slowness existed since the initial implementation of refresh tokens in version 1.38.0. ([\#12056](https://github.com/matrix-org/synapse/issues/12056))
- Fix a long-standing bug where Synapse would make additional failing requests over federation for missing data. ([\#12077](https://github.com/matrix-org/synapse/issues/12077))
- Fix occasional `Unhandled error in Deferred` error message. ([\#12089](https://github.com/matrix-org/synapse/issues/12089))
- Fix a bug introduced in Synapse 1.51.0 where incoming federation transactions containing at least one EDU would be dropped if debug logging was enabled for `synapse.8631_debug`. ([\#12098](https://github.com/matrix-org/synapse/issues/12098))
- Fix a long-standing bug which could cause push notifications to malfunction if `use_frozen_dicts` was set in the configuration. ([\#12100](https://github.com/matrix-org/synapse/issues/12100))
- Fix an extremely rare, long-standing bug in `ReadWriteLock` that would cause an error when a newly unblocked writer completes instantly. ([\#12105](https://github.com/matrix-org/synapse/issues/12105))
- Make a `POST` to `/rooms/<room_id>/receipt/m.read/<event_id>` only trigger a push notification if the count of unread messages is different to the one in the last successfully sent push. This reduces server load and load on the receiving device. ([\#11835](https://github.com/matrix-org/synapse/issues/11835))

Updates to the Docker image
---------------------------

- The Docker image no longer automatically creates a temporary volume at `/data`. This is not expected to affect normal usage. ([\#11997](https://github.com/matrix-org/synapse/issues/11997))
- Use Python 3.9 in Docker images by default. ([\#12112](https://github.com/matrix-org/synapse/issues/12112))

Improved Documentation
----------------------

- Document support for the `to_device`, `account_data`, `receipts`, and `presence` stream writers for workers. ([\#11599](https://github.com/matrix-org/synapse/issues/11599))
- Explain the meaning of spam checker callbacks' return values. ([\#12003](https://github.com/matrix-org/synapse/issues/12003))
- Clarify information about external Identity Provider IDs. ([\#12004](https://github.com/matrix-org/synapse/issues/12004))

Deprecations and Removals
-------------------------

- Deprecate using `synctl` with the config option `synctl_cache_factor` and print a warning if a user still uses this option. ([\#11865](https://github.com/matrix-org/synapse/issues/11865))
- Remove support for the legacy structured logging configuration (please see the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#legacy-structured-logging-configuration-removal) if you are using `structured: true` in the Synapse configuration). ([\#12008](https://github.com/matrix-org/synapse/issues/12008))
- Drop support for [MSC3283](https://github.com/matrix-org/matrix-doc/pull/3283) unstable flags now that the stable flags are supported. ([\#12018](https://github.com/matrix-org/synapse/issues/12018))
- Remove the unstable `/spaces` endpoint from [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946). ([\#12073](https://github.com/matrix-org/synapse/issues/12073))

Internal Changes
----------------

- Make the `get_room_version` method use `get_room_version_id` to benefit from caching. ([\#11808](https://github.com/matrix-org/synapse/issues/11808))
- Remove unnecessary condition on knock -> leave auth rule check. ([\#11900](https://github.com/matrix-org/synapse/issues/11900))
- Add tests for device list changes between local users. ([\#11972](https://github.com/matrix-org/synapse/issues/11972))
- Optimise calculating `device_list` changes in `/sync`. ([\#11974](https://github.com/matrix-org/synapse/issues/11974))
- Add missing type hints to storage classes. ([\#11984](https://github.com/matrix-org/synapse/issues/11984))
- Refactor the search code for improved readability. ([\#11991](https://github.com/matrix-org/synapse/issues/11991))
- Move common deduplication code down into `_auth_and_persist_outliers`. ([\#11994](https://github.com/matrix-org/synapse/issues/11994))
- Limit concurrent joins from application services. ([\#11996](https://github.com/matrix-org/synapse/issues/11996))
- Preparation for faster-room-join work: when parsing the `send_join` response, get the `m.room.create` event from `state`, not `auth_chain`. ([\#12005](https://github.com/matrix-org/synapse/issues/12005), [\#12039](https://github.com/matrix-org/synapse/issues/12039))
- Preparation for faster-room-join work: parse MSC3706 fields in send_join response. ([\#12011](https://github.com/matrix-org/synapse/issues/12011))
- Preparation for faster-room-join work: persist information on which events and rooms have partial state to the database. ([\#12012](https://github.com/matrix-org/synapse/issues/12012))
- Preparation for faster-room-join work: Support for calling `/federation/v1/state` on a remote server. ([\#12013](https://github.com/matrix-org/synapse/issues/12013))
- Configure `tox` to use `venv` rather than `virtualenv`. ([\#12015](https://github.com/matrix-org/synapse/issues/12015))
- Fix bug in `StateFilter.return_expanded()` and add some tests. ([\#12016](https://github.com/matrix-org/synapse/issues/12016))
- Use Matrix v1.1 endpoints (`/_matrix/client/v3/auth/...`) in fallback auth HTML forms. ([\#12019](https://github.com/matrix-org/synapse/issues/12019))
- Update the `olddeps` CI job to use an old version of `markupsafe`. ([\#12025](https://github.com/matrix-org/synapse/issues/12025))
- Upgrade Mypy to version 0.931. ([\#12030](https://github.com/matrix-org/synapse/issues/12030))
- Remove legacy `HomeServer.get_datastore()`. ([\#12031](https://github.com/matrix-org/synapse/issues/12031), [\#12070](https://github.com/matrix-org/synapse/issues/12070))
- Minor typing fixes. ([\#12034](https://github.com/matrix-org/synapse/issues/12034), [\#12069](https://github.com/matrix-org/synapse/issues/12069))
- After joining a room, create a dedicated logcontext to process the queued events. ([\#12041](https://github.com/matrix-org/synapse/issues/12041))
- Tidy up GitHub Actions config which builds distributions for PyPI. ([\#12051](https://github.com/matrix-org/synapse/issues/12051))
- Move configuration out of `setup.cfg`. ([\#12052](https://github.com/matrix-org/synapse/issues/12052), [\#12059](https://github.com/matrix-org/synapse/issues/12059))
- Fix error message when a worker process fails to talk to another worker process. ([\#12060](https://github.com/matrix-org/synapse/issues/12060))
- Fix using the `complement.sh` script without specifying a directory or a branch. Contributed by Nico on behalf of Famedly. ([\#12063](https://github.com/matrix-org/synapse/issues/12063))
- Add type hints to `tests/rest/client`. ([\#12066](https://github.com/matrix-org/synapse/issues/12066), [\#12072](https://github.com/matrix-org/synapse/issues/12072), [\#12084](https://github.com/matrix-org/synapse/issues/12084), [\#12094](https://github.com/matrix-org/synapse/issues/12094))
- Add some logging to `/sync` to try and track down #11916. ([\#12068](https://github.com/matrix-org/synapse/issues/12068))
- Inspect application dependencies using `importlib.metadata` or its backport. ([\#12088](https://github.com/matrix-org/synapse/issues/12088))
- Use `assertEqual` instead of the deprecated `assertEquals` in test code. ([\#12092](https://github.com/matrix-org/synapse/issues/12092))
- Move experimental support for [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440) to `/versions`. ([\#12099](https://github.com/matrix-org/synapse/issues/12099))
- Add `stop_cancellation` utility function to stop `Deferred`s from being cancelled. ([\#12106](https://github.com/matrix-org/synapse/issues/12106))
- Improve exception handling for concurrent execution. ([\#12109](https://github.com/matrix-org/synapse/issues/12109))
- Advertise support for Python 3.10 in packaging files. ([\#12111](https://github.com/matrix-org/synapse/issues/12111))
- Move CI checks out of tox, to facilitate a move to using poetry. ([\#12119](https://github.com/matrix-org/synapse/issues/12119))
2022-03-02 15:26:43 +00:00
Erik Johnston 6d282a9c89 Make release script write correct no-op changelog (#12127)
As we want to include the previous version in the "No new changes..."
string.
2022-03-02 14:28:18 +00:00
Patrick Cloke 1103c5fe8a Check if instances are lists, not sequences. (#12128)
As a str is a sequence, the checks were not granular
enough and would allow lists or strings, when only
lists were valid.
2022-03-02 13:18:51 +00:00
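The pitfall is easy to demonstrate:

```python
from collections.abc import Sequence

# A str is itself a Sequence, so a Sequence check is too permissive:
assert isinstance("abc", Sequence)
assert isinstance(["a", "b"], Sequence)

# Checking for list specifically rejects bare strings:
assert isinstance(["a", "b"], list)
assert not isinstance("abc", list)
```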
David Robertson f3f0ab10fe Move scripts directory inside synapse, exposing as setuptools entry_points (#12118)
* Two scripts are basically entry_points already
* Move and rename scripts/* to synapse/_scripts/*.py
* Delete sync_room_to_group.pl
* Expose entry points in setup.py
* Update linter script and config
* Fixup scripts & docs mentioning scripts that moved

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2022-03-02 13:00:16 +00:00
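A minimal illustration of the setuptools `console_scripts` mechanism this relies on (the entry-point name is taken from the `setup.py` diff further down):

```python
from setuptools import setup

setup(
    name="example",
    entry_points={
        "console_scripts": [
            # "<command> = <module>:<function>": pip generates a
            # synapse_port_db executable on PATH that calls main().
            "synapse_port_db = synapse._scripts.synapse_port_db:main",
        ]
    },
)
```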
Patrick Cloke 6adb89ff00 Improve and refactor the tests for relations. (#12113)
* Modernizes code (f-strings, etc.)
* Fixes incorrect comments.
* Splits the test case into two.
* Factors out some duplicated code.
2022-03-02 06:56:16 -05:00
135 changed files with 1979 additions and 1339 deletions
+2 -2
View File
@@ -21,7 +21,7 @@ python -m synapse.app.homeserver --generate-keys -c .ci/sqlite-config.yaml
echo "--- Prepare test database"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# Run the export-data command on the sqlite test database
python -m synapse.app.admin_cmd -c .ci/sqlite-config.yaml export-data @anon-20191002_181700-832:localhost:8800 \
@@ -41,7 +41,7 @@ fi
# Port the SQLite database to postgres so we can check the command works against postgres
echo "+++ Port SQLite3 database to postgres"
scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
# Run the export-data command on postgres database
python -m synapse.app.admin_cmd -c .ci/postgres-config.yaml export-data @anon-20191002_181700-832:localhost:8800 \
+7 -5
View File
@@ -25,17 +25,19 @@ python -m synapse.app.homeserver --generate-keys -c .ci/sqlite-config.yaml
echo "--- Prepare test database"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# Create the PostgreSQL database.
.ci/scripts/postgres_exec.py "CREATE DATABASE synapse"
echo "+++ Run synapse_port_db against test database"
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
# TODO: this invocation of synapse_port_db (and others below) used to be prepended with `coverage run`,
# but coverage seems unable to find the entrypoints installed by `pip install -e .`.
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
# We should be able to run twice against the same database.
echo "+++ Run synapse_port_db a second time"
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
#####
@@ -46,7 +48,7 @@ echo "--- Prepare empty SQLite database"
# we do this by deleting the sqlite db, and then doing the same again.
rm .ci/test_db.db
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# re-create the PostgreSQL database.
.ci/scripts/postgres_exec.py \
@@ -54,4 +56,4 @@ scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-b
"CREATE DATABASE synapse"
echo "+++ Run synapse_port_db against empty database"
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
-2
View File
@@ -3,11 +3,9 @@
# things to include
!docker
!scripts
!synapse
!MANIFEST.in
!README.rst
!setup.py
!synctl
**/__pycache__
+2 -2
View File
@@ -31,7 +31,7 @@ jobs:
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
dists='["debian:sid"]'
if [[ $GITHUB_REF == refs/tags/* ]]; then
dists=$(scripts-dev/build_debian_packages --show-dists-json)
dists=$(scripts-dev/build_debian_packages.py --show-dists-json)
fi
echo "::set-output name=distros::$dists"
# map the step outputs to job outputs
@@ -74,7 +74,7 @@ jobs:
# see https://github.com/docker/build-push-action/issues/252
# for the cache magic here
run: |
./src/scripts-dev/build_debian_packages \
./src/scripts-dev/build_debian_packages.py \
--docker-build-arg=--cache-from=type=local,src=/tmp/.buildx-cache \
--docker-build-arg=--cache-to=type=local,mode=max,dest=/tmp/.buildx-cache-new \
--docker-build-arg=--progress=plain \
+12 -23
View File
@@ -16,7 +16,8 @@ jobs:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: pip install -e .
- run: scripts-dev/generate_sample_config --check
- run: scripts-dev/generate_sample_config.sh --check
- run: scripts-dev/config-lint.sh
lint:
runs-on: ubuntu-latest
@@ -51,7 +52,7 @@ jobs:
fetch-depth: 0
- uses: actions/setup-python@v2
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment
- run: scripts-dev/check-newsfragment.sh
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
@@ -376,7 +377,7 @@ jobs:
# Run Complement
- run: |
set -o pipefail
go test -v -json -p 1 -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
go test -v -json -p 1 -tags synapse_blacklist,msc2403,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
env:
@@ -387,34 +388,22 @@ jobs:
tests-done:
if: ${{ always() }}
needs:
- check-sampleconfig
- lint
- lint-crlf
- lint-newsfile
- trial
- trial-olddeps
- sytest
- export-data
- portdb
- complement
runs-on: ubuntu-latest
steps:
- name: Set build result
env:
NEEDS_CONTEXT: ${{ toJSON(needs) }}
# the `jq` incantation dumps out a series of "<job> <result>" lines.
# we set it to an intermediate variable to avoid a pipe, which makes it
# hard to set $rc.
run: |
rc=0
results=$(jq -r 'to_entries[] | [.key,.value.result] | join(" ")' <<< $NEEDS_CONTEXT)
while read job result ; do
# The newsfile lint may be skipped on non PR builds
if [ $result == "skipped" ] && [ $job == "lint-newsfile" ]; then
continue
fi
if [ "$result" != "success" ]; then
echo "::set-failed ::Job $job returned $result"
rc=1
fi
done <<< $results
exit $rc
- uses: matrix-org/done-action@v2
with:
needs: ${{ toJSON(needs) }}
# The newsfile lint may be skipped on non PR builds
skippable:
lint-newsfile
+3
View File
@@ -54,3 +54,6 @@ book/
# complement
/complement-*
/master.tar.gz
# rust
/event_rs/target*
-3
View File
@@ -1,4 +1,3 @@
include synctl
include LICENSE
include VERSION
include *.rst
@@ -17,7 +16,6 @@ recursive-include synapse/storage *.txt
recursive-include synapse/storage *.md
recursive-include docs *
recursive-include scripts *
recursive-include scripts-dev *
recursive-include synapse *.pyi
recursive-include tests *.py
@@ -53,5 +51,4 @@ prune contrib
prune debian
prune demo/etc
prune docker
prune snap
prune stubs
+1
View File
@@ -0,0 +1 @@
Simplify the `ApplicationService` class' set of public methods related to interest checking.
+1
View File
@@ -0,0 +1 @@
Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page.
+1
View File
@@ -0,0 +1 @@
Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0.
+1
View File
@@ -0,0 +1 @@
Limit the size of `aggregation_key` on annotations.
+1
View File
@@ -0,0 +1 @@
Add type hints to `tests/rest/client`.
+1
View File
@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.
+1
View File
@@ -0,0 +1 @@
Move scripts to Synapse package and expose as setuptools entry points.
+1
View File
@@ -0,0 +1 @@
Fix a long-standing bug when redacting events with relations.
+1
View File
@@ -0,0 +1 @@
Update release script to insert the previous version when writing "No significant changes" line in the changelog.
+1
View File
@@ -0,0 +1 @@
Fix data validation to compare to lists, not sequences.
+1
View File
@@ -0,0 +1 @@
Inspect application dependencies using `importlib.metadata` or its backport.
+1
View File
@@ -0,0 +1 @@
Improve performance of logging in for large accounts.
+1
View File
@@ -0,0 +1 @@
Remove unused mocks from `test_typing`.
+1
View File
@@ -0,0 +1 @@
Give `scripts-dev` scripts suffixes for neater CI config.
+1
View File
@@ -0,0 +1 @@
Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0.
+1
View File
@@ -0,0 +1 @@
Move `synctl` into `synapse._scripts` and expose as an entry point.
+1
View File
@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.54.0rc1 preventing the new module callbacks introduced in this release from being registered by modules.
+1
View File
@@ -0,0 +1 @@
Move the snapcraft configuration file to `contrib`.
+1
View File
@@ -0,0 +1 @@
Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI.
+1
View File
@@ -0,0 +1 @@
Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI.
+1
View File
@@ -0,0 +1 @@
Add type hints to `tests/rest`.
+1
View File
@@ -0,0 +1 @@
Add test for `ObservableDeferred`'s cancellation behaviour.
+1
View File
@@ -0,0 +1 @@
Prune unused jobs from `tox` config.
+1
View File
@@ -0,0 +1 @@
Move CI checks out of tox, to facilitate a move to using poetry.
+1
View File
@@ -0,0 +1 @@
Avoid generating state groups for local out-of-band leaves.
+1
View File
@@ -0,0 +1 @@
Avoid trying to calculate the state at outlier events.
+1
View File
@@ -0,0 +1 @@
Fix some type annotations.
+1
View File
@@ -0,0 +1 @@
Fix a bug introduced in #4864 whereby background updates are never run with the default background batch size.
+1
View File
@@ -0,0 +1 @@
Add type hints for `ObservableDeferred` attributes.
+1
View File
@@ -0,0 +1 @@
Use a prebuilt Action for the `tests-done` CI job.
+1
View File
@@ -0,0 +1 @@
Reduce number of DB queries made during processing of `/sync`.
+1
View File
@@ -0,0 +1 @@
Relax the version guard for "packaging" added in #12088.
@@ -20,7 +20,7 @@ apps:
generate-config:
command: generate_config
generate-signing-key:
command: generate_signing_key.py
command: generate_signing_key
register-new-matrix-user:
command: register_new_matrix_user
plugs: [network]
+1 -2
View File
@@ -46,8 +46,7 @@ RUN \
&& rm -rf /var/lib/apt/lists/*
# Copy just what we need to pip install
COPY scripts /synapse/scripts/
COPY MANIFEST.in README.rst setup.py synctl /synapse/
COPY MANIFEST.in README.rst setup.py /synapse/
COPY synapse/__init__.py /synapse/synapse/__init__.py
COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
+1 -1
View File
@@ -172,6 +172,6 @@ frobber:
```
Note that the sample configuration is generated from the synapse code
and is maintained by a script, `scripts-dev/generate_sample_config`.
and is maintained by a script, `scripts-dev/generate_sample_config.sh`.
Making sure that the output from this script matches the desired format
is left as an exercise for the reader!
+3 -3
View File
@@ -158,9 +158,9 @@ same as integers.
There are three separate aspects to this:
* Any new boolean column must be added to the `BOOLEAN_COLUMNS` list in
`scripts/synapse_port_db`. This tells the port script to cast the integer
value from SQLite to a boolean before writing the value to the postgres
database.
`synapse/_scripts/synapse_port_db.py`. This tells the port script to cast
the integer value from SQLite to a boolean before writing the value to the
postgres database.
* Before SQLite 3.23, `TRUE` and `FALSE` were not recognised as constants by
SQLite, and the `IS [NOT] TRUE`/`IS [NOT] FALSE` operators were not
@@ -31,28 +31,29 @@ Anything that requires modifying the device list [#7721](https://github.com/matr
Put the below in a new file at /etc/matrix-synapse/conf.d/sbc.yaml to override the defaults in homeserver.yaml.
```
# Set to false to disable presence tracking on this homeserver.
# Disable presence tracking, which is currently fairly resource intensive
# More info: https://github.com/matrix-org/synapse/issues/9478
use_presence: false
# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
# Set a small complexity limit, preventing users from joining large rooms
# which may be resource-intensive to remain a part of.
#
# Note that this will not prevent users from joining smaller rooms that
# eventually become complex.
limit_remote_rooms:
# Uncomment to enable room complexity checking.
#enabled: true
enabled: true
complexity: 3.0
# Database configuration
database:
# Use postgres for the best performance
name: psycopg2
args:
user: matrix-synapse
# Generate a long, secure one with a password manager
# Generate a long, secure password using a password manager
password: hunter2
database: matrix-synapse
host: localhost
cp_min: 5
cp_max: 10
```
Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). You can find join times and your most complex rooms like this:
+4 -4
View File
@@ -153,9 +153,9 @@ database file (typically `homeserver.db`) to another location. Once the
copy is complete, restart synapse. For instance:
```sh
./synctl stop
synctl stop
cp homeserver.db homeserver.db.snapshot
./synctl start
synctl start
```
Copy the old config file into a new config file:
@@ -192,10 +192,10 @@ Once that has completed, change the synapse config to point at the
PostgreSQL database configuration file `homeserver-postgres.yaml`:
```sh
./synctl stop
synctl stop
mv homeserver.yaml homeserver-old-sqlite.yaml
mv homeserver-postgres.yaml homeserver.yaml
./synctl start
synctl start
```
Synapse should now be running against PostgreSQL.
+3 -2
View File
@@ -238,8 +238,9 @@ After updating the homeserver configuration, you must restart synapse:
* If you use synctl:
```sh
cd /where/you/run/synapse
./synctl restart
# Depending on how Synapse is installed, synctl may already be on
# your PATH. If not, you may need to activate a virtual environment.
synctl restart
```
* If you use systemd:
```sh
+22 -1
View File
@@ -47,7 +47,7 @@ this document.
3. Restart Synapse:
```bash
./synctl restart
synctl restart
```
To check whether your update was successful, you can check the running
@@ -85,6 +85,27 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.55.0
## `synctl` script has been moved
The `synctl` script
[has been made](https://github.com/matrix-org/synapse/pull/12140) an
[entry point](https://packaging.python.org/en/latest/specifications/entry-points/)
and no longer exists at the root of Synapse's source tree. If you wish to use
`synctl` to manage your homeserver, you should invoke `synctl` directly, e.g.
`synctl start` instead of `./synctl start` or `/path/to/synctl start`.
You will need to ensure `synctl` is on your `PATH`.
- This is automatically the case when using
[Debian packages](https://packages.matrix.org/debian/) or
[docker images](https://hub.docker.com/r/matrixdotorg/synapse)
provided by Matrix.org.
- When installing from a wheel, sdist, or PyPI, a `synctl` executable is added
to your Python installation's `bin`. This should be on your `PATH`
automatically, though you might need to activate a virtual environment
depending on how you installed Synapse.
# Upgrading to v1.54.0
## Legacy structured logging configuration removal
@@ -12,7 +12,7 @@ UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
```
A new server admin user can also be created using the `register_new_matrix_user`
command. This is a script that is located in the `scripts/` directory, or possibly
command. This is a script that is distributed as part of synapse. It is possibly
already on your `$PATH` depending on how Synapse was installed.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
+18
View File
@@ -0,0 +1,18 @@
[package]
name = "synapse_events"
version = "0.1.0"
edition = "2021"
authors = ["Erik"]
[lib]
crate-type = ["cdylib"]
[dependencies]
anyhow = "1.0.56"
base64 = "0.13.0"
pyo3 = { version = "0.16.1", features = ["extension-module", "anyhow"] }
pythonize = "0.16.0"
serde = { version = "1.0.136", features = ["derive"] }
serde_json = "1.0.79"
sha2 = "0.10.2"
signed-json = { git = "https://github.com/erikjohnston/rust-signed-json.git" }
+187
View File
@@ -0,0 +1,187 @@
use std::collections::BTreeMap;
use anyhow::Context;
use base64::URL_SAFE_NO_PAD;
use pyo3::exceptions::PyAttributeError;
use pyo3::prelude::*;
use pyo3::types::PyBytes;
use pythonize::pythonize;
use serde::Deserialize;
use serde_json::Value;
use sha2::{Digest, Sha256};
use signed_json::Signed;
/*
depth: DictProperty[int] = DictProperty("depth")
content: DictProperty[JsonDict] = DictProperty("content")
hashes: DictProperty[Dict[str, str]] = DictProperty("hashes")
origin: DictProperty[str] = DictProperty("origin")
origin_server_ts: DictProperty[int] = DictProperty("origin_server_ts")
redacts: DefaultDictProperty[Optional[str]] = DefaultDictProperty("redacts", None)
room_id: DictProperty[str] = DictProperty("room_id")
sender: DictProperty[str] = DictProperty("sender")
# TODO state_key should be Optional[str]. This is generally asserted in Synapse
# by calling is_state() first (which ensures it is not None), but it is hard (not possible?)
# to properly annotate that calling is_state() asserts that state_key exists
# and is non-None. It would be better to replace such direct references with
# get_state_key() (and a check for None).
state_key: DictProperty[str] = DictProperty("state_key")
type: DictProperty[str] = DictProperty("type")
user_id: DictProperty[str] = DictProperty("sender")
*/
// FYI origin is not included here
#[derive(Debug, Clone, Deserialize)]
struct EventInner {
room_id: String,
depth: u64,
hashes: BTreeMap<String, String>,
origin_server_ts: u64,
redacts: Option<String>,
sender: String,
#[serde(rename = "type")]
event_type: String,
#[serde(default)]
state_key: Option<String>,
content: BTreeMap<String, Value>,
}
#[pyclass]
#[derive(Debug, Clone, Deserialize)]
struct Event {
#[pyo3(get)]
event_id: String,
#[serde(flatten)]
inner: Signed<EventInner>,
}
#[pymethods]
impl Event {
#[getter]
fn room_id(&self) -> &str {
&self.inner.room_id
}
fn get_pdu_json(&self) -> PyResult<String> {
// TODO: Do all the other things `get_pdu_json` does.
Ok(serde_json::to_string(&self.inner).context("bah")?)
}
#[getter]
fn content(&self, py: Python) -> PyResult<PyObject> {
Ok(pythonize(py, &self.inner.content)?)
}
#[getter]
fn state_key(&self) -> PyResult<&str> {
if let Some(state_key) = &self.inner.state_key {
Ok(state_key)
} else {
Err(PyAttributeError::new_err("state_key"))
}
}
}
#[pyfunction]
fn from_bytes(bytes: &PyBytes) -> PyResult<Event> {
let b = bytes.as_bytes();
let inner: Signed<EventInner> = serde_json::from_slice(b).context("parsing event")?;
let mut redacted: BTreeMap<String, Value> = redact(&inner).context("redacting")?;
redacted.remove("signatures");
redacted.remove("unsigned");
let redacted_json = serde_json::to_vec(&redacted).context("BAH")?;
let event_id = base64::encode_config(Sha256::digest(&redacted_json), URL_SAFE_NO_PAD);
let event = Event { event_id, inner };
Ok(event)
}
#[pymodule]
fn synapse_events(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_function(wrap_pyfunction!(from_bytes, m)?)?;
Ok(())
}
fn redact<E: serde::de::DeserializeOwned>(
event: &Signed<EventInner>,
) -> Result<E, serde_json::Error> {
let etype = event.event_type.to_string();
let mut content = event.as_ref().content.clone();
let val = serde_json::to_value(event)?;
let allowed_keys = [
"event_id",
"sender",
"room_id",
"hashes",
"signatures",
"content",
"type",
"state_key",
"depth",
"prev_events",
"prev_state",
"auth_events",
"origin",
"origin_server_ts",
"membership",
];
let val = match val {
serde_json::Value::Object(obj) => obj,
_ => unreachable!(), // Events always serialize to an object
};
let mut val: serde_json::Map<_, _> = val
.into_iter()
.filter(|(k, _)| allowed_keys.contains(&(k as &str)))
.collect();
let mut new_content = serde_json::Map::new();
let mut copy_content = |key: &str| {
if let Some(v) = content.remove(key) {
new_content.insert(key.to_string(), v);
}
};
match &etype[..] {
"m.room.member" => copy_content("membership"),
"m.room.create" => copy_content("creator"),
"m.room.join_rules" => copy_content("join_rule"),
"m.room.aliases" => copy_content("aliases"),
"m.room.history_visibility" => copy_content("history_visibility"),
"m.room.power_levels" => {
for key in &[
"ban",
"events",
"events_default",
"kick",
"redact",
"state_default",
"users",
"users_default",
] {
copy_content(key);
}
}
_ => {}
}
val.insert(
"content".to_string(),
serde_json::Value::Object(new_content),
);
serde_json::from_value(serde_json::Value::Object(val))
}
+16 -13
View File
@@ -11,7 +11,7 @@ local_partial_types = True
no_implicit_optional = True
files =
scripts-dev/sign_json,
scripts-dev/,
setup.py,
synapse/,
tests/
@@ -23,6 +23,20 @@ files =
# https://docs.python.org/3/library/re.html#re.X
exclude = (?x)
^(
|scripts-dev/build_debian_packages.py
|scripts-dev/check_signature.py
|scripts-dev/definitions.py
|scripts-dev/federation_client.py
|scripts-dev/hash_history.py
|scripts-dev/list_url_patterns.py
|scripts-dev/release.py
|scripts-dev/tail-synapse.py
|synapse/_scripts/export_signing_key.py
|synapse/_scripts/move_remote_media_to_new_store.py
|synapse/_scripts/synapse_port_db.py
|synapse/_scripts/update_synapse_database.py
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/__init__.py
|synapse/storage/databases/main/cache.py
@@ -74,15 +88,7 @@ exclude = (?x)
|tests/push/test_http.py
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/client/test_account.py
|tests/rest/client/test_filter.py
|tests/rest/client/test_report_event.py
|tests/rest/client/test_rooms.py
|tests/rest/client/test_third_party_rules.py
|tests/rest/client/test_transactions.py
|tests/rest/client/test_typing.py
|tests/rest/key/v2/test_remote_key_resource.py
|tests/rest/media/v1/test_base.py
|tests/rest/media/v1/test_media_storage.py
|tests/rest/media/v1/test_url_preview.py
|tests/scripts/test_new_matrix_user.py
@@ -246,10 +252,7 @@ disallow_untyped_defs = True
[mypy-tests.storage.test_user_directory]
disallow_untyped_defs = True
[mypy-tests.rest.admin.*]
disallow_untyped_defs = True
[mypy-tests.rest.client.*]
[mypy-tests.rest.*]
disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]
+50
View File
@@ -0,0 +1,50 @@
import json
import time
from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.events import make_event_from_dict
import synapse_events
with open("/home/erikj/git/synapse/hq_events", "rb") as f:
event_json = f.readlines()
start = time.time()
rust_events = []
for e in event_json:
e = e.strip()
e = e.replace(b"\\\\", b"\\")
event = synapse_events.from_bytes(e)
rust_events.append(event)
now = time.time()
print(f"Parsed rust event in {now - start:.2f} seconds")
event_dicts = []
start = time.time()
for e in event_json:
e = e.strip()
e = e.replace(b"\\\\", b"\\")
event_dicts.append(json.loads(e.strip()))
now = time.time()
print(f"Parsed JSON in {now - start:.2f} seconds")
events = []
start = time.time()
for e in event_dicts:
event = make_event_from_dict(e, RoomVersions.V5)
events.append(event)
now = time.time()
print(f"Parsed event in {now - start:.2f} seconds")
+1 -1
View File
@@ -71,4 +71,4 @@ fi
# Run the tests!
echo "Images built; running complement"
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
go test -v -tags synapse_blacklist,msc2403,msc2716,msc3030 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
-28
View File
@@ -1,28 +0,0 @@
#!/usr/bin/env bash
#
# Update/check the docs/sample_config.yaml
set -e
cd "$(dirname "$0")/.."
SAMPLE_CONFIG="docs/sample_config.yaml"
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"
check() {
diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || return 1
}
if [ "$1" == "--check" ]; then
diff -u "$SAMPLE_CONFIG" <(./scripts/generate_config --header-file docs/.sample_config_header.yaml) >/dev/null || {
echo -e "\e[1m\e[31m$SAMPLE_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
exit 1
}
diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || {
echo -e "\e[1m\e[31m$SAMPLE_LOG_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
exit 1
}
else
./scripts/generate_config --header-file docs/.sample_config_header.yaml -o "$SAMPLE_CONFIG"
./scripts/generate_log_config -o "$SAMPLE_LOG_CONFIG"
fi
+28
View File
@@ -0,0 +1,28 @@
#!/usr/bin/env bash
#
# Update/check the docs/sample_config.yaml
set -e
cd "$(dirname "$0")/.."
SAMPLE_CONFIG="docs/sample_config.yaml"
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"
check() {
diff -u "$SAMPLE_LOG_CONFIG" <(synapse/_scripts/generate_log_config.py) >/dev/null || return 1
}
if [ "$1" == "--check" ]; then
diff -u "$SAMPLE_CONFIG" <(synapse/_scripts/generate_config.py --header-file docs/.sample_config_header.yaml) >/dev/null || {
echo -e "\e[1m\e[31m$SAMPLE_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config.sh\`.\e[0m" >&2
exit 1
}
diff -u "$SAMPLE_LOG_CONFIG" <(synapse/_scripts/generate_log_config.py) >/dev/null || {
echo -e "\e[1m\e[31m$SAMPLE_LOG_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config.sh\`.\e[0m" >&2
exit 1
}
else
synapse/_scripts/generate_config.py --header-file docs/.sample_config_header.yaml -o "$SAMPLE_CONFIG"
synapse/_scripts/generate_log_config.py -o "$SAMPLE_LOG_CONFIG"
fi
+1 -10
View File
@@ -84,17 +84,8 @@ else
files=(
"synapse" "docker" "tests"
# annoyingly, black doesn't find these so we have to list them
"scripts/export_signing_key"
"scripts/generate_config"
"scripts/generate_log_config"
"scripts/hash_password"
"scripts/register_new_matrix_user"
"scripts/synapse_port_db"
"scripts/update_synapse_database"
"scripts-dev"
"scripts-dev/build_debian_packages"
"scripts-dev/sign_json"
"contrib" "synctl" "setup.py" "synmark" "stubs" ".ci"
"contrib" "setup.py" "synmark" "stubs" ".ci"
)
fi
fi
+3 -3
View File
@@ -147,7 +147,7 @@ python -m synapse.app.homeserver --generate-keys -c "$SQLITE_CONFIG"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
echo "Running db background jobs..."
scripts/update_synapse_database --database-config --run-background-updates "$SQLITE_CONFIG"
synapse/_scripts/update_synapse_database.py --database-config --run-background-updates "$SQLITE_CONFIG"
# Create the PostgreSQL database.
echo "Creating postgres database..."
@@ -156,10 +156,10 @@ createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_DB_NAME"
echo "Copying data from SQLite3 to Postgres with synapse_port_db..."
if [ -z "$COVERAGE" ]; then
# No coverage needed
scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
synapse/_scripts/synapse_port_db.py --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
else
# Coverage desired
coverage run scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
coverage run synapse/_scripts/synapse_port_db.py --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
fi
# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
+28 -2
View File
@@ -17,6 +17,8 @@
"""An interactive script for doing a release. See `cli()` below.
"""
import glob
import os
import re
import subprocess
import sys
@@ -209,8 +211,8 @@ def prepare():
with open("synapse/__init__.py", "w") as f:
f.write(parsed_synapse_ast.dumps())
# Generate changelogs
run_until_successful("python3 -m towncrier", shell=True)
# Generate changelogs.
generate_and_write_changelog(current_version)
# Generate debian changelogs
if parsed_new_version.pre is not None:
@@ -523,5 +525,29 @@ def get_changes_for_version(wanted_version: version.Version) -> str:
return "\n".join(version_changelog)
def generate_and_write_changelog(current_version: version.Version):
# We do this by getting a draft so that we can edit it before writing to the
# changelog.
result = run_until_successful(
"python3 -m towncrier --draft", shell=True, capture_output=True
)
new_changes = result.stdout.decode("utf-8")
new_changes = new_changes.replace(
"No significant changes.", f"No significant changes since {current_version}."
)
# Prepend changes to changelog
with open("CHANGES.md", "r+") as f:
existing_content = f.read()
f.seek(0, 0)
f.write(new_changes)
f.write("\n")
f.write(existing_content)
# Remove all the news fragments
for f in glob.iglob("changelog.d/*.*"):
os.remove(f)
if __name__ == "__main__":
cli()
-19
View File
@@ -1,19 +0,0 @@
#!/usr/bin/env python
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse._scripts.register_new_matrix_user import main
if __name__ == "__main__":
main()
-19
View File
@@ -1,19 +0,0 @@
#!/usr/bin/env python
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse._scripts.review_recent_signups import main
if __name__ == "__main__":
main()
-45
View File
@@ -1,45 +0,0 @@
#!/usr/bin/env perl
use strict;
use warnings;
use JSON::XS;
use LWP::UserAgent;
use URI::Escape;
if (@ARGV < 4) {
die "usage: $0 <homeserver url> <access_token> <room_id|room_alias> <group_id>\n";
}
my ($hs, $access_token, $room_id, $group_id) = @ARGV;
my $ua = LWP::UserAgent->new();
$ua->timeout(10);
if ($room_id =~ /^#/) {
$room_id = uri_escape($room_id);
$room_id = decode_json($ua->get("${hs}/_matrix/client/r0/directory/room/${room_id}?access_token=${access_token}")->decoded_content)->{room_id};
}
my $room_users = [ keys %{decode_json($ua->get("${hs}/_matrix/client/r0/rooms/${room_id}/joined_members?access_token=${access_token}")->decoded_content)->{joined}} ];
my $group_users = [
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/users?access_token=${access_token}" )->decoded_content)->{chunk}}),
(map { $_->{user_id} } @{decode_json($ua->get("${hs}/_matrix/client/unstable/groups/${group_id}/invited_users?access_token=${access_token}" )->decoded_content)->{chunk}}),
];
die "refusing to sync from empty room" unless (@$room_users);
die "refusing to sync to empty group" unless (@$group_users);
my $diff = {};
foreach my $user (@$room_users) { $diff->{$user}++ }
foreach my $user (@$group_users) { $diff->{$user}-- }
foreach my $user (keys %$diff) {
if ($diff->{$user} == 1) {
warn "inviting $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/invite/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
elsif ($diff->{$user} == -1) {
warn "removing $user";
print STDERR $ua->put("${hs}/_matrix/client/unstable/groups/${group_id}/admin/users/remove/${user}?access_token=${access_token}", Content=>'{}')->status_line."\n";
}
}
+12 -2
View File
@@ -15,7 +15,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob
import os
from typing import Any, Dict
@@ -153,8 +152,20 @@ setup(
python_requires="~=3.7",
entry_points={
"console_scripts": [
# Application
"synapse_homeserver = synapse.app.homeserver:main",
"synapse_worker = synapse.app.generic_worker:main",
"synctl = synapse._scripts.synctl:main",
# Scripts
"export_signing_key = synapse._scripts.export_signing_key:main",
"generate_config = synapse._scripts.generate_config:main",
"generate_log_config = synapse._scripts.generate_log_config:main",
"generate_signing_key = synapse._scripts.generate_signing_key:main",
"hash_password = synapse._scripts.hash_password:main",
"register_new_matrix_user = synapse._scripts.register_new_matrix_user:main",
"synapse_port_db = synapse._scripts.synapse_port_db:main",
"synapse_review_recent_signups = synapse._scripts.review_recent_signups:main",
"update_synapse_database = synapse._scripts.update_synapse_database:main",
]
},
classifiers=[
@@ -167,6 +178,5 @@ setup(
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3.10",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},
)
@@ -50,7 +50,7 @@ def format_for_config(public_key: nacl.signing.VerifyKey, expiry_ts: int):
)
if __name__ == "__main__":
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
@@ -85,7 +85,6 @@ if __name__ == "__main__":
else format_plain
)
keys = []
for file in args.key_file:
try:
res = read_signing_keys(file)
@@ -98,3 +97,7 @@ if __name__ == "__main__":
res = []
for key in res:
formatter(get_verify_key(key))
if __name__ == "__main__":
main()
@@ -6,7 +6,8 @@ import sys
from synapse.config.homeserver import HomeServerConfig
if __name__ == "__main__":
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"--config-dir",
@@ -76,3 +77,7 @@ if __name__ == "__main__":
shutil.copyfileobj(args.header_file, args.output_file)
args.output_file.write(conf)
if __name__ == "__main__":
main()
@@ -19,7 +19,8 @@ import sys
from synapse.config.logger import DEFAULT_LOG_CONFIG
if __name__ == "__main__":
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
@@ -42,3 +43,7 @@ if __name__ == "__main__":
out = args.output_file
out.write(DEFAULT_LOG_CONFIG.substitute(log_file=args.log_file))
out.flush()
if __name__ == "__main__":
main()
@@ -19,7 +19,8 @@ from signedjson.key import generate_signing_key, write_signing_keys
from synapse.util.stringutils import random_string
if __name__ == "__main__":
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
@@ -34,3 +35,7 @@ if __name__ == "__main__":
key_id = "a_" + random_string(4)
key = (generate_signing_key(key_id),)
write_signing_keys(args.output_file, key)
if __name__ == "__main__":
main()
@@ -8,9 +8,6 @@ import unicodedata
import bcrypt
import yaml
bcrypt_rounds = 12
password_pepper = ""
def prompt_for_pass():
password = getpass.getpass("Password: ")
@@ -26,7 +23,10 @@ def prompt_for_pass():
return password
if __name__ == "__main__":
def main():
bcrypt_rounds = 12
password_pepper = ""
parser = argparse.ArgumentParser(
description=(
"Calculate the hash of a new password, so that passwords can be reset"
@@ -77,3 +77,7 @@ if __name__ == "__main__":
).decode("ascii")
print(hashed)
if __name__ == "__main__":
main()
@@ -28,7 +28,7 @@ This can be extracted from postgres with::
To use, pipe the above into::
PYTHON_PATH=. ./scripts/move_remote_media_to_new_store.py <source repo> <dest repo>
PYTHON_PATH=. synapse/_scripts/move_remote_media_to_new_store.py <source repo> <dest repo>
"""
import argparse
@@ -1146,7 +1146,7 @@ class TerminalProgress(Progress):
##############################################
if __name__ == "__main__":
def main():
parser = argparse.ArgumentParser(
description="A script to port an existing synapse SQLite database to"
" a new PostgreSQL database."
@@ -1251,3 +1251,7 @@ if __name__ == "__main__":
sys.stderr.write(end_error)
sys.exit(5)
if __name__ == "__main__":
main()
+91 -42
View File
@@ -175,27 +175,14 @@ class ApplicationService:
return namespace.exclusive
return False
async def _matches_user(self, event: EventBase, store: "DataStore") -> bool:
if self.is_interested_in_user(event.sender):
return True
# also check m.room.member state key
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
return True
does_match = await self.matches_user_in_member_list(event.room_id, store)
return does_match
@cached(num_args=1, cache_context=True)
async def matches_user_in_member_list(
async def _matches_user_in_member_list(
self,
room_id: str,
store: "DataStore",
cache_context: _CacheContext,
) -> bool:
"""Check if this service is interested a room based upon it's membership
"""Check if this service is interested a room based upon its membership
Args:
room_id: The room to check.
@@ -214,47 +201,110 @@ class ApplicationService:
return True
return False
def _matches_room_id(self, event: EventBase) -> bool:
if hasattr(event, "room_id"):
return self.is_interested_in_room(event.room_id)
return False
def is_interested_in_user(
self,
user_id: str,
) -> bool:
"""
Returns whether the application is interested in a given user ID.
async def _matches_aliases(self, event: EventBase, store: "DataStore") -> bool:
alias_list = await store.get_aliases_for_room(event.room_id)
The appservice is considered to be interested in a user if either: the
user ID is in the appservice's user namespace, or if the user is the
appservice's configured sender_localpart.
Args:
user_id: The ID of the user to check.
Returns:
True if the application service is interested in the user, False if not.
"""
return (
# User is the appservice's sender_localpart user
user_id == self.sender
# User is in the appservice's user namespace
or self.is_user_in_namespace(user_id)
)
@cached(num_args=1, cache_context=True)
async def is_interested_in_room(
self,
room_id: str,
store: "DataStore",
cache_context: _CacheContext,
) -> bool:
"""
Returns whether the application service is interested in a given room ID.
The appservice is considered to be interested in the room if either: the ID or one
of the aliases of the room is in the appservice's room ID or alias namespace
respectively, or one of the members of the room falls into the appservice's user
namespace.
Args:
room_id: The ID of the room to check.
store: The homeserver's datastore class.
Returns:
True if the application service is interested in the room, False if not.
"""
# Check if we have interest in this room ID
if self.is_room_id_in_namespace(room_id):
return True
# likewise with the room's aliases (if it has any)
alias_list = await store.get_aliases_for_room(room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
if self.is_room_alias_in_namespace(alias):
return True
return False
# And finally, perform an expensive check on whether any of the
# users in the room match the appservice's user namespace
return await self._matches_user_in_member_list(
room_id, store, on_invalidate=cache_context.invalidate
)
async def is_interested(self, event: EventBase, store: "DataStore") -> bool:
@cached(num_args=1, cache_context=True)
async def is_interested_in_event(
self,
event_id: str,
event: EventBase,
store: "DataStore",
cache_context: _CacheContext,
) -> bool:
"""Check if this service is interested in this event.
Args:
event_id: The ID of the event to check. This is purely used for simplifying the
caching of calls to this method.
event: The event to check.
store: The datastore to query.
Returns:
True if this service would like to know about this event.
True if this service would like to know about this event, otherwise False.
"""
# Do cheap checks first
if self._matches_room_id(event):
# Check if we're interested in this event's sender by namespace (or if they're the
# sender_localpart user)
if self.is_interested_in_user(event.sender):
return True
# This will check the namespaces first before
# checking the store, so should be run before _matches_aliases
if await self._matches_user(event, store):
# additionally, if this is a membership event, perform the same checks on
# the user it references
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
return True
# This will check the store, so should be run last
if await self._matches_aliases(event, store):
# This will check the datastore, so should be run last
if await self.is_interested_in_room(
event.room_id, store, on_invalidate=cache_context.invalidate
):
return True
return False
@cached(num_args=1)
@cached(num_args=1, cache_context=True)
async def is_interested_in_presence(
self, user_id: UserID, store: "DataStore"
self, user_id: UserID, store: "DataStore", cache_context: _CacheContext
) -> bool:
"""Check if this service is interested a user's presence
@@ -272,20 +322,19 @@ class ApplicationService:
# Then find out if the appservice is interested in any of those rooms
for room_id in room_ids:
if await self.matches_user_in_member_list(room_id, store):
if await self.is_interested_in_room(
room_id, store, on_invalidate=cache_context.invalidate
):
return True
return False
def is_interested_in_user(self, user_id: str) -> bool:
return (
bool(self._matches_regex(ApplicationService.NS_USERS, user_id))
or user_id == self.sender
)
def is_user_in_namespace(self, user_id: str) -> bool:
return bool(self._matches_regex(ApplicationService.NS_USERS, user_id))
def is_interested_in_alias(self, alias: str) -> bool:
def is_room_alias_in_namespace(self, alias: str) -> bool:
return bool(self._matches_regex(ApplicationService.NS_ALIASES, alias))
def is_interested_in_room(self, room_id: str) -> bool:
def is_room_id_in_namespace(self, room_id: str) -> bool:
return bool(self._matches_regex(ApplicationService.NS_ROOMS, room_id))
def is_exclusive_user(self, user_id: str) -> bool:
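Two things are going on in this file. First, the pure regex checks are renamed to say what they are (`is_user_in_namespace`, `is_room_alias_in_namespace`, `is_room_id_in_namespace`), freeing the `is_interested_in_*` names for the higher-level, store-backed questions. Second, the store-backed checks are layered cheapest-first and cached with `cache_context=True`, which hands each method a `_CacheContext` whose `invalidate` can be threaded into nested cached calls so the entries expire together. A condensed sketch of the resulting check order (not the real class; the caching and store plumbing are elided):

    async def service_wants_event(service, event, store) -> bool:
        # 1. Cheap, pure-regex checks first: the sender, then (for
        #    membership events) the user the event targets.
        if service.is_interested_in_user(event.sender):
            return True
        if event.type == "m.room.member" and service.is_interested_in_user(
            event.state_key
        ):
            return True
        # 2. Only then the store-backed room check: room ID namespace,
        #    then alias namespaces, then the expensive member-list scan.
        #    The real code passes on_invalidate=cache_context.invalidate
        #    here so the cache entries chain-invalidate.
        return await service.is_interested_in_room(event.room_id, store)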
+13 -11
@@ -25,7 +25,7 @@ from synapse.appservice import (
TransactionUnusedFallbackKeys,
)
from synapse.events import EventBase
from synapse.events.utils import serialize_event
from synapse.events.utils import SerializeEventConfig, serialize_event
from synapse.http.client import SimpleHttpClient
from synapse.types import JsonDict, ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache
@@ -321,16 +321,18 @@ class ApplicationServiceApi(SimpleHttpClient):
serialize_event(
e,
time_now,
as_client_event=True,
# If this is an invite or a knock membership event, and we're interested
# in this user, then include any stripped state alongside the event.
include_stripped_room_state=(
e.type == EventTypes.Member
and (
e.membership == Membership.INVITE
or e.membership == Membership.KNOCK
)
and service.is_interested_in_user(e.state_key)
config=SerializeEventConfig(
as_client_event=True,
# If this is an invite or a knock membership event, and we're interested
# in this user, then include any stripped state alongside the event.
include_stripped_room_state=(
e.type == EventTypes.Member
and (
e.membership == Membership.INVITE
or e.membership == Membership.KNOCK
)
and service.is_interested_in_user(e.state_key)
),
),
)
for e in events
+1 -1
@@ -383,7 +383,7 @@ class RootConfig:
Build a default configuration file
This is used when the user explicitly asks us to generate a config file
(eg with --generate_config).
(eg with --generate-config).
Args:
config_dir_path: The path where the config files are kept. Used to
+3 -1
@@ -310,7 +310,9 @@ class EventBase(metaclass=abc.ABCMeta):
depth: DictProperty[int] = DictProperty("depth")
content: DictProperty[JsonDict] = DictProperty("content")
hashes: DictProperty[Dict[str, str]] = DictProperty("hashes")
origin: DictProperty[str] = DictProperty("origin")
origin: DictProperty[str] = DictProperty(
"origin"
) # TODO: can we get rid of this?
origin_server_ts: DictProperty[int] = DictProperty("origin_server_ts")
redacts: DefaultDictProperty[Optional[str]] = DefaultDictProperty("redacts", None)
room_id: DictProperty[str] = DictProperty("room_id")
+7 -3
@@ -174,7 +174,9 @@ class ThirdPartyEventRules:
] = None,
on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
on_profile_update: Optional[ON_PROFILE_UPDATE_CALLBACK] = None,
on_deactivation: Optional[ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK] = None,
on_user_deactivation_status_changed: Optional[
ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK
] = None,
) -> None:
"""Register callbacks from modules for each hook."""
if check_event_allowed is not None:
@@ -199,8 +201,10 @@ class ThirdPartyEventRules:
if on_profile_update is not None:
self._on_profile_update_callbacks.append(on_profile_update)
if on_deactivation is not None:
self._on_user_deactivation_status_changed_callbacks.append(on_deactivation)
if on_user_deactivation_status_changed is not None:
self._on_user_deactivation_status_changed_callbacks.append(
on_user_deactivation_status_changed,
)
async def check_event_allowed(
self, event: EventBase, context: EventContext
+54 -27
@@ -26,6 +26,7 @@ from typing import (
Union,
)
import attr
from frozendict import frozendict
from synapse.api.constants import EventContentFields, EventTypes, RelationTypes
@@ -303,29 +304,37 @@ def format_event_for_client_v2_without_room_id(d: JsonDict) -> JsonDict:
return d
@attr.s(slots=True, frozen=True, auto_attribs=True)
class SerializeEventConfig:
as_client_event: bool = True
# Function to convert from federation format to client format
event_format: Callable[[JsonDict], JsonDict] = format_event_for_client_v1
# ID of the user's auth token - used for namespacing of transaction IDs
token_id: Optional[int] = None
# List of event fields to include. If empty, all fields will be returned.
only_event_fields: Optional[List[str]] = None
# Some events can have stripped room state stored in the `unsigned` field.
# This is required for invite and knock functionality. If this option is
# False, that state will be removed from the event before it is returned.
# Otherwise, it will be kept.
include_stripped_room_state: bool = False
_DEFAULT_SERIALIZE_EVENT_CONFIG = SerializeEventConfig()
def serialize_event(
e: Union[JsonDict, EventBase],
time_now_ms: int,
*,
as_client_event: bool = True,
event_format: Callable[[JsonDict], JsonDict] = format_event_for_client_v1,
token_id: Optional[str] = None,
only_event_fields: Optional[List[str]] = None,
include_stripped_room_state: bool = False,
config: SerializeEventConfig = _DEFAULT_SERIALIZE_EVENT_CONFIG,
) -> JsonDict:
"""Serialize event for clients
Args:
e: The event to serialize.
time_now_ms: The current time in milliseconds.
as_client_event
event_format
token_id
only_event_fields
include_stripped_room_state: Some events can have stripped room state
stored in the `unsigned` field. This is required for invite and knock
functionality. If this option is False, that state will be removed from the
event before it is returned. Otherwise, it will be kept.
config: Event serialization config
Returns:
The serialized event dictionary.
@@ -348,11 +357,11 @@ def serialize_event(
if "redacted_because" in e.unsigned:
d["unsigned"]["redacted_because"] = serialize_event(
e.unsigned["redacted_because"], time_now_ms, event_format=event_format
e.unsigned["redacted_because"], time_now_ms, config=config
)
if token_id is not None:
if token_id == getattr(e.internal_metadata, "token_id", None):
if config.token_id is not None:
if config.token_id == getattr(e.internal_metadata, "token_id", None):
txn_id = getattr(e.internal_metadata, "txn_id", None)
if txn_id is not None:
d["unsigned"]["transaction_id"] = txn_id
@@ -361,13 +370,14 @@ def serialize_event(
# that are meant to provide metadata about a room to an invitee/knocker. They are
# intended to only be included in specific circumstances, such as down sync, and
# should not be included in any other case.
if not include_stripped_room_state:
if not config.include_stripped_room_state:
d["unsigned"].pop("invite_room_state", None)
d["unsigned"].pop("knock_room_state", None)
if as_client_event:
d = event_format(d)
if config.as_client_event:
d = config.event_format(d)
only_event_fields = config.only_event_fields
if only_event_fields:
if not isinstance(only_event_fields, list) or not all(
isinstance(f, str) for f in only_event_fields
@@ -390,18 +400,18 @@ class EventClientSerializer:
event: Union[JsonDict, EventBase],
time_now: int,
*,
config: SerializeEventConfig = _DEFAULT_SERIALIZE_EVENT_CONFIG,
bundle_aggregations: Optional[Dict[str, "BundledAggregations"]] = None,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.
Args:
event: The event being serialized.
time_now: The current time in milliseconds
config: Event serialization config
bundle_aggregations: Whether to include the bundled aggregations for this
event. Only applies to non-state events. (State events never include
bundled aggregations.)
**kwargs: Arguments to pass to `serialize_event`
Returns:
The serialized event
@@ -410,7 +420,7 @@ class EventClientSerializer:
if not isinstance(event, EventBase):
return event
serialized_event = serialize_event(event, time_now, **kwargs)
serialized_event = serialize_event(event, time_now, config=config)
# Check if there are any bundled aggregations to include with the event.
if bundle_aggregations:
@@ -419,6 +429,7 @@ class EventClientSerializer:
self._inject_bundled_aggregations(
event,
time_now,
config,
bundle_aggregations[event.event_id],
serialized_event,
)
@@ -456,6 +467,7 @@ class EventClientSerializer:
self,
event: EventBase,
time_now: int,
config: SerializeEventConfig,
aggregations: "BundledAggregations",
serialized_event: JsonDict,
) -> None:
@@ -466,6 +478,7 @@ class EventClientSerializer:
time_now: The current time in milliseconds
aggregations: The bundled aggregation to serialize.
serialized_event: The serialized event which may be modified.
config: Event serialization config
"""
serialized_aggregations = {}
@@ -493,8 +506,8 @@ class EventClientSerializer:
thread = aggregations.thread
# Don't bundle aggregations as this could recurse forever.
serialized_latest_event = self.serialize_event(
thread.latest_event, time_now, bundle_aggregations=None
serialized_latest_event = serialize_event(
thread.latest_event, time_now, config=config
)
# Manually apply an edit, if one exists.
if thread.latest_edit:
@@ -515,20 +528,34 @@ class EventClientSerializer:
)
def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
self,
events: Iterable[Union[JsonDict, EventBase]],
time_now: int,
*,
config: SerializeEventConfig = _DEFAULT_SERIALIZE_EVENT_CONFIG,
bundle_aggregations: Optional[Dict[str, "BundledAggregations"]] = None,
) -> List[JsonDict]:
"""Serializes multiple events.
Args:
events: The events to serialize.
time_now: The current time in milliseconds
**kwargs: Arguments to pass to `serialize_event`
config: Event serialization config
bundle_aggregations: Whether to include the bundled aggregations for this
event. Only applies to non-state events. (State events never include
bundled aggregations.)
Returns:
The list of serialized events
"""
return [
self.serialize_event(event, time_now=time_now, **kwargs) for event in events
self.serialize_event(
event,
time_now,
config=config,
bundle_aggregations=bundle_aggregations,
)
for event in events
]
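The refactor here collapses a pile of per-call keyword arguments into one frozen attrs object, so a handler can build its serialization options once and pass the same `config` to every serialize call (including the recursive `redacted_because` one). A sketch of the calling convention; `serializer`, `events`, `time_now` and `aggregations` stand in for values a handler already has in scope:

    from synapse.events.utils import SerializeEventConfig

    # Build once per request...
    config = SerializeEventConfig(
        as_client_event=True,
        include_stripped_room_state=False,
    )

    # ...and reuse it for every event in the response.
    chunk = serializer.serialize_events(
        events,
        time_now,
        config=config,
        bundle_aggregations=aggregations,
    )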
+4 -4
@@ -1428,7 +1428,7 @@ class FederationClient(FederationBase):
# Validate children_state of the room.
children_state = room.pop("children_state", [])
if not isinstance(children_state, Sequence):
if not isinstance(children_state, list):
raise InvalidResponseError("'room.children_state' must be a list")
if any(not isinstance(e, dict) for e in children_state):
raise InvalidResponseError("Invalid event in 'children_state' list")
@@ -1440,14 +1440,14 @@ class FederationClient(FederationBase):
# Validate the children rooms.
children = res.get("children", [])
if not isinstance(children, Sequence):
if not isinstance(children, list):
raise InvalidResponseError("'children' must be a list")
if any(not isinstance(r, dict) for r in children):
raise InvalidResponseError("Invalid room in 'children' list")
# Validate the inaccessible children.
inaccessible_children = res.get("inaccessible_children", [])
if not isinstance(inaccessible_children, Sequence):
if not isinstance(inaccessible_children, list):
raise InvalidResponseError("'inaccessible_children' must be a list")
if any(not isinstance(r, str) for r in inaccessible_children):
raise InvalidResponseError(
@@ -1630,7 +1630,7 @@ def _validate_hierarchy_event(d: JsonDict) -> None:
raise ValueError("Invalid event: 'content' must be a dict")
via = content.get("via")
if not isinstance(via, Sequence):
if not isinstance(via, list):
raise ValueError("Invalid event: 'via' must be a list")
if any(not isinstance(v, str) for v in via):
raise ValueError("Invalid event: 'via' must be a list of strings")
+1 -1
@@ -244,7 +244,7 @@ class FederationRemoteSendQueue(AbstractFederationSender):
self.notifier.on_new_replication_data()
def send_device_messages(self, destination: str) -> None:
def send_device_messages(self, destination: str, immediate: bool = False) -> None:
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
# stream.
+17 -9
@@ -118,7 +118,12 @@ class AbstractFederationSender(metaclass=abc.ABCMeta):
raise NotImplementedError()
@abc.abstractmethod
def send_device_messages(self, destination: str) -> None:
def send_device_messages(self, destination: str, immediate: bool = True) -> None:
"""Tells the sender that a new device message is ready to be sent to the
destination. The `immediate` flag specifies whether we should attempt to
send the messages immediately, or whether they can be delayed for a short
while (to aid performance).
"""
raise NotImplementedError()
@abc.abstractmethod
@@ -146,9 +151,8 @@ class AbstractFederationSender(metaclass=abc.ABCMeta):
@attr.s
class _PresenceQueue:
"""A queue of destinations that need to be woken up due to new presence
updates.
class _DestinationWakeupQueue:
"""A queue of destinations that need to be woken up due to new updates.
Staggers waking up of per destination queues to ensure that we don't attempt
to start TLS connections with many hosts all at once, leading to pinned CPU.
@@ -175,7 +179,7 @@ class _PresenceQueue:
if not self.processing:
self._handle()
@wrap_as_background_process("_PresenceQueue.handle")
@wrap_as_background_process("_DestinationWakeupQueue.handle")
async def _handle(self) -> None:
"""Background process to drain the queue."""
@@ -297,7 +301,7 @@ class FederationSender(AbstractFederationSender):
self._external_cache = hs.get_external_cache()
self._presence_queue = _PresenceQueue(self, self.clock)
self._destination_wakeup_queue = _DestinationWakeupQueue(self, self.clock)
def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue:
"""Get or create a PerDestinationQueue for the given destination
@@ -614,7 +618,7 @@ class FederationSender(AbstractFederationSender):
states, start_loop=False
)
self._presence_queue.add_to_queue(destination)
self._destination_wakeup_queue.add_to_queue(destination)
def build_and_send_edu(
self,
@@ -667,7 +671,7 @@ class FederationSender(AbstractFederationSender):
else:
queue.send_edu(edu)
def send_device_messages(self, destination: str) -> None:
def send_device_messages(self, destination: str, immediate: bool = False) -> None:
if destination == self.server_name:
logger.warning("Not sending device update to ourselves")
return
@@ -677,7 +681,11 @@ class FederationSender(AbstractFederationSender):
):
return
self._get_per_destination_queue(destination).attempt_new_transaction()
if immediate:
self._get_per_destination_queue(destination).attempt_new_transaction()
else:
self._get_per_destination_queue(destination).mark_new_data()
self._destination_wakeup_queue.add_to_queue(destination)
def wake_destination(self, destination: str) -> None:
"""Called when we want to retry sending transactions to a remote.
@@ -219,6 +219,16 @@ class PerDestinationQueue:
self._pending_edus.append(edu)
self.attempt_new_transaction()
def mark_new_data(self) -> None:
"""Marks that the destination has new data to send, without starting a
new transaction.
If a transaction loop is already in progress then a new transaction will
be attempted when the current one finishes.
"""
self._new_data_to_send = True
def attempt_new_transaction(self) -> None:
"""Try to start a new transaction to this destination
+2 -2
@@ -571,7 +571,7 @@ class ApplicationServicesHandler:
room_alias_str = room_alias.to_string()
services = self.store.get_app_services()
alias_query_services = [
s for s in services if (s.is_interested_in_alias(room_alias_str))
s for s in services if (s.is_room_alias_in_namespace(room_alias_str))
]
for alias_service in alias_query_services:
is_known_alias = await self.appservice_api.query_alias(
@@ -660,7 +660,7 @@ class ApplicationServicesHandler:
# inside of a list comprehension anymore.
interested_list = []
for s in services:
if await s.is_interested(event, self.store):
if await s.is_interested_in_event(event.event_id, event, self.store):
interested_list.append(s)
return interested_list
+1 -1
@@ -506,7 +506,7 @@ class DeviceHandler(DeviceWorkerHandler):
"Sending device list update notif for %r to: %r", user_id, hosts
)
for host in hosts:
self.federation_sender.send_device_messages(host)
self.federation_sender.send_device_messages(host, immediate=False)
log_kv({"message": "sent device update to host", "host": host})
async def notify_user_signature_update(
+3 -3
@@ -119,7 +119,7 @@ class DirectoryHandler:
service = requester.app_service
if service:
if not service.is_interested_in_alias(room_alias_str):
if not service.is_room_alias_in_namespace(room_alias_str):
raise SynapseError(
400,
"This application service has not reserved this kind of alias.",
@@ -221,7 +221,7 @@ class DirectoryHandler:
async def delete_appservice_association(
self, service: ApplicationService, room_alias: RoomAlias
) -> None:
if not service.is_interested_in_alias(room_alias.to_string()):
if not service.is_room_alias_in_namespace(room_alias.to_string()):
raise SynapseError(
400,
"This application service has not reserved this kind of alias",
@@ -376,7 +376,7 @@ class DirectoryHandler:
# non-exclusive locks on the alias (or there are no interested services)
services = self.store.get_app_services()
interested_services = [
s for s in services if s.is_interested_in_alias(alias.to_string())
s for s in services if s.is_room_alias_in_namespace(alias.to_string())
]
for service in interested_services:
+2 -1
@@ -19,6 +19,7 @@ from typing import TYPE_CHECKING, Iterable, List, Optional
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.events.utils import SerializeEventConfig
from synapse.handlers.presence import format_user_presence_state
from synapse.streams.config import PaginationConfig
from synapse.types import JsonDict, UserID
@@ -120,7 +121,7 @@ class EventStreamHandler:
chunks = self._event_serializer.serialize_events(
events,
time_now,
as_client_event=as_client_event,
config=SerializeEventConfig(as_client_event=as_client_event),
)
chunk = {
+6 -3
@@ -18,6 +18,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, cast
from synapse.api.constants import EduTypes, EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.events import EventBase
from synapse.events.utils import SerializeEventConfig
from synapse.events.validator import EventValidator
from synapse.handlers.presence import format_user_presence_state
from synapse.handlers.receipts import ReceiptEventSource
@@ -156,6 +157,8 @@ class InitialSyncHandler:
if limit is None:
limit = 10
serializer_options = SerializeEventConfig(as_client_event=as_client_event)
async def handle_room(event: RoomsForUser) -> None:
d: JsonDict = {
"room_id": event.room_id,
@@ -173,7 +176,7 @@ class InitialSyncHandler:
d["invite"] = self._event_serializer.serialize_event(
invite_event,
time_now,
as_client_event=as_client_event,
config=serializer_options,
)
rooms_ret.append(d)
@@ -225,7 +228,7 @@ class InitialSyncHandler:
self._event_serializer.serialize_events(
messages,
time_now=time_now,
as_client_event=as_client_event,
config=serializer_options,
)
),
"start": await start_token.to_string(self.store),
@@ -235,7 +238,7 @@ class InitialSyncHandler:
d["state"] = self._event_serializer.serialize_events(
current_state.values(),
time_now=time_now,
as_client_event=as_client_event,
config=serializer_options,
)
account_data_events = []
+3
@@ -1069,6 +1069,9 @@ class EventCreationHandler:
if relation_type == RelationTypes.ANNOTATION:
aggregation_key = relation["key"]
if len(aggregation_key) > 500:
raise SynapseError(400, "Aggregation key is too long")
already_exists = await self.store.has_user_annotated_event(
relates_to, event.type, aggregation_key, event.sender
)
+5 -2
@@ -22,6 +22,7 @@ from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.api.filtering import Filter
from synapse.events.utils import SerializeEventConfig
from synapse.handlers.room import ShutdownRoomResponse
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.state import StateFilter
@@ -541,13 +542,15 @@ class PaginationHandler:
time_now = self.clock.time_msec()
serialize_options = SerializeEventConfig(as_client_event=as_client_event)
chunk = {
"chunk": (
self._event_serializer.serialize_events(
events,
time_now,
config=serialize_options,
bundle_aggregations=aggregations,
as_client_event=as_client_event,
)
),
"start": await from_token.to_string(self.store),
@@ -556,7 +559,7 @@ class PaginationHandler:
if state:
chunk["state"] = self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
state, time_now, config=serialize_options
)
return chunk
+1 -1
@@ -269,7 +269,7 @@ class ReceiptEventSource(EventSource[int, JsonDict]):
# Then filter down to rooms that the AS can read
events = []
for room_id, event in rooms_to_events.items():
if not await service.matches_user_in_member_list(room_id, self.store):
if not await service.is_interested_in_room(room_id, self.store):
continue
events.append(event)
+1 -1
@@ -1736,8 +1736,8 @@ class RoomMemberMasterHandler(RoomMemberHandler):
txn_id=txn_id,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
outlier=True,
)
event.internal_metadata.outlier = True
event.internal_metadata.out_of_band_membership = True
result_event = await self.event_creation_handler.handle_new_client_event(
+1 -1
@@ -857,7 +857,7 @@ class _RoomEntry:
def _has_valid_via(e: EventBase) -> bool:
via = e.content.get("via")
if not via or not isinstance(via, Sequence):
if not via or not isinstance(via, list):
return False
for v in via:
if not isinstance(v, str):
+1 -3
@@ -486,9 +486,7 @@ class TypingNotificationEventSource(EventSource[int, JsonDict]):
if handler._room_serials[room_id] <= from_key:
continue
if not await service.matches_user_in_member_list(
room_id, self._main_store
):
if not await service.is_interested_in_room(room_id, self._main_store):
continue
events.append(self._make_event_for(room_id))
+8
@@ -59,6 +59,8 @@ from synapse.events.third_party_rules import (
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK,
ON_CREATE_ROOM_CALLBACK,
ON_NEW_EVENT_CALLBACK,
ON_PROFILE_UPDATE_CALLBACK,
ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK,
)
from synapse.handlers.account_validity import (
IS_USER_EXPIRED_CALLBACK,
@@ -281,6 +283,10 @@ class ModuleApi:
CHECK_VISIBILITY_CAN_BE_MODIFIED_CALLBACK
] = None,
on_new_event: Optional[ON_NEW_EVENT_CALLBACK] = None,
on_profile_update: Optional[ON_PROFILE_UPDATE_CALLBACK] = None,
on_user_deactivation_status_changed: Optional[
ON_USER_DEACTIVATION_STATUS_CHANGED_CALLBACK
] = None,
) -> None:
"""Registers callbacks for third party event rules capabilities.
@@ -292,6 +298,8 @@ class ModuleApi:
check_threepid_can_be_invited=check_threepid_can_be_invited,
check_visibility_can_be_modified=check_visibility_can_be_modified,
on_new_event=on_new_event,
on_profile_update=on_profile_update,
on_user_deactivation_status_changed=on_user_deactivation_status_changed,
)
def register_presence_router_callbacks(
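For module authors, the rename is visible in the registration keyword: the hook is `on_user_deactivation_status_changed` everywhere, not `on_deactivation`. A hedged sketch of a module registering it through the module API; the module class is illustrative, and the `(user_id, deactivated, by_admin)` signature is what the callback's type alias suggests:

    class ExampleModule:
        def __init__(self, config, api):
            # Register under the new, more descriptive keyword.
            api.register_third_party_rules_callbacks(
                on_user_deactivation_status_changed=self.on_status_change,
            )

        async def on_status_change(
            self, user_id: str, deactivated: bool, by_admin: bool
        ) -> None:
            # React to the account being deactivated or reactivated.
            print(f"{user_id}: deactivated={deactivated}, by_admin={by_admin}")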
+2 -2
@@ -83,8 +83,8 @@ REQUIREMENTS = [
# ijson 3.1.4 fixes a bug with "." in property names
"ijson>=3.1.4",
"matrix-common~=1.1.0",
# For runtime introspection of our dependencies
"packaging~=21.3",
# We need packaging.requirements.Requirement, added in 16.1.
"packaging>=16.1",
]
CONDITIONAL_REQUIREMENTS = {
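The relaxed bound is safe because the only API consumed from the package is `packaging.requirements.Requirement`, present since 16.1. A sketch of the introspection it provides:

    from packaging.requirements import Requirement

    req = Requirement("matrix-common~=1.1.0")
    req.name                         # 'matrix-common'
    str(req.specifier)               # '~=1.1.0'
    req.specifier.contains("1.1.2")  # True: 1.1.2 satisfies ~=1.1.0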
+1 -1
@@ -380,7 +380,7 @@ class FederationSenderHandler:
# changes.
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
self.federation_sender.send_device_messages(host, immediate=False)
elif stream_name == ToDeviceStream.NAME:
# The to_device stream includes stuff to be pushed to both local
+7 -2
@@ -16,7 +16,10 @@ import logging
from typing import TYPE_CHECKING, Tuple
from synapse.api.constants import ReceiptTypes
from synapse.events.utils import format_event_for_client_v2_without_room_id
from synapse.events.utils import (
SerializeEventConfig,
format_event_for_client_v2_without_room_id,
)
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
@@ -75,7 +78,9 @@ class NotificationsServlet(RestServlet):
self._event_serializer.serialize_event(
notif_events[pa.event_id],
self.clock.time_msec(),
event_format=format_event_for_client_v2_without_room_id,
config=SerializeEventConfig(
event_format=format_event_for_client_v2_without_room_id
),
)
),
}
+14 -41
@@ -27,50 +27,15 @@ from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.client._base import client_patterns
from synapse.storage.relations import (
AggregationPaginationToken,
PaginationChunk,
RelationPaginationToken,
)
from synapse.types import JsonDict, RoomStreamToken, StreamToken
from synapse.storage.relations import AggregationPaginationToken, PaginationChunk
from synapse.types import JsonDict, StreamToken
if TYPE_CHECKING:
from synapse.server import HomeServer
from synapse.storage.databases.main import DataStore
logger = logging.getLogger(__name__)
async def _parse_token(
store: "DataStore", token: Optional[str]
) -> Optional[StreamToken]:
"""
For backwards compatibility support RelationPaginationToken, but new pagination
tokens are generated as full StreamTokens, to be compatible with /sync and /messages.
"""
if not token:
return None
# Luckily the format for StreamToken and RelationPaginationToken differ enough
# that they can easily be separated. An "_" appears in the serialization of
# RoomStreamToken (as part of StreamToken), but RelationPaginationToken uses
# "-" only for separators.
if "_" in token:
return await StreamToken.from_string(store, token)
else:
relation_token = RelationPaginationToken.from_string(token)
return StreamToken(
room_key=RoomStreamToken(relation_token.topological, relation_token.stream),
presence_key=0,
typing_key=0,
receipt_key=0,
account_data_key=0,
push_rules_key=0,
to_device_key=0,
device_list_key=0,
groups_key=0,
)
class RelationPaginationServlet(RestServlet):
"""API to paginate relations on an event by topological ordering, optionally
filtered by relation type and event type.
@@ -122,8 +87,12 @@ class RelationPaginationServlet(RestServlet):
pagination_chunk = PaginationChunk(chunk=[])
else:
# Return the relations
from_token = await _parse_token(self.store, from_token_str)
to_token = await _parse_token(self.store, to_token_str)
from_token = None
if from_token_str:
from_token = await StreamToken.from_string(self.store, from_token_str)
to_token = None
if to_token_str:
to_token = await StreamToken.from_string(self.store, to_token_str)
pagination_chunk = await self.store.get_relations_for_event(
event_id=parent_id,
@@ -317,8 +286,12 @@ class RelationAggregationGroupPaginationServlet(RestServlet):
from_token_str = parse_string(request, "from")
to_token_str = parse_string(request, "to")
from_token = await _parse_token(self.store, from_token_str)
to_token = await _parse_token(self.store, to_token_str)
from_token = None
if from_token_str:
from_token = await StreamToken.from_string(self.store, from_token_str)
to_token = None
if to_token_str:
to_token = await StreamToken.from_string(self.store, to_token_str)
result = await self.store.get_relations_for_event(
event_id=parent_id,
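With the legacy `RelationPaginationToken` path gone, both servlets parse tokens the same four-line way. If the repetition grates, a tiny helper would do (hypothetical; not part of this diff):

    from typing import Optional

    async def parse_optional_stream_token(
        store: "DataStore", token_str: Optional[str]
    ) -> Optional[StreamToken]:
        # StreamToken.from_string may hit the store, hence the await;
        # an absent query parameter maps to None.
        if not token_str:
            return None
        return await StreamToken.from_string(store, token_str)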