Compare commits
4 Commits
v1.2.1 ... anoa/user_

| Author | SHA1 | Date |
|---|---|---|
|  | 6f98bc6512 |  |
|  | 50a63d5fda |  |
|  | 280bfb15ce |  |
|  | fff89c33d8 |  |
122  CHANGES.md
@@ -1,125 +1,3 @@
Synapse 1.2.1 (2019-07-26)
==========================

Security update
---------------

This release includes *four* security fixes:

- Prevent an attack where a federated server could send redactions for arbitrary events in v1 and v2 rooms. ([\#5767](https://github.com/matrix-org/synapse/issues/5767))
- Prevent a denial-of-service attack where cycles of redaction events would make Synapse spin infinitely. Thanks to `@lrizika:matrix.org` for identifying and responsibly disclosing this issue. ([0f2ecb961](https://github.com/matrix-org/synapse/commit/0f2ecb961))
- Prevent an attack where users could be joined or parted from public rooms without their consent. Thanks to @dylangerdaly for identifying and responsibly disclosing this issue. ([\#5744](https://github.com/matrix-org/synapse/issues/5744))
- Fix a vulnerability where a federated server could spoof read-receipts from
  users on other servers. Thanks to @dylangerdaly for identifying this issue too. ([\#5743](https://github.com/matrix-org/synapse/issues/5743))

Additionally, the following fix was in Synapse **1.2.0**, but was not correctly
identified during the original release:

- It was possible for a room moderator to send a redaction for an `m.room.create` event, which would downgrade the room to version 1. Thanks to `/dev/ponies` for identifying and responsibly disclosing this issue! ([\#5701](https://github.com/matrix-org/synapse/issues/5701))


Synapse 1.2.0 (2019-07-25)
==========================

No significant changes.


Synapse 1.2.0rc2 (2019-07-24)
=============================

Bugfixes
--------

- Fix a regression introduced in v1.2.0rc1 which led to incorrect labels on some prometheus metrics. ([\#5734](https://github.com/matrix-org/synapse/issues/5734))


Synapse 1.2.0rc1 (2019-07-22)
=============================

Security fixes
--------------

This update included a security fix which was initially incorrectly flagged as
a regular bug fix.

- It was possible for a room moderator to send a redaction for an `m.room.create` event, which would downgrade the room to version 1. Thanks to `/dev/ponies` for identifying and responsibly disclosing this issue! ([\#5701](https://github.com/matrix-org/synapse/issues/5701))

Features
--------

- Add support for opentracing. ([\#5544](https://github.com/matrix-org/synapse/issues/5544), [\#5712](https://github.com/matrix-org/synapse/issues/5712))
- Add ability to pull all locally stored events out of synapse that a particular user can see. ([\#5589](https://github.com/matrix-org/synapse/issues/5589))
- Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance. ([\#5597](https://github.com/matrix-org/synapse/issues/5597))
- Add `sender` and `origin_server_ts` fields to `m.replace`. ([\#5613](https://github.com/matrix-org/synapse/issues/5613))
- Add default push rule to ignore reactions. ([\#5623](https://github.com/matrix-org/synapse/issues/5623))
- Include the original event when asking for its relations. ([\#5626](https://github.com/matrix-org/synapse/issues/5626))
- Implement `session_lifetime` configuration option, after which access tokens will expire. ([\#5660](https://github.com/matrix-org/synapse/issues/5660))
- Return "This account has been deactivated" when a deactivated user tries to login. ([\#5674](https://github.com/matrix-org/synapse/issues/5674))
- Enable aggregations support by default. ([\#5714](https://github.com/matrix-org/synapse/issues/5714))


Bugfixes
--------

- Fix 'utime went backwards' errors on daemonization. ([\#5609](https://github.com/matrix-org/synapse/issues/5609))
- Various minor fixes to the federation request rate limiter. ([\#5621](https://github.com/matrix-org/synapse/issues/5621))
- Forbid viewing relations on an event once it has been redacted. ([\#5629](https://github.com/matrix-org/synapse/issues/5629))
- Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format. ([\#5638](https://github.com/matrix-org/synapse/issues/5638))
- Fix newly-registered users not being able to lookup their own profile without joining a room. ([\#5644](https://github.com/matrix-org/synapse/issues/5644))
- Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`. ([\#5654](https://github.com/matrix-org/synapse/issues/5654))
- Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated. ([\#5658](https://github.com/matrix-org/synapse/issues/5658))
- Fix some problems with authenticating redactions in recent room versions. ([\#5699](https://github.com/matrix-org/synapse/issues/5699), [\#5700](https://github.com/matrix-org/synapse/issues/5700), [\#5707](https://github.com/matrix-org/synapse/issues/5707))


Updates to the Docker image
---------------------------

- Base Docker image on a newer Alpine Linux version (3.8 -> 3.10). ([\#5619](https://github.com/matrix-org/synapse/issues/5619))
- Add missing space in default logging file format generated by the Docker image. ([\#5620](https://github.com/matrix-org/synapse/issues/5620))


Improved Documentation
----------------------

- Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks! ([\#5397](https://github.com/matrix-org/synapse/issues/5397))
- --no-pep517 should be --no-use-pep517 in the documentation to setup the development environment. ([\#5651](https://github.com/matrix-org/synapse/issues/5651))
- Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks! ([\#5661](https://github.com/matrix-org/synapse/issues/5661))
- Minor tweaks to postgres documentation. ([\#5675](https://github.com/matrix-org/synapse/issues/5675))


Deprecations and Removals
-------------------------

- Remove support for the `invite_3pid_guest` configuration setting. ([\#5625](https://github.com/matrix-org/synapse/issues/5625))


Internal Changes
----------------

- Move logging code out of `synapse.util` and into `synapse.logging`. ([\#5606](https://github.com/matrix-org/synapse/issues/5606), [\#5617](https://github.com/matrix-org/synapse/issues/5617))
- Add a blacklist file to the repo to blacklist certain sytests from failing CI. ([\#5611](https://github.com/matrix-org/synapse/issues/5611))
- Make runtime errors surrounding password reset emails much clearer. ([\#5616](https://github.com/matrix-org/synapse/issues/5616))
- Remove dead code for persisting outgoing federation transactions. ([\#5622](https://github.com/matrix-org/synapse/issues/5622))
- Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI. ([\#5627](https://github.com/matrix-org/synapse/issues/5627))
- Move RegistrationHandler.get_or_create_user to test code. ([\#5628](https://github.com/matrix-org/synapse/issues/5628))
- Add some more common python virtual-environment paths to the black exclusion list. ([\#5630](https://github.com/matrix-org/synapse/issues/5630))
- Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details. ([\#5636](https://github.com/matrix-org/synapse/issues/5636))
- Unblacklist some user_directory sytests. ([\#5637](https://github.com/matrix-org/synapse/issues/5637))
- Factor out some redundant code in the login implementation. ([\#5639](https://github.com/matrix-org/synapse/issues/5639))
- Update ModuleApi to avoid register(generate_token=True). ([\#5640](https://github.com/matrix-org/synapse/issues/5640))
- Remove access-token support from `RegistrationHandler.register`, and rename it. ([\#5641](https://github.com/matrix-org/synapse/issues/5641))
- Remove access-token support from `RegistrationStore.register`, and rename it. ([\#5642](https://github.com/matrix-org/synapse/issues/5642))
- Improve logging for auto-join when a new user is created. ([\#5643](https://github.com/matrix-org/synapse/issues/5643))
- Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure. ([\#5645](https://github.com/matrix-org/synapse/issues/5645))
- Fix a small typo in a code comment. ([\#5655](https://github.com/matrix-org/synapse/issues/5655))
- Clean up exception handling around client access tokens. ([\#5656](https://github.com/matrix-org/synapse/issues/5656))
- Add a mechanism for per-test homeserver configuration in the unit tests. ([\#5657](https://github.com/matrix-org/synapse/issues/5657))
- Inline issue_access_token. ([\#5659](https://github.com/matrix-org/synapse/issues/5659))
- Update the sytest BuildKite configuration to checkout Synapse in `/src`. ([\#5664](https://github.com/matrix-org/synapse/issues/5664))
- Add a `docker` type to the towncrier configuration. ([\#5673](https://github.com/matrix-org/synapse/issues/5673))
- Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces. ([\#5689](https://github.com/matrix-org/synapse/issues/5689))
- Documentation for opentracing. ([\#5703](https://github.com/matrix-org/synapse/issues/5703))


Synapse 1.1.0 (2019-07-04)
==========================


UPGRADE.rst
@@ -49,13 +49,6 @@ returned by the Client-Server API:

    # configured on port 443.
    curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"

Upgrading to v1.2.0
===================

Some counter metrics have been renamed, with the old names deprecated. See
`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.

Upgrading to v1.1.0
===================
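For illustration, the `sender` and `origin_server_ts` fields added to `m.replace` by [\#5613](https://github.com/matrix-org/synapse/issues/5613) would appear in an edited event's bundled aggregation roughly like this; only the two field names come from the changelog, the surrounding structure is assumed:

.. code-block:: python

   # Hypothetical bundled aggregation for an edited event, as a Python dict.
   # `sender` and `origin_server_ts` are the fields added by #5613; the
   # surrounding "m.relations" structure is an assumption, not from this diff.
   edited_event_unsigned = {
       "m.relations": {
           "m.replace": {
               "event_id": "$edit_event_id",       # the replacing event
               "sender": "@alice:example.org",     # new in 1.2.0rc1
               "origin_server_ts": 1563892800000,  # new in 1.2.0rc1
           }
       }
   }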
1  changelog.d/5397.doc  Normal file
@@ -0,0 +1 @@
Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks!

1  changelog.d/5544.misc  Normal file
@@ -0,0 +1 @@
Added opentracing and configuration options.

1  changelog.d/5589.feature  Normal file
@@ -0,0 +1 @@
Add ability to pull all locally stored events out of synapse that a particular user can see.

1  changelog.d/5606.misc  Normal file
@@ -0,0 +1 @@
Move logging code out of `synapse.util` and into `synapse.logging`.

1  changelog.d/5609.bugfix  Normal file
@@ -0,0 +1 @@
Fix 'utime went backwards' errors on daemonization.

1  changelog.d/5611.misc  Normal file
@@ -0,0 +1 @@
Add a blacklist file to the repo to blacklist certain sytests from failing CI.

1  changelog.d/5613.feature  Normal file
@@ -0,0 +1 @@
Add `sender` and `origin_server_ts` fields to `m.replace`.

1  changelog.d/5616.misc  Normal file
@@ -0,0 +1 @@
Make runtime errors surrounding password reset emails much clearer.

1  changelog.d/5617.misc  Normal file
@@ -0,0 +1 @@
Move logging code out of `synapse.util` and into `synapse.logging`.

1  changelog.d/5619.docker  Normal file
@@ -0,0 +1 @@
Base Docker image on a newer Alpine Linux version (3.8 -> 3.10).

1  changelog.d/5620.docker  Normal file
@@ -0,0 +1 @@
Add missing space in default logging file format generated by the Docker image.

1  changelog.d/5621.bugfix  Normal file
@@ -0,0 +1 @@
Various minor fixes to the federation request rate limiter.

1  changelog.d/5622.misc  Normal file
@@ -0,0 +1 @@
Remove dead code for persisting outgoing federation transactions.

1  changelog.d/5623.feature  Normal file
@@ -0,0 +1 @@
Add default push rule to ignore reactions.

1  changelog.d/5625.removal  Normal file
@@ -0,0 +1 @@
Remove support for the `invite_3pid_guest` configuration setting.

1  changelog.d/5626.feature  Normal file
@@ -0,0 +1 @@
Include the original event when asking for its relations.

1  changelog.d/5627.misc  Normal file
@@ -0,0 +1 @@
Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI.

1  changelog.d/5628.misc  Normal file
@@ -0,0 +1 @@
Move RegistrationHandler.get_or_create_user to test code.

1  changelog.d/5630.misc  Normal file
@@ -0,0 +1 @@
Add some more common python virtual-environment paths to the black exclusion list.

1  changelog.d/5637.misc  Normal file
@@ -0,0 +1 @@
Unblacklist some user_directory sytests.

1  changelog.d/5638.bugfix  Normal file
@@ -0,0 +1 @@
Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format.

1  changelog.d/5639.misc  Normal file
@@ -0,0 +1 @@
Factor out some redundant code in the login implementation.

1  changelog.d/5640.misc  Normal file
@@ -0,0 +1 @@
Update ModuleApi to avoid register(generate_token=True).

1  changelog.d/5641.misc  Normal file
@@ -0,0 +1 @@
Remove access-token support from `RegistrationHandler.register`, and rename it.

1  changelog.d/5642.misc  Normal file
@@ -0,0 +1 @@
Remove access-token support from `RegistrationStore.register`, and rename it.

1  changelog.d/5643.misc  Normal file
@@ -0,0 +1 @@
Improve logging for auto-join when a new user is created.

1  changelog.d/5644.bugfix  Normal file
@@ -0,0 +1 @@
Fix newly-registered users not being able to lookup their own profile without joining a room.

1  changelog.d/5645.misc  Normal file
@@ -0,0 +1 @@
Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure.

1  changelog.d/5651.doc  Normal file
@@ -0,0 +1 @@
--no-pep517 should be --no-use-pep517 in the documentation to setup the development environment.

1  changelog.d/5654.bugfix  Normal file
@@ -0,0 +1 @@
Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`.

1  changelog.d/5655.misc  Normal file
@@ -0,0 +1 @@
Fix a small typo in a code comment.

1  changelog.d/5656.misc  Normal file
@@ -0,0 +1 @@
Clean up exception handling around client access tokens.

1  changelog.d/5657.misc  Normal file
@@ -0,0 +1 @@
Add a mechanism for per-test homeserver configuration in the unit tests.

1  changelog.d/5658.bugfix  Normal file
@@ -0,0 +1 @@
Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated.

1  changelog.d/5659.misc  Normal file
@@ -0,0 +1 @@
Inline issue_access_token.

1  changelog.d/5660.feature  Normal file
@@ -0,0 +1 @@
Implement `session_lifetime` configuration option, after which access tokens will expire.

1  changelog.d/5661.doc  Normal file
@@ -0,0 +1 @@
Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks!

1  changelog.d/5664.misc  Normal file
@@ -0,0 +1 @@
Update the sytest BuildKite configuration to checkout Synapse in `/src`.

1  changelog.d/5673.misc  Normal file
@@ -0,0 +1 @@
Add a `docker` type to the towncrier configuration.

1  changelog.d/5674.feature  Normal file
@@ -0,0 +1 @@
Return "This account has been deactivated" when a deactivated user tries to login.

1  changelog.d/5686.feature  Normal file
@@ -0,0 +1 @@
Use `M_USER_DEACTIVATED` instead of `M_UNKNOWN` for errcode when a deactivated user attempts to login.
16  debian/changelog  vendored
@@ -1,21 +1,9 @@
matrix-synapse-py3 (1.2.1) stable; urgency=medium

  * New synapse release 1.2.1.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 26 Jul 2019 11:32:47 +0100

matrix-synapse-py3 (1.2.0) stable; urgency=medium
matrix-synapse-py3 (1.1.0-1) UNRELEASED; urgency=medium

  [ Amber Brown ]
  * Update logging config defaults to match API changes in Synapse.

  [ Richard van der Hoff ]
  * Add Recommends and Depends for some libraries which you probably want.

  [ Synapse Packaging team ]
  * New synapse release 1.2.0.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Jul 2019 14:10:07 +0100
 -- Erik Johnston <erikj@rae>  Thu, 04 Jul 2019 13:59:02 +0100

matrix-synapse-py3 (1.1.0) stable; urgency=medium
7  debian/control  vendored
@@ -2,20 +2,16 @@ Source: matrix-synapse-py3
Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
Build-Depends:
 debhelper (>= 9),
 dh-systemd,
 dh-virtualenv (>= 1.1),
 libsystemd-dev,
 libpq-dev,
 lsb-release,
 python3-dev,
 python3,
 python3-setuptools,
 python3-pip,
 python3-venv,
 libsqlite3-dev,
 tar,
Standards-Version: 3.9.8
Homepage: https://github.com/matrix-org/synapse

@@ -32,12 +28,9 @@ Depends:
 debconf,
 python3-distutils|libpython3-stdlib (<< 3.6),
 ${misc:Depends},
 ${shlibs:Depends},
 ${synapse:pydepends},
# some of our scripts use perl, but none of them are important,
# so we put perl:Depends in Suggests rather than Depends.
Recommends:
 ${shlibs1:Recommends},
Suggests:
 sqlite3,
 ${perl:Depends},
14  debian/rules  vendored
@@ -3,29 +3,15 @@
# Build Debian package using https://github.com/spotify/dh-virtualenv
#

# assume we only have one package
PACKAGE_NAME:=`dh_listpackages`

override_dh_systemd_enable:
        dh_systemd_enable --name=matrix-synapse

override_dh_installinit:
        dh_installinit --name=matrix-synapse

# we don't really want to strip the symbols from our object files.
override_dh_strip:

override_dh_shlibdeps:
        # make the postgres package's dependencies a recommendation
        # rather than a hard dependency.
        find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
            xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
                -pshlibs1 -dRecommends

        # all the other dependencies can be normal 'Depends' requirements,
        # except for PIL's, which is self-contained and which confuses
        # dpkg-shlibdeps.
        dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2

override_dh_virtualenv:
        ./debian/build_virtualenv
docker/Dockerfile-dhvirtualenv
@@ -43,9 +43,6 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
FROM ${distro}

# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \
    && env DEBIAN_FRONTEND=noninteractive apt-get install \
        -yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
docs/metrics-howto.rst
@@ -59,108 +59,6 @@ How to monitor Synapse metrics using Prometheus
Restart Prometheus.


Renaming of metrics & deprecation of old names in 1.2
-----------------------------------------------------

Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
upstream ``prometheus_client``. The old names are considered deprecated and will
be removed in a future version of Synapse.

+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| New Name                                                                     | Old Name                                                              |
+=============================================================================+=======================================================================+
| python_gc_objects_collected_total                                           | python_gc_objects_collected                                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_objects_uncollectable_total                                       | python_gc_objects_uncollectable                                       |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_collections_total                                                 | python_gc_collections                                                 |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| process_cpu_seconds_total                                                   | process_cpu_seconds                                                   |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_transactions_total                           | synapse_federation_client_sent_transactions                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_events_processed_total                            | synapse_federation_client_events_processed                            |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_count_total                                   | synapse_event_processing_loop_count                                   |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_room_count_total                              | synapse_event_processing_loop_room_count                              |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_count_total                                      | synapse_util_metrics_block_count                                      |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_time_seconds_total                               | synapse_util_metrics_block_time_seconds                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_utime_seconds_total                           | synapse_util_metrics_block_ru_utime_seconds                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_stime_seconds_total                           | synapse_util_metrics_block_ru_stime_seconds                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_count_total                               | synapse_util_metrics_block_db_txn_count                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_duration_seconds_total                    | synapse_util_metrics_block_db_txn_duration_seconds                    |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_sched_duration_seconds_total                  | synapse_util_metrics_block_db_sched_duration_seconds                  |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_start_count_total                                | synapse_background_process_start_count                                |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_utime_seconds_total                           | synapse_background_process_ru_utime_seconds                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_stime_seconds_total                           | synapse_background_process_ru_stime_seconds                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_count_total                               | synapse_background_process_db_txn_count                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_duration_seconds_total                    | synapse_background_process_db_txn_duration_seconds                    |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_sched_duration_seconds_total                  | synapse_background_process_db_sched_duration_seconds                  |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_total                               | synapse_storage_events_persisted_events                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_sep_total                           | synapse_storage_events_persisted_events_sep                           |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_total                                    | synapse_storage_events_state_delta                                    |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_single_event_total                       | synapse_storage_events_state_delta_single_event                       |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_reuse_delta_total                        | synapse_storage_events_state_delta_reuse_delta                        |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_pdus_total                               | synapse_federation_server_received_pdus                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_edus_total                               | synapse_federation_server_received_edus                               |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_notified_presence_total                            | synapse_handler_presence_notified_presence                            |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_out_total                      | synapse_handler_presence_federation_presence_out                      |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_presence_updates_total                             | synapse_handler_presence_presence_updates                             |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_timers_fired_total                                 | synapse_handler_presence_timers_fired                                 |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_total                          | synapse_handler_presence_federation_presence                          |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_bump_active_time_total                             | synapse_handler_presence_bump_active_time                             |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_edus_total                                   | synapse_federation_client_sent_edus                                   |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_count_total                 | synapse_federation_client_sent_pdu_destinations:count                 |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_total                       | synapse_federation_client_sent_pdu_destinations:total                 |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handlers_appservice_events_processed_total                          | synapse_handlers_appservice_events_processed                          |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_notifier_notified_events_total                                      | synapse_notifier_notified_events                                      |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total   | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter   |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_processed_total                         | synapse_http_httppusher_http_pushes_processed                         |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_failed_total                            | synapse_http_httppusher_http_pushes_failed                            |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_processed_total                       | synapse_http_httppusher_badge_updates_processed                       |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_failed_total                          | synapse_http_httppusher_badge_updates_failed                          |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+


Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------
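For operators migrating dashboards or alert rules, the renames can be applied mechanically. A minimal sketch (the mapping entries come from the table above; the helper itself is illustrative and not part of Synapse):

.. code-block:: python

   import re

   # A few entries from the rename table above; extend with the rest as needed.
   OLD_TO_NEW = {
       "python_gc_objects_collected": "python_gc_objects_collected_total",
       "process_cpu_seconds": "process_cpu_seconds_total",
       "synapse_federation_client_sent_transactions":
           "synapse_federation_client_sent_transactions_total",
   }

   def migrate_query(query: str) -> str:
       """Rewrite old metric names in a query string to their new names.

       \b word boundaries keep an old name from matching inside a name that
       already carries the new `_total` suffix ('_' is a word character).
       """
       for old, new in OLD_TO_NEW.items():
           query = re.sub(r"\b%s\b" % re.escape(old), new, query)
       return query

   print(migrate_query("rate(process_cpu_seconds[5m])"))
   # -> rate(process_cpu_seconds_total[5m])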
docs/opentracing.rst
@@ -1,100 +0,0 @@
===========
OpenTracing
===========

Background
----------

OpenTracing is a semi-standard being adopted by a number of distributed tracing
platforms. It is a common api for facilitating vendor-agnostic tracing
instrumentation. That is, we can use the OpenTracing api and select one of a
number of tracer implementations to do the heavy lifting in the background.
Our current selected implementation is Jaeger.

OpenTracing is a tool which gives an insight into the causal relationship of
work done in and between servers. The servers each track events and report them
to a centralised server - in Synapse's case: Jaeger. The basic unit used to
represent events is the span. The span roughly represents a single piece of work
that was done and the time at which it occurred. A span can have child spans,
meaning that the work of the child had to be completed for the parent span to
complete, or it can have follow-on spans which represent work that is undertaken
as a result of the parent but is not depended on by the parent in order to
finish.

Since this is undertaken in a distributed environment, a request to another
server, such as an RPC or a simple GET, can be considered a span (a unit of
work) for the local server. This causal link is what OpenTracing aims to
capture and visualise. In order to do this, metadata about the local server's
span, i.e. the 'span context', needs to be included with the request to the
remote.

It is up to the remote server to decide what it does with the spans
it creates. This is called the sampling policy and it can be configured
through Jaeger's settings.

For OpenTracing concepts see
https://opentracing.io/docs/overview/what-is-tracing/.

For more information about Jaeger's implementation see
https://www.jaegertracing.io/docs/

======================
Setting up OpenTracing
======================

To receive OpenTracing spans, start up a Jaeger server. This can be done
using docker like so:

.. code-block:: bash

   docker run -d --name jaeger \
     -p 6831:6831/udp \
     -p 6832:6832/udp \
     -p 5778:5778 \
     -p 16686:16686 \
     -p 14268:14268 \
     jaegertracing/all-in-one:1.13

Latest documentation is probably at
https://www.jaegertracing.io/docs/1.13/getting-started/


Enable OpenTracing in Synapse
-----------------------------

OpenTracing is not enabled by default. It must be enabled in the homeserver
config by uncommenting the config options under ``opentracing`` as shown in
the `sample config <./sample_config.yaml>`_. For example:

.. code-block:: yaml

   opentracing:
     tracer_enabled: true
     homeserver_whitelist:
       - "mytrustedhomeserver.org"
       - "*.myotherhomeservers.com"

Homeserver whitelisting
-----------------------

The homeserver whitelist is configured using regular expressions. A list of regular
expressions can be given and their union will be compared when propagating any
span contexts to another homeserver.

Though it's mostly safe to send and receive span contexts to and from
untrusted users, since span contexts are usually opaque ids, it can lead to
two problems, namely:

- If the span context is marked as sampled by the sending homeserver the receiver will
  sample it. Therefore two homeservers with wildly different sampling policies
  could incur higher sampling counts than intended.
- Sending servers can attach arbitrary data to spans, known as 'baggage'. For safety this has been disabled in Synapse
  but that doesn't prevent another server sending you baggage which will be logged
  to OpenTracing's logs.

==================
Configuring Jaeger
==================

Sampling strategies can be set as in this document:
https://www.jaegertracing.io/docs/1.13/sampling/
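The whitelist semantics described above (a list of regexes whose union is compared against a server name) can be sketched as follows. This is a hypothetical helper with assumed full-match semantics, not Synapse's actual implementation:

.. code-block:: python

   import re

   # Hypothetical sketch of homeserver-whitelist matching as described above:
   # the union of the configured regexes is compared against a server_name.
   # Full-match semantics are an assumption, not taken from this page.
   homeserver_whitelist = [
       r"mytrustedhomeserver\.org",
       r".*\.myotherhomeservers\.com",
   ]
   whitelist_re = re.compile("|".join("(?:%s)" % p for p in homeserver_whitelist))

   def should_send_span_context(server_name: str) -> bool:
       return whitelist_re.fullmatch(server_name) is not None

   assert should_send_span_context("mytrustedhomeserver.org")
   assert not should_send_span_context("untrusted.example.com")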
docs/postgres.rst
@@ -11,9 +11,7 @@ a postgres database.

* If you are using the `matrix.org debian/ubuntu
  packages <../INSTALL.md#matrixorg-packages>`_,
  the necessary python library will already be installed, but you will need to
  ensure the low-level postgres library is installed, which you can do with
  ``apt install libpq5``.
  the necessary libraries will already be installed.

* For other pre-built packages, please consult the documentation from the
  relevant package.

@@ -36,7 +34,7 @@ Assuming your PostgreSQL database user is called ``postgres``, create a user

   su - postgres
   createuser --pwprompt synapse_user

Before you can authenticate with the ``synapse_user``, you must create a
database that it can access. To create a database, first connect to the database
with your database user::

@@ -55,7 +53,7 @@ and then run::

This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already have been created as above).

Note that the PostgreSQL database *must* have the correct encoding set (as
shown above), otherwise it will not be able to store UTF8 strings.

You may need to enable password authentication so ``synapse_user`` can connect
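Since the encoding requirement above is easy to get wrong, it can be verified from Python. A minimal sketch, assuming ``psycopg2`` is installed and the ``synapse_user``/``synapse`` database from the instructions exist (the password is a placeholder):

.. code-block:: python

   import psycopg2

   # Sanity-check that the `synapse` database will store UTF8 strings,
   # using the user/database created in the instructions above.
   conn = psycopg2.connect(
       dbname="synapse",
       user="synapse_user",
       password="secretpassword",  # placeholder
       host="localhost",
   )
   with conn.cursor() as cur:
       cur.execute("SHOW SERVER_ENCODING")
       print(cur.fetchone()[0])  # expect: UTF8
   conn.close()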
docs/sample_config.yaml
@@ -1409,24 +1409,17 @@ password_config:


## Opentracing ##
# These settings enable opentracing which implements distributed tracing
# This allows you to observe the causal chain of events across servers
# including requests, key lookups etc. across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with jaeger)

# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
  # tracing is disabled by default. Uncomment the following line to enable it.
  #
  #enabled: true

  # The list of homeservers we wish to send and receive span contexts and span baggage.
  # See docs/opentracing.rst
  # This is a list of regexes which are matched against the server_name of the
  # homeserver.
  #
  # By default, it is empty, so no servers are matched.
  #
  #homeserver_whitelist:
  #  - ".*"

#opentracing:
#  # Enable / disable tracer
#  tracer_enabled: false
#  # The list of homeservers we wish to expose our current traces to.
#  # The list is a list of regexes which are matched against the
#  # servername of the homeserver
#  homeserver_whitelist:
#    - ".*"
synapse/__init__.py
@@ -35,4 +35,4 @@ try:
except ImportError:
    pass

__version__ = "1.2.1"
__version__ = "1.1.0"
synapse/api/auth.py
@@ -606,6 +606,21 @@ class Auth(object):

        defer.returnValue(auth_ids)

    def check_redaction(self, room_version, event, auth_events):
        """Check whether the event sender is allowed to redact the target event.

        Returns:
            True if the sender is allowed to redact the target event if the
            target event was created by them.
            False if the sender is allowed to redact the target event with no
            further checks.

        Raises:
            AuthError if the event sender is definitely not allowed to redact
            the target event.
        """
        return event_auth.check_redaction(room_version, event, auth_events)

    @defer.inlineCallbacks
    def check_can_change_room_list(self, room_id, user):
        """Check if the user is allowed to edit the room's entry in the
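The docstring distinguishes two success modes. An illustrative caller sketch, not code from this diff (`event.redacts` as the pointer to the target event is an assumption):

.. code-block:: python

   from twisted.internet import defer

   from synapse.api.errors import AuthError

   @defer.inlineCallbacks
   def _check_redaction_allowed(auth, store, room_version, event, auth_events):
       # Illustrative only; mirrors the two outcomes in the docstring above.
       allowed_if_own = auth.check_redaction(room_version, event, auth_events)
       if allowed_if_own:
           # True: the sender may redact the target only if they also sent
           # it, so fetch the target event and compare senders.
           target = yield store.get_event(event.redacts)
           if target.sender != event.sender:
               raise AuthError(403, "You cannot redact events sent by others")
       # False: the power-level checks already passed; nothing more to do.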
synapse/api/errors.py
@@ -61,6 +61,7 @@ class Codes(object):
    INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
    WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
    EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
    USER_DEACTIVATED = "M_USER_DEACTIVATED"


class CodeMessageException(RuntimeError):

@@ -151,7 +152,7 @@ class UserDeactivatedError(SynapseError):
            msg (str): The human-readable error message
        """
        super(UserDeactivatedError, self).__init__(
            code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
            code=http_client.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
        )
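With this change a client can reliably detect the deactivated-account case. A hypothetical client-side check (the `requests` usage is illustrative; only the 403 status and `M_USER_DEACTIVATED` errcode come from the diff above):

.. code-block:: python

   import requests  # illustrative HTTP client; not part of this diff

   def is_deactivated_login_failure(resp: requests.Response) -> bool:
       # After this change, a deactivated user's login attempt returns
       # HTTP 403 with errcode M_USER_DEACTIVATED instead of M_UNKNOWN.
       return (
           resp.status_code == 403
           and resp.json().get("errcode") == "M_USER_DEACTIVATED"
       )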
synapse/app/_base.py
@@ -48,7 +48,7 @@ def register_sighup(func):
    _sighup_callbacks.append(func)


def start_worker_reactor(appname, config, run_command=reactor.run):
def start_worker_reactor(appname, config):
    """ Run the reactor in the main process

    Daemonizes if necessary, and then configures some resources, before starting

@@ -57,7 +57,6 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
    Args:
        appname (str): application name which will be sent to syslog
        config (synapse.config.Config): config object
        run_command (Callable[]): callable that actually runs the reactor
    """

    logger = logging.getLogger(config.worker_app)

@@ -70,19 +69,11 @@ def start_worker_reactor(appname, config, run_command=reactor.run):
        daemonize=config.worker_daemonize,
        print_pidfile=config.print_pidfile,
        logger=logger,
        run_command=run_command,
    )


def start_reactor(
    appname,
    soft_file_limit,
    gc_thresholds,
    pid_file,
    daemonize,
    print_pidfile,
    logger,
    run_command=reactor.run,
    appname, soft_file_limit, gc_thresholds, pid_file, daemonize, print_pidfile, logger
):
    """ Run the reactor in the main process

@@ -97,7 +88,6 @@ def start_reactor(
        daemonize (bool): true to run the reactor in a background process
        print_pidfile (bool): whether to print the pid file, if daemonize is True
        logger (logging.Logger): logger instance to pass to Daemonize
        run_command (Callable[]): callable that actually runs the reactor
    """

    install_dns_limiter(reactor)

@@ -107,7 +97,7 @@ def start_reactor(
        change_resource_limit(soft_file_limit)
        if gc_thresholds:
            gc.set_threshold(*gc_thresholds)
        run_command()
        reactor.run()

    # make sure that we run the reactor with the sentinel log context,
    # otherwise other PreserveLoggingContext instances will get confused

@@ -149,7 +139,8 @@ def listen_metrics(bind_addresses, port):
    """
    Start Prometheus metrics server.
    """
    from synapse.metrics import RegistryProxy, start_http_server
    from synapse.metrics import RegistryProxy
    from prometheus_client import start_http_server

    for host in bind_addresses:
        logger.info("Starting metrics listener on %s:%d", host, port)
synapse/app/admin_cmd.py
@@ -1,264 +0,0 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import tempfile

from canonicaljson import json

from twisted.internet import defer, task

import synapse
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string

logger = logging.getLogger("synapse.app.admin_cmd")


class AdminCmdSlavedStore(
    SlavedReceiptsStore,
    SlavedAccountDataStore,
    SlavedApplicationServiceStore,
    SlavedRegistrationStore,
    SlavedFilteringStore,
    SlavedPresenceStore,
    SlavedGroupServerStore,
    SlavedDeviceInboxStore,
    SlavedDeviceStore,
    SlavedPushRuleStore,
    SlavedEventStore,
    SlavedClientIpStore,
    RoomStore,
    BaseSlavedStore,
):
    pass


class AdminCmdServer(HomeServer):
    DATASTORE_CLASS = AdminCmdSlavedStore

    def _listen_http(self, listener_config):
        pass

    def start_listening(self, listeners):
        pass

    def build_tcp_replication(self):
        return AdminCmdReplicationHandler(self)


class AdminCmdReplicationHandler(ReplicationClientHandler):
    @defer.inlineCallbacks
    def on_rdata(self, stream_name, token, rows):
        pass

    def get_streams_to_replicate(self):
        return {}


@defer.inlineCallbacks
def export_data_command(hs, args):
    """Export data for a user.

    Args:
        hs (HomeServer)
        args (argparse.Namespace)
    """
    user_id = args.user_id
    directory = args.output_directory

    res = yield hs.get_handlers().admin_handler.export_user_data(
        user_id, FileExfiltrationWriter(user_id, directory=directory)
    )
    print(res)


class FileExfiltrationWriter(ExfiltrationWriter):
    """An ExfiltrationWriter that writes the user's data to a directory.
    Returns the directory location on completion.

    Note: This writes to disk on the main reactor thread.

    Args:
        user_id (str): The user whose data is being exfiltrated.
        directory (str|None): The directory to write the data to, if None then
            will write to a temporary directory.
    """

    def __init__(self, user_id, directory=None):
        self.user_id = user_id

        if directory:
            self.base_directory = directory
        else:
            self.base_directory = tempfile.mkdtemp(
                prefix="synapse-exfiltrate__%s__" % (user_id,)
            )

        os.makedirs(self.base_directory, exist_ok=True)
        if list(os.listdir(self.base_directory)):
            raise Exception("Directory must be empty")

    def write_events(self, room_id, events):
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        os.makedirs(room_directory, exist_ok=True)
        events_file = os.path.join(room_directory, "events")

        with open(events_file, "a") as f:
            for event in events:
                print(json.dumps(event.get_pdu_json()), file=f)

    def write_state(self, room_id, event_id, state):
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        state_directory = os.path.join(room_directory, "state")
        os.makedirs(state_directory, exist_ok=True)

        event_file = os.path.join(state_directory, event_id)

        with open(event_file, "a") as f:
            for event in state.values():
                print(json.dumps(event.get_pdu_json()), file=f)

    def write_invite(self, room_id, event, state):
        self.write_events(room_id, [event])

        # We write the invite state somewhere else as they aren't full events
        # and are only a subset of the state at the event.
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        os.makedirs(room_directory, exist_ok=True)

        invite_state = os.path.join(room_directory, "invite_state")

        with open(invite_state, "a") as f:
            for event in state.values():
                print(json.dumps(event), file=f)

    def finished(self):
        return self.base_directory


def start(config_options):
    parser = argparse.ArgumentParser(description="Synapse Admin Command")
    HomeServerConfig.add_arguments_to_parser(parser)

    subparser = parser.add_subparsers(
        title="Admin Commands",
        required=True,
        dest="command",
        metavar="<admin_command>",
        help="The admin command to perform.",
    )
    export_data_parser = subparser.add_parser(
        "export-data", help="Export all data for a user"
    )
    export_data_parser.add_argument("user_id", help="User to extract data from")
    export_data_parser.add_argument(
        "--output-directory",
        action="store",
        metavar="DIRECTORY",
        required=False,
        help="The directory to store the exported data in. Must be empty. Defaults"
        " to creating a temp directory.",
    )
    export_data_parser.set_defaults(func=export_data_command)

    try:
        config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
    except ConfigError as e:
        sys.stderr.write("\n" + str(e) + "\n")
        sys.exit(1)

    if config.worker_app is not None:
        assert config.worker_app == "synapse.app.admin_cmd"

    # Update the config with some basic overrides so that we don't have to
    # specify a full worker config.
    config.worker_app = "synapse.app.admin_cmd"

    if (
        not config.worker_daemonize
        and not config.worker_log_file
        and not config.worker_log_config
    ):
        # Since we're meant to be run as a "command" let's not redirect stdio
        # unless we've actually set log config.
        config.no_redirect_stdio = True

    # Explicitly disable background processes
    config.update_user_directory = False
    config.start_pushers = False
    config.send_federation = False

    setup_logging(config, use_worker_options=True)

    synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts

    database_engine = create_engine(config.database_config)

    ss = AdminCmdServer(
        config.server_name,
        db_config=config.database_config,
        config=config,
        version_string="Synapse/" + get_version_string(synapse),
        database_engine=database_engine,
    )

    ss.setup()

    # We use task.react as the basic run command as it correctly handles tearing
    # down the reactor when the deferreds resolve and setting the return value.
    # We also make sure that `_base.start` gets run before we actually run the
    # command.

    @defer.inlineCallbacks
    def run(_reactor):
        with LoggingContext("command"):
            yield _base.start(ss, [])
            yield args.func(ss, args)

    _base.start_worker_reactor(
        "synapse-admin-cmd", config, run_command=lambda: task.react(run)
    )


if __name__ == "__main__":
    with LoggingContext("main"):
        start(sys.argv[1:])
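For context, a plausible invocation of the admin command app above (the `python -m` entry point and `-c` flag are assumptions; the subcommand and flags come from the argparse setup in `start()`):

.. code-block:: python

   # Hypothetical invocation of the admin command app defined above:
   #
   #   python -m synapse.app.admin_cmd -c homeserver.yaml \
   #       export-data @someone:example.org --output-directory /tmp/export
   #
   # FileExfiltrationWriter then writes, per room:
   #   rooms/<room_id>/events            (one JSON event per line)
   #   rooms/<room_id>/state/<event_id>  (state at each requested event)
   #   rooms/<room_id>/invite_state      (stripped invite state)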
@@ -27,7 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore

@@ -28,7 +28,8 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

@@ -28,7 +28,8 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

@@ -29,7 +29,8 @@ from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

@@ -28,8 +28,9 @@ from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore

@@ -30,7 +30,8 @@ from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

@@ -55,8 +55,9 @@ from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource

@@ -28,7 +28,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

@@ -27,7 +27,8 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore

@@ -32,7 +32,8 @@ from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

@@ -29,7 +29,8 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -137,42 +137,12 @@ class Config(object):
        return file_stream.read()

    def invoke_all(self, name, *args, **kargs):
        """Invoke all instance methods with the given name and arguments in the
        class's MRO.

        Args:
            name (str): Name of function to invoke
            *args
            **kwargs

        Returns:
            list: The list of the return values from each method called
        """
        results = []
        for cls in type(self).mro():
            if name in cls.__dict__:
                results.append(getattr(cls, name)(self, *args, **kargs))
        return results

    @classmethod
    def invoke_all_static(cls, name, *args, **kargs):
        """Invoke all static methods with the given name and arguments in the
        class's MRO.

        Args:
            name (str): Name of function to invoke
            *args
            **kwargs

        Returns:
            list: The list of the return values from each method called
        """
        results = []
        for c in cls.mro():
            if name in c.__dict__:
                results.append(getattr(c, name)(*args, **kargs))
        return results

    def generate_config(
        self,
        config_dir_path,
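A standalone illustration (not Synapse code) of the MRO-walking pattern `invoke_all` uses above: walk the class's method resolution order and call each class's own definition of the method exactly once, skipping classes that merely inherit it.

```python
class Base:
    def describe(self):
        return "base"


class DatabaseConfig(Base):
    def describe(self):
        return "database"


class ServerConfig(DatabaseConfig):
    def describe(self):
        return "server"


def invoke_all(obj, name):
    results = []
    for cls in type(obj).mro():
        if name in cls.__dict__:  # only classes defining the method themselves
            results.append(getattr(cls, name)(obj))
    return results


print(invoke_all(ServerConfig(), "describe"))  # ['server', 'database', 'base']
```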
@@ -232,23 +202,6 @@ class Config(object):
        Returns: Config object.
        """
        config_parser = argparse.ArgumentParser(description=description)
        cls.add_arguments_to_parser(config_parser)
        obj, _ = cls.load_config_with_parser(config_parser, argv)

        return obj

    @classmethod
    def add_arguments_to_parser(cls, config_parser):
        """Adds all the config flags to an ArgumentParser.

        Doesn't support config-file-generation: used by the worker apps.

        Used for workers where we want to add extra flags/subcommands.

        Args:
            config_parser (ArgumentParser): App description
        """

        config_parser.add_argument(
            "-c",
            "--config-path",
@@ -266,34 +219,16 @@ class Config(object):
            " Defaults to the directory containing the last config file",
        )

        cls.invoke_all_static("add_arguments", config_parser)

    @classmethod
    def load_config_with_parser(cls, parser, argv):
        """Parse the commandline and config files with the given parser

        Doesn't support config-file-generation: used by the worker apps.

        Used for workers where we want to add extra flags/subcommands.

        Args:
            parser (ArgumentParser)
            argv (list[str])

        Returns:
            tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
            config object and the parsed argparse.Namespace object from
            `parser.parse_args(..)`
        """

        obj = cls()

        config_args = parser.parse_args(argv)
        obj.invoke_all("add_arguments", config_parser)

        config_args = config_parser.parse_args(argv)

        config_files = find_config_files(search_paths=config_args.config_path)

        if not config_files:
            parser.error("Must supply a config file.")
            config_parser.error("Must supply a config file.")

        if config_args.keys_directory:
            config_dir_path = config_args.keys_directory

@@ -309,7 +244,7 @@ class Config(object):

        obj.invoke_all("read_arguments", config_args)

        return obj, config_args
        return obj

    @classmethod
    def load_or_generate_config(cls, description, argv):

@@ -466,7 +401,7 @@ class Config(object):
            formatter_class=argparse.RawDescriptionHelpFormatter,
        )

        obj.invoke_all_static("add_arguments", parser)
        obj.invoke_all("add_arguments", parser)
        args = parser.parse_args(remaining_args)

        config_dict = read_config_files(config_files)
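A hedged usage sketch of the `load_config_with_parser` flow above: a worker app builds its own parser (with extra subcommands), lets the config machinery wire in the shared flags, and gets back both the config object and the parsed `argparse.Namespace`. The real call is commented out because it needs an actual config file on disk.

```python
import argparse

parser = argparse.ArgumentParser(description="Synapse Admin Command")
subparser = parser.add_subparsers(dest="command", metavar="<admin_command>")
export = subparser.add_parser("export-data", help="Export all data for a user")
export.add_argument("user_id")

# With the real classes, the shared flags (-c/--config-path etc.) are added
# first and parsing returns both objects:
#
#   HomeServerConfig.add_arguments_to_parser(parser)
#   config, args = HomeServerConfig.load_config_with_parser(parser, sys.argv[1:])
#   args.func(server, args)  # dispatch to the chosen subcommand
```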
@@ -69,8 +69,7 @@ class DatabaseConfig(Config):
        if database_path is not None:
            self.database_config["args"]["database"] = database_path

    @staticmethod
    def add_arguments(parser):
    def add_arguments(self, parser):
        db_group = parser.add_argument_group("database")
        db_group.add_argument(
            "-d",

@@ -103,8 +103,7 @@ class LoggingConfig(Config):
        if args.log_file is not None:
            self.log_file = args.log_file

    @staticmethod
    def add_arguments(parser):
    def add_arguments(cls, parser):
        logging_group = parser.add_argument_group("logging")
        logging_group.add_argument(
            "-v",

@@ -237,8 +237,7 @@ class RegistrationConfig(Config):
            % locals()
        )

    @staticmethod
    def add_arguments(parser):
    def add_arguments(self, parser):
        reg_group = parser.add_argument_group("registration")
        reg_group.add_argument(
            "--enable-registration",

@@ -136,7 +136,7 @@ class ServerConfig(Config):

        # Whether to enable experimental MSC1849 (aka relations) support
        self.experimental_msc1849_support_enabled = config.get(
            "experimental_msc1849_support_enabled", True
            "experimental_msc1849_support_enabled", False
        )

        # Options to control access by tracking MAU

@@ -639,8 +639,7 @@ class ServerConfig(Config):
        if args.print_pidfile is not None:
            self.print_pidfile = args.print_pidfile

    @staticmethod
    def add_arguments(parser):
    def add_arguments(self, parser):
        server_group = parser.add_argument_group("server")
        server_group.add_argument(
            "-D",
@@ -18,42 +18,33 @@ from ._base import Config, ConfigError

class TracerConfig(Config):
    def read_config(self, config, **kwargs):
        opentracing_config = config.get("opentracing")
        if opentracing_config is None:
            opentracing_config = {}
        self.tracer_config = config.get("opentracing")

        self.opentracer_enabled = opentracing_config.get("enabled", False)
        if not self.opentracer_enabled:
            return
        self.tracer_config = config.get("opentracing", {"tracer_enabled": False})

        # The tracer is enabled so sanitize the config
        if self.tracer_config.get("tracer_enabled", False):
            # The tracer is enabled so sanitize the config
            # If no whitelists are given
            self.tracer_config.setdefault("homeserver_whitelist", [])

        self.opentracer_whitelist = opentracing_config.get("homeserver_whitelist", [])
        if not isinstance(self.opentracer_whitelist, list):
            raise ConfigError("Tracer homeserver_whitelist config is malformed")
        if not isinstance(self.tracer_config.get("homeserver_whitelist"), list):
            raise ConfigError("Tracer homesererver_whitelist config is malformed")

    def generate_config_section(cls, **kwargs):
        return """\
        ## Opentracing ##
        # These settings enable opentracing which implements distributed tracing
        # This allows you to observe the causal chain of events across servers
        # including requests, key lookups etc. across any server running
        # synapse or any other services which support opentracing
        # (specifically those implemented with jaeger)

        # These settings enable opentracing, which implements distributed tracing.
        # This allows you to observe the causal chains of events across servers
        # including requests, key lookups etc., across any server running
        # synapse or any other services which support opentracing
        # (specifically those implemented with Jaeger).
        #
        opentracing:
            # tracing is disabled by default. Uncomment the following line to enable it.
            #
            #enabled: true

            # The list of homeservers we wish to send and receive span contexts and span baggage.
            # See docs/opentracing.rst
            # This is a list of regexes which are matched against the server_name of the
            # homeserver.
            #
            # By default, it is empty, so no servers are matched.
            #
            #homeserver_whitelist:
            #  - ".*"
        #opentracing:
        #  # Enable / disable tracer
        #  tracer_enabled: false
        #  # The list of homeservers we wish to expose our current traces to.
        #  # The list is a list of regexes which are matched against the
        #  # servername of the homeserver
        #  homeserver_whitelist:
        #    - ".*"
        """
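A minimal standalone sketch (assumed names, not Synapse code) of the config-sanitising pattern above: read an optional section, default it, then type-check the whitelist before anything uses it.

```python
def read_tracer_config(config):
    # An absent or null "opentracing" section means tracing is simply off.
    opentracing_config = config.get("opentracing") or {}

    enabled = opentracing_config.get("enabled", False)
    whitelist = opentracing_config.get("homeserver_whitelist", [])

    if not isinstance(whitelist, list):
        raise ValueError("Tracer homeserver_whitelist config is malformed")

    return enabled, whitelist


print(read_tracer_config({}))                                  # (False, [])
print(read_tracer_config({"opentracing": {"enabled": True}}))  # (True, [])
```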
@@ -104,17 +104,6 @@ class _EventInternalMetadata(object):
        """
        return getattr(self, "proactively_send", True)

    def is_redacted(self):
        """Whether the event has been redacted.

        This is used for efficiently checking whether an event has been
        marked as redacted without needing to make another database call.

        Returns:
            bool
        """
        return getattr(self, "redacted", False)


def _event_dict_property(key):
    # We want to be able to use hasattr with the event dict properties.
@@ -52,15 +52,10 @@ def prune_event(event):

    from . import event_type_from_format_version

    pruned_event = event_type_from_format_version(event.format_version)(
    return event_type_from_format_version(event.format_version)(
        pruned_event_dict, event.internal_metadata.get_dict()
    )

    # Mark the event as redacted
    pruned_event.internal_metadata.redacted = True

    return pruned_event


def prune_event_dict(event_dict):
    """Redacts the event_dict in the same way as `prune_event`, except it
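A simplified illustration (assumed class shapes, not Synapse code) of why `prune_event` sets a flag on the pruned copy: later code can cheaply ask "was this redacted?" without another database hit.

```python
class InternalMetadata:
    def is_redacted(self):
        return getattr(self, "redacted", False)


class Event:
    def __init__(self):
        self.internal_metadata = InternalMetadata()


pruned = Event()
pruned.internal_metadata.redacted = True       # set once, at prune time
assert pruned.internal_metadata.is_redacted()  # cheap check everywhere else
```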
@@ -365,12 +360,9 @@ class EventClientSerializer(object):
        event_id = event.event_id
        serialized_event = serialize_event(event, time_now, **kwargs)

        # If MSC1849 is enabled then we need to look if there are any relations
        # we need to bundle in with the event.
        # Do not bundle relations if the event has been redacted
        if not event.internal_metadata.is_redacted() and (
            self.experimental_msc1849_support_enabled and bundle_aggregations
        ):
        # If MSC1849 is enabled then we need to look if thre are any relations
        # we need to bundle in with the event
        if self.experimental_msc1849_support_enabled and bundle_aggregations:
            annotations = yield self.store.get_aggregation_groups_for_event(event_id)
            references = yield self.store.get_relations_for_event(
                event_id, RelationTypes.REFERENCE, direction="f"
@@ -369,7 +369,7 @@ class FederationServer(FederationBase):
            logger.warn("Room version %s not in %s", room_version, supported_versions)
            raise IncompatibleRoomVersionError(room_version=room_version)

        pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
        pdu = yield self.handler.on_make_join_request(room_id, user_id)
        time_now = self._clock.time_msec()
        defer.returnValue(
            {"event": pdu.get_pdu_json(time_now), "room_version": room_version}

@@ -423,7 +423,7 @@ class FederationServer(FederationBase):
    def on_make_leave_request(self, origin, room_id, user_id):
        origin_host, _ = parse_server_name(origin)
        yield self.check_server_matches_acl(origin_host, room_id)
        pdu = yield self.handler.on_make_leave_request(origin, room_id, user_id)
        pdu = yield self.handler.on_make_leave_request(room_id, user_id)

        room_version = yield self.store.get_room_version(room_id)

File diff suppressed because it is too large
@@ -1204,28 +1204,11 @@ class FederationHandler(BaseHandler):

    @defer.inlineCallbacks
    @log_function
    def on_make_join_request(self, origin, room_id, user_id):
    def on_make_join_request(self, room_id, user_id):
        """ We've received a /make_join/ request, so we create a partial
        join event for the room and return that. We do *not* persist or
        process it until the other server has signed it and sent it back.

        Args:
            origin (str): The (verified) server name of the requesting server.
            room_id (str): Room to create join event in
            user_id (str): The user to create the join for

        Returns:
            Deferred[FrozenEvent]
        """

        if get_domain_from_id(user_id) != origin:
            logger.info(
                "Got /make_join request for user %r from different origin %s, ignoring",
                user_id,
                origin,
            )
            raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)

        event_content = {"membership": Membership.JOIN}

        room_version = yield self.store.get_room_version(room_id)
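A simplified sketch (not the Synapse implementation) of the origin check shown above, which is the heart of the fix for unauthorised remote joins and parts: the user named in the request must belong to the server that sent it.

```python
def get_domain_from_id(user_id):
    # "@alice:example.com" -> "example.com"
    return user_id.split(":", 1)[1]


def check_user_from_origin(user_id, origin):
    if get_domain_from_id(user_id) != origin:
        raise PermissionError("User not from origin")


check_user_from_origin("@alice:example.com", "example.com")  # ok
# check_user_from_origin("@alice:example.com", "evil.com")   # raises
```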
@@ -1428,27 +1411,11 @@ class FederationHandler(BaseHandler):

    @defer.inlineCallbacks
    @log_function
    def on_make_leave_request(self, origin, room_id, user_id):
    def on_make_leave_request(self, room_id, user_id):
        """ We've received a /make_leave/ request, so we create a partial
        leave event for the room and return that. We do *not* persist or
        process it until the other server has signed it and sent it back.

        Args:
            origin (str): The (verified) server name of the requesting server.
            room_id (str): Room to create leave event in
            user_id (str): The user to create the leave for

        Returns:
            Deferred[FrozenEvent]
        """
        if get_domain_from_id(user_id) != origin:
            logger.info(
                "Got /make_leave request for user %r from different origin %s, ignoring",
                user_id,
                origin,
            )
            raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)

        room_version = yield self.store.get_room_version(room_id)
        builder = self.event_builder_factory.new(
            room_version,
@@ -23,7 +23,6 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed

from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RelationTypes
from synapse.api.errors import (
    AuthError,

@@ -785,20 +784,6 @@ class EventCreationHandler(object):
            event.signatures.update(returned_invite.signatures)

        if event.type == EventTypes.Redaction:
            original_event = yield self.store.get_event(
                event.redacts,
                check_redacted=False,
                get_prev_content=False,
                allow_rejected=False,
                allow_none=True,
                check_room_id=event.room_id,
            )

            # we can make some additional checks now if we have the original event.
            if original_event:
                if original_event.type == EventTypes.Create:
                    raise AuthError(403, "Redacting create events is not permitted")

            prev_state_ids = yield context.get_prev_state_ids(self.store)
            auth_events_ids = yield self.auth.compute_auth_events(
                event, prev_state_ids, for_verification=True

@@ -806,18 +791,18 @@ class EventCreationHandler(object):
            auth_events = yield self.store.get_events(auth_events_ids)
            auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
            room_version = yield self.store.get_room_version(event.room_id)

            if event_auth.check_redaction(room_version, event, auth_events=auth_events):
                # this user doesn't have 'redact' rights, so we need to do some more
                # checks on the original event. Let's start by checking the original
                # event exists.
                if not original_event:
                    raise NotFoundError("Could not find event %s" % (event.redacts,))

            if self.auth.check_redaction(room_version, event, auth_events=auth_events):
                original_event = yield self.store.get_event(
                    event.redacts,
                    check_redacted=False,
                    get_prev_content=False,
                    allow_rejected=False,
                    allow_none=False,
                )
                if event.user_id != original_event.user_id:
                    raise AuthError(403, "You don't have permission to redact events")

                # all the checks are done.
                # We've already checked.
                event.internal_metadata.recheck_redaction = False

        if event.type == EventTypes.Create:
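A condensed sketch (names and semantics simplified, not the Synapse code) of the redaction authorisation flow above: refuse redactions of `m.room.create` outright, and when the sender lacks blanket 'redact' power, require that the original event exists and was sent by the same user.

```python
def authorise_redaction(event, original_event, needs_more_checks):
    # `needs_more_checks` plays the role of check_redaction() above: True when
    # the sender has no blanket 'redact' power and must own the target event.
    if original_event is not None and original_event.type == "m.room.create":
        raise PermissionError("Redacting create events is not permitted")

    if needs_more_checks:
        if original_event is None:
            raise LookupError("Could not find event %s" % (event.redacts,))
        if event.user_id != original_event.user_id:
            raise PermissionError("You don't have permission to redact events")
```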
@@ -17,7 +17,7 @@ import logging
from twisted.internet import defer

from synapse.handlers._base import BaseHandler
from synapse.types import ReadReceipt, get_domain_from_id
from synapse.types import ReadReceipt

logger = logging.getLogger(__name__)
@@ -40,27 +40,18 @@ class ReceiptsHandler(BaseHandler):
    def _received_remote_receipt(self, origin, content):
        """Called when we receive an EDU of type m.receipt from a remote HS.
        """
        receipts = []
        for room_id, room_values in content.items():
            for receipt_type, users in room_values.items():
                for user_id, user_values in users.items():
                    if get_domain_from_id(user_id) != origin:
                        logger.info(
                            "Received receipt for user %r from server %s, ignoring",
                            user_id,
                            origin,
                        )
                        continue

                    receipts.append(
                        ReadReceipt(
                            room_id=room_id,
                            receipt_type=receipt_type,
                            user_id=user_id,
                            event_ids=user_values["event_ids"],
                            data=user_values.get("data", {}),
                        )
                    )
        receipts = [
            ReadReceipt(
                room_id=room_id,
                receipt_type=receipt_type,
                user_id=user_id,
                event_ids=user_values["event_ids"],
                data=user_values.get("data", {}),
            )
            for room_id, room_values in content.items()
            for receipt_type, users in room_values.items()
            for user_id, user_values in users.items()
        ]

        yield self._handle_new_receipts(receipts)
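A standalone sketch of the spoofing check above: the explicit loop version drops any receipt whose user does not belong to the server that sent the EDU, which the bare list comprehension cannot do.

```python
def filter_remote_receipts(origin, content):
    receipts = []
    for room_id, room_values in content.items():
        for receipt_type, users in room_values.items():
            for user_id, user_values in users.items():
                if user_id.split(":", 1)[1] != origin:  # get_domain_from_id
                    continue  # spoofed: user is not on the sending server
                receipts.append((room_id, receipt_type, user_id, user_values))
    return receipts


edu = {"!room:a": {"m.read": {"@mallory:other": {"event_ids": ["$e"]}}}}
print(filter_remote_receipts("a", edu))  # [] - the spoofed receipt is ignored
```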
@@ -245,9 +245,7 @@ class JsonResource(HttpServer, resource.Resource):

    isLeaf = True

    _PathEntry = collections.namedtuple(
        "_PathEntry", ["pattern", "callback", "servlet_classname"]
    )
    _PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"])

    def __init__(self, hs, canonical_json=True):
        resource.Resource.__init__(self)
@@ -257,28 +255,12 @@ class JsonResource(HttpServer, resource.Resource):
        self.path_regexs = {}
        self.hs = hs

    def register_paths(self, method, path_patterns, callback, servlet_classname):
        """
        Registers a request handler against a regular expression. Later request URLs are
        checked against these regular expressions in order to identify an appropriate
        handler for that request.

        Args:
            method (str): GET, POST etc

            path_patterns (Iterable[str]): A list of regular expressions to which
                the request URLs are compared.

            callback (function): The handler for the request. Usually a Servlet

            servlet_classname (str): The name of the handler to be used in prometheus
                and opentracing logs.
        """
    def register_paths(self, method, path_patterns, callback):
        method = method.encode("utf-8")  # method is bytes on py3
        for path_pattern in path_patterns:
            logger.debug("Registering for %s %s", method, path_pattern.pattern)
            self.path_regexs.setdefault(method, []).append(
                self._PathEntry(path_pattern, callback, servlet_classname)
                self._PathEntry(path_pattern, callback)
            )

    def render(self, request):
@@ -293,9 +275,13 @@ class JsonResource(HttpServer, resource.Resource):
        This checks if anyone has registered a callback for that method and
        path.
        """
        callback, servlet_classname, group_dict = self._get_handler_for_request(request)
        callback, group_dict = self._get_handler_for_request(request)

        # Make sure we have a name for this handler in prometheus.
        servlet_instance = getattr(callback, "__self__", None)
        if servlet_instance is not None:
            servlet_classname = servlet_instance.__class__.__name__
        else:
            servlet_classname = "%r" % callback
        request.request_metrics.name = servlet_classname

        # Now trigger the callback. If it returns a response, we send it

@@ -325,8 +311,7 @@ class JsonResource(HttpServer, resource.Resource):
            request (twisted.web.http.Request):

        Returns:
            Tuple[Callable, str, dict[unicode, unicode]]: callback method, the
                label to use for that method in prometheus metrics, and the
            Tuple[Callable, dict[unicode, unicode]]: callback method, and the
                dict mapping keys to path components as specified in the
                handler's path match regexp.

@@ -335,7 +320,7 @@ class JsonResource(HttpServer, resource.Resource):
            None, or a tuple of (http code, response body).
        """
        if request.method == b"OPTIONS":
            return _options_handler, "options_request_handler", {}
            return _options_handler, {}

        # Loop through all the registered callbacks to check if the method
        # and path regex match

@@ -343,10 +328,10 @@ class JsonResource(HttpServer, resource.Resource):
            m = path_entry.pattern.match(request.path.decode("ascii"))
            if m:
                # We found a match!
                return path_entry.callback, path_entry.servlet_classname, m.groupdict()
                return path_entry.callback, m.groupdict()

        # Huh. No one wanted to handle that? Fiiiiiine. Send 400.
        return _unrecognised_request_handler, "unrecognised_request_handler", {}
        return _unrecognised_request_handler, {}

    def _send_response(
        self, request, code, response_json_object, response_code_message=None
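A minimal sketch of the regex path-registry pattern `JsonResource` uses above: register `(pattern, callback)` entries per HTTP method, then return the first matching callback along with the named groups from its regex.

```python
import collections
import re

_PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"])
path_regexs = {}


def register(method, pattern, callback):
    path_regexs.setdefault(method, []).append(
        _PathEntry(re.compile(pattern), callback)
    )


def lookup(method, path):
    for entry in path_regexs.get(method, []):
        m = entry.pattern.match(path)
        if m:
            return entry.callback, m.groupdict()
    return None, {}  # no one wanted to handle that


register(b"GET", r"^/rooms/(?P<room_id>[^/]+)$", lambda room_id: room_id)
print(lookup(b"GET", "/rooms/!abc:server"))  # (<lambda>, {'room_id': '!abc:server'})
```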
@@ -290,13 +290,11 @@ class RestServlet(object):

        for method in ("GET", "PUT", "POST", "OPTIONS", "DELETE"):
            if hasattr(self, "on_%s" % (method,)):
                servlet_classname = self.__class__.__name__
                method_handler = getattr(self, "on_%s" % (method,))
                http_server.register_paths(
                    method,
                    patterns,
                    trace_servlet(servlet_classname, method_handler),
                    servlet_classname,
                    trace_servlet(self.__class__.__name__, method_handler),
                )

        else:
@@ -1,5 +1,5 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright 2019 The Matrix.org Foundation C.I.C.d
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.

@@ -24,140 +24,6 @@
# this move the methods have work very similarly to opentracing's and it should only
# be a matter of few regexes to move over to opentracing's access patterns proper.
"""
|
||||
============================
|
||||
Using OpenTracing in Synapse
|
||||
============================
|
||||
|
||||
Python-specific tracing concepts are at https://opentracing.io/guides/python/.
|
||||
Note that Synapse wraps OpenTracing in a small module (this one) in order to make the
|
||||
OpenTracing dependency optional. That means that the access patterns are
|
||||
different to those demonstrated in the OpenTracing guides. However, it is
|
||||
still useful to know, especially if OpenTracing is included as a full dependency
|
||||
in the future or if you are modifying this module.
|
||||
|
||||
|
||||
OpenTracing is encapsulated so that
|
||||
no span objects from OpenTracing are exposed in Synapse's code. This allows
|
||||
OpenTracing to be easily disabled in Synapse and thereby have OpenTracing as
|
||||
an optional dependency. This does however limit the number of modifiable spans
|
||||
at any point in the code to one. From here out references to `opentracing`
|
||||
in the code snippets refer to the Synapses module.
|
||||
|
||||
Tracing
|
||||
-------
|
||||
|
||||
In Synapse it is not possible to start a non-active span. Spans can be started
|
||||
using the ``start_active_span`` method. This returns a scope (see
|
||||
OpenTracing docs) which is a context manager that needs to be entered and
|
||||
exited. This is usually done by using ``with``.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from synapse.logging.opentracing import start_active_span
|
||||
|
||||
with start_active_span("operation name"):
|
||||
# Do something we want to tracer
|
||||
|
||||
Forgetting to enter or exit a scope will result in some mysterious and grievous log
|
||||
context errors.
|
||||
|
||||
At anytime where there is an active span ``opentracing.set_tag`` can be used to
|
||||
set a tag on the current active span.
|
||||
|
||||
Tracing functions
|
||||
-----------------
|
||||
|
||||
Functions can be easily traced using decorators. There is a decorator for
|
||||
'normal' function and for functions which are actually deferreds. The name of
|
||||
the function becomes the operation name for the span.
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from synapse.logging.opentracing import trace, trace_deferred
|
||||
|
||||
# Start a span using 'normal_function' as the operation name
|
||||
@trace
|
||||
def normal_function(*args, **kwargs):
|
||||
# Does all kinds of cool and expected things
|
||||
return something_usual_and_useful
|
||||
|
||||
# Start a span using 'deferred_function' as the operation name
|
||||
@trace_deferred
|
||||
@defer.inlineCallbacks
|
||||
def deferred_function(*args, **kwargs):
|
||||
# We start
|
||||
yield we_wait
|
||||
# we finish
|
||||
defer.returnValue(something_usual_and_useful)
|
||||
|
||||
Operation names can be explicitly set for functions by using
|
||||
``trace_using_operation_name`` and
|
||||
``trace_deferred_using_operation_name``
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
from synapse.logging.opentracing import (
|
||||
trace_using_operation_name,
|
||||
trace_deferred_using_operation_name
|
||||
)
|
||||
|
||||
@trace_using_operation_name("A *much* better operation name")
|
||||
def normal_function(*args, **kwargs):
|
||||
# Does all kinds of cool and expected things
|
||||
return something_usual_and_useful
|
||||
|
||||
@trace_deferred_using_operation_name("Another exciting operation name!")
|
||||
@defer.inlineCallbacks
|
||||
def deferred_function(*args, **kwargs):
|
||||
# We start
|
||||
yield we_wait
|
||||
# we finish
|
||||
defer.returnValue(something_usual_and_useful)
|
||||
|
||||
Contexts and carriers
|
||||
---------------------
|
||||
|
||||
There are a selection of wrappers for injecting and extracting contexts from
|
||||
carriers provided. Unfortunately OpenTracing's three context injection
|
||||
techniques are not adequate for our inject of OpenTracing span-contexts into
|
||||
Twisted's http headers, EDU contents and our database tables. Also note that
|
||||
the binary encoding format mandated by OpenTracing is not actually implemented
|
||||
by jaeger_client v4.0.0 - it will silently noop.
|
||||
Please refer to the end of ``logging/opentracing.py`` for the available
|
||||
injection and extraction methods.
|
||||
|
||||
Homeserver whitelisting
|
||||
-----------------------
|
||||
|
||||
Most of the whitelist checks are encapsulated in the modules's injection
|
||||
and extraction method but be aware that using custom carriers or crossing
|
||||
unchartered waters will require the enforcement of the whitelist.
|
||||
``logging/opentracing.py`` has a ``whitelisted_homeserver`` method which takes
|
||||
in a destination and compares it to the whitelist.
|
||||
|
||||
=======
|
||||
Gotchas
|
||||
=======
|
||||
|
||||
- Checking whitelists on span propagation
|
||||
- Inserting pii
|
||||
- Forgetting to enter or exit a scope
|
||||
- Span source: make sure that the span you expect to be active across a
|
||||
function call really will be that one. Does the current function have more
|
||||
than one caller? Will all of those calling functions have be in a context
|
||||
with an active span?
|
||||
"""
|
||||
|
||||
import contextlib
|
||||
import logging
|
||||
import re
|
||||
from functools import wraps
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.config import ConfigError
|
||||
|
||||
try:
|
||||
import opentracing
|
||||
except ImportError:
|
||||
@@ -169,6 +35,12 @@ except ImportError:
    JaegerConfig = None
    LogContextScopeManager = None

import contextlib
import logging
import re
from functools import wraps

from twisted.internet import defer

logger = logging.getLogger(__name__)

@@ -219,8 +91,7 @@ def only_if_tracing(func):
    return _only_if_tracing_inner


# A regex which matches the server_names to expose traces for.
# None means 'block everything'.
# Block everything by default
_homeserver_whitelist = None

tags = _DumTagNames
@@ -230,24 +101,31 @@ def init_tracer(config):
    """Set the whitelists and initialise the JaegerClient tracer

    Args:
        config (HomeserverConfig): The config used by the homeserver
        config (Config)
            The config used by the homeserver. Here it's used to set the service
            name to the homeserver's.
    """
    global opentracing
    if not config.opentracer_enabled:
    if not config.tracer_config.get("tracer_enabled", False):
        # We don't have a tracer
        opentracing = None
        return

    if not opentracing or not JaegerConfig:
        raise ConfigError(
            "The server has been configured to use opentracing but opentracing is not "
            "installed."
    if not opentracing:
        logger.error(
            "The server has been configured to use opentracing but opentracing is not installed."
        )
        raise ModuleNotFoundError("opentracing")

    if not JaegerConfig:
        logger.error(
            "The server has been configured to use opentracing but opentracing is not installed."
        )

    # Include the worker name
    name = config.worker_name if config.worker_name else "master"

    set_homeserver_whitelist(config.opentracer_whitelist)
    set_homeserver_whitelist(config.tracer_config["homeserver_whitelist"])
    jaeger_config = JaegerConfig(
        config={"sampler": {"type": "const", "param": 1}, "logging": True},
        service_name="{} {}".format(config.server_name, name),
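A generic sketch of the optional-dependency pattern used above: import lazily, remember the failure, and only complain if the feature is actually switched on. It runs whether or not `opentracing` is installed.

```python
try:
    import opentracing
except ImportError:
    opentracing = None


def init_tracer(enabled):
    if not enabled:
        return  # tracing is off, so the missing package is irrelevant
    if opentracing is None:
        raise RuntimeError(
            "The server has been configured to use opentracing but "
            "opentracing is not installed."
        )
    # ... configure the real tracer here ...
```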
@@ -354,6 +232,7 @@ def whitelisted_homeserver(destination):
    """Checks if a destination matches the whitelist
    Args:
        destination (String)"""
    global _homeserver_whitelist
    if _homeserver_whitelist:
        return _homeserver_whitelist.match(destination)
    return False
@@ -34,7 +34,9 @@ class LogContextScopeManager(ScopeManager):
    """

    def __init__(self, config):
        pass
        # Set the whitelists
        logger.info(config.tracer_config)
        self._homeserver_whitelist = config.tracer_config["homeserver_whitelist"]

    @property
    def active(self):
@@ -29,16 +29,8 @@ from prometheus_client.core import REGISTRY, GaugeMetricFamily, HistogramMetricF

from twisted.internet import reactor

from synapse.metrics._exposition import (
    MetricsResource,
    generate_latest,
    start_http_server,
)

logger = logging.getLogger(__name__)

METRICS_PREFIX = "/_synapse/metrics"

running_on_pypy = platform.python_implementation() == "PyPy"
all_metrics = []
all_collectors = []

@@ -478,12 +470,3 @@ try:
    gc.disable()
except AttributeError:
    pass

__all__ = [
    "MetricsResource",
    "generate_latest",
    "start_http_server",
    "LaterGauge",
    "InFlightGauge",
    "BucketCollector",
]
@@ -1,258 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015-2019 Prometheus Python Client Developers
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""
This code is based off `prometheus_client/exposition.py` from version 0.7.1.

Due to the renaming of metrics in prometheus_client 0.4.0, this customised
vendoring of the code will emit both the old versions that Synapse dashboards
expect, and the newer "best practice" version of the up-to-date official client.
"""

import math
import threading
from collections import namedtuple
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from urllib.parse import parse_qs, urlparse

from prometheus_client import REGISTRY

from twisted.web.resource import Resource

try:
    from prometheus_client.samples import Sample
except ImportError:
    Sample = namedtuple("Sample", ["name", "labels", "value", "timestamp", "exemplar"])


CONTENT_TYPE_LATEST = str("text/plain; version=0.0.4; charset=utf-8")


INF = float("inf")
MINUS_INF = float("-inf")


def floatToGoString(d):
    d = float(d)
    if d == INF:
        return "+Inf"
    elif d == MINUS_INF:
        return "-Inf"
    elif math.isnan(d):
        return "NaN"
    else:
        s = repr(d)
        dot = s.find(".")
        # Go switches to exponents sooner than Python.
        # We only need to care about positive values for le/quantile.
        if d > 0 and dot > 6:
            mantissa = "{0}.{1}{2}".format(s[0], s[1:dot], s[dot + 1 :]).rstrip("0.")
            return "{0}e+0{1}".format(mantissa, dot - 1)
        return s


def sample_line(line, name):
    if line.labels:
        labelstr = "{{{0}}}".format(
            ",".join(
                [
                    '{0}="{1}"'.format(
                        k,
                        v.replace("\\", r"\\").replace("\n", r"\n").replace('"', r"\""),
                    )
                    for k, v in sorted(line.labels.items())
                ]
            )
        )
    else:
        labelstr = ""
    timestamp = ""
    if line.timestamp is not None:
        # Convert to milliseconds.
        timestamp = " {0:d}".format(int(float(line.timestamp) * 1000))
    return "{0}{1} {2}{3}\n".format(
        name, labelstr, floatToGoString(line.value), timestamp
    )


def nameify_sample(sample):
    """
    If we get a prometheus_client<0.4.0 sample as a tuple, transform it into a
    namedtuple which has the names we expect.
    """
    if not isinstance(sample, Sample):
        sample = Sample(*sample, None, None)

    return sample


def generate_latest(registry, emit_help=False):
    output = []

    for metric in registry.collect():

        if metric.name.startswith("__unused"):
            continue

        if not metric.samples:
            # No samples, don't bother.
            continue

        mname = metric.name
        mnewname = metric.name
        mtype = metric.type

        # OpenMetrics -> Prometheus
        if mtype == "counter":
            mnewname = mnewname + "_total"
        elif mtype == "info":
            mtype = "gauge"
            mnewname = mnewname + "_info"
        elif mtype == "stateset":
            mtype = "gauge"
        elif mtype == "gaugehistogram":
            mtype = "histogram"
        elif mtype == "unknown":
            mtype = "untyped"

        # Output in the old format for compatibility.
        if emit_help:
            output.append(
                "# HELP {0} {1}\n".format(
                    mname,
                    metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
                )
            )
        output.append("# TYPE {0} {1}\n".format(mname, mtype))
        for sample in map(nameify_sample, metric.samples):
            # Get rid of the OpenMetrics specific samples
            for suffix in ["_created", "_gsum", "_gcount"]:
                if sample.name.endswith(suffix):
                    break
            else:
                newname = sample.name.replace(mnewname, mname)
                if ":" in newname and newname.endswith("_total"):
                    newname = newname[: -len("_total")]
                output.append(sample_line(sample, newname))

        # Get rid of the weird colon things while we're at it
        if mtype == "counter":
            mnewname = mnewname.replace(":total", "")
        mnewname = mnewname.replace(":", "_")

        if mname == mnewname:
            continue

        # Also output in the new format, if it's different.
        if emit_help:
            output.append(
                "# HELP {0} {1}\n".format(
                    mnewname,
                    metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
                )
            )
        output.append("# TYPE {0} {1}\n".format(mnewname, mtype))
        for sample in map(nameify_sample, metric.samples):
            # Get rid of the OpenMetrics specific samples
            for suffix in ["_created", "_gsum", "_gcount"]:
                if sample.name.endswith(suffix):
                    break
            else:
                output.append(
                    sample_line(
                        sample, sample.name.replace(":total", "").replace(":", "_")
                    )
                )

    return "".join(output).encode("utf-8")


class MetricsHandler(BaseHTTPRequestHandler):
    """HTTP handler that gives metrics from ``REGISTRY``."""

    registry = REGISTRY

    def do_GET(self):
        registry = self.registry
        params = parse_qs(urlparse(self.path).query)

        if "help" in params:
            emit_help = True
        else:
            emit_help = False

        try:
            output = generate_latest(registry, emit_help=emit_help)
        except Exception:
            self.send_error(500, "error generating metric output")
            raise
        self.send_response(200)
        self.send_header("Content-Type", CONTENT_TYPE_LATEST)
        self.end_headers()
        self.wfile.write(output)

    def log_message(self, format, *args):
        """Log nothing."""

    @classmethod
    def factory(cls, registry):
        """Returns a dynamic MetricsHandler class tied
        to the passed registry.
        """
        # This implementation relies on MetricsHandler.registry
        # (defined above and defaulted to REGISTRY).

        # As we have unicode_literals, we need to create a str()
        # object for type().
        cls_name = str(cls.__name__)
        MyMetricsHandler = type(cls_name, (cls, object), {"registry": registry})
        return MyMetricsHandler


class _ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
    """Thread per request HTTP server."""

    # Make worker threads "fire and forget". Beginning with Python 3.7 this
    # prevents a memory leak because ``ThreadingMixIn`` starts to gather all
    # non-daemon threads in a list in order to join on them at server close.
    # Enabling daemon threads virtually makes ``_ThreadingSimpleServer`` the
    # same as Python 3.7's ``ThreadingHTTPServer``.
    daemon_threads = True


def start_http_server(port, addr="", registry=REGISTRY):
    """Starts an HTTP server for prometheus metrics as a daemon thread"""
    CustomMetricsHandler = MetricsHandler.factory(registry)
    httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
    t = threading.Thread(target=httpd.serve_forever)
    t.daemon = True
    t.start()


class MetricsResource(Resource):
    """
    Twisted ``Resource`` that serves prometheus metrics.
    """

    isLeaf = True

    def __init__(self, registry=REGISTRY):
        self.registry = registry

    def render_GET(self, request):
        request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
        return generate_latest(self.registry)
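A hedged usage sketch of mounting a metrics resource on a Twisted site at the documented `/_synapse/metrics` prefix. It assumes Twisted and prometheus_client are installed; the `prometheus_client.twisted.MetricsResource` used here is the stock replacement named in the new `resource.py` below, and the port is an arbitrary example.

```python
from prometheus_client import REGISTRY
from prometheus_client.twisted import MetricsResource
from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import Site

root = Resource()
synapse_dir = Resource()
root.putChild(b"_synapse", synapse_dir)
synapse_dir.putChild(b"metrics", MetricsResource(registry=REGISTRY))

reactor.listenTCP(9092, Site(root))
# reactor.run()  # then scrape http://localhost:9092/_synapse/metrics
```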
synapse/metrics/resource.py (new file, 20 lines)
@@ -0,0 +1,20 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from prometheus_client.twisted import MetricsResource

METRICS_PREFIX = "/_synapse/metrics"

__all__ = ["MetricsResource", "METRICS_PREFIX"]
@@ -65,7 +65,9 @@ REQUIREMENTS = [
    "msgpack>=0.5.2",
    "phonenumbers>=8.2.0",
    "six>=1.10",
    "prometheus_client>=0.0.18,<0.8.0",
    # prometheus_client 0.4.0 changed the format of counter metrics
    # (cf https://github.com/matrix-org/synapse/issues/4001)
    "prometheus_client>=0.0.18,<0.4.0",
    # we use attr.s(slots), which arrived in 16.0.0
    # Twisted 18.7.0 requires attrs>=17.4.0
    "attrs>=17.4.0",
@@ -205,7 +205,7 @@ class ReplicationEndpoint(object):
        args = "/".join("(?P<%s>[^/]+)" % (arg,) for arg in url_args)
        pattern = re.compile("^/_synapse/replication/%s/%s$" % (self.NAME, args))

        http_server.register_paths(method, [pattern], handler, self.__class__.__name__)
        http_server.register_paths(method, [pattern], handler)

    def _cached_handler(self, request, txn_id, **kwargs):
        """Called on new incoming requests when caching is enabled. Checks
@@ -59,14 +59,9 @@ class SendServerNoticeServlet(RestServlet):

    def register(self, json_resource):
        PATTERN = "^/_synapse/admin/v1/send_server_notice"
        json_resource.register_paths("POST", (re.compile(PATTERN + "$"),), self.on_POST)
        json_resource.register_paths(
            "POST", (re.compile(PATTERN + "$"),), self.on_POST, self.__class__.__name__
        )
        json_resource.register_paths(
            "PUT",
            (re.compile(PATTERN + "/(?P<txn_id>[^/]*)$"),),
            self.on_PUT,
            self.__class__.__name__,
            "PUT", (re.compile(PATTERN + "/(?P<txn_id>[^/]*)$"),), self.on_PUT
        )

    @defer.inlineCallbacks
@@ -67,17 +67,11 @@ class RoomCreateRestServlet(TransactionRestServlet):
        register_txn_path(self, PATTERNS, http_server)
        # define CORS for all of /rooms in RoomCreateRestServlet for simplicity
        http_server.register_paths(
            "OPTIONS",
            client_patterns("/rooms(?:/.*)?$", v1=True),
            self.on_OPTIONS,
            self.__class__.__name__,
            "OPTIONS", client_patterns("/rooms(?:/.*)?$", v1=True), self.on_OPTIONS
        )
        # define CORS for /createRoom[/txnid]
        http_server.register_paths(
            "OPTIONS",
            client_patterns("/createRoom(?:/.*)?$", v1=True),
            self.on_OPTIONS,
            self.__class__.__name__,
            "OPTIONS", client_patterns("/createRoom(?:/.*)?$", v1=True), self.on_OPTIONS
        )

    def on_PUT(self, request, txn_id):

@@ -122,28 +116,16 @@ class RoomStateEventRestServlet(TransactionRestServlet):
        )

        http_server.register_paths(
            "GET",
            client_patterns(state_key, v1=True),
            self.on_GET,
            self.__class__.__name__,
            "GET", client_patterns(state_key, v1=True), self.on_GET
        )
        http_server.register_paths(
            "PUT",
            client_patterns(state_key, v1=True),
            self.on_PUT,
            self.__class__.__name__,
            "PUT", client_patterns(state_key, v1=True), self.on_PUT
        )
        http_server.register_paths(
            "GET",
            client_patterns(no_state_key, v1=True),
            self.on_GET_no_state_key,
            self.__class__.__name__,
            "GET", client_patterns(no_state_key, v1=True), self.on_GET_no_state_key
        )
        http_server.register_paths(
            "PUT",
            client_patterns(no_state_key, v1=True),
            self.on_PUT_no_state_key,
            self.__class__.__name__,
            "PUT", client_patterns(no_state_key, v1=True), self.on_PUT_no_state_key
        )

    def on_GET_no_state_key(self, request, room_id, event_type):

@@ -863,23 +845,18 @@ def register_txn_path(servlet, regex_string, http_server, with_get=False):
        with_get: True to also register respective GET paths for the PUTs.
    """
    http_server.register_paths(
        "POST",
        client_patterns(regex_string + "$", v1=True),
        servlet.on_POST,
        servlet.__class__.__name__,
        "POST", client_patterns(regex_string + "$", v1=True), servlet.on_POST
    )
    http_server.register_paths(
        "PUT",
        client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True),
        servlet.on_PUT,
        servlet.__class__.__name__,
    )
    if with_get:
        http_server.register_paths(
            "GET",
            client_patterns(regex_string + "/(?P<txn_id>[^/]*)$", v1=True),
            servlet.on_GET,
            servlet.__class__.__name__,
        )
@@ -34,7 +34,6 @@ from synapse.http.servlet import (
from synapse.rest.client.transactions import HttpTransactionCache
from synapse.storage.relations import (
    AggregationPaginationToken,
    PaginationChunk,
    RelationPaginationToken,
)

@@ -72,13 +71,11 @@ class RelationSendServlet(RestServlet):
            "POST",
            client_patterns(self.PATTERN + "$", releases=()),
            self.on_PUT_or_POST,
            self.__class__.__name__,
        )
        http_server.register_paths(
            "PUT",
            client_patterns(self.PATTERN + "/(?P<txn_id>[^/]*)$", releases=()),
            self.on_PUT,
            self.__class__.__name__,
        )

    def on_PUT(self, request, *args, **kwargs):
@@ -156,28 +153,23 @@ class RelationPaginationServlet(RestServlet):
        from_token = parse_string(request, "from")
        to_token = parse_string(request, "to")

        if event.internal_metadata.is_redacted():
            # If the event is redacted, return an empty list of relations
            pagination_chunk = PaginationChunk(chunk=[])
        else:
            # Return the relations
            if from_token:
                from_token = RelationPaginationToken.from_string(from_token)
        if from_token:
            from_token = RelationPaginationToken.from_string(from_token)

            if to_token:
                to_token = RelationPaginationToken.from_string(to_token)
        if to_token:
            to_token = RelationPaginationToken.from_string(to_token)

            pagination_chunk = yield self.store.get_relations_for_event(
                event_id=parent_id,
                relation_type=relation_type,
                event_type=event_type,
                limit=limit,
                from_token=from_token,
                to_token=to_token,
            )
        result = yield self.store.get_relations_for_event(
            event_id=parent_id,
            relation_type=relation_type,
            event_type=event_type,
            limit=limit,
            from_token=from_token,
            to_token=to_token,
        )

        events = yield self.store.get_events_as_list(
            [c["event_id"] for c in pagination_chunk.chunk]
            [c["event_id"] for c in result.chunk]
        )

        now = self.clock.time_msec()

@@ -194,7 +186,7 @@ class RelationPaginationServlet(RestServlet):
            events, now, bundle_aggregations=False
        )

        return_value = pagination_chunk.to_dict()
        return_value = result.to_dict()
        return_value["chunk"] = events
        return_value["original_event"] = original_event
@@ -242,7 +234,7 @@ class RelationAggregationPaginationServlet(RestServlet):
|
||||
|
||||
# This checks that a) the event exists and b) the user is allowed to
|
||||
# view it.
|
||||
event = yield self.event_handler.get_event(requester.user, room_id, parent_id)
|
||||
yield self.event_handler.get_event(requester.user, room_id, parent_id)
|
||||
|
||||
if relation_type not in (RelationTypes.ANNOTATION, None):
|
||||
raise SynapseError(400, "Relation type must be 'annotation'")
|
||||
@@ -251,26 +243,21 @@ class RelationAggregationPaginationServlet(RestServlet):
|
||||
from_token = parse_string(request, "from")
|
||||
to_token = parse_string(request, "to")
|
||||
|
||||
if event.internal_metadata.is_redacted():
|
||||
# If the event is redacted, return an empty list of relations
|
||||
pagination_chunk = PaginationChunk(chunk=[])
|
||||
else:
|
||||
# Return the relations
|
||||
if from_token:
|
||||
from_token = AggregationPaginationToken.from_string(from_token)
|
||||
if from_token:
|
||||
from_token = AggregationPaginationToken.from_string(from_token)
|
||||
|
||||
if to_token:
|
||||
to_token = AggregationPaginationToken.from_string(to_token)
|
||||
if to_token:
|
||||
to_token = AggregationPaginationToken.from_string(to_token)
|
||||
|
||||
pagination_chunk = yield self.store.get_aggregation_groups_for_event(
|
||||
event_id=parent_id,
|
||||
event_type=event_type,
|
||||
limit=limit,
|
||||
from_token=from_token,
|
||||
to_token=to_token,
|
||||
)
|
||||
res = yield self.store.get_aggregation_groups_for_event(
|
||||
event_id=parent_id,
|
||||
event_type=event_type,
|
||||
limit=limit,
|
||||
from_token=from_token,
|
||||
to_token=to_token,
|
||||
)
|
||||
|
||||
defer.returnValue((200, pagination_chunk.to_dict()))
|
||||
defer.returnValue((200, res.to_dict()))
|
||||
|
||||
|
||||
class RelationAggregationGroupPaginationServlet(RestServlet):
|
||||
|
||||
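Net effect of the two pagination hunks above: on the v1.2.1 side, both servlets first check whether the parent event has been redacted and, if so, return an empty chunk instead of querying the store; the older branch queries the store unconditionally. A condensed sketch of that guard (names as in the hunks above; this is a fragment, and the surrounding `@defer.inlineCallbacks` servlet plumbing is assumed):

```python
# Sketch of the v1.2.1-side control flow, not a complete servlet.
if event.internal_metadata.is_redacted():
    # A redacted parent must expose no relations, so short-circuit
    # with an empty page rather than hitting the store.
    pagination_chunk = PaginationChunk(chunk=[])
else:
    pagination_chunk = yield self.store.get_relations_for_event(
        event_id=parent_id,
        relation_type=relation_type,
        event_type=event_type,
        limit=limit,
        from_token=from_token,
        to_token=to_token,
    )
```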
@@ -37,7 +37,6 @@ from synapse.logging.context import (
 )
 from synapse.metrics.background_process_metrics import run_as_background_process
 from synapse.types import get_domain_from_id
-from synapse.util import batch_iter
 from synapse.util.metrics import Measure

 from ._base import SQLBaseStore
@@ -219,116 +218,9 @@ class EventsWorkerStore(SQLBaseStore):
         if not event_ids:
             defer.returnValue([])

-        # there may be duplicates so we cast the list to a set
-        event_entry_map = yield self._get_events_from_cache_or_db(
-            set(event_ids), allow_rejected=allow_rejected
-        )
+        event_id_list = event_ids
+        event_ids = set(event_ids)

-        events = []
-        for event_id in event_ids:
-            entry = event_entry_map.get(event_id, None)
-            if not entry:
-                continue
-
-            if not allow_rejected:
-                assert not entry.event.rejected_reason, (
-                    "rejected event returned from _get_events_from_cache_or_db despite "
-                    "allow_rejected=False"
-                )
-
-            # We may not have had the original event when we received a redaction, so
-            # we have to recheck auth now.
-
-            if not allow_rejected and entry.event.type == EventTypes.Redaction:
-                redacted_event_id = entry.event.redacts
-                event_map = yield self._get_events_from_cache_or_db([redacted_event_id])
-                original_event_entry = event_map.get(redacted_event_id)
-                if not original_event_entry:
-                    # we don't have the redacted event (or it was rejected).
-                    #
-                    # We assume that the redaction isn't authorized for now; if the
-                    # redacted event later turns up, the redaction will be re-checked,
-                    # and if it is found valid, the original will get redacted before it
-                    # is served to the client.
-                    logger.debug(
-                        "Withholding redaction event %s since we don't (yet) have the "
-                        "original %s",
-                        event_id,
-                        redacted_event_id,
-                    )
-                    continue
-
-                original_event = original_event_entry.event
-                if original_event.type == EventTypes.Create:
-                    # we never serve redactions of Creates to clients.
-                    logger.info(
-                        "Withholding redaction %s of create event %s",
-                        event_id,
-                        redacted_event_id,
-                    )
-                    continue
-
-                if original_event.room_id != entry.event.room_id:
-                    logger.info(
-                        "Withholding redaction %s of event %s from a different room",
-                        event_id,
-                        redacted_event_id,
-                    )
-                    continue
-
-                if entry.event.internal_metadata.need_to_check_redaction():
-                    original_domain = get_domain_from_id(original_event.sender)
-                    redaction_domain = get_domain_from_id(entry.event.sender)
-                    if original_domain != redaction_domain:
-                        # the senders don't match, so this is forbidden
-                        logger.info(
-                            "Withholding redaction %s whose sender domain %s doesn't "
-                            "match that of redacted event %s %s",
-                            event_id,
-                            redaction_domain,
-                            redacted_event_id,
-                            original_domain,
-                        )
-                        continue
-
-                    # Update the cache to save doing the checks again.
-                    entry.event.internal_metadata.recheck_redaction = False
-
-            if check_redacted and entry.redacted_event:
-                event = entry.redacted_event
-            else:
-                event = entry.event
-
-            events.append(event)
-
-            if get_prev_content:
-                if "replaces_state" in event.unsigned:
-                    prev = yield self.get_event(
-                        event.unsigned["replaces_state"],
-                        get_prev_content=False,
-                        allow_none=True,
-                    )
-                    if prev:
-                        event.unsigned = dict(event.unsigned)
-                        event.unsigned["prev_content"] = prev.content
-                        event.unsigned["prev_sender"] = prev.sender
-
-        defer.returnValue(events)
-
-    @defer.inlineCallbacks
-    def _get_events_from_cache_or_db(self, event_ids, allow_rejected=False):
-        """Fetch a bunch of events from the cache or the database.
-
-        If events are pulled from the database, they will be cached for future lookups.
-
-        Args:
-            event_ids (Iterable[str]): The event_ids of the events to fetch
-            allow_rejected (bool): Whether to include rejected events
-
-        Returns:
-            Deferred[Dict[str, _EventCacheEntry]]:
-                map from event id to result
-        """
         event_entry_map = self._get_events_from_cache(
             event_ids, allow_rejected=allow_rejected
         )
@@ -351,7 +243,81 @@ class EventsWorkerStore(SQLBaseStore):

         event_entry_map.update(missing_events)

-        return event_entry_map
+        events = []
+        for event_id in event_id_list:
+            entry = event_entry_map.get(event_id, None)
+            if not entry:
+                continue
+
+            # Starting in room version v3, some redactions need to be rechecked if we
+            # didn't have the redacted event at the time, so we recheck on read
+            # instead.
+            if not allow_rejected and entry.event.type == EventTypes.Redaction:
+                if entry.event.internal_metadata.need_to_check_redaction():
+                    # XXX: we need to avoid calling get_event here.
+                    #
+                    # The problem is that we end up at this point when an event
+                    # which has been redacted is pulled out of the database by
+                    # _enqueue_events, because _enqueue_events needs to check
+                    # the redaction before it can cache the redacted event. So
+                    # obviously, calling get_event to get the redacted event out
+                    # of the database gives us an infinite loop.
+                    #
+                    # For now (quick hack to fix during 0.99 release cycle), we
+                    # just go and fetch the relevant row from the db, but it
+                    # would be nice to think about how we can cache this rather
+                    # than hit the db every time we access a redaction event.
+                    #
+                    # One thought on how to do this:
+                    #  1. split get_events_as_list up so that it is divided into
+                    #     (a) get the rawish event from the db/cache, (b) do the
+                    #     redaction/rejection filtering
+                    #  2. have _get_event_from_row just call the first half of
+                    #     that
+
+                    orig_sender = yield self._simple_select_one_onecol(
+                        table="events",
+                        keyvalues={"event_id": entry.event.redacts},
+                        retcol="sender",
+                        allow_none=True,
+                    )
+
+                    expected_domain = get_domain_from_id(entry.event.sender)
+                    if (
+                        orig_sender
+                        and get_domain_from_id(orig_sender) == expected_domain
+                    ):
+                        # This redaction event is allowed. Mark as not needing a
+                        # recheck.
+                        entry.event.internal_metadata.recheck_redaction = False
+                    else:
+                        # We don't have the event that is being redacted, so we
+                        # assume that the event isn't authorized for now. (If we
+                        # later receive the event, then we will always redact
+                        # it anyway, since we have this redaction)
+                        continue
+
+            if allow_rejected or not entry.event.rejected_reason:
+                if check_redacted and entry.redacted_event:
+                    event = entry.redacted_event
+                else:
+                    event = entry.event
+
+                events.append(event)
+
+                if get_prev_content:
+                    if "replaces_state" in event.unsigned:
+                        prev = yield self.get_event(
+                            event.unsigned["replaces_state"],
+                            get_prev_content=False,
+                            allow_none=True,
+                        )
+                        if prev:
+                            event.unsigned = dict(event.unsigned)
+                            event.unsigned["prev_content"] = prev.content
+                            event.unsigned["prev_sender"] = prev.sender
+
+        defer.returnValue(events)

     def _invalidate_get_event_cache(self, event_id):
         self._get_event_cache.invalidate((event_id,))
@@ -360,7 +326,7 @@ class EventsWorkerStore(SQLBaseStore):
         """Fetch events from the caches

         Args:
-            events (Iterable[str]): list of event_ids to fetch
+            events (list(str)): list of event_ids to fetch
             allow_rejected (bool): Whether to return events that were rejected
             update_metrics (bool): Whether to update the cache hit ratio metrics

@@ -418,16 +384,19 @@ class EventsWorkerStore(SQLBaseStore):
                 The fetch requests. Each entry consists of a list of event
                 ids to be fetched, and a deferred to be completed once the
                 events have been fetched.

         """
         with Measure(self._clock, "_fetch_event_list"):
             try:
                 event_id_lists = list(zip(*event_list))[0]
                 event_ids = [item for sublist in event_id_lists for item in sublist]

-                row_dict = self._new_transaction(
+                rows = self._new_transaction(
                     conn, "do_fetch", [], [], self._fetch_event_rows, event_ids
                 )
+
+                row_dict = {r["event_id"]: r for r in rows}

                 # We only want to resolve deferreds from the main thread
                 def fire(lst, res):
                     for ids, d in lst:
@@ -485,7 +454,7 @@ class EventsWorkerStore(SQLBaseStore):
             logger.debug("Loaded %d events (%d rows)", len(events), len(rows))

             if not allow_rejected:
-                rows[:] = [r for r in rows if r["rejected_reason"] is None]
+                rows[:] = [r for r in rows if not r["rejects"]]

             res = yield make_deferred_yieldable(
                 defer.gatherResults(
@@ -494,8 +463,8 @@ class EventsWorkerStore(SQLBaseStore):
                            self._get_event_from_row,
                            row["internal_metadata"],
                            row["json"],
-                           row["redactions"],
-                           rejected_reason=row["rejected_reason"],
+                           row["redacts"],
+                           rejected_reason=row["rejects"],
                            format_version=row["format_version"],
                        )
                        for row in rows
@@ -506,98 +475,49 @@ class EventsWorkerStore(SQLBaseStore):

         defer.returnValue({e.event.event_id: e for e in res if e})

-    def _fetch_event_rows(self, txn, event_ids):
-        """Fetch event rows from the database
-
-        Events which are not found are omitted from the result.
-
-        The returned per-event dicts contain the following keys:
-
-          * event_id (str)
-
-          * json (str): json-encoded event structure
-
-          * internal_metadata (str): json-encoded internal metadata dict
-
-          * format_version (int|None): The format of the event. Hopefully one
-            of EventFormatVersions. 'None' means the event predates
-            EventFormatVersions (so the event is format V1).
-
-          * rejected_reason (str|None): if the event was rejected, the reason
-            why.
-
-          * redactions (List[str]): a list of event-ids which (claim to) redact
-            this event.
-
-        Args:
-            txn (twisted.enterprise.adbapi.Connection):
-            event_ids (Iterable[str]): event IDs to fetch
-
-        Returns:
-            Dict[str, Dict]: a map from event id to event info.
-        """
-        event_dict = {}
-        for evs in batch_iter(event_ids, 200):
+    def _fetch_event_rows(self, txn, events):
+        rows = []
+        N = 200
+        for i in range(1 + len(events) // N):
+            evs = events[i * N : (i + 1) * N]
+            if not evs:
+                break
+
             sql = (
                 "SELECT "
-                " e.event_id, "
+                " e.event_id as event_id, "
                 " e.internal_metadata,"
                 " e.json,"
                 " e.format_version, "
-                " rej.reason "
+                " r.redacts as redacts,"
+                " rej.event_id as rejects "
                 " FROM event_json as e"
                 " LEFT JOIN rejections as rej USING (event_id)"
+                " LEFT JOIN redactions as r ON e.event_id = r.redacts"
                 " WHERE e.event_id IN (%s)"
             ) % (",".join(["?"] * len(evs)),)

             txn.execute(sql, evs)
+            rows.extend(self.cursor_to_dict(txn))

-            for row in txn:
-                event_id = row[0]
-                event_dict[event_id] = {
-                    "event_id": event_id,
-                    "internal_metadata": row[1],
-                    "json": row[2],
-                    "format_version": row[3],
-                    "rejected_reason": row[4],
-                    "redactions": [],
-                }
-
-            # check for redactions
-            redactions_sql = (
-                "SELECT event_id, redacts FROM redactions WHERE redacts IN (%s)"
-            ) % (",".join(["?"] * len(evs)),)
-
-            txn.execute(redactions_sql, evs)
-
-            for (redacter, redacted) in txn:
-                d = event_dict.get(redacted)
-                if d:
-                    d["redactions"].append(redacter)
-
-        return event_dict
+        return rows

     @defer.inlineCallbacks
     def _get_event_from_row(
-        self, internal_metadata, js, redactions, format_version, rejected_reason=None
+        self, internal_metadata, js, redacted, format_version, rejected_reason=None
     ):
         """Parse an event row which has been read from the database

         Args:
             internal_metadata (str): json-encoded internal_metadata column
             js (str): json-encoded event body from event_json
-            redactions (list[str]): a list of the events which claim to have redacted
-                this event, from the redactions table
             format_version: (str): the 'format_version' column
             rejected_reason (str|None): the reason this event was rejected, if any

         Returns:
             _EventCacheEntry
         """
         with Measure(self._clock, "_get_event_from_row"):
             d = json.loads(js)
             internal_metadata = json.loads(internal_metadata)

             if rejected_reason:
                 rejected_reason = yield self._simple_select_one_onecol(
                     table="rejections",
                     keyvalues={"event_id": rejected_reason},
                     retcol="reason",
                     desc="_get_event_from_row_rejected_reason",
                 )

             if format_version is None:
                 # This means that we stored the event before we had the concept
                 # of a event format version, so it must be a V1 event.
@@ -609,7 +529,41 @@ class EventsWorkerStore(SQLBaseStore):
                 rejected_reason=rejected_reason,
             )

-            redacted_event = yield self._maybe_redact_event_row(original_ev, redactions)
+            redacted_event = None
+            if redacted:
+                redacted_event = prune_event(original_ev)
+
+                redaction_id = yield self._simple_select_one_onecol(
+                    table="redactions",
+                    keyvalues={"redacts": redacted_event.event_id},
+                    retcol="event_id",
+                    desc="_get_event_from_row_redactions",
+                )
+
+                redacted_event.unsigned["redacted_by"] = redaction_id
+                # Get the redaction event.
+
+                because = yield self.get_event(
+                    redaction_id, check_redacted=False, allow_none=True
+                )
+
+                if because:
+                    # It's fine to do add the event directly, since get_pdu_json
+                    # will serialise this field correctly
+                    redacted_event.unsigned["redacted_because"] = because
+
+                    # Starting in room version v3, some redactions need to be
+                    # rechecked if we didn't have the redacted event at the
+                    # time, so we recheck on read instead.
+                    if because.internal_metadata.need_to_check_redaction():
+                        expected_domain = get_domain_from_id(original_ev.sender)
+                        if get_domain_from_id(because.sender) == expected_domain:
+                            # This redaction event is allowed. Mark as not needing a
+                            # recheck.
+                            because.internal_metadata.recheck_redaction = False
+                        else:
+                            # Senders don't match, so the event isn't actually redacted
+                            redacted_event = None

             cache_entry = _EventCacheEntry(
                 event=original_ev, redacted_event=redacted_event
@@ -619,83 +573,6 @@ class EventsWorkerStore(SQLBaseStore):

         defer.returnValue(cache_entry)

-    @defer.inlineCallbacks
-    def _maybe_redact_event_row(self, original_ev, redactions):
-        """Given an event object and a list of possible redacting event ids,
-        determine whether to honour any of those redactions and if so return a redacted
-        event.
-
-        Args:
-             original_ev (EventBase):
-             redactions (iterable[str]): list of event ids of potential redaction events
-
-        Returns:
-            Deferred[EventBase|None]: if the event should be redacted, a pruned
-                event object. Otherwise, None.
-        """
-        if original_ev.type == "m.room.create":
-            # we choose to ignore redactions of m.room.create events.
-            return None
-
-        if original_ev.type == "m.room.redaction":
-            # ... and redaction events
-            return None
-
-        redaction_map = yield self._get_events_from_cache_or_db(redactions)
-
-        for redaction_id in redactions:
-            redaction_entry = redaction_map.get(redaction_id)
-            if not redaction_entry:
-                # we don't have the redaction event, or the redaction event was not
-                # authorized.
-                logger.debug(
-                    "%s was redacted by %s but redaction not found/authed",
-                    original_ev.event_id,
-                    redaction_id,
-                )
-                continue
-
-            redaction_event = redaction_entry.event
-            if redaction_event.room_id != original_ev.room_id:
-                logger.debug(
-                    "%s was redacted by %s but redaction was in a different room!",
-                    original_ev.event_id,
-                    redaction_id,
-                )
-                continue
-
-            # Starting in room version v3, some redactions need to be
-            # rechecked if we didn't have the redacted event at the
-            # time, so we recheck on read instead.
-            if redaction_event.internal_metadata.need_to_check_redaction():
-                expected_domain = get_domain_from_id(original_ev.sender)
-                if get_domain_from_id(redaction_event.sender) == expected_domain:
-                    # This redaction event is allowed. Mark as not needing a recheck.
-                    redaction_event.internal_metadata.recheck_redaction = False
-                else:
-                    # Senders don't match, so the event isn't actually redacted
-                    logger.debug(
-                        "%s was redacted by %s but the senders don't match",
-                        original_ev.event_id,
-                        redaction_id,
-                    )
-                    continue
-
-            logger.debug("Redacting %s due to %s", original_ev.event_id, redaction_id)
-
-            # we found a good redaction event. Redact!
-            redacted_event = prune_event(original_ev)
-            redacted_event.unsigned["redacted_by"] = redaction_id
-
-            # It's fine to add the event directly, since get_pdu_json
-            # will serialise this field correctly
-            redacted_event.unsigned["redacted_because"] = redaction_event
-
-            return redacted_event
-
-        # no valid redaction found for this event
-        return None
-
     @defer.inlineCallbacks
     def have_events_in_timeline(self, event_ids):
         """Given a list of event ids, check if we have already processed and

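Both sides of the `_fetch_event_rows` hunk above chunk the `IN (%s)` query into batches of 200 event ids so the SQL parameter list stays bounded: the older branch does this with a manual index loop (`N = 200`), while v1.2.1 uses `batch_iter` from `synapse.util`. A minimal stand-in for that helper, assuming a straightforward generator implementation (the upstream version may differ in detail):

```python
from itertools import islice

def batch_iter(iterable, size):
    """Yield successive batches of at most `size` items from `iterable`.

    Sketch of the helper imported from synapse.util on the v1.2.1 side;
    this is an assumption about its behaviour, not the upstream code.
    """
    sourceiter = iter(iterable)
    while True:
        batch = tuple(islice(sourceiter, size))
        if not batch:
            return
        yield batch

# e.g. 450 ids yield batches of 200, 200 and 50:
# for evs in batch_iter(event_ids, 200):
#     ...
```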
@@ -1,179 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2019 The Matrix.org Foundation C.I.C.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from synapse.rest import admin
-from synapse.rest.client.v1 import login, room
-from synapse.rest.client.v2_alpha import sync
-
-from tests.unittest import HomeserverTestCase
-
-
-class RedactionsTestCase(HomeserverTestCase):
-    """Tests that various redaction events are handled correctly"""
-
-    servlets = [
-        admin.register_servlets,
-        room.register_servlets,
-        login.register_servlets,
-        sync.register_servlets,
-    ]
-
-    def prepare(self, reactor, clock, hs):
-        # register a couple of users
-        self.mod_user_id = self.register_user("user1", "pass")
-        self.mod_access_token = self.login("user1", "pass")
-        self.other_user_id = self.register_user("otheruser", "pass")
-        self.other_access_token = self.login("otheruser", "pass")
-
-        # Create a room
-        self.room_id = self.helper.create_room_as(
-            self.mod_user_id, tok=self.mod_access_token
-        )
-
-        # Invite the other user
-        self.helper.invite(
-            room=self.room_id,
-            src=self.mod_user_id,
-            tok=self.mod_access_token,
-            targ=self.other_user_id,
-        )
-        # The other user joins
-        self.helper.join(
-            room=self.room_id, user=self.other_user_id, tok=self.other_access_token
-        )
-
-    def _redact_event(self, access_token, room_id, event_id, expect_code=200):
-        """Helper function to send a redaction event.
-
-        Returns the json body.
-        """
-        path = "/_matrix/client/r0/rooms/%s/redact/%s" % (room_id, event_id)
-
-        request, channel = self.make_request(
-            "POST", path, content={}, access_token=access_token
-        )
-        self.render(request)
-        self.assertEqual(int(channel.result["code"]), expect_code)
-        return channel.json_body
-
-    def _sync_room_timeline(self, access_token, room_id):
-        request, channel = self.make_request(
-            "GET", "sync", access_token=self.mod_access_token
-        )
-        self.render(request)
-        self.assertEqual(channel.result["code"], b"200")
-        room_sync = channel.json_body["rooms"]["join"][room_id]
-        return room_sync["timeline"]["events"]
-
-    def test_redact_event_as_moderator(self):
-        # as a regular user, send a message to redact
-        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
-        msg_id = b["event_id"]
-
-        # as the moderator, send a redaction
-        b = self._redact_event(self.mod_access_token, self.room_id, msg_id)
-        redaction_id = b["event_id"]
-
-        # now sync
-        timeline = self._sync_room_timeline(self.mod_access_token, self.room_id)
-
-        # the last event should be the redaction
-        self.assertEqual(timeline[-1]["event_id"], redaction_id)
-        self.assertEqual(timeline[-1]["redacts"], msg_id)
-
-        # and the penultimate should be the redacted original
-        self.assertEqual(timeline[-2]["event_id"], msg_id)
-        self.assertEqual(timeline[-2]["unsigned"]["redacted_by"], redaction_id)
-        self.assertEqual(timeline[-2]["content"], {})
-
-    def test_redact_event_as_normal(self):
-        # as a regular user, send a message to redact
-        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
-        normal_msg_id = b["event_id"]
-
-        # also send one as the admin
-        b = self.helper.send(room_id=self.room_id, tok=self.mod_access_token)
-        admin_msg_id = b["event_id"]
-
-        # as a normal, try to redact the admin's event
-        self._redact_event(
-            self.other_access_token, self.room_id, admin_msg_id, expect_code=403
-        )
-
-        # now try to redact our own event
-        b = self._redact_event(self.other_access_token, self.room_id, normal_msg_id)
-        redaction_id = b["event_id"]
-
-        # now sync
-        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)
-
-        # the last event should be the redaction of the normal event
-        self.assertEqual(timeline[-1]["event_id"], redaction_id)
-        self.assertEqual(timeline[-1]["redacts"], normal_msg_id)
-
-        # the penultimate should be the unredacted one from the admin
-        self.assertEqual(timeline[-2]["event_id"], admin_msg_id)
-        self.assertNotIn("redacted_by", timeline[-2]["unsigned"])
-        self.assertTrue(timeline[-2]["content"]["body"], {})
-
-        # and the antepenultimate should be the redacted normal
-        self.assertEqual(timeline[-3]["event_id"], normal_msg_id)
-        self.assertEqual(timeline[-3]["unsigned"]["redacted_by"], redaction_id)
-        self.assertEqual(timeline[-3]["content"], {})
-
-    def test_redact_nonexistent_event(self):
-        # control case: an existing event
-        b = self.helper.send(room_id=self.room_id, tok=self.other_access_token)
-        msg_id = b["event_id"]
-        b = self._redact_event(self.other_access_token, self.room_id, msg_id)
-        redaction_id = b["event_id"]
-
-        # room moderators can send redactions for non-existent events
-        self._redact_event(self.mod_access_token, self.room_id, "$zzz")
-
-        # ... but normals cannot
-        self._redact_event(
-            self.other_access_token, self.room_id, "$zzz", expect_code=404
-        )
-
-        # when we sync, we should see only the valid redaction
-        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)
-        self.assertEqual(timeline[-1]["event_id"], redaction_id)
-        self.assertEqual(timeline[-1]["redacts"], msg_id)
-
-        # and the penultimate should be the redacted original
-        self.assertEqual(timeline[-2]["event_id"], msg_id)
-        self.assertEqual(timeline[-2]["unsigned"]["redacted_by"], redaction_id)
-        self.assertEqual(timeline[-2]["content"], {})
-
-    def test_redact_create_event(self):
-        # control case: an existing event
-        b = self.helper.send(room_id=self.room_id, tok=self.mod_access_token)
-        msg_id = b["event_id"]
-        self._redact_event(self.mod_access_token, self.room_id, msg_id)
-
-        # sync the room, to get the id of the create event
-        timeline = self._sync_room_timeline(self.other_access_token, self.room_id)
-        create_event_id = timeline[0]["event_id"]
-
-        # room moderators cannot send redactions for create events
-        self._redact_event(
-            self.mod_access_token, self.room_id, create_event_id, expect_code=403
-        )
-
-        # and nor can normals
-        self._redact_event(
-            self.other_access_token, self.room_id, create_event_id, expect_code=403
-        )
@@ -93,7 +93,7 @@ class RelationsTestCase(unittest.HomeserverTestCase):
     def test_deny_double_react(self):
         """Test that we deny relations on membership events
         """
-        channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", key="a")
+        channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
         self.assertEquals(200, channel.code, channel.json_body)

         channel = self._send_relation(RelationTypes.ANNOTATION, "m.reaction", "a")
@@ -540,122 +540,14 @@ class RelationsTestCase(unittest.HomeserverTestCase):
             {"event_id": edit_event_id, "sender": self.user_id}, m_replace_dict
         )

-    def test_relations_redaction_redacts_edits(self):
-        """Test that edits of an event are redacted when the original event
-        is redacted.
-        """
-        # Send a new event
-        res = self.helper.send(self.room, body="Heyo!", tok=self.user_token)
-        original_event_id = res["event_id"]
-
-        # Add a relation
-        channel = self._send_relation(
-            RelationTypes.REPLACE,
-            "m.room.message",
-            parent_id=original_event_id,
-            content={
-                "msgtype": "m.text",
-                "body": "Wibble",
-                "m.new_content": {"msgtype": "m.text", "body": "First edit"},
-            },
-        )
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        # Check the relation is returned
-        request, channel = self.make_request(
-            "GET",
-            "/_matrix/client/unstable/rooms/%s/relations/%s/m.replace/m.room.message"
-            % (self.room, original_event_id),
-            access_token=self.user_token,
-        )
-        self.render(request)
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        self.assertIn("chunk", channel.json_body)
-        self.assertEquals(len(channel.json_body["chunk"]), 1)
-
-        # Redact the original event
-        request, channel = self.make_request(
-            "PUT",
-            "/rooms/%s/redact/%s/%s"
-            % (self.room, original_event_id, "test_relations_redaction_redacts_edits"),
-            access_token=self.user_token,
-            content="{}",
-        )
-        self.render(request)
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        # Try to check for remaining m.replace relations
-        request, channel = self.make_request(
-            "GET",
-            "/_matrix/client/unstable/rooms/%s/relations/%s/m.replace/m.room.message"
-            % (self.room, original_event_id),
-            access_token=self.user_token,
-        )
-        self.render(request)
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        # Check that no relations are returned
-        self.assertIn("chunk", channel.json_body)
-        self.assertEquals(channel.json_body["chunk"], [])
-
-    def test_aggregations_redaction_prevents_access_to_aggregations(self):
-        """Test that annotations of an event are redacted when the original event
-        is redacted.
-        """
-        # Send a new event
-        res = self.helper.send(self.room, body="Hello!", tok=self.user_token)
-        original_event_id = res["event_id"]
-
-        # Add a relation
-        channel = self._send_relation(
-            RelationTypes.ANNOTATION, "m.reaction", key="👍", parent_id=original_event_id
-        )
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        # Redact the original
-        request, channel = self.make_request(
-            "PUT",
-            "/rooms/%s/redact/%s/%s"
-            % (
-                self.room,
-                original_event_id,
-                "test_aggregations_redaction_prevents_access_to_aggregations",
-            ),
-            access_token=self.user_token,
-            content="{}",
-        )
-        self.render(request)
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        # Check that aggregations returns zero
-        request, channel = self.make_request(
-            "GET",
-            "/_matrix/client/unstable/rooms/%s/aggregations/%s/m.annotation/m.reaction"
-            % (self.room, original_event_id),
-            access_token=self.user_token,
-        )
-        self.render(request)
-        self.assertEquals(200, channel.code, channel.json_body)
-
-        self.assertIn("chunk", channel.json_body)
-        self.assertEquals(channel.json_body["chunk"], [])
-
     def _send_relation(
-        self,
-        relation_type,
-        event_type,
-        key=None,
-        content={},
-        access_token=None,
-        parent_id=None,
+        self, relation_type, event_type, key=None, content={}, access_token=None
     ):
         """Helper function to send a relation pointing at `self.parent_id`

         Args:
             relation_type (str): One of `RelationTypes`
             event_type (str): The type of the event to create
-            parent_id (str): The event_id this relation relates to. If None, then self.parent_id
             key (str|None): The aggregation key used for m.annotation relation
                 type.
             content(dict|None): The content of the created event.
@@ -672,12 +564,10 @@ class RelationsTestCase(unittest.HomeserverTestCase):
         if key:
             query = "?key=" + six.moves.urllib.parse.quote_plus(key.encode("utf-8"))

-        original_id = parent_id if parent_id else self.parent_id
-
         request, channel = self.make_request(
             "POST",
             "/_matrix/client/unstable/rooms/%s/send_relation/%s/%s/%s%s"
-            % (self.room, original_id, relation_type, event_type, query),
+            % (self.room, self.parent_id, relation_type, event_type, query),
             json.dumps(content).encode("utf-8"),
             access_token=access_token,
         )

@@ -13,7 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from synapse.metrics import REGISTRY, generate_latest
+from prometheus_client.exposition import generate_latest
+
+from synapse.metrics import REGISTRY
 from synapse.types import Requester, UserID

 from tests.unittest import HomeserverTestCase

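The import churn above reflects where the Prometheus text encoder lives: the older branch takes `generate_latest` straight from `prometheus_client.exposition`, while v1.2.1 imports it from `synapse.metrics` (which, judging by this hunk, re-exports it). Either way the call shape is the same; a minimal usage sketch:

```python
from prometheus_client.exposition import generate_latest

from synapse.metrics import REGISTRY

# Render the current contents of the registry in the Prometheus text
# exposition format; generate_latest returns bytes.
metrics_bytes = generate_latest(REGISTRY)
print(metrics_bytes.decode("utf-8"))
```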
@@ -61,10 +61,7 @@ class JsonResourceTests(unittest.TestCase):

         res = JsonResource(self.homeserver)
         res.register_paths(
-            "GET",
-            [re.compile("^/_matrix/foo/(?P<room_id>[^/]*)$")],
-            _callback,
-            "test_servlet",
+            "GET", [re.compile("^/_matrix/foo/(?P<room_id>[^/]*)$")], _callback
         )

         request, channel = make_request(
@@ -85,9 +82,7 @@ class JsonResourceTests(unittest.TestCase):
             raise Exception("boo")

         res = JsonResource(self.homeserver)
-        res.register_paths(
-            "GET", [re.compile("^/_matrix/foo$")], _callback, "test_servlet"
-        )
+        res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

         request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)
@@ -110,9 +105,7 @@ class JsonResourceTests(unittest.TestCase):
             return make_deferred_yieldable(d)

         res = JsonResource(self.homeserver)
-        res.register_paths(
-            "GET", [re.compile("^/_matrix/foo$")], _callback, "test_servlet"
-        )
+        res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

         request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)
@@ -129,9 +122,7 @@ class JsonResourceTests(unittest.TestCase):
             raise SynapseError(403, "Forbidden!!one!", Codes.FORBIDDEN)

         res = JsonResource(self.homeserver)
-        res.register_paths(
-            "GET", [re.compile("^/_matrix/foo$")], _callback, "test_servlet"
-        )
+        res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

         request, channel = make_request(self.reactor, b"GET", b"/_matrix/foo")
         render(request, res, self.reactor)
@@ -152,9 +143,7 @@ class JsonResourceTests(unittest.TestCase):
             self.fail("shouldn't ever get here")

         res = JsonResource(self.homeserver)
-        res.register_paths(
-            "GET", [re.compile("^/_matrix/foo$")], _callback, "test_servlet"
-        )
+        res.register_paths("GET", [re.compile("^/_matrix/foo$")], _callback)

         request, channel = make_request(self.reactor, b"GET", b"/_matrix/foobar")
         render(request, res, self.reactor)

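Every edit in this test file tracks a single API change: v1.2.1's `register_paths` takes a fourth `servlet_name` argument, which the servlets in the first file of this diff pass as `self.__class__.__name__`, while the older branch has the three-argument form. A sketch of the v1.2.1-style call, with `res` and `_callback` as stand-ins for the objects used in the tests above (the purpose of the name, presumably labelling the handler for logging or metrics, is an assumption):

```python
import re

def _callback(request, **kwargs):
    # Stand-in handler, as in the tests above.
    return 200, {}

def register(res):
    # v1.2.1-style registration; `res` is assumed to be a JsonResource.
    # The fourth argument names the servlet.
    res.register_paths(
        "GET",
        [re.compile("^/_matrix/foo$")],
        _callback,
        "test_servlet",
    )
```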
@@ -447,7 +447,6 @@ class HomeserverTestCase(TestCase):
         # Create the user
         request, channel = self.make_request("GET", "/_matrix/client/r0/admin/register")
         self.render(request)
-        self.assertEqual(channel.code, 200)
         nonce = channel.json_body["nonce"]

         want_mac = hmac.new(key=b"shared", digestmod=hashlib.sha1)

@@ -471,7 +471,7 @@ class MockHttpResource(HttpServer):

         raise KeyError("No event can handle %s" % path)

-    def register_paths(self, method, path_patterns, callback, servlet_name):
+    def register_paths(self, method, path_patterns, callback):
         for path_pattern in path_patterns:
             self.callbacks.append((method, path_pattern, callback))
