
Merge branch 'develop' into cross-signing

Hubert Chathi
2019-07-19 17:13:22 -04:00
135 changed files with 3684 additions and 944 deletions


@@ -117,8 +117,10 @@ steps:
limit: 2
- label: ":python: 3.5 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
plugins:
@@ -134,8 +136,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 9.5"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -151,8 +155,10 @@ steps:
limit: 2
- label: ":python: 3.7 / :postgres: 11"
agents:
queue: "medium"
env:
TRIAL_FLAGS: "-j 4"
TRIAL_FLAGS: "-j 8"
command:
- "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
plugins:
@@ -179,6 +185,7 @@ steps:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
@@ -199,6 +206,7 @@ steps:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
retry:
automatic:
- exit_status: -1
@@ -220,6 +228,7 @@ steps:
image: "matrixdotorg/sytest-synapse:py35"
propagate-environment: true
always-pull: true
workdir: "/src"
soft_fail: true
retry:
automatic:


@@ -30,11 +30,10 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `CircleCI <https://circleci.com/gh/matrix-org>`_ and `Buildkite
<https://buildkite.com/matrix-dot-org/synapse>`_ for continuous integration.
Buildkite builds need to be authorised by a maintainer. If your change breaks
the build, this will be shown in GitHub, so please keep an eye on the pull
request for feedback.
We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
continuous integration. Buildkite builds need to be authorised by a
maintainer. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
@@ -70,13 +69,21 @@ All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes).
To create a changelog entry, make a new file in the ``changelog.d`` directory named
in the format of ``PRnumber.type``. The type can be one of the following:
The content of the file is your changelog entry, which can contain Markdown
formatting. The entry should end with a full stop ('.') for consistency.
* ``feature``.
* ``bugfix``.
* ``docker`` (for updates to the Docker image).
* ``doc`` (for updates to the documentation).
* ``removal`` (also used for deprecations).
* ``misc`` (for internal-only changes).
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our `changelog
<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
Adding credits to the changelog is encouraged; we value your
contributions and would like to have you shouted out in the release notes!
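For example, a ``feature`` entry for a hypothetical PR #1234 could be created with a
one-liner like the following (the PR number and wording are illustrative only)::

    from pathlib import Path

    # The filename encodes the PR number and the change type.
    Path("changelog.d/1234.feature").write_text(
        "Add the ability to frobnicate widgets.\n"
    )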


@@ -272,7 +272,7 @@ to install using pip and a virtualenv::
virtualenv -p python3 env
source env/bin/activate
python -m pip install --no-pep-517 -e .[all]
python -m pip install --no-use-pep517 -e .[all]
This will run a process of downloading and installing all the needed
dependencies into a virtual env.


@@ -49,6 +49,13 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v1.2.0
===================
Some counter metrics have been renamed, with the old names deprecated. See
`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.
Upgrading to v1.1.0
===================

changelog.d/5397.doc (new file)

@@ -0,0 +1 @@
Add information about nginx normalisation to reverse_proxy.rst. Contributed by @skalarproduktraum - thanks!

changelog.d/5544.feature (new file)

@@ -0,0 +1,2 @@
Add support for opentracing.

changelog.d/5589.feature (new file)

@@ -0,0 +1 @@
Add ability to pull all locally stored events out of synapse that a particular user can see.

changelog.d/5597.feature (new file)

@@ -0,0 +1 @@
Add a basic admin command app to allow server operators to run Synapse admin commands separately from the main production instance.

changelog.d/5619.docker (new file)

@@ -0,0 +1 @@
Base Docker image on a newer Alpine Linux version (3.8 -> 3.10).

changelog.d/5620.docker (new file)

@@ -0,0 +1 @@
Add missing space in default logging file format generated by the Docker image.

changelog.d/5626.feature (new file)

@@ -0,0 +1 @@
Include the original event when asking for its relations.

changelog.d/5627.misc (new file)

@@ -0,0 +1 @@
Add `lint.sh` to the scripts-dev folder which will run all linting steps required by CI.

changelog.d/5629.bugfix (new file)

@@ -0,0 +1 @@
Forbid viewing relations on an event once it has been redacted.

changelog.d/5636.misc (new file)

@@ -0,0 +1 @@
Some counter metrics exposed over Prometheus have been renamed, with the old names preserved for backwards compatibility and deprecated. See `docs/metrics-howto.rst` for details.

changelog.d/5638.bugfix (new file)

@@ -0,0 +1 @@
Fix requests to the `/store_invite` endpoint of identity servers being sent in the wrong format.

changelog.d/5642.misc (new file)

@@ -0,0 +1 @@
Remove access-token support from `RegistrationStore.register`, and rename it.

changelog.d/5644.bugfix (new file)

@@ -0,0 +1 @@
Fix newly-registered users not being able to lookup their own profile without joining a room.

changelog.d/5645.misc (new file)

@@ -0,0 +1 @@
Remove unused and unnecessary check for FederationDeniedError in _exception_to_failure.

changelog.d/5651.doc (new file)

@@ -0,0 +1 @@
--no-pep517 should be --no-use-pep517 in the documentation on setting up the development environment.

changelog.d/5654.bugfix (new file)

@@ -0,0 +1 @@
Fix bug in #5626 that prevented the original_event field from actually having the contents of the original event in a call to `/relations`.

changelog.d/5655.misc (new file)

@@ -0,0 +1 @@
Fix a small typo in a code comment.

changelog.d/5656.misc (new file)

@@ -0,0 +1 @@
Clean up exception handling around client access tokens.

changelog.d/5657.misc (new file)

@@ -0,0 +1 @@
Add a mechanism for per-test homeserver configuration in the unit tests.

changelog.d/5658.bugfix (new file)

@@ -0,0 +1 @@
Fix 3PID bind requests being sent to identity servers as `application/x-form-www-urlencoded` data, which is deprecated.

changelog.d/5659.misc (new file)

@@ -0,0 +1 @@
Inline issue_access_token.

changelog.d/5660.feature (new file)

@@ -0,0 +1 @@
Implement `session_lifetime` configuration option, after which access tokens will expire.

changelog.d/5661.doc (new file)

@@ -0,0 +1 @@
Improvements to Postgres setup instructions. Contributed by @Lrizika - thanks!

changelog.d/5664.misc (new file)

@@ -0,0 +1 @@
Update the sytest BuildKite configuration to checkout Synapse in `/src`.

changelog.d/5673.misc (new file)

@@ -0,0 +1 @@
Add a `docker` type to the towncrier configuration.

changelog.d/5674.feature (new file)

@@ -0,0 +1 @@
Return "This account has been deactivated" when a deactivated user tries to login.

changelog.d/5675.doc (new file)

@@ -0,0 +1 @@
Minor tweaks to postgres documentation.

changelog.d/5678.removal (new file)

@@ -0,0 +1 @@
Synapse no longer accepts the `-v`/`--verbose`, `-f`/`--log-file`, or `--log-config` command line flags, and removes the deprecated `verbose` and `log_file` configuration file options. Users of these options should migrate their options into the dedicated log configuration.

changelog.d/5689.misc (new file)

@@ -0,0 +1 @@
Convert `synapse.federation.transport.server` to `async`. Might improve some stack traces.

changelog.d/5695.misc (new file)

@@ -0,0 +1 @@
Add precautionary measures to prevent future abuse of `window.opener` in default welcome page.

changelog.d/5699.bugfix (new file)

@@ -0,0 +1 @@
Fix some problems with authenticating redactions in recent room versions.

changelog.d/5700.bugfix (new file)

@@ -0,0 +1,2 @@
Fix some problems with authenticating redactions in recent room versions.

changelog.d/5701.bugfix (new file)

@@ -0,0 +1 @@
Ignore redactions of m.room.create events.

changelog.d/5706.misc (new file)

@@ -0,0 +1 @@
Reduce database IO usage by optimising queries for current membership.

changelog.d/5707.bugfix (new file)

@@ -0,0 +1 @@
Fix some problems with authenticating redactions in recent room versions.

changelog.d/5712.feature (new file)

@@ -0,0 +1,2 @@
Add support for opentracing.

changelog.d/5713.misc (new file)

@@ -0,0 +1 @@
Improve caching when fetching `get_filtered_current_state_ids`.

changelog.d/5714.feature (new file)

@@ -0,0 +1 @@
Enable aggregations support by default.

changelog.d/5715.misc (new file)

@@ -0,0 +1 @@
Don't accept opentracing data from clients.

changelog.d/5717.misc (new file)

@@ -0,0 +1 @@
Speed up PostgreSQL unit tests in CI.

changelog.d/5719.misc (new file)

@@ -0,0 +1 @@
Update the coding style document.

changelog.d/5720.misc (new file)

@@ -0,0 +1 @@
Improve database query performance when recording retry intervals for remote hosts.

debian/changelog

@@ -3,6 +3,9 @@ matrix-synapse-py3 (1.1.0-1) UNRELEASED; urgency=medium
[ Amber Brown ]
* Update logging config defaults to match API changes in Synapse.
[ Richard van der Hoff ]
* Add Recommends and Depends for some libraries which you probably want.
-- Erik Johnston <erikj@rae> Thu, 04 Jul 2019 13:59:02 +0100
matrix-synapse-py3 (1.1.0) stable; urgency=medium

debian/control

@@ -2,16 +2,20 @@ Source: matrix-synapse-py3
Section: contrib/python
Priority: extra
Maintainer: Synapse Packaging team <packages@matrix.org>
# keep this list in sync with the build dependencies in docker/Dockerfile-dhvirtualenv.
Build-Depends:
debhelper (>= 9),
dh-systemd,
dh-virtualenv (>= 1.1),
libsystemd-dev,
libpq-dev,
lsb-release,
python3-dev,
python3,
python3-setuptools,
python3-pip,
python3-venv,
libsqlite3-dev,
tar,
Standards-Version: 3.9.8
Homepage: https://github.com/matrix-org/synapse
@@ -28,9 +32,12 @@ Depends:
debconf,
python3-distutils|libpython3-stdlib (<< 3.6),
${misc:Depends},
${shlibs:Depends},
${synapse:pydepends},
# some of our scripts use perl, but none of them are important,
# so we put perl:Depends in Suggests rather than Depends.
Recommends:
${shlibs1:Recommends},
Suggests:
sqlite3,
${perl:Depends},

debian/rules

@@ -3,15 +3,29 @@
# Build Debian package using https://github.com/spotify/dh-virtualenv
#
# assume we only have one package
PACKAGE_NAME:=`dh_listpackages`
override_dh_systemd_enable:
dh_systemd_enable --name=matrix-synapse
override_dh_installinit:
dh_installinit --name=matrix-synapse
# we don't really want to strip the symbols from our object files.
override_dh_strip:
override_dh_shlibdeps:
# make the postgres package's dependencies a recommendation
# rather than a hard dependency.
find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
-pshlibs1 -dRecommends
# all the other dependencies can be normal 'Depends' requirements,
# except for PIL's, which is self-contained and which confuses
# dpkg-shlibdeps.
dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2
override_dh_virtualenv:
./debian/build_virtualenv


@@ -16,7 +16,7 @@ ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8 as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder
# install the OS build deps
@@ -55,7 +55,7 @@ RUN pip install --prefix="/install" --no-warn-script-location \
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10
# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \


@@ -43,6 +43,9 @@ RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
FROM ${distro}
# Install the build dependencies
#
# NB: keep this list in sync with the list of build-deps in debian/control
# TODO: it would be nice to do that automatically.
RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \


@@ -2,7 +2,7 @@ version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:


@@ -1,4 +1,8 @@
# Code Style
Code Style
==========
Formatting tools
----------------
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
@@ -6,20 +10,20 @@ in code.
The necessary tools are detailed below.
## Formatting tools
- **black**
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all committed code is properly
formatted.
The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
opinionated code formatter, ensuring all committed code is properly
formatted.
First install ``black`` with::
First install ``black`` with::
pip install --upgrade black
pip install --upgrade black
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
Have ``black`` auto-format your code (it shouldn't change any functionality)
with::
black . --exclude="\.tox|build|env"
black . --exclude="\.tox|build|env"
- **flake8**
@@ -54,17 +58,16 @@ functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
General rules
-------------
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
- **Docstrings**: should follow the `google code style
<https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
@@ -73,6 +76,8 @@ takes a while and is very resource intensive.
- **Imports**:
- Imports should be sorted by ``isort`` as described above.
- Prefer to import classes and functions rather than packages or modules.
Example::
@@ -92,25 +97,84 @@ takes a while and is very resource intensive.
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Multiple imports from the same package can be combined onto one line::
from synapse.types import GroupID, RoomID, UserID
An effort should be made to keep the individual imports in alphabetical
order.
If the list becomes long, wrap it with parentheses and split it over
multiple lines.
- As per `PEP-8 <https://www.python.org/dev/peps/pep-0008/#imports>`_,
imports should be grouped in the following order, with a blank line between
each group:
1. standard library imports
2. related third party imports
3. local application/library specific imports
- Imports within each group should be sorted alphabetically by module name.
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
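Putting the grouping and sorting rules together, a typical import block might
look like this (module names beyond ``synapse.types`` are illustrative)::

    # standard library
    import logging
    import os

    # third party
    from twisted.internet import defer

    # local application
    from synapse.types import GroupID, RoomID, UserID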
Configuration file format
-------------------------
The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
Synapse's configuration options for server administrators. Remember that many
readers will be unfamiliar with YAML and server administration in general, so
it is important that the file be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
* Sections should be separated with a heading consisting of a single line
prefixed and suffixed with ``##``. There should be **two** blank lines
before the section header, and **one** after.
* Each option should be listed in the file with the following format:
* A comment describing the setting. Each line of this comment should be
prefixed with a hash (``#``) and a space.
The comment should describe the default behaviour (ie, what happens if
the setting is omitted), as well as what the effect will be if the
setting is changed.
Often, the comment will end with something like "uncomment the
following to \<do action>".
* A line consisting of only ``#``.
* A commented-out example setting, prefixed with only ``#``.
For boolean (on/off) options, convention is that this example should be
the *opposite* of the default (so the comment will end with "Uncomment
the following to enable [or disable] \<feature\>."). For other options,
the example should give some non-default value which is likely to be
useful to the reader.
* There should be a blank line between each option.
* Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in comment
lines starting ``# #``, as this is hard to read and confusing to
edit. Instead, leave the top-level config option uncommented, and follow
the conventions above for sub-options. Ensure that your code correctly
handles the top-level option being set to ``None`` (as it will be if no
sub-options are enabled).
* Lines should be wrapped at 80 characters.
Example::
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobnicator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code and is
maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
that the output from this script matches the desired format is left as an
exercise for the reader!


@@ -59,6 +59,108 @@ How to monitor Synapse metrics using Prometheus
Restart Prometheus.
Renaming of metrics & deprecation of old names in 1.2
-----------------------------------------------------
Synapse 1.2 updates the Prometheus metrics to match the naming convention of the
upstream ``prometheus_client``. The old names are considered deprecated and will
be removed in a future version of Synapse.
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| New Name | Old Name |
+=============================================================================+=======================================================================+
| python_gc_objects_collected_total | python_gc_objects_collected |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| python_gc_collections_total | python_gc_collections |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| process_cpu_seconds_total | process_cpu_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_start_count_total | synapse_background_process_start_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
+-----------------------------------------------------------------------------+-----------------------------------------------------------------------+
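Since both names are exported during the deprecation window, dashboards and
scrapers can be migrated incrementally. A minimal sketch of resolving a counter
under either name, where ``samples`` stands in for whatever mapping your
scraper produces::

    NEW_TO_OLD = {
        "process_cpu_seconds_total": "process_cpu_seconds",
        # ... remaining renames from the table above
    }

    def get_counter(samples, new_name):
        """Prefer the new metric name, falling back to the deprecated one."""
        if new_name in samples:
            return samples[new_name]
        return samples.get(NEW_TO_OLD.get(new_name))

    print(get_counter({"process_cpu_seconds": 42.0}, "process_cpu_seconds_total"))  # 42.0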
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------


@@ -11,7 +11,9 @@ a postgres database.
* If you are using the `matrix.org debian/ubuntu
packages <../INSTALL.md#matrixorg-packages>`_,
the necessary libraries will already be installed.
the necessary python library will already be installed, but you will need to
ensure the low-level postgres library is installed, which you can do with
``apt install libpq5``.
* For other pre-built packages, please consult the documentation from the
relevant package.
@@ -34,9 +36,14 @@ Assuming your PostgreSQL database user is called ``postgres``, create a user
su - postgres
createuser --pwprompt synapse_user
The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
Before you can authenticate with the ``synapse_user``, you must create a
database that it can access. To create a database, first connect to the database
with your database user::
su - postgres
psql
and then run::
CREATE DATABASE synapse
ENCODING 'UTF8'
@@ -46,7 +53,13 @@ encoding use, e.g.::
OWNER synapse_user;
This would create an appropriate database named ``synapse`` owned by the
``synapse_user`` user (which must already exist).
``synapse_user`` user (which must already have been created as above).
Note that the PostgreSQL database *must* have the correct encoding set (as
shown above), otherwise it will not be able to store UTF8 strings.
You may need to enable password authentication so ``synapse_user`` can connect
to the database. See https://www.postgresql.org/docs/11/auth-pg-hba-conf.html.
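Once password authentication is working, you can sanity-check the setup from
Python before pointing Synapse at it; a sketch using ``psycopg2`` (the password
is a placeholder)::

    import psycopg2

    conn = psycopg2.connect(
        dbname="synapse",
        user="synapse_user",
        password="your-password-here",
        host="localhost",
    )
    with conn.cursor() as cur:
        # Confirm the encoding that the CREATE DATABASE above configured.
        cur.execute("SHOW SERVER_ENCODING")
        print(cur.fetchone())  # ('UTF8',)
    conn.close()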
Tuning Postgres
===============


@@ -48,6 +48,8 @@ Let's assume that we expect clients to connect to our server at
proxy_set_header X-Forwarded-For $remote_addr;
}
}
Do not add a `/` after the port in `proxy_pass`, otherwise nginx will canonicalise/normalise the URI.
* Caddy::


@@ -786,6 +786,17 @@ uploads_path: "DATADIR/uploads"
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
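The ``session_lifetime`` option above is enforced when an access token is
validated: Synapse stores a ``valid_until_ms`` alongside the token and
soft-logs-out the session once it passes (see the authentication changes later
in this diff). A minimal sketch of that check, with ``time.time()`` standing in
for Synapse's clock:

    import time

    def is_token_expired(valid_until_ms):
        """True once the token's validity window has passed; None means no expiry."""
        now_ms = int(time.time() * 1000)
        return valid_until_ms is not None and valid_until_ms < now_ms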
@@ -1395,3 +1406,37 @@ password_config:
# module: "my_custom_project.SuperRulesSet"
# config:
# example_option: 'things'
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other services which support opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers with which we wish to exchange span contexts and span baggage.
#
# Though it's mostly safe to send and receive span contexts to and from
# untrusted users since span contexts are usually opaque ids, it can lead to
# two problems, namely:
# - If the span context is marked as sampled by the sending homeserver, the receiver will
# sample it. Therefore two homeservers with wildly disparate sampling policies
# could incur higher sampling counts than intended.
# - Span baggage can be arbitrary data. For safety this has been disabled in synapse
# but that doesn't prevent another server sending you baggage which will be logged
# to opentracing logs.
#
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"


@@ -14,6 +14,11 @@
name = "Bugfixes"
showcontent = true
[[tool.towncrier.type]]
directory = "docker"
name = "Updates to the Docker image"
showcontent = true
[[tool.towncrier.type]]
directory = "doc"
name = "Improved Documentation"

scripts-dev/lint.sh (new executable file)

@@ -0,0 +1,12 @@
#!/bin/sh
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements
# flake8 - lints and finds mistakes
# black - opinionated code formatter
set -e
isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts


@@ -25,7 +25,13 @@ from twisted.internet import defer
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, Codes, ResourceLimitError
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
ResourceLimitError,
)
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
@@ -63,7 +69,6 @@ class Auth(object):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.state = hs.get_state_handler()
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("cache", "token_cache", self.token_cache)
@@ -189,18 +194,17 @@ class Auth(object):
Returns:
defer.Deferred: resolves to a ``synapse.types.Requester`` object
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
AuthError if access is denied for the user in the access token
"""
# Can optionally look elsewhere in the request (e.g. headers)
try:
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
access_token = self.get_access_token_from_request(request)
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
@@ -264,18 +268,12 @@ class Auth(object):
)
)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError()
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
self.get_access_token_from_request(request)
)
if app_service is None:
defer.returnValue((None, None))
@@ -313,13 +311,25 @@ class Auth(object):
`token_id` (int|None): access token id. May be None if guest
`device_id` (str|None): device corresponding to access token
Raises:
AuthError if no user by that token exists or the token is invalid.
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
"""
if rights == "access":
# first look in the database
r = yield self._look_up_user_by_access_token(token)
if r:
valid_until_ms = r["valid_until_ms"]
if (
valid_until_ms is not None
and valid_until_ms < self.clock.time_msec()
):
# there was a valid access token, but it has expired.
# soft-logout the user.
raise InvalidClientTokenError(
msg="Access token has expired", soft_logout=True
)
defer.returnValue(r)
# otherwise it needs to be a valid macaroon
@@ -331,11 +341,7 @@ class Auth(object):
if not guest:
# non-guest access tokens must be in the database
logger.warning("Unrecognised access token - not in store.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError()
# Guest access tokens are not stored in the database (there can
# only be one access token per guest, anyway).
@@ -350,16 +356,10 @@ class Auth(object):
# guest tokens.
stored_user = yield self.store.get_user_by_id(user_id)
if not stored_user:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unknown user_id %s" % user_id,
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Guest access token used for regular user",
errcode=Codes.UNKNOWN_TOKEN,
raise InvalidClientTokenError(
"Guest access token used for regular user"
)
ret = {
"user": user,
@@ -386,11 +386,7 @@ class Auth(object):
ValueError,
) as e:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Invalid macaroon passed.")
def _parse_and_validate_macaroon(self, token, rights="access"):
"""Takes a macaroon and tries to parse and validate it. This is cached
@@ -430,11 +426,7 @@ class Auth(object):
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("Invalid macaroon passed.")
if not has_expiry and rights == "access":
self.token_cache[token] = (user_id, guest)
@@ -453,17 +445,14 @@ class Auth(object):
(str) user id
Raises:
AuthError if there is no user_id caveat in the macaroon
InvalidClientCredentialsError if there is no user_id caveat in the
macaroon
"""
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN,
)
raise InvalidClientTokenError("No user caveat in macaroon")
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
"""
@@ -527,26 +516,18 @@ class Auth(object):
"token_id": ret.get("token_id", None),
"is_guest": False,
"device_id": ret.get("device_id"),
"valid_until_ms": ret.get("valid_until_ms"),
}
defer.returnValue(user_info)
def get_appservice_by_req(self, request):
try:
token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
)
request.authenticated_entity = service.sender
return defer.succeed(service)
except KeyError:
raise AuthError(self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.")
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
@@ -625,21 +606,6 @@ class Auth(object):
defer.returnValue(auth_ids)
def check_redaction(self, room_version, event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
True if the sender is allowed to redact the target event if the
target event was created by them.
False if the sender is allowed to redact the target event with no
further checks.
Raises:
AuthError if the event sender is definitely not allowed to redact
the target event.
"""
return event_auth.check_redaction(room_version, event, auth_events)
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
"""Check if the user is allowed to edit the room's entry in the
@@ -692,20 +658,16 @@ class Auth(object):
return bool(query_params) or bool(auth_headers)
@staticmethod
def get_access_token_from_request(request, token_not_found_http_status=401):
def get_access_token_from_request(request):
"""Extracts the access_token from the request.
Args:
request: The http request.
token_not_found_http_status(int): The HTTP status code to set in the
AuthError if the token isn't found. This is used in some of the
legacy APIs to change the status code to 403 from the default of
401 since some of the old clients depended on auth errors returning
403.
Returns:
unicode: The access_token
Raises:
AuthError: If there isn't an access_token in the request.
MissingClientTokenError: If there isn't a single access_token in the
request
"""
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
@@ -714,34 +676,20 @@ class Auth(object):
# Try the get the access_token from a "Authorization: Bearer"
# header
if query_params is not None:
raise AuthError(
token_not_found_http_status,
"Mixing Authorization headers and access_token query parameters.",
errcode=Codes.MISSING_TOKEN,
raise MissingClientTokenError(
"Mixing Authorization headers and access_token query parameters."
)
if len(auth_headers) > 1:
raise AuthError(
token_not_found_http_status,
"Too many Authorization headers.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Too many Authorization headers.")
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode("ascii")
else:
raise AuthError(
token_not_found_http_status,
"Invalid Authorization header.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError("Invalid Authorization header.")
else:
# Try to get the access_token from the query params.
if not query_params:
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
)
raise MissingClientTokenError()
return query_params[0].decode("ascii")


@@ -140,6 +140,22 @@ class ConsentNotGivenError(SynapseError):
return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)
class UserDeactivatedError(SynapseError):
"""The error returned to the client when the user attempted to access an
authenticated endpoint, but the account has been deactivated.
"""
def __init__(self, msg):
"""Constructs a UserDeactivatedError
Args:
msg (str): The human-readable error message
"""
super(UserDeactivatedError, self).__init__(
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.UNKNOWN
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
@@ -211,7 +227,9 @@ class NotFoundError(SynapseError):
class AuthError(SynapseError):
"""An error raised when there was a problem authorising an event."""
"""An error raised when there was a problem authorising an event, and at various
other poorly-defined times.
"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
@@ -219,6 +237,41 @@ class AuthError(SynapseError):
super(AuthError, self).__init__(*args, **kwargs)
class InvalidClientCredentialsError(SynapseError):
"""An error raised when there was a problem with the authorisation credentials
in a client request.
https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens:
When credentials are required but missing or invalid, the HTTP call will
return with a status of 401 and the error code, M_MISSING_TOKEN or
M_UNKNOWN_TOKEN respectively.
"""
def __init__(self, msg, errcode):
super().__init__(code=401, msg=msg, errcode=errcode)
class MissingClientTokenError(InvalidClientCredentialsError):
"""Raised when we couldn't find the access token in a request"""
def __init__(self, msg="Missing access token"):
super().__init__(msg=msg, errcode="M_MISSING_TOKEN")
class InvalidClientTokenError(InvalidClientCredentialsError):
"""Raised when we didn't understand the access token in a request"""
def __init__(self, msg="Unrecognised access token", soft_logout=False):
super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
self._soft_logout = soft_logout
def error_dict(self):
d = super().error_dict()
d["soft_logout"] = self._soft_logout
return d
class ResourceLimitError(SynapseError):
"""
Any error raised when there is a problem with resource usage.


@@ -48,7 +48,7 @@ def register_sighup(func):
_sighup_callbacks.append(func)
def start_worker_reactor(appname, config):
def start_worker_reactor(appname, config, run_command=reactor.run):
""" Run the reactor in the main process
Daemonizes if necessary, and then configures some resources, before starting
@@ -57,6 +57,7 @@ def start_worker_reactor(appname, config):
Args:
appname (str): application name which will be sent to syslog
config (synapse.config.Config): config object
run_command (Callable[]): callable that actually runs the reactor
"""
logger = logging.getLogger(config.worker_app)
@@ -69,11 +70,19 @@ def start_worker_reactor(appname, config):
daemonize=config.worker_daemonize,
print_pidfile=config.print_pidfile,
logger=logger,
run_command=run_command,
)
def start_reactor(
appname, soft_file_limit, gc_thresholds, pid_file, daemonize, print_pidfile, logger
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
print_pidfile,
logger,
run_command=reactor.run,
):
""" Run the reactor in the main process
@@ -88,6 +97,7 @@ def start_reactor(
daemonize (bool): true to run the reactor in a background process
print_pidfile (bool): whether to print the pid file, if daemonize is True
logger (logging.Logger): logger instance to pass to Daemonize
run_command (Callable[]): callable that actually runs the reactor
"""
install_dns_limiter(reactor)
@@ -97,7 +107,7 @@ def start_reactor(
change_resource_limit(soft_file_limit)
if gc_thresholds:
gc.set_threshold(*gc_thresholds)
reactor.run()
run_command()
# make sure that we run the reactor with the sentinel log context,
# otherwise other PreserveLoggingContext instances will get confused
@@ -139,8 +149,7 @@ def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
from synapse.metrics import RegistryProxy, start_http_server
for host in bind_addresses:
logger.info("Starting metrics listener on %s:%d", host, port)
@@ -243,6 +252,9 @@ def start(hs, listeners=None):
# Load the certificate from disk.
refresh_certificate(hs)
# Start the tracer
synapse.logging.opentracing.init_tracer(hs.config)
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().start_profiling()
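The new ``run_command`` hook means anything callable can replace
``reactor.run``; the ``admin_cmd.py`` app added below passes
``lambda: task.react(run)`` so the reactor tears down when the command's
Deferred resolves. A minimal standalone sketch of that pattern:

    from twisted.internet import defer, task

    @defer.inlineCallbacks
    def run(_reactor):
        # Stand-in for the admin command's actual work.
        result = yield defer.succeed("done")
        print(result)

    if __name__ == "__main__":
        # task.react starts the reactor and stops it when run()'s Deferred fires.
        task.react(run)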

synapse/app/admin_cmd.py (new file)

@@ -0,0 +1,264 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os
import sys
import tempfile
from canonicaljson import json
from twisted.internet import defer, task
import synapse
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
pass
class AdminCmdServer(HomeServer):
DATASTORE_CLASS = AdminCmdSlavedStore
def _listen_http(self, listener_config):
pass
def start_listening(self, listeners):
pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):
"""Export data for a user.
Args:
hs (HomeServer)
args (argparse.Namespace)
"""
user_id = args.user_id
directory = args.output_directory
res = yield hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
print(res)
class FileExfiltrationWriter(ExfiltrationWriter):
"""An ExfiltrationWriter that writes the users data to a directory.
Returns the directory location on completion.
Note: This writes to disk on the main reactor thread.
Args:
user_id (str): The user whose data is being exfiltrated.
directory (str|None): The directory to write the data to, if None then
will write to a temporary directory.
"""
def __init__(self, user_id, directory=None):
self.user_id = user_id
if directory:
self.base_directory = directory
else:
self.base_directory = tempfile.mkdtemp(
prefix="synapse-exfiltrate__%s__" % (user_id,)
)
os.makedirs(self.base_directory, exist_ok=True)
if list(os.listdir(self.base_directory)):
raise Exception("Directory must be empty")
def write_events(self, room_id, events):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
events_file = os.path.join(room_directory, "events")
with open(events_file, "a") as f:
for event in events:
print(json.dumps(event.get_pdu_json()), file=f)
def write_state(self, room_id, event_id, state):
room_directory = os.path.join(self.base_directory, "rooms", room_id)
state_directory = os.path.join(room_directory, "state")
os.makedirs(state_directory, exist_ok=True)
event_file = os.path.join(state_directory, event_id)
with open(event_file, "a") as f:
for event in state.values():
print(json.dumps(event.get_pdu_json()), file=f)
def write_invite(self, room_id, event, state):
self.write_events(room_id, [event])
# We write the invite state somewhere else as they aren't full events
# and are only a subset of the state at the event.
room_directory = os.path.join(self.base_directory, "rooms", room_id)
os.makedirs(room_directory, exist_ok=True)
invite_state = os.path.join(room_directory, "invite_state")
with open(invite_state, "a") as f:
for event in state.values():
print(json.dumps(event), file=f)
def finished(self):
return self.base_directory
def start(config_options):
parser = argparse.ArgumentParser(description="Synapse Admin Command")
HomeServerConfig.add_arguments_to_parser(parser)
subparser = parser.add_subparsers(
title="Admin Commands",
required=True,
dest="command",
metavar="<admin_command>",
help="The admin command to perform.",
)
export_data_parser = subparser.add_parser(
"export-data", help="Export all data for a user"
)
export_data_parser.add_argument("user_id", help="User to extract data from")
export_data_parser.add_argument(
"--output-directory",
action="store",
metavar="DIRECTORY",
required=False,
help="The directory to store the exported data in. Must be empty. Defaults"
" to creating a temp directory.",
)
export_data_parser.set_defaults(func=export_data_command)
try:
config, args = HomeServerConfig.load_config_with_parser(parser, config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
if config.worker_app is not None:
assert config.worker_app == "synapse.app.admin_cmd"
# Update the config with some basic overrides so that we don't have to specify
# a full worker config.
config.worker_app = "synapse.app.admin_cmd"
if (
not config.worker_daemonize
and not config.worker_log_file
and not config.worker_log_config
):
# Since we're meant to be run as a "command" let's not redirect stdio
# unless we've actually set log config.
config.no_redirect_stdio = True
# Explicitly disable background processes
config.update_user_directory = False
config.start_pushers = False
config.send_federation = False
setup_logging(config, use_worker_options=True)
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = AdminCmdServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
ss.setup()
# We use task.react as the basic run command as it correctly handles tearing
# down the reactor when the deferreds resolve and setting the return value.
# We also make sure that `_base.start` gets run before we actually run the
# command.
@defer.inlineCallbacks
def run(_reactor):
with LoggingContext("command"):
yield _base.start(ss, [])
yield args.func(ss, args)
_base.start_worker_reactor(
"synapse-admin-cmd", config, run_command=lambda: task.react(run)
)
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])
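As a usage sketch (hypothetical config path, user ID and output directory), the command defined above would be invoked like:

python -m synapse.app.admin_cmd -c homeserver.yaml export-data @alice:example.com --output-directory /tmp/export

If --output-directory is omitted, a fresh temporary directory with a synapse-exfiltrate__<user_id>__ prefix is created, and its path is printed when the export completes.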

View File

@@ -27,8 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore

View File

@@ -28,8 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

View File

@@ -28,8 +28,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

View File

@@ -29,8 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

View File

@@ -28,9 +28,8 @@ from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import RegistryProxy
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore

View File

@@ -30,8 +30,7 @@ from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

View File

@@ -55,9 +55,8 @@ from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.module_api import ModuleApi
from synapse.python_dependencies import check_requirements
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource

View File

@@ -28,8 +28,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

View File

@@ -27,8 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore

View File

@@ -32,8 +32,7 @@ from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore

View File

@@ -29,8 +29,7 @@ from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore

View File

@@ -137,12 +137,42 @@ class Config(object):
return file_stream.read()
def invoke_all(self, name, *args, **kargs):
"""Invoke all instance methods with the given name and arguments in the
class's MRO.
Args:
name (str): Name of function to invoke
*args
**kwargs
Returns:
list: The list of the return values from each method called
"""
results = []
for cls in type(self).mro():
if name in cls.__dict__:
results.append(getattr(cls, name)(self, *args, **kargs))
return results
@classmethod
def invoke_all_static(cls, name, *args, **kargs):
"""Invoke all static methods with the given name and arguments in the
class's MRO.
Args:
name (str): Name of function to invoke
*args
**kwargs
Returns:
list: The list of the return values from each method called
"""
results = []
for c in cls.mro():
if name in c.__dict__:
results.append(getattr(c, name)(*args, **kargs))
return results
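To illustrate the MRO walk these helpers perform, here is a minimal standalone sketch (hypothetical classes A and B, mirroring the loop body of invoke_all_static):

class A:
    @staticmethod
    def add_arguments(parser):
        return "A"

class B(A):
    @staticmethod
    def add_arguments(parser):
        return "B"

# B.mro() is [B, A, object]; "add_arguments" appears in both B.__dict__
# and A.__dict__, so each class contributes exactly once, subclass first.
results = [
    getattr(c, "add_arguments")(None)
    for c in B.mro()
    if "add_arguments" in c.__dict__
]
assert results == ["B", "A"]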
def generate_config(
self,
config_dir_path,
@@ -202,6 +232,23 @@ class Config(object):
Returns: Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
obj, _ = cls.load_config_with_parser(config_parser, argv)
return obj
@classmethod
def add_arguments_to_parser(cls, config_parser):
"""Adds all the config flags to an ArgumentParser.
Doesn't support config-file-generation: used by the worker apps, where
we want to add extra flags/subcommands.
Args:
config_parser (ArgumentParser): The parser to add the flags to
"""
config_parser.add_argument(
"-c",
"--config-path",
@@ -219,16 +266,34 @@ class Config(object):
" Defaults to the directory containing the last config file",
)
cls.invoke_all_static("add_arguments", config_parser)
@classmethod
def load_config_with_parser(cls, parser, argv):
"""Parse the commandline and config files with the given parser
Doesn't support config-file-generation: used by the worker apps, where
we want to add extra flags/subcommands.
Args:
parser (ArgumentParser)
argv (list[str])
Returns:
tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
config object and the parsed argparse.Namespace object from
`parser.parse_args(..)`
"""
obj = cls()
obj.invoke_all("add_arguments", config_parser)
config_args = config_parser.parse_args(argv)
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
if not config_files:
config_parser.error("Must supply a config file.")
parser.error("Must supply a config file.")
if config_args.keys_directory:
config_dir_path = config_args.keys_directory
@@ -244,7 +309,7 @@ class Config(object):
obj.invoke_all("read_arguments", config_args)
return obj
return obj, config_args
@classmethod
def load_or_generate_config(cls, description, argv):
@@ -401,7 +466,7 @@ class Config(object):
formatter_class=argparse.RawDescriptionHelpFormatter,
)
obj.invoke_all("add_arguments", parser)
obj.invoke_all_static("add_arguments", parser)
args = parser.parse_args(remaining_args)
config_dict = read_config_files(config_files)

View File

@@ -69,7 +69,8 @@ class DatabaseConfig(Config):
if database_path is not None:
self.database_config["args"]["database"] = database_path
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
db_group = parser.add_argument_group("database")
db_group.add_argument(
"-d",

View File

@@ -40,6 +40,7 @@ from .spam_checker import SpamCheckerConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
from .tracer import TracerConfig
from .user_directory import UserDirectoryConfig
from .voip import VoipConfig
from .workers import WorkerConfig
@@ -75,5 +76,6 @@ class HomeServerConfig(
ServerNoticesConfig,
RoomDirectoryConfig,
ThirdPartyRulesConfig,
TracerConfig,
):
pass

View File

@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import logging.config
import os
@@ -75,10 +76,8 @@ root:
class LoggingConfig(Config):
def read_config(self, config, **kwargs):
self.verbosity = config.get("verbose", 0)
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
self.log_config = self.abspath(config.get("log_config"))
self.log_file = self.abspath(config.get("log_file"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
log_config = os.path.join(config_dir_path, server_name + ".log.config")
@@ -94,37 +93,12 @@ class LoggingConfig(Config):
)
def read_arguments(self, args):
if args.verbose is not None:
self.verbosity = args.verbose
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_config is not None:
self.log_config = args.log_config
if args.log_file is not None:
self.log_file = args.log_file
def add_arguments(cls, parser):
@staticmethod
def add_arguments(parser):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
"-v",
"--verbose",
dest="verbose",
action="count",
help="The verbosity level. Specify multiple times to increase "
"verbosity. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"-f",
"--log-file",
dest="log_file",
help="File to log to. (Ignored if --log-config is specified.)",
)
logging_group.add_argument(
"--log-config",
dest="log_config",
default=None,
help="Python logging config file",
)
logging_group.add_argument(
"-n",
"--no-redirect-stdio",
@@ -152,58 +126,29 @@ def setup_logging(config, use_worker_options=False):
config (LoggingConfig | synapse.config.workers.WorkerConfig):
configuration data
use_worker_options (bool): True to use 'worker_log_config' and
'worker_log_file' options instead of 'log_config' and 'log_file'.
use_worker_options (bool): True to use the 'worker_log_config' option
instead of 'log_config'.
register_sighup (func | None): Function to call to register a
sighup handler.
"""
log_config = config.worker_log_config if use_worker_options else config.log_config
log_file = config.worker_log_file if use_worker_options else config.log_file
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
if log_config is None:
# We don't have a log config, so fall back to the 'verbosity' param from
# the config or cmdline. (Note that we generate a log config for new
# installs, so this will be an unusual case)
level = logging.INFO
level_for_storage = logging.INFO
if config.verbosity:
level = logging.DEBUG
if config.verbosity > 1:
level_for_storage = logging.DEBUG
log_format = (
"%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
" - %(message)s"
)
logger = logging.getLogger("")
logger.setLevel(level)
logging.getLogger("synapse.storage.SQL").setLevel(level_for_storage)
logger.setLevel(logging.INFO)
logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
formatter = logging.Formatter(log_format)
if log_file:
# TODO: Customisable file size / backup count
handler = logging.handlers.RotatingFileHandler(
log_file, maxBytes=(1000 * 1000 * 100), backupCount=3, encoding="utf8"
)
def sighup(signum, stack):
logger.info("Closing log file due to SIGHUP")
handler.doRollover()
logger.info("Opened new log file due to SIGHUP")
else:
handler = logging.StreamHandler()
def sighup(*args):
pass
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
@@ -217,8 +162,7 @@ def setup_logging(config, use_worker_options=False):
logging.info("Reloaded log config from %s due to SIGHUP", log_config)
load_log_config()
appbase.register_sighup(sighup)
appbase.register_sighup(sighup)
# make sure that the first thing we log is a thing we can grep backwards
# for

View File

@@ -84,6 +84,11 @@ class RegistrationConfig(Config):
"disable_msisdn_registration", False
)
session_lifetime = config.get("session_lifetime")
if session_lifetime is not None:
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime
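A brief sketch of the resulting values (assuming, as elsewhere in this config code, that Config.parse_duration returns the duration in milliseconds):

# With `session_lifetime: 24h` in the YAML, read_config stores
#     self.session_lifetime == 24 * 60 * 60 * 1000   # 86400000 ms
# and with the option omitted it stores
#     self.session_lifetime is None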
def generate_config_section(self, generate_secrets=False, **kwargs):
if generate_secrets:
registration_shared_secret = 'registration_shared_secret: "%s"' % (
@@ -141,6 +146,17 @@ class RegistrationConfig(Config):
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"
# Time that a user's session remains valid for, after they log in.
#
# Note that this is not currently compatible with guest logins.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to users who have already logged in.
#
# By default, this is infinite.
#
#session_lifetime: 24h
# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
@@ -221,7 +237,8 @@ class RegistrationConfig(Config):
% locals()
)
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
reg_group = parser.add_argument_group("registration")
reg_group.add_argument(
"--enable-registration",

View File

@@ -136,7 +136,7 @@ class ServerConfig(Config):
# Whether to enable experimental MSC1849 (aka relations) support
self.experimental_msc1849_support_enabled = config.get(
"experimental_msc1849_support_enabled", False
"experimental_msc1849_support_enabled", True
)
# Options to control access by tracking MAU
@@ -639,7 +639,8 @@ class ServerConfig(Config):
if args.print_pidfile is not None:
self.print_pidfile = args.print_pidfile
def add_arguments(self, parser):
@staticmethod
def add_arguments(parser):
server_group = parser.add_argument_group("server")
server_group.add_argument(
"-D",

synapse/config/tracer.py Normal file
View File

@@ -0,0 +1,69 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config, ConfigError
class TracerConfig(Config):
def read_config(self, config, **kwargs):
opentracing_config = config.get("opentracing")
if opentracing_config is None:
opentracing_config = {}
self.opentracer_enabled = opentracing_config.get("enabled", False)
if not self.opentracer_enabled:
return
# The tracer is enabled so sanitize the config
self.opentracer_whitelist = opentracing_config.get("homeserver_whitelist", [])
if not isinstance(self.opentracer_whitelist, list):
raise ConfigError("Tracer homeserver_whitelist config is malformed")
def generate_config_section(self, **kwargs):
return """\
## Opentracing ##
# These settings enable opentracing, which implements distributed tracing.
# This allows you to observe the causal chains of events across servers
# including requests, key lookups etc., across any server running
# synapse or any other service which supports opentracing
# (specifically those implemented with Jaeger).
#
opentracing:
# tracing is disabled by default. Uncomment the following line to enable it.
#
#enabled: true
# The list of homeservers with which we wish to exchange span contexts and span baggage.
#
# Though it's mostly safe to send and receive span contexts to and from
# untrusted users since span contexts are usually opaque ids it can lead to
# two problems, namely:
# - If the span context is marked as sampled by the sending homeserver the receiver will
# sample it. Therefore two homeservers with wildly disparate sampling policies
# could incur higher sampling counts than intended.
# - Span baggage can be arbitrary data. For safety this has been disabled in synapse
# but that doesn't prevent another server sending you baggage which will be logged
# to opentracing logs.
#
# This is a list of regexes which are matched against the server_name of the
# homeserver.
#
# By default, it is empty, so no servers are matched.
#
#homeserver_whitelist:
# - ".*"
"""

View File

@@ -31,8 +31,6 @@ class WorkerConfig(Config):
self.worker_listeners = config.get("worker_listeners", [])
self.worker_daemonize = config.get("worker_daemonize")
self.worker_pid_file = config.get("worker_pid_file")
self.worker_log_file = config.get("worker_log_file")
self.worker_log_config = config.get("worker_log_config")
# The host used to connect to the main synapse
self.worker_replication_host = config.get("worker_replication_host", None)
@@ -78,9 +76,5 @@ class WorkerConfig(Config):
if args.daemonize is not None:
self.worker_daemonize = args.daemonize
if args.log_config is not None:
self.worker_log_config = args.log_config
if args.log_file is not None:
self.worker_log_file = args.log_file
if args.manhole is not None:
self.worker_manhole = args.worker_manhole

View File

@@ -104,6 +104,17 @@ class _EventInternalMetadata(object):
"""
return getattr(self, "proactively_send", True)
def is_redacted(self):
"""Whether the event has been redacted.
This is used for efficiently checking whether an event has been
marked as redacted without needing to make another database call.
Returns:
bool
"""
return getattr(self, "redacted", False)
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.

View File

@@ -52,10 +52,15 @@ def prune_event(event):
from . import event_type_from_format_version
return event_type_from_format_version(event.format_version)(
pruned_event = event_type_from_format_version(event.format_version)(
pruned_event_dict, event.internal_metadata.get_dict()
)
# Mark the event as redacted
pruned_event.internal_metadata.redacted = True
return pruned_event
def prune_event_dict(event_dict):
"""Redacts the event_dict in the same way as `prune_event`, except it
@@ -360,9 +365,12 @@ class EventClientSerializer(object):
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
# If MSC1849 is enabled then we need to look if thre are any relations
# we need to bundle in with the event
if self.experimental_msc1849_support_enabled and bundle_aggregations:
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event.
# Do not bundle relations if the event has been redacted
if not event.internal_metadata.is_redacted() and (
self.experimental_msc1849_support_enabled and bundle_aggregations
):
annotations = yield self.store.get_aggregation_groups_for_event(event_id)
references = yield self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"

File diff suppressed because it is too large

View File

@@ -17,6 +17,10 @@ import logging
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.types import RoomStreamToken
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -89,3 +93,182 @@ class AdminHandler(BaseHandler):
ret = yield self.store.search_users(term)
defer.returnValue(ret)
@defer.inlineCallbacks
def export_user_data(self, user_id, writer):
"""Write all data we have on the user to the given writer.
Args:
user_id (str)
writer (ExfiltrationWriter)
Returns:
defer.Deferred: Resolves when all data for a user has been written.
The returned value is that returned by `writer.finished()`.
"""
# Get all rooms the user is in or has been in
rooms = yield self.store.get_rooms_for_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
Membership.LEAVE,
Membership.BAN,
Membership.INVITE,
),
)
# We only try and fetch events for rooms the user has been in. If
# they've been e.g. invited to a room without joining then we handle
# those separately.
rooms_user_has_been_in = yield self.store.get_rooms_user_has_been_in(user_id)
for index, room in enumerate(rooms):
room_id = room.room_id
logger.info(
"[%s] Handling room %s, %d/%d", user_id, room_id, index + 1, len(rooms)
)
forgotten = yield self.store.did_forget(user_id, room_id)
if forgotten:
logger.info("[%s] User forgot room %d, ignoring", user_id, room_id)
continue
if room_id not in rooms_user_has_been_in:
# If we haven't been in the rooms then the filtering code below
# won't return anything, so we need to handle these cases
# explicitly.
if room.membership == Membership.INVITE:
event_id = room.event_id
invite = yield self.store.get_event(event_id, allow_none=True)
if invite:
invited_state = invite.unsigned["invite_room_state"]
writer.write_invite(room_id, invite, invited_state)
continue
# We only want to bother fetching events up to the last time they
# were joined. We estimate that point by looking at the
# stream_ordering of the last membership if it wasn't a join.
if room.membership == Membership.JOIN:
stream_ordering = yield self.store.get_room_max_stream_ordering()
else:
stream_ordering = room.stream_ordering
from_key = str(RoomStreamToken(0, 0))
to_key = str(RoomStreamToken(None, stream_ordering))
written_events = set() # Events that we've processed in this room
# We need to track gaps in the events stream so that we can then
# write out the state at those events. We do this by keeping track
# of events whose prev events we haven't seen.
# Map from event ID to prev events that haven't been processed,
# dict[str, set[str]].
event_to_unseen_prevs = {}
# The reverse mapping to above, i.e. map from unseen event to events
# that have the unseen event in their prev_events, i.e. the unseen
# events "children". dict[str, set[str]]
unseen_to_child_events = {}
# We fetch events in the room the user could see by fetching *all*
# events that we have and then filtering, this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = yield self.store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction="f"
)
if not events:
break
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.store, user_id, events)
writer.write_events(room_id, events)
# Update the extremity tracking dicts
for event in events:
# Check if we have any prev events that haven't been
# processed yet, and add those to the appropriate dicts.
unseen_events = set(event.prev_event_ids()) - written_events
if unseen_events:
event_to_unseen_prevs[event.event_id] = unseen_events
for unseen in unseen_events:
unseen_to_child_events.setdefault(unseen, set()).add(
event.event_id
)
# Now check if this event is an unseen prev event, if so
# then we remove this event from the appropriate dicts.
for child_id in unseen_to_child_events.pop(event.event_id, []):
event_to_unseen_prevs[child_id].discard(event.event_id)
written_events.add(event.event_id)
logger.info(
"Written %d events in room %s", len(written_events), room_id
)
# Extremities are the events which have at least one unseen prev event.
extremities = (
event_id
for event_id, unseen_prevs in event_to_unseen_prevs.items()
if unseen_prevs
)
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
defer.returnValue(writer.finished())
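A toy walkthrough of the gap tracking above, using hypothetical event IDs:

# Suppose we stream event B (prev_events == [A]) and then C (prev_events == [B]):
#   - Processing B: A hasn't been written yet, so we record
#         event_to_unseen_prevs == {"B": {"A"}}
#         unseen_to_child_events == {"A": {"B"}}
#   - Processing C: B is already in written_events, so nothing is recorded.
# If A never arrives, B still has an unseen prev event when the loop ends,
# so B is treated as a backward extremity and we write out the state at B.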
class ExfiltrationWriter(object):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id, events):
"""Write a batch of events for a room.
Args:
room_id (str)
events (list[FrozenEvent])
"""
pass
def write_state(self, room_id, event_id, state):
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
Args:
room_id (str)
event_id (str)
state (dict[tuple[str, str], FrozenEvent])
"""
pass
def write_invite(self, room_id, event, state):
"""Write an invite for the room, with associated invite state.
Args:
room_id (str)
event (FrozenEvent)
state (dict[tuple[str, str], dict]): A subset of the state at the
invite, with a subset of the event keys (type, state_key,
content and sender)
"""
def finished(self):
"""Called when all data has succesfully been exported and written.
This functions return value is passed to the caller of
`export_user_data`.
"""
pass
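Since this is a plain interface, a minimal in-memory implementation is easy to sketch (illustrative only, not part of this changeset):

class InMemoryExfiltrationWriter(ExfiltrationWriter):
    """Collects exported data in dicts rather than writing it to disk."""

    def __init__(self):
        self.events = {}   # room_id -> list of events
        self.state = {}    # (room_id, event_id) -> state dict
        self.invites = {}  # room_id -> (invite event, invite state)

    def write_events(self, room_id, events):
        self.events.setdefault(room_id, []).extend(events)

    def write_state(self, room_id, event_id, state):
        self.state[(room_id, event_id)] = state

    def write_invite(self, room_id, event, state):
        self.invites[room_id] = (event, state)

    def finished(self):
        # This is the value that export_user_data ultimately resolves to.
        return {"events": self.events, "state": self.state, "invites": self.invites}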

View File

@@ -15,6 +15,7 @@
# limitations under the License.
import logging
import time
import unicodedata
import attr
@@ -34,6 +35,7 @@ from synapse.api.errors import (
LoginError,
StoreError,
SynapseError,
UserDeactivatedError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.logging.context import defer_to_thread
@@ -558,7 +560,7 @@ class AuthHandler(BaseHandler):
return self.sessions[session_id]
@defer.inlineCallbacks
def get_access_token_for_user_id(self, user_id, device_id=None):
def get_access_token_for_user_id(self, user_id, device_id, valid_until_ms):
"""
Creates a new access token for the user with the given user ID.
@@ -572,15 +574,27 @@ class AuthHandler(BaseHandler):
device_id (str|None): the device ID to associate with the tokens.
None to leave the tokens unassociated with a device (deprecated:
we should always have a device ID)
valid_until_ms (int|None): when the token is valid until. None for
no expiry.
Returns:
The access token for the user's session.
Raises:
StoreError if there was a problem storing the token.
"""
logger.info("Logging in user %s on device %s", user_id, device_id)
access_token = yield self.issue_access_token(user_id, device_id)
fmt_expiry = ""
if valid_until_ms is not None:
fmt_expiry = time.strftime(
" until %Y-%m-%d %H:%M:%S", time.localtime(valid_until_ms / 1000.0)
)
logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry)
yield self.auth.check_auth_blocking(user_id)
access_token = self.macaroon_gen.generate_access_token(user_id)
yield self.store.add_access_token_to_user(
user_id, access_token, device_id, valid_until_ms
)
# the device *should* have been registered before we got here; however,
# it's possible we raced against a DELETE operation. The thing we
# really don't want is active access_tokens without a record of the
@@ -610,6 +624,7 @@ class AuthHandler(BaseHandler):
Raises:
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
UserDeactivatedError if a user is found but is deactivated.
"""
self.ratelimit_login_per_account(user_id)
res = yield self._find_user_id_and_pwd_hash(user_id)
@@ -825,18 +840,19 @@ class AuthHandler(BaseHandler):
if not lookupres:
defer.returnValue(None)
(user_id, password_hash) = lookupres
# If the password hash is None, the account has likely been deactivated
if not password_hash:
deactivated = yield self.store.get_user_deactivated_status(user_id)
if deactivated:
raise UserDeactivatedError("This account has been deactivated")
result = yield self.validate_hash(password, password_hash)
if not result:
logger.warn("Failed password login for user %s", user_id)
defer.returnValue(None)
defer.returnValue(user_id)
@defer.inlineCallbacks
def issue_access_token(self, user_id, device_id=None):
access_token = self.macaroon_gen.generate_access_token(user_id)
yield self.store.add_access_token_to_user(user_id, access_token, device_id)
defer.returnValue(access_token)
@defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token):
auth_api = self.hs.get_auth()

View File

@@ -24,12 +24,7 @@ from signedjson.sign import SignatureVerifyException, verify_signed_json
from twisted.internet import defer
from synapse.api.errors import (
CodeMessageException,
Codes,
FederationDeniedError,
SynapseError,
)
from synapse.api.errors import CodeMessageException, Codes, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import (
UserID,
@@ -554,9 +549,6 @@ def _exception_to_failure(e):
if isinstance(e, NotRetryingDestination):
return {"status": 503, "message": "Not ready for retry"}
if isinstance(e, FederationDeniedError):
return {"status": 403, "message": "Federation Denied"}
# include ConnectionRefused and other errors
#
# Note that some Exceptions (notably twisted's ResponseFailed etc) don't

View File

@@ -118,7 +118,7 @@ class IdentityHandler(BaseHandler):
raise SynapseError(400, "No client_secret in creds")
try:
data = yield self.http_client.post_urlencoded_get_json(
data = yield self.http_client.post_json_get_json(
"https://%s%s" % (id_server, "/_matrix/identity/api/v1/3pid/bind"),
{"sid": creds["sid"], "client_secret": client_secret, "mxid": mxid},
)

View File

@@ -23,6 +23,7 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RelationTypes
from synapse.api.errors import (
AuthError,
@@ -784,6 +785,20 @@ class EventCreationHandler(object):
event.signatures.update(returned_invite.signatures)
if event.type == EventTypes.Redaction:
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=True,
check_room_id=event.room_id,
)
# we can make some additional checks now if we have the original event.
if original_event:
if original_event.type == EventTypes.Create:
raise AuthError(403, "Redacting create events is not permitted")
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
@@ -791,18 +806,18 @@ class EventCreationHandler(object):
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
room_version = yield self.store.get_room_version(event.room_id)
if self.auth.check_redaction(room_version, event, auth_events=auth_events):
original_event = yield self.store.get_event(
event.redacts,
check_redacted=False,
get_prev_content=False,
allow_rejected=False,
allow_none=False,
)
if event_auth.check_redaction(room_version, event, auth_events=auth_events):
# this user doesn't have 'redact' rights, so we need to do some more
# checks on the original event. Let's start by checking the original
# event exists.
if not original_event:
raise NotFoundError("Could not find event %s" % (event.redacts,))
if event.user_id != original_event.user_id:
raise AuthError(403, "You don't have permission to redact events")
# We've already checked.
# all the checks are done.
event.internal_metadata.recheck_redaction = False
if event.type == EventTypes.Create:

View File

@@ -303,6 +303,10 @@ class BaseProfileHandler(BaseHandler):
if not self.hs.config.require_auth_for_profile_requests or not requester:
return
# Always allow the user to query their own profile.
if target_user.to_string() == requester.to_string():
return
try:
requester_rooms = yield self.store.get_rooms_for_user(requester.to_string())
target_user_rooms = yield self.store.get_rooms_for_user(

View File

@@ -84,6 +84,8 @@ class RegistrationHandler(BaseHandler):
self.device_handler = hs.get_device_handler()
self.pusher_pool = hs.get_pusherpool()
self.session_lifetime = hs.config.session_lifetime
@defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None, assigned_user_id=None):
if types.contains_invalid_mxid_characters(localpart):
@@ -584,7 +586,7 @@ class RegistrationHandler(BaseHandler):
address=address,
)
else:
return self.store.register(
return self.store.register_user(
user_id=user_id,
password_hash=password_hash,
was_guest=was_guest,
@@ -599,6 +601,8 @@ class RegistrationHandler(BaseHandler):
def register_device(self, user_id, device_id, initial_display_name, is_guest=False):
"""Register a device for a user and generate an access token.
The access token will be limited by the homeserver's session_lifetime config.
Args:
user_id (str): full canonical @user:id
device_id (str|None): The device ID to check, or None to generate
@@ -619,20 +623,29 @@ class RegistrationHandler(BaseHandler):
is_guest=is_guest,
)
defer.returnValue((r["device_id"], r["access_token"]))
else:
device_id = yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
if is_guest:
access_token = self.macaroon_gen.generate_access_token(
user_id, ["guest = true"]
)
else:
access_token = yield self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id
)
defer.returnValue((device_id, access_token))
valid_until_ms = None
if self.session_lifetime is not None:
if is_guest:
raise Exception(
"session_lifetime is not currently implemented for guest access"
)
valid_until_ms = self.clock.time_msec() + self.session_lifetime
device_id = yield self.device_handler.check_device_registered(
user_id, device_id, initial_display_name
)
if is_guest:
assert valid_until_ms is None
access_token = self.macaroon_gen.generate_access_token(
user_id, ["guest = true"]
)
else:
access_token = yield self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id, valid_until_ms=valid_until_ms
)
defer.returnValue((device_id, access_token))
@defer.inlineCallbacks
def post_registration_actions(

View File

@@ -29,7 +29,7 @@ from twisted.internet import defer
import synapse.server
import synapse.types
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.errors import AuthError, Codes, HttpResponseException, SynapseError
from synapse.types import RoomID, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
@@ -872,9 +872,23 @@ class RoomMemberHandler(object):
"sender_avatar_url": inviter_avatar_url,
}
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url, invite_config
)
try:
data = yield self.simple_http_client.post_json_get_json(
is_url, invite_config
)
except HttpResponseException as e:
# Some identity servers may only support the application/x-www-form-urlencoded
# content type. This is especially true with old instances of Sydent, see
# https://github.com/matrix-org/sydent/pull/170
logger.info(
"Failed to POST %s with JSON, falling back to urlencoded form: %s",
is_url,
e,
)
data = yield self.simple_http_client.post_urlencoded_get_json(
is_url, invite_config
)
# TODO: Check for success
token = data["token"]
public_keys = data.get("public_keys", [])

View File

@@ -36,6 +36,7 @@ from twisted.internet.task import _EPSILON, Cooperator
from twisted.web._newclient import ResponseDone
from twisted.web.http_headers import Headers
import synapse.logging.opentracing as opentracing
import synapse.metrics
import synapse.util.retryutils
from synapse.api.errors import (
@@ -339,9 +340,25 @@ class MatrixFederationHttpClient(object):
else:
query_bytes = b""
headers_dict = {b"User-Agent": [self.version_string_bytes]}
# Start an opentracing span for this outgoing federation request
scope = opentracing.start_active_span(
"outgoing-federation-request",
tags={
opentracing.tags.SPAN_KIND: opentracing.tags.SPAN_KIND_RPC_CLIENT,
opentracing.tags.PEER_ADDRESS: request.destination,
opentracing.tags.HTTP_METHOD: request.method,
opentracing.tags.HTTP_URL: request.path,
},
finish_on_close=True,
)
with limiter:
# Inject the span into the headers
headers_dict = {}
opentracing.inject_active_span_byte_dict(headers_dict, request.destination)
headers_dict[b"User-Agent"] = [self.version_string_bytes]
with limiter, scope:
# XXX: Would be much nicer to retry only at the transaction-layer
# (once we have reliable transactions in place)
if long_retries:
@@ -419,6 +436,10 @@ class MatrixFederationHttpClient(object):
response.phrase.decode("ascii", errors="replace"),
)
opentracing.set_tag(
opentracing.tags.HTTP_STATUS_CODE, response.code
)
if 200 <= response.code < 300:
pass
else:
@@ -499,8 +520,7 @@ class MatrixFederationHttpClient(object):
_flatten_response_never_received(e),
)
raise
defer.returnValue(response)
defer.returnValue(response)
def build_auth_headers(
self, destination, method, url_bytes, content=None, destination_is=None

View File

@@ -20,6 +20,7 @@ import logging
from canonicaljson import json
from synapse.api.errors import Codes, SynapseError
from synapse.logging.opentracing import trace_servlet
logger = logging.getLogger(__name__)
@@ -290,7 +291,11 @@ class RestServlet(object):
for method in ("GET", "PUT", "POST", "OPTIONS", "DELETE"):
if hasattr(self, "on_%s" % (method,)):
method_handler = getattr(self, "on_%s" % (method,))
http_server.register_paths(method, patterns, method_handler)
http_server.register_paths(
method,
patterns,
trace_servlet(self.__class__.__name__, method_handler),
)
else:
raise NotImplementedError("RestServlet must register something.")

View File

@@ -186,6 +186,7 @@ class LoggingContext(object):
"alive",
"request",
"tag",
"scope",
]
thread_local = threading.local()
@@ -238,6 +239,7 @@ class LoggingContext(object):
self.request = None
self.tag = ""
self.alive = True
self.scope = None
self.parent_context = parent_context
@@ -322,10 +324,12 @@ class LoggingContext(object):
another LoggingContext
"""
# 'request' is the only field we currently use in the logger, so that's
# all we need to copy
# we track the current request
record.request = self.request
# we also track the current scope:
record.scope = self.scope
def start(self):
if get_thread_id() != self.main_thread:
logger.warning("Started logcontext %s on different thread", self)

View File

@@ -0,0 +1,357 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# NOTE
# This is a small wrapper around opentracing because opentracing is not currently
# packaged downstream (specifically debian). Since opentracing instrumentation is
# fairly invasive it was awkward to make it optional. As a result we opted to encapsulate
# all opentracing state in these methods which effectively noop if opentracing is
# not present. We should strongly consider encouraging the downstream distributors
# to package opentracing and making opentracing a full dependency. In order to facilitate
# this move, the methods work very similarly to opentracing's, and it should only
# be a matter of a few regexes to move over to opentracing's access patterns proper.
import contextlib
import logging
import re
from functools import wraps
from twisted.internet import defer
from synapse.config import ConfigError
try:
import opentracing
except ImportError:
opentracing = None
try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
JaegerConfig = None
LogContextScopeManager = None
logger = logging.getLogger(__name__)
class _DumTagNames(object):
"""wrapper of opentracings tags. We need to have them if we
want to reference them without opentracing around. Clearly they
should never actually show up in a trace. `set_tags` overwrites
these with the correct ones."""
INVALID_TAG = "invalid-tag"
COMPONENT = INVALID_TAG
DATABASE_INSTANCE = INVALID_TAG
DATABASE_STATEMENT = INVALID_TAG
DATABASE_TYPE = INVALID_TAG
DATABASE_USER = INVALID_TAG
ERROR = INVALID_TAG
HTTP_METHOD = INVALID_TAG
HTTP_STATUS_CODE = INVALID_TAG
HTTP_URL = INVALID_TAG
MESSAGE_BUS_DESTINATION = INVALID_TAG
PEER_ADDRESS = INVALID_TAG
PEER_HOSTNAME = INVALID_TAG
PEER_HOST_IPV4 = INVALID_TAG
PEER_HOST_IPV6 = INVALID_TAG
PEER_PORT = INVALID_TAG
PEER_SERVICE = INVALID_TAG
SAMPLING_PRIORITY = INVALID_TAG
SERVICE = INVALID_TAG
SPAN_KIND = INVALID_TAG
SPAN_KIND_CONSUMER = INVALID_TAG
SPAN_KIND_PRODUCER = INVALID_TAG
SPAN_KIND_RPC_CLIENT = INVALID_TAG
SPAN_KIND_RPC_SERVER = INVALID_TAG
def only_if_tracing(func):
"""Executes the function only if we're tracing. Otherwise return.
Assumes the function wrapped may return None"""
@wraps(func)
def _only_if_tracing_inner(*args, **kwargs):
if opentracing:
return func(*args, **kwargs)
else:
return
return _only_if_tracing_inner
# A regex which matches the server_names to expose traces for.
# None means 'block everything'.
_homeserver_whitelist = None
tags = _DumTagNames
def init_tracer(config):
"""Set the whitelists and initialise the JaegerClient tracer
Args:
config (HomeserverConfig): The config used by the homeserver
"""
global opentracing
if not config.opentracer_enabled:
# We don't have a tracer
opentracing = None
return
if not opentracing or not JaegerConfig:
raise ConfigError(
"The server has been configured to use opentracing but opentracing is not "
"installed."
)
# Include the worker name
name = config.worker_name if config.worker_name else "master"
set_homeserver_whitelist(config.opentracer_whitelist)
jaeger_config = JaegerConfig(
config={"sampler": {"type": "const", "param": 1}, "logging": True},
service_name="{} {}".format(config.server_name, name),
scope_manager=LogContextScopeManager(config),
)
jaeger_config.initialize_tracer()
# Set up tags to be opentracing's tags
global tags
tags = opentracing.tags
@contextlib.contextmanager
def _noop_context_manager(*args, **kwargs):
"""Does absolutely nothing really well. Can be entered and exited arbitrarily.
Good substitute for an opentracing scope."""
yield
# Could use kwargs but I want these to be explicit
def start_active_span(
operation_name,
child_of=None,
references=None,
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
):
"""Starts an active opentracing span. Note, the scope doesn't become active
until it has been entered, however, the span starts from the time this
method is called.
Args:
See opentracing.tracer
Returns:
scope (Scope) or noop_context_manager
"""
if opentracing is None:
return _noop_context_manager()
else:
# We need to enter the scope here for the logcontext to become active
return opentracing.tracer.start_active_span(
operation_name,
child_of=child_of,
references=references,
tags=tags,
start_time=start_time,
ignore_active_span=ignore_active_span,
finish_on_close=finish_on_close,
)
@only_if_tracing
def close_active_span():
"""Closes the active span. This will close it's logcontext if the context
was made for the span"""
opentracing.tracer.scope_manager.active.__exit__(None, None, None)
@only_if_tracing
def set_tag(key, value):
"""Set's a tag on the active span"""
opentracing.tracer.active_span.set_tag(key, value)
@only_if_tracing
def log_kv(key_values, timestamp=None):
"""Log to the active span"""
opentracing.tracer.active_span.log_kv(key_values, timestamp)
# Note: we don't have a get baggage items because we're trying to hide all
# scope and span state from synapse. I think this method may also be useless
# as a result
@only_if_tracing
def set_baggage_item(key, value):
"""Attach baggage to the active span"""
opentracing.tracer.active_span.set_baggage_item(key, value)
@only_if_tracing
def set_operation_name(operation_name):
"""Sets the operation name of the active span"""
opentracing.tracer.active_span.set_operation_name(operation_name)
@only_if_tracing
def set_homeserver_whitelist(homeserver_whitelist):
"""Sets the whitelist
Args:
homeserver_whitelist (iterable of strings): regexes of whitelisted homeservers
"""
global _homeserver_whitelist
if homeserver_whitelist:
# Makes a single regex which accepts all passed in regexes in the list
_homeserver_whitelist = re.compile(
"({})".format(")|(".join(homeserver_whitelist))
)
@only_if_tracing
def whitelisted_homeserver(destination):
"""Checks if a destination matches the whitelist
Args:
destination (String)"""
if _homeserver_whitelist:
return _homeserver_whitelist.match(destination)
return False
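For example (hypothetical whitelist patterns):

# set_homeserver_whitelist([r"synapse\.example\.com", r".*\.example\.org"])
# compiles the single regex r"(synapse\.example\.com)|(.*\.example\.org)", so
# whitelisted_homeserver("sub.example.org") returns a match object (truthy),
# while whitelisted_homeserver("example.com") returns a falsy value (None).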
def start_active_span_from_context(
headers,
operation_name,
references=None,
tags=None,
start_time=None,
ignore_active_span=False,
finish_on_close=True,
):
"""
Extracts a span context from Twisted Headers and starts a new active
span as its child.
args:
headers (twisted.web.http_headers.Headers)
returns:
scope (Scope) wrapping the new span, or a noop context manager if
tracing is disabled
"""
# Twisted encodes the values as lists whereas opentracing doesn't.
# So, we take the first item in the list.
# Also, twisted uses byte arrays while opentracing expects strings.
if opentracing is None:
return _noop_context_manager()
header_dict = {k.decode(): v[0].decode() for k, v in headers.getAllRawHeaders()}
context = opentracing.tracer.extract(opentracing.Format.HTTP_HEADERS, header_dict)
return opentracing.tracer.start_active_span(
operation_name,
child_of=context,
references=references,
tags=tags,
start_time=start_time,
ignore_active_span=ignore_active_span,
finish_on_close=finish_on_close,
)
@only_if_tracing
def inject_active_span_twisted_headers(headers, destination):
"""
Injects a span context into twisted headers inplace
Args:
headers (twisted.web.http_headers.Headers)
span (opentracing.Span)
Returns:
Inplace modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if not whitelisted_homeserver(destination):
return
span = opentracing.tracer.active_span
carrier = {}
opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
for key, value in carrier.items():
headers.addRawHeaders(key, value)
@only_if_tracing
def inject_active_span_byte_dict(headers, destination):
"""
Injects a span context into a dict where the headers are encoded as byte
strings
Args:
headers (dict)
span (opentracing.Span)
Returns:
Inplace modification of headers
Note:
The headers set by the tracer are custom to the tracer implementation which
should be unique enough that they don't interfere with any headers set by
synapse or twisted. If we're still using jaeger these headers would be those
here:
https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/constants.py
"""
if not whitelisted_homeserver(destination):
return
span = opentracing.tracer.active_span
carrier = {}
opentracing.tracer.inject(span, opentracing.Format.HTTP_HEADERS, carrier)
for key, value in carrier.items():
headers[key.encode()] = [value.encode()]
def trace_servlet(servlet_name, func):
"""Decorator which traces a serlet. It starts a span with some servlet specific
tags such as the servlet_name and request information"""
@wraps(func)
@defer.inlineCallbacks
def _trace_servlet_inner(request, *args, **kwargs):
with start_active_span(
"incoming-client-request",
tags={
"request_id": request.get_request_id(),
tags.SPAN_KIND: tags.SPAN_KIND_RPC_SERVER,
tags.HTTP_METHOD: request.get_method(),
tags.HTTP_URL: request.get_redacted_uri(),
tags.PEER_HOST_IPV6: request.getClientIP(),
"servlet_name": servlet_name,
},
):
result = yield defer.maybeDeferred(func, request, *args, **kwargs)
defer.returnValue(result)
return _trace_servlet_inner
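As a usage sketch (hypothetical operation and tag names), code built on these wrappers degrades gracefully to a no-op when opentracing is not installed:

with start_active_span("do-something", tags={tags.SPAN_KIND: tags.SPAN_KIND_RPC_CLIENT}):
    set_tag("example.result", "ok")          # no-op unless tracing is active
    log_kv({"event": "something happened"})  # ditto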

View File

@@ -0,0 +1,138 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from opentracing import Scope, ScopeManager
import twisted
from synapse.logging.context import LoggingContext, nested_logging_context
logger = logging.getLogger(__name__)
class LogContextScopeManager(ScopeManager):
"""
The LogContextScopeManager tracks the active scope in opentracing
by using the log contexts which are native to synapse. This is so
that the basic opentracing api can be used across twisted deferreds.
(I would love to break logcontexts and this out into an OSS package, but
let's wait for twisted's contexts to be released.)
"""
def __init__(self, config):
pass
@property
def active(self):
"""
Returns the currently active Scope which can be used to access the
currently active Scope.span.
If there is a non-null Scope, its wrapped Span
becomes an implicit parent of any newly-created Span at
Tracer.start_active_span() time.
Return:
(Scope) : the Scope that is active, or None if not
available.
"""
ctx = LoggingContext.current_context()
if ctx is LoggingContext.sentinel:
return None
else:
return ctx.scope
def activate(self, span, finish_on_close):
"""
Makes a Span active.
Args
span (Span): the span that should become active.
finish_on_close (Boolean): whether Span should be automatically
finished when Scope.close() is called.
Returns:
Scope to control the end of the active period for
*span*. It is a programming error to neglect to call
Scope.close() on the returned instance.
"""
enter_logcontext = False
ctx = LoggingContext.current_context()
if ctx is LoggingContext.sentinel:
# There is no logging context to attach the scope to.
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span)
elif ctx.scope is not None:
# We want the logging scope to look exactly the same so we give it
# a blank suffix
ctx = nested_logging_context("")
enter_logcontext = True
scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
ctx.scope = scope
return scope
class _LogContextScope(Scope):
"""
A custom opentracing scope. The only significant difference is that it will
close the log context it's related to if the logcontext was created specifically
for this scope.
"""
def __init__(self, manager, span, logcontext, enter_logcontext, finish_on_close):
"""
Args:
manager (LogContextScopeManager):
the manager that is responsible for this scope.
span (Span):
the opentracing span which this scope represents the local
lifetime for.
logcontext (LogContext):
the logcontext to which this scope is attached.
enter_logcontext (Boolean):
if True the logcontext will be entered and exited when the scope
is entered and exited respectively
finish_on_close (Boolean):
if True finish the span when the scope is closed
"""
super(_LogContextScope, self).__init__(manager, span)
self.logcontext = logcontext
self._finish_on_close = finish_on_close
self._enter_logcontext = enter_logcontext
def __enter__(self):
if self._enter_logcontext:
self.logcontext.__enter__()
def __exit__(self, type, value, traceback):
if type == twisted.internet.defer._DefGen_Return:
super(_LogContextScope, self).__exit__(None, None, None)
else:
super(_LogContextScope, self).__exit__(type, value, traceback)
if self._enter_logcontext:
self.logcontext.__exit__(type, value, traceback)
else: # the logcontext existed before the creation of the scope
self.logcontext.scope = None
def close(self):
if self.manager.active is not self:
logger.error("Tried to close a none active scope!")
return
if self._finish_on_close:
self.span.finish()

View File

@@ -29,8 +29,16 @@ from prometheus_client.core import REGISTRY, GaugeMetricFamily, HistogramMetricF
from twisted.internet import reactor
from synapse.metrics._exposition import (
MetricsResource,
generate_latest,
start_http_server,
)
logger = logging.getLogger(__name__)
METRICS_PREFIX = "/_synapse/metrics"
running_on_pypy = platform.python_implementation() == "PyPy"
all_metrics = []
all_collectors = []
@@ -470,3 +478,12 @@ try:
gc.disable()
except AttributeError:
pass
__all__ = [
"MetricsResource",
"generate_latest",
"start_http_server",
"LaterGauge",
"InFlightGauge",
"BucketCollector",
]

View File

@@ -0,0 +1,258 @@
# -*- coding: utf-8 -*-
# Copyright 2015-2019 Prometheus Python Client Developers
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
This code is based on `prometheus_client/exposition.py` from version 0.7.1.
Due to the renaming of metrics in prometheus_client 0.4.0, this customised
vendoring of the code emits both the old names that Synapse dashboards
expect and the newer "best practice" names used by the up-to-date official
client.
"""
import math
import threading
from collections import namedtuple
from http.server import BaseHTTPRequestHandler, HTTPServer
from socketserver import ThreadingMixIn
from urllib.parse import parse_qs, urlparse
from prometheus_client import REGISTRY
from twisted.web.resource import Resource
try:
from prometheus_client.samples import Sample
except ImportError:
Sample = namedtuple("Sample", ["name", "labels", "value", "timestamp", "exemplar"])
CONTENT_TYPE_LATEST = str("text/plain; version=0.0.4; charset=utf-8")
INF = float("inf")
MINUS_INF = float("-inf")
def floatToGoString(d):
d = float(d)
if d == INF:
return "+Inf"
elif d == MINUS_INF:
return "-Inf"
elif math.isnan(d):
return "NaN"
else:
s = repr(d)
dot = s.find(".")
# Go switches to exponents sooner than Python.
# We only need to care about positive values for le/quantile.
if d > 0 and dot > 6:
mantissa = "{0}.{1}{2}".format(s[0], s[1:dot], s[dot + 1 :]).rstrip("0.")
return "{0}e+0{1}".format(mantissa, dot - 1)
return s
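# Examples (following the branches above):
#     floatToGoString(0.5)            -> "0.5"
#     floatToGoString(float("inf"))   -> "+Inf"
#     floatToGoString(12345678.0)     -> "1.2345678e+07"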
def sample_line(line, name):
if line.labels:
labelstr = "{{{0}}}".format(
",".join(
[
'{0}="{1}"'.format(
k,
v.replace("\\", r"\\").replace("\n", r"\n").replace('"', r"\""),
)
for k, v in sorted(line.labels.items())
]
)
)
else:
labelstr = ""
timestamp = ""
if line.timestamp is not None:
# Convert to milliseconds.
timestamp = " {0:d}".format(int(float(line.timestamp) * 1000))
return "{0}{1} {2}{3}\n".format(
name, labelstr, floatToGoString(line.value), timestamp
)
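# e.g. (illustrative labels):
#     sample_line(Sample("up", {"job": "synapse"}, 1.0, None, None), "up")
# returns 'up{job="synapse"} 1.0\n'.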
def nameify_sample(sample):
"""
If we get a prometheus_client<0.4.0 sample as a tuple, transform it into a
namedtuple which has the names we expect.
"""
if not isinstance(sample, Sample):
sample = Sample(*sample, None, None)
return sample
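# e.g. a 3-tuple from an old client gains the two missing fields:
#     nameify_sample(("up", {}, 1.0)) -> Sample("up", {}, 1.0, None, None)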
def generate_latest(registry, emit_help=False):
output = []
for metric in registry.collect():
if metric.name.startswith("__unused"):
continue
if not metric.samples:
# No samples, don't bother.
continue
mname = metric.name
mnewname = metric.name
mtype = metric.type
# OpenMetrics -> Prometheus
if mtype == "counter":
mnewname = mnewname + "_total"
elif mtype == "info":
mtype = "gauge"
mnewname = mnewname + "_info"
elif mtype == "stateset":
mtype = "gauge"
elif mtype == "gaugehistogram":
mtype = "histogram"
elif mtype == "unknown":
mtype = "untyped"
# Output in the old format for compatibility.
if emit_help:
output.append(
"# HELP {0} {1}\n".format(
mname,
metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
)
)
output.append("# TYPE {0} {1}\n".format(mname, mtype))
for sample in map(nameify_sample, metric.samples):
# Get rid of the OpenMetrics-specific samples
for suffix in ["_created", "_gsum", "_gcount"]:
if sample.name.endswith(suffix):
break
else:
newname = sample.name.replace(mnewname, mname)
if ":" in newname and newname.endswith("_total"):
newname = newname[: -len("_total")]
output.append(sample_line(sample, newname))
# Strip the legacy colon separators from the name while we're at it
if mtype == "counter":
mnewname = mnewname.replace(":total", "")
mnewname = mnewname.replace(":", "_")
if mname == mnewname:
continue
# Also output in the new format, if it's different.
if emit_help:
output.append(
"# HELP {0} {1}\n".format(
mnewname,
metric.documentation.replace("\\", r"\\").replace("\n", r"\n"),
)
)
output.append("# TYPE {0} {1}\n".format(mnewname, mtype))
for sample in map(nameify_sample, metric.samples):
# Get rid of the OpenMetrics-specific samples
for suffix in ["_created", "_gsum", "_gcount"]:
if sample.name.endswith(suffix):
break
else:
output.append(
sample_line(
sample, sample.name.replace(":total", "").replace(":", "_")
)
)
return "".join(output).encode("utf-8")
class MetricsHandler(BaseHTTPRequestHandler):
"""HTTP handler that gives metrics from ``REGISTRY``."""
registry = REGISTRY
def do_GET(self):
registry = self.registry
params = parse_qs(urlparse(self.path).query)
if "help" in params:
emit_help = True
else:
emit_help = False
try:
output = generate_latest(registry, emit_help=emit_help)
except Exception:
self.send_error(500, "error generating metric output")
raise
self.send_response(200)
self.send_header("Content-Type", CONTENT_TYPE_LATEST)
self.end_headers()
self.wfile.write(output)
def log_message(self, format, *args):
"""Log nothing."""
@classmethod
def factory(cls, registry):
"""Returns a dynamic MetricsHandler class tied
to the passed registry.
"""
# This implementation relies on MetricsHandler.registry
# (defined above and defaulted to REGISTRY).
# As we have unicode_literals, we need to create a str()
# object for type().
cls_name = str(cls.__name__)
MyMetricsHandler = type(cls_name, (cls, object), {"registry": registry})
return MyMetricsHandler
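# e.g. (illustrative, assuming prometheus_client's CollectorRegistry):
#     MyHandler = MetricsHandler.factory(CollectorRegistry())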
class _ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
"""Thread per request HTTP server."""
# Make worker threads "fire and forget". Beginning with Python 3.7 this
# prevents a memory leak because ``ThreadingMixIn`` starts to gather all
# non-daemon threads in a list in order to join on them at server close.
# Enabling daemon threads virtually makes ``_ThreadingSimpleServer`` the
# same as Python 3.7's ``ThreadingHTTPServer``.
daemon_threads = True
def start_http_server(port, addr="", registry=REGISTRY):
"""Starts an HTTP server for prometheus metrics as a daemon thread"""
CustomMetricsHandler = MetricsHandler.factory(registry)
httpd = _ThreadingSimpleServer((addr, port), CustomMetricsHandler)
t = threading.Thread(target=httpd.serve_forever)
t.daemon = True
t.start()
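# Usage sketch (assuming port 9090 is free):
#     start_http_server(9090)
# after which metrics can be scraped from http://localhost:9090/; appending
# "?help=1" to the URL includes the # HELP lines in the output.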
class MetricsResource(Resource):
"""
Twisted ``Resource`` that serves prometheus metrics.
"""
isLeaf = True
def __init__(self, registry=REGISTRY):
self.registry = registry
def render_GET(self, request):
request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
return generate_latest(self.registry)
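# A minimal sketch of serving this resource with twisted.web (illustrative;
# Synapse itself mounts it under METRICS_PREFIX elsewhere):
#
#     from twisted.internet import reactor
#     from twisted.web.resource import Resource
#     from twisted.web.server import Site
#
#     root = Resource()
#     root.putChild(b"metrics", MetricsResource())
#     reactor.listenTCP(9100, Site(root))
#     reactor.run()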

Some files were not shown because too many files have changed in this diff.