Compare commits
58 Commits
release-v1 ... erikj/rele
0eaa6dd30e
7a0fd6f98d
f27a789697
1b831f2bec
8f1aefa694
cbc82aa09f
fd7c743445
46f4be94b4
4504151546
ef2d627015
70269fbd18
8b42a4eefd
f21e24ffc2
22eeb6bc54
0073fe914a
56f0ee78a9
00b24aa545
9a7e0d2ea6
c97da1e45d
e80eb69887
b6ca69e4f1
31d721fbf6
2239813278
29ce6d43b5
a6ea1a957e
aff1eb7c67
e90fad5cba
88e1d0c52b
f49c2093b5
a699c044b6
4215a3acd4
0c7f9cb81f
9b7c28283a
24229fac05
2e380f0f18
10f45d85bb
66e6801c3e
6c9ab61df5
49d72dea2a
f6a3859a73
4ac3a8c5dc
cf9a17a2b3
ff7f0e8a14
e8dbbcb64c
73d8209694
913f8a06e4
7b13780c54
2b7c180879
34a5696f93
c850dd9a8e
db9ef792f0
f28756bb40
4fb7a68a65
054a6b9538
514a240aed
b19b63e6b4
a9f90fa73a
2ac908f377
@@ -46,7 +46,7 @@ locally. You'll need python 3.6 or later, and to install a number of tools:

 ```
 # Install the dependencies
-pip install -e ".[lint]"
+pip install -e ".[lint,mypy]"

 # Run the linter script
 ./scripts-dev/lint.sh

@@ -63,7 +63,7 @@ run-time:

 ./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
 ```

-You can also provided the `-d` option, which will lint the files that have been
+You can also provide the `-d` option, which will lint the files that have been
 changed since the last git commit. This will often be significantly faster than
 linting the whole codebase.
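As a usage sketch of the `-d` option described above (run from the repository root):

```
# Lint only the files changed since the last git commit
./scripts-dev/lint.sh -d
```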
@@ -57,7 +57,7 @@ light workloads.

 System requirements:

 - POSIX-compliant system (tested on Linux & OS X)
-- Python 3.5.2 or later, up to Python 3.8.
+- Python 3.5.2 or later, up to Python 3.9.
 - At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org

 Synapse is written in Python but some of the libraries it uses are written in
README.rst (10 lines changed)

@@ -256,9 +256,9 @@ directory of your choice::

 Synapse has a number of external dependencies, that are easiest
 to install using pip and a virtualenv::

-    virtualenv -p python3 env
-    source env/bin/activate
-    python -m pip install --no-use-pep517 -e ".[all]"
+    python3 -m venv ./env
+    source ./env/bin/activate
+    pip install -e ".[all,test]"

 This will run a process of downloading and installing all the needed
 dependencies into a virtual env.

@@ -270,9 +270,9 @@ check that everything is installed as it should be::

 This should end with a 'PASSED' result::

-    Ran 143 tests in 0.601s
+    Ran 1266 tests in 643.930s

-    PASSED (successes=143)
+    PASSED (skips=15, successes=1251)

 Running the Integration Tests
 =============================
UPGRADE.rst (16 lines changed)

@@ -75,6 +75,22 @@ for example:

     wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
     dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb

+Upgrading to v1.23.0
+====================
+
+Structured logging configuration breaking changes
+-------------------------------------------------
+
+This release deprecates use of the ``structured: true`` logging configuration for
+structured logging. If your logging configuration contains ``structured: true``
+then it should be modified based on the `structured logging documentation
+<https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md>`_.
+
+The ``structured`` and ``drains`` logging options are now deprecated and should
+be replaced by standard logging configuration of ``handlers`` and ``formatters``.
+
+A future release of Synapse will make using ``structured: true`` an error.
+
 Upgrading to v1.22.0
 ====================
New changelog fragments under changelog.d/ (one new file, one line each):

changelog.d/8455.bugfix: Fix fetching of E2E cross signing keys over federation when only one of the master key and device signing key is cached already.
changelog.d/8519.feature: Add an admin API to delete a single file, or files that were not used for a defined time, from the server. Contributed by @dklimpel.
changelog.d/8539.feature: Split admin API for reported events (`GET /_synapse/admin/v1/event_reports`) into detail and list endpoints. This is a breaking change to #8217 which was introduced in Synapse v1.21.0. Those who already use this API should check their scripts. Contributed by @dklimpel.
changelog.d/8559.misc: Optimise `/createRoom` with multiple invited users.
changelog.d/8580.bugfix: Fix a bug where Synapse would blindly forward bad responses from federation to clients when retrieving profile information.
changelog.d/8582.doc: Instructions for Azure AD in the OpenID Connect documentation. Contributed by peterk.
changelog.d/8595.misc: Implement and use an @lru_cache decorator.
changelog.d/8607.feature: Support generating structured logs via the standard logging configuration.
changelog.d/8610.feature: Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel.
changelog.d/8614.misc: Don't instantiate Requester directly.
changelog.d/8615.misc: Type hints for `RegistrationStore`.
changelog.d/8616.misc: Change schema to support access tokens belonging to one user but granting access to another.
changelog.d/8620.bugfix: Fix a bug where the account validity endpoint would silently fail if the user ID did not have an expiration time. It now returns a 400 error.
changelog.d/8621.misc: Remove unused OPTIONS handlers.
changelog.d/8627.bugfix: Fix email notifications for invites without local state.
changelog.d/8628.bugfix: Fix handling of invalid group IDs to return a 400 rather than log an exception and return a 500.
changelog.d/8632.bugfix: Fix handling of User-Agent headers that are invalid UTF-8, which caused user agents of users to not get correctly recorded.
changelog.d/8633.misc: Run `mypy` as part of the lint.sh script.
changelog.d/8634.misc: Correct Synapse's PyPI package name in the OpenID Connect installation instructions.
changelog.d/8635.doc: Improve the sample configuration for single sign-on providers.
changelog.d/8639.misc: Fix typos and spelling errors in the code.
changelog.d/8640.misc: Reduce number of OpenTracing spans started.
changelog.d/8643.bugfix: Fix a bug in the `joined_rooms` admin API if the user has never joined any rooms. The bug was introduced, along with the API, in v1.21.0.
changelog.d/8644.misc: Add field `total` to device list in admin API.
changelog.d/8647.feature: Add an admin API `GET /_synapse/admin/v1/users/<user_id>/media` to get information about uploaded media. Contributed by @dklimpel.
changelog.d/8655.misc: Add more type hints to the application services code.
changelog.d/8657.doc: Fix the filepath of Dex's example config and the link to Dex's Getting Started guide in the OpenID Connect docs.
changelog.d/8664.misc: Tell Black to format code for Python 3.5.
changelog.d/8665.doc: Note support for Python 3.9.
changelog.d/8666.doc: Minor updates to docs on running tests.
changelog.d/8667.doc: Interlink prometheus/grafana documentation.
changelog.d/8668.misc: Reduce number of OpenTracing spans started.
changelog.d/8669.misc: Don't pull event from DB when handling replication traffic.
changelog.d/8670.misc: Reduce number of OpenTracing spans started.
changelog.d/8671.misc: Abstract some invite-related code in preparation for landing knocking.
changelog.d/8679.misc: Clarify representation of events in logfiles.
changelog.d/8680.misc: Don't require `hiredis` package to be installed to run unit tests.
changelog.d/8682.bugfix: Fix exception during handling multiple concurrent requests for remote media when using multiple media repositories.
changelog.d/8684.misc: Fix typing info on cache call signature to accept `on_invalidate`.
changelog.d/8685.feature: Support generating structured logs via the standard logging configuration.
changelog.d/8688.misc: Abstract some invite-related code in preparation for landing knocking.
changelog.d/8689.feature: Add an admin API to allow server admins to list users' pushers. Contributed by @dklimpel.
changelog.d/8690.misc: Fail tests if they do not await coroutines.
changelog.d/8693.misc: Add more type hints to the application services code.
@@ -3,4 +3,4 @@

 0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
 1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
 2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
-3. Set up additional recording rules
+3. Set up required recording rules. https://github.com/matrix-org/synapse/tree/master/contrib/prometheus
@@ -17,67 +17,26 @@ It returns a JSON body like the following:

    {
        "event_reports": [
            {
                "content": {
                    "reason": "foo",
                    "score": -100
                },
                "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
                "event_json": {
                    "auth_events": [
                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
                        "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
                    ],
                    "content": {
                        "body": "matrix.org: This Week in Matrix",
                        "format": "org.matrix.custom.html",
                        "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
                        "msgtype": "m.notice"
                    },
                    "depth": 546,
                    "hashes": {
                        "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
                    },
                    "origin": "matrix.org",
                    "origin_server_ts": 1592291711430,
                    "prev_events": [
                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
                    ],
                    "prev_state": [],
                    "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
                    "sender": "@foobar:matrix.org",
                    "signatures": {
                        "matrix.org": {
                            "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
                        }
                    },
                    "type": "m.room.message",
                    "unsigned": {
                        "age_ts": 1592291711430,
                    }
                },
                "id": 2,
                "reason": "foo",
                "score": -100,
                "received_ts": 1570897107409,
                "room_alias": "#alias1:matrix.org",
                "canonical_alias": "#alias1:matrix.org",
                "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
                "name": "Matrix HQ",
                "sender": "@foobar:matrix.org",
                "user_id": "@foo:matrix.org"
            },
            {
                "content": {
                    "reason": "bar",
                    "score": -100
                },
                "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
                "event_json": {
                    // hidden items
                    // see above
                },
                "id": 3,
                "reason": "bar",
                "score": -100,
                "received_ts": 1598889612059,
                "room_alias": "#alias2:matrix.org",
                "canonical_alias": "#alias2:matrix.org",
                "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
                "name": "Your room name here",
                "sender": "@foobar:matrix.org",
                "user_id": "@bar:matrix.org"
            }

@@ -113,17 +72,94 @@ The following fields are returned in the JSON response body:

- ``id``: integer - ID of event report.
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
- ``room_id``: string - The ID of the room in which the event being reported is located.
- ``name``: string - The name of the room.
- ``event_id``: string - The ID of the reported event.
- ``user_id``: string - This is the user who reported the event and wrote the reason.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``content``: object - Content of reported event.

  - ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
  - ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".

- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
- ``room_alias``: string - The alias of the room. ``null`` if the room does not have a canonical alias set.
- ``event_json``: object - Details of the original event that was reported.
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
- ``next_token``: integer - Indication for pagination. See above.
- ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).

Show details of a specific event report
=======================================

This API returns information about a specific event report.

The API is::

    GET /_synapse/admin/v1/event_reports/<report_id>

To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

It returns a JSON body like the following:

.. code:: jsonc

    {
        "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
        "event_json": {
            "auth_events": [
                "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
                "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
            ],
            "content": {
                "body": "matrix.org: This Week in Matrix",
                "format": "org.matrix.custom.html",
                "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
                "msgtype": "m.notice"
            },
            "depth": 546,
            "hashes": {
                "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
            },
            "origin": "matrix.org",
            "origin_server_ts": 1592291711430,
            "prev_events": [
                "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
            ],
            "prev_state": [],
            "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
            "sender": "@foobar:matrix.org",
            "signatures": {
                "matrix.org": {
                    "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
                }
            },
            "type": "m.room.message",
            "unsigned": {
                "age_ts": 1592291711430,
            }
        },
        "id": <report_id>,
        "reason": "foo",
        "score": -100,
        "received_ts": 1570897107409,
        "canonical_alias": "#alias1:matrix.org",
        "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
        "name": "Matrix HQ",
        "sender": "@foobar:matrix.org",
        "user_id": "@foo:matrix.org"
    }

**URL parameters:**

- ``report_id``: string - The ID of the event report.

**Response**

The following fields are returned in the JSON response body:

- ``id``: integer - ID of event report.
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
- ``room_id``: string - The ID of the room in which the event being reported is located.
- ``name``: string - The name of the room.
- ``event_id``: string - The ID of the reported event.
- ``user_id``: string - This is the user who reported the event and wrote the reason.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".
- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
- ``canonical_alias``: string - The canonical alias of the room. ``null`` if the room does not have a canonical alias set.
- ``event_json``: object - Details of the original event that was reported.
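For illustration, a call to the detail endpoint could look like the following; the homeserver name, report ID and token are placeholders, not values taken from this change:

```
curl --header "Authorization: Bearer <access_token>" \
    "https://example.com/_synapse/admin/v1/event_reports/<report_id>"
```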
@@ -100,3 +100,82 @@ Response:

   "num_quarantined": 10  # The number of media items successfully quarantined
}
```

# Delete local media

This API deletes the *local* media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g. https://github.com/turt2live/matrix-media-repo/).
See also [purge_remote_media.rst](purge_remote_media.rst).

## Delete a specific local media

Delete a specific `media_id`.

Request:

```
DELETE /_synapse/admin/v1/media/<server_name>/<media_id>

{}
```

URL Parameters

* `server_name`: string - The name of your local server (e.g. `matrix.org`)
* `media_id`: string - The ID of the media (e.g. `abcdefghijklmnopqrstuvwx`)

Response:

```json
{
    "deleted_media": [
        "abcdefghijklmnopqrstuvwx"
    ],
    "total": 1
}
```

The following fields are returned in the JSON response body:

* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`

## Delete local media by date or size

Request:

```
POST /_synapse/admin/v1/media/<server_name>/delete?before_ts=<before_ts>

{}
```

URL Parameters

* `server_name`: string - The name of your local server (e.g. `matrix.org`).
* `before_ts`: string representing a positive integer - Unix timestamp in ms.
  Files that were last used before this timestamp will be deleted. It is the timestamp
  of last access, not the timestamp of creation.
* `size_gt`: Optional - string representing a positive integer - Size of the media in bytes.
  Files that are larger will be deleted. Defaults to `0`.
* `keep_profiles`: Optional - string representing a boolean - Switch to also delete files
  that are still used in image data (e.g. user profiles, room avatars).
  If `false` these files will be deleted. Defaults to `true`.

Response:

```json
{
    "deleted_media": [
        "abcdefghijklmnopqrstuvwx",
        "abcdefghijklmnopqrstuvwz"
    ],
    "total": 2
}
```

The following fields are returned in the JSON response body:

* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
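An illustrative invocation of the delete-by-date-or-size endpoint (the homeserver name, millisecond timestamp, size and token below are placeholder values, not defaults): this would delete media larger than 1048576 bytes that was last accessed before the given timestamp.

```
curl -X POST --header "Authorization: Bearer <access_token>" \
    "https://example.com/_synapse/admin/v1/media/example.com/delete?before_ts=1604055600000&size_gt=1048576"
```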
@@ -341,6 +341,89 @@ The following fields are returned in the JSON response body:

- ``total`` - Number of rooms.


List media of a user
====================
Gets a list of all local media that a specific ``user_id`` has created.
The response is ordered by creation date descending and media ID descending.
The newest media is on top.

The API is::

    GET /_synapse/admin/v1/users/<user_id>/media

To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

A response body like the following is returned:

.. code:: json

    {
        "media": [
            {
                "created_ts": 100400,
                "last_access_ts": null,
                "media_id": "qXhyRzulkwLsNHTbpHreuEgo",
                "media_length": 67,
                "media_type": "image/png",
                "quarantined_by": null,
                "safe_from_quarantine": false,
                "upload_name": "test1.png"
            },
            {
                "created_ts": 200400,
                "last_access_ts": null,
                "media_id": "FHfiSnzoINDatrXHQIXBtahw",
                "media_length": 67,
                "media_type": "image/png",
                "quarantined_by": null,
                "safe_from_quarantine": false,
                "upload_name": "test2.png"
            }
        ],
        "next_token": 3,
        "total": 2
    }

To paginate, check for ``next_token`` and if present, call the endpoint again
with ``from`` set to the value of ``next_token``. This will return a new page.

If the endpoint does not return a ``next_token`` then there are no more
media to paginate through.
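For example, a follow-up page request using the ``next_token`` value from the response above might look like this (values are illustrative):

```
GET /_synapse/admin/v1/users/<user_id>/media?from=3&limit=100
```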
**Parameters**

The following parameters should be set in the URL:

- ``user_id`` - string - fully qualified: for example, ``@user:server.com``.
- ``limit``: string representing a positive integer - Is optional but is used for pagination,
  denoting the maximum number of items to return in this call. Defaults to ``100``.
- ``from``: string representing a positive integer - Is optional but used for pagination,
  denoting the offset in the returned results. This should be treated as an opaque value and
  not explicitly set to anything other than the return value of ``next_token`` from a previous call.
  Defaults to ``0``.

**Response**

The following fields are returned in the JSON response body:

- ``media`` - An array of objects, each containing information about a media item.
  Media objects contain the following fields:

  - ``created_ts`` - integer - Timestamp in ms when the content was uploaded.
  - ``last_access_ts`` - integer - Timestamp in ms when the content was last accessed.
  - ``media_id`` - string - The id used to refer to the media.
  - ``media_length`` - integer - Length of the media in bytes.
  - ``media_type`` - string - The MIME-type of the media.
  - ``quarantined_by`` - string - The user ID that initiated the quarantine request
    for this media.
  - ``safe_from_quarantine`` - bool - Whether this media is marked as safe from quarantine.
  - ``upload_name`` - string - The name the media was uploaded with.

- ``next_token``: integer - Indication for pagination. See above.
- ``total`` - integer - Total number of media.

User devices
============

@@ -375,7 +458,8 @@ A response body like the following is returned:

         "last_seen_ts": 1474491775025,
         "user_id": "<user_id>"
       }
-  ]
+  ],
+  "total": 2
 }

 **Parameters**

@@ -400,6 +484,8 @@ The following fields are returned in the JSON response body:

   devices was last seen. (May be a few minutes out of date, for efficiency reasons).
 - ``user_id`` - Owner of device.

+- ``total`` - Total number of user's devices.
+
 Delete multiple devices
 -----------------------
 Deletes the given devices for a specific ``user_id``, and invalidates

@@ -525,3 +611,82 @@ The following parameters should be set in the URL:

- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to delete.

List all pushers
================
Gets information about all pushers for a specific ``user_id``.

The API is::

    GET /_synapse/admin/v1/users/<user_id>/pushers

To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

A response body like the following is returned:

.. code:: json

    {
        "pushers": [
            {
                "app_display_name": "HTTP Push Notifications",
                "app_id": "m.http",
                "data": {
                    "url": "example.com"
                },
                "device_display_name": "pushy push",
                "kind": "http",
                "lang": "None",
                "profile_tag": "",
                "pushkey": "a@example.com"
            }
        ],
        "total": 1
    }

**Parameters**

The following parameters should be set in the URL:

- ``user_id`` - fully qualified: for example, ``@user:server.com``.

**Response**

The following fields are returned in the JSON response body:

- ``pushers`` - An array containing the current pushers for the user

  - ``app_display_name`` - string - A string that will allow the user to identify
    what application owns this pusher.
  - ``app_id`` - string - This is a reverse-DNS style identifier for the application.
    Max length, 64 chars.
  - ``data`` - A dictionary of information for the pusher implementation itself.

    - ``url`` - string - Required if ``kind`` is ``http``. The URL to use to send
      notifications to.
    - ``format`` - string - The format to use when sending notifications to the
      Push Gateway.

  - ``device_display_name`` - string - A string that will allow the user to identify
    what device owns this pusher.
  - ``kind`` - string - The kind of pusher. "http" is a pusher that sends HTTP pokes.
  - ``lang`` - string - The preferred language for receiving notifications
    (e.g. 'en' or 'en-US').
  - ``profile_tag`` - string - This string determines which set of device specific rules
    this pusher executes.
  - ``pushkey`` - string - This is a unique identifier for this pusher.
    Max length, 512 bytes.

- ``total`` - integer - Number of pushers.

See also the `Client-Server API Spec <https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers>`_.
@@ -60,6 +60,8 @@

 1. Restart Prometheus.

+1. Consider using the [grafana dashboard](https://github.com/matrix-org/synapse/tree/master/contrib/grafana/) and required [recording rules](https://github.com/matrix-org/synapse/tree/master/contrib/prometheus/)
+
 ## Monitoring workers

 To monitor a Synapse installation using
@@ -37,7 +37,7 @@ as follows:

   provided by `matrix.org` so no further action is needed.

 * If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
-  install synapse[oidc]` to install the necessary dependencies.
+  install matrix-synapse[oidc]` to install the necessary dependencies.

 * For other installation mechanisms, see the documentation provided by the
   maintainer.

@@ -52,14 +52,39 @@ specific providers.

Here are a few configs for providers that should work with Synapse.

### Microsoft Azure Active Directory
Azure AD can act as an OpenID Connect Provider. Register a new application under
*App registrations* in the Azure AD management console. The RedirectURI for your
application should point to your matrix server: `[synapse public baseurl]/_synapse/oidc/callback`

Go to *Certificates & secrets* and register a new client secret. Make note of your
Directory (tenant) ID as it will be used in the Azure links.
Edit your Synapse config file and change the `oidc_config` section:

```yaml
oidc_config:
  enabled: true
  issuer: "https://login.microsoftonline.com/<tenant id>/v2.0"
  client_id: "<client id>"
  client_secret: "<client secret>"
  scopes: ["openid", "profile"]
  authorization_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/authorize"
  token_endpoint: "https://login.microsoftonline.com/<tenant id>/oauth2/v2.0/token"
  userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo"

  user_mapping_provider:
    config:
      localpart_template: "{{ user.preferred_username.split('@')[0] }}"
      display_name_template: "{{ user.name }}"
```

### [Dex][dex-idp]

[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help building a full-blown provider with an
external database, it can be configured with static passwords in a config file.

-Follow the [Getting Started
-guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
+Follow the [Getting Started guide](https://dexidp.io/docs/getting-started/)
 to install Dex.

 Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:

@@ -73,7 +98,7 @@ staticClients:
     name: 'Synapse'
 ```

-Run with `dex serve examples/config-dex.yaml`.
+Run with `dex serve examples/config-dev.yaml`.

 Synapse config:
@@ -1505,10 +1505,8 @@ trusted_key_servers:

## Single sign-on integration ##

# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
# The following settings can be used to make Synapse use a single sign-on
# provider for authentication, instead of its internal password database.
#
# You will probably also want to set the following options to `false` to
# disable the regular login/registration flows:

@@ -1517,6 +1515,11 @@ trusted_key_servers:
#
# You will also want to investigate the settings under the "sso" configuration
# section below.

# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to

@@ -1532,40 +1535,42 @@ saml2_config:
  # so it is not normally necessary to specify them unless you need to
  # override them.
  #
  #sp_config:
  #  # point this to the IdP's metadata. You can use either a local file or
  #  # (preferably) a URL.
  #  metadata:
  #    #local: ["saml2/idp.xml"]
  #    remote:
  #      - url: https://our_idp/metadata.xml
  #
  #  # By default, the user has to go to our login page first. If you'd like
  #  # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
  #  # 'service.sp' section:
  #  #
  #  #service:
  #  #  sp:
  #  #    allow_unsolicited: true
  #
  #  # The examples below are just used to generate our metadata xml, and you
  #  # may well not need them, depending on your setup. Alternatively you
  #  # may need a whole lot more detail - see the pysaml2 docs!
  #
  #  description: ["My awesome SP", "en"]
  #  name: ["Test SP", "en"]
  #
  #  organization:
  #    name: Example com
  #    display_name:
  #      - ["Example co", "en"]
  #    url: "http://example.com"
  #
  #  contact_person:
  #    - given_name: Bob
  #      sur_name: "the Sysadmin"
  #      email_address": ["admin@example.com"]
  #      contact_type": technical
  sp_config:
    # Point this to the IdP's metadata. You must provide either a local
    # file via the `local` attribute or (preferably) a URL via the
    # `remote` attribute.
    #
    #metadata:
    #  local: ["saml2/idp.xml"]
    #  remote:
    #    - url: https://our_idp/metadata.xml

    # By default, the user has to go to our login page first. If you'd like
    # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
    # 'service.sp' section:
    #
    #service:
    #  sp:
    #    allow_unsolicited: true

    # The examples below are just used to generate our metadata xml, and you
    # may well not need them, depending on your setup. Alternatively you
    # may need a whole lot more detail - see the pysaml2 docs!

    #description: ["My awesome SP", "en"]
    #name: ["Test SP", "en"]

    #organization:
    #  name: Example com
    #  display_name:
    #    - ["Example co", "en"]
    #  url: "http://example.com"

    #contact_person:
    #  - given_name: Bob
    #    sur_name: "the Sysadmin"
    #    email_address": ["admin@example.com"]
    #    contact_type": technical

    # Instead of putting the config inline as above, you can specify a
    # separate pysaml2 configuration file:

@@ -1641,11 +1646,10 @@ saml2_config:
  #    value: "sales"


# OpenID Connect integration. The following settings can be used to make Synapse
# use an OpenID Connect Provider for authentication, instead of its internal
# password database.
# Enable OpenID Connect (OIDC) / OAuth 2.0 for registration and login.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
# for some example configurations.
#
oidc_config:
  # Uncomment the following to enable authorization against an OpenID Connect

@@ -1778,15 +1782,37 @@ oidc_config:


# Enable CAS for registration and login.
# Enable Central Authentication Service (CAS) for registration and login.
#
#cas_config:
#   enabled: true
#   server_url: "https://cas-server.com"
#   service_url: "https://homeserver.domain.com:8448"
#   #displayname_attribute: name
#   #required_attributes:
#   #    name: value
cas_config:
  # Uncomment the following to enable authorization against a CAS server.
  # Defaults to false.
  #
  #enabled: true

  # The URL of the CAS authorization endpoint.
  #
  #server_url: "https://cas-server.com"

  # The public URL of the homeserver.
  #
  #service_url: "https://homeserver.domain.com:8448"

  # The attribute of the CAS response to use as the display name.
  #
  # If unset, no displayname will be set.
  #
  #displayname_attribute: name

  # It is possible to configure Synapse to only allow logins if CAS attributes
  # match particular values. All of the keys in the mapping below must exist
  # and the values must match the given value. Alternately if the given value
  # is None then any value is allowed (the attribute just must exist).
  # All of the listed attributes must match for the login to be permitted.
  #
  #required_attributes:
  #  userGroup: "staff"
  #  department: None


# Additional settings to use with single-sign on systems such as OpenID Connect,

@@ -1886,7 +1912,7 @@ sso:
 # and issued at ("iat") claims are validated if present.
 #
 # Note that this is a non-standard login type and client support is
-# expected to be non-existant.
+# expected to be non-existent.
 #
 # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
 #

@@ -2402,7 +2428,7 @@ spam_checker:
 #
 # Options for the rules include:
 #
-#  user_id: Matches agaisnt the creator of the alias
+#  user_id: Matches against the creator of the alias
 #  room_id: Matches against the room ID being published
 #  alias: Matches against any current local or canonical aliases
 #         associated with the room

@@ -2448,7 +2474,7 @@ opentracing:
 # This is a list of regexes which are matched against the server_name of the
 # homeserver.
 #
-# By defult, it is empty, so no servers are matched.
+# By default, it is empty, so no servers are matched.
 #
 #homeserver_whitelist:
 #  - ".*"
@@ -3,7 +3,11 @@

 # This is a YAML file containing a standard Python logging configuration
 # dictionary. See [1] for details on the valid settings.
 #
+# Synapse also supports structured logging for machine readable logs which can
+# be ingested by ELK stacks. See [2] for details.
+#
 # [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
+# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md

 version: 1

@@ -59,7 +63,7 @@ root:
     # then write them to a file.
     #
     # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
-    # also need to update the configuation for the `twisted` logger above, in
+    # also need to update the configuration for the `twisted` logger above, in
     # this case.)
     #
     handlers: [buffer]
@@ -1,11 +1,116 @@
 # Structured Logging

-A structured logging system can be useful when your logs are destined for a machine to parse and process. By maintaining its machine-readable characteristics, it enables more efficient searching and aggregations when consumed by software such as the "ELK stack".
+A structured logging system can be useful when your logs are destined for a
+machine to parse and process. By maintaining its machine-readable characteristics,
+it enables more efficient searching and aggregations when consumed by software
+such as the "ELK stack".

-Synapse's structured logging system is configured via the file that Synapse's `log_config` config option points to. The file must be YAML and contain `structured: true`. It must contain a list of "drains" (places where logs go to).
+Synapse's structured logging system is configured via the file that Synapse's
+`log_config` config option points to. The file should include a formatter which
+uses the `synapse.logging.TerseJsonFormatter` class included with Synapse and a
+handler which uses the above formatter.
+
+There is also a `synapse.logging.JsonFormatter` option which does not include
+a timestamp in the resulting JSON. This is useful if the log ingester adds its
+own timestamp.
+
+A structured logging configuration looks similar to the following:
+
+```yaml
+version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    file:
+        class: logging.handlers.TimedRotatingFileHandler
+        formatter: structured
+        filename: /path/to/my/logs/homeserver.log
+        when: midnight
+        backupCount: 3  # Does not include the current log file.
+        encoding: utf8
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [file]
+    synapse.storage.SQL:
+        level: WARNING
+```
+
+The above logging config will set Synapse as 'INFO' logging level by default,
+with the SQL layer at 'WARNING', and will log to a file, stored as JSON.
+
+It is also possible to configure Synapse to log to a remote endpoint by using the
+`synapse.logging.RemoteHandler` class included with Synapse. It takes the
+following arguments:
+
+- `host`: Hostname or IP address of the log aggregator.
+- `port`: Numerical port to contact on the host.
+- `maximum_buffer`: (Optional, defaults to 1000) The maximum buffer size to allow.
+
+A remote structured logging configuration looks similar to the following:
+
+```yaml
+version: 1
+
+formatters:
+    structured:
+        class: synapse.logging.TerseJsonFormatter
+
+handlers:
+    remote:
+        class: synapse.logging.RemoteHandler
+        formatter: structured
+        host: 10.1.2.3
+        port: 9999
+
+loggers:
+    synapse:
+        level: INFO
+        handlers: [remote]
+    synapse.storage.SQL:
+        level: WARNING
+```
+
+The above logging config will set Synapse as 'INFO' logging level by default,
+with the SQL layer at 'WARNING', and will log JSON formatted messages to a
+remote endpoint at 10.1.2.3:9999.
+
+## Upgrading from legacy structured logging configuration
+
+Versions of Synapse prior to v1.23.0 included a custom structured logging
+configuration which is deprecated. It used a `structured: true` flag and
+configured `drains` instead of `handlers` and `formatters`.
+
+Synapse currently automatically converts the old configuration to the new
+configuration, but this will be removed in a future version of Synapse. The
+following reference can be used to update your configuration. Based on the drain
+`type`, we can pick a new handler:
+
+1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
+   with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
+   or `ext://sys.stderr` should be used.
+2. For a type of `file` or `file_json`: a handler of `logging.FileHandler` with
+   a location of the file path should be used.
+3. For a type of `network_json_terse`: a handler of `synapse.logging.RemoteHandler`
+   with the host and port should be used.
+
+Then based on the drain `type` we can pick a new formatter:
+
+1. For a type of `console` or `file` no formatter is necessary.
+2. For a type of `console_json` or `file_json`: a formatter of
+   `synapse.logging.JsonFormatter` should be used.
+3. For a type of `console_json_terse` or `network_json_terse`: a formatter of
+   `synapse.logging.TerseJsonFormatter` should be used.
+
+For each new handler and formatter they should be added to the logging configuration
+and then assigned to either a logger or the root logger.
+
+An example legacy configuration:
+
 ```yaml
 structured: true

@@ -24,60 +129,33 @@ drains:
         location: homeserver.log
 ```

-The above logging config will set Synapse as 'INFO' logging level by default, with the SQL layer at 'WARNING', and will have two logging drains (to the console and to a file, stored as JSON).
+Would be converted into a new configuration:

-## Drain Types
+```yaml
+version: 1

-Drain types can be specified by the `type` key.
+formatters:
+    json:
+        class: synapse.logging.JsonFormatter

-### `console`
+handlers:
+    console:
+        class: logging.StreamHandler
+        stream: ext://sys.stdout
+    file:
+        class: logging.FileHandler
+        formatter: json
+        filename: homeserver.log

-Outputs human-readable logs to the console.
+loggers:
+    synapse:
+        level: INFO
+        handlers: [console, file]
+    synapse.storage.SQL:
+        level: WARNING
+```

-Arguments:
-
-- `location`: Either `stdout` or `stderr`.
-
-### `console_json`
-
-Outputs machine-readable JSON logs to the console.
-
-Arguments:
-
-- `location`: Either `stdout` or `stderr`.
-
-### `console_json_terse`
-
-Outputs machine-readable JSON logs to the console, separated by newlines. This
-format is not designed to be read and re-formatted into human-readable text, but
-is optimal for a logging aggregation system.
-
-Arguments:
-
-- `location`: Either `stdout` or `stderr`.
-
-### `file`
-
-Outputs human-readable logs to a file.
-
-Arguments:
-
-- `location`: An absolute path to the file to log to.
-
-### `file_json`
-
-Outputs machine-readable logs to a file.
-
-Arguments:
-
-- `location`: An absolute path to the file to log to.
-
-### `network_json_terse`
-
-Delivers machine-readable JSON logs to a log aggregator over TCP. This is
-compatible with LogStash's TCP input with the codec set to `json_lines`.
-
-Arguments:
-
-- `host`: Hostname or IP address of the log aggregator.
-- `port`: Numerical port to contact on the host.
+The new logging configuration is a bit more verbose, but significantly more
+flexible. It allows for configurations that were not previously possible, such
+as sending plain logs over the network, or using different handlers for
+different modules.
mypy.ini (6 lines changed)

@@ -17,6 +17,7 @@ files =
  synapse/federation,
  synapse/handlers/_base.py,
  synapse/handlers/account_data.py,
  synapse/handlers/account_validity.py,
  synapse/handlers/appservice.py,
  synapse/handlers/auth.py,
  synapse/handlers/cas_handler.py,

@@ -56,7 +57,9 @@ files =
  synapse/server_notices,
  synapse/spam_checker_api,
  synapse/state,
  synapse/storage/databases/main/appservice.py,
  synapse/storage/databases/main/events.py,
  synapse/storage/databases/main/registration.py,
  synapse/storage/databases/main/stream.py,
  synapse/storage/databases/main/ui_auth.py,
  synapse/storage/database.py,

@@ -80,6 +83,9 @@ ignore_missing_imports = True

 [mypy-zope]
 ignore_missing_imports = True

+[mypy-bcrypt]
+ignore_missing_imports = True
+
 [mypy-constantly]
 ignore_missing_imports = True
@@ -35,7 +35,7 @@

 showcontent = true

 [tool.black]
-target-version = ['py34']
+target-version = ['py35']
 exclude = '''

 (
@@ -80,7 +80,7 @@ else
     # then lint everything!
     if [[ -z ${files+x} ]]; then
         # Lint all source code files and directories
-        files=("synapse" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py")
+        files=("synapse" "tests" "scripts-dev" "scripts" "contrib" "synctl" "setup.py" "synmark")
     fi
 fi

@@ -94,3 +94,4 @@ isort "${files[@]}"
 python3 -m black "${files[@]}"
 ./scripts-dev/config-lint.sh
 flake8 "${files[@]}"
+mypy
@@ -19,9 +19,10 @@ can crop up, e.g the cache descriptors.

 from typing import Callable, Optional

+from mypy.nodes import ARG_NAMED_OPT
 from mypy.plugin import MethodSigContext, Plugin
 from mypy.typeops import bind_self
-from mypy.types import CallableType
+from mypy.types import CallableType, NoneType


 class SynapsePlugin(Plugin):

@@ -40,8 +41,9 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:

     It already has *almost* the correct signature, except:

-    1. the `self` argument needs to be marked as "bound"; and
-    2. any `cache_context` argument should be removed.
+    1. the `self` argument needs to be marked as "bound";
+    2. any `cache_context` argument should be removed;
+    3. an optional keyword argument `on_invalidated` should be added.
     """

     # First we mark this as a bound function signature.

@@ -58,19 +60,33 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
             context_arg_index = idx
             break

+    arg_types = list(signature.arg_types)
+    arg_names = list(signature.arg_names)
+    arg_kinds = list(signature.arg_kinds)
+
     if context_arg_index:
-        arg_types = list(signature.arg_types)
         arg_types.pop(context_arg_index)
-
-        arg_names = list(signature.arg_names)
         arg_names.pop(context_arg_index)
-
-        arg_kinds = list(signature.arg_kinds)
         arg_kinds.pop(context_arg_index)

-        signature = signature.copy_modified(
-            arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
-        )
+    # Third, we add an optional "on_invalidate" argument.
+    #
+    # This is a callable which accepts no input and returns nothing.
+    calltyp = CallableType(
+        arg_types=[],
+        arg_kinds=[],
+        arg_names=[],
+        ret_type=NoneType(),
+        fallback=ctx.api.named_generic_type("builtins.function", []),
+    )
+
+    arg_types.append(calltyp)
+    arg_names.append("on_invalidate")
+    arg_kinds.append(ARG_NAMED_OPT)  # Arg is an optional kwarg.
+
+    signature = signature.copy_modified(
+        arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
+    )

     return signature
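To make the effect of the adjusted signature concrete, here is a small self-contained sketch; the function and names are illustrative stand-ins, not Synapse's real cache descriptors. A cached method gains an optional keyword-only `on_invalidate` callback that takes no arguments and returns `None`.

```python
from typing import Callable, Optional

# Illustrative stand-in for the call signature of a @cached-decorated
# method after the plugin's adjustment: callers may pass an optional
# `on_invalidate` callback, invoked with no arguments on cache eviction.
def get_display_name(
    user_id: str, *, on_invalidate: Optional[Callable[[], None]] = None
) -> str:
    # A real cache descriptor would register `on_invalidate` against the
    # cache entry; this sketch only demonstrates the signature shape.
    return f"name-for-{user_id}"

get_display_name("@alice:example.com", on_invalidate=lambda: print("entry invalidated"))
```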
scripts-dev/release.py (227 lines, new file)

@@ -0,0 +1,227 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import subprocess
import sys
from typing import Optional

import click
import git
from packaging import version
from redbaron import RedBaron


def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
    """Find the branch/ref, looking first locally then in the remote."""
    if ref_name in repo.refs:
        return repo.refs[ref_name]
    elif ref_name in repo.remote().refs:
        return repo.remote().refs[ref_name]
    else:
        return None


def update_branch(repo: git.Repo):
    """Ensure branch is up to date if it has a remote."""
    if repo.active_branch.tracking_branch():
        repo.git.merge(repo.active_branch.tracking_branch().name)


@click.command()
def release():
    """Main release command."""

    # Make sure we're in a git repo.
    try:
        repo = git.Repo()
    except git.InvalidGitRepositoryError:
        raise click.ClickException("Not in Synapse repo.")

    if repo.is_dirty():
        raise click.ClickException("Uncommitted changes exist.")

    click.secho("Updating git repo...")
    repo.remote().fetch()

    # Parse the AST and load the `__version__` node so that we can edit it
    # later.
    with open("synapse/__init__.py") as f:
        red = RedBaron(f.read())

    version_node = None
    for node in red:
        if node.type != "assignment":
            continue

        if node.target.type != "name":
            continue

        if node.target.value != "__version__":
            continue

        version_node = node
        break

    if not version_node:
        print("Failed to find '__version__' definition in synapse/__init__.py")
        sys.exit(1)

    # Parse the current version.
    current_version = version.parse(version_node.value.value.strip('"'))
    assert isinstance(current_version, version.Version)

    # Figure out what sort of release we're doing and calculate the new version.
    rc = click.confirm("RC", default=True)
    if current_version.pre:
        # If the current version is an RC we don't need to bump any of the
        # version numbers (other than the RC number).
        base_version = "{}.{}.{}".format(
            current_version.major, current_version.minor, current_version.micro,
        )

        if rc:
            new_version = "{}.{}.{}rc{}".format(
                current_version.major,
                current_version.minor,
                current_version.micro,
                current_version.pre[1] + 1,
            )
        else:
            new_version = base_version
    else:
        # If this is a new release cycle then we need to know if it's a major
        # version bump or a hotfix.
        release_type = click.prompt(
            "Release type",
            type=click.Choice(("major", "hotfix")),
            show_choices=True,
            default="major",
        )

        if release_type == "major":
            base_version = new_version = "{}.{}.{}".format(
                current_version.major, current_version.minor + 1, 0,
            )
            if rc:
                new_version = "{}.{}.{}rc1".format(
                    current_version.major, current_version.minor + 1, 0,
                )
        else:
            base_version = new_version = "{}.{}.{}".format(
                current_version.major, current_version.minor, current_version.micro + 1,
            )
            if rc:
                new_version = "{}.{}.{}rc1".format(
                    current_version.major,
                    current_version.minor,
                    current_version.micro + 1,
                )

    # Confirm the calculated version is OK.
    if not click.confirm(f"Create new version: {new_version}?", default=True):
        click.get_current_context().abort()

    # Switch to the release branch.
    release_branch_name = f"release-v{base_version}"
    release_branch = find_ref(repo, release_branch_name)
    if release_branch:
        if release_branch.is_remote():
            # If the release branch only exists on the remote we check it out
            # locally.
            repo.git.checkout(release_branch_name)
            release_branch = repo.active_branch
    else:
        # If a branch doesn't exist we create one. We ask which branch it
        # should be based off, defaulting to sensible values depending on the
        # release type.
        if current_version.is_prerelease:
            default = release_branch_name
        elif release_type == "major":
            default = "develop"
        else:
            default = "master"

        branch_name = click.prompt(
            "Which branch should the release be based off of?", default=default
        )

        base_branch = find_ref(repo, branch_name)
        if not base_branch:
            print(f"Could not find base branch {branch_name}!")
            click.get_current_context().abort()

        # Checkout the base branch and ensure it's up to date
        repo.head.reference = base_branch
        repo.head.reset(index=True, working_tree=True)
        if not base_branch.is_remote():
            update_branch(repo)

        # Create the new release branch
        release_branch = repo.create_head(release_branch_name, commit=base_branch)

    # Switch to the release branch and ensure it's up to date.
    repo.git.checkout(release_branch_name)
    update_branch(repo)

    # Update the `__version__` variable and write it back to the file.
    version_node.value = '"' + new_version + '"'
    with open("synapse/__init__.py", "w") as f:
        f.write(red.dumps())

    # Generate changelogs
    subprocess.run("python3 -m towncrier", shell=True)

    # Generate debian changelogs if it's not an RC.
    if not rc:
        subprocess.run(
            f'dch -M -v {new_version} "New synapse release {new_version}."', shell=True
        )
        subprocess.run('dch -M -r -D stable ""', shell=True)

    # Show the user the changes and ask if they want to edit the change log.
    repo.git.add("-u")
    subprocess.run("git diff --cached", shell=True)

    if click.confirm("Edit changelog?", default=False):
        click.edit(filename="CHANGES.md")

    # Commit the changes.
    repo.git.add("-u")
    repo.git.commit(f"-m {new_version}")

    # We give the option to bail here in case the user wants to make sure things
    # are OK before pushing.
    if not click.confirm("Push branch to github?", default=True):
        print("")
        print("Run when ready to push:")
        print("")
        print(f"\tgit push -u {repo.remote().name} {repo.active_branch.name}")
|
||||
print("")
|
||||
sys.exit(0)
|
||||
|
||||
# Otherwise, push and open the changelog in the browser.
|
||||
repo.git.push(f"-u {repo.remote().name} {repo.active_branch.name}")
|
||||
|
||||
click.launch(
|
||||
f"https://github.com/matrix-org/synapse/blob/{repo.active_branch.name}/CHANGES.md"
|
||||
)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
release()
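The script's RC handling leans on `packaging`'s parsing of pre-release versions. A quick sketch of the arithmetic it relies on (version strings are illustrative; assumes a reasonably recent `packaging` release):

```python
from packaging import version

v = version.parse("1.23.0rc1")
assert isinstance(v, version.Version)
assert v.pre == ("rc", 1)  # pre-release tag and number

# Bumping the RC number, as the script does when the current version is an RC:
next_rc = "{}.{}.{}rc{}".format(v.major, v.minor, v.micro, v.pre[1] + 1)
assert next_rc == "1.23.0rc2"
```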
setup.py
@@ -131,6 +131,7 @@ setup(
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
        "Programming Language :: Python :: 3.9",
    ],
    scripts=["synctl"] + glob.glob("scripts/*"),
    cmdclass={"test": TestCommand},

@@ -33,6 +33,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.logging import opentracing as opentracing
from synapse.storage.databases.main.registration import TokenLookupResult
from synapse.types import StateMap, UserID
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -184,18 +185,12 @@ class Auth:
        """
        try:
            ip_addr = self.hs.get_ip_from_request(request)
            user_agent = request.requestHeaders.getRawHeaders(
                b"User-Agent", default=[b""]
            )[0].decode("ascii", "surrogateescape")
            user_agent = request.get_user_agent("")

            access_token = self.get_access_token_from_request(request)

            user_id, app_service = await self._get_appservice_user_id(request)
            if user_id:
                request.authenticated_entity = user_id
                opentracing.set_tag("authenticated_entity", user_id)
                opentracing.set_tag("appservice_id", app_service.id)

                if ip_addr and self._track_appservice_user_ips:
                    await self.store.insert_client_ip(
                        user_id=user_id,
@@ -205,31 +200,38 @@ class Auth:
                        device_id="dummy-device",  # stubbed
                    )

                return synapse.types.create_requester(user_id, app_service=app_service)
                requester = synapse.types.create_requester(
                    user_id, app_service=app_service
                )

                request.requester = user_id
                opentracing.set_tag("authenticated_entity", user_id)
                opentracing.set_tag("user_id", user_id)
                opentracing.set_tag("appservice_id", app_service.id)

                return requester

            user_info = await self.get_user_by_access_token(
                access_token, rights, allow_expired=allow_expired
            )
            user = user_info["user"]
            token_id = user_info["token_id"]
            is_guest = user_info["is_guest"]
            shadow_banned = user_info["shadow_banned"]
            token_id = user_info.token_id
            is_guest = user_info.is_guest
            shadow_banned = user_info.shadow_banned

            # Deny the request if the user account has expired.
            if self._account_validity.enabled and not allow_expired:
                user_id = user.to_string()
                if await self.store.is_account_expired(user_id, self.clock.time_msec()):
                if await self.store.is_account_expired(
                    user_info.user_id, self.clock.time_msec()
                ):
                    raise AuthError(
                        403, "User account has expired", errcode=Codes.EXPIRED_ACCOUNT
                    )

            # device_id may not be present if get_user_by_access_token has been
            # stubbed out.
            device_id = user_info.get("device_id")
            device_id = user_info.device_id

            if user and access_token and ip_addr:
            if access_token and ip_addr:
                await self.store.insert_client_ip(
                    user_id=user.to_string(),
                    user_id=user_info.token_owner,
                    access_token=access_token,
                    ip=ip_addr,
                    user_agent=user_agent,
@@ -243,19 +245,23 @@ class Auth:
                    errcode=Codes.GUEST_ACCESS_FORBIDDEN,
                )

            request.authenticated_entity = user.to_string()
            opentracing.set_tag("authenticated_entity", user.to_string())
            if device_id:
                opentracing.set_tag("device_id", device_id)

            return synapse.types.create_requester(
                user,
            requester = synapse.types.create_requester(
                user_info.user_id,
                token_id,
                is_guest,
                shadow_banned,
                device_id,
                app_service=app_service,
                authenticated_entity=user_info.token_owner,
            )

            request.requester = requester
            opentracing.set_tag("authenticated_entity", user_info.token_owner)
            opentracing.set_tag("user_id", user_info.user_id)
            if device_id:
                opentracing.set_tag("device_id", device_id)

            return requester
        except KeyError:
            raise MissingClientTokenError()

@@ -286,7 +292,7 @@ class Auth:

    async def get_user_by_access_token(
        self, token: str, rights: str = "access", allow_expired: bool = False,
    ) -> dict:
    ) -> TokenLookupResult:
        """ Validate access token and get user_id from it

        Args:
@@ -295,13 +301,7 @@ class Auth:
                allow this
            allow_expired: If False, raises an InvalidClientTokenError
                if the token is expired
        Returns:
            dict that includes:
                `user` (UserID)
                `is_guest` (bool)
                `shadow_banned` (bool)
                `token_id` (int|None): access token id. May be None if guest
                `device_id` (str|None): device corresponding to access token

        Raises:
            InvalidClientTokenError if a user by that token exists, but the token is
                expired
@@ -311,9 +311,9 @@ class Auth:

        if rights == "access":
            # first look in the database
            r = await self._look_up_user_by_access_token(token)
            r = await self.store.get_user_by_access_token(token)
            if r:
                valid_until_ms = r["valid_until_ms"]
                valid_until_ms = r.valid_until_ms
                if (
                    not allow_expired
                    and valid_until_ms is not None
@@ -330,7 +330,6 @@ class Auth:
        # otherwise it needs to be a valid macaroon
        try:
            user_id, guest = self._parse_and_validate_macaroon(token, rights)
            user = UserID.from_string(user_id)

            if rights == "access":
                if not guest:
@@ -356,23 +355,17 @@ class Auth:
                    raise InvalidClientTokenError(
                        "Guest access token used for regular user"
                    )
                ret = {
                    "user": user,
                    "is_guest": True,
                    "shadow_banned": False,
                    "token_id": None,

                ret = TokenLookupResult(
                    user_id=user_id,
                    is_guest=True,
                    # all guests get the same device id
                    "device_id": GUEST_DEVICE_ID,
                }
                    device_id=GUEST_DEVICE_ID,
                )
            elif rights == "delete_pusher":
                # We don't store these tokens in the database
                ret = {
                    "user": user,
                    "is_guest": False,
                    "shadow_banned": False,
                    "token_id": None,
                    "device_id": None,
                }

                ret = TokenLookupResult(user_id=user_id, is_guest=False)
            else:
                raise RuntimeError("Unknown rights setting %s", rights)
            return ret
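The hunks above replace the ad-hoc `user_info` dict with a `TokenLookupResult` object. Based only on the attributes used in this diff (`user_id`, `is_guest`, `shadow_banned`, `token_id`, `device_id`, `valid_until_ms`, `token_owner`), a minimal sketch of such a class might look like the following; the defaults and the `token_owner` fallback are assumptions, not the actual definition in `synapse/storage/databases/main/registration.py`:

```python
from typing import Optional

import attr


@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenLookupResult:
    """Sketch of an access-token lookup result; fields inferred from the diff,
    defaults assumed."""

    user_id: str
    is_guest: bool = False
    shadow_banned: bool = False
    token_id: Optional[int] = None
    device_id: Optional[str] = None
    valid_until_ms: Optional[int] = None

    @property
    def token_owner(self) -> str:
        # Assumption: the token owner falls back to the user the token acts as.
        return self.user_id
```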
@@ -481,31 +474,15 @@ class Auth:
            now = self.hs.get_clock().time_msec()
            return now < expiry

    async def _look_up_user_by_access_token(self, token):
        ret = await self.store.get_user_by_access_token(token)
        if not ret:
            return None

        # we use ret.get() below because *lots* of unit tests stub out
        # get_user_by_access_token in a way where it only returns a couple of
        # the fields.
        user_info = {
            "user": UserID.from_string(ret.get("name")),
            "token_id": ret.get("token_id", None),
            "is_guest": False,
            "shadow_banned": ret.get("shadow_banned"),
            "device_id": ret.get("device_id"),
            "valid_until_ms": ret.get("valid_until_ms"),
        }
        return user_info

    def get_appservice_by_req(self, request):
        token = self.get_access_token_from_request(request)
        service = self.store.get_app_service_by_token(token)
        if not service:
            logger.warning("Unrecognised appservice access token.")
            raise InvalidClientTokenError()
        request.authenticated_entity = service.sender
        request.requester = synapse.types.create_requester(
            service.sender, app_service=service
        )
        return service

    async def is_server_admin(self, user: UserID) -> bool:

@@ -52,11 +52,11 @@ class ApplicationService:
        self,
        token,
        hostname,
        id,
        sender,
        url=None,
        namespaces=None,
        hs_token=None,
        sender=None,
        id=None,
        protocols=None,
        rate_limited=True,
        ip_range_whitelist=None,

@@ -26,14 +26,14 @@ class CasConfig(Config):

    def read_config(self, config, **kwargs):
        cas_config = config.get("cas_config", None)
        if cas_config:
            self.cas_enabled = cas_config.get("enabled", True)
        self.cas_enabled = cas_config and cas_config.get("enabled", True)

        if self.cas_enabled:
            self.cas_server_url = cas_config["server_url"]
            self.cas_service_url = cas_config["service_url"]
            self.cas_displayname_attribute = cas_config.get("displayname_attribute")
            self.cas_required_attributes = cas_config.get("required_attributes", {})
            self.cas_required_attributes = cas_config.get("required_attributes") or {}
        else:
            self.cas_enabled = False
            self.cas_server_url = None
            self.cas_service_url = None
            self.cas_displayname_attribute = None
@@ -41,13 +41,35 @@ class CasConfig(Config):

    def generate_config_section(self, config_dir_path, server_name, **kwargs):
        return """
        # Enable CAS for registration and login.
        # Enable Central Authentication Service (CAS) for registration and login.
        #
        #cas_config:
        #   enabled: true
        #   server_url: "https://cas-server.com"
        #   service_url: "https://homeserver.domain.com:8448"
        #   #displayname_attribute: name
        #   #required_attributes:
        #   #    name: value
        cas_config:
          # Uncomment the following to enable authorization against a CAS server.
          # Defaults to false.
          #
          #enabled: true

          # The URL of the CAS authorization endpoint.
          #
          #server_url: "https://cas-server.com"

          # The public URL of the homeserver.
          #
          #service_url: "https://homeserver.domain.com:8448"

          # The attribute of the CAS response to use as the display name.
          #
          # If unset, no displayname will be set.
          #
          #displayname_attribute: name

          # It is possible to configure Synapse to only allow logins if CAS attributes
          # match particular values. All of the keys in the mapping below must exist
          # and the values must match the given value. Alternately if the given value
          # is None then any value is allowed (the attribute just must exist).
          # All of the listed attributes must match for the login to be permitted.
          #
          #required_attributes:
          #  userGroup: "staff"
          #  department: None
        """
@@ -63,7 +63,7 @@ class JWTConfig(Config):
        # and issued at ("iat") claims are validated if present.
        #
        # Note that this is a non-standard login type and client support is
        # expected to be non-existant.
        # expected to be non-existent.
        #
        # See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
        #

@@ -23,7 +23,6 @@ from string import Template
import yaml

from twisted.logger import (
    ILogObserver,
    LogBeginner,
    STDLibLogObserver,
    eventAsText,
@@ -32,11 +31,9 @@ from twisted.logger import (

import synapse
from synapse.app import _base as appbase
from synapse.logging._structured import (
    reload_structured_logging,
    setup_structured_logging,
)
from synapse.logging._structured import setup_structured_logging
from synapse.logging.context import LoggingContextFilter
from synapse.logging.filter import MetadataFilter
from synapse.util.versionstring import get_version_string

from ._base import Config, ConfigError
@@ -48,7 +45,11 @@ DEFAULT_LOG_CONFIG = Template(
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# Synapse also supports structured logging for machine readable logs which can
# be ingested by ELK stacks. See [2] for details.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
# [2]: https://github.com/matrix-org/synapse/blob/master/docs/structured_logging.md

version: 1

@@ -105,7 +106,7 @@ root:
    # then write them to a file.
    #
    # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
    # also need to update the configuation for the `twisted` logger above, in
    # also need to update the configuration for the `twisted` logger above, in
    # this case.)
    #
    handlers: [buffer]
@@ -176,11 +177,11 @@ class LoggingConfig(Config):
            log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))


def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) -> None:
    """
    Set up Python stdlib logging.
    Set up Python standard library logging.
    """
    if log_config is None:
    if log_config_path is None:
        log_format = (
            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
            " - %(message)s"
@@ -196,7 +197,8 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    else:
        logging.config.dictConfig(log_config)
        # Load the logging configuration.
        _load_logging_config(log_config_path)

    # We add a log record factory that runs all messages through the
    # LoggingContextFilter so that we get the context *at the time we log*
@@ -204,12 +206,14 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
    # filter options, but care must be taken when using e.g. MemoryHandler to buffer
    # writes.

    log_filter = LoggingContextFilter(request="")
    log_context_filter = LoggingContextFilter(request="")
    log_metadata_filter = MetadataFilter({"server_name": config.server_name})
    old_factory = logging.getLogRecordFactory()

    def factory(*args, **kwargs):
        record = old_factory(*args, **kwargs)
        log_filter.filter(record)
        log_context_filter.filter(record)
        log_metadata_filter.filter(record)
        return record

    logging.setLogRecordFactory(factory)
@@ -255,21 +259,40 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
    if not config.no_redirect_stdio:
        print("Redirected stdout/stderr to logs")

    return observer


def _reload_stdlib_logging(*args, log_config=None):
    logger = logging.getLogger("")
def _load_logging_config(log_config_path: str) -> None:
    """
    Configure logging from a log config path.
    """
    with open(log_config_path, "rb") as f:
        log_config = yaml.safe_load(f.read())

    if not log_config:
        logger.warning("Reloaded a blank config?")
        logging.warning("Loaded a blank logging config?")

    # If the old structured logging configuration is being used, convert it to
    # the new style configuration.
    if "structured" in log_config and log_config.get("structured"):
        log_config = setup_structured_logging(log_config)

    logging.config.dictConfig(log_config)


def _reload_logging_config(log_config_path):
    """
    Reload the log configuration from the file and apply it.
    """
    # If no log config path was given, it cannot be reloaded.
    if log_config_path is None:
        return

    _load_logging_config(log_config_path)
    logging.info("Reloaded log config from %s due to SIGHUP", log_config_path)


def setup_logging(
    hs, config, use_worker_options=False, logBeginner: LogBeginner = globalLogBeginner
) -> ILogObserver:
) -> None:
    """
    Set up the logging subsystem.

@@ -282,41 +305,18 @@ def setup_logging(

        logBeginner: The Twisted logBeginner to use.

    Returns:
        The "root" Twisted Logger observer, suitable for sending logs to from a
        Logger instance.
    """
    log_config = config.worker_log_config if use_worker_options else config.log_config
    log_config_path = (
        config.worker_log_config if use_worker_options else config.log_config
    )

    def read_config(*args, callback=None):
        if log_config is None:
            return None
    # Perform one-time logging configuration.
    _setup_stdlib_logging(config, log_config_path, logBeginner=logBeginner)
    # Add a SIGHUP handler to reload the logging configuration, if one is available.
    appbase.register_sighup(_reload_logging_config, log_config_path)

        with open(log_config, "rb") as f:
            log_config_body = yaml.safe_load(f.read())

        if callback:
            callback(log_config=log_config_body)
            logging.info("Reloaded log config from %s due to SIGHUP", log_config)

        return log_config_body

    log_config_body = read_config()

    if log_config_body and log_config_body.get("structured") is True:
        logger = setup_structured_logging(
            hs, config, log_config_body, logBeginner=logBeginner
        )
        appbase.register_sighup(read_config, callback=reload_structured_logging)
    else:
        logger = _setup_stdlib_logging(config, log_config_body, logBeginner=logBeginner)
        appbase.register_sighup(read_config, callback=_reload_stdlib_logging)

    # make sure that the first thing we log is a thing we can grep backwards
    # for
    # Log immediately so we can grep backwards.
    logging.warning("***** STARTING SERVER *****")
    logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
    logging.info("Server hostname: %s", config.server_name)
    logging.info("Instance name: %s", hs.get_instance_name())

    return logger
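The new flow registers `_reload_logging_config` as a SIGHUP handler via `appbase.register_sighup`. A generic sketch of the same idea using only the standard library (this is not Synapse's `register_sighup`, which also has to cooperate with the Twisted reactor):

```python
import logging
import logging.config
import signal

import yaml


def install_sighup_log_reload(log_config_path: str) -> None:
    """Re-apply the YAML logging config whenever SIGHUP arrives (POSIX only)."""

    def handler(signum, frame):
        with open(log_config_path, "rb") as f:
            logging.config.dictConfig(yaml.safe_load(f.read()))
        logging.info("Reloaded log config from %s due to SIGHUP", log_config_path)

    signal.signal(signal.SIGHUP, handler)
```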
@@ -87,11 +87,10 @@ class OIDCConfig(Config):

    def generate_config_section(self, config_dir_path, server_name, **kwargs):
        return """\
        # OpenID Connect integration. The following settings can be used to make Synapse
        # use an OpenID Connect Provider for authentication, instead of its internal
        # password database.
        # Enable OpenID Connect (OIDC) / OAuth 2.0 for registration and login.
        #
        # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
        # See https://github.com/matrix-org/synapse/blob/master/docs/openid.md
        # for some example configurations.
        #
        oidc_config:
          # Uncomment the following to enable authorization against an OpenID Connect

@@ -143,7 +143,7 @@ class RegistrationConfig(Config):
            RoomCreationPreset.TRUSTED_PRIVATE_CHAT,
        }

        # Pull the creater/inviter from the configuration, this gets used to
        # Pull the creator/inviter from the configuration, this gets used to
        # send invites for invite-only rooms.
        mxid_localpart = config.get("auto_join_mxid_localpart")
        self.auto_join_user_id = None

@@ -99,7 +99,7 @@ class RoomDirectoryConfig(Config):
        #
        # Options for the rules include:
        #
        #  user_id: Matches agaisnt the creator of the alias
        #  user_id: Matches against the creator of the alias
        #  room_id: Matches against the room ID being published
        #  alias: Matches against any current local or canonical aliases
        #         associated with the room

@@ -216,10 +216,8 @@ class SAML2Config(Config):
        return """\
        ## Single sign-on integration ##

        # Enable SAML2 for registration and login. Uses pysaml2.
        #
        # At least one of `sp_config` or `config_path` must be set in this section to
        # enable SAML login.
        # The following settings can be used to make Synapse use a single sign-on
        # provider for authentication, instead of its internal password database.
        #
        # You will probably also want to set the following options to `false` to
        # disable the regular login/registration flows:
@@ -228,6 +226,11 @@ class SAML2Config(Config):
        #
        # You will also want to investigate the settings under the "sso" configuration
        # section below.

        # Enable SAML2 for registration and login. Uses pysaml2.
        #
        # At least one of `sp_config` or `config_path` must be set in this section to
        # enable SAML login.
        #
        # Once SAML support is enabled, a metadata file will be exposed at
        # https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
@@ -243,40 +246,42 @@ class SAML2Config(Config):
        #  so it is not normally necessary to specify them unless you need to
        #  override them.
        #
        #sp_config:
        #  # point this to the IdP's metadata. You can use either a local file or
        #  # (preferably) a URL.
        #  metadata:
        #    #local: ["saml2/idp.xml"]
        #    remote:
        #      - url: https://our_idp/metadata.xml
        #
        #  # By default, the user has to go to our login page first. If you'd like
        #  # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
        #  # 'service.sp' section:
        #  #
        #  #service:
        #  #  sp:
        #  #    allow_unsolicited: true
        #
        #  # The examples below are just used to generate our metadata xml, and you
        #  # may well not need them, depending on your setup. Alternatively you
        #  # may need a whole lot more detail - see the pysaml2 docs!
        #
        #  description: ["My awesome SP", "en"]
        #  name: ["Test SP", "en"]
        #
        #  organization:
        #    name: Example com
        #    display_name:
        #      - ["Example co", "en"]
        #    url: "http://example.com"
        #
        #  contact_person:
        #    - given_name: Bob
        #      sur_name: "the Sysadmin"
        #      email_address": ["admin@example.com"]
        #      contact_type": technical
        sp_config:
          # Point this to the IdP's metadata. You must provide either a local
          # file via the `local` attribute or (preferably) a URL via the
          # `remote` attribute.
          #
          #metadata:
          #  local: ["saml2/idp.xml"]
          #  remote:
          #    - url: https://our_idp/metadata.xml

          # By default, the user has to go to our login page first. If you'd like
          # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
          # 'service.sp' section:
          #
          #service:
          #  sp:
          #    allow_unsolicited: true

          # The examples below are just used to generate our metadata xml, and you
          # may well not need them, depending on your setup. Alternatively you
          # may need a whole lot more detail - see the pysaml2 docs!

          #description: ["My awesome SP", "en"]
          #name: ["Test SP", "en"]

          #organization:
          #  name: Example com
          #  display_name:
          #    - ["Example co", "en"]
          #  url: "http://example.com"

          #contact_person:
          #  - given_name: Bob
          #    sur_name: "the Sysadmin"
          #    email_address": ["admin@example.com"]
          #    contact_type": technical

        # Instead of putting the config inline as above, you can specify a
        # separate pysaml2 configuration file:

@@ -67,7 +67,7 @@ class TracerConfig(Config):
        # This is a list of regexes which are matched against the server_name of the
        # homeserver.
        #
        # By defult, it is empty, so no servers are matched.
        # By default, it is empty, so no servers are matched.
        #
        #homeserver_whitelist:
        #  - ".*"

@@ -149,7 +149,7 @@ class FederationPolicyForHTTPS:
        return SSLClientConnectionCreator(host, ssl_context, should_verify)

    def creatorForNetloc(self, hostname, port):
        """Implements the IPolicyForHTTPS interace so that this can be passed
        """Implements the IPolicyForHTTPS interface so that this can be passed
        directly to agents.
        """
        return self.get_options(hostname)

@@ -59,7 +59,7 @@ class DictProperty:
            #
            # To exclude the KeyError from the traceback, we explicitly
            # 'raise from e1.__context__' (which is better than 'raise from None',
            # becuase that would omit any *earlier* exceptions).
            # because that would omit any *earlier* exceptions).
            #
            raise AttributeError(
                "'%s' has no '%s' property" % (type(instance), self.key)
@@ -368,7 +368,7 @@ class FrozenEvent(EventBase):
        return self.__repr__()

    def __repr__(self):
        return "<FrozenEvent event_id='%s', type='%s', state_key='%s'>" % (
        return "<FrozenEvent event_id=%r, type=%r, state_key=%r>" % (
            self.get("event_id", None),
            self.get("type", None),
            self.get("state_key", None),
@@ -451,7 +451,7 @@ class FrozenEventV2(EventBase):
        return self.__repr__()

    def __repr__(self):
        return "<%s event_id='%s', type='%s', state_key='%s'>" % (
        return "<%s event_id=%r, type=%r, state_key=%r>" % (
            self.__class__.__name__,
            self.event_id,
            self.get("type", None),
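The `__repr__` changes above swap manual quoting (`'%s'`) for `%r`, which renders `None` and embedded quotes faithfully. A quick illustration:

```python
# With '%s', a missing value renders misleadingly as a quoted string:
"<FrozenEvent event_id='%s'>" % (None,)  # "<FrozenEvent event_id='None'>"

# With %r, the repr of the value is shown as-is:
"<FrozenEvent event_id=%r>" % (None,)    # "<FrozenEvent event_id=None>"
"<FrozenEvent event_id=%r>" % ("$abc",)  # "<FrozenEvent event_id='$abc'>"
```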

@@ -180,7 +180,7 @@ def only_fields(dictionary, fields):
    in 'fields'.

    If there are no event fields specified then all fields are included.
    The entries may include '.' charaters to indicate sub-fields.
    The entries may include '.' characters to indicate sub-fields.
    So ['content.body'] will include the 'body' field of the 'content' object.
    A literal '.' character in a field name may be escaped using a '\'.
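Two illustrative calls matching the field-path syntax described in this docstring (the event dictionary and expected result here are made up for illustration):

```python
event_dict = {"content": {"body": "hi", "msgtype": "m.text"}, "type": "m.room.message"}

# Keep only the 'body' sub-field of 'content':
only_fields(event_dict, ["content.body"])  # expected: {"content": {"body": "hi"}}

# A literal '.' inside a field name is escaped with a backslash:
only_fields(event_dict, [r"content.m\.relates_to"])
```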

@@ -154,7 +154,7 @@ class Authenticator:
            )

        logger.debug("Request from %s", origin)
        request.authenticated_entity = origin
        request.requester = origin

        # If we get a valid signed request from the other side, it's probably
        # alive

@@ -22,7 +22,7 @@ attestations have a validity period so need to be periodically renewed.
If a user leaves (or gets kicked out of) a group, either side can still use
their attestation to "prove" their membership, until the attestation expires.
Therefore attestations shouldn't be relied on to prove membership in important
cases, but can for less important situtations, e.g. showing a users membership
cases, but can for less important situations, e.g. showing a users membership
of groups on their profile, showing flairs, etc.

An attestation is a signed blob of json that looks like:

@@ -113,7 +113,7 @@ class GroupsServerWorkerHandler:
            entry = await self.room_list_handler.generate_room_entry(
                room_id, len(joined_users), with_alias=False, allow_private=True
            )
            entry = dict(entry)  # so we don't change whats cached
            entry = dict(entry)  # so we don't change what's cached
            entry.pop("room_id", None)

            room_entry["profile"] = entry
@@ -550,7 +550,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
                group_id, room_id, is_public=is_public
            )
        else:
            raise SynapseError(400, "Uknown config option")
            raise SynapseError(400, "Unknown config option")

        return {}


@@ -18,19 +18,22 @@ import email.utils
import logging
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from typing import List
from typing import TYPE_CHECKING, List

from synapse.api.errors import StoreError
from synapse.api.errors import StoreError, SynapseError
from synapse.logging.context import make_deferred_yieldable
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.types import UserID
from synapse.util import stringutils

if TYPE_CHECKING:
    from synapse.app.homeserver import HomeServer

logger = logging.getLogger(__name__)


class AccountValidityHandler:
    def __init__(self, hs):
    def __init__(self, hs: "HomeServer"):
        self.hs = hs
        self.config = hs.config
        self.store = self.hs.get_datastore()
@@ -67,7 +70,7 @@ class AccountValidityHandler:
            self.clock.looping_call(self._send_renewal_emails, 30 * 60 * 1000)

    @wrap_as_background_process("send_renewals")
    async def _send_renewal_emails(self):
    async def _send_renewal_emails(self) -> None:
        """Gets the list of users whose account is expiring in the amount of time
        configured in the ``renew_at`` parameter from the ``account_validity``
        configuration, and sends renewal emails to all of these users as long as they
@@ -81,11 +84,25 @@ class AccountValidityHandler:
                    user_id=user["user_id"], expiration_ts=user["expiration_ts_ms"]
                )

    async def send_renewal_email_to_user(self, user_id: str):
    async def send_renewal_email_to_user(self, user_id: str) -> None:
        """
        Send a renewal email for a specific user.

        Args:
            user_id: The user ID to send a renewal email for.

        Raises:
            SynapseError if the user is not set to renew.
        """
        expiration_ts = await self.store.get_expiration_ts_for_user(user_id)

        # If this user isn't set to be expired, raise an error.
        if expiration_ts is None:
            raise SynapseError(400, "User has no expiration time: %s" % (user_id,))

        await self._send_renewal_email(user_id, expiration_ts)

    async def _send_renewal_email(self, user_id: str, expiration_ts: int):
    async def _send_renewal_email(self, user_id: str, expiration_ts: int) -> None:
        """Sends out a renewal email to every email address attached to the given user
        with a unique link allowing them to renew their account.


@@ -88,7 +88,7 @@ class AdminHandler(BaseHandler):

        # We only try and fetch events for rooms the user has been in. If
        # they've been e.g. invited to a room without joining then we handle
        # those seperately.
        # those separately.
        rooms_user_has_been_in = await self.store.get_rooms_user_has_been_in(user_id)

        for index, room in enumerate(rooms):
@@ -226,7 +226,7 @@ class ExfiltrationWriter:
        """

    def finished(self):
        """Called when all data has succesfully been exported and written.
        """Called when all data has successfully been exported and written.

        This function's return value is passed to the caller of
        `export_user_data`.


@@ -12,9 +12,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
from typing import Dict, List, Optional
from typing import TYPE_CHECKING, Dict, List, Optional, Union

from prometheus_client import Counter

@@ -30,17 +29,24 @@ from synapse.metrics import (
    event_processing_loop_counter,
    event_processing_loop_room_count,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import Collection, JsonDict, RoomStreamToken, UserID
from synapse.metrics.background_process_metrics import (
    run_as_background_process,
    wrap_as_background_process,
)
from synapse.storage.databases.main.directory import RoomAliasMapping
from synapse.types import Collection, JsonDict, RoomAlias, RoomStreamToken, UserID
from synapse.util.metrics import Measure

if TYPE_CHECKING:
    from synapse.app.homeserver import HomeServer

logger = logging.getLogger(__name__)

events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")


class ApplicationServicesHandler:
    def __init__(self, hs):
    def __init__(self, hs: "HomeServer"):
        self.store = hs.get_datastore()
        self.is_mine_id = hs.is_mine_id
        self.appservice_api = hs.get_application_service_api()
@@ -53,7 +59,7 @@ class ApplicationServicesHandler:
        self.current_max = 0
        self.is_processing = False

    async def notify_interested_services(self, max_token: RoomStreamToken):
    def notify_interested_services(self, max_token: RoomStreamToken):
        """Notifies (pushes) all application services interested in this event.

        Pushing is done asynchronously, so this method won't block for any
@@ -72,6 +78,12 @@ class ApplicationServicesHandler:
        if self.is_processing:
            return

        # We only start a new background process if necessary rather than
        # optimistically (to cut down on overhead).
        self._notify_interested_services(max_token)

    @wrap_as_background_process("notify_interested_services")
    async def _notify_interested_services(self, max_token: RoomStreamToken):
        with Measure(self.clock, "notify_interested_services"):
            self.is_processing = True
            try:
@@ -166,8 +178,11 @@ class ApplicationServicesHandler:
            finally:
                self.is_processing = False

    async def notify_interested_services_ephemeral(
        self, stream_key: str, new_token: Optional[int], users: Collection[UserID] = [],
    def notify_interested_services_ephemeral(
        self,
        stream_key: str,
        new_token: Optional[int],
        users: Collection[Union[str, UserID]] = [],
    ):
        """This is called by the notifier in the background
        when an ephemeral event is handled by the homeserver.
@@ -183,13 +198,34 @@ class ApplicationServicesHandler:
            new_token: The latest stream token
            users: The user(s) involved with the event.
        """
        if not self.notify_appservices:
            return

        if stream_key not in ("typing_key", "receipt_key", "presence_key"):
            return

        services = [
            service
            for service in self.store.get_app_services()
            if service.supports_ephemeral
        ]
        if not services or not self.notify_appservices:
        if not services:
            return

        # We only start a new background process if necessary rather than
        # optimistically (to cut down on overhead).
        self._notify_interested_services_ephemeral(
            services, stream_key, new_token, users
        )

    @wrap_as_background_process("notify_interested_services_ephemeral")
    async def _notify_interested_services_ephemeral(
        self,
        services: List[ApplicationService],
        stream_key: str,
        new_token: Optional[int],
        users: Collection[Union[str, UserID]],
    ):
        logger.info("Checking interested services for %s" % (stream_key))
        with Measure(self.clock, "notify_interested_services_ephemeral"):
            for service in services:
@@ -214,7 +250,9 @@ class ApplicationServicesHandler:
                        service, "presence", new_token
                    )

    async def _handle_typing(self, service: ApplicationService, new_token: int):
    async def _handle_typing(
        self, service: ApplicationService, new_token: int
    ) -> List[JsonDict]:
        typing_source = self.event_sources.sources["typing"]
        # Get the typing events from just before current
        typing, _ = await typing_source.get_new_events_as(
@@ -226,7 +264,7 @@ class ApplicationServicesHandler:
        )
        return typing

    async def _handle_receipts(self, service: ApplicationService):
    async def _handle_receipts(self, service: ApplicationService) -> List[JsonDict]:
        from_key = await self.store.get_type_stream_id_for_appservice(
            service, "read_receipt"
        )
@@ -237,7 +275,7 @@ class ApplicationServicesHandler:
        return receipts

    async def _handle_presence(
        self, service: ApplicationService, users: Collection[UserID]
        self, service: ApplicationService, users: Collection[Union[str, UserID]]
    ) -> List[JsonDict]:
        events = []  # type: List[JsonDict]
        presence_source = self.event_sources.sources["presence"]
@@ -245,6 +283,9 @@ class ApplicationServicesHandler:
            service, "presence"
        )
        for user in users:
            if isinstance(user, str):
                user = UserID.from_string(user)

            interested = await service.is_interested_in_presence(user, self.store)
            if not interested:
                continue
@@ -265,11 +306,11 @@ class ApplicationServicesHandler:

        return events

    async def query_user_exists(self, user_id):
    async def query_user_exists(self, user_id: str) -> bool:
        """Check if any application service knows this user_id exists.

        Args:
            user_id(str): The user to query if they exist on any AS.
            user_id: The user to query if they exist on any AS.
        Returns:
            True if this user exists on at least one application service.
        """
@@ -280,11 +321,13 @@ class ApplicationServicesHandler:
                return True
        return False

    async def query_room_alias_exists(self, room_alias):
    async def query_room_alias_exists(
        self, room_alias: RoomAlias
    ) -> Optional[RoomAliasMapping]:
        """Check if an application service knows this room alias exists.

        Args:
            room_alias(RoomAlias): The room alias to query.
            room_alias: The room alias to query.
        Returns:
            namedtuple: with keys "room_id" and "servers" or None if no
            association can be found.
@@ -300,10 +343,13 @@ class ApplicationServicesHandler:
                )
                if is_known_alias:
                    # the alias exists now so don't query more ASes.
                    result = await self.store.get_association_from_room_alias(room_alias)
                    return result
                    return await self.store.get_association_from_room_alias(room_alias)

    async def query_3pe(self, kind, protocol, fields):
        return None

    async def query_3pe(
        self, kind: str, protocol: str, fields: Dict[bytes, List[bytes]]
    ) -> List[JsonDict]:
        services = self._get_services_for_3pn(protocol)

        results = await make_deferred_yieldable(
@@ -325,7 +371,9 @@ class ApplicationServicesHandler:

        return ret

    async def get_3pe_protocols(self, only_protocol=None):
    async def get_3pe_protocols(
        self, only_protocol: Optional[str] = None
    ) -> Dict[str, JsonDict]:
        services = self.store.get_app_services()
        protocols = {}  # type: Dict[str, List[JsonDict]]

@@ -343,7 +391,7 @@ class ApplicationServicesHandler:
                    if info is not None:
                        protocols[p].append(info)

        def _merge_instances(infos):
        def _merge_instances(infos: List[JsonDict]) -> JsonDict:
            if not infos:
                return {}

@@ -358,19 +406,17 @@ class ApplicationServicesHandler:

            return combined

        for p in protocols.keys():
            protocols[p] = _merge_instances(protocols[p])
        return {p: _merge_instances(protocols[p]) for p in protocols.keys()}

        return protocols

    async def _get_services_for_event(self, event):
    async def _get_services_for_event(
        self, event: EventBase
    ) -> List[ApplicationService]:
        """Retrieve a list of application services interested in this event.

        Args:
            event(Event): The event to check. Can be None if alias_list is not.
            event: The event to check. Can be None if alias_list is not.
        Returns:
            list<ApplicationService>: A list of services interested in this
            event based on the service regex.
            A list of services interested in this event based on the service regex.
        """
        services = self.store.get_app_services()

@@ -384,17 +430,15 @@ class ApplicationServicesHandler:

        return interested_list

    def _get_services_for_user(self, user_id):
    def _get_services_for_user(self, user_id: str) -> List[ApplicationService]:
        services = self.store.get_app_services()
        interested_list = [s for s in services if (s.is_interested_in_user(user_id))]
        return interested_list
        return [s for s in services if (s.is_interested_in_user(user_id))]

    def _get_services_for_3pn(self, protocol):
    def _get_services_for_3pn(self, protocol: str) -> List[ApplicationService]:
        services = self.store.get_app_services()
        interested_list = [s for s in services if s.is_interested_in_protocol(protocol)]
        return interested_list
        return [s for s in services if s.is_interested_in_protocol(protocol)]

    async def _is_unknown_user(self, user_id):
    async def _is_unknown_user(self, user_id: str) -> bool:
        if not self.is_mine_id(user_id):
            # we don't know if they are unknown or not since it isn't one of our
            # users. We can't poke ASes.
@@ -409,9 +453,8 @@ class ApplicationServicesHandler:
        service_list = [s for s in services if s.sender == user_id]
        return len(service_list) == 0

    async def _check_user_exists(self, user_id):
    async def _check_user_exists(self, user_id: str) -> bool:
        unknown_user = await self._is_unknown_user(user_id)
        if unknown_user:
            exists = await self.query_user_exists(user_id)
            return exists
            return await self.query_user_exists(user_id)
        return True
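Both notify paths above follow the same pattern: a synchronous entry point that bails out early if a background process is already draining the stream, and a `wrap_as_background_process`-decorated coroutine that does the actual work. A condensed sketch of the pattern (not Synapse's actual class; `wrap_as_background_process` is the real decorator imported in the diff above):

```python
from synapse.metrics.background_process_metrics import wrap_as_background_process


class StreamNotifier:
    """Condensed illustration of the notify/is_processing pattern above."""

    def __init__(self):
        self.is_processing = False

    def notify(self, max_token):
        if self.is_processing:
            # A background process is already draining the stream; it will
            # pick up everything to max_token, so don't start another one.
            return
        # Only start a new background process if necessary rather than
        # optimistically (to cut down on overhead).
        self._notify(max_token)

    @wrap_as_background_process("notify")
    async def _notify(self, max_token):
        self.is_processing = True
        try:
            ...  # handle everything up to max_token
        finally:
            self.is_processing = False
```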

@@ -18,10 +18,20 @@ import logging
import time
import unicodedata
import urllib.parse
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
from typing import (
    TYPE_CHECKING,
    Any,
    Callable,
    Dict,
    Iterable,
    List,
    Optional,
    Tuple,
    Union,
)

import attr
import bcrypt  # type: ignore[import]
import bcrypt
import pymacaroons

from synapse.api.constants import LoginType
@@ -49,6 +59,9 @@ from synapse.util.threepids import canonicalise_email

from ._base import BaseHandler

if TYPE_CHECKING:
    from synapse.app.homeserver import HomeServer

logger = logging.getLogger(__name__)


@@ -149,11 +162,7 @@ class SsoLoginExtraAttributes:
class AuthHandler(BaseHandler):
    SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000

    def __init__(self, hs):
        """
        Args:
            hs (synapse.server.HomeServer):
        """
    def __init__(self, hs: "HomeServer"):
        super().__init__(hs)

        self.checkers = {}  # type: Dict[str, UserInteractiveAuthChecker]
@@ -470,9 +479,7 @@ class AuthHandler(BaseHandler):
        # authentication flow.
        await self.store.set_ui_auth_clientdict(sid, clientdict)

        user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
            0
        ].decode("ascii", "surrogateescape")
        user_agent = request.get_user_agent("")

        await self.store.add_user_agent_ip_to_ui_auth_session(
            session.session_id, user_agent, clientip
@@ -692,7 +699,7 @@ class AuthHandler(BaseHandler):
        Creates a new access token for the user with the given user ID.

        The user is assumed to have been authenticated by some other
        machanism (e.g. CAS), and the user_id converted to the canonical case.
        mechanism (e.g. CAS), and the user_id converted to the canonical case.

        The device will be recorded in the table if it is not there already.

@@ -984,17 +991,17 @@ class AuthHandler(BaseHandler):
                # This might return an awaitable, if it does block the log out
                # until it completes.
                result = provider.on_logged_out(
                    user_id=str(user_info["user"]),
                    device_id=user_info["device_id"],
                    user_id=user_info.user_id,
                    device_id=user_info.device_id,
                    access_token=access_token,
                )
                if inspect.isawaitable(result):
                    await result

        # delete pushers associated with this access token
        if user_info["token_id"] is not None:
        if user_info.token_id is not None:
            await self.hs.get_pusherpool().remove_pushers_by_access_token(
                str(user_info["user"]), (user_info["token_id"],)
                user_info.user_id, (user_info.token_id,)
            )

    async def delete_access_tokens_for_user(

@@ -212,9 +212,7 @@ class CasHandler:
        else:
            if not registered_user_id:
                # Pull out the user-agent and IP from the request.
                user_agent = request.requestHeaders.getRawHeaders(
                    b"User-Agent", default=[b""]
                )[0].decode("ascii", "surrogateescape")
                user_agent = request.get_user_agent("")
                ip_address = self.hs.get_ip_from_request(request)

                registered_user_id = await self._registration_handler.register_user(

@@ -129,6 +129,11 @@ class E2eKeysHandler:
            if user_id in local_query:
                results[user_id] = keys

        # Get cached cross-signing keys
        cross_signing_keys = await self.get_cross_signing_keys_from_cache(
            device_keys_query, from_user_id
        )

        # Now attempt to get any remote devices from our local cache.
        remote_queries_not_in_cache = {}
        if remote_queries:
@@ -155,16 +160,28 @@ class E2eKeysHandler:
                        unsigned["device_display_name"] = device_display_name
                    user_devices[device_id] = result

            # check for missing cross-signing keys.
            for user_id in remote_queries.keys():
                cached_cross_master = user_id in cross_signing_keys["master_keys"]
                cached_cross_selfsigning = (
                    user_id in cross_signing_keys["self_signing_keys"]
                )

                # check if we are missing only one of cross-signing master or
                # self-signing key, but the other one is cached.
                # as we need both, this will issue a federation request.
                # if we don't have any of the keys, either the user doesn't have
                # cross-signing set up, or the cached device list
                # is not (yet) updated.
                if cached_cross_master ^ cached_cross_selfsigning:
                    user_ids_not_in_cache.add(user_id)

            # add those users to the list to fetch over federation.
            for user_id in user_ids_not_in_cache:
                domain = get_domain_from_id(user_id)
                r = remote_queries_not_in_cache.setdefault(domain, {})
                r[user_id] = remote_queries[user_id]

        # Get cached cross-signing keys
        cross_signing_keys = await self.get_cross_signing_keys_from_cache(
            device_keys_query, from_user_id
        )

        # Now fetch any devices that we don't have in our cache
        @trace
        async def do_remote_query(destination):
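The `^` check above triggers a federation fetch only when exactly one of the two cross-signing keys is cached; if both or neither are cached there is nothing useful to fetch. Enumerated in isolation:

```python
# cached_cross_master ^ cached_cross_selfsigning, over all cases:
for master, selfsigning in [(True, True), (True, False), (False, True), (False, False)]:
    needs_federation_fetch = master ^ selfsigning
    # (True, True)   -> False: both cached, nothing to do
    # (True, False)  -> True:  self-signing key missing, refetch
    # (False, True)  -> True:  master key missing, refetch
    # (False, False) -> False: user may simply not have cross-signing set up
```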
|
||||
|
||||
@@ -112,7 +112,7 @@ class FederationHandler(BaseHandler):
|
||||
"""Handles events that originated from federation.
|
||||
Responsible for:
|
||||
a) handling received Pdus before handing them on as Events to the rest
|
||||
of the homeserver (including auth and state conflict resoultion)
|
||||
of the homeserver (including auth and state conflict resolutions)
|
||||
b) converting events that were produced by local clients that may need
|
||||
to be sent to remote homeservers.
|
||||
c) doing the necessary dances to invite remote users and join remote
|
||||
@@ -477,7 +477,7 @@ class FederationHandler(BaseHandler):
|
||||
# ----
|
||||
#
|
||||
# Update richvdh 2018/09/18: There are a number of problems with timing this
|
||||
# request out agressively on the client side:
|
||||
# request out aggressively on the client side:
|
||||
#
|
||||
# - it plays badly with the server-side rate-limiter, which starts tarpitting you
|
||||
# if you send too many requests at once, so you end up with the server carefully
|
||||
@@ -495,13 +495,13 @@ class FederationHandler(BaseHandler):
|
||||
# we'll end up back here for the *next* PDU in the list, which exacerbates the
|
||||
# problem.
|
||||
#
|
||||
# - the agressive 10s timeout was introduced to deal with incoming federation
|
||||
# - the aggressive 10s timeout was introduced to deal with incoming federation
|
||||
# requests taking 8 hours to process. It's not entirely clear why that was going
|
||||
# on; certainly there were other issues causing traffic storms which are now
|
||||
# resolved, and I think in any case we may be more sensible about our locking
|
||||
# now. We're *certainly* more sensible about our logging.
|
||||
#
|
||||
# All that said: Let's try increasing the timout to 60s and see what happens.
|
||||
# All that said: Let's try increasing the timeout to 60s and see what happens.
|
||||
|
||||
try:
|
||||
missing_events = await self.federation_client.get_missing_events(
|
||||
@@ -1120,7 +1120,7 @@ class FederationHandler(BaseHandler):
|
||||
logger.info(str(e))
|
||||
continue
|
||||
except RequestSendFailed as e:
|
||||
logger.info("Falied to get backfill from %s because %s", dom, e)
|
||||
logger.info("Failed to get backfill from %s because %s", dom, e)
|
||||
continue
|
||||
except FederationDeniedError as e:
|
||||
logger.info(e)
|
||||
@@ -1545,7 +1545,7 @@ class FederationHandler(BaseHandler):
|
||||
#
|
||||
# The reasons we have the destination server rather than the origin
|
||||
# server send it are slightly mysterious: the origin server should have
|
||||
# all the neccessary state once it gets the response to the send_join,
|
||||
# all the necessary state once it gets the response to the send_join,
|
||||
# so it could send the event itself if it wanted to. It may be that
|
||||
# doing it this way reduces failure modes, or avoids certain attacks
|
||||
# where a new server selectively tells a subset of the federation that
|
||||
@@ -1649,7 +1649,7 @@ class FederationHandler(BaseHandler):
|
||||
event.internal_metadata.outlier = True
|
||||
event.internal_metadata.out_of_band_membership = True
|
||||
|
||||
# Try the host that we succesfully called /make_leave/ on first for
|
||||
# Try the host that we successfully called /make_leave/ on first for
|
||||
# the /send_leave/ request.
|
||||
host_list = list(target_hosts)
|
||||
try:
|
||||
|
||||
@@ -17,7 +17,7 @@
|
||||
import logging
|
||||
|
||||
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
|
||||
from synapse.types import get_domain_from_id
|
||||
from synapse.types import GroupID, get_domain_from_id
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -28,6 +28,9 @@ def _create_rerouter(func_name):
|
||||
"""
|
||||
|
||||
async def f(self, group_id, *args, **kwargs):
|
||||
if not GroupID.is_valid(group_id):
|
||||
raise SynapseError(400, "%s was not legal group ID" % (group_id,))
|
||||
|
||||
if self.is_mine_id(group_id):
|
||||
return await getattr(self.groups_server_handler, func_name)(
|
||||
group_id, *args, **kwargs
|
||||
@@ -346,7 +349,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
|
||||
server_name=get_domain_from_id(group_id),
|
||||
)
|
||||
|
||||
# TODO: Check that the group is public and we're being added publically
|
||||
# TODO: Check that the group is public and we're being added publicly
|
||||
is_publicised = content.get("publicise", False)
|
||||
|
||||
token = await self.store.register_user_group_membership(
|
||||
@@ -391,7 +394,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
|
||||
server_name=get_domain_from_id(group_id),
|
||||
)
|
||||
|
||||
# TODO: Check that the group is public and we're being added publically
|
||||
# TODO: Check that the group is public and we're being added publicly
|
||||
is_publicised = content.get("publicise", False)
|
||||
|
||||
token = await self.store.register_user_group_membership(
|
||||
|
||||
@@ -656,7 +656,7 @@ class EventCreationHandler:
context: The event context.

Returns:
The previous verion of the event is returned, if it is found in the
The previous version of the event is returned, if it is found in the
event context. Otherwise, None is returned.
"""
prev_state_ids = await context.get_prev_state_ids()
@@ -1099,34 +1099,13 @@ class EventCreationHandler:

if event.type == EventTypes.Member:
if event.content["membership"] == Membership.INVITE:

def is_inviter_member_event(e):
return e.type == EventTypes.Member and e.sender == event.sender

current_state_ids = await context.get_current_state_ids()

# We know this event is not an outlier, so this must be
# non-None.
assert current_state_ids is not None

state_to_include_ids = [
e_id
for k, e_id in current_state_ids.items()
if k[0] in self.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]

state_to_include = await self.store.get_events(state_to_include_ids)

event.unsigned["invite_room_state"] = [
{
"type": e.type,
"state_key": e.state_key,
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.values()
]
event.unsigned[
"invite_room_state"
] = await self.store.get_stripped_room_state_from_event_context(
context,
self.room_invite_state_types,
membership_user_id=event.sender,
)

invitee = UserID.from_string(event.state_key)
if not self.hs.is_mine(invitee):
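The hunk above replaces the inline computation of `invite_room_state` with a single storage helper, `get_stripped_room_state_from_event_context`. A sketch of what that stripping amounts to — assuming plain-dict events and a hypothetical default event-type list, not Synapse's real types:

```
from typing import Dict, Iterable, List, Tuple

# Assumed list of state event types copied onto invites so the invited
# client can render the room name/avatar before joining.
DEFAULT_INVITE_STATE_TYPES = [
    "m.room.join_rules",
    "m.room.canonical_alias",
    "m.room.avatar",
    "m.room.encryption",
    "m.room.name",
]


def strip_state_event(event: dict) -> dict:
    # A stripped state event keeps only these four fields.
    return {
        "type": event["type"],
        "state_key": event["state_key"],
        "content": event["content"],
        "sender": event["sender"],
    }


def stripped_room_state(
    current_state: Dict[Tuple[str, str], dict],
    invite_state_types: Iterable[str],
    membership_user_id: str,
) -> List[dict]:
    # Include the configured event types, plus the inviter's own membership
    # event, reduced to their stripped forms.
    return [
        strip_state_event(ev)
        for (etype, skey), ev in current_state.items()
        if etype in invite_state_types
        or (etype, skey) == ("m.room.member", membership_user_id)
    ]
```

Centralising this in the store lets the same stripped-state logic be reused by other callers instead of being re-derived inline.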
@@ -217,7 +217,7 @@ class OidcHandler:

This is based on the requested scopes: if the scopes include
``openid``, the provider should give use an ID token containing the
user informations. If not, we should fetch them using the
user information. If not, we should fetch them using the
``access_token`` with the ``userinfo_endpoint``.
"""

@@ -426,7 +426,7 @@ class OidcHandler:
return resp

async def _fetch_userinfo(self, token: Token) -> UserInfo:
"""Fetch user informations from the ``userinfo_endpoint``.
"""Fetch user information from the ``userinfo_endpoint``.

Args:
token: the token given by the ``token_endpoint``.

@@ -695,9 +695,7 @@ class OidcHandler:
return

# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)

# Call the mapper to register/login the user

@@ -756,7 +754,7 @@ class OidcHandler:
Defaults to an hour.

Returns:
A signed macaroon token with the session informations.
A signed macaroon token with the session information.
"""
macaroon = pymacaroons.Macaroon(
location=self._server_name, identifier="key", key=self._macaroon_secret_key,
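The last hunk above signs OIDC session state into a macaroon. A minimal sketch with `pymacaroons`, using hypothetical caveat names modelled on this style (the real token carries more session fields):

```
import time

import pymacaroons


def make_session_macaroon(
    server_name: str, secret_key: bytes, nonce: str, ttl_seconds: int = 3600
) -> str:
    # Mint a signed token; first-party caveats bind it to a purpose and an
    # expiry time, so the server can verify it later without storing any
    # per-session state.
    macaroon = pymacaroons.Macaroon(
        location=server_name, identifier="key", key=secret_key
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("type = session")
    macaroon.add_first_party_caveat("session = %s" % (nonce,))
    expiry_ms = int((time.time() + ttl_seconds) * 1000)
    macaroon.add_first_party_caveat("time < %d" % (expiry_ms,))
    return macaroon.serialize()
```

Because the caveats are covered by the HMAC chain, tampering with the expiry or session value invalidates the signature.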
@@ -48,7 +48,7 @@ from synapse.util.wheel_timer import WheelTimer

MYPY = False
if MYPY:
import synapse.server
from synapse.server import HomeServer

logger = logging.getLogger(__name__)

@@ -101,7 +101,7 @@ assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class BasePresenceHandler(abc.ABC):
"""Parts of the PresenceHandler that are shared between workers and master"""

def __init__(self, hs: "synapse.server.HomeServer"):
def __init__(self, hs: "HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
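The `MYPY = False` guard above predates `typing.TYPE_CHECKING`, which is the idiomatic spelling of the same trick: import a module only for the type checker, breaking the runtime import cycle, and quote the annotation so the name is never needed at runtime. A minimal sketch:

```
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only evaluated by the type checker; at runtime this import never
    # happens, which avoids a circular import with the server module.
    from synapse.server import HomeServer


class ExampleHandler:
    def __init__(self, hs: "HomeServer"):
        # The quoted annotation is resolved lazily, so the name only needs
        # to exist when mypy runs.
        self.hs = hs
```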
@@ -199,7 +199,7 @@ class BasePresenceHandler(abc.ABC):


class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "synapse.server.HomeServer"):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id

@@ -802,7 +802,7 @@ class PresenceHandler(BasePresenceHandler):
between the requested tokens due to the limit.

The token returned can be used in a subsequent call to this
function to get further updatees.
function to get further updates.

The updates are a list of 2-tuples of stream ID and the row data
"""

@@ -977,7 +977,7 @@ def should_notify(old_state, new_state):
new_state.last_active_ts - old_state.last_active_ts
> LAST_ACTIVE_GRANULARITY
):
# Only notify about last active bumps if we're not currently acive
# Only notify about last active bumps if we're not currently active
if not new_state.currently_active:
notify_reason_counter.labels("last_active_change_online").inc()
return True

@@ -1011,7 +1011,7 @@ def format_user_presence_state(state, now, include_user_id=True):


class PresenceEventSource:
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
# We can't call get_presence_handler here because there's a cycle:
#
# Presence -> Notifier -> PresenceEventSource -> Presence

@@ -1071,12 +1071,14 @@ class PresenceEventSource:

users_interested_in = await self._get_interested_in(user, explicit_room_id)

user_ids_changed = set()
user_ids_changed = set()  # type: Collection[str]
changed = None
if from_key:
changed = stream_change_cache.get_all_entities_changed(from_key)

if changed is not None and len(changed) < 500:
assert isinstance(user_ids_changed, set)

# For small deltas, its quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.labels("stream").inc()
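The `@@ -1071` hunk widens `user_ids_changed` to `Collection[str]` (via a `# type:` comment, since Python 3.5 was still supported) and adds an `assert isinstance(..., set)` so mypy permits set-only methods on the narrow branch. The same pattern with modern variable annotations, as a rough sketch:

```
from typing import Collection


def collect_changed(small_delta: bool) -> Collection[str]:
    # Annotate with the broad type the function may end up returning...
    user_ids_changed: Collection[str] = set()

    if small_delta:
        # ...then narrow back to the concrete type before using set-only
        # methods: after this assert, mypy accepts .add().
        assert isinstance(user_ids_changed, set)
        user_ids_changed.add("@alice:example.com")
    else:
        # The broad annotation also lets this branch swap in another
        # collection type entirely.
        user_ids_changed = ("@bob:example.com",)

    return user_ids_changed
```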
@@ -98,11 +98,18 @@ class ProfileHandler(BaseHandler):
except RequestSendFailed as e:
raise SynapseError(502, "Failed to fetch profile") from e
except HttpResponseException as e:
if e.code < 500 and e.code != 404:
# Other codes are not allowed in c2s API
logger.info(
"Server replied with wrong response: %s %s", e.code, e.msg
)

raise SynapseError(502, "Failed to fetch profile")
raise e.to_synapse_error()

async def get_profile_from_cache(self, user_id: str) -> JsonDict:
"""Get the profile information from our local cache. If the user is
ours then the profile information will always be corect. Otherwise,
ours then the profile information will always be correct. Otherwise,
it may be out of date/missing.
"""
target_user = UserID.from_string(user_id)
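The hunk above stops blindly converting every remote profile error: 404s and 5xx responses are passed through via `to_synapse_error()`, while other 4xx codes, which are not legal for this client-server endpoint, are logged and mapped to a generic 502. A standalone sketch of that translation, with stand-in exception classes:

```
import logging

logger = logging.getLogger(__name__)


class HttpResponseException(Exception):
    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code
        self.msg = msg


class SynapseError(Exception):
    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code


def to_client_error(e: HttpResponseException) -> SynapseError:
    # 4xx codes (other than 404) from the remote are not meaningful to our
    # own clients on this endpoint, so translate them into a generic 502;
    # pass 404 and 5xx through so the client sees an honest status.
    if e.code < 500 and e.code != 404:
        logger.info("Server replied with wrong response: %s %s", e.code, e.msg)
        return SynapseError(502, "Failed to fetch profile")
    return SynapseError(e.code, e.msg)
```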
@@ -124,7 +131,7 @@ class ProfileHandler(BaseHandler):
profile = await self.store.get_from_remote_profile_cache(user_id)
return profile or {}

async def get_displayname(self, target_user: UserID) -> str:
async def get_displayname(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
displayname = await self.store.get_profile_displayname(

@@ -211,7 +218,7 @@ class ProfileHandler(BaseHandler):

await self._update_join_states(requester, target_user)

async def get_avatar_url(self, target_user: UserID) -> str:
async def get_avatar_url(self, target_user: UserID) -> Optional[str]:
if self.hs.is_mine(target_user):
try:
avatar_url = await self.store.get_profile_avatar_url(
@@ -115,7 +115,10 @@ class RegistrationHandler(BaseHandler):
400, "User ID already taken.", errcode=Codes.USER_IN_USE
)
user_data = await self.auth.get_user_by_access_token(guest_access_token)
if not user_data["is_guest"] or user_data["user"].localpart != localpart:
if (
not user_data.is_guest
or UserID.from_string(user_data.user_id).localpart != localpart
):
raise AuthError(
403,
"Cannot register taken user ID without valid guest "

@@ -741,7 +744,7 @@ class RegistrationHandler(BaseHandler):
# up when the access token is saved, but that's quite an
# invasive change I'd rather do separately.
user_tuple = await self.store.get_user_by_access_token(token)
token_id = user_tuple["token_id"]
token_id = user_tuple.token_id

await self.pusher_pool.add_pusher(
user_id=user_id,
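These register hunks reflect `get_user_by_access_token` now returning an object with attributes (`user_data.is_guest`, `user_tuple.token_id`) instead of a dict. A hypothetical dataclass showing the shape this migration assumes:

```
from dataclasses import dataclass
from typing import Optional


@dataclass
class TokenLookupResult:
    # Hypothetical shape: attribute access replaces the old dict lookups
    # such as user_data["is_guest"] and user_tuple["token_id"].
    user_id: str
    token_id: Optional[int] = None
    is_guest: bool = False


def guest_localpart_matches(user_data: TokenLookupResult, localpart: str) -> bool:
    # Simplified stand-in for UserID.from_string parsing "@localpart:domain".
    parsed_localpart = user_data.user_id.split(":", 1)[0].lstrip("@")
    return user_data.is_guest and parsed_localpart == localpart
```

Attribute access gives mypy something to check, where a plain dict would accept any key at any type.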
@@ -771,22 +771,29 @@ class RoomCreationHandler(BaseHandler):
ratelimit=False,
)

for invitee in invite_list:
# we avoid dropping the lock between invites, as otherwise joins can
# start coming in and making the createRoom slow.
#
# we also don't need to check the requester's shadow-ban here, as we
# have already done so above (and potentially emptied invite_list).
with (await self.room_member_handler.member_linearizer.queue((room_id,))):
content = {}
is_direct = config.get("is_direct", None)
if is_direct:
content["is_direct"] = is_direct

# Note that update_membership with an action of "invite" can raise a
# ShadowBanError, but this was handled above by emptying invite_list.
_, last_stream_id = await self.room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
room_id,
"invite",
ratelimit=False,
content=content,
)
for invitee in invite_list:
(
_,
last_stream_id,
) = await self.room_member_handler.update_membership_locked(
requester,
UserID.from_string(invitee),
room_id,
"invite",
ratelimit=False,
content=content,
)

for invite_3pid in invite_3pid_list:
id_server = invite_3pid["id_server"]

@@ -1268,7 +1275,7 @@ class RoomShutdownHandler:
)

# We now wait for the create room to come back in via replication so
# that we can assume that all the joins/invites have propogated before
# that we can assume that all the joins/invites have propagated before
# we try and auto join below.
await self._replication.wait_for_stream_position(
self.hs.config.worker.events_shard_config.get_instance(new_room_id),
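The room-creation hunk above moves the invite loop inside a single acquisition of the room's member lock, so joins queued on the same lock cannot interleave with (and slow down) `createRoom`. A simplified `asyncio` sketch of holding one per-room lock across the whole batch; Synapse's real `Linearizer` is more sophisticated:

```
import asyncio
from collections import defaultdict
from typing import Dict, Iterable


async def send_invite_locked(room_id: str, invitee: str) -> None:
    # Placeholder for the real locked invite path.
    await asyncio.sleep(0)


class RoomLocks:
    """One lock per room; a simplified stand-in for Synapse's Linearizer."""

    def __init__(self) -> None:
        self._locks: Dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    def lock(self, room_id: str) -> asyncio.Lock:
        return self._locks[room_id]


async def invite_all(locks: RoomLocks, room_id: str, invitees: Iterable[str]) -> None:
    # Acquire the room lock once and send every invite under it, instead of
    # re-acquiring per invitee; joins queued on the same lock wait until
    # the whole batch is done.
    async with locks.lock(room_id):
        for invitee in invitees:
            await send_invite_locked(room_id, invitee)
```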
@@ -307,7 +307,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
key = (room_id,)

with (await self.member_linearizer.queue(key)):
result = await self._update_membership(
result = await self.update_membership_locked(
requester,
target,
room_id,

@@ -322,7 +322,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):

return result

async def _update_membership(
async def update_membership_locked(
self,
requester: Requester,
target: UserID,

@@ -335,6 +335,10 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
content: Optional[dict] = None,
require_consent: bool = True,
) -> Tuple[str, int]:
"""Helper for update_membership.

Assumes that the membership linearizer is already held for the room.
"""
content_specified = bool(content)
if content is None:
content = {}
@@ -216,9 +216,7 @@ class SamlHandler:
return

# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)

# Call the mapper to register/login the user
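Both this SAML hunk and the OIDC hunk earlier swap the same three lines of inline header parsing for a `request.get_user_agent("")` helper. A sketch of what such a helper might look like over a Twisted-style request, using stand-in `Fake*` classes rather than Twisted's real ones:

```
from typing import List, Optional


class FakeHeaders:
    def __init__(self, raw: dict):
        self._raw = raw

    def getRawHeaders(self, name: bytes, default: Optional[List[bytes]] = None):
        return self._raw.get(name, default)


class FakeRequest:
    def __init__(self, headers: FakeHeaders):
        self.requestHeaders = headers

    def get_user_agent(self, default: str) -> str:
        # Centralises the parsing the handlers used to inline: take the
        # first User-Agent header and decode it defensively.
        headers = self.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])
        return headers[0].decode("ascii", "surrogateescape") or default
```

Hoisting the parsing into one helper means the decoding quirks (bytes, missing header, non-ASCII) are handled once rather than copy-pasted per handler.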
@@ -139,7 +139,7 @@ class SearchHandler(BaseHandler):
# Filter to apply to results
filter_dict = room_cat.get("filter", {})

# What to order results by (impacts whether pagination can be doen)
# What to order results by (impacts whether pagination can be done)
order_by = room_cat.get("order_by", "rank")

# Return the current state of the rooms?
@@ -32,7 +32,7 @@ class StateDeltasHandler:
Returns:
None if the field in the events either both match `public_value`
or if neither do, i.e. there has been no change.
True if it didnt match `public_value` but now does
True if it didn't match `public_value` but now does
False if it did match `public_value` but now doesn't
"""
prev_event = None
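The docstring fixed above describes a tri-state result. A compact sketch of that contract:

```
from typing import Optional


def get_key_change(
    prev_value: object, new_value: object, public_value: object
) -> Optional[bool]:
    # None  -> no change relative to public_value
    # True  -> changed to match public_value
    # False -> changed away from public_value
    was_public = prev_value == public_value
    is_public = new_value == public_value
    if was_public == is_public:
        return None
    return is_public
```

Callers must test `is None` / `is True` explicitly, since `False` and `None` are both falsy.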
@@ -754,7 +754,7 @@ class SyncHandler:
"""
# TODO(mjark) Check if the state events were received by the server
# after the previous sync, since we need to include those state
# updates even if they occured logically before the previous event.
# updates even if they occurred logically before the previous event.
# TODO(mjark) Check for new redactions in the state events.

with Measure(self.clock, "compute_state_delta"):

@@ -1882,7 +1882,7 @@ class SyncHandler:
# members (as the client otherwise doesn't have enough info to form
# the name itself).
if sync_config.filter_collection.lazy_load_members() and (
# we recalulate the summary:
# we recalculate the summary:
# if there are membership changes in the timeline, or
# if membership has changed during a gappy sync, or
# if this is an initial sync.
@@ -167,20 +167,25 @@ class FollowerTypingHandler:
now_typing = set(row.user_ids)
self._room_typing[row.room_id] = row.user_ids

run_as_background_process(
"_handle_change_in_typing",
self._handle_change_in_typing,
row.room_id,
prev_typing,
now_typing,
)
if self.federation:
run_as_background_process(
"_send_changes_in_typing_to_remotes",
self._send_changes_in_typing_to_remotes,
row.room_id,
prev_typing,
now_typing,
)

async def _handle_change_in_typing(
async def _send_changes_in_typing_to_remotes(
self, room_id: str, prev_typing: Set[str], now_typing: Set[str]
):
"""Process a change in typing of a room from replication, sending EDUs
for any local users.
"""

if not self.federation:
return

for user_id in now_typing - prev_typing:
if self.is_mine_id(user_id):
await self._push_remote(RoomMember(room_id, user_id), True)

@@ -371,7 +376,7 @@ class TypingWriterHandler(FollowerTypingHandler):
between the requested tokens due to the limit.

The token returned can be used in a subsequent call to this
function to get further updatees.
function to get further updates.

The updates are a list of 2-tuples of stream ID and the row data
"""
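The typing hunk above hoists the `if self.federation:` check in front of the scheduling call and renames the method to say what it actually does. `run_as_background_process` is Synapse's fire-and-forget wrapper; a generic `asyncio` approximation that keeps only the schedule-and-log-exceptions behaviour (the real helper also does metrics and logging-context bookkeeping):

```
import asyncio
import logging
from typing import Any, Awaitable, Callable

logger = logging.getLogger(__name__)


def run_as_background_process(
    desc: str, func: Callable[..., Awaitable[Any]], *args: Any
) -> "asyncio.Task":
    # Schedule the coroutine without awaiting it, making sure exceptions
    # are logged rather than silently swallowed by the event loop.
    async def wrapper() -> None:
        try:
            await func(*args)
        except Exception:
            logger.exception("Background process %r failed", desc)

    return asyncio.create_task(wrapper())
```

Checking `self.federation` before scheduling avoids spawning a task that would immediately return, which is why the early-return guard inside the method could be dropped.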
@@ -31,7 +31,7 @@ class UserDirectoryHandler(StateDeltasHandler):
N.B.: ASSUMES IT IS THE ONLY THING THAT MODIFIES THE USER DIRECTORY

The user directory is filled with users who this server can see are joined to a
world_readable or publically joinable room. We keep a database table up to date
world_readable or publicly joinable room. We keep a database table up to date
by streaming changes of the current state and recalculating whether users should
be in the directory or not when necessary.
"""
Some files were not shown because too many files have changed in this diff.