
Compare commits


4 Commits

Author SHA1 Message Date
Richard van der Hoff
ef8415adc2 Merge remote-tracking branch 'origin/develop' into dbkr/3pid_verification_logging 2019-06-18 17:46:04 +01:00
Richard van der Hoff
14fe33cabf fix changelog 2019-05-09 23:33:27 +01:00
David Baker
43b9a40370 Actually this should be debug logging 2019-04-05 10:55:04 +01:00
David Baker
1040e9b648 Add some logging to 3pid invite sig verification
I had to add quite a lot of logging to diagnose a problem with 3pid
invites - we only logged the one failure, which isn't all that
informative.

NB. I'm not convinced the logic of this loop is right: I think it
should just accept a single valid signature from a trusted source
rather than fail if *any* signature is invalid. Also, it should
probably not skip the rest of the middle loop if a check fails? However,
I'm deliberately not changing the logic here.
2019-04-05 10:46:16 +01:00
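
To make the concern in that commit message concrete, here is a minimal sketch of the two loop shapes it contrasts. This is an illustration only, not the code touched by the commit: `trusted_servers` and `get_verify_key` are hypothetical stand-ins, and only `verify_signed_json`/`SignatureVerifyException` come from the real `signedjson` library (which the diff below also uses).

```python
from signedjson.sign import SignatureVerifyException, verify_signed_json

def check_strict(signed, trusted_servers, get_verify_key):
    """Shape of the existing logic: any invalid signature aborts the check."""
    for server_name, key_ids in signed["signatures"].items():
        for key_id in key_ids:
            # raises SignatureVerifyException on the first bad signature,
            # even if a later signature from a trusted server would verify
            verify_signed_json(signed, server_name, get_verify_key(server_name, key_id))

def check_lenient(signed, trusted_servers, get_verify_key):
    """Suggested alternative: one valid signature from a trusted server suffices."""
    for server_name, key_ids in signed["signatures"].items():
        if server_name not in trusted_servers:
            continue  # ignore signatures from servers we don't trust
        for key_id in key_ids:
            try:
                verify_signed_json(signed, server_name, get_verify_key(server_name, key_id))
                return True  # a single good signature is enough
            except SignatureVerifyException:
                continue  # keep trying the remaining signatures
    return False
```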
444 changed files with 11447 additions and 18027 deletions


@@ -5,8 +5,8 @@ steps:
- command:
- "python -m pip install tox"
- "tox -e check_codestyle"
label: "\U0001F9F9 Check Style"
- "tox -e pep8"
label: "\U0001F9F9 PEP-8"
plugins:
- docker#v3.0.1:
image: "python:3.6"


@@ -4,7 +4,7 @@ jobs:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
@@ -12,7 +12,7 @@ jobs:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 --build-arg PYTHON_VERSION=3.6 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3


@@ -1,13 +1,9 @@
# ignore everything by default
*
# things to include
!docker
!scripts
!synapse
!MANIFEST.in
!README.rst
!setup.py
!synctl
**/__pycache__
Dockerfile
.travis.yml
.gitignore
demo/etc
tox.ini
.git/*
.tox/*
debian/matrix-synapse/
debian/matrix-synapse-*/


@@ -4,7 +4,6 @@ about: I need support for Synapse
---
Please don't file github issues asking for support.
# Please ask for support in [**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org)
Instead, please join [`#synapse:matrix.org`](https://matrix.to/#/#synapse:matrix.org)
(from a matrix.org account if necessary), and ask there.
## Don't file an issue as a support request.

.github/SUPPORT.md

@@ -1,3 +1,3 @@
[**#synapse:matrix.org**](https://matrix.to/#/#synapse:matrix.org) is the official support room for
Synapse, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html.
Please ask for support there, rather than filing github issues.
[**#matrix:matrix.org**](https://matrix.to/#/#matrix:matrix.org) is the official support room for Matrix, and can be accessed by any client from https://matrix.org/docs/projects/try-matrix-now.html
It can also be accessed via IRC bridge at irc://irc.freenode.net/matrix or on the web here: https://webchat.freenode.net/?channels=matrix


@@ -72,6 +72,3 @@ Jason Robinson <jasonr at matrix.org>
Joseph Weston <joseph at weston.cloud>
+ Add admin API for querying HS version
Benjamin Saunders <ben.e.saunders at gmail dot com>
* Documentation improvements


@@ -1,107 +1,3 @@
Synapse 1.1.0rc1 (2019-07-02)
=============================
As of v1.1.0, Synapse no longer supports Python 2, nor Postgres version 9.4.
See the [upgrade notes](UPGRADE.rst#upgrading-to-v110) for more details.
Features
--------
- Added possibility to disable local password authentication. Contributed by Daniel Hoffend. ([\#5092](https://github.com/matrix-org/synapse/issues/5092))
- Add monthly active users to phonehome stats. ([\#5252](https://github.com/matrix-org/synapse/issues/5252))
- Allow expired users to trigger renewal email sending manually. ([\#5363](https://github.com/matrix-org/synapse/issues/5363))
- Statistics on forward extremities per room are now exposed via Prometheus. ([\#5384](https://github.com/matrix-org/synapse/issues/5384), [\#5458](https://github.com/matrix-org/synapse/issues/5458), [\#5461](https://github.com/matrix-org/synapse/issues/5461))
- Add --no-daemonize option to run synapse in the foreground, per issue #4130. Contributed by Soham Gumaste. ([\#5412](https://github.com/matrix-org/synapse/issues/5412), [\#5587](https://github.com/matrix-org/synapse/issues/5587))
- Fully support SAML2 authentication. Contributed by [Alexander Trost](https://github.com/galexrt) - thank you! ([\#5422](https://github.com/matrix-org/synapse/issues/5422))
- Allow server admins to define implementations of extra rules for allowing or denying incoming events. ([\#5440](https://github.com/matrix-org/synapse/issues/5440), [\#5474](https://github.com/matrix-org/synapse/issues/5474), [\#5477](https://github.com/matrix-org/synapse/issues/5477))
- Add support for handling pagination APIs on client reader worker. ([\#5505](https://github.com/matrix-org/synapse/issues/5505), [\#5513](https://github.com/matrix-org/synapse/issues/5513), [\#5531](https://github.com/matrix-org/synapse/issues/5531))
- Improve help and cmdline option names for --generate-config options. ([\#5512](https://github.com/matrix-org/synapse/issues/5512))
- Allow configuration of the path used for ACME account keys. ([\#5516](https://github.com/matrix-org/synapse/issues/5516), [\#5521](https://github.com/matrix-org/synapse/issues/5521), [\#5522](https://github.com/matrix-org/synapse/issues/5522))
- Add --data-dir and --open-private-ports options. ([\#5524](https://github.com/matrix-org/synapse/issues/5524))
- Split public rooms directory auth config in two settings, in order to manage client auth independently from the federation part of it. Obsoletes the "restrict_public_rooms_to_local_users" configuration setting. If "restrict_public_rooms_to_local_users" is set in the config, Synapse will act as if both new options are enabled, i.e. require authentication through the client API and deny federation requests. ([\#5534](https://github.com/matrix-org/synapse/issues/5534))
- The minimum TLS version used for outgoing federation requests can now be set with `federation_client_minimum_tls_version`. ([\#5550](https://github.com/matrix-org/synapse/issues/5550))
- Optimise devices changed query to not pull unnecessary rows from the database, reducing database load. ([\#5559](https://github.com/matrix-org/synapse/issues/5559))
- Add new metrics for number of forward extremities being persisted and number of state groups involved in resolution. ([\#5476](https://github.com/matrix-org/synapse/issues/5476))
Bugfixes
--------
- Fix bug processing incoming events over federation if call to `/get_missing_events` fails. ([\#5042](https://github.com/matrix-org/synapse/issues/5042))
- Prevent more than one room upgrade happening simultaneously on the same room. ([\#5051](https://github.com/matrix-org/synapse/issues/5051))
- Fix a bug where running synapse_port_db would cause the account validity feature to fail because it didn't set the type of the email_sent column to boolean. ([\#5325](https://github.com/matrix-org/synapse/issues/5325))
- Warn about disabling email-based password resets when a reset occurs, and remove warning when someone attempts a phone-based reset. ([\#5387](https://github.com/matrix-org/synapse/issues/5387))
- Fix email notifications for unnamed rooms with multiple people. ([\#5388](https://github.com/matrix-org/synapse/issues/5388))
- Fix exceptions in federation reader worker caused by attempting to renew attestations, which should only happen on master worker. ([\#5389](https://github.com/matrix-org/synapse/issues/5389))
- Fix handling of failures fetching remote content to not log failures as exceptions. ([\#5390](https://github.com/matrix-org/synapse/issues/5390))
- Fix a bug where deactivated users could receive renewal emails if the account validity feature is on. ([\#5394](https://github.com/matrix-org/synapse/issues/5394))
- Fix missing invite state after exchanging 3PID invites over federation. ([\#5464](https://github.com/matrix-org/synapse/issues/5464))
- Fix intermittent exceptions on Apple hardware. Also fix bug that caused database activity times to be under-reported in log lines. ([\#5498](https://github.com/matrix-org/synapse/issues/5498))
- Fix logging error when a tampered event is detected. ([\#5500](https://github.com/matrix-org/synapse/issues/5500))
- Fix bug where clients could tight loop calling `/sync` for a period. ([\#5507](https://github.com/matrix-org/synapse/issues/5507))
- Fix bug with `jinja2` preventing Synapse from starting. Users who had this problem should now simply need to run `pip install matrix-synapse`. ([\#5514](https://github.com/matrix-org/synapse/issues/5514))
- Fix a regression where homeservers on private IP addresses were incorrectly blacklisted. ([\#5523](https://github.com/matrix-org/synapse/issues/5523))
- Fixed m.login.jwt using unregistered user_id and added pyjwt>=1.6.4 as jwt conditional dependencies. Contributed by Pau Rodriguez-Estivill. ([\#5555](https://github.com/matrix-org/synapse/issues/5555), [\#5586](https://github.com/matrix-org/synapse/issues/5586))
- Fix a bug that would cause invited users to receive several emails for a single 3PID invite in case the inviter is rate limited. ([\#5576](https://github.com/matrix-org/synapse/issues/5576))
Updates to the Docker image
---------------------------
- Add ability to change Docker containers [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) with the `TZ` variable. ([\#5383](https://github.com/matrix-org/synapse/issues/5383))
- Update docker image to use Python 3.7. ([\#5546](https://github.com/matrix-org/synapse/issues/5546))
- Deprecate the use of environment variables for configuration, and make the use of a static configuration the default. ([\#5561](https://github.com/matrix-org/synapse/issues/5561), [\#5562](https://github.com/matrix-org/synapse/issues/5562), [\#5566](https://github.com/matrix-org/synapse/issues/5566), [\#5567](https://github.com/matrix-org/synapse/issues/5567))
- Increase default log level for docker image to INFO. It can still be changed by editing the generated log.config file. ([\#5547](https://github.com/matrix-org/synapse/issues/5547))
- Send synapse logs to the docker logging system, by default. ([\#5565](https://github.com/matrix-org/synapse/issues/5565))
- Open the non-TLS port by default. ([\#5568](https://github.com/matrix-org/synapse/issues/5568))
- Fix failure to start under docker with SAML support enabled. ([\#5490](https://github.com/matrix-org/synapse/issues/5490))
- Use a sensible location for data files when generating a config file. ([\#5563](https://github.com/matrix-org/synapse/issues/5563))
Deprecations and Removals
-------------------------
- Python 2.7 is no longer a supported platform. Synapse now requires Python 3.5+ to run. ([\#5425](https://github.com/matrix-org/synapse/issues/5425))
- PostgreSQL 9.4 is no longer supported. Synapse requires Postgres 9.5 or above for Postgres support. ([\#5448](https://github.com/matrix-org/synapse/issues/5448))
- Remove support for cpu_affinity setting. ([\#5525](https://github.com/matrix-org/synapse/issues/5525))
Improved Documentation
----------------------
- Improve README section on performance troubleshooting. ([\#4276](https://github.com/matrix-org/synapse/issues/4276))
- Add information about how to install and run `black` on the codebase to code_style.rst. ([\#5537](https://github.com/matrix-org/synapse/issues/5537))
- Improve install docs on choosing server_name. ([\#5558](https://github.com/matrix-org/synapse/issues/5558))
Internal Changes
----------------
- Add logging to 3pid invite signature verification. ([\#5015](https://github.com/matrix-org/synapse/issues/5015))
- Update example haproxy config to a more compatible setup. ([\#5313](https://github.com/matrix-org/synapse/issues/5313))
- Track deactivated accounts in the database. ([\#5378](https://github.com/matrix-org/synapse/issues/5378), [\#5465](https://github.com/matrix-org/synapse/issues/5465), [\#5493](https://github.com/matrix-org/synapse/issues/5493))
- Clean up code for sending federation EDUs. ([\#5381](https://github.com/matrix-org/synapse/issues/5381))
- Add a sponsor button to the repo. ([\#5382](https://github.com/matrix-org/synapse/issues/5382), [\#5386](https://github.com/matrix-org/synapse/issues/5386))
- Don't log non-200 responses from federation queries as exceptions. ([\#5383](https://github.com/matrix-org/synapse/issues/5383))
- Update Python syntax in contrib/ to Python 3. ([\#5446](https://github.com/matrix-org/synapse/issues/5446))
- Update federation_client dev script to support `.well-known` and work with python3. ([\#5447](https://github.com/matrix-org/synapse/issues/5447))
- SyTest has been moved to Buildkite. ([\#5459](https://github.com/matrix-org/synapse/issues/5459))
- Demo script now uses python3. ([\#5460](https://github.com/matrix-org/synapse/issues/5460))
- Synapse can now handle RestServlets that return coroutines. ([\#5475](https://github.com/matrix-org/synapse/issues/5475), [\#5585](https://github.com/matrix-org/synapse/issues/5585))
- The demo servers talk to each other again. ([\#5478](https://github.com/matrix-org/synapse/issues/5478))
- Add an EXPERIMENTAL config option to try and periodically clean up extremities by sending dummy events. ([\#5480](https://github.com/matrix-org/synapse/issues/5480))
- Synapse's codebase is now formatted by `black`. ([\#5482](https://github.com/matrix-org/synapse/issues/5482))
- Some cleanups and sanity-checking in the CPU and database metrics. ([\#5499](https://github.com/matrix-org/synapse/issues/5499))
- Improve email notification logging. ([\#5502](https://github.com/matrix-org/synapse/issues/5502))
- Fix "Unexpected entry in 'full_schemas'" log warning. ([\#5509](https://github.com/matrix-org/synapse/issues/5509))
- Improve logging when generating config files. ([\#5510](https://github.com/matrix-org/synapse/issues/5510))
- Refactor and clean up Config parser for maintainability. ([\#5511](https://github.com/matrix-org/synapse/issues/5511))
- Make the config clearer in that email.template_dir is relative to the Synapse's root directory, not the `synapse/` folder within it. ([\#5543](https://github.com/matrix-org/synapse/issues/5543))
- Update v1.0.0 release changelog to include more information about changes to password resets. ([\#5545](https://github.com/matrix-org/synapse/issues/5545))
- Remove non-functioning check_event_hash.py dev script. ([\#5548](https://github.com/matrix-org/synapse/issues/5548))
- Synapse will now only allow TLS v1.2 connections when serving federation, if it terminates TLS. As Synapse's allowed ciphers were only able to be used in TLSv1.2 before, this does not change behaviour. ([\#5550](https://github.com/matrix-org/synapse/issues/5550))
- Logging when running GC collection on generation 0 is now at the DEBUG level, not INFO. ([\#5557](https://github.com/matrix-org/synapse/issues/5557))
- Reduce the amount of stuff we send in the docker context. ([\#5564](https://github.com/matrix-org/synapse/issues/5564))
- Point the reverse links in the Purge History contrib scripts at the intended location. ([\#5570](https://github.com/matrix-org/synapse/issues/5570))
Synapse 1.0.0 (2019-06-11)
==========================
@@ -156,11 +52,7 @@ Features
- Add a script to generate new signing-key files. ([\#5361](https://github.com/matrix-org/synapse/issues/5361))
- Update upgrade and installation guides ahead of 1.0. ([\#5371](https://github.com/matrix-org/synapse/issues/5371))
- Replace the `perspectives` configuration section with `trusted_key_servers`, and make validating the signatures on responses optional (since TLS will do this job for us). ([\#5374](https://github.com/matrix-org/synapse/issues/5374))
- Add ability to perform password reset via email without trusting the identity server. **As a result of this PR, password resets will now be disabled on the default configuration.**
Password reset emails are now sent from the homeserver by default, instead of the identity server. To enable this functionality, ensure `email` and `public_baseurl` config options are filled out.
If you would like to re-enable password resets being sent from the identity server (warning: this is dangerous! See [#5345](https://github.com/matrix-org/synapse/pull/5345)), set `email.trust_identity_server_for_password_resets` to true. ([\#5377](https://github.com/matrix-org/synapse/issues/5377))
- Add ability to perform password reset via email without trusting the identity server. ([\#5377](https://github.com/matrix-org/synapse/issues/5377))
- Set default room version to v4. ([\#5379](https://github.com/matrix-org/synapse/issues/5379))


@@ -1,4 +1,3 @@
- [Choosing your server name](#choosing-your-server-name)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
@@ -11,22 +10,6 @@
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
# Choosing your server name
It is important to choose the name for your server before you install Synapse,
because it cannot be changed later.
The server name determines the "domain" part of user-ids for users on your
server: these will all be of the format `@user:my.domain.name`. It also
determines how other matrix servers will reach yours for federation.
For a test configuration, set this to the hostname of your server. For a more
production-ready setup, you will probably want to specify your domain
(`example.com`) rather than a matrix-specific hostname here (in the same way
that your email address is probably `user@example.com` rather than
`user@email.example.com`) - but doing so may require more advanced setup: see
[Setting up Federation](docs/federate.md).
# Installing Synapse
## Installing from source
@@ -81,7 +64,16 @@ python -m synapse.app.homeserver \
--report-stats=[yes|no]
```
... substituting an appropriate value for `--server-name`.
... substituting an appropriate value for `--server-name`. The server name
determines the "domain" part of user-ids for users on your server: these will
all be of the format `@user:my.domain.name`. It also determines how other
matrix servers will reach yours for Federation. For a test configuration,
set this to the hostname of your server. For a more production-ready setup, you
will probably want to specify your domain (`example.com`) rather than a
matrix-specific hostname here (in the same way that your email address is
probably `user@example.com` rather than `user@email.example.com`) - but
doing so may require more advanced setup: see [Setting up Federation](docs/federate.md).
Beware that the server name cannot be changed later.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your Home Server to
@@ -94,6 +86,9 @@ different. See the
[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
for more information on key management.)
You will need to give Synapse a TLS certificate before it will start - see [TLS
certificates](#tls-certificates).
To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. `~/synapse`), and::


@@ -340,11 +340,8 @@ log lines and looking for any 'Processed request' lines which take more than
a few seconds to execute. Please let us know at #synapse:matrix.org if
you see this failure mode so we can help debug it, however.
Help!! Synapse is slow and eats all my RAM/CPU!
-----------------------------------------------
First, ensure you are running the latest version of Synapse, using Python 3
with a PostgreSQL database.
Help!! Synapse eats all my RAM!
-------------------------------
Synapse's architecture is quite RAM hungry currently - we deliberately
cache a lot of recent room data and metadata in RAM in order to speed up
@@ -355,29 +352,14 @@ variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
However, degraded performance due to a low cache factor, common on
machines with slow disks, often leads to explosions in memory use due to
backlogged requests. In this case, reducing the cache factor will make
things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall memory use, and especially in terms of giving back
RAM to the OS. To use it, the library must simply be put in the
LD_PRELOAD environment variable when launching Synapse. On Debian, this
can be done by installing the ``libjemalloc1`` package and adding this
line to ``/etc/default/matrix-synapse``::
improvement in overall amount, and especially in terms of giving back RAM
to the OS. To use it, the library must simply be put in the LD_PRELOAD
environment variable when launching Synapse. On Debian, this can be done
by installing the ``libjemalloc1`` package and adding this line to
``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
This can make a significant difference on Python 2.7 - it's unclear how
much of an improvement it provides on Python 3.x.
If you're encountering high CPU use by the Synapse process itself, you
may be affected by a bug with presence tracking that leads to a
massive excess of outgoing federation requests (see `discussion
<https://github.com/matrix-org/synapse/issues/3971>`_). If metrics
indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
``use_presence: false`` in the Synapse config file.


@@ -49,16 +49,16 @@ returned by the Client-Server API:
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v1.1.0
===================
Upgrading to v1.1
=================
Synapse v1.1.0 removes support for older Python and PostgreSQL versions, as
Synapse 1.1 removes support for older Python and PostgreSQL versions, as
outlined in `our deprecation notice <https://matrix.org/blog/2019/04/08/synapse-deprecating-postgres-9-4-and-python-2-x>`_.
Minimum Python Version
----------------------
Synapse v1.1.0 has a minimum Python requirement of Python 3.5. Python 3.6 or
Synapse v1.1 has a minimum Python requirement of Python 3.5. Python 3.6 or
Python 3.7 are recommended as they have improved internal string handling,
significantly reducing memory usage.

changelog.d/5015.misc

@@ -0,0 +1 @@
Add logging to 3pid invite signature verification.

changelog.d/5252.feature

@@ -0,0 +1 @@
Add monthly active users to phonehome stats.

changelog.d/5325.bugfix

@@ -0,0 +1 @@
Fix a bug where running synapse_port_db would cause the account validity feature to fail because it didn't set the type of the email_sent column to boolean.

changelog.d/5363.feature

@@ -0,0 +1 @@
Allow expired users to trigger renewal email sending manually.

changelog.d/5378.misc

@@ -0,0 +1 @@
Track deactivated accounts in the database.

changelog.d/5381.misc

@@ -0,0 +1 @@
Clean up code for sending federation EDUs.

changelog.d/5382.misc

@@ -0,0 +1 @@
Add a sponsor button to the repo.

changelog.d/5383.misc

@@ -0,0 +1 @@
Don't log non-200 responses from federation queries as exceptions.

changelog.d/5384.feature

@@ -0,0 +1 @@
Statistics on forward extremities per room are now exposed via Prometheus.

changelog.d/5386.misc

@@ -0,0 +1 @@
Add a sponsor button to the repo.

changelog.d/5387.bugfix

@@ -0,0 +1 @@
Warn about disabling email-based password resets when a reset occurs, and remove warning when someone attempts a phone-based reset.

changelog.d/5388.bugfix

@@ -0,0 +1 @@
Fix email notifications for unnamed rooms with multiple people.

changelog.d/5389.bugfix

@@ -0,0 +1 @@
Fix exceptions in federation reader worker caused by attempting to renew attestations, which should only happen on master worker.

changelog.d/5390.bugfix

@@ -0,0 +1 @@
Fix handling of failures fetching remote content to not log failures as exceptions.

changelog.d/5394.bugfix

@@ -0,0 +1 @@
Fix a bug where deactivated users could receive renewal emails if the account validity feature is on.

changelog.d/5412.feature

@@ -0,0 +1 @@
Add --no-daemonize option to run synapse in the foreground, per issue #4130. Contributed by Soham Gumaste.

changelog.d/5425.removal

@@ -0,0 +1 @@
Python 2.7 is no longer a supported platform. Synapse now requires Python 3.5+ to run.

changelog.d/5440.feature

@@ -0,0 +1 @@
Allow server admins to define implementations of extra rules for allowing or denying incoming events.

changelog.d/5446.misc

@@ -0,0 +1 @@
Update Python syntax in contrib/ to Python 3.

changelog.d/5447.misc

@@ -0,0 +1 @@
Update federation_client dev script to support `.well-known` and work with python3.

changelog.d/5448.removal

@@ -0,0 +1 @@
PostgreSQL 9.4 is no longer supported. Synapse requires Postgres 9.5 or above for Postgres support.

changelog.d/5458.feature

@@ -0,0 +1 @@
Statistics on forward extremities per room are now exposed via Prometheus.

changelog.d/5459.misc

@@ -0,0 +1 @@
SyTest has been moved to Buildkite.

changelog.d/5460.misc

@@ -0,0 +1 @@
Demo script now uses python3.

changelog.d/5461.feature

@@ -0,0 +1 @@
Statistics on forward extremities per room are now exposed via Prometheus.

changelog.d/5464.bugfix

@@ -0,0 +1 @@
Fix missing invite state after exchanging 3PID invites over federation.

changelog.d/5465.misc

@@ -0,0 +1,2 @@
Track deactivated accounts in the database.

changelog.d/5474.feature

@@ -0,0 +1 @@
Allow server admins to define implementations of extra rules for allowing or denying incoming events.

changelog.d/5477.feature

@@ -0,0 +1 @@
Allow server admins to define implementations of extra rules for allowing or denying incoming events.

changelog.d/5478.misc

@@ -0,0 +1 @@
The demo servers talk to each other again.


@@ -1 +0,0 @@
Update github templates.


@@ -1 +0,0 @@
Removed the `SYNAPSE_SMTP_*` docker container environment variables. Using these environment variables prevented the docker container from starting in Synapse v1.0, even though they didn't actually allow any functionality anyway. Users are advised to remove `SYNAPSE_SMTP_HOST`, `SYNAPSE_SMTP_PORT`, `SYNAPSE_SMTP_USER`, `SYNAPSE_SMTP_PASSWORD` and `SYNAPSE_SMTP_FROM` environment variables from their docker run commands.


@@ -37,8 +37,9 @@ from signedjson.sign import verify_signed_json, SignatureVerifyException
CONFIG_JSON = "cmdclient_config.json"
TRUSTED_ID_SERVERS = ["localhost:8001"]
TRUSTED_ID_SERVERS = [
'localhost:8001'
]
class SynapseCmd(cmd.Cmd):
@@ -58,7 +59,7 @@ class SynapseCmd(cmd.Cmd):
"token": token,
"verbose": "on",
"complete_usernames": "on",
"send_delivery_receipts": "on",
"send_delivery_receipts": "on"
}
self.path_prefix = "/_matrix/client/api/v1"
self.event_stream_token = "END"
@@ -119,11 +120,12 @@ class SynapseCmd(cmd.Cmd):
config_rules = [ # key, valid_values
("verbose", ["on", "off"]),
("complete_usernames", ["on", "off"]),
("send_delivery_receipts", ["on", "off"]),
("send_delivery_receipts", ["on", "off"])
]
for key, valid_vals in config_rules:
if key == args["key"] and args["val"] not in valid_vals:
print("%s value must be one of %s" % (args["key"], valid_vals))
print("%s value must be one of %s" % (args["key"],
valid_vals))
return
# toggle the http client verbosity
@@ -157,13 +159,16 @@ class SynapseCmd(cmd.Cmd):
else:
password = pwd
body = {"type": "m.login.password"}
body = {
"type": "m.login.password"
}
if "userid" in args:
body["user"] = args["userid"]
if password:
body["password"] = password
reactor.callFromThread(self._do_register, body, "noupdate" not in args)
reactor.callFromThread(self._do_register, body,
"noupdate" not in args)
@defer.inlineCallbacks
def _do_register(self, data, update_config):
@@ -174,9 +179,7 @@ class SynapseCmd(cmd.Cmd):
passwordFlow = None
for flow in json_res["flows"]:
if flow["type"] == "m.login.recaptcha" or (
"stages" in flow and "m.login.recaptcha" in flow["stages"]
):
if flow["type"] == "m.login.recaptcha" or ("stages" in flow and "m.login.recaptcha" in flow["stages"]):
print("Unable to register: Home server requires captcha.")
return
if flow["type"] == "m.login.password" and "stages" not in flow:
@@ -199,7 +202,9 @@ class SynapseCmd(cmd.Cmd):
"""
try:
args = self._parse(line, ["user_id"], force_keys=True)
can_login = threads.blockingCallFromThread(reactor, self._check_can_login)
can_login = threads.blockingCallFromThread(
reactor,
self._check_can_login)
if can_login:
p = getpass.getpass("Enter your password: ")
user = args["user_id"]
@@ -207,16 +212,20 @@ class SynapseCmd(cmd.Cmd):
domain = self._domain()
if domain:
user = "@" + user + ":" + domain
reactor.callFromThread(self._do_login, user, p)
# print " got %s " % p
#print " got %s " % p
except Exception as e:
print(e)
@defer.inlineCallbacks
def _do_login(self, user, password):
path = "/login"
data = {"user": user, "password": password, "type": "m.login.password"}
data = {
"user": user,
"password": password,
"type": "m.login.password"
}
url = self._url() + path
json_res = yield self.http_client.do_request("POST", url, data=data)
print(json_res)
@@ -240,13 +249,12 @@ class SynapseCmd(cmd.Cmd):
print("Failed to find any login flows.")
defer.returnValue(False)
flow = json_res["flows"][0] # assume first is the one we want.
if "type" not in flow or "m.login.password" != flow["type"] or "stages" in flow:
flow = json_res["flows"][0] # assume first is the one we want.
if ("type" not in flow or "m.login.password" != flow["type"] or
"stages" in flow):
fallback_url = self._url() + "/login/fallback"
print(
"Unable to login via the command line client. Please visit "
"%s to login." % fallback_url
)
print ("Unable to login via the command line client. Please visit "
"%s to login." % fallback_url)
defer.returnValue(False)
defer.returnValue(True)
@@ -256,33 +264,21 @@ class SynapseCmd(cmd.Cmd):
<clientSecret> A string of characters generated when requesting an email that you'll supply in subsequent calls to identify yourself
<sendAttempt> The number of times the user has requested an email. Leave this the same between requests to retry the request at the transport level. Increment it to request that the email be sent again.
"""
args = self._parse(line, ["address", "clientSecret", "sendAttempt"])
args = self._parse(line, ['address', 'clientSecret', 'sendAttempt'])
postArgs = {
"email": args["address"],
"clientSecret": args["clientSecret"],
"sendAttempt": args["sendAttempt"],
}
postArgs = {'email': args['address'], 'clientSecret': args['clientSecret'], 'sendAttempt': args['sendAttempt']}
reactor.callFromThread(self._do_emailrequest, postArgs)
@defer.inlineCallbacks
def _do_emailrequest(self, args):
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/requestToken"
)
url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/requestToken"
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print(json_res)
if "sid" in json_res:
print("Token sent. Your session ID is %s" % (json_res["sid"]))
if 'sid' in json_res:
print("Token sent. Your session ID is %s" % (json_res['sid']))
def do_emailvalidate(self, line):
"""Validate and associate a third party ID
@@ -290,30 +286,18 @@ class SynapseCmd(cmd.Cmd):
<token> The token sent to your third party identifier address
<clientSecret> The same clientSecret you supplied in requestToken
"""
args = self._parse(line, ["sid", "token", "clientSecret"])
args = self._parse(line, ['sid', 'token', 'clientSecret'])
postArgs = {
"sid": args["sid"],
"token": args["token"],
"clientSecret": args["clientSecret"],
}
postArgs = { 'sid' : args['sid'], 'token' : args['token'], 'clientSecret': args['clientSecret'] }
reactor.callFromThread(self._do_emailvalidate, postArgs)
@defer.inlineCallbacks
def _do_emailvalidate(self, args):
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/submitToken"
)
url = self._identityServerUrl()+"/_matrix/identity/api/v1/validate/email/submitToken"
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print(json_res)
def do_3pidbind(self, line):
@@ -321,24 +305,19 @@ class SynapseCmd(cmd.Cmd):
<sid> The session ID (sid) given to you in the response to requestToken
<clientSecret> The same clientSecret you supplied in requestToken
"""
args = self._parse(line, ["sid", "clientSecret"])
args = self._parse(line, ['sid', 'clientSecret'])
postArgs = {"sid": args["sid"], "clientSecret": args["clientSecret"]}
postArgs["mxid"] = self.config["user"]
postArgs = { 'sid' : args['sid'], 'clientSecret': args['clientSecret'] }
postArgs['mxid'] = self.config["user"]
reactor.callFromThread(self._do_3pidbind, postArgs)
@defer.inlineCallbacks
def _do_3pidbind(self, args):
url = self._identityServerUrl() + "/_matrix/identity/api/v1/3pid/bind"
url = self._identityServerUrl()+"/_matrix/identity/api/v1/3pid/bind"
json_res = yield self.http_client.do_request(
"POST",
url,
data=urllib.urlencode(args),
jsonreq=False,
headers={"Content-Type": ["application/x-www-form-urlencoded"]},
)
json_res = yield self.http_client.do_request("POST", url, data=urllib.urlencode(args), jsonreq=False,
headers={'Content-Type': ['application/x-www-form-urlencoded']})
print(json_res)
def do_join(self, line):
@@ -377,7 +356,9 @@ class SynapseCmd(cmd.Cmd):
if "topic" not in args:
print("Must specify a new topic.")
return
body = {"topic": args["topic"]}
body = {
"topic": args["topic"]
}
reactor.callFromThread(self._run_and_pprint, "PUT", path, body)
elif args["action"].lower() == "get":
reactor.callFromThread(self._run_and_pprint, "GET", path)
@@ -397,60 +378,45 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_invite(self, roomid, userstring):
if not userstring.startswith("@") and self._is_on("complete_usernames"):
url = self._identityServerUrl() + "/_matrix/identity/api/v1/lookup"
if (not userstring.startswith('@') and
self._is_on("complete_usernames")):
url = self._identityServerUrl()+"/_matrix/identity/api/v1/lookup"
json_res = yield self.http_client.do_request(
"GET", url, qparams={"medium": "email", "address": userstring}
)
json_res = yield self.http_client.do_request("GET", url, qparams={'medium':'email','address':userstring})
mxid = None
if "mxid" in json_res and "signatures" in json_res:
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/pubkey/ed25519"
)
if 'mxid' in json_res and 'signatures' in json_res:
url = self._identityServerUrl()+"/_matrix/identity/api/v1/pubkey/ed25519"
pubKey = None
pubKeyObj = yield self.http_client.do_request("GET", url)
if "public_key" in pubKeyObj:
pubKey = nacl.signing.VerifyKey(
pubKeyObj["public_key"], encoder=nacl.encoding.HexEncoder
)
if 'public_key' in pubKeyObj:
pubKey = nacl.signing.VerifyKey(pubKeyObj['public_key'], encoder=nacl.encoding.HexEncoder)
else:
print("No public key found in pubkey response!")
sigValid = False
if pubKey:
for signame in json_res["signatures"]:
for signame in json_res['signatures']:
if signame not in TRUSTED_ID_SERVERS:
print(
"Ignoring signature from untrusted server %s"
% (signame)
)
print("Ignoring signature from untrusted server %s" % (signame))
else:
try:
verify_signed_json(json_res, signame, pubKey)
sigValid = True
print(
"Mapping %s -> %s correctly signed by %s"
% (userstring, json_res["mxid"], signame)
)
print("Mapping %s -> %s correctly signed by %s" % (userstring, json_res['mxid'], signame))
break
except SignatureVerifyException as e:
print("Invalid signature from %s" % (signame))
print(e)
if sigValid:
print("Resolved 3pid %s to %s" % (userstring, json_res["mxid"]))
mxid = json_res["mxid"]
print("Resolved 3pid %s to %s" % (userstring, json_res['mxid']))
mxid = json_res['mxid']
else:
print(
"Got association for %s but couldn't verify signature"
% (userstring)
)
print("Got association for %s but couldn't verify signature" % (userstring))
if not mxid:
mxid = "@" + userstring + ":" + self._domain()
@@ -469,11 +435,12 @@ class SynapseCmd(cmd.Cmd):
"""Sends a message. "send <roomid> <body>" """
args = self._parse(line, ["roomid", "body"])
txn_id = "txn%s" % int(time.time())
path = "/rooms/%s/send/m.room.message/%s" % (
urllib.quote(args["roomid"]),
txn_id,
)
body_json = {"msgtype": "m.text", "body": args["body"]}
path = "/rooms/%s/send/m.room.message/%s" % (urllib.quote(args["roomid"]),
txn_id)
body_json = {
"msgtype": "m.text",
"body": args["body"]
}
reactor.callFromThread(self._run_and_pprint, "PUT", path, body_json)
def do_list(self, line):
@@ -505,7 +472,8 @@ class SynapseCmd(cmd.Cmd):
print("Bad query param: %s" % key_value)
return
reactor.callFromThread(self._run_and_pprint, "GET", path, query_params=qp)
reactor.callFromThread(self._run_and_pprint, "GET", path,
query_params=qp)
def do_create(self, line):
"""Creates a room.
@@ -545,16 +513,8 @@ class SynapseCmd(cmd.Cmd):
return
args["method"] = args["method"].upper()
valid_methods = [
"PUT",
"GET",
"POST",
"DELETE",
"XPUT",
"XGET",
"XPOST",
"XDELETE",
]
valid_methods = ["PUT", "GET", "POST", "DELETE",
"XPUT", "XGET", "XPOST", "XDELETE"]
if args["method"] not in valid_methods:
print("Unsupported method: %s" % args["method"])
return
@@ -581,13 +541,10 @@ class SynapseCmd(cmd.Cmd):
except:
pass
reactor.callFromThread(
self._run_and_pprint,
args["method"],
args["path"],
args["data"],
query_params=qp,
)
reactor.callFromThread(self._run_and_pprint, args["method"],
args["path"],
args["data"],
query_params=qp)
def do_stream(self, line):
"""Stream data from the server: "stream <longpoll timeout ms>" """
@@ -604,22 +561,19 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_event_stream(self, timeout):
res = yield self.http_client.get_json(
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token,
},
)
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token
})
print(json.dumps(res, indent=4))
if "chunk" in res:
for event in res["chunk"]:
if (
event["type"] == "m.room.message"
and self._is_on("send_delivery_receipts")
and event["user_id"] != self._usr()
): # not sent by us
if (event["type"] == "m.room.message" and
self._is_on("send_delivery_receipts") and
event["user_id"] != self._usr()): # not sent by us
self._send_receipt(event, "d")
# update the position in the stream
@@ -627,28 +581,18 @@ class SynapseCmd(cmd.Cmd):
self.event_stream_token = res["end"]
def _send_receipt(self, event, feedback_type):
path = "/rooms/%s/messages/%s/%s/feedback/%s/%s" % (
urllib.quote(event["room_id"]),
event["user_id"],
event["msg_id"],
self._usr(),
feedback_type,
)
path = ("/rooms/%s/messages/%s/%s/feedback/%s/%s" %
(urllib.quote(event["room_id"]), event["user_id"], event["msg_id"],
self._usr(), feedback_type))
data = {}
reactor.callFromThread(
self._run_and_pprint,
"PUT",
path,
data=data,
alt_text="Sent receipt for %s" % event["msg_id"],
)
reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data,
alt_text="Sent receipt for %s" % event["msg_id"])
def _do_membership_change(self, roomid, membership, userid):
path = "/rooms/%s/state/m.room.member/%s" % (
urllib.quote(roomid),
urllib.quote(userid),
)
data = {"membership": membership}
path = "/rooms/%s/state/m.room.member/%s" % (urllib.quote(roomid), urllib.quote(userid))
data = {
"membership": membership
}
reactor.callFromThread(self._run_and_pprint, "PUT", path, data=data)
def do_displayname(self, line):
@@ -701,20 +645,15 @@ class SynapseCmd(cmd.Cmd):
for i, arg in enumerate(line_args):
for config_key in self.config:
if ("$" + config_key) in arg:
arg = arg.replace("$" + config_key, self.config[config_key])
arg = arg.replace("$" + config_key,
self.config[config_key])
line_args[i] = arg
return dict(zip(keys, line_args))
@defer.inlineCallbacks
def _run_and_pprint(
self,
method,
path,
data=None,
query_params={"access_token": None},
alt_text=None,
):
def _run_and_pprint(self, method, path, data=None,
query_params={"access_token": None}, alt_text=None):
""" Runs an HTTP request and pretty prints the output.
Args:
@@ -727,9 +666,9 @@ class SynapseCmd(cmd.Cmd):
if "access_token" in query_params:
query_params["access_token"] = self._tok()
json_res = yield self.http_client.do_request(
method, url, data=data, qparams=query_params
)
json_res = yield self.http_client.do_request(method, url,
data=data,
qparams=query_params)
if alt_text:
print(alt_text)
else:
@@ -737,7 +676,7 @@ class SynapseCmd(cmd.Cmd):
def save_config(config):
with open(CONFIG_JSON, "w") as out:
with open(CONFIG_JSON, 'w') as out:
json.dump(config, out)
@@ -761,7 +700,7 @@ def main(server_url, identity_server_url, username, token, config_path):
global CONFIG_JSON
CONFIG_JSON = config_path # bit cheeky, but just overwrite the global
try:
with open(config_path, "r") as config:
with open(config_path, 'r') as config:
syn_cmd.config = json.load(config)
try:
http_client.verbose = "on" == syn_cmd.config["verbose"]
@@ -778,33 +717,23 @@ def main(server_url, identity_server_url, username, token, config_path):
reactor.run()
if __name__ == "__main__":
if __name__ == '__main__':
parser = argparse.ArgumentParser("Starts a synapse client.")
parser.add_argument(
"-s",
"--server",
dest="server",
default="http://localhost:8008",
help="The URL of the home server to talk to.",
)
"-s", "--server", dest="server", default="http://localhost:8008",
help="The URL of the home server to talk to.")
parser.add_argument(
"-i",
"--identity-server",
dest="identityserver",
default="http://localhost:8090",
help="The URL of the identity server to talk to.",
)
"-i", "--identity-server", dest="identityserver", default="http://localhost:8090",
help="The URL of the identity server to talk to.")
parser.add_argument(
"-u", "--username", dest="username", help="Your username on the server."
)
parser.add_argument("-t", "--token", dest="token", help="Your access token.")
"-u", "--username", dest="username",
help="Your username on the server.")
parser.add_argument(
"-c",
"--config",
dest="config",
default=CONFIG_JSON,
help="The location of the config.json file to read from.",
)
"-t", "--token", dest="token",
help="Your access token.")
parser.add_argument(
"-c", "--config", dest="config", default=CONFIG_JSON,
help="The location of the config.json file to read from.")
args = parser.parse_args()
if not args.server:


@@ -73,7 +73,9 @@ class TwistedHttpClient(HttpClient):
@defer.inlineCallbacks
def put_json(self, url, data):
response = yield self._create_put_request(
url, data, headers_dict={"Content-Type": ["application/json"]}
url,
data,
headers_dict={"Content-Type": ["application/json"]}
)
body = yield readBody(response)
defer.returnValue((response.code, body))
@@ -93,34 +95,40 @@ class TwistedHttpClient(HttpClient):
"""
if "Content-Type" not in headers_dict:
raise defer.error(RuntimeError("Must include Content-Type header for PUTs"))
raise defer.error(
RuntimeError("Must include Content-Type header for PUTs"))
return self._create_request(
"PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict
"PUT",
url,
producer=_JsonProducer(json_data),
headers_dict=headers_dict
)
def _create_get_request(self, url, headers_dict={}):
""" Wrapper of _create_request to issue a GET request
"""
return self._create_request("GET", url, headers_dict=headers_dict)
return self._create_request(
"GET",
url,
headers_dict=headers_dict
)
@defer.inlineCallbacks
def do_request(
self, method, url, data=None, qparams=None, jsonreq=True, headers={}
):
def do_request(self, method, url, data=None, qparams=None, jsonreq=True, headers={}):
if qparams:
url = "%s?%s" % (url, urllib.urlencode(qparams, True))
if jsonreq:
prod = _JsonProducer(data)
headers["Content-Type"] = ["application/json"]
headers['Content-Type'] = ["application/json"];
else:
prod = _RawProducer(data)
if method in ["POST", "PUT"]:
response = yield self._create_request(
method, url, producer=prod, headers_dict=headers
)
response = yield self._create_request(method, url,
producer=prod,
headers_dict=headers)
else:
response = yield self._create_request(method, url)
@@ -147,7 +155,10 @@ class TwistedHttpClient(HttpClient):
while True:
try:
response = yield self.agent.request(
method, url.encode("UTF8"), Headers(headers_dict), producer
method,
url.encode("UTF8"),
Headers(headers_dict),
producer
)
break
except Exception as e:
@@ -168,7 +179,6 @@ class TwistedHttpClient(HttpClient):
reactor.callLater(seconds, d.callback, seconds)
return d
class _RawProducer(object):
def __init__(self, data):
self.data = data
@@ -185,11 +195,9 @@ class _RawProducer(object):
def stopProducing(self):
pass
class _JsonProducer(object):
""" Used by the twisted http client to create the HTTP body from json
"""
def __init__(self, jsn):
self.data = jsn
self.body = json.dumps(jsn).encode("utf8")


@@ -1,9 +1,5 @@
# Synapse Docker
FIXME: this is out-of-date as of
https://github.com/matrix-org/synapse/issues/5518. Contributions to bring it up
to date would be welcome.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including


@@ -19,13 +19,13 @@ from curses.ascii import isprint
from twisted.internet import reactor
class CursesStdIO:
class CursesStdIO():
def __init__(self, stdscr, callback=None):
self.statusText = "Synapse test app -"
self.searchText = ""
self.searchText = ''
self.stdscr = stdscr
self.logLine = ""
self.logLine = ''
self.callback = callback
@@ -71,7 +71,8 @@ class CursesStdIO:
i = 0
index = len(self.lines) - 1
while i < (self.rows - 3) and index >= 0:
self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index], curses.A_NORMAL)
self.stdscr.addstr(self.rows - 3 - i, 0, self.lines[index],
curses.A_NORMAL)
i = i + 1
index = index - 1
@@ -84,13 +85,15 @@ class CursesStdIO:
raise RuntimeError("TextTooLongError")
self.stdscr.addstr(
self.rows - 2, 0, text + " " * (self.cols - len(text)), curses.A_STANDOUT
)
self.rows - 2, 0,
text + ' ' * (self.cols - len(text)),
curses.A_STANDOUT)
def printLogLine(self, text):
self.stdscr.addstr(
0, 0, text + " " * (self.cols - len(text)), curses.A_STANDOUT
)
0, 0,
text + ' ' * (self.cols - len(text)),
curses.A_STANDOUT)
def doRead(self):
""" Input is ready! """
@@ -102,7 +105,7 @@ class CursesStdIO:
elif c == curses.KEY_ENTER or c == 10:
text = self.searchText
self.searchText = ""
self.searchText = ''
self.print_line(">> %s" % text)
@@ -119,13 +122,11 @@ class CursesStdIO:
return
self.searchText = self.searchText + chr(c)
self.stdscr.addstr(
self.rows - 1,
0,
self.searchText + (" " * (self.cols - len(self.searchText) - 2)),
)
self.stdscr.addstr(self.rows - 1, 0,
self.searchText + (' ' * (
self.cols - len(self.searchText) - 2)))
self.paintStatus(self.statusText + " %d" % len(self.searchText))
self.paintStatus(self.statusText + ' %d' % len(self.searchText))
self.stdscr.move(self.rows - 1, len(self.searchText))
self.stdscr.refresh()
@@ -142,6 +143,7 @@ class CursesStdIO:
class Callback(object):
def __init__(self, stdio):
self.stdio = stdio
@@ -150,7 +152,7 @@ class Callback(object):
def main(stdscr):
screen = CursesStdIO(stdscr) # create Screen object
screen = CursesStdIO(stdscr) # create Screen object
callback = Callback(screen)
@@ -162,5 +164,5 @@ def main(stdscr):
screen.close()
if __name__ == "__main__":
if __name__ == '__main__':
curses.wrapper(main)


@@ -28,7 +28,9 @@ Currently assumes the local address is localhost:<port>
"""
from synapse.federation import ReplicationHandler
from synapse.federation import (
ReplicationHandler
)
from synapse.federation.units import Pdu
@@ -36,7 +38,7 @@ from synapse.util import origin_from_ucid
from synapse.app.homeserver import SynapseHomeServer
# from synapse.util.logutils import log_function
#from synapse.util.logutils import log_function
from twisted.internet import reactor, defer
from twisted.python import log
@@ -81,7 +83,7 @@ class InputOutput(object):
room_name, = m.groups()
self.print_line("%s joining %s" % (self.user, room_name))
self.server.join_room(room_name, self.user, self.user)
# self.print_line("OK.")
#self.print_line("OK.")
return
m = re.match("^invite (\S+) (\S+)$", line)
@@ -90,7 +92,7 @@ class InputOutput(object):
room_name, invitee = m.groups()
self.print_line("%s invited to %s" % (invitee, room_name))
self.server.invite_to_room(room_name, self.user, invitee)
# self.print_line("OK.")
#self.print_line("OK.")
return
m = re.match("^send (\S+) (.*)$", line)
@@ -99,7 +101,7 @@ class InputOutput(object):
room_name, body = m.groups()
self.print_line("%s send to %s" % (self.user, room_name))
self.server.send_message(room_name, self.user, body)
# self.print_line("OK.")
#self.print_line("OK.")
return
m = re.match("^backfill (\S+)$", line)
@@ -123,6 +125,7 @@ class InputOutput(object):
class IOLoggerHandler(logging.Handler):
def __init__(self, io):
logging.Handler.__init__(self)
self.io = io
@@ -139,7 +142,6 @@ class Room(object):
""" Used to store (in memory) the current membership state of a room, and
which home servers we should send PDUs associated with the room to.
"""
def __init__(self, room_name):
self.room_name = room_name
self.invited = set()
@@ -173,7 +175,6 @@ class HomeServer(ReplicationHandler):
""" A very basic home server implentation that allows people to join a
room and then invite other people.
"""
def __init__(self, server_name, replication_layer, output):
self.server_name = server_name
self.replication_layer = replication_layer
@@ -196,27 +197,26 @@ class HomeServer(ReplicationHandler):
elif pdu.content["membership"] == "invite":
self._on_invite(pdu.origin, pdu.context, pdu.state_key)
else:
self.output.print_line(
"#%s (unrec) %s = %s"
% (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
self.output.print_line("#%s (unrec) %s = %s" %
(pdu.context, pdu.pdu_type, json.dumps(pdu.content))
)
# def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
#def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
# if "joinee" in pdu.content:
# self._on_join(pdu.context, pdu.content["joinee"])
# elif "invitee" in pdu.content:
# self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
#if "joinee" in pdu.content:
#self._on_join(pdu.context, pdu.content["joinee"])
#elif "invitee" in pdu.content:
#self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
def _on_message(self, pdu):
""" We received a message
"""
self.output.print_line(
"#%s %s %s" % (pdu.context, pdu.content["sender"], pdu.content["body"])
)
self.output.print_line("#%s %s %s" %
(pdu.context, pdu.content["sender"], pdu.content["body"])
)
def _on_join(self, context, joinee):
""" Someone has joined a room, either a remote user or a local user
@@ -224,7 +224,9 @@ class HomeServer(ReplicationHandler):
room = self._get_or_create_room(context)
room.add_participant(joinee)
self.output.print_line("#%s %s %s" % (context, joinee, "*** JOINED"))
self.output.print_line("#%s %s %s" %
(context, joinee, "*** JOINED")
)
def _on_invite(self, origin, context, invitee):
""" Someone has been invited
@@ -232,7 +234,9 @@ class HomeServer(ReplicationHandler):
room = self._get_or_create_room(context)
room.add_invited(invitee)
self.output.print_line("#%s %s %s" % (context, invitee, "*** INVITED"))
self.output.print_line("#%s %s %s" %
(context, invitee, "*** INVITED")
)
if not room.have_got_metadata and origin is not self.server_name:
logger.debug("Get room state")
@@ -268,14 +272,14 @@ class HomeServer(ReplicationHandler):
try:
pdu = Pdu.create_new(
context=room_name,
pdu_type="sy.room.member",
is_state=True,
state_key=joinee,
content={"membership": "join"},
origin=self.server_name,
destinations=destinations,
)
context=room_name,
pdu_type="sy.room.member",
is_state=True,
state_key=joinee,
content={"membership": "join"},
origin=self.server_name,
destinations=destinations,
)
yield self.replication_layer.send_pdu(pdu)
except Exception as e:
logger.exception(e)
@@ -314,21 +318,21 @@ class HomeServer(ReplicationHandler):
return self.replication_layer.backfill(dest, room_name, limit)
def _get_room_remote_servers(self, room_name):
return [i for i in self.joined_rooms.setdefault(room_name).servers]
return [i for i in self.joined_rooms.setdefault(room_name,).servers]
def _get_or_create_room(self, room_name):
return self.joined_rooms.setdefault(room_name, Room(room_name))
def get_servers_for_context(self, context):
return defer.succeed(
self.joined_rooms.setdefault(context, Room(context)).servers
)
self.joined_rooms.setdefault(context, Room(context)).servers
)
def main(stdscr):
parser = argparse.ArgumentParser()
parser.add_argument("user", type=str)
parser.add_argument("-v", "--verbose", action="count")
parser.add_argument('user', type=str)
parser.add_argument('-v', '--verbose', action='count')
args = parser.parse_args()
user = args.user
@@ -338,9 +342,8 @@ def main(stdscr):
root_logger = logging.getLogger()
formatter = logging.Formatter(
"%(asctime)s - %(name)s - %(lineno)d - " "%(levelname)s - %(message)s"
)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(lineno)d - '
'%(levelname)s - %(message)s')
if not os.path.exists("logs"):
os.makedirs("logs")
fh = logging.FileHandler("logs/%s" % user)

File diff suppressed because one or more lines are too long


@@ -1,5 +1,4 @@
from __future__ import print_function
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -59,9 +58,9 @@ def make_graph(pdus, room, filename_prefix):
name = make_name(pdu.get("pdu_id"), pdu.get("origin"))
pdu_map[name] = pdu
t = datetime.datetime.fromtimestamp(float(pdu["ts"]) / 1000).strftime(
"%Y-%m-%d %H:%M:%S,%f"
)
t = datetime.datetime.fromtimestamp(
float(pdu["ts"]) / 1000
).strftime('%Y-%m-%d %H:%M:%S,%f')
label = (
"<"
@@ -81,7 +80,11 @@ def make_graph(pdus, room, filename_prefix):
"depth": pdu.get("depth"),
}
node = pydot.Node(name=name, label=label, color=color_map[pdu.get("origin")])
node = pydot.Node(
name=name,
label=label,
color=color_map[pdu.get("origin")]
)
node_map[name] = node
graph.add_node(node)
@@ -105,13 +108,14 @@ def make_graph(pdus, room, filename_prefix):
if prev_state_name in node_map:
state_edge = pydot.Edge(
node_map[start_name], node_map[prev_state_name], style="dotted"
node_map[start_name], node_map[prev_state_name],
style='dotted'
)
graph.add_edge(state_edge)
graph.write("%s.dot" % filename_prefix, format="raw", prog="dot")
# graph.write_png("%s.png" % filename_prefix, prog='dot')
graph.write_svg("%s.svg" % filename_prefix, prog="dot")
graph.write('%s.dot' % filename_prefix, format='raw', prog='dot')
# graph.write_png("%s.png" % filename_prefix, prog='dot')
graph.write_svg("%s.svg" % filename_prefix, prog='dot')
def get_pdus(host, room):
@@ -127,14 +131,15 @@ def get_pdus(host, room):
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by talking "
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
)
parser.add_argument(
"-p", "--prefix", dest="prefix", help="String to prefix output files with"
"-p", "--prefix", dest="prefix",
help="String to prefix output files with"
)
parser.add_argument("host")
parser.add_argument("room")
parser.add_argument('host')
parser.add_argument('room')
args = parser.parse_args()

View File

@@ -36,7 +36,10 @@ def make_graph(db_name, room_id, file_prefix, limit):
args = [room_id]
if limit:
sql += " ORDER BY topological_ordering DESC, stream_ordering DESC " "LIMIT ?"
sql += (
" ORDER BY topological_ordering DESC, stream_ordering DESC "
"LIMIT ?"
)
args.append(limit)
@@ -53,8 +56,9 @@ def make_graph(db_name, room_id, file_prefix, limit):
for event in events:
c = conn.execute(
"SELECT state_group FROM event_to_state_groups " "WHERE event_id = ?",
(event.event_id,),
"SELECT state_group FROM event_to_state_groups "
"WHERE event_id = ?",
(event.event_id,)
)
res = c.fetchone()
@@ -65,7 +69,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
t = datetime.datetime.fromtimestamp(
float(event.origin_server_ts) / 1000
).strftime("%Y-%m-%d %H:%M:%S,%f")
).strftime('%Y-%m-%d %H:%M:%S,%f')
content = json.dumps(unfreeze(event.get_dict()["content"]))
@@ -89,7 +93,10 @@ def make_graph(db_name, room_id, file_prefix, limit):
"state_group": state_group,
}
node = pydot.Node(name=event.event_id, label=label)
node = pydot.Node(
name=event.event_id,
label=label,
)
node_map[event.event_id] = node
graph.add_node(node)
@@ -99,7 +106,10 @@ def make_graph(db_name, room_id, file_prefix, limit):
try:
end_node = node_map[prev_id]
except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
end_node = pydot.Node(
name=prev_id,
label="<<b>%s</b>>" % (prev_id,),
)
node_map[prev_id] = end_node
graph.add_node(end_node)
@@ -111,33 +121,36 @@ def make_graph(db_name, room_id, file_prefix, limit):
if len(event_ids) <= 1:
continue
cluster = pydot.Cluster(str(group), label="<State Group: %s>" % (str(group),))
cluster = pydot.Cluster(
str(group),
label="<State Group: %s>" % (str(group),)
)
for event_id in event_ids:
cluster.add_node(node_map[event_id])
graph.add_subgraph(cluster)
graph.write("%s.dot" % file_prefix, format="raw", prog="dot")
graph.write_svg("%s.svg" % file_prefix, prog="dot")
graph.write('%s.dot' % file_prefix, format='raw', prog='dot')
graph.write_svg("%s.svg" % file_prefix, prog='dot')
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by talking "
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
"to the given homeserver to get the list of PDUs. \n"
"Requires pydot."
)
parser.add_argument(
"-p",
"--prefix",
dest="prefix",
"-p", "--prefix", dest="prefix",
help="String to prefix output files with",
default="graph_output",
default="graph_output"
)
parser.add_argument("-l", "--limit", help="Only retrieve the last N events.")
parser.add_argument("db")
parser.add_argument("room")
parser.add_argument(
"-l", "--limit",
help="Only retrieve the last N events.",
)
parser.add_argument('db')
parser.add_argument('room')
args = parser.parse_args()

View File

@@ -1,5 +1,4 @@
from __future__ import print_function
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -43,7 +42,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
print("Sorted events")
if limit:
events = events[-int(limit) :]
events = events[-int(limit):]
node_map = {}
@@ -52,7 +51,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for event in events:
t = datetime.datetime.fromtimestamp(
float(event.origin_server_ts) / 1000
).strftime("%Y-%m-%d %H:%M:%S,%f")
).strftime('%Y-%m-%d %H:%M:%S,%f')
content = json.dumps(unfreeze(event.get_dict()["content"]), indent=4)
content = content.replace("\n", "<br/>\n")
@@ -68,10 +67,9 @@ def make_graph(file_name, room_id, file_prefix, limit):
value = json.dumps(value)
content.append(
"<b>%s</b>: %s,"
% (
cgi.escape(key, quote=True).encode("ascii", "xmlcharrefreplace"),
cgi.escape(value, quote=True).encode("ascii", "xmlcharrefreplace"),
"<b>%s</b>: %s," % (
cgi.escape(key, quote=True).encode("ascii", 'xmlcharrefreplace'),
cgi.escape(value, quote=True).encode("ascii", 'xmlcharrefreplace'),
)
)
@@ -97,7 +95,10 @@ def make_graph(file_name, room_id, file_prefix, limit):
"depth": event.depth,
}
node = pydot.Node(name=event.event_id, label=label)
node = pydot.Node(
name=event.event_id,
label=label,
)
node_map[event.event_id] = node
graph.add_node(node)
@@ -109,7 +110,10 @@ def make_graph(file_name, room_id, file_prefix, limit):
try:
end_node = node_map[prev_id]
except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
end_node = pydot.Node(
name=prev_id,
label="<<b>%s</b>>" % (prev_id,),
)
node_map[prev_id] = end_node
graph.add_node(end_node)
@@ -119,31 +123,31 @@ def make_graph(file_name, room_id, file_prefix, limit):
print("Created edges")
graph.write("%s.dot" % file_prefix, format="raw", prog="dot")
graph.write('%s.dot' % file_prefix, format='raw', prog='dot')
print("Created Dot")
graph.write_svg("%s.svg" % file_prefix, prog="dot")
graph.write_svg("%s.svg" % file_prefix, prog='dot')
print("Created svg")
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Generate a PDU graph for a given room by reading "
"from a file with line deliminated events. \n"
"Requires pydot."
"from a file with line deliminated events. \n"
"Requires pydot."
)
parser.add_argument(
"-p",
"--prefix",
dest="prefix",
"-p", "--prefix", dest="prefix",
help="String to prefix output files with",
default="graph_output",
default="graph_output"
)
parser.add_argument("-l", "--limit", help="Only retrieve the last N events.")
parser.add_argument("event_file")
parser.add_argument("room")
parser.add_argument(
"-l", "--limit",
help="Only retrieve the last N events.",
)
parser.add_argument('event_file')
parser.add_argument('room')
args = parser.parse_args()

View File

@@ -20,25 +20,24 @@ import urllib
import subprocess
import time
# ACCESS_TOKEN="" #
#ACCESS_TOKEN="" #
MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
MYUSERNAME = "@davetest:matrix.org"
MATRIXBASE = 'https://matrix.org/_matrix/client/api/v1/'
MYUSERNAME = '@davetest:matrix.org'
HTTPBIND = "https://meet.jit.si/http-bind"
# HTTPBIND = 'https://jitsi.vuc.me/http-bind'
# ROOMNAME = "matrix"
HTTPBIND = 'https://meet.jit.si/http-bind'
#HTTPBIND = 'https://jitsi.vuc.me/http-bind'
#ROOMNAME = "matrix"
ROOMNAME = "pibble"
HOST = "guest.jit.si"
# HOST="jitsi.vuc.me"
HOST="guest.jit.si"
#HOST="jitsi.vuc.me"
TURNSERVER = "turn.guest.jit.si"
# TURNSERVER="turn.jitsi.vuc.me"
ROOMDOMAIN = "meet.jit.si"
# ROOMDOMAIN="conference.jitsi.vuc.me"
TURNSERVER="turn.guest.jit.si"
#TURNSERVER="turn.jitsi.vuc.me"
ROOMDOMAIN="meet.jit.si"
#ROOMDOMAIN="conference.jitsi.vuc.me"
class TrivialMatrixClient:
def __init__(self, access_token):
@@ -47,50 +46,38 @@ class TrivialMatrixClient:
def getEvent(self):
while True:
url = (
MATRIXBASE
+ "events?access_token="
+ self.access_token
+ "&timeout=60000"
)
url = MATRIXBASE+'events?access_token='+self.access_token+"&timeout=60000"
if self.token:
url += "&from=" + self.token
url += "&from="+self.token
req = grequests.get(url)
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print("incoming from matrix", obj)
if "end" not in obj:
print("incoming from matrix",obj)
if 'end' not in obj:
continue
self.token = obj["end"]
if len(obj["chunk"]):
return obj["chunk"][0]
self.token = obj['end']
if len(obj['chunk']):
return obj['chunk'][0]
def joinRoom(self, roomId):
url = MATRIXBASE + "rooms/" + roomId + "/join?access_token=" + self.access_token
url = MATRIXBASE+'rooms/'+roomId+'/join?access_token='+self.access_token
print(url)
headers = {"Content-Type": "application/json"}
req = grequests.post(url, headers=headers, data="{}")
headers={ 'Content-Type': 'application/json' }
req = grequests.post(url, headers=headers, data='{}')
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print("response: ", obj)
print("response: ",obj)
def sendEvent(self, roomId, evType, event):
url = (
MATRIXBASE
+ "rooms/"
+ roomId
+ "/send/"
+ evType
+ "?access_token="
+ self.access_token
)
url = MATRIXBASE+'rooms/'+roomId+'/send/'+evType+'?access_token='+self.access_token
print(url)
print(json.dumps(event))
headers = {"Content-Type": "application/json"}
headers={ 'Content-Type': 'application/json' }
req = grequests.post(url, headers=headers, data=json.dumps(event))
resps = grequests.map([req])
obj = json.loads(resps[0].content)
print("response: ", obj)
print("response: ",obj)
xmppClients = {}
@@ -100,39 +87,38 @@ def matrixLoop():
while True:
ev = matrixCli.getEvent()
print(ev)
if ev["type"] == "m.room.member":
print("membership event")
if ev["membership"] == "invite" and ev["state_key"] == MYUSERNAME:
roomId = ev["room_id"]
if ev['type'] == 'm.room.member':
print('membership event')
if ev['membership'] == 'invite' and ev['state_key'] == MYUSERNAME:
roomId = ev['room_id']
print("joining room %s" % (roomId))
matrixCli.joinRoom(roomId)
elif ev["type"] == "m.room.message":
if ev["room_id"] in xmppClients:
elif ev['type'] == 'm.room.message':
if ev['room_id'] in xmppClients:
print("already have a bridge for that user, ignoring")
continue
print("got message, connecting")
xmppClients[ev["room_id"]] = TrivialXmppClient(ev["room_id"], ev["user_id"])
gevent.spawn(xmppClients[ev["room_id"]].xmppLoop)
elif ev["type"] == "m.call.invite":
xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev['type'] == 'm.call.invite':
print("Incoming call")
# sdp = ev['content']['offer']['sdp']
# print "sdp: %s" % (sdp)
# xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
# gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev["type"] == "m.call.answer":
#sdp = ev['content']['offer']['sdp']
#print "sdp: %s" % (sdp)
#xmppClients[ev['room_id']] = TrivialXmppClient(ev['room_id'], ev['user_id'])
#gevent.spawn(xmppClients[ev['room_id']].xmppLoop)
elif ev['type'] == 'm.call.answer':
print("Call answered")
sdp = ev["content"]["answer"]["sdp"]
if ev["room_id"] not in xmppClients:
sdp = ev['content']['answer']['sdp']
if ev['room_id'] not in xmppClients:
print("We didn't have a call for that room")
continue
# should probably check call ID too
xmppCli = xmppClients[ev["room_id"]]
xmppCli = xmppClients[ev['room_id']]
xmppCli.sendAnswer(sdp)
elif ev["type"] == "m.call.hangup":
if ev["room_id"] in xmppClients:
xmppClients[ev["room_id"]].stop()
del xmppClients[ev["room_id"]]
elif ev['type'] == 'm.call.hangup':
if ev['room_id'] in xmppClients:
xmppClients[ev['room_id']].stop()
del xmppClients[ev['room_id']]
class TrivialXmppClient:
def __init__(self, matrixRoom, userId):
@@ -146,155 +132,130 @@ class TrivialXmppClient:
def nextRid(self):
self.rid += 1
return "%d" % (self.rid)
return '%d' % (self.rid)
def sendIq(self, xml):
fullXml = (
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s'>%s</body>"
% (self.nextRid(), self.sid, xml)
)
# print "\t>>>%s" % (fullXml)
fullXml = "<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s'>%s</body>" % (self.nextRid(), self.sid, xml)
#print "\t>>>%s" % (fullXml)
return self.xmppPoke(fullXml)
def xmppPoke(self, xml):
headers = {"Content-Type": "application/xml"}
headers = {'Content-Type': 'application/xml'}
req = grequests.post(HTTPBIND, verify=False, headers=headers, data=xml)
resps = grequests.map([req])
obj = BeautifulSoup(resps[0].content)
return obj
def sendAnswer(self, answer):
print("sdp from matrix client", answer)
p = subprocess.Popen(
["node", "unjingle/unjingle.js", "--sdp"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
)
print("sdp from matrix client",answer)
p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--sdp'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
jingle, out_err = p.communicate(answer)
jingle = jingle % {
"tojid": self.callfrom,
"action": "session-accept",
"initiator": self.callfrom,
"responder": self.jid,
"sid": self.callsid,
'tojid': self.callfrom,
'action': 'session-accept',
'initiator': self.callfrom,
'responder': self.jid,
'sid': self.callsid
}
print("answer jingle from sdp", jingle)
print("answer jingle from sdp",jingle)
res = self.sendIq(jingle)
print("reply from answer: ", res)
print("reply from answer: ",res)
self.ssrcs = {}
jingleSoup = BeautifulSoup(jingle)
for cont in jingleSoup.iq.jingle.findAll("content"):
for cont in jingleSoup.iq.jingle.findAll('content'):
if cont.description:
self.ssrcs[cont["name"]] = cont.description["ssrc"]
print("my ssrcs:", self.ssrcs)
self.ssrcs[cont['name']] = cont.description['ssrc']
print("my ssrcs:",self.ssrcs)
gevent.joinall([gevent.spawn(self.advertiseSsrcs)])
gevent.joinall([
gevent.spawn(self.advertiseSsrcs)
])
def advertiseSsrcs(self):
time.sleep(7)
print("SSRC spammer started")
while self.running:
ssrcMsg = (
"<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>"
% {
"tojid": "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid),
"nick": self.userId,
"assrc": self.ssrcs["audio"],
"vssrc": self.ssrcs["video"],
}
)
ssrcMsg = "<presence to='%(tojid)s' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%(nick)s</nick><stats xmlns='http://jitsi.org/jitmeet/stats'><stat name='bitrate_download' value='175'/><stat name='bitrate_upload' value='176'/><stat name='packetLoss_total' value='0'/><stat name='packetLoss_download' value='0'/><stat name='packetLoss_upload' value='0'/></stats><media xmlns='http://estos.de/ns/mjs'><source type='audio' ssrc='%(assrc)s' direction='sendre'/><source type='video' ssrc='%(vssrc)s' direction='sendre'/></media></presence>" % { 'tojid': "%s@%s/%s" % (ROOMNAME, ROOMDOMAIN, self.shortJid), 'nick': self.userId, 'assrc': self.ssrcs['audio'], 'vssrc': self.ssrcs['video'] }
res = self.sendIq(ssrcMsg)
print("reply from ssrc announce: ", res)
print("reply from ssrc announce: ",res)
time.sleep(10)
def xmppLoop(self):
self.matrixCallId = time.time()
res = self.xmppPoke(
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' to='%s' xml:lang='en' wait='60' hold='1' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>"
% (self.nextRid(), HOST)
)
res = self.xmppPoke("<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' to='%s' xml:lang='en' wait='60' hold='1' content='text/xml; charset=utf-8' ver='1.6' xmpp:version='1.0' xmlns:xmpp='urn:xmpp:xbosh'/>" % (self.nextRid(), HOST))
print(res)
self.sid = res.body["sid"]
self.sid = res.body['sid']
print("sid %s" % (self.sid))
res = self.sendIq(
"<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='ANONYMOUS'/>"
)
res = self.sendIq("<auth xmlns='urn:ietf:params:xml:ns:xmpp-sasl' mechanism='ANONYMOUS'/>")
res = self.xmppPoke(
"<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s' to='%s' xml:lang='en' xmpp:restart='true' xmlns:xmpp='urn:xmpp:xbosh'/>"
% (self.nextRid(), self.sid, HOST)
)
res = self.xmppPoke("<body rid='%s' xmlns='http://jabber.org/protocol/httpbind' sid='%s' to='%s' xml:lang='en' xmpp:restart='true' xmlns:xmpp='urn:xmpp:xbosh'/>" % (self.nextRid(), self.sid, HOST))
res = self.sendIq(
"<iq type='set' id='_bind_auth_2' xmlns='jabber:client'><bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'/></iq>"
)
res = self.sendIq("<iq type='set' id='_bind_auth_2' xmlns='jabber:client'><bind xmlns='urn:ietf:params:xml:ns:xmpp-bind'/></iq>")
print(res)
self.jid = res.body.iq.bind.jid.string
print("jid: %s" % (self.jid))
self.shortJid = self.jid.split("-")[0]
self.shortJid = self.jid.split('-')[0]
res = self.sendIq(
"<iq type='set' id='_session_auth_2' xmlns='jabber:client'><session xmlns='urn:ietf:params:xml:ns:xmpp-session'/></iq>"
)
res = self.sendIq("<iq type='set' id='_session_auth_2' xmlns='jabber:client'><session xmlns='urn:ietf:params:xml:ns:xmpp-session'/></iq>")
# randomthing = res.body.iq['to']
# whatsitpart = randomthing.split('-')[0]
#randomthing = res.body.iq['to']
#whatsitpart = randomthing.split('-')[0]
# print "other random bind thing: %s" % (randomthing)
#print "other random bind thing: %s" % (randomthing)
# advertise presence to the jitsi room, with our nick
res = self.sendIq(
"<iq type='get' to='%s' xmlns='jabber:client' id='1:sendIQ'><services xmlns='urn:xmpp:extdisco:1'><service host='%s'/></services></iq><presence to='%s@%s/d98f6c40' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%s</nick></presence>"
% (HOST, TURNSERVER, ROOMNAME, ROOMDOMAIN, self.userId)
)
self.muc = {"users": []}
for p in res.body.findAll("presence"):
res = self.sendIq("<iq type='get' to='%s' xmlns='jabber:client' id='1:sendIQ'><services xmlns='urn:xmpp:extdisco:1'><service host='%s'/></services></iq><presence to='%s@%s/d98f6c40' xmlns='jabber:client'><x xmlns='http://jabber.org/protocol/muc'/><c xmlns='http://jabber.org/protocol/caps' hash='sha-1' node='http://jitsi.org/jitsimeet' ver='0WkSdhFnAUxrz4ImQQLdB80GFlE='/><nick xmlns='http://jabber.org/protocol/nick'>%s</nick></presence>" % (HOST, TURNSERVER, ROOMNAME, ROOMDOMAIN, self.userId))
self.muc = {'users': []}
for p in res.body.findAll('presence'):
u = {}
u["shortJid"] = p["from"].split("/")[1]
u['shortJid'] = p['from'].split('/')[1]
if p.c and p.c.nick:
u["nick"] = p.c.nick.string
self.muc["users"].append(u)
print("muc: ", self.muc)
u['nick'] = p.c.nick.string
self.muc['users'].append(u)
print("muc: ",self.muc)
# wait for stuff
while True:
print("waiting...")
res = self.sendIq("")
print("got from stream: ", res)
print("got from stream: ",res)
if res.body.iq:
jingles = res.body.iq.findAll("jingle")
jingles = res.body.iq.findAll('jingle')
if len(jingles):
self.callfrom = res.body.iq["from"]
self.callfrom = res.body.iq['from']
self.handleInvite(jingles[0])
elif "type" in res.body and res.body["type"] == "terminate":
elif 'type' in res.body and res.body['type'] == 'terminate':
self.running = False
del xmppClients[self.matrixRoom]
return
def handleInvite(self, jingle):
self.initiator = jingle["initiator"]
self.callsid = jingle["sid"]
p = subprocess.Popen(
["node", "unjingle/unjingle.js", "--jingle"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
)
print("raw jingle invite", str(jingle))
self.initiator = jingle['initiator']
self.callsid = jingle['sid']
p = subprocess.Popen(['node', 'unjingle/unjingle.js', '--jingle'], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
print("raw jingle invite",str(jingle))
sdp, out_err = p.communicate(str(jingle))
print("transformed remote offer sdp", sdp)
print("transformed remote offer sdp",sdp)
inviteEvent = {
"offer": {"type": "offer", "sdp": sdp},
"call_id": self.matrixCallId,
"version": 0,
"lifetime": 30000,
'offer': {
'type': 'offer',
'sdp': sdp
},
'call_id': self.matrixCallId,
'version': 0,
'lifetime': 30000
}
matrixCli.sendEvent(self.matrixRoom, "m.call.invite", inviteEvent)
matrixCli.sendEvent(self.matrixRoom, 'm.call.invite', inviteEvent)
matrixCli = TrivialMatrixClient(ACCESS_TOKEN) # Undefined name
gevent.joinall([gevent.spawn(matrixLoop)])
gevent.joinall([
gevent.spawn(matrixLoop)
])

View File

@@ -3,7 +3,7 @@ Purge history API examples
# `purge_history.sh`
A bash file that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
A bash file that uses the [purge history API](/docs/admin_api/README.rst) to
purge all messages in a list of rooms up to a certain event. You can select a
timeframe or a number of messages that you want to keep in the room.
@@ -12,5 +12,5 @@ the script.
# `purge_remote_media.sh`
A bash file that uses the [purge history API](/docs/admin_api/purge_history_api.rst) to
A bash file that uses the [purge history API](/docs/admin_api/README.rst) to
purge all old cached remote media.
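
For reference, a minimal sketch of the kind of request these scripts wrap, assuming
an admin access token in `$TOKEN`; the hostname, room ID and timestamp below are
placeholders:

```
# Purge history in a room up to a given timestamp (admin access token required).
curl -X POST \
  -H 'Content-Type: application/json' \
  -d '{"purge_up_to_ts": 1500000000000, "delete_local_events": false}' \
  "https://matrix.example.com/_matrix/client/r0/admin/purge_history/%21room%3Aexample.com?access_token=$TOKEN"
```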

View File

@@ -11,22 +11,22 @@ try:
except NameError: # Python 3
raw_input = input
def _mkurl(template, kws):
for key in kws:
template = template.replace(key, kws[key])
return template
def main(hs, room_id, access_token, user_id_prefix, why):
if not why:
why = "Automated kick."
print(
"Kicking members on %s in room %s matching %s" % (hs, room_id, user_id_prefix)
)
print("Kicking members on %s in room %s matching %s" % (hs, room_id, user_id_prefix))
room_state_url = _mkurl(
"$HS/_matrix/client/api/v1/rooms/$ROOM/state?access_token=$TOKEN",
{"$HS": hs, "$ROOM": room_id, "$TOKEN": access_token},
{
"$HS": hs,
"$ROOM": room_id,
"$TOKEN": access_token
}
)
print("Getting room state => %s" % room_state_url)
res = requests.get(room_state_url)
@@ -57,16 +57,24 @@ def main(hs, room_id, access_token, user_id_prefix, why):
for uid in kick_list:
print(uid)
doit = raw_input("Continue? [Y]es\n")
if len(doit) > 0 and doit.lower() == "y":
if len(doit) > 0 and doit.lower() == 'y':
print("Kicking members...")
# encode them all
kick_list = [urllib.quote(uid) for uid in kick_list]
for uid in kick_list:
kick_url = _mkurl(
"$HS/_matrix/client/api/v1/rooms/$ROOM/state/m.room.member/$UID?access_token=$TOKEN",
{"$HS": hs, "$UID": uid, "$ROOM": room_id, "$TOKEN": access_token},
{
"$HS": hs,
"$UID": uid,
"$ROOM": room_id,
"$TOKEN": access_token
}
)
kick_body = {"membership": "leave", "reason": why}
kick_body = {
"membership": "leave",
"reason": why
}
print("Kicking %s" % uid)
res = requests.put(kick_url, data=json.dumps(kick_body))
if res.status_code != 200:
@@ -75,15 +83,14 @@ def main(hs, room_id, access_token, user_id_prefix, why):
print("ERROR: JSON %s" % res.json())
if __name__ == "__main__":
parser = ArgumentParser("Kick members in a room matching a certain user ID prefix.")
parser.add_argument("-u", "--user-id", help="The user ID prefix e.g. '@irc_'")
parser.add_argument("-t", "--token", help="Your access_token")
parser.add_argument("-r", "--room", help="The room ID to kick members in")
parser.add_argument(
"-s", "--homeserver", help="The base HS url e.g. http://matrix.org"
)
parser.add_argument("-w", "--why", help="Reason for the kick. Optional.")
parser.add_argument("-u","--user-id",help="The user ID prefix e.g. '@irc_'")
parser.add_argument("-t","--token",help="Your access_token")
parser.add_argument("-r","--room",help="The room ID to kick members in")
parser.add_argument("-s","--homeserver",help="The base HS url e.g. http://matrix.org")
parser.add_argument("-w","--why",help="Reason for the kick. Optional.")
args = parser.parse_args()
if not args.room or not args.token or not args.user_id or not args.homeserver:
parser.print_help()
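
As a usage sketch (the script's filename is not shown in this diff, so `kick_members.py`
is hypothetical, as are the homeserver, room ID and token):

```
python kick_members.py \
  -s https://matrix.example.com \
  -r '!abc123:example.com' \
  -u '@irc_' \
  -t YOUR_ACCESS_TOKEN \
  -w "Cleaning up bridged users"
```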

View File

@@ -43,7 +43,7 @@ dh_virtualenv \
--preinstall="mock" \
--extra-pip-arg="--no-cache-dir" \
--extra-pip-arg="--compile" \
--extras="all,systemd"
--extras="all"
PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"

7
debian/changelog vendored
View File

@@ -1,10 +1,3 @@
matrix-synapse-py3 (1.0.0+nmu1) UNRELEASED; urgency=medium
[ Silke Hofstra ]
* Include systemd-python to allow logging to the systemd journal.
-- Silke Hofstra <silke@slxh.eu> Wed, 29 May 2019 09:45:29 +0200
matrix-synapse-py3 (1.0.0) stable; urgency=medium
* New synapse release 1.0.0.

View File

@@ -6,25 +6,23 @@ import cgi, logging
from daemonize import Daemonize
class SimpleHTTPRequestHandlerWithPOST(SimpleHTTPServer.SimpleHTTPRequestHandler):
UPLOAD_PATH = "upload"
"""
Accept all POST requests as file uploads
"""
def do_POST(self):
path = os.path.join(self.UPLOAD_PATH, os.path.basename(self.path))
length = self.headers["content-length"]
length = self.headers['content-length']
data = self.rfile.read(int(length))
with open(path, "wb") as fh:
with open(path, 'wb') as fh:
fh.write(data)
self.send_response(200)
self.send_header("Content-Type", "application/json")
self.send_header('Content-Type', 'application/json')
self.end_headers()
# Return the absolute path of the uploaded file
@@ -35,25 +33,30 @@ def setup():
parser = argparse.ArgumentParser()
parser.add_argument("directory")
parser.add_argument("-p", "--port", dest="port", type=int, default=8080)
parser.add_argument("-P", "--pid-file", dest="pid", default="web.pid")
parser.add_argument('-P', "--pid-file", dest="pid", default="web.pid")
args = parser.parse_args()
# Get absolute path to directory to serve, as daemonize changes to '/'
os.chdir(args.directory)
dr = os.getcwd()
httpd = BaseHTTPServer.HTTPServer(("", args.port), SimpleHTTPRequestHandlerWithPOST)
httpd = BaseHTTPServer.HTTPServer(
('', args.port),
SimpleHTTPRequestHandlerWithPOST
)
def run():
os.chdir(dr)
httpd.serve_forever()
daemon = Daemonize(
app="synapse-webclient", pid=args.pid, action=run, auto_close_fds=False
)
app="synapse-webclient",
pid=args.pid,
action=run,
auto_close_fds=False,
)
daemon.start()
if __name__ == "__main__":
if __name__ == '__main__':
setup()

View File

@@ -11,7 +11,7 @@
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
#
ARG PYTHON_VERSION=3.7
ARG PYTHON_VERSION=2
###
### Stage 0: builder
@@ -57,7 +57,6 @@ RUN pip install --prefix="/install" --no-warn-script-location \
FROM docker.io/python:${PYTHON_VERSION}-alpine3.8
# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
@@ -65,9 +64,7 @@ RUN apk add --no-cache --virtual .runtime_deps \
libxslt \
libpq \
zlib \
su-exec \
tzdata \
xmlsec
su-exec
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py

View File

@@ -6,11 +6,39 @@ postgres database.
The image also does *not* provide a TURN server.
## Run
### Using docker-compose (easier)
This image is designed to run either with an automatically generated
configuration file or with a custom configuration that requires manual editing.
An easy way to make use of this image is via docker-compose. See the
[contrib/docker](https://github.com/matrix-org/synapse/tree/master/contrib/docker) section of the synapse project for
examples.
### Without Compose (harder)
If you do not wish to use Compose, you may still run this image using plain
Docker commands. Note that the following is just a guideline and you may need
to add parameters to the docker run command to account for the network situation
with your postgres database.
```
docker run \
-d \
--name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
-p 8448:8448 \
matrixdotorg/synapse:latest
```
## Volumes
By default, the image expects a single volume, located at ``/data``, that will hold:
The image expects a single volume, located at ``/data``, that will hold:
* configuration files;
* temporary files during uploads;
* uploaded media and thumbnails;
* the SQLite database if you do not configure postgres;
@@ -25,106 +53,129 @@ In order to setup an application service, simply create an ``appservices``
directory in the data volume and write the application service Yaml
configuration file there. Multiple application services are supported.
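
For example, a minimal sketch of adding one (every value below is a placeholder; use
the registration file supplied by your bridge or application service):

```
mkdir -p /var/lib/docker/volumes/synapse-data/_data/appservices
cat > /var/lib/docker/volumes/synapse-data/_data/appservices/my-bridge.yaml <<'EOF'
# Placeholder registration -- replace every value with your bridge's own.
id: my-bridge
url: http://bridge:9000
as_token: CHANGE_ME
hs_token: CHANGE_ME
sender_localpart: my-bridge
namespaces:
  users:
    - exclusive: true
      regex: "@bridge_.*"
EOF
```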
## Generating a configuration file
## TLS certificates
The first step is to generate a valid config file. To do this, you can run the
image with the `generate` commandline option.
Synapse requires a valid TLS certificate. You can do one of the following:
You will need to specify values for the `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS` environment variables, and mount a docker volume to store
the configuration on. For example:
* Provide your own certificate and key (as
`${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.crt` and
`${DATA_PATH}/${SYNAPSE_SERVER_NAME}.tls.key`, or elsewhere by providing an
entire config as `${SYNAPSE_CONFIG_PATH}`). In this case, you should forward
traffic to port 8448 in the container, for example with `-p 443:8448`.
* Use a reverse proxy to terminate incoming TLS, and forward the plain http
traffic to port 8008 in the container. In this case you should set `-e
SYNAPSE_NO_TLS=1`.
* Use the ACME (Let's Encrypt) support built into Synapse. This requires
`${SYNAPSE_SERVER_NAME}` port 80 to be forwarded to port 8009 in the
container, for example with `-p 80:8009`. To enable it in the docker
container, set `-e SYNAPSE_ACME=1`.
If you don't do any of these, Synapse will fail to start with an error similar to:
synapse.config._base.ConfigError: Error accessing file '/data/<server_name>.tls.crt' (config for tls_certificate): No such file or directory
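
For example, the reverse-proxy variant might look like this (the hostname is
illustrative):

```
docker run -d --name synapse \
  --mount type=volume,src=synapse-data,dst=/data \
  -e SYNAPSE_SERVER_NAME=my.matrix.host \
  -e SYNAPSE_REPORT_STATS=yes \
  -e SYNAPSE_NO_TLS=1 \
  -p 127.0.0.1:8008:8008 \
  matrixdotorg/synapse:latest
```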
## Environment
Unless you specify a custom path for the configuration file, a very generic
file will be generated, based on the following environment settings.
These are a good starting point for setting up your own deployment.
Global settings:
* ``UID``, the user id Synapse will run as [default 991]
* ``GID``, the group id Synapse will run as [default 991]
* ``SYNAPSE_CONFIG_PATH``, path to a custom config file
If ``SYNAPSE_CONFIG_PATH`` is set, you should generate a configuration file
then customize it manually: see [Generating a config
file](#generating-a-config-file).
Otherwise, a dynamic configuration file will be used.
### Environment variables used to build a dynamic configuration file
The following environment variables are used to build the configuration file
when ``SYNAPSE_CONFIG_PATH`` is not set.
* ``SYNAPSE_SERVER_NAME`` (mandatory), the server public hostname.
* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project which helps us to get funding.
* `SYNAPSE_NO_TLS` (accepts `true`, `false`, `on`, `off`, `1`, `0`, `yes`, `no`): disable
TLS in Synapse (use this if you run your own TLS-capable reverse proxy). Defaults
to `false` (ie, TLS is enabled by default).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
the Synapse instance.
* ``SYNAPSE_ALLOW_GUEST``, set this variable to allow guest joining this server.
* ``SYNAPSE_EVENT_CACHE_SIZE``, the event cache size [default `10K`].
* ``SYNAPSE_RECAPTCHA_PUBLIC_KEY``, set this variable to the recaptcha public
key in order to enable recaptcha upon registration.
* ``SYNAPSE_RECAPTCHA_PRIVATE_KEY``, set this variable to the recaptcha private
key in order to enable recaptcha upon registration.
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
uris to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size
[default `10M`].
* ``SYNAPSE_ACME``: set this to enable the ACME certificate renewal support.
Shared secrets, which will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY`` secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):
* `POSTGRES_DB` - The database name for the synapse postgres
database. [default: `synapse`]
* `POSTGRES_HOST` - The host of the postgres database if you wish to use
postgresql instead of sqlite3. [default: `db` which is useful when using a
container on the same docker network in a compose file where the postgres
service is called `db`]
* `POSTGRES_PASSWORD` - The password for the synapse postgres database. **If
this is set then postgres will be used instead of sqlite3.** [default: none]
**NOTE**: You are highly encouraged to use postgresql! Please use the compose
file to make it easier to deploy.
* `POSTGRES_USER` - The user for the synapse postgres database. [default:
`synapse`]
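
A sketch of wiring these up on the command line (the network name is illustrative,
and postgres is assumed to be reachable as `db`):

```
docker run -d --name synapse \
  --network synapse-net \
  --mount type=volume,src=synapse-data,dst=/data \
  -e SYNAPSE_SERVER_NAME=my.matrix.host \
  -e SYNAPSE_REPORT_STATS=yes \
  -e POSTGRES_HOST=db \
  -e POSTGRES_USER=synapse \
  -e POSTGRES_PASSWORD=changeme \
  matrixdotorg/synapse:latest
```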
Mail server specific values (will not send emails if not set):
* ``SYNAPSE_SMTP_HOST``, hostname to the mail server.
* ``SYNAPSE_SMTP_PORT``, TCP port for accessing the mail server [default
``25``].
* ``SYNAPSE_SMTP_USER``, username for authenticating against the mail server if
any.
* ``SYNAPSE_SMTP_PASSWORD``, password for authenticating against the mail
server if any.
### Generating a config file
It is possible to generate a basic configuration file for use with
`SYNAPSE_CONFIG_PATH` using the `generate` commandline option. You will need to
specify values for `SYNAPSE_CONFIG_PATH`, `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS`, and mount a docker volume to store the data on. For
example:
```
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest generate
```
For information on picking a suitable server name, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md.
The above command will generate a `homeserver.yaml` in (typically)
`/var/lib/docker/volumes/synapse-data/_data`. You should check this file, and
customise it to your needs.
The following environment variables are supported in `generate` mode:
* `SYNAPSE_SERVER_NAME` (mandatory): the server public hostname.
* `SYNAPSE_REPORT_STATS` (mandatory, `yes` or `no`): whether to enable
anonymous statistics reporting.
* `SYNAPSE_CONFIG_DIR`: where additional config files (such as the log config
and event signing key) will be stored. Defaults to `/data`.
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
such as the database and media store. Defaults to `/data`.
* `UID`, `GID`: the user id and group id to use for creating the data
directories. Defaults to `991`, `991`.
## Running synapse
Once you have a valid configuration file, you can start synapse as follows:
This will generate a `homeserver.yaml` in (typically)
`/var/lib/docker/volumes/synapse-data/_data`, which you can then customise and
use with:
```
docker run -d --name synapse \
--mount type=volume,src=synapse-data,dst=/data \
-p 8008:8008 \
-e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
matrixdotorg/synapse:latest
```
You can then check that it has started correctly with:
```
docker logs synapse
```
If all is well, you should now be able to connect to http://localhost:8008 and
see a confirmation message.
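
As a further sanity check, you can query the client API directly:

```
curl http://localhost:8008/_matrix/client/versions
```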
The following environment variables are supported in run mode:
* `SYNAPSE_CONFIG_DIR`: where additional config files are stored. Defaults to
`/data`.
* `SYNAPSE_CONFIG_PATH`: path to the config file. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
## TLS support
The default configuration exposes a single HTTP port: http://localhost:8008. It
is suitable for local testing, but for any practical use, you will either need
to use a reverse proxy, or configure Synapse to expose an HTTPS port.
For documentation on using a reverse proxy, see
https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
For more information on enabling TLS support in synapse itself, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md#tls-certificates. Of
course, you will need to expose the TLS port from the container with a `-p`
argument to `docker run`.
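
For example, a sketch that exposes the default federation port alongside the client
port:

```
docker run -d --name synapse \
  --mount type=volume,src=synapse-data,dst=/data \
  -e SYNAPSE_CONFIG_PATH=/data/homeserver.yaml \
  -p 8008:8008 \
  -p 8448:8448 \
  matrixdotorg/synapse:latest
```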
## Legacy dynamic configuration file support
For backwards-compatibility only, the docker image supports creating a dynamic
configuration file based on environment variables. This is now deprecated, but
is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is
not given).
To migrate from a dynamic configuration file to a static one, run the docker
container once with the environment variables set, and `migrate_config`
commandline option. For example:
```
docker run -it --rm \
--mount type=volume,src=synapse-data,dst=/data \
-e SYNAPSE_SERVER_NAME=my.matrix.host \
-e SYNAPSE_REPORT_STATS=yes \
matrixdotorg/synapse:latest migrate_config
```
This will generate the same configuration file as the legacy mode used, but
will store it in `/data/homeserver.yaml` instead of a temporary location. You
can then use it as shown above at [Running synapse](#running-synapse).

View File

@@ -21,7 +21,7 @@ server_name: "{{ SYNAPSE_SERVER_NAME }}"
pid_file: /homeserver.pid
web_client: False
soft_file_limit: 0
log_config: "{{ SYNAPSE_LOG_CONFIG }}"
log_config: "/compiled/log.config"
## Ports ##
@@ -207,3 +207,22 @@ perspectives:
password_config:
enabled: true
{% if SYNAPSE_SMTP_HOST %}
email:
enable_notifs: false
smtp_host: "{{ SYNAPSE_SMTP_HOST }}"
smtp_port: {{ SYNAPSE_SMTP_PORT or "25" }}
smtp_user: "{{ SYNAPSE_SMTP_USER }}"
smtp_pass: "{{ SYNAPSE_SMTP_PASSWORD }}"
require_transport_security: False
notif_from: "{{ SYNAPSE_SMTP_FROM or "hostmaster@" + SYNAPSE_SERVER_NAME }}"
app_name: Matrix
# if template_dir is unset, uses the example templates that are part of
# the Synapse distribution.
#template_dir: res/templates
notif_template_html: notif_mail.html
notif_template_text: notif_mail.txt
notif_for_new_users: True
riot_base_url: "https://{{ SYNAPSE_SERVER_NAME }}"
{% endif %}

View File

@@ -16,11 +16,14 @@ handlers:
filters: [context]
loggers:
synapse:
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
root:
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
level: {{ SYNAPSE_LOG_LEVEL or "WARNING" }}
handlers: [console]

View File

@@ -1,243 +1,89 @@
#!/usr/local/bin/python
import codecs
import glob
import os
import subprocess
import sys
import jinja2
import os
import sys
import subprocess
import glob
import codecs
# Utility functions
def log(txt):
print(txt, file=sys.stderr)
convert = lambda src, dst, environ: open(dst, "w").write(jinja2.Template(open(src).read()).render(**environ))
def check_arguments(environ, args):
for argument in args:
if argument not in environ:
print("Environment variable %s is mandatory, exiting." % argument)
sys.exit(2)
def error(txt):
log(txt)
sys.exit(2)
def convert(src, dst, environ):
"""Generate a file from a template
Args:
src (str): path to input file
dst (str): path to file to write
environ (dict): environment dictionary, for replacement mappings.
"""
with open(src) as infile:
template = infile.read()
rendered = jinja2.Template(template).render(**environ)
with open(dst, "w") as outfile:
outfile.write(rendered)
def generate_config_from_template(config_dir, config_path, environ, ownership):
"""Generate a homeserver.yaml from environment variables
Args:
config_dir (str): where to put generated config files
config_path (str): where to put the main config file
environ (dict): environment dictionary
ownership (str): "<user>:<group>" string which will be used to set
ownership of the generated configs
"""
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error(
"Environment variable '%s' is mandatory when generating a config file."
% (v,)
)
# populate some params from data files (if they exist, else create new ones)
environ = environ.copy()
secrets = {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
}
def generate_secrets(environ, secrets):
for name, secret in secrets.items():
if secret not in environ:
filename = "/data/%s.%s.key" % (environ["SYNAPSE_SERVER_NAME"], name)
# if the file already exists, load in the existing value; otherwise,
# generate a new secret and write it to a file
if os.path.exists(filename):
log("Reading %s from %s" % (secret, filename))
with open(filename) as handle:
value = handle.read()
with open(filename) as handle: value = handle.read()
else:
log("Generating a random secret for {}".format(secret))
print("Generating a random secret for {}".format(name))
value = codecs.encode(os.urandom(32), "hex").decode()
with open(filename, "w") as handle:
handle.write(value)
with open(filename, "w") as handle: handle.write(value)
environ[secret] = value
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists(config_dir):
os.mkdir(config_dir)
# Prepare the configuration
mode = sys.argv[1] if len(sys.argv) > 1 else None
environ = os.environ.copy()
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
args = ["python", "-m", "synapse.app.homeserver"]
# Convert SYNAPSE_NO_TLS to boolean if exists
if "SYNAPSE_NO_TLS" in environ:
tlsanswerstring = str.lower(environ["SYNAPSE_NO_TLS"])
if tlsanswerstring in ("true", "on", "1", "yes"):
environ["SYNAPSE_NO_TLS"] = True
else:
if tlsanswerstring in ("false", "off", "0", "no"):
environ["SYNAPSE_NO_TLS"] = False
else:
error(
'Environment variable "SYNAPSE_NO_TLS" found but value "'
+ tlsanswerstring
+ '" unrecognized; exiting.'
)
if "SYNAPSE_LOG_CONFIG" not in environ:
environ["SYNAPSE_LOG_CONFIG"] = config_dir + "/log.config"
log("Generating synapse config file " + config_path)
convert("/conf/homeserver.yaml", config_path, environ)
log_config_file = environ["SYNAPSE_LOG_CONFIG"]
log("Generating log config file " + log_config_file)
convert("/conf/log.config", log_config_file, environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
# Hopefully we already have a signing key, but generate one if not.
subprocess.check_output(
[
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
# tell synapse to put generated keys in /data rather than /compiled
"--keys-directory",
config_dir,
"--generate-keys",
]
)
def run_generate_config(environ, ownership):
"""Run synapse with a --generate-config param to generate a template config file
Args:
environ (dict): env var dict
ownership (str): "userid:groupid" arg for chmod
Never returns.
"""
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error("Environment variable '%s' is mandatory in `generate` mode." % (v,))
server_name = environ["SYNAPSE_SERVER_NAME"]
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
if not os.path.exists(log_config_file):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = [
"python",
"-m",
"synapse.app.homeserver",
"--server-name",
server_name,
"--report-stats",
environ["SYNAPSE_REPORT_STATS"],
"--config-path",
config_path,
"--config-directory",
config_dir,
"--data-directory",
data_dir,
"--generate-config",
"--open-private-ports",
# In generate mode, generate a configuration, missing keys, then exit
if mode == "generate":
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS", "SYNAPSE_CONFIG_PATH"))
args += [
"--server-name", environ["SYNAPSE_SERVER_NAME"],
"--report-stats", environ["SYNAPSE_REPORT_STATS"],
"--config-path", environ["SYNAPSE_CONFIG_PATH"],
"--generate-config"
]
# log("running %s" % (args, ))
os.execv("/usr/local/bin/python", args)
def main(args, environ):
mode = args[1] if len(args) > 1 else None
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
# In generate mode, generate a configuration and missing keys, then exit
if mode == "generate":
return run_generate_config(environ, ownership)
if mode == "migrate_config":
# generate a config based on environment vars.
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
return generate_config_from_template(
config_dir, config_path, environ, ownership
)
if mode is not None:
error("Unknown execution mode '%s'" % (mode,))
if "SYNAPSE_SERVER_NAME" in environ:
# backwards-compatibility generate-a-config-on-the-fly mode
if "SYNAPSE_CONFIG_PATH" in environ:
error(
"SYNAPSE_SERVER_NAME and SYNAPSE_CONFIG_PATH are mutually exclusive "
"except in `generate` or `migrate_config` mode."
)
# In normal mode, generate missing keys if any, then run synapse
else:
if "SYNAPSE_CONFIG_PATH" in environ:
config_path = environ["SYNAPSE_CONFIG_PATH"]
else:
check_arguments(environ, ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"))
generate_secrets(environ, {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY"
})
environ["SYNAPSE_APPSERVICES"] = glob.glob("/data/appservices/*.yaml")
if not os.path.exists("/compiled"): os.mkdir("/compiled")
config_path = "/compiled/homeserver.yaml"
log(
"Generating config file '%s' on-the-fly from environment variables.\n"
"Note that this mode is deprecated. You can migrate to a static config\n"
"file by running with 'migrate_config'. See the README for more details."
% (config_path,)
)
# Convert SYNAPSE_NO_TLS to boolean if exists
if "SYNAPSE_NO_TLS" in environ:
tlsanswerstring = str.lower(environ["SYNAPSE_NO_TLS"])
if tlsanswerstring in ("true", "on", "1", "yes"):
environ["SYNAPSE_NO_TLS"] = True
else:
if tlsanswerstring in ("false", "off", "0", "no"):
environ["SYNAPSE_NO_TLS"] = False
else:
print("Environment variable \"SYNAPSE_NO_TLS\" found but value \"" + tlsanswerstring + "\" unrecognized; exiting.")
sys.exit(2)
generate_config_from_template("/compiled", config_path, environ, ownership)
else:
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
if not os.path.exists(config_path):
error(
"Config file '%s' does not exist. You should either create a new "
"config file by running with the `generate` argument (and then edit "
"the resulting file before restarting) or specify the path to an "
"existing config file with the SYNAPSE_CONFIG_PATH variable."
% (config_path,)
)
convert("/conf/homeserver.yaml", config_path, environ)
convert("/conf/log.config", "/compiled/log.config", environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
log("Starting synapse with config file " + config_path)
args = [
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
args += [
"--config-path", config_path,
# tell synapse to put any generated keys in /data rather than /compiled
"--keys-directory", "/data",
]
os.execv("/sbin/su-exec", args)
if __name__ == "__main__":
main(sys.argv, os.environ)
# Generate missing keys and start synapse
subprocess.check_output(args + ["--generate-keys"])
os.execv("/sbin/su-exec", ["su-exec", ownership] + args)

View File

@@ -14,7 +14,7 @@ upgraded, however it may be of use to those with old installs returning to the
project.
If you are setting up a server from scratch you almost certainly should look at
the [installation guide](../INSTALL.md) instead.
the [installation guide](INSTALL.md) instead.
## Introduction
The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It

View File

@@ -1,60 +1,43 @@
# Code Style
- Everything should comply with PEP8. Code should pass
``pep8 --max-line-length=100`` without any warnings.
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
in code.
- **Indenting**:
The necessary tools are detailed below.
- NEVER tabs. 4 spaces to indent.
## Formatting tools
- follow PEP8; either hanging indent or multiline-visual indent depending
on the size and shape of the arguments and what makes more sense to the
author. In other words, both this::
The Synapse codebase uses [black](https://pypi.org/project/black/) as an
opinionated code formatter, ensuring all committed code is properly
formatted.
print("I am a fish %s" % "moo")
First install ``black`` with::
and this::
pip install --upgrade black
print("I am a fish %s" %
"moo")
Have ``black`` auto-format your code (it shouldn't change any
functionality) with::
and this::
black . --exclude="\.tox|build|env"
print(
"I am a fish %s" %
"moo",
)
- **flake8**
...are valid, although given each one takes up 2x more vertical space than
the previous, it's up to the author's discretion as to which layout makes
most sense for their function invocation. (e.g. if they want to add
comments per-argument, or put expressions in the arguments, or group
related arguments together, or want to deliberately extend or preserve
vertical/horizontal space)
``flake8`` is a code checking tool. We require code to pass ``flake8`` before being merged into the codebase.
- **Line length**:
Install ``flake8`` with::
Max line length is 79 chars (with flexibility to overflow by a "few chars" if
the overflowing content is not semantically significant and avoids an
explosion of vertical whitespace).
pip install --upgrade flake8
Check all application and test code with::
flake8 synapse tests
- **isort**
``isort`` ensures imports are nicely formatted, and can suggest and
auto-fix issues such as double-importing.
Install ``isort`` with::
pip install --upgrade isort
Auto-fix imports with::
isort -rc synapse tests
``-rc`` means to recursively search the given directories.
It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
## General rules
Use parentheses instead of ``\`` for line continuation wherever possible
(which is pretty much everywhere).
- **Naming**:
@@ -63,6 +46,26 @@ takes a while and is very resource intensive.
- Use double quotes ``"foo"`` rather than single quotes ``'foo'``.
- **Blank lines**:
- There should be max a single new line between:
- statements
- functions in a class
- There should be two new lines between:
- definitions in a module (e.g., between different classes)
- **Whitespace**:
There should be spaces where spaces should be and not where there shouldn't
be:
- a single space after a comma
- a single space before and after for '=' when used as assignment
- no spaces before and after for '=' for default values and keyword arguments.
- **Comments**: should follow the `google code style
<http://google.github.io/styleguide/pyguide.html?showone=Comments#Comments>`_.
This is so that we can generate documentation with `sphinx
@@ -73,7 +76,7 @@ takes a while and is very resource intensive.
- **Imports**:
- Prefer to import classes and functions rather than packages or modules.
- Prefer to import classes and functions than packages or modules.
Example::

View File

@@ -89,10 +89,8 @@ Let's assume that we expect clients to connect to our server at
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-path path_beg /_matrix
use_backend matrix if matrix-host matrix-path
acl matrix hdr(host) -i matrix.example.com
use_backend matrix if matrix
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1

View File

@@ -23,6 +23,29 @@ server_name: "SERVERNAME"
#
pid_file: DATADIR/homeserver.pid
# CPU affinity mask. Setting this restricts the CPUs on which the
# process will be scheduled. It is represented as a bitmask, with the
# lowest order bit corresponding to the first logical CPU and the
# highest order bit corresponding to the last logical CPU. Not all CPUs
# may exist on a given system but a mask may specify more CPUs than are
# present.
#
# For example:
# 0x00000001 is processor #0,
# 0x00000003 is processors #0 and #1,
# 0xFFFFFFFF is all processors (#0 through #31).
#
# Pinning a Python process to a single CPU is desirable, because Python
# is inherently single-threaded due to the GIL, and can suffer a
# 30-40% slowdown due to cache blow-out and thread context switching
# if the scheduler happens to schedule the underlying threads across
# different cores. See
# https://www.mirantis.com/blog/improve-performance-python-programs-restricting-single-cpu/.
#
# This setting requires the affinity package to be installed!
#
#cpu_affinity: 0xFFFFFFFF
# The path to the web client which will be served at /_matrix/client/
# if 'webclient' is configured under the 'listeners' configuration.
#
@@ -54,15 +77,11 @@ pid_file: DATADIR/homeserver.pid
#
#require_auth_for_profile_requests: true
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
# If set to 'true', requires authentication to access the server's
# public rooms directory through the client API, and forbids any other
# homeserver to fetch it via federation. Defaults to 'false'.
#
#allow_public_rooms_without_auth: false
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
#
#allow_public_rooms_over_federation: false
#restrict_public_rooms_to_local_users: true
# The default room version for newly created rooms.
#
@@ -213,7 +232,7 @@ listeners:
- names: [client, federation]
compress: false
# example additional_resources:
# example additonal_resources:
#
#additional_resources:
# "/_matrix/my/custom/endpoint":
@@ -317,15 +336,6 @@ listeners:
#
#federation_verify_certificates: false
# The minimum TLS version that will be used for outbound federation requests.
#
# Defaults to `1`. Configurable to `1`, `1.1`, `1.2`, or `1.3`. Note
# that setting this value higher than `1.2` will prevent federation to most
# of the public Matrix network: only configure it to `1.3` if you have an
# entirely private federation setup and you can ensure TLS 1.3 support.
#
#federation_client_minimum_tls_version: 1.2
# Skip federation certificate verification on the following whitelist
# of domains.
#
@@ -415,13 +425,6 @@ acme:
#
#domain: matrix.example.com
# file to use for the account key. This will be generated if it doesn't
# exist.
#
# If unspecified, we will use CONFDIR/client.key.
#
account_key_file: DATADIR/acme_account.key
# List of allowed TLS fingerprints for this server to publish along
# with the signing keys for this server. Other matrix servers that
# make HTTPS requests to this server will check that the TLS
@@ -997,12 +1000,6 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# so it is not normally necessary to specify them unless you need to
# override them.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
# the IdP to use an ACS location of
# https://<server>:<port>/_matrix/saml2/authn_response.
#
#saml2_config:
# sp_config:
# # point this to the IdP's metadata. You can use either a local file or
@@ -1012,15 +1009,7 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# remote:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like to
# # allow IdP-initiated login, set 'allow_unsolicited: True' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: True
#
# # The examples below are just used to generate our metadata xml, and you
# # The rest of sp_config is just used to generate our metadata xml, and you
# # may well not need it, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
@@ -1043,12 +1032,6 @@ signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# # separate pysaml2 configuration file:
# #
# config_path: "CONFDIR/sp_conf.py"
#
# # the lifetime of a SAML session. This defines how long a user has to
# # complete the authentication process, if allow_unsolicited is unset.
# # The default is 5 minutes.
# #
# # saml_session_lifetime: 5m
@@ -1075,12 +1058,6 @@ password_config:
#
#enabled: false
# Uncomment to disable authentication against the local password
# database. This is ignored if `enabled` is false, and is only useful
# if you have other password_providers.
#
#localdb_enabled: false
# Uncomment and change to a secret random string for extra security.
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#
@@ -1105,13 +1082,11 @@ password_config:
# app_name: Matrix
#
# # Enable email notifications by default
# #
# notif_for_new_users: True
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Enable sending password reset emails via the configured, trusted
@@ -1124,22 +1099,16 @@ password_config:
# #
# # If this option is set to false and SMTP options have not been
# # configured, resetting user passwords via email will be disabled
# #
# #trust_identity_server_for_password_resets: false
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# # directory
# #
# #template_dir: res/templates
#

View File

@@ -18,220 +18,226 @@ import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath(".."))
sys.path.insert(0, os.path.abspath('..'))
# -- General configuration ------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
# needs_sphinx = '1.0'
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.intersphinx",
"sphinx.ext.coverage",
"sphinx.ext.ifconfig",
"sphinxcontrib.napoleon",
'sphinx.ext.autodoc',
'sphinx.ext.intersphinx',
'sphinx.ext.coverage',
'sphinx.ext.ifconfig',
'sphinxcontrib.napoleon',
]
# Add any paths that contain templates here, relative to this directory.
templates_path = ["_templates"]
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = ".rst"
source_suffix = '.rst'
# The encoding of source files.
# source_encoding = 'utf-8-sig'
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = "index"
master_doc = 'index'
# General information about the project.
project = "Synapse"
copyright = (
"Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd"
)
project = u'Synapse'
copyright = u'Copyright 2014-2017 OpenMarket Ltd, 2017 Vector Creations Ltd, 2017 New Vector Ltd'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = "1.0"
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = "1.0"
release = '1.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
# language = None
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# today = ''
#today = ''
# Else, today_fmt is used as the format for a strftime call.
# today_fmt = '%B %d, %Y'
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ["_build"]
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all
# documents.
# default_role = None
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
# add_function_parentheses = True
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
# add_module_names = True
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
# show_authors = False
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = "sphinx"
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
# modindex_common_prefix = []
#modindex_common_prefix = []
# If true, keep warnings as "system message" paragraphs in the built documents.
# keep_warnings = False
#keep_warnings = False
# -- Options for HTML output ----------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = "default"
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
# html_theme_options = {}
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
# html_theme_path = []
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
# html_title = None
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
# html_short_title = None
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
# html_logo = None
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
# html_favicon = None
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ["_static"]
html_static_path = ['_static']
# Add any extra paths that contain custom files (such as robots.txt or
# .htaccess) here, relative to this directory. These files are copied
# directly to the root of the documentation.
# html_extra_path = []
#html_extra_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
# html_last_updated_fmt = '%b %d, %Y'
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
# html_use_smartypants = True
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
# html_sidebars = {}
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
# html_additional_pages = {}
#html_additional_pages = {}
# If false, no module index is generated.
# html_domain_indices = True
#html_domain_indices = True
# If false, no index is generated.
# html_use_index = True
#html_use_index = True
# If true, the index is split into individual pages for each letter.
# html_split_index = False
#html_split_index = False
# If true, links to the reST sources are added to the pages.
# html_show_sourcelink = True
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
# html_show_sphinx = True
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
# html_show_copyright = True
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
# html_use_opensearch = ''
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
# html_file_suffix = None
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = "Synapsedoc"
htmlhelp_basename = 'Synapsedoc'
# -- Options for LaTeX output ---------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [("index", "Synapse.tex", "Synapse Documentation", "TNG", "manual")]
latex_documents = [
('index', 'Synapse.tex', u'Synapse Documentation',
u'TNG', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
# latex_logo = None
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
# latex_use_parts = False
#latex_use_parts = False
# If true, show page references after internal links.
# latex_show_pagerefs = False
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
# latex_show_urls = False
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
# latex_appendices = []
#latex_appendices = []
# If false, no module index is generated.
# latex_domain_indices = True
#latex_domain_indices = True
# -- Options for manual page output ---------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [("index", "synapse", "Synapse Documentation", ["TNG"], 1)]
man_pages = [
('index', 'synapse', u'Synapse Documentation',
[u'TNG'], 1)
]
# If true, show URL addresses after external links.
# man_show_urls = False
#man_show_urls = False
# -- Options for Texinfo output -------------------------------------------
@@ -240,32 +246,26 @@ man_pages = [("index", "synapse", "Synapse Documentation", ["TNG"], 1)]
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(
"index",
"Synapse",
"Synapse Documentation",
"TNG",
"Synapse",
"One line description of project.",
"Miscellaneous",
)
('index', 'Synapse', u'Synapse Documentation',
u'TNG', 'Synapse', 'One line description of project.',
'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
# texinfo_appendices = []
#texinfo_appendices = []
# If false, no module index is generated.
# texinfo_domain_indices = True
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
# texinfo_show_urls = 'footnote'
#texinfo_show_urls = 'footnote'
# If true, do not generate a @detailmenu in the "Top" node's menu.
# texinfo_no_detailmenu = False
#texinfo_no_detailmenu = False
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {"http://docs.python.org/": None}
intersphinx_mapping = {'http://docs.python.org/': None}
napoleon_include_special_with_doc = True
napoleon_use_ivar = True

View File

@@ -239,13 +239,6 @@ be routed to the same instance::
^/_matrix/client/(r0|unstable)/register$
Pagination requests can also be handled, but all requests for a given room
must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight (a routing sketch follows the pattern below)::
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
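A minimal sketch of how a routing layer could pin these paths to a worker,
assuming four pagination workers and a stable hash; the worker count, helper
name and example path are hypothetical:

import re
import zlib

# Extract the room id from a /messages path so that all pagination traffic
# for a room lands on the same worker. zlib.crc32 is used because Python's
# built-in hash() is not stable across processes.
PATTERN = re.compile(r"^/_matrix/client/(api/v1|r0|unstable)/rooms/([^/]+)/messages$")

def pick_worker(path, n_workers=4):
    m = PATTERN.match(path)
    if m is None:
        return None  # not a pagination request; route it normally
    room_id = m.group(2)
    return zlib.crc32(room_id.encode("utf-8")) % n_workers

print(pick_worker("/_matrix/client/r0/rooms/!abc%3Aexample.com/messages"))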
``synapse.app.user_dir``
~~~~~~~~~~~~~~~~~~~~~~~~

View File

@@ -28,22 +28,3 @@
directory = "misc"
name = "Internal Changes"
showcontent = true
[tool.black]
target-version = ['py34']
exclude = '''
(
/(
\.eggs # exclude a few common directories in the
| \.git # root of the project
| \.tox
| \.venv
| _build
| _trial_temp.*
| build
| dist
| debian
)/
)
'''

View File

@@ -39,11 +39,11 @@ def check_auth(auth, auth_chain, events):
print("Success:", e.event_id, e.type, e.state_key)
if __name__ == "__main__":
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
'json', nargs='?', type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()

View File

@@ -0,0 +1,54 @@
import argparse
import hashlib
import json
import logging
import sys
from unpaddedbase64 import encode_base64
from synapse.crypto.event_signing import (
check_event_content_hash,
compute_event_reference_hash,
)
class dictobj(dict):
def __init__(self, *args, **kargs):
dict.__init__(self, *args, **kargs)
self.__dict__ = self
def get_dict(self):
return dict(self)
def get_full_dict(self):
return dict(self)
def get_pdu_json(self):
return dict(self)
def main():
parser = argparse.ArgumentParser()
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()
logging.basicConfig()
event_json = dictobj(json.load(args.input_json))
algorithms = {"sha256": hashlib.sha256}
for alg_name in event_json.hashes:
if check_event_content_hash(event_json, algorithms[alg_name]):
print("PASS content hash %s" % (alg_name,))
else:
print("FAIL content hash %s" % (alg_name,))
for algorithm in algorithms.values():
name, h_bytes = compute_event_reference_hash(event_json, algorithm)
print("Reference hash %s: %s" % (name, encode_base64(h_bytes)))
if __name__ == "__main__":
main()
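For context, a minimal sketch of the content-hash rule that
check_event_content_hash applies, assuming the spec's canonical-JSON encoding
(the real helper also strips a few internal bookkeeping keys such as age_ts):

import hashlib
import json

def content_hash(event):
    # Drop the keys excluded from the content hash, canonicalise the rest
    # (sorted keys, no whitespace, UTF-8), then hash the bytes.
    stripped = {
        k: v
        for k, v in event.items()
        if k not in ("unsigned", "signatures", "hashes")
    }
    canonical = json.dumps(
        stripped, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")
    return hashlib.sha256(canonical).digest()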

View File

@@ -1,3 +1,4 @@
import argparse
import json
import logging
@@ -39,7 +40,7 @@ def main():
parser = argparse.ArgumentParser()
parser.add_argument("signature_name")
parser.add_argument(
"input_json", nargs="?", type=argparse.FileType("r"), default=sys.stdin
"input_json", nargs="?", type=argparse.FileType('r'), default=sys.stdin
)
args = parser.parse_args()
@@ -68,5 +69,5 @@ def main():
print("FAIL %s" % (key_id,))
if __name__ == "__main__":
if __name__ == '__main__':
main()

View File

@@ -116,5 +116,5 @@ def main():
connection.commit()
if __name__ == "__main__":
if __name__ == '__main__':
main()

View File

@@ -19,10 +19,10 @@ class DefinitionVisitor(ast.NodeVisitor):
self.names = {}
self.attrs = set()
self.definitions = {
"def": self.functions,
"class": self.classes,
"names": self.names,
"attrs": self.attrs,
'def': self.functions,
'class': self.classes,
'names': self.names,
'attrs': self.attrs,
}
def visit_Name(self, node):
@@ -47,23 +47,23 @@ class DefinitionVisitor(ast.NodeVisitor):
def non_empty(defs):
functions = {name: non_empty(f) for name, f in defs["def"].items()}
classes = {name: non_empty(f) for name, f in defs["class"].items()}
functions = {name: non_empty(f) for name, f in defs['def'].items()}
classes = {name: non_empty(f) for name, f in defs['class'].items()}
result = {}
if functions:
result["def"] = functions
result['def'] = functions
if classes:
result["class"] = classes
names = defs["names"]
result['class'] = classes
names = defs['names']
uses = []
for name in names.get("Load", ()):
if name not in names.get("Param", ()) and name not in names.get("Store", ()):
for name in names.get('Load', ()):
if name not in names.get('Param', ()) and name not in names.get('Store', ()):
uses.append(name)
uses.extend(defs["attrs"])
uses.extend(defs['attrs'])
if uses:
result["uses"] = uses
result["names"] = names
result["attrs"] = defs["attrs"]
result['uses'] = uses
result['names'] = names
result['attrs'] = defs['attrs']
return result
@@ -81,33 +81,33 @@ def definitions_in_file(filepath):
def defined_names(prefix, defs, names):
for name, funcs in defs.get("def", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
for name, funcs in defs.get('def', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
for name, funcs in defs.get("class", {}).items():
names.setdefault(name, {"defined": []})["defined"].append(prefix + name)
for name, funcs in defs.get('class', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
def used_names(prefix, item, defs, names):
for name, funcs in defs.get("def", {}).items():
for name, funcs in defs.get('def', {}).items():
used_names(prefix + name + ".", name, funcs, names)
for name, funcs in defs.get("class", {}).items():
for name, funcs in defs.get('class', {}).items():
used_names(prefix + name + ".", name, funcs, names)
path = prefix.rstrip(".")
for used in defs.get("uses", ()):
path = prefix.rstrip('.')
for used in defs.get('uses', ()):
if used in names:
if item:
names[item].setdefault("uses", []).append(used)
names[used].setdefault("used", {}).setdefault(item, []).append(path)
names[item].setdefault('uses', []).append(used)
names[used].setdefault('used', {}).setdefault(item, []).append(path)
if __name__ == "__main__":
if __name__ == '__main__':
parser = argparse.ArgumentParser(description="Find definitions.")
parser = argparse.ArgumentParser(description='Find definitions.')
parser.add_argument(
"--unused", action="store_true", help="Only list unused definitions"
)
@@ -119,7 +119,7 @@ if __name__ == "__main__":
)
parser.add_argument(
"directories",
nargs="+",
nargs='+',
metavar="DIR",
help="Directories to search for definitions",
)
@@ -164,7 +164,7 @@ if __name__ == "__main__":
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
if args.unused and definition.get("used"):
if args.unused and definition.get('used'):
continue
result[name] = definition
@@ -196,9 +196,9 @@ if __name__ == "__main__":
continue
result[name] = definition
if args.format == "yaml":
if args.format == 'yaml':
yaml.dump(result, sys.stdout, default_flow_style=False)
elif args.format == "dot":
elif args.format == 'dot':
print("digraph {")
for name, entry in result.items():
print(name)

View File

@@ -63,7 +63,7 @@ def encode_canonical_json(value):
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
ensure_ascii=False,
# Remove unnecessary white space.
separators=(",", ":"),
separators=(',', ':'),
# Sort the keys of dictionaries.
sort_keys=True,
# Encode the resulting unicode as UTF-8 bytes.
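For example, with the options above, a small (hypothetical) input encodes as
follows; note the sorted keys, the lack of whitespace and the raw UTF-8:

import json

assert json.dumps(
    {"b": 1, "a": "é"}, ensure_ascii=False, separators=(",", ":"), sort_keys=True
).encode("utf-8") == b'{"a":"\xc3\xa9","b":1}'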
@@ -145,7 +145,7 @@ def request_json(method, origin_name, origin_key, destination, path, content):
authorization_headers = []
for key, sig in signed_json["signatures"][origin_name].items():
header = 'X-Matrix origin=%s,key="%s",sig="%s"' % (origin_name, key, sig)
header = "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (origin_name, key, sig)
authorization_headers.append(header.encode("ascii"))
print("Authorization: %s" % header, file=sys.stderr)
@@ -161,7 +161,11 @@ def request_json(method, origin_name, origin_key, destination, path, content):
headers["Content-Type"] = "application/json"
result = s.request(
method=method, url=dest, headers=headers, verify=False, data=content
method=method,
url=dest,
headers=headers,
verify=False,
data=content,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
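A sketch of the object that gets signed and the header built from it; the
server names and signature value are hypothetical, and requests with a body
also include a "content" key:

# The JSON that is signed for a GET request (sketch):
signed = {
    "method": "GET",
    "uri": "/_matrix/federation/v1/version",
    "origin": "origin.example.com",
    "destination": "destination.example.com",
    # signatures maps origin -> key id -> unpadded-base64 signature
    "signatures": {"origin.example.com": {"ed25519:a_AbCd": "hypotheticalSig"}},
}
for key, sig in signed["signatures"]["origin.example.com"].items():
    print('X-Matrix origin=%s,key="%s",sig="%s"' % ("origin.example.com", key, sig))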
@@ -237,18 +241,18 @@ def main():
def read_args_from_config(args):
with open(args.config, "r") as fh:
with open(args.config, 'r') as fh:
config = yaml.safe_load(fh)
if not args.server_name:
args.server_name = config["server_name"]
args.server_name = config['server_name']
if not args.signing_key_path:
args.signing_key_path = config["signing_key_path"]
args.signing_key_path = config['signing_key_path']
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s, skip_well_known=False):
if s[-1] == "]":
if s[-1] == ']':
# ipv6 literal (with no port)
return s, 8448
@@ -264,7 +268,9 @@ class MatrixConnectionAdapter(HTTPAdapter):
if not skip_well_known:
well_known = MatrixConnectionAdapter.get_well_known(s)
if well_known:
return MatrixConnectionAdapter.lookup(well_known, skip_well_known=True)
return MatrixConnectionAdapter.lookup(
well_known, skip_well_known=True
)
try:
srv = srvlookup.lookup("matrix", "tcp", s)[0]
@@ -274,8 +280,8 @@ class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def get_well_known(server_name):
uri = "https://%s/.well-known/matrix/server" % (server_name,)
print("fetching %s" % (uri,), file=sys.stderr)
uri = "https://%s/.well-known/matrix/server" % (server_name, )
print("fetching %s" % (uri, ), file=sys.stderr)
try:
resp = requests.get(uri)
@@ -288,12 +294,12 @@ class MatrixConnectionAdapter(HTTPAdapter):
raise Exception("not a dict")
if "m.server" not in parsed_well_known:
raise Exception("Missing key 'm.server'")
new_name = parsed_well_known["m.server"]
print("well-known lookup gave %s" % (new_name,), file=sys.stderr)
new_name = parsed_well_known['m.server']
print("well-known lookup gave %s" % (new_name, ), file=sys.stderr)
return new_name
except Exception as e:
print("Invalid response from %s: %s" % (uri, e), file=sys.stderr)
print("Invalid response from %s: %s" % (uri, e, ), file=sys.stderr)
return None
def get_connection(self, url, proxies=None):
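In outline, the lookup above resolves a server name in this order; this is a
simplified sketch (the real adapter also handles IPv6 literals, and the
get_well_known stub stands in for the method defined above):

import srvlookup

def get_well_known(server_name):
    # Stand-in for the .well-known fetch above; returns a delegated
    # server name, or None if there is no delegation.
    return None

def resolve(server_name, skip_well_known=False):
    # 1. An explicit port wins: "example.com:8448" -> ("example.com", 8448).
    if ":" in server_name:
        host, port = server_name.rsplit(":", 1)
        return host, int(port)
    # 2. .well-known delegation, tried at most once to avoid loops.
    if not skip_well_known:
        delegated = get_well_known(server_name)
        if delegated:
            return resolve(delegated, skip_well_known=True)
    # 3. An SRV record for _matrix._tcp, else 4. the default port 8448.
    try:
        srv = srvlookup.lookup("matrix", "tcp", server_name)[0]
        return srv.host, srv.port
    except Exception:
        return server_name, 8448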

View File

@@ -79,5 +79,5 @@ def main():
conn.commit()
if __name__ == "__main__":
if __name__ == '__main__':
main()

View File

@@ -35,11 +35,11 @@ def find_patterns_in_file(filepath):
find_patterns_in_code(f.read())
parser = argparse.ArgumentParser(description="Find url patterns.")
parser = argparse.ArgumentParser(description='Find url patterns.')
parser.add_argument(
"directories",
nargs="+",
nargs='+',
metavar="DIR",
help="Directories to search for definitions",
)

View File

@@ -63,5 +63,5 @@ def main():
streams[update.name] = update.position
if __name__ == "__main__":
if __name__ == '__main__':
main()

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env python3
#!/usr/bin/env python
import argparse
import shutil

View File

@@ -24,14 +24,14 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-o",
"--output_file",
type=argparse.FileType("w"),
"-o", "--output_file",
type=argparse.FileType('w'),
default=sys.stdout,
help="Where to write the output to",
)
args = parser.parse_args()
key_id = "a_" + random_string(4)
key = (generate_signing_key(key_id),)
key = generate_signing_key(key_id),
write_signing_keys(args.output_file, key)
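A usage sketch: the output file holds a single "ed25519 <key_id> <seed>" line
and can be read back with signedjson (the path here is hypothetical):

from signedjson.key import read_signing_keys

with open("my.signing.key") as f:
    (signing_key,) = read_signing_keys(f)
print(signing_key.alg, signing_key.version)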

View File

@@ -50,7 +50,7 @@ def main(src_repo, dest_repo):
dest_paths = MediaFilePaths(dest_repo)
for line in sys.stdin:
line = line.strip()
parts = line.split("|")
parts = line.split('|')
if len(parts) != 2:
print("Unable to parse input line %s" % line, file=sys.stderr)
exit(1)
@@ -107,7 +107,7 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
)
parser.add_argument("-v", action="store_true", help="enable debug logging")
parser.add_argument("-v", action='store_true', help='enable debug logging')
parser.add_argument("src_repo", help="Path to source content repo")
parser.add_argument("dest_repo", help="Path to source content repo")
args = parser.parse_args()

View File

@@ -15,17 +15,18 @@ ignore =
tox.ini
[flake8]
max-line-length = 90
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501
ignore=W503,W504,E203,E731
[isort]
line_length = 88
line_length = 89
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY

View File

@@ -60,12 +60,9 @@ class TestCommand(Command):
pass
def run(self):
print(
"""Synapse's tests cannot be run via setup.py. To run them, try:
print ("""Synapse's tests cannot be run via setup.py. To run them, try:
PYTHONPATH="." trial tests
"""
)
""")
def read_file(path_segments):
"""Read a file from the package. Takes a list of strings to join to
@@ -87,9 +84,9 @@ version = exec_file(("synapse", "__init__.py"))["__version__"]
dependencies = exec_file(("synapse", "python_dependencies.py"))
long_description = read_file(("README.rst",))
REQUIREMENTS = dependencies["REQUIREMENTS"]
CONDITIONAL_REQUIREMENTS = dependencies["CONDITIONAL_REQUIREMENTS"]
ALL_OPTIONAL_REQUIREMENTS = dependencies["ALL_OPTIONAL_REQUIREMENTS"]
REQUIREMENTS = dependencies['REQUIREMENTS']
CONDITIONAL_REQUIREMENTS = dependencies['CONDITIONAL_REQUIREMENTS']
ALL_OPTIONAL_REQUIREMENTS = dependencies['ALL_OPTIONAL_REQUIREMENTS']
# Make `pip install matrix-synapse[all]` install all the optional dependencies.
CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
@@ -105,16 +102,16 @@ setup(
include_package_data=True,
zip_safe=False,
long_description=long_description,
python_requires="~=3.5",
python_requires='~=3.5',
classifiers=[
"Development Status :: 5 - Production/Stable",
"Topic :: Communications :: Chat",
"License :: OSI Approved :: Apache Software License",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
'Development Status :: 5 - Production/Stable',
'Topic :: Communications :: Chat',
'License :: OSI Approved :: Apache Software License',
'Programming Language :: Python :: 3 :: Only',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},
cmdclass={'test': TestCommand},
)

View File

@@ -28,11 +28,10 @@ try:
from twisted.internet import protocol
from twisted.internet.protocol import Factory
from twisted.names.dns import DNSDatagramProtocol
protocol.Factory.noisy = False
Factory.noisy = False
DNSDatagramProtocol.noisy = False
except ImportError:
pass
__version__ = "1.1.0rc1"
__version__ = "1.0.0"

View File

@@ -57,18 +57,18 @@ def request_registration(
nonce = r.json()["nonce"]
mac = hmac.new(key=shared_secret.encode("utf8"), digestmod=hashlib.sha1)
mac = hmac.new(key=shared_secret.encode('utf8'), digestmod=hashlib.sha1)
mac.update(nonce.encode("utf8"))
mac.update(nonce.encode('utf8'))
mac.update(b"\x00")
mac.update(user.encode("utf8"))
mac.update(user.encode('utf8'))
mac.update(b"\x00")
mac.update(password.encode("utf8"))
mac.update(password.encode('utf8'))
mac.update(b"\x00")
mac.update(b"admin" if admin else b"notadmin")
if user_type:
mac.update(b"\x00")
mac.update(user_type.encode("utf8"))
mac.update(user_type.encode('utf8'))
mac = mac.hexdigest()
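A worked sketch of the MAC construction above with hypothetical inputs; the
shared secret is the registration_shared_secret from homeserver.yaml:

import hashlib
import hmac

shared_secret, nonce = "s3cret", "abcdef"
user, password = "alice", "wonderland"

mac = hmac.new(key=shared_secret.encode("utf8"), digestmod=hashlib.sha1)
for field in (nonce, user, password):
    mac.update(field.encode("utf8"))
    mac.update(b"\x00")  # NUL separators between the fields
mac.update(b"notadmin")  # admin flag last, no trailing NUL, no user_type
print(mac.hexdigest())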
@@ -134,9 +134,8 @@ def register_new_user(user, password, server_location, shared_secret, admin, use
else:
admin = False
request_registration(
user, password, server_location, shared_secret, bool(admin), user_type
)
request_registration(user, password, server_location, shared_secret,
bool(admin), user_type)
def main():
@@ -190,7 +189,7 @@ def main():
group.add_argument(
"-c",
"--config",
type=argparse.FileType("r"),
type=argparse.FileType('r'),
help="Path to server config file. Used to read in shared secret.",
)
@@ -201,7 +200,7 @@ def main():
parser.add_argument(
"server_url",
default="https://localhost:8448",
nargs="?",
nargs='?',
help="URL to use to talk to the home server. Defaults to "
" 'https://localhost:8448'.",
)
@@ -221,9 +220,8 @@ def main():
if args.admin or args.no_admin:
admin = args.admin
register_new_user(
args.user, args.password, args.server_url, secret, admin, args.user_type
)
register_new_user(args.user, args.password, args.server_url, secret,
admin, args.user_type)
if __name__ == "__main__":

View File

@@ -36,11 +36,8 @@ logger = logging.getLogger(__name__)
AuthEventTypes = (
EventTypes.Create,
EventTypes.Member,
EventTypes.PowerLevels,
EventTypes.JoinRules,
EventTypes.RoomHistoryVisibility,
EventTypes.Create, EventTypes.Member, EventTypes.PowerLevels,
EventTypes.JoinRules, EventTypes.RoomHistoryVisibility,
EventTypes.ThirdPartyInvite,
)
@@ -57,7 +54,6 @@ class Auth(object):
FIXME: This class contains a mix of functions for authenticating users
of our client-server API and authenticating events added to room graphs.
"""
def __init__(self, hs):
self.hs = hs
self.clock = hs.get_clock()
@@ -74,12 +70,15 @@ class Auth(object):
def check_from_context(self, room_version, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids(self.store)
auth_events_ids = yield self.compute_auth_events(
event, prev_state_ids, for_verification=True
event, prev_state_ids, for_verification=True,
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in itervalues(auth_events)}
auth_events = {
(e.type, e.state_key): e for e in itervalues(auth_events)
}
self.check(
room_version, event, auth_events=auth_events, do_sig_check=do_sig_check
room_version, event,
auth_events=auth_events, do_sig_check=do_sig_check,
)
def check(self, room_version, event, auth_events, do_sig_check=True):
@@ -116,10 +115,15 @@ class Auth(object):
the room.
"""
if current_state:
member = current_state.get((EventTypes.Member, user_id), None)
member = current_state.get(
(EventTypes.Member, user_id),
None
)
else:
member = yield self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
self._check_joined_room(member, user_id, room_id)
@@ -139,17 +143,23 @@ class Auth(object):
the room. This will be the leave event if they have left the room.
"""
member = yield self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
room_id=room_id,
event_type=EventTypes.Member,
state_key=user_id
)
membership = member.membership if member else None
if membership not in (Membership.JOIN, Membership.LEAVE):
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
raise AuthError(403, "User %s not in room %s" % (
user_id, room_id
))
if membership == Membership.LEAVE:
forgot = yield self.store.did_forget(user_id, room_id)
if forgot:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
raise AuthError(403, "User %s not in room %s" % (
user_id, room_id
))
defer.returnValue(member)
@@ -161,9 +171,9 @@ class Auth(object):
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
raise AuthError(
403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
)
raise AuthError(403, "User %s not in room %s (%s)" % (
user_id, room_id, repr(member)
))
def can_federate(self, event, auth_events):
creation_event = auth_events.get((EventTypes.Create, ""))
@@ -175,7 +185,11 @@ class Auth(object):
@defer.inlineCallbacks
def get_user_by_req(
self, request, allow_guest=False, rights="access", allow_expired=False
self,
request,
allow_guest=False,
rights="access",
allow_expired=False,
):
""" Get a registered user's ID.
@@ -195,8 +209,9 @@ class Auth(object):
try:
ip_addr = self.hs.get_ip_from_request(request)
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
b"User-Agent",
default=[b""]
)[0].decode('ascii', 'surrogateescape')
access_token = self.get_access_token_from_request(
request, self.TOKEN_NOT_FOUND_HTTP_STATUS
@@ -228,12 +243,11 @@ class Auth(object):
if self._account_validity.enabled and not allow_expired:
user_id = user.to_string()
expiration_ts = yield self.store.get_expiration_ts_for_user(user_id)
if (
expiration_ts is not None
and self.clock.time_msec() >= expiration_ts
):
if expiration_ts is not None and self.clock.time_msec() >= expiration_ts:
raise AuthError(
403, "User account has expired", errcode=Codes.EXPIRED_ACCOUNT
403,
"User account has expired",
errcode=Codes.EXPIRED_ACCOUNT,
)
# device_id may not be present if get_user_by_access_token has been
@@ -251,23 +265,18 @@ class Auth(object):
if is_guest and not allow_guest:
raise AuthError(
403,
"Guest access not allowed",
errcode=Codes.GUEST_ACCESS_FORBIDDEN,
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
)
request.authenticated_entity = user.to_string()
defer.returnValue(
synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service
)
defer.returnValue(synapse.types.create_requester(
user, token_id, is_guest, device_id, app_service=app_service)
)
except KeyError:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.",
errcode=Codes.MISSING_TOKEN
)
@defer.inlineCallbacks
@@ -288,14 +297,20 @@ class Auth(object):
if b"user_id" not in request.args:
defer.returnValue((app_service.sender, app_service))
user_id = request.args[b"user_id"][0].decode("utf8")
user_id = request.args[b"user_id"][0].decode('utf8')
if app_service.sender == user_id:
defer.returnValue((app_service.sender, app_service))
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
raise AuthError(
403,
"Application service cannot masquerade as this user."
)
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
raise AuthError(
403,
"Application service has not registered this user"
)
defer.returnValue((user_id, app_service))
@defer.inlineCallbacks
@@ -353,13 +368,13 @@ class Auth(object):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unknown user_id %s" % user_id,
errcode=Codes.UNKNOWN_TOKEN,
errcode=Codes.UNKNOWN_TOKEN
)
if not stored_user["is_guest"]:
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Guest access token used for regular user",
errcode=Codes.UNKNOWN_TOKEN,
errcode=Codes.UNKNOWN_TOKEN
)
ret = {
"user": user,
@@ -387,9 +402,8 @@ class Auth(object):
) as e:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
def _parse_and_validate_macaroon(self, token, rights="access"):
@@ -427,13 +441,13 @@ class Auth(object):
guest = True
self.validate_macaroon(
macaroon, rights, self.hs.config.expire_access_token, user_id=user_id
macaroon, rights, self.hs.config.expire_access_token,
user_id=user_id,
)
except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN,
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
errcode=Codes.UNKNOWN_TOKEN
)
if not has_expiry and rights == "access":
@@ -458,11 +472,10 @@ class Auth(object):
user_prefix = "user_id = "
for caveat in macaroon.caveats:
if caveat.caveat_id.startswith(user_prefix):
return caveat.caveat_id[len(user_prefix) :]
return caveat.caveat_id[len(user_prefix):]
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN,
self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
errcode=Codes.UNKNOWN_TOKEN
)
def validate_macaroon(self, macaroon, type_string, verify_expiry, user_id):
@@ -509,7 +522,7 @@ class Auth(object):
prefix = "time < "
if not caveat.startswith(prefix):
return False
expiry = int(caveat[len(prefix) :])
expiry = int(caveat[len(prefix):])
now = self.hs.get_clock().time_msec()
return now < expiry
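For context, a sketch of minting a token with the caveat layout that the
validator above expects, using pymacaroons; the location, secret key and
expiry timestamp are hypothetical:

import pymacaroons

macaroon = pymacaroons.Macaroon(
    location="example.com", identifier="key", key="macaroon_secret_key"
)
macaroon.add_first_party_caveat("gen = 1")
macaroon.add_first_party_caveat("user_id = @alice:example.com")
macaroon.add_first_party_caveat("type = access")
macaroon.add_first_party_caveat("time < 1560000000000")  # ms since the epoch

token = macaroon.serialize()  # the string handed out as an access token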
@@ -541,12 +554,14 @@ class Auth(object):
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS,
"Unrecognised access token.",
errcode=Codes.UNKNOWN_TOKEN,
errcode=Codes.UNKNOWN_TOKEN
)
request.authenticated_entity = service.sender
return defer.succeed(service)
except KeyError:
raise AuthError(self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.")
raise AuthError(
self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token."
)
def is_server_admin(self, user):
""" Check if the given user is a local server admin.
@@ -566,19 +581,19 @@ class Auth(object):
auth_ids = []
key = (EventTypes.PowerLevels, "")
key = (EventTypes.PowerLevels, "", )
power_level_event_id = current_state_ids.get(key)
if power_level_event_id:
auth_ids.append(power_level_event_id)
key = (EventTypes.JoinRules, "")
key = (EventTypes.JoinRules, "", )
join_rule_event_id = current_state_ids.get(key)
key = (EventTypes.Member, event.sender)
key = (EventTypes.Member, event.sender, )
member_event_id = current_state_ids.get(key)
key = (EventTypes.Create, "")
key = (EventTypes.Create, "", )
create_event_id = current_state_ids.get(key)
if create_event_id:
auth_ids.append(create_event_id)
@@ -604,7 +619,7 @@ class Auth(object):
auth_ids.append(member_event_id)
if for_verification:
key = (EventTypes.Member, event.state_key)
key = (EventTypes.Member, event.state_key, )
existing_event_id = current_state_ids.get(key)
if existing_event_id:
auth_ids.append(existing_event_id)
@@ -613,7 +628,7 @@ class Auth(object):
if "third_party_invite" in event.content:
key = (
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"],
event.content["third_party_invite"]["signed"]["token"]
)
third_party_invite_id = current_state_ids.get(key)
if third_party_invite_id:
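In outline, every event's auth events are drawn from these current-state
keys; a simplified sketch (membership events additionally pull in the join
rules, the target's membership and, for third-party invites, the matching
invite event, as above):

def base_auth_event_keys(sender):
    # The (event type, state key) pairs consulted for every event.
    return [
        ("m.room.create", ""),
        ("m.room.power_levels", ""),
        ("m.room.member", sender),
    ]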
@@ -669,7 +684,7 @@ class Auth(object):
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.Aliases, "", power_level_event
EventTypes.Aliases, "", power_level_event,
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
@@ -677,7 +692,7 @@ class Auth(object):
raise AuthError(
403,
"This server requires you to be a moderator in the room to"
" edit its room list entry",
" edit its room list entry"
)
@staticmethod
@@ -727,7 +742,7 @@ class Auth(object):
)
parts = auth_headers[0].split(b" ")
if parts[0] == b"Bearer" and len(parts) == 2:
return parts[1].decode("ascii")
return parts[1].decode('ascii')
else:
raise AuthError(
token_not_found_http_status,
@@ -740,10 +755,10 @@ class Auth(object):
raise AuthError(
token_not_found_http_status,
"Missing access token.",
errcode=Codes.MISSING_TOKEN,
errcode=Codes.MISSING_TOKEN
)
return query_params[0].decode("ascii")
return query_params[0].decode('ascii')
@defer.inlineCallbacks
def check_in_room_or_world_readable(self, room_id, user_id):
@@ -770,8 +785,8 @@ class Auth(object):
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility
and visibility.content["history_visibility"] == "world_readable"
visibility and
visibility.content["history_visibility"] == "world_readable"
):
defer.returnValue((Membership.JOIN, None))
return
@@ -805,11 +820,10 @@ class Auth(object):
if self.hs.config.hs_disabled:
raise ResourceLimitError(
403,
self.hs.config.hs_disabled_message,
403, self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self.hs.config.admin_contact,
limit_type=self.hs.config.hs_disabled_limit_type,
limit_type=self.hs.config.hs_disabled_limit_type
)
if self.hs.config.limit_usage_by_mau is True:
assert not (user_id and threepid)
@@ -834,9 +848,8 @@ class Auth(object):
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise ResourceLimitError(
403,
"Monthly Active User Limit Exceeded",
403, "Monthly Active User Limit Exceeded",
admin_contact=self.hs.config.admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type="monthly_active_user",
limit_type="monthly_active_user"
)

View File

@@ -18,7 +18,7 @@
"""Contains constants from the specification."""
# the "depth" field on events is limited to 2**63 - 1
MAX_DEPTH = 2 ** 63 - 1
MAX_DEPTH = 2**63 - 1
# the maximum length for a room alias is 255 characters
MAX_ALIAS_LENGTH = 255
@@ -30,41 +30,39 @@ MAX_USERID_LENGTH = 255
class Membership(object):
"""Represents the membership states of a user in a room."""
INVITE = "invite"
JOIN = "join"
KNOCK = "knock"
LEAVE = "leave"
BAN = "ban"
INVITE = u"invite"
JOIN = u"join"
KNOCK = u"knock"
LEAVE = u"leave"
BAN = u"ban"
LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
class PresenceState(object):
"""Represents the presence state of a user."""
OFFLINE = "offline"
UNAVAILABLE = "unavailable"
ONLINE = "online"
OFFLINE = u"offline"
UNAVAILABLE = u"unavailable"
ONLINE = u"online"
class JoinRules(object):
PUBLIC = "public"
KNOCK = "knock"
INVITE = "invite"
PRIVATE = "private"
PUBLIC = u"public"
KNOCK = u"knock"
INVITE = u"invite"
PRIVATE = u"private"
class LoginType(object):
PASSWORD = "m.login.password"
EMAIL_IDENTITY = "m.login.email.identity"
MSISDN = "m.login.msisdn"
RECAPTCHA = "m.login.recaptcha"
TERMS = "m.login.terms"
DUMMY = "m.login.dummy"
PASSWORD = u"m.login.password"
EMAIL_IDENTITY = u"m.login.email.identity"
MSISDN = u"m.login.msisdn"
RECAPTCHA = u"m.login.recaptcha"
TERMS = u"m.login.terms"
DUMMY = u"m.login.dummy"
# Only for C/S API v1
APPLICATION_SERVICE = "m.login.application_service"
SHARED_SECRET = "org.matrix.login.shared_secret"
APPLICATION_SERVICE = u"m.login.application_service"
SHARED_SECRET = u"org.matrix.login.shared_secret"
class EventTypes(object):
@@ -120,7 +118,6 @@ class UserTypes(object):
"""Allows for user type specific behaviour. With the benefit of hindsight
'admin' and 'guest' users should also be UserTypes. Normal users are type None
"""
SUPPORT = "support"
ALL_USER_TYPES = (SUPPORT,)
@@ -128,7 +125,6 @@ class UserTypes(object):
class RelationTypes(object):
"""The types of relations known to this server.
"""
ANNOTATION = "m.annotation"
REPLACE = "m.replace"
REFERENCE = "m.reference"

View File

@@ -70,7 +70,6 @@ class CodeMessageException(RuntimeError):
code (int): HTTP error code
msg (str): string describing the error
"""
def __init__(self, code, msg):
super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
self.code = code
@@ -84,7 +83,6 @@ class SynapseError(CodeMessageException):
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN):
"""Constructs a synapse error.
@@ -97,7 +95,10 @@ class SynapseError(CodeMessageException):
self.errcode = errcode
def error_dict(self):
return cs_error(self.msg, self.errcode)
return cs_error(
self.msg,
self.errcode,
)
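For reference, cs_error builds the wire-format body, so error_dict here
returns a plain dict such as (a hypothetical example):

# SynapseError(403, "Forbidden", Codes.FORBIDDEN).error_dict()
# == {"error": "Forbidden", "errcode": "M_FORBIDDEN"}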
class ProxiedRequestError(SynapseError):
@@ -106,23 +107,27 @@ class ProxiedRequestError(SynapseError):
Attributes:
errcode (str): Matrix error code e.g 'M_FORBIDDEN'
"""
def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
super(ProxiedRequestError, self).__init__(code, msg, errcode)
super(ProxiedRequestError, self).__init__(
code, msg, errcode
)
if additional_fields is None:
self._additional_fields = {}
else:
self._additional_fields = dict(additional_fields)
def error_dict(self):
return cs_error(self.msg, self.errcode, **self._additional_fields)
return cs_error(
self.msg,
self.errcode,
**self._additional_fields
)
class ConsentNotGivenError(SynapseError):
"""The error returned to the client when the user has not consented to the
privacy policy.
"""
def __init__(self, msg, consent_uri):
"""Constructs a ConsentNotGivenError
@@ -131,17 +136,22 @@ class ConsentNotGivenError(SynapseError):
consent_url (str): The URL where the user can give their consent
"""
super(ConsentNotGivenError, self).__init__(
code=http_client.FORBIDDEN, msg=msg, errcode=Codes.CONSENT_NOT_GIVEN
code=http_client.FORBIDDEN,
msg=msg,
errcode=Codes.CONSENT_NOT_GIVEN
)
self._consent_uri = consent_uri
def error_dict(self):
return cs_error(self.msg, self.errcode, consent_uri=self._consent_uri)
return cs_error(
self.msg,
self.errcode,
consent_uri=self._consent_uri
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
@@ -180,17 +190,15 @@ class InteractiveAuthIncompleteError(Exception):
result (dict): the server response to the request, which should be
passed back to the client
"""
def __init__(self, result):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete"
"Interactive auth not yet complete",
)
self.result = result
class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.UNRECOGNIZED
@@ -199,14 +207,21 @@ class UnrecognizedRequestError(SynapseError):
message = "Unrecognized request"
else:
message = args[0]
super(UnrecognizedRequestError, self).__init__(400, message, **kwargs)
super(UnrecognizedRequestError, self).__init__(
400,
message,
**kwargs
)
class NotFoundError(SynapseError):
"""An error indicating we can't find the thing you asked for"""
def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
super(NotFoundError, self).__init__(404, msg, errcode=errcode)
super(NotFoundError, self).__init__(
404,
msg,
errcode=errcode
)
class AuthError(SynapseError):
@@ -223,11 +238,8 @@ class ResourceLimitError(SynapseError):
Any error raised when there is a problem with resource usage.
For instance, the monthly active user limit for the server has been exceeded
"""
def __init__(
self,
code,
msg,
self, code, msg,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=None,
limit_type=None,
@@ -241,7 +253,7 @@ class ResourceLimitError(SynapseError):
self.msg,
self.errcode,
admin_contact=self.admin_contact,
limit_type=self.limit_type,
limit_type=self.limit_type
)
@@ -256,7 +268,6 @@ class EventSizeError(SynapseError):
class EventStreamError(SynapseError):
"""An error raised when there a problem with the event stream."""
def __init__(self, *args, **kwargs):
if "errcode" not in kwargs:
kwargs["errcode"] = Codes.BAD_PAGINATION
@@ -265,53 +276,47 @@ class EventStreamError(SynapseError):
class LoginError(SynapseError):
"""An error raised when there was a problem logging in."""
pass
class StoreError(SynapseError):
"""An error raised when there was a problem storing some data."""
pass
class InvalidCaptchaError(SynapseError):
def __init__(
self,
code=400,
msg="Invalid captcha.",
error_url=None,
errcode=Codes.CAPTCHA_INVALID,
):
def __init__(self, code=400, msg="Invalid captcha.", error_url=None,
errcode=Codes.CAPTCHA_INVALID):
super(InvalidCaptchaError, self).__init__(code, msg, errcode)
self.error_url = error_url
def error_dict(self):
return cs_error(self.msg, self.errcode, error_url=self.error_url)
return cs_error(
self.msg,
self.errcode,
error_url=self.error_url,
)
class LimitExceededError(SynapseError):
"""A client has sent too many requests and is being throttled.
"""
def __init__(
self,
code=429,
msg="Too Many Requests",
retry_after_ms=None,
errcode=Codes.LIMIT_EXCEEDED,
):
def __init__(self, code=429, msg="Too Many Requests", retry_after_ms=None,
errcode=Codes.LIMIT_EXCEEDED):
super(LimitExceededError, self).__init__(code, msg, errcode)
self.retry_after_ms = retry_after_ms
def error_dict(self):
return cs_error(self.msg, self.errcode, retry_after_ms=self.retry_after_ms)
return cs_error(
self.msg,
self.errcode,
retry_after_ms=self.retry_after_ms,
)
class RoomKeysVersionError(SynapseError):
"""A client has tried to upload to a non-current version of the room_keys store
"""
def __init__(self, current_version):
"""
Args:
@@ -326,7 +331,6 @@ class RoomKeysVersionError(SynapseError):
class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does
not support."""
def __init__(self):
super(UnsupportedRoomVersionError, self).__init__(
code=400,
@@ -350,19 +354,22 @@ class IncompatibleRoomVersionError(SynapseError):
Unlike UnsupportedRoomVersionError, it is specific to the case of the make_join
failing.
"""
def __init__(self, room_version):
super(IncompatibleRoomVersionError, self).__init__(
code=400,
msg="Your homeserver does not support the features required to "
"join this room",
"join this room",
errcode=Codes.INCOMPATIBLE_ROOM_VERSION,
)
self._room_version = room_version
def error_dict(self):
return cs_error(self.msg, self.errcode, room_version=self._room_version)
return cs_error(
self.msg,
self.errcode,
room_version=self._room_version,
)
class RequestSendFailed(RuntimeError):
@@ -373,11 +380,11 @@ class RequestSendFailed(RuntimeError):
networking (e.g. DNS failures, connection timeouts etc), versus unexpected
errors (like programming errors).
"""
def __init__(self, inner_exception, can_retry):
super(RequestSendFailed, self).__init__(
"Failed to send request: %s: %s"
% (type(inner_exception).__name__, inner_exception)
"Failed to send request: %s: %s" % (
type(inner_exception).__name__, inner_exception,
)
)
self.inner_exception = inner_exception
self.can_retry = can_retry
@@ -421,7 +428,7 @@ class FederationError(RuntimeError):
self.affected = affected
self.source = source
msg = "%s %s: %s" % (level, code, reason)
msg = "%s %s: %s" % (level, code, reason,)
super(FederationError, self).__init__(msg)
def get_dict(self):
@@ -441,7 +448,6 @@ class HttpResponseException(CodeMessageException):
Attributes:
response (bytes): body of response
"""
def __init__(self, code, msg, response):
"""
@@ -480,7 +486,7 @@ class HttpResponseException(CodeMessageException):
if not isinstance(j, dict):
j = {}
errcode = j.pop("errcode", Codes.UNKNOWN)
errmsg = j.pop("error", self.msg)
errcode = j.pop('errcode', Codes.UNKNOWN)
errmsg = j.pop('error', self.msg)
return ProxiedRequestError(self.code, errmsg, errcode, j)

View File

@@ -28,55 +28,117 @@ FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {"type": "number"},
"senders": {"$ref": "#/definitions/user_id_array"},
"not_senders": {"$ref": "#/definitions/user_id_array"},
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
# TODO: We don't limit event type values but we probably should...
# check types are valid event types
"types": {"type": "array", "items": {"type": "string"}},
"not_types": {"type": "array", "items": {"type": "string"}},
},
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
}
}
}
ROOM_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"not_rooms": {"$ref": "#/definitions/room_id_array"},
"rooms": {"$ref": "#/definitions/room_id_array"},
"ephemeral": {"$ref": "#/definitions/room_event_filter"},
"include_leave": {"type": "boolean"},
"state": {"$ref": "#/definitions/room_event_filter"},
"timeline": {"$ref": "#/definitions/room_event_filter"},
"account_data": {"$ref": "#/definitions/room_event_filter"},
},
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"ephemeral": {
"$ref": "#/definitions/room_event_filter"
},
"include_leave": {
"type": "boolean"
},
"state": {
"$ref": "#/definitions/room_event_filter"
},
"timeline": {
"$ref": "#/definitions/room_event_filter"
},
"account_data": {
"$ref": "#/definitions/room_event_filter"
},
}
}
ROOM_EVENT_FILTER_SCHEMA = {
"additionalProperties": False,
"type": "object",
"properties": {
"limit": {"type": "number"},
"senders": {"$ref": "#/definitions/user_id_array"},
"not_senders": {"$ref": "#/definitions/user_id_array"},
"types": {"type": "array", "items": {"type": "string"}},
"not_types": {"type": "array", "items": {"type": "string"}},
"rooms": {"$ref": "#/definitions/room_id_array"},
"not_rooms": {"$ref": "#/definitions/room_id_array"},
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
},
"limit": {
"type": "number"
},
"senders": {
"$ref": "#/definitions/user_id_array"
},
"not_senders": {
"$ref": "#/definitions/user_id_array"
},
"types": {
"type": "array",
"items": {
"type": "string"
}
},
"not_types": {
"type": "array",
"items": {
"type": "string"
}
},
"rooms": {
"$ref": "#/definitions/room_id_array"
},
"not_rooms": {
"$ref": "#/definitions/room_id_array"
},
"contains_url": {
"type": "boolean"
},
"lazy_load_members": {
"type": "boolean"
},
"include_redundant_members": {
"type": "boolean"
},
}
}
USER_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {"type": "string", "format": "matrix_user_id"},
"items": {
"type": "string",
"format": "matrix_user_id"
}
}
ROOM_ID_ARRAY_SCHEMA = {
"type": "array",
"items": {"type": "string", "format": "matrix_room_id"},
"items": {
"type": "string",
"format": "matrix_room_id"
}
}
USER_FILTER_SCHEMA = {
@@ -88,13 +150,22 @@ USER_FILTER_SCHEMA = {
"user_id_array": USER_ID_ARRAY_SCHEMA,
"filter": FILTER_SCHEMA,
"room_filter": ROOM_FILTER_SCHEMA,
"room_event_filter": ROOM_EVENT_FILTER_SCHEMA,
"room_event_filter": ROOM_EVENT_FILTER_SCHEMA
},
"properties": {
"presence": {"$ref": "#/definitions/filter"},
"account_data": {"$ref": "#/definitions/filter"},
"room": {"$ref": "#/definitions/room_filter"},
"event_format": {"type": "string", "enum": ["client", "federation"]},
"presence": {
"$ref": "#/definitions/filter"
},
"account_data": {
"$ref": "#/definitions/filter"
},
"room": {
"$ref": "#/definitions/room_filter"
},
"event_format": {
"type": "string",
"enum": ["client", "federation"]
},
"event_fields": {
"type": "array",
"items": {
@@ -106,25 +177,26 @@ USER_FILTER_SCHEMA = {
#
# Note that because this is a regular expression, we have to escape
# each backslash in the pattern.
"pattern": r"^((?!\\\\).)*$",
},
},
"pattern": r"^((?!\\\\).)*$"
}
}
},
"additionalProperties": False,
"additionalProperties": False
}
@FormatChecker.cls_checks("matrix_room_id")
@FormatChecker.cls_checks('matrix_room_id')
def matrix_room_id_validator(room_id_str):
return RoomID.from_string(room_id_str)
@FormatChecker.cls_checks("matrix_user_id")
@FormatChecker.cls_checks('matrix_user_id')
def matrix_user_id_validator(user_id_str):
return UserID.from_string(user_id_str)
class Filtering(object):
def __init__(self, hs):
super(Filtering, self).__init__()
self.store = hs.get_datastore()
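An example of a filter that the schema above accepts, validated the same way
check_valid_filter does; the room id and limit are hypothetical, and the
snippet assumes the schema constants above are importable:

import jsonschema
from jsonschema import FormatChecker

user_filter = {
    "room": {
        "rooms": ["!abcdefgh:example.com"],
        "timeline": {"limit": 10, "types": ["m.room.message"]},
    },
    "event_format": "client",
}
jsonschema.validate(user_filter, USER_FILTER_SCHEMA, format_checker=FormatChecker())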
@@ -156,9 +228,8 @@ class Filtering(object):
# individual top-level key e.g. public_user_data. Filters are made of
# many definitions.
try:
jsonschema.validate(
user_filter_json, USER_FILTER_SCHEMA, format_checker=FormatChecker()
)
jsonschema.validate(user_filter_json, USER_FILTER_SCHEMA,
format_checker=FormatChecker())
except jsonschema.ValidationError as e:
raise SynapseError(400, str(e))
@@ -169,9 +240,10 @@ class FilterCollection(object):
room_filter_json = self._filter_json.get("room", {})
self._room_filter = Filter(
{k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
)
self._room_filter = Filter({
k: v for k, v in room_filter_json.items()
if k in ("rooms", "not_rooms")
})
self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
self._room_state_filter = Filter(room_filter_json.get("state", {}))
@@ -180,7 +252,9 @@ class FilterCollection(object):
self._presence_filter = Filter(filter_json.get("presence", {}))
self._account_data = Filter(filter_json.get("account_data", {}))
self.include_leave = filter_json.get("room", {}).get("include_leave", False)
self.include_leave = filter_json.get("room", {}).get(
"include_leave", False
)
self.event_fields = filter_json.get("event_fields", [])
self.event_format = filter_json.get("event_format", "client")
@@ -225,22 +299,22 @@ class FilterCollection(object):
def blocks_all_presence(self):
return (
self._presence_filter.filters_all_types()
or self._presence_filter.filters_all_senders()
self._presence_filter.filters_all_types() or
self._presence_filter.filters_all_senders()
)
def blocks_all_room_ephemeral(self):
return (
self._room_ephemeral_filter.filters_all_types()
or self._room_ephemeral_filter.filters_all_senders()
or self._room_ephemeral_filter.filters_all_rooms()
self._room_ephemeral_filter.filters_all_types() or
self._room_ephemeral_filter.filters_all_senders() or
self._room_ephemeral_filter.filters_all_rooms()
)
def blocks_all_room_timeline(self):
return (
self._room_timeline_filter.filters_all_types()
or self._room_timeline_filter.filters_all_senders()
or self._room_timeline_filter.filters_all_rooms()
self._room_timeline_filter.filters_all_types() or
self._room_timeline_filter.filters_all_senders() or
self._room_timeline_filter.filters_all_rooms()
)
@@ -301,7 +375,12 @@ class Filter(object):
# check if there is a string url field in the content for filtering purposes
contains_url = isinstance(content.get("url"), text_type)
return self.check_fields(room_id, sender, ev_type, contains_url)
return self.check_fields(
room_id,
sender,
ev_type,
contains_url,
)
def check_fields(self, room_id, sender, event_type, contains_url):
"""Checks whether the filter matches the given event fields.
@@ -312,7 +391,7 @@ class Filter(object):
literal_keys = {
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(event_type, v),
"types": lambda v: _matches_wildcard(event_type, v)
}
for name, match_func in literal_keys.items():
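
For context on the "types" matching that check_fields delegates to: a minimal standalone sketch of the trailing-"*" wildcard semantics. This is illustrative only — the _matches_wildcard helper in this file is the authoritative version, and the exact None-handling here is an assumption.

    # Illustrative sketch of the trailing-"*" wildcard used by "types" filters;
    # not the real _matches_wildcard from filtering.py.
    def matches_wildcard(actual_value, filter_value):
        if filter_value.endswith("*"):
            return actual_value is not None and actual_value.startswith(filter_value[:-1])
        return actual_value == filter_value

    assert matches_wildcard("m.room.message", "m.room.*")
    assert not matches_wildcard("m.presence", "m.room.*")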

View File

@@ -44,25 +44,29 @@ class Ratelimiter(object):
"""
self.prune_message_counts(time_now_s)
message_count, time_start, _ignored = self.message_counts.get(
key, (0.0, time_now_s, None)
key, (0., time_now_s, None),
)
time_delta = time_now_s - time_start
sent_count = message_count - time_delta * rate_hz
if sent_count < 0:
allowed = True
time_start = time_now_s
message_count = 1.0
elif sent_count > burst_count - 1.0:
message_count = 1.
elif sent_count > burst_count - 1.:
allowed = False
else:
allowed = True
message_count += 1
if update:
self.message_counts[key] = (message_count, time_start, rate_hz)
self.message_counts[key] = (
message_count, time_start, rate_hz
)
if rate_hz > 0:
time_allowed = time_start + (message_count - burst_count + 1) / rate_hz
time_allowed = (
time_start + (message_count - burst_count + 1) / rate_hz
)
if time_allowed < time_now_s:
time_allowed = time_now_s
else:
@@ -72,7 +76,9 @@ class Ratelimiter(object):
def prune_message_counts(self, time_now_s):
for key in list(self.message_counts.keys()):
message_count, time_start, rate_hz = self.message_counts[key]
message_count, time_start, rate_hz = (
self.message_counts[key]
)
time_delta = time_now_s - time_start
if message_count - time_delta * rate_hz > 0:
break
@@ -86,5 +92,5 @@ class Ratelimiter(object):
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
retry_after_ms=int(1000 * (time_allowed - time_now_s)),
)
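
The arithmetic in this hunk is a leaky-bucket limiter: each action adds one token, tokens drain at rate_hz per second, and a request is refused once a burst's worth of tokens is still outstanding. A self-contained sketch of the same arithmetic (the function name and tuple return are made up, not the Synapse API):

    # Leaky-bucket sketch mirroring the arithmetic above (names invented).
    def can_do_action(message_count, time_start, time_now_s, rate_hz, burst_count):
        # How many actions are still "unleaked" since time_start.
        sent_count = message_count - (time_now_s - time_start) * rate_hz
        if sent_count < 0:
            return True, 1.0, time_now_s              # bucket drained: reset window
        if sent_count > burst_count - 1.0:
            return False, message_count, time_start   # burst exhausted: refuse
        return True, message_count + 1.0, time_start

    # rate_hz=0.2 (one action per 5s) with burst_count=3: a fourth action one
    # second after three quick ones is refused until ~5s have leaked a token out.
    assert can_do_action(3.0, 0.0, 1.0, 0.2, 3.0)[0] is False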

View File

@@ -19,10 +19,9 @@ class EventFormatVersions(object):
"""This is an internal enum for tracking the version of the event format,
independently from the room version.
"""
V1 = 1 # $id:server event id format
V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
V3 = 3 # MSC1884-style $hash format: introduced for room v4
V1 = 1 # $id:server event id format
V2 = 2 # MSC1659-style $hash event id format: introduced for room v3
V3 = 3 # MSC1884-style $hash format: introduced for room v4
KNOWN_EVENT_FORMAT_VERSIONS = {
@@ -34,9 +33,8 @@ KNOWN_EVENT_FORMAT_VERSIONS = {
class StateResolutionVersions(object):
"""Enum to identify the state resolution algorithms"""
V1 = 1 # room v1 state res
V2 = 2 # MSC1442 state res: room v2 and later
V1 = 1 # room v1 state res
V2 = 2 # MSC1442 state res: room v2 and later
class RoomDisposition(object):
@@ -48,10 +46,10 @@ class RoomDisposition(object):
class RoomVersion(object):
"""An object which describes the unique attributes of a room version."""
identifier = attr.ib() # str; the identifier for this version
disposition = attr.ib() # str; one of the RoomDispositions
event_format = attr.ib() # int; one of the EventFormatVersions
state_res = attr.ib() # int; one of the StateResolutionVersions
identifier = attr.ib() # str; the identifier for this version
disposition = attr.ib() # str; one of the RoomDispositions
event_format = attr.ib() # int; one of the EventFormatVersions
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
@@ -94,12 +92,11 @@ class RoomVersions(object):
KNOWN_ROOM_VERSIONS = {
v.identifier: v
for v in (
v.identifier: v for v in (
RoomVersions.V1,
RoomVersions.V2,
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
)
} # type: dict[str, RoomVersion]
} # type: dict[str, RoomVersion]
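
A short usage sketch for the mapping above: callers resolve a client-supplied version identifier and branch on its capabilities. Identifiers and attributes are as defined in this file; the lookup pattern itself is illustrative.

    # Sketch: resolving a version string to its capabilities.
    from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions

    room_version = KNOWN_ROOM_VERSIONS.get("4")
    if room_version is None:
        raise ValueError("unknown or unsupported room version")
    if room_version.event_format == EventFormatVersions.V3:
        pass  # room v4 uses MSC1884-style $hash event IDs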

View File

@@ -42,9 +42,13 @@ class ConsentURIBuilder(object):
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
raise ConfigError("form_secret not set in config")
raise ConfigError(
"form_secret not set in config",
)
if hs_config.public_baseurl is None:
raise ConfigError("public_baseurl not set in config")
raise ConfigError(
"public_baseurl not set in config",
)
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
@@ -60,10 +64,15 @@ class ConsentURIBuilder(object):
(str) the URI where the user can do consent
"""
mac = hmac.new(
key=self._hmac_secret, msg=user_id.encode("ascii"), digestmod=sha256
key=self._hmac_secret,
msg=user_id.encode('ascii'),
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
self._public_baseurl,
urlencode({"u": user_id, "h": mac}),
urlencode({
"u": user_id,
"h": mac
}),
)
return consent_uri
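
The builder above signs the user ID with form_secret so the consent page can verify that the "u" parameter was issued by this server. A standalone sketch of the same construction, with an invented secret, user and base URL:

    # Standalone sketch of the consent-URI construction (values made up).
    import hmac
    from hashlib import sha256
    from urllib.parse import urlencode

    form_secret = b"s3cret"
    user_id = "@alice:example.com"
    mac = hmac.new(key=form_secret, msg=user_id.encode("ascii"), digestmod=sha256).hexdigest()
    print("https://hs.example.com/_matrix/consent?" + urlencode({"u": user_id, "h": mac}))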

View File

@@ -43,7 +43,7 @@ def check_bind_error(e, address, bind_addresses):
address (str): Address on which binding was attempted.
bind_addresses (list): Addresses on which the service listens.
"""
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warn("Failed to listen on 0.0.0.0, continuing because listening on [::]")
if address == '0.0.0.0' and '::' in bind_addresses:
logger.warn('Failed to listen on 0.0.0.0, continuing because listening on [::]')
else:
raise e
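
The special case exists because on Linux a socket bound to "::" with IPV6_V6ONLY off also accepts IPv4 connections, so a separate "0.0.0.0" bind on the same port fails with EADDRINUSE even though nothing is wrong. A standalone demonstration (Linux defaults, assuming port 8448 is free):

    # Demonstration: a dual-stack "::" listener already covers 0.0.0.0.
    import socket

    s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    s6.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
    s6.bind(("::", 8448))
    s6.listen(5)

    s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s4.bind(("0.0.0.0", 8448))
    except OSError as e:
        print("expected failure, safe to ignore:", e)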

View File

@@ -19,6 +19,7 @@ import signal
import sys
import traceback
import psutil
from daemonize import Daemonize
from twisted.internet import defer, error, reactor
@@ -67,13 +68,21 @@ def start_worker_reactor(appname, config):
gc_thresholds=config.gc_thresholds,
pid_file=config.worker_pid_file,
daemonize=config.worker_daemonize,
cpu_affinity=config.worker_cpu_affinity,
print_pidfile=config.print_pidfile,
logger=logger,
)
def start_reactor(
appname, soft_file_limit, gc_thresholds, pid_file, daemonize, print_pidfile, logger
appname,
soft_file_limit,
gc_thresholds,
pid_file,
daemonize,
cpu_affinity,
print_pidfile,
logger,
):
""" Run the reactor in the main process
@@ -86,6 +95,7 @@ def start_reactor(
gc_thresholds:
pid_file (str): name of pid file to write to if daemonize is True
daemonize (bool): true to run the reactor in a background process
cpu_affinity (int|None): cpu affinity mask
print_pidfile (bool): whether to print the pid file, if daemonize is True
logger (logging.Logger): logger instance to pass to Daemonize
"""
@@ -99,6 +109,20 @@ def start_reactor(
# between the sentinel and `run` logcontexts.
with PreserveLoggingContext():
logger.info("Running")
if cpu_affinity is not None:
# Turn the bitmask into bits, reverse it so we go from 0 up
mask_to_bits = bin(cpu_affinity)[2:][::-1]
cpus = []
cpu_num = 0
for i in mask_to_bits:
if i == "1":
cpus.append(cpu_num)
cpu_num += 1
p = psutil.Process()
p.cpu_affinity(cpus)
change_resource_limit(soft_file_limit)
if gc_thresholds:
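
The bitmask decoding in the hunk above treats each bit of cpu_affinity as one CPU, least-significant bit first. A compact standalone equivalent (the helper name is hypothetical):

    # Sketch of the bitmask decoding above: bit n set => pin to CPU n.
    def mask_to_cpus(cpu_affinity):
        # bin(0b1101) == "0b1101"; strip the "0b", reverse so index 0 is CPU 0.
        return [cpu for cpu, bit in enumerate(bin(cpu_affinity)[2:][::-1]) if bit == "1"]

    assert mask_to_cpus(0b1101) == [0, 2, 3]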
@@ -125,10 +149,10 @@ def start_reactor(
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
sys.stderr.write("*" * line_length + "\n")
sys.stderr.write("*" * line_length + '\n')
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
sys.stderr.write("*" * line_length + "\n")
sys.stderr.write("*" * line_length + '\n')
sys.exit(1)
@@ -154,7 +178,14 @@ def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
r = []
for address in bind_addresses:
try:
r.append(reactor.listenTCP(port, factory, backlog, address))
r.append(
reactor.listenTCP(
port,
factory,
backlog,
address
)
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
@@ -174,7 +205,13 @@ def listen_ssl(
for address in bind_addresses:
try:
r.append(
reactor.listenSSL(port, factory, context_factory, backlog, address)
reactor.listenSSL(
port,
factory,
context_factory,
backlog,
address
)
)
except error.CannotListenError as e:
check_bind_error(e, address, bind_addresses)
@@ -206,13 +243,15 @@ def refresh_certificate(hs):
if isinstance(i.factory, TLSMemoryBIOFactory):
addr = i.getHost()
logger.info(
"Replacing TLS context factory on [%s]:%i", addr.host, addr.port
"Replacing TLS context factory on [%s]:%i", addr.host, addr.port,
)
# We want to replace TLS factories with a new one, with the new
# TLS configuration. We do this by reaching in and pulling out
# the wrappedFactory, and then re-wrapping it.
i.factory = TLSMemoryBIOFactory(
hs.tls_server_context_factory, False, i.factory.wrappedFactory
hs.tls_server_context_factory,
False,
i.factory.wrappedFactory
)
logger.info("Context factories updated.")
@@ -228,7 +267,6 @@ def start(hs, listeners=None):
try:
# Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"):
def handle_sighup(*args, **kwargs):
for i in _sighup_callbacks:
i(hs)
@@ -264,8 +302,10 @@ def setup_sentry(hs):
return
import sentry_sdk
sentry_sdk.init(dsn=hs.config.sentry_dsn, release=get_version_string(synapse))
sentry_sdk.init(
dsn=hs.config.sentry_dsn,
release=get_version_string(synapse),
)
# We set some default tags that give some context to this instance
with sentry_sdk.configure_scope() as scope:
@@ -286,7 +326,7 @@ def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
many DNS queries at once
"""
new_resolver = _LimitedHostnameResolver(
reactor.nameResolver, max_dns_requests_in_flight
reactor.nameResolver, max_dns_requests_in_flight,
)
reactor.installNameResolver(new_resolver)
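
The wrapper installed here caps how many DNS lookups run at once; later callers queue until a slot frees up. The same pattern with Twisted's stock primitive, as a sketch — DeferredSemaphore stands in for Synapse's Linearizer:

    # Sketch of the concurrency cap: at most 100 resolutions in flight.
    from twisted.internet.defer import DeferredSemaphore

    dns_semaphore = DeferredSemaphore(100)

    def limited_resolve(resolve, hostname):
        # resolve(hostname) must return a Deferred; run() acquires the
        # semaphore, invokes the callable, and releases when it fires.
        return dns_semaphore.run(resolve, hostname)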
@@ -299,17 +339,11 @@ class _LimitedHostnameResolver(object):
def __init__(self, resolver, max_dns_requests_in_flight):
self._resolver = resolver
self._limiter = Linearizer(
name="dns_client_limiter", max_count=max_dns_requests_in_flight
name="dns_client_limiter", max_count=max_dns_requests_in_flight,
)
def resolveHostName(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
def resolveHostName(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
# We need this function to return `resolutionReceiver` so we do all the
# actual logic involving deferreds in a separate function.
@@ -329,14 +363,8 @@ class _LimitedHostnameResolver(object):
return resolutionReceiver
@defer.inlineCallbacks
def _resolve(
self,
resolutionReceiver,
hostName,
portNumber=0,
addressTypes=None,
transportSemantics="TCP",
):
def _resolve(self, resolutionReceiver, hostName, portNumber=0,
addressTypes=None, transportSemantics='TCP'):
with (yield self._limiter.queue(())):
# resolveHostName doesn't return a Deferred, so we need to hook into
@@ -346,7 +374,8 @@ class _LimitedHostnameResolver(object):
receiver = _DeferredResolutionReceiver(resolutionReceiver, deferred)
self._resolver.resolveHostName(
receiver, hostName, portNumber, addressTypes, transportSemantics
receiver, hostName, portNumber,
addressTypes, transportSemantics,
)
yield deferred

View File

@@ -44,9 +44,7 @@ logger = logging.getLogger("synapse.app.appservice")
class AppserviceSlaveStore(
DirectoryStore,
SlavedEventStore,
SlavedApplicationServiceStore,
DirectoryStore, SlavedEventStore, SlavedApplicationServiceStore,
SlavedRegistrationStore,
):
pass
@@ -76,7 +74,7 @@ class AppserviceServer(HomeServer):
listener_config,
root_resource,
self.version_string,
),
)
)
logger.info("Synapse appservice now listening on port %d", port)
@@ -90,19 +88,18 @@ class AppserviceServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -135,7 +132,9 @@ class ASReplicationHandler(ReplicationClientHandler):
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse appservice", config_options)
config = HomeServerConfig.load_config(
"Synapse appservice", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -174,6 +173,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-appservice", config)
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -37,7 +37,6 @@ from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
@@ -53,7 +52,6 @@ from synapse.rest.client.v1.room import (
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomMemberListRestServlet,
RoomMessageListRestServlet,
RoomStateRestServlet,
)
from synapse.rest.client.v1.voip import VoipRestServlet
@@ -76,7 +74,6 @@ class ClientReaderSlavedStore(
SlavedDeviceStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedGroupServerStore,
SlavedAccountDataStore,
SlavedEventStore,
SlavedKeyStore,
@@ -112,7 +109,6 @@ class ClientReaderServer(HomeServer):
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
RoomMessageListRestServlet(self).register(resource)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
ThreepidRestServlet(self).register(resource)
@@ -122,7 +118,9 @@ class ClientReaderServer(HomeServer):
PushRuleRestServlet(self).register(resource)
VersionsRestServlet().register(resource)
resources.update({"/_matrix/client": resource})
resources.update({
"/_matrix/client": resource,
})
root_resource = create_resource_tree(resources, NoResource())
@@ -135,7 +133,7 @@ class ClientReaderServer(HomeServer):
listener_config,
root_resource,
self.version_string,
),
)
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -149,19 +147,18 @@ class ClientReaderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -173,7 +170,9 @@ class ClientReaderServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse client reader", config_options)
config = HomeServerConfig.load_config(
"Synapse client reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -200,6 +199,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-client-reader", config)
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -109,14 +109,12 @@ class EventCreatorServer(HomeServer):
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
@@ -129,7 +127,7 @@ class EventCreatorServer(HomeServer):
listener_config,
root_resource,
self.version_string,
),
)
)
logger.info("Synapse event creator now listening on port %d", port)
@@ -143,19 +141,18 @@ class EventCreatorServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -167,7 +164,9 @@ class EventCreatorServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse event creator", config_options)
config = HomeServerConfig.load_config(
"Synapse event creator", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -199,6 +198,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-event-creator", config)
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -86,18 +86,19 @@ class FederationReaderServer(HomeServer):
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
})
if name == "openid" and "federation" not in res["names"]:
# Only load the openid resource separately if federation resource
# is not specified since federation resource includes openid
# resource.
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
)
resources.update({
FEDERATION_PREFIX: TransportLayerServer(
self,
servlet_groups=["openid"],
),
})
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
@@ -114,7 +115,7 @@ class FederationReaderServer(HomeServer):
root_resource,
self.version_string,
),
reactor=self.get_reactor(),
reactor=self.get_reactor()
)
logger.info("Synapse federation reader now listening on port %d", port)
@@ -128,19 +129,18 @@ class FederationReaderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -181,6 +181,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-federation-reader", config)
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

View File

@@ -52,13 +52,8 @@ logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore,
SlavedTransactionStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedRegistrationStore,
SlavedDeviceStore,
SlavedPresenceStore,
SlavedDeviceInboxStore, SlavedTransactionStore, SlavedReceiptsStore, SlavedEventStore,
SlavedRegistrationStore, SlavedDeviceStore, SlavedPresenceStore,
):
def __init__(self, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(db_conn, hs)
@@ -70,7 +65,10 @@ class FederationSenderSlaveStore(
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
sql = (
"SELECT stream_id FROM federation_stream_position"
" WHERE type = ?"
)
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
@@ -105,7 +103,7 @@ class FederationSenderServer(HomeServer):
listener_config,
root_resource,
self.version_string,
),
)
)
logger.info("Synapse federation_sender now listening on port %d", port)
@@ -119,19 +117,18 @@ class FederationSenderServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -154,9 +151,7 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(
FederationSenderReplicationHandler, self
).get_streams_to_replicate()
args = super(FederationSenderReplicationHandler, self).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
@@ -208,7 +203,6 @@ class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
@@ -247,7 +241,7 @@ class FederationSenderHandler(object):
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
run_as_background_process(
"process_receipts_for_federation", self._on_new_receipts, rows
"process_receipts_for_federation", self._on_new_receipts, rows,
)
@defer.inlineCallbacks
@@ -284,14 +278,12 @@ class FederationSenderHandler(object):
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(
self.federation_position
)
self.replication_client.send_federation_ack(self.federation_position)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])
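
The try/except in the hunk above is the worker's checkpointing loop: it persists how far through the federation stream it has processed, then ACKs that token to the master so the master can drop its in-memory queues up to that point. A hedged sketch of the ordering (the store method name is hypothetical; send_federation_ack is the call shown in the diff):

    # Sketch of the checkpoint-then-ack sequence.
    from twisted.internet import defer

    @defer.inlineCallbacks
    def save_and_ack(store, replication_client, position):
        # Persist first, so a crash never acks a token we have not stored.
        yield store.update_federation_out_pos("federation", position)
        replication_client.send_federation_ack(position)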

View File

@@ -62,11 +62,14 @@ class PresenceStatusStubServlet(RestServlet):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {"Authorization": auth_headers}
headers = {
"Authorization": auth_headers,
}
try:
result = yield self.http_client.get_json(
self.main_uri + request.uri.decode("ascii"), headers=headers
self.main_uri + request.uri.decode('ascii'),
headers=headers,
)
except HttpResponseException as e:
raise e.to_synapse_error()
@@ -102,19 +105,18 @@ class KeyUploadServlet(RestServlet):
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if requester.device_id is not None and device_id != requester.device_id:
logger.warning(
"Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id,
device_id,
)
if (requester.device_id is not None and
device_id != requester.device_id):
logger.warning("Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id, device_id)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400, "To upload keys, you must pass device_id when authenticating"
400,
"To upload keys, you must pass device_id when authenticating"
)
if body:
@@ -122,9 +124,13 @@ class KeyUploadServlet(RestServlet):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {"Authorization": auth_headers}
headers = {
"Authorization": auth_headers,
}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
self.main_uri + request.uri.decode('ascii'),
body,
headers=headers,
)
defer.returnValue((200, result))
@@ -165,14 +171,12 @@ class FrontendProxyServer(HomeServer):
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
resources.update({
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
})
root_resource = create_resource_tree(resources, NoResource())
@@ -186,7 +190,7 @@ class FrontendProxyServer(HomeServer):
root_resource,
self.version_string,
),
reactor=self.get_reactor(),
reactor=self.get_reactor()
)
logger.info("Synapse client reader now listening on port %d", port)
@@ -200,19 +204,18 @@ class FrontendProxyServer(HomeServer):
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
username="matrix",
password="rabbithole",
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
logger.warn(("Metrics listener configured, but "
"enable_metrics is not True!"))
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -224,7 +227,9 @@ class FrontendProxyServer(HomeServer):
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse frontend proxy", config_options)
config = HomeServerConfig.load_config(
"Synapse frontend proxy", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
@@ -253,6 +258,6 @@ def start(config_options):
_base.start_worker_reactor("synapse-frontend-proxy", config)
if __name__ == "__main__":
if __name__ == '__main__':
with LoggingContext("main"):
start(sys.argv[1:])

Some files were not shown because too many files have changed in this diff.