Compare commits: v0.4.2...paul/schem (4 commits)

| Author | SHA1 | Date |
|---|---|---|
|  | 01e83c9680 |  |
|  | 9705706a7f |  |
|  | 801a551da1 |  |
|  | b8906b0ea8 |  |
.gitignore (vendored): 4 changes

@@ -18,13 +18,9 @@ htmlcov
demo/*.db
demo/*.log
demo/*.pid
demo/etc

graph/*.svg
graph/*.png
graph/*.dot

webclient/config.js
webclient/test/environment-protractor.js

uploads
CHANGES.rst: 194 changes

@@ -1,197 +1,3 @@
Changes in synapse 0.4.2 (2014-10-31)
=====================================

Homeserver:
 * Fix bugs where we did not notify users of correct presence updates.
 * Fix bug where we did not handle sub second event stream timeouts.

Webclient:
 * Add ability to click on messages to see JSON.
 * Add ability to redact messages.
 * Add ability to view and edit all room state JSON.
 * Handle incoming redactions.
 * Improve feedback on errors.
 * Fix bugs in mobile CSS.
 * Fix bugs with desktop notifications.

Changes in synapse 0.4.1 (2014-10-17)
=====================================
Webclient:
 * Fix bug with display of timestamps.

Changes in synapse 0.4.0 (2014-10-17)
=====================================
This release includes changes to the federation protocol and client-server API
that are not backwards compatible.

The Matrix specification has been moved to a separate git repository:
http://github.com/matrix-org/matrix-doc

You will also need an updated syutil and config. See UPGRADES.rst.

Homeserver:
 * Sign federation transactions to assert strong identity over federation.
 * Rename timestamp keys in PDUs and events from 'ts' and 'hsob_ts' to
   'origin_server_ts'.


Changes in synapse 0.3.4 (2014-09-25)
=====================================
This version adds support for using a TURN server. See docs/turn-howto.rst on
how to set one up.

Homeserver:
 * Add support for redaction of messages.
 * Fix bug where inviting a user on a remote home server could take up to
   20-30s.
 * Implement a get current room state API.
 * Add support for specifying and retrieving turn server configuration.

Webclient:
 * Add button to send messages to users from the home page.
 * Add support for using TURN for VoIP calls.
 * Show display name change messages.
 * Fix bug where the client didn't get the state of a newly joined room
   until after it had been refreshed.
 * Fix bugs with tab complete.
 * Fix bug where holding down the down arrow caused Chrome to chew 100% CPU.
 * Fix bug where desktop notifications occasionally used "Undefined" as the
   display name.
 * Fix more places where we sometimes saw room IDs incorrectly.
 * Fix bug which caused lag when entering text in the text box.

Changes in synapse 0.3.3 (2014-09-22)
=====================================

Homeserver:
 * Fix bug where you continued to get events for rooms you had left.

Webclient:
 * Add support for video calls with basic UI.
 * Fix bug where one to one chats were named after your display name rather
   than the other person's.
 * Fix bug which caused lag when typing in the textarea.
 * Refuse to run on browsers we know won't work.
 * Trigger pagination when joining new rooms.
 * Fix bug where we sometimes didn't display invitations in recents.
 * Automatically join room when accepting a VoIP call.
 * Disable outgoing and reject incoming calls on browsers we don't support
   VoIP in.
 * Don't display desktop notifications for messages in the room you are
   non-idle and speaking in.

Changes in synapse 0.3.2 (2014-09-18)
=====================================

Webclient:
 * Fix bug where an empty "bing words" list in old accounts didn't send
   notifications when it should have done.

Changes in synapse 0.3.1 (2014-09-18)
=====================================
This is a release to hotfix v0.3.0 to fix two regressions.

Webclient:
 * Fix a regression where we sometimes displayed duplicate events.
 * Fix a regression where we didn't immediately remove rooms you were
   banned in from the recents list.

Changes in synapse 0.3.0 (2014-09-18)
=====================================
See UPGRADE for information about changes to the client server API, including
breaking backwards compatibility with VoIP calls and registration API.

Homeserver:
 * When a user changes their displayname or avatar the server will now update
   all their join states to reflect this.
 * The server now adds "age" key to events to indicate how old they are. This
   is clock independent, so at no point does any server or webclient have to
   assume their clock is in sync with everyone else.
 * Fix bug where we didn't correctly pull in missing PDUs.
 * Fix bug where prev_content key wasn't always returned.
 * Add support for password resets.

Webclient:
 * Improve page content loading.
 * Join/parts now trigger desktop notifications.
 * Always show room aliases in the UI if one is present.
 * No longer show user-count in the recents side panel.
 * Add up & down arrow support to the text box for message sending to step
   through your sent history.
 * Don't display notifications for our own messages.
 * Emotes are now formatted correctly in desktop notifications.
 * The recents list now differentiates between public & private rooms.
 * Fix bug where when switching between rooms the pagination flickered before
   the view jumped to the bottom of the screen.
 * Add bing word support.

Registration API:
 * The registration API has been overhauled to function like the login API. In
   practice, this means registration requests must now include the following:
   'type':'m.login.password'. See UPGRADE for more information on this.
 * The 'user_id' key has been renamed to 'user' to better match the login API.
 * There is an additional login type: 'm.login.email.identity'.
 * The command client and web client have been updated to reflect these changes.

Changes in synapse 0.2.3 (2014-09-12)
=====================================

Homeserver:
 * Fix bug where we stopped sending events to remote home servers if a
   user from that home server left, even if there were some still in the
   room.
 * Fix bugs in the state conflict resolution where it was incorrectly
   rejecting events.

Webclient:
 * Display room names and topics.
 * Allow setting/editing of room names and topics.
 * Display information about rooms on the main page.
 * Handle ban and kick events in real time.
 * VoIP UI and reliability improvements.
 * Add glare support for VoIP.
 * Improvements to initial startup speed.
 * Don't display duplicate join events.
 * Local echo of messages.
 * Differentiate sending and sent of local echo.
 * Various minor bug fixes.

Changes in synapse 0.2.2 (2014-09-06)
=====================================

Homeserver:
 * When the server returns state events it now also includes the previous
   content.
 * Add support for inviting people when creating a new room.
 * Make the homeserver inform the room via `m.room.aliases` when a new alias
   is added for a room.
 * Validate `m.room.power_level` events.

Webclient:
 * Add support for captchas on registration.
 * Handle `m.room.aliases` events.
 * Asynchronously send messages and show a local echo.
 * Inform the UI when a message failed to send.
 * Only autoscroll on receiving a new message if the user was already at the
   bottom of the screen.
 * Add support for ban/kick reasons.

Changes in synapse 0.2.1 (2014-09-03)
=====================================

Homeserver:
 * Added support for signing up with a third party id.
 * Add synctl scripts.
 * Added rate limiting.
 * Add option to change the external address the content repo uses.
 * Presence bug fixes.

Webclient:
 * Added support for signing up with a third party id.
 * Added support for banning and kicking users.
 * Added support for displaying and setting ops.
 * Added support for room names.
 * Fix bugs with room membership event display.

Changes in synapse 0.2.0 (2014-09-02)
=====================================
This update changes many configuration options, updates the
README.rst: 176 changes

@@ -4,91 +4,63 @@ Introduction
Matrix is an ambitious new ecosystem for open federated Instant Messaging and
VoIP. The basics you need to know to get up and running are:

- Chatrooms are distributed and do not exist on any single server. Rooms
  can be found using aliases like ``#matrix:matrix.org`` or
  ``#test:localhost:8008`` or they can be ephemeral.

- Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
  you will normally refer to yourself and others using a 3PID: email
  address, phone number, etc rather than manipulating Matrix user IDs)
- Chatrooms are distributed and do not exist on any single server. Rooms
  can be found using names like ``#matrix:matrix.org`` or
  ``#test:localhost:8008`` or they can be ephemeral.

- Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
  you will normally refer to yourself and others using a 3PID: email
  address, phone number, etc rather than manipulating Matrix user IDs)

The overall architecture is::

  client <----> homeserver <=====================> homeserver <----> client
        https://somewhere.org/_matrix      https://elsewhere.net/_matrix

WARNING
=======

**Synapse is currently in a state of rapid development, and not all features
are yet functional. Critically, some security features are still in
development, which means Synapse can *not* be considered secure or reliable at
this point.** For instance:

- **SSL Certificates used by server-server federation are not yet validated.**
- **Room permissions are not yet enforced on traffic received via federation.**
- **Homeservers do not yet cryptographically sign their events to avoid
  tampering**
- Default configuration provides open signup to the service from the internet

Despite this, we believe Synapse is more than useful as a way for experimenting
and exploring Synapse, and the missing features will land shortly. **Until
then, please do *NOT* use Synapse for any remotely important or secure
communication.**

        https://matrix.org/_matrix         https://mydomain.net/_matrix

Quick Start
===========

System requirements:
- POSIX-compliant system (tested on Linux & OSX)
- Python 2.7

To get up and running:

- To simply play with an **existing** homeserver you can
  just go straight to http://matrix.org/alpha.

- To run your own **private** homeserver on localhost:8008, generate a basic
  config file: ``./synctl start`` will give you instructions on how to do this.
  For this purpose, you can use 'localhost' or your hostname as a server name.
  Once you've done so, running ``./synctl start`` again will start your private
  home server. You will find a webclient running at http://localhost:8008.
  Please use a recent Chrome or Firefox for now (or Safari if you don't need
  VoIP support).

- To run a **public** homeserver and let it exchange messages with other
  homeservers and participate in the global Matrix federation, you must expose
  port 8448 to the internet and edit homeserver.yaml to specify server_name
  (the public DNS entry for this server) and then run ``synctl start``. If you
  changed the server_name, you may need to move the old database
  (homeserver.db) out of the way first. Then come join ``#matrix:matrix.org``
  and say hi! :)

- To simply play with an **existing** homeserver you can
  just go straight to http://matrix.org/alpha.

- To run your own **private** homeserver on localhost:8008, install synapse
  with ``python setup.py develop --user`` and then run one with
  ``python synapse/app/homeserver.py`` - you will find a webclient running
  at http://localhost:8008 (use a recent Chrome, Safari or Firefox for now,
  please...)

- To make the homeserver **public** and let it exchange messages with
  other homeservers and participate in the overall Matrix federation, open
  up port 8448 and run ``python synapse/app/homeserver.py --host
  machine.my.domain.name``. Then come join ``#matrix:matrix.org`` and
  say hi! :)

For more detailed setup instructions, please see further down this document.


About Matrix
============

Matrix specifies a set of pragmatic RESTful HTTP JSON APIs as an open standard,
which handle:

- Creating and managing fully distributed chat rooms with no
  single points of control or failure
- Eventually-consistent cryptographically secure[1] synchronisation of room
  state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
  end-to-end encryption[2]
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
  Facebook accounts to authenticate, identify and discover users on Matrix.
- Placing 1:1 VoIP and Video calls
- Creating and managing fully distributed chat rooms with no
  single points of control or failure
- Eventually-consistent cryptographically secure[1] synchronisation of room
  state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
  end-to-end encryption[2]
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
  Facebook accounts to authenticate, identify and discover users on Matrix.
- Placing 1:1 VoIP and Video calls

These APIs are intended to be implemented on a wide range of servers, services
and clients, letting developers build messaging and VoIP functionality on top
of the entirely open Matrix ecosystem rather than using closed or proprietary
and clients, letting developers build messaging and VoIP functionality on top of
the entirely open Matrix ecosystem rather than using closed or proprietary
solutions. The hope is for Matrix to act as the building blocks for a new
generation of fully open and interoperable messaging and VoIP apps for the
internet.

@@ -103,17 +75,17 @@ In Matrix, every user runs one or more Matrix clients, which connect through to
a Matrix homeserver which stores all their personal chat history and user
account information - much as a mail client connects through to an IMAP/SMTP
server. Just like email, you can either run your own Matrix homeserver and
control and own your own communications and history or use one hosted by
someone else (e.g. matrix.org) - there is no single point of control or
mandatory service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.
control and own your own communications and history or use one hosted by someone
else (e.g. matrix.org) - there is no single point of control or mandatory
service provider in Matrix, unlike WhatsApp, Facebook, Hangouts, etc.

Synapse ships with two basic demo Matrix clients: webclient (a basic group chat
web client demo implemented in AngularJS) and cmdclient (a basic Python
command line utility which lets you easily see what the JSON APIs are up to).
commandline utility which lets you easily see what the JSON APIs are up to).

We'd like to invite you to take a look at the Matrix spec, try to run a
homeserver, and join the existing Matrix chatrooms already out there,
experiment with the APIs and the demo clients, and let us know your thoughts at
homeserver, and join the existing Matrix chatrooms already out there, experiment
with the APIs and the demo clients, and let us know your thoughts at
https://github.com/matrix-org/synapse/issues or at matrix@matrix.org.

Thanks for trying Matrix!

@@ -126,14 +98,15 @@ Thanks for trying Matrix!
Homeserver Installation
=======================

First, the dependencies need to be installed. Start by installing
First, the dependencies need to be installed. Start by installing
'python2.7-dev' and the various tools of the compiler toolchain.
N.B. synapse requires python 2.x where x >= 7

Installing prerequisites on Ubuntu::
Installing prerequisites on ubuntu::

  $ sudo apt-get install build-essential python2.7-dev libffi-dev

Installing prerequisites on Mac OS X::
Installing prerequisites on Mac OS X::

  $ xcode-select --install

@@ -141,25 +114,22 @@ The homeserver has a number of external dependencies, that are easiest
to install by making setup.py do so, in --user mode::

  $ python setup.py develop --user


You'll need a version of setuptools new enough to know about git, so you
may need to also run::
may need to also run:

  $ sudo apt-get install python-pip
  $ sudo pip install --upgrade setuptools


If you don't have access to github, then you may need to install ``syutil``
manually by checking it out and running ``python setup.py develop --user`` on
it too.

manually by checking it out and running ``python setup.py develop --user`` on it
too.

If you get errors about ``sodium.h`` being missing, you may also need to
manually install a newer PyNaCl via pip as setuptools installs an old one. Or
you can check PyNaCl out of git directly (https://github.com/pyca/pynacl) and
installing it. Installing PyNaCl using pip may also work (remember to remove
any other versions installed by setuptools in, for example, ~/.local/lib).

On OSX, if you encounter ``clang: error: unknown argument: '-mno-fused-madd'``
you will need to ``export CFLAGS=-Qunused-arguments``.
installing it. Installing PyNaCl using pip may also work (remember to remove any
other versions installed by setuptools in, for example, ~/.local/lib).

This will run a process of downloading and installing into your
user's .local/lib directory all of the required dependencies that are
@@ -182,7 +152,7 @@ Upgrading an existing homeserver

Before upgrading an existing homeserver to a new version, please refer to
UPGRADE.rst for any additional instructions.


Setting up Federation
=====================
@@ -192,14 +162,14 @@ be publicly visible on the internet, and they will need to know its host name.
You have two choices here, which will influence the form of your Matrix user
IDs:

1) Use the machine's own hostname as available on public DNS in the form of
   its A or AAAA records. This is easier to set up initially, perhaps for
   testing, but lacks the flexibility of SRV.
1) Use the machine's own hostname as available on public DNS in the form of its
   A or AAAA records. This is easier to set up initially, perhaps for testing,
   but lacks the flexibility of SRV.

2) Set up a SRV record for your domain name. This requires you create a SRV
   record in DNS, but gives the flexibility to run the server on your own
   choice of TCP port, on a machine that might not be the same name as the
   domain name.
2) Set up a SRV record for your domain name. This requires you create a SRV
   record in DNS, but gives the flexibility to run the server on your own
   choice of TCP port, on a machine that might not be the same name as the
   domain name (an example record is sketched below).
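For illustration only, a record of that shape (written here for a hypothetical
``example.com`` zone, with the homeserver reachable on ``synapse.example.com``
port 8448) might look like::

  _matrix._tcp.example.com. 3600 IN SRV 10 0 8448 synapse.example.com.

The priority (10) and weight (0) values here are arbitrary placeholders; with a
single record only the port and target hostname matter.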

For the first form, simply pass the required hostname (of the machine) as the
--host parameter::

@@ -210,11 +180,6 @@ For the first form, simply pass the required hostname (of the machine) as the
      --generate-config
  $ python synapse/app/homeserver.py --config-path homeserver.config

Alternatively, you can run synapse via synctl - running ``synctl start`` to
generate a homeserver.yaml config file, where you can then edit server-name to
specify machine.my.domain.name, and then set the actual server running again
with synctl start.

For the second form, first create your SRV record and publish it in DNS. This
needs to be named _matrix._tcp.YOURDOMAIN, and point at at least one hostname
and port where the server is running. (At the current time synapse does not
@@ -254,7 +219,7 @@ http://localhost:8080. Simply run::
Running The Demo Web Client
===========================

The homeserver runs a web client by default at https://localhost:8448/.
The homeserver runs a web client by default at http://localhost:8080.

If this is the first time you have used the client from that browser (it uses
HTML5 local storage to remember its config), you will need to log in to your
@@ -274,8 +239,8 @@ account. Your name will take the form of::

Specify your desired localpart in the topmost box of the "Register for an
account" form, and click the "Register" button. Hostnames can contain ports if
required due to lack of SRV records (e.g. @matthew:localhost:8448 on an
internal synapse sandbox running on localhost)
required due to lack of SRV records (e.g. @matthew:localhost:8080 on an internal
synapse sandbox running on localhost)


Logging In To An Existing Account
@@ -290,9 +255,9 @@ Identity Servers

The job of authenticating 3PIDs and tracking which 3PIDs are associated with a
given Matrix user is very security-sensitive, as there is obvious risk of spam
if it is too easy to sign up for Matrix accounts or harvest 3PID data.
Meanwhile the job of publishing the end-to-end encryption public keys for
Matrix users is also very security-sensitive for similar reasons.
if it is too easy to sign up for Matrix accounts or harvest 3PID data. Meanwhile
the job of publishing the end-to-end encryption public keys for Matrix users is
also very security-sensitive for similar reasons.

Therefore the role of managing trusted identity in the Matrix ecosystem is
farmed out to a cluster of known trusted ecosystem partners, who run 'Matrix
@@ -301,8 +266,7 @@ track 3PID logins and publish end-user public keys.

It's currently early days for identity servers as Matrix is not yet using 3PIDs
as the primary means of identity and E2E encryption is not complete. As such,
we are running a single identity server (http://matrix.org:8090) at the current
time.
we're not yet running an identity server in public.


Where's the spec?!
UPGRADE.rst: 44 changes

@@ -1,47 +1,3 @@
Upgrading to v0.4.0
===================

This release needs an updated syutil version. Run::

  python setup.py develop

You will also need to upgrade your configuration as the signing key format has
changed. Run::

  python -m synapse.app.homeserver --config-path <CONFIG> --generate-config


Upgrading to v0.3.0
===================

This registration API now closely matches the login API. This introduces a bit
more backwards and forwards between the HS and the client, but this improves
the overall flexibility of the API. You can now GET on /register to retrieve a
list of valid registration flows. Upon choosing one, they are submitted in the
same way as login, e.g::

  {
    type: m.login.password,
    user: foo,
    password: bar
  }

The default HS supports 2 flows, with and without Identity Server email
authentication. Enabling captcha on the HS will add in an extra step to all
flows: ``m.login.recaptcha`` which must be completed before you can transition
to the next stage. There is a new login type: ``m.login.email.identity`` which
contains the ``threepidCreds`` key which were previously sent in the original
register request. For more information on this, see the specification.
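As a rough sketch of that GET-then-POST flow (illustrative only: it assumes the
third-party ``requests`` library, the local home server used in
docs/client-server/howto.rst, and placeholder credentials)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  # Ask the home server which registration flows it supports.
  flows = requests.get(BASE + "/register").json()["flows"]

  # Pick the plain password flow if it is offered, and submit it in the
  # same shape as a login request.
  if any(f["type"] == "m.login.password" and "stages" not in f for f in flows):
      creds = requests.post(BASE + "/register", json={
          "type": "m.login.password",
          "user": "foo",
          "password": "bar",
      }).json()
      print(creds["user_id"], creds["access_token"])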

Web Client
----------

The VoIP specification has changed between v0.2.0 and v0.3.0. Users should
refresh any browser tabs to get the latest web client code. Users on
v0.2.0 of the web client will not be able to call those on v0.3.0 and
vice versa.


Upgrading to v0.2.0
===================

WISHLIST.rst: new file (7 lines)

@@ -0,0 +1,7 @@
Broad-sweeping stuff which would be nice to have
================================================

- Additional SQL backends beyond sqlite
- homeserver implementation in go
- homeserver implementation in node.js
- client SDKs
@@ -145,50 +145,35 @@ class SynapseCmd(cmd.Cmd):
        <noupdate> : Do not automatically clobber config values.
        """
        args = self._parse(line, ["userid", "noupdate"])
        path = "/register"

        password = None
        pwd = None
        pwd2 = "_"
        while pwd != pwd2:
            pwd = getpass.getpass("Type a password for this user: ")
            pwd = getpass.getpass("(Optional) Type a password for this user: ")
            if len(pwd) == 0:
                print "Not using a password for this user."
                break
            pwd2 = getpass.getpass("Retype the password: ")
            if pwd != pwd2 or len(pwd) == 0:
            if pwd != pwd2:
                print "Password mismatch."
                pwd = None
            else:
                password = pwd

        body = {
            "type": "m.login.password"
        }
        body = {}
        if "userid" in args:
            body["user"] = args["userid"]
            body["user_id"] = args["userid"]
        if password:
            body["password"] = password

        reactor.callFromThread(self._do_register, body,
        reactor.callFromThread(self._do_register, "POST", path, body,
                               "noupdate" not in args)

    @defer.inlineCallbacks
    def _do_register(self, data, update_config):
        # check the registration flows
        url = self._url() + "/register"
        json_res = yield self.http_client.do_request("GET", url)
        print json.dumps(json_res, indent=4)

        passwordFlow = None
        for flow in json_res["flows"]:
            if flow["type"] == "m.login.recaptcha" or ("stages" in flow and "m.login.recaptcha" in flow["stages"]):
                print "Unable to register: Home server requires captcha."
                return
            if flow["type"] == "m.login.password" and "stages" not in flow:
                passwordFlow = flow
                break

        if not passwordFlow:
            return

        json_res = yield self.http_client.do_request("POST", url, data=data)
    def _do_register(self, method, path, data, update_config):
        url = self._url() + path
        json_res = yield self.http_client.do_request(method, url, data=data)
        print json.dumps(json_res, indent=4)
        if update_config and "user_id" in json_res:
            self.config["user"] = json_res["user_id"]
@@ -14,4 +14,3 @@ fi
find "$DIR" -name "*.log" -delete
find "$DIR" -name "*.db" -delete

rm -rf $DIR/etc

@@ -8,14 +8,6 @@ cd "$DIR/.."

mkdir -p demo/etc

# Check the --no-rate-limit param
PARAMS=""
if [ $# -eq 1 ]; then
    if [ $1 = "--no-rate-limit" ]; then
        PARAMS="--rc-messages-per-second 1000 --rc-message-burst-count 1000"
    fi
fi

for port in 8080 8081 8082; do
    echo "Starting server on port $port... "

@@ -31,8 +23,7 @@ for port in 8080 8081 8082; do
        -d "$DIR/$port.db" \
        -D --pid-file "$DIR/$port.pid" \
        --manhole $((port + 1000)) \
        --tls-dh-params-path "demo/demo.tls.dh" \
        $PARAMS
        --tls-dh-params-path "demo/demo.tls.dh"

    python -m synapse.app.homeserver \
        --config-path "demo/etc/$port.config" \

@@ -1,6 +0,0 @@
All matrix-generic documentation now lives in its own project at

github.com/matrix-org/matrix-doc.git

Only Synapse implementation-specific documentation lives here now
(together with some older stuff that will shortly be migrated over to matrix-doc)
docs/client-server/howto.rst: new file (636 lines)

@@ -0,0 +1,636 @@
.. TODO kegan
  Room config (specifically: message history,
  public rooms). /register seems super simplistic compared to /login, maybe it
  would be better if /register used the same technique as /login? /register should
  be "user" not "user_id".


How to use the client-server API
================================

This guide focuses on how the client-server APIs *provided by the reference
home server* can be used. Since this is specific to a home server
implementation, there may be variations in relation to registering/logging in
which are not covered in extensive detail in this guide.

If you haven't already, get a home server up and running on
``http://localhost:8008``.


Accounts
========
Before you can send and receive messages, you must **register** for an account.
If you already have an account, you must **login** to it.

`Try out the fiddle`__

.. __: http://jsfiddle.net/4q2jyxng/

Registration
------------
The aim of registration is to get a user ID and access token which you will need
when accessing other APIs::

  curl -XPOST -d '{"user_id":"example", "password":"wordpass"}' "http://localhost:8008/_matrix/client/api/v1/register"

  {
    "access_token": "QGV4YW1wbGU6bG9jYWxob3N0.AqdSzFmFYrLrTmteXc",
    "home_server": "localhost",
    "user_id": "@example:localhost"
  }

NB: If a ``user_id`` is not specified, one will be randomly generated for you.
If you do not specify a ``password``, you will be unable to login to the account
if you forget the ``access_token``.

Implementation note: The matrix specification does not enforce how users
register with a server. It just specifies the URL path and absolute minimum
keys. The reference home server uses a username/password to authenticate users,
but other home servers may use different methods.
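As a quick illustration of the same request from code (a minimal sketch,
assuming the third-party ``requests`` library and the placeholder credentials
used above)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  # Equivalent of the curl example above; both fields are optional.
  creds = requests.post(BASE + "/register",
                        json={"user_id": "example", "password": "wordpass"}).json()

  access_token = creds["access_token"]  # needed as a query parameter later on
  user_id = creds["user_id"]            # e.g. "@example:localhost"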

Login
-----
The aim when logging in is to get an access token for your existing user ID::

  curl -XGET "http://localhost:8008/_matrix/client/api/v1/login"

  {
    "flows": [
      {
        "type": "m.login.password"
      }
    ]
  }

  curl -XPOST -d '{"type":"m.login.password", "user":"example", "password":"wordpass"}' "http://localhost:8008/_matrix/client/api/v1/login"

  {
    "access_token": "QGV4YW1wbGU6bG9jYWxob3N0.vRDLTgxefmKWQEtgGd",
    "home_server": "localhost",
    "user_id": "@example:localhost"
  }

Implementation note: Different home servers may implement different methods for
logging in to an existing account. In order to check that you know how to login
to this home server, you must perform a ``GET`` first and make sure you
recognise the login type. If you do not know how to login, you can
``GET /login/fallback`` which will return a basic webpage which you can use to
login. The reference home server implementation supports username/password login,
but other home servers may support different login methods (e.g. OAuth2).
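That GET-then-POST check can be sketched in Python as well (again assuming the
``requests`` library and the example account registered above)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"

  # See which login flows the home server advertises before attempting one.
  flows = requests.get(BASE + "/login").json()["flows"]

  if any(flow["type"] == "m.login.password" for flow in flows):
      resp = requests.post(BASE + "/login", json={
          "type": "m.login.password",
          "user": "example",
          "password": "wordpass",
      }).json()
      access_token = resp["access_token"]
  else:
      # Fall back to the server-provided login page described above.
      print("No password login offered; use GET /login/fallback instead.")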


Communicating
=============

In order to communicate with another user, you must **create a room** with that
user and **send a message** to that room.

`Try out the fiddle`__

.. __: http://jsfiddle.net/zL3zto9g/

Creating a room
---------------
If you want to send a message to someone, you have to be in a room with them. To
create a room::

  curl -XPOST -d '{"room_alias_name":"tutorial"}' "http://localhost:8008/_matrix/client/api/v1/createRoom?access_token=YOUR_ACCESS_TOKEN"

  {
    "room_alias": "#tutorial:localhost",
    "room_id": "!CvcvRuDYDzTOzfKKgh:localhost"
  }

The "room alias" is a human-readable string which can be shared with other users
so they can join a room, rather than the room ID which is a randomly generated
string. You can have multiple room aliases per room.

.. TODO(kegan)
  How to add/remove aliases from an existing room.


Sending messages
----------------
You can now send messages to this room::

  curl -XPOST -d '{"msgtype":"m.text", "body":"hello"}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/send/m.room.message?access_token=YOUR_ACCESS_TOKEN"

  {
    "event_id": "YUwRidLecu"
  }

The event ID returned is a unique ID which identifies this message.

NB: There are no limitations to the types of messages which can be exchanged.
The only requirement is that ``"msgtype"`` is specified. The Matrix
specification outlines the following standard types: ``m.text``, ``m.image``,
``m.audio``, ``m.video``, ``m.location``, ``m.emote``. See the specification for
more information on these types.
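Putting the two requests together in Python (a sketch only: it assumes the
``requests`` library and a valid ``YOUR_ACCESS_TOKEN``, and shows why the room
ID is percent-encoded in the URL above)::

  import requests
  try:
      from urllib.parse import quote   # Python 3
  except ImportError:
      from urllib import quote         # Python 2.7, as used by synapse itself

  BASE = "http://localhost:8008/_matrix/client/api/v1"
  TOKEN = "YOUR_ACCESS_TOKEN"

  # Create a room with an alias, as in the curl example further up.
  room = requests.post(BASE + "/createRoom",
                       params={"access_token": TOKEN},
                       json={"room_alias_name": "tutorial"}).json()

  # Room IDs contain '!' and ':' so they must be escaped when used in a path.
  room_path = quote(room["room_id"], safe="")
  event = requests.post(BASE + "/rooms/" + room_path + "/send/m.room.message",
                        params={"access_token": TOKEN},
                        json={"msgtype": "m.text", "body": "hello"}).json()
  print(event["event_id"])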

Users and rooms
===============

Each room can be configured to allow or disallow certain rules. In particular,
these rules may specify if you require an **invitation** from someone already in
the room in order to **join the room**. In addition, you may also be able to
join a room **via a room alias** if one was set up.

`Try out the fiddle`__

.. __: http://jsfiddle.net/7fhotf1b/

Inviting a user to a room
-------------------------
You can directly invite a user to a room like so::

  curl -XPOST -d '{"user_id":"@myfriend:localhost"}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/invite?access_token=YOUR_ACCESS_TOKEN"

This informs ``@myfriend:localhost`` of the room ID
``!CvcvRuDYDzTOzfKKgh:localhost`` and allows them to join the room.

Joining a room via an invite
----------------------------
If you receive an invite, you can join the room::

  curl -XPOST -d '{}' "http://localhost:8008/_matrix/client/api/v1/rooms/%21CvcvRuDYDzTOzfKKgh%3Alocalhost/join?access_token=YOUR_ACCESS_TOKEN"

NB: Only the person invited (``@myfriend:localhost``) can change the membership
state to ``"join"``. Repeatedly joining a room does nothing.

Joining a room via an alias
---------------------------
Alternatively, if you know the room alias for this room and the room config
allows it, you can directly join a room via the alias::

  curl -XPOST -d '{}' "http://localhost:8008/_matrix/client/api/v1/join/%23tutorial%3Alocalhost?access_token=YOUR_ACCESS_TOKEN"

  {
    "room_id": "!CvcvRuDYDzTOzfKKgh:localhost"
  }

You will need to use the room ID when sending messages, not the room alias.

NB: If the room is configured to be an invite-only room, you will still require
an invite in order to join the room even though you know the room alias. As a
result, it is more common to see a room alias in relation to public rooms,
which do not require invitations.

Getting events
==============
An event is some interesting piece of data that a client may be interested in.
It can be a message in a room, a room invite, etc. There are many different ways
of getting events, depending on what the client already knows.

`Try out the fiddle`__

.. __: http://jsfiddle.net/vw11mg37/

Getting all state
-----------------
If the client doesn't know any information on the rooms the user is
invited/joined on, they can get all the user's state for all rooms::

  curl -XGET "http://localhost:8008/_matrix/client/api/v1/initialSync?access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"end": "s39_18_0",
|
||||
"presence": [
|
||||
{
|
||||
"content": {
|
||||
"last_active_ago": 1061436,
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
"type": "m.presence"
|
||||
}
|
||||
],
|
||||
"rooms": [
|
||||
{
|
||||
"membership": "join",
|
||||
"messages": {
|
||||
"chunk": [
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "RJbPMtCutf",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409665586730,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"body": "hello",
|
||||
"hsob_ts": 1409665660439,
|
||||
"msgtype": "m.text"
|
||||
},
|
||||
"event_id": "YUwRidLecu",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"ts": 1409665660439,
|
||||
"type": "m.room.message",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join",
|
||||
"prev": "join"
|
||||
},
|
||||
"event_id": "KWwdDjNZnm",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666551582,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
],
|
||||
"end": "s39_18_0",
|
||||
"start": "t1-11_18_0"
|
||||
},
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state": [
|
||||
{
|
||||
"content": {
|
||||
"creator": "@example:localhost"
|
||||
},
|
||||
"event_id": "dMUoqVTZca",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.create",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
This returns all the room information the user is invited/joined on, as well as
|
||||
all of the presences relevant for these rooms. This can be a LOT of data. You
|
||||
may just want the most recent event for each room. This can be achieved by
|
||||
applying query parameters to ``limit`` this request::
|
||||
|
||||
curl -XGET "http://localhost:8008/_matrix/client/api/v1/initialSync?limit=1&access_token=YOUR_ACCESS_TOKEN"
|
||||
|
||||
{
|
||||
"end": "s39_18_0",
|
||||
"presence": [
|
||||
{
|
||||
"content": {
|
||||
"last_active_ago": 1279484,
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
"type": "m.presence"
|
||||
}
|
||||
],
|
||||
"rooms": [
|
||||
{
|
||||
"membership": "join",
|
||||
"messages": {
|
||||
"chunk": [
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
],
|
||||
"end": "s39_18_0",
|
||||
"start": "t10-30_18_0"
|
||||
},
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state": [
|
||||
{
|
||||
"content": {
|
||||
"creator": "@example:localhost"
|
||||
},
|
||||
"event_id": "dMUoqVTZca",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.create",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"@example:localhost": 10,
|
||||
"default": 0
|
||||
},
|
||||
"event_id": "wAumPSTsWF",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.power_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"join_rule": "public"
|
||||
},
|
||||
"event_id": "jrLVqKHKiI",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.join_rules",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 10
|
||||
},
|
||||
"event_id": "WpmTgsNWUZ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.add_state_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"level": 0
|
||||
},
|
||||
"event_id": "qUMBJyKsTQ",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.send_event_level",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"ban_level": 5,
|
||||
"kick_level": 5
|
||||
},
|
||||
"event_id": "YAaDmKvoUW",
|
||||
"required_power_level": 10,
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "",
|
||||
"ts": 1409665585188,
|
||||
"type": "m.room.ops_levels",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"membership": "invite"
|
||||
},
|
||||
"event_id": "YjNuBKnPsb",
|
||||
"membership": "invite",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@myfriend:localhost",
|
||||
"ts": 1409666426819,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
},
|
||||
{
|
||||
"content": {
|
||||
"avatar_url": null,
|
||||
"displayname": null,
|
||||
"membership": "join"
|
||||
},
|
||||
"event_id": "JFLVteSvQc",
|
||||
"membership": "join",
|
||||
"room_id": "!MkDbyRqnvTYnoxjLYx:localhost",
|
||||
"state_key": "@example:localhost",
|
||||
"ts": 1409666587265,
|
||||
"type": "m.room.member",
|
||||
"user_id": "@example:localhost"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
Getting live state
------------------
Once you know which rooms the client has previously interacted with, you need to
listen for incoming events. This can be done like so::

  curl -XGET "http://localhost:8008/_matrix/client/api/v1/events?access_token=YOUR_ACCESS_TOKEN"

  {
    "chunk": [],
    "end": "s39_18_0",
    "start": "s39_18_0"
  }

This will block waiting for an incoming event, timing out after several seconds.
Even if there are no new events (as in the example above), there will be some
pagination stream response keys. The client should make subsequent requests
using the value of the ``"end"`` key (in this case ``s39_18_0``) as the ``from``
query parameter, e.g.
``http://localhost:8008/_matrix/client/api/v1/events?access_token=YOUR_ACCESS_TOKEN&from=s39_18_0``.
This value should be stored so that when your app reopens after a period of
inactivity, you can resume from where you got up to in the event stream. If it
has been a long period of inactivity, there may be LOTS of events waiting for
the user. In this case, you may wish to get all state instead and then resume
getting live state from a newer end token.

NB: The timeout can be changed by adding a ``timeout`` query parameter, which is
in milliseconds. A timeout of 0 will not block.
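A simple polling loop that follows this rule might look like the following
(a sketch, assuming the ``requests`` library and a valid access token; error
handling and durable token storage are left out)::

  import requests

  BASE = "http://localhost:8008/_matrix/client/api/v1"
  TOKEN = "YOUR_ACCESS_TOKEN"

  def stream_events(from_token=None):
      """Long-poll /events, feeding each response's "end" token back as "from"."""
      while True:
          params = {"access_token": TOKEN, "timeout": 30000}
          if from_token:
              params["from"] = from_token
          chunk = requests.get(BASE + "/events", params=params).json()
          for event in chunk.get("chunk", []):
              print(event["type"], event.get("room_id"))
          # Store this somewhere persistent so the client can resume later.
          from_token = chunk["end"]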


Example application
-------------------
The following example demonstrates registration and login, live event streaming,
creating and joining rooms, sending messages, getting member lists and getting
historical messages for a room. This covers most functionality of a messaging
application.

`Try out the fiddle`__

.. __: http://jsfiddle.net/uztL3yme/

46
docs/client-server/swagger_matrix/api-docs
Normal file
46
docs/client-server/swagger_matrix/api-docs
Normal file
@@ -0,0 +1,46 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"apis": [
|
||||
{
|
||||
"path": "-login",
|
||||
"description": "Login operations"
|
||||
},
|
||||
{
|
||||
"path": "-registration",
|
||||
"description": "Registration operations"
|
||||
},
|
||||
{
|
||||
"path": "-rooms",
|
||||
"description": "Room operations"
|
||||
},
|
||||
{
|
||||
"path": "-profile",
|
||||
"description": "Profile operations"
|
||||
},
|
||||
{
|
||||
"path": "-presence",
|
||||
"description": "Presence operations"
|
||||
},
|
||||
{
|
||||
"path": "-events",
|
||||
"description": "Event operations"
|
||||
},
|
||||
{
|
||||
"path": "-directory",
|
||||
"description": "Directory operations"
|
||||
}
|
||||
],
|
||||
"authorizations": {
|
||||
"token": {
|
||||
"scopes": []
|
||||
}
|
||||
},
|
||||
"info": {
|
||||
"title": "Matrix Client-Server API Reference",
|
||||
"description": "This contains the client-server API for the reference implementation of the home server",
|
||||
"termsOfServiceUrl": "http://matrix.org",
|
||||
"license": "Apache 2.0",
|
||||
"licenseUrl": "http://www.apache.org/licenses/LICENSE-2.0.html"
|
||||
}
|
||||
}
|
||||
85
docs/client-server/swagger_matrix/api-docs-directory
Normal file
85
docs/client-server/swagger_matrix/api-docs-directory
Normal file
@@ -0,0 +1,85 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/directory",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/directory/room/{roomAlias}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the room ID corresponding to this room alias.",
|
||||
"notes": "Volatile: This API is likely to change.",
|
||||
"type": "DirectoryResponse",
|
||||
"nickname": "get_room_id_for_alias",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAlias",
|
||||
"description": "The room alias.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Create a new mapping from room alias to room ID.",
|
||||
"notes": "Volatile: This API is likely to change.",
|
||||
"type": "void",
|
||||
"nickname": "add_room_alias",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAlias",
|
||||
"description": "The room alias to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The room ID to set.",
|
||||
"required": true,
|
||||
"type": "RoomAliasRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"DirectoryResponse": {
|
||||
"id": "DirectoryResponse",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified room ID.",
|
||||
"required": true
|
||||
},
|
||||
"servers": {
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
},
|
||||
"description": "A list of servers that know about this room.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomAliasRequest": {
|
||||
"id": "RoomAliasRequest",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID to map the alias to.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
247
docs/client-server/swagger_matrix/api-docs-events
Normal file
247
docs/client-server/swagger_matrix/api-docs-events
Normal file
@@ -0,0 +1,247 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/events",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/events",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Listen on the event stream",
|
||||
"notes": "This can only be done by the logged in user. This will block until an event is received, or until the timeout is reached.",
|
||||
"type": "PaginationChunk",
|
||||
"nickname": "get_event_stream",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to stream from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "timeout",
|
||||
"description": "The maximum time in milliseconds to wait for an event.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad pagination token."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/events/{eventId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get information about a single event.",
|
||||
"notes": "Get information about a single event.",
|
||||
"type": "Event",
|
||||
"nickname": "get_event",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "eventId",
|
||||
"description": "The event ID to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Event not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/initialSync",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get this user's current state.",
|
||||
"notes": "Get this user's current state.",
|
||||
"type": "InitialSyncResponse",
|
||||
"nickname": "initial_sync",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of messages to return for each room.",
|
||||
"type": "integer",
|
||||
"paramType": "query",
|
||||
"required": false
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/publicRooms",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of publicly visible rooms.",
|
||||
"type": "PublicRoomsPaginationChunk",
|
||||
"nickname": "get_public_room_list"
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"PaginationChunk": {
|
||||
"id": "PaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "An array of events.",
|
||||
"required": true,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"Event": {
|
||||
"id": "Event",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "An ID which uniquely identifies this event.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room in which this event occurred.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"PublicRoomInfo": {
|
||||
"id": "PublicRoomInfo",
|
||||
"properties": {
|
||||
"aliases": {
|
||||
"type": "array",
|
||||
"description": "A list of room aliases for this room.",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
}
|
||||
},
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "The name of the room, as given by the m.room.name state event."
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID for this public room.",
|
||||
"required": true
|
||||
},
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "The topic of this room, as given by the m.room.topic state event."
|
||||
}
|
||||
}
|
||||
},
|
||||
"PublicRoomsPaginationChunk": {
|
||||
"id": "PublicRoomsPaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of public room data.",
|
||||
"required": true,
|
||||
"items": {
|
||||
"$ref": "PublicRoomInfo"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncResponse": {
|
||||
"id": "InitialSyncResponse",
|
||||
"properties": {
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A streaming token which can be used with /events to continue from this snapshot of data.",
|
||||
"required": true
|
||||
},
|
||||
"presence": {
|
||||
"type": "array",
|
||||
"description": "A list of presence events.",
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
},
|
||||
"required": false
|
||||
},
|
||||
"rooms": {
|
||||
"type": "array",
|
||||
"description": "A list of initial sync room data.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "InitialSyncRoomData"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncRoomData": {
|
||||
"id": "InitialSyncRoomData",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "This user's membership state in this room.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The ID of this room.",
|
||||
"required": true
|
||||
},
|
||||
"messages": {
|
||||
"type": "PaginationChunk",
|
||||
"description": "The most recent messages for this room, governed by the limit parameter.",
|
||||
"required": false
|
||||
},
|
||||
"state": {
|
||||
"type": "array",
|
||||
"description": "A list of state events representing the current state of the room.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
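The /events endpoint above is a long-poll: a call blocks until an event arrives
or the timeout expires. A minimal usage sketch in Python (not part of the file
above): it assumes a homeserver on localhost:8008, the third-party ``requests``
library, and that the access token is passed as an ``access_token`` query
parameter, which this file does not spell out::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"
    TOKEN = "YOUR_ACCESS_TOKEN"  # hypothetical token obtained via /login

    def stream_events(from_token=None, timeout_ms=30000):
        # Blocks for up to timeout_ms, then returns a PaginationChunk.
        params = {"timeout": timeout_ms, "access_token": TOKEN}
        if from_token:
            params["from"] = from_token
        resp = requests.get(BASE + "/events", params=params)
        resp.raise_for_status()
        return resp.json()

    # Take an initial snapshot, then follow the stream from its "end" token.
    sync = requests.get(BASE + "/initialSync",
                        params={"limit": 10, "access_token": TOKEN}).json()
    token = sync["end"]
    while True:
        chunk = stream_events(from_token=token)
        for event in chunk.get("chunk", []):
            print(event["event_id"], event.get("room_id"))
        token = chunk["end"]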
|
||||
120
docs/client-server/swagger_matrix/api-docs-login
Normal file
@@ -0,0 +1,120 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"apis": [
|
||||
{
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"nickname": "get_login_info",
|
||||
"notes": "All login stages MUST be mentioned if there is >1 login type.",
|
||||
"summary": "Get the login mechanism to use when logging in.",
|
||||
"type": "LoginFlows"
|
||||
},
|
||||
{
|
||||
"method": "POST",
|
||||
"nickname": "submit_login",
|
||||
"notes": "If this is part of a multi-stage login, there MUST be a 'session' key.",
|
||||
"parameters": [
|
||||
{
|
||||
"description": "A login submission",
|
||||
"name": "body",
|
||||
"paramType": "body",
|
||||
"required": true,
|
||||
"type": "LoginSubmission"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad login type"
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Missing JSON keys"
|
||||
}
|
||||
],
|
||||
"summary": "Submit a login action.",
|
||||
"type": "LoginResult"
|
||||
}
|
||||
],
|
||||
"path": "/login"
|
||||
}
|
||||
],
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"models": {
|
||||
"LoginFlows": {
|
||||
"id": "LoginFlows",
|
||||
"properties": {
|
||||
"flows": {
|
||||
"description": "A list of valid login flows.",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "LoginInfo"
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginInfo": {
|
||||
"id": "LoginInfo",
|
||||
"properties": {
|
||||
"stages": {
|
||||
"description": "Multi-stage login only: An array of all the login types required to login.",
|
||||
"items": {
|
||||
"$ref": "string"
|
||||
},
|
||||
"type": "array"
|
||||
},
|
||||
"type": {
|
||||
"description": "The login type that must be used when logging in.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginResult": {
|
||||
"id": "LoginResult",
|
||||
"properties": {
|
||||
"access_token": {
|
||||
"description": "The access token for this user's login if this is the final stage of the login process.",
|
||||
"type": "string"
|
||||
},
|
||||
"user_id": {
|
||||
"description": "The user's fully-qualified user ID.",
|
||||
"type": "string"
|
||||
},
|
||||
"next": {
|
||||
"description": "Multi-stage login only: The next login type to submit.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage login only: The session token to send when submitting the next login type.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"LoginSubmission": {
|
||||
"id": "LoginSubmission",
|
||||
"properties": {
|
||||
"type": {
|
||||
"description": "The type of login being submitted.",
|
||||
"type": "string"
|
||||
},
|
||||
"session": {
|
||||
"description": "Multi-stage login only: The session token from an earlier login stage.",
|
||||
"type": "string"
|
||||
},
|
||||
"_login_type_defined_keys_": {
|
||||
"description": "Keys as defined by the specified login type, e.g. \"user\", \"password\""
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"resourcePath": "/login",
|
||||
"swaggerVersion": "1.2"
|
||||
}
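A sketch of the password login flow these models describe, assuming a
homeserver on localhost:8008 and that the ``m.login.password`` type takes the
``user`` and ``password`` keys mentioned as examples above (Python, using the
third-party ``requests`` library)::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"

    # Discover the supported login flows (LoginFlows model).
    print(requests.get(BASE + "/login").json())

    # Submit a single-stage password login (LoginSubmission model).
    result = requests.post(BASE + "/login", json={
        "type": "m.login.password",
        "user": "@alice:localhost",   # hypothetical user
        "password": "secret",         # hypothetical password
    }).json()

    # LoginResult: access_token and user_id are present on success.
    access_token = result.get("access_token")
    user_id = result.get("user_id")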
|
||||
|
||||
164
docs/client-server/swagger_matrix/api-docs-presence
Normal file
@@ -0,0 +1,164 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/presence",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/presence/{userId}/status",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Update this user's presence state.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "update_presence",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new presence state",
|
||||
"required": true,
|
||||
"type": "PresenceUpdate",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get this user's presence state.",
|
||||
"notes": "Get this user's presence state.",
|
||||
"type": "PresenceUpdate",
|
||||
"nickname": "get_presence",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/presence/list/{userId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Retrieve a list of presences for all of this user's friends.",
|
||||
"notes": "",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "Presence"
|
||||
},
|
||||
"nickname": "get_presence_list",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence list to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Add or remove users from this presence list.",
|
||||
"notes": "Add or remove users from this presence list.",
|
||||
"type": "void",
|
||||
"nickname": "modify_presence_list",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose presence list is being modified.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The modifications to make to this presence list.",
|
||||
"required": true,
|
||||
"type": "PresenceListModifications",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"PresenceUpdate": {
|
||||
"id": "PresenceUpdate",
|
||||
"properties": {
|
||||
"presence": {
|
||||
"type": "string",
|
||||
"description": "Enum: The presence state.",
|
||||
"enum": [
|
||||
"offline",
|
||||
"unavailable",
|
||||
"online",
|
||||
"free_for_chat"
|
||||
]
|
||||
},
|
||||
"status_msg": {
|
||||
"type": "string",
|
||||
"description": "The user-defined message associated with this presence state."
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"Presence"
|
||||
]
|
||||
},
|
||||
"Presence": {
|
||||
"id": "Presence",
|
||||
"properties": {
|
||||
"last_active_ago": {
|
||||
"type": "integer",
|
||||
"format": "int64",
|
||||
"description": "The last time this user performed an action on their home server."
|
||||
},
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully qualified user ID"
|
||||
}
|
||||
}
|
||||
},
|
||||
"PresenceListModifications": {
|
||||
"id": "PresenceListModifications",
|
||||
"properties": {
|
||||
"invite": {
|
||||
"type": "array",
|
||||
"description": "A list of user IDs to add to the list.",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"description": "A fully qualified user ID."
|
||||
}
|
||||
},
|
||||
"drop": {
|
||||
"type": "array",
|
||||
"description": "A list of user IDs to remove from the list.",
|
||||
"items": {
|
||||
"type": "string",
|
||||
"description": "A fully qualified user ID."
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
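A short usage sketch for the presence endpoints above, under the same
assumptions as the other sketches on this page (localhost:8008 homeserver,
``requests`` library, ``access_token`` query parameter)::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"
    PARAMS = {"access_token": "YOUR_ACCESS_TOKEN"}  # hypothetical token
    USER = "@alice:localhost"                       # hypothetical user ID

    # PUT a PresenceUpdate for ourselves.
    requests.put(BASE + "/presence/%s/status" % USER, params=PARAMS,
                 json={"presence": "unavailable",
                       "status_msg": "in a meeting"}).raise_for_status()

    # GET it back (returns a PresenceUpdate).
    print(requests.get(BASE + "/presence/%s/status" % USER,
                       params=PARAMS).json())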
|
||||
122
docs/client-server/swagger_matrix/api-docs-profile
Normal file
@@ -0,0 +1,122 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/profile",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"apis": [
|
||||
{
|
||||
"path": "/profile/{userId}/displayname",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set a display name.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "set_display_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new display name for this user.",
|
||||
"required": true,
|
||||
"type": "DisplayName",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose display name to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a display name.",
|
||||
"notes": "This can be done by anyone.",
|
||||
"type": "DisplayName",
|
||||
"nickname": "get_display_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose display name to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/profile/{userId}/avatar_url",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set an avatar URL.",
|
||||
"notes": "This can only be done by the logged in user.",
|
||||
"type": "void",
|
||||
"nickname": "set_avatar_url",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new avatar url for this user.",
|
||||
"required": true,
|
||||
"type": "AvatarUrl",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose avatar url to set.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get an avatar url.",
|
||||
"notes": "This can be done by anyone.",
|
||||
"type": "AvatarUrl",
|
||||
"nickname": "get_avatar_url",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose avatar url to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"DisplayName": {
|
||||
"id": "DisplayName",
|
||||
"properties": {
|
||||
"displayname": {
|
||||
"type": "string",
|
||||
"description": "The textual display name"
|
||||
}
|
||||
}
|
||||
},
|
||||
"AvatarUrl": {
|
||||
"id": "AvatarUrl",
|
||||
"properties": {
|
||||
"avatar_url": {
|
||||
"type": "string",
|
||||
"description": "A url to an image representing an avatar."
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
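Similarly, a rough sketch for the profile endpoints, under the same
assumptions::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"
    PARAMS = {"access_token": "YOUR_ACCESS_TOKEN"}  # hypothetical token
    USER = "@alice:localhost"                       # hypothetical user ID

    # Set then fetch the display name (DisplayName model).
    requests.put(BASE + "/profile/%s/displayname" % USER, params=PARAMS,
                 json={"displayname": "Alice"})
    print(requests.get(BASE + "/profile/%s/displayname" % USER).json())

    # The avatar URL works the same way (AvatarUrl model).
    requests.put(BASE + "/profile/%s/avatar_url" % USER, params=PARAMS,
                 json={"avatar_url": "http://example.com/alice.png"})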
|
||||
79
docs/client-server/swagger_matrix/api-docs-registration
Normal file
@@ -0,0 +1,79 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"apis": [
|
||||
{
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"nickname": "register",
|
||||
"notes": "Volatile: This API is likely to change.",
|
||||
"parameters": [
|
||||
{
|
||||
"description": "A registration request",
|
||||
"name": "body",
|
||||
"paramType": "body",
|
||||
"required": true,
|
||||
"type": "RegistrationRequest"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "No JSON object."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "User ID must only contain characters which do not require url encoding."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "User ID already taken."
|
||||
}
|
||||
],
|
||||
"summary": "Register with the home server.",
|
||||
"type": "RegistrationResponse"
|
||||
}
|
||||
],
|
||||
"path": "/register"
|
||||
}
|
||||
],
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"models": {
|
||||
"RegistrationResponse": {
|
||||
"id": "RegistrationResponse",
|
||||
"properties": {
|
||||
"access_token": {
|
||||
"description": "The access token for this user.",
|
||||
"type": "string"
|
||||
},
|
||||
"user_id": {
|
||||
"description": "The fully-qualified user ID.",
|
||||
"type": "string"
|
||||
},
|
||||
"home_server": {
|
||||
"description": "The name of the home server.",
|
||||
"type": "string"
|
||||
}
|
||||
}
|
||||
},
|
||||
"RegistrationRequest": {
|
||||
"id": "RegistrationRequest",
|
||||
"properties": {
|
||||
"user_id": {
|
||||
"description": "The desired user ID. If not specified, a random user ID will be allocated.",
|
||||
"type": "string",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
}
|
||||
},
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"resourcePath": "/register",
|
||||
"swaggerVersion": "1.2"
|
||||
}
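A registration sketch against this endpoint. The body mirrors the
RegistrationRequest model; the ``password`` key is what the demo JavaScript
further down this page sends, and should be treated as an assumption here::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"

    resp = requests.post(BASE + "/register", json={
        "user_id": "alice",    # desired ID; omit to have one allocated
        "password": "secret",  # as sent by the demo JS below
    })
    resp.raise_for_status()
    reg = resp.json()          # RegistrationResponse
    print(reg["user_id"], reg["home_server"], reg["access_token"])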
|
||||
|
||||
977
docs/client-server/swagger_matrix/api-docs-rooms
Normal file
@@ -0,0 +1,977 @@
|
||||
{
|
||||
"apiVersion": "1.0.0",
|
||||
"swaggerVersion": "1.2",
|
||||
"basePath": "http://localhost:8008/_matrix/client/api/v1",
|
||||
"resourcePath": "/rooms",
|
||||
"produces": [
|
||||
"application/json"
|
||||
],
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"authorizations": {
|
||||
"token": []
|
||||
},
|
||||
"apis": [
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/{eventType}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send a generic non-state event to this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_non_state_event",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The event contents",
|
||||
"required": true,
|
||||
"type": "EventContent",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "eventType",
|
||||
"description": "The type of event to send.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/{eventType}/{stateKey}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Send a generic state event to this room.",
|
||||
"notes": "The state key can be omitted, such that you can PUT to /rooms/{roomId}/state/{eventType}. The state key defaults to a 0 length string in this case.",
|
||||
"type": "void",
|
||||
"nickname": "send_state_event",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The event contents",
|
||||
"required": true,
|
||||
"type": "EventContent",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "eventType",
|
||||
"description": "The type of event to send.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "stateKey",
|
||||
"description": "An identifier used to specify clobbering semantics. State events with the same (roomId, eventType, stateKey) will be replaced.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/m.room.message",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send a message in this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_message",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The message contents",
|
||||
"required": true,
|
||||
"type": "Message",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the message in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.topic",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set the topic for this room.",
|
||||
"notes": "Set the topic for this room.",
|
||||
"type": "void",
|
||||
"nickname": "set_topic",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The topic contents",
|
||||
"required": true,
|
||||
"type": "Topic",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to set the topic in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the topic for this room.",
|
||||
"notes": "Get the topic for this room.",
|
||||
"type": "Topic",
|
||||
"nickname": "get_topic",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get topic in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Topic not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.name",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Set the name of this room.",
|
||||
"notes": "Set the name of this room.",
|
||||
"type": "void",
|
||||
"nickname": "set_room_name",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The name contents",
|
||||
"required": true,
|
||||
"type": "RoomName",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to set the name of.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the room's name.",
|
||||
"notes": "",
|
||||
"type": "RoomName",
|
||||
"nickname": "get_room_name",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get the name of.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Name not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/send/m.room.message.feedback",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Send feedback to a message.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "EventId",
|
||||
"nickname": "send_feedback",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The feedback contents",
|
||||
"required": true,
|
||||
"type": "Feedback",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to send the feedback in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad feedback type."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/invite",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Invite a user to this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "invite",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The user to invite.",
|
||||
"required": true,
|
||||
"type": "InviteRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/join",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Join this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "join_room",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to join.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"required": true,
|
||||
"type": "JoinRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/leave",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Leave this room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"type": "void",
|
||||
"nickname": "leave",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to leave.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"required": true,
|
||||
"type": "LeaveRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/ban",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Ban a user in the room.",
|
||||
"notes": "This operation can also be done as a PUT by suffixing /{txnId}. The caller must have the required power level to do this operation.",
|
||||
"type": "void",
|
||||
"nickname": "ban",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has the user to ban.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The user to ban.",
|
||||
"required": true,
|
||||
"type": "BanRequest",
|
||||
"paramType": "body"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state/m.room.member/{userId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "PUT",
|
||||
"summary": "Change the membership state for a user in a room.",
|
||||
"notes": "Change the membership state for a user in a room.",
|
||||
"type": "void",
|
||||
"nickname": "set_membership",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The new membership state",
|
||||
"required": true,
|
||||
"type": "Member",
|
||||
"paramType": "body"
|
||||
},
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose membership is being changed.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "No membership key."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad membership value."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When inviting: You are not in the room."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When inviting: <target> is already in the room."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When joining: Cannot force another user to join."
|
||||
},
|
||||
{
|
||||
"code": 403,
|
||||
"message": "When joining: You are not invited to this room."
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get the membership state of a user in a room.",
|
||||
"notes": "Get the membership state of a user in a room.",
|
||||
"type": "Member",
|
||||
"nickname": "get_membership",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "userId",
|
||||
"description": "The user whose membership state you want to get.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room which has this user.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 404,
|
||||
"message": "Member not found."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/join/{roomAliasOrId}",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Join a room via a room alias or room ID.",
|
||||
"notes": "Join a room via a room alias or room ID.",
|
||||
"type": "JoinRoomInfo",
|
||||
"nickname": "join",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomAliasOrId",
|
||||
"description": "The room alias or room ID to join.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Bad room alias."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/createRoom",
|
||||
"operations": [
|
||||
{
|
||||
"method": "POST",
|
||||
"summary": "Create a room.",
|
||||
"notes": "Create a room.",
|
||||
"type": "RoomInfo",
|
||||
"nickname": "create_room",
|
||||
"consumes": [
|
||||
"application/json"
|
||||
],
|
||||
"parameters": [
|
||||
{
|
||||
"name": "body",
|
||||
"description": "The desired configuration for the room. This operation can also be done as a PUT by suffixing /{txnId}.",
|
||||
"required": true,
|
||||
"type": "RoomConfig",
|
||||
"paramType": "body"
|
||||
}
|
||||
],
|
||||
"responseMessages": [
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Body must be JSON."
|
||||
},
|
||||
{
|
||||
"code": 400,
|
||||
"message": "Room alias already taken."
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/messages",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of messages for this room.",
|
||||
"notes": "Get a list of messages for this room.",
|
||||
"type": "MessagePaginationChunk",
|
||||
"nickname": "get_messages",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get messages in.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to start getting results from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "to",
|
||||
"description": "The token to stop getting results at.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of messages to return.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/members",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of members for this room.",
|
||||
"notes": "Get a list of members for this room.",
|
||||
"type": "MemberPaginationChunk",
|
||||
"nickname": "get_members",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get a list of members from.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
},
|
||||
{
|
||||
"name": "from",
|
||||
"description": "The token to start getting results from.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "to",
|
||||
"description": "The token to stop getting results at.",
|
||||
"required": false,
|
||||
"type": "string",
|
||||
"paramType": "query"
|
||||
},
|
||||
{
|
||||
"name": "limit",
|
||||
"description": "The maximum number of members to return.",
|
||||
"required": false,
|
||||
"type": "integer",
|
||||
"paramType": "query"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/state",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get a list of all the current state events for this room.",
|
||||
"notes": "NOT YET IMPLEMENTED.",
|
||||
"type": "array",
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
},
|
||||
"nickname": "get_state_events",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get a list of current state events from.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"path": "/rooms/{roomId}/initialSync",
|
||||
"operations": [
|
||||
{
|
||||
"method": "GET",
|
||||
"summary": "Get all the current information for this room, including messages and state events.",
|
||||
"notes": "NOT YET IMPLEMENTED.",
|
||||
"type": "InitialSyncRoomData",
|
||||
"nickname": "get_room_sync_data",
|
||||
"parameters": [
|
||||
{
|
||||
"name": "roomId",
|
||||
"description": "The room to get information for.",
|
||||
"required": true,
|
||||
"type": "string",
|
||||
"paramType": "path"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
],
|
||||
"models": {
|
||||
"Topic": {
|
||||
"id": "Topic",
|
||||
"properties": {
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "The topic text"
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomName": {
|
||||
"id": "RoomName",
|
||||
"properties": {
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "The human-readable name for the room. Can contain spaces."
|
||||
}
|
||||
}
|
||||
},
|
||||
"Message": {
|
||||
"id": "Message",
|
||||
"properties": {
|
||||
"msgtype": {
|
||||
"type": "string",
|
||||
"description": "The type of message being sent, e.g. \"m.text\"",
|
||||
"required": true
|
||||
},
|
||||
"_msgtype_defined_keys_": {
|
||||
"description": "Additional keys as defined by the msgtype, e.g. \"body\""
|
||||
}
|
||||
}
|
||||
},
|
||||
"Feedback": {
|
||||
"id": "Feedback",
|
||||
"properties": {
|
||||
"target_event_id": {
|
||||
"type": "string",
|
||||
"description": "The event ID being acknowledged.",
|
||||
"required": true
|
||||
},
|
||||
"type": {
|
||||
"type": "string",
|
||||
"description": "The type of feedback. Either 'delivered' or 'read'.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"Member": {
|
||||
"id": "Member",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "Enum: The membership state of this member.",
|
||||
"enum": [
|
||||
"invite",
|
||||
"join",
|
||||
"leave",
|
||||
"ban"
|
||||
]
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomInfo": {
|
||||
"id": "RoomInfo",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The allocated room ID.",
|
||||
"required": true
|
||||
},
|
||||
"room_alias": {
|
||||
"type": "string",
|
||||
"description": "The alias for the room.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"JoinRoomInfo": {
|
||||
"id": "JoinRoomInfo",
|
||||
"properties": {
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room ID joined, if joined via a room alias only.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"RoomConfig": {
|
||||
"id": "RoomConfig",
|
||||
"properties": {
|
||||
"visibility": {
|
||||
"type": "string",
|
||||
"description": "Enum: The room visibility.",
|
||||
"required": false,
|
||||
"enum": [
|
||||
"public",
|
||||
"private"
|
||||
]
|
||||
},
|
||||
"room_alias_name": {
|
||||
"type": "string",
|
||||
"description": "The alias to give the new room.",
|
||||
"required": false
|
||||
},
|
||||
"name": {
|
||||
"type": "string",
|
||||
"description": "Sets the name of the room. Send a m.room.name event after creating the room with the 'name' key specified.",
|
||||
"required": false
|
||||
},
|
||||
"topic": {
|
||||
"type": "string",
|
||||
"description": "Sets the topic for the room. Send a m.room.topic event after creating the room with the 'topic' key specified.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"PaginationRequest": {
|
||||
"id": "PaginationRequest",
|
||||
"properties": {
|
||||
"from": {
|
||||
"type": "string",
|
||||
"description": "The token to start getting results from."
|
||||
},
|
||||
"to": {
|
||||
"type": "string",
|
||||
"description": "The token to stop getting results at."
|
||||
},
|
||||
"limit": {
|
||||
"type": "integer",
|
||||
"description": "The maximum number of entries to return."
|
||||
}
|
||||
}
|
||||
},
|
||||
"PaginationChunk": {
|
||||
"id": "PaginationChunk",
|
||||
"properties": {
|
||||
"start": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the first value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
},
|
||||
"end": {
|
||||
"type": "string",
|
||||
"description": "A token which correlates to the last value in \"chunk\" for paginating.",
|
||||
"required": true
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"MessagePaginationChunk"
|
||||
]
|
||||
},
|
||||
"MessagePaginationChunk": {
|
||||
"id": "MessagePaginationChunk",
|
||||
"properties": {
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of message events.",
|
||||
"items": {
|
||||
"$ref": "MessageEvent"
|
||||
},
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"MemberPaginationChunk": {
|
||||
"id": "MemberPaginationChunk",
|
||||
"properties": {
|
||||
"chunk": {
|
||||
"type": "array",
|
||||
"description": "A list of member events.",
|
||||
"items": {
|
||||
"$ref": "MemberEvent"
|
||||
},
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"Event": {
|
||||
"id": "Event",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "An ID which uniquely identifies this event. This is automatically set by the server.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The room in which this event occurred. This is automatically set by the server.",
|
||||
"required": true
|
||||
},
|
||||
"type": {
|
||||
"type": "string",
|
||||
"description": "The event type.",
|
||||
"required": true
|
||||
}
|
||||
},
|
||||
"subTypes": [
|
||||
"MessageEvent"
|
||||
]
|
||||
},
|
||||
"EventId": {
|
||||
"id": "EventId",
|
||||
"properties": {
|
||||
"event_id": {
|
||||
"type": "string",
|
||||
"description": "The allocated event ID for this event.",
|
||||
"required": true
|
||||
}
|
||||
}
|
||||
},
|
||||
"EventContent": {
|
||||
"id": "EventContent",
|
||||
"properties": {
|
||||
"__event_content_keys__": {
|
||||
"type": "string",
|
||||
"description": "Event-specific content keys and values.",
|
||||
"required": false
|
||||
}
|
||||
}
|
||||
},
|
||||
"MessageEvent": {
|
||||
"id": "MessageEvent",
|
||||
"properties": {
|
||||
"content": {
|
||||
"type": "Message"
|
||||
}
|
||||
}
|
||||
},
|
||||
"MemberEvent": {
|
||||
"id": "MemberEvent",
|
||||
"properties": {
|
||||
"content": {
|
||||
"type": "Member"
|
||||
}
|
||||
}
|
||||
},
|
||||
"InviteRequest": {
|
||||
"id": "InviteRequest",
|
||||
"properties": {
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified user ID."
|
||||
}
|
||||
}
|
||||
},
|
||||
"JoinRequest": {
|
||||
"id": "JoinRequest",
|
||||
"properties": {}
|
||||
},
|
||||
"LeaveRequest": {
|
||||
"id": "LeaveRequest",
|
||||
"properties": {}
|
||||
},
|
||||
"BanRequest": {
|
||||
"id": "BanRequest",
|
||||
"properties": {
|
||||
"user_id": {
|
||||
"type": "string",
|
||||
"description": "The fully-qualified user ID."
|
||||
},
|
||||
"reason": {
|
||||
"type": "string",
|
||||
"description": "The reason for the ban."
|
||||
}
|
||||
}
|
||||
},
|
||||
"InitialSyncRoomData": {
|
||||
"id": "InitialSyncRoomData",
|
||||
"properties": {
|
||||
"membership": {
|
||||
"type": "string",
|
||||
"description": "This user's membership state in this room.",
|
||||
"required": true
|
||||
},
|
||||
"room_id": {
|
||||
"type": "string",
|
||||
"description": "The ID of this room.",
|
||||
"required": true
|
||||
},
|
||||
"messages": {
|
||||
"type": "MessagePaginationChunk",
|
||||
"description": "The most recent messages for this room, governed by the limit parameter.",
|
||||
"required": false
|
||||
},
|
||||
"state": {
|
||||
"type": "array",
|
||||
"description": "A list of state events representing the current state of the room.",
|
||||
"required": false,
|
||||
"items": {
|
||||
"$ref": "Event"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
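Tying the room operations together, a hedged end-to-end sketch: create a room,
then post an ``m.text`` message, again assuming localhost:8008 and an
``access_token`` query parameter::

    import requests

    BASE = "http://localhost:8008/_matrix/client/api/v1"
    PARAMS = {"access_token": "YOUR_ACCESS_TOKEN"}  # hypothetical token

    # POST /createRoom with a RoomConfig body; returns RoomInfo.
    room = requests.post(BASE + "/createRoom", params=PARAMS, json={
        "visibility": "public",
        "room_alias_name": "example",
        "name": "Example Room",
        "topic": "Swagger usage sketch",
    }).json()
    room_id = room["room_id"]

    # POST /rooms/{roomId}/send/m.room.message with a Message body;
    # "body" is one of the msgtype-defined keys mentioned above.
    event = requests.post(
        BASE + "/rooms/%s/send/m.room.message" % room_id,
        params=PARAMS,
        json={"msgtype": "m.text", "body": "hello"},
    ).json()
    print(event["event_id"])  # EventId model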
|
||||
43
docs/implementation-notes/documentation_style.rst
Normal file
@@ -0,0 +1,43 @@
|
||||
===================
|
||||
Documentation Style
|
||||
===================
|
||||
|
||||
A brief single sentence to describe what this file contains; in this case a
|
||||
description of the style to write documentation in.
|
||||
|
||||
|
||||
Sections
|
||||
========
|
||||
|
||||
Each section should be separated from the others by two blank lines. Headings
|
||||
should be underlined using a row of equals signs (===). Paragraphs should be
|
||||
separated by a single blank line, and wrap to no further than 80 columns.
|
||||
|
||||
[[TODO(username): if you want to leave some unanswered questions, notes for
|
||||
further consideration, or other kinds of comment, use a TODO section. Make sure
|
||||
to notate it with your name so we know who to ask about it!]]
|
||||
|
||||
Subsections
|
||||
-----------
|
||||
|
||||
If required, subsections can use a row of dashes to underline their header. Leave a
|
||||
single blank line between subsections of a single section.
|
||||
|
||||
|
||||
Bullet Lists
|
||||
============
|
||||
|
||||
* Bullet lists can use asterisks with a single space either side.
|
||||
|
||||
* Another blank line between list elements.
|
||||
|
||||
|
||||
Definition Lists
|
||||
================
|
||||
|
||||
Terms:
|
||||
Start in the first column, ending with a colon
|
||||
|
||||
Definitions:
|
||||
Take a two space indent, following immediately from the term without a blank
|
||||
line before it, but having a blank line afterwards.
|
||||
@@ -1,3 +1,103 @@
|
||||
========
|
||||
Presence
|
||||
========
|
||||
|
||||
A description of presence information and visibility between users.
|
||||
|
||||
Overview
|
||||
========
|
||||
|
||||
Each user has the concept of Presence information. This encodes a sense of the
|
||||
"availability" of that user, suitable for display on other user's clients.
|
||||
|
||||
|
||||
Presence Information
|
||||
====================
|
||||
|
||||
The basic piece of presence information is an enumeration of a small set of
|
||||
states, such as "free to chat", "online", "busy", or "offline". The default state
|
||||
unless the user changes it is "online". Lower states suggest some amount of
|
||||
decreased availability from normal, which might have some client-side effect
|
||||
like muting notification sounds and suggests to other users not to bother them
|
||||
unless it is urgent. Equally, the "free to chat" state exists to let the user
|
||||
announce their general willingness to receive messages more so than the default.
|
||||
|
||||
Home servers should also allow a user to set their state as "hidden" - a state
|
||||
which behaves as offline, but allows the user to see the client state anyway and
|
||||
generally interact with client features such as reading message history or
|
||||
accessing contacts in the address book.
|
||||
|
||||
This basic state field applies to the user as a whole, regardless of how many
|
||||
client devices they have connected. The home server should synchronise this
|
||||
status choice among multiple devices to ensure the user gets a consistent
|
||||
experience.
|
||||
|
||||
Idle Time
|
||||
---------
|
||||
|
||||
As well as the basic state field, the presence information can also show a sense
|
||||
of an "idle timer". This should be maintained individually by the user's
|
||||
clients, and the home server can take the highest reported time as the value to
|
||||
report. Likely this should be presented in fairly coarse granularity; possibly
|
||||
being limited to letting the home server automatically switch from a "free to
|
||||
chat" or "online" mode into "idle".
|
||||
|
||||
When a user is offline, the Home Server can still report when the user was last
|
||||
seen online, again perhaps in a somewhat coarse manner.
|
||||
|
||||
Device Type
|
||||
-----------
|
||||
|
||||
Client devices that may limit the user experience somewhat (such as "mobile"
|
||||
devices with limited ability to type on a real keyboard or read large amounts of
|
||||
text) should report this to the home server, as this is also useful information
|
||||
to report as "presence" if the user cannot be expected to provide a good typed
|
||||
response to messages.
|
||||
|
||||
|
||||
Presence List
|
||||
=============
|
||||
|
||||
Each user's home server stores a "presence list" for that user. This stores a
|
||||
list of other user IDs the user has chosen to add to it (remembering any ACL
|
||||
Pointer if appropriate).
|
||||
|
||||
To be added to a contact list, the user being added must grant permission. Once
|
||||
granted, both users' HS(es) store this information, as it allows the user who
|
||||
has added the contact some more abilities; see below. Since such subscriptions
|
||||
are likely to be bidirectional, HSes may wish to automatically accept requests
|
||||
when a reverse subscription already exists.
|
||||
|
||||
As a convenience, presence lists should support the ability to collect users
|
||||
into groups, which could allow things like inviting the entire group to a new
|
||||
("ad-hoc") chat room, or easy interaction with the profile information ACL
|
||||
implementation of the HS.
|
||||
|
||||
|
||||
Presence and Permissions
|
||||
========================
|
||||
|
||||
For a viewing user to be allowed to see the presence information of a target
|
||||
user, either
|
||||
|
||||
* The target user has allowed the viewing user to add them to their presence
|
||||
list, or
|
||||
|
||||
* The two users share at least one room in common
|
||||
|
||||
In the latter case, this allows for clients to display some minimal sense of
|
||||
presence information in a user list for a room.
|
||||
|
||||
Home servers can also use the user's choice of presence state as a signal for
|
||||
how to handle new private one-to-one chat message requests. For example, it
|
||||
might decide:
|
||||
|
||||
"free to chat": accept anything
|
||||
"online": accept from anyone in my addres book list
|
||||
"busy": accept from anyone in this "important people" group in my address
|
||||
book list
|
||||
|
||||
|
||||
API Efficiency
|
||||
==============
|
||||
|
||||
@@ -1,151 +0,0 @@
|
||||
Signing JSON
|
||||
============
|
||||
|
||||
JSON is signed by encoding the JSON object without ``signatures`` or ``meta``
|
||||
keys using a canonical encoding. The JSON bytes are then signed using the
|
||||
signature algorithm and the signature encoded using base64 with the padding
|
||||
stripped. The resulting base64 signature is added to an object under the
|
||||
*signing key identifier*, which is added to the ``signatures`` object under the
|
||||
name of the signing server; the ``signatures`` object is then added back to the original JSON object
|
||||
along with the ``meta`` object.
|
||||
|
||||
The *signing key identifier* is the concatenation of the *signing algorithm*
|
||||
and a *key version*. The *signing algorithm* identifies the algorithm used to
|
||||
sign the JSON. The currently supported value for *signing algorithm* is
|
||||
``ed25519`` as implemented by NACL (http://nacl.cr.yp.to/). The *key version*
|
||||
is used to distinguish between different signing keys used by the same entity.
|
||||
|
||||
The ``meta`` object and the ``signatures`` object are not covered by the
|
||||
signature. Therefore intermediate servers can add metadata such as time stamps
|
||||
and additional signatures.
|
||||
|
||||
|
||||
::
|
||||
|
||||
    {
        "name": "example.org",
        "signing_keys": {
            "ed25519:1": "XSl0kuyvrXNj6A+7/tkrB9sxSbRi08Of5uRhxOqZtEQ"
        },
        "meta": {
            "retrieved_ts_ms": 922834800000
        },
        "signatures": {
            "example.org": {
                "ed25519:1": "s76RUgajp8w172am0zQb/iPTHsRnb4SkrzGoeCOSFfcBY2V/1c8QfrmdXHpvnc2jK5BD1WiJIxiMW95fMjK7Bw"
            }
        }
    }
|
||||
|
||||
::
|
||||
|
||||
    def sign_json(json_object, signing_key, signing_name):
        signatures = json_object.pop("signatures", {})
        meta = json_object.pop("meta", None)

        signed = signing_key.sign(encode_canonical_json(json_object))
        signature_base64 = encode_base64(signed.signature)

        key_id = "%s:%s" % (signing_key.alg, signing_key.version)
        signatures.setdefault(signing_name, {})[key_id] = signature_base64

        json_object["signatures"] = signatures
        if meta is not None:
            json_object["meta"] = meta

        return json_object
|
||||
|
||||
Checking for a Signature
|
||||
------------------------
|
||||
|
||||
To check whether an entity has signed a JSON object, a server does the following:
|
||||
|
||||
1. Checks if the ``signatures`` object contains an entry with the name of the
|
||||
entity. If the entry is missing then the check fails.
|
||||
2. Removes any *signing key identifiers* from the entry with algorithms it
|
||||
doesn't understand. If there are no *signing key identifiers* left then the
|
||||
check fails.
|
||||
3. Looks up *verification keys* for the remaining *signing key identifiers*
|
||||
either from a local cache or by consulting a trusted key server. If it
|
||||
cannot find a *verification key* then the check fails.
|
||||
4. Decodes the base64 encoded signature bytes. If base64 decoding fails then
|
||||
the check fails.
|
||||
5. Checks the signature bytes using the *verification key*. If this fails then
|
||||
the check fails. Otherwise the check succeeds.
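A rough counterpart to ``sign_json`` above, following these steps. It assumes a
``verification_keys`` lookup mapping ``(entity, key_id)`` to NACL verify key
objects whose ``verify`` method raises on a bad signature, and a
``decode_base64`` helper; both are assumptions for illustration::

    def check_json_signature(json_object, entity, verification_keys):
        # 1. The entity must appear in the signatures object.
        entry = json_object.get("signatures", {}).get(entity)
        if not entry:
            return False

        # 2./3. Drop key identifiers we don't understand or cannot resolve
        # to a verification key.
        usable = [(key_id, verification_keys.get((entity, key_id)))
                  for key_id in entry if key_id.startswith("ed25519:")]
        usable = [(k, vk) for k, vk in usable if vk is not None]
        if not usable:
            return False

        # The signed bytes are the object without signatures or meta,
        # canonically encoded (same helper used by sign_json above).
        message = encode_canonical_json(
            {k: v for k, v in json_object.items()
             if k not in ("signatures", "meta")})

        for key_id, verify_key in usable:
            try:
                # 4./5. Base64-decode the signature and check it.
                verify_key.verify(message, decode_base64(entry[key_id]))
            except Exception:
                return False
        return True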
|
||||
|
||||
Canonical JSON
|
||||
--------------
|
||||
|
||||
The canonical JSON encoding for a value is the shortest UTF-8 JSON encoding
|
||||
with dictionary keys lexicographically sorted by unicode codepoint. Numbers in
|
||||
the JSON value must be integers in the range [-(2**53)+1, (2**53)-1].
|
||||
|
||||
::
|
||||
|
||||
    import json

    def canonical_json(value):
        return json.dumps(
            value,
            ensure_ascii=False,
            separators=(',', ':'),
            sort_keys=True,
        ).encode("UTF-8")
|
||||
|
||||
Grammar
|
||||
+++++++
|
||||
|
||||
Adapted from the grammar in http://tools.ietf.org/html/rfc7159 removing
|
||||
insignificant whitespace, fractions, exponents and redundant character escapes
|
||||
|
||||
::
|
||||
|
||||
value = false / null / true / object / array / number / string
|
||||
false = %x66.61.6c.73.65
|
||||
null = %x6e.75.6c.6c
|
||||
true = %x74.72.75.65
|
||||
object = %x7B [ member *( %x2C member ) ] %x7D
|
||||
member = string %x3A value
|
||||
array = %x5B [ value *( %x2C value ) ] %x5D
|
||||
number = [ %x2D ] int
|
||||
int = %x30 / ( %x31-39 *digit )
|
||||
digit = %x30-39
|
||||
string = %x22 *char %x22
|
||||
char = unescaped / %x5C escaped
|
||||
unescaped = %x20-21 / %x23-5B / %x5D-10FFFF
|
||||
escaped = %x22 ; " quotation mark U+0022
|
||||
/ %x5C ; \ reverse solidus U+005C
|
||||
/ %x62 ; b backspace U+0008
|
||||
/ %x66 ; f form feed U+000C
|
||||
/ %x6E ; n line feed U+000A
|
||||
/ %x72 ; r carriage return U+000D
|
||||
/ %x74 ; t tab U+0009
|
||||
/ %x75.30.30.30 (%x30-37 / %x62 / %x65-66) ; u000X
|
||||
/ %x75.30.30.31 (%x30-39 / %x61-66) ; u001X
|
||||
|
||||
Signing Events
|
||||
==============
|
||||
|
||||
Signing events is a more complicated process since servers can choose to redact
|
||||
non-essential event contents. Before signing the event it is encoded as
|
||||
Canonical JSON and hashed using SHA-256. The resulting hash is then stored
|
||||
in the event JSON in a ``hash`` object under a ``sha256`` key. Then all
|
||||
non-essential keys are stripped from the event object, and the resulting object
|
||||
which includes the ``hash`` key is signed using the JSON signing algorithm.
|
||||
|
||||
Servers can then transmit the entire event or the event with the non-essential
|
||||
keys removed. Receiving servers can then check the entire event if it is
|
||||
present by computing the SHA-256 of the event excluding the ``hash`` object, or
|
||||
by using the ``hash`` object included in the event if keys have been redacted.
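As a sketch of the hash check alone (the signature check is as described
earlier), assuming the same canonical-encoding and base64 helpers; whether
``signatures`` is also excluded from the hash, and whether the stored value is
unpadded base64, are assumptions here::

    import hashlib

    def check_event_content_hash(event):
        # Hash the event excluding the hash object (and, by assumption,
        # the signatures object), then compare against the stored value.
        hashed = {k: v for k, v in event.items()
                  if k not in ("hash", "signatures")}
        computed = encode_base64(
            hashlib.sha256(encode_canonical_json(hashed)).digest())
        return event.get("hash", {}).get("sha256") == computed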
|
||||
|
||||
New hash functions can be introduced by adding additional keys to the ``hash``
|
||||
object. Since the ``hash`` object cannot be redacted a server shouldn't allow
|
||||
too many hashes to be listed, otherwise a server might embed illicit data within
|
||||
the ``hash`` object. For similar reasons a server shouldn't allow hash values
|
||||
that are too long.
|
||||
|
||||
[[TODO(markjh): We might want to specify a maximum number of keys for the
|
||||
``hash`` and we might want to specify the maximum output size of a hash]]
|
||||
|
||||
[[TODO(markjh) We might want to allow the server to omit the output of well
|
||||
known hash functions like SHA-256 when none of the keys have been redacted]]
|
||||
2118
docs/specification.rst
Normal file
File diff suppressed because it is too large
@@ -1,93 +0,0 @@
|
||||
How to enable VoIP relaying on your Home Server with TURN
|
||||
|
||||
Overview
|
||||
--------
|
||||
The synapse Matrix Home Server supports integration with a TURN server via the
|
||||
TURN server REST API
|
||||
(http://tools.ietf.org/html/draft-uberti-behave-turn-rest-00). This allows
|
||||
the Home Server to generate credentials that are valid for use on the TURN
|
||||
server through the use of a secret shared between the Home Server and the
|
||||
TURN server.
|
||||
|
||||
This document describes how to install coturn
|
||||
(https://code.google.com/p/coturn/) which also supports the TURN REST API,
|
||||
and integrate it with synapse.
|
||||
|
||||
coturn Setup
|
||||
============
|
||||
|
||||
1. Check out coturn::
|
||||
svn checkout http://coturn.googlecode.com/svn/trunk/ coturn
|
||||
cd coturn
|
||||
|
||||
2. Configure it::
|
||||
./configure
|
||||
|
||||
You may need to install libevent2: if so, you should do so
|
||||
in the way recommended by your operating system.
|
||||
You can ignore warnings about lack of database support: a
|
||||
database is unnecessary for this purpose.
|
||||
|
||||
3. Build and install it::
|
||||
make
|
||||
make install
|
||||
|
||||
4. Make a config file in /etc/turnserver.conf. You can customise
|
||||
a config file from turnserver.conf.default. The relevant
|
||||
lines, with example values, are::
|
||||
|
||||
lt-cred-mech
|
||||
use-auth-secret
|
||||
static-auth-secret=[your secret key here]
|
||||
realm=turn.myserver.org
|
||||
|
||||
See turnserver.conf.default for explanations of the options.
|
||||
One way to generate the static-auth-secret is with pwgen::
|
||||
|
||||
pwgen -s 64 1
|
||||
|
||||
5. Ensure your firewall allows traffic into the TURN server on
|
||||
the ports you've configured it to listen on (remember to allow
|
||||
both TCP and UDP if you've enabled both).
|
||||
|
||||
6. If you've configured coturn to support TLS/DTLS, generate or
|
||||
import your private key and certificate.
|
||||
|
||||
7. Start the turn server::
|
||||
bin/turnserver -o
|
||||
|
||||
|
||||
synapse Setup
|
||||
=============
|
||||
|
||||
Your home server configuration file needs the following extra keys:
|
||||
|
||||
1. "turn_uris": This needs to be a yaml list
|
||||
of public-facing URIs for your TURN server to be given out
|
||||
to your clients. Add separate entries for each transport your
|
||||
TURN server supports.
|
||||
|
||||
2. "turn_shared_secret": This is the secret shared between your Home
|
||||
server and your TURN server, so you should set it to the same
|
||||
string you used in turnserver.conf.
|
||||
|
||||
3. "turn_user_lifetime": This is the amount of time credentials
|
||||
generated by your Home Server are valid for (in milliseconds).
|
||||
Shorter times offer less potential for abuse at the expense
|
||||
of increased traffic between web clients and your home server
|
||||
to refresh credentials. The TURN REST API specification recommends
|
||||
one day (86400000).
|
||||
|
||||
As an example, here is the relevant section of the config file for
|
||||
matrix.org::
|
||||
|
||||
turn_uris: turn:turn.matrix.org:3478?transport=udp,turn:turn.matrix.org:3478?transport=tcp
|
||||
turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
|
||||
turn_user_lifetime: 86400000
|
||||
|
||||
Now, restart synapse::
|
||||
|
||||
cd /where/you/run/synapse
|
||||
./synctl restart
|
||||
|
||||
...and your Home Server now supports VoIP relaying!
|
||||
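As a reference for how the shared secret is actually used, the TURN REST API
draft linked above derives short-lived credentials rather than storing
per-user passwords: the username encodes an expiry time and the password is an
HMAC-SHA1 of that username keyed with the shared secret. A minimal sketch
follows; the exact username format synapse uses is an assumption here::

    import base64
    import hashlib
    import hmac
    import time

    def make_turn_credentials(shared_secret, user_id, lifetime_ms=86400000):
        # Username carries the expiry timestamp; the ":user" suffix is an
        # illustrative assumption.
        expiry = int(time.time() + lifetime_ms / 1000)
        username = "%d:%s" % (expiry, user_id)
        mac = hmac.new(shared_secret.encode("utf-8"),
                       username.encode("utf-8"), hashlib.sha1)
        password = base64.b64encode(mac.digest()).decode("ascii")
        return username, password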
@@ -19,12 +19,7 @@ $('.login').live('click', function() {
            showLoggedIn(data);
        },
        error: function(err) {
            var errMsg = "To try this, you need a home server running!";
            var errJson = $.parseJSON(err.responseText);
            if (errJson) {
                errMsg = JSON.stringify(errJson);
            }
            alert(errMsg);
            alert(JSON.stringify($.parseJSON(err.responseText)));
        }
    });
});

@@ -58,12 +58,7 @@ $('.login').live('click', function() {
            showLoggedIn(data);
        },
        error: function(err) {
            var errMsg = "To try this, you need a home server running!";
            var errJson = $.parseJSON(err.responseText);
            if (errJson) {
                errMsg = JSON.stringify(errJson);
            }
            alert(errMsg);
            alert(JSON.stringify($.parseJSON(err.responseText)));
        }
    });
});

@@ -1,7 +0,0 @@
name: Example Matrix Client
description: Includes login, live event streaming, creating rooms, sending messages and viewing member lists.
authors:
- matrix.org
resources:
- http://matrix.org
normalize_css: no
@@ -110,7 +110,7 @@ $('.register').live('click', function() {
        url: "http://localhost:8008/_matrix/client/api/v1/register",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify({ user: user, password: password, type: "m.login.password" }),
        data: JSON.stringify({ user_id: user, password: password }),
        dataType: "json",
        success: function(data) {
            onLoggedIn(data);

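The two ``data:`` lines above show the registration body in its two forms: one
sending ``user`` plus an explicit ``type`` of ``m.login.password``, the other
sending ``user_id`` only. A minimal sketch of a register request using the
typed form, assuming a locally running home server; the response fields noted
in the comment are an assumption::

    import json
    import requests  # any HTTP client would do; requests is used for brevity

    resp = requests.post(
        "http://localhost:8008/_matrix/client/api/v1/register",
        headers={"Content-Type": "application/json; charset=utf-8"},
        data=json.dumps({
            "user": "alice",
            "password": "s3cret",
            "type": "m.login.password",
        }),
    )
    # Expected to include user_id and access_token (assumption).
    print(resp.json())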
@@ -14,18 +14,13 @@ $('.register').live('click', function() {
        url: "http://localhost:8008/_matrix/client/api/v1/register",
        type: "POST",
        contentType: "application/json; charset=utf-8",
        data: JSON.stringify({ user: user, password: password, type: "m.login.password" }),
        data: JSON.stringify({ user_id: user, password: password }),
        dataType: "json",
        success: function(data) {
            showLoggedIn(data);
        },
        error: function(err) {
            var errMsg = "To try this, you need a home server running!";
            var errJson = $.parseJSON(err.responseText);
            if (errJson) {
                errMsg = JSON.stringify(errJson);
            }
            alert(errMsg);
            alert(JSON.stringify($.parseJSON(err.responseText)));
        }
    });
});

@@ -41,12 +36,7 @@ var login = function(user, password) {
            showLoggedIn(data);
        },
        error: function(err) {
            var errMsg = "To try this, you need a home server running!";
            var errJson = $.parseJSON(err.responseText);
            if (errJson) {
                errMsg = JSON.stringify(errJson);
            }
            alert(errMsg);
            alert(JSON.stringify($.parseJSON(err.responseText)));
        }
    });
};

@@ -28,12 +28,7 @@ $('.login').live('click', function() {
            showLoggedIn(data);
        },
        error: function(err) {
            var errMsg = "To try this, you need a home server running!";
            var errJson = $.parseJSON(err.responseText);
            if (errJson) {
                errMsg = JSON.stringify(errJson);
            }
            alert(errMsg);
            alert(JSON.stringify($.parseJSON(err.responseText)));
        }
    });
});

280
pylint.cfg
280
pylint.cfg
@@ -1,280 +0,0 @@
|
||||
[MASTER]
|
||||
|
||||
# Specify a configuration file.
|
||||
#rcfile=
|
||||
|
||||
# Python code to execute, usually for sys.path manipulation such as
|
||||
# pygtk.require().
|
||||
#init-hook=
|
||||
|
||||
# Profiled execution.
|
||||
profile=no
|
||||
|
||||
# Add files or directories to the blacklist. They should be base names, not
|
||||
# paths.
|
||||
ignore=CVS
|
||||
|
||||
# Pickle collected data for later comparisons.
|
||||
persistent=yes
|
||||
|
||||
# List of plugins (as comma separated values of python modules names) to load,
|
||||
# usually to register additional checkers.
|
||||
load-plugins=
|
||||
|
||||
|
||||
[MESSAGES CONTROL]
|
||||
|
||||
# Enable the message, report, category or checker with the given id(s). You can
|
||||
# either give multiple identifier separated by comma (,) or put this option
|
||||
# multiple time. See also the "--disable" option for examples.
|
||||
#enable=
|
||||
|
||||
# Disable the message, report, category or checker with the given id(s). You
|
||||
# can either give multiple identifiers separated by comma (,) or put this
|
||||
# option multiple times (only on the command line, not in the configuration
|
||||
# file where it should appear only once).You can also use "--disable=all" to
|
||||
# disable everything first and then reenable specific checks. For example, if
|
||||
# you want to run only the similarities checker, you can use "--disable=all
|
||||
# --enable=similarities". If you want to run only the classes checker, but have
|
||||
# no Warning level messages displayed, use"--disable=all --enable=classes
|
||||
# --disable=W"
|
||||
disable=missing-docstring
|
||||
|
||||
|
||||
[REPORTS]
|
||||
|
||||
# Set the output format. Available formats are text, parseable, colorized, msvs
|
||||
# (visual studio) and html. You can also give a reporter class, eg
|
||||
# mypackage.mymodule.MyReporterClass.
|
||||
output-format=text
|
||||
|
||||
# Put messages in a separate file for each module / package specified on the
|
||||
# command line instead of printing them on stdout. Reports (if any) will be
|
||||
# written in a file name "pylint_global.[txt|html]".
|
||||
files-output=no
|
||||
|
||||
# Tells whether to display a full report or only the messages
|
||||
reports=yes
|
||||
|
||||
# Python expression which should return a note less than 10 (10 is the highest
|
||||
# note). You have access to the variables errors warning, statement which
|
||||
# respectively contain the number of errors / warnings messages and the total
|
||||
# number of statements analyzed. This is used by the global evaluation report
|
||||
# (RP0004).
|
||||
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
|
||||
|
||||
# Add a comment according to your evaluation note. This is used by the global
|
||||
# evaluation report (RP0004).
|
||||
comment=no
|
||||
|
||||
# Template used to display messages. This is a python new-style format string
|
||||
# used to format the message information. See doc for all details
|
||||
#msg-template=
|
||||
|
||||
|
||||
[TYPECHECK]
|
||||
|
||||
# Tells whether missing members accessed in mixin class should be ignored. A
|
||||
# mixin class is detected if its name ends with "mixin" (case insensitive).
|
||||
ignore-mixin-members=yes
|
||||
|
||||
# List of classes names for which member attributes should not be checked
|
||||
# (useful for classes with attributes dynamically set).
|
||||
ignored-classes=SQLObject
|
||||
|
||||
# When zope mode is activated, add a predefined set of Zope acquired attributes
|
||||
# to generated-members.
|
||||
zope=no
|
||||
|
||||
# List of members which are set dynamically and missed by pylint inference
|
||||
# system, and so shouldn't trigger E0201 when accessed. Python regular
|
||||
# expressions are accepted.
|
||||
generated-members=REQUEST,acl_users,aq_parent
|
||||
|
||||
|
||||
[MISCELLANEOUS]
|
||||
|
||||
# List of note tags to take in consideration, separated by a comma.
|
||||
notes=FIXME,XXX,TODO
|
||||
|
||||
|
||||
[SIMILARITIES]
|
||||
|
||||
# Minimum lines number of a similarity.
|
||||
min-similarity-lines=4
|
||||
|
||||
# Ignore comments when computing similarities.
|
||||
ignore-comments=yes
|
||||
|
||||
# Ignore docstrings when computing similarities.
|
||||
ignore-docstrings=yes
|
||||
|
||||
# Ignore imports when computing similarities.
|
||||
ignore-imports=no
|
||||
|
||||
|
||||
[VARIABLES]
|
||||
|
||||
# Tells whether we should check for unused import in __init__ files.
|
||||
init-import=no
|
||||
|
||||
# A regular expression matching the beginning of the name of dummy variables
|
||||
# (i.e. not used).
|
||||
dummy-variables-rgx=_$|dummy
|
||||
|
||||
# List of additional names supposed to be defined in builtins. Remember that
|
||||
# you should avoid to define new builtins when possible.
|
||||
additional-builtins=
|
||||
|
||||
|
||||
[BASIC]
|
||||
|
||||
# Required attributes for module, separated by a comma
|
||||
required-attributes=
|
||||
|
||||
# List of builtins function names that should not be used, separated by a comma
|
||||
bad-functions=map,filter,apply,input
|
||||
|
||||
# Regular expression which should only match correct module names
|
||||
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$
|
||||
|
||||
# Regular expression which should only match correct module level names
|
||||
const-rgx=(([A-Z_][A-Z0-9_]*)|(__.*__))$
|
||||
|
||||
# Regular expression which should only match correct class names
|
||||
class-rgx=[A-Z_][a-zA-Z0-9]+$
|
||||
|
||||
# Regular expression which should only match correct function names
|
||||
function-rgx=[a-z_][a-z0-9_]{2,30}$
|
||||
|
||||
# Regular expression which should only match correct method names
|
||||
method-rgx=[a-z_][a-z0-9_]{2,30}$
|
||||
|
||||
# Regular expression which should only match correct instance attribute names
|
||||
attr-rgx=[a-z_][a-z0-9_]{2,30}$
|
||||
|
||||
# Regular expression which should only match correct argument names
|
||||
argument-rgx=[a-z_][a-z0-9_]{2,30}$
|
||||
|
||||
# Regular expression which should only match correct variable names
|
||||
variable-rgx=[a-z_][a-z0-9_]{2,30}$
|
||||
|
||||
# Regular expression which should only match correct attribute names in class
|
||||
# bodies
|
||||
class-attribute-rgx=([A-Za-z_][A-Za-z0-9_]{2,30}|(__.*__))$
|
||||
|
||||
# Regular expression which should only match correct list comprehension /
|
||||
# generator expression variable names
|
||||
inlinevar-rgx=[A-Za-z_][A-Za-z0-9_]*$
|
||||
|
||||
# Good variable names which should always be accepted, separated by a comma
|
||||
good-names=i,j,k,ex,Run,_
|
||||
|
||||
# Bad variable names which should always be refused, separated by a comma
|
||||
bad-names=foo,bar,baz,toto,tutu,tata
|
||||
|
||||
# Regular expression which should only match function or class names that do
|
||||
# not require a docstring.
|
||||
no-docstring-rgx=__.*__
|
||||
|
||||
# Minimum line length for functions/classes that require docstrings, shorter
|
||||
# ones are exempt.
|
||||
docstring-min-length=-1
|
||||
|
||||
|
||||
[FORMAT]
|
||||
|
||||
# Maximum number of characters on a single line.
|
||||
max-line-length=80
|
||||
|
||||
# Regexp for a line that is allowed to be longer than the limit.
|
||||
ignore-long-lines=^\s*(# )?<?https?://\S+>?$
|
||||
|
||||
# Allow the body of an if to be on the same line as the test if there is no
|
||||
# else.
|
||||
single-line-if-stmt=no
|
||||
|
||||
# List of optional constructs for which whitespace checking is disabled
|
||||
no-space-check=trailing-comma,dict-separator
|
||||
|
||||
# Maximum number of lines in a module
|
||||
max-module-lines=1000
|
||||
|
||||
# String used as indentation unit. This is usually " " (4 spaces) or "\t" (1
|
||||
# tab).
|
||||
indent-string=' '
|
||||
|
||||
|
||||
[DESIGN]
|
||||
|
||||
# Maximum number of arguments for function / method
|
||||
max-args=5
|
||||
|
||||
# Argument names that match this expression will be ignored. Default to name
|
||||
# with leading underscore
|
||||
ignored-argument-names=_.*
|
||||
|
||||
# Maximum number of locals for function / method body
|
||||
max-locals=15
|
||||
|
||||
# Maximum number of return / yield for function / method body
|
||||
max-returns=6
|
||||
|
||||
# Maximum number of branch for function / method body
|
||||
max-branches=12
|
||||
|
||||
# Maximum number of statements in function / method body
|
||||
max-statements=50
|
||||
|
||||
# Maximum number of parents for a class (see R0901).
|
||||
max-parents=7
|
||||
|
||||
# Maximum number of attributes for a class (see R0902).
|
||||
max-attributes=7
|
||||
|
||||
# Minimum number of public methods for a class (see R0903).
|
||||
min-public-methods=2
|
||||
|
||||
# Maximum number of public methods for a class (see R0904).
|
||||
max-public-methods=20
|
||||
|
||||
|
||||
[IMPORTS]
|
||||
|
||||
# Deprecated modules which should not be used, separated by a comma
|
||||
deprecated-modules=regsub,TERMIOS,Bastion,rexec
|
||||
|
||||
# Create a graph of every (i.e. internal and external) dependencies in the
|
||||
# given file (report RP0402 must not be disabled)
|
||||
import-graph=
|
||||
|
||||
# Create a graph of external dependencies in the given file (report RP0402 must
|
||||
# not be disabled)
|
||||
ext-import-graph=
|
||||
|
||||
# Create a graph of internal dependencies in the given file (report RP0402 must
|
||||
# not be disabled)
|
||||
int-import-graph=
|
||||
|
||||
|
||||
[CLASSES]
|
||||
|
||||
# List of interface methods to ignore, separated by a comma. This is used for
|
||||
# instance to not check methods defines in Zope's Interface base class.
|
||||
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by
|
||||
|
||||
# List of method names used to declare (i.e. assign) instance attributes.
|
||||
defining-attr-methods=__init__,__new__,setUp
|
||||
|
||||
# List of valid names for the first argument in a class method.
|
||||
valid-classmethod-first-arg=cls
|
||||
|
||||
# List of valid names for the first argument in a metaclass class method.
|
||||
valid-metaclass-classmethod-first-arg=mcs
|
||||
|
||||
|
||||
[EXCEPTIONS]
|
||||
|
||||
# Exceptions that will emit a warning when being caught. Defaults to
|
||||
# "Exception"
|
||||
overgeneral-exceptions=Exception
|
||||
510
scripts/basic.css
Normal file
510
scripts/basic.css
Normal file
@@ -0,0 +1,510 @@
|
||||
/*
|
||||
* basic.css
|
||||
* ~~~~~~~~~
|
||||
*
|
||||
* Sphinx stylesheet -- basic theme.
|
||||
*
|
||||
* :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
|
||||
* :license: BSD, see LICENSE for details.
|
||||
*
|
||||
*/
|
||||
|
||||
/* -- main layout ----------------------------------------------------------- */
|
||||
|
||||
div.clearer {
|
||||
clear: both;
|
||||
}
|
||||
|
||||
/* -- relbar ---------------------------------------------------------------- */
|
||||
|
||||
div.related {
|
||||
width: 100%;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
div.related h3 {
|
||||
display: none;
|
||||
}
|
||||
|
||||
div.related ul {
|
||||
margin: 0;
|
||||
padding: 0 0 0 10px;
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
div.related li {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
div.related li.right {
|
||||
float: right;
|
||||
margin-right: 5px;
|
||||
}
|
||||
|
||||
/* -- sidebar --------------------------------------------------------------- */
|
||||
|
||||
div.sphinxsidebarwrapper {
|
||||
padding: 10px 5px 0 10px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar {
|
||||
float: left;
|
||||
width: 230px;
|
||||
margin-left: -100%;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul {
|
||||
list-style: none;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul ul,
|
||||
div.sphinxsidebar ul.want-points {
|
||||
margin-left: 20px;
|
||||
list-style: square;
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul ul {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
div.sphinxsidebar form {
|
||||
margin-top: 10px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input {
|
||||
border: 1px solid #98dbcc;
|
||||
font-family: sans-serif;
|
||||
font-size: 1em;
|
||||
}
|
||||
|
||||
img {
|
||||
border: 0;
|
||||
}
|
||||
|
||||
/* -- search page ----------------------------------------------------------- */
|
||||
|
||||
ul.search {
|
||||
margin: 10px 0 0 20px;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
ul.search li {
|
||||
padding: 5px 0 5px 20px;
|
||||
background-image: url(file.png);
|
||||
background-repeat: no-repeat;
|
||||
background-position: 0 7px;
|
||||
}
|
||||
|
||||
ul.search li a {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
ul.search li div.context {
|
||||
color: #888;
|
||||
margin: 2px 0 0 30px;
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
ul.keywordmatches li.goodmatch a {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
/* -- index page ------------------------------------------------------------ */
|
||||
|
||||
table.contentstable {
|
||||
width: 90%;
|
||||
}
|
||||
|
||||
table.contentstable p.biglink {
|
||||
line-height: 150%;
|
||||
}
|
||||
|
||||
a.biglink {
|
||||
font-size: 1.3em;
|
||||
}
|
||||
|
||||
span.linkdescr {
|
||||
font-style: italic;
|
||||
padding-top: 5px;
|
||||
font-size: 90%;
|
||||
}
|
||||
|
||||
/* -- general index --------------------------------------------------------- */
|
||||
|
||||
table.indextable {
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
table.indextable td {
|
||||
text-align: left;
|
||||
vertical-align: top;
|
||||
}
|
||||
|
||||
table.indextable dl, table.indextable dd {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
table.indextable tr.pcap {
|
||||
height: 10px;
|
||||
}
|
||||
|
||||
table.indextable tr.cap {
|
||||
margin-top: 10px;
|
||||
background-color: #f2f2f2;
|
||||
}
|
||||
|
||||
img.toggler {
|
||||
margin-right: 3px;
|
||||
margin-top: 3px;
|
||||
cursor: pointer;
|
||||
}
|
||||
|
||||
div.modindex-jumpbox {
|
||||
border-top: 1px solid #ddd;
|
||||
border-bottom: 1px solid #ddd;
|
||||
margin: 1em 0 1em 0;
|
||||
padding: 0.4em;
|
||||
}
|
||||
|
||||
div.genindex-jumpbox {
|
||||
border-top: 1px solid #ddd;
|
||||
border-bottom: 1px solid #ddd;
|
||||
margin: 1em 0 1em 0;
|
||||
padding: 0.4em;
|
||||
}
|
||||
|
||||
/* -- general body styles --------------------------------------------------- */
|
||||
|
||||
a.headerlink {
|
||||
visibility: hidden;
|
||||
}
|
||||
|
||||
h1:hover > a.headerlink,
|
||||
h2:hover > a.headerlink,
|
||||
h3:hover > a.headerlink,
|
||||
h4:hover > a.headerlink,
|
||||
h5:hover > a.headerlink,
|
||||
h6:hover > a.headerlink,
|
||||
dt:hover > a.headerlink {
|
||||
visibility: visible;
|
||||
}
|
||||
|
||||
div.document p.caption {
|
||||
text-align: inherit;
|
||||
}
|
||||
|
||||
div.document td {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.field-list ul {
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
.first {
|
||||
margin-top: 0 !important;
|
||||
}
|
||||
|
||||
p.rubric {
|
||||
margin-top: 30px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.align-left {
|
||||
text-align: left;
|
||||
}
|
||||
|
||||
.align-center {
|
||||
clear: both;
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
.align-right {
|
||||
text-align: right;
|
||||
}
|
||||
|
||||
/* -- sidebars -------------------------------------------------------------- */
|
||||
|
||||
div.sidebar {
|
||||
margin: 0 0 0.5em 1em;
|
||||
border: 1px solid #ddb;
|
||||
padding: 7px 7px 0 7px;
|
||||
background-color: #ffe;
|
||||
width: 40%;
|
||||
float: right;
|
||||
}
|
||||
|
||||
p.sidebar-title {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
/* -- topics ---------------------------------------------------------------- */
|
||||
|
||||
div.topic {
|
||||
border: 1px solid #ccc;
|
||||
padding: 7px 7px 0 7px;
|
||||
margin: 10px 0 10px 0;
|
||||
}
|
||||
|
||||
p.topic-title {
|
||||
font-size: 1.1em;
|
||||
font-weight: bold;
|
||||
margin-top: 10px;
|
||||
}
|
||||
|
||||
/* -- admonitions ----------------------------------------------------------- */
|
||||
|
||||
div.admonition {
|
||||
margin-top: 10px;
|
||||
margin-bottom: 10px;
|
||||
padding: 7px;
|
||||
}
|
||||
|
||||
div.admonition dt {
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
div.admonition dl {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
p.admonition-title {
|
||||
margin: 0px 10px 5px 0px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
div.document p.centered {
|
||||
text-align: center;
|
||||
margin-top: 25px;
|
||||
}
|
||||
|
||||
/* -- tables ---------------------------------------------------------------- */
|
||||
|
||||
table.docutils {
|
||||
border: 0;
|
||||
border-collapse: collapse;
|
||||
}
|
||||
|
||||
table.docutils td, table.docutils th {
|
||||
padding: 1px 8px 1px 5px;
|
||||
border-top: 0;
|
||||
border-left: 0;
|
||||
border-right: 0;
|
||||
border-bottom: 1px solid #aaa;
|
||||
}
|
||||
|
||||
table.field-list td, table.field-list th {
|
||||
border: 0 !important;
|
||||
}
|
||||
|
||||
table.footnote td, table.footnote th {
|
||||
border: 0 !important;
|
||||
}
|
||||
|
||||
th {
|
||||
text-align: left;
|
||||
padding-right: 5px;
|
||||
}
|
||||
|
||||
table.citation {
|
||||
border-left: solid 1px gray;
|
||||
margin-left: 1px;
|
||||
}
|
||||
|
||||
table.citation td {
|
||||
border-bottom: none;
|
||||
}
|
||||
|
||||
/* -- other body styles ----------------------------------------------------- */
|
||||
|
||||
ol.arabic {
|
||||
list-style: decimal;
|
||||
}
|
||||
|
||||
ol.loweralpha {
|
||||
list-style: lower-alpha;
|
||||
}
|
||||
|
||||
ol.upperalpha {
|
||||
list-style: upper-alpha;
|
||||
}
|
||||
|
||||
ol.lowerroman {
|
||||
list-style: lower-roman;
|
||||
}
|
||||
|
||||
ol.upperroman {
|
||||
list-style: upper-roman;
|
||||
}
|
||||
|
||||
dl {
|
||||
margin-bottom: 15px;
|
||||
}
|
||||
|
||||
dd p {
|
||||
margin-top: 0px;
|
||||
}
|
||||
|
||||
dd ul, dd table {
|
||||
margin-bottom: 10px;
|
||||
}
|
||||
|
||||
dd {
|
||||
margin-top: 3px;
|
||||
margin-bottom: 10px;
|
||||
margin-left: 30px;
|
||||
}
|
||||
|
||||
dt:target, .highlighted {
|
||||
background-color: #fbe54e;
|
||||
}
|
||||
|
||||
dl.glossary dt {
|
||||
font-weight: bold;
|
||||
font-size: 1.1em;
|
||||
}
|
||||
|
||||
.field-list ul {
|
||||
margin: 0;
|
||||
padding-left: 1em;
|
||||
}
|
||||
|
||||
.field-list p {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
.refcount {
|
||||
color: #060;
|
||||
}
|
||||
|
||||
.optional {
|
||||
font-size: 1.3em;
|
||||
}
|
||||
|
||||
.versionmodified {
|
||||
font-style: italic;
|
||||
}
|
||||
|
||||
.system-message {
|
||||
background-color: #fda;
|
||||
padding: 5px;
|
||||
border: 3px solid red;
|
||||
}
|
||||
|
||||
.footnote:target {
|
||||
background-color: #ffa
|
||||
}
|
||||
|
||||
.line-block {
|
||||
display: block;
|
||||
margin-top: 1em;
|
||||
margin-bottom: 1em;
|
||||
}
|
||||
|
||||
.line-block .line-block {
|
||||
margin-top: 0;
|
||||
margin-bottom: 0;
|
||||
margin-left: 1.5em;
|
||||
}
|
||||
|
||||
.guilabel, .menuselection {
|
||||
font-family: sans-serif;
|
||||
}
|
||||
|
||||
.accelerator {
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
.classifier {
|
||||
font-style: oblique;
|
||||
}
|
||||
|
||||
/* -- code displays --------------------------------------------------------- */
|
||||
|
||||
pre {
|
||||
overflow: auto;
|
||||
}
|
||||
|
||||
td.linenos pre {
|
||||
padding: 5px 0px;
|
||||
border: 0;
|
||||
background-color: transparent;
|
||||
color: #aaa;
|
||||
}
|
||||
|
||||
table.highlighttable {
|
||||
margin-left: 0.5em;
|
||||
}
|
||||
|
||||
table.highlighttable td {
|
||||
padding: 0 0.5em 0 0.5em;
|
||||
}
|
||||
|
||||
tt.descname {
|
||||
background-color: transparent;
|
||||
font-weight: bold;
|
||||
font-size: 1.2em;
|
||||
}
|
||||
|
||||
tt.descclassname {
|
||||
background-color: transparent;
|
||||
}
|
||||
|
||||
tt.xref, a tt {
|
||||
background-color: transparent;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
h1 tt, h2 tt, h3 tt, h4 tt, h5 tt, h6 tt {
|
||||
background-color: transparent;
|
||||
}
|
||||
|
||||
.viewcode-link {
|
||||
float: right;
|
||||
}
|
||||
|
||||
.viewcode-back {
|
||||
float: right;
|
||||
font-family: sans-serif;
|
||||
}
|
||||
|
||||
div.viewcode-block:target {
|
||||
margin: -1px -10px;
|
||||
padding: 0 10px;
|
||||
}
|
||||
|
||||
/* -- math display ---------------------------------------------------------- */
|
||||
|
||||
img.math {
|
||||
vertical-align: middle;
|
||||
}
|
||||
|
||||
div.document div.math p {
|
||||
text-align: center;
|
||||
}
|
||||
|
||||
span.eqno {
|
||||
float: right;
|
||||
}
|
||||
|
||||
/* -- printout stylesheet --------------------------------------------------- */
|
||||
|
||||
@media print {
|
||||
div.document,
|
||||
div.documentwrapper,
|
||||
div.bodywrapper {
|
||||
margin: 0 !important;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.sphinxsidebar,
|
||||
div.related,
|
||||
div.footer,
|
||||
#top-link {
|
||||
display: none;
|
||||
}
|
||||
}
|
||||
|
||||
scripts/gendoc.sh (new executable file, 14 lines)

@@ -0,0 +1,14 @@
#!/bin/bash

MATRIXDOTORG=$HOME/workspace/matrix.org

rst2html-2.7.py --stylesheet=basic.css,nature.css ../docs/specification.rst > $MATRIXDOTORG/docs/spec/index.html
rst2html-2.7.py --stylesheet=basic.css,nature.css ../docs/client-server/howto.rst > $MATRIXDOTORG/docs/howtos/client-server.html

perl -pi -e 's#<head>#<head><link rel="stylesheet" href="/site.css">#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html

perl -pi -e 's#<body>#<body><div id="header"><div id="headerContent"> </div></div><div id="page"><div id="wrapper"><div style="text-align: center; padding: 40px;"><img src="/matrix.png" width="305" height="130" alt="[matrix]"/></div>#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html

perl -pi -e 's#</body>#</div></div><div id="footer"><div id="footerContent">© 2014 Matrix.org</div></div></body>#' $MATRIXDOTORG/docs/spec/index.html $MATRIXDOTORG/docs/howtos/client-server.html

scp -r $MATRIXDOTORG/docs matrix@ldc-prd-matrix-001:/sites/matrix-beta
270
scripts/nature.css
Normal file
270
scripts/nature.css
Normal file
@@ -0,0 +1,270 @@
|
||||
/*
|
||||
* nature.css_t
|
||||
* ~~~~~~~~~~~~
|
||||
*
|
||||
* Sphinx stylesheet -- nature theme.
|
||||
*
|
||||
* :copyright: Copyright 2007-2010 by the Sphinx team, see AUTHORS.
|
||||
* :license: BSD, see LICENSE for details.
|
||||
*
|
||||
*/
|
||||
|
||||
/* -- page layout ----------------------------------------------------------- */
|
||||
|
||||
body {
|
||||
font-family: Arial, sans-serif;
|
||||
font-size: 100%;
|
||||
/*background-color: #111;*/
|
||||
color: #555;
|
||||
margin: 0;
|
||||
padding: 0;
|
||||
}
|
||||
|
||||
div.documentwrapper {
|
||||
float: left;
|
||||
width: 100%;
|
||||
}
|
||||
|
||||
div.bodywrapper {
|
||||
margin: 0 0 0 230px;
|
||||
}
|
||||
|
||||
hr {
|
||||
border: 1px solid #B1B4B6;
|
||||
}
|
||||
|
||||
/*
|
||||
div.document {
|
||||
background-color: #eee;
|
||||
}
|
||||
*/
|
||||
|
||||
div.document {
|
||||
background-color: #ffffff;
|
||||
color: #3E4349;
|
||||
padding: 0 30px 30px 30px;
|
||||
font-size: 0.9em;
|
||||
}
|
||||
|
||||
div.footer {
|
||||
color: #555;
|
||||
width: 100%;
|
||||
padding: 13px 0;
|
||||
text-align: center;
|
||||
font-size: 75%;
|
||||
}
|
||||
|
||||
div.footer a {
|
||||
color: #444;
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
div.related {
|
||||
background-color: #6BA81E;
|
||||
line-height: 32px;
|
||||
color: #fff;
|
||||
text-shadow: 0px 1px 0 #444;
|
||||
font-size: 0.9em;
|
||||
}
|
||||
|
||||
div.related a {
|
||||
color: #E2F3CC;
|
||||
}
|
||||
|
||||
div.sphinxsidebar {
|
||||
font-size: 0.75em;
|
||||
line-height: 1.5em;
|
||||
}
|
||||
|
||||
div.sphinxsidebarwrapper{
|
||||
padding: 20px 0;
|
||||
}
|
||||
|
||||
div.sphinxsidebar h3,
|
||||
div.sphinxsidebar h4 {
|
||||
font-family: Arial, sans-serif;
|
||||
color: #222;
|
||||
font-size: 1.2em;
|
||||
font-weight: normal;
|
||||
margin: 0;
|
||||
padding: 5px 10px;
|
||||
background-color: #ddd;
|
||||
text-shadow: 1px 1px 0 white
|
||||
}
|
||||
|
||||
div.sphinxsidebar h4{
|
||||
font-size: 1.1em;
|
||||
}
|
||||
|
||||
div.sphinxsidebar h3 a {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
|
||||
div.sphinxsidebar p {
|
||||
color: #888;
|
||||
padding: 5px 20px;
|
||||
}
|
||||
|
||||
div.sphinxsidebar p.topless {
|
||||
}
|
||||
|
||||
div.sphinxsidebar ul {
|
||||
margin: 10px 20px;
|
||||
padding: 0;
|
||||
color: #000;
|
||||
}
|
||||
|
||||
div.sphinxsidebar a {
|
||||
color: #444;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input {
|
||||
border: 1px solid #ccc;
|
||||
font-family: sans-serif;
|
||||
font-size: 1em;
|
||||
}
|
||||
|
||||
div.sphinxsidebar input[type=text]{
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
/* -- body styles ----------------------------------------------------------- */
|
||||
|
||||
a {
|
||||
color: #005B81;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a:hover {
|
||||
color: #E32E00;
|
||||
text-decoration: underline;
|
||||
}
|
||||
|
||||
div.document h1,
|
||||
div.document h2,
|
||||
div.document h3,
|
||||
div.document h4,
|
||||
div.document h5,
|
||||
div.document h6 {
|
||||
font-family: Arial, sans-serif;
|
||||
background-color: #BED4EB;
|
||||
font-weight: normal;
|
||||
color: #212224;
|
||||
margin: 30px 0px 10px 0px;
|
||||
padding: 5px 0 5px 10px;
|
||||
text-shadow: 0px 1px 0 white
|
||||
}
|
||||
|
||||
div.document h1 { border-top: 20px solid white; margin-top: 0; font-size: 200%; }
|
||||
div.document h2 { font-size: 150%; background-color: #C8D5E3; }
|
||||
div.document h3 { font-size: 120%; background-color: #D8DEE3; }
|
||||
div.document h4 { font-size: 110%; background-color: #D8DEE3; }
|
||||
div.document h5 { font-size: 100%; background-color: #D8DEE3; }
|
||||
div.document h6 { font-size: 100%; background-color: #D8DEE3; }
|
||||
|
||||
a.headerlink {
|
||||
color: #c60f0f;
|
||||
font-size: 0.8em;
|
||||
padding: 0 4px 0 4px;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
a.headerlink:hover {
|
||||
background-color: #c60f0f;
|
||||
color: white;
|
||||
}
|
||||
|
||||
div.document p, div.document dd, div.document li {
|
||||
line-height: 1.5em;
|
||||
}
|
||||
|
||||
div.admonition p.admonition-title + p {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
div.highlight{
|
||||
background-color: white;
|
||||
}
|
||||
|
||||
div.note {
|
||||
background-color: #eee;
|
||||
border: 1px solid #ccc;
|
||||
}
|
||||
|
||||
div.seealso {
|
||||
background-color: #ffc;
|
||||
border: 1px solid #ff6;
|
||||
}
|
||||
|
||||
div.topic {
|
||||
background-color: #eee;
|
||||
}
|
||||
|
||||
div.warning {
|
||||
background-color: #ffe4e4;
|
||||
border: 1px solid #f66;
|
||||
}
|
||||
|
||||
p.admonition-title {
|
||||
display: inline;
|
||||
}
|
||||
|
||||
p.admonition-title:after {
|
||||
content: ":";
|
||||
}
|
||||
|
||||
pre {
|
||||
padding: 10px;
|
||||
background-color: White;
|
||||
color: #222;
|
||||
line-height: 1.2em;
|
||||
border: 1px solid #C6C9CB;
|
||||
font-size: 1.1em;
|
||||
margin: 1.5em 0 1.5em 0;
|
||||
-webkit-box-shadow: 1px 1px 1px #d8d8d8;
|
||||
-moz-box-shadow: 1px 1px 1px #d8d8d8;
|
||||
}
|
||||
|
||||
tt {
|
||||
background-color: #ecf0f3;
|
||||
color: #222;
|
||||
/* padding: 1px 2px; */
|
||||
font-size: 1.1em;
|
||||
font-family: monospace;
|
||||
}
|
||||
|
||||
.viewcode-back {
|
||||
font-family: Arial, sans-serif;
|
||||
}
|
||||
|
||||
div.viewcode-block:target {
|
||||
background-color: #f4debf;
|
||||
border-top: 1px solid #ac9;
|
||||
border-bottom: 1px solid #ac9;
|
||||
}
|
||||
|
||||
p {
|
||||
margin: 0;
|
||||
}
|
||||
|
||||
ul li dd {
|
||||
margin-top: 0;
|
||||
}
|
||||
|
||||
ul li dl {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
li dl dd {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
dd ul {
|
||||
padding-left: 0;
|
||||
}
|
||||
|
||||
li dd ul {
|
||||
margin-bottom: 0;
|
||||
}
|
||||
|
||||
setup.py (8 lines changed)

@@ -31,10 +31,9 @@ setup(
    packages=find_packages(exclude=["tests"]),
    description="Reference Synapse Home Server",
    install_requires=[
        "syutil==0.0.2",
        "syutil==0.0.1",
        "Twisted>=14.0.0",
        "service_identity>=1.0.0",
        "pyopenssl>=0.14",
        "pyyaml",
        "pyasn1",
        "pynacl",
@@ -42,12 +41,11 @@ setup(
        "py-bcrypt",
    ],
    dependency_links=[
        "https://github.com/matrix-org/syutil/tarball/v0.0.2#egg=syutil-0.0.2",
        "git+ssh://git@github.com/matrix-org/syutil.git#egg=syutil-0.0.1",
    ],
    setup_requires=[
        "setuptools_trial",
        "setuptools>=1.0.0",  # Needs setuptools that supports git+ssh.
        # TODO: Do we need this now? we don't use git+ssh.
        "setuptools>=1.0.0",  # Needs setuptools that supports git+ssh. It's not obvious when support for this was introduced.
        "mock"
    ],
    include_package_data=True,

@@ -16,4 +16,4 @@
""" This is a reference implementation of a synapse home server.
"""

__version__ = "0.4.2"
__version__ = "0.2.0"

@@ -12,3 +12,4 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@@ -18,10 +18,8 @@
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import Membership, JoinRules
|
||||
from synapse.api.errors import AuthError, StoreError, Codes, SynapseError
|
||||
from synapse.api.events.room import (
|
||||
RoomMemberEvent, RoomPowerLevelsEvent, RoomRedactionEvent,
|
||||
)
|
||||
from synapse.api.errors import AuthError, StoreError, Codes
|
||||
from synapse.api.events.room import RoomMemberEvent
|
||||
from synapse.util.logutils import log_function
|
||||
|
||||
import logging
|
||||
@@ -69,12 +67,6 @@ class Auth(object):
|
||||
else:
|
||||
yield self._can_send_event(event)
|
||||
|
||||
if event.type == RoomPowerLevelsEvent.TYPE:
|
||||
yield self._check_power_levels(event)
|
||||
|
||||
if event.type == RoomRedactionEvent.TYPE:
|
||||
yield self._check_redaction(event)
|
||||
|
||||
defer.returnValue(True)
|
||||
else:
|
||||
raise AuthError(500, "Unknown event: %s" % event)
|
||||
@@ -175,12 +167,12 @@ class Auth(object):
|
||||
event.room_id,
|
||||
event.user_id,
|
||||
)
|
||||
_, kick_level, _ = yield self.store.get_ops_levels(event.room_id)
|
||||
_, kick_level = yield self.store.get_ops_levels(event.room_id)
|
||||
|
||||
if kick_level:
|
||||
kick_level = int(kick_level)
|
||||
else:
|
||||
kick_level = 50
|
||||
kick_level = 5
|
||||
|
||||
if user_level < kick_level:
|
||||
raise AuthError(
|
||||
@@ -192,12 +184,12 @@ class Auth(object):
|
||||
event.user_id,
|
||||
)
|
||||
|
||||
ban_level, _, _ = yield self.store.get_ops_levels(event.room_id)
|
||||
ban_level, _ = yield self.store.get_ops_levels(event.room_id)
|
||||
|
||||
if ban_level:
|
||||
ban_level = int(ban_level)
|
||||
else:
|
||||
ban_level = 50 # FIXME (erikj): What should we do here?
|
||||
ban_level = 5 # FIXME (erikj): What should we do here?
|
||||
|
||||
if user_level < ban_level:
|
||||
raise AuthError(403, "You don't have permission to ban")
|
||||
@@ -206,7 +198,6 @@ class Auth(object):
|
||||
|
||||
defer.returnValue(True)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_user_by_req(self, request):
|
||||
""" Get a registered user's ID.
|
||||
|
||||
@@ -219,25 +210,7 @@ class Auth(object):
|
||||
"""
|
||||
# Can optionally look elsewhere in the request (e.g. headers)
|
||||
try:
|
||||
access_token = request.args["access_token"][0]
|
||||
user_info = yield self.get_user_by_token(access_token)
|
||||
user = user_info["user"]
|
||||
|
||||
ip_addr = self.hs.get_ip_from_request(request)
|
||||
user_agent = request.requestHeaders.getRawHeaders(
|
||||
"User-Agent",
|
||||
default=[""]
|
||||
)[0]
|
||||
if user and access_token and ip_addr:
|
||||
self.store.insert_client_ip(
|
||||
user=user,
|
||||
access_token=access_token,
|
||||
device_id=user_info["device_id"],
|
||||
ip=ip_addr,
|
||||
user_agent=user_agent
|
||||
)
|
||||
|
||||
defer.returnValue(user)
|
||||
return self.get_user_by_token(request.args["access_token"][0])
|
||||
except KeyError:
|
||||
raise AuthError(403, "Missing access token.")
|
||||
|
||||
@@ -246,32 +219,21 @@ class Auth(object):
|
||||
""" Get a registered user's ID.
|
||||
|
||||
Args:
|
||||
token (str): The access token to get the user by.
|
||||
token (str)- The access token to get the user by.
|
||||
Returns:
|
||||
dict : dict that includes the user, device_id, and whether the
|
||||
user is a server admin.
|
||||
UserID : User ID object of the user who has that access token.
|
||||
Raises:
|
||||
AuthError if no user by that token exists or the token is invalid.
|
||||
"""
|
||||
try:
|
||||
ret = yield self.store.get_user_by_token(token=token)
|
||||
if not ret:
|
||||
user_id = yield self.store.get_user_by_token(token=token)
|
||||
if not user_id:
|
||||
raise StoreError()
|
||||
|
||||
user_info = {
|
||||
"admin": bool(ret.get("admin", False)),
|
||||
"device_id": ret.get("device_id"),
|
||||
"user": self.hs.parse_userid(ret.get("name")),
|
||||
}
|
||||
|
||||
defer.returnValue(user_info)
|
||||
defer.returnValue(self.hs.parse_userid(user_id))
|
||||
except StoreError:
|
||||
raise AuthError(403, "Unrecognised access token.",
|
||||
errcode=Codes.UNKNOWN_TOKEN)
|
||||
|
||||
def is_server_admin(self, user):
|
||||
return self.store.is_server_admin(user)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def _can_send_event(self, event):
|
||||
@@ -343,9 +305,7 @@ class Auth(object):
|
||||
else:
|
||||
user_level = 0
|
||||
|
||||
logger.debug(
|
||||
"Checking power level for %s, %s", event.user_id, user_level
|
||||
)
|
||||
logger.debug("Checking power level for %s, %s", event.user_id, user_level)
|
||||
if current_state and hasattr(current_state, "required_power_level"):
|
||||
req = current_state.required_power_level
|
||||
|
||||
@@ -355,124 +315,3 @@ class Auth(object):
|
||||
403,
|
||||
"You don't have permission to change that state"
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _check_redaction(self, event):
|
||||
user_level = yield self.store.get_power_level(
|
||||
event.room_id,
|
||||
event.user_id,
|
||||
)
|
||||
|
||||
if user_level:
|
||||
user_level = int(user_level)
|
||||
else:
|
||||
user_level = 0
|
||||
|
||||
_, _, redact_level = yield self.store.get_ops_levels(event.room_id)
|
||||
|
||||
if not redact_level:
|
||||
redact_level = 50
|
||||
|
||||
if user_level < redact_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to redact events"
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _check_power_levels(self, event):
|
||||
for k, v in event.content.items():
|
||||
if k == "default":
|
||||
continue
|
||||
|
||||
# FIXME (erikj): We don't want hsob_Ts in content.
|
||||
if k == "hsob_ts":
|
||||
continue
|
||||
|
||||
try:
|
||||
self.hs.parse_userid(k)
|
||||
except:
|
||||
raise SynapseError(400, "Not a valid user_id: %s" % (k,))
|
||||
|
||||
try:
|
||||
int(v)
|
||||
except:
|
||||
raise SynapseError(400, "Not a valid power level: %s" % (v,))
|
||||
|
||||
current_state = yield self.store.get_current_state(
|
||||
event.room_id,
|
||||
event.type,
|
||||
event.state_key,
|
||||
)
|
||||
|
||||
if not current_state:
|
||||
return
|
||||
else:
|
||||
current_state = current_state[0]
|
||||
|
||||
user_level = yield self.store.get_power_level(
|
||||
event.room_id,
|
||||
event.user_id,
|
||||
)
|
||||
|
||||
if user_level:
|
||||
user_level = int(user_level)
|
||||
else:
|
||||
user_level = 0
|
||||
|
||||
old_list = current_state.content
|
||||
|
||||
# FIXME (erikj)
|
||||
old_people = {k: v for k, v in old_list.items() if k.startswith("@")}
|
||||
new_people = {
|
||||
k: v for k, v in event.content.items()
|
||||
if k.startswith("@")
|
||||
}
|
||||
|
||||
removed = set(old_people.keys()) - set(new_people.keys())
|
||||
added = set(new_people.keys()) - set(old_people.keys())
|
||||
same = set(old_people.keys()) & set(new_people.keys())
|
||||
|
||||
for r in removed:
|
||||
if int(old_list[r]) > user_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to remove user: %s" % (r, )
|
||||
)
|
||||
|
||||
for n in added:
|
||||
if int(event.content[n]) > user_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to add ops level greater "
|
||||
"than your own"
|
||||
)
|
||||
|
||||
for s in same:
|
||||
if int(event.content[s]) != int(old_list[s]):
|
||||
if int(event.content[s]) > user_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to add ops level greater "
|
||||
"than your own"
|
||||
)
|
||||
|
||||
if "default" in old_list:
|
||||
old_default = int(old_list["default"])
|
||||
|
||||
if old_default > user_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to add ops level greater than "
|
||||
"your own"
|
||||
)
|
||||
|
||||
if "default" in event.content:
|
||||
new_default = int(event.content["default"])
|
||||
|
||||
if new_default > user_level:
|
||||
raise AuthError(
|
||||
403,
|
||||
"You don't have permission to add ops level greater "
|
||||
"than your own"
|
||||
)
|
||||
|
||||
@@ -50,12 +50,3 @@ class JoinRules(object):
    KNOCK = u"knock"
    INVITE = u"invite"
    PRIVATE = u"private"


class LoginType(object):
    PASSWORD = u"m.login.password"
    OAUTH = u"m.login.oauth2"
    EMAIL_CODE = u"m.login.email.code"
    EMAIL_URL = u"m.login.email.url"
    EMAIL_IDENTITY = u"m.login.email.identity"
    RECAPTCHA = u"m.login.recaptcha"

@@ -19,7 +19,6 @@ import logging


class Codes(object):
    UNAUTHORIZED = "M_UNAUTHORIZED"
    FORBIDDEN = "M_FORBIDDEN"
    BAD_JSON = "M_BAD_JSON"
    NOT_JSON = "M_NOT_JSON"
@@ -30,8 +29,6 @@ class Codes(object):
    NOT_FOUND = "M_NOT_FOUND"
    UNKNOWN_TOKEN = "M_UNKNOWN_TOKEN"
    LIMIT_EXCEEDED = "M_LIMIT_EXCEEDED"
    CAPTCHA_NEEDED = "M_CAPTCHA_NEEDED"
    CAPTCHA_INVALID = "M_CAPTCHA_INVALID"


class CodeMessageException(Exception):
@@ -54,7 +51,7 @@ class SynapseError(CodeMessageException):
        """Constructs a synapse error.

        Args:
            code (int): The integer error code (an HTTP response code)
            code (int): The integer error code (typically an HTTP response code)
            msg (str): The human-readable error message.
            err (str): The error code e.g 'M_FORBIDDEN'
        """
@@ -67,7 +64,6 @@ class SynapseError(CodeMessageException):
            self.errcode,
        )


class RoomError(SynapseError):
    """An error raised when a room event fails."""
    pass
@@ -105,20 +101,6 @@ class StoreError(SynapseError):
    pass


class InvalidCaptchaError(SynapseError):
    def __init__(self, code=400, msg="Invalid captcha.", error_url=None,
                 errcode=Codes.CAPTCHA_INVALID):
        super(InvalidCaptchaError, self).__init__(code, msg, errcode)
        self.error_url = error_url

    def error_dict(self):
        return cs_error(
            self.msg,
            self.errcode,
            error_url=self.error_url,
        )


class LimitExceededError(SynapseError):
    """A client has sent too many requests and is being throttled.
    """

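As a rough usage sketch of the error classes in this hunk (the constructor
argument order follows the docstring above; the exact dict returned by
``error_dict`` is an assumption)::

    try:
        raise SynapseError(403, "You are not invited to this room.",
                           Codes.FORBIDDEN)
    except SynapseError as e:
        print(e.code)          # 403
        print(e.error_dict())  # e.g. {"error": "...", "errcode": "M_FORBIDDEN"}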
@@ -17,20 +17,6 @@ from synapse.api.errors import SynapseError, Codes
|
||||
from synapse.util.jsonobject import JsonEncodedObject
|
||||
|
||||
|
||||
def serialize_event(hs, e):
|
||||
# FIXME(erikj): To handle the case of presence events and the like
|
||||
if not isinstance(e, SynapseEvent):
|
||||
return e
|
||||
|
||||
# Should this strip out None's?
|
||||
d = {k: v for k, v in e.get_dict().items()}
|
||||
if "age_ts" in d:
|
||||
d["age"] = int(hs.get_clock().time_msec()) - d["age_ts"]
|
||||
del d["age_ts"]
|
||||
|
||||
return d
|
||||
|
||||
|
||||
class SynapseEvent(JsonEncodedObject):
|
||||
|
||||
"""Base class for Synapse events. These are JSON objects which must abide
|
||||
@@ -57,21 +43,17 @@ class SynapseEvent(JsonEncodedObject):
|
||||
"content", # HTTP body, JSON
|
||||
"state_key",
|
||||
"required_power_level",
|
||||
"age_ts",
|
||||
"prev_content",
|
||||
"prev_state",
|
||||
"redacted_because",
|
||||
]
|
||||
|
||||
internal_keys = [
|
||||
"is_state",
|
||||
"prev_events",
|
||||
"prev_state",
|
||||
"depth",
|
||||
"destinations",
|
||||
"origin",
|
||||
"outlier",
|
||||
"power_level",
|
||||
"redacted",
|
||||
]
|
||||
|
||||
required_keys = [
|
||||
@@ -159,8 +141,7 @@ class SynapseEvent(JsonEncodedObject):
|
||||
return "Missing %s key" % key
|
||||
|
||||
if type(content[key]) != type(template[key]):
|
||||
return "Key %s is of the wrong type (got %s, want %s)" % (
|
||||
key, type(content[key]), type(template[key]))
|
||||
return "Key %s is of the wrong type." % key
|
||||
|
||||
if type(content[key]) == dict:
|
||||
# we must go deeper
|
||||
@@ -176,8 +157,7 @@ class SynapseEvent(JsonEncodedObject):
|
||||
|
||||
|
||||
class SynapseStateEvent(SynapseEvent):
|
||||
|
||||
def __init__(self, **kwargs):
|
||||
def __init__(self, **kwargs):
|
||||
if "state_key" not in kwargs:
|
||||
kwargs["state_key"] = ""
|
||||
super(SynapseStateEvent, self).__init__(**kwargs)
|
||||
|
||||
@@ -17,8 +17,7 @@ from synapse.api.events.room import (
|
||||
RoomTopicEvent, MessageEvent, RoomMemberEvent, FeedbackEvent,
|
||||
InviteJoinEvent, RoomConfigEvent, RoomNameEvent, GenericEvent,
|
||||
RoomPowerLevelsEvent, RoomJoinRulesEvent, RoomOpsPowerLevelsEvent,
|
||||
RoomCreateEvent, RoomAddStateLevelEvent, RoomSendEventLevelEvent,
|
||||
RoomRedactionEvent,
|
||||
RoomCreateEvent, RoomAddStateLevelEvent, RoomSendEventLevelEvent
|
||||
)
|
||||
|
||||
from synapse.util.stringutils import random_string
|
||||
@@ -40,7 +39,6 @@ class EventFactory(object):
|
||||
RoomAddStateLevelEvent,
|
||||
RoomSendEventLevelEvent,
|
||||
RoomOpsPowerLevelsEvent,
|
||||
RoomRedactionEvent,
|
||||
]
|
||||
|
||||
def __init__(self, hs):
|
||||
@@ -49,25 +47,14 @@ class EventFactory(object):
|
||||
self._event_list[event_class.TYPE] = event_class
|
||||
|
||||
self.clock = hs.get_clock()
|
||||
self.hs = hs
|
||||
|
||||
def create_event(self, etype=None, **kwargs):
|
||||
kwargs["type"] = etype
|
||||
if "event_id" not in kwargs:
|
||||
kwargs["event_id"] = "%s@%s" % (
|
||||
random_string(10), self.hs.hostname
|
||||
)
|
||||
kwargs["event_id"] = random_string(10)
|
||||
|
||||
if "origin_server_ts" not in kwargs:
|
||||
kwargs["origin_server_ts"] = int(self.clock.time_msec())
|
||||
|
||||
# The "age" key is a delta timestamp that should be converted into an
|
||||
# absolute timestamp the minute we see it.
|
||||
if "age" in kwargs:
|
||||
kwargs["age_ts"] = int(self.clock.time_msec()) - int(kwargs["age"])
|
||||
del kwargs["age"]
|
||||
elif "age_ts" not in kwargs:
|
||||
kwargs["age_ts"] = int(self.clock.time_msec())
|
||||
if "ts" not in kwargs:
|
||||
kwargs["ts"] = int(self.clock.time_msec())
|
||||
|
||||
if etype in self._event_list:
|
||||
handler = self._event_list[etype]
|
||||
|
||||
@@ -173,19 +173,3 @@ class RoomOpsPowerLevelsEvent(SynapseStateEvent):
|
||||
|
||||
def get_content_template(self):
|
||||
return {}
|
||||
|
||||
|
||||
class RoomAliasesEvent(SynapseStateEvent):
|
||||
TYPE = "m.room.aliases"
|
||||
|
||||
def get_content_template(self):
|
||||
return {}
|
||||
|
||||
|
||||
class RoomRedactionEvent(SynapseEvent):
|
||||
TYPE = "m.room.redaction"
|
||||
|
||||
valid_keys = SynapseEvent.valid_keys + ["redacts"]
|
||||
|
||||
def get_content_template(self):
|
||||
return {}
|
||||
|
||||
@@ -1,64 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .room import (
    RoomMemberEvent, RoomJoinRulesEvent, RoomPowerLevelsEvent,
    RoomAddStateLevelEvent, RoomSendEventLevelEvent, RoomOpsPowerLevelsEvent,
    RoomAliasesEvent, RoomCreateEvent,
)

def prune_event(event):
    """ Prunes the given event of all keys we don't know about or think could
    potentially be dodgy.

    This is used when we "redact" an event. We want to remove all fields that
    the user has specified, but we do want to keep necessary information like
    type, state_key etc.
    """

    # Remove all extraneous fields.
    event.unrecognized_keys = {}

    new_content = {}

    def add_fields(*fields):
        for field in fields:
            if field in event.content:
                new_content[field] = event.content[field]

    if event.type == RoomMemberEvent.TYPE:
        add_fields("membership")
    elif event.type == RoomCreateEvent.TYPE:
        add_fields("creator")
    elif event.type == RoomJoinRulesEvent.TYPE:
        add_fields("join_rule")
    elif event.type == RoomPowerLevelsEvent.TYPE:
        # TODO: Actually check these are valid user_ids etc.
        add_fields("default")
        for k, v in event.content.items():
            if k.startswith("@") and isinstance(v, (int, long)):
                new_content[k] = v
    elif event.type == RoomAddStateLevelEvent.TYPE:
        add_fields("level")
    elif event.type == RoomSendEventLevelEvent.TYPE:
        add_fields("level")
    elif event.type == RoomOpsPowerLevelsEvent.TYPE:
        add_fields("kick_level", "ban_level", "redact_level")
    elif event.type == RoomAliasesEvent.TYPE:
        add_fields("aliases")

    event.content = new_content

    return event

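A hypothetical illustration of the pruning behaviour implemented above; the
``FakeEvent`` stand-in is invented for the example and is not part of the
codebase::

    class FakeEvent(object):
        """Minimal stand-in with just the attributes prune_event touches."""
        def __init__(self, etype, content):
            self.type = etype
            self.content = content
            self.unrecognized_keys = {"extra": "stuff"}

    event = FakeEvent(RoomMemberEvent.TYPE,
                      {"membership": "join", "displayname": "Alice"})
    pruned = prune_event(event)

    # Only the whitelisted key for m.room.member survives; everything else,
    # including unrecognized keys, is dropped.
    assert pruned.content == {"membership": "join"}
    assert pruned.unrecognized_keys == {}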
@@ -18,5 +18,4 @@
CLIENT_PREFIX = "/_matrix/client/api/v1"
FEDERATION_PREFIX = "/_matrix/federation/v1"
WEB_CLIENT_PREFIX = "/_matrix/client"
CONTENT_REPO_PREFIX = "/_matrix/content"
SERVER_KEY_PREFIX = "/_matrix/key/v1"
CONTENT_REPO_PREFIX = "/_matrix/content"

@@ -12,3 +12,4 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@@ -14,7 +14,7 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from synapse.storage import prepare_database
|
||||
from synapse.storage import read_schema
|
||||
|
||||
from synapse.server import HomeServer
|
||||
|
||||
@@ -25,11 +25,9 @@ from twisted.web.static import File
|
||||
from twisted.web.server import Site
|
||||
from synapse.http.server import JsonResource, RootRedirect
|
||||
from synapse.http.content_repository import ContentRepoResource
|
||||
from synapse.http.server_key_resource import LocalKey
|
||||
from synapse.http.client import MatrixHttpClient
|
||||
from synapse.http.client import TwistedHttpClient
|
||||
from synapse.api.urls import (
|
||||
CLIENT_PREFIX, FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX,
|
||||
SERVER_KEY_PREFIX,
|
||||
CLIENT_PREFIX, FEDERATION_PREFIX, WEB_CLIENT_PREFIX, CONTENT_REPO_PREFIX
|
||||
)
|
||||
from synapse.config.homeserver import HomeServerConfig
|
||||
from synapse.crypto import context_factory
|
||||
@@ -38,18 +36,34 @@ from daemonize import Daemonize
|
||||
import twisted.manhole.telnet
|
||||
|
||||
import logging
|
||||
import sqlite3
|
||||
import os
|
||||
import re
|
||||
import sys
|
||||
import sqlite3
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
SCHEMAS = [
|
||||
"transactions",
|
||||
"pdu",
|
||||
"users",
|
||||
"profiles",
|
||||
"presence",
|
||||
"im",
|
||||
"room_aliases",
|
||||
]
|
||||
|
||||
|
||||
# Remember to update this number every time an incompatible change is made to
|
||||
# database schema files, so the users will be informed on server restarts.
|
||||
SCHEMA_VERSION = 3
|
||||
|
||||
|
||||
class SynapseHomeServer(HomeServer):
|
||||
|
||||
def build_http_client(self):
|
||||
return MatrixHttpClient(self)
|
||||
return TwistedHttpClient(self)
|
||||
|
||||
def build_resource_for_client(self):
|
||||
return JsonResource()
|
||||
@@ -65,16 +79,53 @@ class SynapseHomeServer(HomeServer):
|
||||
self, self.upload_dir, self.auth, self.content_addr
|
||||
)
|
||||
|
||||
def build_resource_for_server_key(self):
|
||||
return LocalKey(self)
|
||||
|
||||
def build_db_pool(self):
|
||||
return adbapi.ConnectionPool(
|
||||
"sqlite3", self.get_db_name(),
|
||||
check_same_thread=False,
|
||||
cp_min=1,
|
||||
cp_max=1
|
||||
)
|
||||
""" Set up all the dbs. Since all the *.sql have IF NOT EXISTS, so we
|
||||
don't have to worry about overwriting existing content.
|
||||
"""
|
||||
logging.info("Preparing database: %s...", self.db_name)
|
||||
|
||||
with sqlite3.connect(self.db_name) as db_conn:
|
||||
c = db_conn.cursor()
|
||||
c.execute("PRAGMA user_version")
|
||||
row = c.fetchone()
|
||||
|
||||
if row and row[0]:
|
||||
user_version = row[0]
|
||||
|
||||
if user_version > SCHEMA_VERSION:
|
||||
raise ValueError("Cannot use this database as it is too " +
|
||||
"new for the server to understand"
|
||||
)
|
||||
elif user_version < SCHEMA_VERSION:
|
||||
logging.info("Upgrading database from version %d",
|
||||
user_version
|
||||
)
|
||||
|
||||
# Run every version since after the current version.
|
||||
for v in range(user_version + 1, SCHEMA_VERSION + 1):
|
||||
sql_script = read_schema("delta/v%d" % (v))
|
||||
c.executescript(sql_script)
|
||||
|
||||
db_conn.commit()
|
||||
|
||||
else:
|
||||
for sql_loc in SCHEMAS:
|
||||
sql_script = read_schema(sql_loc)
|
||||
|
||||
c.executescript(sql_script)
|
||||
db_conn.commit()
|
||||
c.execute("PRAGMA user_version = %d" % SCHEMA_VERSION)
|
||||
|
||||
c.close()
|
||||
|
||||
logging.info("Database prepared in %s.", self.db_name)
|
||||
|
||||
pool = adbapi.ConnectionPool(
|
||||
'sqlite3', self.db_name, check_same_thread=False,
|
||||
cp_min=1, cp_max=1)
|
||||
|
||||
return pool
|
||||
|
||||
def create_resource_tree(self, web_client, redirect_root_to_web_client):
|
||||
"""Create the resource tree for this Home Server.
|
||||
@@ -93,8 +144,7 @@ class SynapseHomeServer(HomeServer):
|
||||
desired_tree = [
|
||||
(CLIENT_PREFIX, self.get_resource_for_client()),
|
||||
(FEDERATION_PREFIX, self.get_resource_for_federation()),
|
||||
(CONTENT_REPO_PREFIX, self.get_resource_for_content_repo()),
|
||||
(SERVER_KEY_PREFIX, self.get_resource_for_server_key()),
|
||||
(CONTENT_REPO_PREFIX, self.get_resource_for_content_repo())
|
||||
]
|
||||
if web_client:
|
||||
logger.info("Adding the web client.")
|
||||
@@ -180,6 +230,10 @@ class SynapseHomeServer(HomeServer):
|
||||
logger.info("Synapse now listening on port %d", unsecure_port)
|
||||
|
||||
|
||||
def run():
|
||||
reactor.run()
|
||||
|
||||
|
||||
def setup():
|
||||
config = HomeServerConfig.load_config(
|
||||
"Synapse Homeserver",
|
||||
@@ -214,15 +268,7 @@ def setup():
|
||||
web_client=config.webclient,
|
||||
redirect_root_to_web_client=True,
|
||||
)
|
||||
|
||||
db_name = hs.get_db_name()
|
||||
|
||||
logging.info("Preparing database: %s...", db_name)
|
||||
|
||||
with sqlite3.connect(db_name) as db_conn:
|
||||
prepare_database(db_conn)
|
||||
|
||||
logging.info("Database prepared in %s.", db_name)
|
||||
hs.start_listening(config.bind_port, config.unsecure_port)
|
||||
|
||||
hs.get_db_pool()
|
||||
|
||||
@@ -233,14 +279,12 @@ def setup():
|
||||
f.namespace['hs'] = hs
|
||||
reactor.listenTCP(config.manhole, f, interface='127.0.0.1')
|
||||
|
||||
hs.start_listening(config.bind_port, config.unsecure_port)
|
||||
|
||||
if config.daemonize:
|
||||
print config.pid_file
|
||||
daemon = Daemonize(
|
||||
app="synapse-homeserver",
|
||||
pid=config.pid_file,
|
||||
action=reactor.run,
|
||||
action=run,
|
||||
auto_close_fds=False,
|
||||
verbose=True,
|
||||
logger=logger,
|
||||
@@ -248,7 +292,7 @@ def setup():
|
||||
|
||||
daemon.start()
|
||||
else:
|
||||
reactor.run()
|
||||
run()
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
@@ -116,25 +116,13 @@ class Config(object):
|
||||
config = {}
|
||||
for key, value in vars(args).items():
|
||||
if (key not in set(["config_path", "generate_config"])
|
||||
and value is not None):
|
||||
and value is not None):
|
||||
config[key] = value
|
||||
with open(config_args.config_path, "w") as config_file:
|
||||
# TODO(paul) it would be lovely if we wrote out vim- and emacs-
|
||||
# style mode markers into the file, to hint to people that
|
||||
# this is a YAML file.
|
||||
yaml.dump(config, config_file, default_flow_style=False)
|
||||
print (
|
||||
"A config file has been generated in %s for server name"
|
||||
" '%s' with corresponding SSL keys and self-signed"
|
||||
" certificates. Please review this file and customise it to"
|
||||
" your needs."
|
||||
) % (
|
||||
config_args.config_path, config['server_name']
|
||||
)
|
||||
print (
|
||||
"If this server name is incorrect, you will need to regenerate"
|
||||
" the SSL certificates"
|
||||
)
|
||||
sys.exit(0)
|
||||
|
||||
return cls(args)
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,51 +0,0 @@
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from ._base import Config
|
||||
|
||||
|
||||
class CaptchaConfig(Config):
|
||||
|
||||
def __init__(self, args):
|
||||
super(CaptchaConfig, self).__init__(args)
|
||||
self.recaptcha_private_key = args.recaptcha_private_key
|
||||
self.enable_registration_captcha = args.enable_registration_captcha
|
||||
self.captcha_ip_origin_is_x_forwarded = (
|
||||
args.captcha_ip_origin_is_x_forwarded
|
||||
)
|
||||
self.captcha_bypass_secret = args.captcha_bypass_secret
|
||||
|
||||
@classmethod
|
||||
def add_arguments(cls, parser):
|
||||
super(CaptchaConfig, cls).add_arguments(parser)
|
||||
group = parser.add_argument_group("recaptcha")
|
||||
group.add_argument(
|
||||
"--recaptcha-private-key", type=str, default="YOUR_PRIVATE_KEY",
|
||||
help="The matching private key for the web client's public key."
|
||||
)
|
||||
group.add_argument(
|
||||
"--enable-registration-captcha", type=bool, default=False,
|
||||
help="Enables ReCaptcha checks when registering, preventing signup"
|
||||
+ " unless a captcha is answered. Requires a valid ReCaptcha "
|
||||
+ "public/private key."
|
||||
)
|
||||
group.add_argument(
|
||||
"--captcha_ip_origin_is_x_forwarded", type=bool, default=False,
|
||||
help="When checking captchas, use the X-Forwarded-For (XFF) header"
|
||||
+ " as the client IP and not the actual client IP."
|
||||
)
|
||||
group.add_argument(
|
||||
"--captcha_bypass_secret", type=str,
|
||||
help="A secret key used to bypass the captcha test entirely."
|
||||
)
|
||||
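The --captcha_ip_origin_is_x_forwarded option above only makes sense when the homeserver sits behind a reverse proxy. A hypothetical helper showing the kind of IP selection it implies (the helper name is an assumption, not part of this diff; the request API calls are standard Twisted ones):

def captcha_client_ip(request, use_x_forwarded=False):
    """Pick the client IP to report to the captcha service."""
    if use_x_forwarded:
        xff = request.requestHeaders.getRawHeaders("X-Forwarded-For")
        if xff:
            # The left-most entry is the original client when the proxy appends.
            return xff[0].split(",")[0].strip()
    return request.getClientIP()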
@@ -16,7 +16,6 @@
|
||||
from ._base import Config
|
||||
import os
|
||||
|
||||
|
||||
class DatabaseConfig(Config):
|
||||
def __init__(self, args):
|
||||
super(DatabaseConfig, self).__init__(args)
|
||||
@@ -35,3 +34,4 @@ class DatabaseConfig(Config):
|
||||
def generate_config(cls, args, config_dir_path):
|
||||
super(DatabaseConfig, cls).generate_config(args, config_dir_path)
|
||||
args.database_path = os.path.abspath(args.database_path)
|
||||
|
||||
|
||||
@@ -1,42 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from ._base import Config
|
||||
|
||||
|
||||
class EmailConfig(Config):
|
||||
|
||||
def __init__(self, args):
|
||||
super(EmailConfig, self).__init__(args)
|
||||
self.email_from_address = args.email_from_address
|
||||
self.email_smtp_server = args.email_smtp_server
|
||||
|
||||
@classmethod
|
||||
def add_arguments(cls, parser):
|
||||
super(EmailConfig, cls).add_arguments(parser)
|
||||
email_group = parser.add_argument_group("email")
|
||||
email_group.add_argument(
|
||||
"--email-from-address",
|
||||
default="FROM@EXAMPLE.COM",
|
||||
help="The address to send emails from (e.g. for password resets)."
|
||||
)
|
||||
email_group.add_argument(
|
||||
"--email-smtp-server",
|
||||
default="",
|
||||
help=(
|
||||
"The SMTP server to send emails from (e.g. for password"
|
||||
" resets)."
|
||||
)
|
||||
)
|
||||
@@ -19,17 +19,11 @@ from .logger import LoggingConfig
|
||||
from .database import DatabaseConfig
|
||||
from .ratelimiting import RatelimitConfig
|
||||
from .repository import ContentRepositoryConfig
|
||||
from .captcha import CaptchaConfig
|
||||
from .email import EmailConfig
|
||||
from .voip import VoipConfig
|
||||
|
||||
|
||||
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
|
||||
RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
|
||||
EmailConfig, VoipConfig):
|
||||
RatelimitConfig, ContentRepositoryConfig):
|
||||
pass
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
if __name__=='__main__':
|
||||
import sys
|
||||
HomeServerConfig.load_config("Generate config", sys.argv[1:], "HomeServer")
|
||||
|
||||
@@ -19,7 +19,6 @@ from twisted.python.log import PythonLoggingObserver
|
||||
import logging
|
||||
import logging.config
|
||||
|
||||
|
||||
class LoggingConfig(Config):
|
||||
def __init__(self, args):
|
||||
super(LoggingConfig, self).__init__(args)
|
||||
@@ -52,7 +51,7 @@ class LoggingConfig(Config):
|
||||
|
||||
level = logging.INFO
|
||||
if self.verbosity:
|
||||
level = logging.DEBUG
|
||||
level = logging.DEBUG
|
||||
|
||||
# FIXME: we need a logging.WARN for a -q quiet option
|
||||
|
||||
|
||||
@@ -14,7 +14,6 @@
|
||||
|
||||
from ._base import Config
|
||||
|
||||
|
||||
class RatelimitConfig(Config):
|
||||
|
||||
def __init__(self, args):
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
# limitations under the License.
|
||||
|
||||
from ._base import Config
|
||||
|
||||
import os
|
||||
|
||||
class ContentRepositoryConfig(Config):
|
||||
def __init__(self, args):
|
||||
|
||||
@@ -13,9 +13,10 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import nacl.signing
|
||||
import os
|
||||
from ._base import Config, ConfigError
|
||||
import syutil.crypto.signing_key
|
||||
from ._base import Config
|
||||
from syutil.base64util import encode_base64, decode_base64
|
||||
|
||||
|
||||
class ServerConfig(Config):
|
||||
@@ -34,7 +35,7 @@ class ServerConfig(Config):
|
||||
if not args.content_addr:
|
||||
host = args.server_name
|
||||
if ':' not in host:
|
||||
host = "%s:%d" % (host, args.bind_port)
|
||||
host = "%s:%d" % (host, args.bind_port)
|
||||
args.content_addr = "https://%s" % (host,)
|
||||
|
||||
self.content_addr = args.content_addr
|
||||
@@ -69,16 +70,9 @@ class ServerConfig(Config):
|
||||
"content repository")
|
||||
|
||||
def read_signing_key(self, signing_key_path):
|
||||
signing_keys = self.read_file(signing_key_path, "signing_key")
|
||||
try:
|
||||
return syutil.crypto.signing_key.read_signing_keys(
|
||||
signing_keys.splitlines(True)
|
||||
)
|
||||
except Exception:
|
||||
raise ConfigError(
|
||||
"Error reading signing_key."
|
||||
" Try running again with --generate-config"
|
||||
)
|
||||
signing_key_base64 = self.read_file(signing_key_path, "signing_key")
|
||||
signing_key_bytes = decode_base64(signing_key_base64)
|
||||
return nacl.signing.SigningKey(signing_key_bytes)
|
||||
|
||||
@classmethod
|
||||
def generate_config(cls, args, config_dir_path):
|
||||
@@ -92,21 +86,6 @@ class ServerConfig(Config):
|
||||
|
||||
if not os.path.exists(args.signing_key_path):
|
||||
with open(args.signing_key_path, "w") as signing_key_file:
|
||||
syutil.crypto.signing_key.write_signing_keys(
|
||||
signing_key_file,
|
||||
(syutil.crypto.signing_key.generate_singing_key("auto"),),
|
||||
)
|
||||
else:
|
||||
signing_keys = cls.read_file(args.signing_key_path, "signing_key")
|
||||
if len(signing_keys.split("\n")[0].split()) == 1:
|
||||
# handle keys in the old format.
|
||||
key = syutil.crypto.signing_key.decode_signing_key_base64(
|
||||
syutil.crypto.signing_key.NACL_ED25519,
|
||||
"auto",
|
||||
signing_keys.split("\n")[0]
|
||||
)
|
||||
with open(args.signing_key_path, "w") as signing_key_file:
|
||||
syutil.crypto.signing_key.write_signing_keys(
|
||||
signing_key_file,
|
||||
(key,),
|
||||
)
|
||||
key = nacl.signing.SigningKey.generate()
|
||||
signing_key_file.write(encode_base64(key.encode()))
|
||||
|
||||
|
||||
@@ -19,7 +19,7 @@ from OpenSSL import crypto
|
||||
import subprocess
|
||||
import os
|
||||
|
||||
GENERATE_DH_PARAMS = False
|
||||
GENERATE_DH_PARAMS=False
|
||||
|
||||
|
||||
class TlsConfig(Config):
|
||||
|
||||
@@ -1,44 +0,0 @@
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from ._base import Config
|
||||
|
||||
|
||||
class VoipConfig(Config):
|
||||
|
||||
def __init__(self, args):
|
||||
super(VoipConfig, self).__init__(args)
|
||||
self.turn_uris = args.turn_uris
|
||||
self.turn_shared_secret = args.turn_shared_secret
|
||||
self.turn_user_lifetime = args.turn_user_lifetime
|
||||
|
||||
@classmethod
|
||||
def add_arguments(cls, parser):
|
||||
super(VoipConfig, cls).add_arguments(parser)
|
||||
group = parser.add_argument_group("voip")
|
||||
group.add_argument(
|
||||
"--turn-uris", type=str, default=None,
|
||||
help="The public URIs of the TURN server to give to clients"
|
||||
)
|
||||
group.add_argument(
|
||||
"--turn-shared-secret", type=str, default=None,
|
||||
help=(
|
||||
"The shared secret used to compute passwords for the TURN"
|
||||
" server"
|
||||
)
|
||||
)
|
||||
group.add_argument(
|
||||
"--turn-user-lifetime", type=int, default=(1000 * 60 * 60),
|
||||
help="How long generated TURN credentials last, in ms"
|
||||
)
|
||||
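The --turn-shared-secret and --turn-user-lifetime options above follow the common TURN REST-credential convention: a time-limited username plus an HMAC-SHA1 password derived from the shared secret. The exact derivation is not part of this diff, so the sketch below is illustrative only:

import base64
import hashlib
import hmac
import time

def turn_credentials(user_id, turn_shared_secret, turn_user_lifetime=1000 * 60 * 60):
    # The username encodes the expiry time (in seconds) so the TURN server can
    # reject stale credentials; the lifetime is configured in milliseconds.
    expiry = int(time.time() + turn_user_lifetime / 1000)
    username = "%d:%s" % (expiry, user_id)
    password = base64.b64encode(
        hmac.new(turn_shared_secret, username, hashlib.sha1).digest()
    )
    return username, password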
@@ -12,3 +12,4 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
@@ -16,10 +16,6 @@ from twisted.internet import ssl
|
||||
from OpenSSL import SSL
|
||||
from twisted.internet._sslverify import _OpenSSLECCurve, _defaultCurveName
|
||||
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class ServerContextFactory(ssl.ContextFactory):
|
||||
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
|
||||
@@ -35,7 +31,7 @@ class ServerContextFactory(ssl.ContextFactory):
|
||||
_ecCurve = _OpenSSLECCurve(_defaultCurveName)
|
||||
_ecCurve.addECKeyToContext(context)
|
||||
except:
|
||||
logger.exception("Failed to enable eliptic curve for TLS")
|
||||
pass
|
||||
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
|
||||
context.use_certificate(config.tls_certificate)
|
||||
context.use_privatekey(config.tls_private_key)
|
||||
@@ -44,3 +40,4 @@ class ServerContextFactory(ssl.ContextFactory):
|
||||
|
||||
def getContext(self):
|
||||
return self._context
|
||||
|
||||
|
||||
@@ -15,9 +15,9 @@
|
||||
|
||||
|
||||
from twisted.web.http import HTTPClient
|
||||
from twisted.internet.protocol import Factory
|
||||
from twisted.internet import defer, reactor
|
||||
from synapse.http.endpoint import matrix_endpoint
|
||||
from twisted.internet.protocol import ClientFactory
|
||||
from twisted.names.srvconnect import SRVConnector
|
||||
import json
|
||||
import logging
|
||||
|
||||
@@ -30,19 +30,15 @@ def fetch_server_key(server_name, ssl_context_factory):
|
||||
"""Fetch the keys for a remote server."""
|
||||
|
||||
factory = SynapseKeyClientFactory()
|
||||
endpoint = matrix_endpoint(
|
||||
reactor, server_name, ssl_context_factory, timeout=30
|
||||
)
|
||||
|
||||
for i in range(5):
|
||||
try:
|
||||
protocol = yield endpoint.connect(factory)
|
||||
server_response, server_certificate = yield protocol.remote_key
|
||||
defer.returnValue((server_response, server_certificate))
|
||||
return
|
||||
except Exception as e:
|
||||
logger.exception(e)
|
||||
raise IOError("Cannot get key for %s" % server_name)
|
||||
SRVConnector(
|
||||
reactor, "matrix", server_name, factory,
|
||||
protocol="tcp", connectFuncName="connectSSL", defaultPort=443,
|
||||
connectFuncKwArgs=dict(contextFactory=ssl_context_factory)).connect()
|
||||
|
||||
server_key, server_certificate = yield factory.remote_key
|
||||
|
||||
defer.returnValue((server_key, server_certificate))
|
||||
|
||||
|
||||
class SynapseKeyClientError(Exception):
|
||||
@@ -55,46 +51,69 @@ class SynapseKeyClientProtocol(HTTPClient):
|
||||
the server and extracts the X.509 certificate for the remote peer from the
|
||||
SSL connection."""
|
||||
|
||||
timeout = 30
|
||||
|
||||
def __init__(self):
|
||||
self.remote_key = defer.Deferred()
|
||||
|
||||
def connectionMade(self):
|
||||
logger.debug("Connected to %s", self.transport.getHost())
|
||||
self.sendCommand(b"GET", b"/_matrix/key/v1/")
|
||||
self.sendCommand(b"GET", b"/key")
|
||||
self.endHeaders()
|
||||
self.timer = reactor.callLater(
|
||||
self.timeout,
|
||||
self.factory.timeout_seconds,
|
||||
self.on_timeout
|
||||
)
|
||||
|
||||
def handleStatus(self, version, status, message):
|
||||
if status != b"200":
|
||||
#logger.info("Non-200 response from %s: %s %s",
|
||||
# self.transport.getHost(), status, message)
|
||||
logger.info("Non-200 response from %s: %s %s",
|
||||
self.transport.getHost(), status, message)
|
||||
self.transport.abortConnection()
|
||||
|
||||
def handleResponse(self, response_body_bytes):
|
||||
try:
|
||||
json_response = json.loads(response_body_bytes)
|
||||
except ValueError:
|
||||
#logger.info("Invalid JSON response from %s",
|
||||
# self.transport.getHost())
|
||||
logger.info("Invalid JSON response from %s",
|
||||
self.transport.getHost())
|
||||
self.transport.abortConnection()
|
||||
return
|
||||
|
||||
certificate = self.transport.getPeerCertificate()
|
||||
self.remote_key.callback((json_response, certificate))
|
||||
self.factory.on_remote_key((json_response, certificate))
|
||||
self.transport.abortConnection()
|
||||
self.timer.cancel()
|
||||
|
||||
def on_timeout(self):
|
||||
logger.debug("Timeout waiting for response from %s",
|
||||
self.transport.getHost())
|
||||
self.remote_key.errback(IOError("Timeout waiting for response"))
|
||||
self.transport.abortConnection()
|
||||
|
||||
|
||||
class SynapseKeyClientFactory(Factory):
|
||||
class SynapseKeyClientFactory(ClientFactory):
|
||||
protocol = SynapseKeyClientProtocol
|
||||
max_retries = 5
|
||||
timeout_seconds = 30
|
||||
|
||||
def __init__(self):
|
||||
self.succeeded = False
|
||||
self.retries = 0
|
||||
self.remote_key = defer.Deferred()
|
||||
|
||||
def on_remote_key(self, key):
|
||||
self.succeeded = True
|
||||
self.remote_key.callback(key)
|
||||
|
||||
def retry_connection(self, connector):
|
||||
self.retries += 1
|
||||
if self.retries < self.max_retries:
|
||||
connector.connector = None
|
||||
connector.connect()
|
||||
else:
|
||||
self.remote_key.errback(
|
||||
SynapseKeyClientError("Max retries exceeded"))
|
||||
|
||||
def clientConnectionFailed(self, connector, reason):
|
||||
logger.info("Connection failed %s", reason)
|
||||
self.retry_connection(connector)
|
||||
|
||||
def clientConnectionLost(self, connector, reason):
|
||||
logger.info("Connection lost %s", reason)
|
||||
if not self.succeeded:
|
||||
self.retry_connection(connector)
|
||||
|
||||
@@ -1,155 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from synapse.crypto.keyclient import fetch_server_key
|
||||
from twisted.internet import defer
|
||||
from syutil.crypto.jsonsign import verify_signed_json, signature_ids
|
||||
from syutil.crypto.signing_key import (
|
||||
is_signing_algorithm_supported, decode_verify_key_bytes
|
||||
)
|
||||
from syutil.base64util import decode_base64, encode_base64
|
||||
from synapse.api.errors import SynapseError, Codes
|
||||
|
||||
from OpenSSL import crypto
|
||||
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class Keyring(object):
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
self.clock = hs.get_clock()
|
||||
self.hs = hs
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def verify_json_for_server(self, server_name, json_object):
|
||||
logger.debug("Verifying for %s", server_name)
|
||||
key_ids = signature_ids(json_object, server_name)
|
||||
if not key_ids:
|
||||
raise SynapseError(
|
||||
400,
|
||||
"Not signed with a supported algorithm",
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
try:
|
||||
verify_key = yield self.get_server_verify_key(server_name, key_ids)
|
||||
except IOError:
|
||||
raise SynapseError(
|
||||
502,
|
||||
"Error downloading keys for %s" % (server_name,),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
except:
|
||||
raise SynapseError(
|
||||
401,
|
||||
"No key for %s with id %s" % (server_name, key_ids),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
try:
|
||||
verify_signed_json(json_object, server_name, verify_key)
|
||||
except:
|
||||
raise SynapseError(
|
||||
401,
|
||||
"Invalid signature for server %s with key %s:%s" % (
|
||||
server_name, verify_key.alg, verify_key.version
|
||||
),
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_server_verify_key(self, server_name, key_ids):
|
||||
"""Finds a verification key for the server with one of the key ids.
|
||||
Args:
|
||||
server_name (str): The name of the server to fetch a key for.
|
||||
key_ids (list of str): The key_ids to check for.
|
||||
"""
|
||||
|
||||
# Check the datastore to see if we have one cached.
|
||||
cached = yield self.store.get_server_verify_keys(server_name, key_ids)
|
||||
|
||||
if cached:
|
||||
defer.returnValue(cached[0])
|
||||
return
|
||||
|
||||
# Try to fetch the key from the remote server.
|
||||
# TODO(markjh): Ratelimit requests to a given server.
|
||||
|
||||
(response, tls_certificate) = yield fetch_server_key(
|
||||
server_name, self.hs.tls_context_factory
|
||||
)
|
||||
|
||||
# Check the response.
|
||||
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1, tls_certificate
|
||||
)
|
||||
|
||||
if ("signatures" not in response
|
||||
or server_name not in response["signatures"]):
|
||||
raise ValueError("Key response not signed by remote server")
|
||||
|
||||
if "tls_certificate" not in response:
|
||||
raise ValueError("Key response missing TLS certificate")
|
||||
|
||||
tls_certificate_b64 = response["tls_certificate"]
|
||||
|
||||
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
|
||||
raise ValueError("TLS certificate doesn't match")
|
||||
|
||||
verify_keys = {}
|
||||
for key_id, key_base64 in response["verify_keys"].items():
|
||||
if is_signing_algorithm_supported(key_id):
|
||||
key_bytes = decode_base64(key_base64)
|
||||
verify_key = decode_verify_key_bytes(key_id, key_bytes)
|
||||
verify_keys[key_id] = verify_key
|
||||
|
||||
for key_id in response["signatures"][server_name]:
|
||||
if key_id not in response["verify_keys"]:
|
||||
raise ValueError(
|
||||
"Key response must include verification keys for all"
|
||||
" signatures"
|
||||
)
|
||||
if key_id in verify_keys:
|
||||
verify_signed_json(
|
||||
response,
|
||||
server_name,
|
||||
verify_keys[key_id]
|
||||
)
|
||||
|
||||
# Cache the result in the datastore.
|
||||
|
||||
time_now_ms = self.clock.time_msec()
|
||||
|
||||
self.store.store_server_certificate(
|
||||
server_name,
|
||||
server_name,
|
||||
time_now_ms,
|
||||
tls_certificate,
|
||||
)
|
||||
|
||||
for key_id, key in verify_keys.items():
|
||||
self.store.store_server_verify_key(
|
||||
server_name, server_name, time_now_ms, key
|
||||
)
|
||||
|
||||
for key_id in key_ids:
|
||||
if key_id in verify_keys:
|
||||
defer.returnValue(verify_keys[key_id])
|
||||
return
|
||||
|
||||
raise ValueError("No verification key found for given key ids")
|
||||
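The verification path above ultimately reduces to an Ed25519 check over the canonical JSON with the signature-related keys removed. A rough sketch of that core step using PyNaCl directly (illustrative only; Synapse itself goes through syutil's verify_signed_json, and the canonical-JSON encoding here is approximate):

import json
from base64 import b64decode

import nacl.signing

def _unpadded_b64decode(s):
    # Matrix-style base64 omits padding; restore it before decoding.
    return b64decode(s + "=" * (-len(s) % 4))

def check_ed25519_signature(json_object, server_name, key_id, verify_key_b64):
    # Signatures cover the canonical JSON with "signatures" and "unsigned" removed.
    obj = dict(json_object)
    signatures = obj.pop("signatures", {})
    obj.pop("unsigned", None)
    message = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    verify_key = nacl.signing.VerifyKey(_unpadded_b64decode(verify_key_b64))
    sig = _unpadded_b64decode(signatures[server_name][key_id])
    verify_key.verify(message, sig)  # raises BadSignatureError on mismatch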
synapse/crypto/keyserver.py (new file, 111 lines)
@@ -0,0 +1,111 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
from twisted.internet import reactor, ssl
|
||||
from twisted.web import server
|
||||
from twisted.web.resource import Resource
|
||||
from twisted.python.log import PythonLoggingObserver
|
||||
|
||||
from synapse.crypto.resource.key import LocalKey
|
||||
from synapse.crypto.config import load_config
|
||||
|
||||
from syutil.base64util import decode_base64
|
||||
|
||||
from OpenSSL import crypto, SSL
|
||||
|
||||
import logging
|
||||
import nacl.signing
|
||||
import sys
|
||||
|
||||
|
||||
class KeyServerSSLContextFactory(ssl.ContextFactory):
|
||||
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
|
||||
connections and to make connections to remote servers."""
|
||||
|
||||
def __init__(self, key_server):
|
||||
self._context = SSL.Context(SSL.SSLv23_METHOD)
|
||||
self.configure_context(self._context, key_server)
|
||||
|
||||
@staticmethod
|
||||
def configure_context(context, key_server):
|
||||
context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
|
||||
context.use_certificate(key_server.tls_certificate)
|
||||
context.use_privatekey(key_server.tls_private_key)
|
||||
context.load_tmp_dh(key_server.tls_dh_params_path)
|
||||
context.set_cipher_list("!ADH:HIGH+kEDH:!AECDH:HIGH+kEECDH")
|
||||
|
||||
def getContext(self):
|
||||
return self._context
|
||||
|
||||
|
||||
class KeyServer(object):
|
||||
"""An HTTPS server serving LocalKey and RemoteKey resources."""
|
||||
|
||||
def __init__(self, server_name, tls_certificate_path, tls_private_key_path,
|
||||
tls_dh_params_path, signing_key_path, bind_host, bind_port):
|
||||
self.server_name = server_name
|
||||
self.tls_certificate = self.read_tls_certificate(tls_certificate_path)
|
||||
self.tls_private_key = self.read_tls_private_key(tls_private_key_path)
|
||||
self.tls_dh_params_path = tls_dh_params_path
|
||||
self.signing_key = self.read_signing_key(signing_key_path)
|
||||
self.bind_host = bind_host
|
||||
self.bind_port = int(bind_port)
|
||||
self.ssl_context_factory = KeyServerSSLContextFactory(self)
|
||||
|
||||
@staticmethod
|
||||
def read_tls_certificate(cert_path):
|
||||
with open(cert_path) as cert_file:
|
||||
cert_pem = cert_file.read()
|
||||
return crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
|
||||
|
||||
@staticmethod
|
||||
def read_tls_private_key(private_key_path):
|
||||
with open(private_key_path) as private_key_file:
|
||||
private_key_pem = private_key_file.read()
|
||||
return crypto.load_privatekey(crypto.FILETYPE_PEM, private_key_pem)
|
||||
|
||||
@staticmethod
|
||||
def read_signing_key(signing_key_path):
|
||||
with open(signing_key_path) as signing_key_file:
|
||||
signing_key_b64 = signing_key_file.read()
|
||||
signing_key_bytes = decode_base64(signing_key_b64)
|
||||
return nacl.signing.SigningKey(signing_key_bytes)
|
||||
|
||||
def run(self):
|
||||
root = Resource()
|
||||
root.putChild("key", LocalKey(self))
|
||||
site = server.Site(root)
|
||||
reactor.listenSSL(
|
||||
self.bind_port,
|
||||
site,
|
||||
self.ssl_context_factory,
|
||||
interface=self.bind_host
|
||||
)
|
||||
|
||||
logging.basicConfig(level=logging.DEBUG)
|
||||
observer = PythonLoggingObserver()
|
||||
observer.start()
|
||||
|
||||
reactor.run()
|
||||
|
||||
|
||||
def main():
|
||||
key_server = KeyServer(**load_config(__doc__, sys.argv[1:]))
|
||||
key_server.run()
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
synapse/crypto/resource/__init__.py (new file, 15 lines)
@@ -0,0 +1,15 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
synapse/crypto/resource/key.py (new file, 161 lines)
@@ -0,0 +1,161 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
from twisted.web.resource import Resource
|
||||
from twisted.web.server import NOT_DONE_YET
|
||||
from twisted.internet import defer
|
||||
from synapse.http.server import respond_with_json_bytes
|
||||
from synapse.crypto.keyclient import fetch_server_key
|
||||
from syutil.crypto.jsonsign import sign_json, verify_signed_json
|
||||
from syutil.base64util import encode_base64, decode_base64
|
||||
from syutil.jsonutil import encode_canonical_json
|
||||
from OpenSSL import crypto
|
||||
from nacl.signing import VerifyKey
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class LocalKey(Resource):
|
||||
"""HTTP resource containing encoding the TLS X.509 certificate and NACL
|
||||
signature verification keys for this server::
|
||||
|
||||
GET /key HTTP/1.1
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Content-Type: application/json
|
||||
{
|
||||
"server_name": "this.server.example.com"
|
||||
"signature_verify_key": # base64 encoded NACL verification key.
|
||||
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
|
||||
"signatures": {
|
||||
"this.server.example.com": # NACL signature for this server.
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
def __init__(self, key_server):
|
||||
self.key_server = key_server
|
||||
self.response_body = encode_canonical_json(
|
||||
self.response_json_object(key_server)
|
||||
)
|
||||
Resource.__init__(self)
|
||||
|
||||
@staticmethod
|
||||
def response_json_object(key_server):
|
||||
verify_key_bytes = key_server.signing_key.verify_key.encode()
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1,
|
||||
key_server.tls_certificate
|
||||
)
|
||||
json_object = {
|
||||
u"server_name": key_server.server_name,
|
||||
u"signature_verify_key": encode_base64(verify_key_bytes),
|
||||
u"tls_certificate": encode_base64(x509_certificate_bytes)
|
||||
}
|
||||
signed_json = sign_json(
|
||||
json_object,
|
||||
key_server.server_name,
|
||||
key_server.signing_key
|
||||
)
|
||||
return signed_json
|
||||
|
||||
def getChild(self, name, request):
|
||||
logger.info("getChild %s %s", name, request)
|
||||
if name == '':
|
||||
return self
|
||||
else:
|
||||
return RemoteKey(name, self.key_server)
|
||||
|
||||
def render_GET(self, request):
|
||||
return respond_with_json_bytes(request, 200, self.response_body)
|
||||
|
||||
|
||||
class RemoteKey(Resource):
|
||||
"""HTTP resource for retreiving the TLS certificate and NACL signature
|
||||
verification keys for another server. Checks that the reported X.509 TLS
|
||||
certificate matches the one used in the HTTPS connection. Checks that the
|
||||
NACL signature for the remote server is valid. Returns JSON signed by both
|
||||
the remote server and by this server.
|
||||
|
||||
GET /key/remote.server.example.com HTTP/1.1
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Content-Type: application/json
|
||||
{
|
||||
"server_name": "remote.server.example.com"
|
||||
"signature_verify_key": # base64 encoded NACL verification key.
|
||||
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
|
||||
"signatures": {
|
||||
"remote.server.example.com": # NACL signature for remote server.
|
||||
"this.server.example.com": # NACL signature for this server.
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
isLeaf = True
|
||||
|
||||
def __init__(self, server_name, key_server):
|
||||
self.server_name = server_name
|
||||
self.key_server = key_server
|
||||
Resource.__init__(self)
|
||||
|
||||
def render_GET(self, request):
|
||||
self._async_render_GET(request)
|
||||
return NOT_DONE_YET
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _async_render_GET(self, request):
|
||||
try:
|
||||
server_keys, certificate = yield fetch_server_key(
|
||||
self.server_name,
|
||||
self.key_server.ssl_context_factory
|
||||
)
|
||||
|
||||
resp_server_name = server_keys[u"server_name"]
|
||||
verify_key_b64 = server_keys[u"signature_verify_key"]
|
||||
tls_certificate_b64 = server_keys[u"tls_certificate"]
|
||||
verify_key = VerifyKey(decode_base64(verify_key_b64))
|
||||
|
||||
if resp_server_name != self.server_name:
|
||||
raise ValueError("Wrong server name '%s' != '%s'" %
|
||||
(resp_server_name, self.server_name))
|
||||
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1,
|
||||
certificate
|
||||
)
|
||||
|
||||
if encode_base64(x509_certificate_bytes) != tls_certificate_b64:
|
||||
raise ValueError("TLS certificate doesn't match")
|
||||
|
||||
verify_signed_json(server_keys, self.server_name, verify_key)
|
||||
|
||||
signed_json = sign_json(
|
||||
server_keys,
|
||||
self.key_server.server_name,
|
||||
self.key_server.signing_key
|
||||
)
|
||||
|
||||
json_bytes = encode_canonical_json(signed_json)
|
||||
respond_with_json_bytes(request, 200, json_bytes)
|
||||
|
||||
except Exception as e:
|
||||
json_bytes = encode_canonical_json({
|
||||
u"error": {u"code": 502, u"message": e.message}
|
||||
})
|
||||
respond_with_json_bytes(request, 502, json_bytes)
|
||||
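As a usage illustration of the /key resource documented above, here is a rough standalone client that fetches the JSON and checks the reported certificate against the one actually presented on the connection. It uses only the standard library; the hostname is a placeholder and the unpadded-base64 handling is an assumption about the server's encoding (no CA validation is attempted, since the point is the in-band pinning check):

import base64
import json
import socket
import ssl

def fetch_and_check_key(host, port=443):
    sock = ssl.wrap_socket(socket.create_connection((host, port)))
    try:
        sock.sendall("GET /key HTTP/1.0\r\nHost: %s\r\n\r\n" % host)
        raw = ""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            raw += chunk
        der_cert = sock.getpeercert(binary_form=True)
    finally:
        sock.close()

    body = raw.split("\r\n\r\n", 1)[1]
    keys = json.loads(body)

    reported = keys["tls_certificate"].rstrip("=")
    presented = base64.b64encode(der_cert).rstrip("=")
    if reported != presented:
        raise ValueError("TLS certificate doesn't match")
    return keys

# keys = fetch_and_check_key("remote.server.example.com")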
@@ -22,7 +22,6 @@ from .transport import TransportLayer
|
||||
|
||||
def initialize_http_replication(homeserver):
|
||||
transport = TransportLayer(
|
||||
homeserver,
|
||||
homeserver.hostname,
|
||||
server=homeserver.get_resource_for_federation(),
|
||||
client=homeserver.get_http_client()
|
||||
|
||||
@@ -96,7 +96,7 @@ class PduCodec(object):
|
||||
if k not in ["event_id", "room_id", "type", "prev_events"]
|
||||
})
|
||||
|
||||
if "origin_server_ts" not in kwargs:
|
||||
kwargs["origin_server_ts"] = int(self.clock.time_msec())
|
||||
if "ts" not in kwargs:
|
||||
kwargs["ts"] = int(self.clock.time_msec())
|
||||
|
||||
return Pdu(**kwargs)
|
||||
|
||||
@@ -157,7 +157,7 @@ class TransactionActions(object):
|
||||
transaction.prev_ids = yield self.store.prep_send_transaction(
|
||||
transaction.transaction_id,
|
||||
transaction.destination,
|
||||
transaction.origin_server_ts,
|
||||
transaction.ts,
|
||||
[(p["pdu_id"], p["origin"]) for p in transaction.pdus]
|
||||
)
|
||||
|
||||
|
||||
@@ -159,8 +159,7 @@ class ReplicationLayer(object):
|
||||
return defer.succeed(None)
|
||||
|
||||
@log_function
|
||||
def make_query(self, destination, query_type, args,
|
||||
retry_on_dns_fail=True):
|
||||
def make_query(self, destination, query_type, args):
|
||||
"""Sends a federation Query to a remote homeserver of the given type
|
||||
and arguments.
|
||||
|
||||
@@ -175,9 +174,7 @@ class ReplicationLayer(object):
|
||||
a Deferred which will eventually yield a JSON object from the
|
||||
response
|
||||
"""
|
||||
return self.transport_layer.make_query(
|
||||
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail
|
||||
)
|
||||
return self.transport_layer.make_query(destination, query_type, args)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
@@ -294,13 +291,6 @@ class ReplicationLayer(object):
|
||||
def on_incoming_transaction(self, transaction_data):
|
||||
transaction = Transaction(**transaction_data)
|
||||
|
||||
for p in transaction.pdus:
|
||||
if "age" in p:
|
||||
p["age_ts"] = int(self._clock.time_msec()) - int(p["age"])
|
||||
del p["age"]
|
||||
|
||||
pdu_list = [Pdu(**p) for p in transaction.pdus]
|
||||
|
||||
logger.debug("[%s] Got transaction", transaction.transaction_id)
|
||||
|
||||
response = yield self.transaction_actions.have_responded(transaction)
|
||||
@@ -313,13 +303,15 @@ class ReplicationLayer(object):
|
||||
|
||||
logger.debug("[%s] Transacition is new", transaction.transaction_id)
|
||||
|
||||
pdu_list = [Pdu(**p) for p in transaction.pdus]
|
||||
|
||||
dl = []
|
||||
for pdu in pdu_list:
|
||||
dl.append(self._handle_new_pdu(pdu))
|
||||
|
||||
if hasattr(transaction, "edus"):
|
||||
for edu in [Edu(**x) for x in transaction.edus]:
|
||||
self.received_edu(transaction.origin, edu.edu_type, edu.content)
|
||||
self.received_edu(edu.origin, edu.edu_type, edu.content)
|
||||
|
||||
results = yield defer.DeferredList(dl)
|
||||
|
||||
@@ -413,15 +405,10 @@ class ReplicationLayer(object):
|
||||
"""Returns a new Transaction containing the given PDUs suitable for
|
||||
transmission.
|
||||
"""
|
||||
pdus = [p.get_dict() for p in pdu_list]
|
||||
for p in pdus:
|
||||
if "age_ts" in pdus:
|
||||
p["age"] = int(self.clock.time_msec()) - p["age_ts"]
|
||||
|
||||
return Transaction(
|
||||
pdus=[p.get_dict() for p in pdu_list],
|
||||
origin=self.server_name,
|
||||
pdus=pdus,
|
||||
origin_server_ts=int(self._clock.time_msec()),
|
||||
ts=int(self._clock.time_msec()),
|
||||
destination=None,
|
||||
)
|
||||
|
||||
@@ -492,6 +479,7 @@ class _TransactionQueue(object):
|
||||
"""
|
||||
|
||||
def __init__(self, hs, transaction_actions, transport_layer):
|
||||
|
||||
self.server_name = hs.hostname
|
||||
self.transaction_actions = transaction_actions
|
||||
self.transport_layer = transport_layer
|
||||
@@ -589,8 +577,8 @@ class _TransactionQueue(object):
|
||||
logger.debug("TX [%s] Persisting transaction...", destination)
|
||||
|
||||
transaction = Transaction.create_new(
|
||||
origin_server_ts=self._clock.time_msec(),
|
||||
transaction_id=str(self._next_txn_id),
|
||||
ts=self._clock.time_msec(),
|
||||
transaction_id=self._next_txn_id,
|
||||
origin=self.server_name,
|
||||
destination=destination,
|
||||
pdus=pdus,
|
||||
@@ -605,20 +593,8 @@ class _TransactionQueue(object):
|
||||
logger.debug("TX [%s] Sending transaction...", destination)
|
||||
|
||||
# Actually send the transaction
|
||||
|
||||
# FIXME (erikj): This is a bit of a hack to make the Pdu age
|
||||
# keys work
|
||||
def json_data_cb():
|
||||
data = transaction.get_dict()
|
||||
now = int(self._clock.time_msec())
|
||||
if "pdus" in data:
|
||||
for p in data["pdus"]:
|
||||
if "age_ts" in p:
|
||||
p["age"] = now - int(p["age_ts"])
|
||||
return data
|
||||
|
||||
code, response = yield self.transport_layer.send_transaction(
|
||||
transaction, json_data_cb
|
||||
transaction
|
||||
)
|
||||
|
||||
logger.debug("TX [%s] Sent transaction", destination)
|
||||
|
||||
@@ -24,7 +24,6 @@ over a different (albeit still reliable) protocol.
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.urls import FEDERATION_PREFIX as PREFIX
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.util.logutils import log_function
|
||||
|
||||
import logging
|
||||
@@ -55,7 +54,7 @@ class TransportLayer(object):
|
||||
we receive data.
|
||||
"""
|
||||
|
||||
def __init__(self, homeserver, server_name, server, client):
|
||||
def __init__(self, server_name, server, client):
|
||||
"""
|
||||
Args:
|
||||
server_name (str): Local home server host
|
||||
@@ -64,7 +63,6 @@ class TransportLayer(object):
|
||||
client (synapse.protocol.http.HttpClient): the http client used to
|
||||
send requests
|
||||
"""
|
||||
self.keyring = homeserver.get_keyring()
|
||||
self.server_name = server_name
|
||||
self.server = server
|
||||
self.client = client
|
||||
@@ -146,7 +144,7 @@ class TransportLayer(object):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def send_transaction(self, transaction, json_data_callback=None):
|
||||
def send_transaction(self, transaction):
|
||||
""" Sends the given Transaction to it's destination
|
||||
|
||||
Args:
|
||||
@@ -165,15 +163,12 @@ class TransportLayer(object):
|
||||
if transaction.destination == self.server_name:
|
||||
raise RuntimeError("Transport layer cannot send to itself!")
|
||||
|
||||
# FIXME: This is only used by the tests. The actual json sent is
|
||||
# generated by the json_data_callback.
|
||||
json_data = transaction.get_dict()
|
||||
data = transaction.get_dict()
|
||||
|
||||
code, response = yield self.client.put_json(
|
||||
transaction.destination,
|
||||
path=PREFIX + "/send/%s/" % transaction.transaction_id,
|
||||
data=json_data,
|
||||
json_data_callback=json_data_callback,
|
||||
data=data
|
||||
)
|
||||
|
||||
logger.debug(
|
||||
@@ -185,93 +180,17 @@ class TransportLayer(object):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def make_query(self, destination, query_type, args, retry_on_dns_fail):
|
||||
def make_query(self, destination, query_type, args):
|
||||
path = PREFIX + "/query/%s" % query_type
|
||||
|
||||
response = yield self.client.get_json(
|
||||
destination=destination,
|
||||
path=path,
|
||||
args=args,
|
||||
retry_on_dns_fail=retry_on_dns_fail,
|
||||
args=args
|
||||
)
|
||||
|
||||
defer.returnValue(response)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _authenticate_request(self, request):
|
||||
json_request = {
|
||||
"method": request.method,
|
||||
"uri": request.uri,
|
||||
"destination": self.server_name,
|
||||
"signatures": {},
|
||||
}
|
||||
|
||||
content = None
|
||||
origin = None
|
||||
|
||||
if request.method == "PUT":
|
||||
#TODO: Handle other method types? other content types?
|
||||
try:
|
||||
content_bytes = request.content.read()
|
||||
content = json.loads(content_bytes)
|
||||
json_request["content"] = content
|
||||
except:
|
||||
raise SynapseError(400, "Unable to parse JSON", Codes.BAD_JSON)
|
||||
|
||||
def parse_auth_header(header_str):
|
||||
try:
|
||||
params = auth.split(" ")[1].split(",")
|
||||
param_dict = dict(kv.split("=") for kv in params)
|
||||
def strip_quotes(value):
|
||||
if value.startswith("\""):
|
||||
return value[1:-1]
|
||||
else:
|
||||
return value
|
||||
origin = strip_quotes(param_dict["origin"])
|
||||
key = strip_quotes(param_dict["key"])
|
||||
sig = strip_quotes(param_dict["sig"])
|
||||
return (origin, key, sig)
|
||||
except:
|
||||
raise SynapseError(
|
||||
400, "Malformed Authorization header", Codes.UNAUTHORIZED
|
||||
)
|
||||
|
||||
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
|
||||
|
||||
if not auth_headers:
|
||||
raise SynapseError(
|
||||
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
for auth in auth_headers:
|
||||
if auth.startswith("X-Matrix"):
|
||||
(origin, key, sig) = parse_auth_header(auth)
|
||||
json_request["origin"] = origin
|
||||
json_request["signatures"].setdefault(origin,{})[key] = sig
|
||||
|
||||
if not json_request["signatures"]:
|
||||
raise SynapseError(
|
||||
401, "Missing Authorization headers", Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
yield self.keyring.verify_json_for_server(origin, json_request)
|
||||
|
||||
defer.returnValue((origin, content))
|
||||
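The parser above expects federation requests to carry an X-Matrix Authorization header with origin, key and sig parameters separated by commas and no spaces. A small illustrative builder for that header format (the signing step itself is out of scope here, so the signature is passed in already base64-encoded):

def build_x_matrix_auth_header(origin, key_id, sig_b64):
    # Produces e.g.:
    #   X-Matrix origin=example.com,key="ed25519:auto",sig="<base64 signature>"
    return "X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (origin, key_id, sig_b64)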
|
||||
def _with_authentication(self, handler):
|
||||
@defer.inlineCallbacks
|
||||
def new_handler(request, *args, **kwargs):
|
||||
try:
|
||||
(origin, content) = yield self._authenticate_request(request)
|
||||
response = yield handler(
|
||||
origin, content, request.args, *args, **kwargs
|
||||
)
|
||||
except:
|
||||
logger.exception("_authenticate_request failed")
|
||||
raise
|
||||
defer.returnValue(response)
|
||||
return new_handler
|
||||
|
||||
@log_function
|
||||
def register_received_handler(self, handler):
|
||||
""" Register a handler that will be fired when we receive data.
|
||||
@@ -285,7 +204,7 @@ class TransportLayer(object):
|
||||
self.server.register_path(
|
||||
"PUT",
|
||||
re.compile("^" + PREFIX + "/send/([^/]*)/$"),
|
||||
self._with_authentication(self._on_send_request)
|
||||
self._on_send_request
|
||||
)
|
||||
|
||||
@log_function
|
||||
@@ -303,9 +222,9 @@ class TransportLayer(object):
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/pull/$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query:
|
||||
handler.on_pull_request(query["origin"][0], query["v"])
|
||||
lambda request: handler.on_pull_request(
|
||||
request.args["origin"][0],
|
||||
request.args["v"]
|
||||
)
|
||||
)
|
||||
|
||||
@@ -314,9 +233,8 @@ class TransportLayer(object):
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/pdu/([^/]*)/([^/]*)/$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, pdu_origin, pdu_id:
|
||||
handler.on_pdu_request(pdu_origin, pdu_id)
|
||||
lambda request, pdu_origin, pdu_id: handler.on_pdu_request(
|
||||
pdu_origin, pdu_id
|
||||
)
|
||||
)
|
||||
|
||||
@@ -324,47 +242,38 @@ class TransportLayer(object):
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/state/([^/]*)/$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
handler.on_context_state_request(context)
|
||||
lambda request, context: handler.on_context_state_request(
|
||||
context
|
||||
)
|
||||
)
|
||||
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/backfill/([^/]*)/$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
self._on_backfill_request(
|
||||
context, query["v"], query["limit"]
|
||||
)
|
||||
lambda request, context: self._on_backfill_request(
|
||||
context, request.args["v"],
|
||||
request.args["limit"]
|
||||
)
|
||||
)
|
||||
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/context/([^/]*)/$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, context:
|
||||
handler.on_context_pdus_request(context)
|
||||
)
|
||||
lambda request, context: handler.on_context_pdus_request(context)
|
||||
)
|
||||
|
||||
# This is when we receive a server-server Query
|
||||
self.server.register_path(
|
||||
"GET",
|
||||
re.compile("^" + PREFIX + "/query/([^/]*)$"),
|
||||
self._with_authentication(
|
||||
lambda origin, content, query, query_type:
|
||||
handler.on_query_request(
|
||||
query_type, {k: v[0] for k, v in query.items()}
|
||||
)
|
||||
lambda request, query_type: handler.on_query_request(
|
||||
query_type, {k: v[0] for k, v in request.args.items()}
|
||||
)
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def _on_send_request(self, origin, content, query, transaction_id):
|
||||
def _on_send_request(self, request, transaction_id):
|
||||
""" Called on PUT /send/<transaction_id>/
|
||||
|
||||
Args:
|
||||
@@ -379,7 +288,12 @@ class TransportLayer(object):
|
||||
"""
|
||||
# Parse the request
|
||||
try:
|
||||
transaction_data = content
|
||||
data = request.content.read()
|
||||
|
||||
l = data[:20].encode("string_escape")
|
||||
logger.debug("Got data: \"%s\"", l)
|
||||
|
||||
transaction_data = json.loads(data)
|
||||
|
||||
logger.debug(
|
||||
"Decoded %s: %s",
|
||||
@@ -401,13 +315,9 @@ class TransportLayer(object):
|
||||
defer.returnValue((400, {"error": "Invalid transaction"}))
|
||||
return
|
||||
|
||||
try:
|
||||
code, response = yield self.received_handler.on_incoming_transaction(
|
||||
transaction_data
|
||||
)
|
||||
except:
|
||||
logger.exception("on_incoming_transaction failed")
|
||||
raise
|
||||
code, response = yield self.received_handler.on_incoming_transaction(
|
||||
transaction_data
|
||||
)
|
||||
|
||||
defer.returnValue((code, response))
|
||||
|
||||
|
||||
@@ -40,7 +40,7 @@ class Pdu(JsonEncodedObject):
|
||||
|
||||
{
|
||||
"pdu_id": "78c",
|
||||
"origin_server_ts": 1404835423000,
|
||||
"ts": 1404835423000,
|
||||
"origin": "bar",
|
||||
"prev_ids": [
|
||||
["23b", "foo"],
|
||||
@@ -55,7 +55,7 @@ class Pdu(JsonEncodedObject):
|
||||
"pdu_id",
|
||||
"context",
|
||||
"origin",
|
||||
"origin_server_ts",
|
||||
"ts",
|
||||
"pdu_type",
|
||||
"destinations",
|
||||
"transaction_id",
|
||||
@@ -69,7 +69,6 @@ class Pdu(JsonEncodedObject):
|
||||
"prev_state_id",
|
||||
"prev_state_origin",
|
||||
"required_power_level",
|
||||
"user_id",
|
||||
]
|
||||
|
||||
internal_keys = [
|
||||
@@ -82,7 +81,7 @@ class Pdu(JsonEncodedObject):
|
||||
"pdu_id",
|
||||
"context",
|
||||
"origin",
|
||||
"origin_server_ts",
|
||||
"ts",
|
||||
"pdu_type",
|
||||
"content",
|
||||
]
|
||||
@@ -118,7 +117,6 @@ class Pdu(JsonEncodedObject):
|
||||
"""
|
||||
if pdu_tuple:
|
||||
d = copy.copy(pdu_tuple.pdu_entry._asdict())
|
||||
d["origin_server_ts"] = d.pop("ts")
|
||||
|
||||
d["content"] = json.loads(d["content_json"])
|
||||
del d["content_json"]
|
||||
@@ -157,15 +155,11 @@ class Edu(JsonEncodedObject):
|
||||
]
|
||||
|
||||
required_keys = [
|
||||
"origin",
|
||||
"destination",
|
||||
"edu_type",
|
||||
]
|
||||
|
||||
# TODO: SYN-103: Remove "origin" and "destination" keys.
|
||||
# internal_keys = [
|
||||
# "origin",
|
||||
# "destination",
|
||||
# ]
|
||||
|
||||
|
||||
class Transaction(JsonEncodedObject):
|
||||
""" A transaction is a list of Pdus and Edus to be sent to a remote home
|
||||
@@ -187,12 +181,10 @@ class Transaction(JsonEncodedObject):
|
||||
"transaction_id",
|
||||
"origin",
|
||||
"destination",
|
||||
"origin_server_ts",
|
||||
"ts",
|
||||
"previous_ids",
|
||||
"pdus",
|
||||
"edus",
|
||||
"transaction_id",
|
||||
"destination",
|
||||
]
|
||||
|
||||
internal_keys = [
|
||||
@@ -204,7 +196,7 @@ class Transaction(JsonEncodedObject):
|
||||
"transaction_id",
|
||||
"origin",
|
||||
"destination",
|
||||
"origin_server_ts",
|
||||
"ts",
|
||||
"pdus",
|
||||
]
|
||||
|
||||
@@ -226,10 +218,10 @@ class Transaction(JsonEncodedObject):
|
||||
@staticmethod
|
||||
def create_new(pdus, **kwargs):
|
||||
""" Used to create a new transaction. Will auto fill out
|
||||
transaction_id and origin_server_ts keys.
|
||||
transaction_id and ts keys.
|
||||
"""
|
||||
if "origin_server_ts" not in kwargs:
|
||||
raise KeyError("Require 'origin_server_ts' to construct a Transaction")
|
||||
if "ts" not in kwargs:
|
||||
raise KeyError("Require 'ts' to construct a Transaction")
|
||||
if "transaction_id" not in kwargs:
|
||||
raise KeyError(
|
||||
"Require 'transaction_id' to construct a Transaction"
|
||||
|
||||
@@ -25,7 +25,6 @@ from .profile import ProfileHandler
|
||||
from .presence import PresenceHandler
|
||||
from .directory import DirectoryHandler
|
||||
from .typing import TypingNotificationHandler
|
||||
from .admin import AdminHandler
|
||||
|
||||
|
||||
class Handlers(object):
|
||||
@@ -50,4 +49,3 @@ class Handlers(object):
|
||||
self.login_handler = LoginHandler(hs)
|
||||
self.directory_handler = DirectoryHandler(hs)
|
||||
self.typing_notification_handler = TypingNotificationHandler(hs)
|
||||
self.admin_handler = AdminHandler(hs)
|
||||
|
||||
@@ -42,6 +42,9 @@ class BaseHandler(object):
|
||||
retry_after_ms=int(1000*(time_allowed - time_now)),
|
||||
)
|
||||
|
||||
|
||||
class BaseRoomHandler(BaseHandler):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _on_new_room_event(self, event, snapshot, extra_destinations=[],
|
||||
extra_users=[]):
|
||||
|
||||
@@ -1,62 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class AdminHandler(BaseHandler):
|
||||
|
||||
def __init__(self, hs):
|
||||
super(AdminHandler, self).__init__(hs)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_whois(self, user):
|
||||
res = yield self.store.get_user_ip_and_agents(user)
|
||||
|
||||
d = {}
|
||||
for r in res:
|
||||
device = d.setdefault(r["device_id"], {})
|
||||
session = device.setdefault(r["access_token"], [])
|
||||
session.append({
|
||||
"ip": r["ip"],
|
||||
"user_agent": r["user_agent"],
|
||||
"last_seen": r["last_seen"],
|
||||
})
|
||||
|
||||
ret = {
|
||||
"user_id": user.to_string(),
|
||||
"devices": [
|
||||
{
|
||||
"device_id": k,
|
||||
"sessions": [
|
||||
{
|
||||
# "access_token": x, TODO (erikj)
|
||||
"connections": y,
|
||||
}
|
||||
for x, y in v.items()
|
||||
]
|
||||
}
|
||||
for k, v in d.items()
|
||||
],
|
||||
}
|
||||
|
||||
defer.returnValue(ret)
|
||||
@@ -18,10 +18,9 @@ from twisted.internet import defer
|
||||
from ._base import BaseHandler
|
||||
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.api.events.room import RoomAliasesEvent
|
||||
from synapse.http.client import HttpClient
|
||||
|
||||
import logging
|
||||
import sqlite3
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -38,8 +37,7 @@ class DirectoryHandler(BaseHandler):
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def create_association(self, user_id, room_alias, room_id, servers=None):
|
||||
|
||||
def create_association(self, room_alias, room_id, servers=None):
|
||||
# TODO(erikj): Do auth.
|
||||
|
||||
if not room_alias.is_mine:
|
||||
@@ -56,29 +54,11 @@ class DirectoryHandler(BaseHandler):
|
||||
if not servers:
|
||||
raise SynapseError(400, "Failed to get server list")
|
||||
|
||||
try:
|
||||
yield self.store.create_room_alias_association(
|
||||
room_alias,
|
||||
room_id,
|
||||
servers
|
||||
)
|
||||
except sqlite3.IntegrityError:
|
||||
defer.returnValue("Already exists")
|
||||
|
||||
# TODO: Send the room event.
|
||||
yield self._update_room_alias_events(user_id, room_id)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def delete_association(self, user_id, room_alias):
|
||||
# TODO Check if server admin
|
||||
|
||||
if not room_alias.is_mine:
|
||||
raise SynapseError(400, "Room alias must be local")
|
||||
|
||||
room_id = yield self.store.delete_room_alias(room_alias)
|
||||
|
||||
if room_id:
|
||||
yield self._update_room_alias_events(user_id, room_id)
|
||||
yield self.store.create_room_alias_association(
|
||||
room_alias,
|
||||
room_id,
|
||||
servers
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_association(self, room_alias):
|
||||
@@ -97,8 +77,8 @@ class DirectoryHandler(BaseHandler):
|
||||
query_type="directory",
|
||||
args={
|
||||
"room_alias": room_alias.to_string(),
|
||||
},
|
||||
retry_on_dns_fail=False,
|
||||
HttpClient.RETRY_DNS_LOOKUP_FAILURES: False
|
||||
}
|
||||
)
|
||||
|
||||
if result and "room_id" in result and "servers" in result:
|
||||
@@ -134,23 +114,3 @@ class DirectoryHandler(BaseHandler):
|
||||
"room_id": result.room_id,
|
||||
"servers": result.servers,
|
||||
})
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _update_room_alias_events(self, user_id, room_id):
|
||||
aliases = yield self.store.get_aliases_for_room(room_id)
|
||||
|
||||
event = self.event_factory.create_event(
|
||||
etype=RoomAliasesEvent.TYPE,
|
||||
state_key=self.hs.hostname,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
content={"aliases": aliases},
|
||||
)
|
||||
|
||||
snapshot = yield self.store.snapshot_room(
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
)
|
||||
|
||||
yield self.state_handler.handle_new_event(event, snapshot)
|
||||
yield self._on_new_room_event(event, snapshot, extra_users=[user_id])
|
||||
|
||||
@@ -15,6 +15,7 @@

from twisted.internet import defer

from synapse.api.events import SynapseEvent
from synapse.util.logutils import log_function

from ._base import BaseHandler
@@ -70,7 +71,10 @@ class EventStreamHandler(BaseHandler):
auth_user, room_ids, pagin_config, timeout
)

chunks = [self.hs.serialize_event(e) for e in events]
chunks = [
e.get_dict() if isinstance(e, SynapseEvent) else e
for e in events
]

chunk = {
"chunk": chunks,
@@ -88,9 +92,7 @@ class EventStreamHandler(BaseHandler):
# 10 seconds of grace to allow the client to reconnect again
# before we think they're gone
def _later():
logger.debug(
"_later stopped_user_eventstream %s", auth_user
)
logger.debug("_later stopped_user_eventstream %s", auth_user)
self.distributor.fire(
"stopped_user_eventstream", auth_user
)
@@ -21,9 +21,8 @@ from synapse.api.events.room import InviteJoinEvent, RoomMemberEvent
from synapse.api.constants import Membership
from synapse.util.logutils import log_function
from synapse.federation.pdu_codec import PduCodec
from synapse.api.errors import SynapseError

from twisted.internet import defer, reactor
from twisted.internet import defer

import logging
@@ -93,18 +92,22 @@ class FederationHandler(BaseHandler):
|
||||
"""
|
||||
event = self.pdu_codec.event_from_pdu(pdu)
|
||||
|
||||
logger.debug("Got event: %s", event.event_id)
|
||||
|
||||
with (yield self.lock_manager.lock(pdu.context)):
|
||||
if event.is_state and not backfilled:
|
||||
is_new_state = yield self.state_handler.handle_new_state(
|
||||
pdu
|
||||
)
|
||||
if not is_new_state:
|
||||
return
|
||||
else:
|
||||
is_new_state = False
|
||||
# TODO: Implement something in federation that allows us to
|
||||
# respond to PDU.
|
||||
|
||||
if hasattr(event, "state_key") and not is_new_state:
|
||||
logger.debug("Ignoring old state.")
|
||||
return
|
||||
|
||||
target_is_mine = False
|
||||
if hasattr(event, "target_host"):
|
||||
target_is_mine = event.target_host == self.hs.hostname
|
||||
@@ -130,16 +133,12 @@ class FederationHandler(BaseHandler):
|
||||
|
||||
yield self.hs.get_handlers().room_member_handler.change_membership(
|
||||
new_event,
|
||||
do_auth=False,
|
||||
do_auth=True
|
||||
)
|
||||
|
||||
else:
|
||||
with (yield self.room_lock.lock(event.room_id)):
|
||||
yield self.store.persist_event(
|
||||
event,
|
||||
backfilled,
|
||||
is_new_state=is_new_state
|
||||
)
|
||||
yield self.store.persist_event(event, backfilled)
|
||||
|
||||
room = yield self.store.get_room(event.room_id)
|
||||
|
||||
@@ -169,15 +168,7 @@ class FederationHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if not backfilled:
|
||||
extra_users = []
|
||||
if event.type == RoomMemberEvent.TYPE:
|
||||
target_user_id = event.state_key
|
||||
target_user = self.hs.parse_userid(target_user_id)
|
||||
extra_users.append(target_user)
|
||||
|
||||
yield self.notifier.on_new_room_event(
|
||||
event, extra_users=extra_users
|
||||
)
|
||||
yield self.notifier.on_new_room_event(event)
|
||||
|
||||
if event.type == RoomMemberEvent.TYPE:
|
||||
if event.membership == Membership.JOIN:
|
||||
@@ -240,12 +231,7 @@ class FederationHandler(BaseHandler):
|
||||
# TODO (erikj): Time out here.
|
||||
d = defer.Deferred()
|
||||
self.waiting_for_join_list.setdefault((joinee, room_id), []).append(d)
|
||||
reactor.callLater(10, d.cancel)
|
||||
|
||||
try:
|
||||
yield d
|
||||
except defer.CancelledError:
|
||||
raise SynapseError(500, "Unable to join remote room")
|
||||
yield d
|
||||
|
||||
try:
|
||||
yield self.store.store_room(
|
||||
|
||||
@@ -17,13 +17,9 @@ from twisted.internet import defer

from ._base import BaseHandler
from synapse.api.errors import LoginError, Codes
from synapse.http.client import IdentityServerHttpClient
from synapse.util.emailutils import EmailException
import synapse.util.emailutils as emailutils

import bcrypt
import logging
import urllib

logger = logging.getLogger(__name__)
@@ -54,7 +50,7 @@ class LoginHandler(BaseHandler):
# pull out the hash for this user if they exist
user_info = yield self.store.get_user_by_id(user_id=user)
if not user_info:
logger.warn("Attempted to login as %s but they do not exist", user)
logger.warn("Attempted to login as %s but they do not exist.", user)
raise LoginError(403, "", errcode=Codes.FORBIDDEN)

stored_hash = user_info[0]["password_hash"]
@@ -66,41 +62,4 @@ class LoginHandler(BaseHandler):
|
||||
defer.returnValue(token)
|
||||
else:
|
||||
logger.warn("Failed password login for user %s", user)
|
||||
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def reset_password(self, user_id, email):
|
||||
is_valid = yield self._check_valid_association(user_id, email)
|
||||
logger.info("reset_password user=%s email=%s valid=%s", user_id, email,
|
||||
is_valid)
|
||||
if is_valid:
|
||||
try:
|
||||
# send an email out
|
||||
emailutils.send_email(
|
||||
smtp_server=self.hs.config.email_smtp_server,
|
||||
from_addr=self.hs.config.email_from_address,
|
||||
to_addr=email,
|
||||
subject="Password Reset",
|
||||
body="TODO."
|
||||
)
|
||||
except EmailException as e:
|
||||
logger.exception(e)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _check_valid_association(self, user_id, email):
|
||||
identity = yield self._query_email(email)
|
||||
if identity and "mxid" in identity:
|
||||
if identity["mxid"] == user_id:
|
||||
defer.returnValue(True)
|
||||
return
|
||||
defer.returnValue(False)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _query_email(self, email):
|
||||
httpCli = IdentityServerHttpClient(self.hs)
|
||||
data = yield httpCli.get_json(
|
||||
'matrix.org:8090', # TODO FIXME This should be configurable.
|
||||
"/_matrix/identity/api/v1/lookup?medium=email&address=" +
|
||||
"%s" % urllib.quote(email)
|
||||
)
|
||||
defer.returnValue(data)
|
||||
raise LoginError(403, "", errcode=Codes.FORBIDDEN)
|
||||
@@ -19,7 +19,7 @@ from synapse.api.constants import Membership
from synapse.api.events.room import RoomTopicEvent
from synapse.api.errors import RoomError
from synapse.streams.config import PaginationConfig
from ._base import BaseHandler
from ._base import BaseRoomHandler

import logging
@@ -27,7 +27,7 @@ logger = logging.getLogger(__name__)


class MessageHandler(BaseHandler):
class MessageHandler(BaseRoomHandler):

def __init__(self, hs):
super(MessageHandler, self).__init__(hs)
@@ -64,7 +64,7 @@ class MessageHandler(BaseHandler):
|
||||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def send_message(self, event=None, suppress_auth=False):
|
||||
def send_message(self, event=None, suppress_auth=False, stamp_event=True):
|
||||
""" Send a message.
|
||||
|
||||
Args:
|
||||
@@ -72,6 +72,7 @@ class MessageHandler(BaseHandler):
|
||||
suppress_auth (bool) : True to suppress auth for this message. This
|
||||
is primarily so the home server can inject messages into rooms at
|
||||
will.
|
||||
stamp_event (bool) : True to stamp event content with server keys.
|
||||
Raises:
|
||||
SynapseError if something went wrong.
|
||||
"""
|
||||
@@ -81,6 +82,9 @@ class MessageHandler(BaseHandler):
|
||||
user = self.hs.parse_userid(event.user_id)
|
||||
assert user.is_mine, "User must be our own: %s" % (user,)
|
||||
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
|
||||
|
||||
if not suppress_auth:
|
||||
@@ -115,16 +119,12 @@ class MessageHandler(BaseHandler):
|
||||
|
||||
user = self.hs.parse_userid(user_id)
|
||||
|
||||
events, next_key = yield data_source.get_pagination_rows(
|
||||
user, pagin_config.get_source_config("room"), room_id
|
||||
)
|
||||
|
||||
next_token = pagin_config.from_token.copy_and_replace(
|
||||
"room_key", next_key
|
||||
events, next_token = yield data_source.get_pagination_rows(
|
||||
user, pagin_config, room_id
|
||||
)
|
||||
|
||||
chunk = {
|
||||
"chunk": [self.hs.serialize_event(e) for e in events],
|
||||
"chunk": [e.get_dict() for e in events],
|
||||
"start": pagin_config.from_token.to_string(),
|
||||
"end": next_token.to_string(),
|
||||
}
|
||||
@@ -132,7 +132,7 @@ class MessageHandler(BaseHandler):
|
||||
defer.returnValue(chunk)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def store_room_data(self, event=None):
|
||||
def store_room_data(self, event=None, stamp_event=True):
|
||||
""" Stores data for a room.
|
||||
|
||||
Args:
|
||||
@@ -151,6 +151,9 @@ class MessageHandler(BaseHandler):
|
||||
|
||||
yield self.auth.check(event, snapshot, raises=True)
|
||||
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
yield self.state_handler.handle_new_event(event, snapshot)
|
||||
|
||||
yield self._on_new_room_event(event, snapshot)
|
||||
@@ -218,7 +221,10 @@ class MessageHandler(BaseHandler):
|
||||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def send_feedback(self, event):
|
||||
def send_feedback(self, event, stamp_event=True):
|
||||
if stamp_event:
|
||||
event.content["hsob_ts"] = int(self.clock.time_msec())
|
||||
|
||||
snapshot = yield self.store.snapshot_room(event.room_id, event.user_id)
|
||||
|
||||
yield self.auth.check(event, snapshot, raises=True)
|
||||
@@ -226,22 +232,6 @@ class MessageHandler(BaseHandler):
|
||||
# store message in db
|
||||
yield self._on_new_room_event(event, snapshot)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_state_events(self, user_id, room_id):
|
||||
"""Retrieve all state events for a given room.
|
||||
|
||||
Args:
|
||||
user_id(str): The user requesting state events.
|
||||
room_id(str): The room ID to get all state events from.
|
||||
Returns:
|
||||
A list of dicts representing state events. [{}, {}, {}]
|
||||
"""
|
||||
yield self.auth.check_joined_room(room_id, user_id)
|
||||
|
||||
# TODO: This is duplicating logic from snapshot_all_rooms
|
||||
current_state = yield self.store.get_current_state(room_id)
|
||||
defer.returnValue([self.hs.serialize_event(c) for c in current_state])
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def snapshot_all_rooms(self, user_id=None, pagin_config=None,
|
||||
feedback=False):
|
||||
@@ -275,12 +265,9 @@ class MessageHandler(BaseHandler):
|
||||
presence_stream = self.hs.get_event_sources().sources["presence"]
|
||||
pagination_config = PaginationConfig(from_token=now_token)
|
||||
presence, _ = yield presence_stream.get_pagination_rows(
|
||||
user, pagination_config.get_source_config("presence"), None
|
||||
user, pagination_config, None
|
||||
)
|
||||
|
||||
public_rooms = yield self.store.get_rooms(is_public=True)
|
||||
public_room_ids = [r["room_id"] for r in public_rooms]
|
||||
|
||||
limit = pagin_config.limit
|
||||
if not limit:
|
||||
limit = 10
|
||||
@@ -289,8 +276,6 @@ class MessageHandler(BaseHandler):
|
||||
d = {
|
||||
"room_id": event.room_id,
|
||||
"membership": event.membership,
|
||||
"visibility": ("public" if event.room_id in
|
||||
public_room_ids else "private"),
|
||||
}
|
||||
|
||||
if event.membership == Membership.INVITE:
|
||||
@@ -311,7 +296,7 @@ class MessageHandler(BaseHandler):
|
||||
end_token = now_token.copy_and_replace("room_key", token[1])
|
||||
|
||||
d["messages"] = {
|
||||
"chunk": [self.hs.serialize_event(m) for m in messages],
|
||||
"chunk": [m.get_dict() for m in messages],
|
||||
"start": start_token.to_string(),
|
||||
"end": end_token.to_string(),
|
||||
}
|
||||
@@ -319,7 +304,7 @@ class MessageHandler(BaseHandler):
|
||||
current_state = yield self.store.get_current_state(
|
||||
event.room_id
|
||||
)
|
||||
d["state"] = [self.hs.serialize_event(c) for c in current_state]
|
||||
d["state"] = [c.get_dict() for c in current_state]
|
||||
except:
|
||||
logger.exception("Failed to get snapshot")
|
||||
|
||||
|
||||
@@ -76,7 +76,9 @@ class PresenceHandler(BaseHandler):
|
||||
"stopped_user_eventstream", self.stopped_user_eventstream
|
||||
)
|
||||
|
||||
distributor.observe("user_joined_room", self.user_joined_room)
|
||||
distributor.observe("user_joined_room",
|
||||
self.user_joined_room
|
||||
)
|
||||
|
||||
distributor.declare("collect_presencelike_data")
|
||||
|
||||
@@ -154,12 +156,14 @@ class PresenceHandler(BaseHandler):
|
||||
defer.returnValue(True)
|
||||
|
||||
if (yield self.store.user_rooms_intersect(
|
||||
[u.to_string() for u in observer_user, observed_user])):
|
||||
[u.to_string() for u in observer_user, observed_user]
|
||||
)):
|
||||
defer.returnValue(True)
|
||||
|
||||
if (yield self.store.is_presence_visible(
|
||||
observed_localpart=observed_user.localpart,
|
||||
observer_userid=observer_user.to_string())):
|
||||
observed_localpart=observed_user.localpart,
|
||||
observer_userid=observer_user.to_string(),
|
||||
)):
|
||||
defer.returnValue(True)
|
||||
|
||||
defer.returnValue(False)
|
||||
@@ -167,17 +171,13 @@ class PresenceHandler(BaseHandler):
|
||||
@defer.inlineCallbacks
|
||||
def get_state(self, target_user, auth_user):
|
||||
if target_user.is_mine:
|
||||
visible = yield self.is_presence_visible(
|
||||
observer_user=auth_user,
|
||||
visible = yield self.is_presence_visible(observer_user=auth_user,
|
||||
observed_user=target_user
|
||||
)
|
||||
|
||||
if not visible:
|
||||
raise SynapseError(404, "Presence information not visible")
|
||||
state = yield self.store.get_presence_state(target_user.localpart)
|
||||
if "mtime" in state:
|
||||
del state["mtime"]
|
||||
state["presence"] = state.pop("state")
|
||||
|
||||
if target_user in self._user_cachemap:
|
||||
state["last_active"] = (
|
||||
@@ -216,20 +216,24 @@ class PresenceHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if state["presence"] not in self.STATE_LEVELS:
|
||||
raise SynapseError(400, "'%s' is not a valid presence state" % (
|
||||
state["presence"],
|
||||
))
|
||||
raise SynapseError(400, "'%s' is not a valid presence state" %
|
||||
state["presence"]
|
||||
)
|
||||
|
||||
logger.debug("Updating presence state of %s to %s",
|
||||
target_user.localpart, state["presence"])
|
||||
|
||||
state_to_store = dict(state)
|
||||
state_to_store["state"] = state_to_store.pop("presence")
|
||||
statuscache=self._get_or_offline_usercache(target_user)
|
||||
oldstate = statuscache.get_state()
|
||||
|
||||
statuscache = self._get_or_offline_usercache(target_user)
|
||||
was_level = self.STATE_LEVELS[statuscache.get_state()["presence"]]
|
||||
was_level = self.STATE_LEVELS[oldstate["presence"]]
|
||||
now_level = self.STATE_LEVELS[state["presence"]]
|
||||
|
||||
if now_level > was_level:
|
||||
state["last_active"] = self.clock.time_msec()
|
||||
|
||||
state_to_store = dict(state)
|
||||
|
||||
yield defer.DeferredList([
|
||||
self.store.set_presence_state(
|
||||
target_user.localpart, state_to_store
|
||||
@@ -239,9 +243,6 @@ class PresenceHandler(BaseHandler):
|
||||
),
|
||||
])
|
||||
|
||||
if now_level > was_level:
|
||||
state["last_active"] = self.clock.time_msec()
|
||||
|
||||
now_online = state["presence"] != PresenceState.OFFLINE
|
||||
was_polling = target_user in self._user_cachemap
|
||||
|
||||
@@ -592,8 +593,6 @@ class PresenceHandler(BaseHandler):
|
||||
def _push_presence_remote(self, user, destination, state=None):
|
||||
if state is None:
|
||||
state = yield self.store.get_presence_state(user.localpart)
|
||||
del state["mtime"]
|
||||
state["presence"] = state.pop("state")
|
||||
|
||||
if user in self._user_cachemap:
|
||||
state["last_active"] = (
|
||||
@@ -646,9 +645,8 @@ class PresenceHandler(BaseHandler):
|
||||
del state["user_id"]
|
||||
|
||||
if "presence" not in state:
|
||||
logger.warning(
|
||||
"Received a presence 'push' EDU from %s without a"
|
||||
" 'presence' key", origin
|
||||
logger.warning("Received a presence 'push' EDU from %s without"
|
||||
+ " a 'presence' key", origin
|
||||
)
|
||||
continue
|
||||
|
||||
@@ -743,7 +741,7 @@ class PresenceHandler(BaseHandler):
|
||||
defer.returnValue((localusers, remote_domains))
|
||||
|
||||
def push_update_to_clients(self, observed_user, users_to_push=[],
|
||||
room_ids=[], statuscache=None):
|
||||
room_ids=[], statuscache=None):
|
||||
self.notifier.on_new_user_event(
|
||||
users_to_push,
|
||||
room_ids,
|
||||
@@ -763,7 +761,8 @@ class PresenceEventSource(object):
|
||||
presence = self.hs.get_handlers().presence_handler
|
||||
|
||||
if (yield presence.store.user_rooms_intersect(
|
||||
[u.to_string() for u in observer_user, observed_user])):
|
||||
[u.to_string() for u in observer_user, observed_user]
|
||||
)):
|
||||
defer.returnValue(True)
|
||||
|
||||
if observed_user.is_mine:
|
||||
@@ -793,12 +792,11 @@ class PresenceEventSource(object):
|
||||
updates = []
|
||||
# TODO(paul): use a DeferredList ? How to limit concurrency.
|
||||
for observed_user in cachemap.keys():
|
||||
cached = cachemap[observed_user]
|
||||
if not (from_key < cached.serial):
|
||||
if not (from_key < cachemap[observed_user].serial):
|
||||
continue
|
||||
|
||||
if (yield self.is_visible(observer_user, observed_user)):
|
||||
updates.append((observed_user, cached))
|
||||
updates.append((observed_user, cachemap[observed_user]))
|
||||
|
||||
# TODO(paul): limit
|
||||
|
||||
@@ -820,12 +818,15 @@ class PresenceEventSource(object):
|
||||
def get_pagination_rows(self, user, pagination_config, key):
|
||||
# TODO (erikj): Does this make sense? Ordering?
|
||||
|
||||
from_token = pagination_config.from_token
|
||||
to_token = pagination_config.to_token
|
||||
|
||||
observer_user = user
|
||||
|
||||
from_key = int(pagination_config.from_key)
|
||||
from_key = int(from_token.presence_key)
|
||||
|
||||
if pagination_config.to_key:
|
||||
to_key = int(pagination_config.to_key)
|
||||
if to_token:
|
||||
to_key = int(to_token.presence_key)
|
||||
else:
|
||||
to_key = -1
|
||||
|
||||
@@ -835,7 +836,7 @@ class PresenceEventSource(object):
|
||||
updates = []
|
||||
# TODO(paul): use a DeferredList ? How to limit concurrency.
|
||||
for observed_user in cachemap.keys():
|
||||
if not (to_key < cachemap[observed_user].serial <= from_key):
|
||||
if not (to_key < cachemap[observed_user].serial < from_key):
|
||||
continue
|
||||
|
||||
if (yield self.is_visible(observer_user, observed_user)):
|
||||
@@ -843,15 +844,30 @@ class PresenceEventSource(object):
|
||||
|
||||
# TODO(paul): limit
|
||||
|
||||
updates = [(k, cachemap[k]) for k in cachemap
|
||||
if to_key < cachemap[k].serial < from_key]
|
||||
|
||||
if updates:
|
||||
clock = self.clock
|
||||
|
||||
earliest_serial = max([x[1].serial for x in updates])
|
||||
data = [x[1].make_event(user=x[0], clock=clock) for x in updates]
|
||||
|
||||
defer.returnValue((data, earliest_serial))
|
||||
if to_token:
|
||||
next_token = to_token
|
||||
else:
|
||||
next_token = from_token
|
||||
|
||||
next_token = next_token.copy_and_replace(
|
||||
"presence_key", earliest_serial
|
||||
)
|
||||
defer.returnValue((data, next_token))
|
||||
else:
|
||||
defer.returnValue(([], 0))
|
||||
if not to_token:
|
||||
to_token = from_token.copy_and_replace(
|
||||
"presence_key", 0
|
||||
)
|
||||
defer.returnValue(([], to_token))
|
||||
|
||||
|
||||
class UserPresenceCache(object):
|
||||
@@ -864,8 +880,6 @@ class UserPresenceCache(object):
|
||||
self.serial = None
|
||||
|
||||
def update(self, state, serial):
|
||||
assert("mtime_age" not in state)
|
||||
|
||||
self.state.update(state)
|
||||
# Delete keys that are now 'None'
|
||||
for k in self.state.keys():
|
||||
|
||||
@@ -15,9 +15,9 @@
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
|
||||
from synapse.api.constants import Membership
|
||||
from synapse.api.events.room import RoomMemberEvent
|
||||
from synapse.api.errors import SynapseError, AuthError
|
||||
|
||||
from synapse.api.errors import CodeMessageException
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
@@ -97,8 +97,6 @@ class ProfileHandler(BaseHandler):
|
||||
}
|
||||
)
|
||||
|
||||
yield self._update_join_states(target_user)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_avatar_url(self, target_user):
|
||||
if target_user.is_mine:
|
||||
@@ -146,8 +144,6 @@ class ProfileHandler(BaseHandler):
|
||||
}
|
||||
)
|
||||
|
||||
yield self._update_join_states(target_user)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def collect_presencelike_data(self, user, state):
|
||||
if not user.is_mine:
|
||||
@@ -184,39 +180,3 @@ class ProfileHandler(BaseHandler):
|
||||
)
|
||||
|
||||
defer.returnValue(response)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _update_join_states(self, user):
|
||||
if not user.is_mine:
|
||||
return
|
||||
|
||||
joins = yield self.store.get_rooms_for_user_where_membership_is(
|
||||
user.to_string(),
|
||||
[Membership.JOIN],
|
||||
)
|
||||
|
||||
for j in joins:
|
||||
snapshot = yield self.store.snapshot_room(
|
||||
j.room_id, j.state_key, RoomMemberEvent.TYPE,
|
||||
j.state_key
|
||||
)
|
||||
|
||||
content = {
|
||||
"membership": j.content["membership"],
|
||||
"prev": j.content["membership"],
|
||||
}
|
||||
|
||||
yield self.distributor.fire(
|
||||
"collect_presencelike_data", user, content
|
||||
)
|
||||
|
||||
new_event = self.event_factory.create_event(
|
||||
etype=j.type,
|
||||
room_id=j.room_id,
|
||||
state_key=j.state_key,
|
||||
content=content,
|
||||
user_id=j.state_key,
|
||||
)
|
||||
|
||||
yield self.state_handler.handle_new_event(new_event, snapshot)
|
||||
yield self._on_new_room_event(new_event, snapshot)
|
||||
|
||||
@@ -15,22 +15,14 @@

"""Contains functions for registering clients."""
from twisted.internet import defer
from twisted.python import log

from synapse.types import UserID
from synapse.api.errors import (
SynapseError, RegistrationError, InvalidCaptchaError
)
from synapse.api.errors import SynapseError, RegistrationError
from ._base import BaseHandler
import synapse.util.stringutils as stringutils
from synapse.http.client import IdentityServerHttpClient
from synapse.http.client import CaptchaServerHttpClient

import base64
import bcrypt
import logging

logger = logging.getLogger(__name__)


class RegistrationHandler(BaseHandler):
@@ -64,13 +56,12 @@ class RegistrationHandler(BaseHandler):
|
||||
user_id = user.to_string()
|
||||
|
||||
token = self._generate_token(user_id)
|
||||
yield self.store.register(
|
||||
user_id=user_id,
|
||||
yield self.store.register(user_id=user_id,
|
||||
token=token,
|
||||
password_hash=password_hash
|
||||
)
|
||||
password_hash=password_hash)
|
||||
|
||||
self.distributor.fire("registered_user", user)
|
||||
defer.returnValue((user_id, token))
|
||||
else:
|
||||
# autogen a random user ID
|
||||
attempts = 0
|
||||
@@ -89,6 +80,7 @@ class RegistrationHandler(BaseHandler):
|
||||
password_hash=password_hash)
|
||||
|
||||
self.distributor.fire("registered_user", user)
|
||||
defer.returnValue((user_id, token))
|
||||
except SynapseError:
|
||||
# if user id is taken, just generate another
|
||||
user_id = None
|
||||
@@ -98,54 +90,6 @@ class RegistrationHandler(BaseHandler):
|
||||
raise RegistrationError(
|
||||
500, "Cannot generate user ID.")
|
||||
|
||||
defer.returnValue((user_id, token))
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def check_recaptcha(self, ip, private_key, challenge, response):
|
||||
"""Checks a recaptcha is correct."""
|
||||
|
||||
captcha_response = yield self._validate_captcha(
|
||||
ip,
|
||||
private_key,
|
||||
challenge,
|
||||
response
|
||||
)
|
||||
if not captcha_response["valid"]:
|
||||
logger.info("Invalid captcha entered from %s. Error: %s",
|
||||
ip, captcha_response["error_url"])
|
||||
raise InvalidCaptchaError(
|
||||
error_url=captcha_response["error_url"]
|
||||
)
|
||||
else:
|
||||
logger.info("Valid captcha entered from %s", ip)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def register_email(self, threepidCreds):
|
||||
"""Registers emails with an identity server."""
|
||||
|
||||
for c in threepidCreds:
|
||||
logger.info("validating theeepidcred sid %s on id server %s",
|
||||
c['sid'], c['idServer'])
|
||||
try:
|
||||
threepid = yield self._threepid_from_creds(c)
|
||||
except:
|
||||
log.err()
|
||||
raise RegistrationError(400, "Couldn't validate 3pid")
|
||||
|
||||
if not threepid:
|
||||
raise RegistrationError(400, "Couldn't validate 3pid")
|
||||
logger.info("got threepid medium %s address %s",
|
||||
threepid['medium'], threepid['address'])
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def bind_emails(self, user_id, threepidCreds):
|
||||
"""Links emails with a user ID and informs an identity server."""
|
||||
|
||||
# Now we have a matrix ID, bind it to the threepids we were given
|
||||
for c in threepidCreds:
|
||||
# XXX: This should be a deferred list, shouldn't it?
|
||||
yield self._bind_threepid(c, user_id)
|
||||
|
||||
def _generate_token(self, user_id):
|
||||
# urlsafe variant uses _ and - so use . as the separator and replace
|
||||
# all =s with .s so http clients don't quote =s when it is used as
|
||||
@@ -155,76 +99,3 @@ class RegistrationHandler(BaseHandler):
|
||||
|
||||
def _generate_user_id(self):
|
||||
return "-" + stringutils.random_string(18)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _threepid_from_creds(self, creds):
|
||||
# TODO: get this from the homeserver rather than creating a new one for
|
||||
# each request
|
||||
httpCli = IdentityServerHttpClient(self.hs)
|
||||
# XXX: make this configurable!
|
||||
trustedIdServers = ['matrix.org:8090']
|
||||
if not creds['idServer'] in trustedIdServers:
|
||||
logger.warn('%s is not a trusted ID server: rejecting 3pid ' +
|
||||
'credentials', creds['idServer'])
|
||||
defer.returnValue(None)
|
||||
data = yield httpCli.get_json(
|
||||
creds['idServer'],
|
||||
"/_matrix/identity/api/v1/3pid/getValidated3pid",
|
||||
{'sid': creds['sid'], 'clientSecret': creds['clientSecret']}
|
||||
)
|
||||
|
||||
if 'medium' in data:
|
||||
defer.returnValue(data)
|
||||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _bind_threepid(self, creds, mxid):
|
||||
httpCli = IdentityServerHttpClient(self.hs)
|
||||
data = yield httpCli.post_urlencoded_get_json(
|
||||
creds['idServer'],
|
||||
"/_matrix/identity/api/v1/3pid/bind",
|
||||
{
|
||||
'sid': creds['sid'],
|
||||
'clientSecret': creds['clientSecret'],
|
||||
'mxid': mxid,
|
||||
}
|
||||
)
|
||||
defer.returnValue(data)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _validate_captcha(self, ip_addr, private_key, challenge, response):
|
||||
"""Validates the captcha provided.
|
||||
|
||||
Returns:
|
||||
dict: Containing 'valid'(bool) and 'error_url'(str) if invalid.
|
||||
|
||||
"""
|
||||
response = yield self._submit_captcha(ip_addr, private_key, challenge,
|
||||
response)
|
||||
# parse Google's response. Lovely format..
|
||||
lines = response.split('\n')
|
||||
json = {
|
||||
"valid": lines[0] == 'true',
|
||||
"error_url": "http://www.google.com/recaptcha/api/challenge?" +
|
||||
"error=%s" % lines[1]
|
||||
}
|
||||
defer.returnValue(json)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _submit_captcha(self, ip_addr, private_key, challenge, response):
|
||||
# TODO: get this from the homeserver rather than creating a new one for
|
||||
# each request
|
||||
client = CaptchaServerHttpClient(self.hs)
|
||||
data = yield client.post_urlencoded_get_raw(
|
||||
"www.google.com:80",
|
||||
"/recaptcha/api/verify",
|
||||
# twisted dislikes google's response, no content length.
|
||||
accept_partial=True,
|
||||
args={
|
||||
'privatekey': private_key,
|
||||
'remoteip': ip_addr,
|
||||
'challenge': challenge,
|
||||
'response': response
|
||||
}
|
||||
)
|
||||
defer.returnValue(data)
|
||||
|
||||
@@ -25,14 +25,14 @@ from synapse.api.events.room import (
RoomSendEventLevelEvent, RoomOpsPowerLevelsEvent, RoomNameEvent,
)
from synapse.util import stringutils
from ._base import BaseHandler
from ._base import BaseRoomHandler

import logging

logger = logging.getLogger(__name__)


class RoomCreationHandler(BaseHandler):
class RoomCreationHandler(BaseRoomHandler):

@defer.inlineCallbacks
def create_room(self, user_id, room_id, config):
@@ -65,13 +65,6 @@ class RoomCreationHandler(BaseHandler):
|
||||
else:
|
||||
room_alias = None
|
||||
|
||||
invite_list = config.get("invite", [])
|
||||
for i in invite_list:
|
||||
try:
|
||||
self.hs.parse_userid(i)
|
||||
except:
|
||||
raise SynapseError(400, "Invalid user_id: %s" % (i,))
|
||||
|
||||
is_public = config.get("visibility", None) == "public"
|
||||
|
||||
if room_id:
|
||||
@@ -112,9 +105,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
)
|
||||
|
||||
if room_alias:
|
||||
directory_handler = self.hs.get_handlers().directory_handler
|
||||
yield directory_handler.create_association(
|
||||
user_id=user_id,
|
||||
yield self.store.create_room_alias_association(
|
||||
room_id=room_id,
|
||||
room_alias=room_alias,
|
||||
servers=[self.hs.hostname],
|
||||
@@ -141,7 +132,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
etype=RoomNameEvent.TYPE,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
required_power_level=50,
|
||||
required_power_level=5,
|
||||
content={"name": name},
|
||||
)
|
||||
|
||||
@@ -153,7 +144,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
etype=RoomTopicEvent.TYPE,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
required_power_level=50,
|
||||
required_power_level=5,
|
||||
content={"topic": topic},
|
||||
)
|
||||
|
||||
@@ -169,25 +160,11 @@ class RoomCreationHandler(BaseHandler):
|
||||
content=content
|
||||
)
|
||||
|
||||
content = {"membership": Membership.INVITE}
|
||||
for invitee in invite_list:
|
||||
invite_event = self.event_factory.create_event(
|
||||
etype=RoomMemberEvent.TYPE,
|
||||
state_key=invitee,
|
||||
room_id=room_id,
|
||||
user_id=user_id,
|
||||
content=content
|
||||
)
|
||||
|
||||
yield self.hs.get_handlers().room_member_handler.change_membership(
|
||||
invite_event,
|
||||
do_auth=False
|
||||
)
|
||||
|
||||
yield self.hs.get_handlers().room_member_handler.change_membership(
|
||||
join_event,
|
||||
do_auth=False
|
||||
)
|
||||
|
||||
result = {"room_id": room_id}
|
||||
if room_alias:
|
||||
result["room_alias"] = room_alias.to_string()
|
||||
@@ -198,7 +175,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
event_keys = {
|
||||
"room_id": room_id,
|
||||
"user_id": creator.to_string(),
|
||||
"required_power_level": 100,
|
||||
"required_power_level": 10,
|
||||
}
|
||||
|
||||
def create(etype, **content):
|
||||
@@ -215,7 +192,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
|
||||
power_levels_event = self.event_factory.create_event(
|
||||
etype=RoomPowerLevelsEvent.TYPE,
|
||||
content={creator.to_string(): 100, "default": 0},
|
||||
content={creator.to_string(): 10, "default": 0},
|
||||
**event_keys
|
||||
)
|
||||
|
||||
@@ -227,7 +204,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
|
||||
add_state_event = create(
|
||||
etype=RoomAddStateLevelEvent.TYPE,
|
||||
level=100,
|
||||
level=10,
|
||||
)
|
||||
|
||||
send_event = create(
|
||||
@@ -237,9 +214,8 @@ class RoomCreationHandler(BaseHandler):
|
||||
|
||||
ops = create(
|
||||
etype=RoomOpsPowerLevelsEvent.TYPE,
|
||||
ban_level=50,
|
||||
kick_level=50,
|
||||
redact_level=50,
|
||||
ban_level=5,
|
||||
kick_level=5,
|
||||
)
|
||||
|
||||
return [
|
||||
@@ -252,7 +228,7 @@ class RoomCreationHandler(BaseHandler):
|
||||
]
|
||||
|
||||
|
||||
class RoomMemberHandler(BaseHandler):
|
||||
class RoomMemberHandler(BaseRoomHandler):
|
||||
# TODO(paul): This handler currently contains a messy conflation of
|
||||
# low-level API that works on UserID objects and so on, and REST-level
|
||||
# API that takes ID strings and returns pagination chunks. These concerns
|
||||
@@ -320,7 +296,7 @@ class RoomMemberHandler(BaseHandler):
|
||||
|
||||
member_list = yield self.store.get_room_members(room_id=room_id)
|
||||
event_list = [
|
||||
self.hs.serialize_event(entry)
|
||||
entry.get_dict()
|
||||
for entry in member_list
|
||||
]
|
||||
chunk_data = {
|
||||
@@ -573,17 +549,11 @@ class RoomMemberHandler(BaseHandler):
|
||||
extra_users=[target_user]
|
||||
)
|
||||
|
||||
class RoomListHandler(BaseHandler):
|
||||
class RoomListHandler(BaseRoomHandler):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_public_room_list(self):
|
||||
chunk = yield self.store.get_rooms(is_public=True)
|
||||
for room in chunk:
|
||||
joined_members = yield self.store.get_room_members(
|
||||
room_id=room["room_id"],
|
||||
membership=Membership.JOIN
|
||||
)
|
||||
room["num_joined_members"] = len(joined_members)
|
||||
# FIXME (erikj): START is no longer a valid value
|
||||
defer.returnValue({"start": "START", "end": "END", "chunk": chunk})
|
||||
|
||||
@@ -612,14 +582,23 @@ class RoomEventSource(object):
return self.store.get_room_events_max_id()

@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
def get_pagination_rows(self, user, pagination_config, key):
from_token = pagination_config.from_token
to_token = pagination_config.to_token
limit = pagination_config.limit
direction = pagination_config.direction

to_key = to_token.room_key if to_token else None

events, next_key = yield self.store.paginate_room_events(
room_id=key,
from_key=config.from_key,
to_key=config.to_key,
direction=config.direction,
limit=config.limit,
from_key=from_token.room_key,
to_key=to_key,
direction=direction,
limit=limit,
with_feedback=True
)

defer.returnValue((events, next_key))
next_token = from_token.copy_and_replace("room_key", next_key)

defer.returnValue((events, next_token))
@@ -96,10 +96,9 @@ class TypingNotificationHandler(BaseHandler):
|
||||
remotedomains = set()
|
||||
|
||||
rm_handler = self.homeserver.get_handlers().room_member_handler
|
||||
yield rm_handler.fetch_room_distributions_into(
|
||||
room_id, localusers=localusers, remotedomains=remotedomains,
|
||||
ignore_user=user
|
||||
)
|
||||
yield rm_handler.fetch_room_distributions_into(room_id,
|
||||
localusers=localusers, remotedomains=remotedomains,
|
||||
ignore_user=user)
|
||||
|
||||
for u in localusers:
|
||||
self.push_update_to_clients(
|
||||
@@ -131,9 +130,8 @@ class TypingNotificationHandler(BaseHandler):
|
||||
localusers = set()
|
||||
|
||||
rm_handler = self.homeserver.get_handlers().room_member_handler
|
||||
yield rm_handler.fetch_room_distributions_into(
|
||||
room_id, localusers=localusers
|
||||
)
|
||||
yield rm_handler.fetch_room_distributions_into(room_id,
|
||||
localusers=localusers)
|
||||
|
||||
for u in localusers:
|
||||
self.push_update_to_clients(
|
||||
@@ -144,7 +142,7 @@ class TypingNotificationHandler(BaseHandler):
)

def push_update_to_clients(self, room_id, observer_user, observed_user,
typing):
typing):
# TODO(paul) steal this from presence.py
pass
@@ -160,4 +158,4 @@ class TypingNotificationEventSource(object):
return 0

def get_pagination_rows(self, user, pagination_config, key):
return ([], pagination_config.from_key)
return ([], pagination_config.from_token)
@@ -12,3 +12,4 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

@@ -16,9 +16,7 @@

from twisted.internet import defer, reactor
from twisted.internet.error import DNSLookupError
from twisted.web.client import (
_AgentBase, _URI, readBody, FileBodyProducer, PartialDownloadError
)
from twisted.web.client import _AgentBase, _URI, readBody
from twisted.web.http_headers import Headers

from synapse.http.endpoint import matrix_endpoint
@@ -28,18 +26,63 @@ from syutil.jsonutil import encode_canonical_json

from synapse.api.errors import CodeMessageException, SynapseError

from syutil.crypto.jsonsign import sign_json

from StringIO import StringIO

import json
import logging
import urllib
import urlparse


logger = logging.getLogger(__name__)

# FIXME: SURELY these should be killed?!
_destination_mappings = {
"red": "localhost:8080",
"blue": "localhost:8081",
"green": "localhost:8082",
}

class HttpClient(object):
|
||||
""" Interface for talking json over http
|
||||
"""
|
||||
RETRY_DNS_LOOKUP_FAILURES = "__retry_dns"
|
||||
|
||||
def put_json(self, destination, path, data):
|
||||
""" Sends the specifed json data using PUT
|
||||
|
||||
Args:
|
||||
destination (str): The remote server to send the HTTP request
|
||||
to.
|
||||
path (str): The HTTP path.
|
||||
data (dict): A dict containing the data that will be used as
|
||||
the request body. This will be encoded as JSON.
|
||||
|
||||
Returns:
|
||||
Deferred: Succeeds when we get a 2xx HTTP response. The result
|
||||
will be the decoded JSON body. On a 4xx or 5xx error response a
|
||||
CodeMessageException is raised.
|
||||
"""
|
||||
pass
|
||||
|
||||
def get_json(self, destination, path, args=None):
|
||||
""" Get's some json from the given host homeserver and path
|
||||
|
||||
Args:
|
||||
destination (str): The remote server to send the HTTP request
|
||||
to.
|
||||
path (str): The HTTP path.
|
||||
args (dict): A dictionary used to create query strings, defaults to
|
||||
None.
|
||||
**Note**: The value of each key is assumed to be an iterable
|
||||
and *not* a string.
|
||||
|
||||
Returns:
|
||||
Deferred: Succeeds when we get *any* HTTP response.
|
||||
|
||||
The result of the deferred is a tuple of `(code, response)`,
|
||||
where `response` is a dict representing the decoded JSON body.
|
||||
"""
|
||||
pass
|
||||
|
||||
|
||||
class MatrixHttpAgent(_AgentBase):
|
||||
|
||||
@@ -64,8 +107,12 @@ class MatrixHttpAgent(_AgentBase):
|
||||
parsed_URI.originForm)
|
||||
|
||||
|
||||
class BaseHttpClient(object):
|
||||
"""Base class for HTTP clients using twisted.
|
||||
class TwistedHttpClient(HttpClient):
|
||||
""" Wrapper around the twisted HTTP client api.
|
||||
|
||||
Attributes:
|
||||
agent (twisted.web.client.Agent): The twisted Agent used to send the
|
||||
requests.
|
||||
"""
|
||||
|
||||
def __init__(self, hs):
|
||||
@@ -73,20 +120,64 @@ class BaseHttpClient(object):
|
||||
self.hs = hs
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _create_request(self, destination, method, path_bytes,
|
||||
body_callback, headers_dict={}, param_bytes=b"",
|
||||
query_bytes=b"", retry_on_dns_fail=True):
|
||||
def put_json(self, destination, path, data):
|
||||
if destination in _destination_mappings:
|
||||
destination = _destination_mappings[destination]
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"PUT",
|
||||
path.encode("ascii"),
|
||||
producer=_JsonProducer(data),
|
||||
headers_dict={"Content-Type": ["application/json"]}
|
||||
)
|
||||
|
||||
logger.debug("Getting resp body")
|
||||
body = yield readBody(response)
|
||||
logger.debug("Got resp body")
|
||||
|
||||
defer.returnValue((response.code, body))
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_json(self, destination, path, args={}):
|
||||
if destination in _destination_mappings:
|
||||
destination = _destination_mappings[destination]
|
||||
|
||||
logger.debug("get_json args: %s", args)
|
||||
|
||||
retry_on_dns_fail = True
|
||||
if HttpClient.RETRY_DNS_LOOKUP_FAILURES in args:
|
||||
# FIXME: This isn't ideal, but the interface exposed in get_json
|
||||
# isn't comprehensive enough to give caller's any control over
|
||||
# their connection mechanics.
|
||||
retry_on_dns_fail = args.pop(HttpClient.RETRY_DNS_LOOKUP_FAILURES)
|
||||
|
||||
query_bytes = urllib.urlencode(args, True)
|
||||
logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"GET",
|
||||
path.encode("ascii"),
|
||||
query_bytes=query_bytes,
|
||||
retry_on_dns_fail=retry_on_dns_fail
|
||||
)
|
||||
|
||||
body = yield readBody(response)
|
||||
|
||||
defer.returnValue(json.loads(body))
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _create_request(self, destination, method, path_bytes, param_bytes=b"",
|
||||
query_bytes=b"", producer=None, headers_dict={},
|
||||
retry_on_dns_fail=True):
|
||||
""" Creates and sends a request to the given url
|
||||
"""
|
||||
headers_dict[b"User-Agent"] = [b"Synapse"]
|
||||
headers_dict[b"Host"] = [destination]
|
||||
|
||||
url_bytes = urlparse.urlunparse(
|
||||
("", "", path_bytes, param_bytes, query_bytes, "",)
|
||||
)
|
||||
|
||||
logger.debug("Sending request to %s: %s %s",
|
||||
destination, method, url_bytes)
|
||||
logger.debug("Sending request to %s: %s %s;%s?%s",
|
||||
destination, method, path_bytes, param_bytes, query_bytes)
|
||||
|
||||
logger.debug(
|
||||
"Types: %s",
|
||||
@@ -99,14 +190,13 @@ class BaseHttpClient(object):
|
||||
|
||||
retries_left = 5
|
||||
|
||||
endpoint = self._getEndpoint(reactor, destination)
|
||||
# TODO: setup and pass in an ssl_context to enable TLS
|
||||
endpoint = matrix_endpoint(
|
||||
reactor, destination, timeout=10,
|
||||
ssl_context_factory=self.hs.tls_context_factory
|
||||
)
|
||||
|
||||
while True:
|
||||
|
||||
producer = None
|
||||
if body_callback:
|
||||
producer = body_callback(method, url_bytes, headers_dict)
|
||||
|
||||
try:
|
||||
response = yield self.agent.request(
|
||||
destination,
|
||||
@@ -152,241 +242,6 @@ class BaseHttpClient(object):
|
||||
defer.returnValue(response)
|
||||
|
||||
|
||||
class MatrixHttpClient(BaseHttpClient):
|
||||
""" Wrapper around the twisted HTTP client api. Implements
|
||||
|
||||
Attributes:
|
||||
agent (twisted.web.client.Agent): The twisted Agent used to send the
|
||||
requests.
|
||||
"""
|
||||
|
||||
RETRY_DNS_LOOKUP_FAILURES = "__retry_dns"
|
||||
|
||||
def __init__(self, hs):
|
||||
self.signing_key = hs.config.signing_key[0]
|
||||
self.server_name = hs.hostname
|
||||
BaseHttpClient.__init__(self, hs)
|
||||
|
||||
def sign_request(self, destination, method, url_bytes, headers_dict,
|
||||
content=None):
|
||||
request = {
|
||||
"method": method,
|
||||
"uri": url_bytes,
|
||||
"origin": self.server_name,
|
||||
"destination": destination,
|
||||
}
|
||||
|
||||
if content is not None:
|
||||
request["content"] = content
|
||||
|
||||
request = sign_json(request, self.server_name, self.signing_key)
|
||||
|
||||
auth_headers = []
|
||||
|
||||
for key, sig in request["signatures"][self.server_name].items():
|
||||
auth_headers.append(bytes(
|
||||
"X-Matrix origin=%s,key=\"%s\",sig=\"%s\"" % (
|
||||
self.server_name, key, sig,
|
||||
)
|
||||
))
|
||||
|
||||
headers_dict[b"Authorization"] = auth_headers
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def put_json(self, destination, path, data={}, json_data_callback=None):
|
||||
""" Sends the specifed json data using PUT
|
||||
|
||||
Args:
|
||||
destination (str): The remote server to send the HTTP request
|
||||
to.
|
||||
path (str): The HTTP path.
|
||||
data (dict): A dict containing the data that will be used as
|
||||
the request body. This will be encoded as JSON.
|
||||
json_data_callback (callable): A callable returning the dict to
|
||||
use as the request body.
|
||||
|
||||
Returns:
|
||||
Deferred: Succeeds when we get a 2xx HTTP response. The result
|
||||
will be the decoded JSON body. On a 4xx or 5xx error response a
|
||||
CodeMessageException is raised.
|
||||
"""
|
||||
|
||||
if not json_data_callback:
|
||||
def json_data_callback():
|
||||
return data
|
||||
|
||||
def body_callback(method, url_bytes, headers_dict):
|
||||
json_data = json_data_callback()
|
||||
self.sign_request(
|
||||
destination, method, url_bytes, headers_dict, json_data
|
||||
)
|
||||
producer = _JsonProducer(json_data)
|
||||
return producer
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"PUT",
|
||||
path.encode("ascii"),
|
||||
body_callback=body_callback,
|
||||
headers_dict={"Content-Type": ["application/json"]},
|
||||
)
|
||||
|
||||
logger.debug("Getting resp body")
|
||||
body = yield readBody(response)
|
||||
logger.debug("Got resp body")
|
||||
|
||||
defer.returnValue((response.code, body))
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_json(self, destination, path, args={}, retry_on_dns_fail=True):
|
||||
""" Get's some json from the given host homeserver and path
|
||||
|
||||
Args:
|
||||
destination (str): The remote server to send the HTTP request
|
||||
to.
|
||||
path (str): The HTTP path.
|
||||
args (dict): A dictionary used to create query strings, defaults to
|
||||
None.
|
||||
**Note**: The value of each key is assumed to be an iterable
|
||||
and *not* a string.
|
||||
|
||||
Returns:
|
||||
Deferred: Succeeds when we get *any* HTTP response.
|
||||
|
||||
The result of the deferred is a tuple of `(code, response)`,
|
||||
where `response` is a dict representing the decoded JSON body.
|
||||
"""
|
||||
logger.debug("get_json args: %s", args)
|
||||
|
||||
query_bytes = urllib.urlencode(args, True)
|
||||
logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
|
||||
|
||||
def body_callback(method, url_bytes, headers_dict):
|
||||
self.sign_request(destination, method, url_bytes, headers_dict)
|
||||
return None
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"GET",
|
||||
path.encode("ascii"),
|
||||
query_bytes=query_bytes,
|
||||
body_callback=body_callback,
|
||||
retry_on_dns_fail=retry_on_dns_fail
|
||||
)
|
||||
|
||||
body = yield readBody(response)
|
||||
|
||||
defer.returnValue(json.loads(body))
|
||||
|
||||
def _getEndpoint(self, reactor, destination):
|
||||
return matrix_endpoint(
|
||||
reactor, destination, timeout=10,
|
||||
ssl_context_factory=self.hs.tls_context_factory
|
||||
)
|
||||
|
||||
|
||||
class IdentityServerHttpClient(BaseHttpClient):
|
||||
"""Separate HTTP client for talking to the Identity servers since they
|
||||
don't use SRV records and talk x-www-form-urlencoded rather than JSON.
|
||||
"""
|
||||
def _getEndpoint(self, reactor, destination):
|
||||
#TODO: This should be talking TLS
|
||||
return matrix_endpoint(reactor, destination, timeout=10)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def post_urlencoded_get_json(self, destination, path, args={}):
|
||||
logger.debug("post_urlencoded_get_json args: %s", args)
|
||||
query_bytes = urllib.urlencode(args, True)
|
||||
|
||||
def body_callback(method, url_bytes, headers_dict):
|
||||
return FileBodyProducer(StringIO(query_bytes))
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"POST",
|
||||
path.encode("ascii"),
|
||||
body_callback=body_callback,
|
||||
headers_dict={
|
||||
"Content-Type": ["application/x-www-form-urlencoded"]
|
||||
}
|
||||
)
|
||||
|
||||
body = yield readBody(response)
|
||||
|
||||
defer.returnValue(json.loads(body))
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_json(self, destination, path, args={}, retry_on_dns_fail=True):
|
||||
""" Get's some json from the given host homeserver and path
|
||||
|
||||
Args:
|
||||
destination (str): The remote server to send the HTTP request
|
||||
to.
|
||||
path (str): The HTTP path.
|
||||
args (dict): A dictionary used to create query strings, defaults to
|
||||
None.
|
||||
**Note**: The value of each key is assumed to be an iterable
|
||||
and *not* a string.
|
||||
|
||||
Returns:
|
||||
Deferred: Succeeds when we get *any* HTTP response.
|
||||
|
||||
The result of the deferred is a tuple of `(code, response)`,
|
||||
where `response` is a dict representing the decoded JSON body.
|
||||
"""
|
||||
logger.debug("get_json args: %s", args)
|
||||
|
||||
query_bytes = urllib.urlencode(args, True)
|
||||
logger.debug("Query bytes: %s Retry DNS: %s", args, retry_on_dns_fail)
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"GET",
|
||||
path.encode("ascii"),
|
||||
query_bytes=query_bytes,
|
||||
retry_on_dns_fail=retry_on_dns_fail,
|
||||
body_callback=None
|
||||
)
|
||||
|
||||
body = yield readBody(response)
|
||||
|
||||
defer.returnValue(json.loads(body))
|
||||
|
||||
|
||||
class CaptchaServerHttpClient(MatrixHttpClient):
|
||||
"""Separate HTTP client for talking to google's captcha servers"""
|
||||
|
||||
def _getEndpoint(self, reactor, destination):
|
||||
return matrix_endpoint(reactor, destination, timeout=10)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def post_urlencoded_get_raw(self, destination, path, accept_partial=False,
|
||||
args={}):
|
||||
query_bytes = urllib.urlencode(args, True)
|
||||
|
||||
def body_callback(method, url_bytes, headers_dict):
|
||||
return FileBodyProducer(StringIO(query_bytes))
|
||||
|
||||
response = yield self._create_request(
|
||||
destination.encode("ascii"),
|
||||
"POST",
|
||||
path.encode("ascii"),
|
||||
body_callback=body_callback,
|
||||
headers_dict={
|
||||
"Content-Type": ["application/x-www-form-urlencoded"]
|
||||
}
|
||||
)
|
||||
|
||||
try:
|
||||
body = yield readBody(response)
|
||||
defer.returnValue(body)
|
||||
except PartialDownloadError as e:
|
||||
if accept_partial:
|
||||
defer.returnValue(e.response)
|
||||
else:
|
||||
raise e
|
||||
|
||||
|
||||
def _print_ex(e):
|
||||
if hasattr(e, "reasons") and e.reasons:
|
||||
for ex in e.reasons:
|
||||
@@ -399,9 +254,6 @@ class _JsonProducer(object):
|
||||
""" Used by the twisted http client to create the HTTP body from json
|
||||
"""
|
||||
def __init__(self, jsn):
|
||||
self.reset(jsn)
|
||||
|
||||
def reset(self, jsn):
|
||||
self.body = encode_canonical_json(jsn)
|
||||
self.length = len(self.body)
|
||||
|
||||
|
||||
@@ -38,8 +38,8 @@ class ContentRepoResource(resource.Resource):

Uploads are POSTed to wherever this Resource is linked to. This resource
returns a "content token" which can be used to GET this content again. The
token is typically a path, but it may not be. Tokens can expire, be
one-time uses, etc.
token is typically a path, but it may not be. Tokens can expire, be one-time
uses, etc.

In this case, the token is a path to the file and contains 3 interesting
sections:
@@ -175,9 +175,10 @@ class ContentRepoResource(resource.Resource):
with open(fname, "wb") as f:
f.write(request.content.read())


# FIXME (erikj): These should use constants.
file_name = os.path.basename(fname)
# FIXME: we can't assume what the repo's public mounted path is
# FIXME: we can't assume what the public mounted path of the repo is
# ...plus self-signed SSL won't work to remote clients anyway
# ...and we can't assume that it's SSL anyway, as we might want to
# server it via the non-SSL listener...
@@ -200,3 +201,6 @@ class ContentRepoResource(resource.Resource):
500,
json.dumps({"error": "Internal server error"}),
send_cors=True)
@@ -1,89 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014 OpenMarket Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
from twisted.web.resource import Resource
|
||||
from synapse.http.server import respond_with_json_bytes
|
||||
from syutil.crypto.jsonsign import sign_json
|
||||
from syutil.base64util import encode_base64
|
||||
from syutil.jsonutil import encode_canonical_json
|
||||
from OpenSSL import crypto
|
||||
import logging
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class LocalKey(Resource):
|
||||
"""HTTP resource containing encoding the TLS X.509 certificate and NACL
|
||||
signature verification keys for this server::
|
||||
|
||||
GET /key HTTP/1.1
|
||||
|
||||
HTTP/1.1 200 OK
|
||||
Content-Type: application/json
|
||||
{
|
||||
"server_name": "this.server.example.com"
|
||||
"verify_keys": {
|
||||
"algorithm:version": # base64 encoded NACL verification key.
|
||||
},
|
||||
"tls_certificate": # base64 ASN.1 DER encoded X.509 tls cert.
|
||||
"signatures": {
|
||||
"this.server.example.com": {
|
||||
"algorithm:version": # NACL signature for this server.
|
||||
}
|
||||
}
|
||||
}
|
||||
"""
|
||||
|
||||
def __init__(self, hs):
|
||||
self.hs = hs
|
||||
self.response_body = encode_canonical_json(
|
||||
self.response_json_object(hs.config)
|
||||
)
|
||||
Resource.__init__(self)
|
||||
|
||||
@staticmethod
|
||||
def response_json_object(server_config):
|
||||
verify_keys = {}
|
||||
for key in server_config.signing_key:
|
||||
verify_key_bytes = key.verify_key.encode()
|
||||
key_id = "%s:%s" % (key.alg, key.version)
|
||||
verify_keys[key_id] = encode_base64(verify_key_bytes)
|
||||
|
||||
x509_certificate_bytes = crypto.dump_certificate(
|
||||
crypto.FILETYPE_ASN1,
|
||||
server_config.tls_certificate
|
||||
)
|
||||
json_object = {
|
||||
u"server_name": server_config.server_name,
|
||||
u"verify_keys": verify_keys,
|
||||
u"tls_certificate": encode_base64(x509_certificate_bytes)
|
||||
}
|
||||
for key in server_config.signing_key:
|
||||
json_object = sign_json(
|
||||
json_object,
|
||||
server_config.server_name,
|
||||
key,
|
||||
)
|
||||
|
||||
return json_object
|
||||
|
||||
def render_GET(self, request):
|
||||
return respond_with_json_bytes(request, 200, self.response_body)
|
||||
|
||||
def getChild(self, name, request):
|
||||
if name == '':
|
||||
return self
|
||||
@@ -167,8 +167,7 @@ class Notifier(object):
)

def eb(failure):
logger.error(
"Failed to notify listener",
logger.error("Failed to notify listener",
exc_info=(
failure.type,
failure.value,
@@ -208,7 +207,7 @@ class Notifier(object):
)

if timeout:
reactor.callLater(timeout/1000.0, self._timeout_listener, listener)
reactor.callLater(timeout/1000, self._timeout_listener, listener)

self._register_with_keys(listener)
@@ -15,8 +15,7 @@


from . import (
room, events, register, login, profile, presence, initial_sync, directory,
voip, admin,
room, events, register, login, profile, presence, initial_sync, directory
)
@@ -43,5 +42,3 @@ class RestServletFactory(object):
presence.register_servlets(hs, client_resource)
initial_sync.register_servlets(hs, client_resource)
directory.register_servlets(hs, client_resource)
voip.register_servlets(hs, client_resource)
admin.register_servlets(hs, client_resource)
@@ -1,47 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2014 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from twisted.internet import defer

from synapse.api.errors import AuthError, SynapseError
from base import RestServlet, client_path_pattern

import logging

logger = logging.getLogger(__name__)


class WhoisRestServlet(RestServlet):
    PATTERN = client_path_pattern("/admin/whois/(?P<user_id>[^/]*)")

    @defer.inlineCallbacks
    def on_GET(self, request, user_id):
        target_user = self.hs.parse_userid(user_id)
        auth_user = yield self.auth.get_user_by_req(request)
        is_admin = yield self.auth.is_server_admin(auth_user)

        if not is_admin and target_user != auth_user:
            raise AuthError(403, "You are not a server admin")

        if not target_user.is_mine:
            raise SynapseError(400, "Can only whois a local user")

        ret = yield self.handlers.admin_handler.get_whois(target_user)

        defer.returnValue((200, ret))


def register_servlets(hs, http_server):
    WhoisRestServlet(hs).register(http_server)

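As a usage note for the whois servlet above, a hedged client-side sketch follows. The /_matrix/client/api/v1 prefix normally added by client_path_pattern and the availability of the requests package are assumptions, not part of this diff; the homeserver URL and access token are placeholders.

import requests


def whois(base_url, access_token, user_id):
    # Query the endpoint registered by WhoisRestServlet above.
    resp = requests.get(
        "%s/_matrix/client/api/v1/admin/whois/%s" % (base_url, user_id),
        params={"access_token": access_token},
    )
    # 403 unless the caller is a server admin or the target user themselves.
    resp.raise_for_status()
    return resp.json()
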
@@ -16,7 +16,7 @@

from twisted.internet import defer

from synapse.api.errors import AuthError, SynapseError, Codes
from synapse.api.errors import SynapseError, Codes
from base import RestServlet, client_path_pattern

import json
@@ -45,8 +45,6 @@ class ClientDirectoryServer(RestServlet):

    @defer.inlineCallbacks
    def on_PUT(self, request, room_alias):
        user = yield self.auth.get_user_by_req(request)

        content = _parse_json(request)
        if not "room_id" in content:
            raise SynapseError(400, "Missing room_id key",
@@ -71,31 +69,12 @@ class ClientDirectoryServer(RestServlet):

        try:
            yield dir_handler.create_association(
                user.to_string(), room_alias, room_id, servers
                room_alias, room_id, servers
            )
        except SynapseError as e:
            raise e
        except:
            logger.exception("Failed to create association")
            raise

        defer.returnValue((200, {}))

    @defer.inlineCallbacks
    def on_DELETE(self, request, room_alias):
        user = yield self.auth.get_user_by_req(request)

        is_admin = yield self.auth.is_server_admin(user)
        if not is_admin:
            raise AuthError(403, "You need to be a server admin")

        dir_handler = self.handlers.directory_handler

        room_alias = self.hs.parse_roomalias(urllib.unquote(room_alias))

        yield dir_handler.delete_association(
            user.to_string(), room_alias
        )

        defer.returnValue((200, {}))

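A hedged sketch of exercising the alias endpoints handled by ClientDirectoryServer above. The servlet's URL pattern is not shown in these hunks, so the path below is a placeholder built on the usual /_matrix/client/api/v1 prefix; requests is assumed to be available, and the "room_id" body key mirrors the check in on_PUT.

import requests

# Placeholder URL; the real PATTERN is defined outside the hunks shown above.
ALIAS_URL = "https://this.server.example.com/_matrix/client/api/v1/directory/room/%23demo:example.com"


def set_alias(access_token, room_id):
    # The body must contain "room_id", matching the check in on_PUT.
    resp = requests.put(
        ALIAS_URL,
        params={"access_token": access_token},
        json={"room_id": room_id},
    )
    resp.raise_for_status()


def remove_alias(access_token):
    # Per on_DELETE above, only server admins may remove an association.
    resp = requests.delete(ALIAS_URL, params={"access_token": access_token})
    resp.raise_for_status()
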
@@ -59,7 +59,7 @@ class EventRestServlet(RestServlet):
        event = yield handler.get_event(auth_user, event_id)

        if event:
            defer.returnValue((200, self.hs.serialize_event(event)))
            defer.returnValue((200, event.get_dict()))
        else:
            defer.returnValue((404, "Event not found."))

@@ -70,28 +70,7 @@ class LoginFallbackRestServlet(RestServlet):
    def on_GET(self, request):
        # TODO(kegan): This should be returning some HTML which is capable of
        # hitting LoginRestServlet
        return (200, {})


class PasswordResetRestServlet(RestServlet):
    PATTERN = client_path_pattern("/login/reset")

    @defer.inlineCallbacks
    def on_POST(self, request):
        reset_info = _parse_json(request)
        try:
            email = reset_info["email"]
            user_id = reset_info["user_id"]
            handler = self.handlers.login_handler
            yield handler.reset_password(user_id, email)
            # purposefully give no feedback to avoid people hammering different
            # combinations.
            defer.returnValue((200, {}))
        except KeyError:
            raise SynapseError(
                400,
                "Missing keys. Requires 'email' and 'user_id'."
            )
        return (200, "")


def _parse_json(request):
@@ -106,4 +85,3 @@ def _parse_json(request):

def register_servlets(hs, http_server):
    LoginRestServlet(hs).register(http_server)
    # TODO PasswordResetRestServlet(hs).register(http_server)

Some files were not shown because too many files have changed in this diff.