Compare commits

1377 Commits

Author SHA1 Message Date
Erik Johnston
d3861b4442 Merge branch 'release-v0.11.0' of github.com:matrix-org/synapse 2015-11-17 15:45:43 +00:00
Erik Johnston
bceec65913 Slightly more aggressive retry timers at HTTP level 2015-11-17 15:10:05 +00:00
Erik Johnston
9eff52d1a6 Bump changelog and version 2015-11-17 14:38:36 +00:00
Erik Johnston
0186aef814 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.11.0 2015-11-17 14:37:13 +00:00
Erik Johnston
e503848990 Merge pull request #349 from stevenhammerton/sh-cas-auth-via-homeserver
SH CAS auth via homeserver
2015-11-17 14:36:15 +00:00
Daniel Wagner-Hall
90b3a98df7 Merge pull request #380 from matrix-org/daniel/jenkins-sytest
Run sytests on jenkins
2015-11-17 09:17:03 -05:00
Daniel Wagner-Hall
d34990141e Merge pull request #379 from matrix-org/daniel/jenkins
Output results files on jenkins
2015-11-17 09:14:20 -05:00
Steven Hammerton
f20d064e05 Always check guest = true in macaroons 2015-11-17 10:58:05 +00:00
Steven Hammerton
f5e25c5f35 Merge branch 'develop' into sh-cas-auth-via-homeserver 2015-11-17 10:55:41 +00:00
Daniel Wagner-Hall
f4db76692f Run sytests on jenkins 2015-11-16 15:40:55 -05:00
Daniel Wagner-Hall
09bb5cf02f Output results files on jenkins
Outputs:
 * results.xml
 * coverage.xml
 * violations.flake8.log
2015-11-16 14:35:57 -05:00
Daniel Wagner-Hall
3b90df21d5 Merge pull request #376 from matrix-org/daniel/jenkins
Pull out jenkins script into a checked in script
2015-11-16 12:50:26 -05:00
Paul Evans
1654d3b329 Merge pull request #377 from matrix-org/paul/tiny-fixes
Don't complain if /make_join response lacks 'prev_state' list (SYN-517)
2015-11-13 17:43:30 +00:00
Paul "LeoNerd" Evans
aca6e5bf46 Don't complain if /make_join response lacks 'prev_state' list (SYN-517) 2015-11-13 17:27:25 +00:00
Paul "LeoNerd" Evans
4fbe6ca401 Merge branch 'develop' into paul/tiny-fixes 2015-11-13 17:26:59 +00:00
Daniel Wagner-Hall
233af7c74b Pull out jenkins script into a checked in script 2015-11-13 16:22:51 +00:00
Daniel Wagner-Hall
6fed9fd697 Merge pull request #374 from matrix-org/daniel/guestleave
Allow guests to /room/:room_id/{join,leave}
2015-11-13 11:57:57 +00:00
Daniel Wagner-Hall
9c3f4f8dfd Allow guests to /room/:room_id/{join,leave} 2015-11-13 11:56:58 +00:00
Erik Johnston
0644f0eb7d Bump version and change log 2015-11-13 11:43:29 +00:00
Erik Johnston
da3dd4867d Merge branch 'develop' into release-v0.11.0 2015-11-13 11:40:17 +00:00
Richard van der Hoff
e4d622aaaf Implementation of state rollback in /sync
Implementation of SPEC-254: roll back the state dictionary to how it looked at
the start of the timeline.

Merged PR https://github.com/matrix-org/synapse/pull/373
2015-11-13 10:58:56 +00:00
Richard van der Hoff
fddedd51d9 Fix a few race conditions in the state calculation
Be a bit more careful about how we calculate the state to be returned by
/sync. In a few places, it was possible for /sync to return slightly later
state than that represented by the next_batch token and the timeline. In
particular, the following cases were susceptible:

* On a full state sync, for an active room
* During a per-room incremental sync with a timeline gap
* When the user has just joined a room. (Refactor check_joined_room to make it
  less magical)

Also, use store.get_state_for_events() (and thus the existing stategroups) to
calculate the state corresponding to a particular sync position, rather than
state_handler.compute_event_context(), which recalculates from first principles
(and tends to miss some state).

Merged from PR https://github.com/matrix-org/synapse/pull/372
2015-11-13 10:39:09 +00:00
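A minimal sketch of the approach this commit describes, with a hypothetical helper and a guessed signature (Synapse's real API differs): derive the state at a sync position from the persisted state groups via get_state_for_events(), rather than recomputing an event context from first principles.

    # Illustrative only: state at the start of a timeline, looked up from
    # the stored state groups, as the commit message describes.
    def state_at_timeline_start(store, timeline_events):
        first_event_id = timeline_events[0].event_id
        # Returns the state dictionary that was current at that event,
        # reusing the persisted state groups instead of recalculating.
        return store.get_state_for_events([first_event_id])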
Richard van der Hoff
5ab4b0afe8 Make handlers.sync return a state dictionary, instead of an event list.
Basically this moves the process of flattening the existing dictionary into a
list up to rest.client.*, instead of doing it in handlers.sync. This simplifies
a bit of the code in handlers.sync, but it is also going to be somewhat
beneficial in the next stage of my hacking on SPEC-254.

Merged from PR #371
2015-11-13 10:35:01 +00:00
Richard van der Hoff
5dea4d37d1 Update some comments
Add a couple of type annotations, docstrings, and other comments, in the
interest of keeping track of what types I have.

Merged from pull request #370.
2015-11-13 10:31:15 +00:00
Daniel Wagner-Hall
fc27ca9006 Merge pull request #369 from matrix-org/daniel/guestnonevents
Return non-room events from guest /events calls
2015-11-12 17:28:48 +00:00
Daniel Wagner-Hall
468a2ed4ec Return non-room events from guest /events calls 2015-11-12 16:45:28 +00:00
Erik Johnston
018b504f5b Merge pull request #368 from matrix-org/erikj/fix_federation_profile
Fix missing profile data in federation joins
2015-11-12 16:30:02 +00:00
Erik Johnston
49f1758d74 Merge pull request #366 from matrix-org/erikj/search_fix_sqlite_faster
Use a (hopefully) more efficient SQL query for doing recency based room search
2015-11-12 16:24:32 +00:00
Erik Johnston
c0b3554401 Fix missing profile data in federation joins
There was a regression where we stopped including profile data in
initial joins for rooms joined over federation.
2015-11-12 16:19:55 +00:00
Erik Johnston
3de46c7755 Trailing whitespace 2015-11-12 15:36:43 +00:00
Erik Johnston
8fd8e72cec Expand comment 2015-11-12 15:33:47 +00:00
Richard van der Hoff
78f6010207 Fix an issue with ignoring power_level changes on divergent graphs
Changes to m.room.power_levels events are supposed to be handled at a high
priority; however a typo meant that the relevant bit of code was never
executed, so they were handled just like any other state change - which meant
that a bad person could cause room state changes by forking the graph from a
point in history when they were allowed to do so.
2015-11-12 15:24:59 +00:00
Daniel Wagner-Hall
06bfd0a3c0 Merge pull request #367 from matrix-org/daniel/readafterleave
Tweak guest access permissions
2015-11-12 15:22:02 +00:00
Erik Johnston
764e79d051 Comment 2015-11-12 15:19:56 +00:00
Daniel Wagner-Hall
0d08670f61 Merge pull request #360 from matrix-org/daniel/guestroominitialsync
Allow guest access to room initialSync
2015-11-12 15:19:55 +00:00
Erik Johnston
320408ef47 Fix SQL syntax 2015-11-12 15:09:45 +00:00
Daniel Wagner-Hall
fb7e260a20 Tweak guest access permissions
* Allow world_readable rooms to be read by guests who have joined and left
* Allow regular users to access world_readable rooms
2015-11-12 15:02:00 +00:00
Erik Johnston
14a9d805b9 Use a (hopefully) more efficient SQL query for doing recency based room search 2015-11-12 14:48:39 +00:00
Erik Johnston
39de87869c Fix bug where assumed dict was namedtuple 2015-11-12 14:47:48 +00:00
Daniel Wagner-Hall
473a239d83 Merge pull request #364 from matrix-org/daniel/guestdisplaynames
Allow guests to set their display names

Depends: https://github.com/matrix-org/synapse/pull/363
Tests in https://github.com/matrix-org/sytest/pull/66
2015-11-12 13:44:46 +00:00
Daniel Wagner-Hall
0a93df5f9c Allow guests to set their display names
Depends: https://github.com/matrix-org/synapse/pull/363
Tests in https://github.com/matrix-org/sytest/pull/66
2015-11-12 13:44:39 +00:00
Daniel Wagner-Hall
8ea5dccea1 Merge pull request #363 from matrix-org/daniel/guestscanjoin
Consider joined guest users as joined users

Otherwise they're inconveniently allowed to write events to the room
but not to read them from the room.
2015-11-12 13:37:09 +00:00
Daniel Wagner-Hall
50f1afbd5b Consider joined guest users as joined users
Otherwise they're inconveniently allowed to write events to the room
but not to read them from the room.
2015-11-12 13:37:07 +00:00
Daniel Wagner-Hall
2fc81af06a Merge pull request #362 from matrix-org/daniel/raceyraceyfunfun
Fix race creating directories
2015-11-12 12:05:21 +00:00
Daniel Wagner-Hall
6a9c4cfd0b Fix race creating directories 2015-11-12 11:58:48 +00:00
Erik Johnston
884e601683 Merge branch 'release-v0.11.0' of github.com:matrix-org/synapse into develop 2015-11-11 18:35:16 +00:00
Erik Johnston
63b28c7816 Update date 2015-11-11 18:21:26 +00:00
Erik Johnston
e327327174 Update CHANGES 2015-11-11 18:17:55 +00:00
Erik Johnston
aa3ab6c6a0 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.11.0 2015-11-11 18:17:49 +00:00
Erik Johnston
04034d0b56 Merge pull request #361 from matrix-org/daniel/guestcontext
Allow guests to access room context API
2015-11-11 18:17:35 +00:00
Erik Johnston
6341be45c6 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.11.0 2015-11-11 17:51:14 +00:00
Daniel Wagner-Hall
e93d550b79 Allow guests to access room context API 2015-11-11 17:49:44 +00:00
Erik Johnston
e21cef9bb5 Merge pull request #359 from matrix-org/markjh/incremental_indexing
Incremental background updates for db indexes
2015-11-11 17:19:51 +00:00
Mark Haines
e1627388d1 Fix param style to work on both sqlite and postgres 2015-11-11 17:14:56 +00:00
Daniel Wagner-Hall
f15ba926cc Allow guest access to room initialSync 2015-11-11 17:13:24 +00:00
Daniel Wagner-Hall
5d098a32c9 Merge pull request #358 from matrix-org/daniel/publicwritable
Return world_readable and guest_can_join in /publicRooms
2015-11-11 14:38:31 +00:00
Steven Hammerton
ffdc8e5e1c Snakes not camels 2015-11-11 14:26:47 +00:00
Mark Haines
940a161192 Fix the background update 2015-11-11 13:59:40 +00:00
Steven Hammerton
2b779af10f Minor review fixes 2015-11-11 11:21:43 +00:00
Steven Hammerton
dd2eb49385 Share more code between macaroon validation 2015-11-11 11:12:35 +00:00
Daniel Wagner-Hall
cf437900e0 Return world_readable and guest_can_join in /publicRooms 2015-11-10 17:10:27 +00:00
Daniel Wagner-Hall
466b4ec01d Merge pull request #355 from matrix-org/daniel/anonymouswriting
Allow guest users to join and message rooms
2015-11-10 17:00:25 +00:00
Daniel Wagner-Hall
38d82edf0e Allow guest users to join and message rooms 2015-11-10 16:57:13 +00:00
Mark Haines
90b503216c Use a background task to update databases to use the full text search 2015-11-10 16:20:13 +00:00
Mark Haines
36c58b18a3 Test for background updates 2015-11-10 15:51:40 +00:00
Mark Haines
a412b9a465 Run the background updates when starting synapse. 2015-11-10 15:50:58 +00:00
Daniel Wagner-Hall
82e8a2d763 Merge pull request #356 from matrix-org/daniel/3pidyetagain
Get display name from identity server, not client
2015-11-10 12:44:17 +00:00
Mark Haines
2ede7aa8a1 Add background update task for reindexing event search 2015-11-09 19:29:32 +00:00
Mark Haines
889388f105 Merge pull request #357 from matrix-org/rav/SYN-516
Don't fiddle with results returned by event sources
2015-11-09 19:26:46 +00:00
Richard van der Hoff
c7db2068c8 Don't fiddle with results returned by event sources
Overwriting hashes returned by other methods is poor form.

Fixes: SYN-516
2015-11-09 18:09:46 +00:00
Daniel Wagner-Hall
0d63dc3ec9 Get display name from identity server, not client 2015-11-09 17:26:43 +00:00
Mark Haines
c6a01f2ed0 Add storage module for tracking background updates.
The progress for each background update is stored as a JSON blob in the
database. Each background update is broken up into separate batches.
The batch size is automatically tuned to try to avoid blocking single
threaded databases for too long.
2015-11-09 17:26:27 +00:00
Richard van der Hoff
9107ed23b7 Add a couple of unit tests for room/<x>/messages
... merely because I was trying to figure out how it worked, and couldn't.
2015-11-09 16:16:43 +00:00
Mark Haines
b1953a9627 Merge pull request #354 from matrix-org/markjh/SYN-513
SYN-513: Include updates for rooms that have had all their tags deleted
2015-11-09 15:16:17 +00:00
Mark Haines
bbe10e8be7 Merge branch 'develop' into markjh/SYN-513
Conflicts:
	synapse/storage/tags.py
2015-11-09 15:01:59 +00:00
Mark Haines
c4135d85e1 SYN-513: Include updates for rooms that have had all their tags deleted 2015-11-09 14:53:08 +00:00
Matthew Hodgson
dd40fb68e4 fix comedy important missing comma breaking recent-ordered FTS on sqlite 2015-11-08 16:04:37 +00:00
Matthew Hodgson
767c20a869 add a key existence check to tags_by_room to avoid /events 500'ing when testing against vector 2015-11-06 20:49:57 +01:00
Daniel Wagner-Hall
5335bf9c34 Merge pull request #353 from matrix-org/daniel/oops
Remove accidentally added ID column
2015-11-06 14:30:10 +00:00
Daniel Wagner-Hall
f2c4ee41b9 Remove accidentally added ID column 2015-11-06 14:27:49 +00:00
Matthew Hodgson
0da4b11efb remove references to matrix.org/beta 2015-11-06 11:00:06 +01:00
Steven Hammerton
0b31223c7a Updates to fallback CAS login to do new token login 2015-11-06 09:57:17 +00:00
Steven Hammerton
fece2f5c77 Merge branch 'develop' into sh-cas-auth-via-homeserver 2015-11-05 20:59:45 +00:00
Erik Johnston
545a7b291a Remove anonymous access, since its not ready yet 2015-11-05 18:06:18 +00:00
Erik Johnston
f23af34729 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.11.0 2015-11-05 18:00:51 +00:00
Erik Johnston
6be1b4b113 Merge pull request #350 from matrix-org/erikj/search
Implement pagination, order by and groups in search
2015-11-05 17:52:32 +00:00
Erik Johnston
3a02a13e38 Add PR 2015-11-05 17:26:56 +00:00
Erik Johnston
66d36b8e41 Be explicit about what we're doing 2015-11-05 17:26:19 +00:00
Erik Johnston
2aa98ff3bc Remove redundant test 2015-11-05 17:25:50 +00:00
Erik Johnston
5ee070d21f Increment by one, not five 2015-11-05 17:25:33 +00:00
Erik Johnston
f1dcaf3296 Bump changelog and version number 2015-11-05 17:20:28 +00:00
Daniel Wagner-Hall
2cebe53545 Exchange 3pid invites for m.room.member invites 2015-11-05 16:43:19 +00:00
Daniel Wagner-Hall
32fc0737d6 Merge pull request #351 from matrix-org/daniel/fixtox
Fix tox config after fa1cf5ef34
2015-11-05 16:43:09 +00:00
Daniel Wagner-Hall
4df491b922 Fix tox config after fa1cf5ef34 2015-11-05 16:41:32 +00:00
Erik Johnston
1ad6222ebf COMMENTS 2015-11-05 16:29:16 +00:00
Erik Johnston
5bc690408d Merge pull request #340 from matrix-org/erikj/server_retries
Retry dead servers a lot less often
2015-11-05 16:15:50 +00:00
Erik Johnston
3640ddfbf6 Error handling 2015-11-05 16:10:54 +00:00
Erik Johnston
729ea933ea Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-11-05 15:43:52 +00:00
Erik Johnston
347146be29 Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-11-05 15:35:17 +00:00
Erik Johnston
7a5ea067e2 Merge branch 'release-v0.10.1' of github.com:matrix-org/synapse into develop 2015-11-05 15:35:05 +00:00
Erik Johnston
7301e05122 Implement basic pagination for search results 2015-11-05 15:04:08 +00:00
Daniel Wagner-Hall
ca2f90742d Open up /events to anonymous users for room events only
Squash-merge of PR #345 from daniel/anonymousevents
2015-11-05 14:32:26 +00:00
Steven Hammerton
414a4a71b4 Allow hs to do CAS login completely and issue the client with a login token that can be redeemed for the usual successful login response 2015-11-05 14:06:48 +00:00
Mark Haines
7a369e8a55 Merge pull request #347 from matrix-org/markjh/check_filter
Remove fields that are both unspecified and unused from the filter checks
2015-11-05 11:15:39 +00:00
Steven Hammerton
45f1827fb7 Add service URL to CAS config 2015-11-04 23:32:30 +00:00
Erik Johnston
05c326d445 Implement order and group by 2015-11-04 17:57:44 +00:00
Daniel Wagner-Hall
4e62ffdb21 Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-11-04 17:31:01 +00:00
Daniel Wagner-Hall
f522f50a08 Allow guests to register and call /events?room_id=
This follows the same flows-based flow as regular registration, but as
the only implemented flow has no requirements, it auto-succeeds. In the
future, other flows (e.g. captcha) may be required, so clients should
treat this like the regular registration flow choices.
2015-11-04 17:29:07 +00:00
Daniel Wagner-Hall
1758187715 Merge pull request #339 from matrix-org/daniel/removesomelies
Remove unused arguments and code
2015-11-04 17:26:41 +00:00
Mark Haines
33b3e04049 Merge branch 'develop' into daniel/removesomelies
Conflicts:
	synapse/notifier.py
2015-11-04 16:01:00 +00:00
Mark Haines
23cfd32e64 Merge pull request #346 from matrix-org/markjh/remove_lock_manager
Remove the LockManager class because it wasn't being used
2015-11-04 15:49:45 +00:00
Mark Haines
285d056629 Remove fields that are both unspecified and unused from the filter checks, check the right top level definitions in the filter 2015-11-04 15:47:19 +00:00
Mark Haines
c452dabc3d Remove the LockManager class because it wasn't being used 2015-11-04 14:08:15 +00:00
Mark Haines
f74f48e9e6 Merge pull request #341 from matrix-org/markjh/v2_sync_receipts
Include read receipts in v2 sync
2015-11-03 18:45:13 +00:00
Erik Johnston
6a3a840b19 Merge pull request #343 from matrix-org/erikj/fix_retries
Fix broken cache for getting retry times.
2015-11-03 17:51:49 +00:00
Mark Haines
a3bfef35fd Merge branch 'develop' into markjh/v2_sync_receipts
Conflicts:
	synapse/handlers/sync.py
2015-11-03 17:31:17 +00:00
Mark Haines
6797fcd9ab Merge pull request #335 from matrix-org/markjh/room_tags
Add APIs for adding and removing tags from rooms
2015-11-03 16:45:53 +00:00
Erik Johnston
97d792b28f Don't rearrange transaction_queue 2015-11-03 16:31:08 +00:00
Erik Johnston
7ce264ce5f Fix broken cache for getting retry times. This meant we retried remote destinations way more frequently than we should 2015-11-03 16:24:03 +00:00
Paul "LeoNerd" Evans
8a0407c7e6 Merge branch 'develop' into paul/tiny-fixes 2015-11-03 16:08:48 +00:00
Mark Haines
06986e46a3 That TODO was done 2015-11-03 14:28:51 +00:00
Mark Haines
5897e773fd Spell "deferred" more correctly 2015-11-03 14:27:35 +00:00
Mark Haines
2657140c58 Include read receipts in v2 sync 2015-11-02 17:54:04 +00:00
Erik Johnston
eacb068ac2 Retry dead servers a lot less often 2015-11-02 16:56:30 +00:00
Mark Haines
57be722c46 Include room tags in v2 /sync 2015-11-02 16:23:15 +00:00
Daniel Wagner-Hall
771ca56c88 Remove more unused parameters 2015-11-02 15:31:57 +00:00
Mark Haines
ddd8566f41 Store room tag content and return the content in the m.tag event 2015-11-02 15:11:31 +00:00
Daniel Wagner-Hall
192241cf2a Remove unused arguments and code 2015-11-02 15:10:59 +00:00
Erik Johnston
3eb62873f6 Merge pull request #338 from matrix-org/daniel/fixdb
Add missing column
2015-11-02 14:54:31 +00:00
Mark Haines
0e36756383 Merge branch 'develop' into markjh/room_tags 2015-11-02 10:57:00 +00:00
Mark Haines
fb46937413 Support clients supplying older tokens, fix short poll test 2015-10-30 16:38:35 +00:00
Mark Haines
79b65f3875 Include tags in v1 room initial sync 2015-10-30 16:28:19 +00:00
Daniel Wagner-Hall
621e84d9a0 Add missing column 2015-10-30 16:25:53 +00:00
Mark Haines
fdf73c6855 Include room tags v1 /initialSync 2015-10-30 16:22:32 +00:00
Mark Haines
0f432ba551 Merge pull request #337 from matrix-org/markjh/v2_sync_joining
Don't mark newly joined room timelines as limited in an incremental sync
2015-10-30 11:18:08 +00:00
Mark Haines
d58edd98e9 Update the other place check_joined_room is called 2015-10-30 11:15:37 +00:00
Mark Haines
5cf22f0596 Don't mark newly joined room timelines as limited in an incremental sync 2015-10-29 19:58:51 +00:00
Erik Johnston
f6e6f3d87a Make search API honour limit set in filter 2015-10-29 16:17:47 +00:00
Mark Haines
f40b0ed5e1 Inform the client of new room tags using v1 /events 2015-10-29 15:21:09 +00:00
Erik Johnston
5d80dad99e Merge pull request #336 from matrix-org/erikj/search
Optionally return event contexts with search results
2015-10-28 18:39:40 +00:00
Erik Johnston
e83c4b8e3e Merge pull request #334 from matrix-org/erikj/context_api
Add room context api
2015-10-28 18:26:01 +00:00
Erik Johnston
a2e5f7f3d8 Optionally return event contexts with search results 2015-10-28 18:25:11 +00:00
Erik Johnston
2f6ad79a80 Merge branch 'erikj/context_api' into erikj/search 2015-10-28 17:53:24 +00:00
Mark Haines
a89b86dc47 Fix pyflakes errors 2015-10-28 16:45:57 +00:00
Mark Haines
892e70ec84 Add APIs for adding and removing tags from rooms 2015-10-28 16:06:57 +00:00
Erik Johnston
56dbcd1524 Docs 2015-10-28 14:05:50 +00:00
Richard van der Hoff
234d6f9f3e Merge pull request #332 from matrix-org/rav/full_state_sync
Implement full_state incremental sync
2015-10-28 14:04:39 +00:00
Erik Johnston
5cb298c934 Add room context api 2015-10-28 13:45:56 +00:00
Richard van der Hoff
d0b1968a4c Merge pull request #331 from matrix-org/rav/500_on_missing_sigil
Fix a 500 error resulting from empty room_ids
2015-10-27 15:20:57 +00:00
Richard van der Hoff
c79c4f9b14 Implement full_state incremental sync
A hopefully-complete implementation of the full_state incremental sync, as
specced at https://github.com/matrix-org/matrix-doc/pull/133.

This actually turns out to be a relatively simple modification to the initial
sync implementation.
2015-10-26 18:47:18 +00:00
Richard van der Hoff
f69a5c9134 Fix a 500 error resulting from empty room_ids
POST /_matrix/client/api/v1/rooms//send/a.b.c gave a 500 error, because we
assumed that rooms always had at least one character.
2015-10-26 18:44:03 +00:00
Erik Johnston
a299fede9d Merge branch 'release-v0.10.1' of github.com:matrix-org/synapse into release-v0.10.1 2015-10-26 18:16:31 +00:00
Erik Johnston
f73de2004e Use correct service url 2015-10-26 18:12:09 +00:00
Erik Johnston
cea2039b56 Merge pull request #330 from matrix-org/erikj/login_fallback
Add login fallback
2015-10-26 17:57:00 +00:00
Erik Johnston
f7e14bb535 Merge pull request #329 from matrix-org/erikj/static
Move static folder into synapse
2015-10-26 17:35:37 +00:00
Erik Johnston
87961d8dcf Add login fallback 2015-10-26 17:35:24 +00:00
Erik Johnston
fa1cf5ef34 Move static folder into synapse
This is because otherwise it won't get picked up by python packaging.

This also fixes the problem where the "static" folder was found if
synapse wasn't started from that directory.
2015-10-26 15:37:44 +00:00
Erik Johnston
3f0a57eb9b Merge pull request #328 from matrix-org/erikj/search
Pull out sender when computing search results
2015-10-26 10:24:19 +00:00
Erik Johnston
4cf633d5e9 Pull out sender when computing search results 2015-10-23 15:41:36 +01:00
Erik Johnston
b8e37ed944 Merge pull request #327 from matrix-org/erikj/search
Implement rank function for SQLite FTS
2015-10-23 15:27:51 +01:00
Erik Johnston
0c36098c1f Implement rank function for SQLite FTS 2015-10-23 13:23:48 +01:00
Daniel Wagner-Hall
216c976399 Merge pull request #323 from matrix-org/daniel/sizelimits
Reject events which are too large
2015-10-23 11:26:03 +01:00
Erik Johnston
259d10f0e4 Merge branch 'release-v0.10.1' of github.com:matrix-org/synapse into develop 2015-10-23 11:11:56 +01:00
Mark Haines
b051781ddb Merge pull request #325 from matrix-org/markjh/filter_dicts
Support filtering events represented as dicts.
2015-10-22 17:14:52 +01:00
Erik Johnston
53c679b59b Merge pull request #324 from matrix-org/erikj/search
Add filters to search.
2015-10-22 17:14:12 +01:00
Mark Haines
4e05aab4f7 Don't assume that the event has a room_id or sender 2015-10-22 17:08:59 +01:00
Erik Johnston
671ac699f1 Actually filter results 2015-10-22 16:54:56 +01:00
Mark Haines
9b6f3bc742 Support filtering events represented as dicts.
This is useful because the ephemeral events such as presence and
typing are represented as dicts inside synapse.
2015-10-22 16:38:03 +01:00
Erik Johnston
2980136d75 Rename 2015-10-22 16:19:53 +01:00
Erik Johnston
fb0fecd0b9 LESS THAN 2015-10-22 16:18:35 +01:00
Erik Johnston
61547106f5 Fix receipts for room initial sync 2015-10-22 16:17:23 +01:00
Erik Johnston
232beb3a3c Use namedtuple as return value 2015-10-22 15:02:35 +01:00
Erik Johnston
ba02bba88c Limit max number of SQL vars 2015-10-22 13:25:27 +01:00
Erik Johnston
1fc2d11a14 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-22 13:17:14 +01:00
Erik Johnston
b0ac0a9438 Merge pull request #319 from matrix-org/erikj/filter_refactor
Refactor api.filtering to have a Filter API
2015-10-22 13:17:10 +01:00
Erik Johnston
8a98f0dc5b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-22 13:16:35 +01:00
Erik Johnston
c9c82e8f4d Merge pull request #320 from matrix-org/appservice-retry-cap
Cap the time to retry txns to appservices to 8.5 minutes
2015-10-22 13:15:54 +01:00
Daniel Wagner-Hall
e60dad86ba Reject events which are too large
SPEC-222
2015-10-22 11:44:31 +01:00
Erik Johnston
f142898f52 PEP8 2015-10-22 11:18:01 +01:00
Erik Johnston
3993d6ecc2 Merge pull request #322 from matrix-org/erikj/password_config
Add config option to disable password login
2015-10-22 11:16:49 +01:00
Erik Johnston
4d25bc6c92 Move FTS to delta 25 2015-10-22 11:12:28 +01:00
Mark Haines
87da71bace Merge pull request #314 from matrix-org/paul/event-redaction
Add some unit tests of prune_events()
2015-10-22 11:07:20 +01:00
Erik Johnston
3ce1b8c705 Don't keep appending report_stats to demo config 2015-10-22 10:43:46 +01:00
Erik Johnston
5025ba959f Add config option to disable password login 2015-10-22 10:37:04 +01:00
Mark Haines
13a6e9beaf Merge pull request #321 from matrix-org/markjh/v2_sync_typing
Include typing events in initial v2 sync
2015-10-21 16:47:19 +01:00
Mark Haines
5201c66108 Merge branch 'develop' into markjh/v2_sync_typing
Conflicts:
	synapse/handlers/sync.py
2015-10-21 15:48:34 +01:00
Mark Haines
e94ffd89d6 Merge pull request #316 from matrix-org/markjh/v2_sync_archived
Add rooms that the user has left under archived in v2 sync.
2015-10-21 15:46:41 +01:00
Mark Haines
d63a0ca34b Doc string for the SyncHandler.typing_by_room method 2015-10-21 15:45:37 +01:00
Mark Haines
e3d75f564a Include banned rooms in the archived section of v2 sync 2015-10-21 11:15:48 +01:00
Kegsay
8627048787 Merge pull request #318 from matrix-org/syn-502-login-bad-emails
Don't 500 on /login when the email doesn't map to a valid user ID.
2015-10-21 10:16:33 +01:00
Kegan Dougal
4dec901c76 Cap the time to retry txns to appservices to 8.5 minutes
There have been numerous issues with people playing around with their
application service and then not receiving events from their HS for
ages due to backoff timers reaching crazy heights (albeit capped at
< 1 day).

Reduce the max time between pokes to be 8.5 minutes (2^9 secs) which
is quick enough for people to wait it out (avg wait time being 4.25 min)
but long enough to actually give the AS breathing room if it needs it.
2015-10-21 10:10:55 +01:00
Erik Johnston
5c41224a89 Filter room ids before hitting the database 2015-10-21 10:09:26 +01:00
Erik Johnston
c8baada94a Filter search results 2015-10-21 10:08:53 +01:00
Kegan Dougal
ede07434e0 Use 403 and message to match handlers/auth 2015-10-21 09:42:07 +01:00
Erik Johnston
44e2933bf8 Merge branch 'erikj/filter_refactor' into erikj/search 2015-10-20 16:57:51 +01:00
Mark Haines
7be06680ed Include typing events in initial v2 sync 2015-10-20 16:36:20 +01:00
Erik Johnston
87deec824a Docstring 2015-10-20 15:47:42 +01:00
Erik Johnston
45cd2b0233 Refactor api.filtering to have a Filter API 2015-10-20 15:33:25 +01:00
Daniel Wagner-Hall
f510586372 Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-10-20 12:00:22 +01:00
Daniel Wagner-Hall
137fafce4e Allow rejecting invites
This is done by using the same /leave flow as you would use if you had
already accepted the invite and wanted to leave.
2015-10-20 11:58:58 +01:00
Kegan Dougal
b02a342750 Don't 500 when the email doesn't map to a valid user ID. 2015-10-20 11:07:50 +01:00
Mark Haines
51d03e65b2 Fix pep8 2015-10-19 17:48:58 +01:00
Paul Evans
3c7d6202ea Merge pull request #315 from matrix-org/paul/test-vectors
Repeatable unit test of event hashing/signing algorithm
2015-10-19 17:47:57 +01:00
Paul "LeoNerd" Evans
9ed784098a Invoke EventBuilder directly instead of going via the EventBuilderFactory 2015-10-19 17:42:34 +01:00
Paul "LeoNerd" Evans
531e3aa75e Capture __init__.py 2015-10-19 17:37:35 +01:00
Mark Haines
68b7fc3e2b Add rooms that the user has left under archived in v2 sync. 2015-10-19 17:26:18 +01:00
Daniel Wagner-Hall
9261ef3a15 Merge pull request #312 from matrix-org/daniel/3pidinvites
Stuff signed data in a standalone object
2015-10-19 15:52:34 +01:00
Paul "LeoNerd" Evans
a8795c9644 Use assertIn() instead of assertTrue on the 'in' operator 2015-10-19 15:24:49 +01:00
Paul "LeoNerd" Evans
07b58a431f Another signing test vector using an 'm.room.message' with content, so that the implementation will have to redact it 2015-10-19 15:00:52 +01:00
Paul "LeoNerd" Evans
0aab34004b Initial minimial hack at a test of event hashing and signing 2015-10-19 14:40:15 +01:00
Erik Johnston
e0bf0258ee Merge pull request #307 from matrix-org/erikj/search
Add basic search API
2015-10-19 13:37:15 +01:00
Paul "LeoNerd" Evans
aff4d850bd Add some unit tests of prune_events() 2015-10-16 19:56:46 +01:00
Paul Evans
ae3082dd31 Merge pull request #313 from matrix-org/paul/tiny-fixes
Surely we don't need to preserve 'events_default' twice
2015-10-16 19:14:19 +01:00
Paul "LeoNerd" Evans
243a79d291 Surely we don't need to preserve 'events_default' twice 2015-10-16 18:25:19 +01:00
Mark Haines
9371a35e89 Merge pull request #306 from matrix-org/markjh/unused_methods
Remove some login classes from synapse.
2015-10-16 18:18:41 +01:00
Daniel Wagner-Hall
0e5239ffc3 Stuff signed data in a standalone object
Makes both generating it in sydent, and verifying it here, simpler at
the cost of some repetition
2015-10-16 17:45:48 +01:00
Mark Haines
b19b9535f6 Merge pull request #310 from matrix-org/markjh/bcrypt_rounds
Add config for how many bcrypt rounds to use for password hashes
2015-10-16 17:05:21 +01:00
Erik Johnston
46d39343d9 Explicitly check for Sqlite3Engine 2015-10-16 16:58:00 +01:00
Erik Johnston
f2d698cb52 Typing 2015-10-16 16:46:48 +01:00
Erik Johnston
33646eb000 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-16 15:35:35 +01:00
Erik Johnston
524b708f98 Merge pull request #311 from matrix-org/markjh/postgres_fixes
Fix FilteringStore.get_user_filter to work with postgres
2015-10-16 15:35:26 +01:00
Erik Johnston
380f148db7 Remove unused import 2015-10-16 15:32:56 +01:00
Mark Haines
fc012aa8dc Fix FilteringStore.get_user_filter to work with postgres 2015-10-16 15:28:43 +01:00
Daniel Wagner-Hall
e5acc8a47b Merge pull request #302 from matrix-org/daniel/3pidinvites
Implement third party identifier invites
2015-10-16 15:23:30 +01:00
Erik Johnston
d4b5621e0a Remove duplicate _filter_events_for_client 2015-10-16 15:19:52 +01:00
Erik Johnston
23ed7dc0e7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-16 15:18:42 +01:00
Erik Johnston
315b03b58d Merge pull request #309 from matrix-org/erikj/_filter_events_for_client
Amalgamate _filter_events_for_client
2015-10-16 15:18:32 +01:00
Daniel Wagner-Hall
c225d63e9e Add signing host and keyname to signatures 2015-10-16 15:07:56 +01:00
Daniel Wagner-Hall
b8dd5b1a2d Verify third party ID server certificates 2015-10-16 14:54:54 +01:00
Erik Johnston
366af6b73a Amalgamate _filter_events_for_client 2015-10-16 14:52:48 +01:00
Mark Haines
f2f031fd57 Add config for how many bcrypt rounds to use for password hashes
By default we leave it at the default value of 12. But now we can reduce
it for preparing users for loadtests or running integration tests.
2015-10-16 14:52:08 +01:00
Erik Johnston
12122bfc36 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-16 14:46:32 +01:00
Erik Johnston
edb998ba23 Explicitly check for Sqlite3Engine 2015-10-16 14:37:14 +01:00
Mark Haines
5df54de801 Merge pull request #308 from matrix-org/markjh/v2_filter_encoding
Encode the filter JSON as UTF-8 before storing in the database.
2015-10-16 13:41:31 +01:00
Erik Johnston
b62da463e1 docstring 2015-10-16 11:52:16 +01:00
Erik Johnston
3cf9948b8d Add docstring 2015-10-16 11:28:12 +01:00
Erik Johnston
73260ad01f Comment on the LIMIT 500 2015-10-16 11:24:02 +01:00
Erik Johnston
22a8c91448 Split up run_upgrade 2015-10-16 11:19:44 +01:00
Erik Johnston
a8945d24d1 Reorder changelog 2015-10-16 11:07:37 +01:00
Mark Haines
6296590bf7 Encode the filter JSON as UTF-8 before storing in the database.
Because we are using a binary column type to store the filter JSON.
2015-10-16 10:50:32 +01:00
Erik Johnston
bcfb653816 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-15 16:37:32 +01:00
Erik Johnston
e46cdc08cc Update change log 2015-10-15 15:29:00 +01:00
Erik Johnston
8189c4e3fd Bump version 2015-10-15 15:29:00 +01:00
Daniel Wagner-Hall
6ffbcf45c6 Use non-placeholder name for endpoint 2015-10-15 13:12:52 +01:00
Daniel Wagner-Hall
643b5fcdc8 Look for keys on the right objects 2015-10-15 13:10:30 +01:00
Daniel Wagner-Hall
f38df51e8d Merge branch 'develop' into daniel/3pidinvites 2015-10-15 11:51:55 +01:00
Mark Haines
1a934e8bfd synapse.client.v1.login.LoginFallbackRestServlet and synapse.client.v1.login.PasswordResetRestServlet are unused 2015-10-15 11:09:57 +01:00
Mark Haines
5338220d3a synapse.util.emailutils was unused 2015-10-15 10:39:33 +01:00
Mark Haines
a059760954 Merge pull request #305 from matrix-org/markjh/v2_sync_api
Update the v2 sync API to work as specified in the current spec.
2015-10-14 13:56:23 +01:00
Erik Johnston
d7c70d09f0 Merge pull request #304 from matrix-org/erikj/remove_unused_arg
Remove unused room_id arg
2015-10-14 13:39:33 +01:00
Mark Haines
c185c1c413 Fix v2 sync polling 2015-10-14 13:16:53 +01:00
Mark Haines
f50c43464c Merge branch 'develop' into markjh/v2_sync_api 2015-10-14 11:40:45 +01:00
Erik Johnston
f45aaf0e35 Remove unused constants 2015-10-14 10:36:55 +01:00
Erik Johnston
8c9df8774e Make 'keys' optional 2015-10-14 10:35:50 +01:00
Erik Johnston
99c7fbfef7 Fix to work with SQLite 2015-10-14 09:52:40 +01:00
Erik Johnston
1d9e109820 More TODO markers 2015-10-14 09:49:00 +01:00
Erik Johnston
d25b0f65ea Add TODO markers 2015-10-14 09:46:31 +01:00
Erik Johnston
858634e1d0 Remove unused room_id arg 2015-10-14 09:31:20 +01:00
Mark Haines
474274583f Merge pull request #303 from matrix-org/markjh/twisted_debugging
Bounce all deferreds through the reactor to make debugging easier.
2015-10-13 18:40:57 +01:00
Daniel Wagner-Hall
d82c5f7b5c Use more descriptive error code 2015-10-13 18:02:00 +01:00
Daniel Wagner-Hall
0c38e8637f Remove unnecessary class-wrapping 2015-10-13 18:00:38 +01:00
Mark Haines
1941eb315d Enable stack traces for the demo scripts 2015-10-13 18:00:02 +01:00
Mark Haines
9020860479 Only turn on the twisted deferred debugging if full_twisted_stacktraces is set in the config 2015-10-13 17:50:44 +01:00
Daniel Wagner-Hall
14edea1aff Move logic into handler 2015-10-13 17:47:58 +01:00
Daniel Wagner-Hall
b68db61222 Add logging 2015-10-13 17:22:50 +01:00
Daniel Wagner-Hall
bb407cd624 Re-add accidentally removed code 2015-10-13 17:19:26 +01:00
Mark Haines
32d66738b0 Fix pep8 warnings. 2015-10-13 17:18:29 +01:00
Daniel Wagner-Hall
95e53ac535 Add some docstring 2015-10-13 17:18:24 +01:00
Mark Haines
7639c3d9e5 Bounce all deferreds through the reactor to make debugging easier.
If all deferreds wait a reactor tick before resolving then there is
always a chance to add an errback to the deferred so that stacktraces
get reported, rather than being discarded.
2015-10-13 17:13:04 +01:00
Erik Johnston
7ecd11accb Add paranoia limit 2015-10-13 15:50:56 +01:00
Daniel Wagner-Hall
17dffef5ec Move event contents into third_party_layout field 2015-10-13 15:48:12 +01:00
Erik Johnston
3e2a1297b5 Remove constraints in preparation for using filters 2015-10-13 15:22:14 +01:00
Erik Johnston
323d3e506d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/search 2015-10-13 14:34:01 +01:00
Erik Johnston
ff2b66f42e Merge pull request #301 from matrix-org/markjh/v2_filtering
Update the v2 filters to support filtering presence.
2015-10-13 14:33:48 +01:00
Mark Haines
8897781558 update filtering tests 2015-10-13 14:13:51 +01:00
Mark Haines
2fa9e23e04 Update the v2 filters to support filtering presence and remove support for public/private user data 2015-10-13 14:12:43 +01:00
Mark Haines
cacf0688c6 Add a get_invites_for_user method to the storage to find out the rooms a user is invited to 2015-10-13 14:08:38 +01:00
Erik Johnston
88971fd034 Merge branch 'erikj/store_engine' into erikj/search 2015-10-13 14:03:30 +01:00
Erik Johnston
7ec9be9c53 Merge pull request #300 from matrix-org/erikj/store_engine
Split out the schema preparation and update logic into its own module
2015-10-13 14:01:15 +01:00
Erik Johnston
17c80c8a3d rename schema_prepare to prepare_database 2015-10-13 13:56:22 +01:00
Erik Johnston
cfd39d6b55 Add SQLite support 2015-10-13 13:47:50 +01:00
Daniel Wagner-Hall
32a453d7ba Merge branch 'develop' into daniel/3pidinvites 2015-10-13 13:32:43 +01:00
Erik Johnston
f9340ea0d5 Merge branch 'erikj/store_engine' into erikj/search 2015-10-13 13:29:02 +01:00
Erik Johnston
ec398af41c Expose error more nicely 2015-10-13 11:43:43 +01:00
Mark Haines
54414221e4 Include invites in incremental sync 2015-10-13 11:43:12 +01:00
Erik Johnston
40b6a5aad1 Split out the schema preparation and update logic into its own module 2015-10-13 11:38:48 +01:00
Mark Haines
ab9cf73258 Include invited rooms in the initial sync 2015-10-13 11:03:48 +01:00
Erik Johnston
30c2783d2f Search left rooms too 2015-10-13 10:36:36 +01:00
Erik Johnston
1a40afa756 Add sqlite schema 2015-10-13 10:36:25 +01:00
Mark Haines
f96b480670 Merge branch 'develop' into markjh/v2_sync_api 2015-10-13 10:33:00 +01:00
Mark Haines
956509dfec Start spliting out the rooms into joined and invited in v2 sync 2015-10-13 10:24:51 +01:00
Mark Haines
586beb8318 Update the filters to match the latest spec.
Apply the filter to the 'timeline' and 'ephemeral' keys of rooms.
Apply the filter to the 'presence' key of a sync response.
2015-10-12 16:54:58 +01:00
Erik Johnston
427943907f Merge pull request #299 from stevenhammerton/sh-cas-required-attribute
SH CAS Required Attribute
2015-10-12 16:08:34 +01:00
Steven Hammerton
739464fbc5 Add a comment to clarify why we split on closing curly brace when reading CAS attribute tags 2015-10-12 16:02:17 +01:00
Erik Johnston
ca53ad7425 Filter events to only those that the user is allowed to see 2015-10-12 15:52:55 +01:00
Erik Johnston
f6fde343a1 Merge remote-tracking branch 'origin/develop' into erikj/search 2015-10-12 15:06:18 +01:00
Erik Johnston
927004e349 Remove unused room_id parameter 2015-10-12 15:06:14 +01:00
Steven Hammerton
83b464e4f7 Unpack dictionary in for loop for nicer syntax 2015-10-12 15:05:34 +01:00
Steven Hammerton
ab7f9bb861 Default cas_required_attributes to empty dictionary 2015-10-12 14:58:59 +01:00
Mark Haines
54cb509d64 Merge pull request #296 from matrix-org/markjh/eventstream_presence
Split the sections of EventStreamHandler.get_stream that handle presence
2015-10-12 14:48:09 +01:00
Mark Haines
885301486c Merge pull request #297 from matrix-org/markjh/presence_races
Fix some races in the synapse presence handler caused by not yielding…
2015-10-12 14:47:53 +01:00
Steven Hammerton
7f8fdc9814 Remove unnecessary parentheses 2015-10-12 14:45:24 +01:00
Steven Hammerton
01a5f1991c Support multiple required attributes in CAS response, and in a nicer config format too 2015-10-12 14:43:17 +01:00
Steven Hammerton
76421c496d Allow optional config params for a required attribute and its value; if specified, any CAS user must have the given attribute and the value must match 2015-10-12 11:11:49 +01:00
Steven Hammerton
7845f62c22 Parse both user and attributes from CAS response 2015-10-12 10:55:13 +01:00
Erik Johnston
ae72e247fa PEP8 2015-10-12 10:50:46 +01:00
Erik Johnston
61561b9df7 Keep FTS indexes up to date. Only search through rooms currently joined 2015-10-12 10:49:53 +01:00
Matthew Hodgson
782f7fb489 add steve to authors 2015-10-10 18:24:44 +01:00
Erik Johnston
a80ef851f7 Fix previous merge to s/version_string/user_agent/ 2015-10-10 12:35:39 +01:00
Erik Johnston
347aa3c225 Merge pull request #295 from stevenhammerton/sh-cas-auth
Provide ability to login using CAS
2015-10-10 12:18:33 +01:00
Steven Hammerton
95f7661170 Raise LoginError if CasResponse doesn't contain user 2015-10-10 10:54:19 +01:00
Steven Hammerton
a9c299c0be Fix my broken line splitting 2015-10-10 10:54:19 +01:00
Steven Hammerton
e52f4dc599 Use UserId to create FQ user id 2015-10-10 10:54:19 +01:00
Steven Hammerton
625e13bfde Add get_raw method to SimpleHttpClient, use this in CAS auth rather than requests 2015-10-10 10:54:19 +01:00
Steven Hammerton
22112f8d14 Formatting changes 2015-10-10 10:49:42 +01:00
Steven Hammerton
c33f5c1a24 Provide ability to login using CAS 2015-10-10 10:49:42 +01:00
Mark Haines
1a46daf621 Merge branch 'markjh/presence_races' into markjh/v2_sync_api 2015-10-09 20:02:30 +01:00
Mark Haines
987803781e Fix some races in the synapse presence handler caused by not yielding on deferreds 2015-10-09 19:59:50 +01:00
Mark Haines
0a96a9a023 Set the user as online if they start polling the v2 sync 2015-10-09 19:57:50 +01:00
Mark Haines
af7b214476 Merge branch 'markjh/eventstream_presence' into markjh/v2_sync_api 2015-10-09 19:18:09 +01:00
Mark Haines
1b9802a0d9 Split the sections of EventStreamHandler.get_stream that handle presence
into separate functions.

This makes the code a bit easier to read, and means that we can reuse
the logic when implementing the v2 sync API.
2015-10-09 19:13:08 +01:00
Mark Haines
c15cf6ac06 Format the presence events correctly for v2 2015-10-09 18:50:15 +01:00
Erik Johnston
c85c912562 Add basic full text search impl. 2015-10-09 15:48:31 +01:00
Mark Haines
ce19fc0f11 Merge pull request #294 from matrix-org/markjh/initial_sync_archived_flag
Add a flag to initial sync to indicate we want rooms that the user has left
2015-10-09 10:32:27 +01:00
Mark Haines
51ef725647 Use 'true' rather than '1' for archived flag 2015-10-08 18:13:02 +01:00
Mark Haines
dc72021748 Add a flag to initial sync to indicate we want rooms that the user has left 2015-10-08 17:26:23 +01:00
Mark Haines
dfef2b41aa Update the v2 room sync format to match the current v2 spec 2015-10-08 15:17:43 +01:00
David Baker
91482cd6a0 Use raw string for regex here, otherwise \b is the backspace character. Fixes displayname matching. 2015-10-08 11:22:15 +01:00
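The bug class behind this fix is easy to reproduce (illustrative snippet):

    import re

    # In a normal string, "\b" is the backspace character (0x08); only in
    # a raw string does it reach the regex engine as a word boundary.
    assert "\b" == chr(8)
    assert re.search("\bname\b", "display name") is None       # backspaces: no match
    assert re.search(r"\bname\b", "display name") is not None  # word boundaries: match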
Mark Haines
e3d3205cd9 Update the sync response to match the latest spec 2015-10-07 15:55:20 +01:00
Daniel Wagner-Hall
7c809abe86 Merge branch 'develop' into daniel/3pidinvites 2015-10-06 10:24:32 -05:00
Daniel Wagner-Hall
db6e1e1fe3 Merge pull request #292 from matrix-org/daniel/useragent
Allow synapse's useragent to be customized
2015-10-06 10:23:21 -05:00
Daniel Wagner-Hall
61ee72517c Remove merge thinko 2015-10-06 10:16:15 -05:00
Daniel Wagner-Hall
1cacc71050 Add third party invites to auth_events for joins 2015-10-06 10:13:28 -05:00
Mark Haines
fac990a656 Merge pull request #293 from matrix-org/markjh/remove_spamy_error_logging
Remove log line that was generated whenever an error was created.
2015-10-06 16:13:07 +01:00
Daniel Wagner-Hall
fcd9ba8802 Fix lint errors 2015-10-06 10:13:05 -05:00
Mark Haines
93cc60e805 Remove log line that was generated whenever an error was created. We are now creating error objects that aren't raised so it's probably a bit too confusing to keep 2015-10-06 16:10:19 +01:00
Daniel Wagner-Hall
d4bb28c59b Revert "Revert "Merge pull request #283 from matrix-org/erikj/atomic_join_federation""
This reverts commit 34d26d3687.
2015-10-06 09:58:21 -05:00
Daniel Wagner-Hall
ca6496c27c Merge branch 'daniel/useragent' into daniel/3pidinvites 2015-10-06 09:55:21 -05:00
Daniel Wagner-Hall
492beb62a8 Use space not dash as delimiter 2015-10-06 09:53:33 -05:00
Daniel Wagner-Hall
e0b466bcfd Use space not dash as delimiter 2015-10-06 09:32:26 -05:00
Daniel Wagner-Hall
287c81abf3 Merge branch 'develop' into daniel/useragent 2015-10-06 09:30:17 -05:00
Daniel Wagner-Hall
c05b5ef7b0 Merge branch 'develop' into daniel/3pidinvites 2015-10-06 08:10:34 -05:00
Daniel Wagner-Hall
ddd079c8f8 Merge branch 'daniel/useragent' into daniel/3pidinvites 2015-10-05 20:52:15 -05:00
Daniel Wagner-Hall
b28c7da0a4 Preserve version string in user agent 2015-10-05 20:49:39 -05:00
Daniel Wagner-Hall
34d26d3687 Revert "Merge pull request #283 from matrix-org/erikj/atomic_join_federation"
This reverts commit 5879edbb09, reversing
changes made to b43930d4c9.
2015-10-05 19:10:47 -05:00
Mark Haines
471555b3a8 Move the rooms out into a room_map mapping from room_id to room. 2015-10-05 16:39:36 +01:00
Daniel Wagner-Hall
58e6a58eb7 Merge branch 'develop' into daniel/3pidinvites 2015-10-05 10:33:41 -05:00
Daniel Wagner-Hall
8fc52bc56a Allow synapse's useragent to be customized
This will allow me to write tests which verify which server made HTTP
requests in a federation context.
2015-10-02 17:13:51 -05:00
Erik Johnston
49ebd472fa Explicitly add Create event as auth event 2015-10-02 13:22:36 +01:00
Erik Johnston
40017a9a11 Add 'trusted_private_chat' to room creation presets 2015-10-02 11:22:56 +01:00
Erik Johnston
a086b7aa00 Merge pull request #275 from matrix-org/erikj/invite_state
Bundle in some room state in invites.
2015-10-02 11:15:43 +01:00
Erik Johnston
9c311dfce5 Also bundle in sender 2015-10-02 11:04:23 +01:00
Erik Johnston
a38d36ccd0 Merge pull request #279 from matrix-org/erikj/unfederatable
Add flag which disables federation of the room
2015-10-02 10:41:14 +01:00
Erik Johnston
d5e081c7ae Merge branch 'develop' of github.com:matrix-org/synapse into erikj/unfederatable 2015-10-02 10:33:49 +01:00
Erik Johnston
5879edbb09 Merge pull request #283 from matrix-org/erikj/atomic_join_federation
Atomically persist events when joining a room over federation
2015-10-02 09:18:44 +01:00
Mark Haines
f31014b18f Start updating the sync API to match the specification 2015-10-01 17:53:07 +01:00
Daniel Wagner-Hall
5b3e9713dd Implement third party identifier invites 2015-10-01 17:49:52 +01:00
Kegsay
b43930d4c9 Merge pull request #291 from matrix-org/receipt-validation
Validate the receipt type before passing it on to the receipt handler
2015-10-01 14:44:39 +01:00
Kegan Dougal
bad780a197 Validate the receipt type before passing it on to the receipt handler 2015-10-01 14:01:52 +01:00
Erik Johnston
0a4b7226fc Don't change cwd in synctl 2015-10-01 09:21:36 +01:00
Erik Johnston
0ec78b360c Merge pull request #287 from matrix-org/erikj/canonical_alias
Set m.room.canonical_alias on room creation.
2015-09-30 17:14:55 +01:00
Erik Johnston
ecd0c0dfc5 Remove double indentation 2015-09-30 16:46:24 +01:00
Erik Johnston
83892d0d30 Comment 2015-09-30 16:41:48 +01:00
Erik Johnston
9d39615b7d Rename var 2015-09-30 16:37:59 +01:00
Mark Haines
301141515a Merge pull request #288 from matrix-org/markjh/unused_definitions
Remove some of the unused definitions from synapse
2015-09-28 14:22:44 +01:00
Daniel Wagner-Hall
741777235c Merge pull request #290 from matrix-org/daniel/synctl
Allow config file path to be configurable in synctl
2015-09-28 10:30:45 +01:00
Erik Johnston
a14665bde7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/invite_state 2015-09-25 11:38:28 +01:00
Daniel Wagner-Hall
f87a11e0fd Fix restart 2015-09-24 21:59:38 +00:00
Daniel Wagner-Hall
76328b85f6 Allow config file path to be configurable in synctl
Also, allow it to be run from directories other than the synapse directory
2015-09-24 21:50:20 +00:00
Erik Johnston
17795161c3 Merge pull request #289 from matrix-org/markjh/fix_sql
Fix order of ON constraints in _get_rooms_for_user_where_membership
2015-09-24 17:39:47 +01:00
Mark Haines
cf1100887b Fix order of ON constraints in _get_rooms_for_user_where_membership_is_txn 2015-09-24 17:35:10 +01:00
Mark Haines
314aabba82 Fix scripts-dev/definitions.py argparse options 2015-09-23 10:45:33 +01:00
Mark Haines
7d55314277 Remove unused _execute_and_decode from scripts/synapse_port_db 2015-09-23 10:42:02 +01:00
Mark Haines
1cd65a8d1e synapse/storage/state.py: _make_group_id was unused 2015-09-23 10:37:58 +01:00
Mark Haines
973ebb66ba Remove unused functions from synapse/storage/signatures.py 2015-09-23 10:36:33 +01:00
Mark Haines
e51aa4be96 synapse/storage/roommember.py:_get_members_query was unused 2015-09-23 10:35:10 +01:00
Mark Haines
92d8d724c5 Remove unused functions from synapse/storage/events.py 2015-09-23 10:33:06 +01:00
Mark Haines
c292dba70c Remove unused functions from synapse/storage/event_federation.py 2015-09-23 10:31:25 +01:00
Mark Haines
396834f1c0 synapse/storage/_base.py:_simple_max_id was unused 2015-09-23 10:30:38 +01:00
Mark Haines
1d9036aff2 synapse/storage/_base.py:_simple_delete was unused 2015-09-23 10:30:25 +01:00
Mark Haines
1ee3d26432 synapse/storage/_base.py:_simple_selectupdate_one was unused 2015-09-23 10:30:03 +01:00
Mark Haines
82b8d4b86a synapse/state.py:_get_state_key_from_event was unused 2015-09-23 10:27:47 +01:00
Mark Haines
57338a9768 synapse/handlers/room.py:_should_invite_join was unused 2015-09-23 10:26:45 +01:00
Mark Haines
60728c8c9e synapse/handlers/federation.py:_handle_auth_events was unused 2015-09-23 10:25:26 +01:00
Mark Haines
04abf53a56 Use argparse for definition finder 2015-09-23 10:17:50 +01:00
Erik Johnston
257fa1c53e Set m.room.canonical_alias on room creation. 2015-09-23 10:07:31 +01:00
Erik Johnston
8a519ac76d Fix demo/start.sh to work with --report-stats 2015-09-23 09:55:24 +01:00
Erik Johnston
d2fc591619 Merge pull request #282 from matrix-org/erikj/missing_keys
Fix bug where we sometimes didn't fetch all the keys requested for a server.
2015-09-23 09:22:01 +01:00
Erik Johnston
dc6094b908 Merge pull request #271 from matrix-org/erikj/default_history
Change default history visibility for private rooms
2015-09-23 09:21:00 +01:00
Mark Haines
3559a835a2 synapse/storage/event_federation.py:_get_auth_events is unused 2015-09-22 18:39:46 +01:00
Mark Haines
7dd4f79c49 synapse/storage/_base.py:_execute_and_decode was unused 2015-09-22 18:37:07 +01:00
Mark Haines
bb4dddd6c4 Move NullSource out of synapse and into tests since it is only used by the tests 2015-09-22 18:33:34 +01:00
Mark Haines
7a5818ed81 Note that GzipFile was removed in comment that referenced it 2015-09-22 18:27:22 +01:00
Mark Haines
184ba0968a synapse/app/homeserver.py:GzipFile was unused 2015-09-22 18:25:30 +01:00
Mark Haines
a247729806 synapse/streams/events.py:StreamSource was unused 2015-09-22 18:19:49 +01:00
Mark Haines
f2fcc0a8cf synapse/api/errors.py:RoomError was unused 2015-09-22 18:18:45 +01:00
Mark Haines
372ac60375 synapse/util/__init__.py:unwrap_deferred was unused 2015-09-22 18:16:07 +01:00
Mark Haines
527d95dea0 synapse/storage/_base.py:Table was unused 2015-09-22 18:14:15 +01:00
Mark Haines
cc3ab0c214 Add dev script for finding where functions are called from, and finding functions that aren't called at all 2015-09-22 18:13:06 +01:00
Mark Haines
ca2abf9a6e Merge pull request #286 from matrix-org/markjh/stream_config_repr
Define __repr__ methods for StreamConfig and PaginationConfig
2015-09-22 15:19:53 +01:00
Mark Haines
b35baf6f3c Define __repr__ methods for StreamConfig and PaginationConfig
So that they can be used with "%r" log formats.
2015-09-22 15:13:10 +01:00
Daniel Wagner-Hall
f17aadd1b5 Merge pull request #285 from matrix-org/daniel/metrics-2
Implement configurable stats reporting
2015-09-22 13:59:37 +01:00
Daniel Wagner-Hall
6d59ffe1ce Add some docstrings 2015-09-22 13:47:40 +01:00
Daniel Wagner-Hall
b6e0303c83 Catch stats-reporting errors 2015-09-22 13:34:29 +01:00
Daniel Wagner-Hall
eb011cd99b Add docstring 2015-09-22 13:29:36 +01:00
Daniel Wagner-Hall
6d7f291b93 Front-load spaces 2015-09-22 13:13:07 +01:00
Daniel Wagner-Hall
7213588083 Implement configurable stats reporting
SYN-287

This requires that HS owners either opt in or out of stats reporting.

When --generate-config is passed, --report-stats must be specified.
If an already-generated config is used and doesn't have the
report_stats key, the user is asked to set it.
2015-09-22 12:57:40 +01:00
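The described command-line behaviour, sketched with argparse (hypothetical, not Synapse's actual option handling):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--generate-config", action="store_true")
    parser.add_argument("--report-stats", choices=["yes", "no"])
    args = parser.parse_args()
    if args.generate_config and args.report_stats is None:
        parser.error("--report-stats is required with --generate-config")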
Mark Haines
ee2d722f0f Merge pull request #276 from matrix-org/markjh/history_for_rooms_that_have_been_left
SPEC-216: Allow users to view the history of rooms that they have left.
2015-09-21 14:38:13 +01:00
Mark Haines
49c0a0b5c4 Clarify that room_initial_sync returns a python dict 2015-09-21 14:21:03 +01:00
Mark Haines
95c304e3f9 Fix doc string to point at the right class 2015-09-21 14:18:47 +01:00
Mark Haines
0c16285989 Add explicit "elif event.membership == Membership.LEAVE" for clarity 2015-09-21 14:17:16 +01:00
Mark Haines
1e101ed4a4 Clamp the "to" token for /rooms/{roomId}/messages to when the user left
the room.

There isn't a way for the client to learn a valid "to" token for a room
that they have left in the C-S API but that doesn't stop a client making
one up.
2015-09-21 14:13:10 +01:00
Mark Haines
8e3bbc9bd0 Clarify which event is returned by check_user_was_in_room 2015-09-21 13:47:44 +01:00
Mark Haines
0b5c9adeb5 Merge pull request #267 from matrix-org/markjh/missing_requirements
Print an example "pip install" line for a missing requirement
2015-09-18 18:52:08 +01:00
Erik Johnston
afe475e9be Merge branch 'develop' of github.com:matrix-org/synapse into erikj/atomic_join_federation 2015-09-17 10:28:55 +01:00
Erik Johnston
b105996fc1 Remove run_on_reactor 2015-09-17 10:28:36 +01:00
Erik Johnston
51b2448e05 Revert change of scripts/check_auth.py 2015-09-17 10:26:03 +01:00
Erik Johnston
c34ffd2736 Fix getting an event for a room the server forgot it was in 2015-09-17 10:26:03 +01:00
Erik Johnston
54e688277a Also persist state 2015-09-17 10:26:03 +01:00
Erik Johnston
3a01901d6c Capture err 2015-09-17 10:26:03 +01:00
Erik Johnston
744e7d2790 Also handle state 2015-09-17 10:26:03 +01:00
Erik Johnston
a3e332af19 Don't bail out of joining if we encounter a rejected event 2015-09-17 10:26:03 +01:00
Erik Johnston
4678055173 Refactor do_invite_join 2015-09-17 10:24:51 +01:00
Erik Johnston
ffe8cf7e59 Fix bug where we sometimes didn't fetch all the keys requested for a server.
2015-09-17 10:21:32 +01:00
Erik Johnston
eb700cdc38 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-09-16 11:05:34 +01:00
Erik Johnston
16026e60c5 Merge branch 'hotfixes-v0.10.0-r2' of github.com:matrix-org/synapse 2015-09-16 09:56:15 +01:00
Erik Johnston
0b1a55c60a Update changelog 2015-09-16 09:55:44 +01:00
Erik Johnston
663b96ae96 Merge branch 'erikj/update_extremeties' into hotfixes-v0.10.0-r2 2015-09-16 09:54:42 +01:00
Erik Johnston
2048388cfd Merge pull request #281 from matrix-org/erikj/update_extremeties
When updating a stored event from outlier to non-outlier, remember to update the extremities
2015-09-15 16:57:25 +01:00
Erik Johnston
8148c48f11 "Comments" 2015-09-15 16:54:48 +01:00
Daniel Wagner-Hall
2c8f16257a Merge pull request #272 from matrix-org/daniel/insecureclient
Allow configuration to ignore invalid SSL certs
2015-09-15 16:52:38 +01:00
Erik Johnston
1107e83b54 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-09-15 16:35:34 +01:00
Erik Johnston
3b05b67c89 When updating a stored event from outlier to non-outlier, remember to update the extremities 2015-09-15 16:34:42 +01:00
Daniel Wagner-Hall
d4af08a167 Use shorter config key name 2015-09-15 15:50:13 +01:00
Daniel Wagner-Hall
3bcbabc9fb Rename context factory
Mjark is officially no fun.
2015-09-15 15:46:22 +01:00
Daniel Wagner-Hall
9fc0aad567 Merge branch 'master' into daniel/insecureclient 2015-09-15 15:42:44 +01:00
Paul Evans
929ae19d00 Merge pull request #280 from matrix-org/paul/sighup
Hacky attempt at catching SIGHUP and rotating the logfile around
2015-09-15 10:47:40 +01:00
Paul "LeoNerd" Evans
9cd5b9a802 Hacky attempt at catching SIGHUP and rotating the logfile around 2015-09-14 19:03:53 +01:00
Daniel Wagner-Hall
728d07c8c1 Merge pull request #256 from matrix-org/auth
Attempt to validate macaroons
2015-09-14 18:09:33 +01:00
Erik Johnston
d59acb8c5b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/unfederatable 2015-09-14 18:05:31 +01:00
Erik Johnston
91cb3b630d Merge pull request #265 from matrix-org/erikj/check_room_exists
Check room exists when authenticating an event
2015-09-14 17:56:18 +01:00
Erik Johnston
dffc9c4ae0 Drop unused index 2015-09-14 14:41:37 +01:00
Mark Haines
e2054ce21a Allow users to GET individual state events for rooms that they have left 2015-09-10 15:06:47 +01:00
Erik Johnston
49ae42bbe1 Bundle in some room state in the unsigned bit of the invite when sending to invited servers 2015-09-10 14:25:54 +01:00
Erik Johnston
4ba8189b74 Bump change log 2015-09-10 10:45:22 +01:00
David Baker
ca32c7a065 Fix adding threepids to an existing account 2015-09-10 10:44:56 +01:00
David Baker
184a5c81f0 Merge pull request #274 from matrix-org/add_threepid_fix
Fix adding threepids to an existing account
2015-09-10 10:36:58 +01:00
David Baker
30768dcf40 Fix adding threepids to an existing account 2015-09-10 10:33:48 +01:00
Erik Johnston
4ae73d16a9 Merge pull request #270 from matrix-org/markjh/fix_metrics
Fix the size reported by maxrss.
2015-09-10 10:32:10 +01:00
Erik Johnston
a5b41b809f Merge pull request #273 from matrix-org/erikj/key_fetch_fix
Various bug fixes to crypto.keyring
2015-09-10 10:31:23 +01:00
Erik Johnston
3f60481655 Bump version and change log 2015-09-10 09:58:32 +01:00
Erik Johnston
e1eb1f3fb9 Various bug fixes to crypto.keyring 2015-09-10 09:48:12 +01:00
Mark Haines
09cb5c7d33 Allow users that have left a room to get the messages that happened in the room before they left 2015-09-09 17:31:09 +01:00
Erik Johnston
dd0867f5ba Various bug fixes to crypto.keyring 2015-09-09 17:02:39 +01:00
Mark Haines
3c166a24c5 Remove undocumented and unimplemented 'feedback' parameter from the Client-Server API 2015-09-09 16:05:09 +01:00
Mark Haines
bc8b25eb56 Allow users that have left the room to view the member list from the point they left 2015-09-09 15:42:16 +01:00
Daniel Wagner-Hall
2c746382e0 Merge branch 'daniel/insecureclient' into develop 2015-09-09 14:27:30 +01:00
Mark Haines
1d579df664 Allow rooms/{roomId}/state for a room that has been left 2015-09-09 14:12:24 +01:00
Erik Johnston
c0d1f37baf Don't require pdus in check_auth script 2015-09-09 13:47:14 +01:00
Daniel Wagner-Hall
ddfe30ba83 Better document the intent of the insecure SSL setting 2015-09-09 13:26:23 +01:00
Mark Haines
89ae0166de Allow room initialSync for users that have left the room, returning a snapshot of how the room was when they left it 2015-09-09 13:25:22 +01:00
Daniel Wagner-Hall
6485f03d91 Fix random formatting 2015-09-09 13:05:00 +01:00
Daniel Wagner-Hall
81a93ddcc8 Allow configuration to ignore invalid SSL certs
This will be useful for sytest, and sytest only, hence the aggressive
config key name.
2015-09-09 12:02:07 +01:00
Erik Johnston
e530208e68 Change default history visibility for private rooms 2015-09-09 09:57:49 +01:00
Mark Haines
dd42bb78d0 Include rooms that a user has left in an initialSync. Include the state and messages at the point they left the room 2015-09-08 18:16:09 +01:00
Mark Haines
417485eefa Include the event_id and stream_ordering of membership events when looking up which rooms a user is in 2015-09-08 18:14:54 +01:00
Erik Johnston
2ff439cff7 Bump version/changelog 2015-09-08 11:01:48 +01:00
Mark Haines
709ba99afd Check that /proc/self/fd exists before listing it 2015-09-07 16:45:55 +01:00
Mark Haines
9e4dacd5e7 The maxrss reported by getrusage is in kilobytes, not pages 2015-09-07 16:45:48 +01:00
Mark Haines
d23bc77e2c Merge branch 'master' into develop 2015-09-07 15:11:36 +01:00
Mark Haines
73e4ad4b8b Merge branch 'master' into develop
Conflicts:
	setup.py
2015-09-07 15:06:46 +01:00
Mark Haines
076e19da28 Merge pull request #269 from matrix-org/markjh/upgrading_setuptools
Add instructions for upgrading setuptools for when people encounter a…
2015-09-07 15:00:02 +01:00
Mark Haines
3ead04ceef Add instructions for upgrading setuptools for when people encounter a message "mock requires setuptools>=17.1" 2015-09-07 14:57:00 +01:00
Erik Johnston
227b77409f DEPENDENCY_LINKS was turned to a list 2015-09-04 08:56:23 +01:00
Erik Johnston
efeeff29f6 Merge branch 'release-v0.10.0' 2015-09-03 09:54:08 +01:00
Erik Johnston
1002bbd732 Change log level to info 2015-09-03 09:51:01 +01:00
Erik Johnston
9ad38c9807 Bump version and changelog 2015-09-03 09:49:54 +01:00
Matthew Hodgson
bdf2e5865a update logger to match new ambiguous script name... 2015-09-03 09:51:42 +03:00
Erik Johnston
fd0a919af3 Lists use 'append' 2015-09-02 17:27:59 +01:00
Erik Johnston
e90f32646f Bump version and changelog 2015-09-02 17:17:40 +01:00
Erik Johnston
aaf319820a Update README to include RAM requirements 2015-09-02 17:16:39 +01:00
Erik Johnston
a9ad647fb2 Make port script handle empty sent_transactions table 2015-09-02 11:11:11 +01:00
Daniel Wagner-Hall
77580addc3 Merge pull request #262 from matrix-org/redactyoself
Allow users to redact their own events
2015-09-02 10:02:36 +01:00
Erik Johnston
8e8955bcea Merge pull request #266 from pztrn/develop
Ignore development virtualenv and generated logger configuration as well.
2015-09-02 09:57:55 +01:00
Mark Haines
8bab7abddd Add nacl.bindings to the list of modules checked. Re-arrange import order to check packages after the packages they depend on 2015-09-01 16:51:10 +01:00
Mark Haines
3cdfd37d95 Print an example "pip install" line for a missing requirement 2015-09-01 16:47:26 +01:00
Erik Johnston
9b05ef6f39 Also check the domains for membership state_keys 2015-09-01 16:17:25 +01:00
Erik Johnston
187320b019 Merge branch 'erikj/check_room_exists' into erikj/unfederatable 2015-09-01 15:58:10 +01:00
Erik Johnston
b345853918 Check against sender rather than event_id 2015-09-01 15:57:35 +01:00
pztrn
7ab401d4dc Ignore development virtualenv and generated logger configuration as well.
Signed-off-by: Stanislav Nikitin <pztrn@pztrn.name>
2015-09-01 19:48:22 +05:00
Erik Johnston
a88e16152f Add flag which disables federation of the room 2015-09-01 15:47:30 +01:00
Erik Johnston
00149c063b Fix tests 2015-09-01 15:42:03 +01:00
Erik Johnston
ab9e01809d Check room exists when authenticating an event, by asserting they reference a creation event 2015-09-01 15:21:24 +01:00
Mark Haines
236245f7d8 Merge pull request #264 from matrix-org/markjh/syweb_on_pypi
Use the version of "matrix-angular-sdk" hosted on pypi
2015-09-01 14:50:29 +01:00
Mark Haines
57df6fffa7 Use the version of "matrix-angular-sdk" hosted on pypi 2015-09-01 14:47:57 +01:00
Erik Johnston
b62c1395d6 Merge branch 'release-v0.10.0' of github.com:matrix-org/synapse into develop 2015-09-01 13:11:55 +01:00
Daniel Wagner-Hall
e255c2c32f s/user_id/user/g for consistency 2015-09-01 12:41:16 +01:00
Erik Johnston
9c8eb4a809 Merge pull request #261 from matrix-org/erikj/scripts_clean
Clean up scripts/
2015-09-01 11:55:26 +01:00
Daniel Wagner-Hall
b854a375b0 Check domain of events properly
Federated servers still need to delegate authority to owning servers
2015-09-01 11:53:31 +01:00
Erik Johnston
cd800ad99a Lower size of 'stateGroupCache' now that we have data from matrix.org to support doing so 2015-09-01 10:09:03 +01:00
Erik Johnston
3e4de64bc9 Remove spurious .py from docs 2015-09-01 09:46:42 +01:00
Matthew Hodgson
d71af2ee12 don't log the whole DB config (including postgres password...) 2015-08-29 22:23:21 +01:00
Daniel Wagner-Hall
b143641b20 Merge pull request #258 from matrix-org/slowtestsmakemesad
Swap out bcrypt for md5 in tests
2015-08-28 15:42:25 +01:00
Daniel Wagner-Hall
4d1ea40008 Merge branch 'develop' into redactyoself
Conflicts:
	synapse/handlers/_base.py
2015-08-28 15:35:39 +01:00
Daniel Wagner-Hall
8256a8ece7 Allow users to redact their own events 2015-08-28 15:31:49 +01:00
Mark Haines
a7122692d9 Merge branch 'release-v0.10.0' into develop
Conflicts:
	synapse/handlers/auth.py
	synapse/python_dependencies.py
	synapse/rest/client/v1/login.py
2015-08-28 11:15:27 +01:00
Erik Johnston
b442217d91 Actually add config path 2015-08-28 10:37:17 +01:00
Erik Johnston
c961cd7736 Clean up scripts/ 2015-08-27 13:03:17 +01:00
Erik Johnston
5371c2a1f7 Bump version and changelog 2015-08-27 11:21:11 +01:00
Erik Johnston
4a6d894850 Merge pull request #260 from matrix-org/erikj/filename_order
Check for an internationalised filename first
2015-08-27 11:18:01 +01:00
Erik Johnston
ddf4d2bd98 Consistency 2015-08-27 10:50:49 +01:00
Erik Johnston
66ec6cf9b8 Check for an internationalised filename first 2015-08-27 10:48:58 +01:00
Erik Johnston
53c2eed862 None check the correct variable 2015-08-27 10:38:22 +01:00
Erik Johnston
f02532baad Check for None 2015-08-27 10:37:02 +01:00
Erik Johnston
25b32b63ae Bump changelog and version 2015-08-27 10:09:32 +01:00
Erik Johnston
e330c802e4 Merge pull request #259 from matrix-org/markjh/unicode_content_disposition
Support unicode attachment filenames
2015-08-27 10:03:58 +01:00
Mark Haines
c9cb354b58 Give a sensible error message if the filename is invalid UTF-8 2015-08-26 17:27:23 +01:00
Mark Haines
5a9e0c3682 Handle unicode filenames given when downloading or received over federation 2015-08-26 17:08:47 +01:00
Mark Haines
e85c7873dc Allow non-ascii filenames for attachments 2015-08-26 16:26:37 +01:00
Daniel Wagner-Hall
86fac9c95e Remove unused import 2015-08-26 16:03:17 +01:00
Daniel Wagner-Hall
3063383547 Swap out bcrypt for md5 in tests
This reduces our ~8 second sequential test time down to ~7 seconds
2015-08-26 15:59:32 +01:00
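As a minimal sketch of the idea, with illustrative names (`TESTING` and `hash_password` are assumptions, not synapse's actual identifiers):

```
import hashlib

# Sketch only: swap the expensive bcrypt hash for md5 when running under
# the test harness; tests only need determinism, not security.
TESTING = True

def hash_password(password):
    if TESTING:
        return hashlib.md5(password.encode("utf8")).hexdigest()
    import bcrypt
    return bcrypt.hashpw(password.encode("utf8"), bcrypt.gensalt())
```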
Daniel Wagner-Hall
81450fded8 Turn TODO into thing which actually will fail 2015-08-26 13:56:01 +01:00
Mark Haines
4c56928263 Merge pull request #254 from matrix-org/markjh/tox_setuptools
Make 'setup.py test' run tox
2015-08-26 13:50:59 +01:00
Daniel Wagner-Hall
6f0c344ca7 Merge pull request #255 from matrix-org/mergeeriksmadness
Merge erikj/user_dedup to develop
2015-08-26 13:49:38 +01:00
Daniel Wagner-Hall
37f0ddca5f Merge branch 'mergeeriksmadness' into auth 2015-08-26 13:45:06 +01:00
Daniel Wagner-Hall
d3c0e48859 Merge erikj/user_dedup to develop 2015-08-26 13:42:45 +01:00
Daniel Wagner-Hall
6a4b650d8a Attempt to validate macaroons
A couple of weird caveats:
 * If we can't validate your macaroon, we fall back to checking that
   your access token is in the DB, and ignoring the failure
 * Even if we can validate your macaroon, we still have to hit the DB to
   get the access token ID, which we pretend is a device ID all over the
   codebase.

This mostly adds the interesting code, and points out the two pieces we
need to delete (and necessary conditions) in order to fix the above
caveats.
2015-08-26 13:22:23 +01:00
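A rough sketch of that flow with pymacaroons (the caveat strings and the store helper are assumptions, not synapse's actual code):

```
from pymacaroons import Macaroon, Verifier

def authenticate(token, secret_key, store):
    """Try macaroon verification first; fall back to the DB on failure."""
    try:
        macaroon = Macaroon.deserialize(token)
        v = Verifier()
        v.satisfy_exact("gen = 1")
        # Accept any user_id/time caveat for this sketch.
        v.satisfy_general(lambda caveat: caveat.startswith("user_id = "))
        v.satisfy_general(lambda caveat: caveat.startswith("time < "))
        v.verify(macaroon, secret_key)
    except Exception:
        pass  # caveat one: ignore the failure and fall through to the DB
    # Caveat two: we still hit the DB for the access token's row
    # (whose ID doubles as a "device ID" elsewhere in the codebase).
    return store.get_user_by_access_token(token)
```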
Mark Haines
06094591c5 Pass an empty list of arguments to tox if no arguments are given 2015-08-26 13:13:01 +01:00
Mark Haines
fd246fde89 Install tox locally if it wasn't already installed when running setup.py test 2015-08-26 12:59:02 +01:00
Mark Haines
4f6fa981ec Make 'setup.py test' run tox 2015-08-26 12:45:29 +01:00
Daniel Wagner-Hall
3cab86a122 Merge pull request #253 from matrix-org/tox
Allow tests to be filtered when using tox
2015-08-26 11:48:41 +01:00
Daniel Wagner-Hall
e768d7b3a6 Allow tests to be filtered when using tox
`tox` will run all tests
`tox tests.api.test_auth.AuthTestCase` will run just the tests in AuthTestCase
2015-08-26 11:41:42 +01:00
Erik Johnston
efdaa5dd55 Merge pull request #252 from matrix-org/erikj/typing_loop
Don't loop over all rooms ever in typing.get_new_events_for_user
2015-08-26 11:12:25 +01:00
Erik Johnston
da51acf0e7 Remove needless existence checks 2015-08-26 11:08:23 +01:00
Erik Johnston
f4d552589e Don't loop over all rooms ever in typing.get_new_events_for_user 2015-08-26 10:51:08 +01:00
Erik Johnston
90fde4b8d7 Bump changelog and version 2015-08-25 17:49:58 +01:00
Erik Johnston
0de2aad061 Merge pull request #250 from matrix-org/erikj/generated_directory
Add config option to specify where generated files should be dumped
2015-08-25 17:40:19 +01:00
Erik Johnston
3f6f74686a Update config doc 2015-08-25 17:37:21 +01:00
Erik Johnston
82145912c3 s/--generated-directory/--keys-directory/ 2015-08-25 17:31:22 +01:00
Daniel Wagner-Hall
a2355fae7e Merge pull request #251 from matrix-org/removeadmin
Stop looking up "admin", which we never read
2015-08-25 17:23:05 +01:00
Daniel Wagner-Hall
ee3fa1a99c Merge pull request #248 from matrix-org/deviceid
Remove completely unused concepts from codebase
2015-08-25 17:19:06 +01:00
Erik Johnston
59891a294f Merge pull request #249 from matrix-org/erikj/allow_config_path_dirs
Allow specifying directories as config paths
2015-08-25 17:17:57 +01:00
Erik Johnston
3e1029fe80 Warn if we encounter unexpected files in config directories 2015-08-25 17:08:23 +01:00
Erik Johnston
af7c1397d1 Add config option to specify where generated files should be dumped 2015-08-25 16:58:01 +01:00
Daniel Wagner-Hall
460cad7c11 Merge branch 'deviceid' into removeadmin 2015-08-25 16:37:59 +01:00
Daniel Wagner-Hall
825f0875bc Fix up one more reference 2015-08-25 16:37:37 +01:00
Daniel Wagner-Hall
a9d8bd95e7 Stop looking up "admin", which we never read 2015-08-25 16:29:39 +01:00
Erik Johnston
bfb66773a4 Allow specifying directories as config files 2015-08-25 16:25:54 +01:00
Daniel Wagner-Hall
57619d6058 Re-wrap line 2015-08-25 16:25:46 +01:00
Daniel Wagner-Hall
a0b181bd17 Remove completely unused concepts from codebase
Removes device_id and ClientInfo

device_id is never actually written, and the matrix.org DB has no
non-null entries for it. Right now, it's just cluttering up code.

This doesn't remove the columns from the database, because that's
fiddly.
2015-08-25 16:23:06 +01:00
Mark Haines
1925a38f95 Merge pull request #247 from matrix-org/markjh/tox
Add a tox.ini config for synapse.
2015-08-25 16:03:55 +01:00
Erik Johnston
747535f20f Merge pull request #245 from matrix-org/erikj/configurable_client_location
Allow specifying a directory to host a web client from
2015-08-25 15:50:25 +01:00
Erik Johnston
133d90abfb Merge pull request #246 from matrix-org/erikj/config_helper_function
Add utility to parse config and print out a key
2015-08-25 15:49:21 +01:00
Mark Haines
3a20cdcd27 Add .tox to .gitignore 2015-08-25 15:45:03 +01:00
Mark Haines
d046adf4ec Set PYTHONDONTWRITEBYTECODE in the tox environment so that we don't spew .pyc files everywhere 2015-08-25 15:44:05 +01:00
Erik Johnston
1d1c303b9b Fix typo when using sys.stderr.write 2015-08-25 15:39:16 +01:00
Erik Johnston
d33f31d741 Print the correct pip install line when failing due to lack of matrix-angular-sdk 2015-08-25 15:33:23 +01:00
Mark Haines
c63df2d4e0 Prod jenkins 2015-08-25 15:22:39 +01:00
Erik Johnston
f63208a1c0 Add utility to parse config and print out a key
Usage:

```
$ python -m synapse.config read server_name -c homeserver.yaml
localhost
```
2015-08-25 15:16:31 +01:00
Mark Haines
43f2e42bfd Prod jenkins 2015-08-25 15:12:38 +01:00
Mark Haines
4bd05573e9 Prod jenkins 2015-08-25 15:03:32 +01:00
Mark Haines
12b1a47ba4 Only include demo/demo.tls.dh. Don't include any other dh file 2015-08-25 14:33:37 +01:00
Erik Johnston
37403ab06c Update the log message 2015-08-25 14:19:09 +01:00
Mark Haines
2e31dd2ad3 Add tox.ini file for synapse 2015-08-25 14:14:02 +01:00
Erik Johnston
8b52fe48b5 Revert previous commit. Instead, always download matrix-angular-sdk as a requirement, but don't complain (when we do check_requirements) if we don't have it when we start synapse. 2015-08-25 14:10:31 +01:00
Erik Johnston
d9088c923f Remove dependency on matrix-angular-sdk 2015-08-25 13:34:50 +01:00
Erik Johnston
86cef6a91b Allow specifying a directory to host a web client from 2015-08-25 12:01:23 +01:00
Mark Haines
1c847af28a Merge pull request #243 from matrix-org/markjh/remove_syutil
Replace syutil dependency with smaller, single-purpose libraries
2015-08-25 10:52:16 +01:00
Mark Haines
cf8c04948f Fix typo in module imports and package dependencies 2015-08-25 10:42:59 +01:00
Mark Haines
aa361f51dc Merge pull request #244 from matrix-org/markjh/refresh_tokens
Remove autoincrement since we increment the ID in the storage layer
2015-08-25 09:40:35 +01:00
Mark Haines
037481a033 Remove autoincrement since we increment the ID in the storage layer 2015-08-24 17:48:57 +01:00
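Allocating IDs in the storage layer looks roughly like this (a sketch; the real generator also has to coordinate with in-flight transactions):

```
import threading

# Sketch: hand out stream IDs in application code instead of relying on
# the database's AUTOINCREMENT, seeding from the current maximum.
class IdGenerator(object):
    def __init__(self, current_max):
        self._lock = threading.Lock()
        self._current = current_max

    def get_next(self):
        with self._lock:
            self._current += 1
            return self._current
```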
Mark Haines
01fc3943f1 Fix indent 2015-08-24 17:18:58 +01:00
Erik Johnston
571ac105e6 Bump version and changelog 2015-08-24 17:10:45 +01:00
Erik Johnston
51c53369a3 Do auth checks *before* persisting the event 2015-08-24 16:38:20 +01:00
Mark Haines
f093873d69 Replace syutil references in scripts 2015-08-24 16:30:35 +01:00
Erik Johnston
61f36d9939 Merge pull request #242 from matrix-org/erikj/pushers_ephemeral_events
Don't make pushers handle presence/typing events
2015-08-24 16:23:55 +01:00
Erik Johnston
f8f3d72e2b Don't make pushers handle presence/typing events 2015-08-24 16:19:43 +01:00
Mark Haines
78323ccdb3 Remove syutil dependency in favour of smaller single-purpose libraries 2015-08-24 16:17:38 +01:00
Erik Johnston
457970c724 Don't insert events into 'event_*_extremeties' tables if they're outliers 2015-08-23 13:44:23 +01:00
Erik Johnston
1bd1a43073 Actually check if event_id isn't returned by _get_state_groups 2015-08-21 14:30:34 +01:00
Erik Johnston
0f6a25f670 Update changelog 2015-08-21 13:07:56 +01:00
Erik Johnston
b9490e8cbb Update changelog 2015-08-21 13:07:37 +01:00
Erik Johnston
5dbd102470 Merge branch 'erikj/user_dedup' into release-v0.10.0 2015-08-21 12:51:14 +01:00
Erik Johnston
fd5ad0f00e Doc string 2015-08-21 11:45:43 +01:00
Erik Johnston
745b72660a Merge branch 'release-v0.10.0' of github.com:matrix-org/synapse into develop 2015-08-21 11:39:38 +01:00
Erik Johnston
42f12ad92f When logging in, fetch the user by user_id case-insensitively, *unless* there are multiple case-insensitive matches, in which case require the exact user_id 2015-08-21 11:38:44 +01:00
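As a sketch of the rule (a hypothetical helper, not the actual handler code):

```
def resolve_login_user(requested, known_user_ids):
    # A case-insensitive match is fine when it is unambiguous...
    matches = [u for u in known_user_ids if u.lower() == requested.lower()]
    if len(matches) == 1:
        return matches[0]
    # ...otherwise only an exact user_id is accepted.
    if requested in matches:
        return requested
    return None  # ambiguous or unknown: reject the login
```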
Erik Johnston
aa3c9c7bd0 Don't allow people to register user ids which differ only by case from an existing one 2015-08-21 10:57:47 +01:00
Erik Johnston
1f7642efa9 Fix bug where we didn't correctly serialize the redacted_because key over federation 2015-08-21 09:36:07 +01:00
Erik Johnston
3e9ee62db0 Add missing param in store.get_state_groups invocation 2015-08-21 09:15:13 +01:00
David Baker
21b71b6d7c Return fully qualified user_id as per spec 2015-08-20 21:54:53 +01:00
Daniel Wagner-Hall
b1e35eabf2 Merge pull request #240 from matrix-org/refresh
/tokenrefresh POST endpoint
2015-08-20 17:44:46 +01:00
Daniel Wagner-Hall
c7788685b0 Fix bad merge 2015-08-20 17:43:12 +01:00
Daniel Wagner-Hall
8c74bd8960 Fix indentation 2015-08-20 17:26:52 +01:00
Daniel Wagner-Hall
f483340b3e Merge pull request #229 from matrix-org/auth
Issue macaroons as opaque auth tokens
2015-08-20 17:25:42 +01:00
Daniel Wagner-Hall
ea570ffaeb Fix flake8 warnings 2015-08-20 17:22:41 +01:00
Mark Haines
7049e1564f Merge remote-tracking branch 'origin/master' into develop 2015-08-20 17:21:51 +01:00
Daniel Wagner-Hall
d5a825edee Merge branch 'auth' into refresh
Conflicts:
	synapse/handlers/register.py
2015-08-20 17:13:33 +01:00
Daniel Wagner-Hall
225c244aba Remove incorrect whitespace 2015-08-20 17:10:10 +01:00
Daniel Wagner-Hall
4e706ec82c Merge branch 'develop' into auth 2015-08-20 16:59:41 +01:00
Daniel Wagner-Hall
31621c2e06 Merge pull request #239 from matrix-org/pynacl
Correct pynacl version to 0.3.0
2015-08-20 16:51:21 +01:00
Daniel Wagner-Hall
f90ea3dc73 Correct pynacl version to 0.3.0
0.0.3 was a typo
2015-08-20 16:42:17 +01:00
Daniel Wagner-Hall
ce2a7ed6e4 Merge branch 'develop' into auth 2015-08-20 16:28:36 +01:00
Daniel Wagner-Hall
e8cf77fa49 Merge branch 'develop' into refresh
Conflicts:
	synapse/rest/client/v1/login.py
2015-08-20 16:25:40 +01:00
Daniel Wagner-Hall
cecbd636e9 /tokenrefresh POST endpoint
This allows refresh tokens to be exchanged for (access_token,
refresh_token).

It also starts issuing them on login, though no clients currently
interpret them.
2015-08-20 16:21:35 +01:00
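From a client's point of view the exchange looks something like the following sketch (the v2_alpha mount point is an assumption based on the endpoint name in the commit):

```
import requests

OLD_REFRESH_TOKEN = "..."  # issued alongside an access token at login

resp = requests.post(
    # Assumed path for the new endpoint.
    "https://localhost:8448/_matrix/client/v2_alpha/tokenrefresh",
    json={"refresh_token": OLD_REFRESH_TOKEN},
    verify=False,  # demo homeservers use self-signed certs
)
body = resp.json()
access_token = body["access_token"]    # fresh short-lived token
refresh_token = body["refresh_token"]  # replaces the old refresh token
```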
Erik Johnston
b578c822e3 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.10.0 2015-08-20 16:10:14 +01:00
Erik Johnston
3befc9ccc3 Merge branch 'release-v0.10.0' of github.com:matrix-org/synapse into develop 2015-08-20 16:09:27 +01:00
Mark Haines
d5c31e01f2 Merge pull request #237 from matrix-org/markjh/readme-rst-formatting
Clean up some of the reStructuredText formatting in the README.rst
2015-08-20 16:08:17 +01:00
Mark Haines
cb8201ba12 Merge pull request #236 from matrix-org/markjh/upgrade-instructions
Add generic update instructions to UPGRADE.rst
2015-08-20 16:08:05 +01:00
Erik Johnston
c141d47a28 Merge pull request #235 from matrix-org/erikj/room_avatars
Add m.room.avatar to default power levels.
2015-08-20 16:07:49 +01:00
Daniel Wagner-Hall
13a6517d89 s/by_token/by_access_token/g
We're about to have two kinds of token, access and refresh
2015-08-20 16:01:29 +01:00
Erik Johnston
61cd03466f Merge pull request #238 from matrix-org/fix_set_password
Fix set password
2015-08-20 15:39:05 +01:00
David Baker
f764f92647 Remove spurious extra arg to set_password 2015-08-20 15:35:54 +01:00
David Baker
ca0d28ef34 Another use of check_password that got missed in the yield fix 2015-08-20 15:35:14 +01:00
Mark Haines
8a951540f6 Further formatting clean ups 2015-08-20 15:22:26 +01:00
Mark Haines
482648123f Clean up some of the reStructuredText formatting in the README.rst 2015-08-20 15:20:07 +01:00
Mark Haines
fd88ea19c0 Tweak the wording a bit 2015-08-20 15:12:44 +01:00
Mark Haines
bb9611bd46 Add generic update instructions to UPGRADE.rst and add link to them from the README.rst 2015-08-20 15:08:18 +01:00
Erik Johnston
9b63def388 Add m.room.avatar to default power levels. Change default required power levels of such events to 50 2015-08-20 14:35:40 +01:00
Erik Johnston
23b21e5215 Update changelog 2015-08-20 14:25:57 +01:00
Erik Johnston
9d720223f2 Bump version and changelog 2015-08-20 14:12:01 +01:00
Daniel Wagner-Hall
617501dd2a Move token generation to auth handler
I prefer the auth handler to worry about all auth, and register to call
into it as needed, than to smatter auth logic between the two.
2015-08-20 11:35:56 +01:00
Erik Johnston
099ce4bc38 Merge pull request #231 from matrix-org/erikj/pushers_store_last_token
Push: store the 'last_token' in the db, even if we processed no events
2015-08-20 11:27:31 +01:00
Mark Haines
22346a0ee7 Merge pull request #206 from matrix-org/erikj/generate_presice_thumbnails
Always return a thumbnail of the requested size.
2015-08-20 11:27:15 +01:00
Erik Johnston
cbd053bb8f Merge pull request #233 from matrix-org/erikj/canonical_alias
Add server side support for canonical aliases
2015-08-20 11:26:09 +01:00
David Baker
be27d81808 Merge pull request #234 from matrix-org/email_login
Support logging in with email addresses (or other threepids)
2015-08-20 11:15:42 +01:00
Daniel Wagner-Hall
ade5342752 Merge branch 'auth' into refresh 2015-08-20 11:03:47 +01:00
David Baker
4cf302de5b Comma comma comma comma comma chameleon 2015-08-20 10:31:18 +01:00
David Baker
c50ad14bae Merge branch 'develop' into email_login 2015-08-20 10:16:01 +01:00
Mark Haines
a0b8e5f2fe Merge pull request #211 from matrix-org/email_in_use
Changes for unique emails
2015-08-20 10:04:04 +01:00
Erik Johnston
aadb2238c9 Check that the canonical room alias actually points to the room 2015-08-20 09:55:04 +01:00
Daniel Wagner-Hall
f9e7493ac2 Merge branch 'develop' into auth 2015-08-19 15:20:09 +01:00
Daniel Wagner-Hall
ecc59ae66e Merge branch 'master' into auth 2015-08-19 15:19:37 +01:00
Daniel Wagner-Hall
70e265e695 Re-add whitespace around caveat operators 2015-08-19 14:30:31 +01:00
Erik Johnston
09d23b6209 Merge pull request #232 from matrix-org/erikj/appservice_joined_rooms
Don't get appservice-interested rooms in RoomHandler.get_joined_rooms_for_users
2015-08-19 13:50:40 +01:00
Erik Johnston
daa01842f8 Don't get appservice-interested rooms in RoomHandler.get_joined_rooms_for_users 2015-08-19 13:46:03 +01:00
Daniel Wagner-Hall
7f08ebb772 Switch to pymacaroons-pynacl 2015-08-19 13:21:36 +01:00
Erik Johnston
d7272f8d9d Add canonical alias to the default power levels 2015-08-19 12:03:09 +01:00
Erik Johnston
78fa346b07 Store the 'last_token' in the db, even if we processed no events 2015-08-19 10:08:31 +01:00
Erik Johnston
a45ec7c651 Block on storing the current last_tokens 2015-08-19 10:08:12 +01:00
Erik Johnston
40da1f200d Remove an access token log line 2015-08-19 09:41:07 +01:00
Erik Johnston
abc6986a24 Fix regression where we incorrectly responded with a 200 to /login 2015-08-19 09:31:11 +01:00
Daniel Wagner-Hall
ce832c38d4 Remove padding space around caveat operators 2015-08-18 17:39:26 +01:00
Daniel Wagner-Hall
42e858daeb Fix units in test
I changed the non-test code to use seconds instead of ms, but not the test
2015-08-18 17:38:37 +01:00
Erik Johnston
e624cdec64 Merge pull request #228 from matrix-org/erikj/_get_state_for_groups
Ensure we never return a None event from _get_state_for_groups
2015-08-18 16:30:17 +01:00
Erik Johnston
c3dd2ecd5e Merge pull request #230 from matrix-org/erikj/appservice_auth_entity
Set request.authenticated_entity for application services
2015-08-18 16:30:11 +01:00
Erik Johnston
38a965b816 Merge pull request #227 from matrix-org/erikj/receipts_take2
Re-enable receipts API.
2015-08-18 16:30:04 +01:00
Erik Johnston
a82938416d Remove newline because vertical whitespace makes mjark sad 2015-08-18 16:28:13 +01:00
Erik Johnston
0bfdaf1f4f Rejig the code to make it nicer 2015-08-18 16:26:07 +01:00
Erik Johnston
a5cbd20001 Merge pull request #225 from matrix-org/erikj/reactor_metrics
Fix pending_calls metric to not lie
2015-08-18 16:21:11 +01:00
Erik Johnston
128ed32e6b Bump size of get_presence_state cache 2015-08-18 15:51:23 +01:00
Daniel Wagner-Hall
3e6fdfda00 Fix some formatting to use tuples 2015-08-18 15:18:50 +01:00
Erik Johnston
ee59af9ac0 Set request.authenticated_entity for application services 2015-08-18 15:17:47 +01:00
Daniel Wagner-Hall
1469141023 Merge branch 'develop' into auth 2015-08-18 14:43:44 +01:00
Daniel Wagner-Hall
cacdb529ab Remove accidentally added file 2015-08-18 14:27:23 +01:00
Daniel Wagner-Hall
2d3462714e Issue macaroons as opaque auth tokens
This just replaces random bytes with macaroons. The macaroons are not
inspected by the client or server.

In particular, they claim to have an expiry time, but nothing verifies
that they have not expired.

Follow-up commits will actually enforce the expiration, and allow for
token refresh.

See https://bit.ly/matrix-auth for more information
2015-08-18 14:22:02 +01:00
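Minting such a token with pymacaroons looks roughly like this sketch (the exact caveat strings are an assumption; see https://bit.ly/matrix-auth for the design):

```
import time
from pymacaroons import Macaroon

def mint_access_token(server_name, secret_key, user_id, lifetime_ms=3600000):
    m = Macaroon(location=server_name, identifier="key", key=secret_key)
    m.add_first_party_caveat("gen = 1")
    m.add_first_party_caveat("user_id = %s" % (user_id,))
    expiry_ms = int(time.time() * 1000) + lifetime_ms
    # Nothing verifies this caveat yet (see above); follow-up commits
    # enforce the expiry and add token refresh.
    m.add_first_party_caveat("time < %d" % (expiry_ms,))
    return m.serialize()
```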
Erik Johnston
f704c10f29 Rename unhelpful variable name 2015-08-18 11:54:03 +01:00
Erik Johnston
6e7d36a72c Also check for presence of 'threadCallQueue' in reactor 2015-08-18 11:51:08 +01:00
Erik Johnston
d3da63f766 Use more helpful variable names 2015-08-18 11:47:00 +01:00
Erik Johnston
8199475ce0 Ensure we never return a None event from _get_state_for_groups 2015-08-18 11:44:10 +01:00
Erik Johnston
0d4abf7777 Typo 2015-08-18 11:19:08 +01:00
Erik Johnston
e55291ce5e None check 2015-08-18 11:17:37 +01:00
Erik Johnston
8e254862f4 Don't assume @cachedList function returns keys for everything 2015-08-18 11:11:33 +01:00
Erik Johnston
85d0bc3bdc Reduce cache size from obscenely large to quite large 2015-08-18 11:00:38 +01:00
Erik Johnston
cfc503681f Comments 2015-08-18 10:49:23 +01:00
Erik Johnston
dc2a105fca Merge pull request #226 from matrix-org/erikj/room_presence
Add and use cached batched storage.get_state function.
2015-08-18 10:43:50 +01:00
Erik Johnston
83eb627b5a More helpful variable names 2015-08-18 10:33:11 +01:00
Erik Johnston
776ee6d92b Doc strings 2015-08-18 10:30:07 +01:00
Erik Johnston
f72ed6c6a3 Remove debug try/catch 2015-08-18 10:29:49 +01:00
Mark Haines
8899df13bf Merge pull request #208 from matrix-org/markjh/end-to-end-key-federation
Federation for end-to-end key requests.
2015-08-18 09:12:54 +01:00
Erik Johnston
8f4165628b Add index receipts_linearized_room_stream 2015-08-17 14:43:54 +01:00
Erik Johnston
d3d582bc1c Remove unused import 2015-08-17 13:38:09 +01:00
Erik Johnston
4d8e1e1f9e Remove added unused methods 2015-08-17 13:36:07 +01:00
Erik Johnston
afef6f5d16 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/receipts_take2 2015-08-17 13:23:44 +01:00
Erik Johnston
2d97e65558 Remember to invalidate caches 2015-08-17 10:46:55 +01:00
Erik Johnston
1a9510bb84 Implement a batched presence_handler.get_state and use it 2015-08-17 10:40:23 +01:00
Erik Johnston
47abebfd6d Add batched version of store.get_presence_state 2015-08-17 09:50:50 +01:00
Erik Johnston
f9d4da7f45 Fix bug where we were leaking None into state event lists 2015-08-17 09:39:45 +01:00
Daniel Wagner-Hall
30883d8409 Merge pull request #221 from matrix-org/auth
Simplify LoginHandler and AuthHandler
2015-08-14 17:02:22 +01:00
Erik Johnston
891dfd90bd Fix pending_calls metric to not lie 2015-08-14 15:43:11 +01:00
Erik Johnston
68b255c5a1 Batch _get_linearized_receipts_for_rooms 2015-08-14 15:06:22 +01:00
Mark Haines
95b0f5449d Fix flake8 warning 2015-08-13 17:34:22 +01:00
Erik Johnston
129ee4e149 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/receipts_take2 2015-08-13 17:28:43 +01:00
Mark Haines
c5966b2a97 Merge remote-tracking branch 'origin/develop' into markjh/end-to-end-key-federation 2015-08-13 17:27:53 +01:00
Mark Haines
0cceb2ac92 Add a few strategic new lines to break up the on_query_client_keys and on_claim_client_keys methods in federation_server.py 2015-08-13 17:27:46 +01:00
Erik Johnston
d6bcc68ea7 Merge pull request #219 from matrix-org/erikj/dictionary_cache
Dictionary and list caches
2015-08-13 17:27:08 +01:00
Mark Haines
b16cd18a86 Merge remote-tracking branch 'origin/develop' into erikj/generate_presice_thumbnails 2015-08-13 17:23:39 +01:00
Erik Johnston
9f7f228ec2 Remove pointless map 2015-08-13 17:20:59 +01:00
Erik Johnston
3d77e56c12 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-13 17:18:52 +01:00
Erik Johnston
d884047d34 Merge pull request #224 from matrix-org/erikj/reactor_metrics
Add some metrics about the reactor
2015-08-13 17:17:07 +01:00
Erik Johnston
2bb2c02571 Remove some vertical space 2015-08-13 17:11:30 +01:00
Mark Haines
3d1cdda762 Merge branch 'develop' into erikj/reactor_metrics 2015-08-13 17:03:58 +01:00
Erik Johnston
57877b01d7 Replace list comprehension 2015-08-13 17:00:17 +01:00
Erik Johnston
5db5677969 Add metrics to the receipts cache 2015-08-13 16:58:23 +01:00
Erik Johnston
7e77a82c5f Re-enable receipts 2015-08-13 16:58:10 +01:00
Mark Haines
7eb4d626ba Add max-line-length to the flake8 section of setup.cfg 2015-08-13 13:12:33 +01:00
Erik Johnston
06750140f6 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-13 11:55:20 +01:00
Erik Johnston
adbd720fab PEP8 2015-08-13 11:47:38 +01:00
Erik Johnston
8b7ce2945b Merge branch 'erikj/reactor_metrics' into erikj/dictionary_cache 2015-08-13 11:42:22 +01:00
Erik Johnston
a6c27de1aa Don't time getDelayedCalls 2015-08-13 11:41:57 +01:00
Erik Johnston
c044aca1fd Merge branch 'erikj/reactor_metrics' into erikj/dictionary_cache 2015-08-13 11:39:38 +01:00
Erik Johnston
ba5d34a832 Add some metrics about the reactor 2015-08-13 11:38:59 +01:00
Mark Haines
6a191d62ed Merge pull request #173 from matrix-org/markjh/twisted-15
Update to Twisted-15.2.1.
2015-08-12 17:28:47 +01:00
Mark Haines
21ac8be5f7 Depend on Twisted>=15.1 rather than pinning to a particular version 2015-08-12 17:25:13 +01:00
Erik Johnston
3e4e367f09 Merge pull request #223 from matrix-org/markjh/enable_demo_registration
enable registration in the demo servers
2015-08-12 17:24:16 +01:00
Erik Johnston
0fbed2a8fa Comment 2015-08-12 17:22:54 +01:00
Mark Haines
998a72d4d9 Merge branch 'develop' into markjh/twisted-15
Conflicts:
	synapse/http/matrixfederationclient.py
2015-08-12 17:21:14 +01:00
Erik Johnston
c10ac7806e Explain why we're prefilling dict with Nones 2015-08-12 17:16:30 +01:00
Erik Johnston
101ee3fd00 Better variable name 2015-08-12 17:08:05 +01:00
Erik Johnston
df361d08f7 Split _get_state_for_group_from_cache into two 2015-08-12 17:06:21 +01:00
Erik Johnston
7b0e797080 Fix _filter_events_for_client 2015-08-12 17:05:24 +01:00
Mark Haines
2eb91e6694 enable registration in the demo servers 2015-08-12 16:53:30 +01:00
Erik Johnston
cfa62007a3 Docstring 2015-08-12 16:42:46 +01:00
Daniel Wagner-Hall
5ce903e2f7 Merge password checking implementations 2015-08-12 16:09:19 +01:00
Erik Johnston
a7eeb34c64 Simplify staggered deferred lists 2015-08-12 16:02:05 +01:00
Erik Johnston
f7e2f981ea Use list comprehension instead of filter 2015-08-12 16:01:10 +01:00
Daniel Wagner-Hall
bcc1d34d35 Merge branch 'develop' into auth 2015-08-12 15:58:52 +01:00
Daniel Wagner-Hall
f4122c64b5 Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-08-12 15:58:35 +01:00
Daniel Wagner-Hall
415c2f0549 Simplify LoginHandler and AuthHandler
* Merge LoginHandler -> AuthHandler
 * Add a bunch of documentation
 * Improve some naming
 * Remove unused branches

I will start merging the actual logic of the two handlers shortly
2015-08-12 15:49:37 +01:00
David Baker
f43041aacd Check absent before trying to access keys 2015-08-12 15:44:08 +01:00
David Baker
73605f8070 Just leaving off the $ is fine. r* == registerrrrrrrrr 2015-08-12 15:40:54 +01:00
Mark Haines
de3b7b55d6 Doc-string for config ultility function 2015-08-12 14:29:17 +01:00
Erik Johnston
d46208c12c Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-12 14:28:43 +01:00
Erik Johnston
4f11a5b2b5 Merge pull request #220 from matrix-org/markjh/generate_keys
Fix the --generate-keys option.
2015-08-12 14:23:54 +01:00
Mark Haines
7bbaab9432 Fix the --generate-keys option. Make it do the same thing as --generate-config does when the config file exists, but without printing a warning 2015-08-12 11:57:37 +01:00
Daniel Wagner-Hall
7b49236b37 Merge pull request #216 from matrix-org/auth
Clean up some docs and redundant fluff
2015-08-12 10:55:42 +01:00
Mark Haines
fdb724cb70 Add config option for setting the list of thumbnail sizes to precalculate 2015-08-12 10:55:27 +01:00
Mark Haines
7e3d1c7d92 Make a config option for whether to generate new thumbnail sizes dynamically 2015-08-12 10:54:38 +01:00
Erik Johnston
d7451e0f22 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-12 10:30:30 +01:00
Erik Johnston
4807616e16 Wire up the dictionarycache to the metrics 2015-08-12 10:13:35 +01:00
Mark Haines
275f7c987c Merge pull request #182 from matrix-org/manu/fix_no_rate_limit_in_federation_demo
Federation demo start.sh: Fixed --no-rate-limit param in the script
2015-08-12 09:39:57 +01:00
Mark Haines
b24d7ebd6e Fix the cleanup script to use the right $DIR 2015-08-12 09:39:07 +01:00
Erik Johnston
2df8dd9b37 Move all the caches into their own package, synapse.util.caches 2015-08-11 18:00:59 +01:00
Daniel Wagner-Hall
a23a760b3f Merge branch 'develop' into auth 2015-08-11 17:41:29 +01:00
Erik Johnston
7568fe880d Merge pull request #218 from matrix-org/mockfix
Remove call to "recently" removed function in mock
2015-08-11 17:00:17 +01:00
Daniel Wagner-Hall
4ff0228c25 Remove call to recently removed function in mock 2015-08-11 16:56:30 +01:00
Daniel Wagner-Hall
dcd5983fe4 Remove call to recently removed function in mock 2015-08-11 16:54:06 +01:00
Daniel Wagner-Hall
45610305ea Add missing space because linter 2015-08-11 16:43:27 +01:00
Daniel Wagner-Hall
88e03da39f Minor docs cleanup 2015-08-11 16:35:28 +01:00
Daniel Wagner-Hall
9dba813234 Remove redundant if-guard
The startswith("@") does the job
2015-08-11 16:34:17 +01:00
Erik Johnston
53a817518b Comments 2015-08-11 11:40:40 +01:00
Erik Johnston
6eaa116867 Comment 2015-08-11 11:35:24 +01:00
Erik Johnston
4762c276cb Docs 2015-08-11 11:33:41 +01:00
Erik Johnston
dc8399ee00 Remove debug loggers 2015-08-11 11:30:59 +01:00
Erik Johnston
1b994a97dd Fix application of ACLs 2015-08-11 10:41:40 +01:00
Erik Johnston
10b874067b Fix state cache 2015-08-11 09:12:41 +01:00
Erik Johnston
017b798e4f Clean up StateStore 2015-08-10 15:01:06 +01:00
Erik Johnston
2c019eea11 Remove unused function 2015-08-10 14:44:41 +01:00
Erik Johnston
bb0a475c30 Comments 2015-08-10 14:27:38 +01:00
Erik Johnston
dcefac3b06 Comments 2015-08-10 14:16:24 +01:00
Mark Haines
559c51debc Use TypeError instead of ValueError and give a nicer error message
when someone calls Cache.invalidate with the wrong type.
2015-08-10 14:07:17 +01:00
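The check amounts to something like this sketch:

```
class Cache(object):
    def invalidate(self, key):
        # Passing the wrong argument *type* is a TypeError, not a
        # ValueError, and the message says what was expected.
        if not isinstance(key, tuple):
            raise TypeError(
                "The cache key must be a tuple, not %r" % (type(key).__name__,)
            )
        # ... evict the entry for `key` ...
```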
Erik Johnston
6f274f7e13 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-10 13:53:09 +01:00
Erik Johnston
7ce71f2ffc Merge branch 'erikj/cache_varargs_interface' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-10 13:47:51 +01:00
Erik Johnston
8c3a62b5c7 Merge pull request #215 from matrix-org/erikj/cache_varargs_interface
Change Cache to not use *args in its interface
2015-08-10 13:47:45 +01:00
Erik Johnston
86eaaa885b Rename keyargs to args in CacheDescriptor 2015-08-10 13:44:44 +01:00
Erik Johnston
e0b6e49466 Merge branch 'erikj/cache_varargs_interface' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-10 10:39:22 +01:00
Erik Johnston
2cd6cb9f65 Rename keyargs to args in Cache 2015-08-10 10:38:47 +01:00
Erik Johnston
aa88582e00 Do bounds check 2015-08-10 10:08:15 +01:00
Erik Johnston
5119e416e8 Line length 2015-08-10 10:05:30 +01:00
Erik Johnston
8f04b6fa7a Merge branch 'erikj/cache_varargs_interface' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-07 19:30:25 +01:00
Erik Johnston
7dec0b2bee Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-07 19:28:39 +01:00
Erik Johnston
06218ab125 Merge pull request #212 from matrix-org/erikj/cache_deferreds
Make CacheDescriptor cache deferreds rather than the deferreds' values
2015-08-07 19:28:05 +01:00
Erik Johnston
2352974aab Merge branch 'erikj/cache_deferreds' of github.com:matrix-org/synapse into erikj/cache_varargs_interface 2015-08-07 19:26:54 +01:00
Erik Johnston
9c5385b53a s/observed/observer/ 2015-08-07 19:26:38 +01:00
Erik Johnston
ffab798a38 Merge branch 'erikj/cache_deferreds' of github.com:matrix-org/synapse into erikj/cache_varargs_interface 2015-08-07 19:18:47 +01:00
Erik Johnston
62126c996c Propagate stale cache errors to calling functions 2015-08-07 19:17:58 +01:00
Erik Johnston
3213ff630c Remove unnecessary cache 2015-08-07 19:14:05 +01:00
Erik Johnston
20addfa358 Change Cache to not use *args in its interface 2015-08-07 18:32:47 +01:00
Erik Johnston
9eb5b23d3a Batch up various DB requests for event -> state 2015-08-07 18:16:02 +01:00
Erik Johnston
0211890134 Implement a CacheListDescriptor 2015-08-07 18:14:49 +01:00
Erik Johnston
ffdb8c3828 Don't be too enthusiastic with defer.gatherResults 2015-08-07 18:13:48 +01:00
Paul Evans
e69b669083 Merge pull request #213 from matrix-org/paul/SYN-420
Three small improvements to help debian package (SYN-420)
2015-08-07 17:49:54 +01:00
Paul "LeoNerd" Evans
0db40d3e93 Don't complain about extra .pyc files we find while hunting for database schemas 2015-08-07 17:22:11 +01:00
Paul "LeoNerd" Evans
e3c8e2c13c Add a --generate-keys option 2015-08-07 16:42:27 +01:00
Paul "LeoNerd" Evans
efe60d5e8c Only print the pidfile path on startup if requested by a commandline flag 2015-08-07 16:36:42 +01:00
Erik Johnston
b2c7bd4b09 Cache get_recent_events_for_room 2015-08-07 14:42:34 +01:00
Erik Johnston
b3768ec10a Remove unnecessary cache 2015-08-07 13:41:05 +01:00
Erik Johnston
b8e386db59 Change Cache to not use *args in its interface 2015-08-07 11:52:21 +01:00
Erik Johnston
fe994e728f Store absence of state in cache 2015-08-07 10:17:38 +01:00
Matthew Hodgson
0ac61b1c78 hacky support for video for FS CC DD 2015-08-06 18:18:36 +01:00
Matthew Hodgson
0caf30f94b hacky support for video for FS CC DD 2015-08-06 18:18:16 +01:00
Erik Johnston
1d08bf7c17 Merge branch 'erikj/cache_deferreds' into erikj/dictionary_cache 2015-08-06 14:03:15 +01:00
Erik Johnston
63b1eaf32c Docs 2015-08-06 14:02:50 +01:00
Erik Johnston
b811c98574 Remove failed deferreds from cache 2015-08-06 14:01:27 +01:00
Erik Johnston
433314cc34 Re-implement DEBUG_CACHES flag 2015-08-06 14:01:05 +01:00
Erik Johnston
8049c9a71e Merge pull request #209 from matrix-org/erikj/cached_keyword_args
Add support for using keyword arguments with cached functions
2015-08-06 13:52:49 +01:00
Erik Johnston
f596ff402e Merge branch 'erikj/cache_deferreds' into erikj/dictionary_cache 2015-08-06 13:37:56 +01:00
Erik Johnston
2efb93af52 Merge branch 'erikj/cached_keyword_args' into erikj/cache_deferreds 2015-08-06 13:35:28 +01:00
Erik Johnston
953dbd28a7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/cached_keyword_args 2015-08-06 13:35:03 +01:00
Erik Johnston
7eea3e356f Make @cached cache deferreds rather than the deferreds' values 2015-08-06 13:33:34 +01:00
Erik Johnston
3e1b77efc2 Merge branch 'erikj/cached_keyword_args' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-05 16:45:56 +01:00
Erik Johnston
b52b4a84ec Merge branch 'develop' of github.com:matrix-org/synapse into erikj/dictionary_cache 2015-08-05 15:41:20 +01:00
Erik Johnston
1e62a3d3a9 Up the cache size for 'get_joined_hosts_for_room' and 'get_users_in_room' 2015-08-05 15:40:40 +01:00
Erik Johnston
a89559d797 Use LRU cache by default 2015-08-05 15:39:47 +01:00
Erik Johnston
07507643cb Use dictionary cache to do group -> state fetching 2015-08-05 15:11:42 +01:00
David Baker
185ac7ee6c Allow sign in using email address 2015-08-04 16:29:54 +01:00
David Baker
a0dea6eaed Remember to yield: not much point testing if a deferred is not None 2015-08-04 16:18:17 +01:00
Erik Johnston
c67ba143fa Move DictionaryCache 2015-08-04 15:58:28 +01:00
Erik Johnston
e7768e77f5 Add basic dictionary cache 2015-08-04 15:56:56 +01:00
David Baker
883aabe423 split long line 2015-08-04 15:20:35 +01:00
David Baker
07ad03d5df Fix tests 2015-08-04 15:18:40 +01:00
David Baker
e124128542 Bump schema version 2015-08-04 14:50:31 +01:00
David Baker
c77048e12f Add endpoint that proxies ID server request token and errors if the given email is in use on this Home Server. 2015-08-04 14:37:09 +01:00
Erik Johnston
2e35a733cc Merge branch 'develop' of github.com:matrix-org/synapse into erikj/acl_perf 2015-08-04 13:00:52 +01:00
Erik Johnston
413a4c289b Add comment 2015-08-04 11:08:07 +01:00
Erik Johnston
4d6cb8814e Speed up event filtering (for ACL) logic 2015-08-04 09:32:23 +01:00
David Baker
7148aaf5d0 Don't try & check the username if we don't have one (which we won't if it's been saved in the auth layer) 2015-08-03 17:03:27 +01:00
David Baker
28d07a02e4 Add vector.im as trusted ID server 2015-08-03 15:31:21 +01:00
David Baker
2c963054f8 Merge pull request #210 from matrix-org/reg-v2a-password-skip
v2_alpha /register fixes for Application Services
2015-07-29 14:46:17 +01:00
Kegan Dougal
11b0a34074 Use the same reg paths as register v1 for ASes.
Namely this means using registration_handler.appservice_register.
2015-07-29 10:00:54 +01:00
Matthew Hodgson
c772dffc9f improve OS X instructions and remove all the leading $'s to make it easier to c+p commands 2015-07-29 09:39:55 +01:00
Kegan Dougal
a4d62ba36a Fix v2_alpha registration. Add unit tests.
V2 Registration forced everyone (including ASes) to create a password for a
user, when ASes should be able to omit passwords. Also unbreak AS registration
in general which checked too early if the given username was claimed by an AS;
it was checked before knowing if the AS was the one doing the registration! Add
unit tests for AS reg, user reg and disabled_registration flag.
2015-07-28 17:34:12 +01:00
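A sketch of the corrected ordering (all helper names here are hypothetical stand-ins, not synapse's real handlers):

```
class RegistrationError(Exception):
    pass

# Hypothetical stubs standing in for the real handlers.
def appservice_register(appservice, username): ...
def normal_register(username, password): ...
def is_claimed_by_appservice(username): return False

def register(body, appservice=None):
    username = body.get("username")
    if appservice is not None:
        # Decide *first* whether the AS is doing the registration: no
        # password is required, and it may claim its own reserved names.
        return appservice_register(appservice, username)
    if is_claimed_by_appservice(username):
        raise RegistrationError("username reserved by an application service")
    if not body.get("password"):
        raise RegistrationError("password required")
    return normal_register(username, body["password"])
```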
Erik Johnston
c472d6107e Merge pull request #207 from matrix-org/erikj/generate_local_thumbnails_on_thread
Generate local thumbnails on a thread
2015-07-27 14:57:25 +01:00
Erik Johnston
39e21ea51c Add support for using keyword arguments with cached functions 2015-07-27 13:57:29 +01:00
Mark Haines
2da3b1e60b Get the end-to-end key federation working 2015-07-24 18:26:46 +01:00
Mark Haines
62c010283d Add federation support for end-to-end key requests 2015-07-23 16:03:38 +01:00
Erik Johnston
459085184c Factor out thumbnail() 2015-07-23 15:59:53 +01:00
Erik Johnston
2b4f47db9c Generate local thumbnails on a thread 2015-07-23 14:52:29 +01:00
Erik Johnston
33d83f3615 Fix remote thumbnailing 2015-07-23 14:24:21 +01:00
Erik Johnston
ff7c2e41de Always return a thumbnail of the requested size.
Before, we returned a thumbnail that was at least as big (if possible)
as the requested size. Now, if we don't have a thumbnail of the given
size we generate (and persist) one of that size.
2015-07-23 14:12:49 +01:00
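With Pillow, generating an exact-size "crop" thumbnail on demand looks roughly like this (a sketch, not the media-repo code):

```
from PIL import Image, ImageOps

def generate_crop_thumbnail(src_path, dst_path, width, height):
    # Scale and centre-crop so the output is exactly width x height,
    # whatever the source aspect ratio; persist it for next time.
    with Image.open(src_path) as img:
        ImageOps.fit(img, (width, height), Image.LANCZOS).save(dst_path)
```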
Mark Haines
6886bba988 Merge pull request #205 from matrix-org/erikj/pick_largest_thumbnail
Pick larger than desired thumbnail for 'crop'
2015-07-23 11:16:02 +01:00
Erik Johnston
103e1c2431 Pick larger than desired thumbnail for 'crop' 2015-07-23 11:12:49 +01:00
Matrix
4e2e67fd50 Disable receipts for now 2015-07-22 16:13:46 +01:00
David Baker
a56eccbbfc Query for all the ones we were asked about, not just the last... 2015-07-21 16:38:16 -07:00
David Baker
20c0324e9c Doesn't seem to make any difference: guess it does work with the object reference 2015-07-21 16:21:37 -07:00
David Baker
cf7a40b08a I think this was what was intended... 2015-07-21 16:08:00 -07:00
Erik Johnston
90dbd71c13 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-07-21 09:25:30 +01:00
Mark Haines
3b5823c74d s/take/claim/ for end to end key APIs 2015-07-20 18:23:54 +01:00
Erik Johnston
bde97b988a Merge pull request #204 from illicitonion/develop
Improve naming
2015-07-20 14:41:14 +01:00
Daniel Wagner-Hall
53d1174aa9 Improve naming 2015-07-20 06:32:12 -07:00
Kegan Dougal
ddef5ea126 Remove semicolon. 2015-07-20 14:02:36 +01:00
Kegan Dougal
b6ee0585bd Parse the ID given to /invite|ban|kick to make sure it looks like a user ID. 2015-07-20 13:55:19 +01:00
Matrix
4f973eb657 Up default cache size for _RoomStreamChangeCache 2015-07-18 19:07:33 +01:00
Matrix
4cab2cfa34 Don't do any database hits in receipt handling if from_key == to_key 2015-07-18 19:07:12 +01:00
Erik Johnston
b6d4a4c6d8 Merge pull request #199 from matrix-org/erikj/receipts
Implement read receipts.
2015-07-16 18:18:36 +01:00
Erik Johnston
d155b318d2 Merge pull request #203 from matrix-org/erikj/room_creation_presets
Implement presets at room creation
2015-07-16 18:18:11 +01:00
Erik Johnston
a2ed7f437c Merge pull request #202 from matrix-org/erikj/power_level_sanity
Change power level semantics.
2015-07-16 18:18:04 +01:00
Erik Johnston
c456d17daf Implement specifying custom initial state for /createRoom 2015-07-16 15:25:29 +01:00
David Baker
09489499e7 pep8 + debug line 2015-07-15 19:39:18 +01:00
David Baker
4da05fa0ae Add back in support for remembering parameters submitted to a user-interactive auth call. 2015-07-15 19:28:57 +01:00
Matthew Hodgson
8cedf3ce95 bump up image quality a bit more as it looks crap 2015-07-14 23:53:13 +01:00
Erik Johnston
baa55fb69e Merge pull request #193 from matrix-org/erikj/bulk_persist_event
Add bulk insert events API
2015-07-14 10:49:24 +01:00
Erik Johnston
002a44ac1a s/everyone_ops/original_invitees_have_ops/ 2015-07-14 10:37:42 +01:00
David Baker
62b4b72fe4 Close, but no cigar. 2015-07-14 10:33:25 +01:00
Erik Johnston
b49a30a972 Capitalize constants 2015-07-14 10:20:31 +01:00
Erik Johnston
4624d6035e Docs 2015-07-14 10:19:07 +01:00
Erik Johnston
d5cc794598 Implement presets at room creation 2015-07-13 16:56:08 +01:00
Erik Johnston
5989637f37 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/receipts 2015-07-13 13:50:57 +01:00
Erik Johnston
016c089f13 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/power_level_sanity 2015-07-13 13:48:13 +01:00
Erik Johnston
e5991af629 Comments 2015-07-13 13:30:43 +01:00
Erik Johnston
17bb9a7eb9 Remove commented out code 2015-07-10 14:07:57 +01:00
Erik Johnston
a5ea22d468 Sanitize power level checks 2015-07-10 14:05:38 +01:00
Erik Johnston
7e3b14fe78 You shouldn't be able to ban/kick users with higher power levels 2015-07-10 14:05:38 +01:00
Erik Johnston
532fcc997a Merge pull request #196 from matrix-org/erikj/room_history
Add ability to restrict room history.
2015-07-10 13:47:04 +01:00
Erik Johnston
0b3389bcd2 Merge pull request #194 from matrix-org/erikj/bulk_verify_sigs
Implement bulk verify_signed_json API
2015-07-10 13:46:53 +01:00
Erik Johnston
0d7f0febf4 Uniquely name unique constraint 2015-07-10 13:43:03 +01:00
Erik Johnston
b7cb37b189 Merge pull request #198 from matrix-org/markjh/client-end-to-end-key-management
Client end to end key management API
2015-07-10 13:36:17 +01:00
Mark Haines
a01097d60b Assume that each device for a user has only one of each type of key 2015-07-10 13:26:18 +01:00
Erik Johnston
f3049d0b81 Small tweaks to SAML2 configuration.
- Add saml2 config docs to default config.
- Use existence of saml2 config to indicate if saml2 should be enabled.
2015-07-10 10:50:14 +01:00
Erik Johnston
a887efa07a Add Muthu Subramanian to AUTHORS 2015-07-10 10:37:24 +01:00
Erik Johnston
9158ad1abb Merge pull request #201 from EricssonResearch/msba/saml2-develop
Integrate SAML2 basic authentication - uses pysaml2
2015-07-10 10:25:56 +01:00
Erik Johnston
b5f0d73ea3 Add comment 2015-07-09 17:09:26 +01:00
Erik Johnston
ed88720952 Handle error slightly better 2015-07-09 16:14:46 +01:00
Erik Johnston
f0979afdb0 Remove spurious comment 2015-07-09 16:02:07 +01:00
Mark Haines
bf0d59ed30 Don't bother with a timeout for one time keys on the server. 2015-07-09 14:04:03 +01:00
Erik Johnston
c2d08ca62a Integer timestamps 2015-07-09 13:15:34 +01:00
Erik Johnston
4019b48aaa Merge branch 'develop' of github.com:matrix-org/synapse into erikj/receipts 2015-07-09 11:55:52 +01:00
Erik Johnston
294dbd712f We don't want semicolons. 2015-07-09 11:47:24 +01:00
Erik Johnston
1af188209a Change format of receipts to allow inclusion of data 2015-07-09 11:39:30 +01:00
Muthu Subramanian
8cd34dfe95 Make SAML2 optional and add some references/comments 2015-07-09 13:34:47 +05:30
Muthu Subramanian
d2caa5351a code beautify 2015-07-09 12:58:15 +05:30
Matthew Hodgson
fb8d2862c1 remove the tls_certificate_chain_path param and simply support tls_certificate_path pointing to a file containing a chain of certificates 2015-07-09 00:45:41 +01:00
Matthew Hodgson
8ad2d2d1cb document tls_certificate_chain_path more clearly 2015-07-09 00:06:01 +01:00
Matthew Hodgson
f26a3df1bf oops, context.tls_certificate_chain_file() expects a file, not a certificate. 2015-07-08 21:33:02 +01:00
Matthew Hodgson
19fa3731ae typo 2015-07-08 18:53:41 +01:00
Matthew Hodgson
465acb0c6a *cough* 2015-07-08 18:30:59 +01:00
Matthew Hodgson
64afbe6ccd add new optional config for tls_certificate_chain_path for folks with intermediary SSL certs 2015-07-08 18:20:02 +01:00
Matthew Hodgson
04192ee05b typo 2015-07-08 17:49:15 +01:00
Mark Haines
8fb79eeea4 Only remove one time keys when new one time keys are added 2015-07-08 17:04:29 +01:00
Erik Johnston
ce9e2f84ad Add blist to dependencies 2015-07-08 15:41:59 +01:00
Erik Johnston
304343f4d7 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/receipts 2015-07-08 15:37:33 +01:00
Erik Johnston
af812b68dd Add a cache to fetching of receipt streams 2015-07-08 15:35:00 +01:00
Erik Johnston
d85ce8d89b Split receipt events up into one per room 2015-07-08 11:36:05 +01:00
Muthu Subramanian
f53bae0c19 code beautify 2015-07-08 16:05:46 +05:30
Muthu Subramanian
77c5db5977 code beautify 2015-07-08 16:05:20 +05:30
Muthu Subramanian
81682d0f82 Integrate SAML2 basic authentication - uses pysaml2 2015-07-08 15:36:54 +05:30
Erik Johnston
87311d1b8c Hook up receipts to v1 initialSync 2015-07-08 11:02:04 +01:00
Erik Johnston
f0dd6d4cbd Fix test. 2015-07-07 16:18:36 +01:00
Erik Johnston
ca041d5526 Wire together receipts and the notifer/federation 2015-07-07 15:25:30 +01:00
Erik Johnston
716e426933 Fix various typos 2015-07-07 10:55:31 +01:00
Erik Johnston
e8b2f6f8a1 Add a ReceiptServlet 2015-07-07 10:55:22 +01:00
Erik Johnston
dfc74c30c9 Merge pull request #197 from matrix-org/mjark/missing_regex_group
Don't 500 if a group is missing from the regex match
2015-07-07 09:37:38 +01:00
Mark Haines
28ef344077 Merge branch 'mjark/missing_regex_group' into markjh/client-end-to-end-key-management 2015-07-06 18:48:27 +01:00
Mark Haines
2ef182ee93 Add client API for uploading and querying keys for end to end encryption 2015-07-06 18:47:57 +01:00
Mark Haines
b5770f8947 Add store for client end to end keys 2015-07-06 18:46:47 +01:00
Mark Haines
a7dcbfe430 Don't 500 if a group is missing from the regex 2015-07-06 16:47:17 +01:00
Erik Johnston
1a3255b507 Add m.room.history_visibility to newly created rooms' m.room.power_levels 2015-07-06 13:25:35 +01:00
Erik Johnston
fb47c3cfbe Rename key and values for m.room.history_visibility. Support 'invited' value 2015-07-06 13:05:52 +01:00
Erik Johnston
65e69dec8b Don't explode if we don't recognize one of the event_ids in the backfill request 2015-07-06 09:33:03 +01:00
Erik Johnston
c3e2600c67 Filter and redact events that the other server doesn't have permission to see during backfill 2015-07-03 17:52:57 +01:00
Erik Johnston
400894616d Respect m.room.history_visibility in v2_alpha sync API 2015-07-03 14:51:01 +01:00
Erik Johnston
c0a975cc2e Merge pull request #195 from matrix-org/erikj/content_disposition
Add Content-Disposition headers to media repo downloads
2015-07-03 11:26:06 +01:00
Erik Johnston
12b83f1a0d If user supplies filename in URL when downloading from media repo, use that name in Content Disposition 2015-07-03 11:24:55 +01:00
Erik Johnston
00ab882ed6 Add m.room.history_visibility to list of auth events 2015-07-03 10:31:24 +01:00
Erik Johnston
41938afed8 Make v1 initial syncs respect room history ACL 2015-07-02 17:12:35 +01:00
Erik Johnston
1a60545626 Add basic impl for room history ACL on GET /messages client API 2015-07-02 16:20:10 +01:00
Erik Johnston
ac78e60de6 Add stream_id index 2015-07-02 13:18:41 +01:00
Erik Johnston
bd1236c0ee Consolidate duplicate code in notifier 2015-07-02 11:46:05 +01:00
Erik Johnston
ddf7979531 Add receipts_key to StreamToken 2015-07-02 11:45:44 +01:00
Erik Johnston
0862fed2a8 Add basic ReceiptHandler 2015-07-01 17:19:31 +01:00
Erik Johnston
80a61330ee Add basic storage functions for handling of receipts 2015-07-01 17:19:12 +01:00
Erik Johnston
67362a9a03 Merge branch 'release-v0.9.3' of github.com:matrix-org/synapse 2015-07-01 15:12:57 +01:00
Erik Johnston
480d720388 Bump changelog and version to v0.9.3 2015-07-01 15:12:32 +01:00
Erik Johnston
901f56fa63 Add tables for receipts 2015-06-30 15:29:47 +01:00
Erik Johnston
9beaedd164 Enforce ascii filenames for uploads 2015-06-30 10:31:59 +01:00
Erik Johnston
2124f668db Add Content-Disposition headers to media repo v1 downloads 2015-06-30 09:35:44 +01:00
Matthew Hodgson
11374a77ef clarify readme a bit more 2015-06-27 16:59:47 +02:00
Erik Johnston
f0dd568e16 Wait for previous attempts at fetching keys for a given server before trying to fetch more 2015-06-26 11:25:00 +01:00
Erik Johnston
b5f55a1d85 Implement bulk verify_signed_json API 2015-06-26 10:39:34 +01:00
Erik Johnston
5130d80d79 Add bulk insert events API 2015-06-25 17:29:34 +01:00
David Baker
6825eef955 Oops: underride rule had an identifier with override in it. 2015-06-23 14:26:14 +01:00
Erik Johnston
6924852592 Batch SELECTs in _get_auth_chain_ids_txn 2015-06-23 11:01:04 +01:00
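The batching pattern is roughly the following sketch (table and column names are assumptions based on the auth-chain schema):

```
def get_auth_ids_batched(txn, event_ids, chunk_size=100):
    # One SELECT ... IN (...) per chunk instead of one query per event.
    results = []
    for i in range(0, len(event_ids), chunk_size):
        chunk = event_ids[i:i + chunk_size]
        placeholders = ", ".join("?" * len(chunk))
        txn.execute(
            "SELECT auth_id FROM event_auth WHERE event_id IN (%s)"
            % (placeholders,),
            chunk,
        )
        results.extend(row[0] for row in txn.fetchall())
    return results
```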
Erik Johnston
f043b14bc0 Update change log 2015-06-23 10:22:42 +01:00
Erik Johnston
9c72011fd7 Bump version 2015-06-23 10:12:19 +01:00
Erik Johnston
2f556e0c55 Fix typo 2015-06-19 16:22:53 +01:00
Erik Johnston
7fa1363fb0 Merge pull request #192 from matrix-org/erikj/fix_log_context
Fix log context when sending requests
2015-06-19 16:21:40 +01:00
Erik Johnston
275dab6b55 Merge pull request #190 from matrix-org/erikj/syn-412
Fix notifier leak
2015-06-19 11:49:58 +01:00
Erik Johnston
a68abc79fd Add comment on cancellation of observers 2015-06-19 11:48:55 +01:00
Erik Johnston
653533a3da Fix log context when sending requests 2015-06-19 11:46:49 +01:00
Erik Johnston
9bf61ef97b Merge pull request #189 from matrix-org/erikj/room_init_sync
Improve room init sync speed.
2015-06-19 11:36:06 +01:00
Erik Johnston
0e58d19163 Merge pull request #187 from matrix-org/erikj/sanitize_logging
Sanitize logging
2015-06-19 11:35:59 +01:00
Erik Johnston
18968efa0a Remove stale debug lines 2015-06-19 10:18:02 +01:00
Erik Johnston
eb928c9f52 Add site_tag to logger 2015-06-19 10:16:48 +01:00
Erik Johnston
9d112f4440 Add IDs to outbound transactions 2015-06-19 10:13:03 +01:00
Erik Johnston
ad460a8315 Add Eric Myhre to AUTHORS 2015-06-19 09:23:26 +01:00
Erik Johnston
bf628cf6dd Merge pull request #191 from heavenlyhash/configurable-upload-dir
Make upload dir a configurable path.
2015-06-19 09:16:04 +01:00
Eric Myhre
9e5a353663 Make upload dir a configurable path.
Fixes SYN-425.

Signed-off-by: Eric Myhre <hash@exultant.us>
2015-06-18 23:38:20 -05:00
Erik Johnston
6f6ebd216d PEP8 2015-06-18 17:00:32 +01:00
Erik Johnston
73513ececc Documentation 2015-06-18 16:15:10 +01:00
Erik Johnston
1f24c2e589 Don't bother proxying lookups on _NotificationListener to underlying deferred 2015-06-18 16:09:53 +01:00
Erik Johnston
22049ea700 Refactor the notifier.wait_for_events code to be clearer. Add _NotifierUserStream.new_listener that accepts a token to avoid races. 2015-06-18 15:49:24 +01:00
Erik Johnston
050ebccf30 Fix notifier leak 2015-06-18 11:36:26 +01:00
Kegan Dougal
d88e20cdb9 Fix bug where synapse was sending AS user queries incorrectly.
Bug introduced in 92b20713d7
which reversed the comparison when checking if a user existed
in the users table. Added UTs to prevent this happening again.
2015-06-17 17:26:03 +01:00
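The bug boils down to a negated existence check, roughly (a sketch; the column name is assumed from the users table):

```
def user_exists(txn, user_id):
    txn.execute("SELECT 1 FROM users WHERE name = ?", (user_id,))
    # The regression effectively returned the *negation* of this.
    return txn.fetchone() is not None
```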
Erik Johnston
eceb554a2f Use another deferred list 2015-06-16 17:12:27 +01:00
Erik Johnston
b849a64f8d Use DeferredList 2015-06-16 17:03:24 +01:00
Erik Johnston
0460406298 Don't do unnecessary db ops in presence.get_state 2015-06-16 16:59:38 +01:00
Paul "LeoNerd" Evans
9a3cd1c00d Correct -H SERVER_NAME in config-missing complaint message 2015-06-16 16:03:35 +01:00
Erik Johnston
fb7def3344 Remove access_token from synapse.rest.client.v1.transactions {get,store}_response logging 2015-06-16 10:09:43 +01:00
Erik Johnston
f13890ddce Merge branch 'develop' of github.com:matrix-org/synapse into erikj/sanitize_logging 2015-06-15 18:26:24 +01:00
Erik Johnston
aaa749d366 Disable twisted access logging. Move access logging to SynapseRequest object 2015-06-15 18:18:05 +01:00
Erik Johnston
bc42ca121f Merge pull request #185 from matrix-org/erikj/listeners_config
Change listener config.
2015-06-15 18:05:58 +01:00
Erik Johnston
cee69441d3 Log more when we have processed the request 2015-06-15 17:11:44 +01:00
Erik Johnston
b5209c5744 Create SynapseRequest that overrides __repr__ to not print access_token 2015-06-15 16:37:04 +01:00
Mark Haines
66da8f60d0 Bump the version of twisted needed for setup_requires to 15.2.1 2015-06-15 16:27:20 +01:00
Erik Johnston
f0583f65e1 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-06-15 15:17:47 +01:00
Erik Johnston
44c9102e7a Merge branch 'erikj/listeners_config' into erikj/sanitize_logging 2015-06-15 14:45:52 +01:00
Erik Johnston
6a7cf6b41f Merge pull request #186 from matrix-org/hotfixes-v0.9.2-r2
Hotfixes v0.9.2-r2
2015-06-15 14:39:46 +01:00
Erik Johnston
2acee97c2b Changelog 2015-06-15 14:20:25 +01:00
Erik Johnston
7f7ec84d6f Bump version 2015-06-15 14:16:29 +01:00
Erik Johnston
cebde85b94 Merge branch 'master' of github.com:matrix-org/synapse into hotfixes-v0.9.2-r2 2015-06-15 14:16:12 +01:00
Erik Johnston
f00f8346f1 Make http.server request logging more verbose, but redact access_tokens 2015-06-15 13:37:58 +01:00
Erik Johnston
83f119a84a Log requests and responses sent via http.client 2015-06-15 13:14:12 +01:00
Erik Johnston
9d0326baa6 Remove redundant newline 2015-06-15 11:27:29 +01:00
Erik Johnston
186f61a3ac Document listener config. Remove deprecated config options 2015-06-15 11:25:53 +01:00
Erik Johnston
fe9bac3749 Merge pull request #184 from matrix-org/hotfixes-v0.9.2-r1
Hotfixes v0.9.2-r1
2015-06-13 12:26:42 +01:00
Erik Johnston
6c01ceb8d0 Bump version 2015-06-13 12:22:14 +01:00
Erik Johnston
2eda996a63 Add a dummy.sql into delta/20 as pip isn't packaging the pushers.py 2015-06-13 12:21:58 +01:00
Matthew Hodgson
4706f3964d Merge pull request #183 from intelfx/install-python-schema-deltas
MANIFEST.in: include python schema delta scripts
2015-06-13 12:09:17 +01:00
Ivan Shapovalov
4df76b0a5d MANIFEST.in: include python schema delta scripts (we now have one in 20/) 2015-06-13 11:08:49 +03:00
Erik Johnston
a005b7269a Add backwards compat support for metrics, manhole and webclient config options 2015-06-12 17:44:23 +01:00
Erik Johnston
261ccd7f5f Fix tests 2015-06-12 17:17:29 +01:00
Erik Johnston
942e39e87c PEP8 2015-06-12 17:13:54 +01:00
Erik Johnston
9c5fc81c2d Correctly handle x_forwarded listener option 2015-06-12 17:13:23 +01:00
Erik Johnston
fd2c07bfed Use config.listeners 2015-06-12 15:33:07 +01:00
Erik Johnston
405f8c4796 Merge branch 'release-v0.9.2' 2015-06-12 11:53:03 +01:00
Erik Johnston
c42ed47660 Fix up create_resource_tree 2015-06-12 11:52:52 +01:00
Erik Johnston
1a87f5f26c Mention config option name 2015-06-12 11:46:41 +01:00
Erik Johnston
a3dc31cab9 s/some/certain 2015-06-12 11:45:13 +01:00
Erik Johnston
4dd47236e7 Update change log 2015-06-12 11:42:52 +01:00
Erik Johnston
295b400d57 Merge branch 'release-v0.9.2' into develop 2015-06-11 16:08:48 +01:00
Erik Johnston
716cf144ec Update change log 2015-06-11 16:07:06 +01:00
Erik Johnston
1e365e88bd Bump schema version 2015-06-11 15:50:39 +01:00
Erik Johnston
2d41dc0069 Bump version 2015-06-11 15:49:19 +01:00
Erik Johnston
f7f07dc517 Begin changing the config format 2015-06-11 15:48:52 +01:00
David Baker
b8690dd840 Catch any exceptions in the pusher loop. Use a lower timeout for pushers so we can see if they're actually still running. 2015-06-05 11:40:22 +01:00
manuroe
378a0f7a79 Federation demo start.sh: Fixed --no-rate-limit param in the script 2015-06-04 17:58:17 +02:00
David Baker
da84946de4 pep8 2015-06-04 16:43:45 +01:00
David Baker
63a7b3ad1e Add script to (re)convert the pushers table to changing the unique key. Also give the python db upgrade scripts the database engine so they can convert parameter strings, and add *args **kwargs to the upgrade function so we can add more args in future and previous scripts will ignore them. 2015-06-04 16:16:01 +01:00
Erik Johnston
5730b20c6d Merge pull request #175 from matrix-org/erikj/thumbnail_thread
Thumbnail images on a separate thread
2015-06-03 17:26:56 +01:00
Erik Johnston
8047fd2434 Merge pull request #176 from matrix-org/erikj/backfill_auth
Improve backfill.
2015-06-03 17:25:37 +01:00
Erik Johnston
3bbd0d0e09 Merge pull request #180 from matrix-org/erikj/prev_state_context
Don't needlessly compute prev_state
2015-06-03 17:20:56 +01:00
Erik Johnston
9dda396baa Merge pull request #179 from matrix-org/erikj/state_group_outliers
Don't compute EventContext for outliers.
2015-06-03 17:20:40 +01:00
Erik Johnston
13ed3b9985 Merge pull request #178 from matrix-org/erikj/cache_state_groups
Add cache to get_state_groups.
2015-06-03 17:20:33 +01:00
Erik Johnston
bd2cf9d4bf Merge pull request #177 from matrix-org/erikj/content_repo_http_client
SYN-403: Make content repository use its own http client.
2015-06-03 17:20:27 +01:00
Erik Johnston
d4902a7ad0 Merge pull request #174 from matrix-org/erikj/compress_option
Add config option to disable compression of http responses
2015-06-03 17:18:17 +01:00
Erik Johnston
55bf90b9e4 Don't needlessly compute prev_state 2015-06-03 16:44:24 +01:00
Erik Johnston
53f0bf85d7 Comment 2015-06-03 16:43:40 +01:00
Erik Johnston
1c3d844e73 Don't needlessly compute context 2015-06-03 16:41:51 +01:00
Erik Johnston
0d7d9c37b6 Add cache to get_state_groups 2015-06-03 14:45:55 +01:00
Erik Johnston
d8866d7277 Caches should be bound to instances.
Before, caches were global and so different instances of the stores
would share caches. This caused problems in the unit tests.
2015-06-03 14:45:17 +01:00
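
The cache-binding issue described in the commit above can be shown with a minimal sketch (hypothetical classes, not Synapse's actual cache implementation): a cache stored as a class attribute is shared by every store instance, while one created in ``__init__`` is private to each instance::

    class SharedCacheStore(object):
        cache = {}  # class attribute: one dict shared by *all* instances

    class BoundCacheStore(object):
        def __init__(self):
            self.cache = {}  # created per instance, so each store is isolated

    a, b = SharedCacheStore(), SharedCacheStore()
    a.cache["k"] = 1
    assert b.cache["k"] == 1   # state leaks between instances (and unit tests)

    c, d = BoundCacheStore(), BoundCacheStore()
    c.cache["k"] = 1
    assert "k" not in d.cache  # isolated, as the commit intends
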
Erik Johnston
2ef2f6d593 SYN-403: Make content repository use its own http client. 2015-06-03 10:17:37 +01:00
Erik Johnston
3483b78d1a Log where a request came from in federation 2015-06-02 18:15:13 +01:00
Erik Johnston
d3ded420b1 Rephrase log line 2015-06-02 16:30:52 +01:00
Erik Johnston
22716774d5 Don't about JSON when warning about content tampering 2015-06-02 16:30:52 +01:00
Erik Johnston
5044e6c544 Thumbnail images on a separate thread 2015-06-02 15:39:08 +01:00
Erik Johnston
09e23334de Add a timeout 2015-06-02 11:00:37 +01:00
Erik Johnston
02410e9239 Handle the fact we might be missing auth events 2015-06-02 10:58:35 +01:00
Erik Johnston
e552b78d50 Add some logging 2015-06-02 10:28:14 +01:00
Erik Johnston
fde0da6f19 Correctly look up auth_events 2015-06-02 10:19:38 +01:00
Erik Johnston
3f04a08a0c Don't process events we've already processed. Remember to process state events 2015-06-02 10:11:32 +01:00
Erik Johnston
4bbfbf898e Correctly pass in auth_events 2015-06-01 17:02:23 +01:00
Erik Johnston
6e17463228 Don't explode if we don't have the event 2015-06-01 16:39:43 +01:00
Erik Johnston
522f285f9b Add config option to disable compression of http responses 2015-06-01 13:36:30 +01:00
Mark Haines
b8d49be5a1 Merge branch 'develop' into markjh/twisted-15
Conflicts:
	synapse/python_dependencies.py
2015-06-01 10:56:05 +01:00
Mark Haines
90abdaf3bc Use Twisted-15.2.1, Use Agent.usingEndpointFactory rather than implement our own Agent 2015-06-01 10:51:50 +01:00
Erik Johnston
b579a8ea18 Merge pull request #172 from intelfx/contrib-systemd
contrib/systemd: log_config.yaml: do not disable existing loggers
2015-05-31 20:53:45 +01:00
Ivan Shapovalov
53ef3a0bfe contrib/systemd: log_config.yaml: do not disable existing loggers
It turned out that merely configuring the root logger is not enough for
"catch-all" semantics. The logging subsystem also needs to be told not
to disable existing loggers (so that their messages will get propagated
to handlers up the logging hierarchy, not just silently discarded).

Signed-off-by: Ivan Shapovalov <intelfx100@gmail.com>
2015-05-31 19:25:21 +03:00
Mark Haines
d70c847b4f Merge pull request #170 from matrix-org/markjh/SYT-8-recaptcha
Allow endpoint for verifying recaptcha to be configured
2015-05-29 15:32:54 +01:00
Erik Johnston
d15f166093 Remove log line 2015-05-29 15:03:24 +01:00
Erik Johnston
ca580ef862 Don't copy twice 2015-05-29 15:02:55 +01:00
Erik Johnston
45bac68064 Merge pull request #169 from matrix-org/erikj/ultrajson
Use ultrajson when possible. Add option to turn off freezing of events.
2015-05-29 14:58:56 +01:00
Mark Haines
784aaa53df Merge branch 'develop' into markjh/SYT-8-recaptcha
Conflicts:
	synapse/handlers/auth.py
2015-05-29 13:49:44 +01:00
Erik Johnston
8355b4d074 Bump syutil version 2015-05-29 13:08:43 +01:00
Erik Johnston
a7b65bdedf Add config option to turn off freezing events. Use new encode_json api and ujson.loads 2015-05-29 12:17:33 +01:00
Mark Haines
d94590ed48 Add config for setting the recaptcha verify api endpoint, so we can test it in sytest 2015-05-29 12:11:40 +01:00
Erik Johnston
afbd3b2fc4 SYN-395: Fix CAPTCHA, don't double decode json 2015-05-28 18:05:00 +01:00
Erik Johnston
79e37a7ecb Correctly pass connection pool parameter 2015-05-28 16:48:53 +01:00
Erik Johnston
0f118e55db Merge pull request #168 from matrix-org/erikj/conn_pool
Make HTTP clients use connection pools.
2015-05-28 16:03:56 +01:00
Erik Johnston
2f54522d44 Merge pull request #167 from matrix-org/erikj/deep_copy_removal
Remove a deep copy
2015-05-28 16:00:07 +01:00
Erik Johnston
dd74436ffd Unused import 2015-05-28 15:47:20 +01:00
Erik Johnston
11f51e6ded Up maxPersistentPerHost count 2015-05-28 15:45:46 +01:00
Erik Johnston
086df80790 Add connection pooling to SimpleHttpClient 2015-05-28 15:43:21 +01:00
Erik Johnston
291e942332 Use connection pool for federation connections 2015-05-28 15:43:21 +01:00
Erik Johnston
31ade3b3e9 Remove a deep copy 2015-05-28 13:45:23 +01:00
Erik Johnston
36b3b75b21 Registration should be disabled by default 2015-05-28 11:01:34 +01:00
Erik Johnston
6d1dea337b Merge branch 'release-v0.9.1' of github.com:matrix-org/synapse 2015-05-26 16:03:32 +01:00
Erik Johnston
99eb1172b0 Merge branch 'release-v0.9.1' of github.com:matrix-org/synapse into develop 2015-05-26 16:02:59 +01:00
Erik Johnston
6cb3212fc2 changelog 2015-05-26 16:00:45 +01:00
Mark Haines
554c63ca60 Iterate over the user_streams not the user_ids 2015-05-26 15:03:49 +01:00
Mark Haines
fff7905409 Merge branch 'bugs/SYN-390' into release-v0.9.1 2015-05-26 14:58:49 +01:00
Mark Haines
00dd207f60 Take a dict of the rule, not the rule list 2015-05-26 14:57:48 +01:00
Erik Johnston
e417469af2 changelog 2015-05-26 11:08:46 +01:00
Erik Johnston
cb7dac3a5d changelog 2015-05-26 11:08:09 +01:00
Erik Johnston
2651fd5e24 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.1 2015-05-26 11:05:50 +01:00
Erik Johnston
764856777c changelog 2015-05-26 11:05:44 +01:00
Mark Haines
e7b25a649c Merge pull request #166 from matrix-org/bugs/SYN-390
SYN-390: Don't modify the dictionary returned from the database here either
2015-05-26 10:40:50 +01:00
Mark Haines
804b732aab SYN-390: Don't modify the dictionary returned from the database here either 2015-05-26 10:35:08 +01:00
Erik Johnston
45fffe8cbe Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.1 2015-05-26 10:22:41 +01:00
Erik Johnston
9ba3c1ede4 Merge pull request #165 from matrix-org/bugs/SYN-390
SYN-390: Don't modify the dictionary returned from the data store
2015-05-26 10:20:36 +01:00
Mark Haines
a0bebeda8b SYN-390: Don't modify the dictionary returned from the data store 2015-05-26 10:14:15 +01:00
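
The class of bug behind the two SYN-390 commits above is easy to reproduce in a sketch (hypothetical store, not the real Synapse code): if a store hands back the very dict object it keeps in its cache, any caller that mutates the result silently corrupts the cache, whereas returning a copy keeps the cache intact::

    import copy

    _cache = {"rule": {"enabled": True}}

    def get_rule_unsafe():
        return _cache["rule"]                 # caller gets the cached object itself

    def get_rule_safe():
        return copy.deepcopy(_cache["rule"])  # caller gets an independent copy

    rule = get_rule_unsafe()
    rule["enabled"] = False
    assert _cache["rule"]["enabled"] is False  # cache corrupted by the caller

    _cache["rule"]["enabled"] = True
    rule = get_rule_safe()
    rule["enabled"] = False
    assert _cache["rule"]["enabled"] is True   # cache unaffected
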
Erik Johnston
27e093cbc1 Bump version 2015-05-22 17:03:37 +01:00
Mark Haines
d9f60e8dc8 Merge pull request #163 from matrix-org/markjh/presence_list_cache
Add a cache for the presence list
2015-05-22 17:02:23 +01:00
Mark Haines
0e42dfbe22 Merge pull request #164 from matrix-org/markjh/pusher_performance_2
Add a cache for get_push rules for user, fix cache invalidation
2015-05-22 17:01:56 +01:00
Mark Haines
5ebd33302f Merge pull request #162 from matrix-org/erikj/backfill_fixes
backfill fixes
2015-05-22 17:01:40 +01:00
Mark Haines
17167898c8 Fix the presence tests 2015-05-22 16:22:54 +01:00
Erik Johnston
6eadbfbea0 Remove redundant for loop 2015-05-22 16:12:20 +01:00
Mark Haines
1a9a9abcc7 Add a cache for getting the presence list for a user 2015-05-22 16:11:17 +01:00
Erik Johnston
74b7de83ec Merge branch 'develop' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-22 16:10:42 +01:00
Mark Haines
36317f3dad Merge pull request #156 from matrix-org/erikj/join_perf
Make joining #matrix:matrix.org over federation quicker
2015-05-22 16:09:54 +01:00
Mark Haines
052ac0c8d0 Merge pull request #159 from matrix-org/erikj/metrics_interface_config
Enable changing the interface the metrics listener binds to
2015-05-22 16:09:33 +01:00
Mark Haines
49a2c10279 Merge pull request #157 from matrix-org/markjh/presence_performance
Improve presence performance in loadtest
2015-05-22 16:04:40 +01:00
Mark Haines
5d53c14342 Merge pull request #160 from matrix-org/markjh/appservice_performance
Make the appservice use 'users_in_room' rather than get_room_members …
2015-05-22 16:04:22 +01:00
Mark Haines
4752a990c8 Merge pull request #161 from matrix-org/erikj/txn_logging_fix
Erikj/txn logging fix
2015-05-22 16:03:52 +01:00
Mark Haines
106a3051b8 Remove spurious TODO comment 2015-05-22 15:53:03 +01:00
Erik Johnston
284f55a7fb Add doc strings 2015-05-22 15:18:04 +01:00
Erik Johnston
1ce1509989 s/metric_interface/metric_bind_host/ 2015-05-22 14:51:22 +01:00
Erik Johnston
8bb85c8c5a Update log line 2015-05-22 14:48:06 +01:00
Mark Haines
c8135f808b Remove unused import 2015-05-22 14:45:46 +01:00
Erik Johnston
b21d015c55 Log origin and stats of incoming transactions 2015-05-22 14:44:25 +01:00
Erik Johnston
e70e8e053e Add txn_id to some log lines 2015-05-22 14:44:02 +01:00
Erik Johnston
1b446a5d85 Log less lines at INFO level, but include more helpful information 2015-05-22 14:29:57 +01:00
Erik Johnston
59a0682f3e Enable changing the interface the metrics listener binds to 2015-05-22 13:13:07 +01:00
Mark Haines
b6adfc59f5 Invalidate the get_latest_event_ids_in_room cache when deleting from event_forward_extremities 2015-05-22 13:01:03 +01:00
Erik Johnston
254aa3c986 Revert register_new_matrix_user to use v1 api 2015-05-22 11:59:48 +01:00
Mark Haines
f43544eecc Make the appservice use 'users_in_room' rather than get_room_members since it is cached 2015-05-22 11:01:28 +01:00
Mark Haines
a04cde613e Add a cache for get_push rules for user, fix cache invalidation 2015-05-22 10:39:45 +01:00
Erik Johnston
4429e720ae Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-22 10:33:00 +01:00
Erik Johnston
ee49098843 Changelog 2015-05-21 17:36:52 +01:00
Erik Johnston
51f5d36f4f Merge branch 'hotfixes-v0.9.0-r5' of github.com:matrix-org/synapse 2015-05-21 17:16:10 +01:00
Erik Johnston
f8c2cd129d Bump version 2015-05-21 17:03:30 +01:00
Erik Johnston
f6d1183fc5 Merge branch 'markjh/pusher_performance_master' of github.com:matrix-org/synapse into hotfixes-v0.9.0-r5 2015-05-21 17:02:54 +01:00
Mark Haines
2043527b9b Don't try to use a txn when not in one, remove spurious debug logging 2015-05-21 16:53:03 +01:00
Mark Haines
53447e9cd3 Add caches for things requested by the pushers 2015-05-21 16:41:39 +01:00
Mark Haines
d61ce3f670 Add a cache for get_current_state with state_key 2015-05-21 16:41:39 +01:00
Erik Johnston
a910984b58 Actually return something from lambda 2015-05-21 15:58:41 +01:00
Erik Johnston
e309b1045d Sort backfill events 2015-05-21 15:57:35 +01:00
Erik Johnston
0180bfe4aa Remove dead code 2015-05-21 15:53:41 +01:00
Erik Johnston
1f3d1d85a9 Only get non-state 2015-05-21 15:52:29 +01:00
Erik Johnston
39a3340f73 Skip events we've already seen 2015-05-21 15:48:56 +01:00
Erik Johnston
ae3bff3491 Correctly prepopulate queue 2015-05-21 15:46:07 +01:00
Erik Johnston
dc085ddf8c Don't prepopulate event_results 2015-05-21 15:44:05 +01:00
Erik Johnston
73d23c6ae8 Don't readd things that are already in event_results 2015-05-21 15:40:22 +01:00
Erik Johnston
6189d8e54d PriorityQueue gives lowest first 2015-05-21 15:38:08 +01:00
Erik Johnston
115ef3ddac Correctly capture Queue.Empty exception 2015-05-21 15:37:43 +01:00
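
Two standard-library behaviours that the backfill commits above lean on, shown in a quick sketch (the module is named ``Queue`` on the Python 2 of this era, ``queue`` on Python 3): ``PriorityQueue`` hands back the *smallest* item first, and a non-blocking ``get`` raises ``Empty`` rather than returning ``None``::

    import queue  # named 'Queue' on Python 2

    q = queue.PriorityQueue()
    for depth in (3, 1, 2):
        q.put(depth)
    assert q.get_nowait() == 1  # lowest value comes out first

    try:
        queue.PriorityQueue().get_nowait()
    except queue.Empty:
        pass  # an empty queue raises Empty, which must be caught explicitly
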
Erik Johnston
4fb858d90a Merge branch 'develop' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-21 15:25:54 +01:00
Mark Haines
88f1ea36ce Oops, get_rooms_for_user returns a namedtuple, not a room_id 2015-05-21 15:23:58 +01:00
Erik Johnston
c2633907c5 Merge branch 'erikj/join_perf' of github.com:matrix-org/synapse into erikj/backfill_fixes 2015-05-21 14:58:47 +01:00
Erik Johnston
ebfdd2eb5b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-21 14:54:52 +01:00
Erik Johnston
a551c5dad7 Merge pull request #155 from matrix-org/erikj/perf
Bulk and batch retrieval of events.
2015-05-21 14:54:40 +01:00
Erik Johnston
27e4b45c06 s/for events/for requests for events/ 2015-05-21 14:52:23 +01:00
Erik Johnston
ac5f2bf9db s/for events/for requests for events/ 2015-05-21 14:50:57 +01:00
Erik Johnston
80a167b1f0 Add comments 2015-05-21 11:19:04 +01:00
Mark Haines
7ae8afb7ef Removed unused 'is_visible' method 2015-05-20 14:48:11 +01:00
Erik Johnston
9118a92862 Split up _get_events into defer and txn versions 2015-05-20 13:27:16 +01:00
Mark Haines
8eca5bd50a Fix the presence tests 2015-05-20 13:22:18 +01:00
Mark Haines
e01b825cc9 Clean up the presence_list checking logic a bit 2015-05-20 13:21:59 +01:00
Erik Johnston
ab45e12d31 Make not return a deferred _get_event_from_row_txn 2015-05-20 13:07:19 +01:00
Erik Johnston
f407cbd2f1 PEP8 2015-05-20 13:02:01 +01:00
Erik Johnston
227f8ef031 Split out _get_event_from_row back into defer and _txn version 2015-05-20 13:00:57 +01:00
Erik Johnston
2bc60c55af Fix _get_backfill_events to return events in the correct order 2015-05-20 12:57:00 +01:00
Erik Johnston
20814fabdd Actually fetch state for new backwards extremities when backfilling. 2015-05-20 11:59:02 +01:00
Erik Johnston
9084cdd70f Ensure event_results is a set 2015-05-19 16:34:31 +01:00
Erik Johnston
5b731178b2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 16:08:00 +01:00
Erik Johnston
3a653515ec Add None check 2015-05-19 15:27:09 +01:00
Erik Johnston
aa729349dd Fix event_backwards_extrem insertion to ignore outliers 2015-05-19 15:27:00 +01:00
Erik Johnston
5b1631a4a9 Add a timeout param to get_event 2015-05-19 14:53:32 +01:00
Erik Johnston
291cba284b Handle the case when things return empty but non-None things 2015-05-19 14:42:46 +01:00
Erik Johnston
253f76a0a5 Don't always hit get_server_verify_key_v1_direct 2015-05-19 14:42:38 +01:00
Erik Johnston
6837c5edab Handle the case when things return empty but non-None things 2015-05-19 14:27:11 +01:00
Erik Johnston
7223129916 Don't apply new room join hack if depth > 5 2015-05-19 14:16:08 +01:00
Erik Johnston
5ae4a84211 Don't always hit get_server_verify_key_v1_direct 2015-05-19 13:43:34 +01:00
Erik Johnston
118a760719 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 13:20:29 +01:00
David Baker
19505e0392 Disable GZip encoding on static file resources as per comment 2015-05-19 13:20:25 +01:00
Erik Johnston
df431b127b Add forgotten .items() 2015-05-19 13:14:21 +01:00
Erik Johnston
882ac83d8d Fix scripts-dev/convert_server_keys.py to have correct format 2015-05-19 13:12:55 +01:00
Erik Johnston
d3e09f12d0 SYN-383: Actually, we expect this value to be a dict 2015-05-19 13:12:41 +01:00
Erik Johnston
677be13ffc Revert accidental commit 2015-05-19 13:12:28 +01:00
Erik Johnston
350b88656a SYN-383: Actually, we expect this value to be a dict 2015-05-19 13:01:57 +01:00
Erik Johnston
9de94d5a4d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 12:50:17 +01:00
Erik Johnston
2b7120e233 SYN-383: Handle the fact the server might not have signed things 2015-05-19 12:49:38 +01:00
Erik Johnston
8b256a7296 Don't reuse var names 2015-05-19 11:58:22 +01:00
Erik Johnston
62ccc6d95f Don't reuse var names 2015-05-19 11:58:04 +01:00
Erik Johnston
01858bcbf2 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 11:56:35 +01:00
Erik Johnston
2aeee2a905 SYN-383: Fix parsing of verify_keys and catching of _DefGen_Return 2015-05-19 11:56:18 +01:00
Erik Johnston
5e7883ec19 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/join_perf 2015-05-19 10:50:43 +01:00
Mark Haines
c6a03c46e6 SYN-383: Extract the response list from 'server_keys' in the response JSON as it might work better than iterating over the top level dict 2015-05-19 10:23:02 +01:00
Mark Haines
e4c65b338d Speed up the get_pagination_rows as well 2015-05-18 18:21:06 +01:00
Mark Haines
99914ec9f8 Merge pull request #152 from matrix-org/notifier_performance
Notifier performance
2015-05-18 17:49:59 +01:00
Erik Johnston
ef910a0358 Do work in parellel when joining a room 2015-05-18 17:17:04 +01:00
Mark Haines
591c4bf223 Cache the most recent serial for each room 2015-05-18 16:21:51 +01:00
Mark Haines
e1150cac4b Move updating the serial and state of the presence cache into a single function 2015-05-18 15:46:37 +01:00
Erik Johnston
165eb2dbe6 Comments and shuffle of functions 2015-05-18 15:18:41 +01:00
Mark Haines
880fb46de0 Merge branch 'notifier_performance' into markjh/presence_performance 2015-05-18 14:33:58 +01:00
Mark Haines
9396723995 Merge pull request #154 from matrix-org/erikj/events_move
Move get_events functions to storage.events
2015-05-18 14:24:35 +01:00
Erik Johnston
65878a2319 Remove unused metric 2015-05-18 14:06:30 +01:00
Mark Haines
ad31fa3040 Don't bother sorting by the room_stream_ids, it shouldn't matter which order they are notified in 2015-05-18 14:04:58 +01:00
Erik Johnston
4d1b6f4ad1 Remove rejected events if we don't want rejected events 2015-05-18 14:03:46 +01:00
Mark Haines
0b0033c40b Merge branch 'develop' into notifier_performance 2015-05-18 13:50:01 +01:00
Mark Haines
755def8083 Add more doc string, reduce C+P boilerplate for getting room list 2015-05-18 13:46:47 +01:00
Mark Haines
1e90715a3d Make sure the notifier stream token goes forward when it is updated. Sort the pending events by the correct room_stream_id 2015-05-18 13:17:36 +01:00
Erik Johnston
131bdf9bb1 Merge branch 'erikj/events_move' of github.com:matrix-org/synapse into erikj/perf 2015-05-18 10:23:37 +01:00
Erik Johnston
10f1bdb9a2 Move get_events functions to storage.events 2015-05-18 10:21:40 +01:00
Erik Johnston
d5cea26d45 Remove pointless newline 2015-05-18 10:16:45 +01:00
Erik Johnston
c71176858b Newline, remove debug logging 2015-05-18 10:11:14 +01:00
Erik Johnston
f8bd4de87d Remove debug logging 2015-05-18 09:58:03 +01:00
Erik Johnston
c3b37abdfd PEP8 2015-05-15 16:59:58 +01:00
Erik Johnston
6c74fd62a0 Revert limiting of fetching, it didn't help perf. 2015-05-15 16:45:35 +01:00
Erik Johnston
9ff7f66a2b init j 2015-05-15 16:36:03 +01:00
Erik Johnston
70f272f71c Don't completely drain the list 2015-05-15 16:34:17 +01:00
Erik Johnston
8763dd80ef Don't fetch prev_content for current_state 2015-05-15 15:33:01 +01:00
Erik Johnston
807229f2f2 Err, defer.gatherResults ftw 2015-05-15 15:20:29 +01:00
Erik Johnston
acb12cc811 Make store.get_current_state fetch events asyncly 2015-05-15 15:20:05 +01:00
Erik Johnston
d62dee7eae Remove more debug logging 2015-05-15 15:06:37 +01:00
Erik Johnston
0f29cfabc3 Remove debug logging 2015-05-15 14:06:42 +01:00
Erik Johnston
e275a9c0d9 preserve log context 2015-05-15 11:54:51 +01:00
Erik Johnston
aa32bd38e4 Add a wait 2015-05-15 11:35:04 +01:00
Erik Johnston
372d4c6d7b Srsly. Don't use closures. Baaaaaad 2015-05-15 11:26:00 +01:00
Erik Johnston
575ec91d82 Correctly pass through params 2015-05-15 11:15:10 +01:00
Mark Haines
10be983f2c Merge pull request #153 from matrix-org/markjh/presence_docstring
Add some doc strings for presence.
2015-05-15 11:11:47 +01:00
Mark Haines
415b158ce2 More whitespace 2015-05-15 11:09:47 +01:00
Erik Johnston
de01438a57 Sort out error handling 2015-05-15 11:00:50 +01:00
Erik Johnston
a2c4f3f150 Fix deadlock 2015-05-15 10:54:04 +01:00
Mark Haines
0a4330cd5d Add some missed argument types, cleanup the whitespace a bit 2015-05-14 17:48:12 +01:00
Mark Haines
47ec693e29 More doc-strings 2015-05-14 17:07:02 +01:00
Erik Johnston
1d566edb81 Remove race condition 2015-05-14 16:54:35 +01:00
David Baker
6e1ad283cf Support gzip encoding for client, client v2 and web client resources (SYN-176). 2015-05-14 16:39:19 +01:00
Erik Johnston
ef3d8754f5 Call from right thread 2015-05-14 15:41:55 +01:00
Erik Johnston
142934084a Count and loop 2015-05-14 15:40:21 +01:00
Erik Johnston
96c5b9f87c Don't start up more fetch_events 2015-05-14 15:36:04 +01:00
Erik Johnston
7cd6a6f6cf Awful idea for speeding up fetching of events 2015-05-14 15:34:02 +01:00
Mark Haines
c5d1b4986b Remove unused arguments and doc PresenceHandler.push_update_to_clients 2015-05-14 14:59:31 +01:00
Erik Johnston
7f4105a5c9 Turn off preemptive transactions 2015-05-14 14:51:06 +01:00
Erik Johnston
f4d58deba1 PEP8 2015-05-14 14:45:42 +01:00
Erik Johnston
386b7330d2 Move from _base to events 2015-05-14 14:45:22 +01:00
Mark Haines
0ad1c67234 Add some doc-strings to notifier 2015-05-14 14:35:07 +01:00
Erik Johnston
7d6a1dae31 Jump out early 2015-05-14 14:27:58 +01:00
Erik Johnston
656223fbd3 Actually, we probably want to run this in a transaction 2015-05-14 14:26:35 +01:00
David Baker
67800f7626 Treat setting your display name to the empty string as removing it (SYN-186). 2015-05-14 14:19:59 +01:00
Erik Johnston
2f7f8e1c2b Preemptively jump into a transaction if we ask for get_prev_content 2015-05-14 14:17:36 +01:00
Mark Haines
4770cec7bc Merge pull request #150 from matrix-org/notifier_unify
Make v1 and v2 client APIs interact with the notifier in the same way.
2015-05-14 14:16:59 +01:00
Erik Johnston
e1e9f0c5b2 loop -> gatherResults 2015-05-14 13:58:49 +01:00
Erik Johnston
ab78a8926e Err, we probably want a bigger limit 2015-05-14 13:47:16 +01:00
Erik Johnston
f6f902d459 Move fetching of events into their own transactions 2015-05-14 13:45:48 +01:00
Erik Johnston
cdb3757942 Refactor _get_events 2015-05-14 13:31:55 +01:00
David Baker
92e1c8983d Disallow whitespace in aliases here too 2015-05-14 13:21:55 +01:00
David Baker
0c894e1ebd Throw error when creating room if alias contains whitespace #SYN-335 2015-05-14 13:11:28 +01:00
Mark Haines
084c365c3a Use the current token when timing out a notifier, make sure the user_id is a string in on_new_user_event 2015-05-14 12:03:26 +01:00
David Baker
c37a6e151f Make shared secret registration work again 2015-05-14 12:03:13 +01:00
Erik Johnston
36ea26c5c0 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/perf 2015-05-14 12:01:38 +01:00
David Baker
7c549dd557 Add ID generator for push_rules_enable to #resolve SYN-378 2015-05-14 11:44:03 +01:00
Mark Haines
899d4675dd Merge branch 'notifier_unify' into notifier_performance 2015-05-14 11:36:44 +01:00
Mark Haines
243c56e725 Merge branch 'develop' into notifier_unify 2015-05-14 11:36:23 +01:00
Mark Haines
3edd2d5c93 Fix v2 sync, update the last_notified_ms only if there was an active listener 2015-05-14 11:25:30 +01:00
David Baker
47fb089eb5 Specify python 2.7 in the virtualenv setup (SYN-319) #resolved 2015-05-14 10:23:10 +01:00
Erik Johnston
4f1d984e56 Add index on events 2015-05-13 17:22:26 +01:00
Mark Haines
5e0c533672 Fix metric counter 2015-05-13 17:20:28 +01:00
Erik Johnston
968b01a91a Actually use async method 2015-05-13 17:02:46 +01:00
Erik Johnston
4071f29653 Fetch events from events_id in their own transactions 2015-05-13 16:59:41 +01:00
Mark Haines
f1b83d88a3 Discard unused NotifierUserStreams 2015-05-13 16:54:02 +01:00
Erik Johnston
a988361aea Typo 2015-05-13 15:44:15 +01:00
Erik Johnston
8888982db3 Don't insert None 2015-05-13 15:43:32 +01:00
Mark Haines
9af432257d Don't set a timer if there's already a result to return 2015-05-13 15:42:13 +01:00
Erik Johnston
cf706cc6ef Don't return None 2015-05-13 15:31:25 +01:00
Erik Johnston
5971d240d4 Limit batch size 2015-05-13 15:26:49 +01:00
Erik Johnston
ca4f458787 Fetch events in bulk 2015-05-13 15:13:42 +01:00
Mark Haines
df6db5c802 Don't bother checking for new events from a source if the stream token hasn't advanced for that source 2015-05-13 15:08:24 +01:00
Erik Johnston
6edff11a88 Don't fetch redaction and rejection stuff for each event, so we can use index only scan 2015-05-13 14:39:05 +01:00
Mark Haines
63878c0379 Don't bother checking for updates if the stream token hasn't advanced for a user 2015-05-13 13:42:21 +01:00
Erik Johnston
02590c3e1d Temp turn off checking for rejections and redactions 2015-05-13 11:31:28 +01:00
Erik Johnston
619a21812b defer.gatherResults loop 2015-05-13 11:29:03 +01:00
Erik Johnston
fec4485e28 Batch fetching of events for state groups 2015-05-13 11:22:42 +01:00
Erik Johnston
409bcc76bd Load events for state group separately 2015-05-13 11:13:31 +01:00
Mark Haines
cffe6057fb Merge branch 'notifier_unify' into notifier_performance
Conflicts:
	synapse/notifier.py
2015-05-12 16:37:50 +01:00
Erik Johnston
80fd2b574c Don't talk to yourself when backfilling 2015-05-12 16:19:46 +01:00
Erik Johnston
e122685978 You need to call contextmanager 2015-05-12 16:12:37 +01:00
Mark Haines
54ef09f860 Merge pull request #151 from matrix-org/revert-147-presence-performance
Revert "Improvement to performance of presence event stream handling"
2015-05-12 15:44:55 +01:00
Mark Haines
d7b3ac46f8 Revert "Improvement to performance of presence event stream handling" 2015-05-12 15:44:21 +01:00
Mark Haines
4429e4bf24 Merge branch 'develop' into notifier_unify
Conflicts:
	synapse/notifier.py
2015-05-12 15:31:26 +01:00
Mark Haines
ec07dba29e Merge pull request #143 from matrix-org/erikj/SYN-375
SYN-375 - Lots of unhandled deferred exceptions.
2015-05-12 15:25:54 +01:00
Mark Haines
c167cbc9fd Merge pull request #147 from matrix-org/presence-performance
Improvement to performance of presence event stream handling
2015-05-12 15:24:54 +01:00
Mark Haines
a6fb2aa2a5 Merge pull request #144 from matrix-org/erikj/logging_context
Preserving logging contexts
2015-05-12 15:23:50 +01:00
Mark Haines
1fce36b111 Merge pull request #149 from matrix-org/erikj/backfill
Backfill support
2015-05-12 15:20:32 +01:00
Erik Johnston
8b28209c60 Err, delete the right stuff 2015-05-12 15:02:53 +01:00
Erik Johnston
30c72d377e Newlines 2015-05-12 14:47:40 +01:00
Erik Johnston
e4eddf9b36 We do actually want to delete rows out of event_backward_extremities 2015-05-12 14:47:23 +01:00
Erik Johnston
c1779a79bc Fix up _handle_prev_events to not try to insert duplicate rows 2015-05-12 14:41:50 +01:00
Erik Johnston
74850d7f75 Do state groups persistence /after/ checking if we have already persisted the event 2015-05-12 14:14:58 +01:00
Erik Johnston
07a1223156 s/backfil/backfill/ 2015-05-12 14:09:54 +01:00
Erik Johnston
0d31ad5101 Typos everywhere 2015-05-12 14:02:01 +01:00
Erik Johnston
a0dfffb33c And another typo. 2015-05-12 14:00:31 +01:00
Erik Johnston
6e5ac4a28f Err, gatherResults doesn't take a dict... 2015-05-12 13:58:14 +01:00
Erik Johnston
8022b27fc2 Make distributer.fire work as it did 2015-05-12 13:14:48 +01:00
Erik Johnston
95dedb866f Unwrap defer.gatherResults failures 2015-05-12 13:14:29 +01:00
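
For context on the ``gatherResults`` commits above, a minimal sketch (not the exact Synapse helper): Twisted's ``defer.gatherResults`` takes a list of Deferreds, not a dict, and when one of them fails it errbacks with a ``FirstError`` wrapper whose ``subFailure`` usually wants unwrapping so callers see the original exception::

    from twisted.internet import defer

    @defer.inlineCallbacks
    def fetch_all(deferreds):
        try:
            results = yield defer.gatherResults(deferreds, consumeErrors=True)
        except defer.FirstError as e:
            # re-raise the underlying failure instead of the FirstError wrapper
            e.subFailure.raiseException()
        defer.returnValue(results)
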
Mark Haines
78672a9fd5 Merge branch 'notifier_unify' into notifier_performance 2015-05-12 13:11:54 +01:00
Erik Johnston
da6a7bbdde Merge branch 'develop' of github.com:matrix-org/synapse into erikj/logging_context 2015-05-12 13:10:42 +01:00
Mark Haines
2551b6645d Update the end_token correctly, otherwise the token doesn't advance and the client gets duplicate events 2015-05-12 11:54:18 +01:00
Mark Haines
5e4ba463b7 Merge branch 'develop' into notifier_unify 2015-05-12 11:41:53 +01:00
Mark Haines
51da995806 Merge pull request #148 from matrix-org/bugs/SYN-377
SYN-377: Make sure that the event is marked as persisted from the main thread.
2015-05-12 11:36:44 +01:00
Mark Haines
5002056b16 SYN-377: Make sure that the StreamIdGenerator.get_next.__exit__ is called from the main thread after the transaction completes, not from database thread before the transaction completes. 2015-05-12 11:20:40 +01:00
Mark Haines
5c75adff95 Add a NotifierUserStream to hold all the notification listeners for a user 2015-05-12 11:00:37 +01:00
Erik Johnston
367382b575 Handle the case where the other side is unreachable when backfilling 2015-05-12 10:35:45 +01:00
Erik Johnston
4df11b5039 Make get_current_token accept a direction parameter, which tells the source whether we want a token for going 'forwards' or 'backwards' 2015-05-12 10:28:10 +01:00
Erik Johnston
84e6b4001f Initial hack at wiring together pagination and backfill 2015-05-11 18:01:31 +01:00
Erik Johnston
17653a5dfe Move storage.stream._StreamToken to types.RoomStreamToken 2015-05-11 18:01:01 +01:00
Mark Haines
e269c511f6 Don't bother passing the events to the notifier since it isn't using them 2015-05-11 15:01:51 +01:00
Mark Haines
5e3b254dc8 Use wait_for_events to implement 'get_events' 2015-05-11 14:37:33 +01:00
Erik Johnston
d244fa9741 Merge branch 'hotfixes-v0.9.0-r4' of github.com:matrix-org/synapse into develop 2015-05-11 13:34:31 +01:00
Erik Johnston
e89ca34e0e Merge branch 'hotfixes-v0.9.0-r4' of github.com:matrix-org/synapse 2015-05-11 13:16:11 +01:00
Erik Johnston
79b7154454 Merge pull request #146 from matrix-org/erikj/push_rules_fixes
Fix 500 on push rule updates.
2015-05-11 11:33:47 +01:00
Erik Johnston
4ef556f650 Bump version 2015-05-11 11:31:04 +01:00
Erik Johnston
b036596b75 Prefer to use _simple_*. 2015-05-11 11:24:01 +01:00
Erik Johnston
cd525c0f5a push_rules table expects an 'id' field 2015-05-11 11:24:01 +01:00
Mark Haines
3c224f4d0e SYN-376: Add script for converting server keys from v1 to v2 2015-05-11 11:00:17 +01:00
Erik Johnston
d38862a080 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-10 10:56:46 +01:00
David Baker
2640d6718d Merge pull request #145 from matrix-org/hotfixes-v0.9.0-r3
Hotfixes v0.9.0 r3
2015-05-10 10:54:44 +01:00
Erik Johnston
de87541862 Bump version 2015-05-10 10:51:08 +01:00
Erik Johnston
22d2f498fa Fix push rule bug: can't insert bool into small int column 2015-05-10 10:50:51 +01:00
Matthew Hodgson
d79ffa1898 typo 2015-05-09 14:45:37 +01:00
Erik Johnston
2236ef6c92 Fix up leak. Add warnings. 2015-05-08 19:53:34 +01:00
Erik Johnston
da1aa07db5 Add some docs 2015-05-08 16:52:49 +01:00
Erik Johnston
4ac1941592 PEP8 2015-05-08 16:33:01 +01:00
Erik Johnston
476899295f Change the way we do logging contexts so that they survive divergences 2015-05-08 16:32:18 +01:00
Erik Johnston
fca28d243e Change the way we create observers to deferreds so that we don't get spammed by 'unhandled errors' 2015-05-08 16:28:08 +01:00
Erik Johnston
37feb4031f Merge branch 'hotfixes-v0.9.0-r2' of github.com:matrix-org/synapse 2015-05-08 16:13:15 +01:00
Erik Johnston
0cd1401f8d Bump version 2015-05-08 16:11:51 +01:00
Erik Johnston
724bb1e7d9 Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-08 16:11:19 +01:00
Mark Haines
1c7912751e Drop the old table not the new table 2015-05-08 16:04:32 +01:00
Mark Haines
9d36eb4eab Rename unique constraint 2015-05-08 16:01:55 +01:00
Mark Haines
b0f71db3ff Remove unsigned 2015-05-08 15:59:51 +01:00
Mark Haines
84e1cacea4 Bump schema version 2015-05-08 15:58:14 +01:00
Mark Haines
6538d445e8 Make the timestamps in server_keys_json bigints 2015-05-08 15:55:17 +01:00
Erik Johnston
52f98f8a5b Merge branch 'hotfixes-v0.9.0-r1' of github.com:matrix-org/synapse 2015-05-08 14:25:18 +01:00
Erik Johnston
22a7ba8b22 Actually rename all instances 2015-05-08 13:50:03 +01:00
Erik Johnston
3a42f32134 Reword port script usage 2015-05-08 13:47:48 +01:00
Erik Johnston
4fa0f53521 Support reading directly from a config 2015-05-08 13:45:58 +01:00
Erik Johnston
326121aec4 UPGRADES: s/v0.x.x/v0.9.0 2015-05-08 13:38:29 +01:00
Erik Johnston
9a9386226a Mention Ivan Shapovalov contrib/systemd 2015-05-08 13:37:54 +01:00
Erik Johnston
126d562576 Bump version 2015-05-08 13:29:37 +01:00
Erik Johnston
f08c33e834 Fix port_from_sqlite_to_postgres after changes to storage layer. 2015-05-08 13:29:00 +01:00
Paul "LeoNerd" Evans
45543028bb Use the presence cachemap ordering to early-abort the iteration loop 2015-05-07 22:40:10 +01:00
Paul "LeoNerd" Evans
f683b5de47 Store presence cachemap in an ordered dict, so that the newer serials will be at the end 2015-05-07 21:27:53 +01:00
Erik Johnston
db0dca2f6f Merge branch 'master' of github.com:matrix-org/synapse into develop 2015-05-07 19:21:00 +01:00
Erik Johnston
89c0cd4acc Merge branch 'release-v0.9.0' of github.com:matrix-org/synapse 2015-05-07 19:07:00 +01:00
Erik Johnston
6101ce427a Slight rewording 2015-05-07 18:58:28 +01:00
Erik Johnston
5fe26a9b5c Reword docs/application_services.rst 2015-05-07 18:54:53 +01:00
Erik Johnston
35698484a5 Add some information on registering AS's 2015-05-07 18:51:09 +01:00
Erik Johnston
63562f6d5a Bump date 2015-05-07 18:20:13 +01:00
Erik Johnston
a151693a3b Bump syweb version 2015-05-07 18:01:46 +01:00
Erik Johnston
ac29318b84 Add link to registration spec 2015-05-07 17:58:50 +01:00
Mark Haines
dfa98f911b revert accidental bcrypt gensalt round reduction from loadtesting 2015-05-07 17:45:42 +01:00
Erik Johnston
4605953b0f Add JIRA issue id 2015-05-07 16:53:18 +01:00
Mark Haines
ef8e8ebd91 pynacl-0.3.0 was released so we can finally start using it directly from pypi 2015-05-07 16:46:51 +01:00
Erik Johnston
3188e94ac4 Explain the change in AS /register api 2015-05-07 16:12:02 +01:00
David Baker
97a64f3ebe Merge branch 'develop' of github.com:matrix-org/synapse into develop 2015-05-07 09:33:42 +01:00
David Baker
b850c9fa04 Typo 2015-05-07 09:33:30 +01:00
Mark Haines
4a7a4a5b6c Optional profiling using cProfile 2015-05-06 17:08:00 +01:00
Erik Johnston
771fc05d30 Change log: Link to application services spec. 2015-05-06 13:59:32 +01:00
Erik Johnston
938939fd89 Move CAPTCHA_SETUP to docs/ 2015-05-06 13:48:06 +01:00
Erik Johnston
028a570e17 Linkify docs/postgres.sql 2015-05-06 13:42:40 +01:00
Erik Johnston
0e4393652f Update change log to be more detailed 2015-05-06 13:31:59 +01:00
Mark Haines
b994fb2b96 Don't read from the config file before checking it exists 2015-05-06 12:56:47 +01:00
Erik Johnston
f10fd8a470 Merge branch 'develop' of github.com:matrix-org/synapse into release-v0.9.0 2015-05-06 12:54:36 +01:00
Mark Haines
3c11c9c122 Merge pull request #140 from matrix-org/erikj/scripts_refactor
Separate scripts/ into scripts/ and scripts-dev/
2015-05-06 12:54:07 +01:00
Erik Johnston
673375fe2d Actually add scripts-dev/ 2015-05-06 11:46:02 +01:00
Erik Johnston
3c92231094 Re-add scripts/register_new_matrix_user 2015-05-06 11:45:18 +01:00
Erik Johnston
119e5d7702 Separate scripts/ into scripts/ and scripts-dev/, where scripts/* are automatically added to the package 2015-05-06 11:41:19 +01:00
Erik Johnston
271ee604f8 Update change log 2015-05-06 11:29:54 +01:00
Erik Johnston
04c01882fc Bump version 2015-05-06 09:59:13 +01:00
Mark Haines
f4664a6cbd Merge pull request #138 from matrix-org/erikj/SYN-371
SYN-371 - Failed to persist event
2015-05-05 18:53:15 +01:00
Mark Haines
ecb26beda5 Merge pull request #137 from matrix-org/erikj/executemany
executemany support
2015-05-05 18:30:35 +01:00
Erik Johnston
0c4ac271ca Merge branch 'erikj/executemany' of github.com:matrix-org/synapse into erikj/SYN-371 2015-05-05 18:21:19 +01:00
Erik Johnston
0cf7e480b4 And use buffer(...) there as well 2015-05-05 18:20:01 +01:00
Erik Johnston
ed2584050f Merge branch 'develop' of github.com:matrix-org/synapse into erikj/executemany 2015-05-05 18:15:20 +01:00
Erik Johnston
977338a7af Use buffer(...) when inserting into bytea column 2015-05-05 18:12:53 +01:00
Mark Haines
31049c4d72 Merge pull request #139 from matrix-org/bugs/SYN-369
Fix race with cache invalidation. SYN-369
2015-05-05 17:46:13 +01:00
Mark Haines
deb0237166 Add some doc-string 2015-05-05 17:45:11 +01:00
Mark Haines
e45b05647e Fix the --help option for synapse 2015-05-05 17:39:59 +01:00
Erik Johnston
3d5a955e08 Missed events are not outliers 2015-05-05 17:36:57 +01:00
Mark Haines
d18f37e026 Collect the invalidate callbacks on the transaction object rather than passing around a separate list 2015-05-05 17:32:21 +01:00
Erik Johnston
9951542393 Add a comment about the zip(*[zip(sorted(...),...)]) 2015-05-05 17:06:55 +01:00
Mark Haines
041b6cba61 SYN-369: Add comments to the sequence number logic in the cache 2015-05-05 16:32:44 +01:00
Mark Haines
63075118a5 Add debug flag in synapse/storage/_base.py for debugging the cache logic by comparing what is in the cache with what was in the database on every access 2015-05-05 16:24:04 +01:00
Erik Johnston
531d7955fd Don't insert without deduplication. In this case we never actually use this table, so simply remove the insert entirely 2015-05-05 16:12:28 +01:00
Mark Haines
bfa4a7f8b0 Invalidate the room_member cache if the current state events updates 2015-05-05 15:43:49 +01:00
Mark Haines
d0fece8d3c Missing return for when the event was already persisted 2015-05-05 15:39:09 +01:00
Erik Johnston
bdcd7693c8 Fix indentation 2015-05-05 15:14:48 +01:00
Erik Johnston
43c2e8deae Add support for using executemany 2015-05-05 15:13:25 +01:00
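
For readers unfamiliar with the DB-API feature the executemany commits adopt: ``cursor.executemany`` binds one parametrised statement against a whole sequence of parameter tuples, replacing a Python-level loop of ``execute`` calls. A self-contained sqlite3 sketch (table and values hypothetical)::

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (event_id TEXT, room_id TEXT)")

    rows = [("$a:hs", "!room:hs"), ("$b:hs", "!room:hs"), ("$c:hs", "!room:hs")]
    # one statement, many parameter tuples -- no per-row execute() loop
    conn.executemany("INSERT INTO events (event_id, room_id) VALUES (?, ?)", rows)
    conn.commit()
    assert conn.execute("SELECT COUNT(*) FROM events").fetchone()[0] == 3
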
Erik Johnston
1692dc019d Don't call 'encode_parameter' no-op 2015-05-05 15:00:30 +01:00
Mark Haines
a9aea68fd5 Invalidate the caches from the correct thread 2015-05-05 14:57:08 +01:00
Mark Haines
261d809a47 Sequence the modifications to the cache so that selects don't race with inserts 2015-05-05 14:13:50 +01:00
Erik Johnston
d9cc5de9e5 Correctly name transaction 2015-05-05 10:24:10 +01:00
Erik Johnston
b8940cd902 Remove some unused indexes 2015-05-01 16:14:25 +01:00
Erik Johnston
1942382246 Don't log enqueue_ 2015-05-01 16:14:25 +01:00
David Baker
eb9bd2d949 user_id now in user_threepids 2015-05-01 15:04:37 +01:00
Erik Johnston
2d386d7038 That wasn't a deferred 2015-05-01 14:41:25 +01:00
Erik Johnston
4ac2823b3c Remove inlineCallbacks from non-generator 2015-05-01 14:41:25 +01:00
Erik Johnston
22c7c5eb8f Typo 2015-05-01 14:41:25 +01:00
Erik Johnston
42c12c04f6 Remove some run_on_reactors 2015-05-01 14:41:25 +01:00
Erik Johnston
adb5b76ff5 Don't log all auth events every time we call auth.check 2015-05-01 14:41:25 +01:00
Mark Haines
3bcdf3664c Use the daemonize key from the config if it exists 2015-05-01 14:34:55 +01:00
David Baker
9eeb03c0dd Don't use self.execute: it's designed for fetching stuff 2015-05-01 14:21:25 +01:00
Mark Haines
32937f3ea0 database config is not kept in separate config file anymore 2015-05-01 14:06:54 +01:00
Mark Haines
7b50769eb9 Merge pull request #136 from matrix-org/markjh/config_cleanup
Config restructuring.
2015-05-01 14:04:39 +01:00
David Baker
7693f24792 No id field on user 2015-05-01 13:55:42 +01:00
Mark Haines
46a65c282f Allow generate-config to run against an existing config file to generate default keys 2015-05-01 13:54:38 +01:00
David Baker
92b20713d7 More missed get_user_by_id API changes 2015-05-01 13:45:54 +01:00
Erik Johnston
da4ed08739 One too many lens 2015-05-01 13:29:38 +01:00
Erik Johnston
9060dc6b59 Change public room list to use defer.gatherResults 2015-05-01 13:28:36 +01:00
David Baker
1fae1b3166 This api now no longer returns an array 2015-05-01 13:26:41 +01:00
Erik Johnston
80b4119279 Don't wait for storage of access_token 2015-05-01 13:14:05 +01:00
Erik Johnston
4011cf1c42 Cache latest_event_ids_in_room 2015-05-01 13:06:26 +01:00
Mark Haines
50c87b8eed Allow "manhole" to be omitted from the config 2015-04-30 18:11:47 +01:00
Mark Haines
345995fcde Remove the ~, comment the lines instead 2015-04-30 18:10:19 +01:00
Mark Haines
62cebee8ee Update key.py 2015-04-30 17:54:01 +01:00
Mark Haines
95cbfee8ae Update metrics.py 2015-04-30 17:52:20 +01:00
Mark Haines
4ad8350607 Update README.rst 2015-04-30 17:48:29 +01:00
Mark Haines
6ea9cf58be missing import 2015-04-30 17:21:21 +01:00
Mark Haines
c95480963e read the pid_file from the config file in synctl 2015-04-30 17:12:15 +01:00
Mark Haines
069296dbb0 Can't specify bind-port on the cmdline anymore 2015-04-30 17:08:07 +01:00
Mark Haines
2d4d2bbae4 Merge branch 'develop' into markjh/config_cleanup
Conflicts:
	synapse/config/captcha.py
2015-04-30 16:54:55 +01:00
Mark Haines
2f1348f339 Write a default log_config when generating config 2015-04-30 16:52:57 +01:00
Mark Haines
74aaacf82a Don't break when sizes or durations are given as integers 2015-04-30 16:04:02 +01:00
Mark Haines
c28f1d16f0 Add a random string to the auto generated key id 2015-04-30 15:13:14 +01:00
Mark Haines
265f30bd3f Allow --enable-registration to be passed on the commandline 2015-04-30 15:04:06 +01:00
Mark Haines
c9e62927f2 Use disable_registration keys if they are present 2015-04-30 14:34:09 +01:00
Mark Haines
1aa11cf7ce Allow multiple config files, set up a default config before applying the config files 2015-04-30 13:48:15 +01:00
Mark Haines
6b69ddd17a remove duplicate parse_size method 2015-04-30 04:26:29 +01:00
Mark Haines
d624e2a638 Manually generate the default config yaml, remove most of the commandline arguments for synapse anticipating that people will use the yaml instead. Simplify implementing config options by not requiring the classes to hit the super class 2015-04-30 04:24:44 +01:00
248 changed files with 17782 additions and 6535 deletions

.gitignore

@@ -42,3 +42,7 @@ build/
 localhost-800*/
 static/client/register/register_config.js
+.tox
+env/
+*.config

AUTHORS

@@ -35,3 +35,16 @@ Turned to Dust <dwinslow86 at gmail.com>
 Brabo <brabo at riseup.net>
  * Installation instruction fixes
+Ivan Shapovalov <intelfx100 at gmail.com>
+ * contrib/systemd: a sample systemd unit file and a logger configuration
+Eric Myhre <hash at exultant.us>
+ * Fix bug where ``media_store_path`` config option was ignored by v0 content
+   repository API.
+Muthu Subramanian <muthu.subramanian.karunanidhi at ericsson.com>
+ * Add SAML2 support for registration and login.
+Steven Hammerton <steven.hammerton at openmarket.com>
+ * Add CAS support for registration and login.

CHANGES.rst

@@ -1,9 +1,325 @@
-Changes in synapse vX
-=====================
-* Changed config option from ``disable_registration`` to
-  ``enable_registration``. Old option will be ignored.
+Changes in synapse v0.11.0 (2015-11-17)
+=======================================
+* Change CAS login API (PR #349)

Changes in synapse v0.11.0-rc2 (2015-11-13)
===========================================
* Various changes to /sync API response format (PR #373)
* Fix regression when setting display name in newly joined room over federation (PR #368)
* Fix problem where /search was slow when using SQLite (PR #366)
Changes in synapse v0.11.0-rc1 (2015-11-11)
===========================================
* Add Search API (PR #307, #324, #327, #336, #350, #359)
* Add 'archived' state to v2 /sync API (PR #316)
* Add ability to reject invites (PR #317)
* Add config option to disable password login (PR #322)
* Add the login fallback API (PR #330)
* Add room context API (PR #334)
* Add room tagging support (PR #335)
* Update v2 /sync API to match spec (PR #305, #316, #321, #332, #337, #341)
* Change retry schedule for application services (PR #320)
* Change retry schedule for remote servers (PR #340)
* Fix bug where we hosted static content in the incorrect place (PR #329)
* Fix bug where we didn't increment retry interval for remote servers (PR #343)
Changes in synapse v0.10.1-rc1 (2015-10-15)
===========================================
* Add support for CAS, thanks to Steven Hammerton (PR #295, #296)
* Add support for using macaroons for ``access_token`` (PR #256, #229)
* Add support for ``m.room.canonical_alias`` (PR #287)
* Add support for viewing the history of rooms that they have left. (PR #276, #294)
* Add support for refresh tokens (PR #240)
* Add flag on creation which disables federation of the room (PR #279)
* Add some room state to invites. (PR #275)
* Atomically persist events when joining a room over federation (PR #283)
* Change default history visibility for private rooms (PR #271)
* Allow users to redact their own sent events (PR #262)
* Use tox for tests (PR #247)
* Split up syutil into separate libraries (PR #243)
Changes in synapse v0.10.0-r2 (2015-09-16)
==========================================
* Fix bug where we always fetched remote server signing keys instead of using ones in our cache.
* Fix adding threepids to an existing account.
* Fix bug with inviting over federation where remote server was already in the room. (PR #281, SYN-392)
Changes in synapse v0.10.0-r1 (2015-09-08)
==========================================
* Fix bug with python packaging
Changes in synapse v0.10.0 (2015-09-03)
=======================================
No change from release candidate.
Changes in synapse v0.10.0-rc6 (2015-09-02)
===========================================
* Remove some of the old database upgrade scripts.
* Fix database port script to work with newly created sqlite databases.
Changes in synapse v0.10.0-rc5 (2015-08-27)
===========================================
* Fix bug that broke downloading files with ascii filenames across federation.
Changes in synapse v0.10.0-rc4 (2015-08-27)
===========================================
* Allow UTF-8 filenames for upload. (PR #259)
Changes in synapse v0.10.0-rc3 (2015-08-25)
===========================================
* Add ``--keys-directory`` config option to specify where files such as certs and signing keys should be stored in, when using ``--generate-config`` or ``--generate-keys``. (PR #250)
* Allow ``--config-path`` to specify a directory, causing synapse to use all \*.yaml files in the directory as config files. (PR #249)
* Add ``web_client_location`` config option to specify static files to be hosted by synapse under ``/_matrix/client``. (PR #245)
* Add helper utility to synapse to read and parse the config files and extract the value of a given key. For example::

      $ python -m synapse.config read server_name -c homeserver.yaml
      localhost

  (PR #246)
Changes in synapse v0.10.0-rc2 (2015-08-24)
===========================================
* Fix bug where we incorrectly populated the ``event_forward_extremities`` table, resulting in problems joining large remote rooms (e.g. ``#matrix:matrix.org``)
* Reduce the number of times we wake up pushers by not listening for presence or typing events, reducing the CPU cost of each pusher.
Changes in synapse v0.10.0-rc1 (2015-08-21)
===========================================
Also see v0.9.4-rc1 changelog, which has been amalgamated into this release.
General:
* Upgrade to Twisted 15 (PR #173)
* Add support for serving and fetching encryption keys over federation. (PR #208)
* Add support for logging in with email address (PR #234)
* Add support for new ``m.room.canonical_alias`` event. (PR #233)
* Change synapse to treat user IDs case insensitively during registration and login. (If two users already exist with case insensitive matching user ids, synapse will continue to require them to specify their user ids exactly.)
* Error if a user tries to register with an email already in use. (PR #211)
* Add extra and improve existing caches (PR #212, #219, #226, #228)
* Batch various storage request (PR #226, #228)
* Fix bug where we didn't correctly log the entity that triggered the request if the request came in via an application service (PR #230)
* Fix bug where we needlessly regenerated the full list of rooms an AS is interested in. (PR #232)
* Add support for AS's to use v2_alpha registration API (PR #210)
Configuration:
* Add ``--generate-keys`` that will generate any missing cert and key files in the configuration files. This is equivalent to running ``--generate-config`` on an existing configuration file. (PR #220)
* ``--generate-config`` now no longer requires a ``--server-name`` parameter when used on existing configuration files. (PR #220)
* Add ``--print-pidfile`` flag that controls the printing of the pid to stdout of the daemonised process. (PR #213)
Media Repository:
* Fix bug where we picked a lower resolution image than requested. (PR #205)
* Add support for specifying if the media repository should dynamically thumbnail images or not. (PR #206)
Metrics:
* Add statistics from the reactor to the metrics API. (PR #224, #225)
Demo Homeservers:
* Fix starting the demo homeservers without rate-limiting enabled. (PR #182)
* Fix enabling registration on demo homeservers (PR #223)
Changes in synapse v0.9.4-rc1 (2015-07-21)
==========================================
General:
* Add basic implementation of receipts. (SPEC-99)
* Add support for configuration presets in room creation API. (PR #203)
* Add auth event that limits the visibility of history for new users. (SPEC-134)
* Add SAML2 login/registration support. (PR #201. Thanks Muthu Subramanian!)
* Add client side key management APIs for end to end encryption. (PR #198)
* Change power level semantics so that you cannot kick, ban or change power levels of users that have equal or greater power level than you. (SYN-192)
* Improve performance by bulk inserting events where possible. (PR #193)
* Improve performance by bulk verifying signatures where possible. (PR #194)
Configuration:
* Add support for including TLS certificate chains.
Media Repository:
* Add Content-Disposition headers to content repository responses. (SYN-150)
Changes in synapse v0.9.3 (2015-07-01)
======================================
No changes from v0.9.3 Release Candidate 1.
Changes in synapse v0.9.3-rc1 (2015-06-23)
==========================================
General:
* Fix a memory leak in the notifier. (SYN-412)
* Improve performance of room initial sync. (SYN-418)
* General improvements to logging.
* Remove ``access_token`` query params from ``INFO`` level logging.
Configuration:
* Add support for specifying and configuring multiple listeners. (SYN-389)
Application services:
* Fix bug where synapse failed to send user queries to application services.
Changes in synapse v0.9.2-r2 (2015-06-15)
=========================================
Fix packaging so that schema delta python files get included in the package.
Changes in synapse v0.9.2 (2015-06-12)
======================================
General:
* Use ultrajson for json (de)serialisation when a canonical encoding is not required. Ultrajson is significantly faster than simplejson in certain circumstances.
* Use connection pools for outgoing HTTP connections.
* Process thumbnails on separate threads.
Configuration:
* Add option, ``gzip_responses``, to disable HTTP response compression.
Federation:
* Improve resilience of backfill by ensuring we fetch any missing auth events.
* Improve performance of backfill and joining remote rooms by removing unnecessary computations. This included handling events we'd previously handled as well as attempting to compute the current state for outliers.
Changes in synapse v0.9.1 (2015-05-26)
======================================
General:
* Add support for backfilling when a client paginates. This allows servers to request history for a room from remote servers when a client tries to paginate history the server does not have - SYN-36
* Fix bug where you couldn't disable non-default pushrules - SYN-378
* Fix ``register_new_user`` script - SYN-359
* Improve performance of fetching events from the database, this improves both initialSync and sending of events.
* Improve performance of event streams, allowing synapse to handle more simultaneous connected clients.
Federation:
* Fix bug with existing backfill implementation where it returned the wrong selection of events in some circumstances.
* Improve performance of joining remote rooms.
Configuration:
* Add support for changing the bind host of the metrics listener via the ``metrics_bind_host`` option.
Changes in synapse v0.9.0-r5 (2015-05-21)
=========================================
* Add more database caches to reduce amount of work done for each pusher. This radically reduces CPU usage when multiple pushers are set up in the same room.
Changes in synapse v0.9.0 (2015-05-07)
======================================
General:
* Add support for using a PostgreSQL database instead of SQLite. See `docs/postgres.rst`_ for details.
* Add password change and reset APIs. See `Registration`_ in the spec.
* Fix memory leak due to not releasing stale notifiers - SYN-339.
* Fix race in caches that occasionally caused some presence updates to be dropped - SYN-369.
* Check server name has not changed on restart.
* Add a sample systemd unit file and a logger configuration in contrib/systemd. Contributed by Ivan Shapovalov.
Federation:
* Add key distribution mechanisms for fetching public keys of unavailable
remote home servers. See `Retrieving Server Keys`_ in the spec.
Configuration:
* Add support for multiple config files.
* Add support for dictionaries in config files.
* Remove support for specifying config options on the command line, except
for:
* ``--daemonize`` - Daemonize the home server.
* ``--manhole`` - Turn on the twisted telnet manhole service on the given
port.
* ``--database-path`` - The path to a sqlite database to use.
* ``--verbose`` - The verbosity level.
* ``--log-file`` - File to log to.
* ``--log-config`` - Python logging config file.
* ``--enable-registration`` - Enable registration for new users.
Application services:
* Reliably retry sending of events from Synapse to application services, as per
`Application Services`_ spec.
* Application services can no longer register via the ``/register`` API,
instead their configuration should be saved to a file and listed in the
synapse ``app_service_config_files`` config option. The AS configuration file
has the same format as the old ``/register`` request.
See `docs/application_services.rst`_ for more information.
.. _`docs/postgres.rst`: docs/postgres.rst
.. _`docs/application_services.rst`: docs/application_services.rst
.. _`Registration`: https://github.com/matrix-org/matrix-doc/blob/master/specification/10_client_server_api.rst#registration
.. _`Retrieving Server Keys`: https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys
.. _`Application Services`: https://github.com/matrix-org/matrix-doc/blob/0c6bd9/specification/25_application_service_api.rst#home-server---application-service-api
Changes in synapse v0.8.1 (2015-03-18)
======================================

View File

@@ -3,12 +3,23 @@ include LICENSE
 include VERSION
 include *.rst
 include demo/README
-include demo/demo.tls.dh
-include demo/*.py
-include demo/*.sh
 recursive-include synapse/storage/schema *.sql
+recursive-include synapse/storage/schema *.py
+recursive-include demo *.dh
+recursive-include demo *.py
+recursive-include demo *.sh
 recursive-include docs *
 recursive-include scripts *
+recursive-include scripts-dev *
 recursive-include tests *.py
+recursive-include synapse/static *.css
+recursive-include synapse/static *.gif
+recursive-include synapse/static *.html
+recursive-include synapse/static *.js
+exclude jenkins.sh
+prune demo/etc

View File

@@ -7,7 +7,7 @@ Matrix is an ambitious new ecosystem for open federated Instant Messaging and
 VoIP. The basics you need to know to get up and running are:

 - Everything in Matrix happens in a room. Rooms are distributed and do not
   exist on any single server. Rooms can be located using convenience aliases
   like ``#matrix:matrix.org`` or ``#test:localhost:8448``.

 - Matrix user IDs look like ``@matthew:matrix.org`` (although in the future
@@ -20,10 +20,10 @@ The overall architecture is::
       https://somewhere.org/_matrix      https://elsewhere.net/_matrix

 ``#matrix:matrix.org`` is the official support room for Matrix, and can be
-accessed by the web client at http://matrix.org/beta or via an IRC bridge at
-irc://irc.freenode.net/matrix.
+accessed by any client from https://matrix.org/blog/try-matrix-now or via IRC
+bridge at irc://irc.freenode.net/matrix.

 Synapse is currently in rapid development, but as of version 0.5 we believe it
 is sufficiently stable to be run as an internet-facing service for real usage!

 About Matrix
@@ -77,14 +77,14 @@ Meanwhile, iOS and Android SDKs and clients are available from:
 - https://github.com/matrix-org/matrix-android-sdk

 We'd like to invite you to join #matrix:matrix.org (via
-https://matrix.org/beta), run a homeserver, take a look at the Matrix spec at
-https://matrix.org/docs/spec and API docs at https://matrix.org/docs/api,
-experiment with the APIs and the demo clients, and report any bugs via
-https://matrix.org/jira.
+https://matrix.org/blog/try-matrix-now), run a homeserver, take a look at the
+Matrix spec at https://matrix.org/docs/spec and API docs at
+https://matrix.org/docs/api, experiment with the APIs and the demo clients, and
+report any bugs via https://matrix.org/jira.

 Thanks for using Matrix!

-[1] End-to-end encryption is currently in development
+[1] End-to-end encryption is currently in development - see https://matrix.org/git/olm

 Synapse Installation
 ====================
@@ -94,6 +94,7 @@ Synapse is the reference python/twisted Matrix homeserver implementation.
 System requirements:
 - POSIX-compliant system (tested on Linux & OS X)
 - Python 2.7
+- At least 512 MB RAM.

 Synapse is written in python but some of the libraries is uses are written in
 C. So before we can install synapse itself we need a working C compiler and the
@@ -101,36 +102,41 @@ header files for python C extensions.
 Installing prerequisites on Ubuntu or Debian::

-    $ sudo apt-get install build-essential python2.7-dev libffi-dev \
-                           python-pip python-setuptools sqlite3 \
-                           libssl-dev python-virtualenv libjpeg-dev
+    sudo apt-get install build-essential python2.7-dev libffi-dev \
+                         python-pip python-setuptools sqlite3 \
+                         libssl-dev python-virtualenv libjpeg-dev

 Installing prerequisites on ArchLinux::

-    $ sudo pacman -S base-devel python2 python-pip \
-            python-setuptools python-virtualenv sqlite3
+    sudo pacman -S base-devel python2 python-pip \
+                python-setuptools python-virtualenv sqlite3

 Installing prerequisites on Mac OS X::

-    $ xcode-select --install
-    $ sudo pip install virtualenv
+    xcode-select --install
+    sudo easy_install pip
+    sudo pip install virtualenv

 To install the synapse homeserver run::

-    $ virtualenv ~/.synapse
-    $ source ~/.synapse/bin/activate
-    $ pip install --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
+    virtualenv -p python2.7 ~/.synapse
+    source ~/.synapse/bin/activate
+    pip install --upgrade setuptools
+    pip install --process-dependency-links https://github.com/matrix-org/synapse/tarball/master

 This installs synapse, along with the libraries it uses, into a virtual
-environment under ``~/.synapse``.
+environment under ``~/.synapse``. Feel free to pick a different directory
+if you prefer.
+
+In case of problems, please see the _Troubleshooting section below.

 Alternatively, Silvio Fricke has contributed a Dockerfile to automate the
 above in Docker at https://registry.hub.docker.com/u/silviof/docker-matrix/.

 To set up your homeserver, run (in your virtualenv, as before)::

-    $ cd ~/.synapse
-    $ python -m synapse.app.homeserver \
+    cd ~/.synapse
+    python -m synapse.app.homeserver \
         --server-name machine.my.domain.name \
         --config-path homeserver.yaml \
         --generate-config
@@ -170,13 +176,13 @@ traditionally used for convenience and simplicity.
 The advantages of Postgres include:

 * significant performance improvements due to the superior threading and
   caching model, smarter query optimiser
 * allowing the DB to be run on separate hardware
 * allowing basic active/backup high-availability with a "hot spare" synapse
   pointing at the same DB master, as well as enabling DB replication in
   synapse itself.

 The only disadvantage is that the code is relatively new as of April 2015 and
 may have a few regressions relative to SQLite.
@@ -186,12 +192,12 @@ For information on how to install and use PostgreSQL, please see
 Running Synapse
 ===============

-To actually run your new homeserver, pick a working directory for Synapse to run
-(e.g. ``~/.synapse``), and::
+To actually run your new homeserver, pick a working directory for Synapse to
+run (e.g. ``~/.synapse``), and::

-    $ cd ~/.synapse
-    $ source ./bin/activate
-    $ synctl start
+    cd ~/.synapse
+    source ./bin/activate
+    synctl start

 Platform Specific Instructions
 ==============================
@@ -209,48 +215,50 @@ defaults to python 3, but synapse currently assumes python 2.7 by default:
 pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1 )::

-    $ sudo pip2.7 install --upgrade pip
+    sudo pip2.7 install --upgrade pip

 You also may need to explicitly specify python 2.7 again during the install
 request::

-    $ pip2.7 install --process-dependency-links \
+    pip2.7 install --process-dependency-links \
         https://github.com/matrix-org/synapse/tarball/master

 If you encounter an error with lib bcrypt causing an Wrong ELF Class:
 ELFCLASS32 (x64 Systems), you may need to reinstall py-bcrypt to correctly
 compile it under the right architecture. (This should not be needed if
 installing under virtualenv)::

-    $ sudo pip2.7 uninstall py-bcrypt
-    $ sudo pip2.7 install py-bcrypt
+    sudo pip2.7 uninstall py-bcrypt
+    sudo pip2.7 install py-bcrypt

 During setup of Synapse you need to call python2.7 directly again::

-    $ cd ~/.synapse
-    $ python2.7 -m synapse.app.homeserver \
+    cd ~/.synapse
+    python2.7 -m synapse.app.homeserver \
         --server-name machine.my.domain.name \
         --config-path homeserver.yaml \
         --generate-config

 ...substituting your host and domain name as appropriate.

 Windows Install
 ---------------
 Synapse can be installed on Cygwin. It requires the following Cygwin packages:

 - gcc
 - git
 - libffi-devel
 - openssl (and openssl-devel, python-openssl)
 - python
 - python-setuptools

 The content repository requires additional packages and will be unable to process
 uploads without them:

 - libjpeg8
 - libjpeg8-devel
 - zlib

 If you choose to install Synapse without these packages, you will need to reinstall
 ``pillow`` for changes to be applied, e.g. ``pip uninstall pillow`` ``pip install
 pillow --user``
@@ -272,33 +280,38 @@ Troubleshooting
 Troubleshooting Installation
 ----------------------------

 Synapse requires pip 1.7 or later, so if your OS provides too old a version and
 you get errors about ``error: no such option: --process-dependency-links`` you
 may need to manually upgrade it::

-    $ sudo pip install --upgrade pip
+    sudo pip install --upgrade pip
+
+Installing may fail with ``mock requires setuptools>=17.1. Aborting installation``.
+You can fix this by upgrading setuptools::
+
+    pip install --upgrade setuptools

 If pip crashes mid-installation for reason (e.g. lost terminal), pip may
 refuse to run until you remove the temporary installation directory it
 created. To reset the installation::

-    $ rm -rf /tmp/pip_install_matrix
+    rm -rf /tmp/pip_install_matrix

 pip seems to leak *lots* of memory during installation. For instance, a Linux
 host with 512MB of RAM may run out of memory whilst installing Twisted. If this
 happens, you will have to individually install the dependencies which are
 failing, e.g.::

-    $ pip install twisted
+    pip install twisted

-On OSX, if you encounter clang: error: unknown argument: '-mno-fused-madd' you
+On OS X, if you encounter clang: error: unknown argument: '-mno-fused-madd' you
 will need to export CFLAGS=-Qunused-arguments.

 Troubleshooting Running
 -----------------------

 If synapse fails with ``missing "sodium.h"`` crypto errors, you may need
 to manually upgrade PyNaCL, as synapse uses NaCl (http://nacl.cr.yp.to/) for
 encryption and digital signatures.
 Unfortunately PyNACL currently has a few issues
 (https://github.com/pyca/pynacl/issues/53) and
@@ -307,10 +320,11 @@ correctly, causing all tests to fail with errors about missing "sodium.h". To
 fix try re-installing from PyPI or directly from
 (https://github.com/pyca/pynacl)::

-    $ # Install from PyPI
-    $ pip install --user --upgrade --force pynacl
-    $ # Install from github
-    $ pip install --user https://github.com/pyca/pynacl/tarball/master
+    # Install from PyPI
+    pip install --user --upgrade --force pynacl
+
+    # Install from github
+    pip install --user https://github.com/pyca/pynacl/tarball/master

 ArchLinux
 ~~~~~~~~~
@@ -318,8 +332,8 @@ ArchLinux
 If running `$ synctl start` fails with 'returned non-zero exit status 1',
 you will need to explicitly call Python2.7 - either running as::

-    $ python2.7 -m synapse.app.homeserver --daemonize -c homeserver.yaml --pid-file homeserver.pid
+    python2.7 -m synapse.app.homeserver --daemonize -c homeserver.yaml

 ...or by editing synctl with the correct python executable.

 Synapse Development
@@ -328,16 +342,16 @@ Synapse Development
 To check out a synapse for development, clone the git repo into a working
 directory of your choice::

-    $ git clone https://github.com/matrix-org/synapse.git
-    $ cd synapse
+    git clone https://github.com/matrix-org/synapse.git
+    cd synapse

 Synapse has a number of external dependencies, that are easiest
 to install using pip and a virtualenv::

-    $ virtualenv env
-    $ source env/bin/activate
-    $ python synapse/python_dependencies.py | xargs -n1 pip install
-    $ pip install setuptools_trial mock
+    virtualenv env
+    source env/bin/activate
+    python synapse/python_dependencies.py | xargs -n1 pip install
+    pip install setuptools_trial mock

 This will run a process of downloading and installing all the needed
 dependencies into a virtual env.
@@ -345,7 +359,7 @@ dependencies into a virtual env.
 Once this is done, you may wish to run Synapse's unit tests, to
 check that everything is installed as it should be::

-    $ python setup.py test
+    python setup.py test

 This should end with a 'PASSED' result::
@@ -357,14 +371,11 @@ This should end with a 'PASSED' result::
 Upgrading an existing Synapse
 =============================

-IMPORTANT: Before upgrading an existing synapse to a new version, please
-refer to UPGRADE.rst for any additional instructions.
+The instructions for upgrading synapse are in `UPGRADE.rst`_.
+Please check these instructions as upgrading may require extra steps for some
+versions of synapse.

-Otherwise, simply re-install the new codebase over the current one - e.g.
-by ``pip install --process-dependency-links
-https://github.com/matrix-org/synapse/tarball/master``
-if using pip, or by ``git pull`` if running off a git working copy.
+.. _UPGRADE.rst: UPGRADE.rst

 Setting up Federation
 =====================
@@ -386,11 +397,11 @@ IDs:
 For the first form, simply pass the required hostname (of the machine) as the
 --server-name parameter::

-    $ python -m synapse.app.homeserver \
+    python -m synapse.app.homeserver \
         --server-name machine.my.domain.name \
         --config-path homeserver.yaml \
         --generate-config
-    $ python -m synapse.app.homeserver --config-path homeserver.yaml
+    python -m synapse.app.homeserver --config-path homeserver.yaml

 Alternatively, you can run ``synctl start`` to guide you through the process.
@@ -407,12 +418,11 @@ record would then look something like::
 At this point, you should then run the homeserver with the hostname of this
 SRV record, as that is the name other machines will expect it to have::

-    $ python -m synapse.app.homeserver \
+    python -m synapse.app.homeserver \
         --server-name YOURDOMAIN \
-        --bind-port 8448 \
         --config-path homeserver.yaml \
         --generate-config
-    $ python -m synapse.app.homeserver --config-path homeserver.yaml
+    python -m synapse.app.homeserver --config-path homeserver.yaml

 You may additionally want to pass one or more "-v" options, in order to
@@ -426,8 +436,8 @@ private federation (``localhost:8080``, ``localhost:8081`` and
 ``localhost:8082``) which you can then access through the webclient running at
 http://localhost:8080. Simply run::

-    $ demo/start.sh
+    demo/start.sh

 This is mainly useful just for development purposes.

 Running The Demo Web Client
@@ -490,7 +500,7 @@ time.
 Where's the spec?!
 ==================

 The source of the matrix spec lives at https://github.com/matrix-org/matrix-doc.
 A recent HTML snapshot of this lives at http://matrix.org/docs/spec
@@ -500,10 +510,10 @@ Building Internal API Documentation
 Before building internal API documentation install sphinx and
 sphinxcontrib-napoleon::

-    $ pip install sphinx
-    $ pip install sphinxcontrib-napoleon
+    pip install sphinx
+    pip install sphinxcontrib-napoleon

 Building internal API documentation::

-    $ python setup.py build_sphinx
+    python setup.py build_sphinx

View File

@@ -1,4 +1,37 @@
-Upgrading to v0.x.x
+Upgrading Synapse
+=================
+
+Before upgrading, check if any special steps are required to upgrade from
+what you currently have installed to the current version of synapse. The extra
+instructions that may be required are listed later in this document.
+
+If synapse was installed in a virtualenv then activate that virtualenv before
+upgrading. If synapse is installed in a virtualenv in ``~/.synapse/`` then run:
+
+.. code:: bash
+
+    source ~/.synapse/bin/activate
+
+If synapse was installed using pip then upgrade to the latest version by
+running:
+
+.. code:: bash
+
+    pip install --upgrade --process-dependency-links https://github.com/matrix-org/synapse/tarball/master
+
+If synapse was installed using git then upgrade to the latest version by
+running:
+
+.. code:: bash
+
+    # Pull the latest version of the master branch.
+    git pull
+
+    # Update the versions of synapse's python dependencies.
+    python synapse/python_dependencies.py | xargs -n1 pip install
+
+Upgrading to v0.9.0
 ===================

 Application services have had a breaking API change in this version.

View File

@@ -21,3 +21,5 @@ handlers:
 root:
     level: INFO
     handlers: [journal]
+
+disable_existing_loggers: False

View File

@@ -126,12 +126,26 @@ sub on_unknown_event
         if (!$bridgestate->{$room_id}->{gathered_candidates}) {
             $bridgestate->{$room_id}->{gathered_candidates} = 1;
             my $offer = $bridgestate->{$room_id}->{offer};
-            my $candidate_block = "";
+            my $candidate_block = {
+                audio => '',
+                video => '',
+            };
             foreach (@{$event->{content}->{candidates}}) {
-                $candidate_block .= "a=" . $_->{candidate} . "\r\n";
+                if ($_->{sdpMid}) {
+                    $candidate_block->{$_->{sdpMid}} .= "a=" . $_->{candidate} . "\r\n";
+                }
+                else {
+                    $candidate_block->{audio} .= "a=" . $_->{candidate} . "\r\n";
+                    $candidate_block->{video} .= "a=" . $_->{candidate} . "\r\n";
+                }
             }
-            # XXX: collate using the right m= line - for now assume audio call
-            $offer =~ s/(a=rtcp.*[\r\n]+)/$1$candidate_block/;
+            # XXX: assumes audio comes first
+            #$offer =~ s/(a=rtcp-mux[\r\n]+)/$1$candidate_block->{audio}/;
+            #$offer =~ s/(a=rtcp-mux[\r\n]+)/$1$candidate_block->{video}/;
+            $offer =~ s/(m=video)/$candidate_block->{audio}$1/;
+            $offer =~ s/(.$)/$1\n$candidate_block->{video}$1/;

             my $f = send_verto_json_request("verto.invite", {
                 "sdp" => $offer,
@@ -172,22 +186,18 @@ sub on_room_message
         warn "[Matrix] in $room_id: $from: " . $content->{body} . "\n";
     }

-my $verto_connecting = $loop->new_future;
-$bot_verto->connect(
-    %{ $CONFIG{"verto-bot"} },
-    on_connect_error => sub { die "Cannot connect to verto - $_[-1]" },
-    on_resolve_error => sub { die "Cannot resolve to verto - $_[-1]" },
-)->then( sub {
-    warn("[Verto] connected to websocket");
-    $verto_connecting->done($bot_verto) if not $verto_connecting->is_done;
-});

 Future->needs_all(
     $bot_matrix->login( %{ $CONFIG{"matrix-bot"} } )->then( sub {
         $bot_matrix->start;
     }),
-    $verto_connecting,
+    $bot_verto->connect(
+        %{ $CONFIG{"verto-bot"} },
+        on_connect_error => sub { die "Cannot connect to verto - $_[-1]" },
+        on_resolve_error => sub { die "Cannot resolve to verto - $_[-1]" },
+    )->on_done( sub {
+        warn("[Verto] connected to websocket");
+    }),
 )->get;

 $loop->attach_signal(
View File

@@ -11,7 +11,4 @@ requires 'YAML', 0;
 requires 'JSON', 0;
 requires 'Getopt::Long', 0;

-on 'test' => sub {
-    requires 'Test::More', '>= 0.98';
-};

View File

@@ -11,7 +11,9 @@ if [ -f $PID_FILE ]; then
     exit 1
 fi

-find "$DIR" -name "*.log" -delete
-find "$DIR" -name "*.db" -delete
+for port in 8080 8081 8082; do
+    rm -rf $DIR/$port
+    rm -rf $DIR/media_store.$port
+done

 rm -rf $DIR/etc

View File

@@ -8,38 +8,49 @@ cd "$DIR/.."
 mkdir -p demo/etc

-# Check the --no-rate-limit param
-PARAMS=""
-if [ $# -eq 1 ]; then
-    if [ $1 = "--no-rate-limit" ]; then
-        PARAMS="--rc-messages-per-second 1000 --rc-message-burst-count 1000"
-    fi
-fi
+export PYTHONPATH=$(readlink -f $(pwd))
+
+echo $PYTHONPATH

 for port in 8080 8081 8082; do
     echo "Starting server on port $port... "
     https_port=$((port + 400))

-    mkdir -p demo/$port
-    pushd demo/$port
-
-    #rm $DIR/etc/$port.config

     python -m synapse.app.homeserver \
         --generate-config \
-        --config-path "demo/etc/$port.config" \
-        -p "$https_port" \
-        --unsecure-port "$port" \
         -H "localhost:$https_port" \
-        -f "$DIR/$port.log" \
-        -d "$DIR/$port.db" \
-        -D --pid-file "$DIR/$port.pid" \
-        --manhole $((port + 1000)) \
-        --tls-dh-params-path "demo/demo.tls.dh" \
-        --media-store-path "demo/media_store.$port" \
-        $PARAMS $SYNAPSE_PARAMS \
-        --enable-registration
+        --config-path "$DIR/etc/$port.config" \
+        --report-stats no
+
+    # Check script parameters
+    if [ $# -eq 1 ]; then
+        if [ $1 = "--no-rate-limit" ]; then
+            # Set high limits in config file to disable rate limiting
+            perl -p -i -e 's/rc_messages_per_second.*/rc_messages_per_second: 1000/g' $DIR/etc/$port.config
+            perl -p -i -e 's/rc_message_burst_count.*/rc_message_burst_count: 1000/g' $DIR/etc/$port.config
+        fi
+    fi
+
+    perl -p -i -e 's/^enable_registration:.*/enable_registration: true/g' $DIR/etc/$port.config
+
+    if ! grep -F "full_twisted_stacktraces" -q $DIR/etc/$port.config; then
+        echo "full_twisted_stacktraces: true" >> $DIR/etc/$port.config
+    fi
+    if ! grep -F "report_stats" -q $DIR/etc/$port.config ; then
+        echo "report_stats: false" >> $DIR/etc/$port.config
+    fi

     python -m synapse.app.homeserver \
-        --config-path "demo/etc/$port.config" \
+        --config-path "$DIR/etc/$port.config" \
+        -D \
         -vv \
-    popd
 done

 cd "$CWD"

View File

@@ -0,0 +1,36 @@
Registering an Application Service
==================================
The registration of new application services depends on the homeserver used.
In synapse, you need to create a new configuration file for your AS and add it
to the list specified under the ``app_service_config_files`` config
option in your synapse config.
For example:
.. code-block:: yaml
app_service_config_files:
- /home/matrix/.synapse/<your-AS>.yaml
The format of the AS configuration file is as follows:
.. code-block:: yaml
url: <base url of AS>
as_token: <token AS will add to requests to HS>
hs_token: <token HS will add to requests to AS>
sender_localpart: <localpart of AS user>
namespaces:
users: # List of users we're interested in
- exclusive: <bool>
regex: <regex>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
See the spec_ for further details on how application services work.
.. _spec: https://github.com/matrix-org/matrix-doc/blob/master/specification/25_application_service_api.rst#application-service-api
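
For illustration only (every value below is made up), a registration for a
bridge that claims all users whose localparts start with ``irc_`` might look
like:

.. code-block:: yaml

  url: http://localhost:9000
  as_token: aaaabbbbccccdddd
  hs_token: ddddccccbbbbaaaa
  sender_localpart: ircbot
  namespaces:
    users:
      - exclusive: true
        regex: "@irc_.*"
    aliases: []
    rooms: []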

View File

@@ -34,19 +34,15 @@ Synapse config
 When you are ready to start using PostgreSQL, add the following line to your
 config file::

-    database_config: <db_config_file>
-
-Where ``<db_config_file>`` is the file name that points to a yaml file of the
-following form::
-
-    name: psycopg2
-    args:
-        user: <user>
-        password: <pass>
-        database: <db>
-        host: <host>
-        cp_min: 5
-        cp_max: 10
+    database:
+        name: psycopg2
+        args:
+            user: <user>
+            password: <pass>
+            database: <db>
+            host: <host>
+            cp_min: 5
+            cp_max: 10

 All key, values in ``args`` are passed to the ``psycopg2.connect(..)``
 function, except keys beginning with ``cp_``, which are consumed by the twisted

@@ -59,9 +55,8 @@ Porting from SQLite
 Overview
 ~~~~~~~~

-The script ``port_from_sqlite_to_postgres.py`` allows porting an existing
-synapse server backed by SQLite to using PostgreSQL. This is done in as a two
-phase process:
+The script ``synapse_port_db`` allows porting an existing synapse server
+backed by SQLite to using PostgreSQL. This is done in as a two phase process:

 1. Copy the existing SQLite database to a separate location (while the server
    is down) and running the port script against that offline database.

@@ -86,13 +81,12 @@ complete, restart synapse. For instance::
     cp homeserver.db homeserver.db.snapshot
     ./synctl start

-Assuming your database config file (as described in the section *Synapse
-config*) is named ``database_config.yaml`` and the SQLite snapshot is at
-``homeserver.db.snapshot`` then simply run::
+Assuming your new config file (as described in the section *Synapse config*)
+is named ``homeserver-postgres.yaml`` and the SQLite snapshot is at
+``homeserver.db.snapshot`` then simply run::

-    python scripts/port_from_sqlite_to_postgres.py \
-        --sqlite-database homeserver.db.snapshot \
-        --postgres-config database_config.yaml
+    synapse_port_db --sqlite-database homeserver.db.snapshot \
+        --postgres-config homeserver-postgres.yaml

 The flag ``--curses`` displays a coloured curses progress UI.

@@ -104,8 +98,7 @@ To complete the conversion shut down the synapse server and run the port
 script one last time, e.g. if the SQLite database is at ``homeserver.db``
 run::

-    python scripts/port_from_sqlite_to_postgres.py \
-        --sqlite-database homeserver.db \
+    synapse_port_db --sqlite-database homeserver.db \
         --postgres-config database_config.yaml

 Once that has completed, change the synapse config to point at the PostgreSQL

jenkins.sh Executable file
View File

@@ -0,0 +1,39 @@
#!/bin/bash -eu
export PYTHONDONTWRITEBYTECODE=yep
# Output test results as junit xml
export TRIAL_FLAGS="--reporter=subunit"
export TOXSUFFIX="| subunit-1to2 | subunit2junitxml --no-passthrough --output-to=results.xml"
# Output coverage to coverage.xml
export DUMP_COVERAGE_COMMAND="coverage xml -o coverage.xml"
# Output flake8 violations to violations.flake8.log
# Don't exit with non-0 status code on Jenkins,
# so that the build steps continue and a later step can decided whether to
# UNSTABLE or FAILURE this build.
export PEP8SUFFIX="--output-file=violations.flake8.log || echo flake8 finished with status code \$?"
tox
: ${GIT_BRANCH:="$(git rev-parse --abbrev-ref HEAD)"}
set +u
. .tox/py27/bin/activate
set -u
rm -rf sytest
git clone https://github.com/matrix-org/sytest.git sytest
cd sytest
git checkout "${GIT_BRANCH}" || (echo >&2 "No ref ${GIT_BRANCH} found, falling back to develop" ; git checkout develop)
: ${PERL5LIB:=$WORKSPACE/perl5/lib/perl5}
: ${PERL_MB_OPT:=--install_base=$WORKSPACE/perl5}
: ${PERL_MM_OPT:=INSTALL_BASE=$WORKSPACE/perl5}
export PERL5LIB PERL_MB_OPT PERL_MM_OPT
./install-deps.pl
./run-tests.pl -O tap --synapse-directory .. --all > results.tap
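
The script assumes a Jenkins-style environment: ``$WORKSPACE`` must be set (it
is used for the Perl library paths above), and the sytest checkout falls back
to its ``develop`` branch when no branch named ``$GIT_BRANCH`` exists. To try
it outside Jenkins, something like the following should approximate that
environment::

    WORKSPACE=$(pwd) ./jenkins.sh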

View File

@@ -56,10 +56,9 @@ if __name__ == '__main__':
     js = json.load(args.json)

     auth = Auth(Mock())
     check_auth(
         auth,
         [FrozenEvent(d) for d in js["auth_chain"]],
-        [FrozenEvent(d) for d in js["pdus"]],
+        [FrozenEvent(d) for d in js.get("pdus", [])],
     )

View File

@@ -1,5 +1,5 @@
 from synapse.crypto.event_signing import *
-from syutil.base64util import encode_base64
+from unpaddedbase64 import encode_base64

 import argparse
 import hashlib

View File

@@ -1,9 +1,7 @@
-from syutil.crypto.jsonsign import verify_signed_json
-from syutil.crypto.signing_key import (
-    decode_verify_key_bytes, write_signing_keys
-)
-from syutil.base64util import decode_base64
+from signedjson.sign import verify_signed_json
+from signedjson.key import decode_verify_key_bytes, write_signing_keys
+from unpaddedbase64 import decode_base64

 import urllib2
 import json

View File

@@ -0,0 +1,116 @@
import psycopg2
import yaml
import sys
import json
import time
import hashlib
from unpaddedbase64 import encode_base64
from signedjson.key import read_signing_keys
from signedjson.sign import sign_json
from canonicaljson import encode_canonical_json
def select_v1_keys(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, key_id, verify_key FROM server_signature_keys")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, key_id, verify_key in rows:
results.setdefault(server_name, {})[key_id] = encode_base64(verify_key)
return results
def select_v1_certs(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, tls_certificate FROM server_tls_certificates")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, tls_certificate in rows:
results[server_name] = tls_certificate
return results
def select_v2_json(connection):
cursor = connection.cursor()
cursor.execute("SELECT server_name, key_id, key_json FROM server_keys_json")
rows = cursor.fetchall()
cursor.close()
results = {}
for server_name, key_id, key_json in rows:
results.setdefault(server_name, {})[key_id] = json.loads(str(key_json).decode("utf-8"))
return results
def convert_v1_to_v2(server_name, valid_until, keys, certificate):
return {
"old_verify_keys": {},
"server_name": server_name,
"verify_keys": {
key_id: {"key": key}
for key_id, key in keys.items()
},
"valid_until_ts": valid_until,
"tls_fingerprints": [fingerprint(certificate)],
}
def fingerprint(certificate):
finger = hashlib.sha256(certificate)
return {"sha256": encode_base64(finger.digest())}
def rows_v2(server, json):
valid_until = json["valid_until_ts"]
key_json = encode_canonical_json(json)
for key_id in json["verify_keys"]:
yield (server, key_id, "-", valid_until, valid_until, buffer(key_json))
def main():
config = yaml.load(open(sys.argv[1]))
valid_until = int(time.time() / (3600 * 24)) * 1000 * 3600 * 24
server_name = config["server_name"]
signing_key = read_signing_keys(open(config["signing_key_path"]))[0]
database = config["database"]
assert database["name"] == "psycopg2", "Can only convert for postgresql"
args = database["args"]
args.pop("cp_max")
args.pop("cp_min")
connection = psycopg2.connect(**args)
keys = select_v1_keys(connection)
certificates = select_v1_certs(connection)
json = select_v2_json(connection)
result = {}
for server in keys:
if not server in json:
v2_json = convert_v1_to_v2(
server, valid_until, keys[server], certificates[server]
)
v2_json = sign_json(v2_json, server_name, signing_key)
result[server] = v2_json
yaml.safe_dump(result, sys.stdout, default_flow_style=False)
rows = list(
row for server, json in result.items()
for row in rows_v2(server, json)
)
cursor = connection.cursor()
cursor.executemany(
"INSERT INTO server_keys_json ("
" server_name, key_id, from_server,"
" ts_added_ms, ts_valid_until_ms, key_json"
") VALUES (%s, %s, %s, %s, %s, %s)",
rows
)
connection.commit()
if __name__ == '__main__':
main()
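
This script takes a single argument, the path to a synapse config file, from
which it reads ``server_name``, ``signing_key_path`` and the postgres
``database`` section. Assuming it were saved as, say,
``convert_server_keys.py`` (the filename is not shown here), it would be
invoked as::

    python convert_server_keys.py homeserver.yaml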

scripts-dev/definitions.py Executable file
View File

@@ -0,0 +1,142 @@
#! /usr/bin/python
import ast
import yaml
class DefinitionVisitor(ast.NodeVisitor):
def __init__(self):
super(DefinitionVisitor, self).__init__()
self.functions = {}
self.classes = {}
self.names = {}
self.attrs = set()
self.definitions = {
'def': self.functions,
'class': self.classes,
'names': self.names,
'attrs': self.attrs,
}
def visit_Name(self, node):
self.names.setdefault(type(node.ctx).__name__, set()).add(node.id)
def visit_Attribute(self, node):
self.attrs.add(node.attr)
for child in ast.iter_child_nodes(node):
self.visit(child)
def visit_ClassDef(self, node):
visitor = DefinitionVisitor()
self.classes[node.name] = visitor.definitions
for child in ast.iter_child_nodes(node):
visitor.visit(child)
def visit_FunctionDef(self, node):
visitor = DefinitionVisitor()
self.functions[node.name] = visitor.definitions
for child in ast.iter_child_nodes(node):
visitor.visit(child)
def non_empty(defs):
functions = {name: non_empty(f) for name, f in defs['def'].items()}
classes = {name: non_empty(f) for name, f in defs['class'].items()}
result = {}
if functions: result['def'] = functions
if classes: result['class'] = classes
names = defs['names']
uses = []
for name in names.get('Load', ()):
if name not in names.get('Param', ()) and name not in names.get('Store', ()):
uses.append(name)
uses.extend(defs['attrs'])
if uses: result['uses'] = uses
result['names'] = names
result['attrs'] = defs['attrs']
return result
def definitions_in_code(input_code):
input_ast = ast.parse(input_code)
visitor = DefinitionVisitor()
visitor.visit(input_ast)
definitions = non_empty(visitor.definitions)
return definitions
def definitions_in_file(filepath):
with open(filepath) as f:
return definitions_in_code(f.read())
def defined_names(prefix, defs, names):
for name, funcs in defs.get('def', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
for name, funcs in defs.get('class', {}).items():
names.setdefault(name, {'defined': []})['defined'].append(prefix + name)
defined_names(prefix + name + ".", funcs, names)
def used_names(prefix, defs, names):
for name, funcs in defs.get('def', {}).items():
used_names(prefix + name + ".", funcs, names)
for name, funcs in defs.get('class', {}).items():
used_names(prefix + name + ".", funcs, names)
for used in defs.get('uses', ()):
if used in names:
names[used].setdefault('used', []).append(prefix.rstrip('.'))
if __name__ == '__main__':
import sys, os, argparse, re
parser = argparse.ArgumentParser(description='Find definitions.')
parser.add_argument(
"--unused", action="store_true", help="Only list unused definitions"
)
parser.add_argument(
"--ignore", action="append", metavar="REGEXP", help="Ignore a pattern"
)
parser.add_argument(
"--pattern", action="append", metavar="REGEXP",
help="Search for a pattern"
)
parser.add_argument(
"directories", nargs='+', metavar="DIR",
help="Directories to search for definitions"
)
args = parser.parse_args()
definitions = {}
for directory in args.directories:
for root, dirs, files in os.walk(directory):
for filename in files:
if filename.endswith(".py"):
filepath = os.path.join(root, filename)
definitions[filepath] = definitions_in_file(filepath)
names = {}
for filepath, defs in definitions.items():
defined_names(filepath + ":", defs, names)
for filepath, defs in definitions.items():
used_names(filepath + ":", defs, names)
patterns = [re.compile(pattern) for pattern in args.pattern or ()]
ignore = [re.compile(pattern) for pattern in args.ignore or ()]
result = {}
for name, definition in names.items():
if patterns and not any(pattern.match(name) for pattern in patterns):
continue
if ignore and any(pattern.match(name) for pattern in ignore):
continue
if args.unused and definition.get('used'):
continue
result[name] = definition
yaml.dump(result, sys.stdout, default_flow_style=False)
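
Going by the argparse options defined above, a typical invocation to list
definitions under ``synapse/`` that nothing appears to use would be::

    python scripts-dev/definitions.py --unused synapse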

View File

@@ -6,8 +6,8 @@ from synapse.crypto.event_signing import (
     add_event_pdu_content_hash, compute_pdu_event_reference_hash
 )
 from synapse.api.events.utils import prune_pdu
-from syutil.base64util import encode_base64, decode_base64
-from syutil.jsonutil import encode_canonical_json
+from unpaddedbase64 import encode_base64, decode_base64
+from canonicaljson import encode_canonical_json

 import sqlite3
 import sys

View File

@@ -1,21 +0,0 @@
#!/bin/bash
# This is will prepare a synapse database for running with v0.0.1 of synapse.
# It will store all the user information, but will *delete* all messages and
# room data.
set -e
cp "$1" "$1.bak"
DUMP=$(sqlite3 "$1" << 'EOF'
.dump users
.dump access_tokens
.dump presence
.dump profiles
EOF
)
rm "$1"
sqlite3 "$1" <<< "$DUMP"

View File

@@ -1,21 +0,0 @@
#!/bin/bash
# This is will prepare a synapse database for running with v0.5.0 of synapse.
# It will store all the user information, but will *delete* all messages and
# room data.
set -e
cp "$1" "$1.bak"
DUMP=$(sqlite3 "$1" << 'EOF'
.dump users
.dump access_tokens
.dump presence
.dump profiles
EOF
)
rm "$1"
sqlite3 "$1" <<< "$DUMP"

View File

@@ -33,9 +33,10 @@ def request_registration(user, password, server_location, shared_secret):
     ).hexdigest()

     data = {
-        "username": user,
+        "user": user,
         "password": password,
         "mac": mac,
+        "type": "org.matrix.login.shared_secret",
     }

     server_location = server_location.rstrip("/")

@@ -43,7 +44,7 @@ def request_registration(user, password, server_location, shared_secret):
     print "Sending registration request..."

     req = urllib2.Request(
-        "%s/_matrix/client/v2_alpha/register" % (server_location,),
+        "%s/_matrix/client/api/v1/register" % (server_location,),
         data=json.dumps(data),
         headers={'Content-Type': 'application/json'}
    )
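
For context, the ``mac`` field assembled above is an HMAC over the
registration details, keyed with the homeserver's registration shared secret.
A minimal sketch of the v1-era computation follows; it assumes (from this
script's behaviour at the time — verify against your synapse version) that
only the username is authenticated:

.. code:: python

    import hmac
    from hashlib import sha1

    def registration_mac(shared_secret, user):
        # Assumption: the v1 shared-secret flow HMACs only the username.
        return hmac.new(
            key=shared_secret,  # the homeserver's registration shared secret
            msg=user,           # the localpart being registered
            digestmod=sha1,
        ).hexdigest()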

View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python
 # -*- coding: utf-8 -*-
 # Copyright 2015 OpenMarket Ltd
 #

@@ -28,7 +29,7 @@ import traceback
 import yaml

-logger = logging.getLogger("port_from_sqlite_to_postgres")
+logger = logging.getLogger("synapse_port_db")

 BOOLEAN_COLUMNS = {

@@ -94,8 +95,6 @@ class Store(object):
     _simple_update_one = SQLBaseStore.__dict__["_simple_update_one"]
     _simple_update_one_txn = SQLBaseStore.__dict__["_simple_update_one_txn"]

-    _execute_and_decode = SQLBaseStore.__dict__["_execute_and_decode"]
-
     def runInteraction(self, desc, func, *args, **kwargs):
         def r(conn):
             try:

@@ -105,7 +104,7 @@
             try:
                 txn = conn.cursor()
                 return func(
-                    LoggingTransaction(txn, desc, self.database_engine),
+                    LoggingTransaction(txn, desc, self.database_engine, []),
                     *args, **kwargs
                 )
             except self.database_engine.module.DatabaseError as e:

@@ -377,9 +376,7 @@ class Porter(object):
             for i, row in enumerate(rows):
                 rows[i] = tuple(
-                    self.postgres_store.database_engine.encode_parameter(
-                        conv(j, col)
-                    )
+                    conv(j, col)
                     for j, col in enumerate(row)
                     if j > 0
                 )

@@ -413,14 +410,17 @@
             self._convert_rows("sent_transactions", headers, rows)

             inserted_rows = len(rows)
-            max_inserted_rowid = max(r[0] for r in rows)
+            if inserted_rows:
+                max_inserted_rowid = max(r[0] for r in rows)

-            def insert(txn):
-                self.postgres_store.insert_many_txn(
-                    txn, "sent_transactions", headers[1:], rows
-                )
+                def insert(txn):
+                    self.postgres_store.insert_many_txn(
+                        txn, "sent_transactions", headers[1:], rows
+                    )

-            yield self.postgres_store.execute(insert)
+                yield self.postgres_store.execute(insert)
+            else:
+                max_inserted_rowid = 0

             def get_start_id(txn):
                 txn.execute(

@@ -724,6 +724,9 @@ if __name__ == "__main__":
     postgres_config = yaml.safe_load(args.postgres_config)

+    if "database" in postgres_config:
+        postgres_config = postgres_config["database"]
+
     if "name" not in postgres_config:
         sys.stderr.write("Malformed database config: no 'name'")
         sys.exit(2)

View File

@@ -1,331 +0,0 @@
from synapse.storage import SCHEMA_VERSION, read_schema
from synapse.storage._base import SQLBaseStore
from synapse.storage.signatures import SignatureStore
from synapse.storage.event_federation import EventFederationStore
from syutil.base64util import encode_base64, decode_base64
from synapse.crypto.event_signing import compute_event_signature
from synapse.events.builder import EventBuilder
from synapse.events.utils import prune_event
from synapse.crypto.event_signing import check_event_content_hash
from syutil.crypto.jsonsign import (
verify_signed_json, SignatureVerifyException,
)
from syutil.crypto.signing_key import decode_verify_key_bytes
from syutil.jsonutil import encode_canonical_json
import argparse
# import dns.resolver
import hashlib
import httplib
import json
import sqlite3
import syutil
import urllib2
delta_sql = """
CREATE TABLE IF NOT EXISTS event_json(
event_id TEXT NOT NULL,
room_id TEXT NOT NULL,
internal_metadata NOT NULL,
json BLOB NOT NULL,
CONSTRAINT ev_j_uniq UNIQUE (event_id)
);
CREATE INDEX IF NOT EXISTS event_json_id ON event_json(event_id);
CREATE INDEX IF NOT EXISTS event_json_room_id ON event_json(room_id);
PRAGMA user_version = 10;
"""
class Store(object):
_get_event_signatures_txn = SignatureStore.__dict__["_get_event_signatures_txn"]
_get_event_content_hashes_txn = SignatureStore.__dict__["_get_event_content_hashes_txn"]
_get_event_reference_hashes_txn = SignatureStore.__dict__["_get_event_reference_hashes_txn"]
_get_prev_event_hashes_txn = SignatureStore.__dict__["_get_prev_event_hashes_txn"]
_get_prev_events_and_state = EventFederationStore.__dict__["_get_prev_events_and_state"]
_get_auth_events = EventFederationStore.__dict__["_get_auth_events"]
cursor_to_dict = SQLBaseStore.__dict__["cursor_to_dict"]
_simple_select_onecol_txn = SQLBaseStore.__dict__["_simple_select_onecol_txn"]
_simple_select_list_txn = SQLBaseStore.__dict__["_simple_select_list_txn"]
_simple_insert_txn = SQLBaseStore.__dict__["_simple_insert_txn"]
def _generate_event_json(self, txn, rows):
events = []
for row in rows:
d = dict(row)
d.pop("stream_ordering", None)
d.pop("topological_ordering", None)
d.pop("processed", None)
if "origin_server_ts" not in d:
d["origin_server_ts"] = d.pop("ts", 0)
else:
d.pop("ts", 0)
d.pop("prev_state", None)
d.update(json.loads(d.pop("unrecognized_keys")))
d["sender"] = d.pop("user_id")
d["content"] = json.loads(d["content"])
if "age_ts" not in d:
# For compatibility
d["age_ts"] = d.get("origin_server_ts", 0)
d.setdefault("unsigned", {})["age_ts"] = d.pop("age_ts")
outlier = d.pop("outlier", False)
# d.pop("membership", None)
d.pop("state_hash", None)
d.pop("replaces_state", None)
b = EventBuilder(d)
b.internal_metadata.outlier = outlier
events.append(b)
for i, ev in enumerate(events):
signatures = self._get_event_signatures_txn(
txn, ev.event_id,
)
ev.signatures = {
n: {
k: encode_base64(v) for k, v in s.items()
}
for n, s in signatures.items()
}
hashes = self._get_event_content_hashes_txn(
txn, ev.event_id,
)
ev.hashes = {
k: encode_base64(v) for k, v in hashes.items()
}
prevs = self._get_prev_events_and_state(txn, ev.event_id)
ev.prev_events = [
(e_id, h)
for e_id, h, is_state in prevs
if is_state == 0
]
# ev.auth_events = self._get_auth_events(txn, ev.event_id)
hashes = dict(ev.auth_events)
for e_id, hash in ev.prev_events:
if e_id in hashes and not hash:
hash.update(hashes[e_id])
#
# if hasattr(ev, "state_key"):
# ev.prev_state = [
# (e_id, h)
# for e_id, h, is_state in prevs
# if is_state == 1
# ]
return [e.build() for e in events]
store = Store()
# def get_key(server_name):
# print "Getting keys for: %s" % (server_name,)
# targets = []
# if ":" in server_name:
# target, port = server_name.split(":")
# targets.append((target, int(port)))
# try:
# answers = dns.resolver.query("_matrix._tcp." + server_name, "SRV")
# for srv in answers:
# targets.append((srv.target, srv.port))
# except dns.resolver.NXDOMAIN:
# targets.append((server_name, 8448))
# except:
# print "Failed to lookup keys for %s" % (server_name,)
# return {}
#
# for target, port in targets:
# url = "https://%s:%i/_matrix/key/v1" % (target, port)
# try:
# keys = json.load(urllib2.urlopen(url, timeout=2))
# verify_keys = {}
# for key_id, key_base64 in keys["verify_keys"].items():
# verify_key = decode_verify_key_bytes(
# key_id, decode_base64(key_base64)
# )
# verify_signed_json(keys, server_name, verify_key)
# verify_keys[key_id] = verify_key
# print "Got keys for: %s" % (server_name,)
# return verify_keys
# except urllib2.URLError:
# pass
# except urllib2.HTTPError:
# pass
# except httplib.HTTPException:
# pass
#
# print "Failed to get keys for %s" % (server_name,)
# return {}
def reinsert_events(cursor, server_name, signing_key):
print "Running delta: v10"
cursor.executescript(delta_sql)
cursor.execute(
"SELECT * FROM events ORDER BY rowid ASC"
)
print "Getting events..."
rows = store.cursor_to_dict(cursor)
events = store._generate_event_json(cursor, rows)
print "Got events from DB."
algorithms = {
"sha256": hashlib.sha256,
}
key_id = "%s:%s" % (signing_key.alg, signing_key.version)
verify_key = signing_key.verify_key
verify_key.alg = signing_key.alg
verify_key.version = signing_key.version
server_keys = {
server_name: {
key_id: verify_key
}
}
i = 0
N = len(events)
for event in events:
if i % 100 == 0:
print "Processed: %d/%d events" % (i,N,)
i += 1
# for alg_name in event.hashes:
# if check_event_content_hash(event, algorithms[alg_name]):
# pass
# else:
# pass
# print "FAIL content hash %s %s" % (alg_name, event.event_id, )
have_own_correctly_signed = False
for host, sigs in event.signatures.items():
pruned = prune_event(event)
for key_id in sigs:
if host not in server_keys:
server_keys[host] = {} # get_key(host)
if key_id in server_keys[host]:
try:
verify_signed_json(
pruned.get_pdu_json(),
host,
server_keys[host][key_id]
)
if host == server_name:
have_own_correctly_signed = True
except SignatureVerifyException:
print "FAIL signature check %s %s" % (
key_id, event.event_id
)
# TODO: Re sign with our own server key
if not have_own_correctly_signed:
sigs = compute_event_signature(event, server_name, signing_key)
event.signatures.update(sigs)
pruned = prune_event(event)
for key_id in event.signatures[server_name]:
verify_signed_json(
pruned.get_pdu_json(),
server_name,
server_keys[server_name][key_id]
)
event_json = encode_canonical_json(
event.get_dict()
).decode("UTF-8")
metadata_json = encode_canonical_json(
event.internal_metadata.get_dict()
).decode("UTF-8")
store._simple_insert_txn(
cursor,
table="event_json",
values={
"event_id": event.event_id,
"room_id": event.room_id,
"internal_metadata": metadata_json,
"json": event_json,
},
or_replace=True,
)
def main(database, server_name, signing_key):
conn = sqlite3.connect(database)
cursor = conn.cursor()
# Do other deltas:
cursor.execute("PRAGMA user_version")
row = cursor.fetchone()
if row and row[0]:
user_version = row[0]
# Run every version since after the current version.
for v in range(user_version + 1, 10):
print "Running delta: %d" % (v,)
sql_script = read_schema("delta/v%d" % (v,))
cursor.executescript(sql_script)
reinsert_events(cursor, server_name, signing_key)
conn.commit()
print "Success!"
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("database")
parser.add_argument("server_name")
parser.add_argument(
"signing_key", type=argparse.FileType('r'),
)
args = parser.parse_args()
signing_key = syutil.crypto.signing_key.read_signing_keys(
args.signing_key
)
main(args.database, args.server_name, signing_key[0])

View File

@@ -3,9 +3,6 @@ source-dir = docs/sphinx
 build-dir = docs/build
 all_files = 1

-[aliases]
-test = trial
-
 [trial]
 test_suite = tests

@@ -16,3 +13,6 @@ ignore =
     docs/*
    pylint.cfg
    tox.ini
+
+[flake8]
+max-line-length = 90

View File

@@ -14,8 +14,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
+import glob
 import os
-from setuptools import setup, find_packages
+from setuptools import setup, find_packages, Command
+import sys
 
 here = os.path.abspath(os.path.dirname(__file__))
@@ -36,6 +38,39 @@ def exec_file(path_segments):
     exec(code, result)
     return result
 
+
+class Tox(Command):
+    user_options = [('tox-args=', 'a', "Arguments to pass to tox")]
+
+    def initialize_options(self):
+        self.tox_args = None
+
+    def finalize_options(self):
+        self.test_args = []
+        self.test_suite = True
+
+    def run(self):
+        # import here, cause outside the eggs aren't loaded
+        try:
+            import tox
+        except ImportError:
+            try:
+                self.distribution.fetch_build_eggs("tox")
+                import tox
+            except:
+                raise RuntimeError(
+                    "The tests need 'tox' to run. Please install 'tox'."
+                )
+        import shlex
+        args = self.tox_args
+        if args:
+            args = shlex.split(self.tox_args)
+        else:
+            args = []
+        errno = tox.cmdline(args=args)
+        sys.exit(errno)
+
 version = exec_file(("synapse", "__init__.py"))["__version__"]
 dependencies = exec_file(("synapse", "python_dependencies.py"))
 long_description = read_file(("README.rst",))
@@ -46,14 +81,10 @@ setup(
     packages=find_packages(exclude=["tests", "tests.*"]),
     description="Reference Synapse Home Server",
     install_requires=dependencies['requirements'](include_conditional=True).keys(),
-    setup_requires=[
-        "Twisted==14.0.2", # Here to override setuptools_trial's dependency on Twisted>=2.4.0
-        "setuptools_trial",
-        "mock"
-    ],
-    dependency_links=dependencies["DEPENDENCY_LINKS"],
+    dependency_links=dependencies["DEPENDENCY_LINKS"].values(),
     include_package_data=True,
     zip_safe=False,
     long_description=long_description,
-    scripts=["synctl", "register_new_matrix_user"],
+    scripts=["synctl"] + glob.glob("scripts/*"),
+    cmdclass={'test': Tox},
 )
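With the Tox command registered under cmdclass, `python setup.py test` now delegates to tox rather than trial. A sketch of passing arguments through the tox-args option defined above; the environment name is illustrative:

    python setup.py test --tox-args="-e py27"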

synapse/__init__.py

@@ -16,4 +16,4 @@
 """ This is a reference implementation of a Matrix home server.
 """
 
-__version__ = "0.8.1-r4"
+__version__ = "0.11.0"

synapse/api/auth.py

@@ -14,23 +14,28 @@
 # limitations under the License.
 
 """This module contains classes for authenticating the user."""
+from canonicaljson import encode_canonical_json
+
+from signedjson.key import decode_verify_key_bytes
+from signedjson.sign import verify_signed_json, SignatureVerifyException
+
 from twisted.internet import defer
 
 from synapse.api.constants import EventTypes, Membership, JoinRules
-from synapse.api.errors import AuthError, Codes, SynapseError
+from synapse.api.errors import AuthError, Codes, SynapseError, EventSizeError
+from synapse.types import RoomID, UserID, EventID
 from synapse.util.logutils import log_function
-from synapse.util.async import run_on_reactor
-from synapse.types import UserID, ClientInfo
+from unpaddedbase64 import decode_base64
 
 import logging
+import pymacaroons
 
 logger = logging.getLogger(__name__)
 
 AuthEventTypes = (
     EventTypes.Create, EventTypes.Member, EventTypes.PowerLevels,
-    EventTypes.JoinRules,
+    EventTypes.JoinRules, EventTypes.RoomHistoryVisibility,
+    EventTypes.ThirdPartyInvite,
 )
@@ -41,13 +46,27 @@ class Auth(object):
         self.store = hs.get_datastore()
         self.state = hs.get_state_handler()
         self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
+        self._KNOWN_CAVEAT_PREFIXES = set([
+            "gen = ",
+            "guest = ",
+            "type = ",
+            "time < ",
+            "user_id = ",
+        ])
 
     def check(self, event, auth_events):
         """ Checks if this event is correctly authed.
 
+        Args:
+            event: the event being checked.
+            auth_events (dict: event-key -> event): the existing room state.
+
         Returns:
             True if the auth checks pass.
         """
+        self.check_size_limits(event)
+
         try:
             if not hasattr(event, "room_id"):
                 raise AuthError(500, "Event has no room_id: %s" % event)
@@ -61,11 +80,31 @@ class Auth(object):
                 # FIXME
                 return True
 
+            creation_event = auth_events.get((EventTypes.Create, ""), None)
+
+            if not creation_event:
+                raise SynapseError(
+                    403,
+                    "Room %r does not exist" % (event.room_id,)
+                )
+
+            creating_domain = RoomID.from_string(event.room_id).domain
+            originating_domain = UserID.from_string(event.sender).domain
+            if creating_domain != originating_domain:
+                if not self.can_federate(event, auth_events):
+                    raise AuthError(
+                        403,
+                        "This room has been marked as unfederatable."
+                    )
+
             # FIXME: Temp hack
             if event.type == EventTypes.Aliases:
                 return True
 
-            logger.debug("Auth events: %s", auth_events)
+            logger.debug(
+                "Auth events: %s",
+                [a.event_id for a in auth_events.values()]
+            )
 
             if event.type == EventTypes.Member:
                 allowed = self.is_membership_change_allowed(
@@ -84,7 +123,7 @@ class Auth(object):
                 self._check_power_levels(event, auth_events)
 
             if event.type == EventTypes.Redaction:
-                self._check_redaction(event, auth_events)
+                self.check_redaction(event, auth_events)
 
             logger.debug("Allowing! %s", event)
         except AuthError as e:
@@ -95,8 +134,39 @@ class Auth(object):
             logger.info("Denying! %s", event)
             raise
 
+    def check_size_limits(self, event):
+        def too_big(field):
+            raise EventSizeError("%s too large" % (field,))
+
+        if len(event.user_id) > 255:
+            too_big("user_id")
+        if len(event.room_id) > 255:
+            too_big("room_id")
+        if event.is_state() and len(event.state_key) > 255:
+            too_big("state_key")
+        if len(event.type) > 255:
+            too_big("type")
+        if len(event.event_id) > 255:
+            too_big("event_id")
+        if len(encode_canonical_json(event.get_pdu_json())) > 65536:
+            too_big("event")
+
     @defer.inlineCallbacks
     def check_joined_room(self, room_id, user_id, current_state=None):
+        """Check if the user is currently joined in the room
+
+        Args:
+            room_id(str): The room to check.
+            user_id(str): The user to check.
+            current_state(dict): Optional map of the current state of the room.
+                If provided then that map is used to check whether they are a
+                member of the room. Otherwise the current membership is
+                loaded from the database.
+        Raises:
+            AuthError if the user is not in the room.
+        Returns:
+            A deferred membership event for the user if the user is in
+            the room.
+        """
         if current_state:
             member = current_state.get(
                 (EventTypes.Member, user_id),
@@ -112,6 +182,33 @@ class Auth(object):
 
         self._check_joined_room(member, user_id, room_id)
         defer.returnValue(member)
 
+    @defer.inlineCallbacks
+    def check_user_was_in_room(self, room_id, user_id):
+        """Check if the user was in the room at some point.
+
+        Args:
+            room_id(str): The room to check.
+            user_id(str): The user to check.
+
+        Raises:
+            AuthError if the user was never in the room.
+
+        Returns:
+            A deferred membership event for the user if the user was in the
+            room. This will be the join event if they are currently joined to
+            the room. This will be the leave event if they have left the room.
+        """
+        member = yield self.state.get_current_state(
+            room_id=room_id,
+            event_type=EventTypes.Member,
+            state_key=user_id
+        )
+        membership = member.membership if member else None
+
+        if membership not in (Membership.JOIN, Membership.LEAVE):
+            raise AuthError(403, "User %s not in room %s" % (
+                user_id, room_id
+            ))
+
+        defer.returnValue(member)
+
     @defer.inlineCallbacks
     def check_host_in_room(self, room_id, host):
         curr_state = yield self.state.get_current_state(room_id)
@@ -146,6 +243,11 @@ class Auth(object):
                 user_id, room_id, repr(member)
             ))
 
+    def can_federate(self, event, auth_events):
+        creation_event = auth_events.get((EventTypes.Create, ""))
+
+        return creation_event.content.get("m.federate", True) is True
+
     @log_function
     def is_membership_change_allowed(self, event, auth_events):
         membership = event.content["membership"]
@@ -161,6 +263,15 @@ class Auth(object):
 
         target_user_id = event.state_key
 
+        creating_domain = RoomID.from_string(event.room_id).domain
+        target_domain = UserID.from_string(target_user_id).domain
+        if creating_domain != target_domain:
+            if not self.can_federate(event, auth_events):
+                raise AuthError(
+                    403,
+                    "This room has been marked as unfederatable."
+                )
+
         # get info about the caller
         key = (EventTypes.Member, event.user_id, )
         caller = auth_events.get(key)
@@ -185,6 +296,9 @@ class Auth(object):
             join_rule = JoinRules.INVITE
 
         user_level = self._get_user_power_level(event.user_id, auth_events)
+        target_level = self._get_user_power_level(
+            target_user_id, auth_events
+        )
 
         # FIXME (erikj): What should we do here as the default?
         ban_level = self._get_named_level(auth_events, "ban", 50)
@@ -203,8 +317,17 @@ class Auth(object):
                 }
             )
 
+        if Membership.INVITE == membership and "third_party_invite" in event.content:
+            if not self._verify_third_party_invite(event, auth_events):
+                raise AuthError(403, "You are not invited to this room.")
+            return True
+
         if Membership.JOIN != membership:
-            # JOIN is the only action you can perform if you're not in the room
+            if (caller_invited
+                    and Membership.LEAVE == membership
+                    and target_user_id == event.user_id):
+                return True
+
             if not caller_in_room:  # caller isn't joined
                 raise AuthError(
                     403,
@@ -256,18 +379,78 @@ class Auth(object):
         elif target_user_id != event.user_id:
             kick_level = self._get_named_level(auth_events, "kick", 50)
 
-            if user_level < kick_level:
+            if user_level < kick_level or user_level <= target_level:
                 raise AuthError(
                     403, "You cannot kick user %s." % target_user_id
                 )
         elif Membership.BAN == membership:
-            if user_level < ban_level:
+            if user_level < ban_level or user_level <= target_level:
                 raise AuthError(403, "You don't have permission to ban")
         else:
             raise AuthError(500, "Unknown membership %s" % membership)
 
         return True
 
+    def _verify_third_party_invite(self, event, auth_events):
+        """
+        Validates that the invite event is authorized by a previous third-party invite.
+
+        Checks that the public key, and keyserver, match those in the third party invite,
+        and that the invite event has a signature issued using that public key.
+
+        Args:
+            event: The m.room.member join event being validated.
+            auth_events: All relevant previous context events which may be used
+                for authorization decisions.
+
+        Return:
+            True if the event fulfills the expectations of a previous third party
+            invite event.
+        """
+        if "third_party_invite" not in event.content:
+            return False
+        if "signed" not in event.content["third_party_invite"]:
+            return False
+        signed = event.content["third_party_invite"]["signed"]
+        for key in {"mxid", "token"}:
+            if key not in signed:
+                return False
+
+        token = signed["token"]
+
+        invite_event = auth_events.get(
+            (EventTypes.ThirdPartyInvite, token,)
+        )
+        if not invite_event:
+            return False
+
+        if event.user_id != invite_event.user_id:
+            return False
+        try:
+            public_key = invite_event.content["public_key"]
+            if signed["mxid"] != event.state_key:
+                return False
+            if signed["token"] != token:
+                return False
+            for server, signature_block in signed["signatures"].items():
+                for key_name, encoded_signature in signature_block.items():
+                    if not key_name.startswith("ed25519:"):
+                        return False
+                    verify_key = decode_verify_key_bytes(
+                        key_name,
+                        decode_base64(public_key)
+                    )
+                    verify_signed_json(signed, server, verify_key)
+
+                    # We got the public key from the invite, so we know that the
+                    # correct server signed the signed bundle.
+                    # The caller is responsible for checking that the signing
+                    # server has not revoked that public key.
+                    return True
+            return False
+        except (KeyError, SignatureVerifyException,):
+            return False
+
     def _get_power_level_event(self, auth_events):
         key = (EventTypes.PowerLevels, "", )
         return auth_events.get(key)
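The verification loop above expects a bundle produced by signedjson's sign_json, with the signing server's public key advertised (unpadded base64) in the third-party invite event. A minimal sketch of the round trip the verifier expects; the server name, key version, and bundle contents are illustrative:

    from signedjson.key import generate_signing_key, get_verify_key
    from signedjson.sign import sign_json, verify_signed_json

    signing_key = generate_signing_key("0")  # illustrative key version

    signed = sign_json(
        {"mxid": "@alice:example.com", "token": "sometoken"},
        "example.com",  # the identity server's name
        signing_key,
    )

    # The auth code reconstructs the verify key from the public_key field of
    # the m.room.third_party_invite event; here we take it straight from the
    # signing key. A bad signature raises SignatureVerifyException.
    verify_signed_json(signed, "example.com", get_verify_key(signing_key))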
@@ -306,15 +489,15 @@ class Auth(object):
         return default
 
     @defer.inlineCallbacks
-    def get_user_by_req(self, request):
+    def get_user_by_req(self, request, allow_guest=False):
         """ Get a registered user's ID.
 
         Args:
             request - An HTTP request with an access_token query parameter.
         Returns:
-            tuple : of UserID and device string:
-                User ID object of the user making the request
-                Client ID object of the client instance the user is using
+            tuple of:
+                UserID (str)
+                Access token ID (str)
         Raises:
             AuthError if no user by that token exists or the token is invalid.
         """
@@ -342,17 +525,17 @@ class Auth(object):
                 if not user_id:
                     raise KeyError
 
-                defer.returnValue(
-                    (UserID.from_string(user_id), ClientInfo("", ""))
-                )
+                request.authenticated_entity = user_id
+
+                defer.returnValue((UserID.from_string(user_id), "", False))
                 return
             except KeyError:
-                pass  # normal users won't have this query parameter set
+                pass  # normal users won't have the user_id query parameter set.
 
-            user_info = yield self.get_user_by_token(access_token)
+            user_info = yield self._get_user_by_access_token(access_token)
             user = user_info["user"]
-            device_id = user_info["device_id"]
             token_id = user_info["token_id"]
+            is_guest = user_info["is_guest"]
 
             ip_addr = self.hs.get_ip_from_request(request)
             user_agent = request.requestHeaders.getRawHeaders(
@@ -360,15 +543,21 @@ class Auth(object):
                 default=[""]
            )[0]
             if user and access_token and ip_addr:
-                yield self.store.insert_client_ip(
+                self.store.insert_client_ip(
                     user=user,
                     access_token=access_token,
-                    device_id=user_info["device_id"],
                     ip=ip_addr,
                     user_agent=user_agent
                 )
 
-            defer.returnValue((user, ClientInfo(device_id, token_id)))
+            if is_guest and not allow_guest:
+                raise AuthError(
+                    403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
+                )
+
+            request.authenticated_entity = user.to_string()
+
+            defer.returnValue((user, token_id, is_guest,))
         except KeyError:
             raise AuthError(
                 self.TOKEN_NOT_FOUND_HTTP_STATUS, "Missing access token.",
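Callers now unpack a three-tuple and must opt in to guest access explicitly. A sketch of how a REST servlet might consume the new signature; the handler shown is illustrative, not code from this changeset:

    @defer.inlineCallbacks
    def on_GET(self, request):
        # allow_guest=True admits tokens carrying the "guest = true" caveat;
        # without it, a guest token is rejected with M_GUEST_ACCESS_FORBIDDEN.
        user, token_id, is_guest = yield self.auth.get_user_by_req(
            request, allow_guest=True
        )
        defer.returnValue((200, {"user_id": user.to_string()}))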
@@ -376,30 +565,124 @@ class Auth(object):
             )
 
     @defer.inlineCallbacks
-    def get_user_by_token(self, token):
+    def _get_user_by_access_token(self, token):
         """ Get a registered user's ID.
 
         Args:
             token (str): The access token to get the user by.
         Returns:
-            dict : dict that includes the user, device_id, and whether the
-                user is a server admin.
+            dict : dict that includes the user and the ID of their access token.
         Raises:
             AuthError if no user by that token exists or the token is invalid.
         """
-        ret = yield self.store.get_user_by_token(token)
+        try:
+            ret = yield self._get_user_from_macaroon(token)
+        except AuthError:
+            # TODO(daniel): Remove this fallback when all existing access tokens
+            # have been re-issued as macaroons.
+            ret = yield self._look_up_user_by_access_token(token)
+        defer.returnValue(ret)
+
+    @defer.inlineCallbacks
+    def _get_user_from_macaroon(self, macaroon_str):
+        try:
+            macaroon = pymacaroons.Macaroon.deserialize(macaroon_str)
+            self.validate_macaroon(
+                macaroon, "access",
+                [lambda c: c.startswith("time < ")]
+            )
+
+            user_prefix = "user_id = "
+            user = None
+            guest = False
+            for caveat in macaroon.caveats:
+                if caveat.caveat_id.startswith(user_prefix):
+                    user = UserID.from_string(caveat.caveat_id[len(user_prefix):])
+                elif caveat.caveat_id == "guest = true":
+                    guest = True
+
+            if user is None:
+                raise AuthError(
+                    self.TOKEN_NOT_FOUND_HTTP_STATUS, "No user caveat in macaroon",
+                    errcode=Codes.UNKNOWN_TOKEN
+                )
+
+            if guest:
+                ret = {
+                    "user": user,
+                    "is_guest": True,
+                    "token_id": None,
+                }
+            else:
+                # This codepath exists so that we can actually return a
+                # token ID, because we use token IDs in place of device
+                # identifiers throughout the codebase.
+                # TODO(daniel): Remove this fallback when device IDs are
+                # properly implemented.
+                ret = yield self._look_up_user_by_access_token(macaroon_str)
+                if ret["user"] != user:
+                    logger.error(
+                        "Macaroon user (%s) != DB user (%s)",
+                        user,
+                        ret["user"]
+                    )
+                    raise AuthError(
+                        self.TOKEN_NOT_FOUND_HTTP_STATUS,
+                        "User mismatch in macaroon",
+                        errcode=Codes.UNKNOWN_TOKEN
+                    )
+            defer.returnValue(ret)
+        except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
+            raise AuthError(
+                self.TOKEN_NOT_FOUND_HTTP_STATUS, "Invalid macaroon passed.",
+                errcode=Codes.UNKNOWN_TOKEN
+            )
+
+    def validate_macaroon(self, macaroon, type_string, additional_validation_functions):
+        v = pymacaroons.Verifier()
+        v.satisfy_exact("gen = 1")
+        v.satisfy_exact("type = " + type_string)
+        v.satisfy_general(lambda c: c.startswith("user_id = "))
+        v.satisfy_exact("guest = true")
+        for validation_function in additional_validation_functions:
+            v.satisfy_general(validation_function)
+        v.verify(macaroon, self.hs.config.macaroon_secret_key)
+
+        v = pymacaroons.Verifier()
+        v.satisfy_general(self._verify_recognizes_caveats)
+        v.verify(macaroon, self.hs.config.macaroon_secret_key)
+
+    def verify_expiry(self, caveat):
+        prefix = "time < "
+        if not caveat.startswith(prefix):
+            return False
+        expiry = int(caveat[len(prefix):])
+        now = self.hs.get_clock().time_msec()
+        return now < expiry
+
+    def _verify_recognizes_caveats(self, caveat):
+        first_space = caveat.find(" ")
+        if first_space < 0:
+            return False
+        second_space = caveat.find(" ", first_space + 1)
+        if second_space < 0:
+            return False
+        return caveat[:second_space + 1] in self._KNOWN_CAVEAT_PREFIXES
+
+    @defer.inlineCallbacks
+    def _look_up_user_by_access_token(self, token):
+        ret = yield self.store.get_user_by_access_token(token)
         if not ret:
             raise AuthError(
                 self.TOKEN_NOT_FOUND_HTTP_STATUS, "Unrecognised access token.",
                 errcode=Codes.UNKNOWN_TOKEN
             )
         user_info = {
-            "admin": bool(ret.get("admin", False)),
-            "device_id": ret.get("device_id"),
             "user": UserID.from_string(ret.get("name")),
             "token_id": ret.get("token_id", None),
+            "is_guest": False,
         }
         defer.returnValue(user_info)
 
     @defer.inlineCallbacks
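The caveats checked by validate_macaroon mirror the ones Synapse mints for access tokens. A sketch of constructing a token that would pass the "access" validation above, assuming pymacaroons; the location, secret, and expiry values are illustrative:

    import time
    import pymacaroons

    macaroon = pymacaroons.Macaroon(
        location="example.com",      # illustrative server name
        identifier="key",
        key="macaroon_secret_key",   # must match hs.config.macaroon_secret_key
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("type = access")
    macaroon.add_first_party_caveat("user_id = @alice:example.com")
    # Expiry is expressed in milliseconds, matching verify_expiry() above.
    expiry_ms = int(time.time() * 1000) + 60 * 60 * 1000
    macaroon.add_first_party_caveat("time < %d" % (expiry_ms,))

    token = macaroon.serialize()

Adding "guest = true" as a further first-party caveat would yield a guest token: the verifier satisfies that caveat exactly, and _get_user_from_macaroon then reports is_guest accordingly.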
@@ -413,6 +696,7 @@ class Auth(object):
                 "Unrecognised access token.",
                 errcode=Codes.UNKNOWN_TOKEN
             )
+            request.authenticated_entity = service.sender
             defer.returnValue(service)
         except KeyError:
             raise AuthError(
@@ -424,8 +708,6 @@ class Auth(object):
 
     @defer.inlineCallbacks
     def add_auth_events(self, builder, context):
-        yield run_on_reactor()
-
         auth_ids = self.compute_auth_events(builder, context.current_state)
 
         auth_events_entries = yield self.store.add_event_hashes(
@@ -475,6 +757,16 @@ class Auth(object):
         else:
             if member_event:
                 auth_ids.append(member_event.event_id)
+
+            if e_type == Membership.INVITE:
+                if "third_party_invite" in event.content:
+                    key = (
+                        EventTypes.ThirdPartyInvite,
+                        event.content["third_party_invite"]["token"]
+                    )
+                    third_party_invite = current_state.get(key)
+                    if third_party_invite:
+                        auth_ids.append(third_party_invite.event_id)
         elif member_event:
             if member_event.content["membership"] == Membership.JOIN:
                 auth_ids.append(member_event.event_id)
@@ -516,36 +808,54 @@ class Auth(object):
 
         # Check state_key
         if hasattr(event, "state_key"):
-            if not event.state_key.startswith("_"):
-                if event.state_key.startswith("@"):
-                    if event.state_key != event.user_id:
-                        raise AuthError(
-                            403,
-                            "You are not allowed to set others state"
-                        )
-                    else:
-                        sender_domain = UserID.from_string(
-                            event.user_id
-                        ).domain
-
-                        if sender_domain != event.state_key:
-                            raise AuthError(
-                                403,
-                                "You are not allowed to set others state"
-                            )
+            if event.state_key.startswith("@"):
+                if event.state_key != event.user_id:
+                    raise AuthError(
+                        403,
+                        "You are not allowed to set others state"
+                    )
+                else:
+                    sender_domain = UserID.from_string(
+                        event.user_id
+                    ).domain
+
+                    if sender_domain != event.state_key:
+                        raise AuthError(
+                            403,
+                            "You are not allowed to set others state"
+                        )
 
         return True
 
-    def _check_redaction(self, event, auth_events):
+    def check_redaction(self, event, auth_events):
+        """Check whether the event sender is allowed to redact the target event.
+
+        Returns:
+            True if the the sender is allowed to redact the target event if the
+            target event was created by them.
+            False if the sender is allowed to redact the target event with no
+            further checks.
+
+        Raises:
+            AuthError if the event sender is definitely not allowed to redact
+            the target event.
+        """
         user_level = self._get_user_power_level(event.user_id, auth_events)
 
         redact_level = self._get_named_level(auth_events, "redact", 50)
 
-        if user_level < redact_level:
-            raise AuthError(
-                403,
-                "You don't have permission to redact events"
-            )
+        if user_level > redact_level:
+            return False
+
+        redacter_domain = EventID.from_string(event.event_id).domain
+        redactee_domain = EventID.from_string(event.redacts).domain
+        if redacter_domain == redactee_domain:
+            return True
+
+        raise AuthError(
+            403,
+            "You don't have permission to redact events"
+        )
def _check_power_levels(self, event, auth_events): def _check_power_levels(self, event, auth_events):
user_list = event.content.get("users", {}) user_list = event.content.get("users", {})
@@ -571,25 +881,26 @@ class Auth(object):
# Check other levels: # Check other levels:
levels_to_check = [ levels_to_check = [
("users_default", []), ("users_default", None),
("events_default", []), ("events_default", None),
("ban", []), ("state_default", None),
("redact", []), ("ban", None),
("kick", []), ("redact", None),
("invite", []), ("kick", None),
("invite", None),
] ]
old_list = current_state.content.get("users") old_list = current_state.content.get("users")
for user in set(old_list.keys() + user_list.keys()): for user in set(old_list.keys() + user_list.keys()):
levels_to_check.append( levels_to_check.append(
(user, ["users"]) (user, "users")
) )
old_list = current_state.content.get("events") old_list = current_state.content.get("events")
new_list = event.content.get("events") new_list = event.content.get("events")
for ev_id in set(old_list.keys() + new_list.keys()): for ev_id in set(old_list.keys() + new_list.keys()):
levels_to_check.append( levels_to_check.append(
(ev_id, ["events"]) (ev_id, "events")
) )
old_state = current_state.content old_state = current_state.content
@@ -597,12 +908,10 @@ class Auth(object):
 
         for level_to_check, dir in levels_to_check:
             old_loc = old_state
-            for d in dir:
-                old_loc = old_loc.get(d, {})
-
             new_loc = new_state
-            for d in dir:
-                new_loc = new_loc.get(d, {})
+            if dir:
+                old_loc = old_loc.get(dir, {})
+                new_loc = new_loc.get(dir, {})
 
             if level_to_check in old_loc:
                 old_level = int(old_loc[level_to_check])
@@ -618,6 +927,14 @@ class Auth(object):
             if new_level == old_level:
                 continue
 
+            if dir == "users" and level_to_check != event.user_id:
+                if old_level == user_level:
+                    raise AuthError(
+                        403,
+                        "You don't have permission to remove ops level equal "
+                        "to your own"
+                    )
+
             if old_level > user_level or new_level > user_level:
                 raise AuthError(
                     403,
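The new target_level comparisons mean a kick or ban now requires strictly outranking the target, not just clearing the named level. For example, with power-level content like the sketch below, a moderator at 50 can kick a default user at 0 but can no longer kick or ban the other moderator at 50:

    power_levels_content = {
        "ban": 50,
        "kick": 50,
        "users": {
            "@admin:example.com": 100,
            "@mod1:example.com": 50,
            "@mod2:example.com": 50,
        },
        "users_default": 0,
    }
    # For @mod1 kicking @mod2: user_level (50) >= kick_level (50), but
    # user_level <= target_level (50), so the kick is now rejected.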

synapse/api/constants.py

@@ -27,16 +27,6 @@ class Membership(object):
     LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
 
-class Feedback(object):
-    """Represents the types of feedback a user can send in response to a
-    message."""
-
-    DELIVERED = u"delivered"
-    READ = u"read"
-    LIST = (DELIVERED, READ)
-
 class PresenceState(object):
     """Represents the presence state of a user."""
     OFFLINE = u"offline"
@@ -73,7 +63,12 @@ class EventTypes(object):
     PowerLevels = "m.room.power_levels"
     Aliases = "m.room.aliases"
     Redaction = "m.room.redaction"
-    Feedback = "m.room.message.feedback"
+    ThirdPartyInvite = "m.room.third_party_invite"
+
+    RoomHistoryVisibility = "m.room.history_visibility"
+    CanonicalAlias = "m.room.canonical_alias"
+    RoomAvatar = "m.room.avatar"
+    GuestAccess = "m.room.guest_access"
 
     # These are used for validation
     Message = "m.room.message"
@@ -85,3 +80,9 @@ class RejectedReason(object):
     AUTH_ERROR = "auth_error"
     REPLACED = "replaced"
     NOT_ANCESTOR = "not_ancestor"
+
+
+class RoomCreationPreset(object):
+    PRIVATE_CHAT = "private_chat"
+    PUBLIC_CHAT = "public_chat"
+    TRUSTED_PRIVATE_CHAT = "trusted_private_chat"

synapse/api/errors.py

@@ -33,6 +33,7 @@ class Codes(object):
     NOT_FOUND = "M_NOT_FOUND"
     MISSING_TOKEN = "M_MISSING_TOKEN"
     UNKNOWN_TOKEN = "M_UNKNOWN_TOKEN"
+    GUEST_ACCESS_FORBIDDEN = "M_GUEST_ACCESS_FORBIDDEN"
     LIMIT_EXCEEDED = "M_LIMIT_EXCEEDED"
     CAPTCHA_NEEDED = "M_CAPTCHA_NEEDED"
     CAPTCHA_INVALID = "M_CAPTCHA_INVALID"
@@ -40,13 +41,13 @@ class Codes(object):
     TOO_LARGE = "M_TOO_LARGE"
     EXCLUSIVE = "M_EXCLUSIVE"
     THREEPID_AUTH_FAILED = "M_THREEPID_AUTH_FAILED"
+    THREEPID_IN_USE = "THREEPID_IN_USE"
 
 class CodeMessageException(RuntimeError):
     """An exception with integer code and message string attributes."""
 
     def __init__(self, code, msg):
-        logger.info("%s: %s, %s", type(self).__name__, code, msg)
         super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
         self.code = code
         self.msg = msg
@@ -76,11 +77,6 @@ class SynapseError(CodeMessageException):
         )
 
-class RoomError(SynapseError):
-    """An error raised when a room event fails."""
-    pass
-
 class RegistrationError(SynapseError):
     """An error raised when a registration event fails."""
     pass
@@ -124,6 +120,15 @@ class AuthError(SynapseError):
         super(AuthError, self).__init__(*args, **kwargs)
 
+class EventSizeError(SynapseError):
+    """An error raised when an event is too big."""
+
+    def __init__(self, *args, **kwargs):
+        if "errcode" not in kwargs:
+            kwargs["errcode"] = Codes.TOO_LARGE
+        super(EventSizeError, self).__init__(413, *args, **kwargs)
+
 class EventStreamError(SynapseError):
     """An error raised when there a problem with the event stream."""
     def __init__(self, *args, **kwargs):
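A sketch of the new exception in use, assuming SynapseError keeps the errcode it is given as an attribute (as elsewhere in this module): the 413 status is fixed by the constructor, and errcode defaults to M_TOO_LARGE unless the caller overrides it.

    try:
        raise EventSizeError("event_id too large")
    except EventSizeError as e:
        assert e.code == 413
        assert e.errcode == Codes.TOO_LARGE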

synapse/api/filtering.py

@@ -24,7 +24,7 @@ class Filtering(object):
     def get_user_filter(self, user_localpart, filter_id):
         result = self.store.get_user_filter(user_localpart, filter_id)
-        result.addCallback(Filter)
+        result.addCallback(FilterCollection)
         return result
 
     def add_user_filter(self, user_localpart, user_filter):
@@ -50,11 +50,11 @@ class Filtering(object):
         # many definitions.
 
         top_level_definitions = [
-            "public_user_data", "private_user_data", "server_data"
+            "presence"
         ]
 
         room_level_definitions = [
-            "state", "events", "ephemeral"
+            "state", "timeline", "ephemeral", "private_user_data"
         ]
 
         for key in top_level_definitions:
@@ -114,116 +114,134 @@ class Filtering(object):
         if not isinstance(event_type, basestring):
             raise SynapseError(400, "Event type should be a string")
 
-        if "format" in definition:
-            event_format = definition["format"]
-            if event_format not in ["federation", "events"]:
-                raise SynapseError(400, "Invalid format: %s" % (event_format,))
-
-        if "select" in definition:
-            event_select_list = definition["select"]
-            for select_key in event_select_list:
-                if select_key not in ["event_id", "origin_server_ts",
-                                      "thread_id", "content", "content.body"]:
-                    raise SynapseError(400, "Bad select: %s" % (select_key,))
-
-        if ("bundle_updates" in definition and
-                type(definition["bundle_updates"]) != bool):
-            raise SynapseError(400, "Bad bundle_updates: expected bool.")
+
+class FilterCollection(object):
+    def __init__(self, filter_json):
+        self.filter_json = filter_json
+
+        self.room_timeline_filter = Filter(
+            self.filter_json.get("room", {}).get("timeline", {})
+        )
+
+        self.room_state_filter = Filter(
+            self.filter_json.get("room", {}).get("state", {})
+        )
+
+        self.room_ephemeral_filter = Filter(
+            self.filter_json.get("room", {}).get("ephemeral", {})
+        )
+
+        self.room_private_user_data = Filter(
+            self.filter_json.get("room", {}).get("private_user_data", {})
+        )
+
+        self.presence_filter = Filter(
+            self.filter_json.get("presence", {})
+        )
+
+    def timeline_limit(self):
+        return self.room_timeline_filter.limit()
+
+    def presence_limit(self):
+        return self.presence_filter.limit()
+
+    def ephemeral_limit(self):
+        return self.room_ephemeral_filter.limit()
+
+    def filter_presence(self, events):
+        return self.presence_filter.filter(events)
+
+    def filter_room_state(self, events):
+        return self.room_state_filter.filter(events)
+
+    def filter_room_timeline(self, events):
+        return self.room_timeline_filter.filter(events)
+
+    def filter_room_ephemeral(self, events):
+        return self.room_ephemeral_filter.filter(events)
+
+    def filter_room_private_user_data(self, events):
+        return self.room_private_user_data.filter(events)
 
 
 class Filter(object):
     def __init__(self, filter_json):
         self.filter_json = filter_json
 
-    def filter_public_user_data(self, events):
-        return self._filter_on_key(events, ["public_user_data"])
-
-    def filter_private_user_data(self, events):
-        return self._filter_on_key(events, ["private_user_data"])
-
-    def filter_room_state(self, events):
-        return self._filter_on_key(events, ["room", "state"])
-
-    def filter_room_events(self, events):
-        return self._filter_on_key(events, ["room", "events"])
-
-    def filter_room_ephemeral(self, events):
-        return self._filter_on_key(events, ["room", "ephemeral"])
-
-    def _filter_on_key(self, events, keys):
-        filter_json = self.filter_json
-        if not filter_json:
-            return events
-
-        try:
-            # extract the right definition from the filter
-            definition = filter_json
-            for key in keys:
-                definition = definition[key]
-            return self._filter_with_definition(events, definition)
-        except KeyError:
-            # return all events if definition isn't specified.
-            return events
-
-    def _filter_with_definition(self, events, definition):
-        return [e for e in events if self._passes_definition(definition, e)]
-
-    def _passes_definition(self, definition, event):
-        """Check if the event passes through the given definition.
+    def check(self, event):
+        """Checks whether the filter matches the given event.
 
-        Args:
-            definition(dict): The definition to check against.
-            event(Event): The event to check.
         Returns:
-            True if the event passes through the filter.
+            bool: True if the event matches
         """
-        # Algorithm notes:
-        # For each key in the definition, check the event meets the criteria:
-        #   * For types: Literal match or prefix match (if ends with wildcard)
-        #   * For senders/rooms: Literal match only
-        #   * "not_" checks take presedence (e.g. if "m.*" is in both 'types'
-        #     and 'not_types' then it is treated as only being in 'not_types')
+        if isinstance(event, dict):
+            return self.check_fields(
+                event.get("room_id", None),
+                event.get("sender", None),
+                event.get("type", None),
+            )
+        else:
+            return self.check_fields(
+                getattr(event, "room_id", None),
+                getattr(event, "sender", None),
+                event.type,
+            )
 
-        # room checks
-        if hasattr(event, "room_id"):
-            room_id = event.room_id
-            allow_rooms = definition.get("rooms", None)
-            reject_rooms = definition.get("not_rooms", None)
-            if reject_rooms and room_id in reject_rooms:
-                return False
-            if allow_rooms and room_id not in allow_rooms:
-                return False
-
-        # sender checks
-        if hasattr(event, "sender"):
-            # Should we be including event.state_key for some event types?
-            sender = event.sender
-            allow_senders = definition.get("senders", None)
-            reject_senders = definition.get("not_senders", None)
-            if reject_senders and sender in reject_senders:
-                return False
-            if allow_senders and sender not in allow_senders:
-                return False
-
-        # type checks
-        if "not_types" in definition:
-            for def_type in definition["not_types"]:
-                if self._event_matches_type(event, def_type):
-                    return False
-        if "types" in definition:
-            included = False
-            for def_type in definition["types"]:
-                if self._event_matches_type(event, def_type):
-                    included = True
-                    break
-            if not included:
-                return False
+    def check_fields(self, room_id, sender, event_type):
+        """Checks whether the filter matches the given event fields.
+
+        Returns:
+            bool: True if the event fields match
+        """
+        literal_keys = {
+            "rooms": lambda v: room_id == v,
+            "senders": lambda v: sender == v,
+            "types": lambda v: _matches_wildcard(event_type, v)
+        }
+
+        for name, match_func in literal_keys.items():
+            not_name = "not_%s" % (name,)
+            disallowed_values = self.filter_json.get(not_name, [])
+            if any(map(match_func, disallowed_values)):
+                return False
+
+            allowed_values = self.filter_json.get(name, None)
+            if allowed_values is not None:
+                if not any(map(match_func, allowed_values)):
+                    return False
 
         return True
 
-    def _event_matches_type(self, event, def_type):
-        if def_type.endswith("*"):
-            type_prefix = def_type[:-1]
-            return event.type.startswith(type_prefix)
-        else:
-            return event.type == def_type
+    def filter_rooms(self, room_ids):
+        """Apply the 'rooms' filter to a given list of rooms.
+
+        Args:
+            room_ids (list): A list of room_ids.
+
+        Returns:
+            list: A list of room_ids that match the filter
+        """
+        room_ids = set(room_ids)
+
+        disallowed_rooms = set(self.filter_json.get("not_rooms", []))
+        room_ids -= disallowed_rooms
+
+        allowed_rooms = self.filter_json.get("rooms", None)
+        if allowed_rooms is not None:
+            room_ids &= set(allowed_rooms)
+
+        return room_ids
+
+    def filter(self, events):
+        return filter(self.check, events)
+
+    def limit(self):
+        return self.filter_json.get("limit", 10)
+
+
+def _matches_wildcard(actual_value, filter_value):
+    if filter_value.endswith("*"):
+        type_prefix = filter_value[:-1]
+        return actual_value.startswith(type_prefix)
+    else:
+        return actual_value == filter_value
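A sketch of the matching semantics implemented by check_fields and _matches_wildcard: "not_" lists always win, and a trailing "*" in types does prefix matching. The filter contents are illustrative:

    filter_json = {
        "types": ["m.room.*"],           # prefix match via the trailing "*"
        "not_types": ["m.room.member"],  # "not_" checks take precedence
        "senders": ["@alice:example.com"],
    }
    f = Filter(filter_json)

    f.check_fields("!r:example.com", "@alice:example.com", "m.room.message")  # True
    f.check_fields("!r:example.com", "@alice:example.com", "m.room.member")   # False (not_types)
    f.check_fields("!r:example.com", "@bob:example.com", "m.room.message")    # False (senders)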

synapse/app/homeserver.py

@@ -16,26 +16,36 @@
 import sys
 sys.dont_write_bytecode = True
 
-from synapse.python_dependencies import check_requirements
+from synapse.python_dependencies import (
+    check_requirements, DEPENDENCY_LINKS, MissingRequirementError
+)
 
 if __name__ == '__main__':
-    check_requirements()
+    try:
+        check_requirements()
+    except MissingRequirementError as e:
+        message = "\n".join([
+            "Missing Requirement: %s" % (e.message,),
+            "To install run:",
+            "    pip install --upgrade --force \"%s\"" % (e.dependency,),
+            "",
+        ])
+        sys.stderr.writelines(message)
+        sys.exit(1)
 
 from synapse.storage.engines import create_engine, IncorrectDatabaseSetup
-from synapse.storage import (
-    are_all_users_on_domain, UpgradeDatabaseException,
-)
+from synapse.storage import are_all_users_on_domain
+from synapse.storage.prepare_database import UpgradeDatabaseException
 
 from synapse.server import HomeServer
 
-from twisted.internet import reactor
+from twisted.internet import reactor, task, defer
 from twisted.application import service
 from twisted.enterprise import adbapi
-from twisted.web.resource import Resource
+from twisted.web.resource import Resource, EncodingResourceWrapper
 from twisted.web.static import File
-from twisted.web.server import Site
-from twisted.web.http import proxiedLogFormatter, combinedLogFormatter
+from twisted.web.server import Site, GzipEncoderFactory, Request
 from synapse.http.server import JsonResource, RootRedirect
 from synapse.rest.media.v0.content_repository import ContentRepoResource
 from synapse.rest.media.v1.media_repository import MediaRepositoryResource
@@ -54,21 +64,29 @@ from synapse.rest.client.v1 import ClientV1RestResource
 from synapse.rest.client.v2_alpha import ClientV2AlphaRestResource
 from synapse.metrics.resource import MetricsResource, METRICS_PREFIX
 
+from synapse import events
+
 from daemonize import Daemonize
 import twisted.manhole.telnet
 
 import synapse
 
+import contextlib
 import logging
 import os
 import re
 import resource
 import subprocess
+import time
 
 logger = logging.getLogger("synapse.app.homeserver")
 
+
+def gz_wrap(r):
+    return EncodingResourceWrapper(r, [GzipEncoderFactory()])
+
 
 class SynapseHomeServer(HomeServer):
 
     def build_http_client(self):
@@ -84,17 +102,43 @@ class SynapseHomeServer(HomeServer):
         return JsonResource(self)
 
     def build_resource_for_web_client(self):
-        import syweb
-        syweb_path = os.path.dirname(syweb.__file__)
-        webclient_path = os.path.join(syweb_path, "webclient")
+        webclient_path = self.get_config().web_client_location
+        if not webclient_path:
+            try:
+                import syweb
+            except ImportError:
+                quit_with_error(
+                    "Could not find a webclient.\n\n"
+                    "Please either install the matrix-angular-sdk or configure\n"
+                    "the location of the source to serve via the configuration\n"
+                    "option `web_client_location`\n\n"
+                    "To install the `matrix-angular-sdk` via pip, run:\n\n"
+                    "    pip install '%(dep)s'\n"
+                    "\n"
+                    "You can also disable hosting of the webclient via the\n"
+                    "configuration option `web_client`\n"
+                    % {"dep": DEPENDENCY_LINKS["matrix-angular-sdk"]}
+                )
+            syweb_path = os.path.dirname(syweb.__file__)
+            webclient_path = os.path.join(syweb_path, "webclient")
+        # GZip is disabled here due to
+        # https://twistedmatrix.com/trac/ticket/7678
+        # (It can stay enabled for the API resources: they call
+        # write() with the whole body and then finish() straight
+        # after and so do not trigger the bug.
+        # GzipFile was removed in commit 184ba09
+        # return GzipFile(webclient_path)  # TODO configurable?
         return File(webclient_path)  # TODO configurable?
 
     def build_resource_for_static_content(self):
-        return File("static")
+        # This is old and should go away: not going to bother adding gzip
+        return File(
+            os.path.join(os.path.dirname(synapse.__file__), "static")
+        )
 
     def build_resource_for_content_repo(self):
         return ContentRepoResource(
-            self, self.upload_dir, self.auth, self.content_addr
+            self, self.config.uploads_path, self.auth, self.content_addr
         )
 
     def build_resource_for_media_repository(self):
@@ -120,149 +164,105 @@ class SynapseHomeServer(HomeServer):
             **self.db_config.get("args", {})
         )
 
-    def create_resource_tree(self, redirect_root_to_web_client):
-        """Create the resource tree for this Home Server.
-
-        This in unduly complicated because Twisted does not support putting
-        child resources more than 1 level deep at a time.
-
-        Args:
-            web_client (bool): True to enable the web client.
-            redirect_root_to_web_client (bool): True to redirect '/' to the
-                location of the web client. This does nothing if web_client is not
-                True.
-        """
-        config = self.get_config()
-        web_client = config.web_client
-
-        # list containing (path_str, Resource) e.g:
-        # [ ("/aaa/bbb/cc", Resource1), ("/aaa/dummy", Resource2) ]
-        desired_tree = [
-            (CLIENT_PREFIX, self.get_resource_for_client()),
-            (CLIENT_V2_ALPHA_PREFIX, self.get_resource_for_client_v2_alpha()),
-            (FEDERATION_PREFIX, self.get_resource_for_federation()),
-            (CONTENT_REPO_PREFIX, self.get_resource_for_content_repo()),
-            (SERVER_KEY_PREFIX, self.get_resource_for_server_key()),
-            (SERVER_KEY_V2_PREFIX, self.get_resource_for_server_key_v2()),
-            (MEDIA_PREFIX, self.get_resource_for_media_repository()),
-            (STATIC_PREFIX, self.get_resource_for_static_content()),
-        ]
-
-        if web_client:
-            logger.info("Adding the web client.")
-            desired_tree.append((WEB_CLIENT_PREFIX,
-                                 self.get_resource_for_web_client()))
-
-        if web_client and redirect_root_to_web_client:
-            self.root_resource = RootRedirect(WEB_CLIENT_PREFIX)
-        else:
-            self.root_resource = Resource()
-
-        metrics_resource = self.get_resource_for_metrics()
-        if config.metrics_port is None and metrics_resource is not None:
-            desired_tree.append((METRICS_PREFIX, metrics_resource))
-
-        # ideally we'd just use getChild and putChild but getChild doesn't work
-        # unless you give it a Request object IN ADDITION to the name :/ So
-        # instead, we'll store a copy of this mapping so we can actually add
-        # extra resources to existing nodes. See self._resource_id for the key.
-        resource_mappings = {}
-        for full_path, res in desired_tree:
-            logger.info("Attaching %s to path %s", res, full_path)
-            last_resource = self.root_resource
-            for path_seg in full_path.split('/')[1:-1]:
-                if path_seg not in last_resource.listNames():
-                    # resource doesn't exist, so make a "dummy resource"
-                    child_resource = Resource()
-                    last_resource.putChild(path_seg, child_resource)
-                    res_id = self._resource_id(last_resource, path_seg)
-                    resource_mappings[res_id] = child_resource
-                    last_resource = child_resource
-                else:
-                    # we have an existing Resource, use that instead.
-                    res_id = self._resource_id(last_resource, path_seg)
-                    last_resource = resource_mappings[res_id]
-
-            # ===========================
-            # now attach the actual desired resource
-            last_path_seg = full_path.split('/')[-1]
-
-            # if there is already a resource here, thieve its children and
-            # replace it
-            res_id = self._resource_id(last_resource, last_path_seg)
-            if res_id in resource_mappings:
-                # there is a dummy resource at this path already, which needs
-                # to be replaced with the desired resource.
-                existing_dummy_resource = resource_mappings[res_id]
-                for child_name in existing_dummy_resource.listNames():
-                    child_res_id = self._resource_id(existing_dummy_resource,
-                                                     child_name)
-                    child_resource = resource_mappings[child_res_id]
-                    # steal the children
-                    res.putChild(child_name, child_resource)
-
-            # finally, insert the desired resource in the right place
-            last_resource.putChild(last_path_seg, res)
-
-            res_id = self._resource_id(last_resource, last_path_seg)
-            resource_mappings[res_id] = res
-
-        return self.root_resource
-
-    def _resource_id(self, resource, path_seg):
-        """Construct an arbitrary resource ID so you can retrieve the mapping
-        later.
-
-        If you want to represent resource A putChild resource B with path C,
-        the mapping should looks like _resource_id(A,C) = B.
-
-        Args:
-            resource (Resource): The *parent* Resource
-            path_seg (str): The name of the child Resource to be attached.
-        Returns:
-            str: A unique string which can be a key to the child Resource.
-        """
-        return "%s-%s" % (resource, path_seg)
+    def _listener_http(self, config, listener_config):
+        port = listener_config["port"]
+        bind_address = listener_config.get("bind_address", "")
+        tls = listener_config.get("tls", False)
+        site_tag = listener_config.get("tag", port)
+
+        if tls and config.no_tls:
+            return
+
+        metrics_resource = self.get_resource_for_metrics()
+
+        resources = {}
+        for res in listener_config["resources"]:
+            for name in res["names"]:
+                if name == "client":
+                    if res["compress"]:
+                        client_v1 = gz_wrap(self.get_resource_for_client())
+                        client_v2 = gz_wrap(self.get_resource_for_client_v2_alpha())
+                    else:
+                        client_v1 = self.get_resource_for_client()
+                        client_v2 = self.get_resource_for_client_v2_alpha()
+
+                    resources.update({
+                        CLIENT_PREFIX: client_v1,
+                        CLIENT_V2_ALPHA_PREFIX: client_v2,
+                    })
+
+                if name == "federation":
+                    resources.update({
+                        FEDERATION_PREFIX: self.get_resource_for_federation(),
+                    })
+
+                if name in ["static", "client"]:
+                    resources.update({
+                        STATIC_PREFIX: self.get_resource_for_static_content(),
+                    })
+
+                if name in ["media", "federation", "client"]:
+                    resources.update({
+                        MEDIA_PREFIX: self.get_resource_for_media_repository(),
+                        CONTENT_REPO_PREFIX: self.get_resource_for_content_repo(),
+                    })
+
+                if name in ["keys", "federation"]:
+                    resources.update({
+                        SERVER_KEY_PREFIX: self.get_resource_for_server_key(),
+                        SERVER_KEY_V2_PREFIX: self.get_resource_for_server_key_v2(),
+                    })
+
+                if name == "webclient":
+                    resources[WEB_CLIENT_PREFIX] = self.get_resource_for_web_client()
+
+                if name == "metrics" and metrics_resource:
+                    resources[METRICS_PREFIX] = metrics_resource
+
+        root_resource = create_resource_tree(resources)
+        if tls:
+            reactor.listenSSL(
+                port,
+                SynapseSite(
+                    "synapse.access.https.%s" % (site_tag,),
+                    site_tag,
+                    listener_config,
+                    root_resource,
+                ),
+                self.tls_server_context_factory,
+                interface=bind_address
+            )
+        else:
+            reactor.listenTCP(
+                port,
+                SynapseSite(
+                    "synapse.access.http.%s" % (site_tag,),
+                    site_tag,
+                    listener_config,
+                    root_resource,
+                ),
+                interface=bind_address
+            )
+        logger.info("Synapse now listening on port %d", port)
 
     def start_listening(self):
         config = self.get_config()
 
-        if not config.no_tls and config.bind_port is not None:
-            reactor.listenSSL(
-                config.bind_port,
-                SynapseSite(
-                    "synapse.access.https",
-                    config,
-                    self.root_resource,
-                ),
-                self.tls_context_factory,
-                interface=config.bind_host
-            )
-            logger.info("Synapse now listening on port %d", config.bind_port)
-
-        if config.unsecure_port is not None:
-            reactor.listenTCP(
-                config.unsecure_port,
-                SynapseSite(
-                    "synapse.access.http",
-                    config,
-                    self.root_resource,
-                ),
-                interface=config.bind_host
-            )
-            logger.info("Synapse now listening on port %d", config.unsecure_port)
-
-        metrics_resource = self.get_resource_for_metrics()
-        if metrics_resource and config.metrics_port is not None:
-            reactor.listenTCP(
-                config.metrics_port,
-                SynapseSite(
-                    "synapse.access.metrics",
-                    config,
-                    metrics_resource,
-                ),
-                interface="127.0.0.1",
-            )
-            logger.info("Metrics now running on 127.0.0.1 port %d", config.metrics_port)
+        for listener in config.listeners:
+            if listener["type"] == "http":
+                self._listener_http(config, listener)
+            elif listener["type"] == "manhole":
+                f = twisted.manhole.telnet.ShellFactory()
+                f.username = "matrix"
+                f.password = "rabbithole"
+                f.namespace['hs'] = self
+
+                reactor.listenTCP(
+                    listener["port"],
+                    f,
+                    interface=listener.get("bind_address", '127.0.0.1')
+                )
+            else:
+                logger.warn("Unrecognized listener type: %s", listener["type"])
 
     def run_startup_checks(self, db_conn, database_engine):
         all_users_native = are_all_users_on_domain(
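The listener_config dict consumed by _listener_http corresponds to one entry under listeners in the homeserver config. A sketch of such an entry, expressed as the Python structure the method receives; the port, tag, and resource grouping are illustrative:

    listener_config = {
        "port": 8448,
        "bind_address": "0.0.0.0",
        "tls": True,
        "tag": "https",  # used to name the per-listener access logger
        "resources": [
            {"names": ["client", "webclient"], "compress": True},
            {"names": ["federation", "keys", "media"], "compress": False},
        ],
    }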
@@ -283,11 +283,10 @@ class SynapseHomeServer(HomeServer):
 
 def quit_with_error(error_string):
     message_lines = error_string.split("\n")
-    line_length = max([len(l) for l in message_lines]) + 2
+    line_length = max([len(l) for l in message_lines if len(l) < 80]) + 2
     sys.stderr.write("*" * line_length + '\n')
     for line in message_lines:
-        if line.strip():
-            sys.stderr.write(" %s\n" % (line.strip(),))
+        sys.stderr.write(" %s\n" % (line.rstrip(),))
     sys.stderr.write("*" * line_length + '\n')
     sys.exit(1)
@@ -350,7 +349,7 @@ def get_version_string():
             )
         ).encode("ascii")
     except Exception as e:
-        logger.warn("Failed to check for git repository: %s", e)
+        logger.info("Failed to check for git repository: %s", e)
 
     return ("Synapse/%s" % (synapse.__version__,)).encode("ascii")
@@ -374,7 +373,6 @@ def setup(config_options):
     Args:
         config_options_options: The options passed to Synapse. Usually
             `sys.argv[1:]`.
-        should_run (bool): Whether to start the reactor.
 
     Returns:
         HomeServer
@@ -395,36 +393,24 @@ def setup(config_options):
     logger.info("Server hostname: %s", config.server_name)
     logger.info("Server version: %s", version_string)
 
-    if re.search(":[0-9]+$", config.server_name):
-        domain_with_port = config.server_name
-    else:
-        domain_with_port = "%s:%s" % (config.server_name, config.bind_port)
+    events.USE_FROZEN_DICTS = config.use_frozen_dicts
 
-    tls_context_factory = context_factory.ServerContextFactory(config)
+    tls_server_context_factory = context_factory.ServerContextFactory(config)
 
     database_engine = create_engine(config.database_config["name"])
     config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection
 
     hs = SynapseHomeServer(
         config.server_name,
-        domain_with_port=domain_with_port,
-        upload_dir=os.path.abspath("uploads"),
-        db_name=config.database_path,
         db_config=config.database_config,
-        tls_context_factory=tls_context_factory,
+        tls_server_context_factory=tls_server_context_factory,
         config=config,
         content_addr=config.content_addr,
         version_string=version_string,
         database_engine=database_engine,
     )
 
-    hs.create_resource_tree(
-        redirect_root_to_web_client=True,
-    )
-
-    db_name = hs.get_db_name()
-
-    logger.info("Preparing database: %s...", db_name)
+    logger.info("Preparing database: %s...", config.database_config['name'])
 
     try:
         db_conn = database_engine.module.connect(
@@ -446,20 +432,14 @@ def setup(config_options):
         )
         sys.exit(1)
 
-    logger.info("Database prepared in %s.", db_name)
-
-    if config.manhole:
-        f = twisted.manhole.telnet.ShellFactory()
-        f.username = "matrix"
-        f.password = "rabbithole"
-        f.namespace['hs'] = hs
-        reactor.listenTCP(config.manhole, f, interface='127.0.0.1')
+    logger.info("Database prepared in %s.", config.database_config['name'])
 
     hs.start_listening()
 
     hs.get_pusherpool().start()
     hs.get_state_handler().start_caching()
     hs.get_datastore().start_profiling()
+    hs.get_datastore().start_doing_background_updates()
     hs.get_replication_layer().start_get_pdu_cache()
 
     return hs
@@ -480,35 +460,264 @@ class SynapseService(service.Service):
         return self._port.stopListening()
 
 
+class SynapseRequest(Request):
+    def __init__(self, site, *args, **kw):
+        Request.__init__(self, *args, **kw)
+        self.site = site
+        self.authenticated_entity = None
+        self.start_time = 0
+
+    def __repr__(self):
+        # We overwrite this so that we don't log ``access_token``
+        return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
+            self.__class__.__name__,
+            id(self),
+            self.method,
+            self.get_redacted_uri(),
+            self.clientproto,
+            self.site.site_tag,
+        )
+
+    def get_redacted_uri(self):
+        return re.sub(
+            r'(\?.*access_token=)[^&]*(.*)$',
+            r'\1<redacted>\2',
+            self.uri
+        )
+
+    def get_user_agent(self):
+        return self.requestHeaders.getRawHeaders("User-Agent", [None])[-1]
+
+    def started_processing(self):
+        self.site.access_logger.info(
+            "%s - %s - Received request: %s %s",
+            self.getClientIP(),
+            self.site.site_tag,
+            self.method,
+            self.get_redacted_uri()
+        )
+        self.start_time = int(time.time() * 1000)
+
+    def finished_processing(self):
+        self.site.access_logger.info(
+            "%s - %s - {%s}"
+            " Processed request: %dms %sB %s \"%s %s %s\" \"%s\"",
+            self.getClientIP(),
+            self.site.site_tag,
+            self.authenticated_entity,
+            int(time.time() * 1000) - self.start_time,
+            self.sentLength,
+            self.code,
+            self.method,
+            self.get_redacted_uri(),
+            self.clientproto,
+            self.get_user_agent(),
+        )
+
+    @contextlib.contextmanager
+    def processing(self):
+        self.started_processing()
+        yield
+        self.finished_processing()
+
+
+class XForwardedForRequest(SynapseRequest):
+    def __init__(self, *args, **kw):
+        SynapseRequest.__init__(self, *args, **kw)
+
+    """
+    Add a layer on top of another request that only uses the value of an
+    X-Forwarded-For header as the result of C{getClientIP}.
+    """
+
+    def getClientIP(self):
+        """
+        @return: The client address (the first address) in the value of the
+            I{X-Forwarded-For header}. If the header is not present, return
+            C{b"-"}.
+        """
+        return self.requestHeaders.getRawHeaders(
+            b"x-forwarded-for", [b"-"])[0].split(b",")[0].strip()
+
+
+class SynapseRequestFactory(object):
+    def __init__(self, site, x_forwarded_for):
+        self.site = site
+        self.x_forwarded_for = x_forwarded_for
+
+    def __call__(self, *args, **kwargs):
+        if self.x_forwarded_for:
+            return XForwardedForRequest(self.site, *args, **kwargs)
+        else:
+            return SynapseRequest(self.site, *args, **kwargs)
+
+
 class SynapseSite(Site):
     """
     Subclass of a twisted http Site that does access logging with python's
     standard logging
     """
-    def __init__(self, logger_name, config, resource, *args, **kwargs):
+    def __init__(self, logger_name, site_tag, config, resource, *args, **kwargs):
         Site.__init__(self, resource, *args, **kwargs)
-        if config.captcha_ip_origin_is_x_forwarded:
-            self._log_formatter = proxiedLogFormatter
-        else:
-            self._log_formatter = combinedLogFormatter
+
+        self.site_tag = site_tag
+
+        proxied = config.get("x_forwarded", False)
+        self.requestFactory = SynapseRequestFactory(self, proxied)
         self.access_logger = logging.getLogger(logger_name)
 
     def log(self, request):
-        line = self._log_formatter(self._logDateTime, request)
-        self.access_logger.info(line)
+        pass
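The get_redacted_uri substitution keeps access tokens out of the access logs. A quick sketch of its effect; the request path is illustrative:

    import re

    uri = "/_matrix/client/v1/events?access_token=secret&from=s42"
    redacted = re.sub(
        r'(\?.*access_token=)[^&]*(.*)$',
        r'\1<redacted>\2',
        uri
    )
    # redacted == "/_matrix/client/v1/events?access_token=<redacted>&from=s42"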
def create_resource_tree(desired_tree, redirect_root_to_web_client=True):
    """Create the resource tree for this Home Server.

    This is unduly complicated because Twisted does not support putting
    child resources more than 1 level deep at a time.

    Args:
        desired_tree (dict): Mapping from paths to the Resource objects to
            serve at those paths.
        redirect_root_to_web_client (bool): True to redirect '/' to the
            location of the web client. This does nothing if the web client
            is not in the desired tree.
    """
    if redirect_root_to_web_client and WEB_CLIENT_PREFIX in desired_tree:
        root_resource = RootRedirect(WEB_CLIENT_PREFIX)
    else:
        root_resource = Resource()

    # ideally we'd just use getChild and putChild but getChild doesn't work
    # unless you give it a Request object IN ADDITION to the name :/ So
    # instead, we'll store a copy of this mapping so we can actually add
    # extra resources to existing nodes. See self._resource_id for the key.
    resource_mappings = {}
    for full_path, res in desired_tree.items():
        logger.info("Attaching %s to path %s", res, full_path)
        last_resource = root_resource
        for path_seg in full_path.split('/')[1:-1]:
            if path_seg not in last_resource.listNames():
                # resource doesn't exist, so make a "dummy resource"
                child_resource = Resource()
                last_resource.putChild(path_seg, child_resource)
                res_id = _resource_id(last_resource, path_seg)
                resource_mappings[res_id] = child_resource
                last_resource = child_resource
            else:
                # we have an existing Resource, use that instead.
                res_id = _resource_id(last_resource, path_seg)
                last_resource = resource_mappings[res_id]

        # ===========================
        # now attach the actual desired resource
        last_path_seg = full_path.split('/')[-1]

        # if there is already a resource here, thieve its children and
        # replace it
        res_id = _resource_id(last_resource, last_path_seg)
        if res_id in resource_mappings:
            # there is a dummy resource at this path already, which needs
            # to be replaced with the desired resource.
            existing_dummy_resource = resource_mappings[res_id]
            for child_name in existing_dummy_resource.listNames():
                child_res_id = _resource_id(
                    existing_dummy_resource, child_name
                )
                child_resource = resource_mappings[child_res_id]
                # steal the children
                res.putChild(child_name, child_resource)

        # finally, insert the desired resource in the right place
        last_resource.putChild(last_path_seg, res)
        res_id = _resource_id(last_resource, last_path_seg)
        resource_mappings[res_id] = res

    return root_resource


def _resource_id(resource, path_seg):
    """Construct an arbitrary resource ID so you can retrieve the mapping
    later.

    If you want to represent resource A putChild resource B with path C,
    the mapping should look like _resource_id(A,C) = B.

    Args:
        resource (Resource): The *parent* Resource
        path_seg (str): The name of the child Resource to be attached.
    Returns:
        str: A unique string which can be a key to the child Resource.
    """
    return "%s-%s" % (resource, path_seg)
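create_resource_tree is driven entirely by that path-to-Resource mapping. A
minimal sketch of a call, assuming the function and its module-level logger
are in scope (the paths and bare Resource objects are illustrative
placeholders, not Synapse's real resource map):

    from twisted.web.resource import Resource

    desired_tree = {
        "/_matrix/client": Resource(),      # placeholder client resource
        "/_matrix/federation": Resource(),  # placeholder federation resource
    }
    root = create_resource_tree(desired_tree, redirect_root_to_web_client=False)
    # "_matrix" becomes a shared dummy node with both children attached to it.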
def run(hs):
    PROFILE_SYNAPSE = False
    if PROFILE_SYNAPSE:
        def profile(func):
            from cProfile import Profile
            from threading import current_thread

            def profiled(*args, **kargs):
                profile = Profile()
                profile.enable()
                func(*args, **kargs)
                profile.disable()

                ident = current_thread().ident
                profile.dump_stats("/tmp/%s.%s.%i.pstat" % (
                    hs.hostname, func.__name__, ident
                ))

            return profiled

        from twisted.python.threadpool import ThreadPool
        ThreadPool._worker = profile(ThreadPool._worker)
        reactor.run = profile(reactor.run)

    start_time = hs.get_clock().time()

    @defer.inlineCallbacks
    def phone_stats_home():
        now = int(hs.get_clock().time())
        uptime = int(now - start_time)
        if uptime < 0:
            uptime = 0

        stats = {}
        stats["homeserver"] = hs.config.server_name
        stats["timestamp"] = now
        stats["uptime_seconds"] = uptime
        stats["total_users"] = yield hs.get_datastore().count_all_users()

        all_rooms = yield hs.get_datastore().get_rooms(False)
        stats["total_room_count"] = len(all_rooms)

        stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
        daily_messages = yield hs.get_datastore().count_daily_messages()
        if daily_messages is not None:
            stats["daily_messages"] = daily_messages

        logger.info("Reporting stats to matrix.org: %s" % (stats,))
        try:
            yield hs.get_simple_http_client().put_json(
                "https://matrix.org/report-usage-stats/push",
                stats
            )
        except Exception as e:
            logger.warn("Error reporting stats: %s", e)

    if hs.config.report_stats:
        phone_home_task = task.LoopingCall(phone_stats_home)
        phone_home_task.start(60 * 60 * 24, now=False)

    def in_thread():
        with LoggingContext("run"):
            change_resource_limit(hs.config.soft_file_limit)
            reactor.run()

    if hs.config.daemonize:
        if hs.config.print_pidfile:
            print hs.config.pid_file

        daemon = Daemonize(
            app="synapse-homeserver",

View File

@@ -16,53 +16,67 @@
 import sys
 import os
+import os.path
 import subprocess
 import signal
+import yaml

 SYNAPSE = ["python", "-B", "-m", "synapse.app.homeserver"]

-CONFIGFILE = "homeserver.yaml"
-PIDFILE = "homeserver.pid"

 GREEN = "\x1b[1;32m"
+RED = "\x1b[1;31m"
 NORMAL = "\x1b[m"


-def start():
-    if not os.path.exists(CONFIGFILE):
-        sys.stderr.write(
-            "No config file found\n"
-            "To generate a config file, run '%s -c %s --generate-config"
-            " --server-name=<server name>'\n" % (
-                " ".join(SYNAPSE), CONFIGFILE
-            )
-        )
-        sys.exit(1)
+def start(configfile):
     print "Starting ...",
     args = SYNAPSE
-    args.extend(["--daemonize", "-c", CONFIGFILE, "--pid-file", PIDFILE])
-    subprocess.check_call(args)
-    print GREEN + "started" + NORMAL
+    args.extend(["--daemonize", "-c", configfile])
+
+    try:
+        subprocess.check_call(args)
+        print GREEN + "started" + NORMAL
+    except subprocess.CalledProcessError as e:
+        print (
+            RED +
+            "error starting (exit code: %d); see above for logs" % e.returncode +
+            NORMAL
+        )


-def stop():
-    if os.path.exists(PIDFILE):
-        pid = int(open(PIDFILE).read())
+def stop(pidfile):
+    if os.path.exists(pidfile):
+        pid = int(open(pidfile).read())
         os.kill(pid, signal.SIGTERM)
         print GREEN + "stopped" + NORMAL


 def main():
+    configfile = sys.argv[2] if len(sys.argv) == 3 else "homeserver.yaml"
+
+    if not os.path.exists(configfile):
+        sys.stderr.write(
+            "No config file found\n"
+            "To generate a config file, run '%s -c %s --generate-config"
+            " --server-name=<server name>'\n" % (
+                " ".join(SYNAPSE), configfile
+            )
+        )
+        sys.exit(1)
+
+    config = yaml.load(open(configfile))
+    pidfile = config["pid_file"]
+
     action = sys.argv[1] if sys.argv[1:] else "usage"
     if action == "start":
-        start()
+        start(configfile)
     elif action == "stop":
-        stop()
+        stop(pidfile)
     elif action == "restart":
-        stop()
-        start()
+        stop(pidfile)
+        start(configfile)
     else:
-        sys.stderr.write("Usage: %s [start|stop|restart]\n" % (sys.argv[0],))
+        sys.stderr.write("Usage: %s [start|stop|restart] [configfile]\n" % (sys.argv[0],))
         sys.exit(1)

View File

@@ -148,8 +148,8 @@ class ApplicationService(object):
                 and self.is_interested_in_user(event.state_key)):
             return True
         # check joined member events
-        for member in member_list:
-            if self.is_interested_in_user(member.state_key):
+        for user_id in member_list:
+            if self.is_interested_in_user(user_id):
                 return True
         return False
@@ -173,7 +173,7 @@ class ApplicationService(object):
             restrict_to(str): The namespace to restrict regex tests to.
             aliases_for_event(list): A list of all the known room aliases for
                 this event.
-            member_list(list): A list of all joined room members in this room.
+            member_list(list): A list of all joined user_ids in this room.
         Returns:
             bool: True if this service would like to know about this event.
         """

View File

@@ -224,8 +224,8 @@ class _Recoverer(object):
         self.clock.call_later((2 ** self.backoff_counter), self.retry)

     def _backoff(self):
-        # cap the backoff to be around 18h => (2^16) = 65536 secs
-        if self.backoff_counter < 16:
+        # cap the backoff to be around 8.5min => (2^9) = 512 secs
+        if self.backoff_counter < 9:
             self.backoff_counter += 1
         self.recover()
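To make the new cap concrete, here is the retry schedule the counter produces
(illustrative only; the cap value mirrors the new code):

    backoff_counter = 0
    for attempt in range(12):
        delay = 2 ** backoff_counter           # seconds until the next retry
        print "attempt %2d: retry in %4d s" % (attempt + 1, delay)
        if backoff_counter < 9:                # same cap as the new code
            backoff_counter += 1               # delays top out at 2^9 = 512 s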

View File

@@ -0,0 +1,30 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
if __name__ == "__main__":
    import sys
    from homeserver import HomeServerConfig

    action = sys.argv[1]

    if action == "read":
        key = sys.argv[2]
        config = HomeServerConfig.load_config("", sys.argv[3:])
        print getattr(config, key)
        sys.exit(0)
    else:
        sys.stderr.write("Unknown command %r\n" % (action,))
        sys.exit(1)

View File

@@ -14,9 +14,11 @@
 # limitations under the License.

 import argparse
-import sys
+import errno
 import os
 import yaml
+import sys
+from textwrap import dedent


 class ConfigError(Exception):
@@ -24,18 +26,45 @@ class ConfigError(Exception):


 class Config(object):
-    def __init__(self, args):
-        pass
+
+    stats_reporting_begging_spiel = (
+        "We would really appreciate it if you could help our project out by"
+        " reporting anonymized usage statistics from your homeserver. Only very"
+        " basic aggregate data (e.g. number of users) will be reported, but it"
+        " helps us to track the growth of the Matrix community, and helps us to"
+        " make Matrix a success, as well as to convince other networks that they"
+        " should peer with us."
+        "\nThank you."
+    )

     @staticmethod
-    def parse_size(string):
+    def parse_size(value):
+        if isinstance(value, int) or isinstance(value, long):
+            return value
         sizes = {"K": 1024, "M": 1024 * 1024}
         size = 1
-        suffix = string[-1]
+        suffix = value[-1]
         if suffix in sizes:
-            string = string[:-1]
+            value = value[:-1]
             size = sizes[suffix]
-        return int(string) * size
+        return int(value) * size
+
+    @staticmethod
+    def parse_duration(value):
+        if isinstance(value, int) or isinstance(value, long):
+            return value
+        second = 1000
+        hour = 60 * 60 * second
+        day = 24 * hour
+        week = 7 * day
+        year = 365 * day
+        sizes = {"s": second, "h": hour, "d": day, "w": week, "y": year}
+        size = 1
+        suffix = value[-1]
+        if suffix in sizes:
+            value = value[:-1]
+            size = sizes[suffix]
+        return int(value) * size
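For orientation, parse_size returns bytes and parse_duration returns
milliseconds, and both now pass integers straight through (illustrative
values):

    print Config.parse_size("10K")       # 10240
    print Config.parse_size(2048)        # 2048
    print Config.parse_duration("30s")   # 30000
    print Config.parse_duration("1d")    # 86400000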
     @staticmethod
     def abspath(file_path):
@@ -63,8 +92,11 @@ class Config(object):

     @classmethod
     def ensure_directory(cls, dir_path):
         dir_path = cls.abspath(dir_path)
-        if not os.path.exists(dir_path):
-            os.makedirs(dir_path)
+        try:
+            os.makedirs(dir_path)
+        except OSError, e:
+            if e.errno != errno.EEXIST:
+                raise
         if not os.path.isdir(dir_path):
             raise ConfigError(
                 "%s is not a directory" % (dir_path,)
@@ -77,17 +109,6 @@ class Config(object):
         with open(file_path) as file_stream:
             return file_stream.read()

-    @classmethod
-    def read_yaml_file(cls, file_path, config_name):
-        cls.check_file(file_path, config_name)
-        with open(file_path) as file_stream:
-            try:
-                return yaml.load(file_stream)
-            except:
-                raise ConfigError(
-                    "Error parsing yaml in file %r" % (file_path,)
-                )
-
     @staticmethod
     def default_path(name):
         return os.path.abspath(os.path.join(os.path.curdir, name))
@@ -97,84 +118,200 @@ class Config(object):
         with open(file_path) as file_stream:
             return yaml.load(file_stream)

-    @classmethod
-    def add_arguments(cls, parser):
-        pass
+    def invoke_all(self, name, *args, **kargs):
+        results = []
+        for cls in type(self).mro():
+            if name in cls.__dict__:
+                results.append(getattr(cls, name)(self, *args, **kargs))
+        return results
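invoke_all calls the named method once on every class in the instance's MRO
that defines it. A toy illustration of the pattern (the class and method
names here are made up, assuming the Config class above is in scope):

    class DatabasePart(Config):
        def describe(self):
            return "database"

    class ServerPart(Config):
        def describe(self):
            return "server"

    class WholeConfig(ServerPart, DatabasePart):
        pass

    print WholeConfig().invoke_all("describe")   # ['server', 'database']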
-    @classmethod
-    def generate_config(cls, args, config_dir_path):
-        pass
+    def generate_config(self, config_dir_path, server_name, report_stats=None):
+        default_config = "# vim:ft=yaml\n"
+
+        default_config += "\n\n".join(dedent(conf) for conf in self.invoke_all(
+            "default_config",
+            config_dir_path=config_dir_path,
+            server_name=server_name,
+            report_stats=report_stats,
+        ))
+
+        config = yaml.load(default_config)
+
+        return default_config, config

     @classmethod
     def load_config(cls, description, argv, generate_section=None):
+        obj = cls()
+
         config_parser = argparse.ArgumentParser(add_help=False)
         config_parser.add_argument(
             "-c", "--config-path",
+            action="append",
             metavar="CONFIG_FILE",
-            help="Specify config file"
+            help="Specify config file. Can be given multiple times and"
+                 " may specify directories containing *.yaml files."
         )
         config_parser.add_argument(
             "--generate-config",
             action="store_true",
-            help="Generate config file"
+            help="Generate a config file for the server name"
+        )
+        config_parser.add_argument(
+            "--report-stats",
+            action="store",
+            help="Stuff",
+            choices=["yes", "no"]
+        )
+        config_parser.add_argument(
+            "--generate-keys",
+            action="store_true",
+            help="Generate any missing key files then exit"
+        )
+        config_parser.add_argument(
+            "--keys-directory",
+            metavar="DIRECTORY",
+            help="Used with 'generate-*' options to specify where files such as"
+                 " certs and signing keys should be stored in, unless explicitly"
+                 " specified in the config."
+        )
+        config_parser.add_argument(
+            "-H", "--server-name",
+            help="The server name to generate a config file for"
         )
         config_args, remaining_args = config_parser.parse_known_args(argv)

+        generate_keys = config_args.generate_keys
+
+        config_files = []
+        if config_args.config_path:
+            for config_path in config_args.config_path:
+                if os.path.isdir(config_path):
+                    # We accept specifying directories as config paths, we search
+                    # inside that directory for all files matching *.yaml, and then
+                    # we apply them in *sorted* order.
+                    files = []
+                    for entry in os.listdir(config_path):
+                        entry_path = os.path.join(config_path, entry)
+                        if not os.path.isfile(entry_path):
+                            print (
+                                "Found subdirectory in config directory: %r. IGNORING."
+                            ) % (entry_path, )
+                            continue
+
+                        if not entry.endswith(".yaml"):
+                            print (
+                                "Found file in config directory that does not"
+                                " end in '.yaml': %r. IGNORING."
+                            ) % (entry_path, )
+                            continue
+
+                        files.append(entry_path)
+
+                    config_files.extend(sorted(files))
+                else:
+                    config_files.append(config_path)
+
         if config_args.generate_config:
-            if not config_args.config_path:
+            if config_args.report_stats is None:
                 config_parser.error(
-                    "Must specify where to generate the config file"
+                    "Please specify either --report-stats=yes or --report-stats=no\n\n" +
+                    cls.stats_reporting_begging_spiel
                 )
-            config_dir_path = os.path.dirname(config_args.config_path)
-            if os.path.exists(config_args.config_path):
-                defaults = cls.read_config_file(config_args.config_path)
-            else:
-                defaults = {}
-        else:
-            if config_args.config_path:
-                defaults = cls.read_config_file(config_args.config_path)
+            if not config_files:
+                config_parser.error(
+                    "Must supply a config file.\nA config file can be automatically"
+                    " generated using \"--generate-config -H SERVER_NAME"
+                    " -c CONFIG-FILE\""
+                )
+            (config_path,) = config_files
+            if not os.path.exists(config_path):
+                if config_args.keys_directory:
+                    config_dir_path = config_args.keys_directory
+                else:
+                    config_dir_path = os.path.dirname(config_path)
+                config_dir_path = os.path.abspath(config_dir_path)
+
+                server_name = config_args.server_name
+                if not server_name:
+                    print "Must specify a server_name to a generate config for."
+                    sys.exit(1)
+                if not os.path.exists(config_dir_path):
+                    os.makedirs(config_dir_path)
+                with open(config_path, "wb") as config_file:
+                    config_bytes, config = obj.generate_config(
+                        config_dir_path=config_dir_path,
+                        server_name=server_name,
+                        report_stats=(config_args.report_stats == "yes"),
+                    )
+                    obj.invoke_all("generate_files", config)
+                    config_file.write(config_bytes)
+                print (
+                    "A config file has been generated in %r for server name"
+                    " %r with corresponding SSL keys and self-signed"
+                    " certificates. Please review this file and customise it"
+                    " to your needs."
+                ) % (config_path, server_name)
+                print (
+                    "If this server name is incorrect, you will need to"
+                    " regenerate the SSL certificates"
+                )
+                sys.exit(0)
             else:
-                defaults = {}
+                print (
+                    "Config file %r already exists. Generating any missing key"
+                    " files."
+                ) % (config_path,)
+                generate_keys = True

         parser = argparse.ArgumentParser(
             parents=[config_parser],
             description=description,
             formatter_class=argparse.RawDescriptionHelpFormatter,
         )
-        cls.add_arguments(parser)
-        parser.set_defaults(**defaults)
-
+        obj.invoke_all("add_arguments", parser)
         args = parser.parse_args(remaining_args)

-        if config_args.generate_config:
-            config_dir_path = os.path.dirname(config_args.config_path)
-            config_dir_path = os.path.abspath(config_dir_path)
-            if not os.path.exists(config_dir_path):
-                os.makedirs(config_dir_path)
-            cls.generate_config(args, config_dir_path)
-            config = {}
-            for key, value in vars(args).items():
-                if (key not in set(["config_path", "generate_config"])
-                        and value is not None):
-                    config[key] = value
-            with open(config_args.config_path, "w") as config_file:
-                # TODO(mark/paul) We might want to output emacs-style mode
-                # markers as well as vim-style mode markers into the file,
-                # to further hint to people this is a YAML file.
-                config_file.write("# vim:ft=yaml\n")
-                yaml.dump(config, config_file, default_flow_style=False)
-            print (
-                "A config file has been generated in %s for server name"
-                " '%s' with corresponding SSL keys and self-signed"
-                " certificates. Please review this file and customise it to"
-                " your needs."
-            ) % (
-                config_args.config_path, config['server_name']
-            )
-            print (
-                "If this server name is incorrect, you will need to regenerate"
-                " the SSL certificates"
-            )
-            sys.exit(0)
-        return cls(args)
+        if not config_files:
+            config_parser.error(
+                "Must supply a config file.\nA config file can be automatically"
+                " generated using \"--generate-config -H SERVER_NAME"
+                " -c CONFIG-FILE\""
+            )
+
+        if config_args.keys_directory:
+            config_dir_path = config_args.keys_directory
+        else:
+            config_dir_path = os.path.dirname(config_args.config_path[-1])
+        config_dir_path = os.path.abspath(config_dir_path)
+
+        specified_config = {}
+        for config_file in config_files:
+            yaml_config = cls.read_config_file(config_file)
+            specified_config.update(yaml_config)

+        server_name = specified_config["server_name"]
+        _, config = obj.generate_config(
+            config_dir_path=config_dir_path,
+            server_name=server_name
+        )
+        config.pop("log_config")
+        config.update(specified_config)
+        if "report_stats" not in config:
+            sys.stderr.write(
+                "Please opt in or out of reporting anonymized homeserver usage "
+                "statistics, by setting the report_stats key in your config file "
+                " ( " + config_path + " ) " +
+                "to either True or False.\n\n" +
+                Config.stats_reporting_begging_spiel + "\n")
+            sys.exit(1)
+
+        if generate_keys:
+            obj.invoke_all("generate_files", config)
+            sys.exit(0)
+
+        obj.invoke_all("read_config", config)
+        obj.invoke_all("read_arguments", args)
+
+        return obj
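In the new scheme load_config is the single entry point; a hypothetical
invocation, assuming a homeserver.yaml exists and contains at least a
server_name and a report_stats setting:

    config = HomeServerConfig.load_config(
        "Synapse Homeserver", ["-c", "homeserver.yaml"]
    )
    print config.server_name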

View File

@@ -17,15 +17,11 @@ from ._base import Config

 class AppServiceConfig(Config):

-    def __init__(self, args):
-        super(AppServiceConfig, self).__init__(args)
-        self.app_service_config_files = args.app_service_config_files
+    def read_config(self, config):
+        self.app_service_config_files = config.get("app_service_config_files", [])

-    @classmethod
-    def add_arguments(cls, parser):
-        super(AppServiceConfig, cls).add_arguments(parser)
-        group = parser.add_argument_group("appservice")
-        group.add_argument(
-            "--app-service-config-files", type=str, nargs='+',
-            help="A list of application service config files to use."
-        )
+    def default_config(cls, **kwargs):
+        return """\
+        # A list of application service config files to use
+        app_service_config_files: []
+        """

View File

@@ -17,42 +17,31 @@ from ._base import Config

 class CaptchaConfig(Config):

-    def __init__(self, args):
-        super(CaptchaConfig, self).__init__(args)
-        self.recaptcha_private_key = args.recaptcha_private_key
-        self.recaptcha_public_key = args.recaptcha_public_key
-        self.enable_registration_captcha = args.enable_registration_captcha
-
-        # XXX: This is used for more than just captcha
-        self.captcha_ip_origin_is_x_forwarded = (
-            args.captcha_ip_origin_is_x_forwarded
-        )
-        self.captcha_bypass_secret = args.captcha_bypass_secret
+    def read_config(self, config):
+        self.recaptcha_private_key = config["recaptcha_private_key"]
+        self.recaptcha_public_key = config["recaptcha_public_key"]
+        self.enable_registration_captcha = config["enable_registration_captcha"]
+        self.captcha_bypass_secret = config.get("captcha_bypass_secret")
+        self.recaptcha_siteverify_api = config["recaptcha_siteverify_api"]

-    @classmethod
-    def add_arguments(cls, parser):
-        super(CaptchaConfig, cls).add_arguments(parser)
-        group = parser.add_argument_group("recaptcha")
-        group.add_argument(
-            "--recaptcha-public-key", type=str, default="YOUR_PUBLIC_KEY",
-            help="This Home Server's ReCAPTCHA public key."
-        )
-        group.add_argument(
-            "--recaptcha-private-key", type=str, default="YOUR_PRIVATE_KEY",
-            help="This Home Server's ReCAPTCHA private key."
-        )
-        group.add_argument(
-            "--enable-registration-captcha", type=bool, default=False,
-            help="Enables ReCaptcha checks when registering, preventing signup"
-            + " unless a captcha is answered. Requires a valid ReCaptcha "
-            + "public/private key."
-        )
-        group.add_argument(
-            "--captcha_ip_origin_is_x_forwarded", type=bool, default=False,
-            help="When checking captchas, use the X-Forwarded-For (XFF) header"
-            + " as the client IP and not the actual client IP."
-        )
-        group.add_argument(
-            "--captcha_bypass_secret", type=str,
-            help="A secret key used to bypass the captcha test entirely."
-        )
+    def default_config(self, **kwargs):
+        return """\
+        ## Captcha ##
+
+        # This Home Server's ReCAPTCHA public key.
+        recaptcha_private_key: "YOUR_PRIVATE_KEY"
+
+        # This Home Server's ReCAPTCHA private key.
+        recaptcha_public_key: "YOUR_PUBLIC_KEY"
+
+        # Enables ReCaptcha checks when registering, preventing signup
+        # unless a captcha is answered. Requires a valid ReCaptcha
+        # public/private key.
+        enable_registration_captcha: False
+
+        # A secret key used to bypass the captcha test entirely.
+        #captcha_bypass_secret: "YOUR_SECRET_HERE"
+
+        # The API endpoint to use for verifying m.login.recaptcha responses.
+        recaptcha_siteverify_api: "https://www.google.com/recaptcha/api/siteverify"
+        """

synapse/config/cas.py (new file)

@@ -0,0 +1,47 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config


class CasConfig(Config):
    """Cas Configuration

    cas_server_url: URL of CAS server
    """

    def read_config(self, config):
        cas_config = config.get("cas_config", None)
        if cas_config:
            self.cas_enabled = cas_config.get("enabled", True)
            self.cas_server_url = cas_config["server_url"]
            self.cas_service_url = cas_config["service_url"]
            self.cas_required_attributes = cas_config.get("required_attributes", {})
        else:
            self.cas_enabled = False
            self.cas_server_url = None
            self.cas_service_url = None
            self.cas_required_attributes = {}

    def default_config(self, config_dir_path, server_name, **kwargs):
        return """
        # Enable CAS for registration and login.
        #cas_config:
        #   enabled: true
        #   server_url: "https://cas-server.com"
        #   service_url: "https://homeserver.domain.com:8448"
        #   #required_attributes:
        #   #    name: value
        """

View File

@@ -14,28 +14,21 @@
 # limitations under the License.

 from ._base import Config

-import os
-import yaml
-

 class DatabaseConfig(Config):

-    def __init__(self, args):
-        super(DatabaseConfig, self).__init__(args)
-        if args.database_path == ":memory:":
-            self.database_path = ":memory:"
-        else:
-            self.database_path = self.abspath(args.database_path)
-        self.event_cache_size = self.parse_size(args.event_cache_size)
-
-        if args.database_config:
-            with open(args.database_config) as f:
-                self.database_config = yaml.safe_load(f)
-        else:
+    def read_config(self, config):
+        self.event_cache_size = self.parse_size(
+            config.get("event_cache_size", "10K")
+        )
+
+        self.database_config = config.get("database")
+
+        if self.database_config is None:
             self.database_config = {
                 "name": "sqlite3",
-                "args": {
-                    "database": self.database_path,
-                },
+                "args": {},
             }

         name = self.database_config.get("name", None)
@@ -50,24 +43,37 @@ class DatabaseConfig(Config):
         else:
             raise RuntimeError("Unsupported database type '%s'" % (name,))

-    @classmethod
-    def add_arguments(cls, parser):
-        super(DatabaseConfig, cls).add_arguments(parser)
+        self.set_databasepath(config.get("database_path"))
+
+    def default_config(self, **kwargs):
+        database_path = self.abspath("homeserver.db")
+        return """\
+        # Database configuration
+        database:
+          # The database engine name
+          name: "sqlite3"
+          # Arguments to pass to the engine
+          args:
+            # Path to the database
+            database: "%(database_path)s"
+
+        # Number of events to cache in memory.
+        event_cache_size: "10K"
+        """ % locals()
+
+    def read_arguments(self, args):
+        self.set_databasepath(args.database_path)
+
+    def set_databasepath(self, database_path):
+        if database_path != ":memory:":
+            database_path = self.abspath(database_path)
+        if self.database_config.get("name", None) == "sqlite3":
+            if database_path is not None:
+                self.database_config["args"]["database"] = database_path
+
+    def add_arguments(self, parser):
         db_group = parser.add_argument_group("database")
         db_group.add_argument(
-            "-d", "--database-path", default="homeserver.db",
-            metavar="SQLITE_DATABASE_PATH", help="The database name."
+            "-d", "--database-path", metavar="SQLITE_DATABASE_PATH",
+            help="The path to a sqlite database to use."
         )
-        db_group.add_argument(
-            "--event-cache-size", default="100K",
-            help="Number of events to cache in memory."
-        )
-        db_group.add_argument(
-            "--database-config", default=None,
-            help="Location of the database configuration file."
-        )
-
-    @classmethod
-    def generate_config(cls, args, config_dir_path):
-        super(DatabaseConfig, cls).generate_config(args, config_dir_path)
-        args.database_path = os.path.abspath(args.database_path)

View File

@@ -25,15 +25,21 @@ from .registration import RegistrationConfig
 from .metrics import MetricsConfig
 from .appservice import AppServiceConfig
 from .key import KeyConfig
+from .saml2 import SAML2Config
+from .cas import CasConfig
+from .password import PasswordConfig


 class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
                        RatelimitConfig, ContentRepositoryConfig, CaptchaConfig,
-                       VoipConfig, RegistrationConfig,
-                       MetricsConfig, AppServiceConfig, KeyConfig,):
+                       VoipConfig, RegistrationConfig, MetricsConfig,
+                       AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
+                       PasswordConfig,):
     pass


 if __name__ == '__main__':
     import sys
-    HomeServerConfig.load_config("Generate config", sys.argv[1:], "HomeServer")
+    sys.stdout.write(
+        HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0]
+    )

View File

@@ -13,55 +13,68 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

+import os
+
 from ._base import Config, ConfigError

-import syutil.crypto.signing_key
-from syutil.crypto.signing_key import (
-    is_signing_algorithm_supported, decode_verify_key_bytes
+from synapse.util.stringutils import random_string
+from signedjson.key import (
+    generate_signing_key, is_signing_algorithm_supported,
+    decode_signing_key_base64, decode_verify_key_bytes,
+    read_signing_keys, write_signing_keys, NACL_ED25519
 )
-from syutil.base64util import decode_base64
-
-import os
+from unpaddedbase64 import decode_base64


 class KeyConfig(Config):

-    def __init__(self, args):
-        super(KeyConfig, self).__init__(args)
-        self.signing_key = self.read_signing_key(args.signing_key_path)
+    def read_config(self, config):
+        self.signing_key = self.read_signing_key(config["signing_key_path"])
         self.old_signing_keys = self.read_old_signing_keys(
-            args.old_signing_key_path
+            config["old_signing_keys"]
+        )
+        self.key_refresh_interval = self.parse_duration(
+            config["key_refresh_interval"]
         )
-        self.key_refresh_interval = args.key_refresh_interval
         self.perspectives = self.read_perspectives(
-            args.perspectives_config_path
+            config["perspectives"]
         )

-    @classmethod
-    def add_arguments(cls, parser):
-        super(KeyConfig, cls).add_arguments(parser)
-        key_group = parser.add_argument_group("keys")
-        key_group.add_argument("--signing-key-path",
-                               help="The signing key to sign messages with")
-        key_group.add_argument("--old-signing-key-path",
-                               help="The keys that the server used to sign"
-                                    " sign messages with but won't use"
-                                    " to sign new messages. E.g. it has"
-                                    " lost its private key")
-        key_group.add_argument("--key-refresh-interval",
-                               default=24 * 60 * 60 * 1000,  # 1 Day
-                               help="How long a key response is valid for."
-                                    " Used to set the exipiry in /key/v2/."
-                                    " Controls how frequently servers will"
-                                    " query what keys are still valid")
-        key_group.add_argument("--perspectives-config-path",
-                               help="The trusted servers to download signing"
-                                    " keys from")
-
-    def read_perspectives(self, perspectives_config_path):
-        config = self.read_yaml_file(
-            perspectives_config_path, "perspectives_config_path"
-        )
+    def default_config(self, config_dir_path, server_name, **kwargs):
+        base_key_name = os.path.join(config_dir_path, server_name)
+        return """\
+        ## Signing Keys ##
+
+        # Path to the signing key to sign messages with
+        signing_key_path: "%(base_key_name)s.signing.key"
+
+        # The keys that the server used to sign messages with but won't use
+        # to sign new messages. E.g. it has lost its private key
+        old_signing_keys: {}
+        #  "ed25519:auto":
+        #    # Base64 encoded public key
+        #    key: "The public part of your old signing key."
+        #    # Millisecond POSIX timestamp when the key expired.
+        #    expired_ts: 123456789123
+
+        # How long key response published by this server is valid for.
+        # Used to set the valid_until_ts in /key/v2 APIs.
+        # Determines how quickly servers will query to check which keys
+        # are still valid.
+        key_refresh_interval: "1d" # 1 Day.
+
+        # The trusted servers to download signing keys from.
+        perspectives:
+          servers:
+            "matrix.org":
+              verify_keys:
+                "ed25519:auto":
+                  key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"
+        """ % locals()
+
+    def read_perspectives(self, perspectives_config):
         servers = {}
-        for server_name, server_config in config["servers"].items():
+        for server_name, server_config in perspectives_config["servers"].items():
             for key_id, key_data in server_config["verify_keys"].items():
                 if is_signing_algorithm_supported(key_id):
                     key_base64 = key_data["key"]
@@ -73,75 +86,45 @@ class KeyConfig(Config):

     def read_signing_key(self, signing_key_path):
         signing_keys = self.read_file(signing_key_path, "signing_key")
         try:
-            return syutil.crypto.signing_key.read_signing_keys(
-                signing_keys.splitlines(True)
-            )
+            return read_signing_keys(signing_keys.splitlines(True))
         except Exception:
             raise ConfigError(
                 "Error reading signing_key."
                 " Try running again with --generate-config"
             )

-    def read_old_signing_keys(self, old_signing_key_path):
-        old_signing_keys = self.read_file(
-            old_signing_key_path, "old_signing_key"
-        )
-        try:
-            return syutil.crypto.signing_key.read_old_signing_keys(
-                old_signing_keys.splitlines(True)
-            )
-        except Exception:
-            raise ConfigError(
-                "Error reading old signing keys."
-            )
+    def read_old_signing_keys(self, old_signing_keys):
+        keys = {}
+        for key_id, key_data in old_signing_keys.items():
+            if is_signing_algorithm_supported(key_id):
+                key_base64 = key_data["key"]
+                key_bytes = decode_base64(key_base64)
+                verify_key = decode_verify_key_bytes(key_id, key_bytes)
+                verify_key.expired_ts = key_data["expired_ts"]
+                keys[key_id] = verify_key
+            else:
+                raise ConfigError(
+                    "Unsupported signing algorithm for old key: %r" % (key_id,)
+                )
+        return keys

-    @classmethod
-    def generate_config(cls, args, config_dir_path):
-        super(KeyConfig, cls).generate_config(args, config_dir_path)
-        base_key_name = os.path.join(config_dir_path, args.server_name)
-
-        args.pid_file = os.path.abspath(args.pid_file)
-
-        if not args.signing_key_path:
-            args.signing_key_path = base_key_name + ".signing.key"
-
-        if not os.path.exists(args.signing_key_path):
-            with open(args.signing_key_path, "w") as signing_key_file:
-                syutil.crypto.signing_key.write_signing_keys(
-                    signing_key_file,
-                    (syutil.crypto.signing_key.generate_signing_key("auto"),),
-                )
-        else:
-            signing_keys = cls.read_file(args.signing_key_path, "signing_key")
-            if len(signing_keys.split("\n")[0].split()) == 1:
-                # handle keys in the old format.
-                key = syutil.crypto.signing_key.decode_signing_key_base64(
-                    syutil.crypto.signing_key.NACL_ED25519,
-                    "auto",
-                    signing_keys.split("\n")[0]
-                )
-                with open(args.signing_key_path, "w") as signing_key_file:
-                    syutil.crypto.signing_key.write_signing_keys(
-                        signing_key_file,
-                        (key,),
-                    )
-
-        if not args.old_signing_key_path:
-            args.old_signing_key_path = base_key_name + ".old.signing.keys"
-
-        if not os.path.exists(args.old_signing_key_path):
-            with open(args.old_signing_key_path, "w"):
-                pass
-
-        if not args.perspectives_config_path:
-            args.perspectives_config_path = base_key_name + ".perspectives"
-
-        if not os.path.exists(args.perspectives_config_path):
-            with open(args.perspectives_config_path, "w") as perspectives_file:
-                perspectives_file.write(
-                    'servers:\n'
-                    '  matrix.org:\n'
-                    '    verify_keys:\n'
-                    '      "ed25519:auto":\n'
-                    '        key: "Noi6WqcDj0QmPxCNQqgezwTlBKrfqehY1u2FyWP9uYw"\n'
-                )
+    def generate_files(self, config):
+        signing_key_path = config["signing_key_path"]
+        if not os.path.exists(signing_key_path):
+            with open(signing_key_path, "w") as signing_key_file:
+                key_id = "a_" + random_string(4)
+                write_signing_keys(
+                    signing_key_file, (generate_signing_key(key_id),),
+                )
+        else:
+            signing_keys = self.read_file(signing_key_path, "signing_key")
+            if len(signing_keys.split("\n")[0].split()) == 1:
+                # handle keys in the old format.
+                key_id = "a_" + random_string(4)
+                key = decode_signing_key_base64(
+                    NACL_ED25519, key_id, signing_keys.split("\n")[0]
+                )
+                with open(signing_key_path, "w") as signing_key_file:
+                    write_signing_keys(
+                        signing_key_file, (key,),
+                    )

View File

@@ -19,25 +19,97 @@ from twisted.python.log import PythonLoggingObserver
 import logging
 import logging.config
 import yaml
+from string import Template
+import os
+import signal
+
+from synapse.util.debug import debug_deferreds
+
+
+DEFAULT_LOG_CONFIG = Template("""
+version: 1
+
+formatters:
+  precise:
+    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s\
+- %(message)s'
+
+filters:
+  context:
+    (): synapse.util.logcontext.LoggingContextFilter
+    request: ""
+
+handlers:
+  file:
+    class: logging.handlers.RotatingFileHandler
+    formatter: precise
+    filename: ${log_file}
+    maxBytes: 104857600
+    backupCount: 10
+    filters: [context]
+    level: INFO
+  console:
+    class: logging.StreamHandler
+    formatter: precise
+
+loggers:
+    synapse:
+        level: INFO
+
+    synapse.storage.SQL:
+        level: INFO
+
+root:
+    level: INFO
+    handlers: [file, console]
+""")
+

 class LoggingConfig(Config):

-    def __init__(self, args):
-        super(LoggingConfig, self).__init__(args)
-        self.verbosity = int(args.verbose) if args.verbose else None
-        self.log_config = self.abspath(args.log_config)
-        self.log_file = self.abspath(args.log_file)
+    def read_config(self, config):
+        self.verbosity = config.get("verbose", 0)
+        self.log_config = self.abspath(config.get("log_config"))
+        self.log_file = self.abspath(config.get("log_file"))
+        if config.get("full_twisted_stacktraces"):
+            debug_deferreds()
+
+    def default_config(self, config_dir_path, server_name, **kwargs):
+        log_file = self.abspath("homeserver.log")
+        log_config = self.abspath(
+            os.path.join(config_dir_path, server_name + ".log.config")
+        )
+        return """
+        # Logging verbosity level.
+        verbose: 0
+
+        # File to write logging to
+        log_file: "%(log_file)s"
+
+        # A yaml python logging config file
+        log_config: "%(log_config)s"
+
+        # Stop twisted from discarding the stack traces of exceptions in
+        # deferreds by waiting a reactor tick before running a deferred's
+        # callbacks.
+        # full_twisted_stacktraces: true
+        """ % locals()
+
+    def read_arguments(self, args):
+        if args.verbose is not None:
+            self.verbosity = args.verbose
+        if args.log_config is not None:
+            self.log_config = args.log_config
+        if args.log_file is not None:
+            self.log_file = args.log_file

-    @classmethod
     def add_arguments(cls, parser):
-        super(LoggingConfig, cls).add_arguments(parser)
         logging_group = parser.add_argument_group("logging")
         logging_group.add_argument(
             '-v', '--verbose', dest="verbose", action='count',
             help="The verbosity level."
         )
         logging_group.add_argument(
-            '-f', '--log-file', dest="log_file", default="homeserver.log",
+            '-f', '--log-file', dest="log_file",
             help="File to log to."
         )
         logging_group.add_argument(
@@ -45,6 +117,14 @@ class LoggingConfig(Config):
             help="Python logging config file"
         )

+    def generate_files(self, config):
+        log_config = config.get("log_config")
+        if log_config and not os.path.exists(log_config):
+            with open(log_config, "wb") as log_config_file:
+                log_config_file.write(
+                    DEFAULT_LOG_CONFIG.substitute(log_file=config["log_file"])
+                )
+
     def setup_logging(self):
         log_format = (
             "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
@@ -71,6 +151,19 @@ class LoggingConfig(Config):
             handler = logging.handlers.RotatingFileHandler(
                 self.log_file, maxBytes=(1000 * 1000 * 100), backupCount=3
             )
+
+            def sighup(signum, stack):
+                logger.info("Closing log file due to SIGHUP")
+                handler.doRollover()
+                logger.info("Opened new log file due to SIGHUP")
+
+            # TODO(paul): obviously this is a terrible mechanism for
+            #   stealing SIGHUP, because it means no other part of synapse
+            #   can use it instead. If we want to catch SIGHUP anywhere
+            #   else as well, I'd suggest we find a nicer way to broadcast
+            #   it around.
+            if getattr(signal, "SIGHUP"):
+                signal.signal(signal.SIGHUP, sighup)
         else:
             handler = logging.StreamHandler()
         handler.setFormatter(formatter)

View File

@@ -17,20 +17,17 @@ from ._base import Config

 class MetricsConfig(Config):
-    def __init__(self, args):
-        super(MetricsConfig, self).__init__(args)
-        self.enable_metrics = args.enable_metrics
-        self.metrics_port = args.metrics_port
+    def read_config(self, config):
+        self.enable_metrics = config["enable_metrics"]
+        self.report_stats = config.get("report_stats", None)
+        self.metrics_port = config.get("metrics_port")
+        self.metrics_bind_host = config.get("metrics_bind_host", "127.0.0.1")

-    @classmethod
-    def add_arguments(cls, parser):
-        super(MetricsConfig, cls).add_arguments(parser)
-        metrics_group = parser.add_argument_group("metrics")
-        metrics_group.add_argument(
-            '--enable-metrics', dest="enable_metrics", action="store_true",
-            help="Enable collection and rendering of performance metrics"
-        )
-        metrics_group.add_argument(
-            '--metrics-port', metavar="PORT", type=int,
-            help="Separate port to accept metrics requests on (on localhost)"
-        )
+    def default_config(self, report_stats=None, **kwargs):
+        suffix = "" if report_stats is None else "report_stats: %(report_stats)s\n"
+        return ("""\
+        ## Metrics ###
+
+        # Enable collection and rendering of performance metrics
+        enable_metrics: False
+        """ + suffix) % locals()

View File

@@ -0,0 +1,32 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config


class PasswordConfig(Config):
    """Password login configuration
    """

    def read_config(self, config):
        password_config = config.get("password_config", {})
        self.password_enabled = password_config.get("enabled", True)

    def default_config(self, config_dir_path, server_name, **kwargs):
        return """
        # Enable password for login.
        password_config:
           enabled: true
        """

View File

@@ -17,56 +17,42 @@ from ._base import Config

 class RatelimitConfig(Config):

-    def __init__(self, args):
-        super(RatelimitConfig, self).__init__(args)
-        self.rc_messages_per_second = args.rc_messages_per_second
-        self.rc_message_burst_count = args.rc_message_burst_count
+    def read_config(self, config):
+        self.rc_messages_per_second = config["rc_messages_per_second"]
+        self.rc_message_burst_count = config["rc_message_burst_count"]

-        self.federation_rc_window_size = args.federation_rc_window_size
-        self.federation_rc_sleep_limit = args.federation_rc_sleep_limit
-        self.federation_rc_sleep_delay = args.federation_rc_sleep_delay
-        self.federation_rc_reject_limit = args.federation_rc_reject_limit
-        self.federation_rc_concurrent = args.federation_rc_concurrent
+        self.federation_rc_window_size = config["federation_rc_window_size"]
+        self.federation_rc_sleep_limit = config["federation_rc_sleep_limit"]
+        self.federation_rc_sleep_delay = config["federation_rc_sleep_delay"]
+        self.federation_rc_reject_limit = config["federation_rc_reject_limit"]
+        self.federation_rc_concurrent = config["federation_rc_concurrent"]

-    @classmethod
-    def add_arguments(cls, parser):
-        super(RatelimitConfig, cls).add_arguments(parser)
-        rc_group = parser.add_argument_group("ratelimiting")
-        rc_group.add_argument(
-            "--rc-messages-per-second", type=float, default=0.2,
-            help="number of messages a client can send per second"
-        )
-        rc_group.add_argument(
-            "--rc-message-burst-count", type=float, default=10,
-            help="number of message a client can send before being throttled"
-        )
-        rc_group.add_argument(
-            "--federation-rc-window-size", type=int, default=10000,
-            help="The federation window size in milliseconds",
-        )
-        rc_group.add_argument(
-            "--federation-rc-sleep-limit", type=int, default=10,
-            help="The number of federation requests from a single server"
-                 " in a window before the server will delay processing the"
-                 " request.",
-        )
-        rc_group.add_argument(
-            "--federation-rc-sleep-delay", type=int, default=500,
-            help="The duration in milliseconds to delay processing events from"
-                 " remote servers by if they go over the sleep limit.",
-        )
-        rc_group.add_argument(
-            "--federation-rc-reject-limit", type=int, default=50,
-            help="The maximum number of concurrent federation requests allowed"
-                 " from a single server",
-        )
-        rc_group.add_argument(
-            "--federation-rc-concurrent", type=int, default=3,
-            help="The number of federation requests to concurrently process"
-                 " from a single server",
-        )
+    def default_config(self, **kwargs):
+        return """\
+        ## Ratelimiting ##
+
+        # Number of messages a client can send per second
+        rc_messages_per_second: 0.2
+
+        # Number of message a client can send before being throttled
+        rc_message_burst_count: 10.0
+
+        # The federation window size in milliseconds
+        federation_rc_window_size: 1000
+
+        # The number of federation requests from a single server in a window
+        # before the server will delay processing the request.
+        federation_rc_sleep_limit: 10
+
+        # The duration in milliseconds to delay processing events from
+        # remote servers by if they go over the sleep limit.
+        federation_rc_sleep_delay: 500
+
+        # The maximum number of concurrent federation requests allowed
+        # from a single server
+        federation_rc_reject_limit: 50
+
+        # The number of federation requests to concurrently process from a
+        # single server
+        federation_rc_concurrent: 3
+        """

View File

@@ -17,45 +17,60 @@ from ._base import Config
 from synapse.util.stringutils import random_string_with_symbols

-import distutils.util
+from distutils.util import strtobool


 class RegistrationConfig(Config):

-    def __init__(self, args):
-        super(RegistrationConfig, self).__init__(args)
-
-        # `args.enable_registration` may either be a bool or a string depending
-        # on if the option was given a value (e.g. --enable-registration=true
-        # would set `args.enable_registration` to "true" not True.)
+    def read_config(self, config):
         self.disable_registration = not bool(
-            distutils.util.strtobool(str(args.enable_registration))
+            strtobool(str(config["enable_registration"]))
         )
-        self.registration_shared_secret = args.registration_shared_secret
+        if "disable_registration" in config:
+            self.disable_registration = bool(
+                strtobool(str(config["disable_registration"]))
+            )

-    @classmethod
-    def add_arguments(cls, parser):
-        super(RegistrationConfig, cls).add_arguments(parser)
+        self.registration_shared_secret = config.get("registration_shared_secret")
+        self.macaroon_secret_key = config.get("macaroon_secret_key")
+        self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
+        self.allow_guest_access = config.get("allow_guest_access", False)
+
+    def default_config(self, **kwargs):
+        registration_shared_secret = random_string_with_symbols(50)
+        macaroon_secret_key = random_string_with_symbols(50)
+        return """\
+        ## Registration ##
+
+        # Enable registration for new users.
+        enable_registration: False
+
+        # If set, allows registration by anyone who also has the shared
+        # secret, even if registration is otherwise disabled.
+        registration_shared_secret: "%(registration_shared_secret)s"
+
+        macaroon_secret_key: "%(macaroon_secret_key)s"
+
+        # Set the number of bcrypt rounds used to generate password hash.
+        # Larger numbers increase the work factor needed to generate the hash.
+        # The default number of rounds is 12.
+        bcrypt_rounds: 12
+
+        # Allows users to register as guests without a password/email/etc, and
+        # participate in rooms hosted on this server which have been made
+        # accessible to anonymous users.
+        allow_guest_access: False
+        """ % locals()
+
+    def add_arguments(self, parser):
         reg_group = parser.add_argument_group("registration")
         reg_group.add_argument(
-            "--enable-registration",
-            const=True,
-            default=False,
-            nargs='?',
-            help="Enable registration for new users.",
-        )
-        reg_group.add_argument(
-            "--registration-shared-secret", type=str,
-            help="If set, allows registration by anyone who also has the shared"
-                 " secret, even if registration is otherwise disabled.",
+            "--enable-registration", action="store_true", default=None,
+            help="Enable registration for new users."
         )

-    @classmethod
-    def generate_config(cls, args, config_dir_path):
-        super(RegistrationConfig, cls).generate_config(args, config_dir_path)
-        if args.enable_registration is None:
-            args.enable_registration = False
-
-        if args.registration_shared_secret is None:
-            args.registration_shared_secret = random_string_with_symbols(50)
+    def read_arguments(self, args):
+        if args.enable_registration is not None:
+            self.disable_registration = not bool(
+                strtobool(str(args.enable_registration))
+            )
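Both the old and new code funnel the flag through distutils' strtobool, which
is stricter than bool(); a quick illustrative check:

    from distutils.util import strtobool

    print bool(strtobool("true"))        # True
    print bool(strtobool("no"))          # False
    print not bool(strtobool("False"))   # True, i.e. registration disabled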

View File

@@ -14,35 +14,87 @@
 # limitations under the License.

 from ._base import Config
+from collections import namedtuple
+
+
+ThumbnailRequirement = namedtuple(
+    "ThumbnailRequirement", ["width", "height", "method", "media_type"]
+)
+
+
+def parse_thumbnail_requirements(thumbnail_sizes):
+    """ Takes a list of dictionaries with "width", "height", and "method" keys
+    and creates a map from image media types to the thumbnail size, thumbnailing
+    method, and thumbnail media type to precalculate
+
+    Args:
+        thumbnail_sizes(list): List of dicts with "width", "height", and
+            "method" keys
+    Returns:
+        Dictionary mapping from media type string to list of
+        ThumbnailRequirement tuples.
+    """
+    requirements = {}
+    for size in thumbnail_sizes:
+        width = size["width"]
+        height = size["height"]
+        method = size["method"]
+        jpeg_thumbnail = ThumbnailRequirement(width, height, method, "image/jpeg")
+        png_thumbnail = ThumbnailRequirement(width, height, method, "image/png")
+        requirements.setdefault("image/jpeg", []).append(jpeg_thumbnail)
+        requirements.setdefault("image/gif", []).append(png_thumbnail)
+        requirements.setdefault("image/png", []).append(png_thumbnail)
+    return {
+        media_type: tuple(thumbnails)
+        for media_type, thumbnails in requirements.items()
+    }
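What the helper produces, using a single entry from the default sizes below
(illustrative):

    reqs = parse_thumbnail_requirements([
        {"width": 32, "height": 32, "method": "crop"},
    ])
    print reqs["image/jpeg"][0]            # 32x32 crop, media_type 'image/jpeg'
    print reqs["image/gif"][0].media_type  # 'image/png': GIFs thumbnail to PNG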
+
 class ContentRepositoryConfig(Config):
-    def __init__(self, args):
-        super(ContentRepositoryConfig, self).__init__(args)
-        self.max_upload_size = self.parse_size(args.max_upload_size)
-        self.max_image_pixels = self.parse_size(args.max_image_pixels)
-        self.media_store_path = self.ensure_directory(args.media_store_path)
-
-    def parse_size(self, string):
-        sizes = {"K": 1024, "M": 1024 * 1024}
-        size = 1
-        suffix = string[-1]
-        if suffix in sizes:
-            string = string[:-1]
-            size = sizes[suffix]
-        return int(string) * size
+    def read_config(self, config):
+        self.max_upload_size = self.parse_size(config["max_upload_size"])
+        self.max_image_pixels = self.parse_size(config["max_image_pixels"])
+        self.media_store_path = self.ensure_directory(config["media_store_path"])
+        self.uploads_path = self.ensure_directory(config["uploads_path"])
+        self.dynamic_thumbnails = config["dynamic_thumbnails"]
+        self.thumbnail_requirements = parse_thumbnail_requirements(
+            config["thumbnail_sizes"]
+        )

-    @classmethod
-    def add_arguments(cls, parser):
-        super(ContentRepositoryConfig, cls).add_arguments(parser)
-        db_group = parser.add_argument_group("content_repository")
-        db_group.add_argument(
-            "--max-upload-size", default="10M"
-        )
-        db_group.add_argument(
-            "--media-store-path", default=cls.default_path("media_store")
-        )
-        db_group.add_argument(
-            "--max-image-pixels", default="32M",
-            help="Maximum number of pixels that will be thumbnailed"
-        )
+    def default_config(self, **kwargs):
+        media_store = self.default_path("media_store")
+        uploads_path = self.default_path("uploads")
+        return """
+        # Directory where uploaded images and attachments are stored.
+        media_store_path: "%(media_store)s"
+
+        # Directory where in-progress uploads are stored.
+        uploads_path: "%(uploads_path)s"
+
+        # The largest allowed upload size in bytes
+        max_upload_size: "10M"
+
+        # Maximum number of pixels that will be thumbnailed
+        max_image_pixels: "32M"
+
+        # Whether to generate new thumbnails on the fly to precisely match
+        # the resolution requested by the client. If true then whenever
+        # a new resolution is requested by the client the server will
+        # generate a new thumbnail. If false the server will pick a thumbnail
+        # from a precalculated list.
+        dynamic_thumbnails: false
+
+        # List of thumbnails to precalculate when an image is uploaded.
+        thumbnail_sizes:
+        - width: 32
+          height: 32
+          method: crop
+        - width: 96
+          height: 96
+          method: crop
+        - width: 320
+          height: 240
+          method: scale
+        - width: 640
+          height: 480
+          method: scale
+        """ % locals()

synapse/config/saml2.py (new file)

@@ -0,0 +1,55 @@
# -*- coding: utf-8 -*-
# Copyright 2015 Ericsson
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config


class SAML2Config(Config):
    """SAML2 Configuration
    Synapse uses pysaml2 libraries for providing SAML2 support

    config_path:      Path to the sp_conf.py configuration file
    idp_redirect_url: Identity provider URL which will redirect
                      the user back to /login/saml2 with proper info.

    sp_conf.py file is something like:
    https://github.com/rohe/pysaml2/blob/master/example/sp-repoze/sp_conf.py.example

    More information: https://pythonhosted.org/pysaml2/howto/config.html
    """

    def read_config(self, config):
        saml2_config = config.get("saml2_config", None)
        if saml2_config:
            self.saml2_enabled = saml2_config.get("enabled", True)
            self.saml2_config_path = saml2_config["config_path"]
            self.saml2_idp_redirect_url = saml2_config["idp_redirect_url"]
        else:
            self.saml2_enabled = False
            self.saml2_config_path = None
            self.saml2_idp_redirect_url = None

    def default_config(self, config_dir_path, server_name, **kwargs):
        return """
        # Enable SAML2 for registration and login. Uses pysaml2
        # config_path:      Path to the sp_conf.py configuration file
        # idp_redirect_url: Identity provider URL which will redirect
        #                   the user back to /login/saml2 with proper info.
        # See pysaml2 docs for format of config.
        #saml2_config:
        #   enabled: true
        #   config_path: "%s/sp_conf.py"
        #   idp_redirect_url: "http://%s/idp"
        """ % (config_dir_path, server_name)

View File

@@ -17,64 +17,213 @@ from ._base import Config

 class ServerConfig(Config):

-    def __init__(self, args):
-        super(ServerConfig, self).__init__(args)
-        self.server_name = args.server_name
-        self.bind_port = args.bind_port
-        self.bind_host = args.bind_host
-        self.unsecure_port = args.unsecure_port
-        self.daemonize = args.daemonize
-        self.pid_file = self.abspath(args.pid_file)
-        self.web_client = args.web_client
-        self.manhole = args.manhole
-        self.soft_file_limit = args.soft_file_limit
-
-        if not args.content_addr:
-            host = args.server_name
+    def read_config(self, config):
+        self.server_name = config["server_name"]
+        self.pid_file = self.abspath(config.get("pid_file"))
+        self.web_client = config["web_client"]
+        self.web_client_location = config.get("web_client_location", None)
+        self.soft_file_limit = config["soft_file_limit"]
+        self.daemonize = config.get("daemonize")
+        self.print_pidfile = config.get("print_pidfile")
+        self.user_agent_suffix = config.get("user_agent_suffix")
+        self.use_frozen_dicts = config.get("use_frozen_dicts", True)
+        self.listeners = config.get("listeners", [])
+
+        bind_port = config.get("bind_port")
+        if bind_port:
+            self.listeners = []
+            bind_host = config.get("bind_host", "")
+            gzip_responses = config.get("gzip_responses", True)
+
+            names = ["client", "webclient"] if self.web_client else ["client"]
+
+            self.listeners.append({
+                "port": bind_port,
+                "bind_address": bind_host,
+                "tls": True,
+                "type": "http",
+                "resources": [
+                    {
+                        "names": names,
+                        "compress": gzip_responses,
+                    },
+                    {
+                        "names": ["federation"],
+                        "compress": False,
+                    }
+                ]
+            })
+
+            unsecure_port = config.get("unsecure_port", bind_port - 400)
+            if unsecure_port:
+                self.listeners.append({
+                    "port": unsecure_port,
+                    "bind_address": bind_host,
+                    "tls": False,
+                    "type": "http",
+                    "resources": [
+                        {
+                            "names": names,
+                            "compress": gzip_responses,
+                        },
+                        {
+                            "names": ["federation"],
+                            "compress": False,
+                        }
+                    ]
+                })
+
+        manhole = config.get("manhole")
+        if manhole:
+            self.listeners.append({
+                "port": manhole,
+                "bind_address": "127.0.0.1",
+                "type": "manhole",
+            })
+
+        metrics_port = config.get("metrics_port")
+        if metrics_port:
+            self.listeners.append({
+                "port": metrics_port,
+                "bind_address": config.get("metrics_bind_host", "127.0.0.1"),
+                "tls": False,
+                "type": "http",
+                "resources": [
+                    {
+                        "names": ["metrics"],
+                        "compress": False,
+                    },
+                ]
+            })
+
+        # Attempt to guess the content_addr for the v0 content repository
+        content_addr = config.get("content_addr")
+        if not content_addr:
+            for listener in self.listeners:
+                if listener["type"] == "http" and not listener.get("tls", False):
+                    unsecure_port = listener["port"]
+                    break
+            else:
+                raise RuntimeError("Could not determine 'content_addr'")
+
+            host = self.server_name
             if ':' not in host:
-                host = "%s:%d" % (host, args.unsecure_port)
+                host = "%s:%d" % (host, unsecure_port)
             else:
                 host = host.split(':')[0]
-                host = "%s:%d" % (host, args.unsecure_port)
-            args.content_addr = "http://%s" % (host,)
+                host = "%s:%d" % (host, unsecure_port)
+            content_addr = "http://%s" % (host,)

-        self.content_addr = args.content_addr
+        self.content_addr = content_addr
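The content_addr guess above just splices the first non-TLS listener port
onto the server_name host; illustratively:

    for server_name, unsecure_port in [("matrix.org", 8008), ("example.com:8448", 8008)]:
        host = server_name
        if ':' not in host:
            host = "%s:%d" % (host, unsecure_port)
        else:
            host = host.split(':')[0]
            host = "%s:%d" % (host, unsecure_port)
        print "http://%s" % (host,)   # http://matrix.org:8008, then http://example.com:8008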
@classmethod def default_config(self, server_name, **kwargs):
def add_arguments(cls, parser): if ":" in server_name:
super(ServerConfig, cls).add_arguments(parser) bind_port = int(server_name.split(":")[1])
unsecure_port = bind_port - 400
else:
bind_port = 8448
unsecure_port = 8008
pid_file = self.abspath("homeserver.pid")
return """\
## Server ##
# The domain name of the server, with optional explicit port.
# This is used by remote servers to connect to this server,
# e.g. matrix.org, localhost:8080, etc.
server_name: "%(server_name)s"
# When running as a daemon, the file to store the pid in
pid_file: %(pid_file)s
# Whether to serve a web client from the HTTP/HTTPS root resource.
web_client: True
# Set the soft limit on the number of file descriptors synapse can use
# Zero is used to indicate synapse should set the soft limit to the
# hard limit.
soft_file_limit: 0
# List of ports that Synapse should listen on, their purpose and their
# configuration.
listeners:
# Main HTTPS listener
# For when matrix traffic is sent directly to synapse.
-
# The port to listen for HTTPS requests on.
port: %(bind_port)s
# Local interface to listen on.
# The empty string will cause synapse to listen on all interfaces.
bind_address: ''
# This is a 'http' listener, allows us to specify 'resources'.
type: http
tls: true
# Use the X-Forwarded-For (XFF) header as the client IP and not the
# actual client IP.
x_forwarded: false
# List of HTTP resources to serve on this listener.
resources:
-
# List of resources to host on this listener.
names:
- client # The client-server APIs, both v1 and v2
- webclient # The bundled webclient.
# Should synapse compress HTTP responses to clients that support it?
# This should be disabled if running synapse behind a load balancer
# that can do automatic compression.
compress: true
- names: [federation] # Federation APIs
compress: false
# Unsecure HTTP listener,
# For when matrix traffic passes through a loadbalancer that unwraps TLS.
- port: %(unsecure_port)s
tls: false
bind_address: ''
type: http
x_forwarded: false
resources:
- names: [client, webclient]
compress: true
- names: [federation]
compress: false
# Turn on the twisted telnet manhole service on localhost on the given
# port.
# - port: 9000
# bind_address: 127.0.0.1
# type: manhole
""" % locals()
def read_arguments(self, args):
if args.manhole is not None:
self.manhole = args.manhole
if args.daemonize is not None:
self.daemonize = args.daemonize
if args.print_pidfile is not None:
self.print_pidfile = args.print_pidfile
def add_arguments(self, parser):
        server_group = parser.add_argument_group("server")
server_group.add_argument(
"-H", "--server-name", default="localhost",
help="The domain name of the server, with optional explicit port. "
"This is used by remote servers to connect to this server, "
"e.g. matrix.org, localhost:8080, etc."
)
server_group.add_argument("-p", "--bind-port", metavar="PORT",
type=int, help="https port to listen on",
default=8448)
server_group.add_argument("--unsecure-port", metavar="PORT",
type=int, help="http port to listen on",
default=8008)
server_group.add_argument("--bind-host", default="",
help="Local interface to listen on")
server_group.add_argument("-D", "--daemonize", action='store_true', server_group.add_argument("-D", "--daemonize", action='store_true',
default=None,
help="Daemonize the home server") help="Daemonize the home server")
server_group.add_argument('--pid-file', default="homeserver.pid", server_group.add_argument("--print-pidfile", action='store_true',
help="When running as a daemon, the file to" default=None,
" store the pid in") help="Print the path to the pidfile just"
server_group.add_argument('--web_client', default=True, type=bool, " before daemonizing")
help="Whether or not to serve a web client")
server_group.add_argument("--manhole", metavar="PORT", dest="manhole", server_group.add_argument("--manhole", metavar="PORT", dest="manhole",
type=int, type=int,
help="Turn on the twisted telnet manhole" help="Turn on the twisted telnet manhole"
" service on the given port.") " service on the given port.")
server_group.add_argument("--content-addr", default=None,
help="The host and scheme to use for the "
"content repository")
server_group.add_argument("--soft-file-limit", type=int, default=0,
help="Set the soft limit on the number of "
"file descriptors synapse can use. "
"Zero is used to indicate synapse "
"should set the soft limit to the hard"
"limit.")

View File

@@ -23,37 +23,57 @@ GENERATE_DH_PARAMS = False
class TlsConfig(Config):
-    def __init__(self, args):
-        super(TlsConfig, self).__init__(args)
+    def read_config(self, config):
        self.tls_certificate = self.read_tls_certificate(
-            args.tls_certificate_path
+            config.get("tls_certificate_path")
        )
+        self.tls_certificate_file = config.get("tls_certificate_path")

-        self.no_tls = args.no_tls
+        self.no_tls = config.get("no_tls", False)

        if self.no_tls:
            self.tls_private_key = None
        else:
            self.tls_private_key = self.read_tls_private_key(
-                args.tls_private_key_path
+                config.get("tls_private_key_path")
            )

        self.tls_dh_params_path = self.check_file(
-            args.tls_dh_params_path, "tls_dh_params"
+            config.get("tls_dh_params_path"), "tls_dh_params"
        )

-    @classmethod
-    def add_arguments(cls, parser):
-        super(TlsConfig, cls).add_arguments(parser)
-        tls_group = parser.add_argument_group("tls")
-        tls_group.add_argument("--tls-certificate-path",
-                               help="PEM encoded X509 certificate for TLS")
-        tls_group.add_argument("--tls-private-key-path",
-                               help="PEM encoded private key for TLS")
-        tls_group.add_argument("--tls-dh-params-path",
-                               help="PEM dh parameters for ephemeral keys")
-        tls_group.add_argument("--no-tls", action='store_true',
-                               help="Don't bind to the https port.")
+        # This config option applies to non-federation HTTP clients
+        # (e.g. for talking to recaptcha, identity servers, and such)
+        # It should never be used in production, and is intended for
+        # use only when running tests.
+        self.use_insecure_ssl_client_just_for_testing_do_not_use = config.get(
+            "use_insecure_ssl_client_just_for_testing_do_not_use"
+        )

+    def default_config(self, config_dir_path, server_name, **kwargs):
+        base_key_name = os.path.join(config_dir_path, server_name)
+
+        tls_certificate_path = base_key_name + ".tls.crt"
+        tls_private_key_path = base_key_name + ".tls.key"
+        tls_dh_params_path = base_key_name + ".tls.dh"
return """\
# PEM encoded X509 certificate for TLS.
# You can replace the self-signed certificate that synapse
# autogenerates on launch with your own SSL certificate + key pair
# if you like. Any required intermediary certificates can be
# appended after the primary certificate in hierarchical order.
tls_certificate_path: "%(tls_certificate_path)s"
# PEM encoded private key for TLS
tls_private_key_path: "%(tls_private_key_path)s"
# PEM dh parameters for ephemeral keys
tls_dh_params_path: "%(tls_dh_params_path)s"
# Don't bind to the https port
no_tls: False
""" % locals()
    def read_tls_certificate(self, cert_path):
        cert_pem = self.read_file(cert_path, "tls_certificate")
@@ -63,22 +83,13 @@ class TlsConfig(Config):
        private_key_pem = self.read_file(private_key_path, "tls_private_key")
        return crypto.load_privatekey(crypto.FILETYPE_PEM, private_key_pem)
-    @classmethod
-    def generate_config(cls, args, config_dir_path):
-        super(TlsConfig, cls).generate_config(args, config_dir_path)
-        base_key_name = os.path.join(config_dir_path, args.server_name)
-
-        if args.tls_certificate_path is None:
-            args.tls_certificate_path = base_key_name + ".tls.crt"
-        if args.tls_private_key_path is None:
-            args.tls_private_key_path = base_key_name + ".tls.key"
-        if args.tls_dh_params_path is None:
-            args.tls_dh_params_path = base_key_name + ".tls.dh"
-
-        if not os.path.exists(args.tls_private_key_path):
-            with open(args.tls_private_key_path, "w") as private_key_file:
+    def generate_files(self, config):
+        tls_certificate_path = config["tls_certificate_path"]
+        tls_private_key_path = config["tls_private_key_path"]
+        tls_dh_params_path = config["tls_dh_params_path"]
+
+        if not os.path.exists(tls_private_key_path):
+            with open(tls_private_key_path, "w") as private_key_file:
                tls_private_key = crypto.PKey()
                tls_private_key.generate_key(crypto.TYPE_RSA, 2048)
                private_key_pem = crypto.dump_privatekey(
@@ -86,17 +97,17 @@ class TlsConfig(Config):
                )
                private_key_file.write(private_key_pem)
        else:
-            with open(args.tls_private_key_path) as private_key_file:
+            with open(tls_private_key_path) as private_key_file:
                private_key_pem = private_key_file.read()
                tls_private_key = crypto.load_privatekey(
                    crypto.FILETYPE_PEM, private_key_pem
                )

-        if not os.path.exists(args.tls_certificate_path):
-            with open(args.tls_certificate_path, "w") as certifcate_file:
+        if not os.path.exists(tls_certificate_path):
+            with open(tls_certificate_path, "w") as certificate_file:
                cert = crypto.X509()
                subject = cert.get_subject()
-                subject.CN = args.server_name
+                subject.CN = config["server_name"]

                cert.set_serial_number(1000)
                cert.gmtime_adj_notBefore(0)
@@ -108,18 +119,18 @@ class TlsConfig(Config):
                cert_pem = crypto.dump_certificate(crypto.FILETYPE_PEM, cert)
-                certifcate_file.write(cert_pem)
+                certificate_file.write(cert_pem)

-        if not os.path.exists(args.tls_dh_params_path):
+        if not os.path.exists(tls_dh_params_path):
            if GENERATE_DH_PARAMS:
                subprocess.check_call([
                    "openssl", "dhparam",
                    "-outform", "PEM",
-                    "-out", args.tls_dh_params_path,
+                    "-out", tls_dh_params_path,
                    "2048"
                ])
            else:
-                with open(args.tls_dh_params_path, "w") as dh_params_file:
+                with open(tls_dh_params_path, "w") as dh_params_file:
                    dh_params_file.write(
                        "2048-bit DH parameters taken from rfc3526\n"
                        "-----BEGIN DH PARAMETERS-----\n"

View File

@@ -17,28 +17,21 @@ from ._base import Config
class VoipConfig(Config):
-    def __init__(self, args):
-        super(VoipConfig, self).__init__(args)
-        self.turn_uris = args.turn_uris
-        self.turn_shared_secret = args.turn_shared_secret
-        self.turn_user_lifetime = args.turn_user_lifetime
+    def read_config(self, config):
+        self.turn_uris = config.get("turn_uris", [])
+        self.turn_shared_secret = config["turn_shared_secret"]
+        self.turn_user_lifetime = self.parse_duration(config["turn_user_lifetime"])

-    @classmethod
-    def add_arguments(cls, parser):
-        super(VoipConfig, cls).add_arguments(parser)
-        group = parser.add_argument_group("voip")
-        group.add_argument(
-            "--turn-uris", type=str, default=None, action='append',
-            help="The public URIs of the TURN server to give to clients"
-        )
-        group.add_argument(
-            "--turn-shared-secret", type=str, default=None,
-            help=(
-                "The shared secret used to compute passwords for the TURN"
-                " server"
-            )
-        )
-        group.add_argument(
-            "--turn-user-lifetime", type=int, default=(1000 * 60 * 60),
-            help="How long generated TURN credentials last, in ms"
-        )
+    def default_config(self, **kwargs):
+        return """\
+        ## Turn ##
+
+        # The public URIs of the TURN server to give to clients
+        turn_uris: []
+
+        # The shared secret used to compute passwords for the TURN server
+        turn_shared_secret: "YOUR_SHARED_SECRET"
+
+        # How long generated TURN credentials last
+        turn_user_lifetime: "1h"
+        """
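# For context (not shown in this diff): turn_shared_secret and
# turn_user_lifetime feed the standard TURN REST credential scheme, in which
# the username embeds an expiry timestamp and the password is an HMAC-SHA1 of
# that username under the shared secret. A hedged sketch:
import base64
import hashlib
import hmac
import time

def turn_credentials(user_id, shared_secret, lifetime_seconds=3600):
    expiry = int(time.time()) + lifetime_seconds
    username = "%d:%s" % (expiry, user_id)
    digest = hmac.new(shared_secret.encode("utf8"),
                      username.encode("utf8"),
                      hashlib.sha1).digest()
    password = base64.b64encode(digest).decode("ascii")
    return username, password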

View File

@@ -35,9 +35,9 @@ class ServerContextFactory(ssl.ContextFactory):
            _ecCurve = _OpenSSLECCurve(_defaultCurveName)
            _ecCurve.addECKeyToContext(context)
        except:
-            logger.exception("Failed to enable eliptic curve for TLS")
+            logger.exception("Failed to enable elliptic curve for TLS")
        context.set_options(SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3)
-        context.use_certificate(config.tls_certificate)
+        context.use_certificate_chain_file(config.tls_certificate_file)

        if not config.no_tls:
            context.use_privatekey(config.tls_private_key)
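# Why this hunk matters (illustration, not synapse code): use_certificate()
# takes a single parsed X509 object, so intermediary certificates appended to
# the PEM file were ignored; use_certificate_chain_file() reads the whole file
# and presents every certificate in it, primary first. Paths are placeholders.
from OpenSSL import SSL

ctx = SSL.Context(SSL.SSLv23_METHOD)
ctx.use_certificate_chain_file("/path/to/server.tls.crt")  # cert + intermediates
ctx.use_privatekey_file("/path/to/server.tls.key")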

View File

@@ -15,11 +15,12 @@
# limitations under the License.

-from synapse.events.utils import prune_event
-from syutil.jsonutil import encode_canonical_json
-from syutil.base64util import encode_base64, decode_base64
-from syutil.crypto.jsonsign import sign_json
from synapse.api.errors import SynapseError, Codes
+from synapse.events.utils import prune_event
+from canonicaljson import encode_canonical_json
+from unpaddedbase64 import encode_base64, decode_base64
+from signedjson.sign import sign_json

import hashlib
import logging
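# The syutil -> signedjson/unpaddedbase64/canonicaljson migration above swaps
# packages but keeps the same signing model. A minimal sketch of round-tripping
# a signature with the new libraries (names and values illustrative):
from signedjson.key import generate_signing_key, get_verify_key
from signedjson.sign import sign_json, verify_signed_json

signing_key = generate_signing_key("v1")       # new ed25519 key, version "v1"
signed = sign_json({"foo": "bar"}, "example.com", signing_key)

# Raises signedjson.sign.SignatureVerifyException on a bad signature.
verify_signed_json(signed, "example.com", get_verify_key(signing_key))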

View File

@@ -18,7 +18,9 @@ from twisted.web.http import HTTPClient
from twisted.internet.protocol import Factory
from twisted.internet import defer, reactor
from synapse.http.endpoint import matrix_federation_endpoint
-from synapse.util.logcontext import PreserveLoggingContext
+from synapse.util.logcontext import (
+    preserve_context_over_fn, preserve_context_over_deferred
+)

import simplejson as json
import logging
@@ -40,11 +42,14 @@ def fetch_server_key(server_name, ssl_context_factory, path=KEY_API_V1):
    for i in range(5):
        try:
-            with PreserveLoggingContext():
-                protocol = yield endpoint.connect(factory)
-                server_response, server_certificate = yield protocol.remote_key
-                defer.returnValue((server_response, server_certificate))
-                return
+            protocol = yield preserve_context_over_fn(
+                endpoint.connect, factory
+            )
+            server_response, server_certificate = yield preserve_context_over_deferred(
+                protocol.remote_key
+            )
+            defer.returnValue((server_response, server_certificate))
+            return
        except SynapseKeyClientError as e:
            logger.exception("Error getting key for %r" % (server_name,))
            if e.status.startswith("4"):

View File

@@ -14,22 +14,24 @@
# limitations under the License. # limitations under the License.
from synapse.crypto.keyclient import fetch_server_key
+from synapse.api.errors import SynapseError, Codes
+from synapse.util.retryutils import get_retry_limiter
+from synapse.util import unwrapFirstError
+from synapse.util.async import ObservableDeferred
from twisted.internet import defer
-from syutil.crypto.jsonsign import (
+from signedjson.sign import (
    verify_signed_json, signature_ids, sign_json, encode_canonical_json
)
-from syutil.crypto.signing_key import (
+from signedjson.key import (
    is_signing_algorithm_supported, decode_verify_key_bytes
)
-from syutil.base64util import decode_base64, encode_base64
-from synapse.api.errors import SynapseError, Codes
-from synapse.util.retryutils import get_retry_limiter
-from synapse.util.async import create_observer
+from unpaddedbase64 import decode_base64, encode_base64

from OpenSSL import crypto
+from collections import namedtuple

import urllib
import hashlib
import logging
@@ -38,6 +40,9 @@ import logging
logger = logging.getLogger(__name__)
KeyGroup = namedtuple("KeyGroup", ("server_name", "group_id", "key_ids"))
class Keyring(object):
    def __init__(self, hs):
        self.store = hs.get_datastore()
@@ -49,131 +54,331 @@ class Keyring(object):
        self.key_downloads = {}
-    @defer.inlineCallbacks
    def verify_json_for_server(self, server_name, json_object):
-        logger.debug("Verifying for %s", server_name)
-        key_ids = signature_ids(json_object, server_name)
-        if not key_ids:
-            raise SynapseError(
-                400,
-                "Not signed with a supported algorithm",
-                Codes.UNAUTHORIZED,
-            )
-        try:
-            verify_key = yield self.get_server_verify_key(server_name, key_ids)
-        except IOError as e:
-            logger.warn(
-                "Got IOError when downloading keys for %s: %s %s",
-                server_name, type(e).__name__, str(e.message),
-            )
-            raise SynapseError(
-                502,
-                "Error downloading keys for %s" % (server_name,),
-                Codes.UNAUTHORIZED,
-            )
-        except Exception as e:
-            logger.warn(
-                "Got Exception when downloading keys for %s: %s %s",
-                server_name, type(e).__name__, str(e.message),
-            )
-            raise SynapseError(
-                401,
-                "No key for %s with id %s" % (server_name, key_ids),
-                Codes.UNAUTHORIZED,
-            )
-        try:
-            verify_signed_json(json_object, server_name, verify_key)
-        except:
-            raise SynapseError(
-                401,
-                "Invalid signature for server %s with key %s:%s" % (
-                    server_name, verify_key.alg, verify_key.version
-                ),
-                Codes.UNAUTHORIZED,
-            )
+        return self.verify_json_objects_for_server(
+            [(server_name, json_object)]
+        )[0]

+    def verify_json_objects_for_server(self, server_and_json):
+        """Bulk verifies signatures of json objects, bulk fetching keys as
+        necessary.
-    @defer.inlineCallbacks
-    def get_server_verify_key(self, server_name, key_ids):
-        """Finds a verification key for the server with one of the key ids.
-        Trys to fetch the key from a trusted perspective server first.
-        Args:
-            server_name(str): The name of the server to fetch a key for.
-            keys_ids (list of str): The key_ids to check for.
-        """
-        cached = yield self.store.get_server_verify_keys(server_name, key_ids)
-
-        if cached:
-            defer.returnValue(cached[0])
-            return
-
-        download = self.key_downloads.get(server_name)
-
-        if download is None:
-            download = self._get_server_verify_key_impl(server_name, key_ids)
-            self.key_downloads[server_name] = download
-
-            @download.addBoth
-            def callback(ret):
-                del self.key_downloads[server_name]
-                return ret
-
-        r = yield create_observer(download)
-        defer.returnValue(r)
-
-    @defer.inlineCallbacks
-    def _get_server_verify_key_impl(self, server_name, key_ids):
-        keys = None
+        Args:
+            server_and_json (list): List of pairs of (server_name, json_object)
+
+        Returns:
+            list of deferreds indicating success or failure to verify each
+            json object's signature for the given server_name.
+        """
+        group_id_to_json = {}
+        group_id_to_group = {}
+        group_ids = []
+
+        next_group_id = 0
+        deferreds = {}
+
+        for server_name, json_object in server_and_json:
+            logger.debug("Verifying for %s", server_name)
+            group_id = next_group_id
+            next_group_id += 1
+            group_ids.append(group_id)
+
+            key_ids = signature_ids(json_object, server_name)
+            if not key_ids:
+                deferreds[group_id] = defer.fail(SynapseError(
+                    400,
+                    "Not signed with a supported algorithm",
+                    Codes.UNAUTHORIZED,
+                ))
+            else:
+                deferreds[group_id] = defer.Deferred()
+
+            group = KeyGroup(server_name, group_id, key_ids)
+
+            group_id_to_group[group_id] = group
+            group_id_to_json[group_id] = json_object
+
+        @defer.inlineCallbacks
+        def handle_key_deferred(group, deferred):
+            server_name = group.server_name
try:
_, _, key_id, verify_key = yield deferred
except IOError as e:
logger.warn(
"Got IOError when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
502,
"Error downloading keys for %s" % (server_name,),
Codes.UNAUTHORIZED,
)
except Exception as e:
logger.exception(
"Got Exception when downloading keys for %s: %s %s",
server_name, type(e).__name__, str(e.message),
)
raise SynapseError(
401,
"No key for %s with id %s" % (server_name, key_ids),
Codes.UNAUTHORIZED,
)
-        perspective_results = []
-        for perspective_name, perspective_keys in self.perspective_servers.items():
-            @defer.inlineCallbacks
-            def get_key():
-                try:
-                    result = yield self.get_server_verify_key_v2_indirect(
-                        server_name, key_ids, perspective_name, perspective_keys
-                    )
-                    defer.returnValue(result)
-                except:
-                    logging.info(
-                        "Unable to getting key %r for %r from %r",
-                        key_ids, server_name, perspective_name,
-                    )
-            perspective_results.append(get_key())
+            json_object = group_id_to_json[group.group_id]
-        perspective_results = yield defer.gatherResults(perspective_results)
-
-        for results in perspective_results:
-            if results is not None:
-                keys = results
-
-        limiter = yield get_retry_limiter(
-            server_name,
-            self.clock,
-            self.store,
-        )
-
-        with limiter:
-            if keys is None:
+            try:
+                verify_signed_json(json_object, server_name, verify_key)
+            except:
+                raise SynapseError(
+                    401,
+                    "Invalid signature for server %s with key %s:%s" % (
+                        server_name, verify_key.alg, verify_key.version
+                    ),
+                    Codes.UNAUTHORIZED,
+                )
+
+        server_to_deferred = {
+            server_name: defer.Deferred()
+            for server_name, _ in server_and_json
+        }
+
+        # We want to wait for any previous lookups to complete before
+        # proceeding.
+        wait_on_deferred = self.wait_for_previous_lookups(
+            [server_name for server_name, _ in server_and_json],
+            server_to_deferred,
+        )
+
+        # Actually start fetching keys.
+        wait_on_deferred.addBoth(
+            lambda _: self.get_server_verify_keys(group_id_to_group, deferreds)
+        )
# When we've finished fetching all the keys for a given server_name,
# resolve the deferred passed to `wait_for_previous_lookups` so that
# any lookups waiting will proceed.
server_to_gids = {}
def remove_deferreds(res, server_name, group_id):
server_to_gids[server_name].discard(group_id)
if not server_to_gids[server_name]:
d = server_to_deferred.pop(server_name, None)
if d:
d.callback(None)
return res
for g_id, deferred in deferreds.items():
server_name = group_id_to_group[g_id].server_name
server_to_gids.setdefault(server_name, set()).add(g_id)
deferred.addBoth(remove_deferreds, server_name, g_id)
# Pass those keys to handle_key_deferred so that the json object
# signatures can be verified
return [
handle_key_deferred(
group_id_to_group[g_id],
deferreds[g_id],
)
for g_id in group_ids
]
@defer.inlineCallbacks
def wait_for_previous_lookups(self, server_names, server_to_deferred):
"""Waits for any previous key lookups for the given servers to finish.
Args:
server_names (list): list of server_names we want to lookup
server_to_deferred (dict): server_name to deferred which gets
resolved once we've finished looking up keys for that server
"""
while True:
wait_on = [
self.key_downloads[server_name]
for server_name in server_names
if server_name in self.key_downloads
]
if wait_on:
yield defer.DeferredList(wait_on)
else:
break
for server_name, deferred in server_to_deferred.items():
d = ObservableDeferred(deferred)
self.key_downloads[server_name] = d
def rm(r, server_name):
self.key_downloads.pop(server_name, None)
return r
d.addBoth(rm, server_name)
def get_server_verify_keys(self, group_id_to_group, group_id_to_deferred):
"""Takes a dict of KeyGroups and tries to find at least one key for
each group.
"""
# These are functions that produce keys given a list of key ids
key_fetch_fns = (
self.get_keys_from_store, # First try the local store
self.get_keys_from_perspectives, # Then try via perspectives
self.get_keys_from_server, # Then try directly
)
@defer.inlineCallbacks
def do_iterations():
merged_results = {}
missing_keys = {}
for group in group_id_to_group.values():
                missing_keys.setdefault(group.server_name, set()).update(group.key_ids)
for fn in key_fetch_fns:
results = yield fn(missing_keys.items())
merged_results.update(results)
# We now need to figure out which groups we have keys for
# and which we don't
missing_groups = {}
for group in group_id_to_group.values():
for key_id in group.key_ids:
if key_id in merged_results[group.server_name]:
group_id_to_deferred[group.group_id].callback((
group.group_id,
group.server_name,
key_id,
merged_results[group.server_name][key_id],
))
break
else:
missing_groups.setdefault(
group.server_name, []
).append(group)
if not missing_groups:
break
missing_keys = {
server_name: set(
key_id for group in groups for key_id in group.key_ids
)
for server_name, groups in missing_groups.items()
}
            for groups in missing_groups.values():
                for group in groups:
                    group_id_to_deferred[group.group_id].errback(SynapseError(
401,
"No key for %s with id %s" % (
group.server_name, group.key_ids,
),
Codes.UNAUTHORIZED,
))
def on_err(err):
for deferred in group_id_to_deferred.values():
if not deferred.called:
deferred.errback(err)
do_iterations().addErrback(on_err)
return group_id_to_deferred
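        # A condensed, synchronous sketch (not synapse code) of the fallback
        # chain get_server_verify_keys drives above: try each fetcher in turn,
        # re-requesting only what is still missing.
        #
        #     def fetch_with_fallbacks(wanted_key_ids, fetch_fns):
        #         found, missing = {}, set(wanted_key_ids)
        #         for fetch in fetch_fns:
        #             if not missing:
        #                 break
        #             results = fetch(missing)     # returns {key_id: key}
        #             found.update(results)
        #             missing -= set(results)
        #         return found, missing
        #
        # e.g. fetch_with_fallbacks(ids, [local_store, perspectives, direct]),
        # where the three callables are hypothetical stand-ins for the
        # key_fetch_fns tuple above.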
@defer.inlineCallbacks
def get_keys_from_store(self, server_name_and_key_ids):
res = yield defer.gatherResults(
[
self.store.get_server_verify_keys(
server_name, key_ids
).addCallback(lambda ks, server: (server, ks), server_name)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
defer.returnValue(dict(res))
@defer.inlineCallbacks
def get_keys_from_perspectives(self, server_name_and_key_ids):
@defer.inlineCallbacks
def get_key(perspective_name, perspective_keys):
try:
result = yield self.get_server_verify_key_v2_indirect(
server_name_and_key_ids, perspective_name, perspective_keys
)
defer.returnValue(result)
except Exception as e:
logger.exception(
"Unable to get key from %r: %s %s",
perspective_name,
type(e).__name__, str(e.message),
)
defer.returnValue({})
results = yield defer.gatherResults(
[
get_key(p_name, p_keys)
for p_name, p_keys in self.perspective_servers.items()
],
consumeErrors=True,
).addErrback(unwrapFirstError)
union_of_keys = {}
for result in results:
for server_name, keys in result.items():
union_of_keys.setdefault(server_name, {}).update(keys)
defer.returnValue(union_of_keys)
@defer.inlineCallbacks
def get_keys_from_server(self, server_name_and_key_ids):
@defer.inlineCallbacks
def get_key(server_name, key_ids):
limiter = yield get_retry_limiter(
server_name,
self.clock,
self.store,
)
with limiter:
keys = None
                try:
                    keys = yield self.get_server_verify_key_v2_direct(
                        server_name, key_ids
                    )
-                except:
-                    pass
+                except Exception as e:
+                    logger.info(
+                        "Unable to get key %r for %r directly: %s %s",
+                        key_ids, server_name,
+                        type(e).__name__, str(e.message),
+                    )

-                keys = yield self.get_server_verify_key_v1_direct(
-                    server_name, key_ids
-                )
-
-                for key_id in key_ids:
-                    if key_id in keys:
-                        defer.returnValue(keys[key_id])
-                        return
-                raise ValueError("No verification key found for given key ids")
+                if not keys:
+                    keys = yield self.get_server_verify_key_v1_direct(
+                        server_name, key_ids
+                    )
+
+                keys = {server_name: keys}
+                defer.returnValue(keys)

+        results = yield defer.gatherResults(
[
get_key(server_name, key_ids)
for server_name, key_ids in server_name_and_key_ids
],
consumeErrors=True,
).addErrback(unwrapFirstError)
merged = {}
for result in results:
merged.update(result)
defer.returnValue({
server_name: keys
for server_name, keys in merged.items()
if keys
})
    @defer.inlineCallbacks
-    def get_server_verify_key_v2_indirect(self, server_name, key_ids,
+    def get_server_verify_key_v2_indirect(self, server_names_and_key_ids,
                                           perspective_name,
                                           perspective_keys):
        limiter = yield get_retry_limiter(
@@ -184,7 +389,7 @@ class Keyring(object):
        # TODO(mark): Set the minimum_valid_until_ts to that needed by
        # the events being validated or the current time if validating
        # an incoming request.
-        responses = yield self.client.post_json(
+        query_response = yield self.client.post_json(
            destination=perspective_name,
            path=b"/_matrix/key/v2/query",
            data={
@@ -194,12 +399,15 @@ class Keyring(object):
u"minimum_valid_until_ts": 0 u"minimum_valid_until_ts": 0
} for key_id in key_ids } for key_id in key_ids
} }
for server_name, key_ids in server_names_and_key_ids
} }
}, },
) )
keys = {} keys = {}
responses = query_response["server_keys"]
for response in responses: for response in responses:
if (u"signatures" not in response if (u"signatures" not in response
or perspective_name not in response[u"signatures"]): or perspective_name not in response[u"signatures"]):
@@ -231,23 +439,29 @@ class Keyring(object):
" server %r" % (perspective_name,) " server %r" % (perspective_name,)
) )
response_keys = yield self.process_v2_response( processed_response = yield self.process_v2_response(
server_name, perspective_name, response perspective_name, response
) )
keys.update(response_keys) for server_name, response_keys in processed_response.items():
keys.setdefault(server_name, {}).update(response_keys)
yield self.store_keys( yield defer.gatherResults(
server_name=server_name, [
from_server=perspective_name, self.store_keys(
verify_keys=keys, server_name=server_name,
) from_server=perspective_name,
verify_keys=response_keys,
)
for server_name, response_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError)
defer.returnValue(keys) defer.returnValue(keys)
    @defer.inlineCallbacks
    def get_server_verify_key_v2_direct(self, server_name, key_ids):
        keys = {}

        for requested_key_id in key_ids:
@@ -255,7 +469,7 @@ class Keyring(object):
                continue

            (response, tls_certificate) = yield fetch_server_key(
-                server_name, self.hs.tls_context_factory,
+                server_name, self.hs.tls_server_context_factory,
                path=(b"/_matrix/key/v2/server/%s" % (
                    urllib.quote(requested_key_id),
                )).encode("ascii"),
@@ -283,25 +497,30 @@ class Keyring(object):
raise ValueError("TLS certificate not allowed by fingerprints") raise ValueError("TLS certificate not allowed by fingerprints")
response_keys = yield self.process_v2_response( response_keys = yield self.process_v2_response(
server_name=server_name,
from_server=server_name, from_server=server_name,
requested_id=requested_key_id, requested_ids=[requested_key_id],
response_json=response, response_json=response,
) )
keys.update(response_keys) keys.update(response_keys)
yield self.store_keys( yield defer.gatherResults(
server_name=server_name, [
from_server=server_name, self.store_keys(
verify_keys=keys, server_name=key_server_name,
) from_server=server_name,
verify_keys=verify_keys,
)
for key_server_name, verify_keys in keys.items()
],
consumeErrors=True
).addErrback(unwrapFirstError)
defer.returnValue(keys) defer.returnValue(keys)
    @defer.inlineCallbacks
-    def process_v2_response(self, server_name, from_server, response_json,
-                            requested_id=None):
+    def process_v2_response(self, from_server, response_json,
+                            requested_ids=[]):
        time_now_ms = self.clock.time_msec()
        response_keys = {}
        verify_keys = {}
@@ -323,7 +542,9 @@ class Keyring(object):
            verify_key.time_added = time_now_ms
            old_verify_keys[key_id] = verify_key

+        results = {}
+        server_name = response_json["server_name"]
-        for key_id in response_json["signatures"][server_name]:
+        for key_id in response_json["signatures"].get(server_name, {}):
            if key_id not in response_json["verify_keys"]:
                raise ValueError(
                    "Key response must include verification keys for all"
@@ -345,28 +566,31 @@ class Keyring(object):
        signed_key_json_bytes = encode_canonical_json(signed_key_json)
        ts_valid_until_ms = signed_key_json[u"valid_until_ts"]

-        updated_key_ids = set()
-        if requested_id is not None:
-            updated_key_ids.add(requested_id)
+        updated_key_ids = set(requested_ids)
        updated_key_ids.update(verify_keys)
        updated_key_ids.update(old_verify_keys)

        response_keys.update(verify_keys)
        response_keys.update(old_verify_keys)

-        for key_id in updated_key_ids:
-            yield self.store.store_server_keys_json(
-                server_name=server_name,
-                key_id=key_id,
-                from_server=server_name,
-                ts_now_ms=time_now_ms,
-                ts_expires_ms=ts_valid_until_ms,
-                key_json_bytes=signed_key_json_bytes,
-            )
-
-        defer.returnValue(response_keys)
-        raise ValueError("No verification key found for given key ids")
+        yield defer.gatherResults(
+            [
+                self.store.store_server_keys_json(
+                    server_name=server_name,
+                    key_id=key_id,
+                    from_server=server_name,
+                    ts_now_ms=time_now_ms,
+                    ts_expires_ms=ts_valid_until_ms,
+                    key_json_bytes=signed_key_json_bytes,
+                )
+                for key_id in updated_key_ids
+            ],
+            consumeErrors=True,
+        ).addErrback(unwrapFirstError)
+
+        results[server_name] = response_keys
+        defer.returnValue(results)
    @defer.inlineCallbacks
    def get_server_verify_key_v1_direct(self, server_name, key_ids):
@@ -379,7 +603,7 @@ class Keyring(object):
        # Try to fetch the key from the remote server.
        (response, tls_certificate) = yield fetch_server_key(
-            server_name, self.hs.tls_context_factory
+            server_name, self.hs.tls_server_context_factory
        )

        # Check the response.
@@ -450,8 +674,13 @@ class Keyring(object):
        Returns:
            A deferred that completes when the keys are stored.
        """
-        for key_id, key in verify_keys.items():
-            # TODO(markjh): Store whether the keys have expired.
-            yield self.store.store_server_verify_key(
-                server_name, server_name, key.time_added, key
-            )
+        # TODO(markjh): Store whether the keys have expired.
+        yield defer.gatherResults(
+            [
+                self.store.store_server_verify_key(
+                    server_name, server_name, key.time_added, key
+                )
+                for key_id, key in verify_keys.items()
+            ],
+            consumeErrors=True,
+        ).addErrback(unwrapFirstError)
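# Toy version (not synapse code) of the key_downloads bookkeeping used by
# wait_for_previous_lookups above: concurrent lookups for one server share a
# single in-flight Deferred instead of racing. Error propagation to the
# waiters is elided for brevity.
from twisted.internet import defer

class LookupDeduper(object):
    def __init__(self, do_lookup):
        self._do_lookup = do_lookup      # callable: server_name -> Deferred
        self._waiters = {}               # server_name -> [Deferred]

    def lookup(self, server_name):
        if server_name in self._waiters:
            d = defer.Deferred()         # piggy-back on the in-flight lookup
            self._waiters[server_name].append(d)
            return d
        self._waiters[server_name] = []

        def done(result):
            for waiter in self._waiters.pop(server_name):
                waiter.callback(result)
            return result

        return self._do_lookup(server_name).addCallback(done)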

View File

@@ -16,6 +16,12 @@
from synapse.util.frozenutils import freeze from synapse.util.frozenutils import freeze
# Whether we should use frozen_dict in FrozenEvent. Using frozen_dicts prevents
# bugs where we accidentally share e.g. signature dicts. However, converting
# a dict to frozen_dicts is expensive.
USE_FROZEN_DICTS = True
class _EventInternalMetadata(object):
    def __init__(self, internal_metadata_dict):
        self.__dict__ = dict(internal_metadata_dict)
@@ -84,7 +90,7 @@ class EventBase(object):
        d = dict(self._event_dict)
        d.update({
            "signatures": self.signatures,
-            "unsigned": self.unsigned,
+            "unsigned": dict(self.unsigned),
        })

        return d
@@ -103,6 +109,9 @@ class EventBase(object):
pdu_json.setdefault("unsigned", {})["age"] = int(age) pdu_json.setdefault("unsigned", {})["age"] = int(age)
del pdu_json["unsigned"]["age_ts"] del pdu_json["unsigned"]["age_ts"]
# This may be a frozen event
pdu_json["unsigned"].pop("redacted_because", None)
        return pdu_json

    def __set__(self, instance, value):
@@ -122,7 +131,10 @@ class FrozenEvent(EventBase):
unsigned = dict(event_dict.pop("unsigned", {})) unsigned = dict(event_dict.pop("unsigned", {}))
frozen_dict = freeze(event_dict) if USE_FROZEN_DICTS:
frozen_dict = freeze(event_dict)
else:
frozen_dict = event_dict
super(FrozenEvent, self).__init__( super(FrozenEvent, self).__init__(
frozen_dict, frozen_dict,
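# Why USE_FROZEN_DICTS exists (illustration, not synapse code): two events
# that share a mutable dict can silently corrupt each other.
sigs = {"example.com": {"ed25519:v1": "abc"}}
event_a = {"signatures": sigs}
event_b = {"signatures": sigs}                     # accidental sharing
event_a["signatures"]["example.com"]["ed25519:v1"] = "tampered"
assert event_b["signatures"]["example.com"]["ed25519:v1"] == "tampered"
# freeze() converts nested dicts to immutable mappings, so the assignment
# above would raise instead of mutating both events -- at the cost of the
# conversion, which is why this flag allows turning it off.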

View File

@@ -66,7 +66,6 @@ def prune_event(event):
"users_default", "users_default",
"events", "events",
"events_default", "events_default",
"events_default",
"state_default", "state_default",
"ban", "ban",
"kick", "kick",
@@ -74,6 +73,8 @@ def prune_event(event):
        )
    elif event_type == EventTypes.Aliases:
        add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
add_fields("history_visibility")
    allowed_fields = {
        k: v
@@ -101,7 +102,10 @@ def format_event_raw(d):
def format_event_for_client_v1(d):
    d["user_id"] = d.pop("sender", None)

-    move_keys = ("age", "redacted_because", "replaces_state", "prev_content")
+    move_keys = (
+        "age", "redacted_because", "replaces_state", "prev_content",
+        "invite_room_state",
+    )
    for key in move_keys:
        if key in d["unsigned"]:
            d[key] = d["unsigned"][key]
@@ -150,7 +154,8 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if "redacted_because" in e.unsigned: if "redacted_because" in e.unsigned:
d["unsigned"]["redacted_because"] = serialize_event( d["unsigned"]["redacted_because"] = serialize_event(
e.unsigned["redacted_because"], time_now_ms e.unsigned["redacted_because"], time_now_ms,
event_format=event_format
) )
if token_id is not None: if token_id is not None:
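# Condensed sketch of the redaction rule the first hunk above extends (the
# real prune_event also keeps top-level keys and the power_level fields):
ALLOWED_CONTENT_FIELDS = {
    "m.room.aliases": ["aliases"],
    "m.room.history_visibility": ["history_visibility"],   # newly added
}

def prune_content(event_type, content):
    keep = ALLOWED_CONTENT_FIELDS.get(event_type, [])
    return {k: v for k, v in content.items() if k in keep}

assert prune_content(
    "m.room.history_visibility",
    {"history_visibility": "shared", "other": 1},
) == {"history_visibility": "shared"}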

View File

@@ -18,12 +18,12 @@ from twisted.internet import defer
from synapse.events.utils import prune_event

-from syutil.jsonutil import encode_canonical_json
from synapse.crypto.event_signing import check_event_content_hash

from synapse.api.errors import SynapseError
+from synapse.util import unwrapFirstError

import logging
@@ -32,7 +32,8 @@ logger = logging.getLogger(__name__)
class FederationBase(object):
    @defer.inlineCallbacks
-    def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False):
+    def _check_sigs_and_hash_and_fetch(self, origin, pdus, outlier=False,
+                                       include_none=False):
        """Takes a list of PDUs and checks the signatures and hashes of each
        one. If a PDU fails its signature check then we check if we have it in
        the database, and if not then request it from the originating server of
@@ -50,84 +51,108 @@ class FederationBase(object):
        Returns:
            Deferred : A list of PDUs that have valid signatures and hashes.
        """
-        signed_pdus = []
-
-        @defer.inlineCallbacks
-        def do(pdu):
-            try:
-                new_pdu = yield self._check_sigs_and_hash(pdu)
-                signed_pdus.append(new_pdu)
-            except SynapseError:
-                # FIXME: We should handle signature failures more gracefully.
-
-                # Check local db.
-                new_pdu = yield self.store.get_event(
-                    pdu.event_id,
-                    allow_rejected=True,
-                    allow_none=True,
-                )
-                if new_pdu:
-                    signed_pdus.append(new_pdu)
-                    return
-
-                # Check pdu.origin
-                if pdu.origin != origin:
-                    try:
-                        new_pdu = yield self.get_pdu(
-                            destinations=[pdu.origin],
-                            event_id=pdu.event_id,
-                            outlier=outlier,
-                        )
-
-                        if new_pdu:
-                            signed_pdus.append(new_pdu)
-                            return
-                    except:
-                        pass
-
-                logger.warn(
-                    "Failed to find copy of %s with valid signature",
-                    pdu.event_id,
-                )
-
-        yield defer.gatherResults(
-            [do(pdu) for pdu in pdus],
-            consumeErrors=True
-        )
-
-        defer.returnValue(signed_pdus)
+        deferreds = self._check_sigs_and_hashes(pdus)
+
+        def callback(pdu):
+            return pdu
+
+        def errback(failure, pdu):
+            failure.trap(SynapseError)
+            return None
+
+        def try_local_db(res, pdu):
+            if not res:
+                # Check local db.
+                return self.store.get_event(
+                    pdu.event_id,
+                    allow_rejected=True,
+                    allow_none=True,
+                )
+            return res
+
+        def try_remote(res, pdu):
+            if not res and pdu.origin != origin:
+                return self.get_pdu(
+                    destinations=[pdu.origin],
+                    event_id=pdu.event_id,
+                    outlier=outlier,
+                    timeout=10000,
+                ).addErrback(lambda e: None)
+            return res
+
+        def warn(res, pdu):
+            if not res:
+                logger.warn(
+                    "Failed to find copy of %s with valid signature",
+                    pdu.event_id,
+                )
+            return res
+
+        for pdu, deferred in zip(pdus, deferreds):
+            deferred.addCallbacks(
+                callback, errback, errbackArgs=[pdu]
+            ).addCallback(
+                try_local_db, pdu
+            ).addCallback(
+                try_remote, pdu
+            ).addCallback(
+                warn, pdu
+            )
+
+        valid_pdus = yield defer.gatherResults(
+            deferreds,
+            consumeErrors=True
+        ).addErrback(unwrapFirstError)
+
+        if include_none:
+            defer.returnValue(valid_pdus)
+        else:
+            defer.returnValue([p for p in valid_pdus if p])
-    @defer.inlineCallbacks
    def _check_sigs_and_hash(self, pdu):
-        """Throws a SynapseError if the PDU does not have the correct
-        signatures.
-
-        Returns:
-            FrozenEvent: Either the given event or it redacted if it failed the
-            content hash check.
-        """
-        # Check signatures are correct.
-        redacted_event = prune_event(pdu)
-        redacted_pdu_json = redacted_event.get_pdu_json()
-
-        try:
-            yield self.keyring.verify_json_for_server(
-                pdu.origin, redacted_pdu_json
-            )
-        except SynapseError:
-            logger.warn(
-                "Signature check failed for %s redacted to %s",
-                encode_canonical_json(pdu.get_pdu_json()),
-                encode_canonical_json(redacted_pdu_json),
-            )
-            raise
-
-        if not check_event_content_hash(pdu):
-            logger.warn(
-                "Event content has been tampered, redacting %s, %s",
-                pdu.event_id, encode_canonical_json(pdu.get_dict())
-            )
-            defer.returnValue(redacted_event)
-
-        defer.returnValue(pdu)
+        return self._check_sigs_and_hashes([pdu])[0]

+    def _check_sigs_and_hashes(self, pdus):
+        """Throws a SynapseError if a PDU does not have the correct
+        signatures.
+
+        Returns:
+            FrozenEvent: Either the given event or a redacted copy of it if it
+            failed the content hash check.
+        """
+        redacted_pdus = [
+            prune_event(pdu)
+            for pdu in pdus
+        ]
+
+        deferreds = self.keyring.verify_json_objects_for_server([
+            (p.origin, p.get_pdu_json())
+            for p in redacted_pdus
+        ])
+
+        def callback(_, pdu, redacted):
+            if not check_event_content_hash(pdu):
+                logger.warn(
+                    "Event content has been tampered, redacting %s: %s",
+                    pdu.event_id, pdu.get_pdu_json()
+                )
+                return redacted
+            return pdu
+
+        def errback(failure, pdu):
+            failure.trap(SynapseError)
+            logger.warn(
+                "Signature check failed for %s",
+                pdu.event_id,
+            )
+            return failure
+
+        for deferred, pdu, redacted in zip(deferreds, pdus, redacted_pdus):
+            deferred.addCallbacks(
+                callback, errback,
+                callbackArgs=[pdu, redacted],
+                errbackArgs=[pdu],
+            )
+
+        return deferreds
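# Generic shape of the per-PDU Deferred chain built above (sketch, not
# synapse code): a failed signature check is swallowed, then each fallback
# runs only while the running result is still None.
from twisted.internet import defer

def with_fallbacks(d, fallbacks):
    def step(fn):
        return lambda res: fn() if res is None else res

    d.addErrback(lambda failure: None)   # like errback(): trap and yield None
    for fn in fallbacks:
        d.addCallback(step(fn))
    return d

# usage sketch: with_fallbacks(check_sigs(pdu), [try_local_db, try_remote]),
# where check_sigs/try_local_db/try_remote are hypothetical callables
# standing in for the closures defined above.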

View File

@@ -17,18 +17,21 @@
from twisted.internet import defer

from .federation_base import FederationBase
+from synapse.api.constants import Membership
from .units import Edu

from synapse.api.errors import (
    CodeMessageException, HttpResponseException, SynapseError,
)
-from synapse.util.expiringcache import ExpiringCache
+from synapse.util import unwrapFirstError
+from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logutils import log_function
from synapse.events import FrozenEvent
import synapse.metrics

from synapse.util.retryutils import get_retry_limiter, NotRetryingDestination

+import copy
import itertools
import logging
import random
@@ -132,6 +135,36 @@ class FederationClient(FederationBase):
            destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail
        )
@log_function
def query_client_keys(self, destination, content):
"""Query device keys for a device hosted on a remote server.
Args:
destination (str): Domain name of the remote homeserver
content (dict): The query content.
Returns:
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_device_keys")
return self.transport_layer.query_client_keys(destination, content)
@log_function
def claim_client_keys(self, destination, content):
"""Claims one-time keys for a device hosted on a remote server.
Args:
destination (str): Domain name of the remote homeserver
content (dict): The query content.
Returns:
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_one_time_keys")
return self.transport_layer.claim_client_keys(destination, content)
    @defer.inlineCallbacks
    @log_function
    def backfill(self, dest, context, limit, extremities):
@@ -164,16 +197,17 @@ class FederationClient(FederationBase):
for p in transaction_data["pdus"] for p in transaction_data["pdus"]
] ]
for i, pdu in enumerate(pdus): # FIXME: We should handle signature failures more gracefully.
pdus[i] = yield self._check_sigs_and_hash(pdu) pdus[:] = yield defer.gatherResults(
self._check_sigs_and_hashes(pdus),
# FIXME: We should handle signature failures more gracefully. consumeErrors=True,
).addErrback(unwrapFirstError)
defer.returnValue(pdus) defer.returnValue(pdus)
    @defer.inlineCallbacks
    @log_function
-    def get_pdu(self, destinations, event_id, outlier=False):
+    def get_pdu(self, destinations, event_id, outlier=False, timeout=None):
        """Requests the PDU with given origin and ID from the remote home
        servers.
@@ -189,6 +223,8 @@ class FederationClient(FederationBase):
            outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
                it's from an arbitrary point in the context as opposed to part
                of the current block of PDUs. Defaults to `False`
timeout (int): How long to try (in ms) each destination for before
moving to the next destination. None indicates no timeout.
        Returns:
            Deferred: Results in the requested PDU.
@@ -212,7 +248,7 @@ class FederationClient(FederationBase):
                with limiter:
                    transaction_data = yield self.transport_layer.get_event(
-                        destination, event_id
+                        destination, event_id, timeout=timeout,
                    )

                    logger.debug("transaction_data %r", transaction_data)
@@ -222,11 +258,11 @@ class FederationClient(FederationBase):
for p in transaction_data["pdus"] for p in transaction_data["pdus"]
] ]
if pdu_list: if pdu_list and pdu_list[0]:
pdu = pdu_list[0] pdu = pdu_list[0]
# Check signatures are correct. # Check signatures are correct.
pdu = yield self._check_sigs_and_hash(pdu) pdu = yield self._check_sigs_and_hashes([pdu])[0]
break break
@@ -255,7 +291,7 @@ class FederationClient(FederationBase):
                    )
                    continue

-        if self._get_pdu_cache is not None:
+        if self._get_pdu_cache is not None and pdu:
            self._get_pdu_cache[event_id] = pdu

        defer.returnValue(pdu)
@@ -321,16 +357,55 @@ class FederationClient(FederationBase):
        defer.returnValue(signed_auth)

    @defer.inlineCallbacks
-    def make_join(self, destinations, room_id, user_id):
+    def make_membership_event(self, destinations, room_id, user_id, membership,
+                              content={},):
"""
Creates an m.room.member event, with context, without participating in the room.
Does so by asking one of the already participating servers to create an
event with proper context.
Note that this does not append any events to any graphs.
Args:
destinations (str): Candidate homeservers which are probably
participating in the room.
room_id (str): The room in which the event will happen.
user_id (str): The user whose membership is being evented.
membership (str): The "membership" property of the event. Must be
one of "join" or "leave".
content (object): Any additional data to put into the content field
of the event.
Return:
A tuple of (origin (str), event (object)) where origin is the remote
homeserver which generated the event.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
raise RuntimeError(
"make_membership_event called with membership='%s', must be one of %s" %
(membership, ",".join(valid_memberships))
)
        for destination in destinations:
+            if destination == self.server_name:
+                continue

            try:
-                ret = yield self.transport_layer.make_join(
-                    destination, room_id, user_id
+                ret = yield self.transport_layer.make_membership_event(
+                    destination, room_id, user_id, membership
                )

                pdu_dict = ret["event"]

-                logger.debug("Got response to make_join: %s", pdu_dict)
+                logger.debug("Got response to make_%s: %s", membership, pdu_dict)
pdu_dict["content"].update(content)
# The protoevent received over the JSON wire may not have all
# the required fields. Lets just gloss over that because
# there's some we never care about
if "prev_state" not in pdu_dict:
pdu_dict["prev_state"] = []
                defer.returnValue(
                    (destination, self.event_from_pdu_json(pdu_dict))
@@ -340,8 +415,8 @@ class FederationClient(FederationBase):
                raise
            except Exception as e:
                logger.warn(
-                    "Failed to make_join via %s: %s",
-                    destination, e.message
+                    "Failed to make_%s via %s: %s",
+                    membership, destination, e.message
                )

        raise RuntimeError("Failed to send to any server.")
@@ -349,6 +424,9 @@ class FederationClient(FederationBase):
    @defer.inlineCallbacks
    def send_join(self, destinations, pdu):
        for destination in destinations:
+            if destination == self.server_name:
+                continue

            try:
                time_now = self._clock.time_msec()
                _, content = yield self.transport_layer.send_join(
@@ -370,13 +448,39 @@ class FederationClient(FederationBase):
for p in content.get("auth_chain", []) for p in content.get("auth_chain", [])
] ]
signed_state = yield self._check_sigs_and_hash_and_fetch( pdus = {
destination, state, outlier=True p.event_id: p
for p in itertools.chain(state, auth_chain)
}
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, pdus.values(),
outlier=True,
) )
signed_auth = yield self._check_sigs_and_hash_and_fetch( valid_pdus_map = {
destination, auth_chain, outlier=True p.event_id: p
) for p in valid_pdus
}
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
signed_state = [
copy.copy(valid_pdus_map[p.event_id])
for p in state
if p.event_id in valid_pdus_map
]
signed_auth = [
valid_pdus_map[p.event_id]
for p in auth_chain
if p.event_id in valid_pdus_map
]
# NB: We *need* to copy to ensure that we don't have multiple
# references being passed on, as that causes... issues.
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
auth_chain.sort(key=lambda e: e.depth) auth_chain.sort(key=lambda e: e.depth)
@@ -388,7 +492,7 @@ class FederationClient(FederationBase):
            except CodeMessageException:
                raise
            except Exception as e:
-                logger.warn(
+                logger.exception(
                    "Failed to send_join via %s: %s",
                    destination, e.message
                )
@@ -418,6 +522,33 @@ class FederationClient(FederationBase):
        defer.returnValue(pdu)
@defer.inlineCallbacks
def send_leave(self, destinations, pdu):
for destination in destinations:
if destination == self.server_name:
continue
try:
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
logger.debug("Got content: %s", content)
defer.returnValue(None)
except CodeMessageException:
raise
except Exception as e:
logger.exception(
"Failed to send_leave via %s: %s",
destination, e.message
)
raise RuntimeError("Failed to send to any server.")
    @defer.inlineCallbacks
    def query_auth(self, destination, room_id, event_id, local_auth):
        """
@@ -491,7 +622,7 @@ class FederationClient(FederationBase):
            ]

            signed_events = yield self._check_sigs_and_hash_and_fetch(
-                destination, events, outlier=True
+                destination, events, outlier=False
            )

            have_gotten_all_from_destination = True
@@ -518,7 +649,7 @@ class FederationClient(FederationBase):
        # Are we missing any?

        seen_events = set(earliest_events_ids)
-        seen_events.update(e.event_id for e in signed_events)
+        seen_events.update(e.event_id for e in signed_events if e)

        missing_events = {}
        for e in itertools.chain(latest_events, signed_events):
@@ -561,7 +692,7 @@ class FederationClient(FederationBase):
            res = yield defer.DeferredList(deferreds, consumeErrors=True)

            for (result, val), (e_id, _) in zip(res, ordered_missing):
-                if result:
+                if result and val:
                    signed_events.append(val)
                else:
                    failed_to_fetch.add(e_id)
@@ -576,3 +707,26 @@ class FederationClient(FederationBase):
        event.internal_metadata.outlier = outlier
        return event
@defer.inlineCallbacks
def forward_third_party_invite(self, destinations, room_id, event_dict):
for destination in destinations:
if destination == self.server_name:
continue
try:
yield self.transport_layer.exchange_third_party_invite(
destination=destination,
room_id=room_id,
event_dict=event_dict,
)
defer.returnValue(None)
except CodeMessageException:
raise
except Exception as e:
logger.exception(
"Failed to send_third_party_invite via %s: %s",
destination, e.message
)
raise RuntimeError("Failed to send to any server.")

View File

@@ -20,7 +20,6 @@ from .federation_base import FederationBase
from .units import Transaction, Edu

from synapse.util.logutils import log_function
-from synapse.util.logcontext import PreserveLoggingContext
from synapse.events import FrozenEvent
import synapse.metrics
@@ -28,6 +27,7 @@ from synapse.api.errors import FederationError, SynapseError
from synapse.crypto.event_signing import compute_event_signature

+import simplejson as json
import logging
@@ -123,29 +123,28 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id) logger.debug("[%s] Transaction is new", transaction.transaction_id)
with PreserveLoggingContext(): results = []
results = []
for pdu in pdu_list: for pdu in pdu_list:
d = self._handle_new_pdu(transaction.origin, pdu) d = self._handle_new_pdu(transaction.origin, pdu)
try: try:
yield d yield d
results.append({}) results.append({})
except FederationError as e: except FederationError as e:
self.send_failure(e, transaction.origin) self.send_failure(e, transaction.origin)
results.append({"error": str(e)}) results.append({"error": str(e)})
except Exception as e: except Exception as e:
results.append({"error": str(e)}) results.append({"error": str(e)})
logger.exception("Failed to handle PDU") logger.exception("Failed to handle PDU")
if hasattr(transaction, "edus"): if hasattr(transaction, "edus"):
for edu in [Edu(**x) for x in transaction.edus]: for edu in [Edu(**x) for x in transaction.edus]:
self.received_edu( self.received_edu(
transaction.origin, transaction.origin,
edu.edu_type, edu.edu_type,
edu.content edu.content
) )
for failure in getattr(transaction, "pdu_failures", []): for failure in getattr(transaction, "pdu_failures", []):
logger.info("Got failure %r", failure) logger.info("Got failure %r", failure)
@@ -255,6 +254,20 @@ class FederationServer(FederationBase):
            ],
        }))
@defer.inlineCallbacks
def on_make_leave_request(self, room_id, user_id):
pdu = yield self.handler.on_make_leave_request(room_id, user_id)
time_now = self._clock.time_msec()
defer.returnValue({"event": pdu.get_pdu_json(time_now)})
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content):
logger.debug("on_send_leave_request: content: %s", content)
pdu = self.event_from_pdu_json(content)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
yield self.handler.on_send_leave_request(origin, pdu)
defer.returnValue((200, {}))
@defer.inlineCallbacks @defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id): def on_event_auth(self, origin, room_id, event_id):
time_now = self._clock.time_msec() time_now = self._clock.time_msec()
@@ -314,6 +327,48 @@ class FederationServer(FederationBase):
(200, send_content) (200, send_content)
) )
@defer.inlineCallbacks
@log_function
def on_query_client_keys(self, origin, content):
query = []
for user_id, device_ids in content.get("device_keys", {}).items():
if not device_ids:
query.append((user_id, None))
else:
for device_id in device_ids:
query.append((user_id, device_id))
results = yield self.store.get_e2e_device_keys(query)
json_result = {}
for user_id, device_keys in results.items():
for device_id, json_bytes in device_keys.items():
json_result.setdefault(user_id, {})[device_id] = json.loads(
json_bytes
)
defer.returnValue({"device_keys": json_result})
@defer.inlineCallbacks
@log_function
def on_claim_client_keys(self, origin, content):
query = []
for user_id, device_keys in content.get("one_time_keys", {}).items():
for device_id, algorithm in device_keys.items():
query.append((user_id, device_id, algorithm))
results = yield self.store.claim_e2e_one_time_keys(query)
json_result = {}
for user_id, device_keys in results.items():
for device_id, keys in device_keys.items():
for key_id, json_bytes in keys.items():
json_result.setdefault(user_id, {})[device_id] = {
key_id: json.loads(json_bytes)
}
defer.returnValue({"one_time_keys": json_result})
@defer.inlineCallbacks @defer.inlineCallbacks
@log_function @log_function
def on_get_missing_events(self, origin, room_id, earliest_events, def on_get_missing_events(self, origin, room_id, earliest_events,
@@ -488,3 +543,15 @@ class FederationServer(FederationBase):
event.internal_metadata.outlier = outlier event.internal_metadata.outlier = outlier
return event return event
@defer.inlineCallbacks
def exchange_third_party_invite(self, invite):
ret = yield self.handler.exchange_third_party_invite(invite)
defer.returnValue(ret)
@defer.inlineCallbacks
def on_exchange_third_party_invite_request(self, origin, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
origin, room_id, event_dict
)
defer.returnValue(ret)


@@ -23,8 +23,6 @@ from twisted.internet import defer
 from synapse.util.logutils import log_function
-
-from syutil.jsonutil import encode_canonical_json

 import logging
@@ -71,7 +69,7 @@ class TransactionActions(object):
             transaction.transaction_id,
             transaction.origin,
             code,
-            encode_canonical_json(response)
+            response,
         )

     @defer.inlineCallbacks
@@ -101,5 +99,5 @@ class TransactionActions(object):
             transaction.transaction_id,
             transaction.destination,
             response_code,
-            encode_canonical_json(response_dict)
+            response_dict,
         )


@@ -104,7 +104,6 @@ class TransactionQueue(object):
         return not destination.startswith("localhost")

     @defer.inlineCallbacks
-    @log_function
     def enqueue_pdu(self, pdu, destinations, order):
         # We loop through all destinations to see whether we already have
         # a transaction in progress. If we do, stick it in the pending_pdus
@@ -203,46 +202,48 @@ class TransactionQueue(object):
     @defer.inlineCallbacks
     @log_function
     def _attempt_new_transaction(self, destination):
-        # list of (pending_pdu, deferred, order)
         if destination in self.pending_transactions:
             # XXX: pending_transactions can get stuck on by a never-ending
             # request at which point pending_pdus_by_dest just keeps growing.
             # we need application-layer timeouts of some flavour of these
             # requests
-            logger.info(
+            logger.debug(
                 "TX [%s] Transaction already in progress",
                 destination
             )
             return

+        logger.info("TX [%s] _attempt_new_transaction", destination)
+
+        # list of (pending_pdu, deferred, order)
         pending_pdus = self.pending_pdus_by_dest.pop(destination, [])
         pending_edus = self.pending_edus_by_dest.pop(destination, [])
         pending_failures = self.pending_failures_by_dest.pop(destination, [])

         if pending_pdus:
-            logger.info("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
-                        destination, len(pending_pdus))
+            logger.debug("TX [%s] len(pending_pdus_by_dest[dest]) = %d",
+                         destination, len(pending_pdus))

         if not pending_pdus and not pending_edus and not pending_failures:
-            logger.info("TX [%s] Nothing to send", destination)
+            logger.debug("TX [%s] Nothing to send", destination)
             return

+        # Sort based on the order field
+        pending_pdus.sort(key=lambda t: t[2])
+
+        pdus = [x[0] for x in pending_pdus]
+        edus = [x[0] for x in pending_edus]
+        failures = [x[0].get_dict() for x in pending_failures]
+        deferreds = [
+            x[1]
+            for x in pending_pdus + pending_edus + pending_failures
+        ]
+
         try:
             self.pending_transactions[destination] = 1

-            logger.debug("TX [%s] _attempt_new_transaction", destination)
-
-            # Sort based on the order field
-            pending_pdus.sort(key=lambda t: t[2])
-
-            pdus = [x[0] for x in pending_pdus]
-            edus = [x[0] for x in pending_edus]
-            failures = [x[0].get_dict() for x in pending_failures]
-            deferreds = [
-                x[1]
-                for x in pending_pdus + pending_edus + pending_failures
-            ]
+            txn_id = str(self._next_txn_id)

             limiter = yield get_retry_limiter(
                 destination,
                 self._clock,
@@ -250,9 +251,9 @@ class TransactionQueue(object):
             )

             logger.debug(
-                "TX [%s] Attempting new transaction"
+                "TX [%s] {%s} Attempting new transaction"
                 " (pdus: %d, edus: %d, failures: %d)",
-                destination,
+                destination, txn_id,
                 len(pending_pdus),
                 len(pending_edus),
                 len(pending_failures)
@@ -262,7 +263,7 @@ class TransactionQueue(object):
             transaction = Transaction.create_new(
                 origin_server_ts=int(self._clock.time_msec()),
-                transaction_id=str(self._next_txn_id),
+                transaction_id=txn_id,
                 origin=self.server_name,
                 destination=destination,
                 pdus=pdus,
@@ -276,9 +277,13 @@ class TransactionQueue(object):
             logger.debug("TX [%s] Persisted transaction", destination)
             logger.info(
-                "TX [%s] Sending transaction [%s]",
-                destination,
+                "TX [%s] {%s} Sending transaction [%s],"
+                " (PDUs: %d, EDUs: %d, failures: %d)",
+                destination, txn_id,
                 transaction.transaction_id,
+                len(pending_pdus),
+                len(pending_edus),
+                len(pending_failures),
             )

             with limiter:
@@ -314,7 +319,10 @@ class TransactionQueue(object):
                 code = e.code
                 response = e.response

-                logger.info("TX [%s] got %d response", destination, code)
+                logger.info(
+                    "TX [%s] {%s} got %d response",
+                    destination, txn_id, code
+                )

             logger.debug("TX [%s] Sent transaction", destination)
             logger.debug("TX [%s] Marking as delivered...", destination)


@@ -14,6 +14,7 @@
 # limitations under the License.
 from twisted.internet import defer

+from synapse.api.constants import Membership
 from synapse.api.urls import FEDERATION_PREFIX as PREFIX
 from synapse.util.logutils import log_function
@@ -50,13 +51,15 @@ class TransportLayerClient(object):
         )

     @log_function
-    def get_event(self, destination, event_id):
+    def get_event(self, destination, event_id, timeout=None):
         """ Requests the pdu with give id and origin from the given server.

         Args:
             destination (str): The host name of the remote home server we want
                 to get the state from.
             event_id (str): The id of the event being requested.
+            timeout (int): How long to try (in ms) the destination for before
+                giving up. None indicates no timeout.

         Returns:
             Deferred: Results in a dict received from the remote homeserver.
@@ -65,7 +68,7 @@ class TransportLayerClient(object):
                      destination, event_id)

         path = PREFIX + "/event/%s/" % (event_id, )
-        return self.client.get_json(destination, path=path)
+        return self.client.get_json(destination, path=path, timeout=timeout)

     @log_function
     def backfill(self, destination, room_id, event_tuples, limit):
@@ -158,13 +161,19 @@ class TransportLayerClient(object):
     @defer.inlineCallbacks
     @log_function
-    def make_join(self, destination, room_id, user_id, retry_on_dns_fail=True):
-        path = PREFIX + "/make_join/%s/%s" % (room_id, user_id)
+    def make_membership_event(self, destination, room_id, user_id, membership):
+        valid_memberships = {Membership.JOIN, Membership.LEAVE}
+        if membership not in valid_memberships:
+            raise RuntimeError(
+                "make_membership_event called with membership='%s', must be one of %s" %
+                (membership, ",".join(valid_memberships))
+            )
+        path = PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)

         content = yield self.client.get_json(
             destination=destination,
             path=path,
-            retry_on_dns_fail=retry_on_dns_fail,
+            retry_on_dns_fail=True,
         )

         defer.returnValue(content)
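make_join is generalised into make_membership_event, so /make_join and /make_leave share a single code path. A minimal sketch of the path construction, assuming the standard federation prefix (string literals stand in for the Membership constants):

    JOIN, LEAVE = "join", "leave"           # stand-ins for Membership.JOIN/LEAVE
    PREFIX = "/_matrix/federation/v1"

    def membership_path(membership, room_id, user_id):
        if membership not in {JOIN, LEAVE}:
            raise RuntimeError("unsupported membership %r" % (membership,))
        return PREFIX + "/make_%s/%s/%s" % (membership, room_id, user_id)

    assert membership_path(LEAVE, "!r:hs", "@u:hs") == \
        "/_matrix/federation/v1/make_leave/!r:hs/@u:hs"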
@@ -182,6 +191,19 @@ class TransportLayerClient(object):
         defer.returnValue(response)

+    @defer.inlineCallbacks
+    @log_function
+    def send_leave(self, destination, room_id, event_id, content):
+        path = PREFIX + "/send_leave/%s/%s" % (room_id, event_id)
+
+        response = yield self.client.put_json(
+            destination=destination,
+            path=path,
+            data=content,
+        )
+
+        defer.returnValue(response)
+
     @defer.inlineCallbacks
     @log_function
     def send_invite(self, destination, room_id, event_id, content):
@@ -195,6 +217,19 @@ class TransportLayerClient(object):
         defer.returnValue(response)

+    @defer.inlineCallbacks
+    @log_function
+    def exchange_third_party_invite(self, destination, room_id, event_dict):
+        path = PREFIX + "/exchange_third_party_invite/%s" % (room_id,)
+
+        response = yield self.client.put_json(
+            destination=destination,
+            path=path,
+            data=event_dict,
+        )
+
+        defer.returnValue(response)
+
     @defer.inlineCallbacks
     @log_function
     def get_event_auth(self, destination, room_id, event_id):
@@ -220,6 +255,76 @@ class TransportLayerClient(object):
         defer.returnValue(content)

+    @defer.inlineCallbacks
+    @log_function
+    def query_client_keys(self, destination, query_content):
+        """Query the device keys for a list of user ids hosted on a remote
+        server.
+
+        Request:
+            {
+                "device_keys": {
+                    "<user_id>": ["<device_id>"]
+                }
+            }
+
+        Response:
+            {
+                "device_keys": {
+                    "<user_id>": {
+                        "<device_id>": {...}
+                    }
+                }
+            }
+
+        Args:
+            destination(str): The server to query.
+            query_content(dict): The user ids to query.
+        Returns:
+            A dict containing the device keys.
+        """
+        path = PREFIX + "/user/keys/query"
+
+        content = yield self.client.post_json(
+            destination=destination,
+            path=path,
+            data=query_content,
+        )
+        defer.returnValue(content)
+
+    @defer.inlineCallbacks
+    @log_function
+    def claim_client_keys(self, destination, query_content):
+        """Claim one-time keys for a list of devices hosted on a remote server.
+
+        Request:
+            {
+                "one_time_keys": {
+                    "<user_id>": {
+                        "<device_id>": "<algorithm>"
+                    }
+                }
+            }
+
+        Response:
+            {
+                "device_keys": {
+                    "<user_id>": {
+                        "<device_id>": {
+                            "<algorithm>:<key_id>": "<key_base64>"
+                        }
+                    }
+                }
+            }
+
+        Args:
+            destination(str): The server to query.
+            query_content(dict): The user ids to query.
+        Returns:
+            A dict containing the one-time keys.
+        """
+        path = PREFIX + "/user/keys/claim"
+
+        content = yield self.client.post_json(
+            destination=destination,
+            path=path,
+            data=query_content,
+        )
+        defer.returnValue(content)
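Taken together, query_client_keys and claim_client_keys give the federation side of end-to-end key distribution. A sketch of the payloads either call sends and receives, following the docstring shapes above (user/device ids are invented, and the per-device key objects are placeholders rather than the real schema):

    query_request = {"device_keys": {"@alice:example.org": []}}  # [] = all devices
    query_response = {
        "device_keys": {
            "@alice:example.org": {
                "DEVICEID": {"keys": {"ed25519:DEVICEID": "<key_base64>"}}
            }
        }
    }

    claim_request = {"one_time_keys": {"@alice:example.org": {"DEVICEID": "curve25519"}}}
    claim_response = {
        "device_keys": {
            "@alice:example.org": {
                "DEVICEID": {"curve25519:AAAAAQ": "<key_base64>"}
            }
        }
    }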
     @defer.inlineCallbacks
     @log_function
     def get_missing_events(self, destination, room_id, earliest_events,


@@ -93,6 +93,9 @@ class TransportLayerServer(object):
             yield self.keyring.verify_json_for_server(origin, json_request)

+            logger.info("Request from %s", origin)
+            request.authenticated_entity = origin
+
             defer.returnValue((origin, content))

     @log_function
@@ -196,6 +199,14 @@ class FederationSendServlet(BaseFederationServlet):
                 transaction_id, str(transaction_data)
             )

+            logger.info(
+                "Received txn %s from %s. (PDUs: %d, EDUs: %d, failures: %d)",
+                transaction_id, origin,
+                len(transaction_data.get("pdus", [])),
+                len(transaction_data.get("edus", [])),
+                len(transaction_data.get("failures", [])),
+            )
+
             # We should ideally be getting this from the security layer.
             # origin = body["origin"]
@@ -285,6 +296,24 @@ class FederationMakeJoinServlet(BaseFederationServlet):
         defer.returnValue((200, content))

+class FederationMakeLeaveServlet(BaseFederationServlet):
+    PATH = "/make_leave/([^/]*)/([^/]*)"
+
+    @defer.inlineCallbacks
+    def on_GET(self, origin, content, query, context, user_id):
+        content = yield self.handler.on_make_leave_request(context, user_id)
+        defer.returnValue((200, content))
+
+class FederationSendLeaveServlet(BaseFederationServlet):
+    PATH = "/send_leave/([^/]*)/([^/]*)"
+
+    @defer.inlineCallbacks
+    def on_PUT(self, origin, content, query, room_id, txid):
+        content = yield self.handler.on_send_leave_request(origin, content)
+        defer.returnValue((200, content))
+
 class FederationEventAuthServlet(BaseFederationServlet):
     PATH = "/event_auth/([^/]*)/([^/]*)"
@@ -314,6 +343,35 @@ class FederationInviteServlet(BaseFederationServlet):
         defer.returnValue((200, content))

+class FederationThirdPartyInviteExchangeServlet(BaseFederationServlet):
+    PATH = "/exchange_third_party_invite/([^/]*)"
+
+    @defer.inlineCallbacks
+    def on_PUT(self, origin, content, query, room_id):
+        content = yield self.handler.on_exchange_third_party_invite_request(
+            origin, room_id, content
+        )
+        defer.returnValue((200, content))
+
+class FederationClientKeysQueryServlet(BaseFederationServlet):
+    PATH = "/user/keys/query"
+
+    @defer.inlineCallbacks
+    def on_POST(self, origin, content, query):
+        response = yield self.handler.on_query_client_keys(origin, content)
+        defer.returnValue((200, response))
+
+class FederationClientKeysClaimServlet(BaseFederationServlet):
+    PATH = "/user/keys/claim"
+
+    @defer.inlineCallbacks
+    def on_POST(self, origin, content, query):
+        response = yield self.handler.on_claim_client_keys(origin, content)
+        defer.returnValue((200, response))
+
 class FederationQueryAuthServlet(BaseFederationServlet):
     PATH = "/query_auth/([^/]*)/([^/]*)"
@@ -349,6 +407,30 @@ class FederationGetMissingEventsServlet(BaseFederationServlet):
         defer.returnValue((200, content))

+class On3pidBindServlet(BaseFederationServlet):
+    PATH = "/3pid/onbind"
+
+    @defer.inlineCallbacks
+    def on_POST(self, request):
+        content_bytes = request.content.read()
+        content = json.loads(content_bytes)
+        if "invites" in content:
+            last_exception = None
+            for invite in content["invites"]:
+                try:
+                    yield self.handler.exchange_third_party_invite(invite)
+                except Exception as e:
+                    last_exception = e
+            if last_exception:
+                raise last_exception
+        defer.returnValue((200, {}))
+
+    # Avoid doing remote HS authorization checks which are done by default by
+    # BaseFederationServlet.
+    def _wrap(self, code):
+        return code
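On3pidBindServlet deliberately overrides _wrap as a no-op so an identity server can call it without federation signatures, and it processes every invite before surfacing a failure. The "try them all, re-raise the last error afterwards" loop in isolation (exchange is a hypothetical callable):

    def exchange_all(invites, exchange):
        """Try every invite; if any failed, re-raise the last failure at the end."""
        last_exception = None
        for invite in invites:
            try:
                exchange(invite)
            except Exception as e:
                last_exception = e   # remember, but keep processing the rest
        if last_exception:
            raise last_exception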
 SERVLET_CLASSES = (
     FederationPullServlet,
     FederationEventServlet,
@@ -356,10 +438,16 @@ SERVLET_CLASSES = (
     FederationBackfillServlet,
     FederationQueryServlet,
     FederationMakeJoinServlet,
+    FederationMakeLeaveServlet,
     FederationEventServlet,
     FederationSendJoinServlet,
+    FederationSendLeaveServlet,
     FederationInviteServlet,
     FederationQueryAuthServlet,
     FederationGetMissingEventsServlet,
     FederationEventAuthServlet,
+    FederationClientKeysQueryServlet,
+    FederationClientKeysClaimServlet,
+    FederationThirdPartyInviteExchangeServlet,
+    On3pidBindServlet,
 )


@@ -17,12 +17,11 @@ from synapse.appservice.scheduler import AppServiceScheduler
 from synapse.appservice.api import ApplicationServiceApi
 from .register import RegistrationHandler
 from .room import (
-    RoomCreationHandler, RoomMemberHandler, RoomListHandler
+    RoomCreationHandler, RoomMemberHandler, RoomListHandler, RoomContextHandler,
 )
 from .message import MessageHandler
 from .events import EventStreamHandler, EventHandler
 from .federation import FederationHandler
-from .login import LoginHandler
 from .profile import ProfileHandler
 from .presence import PresenceHandler
 from .directory import DirectoryHandler
@@ -32,6 +31,8 @@ from .appservice import ApplicationServicesHandler
 from .sync import SyncHandler
 from .auth import AuthHandler
 from .identity import IdentityHandler
+from .receipts import ReceiptsHandler
+from .search import SearchHandler

 class Handlers(object):
@@ -53,10 +54,10 @@ class Handlers(object):
         self.profile_handler = ProfileHandler(hs)
         self.presence_handler = PresenceHandler(hs)
         self.room_list_handler = RoomListHandler(hs)
-        self.login_handler = LoginHandler(hs)
         self.directory_handler = DirectoryHandler(hs)
         self.typing_notification_handler = TypingNotificationHandler(hs)
         self.admin_handler = AdminHandler(hs)
+        self.receipts_handler = ReceiptsHandler(hs)
         asapi = ApplicationServiceApi(hs)
         self.appservice_handler = ApplicationServicesHandler(
             hs, asapi, AppServiceScheduler(
@@ -68,3 +69,5 @@ class Handlers(object):
         self.sync_handler = SyncHandler(hs)
         self.auth_handler = AuthHandler(hs)
         self.identity_handler = IdentityHandler(hs)
+        self.search_handler = SearchHandler(hs)
+        self.room_context_handler = RoomContextHandler(hs)


@@ -15,10 +15,12 @@
 from twisted.internet import defer

-from synapse.api.errors import LimitExceededError, SynapseError
+from synapse.api.errors import LimitExceededError, SynapseError, AuthError
 from synapse.crypto.event_signing import add_hashes_and_signatures
 from synapse.api.constants import Membership, EventTypes
-from synapse.types import UserID
+from synapse.types import UserID, RoomAlias
+from synapse.util.logcontext import PreserveLoggingContext

 import logging
@@ -27,6 +29,12 @@ logger = logging.getLogger(__name__)

 class BaseHandler(object):
+    """
+    Common base class for the event handlers.
+
+    :type store: synapse.storage.events.StateStore
+    :type state_handler: synapse.state.StateHandler
+    """

     def __init__(self, hs):
         self.store = hs.get_datastore()
@@ -43,6 +51,74 @@ class BaseHandler(object):
         self.event_builder_factory = hs.get_event_builder_factory()

+    @defer.inlineCallbacks
+    def _filter_events_for_client(self, user_id, events, is_guest=False,
+                                  require_all_visible_for_guests=True):
+        # Assumes that user has at some point joined the room if not is_guest.
+
+        def allowed(event, membership, visibility):
+            if visibility == "world_readable":
+                return True
+
+            if is_guest:
+                return False
+
+            if membership == Membership.JOIN:
+                return True
+
+            if event.type == EventTypes.RoomHistoryVisibility:
+                return not is_guest
+
+            if visibility == "shared":
+                return True
+            elif visibility == "joined":
+                return membership == Membership.JOIN
+            elif visibility == "invited":
+                return membership == Membership.INVITE
+
+            return True
+
+        event_id_to_state = yield self.store.get_state_for_events(
+            frozenset(e.event_id for e in events),
+            types=(
+                (EventTypes.RoomHistoryVisibility, ""),
+                (EventTypes.Member, user_id),
+            )
+        )
+
+        events_to_return = []
+        for event in events:
+            state = event_id_to_state[event.event_id]
+
+            membership_event = state.get((EventTypes.Member, user_id), None)
+            if membership_event:
+                membership = membership_event.membership
+            else:
+                membership = None
+
+            visibility_event = state.get((EventTypes.RoomHistoryVisibility, ""), None)
+            if visibility_event:
+                visibility = visibility_event.content.get("history_visibility", "shared")
+            else:
+                visibility = "shared"
+
+            should_include = allowed(event, membership, visibility)
+            if should_include:
+                events_to_return.append(event)
+
+        if (require_all_visible_for_guests
+                and is_guest
+                and len(events_to_return) < len(events)):
+            # This indicates that some events in the requested range were not
+            # visible to guest users. To be safe, we reject the entire request,
+            # so that we don't have to worry about interpreting visibility
+            # boundaries.
+            raise AuthError(403, "User %s does not have permission" % (
+                user_id
+            ))
+
+        defer.returnValue(events_to_return)
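The visibility rules above collapse to a small decision table. A minimal sketch of the same logic (string literals stand in for the Membership constants, and the RoomHistoryVisibility event-type special case is omitted):

    def allowed(membership, visibility, is_guest):
        if visibility == "world_readable":
            return True
        if is_guest:
            return False           # guests only ever see world_readable history
        if membership == "join":
            return True
        if visibility == "shared":
            return True
        elif visibility == "joined":
            return membership == "join"
        elif visibility == "invited":
            return membership == "invite"
        return True

    assert allowed(None, "world_readable", is_guest=True)
    assert not allowed(None, "shared", is_guest=True)
    assert allowed("invite", "invited", is_guest=False)
    assert not allowed("leave", "joined", is_guest=False)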
     def ratelimit(self, user_id):
         time_now = self.clock.time()
         allowed, time_allowed = self.ratelimiter.send_message(
@@ -76,7 +152,9 @@ class BaseHandler(object):
         context = yield state_handler.compute_event_context(builder)

         if builder.is_state():
-            builder.prev_state = context.prev_state_events
+            builder.prev_state = yield self.store.add_event_hashes(
+                context.prev_state_events
+            )

         yield self.auth.add_auth_events(builder, context)
@@ -103,27 +181,81 @@ class BaseHandler(object):
         if not suppress_auth:
             self.auth.check(event, auth_events=context.current_state)

-        yield self.store.persist_event(event, context=context)
+        yield self.maybe_kick_guest_users(event, context.current_state.values())
+
+        if event.type == EventTypes.CanonicalAlias:
+            # Check the alias is actually valid (at this time at least)
+            room_alias_str = event.content.get("alias", None)
+            if room_alias_str:
+                room_alias = RoomAlias.from_string(room_alias_str)
+                directory_handler = self.hs.get_handlers().directory_handler
+                mapping = yield directory_handler.get_association(room_alias)
+
+                if mapping["room_id"] != event.room_id:
+                    raise SynapseError(
+                        400,
+                        "Room alias %s does not point to the room" % (
+                            room_alias_str,
+                        )
+                    )

         federation_handler = self.hs.get_handlers().federation_handler

         if event.type == EventTypes.Member:
             if event.content["membership"] == Membership.INVITE:
+                event.unsigned["invite_room_state"] = [
+                    {
+                        "type": e.type,
+                        "state_key": e.state_key,
+                        "content": e.content,
+                        "sender": e.sender,
+                    }
+                    for k, e in context.current_state.items()
+                    if e.type in (
+                        EventTypes.JoinRules,
+                        EventTypes.CanonicalAlias,
+                        EventTypes.RoomAvatar,
+                        EventTypes.Name,
+                    )
+                ]
+
                 invitee = UserID.from_string(event.state_key)
                 if not self.hs.is_mine(invitee):
                     # TODO: Can we add signature from remote server in a nicer
                     # way? If we have been invited by a remote server, we need
                     # to get them to sign the event.
                     returned_invite = yield federation_handler.send_invite(
                         invitee.domain,
                         event,
                     )
+
+                    event.unsigned.pop("room_state", None)

                     # TODO: Make sure the signatures actually are correct.
                     event.signatures.update(
                         returned_invite.signatures
                     )

+        if event.type == EventTypes.Redaction:
+            if self.auth.check_redaction(event, auth_events=context.current_state):
+                original_event = yield self.store.get_event(
+                    event.redacts,
+                    check_redacted=False,
+                    get_prev_content=False,
+                    allow_rejected=False,
+                    allow_none=False
+                )
+                if event.user_id != original_event.user_id:
+                    raise AuthError(
+                        403,
+                        "You don't have permission to redact events"
+                    )
+
+        (event_stream_id, max_stream_id) = yield self.store.persist_event(
+            event, context=context
+        )
+
         destinations = set(extra_destinations)
         for k, s in context.current_state.items():
             try:
@@ -137,10 +269,12 @@ class BaseHandler(object):
                     "Failed to get destination from event %s", s.event_id
                 )

-        # Don't block waiting on waking up all the listeners.
-        notify_d = self.notifier.on_new_room_event(
-            event, extra_users=extra_users
-        )
+        with PreserveLoggingContext():
+            # Don't block waiting on waking up all the listeners.
+            notify_d = self.notifier.on_new_room_event(
+                event, event_stream_id, max_stream_id,
+                extra_users=extra_users
+            )

         def log_failure(f):
             logger.warn(
@@ -150,8 +284,64 @@ class BaseHandler(object):
         notify_d.addErrback(log_failure)

-        fed_d = federation_handler.handle_new_event(
+        # If invite, remove room_state from unsigned before sending.
+        event.unsigned.pop("invite_room_state", None)
+
+        federation_handler.handle_new_event(
             event, destinations=destinations,
         )
-
-        fed_d.addErrback(log_failure)

+    @defer.inlineCallbacks
+    def maybe_kick_guest_users(self, event, current_state):
+        # Technically this function invalidates current_state by changing it.
+        # Hopefully this isn't that important to the caller.
+        if event.type == EventTypes.GuestAccess:
+            guest_access = event.content.get("guest_access", "forbidden")
+            if guest_access != "can_join":
+                yield self.kick_guest_users(current_state)
+
+    @defer.inlineCallbacks
+    def kick_guest_users(self, current_state):
+        for member_event in current_state:
+            try:
+                if member_event.type != EventTypes.Member:
+                    continue
+
+                if not self.hs.is_mine(UserID.from_string(member_event.state_key)):
+                    continue
+
+                if member_event.content["membership"] not in {
+                    Membership.JOIN,
+                    Membership.INVITE
+                }:
+                    continue
+
+                if (
+                    "kind" not in member_event.content
+                    or member_event.content["kind"] != "guest"
+                ):
+                    continue
+
+                # We make the user choose to leave, rather than have the
+                # event-sender kick them. This is partially because we don't
+                # need to worry about power levels, and partially because guest
+                # users are a concept which doesn't hugely work over federation,
+                # and having homeservers have their own users leave keeps more
+                # of that decision-making and control local to the guest-having
+                # homeserver.
+                message_handler = self.hs.get_handlers().message_handler
+                yield message_handler.create_and_send_event(
+                    {
+                        "type": EventTypes.Member,
+                        "state_key": member_event.state_key,
+                        "content": {
+                            "membership": Membership.LEAVE,
+                            "kind": "guest"
+                        },
+                        "room_id": member_event.room_id,
+                        "sender": member_event.state_key
+                    },
+                    ratelimit=False,
+                )
+            except Exception as e:
+                logger.warn("Error kicking guest user: %s" % (e,))


@@ -34,6 +34,7 @@ class AdminHandler(BaseHandler):
         d = {}
         for r in res:
+            # Note that device_id is always None
             device = d.setdefault(r["device_id"], {})
             session = device.setdefault(r["access_token"], [])
             session.append({


@@ -15,7 +15,7 @@
 from twisted.internet import defer

-from synapse.api.constants import EventTypes, Membership
+from synapse.api.constants import EventTypes
 from synapse.appservice import ApplicationService
 from synapse.types import UserID
@@ -147,10 +147,7 @@ class ApplicationServicesHandler(object):
         )
         # We need to know the members associated with this event.room_id,
         # if any.
-        member_list = yield self.store.get_room_members(
-            room_id=event.room_id,
-            membership=Membership.JOIN
-        )
+        member_list = yield self.store.get_users_in_room(event.room_id)

         services = yield self.store.get_app_services()
         interested_list = [
@@ -180,7 +177,7 @@ class ApplicationServicesHandler(object):
             return

         user_info = yield self.store.get_user_by_id(user_id)
-        if len(user_info) > 0:
+        if user_info:
             defer.returnValue(False)
             return


@@ -18,14 +18,14 @@ from twisted.internet import defer
 from ._base import BaseHandler
 from synapse.api.constants import LoginType
 from synapse.types import UserID
-from synapse.api.errors import LoginError, Codes
-from synapse.http.client import SimpleHttpClient
+from synapse.api.errors import AuthError, LoginError, Codes
 from synapse.util.async import run_on_reactor

 from twisted.web.client import PartialDownloadError

 import logging
 import bcrypt
+import pymacaroons
 import simplejson

 import synapse.util.stringutils as stringutils
@@ -44,20 +44,29 @@ class AuthHandler(BaseHandler):
             LoginType.EMAIL_IDENTITY: self._check_email_identity,
             LoginType.DUMMY: self._check_dummy_auth,
         }
+        self.bcrypt_rounds = hs.config.bcrypt_rounds
         self.sessions = {}
+        self.INVALID_TOKEN_HTTP_STATUS = 401

     @defer.inlineCallbacks
-    def check_auth(self, flows, clientdict, clientip=None):
+    def check_auth(self, flows, clientdict, clientip):
         """
         Takes a dictionary sent by the client in the login / registration
         protocol and handles the login flow.

+        As a side effect, this function fills in the 'creds' key on the user's
+        session with a map, which maps each auth-type (str) to the relevant
+        identity authenticated by that auth-type (mostly str, but for captcha, bool).
+
         Args:
-            flows: list of list of stages
-            authdict: The dictionary from the client root level, not the
-                      'auth' key: this method prompts for auth if none is sent.
+            flows (list): A list of login flows. Each flow is an ordered list of
+                          strings representing auth-types. At least one full
+                          flow must be completed in order for auth to be successful.
+            clientdict: The dictionary from the client root level, not the
+                        'auth' key: this method prompts for auth if none is sent.
+            clientip (str): The IP address of the client.
         Returns:
-            A tuple of authed, dict, dict where authed is true if the client
+            A tuple of (authed, dict, dict) where authed is true if the client
             has successfully completed an auth flow. If it is true, the first
             dict contains the authenticated credentials of each stage.
@@ -75,7 +84,7 @@ class AuthHandler(BaseHandler):
             del clientdict['auth']
             if 'session' in authdict:
                 sid = authdict['session']
-                sess = self._get_session_info(sid)
+                session = self._get_session_info(sid)

         if len(clientdict) > 0:
             # This was designed to allow the client to omit the parameters
@@ -85,20 +94,21 @@ class AuthHandler(BaseHandler):
             # email auth link on there). It's probably too open to abuse
             # because it lets unauthenticated clients store arbitrary objects
             # on a home server.
-            # sess['clientdict'] = clientdict
-            # self._save_session(sess)
-            pass
-        elif 'clientdict' in sess:
-            clientdict = sess['clientdict']
+            # Revisit: Assuming the REST APIs do sensible validation, the data
+            # isn't arbitrary.
+            session['clientdict'] = clientdict
+            self._save_session(session)
+        elif 'clientdict' in session:
+            clientdict = session['clientdict']

         if not authdict:
             defer.returnValue(
-                (False, self._auth_dict_for_flows(flows, sess), clientdict)
+                (False, self._auth_dict_for_flows(flows, session), clientdict)
             )

-        if 'creds' not in sess:
-            sess['creds'] = {}
-        creds = sess['creds']
+        if 'creds' not in session:
+            session['creds'] = {}
+        creds = session['creds']

         # check auth type currently being presented
         if 'type' in authdict:
@@ -107,15 +117,15 @@ class AuthHandler(BaseHandler):
             result = yield self.checkers[authdict['type']](authdict, clientip)
             if result:
                 creds[authdict['type']] = result
-                self._save_session(sess)
+                self._save_session(session)

         for f in flows:
             if len(set(f) - set(creds.keys())) == 0:
                 logger.info("Auth completed with creds: %r", creds)
-                self._remove_session(sess)
+                self._remove_session(session)
                 defer.returnValue((True, creds, clientdict))

-        ret = self._auth_dict_for_flows(flows, sess)
+        ret = self._auth_dict_for_flows(flows, session)
         ret['completed'] = creds.keys()
         defer.returnValue((False, ret, clientdict))
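The completion test in the loop above is simply "every stage in some flow has a credential". The same check in isolation, with real Matrix login-type names and a made-up creds map:

    flows = [
        ["m.login.password"],
        ["m.login.recaptcha", "m.login.email.identity"],
    ]
    creds = {
        "m.login.recaptcha": True,
        "m.login.email.identity": "alice@example.org",
    }

    authed = any(len(set(f) - set(creds.keys())) == 0 for f in flows)
    assert authed  # the second flow is fully satisfied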
@@ -149,22 +159,14 @@ class AuthHandler(BaseHandler):
         if "user" not in authdict or "password" not in authdict:
             raise LoginError(400, "", Codes.MISSING_PARAM)

-        user = authdict["user"]
+        user_id = authdict["user"]
         password = authdict["password"]
-        if not user.startswith('@'):
-            user = UserID.create(user, self.hs.hostname).to_string()
+        if not user_id.startswith('@'):
+            user_id = UserID.create(user_id, self.hs.hostname).to_string()

-        user_info = yield self.store.get_user_by_id(user_id=user)
-        if not user_info:
-            logger.warn("Attempted to login as %s but they do not exist", user)
-            raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
-
-        stored_hash = user_info[0]["password_hash"]
-        if bcrypt.checkpw(password, stored_hash):
-            defer.returnValue(user)
-        else:
-            logger.warn("Failed password login for user %s", user)
-            raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
+        user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
+        self._check_password(user_id, password, password_hash)
+        defer.returnValue(user_id)

     @defer.inlineCallbacks
     def _check_recaptcha(self, authdict, clientip):
@@ -186,9 +188,9 @@ class AuthHandler(BaseHandler):
         # TODO: get this from the homeserver rather than creating a new one for
         # each request
         try:
-            client = SimpleHttpClient(self.hs)
-            data = yield client.post_urlencoded_get_json(
-                "https://www.google.com/recaptcha/api/siteverify",
+            client = self.hs.get_simple_http_client()
+            resp_body = yield client.post_urlencoded_get_json(
+                self.hs.config.recaptcha_siteverify_api,
                 args={
                     'secret': self.hs.config.recaptcha_private_key,
                     'response': user_response,
@@ -198,7 +200,8 @@ class AuthHandler(BaseHandler):
         except PartialDownloadError as pde:
             # Twisted is silly
             data = pde.response
-
-        resp_body = simplejson.loads(data)
+            resp_body = simplejson.loads(data)
+
         if 'success' in resp_body and resp_body['success']:
             defer.returnValue(True)
         raise LoginError(401, "", errcode=Codes.UNAUTHORIZED)
@@ -267,6 +270,183 @@ class AuthHandler(BaseHandler):
         return self.sessions[session_id]

+    @defer.inlineCallbacks
+    def login_with_password(self, user_id, password):
+        """
+        Authenticates the user with their username and password.
+
+        Used only by the v1 login API.
+
+        Args:
+            user_id (str): User ID
+            password (str): Password
+        Returns:
+            A tuple of:
+              The user's ID.
+              The access token for the user's session.
+              The refresh token for the user's session.
+        Raises:
+            StoreError if there was a problem storing the token.
+            LoginError if there was an authentication problem.
+        """
+        user_id, password_hash = yield self._find_user_id_and_pwd_hash(user_id)
+        self._check_password(user_id, password, password_hash)
+
+        logger.info("Logging in user %s", user_id)
+        access_token = yield self.issue_access_token(user_id)
+        refresh_token = yield self.issue_refresh_token(user_id)
+        defer.returnValue((user_id, access_token, refresh_token))
+
+    @defer.inlineCallbacks
+    def get_login_tuple_for_user_id(self, user_id):
+        """
+        Gets login tuple for the user with the given user ID.
+        The user is assumed to have been authenticated by some other
+        mechanism (e.g. CAS)
+
+        Args:
+            user_id (str): User ID
+        Returns:
+            A tuple of:
+              The user's ID.
+              The access token for the user's session.
+              The refresh token for the user's session.
+        Raises:
+            StoreError if there was a problem storing the token.
+            LoginError if there was an authentication problem.
+        """
+        user_id, ignored = yield self._find_user_id_and_pwd_hash(user_id)
+
+        logger.info("Logging in user %s", user_id)
+        access_token = yield self.issue_access_token(user_id)
+        refresh_token = yield self.issue_refresh_token(user_id)
+        defer.returnValue((user_id, access_token, refresh_token))
+
+    @defer.inlineCallbacks
+    def does_user_exist(self, user_id):
+        try:
+            yield self._find_user_id_and_pwd_hash(user_id)
+            defer.returnValue(True)
+        except LoginError:
+            defer.returnValue(False)
+
+    @defer.inlineCallbacks
+    def _find_user_id_and_pwd_hash(self, user_id):
+        """Checks to see if a user with the given id exists. Will check case
+        insensitively, but will throw if there are multiple inexact matches.
+
+        Returns:
+            tuple: A 2-tuple of `(canonical_user_id, password_hash)`
+        """
+        user_infos = yield self.store.get_users_by_id_case_insensitive(user_id)
+        if not user_infos:
+            logger.warn("Attempted to login as %s but they do not exist", user_id)
+            raise LoginError(403, "", errcode=Codes.FORBIDDEN)
+
+        if len(user_infos) > 1:
+            if user_id not in user_infos:
+                logger.warn(
+                    "Attempted to login as %s but it matches more than one user "
+                    "inexactly: %r",
+                    user_id, user_infos.keys()
+                )
+                raise LoginError(403, "", errcode=Codes.FORBIDDEN)
+
+            defer.returnValue((user_id, user_infos[user_id]))
+        else:
+            defer.returnValue(user_infos.popitem())
+
+    def _check_password(self, user_id, password, stored_hash):
+        """Checks that user_id has passed password, raises LoginError if not."""
+        if not self.validate_hash(password, stored_hash):
+            logger.warn("Failed password login for user %s", user_id)
+            raise LoginError(403, "", errcode=Codes.FORBIDDEN)
+
+    @defer.inlineCallbacks
+    def issue_access_token(self, user_id):
+        access_token = self.generate_access_token(user_id)
+        yield self.store.add_access_token_to_user(user_id, access_token)
+        defer.returnValue(access_token)
+
+    @defer.inlineCallbacks
+    def issue_refresh_token(self, user_id):
+        refresh_token = self.generate_refresh_token(user_id)
+        yield self.store.add_refresh_token_to_user(user_id, refresh_token)
+        defer.returnValue(refresh_token)
+
+    def generate_access_token(self, user_id, extra_caveats=None):
+        extra_caveats = extra_caveats or []
+        macaroon = self._generate_base_macaroon(user_id)
+        macaroon.add_first_party_caveat("type = access")
+        now = self.hs.get_clock().time_msec()
+        expiry = now + (60 * 60 * 1000)
+        macaroon.add_first_party_caveat("time < %d" % (expiry,))
+        for caveat in extra_caveats:
+            macaroon.add_first_party_caveat(caveat)
+        return macaroon.serialize()
+
+    def generate_refresh_token(self, user_id):
+        m = self._generate_base_macaroon(user_id)
+        m.add_first_party_caveat("type = refresh")
+        # Important to add a nonce, because otherwise every refresh token for a
+        # user will be the same.
+        m.add_first_party_caveat("nonce = %s" % (
+            stringutils.random_string_with_symbols(16),
+        ))
+        return m.serialize()
+
+    def generate_short_term_login_token(self, user_id):
+        macaroon = self._generate_base_macaroon(user_id)
+        macaroon.add_first_party_caveat("type = login")
+        now = self.hs.get_clock().time_msec()
+        expiry = now + (2 * 60 * 1000)
+        macaroon.add_first_party_caveat("time < %d" % (expiry,))
+        return macaroon.serialize()
+
+    def validate_short_term_login_token_and_get_user_id(self, login_token):
+        try:
+            macaroon = pymacaroons.Macaroon.deserialize(login_token)
+            auth_api = self.hs.get_auth()
+            auth_api.validate_macaroon(macaroon, "login", [auth_api.verify_expiry])
+            return self._get_user_from_macaroon(macaroon)
+        except (pymacaroons.exceptions.MacaroonException, TypeError, ValueError):
+            raise AuthError(401, "Invalid token", errcode=Codes.UNKNOWN_TOKEN)
+
+    def _generate_base_macaroon(self, user_id):
+        macaroon = pymacaroons.Macaroon(
+            location=self.hs.config.server_name,
+            identifier="key",
+            key=self.hs.config.macaroon_secret_key)
+        macaroon.add_first_party_caveat("gen = 1")
+        macaroon.add_first_party_caveat("user_id = %s" % (user_id,))
+        return macaroon
+
+    def _get_user_from_macaroon(self, macaroon):
+        user_prefix = "user_id = "
+        for caveat in macaroon.caveats:
+            if caveat.caveat_id.startswith(user_prefix):
+                return caveat.caveat_id[len(user_prefix):]
+        raise AuthError(
+            self.INVALID_TOKEN_HTTP_STATUS, "No user_id found in token",
+            errcode=Codes.UNKNOWN_TOKEN
+        )
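The macaroon plumbing is easiest to see end to end. A sketch using the same pymacaroons calls as above (the server name, secret key, and timestamp are made up):

    import pymacaroons

    macaroon = pymacaroons.Macaroon(
        location="example.org",           # hs.config.server_name
        identifier="key",
        key="not-a-real-secret",          # hs.config.macaroon_secret_key
    )
    macaroon.add_first_party_caveat("gen = 1")
    macaroon.add_first_party_caveat("user_id = @alice:example.org")
    macaroon.add_first_party_caveat("type = access")
    macaroon.add_first_party_caveat("time < 1447776000000")

    token = macaroon.serialize()

    # Recovering the user id mirrors _get_user_from_macaroon:
    parsed = pymacaroons.Macaroon.deserialize(token)
    user_id = next(
        c.caveat_id[len("user_id = "):]
        for c in parsed.caveats
        if c.caveat_id.startswith("user_id = ")
    )
    assert user_id == "@alice:example.org"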
+    @defer.inlineCallbacks
+    def set_password(self, user_id, newpassword):
+        password_hash = self.hash(newpassword)
+
+        yield self.store.user_set_password_hash(user_id, password_hash)
+        yield self.store.user_delete_access_tokens(user_id)
+        yield self.hs.get_pusherpool().remove_pushers_by_user(user_id)
+        yield self.store.flush_user(user_id)
+
+    @defer.inlineCallbacks
+    def add_threepid(self, user_id, medium, address, validated_at):
+        yield self.store.user_add_threepid(
+            user_id, medium, address, validated_at,
+            self.hs.get_clock().time_msec()
+        )
+
     def _save_session(self, session):
         # TODO: Persistent storage
         logger.debug("Saving session %s", session)
@@ -275,3 +455,26 @@ class AuthHandler(BaseHandler):
     def _remove_session(self, session):
         logger.debug("Removing session %s", session)
         del self.sessions[session["id"]]

+    def hash(self, password):
+        """Computes a secure hash of password.
+
+        Args:
+            password (str): Password to hash.
+
+        Returns:
+            Hashed password (str).
+        """
+        return bcrypt.hashpw(password, bcrypt.gensalt(self.bcrypt_rounds))
+
+    def validate_hash(self, password, stored_hash):
+        """Validates that self.hash(password) == stored_hash.
+
+        Args:
+            password (str): Password to hash.
+            stored_hash (str): Expected hash value.
+
+        Returns:
+            Whether self.hash(password) == stored_hash (bool).
+        """
+        return bcrypt.checkpw(password, stored_hash)
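The hash/validate_hash pair is a thin wrapper over the bcrypt module. A quick round trip (byte strings used for portability; the rounds value would normally come from hs.config.bcrypt_rounds):

    import bcrypt

    stored_hash = bcrypt.hashpw(b"s3cret", bcrypt.gensalt(12))
    assert bcrypt.checkpw(b"s3cret", stored_hash)
    assert not bcrypt.checkpw(b"wrong", stored_hash)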


@@ -22,6 +22,7 @@ from synapse.api.constants import EventTypes
 from synapse.types import RoomAlias

 import logging
+import string

 logger = logging.getLogger(__name__)
@@ -40,6 +41,10 @@ class DirectoryHandler(BaseHandler):
     def _create_association(self, room_alias, room_id, servers=None):
         # general association creation for both human users and app services

+        for wchar in string.whitespace:
+            if wchar in room_alias.localpart:
+                raise SynapseError(400, "Invalid characters in room alias")
+
         if not self.hs.is_mine(room_alias):
             raise SynapseError(400, "Room alias must be local")
             # TODO(erikj): Change this.
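The new validation rejects any alias localpart containing whitespace, of any kind that string.whitespace covers. In isolation (aliases are made up):

    import string

    def has_whitespace(localpart):
        return any(wchar in localpart for wchar in string.whitespace)

    assert not has_whitespace("cake")
    assert has_whitespace("birthday cake")   # would now raise SynapseError(400)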


@@ -15,7 +15,6 @@
 from twisted.internet import defer

-from synapse.util.logcontext import PreserveLoggingContext
 from synapse.util.logutils import log_function
 from synapse.types import UserID
 from synapse.events.utils import serialize_event
@@ -47,31 +46,70 @@ class EventStreamHandler(BaseHandler):
         self.notifier = hs.get_notifier()

+    @defer.inlineCallbacks
+    def started_stream(self, user):
+        """Tells the presence handler that we have started an eventstream for
+        the user:
+
+        Args:
+            user (User): The user who started a stream.
+        Returns:
+            A deferred that completes once their presence has been updated.
+        """
+        if user not in self._streams_per_user:
+            self._streams_per_user[user] = 0
+            if user in self._stop_timer_per_user:
+                try:
+                    self.clock.cancel_call_later(
+                        self._stop_timer_per_user.pop(user)
+                    )
+                except:
+                    logger.exception("Failed to cancel event timer")
+            else:
+                yield self.distributor.fire("started_user_eventstream", user)
+
+        self._streams_per_user[user] += 1
+
+    def stopped_stream(self, user):
+        """If there are no streams for a user this starts a timer that will
+        notify the presence handler that we haven't got an event stream for
+        the user unless the user starts a new stream in 30 seconds.
+
+        Args:
+            user (User): The user who stopped a stream.
+        """
+        self._streams_per_user[user] -= 1
+        if not self._streams_per_user[user]:
+            del self._streams_per_user[user]
+
+            # 30 seconds of grace to allow the client to reconnect again
+            # before we think they're gone
+            def _later():
+                logger.debug("_later stopped_user_eventstream %s", user)
+                self._stop_timer_per_user.pop(user, None)
+                return self.distributor.fire("stopped_user_eventstream", user)
+
+            logger.debug("Scheduling _later: for %s", user)
+            self._stop_timer_per_user[user] = (
+                self.clock.call_later(30, _later)
+            )
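started_stream/stopped_stream implement a reference count with a 30-second grace timer, so a client that reconnects promptly never produces a spurious presence change. The same pattern with plain-Python stand-ins for the clock and distributor (names and callbacks here are invented):

    import threading

    streams = {}          # user -> open stream count
    stop_timers = {}      # user -> pending grace timer

    def started(user, on_first_stream):
        timer = stop_timers.pop(user, None)
        if timer is not None:
            timer.cancel()                    # reconnected within grace: no-op
        elif user not in streams:
            on_first_stream(user)             # "started_user_eventstream"
        streams[user] = streams.get(user, 0) + 1

    def stopped(user, on_gone):
        streams[user] -= 1
        if not streams[user]:
            del streams[user]

            def _later():
                stop_timers.pop(user, None)
                on_gone(user)                 # "stopped_user_eventstream"

            t = threading.Timer(30.0, _later) # 30s grace before "stopped"
            stop_timers[user] = t
            t.start()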
     @defer.inlineCallbacks
     @log_function
     def get_stream(self, auth_user_id, pagin_config, timeout=0,
-                   as_client_event=True, affect_presence=True):
+                   as_client_event=True, affect_presence=True,
+                   only_room_events=False, room_id=None, is_guest=False):
+        """Fetches the events stream for a given user.
+
+        If `only_room_events` is `True` only room events will be returned.
+        """
         auth_user = UserID.from_string(auth_user_id)

         try:
             if affect_presence:
-                if auth_user not in self._streams_per_user:
-                    self._streams_per_user[auth_user] = 0
-                    if auth_user in self._stop_timer_per_user:
-                        try:
-                            self.clock.cancel_call_later(
-                                self._stop_timer_per_user.pop(auth_user)
-                            )
-                        except:
-                            logger.exception("Failed to cancel event timer")
-                    else:
-                        yield self.distributor.fire(
-                            "started_user_eventstream", auth_user
-                        )
-                self._streams_per_user[auth_user] += 1
-
-            rm_handler = self.hs.get_handlers().room_member_handler
-            room_ids = yield rm_handler.get_joined_rooms_for_user(auth_user)
+                yield self.started_stream(auth_user)

             if timeout:
                 # If they've set a timeout set a minimum limit.
@@ -81,11 +119,17 @@ class EventStreamHandler(BaseHandler):
                 # thundering herds on restart.
                 timeout = random.randint(int(timeout*0.9), int(timeout*1.1))

-            with PreserveLoggingContext():
-                events, tokens = yield self.notifier.get_events_for(
-                    auth_user, room_ids, pagin_config, timeout
-                )
+            if is_guest:
+                yield self.distributor.fire(
+                    "user_joined_room", user=auth_user, room_id=room_id
+                )
+
+            events, tokens = yield self.notifier.get_events_for(
+                auth_user, pagin_config, timeout,
+                only_room_events=only_room_events,
+                is_guest=is_guest, guest_room_id=room_id
+            )

             time_now = self.clock.time_msec()

             chunks = [
@@ -102,27 +146,7 @@ class EventStreamHandler(BaseHandler):
         finally:
             if affect_presence:
-                self._streams_per_user[auth_user] -= 1
-                if not self._streams_per_user[auth_user]:
-                    del self._streams_per_user[auth_user]
-
-                    # 10 seconds of grace to allow the client to reconnect again
-                    #   before we think they're gone
-                    def _later():
-                        logger.debug(
-                            "_later stopped_user_eventstream %s", auth_user
-                        )
-                        self._stop_timer_per_user.pop(auth_user, None)
-                        return self.distributor.fire(
-                            "stopped_user_eventstream", auth_user
-                        )
-
-                    logger.debug("Scheduling _later: for %s", auth_user)
-                    self._stop_timer_per_user[auth_user] = (
-                        self.clock.call_later(30, _later)
-                    )
+                self.stopped_stream(auth_user)

 class EventHandler(BaseHandler):

File diff suppressed because it is too large.


@@ -44,7 +44,7 @@ class IdentityHandler(BaseHandler):
         http_client = SimpleHttpClient(self.hs)
         # XXX: make this configurable!
         # trustedIdServers = ['matrix.org', 'localhost:8090']
-        trustedIdServers = ['matrix.org']
+        trustedIdServers = ['matrix.org', 'vector.im']

         if 'id_server' in creds:
             id_server = creds['id_server']
@@ -117,3 +117,28 @@ class IdentityHandler(BaseHandler):
         except CodeMessageException as e:
             data = json.loads(e.msg)

         defer.returnValue(data)

+    @defer.inlineCallbacks
+    def requestEmailToken(self, id_server, email, client_secret, send_attempt, **kwargs):
+        yield run_on_reactor()
+        http_client = SimpleHttpClient(self.hs)
+
+        params = {
+            'email': email,
+            'client_secret': client_secret,
+            'send_attempt': send_attempt,
+        }
+        params.update(kwargs)
+
+        try:
+            data = yield http_client.post_urlencoded_get_json(
+                "https://%s%s" % (
+                    id_server,
+                    "/_matrix/identity/api/v1/validate/email/requestToken"
+                ),
+                params
+            )
+            defer.returnValue(data)
+        except CodeMessageException as e:
+            logger.info("Proxied requestToken failed: %r", e)
+            raise e


@@ -1,83 +0,0 @@
-# -*- coding: utf-8 -*-
-# Copyright 2014, 2015 OpenMarket Ltd
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from twisted.internet import defer
-
-from ._base import BaseHandler
-from synapse.api.errors import LoginError, Codes
-
-import bcrypt
-import logging
-
-logger = logging.getLogger(__name__)
-
-
-class LoginHandler(BaseHandler):
-
-    def __init__(self, hs):
-        super(LoginHandler, self).__init__(hs)
-        self.hs = hs
-
-    @defer.inlineCallbacks
-    def login(self, user, password):
-        """Login as the specified user with the specified password.
-
-        Args:
-            user (str): The user ID.
-            password (str): The password.
-        Returns:
-            The newly allocated access token.
-        Raises:
-            StoreError if there was a problem storing the token.
-            LoginError if there was an authentication problem.
-        """
-        # TODO do this better, it can't go in __init__ else it cyclic loops
-        if not hasattr(self, "reg_handler"):
-            self.reg_handler = self.hs.get_handlers().registration_handler
-
-        # pull out the hash for this user if they exist
-        user_info = yield self.store.get_user_by_id(user_id=user)
-        if not user_info:
-            logger.warn("Attempted to login as %s but they do not exist", user)
-            raise LoginError(403, "", errcode=Codes.FORBIDDEN)
-
-        stored_hash = user_info["password_hash"]
-        if bcrypt.checkpw(password, stored_hash):
-            # generate an access token and store it.
-            token = self.reg_handler._generate_token(user)
-            logger.info("Adding token %s for user %s", token, user)
-            yield self.store.add_access_token_to_user(user, token)
-            defer.returnValue(token)
-        else:
-            logger.warn("Failed password login for user %s", user)
-            raise LoginError(403, "", errcode=Codes.FORBIDDEN)
-
-    @defer.inlineCallbacks
-    def set_password(self, user_id, newpassword, token_id=None):
-        password_hash = bcrypt.hashpw(newpassword, bcrypt.gensalt())
-
-        yield self.store.user_set_password_hash(user_id, password_hash)
-        yield self.store.user_delete_access_tokens_apart_from(user_id, token_id)
-        yield self.hs.get_pusherpool().remove_pushers_by_user_access_token(
-            user_id, token_id
-        )
-        yield self.store.flush_user(user_id)
-
-    @defer.inlineCallbacks
-    def add_threepid(self, user_id, medium, address, validated_at):
-        yield self.store.user_add_threepid(
-            user_id, medium, address, validated_at,
-            self.hs.get_clock().time_msec()
-        )


@@ -16,12 +16,13 @@
from twisted.internet import defer from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import RoomError, SynapseError from synapse.api.errors import SynapseError, AuthError, Codes
from synapse.streams.config import PaginationConfig from synapse.streams.config import PaginationConfig
from synapse.events.utils import serialize_event from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator from synapse.events.validator import EventValidator
from synapse.util import unwrapFirstError
from synapse.util.logcontext import PreserveLoggingContext from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import UserID from synapse.types import UserID, RoomStreamToken, StreamToken
from ._base import BaseHandler from ._base import BaseHandler
@@ -70,43 +71,93 @@ class MessageHandler(BaseHandler):
     @defer.inlineCallbacks
     def get_messages(self, user_id=None, room_id=None, pagin_config=None,
-                     feedback=False, as_client_event=True):
+                     as_client_event=True, is_guest=False):
         """Get messages in a room.

         Args:
             user_id (str): The user requesting messages.
             room_id (str): The room they want messages from.
             pagin_config (synapse.api.streams.PaginationConfig): The pagination
                 config rules to apply, if any.
-            feedback (bool): True to get compressed feedback with the messages
             as_client_event (bool): True to get events in client-server format.
+            is_guest (bool): Whether the requesting user is a guest (as opposed
+                to a fully registered user).
         Returns:
             dict: Pagination API results
         """
-        yield self.auth.check_joined_room(room_id, user_id)
-
         data_source = self.hs.get_event_sources().sources["room"]

-        if not pagin_config.from_token:
+        if pagin_config.from_token:
+            room_token = pagin_config.from_token.room_key
+        else:
             pagin_config.from_token = (
-                yield self.hs.get_event_sources().get_current_token()
+                yield self.hs.get_event_sources().get_current_token(
+                    direction='b'
+                )
             )
+            room_token = pagin_config.from_token.room_key

+        room_token = RoomStreamToken.parse(room_token)
+        if room_token.topological is None:
+            raise SynapseError(400, "Invalid token")
+
+        pagin_config.from_token = pagin_config.from_token.copy_and_replace(
+            "room_key", str(room_token)
+        )
+
+        source_config = pagin_config.get_source_config("room")
+
+        if not is_guest:
+            member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
+            if member_event.membership == Membership.LEAVE:
+                # If they have left the room then clamp the token to be before
+                # they left the room.
+                # If they're a guest, we'll just 403 them if they're asking for
+                # events they can't see.
+                leave_token = yield self.store.get_topological_token_for_event(
+                    member_event.event_id
+                )
+                leave_token = RoomStreamToken.parse(leave_token)
+                if leave_token.topological < room_token.topological:
+                    source_config.from_key = str(leave_token)
+
+                if source_config.direction == "f":
+                    if source_config.to_key is None:
+                        source_config.to_key = str(leave_token)
+                    else:
+                        to_token = RoomStreamToken.parse(source_config.to_key)
+                        if leave_token.topological < to_token.topological:
+                            source_config.to_key = str(leave_token)
+
+        yield self.hs.get_handlers().federation_handler.maybe_backfill(
+            room_id, room_token.topological
+        )
+
         user = UserID.from_string(user_id)

         events, next_key = yield data_source.get_pagination_rows(
-            user, pagin_config.get_source_config("room"), room_id
+            user, source_config, room_id
         )

         next_token = pagin_config.from_token.copy_and_replace(
             "room_key", next_key
         )

+        if not events:
+            defer.returnValue({
+                "chunk": [],
+                "start": pagin_config.from_token.to_string(),
+                "end": next_token.to_string(),
+            })
+
+        events = yield self._filter_events_for_client(user_id, events, is_guest=is_guest)
+
         time_now = self.clock.time_msec()

         chunk = {
             "chunk": [
-                serialize_event(e, time_now, as_client_event) for e in events
+                serialize_event(e, time_now, as_client_event)
+                for e in events
             ],
             "start": pagin_config.from_token.to_string(),
             "end": next_token.to_string(),
@@ -116,7 +167,7 @@ class MessageHandler(BaseHandler):
     @defer.inlineCallbacks
     def create_and_send_event(self, event_dict, ratelimit=True,
-                              client=None, txn_id=None):
+                              token_id=None, txn_id=None, is_guest=False):
         """ Given a dict from a client, create and handle a new event.

         Creates an FrozenEvent object, filling out auth_events, prev_events,
@@ -150,11 +201,8 @@ class MessageHandler(BaseHandler):
             builder.content
         )

-        if client is not None:
-            if client.token_id is not None:
-                builder.internal_metadata.token_id = client.token_id
-            if client.device_id is not None:
-                builder.internal_metadata.device_id = client.device_id
+        if token_id is not None:
+            builder.internal_metadata.token_id = token_id

         if txn_id is not None:
             builder.internal_metadata.txn_id = txn_id
@@ -165,7 +213,7 @@ class MessageHandler(BaseHandler):
         if event.type == EventTypes.Member:
             member_handler = self.hs.get_handlers().room_member_handler
-            yield member_handler.change_membership(event, context)
+            yield member_handler.change_membership(event, context, is_guest=is_guest)
         else:
             yield self.handle_new_client_event(
                 event=event,
@@ -181,7 +229,7 @@ class MessageHandler(BaseHandler):
     @defer.inlineCallbacks
     def get_room_data(self, user_id=None, room_id=None,
-                      event_type=None, state_key=""):
+                      event_type=None, state_key="", is_guest=False):
         """ Get data from a room.

         Args:
@@ -191,29 +239,55 @@ class MessageHandler(BaseHandler):
         Raises:
             SynapseError if something went wrong.
         """
-        have_joined = yield self.auth.check_joined_room(room_id, user_id)
-        if not have_joined:
-            raise RoomError(403, "User not in room.")
-
-        data = yield self.state_handler.get_current_state(
-            room_id, event_type, state_key
+        membership, membership_event_id = yield self._check_in_room_or_world_readable(
+            room_id, user_id, is_guest
         )
+
+        if membership == Membership.JOIN:
+            data = yield self.state_handler.get_current_state(
+                room_id, event_type, state_key
+            )
+        elif membership == Membership.LEAVE:
+            key = (event_type, state_key)
+            room_state = yield self.store.get_state_for_events(
+                [membership_event_id], [key]
+            )
+            data = room_state[membership_event_id].get(key)
+
         defer.returnValue(data)

     @defer.inlineCallbacks
-    def get_feedback(self, event_id):
-        # yield self.auth.check_joined_room(room_id, user_id)
-
-        # Pull out the feedback from the db
-        fb = yield self.store.get_feedback(event_id)
-
-        if fb:
-            defer.returnValue(fb)
-        defer.returnValue(None)
+    def _check_in_room_or_world_readable(self, room_id, user_id, is_guest):
+        try:
+            # check_user_was_in_room will return the most recent membership
+            # event for the user if:
+            #  * The user is a non-guest user, and was ever in the room
+            #  * The user is a guest user, and has joined the room
+            # else it will throw.
+            member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
+            defer.returnValue((member_event.membership, member_event.event_id))
+            return
+        except AuthError, auth_error:
+            visibility = yield self.state_handler.get_current_state(
+                room_id, EventTypes.RoomHistoryVisibility, ""
+            )
+            if (
+                visibility and
+                visibility.content["history_visibility"] == "world_readable"
+            ):
+                defer.returnValue((Membership.JOIN, None))
+                return
+            if not is_guest:
+                raise auth_error
+            raise AuthError(
+                403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
+            )
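
For reference, the world-readable check above inspects the room's current m.room.history_visibility state event, whose content looks roughly like this (illustrative values):

visibility_event = {
    "type": "m.room.history_visibility",
    "state_key": "",
    "content": {"history_visibility": "world_readable"},  # or "shared", "invited", "joined"
}
is_world_readable = (
    visibility_event["content"]["history_visibility"] == "world_readable"
)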
     @defer.inlineCallbacks
-    def get_state_events(self, user_id, room_id):
-        """Retrieve all state events for a given room.
+    def get_state_events(self, user_id, room_id, is_guest=False):
+        """Retrieve all state events for a given room. If the user is
+        joined to the room then return the current state. If the user has
+        left the room return the state events from when they left.

         Args:
             user_id(str): The user requesting state events.
@@ -221,18 +295,26 @@ class MessageHandler(BaseHandler):
         Returns:
             A list of dicts representing state events. [{}, {}, {}]
         """
-        yield self.auth.check_joined_room(room_id, user_id)
+        membership, membership_event_id = yield self._check_in_room_or_world_readable(
+            room_id, user_id, is_guest
+        )
+
+        if membership == Membership.JOIN:
+            room_state = yield self.state_handler.get_current_state(room_id)
+        elif membership == Membership.LEAVE:
+            room_state = yield self.store.get_state_for_events(
+                [membership_event_id], None
+            )
+            room_state = room_state[membership_event_id]

-        # TODO: This is duplicating logic from snapshot_all_rooms
-        current_state = yield self.state_handler.get_current_state(room_id)
         now = self.clock.time_msec()
         defer.returnValue(
-            [serialize_event(c, now) for c in current_state.values()]
+            [serialize_event(c, now) for c in room_state.values()]
         )

     @defer.inlineCallbacks
     def snapshot_all_rooms(self, user_id=None, pagin_config=None,
-                           feedback=False, as_client_event=True):
+                           as_client_event=True, include_archived=False):
         """Retrieve a snapshot of all rooms the user is invited or has joined.

         This snapshot may include messages for all rooms where the user is
@@ -242,17 +324,20 @@ class MessageHandler(BaseHandler):
             user_id (str): The ID of the user making the request.
             pagin_config (synapse.api.streams.PaginationConfig): The pagination
                 config used to determine how many messages *PER ROOM* to return.
-            feedback (bool): True to get feedback along with these messages.
             as_client_event (bool): True to get events in client-server format.
+            include_archived (bool): True to get rooms that the user has left
         Returns:
             A list of dicts with "room_id" and "membership" keys for all rooms
             the user is currently invited or joined in on. Rooms where the user
             is joined on, may return a "messages" key with messages, depending
             on the specified PaginationConfig.
         """
+        memberships = [Membership.INVITE, Membership.JOIN]
+        if include_archived:
+            memberships.append(Membership.LEAVE)
+
         room_list = yield self.store.get_rooms_for_user_where_membership_is(
-            user_id=user_id,
-            membership_list=[Membership.INVITE, Membership.JOIN]
+            user_id=user_id, membership_list=memberships
         )

         user = UserID.from_string(user_id)
@@ -267,6 +352,13 @@ class MessageHandler(BaseHandler):
             user, pagination_config.get_source_config("presence"), None
         )

+        receipt_stream = self.hs.get_event_sources().sources["receipt"]
+        receipt, _ = yield receipt_stream.get_pagination_rows(
+            user, pagination_config.get_source_config("receipt"), None
+        )
+
+        tags_by_room = yield self.store.get_tags_for_user(user_id)
+
         public_room_ids = yield self.store.get_public_room_ids()

         limit = pagin_config.limit
@@ -285,24 +377,45 @@ class MessageHandler(BaseHandler):
             }

             if event.membership == Membership.INVITE:
+                time_now = self.clock.time_msec()
                 d["inviter"] = event.sender

+                invite_event = yield self.store.get_event(event.event_id)
+                d["invite"] = serialize_event(invite_event, time_now, as_client_event)
+
             rooms_ret.append(d)

-            if event.membership != Membership.JOIN:
+            if event.membership not in (Membership.JOIN, Membership.LEAVE):
                 return

             try:
+                if event.membership == Membership.JOIN:
+                    room_end_token = now_token.room_key
+                    deferred_room_state = self.state_handler.get_current_state(
+                        event.room_id
+                    )
+                elif event.membership == Membership.LEAVE:
+                    room_end_token = "s%d" % (event.stream_ordering,)
+                    deferred_room_state = self.store.get_state_for_events(
+                        [event.event_id], None
+                    )
+                    deferred_room_state.addCallback(
+                        lambda states: states[event.event_id]
+                    )
+
                 (messages, token), current_state = yield defer.gatherResults(
                     [
                         self.store.get_recent_events_for_room(
                             event.room_id,
                             limit=limit,
-                            end_token=now_token.room_key,
-                        ),
-                        self.state_handler.get_current_state(
-                            event.room_id
+                            end_token=room_end_token,
                         ),
+                        deferred_room_state,
                     ]
-                )
+                ).addErrback(unwrapFirstError)
+
+                messages = yield self._filter_events_for_client(
+                    user_id, messages
+                )

                 start_token = now_token.copy_and_replace("room_key", token[0])
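
The gather-then-unwrap pattern above relies on the unwrapFirstError helper imported earlier: with consumeErrors=True, gatherResults wraps the first failure in a defer.FirstError, and the errback re-raises the underlying failure. A sketch of what such an errback does:

from twisted.internet import defer

def unwrap_first_error(failure):
    # Re-raise the failure wrapped inside a defer.FirstError
    # (roughly what synapse.util.unwrapFirstError does).
    failure.trap(defer.FirstError)
    return failure.value.subFailure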
@@ -322,32 +435,130 @@ class MessageHandler(BaseHandler):
                     serialize_event(c, time_now, as_client_event)
                     for c in current_state.values()
                 ]

+                private_user_data = []
+                tags = tags_by_room.get(event.room_id)
+                if tags:
+                    private_user_data.append({
+                        "type": "m.tag",
+                        "content": {"tags": tags},
+                    })
+                d["private_user_data"] = private_user_data
             except:
                 logger.exception("Failed to get snapshot")

-        yield defer.gatherResults(
-            [handle_room(e) for e in room_list],
-            consumeErrors=True
-        )
+        # Only do N rooms at once
+        n = 5
+        d_list = [handle_room(e) for e in room_list]
+        for i in range(0, len(d_list), n):
+            yield defer.gatherResults(
+                d_list[i:i + n],
+                consumeErrors=True
+            ).addErrback(unwrapFirstError)

         ret = {
             "rooms": rooms_ret,
             "presence": presence,
-            "end": now_token.to_string()
+            "receipts": receipt,
+            "end": now_token.to_string(),
         }

         defer.returnValue(ret)

     @defer.inlineCallbacks
-    def room_initial_sync(self, user_id, room_id, pagin_config=None,
-                          feedback=False):
-        current_state = yield self.state.get_current_state(
-            room_id=room_id,
-        )
-
-        yield self.auth.check_joined_room(
-            room_id, user_id,
-            current_state=current_state
-        )
+    def room_initial_sync(self, user_id, room_id, pagin_config=None, is_guest=False):
+        """Capture a snapshot of a room. If the user is currently a member of
+        the room this will be what is currently in the room. If the user left
+        the room this will be what was in the room when they left.
+
+        Args:
+            user_id(str): The user to get a snapshot for.
+            room_id(str): The room to get a snapshot of.
+            pagin_config(synapse.streams.config.PaginationConfig):
+                The pagination config used to determine how many messages to
+                return.
+        Raises:
+            AuthError if the user wasn't in the room.
+        Returns:
+            A JSON serialisable dict with the snapshot of the room.
+        """
+
+        membership, member_event_id = yield self._check_in_room_or_world_readable(
+            room_id,
+            user_id,
+            is_guest
+        )
+
+        if membership == Membership.JOIN:
+            result = yield self._room_initial_sync_joined(
+                user_id, room_id, pagin_config, membership, is_guest
+            )
+        elif membership == Membership.LEAVE:
+            result = yield self._room_initial_sync_parted(
+                user_id, room_id, pagin_config, membership, member_event_id, is_guest
+            )
+
+        private_user_data = []
+        tags = yield self.store.get_tags_for_room(user_id, room_id)
+        if tags:
+            private_user_data.append({
+                "type": "m.tag",
+                "content": {"tags": tags},
+            })
+        result["private_user_data"] = private_user_data
+
+        defer.returnValue(result)
+
+    @defer.inlineCallbacks
+    def _room_initial_sync_parted(self, user_id, room_id, pagin_config,
+                                  membership, member_event_id, is_guest):
+        room_state = yield self.store.get_state_for_events(
+            [member_event_id], None
+        )
+
+        room_state = room_state[member_event_id]
+
+        limit = pagin_config.limit if pagin_config else None
+        if limit is None:
+            limit = 10
+
+        stream_token = yield self.store.get_stream_token_for_event(
+            member_event_id
+        )
+
+        messages, token = yield self.store.get_recent_events_for_room(
+            room_id,
+            limit=limit,
+            end_token=stream_token
+        )
+
+        messages = yield self._filter_events_for_client(
+            user_id, messages, is_guest=is_guest
+        )
+
+        start_token = StreamToken(token[0], 0, 0, 0, 0)
+        end_token = StreamToken(token[1], 0, 0, 0, 0)
+
+        time_now = self.clock.time_msec()
+
+        defer.returnValue({
+            "membership": membership,
+            "room_id": room_id,
+            "messages": {
+                "chunk": [serialize_event(m, time_now) for m in messages],
+                "start": start_token.to_string(),
+                "end": end_token.to_string(),
+            },
+            "state": [serialize_event(s, time_now) for s in room_state.values()],
+            "presence": [],
+            "receipts": [],
+        })
+
+    @defer.inlineCallbacks
+    def _room_initial_sync_joined(self, user_id, room_id, pagin_config,
+                                  membership, is_guest):
+        current_state = yield self.state.get_current_state(
+            room_id=room_id,
+        )

         # TODO(paul): I wish I was called with user objects not user_id
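
The "Only do N rooms at once" loop above caps how many room snapshots are waited on per iteration. The same pattern in isolation (illustrative helper, not Synapse code):

from twisted.internet import defer

@defer.inlineCallbacks
def gather_in_batches(deferred_list, n=5):
    # The deferreds have already started running; the cap is on how
    # many are gathered and awaited together per iteration.
    for i in range(0, len(deferred_list), n):
        yield defer.gatherResults(deferred_list[i:i + n], consumeErrors=True)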
@@ -361,23 +572,12 @@ class MessageHandler(BaseHandler):
             for x in current_state.values()
         ]

-        member_event = current_state.get((EventTypes.Member, user_id,))
-
         now_token = yield self.hs.get_event_sources().get_current_token()

         limit = pagin_config.limit if pagin_config else None
         if limit is None:
             limit = 10
-        messages, token = yield self.store.get_recent_events_for_room(
-            room_id,
-            limit=limit,
-            end_token=now_token.room_key,
-        )
-
-        start_token = now_token.copy_and_replace("room_key", token[0])
-        end_token = now_token.copy_and_replace("room_key", token[1])

         room_members = [
             m for m in current_state.values()
             if m.type == EventTypes.Member
@@ -385,24 +585,45 @@ class MessageHandler(BaseHandler):
         ]

         presence_handler = self.hs.get_handlers().presence_handler

-        presence = []
-        for m in room_members:
-            try:
-                member_presence = yield presence_handler.get_state(
-                    target_user=UserID.from_string(m.user_id),
-                    auth_user=auth_user,
-                    as_event=True,
-                )
-                presence.append(member_presence)
-            except SynapseError:
-                logger.exception(
-                    "Failed to get member presence of %r", m.user_id
-                )
+        @defer.inlineCallbacks
+        def get_presence():
+            states = {}
+            if not is_guest:
+                states = yield presence_handler.get_states(
+                    target_users=[UserID.from_string(m.user_id) for m in room_members],
+                    auth_user=auth_user,
+                    as_event=True,
+                    check_auth=False,
+                )
+
+            defer.returnValue(states.values())
+
+        receipts_handler = self.hs.get_handlers().receipts_handler
+
+        presence, receipts, (messages, token) = yield defer.gatherResults(
+            [
+                get_presence(),
+                receipts_handler.get_receipts_for_room(room_id, now_token.receipt_key),
+                self.store.get_recent_events_for_room(
+                    room_id,
+                    limit=limit,
+                    end_token=now_token.room_key,
+                )
+            ],
+            consumeErrors=True,
+        ).addErrback(unwrapFirstError)
+
+        messages = yield self._filter_events_for_client(
+            user_id, messages, is_guest=is_guest, require_all_visible_for_guests=False
+        )
+
+        start_token = now_token.copy_and_replace("room_key", token[0])
+        end_token = now_token.copy_and_replace("room_key", token[1])

         time_now = self.clock.time_msec()

-        defer.returnValue({
-            "membership": member_event.membership,
+        ret = {
             "room_id": room_id,
             "messages": {
                 "chunk": [serialize_event(m, time_now) for m in messages],
@@ -410,5 +631,10 @@ class MessageHandler(BaseHandler):
"end": end_token.to_string(), "end": end_token.to_string(),
}, },
"state": state, "state": state,
"presence": presence "presence": presence,
}) "receipts": receipts,
}
if not is_guest:
ret["membership"] = membership
defer.returnValue(ret)
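
Putting the joined and parted branches together, the initial-sync response assembled above has roughly this shape (all values invented for illustration):

example_response = {
    "membership": "join",            # omitted for guests
    "room_id": "!room:example.com",
    "messages": {
        "chunk": [],                 # serialized timeline events
        "start": "t40-100_0_0_0_0",  # hypothetical pagination tokens
        "end": "t41-110_0_0_0_0",
    },
    "state": [],                     # serialized state events
    "presence": [],                  # empty in the parted branch
    "receipts": [],                  # empty in the parted branch
    "private_user_data": [{"type": "m.tag", "content": {"tags": {}}}],
}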

View File

@@ -18,8 +18,8 @@ from twisted.internet import defer
 from synapse.api.errors import SynapseError, AuthError
 from synapse.api.constants import PresenceState
-from synapse.util.logutils import log_function
 from synapse.util.logcontext import PreserveLoggingContext
+from synapse.util.logutils import log_function
 from synapse.types import UserID

 import synapse.metrics
@@ -146,6 +146,10 @@ class PresenceHandler(BaseHandler):
         self._user_cachemap = {}
         self._user_cachemap_latest_serial = 0

+        # map room_ids to the latest presence serial for a member of that
+        # room
+        self._room_serials = {}
+
         metrics.register_callback(
             "userCachemap:size",
             lambda: len(self._user_cachemap),
@@ -187,24 +191,38 @@ class PresenceHandler(BaseHandler):
         defer.returnValue(False)

     @defer.inlineCallbacks
-    def get_state(self, target_user, auth_user, as_event=False):
-        if self.hs.is_mine(target_user):
-            visible = yield self.is_presence_visible(
-                observer_user=auth_user,
-                observed_user=target_user
-            )
-
-            if not visible:
-                raise SynapseError(404, "Presence information not visible")
-            state = yield self.store.get_presence_state(target_user.localpart)
-            if "mtime" in state:
-                del state["mtime"]
-            state["presence"] = state.pop("state")
-
-            if target_user in self._user_cachemap:
-                cached_state = self._user_cachemap[target_user].get_state()
-                if "last_active" in cached_state:
-                    state["last_active"] = cached_state["last_active"]
+    def get_state(self, target_user, auth_user, as_event=False, check_auth=True):
+        """Get the current presence state of the given user.
+
+        Args:
+            target_user (UserID): The user whose presence we want
+            auth_user (UserID): The user requesting the presence, used for
+                checking if said user is allowed to see the presence of the
+                `target_user`
+            as_event (bool): Format the return as an event or not?
+            check_auth (bool): Perform the auth checks or not?
+
+        Returns:
+            dict: The presence state of the `target_user`, whose format depends
+            on the `as_event` argument.
+        """
+        if self.hs.is_mine(target_user):
+            if check_auth:
+                visible = yield self.is_presence_visible(
+                    observer_user=auth_user,
+                    observed_user=target_user
+                )
+
+                if not visible:
+                    raise SynapseError(404, "Presence information not visible")
+
+            if target_user in self._user_cachemap:
+                state = self._user_cachemap[target_user].get_state()
+            else:
+                state = yield self.store.get_presence_state(target_user.localpart)
+                if "mtime" in state:
+                    del state["mtime"]
+                state["presence"] = state.pop("state")
         else:
             # TODO(paul): Have remote server send us permissions set
             state = self._get_or_offline_usercache(target_user).get_state()
@@ -228,6 +246,81 @@ class PresenceHandler(BaseHandler):
         else:
             defer.returnValue(state)

+    @defer.inlineCallbacks
+    def get_states(self, target_users, auth_user, as_event=False, check_auth=True):
+        """A batched version of the `get_state` method that accepts a list of
+        `target_users`
+
+        Args:
+            target_users (list): The list of UserID's whose presence we want
+            auth_user (UserID): The user requesting the presence, used for
+                checking if said user is allowed to see the presence of the
+                `target_users`
+            as_event (bool): Format the return as an event or not?
+            check_auth (bool): Perform the auth checks or not?
+
+        Returns:
+            dict: A mapping from user -> presence_state
+        """
+        local_users, remote_users = partitionbool(
+            target_users,
+            lambda u: self.hs.is_mine(u)
+        )
+
+        if check_auth:
+            for user in local_users:
+                visible = yield self.is_presence_visible(
+                    observer_user=auth_user,
+                    observed_user=user
+                )
+
+                if not visible:
+                    raise SynapseError(404, "Presence information not visible")
+
+        results = {}
+        if local_users:
+            for user in local_users:
+                if user in self._user_cachemap:
+                    results[user] = self._user_cachemap[user].get_state()
+
+            local_to_user = {u.localpart: u for u in local_users}
+
+            states = yield self.store.get_presence_states(
+                [u.localpart for u in local_users if u not in results]
+            )
+
+            for local_part, state in states.items():
+                if state is None:
+                    continue
+                res = {"presence": state["state"]}
+                if "status_msg" in state and state["status_msg"]:
+                    res["status_msg"] = state["status_msg"]
+                results[local_to_user[local_part]] = res
+
+        for user in remote_users:
+            # TODO(paul): Have remote server send us permissions set
+            results[user] = self._get_or_offline_usercache(user).get_state()
+
+        for state in results.values():
+            if "last_active" in state:
+                state["last_active_ago"] = int(
+                    self.clock.time_msec() - state.pop("last_active")
+                )
+
+        if as_event:
+            for user, state in results.items():
+                content = state
+                content["user_id"] = user.to_string()
+
+                if "last_active" in content:
+                    content["last_active_ago"] = int(
+                        self._clock.time_msec() - content.pop("last_active")
+                    )
+
+                results[user] = {"type": "m.presence", "content": content}
+
+        defer.returnValue(results)
+
     @defer.inlineCallbacks
     @log_function
     def set_state(self, target_user, auth_user, state):
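
get_states above splits local from remote users with the partitionbool helper, which behaves roughly like this sketch (illustrative reimplementation, not the Synapse source):

def partitionbool(items, predicate):
    # Split items into (matching, non-matching) by a boolean predicate.
    trues, falses = [], []
    for item in items:
        (trues if predicate(item) else falses).append(item)
    return trues, falses

local_users, remote_users = partitionbool(
    ["@a:here", "@b:there"], lambda u: u.endswith(":here")
)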
@@ -278,15 +371,14 @@ class PresenceHandler(BaseHandler):
         now_online = state["presence"] != PresenceState.OFFLINE
         was_polling = target_user in self._user_cachemap

-        with PreserveLoggingContext():
-            if now_online and not was_polling:
-                self.start_polling_presence(target_user, state=state)
-            elif not now_online and was_polling:
-                self.stop_polling_presence(target_user)
+        if now_online and not was_polling:
+            self.start_polling_presence(target_user, state=state)
+        elif not now_online and was_polling:
+            self.stop_polling_presence(target_user)

         # TODO(paul): perform a presence push as part of start/stop poll so
         #   we don't have to do this all the time
-        self.changed_presencelike_data(target_user, state)
+        yield self.changed_presencelike_data(target_user, state)
@@ -298,34 +390,62 @@ class PresenceHandler(BaseHandler):
         self.changed_presencelike_data(user, {"last_active": now})

+    def get_joined_rooms_for_user(self, user):
+        """Get the list of rooms a user is joined to.
+
+        Args:
+            user(UserID): The user.
+        Returns:
+            A Deferred of a list of room id strings.
+        """
+        rm_handler = self.homeserver.get_handlers().room_member_handler
+        return rm_handler.get_joined_rooms_for_user(user)
+
+    def get_joined_users_for_room_id(self, room_id):
+        rm_handler = self.homeserver.get_handlers().room_member_handler
+        return rm_handler.get_room_members(room_id)
+
+    @defer.inlineCallbacks
     def changed_presencelike_data(self, user, state):
-        statuscache = self._get_or_make_usercache(user)
-
+        """Updates the presence state of a local user.
+
+        Args:
+            user(UserID): The user being updated.
+            state(dict): The new presence state for the user.
+        Returns:
+            A Deferred
+        """
         self._user_cachemap_latest_serial += 1
-        statuscache.update(state, serial=self._user_cachemap_latest_serial)
-
-        return self.push_presence(user, statuscache=statuscache)
+        statuscache = yield self.update_presence_cache(user, state)
+        yield self.push_presence(user, statuscache=statuscache)

     @log_function
     def started_user_eventstream(self, user):
         # TODO(paul): Use "last online" state
-        self.set_state(user, user, {"presence": PresenceState.ONLINE})
+        return self.set_state(user, user, {"presence": PresenceState.ONLINE})

     @log_function
     def stopped_user_eventstream(self, user):
         # TODO(paul): Save current state as "last online" state
-        self.set_state(user, user, {"presence": PresenceState.OFFLINE})
+        return self.set_state(user, user, {"presence": PresenceState.OFFLINE})

     @defer.inlineCallbacks
     def user_joined_room(self, user, room_id):
-        if self.hs.is_mine(user):
-            statuscache = self._get_or_make_usercache(user)
-
+        """Called via the distributor whenever a user joins a room.
+        Notifies the new member of the presence of the current members.
+        Notifies the current members of the room of the new member's presence.
+
+        Args:
+            user(UserID): The user who joined the room.
+            room_id(str): The room id the user joined.
+        """
+        if self.hs.is_mine(user):
             # No actual update but we need to bump the serial anyway for the
             # event source
             self._user_cachemap_latest_serial += 1
-            statuscache.update({}, serial=self._user_cachemap_latest_serial)
+            statuscache = yield self.update_presence_cache(
+                user, room_ids=[room_id]
+            )

             self.push_update_to_local_and_remote(
                 observed_user=user,
                 room_ids=[room_id],
@@ -333,18 +453,22 @@ class PresenceHandler(BaseHandler):
             )

         # We also want to tell them about current presence of people.
-        rm_handler = self.homeserver.get_handlers().room_member_handler
-        curr_users = yield rm_handler.get_room_members(room_id)
+        curr_users = yield self.get_joined_users_for_room_id(room_id)

         for local_user in [c for c in curr_users if self.hs.is_mine(c)]:
+            statuscache = yield self.update_presence_cache(
+                local_user, room_ids=[room_id], add_to_cache=False
+            )
+
             self.push_update_to_local_and_remote(
                 observed_user=local_user,
                 users_to_push=[user],
-                statuscache=self._get_or_offline_usercache(local_user),
+                statuscache=statuscache,
             )

     @defer.inlineCallbacks
     def send_invite(self, observer_user, observed_user):
+        """Request the presence of a local or remote user for a local user"""
         if not self.hs.is_mine(observer_user):
             raise SynapseError(400, "User is not hosted on this Home Server")
@@ -379,6 +503,15 @@ class PresenceHandler(BaseHandler):
     @defer.inlineCallbacks
     def invite_presence(self, observed_user, observer_user):
+        """Handles a m.presence_invite EDU. A remote or local user has
+        requested presence updates for a local user. If the invite is accepted
+        then allow the local or remote user to see the presence of the local
+        user.
+
+        Args:
+            observed_user(UserID): The local user whose presence is requested.
+            observer_user(UserID): The remote or local user requesting presence.
+        """
         accept = yield self._should_accept_invite(observed_user, observer_user)

         if accept:
@@ -405,16 +538,34 @@ class PresenceHandler(BaseHandler):
     @defer.inlineCallbacks
     def accept_presence(self, observed_user, observer_user):
+        """Handles a m.presence_accept EDU. Mark a presence invite from a
+        local or remote user as accepted in a local user's presence list.
+        Starts polling for presence updates from the local or remote user.
+
+        Args:
+            observed_user(UserID): The user to update in the presence list.
+            observer_user(UserID): The owner of the presence list to update.
+        """
         yield self.store.set_presence_list_accepted(
             observer_user.localpart, observed_user.to_string()
         )
-        with PreserveLoggingContext():
-            self.start_polling_presence(
-                observer_user, target_user=observed_user
-            )
+
+        self.start_polling_presence(
+            observer_user, target_user=observed_user
+        )

     @defer.inlineCallbacks
     def deny_presence(self, observed_user, observer_user):
+        """Handle a m.presence_deny EDU. Removes a local or remote user from a
+        local user's presence list.
+
+        Args:
+            observed_user(UserID): The local or remote user to remove from the
+                list.
+            observer_user(UserID): The local owner of the presence list.
+        Returns:
+            A Deferred.
+        """
         yield self.store.del_presence_list(
             observer_user.localpart, observed_user.to_string()
         )
@@ -423,6 +574,16 @@ class PresenceHandler(BaseHandler):
     @defer.inlineCallbacks
     def drop(self, observed_user, observer_user):
+        """Remove a local or remote user from a local user's presence list and
+        unsubscribe the local user from updates from that user.
+
+        Args:
+            observed_user(UserId): The local or remote user to remove from the
+                list.
+            observer_user(UserId): The local owner of the presence list.
+        Returns:
+            A Deferred.
+        """
         if not self.hs.is_mine(observer_user):
             raise SynapseError(400, "User is not hosted on this Home Server")
@@ -430,34 +591,66 @@ class PresenceHandler(BaseHandler):
             observer_user.localpart, observed_user.to_string()
         )

-        with PreserveLoggingContext():
-            self.stop_polling_presence(
-                observer_user, target_user=observed_user
-            )
+        self.stop_polling_presence(
+            observer_user, target_user=observed_user
+        )

     @defer.inlineCallbacks
     def get_presence_list(self, observer_user, accepted=None):
+        """Get the presence list for a local user. The returned list includes
+        the current presence state for each user listed.
+
+        Args:
+            observer_user(UserID): The local user whose presence list to fetch.
+            accepted(bool or None): If not none then only include users who
+                have or have not accepted the presence invite request.
+        Returns:
+            A Deferred list of presence state events.
+        """
         if not self.hs.is_mine(observer_user):
             raise SynapseError(400, "User is not hosted on this Home Server")

-        presence = yield self.store.get_presence_list(
+        presence_list = yield self.store.get_presence_list(
             observer_user.localpart, accepted=accepted
         )

-        for p in presence:
-            observed_user = UserID.from_string(p.pop("observed_user_id"))
-            p["observed_user"] = observed_user
-            p.update(self._get_or_offline_usercache(observed_user).get_state())
-            if "last_active" in p:
-                p["last_active_ago"] = int(
-                    self.clock.time_msec() - p.pop("last_active")
-                )
+        results = []
+        for row in presence_list:
+            observed_user = UserID.from_string(row["observed_user_id"])
+            result = {
+                "observed_user": observed_user, "accepted": row["accepted"]
+            }
+            result.update(
+                self._get_or_offline_usercache(observed_user).get_state()
+            )
+            if "last_active" in result:
+                result["last_active_ago"] = int(
+                    self.clock.time_msec() - result.pop("last_active")
+                )
+            results.append(result)

-        defer.returnValue(presence)
+        defer.returnValue(results)

     @defer.inlineCallbacks
     @log_function
     def start_polling_presence(self, user, target_user=None, state=None):
+        """Subscribe a local user to presence updates from a local or remote
+        user. If no target_user is supplied then subscribe to all users stored
+        in the presence list for the local user.
+
+        Additionally this pushes the current presence state of this user to all
+        target_users. That state can be provided directly or will be read from
+        the stored state for the local user.
+
+        Also this attempts to notify the local user of the current state of
+        any local target users.
+
+        Args:
+            user(UserID): The local user that wishes for presence updates.
+            target_user(UserID): The local or remote user whose updates are
+                wanted.
+            state(dict): Optional presence state for the local user.
+        """
         logger.debug("Start polling for presence from %s", user)

         if target_user:
@@ -473,8 +666,7 @@ class PresenceHandler(BaseHandler):
         # Also include people in all my rooms

-        rm_handler = self.homeserver.get_handlers().room_member_handler
-        room_ids = yield rm_handler.get_joined_rooms_for_user(user)
+        room_ids = yield self.get_joined_rooms_for_user(user)

         if state is None:
             state = yield self.store.get_presence_state(user.localpart)
@@ -498,9 +690,7 @@ class PresenceHandler(BaseHandler):
             # We want to tell the person that just came online
             # presence state of people they are interested in?
             self.push_update_to_clients(
-                observed_user=target_user,
                 users_to_push=[user],
-                statuscache=self._get_or_offline_usercache(target_user),
             )

         deferreds = []
@@ -517,6 +707,12 @@ class PresenceHandler(BaseHandler):
         yield defer.DeferredList(deferreds, consumeErrors=True)

     def _start_polling_local(self, user, target_user):
+        """Subscribe a local user to presence updates for a local user
+
+        Args:
+            user(UserId): The local user that wishes for updates.
+            target_user(UserId): The local users whose updates are wanted.
+        """
         target_localpart = target_user.localpart

         if target_localpart not in self._local_pushmap:
@@ -525,6 +721,17 @@ class PresenceHandler(BaseHandler):
         self._local_pushmap[target_localpart].add(user)

     def _start_polling_remote(self, user, domain, remoteusers):
+        """Subscribe a local user to presence updates for remote users on a
+        given remote domain.
+
+        Args:
+            user(UserID): The local user that wishes for updates.
+            domain(str): The remote server the local user wants updates from.
+            remoteusers(UserID): The remote users that local user wants to be
+                told about.
+        Returns:
+            A Deferred.
+        """
         to_poll = set()

         for u in remoteusers:
@@ -545,6 +752,17 @@ class PresenceHandler(BaseHandler):
     @log_function
     def stop_polling_presence(self, user, target_user=None):
+        """Unsubscribe a local user from presence updates from a local or
+        remote user. If no target user is supplied then unsubscribe the user
+        from all presence updates that the user had subscribed to.
+
+        Args:
+            user(UserID): The local user that no longer wishes for updates.
+            target_user(UserID or None): The user whose updates are no longer
+                wanted.
+        Returns:
+            A Deferred.
+        """
         logger.debug("Stop polling for presence from %s", user)

         if not target_user or self.hs.is_mine(target_user):
@@ -573,6 +791,13 @@ class PresenceHandler(BaseHandler):
         return defer.DeferredList(deferreds, consumeErrors=True)

     def _stop_polling_local(self, user, target_user):
+        """Unsubscribe a local user from presence updates from a local user on
+        this server.
+
+        Args:
+            user(UserID): The local user that no longer wishes for updates.
+            target_user(UserID): The user whose updates are no longer wanted.
+        """
         for localpart in self._local_pushmap.keys():
             if target_user and localpart != target_user.localpart:
                 continue
@@ -585,6 +810,17 @@ class PresenceHandler(BaseHandler):
     @log_function
     def _stop_polling_remote(self, user, domain, remoteusers):
+        """Unsubscribe a local user from presence updates from remote users on
+        a given domain.
+
+        Args:
+            user(UserID): The local user that no longer wishes for updates.
+            domain(str): The remote server to unsubscribe from.
+            remoteusers([UserID]): The users on that remote server that the
+                local user no longer wishes to be updated about.
+        Returns:
+            A Deferred.
+        """
         to_unpoll = set()

         for u in remoteusers:
@@ -606,6 +842,19 @@ class PresenceHandler(BaseHandler):
     @defer.inlineCallbacks
     @log_function
     def push_presence(self, user, statuscache):
+        """
+        Notify local and remote users of a change in presence of a local user.
+        Pushes the update to local clients and remote domains that are directly
+        subscribed to the presence of the local user.
+        Also pushes that update to any local user or remote domain that shares
+        a room with the local user.
+
+        Args:
+            user(UserID): The local user whose presence was updated.
+            statuscache(UserPresenceCache): Cache of the user's presence state
+        Returns:
+            A Deferred.
+        """
         assert(self.hs.is_mine(user))

         logger.debug("Pushing presence update from %s", user)
@@ -617,8 +866,7 @@ class PresenceHandler(BaseHandler):
         #   and also user is informed of server-forced pushes
         localusers.add(user)

-        rm_handler = self.homeserver.get_handlers().room_member_handler
-        room_ids = yield rm_handler.get_joined_rooms_for_user(user)
+        room_ids = yield self.get_joined_rooms_for_user(user)

         if not localusers and not room_ids:
             defer.returnValue(None)
@@ -632,45 +880,24 @@ class PresenceHandler(BaseHandler):
         )

         yield self.distributor.fire("user_presence_changed", user, statuscache)

-    @defer.inlineCallbacks
-    def _push_presence_remote(self, user, destination, state=None):
-        if state is None:
-            state = yield self.store.get_presence_state(user.localpart)
-            del state["mtime"]
-            state["presence"] = state.pop("state")
-
-            if user in self._user_cachemap:
-                state["last_active"] = (
-                    self._user_cachemap[user].get_state()["last_active"]
-                )
-
-            yield self.distributor.fire(
-                "collect_presencelike_data", user, state
-            )
-
-        if "last_active" in state:
-            state = dict(state)
-            state["last_active_ago"] = int(
-                self.clock.time_msec() - state.pop("last_active")
-            )
-
-        user_state = {
-            "user_id": user.to_string(),
-        }
-        user_state.update(**state)
-
-        yield self.federation.send_edu(
-            destination=destination,
-            edu_type="m.presence",
-            content={
-                "push": [
-                    user_state,
-                ],
-            }
-        )
-
     @defer.inlineCallbacks
     def incoming_presence(self, origin, content):
+        """Handle an incoming m.presence EDU.
+        For each presence update in the "push" list update our local cache and
+        notify the appropriate local clients. Only clients that share a room
+        or are directly subscribed to the presence for a user should be
+        notified of the update.
+        For each subscription request in the "poll" list start pushing presence
+        updates to the remote server.
+        For unsubscribe request in the "unpoll" list stop pushing presence
+        updates to the remote server.
+
+        Args:
+            origin(str): The source of this m.presence EDU.
+            content(dict): The content of this m.presence EDU.
+        Returns:
+            A Deferred.
+        """
         deferreds = []

         for push in content.get("push", []):
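
An m.presence EDU handled above carries content of roughly this shape (user ids invented for illustration):

example_edu_content = {
    "push": [
        {
            "user_id": "@alice:remote.example",
            "presence": "online",
            "last_active_ago": 5000,
        },
    ],
    "poll": ["@bob:this.server"],      # start sending them updates
    "unpoll": ["@carol:this.server"],  # stop sending them updates
}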
@@ -684,8 +911,7 @@ class PresenceHandler(BaseHandler):
" | %d interested local observers %r", len(observers), observers " | %d interested local observers %r", len(observers), observers
) )
rm_handler = self.homeserver.get_handlers().room_member_handler room_ids = yield self.get_joined_rooms_for_user(user)
room_ids = yield rm_handler.get_joined_rooms_for_user(user)
if room_ids: if room_ids:
logger.debug(" | %d interested room IDs %r", len(room_ids), room_ids) logger.debug(" | %d interested room IDs %r", len(room_ids), room_ids)
@@ -704,20 +930,15 @@ class PresenceHandler(BaseHandler):
                     self.clock.time_msec() - state.pop("last_active_ago")
                 )

-            statuscache = self._get_or_make_usercache(user)
-
             self._user_cachemap_latest_serial += 1
-            statuscache.update(state, serial=self._user_cachemap_latest_serial)
+            yield self.update_presence_cache(user, state, room_ids=room_ids)

             if not observers and not room_ids:
                 logger.debug(" | no interested observers or room IDs")
                 continue

             self.push_update_to_clients(
-                observed_user=user,
-                users_to_push=observers,
-                room_ids=room_ids,
-                statuscache=statuscache,
+                users_to_push=observers, room_ids=room_ids
             )

             user_id = user.to_string()
@@ -729,7 +950,8 @@ class PresenceHandler(BaseHandler):
                 )

                 while len(self._remote_offline_serials) > MAX_OFFLINE_SERIALS:
                     self._remote_offline_serials.pop()  # remove the oldest
-                del self._user_cachemap[user]
+                if user in self._user_cachemap:
+                    del self._user_cachemap[user]
             else:
                 # Remove the user from remote_offline_serials now that they're
                 # no longer offline
@@ -766,13 +988,58 @@ class PresenceHandler(BaseHandler):
             if not self._remote_sendmap[user]:
                 del self._remote_sendmap[user]

-        with PreserveLoggingContext():
-            yield defer.DeferredList(deferreds, consumeErrors=True)
+        yield defer.DeferredList(deferreds, consumeErrors=True)
+
+    @defer.inlineCallbacks
+    def update_presence_cache(self, user, state={}, room_ids=None,
+                              add_to_cache=True):
+        """Update the presence cache for a user with a new state and bump the
+        serial to the latest value.
+
+        Args:
+            user(UserID): The user being updated
+            state(dict): The presence state being updated
+            room_ids(None or list of str): A list of room_ids to update. If
+                room_ids is None then fetch the list of room_ids the user is
+                joined to.
+            add_to_cache: Whether to add an entry to the presence cache if the
+                user isn't already in the cache.
+        Returns:
+            A Deferred UserPresenceCache for the user being updated.
+        """
+        if room_ids is None:
+            room_ids = yield self.get_joined_rooms_for_user(user)
+
+        for room_id in room_ids:
+            self._room_serials[room_id] = self._user_cachemap_latest_serial
+        if add_to_cache:
+            statuscache = self._get_or_make_usercache(user)
+        else:
+            statuscache = self._get_or_offline_usercache(user)
+        statuscache.update(state, serial=self._user_cachemap_latest_serial)
+        defer.returnValue(statuscache)

     @defer.inlineCallbacks
     def push_update_to_local_and_remote(self, observed_user, statuscache,
                                         users_to_push=[], room_ids=[],
                                         remote_domains=[]):
+        """Notify local clients and remote servers of a change in the presence
+        of a user.
+
+        Args:
+            observed_user(UserID): The user to push the presence state for.
+            statuscache(UserPresenceCache): The cache for the presence state to
+                push.
+            users_to_push([UserID]): A list of local and remote users to
+                notify.
+            room_ids([str]): Notify the local and remote occupants of these
+                rooms.
+            remote_domains([str]): A list of remote servers to notify in
+                addition to those implied by the users_to_push and the
+                room_ids.
+        Returns:
+            A Deferred.
+        """

         localusers, remoteusers = partitionbool(
             users_to_push,
@@ -782,10 +1049,7 @@ class PresenceHandler(BaseHandler):
         localusers = set(localusers)

         self.push_update_to_clients(
-            observed_user=observed_user,
-            users_to_push=localusers,
-            room_ids=room_ids,
-            statuscache=statuscache,
+            users_to_push=localusers, room_ids=room_ids
         )

         remote_domains = set(remote_domains)
@@ -810,11 +1074,65 @@ class PresenceHandler(BaseHandler):
         defer.returnValue((localusers, remote_domains))

-    def push_update_to_clients(self, observed_user, users_to_push=[],
-                               room_ids=[], statuscache=None):
-        self.notifier.on_new_user_event(
-            users_to_push,
-            room_ids,
-        )
+    def push_update_to_clients(self, users_to_push=[], room_ids=[]):
+        """Notify clients of a new presence event.
+
+        Args:
+            users_to_push([UserID]): List of users to notify.
+            room_ids([str]): List of room_ids to notify.
+        """
+        with PreserveLoggingContext():
+            self.notifier.on_new_event(
+                "presence_key",
+                self._user_cachemap_latest_serial,
+                users_to_push,
+                room_ids,
+            )
+
+    @defer.inlineCallbacks
+    def _push_presence_remote(self, user, destination, state=None):
+        """Push a user's presence to a remote server. If a presence state event
+        is provided then that event is sent. Otherwise a new state event is
+        constructed from the stored presence state.
+        The last_active is replaced with last_active_ago in case the wallclock
+        time on the remote server is different to the time on this server.
+        Sends an EDU to the remote server with the current presence state.
+
+        Args:
+            user(UserID): The user to push the presence state for.
+            destination(str): The remote server to send state to.
+            state(dict): The state to push, or None to use the current stored
+                state.
+        Returns:
+            A Deferred.
+        """
+        if state is None:
+            state = yield self.store.get_presence_state(user.localpart)
+            del state["mtime"]
+            state["presence"] = state.pop("state")
+
+            if user in self._user_cachemap:
+                state["last_active"] = (
+                    self._user_cachemap[user].get_state()["last_active"]
+                )
+
+            yield self.distributor.fire(
+                "collect_presencelike_data", user, state
+            )
+
+        if "last_active" in state:
+            state = dict(state)
+            state["last_active_ago"] = int(
+                self.clock.time_msec() - state.pop("last_active")
+            )
+
+        user_state = {"user_id": user.to_string(), }
+        user_state.update(state)
+
+        yield self.federation.send_edu(
+            destination=destination,
+            edu_type="m.presence",
+            content={"push": [user_state, ], }
+        )
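
The last_active rewrite described in the docstring above, in isolation: absolute wallclock timestamps don't transfer between servers, so the state is sent as an age relative to "now" (illustrative numbers):

now_msec = 1447775000000                     # this server's clock reading
state = {"presence": "online", "last_active": 1447774990000}

state = dict(state)
state["last_active_ago"] = int(now_msec - state.pop("last_active"))
assert state == {"presence": "online", "last_active_ago": 10000}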
@@ -823,38 +1141,11 @@ class PresenceEventSource(object):
         self.hs = hs
         self.clock = hs.get_clock()

-    @defer.inlineCallbacks
-    def is_visible(self, observer_user, observed_user):
-        if observer_user == observed_user:
-            defer.returnValue(True)
-
-        presence = self.hs.get_handlers().presence_handler
-
-        if (yield presence.store.user_rooms_intersect(
-                [u.to_string() for u in observer_user, observed_user])):
-            defer.returnValue(True)
-
-        if self.hs.is_mine(observed_user):
-            pushmap = presence._local_pushmap
-
-            defer.returnValue(
-                observed_user.localpart in pushmap and
-                observer_user in pushmap[observed_user.localpart]
-            )
-        else:
-            recvmap = presence._remote_recvmap
-
-            defer.returnValue(
-                observed_user in recvmap and
-                observer_user in recvmap[observed_user]
-            )
-
     @defer.inlineCallbacks
     @log_function
-    def get_new_events_for_user(self, user, from_key, limit):
+    def get_new_events(self, user, from_key, room_ids=None, **kwargs):
         from_key = int(from_key)
+        room_ids = room_ids or []

-        observer_user = user
         presence = self.hs.get_handlers().presence_handler
         cachemap = presence._user_cachemap
@@ -864,17 +1155,26 @@ class PresenceEventSource(object):
         clock = self.clock
         latest_serial = 0

+        user_ids_to_check = {user}
+        presence_list = yield presence.store.get_presence_list(
+            user.localpart, accepted=True
+        )
+        if presence_list is not None:
+            user_ids_to_check |= set(
+                UserID.from_string(p["observed_user_id"]) for p in presence_list
+            )
+
+        for room_id in set(room_ids) & set(presence._room_serials):
+            if presence._room_serials[room_id] > from_key:
+                joined = yield presence.get_joined_users_for_room_id(room_id)
+                user_ids_to_check |= set(joined)
+
         updates = []
-        # TODO(paul): use a DeferredList ? How to limit concurrency.
-        for observed_user in cachemap.keys():
+        for observed_user in user_ids_to_check & set(cachemap):
             cached = cachemap[observed_user]

             if cached.serial <= from_key or cached.serial > max_serial:
                 continue

-            if not (yield self.is_visible(observer_user, observed_user)):
-                continue
-
             latest_serial = max(cached.serial, latest_serial)
             updates.append(cached.make_event(user=observed_user, clock=clock))
@@ -911,8 +1211,6 @@ class PresenceEventSource(object):
     def get_pagination_rows(self, user, pagination_config, key):
         # TODO (erikj): Does this make sense? Ordering?

-        observer_user = user
-
         from_key = int(pagination_config.from_key)

         if pagination_config.to_key:
@@ -923,14 +1221,26 @@ class PresenceEventSource(object):
         presence = self.hs.get_handlers().presence_handler
         cachemap = presence._user_cachemap

+        user_ids_to_check = {user}
+        presence_list = yield presence.store.get_presence_list(
+            user.localpart, accepted=True
+        )
+        if presence_list is not None:
+            user_ids_to_check |= set(
+                UserID.from_string(p["observed_user_id"]) for p in presence_list
+            )
+
+        room_ids = yield presence.get_joined_rooms_for_user(user)
+        for room_id in set(room_ids) & set(presence._room_serials):
+            if presence._room_serials[room_id] >= from_key:
+                joined = yield presence.get_joined_users_for_room_id(room_id)
+                user_ids_to_check |= set(joined)
+
         updates = []
-        # TODO(paul): use a DeferredList ? How to limit concurrency.
-        for observed_user in cachemap.keys():
+        for observed_user in user_ids_to_check & set(cachemap):
             if not (to_key < cachemap[observed_user].serial <= from_key):
                 continue

-            if (yield self.is_visible(observer_user, observed_user)):
-                updates.append((observed_user, cachemap[observed_user]))
+            updates.append((observed_user, cachemap[observed_user]))

         # TODO(paul): limit
@@ -954,6 +1264,11 @@ class UserPresenceCache(object):
self.state = {"presence": PresenceState.OFFLINE} self.state = {"presence": PresenceState.OFFLINE}
self.serial = None self.serial = None
def __repr__(self):
return "UserPresenceCache(state=%r, serial=%r)" % (
self.state, self.serial
)
def update(self, state, serial): def update(self, state, serial):
assert("mtime_age" not in state) assert("mtime_age" not in state)

View File

@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
class PrivateUserDataEventSource(object):
def __init__(self, hs):
self.store = hs.get_datastore()
def get_current_key(self, direction='f'):
return self.store.get_max_private_user_data_stream_id()
@defer.inlineCallbacks
def get_new_events(self, user, from_key, **kwargs):
user_id = user.to_string()
last_stream_id = from_key
current_stream_id = yield self.store.get_max_private_user_data_stream_id()
tags = yield self.store.get_updated_tags(user_id, last_stream_id)
results = []
for room_id, room_tags in tags.items():
results.append({
"type": "m.tag",
"content": {"tags": room_tags},
"room_id": room_id,
})
defer.returnValue((results, current_stream_id))
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
defer.returnValue(([], config.to_id))
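For orientation, each entry in the results list built above is a per-room m.tag update. A minimal sketch of one such entry, with an illustrative room ID and tag payload (not taken from the diff):

# Hypothetical shape of a single result from get_new_events:
{
    "type": "m.tag",
    "room_id": "!abc123:example.org",                 # illustrative
    "content": {"tags": {"u.work": {"order": 0.5}}},  # illustrative tag
}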


@@ -17,8 +17,8 @@ from twisted.internet import defer
from synapse.api.errors import SynapseError, AuthError, CodeMessageException
from synapse.api.constants import EventTypes, Membership
-from synapse.util.logcontext import PreserveLoggingContext
from synapse.types import UserID
+from synapse.util import unwrapFirstError
from ._base import BaseHandler
@@ -88,6 +88,9 @@ class ProfileHandler(BaseHandler):
if target_user != auth_user:
raise AuthError(400, "Cannot set another user's displayname")
if new_displayname == '':
new_displayname = None
yield self.store.set_profile_displayname(
target_user.localpart, new_displayname
)
@@ -154,14 +157,13 @@ class ProfileHandler(BaseHandler):
if not self.hs.is_mine(user):
defer.returnValue(None)
-with PreserveLoggingContext():
-(displayname, avatar_url) = yield defer.gatherResults(
-[
-self.store.get_profile_displayname(user.localpart),
-self.store.get_profile_avatar_url(user.localpart),
-],
-consumeErrors=True
-)
+(displayname, avatar_url) = yield defer.gatherResults(
+[
+self.store.get_profile_displayname(user.localpart),
+self.store.get_profile_avatar_url(user.localpart),
+],
+consumeErrors=True
+).addErrback(unwrapFirstError)
state["displayname"] = displayname
state["avatar_url"] = avatar_url


@@ -0,0 +1,202 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import BaseHandler
from twisted.internet import defer
from synapse.util.logcontext import PreserveLoggingContext
import logging
logger = logging.getLogger(__name__)
class ReceiptsHandler(BaseHandler):
def __init__(self, hs):
super(ReceiptsHandler, self).__init__(hs)
self.hs = hs
self.federation = hs.get_replication_layer()
self.federation.register_edu_handler(
"m.receipt", self._received_remote_receipt
)
self.clock = self.hs.get_clock()
self._receipt_cache = None
@defer.inlineCallbacks
def received_client_receipt(self, room_id, receipt_type, user_id,
event_id):
"""Called when a client tells us a local user has read up to the given
event_id in the room.
"""
receipt = {
"room_id": room_id,
"receipt_type": receipt_type,
"user_id": user_id,
"event_ids": [event_id],
"data": {
"ts": int(self.clock.time_msec()),
}
}
is_new = yield self._handle_new_receipts([receipt])
if is_new:
self._push_remotes([receipt])
@defer.inlineCallbacks
def _received_remote_receipt(self, origin, content):
"""Called when we receive an EDU of type m.receipt from a remote HS.
"""
receipts = [
{
"room_id": room_id,
"receipt_type": receipt_type,
"user_id": user_id,
"event_ids": user_values["event_ids"],
"data": user_values.get("data", {}),
}
for room_id, room_values in content.items()
for receipt_type, users in room_values.items()
for user_id, user_values in users.items()
]
yield self._handle_new_receipts(receipts)
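The comprehension above unpacks the federation m.receipt EDU, which nests receipts by room ID, then receipt type, then user ID. A hedged sketch of the content dict it expects (all identifiers illustrative):

# Hypothetical m.receipt EDU content as consumed by _received_remote_receipt:
{
    "!room:remote.example": {
        "m.read": {
            "@alice:remote.example": {
                "event_ids": ["$1446123456789abc:remote.example"],
                "data": {"ts": 1447777777777},
            },
        },
    },
}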
@defer.inlineCallbacks
def _handle_new_receipts(self, receipts):
"""Takes a list of receipts, stores them and informs the notifier.
"""
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
user_id = receipt["user_id"]
event_ids = receipt["event_ids"]
data = receipt["data"]
res = yield self.store.insert_receipt(
room_id, receipt_type, user_id, event_ids, data
)
if not res:
# res will be None if this read receipt is 'old'
defer.returnValue(False)
stream_id, max_persisted_id = res
with PreserveLoggingContext():
self.notifier.on_new_event(
"receipt_key", max_persisted_id, rooms=[room_id]
)
defer.returnValue(True)
@defer.inlineCallbacks
def _push_remotes(self, receipts):
"""Given a list of receipts, works out which remote servers should be
poked and pokes them.
"""
# TODO: Some of this stuff should be coalesced.
for receipt in receipts:
room_id = receipt["room_id"]
receipt_type = receipt["receipt_type"]
user_id = receipt["user_id"]
event_ids = receipt["event_ids"]
data = receipt["data"]
remotedomains = set()
rm_handler = self.hs.get_handlers().room_member_handler
yield rm_handler.fetch_room_distributions_into(
room_id, localusers=None, remotedomains=remotedomains
)
logger.debug("Sending receipt to: %r", remotedomains)
for domain in remotedomains:
self.federation.send_edu(
destination=domain,
edu_type="m.receipt",
content={
room_id: {
receipt_type: {
user_id: {
"event_ids": event_ids,
"data": data,
}
}
},
},
)
@defer.inlineCallbacks
def get_receipts_for_room(self, room_id, to_key):
"""Gets all receipts for a room, upto the given key.
"""
result = yield self.store.get_linearized_receipts_for_room(
room_id,
to_key=to_key,
)
if not result:
defer.returnValue([])
defer.returnValue(result)
class ReceiptEventSource(object):
def __init__(self, hs):
self.store = hs.get_datastore()
@defer.inlineCallbacks
def get_new_events(self, from_key, room_ids, **kwargs):
from_key = int(from_key)
to_key = yield self.get_current_key()
if from_key == to_key:
defer.returnValue(([], to_key))
events = yield self.store.get_linearized_receipts_for_rooms(
room_ids,
from_key=from_key,
to_key=to_key,
)
defer.returnValue((events, to_key))
def get_current_key(self, direction='f'):
return self.store.get_max_receipt_stream_id()
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
to_key = int(config.from_key)
if config.to_key:
from_key = int(config.to_key)
else:
from_key = None
rooms = yield self.store.get_rooms_for_user(user.to_string())
rooms = [room.room_id for room in rooms]
events = yield self.store.get_linearized_receipts_for_rooms(
rooms,
from_key=from_key,
to_key=to_key,
)
defer.returnValue((events, to_key))
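For context, the local entry point is received_client_receipt above; a rough sketch of how a REST layer might hand a read receipt to it (the accessor name and all IDs are assumptions, not part of this diff):

# Hypothetical call site, e.g. from a receipts REST servlet:
handler = hs.get_handlers().receipts_handler  # assumed accessor
yield handler.received_client_receipt(
    room_id="!room:example.org",
    receipt_type="m.read",
    user_id="@alice:example.org",
    event_id="$someevent:example.org",
)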


@@ -25,8 +25,6 @@ import synapse.util.stringutils as stringutils
from synapse.util.async import run_on_reactor
from synapse.http.client import CaptchaServerHttpClient
import base64
import bcrypt
import logging
import urllib
@@ -57,8 +55,8 @@ class RegistrationHandler(BaseHandler):
yield self.check_user_id_is_valid(user_id)
-u = yield self.store.get_user_by_id(user_id)
-if u:
+users = yield self.store.get_users_by_id_case_insensitive(user_id)
+if users:
raise SynapseError(
400,
"User ID already taken.",
@@ -66,14 +64,15 @@ class RegistrationHandler(BaseHandler):
)
@defer.inlineCallbacks
-def register(self, localpart=None, password=None):
+def register(self, localpart=None, password=None, generate_token=True):
"""Registers a new client on the server.
Args:
localpart : The local part of the user ID to register. If None,
one will be randomly generated.
password (str) : The password to assign to this user so they can
-login again.
+login again. This can be None which means they cannot login again
+via a password (e.g. the user is an application service user).
Returns:
A tuple of (user_id, access_token).
Raises:
@@ -82,7 +81,7 @@ class RegistrationHandler(BaseHandler):
yield run_on_reactor()
password_hash = None
if password:
-password_hash = bcrypt.hashpw(password, bcrypt.gensalt())
+password_hash = self.auth_handler().hash(password)
if localpart:
yield self.check_username(localpart)
@@ -90,7 +89,9 @@ class RegistrationHandler(BaseHandler):
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
-token = self._generate_token(user_id)
+token = None
+if generate_token:
+token = self.auth_handler().generate_access_token(user_id)
yield self.store.register(
user_id=user_id,
token=token,
@@ -103,14 +104,14 @@ class RegistrationHandler(BaseHandler):
attempts = 0
user_id = None
token = None
-while not user_id and not token:
+while not user_id:
try:
localpart = self._generate_user_id()
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
yield self.check_user_id_is_valid(user_id)
-token = self._generate_token(user_id)
+if generate_token:
+token = self.auth_handler().generate_access_token(user_id)
yield self.store.register(
user_id=user_id,
token=token,
@@ -160,7 +161,7 @@ class RegistrationHandler(BaseHandler):
400, "Invalid user localpart for this application service.", 400, "Invalid user localpart for this application service.",
errcode=Codes.EXCLUSIVE errcode=Codes.EXCLUSIVE
) )
token = self._generate_token(user_id) token = self.auth_handler().generate_access_token(user_id)
yield self.store.register( yield self.store.register(
user_id=user_id, user_id=user_id,
token=token, token=token,
@@ -192,6 +193,35 @@ class RegistrationHandler(BaseHandler):
else:
logger.info("Valid captcha entered from %s", ip)
@defer.inlineCallbacks
def register_saml2(self, localpart):
"""
Registers email_id as SAML2 Based Auth.
"""
if urllib.quote(localpart) != localpart:
raise SynapseError(
400,
"User ID must only contain characters which do not"
" require URL encoding."
)
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
yield self.check_user_id_is_valid(user_id)
token = self.auth_handler().generate_access_token(user_id)
try:
yield self.store.register(
user_id=user_id,
token=token,
password_hash=None
)
yield self.distributor.fire("registered_user", user)
except Exception, e:
yield self.store.add_access_token_to_user(user_id, token)
# Ignore Registration errors
logger.exception(e)
defer.returnValue((user_id, token))
@defer.inlineCallbacks
def register_email(self, threepidCreds):
"""
@@ -243,13 +273,6 @@ class RegistrationHandler(BaseHandler):
errcode=Codes.EXCLUSIVE
)
def _generate_token(self, user_id):
# urlsafe variant uses _ and - so use . as the separator and replace
# all =s with .s so http clients don't quote =s when it is used as
# query params.
return (base64.urlsafe_b64encode(user_id).replace('=', '.') + '.' +
stringutils.random_string(18))
def _generate_user_id(self):
return "-" + stringutils.random_string(18)
@@ -292,3 +315,6 @@ class RegistrationHandler(BaseHandler):
}
)
defer.returnValue(data)
def auth_handler(self):
return self.hs.get_handlers().auth_handler


@@ -19,19 +19,48 @@ from twisted.internet import defer
from ._base import BaseHandler
from synapse.types import UserID, RoomAlias, RoomID
-from synapse.api.constants import EventTypes, Membership, JoinRules
-from synapse.api.errors import StoreError, SynapseError
-from synapse.util import stringutils
+from synapse.api.constants import (
+EventTypes, Membership, JoinRules, RoomCreationPreset,
+)
+from synapse.api.errors import AuthError, StoreError, SynapseError
+from synapse.util import stringutils, unwrapFirstError
from synapse.util.async import run_on_reactor
from synapse.events.utils import serialize_event
from signedjson.sign import verify_signed_json
from signedjson.key import decode_verify_key_bytes
from collections import OrderedDict
from unpaddedbase64 import decode_base64
import logging
import math
import string
logger = logging.getLogger(__name__)
id_server_scheme = "https://"
class RoomCreationHandler(BaseHandler):
PRESETS_DICT = {
RoomCreationPreset.PRIVATE_CHAT: {
"join_rules": JoinRules.INVITE,
"history_visibility": "shared",
"original_invitees_have_ops": False,
},
RoomCreationPreset.TRUSTED_PRIVATE_CHAT: {
"join_rules": JoinRules.INVITE,
"history_visibility": "shared",
"original_invitees_have_ops": True,
},
RoomCreationPreset.PUBLIC_CHAT: {
"join_rules": JoinRules.PUBLIC,
"history_visibility": "shared",
"original_invitees_have_ops": False,
},
}
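Each preset bundles a join rule, a history-visibility value, and whether the original invitees are granted ops. A hedged sketch of a createRoom request body that would select one (the invitee and alias are illustrative, and the invite-list plumbing is assumed from context rather than shown in this hunk):

# Hypothetical /createRoom content exercising a preset:
{
    "preset": "private_chat",            # RoomCreationPreset.PRIVATE_CHAT
    "invite": ["@friend:example.org"],   # illustrative invitee
    "room_alias_name": "myroom",         # whitespace here is rejected below
}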
@defer.inlineCallbacks
def create_room(self, user_id, room_id, config):
""" Creates a new room.
@@ -50,6 +79,10 @@ class RoomCreationHandler(BaseHandler):
self.ratelimit(user_id)
if "room_alias_name" in config:
for wchar in string.whitespace:
if wchar in config["room_alias_name"]:
raise SynapseError(400, "Invalid characters in room alias")
room_alias = RoomAlias.create(
config["room_alias_name"],
self.hs.hostname,
@@ -116,9 +149,29 @@ class RoomCreationHandler(BaseHandler):
servers=[self.hs.hostname],
)
preset_config = config.get(
"preset",
RoomCreationPreset.PUBLIC_CHAT
if is_public
else RoomCreationPreset.PRIVATE_CHAT
)
raw_initial_state = config.get("initial_state", [])
initial_state = OrderedDict()
for val in raw_initial_state:
initial_state[(val["type"], val.get("state_key", ""))] = val["content"]
creation_content = config.get("creation_content", {})
user = UserID.from_string(user_id)
creation_events = self._create_events_for_new_room(
-user, room_id, is_public=is_public
+user, room_id,
preset_config=preset_config,
invite_list=invite_list,
initial_state=initial_state,
creation_content=creation_content,
room_alias=room_alias,
) )
msg_handler = self.hs.get_handlers().message_handler
@@ -165,7 +218,11 @@ class RoomCreationHandler(BaseHandler):
defer.returnValue(result)
-def _create_events_for_new_room(self, creator, room_id, is_public=False):
+def _create_events_for_new_room(self, creator, room_id, preset_config,
+invite_list, initial_state, creation_content,
+room_alias):
config = RoomCreationHandler.PRESETS_DICT[preset_config]
creator_id = creator.to_string()
event_keys = {
@@ -185,9 +242,10 @@ class RoomCreationHandler(BaseHandler):
return e
+creation_content.update({"creator": creator.to_string()})
creation_event = create(
etype=EventTypes.Create,
-content={"creator": creator.to_string()},
+content=creation_content,
)
join_event = create(
@@ -198,16 +256,20 @@ class RoomCreationHandler(BaseHandler):
},
)
-power_levels_event = create(
-etype=EventTypes.PowerLevels,
-content={
+returned_events = [creation_event, join_event]
+if (EventTypes.PowerLevels, '') not in initial_state:
+power_level_content = {
"users": {
creator.to_string(): 100,
},
"users_default": 0,
"events": {
-EventTypes.Name: 100,
+EventTypes.Name: 50,
EventTypes.PowerLevels: 100,
+EventTypes.RoomHistoryVisibility: 100,
+EventTypes.CanonicalAlias: 50,
+EventTypes.RoomAvatar: 50,
},
"events_default": 0,
"state_default": 50,
@@ -215,21 +277,51 @@ class RoomCreationHandler(BaseHandler):
"kick": 50, "kick": 50,
"redact": 50, "redact": 50,
"invite": 0, "invite": 0,
}, }
)
join_rule = JoinRules.PUBLIC if is_public else JoinRules.INVITE if config["original_invitees_have_ops"]:
join_rules_event = create( for invitee in invite_list:
etype=EventTypes.JoinRules, power_level_content["users"][invitee] = 100
content={"join_rule": join_rule},
)
return [ power_levels_event = create(
creation_event, etype=EventTypes.PowerLevels,
join_event, content=power_level_content,
power_levels_event, )
join_rules_event,
] returned_events.append(power_levels_event)
if room_alias and (EventTypes.CanonicalAlias, '') not in initial_state:
room_alias_event = create(
etype=EventTypes.CanonicalAlias,
content={"alias": room_alias.to_string()},
)
returned_events.append(room_alias_event)
if (EventTypes.JoinRules, '') not in initial_state:
join_rules_event = create(
etype=EventTypes.JoinRules,
content={"join_rule": config["join_rules"]},
)
returned_events.append(join_rules_event)
if (EventTypes.RoomHistoryVisibility, '') not in initial_state:
history_event = create(
etype=EventTypes.RoomHistoryVisibility,
content={"history_visibility": config["history_visibility"]}
)
returned_events.append(history_event)
for (etype, state_key), content in initial_state.items():
returned_events.append(create(
etype=etype,
state_key=state_key,
content=content,
))
return returned_events
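Note the pattern: each default state event is only synthesized when no matching (event type, state key) pair appears in initial_state, so callers can override any of them. A hedged illustration of a creation config that suppresses the preset's join-rules event:

# Hypothetical creation config; the explicit m.room.join_rules entry below
# means the preset's default join_rules event is skipped by the checks above.
config = {
    "preset": "private_chat",
    "initial_state": [
        {"type": "m.room.join_rules", "content": {"join_rule": "public"}},
    ],
}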
class RoomMemberHandler(BaseHandler):
@@ -277,42 +369,7 @@ class RoomMemberHandler(BaseHandler):
remotedomains.add(member.domain)
@defer.inlineCallbacks
-def get_room_members_as_pagination_chunk(self, room_id=None, user_id=None,
-limit=0, start_tok=None,
-end_tok=None):
+def change_membership(self, event, context, do_auth=True, is_guest=False):
"""Retrieve a list of room members in the room.
Args:
room_id (str): The room to get the member list for.
user_id (str): The ID of the user making the request.
limit (int): The max number of members to return.
start_tok (str): Optional. The start token if known.
end_tok (str): Optional. The end token if known.
Returns:
dict: A Pagination streamable dict.
Raises:
SynapseError if something goes wrong.
"""
yield self.auth.check_joined_room(room_id, user_id)
member_list = yield self.store.get_room_members(room_id=room_id)
time_now = self.clock.time_msec()
event_list = [
serialize_event(entry, time_now)
for entry in member_list
]
chunk_data = {
"start": "START", # FIXME (erikj): START is no longer valid
"end": "END",
"chunk": event_list
}
# TODO honor Pagination stream params
# TODO snapshot this list to return on subsequent requests when
# paginating
defer.returnValue(chunk_data)
@defer.inlineCallbacks
def change_membership(self, event, context, do_auth=True):
""" Change the membership status of a user in a room. """ Change the membership status of a user in a room.
Args: Args:
@@ -333,9 +390,38 @@ class RoomMemberHandler(BaseHandler):
# if this HS is not currently in the room, i.e. we have to do the
# invite/join dance.
if event.membership == Membership.JOIN:
if is_guest:
guest_access = context.current_state.get(
(EventTypes.GuestAccess, ""),
None
)
is_guest_access_allowed = (
guest_access
and guest_access.content
and "guest_access" in guest_access.content
and guest_access.content["guest_access"] == "can_join"
)
if not is_guest_access_allowed:
raise AuthError(403, "Guest access not allowed")
yield self._do_join(event, context, do_auth=do_auth)
else:
-# This is not a JOIN, so we can handle it normally.
+if event.membership == Membership.LEAVE:
is_host_in_room = yield self.is_host_in_room(room_id, context)
if not is_host_in_room:
# Rejecting an invite, rather than leaving a joined room
handler = self.hs.get_handlers().federation_handler
inviter = yield self.get_inviter(event)
if not inviter:
# return the same error as join_room_alias does
raise SynapseError(404, "No known servers")
yield handler.do_remotely_reject_invite(
[inviter.domain],
room_id,
event.user_id
)
defer.returnValue({"room_id": room_id})
return
# FIXME: This isn't idempotency.
if prev_state and prev_state.membership == event.membership:
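The guest branch above only admits a join when the room's current state includes an m.room.guest_access event whose content sets guest_access to "can_join". A hedged sketch of the state event that satisfies the check:

# Hypothetical state event unlocking guest joins, per the check above:
{
    "type": "m.room.guest_access",
    "state_key": "",
    "content": {"guest_access": "can_join"},
}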
@@ -359,7 +445,7 @@ class RoomMemberHandler(BaseHandler):
defer.returnValue({"room_id": room_id}) defer.returnValue({"room_id": room_id})
@defer.inlineCallbacks @defer.inlineCallbacks
def join_room_alias(self, joinee, room_alias, do_auth=True, content={}): def join_room_alias(self, joinee, room_alias, content={}):
directory_handler = self.hs.get_handlers().directory_handler directory_handler = self.hs.get_handlers().directory_handler
mapping = yield directory_handler.get_association(room_alias) mapping = yield directory_handler.get_association(room_alias)
@@ -393,8 +479,6 @@ class RoomMemberHandler(BaseHandler):
@defer.inlineCallbacks
def _do_join(self, event, context, room_hosts=None, do_auth=True):
joinee = UserID.from_string(event.state_key)
# room_id = RoomID.from_string(event.room_id, self.hs)
room_id = event.room_id
# XXX: We don't do an auth check if we are doing an invite
@@ -402,41 +486,18 @@ class RoomMemberHandler(BaseHandler):
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
-is_host_in_room = yield self.auth.check_host_in_room(
+is_host_in_room = yield self.is_host_in_room(room_id, context)
event.room_id,
self.hs.hostname
)
if not is_host_in_room:
# is *anyone* in the room?
room_member_keys = [
v for (k, v) in context.current_state.keys() if (
k == "m.room.member"
)
]
if len(room_member_keys) == 0:
# has the room been created so we can join it?
create_event = context.current_state.get(("m.room.create", ""))
if create_event:
is_host_in_room = True
if is_host_in_room:
should_do_dance = False
elif room_hosts: # TODO: Shouldn't this be remote_room_host?
should_do_dance = True
else:
-# TODO(markjh): get prev_state from snapshot
-prev_state = yield self.store.get_room_member(
+inviter = yield self.get_inviter(event)
+if not inviter:
joinee.to_string(), room_id
)
if prev_state and prev_state.membership == Membership.INVITE:
inviter = UserID.from_string(prev_state.user_id)
should_do_dance = not self.hs.is_mine(inviter)
room_hosts = [inviter.domain]
else:
# return the same error as join_room_alias does
raise SynapseError(404, "No known servers")
should_do_dance = not self.hs.is_mine(inviter)
room_hosts = [inviter.domain]
if should_do_dance:
handler = self.hs.get_handlers().federation_handler
@@ -444,8 +505,7 @@ class RoomMemberHandler(BaseHandler):
room_hosts,
room_id,
event.user_id,
-event.content, # FIXME To get a non-frozen dict
-context
+event.content,
)
else:
logger.debug("Doing normal join")
@@ -463,45 +523,51 @@ class RoomMemberHandler(BaseHandler):
)
@defer.inlineCallbacks
-def _should_invite_join(self, room_id, prev_state, do_auth):
-logger.debug("_should_invite_join: room_id: %s", room_id)
+def get_inviter(self, event):
+# TODO(markjh): get prev_state from snapshot
prev_state = yield self.store.get_room_member(
event.user_id, event.room_id
)
# XXX: We don't do an auth check if we are doing an invite
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
# Only do an invite join dance if a) we were invited,
# b) the person inviting was from a different HS and c) we are
# not currently in the room
room_host = None
if prev_state and prev_state.membership == Membership.INVITE:
-room = yield self.store.get_room(room_id)
-inviter = UserID.from_string(
-prev_state.sender
-)
-is_remote_invite_join = not self.hs.is_mine(inviter) and not room
-room_host = inviter.domain
-else:
-is_remote_invite_join = False
-defer.returnValue((is_remote_invite_join, room_host))
+defer.returnValue(UserID.from_string(prev_state.user_id))
+return
+elif "third_party_invite" in event.content:
+if "sender" in event.content["third_party_invite"]:
+inviter = UserID.from_string(
+event.content["third_party_invite"]["sender"]
+)
+defer.returnValue(inviter)
+defer.returnValue(None)
+@defer.inlineCallbacks
+def is_host_in_room(self, room_id, context):
+is_host_in_room = yield self.auth.check_host_in_room(
+room_id,
+self.hs.hostname
+)
if not is_host_in_room:
# is *anyone* in the room?
room_member_keys = [
v for (k, v) in context.current_state.keys() if (
k == "m.room.member"
)
]
if len(room_member_keys) == 0:
# has the room been created so we can join it?
create_event = context.current_state.get(("m.room.create", ""))
if create_event:
is_host_in_room = True
defer.returnValue(is_host_in_room)
@defer.inlineCallbacks
def get_joined_rooms_for_user(self, user):
"""Returns a list of roomids that the user has any of the given
membership states in."""
-app_service = yield self.store.get_app_service_by_user_id(
-user.to_string()
+rooms = yield self.store.get_rooms_for_user(
+user.to_string(),
)
if app_service:
rooms = yield self.store.get_app_service_rooms(app_service)
else:
rooms = yield self.store.get_rooms_for_user(
user.to_string(),
)
# For some reason the list of events contains duplicates
# TODO(paul): work out why because I really don't think it should
@@ -523,27 +589,254 @@ class RoomMemberHandler(BaseHandler):
suppress_auth=(not do_auth),
)
@defer.inlineCallbacks
def do_3pid_invite(
self,
room_id,
inviter,
medium,
address,
id_server,
token_id,
txn_id
):
invitee = yield self._lookup_3pid(
id_server, medium, address
)
if invitee:
# make sure it looks like a user ID; it'll throw if it's invalid.
UserID.from_string(invitee)
yield self.hs.get_handlers().message_handler.create_and_send_event(
{
"type": EventTypes.Member,
"content": {
"membership": unicode("invite")
},
"room_id": room_id,
"sender": inviter.to_string(),
"state_key": invitee,
},
token_id=token_id,
txn_id=txn_id,
)
else:
yield self._make_and_store_3pid_invite(
id_server,
medium,
address,
room_id,
inviter,
token_id,
txn_id=txn_id
)
@defer.inlineCallbacks
def _lookup_3pid(self, id_server, medium, address):
"""Looks up a 3pid in the passed identity server.
Args:
id_server (str): The server name (including port, if required)
of the identity server to use.
medium (str): The type of the third party identifier (e.g. "email").
address (str): The third party identifier (e.g. "foo@example.com").
Returns:
(str) the matrix ID of the 3pid, or None if it is not recognized.
"""
try:
data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/lookup" % (id_server_scheme, id_server,),
{
"medium": medium,
"address": address,
}
)
if "mxid" in data:
if "signatures" not in data:
raise AuthError(401, "No signatures on 3pid binding")
self.verify_any_signature(data, id_server)
defer.returnValue(data["mxid"])
except IOError as e:
logger.warn("Error from identity server lookup: %s" % (e,))
defer.returnValue(None)
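A successful lookup is expected to return both the bound Matrix ID and at least one server signature, which verify_any_signature below then checks against the identity server's published key. A hedged sketch of such a response (all values illustrative):

# Hypothetical response from GET /_matrix/identity/api/v1/lookup:
{
    "medium": "email",
    "address": "foo@example.com",
    "mxid": "@foo:example.org",
    "signatures": {
        "id.example.org": {
            "ed25519:0": "SomeUnpaddedBase64Signature"  # illustrative
        }
    }
}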
@defer.inlineCallbacks
def verify_any_signature(self, data, server_hostname):
if server_hostname not in data["signatures"]:
raise AuthError(401, "No signature from server %s" % (server_hostname,))
for key_name, signature in data["signatures"][server_hostname].items():
key_data = yield self.hs.get_simple_http_client().get_json(
"%s%s/_matrix/identity/api/v1/pubkey/%s" %
(id_server_scheme, server_hostname, key_name,),
)
if "public_key" not in key_data:
raise AuthError(401, "No public key named %s from %s" %
(key_name, server_hostname,))
verify_signed_json(
data,
server_hostname,
decode_verify_key_bytes(key_name, decode_base64(key_data["public_key"]))
)
return
@defer.inlineCallbacks
def _make_and_store_3pid_invite(
self,
id_server,
medium,
address,
room_id,
user,
token_id,
txn_id
):
token, public_key, key_validity_url, display_name = (
yield self._ask_id_server_for_third_party_invite(
id_server,
medium,
address,
room_id,
user.to_string()
)
)
msg_handler = self.hs.get_handlers().message_handler
yield msg_handler.create_and_send_event(
{
"type": EventTypes.ThirdPartyInvite,
"content": {
"display_name": display_name,
"key_validity_url": key_validity_url,
"public_key": public_key,
},
"room_id": room_id,
"sender": user.to_string(),
"state_key": token,
},
token_id=token_id,
txn_id=txn_id,
)
@defer.inlineCallbacks
def _ask_id_server_for_third_party_invite(
self, id_server, medium, address, room_id, sender):
is_url = "%s%s/_matrix/identity/api/v1/store-invite" % (
id_server_scheme, id_server,
)
data = yield self.hs.get_simple_http_client().post_urlencoded_get_json(
is_url,
{
"medium": medium,
"address": address,
"room_id": room_id,
"sender": sender,
}
)
# TODO: Check for success
token = data["token"]
public_key = data["public_key"]
display_name = data["display_name"]
key_validity_url = "%s%s/_matrix/identity/api/v1/pubkey/isvalid" % (
id_server_scheme, id_server,
)
defer.returnValue((token, public_key, key_validity_url, display_name))
class RoomListHandler(BaseHandler):
@defer.inlineCallbacks
def get_public_room_list(self):
chunk = yield self.store.get_rooms(is_public=True)
-for room in chunk:
-joined_users = yield self.store.get_users_in_room(
-room_id=room["room_id"],
-)
-room["num_joined_members"] = len(joined_users)
+results = yield defer.gatherResults(
+[
+self.store.get_users_in_room(room["room_id"])
+for room in chunk
+],
+consumeErrors=True,
+).addErrback(unwrapFirstError)
+for i, room in enumerate(chunk):
+room["num_joined_members"] = len(results[i])
# FIXME (erikj): START is no longer a valid value
defer.returnValue({"start": "START", "end": "END", "chunk": chunk})
class RoomContextHandler(BaseHandler):
@defer.inlineCallbacks
def get_event_context(self, user, room_id, event_id, limit, is_guest):
"""Retrieves events, pagination tokens and state around a given event
in a room.
Args:
user (UserID)
room_id (str)
event_id (str)
limit (int): The maximum number of events to return in total
(excluding state).
Returns:
dict
"""
before_limit = math.floor(limit/2.)
after_limit = limit - before_limit
now_token = yield self.hs.get_event_sources().get_current_token()
results = yield self.store.get_events_around(
room_id, event_id, before_limit, after_limit
)
results["events_before"] = yield self._filter_events_for_client(
user.to_string(),
results["events_before"],
is_guest=is_guest,
require_all_visible_for_guests=False
)
results["events_after"] = yield self._filter_events_for_client(
user.to_string(),
results["events_after"],
is_guest=is_guest,
require_all_visible_for_guests=False
)
if results["events_after"]:
last_event_id = results["events_after"][-1].event_id
else:
last_event_id = event_id
state = yield self.store.get_state_for_events(
[last_event_id], None
)
results["state"] = state[last_event_id].values()
results["start"] = now_token.copy_and_replace(
"room_key", results["start"]
).to_string()
results["end"] = now_token.copy_and_replace(
"room_key", results["end"]
).to_string()
defer.returnValue(results)
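So for limit=10 the handler fetches floor(10/2) = 5 events before the anchor event and 10 - 5 = 5 after it; for an odd limit such as 7 it fetches 3 before and 4 after, since after_limit takes the remainder.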
class RoomEventSource(object):
def __init__(self, hs):
self.store = hs.get_datastore()
@defer.inlineCallbacks
-def get_new_events_for_user(self, user, from_key, limit):
+def get_new_events(
self,
user,
from_key,
limit,
room_ids,
is_guest,
):
# We just ignore the key for now.
to_key = yield self.get_current_key()
@@ -563,14 +856,15 @@ class RoomEventSource(object):
user_id=user.to_string(),
from_key=from_key,
to_key=to_key,
-room_id=None,
limit=limit,
+room_ids=room_ids,
+is_guest=is_guest,
)
defer.returnValue((events, end_key))
-def get_current_key(self):
-return self.store.get_room_events_max_id()
+def get_current_key(self, direction='f'):
+return self.store.get_room_events_max_id(direction)
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
@@ -580,7 +874,6 @@ class RoomEventSource(object):
to_key=config.to_key,
direction=config.direction,
limit=config.limit,
-with_feedback=True
)
defer.returnValue((events, next_key))

synapse/handlers/search.py

@@ -0,0 +1,319 @@
# -*- coding: utf-8 -*-
# Copyright 2015 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
from ._base import BaseHandler
from synapse.api.constants import Membership
from synapse.api.filtering import Filter
from synapse.api.errors import SynapseError
from synapse.events.utils import serialize_event
from unpaddedbase64 import decode_base64, encode_base64
import logging
logger = logging.getLogger(__name__)
class SearchHandler(BaseHandler):
def __init__(self, hs):
super(SearchHandler, self).__init__(hs)
@defer.inlineCallbacks
def search(self, user, content, batch=None):
"""Performs a full text search for a user.
Args:
user (UserID)
content (dict): Search parameters
batch (str): The next_batch parameter. Used for pagination.
Returns:
dict to be returned to the client with results of search
"""
batch_group = None
batch_group_key = None
batch_token = None
if batch:
try:
b = decode_base64(batch)
batch_group, batch_group_key, batch_token = b.split("\n")
assert batch_group is not None
assert batch_group_key is not None
assert batch_token is not None
except:
raise SynapseError(400, "Invalid batch")
try:
room_cat = content["search_categories"]["room_events"]
# The actual thing to query in FTS
search_term = room_cat["search_term"]
# Which "keys" to search over in FTS query
keys = room_cat.get("keys", [
"content.body", "content.name", "content.topic",
])
# Filter to apply to results
filter_dict = room_cat.get("filter", {})
# What to order results by (impacts whether pagination can be done)
order_by = room_cat.get("order_by", "rank")
# Include context around each event?
event_context = room_cat.get(
"event_context", None
)
# Group results together? May allow clients to paginate within a
# group
group_by = room_cat.get("groupings", {}).get("group_by", {})
group_keys = [g["key"] for g in group_by]
if event_context is not None:
before_limit = int(event_context.get(
"before_limit", 5
))
after_limit = int(event_context.get(
"after_limit", 5
))
except KeyError:
raise SynapseError(400, "Invalid search query")
if order_by not in ("rank", "recent"):
raise SynapseError(400, "Invalid order by: %r" % (order_by,))
if set(group_keys) - {"room_id", "sender"}:
raise SynapseError(
400,
"Invalid group by keys: %r" % (set(group_keys) - {"room_id", "sender"},)
)
search_filter = Filter(filter_dict)
# TODO: Search through left rooms too
rooms = yield self.store.get_rooms_for_user_where_membership_is(
user.to_string(),
membership_list=[Membership.JOIN],
# membership_list=[Membership.JOIN, Membership.LEAVE, Membership.Ban],
)
room_ids = set(r.room_id for r in rooms)
room_ids = search_filter.filter_rooms(room_ids)
if batch_group == "room_id":
room_ids.intersection_update({batch_group_key})
rank_map = {} # event_id -> rank of event
allowed_events = []
room_groups = {} # Holds result of grouping by room, if applicable
sender_group = {} # Holds result of grouping by sender, if applicable
# Holds the next_batch for the entire result set if one of those exists
global_next_batch = None
if order_by == "rank":
results = yield self.store.search_msgs(
room_ids, search_term, keys
)
results_map = {r["event"].event_id: r for r in results}
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = search_filter.filter([r["event"] for r in results])
events = yield self._filter_events_for_client(
user.to_string(), filtered_events
)
events.sort(key=lambda e: -rank_map[e.event_id])
allowed_events = events[:search_filter.limit()]
for e in allowed_events:
rm = room_groups.setdefault(e.room_id, {
"results": [],
"order": rank_map[e.event_id],
})
rm["results"].append(e.event_id)
s = sender_group.setdefault(e.sender, {
"results": [],
"order": rank_map[e.event_id],
})
s["results"].append(e.event_id)
elif order_by == "recent":
# In this case we specifically loop through each room as the given
# limit applies to each room, rather than a global list.
# This is not necessarily a good idea.
for room_id in room_ids:
room_events = []
if batch_group == "room_id" and batch_group_key == room_id:
pagination_token = batch_token
else:
pagination_token = None
i = 0
# We keep looping and we keep filtering until we reach the limit
# or we run out of things.
# But only go around 5 times since otherwise synapse will be sad.
while len(room_events) < search_filter.limit() and i < 5:
i += 1
results = yield self.store.search_room(
room_id, search_term, keys, search_filter.limit() * 2,
pagination_token=pagination_token,
)
results_map = {r["event"].event_id: r for r in results}
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = search_filter.filter([
r["event"] for r in results
])
events = yield self._filter_events_for_client(
user.to_string(), filtered_events
)
room_events.extend(events)
room_events = room_events[:search_filter.limit()]
if len(results) < search_filter.limit() * 2:
pagination_token = None
break
else:
pagination_token = results[-1]["pagination_token"]
if room_events:
res = results_map[room_events[-1].event_id]
pagination_token = res["pagination_token"]
group = room_groups.setdefault(room_id, {})
if pagination_token:
next_batch = encode_base64("%s\n%s\n%s" % (
"room_id", room_id, pagination_token
))
group["next_batch"] = next_batch
if batch_token:
global_next_batch = next_batch
group["results"] = [e.event_id for e in room_events]
group["order"] = max(
e.origin_server_ts/1000 for e in room_events
if hasattr(e, "origin_server_ts")
)
allowed_events.extend(room_events)
# Normalize the group orders
if room_groups:
if len(room_groups) > 1:
mx = max(g["order"] for g in room_groups.values())
mn = min(g["order"] for g in room_groups.values())
for g in room_groups.values():
g["order"] = (g["order"] - mn) * 1.0 / (mx - mn)
else:
room_groups.values()[0]["order"] = 1
else:
# We should never get here due to the guard earlier.
raise NotImplementedError()
# If client has asked for "context" for each event (i.e. some surrounding
# events and state), fetch that
if event_context is not None:
now_token = yield self.hs.get_event_sources().get_current_token()
contexts = {}
for event in allowed_events:
res = yield self.store.get_events_around(
event.room_id, event.event_id, before_limit, after_limit
)
res["events_before"] = yield self._filter_events_for_client(
user.to_string(), res["events_before"]
)
res["events_after"] = yield self._filter_events_for_client(
user.to_string(), res["events_after"]
)
res["start"] = now_token.copy_and_replace(
"room_key", res["start"]
).to_string()
res["end"] = now_token.copy_and_replace(
"room_key", res["end"]
).to_string()
contexts[event.event_id] = res
else:
contexts = {}
# TODO: Add a limit
time_now = self.clock.time_msec()
for context in contexts.values():
context["events_before"] = [
serialize_event(e, time_now)
for e in context["events_before"]
]
context["events_after"] = [
serialize_event(e, time_now)
for e in context["events_after"]
]
results = {
e.event_id: {
"rank": rank_map[e.event_id],
"result": serialize_event(e, time_now),
"context": contexts.get(e.event_id, {}),
}
for e in allowed_events
}
logger.info("Found %d results", len(results))
rooms_cat_res = {
"results": results,
"count": len(results)
}
if room_groups and "room_id" in group_keys:
rooms_cat_res.setdefault("groups", {})["room_id"] = room_groups
if sender_group and "sender" in group_keys:
rooms_cat_res.setdefault("groups", {})["sender"] = sender_group
if global_next_batch:
rooms_cat_res["next_batch"] = global_next_batch
defer.returnValue({
"search_categories": {
"room_events": rooms_cat_res
}
})
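Putting the parsing at the top of search() together, a hedged example of a request body this handler would accept (the search term and limits are illustrative):

# Hypothetical search request content:
{
    "search_categories": {
        "room_events": {
            "search_term": "pizza",
            "keys": ["content.body"],
            "order_by": "recent",
            "event_context": {"before_limit": 3, "after_limit": 3},
            "groupings": {"group_by": [{"key": "room_id"}]},
        }
    }
}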


@@ -28,23 +28,14 @@ logger = logging.getLogger(__name__)
SyncConfig = collections.namedtuple("SyncConfig", [
"user",
"client_info",
"limit",
"gap",
"sort",
"backfill",
"filter", "filter",
]) ])
-class RoomSyncResult(collections.namedtuple("RoomSyncResult", [
+class TimelineBatch(collections.namedtuple("TimelineBatch", [
-"room_id",
-"limited",
-"published",
-"events",
-"state",
"prev_batch",
-"ephemeral",
+"events",
+"limited",
])):
__slots__ = []
@@ -52,14 +43,66 @@ class RoomSyncResult(collections.namedtuple("RoomSyncResult", [
"""Make the result appear empty if there are no updates. This is used """Make the result appear empty if there are no updates. This is used
to tell if room needs to be part of the sync result. to tell if room needs to be part of the sync result.
""" """
return bool(self.events or self.state or self.ephemeral) return bool(self.events)
class JoinedSyncResult(collections.namedtuple("JoinedSyncResult", [
"room_id", # str
"timeline", # TimelineBatch
"state", # dict[(str, str), FrozenEvent]
"ephemeral",
"private_user_data",
])):
__slots__ = []
def __nonzero__(self):
"""Make the result appear empty if there are no updates. This is used
to tell if room needs to be part of the sync result.
"""
return bool(
self.timeline
or self.state
or self.ephemeral
or self.private_user_data
)
class ArchivedSyncResult(collections.namedtuple("JoinedSyncResult", [
"room_id", # str
"timeline", # TimelineBatch
"state", # dict[(str, str), FrozenEvent]
"private_user_data",
])):
__slots__ = []
def __nonzero__(self):
"""Make the result appear empty if there are no updates. This is used
to tell if room needs to be part of the sync result.
"""
return bool(
self.timeline
or self.state
or self.private_user_data
)
class InvitedSyncResult(collections.namedtuple("InvitedSyncResult", [
"room_id", # str
"invite", # FrozenEvent: the invite event
])):
__slots__ = []
def __nonzero__(self):
"""Invited rooms should always be reported to the client"""
return True
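Taken together these namedtuples describe the shape of a sync response; a hedged sketch of a serialized SyncResult, keyed by the field names above (the wire format is produced elsewhere and may differ, and the token and IDs are illustrative):

# Hypothetical serialized SyncResult:
{
    "next_batch": "s72595_4483_1934",
    "presence": [],
    "joined": {"!room:example.org": {"timeline": [], "state": [], "ephemeral": []}},
    "invited": {"!inv:example.org": {"invite": {}}},
    "archived": {},
}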
class SyncResult(collections.namedtuple("SyncResult", [
"next_batch", # Token for the next sync
-"private_user_data", # List of private events for the user.
-"public_user_data", # List of public events for all users.
-"rooms", # RoomSyncResult for each room.
+"presence", # List of presence events for the user.
+"joined", # JoinedSyncResult for each joined room.
+"invited", # InvitedSyncResult for each invited room.
+"archived", # ArchivedSyncResult for each archived room.
])):
__slots__ = []
@@ -69,7 +112,7 @@ class SyncResult(collections.namedtuple("SyncResult", [
events.
"""
return bool(
-self.private_user_data or self.public_user_data or self.rooms
+self.presence or self.joined or self.invited
)
@@ -81,58 +124,58 @@ class SyncHandler(BaseHandler):
self.clock = hs.get_clock()
@defer.inlineCallbacks
-def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0):
+def wait_for_sync_for_user(self, sync_config, since_token=None, timeout=0,
+full_state=False):
"""Get the sync for a client if we have new data for it now. Otherwise
wait for new data to arrive on the server. If the timeout expires, then
return an empty sync result.
Returns:
A Deferred SyncResult.
"""
-if timeout == 0 or since_token is None:
-result = yield self.current_sync_for_user(sync_config, since_token)
+if timeout == 0 or since_token is None or full_state:
+# we are going to return immediately, so don't bother calling
+# notifier.wait_for_events.
+result = yield self.current_sync_for_user(sync_config, since_token,
+full_state=full_state)
defer.returnValue(result)
else:
-def current_sync_callback():
+def current_sync_callback(before_token, after_token):
return self.current_sync_for_user(sync_config, since_token)
rm_handler = self.hs.get_handlers().room_member_handler
room_ids = yield rm_handler.get_joined_rooms_for_user(
sync_config.user
)
result = yield self.notifier.wait_for_events(
-sync_config.user, room_ids,
-sync_config.filter, timeout, current_sync_callback
+sync_config.user, timeout, current_sync_callback,
+from_token=since_token
)
defer.returnValue(result)
-def current_sync_for_user(self, sync_config, since_token=None):
+def current_sync_for_user(self, sync_config, since_token=None,
+full_state=False):
"""Get the sync for client needed to match what the server has now.
Returns:
A Deferred SyncResult.
"""
-if since_token is None:
-return self.initial_sync(sync_config)
+if since_token is None or full_state:
+return self.full_state_sync(sync_config, since_token)
else:
-if sync_config.gap:
-return self.incremental_sync_with_gap(sync_config, since_token)
-else:
-# TODO(mjark): Handle gapless sync
-raise NotImplementedError()
+return self.incremental_sync_with_gap(sync_config, since_token)
@defer.inlineCallbacks
-def initial_sync(self, sync_config):
+def full_state_sync(self, sync_config, timeline_since_token):
-"""Get a sync for a client which is starting without any state
+"""Get a sync for a client which is starting without any state.
+If a 'message_since_token' is given, only timeline events which have
+happened since that token will be returned.
Returns:
A Deferred SyncResult.
"""
if sync_config.sort == "timeline,desc":
# TODO(mjark): Handle going through events in reverse order?.
# What does "most recent events" mean when applying the limits mean
# in this case?
raise NotImplementedError()
now_token = yield self.event_sources.get_current_token()
now_token, ephemeral_by_room = yield self.ephemeral_by_room(
sync_config, now_token
)
presence_stream = self.event_sources.sources["presence"]
# TODO (mjark): This looks wrong, shouldn't we be getting the presence
# UP to the present rather than after the present?
@@ -144,52 +187,179 @@ class SyncHandler(BaseHandler):
)
room_list = yield self.store.get_rooms_for_user_where_membership_is(
user_id=sync_config.user.to_string(),
-membership_list=[Membership.INVITE, Membership.JOIN]
+membership_list=(
Membership.INVITE,
Membership.JOIN,
Membership.LEAVE,
Membership.BAN
)
)
-# TODO (mjark): Does public mean "published"?
-published_rooms = yield self.store.get_rooms(is_public=True)
-published_room_ids = set(r["room_id"] for r in published_rooms)
-rooms = []
+tags_by_room = yield self.store.get_tags_for_user(
+sync_config.user.to_string()
+)
+joined = []
+invited = []
+archived = []
for event in room_list:
-room_sync = yield self.initial_sync_for_room(
-event.room_id, sync_config, now_token, published_room_ids
-)
-rooms.append(room_sync)
+if event.membership == Membership.JOIN:
+room_sync = yield self.full_state_sync_for_joined_room(
+room_id=event.room_id,
+sync_config=sync_config,
now_token=now_token,
timeline_since_token=timeline_since_token,
ephemeral_by_room=ephemeral_by_room,
tags_by_room=tags_by_room,
)
joined.append(room_sync)
elif event.membership == Membership.INVITE:
invite = yield self.store.get_event(event.event_id)
invited.append(InvitedSyncResult(
room_id=event.room_id,
invite=invite,
))
elif event.membership in (Membership.LEAVE, Membership.BAN):
leave_token = now_token.copy_and_replace(
"room_key", "s%d" % (event.stream_ordering,)
)
room_sync = yield self.full_state_sync_for_archived_room(
sync_config=sync_config,
room_id=event.room_id,
leave_event_id=event.event_id,
leave_token=leave_token,
timeline_since_token=timeline_since_token,
tags_by_room=tags_by_room,
)
archived.append(room_sync)
defer.returnValue(SyncResult(
-public_user_data=presence,
-private_user_data=[],
-rooms=rooms,
+presence=presence,
+joined=joined,
+invited=invited,
+archived=archived,
next_batch=now_token,
))
@defer.inlineCallbacks
-def initial_sync_for_room(self, room_id, sync_config, now_token,
-published_room_ids):
+def full_state_sync_for_joined_room(self, room_id, sync_config,
+now_token, timeline_since_token,
+ephemeral_by_room, tags_by_room):
"""Sync a room for a client which is starting without any state
Returns:
-A Deferred RoomSyncResult.
+A Deferred JoinedSyncResult.
"""
-recents, prev_batch_token, limited = yield self.load_filtered_recents(
-room_id, sync_config, now_token,
+batch = yield self.load_filtered_recents(
+room_id, sync_config, now_token, since_token=timeline_since_token
)
-current_state = yield self.state_handler.get_current_state(
-room_id
-)
-current_state_events = current_state.values()
+current_state = yield self.get_state_at(room_id, now_token)
-defer.returnValue(RoomSyncResult(
+defer.returnValue(JoinedSyncResult(
room_id=room_id,
-published=room_id in published_room_ids,
-events=recents,
-prev_batch=prev_batch_token,
-state=current_state_events,
-limited=limited,
-ephemeral=[],
+timeline=batch,
+state=current_state,
+ephemeral=ephemeral_by_room.get(room_id, []),
+private_user_data=self.private_user_data_for_room(
+room_id, tags_by_room
+),
+))
def private_user_data_for_room(self, room_id, tags_by_room):
private_user_data = []
tags = tags_by_room.get(room_id)
if tags is not None:
private_user_data.append({
"type": "m.tag",
"content": {"tags": tags},
})
return private_user_data
@defer.inlineCallbacks
def ephemeral_by_room(self, sync_config, now_token, since_token=None):
"""Get the ephemeral events for each room the user is in
Args:
sync_config (SyncConfig): The flags, filters and user for the sync.
now_token (StreamToken): Where the server is currently up to.
since_token (StreamToken): Where the server was when the client
last synced.
Returns:
A tuple of the now StreamToken, updated to reflect which typing
events are included, and a dict mapping from room_id to a list of
typing events for that room.
"""
typing_key = since_token.typing_key if since_token else "0"
rooms = yield self.store.get_rooms_for_user(sync_config.user.to_string())
room_ids = [room.room_id for room in rooms]
typing_source = self.event_sources.sources["typing"]
typing, typing_key = yield typing_source.get_new_events(
user=sync_config.user,
from_key=typing_key,
limit=sync_config.filter.ephemeral_limit(),
room_ids=room_ids,
is_guest=False,
)
now_token = now_token.copy_and_replace("typing_key", typing_key)
ephemeral_by_room = {}
for event in typing:
# we want to exclude the room_id from the event, but modifying the
# result returned by the event source is poor form (it might cache
# the object)
room_id = event["room_id"]
event_copy = {k: v for (k, v) in event.iteritems()
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
receipt_key = since_token.receipt_key if since_token else "0"
receipt_source = self.event_sources.sources["receipt"]
receipts, receipt_key = yield receipt_source.get_new_events(
user=sync_config.user,
from_key=receipt_key,
limit=sync_config.filter.ephemeral_limit(),
room_ids=room_ids,
# /sync doesn't support guest access, they can't get to this point in code
is_guest=False,
)
now_token = now_token.copy_and_replace("receipt_key", receipt_key)
for event in receipts:
room_id = event["room_id"]
# exclude room id, as above
event_copy = {k: v for (k, v) in event.iteritems()
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
defer.returnValue((now_token, ephemeral_by_room))
@defer.inlineCallbacks
def full_state_sync_for_archived_room(self, room_id, sync_config,
leave_event_id, leave_token,
timeline_since_token, tags_by_room):
"""Sync a room for a client which is starting without any state
Returns:
A Deferred ArchivedSyncResult.
"""
batch = yield self.load_filtered_recents(
room_id, sync_config, leave_token, since_token=timeline_since_token
)
leave_state = yield self.store.get_state_for_event(leave_event_id)
defer.returnValue(ArchivedSyncResult(
room_id=room_id,
timeline=batch,
state=leave_state,
private_user_data=self.private_user_data_for_room(
room_id, tags_by_room
),
)) ))
@defer.inlineCallbacks
@@ -199,61 +369,81 @@ class SyncHandler(BaseHandler):
Returns:
A Deferred SyncResult.
"""
if sync_config.sort == "timeline,desc":
# TODO(mjark): Handle going through events in reverse order?.
# What does "most recent events" mean when applying the limits mean
# in this case?
raise NotImplementedError()
now_token = yield self.event_sources.get_current_token()
rooms = yield self.store.get_rooms_for_user(sync_config.user.to_string())
room_ids = [room.room_id for room in rooms]
presence_source = self.event_sources.sources["presence"]
-presence, presence_key = yield presence_source.get_new_events_for_user(
+presence, presence_key = yield presence_source.get_new_events(
user=sync_config.user,
from_key=since_token.presence_key,
-limit=sync_config.limit,
+limit=sync_config.filter.presence_limit(),
+room_ids=room_ids,
+# /sync doesn't support guest access, they can't get to this point in code
+is_guest=False,
)
now_token = now_token.copy_and_replace("presence_key", presence_key)
-typing_source = self.event_sources.sources["typing"]
-typing, typing_key = yield typing_source.get_new_events_for_user(
-user=sync_config.user,
-from_key=since_token.typing_key,
-limit=sync_config.limit,
-)
-now_token = now_token.copy_and_replace("typing_key", typing_key)
-typing_by_room = {event["room_id"]: [event] for event in typing}
-for event in typing:
-event.pop("room_id")
-logger.debug("Typing %r", typing_by_room)
+now_token, ephemeral_by_room = yield self.ephemeral_by_room(
+sync_config, now_token, since_token
+)
rm_handler = self.hs.get_handlers().room_member_handler rm_handler = self.hs.get_handlers().room_member_handler
room_ids = yield rm_handler.get_joined_rooms_for_user(sync_config.user) app_service = yield self.store.get_app_service_by_user_id(
sync_config.user.to_string()
)
if app_service:
rooms = yield self.store.get_app_service_rooms(app_service)
joined_room_ids = set(r.room_id for r in rooms)
else:
joined_room_ids = yield rm_handler.get_joined_rooms_for_user(
sync_config.user
)
# TODO (mjark): Does public mean "published"? timeline_limit = sync_config.filter.timeline_limit()
published_rooms = yield self.store.get_rooms(is_public=True)
published_room_ids = set(r["room_id"] for r in published_rooms)
room_events, _ = yield self.store.get_room_events_stream( room_events, _ = yield self.store.get_room_events_stream(
sync_config.user.to_string(), sync_config.user.to_string(),
from_key=since_token.room_key, from_key=since_token.room_key,
to_key=now_token.room_key, to_key=now_token.room_key,
room_id=None, limit=timeline_limit + 1,
limit=sync_config.limit + 1,
) )
rooms = [] tags_by_room = yield self.store.get_updated_tags(
if len(room_events) <= sync_config.limit: sync_config.user.to_string(),
since_token.private_user_data_key,
)
joined = []
archived = []
if len(room_events) <= timeline_limit:
# There is no gap in any of the rooms. Therefore we can just # There is no gap in any of the rooms. Therefore we can just
# partition the new events by room and return them. # partition the new events by room and return them.
logger.debug("Got %i events for incremental sync - not limited",
len(room_events))
invite_events = []
leave_events = []
events_by_room_id = {} events_by_room_id = {}
for event in room_events: for event in room_events:
events_by_room_id.setdefault(event.room_id, []).append(event) events_by_room_id.setdefault(event.room_id, []).append(event)
if event.room_id not in joined_room_ids:
if (event.type == EventTypes.Member
and event.state_key == sync_config.user.to_string()):
if event.membership == Membership.INVITE:
invite_events.append(event)
elif event.membership in (Membership.LEAVE, Membership.BAN):
leave_events.append(event)
for room_id in room_ids: for room_id in joined_room_ids:
recents = events_by_room_id.get(room_id, []) recents = events_by_room_id.get(room_id, [])
state = [event for event in recents if event.is_state()] logger.debug("Events for room %s: %r", room_id, recents)
state = {
(event.type, event.state_key): event
for event in recents if event.is_state()}
limited = False
if recents: if recents:
prev_batch = now_token.copy_and_replace( prev_batch = now_token.copy_and_replace(
"room_key", recents[0].internal_metadata.before "room_key", recents[0].internal_metadata.before
@@ -261,49 +451,87 @@ class SyncHandler(BaseHandler):
else: else:
prev_batch = now_token prev_batch = now_token
state = yield self.check_joined_room( just_joined = yield self.check_joined_room(sync_config, state)
sync_config, room_id, state if just_joined:
) logger.debug("User has just joined %s: needs full state",
room_id)
state = yield self.get_state_at(room_id, now_token)
# the timeline is inherently limited if we've just joined
limited = True
room_sync = RoomSyncResult( room_sync = JoinedSyncResult(
room_id=room_id, room_id=room_id,
published=room_id in published_room_ids, timeline=TimelineBatch(
events=recents, events=recents,
prev_batch=prev_batch, prev_batch=prev_batch,
limited=limited,
),
state=state, state=state,
limited=False, ephemeral=ephemeral_by_room.get(room_id, []),
ephemeral=typing_by_room.get(room_id, []) private_user_data=self.private_user_data_for_room(
room_id, tags_by_room
),
) )
logger.debug("Result for room %s: %r", room_id, room_sync)
if room_sync: if room_sync:
rooms.append(room_sync) joined.append(room_sync)
else: else:
for room_id in room_ids: logger.debug("Got %i events for incremental sync - hit limit",
len(room_events))
invite_events = yield self.store.get_invites_for_user(
sync_config.user.to_string()
)
leave_events = yield self.store.get_leave_and_ban_events_for_user(
sync_config.user.to_string()
)
for room_id in joined_room_ids:
room_sync = yield self.incremental_sync_with_gap_for_room( room_sync = yield self.incremental_sync_with_gap_for_room(
room_id, sync_config, since_token, now_token, room_id, sync_config, since_token, now_token,
published_room_ids, typing_by_room ephemeral_by_room, tags_by_room
) )
if room_sync: if room_sync:
rooms.append(room_sync) joined.append(room_sync)
for leave_event in leave_events:
room_sync = yield self.incremental_sync_for_archived_room(
sync_config, leave_event, since_token, tags_by_room
)
archived.append(room_sync)
invited = [
InvitedSyncResult(room_id=event.room_id, invite=event)
for event in invite_events
]
defer.returnValue(SyncResult( defer.returnValue(SyncResult(
public_user_data=presence, presence=presence,
private_user_data=[], joined=joined,
rooms=rooms, invited=invited,
archived=archived,
next_batch=now_token, next_batch=now_token,
)) ))
@defer.inlineCallbacks @defer.inlineCallbacks
def load_filtered_recents(self, room_id, sync_config, now_token, def load_filtered_recents(self, room_id, sync_config, now_token,
since_token=None): since_token=None):
"""
:returns a Deferred TimelineBatch
"""
limited = True limited = True
recents = [] recents = []
filtering_factor = 2 filtering_factor = 2
load_limit = max(sync_config.limit * filtering_factor, 100) timeline_limit = sync_config.filter.timeline_limit()
load_limit = max(timeline_limit * filtering_factor, 100)
max_repeat = 3 # Only try a few times per room, otherwise max_repeat = 3 # Only try a few times per room, otherwise
room_key = now_token.room_key room_key = now_token.room_key
end_key = room_key end_key = room_key
while limited and len(recents) < sync_config.limit and max_repeat: while limited and len(recents) < timeline_limit and max_repeat:
events, keys = yield self.store.get_recent_events_for_room( events, keys = yield self.store.get_recent_events_for_room(
room_id, room_id,
limit=load_limit + 1, limit=load_limit + 1,
@@ -312,71 +540,74 @@ class SyncHandler(BaseHandler):
) )
(room_key, _) = keys (room_key, _) = keys
end_key = "s" + room_key.split('-')[-1] end_key = "s" + room_key.split('-')[-1]
loaded_recents = sync_config.filter.filter_room_events(events) loaded_recents = sync_config.filter.filter_room_timeline(events)
loaded_recents = yield self._filter_events_for_client(
sync_config.user.to_string(), loaded_recents,
)
loaded_recents.extend(recents) loaded_recents.extend(recents)
recents = loaded_recents recents = loaded_recents
if len(events) <= load_limit: if len(events) <= load_limit:
limited = False limited = False
max_repeat -= 1 max_repeat -= 1
if len(recents) > sync_config.limit: if len(recents) > timeline_limit:
recents = recents[-sync_config.limit:] limited = True
recents = recents[-timeline_limit:]
room_key = recents[0].internal_metadata.before room_key = recents[0].internal_metadata.before
prev_batch_token = now_token.copy_and_replace( prev_batch_token = now_token.copy_and_replace(
"room_key", room_key "room_key", room_key
) )
defer.returnValue((recents, prev_batch_token, limited)) defer.returnValue(TimelineBatch(
events=recents, prev_batch=prev_batch_token, limited=limited
))
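The loop above over-fetches by `filtering_factor` because client-side filtering can discard events, and it keeps paginating backwards until it has `timeline_limit` events or has tried `max_repeat` times. The same control flow in a standalone sketch, with a hypothetical `fetch_page` callable and `keep` predicate standing in for the store and the filter:

def load_recents(fetch_page, keep, timeline_limit,
                 filtering_factor=2, max_repeat=3):
    recents = []
    limited = True
    load_limit = max(timeline_limit * filtering_factor, 100)
    while limited and len(recents) < timeline_limit and max_repeat:
        events = fetch_page(load_limit + 1)  # over-fetch to spot the room start
        recents = [e for e in events if keep(e)] + recents
        if len(events) <= load_limit:
            limited = False  # ran out of history: the timeline is complete
        max_repeat -= 1
    if len(recents) > timeline_limit:
        limited = True
        recents = recents[-timeline_limit:]  # keep only the newest events
    return recents, limited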
     @defer.inlineCallbacks
     def incremental_sync_with_gap_for_room(self, room_id, sync_config,
                                            since_token, now_token,
-                                           published_room_ids, typing_by_room):
+                                           ephemeral_by_room, tags_by_room):
         """ Get the incremental delta needed to bring the client up to date for
         the room. Gives the client the most recent events and the changes to
         state.
         Returns:
-            A Deferred RoomSyncResult
+            A Deferred JoinedSyncResult
         """
+        logger.debug("Doing incremental sync for room %s between %s and %s",
+                     room_id, since_token, now_token)
+
         # TODO(mjark): Check for redactions we might have missed.

-        recents, prev_batch_token, limited = yield self.load_filtered_recents(
+        batch = yield self.load_filtered_recents(
             room_id, sync_config, now_token, since_token,
         )

-        logging.debug("Recents %r", recents)
+        logging.debug("Recents %r", batch)

-        # TODO(mjark): This seems racy since this isn't being passed a
-        # token to indicate what point in the stream this is
-        current_state = yield self.state_handler.get_current_state(
-            room_id
-        )
-        current_state_events = current_state.values()
+        current_state = yield self.get_state_at(room_id, now_token)

-        state_at_previous_sync = yield self.get_state_at_previous_sync(
-            room_id, since_token=since_token
+        state_at_previous_sync = yield self.get_state_at(
+            room_id, stream_position=since_token
         )

-        state_events_delta = yield self.compute_state_delta(
+        state = yield self.compute_state_delta(
             since_token=since_token,
             previous_state=state_at_previous_sync,
-            current_state=current_state_events,
+            current_state=current_state,
         )

-        state_events_delta = yield self.check_joined_room(
-            sync_config, room_id, state_events_delta
-        )
+        just_joined = yield self.check_joined_room(sync_config, state)
+        if just_joined:
+            state = yield self.get_state_at(room_id, now_token)

-        room_sync = RoomSyncResult(
+        room_sync = JoinedSyncResult(
             room_id=room_id,
-            published=room_id in published_room_ids,
-            events=recents,
-            prev_batch=prev_batch_token,
-            state=state_events_delta,
-            limited=limited,
-            ephemeral=typing_by_room.get(room_id, [])
+            timeline=batch,
+            state=state,
+            ephemeral=ephemeral_by_room.get(room_id, []),
+            private_user_data=self.private_user_data_for_room(
+                room_id, tags_by_room
+            ),
         )

         logging.debug("Room sync: %r", room_sync)
@@ -384,58 +615,125 @@ class SyncHandler(BaseHandler):
         defer.returnValue(room_sync)

     @defer.inlineCallbacks
-    def get_state_at_previous_sync(self, room_id, since_token):
-        """ Get the room state at the previous sync the client made.
+    def incremental_sync_for_archived_room(self, sync_config, leave_event,
+                                           since_token, tags_by_room):
+        """ Get the incremental delta needed to bring the client up to date for
+        the archived room.
         Returns:
-            A Deferred list of Events.
+            A Deferred ArchivedSyncResult
+        """
+        stream_token = yield self.store.get_stream_token_for_event(
+            leave_event.event_id
+        )
+
+        leave_token = since_token.copy_and_replace("room_key", stream_token)
+
+        batch = yield self.load_filtered_recents(
+            leave_event.room_id, sync_config, leave_token, since_token,
+        )
+
+        logging.debug("Recents %r", batch)
+
+        state_events_at_leave = yield self.store.get_state_for_event(
+            leave_event.event_id
+        )
+
+        state_at_previous_sync = yield self.get_state_at(
+            leave_event.room_id, stream_position=since_token
+        )
+
+        state_events_delta = yield self.compute_state_delta(
+            since_token=since_token,
+            previous_state=state_at_previous_sync,
+            current_state=state_events_at_leave,
+        )
+
+        room_sync = ArchivedSyncResult(
+            room_id=leave_event.room_id,
+            timeline=batch,
+            state=state_events_delta,
+            private_user_data=self.private_user_data_for_room(
+                leave_event.room_id, tags_by_room
+            ),
+        )
+
+        logging.debug("Room sync: %r", room_sync)
+
+        defer.returnValue(room_sync)
+
+    @defer.inlineCallbacks
+    def get_state_after_event(self, event):
+        """
+        Get the room state after the given event
+
+        :param synapse.events.EventBase event: event of interest
+        :return: A Deferred map from ((type, state_key)->Event)
+        """
+        state = yield self.store.get_state_for_event(event.event_id)
+        if event.is_state():
+            state = state.copy()
+            state[(event.type, event.state_key)] = event
+        defer.returnValue(state)
+
+    @defer.inlineCallbacks
+    def get_state_at(self, room_id, stream_position):
+        """ Get the room state at a particular stream position
+
+        :param str room_id: room for which to get state
+        :param StreamToken stream_position: point at which to get state
+        :returns: A Deferred map from ((type, state_key)->Event)
         """
         last_events, token = yield self.store.get_recent_events_for_room(
-            room_id, end_token=since_token.room_key, limit=1,
+            room_id, end_token=stream_position.room_key, limit=1,
         )

         if last_events:
-            last_event = last_events[0]
-            last_context = yield self.state_handler.compute_event_context(
-                last_event
-            )
-            if last_event.is_state():
-                state = [last_event] + last_context.current_state.values()
-            else:
-                state = last_context.current_state.values()
+            last_event = last_events[-1]
+            state = yield self.get_state_after_event(last_event)
         else:
-            state = ()
+            # no events in this room - so presumably no state
+            state = {}
         defer.returnValue(state)

     def compute_state_delta(self, since_token, previous_state, current_state):
-        """ Works out the difference in state between the current state and the
+        """ Works out the difference in state between the current state and the
         state the client got when it last performed a sync.
-        Returns:
-            A list of events.
+
+        :param str since_token: the point we are comparing against
+        :param dict[(str,str), synapse.events.FrozenEvent] previous_state: the
+            state to compare to
+        :param dict[(str,str), synapse.events.FrozenEvent] current_state: the
+            new state
+
+        :returns A new event dictionary
         """
         # TODO(mjark) Check if the state events were received by the server
         # after the previous sync, since we need to include those state
         # updates even if they occurred logically before the previous event.
         # TODO(mjark) Check for new redactions in the state events.
-        previous_dict = {event.event_id: event for event in previous_state}
-        state_delta = []
-        for event in current_state:
-            if event.event_id not in previous_dict:
-                state_delta.append(event)
+        state_delta = {}
+        for key, event in current_state.iteritems():
+            if (key not in previous_state or
+                    previous_state[key].event_id != event.event_id):
+                state_delta[key] = event
         return state_delta

-    @defer.inlineCallbacks
-    def check_joined_room(self, sync_config, room_id, state_delta):
-        joined = False
-        for event in state_delta:
-            if (
-                event.type == EventTypes.Member
-                and event.state_key == sync_config.user.to_string()
-            ):
-                if event.content["membership"] == Membership.JOIN:
-                    joined = True
-
-        if joined:
-            res = yield self.state_handler.get_current_state(room_id)
-            state_delta = res.values()
-
-        defer.returnValue(state_delta)
+    def check_joined_room(self, sync_config, state_delta):
+        """
+        Check if the user has just joined the given room (so should
+        be given the full state)
+
+        :param sync_config:
+        :param dict[(str,str), synapse.events.FrozenEvent] state_delta: the
+            difference in state since the last sync
+
+        :returns A boolean
+        """
+        join_event = state_delta.get((
+            EventTypes.Member, sync_config.user.to_string()), None)
+        if join_event is not None:
+            if join_event.content["membership"] == Membership.JOIN:
+                return True
+        return False
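Keying state by `(type, state_key)` reduces the delta computation to a dict comparison, as `compute_state_delta` above shows. A toy illustration with made-up event ids rather than real FrozenEvents:

def state_delta(previous_state, current_state):
    # An entry is in the delta when it is new, or when its event_id
    # changed since the previous sync.
    delta = {}
    for key, event in current_state.items():
        if (key not in previous_state or
                previous_state[key]["event_id"] != event["event_id"]):
            delta[key] = event
    return delta

prev = {("m.room.name", ""): {"event_id": "$1"}}
curr = {("m.room.name", ""): {"event_id": "$2"},
        ("m.room.topic", ""): {"event_id": "$3"}}
assert sorted(state_delta(prev, curr)) == [
    ("m.room.name", ""), ("m.room.topic", ""),
]  # the name changed and the topic is new, so both appear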

synapse/handlers/typing.py

@@ -18,6 +18,7 @@ from twisted.internet import defer
 from ._base import BaseHandler

 from synapse.api.errors import SynapseError, AuthError
+from synapse.util.logcontext import PreserveLoggingContext
 from synapse.types import UserID

 import logging
@@ -203,20 +204,19 @@ class TypingNotificationHandler(BaseHandler):
         )

     def _push_update_local(self, room_id, user, typing):
-        if room_id not in self._room_serials:
-            self._room_serials[room_id] = 0
-            self._room_typing[room_id] = set()
-
-        room_set = self._room_typing[room_id]
+        room_set = self._room_typing.setdefault(room_id, set())
         if typing:
             room_set.add(user)
-        elif user in room_set:
-            room_set.remove(user)
+        else:
+            room_set.discard(user)

         self._latest_room_serial += 1
         self._room_serials[room_id] = self._latest_room_serial

-        self.notifier.on_new_user_event(rooms=[room_id])
+        with PreserveLoggingContext():
+            self.notifier.on_new_event(
+                "typing_key", self._latest_room_serial, rooms=[room_id]
+            )


 class TypingNotificationEventSource(object):
@@ -246,25 +246,20 @@ class TypingNotificationEventSource(object):
             },
         }

-    @defer.inlineCallbacks
-    def get_new_events_for_user(self, user, from_key, limit):
+    def get_new_events(self, from_key, room_ids, **kwargs):
         from_key = int(from_key)
         handler = self.handler()

-        joined_room_ids = (
-            yield self.room_member_handler().get_joined_rooms_for_user(user)
-        )
-
         events = []
-        for room_id in handler._room_serials:
-            if room_id not in joined_room_ids:
+        for room_id in room_ids:
+            if room_id not in handler._room_serials:
                 continue
             if handler._room_serials[room_id] <= from_key:
                 continue

             events.append(self._make_event_for(room_id))

-        defer.returnValue((events, handler._latest_room_serial))
+        return events, handler._latest_room_serial

     def get_current_key(self):
         return self.handler()._latest_room_serial
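The event source above is driven by a per-room monotonic serial: a room has "new" typing state for a client exactly when its serial exceeds the client's `from_key`. A stripped-down sketch of the same idea, with a hypothetical `bump` standing in for typing updates:

class SerialEventSource(object):
    def __init__(self):
        self.latest_serial = 0
        self.room_serials = {}  # room_id -> serial of the last change

    def bump(self, room_id):
        # A change in a room advances the global serial and records it.
        self.latest_serial += 1
        self.room_serials[room_id] = self.latest_serial

    def get_new_events(self, from_key, room_ids):
        events = [
            room_id for room_id in room_ids
            if self.room_serials.get(room_id, 0) > from_key
        ]
        return events, self.latest_serial

source = SerialEventSource()
source.bump("!a:example.com")
assert source.get_new_events(0, ["!a:example.com", "!b:example.com"]) == \
    (["!a:example.com"], 1)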

synapse/http/client.py

@@ -12,14 +12,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
+from OpenSSL import SSL
+from OpenSSL.SSL import VERIFY_NONE

 from synapse.api.errors import CodeMessageException
-from syutil.jsonutil import encode_canonical_json
+from synapse.util.logcontext import preserve_context_over_fn
 import synapse.metrics

-from twisted.internet import defer, reactor
+from canonicaljson import encode_canonical_json
+
+from twisted.internet import defer, reactor, ssl
 from twisted.web.client import (
-    Agent, readBody, FileBodyProducer, PartialDownloadError
+    Agent, readBody, FileBodyProducer, PartialDownloadError,
 )
 from twisted.web.http_headers import Headers
@@ -54,21 +58,40 @@ class SimpleHttpClient(object):
         # The default context factory in Twisted 14.0.0 (which we require) is
         # BrowserLikePolicyForHTTPS which will do regular cert validation
         # 'like a browser'
-        self.agent = Agent(reactor)
-        self.version_string = hs.version_string
+        self.agent = Agent(
+            reactor,
+            connectTimeout=15,
+            contextFactory=hs.get_http_client_context_factory()
+        )
+        self.user_agent = hs.version_string
+        if hs.config.user_agent_suffix:
+            self.user_agent = "%s %s" % (self.user_agent, hs.config.user_agent_suffix,)

-    def request(self, method, *args, **kwargs):
+    def request(self, method, uri, *args, **kwargs):
         # A small wrapper around self.agent.request() so we can easily attach
         # counters to it
         outgoing_requests_counter.inc(method)
-        d = self.agent.request(method, *args, **kwargs)
+        d = preserve_context_over_fn(
+            self.agent.request,
+            method, uri, *args, **kwargs
+        )
+
+        logger.info("Sending request %s %s", method, uri)

         def _cb(response):
             incoming_responses_counter.inc(method, response.code)
+            logger.info(
+                "Received response to %s %s: %s",
+                method, uri, response.code
+            )
             return response

         def _eb(failure):
             incoming_responses_counter.inc(method, "ERR")
+            logger.info(
+                "Error sending request to %s %s: %s %s",
+                method, uri, failure.type, failure.getErrorMessage()
+            )
             return failure

         d.addCallbacks(_cb, _eb)
@@ -77,7 +100,9 @@ class SimpleHttpClient(object):

     @defer.inlineCallbacks
     def post_urlencoded_get_json(self, uri, args={}):
+        # TODO: Do we ever want to log message contents?
         logger.debug("post_urlencoded_get_json args: %s", args)
+
         query_bytes = urllib.urlencode(args, True)

         response = yield self.request(
@@ -85,12 +110,12 @@ class SimpleHttpClient(object):
             uri.encode("ascii"),
             headers=Headers({
                 b"Content-Type": [b"application/x-www-form-urlencoded"],
-                b"User-Agent": [self.version_string],
+                b"User-Agent": [self.user_agent],
             }),
             bodyProducer=FileBodyProducer(StringIO(query_bytes))
         )

-        body = yield readBody(response)
+        body = yield preserve_context_over_fn(readBody, response)

         defer.returnValue(json.loads(body))
@@ -98,18 +123,19 @@ class SimpleHttpClient(object):
     def post_json_get_json(self, uri, post_json):
         json_str = encode_canonical_json(post_json)

-        logger.info("HTTP POST %s -> %s", json_str, uri)
+        logger.debug("HTTP POST %s -> %s", json_str, uri)

         response = yield self.request(
             "POST",
             uri.encode("ascii"),
             headers=Headers({
-                "Content-Type": ["application/json"]
+                b"Content-Type": [b"application/json"],
+                b"User-Agent": [self.user_agent],
             }),
             bodyProducer=FileBodyProducer(StringIO(json_str))
         )

-        body = yield readBody(response)
+        body = yield preserve_context_over_fn(readBody, response)

         defer.returnValue(json.loads(body))
@@ -130,27 +156,8 @@ class SimpleHttpClient(object):
             On a non-2xx HTTP response. The response body will be used as the
             error message.
         """
-        if len(args):
-            query_bytes = urllib.urlencode(args, True)
-            uri = "%s?%s" % (uri, query_bytes)
-
-        response = yield self.request(
-            "GET",
-            uri.encode("ascii"),
-            headers=Headers({
-                b"User-Agent": [self.version_string],
-            })
-        )
-
-        body = yield readBody(response)
-
-        if 200 <= response.code < 300:
-            defer.returnValue(json.loads(body))
-        else:
-            # NB: This is explicitly not json.loads(body)'d because the contract
-            # of CodeMessageException is a *string* message. Callers can always
-            # load it into JSON if they want.
-            raise CodeMessageException(response.code, body)
+        body = yield self.get_raw(uri, args)
+        defer.returnValue(json.loads(body))

     @defer.inlineCallbacks
     def put_json(self, uri, json_body, args={}):
@@ -179,13 +186,13 @@ class SimpleHttpClient(object):
             "PUT",
             uri.encode("ascii"),
             headers=Headers({
-                b"User-Agent": [self.version_string],
+                b"User-Agent": [self.user_agent],
                 "Content-Type": ["application/json"]
             }),
             bodyProducer=FileBodyProducer(StringIO(json_str))
         )

-        body = yield readBody(response)
+        body = yield preserve_context_over_fn(readBody, response)

         if 200 <= response.code < 300:
             defer.returnValue(json.loads(body))
@@ -195,6 +202,42 @@ class SimpleHttpClient(object):
             # load it into JSON if they want.
             raise CodeMessageException(response.code, body)

+    @defer.inlineCallbacks
+    def get_raw(self, uri, args={}):
+        """ Gets raw text from the given URI.
+
+        Args:
+            uri (str): The URI to request, not including query parameters
+            args (dict): A dictionary used to create query strings, defaults to
+                None.
+                **Note**: The value of each key is assumed to be an iterable
+                and *not* a string.
+        Returns:
+            Deferred: Succeeds when we get *any* 2xx HTTP response, with the
+            HTTP body as text.
+        Raises:
+            On a non-2xx HTTP response. The response body will be used as the
+            error message.
+        """
+        if len(args):
+            query_bytes = urllib.urlencode(args, True)
+            uri = "%s?%s" % (uri, query_bytes)
+
+        response = yield self.request(
+            "GET",
+            uri.encode("ascii"),
+            headers=Headers({
+                b"User-Agent": [self.user_agent],
+            })
+        )
+
+        body = yield preserve_context_over_fn(readBody, response)
+
+        if 200 <= response.code < 300:
+            defer.returnValue(body)
+        else:
+            raise CodeMessageException(response.code, body)
+

 class CaptchaServerHttpClient(SimpleHttpClient):
     """
@@ -214,12 +257,12 @@ class CaptchaServerHttpClient(SimpleHttpClient):
             bodyProducer=FileBodyProducer(StringIO(query_bytes)),
             headers=Headers({
                 b"Content-Type": [b"application/x-www-form-urlencoded"],
-                b"User-Agent": [self.version_string],
+                b"User-Agent": [self.user_agent],
             })
         )

         try:
-            body = yield readBody(response)
+            body = yield preserve_context_over_fn(readBody, response)
             defer.returnValue(body)
         except PartialDownloadError as e:
             # twisted dislikes google's response, no content length.
@@ -232,3 +275,18 @@ def _print_ex(e):
         _print_ex(ex)
     else:
         logger.exception(e)
+
+
+class InsecureInterceptableContextFactory(ssl.ContextFactory):
+    """
+    Factory for PyOpenSSL SSL contexts which accepts any certificate for any domain.
+
+    Do not use this since it allows an attacker to intercept your communications.
+    """
+
+    def __init__(self):
+        self._context = SSL.Context(SSL.SSLv23_METHOD)
+        self._context.set_verify(VERIFY_NONE, lambda *_: None)
+
+    def getContext(self, hostname, port):
+        return self._context
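Note the `urllib.urlencode(args, True)` calls above (Python 2): the second argument is `doseq`, which expands each value in the dict, which is why `get_raw`'s docstring insists every value is an iterable rather than a string:

import urllib

args = {"dir": ["b", "f"]}
assert urllib.urlencode(args, True) == "dir=b&dir=f"  # doseq expands the list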

synapse/http/matrixfederationclient.py

@@ -16,30 +16,33 @@

 from twisted.internet import defer, reactor, protocol
 from twisted.internet.error import DNSLookupError
-from twisted.web.client import readBody, _AgentBase, _URI
+from twisted.web.client import readBody, HTTPConnectionPool, Agent
 from twisted.web.http_headers import Headers
 from twisted.web._newclient import ResponseDone

 from synapse.http.endpoint import matrix_federation_endpoint
 from synapse.util.async import sleep
-from synapse.util.logcontext import PreserveLoggingContext
+from synapse.util.logcontext import preserve_context_over_fn
 import synapse.metrics

-from syutil.jsonutil import encode_canonical_json
+from canonicaljson import encode_canonical_json

 from synapse.api.errors import (
     SynapseError, Codes, HttpResponseException,
 )

-from syutil.crypto.jsonsign import sign_json
+from signedjson.sign import sign_json

 import simplejson as json
 import logging
+import random
+import sys
 import urllib
 import urlparse


 logger = logging.getLogger(__name__)
+outbound_logger = logging.getLogger("synapse.http.outbound")

 metrics = synapse.metrics.get_metrics_for(__name__)

@@ -53,41 +56,20 @@ incoming_responses_counter = metrics.register_counter(
 )


-class MatrixFederationHttpAgent(_AgentBase):
+MAX_RETRIES = 10

-    def __init__(self, reactor, pool=None):
-        _AgentBase.__init__(self, reactor, pool)

-    def request(self, destination, endpoint, method, path, params, query,
-                headers, body_producer):
+class MatrixFederationEndpointFactory(object):
+    def __init__(self, hs):
+        self.tls_server_context_factory = hs.tls_server_context_factory

-        outgoing_requests_counter.inc(method)
+    def endpointForURI(self, uri):
+        destination = uri.netloc

-        host = b""
-        port = 0
-        fragment = b""
-
-        parsed_URI = _URI(b"http", destination, host, port, path, params,
-                          query, fragment)
-
-        # Set the connection pool key to be the destination.
-        key = destination
-
-        d = self._requestWithEndpoint(key, endpoint, method, parsed_URI,
-                                      headers, body_producer,
-                                      parsed_URI.originForm)
-
-        def _cb(response):
-            incoming_responses_counter.inc(method, response.code)
-            return response
-
-        def _eb(failure):
-            incoming_responses_counter.inc(method, "ERR")
-            return failure
-
-        d.addCallbacks(_cb, _eb)
-
-        return d
+        return matrix_federation_endpoint(
+            reactor, destination, timeout=10,
+            ssl_context_factory=self.tls_server_context_factory
+        )


 class MatrixFederationHttpClient(object):
@@ -103,105 +85,126 @@ class MatrixFederationHttpClient(object):
         self.hs = hs
         self.signing_key = hs.config.signing_key[0]
         self.server_name = hs.hostname
-        self.agent = MatrixFederationHttpAgent(reactor)
+        pool = HTTPConnectionPool(reactor)
+        pool.maxPersistentPerHost = 10
+        self.agent = Agent.usingEndpointFactory(
+            reactor, MatrixFederationEndpointFactory(hs), pool=pool
+        )
         self.clock = hs.get_clock()
         self.version_string = hs.version_string
+        self._next_id = 1
+
+    def _create_url(self, destination, path_bytes, param_bytes, query_bytes):
+        return urlparse.urlunparse(
+            ("matrix", destination, path_bytes, param_bytes, query_bytes, "")
+        )

     @defer.inlineCallbacks
     def _create_request(self, destination, method, path_bytes,
                         body_callback, headers_dict={}, param_bytes=b"",
-                        query_bytes=b"", retry_on_dns_fail=True):
+                        query_bytes=b"", retry_on_dns_fail=True,
+                        timeout=None):
         """ Creates and sends a request to the given url
         """
         headers_dict[b"User-Agent"] = [self.version_string]
         headers_dict[b"Host"] = [destination]

-        url_bytes = urlparse.urlunparse(
-            ("", "", path_bytes, param_bytes, query_bytes, "",)
+        url_bytes = self._create_url(
+            destination, path_bytes, param_bytes, query_bytes
         )

-        logger.info("Sending request to %s: %s %s",
-                    destination, method, url_bytes)
+        txn_id = "%s-O-%s" % (method, self._next_id)
+        self._next_id = (self._next_id + 1) % (sys.maxint - 1)

-        logger.debug(
-            "Types: %s",
-            [
-                type(destination), type(method), type(path_bytes),
-                type(param_bytes),
-                type(query_bytes)
-            ]
+        outbound_logger.info(
+            "{%s} [%s] Sending request: %s %s",
+            txn_id, destination, method, url_bytes
         )

         # XXX: Would be much nicer to retry only at the transaction-layer
         # (once we have reliable transactions in place)

-        retries_left = 5
+        retries_left = MAX_RETRIES

-        endpoint = self._getEndpoint(reactor, destination)
+        http_url_bytes = urlparse.urlunparse(
+            ("", "", path_bytes, param_bytes, query_bytes, "")
+        )

-        while True:
-            producer = None
-            if body_callback:
-                producer = body_callback(method, url_bytes, headers_dict)
-
-            try:
-                with PreserveLoggingContext():
-                    request_deferred = self.agent.request(
-                        destination,
-                        endpoint,
-                        method,
-                        path_bytes,
-                        param_bytes,
-                        query_bytes,
-                        Headers(headers_dict),
-                        producer
-                    )
-
-                    response = yield self.clock.time_bound_deferred(
-                        request_deferred,
-                        time_out=60,
-                    )
-
-                logger.debug("Got response to %s", method)
-                break
-            except Exception as e:
-                if not retry_on_dns_fail and isinstance(e, DNSLookupError):
-                    logger.warn(
-                        "DNS Lookup failed to %s with %s",
-                        destination,
-                        e
-                    )
-                    raise
-
-                logger.warn(
-                    "Sending request failed to %s: %s %s: %s - %s",
-                    destination,
-                    method,
-                    url_bytes,
-                    type(e).__name__,
-                    _flatten_response_never_received(e),
-                )
-
-                if retries_left:
-                    yield sleep(2 ** (5 - retries_left))
-                    retries_left -= 1
-                else:
-                    raise
-
-        logger.info(
-            "Received response %d %s for %s: %s %s",
-            response.code,
-            response.phrase,
-            destination,
-            method,
-            url_bytes
-        )
+        log_result = None
+        try:
+            while True:
+                producer = None
+                if body_callback:
+                    producer = body_callback(method, http_url_bytes, headers_dict)
+
+                try:
+                    def send_request():
+                        request_deferred = preserve_context_over_fn(
+                            self.agent.request,
+                            method,
+                            url_bytes,
+                            Headers(headers_dict),
+                            producer
+                        )
+
+                        return self.clock.time_bound_deferred(
+                            request_deferred,
+                            time_out=timeout/1000. if timeout else 60,
+                        )
+
+                    response = yield preserve_context_over_fn(
+                        send_request,
+                    )
+
+                    log_result = "%d %s" % (response.code, response.phrase,)
+                    break
+                except Exception as e:
+                    if not retry_on_dns_fail and isinstance(e, DNSLookupError):
+                        logger.warn(
+                            "DNS Lookup failed to %s with %s",
+                            destination,
+                            e
+                        )
+                        log_result = "DNS Lookup failed to %s with %s" % (
+                            destination, e
+                        )
+                        raise
+
+                    logger.warn(
+                        "{%s} Sending request failed to %s: %s %s: %s - %s",
+                        txn_id,
+                        destination,
+                        method,
+                        url_bytes,
+                        type(e).__name__,
+                        _flatten_response_never_received(e),
+                    )
+
+                    log_result = "%s - %s" % (
+                        type(e).__name__, _flatten_response_never_received(e),
+                    )
+
+                    if retries_left and not timeout:
+                        delay = 4 ** (MAX_RETRIES + 1 - retries_left)
+                        delay = min(delay, 60)
+                        delay *= random.uniform(0.8, 1.4)
+                        yield sleep(delay)
+                        retries_left -= 1
+                    else:
+                        raise
+        finally:
+            outbound_logger.info(
+                "{%s} [%s] Result: %s",
+                txn_id,
+                destination,
+                log_result,
+            )

         if 200 <= response.code < 300:
             pass
         else:
             # :'(
             # Update transactions table?
-            body = yield readBody(response)
+            body = yield preserve_context_over_fn(readBody, response)
             raise HttpResponseException(
                 response.code, response.phrase, body
             )
@@ -281,10 +284,7 @@ class MatrixFederationHttpClient(object):
                 "Content-Type not application/json"
             )

-        logger.debug("Getting resp body")
-        body = yield readBody(response)
-        logger.debug("Got resp body")
+        body = yield preserve_context_over_fn(readBody, response)

         defer.returnValue(json.loads(body))

     @defer.inlineCallbacks
@@ -327,14 +327,13 @@ class MatrixFederationHttpClient(object):
                 "Content-Type not application/json"
             )

-        logger.debug("Getting resp body")
-        body = yield readBody(response)
-        logger.debug("Got resp body")
+        body = yield preserve_context_over_fn(readBody, response)

         defer.returnValue(json.loads(body))

     @defer.inlineCallbacks
-    def get_json(self, destination, path, args={}, retry_on_dns_fail=True):
+    def get_json(self, destination, path, args={}, retry_on_dns_fail=True,
+                 timeout=None):
         """ GETs some json from the given host homeserver and path

         Args:
@@ -343,6 +342,9 @@ class MatrixFederationHttpClient(object):
             path (str): The HTTP path.
             args (dict): A dictionary used to create query strings, defaults to
                 None.
+            timeout (int): How long to try (in ms) the destination for before
+                giving up. None indicates no timeout and that the request will
+                be retried.
         Returns:
             Deferred: Succeeds when we get *any* HTTP response.
@@ -370,7 +372,8 @@ class MatrixFederationHttpClient(object):
             path.encode("ascii"),
             query_bytes=query_bytes,
             body_callback=body_callback,
-            retry_on_dns_fail=retry_on_dns_fail
+            retry_on_dns_fail=retry_on_dns_fail,
+            timeout=timeout,
         )

         if 200 <= response.code < 300:
@@ -382,9 +385,7 @@ class MatrixFederationHttpClient(object):
                 "Content-Type not application/json"
             )

-        logger.debug("Getting resp body")
-        body = yield readBody(response)
-        logger.debug("Got resp body")
+        body = yield preserve_context_over_fn(readBody, response)

         defer.returnValue(json.loads(body))

@@ -427,19 +428,16 @@ class MatrixFederationHttpClient(object):
         headers = dict(response.headers.getAllRawHeaders())

         try:
-            length = yield _readBodyToFile(response, output_stream, max_size)
+            length = yield preserve_context_over_fn(
+                _readBodyToFile,
+                response, output_stream, max_size
+            )
         except:
             logger.exception("Failed to download body")
             raise

         defer.returnValue((length, headers))

-    def _getEndpoint(self, reactor, destination):
-        return matrix_federation_endpoint(
-            reactor, destination, timeout=10,
-            ssl_context_factory=self.hs.tls_context_factory
-        )


 class _ReadBodyToFileProtocol(protocol.Protocol):
     def __init__(self, stream, deferred, max_size):
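The retry loop in `_create_request` implements capped exponential backoff with jitter. Its delay schedule, extracted into a sketch (assuming `MAX_RETRIES = 10` as defined above):

import random

MAX_RETRIES = 10

def backoff_delays():
    # Yields the sleep, in seconds, before each successive retry.
    retries_left = MAX_RETRIES
    while retries_left:
        delay = 4 ** (MAX_RETRIES + 1 - retries_left)  # 4, 16, 64, ...
        delay = min(delay, 60)                         # never wait more than a minute
        delay *= random.uniform(0.8, 1.4)              # jitter spreads retries out
        yield delay
        retries_left -= 1

delays = list(backoff_delays())
assert len(delays) == 10
assert delays[0] < 6     # the first retry comes quickly
assert delays[-1] <= 84  # later retries settle near the 60s cap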

synapse/http/server.py

@@ -17,10 +17,11 @@

 from synapse.api.errors import (
     cs_exception, SynapseError, CodeMessageException, UnrecognizedRequestError
 )
-from synapse.util.logcontext import LoggingContext
+from synapse.util.logcontext import LoggingContext, PreserveLoggingContext
 import synapse.metrics
+import synapse.events

-from syutil.jsonutil import (
+from canonicaljson import (
     encode_canonical_json, encode_pretty_printed_json
 )

@@ -32,6 +33,7 @@ from twisted.web.util import redirectTo
 import collections
 import logging
 import urllib
+import ujson

 logger = logging.getLogger(__name__)

@@ -78,51 +80,39 @@ def request_handler(request_handler):
         _next_request_id += 1
         with LoggingContext(request_id) as request_context:
             request_context.request = request_id
-            code = None
-            start = self.clock.time_msec()
-            try:
-                logger.info(
-                    "Received request: %s %s",
-                    request.method, request.path
-                )
-                yield request_handler(self, request)
-                code = request.code
-            except CodeMessageException as e:
-                code = e.code
-                if isinstance(e, SynapseError):
-                    logger.info(
-                        "%s SynapseError: %s - %s", request, code, e.msg
-                    )
-                else:
-                    logger.exception(e)
-                outgoing_responses_counter.inc(request.method, str(code))
-                respond_with_json(
-                    request, code, cs_exception(e), send_cors=True,
-                    pretty_print=_request_user_agent_is_curl(request),
-                    version_string=self.version_string,
-                )
-            except:
-                code = 500
-                logger.exception(
-                    "Failed handle request %s.%s on %r: %r",
-                    request_handler.__module__,
-                    request_handler.__name__,
-                    self,
-                    request
-                )
-                respond_with_json(
-                    request,
-                    500,
-                    {"error": "Internal server error"},
-                    send_cors=True
-                )
-            finally:
-                code = str(code) if code else "-"
-                end = self.clock.time_msec()
-                logger.info(
-                    "Processed request: %dms %s %s %s",
-                    end-start, code, request.method, request.path
-                )
+            with request.processing():
+                try:
+                    d = request_handler(self, request)
+                    with PreserveLoggingContext():
+                        yield d
+                except CodeMessageException as e:
+                    code = e.code
+                    if isinstance(e, SynapseError):
+                        logger.info(
+                            "%s SynapseError: %s - %s", request, code, e.msg
+                        )
+                    else:
+                        logger.exception(e)
+                    outgoing_responses_counter.inc(request.method, str(code))
+                    respond_with_json(
+                        request, code, cs_exception(e), send_cors=True,
+                        pretty_print=_request_user_agent_is_curl(request),
+                        version_string=self.version_string,
+                    )
+                except:
+                    logger.exception(
+                        "Failed handle request %s.%s on %r: %r",
+                        request_handler.__module__,
+                        request_handler.__name__,
+                        self,
+                        request
+                    )
+                    respond_with_json(
+                        request,
+                        500,
+                        {"error": "Internal server error"},
+                        send_cors=True
+                    )
     return wrapped_request_handler

@@ -166,9 +156,10 @@ class JsonResource(HttpServer, resource.Resource):

     _PathEntry = collections.namedtuple("_PathEntry", ["pattern", "callback"])

-    def __init__(self, hs):
+    def __init__(self, hs, canonical_json=True):
         resource.Resource.__init__(self)

+        self.canonical_json = canonical_json
         self.clock = hs.get_clock()
         self.path_regexs = {}
         self.version_string = hs.version_string
@@ -217,7 +208,7 @@ class JsonResource(HttpServer, resource.Resource):
                 incoming_requests_counter.inc(request.method, servlet_classname)

                 args = [
-                    urllib.unquote(u).decode("UTF-8") for u in m.groups()
+                    urllib.unquote(u).decode("UTF-8") if u else u for u in m.groups()
                 ]

                 callback_return = yield callback(request, *args)
@@ -254,6 +245,7 @@ class JsonResource(HttpServer, resource.Resource):
             response_code_message=response_code_message,
             pretty_print=_request_user_agent_is_curl(request),
             version_string=self.version_string,
+            canonical_json=self.canonical_json,
         )

@@ -275,11 +267,15 @@ class RootRedirect(resource.Resource):

 def respond_with_json(request, code, json_object, send_cors=False,
                       response_code_message=None, pretty_print=False,
-                      version_string=""):
+                      version_string="", canonical_json=True):
     if pretty_print:
         json_bytes = encode_pretty_printed_json(json_object) + "\n"
     else:
-        json_bytes = encode_canonical_json(json_object)
+        if canonical_json or synapse.events.USE_FROZEN_DICTS:
+            json_bytes = encode_canonical_json(json_object)
+        else:
+            # ujson doesn't like frozen_dicts.
+            json_bytes = ujson.dumps(json_object, ensure_ascii=False)

     return respond_with_json_bytes(
         request, code, json_bytes,
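The encoder choice in `respond_with_json` trades determinism for speed: canonical JSON gives stable, sorted output (and copes with synapse's frozen dicts), while ujson is the fast path for plain dicts. The same decision in a condensed sketch, using the standard `json` module as a stand-in for both encoders:

import json

def encode_response(json_object, canonical_json=True, use_frozen_dicts=False):
    if canonical_json or use_frozen_dicts:
        # deterministic output: sorted keys, no extra whitespace
        return json.dumps(json_object, sort_keys=True, separators=(',', ':'))
    # fast path, safe for plain dicts only
    return json.dumps(json_object, ensure_ascii=False)

assert encode_response({"b": 1, "a": 2}) == '{"a":2,"b":1}'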

synapse/metrics/__init__.py

@@ -17,9 +17,13 @@

 from __future__ import absolute_import

 import logging
-from resource import getrusage, getpagesize, RUSAGE_SELF
+from resource import getrusage, RUSAGE_SELF
+import functools
 import os
 import stat
+import time
+
+from twisted.internet import reactor

 from .metric import (
     CounterMetric, CallbackMetric, DistributionMetric, CacheMetric
@@ -96,7 +100,6 @@ def render_all():

 # process resource usage

 rusage = None
-PAGE_SIZE = getpagesize()


 def update_resource_metrics():
@@ -109,8 +112,8 @@ resource_metrics = get_metrics_for("process.resource")
 resource_metrics.register_callback("utime", lambda: rusage.ru_utime * 1000)
 resource_metrics.register_callback("stime", lambda: rusage.ru_stime * 1000)

-# pages
-resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * PAGE_SIZE)
+# kilobytes
+resource_metrics.register_callback("maxrss", lambda: rusage.ru_maxrss * 1024)

 TYPES = {
     stat.S_IFSOCK: "SOCK",
@@ -127,6 +130,10 @@ def _process_fds():
     counts = {(k,): 0 for k in TYPES.values()}
     counts[("other",)] = 0

+    # Not every OS will have a /proc/self/fd directory
+    if not os.path.exists("/proc/self/fd"):
+        return counts
+
     for fd in os.listdir("/proc/self/fd"):
         try:
             s = os.stat("/proc/self/fd/%s" % (fd))
@@ -144,3 +151,50 @@ def _process_fds():
     return counts

 get_metrics_for("process").register_callback("fds", _process_fds, labels=["type"])
+
+reactor_metrics = get_metrics_for("reactor")
+tick_time = reactor_metrics.register_distribution("tick_time")
+pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
+
+
+def runUntilCurrentTimer(func):
+
+    @functools.wraps(func)
+    def f(*args, **kwargs):
+        now = reactor.seconds()
+        num_pending = 0
+
+        # _newTimedCalls is one long list of *all* pending calls. Below loop
+        # is based off of impl of reactor.runUntilCurrent
+        for delayed_call in reactor._newTimedCalls:
+            if delayed_call.time > now:
+                break
+
+            if delayed_call.delayed_time > 0:
+                continue
+
+            num_pending += 1
+
+        num_pending += len(reactor.threadCallQueue)
+
+        start = time.time() * 1000
+        ret = func(*args, **kwargs)
+        end = time.time() * 1000
+        tick_time.inc_by(end - start)
+        pending_calls_metric.inc_by(num_pending)
+        return ret
+
+    return f
+
+
+try:
+    # Ensure the reactor has all the attributes we expect
+    reactor.runUntilCurrent
+    reactor._newTimedCalls
+    reactor.threadCallQueue
+
+    # runUntilCurrent is called when we have pending calls. It is called once
+    # per iteration after fd polling.
+    reactor.runUntilCurrent = runUntilCurrentTimer(reactor.runUntilCurrent)
+except AttributeError:
+    pass
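The monkey-patch above is a timing decorator around `reactor.runUntilCurrent`. The same wrap-and-measure pattern on its own, with a minimal stand-in for the `DistributionMetric` used by the real module:

import functools
import time

class Distribution(object):
    # Minimal stand-in for a distribution metric: a count and a running total.
    def __init__(self):
        self.count, self.total = 0, 0.0

    def inc_by(self, v):
        self.count += 1
        self.total += v

tick_time = Distribution()

def timed(func):
    @functools.wraps(func)
    def f(*args, **kwargs):
        start = time.time() * 1000  # milliseconds, as in the module above
        ret = func(*args, **kwargs)
        tick_time.inc_by(time.time() * 1000 - start)
        return ret
    return f

@timed
def tick():
    time.sleep(0.01)

tick()
assert tick_time.count == 1  # one ~10ms tick recorded in tick_time.total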

synapse/notifier.py

@@ -14,10 +14,11 @@
# limitations under the License. # limitations under the License.
from twisted.internet import defer from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError
from synapse.util.logutils import log_function from synapse.util.logutils import log_function
from synapse.util.logcontext import PreserveLoggingContext from synapse.util.async import run_on_reactor, ObservableDeferred
from synapse.util.async import run_on_reactor
from synapse.types import StreamToken from synapse.types import StreamToken
import synapse.metrics import synapse.metrics
@@ -43,62 +44,78 @@ def count(func, l):
class _NotificationListener(object): class _NotificationListener(object):
""" This represents a single client connection to the events stream. """ This represents a single client connection to the events stream.
The events stream handler will have yielded to the deferred, so to The events stream handler will have yielded to the deferred, so to
notify the handler it is sufficient to resolve the deferred. notify the handler it is sufficient to resolve the deferred.
"""
__slots__ = ["deferred"]
def __init__(self, deferred):
self.deferred = deferred
class _NotifierUserStream(object):
"""This represents a user connected to the event stream.
It tracks the most recent stream token for that user.
At a given point a user may have a number of streams listening for
events.
This listener will also keep track of which rooms it is listening in This listener will also keep track of which rooms it is listening in
so that it can remove itself from the indexes in the Notifier class. so that it can remove itself from the indexes in the Notifier class.
""" """
def __init__(self, user, rooms, from_token, limit, timeout, deferred, def __init__(self, user, rooms, current_token, time_now_ms,
appservice=None): appservice=None):
self.user = user self.user = str(user)
self.appservice = appservice self.appservice = appservice
self.from_token = from_token self.rooms = set(rooms)
self.limit = limit self.current_token = current_token
self.timeout = timeout self.last_notified_ms = time_now_ms
self.deferred = deferred
self.rooms = rooms
self.timer = None
def notified(self): self.notify_deferred = ObservableDeferred(defer.Deferred())
return self.deferred.called
def notify(self, notifier, events, start_token, end_token): def notify(self, stream_key, stream_id, time_now_ms):
""" Inform whoever is listening about the new events. This will """Notify any listeners for this user of a new event from an
also remove this listener from all the indexes in the Notifier event source.
Args:
stream_key(str): The stream the event came from.
stream_id(str): The new id for the stream the event came from.
time_now_ms(int): The current time in milliseconds.
"""
self.current_token = self.current_token.copy_and_advance(
stream_key, stream_id
)
self.last_notified_ms = time_now_ms
noify_deferred = self.notify_deferred
self.notify_deferred = ObservableDeferred(defer.Deferred())
noify_deferred.callback(self.current_token)
def remove(self, notifier):
""" Remove this listener from all the indexes in the Notifier
it knows about. it knows about.
""" """
result = (events, (start_token, end_token))
try:
self.deferred.callback(result)
notified_events_counter.inc_by(len(events))
except defer.AlreadyCalledError:
pass
# Should the following be done be using intrusively linked lists?
# -- erikj
for room in self.rooms: for room in self.rooms:
lst = notifier.room_to_listeners.get(room, set()) lst = notifier.room_to_user_streams.get(room, set())
lst.discard(self) lst.discard(self)
notifier.user_to_listeners.get(self.user, set()).discard(self) notifier.user_to_user_stream.pop(self.user)
if self.appservice: if self.appservice:
notifier.appservice_to_listeners.get( notifier.appservice_to_user_streams.get(
self.appservice, set() self.appservice, set()
).discard(self) ).discard(self)
# Cancel the timeout for this notifer if one exists. def count_listeners(self):
if self.timer is not None: return len(self.notify_deferred.observers())
try:
notifier.clock.cancel_call_later(self.timer) def new_listener(self, token):
except: """Returns a deferred that is resolved when there is a new token
logger.warn("Failed to cancel notifier timer") greater than the given token.
"""
if self.current_token.is_after(token):
return _NotificationListener(defer.succeed(self.current_token))
else:
return _NotificationListener(self.notify_deferred.observe())
class Notifier(object): class Notifier(object):
@@ -108,14 +125,18 @@ class Notifier(object):
Primarily used from the /events stream. Primarily used from the /events stream.
""" """
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
def __init__(self, hs): def __init__(self, hs):
self.hs = hs self.hs = hs
self.room_to_listeners = {} self.user_to_user_stream = {}
self.user_to_listeners = {} self.room_to_user_streams = {}
self.appservice_to_listeners = {} self.appservice_to_user_streams = {}
self.event_sources = hs.get_event_sources() self.event_sources = hs.get_event_sources()
self.store = hs.get_datastore()
self.pending_new_room_events = []
self.clock = hs.get_clock() self.clock = hs.get_clock()
@@ -123,370 +144,310 @@ class Notifier(object):
"user_joined_room", self._user_joined_room "user_joined_room", self._user_joined_room
) )
self.clock.looping_call(
self.remove_expired_streams, self.UNUSED_STREAM_EXPIRY_MS
)
# This is not a very cheap test to perform, but it's only executed # This is not a very cheap test to perform, but it's only executed
# when rendering the metrics page, which is likely once per minute at # when rendering the metrics page, which is likely once per minute at
# most when scraping it. # most when scraping it.
def count_listeners(): def count_listeners():
all_listeners = set() all_user_streams = set()
for x in self.room_to_listeners.values(): for x in self.room_to_user_streams.values():
all_listeners |= x all_user_streams |= x
for x in self.user_to_listeners.values(): for x in self.user_to_user_stream.values():
all_listeners |= x all_user_streams.add(x)
for x in self.appservice_to_listeners.values(): for x in self.appservice_to_user_streams.values():
all_listeners |= x all_user_streams |= x
return len(all_listeners) return sum(stream.count_listeners() for stream in all_user_streams)
metrics.register_callback("listeners", count_listeners) metrics.register_callback("listeners", count_listeners)
metrics.register_callback( metrics.register_callback(
"rooms", "rooms",
lambda: count(bool, self.room_to_listeners.values()), lambda: count(bool, self.room_to_user_streams.values()),
) )
metrics.register_callback( metrics.register_callback(
"users", "users",
lambda: count(bool, self.user_to_listeners.values()), lambda: len(self.user_to_user_stream),
) )
metrics.register_callback( metrics.register_callback(
"appservices", "appservices",
lambda: count(bool, self.appservice_to_listeners.values()), lambda: count(bool, self.appservice_to_user_streams.values()),
) )
@log_function @log_function
@defer.inlineCallbacks @defer.inlineCallbacks
def on_new_room_event(self, event, extra_users=[]): def on_new_room_event(self, event, room_stream_id, max_room_stream_id,
extra_users=[]):
""" Used by handlers to inform the notifier something has happened """ Used by handlers to inform the notifier something has happened
in the room, room event wise. in the room, room event wise.
This triggers the notifier to wake up any listeners that are This triggers the notifier to wake up any listeners that are
listening to the room, and any listeners for the users in the listening to the room, and any listeners for the users in the
`extra_users` param. `extra_users` param.
The events can be peristed out of order. The notifier will wait
until all previous events have been persisted before notifying
the client streams.
""" """
yield run_on_reactor() yield run_on_reactor()
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
self._notify_pending_new_room_events(max_room_stream_id)
def _notify_pending_new_room_events(self, max_room_stream_id):
"""Notify for the room events that were queued waiting for a previous
event to be persisted.
Args:
max_room_stream_id(int): The highest stream_id below which all
events have been persisted.
"""
pending = self.pending_new_room_events
self.pending_new_room_events = []
for room_stream_id, event, extra_users in pending:
if room_stream_id > max_room_stream_id:
self.pending_new_room_events.append((
room_stream_id, event, extra_users
))
else:
self._on_new_room_event(event, room_stream_id, extra_users)
+    def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
+        """Notify any user streams that are interested in this room event"""
         # poke any interested application service.
         self.hs.get_handlers().appservice_handler.notify_interested_services(
             event
         )

-        room_id = event.room_id
-
-        room_source = self.event_sources.sources["room"]
-
-        room_listeners = self.room_to_listeners.get(room_id, set())
-
-        _discard_if_notified(room_listeners)
-
-        listeners = room_listeners.copy()
-
-        for user in extra_users:
-            user_listeners = self.user_to_listeners.get(user, set())
-
-            _discard_if_notified(user_listeners)
-
-            listeners |= user_listeners
+        app_streams = set()

-        for appservice in self.appservice_to_listeners:
+        for appservice in self.appservice_to_user_streams:
             # TODO (kegan): Redundant appservice listener checks?
-            # App services will already be in the room_to_listeners set, but
+            # App services will already be in the room_to_user_streams set, but
             # that isn't enough. They need to be checked here in order to
             # receive *invites* for users they are interested in. Does this
-            # make the room_to_listeners check somewhat obsolete?
+            # make the room_to_user_streams check somewhat obsolete?
             if appservice.is_interested(event):
-                app_listeners = self.appservice_to_listeners.get(
+                app_user_streams = self.appservice_to_user_streams.get(
                     appservice, set()
                 )
-
-                _discard_if_notified(app_listeners)
-
-                listeners |= app_listeners
-
-        logger.debug("on_new_room_event listeners %s", listeners)
-
-        # TODO (erikj): Can we make this more efficient by hitting the
-        # db once?
-        @defer.inlineCallbacks
-        def notify(listener):
-            events, end_key = yield room_source.get_new_events_for_user(
-                listener.user,
-                listener.from_token.room_key,
-                listener.limit,
-            )
-
-            if events:
-                end_token = listener.from_token.copy_and_replace(
-                    "room_key", end_key
-                )
-
-                listener.notify(
-                    self, events, listener.from_token, end_token
-                )
-
-        def eb(failure):
-            logger.exception("Failed to notify listener", failure)
-
-        with PreserveLoggingContext():
-            yield defer.DeferredList(
-                [notify(l).addErrback(eb) for l in listeners],
-                consumeErrors=True,
-            )
+                app_streams |= app_user_streams
+
+        self.on_new_event(
+            "room_key", room_stream_id,
+            users=extra_users,
+            rooms=[event.room_id],
+            extra_streams=app_streams,
+        )

     @defer.inlineCallbacks
     @log_function
-    def on_new_user_event(self, users=[], rooms=[]):
-        """ Used to inform listeners that something has happened
-        presence/user event wise.
+    def on_new_event(self, stream_key, new_token, users=[], rooms=[],
+                     extra_streams=set()):
+        """ Used to inform listeners that something has happened event wise.

         Will wake up all listeners for the given users and rooms.
         """
         yield run_on_reactor()
+        user_streams = set()

-        # TODO(paul): This is horrible, having to manually list every event
-        # source here individually
-        presence_source = self.event_sources.sources["presence"]
-        typing_source = self.event_sources.sources["typing"]
-
-        listeners = set()
-
         for user in users:
-            user_listeners = self.user_to_listeners.get(user, set())
-
-            _discard_if_notified(user_listeners)
-
-            listeners |= user_listeners
+            user_stream = self.user_to_user_stream.get(str(user))
+            if user_stream is not None:
+                user_streams.add(user_stream)

         for room in rooms:
-            room_listeners = self.room_to_listeners.get(room, set())
-
-            _discard_if_notified(room_listeners)
-
-            listeners |= room_listeners
+            user_streams |= self.room_to_user_streams.get(room, set())

-        @defer.inlineCallbacks
-        def notify(listener):
-            presence_events, presence_end_key = (
-                yield presence_source.get_new_events_for_user(
-                    listener.user,
-                    listener.from_token.presence_key,
-                    listener.limit,
-                )
-            )
-            typing_events, typing_end_key = (
-                yield typing_source.get_new_events_for_user(
-                    listener.user,
-                    listener.from_token.typing_key,
-                    listener.limit,
-                )
-            )
-
-            if presence_events or typing_events:
-                end_token = listener.from_token.copy_and_replace(
-                    "presence_key", presence_end_key
-                ).copy_and_replace(
-                    "typing_key", typing_end_key
-                )
-
-                listener.notify(
-                    self,
-                    presence_events + typing_events,
-                    listener.from_token,
-                    end_token
-                )
-
-        def eb(failure):
-            logger.error(
-                "Failed to notify listener",
-                exc_info=(
-                    failure.type,
-                    failure.value,
-                    failure.getTracebackObject())
-            )
-
-        with PreserveLoggingContext():
-            yield defer.DeferredList(
-                [notify(l).addErrback(eb) for l in listeners],
-                consumeErrors=True,
-            )
+        time_now_ms = self.clock.time_msec()
+        for user_stream in user_streams:
+            try:
+                user_stream.notify(stream_key, new_token, time_now_ms)
+            except:
+                logger.exception("Failed to notify listener")

     @defer.inlineCallbacks
-    def wait_for_events(self, user, rooms, filter, timeout, callback):
+    def wait_for_events(self, user, timeout, callback, room_ids=None,
+                        from_token=StreamToken("s0", "0", "0", "0", "0")):
         """Wait until the callback returns a non empty response or the
         timeout fires.
         """
-        deferred = defer.Deferred()
-
-        from_token = StreamToken("s0", "0", "0")
-
-        listener = [_NotificationListener(
-            user=user,
-            rooms=rooms,
-            from_token=from_token,
-            limit=1,
-            timeout=timeout,
-            deferred=deferred,
-        )]
-
-        if timeout:
-            self._register_with_keys(listener[0])
-
-        result = yield callback()
-        timer = [None]
-
-        if timeout:
-            timed_out = [False]
-
-            def _timeout_listener():
-                timed_out[0] = True
-                timer[0] = None
-                listener[0].notify(self, [], from_token, from_token)
-
-            # We create multiple notification listeners so we have to manage
-            # canceling the timeout ourselves.
-            timer[0] = self.clock.call_later(timeout/1000., _timeout_listener)
-
-            while not result and not timed_out[0]:
-                yield deferred
-                deferred = defer.Deferred()
-
-                listener[0] = _NotificationListener(
-                    user=user,
-                    rooms=rooms,
-                    from_token=from_token,
-                    limit=1,
-                    timeout=timeout,
-                    deferred=deferred,
-                )
-
-                self._register_with_keys(listener[0])
-                result = yield callback()
-
-            if timer[0] is not None:
-                try:
-                    self.clock.cancel_call_later(timer[0])
-                except:
-                    logger.exception("Failed to cancel notifier timer")
+        user = str(user)
+        user_stream = self.user_to_user_stream.get(user)
+        if user_stream is None:
+            appservice = yield self.store.get_app_service_by_user_id(user)
+            current_token = yield self.event_sources.get_current_token()
+            if room_ids is None:
+                rooms = yield self.store.get_rooms_for_user(user)
+                room_ids = [room.room_id for room in rooms]
+            user_stream = _NotifierUserStream(
+                user=user,
+                rooms=room_ids,
+                appservice=appservice,
+                current_token=current_token,
+                time_now_ms=self.clock.time_msec(),
+            )
+            self._register_with_keys(user_stream)
+
+        result = None
+        if timeout:
+            # Will be set to a _NotificationListener that we'll be waiting on.
+            # Allows us to cancel it.
+            listener = None
+
+            def timed_out():
+                if listener:
+                    listener.deferred.cancel()
+            timer = self.clock.call_later(timeout/1000., timed_out)
+
+            prev_token = from_token
+            while not result:
+                try:
+                    current_token = user_stream.current_token
+
+                    result = yield callback(prev_token, current_token)
+                    if result:
+                        break
+
+                    # Now we wait for the _NotifierUserStream to be told there
+                    # is a new token.
+                    # We need to supply the token we supplied to callback so
+                    # that we don't miss any current_token updates.
+                    prev_token = current_token
+                    listener = user_stream.new_listener(prev_token)
+                    yield listener.deferred
+                except defer.CancelledError:
+                    break
+
+            self.clock.cancel_call_later(timer, ignore_errs=True)
+        else:
+            current_token = user_stream.current_token
+            result = yield callback(from_token, current_token)

         defer.returnValue(result)
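The contract of the new `wait_for_events` is that the callback receives the token window it should inspect, and returns a truthy result to finish or a falsy one to wait for the stream to advance and be called again with a fresh window. A plain-Python analogue of that retry loop (synchronous, with illustrative names; the real code parks on a listener's deferred instead of a `wait_for_advance` call):

    def wait_for_events_sync(get_current_token, wait_for_advance, callback):
        prev_token = 0
        while True:
            current_token = get_current_token()
            result = callback(prev_token, current_token)
            if result:
                return result
            prev_token = current_token    # never re-inspect this window
            wait_for_advance(prev_token)  # block until the token moves on

    tokens = iter([1, 1, 2])
    events_at = {2: ["event"]}
    result = wait_for_events_sync(
        get_current_token=lambda: next(tokens),
        wait_for_advance=lambda token: None,  # no-op stand-in for blocking
        callback=lambda lo, hi: events_at.get(hi, []),
    )
    assert result == ["event"]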

-    def get_events_for(self, user, rooms, pagination_config, timeout):
+    @defer.inlineCallbacks
+    def get_events_for(self, user, pagination_config, timeout,
+                       only_room_events=False,
+                       is_guest=False, guest_room_id=None):
         """ For the given user and rooms, return any new events for them. If
         there are no new events wait for up to `timeout` milliseconds for any
         new events to happen before returning.
+
+        If `only_room_events` is `True` only room events will be returned.
         """
-        deferred = defer.Deferred()
-
-        self._get_events(
-            deferred, user, rooms, pagination_config.from_token,
-            pagination_config.limit, timeout
-        ).addErrback(deferred.errback)
-
-        return deferred
-
-    @defer.inlineCallbacks
-    def _get_events(self, deferred, user, rooms, from_token, limit, timeout):
+        from_token = pagination_config.from_token
         if not from_token:
             from_token = yield self.event_sources.get_current_token()

-        appservice = yield self.hs.get_datastore().get_app_service_by_user_id(
-            user.to_string()
-        )
-
-        listener = _NotificationListener(
-            user,
-            rooms,
-            from_token,
-            limit,
-            timeout,
-            deferred,
-            appservice=appservice
-        )
-
-        def _timeout_listener():
-            # TODO (erikj): We should probably set to_token to the current
-            # max rather than reusing from_token.
-            # Remove the timer from the listener so we don't try to cancel it.
-            listener.timer = None
-            listener.notify(
-                self,
-                [],
-                listener.from_token,
-                listener.from_token,
-            )
-
-        if timeout:
-            self._register_with_keys(listener)
-
-        yield self._check_for_updates(listener)
-
-        if not timeout:
-            _timeout_listener()
-        else:
-            # Only add the timer if the listener hasn't been notified
-            if not listener.notified():
-                listener.timer = self.clock.call_later(
-                    timeout/1000.0, _timeout_listener
-                )
-        return
+        limit = pagination_config.limit
+
+        room_ids = []
+        if is_guest:
+            if guest_room_id:
+                if not (yield self._is_world_readable(guest_room_id)):
+                    raise AuthError(403, "Guest access not allowed")
+                room_ids = [guest_room_id]
+        else:
+            rooms = yield self.store.get_rooms_for_user(user.to_string())
+            room_ids = [room.room_id for room in rooms]
+
+        @defer.inlineCallbacks
+        def check_for_updates(before_token, after_token):
+            if not after_token.is_after(before_token):
+                defer.returnValue(None)
+
+            events = []
+            end_token = from_token
+
+            for name, source in self.event_sources.sources.items():
+                keyname = "%s_key" % name
+                before_id = getattr(before_token, keyname)
+                after_id = getattr(after_token, keyname)
+
+                if before_id == after_id:
+                    continue
+
+                if only_room_events and name != "room":
+                    continue
+
+                new_events, new_key = yield source.get_new_events(
+                    user=user,
+                    from_key=getattr(from_token, keyname),
+                    limit=limit,
+                    is_guest=is_guest,
+                    room_ids=room_ids,
+                )
+
+                if name == "room":
+                    room_member_handler = self.hs.get_handlers().room_member_handler
+                    new_events = yield room_member_handler._filter_events_for_client(
+                        user.to_string(),
+                        new_events,
+                        is_guest=is_guest,
+                        require_all_visible_for_guests=False
+                    )
+
+                events.extend(new_events)
+                end_token = end_token.copy_and_replace(keyname, new_key)
+
+            if events:
+                defer.returnValue((events, (from_token, end_token)))
+            else:
+                defer.returnValue(None)
+
+        result = yield self.wait_for_events(
+            user, timeout, check_for_updates, room_ids=room_ids,
+            from_token=from_token
+        )
+
+        if result is None:
+            result = ([], (from_token, from_token))
+
+        defer.returnValue(result)
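`check_for_updates` avoids polling every event source by diffing the per-source components of the before/after stream tokens, and only querying sources whose component moved. A simplified stand-in for that token-delta check (the real `StreamToken` carries more machinery than this namedtuple):

    from collections import namedtuple

    # One integer position per event source.
    Token = namedtuple("Token", ["room_key", "presence_key", "typing_key"])

    def changed_sources(before, after):
        """Names of the sources whose component advanced between tokens."""
        return [
            name for name in Token._fields
            if getattr(before, name) != getattr(after, name)
        ]

    before = Token(room_key=10, presence_key=3, typing_key=7)
    after = Token(room_key=12, presence_key=3, typing_key=8)
    assert changed_sources(before, after) == ["room_key", "typing_key"]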

     @defer.inlineCallbacks
+    def _is_world_readable(self, room_id):
+        state = yield self.hs.get_state_handler().get_current_state(
+            room_id,
+            EventTypes.RoomHistoryVisibility
+        )
+        if state and "history_visibility" in state.content:
+            defer.returnValue(
+                state.content["history_visibility"] == "world_readable"
+            )
+        else:
+            defer.returnValue(False)
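The guest gate above reduces to reading the room's `m.room.history_visibility` state event. The same predicate over plain data (the dict stands in for the state event's content):

    def is_world_readable(content):
        if content and "history_visibility" in content:
            return content["history_visibility"] == "world_readable"
        return False

    assert is_world_readable({"history_visibility": "world_readable"})
    assert not is_world_readable({"history_visibility": "shared"})
    assert not is_world_readable(None)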
     @log_function
-    def _check_for_updates(self, listener):
-        # TODO (erikj): We need to think about limits across multiple sources
-        events = []
-
-        from_token = listener.from_token
-        limit = listener.limit
-
-        # TODO (erikj): DeferredList?
-        for name, source in self.event_sources.sources.items():
-            keyname = "%s_key" % name
-
-            stuff, new_key = yield source.get_new_events_for_user(
-                listener.user,
-                getattr(from_token, keyname),
-                limit,
-            )
-
-            events.extend(stuff)
-
-            from_token = from_token.copy_and_replace(keyname, new_key)
-
-        end_token = from_token
-
-        if events:
-            listener.notify(self, events, listener.from_token, end_token)
-
-        defer.returnValue(listener)
+    def remove_expired_streams(self):
+        time_now_ms = self.clock.time_msec()
+        expired_streams = []
+        expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
+        for stream in self.user_to_user_stream.values():
+            if stream.count_listeners():
+                continue
+
+            if stream.last_notified_ms < expire_before_ts:
+                expired_streams.append(stream)
+
+        for expired_stream in expired_streams:
+            expired_stream.remove(self)
+
+    @log_function
+    def _register_with_keys(self, user_stream):
+        self.user_to_user_stream[user_stream.user] = user_stream
+
+        for room in user_stream.rooms:
+            s = self.room_to_user_streams.setdefault(room, set())
+            s.add(user_stream)
+
+        if user_stream.appservice:
+            self.appservice_to_user_streams.setdefault(
+                user_stream.appservice, set()
+            ).add(user_stream)
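`remove_expired_streams` implements a simple policy: drop a stream only if nothing is currently listening on it and it has not been notified recently. A sketch of that predicate; the ten-minute expiry value is an assumption for illustration (the real `UNUSED_STREAM_EXPIRY_MS` is defined elsewhere on the class and is not shown in this diff):

    import time

    UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000  # assumed value, for the example

    def is_expired(listener_count, last_notified_ms, now_ms):
        if listener_count:  # someone is actively waiting on this stream
            return False
        return last_notified_ms < now_ms - UNUSED_STREAM_EXPIRY_MS

    now = int(time.time() * 1000)
    assert is_expired(0, now - 11 * 60 * 1000, now)      # idle too long
    assert not is_expired(1, now - 11 * 60 * 1000, now)  # has a listener
    assert not is_expired(0, now - 60 * 1000, now)       # notified recently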

     def _user_joined_room(self, user, room_id):
-        new_listeners = self.user_to_listeners.get(user, set())
-
-        listeners = self.room_to_listeners.setdefault(room_id, set())
-        listeners |= new_listeners
-
-        for l in new_listeners:
-            l.rooms.add(room_id)
+        user = str(user)
+        new_user_stream = self.user_to_user_stream.get(user)
+        if new_user_stream is not None:
+            room_streams = self.room_to_user_streams.setdefault(
+                room_id, set()
+            )
+            room_streams.add(new_user_stream)
+            new_user_stream.rooms.add(room_id)
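`_user_joined_room` keeps both lookup directions consistent, so that later room-targeted wake-ups reach a stream that was created before the join. A tiny standalone illustration (toy `Stream` class, illustrative names):

    class Stream:
        def __init__(self):
            self.rooms = set()

    room_to_streams = {}

    def user_joined(stream, room_id):
        # Forward index: room -> streams, consulted when a room event arrives.
        room_to_streams.setdefault(room_id, set()).add(stream)
        # Back-reference: the stream records its rooms for later registration.
        stream.rooms.add(room_id)

    stream = Stream()
    user_joined(stream, "!room:hs")
    assert stream in room_to_streams["!room:hs"]
    assert "!room:hs" in stream.rooms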

-def _discard_if_notified(listener_set):
-    """Remove any 'stale' listeners from the given set.
-    """
-    to_discard = set()
-    for l in listener_set:
-        if l.notified():
-            to_discard.add(l)
-
-    listener_set -= to_discard
