
Compare commits


216 Commits

Author SHA1 Message Date
Neil Johnson
3f589f9097 7 char sha in changelog 2018-06-06 11:39:42 +01:00
Neil Johnson
176f1206d1 Update CHANGES.rst 2018-06-06 11:28:30 +01:00
Neil Johnson
61134debdc bump version and changelog 2018-06-06 11:26:21 +01:00
Amber Brown
23c785992f Fix metric documentation tables (#3341) 2018-06-06 07:12:16 +01:00
Richard van der Hoff
b3b16490f7 Add note to changelog on prometheus metrics 2018-06-06 07:08:36 +01:00
Richard van der Hoff
592ee217a3 Merge commit 'b7e7fd2' into release-v0.31.0 2018-06-06 07:02:02 +01:00
Richard van der Hoff
b7e7fd2d0e Fix replication metrics
fix bug introduced in #3256
2018-06-04 16:23:05 +01:00
Neil Johnson
244ab974e7 bump version and changelog 2018-06-04 16:09:58 +01:00
Richard van der Hoff
694968fa81 Hopefully, fix LaterGuage error handling 2018-06-04 15:59:14 +01:00
Amber Brown
5dbf305444 Put python's logs into Trial when running unit tests (#3319) 2018-06-04 16:06:06 +10:00
Amber Brown
86accac5d5 Merge pull request #3328 from intelfx/fix-metrics-LaterGauge-usage
federation: fix LaterGauge usage
2018-06-04 15:35:32 +10:00
Ivan Shapovalov
7d9d75e4e8 federation/send_queue.py: fix usage of LaterGauge
Fixes a startup crash due to commit df9f72d9e5
"replacing portions".
2018-06-03 14:16:17 +03:00
Richard van der Hoff
a9e97dcd65 Merge pull request #3317 from thegcat/feature/3312-add_ipv6_to_blacklist_example_config
Add private IPv6 addresses to example config for url preview blacklist
2018-06-01 14:45:14 +01:00
Neil Johnson
71477f3317 Merge pull request #3264 from matrix-org/neil/sign-up-stats
daily user type phone home stats
2018-06-01 13:42:01 +00:00
Richard van der Hoff
41006d9c28 Merge pull request #3318 from matrix-org/rav/ignore_depth_on_rrs
Ignore depth when updating read-receipts
2018-06-01 14:14:45 +01:00
Richard van der Hoff
9f797a24a4 Handle RRs which arrive before their events 2018-06-01 14:01:43 +01:00
Richard van der Hoff
857e6fd8b6 Ignore depth when updating read-receipts
Order read receipts by stream ordering instead of depth
2018-06-01 12:18:11 +01:00
Felix Schäfer
4ef76f3ac4 Add private IPv6 addresses to preview blacklist #3312
The added addresses are expected to be local or loopback addresses and
shouldn't be spidered for previews.

Signed-off-by: Felix Schäfer <felix@thegcat.net>
2018-06-01 12:18:35 +02:00
Neil Johnson
4986b084f8 remove unnecessary INSERT 2018-06-01 10:50:40 +01:00
Richard van der Hoff
c2c3092cce code_style.rst: formatting 2018-05-31 16:11:34 +01:00
Amber Brown
febe0ec8fd Run Prometheus on a different port, optionally. (#3274) 2018-05-31 19:04:50 +10:00
Amber Brown
c936a52a9e Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (#3307) 2018-05-31 19:03:47 +10:00
Amber Brown
872cf43516 Merge pull request #3303 from NotAFile/py3-memoryview
use memoryview in py3
2018-05-30 12:51:42 +10:00
Amber Brown
debff7ae09 Merge pull request #3281 from NotAFile/py3-six-isinstance
remaining isintance fixes
2018-05-30 12:44:46 +10:00
Richard van der Hoff
34b85df7f5 Update some comments and docstrings in SyncHandler 2018-05-29 22:31:18 +01:00
Richard van der Hoff
711f61a31d Merge pull request #3304 from matrix-org/rav/exempt_as_users_from_gdpr
Exempt AS-registered users from doing gdpr
2018-05-29 20:25:12 +01:00
Richard van der Hoff
a995fdae39 fix tests 2018-05-29 20:19:29 +01:00
Richard van der Hoff
4a9cbdbc15 Exempt AS-registered users from doing gdpr 2018-05-29 19:54:32 +01:00
Neil Johnson
ab0ef31dc7 create users index on creation_ts 2018-05-29 17:51:08 +01:00
Neil Johnson
558f3d376a create index in background 2018-05-29 17:47:55 +01:00
Neil Johnson
c379acd4fd bump version 2018-05-29 17:47:28 +01:00
Richard van der Hoff
db2e4608ab Merge pull request #3302 from krombel/py3_extend_tox_testing
extend tox testing for py3 to avoid regressions
2018-05-29 17:37:52 +01:00
Adrian Tschira
4b9d0cde97 add remaining memoryview changes 2018-05-29 17:42:43 +02:00
Adrian Tschira
7873cde526 pep8 2018-05-29 17:35:55 +02:00
Adrian Tschira
1afafb3497 use memoryview in py3
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-29 17:14:34 +02:00
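[Editor's note — context for the memoryview change, as a hedged illustration of standard Python 3 semantics, not Synapse's code: slicing a `bytes` object copies the data, while slicing a `memoryview` does not, which is why hot paths prefer it.]

```python
# Illustrative only -- why memoryview helps on Python 3.
data = bytes(range(256))
view = memoryview(data)

chunk = view[16:32]                  # zero-copy slice into `data`
assert chunk.tobytes() == data[16:32]
assert chunk.obj is data             # still backed by the original buffer
```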
Krombel
d6cd55532b extend tox testing for py3 to avoid regressions 2018-05-29 16:02:41 +02:00
Amber Brown
235b53263a Merge pull request #3299 from matrix-org/matthew/macos-fixes
disable CPUMetrics if no /proc/self/stat
2018-05-29 11:45:45 +10:00
Matthew Hodgson
ff1bc0a279 pep8 2018-05-29 02:32:15 +01:00
Matthew Hodgson
adb6bac4d5 fix another dumb typo 2018-05-29 02:29:22 +01:00
Matthew Hodgson
0a240ad36e disable CPUMetrics if no /proc/self/stat
fixes build on macOS again
2018-05-29 02:23:30 +01:00
Matthew Hodgson
284e36ad04 fix dumb typo 2018-05-29 02:23:11 +01:00
Amber Brown
81717e8515 Merge pull request #3256 from matrix-org/3218-official-prom
Switch to the Python Prometheus library
2018-05-28 23:30:09 +10:00
Amber Brown
57ad76fa4a fix up tests 2018-05-28 19:51:53 +10:00
Amber Brown
3ef5cd74a6 update to more consistently use seconds in any metrics or logging 2018-05-28 19:39:27 +10:00
Amber Brown
5c40ce3777 invalid syntax :( 2018-05-28 19:16:09 +10:00
Amber Brown
357c74a50f add comment about why unreg 2018-05-28 19:14:41 +10:00
Amber Brown
a2eb5db4a0 update metrics to be in seconds 2018-05-28 19:10:27 +10:00
Amber Brown
754826a830 Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-28 18:57:23 +10:00
Richard van der Hoff
60f09b1e11 Merge pull request #3288 from matrix-org/rav/no_spam_guests
Avoid sending consent notice to guest users
2018-05-25 11:44:45 +01:00
Richard van der Hoff
66bdae986f Fix default for send_server_notice_to_guests
bool("False") == True...
2018-05-25 11:42:05 +01:00
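[Editor's note — the commit message above alludes to a classic Python pitfall: `bool()` on any non-empty string is `True`, so a string default like `"False"` is truthy. A minimal illustration with a hypothetical safe parser (not Synapse's actual code):]

```python
assert bool("False") is True        # any non-empty string is truthy

# Hypothetical safe parser for string-valued config flags:
def parse_bool(value, default=False):
    if isinstance(value, bool):
        return value
    if value is None:
        return default
    return str(value).strip().lower() in ("true", "1", "yes")

assert parse_bool("False") is False
```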
Richard van der Hoff
ba1b163590 Avoid sending consent notice to guest users
we think it makes sense not to send the notices to guest users.
2018-05-25 11:36:43 +01:00
Richard van der Hoff
41921ac01b Merge pull request #3287 from matrix-org/rav/allow_leaving_server_notices_room
Let users leave the server notice room after joining
2018-05-25 11:15:45 +01:00
Richard van der Hoff
757ed27258 Let users leave the server notice room after joining
They still can't reject invites, but we let them leave it.
2018-05-25 11:07:21 +01:00
Adrian Tschira
4ee4450d66 fix recursion error 2018-05-24 21:44:10 +02:00
Amber Brown
9c36c150e7 Merge pull request #3283 from NotAFile/py3-state
py3-ize state.py
2018-05-24 14:23:33 -05:00
Amber Brown
cc1349c06a Merge pull request #3279 from NotAFile/py3-more-iteritems
more six iteritems
2018-05-24 14:23:13 -05:00
Amber Brown
5b788aba90 Merge pull request #3280 from NotAFile/py3-more-misc
More Misc. py3 fixes
2018-05-24 14:22:59 -05:00
Adrian Tschira
0e61705661 py3-ize state.py 2018-05-24 20:59:00 +02:00
Adrian Tschira
dd068ca979 remaining isintance fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:55:08 +02:00
Adrian Tschira
17a70cf6e9 Misc. py3 fixes
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 20:20:33 +02:00
Adrian Tschira
6c16a4ec1b more iteritems 2018-05-24 20:19:06 +02:00
Amber Brown
7ea07c7305 Merge pull request #3278 from NotAFile/py3-storage-base
Py3 storage/_base.py
2018-05-24 13:08:09 -05:00
Amber Brown
1f69693347 Merge pull request #3244 from NotAFile/py3-six-4
replace some iteritems with six
2018-05-24 13:04:07 -05:00
Amber Brown
c4fb15a06c Merge pull request #3246 from NotAFile/py3-repr-string
use repr, not str
2018-05-24 13:00:20 -05:00
Amber Brown
36501068d8 Merge pull request #3247 from NotAFile/py3-misc
Misc Python3 fixes
2018-05-24 12:58:37 -05:00
Amber Brown
2aff6eab6d Merge pull request #3245 from NotAFile/batch-iter
Add batch_iter to utils
2018-05-24 12:54:12 -05:00
Adrian Tschira
095292304f Py3 storage/_base.py
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-24 18:24:12 +02:00
David Baker
ecc4b88bd1 Merge pull request #3277 from matrix-org/dbkr/remove_from_user_dir
Remove users from user directory on deactivate
2018-05-24 16:12:12 +01:00
Erik Johnston
46345187cc Merge pull request #3243 from NotAFile/py3-six-3
Replace some more comparisons with six
2018-05-24 16:08:57 +01:00
Neil Johnson
037c6db85d Merge branch 'master' into develop 2018-05-24 16:03:44 +01:00
David Baker
7a1af504d7 Remove users from user directory on deactivate 2018-05-24 15:59:58 +01:00
Neil Johnson
14ca678674 Update CHANGES.rst 2018-05-24 15:54:02 +01:00
Neil Johnson
6f67163c63 Update CHANGES.rst 2018-05-24 15:04:41 +01:00
Neil Johnson
bdd2ed5acf update for v0.30.0 2018-05-24 15:02:03 +01:00
Erik Johnston
f72d5a44d5 Merge pull request #3261 from matrix-org/erikj/pagination_fixes
Fix federation backfill bugs
2018-05-24 14:52:03 +01:00
Erik Johnston
68399fc4de Merge pull request #3267 from matrix-org/erikj/iter_filter
Use iter* methods for _filter_events_for_server
2018-05-24 14:44:57 +01:00
Neil Johnson
91d95a1d8e bump version 2018-05-24 14:05:07 +01:00
Richard van der Hoff
8c98281b8d Merge branch 'release-v0.30.0' into develop 2018-05-24 10:33:12 +01:00
Richard van der Hoff
6abcb5d22d Merge pull request #3273 from matrix-org/rav/server_notices_avatar_url
Allow overriding the server_notices user's avatar
2018-05-24 10:31:43 +01:00
Amber Brown
389dac2c15 pepeightttt 2018-05-23 13:08:59 -05:00
Amber Brown
472a5ec4e2 add back CPU metrics 2018-05-23 13:03:56 -05:00
Amber Brown
e987079037 fixes 2018-05-23 13:03:51 -05:00
Richard van der Hoff
9bf4b2bda3 Allow overriding the server_notices user's avatar
probably should have done this in the first place, like @turt2live suggested.
2018-05-23 17:43:30 +01:00
Richard van der Hoff
23aa70cea8 Merge pull request #3272 from matrix-org/rav/localpart_in_consent_uri
Use the localpart in the consent uri
2018-05-23 16:29:44 +01:00
Richard van der Hoff
043f05a078 Merge docs on consent bits from PR #3268 into release branch 2018-05-23 16:24:58 +01:00
Richard van der Hoff
96f07cebda Merge pull request #3268 from matrix-org/rav/privacy_policy_docs
Docs on consent bits
2018-05-23 16:23:05 +01:00
Richard van der Hoff
a0b3946fe2 Merge branch 'release-v0.30.0' into rav/localpart_in_consent_uri 2018-05-23 16:06:03 +01:00
Richard van der Hoff
2f7008d4eb Merge pull request #3271 from matrix-org/rav/consent_uri_in_messages
Support for putting %(consent_uri)s in messages
2018-05-23 16:04:30 +01:00
Richard van der Hoff
e206b2c9ac consent_tracking.md: clarify link 2018-05-23 15:57:10 +01:00
Richard van der Hoff
2df8c3139a minor post-review tweaks 2018-05-23 15:39:52 +01:00
Richard van der Hoff
dda40fb55d fix typo 2018-05-23 15:30:26 +01:00
Richard van der Hoff
3ff6f50eac Use the localpart in the consent uri
... because it's shorter.
2018-05-23 15:28:23 +01:00
Richard van der Hoff
82191b08f6 Support for putting %(consent_uri)s in messages
Make it possible to put the URI in the error message and the server notice that
get sent by the server
2018-05-23 15:24:31 +01:00
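[Editor's note — the `%(consent_uri)s` placeholder works via Python's dict-based `%`-formatting; a minimal sketch, using the message text from the consent docs and an illustrative URI:]

```python
# Dict-based %-formatting, as used for the placeholder substitution.
template = "Please give your consent to the privacy policy at %(consent_uri)s."
uri = "https://example.com/_matrix/consent?u=alice&h=deadbeef"  # illustrative
print(template % {"consent_uri": uri})
```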
Richard van der Hoff
2c62ea2515 Merge pull request #3270 from matrix-org/rav/block_remote_server_notices
Block attempts to send server notices to remote users
2018-05-23 15:06:35 +01:00
Richard van der Hoff
cd8ab9a0d8 mention public_baseurl 2018-05-23 14:43:09 +01:00
Richard van der Hoff
321f02d263 Block attempts to send server notices to remote users 2018-05-23 14:30:47 +01:00
Richard van der Hoff
1cbb8e5a33 fix wrapping 2018-05-23 13:58:28 +01:00
Richard van der Hoff
052d08a6a5 Using the manhole to send server notices 2018-05-23 13:55:39 +01:00
Richard van der Hoff
5ad1149f38 Notes on the manhole 2018-05-23 13:47:34 +01:00
Richard van der Hoff
563606b8f2 consent_tracking: formatting etc 2018-05-23 12:37:39 +01:00
Richard van der Hoff
2574ea3dc8 server_notices.md: fix link 2018-05-23 12:34:34 +01:00
Richard van der Hoff
833db2d922 consent tracking docs 2018-05-23 12:32:38 +01:00
Neil Johnson
9e8ab0a4f4 style 2018-05-23 12:05:58 +01:00
Neil Johnson
3601a240aa bump version and changelog 2018-05-23 12:03:23 +01:00
Richard van der Hoff
e7598b666b Some docs about server notices 2018-05-23 11:14:23 +01:00
Erik Johnston
5aaa3189d5 s/values/itervalues/ 2018-05-23 10:13:05 +01:00
Erik Johnston
0a4bca4134 Use iter* methods for _filter_events_for_server 2018-05-23 10:04:23 +01:00
Amber Brown
b6063631c3 more cleanup 2018-05-22 17:36:20 -05:00
Amber Brown
53cc2cde1f cleanup 2018-05-22 17:32:57 -05:00
Amber Brown
071206304d cleanup pep8 errors 2018-05-22 16:54:22 -05:00
Amber Brown
4abeaedcf3 tests/metrics is gone now 2018-05-22 16:45:11 -05:00
Amber Brown
85ba83eb51 fixes 2018-05-22 16:28:23 -05:00
Amber Brown
228f1f584e fix the test failures 2018-05-22 15:02:38 -05:00
Erik Johnston
e85b5a0ff7 Use iter* methods 2018-05-22 19:02:48 +01:00
Erik Johnston
586b66b197 Fix that states is a dict of dicts 2018-05-22 19:02:36 +01:00
Erik Johnston
35ca3e7b65 Merge pull request #3265 from matrix-org/erikj/limit_pagination
Don't support limitless pagination
2018-05-22 18:29:36 +01:00
Erik Johnston
a17e901f4d Remove unused string formatting param 2018-05-22 18:24:32 +01:00
Erik Johnston
5494c1d71e Don't support limitless pagination
The pagination storage function supported not specifiying a limit on the
number of events returned. This was triggered when using the search or
context API with a limit of zero, which the storage function took to
mean not being limited.
2018-05-22 18:15:21 +01:00
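[Editor's note — a hedged sketch of the failure mode described above; names are illustrative, not Synapse's actual storage code. When a falsy limit is taken to mean "no limit", a client passing `limit=0` accidentally requests an unbounded query:]

```python
def build_query(limit=None):
    sql = "SELECT * FROM events ORDER BY stream_ordering DESC"
    if limit:                 # bug: limit=0 and limit=None both skip LIMIT
        sql += " LIMIT %d" % limit
    return sql

assert "LIMIT" not in build_query(0)   # an unbounded query slips through
```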
Neil Johnson
d8cb7225d2 daily user type phone home stats 2018-05-22 18:09:09 +01:00
hera
ad2823ee27 fix synchrotron 2018-05-22 17:47:42 +01:00
Richard van der Hoff
08bfc48abf custom error code for not leaving server notices room 2018-05-22 17:27:27 +01:00
David Baker
0a078026ea comment typo 2018-05-22 17:14:06 +01:00
Amber Brown
07bb9bdae8 update gitignore to remove some files that got put on my machine 2018-05-22 10:57:28 -05:00
Amber Brown
8f5a688d42 cleanups, self-registration 2018-05-22 10:56:03 -05:00
Amber Brown
a8990fa2ec Merge remote-tracking branch 'origin/develop' into 3218-official-prom 2018-05-22 10:50:26 -05:00
Erik Johnston
cb2a2ad791 get_domains_from_state returns list of tuples 2018-05-22 16:23:39 +01:00
Richard van der Hoff
08a14b32ae Merge pull request #3262 from matrix-org/rav/has_already_consented
Add a 'has_consented' template var to consent forms
2018-05-22 15:35:10 +01:00
Richard van der Hoff
82c2a52987 Merge pull request #3263 from matrix-org/rav/fix_jinja_dep
Fix dependency on jinja2
2018-05-22 15:31:08 +01:00
Richard van der Hoff
7b36d06a69 Add a 'has_consented' template var to consent forms
fixes #3260
2018-05-22 14:58:34 +01:00
Richard van der Hoff
669400e22f Enable auto-escaping for the consent templates
... to reduce the risk of somebody introducing an html injection attack...
2018-05-22 14:58:34 +01:00
Richard van der Hoff
b5b2d5d64b Fix dependency on jinja2
Delay the import of ConsentResource, so that we can get away without jinja2 if
people don't have the consent resource enabled.

Fixes #3259
2018-05-22 14:03:45 +01:00
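[Editor's note — the pattern described here, as a hedged sketch; module and option names are illustrative, not Synapse's actual code. Deferring the import of the jinja2-dependent resource until it is actually enabled keeps the dependency optional:]

```python
def build_resources(config):
    resources = {}
    if config.get("consent_enabled"):
        # Deferred import: jinja2 is only required if this branch runs.
        from consent_resource import ConsentResource  # hypothetical module
        resources["/_matrix/consent"] = ConsentResource(config)
    return resources
```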
Richard van der Hoff
3b2def6c7a Merge pull request #3257 from matrix-org/rav/fonx_on_no_consent
Reject attempts to send event before privacy consent is given
2018-05-22 12:01:43 +01:00
Richard van der Hoff
a5e2941aad Reject attempts to send event before privacy consent is given
Returns an M_CONSENT_NOT_GIVEN error (cf
https://github.com/matrix-org/matrix-doc/issues/1252) if consent is not yet
given.
2018-05-22 12:00:47 +01:00
Richard van der Hoff
8aeb529262 Merge pull request #3236 from matrix-org/rav/consent_notice
Send users a server notice about consent
2018-05-22 11:56:09 +01:00
Richard van der Hoff
8810685df9 Stub out ServerNoticesSender on the workers
... and have the sync endpoints call it directly rather than through obscure
indirection via PresenceHandler
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d5dca9a04f Move consent config parsing into ConsentConfig
turns out we need to reuse this, so it's better in the config class.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
9ea219c514 Send users a server notice about consent
When a user first syncs, we will send them a server notice asking them to
consent to the privacy policy if they have not already done so.
2018-05-22 11:54:51 +01:00
Richard van der Hoff
d14d7b8fdc Rename 'version' param on user consent config
we're going to use it for the version we require too.
2018-05-22 11:54:51 +01:00
Erik Johnston
7cfa8a87a1 Merge pull request #3258 from matrix-org/erikj/fixup_logcontext_rusage
Fix logcontext resource usage tracking
2018-05-22 11:39:56 +01:00
Erik Johnston
7948ecf234 Comment 2018-05-22 11:39:43 +01:00
Erik Johnston
020377a550 Fix logcontext resource usage tracking 2018-05-22 11:16:07 +01:00
Erik Johnston
13a8dfba0d Merge pull request #3252 from matrix-org/erikj/in_flight_requests
Add in flight request metrics
2018-05-22 09:54:30 +01:00
Erik Johnston
c435b0b441 Don't store context 2018-05-22 09:34:26 +01:00
Erik Johnston
fb2806b186 Move in_flight_requests_count to be a callback metric 2018-05-22 09:31:53 +01:00
Amber Brown
fcc525b0b7 rest of the changes 2018-05-21 19:48:57 -05:00
Amber Brown
df9f72d9e5 replacing portions 2018-05-21 19:47:37 -05:00
Amber Brown
c60e0d5e02 don't need the resource portion 2018-05-21 17:03:20 -05:00
Amber Brown
02c1d29133 look at the Prometheus metrics instead 2018-05-21 17:02:20 -05:00
Amber Brown
f258deffcb remove old metrics libs 2018-05-21 17:01:15 -05:00
Richard van der Hoff
413482f578 Merge pull request #3255 from matrix-org/rav/fix_transactions
Stop the transaction cache caching failures
2018-05-21 17:58:08 +01:00
Neil Johnson
4aac88928f Merge pull request #3251 from matrix-org/cohort_analytics
Tighter filtering for user_daily_visits
2018-05-21 16:42:18 +00:00
Richard van der Hoff
6e1cb54a05 Fix logcontext leak in HttpTransactionCache
ONE DAY I WILL PURGE THE WORLD OF THIS EVIL
2018-05-21 16:58:20 +01:00
Richard van der Hoff
6d6e7288fe Stop the transaction cache caching failures
The transaction cache has some code which tries to stop it caching failures,
but if the callback function failed straight away, then things would happen
backwards and we'd end up with the failure stuck in the cache.
2018-05-21 16:49:59 +01:00
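[Editor's note — a hedged sketch of the race described above, not Synapse's actual `HttpTransactionCache`. If the cache entry is only stored after the errback is attached, a synchronously-failing callback fires the errback first (which finds nothing to evict) and the failure then gets stored; storing first lets the cleanup always find the entry:]

```python
def get_or_execute(cache, key, fn):
    """fn() returns a Twisted Deferred; cache its result per key."""
    if key in cache:
        return cache[key]
    deferred = fn()
    cache[key] = deferred              # store first ...

    def remove_on_failure(failure):
        cache.pop(key, None)           # ... so cleanup can always evict it
        return failure                 # propagate the failure to callers

    deferred.addErrback(remove_on_failure)
    return deferred
```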
Michael Kaye
d689e0dba1 Merge pull request #3239 from ptman/develop
Add lxml to docker image for web previews
2018-05-21 16:31:18 +01:00
Erik Johnston
dfa70adc33 Add in flight request metrics
This tracks CPU and DB usage while requests are in flight, rather than
when we write the response.
2018-05-21 16:23:06 +01:00
Adrian Tschira
933bf2dd35 replace some iteritems with six
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:59:26 +02:00
Adrian Tschira
d9fe2b2d9d Replace some more comparisons with six
plus a bonus b"" string I missed last time

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:56:31 +02:00
Adrian Tschira
45b55e23d3 Add batch_iter to utils
There's a frequent idiom I noticed where an iterable is split up into a
number of chunks/batches. Unfortunately that method does not work with
iterators like dict.keys() in python3. This implementation works with
iterators.

Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:48:30 +02:00
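[Editor's note — a minimal sketch of such a helper, assuming the shape described in the commit message; Synapse's actual utility may differ in detail:]

```python
from itertools import islice

def batch_iter(iterable, size):
    """Yield tuples of up to `size` items; works with plain iterators,
    including dict.keys() views on Python 3."""
    sourceiter = iter(iterable)
    while True:
        batch = tuple(islice(sourceiter, size))
        if not batch:
            return
        yield batch

# Usage: chunk a dict's keys without materialising the whole list.
for chunk in batch_iter({"a": 1, "b": 2, "c": 3}, 2):
    print(chunk)
```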
Adrian Tschira
dcc235b47d use stand-in value if maxint is not available
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:44 +02:00
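[Editor's note — background, stated as a known Python fact: `sys.maxint` was removed in Python 3, and `sys.maxsize` is the usual stand-in:]

```python
import sys

try:
    MAX_INT = sys.maxint       # Python 2
except AttributeError:
    MAX_INT = sys.maxsize      # Python 3 stand-in
```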
Adrian Tschira
73cbdef5f7 fix py3 intern and remove unnecessary py3 encode
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:35:31 +02:00
Adrian Tschira
aafb0f6b0d py3-ize url preview 2018-05-19 17:35:20 +02:00
Adrian Tschira
b932b4ea25 use repr, not str
Signed-off-by: Adrian Tschira <nota@notafile.com>
2018-05-19 17:28:42 +02:00
Neil Johnson
644aac5f73 Tighter filtering for user_daily_visits 2018-05-18 17:10:35 +01:00
Neil Johnson
08462620bf Merge pull request #3241 from matrix-org/fix_user_visits_insertion
fix psql compatability bug
2018-05-18 15:01:01 +00:00
Neil Johnson
ef466b3a13 fix psql compatability bug 2018-05-18 15:51:21 +01:00
Paul Tötterman
861f8a9b21 Add lxml to docker image for web previews
Signed-off-by: Paul Tötterman <paul.totterman@iki.fi>
2018-05-18 16:20:08 +03:00
Neil Johnson
2725223f08 Merge branch 'master' into develop 2018-05-18 14:07:50 +01:00
Neil Johnson
ab5e888927 Merge tag 'v0.29.1'
Changes in synapse v0.29.1 (2018-05-17)
==========================================
Changes:

* Update docker documentation (PR #3222)

Changes in synapse v0.29.0 (2018-05-16)
===========================================
No changes since v0.29.0-rc1

Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================

Notable changes: a Dockerfile for running Synapse (thanks to @kaiyou!) and a
fix closing a spec bug in the Client-Server API. Additionally, further
preparation for the Python 3 migration.

Potentially breaking change:

* Make Client-Server API return 401 for invalid token (PR #3161).

  This changes the Client-server spec to return a 401 error code instead of 403
  when the access token is unrecognised. This is the behaviour required by the
  specification, but some clients may be relying on the old, incorrect
  behaviour.

  Thanks to @NotAFile for fixing this.

Features:

* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!

Changes - General:

* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)

Changes - Refactors:

* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)

Changes - Python 3 migration:

* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!

Bug Fixes:

* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!
2018-05-18 14:06:34 +01:00
Neil Johnson
f3d9dca975 Merge branch 'release-v0.29.0' of https://github.com/matrix-org/synapse into release-v0.29.0 2018-05-18 13:55:19 +01:00
Richard van der Hoff
6d9dc67139 Merge pull request #3232 from matrix-org/rav/server_notices_room
Infrastructure for a server notices room
2018-05-18 11:28:04 +01:00
Richard van der Hoff
ed3125b0a1 Merge pull request #3235 from matrix-org/rav/fix_receipts_deferred
Fix error in handling receipts
2018-05-18 11:23:11 +01:00
Richard van der Hoff
67af392712 Merge pull request #3233 from matrix-org/rav/remove_dead_code
Remove unused `update_external_syncs`
2018-05-18 11:22:43 +01:00
Richard van der Hoff
011e1f4010 Better docstrings 2018-05-18 11:22:12 +01:00
Richard van der Hoff
26305788fe Make sure we reject attempts to invite the notices user 2018-05-18 11:18:39 +01:00
Richard van der Hoff
d10707c810 Replace inline docstrings with "Attributes" in class docstring 2018-05-18 11:00:55 +01:00
Erik Johnston
fa30ac38cc Merge pull request #3221 from matrix-org/erikj/purge_token
Make purge_history operate on tokens
2018-05-18 10:35:23 +01:00
Richard van der Hoff
8b1c856d81 Fix error in handling receipts
Fixes an error which has been happening ever since #2158 (v0.21.0-rc1):

> TypeError: argument of type 'ObservableDeferred' is not iterable

fixes #3234
2018-05-18 09:15:35 +01:00
Neil Johnson
6958459b50 bump version, change log 2018-05-17 21:35:07 +01:00
Richard van der Hoff
88d3405332 fix missing yield for server_notices_room 2018-05-17 18:33:45 +01:00
Richard van der Hoff
d43d480d86 Remove unused update_external_syncs
This method isn't used anywhere. Burninate it.
2018-05-17 18:22:19 +01:00
Neil Johnson
a2da6de40e light grammar changes 2018-05-17 18:12:00 +01:00
Michael Kaye
450f500d0c Note that secrets need to be retained. 2018-05-17 18:12:00 +01:00
Michael Kaye
82b0361f02 Document macaroon env var correctly 2018-05-17 18:12:00 +01:00
Michael Kaye
1b1b47aec6 Reference synapse docker image and docker-compose 2018-05-17 18:12:00 +01:00
Richard van der Hoff
fed62e21ad Infrastructure for a server notices room
Server Notices use a special room which the user can't dismiss. They are
created on demand when some other bit of the code calls send_notice.

(This doesn't actually do much yet because we don't call send_notice anywhere)
2018-05-17 17:58:25 +01:00
Richard van der Hoff
f8a1e76d64 Merge pull request #3225 from matrix-org/rav/move_creation_handler
Move RoomCreationHandler out of synapse.handlers.Handlers
2018-05-17 17:56:42 +01:00
Erik Johnston
f7906203f6 Merge pull request #3212 from matrix-org/erikj/epa_stream
Use stream rather depth ordering for push actions
2018-05-17 12:01:21 +01:00
Richard van der Hoff
ae53c71d90 Merge pull request #1756 from rubo77/patch-4
Add instructions how to setup the postgres user and clarify the final step
2018-05-17 10:52:54 +01:00
rubo77
616da9eb1d postgres.rst: Add instructions how to setup the postgres user and clarify the final step 2018-05-17 11:48:56 +02:00
Richard van der Hoff
c46367d0d7 Move RoomCreationHandler out of synapse.handlers.Handlers
Handlers is deprecated nowadays, so let's move this out before I add a new
dependency on it.

Also fix the docstrings on create_room.
2018-05-17 09:08:42 +01:00
Neil Johnson
85b8acdeb4 Merge branch 'master' into develop 2018-05-16 16:48:10 +01:00
Neil Johnson
3c099219e0 bump version and changelog for 0.29.0 2018-05-16 15:44:59 +01:00
Erik Johnston
680530cc7f Clarify comment 2018-05-16 11:47:29 +01:00
Erik Johnston
43e6e82c4d Comments 2018-05-16 11:13:31 +01:00
Neil Johnson
dc8930ea9e Merge pull request #3163 from matrix-org/cohort_analytics
user visit data
2018-05-16 10:09:24 +00:00
Erik Johnston
c945af8799 Move and rename variable 2018-05-16 10:52:06 +01:00
Neil Johnson
be11a02c4f remove empty line 2018-05-16 10:45:40 +01:00
Neil Johnson
a2204cc9cc remove unused method recurring_user_daily_visit_stats 2018-05-16 09:47:20 +01:00
Neil Johnson
31c2502ca8 style and further contraining query 2018-05-16 09:46:43 +01:00
Richard van der Hoff
8030a825c8 Merge pull request #3213 from matrix-org/rav/consent_handler
ConsentResource to gather policy consent from users
2018-05-16 07:19:18 +01:00
Neil Johnson
c92a8aa578 pep8 2018-05-15 17:31:11 +01:00
Neil Johnson
05ac15ae82 Limit query load of generate_user_daily_visits
The aim is to keep track of when it was last called and only query from that point in time
2018-05-15 17:01:33 +01:00
Erik Johnston
5f27ed75ad Make purge_history operate on tokens
As we're soon going to change how topological_ordering works
2018-05-15 16:23:50 +01:00
Erik Johnston
37dbee6490 Use events_to_purge table rather than token 2018-05-15 16:23:47 +01:00
Richard van der Hoff
47815edcfa ConsentResource to gather policy consent from users
Hopefully there are enough comments and docs in this that it makes sense on its
own.
2018-05-15 15:11:59 +01:00
Neil Johnson
589ecc5b58 further musical chairs 2018-05-14 17:49:59 +01:00
Neil Johnson
e71fb118f4 rearrange and collect related PRs 2018-05-14 17:39:22 +01:00
Neil Johnson
f077e97914 instead of inserting user daily visit data at the end of the day, instead insert incrementally through the day 2018-05-14 13:50:58 +01:00
Neil Johnson
977765bde2 Merge branch 'develop' of https://github.com/matrix-org/synapse into cohort_analytics 2018-05-14 09:31:42 +01:00
Erik Johnston
6406b70aeb Use stream rather depth ordering for push actions
This simplifies things as it is, but will also allow us to change the
way we traverse topologically without having to update the way push
actions work.
2018-05-11 15:30:11 +01:00
Neil Johnson
dd1a832419 remove user agent from data model, will just join on user_ips 2018-05-01 12:13:49 +01:00
Neil Johnson
d0857702e8 add inidexes based on usage 2018-05-01 12:12:57 +01:00
Neil Johnson
5917562b60 10 mins seems more reasonable that every minute 2018-05-01 12:12:22 +01:00
Neil Johnson
fb6015d0a6 pep8 2018-04-25 17:56:11 +01:00
Neil Johnson
617bf40924 Generate user daily stats 2018-04-25 17:37:29 +01:00
Neil Johnson
48c01ae851 ignore atom editor python ide files 2018-04-25 11:02:28 +01:00
149 changed files with 3622 additions and 1842 deletions

.gitignore (4 lines changed)

@@ -1,5 +1,6 @@
*.pyc
.*.swp
*~
.DS_Store
_trial_temp/
@@ -13,6 +14,7 @@ docs/build/
cmdclient_config.json
homeserver*.db
homeserver*.log
homeserver*.log.*
homeserver*.pid
homeserver*.yaml
@@ -40,6 +42,7 @@ media_store/
*.tac
build/
venv/
localhost-800*/
static/client/register/register_config.js
@@ -49,3 +52,4 @@ env/
*.config
.vscode/
.ropeproject/

CHANGES.rst

@@ -1,3 +1,127 @@
Changes in synapse v0.31.0 (2018-06-06)
=======================================

The most notable change since v0.30.0 is the switch to the Python Prometheus
library to improve system stats reporting. WARNING: this changes a number of
Prometheus metrics in a backwards-incompatible manner. For more details, see
`docs/metrics-howto.rst <docs/metrics-howto.rst#removal-of-deprecated-metrics--time-based-counters-becoming-histograms-in-0310>`_.

Bug Fixes:
* Fix metric documentation tables (PR #3341)
* Fix LaterGuage error handling (694968f)
* Fix replication metrics (b7e7fd2)
Changes in synapse v0.31.0-rc1 (2018-06-04)
==========================================
Features:
* Switch to the Python Prometheus library (PR #3256, #3274)
* Let users leave the server notice room after joining (PR #3287)
Changes:
* daily user type phone home stats (PR #3264)
* Use iter* methods for _filter_events_for_server (PR #3267)
* Docs on consent bits (PR #3268)
* Remove users from user directory on deactivate (PR #3277)
* Avoid sending consent notice to guest users (PR #3288)
* disable CPUMetrics if no /proc/self/stat (PR #3299)
* Add local and loopback IPv6 addresses to url_preview_ip_range_blacklist (PR #3312) Thanks to @thegcat!
* Consistently use six's iteritems and wrap lazy keys/values in list() if they're not meant to be lazy (PR #3307)
* Add private IPv6 addresses to example config for url preview blacklist (PR #3317) Thanks to @thegcat!
* Reduce stuck read-receipts: ignore depth when updating (PR #3318)
* Put python's logs into Trial when running unit tests (PR #3319)
Changes, python 3 migration:
* Replace some more comparisons with six (PR #3243) Thanks to @NotAFile!
* replace some iteritems with six (PR #3244) Thanks to @NotAFile!
* Add batch_iter to utils (PR #3245) Thanks to @NotAFile!
* use repr, not str (PR #3246) Thanks to @NotAFile!
* Misc Python3 fixes (PR #3247) Thanks to @NotAFile!
* Py3 storage/_base.py (PR #3278) Thanks to @NotAFile!
* more six iteritems (PR #3279) Thanks to @NotAFile!
* More Misc. py3 fixes (PR #3280) Thanks to @NotAFile!
* remaining isintance fixes (PR #3281) Thanks to @NotAFile!
* py3-ize state.py (PR #3283) Thanks to @NotAFile!
* extend tox testing for py3 to avoid regressions (PR #3302) Thanks to @krombel!
* use memoryview in py3 (PR #3303) Thanks to @NotAFile!
Bugs:
* Fix federation backfill bugs (PR #3261)
* federation: fix LaterGauge usage (PR #3328) Thanks to @intelfx!
Changes in synapse v0.30.0 (2018-05-24)
==========================================
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.
They are used as part of communication of the server policies (see ``docs/consent_tracking.md``),
however the intention is that they may also find a use for features such
as "Message of the day".
This feature is specific to Synapse, but uses standard Matrix communication mechanisms,
so should work with any Matrix client. For more details see ``docs/server_notices.md``
Further Server Notices/Consent Tracking Support:
* Allow overriding the server_notices user's avatar (PR #3273)
* Use the localpart in the consent uri (PR #3272)
* Support for putting %(consent_uri)s in messages (PR #3271)
* Block attempts to send server notices to remote users (PR #3270)
* Docs on consent bits (PR #3268)
Changes in synapse v0.30.0-rc1 (2018-05-23)
==========================================
Server Notices/Consent Tracking Support:
* ConsentResource to gather policy consent from users (PR #3213)
* Move RoomCreationHandler out of synapse.handlers.Handlers (PR #3225)
* Infrastructure for a server notices room (PR #3232)
* Send users a server notice about consent (PR #3236)
* Reject attempts to send event before privacy consent is given (PR #3257)
* Add a 'has_consented' template var to consent forms (PR #3262)
* Fix dependency on jinja2 (PR #3263)
Features:
* Cohort analytics (PR #3163, #3241, #3251)
* Add lxml to docker image for web previews (PR #3239) Thanks to @ptman!
* Add in flight request metrics (PR #3252)
Changes:
* Remove unused `update_external_syncs` (PR #3233)
* Use stream rather depth ordering for push actions (PR #3212)
* Make purge_history operate on tokens (PR #3221)
* Don't support limitless pagination (PR #3265)
Bug Fixes:
* Fix logcontext resource usage tracking (PR #3258)
* Fix error in handling receipts (PR #3235)
* Stop the transaction cache caching failures (PR #3255)
Changes in synapse v0.29.1 (2018-05-17)
==========================================
Changes:
* Update docker documentation (PR #3222)
Changes in synapse v0.29.0 (2018-05-16)
===========================================
No changes since v0.29.0-rc1
Changes in synapse v0.29.0-rc1 (2018-05-14)
===========================================
@@ -20,60 +144,62 @@ Features:
* Add a Dockerfile for synapse (PR #2846) Thanks to @kaiyou!
Changes - General:
* nuke-room-from-db.sh: added postgresql option and help (PR #2337) Thanks to @rubo77!
* Part user from rooms on account deactivate (PR #3201)
* Make 'unexpected logging context' into warnings (PR #3007)
* Set Server header in SynapseRequest (PR #3208)
* remove duplicates from groups tables (PR #3129)
* Improve exception handling for background processes (PR #3138)
* Add missing consumeErrors to improve exception handling (PR #3139)
* reraise exceptions more carefully (PR #3142)
* Remove redundant call to preserve_fn (PR #3143)
* Trap exceptions thrown within run_in_background (PR #3144)
Changes - Refactors:
* Refactor /context to reuse pagination storage functions (PR #3193)
* Refactor recent events func to use pagination func (PR #3195)
* Refactor pagination DB API to return concrete type (PR #3196)
* Refactor get_recent_events_for_room return type (PR #3198)
* Refactor sync APIs to reuse pagination API (PR #3199)
* Remove unused code path from member change DB func (PR #3200)
* Refactor request handling wrappers (PR #3203)
* transaction_id, destination defined twice (PR #3209) Thanks to @damir-manapov!
* Refactor event storage to prepare for changes in state calculations (PR #3141)
* Set Server header in SynapseRequest (PR #3208)
* Use deferred.addTimeout instead of time_bound_deferred (PR #3127, #3178)
* Use run_in_background in preference to preserve_fn (PR #3140)
Changes - Python 3 migration:
* Construct HMAC as bytes on py3 (PR #3156) Thanks to @NotAFile!
* run config tests on py3 (PR #3159) Thanks to @NotAFile!
* Open certificate files as bytes (PR #3084) Thanks to @NotAFile!
* Open config file in non-bytes mode (PR #3085) Thanks to @NotAFile!
* Make event properties raise AttributeError instead (PR #3102) Thanks to @NotAFile!
* Use six.moves.urlparse (PR #3108) Thanks to @NotAFile!
* Add py3 tests to tox with folders that work (PR #3145) Thanks to @NotAFile!
* Don't yield in list comprehensions (PR #3150) Thanks to @NotAFile!
* Move more xrange to six (PR #3151) Thanks to @NotAFile!
* make imports local (PR #3152) Thanks to @NotAFile!
* move httplib import to six (PR #3153) Thanks to @NotAFile!
* Replace stringIO imports with six (PR #3154, #3168) Thanks to @NotAFile!
* more bytes strings (PR #3155) Thanks to @NotAFile!
Bug Fixes:
* synapse fails to start under Twisted >= 18.4 (PR #3157)
* Fix a class of logcontext leaks (PR #3170)
* Fix a couple of logcontext leaks in unit tests (PR #3172)
* Fix logcontext leak in media repo (PR #3174)
* Escape label values in prometheus metrics (PR #3175, #3186)
* Fix 'Unhandled Error' logs with Twisted 18.4 (PR #3182) Thanks to @Half-Shot!
* Fix logcontext leaks in rate limiter (PR #3183)
* notifications: Convert next_token to string according to the spec (PR #3190) Thanks to @mujx!
* nuke-room-from-db.sh: fix deletion from search table (PR #3194) Thanks to @rubo77!
* add guard for None on purge_history api (PR #3160) Thanks to @krombel!
Changes in synapse v0.28.1 (2018-05-01)
=======================================

Dockerfile

@@ -1,12 +1,12 @@
FROM docker.io/python:2-alpine3.7
RUN apk add --no-cache --virtual .nacl_deps su-exec build-base libffi-dev zlib-dev libressl-dev libjpeg-turbo-dev linux-headers postgresql-dev libxslt-dev
COPY . /synapse
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
&& pip install --upgrade pip setuptools psycopg2 lxml \
&& mkdir -p /synapse/cache \
&& pip install -f /synapse/cache --upgrade --process-dependency-links . \
&& mv /synapse/contrib/docker/start.py /synapse/contrib/docker/conf / \

README.rst

@@ -157,8 +157,9 @@ if you prefer.
In case of problems, please see the _`Troubleshooting` section below.
There is an official synapse image available at https://hub.docker.com/r/matrixdotorg/synapse/tags/ which can be used with the docker-compose file available at `contrib/docker`. Further information on this, including configuration options, is available in `contrib/docker/README.md`.

Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a Dockerfile to automate a synapse server in a single Docker image, at https://hub.docker.com/r/avhost/docker-matrix/tags/
Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see https://github.com/EMnify/matrix-synapse-auto-deploy

contrib/docker/README.md

@@ -1,9 +1,9 @@
# Synapse Docker
The `matrixdotorg/synapse` Docker image will run Synapse as a single process. It does not provide a
database server or a TURN server; you should run these separately.

If you run a Postgres server, you should simply include it in the same Compose
project or set the proper environment variables and the image will automatically
use that server.
@@ -37,10 +37,15 @@ then run the server:
docker-compose up -d
```
If secrets are not specified in the environment variables, they will be generated
as part of the startup. Please ensure these secrets are kept between launches of the
Docker container, as their loss may require users to log in again.
### Manual configuration
A sample ``docker-compose.yml`` is provided, including example labels for
reverse proxying and other artifacts. The docker-compose file is an example;
please comment/uncomment sections that are not suitable for your use case.
Specify a ``SYNAPSE_CONFIG_PATH``, preferably to a persistent path,
to use manual configuration. To generate a fresh ``homeserver.yaml``, simply run:
@@ -111,8 +116,6 @@ variables are available for configuration:
* ``SYNAPSE_SERVER_NAME`` (mandatory), the current server public hostname.
* ``SYNAPSE_REPORT_STATS``, (mandatory, ``yes`` or ``no``), enable anonymous
statistics reporting back to the Matrix project which helps us to get funding.
* ``SYNAPSE_MACAROON_SECRET_KEY`` (mandatory) secret for signing access tokens
to the server, set this to a proper random key.
* ``SYNAPSE_NO_TLS``, set this variable to disable TLS in Synapse (use this if
you run your own TLS-capable reverse proxy).
* ``SYNAPSE_ENABLE_REGISTRATION``, set this variable to enable registration on
@@ -132,6 +135,8 @@ Shared secrets, that will be initialized to random values if not set:
* ``SYNAPSE_REGISTRATION_SHARED_SECRET``, secret for registering users if
registration is disabled.
* ``SYNAPSE_MACAROON_SECRET_KEY`` secret for signing access tokens
to the server.
Database specific values (will use SQLite if not set):

docs/code_style.rst

@@ -16,7 +16,7 @@
print("I am a fish %s" %
"moo")
and this::
print(
"I am a fish %s" %

docs/consent_tracking.md (new file, 160 lines)

@@ -0,0 +1,160 @@
Support in Synapse for tracking agreement to server terms and conditions
========================================================================
Synapse 0.30 introduces support for tracking whether users have agreed to the
terms and conditions set by the administrator of a server - and blocking access
to the server until they have.
There are several parts to this functionality; each requires some specific
configuration in `homeserver.yaml` to be enabled.
Note that various parts of the configuration and this document refer to the
"privacy policy": agreement with a privacy policy is one particular use of this
feature, but of course administrators can specify other terms and conditions
unrelated to "privacy" per se.
Collecting policy agreement from a user
---------------------------------------
Synapse can be configured to serve the user a simple policy form with an
"accept" button. Clicking "Accept" records the user's acceptance in the
database and shows a success page.
To enable this, first create templates for the policy and success pages.
These should be stored on the local filesystem.
These templates use the [Jinja2](http://jinja.pocoo.org) templating language,
and [docs/privacy_policy_templates](privacy_policy_templates) gives
examples of the sort of thing that can be done.
Note that the templates must be stored under a name giving the language of the
template - currently this must always be `en` (for "English");
internationalisation support is intended for the future.
The template for the policy itself should be versioned and named according to
the version: for example `1.0.html`. The version of the policy which the user
has agreed to is stored in the database.
Once the templates are in place, make the following changes to `homeserver.yaml`:
1. Add a `user_consent` section, which should look like:
```yaml
user_consent:
template_dir: privacy_policy_templates
version: 1.0
```
`template_dir` points to the directory containing the policy
templates. `version` defines the version of the policy which will be served
to the user. In the example above, Synapse will serve
`privacy_policy_templates/en/1.0.html`.
2. Add a `form_secret` setting at the top level:
```yaml
form_secret: "<unique secret>"
```
This should be set to an arbitrary secret string (try `pwgen -y 30` to
generate suitable secrets).
More on what this is used for below.
3. Add `consent` wherever the `client` resource is currently enabled in the
`listeners` configuration. For example:
```yaml
listeners:
- port: 8008
resources:
- names:
- client
- consent
```
Finally, ensure that `jinja2` is installed. If you are using a virtualenv, this
should be a matter of `pip install Jinja2`. On debian, try `apt-get install
python-jinja2`.
Once this is complete, and the server has been restarted, try visiting
`https://<server>/_matrix/consent`. If correctly configured, this should give
an error "Missing string query parameter 'u'". It is now possible to manually
construct URIs where users can give their consent.
### Constructing the consent URI
It may be useful to manually construct the "consent URI" for a given user - for
instance, in order to send them an email asking them to consent. To do this,
take the base `https://<server>/_matrix/consent` URL and add the following
query parameters:
* `u`: the user id of the user. This can either be a full MXID
(`@user:server.com`) or just the localpart (`user`).
* `h`: hex-encoded HMAC-SHA256 of `u` using the `form_secret` as a key. It is
possible to calculate this on the commandline with something like:
```bash
echo -n '<user>' | openssl sha256 -hmac '<form_secret>'
```
This should result in a URI which looks something like:
`https://<server>/_matrix/consent?u=<user>&h=68a152465a4d...`.
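For reference, a Python equivalent of the openssl recipe above (a sketch; the values are placeholders):

```python
import hashlib
import hmac

form_secret = "<form_secret>"   # the form_secret from homeserver.yaml
user = "<user>"                 # full MXID or bare localpart, matching `u`

# HMAC-SHA256 of the user id, keyed with form_secret, hex-encoded.
h = hmac.new(
    form_secret.encode("utf8"),
    user.encode("utf8"),
    hashlib.sha256,
).hexdigest()

print("https://<server>/_matrix/consent?u=%s&h=%s" % (user, h))
```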
Sending users a server notice asking them to agree to the policy
----------------------------------------------------------------
It is possible to configure Synapse to send a [server
notice](server_notices.md) to anybody who has not yet agreed to the current
version of the policy. To do so:
* ensure that the consent resource is configured, as in the previous section
* ensure that server notices are configured, as in [server_notices.md](server_notices.md).
* Add `server_notice_content` under `user_consent` in `homeserver.yaml`. For
example:
```yaml
user_consent:
server_notice_content:
msgtype: m.text
body: >-
Please give your consent to the privacy policy at %(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent uri for that user.
* ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
URI that clients use to connect to the server. (It is used to construct
`consent_uri` in the server notice.)
Blocking users from using the server until they agree to the policy
-------------------------------------------------------------------
Synapse can be configured to block any attempts to join rooms or send messages
until the user has given their agreement to the policy. (Joining the server
notices room is exempted from this).
To enable this, add `block_events_error` under `user_consent`. For example:
```yaml
user_consent:
block_events_error: >-
You can't send any messages until you consent to the privacy policy at
%(consent_uri)s.
```
Synapse automatically replaces the placeholder `%(consent_uri)s` with the
consent uri for that user.
ensure that `public_baseurl` is set in `homeserver.yaml`, and gives the base
URI that clients use to connect to the server. (It is used to construct
`consent_uri` in the error.)

docs/manhole.md (new file, 43 lines)

@@ -0,0 +1,43 @@
Using the synapse manhole
=========================
The "manhole" allows server administrators to access a Python shell on a running
Synapse installation. This is a very powerful mechanism for administration and
debugging.
To enable it, first uncomment the `manhole` listener configuration in
`homeserver.yaml`:
```yaml
listeners:
- port: 9000
bind_addresses: ['::1', '127.0.0.1']
type: manhole
```
(`bind_addresses` in the above is important: it ensures that access to the
manhole is only possible for local users).
Note that this will give administrative access to synapse to **all users** with
shell access to the server. It should therefore **not** be enabled in
environments where untrusted users have shell access.
Then restart synapse, and point an ssh client at port 9000 on localhost, using
the username `matrix`:
```bash
ssh -p9000 matrix@localhost
```
The password is `rabbithole`.
This gives a Python REPL in which `hs` gives access to the
`synapse.server.HomeServer` object - which in turn gives access to many other
parts of the process.
As a simple example, retrieving an event from the database:
```
>>> hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org')
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
```

docs/metrics-howto.rst

@@ -1,25 +1,47 @@
How to monitor Synapse metrics using Prometheus
===============================================
1. Install Prometheus:

   Follow instructions at http://prometheus.io/docs/introduction/install/

2. Enable Synapse metrics:

   There are two methods of enabling metrics in Synapse.

   The first serves the metrics as a part of the usual web server and can be
   enabled by adding the "metrics" resource to the existing listener as such::

     resources:
       - names:
         - client
         - metrics

   This provides a simple way of adding metrics to your Synapse installation,
   and serves under ``/_synapse/metrics``. If you do not wish your metrics to
   be publicly exposed, you will need to either filter them out at your load
   balancer, or use the second method.

   The second method runs the metrics server on a different port, in a
   different thread to Synapse. This can make it more resilient under heavy
   load (where metrics might otherwise become unretrievable), and makes it
   easier to expose the metrics to internal networks only. The served metrics
   are available over HTTP only, and will be available at ``/``.

   Add a new listener to homeserver.yaml::

     listeners:
       - type: metrics
         port: 9000
         bind_addresses:
           - '0.0.0.0'

   For both options, you will need to ensure that ``enable_metrics`` is set to
   ``True``.

   Restart Synapse.

3. Add a Prometheus target for Synapse.
It needs to set the ``metrics_path`` to a non-default value (under ``scrape_configs``)::
@@ -31,7 +53,50 @@ How to monitor Synapse metrics using Prometheus
If your prometheus is older than 1.5.2, you will need to replace
``static_configs`` in the above with ``target_groups``.
Restart Prometheus.
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------
The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
All time duration-based metrics have been changed to be seconds. This affects:
+----------------------------------+
| msec -> sec metrics |
+==================================+
| python_gc_time |
+----------------------------------+
| python_twisted_reactor_tick_time |
+----------------------------------+
| synapse_storage_query_time |
+----------------------------------+
| synapse_storage_schedule_time |
+----------------------------------+
| synapse_storage_transaction_time |
+----------------------------------+
Several metrics have been changed to be histograms, which sort entries into
buckets and allow better analysis. The following metrics are now histograms:
+-------------------------------------------+
| Altered metrics |
+===========================================+
| python_gc_time |
+-------------------------------------------+
| python_twisted_reactor_pending_calls |
+-------------------------------------------+
| python_twisted_reactor_tick_time |
+-------------------------------------------+
| synapse_http_server_response_time_seconds |
+-------------------------------------------+
| synapse_storage_query_time |
+-------------------------------------------+
| synapse_storage_schedule_time |
+-------------------------------------------+
| synapse_storage_transaction_time |
+-------------------------------------------+
Block and response metrics renamed for 0.27.0

docs/postgres.rst

@@ -6,7 +6,13 @@ Postgres version 9.4 or later is known to work.
Set up database
===============
Assuming your PostgreSQL database user is called ``postgres``, create a user
``synapse_user`` with::

  su - postgres
  createuser --pwprompt synapse_user

The PostgreSQL database used *must* have the correct encoding set, otherwise it
would not be able to store UTF8 strings. To create a database with the correct
encoding use, e.g.::
@@ -46,8 +52,8 @@ As with Debian/Ubuntu, postgres support depends on the postgres python connector
Synapse config
==============
When you are ready to start using PostgreSQL, edit the ``database`` section in
your config file to match the following lines::
database:
name: psycopg2
@@ -96,9 +102,12 @@ complete, restart synapse. For instance::
cp homeserver.db homeserver.db.snapshot
./synctl start
Copy the old config file into a new config file::

  cp homeserver.yaml homeserver-postgres.yaml

Edit the database section as described in the section *Synapse config* above
and with the SQLite snapshot located at ``homeserver.db.snapshot`` simply run::
synapse_port_db --sqlite-database homeserver.db.snapshot \
--postgres-config homeserver-postgres.yaml
@@ -117,6 +126,11 @@ run::
--postgres-config homeserver-postgres.yaml
Once that has completed, change the synapse config to point at the PostgreSQL
database configuration file ``homeserver-postgres.yaml``::

  ./synctl stop
  mv homeserver.yaml homeserver-old-sqlite.yaml
  mv homeserver-postgres.yaml homeserver.yaml
  ./synctl start
Synapse should now be running against PostgreSQL.

docs/privacy_policy_templates/en/1.0.html (policy template, new file)

@@ -0,0 +1,23 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
{% if has_consented %}
<p>
Your base already belong to us.
</p>
{% else %}
<p>
All your base are belong to us.
</p>
<form method="post" action="consent">
<input type="hidden" name="v" value="{{version}}"/>
<input type="hidden" name="u" value="{{user}}"/>
<input type="hidden" name="h" value="{{userhmac}}"/>
<input type="submit" value="Sure thing!"/>
</form>
{% endif %}
</body>
</html>


@@ -0,0 +1,11 @@
<!doctype html>
<html lang="en">
<head>
<title>Matrix.org Privacy policy</title>
</head>
<body>
<p>
Sweet.
</p>
</body>
</html>

docs/server_notices.md (new file)

@@ -0,0 +1,74 @@
Server Notices
==============
'Server Notices' are a new feature introduced in Synapse 0.30. They provide a
channel whereby server administrators can send messages to users on the server.
They are used as part of communicating the server's policies (see
[consent_tracking.md](consent_tracking.md)), though the intention is that
they may also find a use for features such as "Message of the day".
This is a feature specific to Synapse, but it uses standard Matrix
communication mechanisms, so should work with any Matrix client.
User experience
---------------
When the user is first sent a server notice, they will get an invitation to a
room (typically called 'Server Notices', though this is configurable in
`homeserver.yaml`). They will be **unable to reject** this invitation -
attempts to do so will receive an error.
Once they accept the invitation, they will see the notice message in the room
history; it will appear to have come from the 'server notices user' (see
below).
The user is prevented from sending any messages in this room by the power
levels.
Having joined the room, the user can leave the room if they want. Subsequent
server notices will then cause a new room to be created.
Synapse configuration
---------------------
Server notices come from a specific user id on the server. Server
administrators are free to choose the user id - something like `server` is
suggested, meaning the notices will come from
`@server:<your_server_name>`. Once the Server Notices user is configured, that
user id becomes a special, privileged user, so administrators should ensure
that **it is not already allocated**.
In order to support server notices, it is necessary to add some configuration
to the `homeserver.yaml` file. In particular, you should add a `server_notices`
section, which should look like this:
```yaml
server_notices:
  system_mxid_localpart: server
  system_mxid_display_name: "Server Notices"
  system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
  room_name: "Server Notices"
```
The only compulsory setting is `system_mxid_localpart`, which defines the user
id of the Server Notices user, as above. `room_name` defines the name of the
room which will be created.
`system_mxid_display_name` and `system_mxid_avatar_url` can be used to set the
displayname and avatar of the Server Notices user.
Sending notices
---------------
As of the current version of synapse, there is no convenient interface for
sending notices (other than the automated ones sent as part of consent
tracking).
In the meantime, it is possible to test this feature using the manhole. Having
gone into the manhole as described in [manhole.md](manhole.md), a notice can be
sent with something like:
```
>>> hs.get_server_notices_manager().send_notice('@user:server.com', {'msgtype':'m.text', 'body':'foo'})
```


@@ -16,4 +16,4 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.29.0-rc1"
__version__ = "0.31.0"


@@ -15,6 +15,8 @@
import logging
from six import itervalues
import pymacaroons
from twisted.internet import defer
@@ -57,7 +59,7 @@ class Auth(object):
self.TOKEN_NOT_FOUND_HTTP_STATUS = 401
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("token_cache", self.token_cache)
register_cache("cache", "token_cache", self.token_cache)
@defer.inlineCallbacks
def check_from_context(self, event, context, do_sig_check=True):
@@ -66,7 +68,7 @@ class Auth(object):
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.values()
(e.type, e.state_key): e for e in itervalues(auth_events)
}
self.check(event, auth_events=auth_events, do_sig_check=do_sig_check)


@@ -19,6 +19,7 @@ import logging
import simplejson as json
from six import iteritems
from six.moves import http_client
logger = logging.getLogger(__name__)
@@ -51,6 +52,8 @@ class Codes(object):
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
class CodeMessageException(RuntimeError):
@@ -138,6 +141,32 @@ class SynapseError(CodeMessageException):
return res
class ConsentNotGivenError(SynapseError):
"""The error returned to the client when the user has not consented to the
privacy policy.
"""
def __init__(self, msg, consent_uri):
"""Constructs a ConsentNotGivenError
Args:
msg (str): The human-readable error message
consent_uri (str): The URL where the user can give their consent
"""
super(ConsentNotGivenError, self).__init__(
code=http_client.FORBIDDEN,
msg=msg,
errcode=Codes.CONSENT_NOT_GIVEN
)
self._consent_uri = consent_uri
def error_dict(self):
return cs_error(
self.msg,
self.errcode,
consent_uri=self._consent_uri
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
@@ -292,7 +321,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
Args:
msg (str): The error message.
code (int): The error code.
code (str): The error code.
kwargs : Additional keys to add to the response.
Returns:
A dict representing the error response JSON.
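For illustration, the JSON body a blocked client would receive from `ConsentNotGivenError.error_dict()` looks something like this (the message and URI values here are hypothetical):

```python
# shape of the error response built by cs_error() for ConsentNotGivenError
response_body = {
    "errcode": "M_CONSENT_NOT_GIVEN",
    "error": "You must agree to the privacy policy before continuing",
    "consent_uri": "https://example.com/_matrix/consent?u=user&h=deadbeef",
}
```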


@@ -411,7 +411,7 @@ class Filter(object):
return room_ids
def filter(self, events):
return filter(self.check, events)
return list(filter(self.check, events))
def limit(self):
return self.filter_json.get("limit", 10)


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,6 +15,12 @@
# limitations under the License.
"""Contains the URL paths to prefix various aspects of the server with. """
from hashlib import sha256
import hmac
from six.moves.urllib.parse import urlencode
from synapse.config import ConfigError
CLIENT_PREFIX = "/_matrix/client/api/v1"
CLIENT_V2_ALPHA_PREFIX = "/_matrix/client/v2_alpha"
@@ -25,3 +32,46 @@ SERVER_KEY_PREFIX = "/_matrix/key/v1"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
class ConsentURIBuilder(object):
def __init__(self, hs_config):
"""
Args:
hs_config (synapse.config.homeserver.HomeServerConfig):
"""
if hs_config.form_secret is None:
raise ConfigError(
"form_secret not set in config",
)
if hs_config.public_baseurl is None:
raise ConfigError(
"public_baseurl not set in config",
)
self._hmac_secret = hs_config.form_secret.encode("utf-8")
self._public_baseurl = hs_config.public_baseurl
def build_user_consent_uri(self, user_id):
"""Build a URI which we can give to the user to do their privacy
policy consent
Args:
user_id (str): mxid or username of user
Returns:
(str) the URI where the user can give their consent
"""
mac = hmac.new(
key=self._hmac_secret,
msg=user_id,
digestmod=sha256,
).hexdigest()
consent_uri = "%s_matrix/consent?%s" % (
self._public_baseurl,
urlencode({
"u": user_id,
"h": mac
}),
)
return consent_uri
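For the reverse direction, the consent resource can validate the `h` form value by recomputing the digest with the shared `form_secret`. A minimal sketch (the function name and values are hypothetical; note that on Python 3 the HMAC key and message must be bytes):

```python
import hmac
from hashlib import sha256

def check_consent_hmac(form_secret, user_id, received_mac):
    """Return True if `received_mac` is the HMAC-SHA256 of `user_id`."""
    expected = hmac.new(
        key=form_secret.encode("utf-8"),
        msg=user_id.encode("utf-8"),  # must be bytes on Python 3
        digestmod=sha256,
    ).hexdigest()
    # compare in constant time to avoid timing side-channels
    return hmac.compare_digest(expected, received_mac)

check_consent_hmac("s3cret", "@user:example.com", "0f9a")  # -> False
```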


@@ -124,6 +124,19 @@ def quit_with_error(error_string):
sys.exit(1)
def listen_metrics(bind_addresses, port):
"""
Start Prometheus metrics server.
"""
from synapse.metrics import RegistryProxy
from prometheus_client import start_http_server
for host in bind_addresses:
reactor.callInThread(start_http_server, int(port),
addr=host, registry=RegistryProxy)
logger.info("Metrics now reporting on %s:%d", host, port)
def listen_tcp(bind_addresses, port, factory, backlog=50):
"""
Create a TCP socket for a port and several addresses


@@ -94,6 +94,13 @@ class AppserviceServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -77,7 +78,7 @@ class ClientReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
@@ -118,7 +119,13 @@ class ClientReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -90,7 +91,7 @@ class EventCreatorServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
@@ -134,6 +135,13 @@ class EventCreatorServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.directory import DirectoryStore
@@ -71,7 +72,7 @@ class FederationReaderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -107,6 +108,13 @@ class FederationReaderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -25,6 +25,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
@@ -89,7 +90,7 @@ class FederationSenderServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -121,6 +122,13 @@ class FederationSenderServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -29,6 +29,7 @@ from synapse.http.servlet import (
RestServlet, parse_json_object_from_request,
)
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -131,7 +132,7 @@ class FrontendProxyServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
@@ -172,6 +173,13 @@ class FrontendProxyServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -34,7 +34,7 @@ from synapse.module_api import ModuleApi
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import RootRedirect
from synapse.http.site import SynapseSite
from synapse.metrics import register_memory_metrics
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.python_dependencies import CONDITIONAL_REQUIREMENTS, \
check_requirements
@@ -184,6 +184,15 @@ class SynapseHomeServer(HomeServer):
"/_matrix/client/versions": client_resource,
})
if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource
consent_resource = ConsentResource(self)
if compress:
consent_resource = gz_wrap(consent_resource)
resources.update({
"/_matrix/consent": consent_resource,
})
if name == "federation":
resources.update({
FEDERATION_PREFIX: TransportLayerServer(self),
@@ -221,7 +230,7 @@ class SynapseHomeServer(HomeServer):
resources[WEB_CLIENT_PREFIX] = build_resource_for_web_client(self)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
if name == "replication":
resources[REPLICATION_PREFIX] = ReplicationRestResource(self)
@@ -254,6 +263,13 @@ class SynapseHomeServer(HomeServer):
reactor.addSystemEventTrigger(
"before", "shutdown", server_listener.stopListening,
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
@@ -353,8 +369,6 @@ def setup(config_options):
hs.get_datastore().start_doing_background_updates()
hs.get_federation_client().start_get_pdu_cache()
register_memory_metrics(hs)
reactor.callWhenRunning(start)
return hs
@@ -425,6 +439,10 @@ def run(hs):
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in daily_user_type_results.iteritems():
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
@@ -475,6 +493,14 @@ def run(hs):
" changes across releases."
)
def generate_user_daily_visit_stats():
hs.get_datastore().generate_user_daily_visits()
# Rather than updating on a per-session basis, batch up the requests.
# If you increase the loop period, the accuracy of the user_daily_visits
# table will decrease
clock.looping_call(generate_user_daily_visit_stats, 5 * 60 * 1000)
if hs.config.report_stats:
logger.info("Scheduling stats reporting for 3 hour intervals")
clock.looping_call(phone_stats_home, 3 * 60 * 60 * 1000)
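For reference, `clock.looping_call` takes its period in milliseconds and wraps Twisted's `LoopingCall`; the same five-minute stats loop in bare Twisted looks roughly like this (the body is a stub):

```python
from twisted.internet import reactor, task

def generate_user_daily_visit_stats():
    pass  # batch-update the user_daily_visits table here

loop = task.LoopingCall(generate_user_daily_visit_stats)
loop.start(5 * 60)  # seconds here, unlike the millisecond API above
reactor.run()
```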


@@ -27,6 +27,7 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -73,7 +74,7 @@ class MediaRepositoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
resources.update({
@@ -114,6 +115,13 @@ class MediaRepositoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -23,6 +23,7 @@ from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
@@ -92,7 +93,7 @@ class PusherServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
@@ -124,6 +125,13 @@ class PusherServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -257,7 +258,7 @@ class SynchrotronServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
@@ -301,6 +302,13 @@ class SynchrotronServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -26,6 +26,7 @@ from synapse.config.logger import setup_logging
from synapse.crypto import context_factory
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.metrics import RegistryProxy
from synapse.metrics.resource import METRICS_PREFIX, MetricsResource
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
@@ -105,7 +106,7 @@ class UserDirectoryServer(HomeServer):
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(self)
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
@@ -146,6 +147,13 @@ class UserDirectoryServer(HomeServer):
globals={"hs": self},
)
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(("Metrics listener configured, but "
"collect_metrics is not enabled!"))
else:
_base.listen_metrics(listener["bind_addresses"],
listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])


@@ -12,3 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import ConfigError
# export ConfigError if somebody does import *
# this is largely a fudge to stop PEP8 moaning about the import
__all__ = ["ConfigError"]


@@ -0,0 +1,85 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
DEFAULT_CONFIG = """\
# User Consent configuration
#
# Parts of this section are required if enabling the 'consent' resource under
# 'listeners', in particular 'template_dir' and 'version'.
#
# 'template_dir' gives the location of the templates for the HTML forms.
# This directory should contain one subdirectory per language (eg, 'en', 'fr'),
# and each language directory should contain the policy document (named as
# '<version>.html') and a success page (success.html).
#
# 'version' specifies the 'current' version of the policy document. It defines
# the version to be served by the consent resource if there is no 'v'
# parameter.
#
# 'server_notice_content', if enabled, will send a user a "Server Notice"
# asking them to consent to the privacy policy. The 'server_notices' section
# must also be configured for this to work. Notices will *not* be sent to
# guest users unless 'send_server_notice_to_guests' is set to true.
#
# 'block_events_error', if set, will block any attempts to send events
# until the user consents to the privacy policy. The value of the setting is
# used as the text of the error.
#
# user_consent:
#   template_dir: res/templates/privacy
#   version: 1.0
#   server_notice_content:
#     msgtype: m.text
#     body: >-
#       To continue using this homeserver you must review and agree to the
#       terms and conditions at %(consent_uri)s
#   send_server_notice_to_guests: True
#   block_events_error: >-
#     To continue using this homeserver you must review and agree to the
#     terms and conditions at %(consent_uri)s
#
"""
class ConsentConfig(Config):
def __init__(self):
super(ConsentConfig, self).__init__()
self.user_consent_version = None
self.user_consent_template_dir = None
self.user_consent_server_notice_content = None
self.user_consent_server_notice_to_guests = False
self.block_events_without_consent_error = None
def read_config(self, config):
consent_config = config.get("user_consent")
if consent_config is None:
return
self.user_consent_version = str(consent_config["version"])
self.user_consent_template_dir = consent_config["template_dir"]
self.user_consent_server_notice_content = consent_config.get(
"server_notice_content",
)
self.block_events_without_consent_error = consent_config.get(
"block_events_error",
)
self.user_consent_server_notice_to_guests = bool(consent_config.get(
"send_server_notice_to_guests", False,
))
def default_config(self, **kwargs):
return DEFAULT_CONFIG
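As a quick illustration of `read_config` above (reusing the `ConsentConfig` class from this file; the values are hypothetical), note that `version` is coerced with `str()`, so a YAML `1.0` and a quoted `"1.0"` behave the same:

```python
cfg = ConsentConfig()
cfg.read_config({
    "user_consent": {
        "version": 1.0,  # YAML float; coerced to "1.0" by str() above
        "template_dir": "res/templates/privacy",
        "block_events_error": "Please review the terms at %(consent_uri)s",
    },
})
assert cfg.user_consent_version == "1.0"
assert cfg.user_consent_server_notice_to_guests is False  # the default
```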


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,7 +13,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from .tls import TlsConfig
from .server import ServerConfig
from .logger import LoggingConfig
@@ -37,6 +37,8 @@ from .push import PushConfig
from .spam_checker import SpamCheckerConfig
from .groups import GroupsConfig
from .user_directory import UserDirectoryConfig
from .consent_config import ConsentConfig
from .server_notices_config import ServerNoticesConfig
class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
@@ -45,12 +47,15 @@ class HomeServerConfig(TlsConfig, ServerConfig, DatabaseConfig, LoggingConfig,
AppServiceConfig, KeyConfig, SAML2Config, CasConfig,
JWTConfig, PasswordConfig, EmailConfig,
WorkerConfig, PasswordAuthProviderConfig, PushConfig,
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,):
SpamCheckerConfig, GroupsConfig, UserDirectoryConfig,
ConsentConfig,
ServerNoticesConfig,
):
pass
if __name__ == '__main__':
import sys
sys.stdout.write(
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2])[0]
HomeServerConfig().generate_config(sys.argv[1], sys.argv[2], True)[0]
)


@@ -59,14 +59,20 @@ class KeyConfig(Config):
self.expire_access_token = config.get("expire_access_token", False)
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
self.form_secret = config.get("form_secret", None)
def default_config(self, config_dir_path, server_name, is_generating_file=False,
**kwargs):
base_key_name = os.path.join(config_dir_path, server_name)
if is_generating_file:
macaroon_secret_key = random_string_with_symbols(50)
form_secret = '"%s"' % random_string_with_symbols(50)
else:
macaroon_secret_key = None
form_secret = 'null'
return """\
macaroon_secret_key: "%(macaroon_secret_key)s"
@@ -74,6 +80,10 @@ class KeyConfig(Config):
# Used to enable access token expiration.
expire_access_token: False
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values
form_secret: %(form_secret)s
## Signing Keys ##
# Path to the signing key to sign messages with


@@ -250,6 +250,9 @@ class ContentRepositoryConfig(Config):
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
#
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.


@@ -14,8 +14,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from ._base import Config, ConfigError
logger = logging.Logger(__name__)
class ServerConfig(Config):
@@ -138,6 +142,12 @@ class ServerConfig(Config):
metrics_port = config.get("metrics_port")
if metrics_port:
logger.warn(
("The metrics_port configuration option is deprecated in Synapse 0.31 "
"in favour of a listener. Please see "
"http://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst"
" on how to configure the new listener."))
self.listeners.append({
"port": metrics_port,
"bind_addresses": [config.get("metrics_bind_host", "127.0.0.1")],


@@ -0,0 +1,86 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
from synapse.types import UserID
DEFAULT_CONFIG = """\
# Server Notices room configuration
#
# Uncomment this section to enable a room which can be used to send notices
# from the server to users. It is a special room which cannot be left; notices
# come from a special "notices" user id.
#
# If you uncomment this section, you *must* define the system_mxid_localpart
# setting, which defines the id of the user which will be used to send the
# notices.
#
# It's also possible to override the room name, the display name of the
# "notices" user, and the avatar for the user.
#
# server_notices:
#   system_mxid_localpart: notices
#   system_mxid_display_name: "Server Notices"
#   system_mxid_avatar_url: "mxc://server.com/oumMVlgDnLYFaPVkExemNVVZ"
#   room_name: "Server Notices"
"""
class ServerNoticesConfig(Config):
"""Configuration for the server notices room.
Attributes:
server_notices_mxid (str|None):
The MXID to use for server notices.
None if server notices are not enabled.
server_notices_mxid_display_name (str|None):
The display name to use for the server notices user.
None if server notices are not enabled.
server_notices_mxid_avatar_url (str|None):
The avatar URL to use for the server notices user.
None if server notices are not enabled.
server_notices_room_name (str|None):
The name to use for the server notices room.
None if server notices are not enabled.
"""
def __init__(self):
super(ServerNoticesConfig, self).__init__()
self.server_notices_mxid = None
self.server_notices_mxid_display_name = None
self.server_notices_mxid_avatar_url = None
self.server_notices_room_name = None
def read_config(self, config):
c = config.get("server_notices")
if c is None:
return
mxid_localpart = c['system_mxid_localpart']
self.server_notices_mxid = UserID(
mxid_localpart, self.server_name,
).to_string()
self.server_notices_mxid_display_name = c.get(
'system_mxid_display_name', None,
)
self.server_notices_mxid_avatar_url = c.get(
'system_mxid_avatar_url', None,
)
# todo: i18n
self.server_notices_room_name = c.get('room_name', "Server Notices")
def default_config(self, **kwargs):
return DEFAULT_CONFIG


@@ -471,14 +471,14 @@ def _check_power_levels(event, auth_events):
]
old_list = current_state.content.get("users", {})
for user in set(old_list.keys() + user_list.keys()):
for user in set(list(old_list) + list(user_list)):
levels_to_check.append(
(user, "users")
)
old_list = current_state.content.get("events", {})
new_list = event.content.get("events", {})
for ev_id in set(old_list.keys() + new_list.keys()):
for ev_id in set(list(old_list) + list(new_list)):
levels_to_check.append(
(ev_id, "events")
)


@@ -146,7 +146,7 @@ class EventBase(object):
return field in self._event_dict
def items(self):
return self._event_dict.items()
return list(self._event_dict.items())
class FrozenEvent(EventBase):


@@ -20,6 +20,8 @@ from frozendict import frozendict
import re
from six import string_types
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.
@@ -277,7 +279,7 @@ def serialize_event(e, time_now_ms, as_client_event=True,
if only_event_fields:
if (not isinstance(only_event_fields, list) or
not all(isinstance(f, basestring) for f in only_event_fields)):
not all(isinstance(f, string_types) for f in only_event_fields)):
raise TypeError("only_event_fields must be a list of strings")
d = only_fields(d, only_event_fields)
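Many hunks in this changeset are the same Python 3 groundwork: `basestring` becomes `six.string_types`, `dict.iteritems()` becomes `six.iteritems(d)`, and lazy views are wrapped in `list()` when the result is kept or reused. In miniature:

```python
from six import iteritems, string_types

d = {"msgtype": "m.text"}
for key, value in iteritems(d):  # dict.iteritems() on py2, d.items() on py3
    assert isinstance(value, string_types)  # basestring on py2, str on py3

keys = list(d)  # a concrete snapshot of the keys, safe to reuse or mutate
```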


@@ -17,6 +17,8 @@ from synapse.types import EventID, RoomID, UserID
from synapse.api.errors import SynapseError
from synapse.api.constants import EventTypes, Membership
from six import string_types
class EventValidator(object):
@@ -49,7 +51,7 @@ class EventValidator(object):
strings.append("state_key")
for s in strings:
if not isinstance(getattr(event, s), basestring):
if not isinstance(getattr(event, s), string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))
if event.type == EventTypes.Member:
@@ -88,5 +90,5 @@ class EventValidator(object):
for s in keys:
if s not in d:
raise SynapseError(400, "'%s' not in content" % (s,))
if not isinstance(d[s], basestring):
if not isinstance(d[s], string_types):
raise SynapseError(400, "Not '%s' a string type" % (s,))


@@ -32,20 +32,17 @@ from synapse.federation.federation_base import (
FederationBase,
event_from_pdu_json,
)
import synapse.metrics
from synapse.util import logcontext, unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
from synapse.util.logutils import log_function
from synapse.util.retryutils import NotRetryingDestination
from prometheus_client import Counter
logger = logging.getLogger(__name__)
# synapse.federation.federation_client is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_queries_counter = metrics.register_counter("sent_queries", labels=["type"])
sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["type"])
PDU_RETRY_TIME_MS = 1 * 60 * 1000
@@ -108,7 +105,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc(query_type)
sent_queries_counter.labels(query_type).inc()
return self.transport_layer.make_query(
destination, query_type, args, retry_on_dns_fail=retry_on_dns_fail,
@@ -127,7 +124,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_device_keys")
sent_queries_counter.labels("client_device_keys").inc()
return self.transport_layer.query_client_keys(
destination, content, timeout
)
@@ -137,7 +134,7 @@ class FederationClient(FederationBase):
"""Query the device keys for a list of user ids hosted on a remote
server.
"""
sent_queries_counter.inc("user_devices")
sent_queries_counter.labels("user_devices").inc()
return self.transport_layer.query_user_devices(
destination, user_id, timeout
)
@@ -154,7 +151,7 @@ class FederationClient(FederationBase):
a Deferred which will eventually yield a JSON object from the
response
"""
sent_queries_counter.inc("client_one_time_keys")
sent_queries_counter.labels("client_one_time_keys").inc()
return self.transport_layer.claim_client_keys(
destination, content, timeout
)
@@ -394,7 +391,7 @@ class FederationClient(FederationBase):
"""
if return_local:
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = seen_events.values()
signed_events = list(seen_events.values())
else:
seen_events = yield self.store.have_seen_events(event_ids)
signed_events = []
@@ -592,7 +589,7 @@ class FederationClient(FederationBase):
}
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
destination, pdus.values(),
destination, list(pdus.values()),
outlier=True,
)
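The metrics change in this hunk follows one pattern used throughout: the old `metrics.register_counter(name, labels=[...])` / `counter.inc(label)` pair becomes a prometheus_client `Counter` with `.labels(...).inc()`, yielding one time series per label value. A minimal sketch (metric name is arbitrary):

```python
from prometheus_client import Counter

sent_queries = Counter("example_sent_queries", "", ["type"])
sent_queries.labels("profile").inc()       # example_sent_queries{type="profile"} 1
sent_queries.labels("client_device_keys").inc(2)
```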


@@ -27,12 +27,13 @@ from synapse.federation.federation_base import (
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
import synapse.metrics
from synapse.types import get_domain_from_id
from synapse.util import async
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.logutils import log_function
from prometheus_client import Counter
from six import iteritems
# when processing incoming transactions, we try to handle multiple rooms in
@@ -41,17 +42,17 @@ TRANSACTION_CONCURRENCY_LIMIT = 10
logger = logging.getLogger(__name__)
# synapse.federation.federation_server is a silly name
metrics = synapse.metrics.get_metrics_for("synapse.federation.server")
received_pdus_counter = Counter("synapse_federation_server_received_pdus", "")
received_pdus_counter = metrics.register_counter("received_pdus")
received_edus_counter = Counter("synapse_federation_server_received_edus", "")
received_edus_counter = metrics.register_counter("received_edus")
received_queries_counter = metrics.register_counter("received_queries", labels=["type"])
received_queries_counter = Counter(
"synapse_federation_server_received_queries", "", ["type"]
)
class FederationServer(FederationBase):
def __init__(self, hs):
super(FederationServer, self).__init__(hs)
@@ -131,7 +132,7 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
received_pdus_counter.inc_by(len(transaction.pdus))
received_pdus_counter.inc(len(transaction.pdus))
pdus_by_room = {}
@@ -292,7 +293,7 @@ class FederationServer(FederationBase):
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
received_queries_counter.inc(query_type)
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
defer.returnValue((200, resp))


@@ -33,7 +33,7 @@ from .units import Edu
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
import synapse.metrics
from synapse.metrics import LaterGauge
from blist import sorteddict
from collections import namedtuple
@@ -45,9 +45,6 @@ from six import itervalues, iteritems
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
class FederationRemoteSendQueue(object):
"""A drop in replacement for TransactionQueue"""
@@ -77,10 +74,8 @@ class FederationRemoteSendQueue(object):
# lambda binds to the queue rather than to the name of the queue which
# changes. ARGH.
def register(name, queue):
metrics.register_callback(
queue_name + "_size",
lambda: len(queue),
)
LaterGauge("synapse_federation_send_queue_%s_size" % (queue_name,),
"", [], lambda: len(queue))
for queue_name in [
"presence_map", "presence_changed", "keyed_edu", "keyed_edu_changed",
@@ -202,7 +197,7 @@ class FederationRemoteSendQueue(object):
# We only want to send presence for our own users, so lets always just
# filter here just in case.
local_states = filter(lambda s: self.is_mine_id(s.user_id), states)
local_states = list(filter(lambda s: self.is_mine_id(s.user_id), states))
self.presence_map.update({state.user_id: state for state in local_states})
self.presence_changed[pos] = [state.user_id for state in local_states]
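`LaterGauge`, used above, is synapse's helper for a gauge whose value is computed by a callable at scrape time instead of being stored. With raw prometheus_client the equivalent is a custom collector, roughly (names here are illustrative):

```python
from prometheus_client.core import GaugeMetricFamily, REGISTRY

presence_map = {}

class QueueSizeCollector(object):
    """Report the queue size afresh on every scrape."""
    def collect(self):
        yield GaugeMetricFamily(
            "example_send_queue_presence_map_size", "",
            value=len(presence_map),
        )

REGISTRY.register(QueueSizeCollector())
```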


@@ -26,23 +26,25 @@ from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
from synapse.util.metrics import measure_func
from synapse.handlers.presence import format_user_presence_state, get_interested_remotes
import synapse.metrics
from synapse.metrics import LaterGauge
from synapse.metrics import (
sent_edus_counter,
sent_transactions_counter,
events_processed_counter,
)
from prometheus_client import Counter
from six import itervalues
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
client_metrics = synapse.metrics.get_metrics_for("synapse.federation.client")
sent_pdus_destination_dist = client_metrics.register_distribution(
"sent_pdu_destinations"
sent_pdus_destination_dist = Counter(
"synapse_federation_transaction_queue_sent_pdu_destinations", ""
)
sent_edus_counter = client_metrics.register_counter("sent_edus")
sent_transactions_counter = client_metrics.register_counter("sent_transactions")
events_processed_counter = client_metrics.register_counter("events_processed")
class TransactionQueue(object):
@@ -69,8 +71,10 @@ class TransactionQueue(object):
# done
self.pending_transactions = {}
metrics.register_callback(
"pending_destinations",
LaterGauge(
"synapse_federation_transaction_queue_pending_destinations",
"",
[],
lambda: len(self.pending_transactions),
)
@@ -94,12 +98,16 @@ class TransactionQueue(object):
# Map of destination -> (edu_type, key) -> Edu
self.pending_edus_keyed_by_dest = edus_keyed = {}
metrics.register_callback(
"pending_pdus",
LaterGauge(
"synapse_federation_transaction_queue_pending_pdus",
"",
[],
lambda: sum(map(len, pdus.values())),
)
metrics.register_callback(
"pending_edus",
LaterGauge(
"synapse_federation_transaction_queue_pending_edus",
"",
[],
lambda: (
sum(map(len, edus.values()))
+ sum(map(len, presence.values()))
@@ -228,7 +236,7 @@ class TransactionQueue(object):
yield logcontext.make_deferred_yieldable(defer.gatherResults(
[
logcontext.run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
for evs in itervalues(events_by_room)
],
consumeErrors=True
))
@@ -241,18 +249,15 @@ class TransactionQueue(object):
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.set(
now - ts, "federation_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "federation_sender",
)
synapse.metrics.event_processing_lag.labels(
"federation_sender").set(now - ts)
synapse.metrics.event_processing_last_ts.labels(
"federation_sender").set(ts)
events_processed_counter.inc_by(len(events))
events_processed_counter.inc(len(events))
synapse.metrics.event_processing_positions.set(
next_token, "federation_sender",
)
synapse.metrics.event_processing_positions.labels(
"federation_sender").set(next_token)
finally:
self._is_processing = False
@@ -275,7 +280,7 @@ class TransactionQueue(object):
if not destinations:
return
sent_pdus_destination_dist.inc_by(len(destinations))
sent_pdus_destination_dist.inc(len(destinations))
for destination in destinations:
self.pending_pdus_by_dest.setdefault(destination, []).append(
@@ -322,7 +327,7 @@ class TransactionQueue(object):
if not states_map:
break
yield self._process_presence_inner(states_map.values())
yield self._process_presence_inner(list(states_map.values()))
except Exception:
logger.exception("Error sending presence states to servers")
finally:


@@ -20,6 +20,8 @@ from synapse.api.errors import SynapseError
from synapse.types import GroupID, RoomID, UserID, get_domain_from_id
from twisted.internet import defer
from six import string_types
logger = logging.getLogger(__name__)
@@ -431,7 +433,7 @@ class GroupsServerHandler(object):
"long_description"):
if keyname in content:
value = content[keyname]
if not isinstance(value, basestring):
if not isinstance(value, string_types):
raise SynapseError(400, "%r value is not a string" % (keyname,))
profile[keyname] = value


@@ -14,9 +14,7 @@
# limitations under the License.
from .register import RegistrationHandler
from .room import (
RoomCreationHandler, RoomContextHandler,
)
from .room import RoomContextHandler
from .message import MessageHandler
from .federation import FederationHandler
from .directory import DirectoryHandler
@@ -47,7 +45,6 @@ class Handlers(object):
def __init__(self, hs):
self.registration_handler = RegistrationHandler(hs)
self.message_handler = MessageHandler(hs)
self.room_creation_handler = RoomCreationHandler(hs)
self.federation_handler = FederationHandler(hs)
self.directory_handler = DirectoryHandler(hs)
self.admin_handler = AdminHandler(hs)


@@ -114,14 +114,14 @@ class BaseHandler(object):
if guest_access != "can_join":
if context:
current_state = yield self.store.get_events(
context.current_state_ids.values()
list(context.current_state_ids.values())
)
else:
current_state = yield self.state_handler.get_current_state(
event.room_id
)
current_state = current_state.values()
current_state = list(current_state.values())
logger.info("maybe_kick_guest_users %r", current_state)
yield self.kick_guest_users(current_state)


@@ -15,20 +15,21 @@
from twisted.internet import defer
from six import itervalues
import synapse
from synapse.api.constants import EventTypes
from synapse.util.metrics import Measure
from synapse.util.logcontext import (
make_deferred_yieldable, run_in_background,
)
from prometheus_client import Counter
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
events_processed_counter = metrics.register_counter("events_processed")
events_processed_counter = Counter("synapse_handlers_appservice_events_processed", "")
def log_failure(failure):
@@ -120,7 +121,7 @@ class ApplicationServicesHandler(object):
yield make_deferred_yieldable(defer.gatherResults([
run_in_background(handle_room_events, evs)
for evs in events_by_room.itervalues()
for evs in itervalues(events_by_room)
], consumeErrors=True))
yield self.store.set_appservice_last_pos(upper_bound)
@@ -128,18 +129,15 @@ class ApplicationServicesHandler(object):
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_positions.set(
upper_bound, "appservice_sender",
)
synapse.metrics.event_processing_positions.labels(
"appservice_sender").set(upper_bound)
events_processed_counter.inc_by(len(events))
events_processed_counter.inc(len(events))
synapse.metrics.event_processing_lag.set(
now - ts, "appservice_sender",
)
synapse.metrics.event_processing_last_ts.set(
ts, "appservice_sender",
)
synapse.metrics.event_processing_lag.labels(
"appservice_sender").set(now - ts)
synapse.metrics.event_processing_last_ts.labels(
"appservice_sender").set(ts)
finally:
self.is_processing = False


@@ -249,7 +249,7 @@ class AuthHandler(BaseHandler):
errordict = e.error_dict()
for f in flows:
if len(set(f) - set(creds.keys())) == 0:
if len(set(f) - set(creds)) == 0:
# it's very useful to know what args are stored, but this can
# include the password in the case of registering, so only log
# the keys (confusingly, clientdict may contain a password
@@ -257,12 +257,12 @@ class AuthHandler(BaseHandler):
# and is not sensitive).
logger.info(
"Auth completed with creds: %r. Client dict has keys: %r",
creds, clientdict.keys()
creds, list(clientdict)
)
defer.returnValue((creds, clientdict, session['id']))
ret = self._auth_dict_for_flows(flows, session)
ret['completed'] = creds.keys()
ret['completed'] = list(creds)
ret.update(errordict)
raise InteractiveAuthIncompleteError(
ret,


@@ -30,6 +30,7 @@ class DeactivateAccountHandler(BaseHandler):
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
self._room_member_handler = hs.get_room_member_handler()
self.user_directory_handler = hs.get_user_directory_handler()
# Flag that indicates whether the process to part users from rooms is running
self._user_parter_running = False
@@ -61,10 +62,13 @@ class DeactivateAccountHandler(BaseHandler):
yield self.store.user_delete_threepids(user_id)
yield self.store.user_set_password_hash(user_id, None)
# Add the user to a table of users penpding deactivation (ie.
# Add the user to a table of users pending deactivation (ie.
# removal from all the rooms they're a member of)
yield self.store.add_user_pending_deactivation(user_id)
# delete from user directory
yield self.user_directory_handler.handle_user_deactivated(user_id)
# Now start the process that goes through that list and
# parts users from rooms (if it isn't already running)
self._start_user_parting()


@@ -26,6 +26,8 @@ from ._base import BaseHandler
import logging
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -112,7 +114,7 @@ class DeviceHandler(BaseHandler):
user_id, device_id=None
)
devices = device_map.values()
devices = list(device_map.values())
for device in devices:
_update_device_from_client_ips(device, ips)
@@ -185,7 +187,7 @@ class DeviceHandler(BaseHandler):
defer.Deferred:
"""
device_map = yield self.store.get_devices_by_user(user_id)
device_ids = device_map.keys()
device_ids = list(device_map)
if except_device_id is not None:
device_ids = [d for d in device_ids if d != except_device_id]
yield self.delete_devices(user_id, device_ids)
@@ -318,7 +320,7 @@ class DeviceHandler(BaseHandler):
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
if room_id not in room_ids:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -338,7 +340,7 @@ class DeviceHandler(BaseHandler):
# special-case for an empty prev state: include all members
# in the changed list
if not event_ids:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -354,10 +356,10 @@ class DeviceHandler(BaseHandler):
# Check if we've joined the room? If so we just blindly add all the users to
# the "possibly changed" users.
for state_dict in prev_state_ids.itervalues():
for state_dict in itervalues(prev_state_ids):
member_event = state_dict.get((EventTypes.Member, user_id), None)
if not member_event or member_event != current_member_id:
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
@@ -367,14 +369,14 @@ class DeviceHandler(BaseHandler):
# If there has been any change in membership, include them in the
# possibly changed list. We'll check if they are joined below,
# and we're not toooo worried about spuriously adding users.
for key, event_id in current_state_ids.iteritems():
for key, event_id in iteritems(current_state_ids):
etype, state_key = key
if etype != EventTypes.Member:
continue
# check if this member has changed since any of the extremities
# at the stream_ordering, and add them to the list if so.
for state_dict in prev_state_ids.itervalues():
for state_dict in itervalues(prev_state_ids):
prev_event_id = state_dict.get(key, None)
if not prev_event_id or prev_event_id != event_id:
if state_key != user_id:


@@ -19,6 +19,7 @@ import logging
from canonicaljson import encode_canonical_json
from twisted.internet import defer
from six import iteritems
from synapse.api.errors import (
SynapseError, CodeMessageException, FederationDeniedError,
@@ -92,7 +93,7 @@ class E2eKeysHandler(object):
remote_queries_not_in_cache = {}
if remote_queries:
query_list = []
for user_id, device_ids in remote_queries.iteritems():
for user_id, device_ids in iteritems(remote_queries):
if device_ids:
query_list.extend((user_id, device_id) for device_id in device_ids)
else:
@@ -103,9 +104,9 @@ class E2eKeysHandler(object):
query_list
)
)
for user_id, devices in remote_results.iteritems():
for user_id, devices in iteritems(remote_results):
user_devices = results.setdefault(user_id, {})
for device_id, device in devices.iteritems():
for device_id, device in iteritems(devices):
keys = device.get("keys", None)
device_display_name = device.get("device_display_name", None)
if keys:
@@ -250,9 +251,9 @@ class E2eKeysHandler(object):
"Claimed one-time-keys: %s",
",".join((
"%s for %s:%s" % (key_id, user_id, device_id)
for user_id, user_keys in json_result.iteritems()
for device_id, device_keys in user_keys.iteritems()
for key_id, _ in device_keys.iteritems()
for user_id, user_keys in iteritems(json_result)
for device_id, device_keys in iteritems(user_keys)
for key_id, _ in iteritems(device_keys)
)),
)


@@ -48,6 +48,7 @@ class EventStreamHandler(BaseHandler):
self.notifier = hs.get_notifier()
self.state = hs.get_state_handler()
self._server_notices_sender = hs.get_server_notices_sender()
@defer.inlineCallbacks
@log_function
@@ -58,6 +59,10 @@ class EventStreamHandler(BaseHandler):
If `only_keys` is not None, events from keys will be sent down.
"""
# send any outstanding server notices to the user.
yield self._server_notices_sender.on_user_syncing(auth_user_id)
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_presence_handler()


@@ -24,6 +24,7 @@ from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
import six
from six.moves import http_client
from six import iteritems
from twisted.internet import defer
from unpaddedbase64 import decode_base64
@@ -51,7 +52,6 @@ from synapse.util.retryutils import NotRetryingDestination
from synapse.util.distributor import user_joined_room
logger = logging.getLogger(__name__)
@@ -81,6 +81,7 @@ class FederationHandler(BaseHandler):
self.pusher_pool = hs.get_pusherpool()
self.spam_checker = hs.get_spam_checker()
self.event_creation_handler = hs.get_event_creation_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
# When joining a room we need to queue any events for that room up
self.room_queues = {}
@@ -478,7 +479,7 @@ class FederationHandler(BaseHandler):
# to get all state ids that we're interested in.
event_map = yield self.store.get_events([
e_id
for key_to_eid in event_to_state_ids.values()
for key_to_eid in list(event_to_state_ids.values())
for key, e_id in key_to_eid.items()
if key[0] != EventTypes.Member or check_match(key[1])
])
@@ -486,10 +487,10 @@ class FederationHandler(BaseHandler):
event_to_state = {
e_id: {
key: event_map[inner_e_id]
for key, inner_e_id in key_to_eid.items()
for key, inner_e_id in key_to_eid.iteritems()
if inner_e_id in event_map
}
for e_id, key_to_eid in event_to_state_ids.items()
for e_id, key_to_eid in event_to_state_ids.iteritems()
}
def redact_disallowed(event, state):
@@ -504,7 +505,7 @@ class FederationHandler(BaseHandler):
# membership states for the requesting server to determine
# if the server is either in the room or has been invited
# into the room.
for ev in state.values():
for ev in state.itervalues():
if ev.type != EventTypes.Member:
continue
try:
@@ -750,9 +751,19 @@ class FederationHandler(BaseHandler):
curr_state = yield self.state_handler.get_current_state(room_id)
def get_domains_from_state(state):
"""Get joined domains from state
Args:
state (dict[tuple, FrozenEvent]): State map from type/state
key to event.
Returns:
list[tuple[str, int]]: Returns a list of servers with the
lowest depth of their joins. Sorted by lowest depth first.
"""
joined_users = [
(state_key, int(event.depth))
for (e_type, state_key), event in state.items()
for (e_type, state_key), event in state.iteritems()
if e_type == EventTypes.Member
and event.membership == Membership.JOIN
]
@@ -769,7 +780,7 @@ class FederationHandler(BaseHandler):
except Exception:
pass
return sorted(joined_domains.items(), key=lambda d: d[1])
return sorted(joined_domains.iteritems(), key=lambda d: d[1])
curr_domains = get_domains_from_state(curr_state)
@@ -786,7 +797,7 @@ class FederationHandler(BaseHandler):
yield self.backfill(
dom, room_id,
limit=100,
extremities=[e for e in extremities.keys()]
extremities=extremities,
)
# If this succeeded then we probably already have the
# appropriate stuff.
@@ -832,7 +843,7 @@ class FederationHandler(BaseHandler):
tried_domains = set(likely_domains)
tried_domains.add(self.server_name)
event_ids = list(extremities.keys())
event_ids = list(extremities.iterkeys())
logger.debug("calling resolve_state_groups in _maybe_backfill")
resolve = logcontext.preserve_fn(
@@ -842,31 +853,34 @@ class FederationHandler(BaseHandler):
[resolve(room_id, [e]) for e in event_ids],
consumeErrors=True,
))
# dict[str, dict[tuple, str]], a map from event_id to state map of
# event_ids.
states = dict(zip(event_ids, [s.state for s in states]))
state_map = yield self.store.get_events(
[e_id for ids in states.values() for e_id in ids],
[e_id for ids in states.itervalues() for e_id in ids.itervalues()],
get_prev_content=False
)
states = {
key: {
k: state_map[e_id]
for k, e_id in state_dict.items()
for k, e_id in state_dict.iteritems()
if e_id in state_map
} for key, state_dict in states.items()
} for key, state_dict in states.iteritems()
}
for e_id, _ in sorted_extremeties_tuple:
likely_domains = get_domains_from_state(states[e_id])
success = yield try_backfill([
dom for dom in likely_domains
dom for dom, _ in likely_domains
if dom not in tried_domains
])
if success:
defer.returnValue(True)
tried_domains.update(likely_domains)
tried_domains.update(dom for dom, _ in likely_domains)
defer.returnValue(False)
@@ -1134,13 +1148,13 @@ class FederationHandler(BaseHandler):
user = UserID.from_string(event.state_key)
yield user_joined_room(self.distributor, user, event.room_id)
state_ids = context.prev_state_ids.values()
state_ids = list(context.prev_state_ids.values())
auth_chain = yield self.store.get_auth_chain(state_ids)
state = yield self.store.get_events(context.prev_state_ids.values())
state = yield self.store.get_events(list(context.prev_state_ids.values()))
defer.returnValue({
"state": state.values(),
"state": list(state.values()),
"auth_chain": auth_chain,
})
@@ -1180,6 +1194,13 @@ class FederationHandler(BaseHandler):
if not self.is_mine_id(event.state_key):
raise SynapseError(400, "The invite event must be for this server")
# block any attempts to invite the server notices mxid
if event.state_key == self._server_notices_mxid:
raise SynapseError(
http_client.FORBIDDEN,
"Cannot invite this user",
)
event.internal_metadata.outlier = True
event.internal_metadata.invite_from_remote = True
@@ -1367,7 +1388,7 @@ class FederationHandler(BaseHandler):
)
if state_groups:
_, state = state_groups.items().pop()
_, state = list(iteritems(state_groups)).pop()
results = {
(e.type, e.state_key): e for e in state
}
@@ -1383,7 +1404,7 @@ class FederationHandler(BaseHandler):
else:
del results[(event.type, event.state_key)]
res = results.values()
res = list(results.values())
for event in res:
# We sign these again because there was a bug where we
# incorrectly signed things the first time round
@@ -1424,7 +1445,7 @@ class FederationHandler(BaseHandler):
else:
results.pop((event.type, event.state_key), None)
defer.returnValue(results.values())
defer.returnValue(list(results.values()))
else:
defer.returnValue([])
@@ -1893,7 +1914,7 @@ class FederationHandler(BaseHandler):
})
new_state = self.state_handler.resolve_events(
[local_view.values(), remote_view.values()],
[list(local_view.values()), list(remote_view.values())],
event
)
@@ -2013,7 +2034,7 @@ class FederationHandler(BaseHandler):
this will not be included in the current_state in the context.
"""
state_updates = {
k: a.event_id for k, a in auth_events.iteritems()
k: a.event_id for k, a in iteritems(auth_events)
if k != event_key
}
context.current_state_ids = dict(context.current_state_ids)
@@ -2023,7 +2044,7 @@ class FederationHandler(BaseHandler):
context.delta_ids.update(state_updates)
context.prev_state_ids = dict(context.prev_state_ids)
context.prev_state_ids.update({
k: a.event_id for k, a in auth_events.iteritems()
k: a.event_id for k, a in iteritems(auth_events)
})
context.state_group = yield self.store.store_state_group(
event.event_id,
@@ -2075,7 +2096,7 @@ class FederationHandler(BaseHandler):
def get_next(it, opt=None):
try:
return it.next()
return next(it)
except Exception:
return opt
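Most of the changes in this file are Python 3 compatibility fixes: dict.iteritems()/.itervalues() and the it.next() method do not exist on Python 3, so they are replaced with six.iteritems()/six.itervalues() and the next() builtin, and call sites that need a materialised sequence wrap the Python 3 lazy views in list(). A minimal sketch of the equivalences, assuming six is installed:

    from six import iteritems, itervalues

    d = {"a": 1, "b": 2}
    for k, v in iteritems(d):    # d.iteritems() on py2, d.items() on py3
        print(k, v)
    vals = list(itervalues(d))   # wrap in list() if the result is kept around
    it = iter(d)
    next(it)                     # works on both; it.next() is py2-only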

View File

@@ -15,6 +15,7 @@
# limitations under the License.
from twisted.internet import defer
from six import iteritems
from synapse.api.errors import SynapseError
from synapse.types import get_domain_from_id
@@ -449,7 +450,7 @@ class GroupsLocalHandler(object):
results = {}
failed_results = []
for destination, dest_user_ids in destinations.iteritems():
for destination, dest_user_ids in iteritems(destinations):
try:
r = yield self.transport_client.bulk_get_publicised_groups(
destination, list(dest_user_ids),

View File

@@ -19,11 +19,17 @@ import sys
from canonicaljson import encode_canonical_json
import six
from six import string_types, itervalues, iteritems
from twisted.internet import defer, reactor
from twisted.internet.defer import succeed
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership, MAX_DEPTH
from synapse.api.errors import AuthError, Codes, SynapseError
from synapse.api.errors import (
AuthError, Codes, SynapseError,
ConsentNotGivenError,
)
from synapse.api.urls import ConsentURIBuilder
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events.utils import serialize_event
from synapse.events.validator import EventValidator
@@ -86,14 +92,14 @@ class MessageHandler(BaseHandler):
# map from purge id to PurgeStatus
self._purges_by_id = {}
def start_purge_history(self, room_id, topological_ordering,
def start_purge_history(self, room_id, token,
delete_local_events=False):
"""Start off a history purge on a room.
Args:
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
@@ -115,19 +121,19 @@ class MessageHandler(BaseHandler):
self._purges_by_id[purge_id] = PurgeStatus()
run_in_background(
self._purge_history,
purge_id, room_id, topological_ordering, delete_local_events,
purge_id, room_id, token, delete_local_events,
)
return purge_id
@defer.inlineCallbacks
def _purge_history(self, purge_id, room_id, topological_ordering,
def _purge_history(self, purge_id, room_id, token,
delete_local_events):
"""Carry out a history purge on a room.
Args:
purge_id (str): The id for this purge
room_id (str): The room to purge from
topological_ordering (int): minimum topo ordering to preserve
token (str): topological token to delete events before
delete_local_events (bool): True to delete local events as well as
remote ones
@@ -138,7 +144,7 @@ class MessageHandler(BaseHandler):
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(
room_id, topological_ordering, delete_local_events,
room_id, token, delete_local_events,
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
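With this change callers hand the purge a topological stream token rather than a bare topological ordering. A minimal sketch of the calling pattern (the handler instance is assumed in scope, and the token value is invented, following synapse's "t<topological>-<stream>" token format):

    purge_id = handler.start_purge_history(
        room_id, "t12345-67890", delete_local_events=False,
    )
    # the purge runs via run_in_background; progress is tracked in memory
    status = handler._purges_by_id[purge_id].status  # e.g. PurgeStatus.STATUS_COMPLETE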
@@ -397,7 +403,7 @@ class MessageHandler(BaseHandler):
"avatar_url": profile.avatar_url,
"display_name": profile.display_name,
}
for user_id, profile in users_with_profile.iteritems()
for user_id, profile in iteritems(users_with_profile)
})
@@ -431,6 +437,9 @@ class EventCreationHandler(object):
self.spam_checker = hs.get_spam_checker()
if self.config.block_events_without_consent_error is not None:
self._consent_uri_builder = ConsentURIBuilder(self.config)
@defer.inlineCallbacks
def create_event(self, requester, event_dict, token_id=None, txn_id=None,
prev_events_and_hashes=None):
@@ -482,6 +491,10 @@ class EventCreationHandler(object):
target, e
)
is_exempt = yield self._is_exempt_from_privacy_policy(builder)
if not is_exempt:
yield self.assert_accepted_privacy_policy(requester)
if token_id is not None:
builder.internal_metadata.token_id = token_id
@@ -496,6 +509,86 @@ class EventCreationHandler(object):
defer.returnValue((event, context))
def _is_exempt_from_privacy_policy(self, builder):
""""Determine if an event to be sent is exempt from having to consent
to the privacy policy
Args:
builder (synapse.events.builder.EventBuilder): event being created
Returns:
Deferred[bool]: true if the event can be sent without the user
consenting
"""
# the only thing the user can do is join the server notices room.
if builder.type == EventTypes.Member:
membership = builder.content.get("membership", None)
if membership == Membership.JOIN:
return self._is_server_notices_room(builder.room_id)
return succeed(False)
@defer.inlineCallbacks
def _is_server_notices_room(self, room_id):
if self.config.server_notices_mxid is None:
defer.returnValue(False)
user_ids = yield self.store.get_users_in_room(room_id)
defer.returnValue(self.config.server_notices_mxid in user_ids)
@defer.inlineCallbacks
def assert_accepted_privacy_policy(self, requester):
"""Check if a user has accepted the privacy policy
Called when the given user is about to do something that requires
privacy consent. We see if the user is exempt and otherwise check that
they have given consent. If they have not, a ConsentNotGivenError is
raised.
Args:
requester (synapse.types.Requester):
The user making the request
Returns:
Deferred[None]: returns normally if the user has consented or is
exempt
Raises:
ConsentNotGivenError: if the user has not given consent yet
"""
if self.config.block_events_without_consent_error is None:
return
# exempt AS users from needing consent
if requester.app_service is not None:
return
user_id = requester.user.to_string()
# exempt the system notices user
if (
self.config.server_notices_mxid is not None and
user_id == self.config.server_notices_mxid
):
return
u = yield self.store.get_user_by_id(user_id)
assert u is not None
if u["appservice_id"] is not None:
# users registered by an appservice are exempt
return
if u["consent_version"] == self.config.user_consent_version:
return
consent_uri = self._consent_uri_builder.build_user_consent_uri(
requester.user.localpart,
)
msg = self.config.block_events_without_consent_error % {
'consent_uri': consent_uri,
}
raise ConsentNotGivenError(
msg=msg,
consent_uri=consent_uri,
)
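A sketch of how a caller might surface the error (inside a @defer.inlineCallbacks function; ConsentNotGivenError is assumed to expose the consent_uri it was constructed with):

    try:
        yield event_creation_handler.assert_accepted_privacy_policy(requester)
    except ConsentNotGivenError as e:
        # the error carries a URI at which the user can give consent;
        # clients are expected to show it to the user
        logger.info("user must consent at %s", e.consent_uri)
        raise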
@defer.inlineCallbacks
def send_nonmember_event(self, requester, event, context, ratelimit=True):
"""
@@ -578,7 +671,7 @@ class EventCreationHandler(object):
spam_error = self.spam_checker.check_event_for_spam(event)
if spam_error:
if not isinstance(spam_error, basestring):
if not isinstance(spam_error, string_types):
spam_error = "Spam is not permitted here"
raise SynapseError(
403, spam_error, Codes.FORBIDDEN
@@ -792,7 +885,7 @@ class EventCreationHandler(object):
state_to_include_ids = [
e_id
for k, e_id in context.current_state_ids.iteritems()
for k, e_id in iteritems(context.current_state_ids)
if k[0] in self.hs.config.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
@@ -806,7 +899,7 @@ class EventCreationHandler(object):
"content": e.content,
"sender": e.sender,
}
for e in state_to_include.itervalues()
for e in itervalues(state_to_include)
]
invitee = UserID.from_string(event.state_key)

View File

@@ -25,6 +25,8 @@ The methods that define policy are:
from twisted.internet import defer, reactor
from contextlib import contextmanager
from six import itervalues, iteritems
from synapse.api.errors import SynapseError
from synapse.api.constants import PresenceState
from synapse.storage.presence import UserPresenceState
@@ -36,27 +38,29 @@ from synapse.util.logutils import log_function
from synapse.util.metrics import Measure
from synapse.util.wheel_timer import WheelTimer
from synapse.types import UserID, get_domain_from_id
import synapse.metrics
from synapse.metrics import LaterGauge
import logging
from prometheus_client import Counter
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
notified_presence_counter = metrics.register_counter("notified_presence")
federation_presence_out_counter = metrics.register_counter("federation_presence_out")
presence_updates_counter = metrics.register_counter("presence_updates")
timers_fired_counter = metrics.register_counter("timers_fired")
federation_presence_counter = metrics.register_counter("federation_presence")
bump_active_time_counter = metrics.register_counter("bump_active_time")
notified_presence_counter = Counter("synapse_handler_presence_notified_presence", "")
federation_presence_out_counter = Counter(
"synapse_handler_presence_federation_presence_out", "")
presence_updates_counter = Counter("synapse_handler_presence_presence_updates", "")
timers_fired_counter = Counter("synapse_handler_presence_timers_fired", "")
federation_presence_counter = Counter("synapse_handler_presence_federation_presence", "")
bump_active_time_counter = Counter("synapse_handler_presence_bump_active_time", "")
get_updates_counter = metrics.register_counter("get_updates", labels=["type"])
get_updates_counter = Counter("synapse_handler_presence_get_updates", "", ["type"])
notify_reason_counter = metrics.register_counter("notify_reason", labels=["reason"])
state_transition_counter = metrics.register_counter(
"state_transition", labels=["from", "to"]
notify_reason_counter = Counter(
"synapse_handler_presence_notify_reason", "", ["reason"])
state_transition_counter = Counter(
"synapse_handler_presence_state_transition", "", ["from", "to"]
)
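These counters now use the upstream prometheus_client API: label values are bound with .labels(...) and the resulting child is incremented, instead of being passed to inc() as with the old synapse.metrics wrappers. A minimal sketch, assuming prometheus_client is installed (the metric name is invented):

    from prometheus_client import Counter

    notify_reason = Counter("example_notify_reason", "", ["reason"])
    notify_reason.labels("state_change").inc()        # was: counter.inc("state_change")
    notify_reason.labels("status_msg_change").inc(5)  # inc() takes an optional amount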
@@ -87,6 +91,11 @@ assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class PresenceHandler(object):
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self.is_mine = hs.is_mine
self.is_mine_id = hs.is_mine_id
self.clock = hs.get_clock()
@@ -94,7 +103,6 @@ class PresenceHandler(object):
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.federation = hs.get_federation_sender()
self.state = hs.get_state_handler()
federation_registry = hs.get_federation_registry()
@@ -137,8 +145,9 @@ class PresenceHandler(object):
for state in active_presence
}
metrics.register_callback(
"user_to_current_state_size", lambda: len(self.user_to_current_state)
LaterGauge(
"synapse_handlers_presence_user_to_current_state_size", "", [],
lambda: len(self.user_to_current_state)
)
now = self.clock.time_msec()
@@ -208,7 +217,8 @@ class PresenceHandler(object):
60 * 1000,
)
metrics.register_callback("wheel_timer_size", lambda: len(self.wheel_timer))
LaterGauge("synapse_handlers_presence_wheel_timer_size", "", [],
lambda: len(self.wheel_timer))
@defer.inlineCallbacks
def _on_shutdown(self):
@@ -311,11 +321,11 @@ class PresenceHandler(object):
# TODO: We should probably ensure there are no races hereafter
presence_updates_counter.inc_by(len(new_states))
presence_updates_counter.inc(len(new_states))
if to_notify:
notified_presence_counter.inc_by(len(to_notify))
yield self._persist_and_notify(to_notify.values())
notified_presence_counter.inc(len(to_notify))
yield self._persist_and_notify(list(to_notify.values()))
self.unpersisted_users_changes |= set(s.user_id for s in new_states)
self.unpersisted_users_changes -= set(to_notify.keys())
@@ -325,7 +335,7 @@ class PresenceHandler(object):
if user_id not in to_notify
}
if to_federation_ping:
federation_presence_out_counter.inc_by(len(to_federation_ping))
federation_presence_out_counter.inc(len(to_federation_ping))
self._push_to_remotes(to_federation_ping.values())
@@ -363,7 +373,7 @@ class PresenceHandler(object):
for user_id in users_to_check
]
timers_fired_counter.inc_by(len(states))
timers_fired_counter.inc(len(states))
changes = handle_timeouts(
states,
@@ -463,61 +473,6 @@ class PresenceHandler(object):
syncing_user_ids.update(user_ids)
return syncing_user_ids
@defer.inlineCallbacks
def update_external_syncs(self, process_id, syncing_user_ids):
"""Update the syncing users for an external process
Args:
process_id(str): An identifier for the process the users are
syncing against. This allows synapse to process updates
as users start and stop syncing against a given process.
syncing_user_ids(set(str)): The set of user_ids that are
currently syncing on that server.
"""
# Grab the previous list of user_ids that were syncing on that process
prev_syncing_user_ids = (
self.external_process_to_current_syncs.get(process_id, set())
)
# Grab the current presence state for both the users that are syncing
# now and the users that were syncing before this update.
prev_states = yield self.current_state_for_users(
syncing_user_ids | prev_syncing_user_ids
)
updates = []
time_now_ms = self.clock.time_msec()
# For each new user that is syncing check if we need to mark them as
# being online.
for new_user_id in syncing_user_ids - prev_syncing_user_ids:
prev_state = prev_states[new_user_id]
if prev_state.state == PresenceState.OFFLINE:
updates.append(prev_state.copy_and_replace(
state=PresenceState.ONLINE,
last_active_ts=time_now_ms,
last_user_sync_ts=time_now_ms,
))
else:
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
# For each user that is still syncing or stopped syncing update the
# last sync time so that we will correctly apply the grace period when
# they stop syncing.
for old_user_id in prev_syncing_user_ids:
prev_state = prev_states[old_user_id]
updates.append(prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
))
yield self._update_states(updates)
# Update the last updated time for the process. We expire the entries
# if we don't receive an update in the given timeframe.
self.external_process_last_updated_ms[process_id] = self.clock.time_msec()
self.external_process_to_current_syncs[process_id] = syncing_user_ids
@defer.inlineCallbacks
def update_external_syncs_row(self, process_id, user_id, is_syncing, sync_time_msec):
"""Update the syncing users for an external process as a delta.
@@ -581,7 +536,7 @@ class PresenceHandler(object):
prev_state.copy_and_replace(
last_user_sync_ts=time_now_ms,
)
for prev_state in prev_states.itervalues()
for prev_state in itervalues(prev_states)
])
self.external_process_last_updated_ms.pop(process_id, None)
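A minimal sketch of a worker reporting per-user sync activity through the delta API (inside a @defer.inlineCallbacks function; the process and user names are invented):

    yield presence_handler.update_external_syncs_row(
        "synchrotron-1",          # process_id
        "@alice:example.com",     # user_id
        True,                     # is_syncing
        self.clock.time_msec(),   # sync_time_msec
    )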
@@ -604,14 +559,14 @@ class PresenceHandler(object):
for user_id in user_ids
}
missing = [user_id for user_id, state in states.iteritems() if not state]
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
# There are things not in our in memory cache. Lets pull them out of
# the database.
res = yield self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in states.iteritems() if not state]
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id)
@@ -707,7 +662,7 @@ class PresenceHandler(object):
updates.append(prev_state.copy_and_replace(**new_fields))
if updates:
federation_presence_counter.inc_by(len(updates))
federation_presence_counter.inc(len(updates))
yield self._update_states(updates)
@defer.inlineCallbacks
@@ -732,7 +687,7 @@ class PresenceHandler(object):
"""
updates = yield self.current_state_for_users(target_user_ids)
updates = updates.values()
updates = list(updates.values())
for user_id in set(target_user_ids) - set(u.user_id for u in updates):
updates.append(UserPresenceState.default(user_id))
@@ -798,11 +753,11 @@ class PresenceHandler(object):
self._push_to_remotes([state])
else:
user_ids = yield self.store.get_users_in_room(room_id)
user_ids = filter(self.is_mine_id, user_ids)
user_ids = list(filter(self.is_mine_id, user_ids))
states = yield self.current_state_for_users(user_ids)
self._push_to_remotes(states.values())
self._push_to_remotes(list(states.values()))
@defer.inlineCallbacks
def get_presence_list(self, observer_user, accepted=None):
@@ -982,28 +937,28 @@ def should_notify(old_state, new_state):
return False
if old_state.status_msg != new_state.status_msg:
notify_reason_counter.inc("status_msg_change")
notify_reason_counter.labels("status_msg_change").inc()
return True
if old_state.state != new_state.state:
notify_reason_counter.inc("state_change")
state_transition_counter.inc(old_state.state, new_state.state)
notify_reason_counter.labels("state_change").inc()
state_transition_counter.labels(old_state.state, new_state.state).inc()
return True
if old_state.state == PresenceState.ONLINE:
if new_state.currently_active != old_state.currently_active:
notify_reason_counter.inc("current_active_change")
notify_reason_counter.labels("current_active_change").inc()
return True
if new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Only notify about last active bumps if we're not currently active
if not new_state.currently_active:
notify_reason_counter.inc("last_active_change_online")
notify_reason_counter.labels("last_active_change_online").inc()
return True
elif new_state.last_active_ts - old_state.last_active_ts > LAST_ACTIVE_GRANULARITY:
# Always notify for a transition where last active gets bumped.
notify_reason_counter.inc("last_active_change_not_online")
notify_reason_counter.labels("last_active_change_not_online").inc()
return True
return False
@@ -1077,14 +1032,14 @@ class PresenceEventSource(object):
if changed is not None and len(changed) < 500:
# For small deltas, it's quicker to get all changes and then
# work out if we share a room or they're in our presence list
get_updates_counter.inc("stream")
get_updates_counter.labels("stream").inc()
for other_user_id in changed:
if other_user_id in users_interested_in:
user_ids_changed.add(other_user_id)
else:
# Too many possible updates. Find all users we can see and check
# if any of them have changed.
get_updates_counter.inc("full")
get_updates_counter.labels("full").inc()
if from_key:
user_ids_changed = stream_change_cache.get_entities_changed(
@@ -1096,10 +1051,10 @@ class PresenceEventSource(object):
updates = yield presence.current_state_for_users(user_ids_changed)
if include_offline:
defer.returnValue((updates.values(), max_token))
defer.returnValue((list(updates.values()), max_token))
else:
defer.returnValue(([
s for s in updates.itervalues()
s for s in itervalues(updates)
if s.state != PresenceState.OFFLINE
], max_token))
@@ -1157,7 +1112,7 @@ def handle_timeouts(user_states, is_mine_fn, syncing_user_ids, now):
if new_state:
changes[state.user_id] = new_state
return changes.values()
return list(changes.values())
def handle_timeout(state, is_mine, syncing_user_ids, now):
@@ -1356,11 +1311,11 @@ def get_interested_remotes(store, states, state_handler):
# hosts in those rooms.
room_ids_to_states, users_to_states = yield get_interested_parties(store, states)
for room_id, states in room_ids_to_states.iteritems():
for room_id, states in iteritems(room_ids_to_states):
hosts = yield state_handler.get_current_hosts_in_room(room_id)
hosts_and_states.append((hosts, states))
for user_id, states in users_to_states.iteritems():
for user_id, states in iteritems(users_to_states):
host = get_domain_from_id(user_id)
hosts_and_states.append(([host], states))

View File

@@ -34,6 +34,11 @@ logger = logging.getLogger(__name__)
class RegistrationHandler(BaseHandler):
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
super(RegistrationHandler, self).__init__(hs)
self.auth = hs.get_auth()
@@ -49,6 +54,7 @@ class RegistrationHandler(BaseHandler):
self._generate_user_id_linearizer = Linearizer(
name="_generate_user_id_linearizer",
)
self._server_notices_mxid = hs.config.server_notices_mxid
@defer.inlineCallbacks
def check_username(self, localpart, guest_access_token=None,
@@ -338,6 +344,14 @@ class RegistrationHandler(BaseHandler):
yield identity_handler.bind_threepid(c, user_id)
def check_user_id_not_appservice_exclusive(self, user_id, allowed_appservice=None):
# don't allow people to register the server notices mxid
if self._server_notices_mxid is not None:
if user_id == self._server_notices_mxid:
raise SynapseError(
400, "This user ID is reserved.",
errcode=Codes.EXCLUSIVE
)
# valid user IDs must not clash with any user ID namespaces claimed by
# application services.
services = self.store.get_app_services()

View File

@@ -68,14 +68,27 @@ class RoomCreationHandler(BaseHandler):
self.event_creation_handler = hs.get_event_creation_handler()
@defer.inlineCallbacks
def create_room(self, requester, config, ratelimit=True):
def create_room(self, requester, config, ratelimit=True,
creator_join_profile=None):
""" Creates a new room.
Args:
requester (Requester): The user who requested the room creation.
requester (synapse.types.Requester):
The user who requested the room creation.
config (dict) : A dict of configuration options.
ratelimit (bool): set to False to disable the rate limiter
creator_join_profile (dict|None):
Set to override the displayname and avatar for the creating
user in this room. If unset, displayname and avatar will be
derived from the user's profile. If set, should contain the
values to go in the body of the 'join' event (typically
`avatar_url` and/or `displayname`).
Returns:
The new room ID.
Deferred[dict]:
a dict containing the keys `room_id` and, if an alias was
requested, `room_alias`.
Raises:
SynapseError if the room ID couldn't be stored, or something went
horribly wrong.
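A minimal sketch of the new keyword argument in use (inside a @defer.inlineCallbacks function; the handler and requester are assumed in scope, and the profile values are invented):

    result = yield room_creation_handler.create_room(
        requester,
        {"preset": "private_chat"},
        ratelimit=False,
        creator_join_profile={
            "displayname": "Server Notices",
            "avatar_url": "mxc://example.com/abcdef",  # hypothetical mxc URI
        },
    )
    room_id = result["room_id"]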
@@ -113,6 +126,10 @@ class RoomCreationHandler(BaseHandler):
except Exception:
raise SynapseError(400, "Invalid user_id: %s" % (i,))
yield self.event_creation_handler.assert_accepted_privacy_policy(
requester,
)
invite_3pid_list = config.get("invite_3pid", [])
visibility = config.get("visibility", None)
@@ -176,7 +193,8 @@ class RoomCreationHandler(BaseHandler):
initial_state=initial_state,
creation_content=creation_content,
room_alias=room_alias,
power_level_content_override=config.get("power_level_content_override", {})
power_level_content_override=config.get("power_level_content_override", {}),
creator_join_profile=creator_join_profile,
)
if "name" in config:
@@ -256,6 +274,7 @@ class RoomCreationHandler(BaseHandler):
creation_content,
room_alias,
power_level_content_override,
creator_join_profile,
):
def create(etype, content, **kwargs):
e = {
@@ -299,6 +318,7 @@ class RoomCreationHandler(BaseHandler):
room_id,
"join",
ratelimit=False,
content=creator_join_profile,
)
# We treat the power levels override specially as this needs to be one
@@ -435,7 +455,7 @@ class RoomContextHandler(BaseHandler):
state = yield self.store.get_state_for_events(
[last_event_id], None
)
results["state"] = state[last_event_id].values()
results["state"] = list(state[last_event_id].values())
results["start"] = now_token.copy_and_replace(
"room_key", results["start"]

View File

@@ -15,6 +15,7 @@
from twisted.internet import defer
from six import iteritems
from six.moves import range
from ._base import BaseHandler
@@ -307,7 +308,7 @@ class RoomListHandler(BaseHandler):
)
event_map = yield self.store.get_events([
event_id for key, event_id in current_state_ids.iteritems()
event_id for key, event_id in iteritems(current_state_ids)
if key[0] in (
EventTypes.JoinRules,
EventTypes.Name,

View File

@@ -17,11 +17,14 @@
import abc
import logging
from six.moves import http_client
from signedjson.key import decode_verify_key_bytes
from signedjson.sign import verify_signed_json
from twisted.internet import defer
from unpaddedbase64 import decode_base64
import synapse.server
import synapse.types
from synapse.api.constants import (
EventTypes, Membership,
@@ -46,6 +49,11 @@ class RoomMemberHandler(object):
__metaclass__ = abc.ABCMeta
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
@@ -63,6 +71,7 @@ class RoomMemberHandler(object):
self.clock = hs.get_clock()
self.spam_checker = hs.get_spam_checker()
self._server_notices_mxid = self.config.server_notices_mxid
@abc.abstractmethod
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
@@ -290,11 +299,26 @@ class RoomMemberHandler(object):
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
if effective_membership_state == "invite":
if effective_membership_state == Membership.INVITE:
# block any attempts to invite the server notices mxid
if target.to_string() == self._server_notices_mxid:
raise SynapseError(
http_client.FORBIDDEN,
"Cannot invite this user",
)
block_invite = False
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if (self._server_notices_mxid is not None and
requester.user.to_string() == self._server_notices_mxid):
# allow the server notices mxid to send invites
is_requester_admin = True
else:
is_requester_admin = yield self.auth.is_server_admin(
requester.user,
)
if not is_requester_admin:
if self.config.block_non_admin_invites:
logger.info(
@@ -349,6 +373,20 @@ class RoomMemberHandler(object):
if same_sender and same_membership and same_content:
defer.returnValue(old_state)
# we don't allow people to reject invites to the server notice
# room, but they can leave it once they are joined.
if (
old_membership == Membership.INVITE and
effective_membership_state == Membership.LEAVE
):
is_blocked = yield self._is_server_notice_room(room_id)
if is_blocked:
raise SynapseError(
http_client.FORBIDDEN,
"You cannot reject this invite",
errcode=Codes.CANNOT_LEAVE_SERVER_NOTICE_ROOM,
)
is_host_in_room = yield self._is_host_in_room(current_state_ids)
if effective_membership_state == Membership.JOIN:
@@ -844,6 +882,13 @@ class RoomMemberHandler(object):
defer.returnValue(False)
@defer.inlineCallbacks
def _is_server_notice_room(self, room_id):
if self._server_notices_mxid is None:
defer.returnValue(False)
user_ids = yield self.store.get_users_in_room(room_id)
defer.returnValue(self._server_notices_mxid in user_ids)
class RoomMemberMasterHandler(RoomMemberHandler):
def __init__(self, hs):

View File

@@ -348,7 +348,7 @@ class SearchHandler(BaseHandler):
rooms = set(e.room_id for e in allowed_events)
for room_id in rooms:
state = yield self.state_handler.get_current_state(room_id)
state_results[room_id] = state.values()
state_results[room_id] = list(state.values())
state_results.values()

View File

@@ -28,6 +28,8 @@ import collections
import logging
import itertools
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
@@ -275,7 +277,7 @@ class SyncHandler(object):
# result returned by the event source is poor form (it might cache
# the object)
room_id = event["room_id"]
event_copy = {k: v for (k, v) in event.iteritems()
event_copy = {k: v for (k, v) in iteritems(event)
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
@@ -294,7 +296,7 @@ class SyncHandler(object):
for event in receipts:
room_id = event["room_id"]
# exclude room id, as above
event_copy = {k: v for (k, v) in event.iteritems()
event_copy = {k: v for (k, v) in iteritems(event)
if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
@@ -325,7 +327,7 @@ class SyncHandler(object):
current_state_ids = frozenset()
if any(e.is_state() for e in recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
current_state_ids = frozenset(itervalues(current_state_ids))
recents = yield filter_events_for_client(
self.store,
@@ -382,7 +384,7 @@ class SyncHandler(object):
current_state_ids = frozenset()
if any(e.is_state() for e in loaded_recents):
current_state_ids = yield self.state.get_current_state_ids(room_id)
current_state_ids = frozenset(current_state_ids.itervalues())
current_state_ids = frozenset(itervalues(current_state_ids))
loaded_recents = yield filter_events_for_client(
self.store,
@@ -441,6 +443,10 @@ class SyncHandler(object):
Returns:
A Deferred map from ((type, state_key)->Event)
"""
# FIXME this claims to get the state at a stream position, but
# get_recent_events_for_room operates by topo ordering. This therefore
# does not reliably give you the state at the given stream position.
# (https://github.com/matrix-org/synapse/issues/3305)
last_events, _ = yield self.store.get_recent_events_for_room(
room_id, end_token=stream_position.room_key, limit=1,
)
@@ -535,11 +541,11 @@ class SyncHandler(object):
state = {}
if state_ids:
state = yield self.store.get_events(state_ids.values())
state = yield self.store.get_events(list(state_ids.values()))
defer.returnValue({
(e.type, e.state_key): e
for e in sync_config.filter_collection.filter_room_state(state.values())
for e in sync_config.filter_collection.filter_room_state(list(state.values()))
})
@defer.inlineCallbacks
@@ -888,7 +894,7 @@ class SyncHandler(object):
presence.extend(states)
# Deduplicate the presence entries so that there's at most one per user
presence = {p.user_id: p for p in presence}.values()
presence = list({p.user_id: p for p in presence}.values())
presence = sync_config.filter_collection.filter_presence(
presence
@@ -984,7 +990,7 @@ class SyncHandler(object):
if since_token:
for joined_sync in sync_result_builder.joined:
it = itertools.chain(
joined_sync.timeline.events, joined_sync.state.itervalues()
joined_sync.timeline.events, itervalues(joined_sync.state)
)
for event in it:
if event.type == EventTypes.Member:
@@ -1040,7 +1046,13 @@ class SyncHandler(object):
Returns:
Deferred(tuple): Returns a tuple of the form:
`([RoomSyncResultBuilder], [InvitedSyncResult], newly_joined_rooms)`
`(room_entries, invited_rooms, newly_joined_rooms, newly_left_rooms)`
where:
room_entries is a list [RoomSyncResultBuilder]
invited_rooms is a list [InvitedSyncResult]
newly_joined_rooms is a list[str] of room ids
newly_left_rooms is a list[str] of room ids
"""
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
@@ -1062,7 +1074,7 @@ class SyncHandler(object):
newly_left_rooms = []
room_entries = []
invited = []
for room_id, events in mem_change_events_by_room_id.iteritems():
for room_id, events in iteritems(mem_change_events_by_room_id):
non_joins = [e for e in events if e.membership != Membership.JOIN]
has_join = len(non_joins) != len(events)

View File

@@ -22,6 +22,7 @@ from synapse.util.metrics import Measure
from synapse.util.async import sleep
from synapse.types import get_localpart_from_id
from six import iteritems
logger = logging.getLogger(__name__)
@@ -122,6 +123,13 @@ class UserDirectoryHandler(object):
user_id, profile.display_name, profile.avatar_url, None,
)
@defer.inlineCallbacks
def handle_user_deactivated(self, user_id):
"""Called when a user ID is deactivated
"""
yield self.store.remove_from_user_dir(user_id)
yield self.store.remove_from_user_in_public_room(user_id)
@defer.inlineCallbacks
def _unsafe_process(self):
# If self.pos is None then means we haven't fetched it from DB
@@ -403,7 +411,7 @@ class UserDirectoryHandler(object):
if change:
users_with_profile = yield self.state.get_current_user_in_room(room_id)
for user_id, profile in users_with_profile.iteritems():
for user_id, profile in iteritems(users_with_profile):
yield self._handle_new_user(room_id, user_id, profile)
else:
users = yield self.store.get_users_in_public_due_to_room(room_id)

View File

@@ -23,7 +23,6 @@ from synapse.http import cancelled_to_request_timed_out_error
from synapse.util.async import add_timeout_to_deferred
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.logcontext import make_deferred_yieldable
import synapse.metrics
from synapse.http.endpoint import SpiderEndpoint
from canonicaljson import encode_canonical_json
@@ -42,6 +41,7 @@ from twisted.web._newclient import ResponseDone
from six import StringIO
from prometheus_client import Counter
import simplejson as json
import logging
import urllib
@@ -49,16 +49,9 @@ import urllib
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
outgoing_requests_counter = metrics.register_counter(
"requests",
labels=["method"],
)
incoming_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
)
outgoing_requests_counter = Counter("synapse_http_client_requests", "", ["method"])
incoming_responses_counter = Counter("synapse_http_client_responses", "",
["method", "code"])
class SimpleHttpClient(object):
@@ -95,7 +88,7 @@ class SimpleHttpClient(object):
def request(self, method, uri, *args, **kwargs):
# A small wrapper around self.agent.request() so we can easily attach
# counters to it
outgoing_requests_counter.inc(method)
outgoing_requests_counter.labels(method).inc()
logger.info("Sending request %s %s", method, uri)
@@ -109,14 +102,14 @@ class SimpleHttpClient(object):
)
response = yield make_deferred_yieldable(request_deferred)
incoming_responses_counter.inc(method, response.code)
incoming_responses_counter.labels(method, response.code).inc()
logger.info(
"Received response to %s %s: %s",
method, uri, response.code
)
defer.returnValue(response)
except Exception as e:
incoming_responses_counter.inc(method, "ERR")
incoming_responses_counter.labels(method, "ERR").inc()
logger.info(
"Error sending request to %s %s: %s %s",
method, uri, type(e).__name__, e.message

View File

@@ -42,20 +42,18 @@ import random
import sys
import urllib
from six.moves.urllib import parse as urlparse
from six import string_types
from prometheus_client import Counter
logger = logging.getLogger(__name__)
outbound_logger = logging.getLogger("synapse.http.outbound")
metrics = synapse.metrics.get_metrics_for(__name__)
outgoing_requests_counter = metrics.register_counter(
"requests",
labels=["method"],
)
incoming_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
)
outgoing_requests_counter = Counter("synapse_http_matrixfederationclient_requests",
"", ["method"])
incoming_responses_counter = Counter("synapse_http_matrixfederationclient_responses",
"", ["method", "code"])
MAX_LONG_RETRIES = 10
@@ -553,7 +551,7 @@ class MatrixFederationHttpClient(object):
encoded_args = {}
for k, vs in args.items():
if isinstance(vs, basestring):
if isinstance(vs, string_types):
vs = [vs]
encoded_args[k] = [v.encode("UTF-8") for v in vs]
@@ -668,7 +666,7 @@ def check_content_type_is_json(headers):
RuntimeError if the Content-Type header is missing or isn't JSON
"""
c_type = headers.getRawHeaders("Content-Type")
c_type = headers.getRawHeaders(b"Content-Type")
if c_type is None:
raise RuntimeError(
"No Content-Type header"
@@ -685,7 +683,7 @@ def check_content_type_is_json(headers):
def encode_query_args(args):
encoded_args = {}
for k, vs in args.items():
if isinstance(vs, basestring):
if isinstance(vs, string_types):
vs = [vs]
encoded_args[k] = [v.encode("UTF-8") for v in vs]

View File

@@ -16,96 +16,142 @@
import logging
import synapse.metrics
from prometheus_client.core import Counter, Histogram
from synapse.metrics import LaterGauge
from synapse.util.logcontext import LoggingContext
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for("synapse.http.server")
# total number of responses served, split by method/servlet/tag
response_count = metrics.register_counter(
"response_count",
labels=["method", "servlet", "tag"],
alternative_names=(
# the following are all deprecated aliases for the same metric
metrics.name_prefix + x for x in (
"_requests",
"_response_time:count",
"_response_ru_utime:count",
"_response_ru_stime:count",
"_response_db_txn_count:count",
"_response_db_txn_duration:count",
)
)
response_count = Counter(
"synapse_http_server_response_count", "", ["method", "servlet", "tag"]
)
requests_counter = metrics.register_counter(
"requests_received",
labels=["method", "servlet", ],
requests_counter = Counter(
"synapse_http_server_requests_received", "", ["method", "servlet"]
)
outgoing_responses_counter = metrics.register_counter(
"responses",
labels=["method", "code"],
outgoing_responses_counter = Counter(
"synapse_http_server_responses", "", ["method", "code"]
)
response_timer = metrics.register_counter(
"response_time_seconds",
labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_time:total",
),
response_timer = Histogram(
"synapse_http_server_response_time_seconds", "sec", ["method", "servlet", "tag"]
)
response_ru_utime = metrics.register_counter(
"response_ru_utime_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_ru_utime:total",
),
response_ru_utime = Counter(
"synapse_http_server_response_ru_utime_seconds", "sec", ["method", "servlet", "tag"]
)
response_ru_stime = metrics.register_counter(
"response_ru_stime_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_ru_stime:total",
),
response_ru_stime = Counter(
"synapse_http_server_response_ru_stime_seconds", "sec", ["method", "servlet", "tag"]
)
response_db_txn_count = metrics.register_counter(
"response_db_txn_count", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_db_txn_count:total",
),
response_db_txn_count = Counter(
"synapse_http_server_response_db_txn_count", "", ["method", "servlet", "tag"]
)
# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
response_db_txn_duration = metrics.register_counter(
"response_db_txn_duration_seconds", labels=["method", "servlet", "tag"],
alternative_names=(
metrics.name_prefix + "_response_db_txn_duration:total",
),
response_db_txn_duration = Counter(
"synapse_http_server_response_db_txn_duration_seconds",
"",
["method", "servlet", "tag"],
)
# seconds spent waiting for a db connection, when processing this request
response_db_sched_duration = metrics.register_counter(
"response_db_sched_duration_seconds", labels=["method", "servlet", "tag"]
response_db_sched_duration = Counter(
"synapse_http_server_response_db_sched_duration_seconds",
"",
["method", "servlet", "tag"],
)
# size in bytes of the response written
response_size = metrics.register_counter(
"response_size", labels=["method", "servlet", "tag"]
response_size = Counter(
"synapse_http_server_response_size", "", ["method", "servlet", "tag"]
)
# In flight metrics are incremented while the requests are in flight, rather
# than when the response was written.
in_flight_requests_ru_utime = Counter(
"synapse_http_server_in_flight_requests_ru_utime_seconds",
"",
["method", "servlet"],
)
in_flight_requests_ru_stime = Counter(
"synapse_http_server_in_flight_requests_ru_stime_seconds",
"",
["method", "servlet"],
)
in_flight_requests_db_txn_count = Counter(
"synapse_http_server_in_flight_requests_db_txn_count", "", ["method", "servlet"]
)
# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
in_flight_requests_db_txn_duration = Counter(
"synapse_http_server_in_flight_requests_db_txn_duration_seconds",
"",
["method", "servlet"],
)
# seconds spent waiting for a db connection, when processing this request
in_flight_requests_db_sched_duration = Counter(
"synapse_http_server_in_flight_requests_db_sched_duration_seconds",
"",
["method", "servlet"],
)
# The set of all in flight requests, set[RequestMetrics]
_in_flight_requests = set()
def _get_in_flight_counts():
"""Returns a count of all in flight requests by (method, server_name)
Returns:
dict[tuple[str, str], int]
"""
for rm in _in_flight_requests:
rm.update_metrics()
# Map from (method, name) -> int, the number of in flight requests of that
# type
counts = {}
for rm in _in_flight_requests:
key = (rm.method, rm.name,)
counts[key] = counts.get(key, 0) + 1
return counts
LaterGauge(
"synapse_http_request_metrics_in_flight_requests_count",
"",
["method", "servlet"],
_get_in_flight_counts,
)
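At scrape time the gauge calls _get_in_flight_counts, which first flushes each in-flight request's resource usage into the in_flight_requests_* counters and then returns a mapping shaped like this (the method and servlet names are invented):

    {
        ("GET", "SyncRestServlet"): 12,
        ("PUT", "RoomSendEventRestServlet"): 3,
    }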
class RequestMetrics(object):
def start(self, time_msec, name):
self.start = time_msec
def start(self, time_sec, name, method):
self.start = time_sec
self.start_context = LoggingContext.current_context()
self.name = name
self.method = method
self._request_stats = _RequestStats.from_context(self.start_context)
_in_flight_requests.add(self)
def stop(self, time_sec, request):
_in_flight_requests.discard(self)
def stop(self, time_msec, request):
context = LoggingContext.current_context()
tag = ""
@@ -119,31 +165,109 @@ class RequestMetrics(object):
)
return
outgoing_responses_counter.inc(request.method, str(request.code))
outgoing_responses_counter.labels(request.method, str(request.code)).inc()
response_count.inc(request.method, self.name, tag)
response_count.labels(request.method, self.name, tag).inc()
response_timer.inc_by(
time_msec - self.start, request.method,
self.name, tag
response_timer.labels(request.method, self.name, tag).observe(
time_sec - self.start
)
ru_utime, ru_stime = context.get_resource_usage()
response_ru_utime.inc_by(
ru_utime, request.method, self.name, tag
response_ru_utime.labels(request.method, self.name, tag).inc(ru_utime)
response_ru_stime.labels(request.method, self.name, tag).inc(ru_stime)
response_db_txn_count.labels(request.method, self.name, tag).inc(
context.db_txn_count
)
response_ru_stime.inc_by(
ru_stime, request.method, self.name, tag
response_db_txn_duration.labels(request.method, self.name, tag).inc(
context.db_txn_duration_sec
)
response_db_txn_count.inc_by(
context.db_txn_count, request.method, self.name, tag
)
response_db_txn_duration.inc_by(
context.db_txn_duration_ms / 1000., request.method, self.name, tag
)
response_db_sched_duration.inc_by(
context.db_sched_duration_ms / 1000., request.method, self.name, tag
response_db_sched_duration.labels(request.method, self.name, tag).inc(
context.db_sched_duration_sec
)
response_size.inc_by(request.sentLength, request.method, self.name, tag)
response_size.labels(request.method, self.name, tag).inc(request.sentLength)
# We always call this at the end to ensure that the metrics are
# updated regardless of whether /metrics was polled while the
# request was in flight.
self.update_metrics()
def update_metrics(self):
"""Updates the in flight metrics with values from this request.
"""
diff = self._request_stats.update(self.start_context)
in_flight_requests_ru_utime.labels(self.method, self.name).inc(diff.ru_utime)
in_flight_requests_ru_stime.labels(self.method, self.name).inc(diff.ru_stime)
in_flight_requests_db_txn_count.labels(self.method, self.name).inc(
diff.db_txn_count
)
in_flight_requests_db_txn_duration.labels(self.method, self.name).inc(
diff.db_txn_duration_sec
)
in_flight_requests_db_sched_duration.labels(self.method, self.name).inc(
diff.db_sched_duration_sec
)
class _RequestStats(object):
"""Keeps tracks of various metrics for an in flight request.
"""
__slots__ = [
"ru_utime",
"ru_stime",
"db_txn_count",
"db_txn_duration_sec",
"db_sched_duration_sec",
]
def __init__(
self, ru_utime, ru_stime, db_txn_count, db_txn_duration_sec, db_sched_duration_sec
):
self.ru_utime = ru_utime
self.ru_stime = ru_stime
self.db_txn_count = db_txn_count
self.db_txn_duration_sec = db_txn_duration_sec
self.db_sched_duration_sec = db_sched_duration_sec
@staticmethod
def from_context(context):
ru_utime, ru_stime = context.get_resource_usage()
return _RequestStats(
ru_utime, ru_stime,
context.db_txn_count,
context.db_txn_duration_sec,
context.db_sched_duration_sec,
)
def update(self, context):
"""Updates the current values and returns the difference between the
old and new values.
Returns:
_RequestStats: The difference between the old and new values
"""
new = _RequestStats.from_context(context)
diff = _RequestStats(
new.ru_utime - self.ru_utime,
new.ru_stime - self.ru_stime,
new.db_txn_count - self.db_txn_count,
new.db_txn_duration_sec - self.db_txn_duration_sec,
new.db_sched_duration_sec - self.db_sched_duration_sec,
)
self.ru_utime = new.ru_utime
self.ru_stime = new.ru_stime
self.db_txn_count = new.db_txn_count
self.db_txn_duration_sec = new.db_txn_duration_sec
self.db_sched_duration_sec = new.db_sched_duration_sec
return diff
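The pattern here is snapshot-and-delta: usage is snapshotted when the request starts, and each update() returns only the increment since the previous poll, so the in-flight counters never double-count. A sketch using the names above (ctx, method and name assumed in scope):

    stats = _RequestStats.from_context(ctx)  # snapshot at request start
    # ... the request does some work ...
    diff = stats.update(ctx)                 # delta since the snapshot
    in_flight_requests_ru_utime.labels(method, name).inc(diff.ru_utime)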

View File

@@ -13,7 +13,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import cgi
from six.moves import http_client
from synapse.api.errors import (
cs_exception, SynapseError, CodeMessageException, UnrecognizedRequestError, Codes
@@ -44,6 +45,18 @@ import simplejson
logger = logging.getLogger(__name__)
HTML_ERROR_TEMPLATE = """<!DOCTYPE html>
<html lang=en>
<head>
<meta charset="utf-8">
<title>Error {code}</title>
</head>
<body>
<p>{msg}</p>
</body>
</html>
"""
def wrap_json_request_handler(h):
"""Wraps a request handler method with exception handling.
@@ -102,6 +115,65 @@ def wrap_json_request_handler(h):
return wrap_request_handler_with_logging(wrapped_request_handler)
def wrap_html_request_handler(h):
"""Wraps a request handler method with exception handling.
Also adds logging as per wrap_request_handler_with_logging.
The handler method must have a signature of "handle_foo(self, request)",
where "self" must have a "clock" attribute (and "request" must be a
SynapseRequest).
"""
def wrapped_request_handler(self, request):
d = defer.maybeDeferred(h, self, request)
d.addErrback(_return_html_error, request)
return d
return wrap_request_handler_with_logging(wrapped_request_handler)
def _return_html_error(f, request):
"""Sends an HTML error page corresponding to the given failure
Args:
f (twisted.python.failure.Failure):
request (twisted.web.iweb.IRequest):
"""
if f.check(CodeMessageException):
cme = f.value
code = cme.code
msg = cme.msg
if isinstance(cme, SynapseError):
logger.info(
"%s SynapseError: %s - %s", request, code, msg
)
else:
logger.error(
"Failed handle request %r: %s",
request,
f.getTraceback().rstrip(),
)
else:
code = http_client.INTERNAL_SERVER_ERROR
msg = "Internal server error"
logger.error(
"Failed handle request %r: %s",
request,
f.getTraceback().rstrip(),
)
body = HTML_ERROR_TEMPLATE.format(
code=code, msg=cgi.escape(msg),
).encode("utf-8")
request.setResponseCode(code)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%i" % (len(body),))
request.write(body)
finish_request(request)
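A sketch of the decorator in use (the resource class and method below are hypothetical, but follow the "handle_foo(self, request)" signature described above):

    class ConsentResource(Resource):  # hypothetical resource
        @wrap_html_request_handler
        def _async_render_GET(self, request):
            # any exception raised here is turned into an HTML error
            # page by _return_html_error
            raise SynapseError(404, "Unknown consent version")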
def wrap_request_handler_with_logging(h):
"""Wraps a request handler to provide logging and metrics
@@ -132,14 +204,14 @@ def wrap_request_handler_with_logging(h):
servlet_name = self.__class__.__name__
with request.processing(servlet_name):
with PreserveLoggingContext(request_context):
d = h(self, request)
d = defer.maybeDeferred(h, self, request)
# record the arrival of the request *after*
# dispatching to the handler, so that the handler
# can update the servlet name in the request
# metrics
requests_counter.inc(request.method,
request.request_metrics.name)
requests_counter.labels(request.method,
request.request_metrics.name).inc()
yield d
return wrapped_request_handler

View File

@@ -56,7 +56,7 @@ class SynapseRequest(Request):
def __repr__(self):
# We overwrite this so that we don't log ``access_token``
return '<%s at 0x%x method=%s uri=%s clientproto=%s site=%s>' % (
return '<%s at 0x%x method=%r uri=%r clientproto=%r site=%r>' % (
self.__class__.__name__,
id(self),
self.method,
@@ -83,9 +83,11 @@ class SynapseRequest(Request):
return Request.render(self, resrc)
def _started_processing(self, servlet_name):
self.start_time = int(time.time() * 1000)
self.start_time = time.time()
self.request_metrics = RequestMetrics()
self.request_metrics.start(self.start_time, name=servlet_name)
self.request_metrics.start(
self.start_time, name=servlet_name, method=self.method,
)
self.site.access_logger.info(
"%s - %s - Received request: %s %s",
@@ -100,26 +102,26 @@ class SynapseRequest(Request):
context = LoggingContext.current_context()
ru_utime, ru_stime = context.get_resource_usage()
db_txn_count = context.db_txn_count
db_txn_duration_ms = context.db_txn_duration_ms
db_sched_duration_ms = context.db_sched_duration_ms
db_txn_duration_sec = context.db_txn_duration_sec
db_sched_duration_sec = context.db_sched_duration_sec
except Exception:
ru_utime, ru_stime = (0, 0)
db_txn_count, db_txn_duration_ms = (0, 0)
db_txn_count, db_txn_duration_sec = (0, 0)
end_time = int(time.time() * 1000)
end_time = time.time()
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %dms (%dms, %dms) (%dms/%dms/%d)"
" Processed request: %.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
" %sB %s \"%s %s %s\" \"%s\"",
self.getClientIP(),
self.site.site_tag,
self.authenticated_entity,
end_time - self.start_time,
int(ru_utime * 1000),
int(ru_stime * 1000),
db_sched_duration_ms,
db_txn_duration_ms,
ru_utime,
ru_stime,
db_sched_duration_sec,
db_txn_duration_sec,
int(db_txn_count),
self.sentLength,
self.code,

View File

@@ -17,165 +17,178 @@ import logging
import functools
import time
import gc
import os
import platform
import attr
from prometheus_client import Gauge, Histogram, Counter
from prometheus_client.core import GaugeMetricFamily, REGISTRY
from twisted.internet import reactor
from .metric import (
CounterMetric, CallbackMetric, DistributionMetric, CacheMetric,
MemoryUsageMetric, GaugeMetric,
)
from .process_collector import register_process_collector
logger = logging.getLogger(__name__)
running_on_pypy = platform.python_implementation() == 'PyPy'
running_on_pypy = platform.python_implementation() == "PyPy"
all_metrics = []
all_collectors = []
all_gauges = {}
HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
class Metrics(object):
""" A single Metrics object gives a (mutable) slice view of the all_metrics
dict, allowing callers to easily register new metrics that are namespaced
nicely."""
class RegistryProxy(object):
def __init__(self, name):
self.name_prefix = name
def make_subspace(self, name):
return Metrics("%s_%s" % (self.name_prefix, name))
def register_collector(self, func):
all_collectors.append(func)
def _register(self, metric_class, name, *args, **kwargs):
full_name = "%s_%s" % (self.name_prefix, name)
metric = metric_class(full_name, *args, **kwargs)
all_metrics.append(metric)
return metric
def register_counter(self, *args, **kwargs):
"""
Returns:
CounterMetric
"""
return self._register(CounterMetric, *args, **kwargs)
def register_gauge(self, *args, **kwargs):
"""
Returns:
GaugeMetric
"""
return self._register(GaugeMetric, *args, **kwargs)
def register_callback(self, *args, **kwargs):
"""
Returns:
CallbackMetric
"""
return self._register(CallbackMetric, *args, **kwargs)
def register_distribution(self, *args, **kwargs):
"""
Returns:
DistributionMetric
"""
return self._register(DistributionMetric, *args, **kwargs)
def register_cache(self, *args, **kwargs):
"""
Returns:
CacheMetric
"""
return self._register(CacheMetric, *args, **kwargs)
@staticmethod
def collect():
for metric in REGISTRY.collect():
if not metric.name.startswith("__"):
yield metric
def register_memory_metrics(hs):
try:
import psutil
process = psutil.Process()
process.memory_info().rss
except (ImportError, AttributeError):
logger.warn(
"psutil is not installed or incorrect version."
" Disabling memory metrics."
)
return
metric = MemoryUsageMetric(hs, psutil)
all_metrics.append(metric)
@attr.s(hash=True)
class LaterGauge(object):
name = attr.ib()
desc = attr.ib()
labels = attr.ib(hash=False)
caller = attr.ib()
def get_metrics_for(pkg_name):
""" Returns a Metrics instance for conveniently creating metrics
namespaced with the given name prefix. """
def collect(self):
# Convert a "package.name" to "package_name" because Prometheus doesn't
# let us use . in metric names
return Metrics(pkg_name.replace(".", "_"))
g = GaugeMetricFamily(self.name, self.desc, labels=self.labels)
def render_all():
strs = []
for collector in all_collectors:
collector()
for metric in all_metrics:
try:
strs += metric.render()
calls = self.caller()
except Exception:
strs += ["# FAILED to render"]
logger.exception("Failed to render metric")
logger.exception(
"Exception running callback for LaterGuage(%s)",
self.name,
)
yield g
return
strs.append("") # to generate a final CRLF
if isinstance(calls, dict):
for k, v in calls.items():
g.add_metric(k, v)
else:
g.add_metric([], calls)
return "\n".join(strs)
yield g
def __attrs_post_init__(self):
self._register()
def _register(self):
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
REGISTRY.unregister(all_gauges.pop(self.name))
REGISTRY.register(self)
all_gauges[self.name] = self
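LaterGauge evaluates its callback at scrape time; collect() accepts either a bare scalar (no labels) or a dict mapping label-value tuples to values. A minimal sketch, with the metric names and values invented:

    # scalar form: a single unlabelled time series
    LaterGauge("example_queue_size", "", [], lambda: len(queue))

    # dict form: one time series per label tuple
    LaterGauge(
        "example_in_flight", "", ["method"],
        lambda: {("GET",): 3, ("PUT",): 1},
    )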
register_process_collector(get_metrics_for("process"))
#
# Detailed CPU metrics
#
class CPUMetrics(object):
def __init__(self):
ticks_per_sec = 100
try:
# Try and get the system config
ticks_per_sec = os.sysconf('SC_CLK_TCK')
except (ValueError, TypeError, AttributeError):
pass
self.ticks_per_sec = ticks_per_sec
def collect(self):
if not HAVE_PROC_SELF_STAT:
return
with open("/proc/self/stat") as s:
line = s.read()
raw_stats = line.split(") ", 1)[1].split(" ")
user = GaugeMetricFamily("process_cpu_user_seconds_total", "")
user.add_metric([], float(raw_stats[11]) / self.ticks_per_sec)
yield user
sys = GaugeMetricFamily("process_cpu_system_seconds_total", "")
sys.add_metric([], float(raw_stats[12]) / self.ticks_per_sec)
yield sys
python_metrics = get_metrics_for("python")
REGISTRY.register(CPUMetrics())
gc_time = python_metrics.register_distribution("gc_time", labels=["gen"])
gc_unreachable = python_metrics.register_counter("gc_unreachable_total", labels=["gen"])
python_metrics.register_callback(
"gc_counts", lambda: {(i,): v for i, v in enumerate(gc.get_count())}, labels=["gen"]
#
# Python GC metrics
#
gc_unreachable = Gauge("python_gc_unreachable_total", "Unreachable GC objects", ["gen"])
gc_time = Histogram(
"python_gc_time",
"Time taken to GC (sec)",
["gen"],
buckets=[0.0025, 0.005, 0.01, 0.025, 0.05, 0.10, 0.25, 0.50, 1.00, 2.50,
5.00, 7.50, 15.00, 30.00, 45.00, 60.00],
)
reactor_metrics = get_metrics_for("python.twisted.reactor")
tick_time = reactor_metrics.register_distribution("tick_time")
pending_calls_metric = reactor_metrics.register_distribution("pending_calls")
synapse_metrics = get_metrics_for("synapse")
class GCCounts(object):
def collect(self):
cm = GaugeMetricFamily("python_gc_counts", "GC cycle counts", labels=["gen"])
for n, m in enumerate(gc.get_count()):
cm.add_metric([str(n)], m)
yield cm
REGISTRY.register(GCCounts())
#
# Twisted reactor metrics
#
tick_time = Histogram(
"python_twisted_reactor_tick_time",
"Tick time of the Twisted reactor (sec)",
buckets=[0.001, 0.002, 0.005, 0.01, 0.025, 0.05, 0.1, 0.2, 0.5, 1, 2, 5],
)
pending_calls_metric = Histogram(
"python_twisted_reactor_pending_calls",
"Pending calls",
buckets=[1, 2, 5, 10, 25, 50, 100, 250, 500, 1000],
)
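# Note (illustration): a Histogram observation lands in every cumulative
# bucket at or above its value, so tick_time.observe(0.003) increments the
# le=0.005 bucket and all larger ones, plus the _sum and _count series.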
#
# Federation Metrics
#
sent_edus_counter = Counter("synapse_federation_client_sent_edus", "")
sent_transactions_counter = Counter("synapse_federation_client_sent_transactions", "")
events_processed_counter = Counter("synapse_federation_client_events_processed", "")
# Used to track where various components have processed in the event stream,
# e.g. federation sending, appservice sending, etc.
event_processing_positions = synapse_metrics.register_gauge(
"event_processing_positions", labels=["name"],
)
event_processing_positions = Gauge("synapse_event_processing_positions", "", ["name"])
# Used to track the current max events stream position
event_persisted_position = synapse_metrics.register_gauge(
"event_persisted_position",
)
event_persisted_position = Gauge("synapse_event_persisted_position", "")
# Used to track the received_ts of the last event processed by various
# components
event_processing_last_ts = synapse_metrics.register_gauge(
"event_processing_last_ts", labels=["name"],
)
event_processing_last_ts = Gauge("synapse_event_processing_last_ts", "", ["name"])
# Used to track the lag processing events. This is the time difference
# between the last processed event's received_ts and the time it was
# finished being processed.
event_processing_lag = synapse_metrics.register_gauge(
"event_processing_lag", labels=["name"],
)
event_processing_lag = Gauge("synapse_event_processing_lag", "", ["name"])
def runUntilCurrentTimer(func):
@@ -197,17 +210,17 @@ def runUntilCurrentTimer(func):
num_pending += 1
num_pending += len(reactor.threadCallQueue)
start = time.time() * 1000
start = time.time()
ret = func(*args, **kwargs)
end = time.time() * 1000
end = time.time()
# record the amount of wallclock time spent running pending calls.
# This is a proxy for the actual amount of time between reactor polls,
# since about 25% of time is actually spent running things triggered by
# I/O events, but that is harder to capture without rewriting half the
# reactor.
tick_time.inc_by(end - start)
pending_calls_metric.inc_by(num_pending)
tick_time.observe(end - start)
pending_calls_metric.observe(num_pending)
if running_on_pypy:
return ret
@@ -220,12 +233,12 @@ def runUntilCurrentTimer(func):
if threshold[i] < counts[i]:
logger.info("Collecting gc %d", i)
start = time.time() * 1000
start = time.time()
unreachable = gc.collect(i)
end = time.time() * 1000
end = time.time()
gc_time.inc_by(end - start, i)
gc_unreachable.inc_by(unreachable, i)
gc_time.labels(i).observe(end - start)
gc_unreachable.labels(i).set(unreachable)
return ret


@@ -1,328 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from itertools import chain
import logging
import re
logger = logging.getLogger(__name__)
def flatten(items):
"""Flatten a list of lists
Args:
items: iterable[iterable[X]]
Returns:
list[X]: flattened list
"""
return list(chain.from_iterable(items))
class BaseMetric(object):
"""Base class for metrics which report a single value per label set
"""
def __init__(self, name, labels=[], alternative_names=[]):
"""
Args:
name (str): principal name for this metric
labels (list(str)): names of the labels which will be reported
for this metric
alternative_names (iterable(str)): list of alternative names for
this metric. This can be useful to provide a migration path
when renaming metrics.
"""
self._names = [name] + list(alternative_names)
self.labels = labels # OK not to clone as we never write it
def dimension(self):
return len(self.labels)
def is_scalar(self):
return not len(self.labels)
def _render_labelvalue(self, value):
return '"%s"' % (_escape_label_value(value),)
def _render_key(self, values):
if self.is_scalar():
return ""
return "{%s}" % (
",".join(["%s=%s" % (k, self._render_labelvalue(v))
for k, v in zip(self.labels, values)])
)
def _render_for_labels(self, label_values, value):
"""Render this metric for a single set of labels
Args:
label_values (list[object]): values for each of the labels,
(which get stringified).
value: value of the metric at with these labels
Returns:
iterable[str]: rendered metric
"""
rendered_labels = self._render_key(label_values)
return (
"%s%s %.12g" % (name, rendered_labels, value)
for name in self._names
)
def render(self):
"""Render this metric
Each metric is rendered as:
name{label1="val1",label2="val2"} value
https://prometheus.io/docs/instrumenting/exposition_formats/#text-format-details
Returns:
iterable[str]: rendered metrics
"""
raise NotImplementedError()
class CounterMetric(BaseMetric):
"""The simplest kind of metric; one that stores a monotonically-increasing
value that counts events or running totals.
Example use cases for Counters:
- Number of requests processed
- Number of items that were inserted into a queue
- Total amount of data that a system has processed
Counters can only go up (and be reset when the process restarts).
"""
def __init__(self, *args, **kwargs):
super(CounterMetric, self).__init__(*args, **kwargs)
# dict[list[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty tuple).
self.counts = {}
# Scalar metrics are never empty
if self.is_scalar():
self.counts[()] = 0.
def inc_by(self, incr, *values):
if len(values) != self.dimension():
raise ValueError(
"Expected as many values to inc() as labels (%d)" % (self.dimension())
)
# TODO: should assert that the tag values are all strings
if values not in self.counts:
self.counts[values] = incr
else:
self.counts[values] += incr
def inc(self, *values):
self.inc_by(1, *values)
def render(self):
return flatten(
self._render_for_labels(k, self.counts[k])
for k in sorted(self.counts.keys())
)
class GaugeMetric(BaseMetric):
"""A metric that can go up or down
"""
def __init__(self, *args, **kwargs):
super(GaugeMetric, self).__init__(*args, **kwargs)
# dict[list[str]]: value for each set of label values. the keys are the
# label values, in the same order as the labels in self.labels.
#
# (if the metric is a scalar, the (single) key is the empty tuple).
self.gauges = {}
def set(self, v, *values):
if len(values) != self.dimension():
raise ValueError(
"Expected as many values to set() as labels (%d)" % (self.dimension())
)
# TODO: should assert that the tag values are all strings
self.gauges[values] = v
def render(self):
return flatten(
self._render_for_labels(k, self.gauges[k])
for k in sorted(self.gauges.keys())
)
class CallbackMetric(BaseMetric):
"""A metric that returns the numeric value returned by a callback whenever
it is rendered. Typically this is used to implement gauges that yield the
size or other state of some in-memory object by actively querying it."""
def __init__(self, name, callback, labels=[]):
super(CallbackMetric, self).__init__(name, labels=labels)
self.callback = callback
def render(self):
try:
value = self.callback()
except Exception:
logger.exception("Failed to render %s", self.name)
return ["# FAILED to render " + self.name]
if self.is_scalar():
return list(self._render_for_labels([], value))
return flatten(
self._render_for_labels(k, value[k])
for k in sorted(value.keys())
)
class DistributionMetric(object):
"""A combination of an event counter and an accumulator, which counts
both the number of events and accumulates the total value. Typically this
could be used to keep track of method-running times, or other distributions
of values that occur in discrete occurrences.
TODO(paul): Try to export some heatmap-style stats?
"""
def __init__(self, name, *args, **kwargs):
self.counts = CounterMetric(name + ":count", **kwargs)
self.totals = CounterMetric(name + ":total", **kwargs)
def inc_by(self, inc, *values):
self.counts.inc(*values)
self.totals.inc_by(inc, *values)
def render(self):
return self.counts.render() + self.totals.render()
class CacheMetric(object):
__slots__ = (
"name", "cache_name", "hits", "misses", "evicted_size", "size_callback",
)
def __init__(self, name, size_callback, cache_name):
self.name = name
self.cache_name = cache_name
self.hits = 0
self.misses = 0
self.evicted_size = 0
self.size_callback = size_callback
def inc_hits(self):
self.hits += 1
def inc_misses(self):
self.misses += 1
def inc_evictions(self, size=1):
self.evicted_size += size
def render(self):
size = self.size_callback()
hits = self.hits
total = self.misses + self.hits
return [
"""%s:hits{name="%s"} %d""" % (self.name, self.cache_name, hits),
"""%s:total{name="%s"} %d""" % (self.name, self.cache_name, total),
"""%s:size{name="%s"} %d""" % (self.name, self.cache_name, size),
"""%s:evicted_size{name="%s"} %d""" % (
self.name, self.cache_name, self.evicted_size
),
]
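# Worked example (illustrative figures): a CacheMetric("cache", cb, "get_users")
# holding 98 entries, with 123 hits and 27 misses, renders as:
#
#     cache:hits{name="get_users"} 123
#     cache:total{name="get_users"} 150
#     cache:size{name="get_users"} 98
#     cache:evicted_size{name="get_users"} 0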
class MemoryUsageMetric(object):
"""Keeps track of the current memory usage, using psutil.
The class will keep the current min/max/sum/counts of rss over the last
WINDOW_SIZE_SEC, by polling UPDATE_HZ times per second
"""
UPDATE_HZ = 2 # number of times to get memory per second
WINDOW_SIZE_SEC = 30 # the size of the window in seconds
def __init__(self, hs, psutil):
clock = hs.get_clock()
self.memory_snapshots = []
self.process = psutil.Process()
clock.looping_call(self._update_curr_values, 1000 / self.UPDATE_HZ)
def _update_curr_values(self):
max_size = self.UPDATE_HZ * self.WINDOW_SIZE_SEC
self.memory_snapshots.append(self.process.memory_info().rss)
self.memory_snapshots[:] = self.memory_snapshots[-max_size:]
def render(self):
if not self.memory_snapshots:
return []
max_rss = max(self.memory_snapshots)
min_rss = min(self.memory_snapshots)
sum_rss = sum(self.memory_snapshots)
len_rss = len(self.memory_snapshots)
return [
"process_psutil_rss:max %d" % max_rss,
"process_psutil_rss:min %d" % min_rss,
"process_psutil_rss:total %d" % sum_rss,
"process_psutil_rss:count %d" % len_rss,
]
def _escape_character(m):
"""Replaces a single character with its escape sequence.
Args:
m (re.MatchObject): A match object whose first group is the single
character to replace
Returns:
str
"""
c = m.group(1)
if c == "\\":
return "\\\\"
elif c == "\"":
return "\\\""
elif c == "\n":
return "\\n"
return c
def _escape_label_value(value):
"""Takes a label value and escapes quotes, newlines and backslashes
"""
return re.sub(r"([\n\"\\])", _escape_character, str(value))
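# Illustration of the escaping rules above (values as Python literals):
#
#     _escape_label_value('say "hi"\n') == 'say \\"hi\\"\\n'
#     _escape_label_value("C:\\path") == "C:\\\\path"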


@@ -1,122 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
TICKS_PER_SEC = 100
BYTES_PER_PAGE = 4096
HAVE_PROC_STAT = os.path.exists("/proc/stat")
HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
HAVE_PROC_SELF_LIMITS = os.path.exists("/proc/self/limits")
HAVE_PROC_SELF_FD = os.path.exists("/proc/self/fd")
# Field indexes from /proc/self/stat, taken from the proc(5) manpage
STAT_FIELDS = {
"utime": 14,
"stime": 15,
"starttime": 22,
"vsize": 23,
"rss": 24,
}
stats = {}
# In order to report process_start_time_seconds we need to know the
# machine's boot time, because the value in /proc/self/stat is relative to
# this
boot_time = None
if HAVE_PROC_STAT:
with open("/proc/stat") as _procstat:
for line in _procstat:
if line.startswith("btime "):
boot_time = int(line.split()[1])
def update_resource_metrics():
if HAVE_PROC_SELF_STAT:
global stats
with open("/proc/self/stat") as s:
line = s.read()
# line is PID (command) more stats go here ...
raw_stats = line.split(") ", 1)[1].split(" ")
for (name, index) in STAT_FIELDS.iteritems():
# subtract 3 from the index, because proc(5) is 1-based, and
# we've lost the first two fields in PID and COMMAND above
stats[name] = int(raw_stats[index - 3])
def _count_fds():
# Not every OS will have a /proc/self/fd directory
if not HAVE_PROC_SELF_FD:
return 0
return len(os.listdir("/proc/self/fd"))
def register_process_collector(process_metrics):
process_metrics.register_collector(update_resource_metrics)
if HAVE_PROC_SELF_STAT:
process_metrics.register_callback(
"cpu_user_seconds_total",
lambda: float(stats["utime"]) / TICKS_PER_SEC
)
process_metrics.register_callback(
"cpu_system_seconds_total",
lambda: float(stats["stime"]) / TICKS_PER_SEC
)
process_metrics.register_callback(
"cpu_seconds_total",
lambda: (float(stats["utime"] + stats["stime"])) / TICKS_PER_SEC
)
process_metrics.register_callback(
"virtual_memory_bytes",
lambda: int(stats["vsize"])
)
process_metrics.register_callback(
"resident_memory_bytes",
lambda: int(stats["rss"]) * BYTES_PER_PAGE
)
process_metrics.register_callback(
"start_time_seconds",
lambda: boot_time + int(stats["starttime"]) / TICKS_PER_SEC
)
if HAVE_PROC_SELF_FD:
process_metrics.register_callback(
"open_fds",
lambda: _count_fds()
)
if HAVE_PROC_SELF_LIMITS:
def _get_max_fds():
with open("/proc/self/limits") as limits:
for line in limits:
if not line.startswith("Max open files "):
continue
# Line is Max open files $SOFT $HARD
return int(line.split()[3])
return None
process_metrics.register_callback(
"max_fds",
lambda: _get_max_fds()
)
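# Worked example (illustrative numbers): with stats = {"utime": 1234,
# "starttime": 567890} after update_resource_metrics(), the callbacks report
#
#     cpu_user_seconds_total -> 1234 / TICKS_PER_SEC = 12.34 seconds
#     start_time_seconds     -> boot_time + 567890 / TICKS_PER_SEC,
#                               i.e. the process start as seconds since the epoch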


@@ -13,27 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.web.resource import Resource
import synapse.metrics
from prometheus_client.twisted import MetricsResource
METRICS_PREFIX = "/_synapse/metrics"
class MetricsResource(Resource):
isLeaf = True
def __init__(self, hs):
Resource.__init__(self) # Resource is old-style, so no super()
self.hs = hs
def render_GET(self, request):
response = synapse.metrics.render_all()
request.setHeader("Content-Type", "text/plain")
request.setHeader("Content-Length", str(len(response)))
# Encode as UTF-8 (default)
return response.encode()
__all__ = ["MetricsResource", "METRICS_PREFIX"]


@@ -28,22 +28,20 @@ from synapse.util.logcontext import PreserveLoggingContext, run_in_background
from synapse.util.metrics import Measure
from synapse.types import StreamToken
from synapse.visibility import filter_events_for_client
import synapse.metrics
from synapse.metrics import LaterGauge
from collections import namedtuple
from prometheus_client import Counter
import logging
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
notified_events_counter = Counter("synapse_notifier_notified_events", "")
notified_events_counter = metrics.register_counter("notified_events")
users_woken_by_stream_counter = metrics.register_counter(
"users_woken_by_stream", labels=["stream"]
)
users_woken_by_stream_counter = Counter(
"synapse_notifier_users_woken_by_stream", "", ["stream"])
# TODO(paul): Should be shared somewhere
@@ -108,7 +106,7 @@ class _NotifierUserStream(object):
self.last_notified_ms = time_now_ms
noify_deferred = self.notify_deferred
users_woken_by_stream_counter.inc(stream_key)
users_woken_by_stream_counter.labels(stream_key).inc()
with PreserveLoggingContext():
self.notify_deferred = ObservableDeferred(defer.Deferred())
@@ -197,14 +195,14 @@ class Notifier(object):
all_user_streams.add(x)
return sum(stream.count_listeners() for stream in all_user_streams)
metrics.register_callback("listeners", count_listeners)
LaterGauge("synapse_notifier_listeners", "", [], count_listeners)
metrics.register_callback(
"rooms",
LaterGauge(
"synapse_notifier_rooms", "", [],
lambda: count(bool, self.room_to_user_streams.values()),
)
metrics.register_callback(
"users",
LaterGauge(
"synapse_notifier_users", "", [],
lambda: len(self.user_to_user_stream),
)


@@ -39,7 +39,7 @@ def list_with_base_rules(rawrules):
rawrules = [r for r in rawrules if r['priority_class'] >= 0]
# shove the server default rules for each kind onto the end of each
current_prio_class = PRIORITY_CLASS_INVERSE_MAP.keys()[-1]
current_prio_class = list(PRIORITY_CLASS_INVERSE_MAP)[-1]
ruleslist.extend(make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules


@@ -22,35 +22,32 @@ from .push_rule_evaluator import PushRuleEvaluatorForEvent
from synapse.event_auth import get_user_power_level
from synapse.api.constants import EventTypes, Membership
from synapse.metrics import get_metrics_for
from synapse.util.caches import metrics as cache_metrics
from synapse.util.caches import register_cache
from synapse.util.caches.descriptors import cached
from synapse.util.async import Linearizer
from synapse.state import POWER_KEY
from collections import namedtuple
from prometheus_client import Counter
from six import itervalues, iteritems
logger = logging.getLogger(__name__)
rules_by_room = {}
push_metrics = get_metrics_for(__name__)
push_rules_invalidation_counter = push_metrics.register_counter(
"push_rules_invalidation_counter"
)
push_rules_state_size_counter = push_metrics.register_counter(
"push_rules_state_size_counter"
)
push_rules_invalidation_counter = Counter(
"synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter", "")
push_rules_state_size_counter = Counter(
"synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter", "")
# Measures whether we use the fast path of using state deltas, or if we have to
# recalculate from scratch
push_rules_delta_state_cache_metric = cache_metrics.register_cache(
push_rules_delta_state_cache_metric = register_cache(
"cache",
size_callback=lambda: 0, # Meaningless size, as this isn't a cache that stores values
cache_name="push_rules_delta_state_cache_metric",
"push_rules_delta_state_cache_metric",
cache=[], # Meaningless size, as this isn't a cache that stores values
)
@@ -64,10 +61,10 @@ class BulkPushRuleEvaluator(object):
self.store = hs.get_datastore()
self.auth = hs.get_auth()
self.room_push_rule_cache_metrics = cache_metrics.register_cache(
self.room_push_rule_cache_metrics = register_cache(
"cache",
size_callback=lambda: 0, # There's not good value for this
cache_name="room_push_rule_cache",
"room_push_rule_cache",
cache=[], # Meaningless size, as this isn't a cache that stores values
)
@defer.inlineCallbacks
@@ -126,7 +123,7 @@ class BulkPushRuleEvaluator(object):
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {
(e.type, e.state_key): e for e in auth_events.itervalues()
(e.type, e.state_key): e for e in itervalues(auth_events)
}
sender_level = get_user_power_level(event.sender, auth_events)
@@ -160,7 +157,7 @@ class BulkPushRuleEvaluator(object):
condition_cache = {}
for uid, rules in rules_by_user.iteritems():
for uid, rules in iteritems(rules_by_user):
if event.sender == uid:
continue
@@ -309,7 +306,7 @@ class RulesForRoom(object):
current_state_ids = context.current_state_ids
push_rules_delta_state_cache_metric.inc_misses()
push_rules_state_size_counter.inc_by(len(current_state_ids))
push_rules_state_size_counter.inc(len(current_state_ids))
logger.debug(
"Looking for member changes in %r %r", state_group, current_state_ids
@@ -406,7 +403,7 @@ class RulesForRoom(object):
# If the event is a join event then it will be in current state events
# map but not in the DB, so we have to explicitly insert it.
if event.type == EventTypes.Member:
for event_id in member_event_ids.itervalues():
for event_id in itervalues(member_event_ids):
if event_id == event.event_id:
members[event_id] = (event.state_key, event.membership)
@@ -414,7 +411,7 @@ class RulesForRoom(object):
logger.debug("Found members %r: %r", self.room_id, members.values())
interested_in_user_ids = set(
user_id for user_id, membership in members.itervalues()
user_id for user_id, membership in itervalues(members)
if membership == Membership.JOIN
)
@@ -426,7 +423,7 @@ class RulesForRoom(object):
)
user_ids = set(
uid for uid, have_pusher in if_users_with_pushers.iteritems() if have_pusher
uid for uid, have_pusher in iteritems(if_users_with_pushers) if have_pusher
)
logger.debug("With pushers: %r", user_ids)
@@ -447,7 +444,7 @@ class RulesForRoom(object):
)
ret_rules_by_user.update(
item for item in rules_by_user.iteritems() if item[0] is not None
item for item in iteritems(rules_by_user) if item[0] is not None
)
self.update_cache(sequence, members, ret_rules_by_user, state_group)


@@ -20,22 +20,17 @@ from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from . import push_rule_evaluator
from . import push_tools
import synapse
from synapse.push import PusherConfigException
from synapse.util.logcontext import LoggingContext
from synapse.util.metrics import Measure
from prometheus_client import Counter
logger = logging.getLogger(__name__)
metrics = synapse.metrics.get_metrics_for(__name__)
http_push_processed_counter = Counter("synapse_http_httppusher_http_pushes_processed", "")
http_push_processed_counter = metrics.register_counter(
"http_pushes_processed",
)
http_push_failed_counter = metrics.register_counter(
"http_pushes_failed",
)
http_push_failed_counter = Counter("synapse_http_httppusher_http_pushes_failed", "")
class HttpPusher(object):


@@ -229,7 +229,8 @@ class Mailer(object):
if room_vars['notifs'] and 'messages' in room_vars['notifs'][-1]:
prev_messages = room_vars['notifs'][-1]['messages']
for message in notifvars['messages']:
pm = filter(lambda pm: pm['id'] == message['id'], prev_messages)
pm = list(filter(lambda pm: pm['id'] == message['id'],
prev_messages))
if pm:
if not message["is_historical"]:
pm[0]["is_historical"] = False


@@ -113,7 +113,7 @@ def calculate_room_name(store, room_state_ids, user_id, fallback_to_members=True
# so find out who is in the room that isn't the user.
if "m.room.member" in room_state_bytype_ids:
member_events = yield store.get_events(
room_state_bytype_ids["m.room.member"].values()
list(room_state_bytype_ids["m.room.member"].values())
)
all_members = [
ev for ev in member_events.values()


@@ -21,6 +21,8 @@ from synapse.types import UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from six import string_types
logger = logging.getLogger(__name__)
@@ -150,7 +152,7 @@ class PushRuleEvaluatorForEvent(object):
# Caches (glob, word_boundary) -> regex for push. See _glob_matches
regex_cache = LruCache(50000 * CACHE_SIZE_FACTOR)
register_cache("regex_push_cache", regex_cache)
register_cache("cache", "regex_push_cache", regex_cache)
def _glob_matches(glob, value, word_boundary=False):
@@ -238,7 +240,7 @@ def _flatten_dict(d, prefix=[], result=None):
if result is None:
result = {}
for key, value in d.items():
if isinstance(value, basestring):
if isinstance(value, string_types):
result[".".join(prefix + [key])] = value.lower()
elif hasattr(value, "items"):
_flatten_dict(value, prefix=(prefix + [key]), result=result)


@@ -56,6 +56,7 @@ REQUIREMENTS = {
"msgpack-python>=0.3.0": ["msgpack"],
"phonenumbers>=8.2.0": ["phonenumbers"],
"six": ["six"],
"prometheus_client": ["prometheus_client"],
}
CONDITIONAL_REQUIREMENTS = {
"web_client": {


@@ -60,21 +60,21 @@ from .commands import (
)
from .streams import STREAMS_MAP
from synapse.metrics import LaterGauge
from synapse.util.stringutils import random_string
from synapse.metrics.metric import CounterMetric
from prometheus_client import Counter
from collections import defaultdict
from six import iterkeys, iteritems
import logging
import synapse.metrics
import struct
import fcntl
metrics = synapse.metrics.get_metrics_for(__name__)
connection_close_counter = metrics.register_counter(
"close_reason", labels=["reason_type"],
)
connection_close_counter = Counter(
"synapse_replication_tcp_protocol_close_reason", "", ["reason_type"])
# A list of all connected protocols. This allows us to send metrics about the
# connections.
@@ -136,12 +136,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
# The LoopingCall for sending pings.
self._send_ping_loop = None
self.inbound_commands_counter = CounterMetric(
"inbound_commands", labels=["command"],
)
self.outbound_commands_counter = CounterMetric(
"outbound_commands", labels=["command"],
)
self.inbound_commands_counter = defaultdict(int)
self.outbound_commands_counter = defaultdict(int)
def connectionMade(self):
logger.info("[%s] Connection established", self.id())
@@ -201,7 +197,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
self.last_received_command = self.clock.time_msec()
self.inbound_commands_counter.inc(cmd_name)
self.inbound_commands_counter[cmd_name] = (
self.inbound_commands_counter[cmd_name] + 1)
cmd_cls = COMMAND_MAP[cmd_name]
try:
@@ -251,8 +248,8 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
self._queue_command(cmd)
return
self.outbound_commands_counter.inc(cmd.NAME)
self.outbound_commands_counter[cmd.NAME] = (
self.outbound_commands_counter[cmd.NAME] + 1)
string = "%s %s" % (cmd.NAME, cmd.to_line(),)
if "\n" in string:
raise Exception("Unexpected newline in command: %r", string)
@@ -317,9 +314,9 @@ class BaseReplicationStreamProtocol(LineOnlyReceiver):
def connectionLost(self, reason):
logger.info("[%s] Replication connection closed: %r", self.id(), reason)
if isinstance(reason, Failure):
connection_close_counter.inc(reason.type.__name__)
connection_close_counter.labels(reason.type.__name__).inc()
else:
connection_close_counter.inc(reason.__class__.__name__)
connection_close_counter.labels(reason.__class__.__name__).inc()
try:
# Remove us from list of connections to be monitored
@@ -392,7 +389,7 @@ class ServerReplicationStreamProtocol(BaseReplicationStreamProtocol):
if stream_name == "ALL":
# Subscribe to all streams we're publishing to.
for stream in self.streamer.streams_by_name.iterkeys():
for stream in iterkeys(self.streamer.streams_by_name):
self.subscribe_to_stream(stream, token)
else:
self.subscribe_to_stream(stream_name, token)
@@ -498,7 +495,7 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
BaseReplicationStreamProtocol.connectionMade(self)
# Once we've connected subscribe to the necessary streams
for stream_name, token in self.handler.get_streams_to_replicate().iteritems():
for stream_name, token in iteritems(self.handler.get_streams_to_replicate()):
self.replicate(stream_name, token)
# Tell the server if we have any users currently syncing (should only
@@ -518,7 +515,7 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
def on_RDATA(self, cmd):
stream_name = cmd.stream_name
inbound_rdata_count.inc(stream_name)
inbound_rdata_count.labels(stream_name).inc()
try:
row = STREAMS_MAP[stream_name].ROW_TYPE(*cmd.row)
@@ -566,14 +563,12 @@ class ClientReplicationStreamProtocol(BaseReplicationStreamProtocol):
# The following simply registers metrics for the replication connections
metrics.register_callback(
"pending_commands",
pending_commands = LaterGauge(
"pending_commands", "", ["name", "conn_id"],
lambda: {
(p.name, p.conn_id): len(p.pending_commands)
for p in connected_connections
},
labels=["name", "conn_id"],
)
})
def transport_buffer_size(protocol):
@@ -583,14 +578,12 @@ def transport_buffer_size(protocol):
return 0
metrics.register_callback(
"transport_send_buffer",
transport_send_buffer = LaterGauge(
"synapse_replication_tcp_transport_send_buffer", "", ["name", "conn_id"],
lambda: {
(p.name, p.conn_id): transport_buffer_size(p)
for p in connected_connections
},
labels=["name", "conn_id"],
)
})
def transport_kernel_read_buffer_size(protocol, read=True):
@@ -608,48 +601,38 @@ def transport_kernel_read_buffer_size(protocol, read=True):
return 0
metrics.register_callback(
"transport_kernel_send_buffer",
tcp_transport_kernel_send_buffer = LaterGauge(
"synapse_replication_tcp_transport_kernel_send_buffer", "", ["name", "conn_id"],
lambda: {
(p.name, p.conn_id): transport_kernel_read_buffer_size(p, False)
for p in connected_connections
},
labels=["name", "conn_id"],
)
})
metrics.register_callback(
"transport_kernel_read_buffer",
tcp_transport_kernel_read_buffer = LaterGauge(
"synapse_replication_tcp_transport_kernel_read_buffer", "", ["name", "conn_id"],
lambda: {
(p.name, p.conn_id): transport_kernel_read_buffer_size(p, True)
for p in connected_connections
},
labels=["name", "conn_id"],
)
})
metrics.register_callback(
"inbound_commands",
tcp_inbound_commands = LaterGauge(
"synapse_replication_tcp_inbound_commands", "", ["command", "name", "conn_id"],
lambda: {
(k[0], p.name, p.conn_id): count
for p in connected_connections
for k, count in p.inbound_commands_counter.counts.iteritems()
},
labels=["command", "name", "conn_id"],
)
for k, count in iteritems(p.inbound_commands_counter)
})
metrics.register_callback(
"outbound_commands",
tcp_outbound_commands = LaterGauge(
"synapse_replication_tcp_outbound_commands", "", ["command", "name", "conn_id"],
lambda: {
(k[0], p.name, p.conn_id): count
for p in connected_connections
for k, count in p.outbound_commands_counter.counts.iteritems()
},
labels=["command", "name", "conn_id"],
)
for k, count in iteritems(p.outbound_commands_counter)
})
# number of updates received for each RDATA stream
inbound_rdata_count = metrics.register_counter(
"inbound_rdata_count",
labels=["stream_name"],
)
inbound_rdata_count = Counter("synapse_replication_tcp_inbound_rdata_count", "",
["stream_name"])

View File

@@ -22,20 +22,21 @@ from .streams import STREAMS_MAP, FederationStream
from .protocol import ServerReplicationStreamProtocol
from synapse.util.metrics import Measure, measure_func
from synapse.metrics import LaterGauge
import logging
import synapse.metrics
from prometheus_client import Counter
from six import itervalues
metrics = synapse.metrics.get_metrics_for(__name__)
stream_updates_counter = metrics.register_counter(
"stream_updates", labels=["stream_name"]
)
user_sync_counter = metrics.register_counter("user_sync")
federation_ack_counter = metrics.register_counter("federation_ack")
remove_pusher_counter = metrics.register_counter("remove_pusher")
invalidate_cache_counter = metrics.register_counter("invalidate_cache")
user_ip_cache_counter = metrics.register_counter("user_ip_cache")
stream_updates_counter = Counter("synapse_replication_tcp_resource_stream_updates",
"", ["stream_name"])
user_sync_counter = Counter("synapse_replication_tcp_resource_user_sync", "")
federation_ack_counter = Counter("synapse_replication_tcp_resource_federation_ack", "")
remove_pusher_counter = Counter("synapse_replication_tcp_resource_remove_pusher", "")
invalidate_cache_counter = Counter("synapse_replication_tcp_resource_invalidate_cache",
"")
user_ip_cache_counter = Counter("synapse_replication_tcp_resource_user_ip_cache", "")
logger = logging.getLogger(__name__)
@@ -69,33 +70,34 @@ class ReplicationStreamer(object):
self.presence_handler = hs.get_presence_handler()
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self._server_notices_sender = hs.get_server_notices_sender()
# Current connections.
self.connections = []
metrics.register_callback("total_connections", lambda: len(self.connections))
LaterGauge("synapse_replication_tcp_resource_total_connections", "", [],
lambda: len(self.connections))
# List of streams that clients can subscribe to.
# We only support the federation stream if federation sending has been
# disabled on the master.
self.streams = [
stream(hs) for stream in STREAMS_MAP.itervalues()
stream(hs) for stream in itervalues(STREAMS_MAP)
if stream != FederationStream or not hs.config.send_federation
]
self.streams_by_name = {stream.NAME: stream for stream in self.streams}
metrics.register_callback(
"connections_per_stream",
LaterGauge(
"synapse_replication_tcp_resource_connections_per_stream", "",
["stream_name"],
lambda: {
(stream_name,): len([
conn for conn in self.connections
if stream_name in conn.replication_streams
])
for stream_name in self.streams_by_name
},
labels=["stream_name"],
)
})
self.federation_sender = None
if not hs.config.send_federation:
@@ -175,7 +177,7 @@ class ReplicationStreamer(object):
logger.info(
"Streaming: %s -> %s", stream.NAME, updates[-1][0]
)
stream_updates_counter.inc_by(len(updates), stream.NAME)
stream_updates_counter.labels(stream.NAME).inc(len(updates))
# Some streams return multiple rows with the same stream IDs,
# we need to make sure they get sent out in batches. We do
@@ -253,6 +255,7 @@ class ReplicationStreamer(object):
yield self.store.insert_client_ip(
user_id, access_token, ip, user_agent, device_id, last_seen,
)
yield self._server_notices_sender.on_user_ip(user_id)
def send_sync_to_all_connections(self, data):
"""Sends a SYNC command to all clients.


@@ -19,6 +19,7 @@ import logging
from synapse.api.auth import get_access_token_from_request
from synapse.util.async import ObservableDeferred
from synapse.util.logcontext import make_deferred_yieldable, run_in_background
logger = logging.getLogger(__name__)
@@ -80,31 +81,30 @@ class HttpTransactionCache(object):
Returns:
Deferred which resolves to a tuple of (response_code, response_dict).
"""
try:
    return self.transactions[txn_key][0].observe()
except (KeyError, IndexError):
    pass  # execute the function instead.

deferred = fn(*args, **kwargs)
observable = ObservableDeferred(deferred)
self.transactions[txn_key] = (observable, self.clock.time_msec())

# if the request fails with a Twisted failure, remove it
# from the transaction map. This is done to ensure that we don't
# cache transient errors like rate-limiting errors, etc.
def remove_from_map(err):
    self.transactions.pop(txn_key, None)
    return err
deferred.addErrback(remove_from_map)

return observable.observe()

if txn_key in self.transactions:
    observable = self.transactions[txn_key][0]
else:
    # execute the function instead.
    deferred = run_in_background(fn, *args, **kwargs)

    # if the request fails with an exception, remove it
    # from the transaction map. This is done to ensure that we don't
    # cache transient errors like rate-limiting errors, etc.
    def remove_from_map(err):
        self.transactions.pop(txn_key, None)
        # we deliberately do not propagate the error any further, as we
        # expect the observers to have reported it.

    deferred.addErrback(remove_from_map)

    # We don't add any other errbacks to the raw deferred, so we ask
    # ObservableDeferred to swallow the error. This is fine as the error will
    # still be reported to the observers.
    observable = ObservableDeferred(deferred, consumeErrors=True)
    self.transactions[txn_key] = (observable, self.clock.time_msec())

return make_deferred_yieldable(observable.observe())
def _cleanup(self):
now = self.clock.time_msec()
for key in self.transactions.keys():
for key in list(self.transactions):
ts = self.transactions[key][1]
if now > (ts + CLEANUP_PERIOD_MS): # after cleanup period
del self.transactions[key]


@@ -151,10 +151,11 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
if event.room_id != room_id:
raise SynapseError(400, "Event is for wrong room.")
depth = event.depth
token = yield self.store.get_topological_token_for_event(event_id)
logger.info(
"[purge] purging up to depth %i (event_id %s)",
depth, event_id,
"[purge] purging up to token %s (event_id %s)",
token, event_id,
)
elif 'purge_up_to_ts' in body:
ts = body['purge_up_to_ts']
@@ -174,7 +175,9 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
)
)
if room_event_after_stream_ordering:
(_, depth, _) = room_event_after_stream_ordering
token = yield self.store.get_topological_token_for_event(
room_event_after_stream_ordering,
)
else:
logger.warn(
"[purge] purging events not possible: No event found "
@@ -187,9 +190,9 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
errcode=Codes.NOT_FOUND,
)
logger.info(
"[purge] purging up to depth %i (received_ts %i => "
"[purge] purging up to token %d (received_ts %i => "
"stream_ordering %i)",
depth, ts, stream_ordering,
token, ts, stream_ordering,
)
else:
raise SynapseError(
@@ -199,7 +202,7 @@ class PurgeHistoryRestServlet(ClientV1RestServlet):
)
purge_id = yield self.handlers.message_handler.start_purge_history(
room_id, depth,
room_id, token,
delete_local_events=delete_local_events,
)
@@ -273,8 +276,8 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
def __init__(self, hs):
super(ShutdownRoomRestServlet, self).__init__(hs)
self.store = hs.get_datastore()
self.handlers = hs.get_handlers()
self.state = hs.get_state_handler()
self._room_creation_handler = hs.get_room_creation_handler()
self.event_creation_handler = hs.get_event_creation_handler()
self.room_member_handler = hs.get_room_member_handler()
@@ -296,7 +299,7 @@ class ShutdownRoomRestServlet(ClientV1RestServlet):
message = content.get("message", self.DEFAULT_MESSAGE)
room_name = content.get("room_name", "Content Violation Notification")
info = yield self.handlers.room_creation_handler.create_room(
info = yield self._room_creation_handler.create_room(
room_creator_requester,
config={
"preset": "public_chat",


@@ -23,6 +23,8 @@ from synapse.handlers.presence import format_user_presence_state
from synapse.http.servlet import parse_json_object_from_request
from .base import ClientV1RestServlet, client_path_patterns
from six import string_types
import logging
logger = logging.getLogger(__name__)
@@ -71,7 +73,7 @@ class PresenceStatusRestServlet(ClientV1RestServlet):
if "status_msg" in content:
state["status_msg"] = content.pop("status_msg")
if not isinstance(state["status_msg"], basestring):
if not isinstance(state["status_msg"], string_types):
raise SynapseError(400, "status_msg must be a string.")
if content:
@@ -129,7 +131,7 @@ class PresenceListRestServlet(ClientV1RestServlet):
if "invite" in content:
for u in content["invite"]:
if not isinstance(u, basestring):
if not isinstance(u, string_types):
raise SynapseError(400, "Bad invite value.")
if len(u) == 0:
continue
@@ -140,7 +142,7 @@ class PresenceListRestServlet(ClientV1RestServlet):
if "drop" in content:
for u in content["drop"]:
if not isinstance(u, basestring):
if not isinstance(u, string_types):
raise SynapseError(400, "Bad drop value.")
if len(u) == 0:
continue


@@ -41,7 +41,7 @@ class RoomCreateRestServlet(ClientV1RestServlet):
def __init__(self, hs):
super(RoomCreateRestServlet, self).__init__(hs)
self.handlers = hs.get_handlers()
self._room_creation_handler = hs.get_room_creation_handler()
def register(self, http_server):
PATTERNS = "/createRoom"
@@ -64,8 +64,7 @@ class RoomCreateRestServlet(ClientV1RestServlet):
def on_POST(self, request):
requester = yield self.auth.get_user_by_req(request)
handler = self.handlers.room_creation_handler
info = yield handler.create_room(
info = yield self._room_creation_handler.create_room(
requester, self.get_room_config(request)
)


@@ -85,6 +85,7 @@ class SyncRestServlet(RestServlet):
self.clock = hs.get_clock()
self.filtering = hs.get_filtering()
self.presence_handler = hs.get_presence_handler()
self._server_notices_sender = hs.get_server_notices_sender()
@defer.inlineCallbacks
def on_GET(self, request):
@@ -149,6 +150,9 @@ class SyncRestServlet(RestServlet):
else:
since_token = None
# send any outstanding server notices to the user.
yield self._server_notices_sender.on_user_syncing(user.to_string())
affect_presence = set_presence != PresenceState.OFFLINE
if affect_presence:


@@ -0,0 +1,222 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from hashlib import sha256
import hmac
import logging
from os import path
from six.moves import http_client
import jinja2
from jinja2 import TemplateNotFound
from twisted.internet import defer
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET
from synapse.api.errors import NotFoundError, SynapseError, StoreError
from synapse.config import ConfigError
from synapse.http.server import (
finish_request,
wrap_html_request_handler,
)
from synapse.http.servlet import parse_string
from synapse.types import UserID
# language to use for the templates. TODO: figure this out from Accept-Language
TEMPLATE_LANGUAGE = "en"
logger = logging.getLogger(__name__)
# use hmac.compare_digest if we have it (python 2.7.7), else just use equality
if hasattr(hmac, "compare_digest"):
compare_digest = hmac.compare_digest
else:
def compare_digest(a, b):
return a == b
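# (Note: hmac.compare_digest runs in constant time regardless of where the
# two digests first differ, so the HMAC check in _check_hash below does not
# leak timing information; the plain == fallback for Python < 2.7.7 offers
# no such guarantee.)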
class ConsentResource(Resource):
"""A twisted Resource to display a privacy policy and gather consent to it
When accessed via GET, returns the privacy policy via a template.
When accessed via POST, records the user's consent in the database and
displays a success page.
The config should include a template_dir setting which contains templates
for the HTML. The directory should contain one subdirectory per language
(eg, 'en', 'fr'), and each language directory should contain the policy
document (named as '<version>.html') and a success page (success.html).
Both forms take a set of parameters from the browser. For the POST form,
these are normally sent as form parameters (but may be query-params); for
GET requests they must be query params. These are:
u: the complete mxid, or the localpart of the user giving their
consent. Required for both GET (where it is used as an input to the
template) and for POST (where it is used to find the row in the db
to update).
h: hmac_sha256(secret, u), where 'secret' is the privacy_secret in the
config file. If it doesn't match, the request is 403ed.
v: the version of the privacy policy being agreed to.
For GET: optional, and defaults to whatever was set in the config
file. Used to choose the version of the policy to pick from the
templates directory.
For POST: required; gives the value to be recorded in the database
against the user.
"""
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): homeserver
"""
Resource.__init__(self)
self.hs = hs
self.store = hs.get_datastore()
# this is required by the request_handler wrapper
self.clock = hs.get_clock()
self._default_consent_version = hs.config.user_consent_version
if self._default_consent_version is None:
raise ConfigError(
"Consent resource is enabled but user_consent section is "
"missing in config file.",
)
# daemonize changes the cwd to /, so make the path absolute now.
consent_template_directory = path.abspath(
hs.config.user_consent_template_dir,
)
if not path.isdir(consent_template_directory):
raise ConfigError(
"Could not find template directory '%s'" % (
consent_template_directory,
),
)
loader = jinja2.FileSystemLoader(consent_template_directory)
self._jinja_env = jinja2.Environment(
loader=loader,
autoescape=jinja2.select_autoescape(['html', 'htm', 'xml']),
)
if hs.config.form_secret is None:
raise ConfigError(
"Consent resource is enabled but form_secret is not set in "
"config file. It should be set to an arbitrary secret string.",
)
self._hmac_secret = hs.config.form_secret.encode("utf-8")
def render_GET(self, request):
self._async_render_GET(request)
return NOT_DONE_YET
@wrap_html_request_handler
@defer.inlineCallbacks
def _async_render_GET(self, request):
"""
Args:
request (twisted.web.http.Request):
"""
version = parse_string(request, "v",
default=self._default_consent_version)
username = parse_string(request, "u", required=True)
userhmac = parse_string(request, "h", required=True)
self._check_hash(username, userhmac)
if username.startswith('@'):
qualified_user_id = username
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
u = yield self.store.get_user_by_id(qualified_user_id)
if u is None:
raise NotFoundError("Unknown user")
try:
self._render_template(
request, "%s.html" % (version,),
user=username, userhmac=userhmac, version=version,
has_consented=(u["consent_version"] == version),
)
except TemplateNotFound:
raise NotFoundError("Unknown policy version")
def render_POST(self, request):
self._async_render_POST(request)
return NOT_DONE_YET
@wrap_html_request_handler
@defer.inlineCallbacks
def _async_render_POST(self, request):
"""
Args:
request (twisted.web.http.Request):
"""
version = parse_string(request, "v", required=True)
username = parse_string(request, "u", required=True)
userhmac = parse_string(request, "h", required=True)
self._check_hash(username, userhmac)
if username.startswith('@'):
qualified_user_id = username
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
try:
yield self.store.user_set_consent_version(qualified_user_id, version)
except StoreError as e:
if e.code != 404:
raise
raise NotFoundError("Unknown user")
try:
self._render_template(request, "success.html")
except TemplateNotFound:
raise NotFoundError("success.html not found")
def _render_template(self, request, template_name, **template_args):
# get_template checks for ".." so we don't need to worry too much
# about path traversal here.
template_html = self._jinja_env.get_template(
path.join(TEMPLATE_LANGUAGE, template_name)
)
html_bytes = template_html.render(**template_args).encode("utf8")
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%i" % len(html_bytes))
request.write(html_bytes)
finish_request(request)
def _check_hash(self, userid, userhmac):
want_mac = hmac.new(
key=self._hmac_secret,
msg=userid,
digestmod=sha256,
).hexdigest()
if not compare_digest(want_mac, userhmac):
raise SynapseError(http_client.FORBIDDEN, "HMAC incorrect")


@@ -48,6 +48,7 @@ import shutil
import cgi
import logging
from six.moves.urllib import parse as urlparse
from six import iteritems
logger = logging.getLogger(__name__)
@@ -603,7 +604,7 @@ class MediaRepository(object):
thumbnails[(t_width, t_height, r_type)] = r_method
# Now we generate the thumbnails for each dimension, store it
for (t_width, t_height, t_type), t_method in thumbnails.iteritems():
for (t_width, t_height, t_type), t_method in iteritems(thumbnails):
# Generate the thumbnail
if t_method == "crop":
t_byte_source = yield make_deferred_yieldable(threads.deferToThread(


@@ -24,7 +24,9 @@ import shutil
import sys
import traceback
import simplejson as json
import urlparse
from six.moves import urllib_parse as urlparse
from six import string_types
from twisted.web.server import NOT_DONE_YET
from twisted.internet import defer
@@ -590,8 +592,8 @@ def _iterate_over_text(tree, *tags_to_ignore):
# to be returned.
elements = iter([tree])
while True:
el = elements.next()
if isinstance(el, basestring):
el = next(elements)
if isinstance(el, string_types):
yield el
elif el is not None and el.tag not in tags_to_ignore:
# el.text is the text before the first child, so we can immediately


@@ -46,6 +46,7 @@ from synapse.handlers.devicemessage import DeviceMessageHandler
from synapse.handlers.device import DeviceHandler
from synapse.handlers.e2e_keys import E2eKeysHandler
from synapse.handlers.presence import PresenceHandler
from synapse.handlers.room import RoomCreationHandler
from synapse.handlers.room_list import RoomListHandler
from synapse.handlers.room_member import RoomMemberMasterHandler
from synapse.handlers.room_member_worker import RoomMemberWorkerHandler
@@ -71,6 +72,11 @@ from synapse.rest.media.v1.media_repository import (
MediaRepository,
MediaRepositoryResource,
)
from synapse.server_notices.server_notices_manager import ServerNoticesManager
from synapse.server_notices.server_notices_sender import ServerNoticesSender
from synapse.server_notices.worker_server_notices_sender import (
WorkerServerNoticesSender,
)
from synapse.state import StateHandler, StateResolutionHandler
from synapse.storage import DataStore
from synapse.streams.events import EventSources
@@ -97,6 +103,9 @@ class HomeServer(object):
which must be implemented by the subclass. This code may call any of the
required "get" methods on the instance to obtain the sub-dependencies that
one requires.
Attributes:
config (synapse.config.homeserver.HomeserverConfig):
"""
DEPENDENCIES = [
@@ -106,6 +115,7 @@ class HomeServer(object):
'federation_server',
'handlers',
'auth',
'room_creation_handler',
'state_handler',
'state_resolution_handler',
'presence_handler',
@@ -151,6 +161,8 @@ class HomeServer(object):
'spam_checker',
'room_member_handler',
'federation_registry',
'server_notices_manager',
'server_notices_sender',
]
def __init__(self, hostname, **kwargs):
@@ -224,6 +236,9 @@ class HomeServer(object):
def build_simple_http_client(self):
return SimpleHttpClient(self)
def build_room_creation_handler(self):
return RoomCreationHandler(self)
def build_state_handler(self):
return StateHandler(self)
@@ -390,6 +405,16 @@ class HomeServer(object):
def build_federation_registry(self):
return FederationHandlerRegistry()
def build_server_notices_manager(self):
if self.config.worker_app:
raise Exception("Workers cannot send server notices")
return ServerNoticesManager(self)
def build_server_notices_sender(self):
if self.config.worker_app:
return WorkerServerNoticesSender(self)
return ServerNoticesSender(self)
def remove_pusher(self, app_id, push_key, user_id):
return self.get_pusherpool().remove_pusher(app_id, push_key, user_id)


@@ -1,4 +1,5 @@
import synapse.api.auth
import synapse.config.homeserver
import synapse.federation.transaction_queue
import synapse.federation.transport.client
import synapse.handlers
@@ -8,11 +9,17 @@ import synapse.handlers.device
import synapse.handlers.e2e_keys
import synapse.handlers.set_password
import synapse.rest.media.v1.media_repository
import synapse.server_notices.server_notices_manager
import synapse.server_notices.server_notices_sender
import synapse.state
import synapse.storage
class HomeServer(object):
@property
def config(self) -> synapse.config.homeserver.HomeServerConfig:
pass
def get_auth(self) -> synapse.api.auth.Auth:
pass
@@ -40,6 +47,12 @@ class HomeServer(object):
def get_deactivate_account_handler(self) -> synapse.handlers.deactivate_account.DeactivateAccountHandler:
pass
def get_room_creation_handler(self) -> synapse.handlers.room.RoomCreationHandler:
pass
def get_event_creation_handler(self) -> synapse.handlers.message.EventCreationHandler:
pass
def get_set_password_handler(self) -> synapse.handlers.set_password.SetPasswordHandler:
pass
@@ -54,3 +67,9 @@ class HomeServer(object):
def get_media_repository(self) -> synapse.rest.media.v1.media_repository.MediaRepository:
pass
def get_server_notices_manager(self) -> synapse.server_notices.server_notices_manager.ServerNoticesManager:
pass
def get_server_notices_sender(self) -> synapse.server_notices.server_notices_sender.ServerNoticesSender:
pass


@@ -0,0 +1,137 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from six import (iteritems, string_types)
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.api.urls import ConsentURIBuilder
from synapse.config import ConfigError
from synapse.types import get_localpart_from_id
logger = logging.getLogger(__name__)
class ConsentServerNotices(object):
"""Keeps track of whether we need to send users server_notices about
privacy policy consent, and sends one if we do.
"""
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer):
"""
self._server_notices_manager = hs.get_server_notices_manager()
self._store = hs.get_datastore()
self._users_in_progress = set()
self._current_consent_version = hs.config.user_consent_version
self._server_notice_content = hs.config.user_consent_server_notice_content
self._send_to_guests = hs.config.user_consent_server_notice_to_guests
if self._server_notice_content is not None:
if not self._server_notices_manager.is_enabled():
raise ConfigError(
"user_consent configuration requires server notices, but "
"server notices are not enabled.",
)
if 'body' not in self._server_notice_content:
raise ConfigError(
"user_consent server_notice_consent must contain a 'body' "
"key.",
)
self._consent_uri_builder = ConsentURIBuilder(hs.config)
@defer.inlineCallbacks
def maybe_send_server_notice_to_user(self, user_id):
"""Check if we need to send a notice to this user, and does so if so
Args:
user_id (str): user to check
Returns:
Deferred
"""
if self._server_notice_content is None:
# not enabled
return
# make sure we don't send two messages to the same user at once
if user_id in self._users_in_progress:
return
self._users_in_progress.add(user_id)
try:
u = yield self._store.get_user_by_id(user_id)
if u["is_guest"] and not self._send_to_guests:
# don't send to guests
return
if u["consent_version"] == self._current_consent_version:
# user has already consented
return
if u["consent_server_notice_sent"] == self._current_consent_version:
# we've already sent a notice to the user
return
# need to send a message.
try:
consent_uri = self._consent_uri_builder.build_user_consent_uri(
get_localpart_from_id(user_id),
)
content = copy_with_str_subst(
self._server_notice_content, {
'consent_uri': consent_uri,
},
)
yield self._server_notices_manager.send_notice(
user_id, content,
)
yield self._store.user_set_consent_server_notice_sent(
user_id, self._current_consent_version,
)
except SynapseError as e:
logger.error("Error sending server notice about user consent: %s", e)
finally:
self._users_in_progress.remove(user_id)
def copy_with_str_subst(x, substitutions):
"""Deep-copy a structure, carrying out string substitions on any strings
Args:
x (object): structure to be copied
substitutions (object): substitutions to be made - passed into the
string '%' operator
Returns:
copy of x
"""
if isinstance(x, string_types):
return x % substitutions
if isinstance(x, dict):
return {
k: copy_with_str_subst(v, substitutions) for (k, v) in iteritems(x)
}
if isinstance(x, (list, tuple)):
return [copy_with_str_subst(y, substitutions) for y in x]
# assume it's uninteresting and can be shallow-copied.
return x
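# Illustration, mirroring how this is used above with the consent URI:
#
#     copy_with_str_subst(
#         {"msgtype": "m.text", "body": "Please review %(consent_uri)s"},
#         {"consent_uri": "https://example.com/consent"},
#     )
#     # => {"msgtype": "m.text", "body": "Please review https://example.com/consent"}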


@@ -0,0 +1,146 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from twisted.internet import defer

from synapse.api.constants import EventTypes, Membership, RoomCreationPreset
from synapse.types import create_requester
from synapse.util.caches.descriptors import cachedInlineCallbacks

logger = logging.getLogger(__name__)


class ServerNoticesManager(object):
    def __init__(self, hs):
        """
        Args:
            hs (synapse.server.HomeServer):
        """
        self._store = hs.get_datastore()
        self._config = hs.config
        self._room_creation_handler = hs.get_room_creation_handler()
        self._event_creation_handler = hs.get_event_creation_handler()
        self._is_mine_id = hs.is_mine_id

    def is_enabled(self):
        """Checks if server notices are enabled on this server.

        Returns:
            bool
        """
        return self._config.server_notices_mxid is not None

    @defer.inlineCallbacks
    def send_notice(self, user_id, event_content):
        """Send a notice to the given user

        Creates the server notices room, if none exists.

        Args:
            user_id (str): mxid of user to send event to.
            event_content (dict): content of event to send

        Returns:
            Deferred[None]
        """
        room_id = yield self.get_notice_room_for_user(user_id)

        system_mxid = self._config.server_notices_mxid
        requester = create_requester(system_mxid)

        logger.info("Sending server notice to %s", user_id)

        yield self._event_creation_handler.create_and_send_nonmember_event(
            requester, {
                "type": EventTypes.Message,
                "room_id": room_id,
                "sender": system_mxid,
                "content": event_content,
            },
            ratelimit=False,
        )

    @cachedInlineCallbacks()
    def get_notice_room_for_user(self, user_id):
        """Get the room for notices for a given user

        If we have not yet created a notice room for this user, create it.

        Args:
            user_id (str): complete user id for the user we want a room for

        Returns:
            str: room id of notice room.
        """
        if not self.is_enabled():
            raise Exception("Server notices not enabled")

        assert self._is_mine_id(user_id), \
            "Cannot send server notices to remote users"

        rooms = yield self._store.get_rooms_for_user_where_membership_is(
            user_id, [Membership.INVITE, Membership.JOIN],
        )
        system_mxid = self._config.server_notices_mxid
        for room in rooms:
            # it's worth noting that there is an asymmetry here in that we
            # expect the user to be invited or joined, but the system user must
            # be joined. This is kinda deliberate, in that if somebody somehow
            # manages to invite the system user to a room, that doesn't make it
            # the server notices room.
            user_ids = yield self._store.get_users_in_room(room.room_id)
            if system_mxid in user_ids:
                # we found a room which our user shares with the system notice
                # user
                logger.info("Using room %s", room.room_id)
                defer.returnValue(room.room_id)

        # apparently no existing notice room: create a new one
        logger.info("Creating server notices room for %s", user_id)

        # see if we want to override the profile info for the server user.
        # note that if we want to override either the display name or the
        # avatar, we have to use both.
        join_profile = None
        if (
            self._config.server_notices_mxid_display_name is not None or
            self._config.server_notices_mxid_avatar_url is not None
        ):
            join_profile = {
                "displayname": self._config.server_notices_mxid_display_name,
                "avatar_url": self._config.server_notices_mxid_avatar_url,
            }

        requester = create_requester(system_mxid)
        info = yield self._room_creation_handler.create_room(
            requester,
            config={
                "preset": RoomCreationPreset.PRIVATE_CHAT,
                "name": self._config.server_notices_room_name,
                "power_level_content_override": {
                    "users_default": -10,
                },
                "invite": (user_id,)
            },
            ratelimit=False,
            creator_join_profile=join_profile,
        )
        room_id = info['room_id']

        logger.info("Created server notices room %s for %s", room_id, user_id)
        defer.returnValue(room_id)
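
To put the API in context, here is a hedged sketch of a call site (not part of this changeset); it assumes hs is the synapse.server.HomeServer and that the caller is already running inside the reactor. send_notice lazily creates the per-user notices room on first use via get_notice_room_for_user:

from twisted.internet import defer

@defer.inlineCallbacks
def send_maintenance_notice(hs, user_id):
    # hypothetical helper, for illustration only
    manager = hs.get_server_notices_manager()
    if not manager.is_enabled():
        # no server_notices_mxid configured for this homeserver
        return
    yield manager.send_notice(
        user_id,
        {"msgtype": "m.text", "body": "Scheduled maintenance at 02:00 UTC"},
    )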


@@ -0,0 +1,58 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from synapse.server_notices.consent_server_notices import ConsentServerNotices


class ServerNoticesSender(object):
    """A centralised place which sends server notices automatically when
    certain events take place
    """
    def __init__(self, hs):
        """
        Args:
            hs (synapse.server.HomeServer):
        """
        # todo: it would be nice to make this more dynamic
        self._consent_server_notices = ConsentServerNotices(hs)

    def on_user_syncing(self, user_id):
        """Called when the user performs a sync operation.

        Args:
            user_id (str): mxid of user who synced

        Returns:
            Deferred
        """
        return self._consent_server_notices.maybe_send_server_notice_to_user(
            user_id,
        )

    def on_user_ip(self, user_id):
        """Called on the master when a worker process saw a client request.

        Args:
            user_id (str): mxid

        Returns:
            Deferred
        """
        # The synchrotrons use a stubbed version of ServerNoticesSender, so
        # we check for notices to send to the user in on_user_ip as well as
        # in on_user_syncing
        return self._consent_server_notices.maybe_send_server_notice_to_user(
            user_id,
        )
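
The split between this class and the worker stub below is resolved when the HomeServer object is built; as a hedged illustration (the selection logic itself is not part of this diff), the choice amounts to:

def build_server_notices_sender(hs, is_worker):
    # illustrative only: mirrors the master/worker split, not synapse's
    # actual builder code
    if is_worker:
        return WorkerServerNoticesSender(hs)  # stub, defined below
    return ServerNoticesSender(hs)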


@@ -0,0 +1,46 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from twisted.internet import defer


class WorkerServerNoticesSender(object):
    """Stub impl of ServerNoticesSender which does nothing"""
    def __init__(self, hs):
        """
        Args:
            hs (synapse.server.HomeServer):
        """

    def on_user_syncing(self, user_id):
        """Called when the user performs a sync operation.

        Args:
            user_id (str): mxid of user who synced

        Returns:
            Deferred
        """
        return defer.succeed(None)

    def on_user_ip(self, user_id):
        """Called on the master when a worker process saw a client request.

        Args:
            user_id (str): mxid

        Returns:
            Deferred
        """
        raise AssertionError("on_user_ip unexpectedly called on worker")
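
Finally, a brief hedged check of the stub's contract (again, not from the changeset; it assumes the class is importable as shown): syncing resolves immediately to None, while on_user_ip must never be reached on a worker, since client IPs are replicated to the master, which runs the real ServerNoticesSender.

from synapse.server_notices.worker_server_notices_sender import (
    WorkerServerNoticesSender,
)

sender = WorkerServerNoticesSender(hs=None)  # hs is unused by the stub

d = sender.on_user_syncing("@alice:example.com")
assert d.called  # defer.succeed() fires synchronously

try:
    sender.on_user_ip("@alice:example.com")
except AssertionError:
    pass  # expected: workers report IPs to the master instead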
