
Compare commits


124 Commits

Author SHA1 Message Date
Matthew Hodgson a078de955f Merge branch 'develop' into matthew/fix_overzealous_ll_state 2018-09-18 16:50:39 +01:00
Richard van der Hoff 286d6930b7 Merge pull request #3879 from matrix-org/matthew/fix-autojoin
don't ratelimit autojoins
2018-09-18 13:07:01 +01:00
Richard van der Hoff 892432c818 Merge pull request #3882 from SimmyD/max_upload_docker_var
Add variable for changing the max upload size in Docker container
2018-09-18 13:05:09 +01:00
Richard van der Hoff 1e09a1d48a Merge pull request #3889 from matrix-org/rav/404_on_remove_unknown_alias
Return a 404 when deleting unknown room alias
2018-09-18 12:59:30 +01:00
Richard van der Hoff ad95ec12ca Merge pull request #3895 from matrix-org/rav/decode_bytes_in_metrics
Fix more b'abcd' noise in metrics
2018-09-18 12:58:49 +01:00
Richard van der Hoff 1f4296efcf Merge pull request #3897 from aaronraimist/synaspse-typo
Fix typo in README, synaspse -> synapse
2018-09-18 11:37:09 +01:00
Aaron Raimist 505abb38f0 Add changelog
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:07:19 -05:00
Aaron Raimist 4ecdf73fc7 Fix typo in README, synaspse -> synapse
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2018-09-17 22:05:23 -05:00
Richard van der Hoff 06f2dbbb5d changelog 2018-09-17 17:18:26 +01:00
Richard van der Hoff ac80cb08fe Fix more b'abcd' noise in metrics 2018-09-17 17:16:50 +01:00
Richard van der Hoff c7131baefc Merge pull request #3892 from matrix-org/rav/decode_bytes_in_request_logs
Fix some b'abcd' noise in logs and metrics
2018-09-17 17:07:10 +01:00
Richard van der Hoff f75b9961c6 Reinstate missing null check 2018-09-17 16:52:02 +01:00
Richard van der Hoff 0cb7afff35 changelog 2018-09-17 16:17:25 +01:00
Richard van der Hoff f00a9d2636 Fix some b'abcd' noise in logs and metrics
Python 3 compatibility: make sure that we decode some byte sequences before we
use them to create log lines and metrics labels.
2018-09-17 16:15:42 +01:00
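For illustration, the class of fix involved here, as a minimal Python sketch (the helper name is hypothetical, not Synapse's actual code): decode byte strings before they reach log lines or metric labels, so they render as GET rather than b'GET'.

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def to_native_str(value):
    # On Python 3, HTTP methods and URIs often arrive as bytes; decode them
    # before interpolating into log lines or metric labels.
    return value.decode("ascii") if isinstance(value, bytes) else value

method, uri = b"GET", b"/_matrix/key/v2/server"
logger.info("Sending request %s %s", to_native_str(method), to_native_str(uri))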
Richard van der Hoff c9c50284d7 README: run python_dependencies with -m
... to stop things which try to import `types` getting `synapse.types` instead
2018-09-17 15:58:47 +01:00
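For illustration, why -m matters here: running a file by path puts that file's directory at the front of sys.path, so from inside the synapse/ package a stdlib name like types can be shadowed by synapse/types.py, whereas python -m synapse.python_dependencies run from the repo root keeps the package directory off the path. A toy check (assumed layout, not part of the commit):

import importlib.util
import sys

# Script mode: sys.path[0] is the script's directory.
# Module mode (-m): sys.path[0] is the current working directory instead.
print(sys.path[0])

# Shows which module the name "types" would actually resolve to; with a
# package directory at the front of sys.path this can be synapse/types.py.
print(importlib.util.find_spec("types").origin)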
Richard van der Hoff c1ae6b1bce changelog 2018-09-17 13:21:08 +01:00
Richard van der Hoff 85a43f4167 Return a 404 when deleting unknown room alias
As per https://github.com/matrix-org/matrix-doc/issues/1675

Fixes https://github.com/matrix-org/synapse/issues/2782
2018-09-17 13:19:00 +01:00
Amber Brown c6363f7269 Merge pull request #3888 from matrix-org/rav/nuke_nuke_rooms
Remove nuke-room-from-db.sh script
2018-09-17 21:30:25 +10:00
Richard van der Hoff 2a8996b67d changelog 2018-09-17 11:51:38 +01:00
Richard van der Hoff e7b3b4d8c2 Remove nuke-room-from-db.sh script
The problem with this script is that it is largely untested, entirely
unmaintained, and running it is likely to make your synapse blow up in
exciting ways.

For example, it leaves a bunch of tables with dead values in them, like
event_to_state_groups.

Having it here sends a message that it is a supported part of
synapse, which is absolutely not the case.
2018-09-17 11:42:42 +01:00
Simon Dwyer 0e46ff6904 Adding the ability to change MAX_UPLOAD_SIZE via the Docker container's environment variables.
Signed-off-by: Simon Dwyer <simon@thedwyers.co>
2018-09-16 13:33:33 +10:00
Simon Dwyer da864a92c9 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer f472abd792 Added description for "SYNAPSE_MAX_UPLOAD_SIZE" variable. 2018-09-16 13:12:57 +10:00
Simon Dwyer 9c749a6b61 Added 'MAX_UPLOAD_SIZE' variable and set default to "10M" 2018-09-16 13:12:57 +10:00
Matthew Hodgson c71b93f2a4 changelog 2018-09-15 22:28:28 +01:00
Matthew Hodgson d42d79e3c3 don't ratelimit autojoins 2018-09-15 22:27:41 +01:00
Matthew Hodgson 9d13ff4da8 missing changelog 2018-09-15 21:04:10 +01:00
Vincent Breitmoser c8642720c9 mention libjemalloc in readme (#3877) 2018-09-15 21:03:27 +01:00
Erik Johnston 24efb2a70d Fix timeout function
Turns out deferred.cancel sometimes throws, so we do that last to ensure
that we always do resolve the new deferred.
2018-09-15 11:38:39 +01:00
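For illustration, a minimal sketch of the pattern described; the names and structure here are assumptions, not Synapse's actual helper. Since Deferred.cancel() can itself raise, the replacement deferred is resolved first and the cancel happens last, inside a try/except.

from twisted.internet import defer

def timeout_deferred(deferred, reactor, timeout):
    new_d = defer.Deferred()

    def time_it_out():
        # Resolve the new deferred first...
        if not new_d.called:
            new_d.errback(defer.TimeoutError("timed out after %ss" % timeout))
        # ...and cancel the original last, since cancel() sometimes throws.
        try:
            deferred.cancel()
        except Exception:
            pass

    delayed_call = reactor.callLater(timeout, time_it_out)

    def done(res):
        # The underlying deferred completed: stop the timer and pass the
        # result through, unless we already timed out.
        if delayed_call.active():
            delayed_call.cancel()
        if not new_d.called:
            new_d.callback(res)
        return None

    deferred.addBoth(done)
    return new_d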
Erik Johnston c30cfff572 Merge pull request #3875 from matrix-org/erikj/extra_timeouts
Add an awful secondary timeout to fix wedged requests
2018-09-14 19:56:11 +01:00
Erik Johnston 335b23a078 Newsfile 2018-09-14 19:25:58 +01:00
Erik Johnston fcfe7a850d Add an awful secondary timeout to fix wedged requests
This is an attempt to mitigate #3842 by adding yet-another-timeout
2018-09-14 19:23:07 +01:00
Matthew Hodgson 024be6cf18 don't filter membership events based on history visibility (#3874)
don't filter membership events based on history visibility
as we will already have filtered the messages in the timeline, and state events
are always visible.

and because @erikjohnston said so.
2018-09-14 18:12:52 +01:00
Erik Johnston 3e6e94fe9f Merge pull request #3872 from matrix-org/hawkowl/timeouts-2
timeouts 2: electric boogaloo
2018-09-14 16:58:44 +01:00
Erik Johnston 8b3652831c Merge pull request #3871 from matrix-org/erikj/in_flight_block_metrics
Add in flight real time metrics for Measure blocks
2018-09-14 16:40:31 +01:00
Amber Brown bc9af88a2d fix 2018-09-15 00:26:00 +10:00
Amber Brown 90f8e606e2 changelog 2018-09-15 00:23:55 +10:00
Erik Johnston d0f6c1ce21 Remove spurious comment 2018-09-14 15:12:36 +01:00
Erik Johnston 9e2f9a7b57 Measure outbound requests 2018-09-14 15:11:26 +01:00
Erik Johnston 941ac0f085 Newsfile 2018-09-14 15:08:37 +01:00
Erik Johnston f6e82dcddb Tests 2018-09-14 15:08:37 +01:00
Erik Johnston 0a81038ea0 Add in flight real time metrics for Measure blocks 2018-09-14 15:08:37 +01:00
Travis Ralston ad9198cc34 Merge pull request #3860 from matrix-org/travis/typo-1
Fix minor typo in exception
2018-09-13 16:32:48 -06:00
Travis Ralston 984db8bb08 Create 3860.misc 2018-09-13 12:06:04 -06:00
Amber Brown c971aa7b9d fix 2018-09-14 03:57:02 +10:00
Amber Brown 8f08d848f5 fix 2018-09-14 03:53:56 +10:00
Travis Ralston f1a7264663 Fix minor typo in exception 2018-09-13 11:51:12 -06:00
Amber Brown 7c33ab76da redact better 2018-09-14 03:45:34 +10:00
Amber Brown 63755fa4c2 we do that higher up 2018-09-14 03:21:47 +10:00
Amber Brown 73884ebac5 Merge remote-tracking branch 'origin/develop' into hawkowl/timeouts-2 2018-09-14 03:11:25 +10:00
Amber Brown 7c27c4d51c merge (#3576) 2018-09-14 03:11:11 +10:00
Amber Brown 1c3f4d9ca5 buffer? 2018-09-14 03:09:13 +10:00
Erik Johnston 6c0f8d9d50 Merge pull request #3856 from matrix-org/erikj/speed_up_purge
Make purge history slightly faster
2018-09-13 16:14:46 +01:00
Erik Johnston ed5331a627 comment 2018-09-13 16:10:56 +01:00
Amber Brown cb64fe2cb7 Merge pull request #3859 from matrix-org/erikj/add_iterkeys
Fix handling of redacted events from federation
2018-09-14 01:06:44 +10:00
Erik Johnston 13193a6e2b Newsfile 2018-09-13 15:46:45 +01:00
Amber Brown 3126b88d35 fix circleci merged builds (#3858)
* fix

* changelog
2018-09-14 00:44:31 +10:00
Erik Johnston 89a76d1889 Fix handling of redacted events from federation
If we receive an event that doesn't pass its content hash check (e.g.
due to already being redacted) then we hit a bug which causes an
exception to be raised, which then promptly stops the event (and
request) from being processed.

This affects all sorts of federation APIs, including joining rooms with
a redacted state event.
2018-09-13 15:44:12 +01:00
Amber Brown bfa0b759e0 Attempt to figure out what's going on with timeouts (#3857) 2018-09-14 00:15:51 +10:00
Erik Johnston 9cbd0094f0 pep8 2018-09-13 15:15:35 +01:00
Erik Johnston 9dbe38ea7d Create indices after insertion 2018-09-13 15:05:52 +01:00
Erik Johnston 93139a1fb8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/speed_up_purge 2018-09-13 12:57:09 +01:00
Erik Johnston e7cd7cb0f0 Newsfile 2018-09-13 12:55:40 +01:00
Erik Johnston c857f5ef9b Make purge history slightly faster
Don't pull out events that are outliers and won't be deleted, as nothing
should happen to them.
2018-09-13 12:48:10 +01:00
Amber Brown b7d2fb5eb9 Remove some superfluous logging (#3855) 2018-09-13 19:59:32 +10:00
Neil Johnson f30a303590 Merge pull request #3846 from matrix-org/neilj/expose-registered-users
expose number of real reserved users
2018-09-12 17:14:04 +01:00
Matthew Hodgson 2ac1abbc7e show heroes if a room has a 'deleted' name/canonical_alias (#3851) 2018-09-12 17:11:05 +01:00
Erik Johnston fa0d464fa4 Merge pull request #3853 from matrix-org/erikj/log_outbound_each_time
Log outbound requests when we retry
2018-09-12 16:55:40 +01:00
Matthew Hodgson 0403cf0783 argh pep8 2018-09-12 16:54:28 +01:00
Matthew Hodgson 0e200e366d correctly log gappy sync metrics 2018-09-12 16:47:20 +01:00
Matthew Hodgson 11bfc2af1c fix logline 2018-09-12 16:45:42 +01:00
Erik Johnston 3db016b641 Newsfile 2018-09-12 16:25:18 +01:00
Neil Johnson 8decd6233d improve naming 2018-09-12 16:22:15 +01:00
Erik Johnston 8c5b84441b Log outbound requests when we retry 2018-09-12 16:22:14 +01:00
Erik Johnston 54f8616d2c Merge pull request #3841 from matrix-org/erikj/manhole_key_length
Change the manhole SSH key to have more bits
2018-09-12 14:33:39 +01:00
Amber Brown 65cd8ccc79 Add JUnit summaries to CircleCI as well as merged runs (#3704) 2018-09-12 23:29:21 +10:00
Amber Brown 7ca097f77e Port federation/ to py3 (#3847) 2018-09-12 23:23:32 +10:00
Neil Johnson 5cea4e16c7 towncrier 2018-09-12 12:03:31 +01:00
Neil Johnson 0ddf486724 expose number of real reserved users 2018-09-12 11:58:52 +01:00
Amber Brown 546aee7e52 Merge pull request #3835 from krombel/fix_3821
fix VOIP crashes under Python 3
2018-09-12 20:44:18 +10:00
Amber Brown 33716c4aea Merge pull request #3826 from matrix-org/rav/logging_for_keyring
add some logging for the keyring queue
2018-09-12 20:43:47 +10:00
Amber Brown bc635026c5 Merge pull request #3824 from matrix-org/rav/fix_jwt_import
Fix jwt import check
2018-09-12 20:41:57 +10:00
Amber Brown 02aa41809b Port rest/ to Python 3 (#3823) 2018-09-12 20:41:31 +10:00
Amber Brown 8fd93b5eea Port crypto/ to Python 3 (#3822) 2018-09-12 20:16:31 +10:00
Amber Brown 4073f73edc Merge pull request #3845 from matrix-org/erikj/timeout_reads
Timeout reading body for outbound HTTP requests
2018-09-12 20:16:08 +10:00
Erik Johnston 649c647955 Newsfile 2018-09-12 11:03:42 +01:00
Erik Johnston 4084a774a8 Timeout reading body for outbound HTTP requests 2018-09-12 10:10:20 +01:00
Matthew Hodgson b041115415 Speed up lazy loading (#3827)
* speed up room summaries by pulling their data from room_memberships rather than room state
* disable LL for incr syncs, and log incr sync stats  (#3840)
2018-09-12 00:50:39 +01:00
Erik Johnston 9a68778ac2 Newsfile 2018-09-11 10:44:40 +01:00
Erik Johnston 9e05c8d309 Change the manhole SSH key to have more bits
Newer versions of openssh client refuse to connect to the old key due to
its length.
2018-09-11 10:42:10 +01:00
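For illustration, generating a longer RSA key of the kind newer OpenSSH clients insist on; the 2048-bit size and the use of the cryptography package are assumptions for the sketch, not the commit's actual code.

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa

# Modern OpenSSH clients refuse RSA keys below a minimum modulus length,
# so generate a 2048-bit key rather than a short legacy one.
key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=2048,
    backend=default_backend(),
)
print(key.key_size)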
Erik Johnston 037a06e8f0 Merge pull request #3834 from mvgorcum/develop
Remove build requirements after building docker image
2018-09-10 16:53:25 +01:00
Jan Christian Grünhage af10fa6536 add runtime dependencies 2018-09-10 17:39:49 +02:00
Mathijs van Gorcum e957428a15 Newsfile 2018-09-10 14:53:05 +00:00
Mathijs van Gorcum e586916cda Move COPY before RUN and merge RUNs 2018-09-10 14:02:42 +00:00
Krombel 3572a206d3 add changelog 2018-09-10 14:33:08 +02:00
Krombel 7bc22539ff fix VOIP crashes under Python 3 (#3821) 2018-09-10 14:30:08 +02:00
Mathijs van Gorcum b7e7712f07 Remove build requirements after building 2018-09-10 12:21:42 +00:00
Richard van der Hoff 1e4c7fff5f changelog 2018-09-07 16:37:30 +01:00
Amber Brown 9a5ea511b5 Merge pull request #3810 from matrix-org/erikj/send_tags_down_sync_on_join
Send existing room tags down sync on join
2018-09-07 23:28:42 +10:00
Richard van der Hoff e33a538af3 Merge pull request #3783 from cwmke/develop
Add apache vhost config to Readme
2018-09-07 14:25:23 +01:00
Richard van der Hoff edda9f5cac changelog 2018-09-07 14:23:35 +01:00
Richard van der Hoff b8ad756bd0 Fix jwt import check
This handy code attempted to check that we could import jwt, but utterly failed
to check it was the right jwt.

Fixes https://github.com/matrix-org/synapse/issues/3793
2018-09-07 14:20:54 +01:00
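For illustration of the failure mode: both PyJWT and the unrelated python-jwt package install a module importable as jwt, so a bare import jwt succeeding proves little. A sketch of a stricter check, where the attribute probed is a hypothetical discriminator rather than Synapse's actual test:

try:
    import jwt

    # PyJWT exposes a module-level decode(); probe for it rather than
    # trusting the bare import (hypothetical discriminator).
    if not callable(getattr(jwt, "decode", None)):
        raise ImportError("jwt module present but is not PyJWT")
except ImportError as e:
    print("JWT login will be unavailable: %s" % (e,))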
Amber Brown 771d213ac5 Merge branch 'master' into develop 2018-09-07 21:45:38 +10:00
Amber Brown b60749a1ec changelog 2018-09-07 21:41:57 +10:00
Amber Brown 6febd8e8f7 version 2018-09-07 21:40:57 +10:00
Richard van der Hoff cd7ef43872 clearer logging when things fail, too 2018-09-06 23:56:47 +01:00
Richard van der Hoff 806964b5de add some logging for the keyring queue
why is it so damn slow?
2018-09-06 18:51:06 +01:00
Amber Brown 52ec6e9dfa Port tests/ to Python 3 (#3808) 2018-09-07 02:58:18 +10:00
Erik Johnston 7298efd361 Newsfile 2018-09-06 17:13:33 +01:00
Erik Johnston f60c9e2a01 Don't send empty tags list down sync 2018-09-06 17:01:41 +01:00
Erik Johnston 7baf66ef5d Send existing room tags down sync on join
When a user joined a room any existing tags were not sent down the sync
stream. Ordinarily this isn't a problem because the user needs to be in
the room to have set tags in it, however synapse will sometimes add tags
for a user to a room, e.g. for server notices, which need to come down
sync.
2018-09-06 16:46:51 +01:00
Erik Johnston 599f65bb89 Merge branch 'master' of github.com:matrix-org/synapse into release-v0.33.4 2018-09-06 14:13:58 +01:00
Erik Johnston 417e7077aa Bump version and changelog 2018-09-06 14:08:55 +01:00
Erik Johnston d64b24dfe6 Merge tag 'v0.33.3.1' into release-v0.33.4
Synapse 0.33.3.1 (2018-09-06)
=============================

SECURITY FIXES
--------------

- Fix an issue where event signatures were not always correctly validated ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
- Fix an issue where server_acls could be circumvented for incoming events ([\#3796](https://github.com/matrix-org/synapse/issues/3796))

Internal Changes
----------------

- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
2018-09-06 14:08:33 +01:00
Amber Brown 10587f7f32 Merge pull request #3802 from matrix-org/jcgruenhage/docker-unignore-synctl
remove synctl from .dockerignore
2018-09-06 19:47:33 +10:00
Jan Christian Grünhage af3125226d Create 3802.misc 2018-09-06 10:46:00 +02:00
Jan Christian Grünhage 9c8cd855da remove synctl from .dockerignore 2018-09-06 10:39:55 +02:00
Amber Brown 7e9ced4178 version and towncrier 2018-09-04 21:12:04 +10:00
Colin W 81942c109d Remove end '/'s 2018-09-02 23:28:03 -05:00
Colin W a395f1ddb3 Update readme on develop branch 2018-09-02 17:30:07 -05:00
Matthew Hodgson 095d4f27a1 Add comment 2018-08-28 23:25:38 +01:00
Matthew Hodgson a98eae5835 Merge branch 'develop' into matthew/fix_overzealous_ll_state 2018-08-25 02:25:38 +02:00
Matthew Hodgson cba03dd2a4 changelog 2018-08-25 02:04:22 +02:00
Matthew Hodgson 9374567939 don't return non-LL-member state in incremental sync state blocks 2018-08-25 01:33:33 +02:00
139 changed files with 1795 additions and 914 deletions
+77 -4
@@ -9,6 +9,8 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2postgres:
machine: true
steps:
@@ -18,15 +20,45 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2merged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy2postgresmerged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy2
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy2
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3:
machine: true
steps:
- checkout
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs hawkowl/sytestpy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3postgres:
machine: true
steps:
@@ -36,6 +68,32 @@ jobs:
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3merged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
sytestpy3postgresmerged:
machine: true
steps:
- checkout
- run: bash .circleci/merge_base_branch.sh
- run: docker pull matrixdotorg/sytest-synapsepy3
- run: docker run --rm -it -v $(pwd)\:/src -v $(pwd)/logs\:/logs -e POSTGRES=1 matrixdotorg/sytest-synapsepy3
- store_artifacts:
path: ~/project/logs
destination: logs
- store_test_results:
path: logs
workflows:
version: 2
@@ -43,6 +101,21 @@ workflows:
jobs:
- sytestpy2
- sytestpy2postgres
# Currently broken while the Python 3 port is incomplete
# - sytestpy3
# - sytestpy3postgres
- sytestpy3
- sytestpy3postgres
- sytestpy2merged:
filters:
branches:
ignore: /develop|master/
- sytestpy2postgresmerged:
filters:
branches:
ignore: /develop|master/
- sytestpy3merged:
filters:
branches:
ignore: /develop|master/
- sytestpy3postgresmerged:
filters:
branches:
ignore: /develop|master/
+31
@@ -0,0 +1,31 @@
#!/usr/bin/env bash
set -e
# CircleCI doesn't give CIRCLE_PR_NUMBER in the environment for non-forked PRs. Wonderful.
# In this case, we just need to do some ~shell magic~ to strip it out of the PULL_REQUEST URL.
echo 'export CIRCLE_PR_NUMBER="${CIRCLE_PR_NUMBER:-${CIRCLE_PULL_REQUEST##*/}}"' >> $BASH_ENV
source $BASH_ENV
if [[ -z "${CIRCLE_PR_NUMBER}" ]]
then
echo "Can't figure out what the PR number is!"
exit 1
fi
# Get the reference, using the GitHub API
GITBASE=`curl -q https://api.github.com/repos/matrix-org/synapse/pulls/${CIRCLE_PR_NUMBER} | jq -r '.base.ref'`
# Show what we are before
git show -s
# Set up username so it can do a merge
git config --global user.email bot@matrix.org
git config --global user.name "A robot"
# Fetch and merge. If it doesn't work, it will raise due to set -e.
git fetch -u origin $GITBASE
git merge --no-edit origin/$GITBASE
# Show what we are after.
git show -s
+3
@@ -25,6 +25,9 @@ matrix:
services:
- postgresql
- python: 3.5
env: TOX_ENV=py35
- python: 3.6
env: TOX_ENV=py36
+60
@@ -1,3 +1,18 @@
Synapse 0.33.4 (2018-09-07)
===========================
Internal Changes
----------------
- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
Synapse 0.33.4rc2 (2018-09-06)
==============================
Pull in security fixes from v0.33.3.1
Synapse 0.33.3.1 (2018-09-06)
=============================
@@ -7,11 +22,56 @@ SECURITY FIXES
- Fix an issue where event signatures were not always correctly validated ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
- Fix an issue where server_acls could be circumvented for incoming events ([\#3796](https://github.com/matrix-org/synapse/issues/3796))
Internal Changes
----------------
- Unignore synctl in .dockerignore to fix docker builds ([\#3802](https://github.com/matrix-org/synapse/issues/3802))
Synapse 0.33.4rc1 (2018-09-04)
==============================
Features
--------
- Support profile API endpoints on workers ([\#3659](https://github.com/matrix-org/synapse/issues/3659))
- Server notices for resource limit blocking ([\#3680](https://github.com/matrix-org/synapse/issues/3680))
- Allow guests to use /rooms/:roomId/event/:eventId ([\#3724](https://github.com/matrix-org/synapse/issues/3724))
- Add mau_trial_days config param, so that users only get counted as MAU after N days. ([\#3749](https://github.com/matrix-org/synapse/issues/3749))
- Require twisted 17.1 or later (fixes [#3741](https://github.com/matrix-org/synapse/issues/3741)). ([\#3751](https://github.com/matrix-org/synapse/issues/3751))
Bugfixes
--------
- Fix error collecting prometheus metrics when run on dedicated thread due to threading concurrency issues ([\#3722](https://github.com/matrix-org/synapse/issues/3722))
- Fix bug where we resent "limit exceeded" server notices repeatedly ([\#3747](https://github.com/matrix-org/synapse/issues/3747))
- Fix bug where we broke sync when using limit_usage_by_mau but hadn't configured server notices ([\#3753](https://github.com/matrix-org/synapse/issues/3753))
- Fix 'federation_domain_whitelist' such that an empty list correctly blocks all outbound federation traffic ([\#3754](https://github.com/matrix-org/synapse/issues/3754))
- Fix tagging of server notice rooms ([\#3755](https://github.com/matrix-org/synapse/issues/3755), [\#3756](https://github.com/matrix-org/synapse/issues/3756))
- Fix 'admin_uri' config variable and error parameter to be 'admin_contact' to match the spec. ([\#3758](https://github.com/matrix-org/synapse/issues/3758))
- Don't return non-LL-member state in incremental sync state blocks ([\#3760](https://github.com/matrix-org/synapse/issues/3760))
- Fix bug in sending presence over federation ([\#3768](https://github.com/matrix-org/synapse/issues/3768))
- Fix bug where preserved threepid user comes to sign up and server is mau blocked ([\#3777](https://github.com/matrix-org/synapse/issues/3777))
Internal Changes
----------------
- Removed the link to the unmaintained matrix-synapse-auto-deploy project from the readme. ([\#3378](https://github.com/matrix-org/synapse/issues/3378))
- Refactor state module to support multiple room versions ([\#3673](https://github.com/matrix-org/synapse/issues/3673))
- The synapse.storage module has been ported to Python 3. ([\#3725](https://github.com/matrix-org/synapse/issues/3725))
- Split the state_group_cache into member and non-member state events (and so speed up LL /sync) ([\#3726](https://github.com/matrix-org/synapse/issues/3726))
- Log failure to authenticate remote servers as warnings (without stack traces) ([\#3727](https://github.com/matrix-org/synapse/issues/3727))
- The CONTRIBUTING guidelines have been updated to mention our use of Markdown and that .misc files have content. ([\#3730](https://github.com/matrix-org/synapse/issues/3730))
- Reference the need for an HTTP replication port when using the federation_reader worker ([\#3734](https://github.com/matrix-org/synapse/issues/3734))
- Fix minor spelling error in federation client documentation. ([\#3735](https://github.com/matrix-org/synapse/issues/3735))
- Remove redundant state resolution function ([\#3737](https://github.com/matrix-org/synapse/issues/3737))
- The test suite now passes on PostgreSQL. ([\#3740](https://github.com/matrix-org/synapse/issues/3740))
- Fix MAU cache invalidation due to missing yield ([\#3746](https://github.com/matrix-org/synapse/issues/3746))
- Make sure that we close db connections opened during init ([\#3764](https://github.com/matrix-org/synapse/issues/3764))
Synapse 0.33.3 (2018-08-22)
===========================
+21 -1
@@ -742,6 +742,18 @@ so an example nginx configuration might look like::
}
}
and an example apache configuration may look like::
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com
<Location /_matrix>
ProxyPass http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse http://127.0.0.1:8008/_matrix
</Location>
</VirtualHost>
You will also want to set ``bind_addresses: ['127.0.0.1']`` and ``x_forwarded: true``
for port 8008 in ``homeserver.yaml`` to ensure that client IP addresses are
recorded correctly.
@@ -896,7 +908,7 @@ to install using pip and a virtualenv::
virtualenv -p python2.7 env
source env/bin/activate
python synapse/python_dependencies.py | xargs pip install
python -m synapse.python_dependencies | xargs pip install
pip install lxml mock
This will run a process of downloading and installing all the needed
@@ -951,5 +963,13 @@ variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
Using `libjemalloc <http://jemalloc.net/>`_ can also yield a significant
improvement in overall memory use, and especially in terms of giving back RAM
to the OS. To use it, the library must simply be put in the LD_PRELOAD
environment variable when launching Synapse. On Debian, this can be done
by installing the ``libjemalloc1`` package and adding this line to
``/etc/default/matrix-synapse``::
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1
.. _`key_management`: https://matrix.org/docs/spec/server_server/unstable.html#retrieving-server-keys
-1
@@ -1 +0,0 @@
Removed the link to the unmaintained matrix-synapse-auto-deploy project from the readme.
+1
@@ -0,0 +1 @@
Python 3.5+ is now supported.
-1
@@ -1 +0,0 @@
Support profile API endpoints on workers
-1
@@ -1 +0,0 @@
Refactor state module to support multiple room versions
-1
@@ -1 +0,0 @@
Server notices for resource limit blocking
+1
@@ -0,0 +1 @@
CircleCI tests now run on the potential merge of a PR.
-1
@@ -1 +0,0 @@
Fix error collecting prometheus metrics when run on dedicated thread due to threading concurrency issues
-1
@@ -1 +0,0 @@
Allow guests to use /rooms/:roomId/event/:eventId
-1
@@ -1 +0,0 @@
The synapse.storage module has been ported to Python 3.
-1
@@ -1 +0,0 @@
Split the state_group_cache into member and non-member state events (and so speed up LL /sync)
-1
@@ -1 +0,0 @@
Log failure to authenticate remote servers as warnings (without stack traces)
-1
@@ -1 +0,0 @@
The CONTRIBUTING guidelines have been updated to mention our use of Markdown and that .misc files have content.
-1
@@ -1 +0,0 @@
Reference the need for an HTTP replication port when using the federation_reader worker
-1
@@ -1 +0,0 @@
Fix minor spelling error in federation client documentation.
-1
@@ -1 +0,0 @@
Remove redundant state resolution function
-1
@@ -1 +0,0 @@
The test suite now passes on PostgreSQL.
-1
@@ -1 +0,0 @@
Fix MAU cache invalidation due to missing yield
-1
@@ -1 +0,0 @@
Fix bug where we resent "limit exceeded" server notices repeatedly
-1
@@ -1 +0,0 @@
Add mau_trial_days config param, so that users only get counted as MAU after N days.
-1
@@ -1 +0,0 @@
Require twisted 17.1 or later (fixes [#3741](https://github.com/matrix-org/synapse/issues/3741)).
-1
@@ -1 +0,0 @@
Fix bug where we broke sync when using limit_usage_by_mau but hadn't configured server notices
-1
@@ -1 +0,0 @@
Fix 'federation_domain_whitelist' such that an empty list correctly blocks all outbound federation traffic
-1
@@ -1 +0,0 @@
Fix tagging of server notice rooms
-1
@@ -1 +0,0 @@
Fix tagging of server notice rooms
-1
@@ -1 +0,0 @@
Fix 'admin_uri' config variable and error parameter to be 'admin_contact' to match the spec.
-1
@@ -1 +0,0 @@
Make sure that we close db connections opened during init
-1
@@ -1 +0,0 @@
Fix bug in sending presence over federation
-1
@@ -1 +0,0 @@
Fix bug where preserved threepid user comes to sign up and server is mau blocked
+1
@@ -0,0 +1 @@
tests/ is now ported to Python 3.
+1
@@ -0,0 +1 @@
Fix existing room tags not coming down sync when joining a room
+1
@@ -0,0 +1 @@
crypto/ is now ported to Python 3.
+1
@@ -0,0 +1 @@
rest/ is now ported to Python 3.
+1
@@ -0,0 +1 @@
Fix jwt import check
+1
@@ -0,0 +1 @@
add some logging for the keyring queue
+1
@@ -0,0 +1 @@
speed up lazy loading by 2-3x
+1
@@ -0,0 +1 @@
Improved Dockerfile to remove build requirements after building, reducing the image size.
+1
@@ -0,0 +1 @@
fix VOIP crashes under Python 3 (#3821)
+1
@@ -0,0 +1 @@
Disable lazy loading for incremental syncs for now
+1
@@ -0,0 +1 @@
Fix manhole so that it works with latest openssh clients
+1
@@ -0,0 +1 @@
Fix outbound requests occasionally wedging, which can result in federation breaking between servers.
+1
@@ -0,0 +1 @@
Add synapse_admin_mau:registered_reserved_users metric to expose the number of real reserved users
+1
@@ -0,0 +1 @@
federation/ is now ported to Python 3.
+1
@@ -0,0 +1 @@
Show heroes if room name/canonical alias has been deleted
+1
@@ -0,0 +1 @@
Log when we retry outbound requests
+1
@@ -0,0 +1 @@
Removed some excess logging messages.
+1
@@ -0,0 +1 @@
Speed up purge history for rooms that have been previously purged
+1
@@ -0,0 +1 @@
Refactor some HTTP timeout code.
+1
@@ -0,0 +1 @@
Fix running merged builds on CircleCI
+1
@@ -0,0 +1 @@
Fix handling of redacted events from federation
+1
@@ -0,0 +1 @@
Fix typo in replication stream exception.
+1
@@ -0,0 +1 @@
Add in flight real time metrics for Measure blocks
+1
@@ -0,0 +1 @@
Disable buffering and automatic retrying in treq requests to prevent timeouts.
+1
@@ -0,0 +1 @@
Mitigate outbound federation randomly becoming wedged
+1
@@ -0,0 +1 @@
mention jemalloc in the README
+1
@@ -0,0 +1 @@
Don't ratelimit autojoins
+1
@@ -0,0 +1 @@
Adding the ability to change MAX_UPLOAD_SIZE via the Docker container's environment variables.
+1
@@ -0,0 +1 @@
Remove unmaintained "nuke-room-from-db.sh" script
+1
@@ -0,0 +1 @@
Fix 500 error when deleting unknown room alias
+1
@@ -0,0 +1 @@
Fix some b'abcd' noise in logs and metrics
+1
@@ -0,0 +1 @@
Fix some b'abcd' noise in logs and metrics
+1
@@ -0,0 +1 @@
Fix typo in README, synaspse -> synapse
+16 -10
@@ -1,6 +1,8 @@
FROM docker.io/python:2-alpine3.8
RUN apk add --no-cache --virtual .nacl_deps \
COPY . /synapse
RUN apk add --no-cache --virtual .build_deps \
build-base \
libffi-dev \
libjpeg-turbo-dev \
@@ -8,13 +10,16 @@ RUN apk add --no-cache --virtual .nacl_deps \
libxslt-dev \
linux-headers \
postgresql-dev \
su-exec \
zlib-dev
COPY . /synapse
# A wheel cache may be provided in ./cache for faster build
RUN cd /synapse \
zlib-dev \
&& cd /synapse \
&& apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libressl \
libxslt \
libpq \
zlib \
su-exec \
&& pip install --upgrade \
lxml \
pip \
@@ -26,8 +31,9 @@ RUN cd /synapse \
&& rm -rf \
setup.cfg \
setup.py \
synapse
synapse \
&& apk del .build_deps
VOLUME ["/data"]
EXPOSE 8008/tcp 8448/tcp
+1
@@ -88,6 +88,7 @@ variables are available for configuration:
* ``SYNAPSE_TURN_URIS``, set this variable to the comma-separated list of TURN
uris to enable TURN for this homeserver.
* ``SYNAPSE_TURN_SECRET``, set this to the TURN shared secret if required.
* ``SYNAPSE_MAX_UPLOAD_SIZE``, set this variable to change the max upload size [default `10M`].
Shared secrets, that will be initialized to random values if not set:
+1 -1
@@ -85,7 +85,7 @@ federation_rc_concurrent: 3
media_store_path: "/data/media"
uploads_path: "/data/uploads"
max_upload_size: "10M"
max_upload_size: "{{ SYNAPSE_MAX_UPLOAD_SIZE or "10M" }}"
max_image_pixels: "32M"
dynamic_thumbnails: false
-57
@@ -1,57 +0,0 @@
#!/bin/bash
## CAUTION:
## This script will remove (hopefully) all trace of the given room ID from
## your homeserver.db
## Do not run it lightly.
set -e
if [ "$1" == "-h" ] || [ "$1" == "" ]; then
echo "Call with ROOM_ID as first option and then pipe it into the database. So for instance you might run"
echo " nuke-room-from-db.sh <room_id> | sqlite3 homeserver.db"
echo "or"
echo " nuke-room-from-db.sh <room_id> | psql --dbname=synapse"
exit
fi
ROOMID="$1"
cat <<EOF
DELETE FROM event_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_backward_extremities WHERE room_id = '$ROOMID';
DELETE FROM event_edges WHERE room_id = '$ROOMID';
DELETE FROM room_depth WHERE room_id = '$ROOMID';
DELETE FROM state_forward_extremities WHERE room_id = '$ROOMID';
DELETE FROM events WHERE room_id = '$ROOMID';
DELETE FROM event_json WHERE room_id = '$ROOMID';
DELETE FROM state_events WHERE room_id = '$ROOMID';
DELETE FROM current_state_events WHERE room_id = '$ROOMID';
DELETE FROM room_memberships WHERE room_id = '$ROOMID';
DELETE FROM feedback WHERE room_id = '$ROOMID';
DELETE FROM topics WHERE room_id = '$ROOMID';
DELETE FROM room_names WHERE room_id = '$ROOMID';
DELETE FROM rooms WHERE room_id = '$ROOMID';
DELETE FROM room_hosts WHERE room_id = '$ROOMID';
DELETE FROM room_aliases WHERE room_id = '$ROOMID';
DELETE FROM state_groups WHERE room_id = '$ROOMID';
DELETE FROM state_groups_state WHERE room_id = '$ROOMID';
DELETE FROM receipts_graph WHERE room_id = '$ROOMID';
DELETE FROM receipts_linearized WHERE room_id = '$ROOMID';
DELETE FROM event_search WHERE room_id = '$ROOMID';
DELETE FROM guest_access WHERE room_id = '$ROOMID';
DELETE FROM history_visibility WHERE room_id = '$ROOMID';
DELETE FROM room_tags WHERE room_id = '$ROOMID';
DELETE FROM room_tags_revisions WHERE room_id = '$ROOMID';
DELETE FROM room_account_data WHERE room_id = '$ROOMID';
DELETE FROM event_push_actions WHERE room_id = '$ROOMID';
DELETE FROM local_invites WHERE room_id = '$ROOMID';
DELETE FROM pusher_throttle WHERE room_id = '$ROOMID';
DELETE FROM event_reports WHERE room_id = '$ROOMID';
DELETE FROM public_room_list_stream WHERE room_id = '$ROOMID';
DELETE FROM stream_ordering_to_exterm WHERE room_id = '$ROOMID';
DELETE FROM event_auth WHERE room_id = '$ROOMID';
DELETE FROM appservice_room_list WHERE room_id = '$ROOMID';
VACUUM;
EOF
+11 -1
@@ -17,4 +17,14 @@
""" This is a reference implementation of a Matrix home server.
"""
__version__ = "0.33.3.1"
try:
from twisted.internet import protocol
from twisted.internet.protocol import Factory
from twisted.names.dns import DNSDatagramProtocol
protocol.Factory.noisy = False
Factory.noisy = False
DNSDatagramProtocol.noisy = False
except ImportError:
pass
__version__ = "0.33.4"
+11 -3
@@ -307,6 +307,10 @@ class SynapseHomeServer(HomeServer):
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
registered_reserved_users_mau_gauge = Gauge(
"synapse_admin_mau:registered_reserved_users",
"Registered users with reserved threepids"
)
def setup(config_options):
@@ -531,10 +535,14 @@ def run(hs):
@defer.inlineCallbacks
def generate_monthly_active_users():
count = 0
current_mau_count = 0
reserved_count = 0
store = hs.get_datastore()
if hs.config.limit_usage_by_mau:
count = yield hs.get_datastore().get_monthly_active_count()
current_mau_gauge.set(float(count))
current_mau_count = yield store.get_monthly_active_count()
reserved_count = yield store.get_registered_reserved_users_count()
current_mau_gauge.set(float(current_mau_count))
registered_reserved_users_mau_gauge.set(float(reserved_count))
max_mau_gauge.set(float(hs.config.max_mau_value))
hs.get_datastore().initialise_reserved_users(
+1 -1
@@ -21,7 +21,7 @@ from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .groups import GroupsConfig
from .jwt import JWTConfig
from .jwt_config import JWTConfig
from .key import KeyConfig
from .logger import LoggingConfig
from .metrics import MetricsConfig
+16 -1
@@ -227,7 +227,22 @@ def setup_logging(config, use_worker_options=False):
#
# However this may not be too much of a problem if we are just writing to a file.
observer = STDLibLogObserver()
def _log(event):
if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
if event["log_text"].startswith("(UDP Port "):
return
if event["log_text"].startswith("Timing out client"):
return
return observer(event)
globalLogBeginner.beginLoggingTo(
[observer],
[_log],
redirectStandardIO=not config.no_redirect_stdio,
)
+1 -1
@@ -123,6 +123,6 @@ class ClientTLSOptionsFactory(object):
def get_options(self, host):
return ClientTLSOptions(
host.decode('utf-8'),
host,
CertificateOptions(verify=False).getContext()
)
+7 -1
@@ -50,7 +50,7 @@ def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
defer.returnValue((server_response, server_certificate))
except SynapseKeyClientError as e:
logger.warn("Error getting key for %r: %s", server_name, e)
if e.status.startswith("4"):
if e.status.startswith(b"4"):
# Don't retry for 4xx responses.
raise IOError("Cannot get key for %r" % server_name)
except (ConnectError, DomainError) as e:
@@ -82,6 +82,12 @@ class SynapseKeyClientProtocol(HTTPClient):
self._peer = self.transport.getPeer()
logger.debug("Connected to %s", self._peer)
if not isinstance(self.path, bytes):
self.path = self.path.encode('ascii')
if not isinstance(self.host, bytes):
self.host = self.host.encode('ascii')
self.sendCommand(b"GET", self.path)
if self.host:
self.sendHeader(b"Host", self.host)
+23 -10
@@ -16,9 +16,10 @@
import hashlib
import logging
import urllib
from collections import namedtuple
from six.moves import urllib
from signedjson.key import (
decode_verify_key_bytes,
encode_verify_key_base64,
@@ -40,6 +41,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.crypto.keyclient import fetch_server_key
from synapse.util import logcontext, unwrapFirstError
from synapse.util.logcontext import (
LoggingContext,
PreserveLoggingContext,
preserve_fn,
run_in_background,
@@ -216,23 +218,34 @@ class Keyring(object):
servers have completed. Follows the synapse rules of logcontext
preservation.
"""
loop_count = 1
while True:
wait_on = [
self.key_downloads[server_name]
(server_name, self.key_downloads[server_name])
for server_name in server_names
if server_name in self.key_downloads
]
if wait_on:
with PreserveLoggingContext():
yield defer.DeferredList(wait_on)
else:
if not wait_on:
break
logger.info(
"Waiting for existing lookups for %s to complete [loop %i]",
[w[0] for w in wait_on], loop_count,
)
with PreserveLoggingContext():
yield defer.DeferredList((w[1] for w in wait_on))
loop_count += 1
ctx = LoggingContext.current_context()
def rm(r, server_name_):
self.key_downloads.pop(server_name_, None)
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name_)
self.key_downloads.pop(server_name_, None)
return r
for server_name, deferred in server_to_deferred.items():
logger.debug("Got key lookup lock on %s", server_name)
self.key_downloads[server_name] = deferred
deferred.addBoth(rm, server_name)
@@ -432,7 +445,7 @@ class Keyring(object):
# an incoming request.
query_response = yield self.client.post_json(
destination=perspective_name,
path=b"/_matrix/key/v2/query",
path="/_matrix/key/v2/query",
data={
u"server_keys": {
server_name: {
@@ -513,8 +526,8 @@ class Keyring(object):
(response, tls_certificate) = yield fetch_server_key(
server_name, self.hs.tls_client_options_factory,
path=(b"/_matrix/key/v2/server/%s" % (
urllib.quote(requested_key_id),
path=("/_matrix/key/v2/server/%s" % (
urllib.parse.quote(requested_key_id),
)).encode("ascii"),
)
+5
@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import six
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
@@ -147,6 +149,9 @@ class EventBase(object):
def items(self):
return list(self._event_dict.items())
def keys(self):
return six.iterkeys(self._event_dict)
class FrozenEvent(EventBase):
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
+27 -7
@@ -143,11 +143,31 @@ class FederationBase(object):
def callback(_, pdu):
with logcontext.PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
logger.warn(
"Event content has been tampered, redacting %s: %s",
pdu.event_id, pdu.get_pdu_json()
)
return prune_event(pdu)
# let's try to distinguish between failures because the event was
# redacted (which are somewhat expected) vs actual ball-tampering
# incidents.
#
# This is just a heuristic, so we just assume that if the keys are
# about the same between the redacted and received events, then the
# received event was probably a redacted copy (but we then use our
# *actual* redacted copy to be on the safe side.)
redacted_event = prune_event(pdu)
if (
set(redacted_event.keys()) == set(pdu.keys()) and
set(six.iterkeys(redacted_event.content))
== set(six.iterkeys(pdu.content))
):
logger.info(
"Event %s seems to have been redacted; using our redacted "
"copy",
pdu.event_id,
)
else:
logger.warning(
"Event %s content has been tampered, redacting",
pdu.event_id, pdu.get_pdu_json(),
)
return redacted_event
if self.spam_checker.check_event_for_spam(pdu):
logger.warn(
@@ -162,8 +182,8 @@ class FederationBase(object):
failure.trap(SynapseError)
with logcontext.PreserveLoggingContext(ctx):
logger.warn(
"Signature check failed for %s",
pdu.event_id,
"Signature check failed for %s: %s",
pdu.event_id, failure.getErrorMessage(),
)
return failure
+4 -4
@@ -271,10 +271,10 @@ class FederationClient(FederationBase):
event_id, destination, e,
)
except NotRetryingDestination as e:
logger.info(e.message)
logger.info(str(e))
continue
except FederationDeniedError as e:
logger.info(e.message)
logger.info(str(e))
continue
except Exception as e:
pdu_attempts[destination] = now
@@ -510,7 +510,7 @@ class FederationClient(FederationBase):
else:
logger.warn(
"Failed to %s via %s: %i %s",
description, destination, e.code, e.message,
description, destination, e.code, e.args[0],
)
except Exception:
logger.warn(
@@ -875,7 +875,7 @@ class FederationClient(FederationBase):
except Exception as e:
logger.exception(
"Failed to send_third_party_invite via %s: %s",
destination, e.message
destination, str(e)
)
raise RuntimeError("Failed to send to any server.")
+3 -2
@@ -15,7 +15,8 @@
# limitations under the License.
import logging
import urllib
from six.moves import urllib
from twisted.internet import defer
@@ -951,4 +952,4 @@ def _create_path(prefix, path, *args):
Returns:
str
"""
return prefix + path % tuple(urllib.quote(arg, "") for arg in args)
return prefix + path % tuple(urllib.parse.quote(arg, "") for arg in args)
+11 -13
@@ -90,8 +90,8 @@ class Authenticator(object):
@defer.inlineCallbacks
def authenticate_request(self, request, content):
json_request = {
"method": request.method,
"uri": request.uri,
"method": request.method.decode('ascii'),
"uri": request.uri.decode('ascii'),
"destination": self.server_name,
"signatures": {},
}
@@ -252,7 +252,7 @@ class BaseFederationServlet(object):
by the callback method. None if the request has already been handled.
"""
content = None
if request.method in ["PUT", "POST"]:
if request.method in [b"PUT", b"POST"]:
# TODO: Handle other method types? other content types?
content = parse_json_object_from_request(request)
@@ -386,7 +386,7 @@ class FederationStateServlet(BaseFederationServlet):
return self.handler.on_context_state_request(
origin,
context,
query.get("event_id", [None])[0],
parse_string_from_args(query, "event_id", None),
)
@@ -397,7 +397,7 @@ class FederationStateIdsServlet(BaseFederationServlet):
return self.handler.on_state_ids_request(
origin,
room_id,
query.get("event_id", [None])[0],
parse_string_from_args(query, "event_id", None),
)
@@ -405,14 +405,12 @@ class FederationBackfillServlet(BaseFederationServlet):
PATH = "/backfill/(?P<context>[^/]*)/"
def on_GET(self, origin, content, query, context):
versions = query["v"]
limits = query["limit"]
versions = [x.decode('ascii') for x in query[b"v"]]
limit = parse_integer_from_args(query, "limit", None)
if not limits:
if not limit:
return defer.succeed((400, {"error": "Did not include limit param"}))
limit = int(limits[-1])
return self.handler.on_backfill_request(origin, context, versions, limit)
@@ -423,7 +421,7 @@ class FederationQueryServlet(BaseFederationServlet):
def on_GET(self, origin, content, query, query_type):
return self.handler.on_query_request(
query_type,
{k: v[0].decode("utf-8") for k, v in query.items()}
{k.decode('utf8'): v[0].decode("utf-8") for k, v in query.items()}
)
@@ -630,14 +628,14 @@ class OpenIdUserInfo(BaseFederationServlet):
@defer.inlineCallbacks
def on_GET(self, origin, content, query):
token = query.get("access_token", [None])[0]
token = query.get(b"access_token", [None])[0]
if token is None:
defer.returnValue((401, {
"errcode": "M_MISSING_TOKEN", "error": "Access Token required"
}))
return
user_id = yield self.handler.on_openid_userinfo(token)
user_id = yield self.handler.on_openid_userinfo(token.decode('ascii'))
if user_id is None:
defer.returnValue((401, {
+16 -3
@@ -20,7 +20,14 @@ import string
from twisted.internet import defer
from synapse.api.constants import EventTypes
from synapse.api.errors import AuthError, CodeMessageException, Codes, SynapseError
from synapse.api.errors import (
AuthError,
CodeMessageException,
Codes,
NotFoundError,
StoreError,
SynapseError,
)
from synapse.types import RoomAlias, UserID, get_domain_from_id
from ._base import BaseHandler
@@ -109,7 +116,13 @@ class DirectoryHandler(BaseHandler):
def delete_association(self, requester, user_id, room_alias):
# association deletion for human users
can_delete = yield self._user_can_delete_alias(room_alias, user_id)
try:
can_delete = yield self._user_can_delete_alias(room_alias, user_id)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown room alias")
raise
if not can_delete:
raise AuthError(
403, "You don't have permission to delete the alias.",
@@ -320,7 +333,7 @@ class DirectoryHandler(BaseHandler):
def _user_can_delete_alias(self, alias, user_id):
creator = yield self.store.get_room_alias_creator(alias.to_string())
if creator and creator == user_id:
if creator is not None and creator == user_id:
defer.returnValue(True)
is_admin = yield self.auth.is_server_admin(UserID.from_string(user_id))
+1 -8
@@ -269,14 +269,7 @@ class PaginationHandler(object):
if state_ids:
state = yield self.store.get_events(list(state_ids.values()))
if state:
state = yield filter_events_for_client(
self.store,
user_id,
state.values(),
is_peeking=(member_event_id is None),
)
state = state.values()
time_now = self.clock.time_msec()
+1
@@ -534,4 +534,5 @@ class RegistrationHandler(BaseHandler):
room_id=room_id,
remote_room_hosts=remote_room_hosts,
action="join",
ratelimit=False,
)
+117 -32
@@ -24,6 +24,7 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.push.clientformat import format_push_rules_for_user
from synapse.storage.roommember import MemberSummary
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.expiringcache import ExpiringCache
@@ -525,6 +526,8 @@ class SyncHandler(object):
A deferred dict describing the room summary
"""
# FIXME: we could/should get this from room_stats when matthew/stats lands
# FIXME: this promulgates https://github.com/matrix-org/synapse/issues/3305
last_events, _ = yield self.store.get_recent_event_ids_for_room(
room_id, end_token=now_token.room_key, limit=1,
@@ -537,44 +540,67 @@ class SyncHandler(object):
last_event = last_events[-1]
state_ids = yield self.store.get_state_ids_for_event(
last_event.event_id, [
(EventTypes.Member, None),
(EventTypes.Name, ''),
(EventTypes.CanonicalAlias, ''),
]
)
member_ids = {
state_key: event_id
for (t, state_key), event_id in iteritems(state_ids)
if t == EventTypes.Member
}
# this is heavily cached, thus: fast.
details = yield self.store.get_room_summary(room_id)
name_id = state_ids.get((EventTypes.Name, ''))
canonical_alias_id = state_ids.get((EventTypes.CanonicalAlias, ''))
summary = {}
# FIXME: it feels very heavy to load up every single membership event
# just to calculate the counts.
member_events = yield self.store.get_events(member_ids.values())
joined_user_ids = []
invited_user_ids = []
for ev in member_events.values():
if ev.content.get("membership") == Membership.JOIN:
joined_user_ids.append(ev.state_key)
elif ev.content.get("membership") == Membership.INVITE:
invited_user_ids.append(ev.state_key)
empty_ms = MemberSummary([], 0)
# TODO: only send these when they change.
summary["m.joined_member_count"] = len(joined_user_ids)
summary["m.invited_member_count"] = len(invited_user_ids)
summary["m.joined_member_count"] = (
details.get(Membership.JOIN, empty_ms).count
)
summary["m.invited_member_count"] = (
details.get(Membership.INVITE, empty_ms).count
)
if name_id or canonical_alias_id:
defer.returnValue(summary)
# if the room has a name or canonical_alias set, we can skip
# calculating heroes. we assume that if the event has contents, it'll
# be a valid name or canonical_alias - i.e. we're checking that they
# haven't been "deleted" by blatting {} over the top.
if name_id:
name = yield self.store.get_event(name_id, allow_none=False)
if name and name.content:
defer.returnValue(summary)
# FIXME: order by stream ordering, not alphabetic
if canonical_alias_id:
canonical_alias = yield self.store.get_event(
canonical_alias_id, allow_none=False,
)
if canonical_alias and canonical_alias.content:
defer.returnValue(summary)
joined_user_ids = [
r[0] for r in details.get(Membership.JOIN, empty_ms).members
]
invited_user_ids = [
r[0] for r in details.get(Membership.INVITE, empty_ms).members
]
gone_user_ids = (
[r[0] for r in details.get(Membership.LEAVE, empty_ms).members] +
[r[0] for r in details.get(Membership.BAN, empty_ms).members]
)
# FIXME: only build up a member_ids list for our heroes
member_ids = {}
for membership in (
Membership.JOIN,
Membership.INVITE,
Membership.LEAVE,
Membership.BAN
):
for user_id, event_id in details.get(membership, empty_ms).members:
member_ids[user_id] = event_id
# FIXME: order by stream ordering rather than as returned by SQL
me = sync_config.user.to_string()
if (joined_user_ids or invited_user_ids):
summary['m.heroes'] = sorted(
@@ -586,7 +612,11 @@ class SyncHandler(object):
)[0:5]
else:
summary['m.heroes'] = sorted(
[user_id for user_id in member_ids.keys() if user_id != me]
[
user_id
for user_id in gone_user_ids
if user_id != me
]
)[0:5]
if not sync_config.filter_collection.lazy_load_members():
@@ -719,6 +749,26 @@ class SyncHandler(object):
lazy_load_members=lazy_load_members,
)
elif batch.limited:
state_at_timeline_start = yield self.store.get_state_ids_for_event(
batch.events[0].event_id, types=types,
filtered_types=filtered_types,
)
# for now, we disable LL for gappy syncs - see
# https://github.com/vector-im/riot-web/issues/7211#issuecomment-419976346
# N.B. this slows down incr syncs as we are now processing way
# more state in the server than if we were LLing.
#
# We still have to filter timeline_start to LL entries (above) in order
# for _calculate_state's LL logic to work, as we have to include LL
# members for timeline senders in case they weren't loaded in the initial
# sync. We do this (counterintuitively) by filtering timeline_start
# members to just be ones which were timeline senders, which then ensures
# all of the rest get included in the state block (if we need to know
# about them).
types = None
filtered_types = None
state_at_previous_sync = yield self.get_state_at(
room_id, stream_position=since_token, types=types,
filtered_types=filtered_types,
@@ -729,24 +779,22 @@ class SyncHandler(object):
filtered_types=filtered_types,
)
state_at_timeline_start = yield self.store.get_state_ids_for_event(
batch.events[0].event_id, types=types,
filtered_types=filtered_types,
)
state_ids = _calculate_state(
timeline_contains=timeline_state,
timeline_start=state_at_timeline_start,
previous=state_at_previous_sync,
current=current_state_ids,
# we have to include LL members in case LL initial sync missed them
lazy_load_members=lazy_load_members,
)
else:
state_ids = {}
if lazy_load_members:
if types:
# We're returning an incremental sync, with no "gap" since
# the previous sync, so normally there would be no state to return
# We're returning an incremental sync, with no
# "gap" since the previous sync, so normally there would be
# no state to return.
# But we're lazy-loading, so the client might need some more
# member events to understand the events in this timeline.
# So we fish out all the member events corresponding to the
@@ -1575,6 +1623,19 @@ class SyncHandler(object):
newly_joined_room=newly_joined,
)
# When we join the room (or the client requests full_state), we should
# send down any existing tags. Usually the user won't have tags in a
# newly joined room, unless either a) they've joined before or b) the
# tag was added by synapse e.g. for server notice rooms.
if full_state:
user_id = sync_result_builder.sync_config.user.to_string()
tags = yield self.store.get_tags_for_room(user_id, room_id)
# If there aren't any tags, don't send the empty tags list down
# sync
if not tags:
tags = None
account_data_events = []
if tags is not None:
account_data_events.append({
@@ -1603,10 +1664,24 @@ class SyncHandler(object):
)
summary = {}
# we include a summary in room responses when we're lazy loading
# members (as the client otherwise doesn't have enough info to form
# the name itself).
if (
sync_config.filter_collection.lazy_load_members() and
(
# we recalculate the summary:
# if there are membership changes in the timeline, or
# if membership has changed during a gappy sync, or
# if this is an initial sync.
any(ev.type == EventTypes.Member for ev in batch.events) or
(
# XXX: this may include false positives in the form of LL
# members which have snuck into state
batch.limited and
any(t == EventTypes.Member for (t, k) in state)
) or
since_token is None
)
):
@@ -1636,6 +1711,16 @@ class SyncHandler(object):
unread_notifications["highlight_count"] = notifs["highlight_count"]
sync_result_builder.joined.append(room_sync)
if batch.limited and since_token:
user_id = sync_result_builder.sync_config.user.to_string()
logger.info(
"Incremental gappy sync of %s for user %s with %d state events" % (
room_id,
user_id,
len(state),
)
)
elif room_builder.rtype == "archived":
room_sync = ArchivedSyncResult(
room_id=room_id,
+2 -2
@@ -38,12 +38,12 @@ def cancelled_to_request_timed_out_error(value, timeout):
return value
ACCESS_TOKEN_RE = re.compile(br'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')
def redact_uri(uri):
"""Strips access tokens from the uri replaces with <redacted>"""
return ACCESS_TOKEN_RE.sub(
br'\1<redacted>\3',
r'\1<redacted>\3',
uri
)
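For illustration, the hunk above switches redact_uri to operate on str rather than bytes; assuming that final form, its behaviour looks like this:

import re

ACCESS_TOKEN_RE = re.compile(r'(\?.*access(_|%5[Ff])token=)[^&]*(.*)$')

def redact_uri(uri):
    # Strip the access token value, keeping the rest of the query string.
    return ACCESS_TOKEN_RE.sub(r'\1<redacted>\3', uri)

print(redact_uri("/sync?access_token=s3cret&since=abc"))
# -> /sync?access_token=<redacted>&since=abc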
+13 -7
@@ -93,7 +93,7 @@ class SimpleHttpClient(object):
outgoing_requests_counter.labels(method).inc()
# log request but strip `access_token` (AS requests for example include this)
logger.info("Sending request %s %s", method, redact_uri(uri.encode('ascii')))
logger.info("Sending request %s %s", method, redact_uri(uri))
try:
request_deferred = treq.request(
@@ -108,14 +108,14 @@ class SimpleHttpClient(object):
incoming_responses_counter.labels(method, response.code).inc()
logger.info(
"Received response to %s %s: %s",
method, redact_uri(uri.encode('ascii')), response.code
method, redact_uri(uri), response.code
)
defer.returnValue(response)
except Exception as e:
incoming_responses_counter.labels(method, "ERR").inc()
logger.info(
"Error sending request to %s %s: %s %s",
method, redact_uri(uri.encode('ascii')), type(e).__name__, e.args[0]
method, redact_uri(uri), type(e).__name__, e.args[0]
)
raise
@@ -348,7 +348,8 @@ class SimpleHttpClient(object):
resp_headers = dict(response.headers.getAllRawHeaders())
if 'Content-Length' in resp_headers and resp_headers['Content-Length'] > max_size:
if (b'Content-Length' in resp_headers and
int(resp_headers[b'Content-Length']) > max_size):
logger.warn("Requested URL is too large > %r bytes" % (self.max_size,))
raise SynapseError(
502,
@@ -381,7 +382,12 @@ class SimpleHttpClient(object):
)
defer.returnValue(
(length, resp_headers, response.request.absoluteURI, response.code),
(
length,
resp_headers,
response.request.absoluteURI.decode('ascii'),
response.code,
),
)
@@ -466,9 +472,9 @@ class SpiderEndpointFactory(object):
def endpointForURI(self, uri):
logger.info("Getting endpoint for %s", uri.toBytes())
if uri.scheme == "http":
if uri.scheme == b"http":
endpoint_factory = HostnameEndpoint
elif uri.scheme == "https":
elif uri.scheme == b"https":
tlsCreator = self.policyForHTTPS.creatorForNetloc(uri.host, uri.port)
def endpoint_factory(reactor, host, port, **kw):
+128 -86
@@ -26,7 +26,7 @@ from canonicaljson import encode_canonical_json
from prometheus_client import Counter
from signedjson.sign import sign_json
from twisted.internet import defer, protocol, reactor
from twisted.internet import defer, protocol
from twisted.internet.error import DNSLookupError
from twisted.web._newclient import ResponseDone
from twisted.web.client import Agent, HTTPConnectionPool
@@ -40,11 +40,11 @@ from synapse.api.errors import (
HttpResponseException,
SynapseError,
)
from synapse.http import cancelled_to_request_timed_out_error
from synapse.http.endpoint import matrix_federation_endpoint
from synapse.util import logcontext
from synapse.util.async_helpers import add_timeout_to_deferred
from synapse.util.async_helpers import timeout_no_seriously
from synapse.util.logcontext import make_deferred_yieldable
from synapse.util.metrics import Measure
logger = logging.getLogger(__name__)
outbound_logger = logging.getLogger("synapse.http.outbound")
@@ -66,13 +66,14 @@ else:
class MatrixFederationEndpointFactory(object):
def __init__(self, hs):
self.reactor = hs.get_reactor()
self.tls_client_options_factory = hs.tls_client_options_factory
def endpointForURI(self, uri):
destination = uri.netloc.decode('ascii')
return matrix_federation_endpoint(
reactor, destination, timeout=10,
self.reactor, destination, timeout=10,
tls_client_options_factory=self.tls_client_options_factory
)
@@ -90,7 +91,9 @@ class MatrixFederationHttpClient(object):
self.hs = hs
self.signing_key = hs.config.signing_key[0]
self.server_name = hs.hostname
reactor = hs.get_reactor()
pool = HTTPConnectionPool(reactor)
pool.retryAutomatically = False
pool.maxPersistentPerHost = 5
pool.cachedConnectionTimeout = 2 * 60
self.agent = Agent.usingEndpointFactory(
@@ -100,6 +103,7 @@ class MatrixFederationHttpClient(object):
self._store = hs.get_datastore()
self.version_string = hs.version_string.encode('ascii')
self._next_id = 1
self.default_timeout = 60
def _create_url(self, destination, path_bytes, param_bytes, query_bytes):
return urllib.parse.urlunparse(
@@ -143,6 +147,11 @@ class MatrixFederationHttpClient(object):
(May also fail with plenty of other Exceptions for things like DNS
failures, connection failures, SSL failures.)
"""
if timeout:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
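# Editor's note (illustration): `timeout` is in milliseconds, while the
# Twisted timeout APIs below take seconds, e.g. timeout=30000 gives
# _sec_timeout = 30.0.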
if (
self.hs.config.federation_domain_whitelist is not None and
destination not in self.hs.config.federation_domain_whitelist
@@ -177,11 +186,6 @@ class MatrixFederationHttpClient(object):
txn_id = "%s-O-%s" % (method, self._next_id)
self._next_id = (self._next_id + 1) % (MAXINT - 1)
outbound_logger.info(
"{%s} [%s] Sending request: %s %s",
txn_id, destination, method, url
)
# XXX: Would be much nicer to retry only at the transaction-layer
# (once we have reliable transactions in place)
if long_retries:
@@ -194,85 +198,108 @@ class MatrixFederationHttpClient(object):
).decode('ascii')
log_result = None
try:
while True:
try:
if json_callback:
json = json_callback()
while True:
try:
if json_callback:
json = json_callback()
if json:
data = encode_canonical_json(json)
headers_dict["Content-Type"] = ["application/json"]
self.sign_request(
destination, method, http_url, headers_dict, json
)
else:
data = None
self.sign_request(destination, method, http_url, headers_dict)
if json:
data = encode_canonical_json(json)
headers_dict["Content-Type"] = ["application/json"]
self.sign_request(
destination, method, http_url, headers_dict, json
)
else:
data = None
self.sign_request(destination, method, http_url, headers_dict)
request_deferred = treq.request(
method,
url,
headers=Headers(headers_dict),
data=data,
agent=self.agent,
)
add_timeout_to_deferred(
request_deferred,
timeout / 1000. if timeout else 60,
self.hs.get_reactor(),
cancelled_to_request_timed_out_error,
)
outbound_logger.info(
"{%s} [%s] Sending request: %s %s",
txn_id, destination, method, url
)
request_deferred = treq.request(
method,
url,
headers=Headers(headers_dict),
data=data,
agent=self.agent,
reactor=self.hs.get_reactor(),
unbuffered=True
)
request_deferred.addTimeout(_sec_timeout, self.hs.get_reactor())
# Sometimes the timeout above doesn't work, so let's hack in yet
# another layer of timeouts, in the vain hope that at some point
# the world will make sense and this really, really, really
# should work.
request_deferred = timeout_no_seriously(
request_deferred,
timeout=_sec_timeout * 2,
reactor=self.hs.get_reactor(),
)
with Measure(self.clock, "outbound_request"):
response = yield make_deferred_yieldable(
request_deferred,
)
log_result = "%d %s" % (response.code, response.phrase,)
break
except Exception as e:
if not retry_on_dns_fail and isinstance(e, DNSLookupError):
logger.warn(
"DNS Lookup failed to %s with %s",
destination,
e
)
log_result = "DNS Lookup failed to %s with %s" % (
destination, e
)
raise
log_result = "%d %s" % (
response.code,
response.phrase.decode('ascii', errors='replace'),
)
break
except Exception as e:
if not retry_on_dns_fail and isinstance(e, DNSLookupError):
logger.warn(
"{%s} Sending request failed to %s: %s %s: %s",
txn_id,
"DNS Lookup failed to %s with %s",
destination,
method,
url,
_flatten_response_never_received(e),
e
)
log_result = "DNS Lookup failed to %s with %s" % (
destination, e
)
raise
logger.warn(
"{%s} Sending request failed to %s: %s %s: %s",
txn_id,
destination,
method,
url,
_flatten_response_never_received(e),
)
log_result = _flatten_response_never_received(e)
if retries_left and not timeout:
if long_retries:
delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
delay = min(delay, 60)
delay *= random.uniform(0.8, 1.4)
else:
delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
delay = min(delay, 2)
delay *= random.uniform(0.8, 1.4)
logger.debug(
"{%s} Waiting %s before sending to %s...",
txn_id,
delay,
destination
)
log_result = _flatten_response_never_received(e)
if retries_left and not timeout:
if long_retries:
delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
delay = min(delay, 60)
delay *= random.uniform(0.8, 1.4)
else:
delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
delay = min(delay, 2)
delay *= random.uniform(0.8, 1.4)
yield self.clock.sleep(delay)
retries_left -= 1
else:
raise
finally:
outbound_logger.info(
"{%s} [%s] Result: %s",
txn_id,
destination,
log_result,
)
yield self.clock.sleep(delay)
retries_left -= 1
else:
raise
finally:
outbound_logger.info(
"{%s} [%s] Result: %s",
txn_id,
destination,
log_result,
)
if 200 <= response.code < 300:
pass
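An editor's sketch of the retry backoff above, pulled out as a standalone generator (MAX_LONG_RETRIES and MAX_SHORT_RETRIES are module constants not shown in this hunk; 10 and 3 are assumed values here):

import random

def backoff_schedule(long_retries, max_long=10, max_short=3):
    # Mirrors the delay computation above; the constants are assumptions.
    retries_left = max_long if long_retries else max_short
    while retries_left:
        if long_retries:
            delay = min(4 ** (max_long + 1 - retries_left), 60)
        else:
            delay = min(0.5 * 2 ** (max_short - retries_left), 2)
        # Jitter to avoid thundering-herd retries.
        yield delay * random.uniform(0.8, 1.4)
        retries_left -= 1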
@@ -280,7 +307,9 @@ class MatrixFederationHttpClient(object):
# :'(
# Update transactions table?
with logcontext.PreserveLoggingContext():
body = yield treq.content(response)
d = treq.content(response)
d.addTimeout(_sec_timeout, self.hs.get_reactor())
body = yield make_deferred_yieldable(d)
raise HttpResponseException(
response.code, response.phrase, body
)
@@ -394,7 +423,9 @@ class MatrixFederationHttpClient(object):
check_content_type_is_json(response.headers)
with logcontext.PreserveLoggingContext():
body = yield treq.json_content(response)
d = treq.json_content(response)
d.addTimeout(self.default_timeout, self.hs.get_reactor())
body = yield make_deferred_yieldable(d)
defer.returnValue(body)
@defer.inlineCallbacks
@@ -444,7 +475,14 @@ class MatrixFederationHttpClient(object):
check_content_type_is_json(response.headers)
with logcontext.PreserveLoggingContext():
body = yield treq.json_content(response)
d = treq.json_content(response)
if timeout:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
d.addTimeout(_sec_timeout, self.hs.get_reactor())
body = yield make_deferred_yieldable(d)
defer.returnValue(body)
@@ -496,7 +534,9 @@ class MatrixFederationHttpClient(object):
check_content_type_is_json(response.headers)
with logcontext.PreserveLoggingContext():
body = yield treq.json_content(response)
d = treq.json_content(response)
d.addTimeout(self.default_timeout, self.hs.get_reactor())
body = yield make_deferred_yieldable(d)
defer.returnValue(body)
@@ -543,7 +583,9 @@ class MatrixFederationHttpClient(object):
check_content_type_is_json(response.headers)
with logcontext.PreserveLoggingContext():
body = yield treq.json_content(response)
d = treq.json_content(response)
d.addTimeout(self.default_timeout, self.hs.get_reactor())
body = yield make_deferred_yieldable(d)
defer.returnValue(body)
@@ -585,9 +627,9 @@ class MatrixFederationHttpClient(object):
try:
with logcontext.PreserveLoggingContext():
length = yield _readBodyToFile(
response, output_stream, max_size
)
d = _readBodyToFile(response, output_stream, max_size)
d.addTimeout(self.default_timeout, self.hs.get_reactor())
length = yield make_deferred_yieldable(d)
except Exception:
logger.exception("Failed to download body")
raise
+11 -11
@@ -162,7 +162,7 @@ class RequestMetrics(object):
with _in_flight_requests_lock:
_in_flight_requests.add(self)
def stop(self, time_sec, request):
def stop(self, time_sec, response_code, sent_bytes):
with _in_flight_requests_lock:
_in_flight_requests.discard(self)
@@ -179,35 +179,35 @@ class RequestMetrics(object):
)
return
response_code = str(request.code)
response_code = str(response_code)
outgoing_responses_counter.labels(request.method, response_code).inc()
outgoing_responses_counter.labels(self.method, response_code).inc()
response_count.labels(request.method, self.name, tag).inc()
response_count.labels(self.method, self.name, tag).inc()
response_timer.labels(request.method, self.name, tag, response_code).observe(
response_timer.labels(self.method, self.name, tag, response_code).observe(
time_sec - self.start
)
resource_usage = context.get_resource_usage()
response_ru_utime.labels(request.method, self.name, tag).inc(
response_ru_utime.labels(self.method, self.name, tag).inc(
resource_usage.ru_utime,
)
response_ru_stime.labels(request.method, self.name, tag).inc(
response_ru_stime.labels(self.method, self.name, tag).inc(
resource_usage.ru_stime,
)
response_db_txn_count.labels(request.method, self.name, tag).inc(
response_db_txn_count.labels(self.method, self.name, tag).inc(
resource_usage.db_txn_count
)
response_db_txn_duration.labels(request.method, self.name, tag).inc(
response_db_txn_duration.labels(self.method, self.name, tag).inc(
resource_usage.db_txn_duration_sec
)
response_db_sched_duration.labels(request.method, self.name, tag).inc(
response_db_sched_duration.labels(self.method, self.name, tag).inc(
resource_usage.db_sched_duration_sec
)
response_size.labels(request.method, self.name, tag).inc(request.sentLength)
response_size.labels(self.method, self.name, tag).inc(sent_bytes)
# We always call this at the end to ensure that we update the metrics
# regardless of whether a call to /metrics while the request was in
+9 -6
@@ -82,10 +82,13 @@ class SynapseRequest(Request):
)
def get_request_id(self):
return "%s-%i" % (self.method, self.request_seq)
return "%s-%i" % (self.method.decode('ascii'), self.request_seq)
def get_redacted_uri(self):
return redact_uri(self.uri)
uri = self.uri
if isinstance(uri, bytes):
uri = uri.decode('ascii')
return redact_uri(uri)
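# Editor's note (illustration): under Python 3, twisted.web's Request.uri
# is bytes (e.g. b'/sync?access_token=...'), hence the decode before
# handing it to the now str-based redact_uri.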
def get_user_agent(self):
return self.requestHeaders.getRawHeaders(b"User-Agent", [None])[-1]
@@ -116,7 +119,7 @@ class SynapseRequest(Request):
# dispatching to the handler, so that the handler
# can update the servlet name in the request
# metrics
requests_counter.labels(self.method,
requests_counter.labels(self.method.decode('ascii'),
self.request_metrics.name).inc()
@contextlib.contextmanager
@@ -277,15 +280,15 @@ class SynapseRequest(Request):
int(usage.db_txn_count),
self.sentLength,
code,
self.method,
self.method.decode('ascii'),
self.get_redacted_uri(),
self.clientproto,
self.clientproto.decode('ascii', errors='replace'),
user_agent,
usage.evt_db_fetch_count,
)
try:
self.request_metrics.stop(self.finish_time, self)
self.request_metrics.stop(self.finish_time, self.code, self.sentLength)
except Exception as e:
logger.warn("Failed to stop metrics: %r", e)
+107 -1
@@ -18,8 +18,11 @@ import gc
import logging
import os
import platform
import threading
import time
import six
import attr
from prometheus_client import Counter, Gauge, Histogram
from prometheus_client.core import REGISTRY, GaugeMetricFamily
@@ -68,7 +71,7 @@ class LaterGauge(object):
return
if isinstance(calls, dict):
for k, v in calls.items():
for k, v in six.iteritems(calls):
g.add_metric(k, v)
else:
g.add_metric([], calls)
@@ -87,6 +90,109 @@ class LaterGauge(object):
all_gauges[self.name] = self
class InFlightGauge(object):
"""Tracks number of things (e.g. requests, Measure blocks, etc) in flight
at any given time.
Each InFlightGauge will create a metric called `<name>_total` that counts
the number of in flight blocks, as well as a metrics for each item in the
given `sub_metrics` as `<name>_<sub_metric>` which will get updated by the
callbacks.
Args:
name (str)
desc (str)
labels (list[str])
sub_metrics (list[str]): A list of sub metrics that the callbacks
will update.
"""
def __init__(self, name, desc, labels, sub_metrics):
self.name = name
self.desc = desc
self.labels = labels
self.sub_metrics = sub_metrics
# Create a class which has the sub_metrics values as attributes, which
# default to 0 on initialization. Used to pass to registered callbacks.
self._metrics_class = attr.make_class(
"_MetricsEntry",
attrs={x: attr.ib(0) for x in sub_metrics},
slots=True,
)
# Counts number of in flight blocks for a given set of label values
self._registrations = {}
# Protects access to _registrations
self._lock = threading.Lock()
self._register_with_collector()
def register(self, key, callback):
"""Registers that we've entered a new block with labels `key`.
`callback` gets called each time the metrics are collected. The same
value must also be given to `unregister`.
`callback` gets called with an object that has an attribute per
sub_metric, which should be updated with the necessary values. Note that
the metrics object is shared between all callbacks registered with the
same key.
Note that `callback` may be called on a separate thread.
"""
with self._lock:
self._registrations.setdefault(key, set()).add(callback)
def unregister(self, key, callback):
"""Registers that we've exited a block with labels `key`.
"""
with self._lock:
self._registrations.setdefault(key, set()).discard(callback)
def collect(self):
"""Called by prometheus client when it reads metrics.
Note: may be called by a separate thread.
"""
in_flight = GaugeMetricFamily(self.name + "_total", self.desc, labels=self.labels)
metrics_by_key = {}
# We copy so that we don't mutate the list while iterating
with self._lock:
keys = list(self._registrations)
for key in keys:
with self._lock:
callbacks = set(self._registrations[key])
in_flight.add_metric(key, len(callbacks))
metrics = self._metrics_class()
metrics_by_key[key] = metrics
for callback in callbacks:
callback(metrics)
yield in_flight
for name in self.sub_metrics:
gauge = GaugeMetricFamily("_".join([self.name, name]), "", labels=self.labels)
for key, metrics in six.iteritems(metrics_by_key):
gauge.add_metric(key, getattr(metrics, name))
yield gauge
def _register_with_collector(self):
if self.name in all_gauges.keys():
logger.warning("%s already registered, reregistering" % (self.name,))
REGISTRY.unregister(all_gauges.pop(self.name))
REGISTRY.register(self)
all_gauges[self.name] = self
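# Editor's sketch (not part of the diff): how a caller might use
# InFlightGauge; the metric and label names here are made up.
in_flight_requests = InFlightGauge(
    "example_in_flight_requests", "Requests currently in flight",
    labels=["method"], sub_metrics=["db_txn_count"],
)

def _on_collect(metrics):
    # Called at scrape time with the shared per-key metrics object;
    # report current values onto its sub_metric attributes.
    metrics.db_txn_count += 3

key = ("GET",)
in_flight_requests.register(key, _on_collect)
# ... do the work being tracked ...
in_flight_requests.unregister(key, _on_collect)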
#
# Detailed CPU metrics
#
+6 -1
@@ -15,6 +15,8 @@
# limitations under the License.
import logging
import six
from prometheus_client import Counter
from twisted.internet import defer
@@ -26,6 +28,9 @@ from synapse.util.metrics import Measure
from . import push_rule_evaluator, push_tools
if six.PY3:
long = int
logger = logging.getLogger(__name__)
http_push_processed_counter = Counter("synapse_http_httppusher_http_pushes_processed", "")
@@ -96,7 +101,7 @@ class HttpPusher(object):
@defer.inlineCallbacks
def on_new_notifications(self, min_stream_ordering, max_stream_ordering):
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering)
self.max_stream_ordering = max(max_stream_ordering, self.max_stream_ordering or 0)
yield self._process()
@defer.inlineCallbacks
+4 -3
@@ -17,10 +17,11 @@ import email.mime.multipart
import email.utils
import logging
import time
import urllib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from six.moves import urllib
import bleach
import jinja2
@@ -474,7 +475,7 @@ class Mailer(object):
# XXX: make r0 once API is stable
return "%s_matrix/client/unstable/pushers/remove?%s" % (
self.hs.config.public_baseurl,
urllib.urlencode(params),
urllib.parse.urlencode(params),
)
@@ -561,7 +562,7 @@ def _create_mxc_to_http_filter(config):
return "%s_matrix/media/v1/thumbnail/%s?%s%s" % (
config.public_baseurl,
serverAndMediaId,
urllib.urlencode(params),
urllib.parse.urlencode(params),
fragment or "",
)
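A quick illustration of why the import changed: six.moves.urllib exposes urlencode under urllib.parse on both Python 2 and 3 (the parameter names below are made up):

from six.moves import urllib

params = {"access_token": "abc", "pushkey": "p1"}
print(urllib.parse.urlencode(params))
# access_token=abc&pushkey=p1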
+16 -7
@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import six
from synapse.storage import DataStore
from synapse.storage.end_to_end_keys import EndToEndKeyStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
@@ -21,6 +23,13 @@ from ._base import BaseSlavedStore
from ._slaved_id_tracker import SlavedIdTracker
def __func__(inp):
if six.PY3:
return inp
else:
return inp.__func__
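# Editor's note (illustration): under Python 2, looking a function up on a
# class yields an unbound method, so reusing it on another class requires
# .__func__; under Python 3 it is already a plain function:
#
#     class A(object):
#         def f(self):
#             return 42
#
#     A.f           # py2: <unbound method A.f>; py3: <function A.f>
#     A.f.__func__  # py2: the underlying function; py3: AttributeError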
class SlavedDeviceStore(BaseSlavedStore):
def __init__(self, db_conn, hs):
super(SlavedDeviceStore, self).__init__(db_conn, hs)
@@ -38,14 +47,14 @@ class SlavedDeviceStore(BaseSlavedStore):
"DeviceListFederationStreamChangeCache", device_list_max,
)
get_device_stream_token = DataStore.get_device_stream_token.__func__
get_user_whose_devices_changed = DataStore.get_user_whose_devices_changed.__func__
get_devices_by_remote = DataStore.get_devices_by_remote.__func__
_get_devices_by_remote_txn = DataStore._get_devices_by_remote_txn.__func__
_get_e2e_device_keys_txn = DataStore._get_e2e_device_keys_txn.__func__
mark_as_sent_devices_by_remote = DataStore.mark_as_sent_devices_by_remote.__func__
get_device_stream_token = __func__(DataStore.get_device_stream_token)
get_user_whose_devices_changed = __func__(DataStore.get_user_whose_devices_changed)
get_devices_by_remote = __func__(DataStore.get_devices_by_remote)
_get_devices_by_remote_txn = __func__(DataStore._get_devices_by_remote_txn)
_get_e2e_device_keys_txn = __func__(DataStore._get_e2e_device_keys_txn)
mark_as_sent_devices_by_remote = __func__(DataStore.mark_as_sent_devices_by_remote)
_mark_as_sent_devices_by_remote_txn = (
DataStore._mark_as_sent_devices_by_remote_txn.__func__
__func__(DataStore._mark_as_sent_devices_by_remote_txn)
)
count_e2e_one_time_keys = EndToEndKeyStore.__dict__["count_e2e_one_time_keys"]
+1 -1
@@ -196,7 +196,7 @@ class Stream(object):
)
if len(rows) >= MAX_EVENTS_BEHIND:
raise Exception("stream %s has fallen behined" % (self.NAME))
raise Exception("stream %s has fallen behind" % (self.NAME))
else:
rows = yield self.update_function(
from_token, current_token,
+6 -3
@@ -101,7 +101,7 @@ class UserRegisterServlet(ClientV1RestServlet):
nonce = self.hs.get_secrets().token_hex(64)
self.nonces[nonce] = int(self.reactor.seconds())
return (200, {"nonce": nonce.encode('ascii')})
return (200, {"nonce": nonce})
@defer.inlineCallbacks
def on_POST(self, request):
@@ -164,7 +164,7 @@ class UserRegisterServlet(ClientV1RestServlet):
key=self.hs.config.registration_shared_secret.encode(),
digestmod=hashlib.sha1,
)
want_mac.update(nonce)
want_mac.update(nonce.encode('utf8'))
want_mac.update(b"\x00")
want_mac.update(username)
want_mac.update(b"\x00")
@@ -173,7 +173,10 @@ class UserRegisterServlet(ClientV1RestServlet):
want_mac.update(b"admin" if admin else b"notadmin")
want_mac = want_mac.hexdigest()
if not hmac.compare_digest(want_mac, got_mac.encode('ascii')):
if not hmac.compare_digest(
want_mac.encode('ascii'),
got_mac.encode('ascii')
):
raise SynapseError(403, "HMAC incorrect")
# Reuse the parts of RegisterRestServlet to reduce code duplication
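For context, an editor's sketch of computing the matching MAC client-side, mirroring the server-side construction above (the password step falls between the two hunks and is assumed to follow the same nonce\0user\0password\0admin layout used by synapse's shared-secret registration):

import hashlib
import hmac

def registration_mac(shared_secret, nonce, username, password, admin=False):
    mac = hmac.new(key=shared_secret.encode('utf8'), digestmod=hashlib.sha1)
    for part in (nonce, username, password):
        mac.update(part.encode('utf8'))
        mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    return mac.hexdigest()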
+6 -6
@@ -45,20 +45,20 @@ class EventStreamRestServlet(ClientV1RestServlet):
is_guest = requester.is_guest
room_id = None
if is_guest:
if "room_id" not in request.args:
if b"room_id" not in request.args:
raise SynapseError(400, "Guest users must specify room_id param")
if "room_id" in request.args:
room_id = request.args["room_id"][0]
if b"room_id" in request.args:
room_id = request.args[b"room_id"][0].decode('ascii')
pagin_config = PaginationConfig.from_request(request)
timeout = EventStreamRestServlet.DEFAULT_LONGPOLL_TIME_MS
if "timeout" in request.args:
if b"timeout" in request.args:
try:
timeout = int(request.args["timeout"][0])
timeout = int(request.args[b"timeout"][0])
except ValueError:
raise SynapseError(400, "timeout must be in milliseconds.")
as_client_event = "raw" not in request.args
as_client_event = b"raw" not in request.args
chunk = yield self.event_stream_handler.get_stream(
requester.user.to_string(),

Some files were not shown because too many files have changed in this diff.