Synapse 1.15.2 (2020-07-02)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.

Security advisory
-----------------
* A malicious homeserver could force Synapse to reset the state in a room to a
  small subset of the correct state. This affects all Synapse deployments which
  federate with untrusted servers. ([96e9afe6](96e9afe625))
* HTML pages served via Synapse were vulnerable to clickjacking attacks. This
  predominantly affects homeservers with single-sign-on enabled, but all server
  administrators are encouraged to upgrade. ([ea26e9a9](ea26e9a98b))
  This was reported by [Quentin Gliech](https://sandhose.fr/).
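
For background, the standard clickjacking mitigation is to forbid framing of
the affected pages via response headers; a minimal sketch in Twisted of the
general technique (not necessarily the exact headers the fix sets):

```python
from twisted.web.server import Request

def set_anti_clickjacking_headers(request: Request) -> None:
    """Illustrative helper: standard headers that stop a page being framed."""
    # Legacy header understood by older browsers.
    request.setHeader(b"X-Frame-Options", b"DENY")
    # Modern equivalent expressed via Content-Security-Policy.
    request.setHeader(b"Content-Security-Policy", b"frame-ancestors 'none'")
```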
- Remove the requirement for a specific version of Python
- Move the dep comment to a separate line; Tox 3.7.0 doesn't like trailing ones

Signed-off-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>

State res v2 across large data sets can be very CPU intensive, and if
all the relevant events are in the cache the algorithm will run from
start to finish within a single reactor tick. This can result in
blocking the reactor tick for several seconds, which can have major
repercussions on other requests.

To fix this we simply add the occasional `sleep(0)` during iterations to
yield execution until the next reactor tick. The aim is to only do this
for large data sets so that we don't impact otherwise quick resolutions.
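
A minimal sketch of the yielding pattern (shown with `asyncio` for brevity,
although Synapse itself runs on Twisted; the batch size and `expensive_step`
helper are illustrative):

```python
import asyncio

# Illustrative threshold: only yield on large data sets so that otherwise
# quick resolutions are unaffected.
_YIELD_AFTER_ITERATIONS = 100

def expensive_step(event):
    # Stand-in for the real per-event CPU work done by state res v2.
    return event

async def resolve_events(events):
    results = []
    for i, event in enumerate(events, start=1):
        results.append(expensive_step(event))
        if i % _YIELD_AFTER_ITERATIONS == 0:
            # Yield control until the next tick of the event loop so other
            # requests are not starved while resolution runs.
            await asyncio.sleep(0)
    return results
```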
HTTP requires the response to contain a `Content-Length` header unless chunked
encoding is being used. The Prometheus metrics endpoint did not set this header,
causing software such as prometheus-proxy to be unable to scrape Synapse for
metrics.
Signed-off-by: Christian Svensson <blue@cmd.nu>
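
A sketch of the idea behind the fix, as an illustrative Twisted resource (this
is not Synapse's actual code):

```python
from prometheus_client import CONTENT_TYPE_LATEST, REGISTRY, generate_latest
from twisted.web.resource import Resource

class MetricsResource(Resource):
    """Serves the Prometheus exposition format with an explicit length."""

    isLeaf = True

    def render_GET(self, request):
        body = generate_latest(REGISTRY)
        request.setHeader(b"Content-Type", CONTENT_TYPE_LATEST.encode("ascii"))
        # With an explicit Content-Length the server need not fall back to
        # chunked transfer encoding, which some scrapers cannot consume.
        request.setHeader(b"Content-Length", b"%d" % (len(body),))
        return body
```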
Older versions of the `parameterized` package have no `parameterized_class` decorator. This decorator is used in the tests.
Signed-off-by: Oleg Girko <ol@infoserver.lv>
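
For illustration, here is the sort of use the tests make of that decorator
(this example is illustrative, not taken from Synapse's test suite):

```python
import unittest

from parameterized import parameterized_class

# `parameterized_class` generates one copy of the test class per parameter
# set; older releases of `parameterized` do not ship it.
@parameterized_class(("value", "expected"), [(1, 2), (40, 41)])
class AddOneTestCase(unittest.TestCase):
    def test_add_one(self):
        self.assertEqual(self.value + 1, self.expected)
```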
* Always return an `unread_count` in `get_unread_event_push_actions_by_room_for_user`.
* Don't always expect `unread_count` to be there, so we don't take out sync
  entirely if something goes wrong (see the sketch below).
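
A hedged sketch of the defensive read (the helper name is illustrative; the
`unread_count` field is the one named above):

```python
def extract_unread_count(push_actions: dict) -> int:
    # Degrade to zero if the counter is missing or malformed, rather than
    # letting one bad room entry take out the whole /sync response.
    count = push_actions.get("unread_count")
    return count if isinstance(count, int) else 0
```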
This requires a new config option to specify which media repo should be
responsible for running background jobs, e.g. clearing out expired URL
preview caches.
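
A sketch of what such a `homeserver.yaml` setting looks like (the option name
below is an assumption based on the Synapse worker documentation of this era;
verify it against your version's docs):

```yaml
# Assumed option name; check your Synapse version's documentation.
# Only the named media repository instance runs background jobs such as
# clearing out expired URL preview caches.
media_instance_running_background_jobs: "media-repo-1"
```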
The aim here is to make it easier to reason about when streams are limited and
when they're not, by moving the logic into the database functions themselves.
This should mean we can kill off the `db_query_to_update_function` function.
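
A minimal sketch of the shape this gives each database function (the class,
method, row format, and `_fetch_update_rows` helper are all illustrative, not
Synapse's exact API):

```python
from typing import Any, List, Tuple

class ExampleStreamStore:
    async def get_updates_since(
        self, last_id: int, current_id: int, limit: int
    ) -> Tuple[List[Tuple[int, Any]], int, bool]:
        # Fetch one row beyond the limit so the function itself can report
        # whether it was truncated, instead of a wrapper inferring it.
        rows = await self._fetch_update_rows(last_id, current_id, limit + 1)
        limited = len(rows) > limit
        if limited:
            rows = rows[:limit]
            # Report how far we actually got; callers resume from this token.
            upto_id = rows[-1][0]
        else:
            upto_id = current_id
        return rows, upto_id, limited

    async def _fetch_update_rows(self, last_id, current_id, limit):
        # Stand-in for the real query: rows with stream IDs in
        # (last_id, current_id], ordered ascending, at most `limit` of them.
        raise NotImplementedError
```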