
Compare commits


13 Commits

Author SHA1 Message Date
Andrew Morgan
bdb4c20dc1 Clarify 1.32.0/1 changelog and upgrade notes 2021-04-21 14:44:04 +01:00
Andrew Morgan
acb8c81041 Add regression notes to CHANGES.md; fix link in 1.32.0 changelog 2021-04-21 14:24:16 +01:00
Andrew Morgan
98a1b84631 Add link to fixing prometheus to 1.32.0 upgrade notes; 1.32.1 has a fix 2021-04-21 14:19:11 +01:00
Andrew Morgan
026a66f2b3 Fix typo in link to regression in 1.32.0 upgrade notes 2021-04-21 14:04:44 +01:00
Andrew Morgan
a745531c10 1.32.1 2021-04-21 14:01:12 +01:00
Andrew Morgan
30c94862b4 Mention Prometheus metrics regression in v1.32.0 2021-04-21 14:00:31 +01:00
Richard van der Hoff
5d281c10dd Stop BackgroundProcessLoggingContext making new prometheus timeseries (#9854)
This undoes part of b076bc276e.
2021-04-21 10:03:31 +01:00
Andrew Morgan
05fa06834d Further tweaking on gpg signing key notice 2021-04-20 15:52:06 +01:00
Andrew Morgan
913f790bb2 Add note about expired Debian gpg signing keys to CHANGES.md 2021-04-20 15:33:56 +01:00
Andrew Morgan
438a8594cb Update v1.32.0 changelog. It's m.login.application_service, not plural 2021-04-20 14:47:17 +01:00
Andrew Morgan
e031c7e0cc 1.32.0 2021-04-20 14:31:27 +01:00
Andrew Morgan
0a88ec0a87 Add Application Service registration type requirement + py35, pg95 deprecation notices to v1.32.0 upgrade notes (#9849)
Fixes https://github.com/matrix-org/synapse/issues/9846.

Adds important removal information from the top of https://github.com/matrix-org/synapse/releases/tag/v1.32.0rc1 into UPGRADE.rst.
2021-04-20 14:19:35 +01:00
Patrick Cloke
b076bc276e Always use the name as the log ID. (#9829)
As far as I can tell our logging contexts are meant to log the request ID, or sometimes the request ID followed by a suffix (this is generally stored in the name field of LoggingContext). There's also code to log the name@memory location, but I'm not sure this is ever used.

This simplifies the code paths to require every logging context to have a name and use that in logging. For sub-contexts (created via nested_logging_contexts, defer_to_threadpool, Measure) we use the current context's str (which becomes their name or the string "sentinel") and then potentially modify that (e.g. add a suffix).
2021-04-20 14:19:00 +01:00
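To illustrate the naming scheme this commit describes, here is a minimal sketch (not part of the PR; the names are invented) of a request-named context and a nested sub-context under the new behaviour:

```python
# Minimal sketch of the new naming behaviour; "GET-1234" and "persist" are
# invented names, not values Synapse itself uses.
from synapse.logging.context import LoggingContext, nested_logging_context

with LoggingContext("GET-1234"):             # the request ID is now the context's name
    with nested_logging_context("persist") as ctx:
        print(str(ctx))                      # -> "GET-1234-persist"
```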
13 changed files with 117 additions and 139 deletions

View File

@@ -1,11 +1,48 @@
Synapse 1.32.0rc1 (2021-04-13)
==============================
Synapse 1.32.1 (2021-04-21)
===========================
This release fixes [a regression](https://github.com/matrix-org/synapse/issues/9853)
in Synapse 1.32.0 that caused connected Prometheus instances to become unstable. If you
ran Synapse 1.32.0 with Prometheus metrics, first upgrade to Synapse 1.32.1 and follow
[these instructions](https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183)
to clean up any excess writeahead logs.
Bugfixes
--------
- Fix a regression in Synapse 1.32.0 which caused Synapse to report large numbers of Prometheus time series, potentially overwhelming Prometheus instances. ([\#9854](https://github.com/matrix-org/synapse/issues/9854))
Synapse 1.32.0 (2021-04-20)
===========================
**Note:** This release introduces [a regression](https://github.com/matrix-org/synapse/issues/9853)
that can overwhelm connected Prometheus instances. This issue was not present in
1.32.0rc1, and is fixed in 1.32.1. See the changelog for 1.32.1 above for more information.
**Note:** This release requires Python 3.6+ and Postgres 9.6+ or SQLite 3.22+.
This release removes the deprecated `GET /_synapse/admin/v1/users/<user_id>` admin API. Please use the [v2 API](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/user_admin_api.rst#query-user-account) instead, which has improved capabilities.
This release requires Application Services to use type `m.login.application_services` when registering users via the `/_matrix/client/r0/register` endpoint to comply with the spec. Please ensure your Application Services are up to date.
This release requires Application Services to use type `m.login.application_service` when registering users via the `/_matrix/client/r0/register` endpoint to comply with the spec. Please ensure your Application Services are up to date.
If you are using the `packages.matrix.org` Debian repository for Synapse packages,
note that we have recently updated the expiry date on the gpg signing key. If you see an
error similar to `The following signatures were invalid: EXPKEYSIG F473DD4473365DE1`, you
will need to get a fresh copy of the keys. You can do so with:
```sh
sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
```
Bugfixes
--------
- Fix the log lines of nested logging contexts. This was broken in 1.32.0rc1. ([\#9829](https://github.com/matrix-org/synapse/issues/9829))
Synapse 1.32.0rc1 (2021-04-13)
==============================
Features
--------

View File

@@ -88,6 +88,26 @@ for example:
Upgrading to v1.32.0
====================
Regression causing connected Prometheus instances to become overwhelmed
-----------------------------------------------------------------------
This release introduces `a regression <https://github.com/matrix-org/synapse/issues/9853>`_
that can overwhelm connected Prometheus instances. This issue is not present in
Synapse v1.32.0rc1, and is fixed in Synapse v1.32.1.
If you have been affected, please first upgrade to a more recent Synapse version.
You then may need to remove excess writeahead logs in order for Prometheus to recover.
Instructions for doing so are provided
`here <https://github.com/matrix-org/synapse/pull/9854#issuecomment-823472183>`_.
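As a rough sketch, the clean-up amounts to stopping Prometheus, removing the oversized write-ahead log, and starting it again; the data directory below is an assumption (Debian/Ubuntu packages use ``/var/lib/prometheus/metrics2``), so follow the linked instructions for your own setup.

```sh
# Sketch only: stop Prometheus, delete the bloated write-ahead log, restart.
# The data directory is an assumption; adjust it for your installation.
sudo systemctl stop prometheus
sudo rm -r /var/lib/prometheus/metrics2/wal
sudo systemctl start prometheus
```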
Dropping support for old Python, Postgres and SQLite versions
-------------------------------------------------------------
In line with our `deprecation policy <https://github.com/matrix-org/synapse/blob/release-v1.32.0/docs/deprecation_policy.md>`_,
we've dropped support for Python 3.5 and PostgreSQL 9.5, as they are no longer supported upstream.
This release of Synapse requires Python 3.6+ and PostgreSQL 9.6+ or SQLite 3.22+.
Removal of old List Accounts Admin API
--------------------------------------
@@ -98,6 +118,16 @@ has been available since Synapse 1.7.0 (2019-12-13), and is accessible under ``G
The deprecation of the old endpoint was announced with Synapse 1.28.0 (released on 2021-02-25).
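For example, a listing via the v2 API looks roughly like the following; the homeserver URL and admin access token are placeholders.

```sh
# Rough example of the replacement v2 List Accounts API (placeholders throughout).
curl --header "Authorization: Bearer <admin_access_token>" \
    'https://homeserver.example.com/_synapse/admin/v2/users?from=0&limit=10'
```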
Application Services must use type ``m.login.application_service`` when registering users
-----------------------------------------------------------------------------------------
In compliance with the
`Application Service spec <https://matrix.org/docs/spec/application_service/r0.1.2#server-admin-style-permissions>`_,
Application Services are now required to use the ``m.login.application_service`` type when registering users via the
``/_matrix/client/r0/register`` endpoint. This behaviour was deprecated in Synapse v1.30.0.
Please ensure your Application Services are up to date.
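For reference, a spec-compliant registration call from an Application Service looks roughly like this; the homeserver URL, ``as_token`` and username are placeholders.

```sh
# Rough example of an Application Service registering a user with the required
# type; the token, username and homeserver URL are placeholders.
curl --request POST \
    --header "Authorization: Bearer <as_token>" \
    --header "Content-Type: application/json" \
    --data '{"type": "m.login.application_service", "username": "_example_bridge_user"}' \
    'https://homeserver.example.com/_matrix/client/r0/register'
```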
Upgrading to v1.29.0
====================

View File

@@ -1 +0,0 @@
Add hardened systemd files as proposed in [#9760](https://github.com/matrix-org/synapse/issues/9760) and add them to `contrib/`. Change the docs to reflect the presence of these files.

View File

@@ -1,71 +0,0 @@
[Service]
# The following directives give the synapse service R/W access to:
# - /run/matrix-synapse
# - /var/lib/matrix-synapse
# - /var/log/matrix-synapse
RuntimeDirectory=matrix-synapse
StateDirectory=matrix-synapse
LogsDirectory=matrix-synapse
######################
## Security Sandbox ##
######################
# Make sure that the service has its own unshared tmpfs at /tmp and that it
# cannot see or change any real devices
PrivateTmp=true
PrivateDevices=true
# We give no capabilities to a service by default
CapabilityBoundingSet=
AmbientCapabilities=
# Protect the following from modification:
# - The entire filesystem
# - sysctl settings and loaded kernel modules
# - No modifications allowed to Control Groups
# - Hostname
# - System Clock
ProtectSystem=strict
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
ProtectClock=true
ProtectHostname=true
# Prevent access to the following:
# - /home directory
# - Kernel logs
ProtectHome=tmpfs
ProtectKernelLogs=true
# Make sure that the process can only see PIDs and process details of itself,
# and the second option disables seeing details of things like system load and
# I/O etc
ProtectProc=invisible
ProcSubset=pid
# While not needed, we set these options explicitly
# - This process has been given access to the host network
# - It can also communicate with any IP Address
PrivateNetwork=false
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
IPAddressAllow=any
# Restrict system calls to a sane bunch
SystemCallArchitectures=native
SystemCallFilter=@system-service
SystemCallFilter=~@privileged @resources @obsolete
# Misc restrictions
# - Since the process is a python process it needs to be able to write and
# execute memory regions, so we set MemoryDenyWriteExecute to false
RestrictSUIDSGID=true
RemoveIPC=true
NoNewPrivileges=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
PrivateUsers=true
MemoryDenyWriteExecute=false

debian/changelog
View File

@@ -1,8 +1,18 @@
matrix-synapse-py3 (1.31.0+nmu1) UNRELEASED; urgency=medium
matrix-synapse-py3 (1.32.1) stable; urgency=medium
* New synapse release 1.32.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 21 Apr 2021 14:00:55 +0100
matrix-synapse-py3 (1.32.0) stable; urgency=medium
[ Dan Callahan ]
* Skip tests when DEB_BUILD_OPTIONS contains "nocheck".
-- Dan Callahan <danc@element.io> Mon, 12 Apr 2021 13:07:36 +0000
[ Synapse Packaging team ]
* New synapse release 1.32.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Apr 2021 14:28:39 +0100
matrix-synapse-py3 (1.31.0) stable; urgency=medium

View File

@@ -65,33 +65,3 @@ systemctl restart matrix-synapse-worker@federation_reader.service
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.target
```
## Hardening
**Optional:** If further hardening is desired, the file
`override-hardened.conf` may be copied from
`contrib/systemd/override-hardened.conf` in this repository to the location
`/etc/systemd/system/matrix-synapse.service.d/override-hardened.conf` (the
directory may have to be created). It enables certain sandboxing features in
systemd to further secure the synapse service. You may read the comments to
understand what the override file is doing. The same file will need to be copied
to
`/etc/systemd/system/matrix-synapse-worker@.service.d/override-hardened-worker.conf`
(this directory may also have to be created) in order to apply the same
hardening options to any worker processes.
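Roughly, those copy steps amount to the following (a sketch; adjust the paths to your checkout and unit layout):

```sh
# Sketch of the copy steps described above; run from a checkout of this repository.
sudo mkdir -p /etc/systemd/system/matrix-synapse.service.d
sudo cp contrib/systemd/override-hardened.conf \
    /etc/systemd/system/matrix-synapse.service.d/override-hardened.conf

sudo mkdir -p /etc/systemd/system/matrix-synapse-worker@.service.d
sudo cp contrib/systemd/override-hardened.conf \
    /etc/systemd/system/matrix-synapse-worker@.service.d/override-hardened-worker.conf
```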
Once these files have been copied to their appropriate locations, simply reload
systemd's manager config files and restart all Synapse services to apply the hardening options. They will automatically
be applied at every restart as long as the override files are present at the
specified locations.
```sh
systemctl daemon-reload
# Restart services
systemctl restart matrix-synapse.target
```
In order to see their effect, you may run `systemd-analyze security
matrix-synapse.service` before and after applying the hardening options to see
the changes being applied at a glance.

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.32.0rc1"
__version__ = "1.32.1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -277,7 +277,7 @@ class LoggingContext:
def __init__(
self,
name: Optional[str] = None,
name: str,
parent_context: "Optional[LoggingContext]" = None,
request: Optional[ContextRequest] = None,
) -> None:
@@ -315,9 +315,7 @@ class LoggingContext:
self.request = request
def __str__(self) -> str:
if self.request:
return self.request.request_id
return "%s@%x" % (self.name, id(self))
return self.name
@classmethod
def current_context(cls) -> LoggingContextOrSentinel:
@@ -694,17 +692,13 @@ def nested_logging_context(suffix: str) -> LoggingContext:
"Starting nested logging context from sentinel context: metrics will be lost"
)
parent_context = None
prefix = ""
request = None
else:
assert isinstance(curr_context, LoggingContext)
parent_context = curr_context
prefix = str(parent_context.name)
request = parent_context.request
prefix = str(curr_context)
return LoggingContext(
prefix + "-" + suffix,
parent_context=parent_context,
request=request,
)
@@ -895,7 +889,7 @@ def defer_to_threadpool(reactor, threadpool, f, *args, **kwargs):
parent_context = curr_context
def g():
with LoggingContext(parent_context=parent_context):
with LoggingContext(str(curr_context), parent_context=parent_context):
return f(*args, **kwargs)
return make_deferred_yieldable(threads.deferToThreadPool(reactor, threadpool, g))

View File

@@ -242,19 +242,24 @@ class BackgroundProcessLoggingContext(LoggingContext):
processes.
"""
__slots__ = ["_id", "_proc"]
__slots__ = ["_proc"]
def __init__(self, name: str, id: Optional[Union[int, str]] = None):
super().__init__(name)
self._id = id
def __init__(self, name: str, instance_id: Optional[Union[int, str]] = None):
"""
Args:
name: The name of the background process. Each distinct `name` gets a
separate prometheus time series.
instance_id: an identifier to add to `name` to distinguish this instance of
the named background process in the logs. If this is `None`, one is
made up based on id(self).
"""
if instance_id is None:
instance_id = id(self)
super().__init__("%s-%s" % (name, instance_id))
self._proc = _BackgroundProcess(name, self)
def __str__(self) -> str:
if self._id is not None:
return "%s-%s" % (self.name, self._id)
return "%s@%x" % (self.name, id(self))
def start(self, rusage: "Optional[resource._RUsage]"):
"""Log context has started running (again)."""

View File

@@ -105,7 +105,13 @@ class Measure:
"start",
]
def __init__(self, clock, name):
def __init__(self, clock, name: str):
"""
Args:
clock: An object with a "time()" method, which returns the current
time in seconds.
name: The name of the metric to report.
"""
self.clock = clock
self.name = name
curr_context = current_context()
@@ -118,10 +124,8 @@ class Measure:
else:
assert isinstance(curr_context, LoggingContext)
parent_context = curr_context
self._logging_context = LoggingContext(
"Measure[%s]" % (self.name,), parent_context
)
self.start = None
self._logging_context = LoggingContext(str(curr_context), parent_context)
self.start = None # type: Optional[int]
def __enter__(self) -> "Measure":
if self.start is not None:

View File

@@ -138,7 +138,7 @@ class TerseJsonTestCase(LoggerCleanupMixin, TestCase):
]
self.assertCountEqual(log.keys(), expected_log_keys)
self.assertEqual(log["log"], "Hello there, wally!")
self.assertTrue(log["request"].startswith("name@"))
self.assertEqual(log["request"], "name")
def test_with_request_context(self):
"""
@@ -165,7 +165,9 @@ class TerseJsonTestCase(LoggerCleanupMixin, TestCase):
# Also set the requester to ensure the processing works.
request.requester = "@foo:test"
with LoggingContext(parent_context=request.logcontext):
with LoggingContext(
request.get_request_id(), parent_context=request.logcontext
):
logger.info("Hello there, %s!", "wally")
log = self.get_log_line()

View File

@@ -134,7 +134,7 @@ class MessageAcceptTests(unittest.HomeserverTestCase):
}
)
with LoggingContext():
with LoggingContext("test-context"):
failure = self.get_failure(
self.handler.on_receive_pdu(
"test.serv", lying_event, sent_to_us_directly=True

View File

@@ -231,8 +231,7 @@ class DescriptorTestCase(unittest.TestCase):
@defer.inlineCallbacks
def do_lookup():
with LoggingContext() as c1:
c1.name = "c1"
with LoggingContext("c1") as c1:
r = yield obj.fn(1)
self.assertEqual(current_context(), c1)
return r
@@ -274,8 +273,7 @@ class DescriptorTestCase(unittest.TestCase):
@defer.inlineCallbacks
def do_lookup():
with LoggingContext() as c1:
c1.name = "c1"
with LoggingContext("c1") as c1:
try:
d = obj.fn(1)
self.assertEqual(