Compare commits

10 Commits

Author SHA1 Message Date
Andrew Morgan 9cd8166843 lint 2025-10-24 11:12:25 +01:00
Andrew Morgan b5c66dea20 Allow continuing on despite queued assets 2025-10-24 10:36:40 +01:00
Andrew Morgan 7cd9678e7d newsfile 2025-10-21 14:31:35 +01:00
Andrew Morgan 889ffd9375 Have the release script warn if a workflow is queued for >15m 2025-10-21 14:30:54 +01:00
Andrew Morgan 6c16734cf3 Revert "newsfile"
This reverts commit 4427908340.

This should not have been committed to `develop`.
2025-10-21 14:18:40 +01:00
Andrew Morgan 4427908340 newsfile 2025-10-21 14:17:53 +01:00
Kieran Lane 2f65b9e001 Update oidc_session_no_samesite cookie to be Secure (#19079) 2025-10-21 13:35:55 +01:00
Andrew Morgan 418c9f3fe5 Prevent bcrypt from raising a ValueError and log (#19078) 2025-10-21 10:52:28 +01:00
Eric Eastwood eac862629f Revert "Move start_doing_background_updates() to SynapseHomeServer.start_background_tasks() (#19036)" (#19059)
### Why

See
https://github.com/element-hq/synapse/pull/19036#discussion_r2427070612

Revert while I figure out the tests in
https://github.com/element-hq/synapse/pull/19057
2025-10-20 10:55:41 -05:00
Ben Banfield-Zanin 67f22a200d Update Docker images to use Debian trixie (13) and thus Python 3.13 (#19064) 2025-10-20 16:49:17 +01:00
19 changed files with 169 additions and 211 deletions
-1
View File
@@ -1 +0,0 @@
-Move `start_doing_background_updates()` to `SynapseHomeServer.start_background_tasks()`.
+1
View File
@@ -0,0 +1 @@
+Update docker image to use Debian trixie as the base and thus Python 3.13.
+1
View File
@@ -0,0 +1 @@
+Fix a bug introduced in 1.140.0 where an internal server error could be raised when hashing user passwords that are too long.
+1
View File
@@ -0,0 +1 @@
+Fix the `oidc_session_no_samesite` cookie to have the `Secure` attribute, so the only difference between it and the paired `oidc_session` cookie, is the configuration of the `SameSite` attribute as described in the comments / cookie names. Contributed by @kieranlane.
+1
View File
@@ -0,0 +1 @@
+Warn the developer when they are releasing Synapse if a release workflow has been queued for over 15 minutes.
+3 -8
View File
@@ -20,8 +20,8 @@
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
# in `poetry export` in the past.
-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
ARG POETRY_VERSION=2.1.1
###
@@ -142,10 +142,10 @@ RUN \
    libwebp7 \
    xmlsec1 \
    libjemalloc2 \
    libicu \
  | grep '^\w' > /tmp/pkg-list && \
  for arch in arm64 amd64; do \
    mkdir -p /tmp/debs-${arch} && \
    chown _apt:root /tmp/debs-${arch} && \
    cd /tmp/debs-${arch} && \
    apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
  done
@@ -176,11 +176,6 @@ LABEL org.opencontainers.image.documentation='https://element-hq.github.io/synap
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later OR LicenseRef-Element-Commercial'
-# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the
-# libraries to the right place, else the `COPY` won't work.
-# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is
-# already present in the runtime image.
-COPY --from=runtime-deps /install-${TARGETARCH}/lib /usr/lib
COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
COPY --from=runtime-deps /install-${TARGETARCH}/var /var
+19 -13
View File
@@ -1,9 +1,10 @@
-# syntax=docker/dockerfile:1
+# syntax=docker/dockerfile:1-labs

ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
-ARG PYTHON_VERSION=3.12
+ARG DEBIAN_VERSION=trixie
+ARG PYTHON_VERSION=3.13
+ARG REDIS_VERSION=7.2
# first of all, we create a base image with dependencies which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
@@ -11,15 +12,27 @@ ARG PYTHON_VERSION=3.12
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
+ARG DEBIAN_VERSION
+ARG REDIS_VERSION

# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache

+# The upstream redis-server deb has fewer dynamic libraries than Debian's package which makes it easier to copy later on
+RUN \
+    curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \
+    chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg && \
+    echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb ${DEBIAN_VERSION} main" | tee /etc/apt/sources.list.d/redis.list

RUN \
    --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
  apt-get update -qq && \
  DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
-    nginx-light
+    nginx-light \
+    redis-server="6:${REDIS_VERSION}.*" redis-tools="6:${REDIS_VERSION}.*" \
+    # libicu is required by postgres, see `docker/complement/Dockerfile`
+    libicu76

RUN \
  # remove default page
@@ -35,19 +48,12 @@ FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
RUN mkdir -p /uv/etc/supervisor/conf.d

-# Similarly, a base to copy the redis server from.
-#
-# The redis docker image has fewer dynamic libraries than the debian package,
-# which makes it much easier to copy (but we need to make sure we use an image
-# based on the same debian version as the synapse image, to make sure we get
-# the expected version of libc.
-FROM docker.io/library/redis:7-${DEBIAN_VERSION} AS redis_base

# now build the final image, based on the the regular Synapse docker image
FROM $FROM

# Copy over dependencies
-COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
+COPY --from=deps_base --parents /usr/lib/*-linux-gnu/libicu* /
+COPY --from=deps_base /usr/bin/redis-server /usr/local/bin
COPY --from=deps_base /uv /
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx
+5 -5
View File
@@ -9,7 +9,7 @@
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
-ARG DEBIAN_VERSION=bookworm
+ARG DEBIAN_VERSION=trixie

FROM docker.io/library/postgres:13-${DEBIAN_VERSION} AS postgres_base
@@ -18,10 +18,10 @@ FROM $FROM
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.
-# This trick only works because (a) the Synapse image happens to have all the
-# shared libraries that postgres wants, (b) we use a postgres image based on
-# the same debian version as Synapse's docker image (so the versions of the
-# shared libraries match).
+# This trick only works because we use a postgres image based on the same
+# debian version as Synapse's docker image (so the versions of the shared
+# libraries match). Any missing libraries need to be added to either the
+# Synapse image or docker/Dockerfile-workers.
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql

COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
+3 -3
View File
@@ -8,9 +8,9 @@ ARG PYTHON_VERSION=3.9
###
### Stage 0: generate requirements.txt
###
-# We hardcode the use of Debian bookworm here because this could change upstream
-# and other Dockerfiles used for testing are expecting bookworm.
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
+# We hardcode the use of Debian trixie here because this could change upstream
+# and other Dockerfiles used for testing are expecting trixie.
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-trixie
# Install Rust and other dependencies (stolen from normal Dockerfile)
# install the OS build deps
+8
View File
@@ -117,6 +117,14 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
+# Upgrading to v1.141.0
+
+## Docker images now based on Debian `trixie` with Python 3.13
+
+The Docker images are now based on Debian `trixie` and use Python 3.13. If you
+are using the Docker images as a base image you may need to e.g. adjust the
+paths you mount any additional Python packages at.
+
# Upgrading to v1.140.0
## Users of `synapse-s3-storage-provider` must update the module to `v1.6.0`
+10
View File
@@ -596,6 +596,16 @@ def _wait_for_actions(gh_token: Optional[str]) -> None:
        if len(resp["workflow_runs"]) == 0:
            continue

+        # Warn the user if any workflows are still queued. They might need to fix something.
+        if any(workflow["status"] == "queued" for workflow in resp["workflow_runs"]):
+            _notify("Warning: at least one release workflow is still queued...")
+            if not click.confirm("Continue waiting for queued assets?", default=True):
+                click.echo(
+                    "Continuing on with the release. Note that you may need to upload missing assets manually later."
+                )
+                break
+            continue
+
        if all(
            workflow["status"] != "in_progress" for workflow in resp["workflow_runs"]
        ):
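The `workflow_runs` payload polled above has the shape returned by the GitHub Actions "list repository workflow runs" endpoint, where each run carries a `status` of `queued`, `in_progress`, or `completed`. A minimal standalone sketch of the same check (the endpoint URL and token handling here are illustrative, not taken from the release script):

```python
import requests

# List recent workflow runs; each entry's "status" is one of
# "queued", "in_progress" or "completed".
resp = requests.get(
    "https://api.github.com/repos/element-hq/synapse/actions/runs",
    headers={"Authorization": "Bearer <gh_token>"},
    timeout=10,
).json()

queued = [run["name"] for run in resp["workflow_runs"] if run["status"] == "queued"]
if queued:
    print("Still queued:", ", ".join(queued))
```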
+11 -1
View File
@@ -73,8 +73,18 @@ def main() -> None:
    pw = unicodedata.normalize("NFKC", password)

+    bytes_to_hash = pw.encode("utf8") + password_pepper.encode("utf8")
+    if len(bytes_to_hash) > 72:
+        # bcrypt only looks at the first 72 bytes
+        print(
+            f"Password is too long ({len(bytes_to_hash)} bytes); truncating to 72 bytes for bcrypt. "
+            "This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
+            "sufficient entropy for a password."
+        )
+        bytes_to_hash = bytes_to_hash[:72]
+
    hashed = bcrypt.hashpw(
-        pw.encode("utf8") + password_pepper.encode("utf8"),
+        bytes_to_hash,
        bcrypt.gensalt(bcrypt_rounds),
    ).decode("ascii")
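The truncation matters because bcrypt only uses the first 72 bytes of its input, and recent releases of the Python `bcrypt` library reject longer inputs with a `ValueError` rather than truncating silently — the internal server error the newsfile refers to. A quick illustration (not part of the diff; which bcrypt release introduced the `ValueError` is an assumption here):

```python
import bcrypt

# 100 bytes: over bcrypt's 72-byte limit. Recent bcrypt releases raise
# ValueError here instead of silently truncating, hence the explicit
# truncation in the script above.
password = b"x" * 100

try:
    hashed = bcrypt.hashpw(password, bcrypt.gensalt())
except ValueError:
    hashed = bcrypt.hashpw(password[:72], bcrypt.gensalt())

# Verification must apply the same truncation to stay consistent.
assert bcrypt.checkpw(password[:72], hashed)
```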
+46 -72
View File
@@ -29,7 +29,6 @@ import traceback
import warnings
from textwrap import indent
from threading import Thread
-from types import FrameType
from typing import (
TYPE_CHECKING,
Any,
@@ -40,7 +39,6 @@ from typing import (
    NoReturn,
    Optional,
    Tuple,
-    Union,
    cast,
)
from wsgiref.simple_server import WSGIServer
@@ -77,6 +75,7 @@ from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, PreserveLoggingContext
from synapse.metrics import install_gc_manager, register_threadpool
+from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.module_api.callbacks.spamchecker_callbacks import load_legacy_spam_checkers
from synapse.module_api.callbacks.third_party_event_rules_callbacks import (
@@ -112,7 +111,7 @@ P = ParamSpec("P")
def register_sighup(
hs: "HomeServer",
homeserver_instance_id: str,
func: Callable[P, None],
*args: P.args,
**kwargs: P.kwargs,
@@ -127,25 +126,19 @@ def register_sighup(
        *args, **kwargs: args and kwargs to be passed to the target function.
    """
-    # Wrap the function so we can run it within a logcontext
-    def _callback_wrapper(*args: P.args, **kwargs: P.kwargs) -> None:
-        with LoggingContext(name="sighup", server_name=hs.hostname):
-            func(*args, **kwargs)
-
-    _instance_id_to_sighup_callbacks_map.setdefault(hs.get_instance_id(), []).append(
-        (_callback_wrapper, args, kwargs)
+    _instance_id_to_sighup_callbacks_map.setdefault(homeserver_instance_id, []).append(
+        (func, args, kwargs)
    )
-def unregister_sighups(homeserver_instance_id: str) -> None:
+def unregister_sighups(instance_id: str) -> None:
    """
    Unregister all sighup functions associated with this Synapse instance.

    Args:
-        homeserver_instance_id: The unique ID for this Synapse process instance to
-            unregister hooks for (`hs.get_instance_id()`).
+        instance_id: Unique ID for this Synapse process instance.
    """
-    _instance_id_to_sighup_callbacks_map.pop(homeserver_instance_id, [])
+    _instance_id_to_sighup_callbacks_map.pop(instance_id, [])
def start_worker_reactor(
@@ -550,61 +543,6 @@ def refresh_certificate(hs: "HomeServer") -> None:
logger.info("Context factories updated.")
-_already_setup_sighup_handling = False
-"""
-Marks whether we've already successfully ran `setup_sighup_handling()`.
-"""
-
-
-def setup_sighup_handling() -> None:
-    """
-    Set up SIGHUP handling to call registered callbacks.
-
-    This can be called multiple times safely.
-    """
-    global _already_setup_sighup_handling
-
-    # We only need to set things up once per process.
-    if _already_setup_sighup_handling:
-        return
-
-    previous_sighup_handler: Union[
-        Callable[[int, Optional[FrameType]], Any], int, None
-    ] = None
-
-    # Set up the SIGHUP machinery.
-    if hasattr(signal, "SIGHUP"):
-
-        def handle_sighup(*args: Any, **kwargs: Any) -> None:
-            # Tell systemd our state, if we're using it. This will silently fail if
-            # we're not using systemd.
-            sdnotify(b"RELOADING=1")
-
-            if callable(previous_sighup_handler):
-                previous_sighup_handler(*args, **kwargs)
-
-            for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
-                for func, args, kwargs in sighup_callbacks:
-                    func(*args, **kwargs)
-
-            sdnotify(b"READY=1")
-
-        # We defer running the sighup handlers until next reactor tick. This
-        # is so that we're in a sane state, e.g. flushing the logs may fail
-        # if the sighup happens in the middle of writing a log entry.
-        def run_sighup(*args: Any, **kwargs: Any) -> None:
-            # `callFromThread` should be "signal safe" as well as thread
-            # safe.
-            reactor.callFromThread(handle_sighup, *args, **kwargs)
-
-        # Register for the SIGHUP signal, chaining any existing handler as there can
-        # only be one handler per signal and we don't want to clobber any existing
-        # handlers (like the `multi_synapse` shard process in the context of Synapse Pro
-        # for small hosts)
-        previous_sighup_handler = signal.signal(signal.SIGHUP, run_sighup)
-
-    _already_setup_sighup_handling = True


async def start(hs: "HomeServer", freeze: bool = True) -> None:
    """
    Start a Synapse server or worker.
@@ -644,9 +582,45 @@ async def start(hs: "HomeServer", freeze: bool = True) -> None:
name="gai_resolver", server_name=server_name, threadpool=resolver_threadpool
)
setup_sighup_handling()
register_sighup(hs, refresh_certificate, hs)
register_sighup(hs, reload_cache_config, hs.config)
+    # Set up the SIGHUP machinery.
+    if hasattr(signal, "SIGHUP"):
+
+        def handle_sighup(*args: Any, **kwargs: Any) -> "defer.Deferred[None]":
+            async def _handle_sighup(*args: Any, **kwargs: Any) -> None:
+                # Tell systemd our state, if we're using it. This will silently fail if
+                # we're not using systemd.
+                sdnotify(b"RELOADING=1")
+
+                for sighup_callbacks in _instance_id_to_sighup_callbacks_map.values():
+                    for func, args, kwargs in sighup_callbacks:
+                        func(*args, **kwargs)
+
+                sdnotify(b"READY=1")
+
+            # It's okay to ignore the linter error here and call
+            # `run_as_background_process` directly because `_handle_sighup` operates
+            # outside of the scope of a specific `HomeServer` instance and holds no
+            # references to it which would prevent a clean shutdown.
+            return run_as_background_process(  # type: ignore[untracked-background-process]
+                "sighup",
+                server_name,
+                _handle_sighup,
+                *args,
+                **kwargs,
+            )
+
+        # We defer running the sighup handlers until next reactor tick. This
+        # is so that we're in a sane state, e.g. flushing the logs may fail
+        # if the sighup happens in the middle of writing a log entry.
+        def run_sighup(*args: Any, **kwargs: Any) -> None:
+            # `callFromThread` should be "signal safe" as well as thread
+            # safe.
+            reactor.callFromThread(handle_sighup, *args, **kwargs)
+
+        signal.signal(signal.SIGHUP, run_sighup)
+
+    register_sighup(hs.get_instance_id(), refresh_certificate, hs)
+    register_sighup(hs.get_instance_id(), reload_cache_config, hs.config)

    # Apply the cache config.
    hs.config.caches.resize_all_caches()
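The retained comments describe the shape of the handler: the signal handler itself does nothing but hop onto the reactor thread, and the real work runs on the next reactor tick. Stripped of Synapse's machinery, the pattern reduces to a few lines (a standalone sketch; the function names are illustrative, not Synapse's):

```python
import signal

from twisted.internet import reactor


def _reload_everything() -> None:
    # The real work runs here, safely on the reactor thread.
    print("reloading configuration")


def _on_sighup(signum: int, frame: object) -> None:
    # Do as little as possible inside the signal handler itself;
    # per the comment in the diff above, callFromThread is "signal
    # safe" as well as thread safe.
    reactor.callFromThread(_reload_everything)


signal.signal(signal.SIGHUP, _on_sighup)
reactor.run()
```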
+3 -5
View File
@@ -317,11 +317,6 @@ class SynapseHomeServer(HomeServer):
            # during parsing
            logger.warning("Unrecognized listener type: %s", listener.type)

-    def start_background_tasks(self) -> None:
-        super().start_background_tasks()
-
-        self.get_datastores().main.db_pool.updates.start_doing_background_updates()
def load_or_generate_config(argv_options: List[str]) -> HomeServerConfig:
"""
@@ -435,6 +430,9 @@ def setup(
    await _base.start(hs, freeze)

+    # TODO: Feels like this should be moved somewhere else.
+    hs.get_datastores().main.db_pool.updates.start_doing_background_updates()

    # Register a callback to be invoked once the reactor is running
    register_start(hs, _start_when_reactor_running)
+35 -64
View File
@@ -198,27 +198,12 @@ class LoggingConfig(Config):
log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))
-_already_performed_one_time_logging_setup: bool = False
-"""
-Marks whether we've already successfully ran `one_time_logging_setup()`.
-"""
-
-
-def one_time_logging_setup(*, logBeginner: LogBeginner = globalLogBeginner) -> None:
+def _setup_stdlib_logging(
+    config: "HomeServerConfig", log_config_path: Optional[str], logBeginner: LogBeginner
+) -> None:
    """
-    Perform one-time logging configuration for the Python process.
-
-    For example, we don't need to have multiple log record factories. Once we've
-    configured it once, we don't need to do it again.
-
-    This matters because multiple Synapse instances can be run in the same Python
-    process (c.f. Synapse Pro for small hosts)
+    Set up Python standard library logging.
    """
-    global _already_performed_one_time_logging_setup
-
-    # We only need to set things up once.
-    if _already_performed_one_time_logging_setup:
-        return
# We add a log record factory that runs all messages through the
# LoggingContextFilter so that we get the context *at the time we log*
@@ -236,6 +221,26 @@ def one_time_logging_setup(*, logBeginner: LogBeginner = globalLogBeginner) -> N
    logging.setLogRecordFactory(factory)

+    # Configure the logger with the initial configuration.
+    if log_config_path is None:
+        log_format = (
+            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
+            " - %(message)s"
+        )
+
+        logger = logging.getLogger("")
+        logger.setLevel(logging.INFO)
+        logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
+
+        formatter = logging.Formatter(log_format)
+
+        handler = logging.StreamHandler()
+        handler.setFormatter(formatter)
+        logger.addHandler(handler)
+    else:
+        # Load the logging configuration.
+        _load_logging_config(log_config_path)
# Route Twisted's native logging through to the standard library logging
# system.
observer = STDLibLogObserver()
@@ -276,36 +281,6 @@ def one_time_logging_setup(*, logBeginner: LogBeginner = globalLogBeginner) -> N
    logBeginner.beginLoggingTo([_log], redirectStandardIO=False)

-    _already_performed_one_time_logging_setup = True
-
-
-def _setup_stdlib_logging(
-    config: "HomeServerConfig", log_config_path: Optional[str]
-) -> None:
-    """
-    Set up Python standard library logging.
-    """
-
-    # Configure the logger with the initial configuration.
-    if log_config_path is None:
-        log_format = (
-            "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s"
-            " - %(message)s"
-        )
-
-        logger = logging.getLogger("")
-        logger.setLevel(logging.INFO)
-        logging.getLogger("synapse.storage.SQL").setLevel(logging.INFO)
-
-        formatter = logging.Formatter(log_format)
-
-        handler = logging.StreamHandler()
-        handler.setFormatter(formatter)
-        logger.addHandler(handler)
-    else:
-        # Load the logging configuration.
-        _load_logging_config(log_config_path)
def _load_logging_config(log_config_path: str) -> None:
"""
@@ -327,7 +302,7 @@ def _load_logging_config(log_config_path: str) -> None:
reset_logging_config()
-def _reload_logging_config(server_name: str, log_config_path: Optional[str]) -> None:
+def _reload_logging_config(log_config_path: Optional[str]) -> None:
"""
Reload the log configuration from the file and apply it.
"""
@@ -336,23 +311,26 @@ def _reload_logging_config(server_name: str, log_config_path: Optional[str]) ->
        return

    _load_logging_config(log_config_path)
-    logger.info(
-        "Reloaded log config for %s from %s due to SIGHUP", server_name, log_config_path
-    )
+    logger.info("Reloaded log config from %s due to SIGHUP", log_config_path)
def setup_logging(
hs: "HomeServer",
config: "HomeServerConfig",
use_worker_options: bool = False,
logBeginner: LogBeginner = globalLogBeginner,
) -> None:
"""
Set up the logging subsystem.
Args:
config: configuration data
use_worker_options: True to use the 'worker_log_config' option
instead of 'log_config'.
logBeginner: The Twisted logBeginner to use.
"""
from twisted.internet import reactor
@@ -363,20 +341,13 @@ def setup_logging(
    )

-    # Perform one-time logging configuration.
-    one_time_logging_setup()
-
-    # Configure logging.
-    _setup_stdlib_logging(config, log_config_path)
+    _setup_stdlib_logging(config, log_config_path, logBeginner=logBeginner)

    # Add a SIGHUP handler to reload the logging configuration, if one is available.
    from synapse.app import _base as appbase

    # We only need to reload the config if there is a log config file path provided to
    # reload from.
    if log_config_path:
-        server_name = hs.hostname
-        appbase.register_sighup(
-            hs, _reload_logging_config, server_name, log_config_path
-        )
+        appbase.register_sighup(
+            hs.get_instance_id(), _reload_logging_config, log_config_path
+        )

    # Log immediately so we can grep backwards.
    logger.warning("***** STARTING SERVER *****")
+15 -1
View File
@@ -1683,8 +1683,22 @@ class AuthHandler:
            # Normalise the Unicode in the password
            pw = unicodedata.normalize("NFKC", password)

+            bytes_to_hash = pw.encode(
+                "utf8"
+            ) + self.hs.config.auth.password_pepper.encode("utf8")
+            if len(bytes_to_hash) > 72:
+                # bcrypt only looks at the first 72 bytes.
+                #
+                # Note: we explicitly DO NOT log the length of the user's password here.
+                logger.debug(
+                    "Password is too long; truncating to 72 bytes for bcrypt. "
+                    "This is expected behaviour and will not affect a user's ability to log in. 72 bytes is "
+                    "sufficient entropy for a password."
+                )
+                bytes_to_hash = bytes_to_hash[:72]
+
            return bcrypt.hashpw(
-                pw.encode("utf8") + self.hs.config.auth.password_pepper.encode("utf8"),
+                bytes_to_hash,
                bcrypt.gensalt(self.bcrypt_rounds),
            ).decode("ascii")
+1 -1
View File
@@ -96,7 +96,7 @@ logger = logging.getLogger(__name__)
# Here we have the names of the cookies, and the options we use to set them.
_SESSION_COOKIES = [
    (b"oidc_session", b"HttpOnly; Secure; SameSite=None"),
-    (b"oidc_session_no_samesite", b"HttpOnly"),
+    (b"oidc_session_no_samesite", b"HttpOnly; Secure"),
]
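For context on why this one-line change is safe: browsers only honour `SameSite=None` when the cookie is also marked `Secure`, and the `_no_samesite` twin exists for older browsers that mishandle `SameSite=None`. After the change the two cookies differ only in the `SameSite` attribute, as the newsfile describes. Roughly how the resulting headers compare (the `<macaroon>` placeholder and surrounding formatting are illustrative):

```python
# Only the cookie names and attribute strings come from the diff above.
for name, options in [
    (b"oidc_session", b"HttpOnly; Secure; SameSite=None"),
    (b"oidc_session_no_samesite", b"HttpOnly; Secure"),
]:
    print(f"Set-Cookie: {name.decode()}=<macaroon>; {options.decode()}")
```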
+4 -35
View File
@@ -606,56 +606,25 @@ class LoggingContextFilter(logging.Filter):
        self._default_request = request

    def filter(self, record: logging.LogRecord) -> Literal[True]:
-        """
-        Add each fields from the logging contexts to the record.
-
-        Please be mindful of 3rd-party code outside of Synapse as this is running as a
-        global log record filter. Other code may have set their own attributes on the
-        record and the log record may not be relevant to Synapse at all so we should not
-        mangle it.
-
-        We can have some defaults but we should avoid overwriting existing attributes on
-        any log record unless we actually have a Synapse logcontext (not just the
-        default sentinel logcontext).
+        """Add each fields from the logging contexts to the record.

        Returns:
            True to include the record in the log output.
        """
        context = current_context()
        record.request = self._default_request
-        # Avoid overwriting an existing `server_name` on the record. This is running in
-        # the context of a global log record filter so there may be 3rd-party code that
-        # adds their own `server_name` and we don't want to interfere with that.
-        if not hasattr(record, "server_name"):
-            record.server_name = "unknown_server_from_no_logcontext"
+        record.server_name = "unknown_server_from_no_context"
        # context should never be None, but if it somehow ends up being, then
        # we end up in a death spiral of infinite loops, so let's check, for
        # robustness' sake.
        if context is not None:
-            def safe_set(attr: str, value: Any) -> None:
-                """
-                Only write the attribute if it hasn't already been set or we actually have
-                a Synapse logcontext (indicating that this log record is relevant to
-                Synapse).
-                """
-                if context is not SENTINEL_CONTEXT or not hasattr(record, attr):
-                    setattr(record, attr, value)
-
-            safe_set("server_name", context.server_name)
+            record.server_name = context.server_name

            # Logging is interested in the request ID. Note that for backwards
            # compatibility this is stored as the "request" on the record.
-            safe_set("request", str(context))
+            record.request = str(context)
            # Add some data from the HTTP request.
            request = context.request
-            # The sentinel logcontext has no request so if we get past this point, we
-            # know we have some actual Synapse logcontext and don't need to worry about
-            # using `safe_set`. We'll consider this an optimization since this is a
-            # pretty hot-path.
            if request is None:
                return True
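Both the removed and the surviving code rely on the same mechanism: a `logging.Filter` attached globally that annotates every record rather than ever dropping one. A minimal self-contained sketch of that pattern (names are illustrative, not Synapse's):

```python
import logging


class AnnotatingFilter(logging.Filter):
    """Stamp default metadata onto every record; never drop anything."""

    def filter(self, record: logging.LogRecord) -> bool:
        if not hasattr(record, "request"):
            record.request = "unknown"
        return True  # always keep the record; we only annotate it


handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request)s - %(message)s"))
handler.addFilter(AnnotatingFilter())

log = logging.getLogger("demo")
log.addHandler(handler)
log.warning("hello")  # prints "unknown - hello"
```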
+2 -2
View File
@@ -33,7 +33,7 @@ from twisted.internet.protocol import ServerFactory
from twisted.logger import LogBeginner, LogPublisher
from twisted.protocols.basic import LineOnlyReceiver
-from synapse.config.logger import _setup_stdlib_logging, one_time_logging_setup
+from synapse.config.logger import _setup_stdlib_logging
from synapse.logging import RemoteHandler
from synapse.synapse_rust import reset_logging_config
from synapse.types import ISynapseReactor
@@ -115,10 +115,10 @@ async def main(reactor: ISynapseReactor, loops: int) -> float:
    }

    logger = logging.getLogger("synapse")

-    one_time_logging_setup(logBeginner=beginner)
    _setup_stdlib_logging(
        hs_config,  # type: ignore[arg-type]
        None,
+        logBeginner=beginner,
    )

    # Force a new logging config without having to load it from a file.