Prevent duplicate logging setup when running multiple Synapse instances (#19067)

Be mindful that it's possible to run Synapse multiple times in the same
Python process, so some parts of the logging setup only need to happen
once (a minimal sketch of this pattern follows the list below):

- We only need to set up the global log record factory and context filter once
- We only need to redirect Twisted logging once
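
To make the intent concrete, here is a minimal, hypothetical sketch of that "once per process" guard. It is not Synapse's actual implementation: the `one_time_logging_setup_sketch` function, the module-level flag, and `_ExampleContextFilter` are invented names used purely for illustration.

```python
# Hypothetical sketch only (not the real Synapse code). It illustrates why
# the process-global pieces (log record factory, context filter) must be
# guarded so that starting several Synapse instances in one Python process
# does not install them more than once.
import logging
import threading

_ONE_TIME_SETUP_DONE = False
_SETUP_LOCK = threading.Lock()


class _ExampleContextFilter(logging.Filter):
    """Invented stand-in for a filter that annotates every log record."""

    def filter(self, record: logging.LogRecord) -> bool:
        # Make sure format strings referencing %(request)s never blow up.
        if not hasattr(record, "request"):
            record.request = "unknown"
        return True


def one_time_logging_setup_sketch() -> None:
    """Process-global logging setup: safe to call many times, runs once."""
    global _ONE_TIME_SETUP_DONE
    with _SETUP_LOCK:
        if _ONE_TIME_SETUP_DONE:
            return
        _ONE_TIME_SETUP_DONE = True

        # Wrap the record factory exactly once; doing this per instance
        # would nest a new wrapper around the old one on every start-up.
        old_factory = logging.getLogRecordFactory()

        def record_factory(*args, **kwargs):
            record = old_factory(*args, **kwargs)
            if not hasattr(record, "server_name"):
                record.server_name = "unset"
            return record

        logging.setLogRecordFactory(record_factory)

        # Likewise, attach the context filter to the root logger only once;
        # otherwise every record would pass through N copies of the filter.
        logging.getLogger().addFilter(_ExampleContextFilter())
```

Each instance's start-up path can then call the guard unconditionally; only the first call has any effect.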


### Background

As part of Element's plan to support a light form of vhosting (virtual
hosting), i.e. running multiple instances of Synapse in the same Python
process, we're currently diving into the details and implications of
doing so.

"Per-tenant logging" tracked internally by
https://github.com/element-hq/synapse-small-hosts/issues/48
Commit 2c4057bf93 (parent f54ddbcace) by Eric Eastwood, committed via
GitHub on 2025-10-30 10:21:56 -05:00.
3 changed files with 60 additions and 33 deletions


```diff
@@ -33,7 +33,7 @@ from twisted.internet.protocol import ServerFactory
 from twisted.logger import LogBeginner, LogPublisher
 from twisted.protocols.basic import LineOnlyReceiver
 
-from synapse.config.logger import _setup_stdlib_logging
+from synapse.config.logger import _setup_stdlib_logging, one_time_logging_setup
 from synapse.logging import RemoteHandler
 from synapse.synapse_rust import reset_logging_config
 from synapse.types import ISynapseReactor
@@ -115,10 +115,10 @@ async def main(reactor: ISynapseReactor, loops: int) -> float:
     }
     logger = logging.getLogger("synapse")
+    one_time_logging_setup(logBeginner=beginner)
     _setup_stdlib_logging(
         hs_config,  # type: ignore[arg-type]
         None,
-        logBeginner=beginner,
     )
     # Force a new logging config without having to load it from a file.
```
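
As the diff shows, the caller now runs the process-global setup via `one_time_logging_setup(logBeginner=beginner)` before the existing `_setup_stdlib_logging(...)` call. For the "redirect Twisted logging once" bullet, the following is a hedged, self-contained sketch of what a one-shot redirect can look like using Twisted's public logging API; `redirect_twisted_logging_once` and its guard flag are invented names, not the code this commit adds.

```python
# Hedged sketch with invented names (not this commit's actual code): route
# Twisted's log events into the stdlib `logging` module at most once per
# process. `LogBeginner.beginLoggingTo` is designed to be called a single
# time and may warn if logging is begun again, which is exactly the kind
# of duplicate setup this change avoids.
import logging

from twisted.logger import STDLibLogObserver, globalLogBeginner

_TWISTED_REDIRECT_DONE = False


def redirect_twisted_logging_once() -> None:
    """Forward Twisted log events to stdlib `logging`, at most once."""
    global _TWISTED_REDIRECT_DONE
    if _TWISTED_REDIRECT_DONE:
        return
    _TWISTED_REDIRECT_DONE = True

    observer = STDLibLogObserver(name="twisted")
    globalLogBeginner.beginLoggingTo(
        [observer],
        # Leave stdout/stderr alone; the application manages those itself.
        redirectStandardIO=False,
    )


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    # Safe to call repeatedly: only the first call touches global state.
    redirect_twisted_logging_once()
    redirect_twisted_logging_once()
```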