Also adds a section in the docs explaining the `sentinel` logcontext.
Spawning from https://github.com/element-hq/synapse/pull/18870
### Testing strategy
1. Run Synapse normally and with `daemonize: true`: `poetry run
synapse_homeserver --config-path homeserver.yaml`
1. Execute some requests
1. Shut down the server
1. Look for any bad log entries in your homeserver logs:
- `Expected logging context sentinel but found main`
- `Expected logging context main was lost`
- `Expected previous context`
- `utime went backwards!`/`stime went backwards!`
- `Called stop on logcontext POST-0 without recording a start rusage`
- `Background process re-entered without a proc`
Twisted trial tests:
1. Run the full Twisted trial test suite.
1. Check the logs for `Test starting with non-sentinel logging context ...`
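For background, these warnings come out of Synapse's logcontext rules:
the reactor should always run in the `sentinel` context, and anything
that installs its own `LoggingContext` must restore the previous one
before handing control back. A minimal sketch using the real
`synapse.logging.context` helpers (the request name is made up):

```python
from twisted.internet import defer

from synapse.logging.context import LoggingContext, make_deferred_yieldable


async def handle_request() -> None:
    # Entering a LoggingContext switches us away from the `sentinel` context;
    # leaving the `with` block must restore whatever was active before.
    with LoggingContext("POST-0"):
        # Deferreds fired by the reactor run in the `sentinel` context, so we
        # wrap them with `make_deferred_yieldable` to drop back to `sentinel`
        # while waiting and restore "POST-0" when we resume. Forgetting the
        # wrapper is the usual source of the
        # "Expected logging context ..." warnings listed above.
        await make_deferred_yieldable(defer.succeed(None))
```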
Spawning from https://github.com/element-hq/synapse/pull/18871
[This change](6ce2f3e59d)
was originally used to fix CPU time going backwards when we `daemonize`.
While we don't seem to run into this problem on `develop`, I still
think this is a good change to make. We don't need background tasks
running on a process that will soon be forcefully exited and where the
reactor isn't even running yet. We now kick off the background tasks
(`run_as_background_process`) after we have forked the process and
started the reactor.
Also, as a simple note, we don't need background tasks running in both halves of a fork.
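A minimal sketch of the new ordering using Twisted's
`reactor.callWhenRunning` (`start_background_tasks` and the
`prune_old_entries` task are hypothetical stand-ins):

```python
from twisted.internet import reactor

from synapse.metrics.background_process_metrics import run_as_background_process


async def prune_old_entries() -> None:
    """Placeholder for a recurring maintenance task."""


def start_background_tasks() -> None:
    # Hypothetical helper that kicks off the homeserver's recurring
    # background work.
    run_as_background_process("prune_old_entries", prune_old_entries)


# Only start background tasks once the reactor is actually running, i.e.
# after any `daemonize` fork has already happened. The parent half of the
# fork exits immediately and its reactor never runs, so work started before
# this point would be wasted (and can confuse CPU-time accounting).
reactor.callWhenRunning(start_background_tasks)
reactor.run()
```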
This fixes two bugs that affect the availability of MSC4133 until the
next spec release.
1. The servlet didn't recognise the unstable endpoint even when the
homeserver advertised it
2. The HS didn't advertise support for the stable prefixed version
This would only have been a problem until the next spec release, but
it's nice to have it working before then.
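As a sketch of the second fix (the flag names follow MSC4133's
`uk.tcpip.msc4133` unstable prefix, but treat the exact keys as
illustrative rather than a copy of the real diff):

```python
from typing import Dict


def get_unstable_features(msc4133_enabled: bool) -> Dict[str, bool]:
    # Until MSC4133 lands in a spec release, clients discover support via
    # the `unstable_features` map in `GET /_matrix/client/versions`.
    # Advertising the `.stable` variant tells clients they may use the
    # stable-prefixed endpoints even before the spec release.
    return {
        "uk.tcpip.msc4133": msc4133_enabled,
        "uk.tcpip.msc4133.stable": msc4133_enabled,
    }
```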
While exploring bringing up `orjson`, an interesting flaw was exposed.
The stdlib `json` encoder seems to be ok with coercing a `str` from an
`Enum` (specifically, a class deriving from both `str` and `Enum`). The
`orjson` encoder does not like that this is a class and not a proper
`str` per spec. Using the
`.value` of the enum as the key for the dict produced while answering a
`GET` admin request for experimental features seems to fix this.
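A minimal repro of the difference, modeled on the admin API's
experimental-features enum (the enum here is illustrative, and the
`orjson` behaviour is as observed on the versions we tried):

```python
import json
from enum import Enum

import orjson


class ExperimentalFeature(str, Enum):
    MSC3881 = "msc3881"


features = {ExperimentalFeature.MSC3881: True}

# stdlib json happily coerces the str-subclass key into a plain string:
print(json.dumps(features))  # {"msc3881": true}

# orjson rejects the key because it is not exactly `str`, raising TypeError
# (as observed; by default orjson only accepts plain `str` dict keys):
try:
    orjson.dumps(features)
except TypeError as e:
    print(f"orjson: {e}")

# Using the enum's `.value` as the dict key sidesteps the problem:
print(orjson.dumps({k.value: v for k, v in features.items()}))
```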
We should send events that rescind invites over federation.
Similarly, we should handle receiving such events. Unfortunately, the
protocol doesn't make it possible to fully auth such events, and so we
can only handle the case where the original inviter rescinded the invite
(rather than a room admin).
Complement test: https://github.com/matrix-org/complement/pull/797
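As a rough sketch of the resulting acceptance rule (the function and
its callers are hypothetical, not the actual Synapse code path):

```python
def can_accept_rescind_over_federation(
    leave_event_sender: str,
    original_invite_sender: str,
) -> bool:
    # Over federation we may not have the room state needed to auth a
    # third-party rescission (e.g. checking a room admin's power level), so
    # we can only trust the narrow case where the user rescinding the invite
    # is the same user who sent it.
    return leave_event_sender == original_invite_sender
```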
If Synapse is under test (`SYNAPSE_LOG_TESTING` is set), we don't care
about seeing the "Applying schema" log lines at the INFO level every
time we run the tests (it's 100 lines of bulk for each homeserver).
```
synapse_main | 2025-08-29 22:34:03,453 - synapse.storage.prepare_database - 433 - INFO - main - Applying schema deltas for v73
synapse_main | 2025-08-29 22:34:03,454 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/01event_failed_pull_attempts.sql
synapse_main | 2025-08-29 22:34:03,463 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/02add_pusher_enabled.sql
synapse_main | 2025-08-29 22:34:03,473 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/02room_id_indexes_for_purging.sql
synapse_main | 2025-08-29 22:34:03,482 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/03pusher_device_id.sql
synapse_main | 2025-08-29 22:34:03,492 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/03users_approved_column.sql
synapse_main | 2025-08-29 22:34:03,502 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/04partial_join_details.sql
synapse_main | 2025-08-29 22:34:03,513 - synapse.storage.prepare_database - 541 - INFO - main - Applying schema 73/04pending_device_list_updates.sql
...
```
The Synapse logs are visible when a Complement test fails or you use
`COMPLEMENT_ALWAYS_PRINT_SERVER_LOGS=1`. This is spawning from a
Complement test with three homeservers and wanting less log noise to
scroll through.
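A sketch of the gating (the env-var check mirrors `SYNAPSE_LOG_TESTING`
as described above; the exact call sites in
`synapse.storage.prepare_database` may differ):

```python
import logging
import os

logger = logging.getLogger("synapse.storage.prepare_database")

# Demote the per-file "Applying schema" lines to DEBUG when Synapse is under
# test, so Complement output isn't dominated by schema bootstrap noise.
schema_log_level = (
    logging.DEBUG if os.environ.get("SYNAPSE_LOG_TESTING") else logging.INFO
)
logger.log(schema_log_level, "Applying schema %s", "73/01event_failed_pull_attempts.sql")
```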
Spawning from observing this trace for a `/messages` request
(`RoomMessageListRestServlet`). From the trace alone, we can't tell
whether the database took a while to fetch a single redaction or a
whole chain of redactions.
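The change, sketched below with Synapse's `synapse.logging.opentracing`
helpers (the span name and the `get_redaction_for_event` store method
are illustrative), gives each fetch its own span:

```python
from typing import Optional

from synapse.logging.opentracing import start_active_span


async def fetch_redaction_chain(store, event_id: Optional[str]) -> None:
    # Walk a chain of redactions, giving each database fetch its own span so
    # a trace shows whether a single lookup was slow or whether there were
    # simply many of them.
    while event_id is not None:
        with start_active_span("fetch_redaction"):
            # Hypothetical store method returning the event ID of the next
            # redaction in the chain, if any.
            event_id = await store.get_redaction_for_event(event_id)
```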
Switch to OpenTracing's `ContextVarsScopeManager` instead of our own
custom `LogContextScopeManager`.
This is now possible because the linked Twisted issue from the comment
in our custom `LogContextScopeManager` is resolved:
https://twistedmatrix.com/trac/ticket/10301
This PR is spawning from exploring different possibilities to solve the
`scope` loss problem I was encountering in
https://github.com/element-hq/synapse/pull/18804#discussion_r2268254424.
This appears to solve the problem, and I've added the additional test
from there to this PR ✅
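Concretely, the switch amounts to handing `jaeger_client` the stock
scope manager instead of our custom one. A sketch, with config details
elided:

```python
from jaeger_client import Config
from opentracing.scope_managers.contextvars import ContextVarsScopeManager

config = Config(
    config={"sampler": {"type": "const", "param": 1}},
    service_name="synapse",
    # Previously we passed our custom `LogContextScopeManager` here. The
    # stock contextvars-based manager works now that Twisted's Deferreds
    # propagate contextvars (https://twistedmatrix.com/trac/ticket/10301).
    scope_manager=ContextVarsScopeManager(),
)
tracer = config.initialize_tracer()
```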