Commit Graph

25376 Commits

Author SHA1 Message Date
dependabot[bot]
be3ecb332a Bump serde from 1.0.226 to 1.0.228 (#18982)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 17:06:15 +01:00
dependabot[bot]
14c114b9fd Bump regex from 1.11.2 to 1.11.3 (#18981)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 17:05:51 +01:00
dependabot[bot]
2eb6239ad8 Bump setuptools-rust from 1.11.1 to 1.12.0 (#18980)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 17:05:23 +01:00
dependabot[bot]
26583f8623 Bump lxml from 6.0.0 to 6.0.2 (#18979)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 17:02:13 +01:00
dependabot[bot]
265e5fe384 Bump docker/login-action from 3.5.0 to 3.6.0 (#18978)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-29 17:01:32 +01:00
Eric Eastwood
5143f93dc9 Fix server_name in logging context for multiple Synapse instances in one process (#18868)
### Background

As part of Element's plan to support a light form of vhosting (virtual
hosting, i.e. multiple instances of Synapse in the same Python process),
we're currently diving into the details and implications of that setup.

"Per-tenant logging" tracked internally by
https://github.com/element-hq/synapse-small-hosts/issues/48

### Prior art

Previously, we exposed `server_name` by providing a static logging
`MetadataFilter` that injected the values:


205d9e4fc4/synapse/config/logger.py (L216)

While this can work fine for the normal case of one Synapse instance per
Python process, it configures things globally and isn't compatible with
starting multiple Synapse instances, because each subsequent tenant will
overwrite the previous one.


### What does this PR do?

We remove the `MetadataFilter` and instead track the `server_name` in the
`LoggingContext`, exposing it with our existing
[`LoggingContextFilter`](205d9e4fc4/synapse/logging/context.py (L584-L622))
that we already use to expose information about the `request`.

This means that the `server_name` value follows wherever we log, as
expected, even when we have multiple Synapse instances running in the
same process.
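To illustrate the approach, here is a minimal, self-contained sketch (not Synapse's actual `LoggingContextFilter`) of how a `logging.Filter` can copy a per-context `server_name` onto every record, so one globally-installed filter still produces the right value for each tenant. The `contextvars` variable stands in for Synapse's `LoggingContext`, and `ServerNameContextFilter` is a hypothetical name.

```python
import contextvars
import logging

# Stand-in for Synapse's LoggingContext: each "tenant" activates its own value.
# The default mirrors the `unknown_server_from_sentinel_context` placeholder
# used when no context is active.
current_server_name: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_server_name", default="unknown_server_from_sentinel_context"
)


class ServerNameContextFilter(logging.Filter):
    """Copies the active context's server_name onto each log record."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.server_name = current_server_name.get()
        return True


handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(server_name)s - %(name)s - %(message)s")
)
handler.addFilter(ServerNameContextFilter())

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Two tenants logging through the same handler/filter in one process.
current_server_name.set("hs1.example.com")
logger.info("request handled")  # ... - hs1.example.com - demo - request handled
current_server_name.set("hs2.example.com")
logger.info("request handled")  # ... - hs2.example.com - demo - request handled
```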


### A note on logcontext

Anywhere that Synapse mistakenly uses the `sentinel` logcontext to log
something, we won't know which server sent the log. We've been fixing up
`sentinel` logcontext usage as tracked by
https://github.com/element-hq/synapse/issues/18905

Any further `sentinel` logcontext usage we find in the future can be
fixed piecemeal as normal.


d2a966f922/docs/log_contexts.md (L71-L81)


### Testing strategy

1. Adjust your logging config to include `%(server_name)s` in the format
    ```yaml
    formatters:
        precise:
            format: '%(asctime)s - %(server_name)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
    ```
1. Start Synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
1. Make some requests (`curl
http://localhost:8008/_matrix/client/versions`, etc)
1. Open the homeserver logs and notice the `server_name` in the logs as
expected. `unknown_server_from_sentinel_context` is expected for the
`sentinel` logcontext (things outside of Synapse).
2025-09-26 17:10:48 -05:00
Eric Eastwood
2f2b854ac1 Fix logcontext handling in timeout_deferred tests (#18974)
Related to https://github.com/element-hq/synapse/issues/18905

These fixes were split off from
https://github.com/element-hq/synapse/pull/18828 where @devonh was
seeing some test failures because `timeout_deferred(...)` is being
updated to use `Clock` utilities instead of raw `reactor` methods. This
test was failing in that branch/PR until we made this new version that
handles the logcontexts properly.

While the previous version of this test does pass on `develop`, it was
using what appear to be completely wrong assertions, assumptions, and bad
patterns to make it pass (see diff comments below).

---

Test originally introduced in https://github.com/matrix-org/synapse/pull/4407
2025-09-26 11:10:02 -05:00
Andrew Morgan
8f61bdb470 Note optional Element Commercial License in SPDX specifiers (#18973) 2025-09-26 12:43:07 +01:00
Andrew Morgan
7c32988f6b Update URLs in dockerfile metadata (#18971) 2025-09-26 12:40:50 +01:00
Hammy Havoc
688f635b59 Updated providers.json to use X instead of Twitter following rebrand and schema change (#18767) 2025-09-26 11:06:50 +01:00
Eric Eastwood
04721c85e6 Disconnect background process work from request trace (#18932)
Before https://github.com/element-hq/synapse/pull/18849, we were using
our own custom `LogContextScopeManager` which tied the tracing scope to
the `LoggingContext`. Since we created a new
`BackgroundProcessLoggingContext` any time we
`run_as_background_process(...)`, the trace for the background work was
separate from the trace that kicked off the work, as expected (e.g. a
request trace is separate from the background process we kicked off to
fetch more messages over federation).

Since we've now switched to the `ContextVarsScopeManager` (in
https://github.com/element-hq/synapse/pull/18849), the tracing scope now
crosses the `LoggingContext` boundaries (and thread boundaries) without
a problem. This means we end up with request traces that include all of
the background work that we've kicked off, bloating the trace and making
it hard to understand what's going on.

This PR separates the traces again, back to how things were before.
Additionally, things are even better now since I added some cross-link
references between the traces so you can easily jump between them.
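For a rough idea of the shape of this (not the PR's actual code), the sketch below uses the plain `opentracing` API: the request trace only gets a tiny marker span, while the background work runs in its own span carrying a `follows_from` reference back to the request, which provides the cross-link to jump between them. The tracer, span names, and helper function are illustrative.

```python
import opentracing


def kick_off_background_work(
    tracer: opentracing.Tracer, request_span: opentracing.Span
) -> None:
    # Tiny marker inside the request trace: records that background work was
    # started without pulling all of that work into this trace.
    with tracer.start_active_span("start_bgproc.qwer", child_of=request_span):
        pass

    # The background work gets its own span rather than a child of the request
    # span; the follows_from reference is the cross-link back to the request.
    bg_span = tracer.start_span(
        "bgproc.qwer",
        references=[opentracing.follows_from(request_span.context)],
        ignore_active_span=True,
    )
    try:
        pass  # ... the actual background work would run here ...
    finally:
        bg_span.finish()


# With the default no-op tracer this runs without any Jaeger backend.
tracer = opentracing.Tracer()
with tracer.start_active_span("VersionsRestServlet") as scope:
    kick_off_background_work(tracer, scope.span)
```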

Follow-up to https://github.com/element-hq/synapse/pull/18849

---

In the before image, you can see that the trace is blown up by the
background process (`bgproc.qwer`).

In the after image, we now only have a little cross-link marker span
(`start_bgproc.qwer`) to jump to the background process trace.

Before | After
---  | ---
<some image> | <some image>



### Testing strategy

1. Run a Jaeger instance
(https://www.jaegertracing.io/docs/1.6/getting-started/)
    ```shell
    $ docker run -d --name jaeger \
      -e COLLECTOR_ZIPKIN_HTTP_PORT=9411 \
      -p 5775:5775/udp \
      -p 6831:6831/udp \
      -p 6832:6832/udp \
      -p 5778:5778 \
      -p 16686:16686 \
      -p 14268:14268 \
      -p 9411:9411 \
      jaegertracing/all-in-one:1.59.0
    ```
 1. Configure Synapse to use tracing:
     `homeserver.yaml`
     ```yaml
    ## Tracing ##
    opentracing:
      enabled: true
      jaeger_config:
        sampler:
          type: const
          param: 1
        logging: false
    ```
1. Make sure the optional `opentracing` dependency is installed: `poetry
install --extras all`
1. Modify the `VersionsRestServlet` to kick off a dummy background
process (it's easy to test this way)
    ```python
    from synapse.metrics.background_process_metrics import run_as_background_process

    async def _qwer() -> None:
        await self.clock.sleep(1)

    run_as_background_process("qwer", "test_server", _qwer)
    ```
1. Run Synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
1. Fire off a versions request: `curl http://localhost:8008/_matrix/client/versions`
 1. Visit http://localhost:16686/search to view the traces
     - Select the correct service
     - Look for the  `VersionsRestServlet` operation
     - Press 'Find Traces' button
     - Select the relevant trace
     - Notice how the trace isn't bloated
     - Look for the `start_bgproc.qwer` span cross-linking to the background process
     - Jump to the other trace using the cross-link reference -> `bgproc.qwer`
2025-09-25 21:45:18 -05:00
Travis Ralston
d2a966f922 Use signature support from policy servers when available (#18934)
Opening on Kegan's behalf


[MSC4284](https://github.com/matrix-org/matrix-spec-proposals/pull/4284)
has already been opened accordingly.

---------

Co-authored-by: Kegan Dougal <7190048+kegsay@users.noreply.github.com>
Co-authored-by: Eric Eastwood <erice@element.io>
2025-09-25 19:30:24 +00:00
Andrew Morgan
dee6ba57a6 Merge branch 'release-v1.139' into develop 2025-09-25 12:57:39 +01:00
Andrew Morgan
e2ec3b7d0d 1.139.0rc3 v1.139.0rc3 2025-09-25 12:14:20 +01:00
Eric Eastwood
acb9ec3c38 Fix run_coroutine_in_background(...) incorrectly handling logcontext (#18964)
Regressed in
https://github.com/element-hq/synapse/pull/18900#discussion_r2331554278
(see conversation there for more context)


### How is this a regression?

> To give this an update with more hindsight; this logic *was* redundant
with the early return and it is safe to remove this complexity

> 
> It seems like this actually has to do with completed vs incomplete
deferreds...
> 
> To explain how things previously worked *without* the early-return
shortcut:
> 
> With the normal case of **incomplete awaitable**, we store the
`calling_context` and the `f` function is called and runs until it
yields to the reactor. Because `f` follows the logcontext rules, it sets
the `sentinel` logcontext. Then in `run_in_background(...)`, we restore
the `calling_context`, store the current `ctx` (which is `sentinel`) and
return. When the deferred completes, we restore `ctx` (which is
`sentinel`) before yielding to the reactor again (all good).
> 
> With the other case where we see a **completed awaitable**, we store
the `calling_context` and the `f` function is called and runs to
completion (no logcontext change). *This is where the shortcut would
kick in but I'm going to continue explaining as if we commented out the
shortcut.* -- Then in `run_in_background(...)`, we restore the
`calling_context`, store the current `ctx` (which is same as the
`calling_context`). Because the deferred is already completed, our extra
callback is called immediately and we restore `ctx` (which is same as
the `calling_context`). Since we never yield to the reactor, the
`calling_context` is perfect as that's what we want again (all good).
> 
> ---
> 
> But this also means that our early-return shortcut is no longer just
an optimization and is *necessary* to act correctly in the **completed
awaitable** case as we want to return with the `calling_context` and not
reset to the `sentinel` context. I've updated the comment in
https://github.com/element-hq/synapse/pull/18964 to explain the
necessity as it's currently just described as an optimization.
> 
> But because we made the same change to
`run_coroutine_in_background(...)` which didn't have the same
early-return shortcut, we regressed the correct behavior. This is
being fixed in https://github.com/element-hq/synapse/pull/18964
>
>
> *-- @MadLittleMods,
https://github.com/element-hq/synapse/pull/18900#discussion_r2373582917*
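A much-simplified, hypothetical sketch of the point being made above (this is not Synapse's real `run_in_background`; a module-level string stands in for the logcontext machinery): with the reset-to-sentinel callback in place, the early return is what keeps an already-completed deferred on the calling context.

```python
from typing import Callable

from twisted.internet import defer

SENTINEL = "sentinel"
current_context = SENTINEL  # stand-in for Synapse's logcontext


def run_in_background(f: Callable[[], object]) -> "defer.Deferred":
    global current_context
    calling_context = current_context
    d = defer.maybeDeferred(f)

    if d.called:
        # Early-return shortcut: f never yielded to the reactor, so the
        # calling context is still active and must stay active. Without this
        # return, the reset callback below would fire immediately and wrongly
        # drop us back to the sentinel context.
        return d

    # f yielded to the reactor and, per the logcontext rules, left the
    # sentinel context active. Run the rest of the caller under its own
    # context, and hand back to the sentinel when the deferred finally fires.
    current_context = calling_context

    def reset_to_sentinel(result: object) -> object:
        global current_context
        current_context = SENTINEL
        return result

    d.addBoth(reset_to_sentinel)
    return d


def sync_work() -> int:
    return 42  # completes without ever yielding to the reactor


current_context = "request-1"
run_in_background(sync_work)
assert current_context == "request-1"  # the early return kept the calling context
```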

### How did we find this problem?

Spawning from @wrjlewis
[seeing](https://matrix.to/#/!SGNQGPGUwtcPBUotTL:matrix.org/$h3TxxPVlqC6BTL07dbrsz6PmaUoZxLiXnSTEY-QYDtA?via=jki.re&via=matrix.org&via=element.io)
`Starting metrics collection 'typing.get_new_events' from sentinel
context: metrics will be lost` in the logs:

<details>
<summary>More logs</summary>

```
synapse.http.request_metrics - 222 - ERROR - sentinel - Trying to stop RequestMetrics in the sentinel context.
2025-09-23 14:43:19,712 - synapse.util.metrics - 212 - WARNING - sentinel - Starting metrics collection 'typing.get_new_events' from sentinel context: metrics will be lost
2025-09-23 14:43:19,713 - synapse.rest.client.sync - 851 - INFO - sentinel - Client has disconnected; not serializing response.
2025-09-23 14:43:19,713 - synapse.http.server - 825 - WARNING - sentinel - Not sending response to request <XForwardedForRequest at 0x7f23e8111ed0 method='POST' uri='/_matrix/client/unstable/org.matrix.simplified_msc3575/sync?pos=281963%2Fs929324_147053_10_2652457_147960_2013_25554_4709564_0_164_2&timeout=30000' clientproto='HTTP/1.1' site='8008'>, already disconnected.
2025-09-23 14:43:19,713 - synapse.access.http.8008 - 515 - INFO - sentinel - 92.40.194.87 - 8008 - {@me:wi11.co.uk} Processed request: 30.005sec/-8.041sec (0.001sec, 0.000sec) (0.000sec/0.002sec/2) 0B 200! "POST /_matrix/client/unstable/org.matrix.simplified_msc3575/
```

</details>

From the logs there, we can see things relating to
`typing.get_new_events` and
`/_matrix/client/unstable/org.matrix.simplified_msc3575/sync`, which led
me to try out Sliding Sync with the typing extension enabled and allowed
me to reproduce the problem locally. Sliding Sync is a unique scenario as
it's the only place we use `gather_optional_coroutines(...)` ->
`run_coroutine_in_background(...)` (introduced in
https://github.com/element-hq/synapse/pull/17884), which is what exhibits
this behavior.


### Testing strategy

1. Configure Synapse to enable
[MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186):
Simplified Sliding Sync, which is actually gated under the
[MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575)
experimental flag
    ```yaml
    experimental_features:
      msc3575_enabled: true
    ```
1. Start synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
 1. Make a Sliding Sync request with one of the extensions enabled
    ```http
    POST http://localhost:8008/_matrix/client/unstable/org.matrix.simplified_msc3575/sync
    {
        "lists": {},
        "room_subscriptions": {
            "!FlgJYGQKAIvAscfBhq:my.synapse.linux.server": {
                "required_state": [],
                "timeline_limit": 1
            }
        },
        "extensions": {
            "typing": {
                "enabled": true
            }
        }
    }
    ```
1. Open your homeserver logs and notice warnings about `Starting ...
from sentinel context: metrics will be lost`
2025-09-25 12:13:01 +01:00
dependabot[bot]
6ff181dbc7 Bump typing-extensions from 4.14.1 to 4.15.0 (#18956)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-24 23:06:45 +01:00
Hugh Nimmo-Smith
fd8fa97b6a Document and fix room_config param when user_may_create_room callback is invoked for a room upgrade (#18721)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-09-24 21:42:19 +00:00
Eric Eastwood
5266e423e2 Explain how Deferred callbacks interact with logcontexts (#18914)
Spawning from
https://github.com/matrix-org/synapse/pull/12588#discussion_r865843321

> It turns out `Deferred.cancel()` is a lot like
`Deferred.callback()`/`errback()` in that it will trash the logging
context:
> it can resume a coroutine, which will restore its own logging context,
then run:
> 
>  - until it blocks, setting the sentinel context
>  - or until it terminates, setting the context it was started with
> 
> So we need to wrap it in `with PreserveLoggingContext():`, like we do
with `.callback()`:
> 
> ```python
> with PreserveLoggingContext():
>     self.render_deferred.cancel()
> ```
>
> *-- @squahtx,
https://github.com/matrix-org/synapse/pull/12588#discussion_r865843321*
2025-09-24 16:20:42 -05:00
Eric Eastwood
0458f691b6 Fix run_coroutine_in_background(...) incorrectly handling logcontext (#18964)
Regressed in
https://github.com/element-hq/synapse/pull/18900#discussion_r2331554278
(see conversation there for more context)


### How is this a regression?

> To give this an update with more hindsight; this logic *was* redundant
with the early return and it is safe to remove this complexity

> 
> It seems like this actually has to do with completed vs incomplete
deferreds...
> 
> To explain how things previously worked *without* the early-return
shortcut:
> 
> With the normal case of **incomplete awaitable**, we store the
`calling_context` and the `f` function is called and runs until it
yields to the reactor. Because `f` follows the logcontext rules, it sets
the `sentinel` logcontext. Then in `run_in_background(...)`, we restore
the `calling_context`, store the current `ctx` (which is `sentinel`) and
return. When the deferred completes, we restore `ctx` (which is
`sentinel`) before yielding to the reactor again (all good).
> 
> With the other case where we see a **completed awaitable**, we store
the `calling_context` and the `f` function is called and runs to
completion (no logcontext change). *This is where the shortcut would
kick in but I'm going to continue explaining as if we commented out the
shortcut.* -- Then in `run_in_background(...)`, we restore the
`calling_context`, store the current `ctx` (which is same as the
`calling_context`). Because the deferred is already completed, our extra
callback is called immediately and we restore `ctx` (which is same as
the `calling_context`). Since we never yield to the reactor, the
`calling_context` is perfect as that's what we want again (all good).
> 
> ---
> 
> But this also means that our early-return shortcut is no longer just
an optimization and is *necessary* to act correctly in the **completed
awaitable** case as we want to return with the `calling_context` and not
reset to the `sentinel` context. I've updated the comment in
https://github.com/element-hq/synapse/pull/18964 to explain the
necessity as it's currently just described as an optimization.
> 
> But because we made the same change to
`run_coroutine_in_background(...)` which didn't have the same
early-return shortcut, we regressed the correct behavior. This is
being fixed in https://github.com/element-hq/synapse/pull/18964
>
>
> *-- @MadLittleMods,
https://github.com/element-hq/synapse/pull/18900#discussion_r2373582917*

### How did we find this problem?

Spawning from @wrjlewis
[seeing](https://matrix.to/#/!SGNQGPGUwtcPBUotTL:matrix.org/$h3TxxPVlqC6BTL07dbrsz6PmaUoZxLiXnSTEY-QYDtA?via=jki.re&via=matrix.org&via=element.io)
`Starting metrics collection 'typing.get_new_events' from sentinel
context: metrics will be lost` in the logs:

<details>
<summary>More logs</summary>

```
synapse.http.request_metrics - 222 - ERROR - sentinel - Trying to stop RequestMetrics in the sentinel context.
2025-09-23 14:43:19,712 - synapse.util.metrics - 212 - WARNING - sentinel - Starting metrics collection 'typing.get_new_events' from sentinel context: metrics will be lost
2025-09-23 14:43:19,713 - synapse.rest.client.sync - 851 - INFO - sentinel - Client has disconnected; not serializing response.
2025-09-23 14:43:19,713 - synapse.http.server - 825 - WARNING - sentinel - Not sending response to request <XForwardedForRequest at 0x7f23e8111ed0 method='POST' uri='/_matrix/client/unstable/org.matrix.simplified_msc3575/sync?pos=281963%2Fs929324_147053_10_2652457_147960_2013_25554_4709564_0_164_2&timeout=30000' clientproto='HTTP/1.1' site='8008'>, already disconnected.
2025-09-23 14:43:19,713 - synapse.access.http.8008 - 515 - INFO - sentinel - 92.40.194.87 - 8008 - {@me:wi11.co.uk} Processed request: 30.005sec/-8.041sec (0.001sec, 0.000sec) (0.000sec/0.002sec/2) 0B 200! "POST /_matrix/client/unstable/org.matrix.simplified_msc3575/
```

</details>

From the logs there, we can see things relating to
`typing.get_new_events` and
`/_matrix/client/unstable/org.matrix.simplified_msc3575/sync`, which led
me to try out Sliding Sync with the typing extension enabled and allowed
me to reproduce the problem locally. Sliding Sync is a unique scenario as
it's the only place we use `gather_optional_coroutines(...)` ->
`run_coroutine_in_background(...)` (introduced in
https://github.com/element-hq/synapse/pull/17884), which is what exhibits
this behavior.


### Testing strategy

1. Configure Synapse to enable
[MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186):
Simplified Sliding Sync, which is actually gated under the
[MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575)
experimental flag
    ```yaml
    experimental_features:
      msc3575_enabled: true
    ```
1. Start synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
 1. Make a Sliding Sync request with one of the extensions enabled
    ```http
    POST http://localhost:8008/_matrix/client/unstable/org.matrix.simplified_msc3575/sync
    {
        "lists": {},
        "room_subscriptions": {
            "!FlgJYGQKAIvAscfBhq:my.synapse.linux.server": {
                "required_state": [],
                "timeline_limit": 1
            }
        },
        "extensions": {
            "typing": {
                "enabled": true
            }
        }
    }
    ```
1. Open your homeserver logs and notice warnings about `Starting ...
from sentinel context: metrics will be lost`
2025-09-24 15:24:47 +00:00
Eric Eastwood
25fa555395 Fix no active span when trying to log tracing error on startup (#18959)
Fix `no active span when trying to log` tracing error on startup.

Example error:
```log
synapse.logging.opentracing - 427 - ERROR - wake_destinations_needing_catchup-0 - There was no active span when trying to log. Did you forget to start one or did a context slip?
Stack (most recent call last):
  File "/usr/lib/python3.13/threading.py", line 1014, in _bootstrap
    self._bootstrap_inner()
  File "/usr/lib/python3.13/threading.py", line 1043, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.13/threading.py", line 994, in run
    self._target(*self._args, **self._kwargs)
  File "python3.13/site-packages/twisted/_threads/_threadworker.py", line 75, in work
    task()
  File "python3.13/site-packages/twisted/_threads/_team.py", line 192, in doWork
    task()
  File "python3.13/site-packages/twisted/python/threadpool.py", line 269, in inContext
    result = inContext.theWork()  # type: ignore[attr-defined]
  File "python3.13/site-packages/twisted/python/threadpool.py", line 285, in <lambda>
    inContext.theWork = lambda: context.call(  # type: ignore[attr-defined]
  File "python3.13/site-packages/twisted/python/context.py", line 117, in callWithContext
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
  File "python3.13/site-packages/twisted/python/context.py", line 82, in callWithContext
    return func(*args, **kw)
  File "python3.13/site-packages/twisted/enterprise/adbapi.py", line 282, in _runWithConnection
    result = func(conn, *args, **kw)
  File "synapse/synapse/storage/database.py", line 1094, in inner_func
    return func(db_conn, *args, **kwargs)
  File "synapse/synapse/storage/database.py", line 822, in new_transaction
    opentracing.log_kv({"message": "commit"})
  File "synapse/synapse/logging/opentracing.py", line 427, in ensure_active_span_inner_2
    logger.error(
```


### Why did this happen before?

This previously occurred because we called `init_tracer(...)` after the
reactor started up in `_base.start()`. But we actually attempt some
database transactions (which try to do some tracing) earlier than that,
because of the `oidc = hs.get_oidc_handler()` line.

Notice `oidc = hs.get_oidc_handler()` happened before `_base.start(hs)`:


5be7679dd9/synapse/app/homeserver.py (L397-L408)


With this PR, I've updated things to call `init_tracer(...)` earlier on,
alongside where we `setup_logging(...)`.
2025-09-24 10:12:08 -05:00
Andrew Morgan
7708801d56 Fix triage_labelled GHA workflow (#18913) 2025-09-24 14:17:14 +01:00
Andrew Morgan
d3fc638c29 Merge branch 'master' into develop 2025-09-24 13:50:05 +01:00
Andrew Morgan
6c292dc4ee 1.138.2 v1.138.2 2025-09-24 12:26:49 +01:00
Andrew Morgan
120389b077 Note ubuntu release support update in the upgrade notes 2025-09-24 12:25:41 +01:00
Andrew Morgan
71b34b3a07 Drop support for Ubuntu 24.10 'Oracular Oriole', add support for Ubuntu 25.04 'Plucky Puffin' (#18962) 2025-09-24 12:24:32 +01:00
PizZaKatZe
e766f325af fix: Compute user last seen timestamp from last seen devices (#18948)
## Fix last seen timestamp in `/_synapse/admin/v2/users` response

Fixes #18955

The last seen timestamps contained in `/_synapse/admin/v2/users`
responses were computed as follows:

```sql
                [...]
                LEFT JOIN (
                    SELECT user_id, MAX(last_seen) AS last_seen_ts
                    FROM user_ips GROUP BY user_id
                ) ls ON u.name = ls.user_id
                [...]
```

4367fb2d07/synapse/storage/databases/main/__init__.py (L302C1-L305C44)

This leads to empty timestamps (as in: user was never seen) if users are
inactive for longer than
[`user_ips_max_age`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#user_ips_max_age).

The fix is quite trivial: Use the `devices` table, as this one also
contains last seen timestamps but is *not* periodically purged.

We are using this for automatic user account deletion (via
[synadm](https://codeberg.org/synadm/synadm)) and the patched code works
as intended, whereas the unpatched version wants to delete users during
long vacations. 🫣
2025-09-24 11:59:11 +01:00
Tulir Asokan
512b3f50cf Update MSC4326 error code (#18947) 2025-09-24 11:57:24 +01:00
Andrew Morgan
0fbf296c99 1.138.1 v1.138.1 2025-09-24 11:32:48 +01:00
Andrew Morgan
0c8594c9a8 Fix performance regression related to delayed events processing (#18926) 2025-09-24 11:30:47 +01:00
Shay
35c9cbb09d Add an Admin API to query a piece of local or cached remote media by ID (#18911) 2025-09-23 16:25:56 -05:00
dependabot[bot]
9680804496 Bump phonenumbers from 9.0.13 to 9.0.14 (#18954)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 17:18:41 +01:00
dependabot[bot]
8f63e2246a Bump pygithub from 2.7.0 to 2.8.1 (#18952)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 17:18:19 +01:00
dependabot[bot]
aa83d660d5 Bump anyhow from 1.0.99 to 1.0.100 (#18950)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 17:18:09 +01:00
dependabot[bot]
641ced06a2 Bump Swatinem/rust-cache from 2.8.0 to 2.8.1 (#18949)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 17:17:28 +01:00
dependabot[bot]
354f1cc219 Bump authlib from 1.6.3 to 1.6.4 (#18957)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 16:44:38 +01:00
dependabot[bot]
478f593b6c Bump serde from 1.0.224 to 1.0.226 (#18953)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 16:42:38 +01:00
dependabot[bot]
cd6c424adb Bump types-requests from 2.32.4.20250809 to 2.32.4.20250913 (#18951)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-09-23 16:40:36 +01:00
Andrew Morgan
b70f668a8c Merge branch 'release-v1.139' into develop 2025-09-23 16:28:04 +01:00
Andrew Morgan
9c4ba13a10 Add entry to v1.139.0 upgrade notes about appservices and /register requests 2025-09-23 16:27:38 +01:00
Andrew Morgan
0447496549 Merge branch 'release-v1.139' into develop 2025-09-23 16:05:53 +01:00
Andrew Morgan
9ed0d36fe2 Bump batch size from 50 to 1000 for _get_e2e_cross_signing_signatures_for_devices query (#18939)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-09-23 15:47:29 +01:00
Andrew Morgan
5857d2de59 Note ubuntu release support update in the upgrade notes v1.139.0rc2 2025-09-23 15:34:26 +01:00
Andrew Morgan
b10f3f5959 1.139.0rc2 2025-09-23 15:31:49 +01:00
Andrew Morgan
fd29e3219c Drop support for Ubuntu 24.10 'Oracular Oriole', add support for Ubuntu 25.04 'Plucky Puffin' (#18962) 2025-09-23 15:28:40 +01:00
Andrew Morgan
d308469e90 Update changelog to move MSC4190 entry to Features v1.139.0rc1 2025-09-23 14:28:38 +01:00
Andrew Morgan
daf33e4954 1.139.0rc1 2025-09-23 13:28:34 +01:00
Andrew Morgan
ddc7627b22 Fix performance regression related to delayed events processing (#18926) 2025-09-23 09:47:30 +01:00
Eric Eastwood
5be7679dd9 Split loading config vs homeserver setup (#18933)
This allows us to get access to `server_name` so we can use it when
creating the `LoggingContext("main")` in the future (a prerequisite for
https://github.com/element-hq/synapse/pull/18868).

This also gives us more flexibility to parse config however we want and
set up a Synapse homeserver, like what we do in [Synapse Pro for Small
Hosts](https://github.com/element-hq/synapse-small-hosts).

Split out from https://github.com/element-hq/synapse/pull/18868
2025-09-22 14:53:02 -05:00
Eric Eastwood
e7d98d3429 Remove sentinel logcontext in Clock utilities (looping_call, looping_call_now, call_later) (#18907)
Part of https://github.com/element-hq/synapse/issues/18905

Lints to ensure we use `Clock.call_later` instead of
`reactor.callLater`, etc. are coming in
https://github.com/element-hq/synapse/pull/18944
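The gist of the change, sketched generically below (this is not Synapse's actual `Clock`; `current_context` and `call_later_with_context` are hypothetical stand-ins): a raw `reactor.callLater` callback runs in whatever context the reactor has (the sentinel), whereas a thin wrapper can activate a named context around the callback so its log lines are attributable.

```python
import contextvars
from typing import Callable

from twisted.internet import reactor

current_context: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_context", default="sentinel"
)


def call_later_with_context(name: str, delay: float, callback: Callable[[], None]):
    """Like reactor.callLater, but runs the callback inside a named context."""

    def wrapped() -> None:
        token = current_context.set(name)  # enter the named context
        try:
            callback()
        finally:
            current_context.reset(token)  # hand the sentinel back to the reactor

    return reactor.callLater(delay, wrapped)


def log_db_time() -> None:
    print(f"[{current_context.get()}] Total database time: ...")


reactor.callLater(0.1, log_db_time)                       # prints "[sentinel] ..."
call_later_with_context("call_later", 0.2, log_db_time)   # prints "[call_later] ..."
reactor.callLater(0.3, reactor.stop)
reactor.run()
```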

### Testing strategy

 1. Configure Synapse to log at the `DEBUG` level
1. Start Synapse: `poetry run synapse_homeserver --config-path
homeserver.yaml`
1. Wait 10 seconds for the [database profiling
loop](9cc4001778/synapse/storage/database.py (L711))
to execute
1. Notice the logcontext being used for the `Total database time` log
line

Before (`sentinel`):

```
2025-09-10 16:36:58,651 - synapse.storage.TIME - 707 - DEBUG - sentinel - Total database time: 0.646% {room_forgetter_stream_pos(2): 0.131%, reap_monthly_active_users(1): 0.083%, get_device_change_last_converted_pos(1): 0.078%}
```

After (`looping_call`):

```
2025-09-10 16:36:58,651 - synapse.storage.TIME - 707 - DEBUG - looping_call - Total database time: 0.646% {room_forgetter_stream_pos(2): 0.131%, reap_monthly_active_users(1): 0.083%, get_device_change_last_converted_pos(1): 0.078%}
```
2025-09-22 14:51:13 -05:00
Eric Eastwood
d05f44a1c6 Introduce Clock.add_system_event_trigger(...) to include logcontext by default (#18945)
Introduce `Clock.add_system_event_trigger(...)` to wrap system event
callback code in a logcontext, ensuring we can identify which server
generated the logs.

Background:

>  Ideally, nothing from the Synapse homeserver would be logged against the `sentinel` 
>  logcontext as we want to know which server the logs came from. In practice, this is not 
>  always the case yet especially outside of request handling. 
>   
>  Global things outside of Synapse (e.g. Twisted reactor code) should run in the 
>  `sentinel` logcontext. It's only when it calls into application code that a logcontext 
>  gets activated. This means the reactor should be started in the `sentinel` logcontext, 
>  and any time an awaitable yields control back to the reactor, it should reset the 
>  logcontext to be the `sentinel` logcontext. This is important to avoid leaking the 
>  current logcontext to the reactor (which would then get picked up and associated with 
>  the next thing the reactor does). 
>
> *-- `docs/log_contexts.md`*

Also adds a lint to prefer `Clock.add_system_event_trigger(...)` over
`reactor.addSystemEventTrigger(...)`
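Following the same stand-in pattern as the `call_later` sketch earlier (a hypothetical helper, not Synapse's real `Clock.add_system_event_trigger`), the idea is simply to wrap the trigger callback so it runs inside a named context instead of the reactor's bare sentinel context:

```python
import contextvars
from typing import Callable

from twisted.internet import reactor

current_context: contextvars.ContextVar[str] = contextvars.ContextVar(
    "current_context", default="sentinel"
)


def add_system_event_trigger_with_context(
    name: str, phase: str, event_type: str, callback: Callable[[], None]
) -> None:
    def wrapped() -> None:
        token = current_context.set(name)  # run the callback under a named context
        try:
            callback()
        finally:
            current_context.reset(token)

    reactor.addSystemEventTrigger(phase, event_type, wrapped)


def on_shutdown() -> None:
    # Attributable to "system_event" rather than the sentinel context.
    print(f"[{current_context.get()}] shutting down")


add_system_event_trigger_with_context("system_event", "before", "shutdown", on_shutdown)
reactor.callLater(0.1, reactor.stop)
reactor.run()  # on shutdown the trigger fires and prints "[system_event] shutting down"
```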

Part of https://github.com/element-hq/synapse/issues/18905
2025-09-22 11:47:22 -05:00