synapse/synapse/http/request_metrics.py
Eric Eastwood ff03a51cb0 Revert "Fix LaterGauge metrics to collect from all servers (#18751)" (#18789)
This PR reverts https://github.com/element-hq/synapse/pull/18751

### Why revert?

@reivilibre
[found](https://matrix.to/#/!vcyiEtMVHIhWXcJAfl:sw1v.org/$u9OEmMxaFYUzWHhCk1A_r50Y0aGrtKEhepF7WxWJkUA?via=matrix.org&via=node.marinchik.ink&via=element.io)
that our CI was failing in bizarre ways (thanks for stepping up to dive
into this 🙇). Examples:

- `twisted.internet.error.ProcessTerminated: A process has ended with a
probable error condition: process ended by signal 9.`
- `twisted.internet.error.ProcessTerminated: A process has ended with a
probable error condition: process ended by signal 15.`

<details>
<summary>More detailed part of the log</summary>


https://github.com/element-hq/synapse/actions/runs/16758038107/job/47500520633#step:9:6809
```
tests.util.test_wheel_timer.WheelTimerTestCase.test_single_insert_fetch
===============================================================================
Error: 
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/trial/_dist/disttrial.py", line 371, in task
    await worker.run(case, result)
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/trial/_dist/worker.py", line 305, in run
    return await self.callRemote(workercommands.Run, testCase=testCaseId)  # type: ignore[no-any-return]
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/defer.py", line 1187, in __iter__
    yield self
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/defer.py", line 1092, in _runCallbacks
    current.result = callback(  # type: ignore[misc]
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/protocols/amp.py", line 1968, in _massageError
    error.trap(RemoteAmpError)
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/python/failure.py", line 431, in trap
    self.raiseException()
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/python/failure.py", line 455, in raiseException
    raise self.value.with_traceback(self.tb)
twisted.internet.error.ProcessTerminated: A process has ended with a probable error condition: process ended by signal 9.

tests.util.test_macaroons.MacaroonGeneratorTestCase.test_guest_access_token
-------------------------------------------------------------------------------
Ran 4325 tests in 669.321s

FAILED (skips=159, errors=62, successes=4108)
while calling from thread
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/base.py", line 1064, in runUntilCurrent
    f(*a, **kw)
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/base.py", line 790, in stop
    raise error.ReactorNotRunning("Can't stop reactor that isn't running.")
twisted.internet.error.ReactorNotRunning: Can't stop reactor that isn't running.

joining disttrial worker #0 failed
Traceback (most recent call last):
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/defer.py", line 1853, in _inlineCallbacks
    result = context.run(
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/python/failure.py", line 467, in throwExceptionIntoGenerator
    return g.throw(self.value.with_traceback(self.tb))
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/trial/_dist/worker.py", line 406, in exit
    await endDeferred
  File "/home/runner/.cache/pypoetry/virtualenvs/matrix-synapse-pswDeSvb-py3.9/lib/python3.9/site-packages/twisted/internet/defer.py", line 1187, in __iter__
    yield self
twisted.internet.error.ProcessTerminated: A process has ended with a probable error condition: process ended by signal 15.
```

</details>


With more debugging (thanks @devonh for also stepping in as maintainer),
we found that CI was consistently failing at
`test_exposed_to_prometheus`, which was a bit of a smoke signal given
all of the [metrics
changes](https://github.com/element-hq/synapse/issues/18592) that were
merged recently.

Locally, although I wasn't able to reproduce the bizarre errors, I could
easily see increased memory usage (~20GB vs ~2GB) and the
`test_exposed_to_prometheus` test taking a while to complete when
running a full test run (`SYNAPSE_TEST_LOG_LEVEL=INFO poetry run trial
tests`).

<img width="1485" height="78" alt="Lots of memory usage"
src="https://github.com/user-attachments/assets/811e2a96-75e5-4a3c-966c-00dc0512cea9"
/>

After updating `test_exposed_to_prometheus` to dump the
`latest_metrics_response = generate_latest(REGISTRY)` (patch below), I
could see that it's a massive 3.2GB response. Inspecting the contents,
we can see 4.1M (4,137,123) entries for just
`synapse_background_update_status{server_name="test"} 3.0`, which is a
`LaterGauge`. We don't have 4.1M test cases, so it's unclear why we end
up with quite so many samples, but a lot of duplicates do make sense:
each `HomeserverTestCase` creates a homeserver per test case, and each
homeserver will `LaterGauge.register_hook(...)` (part of the
https://github.com/element-hq/synapse/pull/18751 changes); see the
sketch further below for an illustration of how that snowballs.

`tests/storage/databases/main/test_metrics.py`
```python
        latest_metrics_response = generate_latest(REGISTRY)
        with open("/tmp/synapse-test-metrics", "wb") as f:
            f.write(latest_metrics_response)
```
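
As a sanity check on where those 3.2GB come from, a throwaway snippet
along these lines (not part of this PR) can tally how many samples each
metric family contributes to the dump:

```python
# Tally samples per metric family in the dumped Prometheus exposition text.
# Assumes the /tmp/synapse-test-metrics dump written by the patch above.
from collections import Counter

family_counts = Counter()
with open("/tmp/synapse-test-metrics", "rb") as f:
    for line in f:
        if line.startswith(b"#") or not line.strip():
            continue  # skip HELP/TYPE lines and blanks
        # The family name is everything before the first "{" or space.
        name = line.split(b"{", 1)[0].split(b" ", 1)[0]
        family_counts[name.decode()] += 1

for name, count in family_counts.most_common(10):
    print(f"{count:>12,} {name}")
```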

After reverting the https://github.com/element-hq/synapse/pull/18751
changes, running the full test suite locally doesn't result in memory
spikes and seems to run normally.
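
For intuition about how per-homeserver hook registration can snowball,
here is a minimal, self-contained sketch (my own illustration of the
suspected failure mode, not Synapse's actual `LaterGauge`
implementation):

```python
from prometheus_client.core import REGISTRY, GaugeMetricFamily


class SketchLaterGauge:
    """Illustrative stand-in for a hook-based LaterGauge."""

    def __init__(self, name: str, desc: str) -> None:
        self.name = name
        self.desc = desc
        self.hooks = []  # in this sketch, hooks are never removed
        REGISTRY.register(self)

    def register_hook(self, server_name, hook) -> None:
        self.hooks.append((server_name, hook))

    def collect(self):
        family = GaugeMetricFamily(self.name, self.desc, labels=["server_name"])
        # One sample per registered hook: if every test-case homeserver
        # registers a hook and nothing unregisters it, the sample count
        # grows with every test that runs.
        for server_name, hook in self.hooks:
            family.add_metric([server_name], hook())
        yield family


gauge = SketchLaterGauge("sketch_background_update_status", "")
for _ in range(3):  # three "test homeservers"
    gauge.register_hook("test", lambda: 3.0)
# A scrape now yields three identical samples for server_name="test".
```

Each scrape then renders one
`sketch_background_update_status{server_name="test"} 3.0` line per
registered hook, the same shape of duplication as the 4.1M entries
above (even if the exact multiplier in CI remains unexplained).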



### Dev notes

Discussion in the
[`#synapse-dev:matrix.org`](https://matrix.to/#/!vcyiEtMVHIhWXcJAfl:sw1v.org/$vkMATs04yqZggVVd6Noop5nU8M2DVoTkrAWshw7u1-w?via=matrix.org&via=node.marinchik.ink&via=element.io)
room.

### Pull Request Checklist

<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->

* [x] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
  - Be a short description of your change which makes sense to users.
    "Fixed a bug that prevented receiving messages from other servers."
    instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
  - Feel free to credit yourself, by adding a sentence "Contributed by
    @github_username." or "Contributed by [Your Name]." to the end of the
    entry.
* [ ] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct (run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
`synapse/synapse/http/request_metrics.py` (308 lines, 9.7 KiB, Python):

#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
import threading
import traceback
from typing import Dict, Mapping, Set, Tuple

from prometheus_client.core import Counter, Histogram

from synapse.logging.context import current_context
from synapse.metrics import SERVER_NAME_LABEL, LaterGauge

logger = logging.getLogger(__name__)

# total number of responses served, split by method/servlet/tag
response_count = Counter(
    "synapse_http_server_response_count",
    "",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

requests_counter = Counter(
    "synapse_http_server_requests_received",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

outgoing_responses_counter = Counter(
    "synapse_http_server_responses",
    "",
    labelnames=["method", "code", SERVER_NAME_LABEL],
)

response_timer = Histogram(
    "synapse_http_server_response_time_seconds",
    "sec",
    labelnames=["method", "servlet", "tag", "code", SERVER_NAME_LABEL],
)

response_ru_utime = Counter(
    "synapse_http_server_response_ru_utime_seconds",
    "sec",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

response_ru_stime = Counter(
    "synapse_http_server_response_ru_stime_seconds",
    "sec",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

response_db_txn_count = Counter(
    "synapse_http_server_response_db_txn_count",
    "",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
response_db_txn_duration = Counter(
    "synapse_http_server_response_db_txn_duration_seconds",
    "",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

# seconds spent waiting for a db connection, when processing this request
response_db_sched_duration = Counter(
    "synapse_http_server_response_db_sched_duration_seconds",
    "",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

# size in bytes of the response written
response_size = Counter(
    "synapse_http_server_response_size",
    "",
    labelnames=["method", "servlet", "tag", SERVER_NAME_LABEL],
)

# In flight metrics are incremented while the requests are in flight, rather
# than when the response was written.
in_flight_requests_ru_utime = Counter(
    "synapse_http_server_in_flight_requests_ru_utime_seconds",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

in_flight_requests_ru_stime = Counter(
    "synapse_http_server_in_flight_requests_ru_stime_seconds",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

in_flight_requests_db_txn_count = Counter(
    "synapse_http_server_in_flight_requests_db_txn_count",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

# seconds spent waiting for db txns, excluding scheduling time, when processing
# this request
in_flight_requests_db_txn_duration = Counter(
    "synapse_http_server_in_flight_requests_db_txn_duration_seconds",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

# seconds spent waiting for a db connection, when processing this request
in_flight_requests_db_sched_duration = Counter(
    "synapse_http_server_in_flight_requests_db_sched_duration_seconds",
    "",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
)

# The set of all requests currently in flight.
_in_flight_requests: Set["RequestMetrics"] = set()

# Protects the _in_flight_requests set from concurrent access
_in_flight_requests_lock = threading.Lock()


def _get_in_flight_counts() -> Mapping[Tuple[str, ...], int]:
    """Returns a count of all in flight requests by (method, server_name)"""
    # Cast to a list to prevent it changing while the Prometheus
    # thread is collecting metrics
    with _in_flight_requests_lock:
        request_metrics = list(_in_flight_requests)

    for request_metric in request_metrics:
        request_metric.update_metrics()

    # Map from (method, name) -> int, the number of in flight requests of that
    # type. The key type is Tuple[str, str], but we leave the length unspecified
    # for compatibility with LaterGauge's annotations.
    counts: Dict[Tuple[str, ...], int] = {}
    for request_metric in request_metrics:
        key = (
            request_metric.method,
            request_metric.name,
            request_metric.our_server_name,
        )
        counts[key] = counts.get(key, 0) + 1

    return counts


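# Report the number of in flight requests via a `LaterGauge`; the counts are
# computed at scrape time by calling `_get_in_flight_counts()` above.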
LaterGauge(
    name="synapse_http_server_in_flight_requests_count",
    desc="",
    labelnames=["method", "servlet", SERVER_NAME_LABEL],
    caller=_get_in_flight_counts,
)


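# Tracks the resource usage of a single HTTP request: `start` is called when
# the request begins, `update_metrics` flushes interim usage into the
# "in flight" counters, and `stop` records the final response metrics.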
class RequestMetrics:
    def __init__(self, our_server_name: str) -> None:
        """
        Args:
            our_server_name: Our homeserver name (used to label metrics) (`hs.hostname`)
        """
        self.our_server_name = our_server_name

    def start(self, time_sec: float, name: str, method: str) -> None:
        self.start_ts = time_sec
        self.start_context = current_context()
        self.name = name
        self.method = method

        if self.start_context:
            # _request_stats records resource usage that we have already added
            # to the "in flight" metrics.
            self._request_stats = self.start_context.get_resource_usage()
        else:
            logger.error(
                "Tried to start a RequestMetric from the sentinel context.\n%s",
                "".join(traceback.format_stack()),
            )

        with _in_flight_requests_lock:
            _in_flight_requests.add(self)

    def stop(self, time_sec: float, response_code: int, sent_bytes: int) -> None:
        with _in_flight_requests_lock:
            _in_flight_requests.discard(self)

        context = current_context()

        tag = ""
        if context:
            tag = context.tag

            if context != self.start_context:
                logger.error(
                    "Context has unexpectedly changed %r, %r",
                    context,
                    self.start_context,
                )
                return
        else:
            logger.error(
                "Trying to stop RequestMetrics in the sentinel context.\n%s",
                "".join(traceback.format_stack()),
            )
            return

        response_code_str = str(response_code)

        outgoing_responses_counter.labels(
            method=self.method,
            code=response_code_str,
            **{SERVER_NAME_LABEL: self.our_server_name},
        ).inc()

        response_base_labels = {
            "method": self.method,
            "servlet": self.name,
            "tag": tag,
            SERVER_NAME_LABEL: self.our_server_name,
        }
        response_count.labels(**response_base_labels).inc()

        response_timer.labels(
            code=response_code_str,
            **response_base_labels,
        ).observe(time_sec - self.start_ts)

        resource_usage = context.get_resource_usage()

        response_ru_utime.labels(**response_base_labels).inc(resource_usage.ru_utime)
        response_ru_stime.labels(**response_base_labels).inc(resource_usage.ru_stime)
        response_db_txn_count.labels(**response_base_labels).inc(
            resource_usage.db_txn_count
        )
        response_db_txn_duration.labels(**response_base_labels).inc(
            resource_usage.db_txn_duration_sec
        )
        response_db_sched_duration.labels(**response_base_labels).inc(
            resource_usage.db_sched_duration_sec
        )

        response_size.labels(**response_base_labels).inc(sent_bytes)

        # We always call this at the end to ensure that we update the metrics
        # regardless of whether there was a call to /metrics while the request
        # was in flight.
        self.update_metrics()

    def update_metrics(self) -> None:
        """Updates the in flight metrics with values from this request."""
        if not self.start_context:
            logger.error(
                "Tried to update a RequestMetric from the sentinel context.\n%s",
                "".join(traceback.format_stack()),
            )
            return

        new_stats = self.start_context.get_resource_usage()

        diff = new_stats - self._request_stats
        self._request_stats = new_stats

        in_flight_labels = {
            "method": self.method,
            "servlet": self.name,
            SERVER_NAME_LABEL: self.our_server_name,
        }

        # max() is used since rapid use of ru_stime/ru_utime can end up with the
        # count going backwards due to NTP, time smearing, fine-grained
        # correction, or floating points. Who knows, really?
        in_flight_requests_ru_utime.labels(**in_flight_labels).inc(
            max(diff.ru_utime, 0)
        )
        in_flight_requests_ru_stime.labels(**in_flight_labels).inc(
            max(diff.ru_stime, 0)
        )

        in_flight_requests_db_txn_count.labels(**in_flight_labels).inc(
            diff.db_txn_count
        )
        in_flight_requests_db_txn_duration.labels(**in_flight_labels).inc(
            diff.db_txn_duration_sec
        )
        in_flight_requests_db_sched_duration.labels(**in_flight_labels).inc(
            diff.db_sched_duration_sec
        )