Compare commits

1010 Commits

Author SHA1 Message Date
Andrew Morgan 86af9814ef Merge branch 'develop' of github.com:element-hq/synapse into matrix-org-hotfixes 2025-12-17 12:01:05 +00:00
Andrew Ferrazzutti f4320b5a49 Admin API: worker support for Query User Account (#19281) 2025-12-16 17:42:08 +00:00
Tulir Asokan 3989d22a37 Implement pagination for MSC2666 (#19279)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-16 15:24:36 +00:00
Joshua Goins 0395b71e25 Fix Mastodon URL previews not showing anything useful (#19231)
Fixes #18444. In UrlPreviewer, we need to combine two dicts (one from
oEmbed and one from OpenGraph metadata in the HTML), and in Mastodon's
case the two were very different.

Single Page Applications (SPAs) seem to sometimes provide better information in the OpenGraph tags
than the oEmbed stubs, because the oEmbed stubs are filled in with JavaScript that Synapse does
not execute.

This change improves previews on Mastodon and YouTube (for the same reason).

Tested to not regress previews of Twitter or GitHub.
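
A minimal sketch of the dict-combining behaviour being described, assuming a simple "prefer non-empty values" merge; this is illustrative only and not the actual `UrlPreviewer` code:

```python
# Illustrative only: a hypothetical merge helper, not Synapse's UrlPreviewer.
def merge_preview_data(oembed: dict, open_graph: dict) -> dict:
    """Combine oEmbed and OpenGraph metadata, never letting an empty oEmbed
    stub value clobber a useful OpenGraph one."""
    merged = dict(open_graph)
    for key, value in oembed.items():
        # Only let oEmbed override OpenGraph when it actually has content.
        if value not in (None, ""):
            merged[key] = value
    return merged
```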
2025-12-16 13:02:29 +00:00
Denis Kasak 29fd0116a5 Improve proxy support for the federation_client.py dev script (#19300)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-16 11:06:07 +00:00
Travis Ralston 0f2b29511f Allow admins to bypass the quarantine check on media downloads (#19275)
Co-authored-by: turt2live <1190097+turt2live@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-15 17:23:33 +00:00
Andre Klärner 466994743a Document importance of public_baseurl for delegation and OIDC (#19270)
I just stumbled across the fact that my config used delegation as
recommended by the docs, and hosted Synapse on a subdomain. However, my
config never had `public_baseurl` set and worked without issues, until
I just now tried to set up OIDC.

OIDC is initiated by the client, which is told to open a URL on the
homeserver. Initially the correct URL is called, but Synapse does not
recognize it unless `public_baseurl` is set correctly. After changing
this, it immediately started working.

To prevent anybody from making the same mistake, this adds a small
clarifying block to the OIDC docs.
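
For illustration, a minimal sketch (with assumed values) of why `public_baseurl` matters here: the OIDC callback URL Synapse expects is derived from it, so it must match the public address the client is actually directed to.

```python
# Assumed example values for illustration; the callback path below is the
# one documented for Synapse's OIDC support.
from urllib.parse import urljoin

public_baseurl = "https://matrix.example.com/"

oidc_callback_url = urljoin(public_baseurl, "_synapse/client/oidc/callback")
print(oidc_callback_url)
# -> https://matrix.example.com/_synapse/client/oidc/callback
```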
2025-12-12 18:07:39 -06:00
Devon Hudson df24e0f302 Fix support for older versions of zope-interface (#19274)
Fixes #19269 

Versions of zope-interface shipped by RHEL, Ubuntu LTS 22 & 24, and
openSUSE don't support the new Python union `X | Y` syntax for
interfaces. This PR partially reverts the switch to the new syntax,
adds a minimum supported version of zope-interface to Synapse's
dependency list, and removes the linter auto-upgrades which prefer the
newer syntax.
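
A minimal sketch of the kind of annotation affected, using a hypothetical interface (not Synapse's actual code): the pre-union `Optional[...]` spelling is kept where older zope-interface releases cannot handle the PEP 604 `X | Y` syntax in interface definitions.

```python
from typing import Optional

from zope.interface import Interface


class IExampleCache(Interface):
    # Hypothetical interface for illustration. Older zope-interface releases
    # need the Optional[...] spelling; with a new enough version this could
    # be written as `-> bytes | None`.
    def get(key: str) -> Optional[bytes]:
        """Return the cached value for `key`, or None if it is absent."""
```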

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-12 15:34:13 +00:00
Andrew Morgan 048629dd13 minor grammar fix
context: https://github.com/element-hq/synapse/pull/19260#discussion_r2614227743
2025-12-12 13:36:34 +00:00
Mathieu Velten 7347cc436e Add memberships admin API (#19260)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-12 13:35:46 +00:00
Travis Ralston 3f636386a6 Add an Admin API endpoint for listing quarantined media (#19268)
Co-authored-by: turt2live <1190097+turt2live@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-12-12 13:30:21 +00:00
Andrew Morgan 1f7f16477d Unpin Rust from 1.82.0 (#19302) 2025-12-12 11:31:55 +00:00
Erik Johnston dfd00a986f Fix sliding sync performance slow down for long lived connections. (#19206)
Fixes https://github.com/element-hq/synapse/issues/19175

This PR moves tracking of which lazy-loaded memberships we've sent for
each room out of the required state table. This stops that table from
continuously growing, which massively helps performance, since we pull
out all matching rows for the connection when we receive a request.

The new table is only read when we have data in a room to send, so we
end up reading a lot fewer rows from the DB, though we now read from
that table for every room we have events to return in, rather than once
at the start of the request.
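
As a rough illustration of that access pattern (hypothetical helper names, not Synapse's actual storage API):

```python
# Hypothetical names, for illustration only.
async def memberships_to_consider(store, connection_id, rooms_with_events):
    """Read the per-connection sent-membership bookkeeping only for rooms
    that actually have events to return, rather than loading every
    required-state row for the connection up front."""
    sent = {}
    for room_id in rooms_with_events:
        sent[room_id] = await store.get_sent_memberships_for_room(
            connection_id, room_id
        )
    return sent
```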

For an explanation of how the new table works, see the
[comment](https://github.com/element-hq/synapse/blob/erikj/sss_better_membership_storage2/synapse/storage/schema/main/delta/93/02_sliding_sync_members.sql#L15-L38)
on the table schema.

The table is designed so that we can later prune old entries if we wish,
but that is not implemented in this PR.

Reviewable commit-by-commit.

---------

Co-authored-by: Eric Eastwood <erice@element.io>
2025-12-12 10:02:57 +00:00
Devon Hudson cdf286d405 Use uv to test full set of minimum deps in CI (#19289)
Stemming from #19274, this updates the `olddeps` CI to test against not
just the minimum versions of our explicit dependencies, but also the
minimum versions of all implicit (transitive) dependencies that are
pulled in by the explicit dependencies themselves.

2025-12-11 17:58:27 +00:00
Andrew Morgan 3aaa2e80b2 Switch the build backend from poetry-core to maturin (#19234) 2025-12-10 14:46:47 +00:00
dependabot[bot] ba774e2311 Bump ruff from 0.14.5 to 0.14.6 in the minor-and-patches group across 1 directory (#19296)
Bumps the minor-and-patches group with 1 update in the / directory:
[ruff](https://github.com/astral-sh/ruff).

Updates `ruff` from 0.14.5 to 0.14.6
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/releases">ruff's
releases</a>.</em></p>
<blockquote>
<h2>0.14.6</h2>
<h2>Release Notes</h2>
<p>Released on 2025-11-21.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-bandit</code>] Support new PySNMP API paths
(<code>S508</code>, <code>S509</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21374">#21374</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Adjust own-line comment placement between branches (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21185">#21185</a>)</li>
<li>Avoid syntax error when formatting attribute expressions with outer
parentheses, parenthesized value, and trailing comment on value (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20418">#20418</a>)</li>
<li>Fix panic when formatting comments in unary expressions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21501">#21501</a>)</li>
<li>Respect <code>fmt: skip</code> for compound statements on a single
line (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20633">#20633</a>)</li>
<li>[<code>refurb</code>] Fix <code>FURB103</code> autofix (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21454">#21454</a>)</li>
<li>[<code>ruff</code>] Fix false positive for complex conversion
specifiers in <code>logging-eager-conversion</code>
(<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21464">#21464</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Avoid false positive on <code>ClassVar</code>
reassignment (<code>RUF012</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21478">#21478</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Render hyperlinks for lint errors (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21514">#21514</a>)</li>
<li>Add a <code>ruff analyze</code> option to skip over imports in
<code>TYPE_CHECKING</code> blocks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21472">#21472</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Limit <code>eglot-format</code> hook to eglot-managed Python buffers
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21459">#21459</a>)</li>
<li>Mention <code>force-exclude</code> in &quot;Configuration &gt;
Python file discovery&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21500">#21500</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/gauthsvenkat"><code>@​gauthsvenkat</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/thamer"><code>@​thamer</code></a></li>
<li><a
href="https://github.com/Ruchir28"><code>@​Ruchir28</code></a></li>
<li><a
href="https://github.com/thejcannon"><code>@​thejcannon</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
</ul>
<h2>Install ruff 0.14.6</h2>
<h3>Install prebuilt binaries via shell script</h3>
<pre lang="sh"><code>curl --proto '=https' --tlsv1.2 -LsSf
https://github.com/astral-sh/ruff/releases/download/0.14.6/ruff-installer.sh
| sh
</code></pre>
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md">ruff's
changelog</a>.</em></p>
<blockquote>
<h2>0.14.6</h2>
<p>Released on 2025-11-21.</p>
<h3>Preview features</h3>
<ul>
<li>[<code>flake8-bandit</code>] Support new PySNMP API paths
(<code>S508</code>, <code>S509</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21374">#21374</a>)</li>
</ul>
<h3>Bug fixes</h3>
<ul>
<li>Adjust own-line comment placement between branches (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21185">#21185</a>)</li>
<li>Avoid syntax error when formatting attribute expressions with outer
parentheses, parenthesized value, and trailing comment on value (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20418">#20418</a>)</li>
<li>Fix panic when formatting comments in unary expressions (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21501">#21501</a>)</li>
<li>Respect <code>fmt: skip</code> for compound statements on a single
line (<a
href="https://redirect.github.com/astral-sh/ruff/pull/20633">#20633</a>)</li>
<li>[<code>refurb</code>] Fix <code>FURB103</code> autofix (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21454">#21454</a>)</li>
<li>[<code>ruff</code>] Fix false positive for complex conversion
specifiers in <code>logging-eager-conversion</code>
(<code>RUF065</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21464">#21464</a>)</li>
</ul>
<h3>Rule changes</h3>
<ul>
<li>[<code>ruff</code>] Avoid false positive on <code>ClassVar</code>
reassignment (<code>RUF012</code>) (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21478">#21478</a>)</li>
</ul>
<h3>CLI</h3>
<ul>
<li>Render hyperlinks for lint errors (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21514">#21514</a>)</li>
<li>Add a <code>ruff analyze</code> option to skip over imports in
<code>TYPE_CHECKING</code> blocks (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21472">#21472</a>)</li>
</ul>
<h3>Documentation</h3>
<ul>
<li>Limit <code>eglot-format</code> hook to eglot-managed Python buffers
(<a
href="https://redirect.github.com/astral-sh/ruff/pull/21459">#21459</a>)</li>
<li>Mention <code>force-exclude</code> in &quot;Configuration &gt;
Python file discovery&quot; (<a
href="https://redirect.github.com/astral-sh/ruff/pull/21500">#21500</a>)</li>
</ul>
<h3>Contributors</h3>
<ul>
<li><a href="https://github.com/ntBre"><code>@​ntBre</code></a></li>
<li><a href="https://github.com/dylwil3"><code>@​dylwil3</code></a></li>
<li><a
href="https://github.com/gauthsvenkat"><code>@​gauthsvenkat</code></a></li>
<li><a
href="https://github.com/MichaReiser"><code>@​MichaReiser</code></a></li>
<li><a href="https://github.com/thamer"><code>@​thamer</code></a></li>
<li><a
href="https://github.com/Ruchir28"><code>@​Ruchir28</code></a></li>
<li><a
href="https://github.com/thejcannon"><code>@​thejcannon</code></a></li>
<li><a
href="https://github.com/danparizher"><code>@​danparizher</code></a></li>
<li><a
href="https://github.com/chirizxc"><code>@​chirizxc</code></a></li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/astral-sh/ruff/commit/59c6cb521d26380120f8522476fd09d296bf4f2e"><code>59c6cb5</code></a>
Bump 0.14.6 (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21558">#21558</a>)</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/54dba15088ced32c31592ab44ddfdf40614002bb"><code>54dba15</code></a>
[ty] Improve debug messages when imports fail (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21555">#21555</a>)</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/1af318534a01ba6be69d7d77c33e96080635a8af"><code>1af3185</code></a>
[ty] Add support for relative import completions</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/553e5686248e86b9c781af77c714adc229bf8cc5"><code>553e568</code></a>
[ty] Refactor detection of import statements for completions</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/cdef3f5ab8115e2581e080341d4bab932d905af4"><code>cdef3f5</code></a>
[ty] Use dedicated collector for completions</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/617882242719af179a42be68170f9de9d83174f4"><code>6178822</code></a>
[ty] Attach subdiagnostics to <code>unresolved-import</code> errors for
relative imports...</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/6b7adb0537d1a57f26cf462a637af483e0ba2c75"><code>6b7adb0</code></a>
[ty] support PEP 613 type aliases (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21394">#21394</a>)</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/06941c1987bd4fe2b54d99b5514784472cabc0b3"><code>06941c1</code></a>
[ty] More low-hanging fruit for inlay hint goto-definition (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21548">#21548</a>)</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/eb7c098d6b70aacbaebe8b7c8d7710ad4ad0f46b"><code>eb7c098</code></a>
[ty] implement <code>TypedDict</code> structural assignment (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21467">#21467</a>)</li>
<li><a
href="https://github.com/astral-sh/ruff/commit/1b28fc1f1417284a075afa4a18c5e9d31a7d8620"><code>1b28fc1</code></a>
[ty] Add more random TypeDetails and tests (<a
href="https://redirect.github.com/astral-sh/ruff/issues/21546">#21546</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/astral-sh/ruff/compare/0.14.5...0.14.6">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=ruff&package-manager=pip&previous-version=0.14.5&new-version=0.14.6)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-09 23:01:56 +00:00
Devon Hudson acafac3bb6 Merge branch 'master' into develop 2025-12-09 09:30:32 -07:00
Devon Hudson 8b0083cad9 Respond with useful error codes when Content-Length header/s are invalid (#19212)
Related to https://github.com/element-hq/synapse/issues/17035: when
Synapse receives a request that is larger than the maximum size allowed,
it aborts the connection without ever sending back an HTTP response.
I dug into our usage of Twisted and how best to report such an error,
and this is what I came up with.

It would be ideal to report the status from within
`handleContentChunk`, but that is called too early in the Twisted HTTP
handling code, before things have been set up enough to properly write
a response.
I tested this change locally (with both the C-S and S-S APIs) and
clients do now receive a 413 response in addition to the connection
being closed.

Hopefully this will aid in quickly detecting when
https://github.com/element-hq/synapse/issues/17035 is occurring, as the
current situation makes it very hard to narrow things down to that
specific issue without making a lot of assumptions.

This PR now also responds with more meaningful error codes (see the
sketch after this list) in the case of:
- multiple `Content-Length` headers
- an invalid `Content-Length` header value
- the request content being larger than the `Content-Length` value
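
A simplified sketch of that header validation (hypothetical helper, not the actual Synapse/Twisted wiring):

```python
from typing import List, Optional

MAX_REQUEST_SIZE = 50 * 1024 * 1024  # assumed limit, for illustration only


def check_content_length(values: List[bytes], max_size: int = MAX_REQUEST_SIZE) -> Optional[int]:
    """Return an HTTP error status for bad Content-Length headers, or None
    if the headers look acceptable."""
    if len(values) > 1:
        return 400  # multiple Content-Length headers
    if values:
        try:
            length = int(values[0])
        except ValueError:
            return 400  # Content-Length is not an integer
        if length > max_size:
            return 413  # declared payload is too large
    return None
```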


---------

Co-authored-by: Eric Eastwood <erice@element.io>
2025-12-08 21:39:18 +00:00
dependabot[bot] 09fd2645c2 Bump urllib3 from 2.5.0 to 2.6.0 (#19282)
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.5.0 to 2.6.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/urllib3/urllib3/releases">urllib3's
releases</a>.</em></p>
<blockquote>
<h2>2.6.0</h2>
<h2>🚀 urllib3 is fundraising for HTTP/2 support</h2>
<p><a
href="https://sethmlarson.dev/urllib3-is-fundraising-for-http2-support">urllib3
is raising ~$40,000 USD</a> to release HTTP/2 support and ensure
long-term sustainable maintenance of the project after a sharp decline
in financial support. If your company or organization uses Python and
would benefit from HTTP/2 support in Requests, pip, cloud SDKs, and
thousands of other projects <a
href="https://opencollective.com/urllib3">please consider contributing
financially</a> to ensure HTTP/2 support is developed sustainably and
maintained for the long-haul.</p>
<p>Thank you for your support.</p>
<h2>Security</h2>
<ul>
<li>Fixed a security issue where streaming API could improperly handle
highly compressed HTTP content (&quot;decompression bombs&quot;) leading
to excessive resource consumption even when a small amount of data was
requested. Reading small chunks of compressed data is safer and much
more efficient now. (CVE-2025-66471 reported by <a
href="https://github.com/Cycloctane"><code>@​Cycloctane</code></a>, 8.9
High, GHSA-2xpw-w6gg-jr37)</li>
<li>Fixed a security issue where an attacker could compose an HTTP
response with virtually unlimited links in the
<code>Content-Encoding</code> header, potentially leading to a denial of
service (DoS) attack by exhausting system resources during decoding. The
number of allowed chained encodings is now limited to 5. (CVE-2025-66418
reported by <a
href="https://github.com/illia-v"><code>@​illia-v</code></a>, 8.9 High,
GHSA-gm62-xv2j-4w53)</li>
</ul>
<blockquote>
<p>[!IMPORTANT]</p>
<ul>
<li>If urllib3 is not installed with the optional
<code>urllib3[brotli]</code> extra, but your environment contains a
Brotli/brotlicffi/brotlipy package anyway, make sure to upgrade it to at
least Brotli 1.2.0 or brotlicffi 1.2.0.0 to benefit from the security
fixes and avoid warnings. Prefer using <code>urllib3[brotli]</code> to
install a compatible Brotli package automatically.</li>
<li>If you use custom decompressors, please make sure to update them to
respect the changed API of
<code>urllib3.response.ContentDecoder</code>.</li>
</ul>
</blockquote>
<h2>Features</h2>
<ul>
<li>Enabled retrieval, deletion, and membership testing in
<code>HTTPHeaderDict</code> using bytes keys. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3653">#3653</a>)</li>
<li>Added host and port information to string representations of
<code>HTTPConnection</code>. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3666">#3666</a>)</li>
<li>Added support for Python 3.14 free-threading builds explicitly. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3696">#3696</a>)</li>
</ul>
<h2>Removals</h2>
<ul>
<li>Removed the <code>HTTPResponse.getheaders()</code> method in favor
of <code>HTTPResponse.headers</code>. Removed the
<code>HTTPResponse.getheader(name, default)</code> method in favor of
<code>HTTPResponse.headers.get(name, default)</code>. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3622">#3622</a>)</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>Fixed redirect handling in <code>urllib3.PoolManager</code> when an
integer is passed for the retries parameter. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3649">#3649</a>)</li>
<li>Fixed <code>HTTPConnectionPool</code> when used in Emscripten with
no explicit port. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3664">#3664</a>)</li>
<li>Fixed handling of <code>SSLKEYLOGFILE</code> with expandable
variables. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3700">#3700</a>)</li>
</ul>
<h2>Misc</h2>
<ul>
<li>Changed the <code>zstd</code> extra to install
<code>backports.zstd</code> instead of <code>zstandard</code> on Python
3.13 and before. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3693">#3693</a>)</li>
<li>Improved the performance of content decoding by optimizing
<code>BytesQueueBuffer</code> class. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3710">#3710</a>)</li>
<li>Allowed building the urllib3 package with newer setuptools-scm v9.x.
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3652">#3652</a>)</li>
<li>Ensured successful urllib3 builds by setting Hatchling requirement
to ≥ 1.27.0. (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3638">#3638</a>)</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/urllib3/urllib3/blob/main/CHANGES.rst">urllib3's
changelog</a>.</em></p>
<blockquote>
<h1>2.6.0 (2025-12-05)</h1>
<h2>Security</h2>
<ul>
<li>Fixed a security issue where streaming API could improperly handle
highly
compressed HTTP content (&quot;decompression bombs&quot;) leading to
excessive resource
consumption even when a small amount of data was requested. Reading
small
chunks of compressed data is safer and much more efficient now.
(<code>GHSA-2xpw-w6gg-jr37
&lt;https://github.com/urllib3/urllib3/security/advisories/GHSA-2xpw-w6gg-jr37&gt;</code>__)</li>
<li>Fixed a security issue where an attacker could compose an HTTP
response with
virtually unlimited links in the <code>Content-Encoding</code> header,
potentially
leading to a denial of service (DoS) attack by exhausting system
resources
during decoding. The number of allowed chained encodings is now limited
to 5.
(<code>GHSA-gm62-xv2j-4w53
&lt;https://github.com/urllib3/urllib3/security/advisories/GHSA-gm62-xv2j-4w53&gt;</code>__)</li>
</ul>
<p>.. caution::</p>
<ul>
<li>
<p>If urllib3 is not installed with the optional
<code>urllib3[brotli]</code> extra, but
your environment contains a Brotli/brotlicffi/brotlipy package anyway,
make
sure to upgrade it to at least Brotli 1.2.0 or brotlicffi 1.2.0.0 to
benefit from the security fixes and avoid warnings. Prefer using
<code>urllib3[brotli]</code> to install a compatible Brotli package
automatically.</p>
</li>
<li>
<p>If you use custom decompressors, please make sure to update them to
respect the changed API of
<code>urllib3.response.ContentDecoder</code>.</p>
</li>
</ul>
<h2>Features</h2>
<ul>
<li>Enabled retrieval, deletion, and membership testing in
<code>HTTPHeaderDict</code> using bytes keys.
(<code>[#3653](https://github.com/urllib3/urllib3/issues/3653)
&lt;https://github.com/urllib3/urllib3/issues/3653&gt;</code>__)</li>
<li>Added host and port information to string representations of
<code>HTTPConnection</code>.
(<code>[#3666](https://github.com/urllib3/urllib3/issues/3666)
&lt;https://github.com/urllib3/urllib3/issues/3666&gt;</code>__)</li>
<li>Added support for Python 3.14 free-threading builds explicitly.
(<code>[#3696](https://github.com/urllib3/urllib3/issues/3696)
&lt;https://github.com/urllib3/urllib3/issues/3696&gt;</code>__)</li>
</ul>
<h2>Removals</h2>
<ul>
<li>Removed the <code>HTTPResponse.getheaders()</code> method in favor
of <code>HTTPResponse.headers</code>.
Removed the <code>HTTPResponse.getheader(name, default)</code> method in
favor of <code>HTTPResponse.headers.get(name, default)</code>.
(<code>[#3622](https://github.com/urllib3/urllib3/issues/3622)
&lt;https://github.com/urllib3/urllib3/issues/3622&gt;</code>__)</li>
</ul>
<h2>Bugfixes</h2>
<ul>
<li>Fixed redirect handling in <code>urllib3.PoolManager</code> when an
integer is passed
for the retries parameter.
(<code>[#3649](https://github.com/urllib3/urllib3/issues/3649)
&lt;https://github.com/urllib3/urllib3/issues/3649&gt;</code>__)</li>
<li>Fixed <code>HTTPConnectionPool</code> when used in Emscripten with
no explicit port.
(<code>[#3664](https://github.com/urllib3/urllib3/issues/3664)
&lt;https://github.com/urllib3/urllib3/issues/3664&gt;</code>__)</li>
<li>Fixed handling of <code>SSLKEYLOGFILE</code> with expandable
variables.
(<code>[#3700](https://github.com/urllib3/urllib3/issues/3700)
&lt;https://github.com/urllib3/urllib3/issues/3700&gt;</code>__)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/urllib3/urllib3/commit/720f484b605f18887a48eef448d0084e2b76902d"><code>720f484</code></a>
Release 2.6.0</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/24d7b67eac89f94e11003424bcf0d8f7b72222a8"><code>24d7b67</code></a>
Merge commit from fork</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/c19571de34c47de3a766541b041637ba5f716ed7"><code>c19571d</code></a>
Merge commit from fork</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/816fcf04528bc0f89672e13398eb813dcc892490"><code>816fcf0</code></a>
Bump actions/setup-python from 6.0.0 to 6.1.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3725">#3725</a>)</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/18af0a10efc4c99dd028f7ad5a461470b9a8b0fd"><code>18af0a1</code></a>
Improve speed of <code>BytesQueueBuffer.get()</code> by using memoryview
(<a
href="https://redirect.github.com/urllib3/urllib3/issues/3711">#3711</a>)</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/1f6abac3e6d426c3939b8a17cf4afa099e691ab2"><code>1f6abac</code></a>
Bump versions of pre-commit hooks (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3716">#3716</a>)</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/1c8fbf787b8e6ed151842c5d6874c9d5bdbf1d0b"><code>1c8fbf7</code></a>
Bump actions/checkout from 5.0.0 to 6.0.0 (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3722">#3722</a>)</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/7784b9eee95b7c90802c02b111e98df70259ae4f"><code>7784b9e</code></a>
Add Python 3.15 to CI (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3717">#3717</a>)</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/0241c9e7286d3008e3cce18effc13b40dc633385"><code>0241c9e</code></a>
Updated docs to reflect change in optional zstd dependency from
<code>zstandard</code> t...</li>
<li><a
href="https://github.com/urllib3/urllib3/commit/7afcabb6489d9a8ea95a40e5afcb46463af17351"><code>7afcabb</code></a>
Expand environment variable of SSLKEYLOGFILE (<a
href="https://redirect.github.com/urllib3/urllib3/issues/3705">#3705</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/urllib3/urllib3/compare/2.5.0...2.6.0">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=urllib3&package-manager=pip&previous-version=2.5.0&new-version=2.6.0)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-05 23:51:29 +00:00
dependabot[bot] 891983f3f4 Bump the minor-and-patches group with 3 updates (#19280)
Bumps the minor-and-patches group with 3 updates:
[mypy](https://github.com/python/mypy),
[mypy-zope](https://github.com/Shoobx/mypy-zope) and
[phonenumbers](https://github.com/daviddrysdale/python-phonenumbers).

Updates `mypy` from 1.17.1 to 1.18.2
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/python/mypy/blob/master/CHANGELOG.md">mypy's
changelog</a>.</em></p>
<blockquote>
<h3>Mypy 1.18.2</h3>
<ul>
<li>Fix crash on recursive alias (Ivan Levkivskyi, PR <a
href="https://redirect.github.com/python/mypy/pull/19845">19845</a>)</li>
<li>Add additional guidance for stubtest errors when runtime is
<code>object.__init__</code> (Stephen Morton, PR <a
href="https://redirect.github.com/python/mypy/pull/19733">19733</a>)</li>
<li>Fix handling of None values in f-string expressions in mypyc
(BobTheBuidler, PR <a
href="https://redirect.github.com/python/mypy/pull/19846">19846</a>)</li>
</ul>
<h3>Acknowledgements</h3>
<p>Thanks to all mypy contributors who contributed to this release:</p>
<ul>
<li>Ali Hamdan</li>
<li>Anthony Sottile</li>
<li>BobTheBuidler</li>
<li>Brian Schubert</li>
<li>Chainfire</li>
<li>Charlie Denton</li>
<li>Christoph Tyralla</li>
<li>CoolCat467</li>
<li>Daniel Hnyk</li>
<li>Emily</li>
<li>Emma Smith</li>
<li>Ethan Sarp</li>
<li>Ivan Levkivskyi</li>
<li>Jahongir Qurbonov</li>
<li>Jelle Zijlstra</li>
<li>Joren Hammudoglu</li>
<li>Jukka Lehtosalo</li>
<li>Marc Mueller</li>
<li>Omer Hadari</li>
<li>Piotr Sawicki</li>
<li>PrinceNaroliya</li>
<li>Randolf Scholz</li>
<li>Robsdedude</li>
<li>Saul Shanabrook</li>
<li>Shantanu</li>
<li>Stanislav Terliakov</li>
<li>Stephen Morton</li>
<li>wyattscarpenter</li>
</ul>
<p>I’d also like to thank my employer, Dropbox, for supporting mypy
development.</p>
<h2>Mypy 1.17</h2>
<p>We’ve just uploaded mypy 1.17 to the Python Package Index (<a
href="https://pypi.org/project/mypy/">PyPI</a>).
Mypy is a static type checker for Python. This release includes new
features and bug fixes.
You can install it as follows:</p>
<pre><code>python3 -m pip install -U mypy
</code></pre>
<p>You can read the full documentation for this release on <a
href="http://mypy.readthedocs.io">Read the Docs</a>.</p>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/python/mypy/commit/df05f05555ee62dbdb9960c64cad186172e92be1"><code>df05f05</code></a>
remove +dev from version</li>
<li><a
href="https://github.com/python/mypy/commit/01a7a1285d03cb7a330359b22cb462aacb5f9720"><code>01a7a12</code></a>
Update changelog for 1.18.2 (<a
href="https://redirect.github.com/python/mypy/issues/19873">#19873</a>)</li>
<li><a
href="https://github.com/python/mypy/commit/ca5abf09f3bfc543ac9c18a364696bc5da20bc03"><code>ca5abf0</code></a>
Typeshed cherry-pick: Make type of <code>unitest.mock.Any</code> a
subclass of <code>Any</code> (<a
href="https://redirect.github.com/python/mypy/issues/1">#1</a>...</li>
<li><a
href="https://github.com/python/mypy/commit/9d794b57d9c5b03d61caa3286756c05e0ae3021b"><code>9d794b5</code></a>
[mypyc] fix: inappropriate <code>None</code>s in f-strings (<a
href="https://redirect.github.com/python/mypy/issues/19846">#19846</a>)</li>
<li><a
href="https://github.com/python/mypy/commit/2c0510c84868b6bb42ef0f305b701e530a85c25f"><code>2c0510c</code></a>
stubtest: additional guidance on errors when runtime is
object.<strong>init</strong> (<a
href="https://redirect.github.com/python/mypy/issues/19733">#19733</a>)</li>
<li><a
href="https://github.com/python/mypy/commit/2f3f03c3e39e68dbfa3544c01a34f99803b3e1c2"><code>2f3f03c</code></a>
Bump version to 1.18.2+dev for point release</li>
<li><a
href="https://github.com/python/mypy/commit/76698412bc1f3ca99000d52649acd5a0e06aa71d"><code>7669841</code></a>
Fix crash on recursive alias in indirection.py (<a
href="https://redirect.github.com/python/mypy/issues/19845">#19845</a>)</li>
<li><a
href="https://github.com/python/mypy/commit/03fbaa941bccc3a9f8aea796d586603b67119bf2"><code>03fbaa9</code></a>
bump version to 1.18.1 due to wheels failure</li>
<li><a
href="https://github.com/python/mypy/commit/b44a1fbf0cf9fd90fd29d6bcd9f64c55dd2fd4c8"><code>b44a1fb</code></a>
removed +dev from version</li>
<li><a
href="https://github.com/python/mypy/commit/7197a99d1aebb1b7a584f82a53c44efb7dddf136"><code>7197a99</code></a>
Removed Unreleased in the Changelog for Release 1.18 (<a
href="https://redirect.github.com/python/mypy/issues/19827">#19827</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/python/mypy/compare/v1.17.1...v1.18.2">compare
view</a></li>
</ul>
</details>
<br />

Updates `mypy-zope` from 1.0.13 to 1.0.14
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/Shoobx/mypy-zope/blob/master/CHANGELOG.md">mypy-zope's
changelog</a>.</em></p>
<blockquote>
<h2>1.0.14 (2025-12-01)</h2>
<hr />
<ul>
<li>Support mypy-1.19</li>
<li>Support mypy-1.18</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/38d22f3f4f6fc1b85c9a0d3b64577749f682a6d2"><code>38d22f3</code></a>
Preparing release 1.0.14</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/76762ec861b1b3cddc471f1e8949902f3f3264be"><code>76762ec</code></a>
Maintain changelog</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/4971d98ab80db3362bd5864a92d385e8c5a43aec"><code>4971d98</code></a>
Merge pull request <a
href="https://redirect.github.com/Shoobx/mypy-zope/issues/134">#134</a>
from Shoobx/dependabot/pip/mypy-gte-1.0.0-and-lt-1.20.0</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/47af89d2c709f9e478ddbd5590994eba0829fb34"><code>47af89d</code></a>
Update mypy requirement from &lt;1.19.0,&gt;=1.0.0 to
&gt;=1.0.0,&lt;1.20.0</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/0c596ff804a1ece6a8d33c0abf1587a766e047c8"><code>0c596ff</code></a>
Maintain changelog</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/dcaa27841defa865c474f585e8bb799a8242399f"><code>dcaa278</code></a>
Merge pull request <a
href="https://redirect.github.com/Shoobx/mypy-zope/issues/132">#132</a>
from Shoobx/dependabot/pip/mypy-gte-1.0.0-and-lt-1.19.0</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/8f7b6778df6e3dde48c082ec07182e50b9938c76"><code>8f7b677</code></a>
Update mypy requirement from &lt;1.18.0,&gt;=1.0.0 to
&gt;=1.0.0,&lt;1.19.0</li>
<li><a
href="https://github.com/Shoobx/mypy-zope/commit/91b275b364cc90403b2a160695f32ca27bc06bb9"><code>91b275b</code></a>
Back to development: 1.0.14</li>
<li>See full diff in <a
href="https://github.com/Shoobx/mypy-zope/compare/1.0.13...1.0.14">compare
view</a></li>
</ul>
</details>
<br />

Updates `phonenumbers` from 9.0.18 to 9.0.19
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/daviddrysdale/python-phonenumbers/commit/38f2ffe1e816e22a3305ee0e11ece993d4f5d1b6"><code>38f2ffe</code></a>
Prep for 9.0.19 release</li>
<li><a
href="https://github.com/daviddrysdale/python-phonenumbers/commit/cd7f0cc64f1ff4c6b28a83102bf0372bb5653d91"><code>cd7f0cc</code></a>
Generated files for metadata</li>
<li><a
href="https://github.com/daviddrysdale/python-phonenumbers/commit/40ae18f50a74c3d7958011dfdfcef0a3981dc535"><code>40ae18f</code></a>
Merge metadata changes from upstream 9.0.19</li>
<li>See full diff in <a
href="https://github.com/daviddrysdale/python-phonenumbers/compare/v9.0.18...v9.0.19">compare
view</a></li>
</ul>
</details>
<br />

**Does not** update `pysaml2` from 7.5.0 to 7.5.4, since this would
downgrade pyOpenSSL.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a
href="https://github.com/IdentityPython/pysaml2/releases">pysaml2's
releases</a>.</em></p>
<blockquote>
<h2>Version v7.5.4</h2>
<h2>v7.5.4 (2025-10-07)</h2>
<ul>
<li>Minor refactor to handle <code>shelve.open</code> and
<code>dbm</code> errors</li>
<li>Remove import of deprecated <code>cgi</code> module</li>
<li>Replace deprecated <code>datetime.utcnow()</code> by
<code>datetime.now(timezone.utc)</code></li>
<li>deps: Remove the <code>importlib_metadata</code> dependency</li>
<li>deps: Remove the <code>importlib_resources</code> dependency</li>
<li>deps: Update dependency versions and lockfile</li>
<li>build: Update pyproject and lockfile to be compatible with PEP
621</li>
<li>docs: Correct spelling mistakes</li>
<li>docs: Fix internal references/links</li>
<li>docs: Clarify units for accepted_time_diff config param</li>
<li>docs: Correct documentation for contact_person</li>
</ul>
<h2>Version 7.5.3</h2>
<h2>7.5.3 (2025-10-04)</h2>
<ul>
<li><a
href="https://redirect.github.com/IdentityPython/pysaml2/issues/973">#973</a>
Fix prepare_for_negotiated_authenticate to avoid double signing redirect
requests</li>
</ul>
<h2>Version 7.5.2</h2>
<h2>7.5.2 (2025-02-10)</h2>
<ul>
<li>Include the XSD of the XML Encryption Syntax and Processing Version
1.1 to the schema validator</li>
</ul>
<h2>Version 7.5.1</h2>
<h2>7.5.1 (2025-02-10)</h2>
<ul>
<li>deps: restrict pyOpenSSL up to v24.2.1 until it is replaced</li>
<li>deps: update dependencies for the lockfile and examples</li>
</ul>
</blockquote>
</details>
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/IdentityPython/pysaml2/blob/master/CHANGELOG.md">pysaml2's
changelog</a>.</em></p>
<blockquote>
<h2>v7.5.4 (2025-10-07)</h2>
<ul>
<li>Minor refactor to handle <code>shelve.open</code> and
<code>dbm</code> errors</li>
<li>Remove import of deprecated <code>cgi</code> module</li>
<li>Replace deprecated <code>datetime.utcnow()</code> by
<code>datetime.now(timezone.utc)</code></li>
<li>deps: Remove the <code>importlib_metadata</code> dependency</li>
<li>deps: Remove the <code>importlib_resources</code> dependency</li>
<li>deps: Update dependency versions and lockfile</li>
<li>build: Update pyproject and lockfile to be compatible with PEP
621</li>
<li>docs: Correct spelling mistakes</li>
<li>docs: Fix internal references/links</li>
<li>docs: Clarify units for accepted_time_diff config param</li>
<li>docs: Correct documentation for contact_person</li>
</ul>
<h2>7.5.3 (2025-10-04)</h2>
<ul>
<li><a
href="https://redirect.github.com/IdentityPython/pysaml2/issues/973">#973</a>
Fix prepare_for_negotiated_authenticate to avoid double signing redirect
requests</li>
</ul>
<h2>7.5.2 (2025-02-10)</h2>
<ul>
<li>Include the XSD of the XML Encryption Syntax and Processing Version
1.1 to the schema validator</li>
</ul>
<h2>7.5.1 (2025-02-10)</h2>
<ul>
<li>deps: restrict pyOpenSSL up to v24.2.1 until it is replaced</li>
<li>deps: update dependencies for the lockfile and examples</li>
</ul>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/9cf71f7f9e37e2bfa10d885ffe6252743b84bfb8"><code>9cf71f7</code></a>
Release version 7.5.4</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/c3ec7199d117c2918b5e5057ee608636c12579ac"><code>c3ec719</code></a>
Refactor _shelve_compat</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/1d6ea6024efc84486b668a9d375b8eb9d983a80a"><code>1d6ea60</code></a>
Remove import of deprecated cgi module</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/c45eb9df8248ad983b220bb1fd2b4e8ed779a40d"><code>c45eb9d</code></a>
Replace deprecated datetime.utcnow() by datetime.now(timezone.utc)</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/178f6d12b43a52a37b59b2ed1fee308af3537082"><code>178f6d1</code></a>
Remove unneeded dependencies</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/1f0a25a5cf122e47aaafbfd93397d095ca9005e4"><code>1f0a25a</code></a>
remove importlib_metadata import</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/099f716ae7245732dfb302a69074a331914136ff"><code>099f716</code></a>
remove importlib_resources imports</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/3fa11ee15db1a69426db3546586ac59056cc6c2c"><code>3fa11ee</code></a>
spelling updates.</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/4b7887f59a5e24650ad52d05c2285bd5dbb53a3e"><code>4b7887f</code></a>
update link.</li>
<li><a
href="https://github.com/IdentityPython/pysaml2/commit/bc8d3b4ecce7753f2473100127581c895134541a"><code>bc8d3b4</code></a>
update link.</li>
<li>Additional commits viewable in <a
href="https://github.com/IdentityPython/pysaml2/compare/v7.5.0...v7.5.4">compare
view</a></li>
</ul>
</details>
<br />


Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Devon Hudson <devonhudson@librem.one>
2025-12-05 22:11:58 +00:00
Andrew Morgan a096fba969 Group non-breaking dependabot PRs together to reduce review load (#18402) 2025-12-05 10:48:01 +00:00
Devon Hudson e8710e7c5e Don't include debug logs in Clock unless explicitly enabled (#19278)
Fixes #19276

This log line includes stack traces, which results in a ton of noise
and is confusing to users, since it looks like an error in the logs.
This PR removes the stack trace from the log; it can be re-enabled on
demand if deemed necessary in the future.
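
A minimal sketch of the idea, with hypothetical names (not the actual `Clock` code): the stack trace is only captured and logged when debug output is explicitly enabled.

```python
import logging
import traceback

logger = logging.getLogger(__name__)

CLOCK_DEBUG_ENABLED = False  # assumed opt-in flag, for illustration only


def log_scheduled_call(delay_seconds: float) -> None:
    if CLOCK_DEBUG_ENABLED and logger.isEnabledFor(logging.DEBUG):
        # Stack traces are noisy and look like errors, so only include them
        # when explicitly requested.
        logger.debug(
            "Scheduling call in %ss:\n%s",
            delay_seconds,
            "".join(traceback.format_stack()),
        )
```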

2025-12-04 23:49:24 +00:00
Devon Hudson 978ae0b080 Merge branch 'release-v1.144' into develop 2025-12-02 15:06:23 -07:00
dependabot[bot] 93e658bd13 Bump cryptography from 45.0.7 to 46.0.3 (#19266)
Bumps [cryptography](https://github.com/pyca/cryptography) from 45.0.7
to 46.0.3.
<details>
<summary>Changelog</summary>
<p><em>Sourced from <a
href="https://github.com/pyca/cryptography/blob/main/CHANGELOG.rst">cryptography's
changelog</a>.</em></p>
<blockquote>
<p>46.0.3 - 2025-10-15</p>
<pre><code>
* Fixed compilation when using LibreSSL 4.2.0.
<p>.. _v46-0-2:</p>
<p>46.0.2 - 2025-09-30<br />
</code></pre></p>
<ul>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.4.</li>
</ul>
<p>.. _v46-0-1:</p>
<p>46.0.1 - 2025-09-16</p>
<pre><code>
* Fixed an issue where users installing via ``pip`` on Python 3.14
development
  versions would not properly install a dependency.
* Fixed an issue building the free-threaded macOS 3.14 wheels.
<p>.. _v46-0-0:</p>
<p>46.0.0 - 2025-09-16<br />
</code></pre></p>
<ul>
<li><strong>BACKWARDS INCOMPATIBLE:</strong> Support for Python 3.7 has
been removed.</li>
<li>Support for OpenSSL &lt; 3.0 is deprecated and will be removed in
the next
release.</li>
<li>Support for <code>x86_64</code> macOS (including publishing wheels)
is deprecated
and will be removed in two releases. We will switch to publishing an
<code>arm64</code> only wheel for macOS.</li>
<li>Support for 32-bit Windows (including publishing wheels) is
deprecated
and will be removed in two releases. Users should move to a 64-bit
Python installation.</li>
<li>Updated Windows, macOS, and Linux wheels to be compiled with OpenSSL
3.5.3.</li>
<li>We now build <code>ppc64le</code> <code>manylinux</code> wheels and
publish them to PyPI.</li>
<li>We now build <code>win_arm64</code> (Windows on Arm) wheels and
publish them to PyPI.</li>
<li>Added support for free-threaded Python 3.14.</li>
<li>Removed the deprecated <code>get_attribute_for_oid</code> method on
:class:<code>~cryptography.x509.CertificateSigningRequest</code>. Users
should use
:meth:<code>~cryptography.x509.Attributes.get_attribute_for_oid</code>
instead.</li>
<li>Removed the deprecated <code>CAST5</code>, <code>SEED</code>,
<code>IDEA</code>, and <code>Blowfish</code>
classes from the cipher module. These are still available in
:doc:<code>/hazmat/decrepit/index</code>.</li>
<li>In X.509, when performing a PSS signature with a SHA-3 hash, it is
now
encoded with the official NIST SHA3 OID.</li>
</ul>
<p>.. _v45-0-7:</p>
</blockquote>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a
href="https://github.com/pyca/cryptography/commit/c0af4dd7b75921bbe9f1d41a03dbd4b64a9e3403"><code>c0af4dd</code></a>
release 46.0.3 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13681">#13681</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/99efe5ad150a56efadafacaffd0e3ee319373904"><code>99efe5a</code></a>
bump version for 46.0.2 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13531">#13531</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/e735cfc27502320101c130335c556394a125ba52"><code>e735cfc</code></a>
release 46.0.1 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13450">#13450</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/4e457ffba43a6d87efc63c33041e2081438dd8a4"><code>4e457ff</code></a>
Explicitly specify python in mac uv build invocation (<a
href="https://redirect.github.com/pyca/cryptography/issues/13447">#13447</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/2726efdb6d67f1c90cf9c6062d9fe051965586f8"><code>2726efd</code></a>
Depend on CFFI 2.0.0 or newer on Python &gt; 3.8 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13448">#13448</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/62230623d183706632c0eb7822c96ac95e3710a8"><code>6223062</code></a>
release 46.0.0 (<a
href="https://redirect.github.com/pyca/cryptography/issues/13446">#13446</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/563c4915b0931455706f761a4579c26698824eae"><code>563c491</code></a>
Update comment for pyopenssl-release tag (<a
href="https://redirect.github.com/pyca/cryptography/issues/13445">#13445</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/d2f6f7faced35b73277bc624ff496f8af696b172"><code>d2f6f7f</code></a>
Bump downstream dependencies in CI (<a
href="https://redirect.github.com/pyca/cryptography/issues/13439">#13439</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/e7ab02bd679d0b4f55d93486fe27e62554ef15d4"><code>e7ab02b</code></a>
we'll ship this with 3.5.3 why not (<a
href="https://redirect.github.com/pyca/cryptography/issues/13442">#13442</a>)</li>
<li><a
href="https://github.com/pyca/cryptography/commit/0b68a4bffb61202e4d4f5074cd30c2b9d15f429e"><code>0b68a4b</code></a>
Another pair of bump dependencies fix (<a
href="https://redirect.github.com/pyca/cryptography/issues/13444">#13444</a>)</li>
<li>Additional commits viewable in <a
href="https://github.com/pyca/cryptography/compare/45.0.7...46.0.3">compare
view</a></li>
</ul>
</details>
<br />


[![Dependabot compatibility
score](https://dependabot-badges.githubapp.com/badges/compatibility_score?dependency-name=cryptography&package-manager=pip&previous-version=45.0.7&new-version=46.0.3)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)

Dependabot will resolve any conflicts with this PR as long as you don't
alter it yourself. You can also trigger a rebase manually by commenting
`@dependabot rebase`.


Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-02 20:27:05 +00:00
Devon Hudson d688daf41c Fix bug where Duration was logged incorrectly (#19267)
### Pull Request Checklist

<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->

* [X] Pull request is based on the develop branch
* [X] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [X] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct (run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
2025-12-02 20:08:32 +00:00
dependabot[bot] aff90a5245 Bump bleach from 6.2.0 to 6.3.0 (#19265)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-12-02 20:03:07 +00:00
Eric Eastwood 83023ce1e0 Be able to shutdown homeserver that failed to start (#19232)
For example, a homeserver can fail to `start` if the port is already in
use or the port number is invalid (not 0-65535).
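A minimal sketch of those two failure modes (standard library only, not the code in this PR; `try_bind` is a hypothetical helper name):

```python
import socket

def try_bind(port: int) -> socket.socket:
    """Bind a listener, surfacing the failure modes described above."""
    if not 0 <= port <= 65535:
        raise ValueError(f"Invalid port number: {port}")
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Raises OSError (e.g. EADDRINUSE) if the port is already in use.
        sock.bind(("127.0.0.1", port))
    except OSError:
        sock.close()  # release the half-initialised resource before re-raising
        raise
    return sock
```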

Fix https://github.com/element-hq/synapse/issues/19189

Follow-up to https://github.com/element-hq/synapse/pull/18828


### Background

As part of Element's plan to support a light form of vhosting (virtual
host) (multiple instances of Synapse in the same Python process) (c.f
[Synapse Pro for small
hosts](https://docs.element.io/latest/element-server-suite-pro/synapse-pro-for-small-hosts/overview/)),
we're currently diving into the details and implications of running
multiple instances of Synapse in the same Python process.

"Clean tenant deprovisioning" tracked internally by
https://github.com/element-hq/synapse-small-hosts/issues/50
2025-12-02 11:28:46 -06:00
Eric Eastwood 39316672da Be able to shutdown homeserver that hasn't setup (#19187)
For example, a homeserver can fail to `setup` if it fails to connect to
the database.
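A hedged sketch of the lifecycle concern, with hypothetical `setup`/`shutdown` names rather than the real Synapse API; the point is that `shutdown` must tolerate being called when `setup` never completed:

```python
import sqlite3
from typing import Optional

class MiniServer:
    """Toy stand-in for a homeserver with a setup/shutdown lifecycle."""

    def __init__(self, db_path: str) -> None:
        self.db_path = db_path
        self.db_conn: Optional[sqlite3.Connection] = None

    def setup(self) -> None:
        # Connecting to the database can fail, so later setup steps may never run.
        self.db_conn = sqlite3.connect(self.db_path)

    def shutdown(self) -> None:
        # Must cope with partially-initialised state.
        if self.db_conn is not None:
            self.db_conn.close()
            self.db_conn = None

def run(server: MiniServer) -> None:
    try:
        server.setup()
    except sqlite3.Error:
        server.shutdown()  # safe even though setup failed part-way
        raise
```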

Fix https://github.com/element-hq/synapse/issues/19188

Follow-up to https://github.com/element-hq/synapse/pull/18828


### Background

As part of Element's plan to support a light form of vhosting (virtual
host) (multiple instances of Synapse in the same Python process) (c.f
Synapse Pro for small hosts), we're currently diving into the details
and implications of running multiple instances of Synapse in the same
Python process.

"Clean tenant deprovisioning" tracked internally by
https://github.com/element-hq/synapse-small-hosts/issues/50
2025-12-02 10:58:06 -06:00
Andrew Morgan f86918e562 Remove the currently broken netlify GHA workflow (#19262) 2025-12-02 16:46:08 +00:00
Andrew Morgan 3d28e2213f Dependabot: allow 10 open PRs for general updates (#19253) 2025-12-02 16:45:54 +00:00
Andrew Morgan 0dfc21ca9f Remove "Updates to locked dependencies" section from changelog (#19254) 2025-12-02 16:45:41 +00:00
Andrew Morgan ffd0b4c079 Add a 14-day cooldown for dependency updates (#19258) 2025-12-02 16:45:28 +00:00
Devon Hudson c0e9144286 Merge branch 'release-v1.129' into matrix-org-hotfixes 2025-04-16 07:20:54 -06:00
Devon Hudson cffd982d30 Merge remote-tracking branch 'origin/release-v1.129' into matrix-org-hotfixes 2025-04-15 13:04:03 -06:00
Quentin Gliech a8a6401b2f Merge branch 'release-v1.128' into matrix-org-hotfixes 2025-04-01 16:42:21 +02:00
Quentin Gliech 4911795ea2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2025-03-31 10:37:57 +02:00
Andrew Morgan 984d716196 Merge branch 'release-v1.127' of github.com:element-hq/synapse into matrix-org-hotfixes 2025-03-27 10:03:13 +00:00
Erik Johnston a086e3adb7 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2025-03-18 10:33:28 +00:00
Andrew Morgan 06bdc98ca2 Bump db txn expected count in relations tests
As we're now performing another db txn to check if the user is an admin.
2025-03-14 10:18:29 +00:00
Travis Ralston 4ac6aa79bd Empty commit to fix CI 2025-03-14 10:18:24 +00:00
turt2live bf06f42608 Attempt to fix linting 2025-03-14 10:18:15 +00:00
Travis Ralston f7a92226bc changelog 2025-03-14 10:18:09 +00:00
Travis Ralston a65e8640fe Allow admins to see soft failed events 2025-03-14 10:18:05 +00:00
Andrew Morgan 19129ba4b0 Merge branch 'develop' into matrix-org-hotfixes 2025-03-14 10:17:34 +00:00
Olivier 'reivilibre 2e609202b4 Merge branch 'release-v1.126' into matrix-org-hotfixes 2025-03-07 16:03:23 +00:00
Olivier 'reivilibre a9f27ff73f Merge branch 'release-v1.126' into matrix-org-hotfixes 2025-03-04 13:31:38 +00:00
Quentin Gliech db9a75c46b Merge remote-tracking branch 'origin/release-v1.125' into matrix-org-hotfixes 2025-02-18 14:35:44 +01:00
Till Faelligen bbafd5a21d Merge remote-tracking branch 'origin/release-v1.124' into matrix-org-hotfixes 2025-02-11 11:59:26 +01:00
Erik Johnston fed913aef1 Merge remote-tracking branch 'origin/release-v1.124' into matrix-org-hotfixes 2025-02-07 10:19:09 +00:00
Erik Johnston 78b02a348a Merge remote-tracking branch 'origin/release-v1.124' into matrix-org-hotfixes 2025-02-05 16:26:20 +00:00
Erik Johnston 95f5f2bc10 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2025-02-05 14:20:31 +00:00
Erik Johnston 87bd7ae707 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2025-02-04 09:23:11 +00:00
Andrew Morgan add2f8d03c Move (un)block_room storage function to RoomWorkerStore
This is so workers can call these functions.
2025-01-30 17:45:14 +00:00
Till Faelligen 9d5a6057ec Merge remote-tracking branch 'origin/release-v1.123' into matrix-org-hotfixes 2025-01-21 15:03:55 +01:00
Olivier 'reivilibre 2beaad71cc Merge branch 'release-v1.122' into matrix-org-hotfixes 2025-01-07 15:49:43 +00:00
Till Faelligen 2487ea5b9e Merge remote-tracking branch 'origin/release-v1.121' into matrix-org-hotfixes 2024-12-11 13:16:37 +01:00
Andrew Morgan 0fb2633a27 Merge branch 'release-v1.121' into matrix-org-hotfixes 2024-12-04 14:52:27 +00:00
Quentin Gliech ca87366454 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-12-04 10:16:13 +01:00
Erik Johnston 445b83bdc7 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-11-29 10:11:51 +00:00
Erik Johnston fa8d6d3dbf Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-11-27 14:01:13 +00:00
Quentin Gliech 8eaa88cbaf Merge branch 'release-v1.120' into matrix-org-hotfixes 2024-11-26 14:12:04 +01:00
Quentin Gliech b4b1705927 Merge remote-tracking branch 'origin/release-v1.120' into matrix-org-hotfixes 2024-11-26 08:47:48 +01:00
Olivier 'reivilibre f4bbc74f44 Merge branch 'release-v1.120' into matrix-org-hotfixes 2024-11-20 15:16:26 +00:00
Erik Johnston c9bd03ea34 Merge remote-tracking branch 'origin/release-v1.119' into matrix-org-hotfixes 2024-11-11 10:46:47 +00:00
Erik Johnston b8a7bffff0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-11-06 11:22:13 +00:00
Till Faelligen 82a385d9c9 Merge remote-tracking branch 'origin/release-v1.118' into matrix-org-hotfixes 2024-10-29 15:35:55 +01:00
Andrew Morgan d315523f40 Merge remote-tracking branch 'origin/release-v1.118' into matrix-org-hotfixes 2024-10-22 12:08:20 +01:00
Andrew Morgan 8a0bfd2f02 Merge branch 'develop' of github.com:element-hq/synapse into matrix-org-hotfixes 2024-10-17 10:29:07 +01:00
Erik Johnston 63439300cd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-10-08 11:44:08 +01:00
Erik Johnston c9baa73896 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-10-01 11:04:32 +01:00
Quentin Gliech c19eb6fdaf Merge branch 'release-v1.116' into matrix-org-hotfixes 2024-09-26 15:30:52 +02:00
Quentin Gliech 0674a2420c Merge branch 'release-v1.116' into matrix-org-hotfixes 2024-09-25 11:36:20 +02:00
Erik Johnston 65f5373035 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-20 08:13:47 +01:00
Erik Johnston 26ac069915 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-19 17:20:00 +01:00
Olivier 'reivilibre df28811624 Merge branch 'develop' into matrix-org-hotfixes 2024-09-19 11:10:46 +01:00
Devon Hudson 4dfaa7f772 Merge branch 'origin/develop' into matrix-org-hotfixes 2024-09-18 10:39:11 -06:00
Olivier 'reivilibre 521532a3dc Merge branch 'develop' into matrix-org-hotfixes 2024-09-17 14:33:32 +01:00
Andrew Morgan 3db6a83d35 Merge branch 'develop' of github.com:element-hq/synapse into matrix-org-hotfixes 2024-09-16 10:32:01 +01:00
Andrew Morgan a9e191b28e Merge branch 'release-v1.115' of github.com:element-hq/synapse into matrix-org-hotfixes 2024-09-12 11:20:45 +01:00
Erik Johnston 7bb499154a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-11 14:16:51 +01:00
Erik Johnston e7234d0ce1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-10 10:35:30 +01:00
Erik Johnston 0c512abce1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-10 10:32:01 +01:00
Erik Johnston e21bbe9070 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-09-01 12:27:04 +01:00
Erik Johnston bb70d6d9e0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-29 16:59:04 +01:00
Erik Johnston 313ac69d0a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-29 16:31:17 +01:00
Erik Johnston bd00c10485 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-29 13:38:41 +01:00
Erik Johnston a6d259a775 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-29 12:52:28 +01:00
Olivier 'reivilibre 18d0f269ab Merge branch 'rei/no_imgs_in_notif_emails' into matrix-org-hotfixes 2024-08-22 14:17:06 +01:00
Olivier 'reivilibre f0ebe2f215 Remove images from message notification e-mails 2024-08-22 14:16:10 +01:00
Quentin Gliech 60f2782a82 Merge remote-tracking branch 'origin/release-v1.114' into matrix-org-hotfixes 2024-08-20 15:33:26 +02:00
Erik Johnston df995ed1fe Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-20 12:57:56 +01:00
Erik Johnston a378a92bb0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-20 10:35:53 +01:00
Erik Johnston abd516304e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-16 16:04:59 +01:00
Erik Johnston 988e74eabf Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-14 13:47:10 +01:00
Erik Johnston 829f2d0605 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-14 12:42:35 +01:00
Andrew Morgan 5e16820e4c Register the media threadpool with our metrics
Otherwise it won't be visible.
2024-08-13 18:30:49 +01:00
Andrew Morgan 856db87bc7 Merge branch 'develop' of github.com:element-hq/synapse into matrix-org-hotfixes 2024-08-13 18:00:21 +01:00
Erik Johnston e650f51e46 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-13 16:54:06 +01:00
Erik Johnston 697f7dae40 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-13 15:06:19 +01:00
Erik Johnston 62a52a7926 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-13 14:07:04 +01:00
Erik Johnston 170059b27a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-12 09:18:13 +01:00
Erik Johnston 46c66aa8e7 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-08 11:27:23 +01:00
Andrew Morgan 4907b7d459 Merge remote-tracking branch 'origin/release-v1.113' into matrix-org-hotfixes 2024-08-06 12:31:23 +01:00
Erik Johnston f2a4798528 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-08-06 10:44:51 +01:00
Olivier 'reivilibre b0800d3ba6 Merge branch 'rei/twisted_247rc1' into matrix-org-hotfixes 2024-07-30 15:11:03 +01:00
Olivier 'reivilibre dc03039b70 Change to misc 2024-07-30 14:55:27 +01:00
Olivier 'reivilibre a301c63426 Fix and hush Mypy errors 2024-07-30 14:54:42 +01:00
Olivier 'reivilibre 48bac0db0d Newsfile
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2024-07-30 14:27:19 +01:00
Olivier 'reivilibre 48d92ed0f6 Make HTTP proxy support changes in Twisted 24.7.0rc1 2024-07-30 14:27:19 +01:00
Olivier 'reivilibre d216a956b6 Slightly simplify some tests whilst reducing 'Any'-typing 2024-07-30 14:27:19 +01:00
Olivier 'reivilibre 98876d342f Fix tests relying on headers not being Headers 2024-07-30 14:27:19 +01:00
Olivier 'reivilibre 3b8ed30163 Fix signature and bug in writeHeaders in the tests 2024-07-30 14:27:19 +01:00
Olivier 'reivilibre b220096147 Update locked dependency on Twisted to 24.7.0rc1
Pulls in fix for
https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7
2024-07-30 11:51:26 +01:00
Erik Johnston 3f6a71bd71 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-30 10:58:17 +01:00
Erik Johnston 9cdfb4e08d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-30 09:31:42 +01:00
Erik Johnston 15147dba15 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-24 15:29:14 +01:00
Erik Johnston b003be0ad0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-23 14:07:36 +01:00
Erik Johnston b1d4bfe053 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-18 13:00:18 +01:00
Erik Johnston 84d14b4aa8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-18 12:51:37 +01:00
Till Faelligen 89e764e332 Merge remote-tracking branch 'origin/release-v1.111' into matrix-org-hotfixes 2024-07-16 12:59:48 +02:00
Erik Johnston dd50e9e86f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-11 16:05:11 +01:00
Quentin Gliech 9fc80af80c Merge remote-tracking branch 'origin/release-v1.111' into matrix-org-hotfixes 2024-07-10 10:52:29 +02:00
Olivier 'reivilibre b6e9441409 Only log once when waiting for current token to catch up
Rather than once every 0.5s
2024-07-09 21:28:18 +01:00
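A small illustration of the pattern (hypothetical polling loop, not the Synapse code): emit the "waiting" line once per wait instead of on every 0.5s iteration.

```python
import logging
import time
from typing import Callable

logger = logging.getLogger(__name__)

def wait_for_token(get_current: Callable[[], int], target: int, poll_interval: float = 0.5) -> None:
    """Poll until the current token reaches `target`, logging only once."""
    logged = False
    while get_current() < target:
        if not logged:
            logger.info("Waiting for current token to reach %s", target)
            logged = True
        time.sleep(poll_interval)
```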
Quentin Gliech ca66648b11 Merge branch 'release-v1.111' into matrix-org-hotfixes 2024-07-09 12:38:58 +02:00
Quentin Gliech 1ced5f1073 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-09 12:12:13 +02:00
Quentin Gliech d98fffa1ab Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-09 11:37:08 +02:00
Erik Johnston 97795d8437 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-07-08 10:35:57 +01:00
Erik Johnston 6e2453066d Merge remote-tracking branch 'origin/release-v1.110' into matrix-org-hotfixes 2024-07-02 14:09:36 +01:00
Erik Johnston 71ed997718 Merge branch 'release-v1.110' into matrix-org-hotfixes 2024-07-02 12:41:04 +01:00
Till Faelligen ddc46e90a8 Merge remote-tracking branch 'origin/release-v1.110' into matrix-org-hotfixes 2024-06-26 15:11:55 +02:00
Till Faelligen 4021cb186d Merge remote-tracking branch 'origin/release-v1.110' into matrix-org-hotfixes 2024-06-26 14:51:13 +02:00
Erik Johnston 1379286f69 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-06-25 11:19:06 +01:00
Erik Johnston 8a2a335db4 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-06-24 16:21:16 +01:00
Erik Johnston e1324ab2c1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-06-19 17:39:55 +01:00
Erik Johnston 4f308ea362 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-06-19 10:34:40 +01:00
Andrew Morgan 0b01683e80 Revert "Support MSC3916 by adding a federation /download endpoint" (#17325) 2024-06-18 17:02:26 +01:00
Erik Johnston edc36df409 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-06-18 14:06:08 +01:00
Quentin Gliech e081fcfa85 Merge remote-tracking branch 'origin/release-v1.109' into matrix-org-hotfixes 2024-06-18 11:51:20 +02:00
Quentin Gliech 4375273374 Merge remote-tracking branch 'origin/release-v1.109' into matrix-org-hotfixes 2024-06-17 14:37:21 +02:00
Quentin Gliech 329514de69 Merge remote-tracking branch 'origin/release-v1.109' into matrix-org-hotfixes 2024-06-17 13:50:31 +02:00
Quentin Gliech 6a860838bb Fix the import for RequestBodyModel
This apparently got moved in develop, so cherry-picking the merge
stuff.

This should be reverted once we merge develop again
2024-06-12 13:35:59 +02:00
Travis Ralston 52f47d12a9 Add report room API (MSC4151) (#17270)
https://github.com/matrix-org/matrix-spec-proposals/pull/4151

This is intended to be enabled by default for immediate use. When FCP is
complete, the unstable endpoint will be dropped and the stable endpoint
supported instead - no backwards compatibility is expected for the
unstable endpoint.
2024-06-12 12:29:48 +02:00
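For illustration only (the homeserver URL, token and room ID below are hypothetical, and the path follows MSC4151's unstable `org.matrix.msc4151` prefix, which as noted above will be dropped once the stable endpoint lands):

```python
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.org"  # hypothetical homeserver
ACCESS_TOKEN = "syt_example_token"         # hypothetical access token
ROOM_ID = "!abc123:example.org"

resp = requests.post(
    f"{HOMESERVER}/_matrix/client/unstable/org.matrix.msc4151"
    f"/rooms/{quote(ROOM_ID, safe='')}/report",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"reason": "spam"},
    timeout=10,
)
resp.raise_for_status()
```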
Erik Johnston f5d7a8951d Merge remote-tracking branch 'origin/release-v1.109' into matrix-org-hotfixes 2024-06-11 11:07:20 +01:00
Erik Johnston b03e8cdaec Don't cache 2024-06-06 15:46:00 +01:00
Erik Johnston 04dc1a7afb Always return OTK counts 2024-06-06 15:13:21 +01:00
Andrew Morgan abb3f75cd5 Merge remote-tracking branch 'origin/release-v1.109' into matrix-org-hotfixes 2024-06-04 09:50:07 +01:00
Erik Johnston 9ddaa8c418 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-31 11:28:18 +01:00
Erik Johnston 293eeffb0c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-30 14:30:39 +01:00
Erik Johnston f266dfebc7 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-30 11:22:47 +01:00
Erik Johnston e4550d4c64 Fix missing PostgresEngine import 2024-05-29 13:55:45 +01:00
Erik Johnston 484845524f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-29 13:40:10 +01:00
Olivier 'reivilibre 7786060e4a Merge branch 'rei/task_scheduler_better_logging' into matrix-org-hotfixes 2024-05-18 21:16:23 +01:00
Olivier 'reivilibre effebb3e88 Oops! Clock.time gives seconds 2024-05-18 21:16:06 +01:00
Olivier 'reivilibre cdd333b4f1 Merge branch 'rei/task_scheduler_better_logging' into matrix-org-hotfixes 2024-05-18 20:51:16 +01:00
Olivier 'reivilibre 3958fdab77 Newsfile
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2024-05-18 20:50:40 +01:00
Olivier 'reivilibre d7b8d87dad Log task usage upon finish and every 5 minutes otherwise 2024-05-18 20:50:40 +01:00
Olivier 'reivilibre dd5f3eb17d Merge branch 'rei/task_scheduler_better_logging' into matrix-org-hotfixes 2024-05-18 20:47:00 +01:00
Olivier 'reivilibre d9003ae507 Newsfile
Signed-off-by: Olivier 'reivilibre <oliverw@matrix.org>
2024-05-18 20:44:49 +01:00
Olivier 'reivilibre 5831342024 Log task usage upon finish and every 5 minutes otherwise 2024-05-18 20:44:49 +01:00
Olivier 'reivilibre 02efa51f0f Merge branch 'erikj/wait_for_stream_pos' into matrix-org-hotfixes 2024-05-18 19:50:05 +01:00
Olivier 'reivilibre 122d190e40 Merge branch 'erikj/device_list_sync_perf' into matrix-org-hotfixes 2024-05-18 16:33:57 +01:00
Erik Johnston 6a0d2dc6fc Only check for all rooms if not outbound poke 2024-05-18 16:12:46 +01:00
Olivier 'reivilibre 233e25e193 Merge branch 'erikj/device_list_sync_perf' into matrix-org-hotfixes 2024-05-18 15:21:52 +01:00
Olivier 'reivilibre 1e4d9df3cd Add current token to log line 2024-05-18 15:21:40 +01:00
Erik Johnston 7bd1575ddf Ignore typing stream for now 2024-05-18 13:08:52 +01:00
Erik Johnston a2dc84fc90 Bump typing_extensions version 2024-05-18 12:40:04 +01:00
Erik Johnston e6d3d808aa Newsfile 2024-05-18 12:33:46 +01:00
Erik Johnston cf474a094f Add stream change cache for device lists in room 2024-05-18 12:33:26 +01:00
Erik Johnston 5b2b3120c2 Cap the top stream ID when fetching changed devices 2024-05-18 12:27:27 +01:00
Erik Johnston bec0313e1b Improve perf of sync device lists (#17191)
It's almost always more efficient to query the rooms that have device
list changes, rather than looking at the list of all users whose devices
have changed and then look for shared rooms.
2024-05-18 12:27:12 +01:00
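A self-contained sketch of the room-first approach described above (plain dicts stand in for the real storage layer):

```python
def users_to_refresh(my_rooms, room_device_changes, since):
    """Hypothetical: my_rooms is the set of rooms we are joined to;
    room_device_changes maps room_id -> [(stream_id, user_id), ...]."""
    users = set()
    # Start from the rooms that actually saw device-list changes, rather than
    # from every changed user and then intersecting with shared rooms.
    for room_id in my_rooms:
        for stream_id, user_id in room_device_changes.get(room_id, ()):
            if stream_id > since:
                users.add(user_id)
    return users

changes = {"!a:hs": [(5, "@alice:hs")], "!b:hs": [(7, "@bob:hs")]}
print(users_to_refresh({"!a:hs"}, changes, since=4))  # {'@alice:hs'}
```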
Erik Johnston bd6b57653f Newsfile 2024-05-18 12:23:54 +01:00
Erik Johnston 169c9f85a8 In sync wait for worker to catch up since token
Otherwise things will get confused.
2024-05-18 12:22:09 +01:00
Erik Johnston 1b7fa7b04a Add StreamToken.is_before_or_eq func 2024-05-18 12:08:30 +01:00
Olivier 'reivilibre a6b7aed06a Merge branch 'develop' into matrix-org-hotfixes 2024-05-18 12:05:59 +01:00
Erik Johnston 212f6646a6 Merge branch 'erikj/redundant_outbound_pokes' into matrix-org-hotfixes 2024-05-17 18:07:11 +01:00
Erik Johnston db39ef061b Newsfile 2024-05-17 17:39:14 +01:00
Erik Johnston 689666c4ba Reduce work of calculating outbound device pokes 2024-05-17 17:38:44 +01:00
Erik Johnston 12a4094a5f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-16 16:08:39 +01:00
Erik Johnston 47aa20f104 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-14 15:10:14 +01:00
Erik Johnston cef3519e96 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-13 13:11:28 +01:00
Erik Johnston 784c048aa4 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-13 12:12:54 +01:00
Olivier 'reivilibre 1629429538 Merge remote-tracking branch 'origin/release-v1.107' into matrix-org-hotfixes 2024-05-07 16:35:57 +01:00
Erik Johnston fa68816fb8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-05-07 11:48:08 +01:00
Olivier 'reivilibre b9639065eb Merge remote-tracking branch 'origin/release-v1.106' into matrix-org-hotfixes 2024-04-25 16:01:00 +01:00
Erik Johnston 31ac8b745c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-04-18 13:34:28 +01:00
Andrew Morgan 656a007c70 Revert "block non-admins from publishing to room directory"
This reverts commit e34f406cfd.
2024-04-15 15:56:09 +01:00
Andrew Morgan 09885e1a33 Merge branch 'release-v1.105' into matrix-org-hotfixes 2024-04-11 12:48:18 +01:00
Erik Johnston b1c1f6d29a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-04-10 11:19:31 +01:00
Erik Johnston 562066577c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-04-08 14:55:30 +01:00
Erik Johnston b02490afcd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-04-08 14:25:41 +01:00
Erik Johnston 6e95084685 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-04-08 10:11:02 +01:00
Erik Johnston 7d5f3ea750 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-03-28 16:32:02 +00:00
Erik Johnston 88273387d2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-03-26 13:27:02 +00:00
Erik Johnston 00bc269a8a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-03-26 11:49:43 +00:00
Erik Johnston d9db944600 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-03-12 14:39:54 +00:00
Andrew Morgan 5a4b8b8456 Merge branch 'release-v1.102' into matrix-org-hotfixes 2024-03-05 12:50:29 +00:00
Andrew Morgan 2b78ad37b3 Revert "Improve DB performance of calculating badge counts for push. (#16756)"
This reverts commit b11f7b5122.
2024-03-05 11:51:18 +00:00
Andrew Morgan beff0a756e Merge branch 'develop' into matrix-org-hotfixes 2024-02-20 14:31:44 +00:00
Erik Johnston 6dbedcf601 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-02-13 13:25:43 +00:00
Erik Johnston d1c180a7c9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-02-09 10:53:01 +00:00
Erik Johnston f7a8fafbc1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-02-06 15:27:17 +00:00
Olivier Wilkinson (reivilibre) cf83fac597 Merge branch 'release-v1.100' into matrix-org-hotfixes 2024-01-24 14:19:06 +00:00
Olivier Wilkinson (reivilibre) d5e213e16f Merge branch 'release-v1.100' into matrix-org-hotfixes 2024-01-23 15:30:03 +00:00
Erik Johnston 9302d20247 Speed up e2e device keys queries for bot accounts 2024-01-22 17:25:21 +00:00
Erik Johnston 87ababed7b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-01-17 13:40:33 +00:00
Erik Johnston 4df836af09 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-01-12 12:20:17 +00:00
Erik Johnston 3f96c956f1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2024-01-09 11:18:58 +00:00
Erik Johnston e21ffb8948 Fix 'block non-admins from publishing to room directory' 2023-12-22 11:25:15 +00:00
Erik Johnston c2deef254a Merge remote-tracking branch 'shay/shay/freeze_directory' into matrix-org-hotfixes 2023-12-12 16:17:52 +00:00
H. Shay e34f406cfd block non-admins from publishing to room directory 2023-12-11 20:06:34 -08:00
Erik Johnston f0b69250b5 Merge remote-tracking branch 'origin/release-v1.98' into matrix-org-hotfixes 2023-12-05 16:14:32 +00:00
Erik Johnston 9583a06178 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-12-05 15:52:49 +00:00
Erik Johnston afc82ecb44 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-12-04 09:58:38 +00:00
Erik Johnston 9c3b906b3a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-20 10:09:33 +00:00
Erik Johnston f8f7355678 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-16 16:57:54 +00:00
Erik Johnston 01b2477b10 Revert "Fix test not detecting tables with missing primary keys and missing replica identities, then add more replica identities. (#16647)"
This reverts commit 830988ae72.
2023-11-16 16:53:04 +00:00
Erik Johnston a96086e9ed Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-16 16:50:37 +00:00
Erik Johnston b20bdd3997 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-16 16:27:21 +00:00
Patrick Cloke 8c2d3d0b4c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-09 11:14:57 -05:00
Patrick Cloke b77c9c3f73 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-08 07:45:34 -05:00
Patrick Cloke 671266b5a9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-11-03 07:45:38 -04:00
Erik Johnston 728f163f51 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-31 13:45:19 +00:00
Erik Johnston aa00ab19ce Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-30 14:48:17 +00:00
Erik Johnston ab4d4073de Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-30 12:28:01 +00:00
Erik Johnston 22eb7b0186 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-27 16:08:05 +01:00
Patrick Cloke 46fd454783 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-27 09:32:35 -04:00
Patrick Cloke 07b3b9a95e Merge branch 'release-v1.95' into matrix-org-hotfixes 2023-10-17 11:56:14 -04:00
Olivier Wilkinson (reivilibre) e09b91b9d6 Revert "TEMPORARY Measure and log test cases"
This reverts commit adfdd6afe1.
2023-10-17 11:44:57 +01:00
Olivier Wilkinson (reivilibre) adfdd6afe1 TEMPORARY Measure and log test cases 2023-10-17 11:42:24 +01:00
Olivier Wilkinson (reivilibre) 8c60ebf209 Revert "TEMPORARY Subdivide _resolve_events Measure blocks"
This reverts commit f3db863420.
2023-10-16 18:24:46 +01:00
Olivier Wilkinson (reivilibre) 1e1cf4bb9d Revert "TEMPORARY Add more Measure blocks"
This reverts commit adfa0fded3.
2023-10-16 18:24:45 +01:00
Olivier Wilkinson (reivilibre) adfa0fded3 TEMPORARY Add more Measure blocks 2023-10-16 18:15:48 +01:00
Olivier Wilkinson (reivilibre) f3db863420 TEMPORARY Subdivide _resolve_events Measure blocks 2023-10-16 17:55:05 +01:00
Olivier Wilkinson (reivilibre) 96769a229c TEMPORARY Disable prune_old_outbound_device_pokes
See #16480
2023-10-16 17:37:58 +01:00
David Robertson 6cb0b8b048 Merge remote-tracking branch 'origin/release-v1.94' into matrix-org-hotfixes 2023-10-03 12:03:50 +01:00
Patrick Cloke 656ffa23c9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-10-02 09:08:23 -04:00
Patrick Cloke e60993e42d Revert "Temporarily disable webp thumbnailing"
This reverts commit 93248076c2.
2023-09-25 08:12:05 -04:00
Patrick Cloke 7ab1b46029 Merge remote-tracking branch 'origin/release-v1.93' into matrix-org-hotfixes 2023-09-19 08:14:46 -04:00
Erik Johnston 93248076c2 Temporarily disable webp thumbnailing 2023-09-15 15:45:41 +01:00
Erik Johnston 1e0b96f1a4 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-09-14 16:21:58 +01:00
Erik Johnston 60b3dc65f6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-09-13 11:59:54 +01:00
Erik Johnston 9a28f154c8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-09-13 11:54:35 +01:00
Erik Johnston 3bb8cce692 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-09-13 11:12:26 +01:00
Patrick Cloke ec703e7d97 Merge branch 'release-v1.92' into matrix-org-hotfixes 2023-09-06 11:25:41 -04:00
David Robertson 0e8cbbdb8e Merge remote-tracking branch 'origin/release-v1.92' into matrix-org-hotfixes 2023-09-05 14:40:53 +01:00
David Robertson e4fe4dbecd Merge remote-tracking branch 'origin/release-v1.91' into matrix-org-hotfixes 2023-09-04 10:42:51 +01:00
Erik Johnston 535e5832a3 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-08-23 16:24:44 +01:00
Erik Johnston a8bd3df5e1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-08-23 14:00:49 +01:00
Erik Johnston b7d0c7d3fd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-08-23 13:46:10 +01:00
Erik Johnston 144cf227ca Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-08-23 09:41:34 +01:00
Erik Johnston de16789d87 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-08-18 11:06:02 +01:00
Erik Johnston 94d4507d35 Merge remote-tracking branch 'origin/release-v1.90' into matrix-org-hotfixes 2023-08-09 11:00:30 +01:00
Mathieu Velten 42ec11ae56 Merge branch 'release-v1.89' into matrix-org-hotfixes 2023-07-26 14:57:13 +02:00
Mathieu Velten 2eada7ef13 Merge branch 'release-v1.89' into matrix-org-hotfixes 2023-07-25 14:43:27 +02:00
Erik Johnston b89e8d695d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-07-20 16:19:45 +01:00
Erik Johnston d9fa426a54 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-07-20 11:08:14 +01:00
Erik Johnston 1ea1a0b251 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-07-20 10:50:03 +01:00
Olivier Wilkinson (reivilibre) d31e6c7541 Merge branch 'release-v1.88' into matrix-org-hotfixes 2023-07-18 14:11:42 +01:00
David Robertson 426cf50ecc Merge remote-tracking branch 'origin/release-v1.88' into matrix-org-hotfixes 2023-07-11 10:37:37 +01:00
Erik Johnston 21cb804023 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-06-27 10:31:31 +01:00
Erik Johnston fb8d629dcd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-06-23 09:31:50 +01:00
Mathieu Velten 98a00339a5 Merge branch 'release-v1.86' into matrix-org-hotfixes 2023-06-13 11:27:29 +02:00
David Robertson 50cb0d298a Merge remote-tracking branch 'origin/release-v1.85' into matrix-org-hotfixes 2023-06-01 11:12:11 +01:00
David Robertson 4427151a21 Merge remote-tracking branch 'origin/release-v1.85' into matrix-org-hotfixes 2023-05-30 14:20:41 +01:00
Erik Johnston 881855d4e3 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-05-24 15:49:43 +01:00
Olivier Wilkinson (reivilibre) 184a688c64 Merge branch 'release-v1.84' into matrix-org-hotfixes 2023-05-16 12:14:55 +01:00
Erik Johnston 106fb7005d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-05-12 11:38:28 +01:00
Erik Johnston 7c74fe01d3 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-05-12 11:09:10 +01:00
Sean Quah 36b72b9f2e Revert "Redirect event persistence traffic to a particular event persister worker"
Now that things have settled down in the two rooms we special-cased, we
can return the sharding config to normal.

Rich suspects that event_persister-4 being idle was the cause of
federation senders getting stuck on 2023-05-11. It was observed that the
min stream position of RoomStreamTokens was not advancing.

This reverts commit 2263c3bcd7.
2023-05-12 11:01:57 +01:00
Andrew Morgan 2263c3bcd7 Redirect event persistence traffic to a particular event persister worker 2023-05-10 19:55:20 +01:00
Sean Quah c9b98e85b6 Merge remote-tracking branch 'origin/release-v1.83' into matrix-org-hotfixes 2023-05-03 13:11:09 +01:00
Sean Quah c2abd04877 Merge remote-tracking branch 'origin/release-v1.83' into matrix-org-hotfixes 2023-05-02 16:33:00 +01:00
David Robertson d1b33ae118 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-04-26 11:38:39 +01:00
Olivier Wilkinson (reivilibre) d94aba25a6 Merge branch 'release-v1.82' into matrix-org-hotfixes 2023-04-18 10:59:20 +01:00
Olivier Wilkinson (reivilibre) 85fc42c546 Merge branch 'release-v1.82' into matrix-org-hotfixes 2023-04-18 10:24:22 +01:00
Olivier Wilkinson (reivilibre) 2cf9f78d23 Merge branch 'develop' into matrix-org-hotfixes 2023-04-13 15:06:42 +01:00
Olivier Wilkinson (reivilibre) 7212fc4659 Merge branch 'develop' into matrix-org-hotfixes 2023-04-13 11:55:21 +01:00
David Robertson b1c36648b5 Merge remote-tracking branch 'origin/release-v1.81' into matrix-org-hotfixes 2023-04-06 15:33:43 +01:00
Erik Johnston ab0a5f1972 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-04-04 14:11:53 +01:00
Erik Johnston 5c42359de8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-31 13:52:25 +01:00
Sean Quah 46a2fbea6a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-31 12:12:40 +01:00
Sean Quah bf81ee4217 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-31 10:17:16 +01:00
Erik Johnston 16e07b6742 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-29 14:43:08 +01:00
Erik Johnston 85630ad1f8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-29 13:24:47 +01:00
Erik Johnston 34a2dff973 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-03-29 13:10:57 +01:00
Mathieu Velten 53352a2b80 Merge remote-tracking branch 'origin/release-v1.80' into matrix-org-hotfixes 2023-03-22 11:28:59 +01:00
Mathieu Velten 6400f03029 Merge remote-tracking branch 'origin/release-v1.80' into matrix-org-hotfixes 2023-03-21 12:52:12 +01:00
David Robertson 7317f83510 Merge remote-tracking branch 'origin/release-v1.79' into matrix-org-hotfixes 2023-03-13 13:00:31 +00:00
David Robertson b4df6be0d0 Revert "Hack to workaround libera federation pain"
This reverts commit ab629c17cc.
2023-03-13 12:44:50 +00:00
David Robertson ab629c17cc Hack to workaround libera federation pain
as seen in https://github.com/matrix-org/synapse/issues/15216
2023-03-07 19:49:14 +00:00
David Robertson a95e906227 Merge remote-tracking branch 'origin/release-v1.79' into matrix-org-hotfixes 2023-03-07 12:17:08 +00:00
Erik Johnston b67e4b6015 Merge remote-tracking branch 'origin/release-v1.78' into matrix-org-hotfixes 2023-02-24 14:45:26 +00:00
Olivier Wilkinson (reivilibre) 6bb4db78ce MORG HOTFIXES: increase /state response body size limit to 600MB
See #15127.
2023-02-21 20:03:45 +00:00
Olivier Wilkinson (reivilibre) e0f9a514c6 Merge branch 'release-v1.78' into matrix-org-hotfixes 2023-02-21 14:47:40 +00:00
David Robertson e984b6f9dd Use changelog from release branch 2023-02-10 18:24:15 +00:00
David Robertson 4d2d701840 Clean up old changelog files 2023-02-10 18:22:26 +00:00
David Robertson 269f22478c Merge branch 'release-v1.77' into matrix-org-hotfixes 2023-02-10 18:05:34 +00:00
Erik Johnston 19bb342763 Merge remote-tracking branch 'origin/release-v1.77' into matrix-org-hotfixes 2023-02-09 15:30:19 +00:00
Patrick Cloke 3be2fc192d Merge remote-tracking branch 'origin/release-v1.77' into matrix-org-hotfixes 2023-02-08 10:24:46 -05:00
David Robertson 599960bacf Merge remote-tracking branch 'origin/release-v1.77' into matrix-org-hotfixes 2023-02-07 16:27:37 +00:00
David Robertson 78a4caad72 Merge branch 'release-v1.77' into matrix-org-hotfixes 2023-02-07 13:48:42 +00:00
Patrick Cloke 306ecd6740 Skip unused calculations in sync handler. (#14908)
If a sync request does not need to calculate per-room entries &
is not generating presence & is not generating device list data
(e.g. during initial sync), avoid the expensive calculation of
room-specific data.

This is a micro-optimisation for clients syncing simply to receive
to-device information.
2023-02-02 18:49:38 +00:00
Patrick Cloke 0f63c73cf7 Do not calculate presence or ephemeral events when they are filtered out (#14970)
This expands the previous optimisation from being only for initial
sync to being for all sync requests.

It also inverts some of the logic to be inclusive instead of exclusive.
2023-02-02 18:49:32 +00:00
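A compact sketch of the inclusive short-circuiting described in these two commits, with stub functions standing in for the real sync-handler internals:

```python
def fetch_to_device_messages():  # stand-ins for the expensive real calculations
    return []

def fetch_presence():
    return []

def fetch_room_data():
    return {}

def build_sync_response(wants_presence: bool, wants_ephemeral: bool, wants_rooms: bool) -> dict:
    """Hypothetical: compute each section only when the filter asks for it."""
    response = {"to_device": fetch_to_device_messages()}
    if wants_presence:                  # inclusive check, not "unless excluded"
        response["presence"] = fetch_presence()
    if wants_rooms or wants_ephemeral:  # the expensive per-room work
        response["rooms"] = fetch_room_data()
    return response
```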
David Robertson c9564e08ba Fix stupid bug 2023-02-02 16:47:39 +00:00
David Robertson ea3210f8e5 Don't notify a target who's not in the room
Also avoid unnecessary DB queries when creating a room
2023-02-02 16:30:30 +00:00
David Robertson a7fa951876 Only notify the target of a membership event
Naughty, but should be a big speedup in large rooms

Changelog
2023-02-02 15:43:10 +00:00
David Robertson 769406378c Cherry-pick "Fetch fewer events when getting hosts in room" (WIP #14962)
This is a squashed version of the following commits:

- bc739a444fb5bffc6dd3b57399a223bc5b5b4a15
- 70947443798e995ff026819cf5dd6823134ac81a
- f781304e1882fd60cdef84d3405767773237629d
- 7ee3976debf925fc2f79381c0c51003588e31e32
2023-02-01 18:41:56 +00:00
David Robertson e03b1e858b Fix my hack
The tag isn't set unless we force tracing, apparently
2023-02-01 12:27:22 +00:00
David Robertson d6e40d75ff HACK: force tracing for partial state /send_join responses 2023-01-31 15:28:55 +00:00
David Robertson 873cf51149 Tag /send_join responses to detect faster joins (#14950)
* Tag /send_join responses to detect faster joins

* Changelog

* Define a proper SynapseTag

* isort
2023-01-31 15:25:47 +00:00
David Robertson 488cea0399 Revert "Tag /send_join responses to detect faster joins"
This reverts commit 204bdb7f07. I will
cherry pick #14950 in its place.
2023-01-31 15:25:04 +00:00
David Robertson 204bdb7f07 Tag /send_join responses to detect faster joins 2023-01-31 11:25:21 +00:00
David Robertson 484db60cd6 Merge remote-tracking branch 'origin/release-v1.76' into matrix-org-hotfixes 2023-01-26 17:20:12 +00:00
David Robertson 505d85e0db Merge remote-tracking branch 'origin/dmr/bump-state-timeouts' into matrix-org-hotfixes 2023-01-25 15:29:55 +00:00
David Robertson 5d8486d43e Milliseconds!!!! 2023-01-25 15:27:18 +00:00
David Robertson 8d6bb6e8ee Merge remote-tracking branch 'origin/dmr/bump-state-timeouts' into matrix-org-hotfixes 2023-01-25 15:19:33 +00:00
David Robertson 42996efa78 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-01-25 15:19:17 +00:00
David Robertson 840ba65d92 Changelog 2023-01-25 15:18:17 +00:00
David Robertson 16a4473034 Bump the client-side timeout for /state
to give faster-join resyncs a chance to complete for large rooms.
We have seen this fare poorly (~90s for Matrix HQ's /state) in testing,
causing the resync to advance to another HS that hasn't seen our join yet.
2023-01-25 15:14:26 +00:00
David Robertson c4c8a2716e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-01-23 11:46:48 +00:00
Erik Johnston 94a75ab6af Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-01-20 09:30:27 +00:00
Erik Johnston 1b001d1069 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2023-01-19 14:21:55 +00:00
David Robertson 6a185c8b17 Merge remote-tracking branch 'origin/release-v1.75' into matrix-org-hotfixes 2023-01-12 10:38:17 +00:00
David Robertson 04aa6a9707 Merge remote-tracking branch 'origin/release-v1.75' into matrix-org-hotfixes 2023-01-10 12:43:28 +00:00
Olivier Wilkinson (reivilibre) 5a1b56b9b0 Merge branch 'rei/dresync_exp' into matrix-org-hotfixes 2022-12-21 13:13:30 +00:00
Olivier Wilkinson (reivilibre) 62e4ad7991 Use the multi-user device resync to batch requests 2022-12-21 13:12:53 +00:00
Olivier Wilkinson (reivilibre) 24b802a524 Merge branch 'rei/dresync_exp' into matrix-org-hotfixes 2022-12-21 11:42:37 +00:00
Olivier Wilkinson (reivilibre) 55a4d7e1ce Typo fix 2022-12-21 11:42:31 +00:00
Olivier Wilkinson (reivilibre) ad8265562d Merge branch 'rei/hiredis_210_no_sdist' into matrix-org-hotfixes 2022-12-21 11:12:50 +00:00
Olivier Wilkinson (reivilibre) 50a56edcc2 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-12-21 11:04:34 +00:00
Olivier Wilkinson (reivilibre) 3ac08cc0be Revert "Bump hiredis from 2.0.0 to 2.1.0 (#14699)"
This reverts commit 9c89707b56.
2022-12-21 11:03:26 +00:00
Olivier Wilkinson (reivilibre) 83dab8e25f Merge branch 'rei/dresync_exp' into matrix-org-hotfixes 2022-12-21 10:51:29 +00:00
Olivier Wilkinson (reivilibre) 65981bcb27 Merge branch 'rei/jumptodate_statement_limit' into matrix-org-hotfixes 2022-12-21 10:51:27 +00:00
Olivier Wilkinson (reivilibre) cbf1cebef3 Mitigate jump to date slowness by adding 30s timeout 2022-12-20 18:30:47 +00:00
Olivier Wilkinson (reivilibre) 46807789d0 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 712144e768 Batch up the DB writes when marking failures 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 3d7451e04f Split out the marking of failed 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 3c68aa4342 Use the multi-user path even for single users
This is futureproofing: we'll be able to rip out the single-user path later
2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 369a97a714 Use assertions to ensure we don't have our expectations broken 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) b4f98428ca Add a replication servlet for multi-user device resync 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 55f46d499b Add multi-user device resync in handler 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 7d2261f922 Limit query_devices_for_destination to 10 concurrent invocations 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 227c953d99 Add async helpers 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 9c4562c74a Add log lines 2022-12-20 18:21:31 +00:00
Olivier Wilkinson (reivilibre) 2654d61114 Build a set of who we are interested in first and foremost 2022-12-20 18:21:31 +00:00
Sean Quah 9d40fc961b Merge branch 'release-v1.74' into matrix-org-hotfixes 2022-12-13 13:45:55 +00:00
Olivier Wilkinson (reivilibre) d0df5cc8ad Merge branch 'develop' into matrix-org-hotfixes 2022-12-12 14:46:11 +00:00
Erik Johnston 3cab3dae20 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-12-09 17:06:39 +00:00
Erik Johnston 6cbd5ed0c2 Revert "Revert "Delete stale non-e2e devices for users, take 2 (#14595)""
This reverts commit 694a3fd5c2.
2022-12-09 17:06:34 +00:00
Andrew Morgan 694a3fd5c2 Revert "Delete stale non-e2e devices for users, take 2 (#14595)"
This reverts commit c2de2ca630.
2022-12-09 16:36:15 +00:00
Erik Johnston 90d007ec57 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-12-09 09:53:29 +00:00
Olivier Wilkinson (reivilibre) e966424466 Merge branch 'develop' into matrix-org-hotfixes 2022-12-07 12:25:52 +00:00
David Robertson 34fa1276a4 Merge branch 'release-v1.73' into matrix-org-hotfixes 2022-12-01 10:03:23 +00:00
David Robertson 9d943ba8a6 Merge remote-tracking branch 'origin/release-v1.73' into matrix-org-hotfixes 2022-11-29 13:05:38 +00:00
David Robertson b968ecdf45 Merge remote-tracking branch 'origin/release-v1.73' into matrix-org-hotfixes 2022-11-29 12:34:53 +00:00
Erik Johnston 28a84af1bc Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-11-23 14:22:56 +00:00
Erik Johnston 47534a1bff Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-11-22 21:58:38 +00:00
Erik Johnston 2f59645025 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-11-22 16:50:52 +00:00
Erik Johnston 5436c3e0ad Ignore device list updates for users with large numbers of devices 2022-11-22 16:50:48 +00:00
David Robertson c3336516d2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-11-22 14:52:01 +00:00
Erik Johnston 6f577a0c21 Merge branch 'release-v1.72' into matrix-org-hotfixes 2022-11-16 15:11:58 +00:00
Richard van der Hoff 39ea774cff Merge branch 'rav/patch_events_bg_update' into matrix-org-hotfixes 2022-11-05 18:25:11 +00:00
Richard van der Hoff 779d48f4de Fix background update table-scanning events
When this background update did its last batch, it would try to update all the
events that had been inserted since the bgupdate started, which could cause a
table-scan. Make sure we limit the update correctly.
2022-11-05 18:23:13 +00:00
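A sketch of the bounded-batch pattern the fix describes, using sqlite3 and a hypothetical schema: the upper bound is captured once, so the final batch cannot drift into rows inserted after the update began.

```python
import sqlite3

def run_background_update(conn: sqlite3.Connection, batch_size: int = 1000) -> None:
    """Hypothetical: rewrite rows in bounded rowid batches."""
    # Capture the upper bound once; rows inserted later are never scanned.
    (max_id,) = conn.execute("SELECT COALESCE(MAX(rowid), 0) FROM events").fetchone()
    lower = 0
    while lower < max_id:
        upper = min(lower + batch_size, max_id)
        conn.execute(
            "UPDATE events SET processed = 1 WHERE rowid > ? AND rowid <= ?",
            (lower, upper),
        )
        conn.commit()
        lower = upper
```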
Sean Quah 52d4c9f051 Merge remote-tracking branch 'origin/release-v1.71' into matrix-org-hotfixes 2022-11-04 12:20:05 +00:00
Sean Quah 4b38c0a5f2 Merge branch 'squah/unrevert-fts-changes-on-hotfix' into matrix-org-hotfixes 2022-11-04 11:39:49 +00:00
Sean Quah 02a25ce3f9 Revert "Revert "Switch search SQL to triple-quote strings. (#14311)""
This reverts commit 220af1df54.
2022-11-04 11:38:46 +00:00
Sean Quah e2008e4ef5 Revert "Revert "Fix tests for change in PostgreSQL 14 behavior change. (#14310)""
This reverts commit 7f77f1386c.
2022-11-04 11:38:39 +00:00
Sean Quah 3b44a7c9d1 Revert "Revert "Unified search query syntax using the full-text search capabilities of the underlying DB. (#11635)""
This reverts commit 7e0dd52782.
2022-11-04 11:38:31 +00:00
Sean Quah f8f01b869e Revert "Disable auto search for prefixes in event search"
This reverts commit 24409c7c15.
2022-11-04 11:38:23 +00:00
David Robertson 37307a5cce Merge branch 'dmr/revert-fts-changes-on-hotfix' into matrix-org-hotfixes 2022-11-02 12:35:00 +00:00
Erik Johnston 24409c7c15 Disable auto search for prefixes in event search 2022-11-02 12:31:16 +00:00
David Robertson 7e0dd52782 Revert "Unified search query syntax using the full-text search capabilities of the underlying DB. (#11635)"
This reverts commit d902181de9.
2022-11-02 12:29:34 +00:00
David Robertson 7f77f1386c Revert "Fix tests for change in PostgreSQL 14 behavior change. (#14310)"
This reverts commit 67583281e3.
2022-11-02 12:28:47 +00:00
David Robertson 220af1df54 Revert "Switch search SQL to triple-quote strings. (#14311)"
This reverts commit 81815e0561.
2022-11-02 12:28:24 +00:00
David Robertson 03ba8c83be Merge branch 'release-v1.71' into matrix-org-hotfixes 2022-11-01 12:48:12 +00:00
Olivier Wilkinson (reivilibre) 18a045ba69 Merge branch 'release-v1.70' into matrix-org-hotfixes 2022-10-28 12:27:04 +01:00
Olivier Wilkinson (reivilibre) 42dc734f85 Merge branch 'release-v1.70' into matrix-org-hotfixes 2022-10-24 17:10:50 +01:00
Erik Johnston 4398548556 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-10-19 10:29:02 +01:00
Erik Johnston 1235316645 Merge remote-tracking branch 'origin/release-v1.70' into matrix-org-hotfixes 2022-10-18 10:55:09 +01:00
Erik Johnston aca3a117a9 Merge remote-tracking branch 'origin/release-v1.69' into matrix-org-hotfixes 2022-10-14 14:49:42 +01:00
Erik Johnston 382c9e9acd Merge remote-tracking branch 'origin/release-v1.69' into matrix-org-hotfixes 2022-10-11 16:43:00 +01:00
Sean Quah 3d5242da14 Merge remote-tracking branch 'origin/release-v1.69' into matrix-org-hotfixes 2022-10-06 15:04:49 +01:00
Sean Quah dedd4cd061 Merge remote-tracking branch 'origin/release-v1.69' into matrix-org-hotfixes 2022-10-04 11:26:47 +01:00
Erik Johnston a2b6ee7b00 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-09-30 14:27:14 +01:00
Erik Johnston 3f30bdca19 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-09-28 11:12:36 +01:00
Olivier Wilkinson (reivilibre) f4419438c1 Merge branch 'release-v1.68' into matrix-org-hotfixes 2022-09-20 12:24:26 +01:00
Erik Johnston 3a5edde170 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-09-14 14:48:45 +01:00
Erik Johnston 9d2d3bdee6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-09-06 09:01:48 +01:00
David Robertson 42374f46f5 Merge remote-tracking branch 'origin/release-v1.66' into matrix-org-hotfixes 2022-08-30 12:26:33 +01:00
David Robertson ce8f7d118c Merge branch 'release-v1.66' into matrix-org-hotfixes 2022-08-23 12:41:34 +01:00
David Robertson 7ba0336010 Remove the hotfix-specific hack for auth on room_members
This reverts fae708c0e8.

We believe this to be unnecessary---other Synapse deployments do not
have this patch, and we are not aware of bridging problems as a result.

Related:

- https://github.com/matrix-org/matrix-appservice-irc/issues/506
- https://github.com/matrix-org/synapse/issues/4826
2022-08-23 11:31:19 +01:00
David Robertson 7c59528d54 Merge branch 'release-v1.66' into matrix-org-hotfixes 2022-08-23 10:23:40 +01:00
David Robertson 04b00974e5 Merge branch 'release-v1.66' into matrix-org-hotfixes 2022-08-23 10:21:42 +01:00
Andrew Morgan 80bf6da876 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-08-22 10:47:30 +01:00
Olivier Wilkinson (reivilibre) aedc5b04b9 Merge branch 'develop' into matrix-org-hotfixes 2022-08-19 12:19:30 +01:00
Sean Quah d20c92d2c2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-08-18 16:27:31 +01:00
Erik Johnston e91a929049 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-08-17 13:18:00 +01:00
Erik Johnston 7cd167f607 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-08-17 10:54:17 +01:00
Olivier Wilkinson (reivilibre) d39ca69d0b Merge branch 'release-v1.65' into matrix-org-hotfixes 2022-08-11 11:42:17 +01:00
Olivier Wilkinson (reivilibre) ba5d52f50c Merge branch 'release-v1.65' into matrix-org-hotfixes 2022-08-09 11:49:06 +01:00
Richard van der Hoff 08d6aa3340 Merge branch 'release-v1.64' into matrix-org-hotfixes 2022-07-29 12:32:03 +01:00
Olivier Wilkinson (reivilibre) 0a9bcc0313 Merge branch 'release-v1.64' into matrix-org-hotfixes 2022-07-26 13:05:49 +01:00
Erik Johnston 9d2dab8e7c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-07-25 10:41:03 +01:00
Erik Johnston fa0fc4acce Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-07-22 10:26:35 +01:00
Sean Quah 54f2ebbea5 Merge remote-tracking branch 'origin/release-v1.63' into matrix-org-hotfixes 2022-07-12 13:15:53 +01:00
Erik Johnston e808df9e2e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-07-12 09:22:01 +01:00
Erik Johnston 8881bd77d4 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-07-06 12:11:24 +01:00
Andrew Morgan 860ed0e636 Merge remote-tracking branch 'origin/release-v1.62' into matrix-org-hotfixes 2022-07-04 16:18:49 +01:00
Erik Johnston 614f6ccb48 Merge remote-tracking branch 'origin/release-v1.62' into matrix-org-hotfixes 2022-06-30 15:09:39 +01:00
Andrew Morgan f825db3d6c Merge remote-tracking branch 'origin/release-v1.62' into matrix-org-hotfixes 2022-06-28 17:02:40 +01:00
Erik Johnston 1bfb991453 Merge remote-tracking branch 'origin/release-v1.62' into matrix-org-hotfixes 2022-06-28 13:15:03 +01:00
Erik Johnston b1ed0e044c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-06-17 10:52:07 +01:00
Olivier Wilkinson (reivilibre) e3b00708bd Merge remote-tracking branch 'origin/release-v1.61' into matrix-org-hotfixes 2022-06-07 14:37:24 +01:00
Erik Johnston d45a0af44d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-06-06 14:12:06 +01:00
Erik Johnston 302491c81e Merge remote-tracking branch 'origin/release-v1.60' into matrix-org-hotfixes 2022-05-31 13:23:15 +01:00
Sean Quah 7335ab609e Merge remote-tracking branch 'origin/release-v1.60' into matrix-org-hotfixes 2022-05-27 11:12:13 +01:00
Sean Quah 4ff9477996 Merge remote-tracking branch 'origin/release-v1.60' into matrix-org-hotfixes 2022-05-24 12:09:40 +01:00
Brendan Abolivier 4cee2a6330 Fixup changelog 2022-05-18 12:23:37 +01:00
Brendan Abolivier dc6e607edc Merge branch 'release-v1.59' into matrix-org-hotfixes 2022-05-18 11:46:34 +01:00
Erik Johnston 771d2041cd Fix query performance for /sync
Broke in #12729.
2022-05-16 15:50:05 +01:00
David Robertson 7b83ac8675 Merge remote-tracking branch 'origin/release-v1.59' into matrix-org-hotfixes 2022-05-16 12:55:19 +01:00
David Robertson 42f8b22ce2 Merge branch 'release-v1.59' into matrix-org-hotfixes 2022-05-10 11:34:30 +01:00
David Robertson 3ba5d67cb8 Merge branch 'release-v1.58' into matrix-org-hotfixes 2022-04-26 17:24:44 +01:00
David Robertson 474a964fb7 Merge remote-tracking branch 'origin/release-v1.58' into matrix-org-hotfixes 2022-04-26 12:04:34 +01:00
Andrew Morgan a794ad17c2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2022-04-25 14:06:31 +01:00
Erik Johnston 8674fa3bc9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-04-12 12:02:43 +01:00
Erik Johnston 27c2386b3d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-04-07 15:55:46 +01:00
Erik Johnston c37cd85d8a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-04-07 13:34:23 +01:00
Sean Quah 5f814b2838 Merge remote-tracking branch 'origin/release-v1.56' into matrix-org-hotfixes 2022-04-05 14:12:18 +01:00
Erik Johnston ee5377c9aa Merge branch 'erikj/prefill_device_list_change_cache' into matrix-org-hotfixes 2022-04-04 20:13:45 +01:00
Erik Johnston ea2d01366c Newsfile 2022-04-04 19:13:58 +01:00
Erik Johnston f986db2adb Newsfile 2022-04-04 19:08:42 +01:00
Erik Johnston 36383d48f3 Prefill the device_list_stream_cache 2022-04-04 19:07:49 +01:00
Erik Johnston fd1b6334f0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-03-29 10:15:25 +01:00
Andrew Morgan 895c12e421 Merge branch 'release-v1.55' of github.com:matrix-org/synapse into matrix-org-hotfixes 2022-03-22 10:38:46 +00:00
Olivier Wilkinson (reivilibre) d4c9fa5dbd Merge branch 'release-v1.55' into matrix-org-hotfixes 2022-03-21 14:47:46 +00:00
David Robertson b9bbbebc06 Merge remote-tracking branch 'origin/release-v1.55' into matrix-org-hotfixes 2022-03-17 11:05:45 +00:00
David Robertson 2207fa50b4 Merge branch 'release-v1.55' into matrix-org-hotfixes 2022-03-15 11:21:08 +00:00
Patrick Cloke fdc1063782 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-03-10 10:38:28 -05:00
Olivier Wilkinson (reivilibre) 0203dcfe46 Merge branch 'release-v1.54', remote-tracking branch 'origin' into matrix-org-hotfixes 2022-03-08 10:56:20 +00:00
Olivier Wilkinson (reivilibre) 20e10495f6 Merge commit 'd8001' (pre v1.54.0rc1) into matrix-org-hotfixes 2022-03-02 11:18:09 +00:00
reivilibre f1b625ad56 matrix.org hotfixes: Back out in-flight state cache changes (#12117) 2022-03-01 15:30:22 +00:00
Erik Johnston 3d92936c14 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2022-02-22 14:36:44 +00:00
Sean Quah c5ba1d67b1 Merge branch 'release-v1.53' into matrix-org-hotfixes 2022-02-15 11:40:00 +00:00
David Robertson ed2f158a38 Printf debugging for MSISDN validation (#11882)
This makes some attempt to keep CI happy too, but it probably ought not to.
2022-02-08 12:57:17 +00:00
reivilibre 3bf466698f Hotfixes: Revert commit() OpenTracing hackery (#11906) 2022-02-04 10:54:35 +00:00
David Robertson 6705391eec Merge branch 'release-v1.52' into matrix-org-hotfixes 2022-02-01 11:17:39 +00:00
Richard van der Hoff 2b9f741f3a Fix logic for dropping old events in fed queue
Fixes infinite loops of

> logger.info("Invalid prev_events for %s", event_id)
2022-01-23 21:09:17 +00:00
Olivier Wilkinson (reivilibre) 7977b7f6a8 Merge branch 'release-v1.51' into matrix-org-hotfixes 2022-01-21 10:49:43 +00:00
Andrew Morgan d1e6333f12 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2022-01-19 12:44:55 +00:00
Andrew Morgan 20d4418485 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2022-01-19 10:59:51 +00:00
Olivier Wilkinson (reivilibre) 8e8a00829f Merge branch 'release-v1.50' into matrix-org-hotfixes 2022-01-14 10:51:50 +00:00
Richard van der Hoff 5cc41f1b05 Updates to opentracing hackery 2022-01-07 12:49:54 +00:00
Richard van der Hoff 99e7fb1d52 Wrap connection.commit with OpenTracing
This is an attempt to diagnose poor apdex levels, per
https://github.com/matrix-org/internal-config/issues/1181
2022-01-07 11:53:28 +00:00
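The two OpenTracing commits above wrap the database `connection.commit` in a span so that slow commits show up in traces while diagnosing apdex. Below is a minimal standalone sketch of that technique using the `opentracing` package's public API; the wrapper class and where it sits are illustrative, not Synapse's actual database layer.

```python
import opentracing


class TracingConnection:
    """Delegate everything to a DB-API connection, but time commit() as a span."""

    def __init__(self, conn, tracer=None):
        self._conn = conn
        # global_tracer() returns a no-op tracer unless one has been installed,
        # so the wrapper is harmless when tracing is disabled.
        self._tracer = tracer or opentracing.global_tracer()

    def commit(self):
        with self._tracer.start_active_span("db.commit"):
            self._conn.commit()

    def __getattr__(self, name):
        # cursor(), rollback(), close(), ... pass straight through.
        return getattr(self._conn, name)
```

Wrapping at the connection level keeps the instrumentation out of every individual transaction, which is why a diagnostic hack like this can live on a hotfixes branch.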
Olivier Wilkinson (reivilibre) 1a776f6710 Merge branch 'release-v1.50' into matrix-org-hotfixes 2022-01-06 16:37:36 +00:00
Olivier Wilkinson (reivilibre) 717a5c085a Merge branch 'release-v1.50' into matrix-org-hotfixes 2022-01-05 14:19:39 +00:00
Richard van der Hoff 59544678a1 Merge branch 'rav/no_bundle_aggregations_in_sync' into matrix-org-hotfixes 2021-12-15 13:13:08 +00:00
Richard van der Hoff a9c146ac53 Disable aggregation bundling on /sync responses
A partial revert of #11478. This turns out to have had a significant CPU impact
on initial-sync handling. For now, let's disable it, until we find a more
efficient way of achieving this.
2021-12-15 13:10:25 +00:00
Sean Quah dbceb00688 Merge branch 'release-v1.49' into matrix-org-hotfixes 2021-12-07 13:57:18 +00:00
Brendan Abolivier cb79a2b785 Merge branch 'develop' into matrix-org-hotfixes 2021-11-25 15:16:47 +00:00
David Robertson 098e964bbc Prevent the media store from writing outside of the configured directory
And the associated changelog/release process updates too.

Applied by patch from the security fork.
2021-11-23 10:14:27 +00:00
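The media-store fix above was applied as a security patch, so its diff is not reproduced here; the usual shape of this class of fix is to resolve every user-influenced path against the configured root and refuse anything that escapes it. A hedged sketch of that pattern (the function name and layout are assumptions, not the patch itself):

```python
import os


def resolve_media_path(media_store_root: str, *relative_parts: str) -> str:
    """Join user-influenced path parts onto the media store root, refusing any
    result that escapes the root (e.g. via ".." or absolute components)."""
    candidate = os.path.abspath(os.path.join(media_store_root, *relative_parts))
    root = os.path.abspath(media_store_root)
    if os.path.commonpath([candidate, root]) != root:
        raise ValueError("refusing to write outside the media store: %r" % (candidate,))
    return candidate


# resolve_media_path("/data/media_store", "local_content", "ab", "cd") is fine,
# while resolve_media_path("/data/media_store", "../etc/passwd") raises.
```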
David Robertson e147f067c5 Merge branch 'release-v1.47' into matrix-org-hotfixes 2021-11-23 10:09:27 +00:00
Andrew Morgan ce06cb59c3 Merge branch 'release-v1.47' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-11-16 13:55:44 +00:00
Andrew Morgan 2a93005a7b Merge branch 'release-v1.47' into matrix-org-hotfixes 2021-11-15 11:52:32 +00:00
Olivier Wilkinson (reivilibre) 9d3b04e749 Merge branch 'release-v1.47' into matrix-org-hotfixes 2021-11-09 11:58:35 +00:00
Erik Johnston 459d2ae157 Merge remote-tracking branch 'origin/release-v1.46' into matrix-org-hotfixes 2021-11-02 10:04:06 +00:00
Erik Johnston 2bb023ba2b Don't seqscan event_json due to relates_to_id 2021-10-27 11:56:36 +01:00
Sean Quah d8c1a21492 Merge branch 'release-v1.46' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-10-26 14:32:17 +01:00
Sean Quah c59b9711f3 Merge branch 'release-v1.45' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-10-20 11:06:51 +01:00
David Robertson 27e6e45349 Merge branch 'release-v1.45' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-10-13 10:52:17 +01:00
Erik Johnston 2aacb49446 Merge remote-tracking branch 'origin/release-v1.45' into matrix-org-hotfixes 2021-10-12 14:27:34 +01:00
Brendan Abolivier 6ce0dc0620 Merge branch 'release-v1.45' into matrix-org-hotfixes 2021-10-12 10:21:26 +01:00
Brendan Abolivier ebbd37b66d Merge branch 'release-v1.44' into matrix-org-hotfixes 2021-10-04 14:17:05 +01:00
Olivier Wilkinson (reivilibre) 4cb6ffdf46 Merge branch 'release-v1.44' into matrix-org-hotfixes 2021-09-30 11:43:38 +01:00
Erik Johnston b9a0f7983d Merge remote-tracking branch 'origin/release-v1.44' into matrix-org-hotfixes 2021-09-28 14:37:06 +01:00
Erik Johnston ba3a888a05 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-09-28 11:55:53 +01:00
David Robertson a8340692ab Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-09-22 13:35:31 +01:00
Erik Johnston 2cc4c33d0a Merge remote-tracking branch 'origin/dmr/cache-eviction-hack' into matrix-org-hotfixes 2021-09-16 11:13:00 +01:00
Erik Johnston bb0a4bd6d5 Merge remote-tracking branch 'origin/release-v1.43' into matrix-org-hotfixes 2021-09-16 11:12:52 +01:00
David Robertson 6227858482 Changelog 2021-09-15 17:22:21 +01:00
David Robertson 361ffb8f0a Don't expire get_users_who_share_room & friends 2021-09-15 17:22:17 +01:00
David Robertson c9fb203ce0 Allow LruCaches to opt out of time-based expiry 2021-09-15 17:20:07 +01:00
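The three cache commits above add an opt-out so that specific LruCaches (such as the one backing `get_users_who_share_room`) are no longer evicted purely because their entries are old. A small toy cache showing the shape of such an opt-out; Synapse's real LruCache has a different interface and eviction machinery.

```python
import time
from typing import Optional


class TinyLruCache:
    """Toy LRU cache whose time-based expiry can be switched off per cache."""

    def __init__(self, max_size: int, expire_after: Optional[float] = 300.0):
        # expire_after=None means "never drop entries just because they are old".
        self.max_size = max_size
        self.expire_after = expire_after
        self._entries = {}  # key -> (value, last_access_timestamp)

    def get(self, key, default=None):
        if key in self._entries:
            value, _ = self._entries[key]
            self._entries[key] = (value, time.monotonic())  # refresh recency
            return value
        return default

    def set(self, key, value):
        self._entries[key] = (value, time.monotonic())
        while len(self._entries) > self.max_size:
            oldest = min(self._entries, key=lambda k: self._entries[k][1])
            del self._entries[oldest]

    def expire_old_entries(self):
        """Invoked by a periodic background sweep across all caches."""
        if self.expire_after is None:
            return  # this cache has opted out of time-based expiry
        cutoff = time.monotonic() - self.expire_after
        for key in [k for k, (_, ts) in self._entries.items() if ts < cutoff]:
            del self._entries[key]
```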
Andrew Morgan 003c2ab629 Merge branch 'release-v1.43' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-09-14 11:02:37 +01:00
Richard van der Hoff 97ef48b07e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-09-10 10:43:42 +01:00
Richard van der Hoff dffdda5f89 Revert "Expand on why users should read upgrade notes"
This reverts commit 4d03ad5255.

This was committed to the wrong branch - it has been superseded by ca3cb1e039
on the release branches.
2021-09-10 10:42:14 +01:00
Brendan Abolivier 4d03ad5255 Expand on why users should read upgrade notes 2021-09-06 15:57:57 +01:00
Brendan Abolivier 92bbe0ebe1 Merge branch 'release-v1.42' into matrix-org-hotfixes 2021-09-06 15:38:03 +01:00
Andrew Morgan ca2e5b2060 Merge branch 'release-v1.42' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-09-01 10:56:17 +01:00
Olivier Wilkinson (reivilibre) 3cd1277a7b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-08-31 14:33:17 +01:00
Richard van der Hoff 619cc1d230 Merge branch 'release-v1.41' into matrix-org-hotfixes 2021-08-31 11:30:16 +01:00
Richard van der Hoff ac646fed06 Remove some redundant patches from the hotfixes branch 2021-08-26 09:45:19 +01:00
Patrick Cloke fe3466a8ad Merge remote-tracking branch 'origin/release-v1.41' into matrix-org-hotfixes 2021-08-20 07:53:27 -04:00
Andrew Morgan 2f4fe530a1 Merge branch 'release-v1.41' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-08-18 14:23:52 +01:00
Andrew Morgan 18572fddcc Merge branch 'release-v1.41' into matrix-org-hotfixes 2021-08-17 14:35:20 +01:00
Šimon Brandner 40e92b224c Handle string read receipt data (#10606)
* Handle string read receipt data

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Test that we handle string read receipt data

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Add changelog for #10606

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Add docs

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Ignore malformed RRs

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Only surround hidden = ...

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Remove unnecessary argument

Signed-off-by: Šimon Brandner <simon.bra.ag@gmail.com>

* Update changelog.d/10606.bugfix

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2021-08-16 12:23:15 +01:00
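The read-receipt fix above tolerates clients that send the per-user receipt data as a string rather than a dict, reading the `hidden` flag (the "Only surround hidden = ..." step) only when the data really is a dict and ignoring malformed payloads otherwise. A tiny sketch of that defensive parse, not the actual Synapse code:

```python
def receipt_is_hidden(receipt_data) -> bool:
    """Return the 'hidden' flag from read-receipt data, tolerating junk.

    Malformed payloads (e.g. a bare string where a dict is expected) are
    treated as "not hidden" instead of raising.
    """
    if not isinstance(receipt_data, dict):
        return False
    return bool(receipt_data.get("hidden", False))


# {"ts": 1628680000000, "hidden": True}  -> True
# "1628680000000"  (malformed string)    -> False
```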
Brendan Abolivier 53d7d92511 Merge branch 'release-v1.40' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-08-09 11:32:41 +01:00
Brendan Abolivier 0dcc40e81a Merge branch 'release-v1.40' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-08-04 15:20:28 +01:00
Olivier Wilkinson (reivilibre) 11dda97e86 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-08-03 10:34:44 +01:00
Andrew Morgan 1d3a9fe6e5 Merge branch 'release-v1.39' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-07-28 10:36:55 +01:00
Erik Johnston 12c92eec18 Merge remote-tracking branch 'origin/release-v1.38' into matrix-org-hotfixes 2021-07-22 15:30:22 +01:00
Andrew Morgan c0121d69e7 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-07-20 11:48:03 +01:00
Erik Johnston cc07548d71 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-07-16 11:25:22 +01:00
Erik Johnston 8a2be4f799 Merge remote-tracking branch 'origin/release-v1.38' into matrix-org-hotfixes 2021-07-09 10:17:22 +01:00
Brendan Abolivier fc8a586ab9 Merge branch 'release-v1.38' into matrix-org-hotfixes 2021-07-06 14:11:37 +01:00
Richard van der Hoff 7eea8de9de Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-07-01 08:59:54 +01:00
Richard van der Hoff cfddd43bfb bump background update rate 2021-06-29 20:56:46 +01:00
Richard van der Hoff d70eefc313 Merge branch 'release-v1.37' into matrix-org-hotfixes 2021-06-29 20:54:40 +01:00
Richard van der Hoff b4bdab8e52 Revert "Don't send catchup transactions for matrix hq"
This reverts commits 5575b5cb43,
f271b32bea,
ffe17e47ce
2021-06-29 16:13:09 +01:00
Richard van der Hoff b18d18d03b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-06-29 12:49:50 +01:00
Richard van der Hoff 077d441d42 Merge branch 'develop' into matrix-org-hotfixes 2021-06-29 11:27:25 +01:00
Erik Johnston ffe17e47ce Fix SQL 2021-06-28 14:29:27 +01:00
Erik Johnston f271b32bea don't send out events for matrix.org 2021-06-28 14:17:56 +01:00
Erik Johnston 5575b5cb43 Don't send catchup transactions for matrix hq
This is because there are problems with HQ at the moment.
2021-06-28 14:12:48 +01:00
Brendan Abolivier 0bbdca814a Merge branch 'release-v1.37' into matrix-org-hotfixes 2021-06-24 10:58:19 +01:00
Brendan Abolivier c80e8b98de Tweak changelog 2021-06-24 10:57:39 +01:00
Brendan Abolivier a0e48edd2f Merge branch 'release-v1.37' into matrix-org-hotfixes 2021-06-24 09:41:28 +01:00
Brendan Abolivier 6f4c860c56 Merge branch 'release-v1.37' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-06-23 09:49:14 +01:00
Richard van der Hoff 89013b99bd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-06-16 15:35:00 +01:00
Erik Johnston a4b573ee48 Merge branch 'release-v1.36' into matrix-org-hotfixes 2021-06-11 15:37:22 +01:00
Andrew Morgan f6e1e48dcf Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-06-08 13:52:56 +01:00
Andrew Morgan fc8bab8291 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-06-08 11:52:11 +01:00
Andrew Morgan efc5febc33 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-06-08 11:21:03 +01:00
Richard van der Hoff 68d7704d19 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-06-07 16:57:58 +01:00
Richard van der Hoff f36457dce2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-06-04 10:41:36 +01:00
Patrick Cloke 09361655d2 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-06-02 11:38:54 -04:00
Erik Johnston a027b65c8e Merge remote-tracking branch 'origin/release-v1.35' into matrix-org-hotfixes 2021-05-28 17:15:01 +01:00
Erik Johnston d2fcfef679 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-27 17:11:57 +01:00
Erik Johnston 2d83d19061 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-25 10:23:00 +01:00
Erik Johnston 6942377f90 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-14 11:31:38 +01:00
Brendan Abolivier 019ed44b84 Merge branch 'release-v1.34.0' into matrix-org-hotfixes 2021-05-12 16:41:04 +01:00
Richard van der Hoff b0b8110acd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-12 12:57:55 +01:00
Brendan Abolivier ab717bceb5 Merge branch 'release-v1.33.2' into matrix-org-hotfixes 2021-05-11 10:49:58 +01:00
Brendan Abolivier 6157f02067 Revert "improve efficiency of _glob_to_re"
This reverts commit edac710bc0.
2021-05-11 10:49:45 +01:00
Erik Johnston 7865bc1dfb Always cache 'event_to_prev_state_group'
Fixes regression in send PDU times introduced in #9905.
2021-05-07 14:55:22 +01:00
Erik Johnston e11cd368b7 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-06 15:07:28 +01:00
Erik Johnston a8f48246b6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-05-06 10:25:53 +01:00
Andrew Morgan 696e7df2e8 Merge branch 'release-v1.33.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-04-29 11:30:12 +01:00
Andrew Morgan 9979fef4fe Revert "Experimental Federation Speedup (#9702)"
This reverts commit 05e8c70c05.
2021-04-27 13:47:39 +01:00
Andrew Morgan 9773abf3d8 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-04-27 13:46:16 +01:00
Richard van der Hoff 7b5022ee2e Merge remote-tracking branch 'origin/release-v1.32.2' into matrix-org-hotfixes 2021-04-21 16:43:10 +01:00
Richard van der Hoff f21a46216f Merge branch 'release-v1.32.2' into matrix-org-hotfixes 2021-04-21 16:39:55 +01:00
Andrew Morgan a468d19fd6 Merge branch 'release-v1.32.1' into matrix-org-hotfixes 2021-04-21 10:26:43 +01:00
Richard van der Hoff edac710bc0 improve efficiency of _glob_to_re 2021-04-19 19:03:38 +01:00
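For context, `_glob_to_re` turns push-rule style globs into regular expressions; a straightforward (and cacheable) version of that kind of helper looks roughly like the following. This illustrates the technique only and is not the patch, which was later backed out (see the revert higher up in this list).

```python
import re
from functools import lru_cache


@lru_cache(maxsize=10000)
def glob_to_regex(glob: str) -> re.Pattern:
    """Compile a Matrix-style glob ('*' and '?' wildcards) to a cached regex."""
    out = []
    for ch in glob:
        if ch == "*":
            out.append(".*")
        elif ch == "?":
            out.append(".")
        else:
            out.append(re.escape(ch))
    return re.compile("^" + "".join(out) + "$", re.IGNORECASE)


assert glob_to_regex("@bot*:example.org").match("@bot123:example.org")
```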
Andrew Morgan bf4ad8070e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-04-19 14:32:50 +01:00
Andrew Morgan 118e789e0c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-04-13 16:10:20 +01:00
Andrew Morgan 77866a5f5f Merge branch 'matrix-org-hotfixes' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-04-13 16:10:07 +01:00
Erik Johnston 053c8c019f Merge branch 'erikj/fix_stalled_catchup' into matrix-org-hotfixes 2021-04-08 18:05:22 +01:00
Patrick Cloke fbcc8703dc Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-04-08 08:10:56 -04:00
Erik Johnston f46b864748 Merge remote-tracking branch 'origin/release-v1.31.0' into matrix-org-hotfixes 2021-03-30 11:41:52 +01:00
Erik Johnston 2b8650547f Merge tag 'v1.30.1' into matrix-org-hotfixes
Synapse 1.30.1 (2021-03-26)
===========================

This release is identical to Synapse 1.30.0, with the exception of explicitly
setting a minimum version of Python's Cryptography library to ensure that users
of Synapse are protected from the recent [OpenSSL security advisories](https://mta.openssl.org/pipermail/openssl-announce/2021-March/000198.html),
especially CVE-2021-3449.

Note that Cryptography defaults to bundling its own statically linked copy of
OpenSSL, which means that you may not be protected by your operating system's
security updates.

It's also worth noting that Cryptography no longer supports Python 3.5, so
admins deploying to older environments may not be protected against this or
future vulnerabilities. Synapse will be dropping support for Python 3.5 at the
end of March.

Updates to the Docker image
---------------------------

- Ensure that the docker container has up to date versions of openssl. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))

Internal Changes
----------------

- Enforce that `cryptography` dependency is up to date to ensure it has the most recent openssl patches. ([\#9697](https://github.com/matrix-org/synapse/issues/9697))
2021-03-26 14:44:31 +00:00
Erik Johnston f36a060d2c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-24 16:13:19 +00:00
Richard van der Hoff ea74189a90 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-24 12:45:54 +00:00
Andrew Morgan 05ec9e8d37 Revert "Patch to temporarily drop cross-user m.key_share_requests (#8675)" (#9668)
We patched `matrix-org-hotfixes` a little while ago in #8675 to drop any cross-user key share requests while they were being accidentally spammed by a client. This was a temporary fix until we had some rate-limiting in place.

Rate-limiting landed in https://github.com/matrix-org/synapse/pull/8957. Note that the rate-limit can't be configured, but has what appear to be [sensible defaults](https://github.com/matrix-org/synapse/blob/db2efa9c50569adbfab102b1f447f5a8312b95f3/synapse/config/ratelimiting.py#L105-L113).

Note that the original patch was already actually overridden partially when the rate-limit PR landed, as they conflicted. So we've already lifted the restriction between local devices on matrix.org, but requests were still blocked from being sent over federation. This PR cleans up the remaining bits.

This reverts commit d60af9305a.
2021-03-23 10:53:05 +00:00
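The revert above leans on the per-user rate limiting that landed in #8957. The sketch below shows the general shape of such a limiter (a sliding-window counter keyed by the requesting user); the class, its numbers, and the helper are illustrative only and do not reproduce Synapse's `Ratelimiter` or its defaults.

```python
import time
from collections import defaultdict, deque


class SlidingWindowLimiter:
    """Allow at most `burst` actions per `per_seconds` window, per key."""

    def __init__(self, burst: int, per_seconds: float):
        self.burst = burst
        self.per_seconds = per_seconds
        self._history = defaultdict(deque)  # key -> recent action timestamps

    def should_allow(self, key: str) -> bool:
        now = time.monotonic()
        window = self._history[key]
        while window and now - window[0] > self.per_seconds:
            window.popleft()
        if len(window) >= self.burst:
            return False
        window.append(now)
        return True


# Arbitrary numbers for the sketch, not Synapse's actual defaults.
key_request_limiter = SlidingWindowLimiter(burst=10, per_seconds=60.0)


def should_relay_key_request(sender_user_id: str) -> bool:
    # Relay a cross-user m.room_key_request only if the sender is under the limit.
    return key_request_limiter.should_allow(sender_user_id)
```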
Andrew Morgan 6557eba7dc Revert "Patch to temporarily drop cross-user m.key_share_requests (#8675)"
This reverts commit d60af9305a.
2021-03-23 10:24:03 +00:00
Richard van der Hoff d8953b34f2 Merge branch 'develop' into matrix-org-hotfixes 2021-03-16 12:42:54 +00:00
Richard van der Hoff 56c0c711c1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-09 15:23:55 +00:00
Erik Johnston 61a970e25f Merge remote-tracking branch 'origin/release-v1.29.0' into matrix-org-hotfixes 2021-03-04 10:23:26 +00:00
Erik Johnston 50c242fa29 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-03 16:08:05 +00:00
Erik Johnston 8965291b87 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-03 12:12:15 +00:00
Erik Johnston 81c7b0515d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-03-03 10:59:10 +00:00
Richard van der Hoff fdbccc1e74 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-26 14:05:40 +00:00
Richard van der Hoff 0e56f02d5d Revert "Redirect redirect requests if they arrive on the wrong URI"
This reverts commit 5ee8a1c50a.

This has now been superseded on develop by PR #9436.
2021-02-26 14:05:00 +00:00
Richard van der Hoff c7934aee2c Revert "more login hacking"
This reverts commit 47d2b49e2b.

This has now been superseded on develop by PR #9472.
2021-02-26 14:04:05 +00:00
Erik Johnston 5d405f7e7a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-22 12:55:32 +00:00
Erik Johnston 5054eb291e Merge remote-tracking branch 'origin/release-v1.28.0' into matrix-org-hotfixes 2021-02-19 10:06:01 +00:00
Richard van der Hoff 47d2b49e2b more login hacking 2021-02-18 14:29:48 +00:00
Richard van der Hoff 1f507c2515 Merge branch 'rav/fix_cookie_path' into matrix-org-hotfixes
Merge the cookie fix to hotfixes
2021-02-18 14:03:43 +00:00
Richard van der Hoff 5ee8a1c50a Redirect redirect requests if they arrive on the wrong URI 2021-02-18 14:01:23 +00:00
Richard van der Hoff 7b7831bb63 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-02-17 16:31:57 +00:00
Erik Johnston a4aa56a0eb Ensure that we never stop reconnecting to redis (#9391) 2021-02-11 17:39:56 +00:00
Patrick Cloke fa0f99e4f2 Merge branch 'release-v1.27.0' into matrix-org-hotfixes 2021-02-11 11:30:16 -05:00
Richard van der Hoff 844b3e3f65 Revert "block groups requests to fosdem"
This reverts commit 3f6530ed55.
2021-02-06 12:03:46 +00:00
Richard van der Hoff 3f6530ed55 block groups requests to fosdem 2021-02-06 11:04:32 +00:00
Erik Johnston 25757a3d47 Merge branch 'erikj/media_spam_checker' into matrix-org-hotfixes 2021-02-05 10:13:55 +00:00
Erik Johnston 6e774373c2 Merge remote-tracking branch 'origin/release-v1.27.0' into matrix-org-hotfixes 2021-02-02 16:06:59 +00:00
Erik Johnston 512e313f18 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-26 14:15:26 +00:00
Patrick Cloke a574751a87 Merge remote-tracking branch 'origin/release-v1.26.0' into matrix-org-hotfixes 2021-01-25 08:07:39 -05:00
Erik Johnston bde75f5f66 Merge remote-tracking branch 'origin/release-v1.26.0' into matrix-org-hotfixes 2021-01-21 16:05:34 +00:00
Erik Johnston e33124a642 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-20 10:45:19 +00:00
Erik Johnston bed4fa29fd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-19 10:19:25 +00:00
Erik Johnston f5ab7d8306 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-18 11:14:37 +00:00
Erik Johnston 029c9ef967 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-15 14:05:55 +00:00
Erik Johnston e6b27b480c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-14 17:39:13 +00:00
Erik Johnston 43dc637136 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-14 15:29:29 +00:00
Erik Johnston 00c62b9d07 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-08 11:18:20 +00:00
Erik Johnston 82a91208d6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-07 13:04:45 +00:00
Erik Johnston 91fd180be1 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2021-01-07 10:35:04 +00:00
Patrick Cloke fb4a4f9f15 Merge branch 'release-v1.25.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2021-01-05 12:12:07 -05:00
Richard van der Hoff 5a4f09228d Remove cache from room directory query results
This reverts a285fe0. Hopefully the cache is no longer required, thanks to
2021-01-05 13:52:36 +00:00
Richard van der Hoff 97d12dcf56 Merge remote-tracking branch 'origin/release-v1.25.0' into matrix-org-hotfixes 2021-01-05 11:32:29 +00:00
Patrick Cloke f4f65f4e99 Allow redacting events on workers (#8994)
Adds the redacts endpoint to workers that have the client listener.
2020-12-29 11:06:10 -05:00
Patrick Cloke 863359a04f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-12-15 11:26:13 -05:00
Patrick Cloke 33a349df91 Merge branch 'develop' into matrix-org-hotfixes 2020-12-15 08:23:14 -05:00
Patrick Cloke a41b1dc49f Merge branch 'release-v1.24.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-12-04 09:03:12 -05:00
Patrick Cloke 16744644f6 Merge branch 'release-v1.24.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-12-02 08:40:21 -05:00
Erik Johnston dbf46f3891 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-11-27 10:25:17 +00:00
Erik Johnston 52984e9e69 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-11-13 12:05:55 +00:00
Richard van der Hoff ce2107eee1 Merge branch 'rav/fix_sighup' into matrix-org-hotfixes 2020-10-31 10:54:23 +00:00
Richard van der Hoff 8373e6254f Fix SIGHUP handler
Fixes:

```
builtins.TypeError: _reload_logging_config() takes 1 positional argument but 2 were given
```
2020-10-31 10:53:12 +00:00
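The traceback above comes from a SIGHUP hook being invoked with more positional arguments than it accepts. A standalone sketch of a tolerant hook registration follows; the names loosely mirror the traceback, but this is not Synapse's actual signal plumbing.

```python
import signal

_sighup_callbacks = []


def register_sighup(func, *args):
    """Remember func (and the args to call it with) for the next SIGHUP."""
    _sighup_callbacks.append((func, args))


def _handle_sighup(signum, frame):
    # OS signal handlers always receive (signum, frame); each registered
    # callback is invoked with whatever it was registered with.
    for func, args in _sighup_callbacks:
        func(*args)


def _reload_logging_config(hs, *_unused):
    # Accepting *_unused keeps the hook tolerant of being called with extra
    # positional arguments - the shape of failure in the traceback above.
    print("reloading logging config for", hs)


signal.signal(signal.SIGHUP, _handle_sighup)
register_sighup(_reload_logging_config, "homeserver")
```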
Erik Johnston 1ff3bc332a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-30 12:08:09 +00:00
Andrew Morgan 172ddb3b45 Merge branch 'develop' into matrix-org-hotfixes
* develop:
  Don't unnecessarily start bg process in replication sending loop. (#8670)
  Don't unnecessarily start bg process while handling typing. (#8668)
2020-10-28 12:14:03 +00:00
Andrew Morgan d60af9305a Patch to temporarily drop cross-user m.key_share_requests (#8675)
Cross-user `m.key_share_requests` are a relatively new kind of `to_device` message that allows a user to re-request session keys for a message from another user if they were otherwise unable to retrieve them.

Unfortunately, these have had performance concerns on matrix.org. This is a temporary patch to disable them while we investigate a better solution.
2020-10-28 11:58:47 +00:00
Erik Johnston bcb6b243e9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-27 14:13:14 +00:00
Erik Johnston 32457baa40 Merge branch 'release-v1.22.0' into matrix-org-hotfixes 2020-10-26 15:03:36 +00:00
Erik Johnston ab4cd7f802 Merge remote-tracking branch 'origin/release-v1.21.3' into matrix-org-hotfixes 2020-10-22 09:57:06 +01:00
Erik Johnston e9b5e642c3 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-10-16 11:34:53 +01:00
Erik Johnston 9250ee8650 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-14 13:32:07 +01:00
Richard van der Hoff bdbe2b12c2 Revert "block membership events from spammy freenode bridge"
This reverts commit cd2f831b9d.
2020-10-13 17:10:45 +01:00
Erik Johnston 43bcb1e54e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-13 13:29:50 +01:00
Richard van der Hoff cd2f831b9d block membership events from spammy freenode bridge 2020-10-12 19:09:30 +01:00
Erik Johnston 4b43332131 Merge remote-tracking branch 'origin/release-v1.21.0' into matrix-org-hotfixes 2020-10-07 17:09:29 +01:00
Richard van der Hoff 77daff166d Merge remote-tracking branch 'origin/release-v1.21.0' into matrix-org-hotfixes 2020-10-02 12:32:26 +01:00
Richard van der Hoff 5ccc0785c1 Revert "fix remote thumbnails?"
This has now been fixed by a different commit (73d93039f).

This reverts commit b0a463f758.
2020-10-02 12:30:49 +01:00
Richard van der Hoff b0a463f758 fix remote thumbnails? 2020-10-01 15:53:18 +01:00
Richard van der Hoff 8a8d01d732 Merge branch 'develop' into matrix-org-hotfixes 2020-10-01 15:07:33 +01:00
Richard van der Hoff 1c22954668 Revert "Temporary fix to ensure kde can contact matrix.org if stuff breaks"
This reverts commit d90b0946ed.

We believe this is no longer required.
2020-10-01 12:10:55 +01:00
Richard van der Hoff e675bbcc49 Remove redundant EventCreationHandler._is_worker_app attribute
This was added in 1c347c84bf/#7544 as a temporary optimisation. That was never
merged to develop, since it conflicted with #7492. The merge cf92310da forgot
to remove it.
2020-10-01 11:51:57 +01:00
Richard van der Hoff 607367aeb1 Fix typo in comment
I think this came from a bad merge
2020-10-01 11:43:16 +01:00
Richard van der Hoff ac6c5f198e Remove dangling changelog.d files
These result from PRs which were cherry-picked from release branches.
2020-10-01 11:31:07 +01:00
Richard van der Hoff db13a8607e Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-10-01 11:22:36 +01:00
Richard van der Hoff cfb3096e33 Revert federation-transaction-transmission backoff hacks
This reverts b852a8247, 15b2a5081, 28889d8da.

I don't think these patches are required any more, and if they are, they should
be on mainline, not hidden in our hotfixes branch. Let's try backing them out:
if that turns out to be an error, we can PR them properly.
2020-10-01 11:22:19 +01:00
Erik Johnston 7b6f857aa9 Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-22 10:11:01 +01:00
Erik Johnston 9eea5c43af Intelligently select extremities used in backfill. (#8349)
Instead of just using the most recent extremities let's pick the
ones that will give us results that the pagination request cares about,
i.e. pick extremities only if they have a smaller depth than the
pagination token.

This is useful when we fail to backfill an extremity, as we no longer
get stuck requesting that same extremity repeatedly.
2020-09-18 15:07:36 +01:00
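A compact sketch of the selection rule described in the commit message: keep only extremities that are older (smaller depth) than the client's pagination token, then prefer the deepest of those. Names and the tie-breaking details are illustrative, not the actual implementation.

```python
def choose_backfill_extremities(extremities, pagination_depth, limit=5):
    """Pick backwards extremities worth backfilling for a /messages request.

    `extremities` is an iterable of (event_id, depth) pairs and
    `pagination_depth` is the depth encoded in the pagination token. Only
    extremities older (smaller depth) than the token can contribute results,
    so filter the rest out, then prefer the deepest remaining ones.
    """
    candidates = [(eid, d) for eid, d in extremities if d < pagination_depth]
    candidates.sort(key=lambda pair: pair[1], reverse=True)
    return [eid for eid, _ in candidates[:limit]]


# choose_backfill_extremities([("$a", 10), ("$b", 90), ("$c", 40)], pagination_depth=50)
# -> ["$c", "$a"]  ("$b" is newer than the token, so backfilling it cannot help)
```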
Andrew Morgan 104c490274 Use _check_sigs_and_hash_and_fetch to validate backfill requests (#8350)
This is a bit of a hack, as `_check_sigs_and_hash_and_fetch` is intended
for attempting to pull an event from the database/(re)pull it from the
server that originally sent the event if checking the signature of the
event fails.

During backfill we *know* that we won't have the event in our database,
however it is still useful to be able to query the original sending
server as the server we're backfilling from may be acting maliciously.

The main benefit and reason for this change however is that
`_check_sigs_and_hash_and_fetch` will drop an event during backfill if
it cannot be successfully validated, whereas the current code will
simply fail the backfill request - resulting in the client's /messages
request silently being dropped.

This is a quick patch to fix backfilling rooms that contain malformed
events. A better implementation is planned for the future.
2020-09-18 15:07:33 +01:00
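The key behavioural point above is that a validation failure during backfill should drop only the offending event rather than fail the whole request. A hedged sketch of that filtering step; `check_sigs_and_hash` is a placeholder for the real validation coroutine.

```python
async def check_and_filter_backfill_events(events, check_sigs_and_hash):
    """Validate each backfilled event, dropping the ones that fail rather than
    failing the whole backfill (which would surface as an empty /messages
    response to the client)."""
    valid = []
    for event in events:
        try:
            valid.append(await check_sigs_and_hash(event))
        except Exception:
            # A malformed or badly-signed event poisons only itself, not the
            # entire backfill request.
            continue
    return valid
```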
Patrick Cloke bbb7ca1f15 Merge remote-tracking branch 'origin/release-v1.19.2' into matrix-org-hotfixes 2020-09-16 08:21:05 -04:00
Patrick Cloke 27ef82d972 Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-11 07:34:53 -04:00
Richard van der Hoff 9df3a8a19f Merge branch 'release-v1.20.0' into matrix-org-hotfixes 2020-09-09 16:59:10 +01:00
Richard van der Hoff 5c4b13cd8f Merge remote-tracking branch 'origin/release-v1.20.0' into matrix-org-hotfixes 2020-09-07 17:00:02 +01:00
Richard van der Hoff d74e8f2875 Merge branch 'release-v1.20.0' into matrix-org-hotfixes 2020-09-07 13:44:54 +01:00
Brendan Abolivier cc23d81a74 Merge branch 'develop' into matrix-org-hotfixes 2020-09-04 11:02:10 +01:00
Brendan Abolivier 505ea932f5 Merge branch 'develop' into matrix-org-hotfixes 2020-09-03 15:30:00 +01:00
Richard van der Hoff 5f224a4794 Merge branch 'develop' into matrix-org-hotfixes 2020-08-28 15:59:57 +01:00
Patrick Cloke 3f488bfded Merge branch 'develop' into matrix-org-hotfixes 2020-08-27 10:16:21 -04:00
Richard van der Hoff b4c1cfacc2 Merge branch 'develop' into matrix-org-hotfixes 2020-08-18 18:20:01 +01:00
Richard van der Hoff afe4c4e02e Merge branch 'develop' into matrix-org-hotfixes 2020-08-18 18:13:47 +01:00
Brendan Abolivier 527f73d902 Merge branch 'develop' into matrix-org-hotfixes 2020-08-13 11:45:08 +01:00
Richard van der Hoff 82fec809a5 Merge branch 'develop' into matrix-org-hotfixes 2020-07-31 10:30:05 +01:00
Richard van der Hoff b2ccc72a00 Merge branch 'release-v1.18.0' into matrix-org-hotfixes 2020-07-28 10:15:22 +01:00
Richard van der Hoff be777e325d Merge branch 'develop' into matrix-org-hotfixes 2020-07-24 09:57:49 +01:00
Richard van der Hoff 25880bd441 Merge branch 'develop' into matrix-org-hotfixes 2020-07-09 12:49:39 +01:00
Richard van der Hoff cc86fbc9ad Merge branch 'develop' into matrix-org-hotfixes 2020-07-09 11:06:52 +01:00
Patrick Cloke bd30967bd7 Merge branch 'release-v1.15.2' into matrix-org-hotfixes 2020-07-02 10:08:07 -04:00
Andrew Morgan 8fed03aa3e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-07-01 11:12:28 +01:00
Andrew Morgan ba66e3dfef Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-07-01 10:46:06 +01:00
Erik Johnston 199ab854d6 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-06-26 11:08:10 +01:00
Erik Johnston c16bb06d25 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-06-25 09:39:01 +01:00
Erik Johnston d06f4ab693 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-17 16:32:39 +01:00
Erik Johnston 8ba1086801 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-17 14:38:15 +01:00
Brendan Abolivier fea4b1d6ad Merge branch 'release-v1.15.1' into matrix-org-hotfixes 2020-06-16 11:29:33 +01:00
Brendan Abolivier ae91d50100 Merge branch 'release-v1.15.1' into matrix-org-hotfixes 2020-06-16 10:16:44 +01:00
Brendan Abolivier 0d29112624 Merge branch 'release-v1.15.0' into matrix-org-hotfixes 2020-06-11 13:43:55 +01:00
Brendan Abolivier d6c7550cf5 Merge tag 'v1.15.0rc1' of github.com:matrix-org/synapse into matrix-org-hotfixes
Synapse 1.15.0rc1 (2020-06-09)
==============================

Features
--------

- Advertise support for Client-Server API r0.6.0 and remove related unstable feature flags. ([\#6585](https://github.com/matrix-org/synapse/issues/6585))
- Add an option to disable autojoining rooms for guest accounts. ([\#6637](https://github.com/matrix-org/synapse/issues/6637))
- For SAML authentication, add the ability to pass email addresses to be added to new users' accounts via SAML attributes. Contributed by Christopher Cooper. ([\#7385](https://github.com/matrix-org/synapse/issues/7385))
- Add admin APIs to allow server admins to manage users' devices. Contributed by @dklimpel. ([\#7481](https://github.com/matrix-org/synapse/issues/7481))
- Add support for generating thumbnails for WebP images. Previously, users would see an empty box instead of a preview image. ([\#7586](https://github.com/matrix-org/synapse/issues/7586))
- Support the standardized `m.login.sso` user-interactive authentication flow. ([\#7630](https://github.com/matrix-org/synapse/issues/7630))

Bugfixes
--------

- Allow new users to be registered via the admin API even if the monthly active user limit has been reached. Contributed by @dklimpel. ([\#7263](https://github.com/matrix-org/synapse/issues/7263))
- Fix email notifications not being enabled for new users when created via the Admin API. ([\#7267](https://github.com/matrix-org/synapse/issues/7267))
- Fix str placeholders in an instance of `PrepareDatabaseException`. Introduced in Synapse v1.8.0. ([\#7575](https://github.com/matrix-org/synapse/issues/7575))
- Fix a bug in automatic user creation during first time login with `m.login.jwt`. Regression in v1.6.0. Contributed by @olof. ([\#7585](https://github.com/matrix-org/synapse/issues/7585))
- Fix a bug causing the cross-signing keys to be ignored when resyncing a device list. ([\#7594](https://github.com/matrix-org/synapse/issues/7594))
- Fix metrics failing when there is a large number of active background processes. ([\#7597](https://github.com/matrix-org/synapse/issues/7597))
- Fix bug where returning rooms for a group would fail if it included a room that the server was not in. ([\#7599](https://github.com/matrix-org/synapse/issues/7599))
- Fix duplicate key violation when persisting read markers. ([\#7607](https://github.com/matrix-org/synapse/issues/7607))
- Prevent an entire iteration of the device list resync loop from failing if one server responds with a malformed result. ([\#7609](https://github.com/matrix-org/synapse/issues/7609))
- Fix exceptions when fetching events from a remote host fails. ([\#7622](https://github.com/matrix-org/synapse/issues/7622))
- Make `synctl restart` start synapse if it wasn't running. ([\#7624](https://github.com/matrix-org/synapse/issues/7624))
- Pass device information through to the login endpoint when using the login fallback. ([\#7629](https://github.com/matrix-org/synapse/issues/7629))
- Advertise the `m.login.token` login flow when OpenID Connect is enabled. ([\#7631](https://github.com/matrix-org/synapse/issues/7631))
- Fix bug in account data replication stream. ([\#7656](https://github.com/matrix-org/synapse/issues/7656))

Improved Documentation
----------------------

- Update the OpenBSD installation instructions. ([\#7587](https://github.com/matrix-org/synapse/issues/7587))
- Advertise Python 3.8 support in `setup.py`. ([\#7602](https://github.com/matrix-org/synapse/issues/7602))
- Add a link to `#synapse:matrix.org` in the troubleshooting section of the README. ([\#7603](https://github.com/matrix-org/synapse/issues/7603))
- Clarifications to the admin api documentation. ([\#7647](https://github.com/matrix-org/synapse/issues/7647))

Internal Changes
----------------

- Convert the identity handler to async/await. ([\#7561](https://github.com/matrix-org/synapse/issues/7561))
- Improve query performance for fetching state from a PostgreSQL database. ([\#7567](https://github.com/matrix-org/synapse/issues/7567))
- Speed up processing of federation stream RDATA rows. ([\#7584](https://github.com/matrix-org/synapse/issues/7584))
- Add comment to systemd example to show postgresql dependency. ([\#7591](https://github.com/matrix-org/synapse/issues/7591))
- Refactor `Ratelimiter` to limit the amount of expensive config value accesses. ([\#7595](https://github.com/matrix-org/synapse/issues/7595))
- Convert groups handlers to async/await. ([\#7600](https://github.com/matrix-org/synapse/issues/7600))
- Clean up exception handling in `SAML2ResponseResource`. ([\#7614](https://github.com/matrix-org/synapse/issues/7614))
- Check that all asynchronous tasks succeed and general cleanup of `MonthlyActiveUsersTestCase` and `TestMauLimit`. ([\#7619](https://github.com/matrix-org/synapse/issues/7619))
- Convert `get_user_id_by_threepid` to async/await. ([\#7620](https://github.com/matrix-org/synapse/issues/7620))
- Switch to upstream `dh-virtualenv` rather than our fork for Debian package builds. ([\#7621](https://github.com/matrix-org/synapse/issues/7621))
- Update CI scripts to check the number in the newsfile fragment. ([\#7623](https://github.com/matrix-org/synapse/issues/7623))
- Check if the localpart of a Matrix ID is reserved for guest users earlier in the registration flow, as well as when responding to requests to `/register/available`. ([\#7625](https://github.com/matrix-org/synapse/issues/7625))
- Minor cleanups to OpenID Connect integration. ([\#7628](https://github.com/matrix-org/synapse/issues/7628))
- Attempt to fix flaky test: `PhoneHomeStatsTestCase.test_performance_100`. ([\#7634](https://github.com/matrix-org/synapse/issues/7634))
- Fix typos of `m.olm.curve25519-aes-sha2` and `m.megolm.v1.aes-sha2` in comments, test files. ([\#7637](https://github.com/matrix-org/synapse/issues/7637))
- Convert user directory, state deltas, and stats handlers to async/await. ([\#7640](https://github.com/matrix-org/synapse/issues/7640))
- Remove some unused constants. ([\#7644](https://github.com/matrix-org/synapse/issues/7644))
- Fix type information on `assert_*_is_admin` methods. ([\#7645](https://github.com/matrix-org/synapse/issues/7645))
- Convert registration handler to async/await. ([\#7649](https://github.com/matrix-org/synapse/issues/7649))
2020-06-10 10:57:26 +01:00
Brendan Abolivier 4cf4c7dc99 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-06-01 12:58:34 +02:00
Erik Johnston 6fdf5ef66b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-29 13:27:12 +01:00
Brendan Abolivier d4220574a2 Merge branch 'release-v1.14.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-28 14:12:46 +02:00
Erik Johnston 1a9c8d5ee9 Merge commit 'ef3934ec8f123f6f553b07471588fbcc7f444cd8' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-27 20:06:41 +01:00
Erik Johnston 407dbf8574 Merge branch 'release-v1.14.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-27 13:35:15 +01:00
Erik Johnston 8beca8e21f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-26 09:43:21 +01:00
Erik Johnston cf92310da2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-21 15:19:00 +01:00
Richard van der Hoff 89f795fe8a Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 23:40:22 +01:00
Richard van der Hoff 1c347c84bf inline some config references 2020-05-20 23:33:13 +01:00
Richard van der Hoff 0d8fb99cdf Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 22:18:21 +01:00
Richard van der Hoff b3a9ad124c Fix field name in stubbed out presence servlet 2020-05-20 22:17:59 +01:00
Richard van der Hoff a902468354 Merge branch 'rav/matrix_hacks' into matrix-org-hotfixes 2020-05-20 22:13:44 +01:00
Richard van der Hoff 84639b32ae stub out GET presence requests 2020-05-20 22:13:32 +01:00
Patrick Cloke dac5d5ae42 Merge branch 'release-v1.13.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-05-18 10:27:51 -04:00
Richard van der Hoff 6bd2a39a7d Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-14 10:08:45 +01:00
Richard van der Hoff 309e30bae3 Merge remote-tracking branch 'origin/release-v1.13.0' into matrix-org-hotfixes 2020-05-11 13:09:14 +01:00
Richard van der Hoff 7ff7a415d1 Revert emergency registration patches
Revert "Merge commit '4d3ebc' into matrix-org-hotfixes"

This reverts commit 617541c4c6, reversing
changes made to ae4f6140f1.
2020-05-11 13:08:48 +01:00
Richard van der Hoff 6610343332 Revert emergency registration patches
Revert "Merge remote-tracking branch 'origin/clokep/no-validate-ui-auth-sess' into matrix-org-hotfixes"

This reverts commit 5adad58d95, reversing
changes made to 617541c4c6.
2020-05-11 13:08:14 +01:00
Richard van der Hoff 5adad58d95 Merge remote-tracking branch 'origin/clokep/no-validate-ui-auth-sess' into matrix-org-hotfixes 2020-05-07 15:19:54 +01:00
Patrick Cloke d7c7f64f17 Propagate changes to the client dict to the database. 2020-05-07 10:07:09 -04:00
Patrick Cloke c4c84b67d5 Disable a failing test. 2020-05-07 10:05:00 -04:00
Richard van der Hoff 617541c4c6 Merge commit '4d3ebc' into matrix-org-hotfixes 2020-05-07 14:16:52 +01:00
Patrick Cloke 4d3ebc3620 Disable validation that a UI authentication session has not been modified during a request cycle.
Partial backout of 1c1242acba (#7068)
2020-05-07 08:34:14 -04:00
Richard van der Hoff ae4f6140f1 Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-07 10:42:56 +01:00
Richard van der Hoff 323cfe3efb fix bad merge 2020-05-06 12:14:01 +01:00
Richard van der Hoff b0d2add89d Merge branch 'rav/cross_signing_keys_cache' into matrix-org-hotfixes 2020-05-06 11:59:41 +01:00
Richard van der Hoff ff20747703 Merge branch 'release-v1.13.0' into matrix-org-hotfixes 2020-05-06 11:57:36 +01:00
Richard van der Hoff 9192f1b9dd Merge rav/upsert_for_device_list into matrix-org-hotfixes 2020-05-06 11:46:19 +01:00
Richard van der Hoff 89d178e8e7 Merge rav/fix_dropped_messages into matrix-org-hotfixes 2020-05-05 22:42:48 +01:00
Richard van der Hoff 1c24e35e85 Merge erikj/faster_device_lists_fetch into matrix-org-hotfixes 2020-05-05 18:36:17 +01:00
Erik Johnston 5debf3071c Fix redis password support 2020-05-04 16:44:21 +01:00
Richard van der Hoff e9bd4bb388 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-05-01 09:26:57 +01:00
Richard van der Hoff 649e48a799 Merge branch 'develop' into matrix-org-hotfixes 2020-04-24 14:07:47 +01:00
Richard van der Hoff 9b0157686b Merge branch 'release-v1.12.4' into matrix-org-hotfixes 2020-04-22 13:30:35 +01:00
Richard van der Hoff 8288218b29 Merge remote-tracking branch 'origin/release-v1.12.4' into matrix-org-hotfixes 2020-04-21 11:03:32 +01:00
Richard van der Hoff da5e6eea45 Revert recent merges of #7289 into matrix-org-hotfixes
This was incorrectly merged before it was ready.

This reverts commit aead826d2d, reversing
changes made to 4cd2a4ae3a.

It also reverts commits 9b8212d25, fb3f1fb5c and 2fdfa96ee.
2020-04-21 11:00:57 +01:00
Andrew Morgan 2fdfa96ee6 lint 2020-04-17 17:38:36 +01:00
Andrew Morgan fb3f1fb5c0 Fix log lines, return type, tuple handling 2020-04-17 17:36:53 +01:00
Andrew Morgan 9b8212d256 Update changelog 2020-04-17 17:36:24 +01:00
Andrew Morgan aead826d2d Merge branch 'release-v1.12.4' of github.com:matrix-org/synapse into matrix-org-hotfixes
* 'release-v1.12.4' of github.com:matrix-org/synapse:
  Query missing cross-signing keys on local sig upload
2020-04-17 15:49:31 +01:00
Andrew Morgan 4cd2a4ae3a Merge branch 'release-v1.12.4' into HEAD
* release-v1.12.4:
  Only register devices edu handler on the master process (#7255)
  tweak changelog
  1.12.3
  Fix the debian build in a better way. (#7212)
  Fix changelog wording
  1.12.2
  Pin Pillow>=4.3.0,<7.1.0 to fix dep issue
  1.12.1
2020-04-14 13:36:19 +01:00
Andrew Morgan 66cd243e6f Merge branch 'release-v1.12.1' of github.com:matrix-org/synapse into matrix-org-hotfixes
* 'release-v1.12.1' of github.com:matrix-org/synapse:
  Note where bugs were introduced
  1.12.1rc1
  Newsfile
  Rewrite changelog
  Add changelog
  Only import sqlite3 when type checking
  Fix another instance
  Only setdefault for signatures if device has key_json
  Fix starting workers when federation sending not split out.
  matrix.org was fine
  Update CHANGES.md
  changelog typos
  1.12.0 changelog
  1.12.0
  more changelog
  changelog fixes
  fix typo
  1.12.0rc1
  update grafana dashboard
2020-03-31 12:06:11 +01:00
Richard van der Hoff 7b66a1f0d9 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-03-19 10:29:20 +00:00
Richard van der Hoff 059e91bdce Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-03-19 10:03:10 +00:00
Erik Johnston f86962cb6b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-03-18 17:17:03 +00:00
Brendan Abolivier 03c694bb08 Fix schema deltas 2020-03-12 16:48:11 +00:00
Brendan Abolivier 08d68c5296 Populate the room version from state events
See `rooms_version_column_3.sql.postgres` for details about why we need to do
that.
2020-03-12 15:59:24 +00:00
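For background: a room's version is carried in the content of its `m.room.create` state event and defaults to `"1"` when absent, which is what a backfill of the rooms version column has to read. The real delta does this in SQL; the Python sketch below only illustrates the extraction.

```python
import json


def room_version_from_create_event(create_event_json: str) -> str:
    """Extract the room version from an m.room.create event's JSON.

    Per the Matrix spec, the version lives in content["room_version"] and
    defaults to "1" when the field is absent.
    """
    content = json.loads(create_event_json).get("content", {})
    return content.get("room_version", "1")


# room_version_from_create_event('{"type": "m.room.create", "content": {"room_version": "5"}}')
# -> "5"
```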
Brendan Abolivier 568461b5ec Also don't filter out events sent by ignored users when checking state visibility 2020-03-11 17:04:18 +00:00
Brendan Abolivier 6b73b8b70c Fix condition 2020-03-11 15:32:07 +00:00
Brendan Abolivier 936686ed2d Don't filter out events when we're checking the visibility of state 2020-03-11 15:21:25 +00:00
Brendan Abolivier 74050d0c1c Merge branch 'develop' into matrix-org-hotfixes 2020-03-09 15:06:56 +00:00
Richard van der Hoff 69111a8b2a Merge branch 'develop' into matrix-org-hotfixes 2020-02-27 10:46:36 +00:00
Richard van der Hoff d840ee5bde Revert "skip send without trailing slash"
I think this was done back when most synapses would reject the
no-trailing-slash version; it's no longer required, and keeping it makes matrix.org spec-incompliant.

This reverts commit fc5be50d56.
2020-02-27 10:44:55 +00:00
Erik Johnston e3d811e85d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 15:48:33 +00:00
Erik Johnston 578ad9fc48 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 15:11:20 +00:00
Richard van der Hoff 9dbe34f0d0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-02-19 11:40:25 +00:00
Erik Johnston 93a0751302 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-19 10:16:46 +00:00
Erik Johnston bc936b5657 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-18 16:11:26 +00:00
Richard van der Hoff d6eae548a7 Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-11 10:43:32 +00:00
Richard van der Hoff e439438b9b Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-10 09:56:51 +00:00
Richard van der Hoff f8a1e0d1d2 Merge branch 'release-v1.10.0' into matrix-org-hotfixes 2020-02-10 09:54:40 +00:00
Erik Johnston 8a29def84a Add support for putting fed user query API on workers (#6873) 2020-02-07 15:59:05 +00:00
Erik Johnston 77a166577a Allow moving group read APIs to workers (#6866) 2020-02-07 13:57:07 +00:00
Erik Johnston 7d5268d37c Merge branch 'release-v1.10.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-02-06 10:26:39 +00:00
Erik Johnston c854d255e5 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-31 15:06:16 +00:00
Brendan Abolivier c660962d4d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-22 13:48:11 +00:00
Richard van der Hoff 767bef0033 Merge branch 'rav/storage_provider_debug' into matrix-org-hotfixes 2020-01-21 23:03:22 +00:00
Richard van der Hoff 4d02bfd6e1 a bit of debugging for media storage providers 2020-01-21 23:02:58 +00:00
Andrew Morgan a099ab7d38 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-14 10:37:32 +00:00
Erik Johnston ce72a9ccdb Merge branch 'erikj/media_admin_apis' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-08 15:52:58 +00:00
Erik Johnston bace86ed15 Merge branch 'release-v1.8.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-08 15:52:48 +00:00
Erik Johnston 45bf455948 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2020-01-07 14:24:36 +00:00
Richard van der Hoff 859663565c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2020-01-06 15:43:41 +00:00
Richard van der Hoff 0876a5b641 Merge branch 'release-v1.7.3' into matrix-org-hotfixes 2019-12-31 10:47:29 +00:00
Richard van der Hoff 5b5314ee41 Merge branch 'release-v1.7.2' into matrix-org-hotfixes 2019-12-20 10:48:04 +00:00
Richard van der Hoff aff9189149 Merge remote-tracking branch 'origin/release-v1.7.1' into matrix-org-hotfixes 2019-12-17 16:00:43 +00:00
Richard van der Hoff 2eda49a8db Merge remote-tracking branch 'origin/release-v1.7.1' into matrix-org-hotfixes 2019-12-17 10:56:36 +00:00
Richard van der Hoff 96b17d4e4f Merge remote-tracking branch 'origin/release-v1.7.0' into matrix-org-hotfixes 2019-12-17 10:56:26 +00:00
Erik Johnston aadc131dc1 Merge branch 'babolivier/pusher-room-store' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-12-10 12:50:04 +00:00
Neil Johnson 0a522121a0 Merge branch 'release-v1.7.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-12-10 11:25:28 +00:00
Andrew Morgan 0b5e2c8093 Merge branch 'release-v1.6.1' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-28 11:40:33 +00:00
Erik Johnston c665d154a2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 18:56:54 +00:00
Neil Johnson 31295b5a60 Merge branch 'release-v1.6.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 13:16:18 +00:00
Erik Johnston aebe20c452 Fix phone home stats (#6418)
Fix phone home stats
2019-11-26 13:10:09 +00:00
Andrew Morgan 508e0f9310 1.6.0 2019-11-26 12:15:46 +00:00
Andrew Morgan e04e7e830e Merge branch 'release-v1.6.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-26 12:06:38 +00:00
Andrew Morgan 5407e69732 Change /push/v1/notify IP to 10.103.0.7 2019-11-26 12:04:19 +00:00
Erik Johnston 2c59eb368c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-20 15:17:10 +00:00
Erik Johnston 6d1a3e2bdd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-11-19 12:59:39 +00:00
Richard van der Hoff 7fa4586e36 Merge branch 'rav/url_preview_limit_title_2' into matrix-org-hotfixes 2019-11-05 18:18:02 +00:00
Erik Johnston 33b4aa8d99 Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-29 12:18:44 +00:00
Erik Johnston 627cf5def8 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-25 11:35:14 +01:00
Erik Johnston b409d51dee Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-25 10:19:09 +01:00
Erik Johnston 4a4e620f30 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-11 10:40:15 +01:00
Richard van der Hoff 28889d8da5 fix logging 2019-10-11 09:57:18 +01:00
Richard van der Hoff 15b2a50817 Add some randomness to the high-cpu backoff hack 2019-10-11 09:15:56 +01:00
Richard van der Hoff b852a8247d Awful hackery to try to get the fed sender to keep up
Basically, if the federation sender starts getting behind, insert some sleeps
into the transaction transmission code to give the fed sender a chance to catch
up.

Might have to experiment a bit with the numbers.
2019-10-10 10:34:08 +01:00
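A sketch of the hack described above: when the sender's backlog crosses a threshold, sleep briefly (with some jitter, per the "Add some randomness to the high-cpu backoff hack" commit higher up) before transmitting the next transaction. Every number and name here is made up for illustration.

```python
import asyncio
import random


async def maybe_backoff_before_send(pending_destinations: int,
                                     high_water_mark: int = 1000,
                                     max_sleep: float = 2.0) -> None:
    """If the federation sender has a large backlog, sleep a little before the
    next transaction so the sender loop gets a chance to catch up."""
    if pending_destinations <= high_water_mark:
        return
    # Jitter avoids every sending loop waking at the same instant and
    # producing a high-CPU thundering herd.
    await asyncio.sleep(random.uniform(0, max_sleep))
```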
Erik Johnston 7b55cca011 Merge branch 'erikj/cache_memberships' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-07 13:15:22 +01:00
Richard van der Hoff a9577ab1f4 Merge branch 'develop' into matrix-org-hotfixes 2019-10-03 17:52:22 +01:00
Richard van der Hoff cb217d5d60 Revert "Awful hackery to try to get the fed sender to keep up"
This reverts commit 721086a291.

This didn't help.
2019-10-03 17:05:24 +01:00
Andrew Morgan f4f5355bcf Merge branch 'release-v1.4.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-03 13:06:32 +01:00
Erik Johnston 23bb2713d2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 16:51:08 +01:00
Erik Johnston b2471e1109 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 15:39:31 +01:00
Erik Johnston 610219d53d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 14:09:29 +01:00
Erik Johnston b464afe283 Merge branch 'release-v1.4.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-10-02 11:09:05 +01:00
Richard van der Hoff 7657ad3ced Merge branch 'rav/federation_sender_hackery' into matrix-org-hotfixes 2019-09-27 16:14:52 +01:00
Richard van der Hoff 721086a291 Awful hackery to try to get the fed sender to keep up
Basically, if the federation sender starts getting behind, insert some sleeps
into the transaction transmission code to give the fed sender a chance to catch
up.

Might have to experiment a bit with the numbers.
2019-09-27 16:13:51 +01:00
Richard van der Hoff 6e6b53ed3a Merge branch 'develop' into matrix-org-hotfixes 2019-09-26 15:22:33 +01:00
Richard van der Hoff 601b50672d Merge branch 'develop' into matrix-org-hotfixes 2019-09-25 12:48:40 +01:00
Richard van der Hoff a7af389da0 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-09-24 17:05:15 +01:00
Neil Johnson 99db0d76fd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-24 14:56:12 +01:00
Richard van der Hoff 561b0f79bc Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-09-24 10:11:19 +01:00
Richard van der Hoff 8569f3cdef Merge branch 'rav/fix_retry_reset' into matrix-org-hotfixes 2019-09-20 12:14:19 +01:00
Richard van der Hoff 7b61e6f5d6 Merge branch 'develop' into matrix-org-hotfixes 2019-09-18 13:55:25 +01:00
Richard van der Hoff 05241b3031 Revert "Fix m.federate bug"
This has now been merged into develop (142c9325c) so we no longer need this
cherry-picked commit.

This reverts commit ee91c69ef7.
2019-09-18 13:54:57 +01:00
Richard van der Hoff e01026d84d Revert "Fix existing v2 identity server calls (MSC2140) (#6013)"
This has now been merged into develop (3505ffcda) so we don't need this
cherry-picked commit.

This reverts commit e0eef47315.
2019-09-18 13:53:37 +01:00
Erik Johnston ee91c69ef7 Fix m.federate bug 2019-09-13 14:44:48 +01:00
Andrew Morgan e0eef47315 Fix existing v2 identity server calls (MSC2140) (#6013)
Two things I missed while implementing [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140/files#diff-c03a26de5ac40fb532de19cb7fc2aaf7R80).

1. Access tokens should be provided to the identity server as `access_token`, not `id_access_token`, even though the homeserver may accept the tokens as `id_access_token`.
2. Access tokens must be sent to the identity server in a query parameter, the JSON body is not allowed.

We now send the access token as part of an `Authorization: ...` header, which fixes both things.

The breaking code was added in https://github.com/matrix-org/synapse/pull/5892

Sytest PR: https://github.com/matrix-org/sytest/pull/697
2019-09-13 14:08:26 +01:00
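The fix above moves the identity-server access token out of the JSON body and into an `Authorization` header (a query-parameter `access_token` also being acceptable). A sketch of what such a call looks like using `requests`; Synapse itself uses its own HTTP client, and the payload contents are elided.

```python
import requests


def bind_threepid(id_server: str, access_token: str, payload: dict) -> dict:
    """Call a v2 identity-server endpoint with the access token in an
    Authorization header, never in the JSON body."""
    resp = requests.post(
        f"https://{id_server}/_matrix/identity/v2/3pid/bind",
        json=payload,  # the body carries only the 3pid details, no token
        headers={"Authorization": f"Bearer {access_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```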
Erik Johnston 44d2ca2990 Merge branch 'anoa/fix_3pid_validation' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-10 18:15:24 +01:00
Erik Johnston 9240622c1a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 14:10:53 +01:00
Erik Johnston 0dbba85e95 Merge branch 'anoa/worker_store_reg' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 13:02:12 +01:00
Andrew Morgan 1ceeccb769 Move get_threepid_validation_session into RegistrationWorkerStore 2019-09-06 13:00:34 +01:00
Erik Johnston 39883e85bd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 12:50:28 +01:00
Erik Johnston 68f53b7a0e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-06 09:53:37 +01:00
Erik Johnston e679b008ff Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-05 15:23:40 +01:00
Erik Johnston e80a5b7492 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-09-04 13:13:30 +01:00
Richard van der Hoff b272e7345f Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-08-30 12:01:24 +01:00
Erik Johnston a81e0233e9 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-29 11:18:57 +01:00
Richard van der Hoff 80898481ab Merge branch 'release-v1.3.1' into matrix-org-hotfixes 2019-08-17 09:22:30 +01:00
Brendan Abolivier 9d4c716d85 Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-15 11:36:00 +01:00
Brendan Abolivier d90b0946ed Temporary fix to ensure kde can contact matrix.org if stuff breaks 2019-08-13 18:05:06 +01:00
Brendan Abolivier 8d5762b0dc Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-13 17:39:30 +01:00
Brendan Abolivier a7efbc5416 Merge branch 'release-v1.3.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-13 15:54:01 +01:00
Richard van der Hoff be362cb8f8 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-08-13 10:52:19 +01:00
Erik Johnston 873ff9522b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-08-01 14:46:09 +01:00
Erik Johnston c1ee2999a0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-31 10:01:56 +01:00
Erik Johnston 9b2b386f76 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-30 13:26:19 +01:00
Erik Johnston 65fe31786d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-30 10:12:13 +01:00
Andrew Morgan 70b6d1dfd6 Merge branch 'release-v1.2.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-24 13:32:41 +01:00
Erik Johnston ee62aed72e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-23 10:23:40 +01:00
Erik Johnston c02f26319d Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-23 09:20:26 +01:00
Andrew Morgan fdd182870c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-22 10:19:16 +01:00
Richard van der Hoff 4102cb220a Merge branch 'release-v1.2.0' into matrix-org-hotfixes 2019-07-18 15:20:00 +01:00
Erik Johnston 5299707329 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-07-17 10:56:55 +01:00
Richard van der Hoff 43e01be158 Merge remote-tracking branch 'origin/release-v1.1.0' into matrix-org-hotfixes 2019-07-03 09:49:35 +01:00
Richard van der Hoff 589e080c6b Merge branch 'release-v1.1.0' into matrix-org-hotfixes 2019-07-03 09:47:55 +01:00
Richard van der Hoff 24e48bc9ff Merge branch 'release-v1.1.0' into matrix-org-hotfixes 2019-07-02 12:05:33 +01:00
Erik Johnston 576b62a6a3 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-28 10:04:54 +01:00
Erik Johnston ad2ba70959 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-24 15:31:36 +01:00
Erik Johnston a330505025 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-21 14:36:13 +01:00
Erik Johnston 67b73fd147 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-21 13:27:04 +01:00
Erik Johnston c08e4dbadc Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-17 14:10:28 +01:00
Erik Johnston 6dbd498772 Merge branch 'master' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 17:25:54 +01:00
Erik Johnston 03b09b32d6 Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 14:00:50 +01:00
Erik Johnston 8f1711da0e Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-11 00:23:54 +01:00
Erik Johnston 6fb6c98f71 Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-10 18:34:45 +01:00
Erik Johnston aad993f24d Merge branch 'release-v1.0.0' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-10 16:05:10 +01:00
Erik Johnston 544e101c24 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-06-04 16:58:38 +01:00
Richard van der Hoff 8699f380f0 hotfix RetryLimiter 2019-06-04 12:14:41 +01:00
Richard van der Hoff e91a68ef3a Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-06-04 11:59:55 +01:00
Richard van der Hoff 9f5048c198 Merge branch 'rav/limit_displayname_length' into matrix-org-hotfixes 2019-06-01 11:15:43 +01:00
Erik Johnston b3c40ba58a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-31 10:58:47 +01:00
Erik Johnston 8d69193a42 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-30 14:33:44 +01:00
Erik Johnston bbcd19f2d0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-24 10:53:01 +01:00
Erik Johnston 3cd598135f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-05-23 15:54:13 +01:00
Richard van der Hoff 1c8f2c34ff Merge branch 'develop' into matrix-org-hotfixes 2019-05-21 16:29:25 +01:00
Richard van der Hoff ca03f90ee7 Merge branch 'develop' into matrix-org-hotfixes 2019-05-20 15:55:39 +01:00
Richard van der Hoff 9feee29d76 Merge tag 'v0.99.4rc1' into matrix-org-hotfixes
v0.99.4rc1
2019-05-14 11:12:22 +01:00
Richard van der Hoff e7dcee13da Merge commit 'a845abbf3' into matrix-org-hotfixes 2019-05-03 17:12:28 +01:00
Richard van der Hoff 7467738834 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2019-05-02 13:37:35 +01:00
Erik Johnston d75fb8ae22 Merge branch 'erikj/ratelimit_3pid_invite' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-26 18:12:33 +01:00
Erik Johnston ae25a8efef Merge branch 'erikj/postpath' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-17 10:14:57 +01:00
Richard van der Hoff fc5be50d56 skip send without trailing slash 2019-04-16 15:16:57 +01:00
Erik Johnston aadba440da Point pusher to new box 2019-04-15 19:23:21 +01:00
Erik Johnston ec94d6a590 VersionRestServlet doesn't take a param 2019-04-15 19:21:32 +01:00
Erik Johnston 42ce90c3f7 Merge branch 'erikj/move_endpoints' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-15 18:56:46 +01:00
Erik Johnston 8467756dc1 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-04 14:43:57 +01:00
Erik Johnston 613b443ff0 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-04-02 18:25:45 +01:00
Richard van der Hoff 233b61ac61 Remove spurious changelog files from hotfixes
The relevant patches are now in develop thanks to
https://github.com/matrix-org/synapse/pull/4816.
2019-04-02 13:51:37 +01:00
Richard van der Hoff f41c9d37d6 Merge branch 'develop' into matrix-org-hotfixes 2019-04-02 13:47:08 +01:00
Neil Johnson 1048e2ca6a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-27 09:18:35 +00:00
Richard van der Hoff ce0ce1add3 Merge branch 'develop' into matrix-org-hotfixes 2019-03-25 16:48:56 +00:00
Erik Johnston b0bf1ea7bd Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-21 14:10:31 +00:00
Richard van der Hoff 2561b628af Merge branch 'develop' into matrix-org-hotfixes 2019-03-19 12:19:20 +00:00
Richard van der Hoff 73c6630718 Revert "Reinstate EDU-batching hacks"
This reverts commit ed8ccc3737.
2019-03-19 12:17:28 +00:00
Erik Johnston a189bb03ab Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-14 14:39:06 +00:00
Erik Johnston 404a2d70be Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-14 13:55:29 +00:00
Richard van der Hoff ed8ccc3737 Reinstate EDU-batching hacks
This reverts commit c7285607a3.
2019-03-13 14:42:11 +00:00
Erik Johnston 18b1a92162 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-08 09:56:17 +00:00
Amber Brown 199aa72d35 Merge branch 'develop' of ssh://github.com/matrix-org/synapse into
matrix-org-hotfixes
2019-03-07 21:43:10 +11:00
Erik Johnston 8f7dbbc14a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-06 19:30:30 +00:00
Erik Johnston 27dbc9ac42 Reenable presence tests and remove pointless change 2019-03-06 17:12:45 +00:00
Richard van der Hoff e9aa401994 Remove redundant changes from synapse/replication/tcp/streams.py (#4813)
This was some hacky code (introduced in c10c71e70d) to make the presence stream
do nothing on hotfixes. We now ensure that no replication clients subscribe to
the presence stream, so this is redundant.
2019-03-06 13:21:32 +00:00
Richard van der Hoff 9e9572c79e Run black on synapse/handlers/user_directory.py (#4812)
This got done on the develop branch in #4635, but the subsequent merge to
hotfixes (88af0317a) discarded the changes for some reason.

Fixing this here and now means (a) there are fewer differences between
matrix-org-hotfixes and develop, making future patches easier to merge, and (b)
fixes some pep8 errors on the hotfixes branch which have been annoying me for
some time.
2019-03-06 11:56:03 +00:00
Richard van der Hoff c7285607a3 Revert EDU-batching hacks from matrix-org-hotfixes
Firstly: we want to do this in a better way, which is the intention of
too many RRs, which means we need to make it happen again.

This reverts commits: 8d7c0264b 000d23090 eb0334b07 4d07dc0d1
2019-03-06 11:04:53 +00:00
Erik Johnston a6e2546980 Fix outbound federation 2019-03-05 14:50:37 +00:00
Erik Johnston dc510e0e43 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-03-05 14:41:13 +00:00
Richard van der Hoff ed12338f35 Remove #4733 debug (#4767)
We don't need any of this stuff now; this brings protocol.py back into line
with develop for the hotfixes branch.
2019-03-04 14:00:03 +00:00
Richard van der Hoff bf3f8b8855 Add more debug for #4422 (#4769) 2019-02-28 17:46:22 +00:00
Richard van der Hoff 67acd1aa1b Merge branch 'develop' into matrix-org-hotfixes 2019-02-27 10:29:24 +00:00
Erik Johnston 75c924430e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-26 09:36:29 +00:00
Richard van der Hoff 6087c53830 Add more debug for membership syncing issues (#4719) 2019-02-25 17:00:18 +00:00
Erik Johnston b50fe65a22 Add logging when sending error 2019-02-25 15:55:21 +00:00
Erik Johnston 17009e689b Merge pull request #4734 from matrix-org/rav/repl_debug
Add some debug to help with #4733
2019-02-25 15:52:45 +00:00
Richard van der Hoff 5d2f755d3f Add some debug to help with #4733 2019-02-25 14:37:23 +00:00
Richard van der Hoff 8d7c0264bc more fix edu batching hackery 2019-02-24 23:27:52 +00:00
Richard van der Hoff 000d230901 fix edu batching hackery 2019-02-24 23:19:37 +00:00
Richard van der Hoff eb0334b07c more edu batching hackery 2019-02-24 23:15:09 +00:00
Richard van der Hoff 4d07dc0d18 Add a delay to the federation loop for EDUs 2019-02-24 22:24:36 +00:00
Erik Johnston 0ea52872ab Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-22 15:29:41 +00:00
Richard van der Hoff 6868d53fe9 bail out early in on_new_receipts if no pushers 2019-02-21 15:58:15 +00:00
Richard van der Hoff 68af15637b Merge branch 'develop' into matrix-org-hotfixes 2019-02-20 14:24:17 +00:00
Richard van der Hoff 4da63d9f6f Merge branch 'develop' into matrix-org-hotfixes 2019-02-20 14:15:56 +00:00
Richard van der Hoff 085d69b0bd Apply the pusher http hack in the right place (#4692)
Do it in the constructor, so that it works for badge updates as well as pushes
2019-02-20 11:25:10 +00:00
Erik Johnston 776fe6c184 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-20 09:52:24 +00:00
Erik Johnston 0e07d2c7d5 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-19 13:24:37 +00:00
Erik Johnston 90ec885805 Revert "Merge pull request #4654 from matrix-org/hawkowl/registration-worker"
This reverts commit 5bd2e2c31d, reversing
changes made to d97c3a6ce6.
2019-02-19 13:23:17 +00:00
Erik Johnston 5a28154c4d Revert "Merge pull request #4655 from matrix-org/hawkowl/registration-worker"
This reverts commit 93555af5c9, reversing
changes made to 5bd2e2c31d.
2019-02-19 13:23:14 +00:00
Erik Johnston 2fcb51e703 Merge branch 'matthew/well-known-cors' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-18 18:38:49 +00:00
Erik Johnston 26f524872f Revert change that cached connection factory 2019-02-18 18:36:54 +00:00
Erik Johnston 88af0317a2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-15 22:39:13 +00:00
Erik Johnston c10c71e70d Emergency changes 2019-02-15 18:15:21 +00:00
Erik Johnston 93555af5c9 Merge pull request #4655 from matrix-org/hawkowl/registration-worker
Device replication
2019-02-15 18:12:49 +00:00
Amber Brown 06622e4110 fix 2019-02-16 05:11:09 +11:00
Amber Brown 155efa9e36 fix 2019-02-16 05:10:48 +11:00
Amber Brown 3175edc5d8 maybe 2019-02-16 05:09:08 +11:00
Amber Brown d95252c01f use a device replication thingy 2019-02-16 05:08:58 +11:00
Erik Johnston 5bd2e2c31d Merge pull request #4654 from matrix-org/hawkowl/registration-worker
Registration worker
2019-02-15 17:51:34 +00:00
Amber Brown 84528e4fb2 cleanup 2019-02-16 04:49:09 +11:00
Amber Brown e4381ed514 pep8 2019-02-16 04:42:04 +11:00
Amber Brown d9235b9e29 fix appservice, add to frontend proxy 2019-02-16 04:39:49 +11:00
Amber Brown ce5f3b1ba5 add all the files 2019-02-16 04:35:58 +11:00
Amber Brown 7b5c04312e isort 2019-02-16 04:35:27 +11:00
Amber Brown f5bafd70f4 add cache remover endpoint and wire it up 2019-02-16 04:34:23 +11:00
Richard van der Hoff d97c3a6ce6 Merge remote-tracking branch 'origin/release-v0.99.1' into matrix-org-hotfixes 2019-02-13 14:29:05 +00:00
Erik Johnston 341c35614a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-02-13 10:29:31 +00:00
Richard van der Hoff fecf28319c Merge branch 'release-v0.99.0' into matrix-org-hotfixes 2019-02-01 13:29:31 +00:00
Richard van der Hoff 345d8cfb69 Merge branch 'release-v0.99.0' into matrix-org-hotfixes 2019-02-01 13:21:42 +00:00
Richard van der Hoff b60d005156 Merge branch 'develop' into matrix-org-hotfixes 2019-01-31 18:44:04 +00:00
Richard van der Hoff 6c232a69df Revert "Break infinite loop on redaction in v3 rooms"
We've got a better fix of this now.

This reverts commit decb5698b3.
2019-01-31 18:43:49 +00:00
Amber Brown e97c1df30c remove slow code on userdir (#4534) 2019-01-31 13:26:38 +00:00
Richard van der Hoff decb5698b3 Break infinite loop on redaction in v3 rooms 2019-01-31 00:23:58 +00:00
Erik Johnston 62962e30e4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 17:04:08 +00:00
Erik Johnston 05413d4e20 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 14:27:19 +00:00
Erik Johnston ca46dcf683 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 13:11:25 +00:00
Erik Johnston d351be1567 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-30 11:48:29 +00:00
Andrew Morgan c7f2eaf4f4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-29 10:07:13 +00:00
Andrew Morgan 53d25116df Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-25 14:33:14 +00:00
Andrew Morgan 08e25ffa0c Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-24 15:51:59 +00:00
Andrew Morgan 1c148e442b Merge branch 'anoa/room_dir_quick_fix' into matrix-org-hotfixes 2019-01-24 15:37:16 +00:00
Andrew Morgan acaca1b4e9 Merge branch 'anoa/room_dir_quick_fix' into matrix-org-hotfixes 2019-01-24 14:51:35 +00:00
Andrew Morgan 4777836b83 Fix missing synapse metrics import 2019-01-23 15:26:03 +00:00
Andrew Morgan 7da659dd6d Use existing stream position counter metric 2019-01-23 15:04:12 +00:00
Andrew Morgan 77dfe51aba Name metric consistently 2019-01-23 15:04:05 +00:00
Andrew Morgan ef7865e2f2 Track user_dir current event stream position 2019-01-23 15:03:54 +00:00
Matthew Hodgson 5cb15c0443 warn if we ignore device lists 2019-01-15 22:11:46 +00:00
Matthew Hodgson b43172ffbc Merge pull request #4396 from matrix-org/matthew/bodge_device_update_dos
limit remote device lists to 10000 entries per user
2019-01-15 21:47:00 +00:00
Matthew Hodgson b4796d1814 drop the limit to 1K as e2e will be hosed beyond that point anyway 2019-01-15 21:46:29 +00:00
Matthew Hodgson 482d06774a don't store remote device lists if they have more than 10K devices 2019-01-15 21:38:07 +00:00
Matthew Hodgson 046d731fbd limit remote device lists to 1000 entries per user 2019-01-15 21:07:12 +00:00
Richard van der Hoff 892f6c98ec Merge tag 'v0.34.1.1' into matrix-org-hotfixes
Synapse 0.34.1.1 (2019-01-11)
=============================

This release fixes CVE-2019-5885 and is recommended for all users of Synapse 0.34.1.

This release is compatible with Python 2.7 and 3.5+. Python 3.7 is fully supported.

Bugfixes
--------

- Fix spontaneous logout on upgrade
  ([\#4374](https://github.com/matrix-org/synapse/issues/4374))
2019-01-11 10:21:18 +00:00
Erik Johnston 7fafa2d954 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2019-01-09 09:13:16 +00:00
Richard van der Hoff 1d63046542 Merge tag 'v0.34.1rc1' into matrix-org-hotfixes
Synapse 0.34.1rc1 (2019-01-08)
==============================

Features
--------

- Special-case a support user for use in verifying behaviour of a given server. The support user does not appear in user directory or monthly active user counts. ([\#4141](https://github.com/matrix-org/synapse/issues/4141), [\#4344](https://github.com/matrix-org/synapse/issues/4344))
- Support for serving .well-known files ([\#4262](https://github.com/matrix-org/synapse/issues/4262))
- Rework SAML2 authentication ([\#4265](https://github.com/matrix-org/synapse/issues/4265), [\#4267](https://github.com/matrix-org/synapse/issues/4267))
- SAML2 authentication: Initialise user display name from SAML2 data ([\#4272](https://github.com/matrix-org/synapse/issues/4272))
- Synapse can now have its conditional/extra dependencies installed by pip. This functionality can be used by using `pip install matrix-synapse[feature]`, where feature is a comma separated list with the possible values `email.enable_notifs`, `matrix-synapse-ldap3`, `postgres`, `resources.consent`, `saml2`, `url_preview`, and `test`. If you want to install all optional dependencies, you can use "all" instead. ([\#4298](https://github.com/matrix-org/synapse/issues/4298), [\#4325](https://github.com/matrix-org/synapse/issues/4325), [\#4327](https://github.com/matrix-org/synapse/issues/4327))
- Add routes for reading account data. ([\#4303](https://github.com/matrix-org/synapse/issues/4303))
- Add opt-in support for v2 rooms ([\#4307](https://github.com/matrix-org/synapse/issues/4307))
- Add a script to generate a clean config file ([\#4315](https://github.com/matrix-org/synapse/issues/4315))
- Return server data in /login response ([\#4319](https://github.com/matrix-org/synapse/issues/4319))

Bugfixes
--------

- Fix contains_url check to be consistent with other instances in code-base and check that value is an instance of string. ([\#3405](https://github.com/matrix-org/synapse/issues/3405))
- Fix CAS login when username is not valid in an MXID ([\#4264](https://github.com/matrix-org/synapse/issues/4264))
- Send CORS headers for /media/config ([\#4279](https://github.com/matrix-org/synapse/issues/4279))
- Add 'sandbox' to CSP for media repository ([\#4284](https://github.com/matrix-org/synapse/issues/4284))
- Make the new landing page prettier. ([\#4294](https://github.com/matrix-org/synapse/issues/4294))
- Fix deleting E2E room keys when using old SQLite versions. ([\#4295](https://github.com/matrix-org/synapse/issues/4295))
- The metric synapse_admin_mau:current previously did not update when config.mau_stats_only was set to True ([\#4305](https://github.com/matrix-org/synapse/issues/4305))
- Fixed per-room account data filters ([\#4309](https://github.com/matrix-org/synapse/issues/4309))
- Fix indentation in default config ([\#4313](https://github.com/matrix-org/synapse/issues/4313))
- Fix synapse:latest docker upload ([\#4316](https://github.com/matrix-org/synapse/issues/4316))
- Fix test_metric.py compatibility with prometheus_client 0.5. Contributed by Maarten de Vries <maarten@de-vri.es>. ([\#4317](https://github.com/matrix-org/synapse/issues/4317))
- Avoid packaging _trial_temp directory in -py3 debian packages ([\#4326](https://github.com/matrix-org/synapse/issues/4326))
- Check jinja version for consent resource ([\#4327](https://github.com/matrix-org/synapse/issues/4327))
- fix NPE in /messages by checking if all events were filtered out ([\#4330](https://github.com/matrix-org/synapse/issues/4330))
- Fix `python -m synapse.config` on Python 3. ([\#4356](https://github.com/matrix-org/synapse/issues/4356))

Deprecations and Removals
-------------------------

- Remove the deprecated v1/register API on Python 2. It was never ported to Python 3. ([\#4334](https://github.com/matrix-org/synapse/issues/4334))

Internal Changes
----------------

- Getting URL previews of IP addresses no longer fails on Python 3. ([\#4215](https://github.com/matrix-org/synapse/issues/4215))
- drop undocumented dependency on dateutil ([\#4266](https://github.com/matrix-org/synapse/issues/4266))
- Update the example systemd config to use a virtualenv ([\#4273](https://github.com/matrix-org/synapse/issues/4273))
- Update link to kernel DCO guide ([\#4274](https://github.com/matrix-org/synapse/issues/4274))
- Make isort tox check print diff when it fails ([\#4283](https://github.com/matrix-org/synapse/issues/4283))
- Log room_id in Unknown room errors ([\#4297](https://github.com/matrix-org/synapse/issues/4297))
- Documentation improvements for coturn setup. Contributed by Krithin Sitaram. ([\#4333](https://github.com/matrix-org/synapse/issues/4333))
- Update pull request template to use absolute links ([\#4341](https://github.com/matrix-org/synapse/issues/4341))
- Update README to not lie about required restart when updating TLS certificates ([\#4343](https://github.com/matrix-org/synapse/issues/4343))
- Update debian packaging for compatibility with transitional package ([\#4349](https://github.com/matrix-org/synapse/issues/4349))
- Fix command hint to generate a config file when trying to start without a config file ([\#4353](https://github.com/matrix-org/synapse/issues/4353))
- Add better logging for unexpected errors while sending transactions ([\#4358](https://github.com/matrix-org/synapse/issues/4358))
2019-01-08 11:37:25 +00:00
Richard van der Hoff 4c238a9a91 Merge remote-tracking branch 'origin/release-v0.34.0' into matrix-org-hotfixes 2018-12-19 10:24:26 +00:00
Richard van der Hoff 002db39a36 Merge tag 'v0.34.0rc1' into matrix-org-hotfixes 2018-12-04 14:07:28 +00:00
Richard van der Hoff c4074e4ab6 Revert "Merge branch 'rav/timestamp_patch' into matrix-org-hotfixes"
This reverts commit 7960e814e5, reversing
changes made to 3dd704ee9a.

We no longer need this; please redo it as a proper MSC & synapse PR if you want
to keep it...
2018-12-03 10:15:39 +00:00
Richard van der Hoff 7960e814e5 Merge branch 'rav/timestamp_patch' into matrix-org-hotfixes 2018-11-30 12:10:30 +00:00
Richard van der Hoff 080025e533 Fix buglet and remove thread_id stuff 2018-11-30 12:09:33 +00:00
Richard van der Hoff 9accd63a38 Initial patch from Erik 2018-11-30 12:04:38 +00:00
Richard van der Hoff 3dd704ee9a Merge branch 'develop' into matrix-org-hotfixes 2018-11-20 11:29:45 +00:00
Richard van der Hoff 28e28a1974 Merge branch 'develop' into matrix-org-hotfixes 2018-11-20 11:03:35 +00:00
Richard van der Hoff b699178aa1 Merge branch 'develop' into matrix-org-hotfixes 2018-11-14 11:54:29 +00:00
Richard van der Hoff c08c649fa1 Merge remote-tracking branch 'origin/erikj/fix_device_comparison' into matrix-org-hotfixes 2018-11-08 12:48:19 +00:00
hera 5c0c4b4079 Fix encoding error for consent form on python3
The form was rendering this as "b'01234....'".

-- richvdh
2018-11-08 11:03:39 +00:00
Richard van der Hoff b55cdfaa31 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-11-08 10:47:56 +00:00
Richard van der Hoff 34406cf22c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-11-06 10:49:20 +00:00
Amber Brown f91aefd245 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-31 04:41:03 +11:00
Erik Johnston f8281f42c8 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-29 18:16:58 +00:00
Amber Brown 7171bdf279 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-29 23:14:47 +11:00
Erik Johnston 9f2d14ee26 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-26 09:52:23 +01:00
Amber Brown ead471e72d Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-22 22:18:02 +11:00
Richard van der Hoff 9a4011de46 Merge branch 'develop' into matrix-org-hotfixes 2018-10-18 16:37:01 +01:00
Amber Brown 33551be61b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-15 20:15:27 +11:00
Richard van der Hoff eeb29d99fd Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-09 09:49:08 +01:00
Richard van der Hoff 1a0c407e6b Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-10-09 09:47:37 +01:00
Erik Johnston c4b37cbf18 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-02 16:44:57 +01:00
Erik Johnston 7fa156af80 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-10-02 14:39:30 +01:00
Richard van der Hoff 78825f4f1c Merge branch 'develop' into matrix-org-hotfixes 2018-09-26 13:27:33 +01:00
Richard van der Hoff 6e15b5debe Revert "Actually set cache factors in workers"
This reverts commit e21c312e16.
2018-09-26 13:25:52 +01:00
Matthew Hodgson 2e0d2879d0 Merge branch 'develop' into matrix-org-hotfixes 2018-09-26 11:00:26 +01:00
Michael Kaye 128043072b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-24 11:20:10 +01:00
Erik Johnston b2fda9d20e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-20 11:00:14 +01:00
Erik Johnston 3c8c5eabc2 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-15 11:40:37 +01:00
Erik Johnston 2da2041e2e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-14 19:56:33 +01:00
Erik Johnston b5eef203f4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-14 18:25:55 +01:00
Erik Johnston df73da691f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-13 16:15:56 +01:00
Matthew Hodgson 30d054e0bb Merge branch 'develop' into matrix-org-hotfixes 2018-09-12 17:16:21 +01:00
Erik Johnston ebb3cc4ab6 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-12 11:22:06 +01:00
Erik Johnston 17201abd53 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-09-11 14:17:33 +01:00
Erik Johnston 2f141f4c41 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-22 11:47:08 +01:00
Richard van der Hoff 638c0bf49b Merge branch 'rav/fix_gdpr_consent' into matrix-org-hotfixes 2018-08-21 22:54:35 +01:00
hera d1065e6f51 Merge tag 'v0.33.3rc2' into matrix-org-hotfixes
Bugfixes
--------

- Fix bug in v0.33.3rc1 which caused infinite loops and OOMs
([\#3723](https://github.com/matrix-org/synapse/issues/3723))
2018-08-21 19:12:14 +00:00
Erik Johnston 567863127a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-20 13:34:47 +01:00
Erik Johnston f5abc10724 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-20 11:12:18 +01:00
Erik Johnston bb795b56da Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-16 15:51:16 +01:00
Erik Johnston 4dd0604f61 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-15 15:37:05 +01:00
Richard van der Hoff c05d278ba0 Merge branch 'rav/federation_metrics' into matrix-org-hotfixes 2018-08-07 19:11:29 +01:00
Erik Johnston 49a3163958 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-06 13:33:54 +01:00
Erik Johnston 1a568041fa Merge branch 'release-v0.33.1' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-08-02 15:28:32 +01:00
Erik Johnston c9db8b0c32 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-24 17:22:23 +01:00
Erik Johnston aa1bf10b91 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-24 15:49:38 +01:00
Erik Johnston 5222907bea Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 17:54:41 +01:00
Erik Johnston e1eb147f2a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 16:45:22 +01:00
hera e43eb47c5f Fixup limiter 2018-07-23 15:22:47 +00:00
hera 27eb4c45cd Lower hacky timeout for member limiter 2018-07-23 15:16:36 +00:00
Erik Johnston b136d7ff8f Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-23 16:09:40 +01:00
Richard van der Hoff 9e56e1ab30 Merge branch 'develop' into matrix-org-hotfixes 2018-07-19 16:40:28 +01:00
Erik Johnston 742f757337 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-19 10:26:13 +01:00
Richard van der Hoff 2f5dfe299c Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-07-17 15:26:47 +01:00
Erik Johnston e4eec87c6a Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-17 11:18:39 +01:00
Erik Johnston f793ff4571 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-17 10:04:33 +01:00
Richard van der Hoff 195aae2f16 Merge branch 'develop' into matrix-org-hotfixes 2018-07-12 12:09:25 +01:00
Erik Johnston 7c79f2cb72 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-12 09:59:58 +01:00
Richard van der Hoff f04e35c170 Merge branch 'develop' into matrix-org-hotfixes 2018-07-10 18:04:03 +01:00
Matthew Hodgson 36bbac05bd Merge branch 'develop' of git+ssh://github.com/matrix-org/synapse into matrix-org-hotfixes 2018-07-06 19:21:09 +01:00
Erik Johnston e2a4b7681e Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-07-05 10:29:32 +01:00
Erik Johnston 957944eee4 Merge pull request #3476 from matrix-org/erikj/timeout_memberships
Timeout membership requests after 90s
2018-07-03 10:18:39 +01:00
Erik Johnston bf425e533e Fix PEP8 2018-07-03 10:11:09 +01:00
Erik Johnston ca21957b8a Timeout membership requests after 90s
This is a hacky fix to try and stop in-flight requests from building up
2018-07-02 13:56:08 +01:00
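A minimal sketch of the technique (not the actual Synapse patch), using Twisted's `Deferred.addTimeout`; `do_membership_request` is a hypothetical stand-in for the real call:

```python
# Sketch only: cap an in-flight request at 90 seconds so slow requests
# don't pile up. do_membership_request() is a hypothetical stand-in that
# returns a Deferred which takes a long time to fire.
from twisted.internet import defer, reactor, task


def do_membership_request() -> defer.Deferred:
    return task.deferLater(reactor, 300, lambda: "joined")


def membership_request_with_timeout() -> defer.Deferred:
    d = do_membership_request()
    d.addTimeout(90, reactor)  # errback with a TimeoutError after 90s
    return d
```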
Erik Johnston 6a95270671 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-29 14:10:29 +01:00
hera 82781f5838 Merge remote-tracking branch 'origin/develop' into matrix-org-hotfixes 2018-06-28 21:09:28 +00:00
Matthew Hodgson aae6d3ff69 Merge remote-tracking branch 'origin/revert-3451-hawkowl/sorteddict-api' into matrix-org-hotfixes 2018-06-26 18:36:29 +01:00
Matthew Hodgson 9175225adf Merge remote-tracking branch 'origin/hawkowl/sorteddict-api' into matrix-org-hotfixes 2018-06-26 17:52:37 +01:00
David Baker 7a32fa0101 Fix error on deleting users pending deactivation
Use simple_delete instead of simple_delete_one as commented
2018-06-26 11:57:44 +01:00
Erik Johnston d46450195b Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 20:14:34 +01:00
Erik Johnston c0128c1021 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 20:12:13 +01:00
Erik Johnston 3320b7c9a4 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 15:23:18 +01:00
Erik Johnston 4c22c9b0b6 Merge branch 'develop' of github.com:matrix-org/synapse into matrix-org-hotfixes 2018-06-25 14:37:13 +01:00
Richard van der Hoff 6d6ea1bb40 Merge branch 'develop' into matrix-org-hotfixes 2018-06-22 16:35:37 +01:00
aphrodite 9e38981ae4 Send HTTP pushes direct to http-priv rather than via cloudflare
(This is a heinous hack that ought to be made more generic and pushed back to develop)
2018-06-22 15:58:15 +01:00
hera 463e7c2709 Lower member limiter 2018-06-22 15:58:15 +01:00
Richard van der Hoff ce9d0b1d0c Fix earlier logging patch
`@cached` doesn't work on decorated functions, because it uses inspection on
the target to calculate the number of arguments.
2018-06-22 15:58:15 +01:00
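A standalone illustration (not Synapse's `@cached` itself) of why argument inspection breaks on a plainly decorated function, and why preserving the wrapped function's metadata fixes it:

```python
# A plain wrapper hides the real argument list, so anything that counts
# arguments by inspection only sees *args/**kwargs; functools.wraps exposes
# the original signature again via __wrapped__.
import functools
import inspect


def plain_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


def preserving_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@plain_decorator
def get_users_in_room(self, room_id):
    ...


@preserving_decorator
def get_users_in_room_logged(self, room_id):
    ...


print(inspect.signature(get_users_in_room))         # (*args, **kwargs)
print(inspect.signature(get_users_in_room_logged))  # (self, room_id)
```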
Richard van der Hoff 80786d5caf Logging for get_users_in_room 2018-06-22 15:58:15 +01:00
Richard van der Hoff e18378c3e2 Increase member limiter to 20
Let's see if this makes the bridges go faster, or if it kills the synapse
master.
2018-06-22 15:58:15 +01:00
hera 0ca2857baa increase sync cache to 2 minutes
to give synchrotrons that are being hammered by repeated initial /syncs a better
chance to actually complete, and to avoid a DoS
2018-06-22 15:58:15 +01:00
Erik Johnston e21c312e16 Actually set cache factors in workers 2018-06-22 15:58:15 +01:00
Richard van der Hoff 1031bd25f8 Avoid doing presence updates on replication reconnect
Presence is supposed to be disabled on matrix.org, so we shouldn't send a load
of USER_SYNC commands every time the synchrotron reconnects to the master.
2018-06-22 15:58:15 +01:00
hera fae708c0e8 Disable auth on room_members for now
because the moznet bridge is broken (https://github.com/matrix-org/matrix-appservice-irc/issues/506)
2018-06-22 15:58:15 +01:00
Erik Johnston 8f8ea91eef Bump LAST_SEEN_GRANULARITY in client_ips 2018-06-22 15:58:15 +01:00
Erik Johnston 7a1406d144 Prefill client_ip_last_seen in replication 2018-06-22 15:58:15 +01:00
Erik Johnston 6373874833 Move event sending to end in shutdown room admin api 2018-06-22 15:58:15 +01:00
Erik Johnston a79823e64b Add dummy presence REST handler to frontend proxy
The handler no-ops all requests as presence is disabled.
2018-06-22 15:58:15 +01:00
Erik Johnston 1766a5fdc0 Increase MAX_EVENTS_BEHIND for replication clients 2018-06-22 15:58:14 +01:00
Erik Johnston e6b1ea3eb2 Disable presence in txn queue 2018-06-22 15:58:14 +01:00
Erik Johnston e5537cf983 Limit concurrent AS joins 2018-06-22 15:58:14 +01:00
Erik Johnston 43bb12e640 Disable presence
This reverts commit 0ebd376a53 and
disables presence a bit more
2018-06-22 15:58:14 +01:00
Erik Johnston 66dcbf47a3 Disable auto search for prefixes in event search 2018-06-22 15:58:14 +01:00
Erik Johnston a285fe05fd Add timeout to ResponseCache of /public_rooms 2018-06-22 15:58:14 +01:00
111 changed files with 4213 additions and 1393 deletions
+1 -1
@@ -7,4 +7,4 @@ if command -v yum &> /dev/null; then
fi
# Install a Rust toolchain
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal
-146
@@ -1,146 +0,0 @@
#!/usr/bin/env python
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
# Wraps `auditwheel repair` to first check if we're repairing a potentially abi3
# compatible wheel, if so rename the wheel before repairing it.
import argparse
import os
import subprocess
from zipfile import ZipFile

from packaging.tags import Tag
from packaging.utils import parse_wheel_filename
from packaging.version import Version


def check_is_abi3_compatible(wheel_file: str) -> None:
    """Check the contents of the built wheel for any `.so` files that are *not*
    abi3 compatible.
    """
    with ZipFile(wheel_file, "r") as wheel:
        for file in wheel.namelist():
            if not file.endswith(".so"):
                continue

            if not file.endswith(".abi3.so"):
                raise Exception(f"Found non-abi3 lib: {file}")


def cpython(wheel_file: str, name: str, version: Version, tag: Tag) -> str:
    """Replaces the cpython wheel file with a ABI3 compatible wheel"""
    if tag.abi == "abi3":
        # Nothing to do.
        return wheel_file

    check_is_abi3_compatible(wheel_file)

    # HACK: it seems that some older versions of pip will consider a wheel marked
    # as macosx_11_0 as incompatible with Big Sur. I haven't done the full archaeology
    # here; there are some clues in
    #     https://github.com/pantsbuild/pants/pull/12857
    #     https://github.com/pypa/pip/issues/9138
    #     https://github.com/pypa/packaging/pull/319
    # Empirically this seems to work, note that macOS 11 and 10.16 are the same,
    # both versions are valid for backwards compatibility.
    platform = tag.platform.replace("macosx_11_0", "macosx_10_16")
    abi3_tag = Tag(tag.interpreter, "abi3", platform)

    dirname = os.path.dirname(wheel_file)
    new_wheel_file = os.path.join(
        dirname,
        f"{name}-{version}-{abi3_tag}.whl",
    )

    os.rename(wheel_file, new_wheel_file)

    print("Renamed wheel to", new_wheel_file)

    return new_wheel_file


def main(wheel_file: str, dest_dir: str, archs: str | None) -> None:
    """Entry point"""
    # Parse the wheel file name into its parts. Note that `parse_wheel_filename`
    # normalizes the package name (i.e. it converts matrix_synapse ->
    # matrix-synapse), which is not what we want.
    _, version, build, tags = parse_wheel_filename(os.path.basename(wheel_file))
    name = os.path.basename(wheel_file).split("-")[0]

    if len(tags) != 1:
        # We expect only a wheel file with only a single tag
        raise Exception(f"Unexpectedly found multiple tags: {tags}")
    tag = next(iter(tags))

    if build:
        # We don't use build tags in Synapse
        raise Exception(f"Unexpected build tag: {build}")

    # If the wheel is for cpython then convert it into an abi3 wheel.
    if tag.interpreter.startswith("cp"):
        wheel_file = cpython(wheel_file, name, version, tag)

    # Finally, repair the wheel.
    if archs is not None:
        # If we are given archs then we are on macos and need to use
        # `delocate-listdeps`.
        subprocess.run(["delocate-listdeps", wheel_file], check=True)
        subprocess.run(
            ["delocate-wheel", "--require-archs", archs, "-w", dest_dir, wheel_file],
            check=True,
        )
    else:
        subprocess.run(["auditwheel", "repair", "-w", dest_dir, wheel_file], check=True)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Tag wheel as abi3 and repair it.")

    parser.add_argument(
        "--wheel-dir",
        "-w",
        metavar="WHEEL_DIR",
        help="Directory to store delocated wheels",
        required=True,
    )

    parser.add_argument(
        "--require-archs",
        metavar="archs",
        default=None,
    )

    parser.add_argument(
        "wheel_file",
        metavar="WHEEL_FILE",
    )

    args = parser.parse_args()
    wheel_file = args.wheel_file
    wheel_dir = args.wheel_dir
    archs = args.require_archs

    main(wheel_file, wheel_dir, archs)
-39
@@ -1,39 +0,0 @@
#!/usr/bin/env bash
# this script is run by GitHub Actions in a plain `jammy` container; it
# - installs the minimal system requirements, and poetry;
# - patches the project definition file to refer to old versions only;
# - creates a venv with these old versions using poetry; and finally
# - invokes `trial` to run the tests with old deps.
set -ex
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
# TODO: in the future, we could use an implementation of
# https://github.com/python-poetry/poetry/issues/3527
# https://github.com/pypa/pip/issues/8085
# to select the lowest possible versions, rather than resorting to this sed script.
# Patch the project definitions in-place:
# - `-E` use extended regex syntax.
# - Don't modify the line that defines required Python versions.
# - Replace all lower and tilde bounds with exact bounds.
# - Replace all caret bounds with exact bounds.
# - Delete all lines referring to psycopg2 - so no testing of postgres support.
# - Use pyopenssl 17.0, which is the oldest version that works with
# a `cryptography` compiled against OpenSSL 1.1.
# - Omit systemd: we're not logging to journal here.
sed -i -E '
/^\s*requires-python\s*=/b
s/[~>]=/==/g
s/\^/==/g
/psycopg2/d
s/pyOpenSSL\s*==\s*16\.0\.0"/pyOpenSSL==17.0.0"/
/systemd/d
' pyproject.toml
echo "::group::Patched pyproject.toml"
cat pyproject.toml
echo "::endgroup::"
+69
@@ -1,23 +1,92 @@
version: 2
# As dependabot is currently only run on a weekly basis, we raise the
# open-pull-requests-limit to 10 (from the default of 5) to better ensure we
# don't continuously grow a backlog of updates.
updates:
  - # "pip" is the correct setting for poetry, per https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem
    package-ecosystem: "pip"
    directory: "/"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # Group patch updates to packages together into a single PR, as they rarely
    # if ever contain breaking changes that need to be reviewed separately.
    #
    # Less PRs means a streamlined review process.
    #
    # Python packages follow semantic versioning, and tend to only introduce
    # breaking changes in major version bumps. Thus, we'll group minor and patch
    # versions together.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    # Prevent pulling packages that were recently updated to help mitigate
    # supply chain attacks. 14 days was taken from the recommendation at
    # https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns
    # where the author noted that 9/10 attacks would have been mitigated by a
    # two week cooldown.
    #
    # The cooldown only applies to general updates; security updates will still
    # be pulled in as soon as possible.
    cooldown:
      default-days: 14

  - package-ecosystem: "docker"
    directory: "/docker"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # For container versions, breaking changes are also typically only introduced in major
    # package bumps.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    cooldown:
      default-days: 14

  - package-ecosystem: "github-actions"
    directory: "/"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # Similarly for GitHub Actions, breaking changes are typically only introduced in major
    # package bumps.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    cooldown:
      default-days: 14

  - package-ecosystem: "cargo"
    directory: "/"
    open-pull-requests-limit: 10
    versioning-strategy: "lockfile-only"
    schedule:
      interval: "weekly"
    # The Rust ecosystem is special in that breaking changes are often introduced
    # in minor version bumps, as packages typically stay pre-1.0 for a long time.
    # Thus we specifically keep minor version bumps separate in their own PRs.
    groups:
      patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "patch"
    cooldown:
      default-days: 14
-34
@@ -1,34 +0,0 @@
name: Deploy documentation PR preview

on:
  workflow_run:
    workflows: [ "Prepare documentation PR preview" ]
    types:
      - completed

jobs:
  netlify:
    if: github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.event == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      # There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
      # (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
      - name: 📥 Download artifact
        uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
        with:
          workflow: docs-pr.yaml
          run_id: ${{ github.event.workflow_run.id }}
          name: book
          path: book

      - name: 📤 Deploy to Netlify
        uses: matrix-org/netlify-pr-preview@9805cd123fc9a7e421e35340a05e1ebc5dee46b5 # v3
        with:
          path: book
          owner: ${{ github.event.workflow_run.head_repository.owner.login }}
          branch: ${{ github.event.workflow_run.head_branch }}
          revision: ${{ github.event.workflow_run.head_sha }}
          token: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          site_id: ${{ secrets.NETLIFY_SITE_ID }}
          desc: Documentation preview
          deployment_env: PR Documentation Preview
+1 -1
@@ -5,7 +5,7 @@ name: Build release artifacts
on:
# we build on PRs and develop to (hopefully) get early warning
# of things breaking (but only build one set of debs). PRs skip
# building wheels on macOS & ARM.
# building wheels on ARM.
pull_request:
push:
branches: ["develop", "release-*"]
+6 -8
@@ -452,14 +452,12 @@ jobs:
python-version: '3.10'
- name: Prepare old deps
if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
run: .ci/scripts/prepare_old_deps.sh
# Note: we install using `pip` here, not poetry. `poetry install` ignores the
# build-system section (https://github.com/python-poetry/poetry/issues/6154), but
# we explicitly want to test that you can `pip install` using the oldest version
# of poetry-core and setuptools-rust.
- run: pip install .[all,test]
# Note: we install using `uv` here, not poetry or pip to allow us to test with the
# minimum version of all dependencies, both those explicitly specified and those
# implicitly brought in by the explicit dependencies.
run: |
pip install uv
uv pip install --system --resolution=lowest .[all,test]
# We nuke the local copy, as we've installed synapse into the virtualenv
# (rather than use an editable install, which we no longer support). If we
+1
@@ -0,0 +1 @@
Upgrade locked dependency on Twisted to 24.7.0rc1.
+1
@@ -0,0 +1 @@
Server admins will see [soft failed](https://spec.matrix.org/v1.13/server-server-api/#soft-failure) events over the Client-Server API.
+1
@@ -0,0 +1 @@
Group together dependabot update PRs to reduce the review load.
+1
@@ -0,0 +1 @@
Fix `HomeServer.shutdown()` failing if the homeserver hasn't been set up yet.
+1
@@ -0,0 +1 @@
Fix a sliding sync performance slowdown for long-lived connections.
+1
@@ -0,0 +1 @@
Respond with useful error codes when `Content-Length` header(s) are invalid.
+1
@@ -0,0 +1 @@
Fix a bug where Mastodon posts (and possibly other embeds) have the wrong description for URL previews.
+1
@@ -0,0 +1 @@
Fix `HomeServer.shutdown()` failing if the homeserver failed to `start`.
+1
@@ -0,0 +1 @@
Switch the build backend from `poetry-core` to `maturin`.
+1
@@ -0,0 +1 @@
Raise the limit for concurrently-open non-security @dependabot PRs from 5 to 10.
+1
@@ -0,0 +1 @@
Remove the "Updates to locked dependencies" section from the changelog due to lack of use and the maintenance burden.
+1
@@ -0,0 +1 @@
Require 14 days to pass before pulling in general dependency updates to help mitigate upstream supply chain attacks.
+1
@@ -0,0 +1 @@
Add `memberships` endpoint to the admin API. This is useful for forensics and T&S purposes.
+1
@@ -0,0 +1 @@
Drop the broken netlify documentation workflow until a new one is implemented.
+1
@@ -0,0 +1 @@
Fix bug where `Duration` was logged incorrectly.
+1
@@ -0,0 +1 @@
Add an admin API for retrieving a paginated list of quarantined media.
+1
@@ -0,0 +1 @@
Document the importance of `public_baseurl` when configuring OpenID Connect authentication.
+1
@@ -0,0 +1 @@
Fix bug introduced in 1.143.0 that broke support for versions of `zope-interface` older than 6.2.
+1
@@ -0,0 +1 @@
Server admins can bypass the quarantine media check when downloading media by setting the `admin_unsafely_bypass_quarantine` query parameter to `true` on Client-Server API media download requests.
+1
@@ -0,0 +1 @@
Don't include debug logs in `Clock` unless explicitly enabled.
+1
@@ -0,0 +1 @@
Implemented pagination for the [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) mutual rooms endpoint. Contributed by @tulir @ Beeper.
+1
@@ -0,0 +1 @@
Admin API: add worker support to `GET /_synapse/admin/v2/users/<user_id>`.
+1
@@ -0,0 +1 @@
Use `uv` to test olddeps to ensure all transitive dependencies use minimum versions.
+1
@@ -0,0 +1 @@
Improve proxy support for the `federation_client.py` dev script. Contributed by Denis Kasak (@dkasak).
+1
@@ -0,0 +1 @@
Unpin the version of Rust we use to build Synapse wheels (was 1.82.0) now that macOS support has been dropped.
+41
@@ -73,6 +73,33 @@ Response:
}
```
## Listing all quarantined media
This API returns a list of all quarantined media on the server. It is paginated, and can be scoped to either local or
remote media. Note that the pagination values are scoped to the other request parameters - changing those parameters
while keeping the same pagination values will produce unexpected results.
Request:
```http
GET /_synapse/admin/v1/media/quarantined?from=0&limit=100&kind=local
```
`from` and `limit` are optional parameters, and default to `0` and `100` respectively. They are the row index and number
of rows to return - they are not timestamps.
`kind` *MUST* either be `local` or `remote`.
The API returns a JSON body containing MXC URIs for the quarantined media, like the following:
```json
{
  "media": [
    "mxc://localhost/xwvutsrqponmlkjihgfedcba",
    "mxc://localhost/abcdefghijklmnopqrstuvwx"
  ]
}
```
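As a sketch of driving this row-index pagination from a script (the homeserver URL and admin token are placeholders, and stopping on a short page is an assumption rather than documented behaviour):

```python
# Sketch: page through the quarantined-media admin API using the `from`/`limit`
# row-index parameters documented above. URL and token are placeholders.
import requests

BASE = "https://homeserver.example.com"
HEADERS = {"Authorization": "Bearer <admin access token>"}


def list_quarantined(kind: str = "local", limit: int = 100) -> list[str]:
    media: list[str] = []
    offset = 0
    while True:
        resp = requests.get(
            f"{BASE}/_synapse/admin/v1/media/quarantined",
            params={"from": offset, "limit": limit, "kind": kind},
            headers=HEADERS,
        )
        resp.raise_for_status()
        page = resp.json()["media"]
        media.extend(page)
        if len(page) < limit:
            # Assumption: a short page means we've reached the end.
            break
        offset += limit
    return media
```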
# Quarantine media
Quarantining media means that it is marked as inaccessible by users. It applies
@@ -88,6 +115,20 @@ is quarantined, Synapse will:
- Quarantine any existing cached remote media.
- Quarantine any future remote media.
## Downloading quarantined media
Normally, when media is quarantined, it will return a 404 error when downloaded.
Admins can bypass this by adding `?admin_unsafely_bypass_quarantine=true`
to the [normal download URL](https://spec.matrix.org/v1.16/client-server-api/#get_matrixclientv1mediadownloadservernamemediaid).
Bypassing the quarantine check is not recommended. Media is typically quarantined
to prevent harmful content from being served to users, which includes admins. Only
set the bypass parameter if you intentionally want to access potentially harmful
content.
Non-admin users cannot bypass quarantine checks, even when specifying the above
query parameter.
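As a short sketch of what the bypass looks like in practice (the homeserver URL, admin token and media ID are placeholders; the path is the standard Client-Server media download endpoint):

```python
# Sketch: download quarantined media as a server admin by setting the
# admin_unsafely_bypass_quarantine query parameter. All values are placeholders.
import requests

BASE = "https://homeserver.example.com"
HEADERS = {"Authorization": "Bearer <admin access token>"}

resp = requests.get(
    f"{BASE}/_matrix/client/v1/media/download/localhost/abcdefghijklmnopqrstuvwx",
    params={"admin_unsafely_bypass_quarantine": "true"},
    headers=HEADERS,
)
resp.raise_for_status()  # without admin rights or the bypass this would 404

with open("quarantined_media.bin", "wb") as f:
    f.write(resp.content)
```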
## Quarantining media by ID
This API quarantines a single piece of local or remote media.
+49
@@ -505,6 +505,55 @@ with a body of:
}
```
## List room memberships of a user
Gets a list of room memberships for a specific `user_id`. This
endpoint differs from
[`GET /_synapse/admin/v1/users/<user_id>/joined_rooms`](#list-joined-rooms-of-a-user)
in that it returns rooms with memberships other than "join".
The API is:
```
GET /_synapse/admin/v1/users/<user_id>/memberships
```
A response body like the following is returned:
```json
{
  "memberships": {
    "!DuGcnbhHGaSZQoNQR:matrix.org": "join",
    "!ZtSaPCawyWtxfWiIy:matrix.org": "leave"
  }
}
```
which is a list of room membership states for the given user. This endpoint can
be used with both local and remote users, with the caveat that the homeserver will
only be aware of the memberships for rooms that one of its local users has joined.
Remote user memberships may also be out of date if all local users have since left
a room. The homeserver will thus no longer receive membership updates about it.
The list includes rooms that the user has since left; other membership states (knock,
invite, etc.) are also possible.
Note that rooms will only disappear from this list if they are
[purged](./rooms.md#delete-room-api) from the homeserver.
**Parameters**
The following parameters should be set in the URL:
- `user_id` - fully qualified: for example, `@user:server.com`.
**Response**
The following fields are returned in the JSON response body:
- `memberships` - A map of `room_id` (string) to `membership` state (string).
## List joined rooms of a user
Gets a list of all `room_id` that a specific `user_id` is joined to and is a member of (participating in).
+5
@@ -50,6 +50,11 @@ setting in your configuration file.
See the [configuration manual](usage/configuration/config_documentation.md#oidc_providers) for some sample settings, as well as
the text below for example configurations for specific providers.
For setups using [`.well-known` delegation](delegate.md), make sure
[`public_baseurl`](usage/configuration/config_documentation.md#public_baseurl) is set
appropriately. If unset, Synapse defaults to `https://<server_name>/` which is used in
the OIDC callback URL.
## OIDC Back-Channel Logout
Synapse supports receiving [OpenID Connect Back-Channel Logout](https://openid.net/specs/openid-connect-backchannel-1_0.html) notifications.
+5
@@ -255,6 +255,8 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$
^/_matrix/client/(r0|v3|unstable)/capabilities$
^/_matrix/client/(r0|v3|unstable)/notifications$
# Admin API requests
^/_synapse/admin/v1/rooms/[^/]+$
# Encryption requests
@@ -300,6 +302,9 @@ Additionally, the following REST endpoints can be handled for GET requests:
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# Admin API requests
^/_synapse/admin/v2/users/[^/]+$
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
Generated
+398 -218
@@ -31,7 +31,7 @@ description = "The ultimate Python library in building OAuth and OpenID Connect
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"jwt\" or extra == \"oidc\""
markers = "extra == \"oidc\" or extra == \"jwt\" or extra == \"all\""
files = [
{file = "authlib-1.6.5-py2.py3-none-any.whl", hash = "sha256:3e0e0507807f842b02175507bdee8957a1d5707fd4afb17c32fb43fee90b6e3a"},
{file = "authlib-1.6.5.tar.gz", hash = "sha256:6aaf9c79b7cc96c900f0b284061691c5d4e61221640a948fe690b556a6d6d10b"},
@@ -134,14 +134,14 @@ typecheck = ["mypy"]
[[package]]
name = "bleach"
version = "6.2.0"
version = "6.3.0"
description = "An easy safelist-based HTML-sanitizing tool."
optional = false
python-versions = ">=3.9"
python-versions = ">=3.10"
groups = ["main", "dev"]
files = [
{file = "bleach-6.2.0-py3-none-any.whl", hash = "sha256:117d9c6097a7c3d22fd578fcd8d35ff1e125df6736f554da4e432fdd63f31e5e"},
{file = "bleach-6.2.0.tar.gz", hash = "sha256:123e894118b8a599fd80d3ec1a6d4cc7ce4e5882b1317a7e1ba69b56e95f991f"},
{file = "bleach-6.3.0-py3-none-any.whl", hash = "sha256:fe10ec77c93ddf3d13a73b035abaac7a9f5e436513864ccdad516693213c65d6"},
{file = "bleach-6.3.0.tar.gz", hash = "sha256:6f3b91b1c0a02bb9a78b5a454c92506aa0fdf197e1d5e114d2e00c6f64306d22"},
]
[package.dependencies]
@@ -176,83 +176,100 @@ files = [
[[package]]
name = "cffi"
version = "1.17.1"
version = "2.0.0"
description = "Foreign Function Interface for Python calling C code."
optional = false
python-versions = ">=3.8"
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "cffi-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:df8b1c11f177bc2313ec4b2d46baec87a5f3e71fc8b45dab2ee7cae86d9aba14"},
{file = "cffi-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:8f2cdc858323644ab277e9bb925ad72ae0e67f69e804f4898c070998d50b1a67"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:edae79245293e15384b51f88b00613ba9f7198016a5948b5dddf4917d4d26382"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:45398b671ac6d70e67da8e4224a065cec6a93541bb7aebe1b198a61b58c7b702"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ad9413ccdeda48c5afdae7e4fa2192157e991ff761e7ab8fdd8926f40b160cc3"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5da5719280082ac6bd9aa7becb3938dc9f9cbd57fac7d2871717b1feb0902ab6"},
{file = "cffi-1.17.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bb1a08b8008b281856e5971307cc386a8e9c5b625ac297e853d36da6efe9c17"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:045d61c734659cc045141be4bae381a41d89b741f795af1dd018bfb532fd0df8"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:6883e737d7d9e4899a8a695e00ec36bd4e5e4f18fabe0aca0efe0a4b44cdb13e"},
{file = "cffi-1.17.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6b8b4a92e1c65048ff98cfe1f735ef8f1ceb72e3d5f0c25fdb12087a23da22be"},
{file = "cffi-1.17.1-cp310-cp310-win32.whl", hash = "sha256:c9c3d058ebabb74db66e431095118094d06abf53284d9c81f27300d0e0d8bc7c"},
{file = "cffi-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:0f048dcf80db46f0098ccac01132761580d28e28bc0f78ae0d58048063317e15"},
{file = "cffi-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a45e3c6913c5b87b3ff120dcdc03f6131fa0065027d0ed7ee6190736a74cd401"},
{file = "cffi-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:30c5e0cb5ae493c04c8b42916e52ca38079f1b235c2f8ae5f4527b963c401caf"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f75c7ab1f9e4aca5414ed4d8e5c0e303a34f4421f8a0d47a4d019ceff0ab6af4"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a1ed2dd2972641495a3ec98445e09766f077aee98a1c896dcb4ad0d303628e41"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:46bf43160c1a35f7ec506d254e5c890f3c03648a4dbac12d624e4490a7046cd1"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:a24ed04c8ffd54b0729c07cee15a81d964e6fee0e3d4d342a27b020d22959dc6"},
{file = "cffi-1.17.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:610faea79c43e44c71e1ec53a554553fa22321b65fae24889706c0a84d4ad86d"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:a9b15d491f3ad5d692e11f6b71f7857e7835eb677955c00cc0aefcd0669adaf6"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:de2ea4b5833625383e464549fec1bc395c1bdeeb5f25c4a3a82b5a8c756ec22f"},
{file = "cffi-1.17.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:fc48c783f9c87e60831201f2cce7f3b2e4846bf4d8728eabe54d60700b318a0b"},
{file = "cffi-1.17.1-cp311-cp311-win32.whl", hash = "sha256:85a950a4ac9c359340d5963966e3e0a94a676bd6245a4b55bc43949eee26a655"},
{file = "cffi-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:caaf0640ef5f5517f49bc275eca1406b0ffa6aa184892812030f04c2abf589a0"},
{file = "cffi-1.17.1-cp312-cp312-macosx_10_9_x86_64.whl", hash = "sha256:805b4371bf7197c329fcb3ead37e710d1bca9da5d583f5073b799d5c5bd1eee4"},
{file = "cffi-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:733e99bc2df47476e3848417c5a4540522f234dfd4ef3ab7fafdf555b082ec0c"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1257bdabf294dceb59f5e70c64a3e2f462c30c7ad68092d01bbbfb1c16b1ba36"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:da95af8214998d77a98cc14e3a3bd00aa191526343078b530ceb0bd710fb48a5"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:d63afe322132c194cf832bfec0dc69a99fb9bb6bbd550f161a49e9e855cc78ff"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f79fc4fc25f1c8698ff97788206bb3c2598949bfe0fef03d299eb1b5356ada99"},
{file = "cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:386c8bf53c502fff58903061338ce4f4950cbdcb23e2902d86c0f722b786bbe3"},
{file = "cffi-1.17.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:4ceb10419a9adf4460ea14cfd6bc43d08701f0835e979bf821052f1805850fe8"},
{file = "cffi-1.17.1-cp312-cp312-win32.whl", hash = "sha256:a08d7e755f8ed21095a310a693525137cfe756ce62d066e53f502a83dc550f65"},
{file = "cffi-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903"},
{file = "cffi-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f3a2b4222ce6b60e2e8b337bb9596923045681d71e5a082783484d845390938e"},
{file = "cffi-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:0984a4925a435b1da406122d4d7968dd861c1385afe3b45ba82b750f229811e2"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d01b12eeeb4427d3110de311e1774046ad344f5b1a7403101878976ecd7a10f3"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:706510fe141c86a69c8ddc029c7910003a17353970cff3b904ff0686a5927683"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:de55b766c7aa2e2a3092c51e0483d700341182f08e67c63630d5b6f200bb28e5"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:c59d6e989d07460165cc5ad3c61f9fd8f1b4796eacbd81cee78957842b834af4"},
{file = "cffi-1.17.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dd398dbc6773384a17fe0d3e7eeb8d1a21c2200473ee6806bb5e6a8e62bb73dd"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:3edc8d958eb099c634dace3c7e16560ae474aa3803a5df240542b305d14e14ed"},
{file = "cffi-1.17.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:72e72408cad3d5419375fc87d289076ee319835bdfa2caad331e377589aebba9"},
{file = "cffi-1.17.1-cp313-cp313-win32.whl", hash = "sha256:e03eab0a8677fa80d646b5ddece1cbeaf556c313dcfac435ba11f107ba117b5d"},
{file = "cffi-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:f6a16c31041f09ead72d69f583767292f750d24913dadacf5756b966aacb3f1a"},
{file = "cffi-1.17.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:636062ea65bd0195bc012fea9321aca499c0504409f413dc88af450b57ffd03b"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c7eac2ef9b63c79431bc4b25f1cd649d7f061a28808cbc6c47b534bd789ef964"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e221cf152cff04059d011ee126477f0d9588303eb57e88923578ace7baad17f9"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:31000ec67d4221a71bd3f67df918b1f88f676f1c3b535a7eb473255fdc0b83fc"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:6f17be4345073b0a7b8ea599688f692ac3ef23ce28e5df79c04de519dbc4912c"},
{file = "cffi-1.17.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0e2b1fac190ae3ebfe37b979cc1ce69c81f4e4fe5746bb401dca63a9062cdaf1"},
{file = "cffi-1.17.1-cp38-cp38-win32.whl", hash = "sha256:7596d6620d3fa590f677e9ee430df2958d2d6d6de2feeae5b20e82c00b76fbf8"},
{file = "cffi-1.17.1-cp38-cp38-win_amd64.whl", hash = "sha256:78122be759c3f8a014ce010908ae03364d00a1f81ab5c7f4a7a5120607ea56e1"},
{file = "cffi-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:b2ab587605f4ba0bf81dc0cb08a41bd1c0a5906bd59243d56bad7668a6fc6c16"},
{file = "cffi-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:28b16024becceed8c6dfbc75629e27788d8a3f9030691a1dbf9821a128b22c36"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1d599671f396c4723d016dbddb72fe8e0397082b0a77a4fab8028923bec050e8"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ca74b8dbe6e8e8263c0ffd60277de77dcee6c837a3d0881d8c1ead7268c9e576"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:f7f5baafcc48261359e14bcd6d9bff6d4b28d9103847c9e136694cb0501aef87"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:98e3969bcff97cae1b2def8ba499ea3d6f31ddfdb7635374834cf89a1a08ecf0"},
{file = "cffi-1.17.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cdf5ce3acdfd1661132f2a9c19cac174758dc2352bfe37d98aa7512c6b7178b3"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9755e4345d1ec879e3849e62222a18c7174d65a6a92d5b346b1863912168b595"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:f1e22e8c4419538cb197e4dd60acc919d7696e5ef98ee4da4e01d3f8cfa4cc5a"},
{file = "cffi-1.17.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:c03e868a0b3bc35839ba98e74211ed2b05d2119be4e8a0f224fba9384f1fe02e"},
{file = "cffi-1.17.1-cp39-cp39-win32.whl", hash = "sha256:e31ae45bc2e29f6b2abd0de1cc3b9d5205aa847cafaecb8af1476a609a2f6eb7"},
{file = "cffi-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d016c76bdd850f3c626af19b0542c9677ba156e4ee4fccfdd7848803533ef662"},
{file = "cffi-1.17.1.tar.gz", hash = "sha256:1c39c6016c32bc48dd54561950ebd6836e1670f2ae46128f67cf49e789c52824"},
{file = "cffi-2.0.0-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:0cf2d91ecc3fcc0625c2c530fe004f82c110405f101548512cce44322fa8ac44"},
{file = "cffi-2.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f73b96c41e3b2adedc34a7356e64c8eb96e03a3782b535e043a986276ce12a49"},
{file = "cffi-2.0.0-cp310-cp310-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:53f77cbe57044e88bbd5ed26ac1d0514d2acf0591dd6bb02a3ae37f76811b80c"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3e837e369566884707ddaf85fc1744b47575005c0a229de3327f8f9a20f4efeb"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:5eda85d6d1879e692d546a078b44251cdd08dd1cfb98dfb77b670c97cee49ea0"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9332088d75dc3241c702d852d4671613136d90fa6881da7d770a483fd05248b4"},
{file = "cffi-2.0.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:fc7de24befaeae77ba923797c7c87834c73648a05a4bde34b3b7e5588973a453"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:cf364028c016c03078a23b503f02058f1814320a56ad535686f90565636a9495"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e11e82b744887154b182fd3e7e8512418446501191994dbf9c9fc1f32cc8efd5"},
{file = "cffi-2.0.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:8ea985900c5c95ce9db1745f7933eeef5d314f0565b27625d9a10ec9881e1bfb"},
{file = "cffi-2.0.0-cp310-cp310-win32.whl", hash = "sha256:1f72fb8906754ac8a2cc3f9f5aaa298070652a0ffae577e0ea9bd480dc3c931a"},
{file = "cffi-2.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:b18a3ed7d5b3bd8d9ef7a8cb226502c6bf8308df1525e1cc676c3680e7176739"},
{file = "cffi-2.0.0-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:b4c854ef3adc177950a8dfc81a86f5115d2abd545751a304c5bcf2c2c7283cfe"},
{file = "cffi-2.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:2de9a304e27f7596cd03d16f1b7c72219bd944e99cc52b84d0145aefb07cbd3c"},
{file = "cffi-2.0.0-cp311-cp311-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:baf5215e0ab74c16e2dd324e8ec067ef59e41125d3eade2b863d294fd5035c92"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:730cacb21e1bdff3ce90babf007d0a0917cc3e6492f336c2f0134101e0944f93"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:6824f87845e3396029f3820c206e459ccc91760e8fa24422f8b0c3d1731cbec5"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:9de40a7b0323d889cf8d23d1ef214f565ab154443c42737dfe52ff82cf857664"},
{file = "cffi-2.0.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:8941aaadaf67246224cee8c3803777eed332a19d909b47e29c9842ef1e79ac26"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a05d0c237b3349096d3981b727493e22147f934b20f6f125a3eba8f994bec4a9"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:94698a9c5f91f9d138526b48fe26a199609544591f859c870d477351dc7b2414"},
{file = "cffi-2.0.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:5fed36fccc0612a53f1d4d9a816b50a36702c28a2aa880cb8a122b3466638743"},
{file = "cffi-2.0.0-cp311-cp311-win32.whl", hash = "sha256:c649e3a33450ec82378822b3dad03cc228b8f5963c0c12fc3b1e0ab940f768a5"},
{file = "cffi-2.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:66f011380d0e49ed280c789fbd08ff0d40968ee7b665575489afa95c98196ab5"},
{file = "cffi-2.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:c6638687455baf640e37344fe26d37c404db8b80d037c3d29f58fe8d1c3b194d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:6d02d6655b0e54f54c4ef0b94eb6be0607b70853c45ce98bd278dc7de718be5d"},
{file = "cffi-2.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8eca2a813c1cb7ad4fb74d368c2ffbbb4789d377ee5bb8df98373c2cc0dee76c"},
{file = "cffi-2.0.0-cp312-cp312-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:21d1152871b019407d8ac3985f6775c079416c282e431a4da6afe7aefd2bccbe"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:b21e08af67b8a103c71a250401c78d5e0893beff75e28c53c98f4de42f774062"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:1e3a615586f05fc4065a8b22b8152f0c1b00cdbc60596d187c2a74f9e3036e4e"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:81afed14892743bbe14dacb9e36d9e0e504cd204e0b165062c488942b9718037"},
{file = "cffi-2.0.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3e17ed538242334bf70832644a32a7aae3d83b57567f9fd60a26257e992b79ba"},
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3925dd22fa2b7699ed2617149842d2e6adde22b262fcbfada50e3d195e4b3a94"},
{file = "cffi-2.0.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:2c8f814d84194c9ea681642fd164267891702542f028a15fc97d4674b6206187"},
{file = "cffi-2.0.0-cp312-cp312-win32.whl", hash = "sha256:da902562c3e9c550df360bfa53c035b2f241fed6d9aef119048073680ace4a18"},
{file = "cffi-2.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:da68248800ad6320861f129cd9c1bf96ca849a2771a59e0344e88681905916f5"},
{file = "cffi-2.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:4671d9dd5ec934cb9a73e7ee9676f9362aba54f7f34910956b84d727b0d73fb6"},
{file = "cffi-2.0.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:00bdf7acc5f795150faa6957054fbbca2439db2f775ce831222b66f192f03beb"},
{file = "cffi-2.0.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:45d5e886156860dc35862657e1494b9bae8dfa63bf56796f2fb56e1679fc0bca"},
{file = "cffi-2.0.0-cp313-cp313-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:07b271772c100085dd28b74fa0cd81c8fb1a3ba18b21e03d7c27f3436a10606b"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:d48a880098c96020b02d5a1f7d9251308510ce8858940e6fa99ece33f610838b"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:f93fd8e5c8c0a4aa1f424d6173f14a892044054871c771f8566e4008eaa359d2"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:dd4f05f54a52fb558f1ba9f528228066954fee3ebe629fc1660d874d040ae5a3"},
{file = "cffi-2.0.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:c8d3b5532fc71b7a77c09192b4a5a200ea992702734a2e9279a37f2478236f26"},
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d9b29c1f0ae438d5ee9acb31cadee00a58c46cc9c0b2f9038c6b0b3470877a8c"},
{file = "cffi-2.0.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:6d50360be4546678fc1b79ffe7a66265e28667840010348dd69a314145807a1b"},
{file = "cffi-2.0.0-cp313-cp313-win32.whl", hash = "sha256:74a03b9698e198d47562765773b4a8309919089150a0bb17d829ad7b44b60d27"},
{file = "cffi-2.0.0-cp313-cp313-win_amd64.whl", hash = "sha256:19f705ada2530c1167abacb171925dd886168931e0a7b78f5bffcae5c6b5be75"},
{file = "cffi-2.0.0-cp313-cp313-win_arm64.whl", hash = "sha256:256f80b80ca3853f90c21b23ee78cd008713787b1b1e93eae9f3d6a7134abd91"},
{file = "cffi-2.0.0-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:fc33c5141b55ed366cfaad382df24fe7dcbc686de5be719b207bb248e3053dc5"},
{file = "cffi-2.0.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:c654de545946e0db659b3400168c9ad31b5d29593291482c43e3564effbcee13"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:24b6f81f1983e6df8db3adc38562c83f7d4a0c36162885ec7f7b77c7dcbec97b"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:12873ca6cb9b0f0d3a0da705d6086fe911591737a59f28b7936bdfed27c0d47c"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:d9b97165e8aed9272a6bb17c01e3cc5871a594a446ebedc996e2397a1c1ea8ef"},
{file = "cffi-2.0.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:afb8db5439b81cf9c9d0c80404b60c3cc9c3add93e114dcae767f1477cb53775"},
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:737fe7d37e1a1bffe70bd5754ea763a62a066dc5913ca57e957824b72a85e205"},
{file = "cffi-2.0.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:38100abb9d1b1435bc4cc340bb4489635dc2f0da7456590877030c9b3d40b0c1"},
{file = "cffi-2.0.0-cp314-cp314-win32.whl", hash = "sha256:087067fa8953339c723661eda6b54bc98c5625757ea62e95eb4898ad5e776e9f"},
{file = "cffi-2.0.0-cp314-cp314-win_amd64.whl", hash = "sha256:203a48d1fb583fc7d78a4c6655692963b860a417c0528492a6bc21f1aaefab25"},
{file = "cffi-2.0.0-cp314-cp314-win_arm64.whl", hash = "sha256:dbd5c7a25a7cb98f5ca55d258b103a2054f859a46ae11aaf23134f9cc0d356ad"},
{file = "cffi-2.0.0-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:9a67fc9e8eb39039280526379fb3a70023d77caec1852002b4da7e8b270c4dd9"},
{file = "cffi-2.0.0-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7a66c7204d8869299919db4d5069a82f1561581af12b11b3c9f48c584eb8743d"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:7cc09976e8b56f8cebd752f7113ad07752461f48a58cbba644139015ac24954c"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:92b68146a71df78564e4ef48af17551a5ddd142e5190cdf2c5624d0c3ff5b2e8"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:b1e74d11748e7e98e2f426ab176d4ed720a64412b6a15054378afdb71e0f37dc"},
{file = "cffi-2.0.0-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:28a3a209b96630bca57cce802da70c266eb08c6e97e5afd61a75611ee6c64592"},
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:7553fb2090d71822f02c629afe6042c299edf91ba1bf94951165613553984512"},
{file = "cffi-2.0.0-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:6c6c373cfc5c83a975506110d17457138c8c63016b563cc9ed6e056a82f13ce4"},
{file = "cffi-2.0.0-cp314-cp314t-win32.whl", hash = "sha256:1fc9ea04857caf665289b7a75923f2c6ed559b8298a1b8c49e59f7dd95c8481e"},
{file = "cffi-2.0.0-cp314-cp314t-win_amd64.whl", hash = "sha256:d68b6cef7827e8641e8ef16f4494edda8b36104d79773a334beaa1e3521430f6"},
{file = "cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9"},
{file = "cffi-2.0.0-cp39-cp39-macosx_10_13_x86_64.whl", hash = "sha256:fe562eb1a64e67dd297ccc4f5addea2501664954f2692b69a76449ec7913ecbf"},
{file = "cffi-2.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:de8dad4425a6ca6e4e5e297b27b5c824ecc7581910bf9aee86cb6835e6812aa7"},
{file = "cffi-2.0.0-cp39-cp39-manylinux1_i686.manylinux2014_i686.manylinux_2_17_i686.manylinux_2_5_i686.whl", hash = "sha256:4647afc2f90d1ddd33441e5b0e85b16b12ddec4fca55f0d9671fef036ecca27c"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:3f4d46d8b35698056ec29bca21546e1551a205058ae1a181d871e278b0b28165"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_ppc64le.manylinux_2_17_ppc64le.whl", hash = "sha256:e6e73b9e02893c764e7e8d5bb5ce277f1a009cd5243f8228f75f842bf937c534"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_s390x.manylinux_2_17_s390x.whl", hash = "sha256:cb527a79772e5ef98fb1d700678fe031e353e765d1ca2d409c92263c6d43e09f"},
{file = "cffi-2.0.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:61d028e90346df14fedc3d1e5441df818d095f3b87d286825dfcbd6459b7ef63"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:0f6084a0ea23d05d20c3edcda20c3d006f9b6f3fefeac38f59262e10cef47ee2"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:1cd13c99ce269b3ed80b417dcd591415d3372bcac067009b6e0f59c7d4015e65"},
{file = "cffi-2.0.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89472c9762729b5ae1ad974b777416bfda4ac5642423fa93bd57a09204712322"},
{file = "cffi-2.0.0-cp39-cp39-win32.whl", hash = "sha256:2081580ebb843f759b9f617314a24ed5738c51d2aee65d31e02f6f7a2b97707a"},
{file = "cffi-2.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:b882b3df248017dba09d6b16defe9b5c407fe32fc7c65a9c69798e6175601be9"},
{file = "cffi-2.0.0.tar.gz", hash = "sha256:44d1b5909021139fe36001ae048dbdde8214afa20200eda0f64c068cac5d5529"},
]
[package.dependencies]
pycparser = "*"
pycparser = {version = "*", markers = "implementation_name != \"PyPy\""}
[[package]]
name = "charset-normalizer"
@@ -381,62 +398,80 @@ files = [
[[package]]
name = "cryptography"
version = "45.0.7"
version = "46.0.3"
description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
optional = false
python-versions = "!=3.9.0,!=3.9.1,>=3.7"
python-versions = "!=3.9.0,!=3.9.1,>=3.8"
groups = ["main", "dev"]
files = [
{file = "cryptography-45.0.7-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:3be4f21c6245930688bd9e162829480de027f8bf962ede33d4f8ba7d67a00cee"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:67285f8a611b0ebc0857ced2081e30302909f571a46bfa7a3cc0ad303fe015c6"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:577470e39e60a6cd7780793202e63536026d9b8641de011ed9d8174da9ca5339"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:4bd3e5c4b9682bc112d634f2c6ccc6736ed3635fc3319ac2bb11d768cc5a00d8"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:465ccac9d70115cd4de7186e60cfe989de73f7bb23e8a7aa45af18f7412e75bf"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:16ede8a4f7929b4b7ff3642eba2bf79aa1d71f24ab6ee443935c0d269b6bc513"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:8978132287a9d3ad6b54fcd1e08548033cc09dc6aacacb6c004c73c3eb5d3ac3"},
{file = "cryptography-45.0.7-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:b6a0e535baec27b528cb07a119f321ac024592388c5681a5ced167ae98e9fff3"},
{file = "cryptography-45.0.7-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:a24ee598d10befaec178efdff6054bc4d7e883f615bfbcd08126a0f4931c83a6"},
{file = "cryptography-45.0.7-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:fa26fa54c0a9384c27fcdc905a2fb7d60ac6e47d14bc2692145f2b3b1e2cfdbd"},
{file = "cryptography-45.0.7-cp311-abi3-win32.whl", hash = "sha256:bef32a5e327bd8e5af915d3416ffefdbe65ed975b646b3805be81b23580b57b8"},
{file = "cryptography-45.0.7-cp311-abi3-win_amd64.whl", hash = "sha256:3808e6b2e5f0b46d981c24d79648e5c25c35e59902ea4391a0dcb3e667bf7443"},
{file = "cryptography-45.0.7-cp37-abi3-macosx_10_9_universal2.whl", hash = "sha256:bfb4c801f65dd61cedfc61a83732327fafbac55a47282e6f26f073ca7a41c3b2"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:81823935e2f8d476707e85a78a405953a03ef7b7b4f55f93f7c2d9680e5e0691"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:3994c809c17fc570c2af12c9b840d7cea85a9fd3e5c0e0491f4fa3c029216d59"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:dad43797959a74103cb59c5dac71409f9c27d34c8a05921341fb64ea8ccb1dd4"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:ce7a453385e4c4693985b4a4a3533e041558851eae061a58a5405363b098fcd3"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:b04f85ac3a90c227b6e5890acb0edbaf3140938dbecf07bff618bf3638578cf1"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:48c41a44ef8b8c2e80ca4527ee81daa4c527df3ecbc9423c41a420a9559d0e27"},
{file = "cryptography-45.0.7-cp37-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:f3df7b3d0f91b88b2106031fd995802a2e9ae13e02c36c1fc075b43f420f3a17"},
{file = "cryptography-45.0.7-cp37-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:dd342f085542f6eb894ca00ef70236ea46070c8a13824c6bde0dfdcd36065b9b"},
{file = "cryptography-45.0.7-cp37-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:1993a1bb7e4eccfb922b6cd414f072e08ff5816702a0bdb8941c247a6b1b287c"},
{file = "cryptography-45.0.7-cp37-abi3-win32.whl", hash = "sha256:18fcf70f243fe07252dcb1b268a687f2358025ce32f9f88028ca5c364b123ef5"},
{file = "cryptography-45.0.7-cp37-abi3-win_amd64.whl", hash = "sha256:7285a89df4900ed3bfaad5679b1e668cb4b38a8de1ccbfc84b05f34512da0a90"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:de58755d723e86175756f463f2f0bddd45cc36fbd62601228a3f8761c9f58252"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:a20e442e917889d1a6b3c570c9e3fa2fdc398c20868abcea268ea33c024c4083"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:258e0dff86d1d891169b5af222d362468a9570e2532923088658aa866eb11130"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:d97cf502abe2ab9eff8bd5e4aca274da8d06dd3ef08b759a8d6143f4ad65d4b4"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:c987dad82e8c65ebc985f5dae5e74a3beda9d0a2a4daf8a1115f3772b59e5141"},
{file = "cryptography-45.0.7-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:c13b1e3afd29a5b3b2656257f14669ca8fa8d7956d509926f0b130b600b50ab7"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:4a862753b36620af6fc54209264f92c716367f2f0ff4624952276a6bbd18cbde"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:06ce84dc14df0bf6ea84666f958e6080cdb6fe1231be2a51f3fc1267d9f3fb34"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d0c5c6bac22b177bf8da7435d9d27a6834ee130309749d162b26c3105c0795a9"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:2f641b64acc00811da98df63df7d59fd4706c0df449da71cb7ac39a0732b40ae"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:f5414a788ecc6ee6bc58560e85ca624258a55ca434884445440a810796ea0e0b"},
{file = "cryptography-45.0.7-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:1f3d56f73595376f4244646dd5c5870c14c196949807be39e79e7bd9bac3da63"},
{file = "cryptography-45.0.7.tar.gz", hash = "sha256:4b1654dfc64ea479c242508eb8c724044f1e964a47d1d1cacc5132292d851971"},
{file = "cryptography-46.0.3-cp311-abi3-macosx_10_9_universal2.whl", hash = "sha256:109d4ddfadf17e8e7779c39f9b18111a09efb969a301a31e987416a0191ed93a"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:09859af8466b69bc3c27bdf4f5d84a665e0f7ab5088412e9e2ec49758eca5cbc"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:01ca9ff2885f3acc98c29f1860552e37f6d7c7d013d7334ff2a9de43a449315d"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:6eae65d4c3d33da080cff9c4ab1f711b15c1d9760809dad6ea763f3812d254cb"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:e5bf0ed4490068a2e72ac03d786693adeb909981cc596425d09032d372bcc849"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:5ecfccd2329e37e9b7112a888e76d9feca2347f12f37918facbb893d7bb88ee8"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:a2c0cd47381a3229c403062f764160d57d4d175e022c1df84e168c6251a22eec"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:549e234ff32571b1f4076ac269fcce7a808d3bf98b76c8dd560e42dbc66d7d91"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:c0a7bb1a68a5d3471880e264621346c48665b3bf1c3759d682fc0864c540bd9e"},
{file = "cryptography-46.0.3-cp311-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:10b01676fc208c3e6feeb25a8b83d81767e8059e1fe86e1dc62d10a3018fa926"},
{file = "cryptography-46.0.3-cp311-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:0abf1ffd6e57c67e92af68330d05760b7b7efb243aab8377e583284dbab72c71"},
{file = "cryptography-46.0.3-cp311-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:a04bee9ab6a4da801eb9b51f1b708a1b5b5c9eb48c03f74198464c66f0d344ac"},
{file = "cryptography-46.0.3-cp311-abi3-win32.whl", hash = "sha256:f260d0d41e9b4da1ed1e0f1ce571f97fe370b152ab18778e9e8f67d6af432018"},
{file = "cryptography-46.0.3-cp311-abi3-win_amd64.whl", hash = "sha256:a9a3008438615669153eb86b26b61e09993921ebdd75385ddd748702c5adfddb"},
{file = "cryptography-46.0.3-cp311-abi3-win_arm64.whl", hash = "sha256:5d7f93296ee28f68447397bf5198428c9aeeab45705a55d53a6343455dcb2c3c"},
{file = "cryptography-46.0.3-cp314-cp314t-macosx_10_9_universal2.whl", hash = "sha256:00a5e7e87938e5ff9ff5447ab086a5706a957137e6e433841e9d24f38a065217"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:c8daeb2d2174beb4575b77482320303f3d39b8e81153da4f0fb08eb5fe86a6c5"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:39b6755623145ad5eff1dab323f4eae2a32a77a7abef2c5089a04a3d04366715"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_28_aarch64.whl", hash = "sha256:db391fa7c66df6762ee3f00c95a89e6d428f4d60e7abc8328f4fe155b5ac6e54"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:78a97cf6a8839a48c49271cdcbd5cf37ca2c1d6b7fdd86cc864f302b5e9bf459"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_28_ppc64le.whl", hash = "sha256:dfb781ff7eaa91a6f7fd41776ec37c5853c795d3b358d4896fdbb5df168af422"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_28_x86_64.whl", hash = "sha256:6f61efb26e76c45c4a227835ddeae96d83624fb0d29eb5df5b96e14ed1a0afb7"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_34_aarch64.whl", hash = "sha256:23b1a8f26e43f47ceb6d6a43115f33a5a37d57df4ea0ca295b780ae8546e8044"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_34_ppc64le.whl", hash = "sha256:b419ae593c86b87014b9be7396b385491ad7f320bde96826d0dd174459e54665"},
{file = "cryptography-46.0.3-cp314-cp314t-manylinux_2_34_x86_64.whl", hash = "sha256:50fc3343ac490c6b08c0cf0d704e881d0d660be923fd3076db3e932007e726e3"},
{file = "cryptography-46.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:22d7e97932f511d6b0b04f2bfd818d73dcd5928db509460aaf48384778eb6d20"},
{file = "cryptography-46.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:d55f3dffadd674514ad19451161118fd010988540cee43d8bc20675e775925de"},
{file = "cryptography-46.0.3-cp314-cp314t-win32.whl", hash = "sha256:8a6e050cb6164d3f830453754094c086ff2d0b2f3a897a1d9820f6139a1f0914"},
{file = "cryptography-46.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:760f83faa07f8b64e9c33fc963d790a2edb24efb479e3520c14a45741cd9b2db"},
{file = "cryptography-46.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:516ea134e703e9fe26bcd1277a4b59ad30586ea90c365a87781d7887a646fe21"},
{file = "cryptography-46.0.3-cp38-abi3-macosx_10_9_universal2.whl", hash = "sha256:cb3d760a6117f621261d662bccc8ef5bc32ca673e037c83fbe565324f5c46936"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:4b7387121ac7d15e550f5cb4a43aef2559ed759c35df7336c402bb8275ac9683"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:15ab9b093e8f09daab0f2159bb7e47532596075139dd74365da52ecc9cb46c5d"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:46acf53b40ea38f9c6c229599a4a13f0d46a6c3fa9ef19fc1a124d62e338dfa0"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_28_armv7l.manylinux_2_31_armv7l.whl", hash = "sha256:10ca84c4668d066a9878890047f03546f3ae0a6b8b39b697457b7757aaf18dbc"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_28_ppc64le.whl", hash = "sha256:36e627112085bb3b81b19fed209c05ce2a52ee8b15d161b7c643a7d5a88491f3"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:1000713389b75c449a6e979ffc7dcc8ac90b437048766cef052d4d30b8220971"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_34_aarch64.whl", hash = "sha256:b02cf04496f6576afffef5ddd04a0cb7d49cf6be16a9059d793a30b035f6b6ac"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_34_ppc64le.whl", hash = "sha256:71e842ec9bc7abf543b47cf86b9a743baa95f4677d22baa4c7d5c69e49e9bc04"},
{file = "cryptography-46.0.3-cp38-abi3-manylinux_2_34_x86_64.whl", hash = "sha256:402b58fc32614f00980b66d6e56a5b4118e6cb362ae8f3fda141ba4689bd4506"},
{file = "cryptography-46.0.3-cp38-abi3-musllinux_1_2_aarch64.whl", hash = "sha256:ef639cb3372f69ec44915fafcd6698b6cc78fbe0c2ea41be867f6ed612811963"},
{file = "cryptography-46.0.3-cp38-abi3-musllinux_1_2_x86_64.whl", hash = "sha256:3b51b8ca4f1c6453d8829e1eb7299499ca7f313900dd4d89a24b8b87c0a780d4"},
{file = "cryptography-46.0.3-cp38-abi3-win32.whl", hash = "sha256:6276eb85ef938dc035d59b87c8a7dc559a232f954962520137529d77b18ff1df"},
{file = "cryptography-46.0.3-cp38-abi3-win_amd64.whl", hash = "sha256:416260257577718c05135c55958b674000baef9a1c7d9e8f306ec60d71db850f"},
{file = "cryptography-46.0.3-cp38-abi3-win_arm64.whl", hash = "sha256:d89c3468de4cdc4f08a57e214384d0471911a3830fcdaf7a8cc587e42a866372"},
{file = "cryptography-46.0.3-pp310-pypy310_pp73-macosx_10_9_x86_64.whl", hash = "sha256:a23582810fedb8c0bc47524558fb6c56aac3fc252cb306072fd2815da2a47c32"},
{file = "cryptography-46.0.3-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:e7aec276d68421f9574040c26e2a7c3771060bc0cff408bae1dcb19d3ab1e63c"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-macosx_10_9_x86_64.whl", hash = "sha256:7ce938a99998ed3c8aa7e7272dca1a610401ede816d36d0693907d863b10d9ea"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:191bb60a7be5e6f54e30ba16fdfae78ad3a342a0599eb4193ba88e3f3d6e185b"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:c70cc23f12726be8f8bc72e41d5065d77e4515efae3690326764ea1b07845cfb"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-manylinux_2_34_aarch64.whl", hash = "sha256:9394673a9f4de09e28b5356e7fff97d778f8abad85c9d5ac4a4b7e25a0de7717"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-manylinux_2_34_x86_64.whl", hash = "sha256:94cd0549accc38d1494e1f8de71eca837d0509d0d44bf11d158524b0e12cebf9"},
{file = "cryptography-46.0.3-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:6b5063083824e5509fdba180721d55909ffacccc8adbec85268b48439423d78c"},
{file = "cryptography-46.0.3.tar.gz", hash = "sha256:a8b17438104fed022ce745b362294d9ce35b4c2e45c1d958ad4a4b019285f4a1"},
]
[package.dependencies]
cffi = {version = ">=1.14", markers = "platform_python_implementation != \"PyPy\""}
cffi = {version = ">=2.0.0", markers = "python_full_version >= \"3.9.0\" and platform_python_implementation != \"PyPy\""}
typing-extensions = {version = ">=4.13.2", markers = "python_full_version < \"3.11.0\""}
[package.extras]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs ; python_full_version >= \"3.8.0\"", "sphinx-rtd-theme (>=3.0.0) ; python_full_version >= \"3.8.0\""]
docs = ["sphinx (>=5.3.0)", "sphinx-inline-tabs", "sphinx-rtd-theme (>=3.0.0)"]
docstest = ["pyenchant (>=3)", "readme-renderer (>=30.0)", "sphinxcontrib-spelling (>=7.3.1)"]
nox = ["nox (>=2024.4.15)", "nox[uv] (>=2024.3.2) ; python_full_version >= \"3.8.0\""]
pep8test = ["check-sdist ; python_full_version >= \"3.8.0\"", "click (>=8.0.1)", "mypy (>=1.4)", "ruff (>=0.3.6)"]
nox = ["nox[uv] (>=2024.4.15)"]
pep8test = ["check-sdist", "click (>=8.0.1)", "mypy (>=1.14)", "ruff (>=0.11.11)"]
sdist = ["build (>=1.0.0)"]
ssh = ["bcrypt (>=3.1.5)"]
test = ["certifi (>=2024)", "cryptography-vectors (==45.0.7)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test = ["certifi (>=2024)", "cryptography-vectors (==46.0.3)", "pretend (>=0.7)", "pytest (>=7.4.0)", "pytest-benchmark (>=4.0)", "pytest-cov (>=2.10.1)", "pytest-xdist (>=3.5.0)"]
test-randomorder = ["pytest-randomly"]
[[package]]
@@ -446,7 +481,7 @@ description = "XML bomb protection for Python stdlib modules"
optional = true
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "defusedxml-0.7.1-py2.py3-none-any.whl", hash = "sha256:a352e7e428770286cc899e2542b6cdaedb2b4953ff269a210103ec58f6198a61"},
{file = "defusedxml-0.7.1.tar.gz", hash = "sha256:1bb3032db185915b62d7c6209c5a8792be6a32ab2fedacc84e01b52c51aa3e69"},
@@ -471,7 +506,7 @@ description = "XPath 1.0/2.0/3.0/3.1 parsers and selectors for ElementTree and l
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "elementpath-4.1.5-py3-none-any.whl", hash = "sha256:2ac1a2fb31eb22bbbf817f8cf6752f844513216263f0e3892c8e79782fe4bb55"},
{file = "elementpath-4.1.5.tar.gz", hash = "sha256:c2d6dc524b29ef751ecfc416b0627668119d8812441c555d7471da41d4bacb8d"},
@@ -521,7 +556,7 @@ description = "Python wrapper for hiredis"
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "hiredis-3.3.0-cp310-cp310-macosx_10_15_universal2.whl", hash = "sha256:9937d9b69321b393fbace69f55423480f098120bc55a3316e1ca3508c4dbbd6f"},
{file = "hiredis-3.3.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:50351b77f89ba6a22aff430b993653847f36b71d444509036baa0f2d79d1ebf4"},
@@ -844,7 +879,7 @@ description = "Jaeger Python OpenTracing Tracer implementation"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
]
@@ -982,7 +1017,7 @@ description = "A strictly RFC 4510 conforming LDAP V3 pure Python client library
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
@@ -991,6 +1026,92 @@ files = [
[package.dependencies]
pyasn1 = ">=0.4.6"
[[package]]
name = "librt"
version = "0.6.3"
description = "Mypyc runtime library"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "librt-0.6.3-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:45660d26569cc22ed30adf583389d8a0d1b468f8b5e518fcf9bfe2cd298f9dd1"},
{file = "librt-0.6.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:54f3b2177fb892d47f8016f1087d21654b44f7fc4cf6571c1c6b3ea531ab0fcf"},
{file = "librt-0.6.3-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:c5b31bed2c2f2fa1fcb4815b75f931121ae210dc89a3d607fb1725f5907f1437"},
{file = "librt-0.6.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:8f8ed5053ef9fb08d34f1fd80ff093ccbd1f67f147633a84cf4a7d9b09c0f089"},
{file = "librt-0.6.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:3f0e4bd9bcb0ee34fa3dbedb05570da50b285f49e52c07a241da967840432513"},
{file = "librt-0.6.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:d8f89c8d20dfa648a3f0a56861946eb00e5b00d6b00eea14bc5532b2fcfa8ef1"},
{file = "librt-0.6.3-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:ecc2c526547eacd20cb9fbba19a5268611dbc70c346499656d6cf30fae328977"},
{file = "librt-0.6.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:fbedeb9b48614d662822ee514567d2d49a8012037fc7b4cd63f282642c2f4b7d"},
{file = "librt-0.6.3-cp310-cp310-win32.whl", hash = "sha256:0765b0fe0927d189ee14b087cd595ae636bef04992e03fe6dfdaa383866c8a46"},
{file = "librt-0.6.3-cp310-cp310-win_amd64.whl", hash = "sha256:8c659f9fb8a2f16dc4131b803fa0144c1dadcb3ab24bb7914d01a6da58ae2457"},
{file = "librt-0.6.3-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:61348cc488b18d1b1ff9f3e5fcd5ac43ed22d3e13e862489d2267c2337285c08"},
{file = "librt-0.6.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:64645b757d617ad5f98c08e07620bc488d4bced9ced91c6279cec418f16056fa"},
{file = "librt-0.6.3-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:26b8026393920320bb9a811b691d73c5981385d537ffc5b6e22e53f7b65d4122"},
{file = "librt-0.6.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d998b432ed9ffccc49b820e913c8f327a82026349e9c34fa3690116f6b70770f"},
{file = "librt-0.6.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e18875e17ef69ba7dfa9623f2f95f3eda6f70b536079ee6d5763ecdfe6cc9040"},
{file = "librt-0.6.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a218f85081fc3f70cddaed694323a1ad7db5ca028c379c214e3a7c11c0850523"},
{file = "librt-0.6.3-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:1ef42ff4edd369e84433ce9b188a64df0837f4f69e3d34d3b34d4955c599d03f"},
{file = "librt-0.6.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0e0f2b79993fec23a685b3e8107ba5f8675eeae286675a216da0b09574fa1e47"},
{file = "librt-0.6.3-cp311-cp311-win32.whl", hash = "sha256:fd98cacf4e0fabcd4005c452cb8a31750258a85cab9a59fb3559e8078da408d7"},
{file = "librt-0.6.3-cp311-cp311-win_amd64.whl", hash = "sha256:e17b5b42c8045867ca9d1f54af00cc2275198d38de18545edaa7833d7e9e4ac8"},
{file = "librt-0.6.3-cp311-cp311-win_arm64.whl", hash = "sha256:87597e3d57ec0120a3e1d857a708f80c02c42ea6b00227c728efbc860f067c45"},
{file = "librt-0.6.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:74418f718083009108dc9a42c21bf2e4802d49638a1249e13677585fcc9ca176"},
{file = "librt-0.6.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:514f3f363d1ebc423357d36222c37e5c8e6674b6eae8d7195ac9a64903722057"},
{file = "librt-0.6.3-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:cf1115207a5049d1f4b7b4b72de0e52f228d6c696803d94843907111cbf80610"},
{file = "librt-0.6.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ad8ba80cdcea04bea7b78fcd4925bfbf408961e9d8397d2ee5d3ec121e20c08c"},
{file = "librt-0.6.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:4018904c83eab49c814e2494b4e22501a93cdb6c9f9425533fe693c3117126f9"},
{file = "librt-0.6.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8983c5c06ac9c990eac5eb97a9f03fe41dc7e9d7993df74d9e8682a1056f596c"},
{file = "librt-0.6.3-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:d7769c579663a6f8dbf34878969ac71befa42067ce6bf78e6370bf0d1194997c"},
{file = "librt-0.6.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:d3c9a07eafdc70556f8c220da4a538e715668c0c63cabcc436a026e4e89950bf"},
{file = "librt-0.6.3-cp312-cp312-win32.whl", hash = "sha256:38320386a48a15033da295df276aea93a92dfa94a862e06893f75ea1d8bbe89d"},
{file = "librt-0.6.3-cp312-cp312-win_amd64.whl", hash = "sha256:c0ecf4786ad0404b072196b5df774b1bb23c8aacdcacb6c10b4128bc7b00bd01"},
{file = "librt-0.6.3-cp312-cp312-win_arm64.whl", hash = "sha256:9f2a6623057989ebc469cd9cc8fe436c40117a0147627568d03f84aef7854c55"},
{file = "librt-0.6.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:9e716f9012148a81f02f46a04fc4c663420c6fbfeacfac0b5e128cf43b4413d3"},
{file = "librt-0.6.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:669ff2495728009a96339c5ad2612569c6d8be4474e68f3f3ac85d7c3261f5f5"},
{file = "librt-0.6.3-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:349b6873ebccfc24c9efd244e49da9f8a5c10f60f07575e248921aae2123fc42"},
{file = "librt-0.6.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:0c74c26736008481c9f6d0adf1aedb5a52aff7361fea98276d1f965c0256ee70"},
{file = "librt-0.6.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:408a36ddc75e91918cb15b03460bdc8a015885025d67e68c6f78f08c3a88f522"},
{file = "librt-0.6.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:e61ab234624c9ffca0248a707feffe6fac2343758a36725d8eb8a6efef0f8c30"},
{file = "librt-0.6.3-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:324462fe7e3896d592b967196512491ec60ca6e49c446fe59f40743d08c97917"},
{file = "librt-0.6.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:36b2ec8c15030002c7f688b4863e7be42820d7c62d9c6eece3db54a2400f0530"},
{file = "librt-0.6.3-cp313-cp313-win32.whl", hash = "sha256:25b1b60cb059471c0c0c803e07d0dfdc79e41a0a122f288b819219ed162672a3"},
{file = "librt-0.6.3-cp313-cp313-win_amd64.whl", hash = "sha256:10a95ad074e2a98c9e4abc7f5b7d40e5ecbfa84c04c6ab8a70fabf59bd429b88"},
{file = "librt-0.6.3-cp313-cp313-win_arm64.whl", hash = "sha256:17000df14f552e86877d67e4ab7966912224efc9368e998c96a6974a8d609bf9"},
{file = "librt-0.6.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8e695f25d1a425ad7a272902af8ab8c8d66c1998b177e4b5f5e7b4e215d0c88a"},
{file = "librt-0.6.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:3e84a4121a7ae360ca4da436548a9c1ca8ca134a5ced76c893cc5944426164bd"},
{file = "librt-0.6.3-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:05f385a414de3f950886ea0aad8f109650d4b712cf9cc14cc17f5f62a9ab240b"},
{file = "librt-0.6.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:36a8e337461150b05ca2c7bdedb9e591dfc262c5230422cea398e89d0c746cdc"},
{file = "librt-0.6.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dcbe48f6a03979384f27086484dc2a14959be1613cb173458bd58f714f2c48f3"},
{file = "librt-0.6.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:4bca9e4c260233fba37b15c4ec2f78aa99c1a79fbf902d19dd4a763c5c3fb751"},
{file = "librt-0.6.3-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:760c25ed6ac968e24803eb5f7deb17ce026902d39865e83036bacbf5cf242aa8"},
{file = "librt-0.6.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:4aa4a93a353ccff20df6e34fa855ae8fd788832c88f40a9070e3ddd3356a9f0e"},
{file = "librt-0.6.3-cp314-cp314-win32.whl", hash = "sha256:cb92741c2b4ea63c09609b064b26f7f5d9032b61ae222558c55832ec3ad0bcaf"},
{file = "librt-0.6.3-cp314-cp314-win_amd64.whl", hash = "sha256:fdcd095b1b812d756fa5452aca93b962cf620694c0cadb192cec2bb77dcca9a2"},
{file = "librt-0.6.3-cp314-cp314-win_arm64.whl", hash = "sha256:822ca79e28720a76a935c228d37da6579edef048a17cd98d406a2484d10eda78"},
{file = "librt-0.6.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:078cd77064d1640cb7b0650871a772956066174d92c8aeda188a489b58495179"},
{file = "librt-0.6.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:5cc22f7f5c0cc50ed69f4b15b9c51d602aabc4500b433aaa2ddd29e578f452f7"},
{file = "librt-0.6.3-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:14b345eb7afb61b9fdcdfda6738946bd11b8e0f6be258666b0646af3b9bb5916"},
{file = "librt-0.6.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6d46aa46aa29b067f0b8b84f448fd9719aaf5f4c621cc279164d76a9dc9ab3e8"},
{file = "librt-0.6.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:1b51ba7d9d5d9001494769eca8c0988adce25d0a970c3ba3f2eb9df9d08036fc"},
{file = "librt-0.6.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:ced0925a18fddcff289ef54386b2fc230c5af3c83b11558571124bfc485b8c07"},
{file = "librt-0.6.3-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:6bac97e51f66da2ca012adddbe9fd656b17f7368d439de30898f24b39512f40f"},
{file = "librt-0.6.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:b2922a0e8fa97395553c304edc3bd36168d8eeec26b92478e292e5d4445c1ef0"},
{file = "librt-0.6.3-cp314-cp314t-win32.whl", hash = "sha256:f33462b19503ba68d80dac8a1354402675849259fb3ebf53b67de86421735a3a"},
{file = "librt-0.6.3-cp314-cp314t-win_amd64.whl", hash = "sha256:04f8ce401d4f6380cfc42af0f4e67342bf34c820dae01343f58f472dbac75dcf"},
{file = "librt-0.6.3-cp314-cp314t-win_arm64.whl", hash = "sha256:afb39550205cc5e5c935762c6bf6a2bb34f7d21a68eadb25e2db7bf3593fecc0"},
{file = "librt-0.6.3-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:09262cb2445b6f15d09141af20b95bb7030c6f13b00e876ad8fdd1a9045d6aa5"},
{file = "librt-0.6.3-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:57705e8eec76c5b77130d729c0f70190a9773366c555c5457c51eace80afd873"},
{file = "librt-0.6.3-cp39-cp39-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:3ac2a7835434b31def8ed5355dd9b895bbf41642d61967522646d1d8b9681106"},
{file = "librt-0.6.3-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:71f0a5918aebbea1e7db2179a8fe87e8a8732340d9e8b8107401fb407eda446e"},
{file = "librt-0.6.3-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:aa346e202e6e1ebc01fe1c69509cffe486425884b96cb9ce155c99da1ecbe0e9"},
{file = "librt-0.6.3-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:92267f865c7bbd12327a0d394666948b9bf4b51308b52947c0cc453bfa812f5d"},
{file = "librt-0.6.3-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:86605d5bac340beb030cbc35859325982a79047ebdfba1e553719c7126a2389d"},
{file = "librt-0.6.3-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:98e4bbecbef8d2a60ecf731d735602feee5ac0b32117dbbc765e28b054bac912"},
{file = "librt-0.6.3-cp39-cp39-win32.whl", hash = "sha256:3caa0634c02d5ff0b2ae4a28052e0d8c5f20d497623dc13f629bd4a9e2a6efad"},
{file = "librt-0.6.3-cp39-cp39-win_amd64.whl", hash = "sha256:b47395091e7e0ece1e6ebac9b98bf0c9084d1e3d3b2739aa566be7e56e3f7bf2"},
{file = "librt-0.6.3.tar.gz", hash = "sha256:c724a884e642aa2bbad52bb0203ea40406ad742368a5f90da1b220e970384aae"},
]
[[package]]
name = "lxml"
version = "6.0.2"
@@ -998,7 +1119,7 @@ description = "Powerful and Pythonic XML processing library combining libxml2/li
optional = true
python-versions = ">=3.8"
groups = ["main"]
markers = "extra == \"all\" or extra == \"url-preview\""
markers = "extra == \"url-preview\" or extra == \"all\""
files = [
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:e77dd455b9a16bbd2a5036a63ddbd479c19572af81b624e79ef422f929eef388"},
{file = "lxml-6.0.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:5d444858b9f07cefff6455b983aea9a67f7462ba1f6cbe4a21e8bf6791bf2153"},
@@ -1284,7 +1405,7 @@ description = "An LDAP3 auth provider for Synapse"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"matrix-synapse-ldap3\""
markers = "extra == \"matrix-synapse-ldap3\" or extra == \"all\""
files = [
{file = "matrix-synapse-ldap3-0.3.0.tar.gz", hash = "sha256:8bb6517173164d4b9cc44f49de411d8cebdb2e705d5dd1ea1f38733c4a009e1d"},
{file = "matrix_synapse_ldap3-0.3.0-py3-none-any.whl", hash = "sha256:8b4d701f8702551e98cc1d8c20dbed532de5613584c08d0df22de376ba99159d"},
@@ -1413,53 +1534,54 @@ docs = ["sphinx (>=8,<9)", "sphinx-autobuild"]
[[package]]
name = "mypy"
version = "1.17.1"
version = "1.19.0"
description = "Optional static typing for Python"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "mypy-1.17.1-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3fbe6d5555bf608c47203baa3e72dbc6ec9965b3d7c318aa9a4ca76f465bd972"},
{file = "mypy-1.17.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:80ef5c058b7bce08c83cac668158cb7edea692e458d21098c7d3bce35a5d43e7"},
{file = "mypy-1.17.1-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c4a580f8a70c69e4a75587bd925d298434057fe2a428faaf927ffe6e4b9a98df"},
{file = "mypy-1.17.1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:dd86bb649299f09d987a2eebb4d52d10603224500792e1bee18303bbcc1ce390"},
{file = "mypy-1.17.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:a76906f26bd8d51ea9504966a9c25419f2e668f012e0bdf3da4ea1526c534d94"},
{file = "mypy-1.17.1-cp310-cp310-win_amd64.whl", hash = "sha256:e79311f2d904ccb59787477b7bd5d26f3347789c06fcd7656fa500875290264b"},
{file = "mypy-1.17.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:ad37544be07c5d7fba814eb370e006df58fed8ad1ef33ed1649cb1889ba6ff58"},
{file = "mypy-1.17.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:064e2ff508e5464b4bd807a7c1625bc5047c5022b85c70f030680e18f37273a5"},
{file = "mypy-1.17.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:70401bbabd2fa1aa7c43bb358f54037baf0586f41e83b0ae67dd0534fc64edfd"},
{file = "mypy-1.17.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e92bdc656b7757c438660f775f872a669b8ff374edc4d18277d86b63edba6b8b"},
{file = "mypy-1.17.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:c1fdf4abb29ed1cb091cf432979e162c208a5ac676ce35010373ff29247bcad5"},
{file = "mypy-1.17.1-cp311-cp311-win_amd64.whl", hash = "sha256:ff2933428516ab63f961644bc49bc4cbe42bbffb2cd3b71cc7277c07d16b1a8b"},
{file = "mypy-1.17.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:69e83ea6553a3ba79c08c6e15dbd9bfa912ec1e493bf75489ef93beb65209aeb"},
{file = "mypy-1.17.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:1b16708a66d38abb1e6b5702f5c2c87e133289da36f6a1d15f6a5221085c6403"},
{file = "mypy-1.17.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:89e972c0035e9e05823907ad5398c5a73b9f47a002b22359b177d40bdaee7056"},
{file = "mypy-1.17.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:03b6d0ed2b188e35ee6d5c36b5580cffd6da23319991c49ab5556c023ccf1341"},
{file = "mypy-1.17.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:c837b896b37cd103570d776bda106eabb8737aa6dd4f248451aecf53030cdbeb"},
{file = "mypy-1.17.1-cp312-cp312-win_amd64.whl", hash = "sha256:665afab0963a4b39dff7c1fa563cc8b11ecff7910206db4b2e64dd1ba25aed19"},
{file = "mypy-1.17.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:93378d3203a5c0800c6b6d850ad2f19f7a3cdf1a3701d3416dbf128805c6a6a7"},
{file = "mypy-1.17.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:15d54056f7fe7a826d897789f53dd6377ec2ea8ba6f776dc83c2902b899fee81"},
{file = "mypy-1.17.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:209a58fed9987eccc20f2ca94afe7257a8f46eb5df1fb69958650973230f91e6"},
{file = "mypy-1.17.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:099b9a5da47de9e2cb5165e581f158e854d9e19d2e96b6698c0d64de911dd849"},
{file = "mypy-1.17.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fa6ffadfbe6994d724c5a1bb6123a7d27dd68fc9c059561cd33b664a79578e14"},
{file = "mypy-1.17.1-cp313-cp313-win_amd64.whl", hash = "sha256:9a2b7d9180aed171f033c9f2fc6c204c1245cf60b0cb61cf2e7acc24eea78e0a"},
{file = "mypy-1.17.1-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:15a83369400454c41ed3a118e0cc58bd8123921a602f385cb6d6ea5df050c733"},
{file = "mypy-1.17.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:55b918670f692fc9fba55c3298d8a3beae295c5cded0a55dccdc5bbead814acd"},
{file = "mypy-1.17.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:62761474061feef6f720149d7ba876122007ddc64adff5ba6f374fda35a018a0"},
{file = "mypy-1.17.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c49562d3d908fd49ed0938e5423daed8d407774a479b595b143a3d7f87cdae6a"},
{file = "mypy-1.17.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:397fba5d7616a5bc60b45c7ed204717eaddc38f826e3645402c426057ead9a91"},
{file = "mypy-1.17.1-cp314-cp314-win_amd64.whl", hash = "sha256:9d6b20b97d373f41617bd0708fd46aa656059af57f2ef72aa8c7d6a2b73b74ed"},
{file = "mypy-1.17.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:5d1092694f166a7e56c805caaf794e0585cabdbf1df36911c414e4e9abb62ae9"},
{file = "mypy-1.17.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:79d44f9bfb004941ebb0abe8eff6504223a9c1ac51ef967d1263c6572bbebc99"},
{file = "mypy-1.17.1-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b01586eed696ec905e61bd2568f48740f7ac4a45b3a468e6423a03d3788a51a8"},
{file = "mypy-1.17.1-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:43808d9476c36b927fbcd0b0255ce75efe1b68a080154a38ae68a7e62de8f0f8"},
{file = "mypy-1.17.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:feb8cc32d319edd5859da2cc084493b3e2ce5e49a946377663cc90f6c15fb259"},
{file = "mypy-1.17.1-cp39-cp39-win_amd64.whl", hash = "sha256:d7598cf74c3e16539d4e2f0b8d8c318e00041553d83d4861f87c7a72e95ac24d"},
{file = "mypy-1.17.1-py3-none-any.whl", hash = "sha256:a9f52c0351c21fe24c21d8c0eb1f62967b262d6729393397b6f443c3b773c3b9"},
{file = "mypy-1.17.1.tar.gz", hash = "sha256:25e01ec741ab5bb3eec8ba9cdb0f769230368a22c959c4937360efb89b7e9f01"},
{file = "mypy-1.19.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:6148ede033982a8c5ca1143de34c71836a09f105068aaa8b7d5edab2b053e6c8"},
{file = "mypy-1.19.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a9ac09e52bb0f7fb912f5d2a783345c72441a08ef56ce3e17c1752af36340a39"},
{file = "mypy-1.19.0-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:11f7254c15ab3f8ed68f8e8f5cbe88757848df793e31c36aaa4d4f9783fd08ab"},
{file = "mypy-1.19.0-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:318ba74f75899b0e78b847d8c50821e4c9637c79d9a59680fc1259f29338cb3e"},
{file = "mypy-1.19.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cf7d84f497f78b682edd407f14a7b6e1a2212b433eedb054e2081380b7395aa3"},
{file = "mypy-1.19.0-cp310-cp310-win_amd64.whl", hash = "sha256:c3385246593ac2b97f155a0e9639be906e73534630f663747c71908dfbf26134"},
{file = "mypy-1.19.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:a31e4c28e8ddb042c84c5e977e28a21195d086aaffaf08b016b78e19c9ef8106"},
{file = "mypy-1.19.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:34ec1ac66d31644f194b7c163d7f8b8434f1b49719d403a5d26c87fff7e913f7"},
{file = "mypy-1.19.0-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:cb64b0ba5980466a0f3f9990d1c582bcab8db12e29815ecb57f1408d99b4bff7"},
{file = "mypy-1.19.0-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:120cffe120cca5c23c03c77f84abc0c14c5d2e03736f6c312480020082f1994b"},
{file = "mypy-1.19.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:7a500ab5c444268a70565e374fc803972bfd1f09545b13418a5174e29883dab7"},
{file = "mypy-1.19.0-cp311-cp311-win_amd64.whl", hash = "sha256:c14a98bc63fd867530e8ec82f217dae29d0550c86e70debc9667fff1ec83284e"},
{file = "mypy-1.19.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:0fb3115cb8fa7c5f887c8a8d81ccdcb94cff334684980d847e5a62e926910e1d"},
{file = "mypy-1.19.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f3e19e3b897562276bb331074d64c076dbdd3e79213f36eed4e592272dabd760"},
{file = "mypy-1.19.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b9d491295825182fba01b6ffe2c6fe4e5a49dbf4e2bb4d1217b6ced3b4797bc6"},
{file = "mypy-1.19.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:6016c52ab209919b46169651b362068f632efcd5eb8ef9d1735f6f86da7853b2"},
{file = "mypy-1.19.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:f188dcf16483b3e59f9278c4ed939ec0254aa8a60e8fc100648d9ab5ee95a431"},
{file = "mypy-1.19.0-cp312-cp312-win_amd64.whl", hash = "sha256:0e3c3d1e1d62e678c339e7ade72746a9e0325de42cd2cccc51616c7b2ed1a018"},
{file = "mypy-1.19.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:7686ed65dbabd24d20066f3115018d2dce030d8fa9db01aa9f0a59b6813e9f9e"},
{file = "mypy-1.19.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:fd4a985b2e32f23bead72e2fb4bbe5d6aceee176be471243bd831d5b2644672d"},
{file = "mypy-1.19.0-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:fc51a5b864f73a3a182584b1ac75c404396a17eced54341629d8bdcb644a5bba"},
{file = "mypy-1.19.0-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:37af5166f9475872034b56c5efdcf65ee25394e9e1d172907b84577120714364"},
{file = "mypy-1.19.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:510c014b722308c9bd377993bcbf9a07d7e0692e5fa8fc70e639c1eb19fc6bee"},
{file = "mypy-1.19.0-cp313-cp313-win_amd64.whl", hash = "sha256:cabbee74f29aa9cd3b444ec2f1e4fa5a9d0d746ce7567a6a609e224429781f53"},
{file = "mypy-1.19.0-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:f2e36bed3c6d9b5f35d28b63ca4b727cb0228e480826ffc8953d1892ddc8999d"},
{file = "mypy-1.19.0-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a18d8abdda14035c5718acb748faec09571432811af129bf0d9e7b2d6699bf18"},
{file = "mypy-1.19.0-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:f75e60aca3723a23511948539b0d7ed514dda194bc3755eae0bfc7a6b4887aa7"},
{file = "mypy-1.19.0-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:8f44f2ae3c58421ee05fe609160343c25f70e3967f6e32792b5a78006a9d850f"},
{file = "mypy-1.19.0-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:63ea6a00e4bd6822adbfc75b02ab3653a17c02c4347f5bb0cf1d5b9df3a05835"},
{file = "mypy-1.19.0-cp314-cp314-win_amd64.whl", hash = "sha256:3ad925b14a0bb99821ff6f734553294aa6a3440a8cb082fe1f5b84dfb662afb1"},
{file = "mypy-1.19.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:0dde5cb375cb94deff0d4b548b993bec52859d1651e073d63a1386d392a95495"},
{file = "mypy-1.19.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:1cf9c59398db1c68a134b0b5354a09a1e124523f00bacd68e553b8bd16ff3299"},
{file = "mypy-1.19.0-cp39-cp39-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3210d87b30e6af9c8faed61be2642fcbe60ef77cec64fa1ef810a630a4cf671c"},
{file = "mypy-1.19.0-cp39-cp39-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:e2c1101ab41d01303103ab6ef82cbbfedb81c1a060c868fa7cc013d573d37ab5"},
{file = "mypy-1.19.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:0ea4fd21bb48f0da49e6d3b37ef6bd7e8228b9fe41bbf4d80d9364d11adbd43c"},
{file = "mypy-1.19.0-cp39-cp39-win_amd64.whl", hash = "sha256:16f76ff3f3fd8137aadf593cb4607d82634fca675e8211ad75c43d86033ee6c6"},
{file = "mypy-1.19.0-py3-none-any.whl", hash = "sha256:0c01c99d626380752e527d5ce8e69ffbba2046eb8a060db0329690849cf9b6f9"},
{file = "mypy-1.19.0.tar.gz", hash = "sha256:f6b874ca77f733222641e5c46e4711648c4037ea13646fd0cdc814c2eaec2528"},
]
[package.dependencies]
librt = ">=0.6.2"
mypy_extensions = ">=1.0.0"
pathspec = ">=0.9.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
@@ -1486,18 +1608,18 @@ files = [
[[package]]
name = "mypy-zope"
version = "1.0.13"
version = "1.0.14"
description = "Plugin for mypy to support zope interfaces"
optional = false
python-versions = "*"
groups = ["dev"]
files = [
{file = "mypy_zope-1.0.13-py3-none-any.whl", hash = "sha256:13740c4cbc910cca2c143c6709e1c483c991abeeeb7b629ad6f73d8ac1edad15"},
{file = "mypy_zope-1.0.13.tar.gz", hash = "sha256:63fb4d035ea874baf280dc69e714dcde4bd2a4a4837a0fd8d90ce91bea510f99"},
{file = "mypy_zope-1.0.14-py3-none-any.whl", hash = "sha256:8842ade93630421dbec0c9906d6515f6e65c6407ef8b9b2eb7f4f73ae1e8a42a"},
{file = "mypy_zope-1.0.14.tar.gz", hash = "sha256:42555ad4703f2e50c912de3ebe0c7197619c3f71864817fabc5385ecea0f8449"},
]
[package.dependencies]
mypy = ">=1.0.0,<1.18.0"
mypy = ">=1.0.0,<1.20.0"
"zope.interface" = "*"
"zope.schema" = "*"
@@ -1526,7 +1648,7 @@ description = "OpenTracing API for Python. See documentation at http://opentraci
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "opentracing-2.4.0.tar.gz", hash = "sha256:a173117e6ef580d55874734d1fa7ecb6f3655160b8b8974a2a1e98e5ec9c840d"},
]
@@ -1575,14 +1697,14 @@ files = [
[[package]]
name = "phonenumbers"
version = "9.0.18"
version = "9.0.19"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "phonenumbers-9.0.18-py2.py3-none-any.whl", hash = "sha256:d3354454ac31c97f8a08121df97a7145b8dca641f734c6f1518a41c2f60c5764"},
{file = "phonenumbers-9.0.18.tar.gz", hash = "sha256:5537c61ba95b11b992c95e804da6e49193cc06b1224f632ade64631518a48ed1"},
{file = "phonenumbers-9.0.19-py2.py3-none-any.whl", hash = "sha256:004abdfe2010518c2383f148515664a742e8a5d5540e07c049735c139d7e8b09"},
{file = "phonenumbers-9.0.19.tar.gz", hash = "sha256:e0674e31554362f4d95383558f7aefde738ef2e7bf96d28a10afd3e87d63a65c"},
]
[[package]]
@@ -1716,7 +1838,7 @@ description = "psycopg2 - Python-PostgreSQL Database Adapter"
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"postgres\""
markers = "extra == \"postgres\" or extra == \"all\""
files = [
{file = "psycopg2-2.9.11-cp310-cp310-win_amd64.whl", hash = "sha256:103e857f46bb76908768ead4e2d0ba1d1a130e7b8ed77d3ae91e8b33481813e8"},
{file = "psycopg2-2.9.11-cp311-cp311-win_amd64.whl", hash = "sha256:210daed32e18f35e3140a1ebe059ac29209dd96468f2f7559aa59f75ee82a5cb"},
@@ -1734,7 +1856,7 @@ description = ".. image:: https://travis-ci.org/chtd/psycopg2cffi.svg?branch=mas
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-2.9.0.tar.gz", hash = "sha256:7e272edcd837de3a1d12b62185eb85c45a19feda9e62fa1b120c54f9e8d35c52"},
]
@@ -1750,7 +1872,7 @@ description = "A Simple library to enable psycopg2 compatability"
optional = true
python-versions = "*"
groups = ["main"]
markers = "platform_python_implementation == \"PyPy\" and (extra == \"all\" or extra == \"postgres\")"
markers = "platform_python_implementation == \"PyPy\" and (extra == \"postgres\" or extra == \"all\")"
files = [
{file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
]
@@ -1792,6 +1914,7 @@ description = "C parser in Python"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
groups = ["main", "dev"]
markers = "implementation_name != \"PyPy\""
files = [
{file = "pycparser-2.21-py2.py3-none-any.whl", hash = "sha256:8ee45429555515e1f6b185e78100aea234072576aa43ab53aefcae078162fca9"},
{file = "pycparser-2.21.tar.gz", hash = "sha256:e644fdec12f7872f86c58ff790da456218b10f863970249516d60a5eaca77206"},
@@ -2031,7 +2154,7 @@ description = "A development tool to measure, monitor and analyze the memory beh
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"cache-memory\""
markers = "extra == \"cache-memory\" or extra == \"all\""
files = [
{file = "Pympler-1.0.1-py3-none-any.whl", hash = "sha256:d260dda9ae781e1eab6ea15bacb84015849833ba5555f141d2d9b7b7473b307d"},
{file = "Pympler-1.0.1.tar.gz", hash = "sha256:993f1a3599ca3f4fcd7160c7545ad06310c9e12f70174ae7ae8d4e25f6c5d3fa"},
@@ -2084,6 +2207,63 @@ typing-extensions = {version = ">=4.9", markers = "python_version < \"3.13\" and
docs = ["sphinx (!=5.2.0,!=5.2.0.post0,!=7.2.5)", "sphinx_rtd_theme"]
test = ["pretend", "pytest (>=3.0.1)", "pytest-rerunfailures"]
[[package]]
name = "pyparsing"
version = "3.2.5"
description = "pyparsing - Classes and methods to define and execute parsing grammars"
optional = false
python-versions = ">=3.9"
groups = ["main"]
files = [
{file = "pyparsing-3.2.5-py3-none-any.whl", hash = "sha256:e38a4f02064cf41fe6593d328d0512495ad1f3d8a91c4f73fc401b3079a59a5e"},
{file = "pyparsing-3.2.5.tar.gz", hash = "sha256:2df8d5b7b2802ef88e8d016a2eb9c7aeaa923529cd251ed0fe4608275d4105b6"},
]
[package.extras]
diagrams = ["jinja2", "railroad-diagrams"]
[[package]]
name = "pyrsistent"
version = "0.20.0"
description = "Persistent/Functional/Immutable data structures"
optional = false
python-versions = ">=3.8"
groups = ["main"]
files = [
{file = "pyrsistent-0.20.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:8c3aba3e01235221e5b229a6c05f585f344734bd1ad42a8ac51493d74722bbce"},
{file = "pyrsistent-0.20.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c1beb78af5423b879edaf23c5591ff292cf7c33979734c99aa66d5914ead880f"},
{file = "pyrsistent-0.20.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:21cc459636983764e692b9eba7144cdd54fdec23ccdb1e8ba392a63666c60c34"},
{file = "pyrsistent-0.20.0-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f5ac696f02b3fc01a710427585c855f65cd9c640e14f52abe52020722bb4906b"},
{file = "pyrsistent-0.20.0-cp310-cp310-win32.whl", hash = "sha256:0724c506cd8b63c69c7f883cc233aac948c1ea946ea95996ad8b1380c25e1d3f"},
{file = "pyrsistent-0.20.0-cp310-cp310-win_amd64.whl", hash = "sha256:8441cf9616d642c475684d6cf2520dd24812e996ba9af15e606df5f6fd9d04a7"},
{file = "pyrsistent-0.20.0-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:0f3b1bcaa1f0629c978b355a7c37acd58907390149b7311b5db1b37648eb6958"},
{file = "pyrsistent-0.20.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5cdd7ef1ea7a491ae70d826b6cc64868de09a1d5ff9ef8d574250d0940e275b8"},
{file = "pyrsistent-0.20.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:cae40a9e3ce178415040a0383f00e8d68b569e97f31928a3a8ad37e3fde6df6a"},
{file = "pyrsistent-0.20.0-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6288b3fa6622ad8a91e6eb759cfc48ff3089e7c17fb1d4c59a919769314af224"},
{file = "pyrsistent-0.20.0-cp311-cp311-win32.whl", hash = "sha256:7d29c23bdf6e5438c755b941cef867ec2a4a172ceb9f50553b6ed70d50dfd656"},
{file = "pyrsistent-0.20.0-cp311-cp311-win_amd64.whl", hash = "sha256:59a89bccd615551391f3237e00006a26bcf98a4d18623a19909a2c48b8e986ee"},
{file = "pyrsistent-0.20.0-cp312-cp312-macosx_10_9_universal2.whl", hash = "sha256:09848306523a3aba463c4b49493a760e7a6ca52e4826aa100ee99d8d39b7ad1e"},
{file = "pyrsistent-0.20.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a14798c3005ec892bbada26485c2eea3b54109cb2533713e355c806891f63c5e"},
{file = "pyrsistent-0.20.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b14decb628fac50db5e02ee5a35a9c0772d20277824cfe845c8a8b717c15daa3"},
{file = "pyrsistent-0.20.0-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e2c116cc804d9b09ce9814d17df5edf1df0c624aba3b43bc1ad90411487036d"},
{file = "pyrsistent-0.20.0-cp312-cp312-win32.whl", hash = "sha256:e78d0c7c1e99a4a45c99143900ea0546025e41bb59ebc10182e947cf1ece9174"},
{file = "pyrsistent-0.20.0-cp312-cp312-win_amd64.whl", hash = "sha256:4021a7f963d88ccd15b523787d18ed5e5269ce57aa4037146a2377ff607ae87d"},
{file = "pyrsistent-0.20.0-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:79ed12ba79935adaac1664fd7e0e585a22caa539dfc9b7c7c6d5ebf91fb89054"},
{file = "pyrsistent-0.20.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f920385a11207dc372a028b3f1e1038bb244b3ec38d448e6d8e43c6b3ba20e98"},
{file = "pyrsistent-0.20.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f5c2d012671b7391803263419e31b5c7c21e7c95c8760d7fc35602353dee714"},
{file = "pyrsistent-0.20.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef3992833fbd686ee783590639f4b8343a57f1f75de8633749d984dc0eb16c86"},
{file = "pyrsistent-0.20.0-cp38-cp38-win32.whl", hash = "sha256:881bbea27bbd32d37eb24dd320a5e745a2a5b092a17f6debc1349252fac85423"},
{file = "pyrsistent-0.20.0-cp38-cp38-win_amd64.whl", hash = "sha256:6d270ec9dd33cdb13f4d62c95c1a5a50e6b7cdd86302b494217137f760495b9d"},
{file = "pyrsistent-0.20.0-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:ca52d1ceae015859d16aded12584c59eb3825f7b50c6cfd621d4231a6cc624ce"},
{file = "pyrsistent-0.20.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b318ca24db0f0518630e8b6f3831e9cba78f099ed5c1d65ffe3e023003043ba0"},
{file = "pyrsistent-0.20.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fed2c3216a605dc9a6ea50c7e84c82906e3684c4e80d2908208f662a6cbf9022"},
{file = "pyrsistent-0.20.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2e14c95c16211d166f59c6611533d0dacce2e25de0f76e4c140fde250997b3ca"},
{file = "pyrsistent-0.20.0-cp39-cp39-win32.whl", hash = "sha256:f058a615031eea4ef94ead6456f5ec2026c19fb5bd6bfe86e9665c4158cf802f"},
{file = "pyrsistent-0.20.0-cp39-cp39-win_amd64.whl", hash = "sha256:58b8f6366e152092194ae68fefe18b9f0b4f89227dfd86a07770c3d86097aebf"},
{file = "pyrsistent-0.20.0-py3-none-any.whl", hash = "sha256:c55acc4733aad6560a7f5f818466631f07efc001fd023f34a6c203f8b6df0f0b"},
{file = "pyrsistent-0.20.0.tar.gz", hash = "sha256:4c48f78f62ab596c679086084d0dd13254ae4f3d6c72a83ffdf5ebdef8f265a4"},
]
[[package]]
name = "pysaml2"
version = "7.5.0"
@@ -2091,7 +2271,7 @@ description = "Python implementation of SAML Version 2 Standard"
optional = true
python-versions = ">=3.9,<4.0"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pysaml2-7.5.0-py3-none-any.whl", hash = "sha256:bc6627cc344476a83c757f440a73fda1369f13b6fda1b4e16bca63ffbabb5318"},
{file = "pysaml2-7.5.0.tar.gz", hash = "sha256:f36871d4e5ee857c6b85532e942550d2cf90ea4ee943d75eb681044bbc4f54f7"},
@@ -2116,7 +2296,7 @@ description = "Extensions to the standard Python datetime module"
optional = true
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "python-dateutil-2.8.2.tar.gz", hash = "sha256:0123cacc1627ae19ddf3c27a5de5bd67ee4586fbdd6440d9748f8abb483d3e86"},
{file = "python_dateutil-2.8.2-py2.py3-none-any.whl", hash = "sha256:961d03dc3453ebbc59dbdea9e4e11c5651520a876d0f4db161e8674aae935da9"},
@@ -2139,15 +2319,15 @@ files = [
[[package]]
name = "pytz"
version = "2022.7.1"
version = "2025.2"
description = "World timezone definitions, modern and historical"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "pytz-2022.7.1-py2.py3-none-any.whl", hash = "sha256:78f4f37d8198e0627c5f1143240bb0206b8691d8d7ac6d78fee88b78733f8c4a"},
{file = "pytz-2022.7.1.tar.gz", hash = "sha256:01a0681c4b9684a28304615eba55d1ab31ae00bf68ec157ec3708a8182dbbcd0"},
{file = "pytz-2025.2-py2.py3-none-any.whl", hash = "sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00"},
{file = "pytz-2025.2.tar.gz", hash = "sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3"},
]
[[package]]
@@ -2481,31 +2661,31 @@ files = [
[[package]]
name = "ruff"
version = "0.14.5"
version = "0.14.6"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.14.5-py3-none-linux_armv6l.whl", hash = "sha256:f3b8248123b586de44a8018bcc9fefe31d23dda57a34e6f0e1e53bd51fd63594"},
{file = "ruff-0.14.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:f7a75236570318c7a30edd7f5491945f0169de738d945ca8784500b517163a72"},
{file = "ruff-0.14.5-py3-none-macosx_11_0_arm64.whl", hash = "sha256:6d146132d1ee115f8802356a2dc9a634dbf58184c51bff21f313e8cd1c74899a"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:e2380596653dcd20b057794d55681571a257a42327da8894b93bbd6111aa801f"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2d1fa985a42b1f075a098fa1ab9d472b712bdb17ad87a8ec86e45e7fa6273e68"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88f0770d42b7fa02bbefddde15d235ca3aa24e2f0137388cc15b2dcbb1f7c7a7"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:3676cb02b9061fee7294661071c4709fa21419ea9176087cb77e64410926eb78"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:b595bedf6bc9cab647c4a173a61acf4f1ac5f2b545203ba82f30fcb10b0318fb"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:f55382725ad0bdb2e8ee2babcbbfb16f124f5a59496a2f6a46f1d9d99d93e6e2"},
{file = "ruff-0.14.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7497d19dce23976bdaca24345ae131a1d38dcfe1b0850ad8e9e6e4fa321a6e19"},
{file = "ruff-0.14.5-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:410e781f1122d6be4f446981dd479470af86537fb0b8857f27a6e872f65a38e4"},
{file = "ruff-0.14.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:c01be527ef4c91a6d55e53b337bfe2c0f82af024cc1a33c44792d6844e2331e1"},
{file = "ruff-0.14.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:f66e9bb762e68d66e48550b59c74314168ebb46199886c5c5aa0b0fbcc81b151"},
{file = "ruff-0.14.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:d93be8f1fa01022337f1f8f3bcaa7ffee2d0b03f00922c45c2207954f351f465"},
{file = "ruff-0.14.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:c135d4b681f7401fe0e7312017e41aba9b3160861105726b76cfa14bc25aa367"},
{file = "ruff-0.14.5-py3-none-win32.whl", hash = "sha256:c83642e6fccfb6dea8b785eb9f456800dcd6a63f362238af5fc0c83d027dd08b"},
{file = "ruff-0.14.5-py3-none-win_amd64.whl", hash = "sha256:9d55d7af7166f143c94eae1db3312f9ea8f95a4defef1979ed516dbb38c27621"},
{file = "ruff-0.14.5-py3-none-win_arm64.whl", hash = "sha256:4b700459d4649e2594b31f20a9de33bc7c19976d4746d8d0798ad959621d64a4"},
{file = "ruff-0.14.5.tar.gz", hash = "sha256:8d3b48d7d8aad423d3137af7ab6c8b1e38e4de104800f0d596990f6ada1a9fc1"},
{file = "ruff-0.14.6-py3-none-linux_armv6l.whl", hash = "sha256:d724ac2f1c240dbd01a2ae98db5d1d9a5e1d9e96eba999d1c48e30062df578a3"},
{file = "ruff-0.14.6-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:9f7539ea257aa4d07b7ce87aed580e485c40143f2473ff2f2b75aee003186004"},
{file = "ruff-0.14.6-py3-none-macosx_11_0_arm64.whl", hash = "sha256:7f6007e55b90a2a7e93083ba48a9f23c3158c433591c33ee2e99a49b889c6332"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a8e7b9d73d8728b68f632aa8e824ef041d068d231d8dbc7808532d3629a6bef"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d50d45d4553a3ebcbd33e7c5e0fe6ca4aafd9a9122492de357205c2c48f00775"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:118548dd121f8a21bfa8ab2c5b80e5b4aed67ead4b7567790962554f38e598ce"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:57256efafbfefcb8748df9d1d766062f62b20150691021f8ab79e2d919f7c11f"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:ff18134841e5c68f8e5df1999a64429a02d5549036b394fafbe410f886e1989d"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:29c4b7ec1e66a105d5c27bd57fa93203637d66a26d10ca9809dc7fc18ec58440"},
{file = "ruff-0.14.6-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:167843a6f78680746d7e226f255d920aeed5e4ad9c03258094a2d49d3028b105"},
{file = "ruff-0.14.6-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:16a33af621c9c523b1ae006b1b99b159bf5ac7e4b1f20b85b2572455018e0821"},
{file = "ruff-0.14.6-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:1432ab6e1ae2dc565a7eea707d3b03a0c234ef401482a6f1621bc1f427c2ff55"},
{file = "ruff-0.14.6-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4c55cfbbe7abb61eb914bfd20683d14cdfb38a6d56c6c66efa55ec6570ee4e71"},
{file = "ruff-0.14.6-py3-none-musllinux_1_2_i686.whl", hash = "sha256:efea3c0f21901a685fff4befda6d61a1bf4cb43de16da87e8226a281d614350b"},
{file = "ruff-0.14.6-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:344d97172576d75dc6afc0e9243376dbe1668559c72de1864439c4fc95f78185"},
{file = "ruff-0.14.6-py3-none-win32.whl", hash = "sha256:00169c0c8b85396516fdd9ce3446c7ca20c2a8f90a77aa945ba6b8f2bfe99e85"},
{file = "ruff-0.14.6-py3-none-win_amd64.whl", hash = "sha256:390e6480c5e3659f8a4c8d6a0373027820419ac14fa0d2713bd8e6c3e125b8b9"},
{file = "ruff-0.14.6-py3-none-win_arm64.whl", hash = "sha256:d43c81fbeae52cfa8728d8766bbf46ee4298c888072105815b392da70ca836b2"},
{file = "ruff-0.14.6.tar.gz", hash = "sha256:6f0c742ca6a7783a736b867a263b9a7a80a45ce9bee391eeda296895f1b4e1cc"},
]
[[package]]
@@ -2548,7 +2728,7 @@ description = "Python client for Sentry (https://sentry.io)"
optional = true
python-versions = ">=3.6"
groups = ["main"]
markers = "extra == \"all\" or extra == \"sentry\""
markers = "extra == \"sentry\" or extra == \"all\""
files = [
{file = "sentry_sdk-2.46.0-py2.py3-none-any.whl", hash = "sha256:4eeeb60198074dff8d066ea153fa6f241fef1668c10900ea53a4200abc8da9b1"},
{file = "sentry_sdk-2.46.0.tar.gz", hash = "sha256:91821a23460725734b7741523021601593f35731808afc0bb2ba46c27b8acd91"},
@@ -2758,7 +2938,7 @@ description = "Tornado IOLoop Backed Concurrent Futures"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "threadloop-1.0.2-py2-none-any.whl", hash = "sha256:5c90dbefab6ffbdba26afb4829d2a9df8275d13ac7dc58dccb0e279992679599"},
{file = "threadloop-1.0.2.tar.gz", hash = "sha256:8b180aac31013de13c2ad5c834819771992d350267bddb854613ae77ef571944"},
@@ -2774,7 +2954,7 @@ description = "Python bindings for the Apache Thrift RPC system"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "thrift-0.16.0.tar.gz", hash = "sha256:2b5b6488fcded21f9d312aa23c9ff6a0195d0f6ae26ddbd5ad9e3e25dfc14408"},
]
@@ -2847,7 +3027,7 @@ description = "Tornado is a Python web framework and asynchronous networking lib
optional = true
python-versions = ">=3.9"
groups = ["main"]
markers = "extra == \"all\" or extra == \"opentracing\""
markers = "extra == \"opentracing\" or extra == \"all\""
files = [
{file = "tornado-6.5-cp39-abi3-macosx_10_9_universal2.whl", hash = "sha256:f81067dad2e4443b015368b24e802d0083fecada4f0a4572fdb72fc06e54a9a6"},
{file = "tornado-6.5-cp39-abi3-macosx_10_9_x86_64.whl", hash = "sha256:9ac1cbe1db860b3cbb251e795c701c41d343f06a96049d6274e7c77559117e41"},
@@ -2981,7 +3161,7 @@ description = "non-blocking redis client for python"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
markers = "extra == \"redis\" or extra == \"all\""
files = [
{file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
{file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
@@ -3192,21 +3372,21 @@ files = [
[[package]]
name = "urllib3"
version = "2.5.0"
version = "2.6.0"
description = "HTTP library with thread-safe connection pooling, file post, and more."
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "urllib3-2.5.0-py3-none-any.whl", hash = "sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc"},
{file = "urllib3-2.5.0.tar.gz", hash = "sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760"},
{file = "urllib3-2.6.0-py3-none-any.whl", hash = "sha256:c90f7a39f716c572c4e3e58509581ebd83f9b59cced005b7db7ad2d22b0db99f"},
{file = "urllib3-2.6.0.tar.gz", hash = "sha256:cb9bcef5a4b345d5da5d145dc3e30834f58e8018828cbc724d30b4cb7d4d49f1"},
]
[package.extras]
brotli = ["brotli (>=1.0.9) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=0.8.0) ; platform_python_implementation != \"CPython\""]
brotli = ["brotli (>=1.2.0) ; platform_python_implementation == \"CPython\"", "brotlicffi (>=1.2.0.0) ; platform_python_implementation != \"CPython\""]
h2 = ["h2 (>=4,<5)"]
socks = ["pysocks (>=1.5.6,!=1.5.7,<2.0)"]
zstd = ["zstandard (>=0.18.0)"]
zstd = ["backports-zstd (>=1.0.0) ; python_version < \"3.14\""]
[[package]]
name = "webencodings"
@@ -3227,7 +3407,7 @@ description = "An XML Schema validator and decoder"
optional = true
python-versions = ">=3.7"
groups = ["main"]
markers = "extra == \"all\" or extra == \"saml2\""
markers = "extra == \"saml2\" or extra == \"all\""
files = [
{file = "xmlschema-2.4.0-py3-none-any.whl", hash = "sha256:dc87be0caaa61f42649899189aab2fd8e0d567f2cf548433ba7b79278d231a4a"},
{file = "xmlschema-2.4.0.tar.gz", hash = "sha256:d74cd0c10866ac609e1ef94a5a69b018ad16e39077bc6393408b40c6babee793"},
@@ -3345,15 +3525,15 @@ docs = ["Sphinx", "repoze.sphinx.autointerface"]
test = ["zope.i18nmessageid", "zope.testing", "zope.testrunner"]
[extras]
all = ["authlib", "hiredis", "jaeger-client", "lxml", "matrix-synapse-ldap3", "opentracing", "psycopg2", "psycopg2cffi", "psycopg2cffi-compat", "pympler", "pysaml2", "sentry-sdk", "txredisapi"]
all = ["authlib", "defusedxml", "hiredis", "jaeger-client", "lxml", "matrix-synapse-ldap3", "opentracing", "psycopg2", "psycopg2cffi", "psycopg2cffi-compat", "pympler", "pysaml2", "pytz", "sentry-sdk", "thrift", "tornado", "txredisapi"]
cache-memory = ["pympler"]
jwt = ["authlib"]
matrix-synapse-ldap3 = ["matrix-synapse-ldap3"]
oidc = ["authlib"]
opentracing = ["jaeger-client", "opentracing"]
opentracing = ["jaeger-client", "opentracing", "thrift", "tornado"]
postgres = ["psycopg2", "psycopg2cffi", "psycopg2cffi-compat"]
redis = ["hiredis", "txredisapi"]
saml2 = ["pysaml2"]
saml2 = ["defusedxml", "pysaml2", "pytz"]
sentry = ["sentry-sdk"]
systemd = ["systemd-python"]
test = ["idna", "parameterized"]
@@ -3362,4 +3542,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = ">=3.10.0,<4.0.0"
content-hash = "98b9062f48205a3bcc99b43ae665083d360a15d4a208927fa978df9c36fd5315"
content-hash = "1caa5072f6304122c89377420f993a54f54587f3618ccc8094ec31642264592c"
+160 -104
@@ -42,7 +42,8 @@ dependencies = [
"Twisted[tls]>=21.2.0",
"treq>=21.5.0",
# Twisted has required pyopenssl 16.0 since about Twisted 16.6.
"pyOpenSSL>=16.0.0",
# pyOpenSSL 16.2.0 fixes compatibility with OpenSSL 1.1.0.
"pyOpenSSL>=16.2.0",
"PyYAML>=5.3",
"pyasn1>=0.1.9",
"pyasn1-modules>=0.0.7",
@@ -95,6 +96,25 @@ dependencies = [
# This is used for parsing multipart responses
"python-multipart>=0.0.9",
# Transitive dependency constraints
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
# We should periodically check to see if these dependencies are still necessary and
# remove any that are no longer required.
"cffi>=1.15", # via cryptography
"pynacl>=1.3", # via signedjson
"pyparsing>=2.4", # via packaging
"pyrsistent>=0.18.0", # via jsonschema
"requests>=2.16.0", # 2.16.0+ no longer vendors urllib3, avoiding Python 3.10+ incompatibility
"urllib3>=1.26.5", # via treq; 1.26.5 fixes Python 3.10+ collections.abc compatibility
# 5.2 is the current version in Debian oldstable. If we don't care to support that, then 5.4 is
# the minimum version from Ubuntu 22.04 and RHEL 9 (as of 2025-12).
# When bumping this version to 6.2 or above, refer to https://github.com/element-hq/synapse/pull/19274
# for details of Synapse improvements that may be unlocked. Particularly around the use of `|`
# syntax with zope interface types.
"zope-interface>=5.2", # via twisted
]
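The comment block above asks for a periodic check of whether these transitive floors are still needed. As a purely illustrative sketch (not part of Synapse's tooling; package names and floors copied from the list above), one way to compare installed versions against the floors:

```python
# Ad-hoc check of installed versions against the transitive-dependency floors
# listed above. Illustrative only; not part of Synapse.
from importlib.metadata import PackageNotFoundError, version

FLOORS = {
    "cffi": "1.15",
    "pynacl": "1.3",
    "pyparsing": "2.4",
    "pyrsistent": "0.18.0",
    "requests": "2.16.0",
    "urllib3": "1.26.5",
    "zope-interface": "5.2",
}

for name, floor in FLOORS.items():
    try:
        print(f"{name}: installed {version(name)} (floor {floor})")
    except PackageNotFoundError:
        print(f"{name}: not installed")
```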
[project.optional-dependencies]
@@ -104,7 +124,16 @@ postgres = [
"psycopg2cffi>=2.8;platform_python_implementation == 'PyPy'",
"psycopg2cffi-compat==1.1;platform_python_implementation == 'PyPy'",
]
saml2 = ["pysaml2>=4.5.0"]
saml2 = [
"pysaml2>=4.5.0",
# Transitive dependencies from pysaml2
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"defusedxml>=0.7.1", # via pysaml2
"pytz>=2018.3", # via pysaml2
]
oidc = ["authlib>=0.15.1"]
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
@@ -112,15 +141,25 @@ oidc = ["authlib>=0.15.1"]
systemd = ["systemd-python>=231"]
url-preview = ["lxml>=4.6.3"]
sentry = ["sentry-sdk>=0.7.2"]
opentracing = ["jaeger-client>=4.2.0", "opentracing>=2.2.0"]
opentracing = [
"jaeger-client>=4.2.0",
"opentracing>=2.2.0",
# Transitive dependencies from jaeger-client
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"thrift>=0.10", # via jaeger-client
"tornado>=6.0", # via jaeger-client
]
jwt = ["authlib"]
# hiredis is not a *strict* dependency, but it makes things much faster.
# (if it is not installed, we fall back to slow code.)
redis = ["txredisapi>=1.4.7", "hiredis"]
redis = ["txredisapi>=1.4.7", "hiredis>=0.3"]
# Required to use experimental `caches.track_memory_usage` config option.
cache-memory = ["pympler"]
cache-memory = ["pympler>=1.0"]
# If this is updated, don't forget to update the equivalent lines in
# tool.poetry.group.dev.dependencies.
# `dependency-groups.dev` below.
test = ["parameterized>=0.9.0", "idna>=3.3"]
# The duplication here is awful.
@@ -149,12 +188,22 @@ all = [
# opentracing
"jaeger-client>=4.2.0", "opentracing>=2.2.0",
# redis
"txredisapi>=1.4.7", "hiredis",
"txredisapi>=1.4.7", "hiredis>=0.3",
# cache-memory
"pympler",
# 1.0 added support for python 3.10, our current minimum supported python version
"pympler>=1.0",
# omitted:
# - test: it's useful to have this separate from dev deps in the olddeps job
# - systemd: this is a system-based requirement
# Transitive dependencies
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"defusedxml>=0.7.1", # via pysaml2
"pytz>=2018.3", # via pysaml2
"thrift>=0.10", # via jaeger-client
"tornado>=6.0", # via jaeger-client
]
[project.urls]
@@ -177,6 +226,85 @@ synapse_port_db = "synapse._scripts.synapse_port_db:main"
synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry]
packages = [{ include = "synapse" }]
[tool.poetry.build]
# Compile our rust module when using `poetry install`. This is still required
# while using `poetry` as the build frontend. Saves the developer from needing
# to run both:
#
# $ poetry install
# $ maturin develop
script = "build_rust.py"
# Create a `setup.py` file which will call the `build` method in our build
# script.
#
# Our build script currently uses the "old" build method, where we define a
# `build` method and `setup.py` calls it. Poetry developers have mentioned that
# this will eventually be removed:
# https://github.com/matrix-org/synapse/pull/14949#issuecomment-1418001859
#
# The new build method is defined here:
# https://python-poetry.org/docs/building-extension-modules/#maturin-build-script
# but is still marked as "unstable" at the time of writing. This would also
# bump our minimum `poetry-core` version to 1.5.0.
#
# We can drop this work-around entirely if we migrate away from Poetry,
# so there's little motivation to update the build script in the meantime.
generate-setup-file = true
# Dependencies used for developing Synapse itself.
#
# Hold off on migrating these to `dev-dependencies` (PEP 735) for now until
# Poetry 2.2.0+, pip 25.1+ are more widely available.
[tool.poetry.group.dev.dependencies]
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.14.6"
# Typechecking
lxml-stubs = ">=0.4.0"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"
types-jsonschema = ">=3.2.0"
types-netaddr = ">=0.8.0.6"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
types-psycopg2 = ">=2.9.9"
types-pyOpenSSL = ">=20.0.7"
types-PyYAML = ">=5.4.10"
types-requests = ">=2.26.0"
types-setuptools = ">=57.4.0"
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
#
# If this is updated, don't forget to update the equivalent lines in
# project.optional-dependencies.test.
parameterized = ">=0.9.0"
idna = ">=3.3"
# The following are used by the release script
click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.59"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
tomli = ">=1.2.3"
# Used for checking the schema delta files
sqlglot = ">=28.0.0"
[tool.towncrier]
package = "synapse"
@@ -260,15 +388,10 @@ select = [
"G",
# pyupgrade
"UP006",
"UP007",
"UP045",
]
extend-safe-fixes = [
# pyupgrade rules compatible with Python >= 3.9
"UP006",
"UP007",
# pyupgrade rules compatible with Python >= 3.10
"UP045",
# Allow ruff to automatically fix trailing spaces within a multi-line string/comment.
"W293"
]
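This hunk trims the pyupgrade rules; per the zope-interface note earlier in this diff, the PEP 604 union rewrites (UP007/UP045) are the problematic ones on older zope-interface releases. For reference, a small sketch of what those rules rewrite (illustrative functions only, not Synapse code):

```python
# Illustration of the rewrites performed by the pyupgrade rules UP007/UP045.
# Plain functions shown here; the problematic case in Synapse is zope
# Interface declarations on older zope-interface releases.
from typing import Optional, Union


def lookup_old(key: str) -> Optional[str]:  # UP045 would rewrite to `str | None`
    return None


def merge_old(a: int, b: str) -> Union[int, str]:  # UP007 would rewrite to `int | str`
    return a or b
```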
@@ -291,88 +414,29 @@ line-ending = "auto"
[tool.maturin]
manifest-path = "rust/Cargo.toml"
module-name = "synapse.synapse_rust"
[tool.poetry]
packages = [
{ include = "synapse" },
python-source = "."
sdist-include = [
"AUTHORS.rst",
"book.toml",
"changelog.d",
"CHANGES.md",
"CONTRIBUTING.md",
"demo",
"docs",
"INSTALL.md",
"mypy.ini",
"scripts-dev",
"synmark",
"sytest-blacklist",
"tests",
"UPGRADE.rst",
"Cargo.toml",
"Cargo.lock",
"rust/Cargo.toml",
"rust/build.rs",
"rust/src/**",
]
include = [
{ path = "AUTHORS.rst", format = "sdist" },
{ path = "book.toml", format = "sdist" },
{ path = "changelog.d", format = "sdist" },
{ path = "CHANGES.md", format = "sdist" },
{ path = "CONTRIBUTING.md", format = "sdist" },
{ path = "demo", format = "sdist" },
{ path = "docs", format = "sdist" },
{ path = "INSTALL.md", format = "sdist" },
{ path = "mypy.ini", format = "sdist" },
{ path = "scripts-dev", format = "sdist" },
{ path = "synmark", format="sdist" },
{ path = "sytest-blacklist", format = "sdist" },
{ path = "tests", format = "sdist" },
{ path = "UPGRADE.rst", format = "sdist" },
{ path = "Cargo.toml", format = "sdist" },
{ path = "Cargo.lock", format = "sdist" },
{ path = "rust/Cargo.toml", format = "sdist" },
{ path = "rust/build.rs", format = "sdist" },
{ path = "rust/src/**", format = "sdist" },
]
exclude = [
{ path = "synapse/*.so", format = "sdist"}
]
[tool.poetry.build]
script = "build_rust.py"
generate-setup-file = true
[tool.poetry.group.dev.dependencies]
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.14.5"
# Typechecking
lxml-stubs = ">=0.4.0"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"
types-jsonschema = ">=3.2.0"
types-netaddr = ">=0.8.0.6"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
types-psycopg2 = ">=2.9.9"
types-pyOpenSSL = ">=20.0.7"
types-PyYAML = ">=5.4.10"
types-requests = ">=2.26.0"
types-setuptools = ">=57.4.0"
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
#
# If this is updated, don't forget to update the equivalent lines in
# project.optional-dependencies.test.
parameterized = ">=0.9.0"
idna = ">=3.3"
# The following are used by the release script
click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.59"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
tomli = ">=1.2.3"
# Used for checking the schema delta files
sqlglot = ">=28.0.0"
sdist-exclude = ["synapse/*.so"]
[build-system]
# The upper bounds here are defensive, intended to prevent situations like
@@ -381,8 +445,8 @@ sqlglot = ">=28.0.0"
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["poetry-core>=2.0.0,<=2.1.3", "setuptools_rust>=1.3,<=1.11.1"]
build-backend = "poetry.core.masonry.api"
requires = ["maturin>=1.0,<2.0"]
build-backend = "maturin"
[tool.cibuildwheel]
@@ -407,9 +471,6 @@ skip = "cp3??t-* *i686* *macosx*"
enable = "pypy"
# We need a rust compiler.
#
# We temporarily pin Rust to 1.82.0 to work around
# https://github.com/element-hq/synapse/issues/17988
before-all = "sh .ci/before_build_wheel.sh"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }
@@ -419,8 +480,3 @@ environment= { PATH = "$PATH:$HOME/.cargo/bin" }
before-build = "rm -rf {project}/build"
build-frontend = "build"
test-command = "python -c 'from synapse.synapse_rust import sum_as_string; print(sum_as_string(1, 2))'"
[tool.cibuildwheel.linux]
# Wrap the repair command to correctly rename the built cpython wheels as ABI3.
repair-wheel-command = "./.ci/scripts/auditwheel_wrapper.py -w {dest_dir} {wheel}"
-1
@@ -14,7 +14,6 @@ import sqlglot.expressions
SCHEMA_FILE_REGEX = re.compile(r"^synapse/storage/schema/(.*)/delta/(.*)/(.*)$")
# The base branch we want to check against. We use the main development branch
# on the assumption that is what we are developing against.
DEVELOP_BRANCH = "develop"
+30 -4
@@ -145,7 +145,7 @@ def request(
print("Requesting %s" % dest, file=sys.stderr)
s = requests.Session()
s.mount("matrix-federation://", MatrixConnectionAdapter())
s.mount("matrix-federation://", MatrixConnectionAdapter(verify_tls=verify_tls))
headers: dict[str, str] = {
"Authorization": authorization_headers[0],
@@ -267,6 +267,17 @@ def read_args_from_config(args: argparse.Namespace) -> None:
class MatrixConnectionAdapter(HTTPAdapter):
"""
A Matrix federation-aware HTTP Adapter.
"""
verify_tls: bool
"""whether to verify the remote server's TLS certificate."""
def __init__(self, verify_tls: bool = True) -> None:
self.verify_tls = verify_tls
super().__init__()
def send(
self,
request: PreparedRequest,
@@ -280,7 +291,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
assert isinstance(request.url, str)
parsed = urlparse.urlsplit(request.url)
server_name = parsed.netloc
well_known = self._get_well_known(parsed.netloc)
well_known = self._get_well_known(parsed.netloc, verify_tls=self.verify_tls)
if well_known:
server_name = well_known
@@ -318,6 +329,21 @@ class MatrixConnectionAdapter(HTTPAdapter):
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)
if proxies:
scheme = parsed.scheme
if isinstance(scheme, bytes):
scheme = scheme.decode("utf-8")
proxy_for_scheme = proxies.get(scheme)
if proxy_for_scheme:
return self.proxy_manager_for(proxy_for_scheme).connection_from_host(
host,
port=port,
scheme="https",
pool_kwargs={"server_hostname": ssl_server_name},
)
return self.poolmanager.connection_from_host(
host,
port=port,
@@ -368,7 +394,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
return server_name, 8448, server_name
@staticmethod
def _get_well_known(server_name: str) -> str | None:
def _get_well_known(server_name: str, verify_tls: bool = True) -> str | None:
if ":" in server_name:
# explicit port, or ipv6 literal. Either way, no .well-known
return None
@@ -379,7 +405,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
print(f"fetching {uri}", file=sys.stderr)
try:
resp = requests.get(uri)
resp = requests.get(uri, verify=verify_tls)
if resp.status_code != 200:
print("%s gave %i" % (uri, resp.status_code), file=sys.stderr)
return None
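The proxy handling added in this file keys its lookup by URL scheme. As a rough, standalone illustration (not Synapse code) of where such a scheme-keyed `proxies` mapping usually comes from, the standard library builds one from the environment:

```python
# Rough illustration: a scheme-keyed proxies mapping read from the
# environment, mirroring the `proxies.get(scheme)` lookup added above.
import os
from urllib.request import getproxies

# proxy.example.invalid is a placeholder, not a real proxy.
os.environ.setdefault("https_proxy", "http://proxy.example.invalid:3128")

proxies = getproxies()  # e.g. {"https": "http://proxy.example.invalid:3128"}
scheme = "https"
proxy_for_scheme = proxies.get(scheme)
if proxy_for_scheme:
    print(f"would route {scheme} requests via {proxy_for_scheme}")
else:
    print("no proxy configured; connecting directly")
```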
+1 -49
@@ -32,7 +32,7 @@ import time
import urllib.request
from os import path
from tempfile import TemporaryDirectory
from typing import Any, Match
from typing import Any
import attr
import click
@@ -968,10 +968,6 @@ def generate_and_write_changelog(
new_changes = new_changes.replace(
"No significant changes.", f"No significant changes since {current_version}."
)
new_changes += build_dependabot_changelog(
repo,
current_version,
)
# Prepend changes to changelog
with open("CHANGES.md", "r+") as f:
@@ -986,49 +982,5 @@ def generate_and_write_changelog(
os.remove(filename)
def build_dependabot_changelog(repo: Repo, current_version: version.Version) -> str:
"""Summarise dependabot commits between `current_version` and `release_branch`.
Returns an empty string if there have been no such commits; otherwise outputs a
third-level markdown header followed by an unordered list."""
last_release_commit = repo.tag("v" + str(current_version)).commit
rev_spec = f"{last_release_commit.hexsha}.."
commits = list(git.objects.Commit.iter_items(repo, rev_spec))
messages = []
for commit in reversed(commits):
if commit.author.name == "dependabot[bot]":
message: str | bytes = commit.message
if isinstance(message, bytes):
message = message.decode("utf-8")
messages.append(message.split("\n", maxsplit=1)[0])
if not messages:
print(f"No dependabot commits in range {rev_spec}", file=sys.stderr)
return ""
messages.sort()
def replacer(match: Match[str]) -> str:
desc = match.group(1)
number = match.group(2)
return f"* {desc}. ([\\#{number}](https://github.com/element-hq/synapse/issues/{number}))"
for i, message in enumerate(messages):
messages[i] = re.sub(r"(.*) \(#(\d+)\)$", replacer, message)
messages.insert(0, "### Updates to locked dependencies\n")
# Add an extra blank line to the bottom of the section
messages.append("")
return "\n".join(messages)
@cli.command()
@click.argument("since")
def test_dependabot_changelog(since: str) -> None:
"""Test building the dependabot changelog.
Summarises all dependabot commits between the SINCE tag and the current git HEAD."""
print(build_dependabot_changelog(git.Repo("."), version.Version(since)))
if __name__ == "__main__":
cli()
+13
@@ -29,6 +29,19 @@ from typing import Final
# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536
# The maximum allowed size of an HTTP request.
# Other than media uploads, the biggest request we expect to see is a fully-loaded
# /federation/v1/send request.
#
# The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are
# limited to 65536 bytes (possibly slightly more if the sender didn't use canonical
# json encoding); there is no specced limit to EDUs (see
# https://github.com/matrix-org/matrix-doc/issues/3121).
#
# in short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M)
#
MAX_REQUEST_SIZE = 200 * MAX_PDU_SIZE
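As a quick sanity check of the arithmetic in the comment above (standalone snippet, not Synapse code):

```python
# Arithmetic check of the request-size limit described above.
MAX_PDU_SIZE = 65536                        # 64 KiB per PDU
MAX_REQUEST_SIZE = 200 * MAX_PDU_SIZE

print(MAX_REQUEST_SIZE)                     # 13107200 bytes
print(MAX_REQUEST_SIZE / (1024 * 1024))     # 12.5 MiB
```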
# Max/min size of ints in canonical JSON
CANONICALJSON_MAX_INT = (2**53) - 1
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
+6
@@ -856,6 +856,12 @@ class HttpResponseException(CodeMessageException):
return ProxiedRequestError(self.code, errmsg, errcode, j)
class HomeServerNotSetupException(Exception):
"""
Raised when an operation is attempted on the HomeServer before setup() has been called.
"""
class ShadowBanError(Exception):
"""
Raised when a shadow-banned user attempts to perform an action.
+88 -44
@@ -36,12 +36,13 @@ from typing import (
Awaitable,
Callable,
NoReturn,
Optional,
cast,
)
from wsgiref.simple_server import WSGIServer
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import ParamSpec
from typing_extensions import ParamSpec, assert_never
import twisted
from twisted.internet import defer, error, reactor as _reactor
@@ -59,12 +60,17 @@ from twisted.python.threadpool import ThreadPool
from twisted.web.resource import Resource
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.api.constants import MAX_REQUEST_SIZE
from synapse.app import check_bind_error
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
from synapse.config.server import (
ListenerConfig,
ManholeConfig,
TCPListenerConfig,
UnixListenerConfig,
)
from synapse.crypto import context_factory
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router
@@ -413,13 +419,44 @@ def listen_unix(
]
class ListenerException(RuntimeError):
"""
An exception raised when we fail to listen with the given `ListenerConfig`.
Attributes:
listener_config: The listener config that caused the exception.
"""
def __init__(
self,
listener_config: ListenerConfig,
):
listener_human_name = ""
port = ""
if isinstance(listener_config, TCPListenerConfig):
listener_human_name = "TCP port"
port = str(listener_config.port)
elif isinstance(listener_config, UnixListenerConfig):
listener_human_name = "unix socket"
port = listener_config.path
else:
assert_never(listener_config)
super().__init__(
"Failed to listen on %s (%s) with the given listener config: %s"
% (listener_human_name, port, listener_config)
)
self.listener_config = listener_config
def listen_http(
hs: "HomeServer",
listener_config: ListenerConfig,
root_resource: Resource,
version_string: str,
max_request_body_size: int,
context_factory: IOpenSSLContextFactory | None,
context_factory: Optional[IOpenSSLContextFactory],
reactor: ISynapseReactor = reactor,
) -> list[Port]:
"""
@@ -447,39 +484,55 @@ def listen_http(
hs=hs,
)
if isinstance(listener_config, TCPListenerConfig):
if listener_config.is_tls():
# refresh_certificate should have been called before this.
assert context_factory is not None
ports = listen_ssl(
listener_config.bind_addresses,
listener_config.port,
site,
context_factory,
reactor=reactor,
try:
if isinstance(listener_config, TCPListenerConfig):
if listener_config.is_tls():
# refresh_certificate should have been called before this.
assert context_factory is not None
ports = listen_ssl(
listener_config.bind_addresses,
listener_config.port,
site,
context_factory,
reactor=reactor,
)
logger.info(
"Synapse now listening on TCP port %d (TLS)", listener_config.port
)
else:
ports = listen_tcp(
listener_config.bind_addresses,
listener_config.port,
site,
reactor=reactor,
)
logger.info(
"Synapse now listening on TCP port %d", listener_config.port
)
elif isinstance(listener_config, UnixListenerConfig):
ports = listen_unix(
listener_config.path, listener_config.mode, site, reactor=reactor
)
# getHost() returns a UNIXAddress which contains an instance variable of 'name'
# encoded as a byte string. Decode as utf-8 so it prints nicely.
logger.info(
"Synapse now listening on TCP port %d (TLS)", listener_config.port
"Synapse now listening on Unix Socket at: %s",
ports[0].getHost().name.decode("utf-8"),
)
else:
ports = listen_tcp(
listener_config.bind_addresses,
listener_config.port,
site,
reactor=reactor,
)
logger.info("Synapse now listening on TCP port %d", listener_config.port)
else:
ports = listen_unix(
listener_config.path, listener_config.mode, site, reactor=reactor
)
# getHost() returns a UNIXAddress which contains an instance variable of 'name'
# encoded as a byte string. Decode as utf-8 so it prints nicely.
logger.info(
"Synapse now listening on Unix Socket at: %s",
ports[0].getHost().name.decode("utf-8"),
)
assert_never(listener_config)
except Exception as exc:
# The Twisted interface says that "Users should not call this function
# themselves!" but this appears to be the correct/only way to handle proper cleanup
# of the site when things go wrong. In the normal case, a `Port` is created
# which we can call `Port.stopListening()` on to do the same thing (but no
# `Port` is created when an error occurs).
#
# We use `site.stopFactory()` instead of `site.doStop()` as the latter assumes
# that `site.doStart()` was called (which won't be the case if an error occurs).
site.stopFactory()
raise ListenerException(listener_config) from exc
return ports
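The `assert_never(listener_config)` calls above give mypy an exhaustiveness check over the listener config types. A minimal standalone sketch of the same pattern, using hypothetical config classes rather than Synapse's real ones:

```python
# Sketch of the assert_never exhaustiveness pattern used above.
# TcpConfig/UnixConfig are hypothetical stand-ins, not Synapse classes.
from dataclasses import dataclass

from typing_extensions import assert_never


@dataclass
class TcpConfig:
    port: int


@dataclass
class UnixConfig:
    path: str


def describe(config: TcpConfig | UnixConfig) -> str:
    if isinstance(config, TcpConfig):
        return f"TCP port {config.port}"
    elif isinstance(config, UnixConfig):
        return f"unix socket {config.path}"
    else:
        # If a new config type is added to the union but not handled above,
        # mypy reports this branch as reachable; at runtime it raises.
        assert_never(config)


print(describe(TcpConfig(port=8448)))
print(describe(UnixConfig(path="/run/synapse.sock")))
```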
@@ -843,17 +896,8 @@ def sdnotify(state: bytes) -> None:
def max_request_body_size(config: HomeServerConfig) -> int:
"""Get a suitable maximum size for incoming HTTP requests"""
# Other than media uploads, the biggest request we expect to see is a fully-loaded
# /federation/v1/send request.
#
# The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are
# limited to 65536 bytes (possibly slightly more if the sender didn't use canonical
# json encoding); there is no specced limit to EDUs (see
# https://github.com/matrix-org/matrix-doc/issues/3121).
#
# in short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M)
#
max_request_size = 200 * MAX_PDU_SIZE
# Baseline default for any request that isn't configured in the homeserver config
max_request_size = MAX_REQUEST_SIZE
# if we have a media repo enabled, we may need to allow larger uploads than that
if config.media.can_load_media_repo:
+2 -2
@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
from typing import Mapping, Sequence
from typing import Mapping, Optional, Sequence
from twisted.internet import defer, task
@@ -291,7 +291,7 @@ def load_config(argv_options: list[str]) -> tuple[HomeServerConfig, argparse.Nam
def create_homeserver(
config: HomeServerConfig,
reactor: ISynapseReactor | None = None,
reactor: Optional[ISynapseReactor] = None,
) -> AdminCmdServer:
"""
Create a homeserver instance for the Synapse admin command process.
+2 -1
@@ -21,6 +21,7 @@
#
import logging
import sys
from typing import Optional
from twisted.web.resource import Resource
@@ -335,7 +336,7 @@ def load_config(argv_options: list[str]) -> HomeServerConfig:
def create_homeserver(
config: HomeServerConfig,
reactor: ISynapseReactor | None = None,
reactor: Optional[ISynapseReactor] = None,
) -> GenericWorkerServer:
"""
Create a homeserver instance for the Synapse worker process.
+2 -2
@@ -22,7 +22,7 @@
import logging
import os
import sys
from typing import Iterable
from typing import Iterable, Optional
from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
@@ -350,7 +350,7 @@ def load_or_generate_config(argv_options: list[str]) -> HomeServerConfig:
def create_homeserver(
config: HomeServerConfig,
reactor: ISynapseReactor | None = None,
reactor: Optional[ISynapseReactor] = None,
) -> SynapseHomeServer:
"""
Create a homeserver instance for the Synapse main process.
+88 -39
@@ -21,6 +21,7 @@
import abc
import logging
from contextlib import ExitStack
from typing import TYPE_CHECKING, Callable, Iterable
import attr
@@ -150,57 +151,81 @@ class Keyring:
"""
def __init__(
self, hs: "HomeServer", key_fetchers: "Iterable[KeyFetcher] | None" = None
self,
hs: "HomeServer",
test_only_key_fetchers: "list[KeyFetcher] | None" = None,
):
self.server_name = hs.hostname
"""
Args:
hs: The HomeServer instance
test_only_key_fetchers: Dependency injection for tests only. If provided,
these key fetchers will be used instead of the default ones.
"""
# Clean-up to avoid partial initialization leaving behind references.
with ExitStack() as exit:
self.server_name = hs.hostname
if key_fetchers is None:
# Always fetch keys from the database.
mutable_key_fetchers: list[KeyFetcher] = [StoreKeyFetcher(hs)]
# Fetch keys from configured trusted key servers, if any exist.
key_servers = hs.config.key.key_servers
if key_servers:
mutable_key_fetchers.append(PerspectivesKeyFetcher(hs))
# Finally, fetch keys from the origin server directly.
mutable_key_fetchers.append(ServerKeyFetcher(hs))
self._key_fetchers: list[KeyFetcher] = []
if test_only_key_fetchers is None:
# Always fetch keys from the database.
store_key_fetcher = StoreKeyFetcher(hs)
exit.callback(store_key_fetcher.shutdown)
self._key_fetchers.append(store_key_fetcher)
self._key_fetchers: Iterable[KeyFetcher] = tuple(mutable_key_fetchers)
else:
self._key_fetchers = key_fetchers
# Fetch keys from configured trusted key servers, if any exist.
key_servers = hs.config.key.key_servers
if key_servers:
perspectives_key_fetcher = PerspectivesKeyFetcher(hs)
exit.callback(perspectives_key_fetcher.shutdown)
self._key_fetchers.append(perspectives_key_fetcher)
self._fetch_keys_queue: BatchingQueue[
_FetchKeyRequest, dict[str, dict[str, FetchKeyResult]]
] = BatchingQueue(
name="keyring_server",
hs=hs,
clock=hs.get_clock(),
# The method called to fetch each key
process_batch_callback=self._inner_fetch_key_requests,
)
# Finally, fetch keys from the origin server directly.
server_key_fetcher = ServerKeyFetcher(hs)
exit.callback(server_key_fetcher.shutdown)
self._key_fetchers.append(server_key_fetcher)
else:
self._key_fetchers = test_only_key_fetchers
self._is_mine_server_name = hs.is_mine_server_name
self._fetch_keys_queue: BatchingQueue[
_FetchKeyRequest, dict[str, dict[str, FetchKeyResult]]
] = BatchingQueue(
name="keyring_server",
hs=hs,
clock=hs.get_clock(),
# The method called to fetch each key
process_batch_callback=self._inner_fetch_key_requests,
)
exit.callback(self._fetch_keys_queue.shutdown)
# build a FetchKeyResult for each of our own keys, to shortcircuit the
# fetcher.
self._local_verify_keys: dict[str, FetchKeyResult] = {}
for key_id, key in hs.config.key.old_signing_keys.items():
self._local_verify_keys[key_id] = FetchKeyResult(
verify_key=key, valid_until_ts=key.expired
self._is_mine_server_name = hs.is_mine_server_name
# build a FetchKeyResult for each of our own keys, to shortcircuit the
# fetcher.
self._local_verify_keys: dict[str, FetchKeyResult] = {}
for key_id, key in hs.config.key.old_signing_keys.items():
self._local_verify_keys[key_id] = FetchKeyResult(
verify_key=key, valid_until_ts=key.expired
)
vk = get_verify_key(hs.signing_key)
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
verify_key=vk,
valid_until_ts=2**63, # fake future timestamp
)
vk = get_verify_key(hs.signing_key)
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
verify_key=vk,
valid_until_ts=2**63, # fake future timestamp
)
# We reached the end of the block which means everything was successful, so
# no exit handlers are needed (remove them all).
exit.pop_all()
def shutdown(self) -> None:
"""
Prepares the Keyring for garbage collection by shutting down its queues.
"""
self._fetch_keys_queue.shutdown()
for key_fetcher in self._key_fetchers:
key_fetcher.shutdown()
self._key_fetchers.clear()
async def verify_json_for_server(
self,
@@ -521,9 +546,21 @@ class StoreKeyFetcher(KeyFetcher):
"""KeyFetcher impl which fetches keys from our data store"""
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
# Clean-up to avoid partial initialization leaving behind references.
with ExitStack() as exit:
super().__init__(hs)
# `KeyFetcher` keeps a reference to `hs` which we need to clean up if
# something goes wrong so we can cleanly shutdown the homeserver.
exit.callback(super().shutdown)
self.store = hs.get_datastores().main
# An error can be raised here if someone tried to create a `StoreKeyFetcher`
# before the homeserver is fully set up (`HomeServerNotSetupException:
# HomeServer.setup must be called before getting datastores`).
self.store = hs.get_datastores().main
# We reached the end of the block which means everything was successful, so
# no exit handlers are needed (remove them all).
exit.pop_all()
async def _fetch_keys(
self, keys_to_fetch: list[_FetchKeyRequest]
@@ -543,9 +580,21 @@ class StoreKeyFetcher(KeyFetcher):
class BaseV2KeyFetcher(KeyFetcher):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
# Clean-up to avoid partial initialization leaving behind references.
with ExitStack() as exit:
super().__init__(hs)
# `KeyFetcher` keeps a reference to `hs` which we need to clean up if
# something goes wrong so we can cleanly shutdown the homeserver.
exit.callback(super().shutdown)
self.store = hs.get_datastores().main
# An error can be raised here if someone tried to create a `BaseV2KeyFetcher`
# before the homeserver is fully set up (`HomeServerNotSetupException:
# HomeServer.setup must be called before getting datastores`).
self.store = hs.get_datastores().main
# We reached the end of the block which means everything was successful, so
# no exit handlers are needed (remove them all).
exit.pop_all()
async def process_v2_response(
self, from_server: str, response_json: JsonDict, time_added_ms: int
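For illustration, a minimal sketch of the `ExitStack` pattern used by the constructors in this file, assuming only the standard library: cleanup callbacks are registered as each sub-component is created, and `pop_all()` discards them once construction succeeds, so the shutdowns only run if the constructor fails part-way through.

```python
from contextlib import ExitStack


class Component:
    """Stand-in for a KeyFetcher-like object that needs an explicit shutdown."""

    def __init__(self, name: str) -> None:
        self.name = name

    def shutdown(self) -> None:
        print(f"shutting down {self.name}")


class Owner:
    def __init__(self, fail: bool = False) -> None:
        with ExitStack() as exit:
            self.first = Component("first")
            exit.callback(self.first.shutdown)

            if fail:
                raise RuntimeError("constructor failed part-way through")

            self.second = Component("second")
            exit.callback(self.second.shutdown)

            # Construction succeeded: drop the cleanup callbacks so nothing is
            # shut down on the way out of the `with` block.
            exit.pop_all()


Owner()               # no output: the callbacks were popped
try:
    Owner(fail=True)  # prints "shutting down first" before re-raising
except RuntimeError:
    pass
```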
+1 -1
View File
@@ -1149,7 +1149,7 @@ class _StateParser(ByteParser[StateRequestResponse]):
CONTENT_TYPE = "application/json"
# As with /send_join, /state responses can be huge.
MAX_RESPONSE_SIZE = 500 * 1024 * 1024
MAX_RESPONSE_SIZE = 600 * 1024 * 1024
def __init__(self, room_version: RoomVersion):
self._response = StateRequestResponse([], [])
+2 -2
View File
@@ -13,7 +13,7 @@
#
import logging
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
from twisted.internet.interfaces import IDelayedCall
@@ -74,7 +74,7 @@ class DelayedEventsHandler:
cfg=self._config.ratelimiting.rc_delayed_event_mgmt,
)
self._next_delayed_event_call: IDelayedCall | None = None
self._next_delayed_event_call: Optional[IDelayedCall] = None
# The current position in the current_state_delta stream
self._event_pos: int | None = None
+2 -2
View File
@@ -22,7 +22,7 @@
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING, Any, Mapping, Sequence
from typing import TYPE_CHECKING, Any, Mapping, Optional, Sequence
from canonicaljson import encode_canonical_json
@@ -111,7 +111,7 @@ class MessageHandler:
# The scheduled call to self._expire_event. None if no call is currently
# scheduled.
self._scheduled_expiry: IDelayedCall | None = None
self._scheduled_expiry: Optional[IDelayedCall] = None
if not hs.config.worker.worker_app:
self.hs.run_as_background_process(
+15 -1
View File
@@ -652,13 +652,27 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if requester.app_service:
as_id = requester.app_service.id
then = self.clock.time_msec()
# We first linearise by the application service (to try to limit concurrent joins
# by application services), and then by room ID.
async with self.member_as_limiter.queue(as_id):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took too long to process")
async with self.member_linearizer.queue(key):
async with self._worker_lock_handler.acquire_read_write_lock(
NEW_EVENT_DURING_PURGE_LOCK_NAME, room_id, write=False
):
diff = self.clock.time_msec() - then
if diff > 80 * 1000:
# haproxy would have timed the request out anyway...
raise SynapseError(504, "took too long to process")
with opentracing.start_active_span("update_membership_locked"):
result = await self.update_membership_locked(
requester,
@@ -874,7 +888,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
if target_id == self._server_notices_mxid:
raise SynapseError(HTTPStatus.FORBIDDEN, "Cannot invite this user")
block_invite_result = None
block_invite_result: tuple[Codes, dict] | None = None
if (
self._server_notices_mxid is not None
+327 -65
View File
@@ -17,6 +17,7 @@ import logging
from itertools import chain
from typing import TYPE_CHECKING, AbstractSet, Mapping
import attr
from prometheus_client import Histogram
from typing_extensions import assert_never
@@ -62,6 +63,7 @@ from synapse.types.handlers.sliding_sync import (
HaveSentRoomFlag,
MutablePerConnectionState,
PerConnectionState,
RoomLazyMembershipChanges,
RoomSyncConfig,
SlidingSyncConfig,
SlidingSyncResult,
@@ -106,7 +108,7 @@ class SlidingSyncHandler:
self.rooms_to_exclude_globally = hs.config.server.rooms_to_exclude_from_sync
self.is_mine_id = hs.is_mine_id
self.connection_store = SlidingSyncConnectionStore(self.store)
self.connection_store = SlidingSyncConnectionStore(self.clock, self.store)
self.extensions = SlidingSyncExtensionHandler(hs)
self.room_lists = SlidingSyncRoomLists(hs)
@@ -981,14 +983,15 @@ class SlidingSyncHandler:
#
# Calculate the `StateFilter` based on the `required_state` for the room
required_state_filter = StateFilter.none()
# The requested `required_state_map` with the lazy membership expanded and
# `$ME` replaced with the user's ID. This allows us to see what membership we've
# sent down to the client in the next request.
#
# Make a copy so we can modify it. Still need to be careful to make a copy of
# the state key sets if we want to add/remove from them. We could make a deep
# copy but this saves us some work.
expanded_required_state_map = dict(room_sync_config.required_state_map)
# Keep track of which users' state we may need to fetch. We split this
# into explicit users and lazy loaded users.
explicit_user_state = set()
lazy_load_user_ids = set()
# Whether lazy-loading of room members is enabled.
lazy_load_room_members = False
if room_membership_for_user_at_to_token.membership not in (
Membership.INVITE,
Membership.KNOCK,
@@ -1036,7 +1039,6 @@ class SlidingSyncHandler:
else:
required_state_types: list[tuple[str, str | None]] = []
num_wild_state_keys = 0
lazy_load_room_members = False
num_others = 0
for (
state_type,
@@ -1068,43 +1070,60 @@ class SlidingSyncHandler:
timeline_event.state_key
)
# The client needs to know the membership of everyone in
# the timeline we're returning.
lazy_load_user_ids.update(timeline_membership)
# Update the required state filter so we pick up the new
# membership
for user_id in timeline_membership:
required_state_types.append(
(EventTypes.Member, user_id)
)
if limited or initial:
# If the timeline is limited, we only need to
# return the membership changes for people in
# the timeline.
for user_id in timeline_membership:
required_state_types.append(
(EventTypes.Member, user_id)
)
else:
# For non-limited timelines we always return all
# membership changes. This is so that clients
# who have fetched the full membership list
# already can continue to maintain it for
# non-limited syncs.
#
# This assumes that for non-limited syncs there
# won't be many membership changes that wouldn't
# have been included already (this can only
# happen if membership state was rolled back due
# to state resolution anyway).
#
# `None` is a wildcard in the `StateFilter`
required_state_types.append((EventTypes.Member, None))
# Add an explicit entry for each user in the timeline
#
# Make a new set or copy of the state key set so we can
# modify it without affecting the original
# `required_state_map`
expanded_required_state_map[EventTypes.Member] = (
expanded_required_state_map.get(
EventTypes.Member, set()
# Record the extra members we're returning.
lazy_load_user_ids.update(
state_key
for event_type, state_key in room_state_delta_id_map
if event_type == EventTypes.Member
)
| timeline_membership
)
elif state_key == StateValues.ME:
else:
num_others += 1
required_state_types.append((state_type, user.to_string()))
# Replace `$ME` with the user's ID so we can deduplicate
# when someone requests the same state with `$ME` or with
# their user ID.
#
# Make a new set or copy of the state key set so we can
# modify it without affecting the original
# `required_state_map`
expanded_required_state_map[EventTypes.Member] = (
expanded_required_state_map.get(
EventTypes.Member, set()
)
| {user.to_string()}
normalized_state_key = state_key
if state_key == StateValues.ME:
normalized_state_key = user.to_string()
if state_type == EventTypes.Member:
# Also track explicitly requested member state for
# lazy membership tracking.
explicit_user_state.add(normalized_state_key)
required_state_types.append(
(state_type, normalized_state_key)
)
else:
num_others += 1
required_state_types.append((state_type, state_key))
set_tag(
SynapseTags.FUNC_ARG_PREFIX
@@ -1122,6 +1141,10 @@ class SlidingSyncHandler:
required_state_filter = StateFilter.from_types(required_state_types)
# Remove any explicitly requested user state from the lazy-loaded set,
# as we track them separately.
lazy_load_user_ids -= explicit_user_state
# We need this base set of info for the response so let's just fetch it along
# with the `required_state` for the room
hero_room_state = [
@@ -1149,6 +1172,22 @@ class SlidingSyncHandler:
# We can return all of the state that was requested if this was the first
# time we've sent the room down this connection.
room_state: StateMap[EventBase] = {}
# Includes the state for the heroes if we need them (may contain other
# state as well).
hero_membership_state: StateMap[EventBase] = {}
# By default, we mark all `lazy_load_user_ids` as being sent down
# for the first time in this sync. We later check if we sent any of them
# down previously and update `returned_user_id_to_last_seen_ts_map` if
# we have.
returned_user_id_to_last_seen_ts_map = {}
if lazy_load_room_members:
returned_user_id_to_last_seen_ts_map = dict.fromkeys(lazy_load_user_ids)
new_connection_state.room_lazy_membership[room_id] = RoomLazyMembershipChanges(
returned_user_id_to_last_seen_ts_map=returned_user_id_to_last_seen_ts_map
)
if initial:
room_state = await self.get_current_state_at(
room_id=room_id,
@@ -1156,28 +1195,97 @@ class SlidingSyncHandler:
state_filter=state_filter,
to_token=to_token,
)
# The `room_state` includes the hero membership state if needed.
# We'll later filter this down so we don't need to do so here.
hero_membership_state = room_state
else:
assert from_token is not None
assert from_bound is not None
if prev_room_sync_config is not None:
# Define `all_required_user_state` as all user state we want, which
# is the explicitly requested members, any needed for lazy
# loading, and users whose membership has changed.
all_required_user_state = explicit_user_state | lazy_load_user_ids
for state_type, state_key in room_state_delta_id_map:
if state_type == EventTypes.Member:
all_required_user_state.add(state_key)
# We need to know what user state we previously sent down the
# connection so we can determine what has changed.
#
# We need to fetch all users whose memberships we may want
# to send down this sync. This includes (and matches
# `all_required_user_state`):
# 1. Explicitly requested user state
# 2. Lazy loaded members, i.e. users who appear in the
# timeline.
# 3. The users whose membership has changed in the room, i.e.
# in the state deltas.
#
# This is to correctly handle the cases where a user was
# previously sent down as a lazy loaded member:
# - and is now explicitly requested (so shouldn't be sent down
# again); or
# - their membership has changed (so we need to invalidate
# their entry in the lazy loaded table if we don't send the
# change down).
if all_required_user_state:
previously_returned_user_to_last_seen = (
await self.store.get_sliding_sync_connection_lazy_members(
connection_position=from_token.connection_position,
room_id=room_id,
user_ids=all_required_user_state,
)
)
# Update the room lazy membership changes to track which
# lazy loaded members were needed for this sync. This is so
# that we can correctly track the last time we sent down
# users' membership (and so can evict old membership state
# from the DB tables).
returned_user_id_to_last_seen_ts_map.update(
(user_id, timestamp)
for user_id, timestamp in previously_returned_user_to_last_seen.items()
if user_id in lazy_load_user_ids
)
else:
previously_returned_user_to_last_seen = {}
# Check if there are any changes to the required state config
# that we need to handle.
changed_required_state_map, added_state_filter = (
_required_state_changes(
user.to_string(),
prev_required_state_map=prev_room_sync_config.required_state_map,
request_required_state_map=expanded_required_state_map,
state_deltas=room_state_delta_id_map,
)
changes_return = _required_state_changes(
user.to_string(),
prev_required_state_map=prev_room_sync_config.required_state_map,
request_required_state_map=room_sync_config.required_state_map,
previously_returned_lazy_user_ids=previously_returned_user_to_last_seen.keys(),
request_lazy_load_user_ids=lazy_load_user_ids,
state_deltas=room_state_delta_id_map,
)
changed_required_state_map = changes_return.changed_required_state_map
if added_state_filter:
new_connection_state.room_lazy_membership[
room_id
].invalidated_user_ids = changes_return.lazy_members_invalidated
# Add any previously returned explicit memberships to the lazy
# loaded table. This happens when a client requested explicit
# members and then converted them to lazy loading.
for user_id in changes_return.extra_users_to_add_to_lazy_cache:
# We don't know the right timestamp to use here, as we don't
# know the last time we would have sent the membership down.
# So we don't overwrite it if we have a timestamp already,
# and fall back to `None` (which means now) if we don't.
returned_user_id_to_last_seen_ts_map.setdefault(user_id, None)
if changes_return.added_state_filter:
# Some state entries got added, so we pull out the current
# state for them. If we don't do this we'd only send down new deltas.
state_ids = await self.get_current_state_ids_at(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,
state_filter=added_state_filter,
state_filter=changes_return.added_state_filter,
to_token=to_token,
)
room_state_delta_id_map.update(state_ids)
@@ -1189,6 +1297,7 @@ class SlidingSyncHandler:
# If the membership changed and we have to get heroes, get the remaining
# heroes from the state
hero_membership_state = {}
if hero_user_ids:
hero_membership_state = await self.get_current_state_at(
room_id=room_id,
@@ -1196,7 +1305,6 @@ class SlidingSyncHandler:
state_filter=StateFilter.from_types(hero_room_state),
to_token=to_token,
)
room_state.update(hero_membership_state)
required_room_state: StateMap[EventBase] = {}
if required_state_filter != StateFilter.none():
@@ -1219,7 +1327,7 @@ class SlidingSyncHandler:
# Assemble heroes: extract the info from the state we just fetched
heroes: list[SlidingSyncResult.RoomResult.StrippedHero] = []
for hero_user_id in hero_user_ids:
member_event = room_state.get((EventTypes.Member, hero_user_id))
member_event = hero_membership_state.get((EventTypes.Member, hero_user_id))
if member_event is not None:
heroes.append(
SlidingSyncResult.RoomResult.StrippedHero(
@@ -1281,7 +1389,7 @@ class SlidingSyncHandler:
bump_stamp = 0
room_sync_required_state_map_to_persist: Mapping[str, AbstractSet[str]] = (
expanded_required_state_map
room_sync_config.required_state_map
)
if changed_required_state_map:
room_sync_required_state_map_to_persist = changed_required_state_map
@@ -1471,13 +1579,37 @@ class SlidingSyncHandler:
return None
@attr.s(auto_attribs=True)
class _RequiredStateChangesReturn:
"""Return type for _required_state_changes."""
changed_required_state_map: Mapping[str, AbstractSet[str]] | None
"""The updated required state map to store in the room config, or None if
there is no change."""
added_state_filter: StateFilter
"""The state filter to use to fetch any additional current state that needs
to be returned to the client."""
extra_users_to_add_to_lazy_cache: AbstractSet[str] = frozenset()
"""The set of user IDs we should add to the lazy members cache that we had
previously returned. Handles the case where a user was previously sent down
explicitly but is now being lazy loaded."""
lazy_members_invalidated: AbstractSet[str] = frozenset()
"""The set of user IDs whose membership has changed but we didn't send down,
so we need to invalidate them from the cache."""
def _required_state_changes(
user_id: str,
*,
prev_required_state_map: Mapping[str, AbstractSet[str]],
request_required_state_map: Mapping[str, AbstractSet[str]],
previously_returned_lazy_user_ids: AbstractSet[str],
request_lazy_load_user_ids: AbstractSet[str],
state_deltas: StateMap[str],
) -> tuple[Mapping[str, AbstractSet[str]] | None, StateFilter]:
) -> _RequiredStateChangesReturn:
"""Calculates the changes between the required state room config from the
previous requests compared with the current request.
@@ -1491,14 +1623,62 @@ def _required_state_changes(
added, removed and then added again to the required state. In that case we
only want to re-send that entry down sync if it has changed.
Returns:
A 2-tuple of updated required state config (or None if there is no update)
and the state filter to use to fetch extra current state that we need to
return.
Args:
user_id: The user ID of the user making the request.
prev_required_state_map: The required state map from the previous
request.
request_required_state_map: The required state map from the current
request.
previously_returned_lazy_user_ids: The set of user IDs whose membership
we have previously returned to the client due to lazy loading. This
is filtered to only include users who have either sent events in the
`timeline`, `required_state` or whose membership changed.
request_lazy_load_user_ids: The set of user IDs whose lazy-loaded
membership is required for this request.
state_deltas: The state deltas in the room in the request token range,
considering user membership. See `get_current_state_deltas_for_room`
for more details.
"""
# First we find any lazy members that have been invalidated due to state
# changes that we are not sending down.
lazy_members_invalidated = set()
for event_type, state_key in state_deltas:
if event_type != EventTypes.Member:
continue
if state_key in request_lazy_load_user_ids:
# Because it's part of the `request_lazy_load_user_ids`, we're going to
# send this member change down.
continue
if state_key not in previously_returned_lazy_user_ids:
# We've not previously returned this member so nothing to
# invalidate.
continue
lazy_members_invalidated.add(state_key)
if prev_required_state_map == request_required_state_map:
# There has been no change. Return immediately.
return None, StateFilter.none()
# There has been no change in state, just need to check lazy members.
newly_returned_lazy_members = (
request_lazy_load_user_ids - previously_returned_lazy_user_ids
)
if newly_returned_lazy_members:
# There are some new lazy members we need to fetch.
added_types: list[tuple[str, str | None]] = []
for new_user_id in newly_returned_lazy_members:
added_types.append((EventTypes.Member, new_user_id))
added_state_filter = StateFilter.from_types(added_types)
else:
added_state_filter = StateFilter.none()
return _RequiredStateChangesReturn(
changed_required_state_map=None,
added_state_filter=added_state_filter,
lazy_members_invalidated=lazy_members_invalidated,
)
prev_wildcard = prev_required_state_map.get(StateValues.WILDCARD, set())
request_wildcard = request_required_state_map.get(StateValues.WILDCARD, set())
@@ -1508,17 +1688,29 @@ def _required_state_changes(
# already fetching everything, we don't have to fetch anything now that they've
# narrowed.
if StateValues.WILDCARD in prev_wildcard:
return request_required_state_map, StateFilter.none()
return _RequiredStateChangesReturn(
changed_required_state_map=request_required_state_map,
added_state_filter=StateFilter.none(),
lazy_members_invalidated=lazy_members_invalidated,
)
# If an event type wildcard has been added or removed we don't try to do
# anything fancy, and instead always update the effective room required
# state config to match the request.
if request_wildcard - prev_wildcard:
# Some keys were added, so we need to fetch everything
return request_required_state_map, StateFilter.all()
return _RequiredStateChangesReturn(
changed_required_state_map=request_required_state_map,
added_state_filter=StateFilter.all(),
lazy_members_invalidated=lazy_members_invalidated,
)
if prev_wildcard - request_wildcard:
# Keys were only removed, so we don't have to fetch everything.
return request_required_state_map, StateFilter.none()
return _RequiredStateChangesReturn(
changed_required_state_map=request_required_state_map,
added_state_filter=StateFilter.none(),
lazy_members_invalidated=lazy_members_invalidated,
)
# Contains updates to the required state map compared with the previous room
# config. This has the same format as `RoomSyncConfig.required_state`
@@ -1550,6 +1742,17 @@ def _required_state_changes(
# Nothing *added*, so we skip. Removals happen below.
continue
# Handle the special case of adding `$LAZY` membership, where we always
# record the change to lazy loading: we immediately start using the lazy
# loading tables, so there is no point in *not* recording the change in the
# effective room config.
if event_type == EventTypes.Member:
old_state_key_lazy = StateValues.LAZY in old_state_keys
request_state_key_lazy = StateValues.LAZY in request_state_keys
if not old_state_key_lazy and request_state_key_lazy:
changes[event_type] = request_state_keys
continue
# We only remove state keys from the effective state if they've been
# removed from the request *and* the state has changed. This ensures
# that if a client removes and then re-adds a state key, we only send
@@ -1620,9 +1823,31 @@ def _required_state_changes(
# LAZY values should also be ignored for event types that are
# not membership.
pass
elif event_type == EventTypes.Member:
if state_key not in previously_returned_lazy_user_ids:
# Only add *explicit* members we haven't previously sent
# down.
added.append((event_type, state_key))
else:
added.append((event_type, state_key))
previously_required_state_members = set(
prev_required_state_map.get(EventTypes.Member, ())
)
if StateValues.ME in previously_required_state_members:
previously_required_state_members.add(user_id)
# We also need to pull out any lazy members that are now required but
# haven't previously been returned.
for required_user_id in (
request_lazy_load_user_ids
# Remove previously returned users
- previously_returned_lazy_user_ids
# Exclude previously explicitly requested members.
- previously_required_state_members
):
added.append((EventTypes.Member, required_user_id))
added_state_filter = StateFilter.from_types(added)
# Figure out what changes we need to apply to the effective required state
@@ -1663,13 +1888,25 @@ def _required_state_changes(
changes[event_type] = request_state_keys
continue
# When handling $LAZY membership, we want to either a) not update the
# state or b) update it to match the request. This is to avoid churn of
# the effective required state for rooms (we deduplicate required state
# between rooms), and because we can store the previously returned
# explicit memberships with the lazy loaded memberships.
if event_type == EventTypes.Member:
old_state_key_lazy = StateValues.LAZY in old_state_keys
request_state_key_lazy = StateValues.LAZY in request_state_keys
has_lazy = old_state_key_lazy or request_state_key_lazy
# If a "$LAZY" has been added or removed we always update to match
# the request.
if old_state_key_lazy != request_state_key_lazy:
# If a "$LAZY" has been added or removed we always update the effective room
# required state config to match the request.
changes[event_type] = request_state_keys
continue
# Or if we have lazy membership and there are invalidated
# explicit memberships.
if has_lazy and invalidated_state_keys:
changes[event_type] = request_state_keys
continue
@@ -1684,6 +1921,28 @@ def _required_state_changes(
if invalidated_state_keys:
changes[event_type] = old_state_keys - invalidated_state_keys
# Check for any explicitly requested memberships that were removed from the
# request and can be added to the lazy members previously returned. This is
# so that we don't return a user due to lazy loading if they were previously
# returned as an explicit membership.
users_to_add_to_lazy_cache: set[str] = set()
membership_changes = changes.get(EventTypes.Member, set())
if membership_changes and StateValues.LAZY in request_state_keys:
for state_key in prev_required_state_map.get(EventTypes.Member, set()):
if state_key == StateValues.WILDCARD or state_key == StateValues.LAZY:
# Ignore non-user IDs.
continue
if state_key == StateValues.ME:
# Normalize to proper user ID
state_key = user_id
# We remember the user if they haven't been invalidated
if (EventTypes.Member, state_key) not in state_deltas:
users_to_add_to_lazy_cache.add(state_key)
new_required_state_map = None
if changes:
# Update the required state config based on the changes.
new_required_state_map = dict(prev_required_state_map)
@@ -1694,6 +1953,9 @@ def _required_state_changes(
# Remove entries with empty state keys.
new_required_state_map.pop(event_type, None)
return new_required_state_map, added_state_filter
else:
return None, added_state_filter
return _RequiredStateChangesReturn(
changed_required_state_map=new_required_state_map,
added_state_filter=added_state_filter,
lazy_members_invalidated=lazy_members_invalidated,
extra_users_to_add_to_lazy_cache=users_to_add_to_lazy_cache,
)
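As a standalone illustration of the invalidation pass at the top of `_required_state_changes` (toy values, not real handler state): a membership delta is only invalidated when we previously returned that user lazily and are not about to send the new change down.

```python
# Toy example of the lazy-member invalidation rule; identifiers are made up.
state_deltas = {
    ("m.room.member", "@a:example.org"): "$event1",
    ("m.room.member", "@b:example.org"): "$event2",
}
request_lazy_load_user_ids = {"@a:example.org"}  # @a appears in this sync's timeline
previously_returned_lazy_user_ids = {"@a:example.org", "@b:example.org"}

lazy_members_invalidated = {
    state_key
    for (event_type, state_key) in state_deltas
    if event_type == "m.room.member"
    and state_key not in request_lazy_load_user_ids
    and state_key in previously_returned_lazy_user_ids
}
# @b's membership changed but won't be sent down, so evict it from the cache.
assert lazy_members_invalidated == {"@b:example.org"}
```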
+4 -6
View File
@@ -13,7 +13,6 @@
#
import logging
from typing import TYPE_CHECKING
import attr
@@ -25,9 +24,7 @@ from synapse.types.handlers.sliding_sync import (
PerConnectionState,
SlidingSyncConfig,
)
if TYPE_CHECKING:
pass
from synapse.util.clock import Clock
logger = logging.getLogger(__name__)
@@ -61,7 +58,8 @@ class SlidingSyncConnectionStore:
to mapping of room ID to `HaveSentRoom`.
"""
store: "DataStore"
clock: Clock
store: DataStore
async def get_and_clear_connection_positions(
self,
@@ -101,7 +99,7 @@ class SlidingSyncConnectionStore:
If there are no changes to the state this may return the same token as
the existing per-connection state.
"""
if not new_connection_state.has_updates():
if not new_connection_state.has_updates(self.clock):
if from_token is not None:
return from_token.connection_position
else:
+2 -2
View File
@@ -21,7 +21,7 @@
import logging
from http import HTTPStatus
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
from twisted.internet.interfaces import IDelayedCall
@@ -125,7 +125,7 @@ class UserDirectoryHandler(StateDeltasHandler):
# Guard to ensure we only have one process for refreshing remote profiles
self._is_refreshing_remote_profiles = False
# Handle to cancel the `call_later` of `kick_off_remote_profile_refresh_process`
self._refresh_remote_profiles_call_later: IDelayedCall | None = None
self._refresh_remote_profiles_call_later: Optional[IDelayedCall] = None
# Guard to ensure we only have one process for refreshing remote profiles
# for the given servers.
+5 -4
View File
@@ -28,6 +28,7 @@ from typing import (
BinaryIO,
Callable,
Mapping,
Optional,
Protocol,
)
@@ -313,7 +314,7 @@ class BlocklistingAgentWrapper(Agent):
method: bytes,
uri: bytes,
headers: Headers | None = None,
bodyProducer: IBodyProducer | None = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> defer.Deferred:
h = urllib.parse.urlparse(uri.decode("ascii"))
@@ -1033,7 +1034,7 @@ class BodyExceededMaxSize(Exception):
class _DiscardBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which immediately errors upon receiving data."""
transport: ITCPTransport | None = None
transport: Optional[ITCPTransport] = None
def __init__(self, deferred: defer.Deferred):
self.deferred = deferred
@@ -1075,7 +1076,7 @@ class _MultipartParserProtocol(protocol.Protocol):
Protocol to read and parse a MSC3916 multipart/mixed response
"""
transport: ITCPTransport | None = None
transport: Optional[ITCPTransport] = None
def __init__(
self,
@@ -1188,7 +1189,7 @@ class _MultipartParserProtocol(protocol.Protocol):
class _ReadBodyWithMaxSizeProtocol(protocol.Protocol):
"""A protocol which reads body to a stream, erroring if the body exceeds a maximum size."""
transport: ITCPTransport | None = None
transport: Optional[ITCPTransport] = None
def __init__(
self, stream: ByteWriteable, deferred: defer.Deferred, max_size: int | None
@@ -19,7 +19,7 @@
#
import logging
import urllib.parse
from typing import Any, Generator
from typing import Any, Generator, Optional
from urllib.request import ( # type: ignore[attr-defined]
proxy_bypass_environment,
)
@@ -173,7 +173,7 @@ class MatrixFederationAgent:
method: bytes,
uri: bytes,
headers: Headers | None = None,
bodyProducer: IBodyProducer | None = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> Generator[defer.Deferred, Any, IResponse]:
"""
Args:
+2 -1
View File
@@ -33,6 +33,7 @@ from typing import (
Callable,
Generic,
Literal,
Optional,
TextIO,
TypeVar,
cast,
@@ -691,7 +692,7 @@ class MatrixFederationHttpClient:
destination_bytes, method_bytes, url_to_sign_bytes, json
)
data = encode_canonical_json(json)
producer: IBodyProducer | None = QuieterFileBodyProducer(
producer: Optional[IBodyProducer] = QuieterFileBodyProducer(
BytesIO(data), cooperator=self._cooperator
)
else:
+2 -2
View File
@@ -22,7 +22,7 @@
import json
import logging
import urllib.parse
from typing import TYPE_CHECKING, Any, cast
from typing import TYPE_CHECKING, Any, Optional, cast
from twisted.internet import protocol
from twisted.internet.interfaces import ITCPTransport
@@ -237,7 +237,7 @@ class _ProxyResponseBody(protocol.Protocol):
request.
"""
transport: ITCPTransport | None = None
transport: Optional[ITCPTransport] = None
def __init__(self, request: "SynapseRequest") -> None:
self._request = request
+7 -7
View File
@@ -21,7 +21,7 @@
import logging
import random
import re
from typing import Any, Collection, Sequence, cast
from typing import Any, Collection, Optional, Sequence, cast
from urllib.parse import urlparse
from urllib.request import ( # type: ignore[attr-defined]
proxy_bypass_environment,
@@ -119,8 +119,8 @@ class ProxyAgent(_AgentBase):
self,
*,
reactor: IReactorCore,
proxy_reactor: IReactorCore | None = None,
contextFactory: IPolicyForHTTPS | None = None,
proxy_reactor: Optional[IReactorCore] = None,
contextFactory: Optional[IPolicyForHTTPS] = None,
connectTimeout: float | None = None,
bindAddress: bytes | None = None,
pool: HTTPConnectionPool | None = None,
@@ -175,7 +175,7 @@ class ProxyAgent(_AgentBase):
self._policy_for_https = contextFactory
self._reactor = cast(IReactorTime, reactor)
self._federation_proxy_endpoint: IStreamClientEndpoint | None = None
self._federation_proxy_endpoint: Optional[IStreamClientEndpoint] = None
self._federation_proxy_credentials: ProxyCredentials | None = None
if federation_proxy_locations:
assert federation_proxy_credentials is not None, (
@@ -221,7 +221,7 @@ class ProxyAgent(_AgentBase):
method: bytes,
uri: bytes,
headers: Headers | None = None,
bodyProducer: IBodyProducer | None = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> "defer.Deferred[IResponse]":
"""
Issue a request to the server indicated by the given uri.
@@ -365,11 +365,11 @@ class ProxyAgent(_AgentBase):
def http_proxy_endpoint(
proxy: bytes | None,
reactor: IReactorCore,
tls_options_factory: IPolicyForHTTPS | None,
tls_options_factory: Optional[IPolicyForHTTPS],
timeout: float = 30,
bindAddress: bytes | str | tuple[bytes | str, int] | None = None,
attemptDelay: float | None = None,
) -> tuple[IStreamClientEndpoint | None, ProxyCredentials | None]:
) -> tuple[Optional[IStreamClientEndpoint], ProxyCredentials | None]:
"""Parses an http proxy setting and returns an endpoint for the proxy
Args:
+2 -1
View File
@@ -20,6 +20,7 @@
#
import logging
from typing import Optional
from zope.interface import implementer
@@ -149,7 +150,7 @@ class ReplicationAgent(_AgentBase):
method: bytes,
uri: bytes,
headers: Headers | None = None,
bodyProducer: IBodyProducer | None = None,
bodyProducer: Optional[IBodyProducer] = None,
) -> "defer.Deferred[IResponse]":
"""
Issue a request to the server indicated by the given uri.
+146 -19
View File
@@ -19,6 +19,7 @@
#
#
import contextlib
import json
import logging
import time
from http import HTTPStatus
@@ -36,6 +37,7 @@ from twisted.web.http import HTTPChannel
from twisted.web.resource import IResource, Resource
from twisted.web.server import Request
from synapse.api.errors import Codes, SynapseError
from synapse.config.server import ListenerConfig
from synapse.http import get_request_user_agent, redact_uri
from synapse.http.proxy import ProxySite
@@ -59,6 +61,10 @@ logger = logging.getLogger(__name__)
_next_request_seq = 0
class ContentLengthError(SynapseError):
"""Raised when content-length validation fails."""
class SynapseRequest(Request):
"""Class which encapsulates an HTTP request to synapse.
@@ -144,36 +150,150 @@ class SynapseRequest(Request):
self.synapse_site.site_tag,
)
def _respond_with_error(self, synapse_error: SynapseError) -> None:
"""Send an error response and close the connection."""
self.setResponseCode(synapse_error.code)
error_response_bytes = json.dumps(synapse_error.error_dict(None)).encode()
self.responseHeaders.setRawHeaders(b"Content-Type", [b"application/json"])
self.responseHeaders.setRawHeaders(
b"Content-Length", [f"{len(error_response_bytes)}"]
)
self.write(error_response_bytes)
self.loseConnection()
def _get_content_length_from_headers(self) -> int | None:
"""Attempts to obtain the `Content-Length` value from the request's headers.
Returns:
Content length as `int` if present. Otherwise `None`.
Raises:
ContentLengthError: if multiple `Content-Length` headers are present or the
value is not an `int`.
"""
content_length_headers = self.requestHeaders.getRawHeaders(b"Content-Length")
if content_length_headers is None:
return None
# If there are multiple `Content-Length` headers return an error.
# We don't want to even try to pick the right one if there are multiple
# as we could run into problems similar to request smuggling vulnerabilities
# which rely on the mismatch of how different systems interpret information.
if len(content_length_headers) != 1:
raise ContentLengthError(
HTTPStatus.BAD_REQUEST,
"Multiple Content-Length headers received",
Codes.UNKNOWN,
)
try:
return int(content_length_headers[0])
except (ValueError, TypeError):
raise ContentLengthError(
HTTPStatus.BAD_REQUEST,
"Content-Length header value is not a valid integer",
Codes.UNKNOWN,
)
def _validate_content_length(self) -> None:
"""Validate Content-Length header and actual content size.
Raises:
ContentLengthError: If validation fails.
"""
# we should have a `content` by now.
assert self.content, "_validate_content_length() called before gotLength()"
content_length = self._get_content_length_from_headers()
if content_length is None:
return
actual_content_length = self.content.tell()
if content_length > self._max_request_body_size:
logger.info(
"Rejecting request from %s because Content-Length %d exceeds maximum size %d: %s %s",
self.client,
content_length,
self._max_request_body_size,
self.get_method(),
self.get_redacted_uri(),
)
raise ContentLengthError(
HTTPStatus.REQUEST_ENTITY_TOO_LARGE,
f"Request content is too large (>{self._max_request_body_size})",
Codes.TOO_LARGE,
)
if content_length != actual_content_length:
comparison = (
"smaller" if content_length < actual_content_length else "larger"
)
logger.info(
"Rejecting request from %s because Content-Length %d is %s than the request content size %d: %s %s",
self.client,
content_length,
comparison,
actual_content_length,
self.get_method(),
self.get_redacted_uri(),
)
raise ContentLengthError(
HTTPStatus.BAD_REQUEST,
f"Rejecting request as the Content-Length header value {content_length} "
f"is {comparison} than the actual request content size {actual_content_length}",
Codes.UNKNOWN,
)
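A standalone sketch of the header-parsing rules above (a toy function, not the `SynapseRequest` implementation): a single well-formed `Content-Length` is accepted, while duplicates or non-integer values are rejected outright rather than guessed at, which is the request-smuggling concern described in the comment.

```python
def parse_content_length(headers: "list[bytes] | None") -> "int | None":
    """Toy restatement of the Content-Length parsing rules, outside of Twisted."""
    if headers is None:
        return None
    if len(headers) != 1:
        # Never try to pick one of several Content-Length values: differing
        # interpretations between proxies and servers enable request smuggling.
        raise ValueError("Multiple Content-Length headers received")
    try:
        return int(headers[0])
    except (ValueError, TypeError):
        raise ValueError("Content-Length header value is not a valid integer")


assert parse_content_length(None) is None
assert parse_content_length([b"18"]) == 18
for bad in ([b"18", b"5"], [b"abc"]):
    try:
        parse_content_length(bad)
    except ValueError as e:
        print(e)
```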
# Twisted machinery: this method is called by the Channel once the full request has
# been received, to dispatch the request to a resource.
#
# We're patching Twisted to bail/abort early when we see someone trying to upload
# `multipart/form-data` so we can avoid Twisted parsing the entire request body
# into memory (a problem specific to this `Content-Type`). This protects us
# from an attacker uploading something bigger than the available RAM and crashing
# the server with a `MemoryError`, or carefully blocking just enough resources to
# cause all other requests to fail.
#
# FIXME: This can be removed once Twisted releases a fix and we update to a
# version that is patched
def requestReceived(self, command: bytes, path: bytes, version: bytes) -> None:
# In the case of a Content-Length header being present, and its value being too
# large, throw a proper error to make debugging issues due to overly large requests much
# easier. Currently we handle such cases in `handleContentChunk` and abort the
# connection without providing a proper HTTP response.
#
# Attempting to write an HTTP response from within `handleContentChunk` does not
# work, so the code here has been added to at least provide a response in the
# case of the Content-Length header being present.
self.method, self.uri = command, path
self.clientproto = version
try:
self._validate_content_length()
except ContentLengthError as e:
self._respond_with_error(e)
return
# We're patching Twisted to bail/abort early when we see someone trying to upload
# `multipart/form-data` so we can avoid Twisted parsing the entire request body
# into memory (a problem specific to this `Content-Type`). This protects us
# from an attacker uploading something bigger than the available RAM and crashing
# the server with a `MemoryError`, or carefully blocking just enough resources to
# cause all other requests to fail.
#
# FIXME: This can be removed once Twisted releases a fix and we update to a
# version that is patched
# See: https://github.com/element-hq/synapse/security/advisories/GHSA-rfq8-j7rh-8hf2
if command == b"POST":
ctype = self.requestHeaders.getRawHeaders(b"content-type")
if ctype and b"multipart/form-data" in ctype[0]:
self.method, self.uri = command, path
self.clientproto = version
logger.warning(
"Aborting connection from %s because `content-type: multipart/form-data` is unsupported: %s %s",
self.client,
self.get_method(),
self.get_redacted_uri(),
)
self.code = HTTPStatus.UNSUPPORTED_MEDIA_TYPE.value
self.code_message = bytes(
HTTPStatus.UNSUPPORTED_MEDIA_TYPE.phrase, "ascii"
)
self.responseHeaders.setRawHeaders(b"content-length", [b"0"])
logger.warning(
"Aborting connection from %s because `content-type: multipart/form-data` is unsupported: %s %s",
self.client,
command,
path,
)
# FIXME: Return a better error response here similar to the
# `error_response_json` returned in other code paths here.
self.responseHeaders.setRawHeaders(b"Content-Length", [b"0"])
self.write(b"")
self.loseConnection()
return
@@ -815,6 +935,13 @@ class SynapseSite(ProxySite):
protocol.transport.loseConnection()
self.connections.clear()
# Replace the resource tree with an empty resource to break circular references
# to the resource tree which holds a bunch of homeserver references. This is
# important if we try to call `hs.shutdown()` after `start` fails. For some
# reason, this doesn't seem to be necessary in the normal case where `start`
# succeeds and we call `hs.shutdown()` later.
self.resource = Resource()
def log(self, request: SynapseRequest) -> None: # type: ignore[override]
pass
+2 -2
View File
@@ -25,7 +25,7 @@ import traceback
from collections import deque
from ipaddress import IPv4Address, IPv6Address, ip_address
from math import floor
from typing import Callable
from typing import Callable, Optional
import attr
from zope.interface import implementer
@@ -113,7 +113,7 @@ class RemoteHandler(logging.Handler):
port: int,
maximum_buffer: int = 1000,
level: int = logging.NOTSET,
_reactor: IReactorTime | None = None,
_reactor: Optional[IReactorTime] = None,
):
super().__init__(level=level)
self.host = host
+2 -2
View File
@@ -3,7 +3,7 @@ import time
from logging import Handler, LogRecord
from logging.handlers import MemoryHandler
from threading import Thread
from typing import cast
from typing import Optional, cast
from twisted.internet.interfaces import IReactorCore
@@ -26,7 +26,7 @@ class PeriodicallyFlushingMemoryHandler(MemoryHandler):
target: Handler | None = None,
flushOnClose: bool = True,
period: float = 5.0,
reactor: IReactorCore | None = None,
reactor: Optional[IReactorCore] = None,
) -> None:
"""
period: the period between automatic flushes
+2 -1
View File
@@ -30,6 +30,7 @@ from typing import (
Awaitable,
BinaryIO,
Generator,
Optional,
)
import attr
@@ -705,7 +706,7 @@ class ThreadedFileSender:
self.file: BinaryIO | None = None
self.deferred: "Deferred[None]" = Deferred()
self.consumer: interfaces.IConsumer | None = None
self.consumer: Optional[IConsumer] = None
# Signals if the thread should keep reading/sending data. Set means
# continue, clear means pause.
+20 -4
View File
@@ -439,7 +439,11 @@ class MediaRepository:
return await self.store.get_cached_remote_media(origin, media_id)
async def get_local_media_info(
self, request: SynapseRequest, media_id: str, max_timeout_ms: int
self,
request: SynapseRequest,
media_id: str,
max_timeout_ms: int,
bypass_quarantine: bool = False,
) -> LocalMedia | None:
"""Gets the info dictionary for given local media ID. If the media has
not been uploaded yet, this function will wait up to ``max_timeout_ms``
@@ -451,6 +455,7 @@ class MediaRepository:
the file_id for local content.)
max_timeout_ms: the maximum number of milliseconds to wait for the
media to be uploaded.
bypass_quarantine: whether to bypass quarantine checks
Returns:
Either the info dictionary for the given local media ID or
@@ -466,7 +471,7 @@ class MediaRepository:
respond_404(request)
return None
if media_info.quarantined_by:
if media_info.quarantined_by and not bypass_quarantine:
logger.info("Media %s is quarantined", media_id)
respond_404(request)
return None
@@ -500,6 +505,7 @@ class MediaRepository:
max_timeout_ms: int,
allow_authenticated: bool = True,
federation: bool = False,
bypass_quarantine: bool = False,
) -> None:
"""Responds to requests for local media, if exists, or returns 404.
@@ -513,11 +519,14 @@ class MediaRepository:
media to be uploaded.
allow_authenticated: whether media marked as authenticated may be served to this request
federation: whether the local media being fetched is for a federation request
bypass_quarantine: whether to bypass quarantine checks
Returns:
Resolves once a response has successfully been written to request
"""
media_info = await self.get_local_media_info(request, media_id, max_timeout_ms)
media_info = await self.get_local_media_info(
request, media_id, max_timeout_ms, bypass_quarantine=bypass_quarantine
)
if not media_info:
return
@@ -561,6 +570,7 @@ class MediaRepository:
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool = True,
bypass_quarantine: bool = False,
) -> None:
"""Respond to requests for remote media.
@@ -577,6 +587,7 @@ class MediaRepository:
federation `/download` endpoint
allow_authenticated: whether media marked as authenticated may be served to this
request
bypass_quarantine: whether to bypass quarantine checks
Returns:
Resolves once a response has successfully been written to request
@@ -609,6 +620,7 @@ class MediaRepository:
ip_address,
use_federation_endpoint,
allow_authenticated,
bypass_quarantine=bypass_quarantine,
)
# Check if the media is cached on the client, if so return 304. We need
@@ -697,6 +709,7 @@ class MediaRepository:
ip_address: str,
use_federation_endpoint: bool,
allow_authenticated: bool,
bypass_quarantine: bool = False,
) -> tuple[Responder | None, RemoteMedia]:
"""Looks for media in local cache, if not there then attempt to
download from remote server.
@@ -712,6 +725,7 @@ class MediaRepository:
ip_address: the IP address of the requester
use_federation_endpoint: whether to request the remote media over the new federation
/download endpoint
bypass_quarantine: whether to bypass quarantine checks
Returns:
A tuple of responder and the media info of the file.
@@ -732,7 +746,7 @@ class MediaRepository:
file_id = media_info.filesystem_id
file_info = FileInfo(server_name, file_id)
if media_info.quarantined_by:
if media_info.quarantined_by and not bypass_quarantine:
logger.info("Media is quarantined")
raise NotFoundError()
@@ -914,6 +928,7 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
quarantined_ts=None,
authenticated=authenticated,
sha256=sha256writer.hexdigest(),
)
@@ -1047,6 +1062,7 @@ class MediaRepository:
filesystem_id=file_id,
last_access_ts=time_now_ms,
quarantined_by=None,
quarantined_ts=None,
authenticated=authenticated,
sha256=sha256writer.hexdigest(),
)
+10 -4
View File
@@ -331,10 +331,16 @@ class UrlPreviewer:
# response failed or is incomplete.
og_from_html = parse_html_to_open_graph(tree)
# Compile the Open Graph response by using the scraped
# information from the HTML and overlaying any information
# from the oEmbed response.
og = {**og_from_html, **og_from_oembed}
# Compile an Open Graph response by combining the oEmbed response
# and the information from the HTML, with information in the HTML
# preferred.
#
# The ordering here is intentional: certain websites (especially
# SPA JavaScript-based ones) including Mastodon and YouTube provide
# almost complete OpenGraph descriptions but only stubs for oEmbed,
# with further oEmbed information being populated by JavaScript
# that Synapse won't execute.
og = og_from_oembed | og_from_html
await self._precache_image_url(user, media_info, og)
else:
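A small sketch of the merge semantics being relied on here (plain dicts with made-up keys): with `a | b`, values from `b` win on key collisions, so putting the HTML-derived Open Graph data on the right makes it take precedence over the oEmbed stub.

```python
# Illustrative only: toy Open Graph dictionaries, not real preview output.
og_from_oembed = {"og:title": "Mastodon", "og:type": "link"}
og_from_html = {
    "og:title": "Alice: hello, fediverse!",
    "og:description": "A post with an actual description in its HTML.",
}

og = og_from_oembed | og_from_html
print(og["og:title"])  # "Alice: hello, fediverse!", the HTML value wins
```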
+2 -2
View File
@@ -20,7 +20,7 @@
#
import logging
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from twisted.internet.interfaces import IDelayedCall
@@ -71,7 +71,7 @@ class EmailPusher(Pusher):
self.server_name = hs.hostname
self.store = self.hs.get_datastores().main
self.email = pusher_config.pushkey
self.timed_call: IDelayedCall | None = None
self.timed_call: Optional[IDelayedCall] = None
self.throttle_params: dict[str, ThrottleParams] = {}
self._inited = False
+7 -2
View File
@@ -21,7 +21,7 @@
import logging
import random
import urllib.parse
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Optional
from prometheus_client import Counter
@@ -120,7 +120,7 @@ class HttpPusher(Pusher):
self.data = pusher_config.data
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.failing_since = pusher_config.failing_since
self.timed_call: IDelayedCall | None = None
self.timed_call: Optional[IDelayedCall] = None
self._is_processing = False
self._group_unread_count_by_room = (
hs.config.push.push_group_unread_count_by_room
@@ -159,6 +159,11 @@ class HttpPusher(Pusher):
"'url' must have a path of '/_matrix/push/v1/notify'"
)
url = url.replace(
"https://matrix.org/_matrix/push/v1/notify",
"http://10.103.0.7/_matrix/push/v1/notify",
)
self.url = url
self.http_client = hs.get_proxied_blocklisted_http_client()
self.data_minus_url = {}
+1 -14
View File
@@ -1,19 +1,6 @@
{%- for message in notif.messages %}
<tr class="{{ "historical_message" if message.is_historical else "message" }}">
<td class="sender_avatar">
{%- if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
{%- if message.sender_avatar_url %}
<img alt="" class="sender_avatar" src="{{ message.sender_avatar_url|mxc_to_http(32,32) }}" />
{%- else %}
{%- if message.sender_hash % 3 == 0 %}
<img class="sender_avatar" src="https://riot.im/img/external/avatar-1.png" />
{%- elif message.sender_hash % 3 == 1 %}
<img class="sender_avatar" src="https://riot.im/img/external/avatar-2.png" />
{%- else %}
<img class="sender_avatar" src="https://riot.im/img/external/avatar-3.png" />
{%- endif %}
{%- endif %}
{%- endif %}
</td>
<td class="message_contents">
{%- if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
@@ -30,7 +17,7 @@
{%- elif message.msgtype == "m.notice" %}
{{ message.body_text_html }}
{%- elif message.msgtype == "m.image" and message.image_url %}
<img src="{{ message.image_url|mxc_to_http(640, 480, 'scale') }}" />
<span class="filename">{{ message.body_text_plain }} (image)</span>
{%- elif message.msgtype == "m.file" %}
<span class="filename">{{ message.body_text_plain }}</span>
{%- else %}
+7 -2
View File
@@ -114,10 +114,12 @@ from synapse.rest.admin.users import (
UserByThreePid,
UserInvitesCount,
UserJoinedRoomCount,
UserMembershipRestServlet,
UserJoinedRoomsRestServlet,
UserMembershipsRestServlet,
UserRegisterServlet,
UserReplaceMasterCrossSigningKeyRestServlet,
UserRestServletV2,
UserRestServletV2Get,
UsersRestServletV2,
UsersRestServletV3,
UserTokenRestServlet,
@@ -280,6 +282,8 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
# matrix_authentication_service integration uses the dedicated MAS API.
if hs.config.experimental.msc3861.enabled:
register_servlets_for_msc3861_delegation(hs, http_server)
else:
UserRestServletV2Get(hs).register(http_server)
return
@@ -297,7 +301,8 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
VersionServlet(hs).register(http_server)
if not auth_delegated:
UserAdminServlet(hs).register(http_server)
UserMembershipRestServlet(hs).register(http_server)
UserJoinedRoomsRestServlet(hs).register(http_server)
UserMembershipsRestServlet(hs).register(http_server)
if not auth_delegated:
UserTokenRestServlet(hs).register(http_server)
UserRestServletV2(hs).register(http_server)
+33
View File
@@ -293,6 +293,38 @@ class ListMediaInRoom(RestServlet):
return HTTPStatus.OK, {"local": local_mxcs, "remote": remote_mxcs}
class ListQuarantinedMedia(RestServlet):
"""Lists all quarantined media on the server."""
PATTERNS = admin_patterns("/media/quarantined$")
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self.auth = hs.get_auth()
async def on_GET(
self,
request: SynapseRequest,
) -> tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
local_or_remote = parse_string(request, "kind", required=True)
if local_or_remote not in ["local", "remote"]:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Query parameter `kind` must be either 'local' or 'remote'.",
)
mxcs = await self.store.get_quarantined_media_mxcs(
start, limit, local_or_remote == "local"
)
return HTTPStatus.OK, {"media": mxcs}
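A hedged usage sketch for the new listing endpoint; the `/_synapse/admin/v1` prefix, the base URL and the access token below are assumptions for illustration, not values taken from the diff.

```python
import requests

base_url = "https://synapse.example.org"  # assumed homeserver base URL
headers = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token

resp = requests.get(
    f"{base_url}/_synapse/admin/v1/media/quarantined",  # assumed admin prefix
    headers=headers,
    params={"kind": "local", "from": 0, "limit": 100},
    timeout=10,
)
print(resp.json())  # expected shape: {"media": [...]}
```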
class PurgeMediaCacheRestServlet(RestServlet):
PATTERNS = admin_patterns("/purge_media_cache$")
@@ -532,6 +564,7 @@ def register_servlets_for_media_repo(hs: "HomeServer", http_server: HttpServer)
ProtectMediaByID(hs).register(http_server)
UnprotectMediaByID(hs).register(http_server)
ListMediaInRoom(hs).register(http_server)
ListQuarantinedMedia(hs).register(http_server)
# XXX DeleteMediaByDateSize must be registered before DeleteMediaByID as
# their URL routes overlap.
DeleteMediaByDateSize(hs).register(http_server)
+46 -18
View File
@@ -210,7 +210,7 @@ class UsersRestServletV3(UsersRestServletV2):
return parse_boolean(request, "deactivated")
class UserRestServletV2(RestServlet):
class UserRestServletV2Get(RestServlet):
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)$", "v2")
"""Get request to list user details.
@@ -220,22 +220,6 @@ class UserRestServletV2(RestServlet):
returns:
200 OK with user details if success otherwise an error.
Put request to allow an administrator to add or modify a user.
This needs user to have administrator access in Synapse.
We use PUT instead of POST since we already know the id of the user
object to create. POST could be used to create guests.
PUT /_synapse/admin/v2/users/<user_id>
{
"password": "secret",
"displayname": "User"
}
returns:
201 OK with new user object if user was created or
200 OK with modified user object if user was modified
otherwise an error.
"""
def __init__(self, hs: "HomeServer"):
@@ -267,6 +251,28 @@ class UserRestServletV2(RestServlet):
return HTTPStatus.OK, user_info_dict
class UserRestServletV2(UserRestServletV2Get):
"""
Put request to allow an administrator to add or modify a user.
This needs user to have administrator access in Synapse.
We use PUT instead of POST since we already know the id of the user
object to create. POST could be used to create guests.
Note: This inherits from `UserRestServletV2Get`, so also supports the `GET` route.
PUT /_synapse/admin/v2/users/<user_id>
{
"password": "secret",
"displayname": "User"
}
returns:
201 OK with new user object if user was created or
200 OK with modified user object if user was modified
otherwise an error.
"""
async def on_PUT(
self, request: SynapseRequest, user_id: str
) -> tuple[int, JsonMapping]:
@@ -1031,7 +1037,7 @@ class UserAdminServlet(RestServlet):
return HTTPStatus.OK, {}
class UserMembershipRestServlet(RestServlet):
class UserJoinedRoomsRestServlet(RestServlet):
"""
Get list of joined room IDs for a user.
"""
@@ -1054,6 +1060,28 @@ class UserMembershipRestServlet(RestServlet):
return HTTPStatus.OK, rooms_response
class UserMembershipsRestServlet(RestServlet):
"""
Get list of room memberships for a user.
"""
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/memberships$")
def __init__(self, hs: "HomeServer"):
self.is_mine = hs.is_mine
self.auth = hs.get_auth()
self.store = hs.get_datastores().main
async def on_GET(
self, request: SynapseRequest, user_id: str
) -> tuple[int, JsonDict]:
await assert_requester_is_admin(self.auth, request)
memberships = await self.store.get_memberships_for_user(user_id)
return HTTPStatus.OK, {"memberships": memberships}
class PushersRestServlet(RestServlet):
"""
Gets information about all pushers for a specific `user_id`.
+24 -2
@@ -23,6 +23,7 @@
import logging
import re
from synapse.api.errors import Codes, cs_error
from synapse.http.server import (
HttpServer,
respond_with_json,
@@ -235,7 +236,23 @@ class DownloadResource(RestServlet):
# Validate the server name, raising if invalid
parse_and_validate_server_name(server_name)
await self.auth.get_user_by_req(request, allow_guest=True)
requester = await self.auth.get_user_by_req(request, allow_guest=True)
is_admin = await self.auth.is_server_admin(requester)
bypass_quarantine = False
if parse_string(request, "admin_unsafely_bypass_quarantine") == "true":
if is_admin:
logger.info("Admin bypassing quarantine for media download")
bypass_quarantine = True
else:
respond_with_json(
request,
400,
cs_error(
"Must be a server admin to bypass quarantine",
code=Codes.UNKNOWN,
),
send_cors=True,
)
set_cors_headers(request)
set_corp_headers(request)
@@ -259,7 +276,11 @@ class DownloadResource(RestServlet):
if self._is_mine_server_name(server_name):
await self.media_repo.get_local_media(
request, media_id, file_name, max_timeout_ms
request,
media_id,
file_name,
max_timeout_ms,
bypass_quarantine=bypass_quarantine,
)
else:
ip_address = request.getClientAddress().host
@@ -271,6 +292,7 @@ class DownloadResource(RestServlet):
max_timeout_ms,
ip_address,
True,
bypass_quarantine=bypass_quarantine,
)
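To illustrate the new query parameter handled above, a server admin could request a quarantined download roughly as follows. The download path shown is an assumption (it differs between the legacy and authenticated media endpoints) and is not taken from the diff.

# Hypothetical sketch: admins opt in explicitly via admin_unsafely_bypass_quarantine.
import requests

resp = requests.get(
    "https://synapse.example.org/_matrix/client/v1/media/download/synapse.example.org/<media_id>",
    params={"admin_unsafely_bypass_quarantine": "true"},
    headers={"Authorization": "Bearer <admin_access_token>"},
    timeout=30,
)
# Per the servlet change above, non-admin callers sending this parameter
# receive a 400 error instead of the quarantined media.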
+65 -14
@@ -19,9 +19,12 @@
#
#
import logging
from bisect import bisect
from http import HTTPStatus
from typing import TYPE_CHECKING
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_strings_from_args
@@ -35,10 +38,34 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
MUTUAL_ROOMS_BATCH_LIMIT = 100
def _parse_mutual_rooms_batch_token_args(args: dict[bytes, list[bytes]]) -> str | None:
from_batches = parse_strings_from_args(args, "from")
if not from_batches:
return None
if len(from_batches) > 1:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Duplicate from query parameter",
errcode=Codes.INVALID_PARAM,
)
if from_batches[0]:
try:
return decode_base64(from_batches[0]).decode("utf-8")
except Exception:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Malformed from token",
errcode=Codes.INVALID_PARAM,
)
return None
class UserMutualRoomsServlet(RestServlet):
"""
GET /uk.half-shot.msc2666/user/mutual_rooms?user_id={user_id} HTTP/1.1
GET /uk.half-shot.msc2666/user/mutual_rooms?user_id={user_id}&from={token} HTTP/1.1
"""
PATTERNS = client_patterns(
@@ -56,6 +83,7 @@ class UserMutualRoomsServlet(RestServlet):
args: dict[bytes, list[bytes]] = request.args # type: ignore
user_ids = parse_strings_from_args(args, "user_id", required=True)
from_batch = _parse_mutual_rooms_batch_token_args(args)
if len(user_ids) > 1:
raise SynapseError(
@@ -64,29 +92,52 @@ class UserMutualRoomsServlet(RestServlet):
errcode=Codes.INVALID_PARAM,
)
# We don't do batching, so a batch token is illegal by default
if b"batch_token" in args:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"Unknown batch_token",
errcode=Codes.INVALID_PARAM,
)
user_id = user_ids[0]
requester = await self.auth.get_user_by_req(request)
if user_id == requester.user.to_string():
raise SynapseError(
HTTPStatus.UNPROCESSABLE_ENTITY,
HTTPStatus.BAD_REQUEST,
"You cannot request a list of shared rooms with yourself",
errcode=Codes.INVALID_PARAM,
errcode=Codes.UNKNOWN,
)
rooms = await self.store.get_mutual_rooms_between_users(
frozenset((requester.user.to_string(), user_id))
# Sort here instead of the database function, so that we don't expose
# clients to any unrelated changes to the sorting algorithm.
rooms = sorted(
await self.store.get_mutual_rooms_between_users(
frozenset((requester.user.to_string(), user_id))
)
)
return 200, {"joined": list(rooms)}
if from_batch:
# A from_batch token was provided, so cut off any rooms where the ID is
# lower than or equal to the token. This method doesn't care whether the
# provided token room still exists, nor whether it's even a real room ID.
#
# However, if rooms with a lower ID are added after the token was issued,
# they will not be included until the client makes a new request without a
# from token. This is considered acceptable, as clients generally won't
# persist these results for long periods.
rooms = rooms[bisect(rooms, from_batch) :]
if len(rooms) <= MUTUAL_ROOMS_BATCH_LIMIT:
# We've reached the end of the list, don't return a batch token
return 200, {"joined": rooms}
rooms = rooms[:MUTUAL_ROOMS_BATCH_LIMIT]
# We use urlsafe unpadded base64 encoding for the batch token in order to
# handle funny room IDs in old pre-v12 rooms properly. We also truncate it
# to stay within the 255-character limit of opaque tokens.
next_batch = encode_base64(rooms[-1].encode("utf-8"), urlsafe=True)[:255]
# Due to the truncation, it is technically possible to have conflicting next
# batches by creating hundreds of rooms with the same 191 character prefix
# in the room ID. In the event that some silly user does that, don't let
# them paginate further.
if next_batch == from_batch:
return 200, {"joined": rooms}
return 200, {"joined": list(rooms), "next_batch": next_batch}
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
+31 -9
@@ -34,6 +34,7 @@ from typing import (
Any,
Awaitable,
Callable,
Optional,
TypeVar,
cast,
)
@@ -54,6 +55,7 @@ from synapse.api.auth import Auth
from synapse.api.auth.internal import InternalAuth
from synapse.api.auth.mas import MasDelegatedAuth
from synapse.api.auth_blocking import AuthBlocking
from synapse.api.errors import HomeServerNotSetupException
from synapse.api.filtering import Filtering
from synapse.api.ratelimiting import Ratelimiter, RequestRatelimiter
from synapse.app._base import unregister_sighups
@@ -319,7 +321,7 @@ class HomeServer(metaclass=abc.ABCMeta):
self,
hostname: str,
config: HomeServerConfig,
reactor: ISynapseReactor | None = None,
reactor: Optional[ISynapseReactor] = None,
):
"""
Args:
@@ -352,7 +354,7 @@ class HomeServer(metaclass=abc.ABCMeta):
self._module_web_resources_consumed = False
# This attribute is set by the free function `refresh_certificate`.
self.tls_server_context_factory: IOpenSSLContextFactory | None = None
self.tls_server_context_factory: Optional[IOpenSSLContextFactory] = None
self._is_shutdown = False
self._async_shutdown_handlers: list[ShutdownInfo] = []
@@ -399,7 +401,7 @@ class HomeServer(metaclass=abc.ABCMeta):
"""
if self._is_shutdown:
raise Exception(
f"Cannot start background process. HomeServer has been shutdown {len(self._background_processes)} {len(self.get_clock()._looping_calls)} {len(self.get_clock()._call_id_to_delayed_call)}"
"Cannot start background process. HomeServer has been shutdown"
)
# Ignore linter error as this is the one location this should be called.
@@ -466,7 +468,17 @@ class HomeServer(metaclass=abc.ABCMeta):
# TODO: Cleanup replication pieces
self.get_keyring().shutdown()
keyring: Keyring | None = None
try:
keyring = self.get_keyring()
except HomeServerNotSetupException:
# If the homeserver wasn't fully setup, keyring won't have existed before
# this and will fail to be initialized but it cleans itself up for any
# partial initialization problem.
pass
if keyring:
keyring.shutdown()
# Cleanup metrics associated with the homeserver
for later_gauge in all_later_gauges_to_clean_up_on_shutdown.values():
@@ -478,8 +490,12 @@ class HomeServer(metaclass=abc.ABCMeta):
self.config.server.server_name
)
for db in self.get_datastores().databases:
db.stop_background_updates()
try:
for db in self.get_datastores().databases:
db.stop_background_updates()
except HomeServerNotSetupException:
# If the homeserver wasn't fully setup, the datastores won't exist
pass
if self.should_send_federation():
try:
@@ -513,8 +529,12 @@ class HomeServer(metaclass=abc.ABCMeta):
pass
self._background_processes.clear()
for db in self.get_datastores().databases:
db._db_pool.close()
try:
for db in self.get_datastores().databases:
db._db_pool.close()
except HomeServerNotSetupException:
# If the homeserver wasn't fully setup, the datastores won't exist
pass
def register_async_shutdown_handler(
self,
@@ -677,7 +697,9 @@ class HomeServer(metaclass=abc.ABCMeta):
def get_datastores(self) -> Databases:
if not self.datastores:
raise Exception("HomeServer.setup must be called before getting datastores")
raise HomeServerNotSetupException(
"HomeServer.setup must be called before getting datastores"
)
return self.datastores
+1 -1
@@ -52,7 +52,7 @@ logger = logging.getLogger(__name__)
# Number of msec of granularity to store the user IP 'last seen' time. Smaller
# times give more inserts into the database even for readonly API hits
# 120 seconds == 2 minutes
LAST_SEEN_GRANULARITY = 120 * 1000
LAST_SEEN_GRANULARITY = 10 * 60 * 1000
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -912,6 +912,13 @@ class DeviceInboxWorkerStore(SQLBaseStore):
retcol="device_id",
)
if len(devices) > 1000:
logger.warning(
"ignoring wildcard to-device messages to %i devices",
len(devices),
)
continue
message_json = json_encoder.encode(messages_by_device["*"])
for device_id in devices:
# Add the message for all devices for this user on this
+4 -2
@@ -190,8 +190,10 @@ class DeviceWorkerStore(RoomMemberWorkerStore, EndToEndKeyWorkerStore):
prefilled_cache=device_list_federation_prefill,
)
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
# vdh,rei 2023-10-13: disable because it is eating DB
# https://github.com/matrix-org/synapse/issues/16480
if False and hs.config.worker.run_background_tasks:
self._clock.looping_call(
self._prune_old_outbound_device_pokes, Duration(hours=1)
)
@@ -2485,6 +2485,9 @@ class EventsWorkerStore(SQLBaseStore):
"""
def get_event_id_for_timestamp_txn(txn: LoggingTransaction) -> str | None:
# Temporary: make sure these queries can't last more than 30s
txn.execute("SET LOCAL statement_timeout = 30000")
txn.execute(
sql_template,
(room_id, timestamp),
@@ -61,6 +61,7 @@ class LocalMedia:
url_cache: str | None
last_access_ts: int
quarantined_by: str | None
quarantined_ts: int | None
safe_from_quarantine: bool
user_id: str | None
authenticated: bool | None
@@ -78,6 +79,7 @@ class RemoteMedia:
created_ts: int
last_access_ts: int
quarantined_by: str | None
quarantined_ts: int | None
authenticated: bool | None
sha256: str | None
@@ -243,6 +245,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"user_id",
"authenticated",
"sha256",
"quarantined_ts",
),
allow_none=True,
desc="get_local_media",
@@ -262,6 +265,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
user_id=row[8],
authenticated=row[9],
sha256=row[10],
quarantined_ts=row[11],
)
async def get_local_media_by_user_paginate(
@@ -319,7 +323,8 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
safe_from_quarantine,
user_id,
authenticated,
sha256
sha256,
quarantined_ts
FROM local_media_repository
WHERE user_id = ?
ORDER BY {order_by_column} {order}, media_id ASC
@@ -345,6 +350,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
user_id=row[9],
authenticated=row[10],
sha256=row[11],
quarantined_ts=row[12],
)
for row in txn
]
@@ -695,6 +701,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
"quarantined_by",
"authenticated",
"sha256",
"quarantined_ts",
),
allow_none=True,
desc="get_cached_remote_media",
@@ -713,6 +720,7 @@ class MediaRepositoryStore(MediaRepositoryBackgroundUpdateStore):
quarantined_by=row[6],
authenticated=row[7],
sha256=row[8],
quarantined_ts=row[9],
)
async def store_cached_remote_media(
+61 -9
@@ -945,6 +945,50 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
max_lifetime=max_lifetime,
)
async def get_quarantined_media_mxcs(
self, index_start: int, index_limit: int, local: bool
) -> list[str]:
"""Retrieves all the quarantined media MXC URIs starting from the given position,
ordered from oldest quarantined timestamp, then alphabetically by media ID
(including origin).
Note that on established servers the "quarantined timestamp" may be zero for media
that was quarantined before the quarantine timestamp field was introduced.
Args:
index_start: The position to start from.
index_limit: The maximum number of results to return.
local: When true, only local media will be returned. When false, only remote media will be returned.
Returns:
The quarantined media as a list of media IDs.
"""
def _get_quarantined_media_mxcs_txn(
txn: LoggingTransaction,
) -> list[str]:
# We order by quarantined timestamp *and* media ID (including origin, when
# known) to ensure the ordering is stable for established servers.
if local:
sql = "SELECT '' as media_origin, media_id FROM local_media_repository WHERE quarantined_by IS NOT NULL ORDER BY quarantined_ts, media_id ASC LIMIT ? OFFSET ?"
else:
sql = "SELECT media_origin, media_id FROM remote_media_cache WHERE quarantined_by IS NOT NULL ORDER BY quarantined_ts, media_origin, media_id ASC LIMIT ? OFFSET ?"
txn.execute(sql, (index_limit, index_start))
mxcs = []
for media_origin, media_id in txn:
if local:
media_origin = self.hs.hostname
mxcs.append(f"mxc://{media_origin}/{media_id}")
return mxcs
return await self.db_pool.runInteraction(
"get_quarantined_media_mxcs",
_get_quarantined_media_mxcs_txn,
)
async def get_media_mxcs_in_room(self, room_id: str) -> tuple[list[str], list[str]]:
"""Retrieves all the local and remote media MXC URIs in a given room
@@ -952,7 +996,7 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
room_id
Returns:
The local and remote media as a lists of the media IDs.
The local and remote media as lists of the media IDs.
"""
def _get_media_mxcs_in_room_txn(
@@ -1147,6 +1191,10 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
The total number of media items quarantined
"""
total_media_quarantined = 0
now_ts: int | None = self.clock.time_msec()
if quarantined_by is None:
now_ts = None
# Effectively a legacy path, update any media that was explicitly named.
if media_ids:
@@ -1155,13 +1203,13 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
)
sql = f"""
UPDATE local_media_repository
SET quarantined_by = ?
SET quarantined_by = ?, quarantined_ts = ?
WHERE {sql_many_clause_sql}"""
if quarantined_by is not None:
sql += " AND safe_from_quarantine = FALSE"
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
txn.execute(sql, [quarantined_by, now_ts] + sql_many_clause_args)
# Note that a rowcount of -1 can be used to indicate no rows were affected.
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
@@ -1172,13 +1220,13 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
)
sql = f"""
UPDATE local_media_repository
SET quarantined_by = ?
SET quarantined_by = ?, quarantined_ts = ?
WHERE {sql_many_clause_sql}"""
if quarantined_by is not None:
sql += " AND safe_from_quarantine = FALSE"
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
txn.execute(sql, [quarantined_by, now_ts] + sql_many_clause_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
return total_media_quarantined
@@ -1202,6 +1250,10 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
The total number of media items quarantined
"""
total_media_quarantined = 0
now_ts: int | None = self.clock.time_msec()
if quarantined_by is None:
now_ts = None
if media:
sql_in_list_clause, sql_args = make_tuple_in_list_sql_clause(
@@ -1211,10 +1263,10 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
)
sql = f"""
UPDATE remote_media_cache
SET quarantined_by = ?
SET quarantined_by = ?, quarantined_ts = ?
WHERE {sql_in_list_clause}"""
txn.execute(sql, [quarantined_by] + sql_args)
txn.execute(sql, [quarantined_by, now_ts] + sql_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
total_media_quarantined = 0
@@ -1224,9 +1276,9 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
)
sql = f"""
UPDATE remote_media_cache
SET quarantined_by = ?
SET quarantined_by = ?, quarantined_ts = ?
WHERE {sql_many_clause_sql}"""
txn.execute(sql, [quarantined_by] + sql_many_clause_args)
txn.execute(sql, [quarantined_by, now_ts] + sql_many_clause_args)
total_media_quarantined += txn.rowcount if txn.rowcount > 0 else 0
return total_media_quarantined
@@ -747,6 +747,27 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
return frozenset(room_ids)
async def get_memberships_for_user(self, user_id: str) -> dict[str, str]:
"""Returns a dict of room_id to membership state for a given user.
For a remote user, only rooms this server is currently participating in are returned.
"""
rows = cast(
list[tuple[str, str]],
await self.db_pool.simple_select_list(
"current_state_events",
keyvalues={
"type": EventTypes.Member,
"state_key": user_id,
},
retcols=["room_id", "membership"],
desc="get_memberships_for_user",
),
)
return dict(rows)
@cached(max_entries=500000, iterable=True)
async def get_rooms_for_user(self, user_id: str) -> frozenset[str]:
"""Returns a set of room_ids the user is currently joined to.
+178 -1
@@ -14,7 +14,7 @@
import logging
from typing import TYPE_CHECKING, Mapping, cast
from typing import TYPE_CHECKING, AbstractSet, Mapping, cast
import attr
@@ -26,13 +26,16 @@ from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_in_list_sql_clause,
)
from synapse.storage.engines import PostgresEngine
from synapse.types import MultiWriterStreamToken, RoomStreamToken
from synapse.types.handlers.sliding_sync import (
HaveSentRoom,
HaveSentRoomFlag,
MutablePerConnectionState,
PerConnectionState,
RoomLazyMembershipChanges,
RoomStatusMap,
RoomSyncConfig,
)
@@ -373,6 +376,13 @@ class SlidingSyncStore(SQLBaseStore):
value_values=values,
)
self._persist_sliding_sync_connection_lazy_members_txn(
txn,
connection_key,
connection_position,
per_connection_state.room_lazy_membership,
)
return connection_position
@cached(iterable=True, max_entries=100000)
@@ -446,6 +456,23 @@ class SlidingSyncStore(SQLBaseStore):
"""
txn.execute(sql, (connection_key, connection_position))
# Move any lazy membership entries for this connection position to have
# `NULL` connection position, indicating that it applies to all future
# positions on this connection. This is safe because we have deleted all
# other (potentially forked) connection positions, and so all future
# positions in this connection will be a continuation of the current
# position. Thus any lazy membership entries we have sent down will still
# be valid.
self.db_pool.simple_update_txn(
txn,
table="sliding_sync_connection_lazy_members",
keyvalues={
"connection_key": connection_key,
"connection_position": connection_position,
},
updatevalues={"connection_position": None},
)
# Fetch and create a mapping from required state ID to the actual
# required state for the connection.
rows = self.db_pool.simple_select_list_txn(
@@ -525,8 +552,153 @@ class SlidingSyncStore(SQLBaseStore):
receipts=RoomStatusMap(receipts),
account_data=RoomStatusMap(account_data),
room_configs=room_configs,
room_lazy_membership={},
)
async def get_sliding_sync_connection_lazy_members(
self,
connection_position: int,
room_id: str,
user_ids: AbstractSet[str],
) -> Mapping[str, int]:
"""Get which user IDs in the room we have previously sent lazy
membership for.
Args:
connection_position: The sliding sync connection position.
room_id: The room ID to get lazy members for.
user_ids: The user IDs to check whether we've previously sent
because of lazy membership.
Returns:
The mapping of user IDs to the last seen timestamp for those user
IDs. Only includes user IDs that we have previously sent lazy
membership for, and so may be a subset of the `user_ids` passed in.
"""
def get_sliding_sync_connection_lazy_members_txn(
txn: LoggingTransaction,
) -> Mapping[str, int]:
user_clause, user_args = make_in_list_sql_clause(
txn.database_engine, "user_id", user_ids
)
# Fetch all the lazy membership entries for the given connection,
# room and user IDs. We don't have the `connection_key` here, so we
# join against `sliding_sync_connection_positions` to get it.
#
# Beware that there are two `connection_position` columns in the
# query which are different, the one in
# `sliding_sync_connection_positions` is the one we match to get the
# connection_key, whereas the one in
# `sliding_sync_connection_lazy_members` is what we filter against
# (it may be null or the same as the one passed in).
#
# FIXME: We should pass in `connection_key` here to avoid the join.
# We don't do this currently as the caller doesn't have it handy.
sql = f"""
SELECT user_id, members.connection_position, last_seen_ts
FROM sliding_sync_connection_lazy_members AS members
INNER JOIN sliding_sync_connection_positions AS pos USING (connection_key)
WHERE pos.connection_position = ? AND room_id = ? AND {user_clause}
"""
txn.execute(sql, (connection_position, room_id, *user_args))
# Filter out any cache entries that only apply to forked connection
# positions. Entries with `NULL` `connection_position` apply to all
# positions on the connection.
return {
user_id: last_seen_ts
for user_id, db_connection_position, last_seen_ts in txn
if db_connection_position == connection_position
or db_connection_position is None
}
return await self.db_pool.runInteraction(
"get_sliding_sync_connection_lazy_members",
get_sliding_sync_connection_lazy_members_txn,
db_autocommit=True, # Avoid transaction for single read
)
def _persist_sliding_sync_connection_lazy_members_txn(
self,
txn: LoggingTransaction,
connection_key: int,
new_connection_position: int,
all_changes: dict[str, RoomLazyMembershipChanges],
) -> None:
"""Persist that we have sent lazy membership for the given user IDs."""
now = self.clock.time_msec()
# Figure out which cache entries to add or update.
#
# These are either a) new entries we've never sent before (i.e. with a
# None last_seen_ts), or b) where the `last_seen_ts` is old enough that
# we want to update it.
#
# We don't update the timestamp every time to avoid hammering the DB
# with writes, and we don't need the timestamp to be precise. It is used
# to evict old entries that haven't been used in a while.
to_update: list[tuple[str, str]] = []
for room_id, room_changes in all_changes.items():
user_ids_to_update = room_changes.get_returned_user_ids_to_update(
self.clock
)
to_update.extend((room_id, user_id) for user_id in user_ids_to_update)
if to_update:
# Upsert the new/updated entries.
#
# Ignore conflicts where the existing entry has a different
# connection position (i.e. from a forked connection position). This
# may mean that we lose some updates, but that's acceptable as this
# is a cache and its fine for it to *not* include rows. (Downstream
# this will cause us to maybe send a few extra lazy members down
# sync, but we're allowed to send extra members).
sql = """
INSERT INTO sliding_sync_connection_lazy_members
(connection_key, connection_position, room_id, user_id, last_seen_ts)
VALUES {value_placeholder}
ON CONFLICT (connection_key, room_id, user_id)
DO UPDATE SET last_seen_ts = EXCLUDED.last_seen_ts
WHERE sliding_sync_connection_lazy_members.connection_position IS NULL
OR sliding_sync_connection_lazy_members.connection_position = EXCLUDED.connection_position
"""
args = [
(connection_key, new_connection_position, room_id, user_id, now)
for room_id, user_id in to_update
]
if isinstance(self.database_engine, PostgresEngine):
sql = sql.format(value_placeholder="?")
txn.execute_values(sql, args, fetch=False)
else:
sql = sql.format(value_placeholder="(?, ?, ?, ?, ?)")
txn.execute_batch(sql, args)
# Remove any invalidated entries.
to_remove: list[tuple[str, str]] = []
for room_id, room_changes in all_changes.items():
for user_id in room_changes.invalidated_user_ids:
to_remove.append((room_id, user_id))
if to_remove:
# We don't try and match on connection position here: it's fine to
# remove it from all forks. This is a cache so it's fine to expire
# arbitrary entries, the worst that happens is we send a few extra
# lazy members down sync.
self.db_pool.simple_delete_many_batch_txn(
txn,
table="sliding_sync_connection_lazy_members",
keys=("connection_key", "room_id", "user_id"),
values=[
(connection_key, room_id, user_id) for room_id, user_id in to_remove
],
)
@wrap_as_background_process("delete_old_sliding_sync_connections")
async def delete_old_sliding_sync_connections(self) -> None:
"""Delete sliding sync connections that have not been used for a long time."""
@@ -564,6 +736,10 @@ class PerConnectionStateDB:
room_configs: Mapping[str, "RoomSyncConfig"]
room_lazy_membership: dict[str, RoomLazyMembershipChanges]
"""Lazy membership changes to persist alongside this state. Only used
when persisting."""
@staticmethod
async def from_state(
per_connection_state: "MutablePerConnectionState", store: "DataStore"
@@ -618,6 +794,7 @@ class PerConnectionStateDB:
receipts=RoomStatusMap(receipts),
account_data=RoomStatusMap(account_data),
room_configs=per_connection_state.room_configs.maps[0],
room_lazy_membership=per_connection_state.room_lazy_membership,
)
async def to_state(self, store: "DataStore") -> "PerConnectionState":
+1 -1
@@ -983,7 +983,7 @@ class StateMapWrapper(dict[StateKey, str]):
raise Exception("State map was filtered and doesn't include: %s", key)
return super().__getitem__(key)
@overload # type: ignore[override]
@overload
def get(self, key: StateKey, default: None = None, /) -> str | None: ...
@overload
def get(self, key: StateKey, default: str, /) -> str: ...
@@ -0,0 +1,60 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 Element Creations Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Tracks which member states have been sent to the client for lazy-loaded
-- members in sliding sync. This is a *cache* as it doesn't matter if we send
-- down members we've previously sent down, i.e. it's safe to delete any rows.
--
-- We could have tracked these as part of the
-- `sliding_sync_connection_required_state` table, but that would bloat that
-- table significantly as most rooms will have lazy-loaded members. We want to
-- keep that table small as we always pull out all rows for the connection for
-- every request, so storing lots of data there would be bad for performance. To
-- keep that table small we also deduplicate the requested state across
-- different rooms, which if we stored lazy members there would prevent.
--
-- We track a *rough* `last_seen_ts` for each user in each room which indicates
-- when we last would've sent their member state to the client. `last_seen_ts`
-- is used so that we can remove members which haven't been seen for a while to
-- save space. This is a *rough* timestamp as we don't want to update the
-- timestamp every time to avoid hammering the DB with writes, and we don't need
-- the timestamp to be precise (as it is used to evict old entries that haven't
-- been used in a while).
--
-- Care must be taken when handling "forked" positions, i.e. we have responded
-- to a request with a position and then get another different request using the
-- previous position as a base. We track this by including a
-- `connection_position` for newly inserted rows. When we advance the position
-- we set this to NULL for all rows which were present at that position, and
-- delete all other rows. When reading rows we can then filter out any rows
-- which have a non-NULL `connection_position` which is not the current
-- position.
--
-- I.e. `connection_position` is NULL for rows which are valid for *all*
-- positions on the connection, and is non-NULL for rows which are only valid
-- for a specific position.
--
-- When invalidating rows, we can just delete them. Technically this could
-- invalidate for a forked position, but this is acceptable as equivalent to a
-- cache eviction.
CREATE TABLE sliding_sync_connection_lazy_members (
connection_key BIGINT NOT NULL REFERENCES sliding_sync_connections(connection_key) ON DELETE CASCADE,
connection_position BIGINT REFERENCES sliding_sync_connection_positions(connection_position) ON DELETE CASCADE,
room_id TEXT NOT NULL,
user_id TEXT NOT NULL,
last_seen_ts BIGINT NOT NULL
);
CREATE UNIQUE INDEX sliding_sync_connection_lazy_members_idx ON sliding_sync_connection_lazy_members (connection_key, room_id, user_id);
CREATE INDEX sliding_sync_connection_lazy_members_pos_idx ON sliding_sync_connection_lazy_members (connection_key, connection_position) WHERE connection_position IS NOT NULL;
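As a toy illustration of the forked-position semantics described in the comment above (an assumption-level sketch, not part of the schema), the read path treats a NULL `connection_position` as valid for every position on the connection, while a non-NULL value only counts while that exact position is current:

# Hypothetical rows: (user_id, connection_position)
rows = [
    ("@alice:example.org", None),  # valid for all positions on this connection
    ("@bob:example.org", 7),       # only valid while position 7 is current
]
current_position = 8
visible = {user for user, pos in rows if pos is None or pos == current_position}
# -> {"@alice:example.org"}; Bob's row belonged to a forked position and is ignored.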
@@ -0,0 +1,27 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 Element Creations, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.
-- Add a timestamp recording when each piece of media was quarantined, so that
-- quarantined media can be listed in a stable order.
--
-- Existing quarantined rows predate this column, so they are backfilled with a
-- placeholder value of 0 below; media that is not quarantined keeps a NULL value.
ALTER TABLE local_media_repository ADD COLUMN quarantined_ts BIGINT;
ALTER TABLE remote_media_cache ADD COLUMN quarantined_ts BIGINT;
UPDATE local_media_repository SET quarantined_ts = 0 WHERE quarantined_by IS NOT NULL;
UPDATE remote_media_cache SET quarantined_ts = 0 WHERE quarantined_by IS NOT NULL;
-- Note: We *probably* should have an index on quarantined_ts, but we're going
-- to try to defer that to a future migration after seeing the performance impact.
+89 -1
@@ -49,12 +49,21 @@ from synapse.types import (
UserID,
)
from synapse.types.rest.client import SlidingSyncBody
from synapse.util.clock import Clock
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.handlers.relations import BundledAggregations
logger = logging.getLogger(__name__)
# How often to update the last seen timestamp for lazy members.
#
# We don't update the timestamp every time to avoid hammering the DB with
# writes, and we don't need the timestamp to be precise (as it is used to evict
# old entries that haven't been used in a while).
LAZY_MEMBERS_UPDATE_INTERVAL = Duration(hours=1)
class SlidingSyncConfig(SlidingSyncBody):
"""
@@ -891,6 +900,69 @@ class PerConnectionState:
return len(self.rooms) + len(self.receipts) + len(self.room_configs)
@attr.s(auto_attribs=True)
class RoomLazyMembershipChanges:
"""Changes to lazily-loaded room memberships for a given room."""
returned_user_id_to_last_seen_ts_map: Mapping[str, int | None] = attr.Factory(dict)
"""Map from user ID to timestamp for users whose membership we have lazily
loaded in this room in this request. The timestamp indicates the time we
previously needed the membership, or None if we sent it down for the first
time in this request.
We track a *rough* `last_seen_ts` for each user in each room which indicates
when we last would've sent their member state to the client. This is used so
that we can remove members which haven't been seen for a while to save
space.
Note: this will include users whose membership we would have sent down but
didn't due to us having previously sent them.
"""
invalidated_user_ids: AbstractSet[str] = attr.Factory(set)
"""Set of user IDs whose latest membership we have *not* sent down"""
def get_returned_user_ids_to_update(self, clock: Clock) -> StrCollection:
"""Get the user IDs whose last seen timestamp we need to update in the
database.
This is a subset of user IDs in `returned_user_id_to_last_seen_ts_map`,
whose timestamp is either None (first time we've sent them) or older
than `LAZY_MEMBERS_UPDATE_INTERVAL`.
We only update the timestamp in the database every so often to avoid
hammering the DB with writes. We don't need the timestamp to be precise,
as the timestamp is used to evict old entries that haven't been used in
a while.
"""
now_ms = clock.time_msec()
return [
user_id
for user_id, last_seen_ts in self.returned_user_id_to_last_seen_ts_map.items()
if last_seen_ts is None
or now_ms - last_seen_ts >= LAZY_MEMBERS_UPDATE_INTERVAL.as_millis()
]
def has_updates(self, clock: Clock) -> bool:
"""Check if there are any updates to the lazy membership changes.
Called to check if we need to persist changes to the lazy membership
state for the room. We want to avoid persisting the state if there are
no changes, to avoid unnecessary writes (and cache misses due to new
connection position).
"""
# We consider there to be updates if there are any invalidated user
# IDs...
if self.invalidated_user_ids:
return True
# ...or if any of the returned user IDs need their last seen timestamp
# updating in the database.
return bool(self.get_returned_user_ids_to_update(clock))
@attr.s(auto_attribs=True)
class MutablePerConnectionState(PerConnectionState):
"""A mutable version of `PerConnectionState`"""
@@ -903,12 +975,28 @@ class MutablePerConnectionState(PerConnectionState):
room_configs: typing.ChainMap[str, RoomSyncConfig]
def has_updates(self) -> bool:
# A map from room ID to the lazily-loaded memberships needed for the
# request in that room.
room_lazy_membership: dict[str, RoomLazyMembershipChanges] = attr.Factory(dict)
def has_updates(self, clock: Clock) -> bool:
"""Check if there are any updates to the per-connection state that need
persisting.
It is important that we don't spuriously do persistence, as that will
always generate a new connection position which will invalidate some of
the caches. It doesn't need to be perfect, but we should avoid always
generating new connection positions when doing lazy loading
"""
return (
bool(self.rooms.get_updates())
or bool(self.receipts.get_updates())
or bool(self.account_data.get_updates())
or bool(self.get_room_config_updates())
or any(
change.has_updates(clock)
for change in self.room_lazy_membership.values()
)
)
def get_room_config_updates(self) -> Mapping[str, RoomSyncConfig]:
+32 -10
@@ -29,6 +29,7 @@ from twisted.internet.interfaces import IDelayedCall
from twisted.internet.task import LoopingCall
from synapse.logging import context
from synapse.logging.loggers import ExplicitlyConfiguredLogger
from synapse.types import ISynapseThreadlessReactor
from synapse.util import log_failure
from synapse.util.duration import Duration
@@ -39,6 +40,25 @@ P = ParamSpec("P")
logger = logging.getLogger(__name__)
original_logger_class = logging.getLoggerClass()
logging.setLoggerClass(ExplicitlyConfiguredLogger)
clock_debug_logger = logging.getLogger("synapse.util.clock.debug")
"""
A logger for debugging what is scheduling calls.
Ideally, these wouldn't be gated behind an `ExplicitlyConfiguredLogger` as including logs
from this logger would be helpful to track when things are being scheduled. However, for
these logs to be meaningful, they need to include a stack trace to show what initiated the
call in the first place.
Since the stack traces can create a lot of noise and make the logs hard to read (unless you're
specifically debugging scheduling issues) we want users to opt-in to seeing these logs. To enable
this, they must explicitly set `synapse.util.clock.debug` in the logging configuration. Note that
this setting won't inherit the log level from the parent logger.
"""
# Restore the original logger class
logging.setLoggerClass(original_logger_class)
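As a rough example of opting in to these scheduling-debug logs (an assumption about the usual setup, not part of this change): the logger must be given an explicit level. In a deployed Synapse that would normally be an entry for `synapse.util.clock.debug` under `loggers:` in the log config YAML; from Python it amounts to the following.

import logging

# Explicitly configure the logger; as described above the level is not
# inherited from the parent logger, so it has to be set directly.
logging.getLogger("synapse.util.clock.debug").setLevel(logging.DEBUG)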
class Clock:
"""
@@ -174,7 +194,7 @@ class Clock:
looping_call_context_string = "looping_call_now"
def wrapped_f(*args: P.args, **kwargs: P.kwargs) -> Deferred:
logger.debug(
clock_debug_logger.debug(
"%s(%s): Executing callback", looping_call_context_string, instance_id
)
@@ -222,7 +242,7 @@ class Clock:
d.addErrback(log_failure, "Looping call died", consumeErrors=False)
self._looping_calls.append(call)
logger.debug(
clock_debug_logger.debug(
"%s(%s): Scheduled looping call every %sms later",
looping_call_context_string,
instance_id,
@@ -283,7 +303,7 @@ class Clock:
raise Exception("Cannot start delayed call. Clock has been shutdown")
def wrapped_callback(*args: Any, **kwargs: Any) -> None:
logger.debug("call_later(%s): Executing callback", call_id)
clock_debug_logger.debug("call_later(%s): Executing callback", call_id)
assert context.current_context() is context.SENTINEL_CONTEXT, (
"Expected `call_later` callback from the reactor to start with the sentinel logcontext "
@@ -327,7 +347,7 @@ class Clock:
delay.as_secs(), wrapped_callback, *args, **kwargs
) # type: ignore[call-later-not-tracked]
logger.debug(
clock_debug_logger.debug(
"call_later(%s): Scheduled call for %ss later (tracked for shutdown: %s)",
call_id,
delay,
@@ -347,7 +367,7 @@ class Clock:
self, wrapped_call: "DelayedCallWrapper", ignore_errs: bool = False
) -> None:
try:
logger.debug(
clock_debug_logger.debug(
"cancel_call_later: cancelling scheduled call %s", wrapped_call.call_id
)
wrapped_call.delayed_call.cancel()
@@ -367,7 +387,7 @@ class Clock:
# will result in the call removing itself from the map mid-iteration.
for call_id, call in list(self._call_id_to_delayed_call.items()):
try:
logger.debug(
clock_debug_logger.debug(
"cancel_all_delayed_calls: cancelling scheduled call %s", call_id
)
call.cancel()
@@ -396,7 +416,9 @@ class Clock:
instance_id = random_string_insecure_fast(5)
def wrapped_callback(*args: Any, **kwargs: Any) -> None:
logger.debug("call_when_running(%s): Executing callback", instance_id)
clock_debug_logger.debug(
"call_when_running(%s): Executing callback", instance_id
)
# Since this callback can be invoked immediately if the reactor is already
# running, we can't always assume that we're running in the sentinel
@@ -436,7 +458,7 @@ class Clock:
# callWhenRunning should be called.
self._reactor.callWhenRunning(wrapped_callback, *args, **kwargs) # type: ignore[prefer-synapse-clock-call-when-running]
logger.debug(
clock_debug_logger.debug(
"call_when_running(%s): Scheduled call",
instance_id,
# Find out who is scheduling the call which makes it easy to follow in the
@@ -472,7 +494,7 @@ class Clock:
instance_id = random_string_insecure_fast(5)
def wrapped_callback(*args: Any, **kwargs: Any) -> None:
logger.debug(
clock_debug_logger.debug(
"add_system_event_trigger(%s): Executing %s %s callback",
instance_id,
phase,
@@ -509,7 +531,7 @@ class Clock:
# logcontext to the reactor
context.run_in_background(callback, *args, **kwargs)
logger.debug(
clock_debug_logger.debug(
"add_system_event_trigger(%s) for %s %s",
instance_id,
phase,
+3 -3
@@ -19,7 +19,7 @@
#
import queue
from typing import Any, BinaryIO, cast
from typing import Any, BinaryIO, Optional, Union, cast
from twisted.internet import threads
from twisted.internet.defer import Deferred
@@ -50,7 +50,7 @@ class BackgroundFileConsumer:
self._reactor: ISynapseReactor = reactor
# Producer we're registered with
self._producer: IPushProducer | IPullProducer | None = None
self._producer: Optional[Union[IPushProducer, IPullProducer]] = None
# True if PushProducer, false if PullProducer
self.streaming = False
@@ -72,7 +72,7 @@ class BackgroundFileConsumer:
self._write_exception: Exception | None = None
def registerProducer(
self, producer: IPushProducer | IPullProducer, streaming: bool
self, producer: Union[IPushProducer, IPullProducer], streaming: bool
) -> None:
"""Part of IConsumer interface
+2 -1
@@ -22,6 +22,7 @@
import heapq
from itertools import islice
from typing import (
Any,
Callable,
Collection,
Generator,
@@ -33,7 +34,7 @@ from typing import (
TypeVar,
)
T = TypeVar("T")
T = TypeVar("T", bound=Any)
S = TypeVar("S", bound="_SelfSlice")
+142 -77
@@ -19,7 +19,10 @@
#
import gc
import sys
import weakref
from typing import Any
from unittest.mock import patch
from synapse.app.homeserver import SynapseHomeServer
from synapse.logging.context import LoggingContext
@@ -81,45 +84,12 @@ class HomeserverCleanShutdownTestCase(HomeserverTestCase):
# Ensure the `HomeServer` has been garbage collected by attempting to use the
# weakref to it.
if hs_ref() is not None:
self.fail("HomeServer reference should not be valid at this point")
# To help debug this test when it fails, it is useful to leverage the
# `objgraph` module.
# The following code serves as an example of what I have found to be useful
# when tracking down references holding the `SynapseHomeServer` in memory:
#
# all_objects = gc.get_objects()
# for obj in all_objects:
# try:
# # These are a subset of types that are typically involved with
# # holding the `HomeServer` in memory. You may want to inspect
# # other types as well.
# if isinstance(obj, DataStore):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# db_obj = obj
# if isinstance(obj, SynapseHomeServer):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# synapse_hs = obj
# if isinstance(obj, SynapseSite):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# sysite = obj
# if isinstance(obj, DatabasePool):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# dbpool = obj
# except Exception:
# pass
#
# print(sys.getrefcount(hs_ref()), "refs to", hs_ref())
#
# # The following values for `max_depth` and `too_many` have been found to
# # render a useful amount of information without taking an overly long time
# # to generate the result.
# objgraph.show_backrefs(synapse_hs, max_depth=10, too_many=10)
hs_after_shutdown = hs_ref()
if hs_after_shutdown is not None:
self.fail(
"HomeServer reference should not be valid at this point "
f"{get_memory_debug_info_for_object(hs_after_shutdown)}",
)
@logcontext_clean
def test_clean_homeserver_shutdown_mid_background_updates(self) -> None:
@@ -165,42 +135,137 @@ class HomeserverCleanShutdownTestCase(HomeserverTestCase):
# Ensure the `HomeServer` has been garbage collected by attempting to use the
# weakref to it.
if hs_ref() is not None:
self.fail("HomeServer reference should not be valid at this point")
hs_after_shutdown = hs_ref()
if hs_after_shutdown is not None:
self.fail(
"HomeServer reference should not be valid at this point "
f"{get_memory_debug_info_for_object(hs_after_shutdown)}",
)
# To help debug this test when it fails, it is useful to leverage the
# `objgraph` module.
# The following code serves as an example of what I have found to be useful
# when tracking down references holding the `SynapseHomeServer` in memory:
#
# all_objects = gc.get_objects()
# for obj in all_objects:
# try:
# # These are a subset of types that are typically involved with
# # holding the `HomeServer` in memory. You may want to inspect
# # other types as well.
# if isinstance(obj, DataStore):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# db_obj = obj
# if isinstance(obj, SynapseHomeServer):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# synapse_hs = obj
# if isinstance(obj, SynapseSite):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# sysite = obj
# if isinstance(obj, DatabasePool):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# dbpool = obj
# except Exception:
# pass
#
# print(sys.getrefcount(hs_ref()), "refs to", hs_ref())
#
# # The following values for `max_depth` and `too_many` have been found to
# # render a useful amount of information without taking an overly long time
# # to generate the result.
# objgraph.show_backrefs(synapse_hs, max_depth=10, too_many=10)
@logcontext_clean
def test_clean_homeserver_shutdown_when_failed_to_setup(self) -> None:
"""
Ensure the `SynapseHomeServer` can be fully shutdown and garbage collected if it
fails to be `setup`.
"""
self.reactor, self.clock = get_clock()
# Patch `hs.setup()` to do nothing, so that the homeserver is not fully setup.
with patch.object(SynapseHomeServer, "setup", return_value=None) as mock_setup:
# Patch out the call to `start_test_homeserver` since we want access to the
# homeserver even before the server is setup (let alone started)
with patch("tests.server.start_test_homeserver", return_value=None):
self.hs = setup_test_homeserver(
cleanup_func=self.addCleanup,
reactor=self.reactor,
homeserver_to_use=SynapseHomeServer,
clock=self.clock,
)
# Sanity check that we patched the correct method (make sure it was the
# thing that was called)
mock_setup.assert_called_once_with()
hs_ref = weakref.ref(self.hs)
# Run the reactor so any `callWhenRunning` functions can be cleared out.
self.reactor.run()
# This would normally happen as part of `HomeServer.shutdown` but the `MemoryReactor`
# we use in tests doesn't handle this properly (see doc comment)
cleanup_test_reactor_system_event_triggers(self.reactor)
async def shutdown() -> None:
# Use a logcontext just to double-check that we don't mangle the logcontext
# during shutdown.
with LoggingContext(name="hs_shutdown", server_name=self.hs.hostname):
await self.hs.shutdown()
self.get_success(shutdown())
# Cleanup the internal reference in our test case
del self.hs
# Force garbage collection.
gc.collect()
# Ensure the `HomeServer` has been garbage collected by attempting to use the
# weakref to it.
hs_after_shutdown = hs_ref()
if hs_after_shutdown is not None:
self.fail(
"HomeServer reference should not be valid at this point "
f"{get_memory_debug_info_for_object(hs_after_shutdown)}",
)
def get_memory_debug_info_for_object(object: Any) -> dict[str, Any]:
"""
Gathers some useful information to make it easier to figure out why the `object` is
still in memory.
Args:
object: The object to gather debug information for.
"""
debug: dict[str, Any] = {}
if object is not None:
# The simplest tracing we can do is show the reference count for the object.
debug["reference_count"] = sys.getrefcount(object)
# Find the list of objects that directly refer to the object.
#
# Note: The `ref_count` can be >0 but `referrers` can be empty because
# all of the objects were frozen. Look at the
# `frozen_object_count` to detect this scenario.
referrers = gc.get_referrers(object)
debug["gc_referrer_count"] = len(referrers)
debug["gc_referrers"] = referrers
# We don't expect to see frozen objects in normal operation of the
# `multi_synapse` shard.
#
# We can see frozen objects if you forget to `freeze=False` when
# starting the `SynapseHomeServer`. Frozen objects mean they are
# never considered for garbage collection. If the
# `SynapseHomeServer` (or anything that references the homeserver)
# is frozen, the homeserver can never be garbage collected and will
# linger in memory forever.
freeze_count = gc.get_freeze_count()
debug["gc_global_frozen_object_count"] = freeze_count
# To help debug this test when it fails, it is useful to leverage the
# `objgraph` module.
# The following code serves as an example of what I have found to be useful
# when tracking down references holding the `SynapseHomeServer` in memory:
#
# all_objects = gc.get_objects()
# for obj in all_objects:
# try:
# # These are a subset of types that are typically involved with
# # holding the `HomeServer` in memory. You may want to inspect
# # other types as well.
# if isinstance(obj, DataStore):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# db_obj = obj
# if isinstance(obj, SynapseHomeServer):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# synapse_hs = obj
# if isinstance(obj, SynapseSite):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# sysite = obj
# if isinstance(obj, DatabasePool):
# print(sys.getrefcount(obj), "refs to", obj)
# if not isinstance(obj, weakref.ProxyType):
# dbpool = obj
# except Exception:
# pass
#
# print(sys.getrefcount(hs_ref()), "refs to", hs_ref())
#
# # The following values for `max_depth` and `too_many` have been found to
# # render a useful amount of information without taking an overly long time
# # to generate the result.
# objgraph.show_backrefs(synapse_hs, max_depth=10, too_many=10)
return debug
+11 -4
@@ -95,7 +95,12 @@ class KeyringTestCase(unittest.HomeserverTestCase):
def test_verify_json_objects_for_server_awaits_previous_requests(self) -> None:
mock_fetcher = Mock()
mock_fetcher.get_keys = Mock()
kr = keyring.Keyring(self.hs, key_fetchers=(mock_fetcher,))
kr = keyring.Keyring(
self.hs,
test_only_key_fetchers=[
mock_fetcher,
],
)
# a signed object that we are going to try to validate
key1 = signedjson.key.generate_signing_key("1")
@@ -286,7 +291,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
mock_fetcher = Mock()
mock_fetcher.get_keys = Mock(side_effect=get_keys)
kr = keyring.Keyring(
self.hs, key_fetchers=(StoreKeyFetcher(self.hs), mock_fetcher)
self.hs, test_only_key_fetchers=[StoreKeyFetcher(self.hs), mock_fetcher]
)
# sign the json
@@ -313,7 +318,7 @@ class KeyringTestCase(unittest.HomeserverTestCase):
mock_fetcher = Mock()
mock_fetcher.get_keys = Mock(side_effect=get_keys)
kr = keyring.Keyring(self.hs, key_fetchers=(mock_fetcher,))
kr = keyring.Keyring(self.hs, test_only_key_fetchers=[mock_fetcher])
json1: JsonDict = {}
signedjson.sign.sign_json(json1, "server1", key1)
@@ -363,7 +368,9 @@ class KeyringTestCase(unittest.HomeserverTestCase):
mock_fetcher1.get_keys = Mock(side_effect=get_keys1)
mock_fetcher2 = Mock()
mock_fetcher2.get_keys = Mock(side_effect=get_keys2)
kr = keyring.Keyring(self.hs, key_fetchers=(mock_fetcher1, mock_fetcher2))
kr = keyring.Keyring(
self.hs, test_only_key_fetchers=[mock_fetcher1, mock_fetcher2]
)
json1: JsonDict = {}
signedjson.sign.sign_json(json1, "server1", key1)
+371 -196
@@ -18,7 +18,7 @@
#
#
import logging
from typing import AbstractSet, Mapping
from typing import AbstractSet
from unittest.mock import patch
import attr
@@ -38,13 +38,17 @@ from synapse.handlers.sliding_sync import (
RoomSyncConfig,
StateValues,
_required_state_changes,
_RequiredStateChangesReturn,
)
from synapse.rest import admin
from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import JsonDict, StateMap, StreamToken, UserID, create_requester
from synapse.types.handlers.sliding_sync import PerConnectionState, SlidingSyncConfig
from synapse.types.handlers.sliding_sync import (
PerConnectionState,
SlidingSyncConfig,
)
from synapse.types.state import StateFilter
from synapse.util.clock import Clock
@@ -3827,12 +3831,11 @@ class RequiredStateChangesTestParameters:
previous_required_state_map: dict[str, set[str]]
request_required_state_map: dict[str, set[str]]
state_deltas: StateMap[str]
expected_with_state_deltas: tuple[
Mapping[str, AbstractSet[str]] | None, StateFilter
]
expected_without_state_deltas: tuple[
Mapping[str, AbstractSet[str]] | None, StateFilter
]
expected_with_state_deltas: _RequiredStateChangesReturn
expected_without_state_deltas: _RequiredStateChangesReturn
previously_returned_lazy_user_ids: AbstractSet[str] = frozenset()
request_lazy_load_user_ids: AbstractSet[str] = frozenset()
class RequiredStateChangesTestCase(unittest.TestCase):
@@ -3848,8 +3851,12 @@ class RequiredStateChangesTestCase(unittest.TestCase):
request_required_state_map={"type1": {"state_key"}},
state_deltas={("type1", "state_key"): "$event_id"},
# No changes
expected_with_state_deltas=(None, StateFilter.none()),
expected_without_state_deltas=(None, StateFilter.none()),
expected_with_state_deltas=_RequiredStateChangesReturn(
None, StateFilter.none()
),
expected_without_state_deltas=_RequiredStateChangesReturn(
None, StateFilter.none()
),
),
),
(
@@ -3862,14 +3869,14 @@ class RequiredStateChangesTestCase(unittest.TestCase):
"type2": {"state_key"},
},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"state_key"}, "type2": {"state_key"}},
# We should see the new type added
StateFilter.from_types([("type2", "state_key")]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key"}, "type2": {"state_key"}},
StateFilter.from_types([("type2", "state_key")]),
),
@@ -3885,7 +3892,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
"type2": {"state_key"},
},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"state_key"}, "type2": {"state_key"}},
@@ -3894,7 +3901,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
[("type1", "state_key"), ("type2", "state_key")]
),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key"}, "type2": {"state_key"}},
StateFilter.from_types(
[("type1", "state_key"), ("type2", "state_key")]
@@ -3909,14 +3916,14 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={"type": {"state_key1"}},
request_required_state_map={"type": {"state_key1", "state_key2"}},
state_deltas={("type", "state_key2"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a key so we should persist the changed required state
# config.
{"type": {"state_key1", "state_key2"}},
# We should see the new state_keys added
StateFilter.from_types([("type", "state_key2")]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type": {"state_key1", "state_key2"}},
StateFilter.from_types([("type", "state_key2")]),
),
@@ -3929,7 +3936,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={"type": {"state_key1"}},
request_required_state_map={"type": {"state_key2", "state_key3"}},
state_deltas={("type", "state_key2"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a key so we should persist the changed required state
# config.
#
@@ -3940,7 +3947,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
[("type", "state_key2"), ("type", "state_key3")]
),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type": {"state_key1", "state_key2", "state_key3"}},
StateFilter.from_types(
[("type", "state_key2"), ("type", "state_key3")]
@@ -3964,7 +3971,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
request_required_state_map={"type1": {"state_key"}},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `type2` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type2`, we see that we haven't sent it before
@@ -3975,7 +3982,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `type2` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type2` before).
@@ -3998,7 +4005,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
request_required_state_map={},
state_deltas={("type2", "state_key"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `type2` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type2`, we see that we haven't sent it before
@@ -4009,7 +4016,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `type2` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type2` before).
@@ -4029,7 +4036,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={"type": {"state_key1", "state_key2"}},
request_required_state_map={"type": {"state_key1"}},
state_deltas={("type", "state_key2"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `(type, state_key2)` since there's been a change
# to that state (persist the change to required state).
# That way next time, they request `(type, state_key2)`, we see
@@ -4041,7 +4048,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `(type, state_key2)` is no longer requested but since that
# state hasn't changed, nothing should change (we should still
# keep track that we've sent `(type, state_key1)` and `(type,
@@ -4073,11 +4080,11 @@ class RequiredStateChangesTestCase(unittest.TestCase):
("other_type", "state_key"): "$event_id",
},
# We've added a wildcard, so we persist the change and request everything
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key2"}, StateValues.WILDCARD: {"state_key"}},
StateFilter.all(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key2"}, StateValues.WILDCARD: {"state_key"}},
StateFilter.all(),
),
@@ -4103,13 +4110,13 @@ class RequiredStateChangesTestCase(unittest.TestCase):
("other_type", "state_key"): "$event_id",
},
# We've removed a type wildcard, so we persist the change but don't request anything
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key2"}},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key2"}},
# We don't need to request anything more if they are requesting
# less state now
@@ -4129,11 +4136,11 @@ class RequiredStateChangesTestCase(unittest.TestCase):
state_deltas={("type2", "state_key"): "$event_id"},
# We've added a wildcard state_key, so we persist the change and
# request all of the state for that type
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key"}, "type2": {StateValues.WILDCARD}},
StateFilter.from_types([("type2", None)]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key"}, "type2": {StateValues.WILDCARD}},
StateFilter.from_types([("type2", None)]),
),
@@ -4151,7 +4158,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
state_deltas={("type2", "state_key"): "$event_id"},
# We've removed a state_key wildcard, so we persist the change and
# request nothing
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{"type1": {"state_key"}},
# We don't need to request anything more if they are requesting
# less state now
@@ -4160,7 +4167,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# We've removed a state_key wildcard but there have been no matching
# state changes, so no changes needed, just persist the
# `request_required_state_map` as-is.
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
None,
# We don't need to request anything more if they are requesting
# less state now
@@ -4180,7 +4187,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
request_required_state_map={"type1": {"state_key1"}},
state_deltas={("type1", "state_key3"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've removed some state keys from the type, but only state_key3 was
# changed so only that one should be removed.
{"type1": {"state_key1", "state_key2"}},
@@ -4188,7 +4195,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# No changes needed, just persist the
# `request_required_state_map` as-is
None,
@@ -4207,14 +4214,14 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={},
request_required_state_map={"type1": {StateValues.ME}},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a type so we should persist the changed required state
# config.
{"type1": {StateValues.ME}},
# We should see the new state_keys added
StateFilter.from_types([("type1", "@user:test")]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {StateValues.ME}},
StateFilter.from_types([("type1", "@user:test")]),
),
@@ -4229,7 +4236,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={"type1": {StateValues.ME}},
request_required_state_map={},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `type1` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type1`, we see that we haven't sent it before
@@ -4240,7 +4247,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `type1` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type1` before).
@@ -4260,14 +4267,14 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={},
request_required_state_map={"type1": {"@user:test"}},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# We've added a type so we should persist the changed required state
# config.
{"type1": {"@user:test"}},
# We should see the new state_keys added
StateFilter.from_types([("type1", "@user:test")]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{"type1": {"@user:test"}},
StateFilter.from_types([("type1", "@user:test")]),
),
@@ -4282,7 +4289,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={"type1": {"@user:test"}},
request_required_state_map={},
state_deltas={("type1", "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `type1` since there's been a change to that state,
# (persist the change to required state). That way next time,
# they request `type1`, we see that we haven't sent it before
@@ -4293,7 +4300,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `type1` is no longer requested but since that state hasn't
# changed, nothing should change (we should still keep track
# that we've sent `type1` before).
@@ -4313,13 +4320,13 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
state_deltas={(EventTypes.Member, "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# If a "$LAZY" has been added or removed we always update the
# required state to what was requested for simplicity.
{EventTypes.Member: {StateValues.LAZY}},
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{EventTypes.Member: {StateValues.LAZY}},
StateFilter.none(),
),
@@ -4334,7 +4341,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={},
state_deltas={(EventTypes.Member, "@user:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# If a "$LAZY" has been added or removed we always update the
# required state to what was requested for simplicity.
{},
@@ -4342,7 +4349,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# less state now
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `EventTypes.Member` is no longer requested but since that
# state hasn't changed, nothing should change (we should still
# keep track that we've sent `EventTypes.Member` before).
@@ -4361,41 +4368,40 @@ class RequiredStateChangesTestCase(unittest.TestCase):
we're sending down another response without any timeline events.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
EventTypes.Member: {
StateValues.LAZY,
"@user2:test",
"@user3:test",
}
},
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids={"@user2:test", "@user3:test"},
request_lazy_load_user_ids=set(),
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# The `request_required_state_map` hasn't changed
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
# Previous request did not include any explicit members,
# so there is no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Remove "@user2:test" since that state has changed and is no
# longer being requested anymore. Since something was removed,
# we should persist the change to required state. That way next
# time, they request "@user2:test", we see that we haven't sent
# it before and send the new state. (we should still keep track
# that we've sent specific `EventTypes.Member` before)
{
EventTypes.Member: {
StateValues.LAZY,
"@user3:test",
}
},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
lazy_members_invalidated={"@user2:test"},
),
expected_without_state_deltas=(
# We're not requesting any specific `EventTypes.Member` now but
# since that state hasn't changed, nothing should change (we
# should still keep track that we've sent specific
# `EventTypes.Member` before).
expected_without_state_deltas=_RequiredStateChangesReturn(
# The `request_required_state_map` hasn't changed
None,
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Nothing should change (we should still keep track that
# we've sent specific `EventTypes.Member` before).
lazy_members_invalidated=frozenset(),
),
),
),
@@ -4407,50 +4413,37 @@ class RequiredStateChangesTestCase(unittest.TestCase):
we're sending down another response with a new event from user4.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
EventTypes.Member: {
StateValues.LAZY,
"@user2:test",
"@user3:test",
}
},
request_required_state_map={
EventTypes.Member: {StateValues.LAZY, "@user4:test"}
},
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids={"@user2:test", "@user3:test"},
request_lazy_load_user_ids={"@user4:test"},
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=(
# Since "@user4:test" was added, we should persist the changed
# required state config.
#
# Also remove "@user2:test" since that state has changed and is no
# longer being requested anymore. Since something was removed,
# we also should persist the change to required state. That way next
# time, they request "@user2:test", we see that we haven't sent
# it before and send the new state. (we should still keep track
expected_with_state_deltas=_RequiredStateChangesReturn(
# The `request_required_state_map` hasn't changed
None,
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Remove "@user2:test" since that state has changed and
# is no longer being requested anymore. Since something
# was removed, we also should persist the change to
# required state. That way next time, they request
# "@user2:test", we see that we haven't sent it before
# and send the new state. (we should still keep track
# that we've sent specific `EventTypes.Member` before)
{
EventTypes.Member: {
StateValues.LAZY,
"@user3:test",
"@user4:test",
}
},
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
lazy_members_invalidated={"@user2:test"},
),
expected_without_state_deltas=(
# Since "@user4:test" was added, we should persist the changed
# required state config.
{
EventTypes.Member: {
StateValues.LAZY,
"@user2:test",
"@user3:test",
"@user4:test",
}
},
expected_without_state_deltas=_RequiredStateChangesReturn(
# The `request_required_state_map` hasn't changed
None,
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
lazy_members_invalidated=frozenset(),
),
),
),
@@ -4464,40 +4457,81 @@ class RequiredStateChangesTestCase(unittest.TestCase):
EventTypes.Member: {"@user2:test", "@user3:test"}
},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids=frozenset(),
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Since `StateValues.LAZY` was added, we should persist the
# changed required state config.
#
# Also remove "@user2:test" since that state has changed and is no
# longer being requested anymore. Since something was removed,
# we also should persist the change to required state. That way next
# time, they request "@user2:test", we see that we haven't sent
# it before and send the new state. (we should still keep track
# that we've sent specific `EventTypes.Member` before)
{
EventTypes.Member: {
StateValues.LAZY,
"@user3:test",
}
},
# We don't need to request anything more if they are requesting
# less state now
{EventTypes.Member: {StateValues.LAZY}},
# No users are being lazy loaded, so nothing to request.
StateFilter.none(),
# Remember the fact that we've sent @user3 down before,
# but not @user2 as that has been invalidated.
extra_users_to_add_to_lazy_cache={"@user3:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# Since `StateValues.LAZY` was added, we should persist the
# changed required state config.
{
EventTypes.Member: {
StateValues.LAZY,
"@user2:test",
"@user3:test",
}
},
# We don't need to request anything more if they are requesting
# less state now
{EventTypes.Member: {StateValues.LAZY}},
# No users are being lazy loaded, so nothing to request.
StateFilter.none(),
# Remember the fact that we've sent the users down before.
extra_users_to_add_to_lazy_cache={"@user2:test", "@user3:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
),
),
(
"state_key_expand_lazy_keep_previous_memberships_need_previous_sent",
"""
Test expanding the `required_state` to lazy-loading room
members. If a previously explicit membership is requested then
we should not send it again (as it was already sent before).
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
EventTypes.Member: {"@user2:test", "@user3:test"}
},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids={"@user3:test"},
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=_RequiredStateChangesReturn(
# Since `StateValues.LAZY` was added, we should persist the
# changed required state config.
{EventTypes.Member: {StateValues.LAZY}},
# We have already sent @user3 down before.
#
# `@user3:test` is required for lazy loading, but we've
# already sent it down before (due to it being in
# `previous_required_state_map`), so we don't need to
# request it again.
StateFilter.none(),
# Remember the fact that we've sent @user3 down before,
# but not @user2 as that has been invalidated.
extra_users_to_add_to_lazy_cache={"@user3:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
expected_without_state_deltas=_RequiredStateChangesReturn(
# Since `StateValues.LAZY` was added, we should persist the
# changed required state config.
{EventTypes.Member: {StateValues.LAZY}},
# We have already sent @user3 down before.
#
# `@user3:test` is required for lazy loading, but we've
# already sent it down before (due to it being in
# `previous_required_state_map`), so we don't need to
# request it again.
StateFilter.none(),
# Remember the fact that we've sent the users down before.
extra_users_to_add_to_lazy_cache={"@user2:test", "@user3:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
),
),
@@ -4507,36 +4541,33 @@ class RequiredStateChangesTestCase(unittest.TestCase):
Test retracting the `required_state` to no longer lazy-loading room members.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
EventTypes.Member: {
StateValues.LAZY,
"@user2:test",
"@user3:test",
}
},
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={},
previously_returned_lazy_user_ids={"@user2:test", "@user3:test"},
request_lazy_load_user_ids=set(),
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Remove `EventTypes.Member` since there's been a change to that
# state, (persist the change to required state). That way next
# time, they request `EventTypes.Member`, we see that we haven't
# sent it before and send the new state. (if we were tracking
# that we sent any other state, we should still keep track
# that).
#
# This acts the same as the `simple_remove_type` test. It's
# possible that we could remember the specific `state_keys` that
# we have sent down before but this currently just acts the same
# as if a whole `type` was removed. Perhaps it's good that we
# "garbage collect" and forget what we've sent before for a
# given `type` when the client stops caring about a certain
# `type`.
# state, (persist the change to required state).
{},
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Explicitly remove the now invalidated @user2:test
# membership.
#
# We don't invalidate @user3:test as that membership
# hasn't changed. We continue to store the existing lazy
# members since they might be useful for future
# requests. (Alternatively, we could invalidate all
# members in the room when the client stops lazy
# loading, but we opt to keep track of them).
lazy_members_invalidated={"@user2:test"},
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# `EventTypes.Member` is no longer requested but since that
# state hasn't changed, nothing should change (we should still
# keep track that we've sent `EventTypes.Member` before).
@@ -4544,13 +4575,20 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# We don't need to request anything more if they are requesting
# less state now
StateFilter.none(),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Nothing has been invalidated.
lazy_members_invalidated=frozenset(),
),
),
),
(
"state_key_retract_lazy_keep_previous_memberships_with_new_memberships",
"state_key_retract_lazy_keep_previous_explicit_memberships",
"""
Test retracting the `required_state` to no longer lazy-loading room members.
Test removing explicit memberships from the `required_state`
when lazy-loading room members tracks previously sent
memberships.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
@@ -4560,39 +4598,144 @@ class RequiredStateChangesTestCase(unittest.TestCase):
"@user3:test",
}
},
request_required_state_map={EventTypes.Member: {"@user4:test"}},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids={"@user3:test"},
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
# Since an explicit membership was removed, we record
# the new required state config and move them to lazy
# members.
{EventTypes.Member: {StateValues.LAZY}},
# We have already sent @user3 down before.
#
# `@user3:test` is required for lazy loading, but we've
# already sent it down before (due to it being in
# `previous_required_state_map`), so we don't need to
# request it again.
StateFilter.none(),
# Remember the fact that we've sent @user3 down before,
# but not @user2 as that has been invalidated.
extra_users_to_add_to_lazy_cache={"@user3:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
expected_without_state_deltas=_RequiredStateChangesReturn(
# While some explicit memberships were removed, there were no
# state changes, so we don't need to persist the new required
# state config yet.
None,
# We have already sent @user3 down before.
#
# `@user3:test` is required for lazy loading, but we've
# already sent it down before (due to it being in
# `previous_required_state_map`), so we don't need to
# request it again.
StateFilter.none(),
# No relevant state has changed and we don't persist the changed
# required_state_map, so we don't yet move the explicit members
# to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
),
),
(
"state_key_retract_lazy_keep_previous_explicit_me_memberships",
"""
Test removing explicit $ME memberships from the `required_state`
when lazy-loading room members tracks previously sent
memberships.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={
EventTypes.Member: {
StateValues.LAZY,
StateValues.ME,
"@user2:test",
}
},
request_required_state_map={EventTypes.Member: {StateValues.LAZY}},
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids={"@user:test"},
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=_RequiredStateChangesReturn(
# Since an explicit membership was removed, we record
# the new required state config and move them to lazy
# members.
{EventTypes.Member: {StateValues.LAZY}},
# We have already sent @user down before.
#
# `@user:test` is required for lazy loading, but we've
# already sent it down before (due to `StateValues.ME`
# being in `previous_required_state_map`), so we don't
# need to request it again.
StateFilter.none(),
# Remember the fact that we've sent @user down before,
# but not @user2 as that has been invalidated.
extra_users_to_add_to_lazy_cache={"@user:test"},
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
expected_without_state_deltas=_RequiredStateChangesReturn(
# While some explicit memberships were removed, there were no
# state changes, so we don't need to persist the new required
# state config yet.
None,
# We have already sent @user down before.
#
# `@user:test` is required for lazy loading, but we've
# already sent it down before (due to `StateValues.ME`
# being in `previous_required_state_map`), so we don't
# need to request it again.
StateFilter.none(),
# No relevant state has changed and we don't persist the
# changed required_state_map, so we don't yet move the
# $ME state to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Nothing to invalidate as there are no existing lazy members.
lazy_members_invalidated=frozenset(),
),
),
),
(
"state_key_retract_lazy_keep_previous_memberships_with_new_memberships",
"""
Test retracting the `required_state` to no longer lazy-loading room members.
""",
RequiredStateChangesTestParameters(
previous_required_state_map={EventTypes.Member: {StateValues.LAZY}},
request_required_state_map={EventTypes.Member: {"@user4:test"}},
previously_returned_lazy_user_ids={"@user2:test", "@user3:test"},
request_lazy_load_user_ids=frozenset(),
state_deltas={(EventTypes.Member, "@user2:test"): "$event_id"},
expected_with_state_deltas=_RequiredStateChangesReturn(
# Since "@user4:test" was added, we should persist the changed
# required state config.
#
{EventTypes.Member: {"@user4:test"}},
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# Also remove "@user2:test" since that state has changed and is no
# longer being requested anymore. Since something was removed,
# we also should persist the change to required state. That way next
# time, they request "@user2:test", we see that we haven't sent
# it before and send the new state. (we should still keep track
# that we've sent specific `EventTypes.Member` before)
{
EventTypes.Member: {
"@user3:test",
"@user4:test",
}
},
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
lazy_members_invalidated={"@user2:test"},
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
# Since "@user4:test" was added, we should persist the changed
# required state config.
{
EventTypes.Member: {
"@user2:test",
"@user3:test",
"@user4:test",
}
},
{EventTypes.Member: {"@user4:test"}},
# We should see the new state_keys added
StateFilter.from_types([(EventTypes.Member, "@user4:test")]),
# Previous request did not include any explicit members,
# so there are no extra users to add to the lazy cache.
extra_users_to_add_to_lazy_cache=frozenset(),
# We don't invalidate user2 as they haven't changed
lazy_members_invalidated=frozenset(),
),
),
),
@@ -4613,7 +4756,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# room required state config to match the request. And since we were previously
# already fetching everything, we don't have to fetch anything now that they've
# narrowed.
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{
StateValues.WILDCARD: {
"state_key1",
@@ -4623,7 +4766,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{
StateValues.WILDCARD: {
"state_key1",
@@ -4649,11 +4792,11 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
state_deltas={("type1", "state_key1"): "$event_id"},
# We've added a wildcard, so we persist the change and request everything
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{StateValues.WILDCARD: {StateValues.WILDCARD}},
StateFilter.all(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{StateValues.WILDCARD: {StateValues.WILDCARD}},
StateFilter.all(),
),
@@ -4673,7 +4816,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# request. And since we were previously already fetching
# everything, we don't have to fetch anything now that they've
# narrowed.
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{
"type1": {
"state_key1",
@@ -4683,7 +4826,7 @@ class RequiredStateChangesTestCase(unittest.TestCase):
},
StateFilter.none(),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{
"type1": {
"state_key1",
@@ -4708,11 +4851,11 @@ class RequiredStateChangesTestCase(unittest.TestCase):
# update the effective room required state config to match the
# request. And we need to request all of the state for that type
# because we previously, only sent down a few keys.
expected_with_state_deltas=(
expected_with_state_deltas=_RequiredStateChangesReturn(
{"type1": {StateValues.WILDCARD, "state_key2", "state_key3"}},
StateFilter.from_types([("type1", None)]),
),
expected_without_state_deltas=(
expected_without_state_deltas=_RequiredStateChangesReturn(
{
"type1": {
StateValues.WILDCARD,
@@ -4734,42 +4877,66 @@ class RequiredStateChangesTestCase(unittest.TestCase):
test_parameters: RequiredStateChangesTestParameters,
) -> None:
# Without `state_deltas`
changed_required_state_map, added_state_filter = _required_state_changes(
state_changes = _required_state_changes(
user_id="@user:test",
prev_required_state_map=test_parameters.previous_required_state_map,
request_required_state_map=test_parameters.request_required_state_map,
previously_returned_lazy_user_ids=test_parameters.previously_returned_lazy_user_ids,
request_lazy_load_user_ids=test_parameters.request_lazy_load_user_ids,
state_deltas={},
)
self.assertEqual(
changed_required_state_map,
test_parameters.expected_without_state_deltas[0],
state_changes.changed_required_state_map,
test_parameters.expected_without_state_deltas.changed_required_state_map,
"changed_required_state_map does not match (without state_deltas)",
)
self.assertEqual(
added_state_filter,
test_parameters.expected_without_state_deltas[1],
state_changes.added_state_filter,
test_parameters.expected_without_state_deltas.added_state_filter,
"added_state_filter does not match (without state_deltas)",
)
self.assertEqual(
state_changes.lazy_members_invalidated,
test_parameters.expected_without_state_deltas.lazy_members_invalidated,
"lazy_members_invalidated does not match (without state_deltas)",
)
self.assertEqual(
state_changes.extra_users_to_add_to_lazy_cache,
test_parameters.expected_without_state_deltas.extra_users_to_add_to_lazy_cache,
"lazy_members_previously_returned does not match (without state_deltas)",
)
# With `state_deltas`
changed_required_state_map, added_state_filter = _required_state_changes(
state_changes = _required_state_changes(
user_id="@user:test",
prev_required_state_map=test_parameters.previous_required_state_map,
request_required_state_map=test_parameters.request_required_state_map,
previously_returned_lazy_user_ids=test_parameters.previously_returned_lazy_user_ids,
request_lazy_load_user_ids=test_parameters.request_lazy_load_user_ids,
state_deltas=test_parameters.state_deltas,
)
self.assertEqual(
changed_required_state_map,
test_parameters.expected_with_state_deltas[0],
state_changes.changed_required_state_map,
test_parameters.expected_with_state_deltas.changed_required_state_map,
"changed_required_state_map does not match (with state_deltas)",
)
self.assertEqual(
added_state_filter,
test_parameters.expected_with_state_deltas[1],
state_changes.added_state_filter,
test_parameters.expected_with_state_deltas.added_state_filter,
"added_state_filter does not match (with state_deltas)",
)
self.assertEqual(
state_changes.lazy_members_invalidated,
test_parameters.expected_with_state_deltas.lazy_members_invalidated,
"lazy_members_invalidated does not match (with state_deltas)",
)
self.assertEqual(
state_changes.extra_users_to_add_to_lazy_cache,
test_parameters.expected_with_state_deltas.extra_users_to_add_to_lazy_cache,
"lazy_members_previously_returned does not match (with state_deltas)",
)
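Note: the `_RequiredStateChangesReturn` container asserted on above is not itself shown in this diff. Judging purely from the fields the tests exercise, it presumably looks roughly like the following attrs class (a hypothetical sketch of the shape, not the actual definition in the sliding sync handler):

import attr
from typing import AbstractSet, Mapping, Optional

from synapse.types.state import StateFilter


@attr.s(auto_attribs=True, frozen=True, slots=True)
class _RequiredStateChangesReturn:
    """Assumed shape of the value returned by `_required_state_changes`."""

    # The new required state map to persist for this connection, or None if the
    # previously persisted map is still accurate.
    changed_required_state_map: Optional[Mapping[str, AbstractSet[str]]]
    # Extra state that now needs to be fetched and sent down to the client.
    added_state_filter: StateFilter
    # Lazy-loaded members to additionally remember as already sent.
    extra_users_to_add_to_lazy_cache: AbstractSet[str] = frozenset()
    # Previously sent lazy-loaded members whose membership changed and should be
    # forgotten, so they are sent again the next time they are needed.
    lazy_members_invalidated: AbstractSet[str] = frozenset()

The defaults on the last two fields would let the earlier test cases keep constructing the value with just the first two positional arguments, which matches how the diff above rewrites them.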
@parameterized.expand(
[
@@ -4805,12 +4972,16 @@ class RequiredStateChangesTestCase(unittest.TestCase):
}
# (function under test)
changed_required_state_map, added_state_filter = _required_state_changes(
state_changes = _required_state_changes(
user_id="@user:test",
prev_required_state_map=previous_required_state_map,
request_required_state_map=request_required_state_map,
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids=frozenset(),
state_deltas={},
)
changed_required_state_map = state_changes.changed_required_state_map
assert changed_required_state_map is not None
# We should only remember up to the maximum number of state keys
@@ -4874,12 +5045,16 @@ class RequiredStateChangesTestCase(unittest.TestCase):
)
# (function under test)
changed_required_state_map, added_state_filter = _required_state_changes(
state_changes = _required_state_changes(
user_id="@user:test",
prev_required_state_map=previous_required_state_map,
request_required_state_map=request_required_state_map,
previously_returned_lazy_user_ids=frozenset(),
request_lazy_load_user_ids=frozenset(),
state_deltas={},
)
changed_required_state_map = state_changes.changed_required_state_map
assert changed_required_state_map is not None
# Should include all of the requested state
@@ -22,6 +22,7 @@
from twisted.internet.address import IPv6Address
from twisted.internet.testing import MemoryReactor, StringTransport
from synapse.app._base import max_request_body_size
from synapse.app.homeserver import SynapseHomeServer
from synapse.server import HomeServer
from synapse.util.clock import Clock
@@ -143,3 +144,104 @@ class SynapseRequestTestCase(HomeserverTestCase):
# we should get a 415
self.assertRegex(transport.value().decode(), r"^HTTP/1\.1 415 ")
def test_content_length_too_large(self) -> None:
"""HTTP requests with Content-Length exceeding max size should be rejected with 413"""
self.hs.start_listening()
# find the HTTP server which is configured to listen on port 0
(port, factory, _backlog, interface) = self.reactor.tcpServers[0]
self.assertEqual(interface, "::")
self.assertEqual(port, 0)
# complete the connection and wire it up to a fake transport
client_address = IPv6Address("TCP", "::1", 2345)
protocol = factory.buildProtocol(client_address)
transport = StringTransport()
protocol.makeConnection(transport)
# Send a request with Content-Length header that exceeds the limit.
# Default max is 50MB (from media max_upload_size), so send something larger.
oversized_length = 1 + max_request_body_size(self.hs.config)
protocol.dataReceived(
b"POST / HTTP/1.1\r\n"
b"Connection: close\r\n"
b"Content-Length: " + str(oversized_length).encode() + b"\r\n"
b"\r\n"
b"" + b"x" * oversized_length + b"\r\n"
b"\r\n"
)
# Advance the reactor to process the request
while not transport.disconnecting:
self.reactor.advance(1)
# We should get a 413 Content Too Large
response = transport.value().decode()
self.assertRegex(response, r"^HTTP/1\.1 413 ")
self.assertSubstring("M_TOO_LARGE", response)
def test_too_many_content_length_headers(self) -> None:
"""HTTP requests with multiple Content-Length headers should be rejected with 400"""
self.hs.start_listening()
# find the HTTP server which is configured to listen on port 0
(port, factory, _backlog, interface) = self.reactor.tcpServers[0]
self.assertEqual(interface, "::")
self.assertEqual(port, 0)
# complete the connection and wire it up to a fake transport
client_address = IPv6Address("TCP", "::1", 2345)
protocol = factory.buildProtocol(client_address)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived(
b"POST / HTTP/1.1\r\n"
b"Connection: close\r\n"
b"Content-Length: " + str(5).encode() + b"\r\n"
b"Content-Length: " + str(5).encode() + b"\r\n"
b"\r\n"
b"" + b"xxxxx" + b"\r\n"
b"\r\n"
)
# Advance the reactor to process the request
while not transport.disconnecting:
self.reactor.advance(1)
# We should get a 400
response = transport.value().decode()
self.assertRegex(response, r"^HTTP/1\.1 400 ")
def test_invalid_content_length_headers(self) -> None:
"""HTTP requests with invalid Content-Length header should be rejected with 400"""
self.hs.start_listening()
# find the HTTP server which is configured to listen on port 0
(port, factory, _backlog, interface) = self.reactor.tcpServers[0]
self.assertEqual(interface, "::")
self.assertEqual(port, 0)
# complete the connection and wire it up to a fake transport
client_address = IPv6Address("TCP", "::1", 2345)
protocol = factory.buildProtocol(client_address)
transport = StringTransport()
protocol.makeConnection(transport)
protocol.dataReceived(
b"POST / HTTP/1.1\r\n"
b"Connection: close\r\n"
b"Content-Length: eight\r\n"
b"\r\n"
b"" + b"xxxxx" + b"\r\n"
b"\r\n"
)
# Advance the reactor to process the request
while not transport.disconnecting:
self.reactor.advance(1)
# We should get a 400
response = transport.value().decode()
self.assertRegex(response, r"^HTTP/1\.1 400 ")
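The three new tests above drive raw HTTP bytes into the listener and only assert on the response status line. The behaviour they pin down amounts to the following standalone sketch (illustrative only; this is not Synapse's actual request-parsing code, and `max_size` stands in for `max_request_body_size(hs.config)`):

from typing import List, Optional


def content_length_error(content_length_headers: List[bytes], max_size: int) -> Optional[int]:
    """Return an HTTP status code to reject the request with, or None to accept it."""
    if len(content_length_headers) > 1:
        # Multiple Content-Length headers are ambiguous -> 400 Bad Request.
        return 400
    if not content_length_headers:
        # No declared body length; nothing to check here.
        return None
    try:
        declared = int(content_length_headers[0])
    except ValueError:
        # Non-numeric Content-Length (e.g. "eight") -> 400 Bad Request.
        return 400
    if declared > max_size:
        # Body declared larger than the server will accept -> 413 with M_TOO_LARGE.
        return 413
    return None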
@@ -71,14 +71,43 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
return resources
def _ensure_quarantined(
self, admin_user_tok: str, server_and_media_id: str
self,
user_tok: str,
server_and_media_id: str,
include_bypass_param: bool = False,
) -> None:
"""Ensure a piece of media is quarantined when trying to access it."""
"""Ensure a piece of media is quarantined when trying to access it.
The include_bypass_param flag enables the presence of the
admin_unsafely_bypass_quarantine query parameter, but still expects that the
request will fail to download the media.
"""
if include_bypass_param:
query_string = "?admin_unsafely_bypass_quarantine=true"
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/download/{server_and_media_id}{query_string}",
shorthand=False,
access_token=user_tok,
)
# Non-admins can't bypass, so this should fail regardless of whether the
# media is actually quarantined.
self.assertEqual(
400,
channel.code,
msg=(
"Expected to receive a 400 when bypassing quarantined media: %s"
% server_and_media_id
),
)
# Repeat the request, this time without the bypass parameter.
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/download/{server_and_media_id}",
shorthand=False,
access_token=admin_user_tok,
access_token=user_tok,
)
# Should be quarantined
@@ -91,6 +120,62 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
),
)
def test_admin_can_bypass_quarantine(self) -> None:
self.register_user("admin", "pass", admin=True)
admin_user_tok = self.login("admin", "pass")
# Upload some media
response = self.helper.upload_media(SMALL_PNG, tok=admin_user_tok)
# Extract media ID from the response
server_name_and_media_id = response["content_uri"][6:] # Cut off 'mxc://'
server_name, media_id = server_name_and_media_id.split("/")
# Attempt to access the media
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/download/{server_name_and_media_id}",
shorthand=False,
access_token=admin_user_tok,
)
# Should be successful
self.assertEqual(200, channel.code)
# Quarantine the media
url = "/_synapse/admin/v1/media/quarantine/%s/%s" % (
urllib.parse.quote(server_name),
urllib.parse.quote(media_id),
)
channel = self.make_request(
"POST",
url,
access_token=admin_user_tok,
)
self.pump(1.0)
self.assertEqual(200, channel.code, msg=channel.json_body)
# Now access it *without* the bypass parameter - this should fail (as expected).
self._ensure_quarantined(
admin_user_tok, server_name_and_media_id, include_bypass_param=False
)
# Now access it *with* the bypass parameter - this should work
channel = self.make_request(
"GET",
f"/_matrix/client/v1/media/download/{server_name_and_media_id}?admin_unsafely_bypass_quarantine=true",
shorthand=False,
access_token=admin_user_tok,
)
self.assertEqual(
200,
channel.code,
msg=(
"Expected to receive a 200 on accessing (with bypass) quarantined media: %s"
% server_name_and_media_id
),
)
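Outside the test harness, the escape hatch exercised above would presumably be used like this (a hypothetical snippet using the `requests` library; the endpoint and the `admin_unsafely_bypass_quarantine` query parameter are taken from the test, while the homeserver URL, token and media ID are placeholders):

import requests

BASE_URL = "https://matrix.example.org"  # placeholder homeserver URL
ADMIN_TOKEN = "syt_..."  # placeholder admin access token
SERVER_AND_MEDIA_ID = "matrix.example.org/abcdefghijklmnop"  # placeholder media

# Admins can only fetch quarantined media by explicitly opting in via the
# bypass query parameter; without it the download fails even for admins.
resp = requests.get(
    f"{BASE_URL}/_matrix/client/v1/media/download/{SERVER_AND_MEDIA_ID}",
    params={"admin_unsafely_bypass_quarantine": "true"},
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
media_bytes = resp.content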
@parameterized.expand(
[
# Attempt quarantine media APIs as non-admin
@@ -154,8 +239,14 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
self.pump(1.0)
self.assertEqual(200, channel.code, msg=channel.json_body)
# Attempt to access the media
self._ensure_quarantined(admin_user_tok, server_name_and_media_id)
# Attempt to access the media (and ensure non-admins can't download it, even
# with a bypass parameter). Admins cannot download it without the bypass param.
self._ensure_quarantined(
non_admin_user_tok, server_name_and_media_id, include_bypass_param=True
)
self._ensure_quarantined(
admin_user_tok, server_name_and_media_id, include_bypass_param=False
)
@parameterized.expand(
[
@@ -214,9 +305,21 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
server_and_media_id_1 = mxc_1[6:]
server_and_media_id_2 = mxc_2[6:]
# Test that we cannot download any of the media anymore
self._ensure_quarantined(admin_user_tok, server_and_media_id_1)
self._ensure_quarantined(admin_user_tok, server_and_media_id_2)
# Test that we cannot download any of the media anymore, especially with the
# bypass parameter set. Admins cannot download the media without supplying the
# bypass parameter, so we check that too.
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_1, include_bypass_param=True
)
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_2, include_bypass_param=True
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_1, include_bypass_param=False
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_2, include_bypass_param=False
)
def test_quarantine_all_media_by_user(self) -> None:
self.register_user("user_admin", "pass", admin=True)
@@ -263,10 +366,27 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
channel.json_body, {"num_quarantined": 3}, "Expected 3 quarantined items"
)
# Attempt to access each piece of media
self._ensure_quarantined(admin_user_tok, server_and_media_id_1)
self._ensure_quarantined(admin_user_tok, server_and_media_id_2)
self._ensure_quarantined(admin_user_tok, server_and_media_id_3)
# Attempt to access each piece of media, ensuring that it can't be downloaded
# even with a bypass parameter. Admins should not be able to download the media
# either when not supplying the bypass parameter, so we check that too.
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_1, include_bypass_param=True
)
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_2, include_bypass_param=True
)
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_3, include_bypass_param=True
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_1, include_bypass_param=False
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_2, include_bypass_param=False
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_3, include_bypass_param=False
)
def test_cannot_quarantine_safe_media(self) -> None:
self.register_user("user_admin", "pass", admin=True)
@@ -307,8 +427,14 @@ class QuarantineMediaTestCase(unittest.HomeserverTestCase):
)
# Attempt to access each piece of media, the first should fail, the
# second should succeed.
self._ensure_quarantined(admin_user_tok, server_and_media_id_1)
# second should succeed. We check both the non-admin user with a bypass
# parameter, and the admin user without.
self._ensure_quarantined(
non_admin_user_tok, server_and_media_id_1, include_bypass_param=True
)
self._ensure_quarantined(
admin_user_tok, server_and_media_id_1, include_bypass_param=False
)
# Attempt to access each piece of media
channel = self.make_request(
@@ -26,7 +26,7 @@ from twisted.internet.testing import MemoryReactor
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.handlers.device import DeviceWriterHandler
from synapse.handlers.device import MAX_DEVICE_DISPLAY_NAME_LEN, DeviceWriterHandler
from synapse.rest.client import devices, login
from synapse.server import HomeServer
from synapse.util.clock import Clock
@@ -175,10 +175,7 @@ class DeviceRestTestCase(unittest.HomeserverTestCase):
)
# Request to update a device display name with a new value that is longer than allowed.
update = {
"display_name": "a"
* (synapse.handlers.device.MAX_DEVICE_DISPLAY_NAME_LEN + 1)
}
update = {"display_name": "a" * (MAX_DEVICE_DISPLAY_NAME_LEN + 1)}
channel = self.make_request(
"PUT",
@@ -756,6 +756,112 @@ class DeleteMediaByDateSizeTestCase(_AdminMediaTests):
self.assertFalse(os.path.exists(local_path))
class ListQuarantinedMediaTestCase(_AdminMediaTests):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.server_name = hs.hostname
@parameterized.expand(["local", "remote"])
def test_no_auth(self, kind: str) -> None:
"""
Try to list quarantined media without authentication.
"""
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=%s" % (kind,),
)
self.assertEqual(401, channel.code, msg=channel.json_body)
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
@parameterized.expand(["local", "remote"])
def test_requester_is_not_admin(self, kind: str) -> None:
"""
If the user is not a server admin, an error is returned.
"""
self.other_user = self.register_user("user", "pass")
self.other_user_token = self.login("user", "pass")
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=%s" % (kind,),
access_token=self.other_user_token,
)
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_list_quarantined_media(self) -> None:
"""
Ensure we actually get results for each page. We can't really test that
remote media is quarantined, but we can test that local media is.
"""
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
def _upload() -> str:
return self.helper.upload_media(
SMALL_PNG, tok=self.admin_user_tok, expect_code=200
)["content_uri"][6:].split("/")[1] # Cut off 'mxc://' and domain
self.media_id_1 = _upload()
self.media_id_2 = _upload()
self.media_id_3 = _upload()
def _quarantine(media_id: str) -> None:
channel = self.make_request(
"POST",
"/_synapse/admin/v1/media/quarantine/%s/%s"
% (
self.server_name,
media_id,
),
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
_quarantine(self.media_id_1)
_quarantine(self.media_id_2)
_quarantine(self.media_id_3)
# Page 1
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=local&from=0&limit=1",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(1, len(channel.json_body["media"]))
# Page 2
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=local&from=1&limit=1",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(1, len(channel.json_body["media"]))
# Page 3
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=local&from=2&limit=1",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(1, len(channel.json_body["media"]))
# Page 4 (no media)
channel = self.make_request(
"GET",
"/_synapse/admin/v1/media/quarantined?kind=local&from=3&limit=1",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(0, len(channel.json_body["media"]))
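The pagination checks above suggest a simple client-side loop over the new endpoint. A hypothetical helper (using `requests`; the `kind`, `from` and `limit` parameters and the `media` response key come from the test, everything else is assumed):

import requests


def list_quarantined_media(base_url: str, admin_token: str, kind: str = "local") -> list:
    """Page through /_synapse/admin/v1/media/quarantined and collect every entry."""
    headers = {"Authorization": f"Bearer {admin_token}"}
    results: list = []
    offset = 0
    limit = 100
    while True:
        resp = requests.get(
            f"{base_url}/_synapse/admin/v1/media/quarantined",
            params={"kind": kind, "from": offset, "limit": limit},
            headers=headers,
        )
        resp.raise_for_status()
        page = resp.json()["media"]
        results.extend(page)
        if len(page) < limit:
            # A short (or empty) page means we have reached the end.
            break
        offset += limit
    return results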
class QuarantineMediaByIDTestCase(_AdminMediaTests):
def upload_media_and_return_media_id(self, data: bytes) -> str:
# Upload some media into the room
@@ -2976,6 +2976,120 @@ class JoinAliasRoomTestCase(unittest.HomeserverTestCase):
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(private_room_id, channel.json_body["joined_rooms"][0])
def test_joined_rooms(self) -> None:
"""
Test joined_rooms admin endpoint.
"""
channel = self.make_request(
"POST",
f"/_matrix/client/v3/join/{self.public_room_id}",
content={"user_id": self.second_user_id},
access_token=self.second_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(self.public_room_id, channel.json_body["room_id"])
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/users/{self.second_user_id}/joined_rooms",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(self.public_room_id, channel.json_body["joined_rooms"][0])
def test_memberships(self) -> None:
"""
Test user memberships admin endpoint.
"""
channel = self.make_request(
"POST",
f"/_matrix/client/v3/join/{self.public_room_id}",
content={"user_id": self.second_user_id},
access_token=self.second_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
other_room_id = self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok
)
channel = self.make_request(
"POST",
f"/_matrix/client/v3/join/{other_room_id}",
content={"user_id": self.second_user_id},
access_token=self.second_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/users/{self.second_user_id}/memberships",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(
{
"memberships": {
self.public_room_id: Membership.JOIN,
other_room_id: Membership.JOIN,
}
},
channel.json_body,
)
channel = self.make_request(
"POST",
f"/_matrix/client/v3/rooms/{other_room_id}/leave",
content={"user_id": self.second_user_id},
access_token=self.second_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
invited_room_id = self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok
)
channel = self.make_request(
"POST",
f"/_matrix/client/v3/rooms/{invited_room_id}/invite",
content={"user_id": self.second_user_id},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
banned_room_id = self.helper.create_room_as(
self.admin_user, tok=self.admin_user_tok
)
channel = self.make_request(
"POST",
f"/_matrix/client/v3/rooms/{banned_room_id}/ban",
content={"user_id": self.second_user_id},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
channel = self.make_request(
"GET",
f"/_synapse/admin/v1/users/{self.second_user_id}/memberships",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(
{
"memberships": {
self.public_room_id: Membership.JOIN,
other_room_id: Membership.LEAVE,
invited_room_id: Membership.INVITE,
banned_room_id: Membership.BAN,
}
},
channel.json_body,
)
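The response shape asserted above, {"memberships": {room_id: membership}}, makes it straightforward to build small admin tooling on top of the new endpoint. For example (a hypothetical helper using `requests`, not part of Synapse):

import requests


def rooms_with_membership(
    base_url: str, admin_token: str, user_id: str, membership: str
) -> list:
    """Return the rooms where `user_id` currently has the given membership
    (e.g. "join", "invite", "leave" or "ban")."""
    resp = requests.get(
        f"{base_url}/_synapse/admin/v1/users/{user_id}/memberships",
        headers={"Authorization": f"Bearer {admin_token}"},
    )
    resp.raise_for_status()
    return [
        room_id
        for room_id, m in resp.json()["memberships"].items()
        if m == membership
    ]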
def test_context_as_non_admin(self) -> None:
"""
Test that, without being admin, one cannot use the context admin API
@@ -690,7 +690,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
user1_tok = self.login(user1_id, "pass")
# Create a remote invite room without any `unsigned.invite_room_state`
_remote_invite_room_id = self._create_remote_invite_room_for_user(
_remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id, None
)
@@ -760,7 +760,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
# Create a remote invite room with some `unsigned.invite_room_state`
# indicating that the room is encrypted.
remote_invite_room_id = self._create_remote_invite_room_for_user(
remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id,
[
StrippedStateEvent(
@@ -849,7 +849,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
# Create a remote invite room with some `unsigned.invite_room_state`
# but don't set any room encryption event.
remote_invite_room_id = self._create_remote_invite_room_for_user(
remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id,
[
StrippedStateEvent(
@@ -1484,7 +1484,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
user1_tok = self.login(user1_id, "pass")
# Create a remote invite room without any `unsigned.invite_room_state`
_remote_invite_room_id = self._create_remote_invite_room_for_user(
_remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id, None
)
@@ -1554,7 +1554,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
# Create a remote invite room with some `unsigned.invite_room_state` indicating
# that it is a space room
remote_invite_room_id = self._create_remote_invite_room_for_user(
remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id,
[
StrippedStateEvent(
@@ -1637,7 +1637,7 @@ class SlidingSyncFiltersTestCase(SlidingSyncBase):
# Create a remote invite room with some `unsigned.invite_room_state`
# but the create event does not specify a room type (normal room)
remote_invite_room_id = self._create_remote_invite_room_for_user(
remote_invite_room_id, _ = self._create_remote_invite_room_for_user(
user1_id,
[
StrippedStateEvent(
