
Compare commits


18 Commits

Author SHA1 Message Date
Travis Ralston
339660851d Merge branch 'develop' into travis/adminapi-report-room 2025-05-02 11:02:47 -06:00
Travis Ralston
2584a6fe8f Apply suggestions from code review
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 11:02:28 -06:00
Travis Ralston
9766e110e4 Apply suggestions from code review
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 11:02:17 -06:00
Will Lewis
fe8bb620de Add the ability to exclude remote users in user directory search results (#18300)
This change adds a new configuration
`user_directory.exclude_remote_users`, which defaults to False.
When set to True, remote users will not appear in user directory search
results.

### Pull Request Checklist

<!-- Please read
https://element-hq.github.io/synapse/latest/development/contributing_guide.html
before submitting your pull request -->

* [x] Pull request is based on the develop branch
* [x] Pull request includes a [changelog
file](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#changelog).
The entry should:
- Be a short description of your change which makes sense to users.
"Fixed a bug that prevented receiving messages from other servers."
instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
- Feel free to credit yourself by adding a sentence "Contributed by
@github_username." or "Contributed by [Your Name]." to the end of the
entry.
* [x] [Code
style](https://element-hq.github.io/synapse/latest/code_style.html) is
correct
(run the
[linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))

---------

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 15:38:02 +01:00
Quentin Gliech
b8146d4b03 Allow a few admin APIs used by MAS to run on workers (#18313)
This should be reviewed commit by commit.

It adds to workers a few admin servlets that are used by MAS when in
delegation mode.

---------

Co-authored-by: Olivier 'reivilibre' <oliverw@matrix.org>
Co-authored-by: Devon Hudson <devon.dmytro@gmail.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 15:37:58 +02:00
Shay
411d239db4 Apply should_drop_federated_event to federation invites (#18330)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2025-05-02 13:04:01 +00:00
Quentin Gliech
d18edf67d6 Fix lint which broke in #18374 (#18385)
https://github.com/element-hq/synapse/pull/18374 did not pass linting
but was merged
2025-05-02 12:07:23 +00:00
Andrew Morgan
fd5d3d852d Don't check the at_hash (access token hash) in OIDC ID Tokens if we don't use the access token (#18374)
Co-authored-by: Eric Eastwood <erice@element.io>
2025-05-02 12:16:14 +01:00
Shay
ea376126a0 Fix typo in doc for Scheduled Tasks Admin API (#18384) 2025-05-02 12:14:31 +01:00
Quentin Gliech
74be5cfdbc Do not auto-provision missing users & devices when delegating auth to MAS (#18181)
Since MAS 0.13.0, the provisioning of devices and users is done
synchronously and reliably enough that we no longer need to
auto-provision on the Synapse side.

It's important to remove this behaviour if we want to start caching
token introspection results.
2025-05-02 12:13:26 +02:00
Andrew Ferrazzutti
f2ca2e31f7 Readme tweaks (#18218) 2025-05-02 12:11:48 +02:00
Andrew Morgan
960bb62788 Merge branch 'develop' into travis/adminapi-report-room 2025-05-02 10:50:04 +01:00
Shay
6dc1ecd359 Add an Admin API endpoint to fetch scheduled tasks (#18214) 2025-05-01 18:30:00 +00:00
Sebastian Spaeth
2965c9970c docs/workers.md: Add ^/_matrix/federation/v1/event/ to list of delegatable endpoints (#18377) 2025-05-01 15:11:59 +01:00
Travis Ralston
c7e4846a2e Empty commit to fix CI 2025-03-18 15:42:35 -06:00
turt2live
97dc729005 Attempt to fix linting 2025-03-18 21:39:51 +00:00
Travis Ralston
0f35e9af04 changelog 2025-03-18 15:38:07 -06:00
Travis Ralston
33e0abff8c Add admin API for fetching room reports 2025-03-18 15:35:28 -06:00
43 changed files with 1484 additions and 252 deletions


@@ -253,15 +253,17 @@ Alongside all that, join our developer community on Matrix:
Copyright and Licensing
=======================
Copyright 2014-2017 OpenMarket Ltd
Copyright 2017 Vector Creations Ltd
Copyright 2017-2025 New Vector Ltd
| Copyright 2014-2017 OpenMarket Ltd
| Copyright 2017 Vector Creations Ltd
| Copyright 2017-2025 New Vector Ltd
|
This software is dual-licensed by New Vector Ltd (Element). It can be used either:
(1) for free under the terms of the GNU Affero General Public License (as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version); OR
(2) under the terms of a paid-for Element Commercial License agreement between you and Element (the terms of which may vary depending on what you and Element have agreed to).
Unless required by applicable law or agreed to in writing, software distributed under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.

changelog.d/18181.misc Normal file

@@ -0,0 +1 @@
Stop auto-provisioning missing users & devices when delegating auth to Matrix Authentication Service. Requires MAS 0.13.0 or later.


@@ -0,0 +1 @@
Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks.

changelog.d/18218.doc Normal file

@@ -0,0 +1 @@
Improve formatting of the README file.


@@ -0,0 +1 @@
Add admin API for fetching (paginated) room reports.


@@ -0,0 +1 @@
Add config option `user_directory.exclude_remote_users` which, when enabled, excludes remote users from user directory search results.

changelog.d/18313.misc Normal file

@@ -0,0 +1 @@
Allow a few admin APIs used by matrix-authentication-service to run on workers.

changelog.d/18330.misc Normal file

@@ -0,0 +1 @@
Apply `should_drop_federated_event` to federation invites.

changelog.d/18374.misc Normal file

@@ -0,0 +1 @@
Don't validate the `at_hash` (access token hash) field in OIDC ID Tokens if we don't end up actually using the OIDC Access Token.

changelog.d/18377.doc Normal file

@@ -0,0 +1 @@
Add `/_matrix/federation/v1/version` to list of federation endpoints that can be handled by workers.


@@ -1 +0,0 @@
Allow client & media admin apis to coexist.

changelog.d/18384.doc Normal file

@@ -0,0 +1 @@
Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks.

changelog.d/18385.misc Normal file

@@ -0,0 +1 @@
Don't validate the `at_hash` (access token hash) field in OIDC ID Tokens if we don't end up actually using the OIDC Access Token.


@@ -202,6 +202,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": [
"^/_matrix/federation/v1/version$",
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",


@@ -0,0 +1,76 @@
# Show reported rooms
This API returns information about reported rooms.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
The API is:
```
GET /_synapse/admin/v1/room_reports?from=0&limit=10
```
It returns a JSON body like the following:
```json
{
"room_reports": [
{
"id": 2,
"reason": "foo",
"received_ts": 1570897107409,
"canonical_alias": "#alias1:matrix.org",
"room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
"name": "Matrix HQ",
"user_id": "@foo:matrix.org"
},
{
"id": 3,
"reason": "bar",
"received_ts": 1598889612059,
"canonical_alias": "#alias2:matrix.org",
"room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
"name": "Your room name here",
"user_id": "@bar:matrix.org"
}
],
"next_token": 2,
"total": 4
}
```
To paginate, check for `next_token` and if present, call the endpoint again with `from`
set to the value of `next_token` and the same `limit`. This will return a new page.
If the endpoint does not return a `next_token` then there are no more reports to
paginate through.
**Query parameters:**
* `limit`: integer - Is optional but is used for pagination, denoting the maximum number
of items to return in this call. Defaults to `100`.
* `from`: integer - Is optional but used for pagination, denoting the offset in the
returned results. This should be treated as an opaque value and not explicitly set to
anything other than the return value of `next_token` from a previous call. Defaults to `0`.
* `dir`: string - Direction of room report order. Whether to fetch the most recent
first (`b`) or the oldest first (`f`). Defaults to `b`.
* `user_id`: optional string - Filter by the user ID of the reporter. This is the user who reported the room
and wrote the reason.
* `room_id`: optional string - Filter by (reported) room ID.
**Response**
The following fields are returned in the JSON response body:
* `id`: integer - ID of room report.
* `received_ts`: integer - The timestamp (in milliseconds since the unix epoch) when this
report was sent.
* `room_id`: string - The ID of the room being reported.
* `name`: string - The name of the room.
* `user_id`: string - This is the user who reported the room and wrote the reason.
* `reason`: string - Comment made by the `user_id` in this report. May be blank or `null`.
* `canonical_alias`: string - The canonical alias of the room. `null` if the room does not
have a canonical alias set.
* `next_token`: integer - Indication for pagination. See above.
* `total`: integer - Total number of room reports related to the query
(`user_id` and `room_id`).
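The pagination scheme described above (follow `next_token` until it disappears) can be sketched as a small client loop. This is a hypothetical sketch: `fetch_page` stands in for an authenticated `GET` against `/_synapse/admin/v1/room_reports`, simulated here with an in-memory list so the loop itself can be exercised.

```python
from typing import Callable, Dict, List, Optional

def fetch_all_room_reports(
    fetch_page: Callable[[int, int], Dict],
    limit: int = 100,
) -> List[Dict]:
    """Collect every room report by following `next_token` until it is absent."""
    reports: List[Dict] = []
    frm = 0
    while True:
        page = fetch_page(frm, limit)
        reports.extend(page["room_reports"])
        next_token: Optional[int] = page.get("next_token")
        if next_token is None:
            break  # no more pages
        frm = next_token
    return reports

# Simulated server side: 4 reports, paginated the way the endpoint describes.
ALL = [{"id": i, "reason": f"r{i}"} for i in range(4)]

def fake_fetch(frm: int, limit: int) -> Dict:
    chunk = ALL[frm : frm + limit]
    resp = {"room_reports": chunk, "total": len(ALL)}
    if frm + limit < len(ALL):
        resp["next_token"] = frm + len(chunk)
    return resp
```

With a real homeserver, `fetch_page` would issue the HTTP request with the admin `access_token` and pass `from`/`limit` as query parameters; the loop is otherwise unchanged.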


@@ -0,0 +1,54 @@
# Show scheduled tasks
This API returns information about scheduled tasks.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
The API is:
```
GET /_synapse/admin/v1/scheduled_tasks
```
It returns a JSON body like the following:
```json
{
"scheduled_tasks": [
{
"id": "GSA124oegf1",
"action": "shutdown_room",
"status": "complete",
"timestamp_ms": 23423523,
"resource_id": "!roomid",
"result": "some result",
"error": null
}
]
}
```
**Query parameters:**
* `action_name`: string - Is optional. Returns only the scheduled tasks with the given action name.
* `resource_id`: string - Is optional. Returns only the scheduled tasks with the given resource id.
* `status`: string - Is optional. Returns only the scheduled tasks matching the given status, one of
- "scheduled" - Task is scheduled but not active
- "active" - Task is active and probably running, and if not will be run on next scheduler loop run
- "complete" - Task has completed successfully
- "failed" - Task is over and either returned a failed status, or had an exception
* `max_timestamp`: integer - Is optional. Returns only the scheduled tasks with a timestamp earlier than the specified one.
**Response**
The following fields are returned in the JSON response body along with a `200` HTTP status code:
* `id`: string - ID of scheduled task.
* `action`: string - The name of the scheduled task's action.
* `status`: string - The status of the scheduled task.
* `timestamp_ms`: integer - The timestamp (in milliseconds since the unix epoch) of the given task - If the status is "scheduled" then this represents when it should be launched.
Otherwise it represents the last time this task got a change of state.
* `resource_id`: optional string - The resource ID of the scheduled task, if it has one.
* `result`: optional JSON - Any result of the scheduled task, if given.
* `error`: optional string - If the task has the status "failed", the error associated with this failure.
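As a sketch of how the optional query parameters above compose, the following helper (hypothetical, standard library only; the base URL is a placeholder) builds the request URL, including only the parameters that were actually supplied:

```python
from typing import Optional
from urllib.parse import urlencode

def scheduled_tasks_url(
    base: str,
    action_name: Optional[str] = None,
    resource_id: Optional[str] = None,
    status: Optional[str] = None,
    max_timestamp: Optional[int] = None,
) -> str:
    """Build the GET URL, including only the query parameters that were supplied."""
    params = {
        "action_name": action_name,
        "resource_id": resource_id,
        "status": status,
        "max_timestamp": max_timestamp,
    }
    # Drop unset parameters so the server sees only explicit filters.
    query = urlencode({k: v for k, v in params.items() if v is not None})
    url = f"{base}/_synapse/admin/v1/scheduled_tasks"
    return f"{url}?{query}" if query else url
```

The resulting URL would then be fetched with the admin `access_token`, as with the other admin APIs.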


@@ -353,6 +353,8 @@ callback returns `False`, Synapse falls through to the next one. The value of th
callback that does not return `False` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
Note that this check is applied to federation invites as of Synapse v1.130.0.
### `check_login_for_spam`


@@ -117,6 +117,16 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.130.0
## Documented endpoint which can be delegated to a federation worker
The endpoint `^/_matrix/federation/v1/version$` can be delegated to a federation
worker. This is not new behaviour, but had not been documented yet. The
[list of delegatable endpoints](workers.md#synapseappgeneric_worker) has
been updated to include it. Make sure to check your reverse proxy rules if you
are using workers.
# Upgrading to v1.126.0
## Room list publication rules change


@@ -4095,6 +4095,7 @@ This option has the following sub-options:
* `prefer_local_users`: Defines whether to prefer local users in search query results.
If set to true, local users are more likely to appear above remote users when searching the
user directory. Defaults to false.
* `exclude_remote_users`: If set to true, the search will only return local users. Defaults to false.
* `show_locked_users`: Defines whether to show locked users in search query results. Defaults to false.
Example configuration:
@@ -4103,6 +4104,7 @@ user_directory:
enabled: false
search_all_users: true
prefer_local_users: true
exclude_remote_users: false
show_locked_users: true
```
---


@@ -200,6 +200,7 @@ information.
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
^/_matrix/federation/v1/version$
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
@@ -322,6 +323,15 @@ For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Additionally, when MSC3861 is enabled (`experimental_features.msc3861.enabled`
set to `true`), the following endpoints can be handled by the worker:
^/_synapse/admin/v2/users/[^/]+$
^/_synapse/admin/v1/username_available$
^/_synapse/admin/v1/users/[^/]+/_allow_cross_signing_replacement_without_uia$
# Only the GET method:
^/_synapse/admin/v1/users/[^/]+/devices$
Note that a [HTTP listener](usage/configuration/config_documentation.md#listeners)
with `client` and `federation` `resources` must be configured in the
[`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners)


@@ -39,7 +39,6 @@ from synapse.api.errors import (
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
StoreError,
SynapseError,
UnrecognizedRequestError,
)
@@ -512,7 +511,7 @@ class MSC3861DelegatedAuth(BaseAuth):
raise InvalidClientTokenError("No scope in token granting user rights")
# Match via the sub claim
sub: Optional[str] = introspection_result.get_sub()
sub = introspection_result.get_sub()
if sub is None:
raise InvalidClientTokenError(
"Invalid sub claim in the introspection result"
@@ -525,29 +524,20 @@ class MSC3861DelegatedAuth(BaseAuth):
# If we could not find a user via the external_id, it either does not exist,
# or the external_id was never recorded
# TODO: claim mapping should be configurable
username: Optional[str] = introspection_result.get_username()
if username is None or not isinstance(username, str):
username = introspection_result.get_username()
if username is None:
raise AuthError(
500,
"Invalid username claim in the introspection result",
)
user_id = UserID(username, self._hostname)
# First try to find a user from the username claim
# Try to find a user from the username claim
user_info = await self.store.get_user_by_id(user_id=user_id.to_string())
if user_info is None:
# If the user does not exist, we should create it on the fly
# TODO: we could use SCIM to provision users ahead of time and listen
# for SCIM SET events if those ever become standard:
# https://datatracker.ietf.org/doc/html/draft-hunt-scim-notify-00
# TODO: claim mapping should be configurable
# If present, use the name claim as the displayname
name: Optional[str] = introspection_result.get_name()
await self.store.register_user(
user_id=user_id.to_string(), create_profile_with_displayname=name
raise AuthError(
500,
"User not found",
)
# And record the sub as external_id
@@ -587,17 +577,10 @@ class MSC3861DelegatedAuth(BaseAuth):
"Invalid device ID in introspection result",
)
# Create the device on the fly if it does not exist
try:
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
except StoreError:
await self.store.store_device(
user_id=user_id.to_string(),
device_id=device_id,
initial_device_display_name="OIDC-native client",
)
# Make sure the device exists
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
# TODO: there is a few things missing in the requester here, which still need
# to be figured out, like:


@@ -21,7 +21,7 @@
#
import logging
import sys
from typing import Dict, List, cast
from typing import Dict, List
from twisted.web.resource import Resource
@@ -52,7 +52,6 @@ from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource, admin
from synapse.rest.admin import AdminRestResource, register_servlets_for_media_repo
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -176,8 +175,13 @@ class GenericWorkerServer(HomeServer):
def _listen_http(self, listener_config: ListenerConfig) -> None:
assert listener_config.http_options is not None
# We always include a health resource.
resources: Dict[str, Resource] = {"/health": HealthResource()}
# We always include an admin resource that we populate with servlets as needed
admin_resource = JsonResource(self, canonical_json=False)
resources: Dict[str, Resource] = {
# We always include a health resource.
"/health": HealthResource(),
"/_synapse/admin": admin_resource,
}
for res in listener_config.http_options.resources:
for name in res.names:
@@ -190,11 +194,8 @@ class GenericWorkerServer(HomeServer):
resources.update(build_synapse_client_resource_tree(self))
resources["/.well-known"] = well_known_resource(self)
admin_res = resources.get("/_synapse/admin")
if admin_res is not None:
admin.register_servlets(self, cast(JsonResource, admin_res))
else:
resources["/_synapse/admin"] = AdminRestResource(self)
admin.register_servlets(self, admin_resource)
elif name == "federation":
resources[FEDERATION_PREFIX] = TransportLayerServer(self)
elif name == "media":
@@ -203,15 +204,7 @@ class GenericWorkerServer(HomeServer):
# We need to serve the admin servlets for media on the
# worker.
admin_res = resources.get("/_synapse/admin")
if admin_res is not None:
register_servlets_for_media_repo(
self, cast(JsonResource, admin_res)
)
else:
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources["/_synapse/admin"] = admin_resource
admin.register_servlets_for_media_repo(self, admin_resource)
resources.update(
{


@@ -54,6 +54,7 @@ from synapse.config.server import ListenerConfig, TCPListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import (
JsonResource,
OptionsResource,
RootOptionsRedirectResource,
StaticResource,
@@ -61,8 +62,7 @@ from synapse.http.server import (
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest import ClientRestResource, admin
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -180,11 +180,14 @@ class SynapseHomeServer(HomeServer):
if compress:
client_resource = gz_wrap(client_resource)
admin_resource = JsonResource(self, canonical_json=False)
admin.register_servlets(self, admin_resource)
resources.update(
{
CLIENT_API_PREFIX: client_resource,
"/.well-known": well_known_resource(self),
"/_synapse/admin": AdminRestResource(self),
"/_synapse/admin": admin_resource,
**build_synapse_client_resource_tree(self),
}
)


@@ -38,6 +38,9 @@ class UserDirectoryConfig(Config):
self.user_directory_search_all_users = user_directory_config.get(
"search_all_users", False
)
self.user_directory_exclude_remote_users = user_directory_config.get(
"exclude_remote_users", False
)
self.user_directory_search_prefer_local_users = user_directory_config.get(
"prefer_local_users", False
)


@@ -701,6 +701,12 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
if await self._spam_checker_module_callbacks.should_drop_federated_event(pdu):
logger.info(
"Federated event contains spam, dropping %s",
pdu.event_id,
)
raise SynapseError(403, Codes.FORBIDDEN)
try:
pdu = await self._check_sigs_and_hash(room_version, pdu)
except InvalidEventSignatureError as e:


@@ -586,6 +586,24 @@ class OidcProvider:
or self._user_profile_method == "userinfo_endpoint"
)
@property
def _uses_access_token(self) -> bool:
"""Return True if the `access_token` will be used during the login process.
This is useful to determine whether the access token
returned by the identity provider, and
any related metadata (such as the `at_hash` field in
the ID token), should be validated.
"""
# Currently, Synapse only uses the access_token to fetch user metadata
# from the userinfo endpoint. Therefore we only have a single criterion
# to check right now, but this may change in the future and this function
# should be updated if more usages are introduced.
#
# For example, if we start to use the access_token given to us by the
# IdP for more things, such as accessing Resource Server APIs.
return self._uses_userinfo
@property
def issuer(self) -> str:
"""The issuer identifying this provider."""
@@ -957,9 +975,16 @@ class OidcProvider:
"nonce": nonce,
"client_id": self._client_auth.client_id,
}
if "access_token" in token:
if self._uses_access_token and "access_token" in token:
# If we got an `access_token`, there should be an `at_hash` claim
# in the `id_token` that we can check against.
# in the `id_token` that we can check against. Setting this
# instructs authlib to check the value of `at_hash` in the
# ID token.
#
# We only need to verify the access token if we actually make
# use of it. Which currently only happens when we need to fetch
# the user's information from the userinfo_endpoint. Thus, this
# check is also gated on self._uses_userinfo.
claims_params["access_token"] = token["access_token"]
claims_options = {"iss": {"values": [metadata["issuer"]]}}


@@ -36,10 +36,17 @@ class SetPasswordHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self._auth_handler = hs.get_auth_handler()
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler = device_handler
# We don't need the device handler if password changing is disabled.
# This allows us to instantiate the SetPasswordHandler on the workers
# that have admin APIs for MAS
if self._auth_handler.can_change_password():
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler: Optional[DeviceHandler] = device_handler
else:
self._device_handler = None
async def set_password(
self,
@@ -51,6 +58,9 @@ class SetPasswordHandler:
if not self._auth_handler.can_change_password():
raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)
# We should have this available only if password changing is enabled.
assert self._device_handler is not None
try:
await self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:


@@ -108,6 +108,9 @@ class UserDirectoryHandler(StateDeltasHandler):
self.is_mine_id = hs.is_mine_id
self.update_user_directory = hs.config.worker.should_update_user_directory
self.search_all_users = hs.config.userdirectory.user_directory_search_all_users
self.exclude_remote_users = (
hs.config.userdirectory.user_directory_exclude_remote_users
)
self.show_locked_users = hs.config.userdirectory.show_locked_users
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self._hs = hs


@@ -187,7 +187,6 @@ class ClientRestResource(JsonResource):
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_metadata.register_servlets,
]:
continue


@@ -39,7 +39,7 @@ from typing import TYPE_CHECKING, Optional, Tuple
from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.handlers.pagination import PURGE_HISTORY_ACTION_NAME
from synapse.http.server import HttpServer, JsonResource
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
@@ -51,6 +51,7 @@ from synapse.rest.admin.background_updates import (
from synapse.rest.admin.devices import (
DeleteDevicesRestServlet,
DeviceRestServlet,
DevicesGetRestServlet,
DevicesRestServlet,
)
from synapse.rest.admin.event_reports import (
@@ -70,6 +71,7 @@ from synapse.rest.admin.registration_tokens import (
NewRegistrationTokenRestServlet,
RegistrationTokenRestServlet,
)
from synapse.rest.admin.room_reports import RoomReportsRestServlet
from synapse.rest.admin.rooms import (
BlockRoomRestServlet,
DeleteRoomStatusByDeleteIdRestServlet,
@@ -86,6 +88,7 @@ from synapse.rest.admin.rooms import (
RoomStateRestServlet,
RoomTimestampToEventRestServlet,
)
from synapse.rest.admin.scheduled_tasks import ScheduledTasksRestServlet
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.statistics import (
LargestRoomsStatistics,
@@ -263,14 +266,6 @@ class PurgeHistoryStatusRestServlet(RestServlet):
########################################################################################
class AdminRestResource(JsonResource):
"""The REST resource which gets mounted at /_synapse/admin"""
def __init__(self, hs: "HomeServer"):
JsonResource.__init__(self, hs, canonical_json=False)
register_servlets(hs, self)
def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
"""
Register all the admin servlets.
@@ -279,6 +274,10 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
# Admin servlets below may not work on workers.
if hs.config.worker.worker_app is not None:
# Some admin servlets can be mounted on workers when MSC3861 is enabled.
if hs.config.experimental.msc3861.enabled:
register_servlets_for_msc3861_delegation(hs, http_server)
return
register_servlets_for_client_rest_resource(hs, http_server)
@@ -303,6 +302,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
LargestRoomsStatistics(hs).register(http_server)
EventReportDetailRestServlet(hs).register(http_server)
EventReportsRestServlet(hs).register(http_server)
RoomReportsRestServlet(hs).register(http_server)
AccountDataRestServlet(hs).register(http_server)
PushersRestServlet(hs).register(http_server)
MakeRoomAdminRestServlet(hs).register(http_server)
@@ -338,6 +338,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
BackgroundUpdateStartJobRestServlet(hs).register(http_server)
ExperimentalFeaturesRestServlet(hs).register(http_server)
SuspendAccountRestServlet(hs).register(http_server)
ScheduledTasksRestServlet(hs).register(http_server)
def register_servlets_for_client_rest_resource(
@@ -365,4 +366,16 @@ def register_servlets_for_client_rest_resource(
ListMediaInRoom(hs).register(http_server)
# don't add more things here: new servlets should only be exposed on
# /_synapse/admin so should not go here. Instead register them in AdminRestResource.
# /_synapse/admin so should not go here. Instead register them in register_servlets.
def register_servlets_for_msc3861_delegation(
hs: "HomeServer", http_server: HttpServer
) -> None:
"""Register servlets needed by MAS when MSC3861 is enabled"""
assert hs.config.experimental.msc3861.enabled
UserRestServletV2(hs).register(http_server)
UsernameAvailableRestServlet(hs).register(http_server)
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
DevicesGetRestServlet(hs).register(http_server)


@@ -113,18 +113,19 @@ class DeviceRestServlet(RestServlet):
return HTTPStatus.OK, {}
class DevicesRestServlet(RestServlet):
class DevicesGetRestServlet(RestServlet):
"""
Retrieve the given user's devices
This can be mounted on workers as it is read-only, as opposed
to `DevicesRestServlet`.
"""
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")
def __init__(self, hs: "HomeServer"):
self.auth = hs.get_auth()
handler = hs.get_device_handler()
assert isinstance(handler, DeviceHandler)
self.device_handler = handler
self.device_worker_handler = hs.get_device_handler()
self.store = hs.get_datastores().main
self.is_mine = hs.is_mine
@@ -141,9 +142,24 @@ class DevicesRestServlet(RestServlet):
if u is None:
raise NotFoundError("Unknown user")
devices = await self.device_handler.get_devices_by_user(target_user.to_string())
devices = await self.device_worker_handler.get_devices_by_user(
target_user.to_string()
)
return HTTPStatus.OK, {"devices": devices, "total": len(devices)}
class DevicesRestServlet(DevicesGetRestServlet):
"""
Retrieve the given user's devices
"""
PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
assert isinstance(self.device_worker_handler, DeviceHandler)
self.device_handler = self.device_worker_handler
async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:


@@ -0,0 +1,91 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
import logging
from http import HTTPStatus
from typing import TYPE_CHECKING, Tuple
from synapse.api.constants import Direction
from synapse.api.errors import Codes, SynapseError
from synapse.http.servlet import RestServlet, parse_enum, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
# Based upon EventReportsRestServlet
class RoomReportsRestServlet(RestServlet):
"""
List all reported rooms that are known to the homeserver. Results are returned
in a dictionary containing report information. Supports pagination.
The requester must have administrator access in Synapse.
GET /_synapse/admin/v1/room_reports
returns:
200 OK with list of reports if success otherwise an error.
Args:
The parameters `from` and `limit` are required only for pagination.
By default, a `limit` of 100 is used.
The parameter `dir` can be used to define the order of results.
The `user_id` query parameter filters by the user ID of the reporter of the event.
The `room_id` query parameter filters by room id.
Returns:
A list of reported rooms and an integer representing the total number of
reported rooms that exist given this query
"""
PATTERNS = admin_patterns("/room_reports$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
direction = parse_enum(request, "dir", Direction, Direction.BACKWARDS)
user_id = parse_string(request, "user_id")
room_id = parse_string(request, "room_id")
if start < 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"The start parameter must be a positive integer.",
errcode=Codes.INVALID_PARAM,
)
if limit < 0:
raise SynapseError(
HTTPStatus.BAD_REQUEST,
"The limit parameter must be a positive integer.",
errcode=Codes.INVALID_PARAM,
)
room_reports, total = await self._store.get_room_reports_paginate(
start, limit, direction, user_id, room_id
)
ret = {"room_reports": room_reports, "total": total}
if (start + limit) < total:
ret["next_token"] = start + len(room_reports)
return HTTPStatus.OK, ret
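As an illustrative aside (not part of this diff), the `next_token` contract above can be sketched as a standalone helper (`paginate_response` is a hypothetical name): the token is only included when more rows remain past this page, and it points at the offset just after the rows actually returned.

```python
def paginate_response(reports, start, limit, total):
    """Minimal sketch of the servlet's pagination response shape."""
    page = reports[start : start + limit]
    ret = {"room_reports": page, "total": total}
    if (start + limit) < total:
        # More rows remain: the next page starts after the rows returned.
        ret["next_token"] = start + len(page)
    return ret
```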


@@ -0,0 +1,70 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
#
from typing import TYPE_CHECKING, Tuple
from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin import admin_patterns, assert_requester_is_admin
from synapse.types import JsonDict, TaskStatus
if TYPE_CHECKING:
from synapse.server import HomeServer
class ScheduledTasksRestServlet(RestServlet):
"""Get a list of scheduled tasks and their statuses
optionally filtered by action name, resource id, status, and max timestamp
"""
PATTERNS = admin_patterns("/scheduled_tasks$")
def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)
# extract query params
action_name = parse_string(request, "action_name")
resource_id = parse_string(request, "resource_id")
status = parse_string(request, "job_status")
max_timestamp = parse_integer(request, "max_timestamp")
actions = [action_name] if action_name else None
statuses = [TaskStatus(status)] if status else None
tasks = await self._store.get_scheduled_tasks(
actions=actions,
resource_id=resource_id,
statuses=statuses,
max_timestamp=max_timestamp,
)
json_tasks = []
for task in tasks:
result_task = {
"id": task.id,
"action": task.action,
"status": task.status,
"timestamp_ms": task.timestamp,
"resource_id": task.resource_id,
"result": task.result,
"error": task.error,
}
json_tasks.append(result_task)
return 200, {"scheduled_tasks": json_tasks}
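As an aside (a simplified sketch, not Synapse code), the servlet's handling of absent query parameters can be illustrated: each missing filter becomes `None`, which the store treats as "no filter". A hypothetical `filter_tasks` helper over plain dicts:

```python
def filter_tasks(tasks, action_name=None, status=None):
    """Sketch of the servlet's optional-filter handling over task dicts."""
    # None means "no filter", mirroring how absent query params are passed on.
    actions = [action_name] if action_name else None
    statuses = [status] if status else None
    return [
        t
        for t in tasks
        if (actions is None or t["action"] in actions)
        and (statuses is None or t["status"] in statuses)
    ]
```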


@@ -1501,6 +1501,45 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
"delete_old_otks_for_next_user_batch", impl
)
async def allow_master_cross_signing_key_replacement_without_uia(
self, user_id: str, duration_ms: int
) -> Optional[int]:
"""Mark this user's latest master key as being replaceable without UIA.
Said replacement will only be permitted for a short time after calling this
function. That time period is controlled by the duration argument.
Returns:
None, if there is no such key.
Otherwise, the timestamp before which replacement is allowed without UIA.
"""
timestamp = self._clock.time_msec() + duration_ms
def impl(txn: LoggingTransaction) -> Optional[int]:
txn.execute(
"""
UPDATE e2e_cross_signing_keys
SET updatable_without_uia_before_ms = ?
WHERE stream_id = (
SELECT stream_id
FROM e2e_cross_signing_keys
WHERE user_id = ? AND keytype = 'master'
ORDER BY stream_id DESC
LIMIT 1
)
""",
(timestamp, user_id),
)
if txn.rowcount == 0:
return None
return timestamp
return await self.db_pool.runInteraction(
"allow_master_cross_signing_key_replacement_without_uia",
impl,
)
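As an illustrative aside in plain `sqlite3` (an assumption for demonstration; Synapse runs this through its `DatabasePool`), the "update only the newest row" pattern above relies on the correlated subquery selecting the highest `stream_id` for the user's master key, so only the latest key is marked replaceable:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE e2e_cross_signing_keys ("
    "stream_id INTEGER, user_id TEXT, keytype TEXT, "
    "updatable_without_uia_before_ms INTEGER)"
)
conn.executemany(
    "INSERT INTO e2e_cross_signing_keys (stream_id, user_id, keytype) "
    "VALUES (?, ?, ?)",
    [(1, "@alice:test", "master"), (2, "@alice:test", "master")],
)
cur = conn.execute(
    """
    UPDATE e2e_cross_signing_keys
    SET updatable_without_uia_before_ms = ?
    WHERE stream_id = (
        SELECT stream_id FROM e2e_cross_signing_keys
        WHERE user_id = ? AND keytype = 'master'
        ORDER BY stream_id DESC LIMIT 1
    )
    """,
    (1234, "@alice:test"),
)
# Only the newest row (stream_id 2) is touched; rowcount 0 would mean
# the user has no master key at all.
assert cur.rowcount == 1
```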
class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
def __init__(
@@ -1755,42 +1794,3 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
],
desc="add_e2e_signing_key",
)
async def allow_master_cross_signing_key_replacement_without_uia(
self, user_id: str, duration_ms: int
) -> Optional[int]:
"""Mark this user's latest master key as being replaceable without UIA.
Said replacement will only be permitted for a short time after calling this
function. That time period is controlled by the duration argument.
Returns:
None, if there is no such key.
Otherwise, the timestamp before which replacement is allowed without UIA.
"""
timestamp = self._clock.time_msec() + duration_ms
def impl(txn: LoggingTransaction) -> Optional[int]:
txn.execute(
"""
UPDATE e2e_cross_signing_keys
SET updatable_without_uia_before_ms = ?
WHERE stream_id = (
SELECT stream_id
FROM e2e_cross_signing_keys
WHERE user_id = ? AND keytype = 'master'
ORDER BY stream_id DESC
LIMIT 1
)
""",
(timestamp, user_id),
)
if txn.rowcount == 0:
return None
return timestamp
return await self.db_pool.runInteraction(
"allow_master_cross_signing_key_replacement_without_uia",
impl,
)


@@ -2105,6 +2105,136 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
func=is_user_approved_txn,
)
async def set_user_deactivated_status(
self, user_id: str, deactivated: bool
) -> None:
"""Set the `deactivated` property for the provided user to the provided value.
Args:
user_id: The ID of the user to set the status for.
deactivated: The value to set for `deactivated`.
"""
await self.db_pool.runInteraction(
"set_user_deactivated_status",
self.set_user_deactivated_status_txn,
user_id,
deactivated,
)
def set_user_deactivated_status_txn(
self, txn: LoggingTransaction, user_id: str, deactivated: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"deactivated": 1 if deactivated else 0},
)
self._invalidate_cache_and_stream(
txn, self.get_user_deactivated_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_guest, (user_id,))
async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
"""
Set whether the user's account is suspended in the `users` table.
Args:
user_id: The user ID of the user in question
suspended: True if the user is suspended, false if not
"""
await self.db_pool.runInteraction(
"set_user_suspended_status",
self.set_user_suspended_status_txn,
user_id,
suspended,
)
def set_user_suspended_status_txn(
self, txn: LoggingTransaction, user_id: str, suspended: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"suspended": suspended},
)
self._invalidate_cache_and_stream(
txn, self.get_user_suspended_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
"""Set the `locked` property for the provided user to the provided value.
Args:
user_id: The ID of the user to set the status for.
locked: The value to set for `locked`.
"""
await self.db_pool.runInteraction(
"set_user_locked_status",
self.set_user_locked_status_txn,
user_id,
locked,
)
def set_user_locked_status_txn(
self, txn: LoggingTransaction, user_id: str, locked: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"locked": locked},
)
self._invalidate_cache_and_stream(txn, self.get_user_locked_status, (user_id,))
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
async def update_user_approval_status(
self, user_id: UserID, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.
The boolean will be turned into an int (in update_user_approval_status_txn)
because the column is a smallint.
Args:
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
await self.db_pool.runInteraction(
"update_user_approval_status",
self.update_user_approval_status_txn,
user_id.to_string(),
approved,
)
def update_user_approval_status_txn(
self, txn: LoggingTransaction, user_id: str, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.
The boolean is turned into an int because the column is a smallint.
Args:
txn: the current database transaction.
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"approved": approved},
)
# Invalidate the caches of methods that read the value of the 'approved' flag.
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_user_approved, (user_id,))
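The docstring's note that the boolean is stored as a smallint can be illustrated with plain `sqlite3` (an assumption for demonstration; the production path goes through `simple_update_one_txn`): Python booleans are adapted to integers on insert, so `approved=True` lands in the column as `1`.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, approved SMALLINT)")
# Passing a Python bool: the driver stores it as an integer.
conn.execute("INSERT INTO users VALUES (?, ?)", ("@u:test", True))
stored = conn.execute(
    "SELECT approved FROM users WHERE name = ?", ("@u:test",)
).fetchone()[0]
assert stored == 1
```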
class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
def __init__(
@@ -2217,117 +2347,6 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
return nb_processed
async def set_user_deactivated_status(
self, user_id: str, deactivated: bool
) -> None:
"""Set the `deactivated` property for the provided user to the provided value.
Args:
user_id: The ID of the user to set the status for.
deactivated: The value to set for `deactivated`.
"""
await self.db_pool.runInteraction(
"set_user_deactivated_status",
self.set_user_deactivated_status_txn,
user_id,
deactivated,
)
def set_user_deactivated_status_txn(
self, txn: LoggingTransaction, user_id: str, deactivated: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"deactivated": 1 if deactivated else 0},
)
self._invalidate_cache_and_stream(
txn, self.get_user_deactivated_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
txn.call_after(self.is_guest.invalidate, (user_id,))
async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
"""
Set whether the user's account is suspended in the `users` table.
Args:
user_id: The user ID of the user in question
suspended: True if the user is suspended, false if not
"""
await self.db_pool.runInteraction(
"set_user_suspended_status",
self.set_user_suspended_status_txn,
user_id,
suspended,
)
def set_user_suspended_status_txn(
self, txn: LoggingTransaction, user_id: str, suspended: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"suspended": suspended},
)
self._invalidate_cache_and_stream(
txn, self.get_user_suspended_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
"""Set the `locked` property for the provided user to the provided value.
Args:
user_id: The ID of the user to set the status for.
locked: The value to set for `locked`.
"""
await self.db_pool.runInteraction(
"set_user_locked_status",
self.set_user_locked_status_txn,
user_id,
locked,
)
def set_user_locked_status_txn(
self, txn: LoggingTransaction, user_id: str, locked: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"locked": locked},
)
self._invalidate_cache_and_stream(txn, self.get_user_locked_status, (user_id,))
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
def update_user_approval_status_txn(
self, txn: LoggingTransaction, user_id: str, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.
The boolean is turned into an int because the column is a smallint.
Args:
txn: the current database transaction.
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"approved": approved},
)
# Invalidate the caches of methods that read the value of the 'approved' flag.
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_user_approved, (user_id,))
class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
def __init__(
@@ -2956,25 +2975,6 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
start_or_continue_validation_session_txn,
)
async def update_user_approval_status(
self, user_id: UserID, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.
The boolean will be turned into an int (in update_user_approval_status_txn)
because the column is a smallint.
Args:
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
await self.db_pool.runInteraction(
"update_user_approval_status",
self.update_user_approval_status_txn,
user_id.to_string(),
approved,
)
@wrap_as_background_process("delete_expired_login_tokens")
async def _delete_expired_login_tokens(self) -> None:
"""Remove login tokens with expiry dates that have passed."""


@@ -1860,6 +1860,107 @@ class RoomWorkerStore(CacheInvalidationWorkerStore):
"get_event_reports_paginate", _get_event_reports_paginate_txn
)
async def get_room_reports_paginate(
self,
start: int,
limit: int,
direction: Direction = Direction.BACKWARDS,
user_id: Optional[str] = None,
room_id: Optional[str] = None,
) -> Tuple[List[Dict[str, Any]], int]:
"""Retrieve a paginated list of room reports
Args:
start: room offset to begin the query from
limit: number of rows to retrieve
direction: Whether to fetch the most recent first (backwards) or the
oldest first (forwards)
user_id: search for user_id. Ignored if user_id is None
room_id: search for room_id. Ignored if room_id is None
Returns:
Tuple of:
json list of room reports
total number of room reports matching the filter criteria
"""
def _get_room_reports_paginate_txn(
txn: LoggingTransaction,
) -> Tuple[List[Dict[str, Any]], int]:
filters = []
args: List[object] = []
if user_id:
filters.append("er.user_id LIKE ?")
args.extend(["%" + user_id + "%"])
if room_id:
filters.append("er.room_id LIKE ?")
args.extend(["%" + room_id + "%"])
if direction == Direction.BACKWARDS:
order = "DESC"
else:
order = "ASC"
where_clause = "WHERE " + " AND ".join(filters) if len(filters) > 0 else ""
# We join on room_stats_state despite not using any columns from it,
# because the join can influence the number of rows returned: e.g. a room
# that has no state (perhaps because it was deleted) is excluded.
# The query returning the total count should be consistent with
# the query returning the results.
sql = """
SELECT COUNT(*) as total_room_reports
FROM room_reports AS rr
JOIN room_stats_state ON room_stats_state.room_id = rr.room_id
{}
""".format(where_clause)
txn.execute(sql, args)
count = cast(Tuple[int], txn.fetchone())[0]
sql = """
SELECT
rr.id,
rr.received_ts,
rr.room_id,
rr.user_id,
rr.reason,
room_stats_state.canonical_alias,
room_stats_state.name
FROM room_reports AS rr
JOIN room_stats_state
ON room_stats_state.room_id = rr.room_id
{where_clause}
ORDER BY rr.received_ts {order}
LIMIT ?
OFFSET ?
""".format(
where_clause=where_clause,
order=order,
)
args += [limit, start]
txn.execute(sql, args)
room_reports = []
for row in txn:
room_reports.append(
{
"id": row[0],
"received_ts": row[1],
"room_id": row[2],
"user_id": row[3],
"reason": row[4],
"canonical_alias": row[5],
"name": row[6],
}
)
return room_reports, count
return await self.db_pool.runInteraction(
"get_room_reports_paginate", _get_room_reports_paginate_txn
)
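As an aside (a simplified sketch, not Synapse code), the dynamic WHERE-clause construction above follows a common pattern: each optional parameter contributes one `LIKE` condition and one bound argument, and the clause is empty when no filters apply. A hypothetical `build_filters` helper:

```python
def build_filters(user_id=None, room_id=None):
    """Sketch of the optional-filter clause building used above."""
    filters, args = [], []
    if user_id:
        filters.append("rr.user_id LIKE ?")
        args.append(f"%{user_id}%")
    if room_id:
        filters.append("rr.room_id LIKE ?")
        args.append(f"%{room_id}%")
    # No filters: no WHERE clause at all.
    where = "WHERE " + " AND ".join(filters) if filters else ""
    return where, args
```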
async def delete_event_report(self, report_id: int) -> bool:
"""Remove an event report from database.


@@ -1037,11 +1037,11 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
}
"""
join_args: Tuple[str, ...] = (user_id,)
if self.hs.config.userdirectory.user_directory_search_all_users:
join_args = (user_id,)
where_clause = "user_id != ?"
else:
join_args = (user_id,)
where_clause = """
(
EXISTS (select 1 from users_in_public_rooms WHERE user_id = t.user_id)
@@ -1055,6 +1055,14 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
if not show_locked_users:
where_clause += " AND (u.locked IS NULL OR u.locked = FALSE)"
# Adjust the JOIN type based on the exclude_remote_users flag (the users
# table only contains local users, so an inner join is a good way
# to exclude remote users)
if self.hs.config.userdirectory.user_directory_exclude_remote_users:
join_type = "JOIN"
else:
join_type = "LEFT JOIN"
# We allow manipulating the ranking algorithm by injecting statements
# based on config options.
additional_ordering_statements = []
@@ -1086,7 +1094,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
SELECT d.user_id AS user_id, display_name, avatar_url
FROM matching_users as t
INNER JOIN user_directory AS d USING (user_id)
LEFT JOIN users AS u ON t.user_id = u.name
%(join_type)s users AS u ON t.user_id = u.name
WHERE
%(where_clause)s
ORDER BY
@@ -1115,6 +1123,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
""" % {
"where_clause": where_clause,
"order_case_statements": " ".join(additional_ordering_statements),
"join_type": join_type,
}
args = (
(full_query,)
@@ -1142,7 +1151,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
SELECT d.user_id AS user_id, display_name, avatar_url
FROM user_directory_search as t
INNER JOIN user_directory AS d USING (user_id)
LEFT JOIN users AS u ON t.user_id = u.name
%(join_type)s users AS u ON t.user_id = u.name
WHERE
%(where_clause)s
AND value MATCH ?
@@ -1155,6 +1164,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
""" % {
"where_clause": where_clause,
"order_statements": " ".join(additional_ordering_statements),
"join_type": join_type,
}
args = join_args + (search_query,) + ordering_arguments + (limit + 1,)
else:

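As an illustrative aside (simplified schema, not Synapse's), the JOIN-type switch above works because the `users` table only holds local users: an inner `JOIN` drops any directory row without a matching local user, while `LEFT JOIN` keeps remote users in the results.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_directory (user_id TEXT)")
conn.execute("CREATE TABLE users (name TEXT)")  # contains local users only
conn.executemany(
    "INSERT INTO user_directory VALUES (?)",
    [("@local:test",), ("@remote:other",)],
)
conn.execute("INSERT INTO users VALUES ('@local:test')")

left = conn.execute(
    "SELECT d.user_id FROM user_directory d "
    "LEFT JOIN users u ON d.user_id = u.name"
).fetchall()
inner = conn.execute(
    "SELECT d.user_id FROM user_directory d "
    "JOIN users u ON d.user_id = u.name"
).fetchall()

# LEFT JOIN keeps the remote user; the inner JOIN drops it.
assert sorted(r[0] for r in left) == ["@local:test", "@remote:other"]
assert [r[0] for r in inner] == ["@local:test"]
```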

@@ -147,6 +147,16 @@ class MSC3861OAuthDelegation(HomeserverTestCase):
return hs
def prepare(
self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
) -> None:
# Provision the user and the device we use in the tests.
store = homeserver.get_datastores().main
self.get_success(store.register_user(USER_ID))
self.get_success(
store.store_device(USER_ID, DEVICE, initial_device_display_name=None)
)
def _assertParams(self) -> None:
"""Assert that the request parameters are correct."""
params = parse_qs(self.http_client.request.call_args[1]["data"].decode("utf-8"))


@@ -1029,6 +1029,50 @@ class OidcHandlerTestCase(HomeserverTestCase):
args = parse_qs(kwargs["data"].decode("utf-8"))
self.assertEqual(args["redirect_uri"], [TEST_REDIRECT_URI])
@override_config(
{
"oidc_config": {
**DEFAULT_CONFIG,
"redirect_uri": TEST_REDIRECT_URI,
}
}
)
def test_code_exchange_ignores_access_token(self) -> None:
"""
Code exchange completes successfully and doesn't validate the `at_hash`
(access token hash) field of an ID token when the access token isn't
going to be used.
The access token won't be used in this test because Synapse (currently)
only needs it to fetch a user's metadata if it isn't included in the ID
token itself.
Because we have included "openid" in the requested scopes for this IdP
(see `SCOPES`), user metadata is included in the ID token. Thus the
access token isn't needed, and it's unnecessary for Synapse to validate
the access token.
This is a regression test for a situation where an upstream identity
provider was providing an invalid `at_hash` value, which Synapse errored
on, yet Synapse wasn't using the access token for anything.
"""
# Exchange the code against the fake IdP.
userinfo = {
"sub": "foo",
"username": "foo",
"phone": "1234567",
}
with self.fake_server.id_token_override(
{
"at_hash": "invalid-hash",
}
):
request, _ = self.start_authorization(userinfo)
self.get_success(self.handler.handle_oidc_callback(request))
# If no error was rendered, then we have success.
self.render_error.assert_not_called()
@override_config(
{
"oidc_config": {


@@ -992,6 +992,67 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
[self.assertIn(user, local_users) for user in received_user_id_ordering[:3]]
[self.assertIn(user, remote_users) for user in received_user_id_ordering[3:]]
@override_config(
{
"user_directory": {
"enabled": True,
"search_all_users": True,
"exclude_remote_users": True,
}
}
)
def test_exclude_remote_users(self) -> None:
"""Tests that only local users are returned when
user_directory.exclude_remote_users is True.
"""
# Create a room and a few users to test the directory with
searching_user = self.register_user("searcher", "password")
searching_user_tok = self.login("searcher", "password")
room_id = self.helper.create_room_as(
searching_user,
room_version=RoomVersions.V1.identifier,
tok=searching_user_tok,
)
# Create a few local users and join them to the room
local_user_1 = self.register_user("user_xxxxx", "password")
local_user_2 = self.register_user("user_bbbbb", "password")
local_user_3 = self.register_user("user_zzzzz", "password")
self._add_user_to_room(room_id, RoomVersions.V1, local_user_1)
self._add_user_to_room(room_id, RoomVersions.V1, local_user_2)
self._add_user_to_room(room_id, RoomVersions.V1, local_user_3)
# Create a few "remote" users and join them to the room
remote_user_1 = "@user_aaaaa:remote_server"
remote_user_2 = "@user_yyyyy:remote_server"
remote_user_3 = "@user_ccccc:remote_server"
self._add_user_to_room(room_id, RoomVersions.V1, remote_user_1)
self._add_user_to_room(room_id, RoomVersions.V1, remote_user_2)
self._add_user_to_room(room_id, RoomVersions.V1, remote_user_3)
local_users = [local_user_1, local_user_2, local_user_3]
remote_users = [remote_user_1, remote_user_2, remote_user_3]
# The local searching user searches for the term "user", which other users have
# in their user id
results = self.get_success(
self.handler.search_users(searching_user, "user", 20)
)["results"]
received_user_ids = [result["user_id"] for result in results]
for user in local_users:
self.assertIn(
user, received_user_ids, f"Local user {user} not found in results"
)
for user in remote_users:
self.assertNotIn(
user, received_user_ids, f"Remote user {user} should not be in results"
)
def _add_user_to_room(
self,
room_id: str,


@@ -0,0 +1,417 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
from typing import List
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client import login, reporting, room
from synapse.server import HomeServer
from synapse.types import JsonDict
from synapse.util import Clock
from tests import unittest
# Based upon EventReportsTestCase
class RoomReportsTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
room.register_servlets,
reporting.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
self.other_user = self.register_user("user", "pass")
self.other_user_tok = self.login("user", "pass")
self.room_id1 = self.helper.create_room_as(
self.other_user, tok=self.other_user_tok, is_public=True
)
self.helper.join(self.room_id1, user=self.admin_user, tok=self.admin_user_tok)
self.room_id2 = self.helper.create_room_as(
self.other_user, tok=self.other_user_tok, is_public=True
)
self.helper.join(self.room_id2, user=self.admin_user, tok=self.admin_user_tok)
# Every user reports both rooms
self._report_room(self.room_id1, self.other_user_tok)
self._report_room(self.room_id2, self.other_user_tok)
self._report_room_without_parameters(self.room_id1, self.admin_user_tok)
self._report_room_without_parameters(self.room_id2, self.admin_user_tok)
self.url = "/_synapse/admin/v1/room_reports"
def test_no_auth(self) -> None:
"""
Try to get room reports without authentication.
"""
channel = self.make_request("GET", self.url, {})
self.assertEqual(401, channel.code, msg=channel.json_body)
self.assertEqual(Codes.MISSING_TOKEN, channel.json_body["errcode"])
def test_requester_is_no_admin(self) -> None:
"""
If the user is not a server admin, an error 403 is returned.
"""
channel = self.make_request(
"GET",
self.url,
access_token=self.other_user_tok,
)
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_default_success(self) -> None:
"""
Testing list of reported rooms
"""
channel = self.make_request(
"GET",
self.url,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 4)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["room_reports"])
def test_limit(self) -> None:
"""
Testing list of reported rooms with limit
"""
channel = self.make_request(
"GET",
self.url + "?limit=2",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 2)
self.assertEqual(channel.json_body["next_token"], 2)
self._check_fields(channel.json_body["room_reports"])
def test_from(self) -> None:
"""
Testing list of reported rooms with a defined starting point (from)
"""
channel = self.make_request(
"GET",
self.url + "?from=2",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 2)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["room_reports"])
def test_limit_and_from(self) -> None:
"""
Testing list of reported rooms with a defined starting point and limit
"""
channel = self.make_request(
"GET",
self.url + "?from=2&limit=1",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(channel.json_body["next_token"], 2)
self.assertEqual(len(channel.json_body["room_reports"]), 1)
self._check_fields(channel.json_body["room_reports"])
def test_filter_room(self) -> None:
"""
Testing list of reported rooms with a filter of room
"""
channel = self.make_request(
"GET",
self.url + "?room_id=%s" % self.room_id1,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 2)
self.assertEqual(len(channel.json_body["room_reports"]), 2)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["room_reports"])
for report in channel.json_body["room_reports"]:
self.assertEqual(report["room_id"], self.room_id1)
def test_filter_user(self) -> None:
"""
Testing list of reported rooms with a filter of user
"""
channel = self.make_request(
"GET",
self.url + "?user_id=%s" % self.other_user,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 2)
self.assertEqual(len(channel.json_body["room_reports"]), 2)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["room_reports"])
for report in channel.json_body["room_reports"]:
self.assertEqual(report["user_id"], self.other_user)
def test_filter_user_and_room(self) -> None:
"""
Testing list of reported rooms with a filter of user and room
"""
channel = self.make_request(
"GET",
self.url + "?user_id=%s&room_id=%s" % (self.other_user, self.room_id1),
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 1)
self.assertEqual(len(channel.json_body["room_reports"]), 1)
self.assertNotIn("next_token", channel.json_body)
self._check_fields(channel.json_body["room_reports"])
for report in channel.json_body["room_reports"]:
self.assertEqual(report["user_id"], self.other_user)
self.assertEqual(report["room_id"], self.room_id1)
def test_valid_search_order(self) -> None:
"""
Testing search order. Order by timestamps.
"""
# fetch the most recent first, largest timestamp
channel = self.make_request(
"GET",
self.url + "?dir=b",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 4)
report = 1
while report < len(channel.json_body["room_reports"]):
self.assertGreaterEqual(
channel.json_body["room_reports"][report - 1]["received_ts"],
channel.json_body["room_reports"][report]["received_ts"],
)
report += 1
# fetch the oldest first, smallest timestamp
channel = self.make_request(
"GET",
self.url + "?dir=f",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 4)
report = 1
while report < len(channel.json_body["room_reports"]):
self.assertLessEqual(
channel.json_body["room_reports"][report - 1]["received_ts"],
channel.json_body["room_reports"][report]["received_ts"],
)
report += 1
def test_invalid_search_order(self) -> None:
"""
Testing that an invalid search order returns a 400
"""
channel = self.make_request(
"GET",
self.url + "?dir=bar",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
self.assertEqual(
"Query parameter 'dir' must be one of ['b', 'f']",
channel.json_body["error"],
)
def test_limit_is_negative(self) -> None:
"""
Testing that a negative limit parameter returns a 400
"""
channel = self.make_request(
"GET",
self.url + "?limit=-5",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
def test_from_is_negative(self) -> None:
"""
Testing that a negative from parameter returns a 400
"""
channel = self.make_request(
"GET",
self.url + "?from=-5",
access_token=self.admin_user_tok,
)
self.assertEqual(400, channel.code, msg=channel.json_body)
self.assertEqual(Codes.INVALID_PARAM, channel.json_body["errcode"])
def test_next_token(self) -> None:
"""
Testing that `next_token` appears at the right place
"""
# `next_token` does not appear
# Number of results is the number of entries
channel = self.make_request(
"GET",
self.url + "?limit=4",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 4)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does not appear
# Number of max results is larger than the number of entries
channel = self.make_request(
"GET",
self.url + "?limit=5",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 4)
self.assertNotIn("next_token", channel.json_body)
# `next_token` does appear
# Number of max results is smaller than the number of entries
channel = self.make_request(
"GET",
self.url + "?limit=3",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 3)
self.assertEqual(channel.json_body["next_token"], 3)
# Set `from` to the value of `next_token` to request the remaining entries
# `next_token` does not appear
channel = self.make_request(
"GET",
self.url + "?from=3",
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
self.assertEqual(channel.json_body["total"], 4)
self.assertEqual(len(channel.json_body["room_reports"]), 1)
self.assertNotIn("next_token", channel.json_body)
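The four requests above pin down the admin API's pagination contract: `next_token` is returned only when entries remain beyond the current page, and its value is the offset to pass back as `from`. A minimal sketch of that contract (hypothetical `paginate` helper; field names taken from the responses asserted above):

```python
from typing import Any, Dict, List


def paginate(reports: List[dict], start: int = 0, limit: int = 100) -> Dict[str, Any]:
    """Return one page of reports, plus `next_token` only if entries remain."""
    page = reports[start : start + limit]
    response: Dict[str, Any] = {"total": len(reports), "room_reports": page}
    if start + limit < len(reports):
        response["next_token"] = start + limit
    return response
```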
def _report_room(self, room_id: str, user_tok: str) -> None:
"""Report a room"""
channel = self.make_request(
"POST",
f"/_matrix/client/v3/rooms/{room_id}/report",
{"reason": "this makes me sad"},
access_token=user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
def _report_room_without_parameters(self, room_id: str, user_tok: str) -> None:
"""Report a room, but omit reason"""
channel = self.make_request(
"POST",
f"/_matrix/client/v3/rooms/{room_id}/report",
{},
access_token=user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
def _check_fields(self, content: List[JsonDict]) -> None:
"""Checks that all attributes are present in a room report"""
for c in content:
self.assertIn("id", c)
self.assertIn("received_ts", c)
self.assertIn("room_id", c)
self.assertIn("user_id", c)
self.assertIn("canonical_alias", c)
self.assertIn("name", c)
self.assertIn("reason", c)
self.assertEqual(len(c.keys()), 7)
def test_count_correct_despite_table_deletions(self) -> None:
"""
Tests that the count matches the number of rows, even if rows in joined tables
are missing.
"""
# Delete rows from room_stats_state for one of our rooms.
self.get_success(
self.hs.get_datastores().main.db_pool.simple_delete(
"room_stats_state", {"room_id": self.room_id1}, desc="_"
)
)
channel = self.make_request(
"GET",
self.url,
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
# The 'total' field is 2 because only 2 reports will actually
# be retrievable since we deleted the rows in the room_stats_state
# table.
self.assertEqual(channel.json_body["total"], 2)
# This is consistent with the number of rows actually returned.
self.assertEqual(len(channel.json_body["room_reports"]), 2)


@@ -0,0 +1,192 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
#
from typing import Mapping, Optional, Tuple
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client import login
from synapse.server import HomeServer
from synapse.types import JsonMapping, ScheduledTask, TaskStatus
from synapse.util import Clock
from tests import unittest
class ScheduledTasksAdminApiTestCase(unittest.HomeserverTestCase):
servlets = [
synapse.rest.admin.register_servlets,
login.register_servlets,
]
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = hs.get_datastores().main
self.admin_user = self.register_user("admin", "pass", admin=True)
self.admin_user_tok = self.login("admin", "pass")
self._task_scheduler = hs.get_task_scheduler()
# create and schedule a few tasks
async def _test_task(
task: ScheduledTask,
) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
return TaskStatus.ACTIVE, None, None
async def _finished_test_task(
task: ScheduledTask,
) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
return TaskStatus.COMPLETE, None, None
async def _failed_test_task(
task: ScheduledTask,
) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
return TaskStatus.FAILED, None, "Everything failed"
self._task_scheduler.register_action(_test_task, "test_task")
self.get_success(
self._task_scheduler.schedule_task("test_task", resource_id="test")
)
self._task_scheduler.register_action(_finished_test_task, "finished_test_task")
self.get_success(
self._task_scheduler.schedule_task(
"finished_test_task", resource_id="finished_task"
)
)
self._task_scheduler.register_action(_failed_test_task, "failed_test_task")
self.get_success(
self._task_scheduler.schedule_task(
"failed_test_task", resource_id="failed_task"
)
)
def check_scheduled_tasks_response(self, scheduled_tasks: list) -> list:
result = []
for task in scheduled_tasks:
if task["resource_id"] == "test":
self.assertEqual(task["status"], TaskStatus.ACTIVE)
self.assertEqual(task["action"], "test_task")
result.append(task)
if task["resource_id"] == "finished_task":
self.assertEqual(task["status"], TaskStatus.COMPLETE)
self.assertEqual(task["action"], "finished_test_task")
result.append(task)
if task["resource_id"] == "failed_task":
self.assertEqual(task["status"], TaskStatus.FAILED)
self.assertEqual(task["action"], "failed_test_task")
result.append(task)
return result
def test_requester_is_not_admin(self) -> None:
"""
If the user is not a server admin, an error 403 is returned.
"""
self.register_user("user", "pass", admin=False)
other_user_tok = self.login("user", "pass")
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks",
content={},
access_token=other_user_tok,
)
self.assertEqual(403, channel.code, msg=channel.json_body)
self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])
def test_scheduled_tasks(self) -> None:
"""
Test that endpoint returns scheduled tasks.
"""
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks",
content={},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
scheduled_tasks = channel.json_body["scheduled_tasks"]
# make sure we got back all the scheduled tasks
found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
self.assertEqual(len(found_tasks), 3)
def test_filtering_scheduled_tasks(self) -> None:
"""
Test that filtering the scheduled tasks response via query params works as expected.
"""
# filter via job_status
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks?job_status=active",
content={},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
scheduled_tasks = channel.json_body["scheduled_tasks"]
found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
# only the active task should have been returned
self.assertEqual(len(found_tasks), 1)
self.assertEqual(found_tasks[0]["status"], "active")
# filter via action_name
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks?action_name=test_task",
content={},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
scheduled_tasks = channel.json_body["scheduled_tasks"]
# only test_task should have been returned
found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
self.assertEqual(len(found_tasks), 1)
self.assertEqual(found_tasks[0]["action"], "test_task")
# filter via max_timestamp
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks?max_timestamp=0",
content={},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
scheduled_tasks = channel.json_body["scheduled_tasks"]
found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
# none should have been returned
self.assertEqual(len(found_tasks), 0)
# filter via resource id
channel = self.make_request(
"GET",
"/_synapse/admin/v1/scheduled_tasks?resource_id=failed_task",
content={},
access_token=self.admin_user_tok,
)
self.assertEqual(200, channel.code, msg=channel.json_body)
scheduled_tasks = channel.json_body["scheduled_tasks"]
found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
# only the task with the matching resource id should have been returned
self.assertEqual(len(found_tasks), 1)
self.assertEqual(found_tasks[0]["resource_id"], "failed_task")


@@ -20,7 +20,9 @@
#
import base64
import json
from hashlib import sha256
from typing import Any, ContextManager, Dict, List, Optional, Tuple
from unittest.mock import Mock, patch
from urllib.parse import parse_qs
@@ -154,10 +156,23 @@ class FakeOidcServer:
json_payload = json.dumps(payload)
return jws.serialize_compact(protected, json_payload, self._key).decode("utf-8")
-    def generate_id_token(self, grant: FakeAuthorizationGrant) -> str:
+    def generate_id_token(
+        self, grant: FakeAuthorizationGrant, access_token: str
+    ) -> str:
# Generate a hash of the access token for the optional
# `at_hash` field in an ID Token.
#
# 3.1.3.6. ID Token, https://openid.net/specs/openid-connect-core-1_0.html#CodeIDToken
at_hash = (
base64.urlsafe_b64encode(sha256(access_token.encode("ascii")).digest()[:16])
.rstrip(b"=")
.decode("ascii")
)
now = int(self._clock.time())
id_token = {
**grant.userinfo,
"at_hash": at_hash,
"iss": self.issuer,
"aud": grant.client_id,
"iat": now,
@@ -243,7 +258,7 @@ class FakeOidcServer:
}
if "openid" in grant.scope:
-            token["id_token"] = self.generate_id_token(grant)
+            token["id_token"] = self.generate_id_token(grant, access_token)
return dict(token)
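The `at_hash` computation added in this hunk follows OIDC Core §3.1.3.6: hash the access token with the hash algorithm matching the ID Token's signing alg (SHA-256 here), keep the left-most half of the digest, and base64url-encode it without padding. As a standalone sketch of that derivation:

```python
import base64
from hashlib import sha256


def compute_at_hash(access_token: str) -> str:
    """Left-most 128 bits of SHA-256(access_token), base64url without padding."""
    digest = sha256(access_token.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest[:16]).rstrip(b"=").decode("ascii")
```

A relying party recomputes this value from the access token it received and compares it against the `at_hash` claim in the ID Token to confirm the two were issued together.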