
Compare commits


1 commit

Author:  Travis Ralston
Commit:  b024acffea (2020-08-18 15:21:30 -06:00)
Message: Add rudimentary API for promoting/demoting other people in a group
         For https://github.com/matrix-org/synapse/issues/2855 (initial)

181 changed files with 1419 additions and 3260 deletions

View File

@@ -12,16 +12,6 @@ from Synapse as most users have updated their client. Further context can be
found at [\#6766](https://github.com/matrix-org/synapse/issues/6766).
Synapse 1.19.1rc1 (2020-08-25)
==============================
Bugfixes
--------
- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))
Synapse 1.19.0 (2020-08-17)
===========================

View File

@@ -75,30 +75,6 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.20.0
====================
New HTML templates
------------------
A new HTML template,
`password_reset_confirmation.html <https://github.com/matrix-org/synapse/blob/develop/synapse/res/templates/password_reset_confirmation.html>`_,
has been added to the ``synapse/res/templates`` directory. If you are using a
custom template directory, you may want to copy the template over and modify it.
Note that as of v1.20.0, templates do not need to be included in custom template
directories for Synapse to start. The default templates will be used if a custom
template cannot be found.
This page will appear to the user after clicking a password reset link that has
been emailed to them.
To complete password reset, the page must include a way to make a `POST`
request to
``/_matrix/client/unstable/password_reset/{medium}/submit_token_confirm``
with the query parameters from the original link. See the file itself for more
details.
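For illustration, a minimal sketch of the confirmation request described above, assuming the emailed link carries the usual `sid`, `client_secret` and `token` query parameters (copy the exact names from the actual link):

```
# Sketch only: parameter names are assumptions based on typical
# /submit_token links; take them verbatim from the emailed link.
import requests

params = {"sid": "<sid>", "client_secret": "<client_secret>", "token": "<token>"}
resp = requests.post(
    "https://example.com/_matrix/client/unstable/password_reset/email/submit_token_confirm",
    params=params,
)
print(resp.status_code)
```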
Upgrading to v1.18.0
====================

View File

@@ -1 +0,0 @@
Add filter `name` to the `/users` admin API, which filters by user ID or displayname. Contributed by Awesome Technologies Innovationslabor GmbH.

View File

@@ -1 +0,0 @@
Don't fail `/submit_token` requests on incorrect session ID if `request_token_inhibit_3pid_errors` is turned on.

View File

@@ -1 +0,0 @@
Require the user to confirm that their password should be reset after clicking the email confirmation link.

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Fix a bug introduced in v1.7.2 impacting message retention policies that would allow federated homeservers to dictate a retention period that's lower than the configured minimum allowed duration in the configuration file.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where invalid JSON would be accepted by Synapse.

View File

@@ -1 +0,0 @@
Fix a bug introduced in Synapse 1.12.0 which could cause `/sync` requests to fail with a 404 if you had a very old outstanding room invite.

View File

@@ -1 +0,0 @@
Separate `get_current_token` into two since there are two different use cases for it.

View File

@@ -1 +0,0 @@
Iteratively encode JSON to avoid blocking the reactor.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Updated documentation to note that Synapse does not follow `HTTP 308` redirects due to an upstream library not supporting them. Contributed by Ryan Cole.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Remove `ChainedIdGenerator`.

View File

@@ -1 +0,0 @@
Reduce the amount of whitespace in JSON stored and sent in responses.

View File

@@ -1 +0,0 @@
Add type hints to `synapse.storage.database`.

View File

@@ -1 +0,0 @@
Return a proper error code when the rooms of an invalid group are requested.

View File

@@ -1 +0,0 @@
Update the test federation client to handle streaming responses.

View File

@@ -1 +0,0 @@
Fix a bug which could cause a leaked postgres connection if synapse was set to daemonize.

View File

@@ -1 +0,0 @@
Micro-optimisations to get_auth_chain_ids.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Clarify the error code if a user tries to register with a numeric ID. This bug was introduced in v1.15.0.

View File

@@ -1 +0,0 @@
Fixes a bug where appservices with ratelimiting disabled would still be ratelimited when joining rooms. This bug was introduced in v1.19.0.

View File

@@ -1 +0,0 @@
Add type hints to `synapse.state`.

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Added curl for healthcheck support and readme updates for the change. Contributed by @maquis196.

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Add support for shadow-banning users (ignoring any message send requests).

View File

@@ -1 +0,0 @@
Refactor `StreamIdGenerator` and `MultiWriterIdGenerator` to have the same interface.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Add filter `name` to the `/users` admin API, which filters by user ID or displayname. Contributed by Awesome Technologies Innovationslabor GmbH.

View File

@@ -1 +0,0 @@
Add functions to `MultiWriterIdGen` used by events stream.

View File

@@ -1 +0,0 @@
Fix tests that were broken due to the merge of 1.19.1.

View File

@@ -1 +0,0 @@
Make `SlavedIdTracker.advance` have the same interface as `MultiWriterIDGenerator`.

View File

@@ -55,7 +55,6 @@ RUN pip install --prefix="/install" --no-warn-script-location \
FROM docker.io/python:${PYTHON_VERSION}-slim
RUN apt-get update && apt-get install -y \
curl \
libpq5 \
xmlsec1 \
gosu \
@@ -70,6 +69,3 @@ VOLUME ["/data"]
EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]
HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -fSs http://localhost:8008/health || exit 1

View File

@@ -162,32 +162,3 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .
You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.
## Disabling the healthcheck
If you are using a non-standard port or tls inside docker you can disable the healthcheck
whilst running the above `docker run` commands.
```
--no-healthcheck
```
## Setting custom healthcheck on docker run
If you wish to point the healthcheck at a different port with docker command, add the following
```
--health-cmd 'curl -fSs http://localhost:1234/health'
```
## Setting the healthcheck in docker-compose file
You can add the following to set a custom healthcheck in a docker compose file.
You will need version >2.1 for this to work.
```
healthcheck:
test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
interval: 1m
timeout: 10s
retries: 3
```

View File

@@ -108,7 +108,7 @@ The api is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
To use it, you will need to authenticate by providing an ``access_token`` for a
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see `README.rst <README.rst>`_.
The parameter ``from`` is optional but used for pagination, denoting the
@@ -119,11 +119,8 @@ from a previous call.
The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.
The parameter ``user_id`` is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.
The parameter ``name`` is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
The parameter ``user_id`` is optional and filters to only users with user IDs
that contain this value.
The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
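A rough sketch of a call combining the parameters above, with a placeholder homeserver URL and admin token:

```
# Placeholder URL and token; 'name' filters on user ID localpart or displayname.
import requests

resp = requests.get(
    "https://example.com/_synapse/admin/v2/users",
    params={"from": 0, "limit": 10, "guests": "false", "name": "alice"},
    headers={"Authorization": "Bearer <admin_access_token>"},
)
users = resp.json()
```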

View File

@@ -47,18 +47,6 @@ you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
configure a reverse proxy.
### Known issues
**HTTP `308 Permanent Redirect` redirects are not followed**: Due to missing features
in the HTTP library used by Synapse, 308 redirects are currently not followed by
federating servers, which can cause `M_UNKNOWN` or `401 Unauthorized` errors. This
may affect users who are redirecting apex-to-www (e.g. `example.com` -> `www.example.com`),
and especially users of the Kubernetes *Nginx Ingress* module, which uses 308 redirect
codes by default. For those Kubernetes users, [this Stackoverflow post](https://stackoverflow.com/a/52617528/5096871)
might be helpful. For other users, switching to a `301 Moved Permanently` code may be
an option. 308 redirect codes will be supported properly in a future
release of Synapse.
## Running a demo federation of Synapses
If you want to get up and running quickly with a trio of homeservers in a

View File

@@ -378,10 +378,11 @@ retention:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
@@ -407,19 +408,12 @@ retention:
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - longest_max_lifetime: 3d
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak
@@ -2021,13 +2015,9 @@ email:
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * An HTML page that a user will see when they follow the link in the password
# reset email. The user will be asked to confirm the action before their
# password is reset: 'password_reset_confirmation.html'
#
# * HTML pages for success and failure that a user will see when they confirm
# the password reset flow using the page above: 'password_reset_success.html'
# and 'password_reset_failure.html'
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'

View File

@@ -21,12 +21,10 @@ import argparse
import base64
import json
import sys
from typing import Any, Optional
from urllib import parse as urlparse
import nacl.signing
import requests
import signedjson.types
import srvlookup
import yaml
from requests.adapters import HTTPAdapter
@@ -71,9 +69,7 @@ def encode_canonical_json(value):
).encode("UTF-8")
def sign_json(
json_object: Any, signing_key: signedjson.types.SigningKey, signing_name: str
) -> Any:
def sign_json(json_object, signing_key, signing_name):
signatures = json_object.pop("signatures", {})
unsigned = json_object.pop("unsigned", None)
@@ -126,14 +122,7 @@ def read_signing_keys(stream):
return keys
def request(
method: Optional[str],
origin_name: str,
origin_key: signedjson.types.SigningKey,
destination: str,
path: str,
content: Optional[str],
) -> requests.Response:
def request_json(method, origin_name, origin_key, destination, path, content):
if method is None:
if content is None:
method = "GET"
@@ -170,14 +159,11 @@ def request(
if method == "POST":
headers["Content-Type"] = "application/json"
return s.request(
method=method,
url=dest,
headers=headers,
verify=False,
data=content,
stream=True,
result = s.request(
method=method, url=dest, headers=headers, verify=False, data=content
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
return result.json()
def main():
@@ -236,7 +222,7 @@ def main():
with open(args.signing_key_path) as f:
key = read_signing_keys(f)[0]
result = request(
result = request_json(
args.method,
args.server_name,
key,
@@ -245,12 +231,7 @@ def main():
content=args.body,
)
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
for chunk in result.iter_content():
# we write raw utf8 to stdout.
sys.stdout.buffer.write(chunk)
json.dump(result, sys.stdout)
print("")

View File

@@ -1,47 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Stub for frozendict.
from typing import (
Any,
Hashable,
Iterable,
Iterator,
Mapping,
overload,
Tuple,
TypeVar,
)
_KT = TypeVar("_KT", bound=Hashable) # Key type.
_VT = TypeVar("_VT") # Value type.
class frozendict(Mapping[_KT, _VT]):
@overload
def __init__(self, **kwargs: _VT) -> None: ...
@overload
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
def __getitem__(self, key: _KT) -> _VT: ...
def __contains__(self, key: Any) -> bool: ...
def copy(self, **add_or_replace: Any) -> frozendict: ...
def __iter__(self) -> Iterator[_KT]: ...
def __len__(self) -> int: ...
def __repr__(self) -> str: ...
def __hash__(self) -> int: ...

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.19.1rc1"
__version__ = "1.19.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -21,9 +21,9 @@ import typing
from http import HTTPStatus
from typing import Dict, List, Optional, Union
from twisted.web import http
from canonicaljson import json
from synapse.util import json_decoder
from twisted.web import http
if typing.TYPE_CHECKING:
from synapse.types import JsonDict
@@ -593,7 +593,7 @@ class HttpResponseException(CodeMessageException):
# try to parse the body as json, to get better errcode/msg, but
# default to M_UNKNOWN with the HTTP status as the error text
try:
j = json_decoder.decode(self.response.decode("utf-8"))
j = json.loads(self.response.decode("utf-8"))
except ValueError:
j = {}
@@ -604,11 +604,3 @@ class HttpResponseException(CodeMessageException):
errmsg = j.pop("error", self.msg)
return ProxiedRequestError(self.code, errmsg, errcode, j)
class ShadowBanError(Exception):
"""
Raised when a shadow-banned user attempts to perform an action.
This should be caught and a proper "fake" success response sent to the user.
"""

View File

@@ -17,7 +17,6 @@ from collections import OrderedDict
from typing import Any, Optional, Tuple
from synapse.api.errors import LimitExceededError
from synapse.types import Requester
from synapse.util import Clock
@@ -44,42 +43,6 @@ class Ratelimiter(object):
# * The rate_hz of this particular entry. This can vary per request
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]
def can_requester_do_action(
self,
requester: Requester,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the requester perform the action?
Args:
requester: The requester to key off when rate limiting. The user property
will be used.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0
return self.can_do_action(
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
)
def can_do_action(
self,
key: Any,
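A usage sketch matching the docstring above (the rate limiter and clock are assumed to come from the surrounding handler, mirroring the join-path usage elsewhere in this diff):

```
time_now_s = self.clock.time()
allowed, time_allowed = self._join_rate_limiter_local.can_requester_do_action(
    requester
)
if not allowed:
    # time_allowed is the reactor timestamp when the action may be retried.
    raise LimitExceededError(
        retry_after_ms=int(1000 * (time_allowed - time_now_s))
    )
```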

View File

@@ -198,9 +198,6 @@ class EmailConfig(Config):
"add_threepid_template_text", "add_threepid.txt"
)
password_reset_template_confirmation_html = (
"password_reset_confirmation.html"
)
password_reset_template_failure_html = email_config.get(
"password_reset_template_failure_html", "password_reset_failure.html"
)
@@ -231,7 +228,6 @@ class EmailConfig(Config):
self.email_registration_template_text,
self.email_add_threepid_template_html,
self.email_add_threepid_template_text,
self.email_password_reset_template_confirmation_html,
self.email_password_reset_template_failure_html,
self.email_registration_template_failure_html,
self.email_add_threepid_template_failure_html,
@@ -246,7 +242,6 @@ class EmailConfig(Config):
registration_template_text,
add_threepid_template_html,
add_threepid_template_text,
password_reset_template_confirmation_html,
password_reset_template_failure_html,
registration_template_failure_html,
add_threepid_template_failure_html,
@@ -409,13 +404,9 @@ class EmailConfig(Config):
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * An HTML page that a user will see when they follow the link in the password
# reset email. The user will be asked to confirm the action before their
# password is reset: 'password_reset_confirmation.html'
#
# * HTML pages for success and failure that a user will see when they confirm
# the password reset flow using the page above: 'password_reset_success.html'
# and 'password_reset_failure.html'
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'

View File

@@ -961,10 +961,11 @@ class ServerConfig(Config):
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
@@ -990,19 +991,12 @@ class ServerConfig(Config):
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - longest_max_lifetime: 3d
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak

View File

@@ -757,8 +757,9 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
except Exception:
logger.exception("Error getting keys %s from %s", key_ids, server_name)
await yieldable_gather_results(get_key, keys_to_fetch.items())
return results
return await yieldable_gather_results(
get_key, keys_to_fetch.items()
).addCallback(lambda _: results)
async def get_server_verify_key_v2_direct(self, server_name, key_ids):
"""
@@ -768,7 +769,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
key_ids (iterable[str]):
Returns:
dict[str, FetchKeyResult]: map from key ID to lookup result
Deferred[dict[str, FetchKeyResult]]: map from key ID to lookup result
Raises:
KeyLookupError if there was a problem making the lookup

View File

@@ -47,7 +47,7 @@ def check(
Args:
room_version_obj: the version of the room
event: the event being checked.
auth_events: the existing room state.
auth_events (dict: event-key -> event): the existing room state.
Raises:
AuthError if the checks fail

View File

@@ -133,8 +133,6 @@ class _EventInternalMetadata(object):
rejection. This is needed as those events are marked as outliers, but
they still need to be processed as if they're new events (e.g. updating
invite state in the database, relaying to clients, etc).
(Added in synapse 0.99.0, so may be unreliable for events received before that)
"""
return self._dict.get("out_of_band_membership", False)

View File

@@ -15,10 +15,9 @@
# limitations under the License.
import inspect
from typing import Any, Dict, List, Optional, Tuple
from typing import Any, Dict, List
from synapse.spam_checker_api import RegistrationBehaviour, SpamCheckerApi
from synapse.types import Collection
from synapse.spam_checker_api import SpamCheckerApi
MYPY = False
if MYPY:
@@ -161,33 +160,3 @@ class SpamChecker(object):
return True
return False
def check_registration_for_spam(
self,
email_threepid: Optional[dict],
username: Optional[str],
request_info: Collection[Tuple[str, str]],
) -> RegistrationBehaviour:
"""Checks if we should allow the given registration request.
Args:
email_threepid: The email threepid used for registering, if any
username: The request user name, if any
request_info: List of tuples of user agent and IP that
were used during the registration process.
Returns:
Enum for how the request should be handled
"""
for spam_checker in self.spam_checkers:
# For backwards compatibility, only run if the method exists on the
# spam checker
checker = getattr(spam_checker, "check_registration_for_spam", None)
if checker:
behaviour = checker(email_threepid, username, request_info)
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
return RegistrationBehaviour.ALLOW
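A hypothetical spam-checker module implementing the optional callback probed for above; the loop only invokes it when the method exists on the configured checker:

```
from synapse.spam_checker_api import RegistrationBehaviour

class ExampleSpamChecker:
    def check_registration_for_spam(self, email_threepid, username, request_info):
        # request_info is a list of (user_agent, ip) tuples seen during
        # the registration process.
        if username is not None and "spam" in username:
            return RegistrationBehaviour.SHADOW_BAN
        return RegistrationBehaviour.ALLOW
```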

View File

@@ -74,14 +74,15 @@ class EventValidator(object):
)
if event.type == EventTypes.Retention:
self._validate_retention(event)
self._validate_retention(event, config)
def _validate_retention(self, event):
def _validate_retention(self, event, config):
"""Checks that an event that defines the retention policy for a room respects the
format enforced by the spec.
boundaries imposed by the server's administrator.
Args:
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
"""
min_lifetime = event.content.get("min_lifetime")
max_lifetime = event.content.get("max_lifetime")
@@ -94,6 +95,32 @@ class EventValidator(object):
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and min_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be lower than the minimum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and min_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if max_lifetime is not None:
if not isinstance(max_lifetime, int):
raise SynapseError(
@@ -102,6 +129,32 @@ class EventValidator(object):
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and max_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be lower than the minimum allowed value"
" enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and max_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
min_lifetime is not None
and max_lifetime is not None

View File

@@ -28,6 +28,7 @@ from typing import (
Union,
)
from canonicaljson import json
from prometheus_client import Counter, Histogram
from twisted.internet import defer
@@ -62,7 +63,7 @@ from synapse.replication.http.federation import (
ReplicationGetQueryRestServlet,
)
from synapse.types import JsonDict, get_domain_from_id
from synapse.util import glob_to_regex, json_decoder, unwrapFirstError
from synapse.util import glob_to_regex, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@@ -550,7 +551,7 @@ class FederationServer(FederationBase):
for device_id, keys in device_keys.items():
for key_id, json_str in keys.items():
json_result.setdefault(user_id, {})[device_id] = {
key_id: json_decoder.decode(json_str)
key_id: json.loads(json_str)
}
logger.info(

View File

@@ -329,10 +329,10 @@ class FederationSender(object):
room_id = receipt.room_id
# Work out which remote servers should be poked and poke them.
domains_set = await self.state.get_current_hosts_in_room(room_id)
domains = await self.state.get_current_hosts_in_room(room_id)
domains = [
d
for d in domains_set
for d in domains
if d != self.server_name
and self._federation_shard_config.should_handle(self._instance_name, d)
]

View File

@@ -15,6 +15,8 @@
import logging
from typing import TYPE_CHECKING, List, Tuple
from canonicaljson import json
from synapse.api.errors import HttpResponseException
from synapse.events import EventBase
from synapse.federation.persistence import TransactionActions
@@ -26,7 +28,6 @@ from synapse.logging.opentracing import (
tags,
whitelisted_homeserver,
)
from synapse.util import json_decoder
from synapse.util.metrics import measure_func
if TYPE_CHECKING:
@@ -70,7 +71,7 @@ class TransactionManager(object):
for edu in pending_edus:
context = edu.get_context()
if context:
span_contexts.append(extract_text_map(json_decoder.decode(context)))
span_contexts.append(extract_text_map(json.loads(context)))
if keep_destination:
edu.strip_context()

View File

@@ -719,6 +719,27 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
raise NotImplementedError()
async def change_user_admin_in_group(
self, group_id, user_id, want_admin, requester_user_id, content
):
"""Promotes or demotes a user in a group.
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
if requester_user_id == user_id:
raise SynapseError(400, "User cannot target themselves")
is_admin = await self.store.is_user_admin_in_group(
group_id, requester_user_id
)
if not is_admin:
raise SynapseError(403, "User is not admin in group")
await self.store.change_user_admin_in_group(group_id, user_id, want_admin)
return {}
async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):

View File

@@ -364,14 +364,6 @@ class AuthHandler(BaseHandler):
# authentication flow.
await self.store.set_ui_auth_clientdict(sid, clientdict)
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
await self.store.add_user_agent_ip_to_ui_auth_session(
session.session_id, user_agent, clientip
)
if not authdict:
raise InteractiveAuthIncompleteError(
session.session_id, self._auth_dict_for_flows(flows, session.session_id)

View File

@@ -35,7 +35,6 @@ class CasHandler:
"""
def __init__(self, hs):
self.hs = hs
self._hostname = hs.hostname
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
@@ -211,16 +210,8 @@ class CasHandler:
else:
if not registered_user_id:
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(
b"User-Agent", default=[b""]
)[0].decode("ascii", "surrogateescape")
ip_address = self.hs.get_ip_from_request(request)
registered_user_id = await self._registration_handler.register_user(
localpart=localpart,
default_display_name=user_display_name,
user_agent_ips=(user_agent, ip_address),
localpart=localpart, default_display_name=user_display_name
)
await self._auth_handler.complete_sso_login(

View File

@@ -16,6 +16,8 @@
import logging
from typing import Any, Dict
from canonicaljson import json
from synapse.api.errors import SynapseError
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
@@ -25,7 +27,6 @@ from synapse.logging.opentracing import (
start_active_span,
)
from synapse.types import UserID, get_domain_from_id
from synapse.util import json_encoder
from synapse.util.stringutils import random_string
logger = logging.getLogger(__name__)
@@ -173,7 +174,7 @@ class DeviceMessageHandler(object):
"sender": sender_user_id,
"type": message_type,
"message_id": message_id,
"org.matrix.opentracing_context": json_encoder.encode(context),
"org.matrix.opentracing_context": json.dumps(context),
}
log_kv({"local_messages": local_messages})

View File

@@ -23,7 +23,6 @@ from synapse.api.errors import (
CodeMessageException,
Codes,
NotFoundError,
ShadowBanError,
StoreError,
SynapseError,
)
@@ -200,8 +199,6 @@ class DirectoryHandler(BaseHandler):
try:
await self._update_canonical_alias(requester, user_id, room_id, room_alias)
except ShadowBanError as e:
logger.info("Failed to update alias events due to shadow-ban: %s", e)
except AuthError as e:
logger.info("Failed to update alias events: %s", e)
@@ -295,9 +292,6 @@ class DirectoryHandler(BaseHandler):
"""
Send an updated canonical alias event if the removed alias was set as
the canonical alias or listed in the alt_aliases field.
Raises:
ShadowBanError if the requester has been shadow-banned.
"""
alias_event = await self.state.get_current_state(
room_id, EventTypes.CanonicalAlias, ""

View File

@@ -19,7 +19,7 @@ import logging
from typing import Dict, List, Optional, Tuple
import attr
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
from signedjson.key import VerifyKey, decode_verify_key_bytes
from signedjson.sign import SignatureVerifyException, verify_signed_json
from unpaddedbase64 import decode_base64
@@ -35,7 +35,7 @@ from synapse.types import (
get_domain_from_id,
get_verify_key_from_cross_signing_key,
)
from synapse.util import json_decoder, unwrapFirstError
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
@@ -404,7 +404,7 @@ class E2eKeysHandler(object):
for device_id, keys in device_keys.items():
for key_id, json_bytes in keys.items():
json_result.setdefault(user_id, {})[device_id] = {
key_id: json_decoder.decode(json_bytes)
key_id: json.loads(json_bytes)
}
@trace
@@ -1186,7 +1186,7 @@ def _exception_to_failure(e):
def _one_time_keys_match(old_key_json, new_key):
old_key = json_decoder.decode(old_key_json)
old_key = json.loads(old_key_json)
# if either is a string rather than an object, they must match exactly
if not isinstance(old_key, dict) or not isinstance(new_key, dict):

View File

@@ -1777,7 +1777,9 @@ class FederationHandler(BaseHandler):
"""Returns the state at the event. i.e. not including said event.
"""
event = await self.store.get_event(event_id, check_room_id=room_id)
event = await self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
state_groups = await self.state_store.get_state_groups(room_id, [event_id])
@@ -1803,7 +1805,9 @@ class FederationHandler(BaseHandler):
async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event.
"""
event = await self.store.get_event(event_id, check_room_id=room_id)
event = await self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
state_groups = await self.state_store.get_state_groups_ids(room_id, [event_id])
@@ -2134,10 +2138,10 @@ class FederationHandler(BaseHandler):
)
state_sets = list(state_sets.values())
state_sets.append(state)
current_states = await self.state_handler.resolve_events(
current_state_ids = await self.state_handler.resolve_events(
room_version, state_sets, event
)
current_state_ids = {k: e.event_id for k, e in current_states.items()}
current_state_ids = {k: e.event_id for k, e in current_state_ids.items()}
else:
current_state_ids = await self.state_handler.get_current_state_ids(
event.room_id, latest_event_ids=extrem_ids
@@ -2149,13 +2153,11 @@ class FederationHandler(BaseHandler):
# Now check if event pass auth against said current state
auth_types = auth_types_for_event(event)
current_state_ids_list = [
e for k, e in current_state_ids.items() if k in auth_types
]
current_state_ids = [e for k, e in current_state_ids.items() if k in auth_types]
auth_events_map = await self.store.get_events(current_state_ids_list)
current_auth_events = await self.store.get_events(current_state_ids)
current_auth_events = {
(e.type, e.state_key): e for e in auth_events_map.values()
(e.type, e.state_key): e for e in current_auth_events.values()
}
try:
@@ -2171,7 +2173,9 @@ class FederationHandler(BaseHandler):
if not in_room:
raise AuthError(403, "Host not in room.")
event = await self.store.get_event(event_id, check_room_id=room_id)
event = await self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
# Just go through and process each event in `remote_auth_chain`. We
# don't want to fall into the trap of `missing` being wrong.

View File

@@ -461,6 +461,25 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {"state": "invite", "user_profile": user_profile}
async def change_user_admin_in_group(
self, group_id, user_id, want_admin, requester_user_id, content
):
"""Promotes or demotes a user in a group.
"""
if not self.is_mine_id(user_id):
raise SynapseError(400, "User not on this server")
# TODO: We should probably support federation, but this is fine for now
if not self.is_mine_id(group_id):
raise SynapseError(400, "Group not on this server")
res = await self.groups_server_handler.change_user_admin_in_group(
group_id, user_id, want_admin, requester_user_id, content
)
return res
async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):
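A sketch of invoking the new handler method added by this commit; the group and user IDs are placeholders:

```
await groups_local_handler.change_user_admin_in_group(
    group_id="+coolpeople:example.com",
    user_id="@alice:example.com",
    want_admin=True,  # False demotes instead
    requester_user_id="@admin:example.com",
    content={},
)
```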

View File

@@ -21,6 +21,8 @@ import logging
import urllib.parse
from typing import Awaitable, Callable, Dict, List, Optional, Tuple
from canonicaljson import json
from twisted.internet.error import TimeoutError
from synapse.api.errors import (
@@ -32,7 +34,6 @@ from synapse.api.errors import (
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.http.client import SimpleHttpClient
from synapse.types import JsonDict, Requester
from synapse.util import json_decoder
from synapse.util.hash import sha256_and_url_safe_base64
from synapse.util.stringutils import assert_valid_client_secret, random_string
@@ -176,7 +177,7 @@ class IdentityHandler(BaseHandler):
except TimeoutError:
raise SynapseError(500, "Timed out contacting identity server")
except CodeMessageException as e:
data = json_decoder.decode(e.msg) # XXX WAT?
data = json.loads(e.msg) # XXX WAT?
return data
logger.info("Got 404 when POSTing JSON %s, falling back to v1 URL", bind_url)

View File

@@ -15,10 +15,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import random
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
from twisted.internet.interfaces import IDelayedCall
@@ -35,7 +34,6 @@ from synapse.api.errors import (
Codes,
ConsentNotGivenError,
NotFoundError,
ShadowBanError,
SynapseError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
@@ -57,7 +55,6 @@ from synapse.types import (
UserID,
create_requester,
)
from synapse.util import json_decoder
from synapse.util.async_helpers import Linearizer
from synapse.util.frozenutils import frozendict_json_encoder
from synapse.util.metrics import measure_func
@@ -647,35 +644,24 @@ class EventCreationHandler(object):
event: EventBase,
context: EventContext,
ratelimit: bool = True,
ignore_shadow_ban: bool = False,
) -> int:
"""
Persists and notifies local clients and federation of an event.
Args:
requester: The requester sending the event.
event: The event to send.
context: The context of the event.
requester
event the event to send.
context: the context of the event.
ratelimit: Whether to rate limit this send.
ignore_shadow_ban: True if shadow-banned users should be allowed to
send this event.
Return:
The stream_id of the persisted event.
Raises:
ShadowBanError if the requester has been shadow-banned.
"""
if event.type == EventTypes.Member:
raise SynapseError(
500, "Tried to send member event through non-member codepath"
)
if not ignore_shadow_ban and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
user = UserID.from_string(event.sender)
assert self.hs.is_mine(user), "User must be our own: %s" % (user,)
@@ -729,28 +715,12 @@ class EventCreationHandler(object):
event_dict: dict,
ratelimit: bool = True,
txn_id: Optional[str] = None,
ignore_shadow_ban: bool = False,
) -> Tuple[EventBase, int]:
"""
Creates an event, then sends it.
See self.create_event and self.send_nonmember_event.
Args:
requester: The requester sending the event.
event_dict: An entire event.
ratelimit: Whether to rate limit this send.
txn_id: The transaction ID.
ignore_shadow_ban: True if shadow-banned users should be allowed to
send this event.
Raises:
ShadowBanError if the requester has been shadow-banned.
"""
if not ignore_shadow_ban and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
# We limit the number of concurrent event sends in a room so that we
# don't fork the DAG too much. If we don't limit then we can end up in
@@ -769,11 +739,7 @@ class EventCreationHandler(object):
raise SynapseError(403, spam_error, Codes.FORBIDDEN)
stream_id = await self.send_nonmember_event(
requester,
event,
context,
ratelimit=ratelimit,
ignore_shadow_ban=ignore_shadow_ban,
requester, event, context, ratelimit=ratelimit
)
return event, stream_id
@@ -898,7 +864,7 @@ class EventCreationHandler(object):
# Ensure that we can round trip before trying to persist in db
try:
dump = frozendict_json_encoder.encode(event.content)
json_decoder.decode(dump)
json.loads(dump)
except Exception:
logger.exception("Failed to encode content: %r", event.content)
raise
@@ -994,7 +960,7 @@ class EventCreationHandler(object):
allow_none=True,
)
is_admin_redaction = bool(
is_admin_redaction = (
original_event and event.sender != original_event.sender
)
@@ -1114,8 +1080,8 @@ class EventCreationHandler(object):
auth_events_ids = self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_map = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events_map.values()}
auth_events = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
@@ -1213,14 +1179,8 @@ class EventCreationHandler(object):
event.internal_metadata.proactively_send = False
# Since this is a dummy-event it is OK if it is sent by a
# shadow-banned user.
await self.send_nonmember_event(
requester,
event,
context,
ratelimit=False,
ignore_shadow_ban=True,
requester, event, context, ratelimit=False
)
dummy_event_sent = True
break

View File

@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, Tuple, TypeVar
from urllib.parse import urlencode
@@ -38,7 +39,6 @@ from synapse.http.server import respond_with_html
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.types import UserID, map_username_to_mxid_localpart
from synapse.util import json_decoder
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -93,7 +93,6 @@ class OidcHandler:
"""
def __init__(self, hs: "HomeServer"):
self.hs = hs
self._callback_url = hs.config.oidc_callback_url # type: str
self._scopes = hs.config.oidc_scopes # type: List[str]
self._client_auth = ClientAuth(
@@ -368,7 +367,7 @@ class OidcHandler:
# and check for an error field. If not, we respond with a generic
# error message.
try:
resp = json_decoder.decode(resp_body.decode("utf-8"))
resp = json.loads(resp_body.decode("utf-8"))
error = resp["error"]
description = resp.get("error_description", error)
except (ValueError, KeyError):
@@ -385,7 +384,7 @@ class OidcHandler:
# Since it is a not a 5xx code, body should be a valid JSON. It will
# raise if not.
resp = json_decoder.decode(resp_body.decode("utf-8"))
resp = json.loads(resp_body.decode("utf-8"))
if "error" in resp:
error = resp["error"]
@@ -690,17 +689,9 @@ class OidcHandler:
self._render_error(request, "invalid_token", str(e))
return
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
ip_address = self.hs.get_ip_from_request(request)
# Call the mapper to register/login the user
try:
user_id = await self._map_userinfo_to_user(
userinfo, token, user_agent, ip_address
)
user_id = await self._map_userinfo_to_user(userinfo, token)
except MappingException as e:
logger.exception("Could not map user")
self._render_error(request, "mapping_error", str(e))
@@ -837,9 +828,7 @@ class OidcHandler:
now = self._clock.time_msec()
return now < expiry
async def _map_userinfo_to_user(
self, userinfo: UserInfo, token: Token, user_agent: str, ip_address: str
) -> str:
async def _map_userinfo_to_user(self, userinfo: UserInfo, token: Token) -> str:
"""Maps a UserInfo object to a mxid.
UserInfo should have a claim that uniquely identifies users. This claim
@@ -854,8 +843,6 @@ class OidcHandler:
Args:
userinfo: an object representing the user
token: a dict with the tokens obtained from the provider
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
Raises:
MappingException: if there was an error while mapping some properties
@@ -912,9 +899,7 @@ class OidcHandler:
# It's the first time this user is logging in and the mapped mxid was
# not taken, register the user
registered_user_id = await self._registration_handler.register_user(
localpart=localpart,
default_display_name=attributes["display_name"],
user_agent_ips=(user_agent, ip_address),
localpart=localpart, default_display_name=attributes["display_name"],
)
await self._datastore.record_user_external_id(

View File

@@ -82,9 +82,6 @@ class PaginationHandler(object):
self._retention_default_max_lifetime = hs.config.retention_default_max_lifetime
self._retention_allowed_lifetime_min = hs.config.retention_allowed_lifetime_min
self._retention_allowed_lifetime_max = hs.config.retention_allowed_lifetime_max
if hs.config.retention_enabled:
# Run the purge jobs described in the configuration file.
for job in hs.config.retention_purge_jobs:
@@ -114,7 +111,7 @@ class PaginationHandler(object):
the range to handle (inclusive). If None, it means that the range has no
upper limit.
"""
# We want the storage layer to include rooms with no retention policy in its
# We want the storage layer to to include rooms with no retention policy in its
# return value only if a default retention policy is defined in the server's
# configuration and that policy's 'max_lifetime' is either lower (or equal) than
# max_ms or higher than min_ms (or both).
@@ -155,32 +152,13 @@ class PaginationHandler(object):
)
continue
# If max_lifetime is None, it means that the room has no retention policy.
# Given we only retrieve such rooms when there's a default retention policy
# defined in the server's configuration, we can safely assume that's the
# case and use it for this room.
max_lifetime = (
retention_policy["max_lifetime"] or self._retention_default_max_lifetime
)
max_lifetime = retention_policy["max_lifetime"]
# Cap the effective max_lifetime to be within the range allowed in the
# config.
# We do this in two steps:
# 1. Make sure it's higher or equal to the minimum allowed value, and if
# it's not replace it with that value. This is because the server
# operator can be required to not delete information before a given
# time, e.g. to comply with freedom of information laws.
# 2. Make sure the resulting value is lower or equal to the maximum allowed
# value, and if it's not replace it with that value. This is because the
# server operator can be required to delete any data after a specific
# amount of time.
if self._retention_allowed_lifetime_min is not None:
max_lifetime = max(self._retention_allowed_lifetime_min, max_lifetime)
if self._retention_allowed_lifetime_max is not None:
max_lifetime = min(max_lifetime, self._retention_allowed_lifetime_max)
logger.debug("[purge] max_lifetime for room %s: %s", room_id, max_lifetime)
if max_lifetime is None:
# If max_lifetime is None, it means that include_null equals True,
# therefore we can safely assume that there is a default policy defined
# in the server's configuration.
max_lifetime = self._retention_default_max_lifetime
# Figure out what token we should start purging at.
ts = self.clock.time_msec() - max_lifetime
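A worked example of the two-step clamp above, with hypothetical config values (all durations in milliseconds):

```
allowed_min = 86400000       # allowed_lifetime_min: 1d
allowed_max = 31536000000    # allowed_lifetime_max: 1y
max_lifetime = 43200000      # room policy of 12h
if allowed_min is not None:
    max_lifetime = max(allowed_min, max_lifetime)  # raised to 1d
if allowed_max is not None:
    max_lifetime = min(max_lifetime, allowed_max)  # unchanged: 1d
```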

View File

@@ -40,7 +40,7 @@ from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.state import StateHandler
from synapse.storage.databases.main import DataStore
from synapse.types import Collection, JsonDict, UserID, get_domain_from_id
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.descriptors import cached
from synapse.util.metrics import Measure
@@ -1318,7 +1318,7 @@ async def get_interested_parties(
async def get_interested_remotes(
store: DataStore, states: List[UserPresenceState], state_handler: StateHandler
) -> List[Tuple[Collection[str], List[UserPresenceState]]]:
) -> List[Tuple[List[str], List[UserPresenceState]]]:
"""Given a list of presence states figure out which remote servers
should be sent which.
@@ -1334,7 +1334,7 @@ async def get_interested_remotes(
each tuple the list of UserPresenceState should be sent to each
destination
"""
hosts_and_states = [] # type: List[Tuple[Collection[str], List[UserPresenceState]]]
hosts_and_states = []
# First we look up the rooms each user is in (as well as any explicit
# subscriptions), then for each distinct room we look up the remote

View File

@@ -14,7 +14,6 @@
# limitations under the License.
import logging
import random
from synapse.api.errors import (
AuthError,
@@ -214,14 +213,8 @@ class BaseProfileHandler(BaseHandler):
async def set_avatar_url(
self, target_user, requester, new_avatar_url, by_admin=False
):
"""Set a new avatar URL for a user.
Args:
target_user (UserID): the user whose avatar URL is to be changed.
requester (Requester): The user attempting to make this change.
new_avatar_url (str): The avatar URL to give this user.
by_admin (bool): Whether this change was made by an administrator.
"""
"""target_user is the user whose avatar_url is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this homeserver")
@@ -285,12 +278,6 @@ class BaseProfileHandler(BaseHandler):
await self.ratelimit(requester)
# Do not actually update the room state for shadow-banned users.
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
return
room_ids = await self.store.get_rooms_for_user(target_user.to_string())
for room_id in room_ids:

View File

@@ -26,7 +26,6 @@ from synapse.replication.http.register import (
ReplicationPostRegisterActionsServlet,
ReplicationRegisterServlet,
)
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.storage.state import StateFilter
from synapse.types import RoomAlias, UserID, create_requester
@@ -53,8 +52,6 @@ class RegistrationHandler(BaseHandler):
self.macaroon_gen = hs.get_macaroon_generator()
self._server_notices_mxid = hs.config.server_notices_mxid
self.spam_checker = hs.get_spam_checker()
if hs.config.worker_app:
self._register_client = ReplicationRegisterServlet.make_client(hs)
self._register_device_client = RegisterDeviceReplicationServlet.make_client(
@@ -127,9 +124,7 @@ class RegistrationHandler(BaseHandler):
try:
int(localpart)
raise SynapseError(
400,
"Numeric user IDs are reserved for guest users.",
errcode=Codes.INVALID_USERNAME,
400, "Numeric user IDs are reserved for guest users."
)
except ValueError:
pass
@@ -147,7 +142,7 @@ class RegistrationHandler(BaseHandler):
address=None,
bind_emails=[],
by_admin=False,
user_agent_ips=None,
shadow_banned=False,
):
"""Registers a new client on the server.
@@ -165,8 +160,7 @@ class RegistrationHandler(BaseHandler):
bind_emails (List[str]): list of emails to bind to this account.
by_admin (bool): True if this registration is being made via the
admin api, otherwise False.
user_agent_ips (List[(str, str)]): Tuples of IP addresses and user-agents used
during the registration process.
shadow_banned (bool): Shadow-ban the created user.
Returns:
str: user_id
Raises:
@@ -174,24 +168,6 @@ class RegistrationHandler(BaseHandler):
"""
self.check_registration_ratelimit(address)
result = self.spam_checker.check_registration_for_spam(
threepid, localpart, user_agent_ips or [],
)
if result == RegistrationBehaviour.DENY:
logger.info(
"Blocked registration of %r", localpart,
)
# We return a 429 to make it not obvious that they've been
# denied.
raise SynapseError(429, "Rate limited")
shadow_banned = result == RegistrationBehaviour.SHADOW_BAN
if shadow_banned:
logger.info(
"Shadow banning registration of %r", localpart,
)
# do not check_auth_blocking if the call is coming through the Admin API
if not by_admin:
await self.auth.check_auth_blocking(threepid=threepid)

View File

@@ -20,7 +20,6 @@
import itertools
import logging
import math
import random
import string
from collections import OrderedDict
from typing import TYPE_CHECKING, Any, Awaitable, Dict, List, Optional, Tuple
@@ -136,9 +135,6 @@ class RoomCreationHandler(BaseHandler):
Returns:
the new room id
Raises:
ShadowBanError if the requester is shadow-banned.
"""
await self.ratelimit(requester)
@@ -174,15 +170,6 @@ class RoomCreationHandler(BaseHandler):
async def _upgrade_room(
self, requester: Requester, old_room_id: str, new_version: RoomVersion
):
"""
Args:
requester: the user requesting the upgrade
old_room_id: the id of the room to be replaced
new_versions: the version to upgrade the room to
Raises:
ShadowBanError if the requester is shadow-banned.
"""
user_id = requester.user.to_string()
# start by allocating a new room id
@@ -269,9 +256,6 @@ class RoomCreationHandler(BaseHandler):
old_room_id: the id of the room to be replaced
new_room_id: the id of the replacement room
old_room_state: the state map for the old room
Raises:
ShadowBanError if the requester is shadow-banned.
"""
old_room_pl_event_id = old_room_state.get((EventTypes.PowerLevels, ""))
@@ -642,7 +626,6 @@ class RoomCreationHandler(BaseHandler):
if mapping:
raise SynapseError(400, "Room alias already taken", Codes.ROOM_IN_USE)
invite_3pid_list = config.get("invite_3pid", [])
invite_list = config.get("invite", [])
for i in invite_list:
try:
@@ -651,14 +634,6 @@ class RoomCreationHandler(BaseHandler):
except Exception:
raise SynapseError(400, "Invalid user_id: %s" % (i,))
if (invite_list or invite_3pid_list) and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
# Allow the request to go through, but remove any associated invites.
invite_3pid_list = []
invite_list = []
await self.event_creation_handler.assert_accepted_privacy_policy(requester)
power_level_content_override = config.get("power_level_content_override")
@@ -673,6 +648,8 @@ class RoomCreationHandler(BaseHandler):
% (user_id,),
)
invite_3pid_list = config.get("invite_3pid", [])
visibility = config.get("visibility", None)
is_public = visibility == "public"
@@ -767,8 +744,6 @@ class RoomCreationHandler(BaseHandler):
if is_direct:
content["is_direct"] = is_direct
# Note that update_membership with an action of "invite" can raise a
# ShadowBanError, but this was handled above by emptying invite_list.
_, last_stream_id = await self.room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
@@ -783,8 +758,6 @@ class RoomCreationHandler(BaseHandler):
id_access_token = invite_3pid.get("id_access_token") # optional
address = invite_3pid["address"]
medium = invite_3pid["medium"]
# Note that do_3pid_invite can raise a ShadowBanError, but this was
# handled above by emptying invite_3pid_list.
last_stream_id = await self.hs.get_room_member_handler().do_3pid_invite(
room_id,
requester.user,
@@ -844,13 +817,11 @@ class RoomCreationHandler(BaseHandler):
async def send(etype: str, content: JsonDict, **kwargs) -> int:
event = create(etype, content, **kwargs)
logger.debug("Sending %s in new room", etype)
# Allow these events to be sent even if the user is shadow-banned to
# allow the room creation to complete.
(
_,
last_stream_id,
) = await self.event_creation_handler.create_and_send_nonmember_event(
creator, event, ratelimit=False, ignore_shadow_ban=True,
creator, event, ratelimit=False
)
return last_stream_id

View File

@@ -15,21 +15,14 @@
import abc
import logging
import random
from http import HTTPStatus
from typing import TYPE_CHECKING, Iterable, List, Optional, Tuple, Union
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Union
from unpaddedbase64 import encode_base64
from synapse import types
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import (
AuthError,
Codes,
LimitExceededError,
ShadowBanError,
SynapseError,
)
from synapse.api.errors import AuthError, Codes, LimitExceededError, SynapseError
from synapse.api.ratelimiting import Ratelimiter
from synapse.api.room_versions import EventFormatVersions
from synapse.crypto.event_signing import compute_event_reference_hash
@@ -38,15 +31,7 @@ from synapse.events.builder import create_local_event_from_event_dict
from synapse.events.snapshot import EventContext
from synapse.events.validator import EventValidator
from synapse.storage.roommember import RoomsForUser
from synapse.types import (
Collection,
JsonDict,
Requester,
RoomAlias,
RoomID,
StateMap,
UserID,
)
from synapse.types import Collection, JsonDict, Requester, RoomAlias, RoomID, UserID
from synapse.util.async_helpers import Linearizer
from synapse.util.distributor import user_joined_room, user_left_room
@@ -225,40 +210,24 @@ class RoomMemberHandler(object):
_, stream_id = await self.store.get_event_ordering(duplicate.event_id)
return duplicate.event_id, stream_id
prev_state_ids = await context.get_prev_state_ids()
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
newly_joined = False
if event.membership == Membership.JOIN:
newly_joined = True
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
# Only rate-limit if the user actually joined the room, otherwise we'll end
# up blocking profile updates.
if newly_joined:
time_now_s = self.clock.time()
(
allowed,
time_allowed,
) = self._join_rate_limiter_local.can_requester_do_action(requester)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
)
stream_id = await self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[target], ratelimit=ratelimit,
)
if event.membership == Membership.JOIN and newly_joined:
prev_state_ids = await context.get_prev_state_ids()
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
if event.membership == Membership.JOIN:
# Only fire user_joined_room if the user has actually joined the
# room. Don't bother if the user is just changing their profile
# info.
await self._user_joined_room(target, room_id)
newly_joined = True
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
await self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = await self.store.get_event(prev_member_event_id)
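
The membership hunk above also drops the newly_joined gating that kept the join rate limiter from firing on profile updates (a displayname or avatar change is itself a join event with unchanged membership). A minimal sketch of the check, assuming events expose a membership attribute as in the hunk:

from synapse.api.constants import Membership

async def is_newly_joined(store, event, prev_member_event_id):
    # Only a transition *into* the join state counts; a join event whose
    # previous membership was already "join" is just a profile update.
    if event.membership != Membership.JOIN:
        return False
    if prev_member_event_id is None:
        return True
    prev_member_event = await store.get_event(prev_member_event_id)
    return prev_member_event.membership != Membership.JOIN
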
@@ -316,31 +285,6 @@ class RoomMemberHandler(object):
content: Optional[dict] = None,
require_consent: bool = True,
) -> Tuple[str, int]:
"""Update a user's membership in a room.
Args:
requester: The user who is performing the update.
target: The user whose membership is being updated.
room_id: The room ID whose membership is being updated.
action: The membership change, see synapse.api.constants.Membership.
txn_id: The transaction ID, if given.
remote_room_hosts: Remote servers to send the update to.
third_party_signed: Information from a 3PID invite.
ratelimit: Whether to rate limit the request.
content: The content of the created event.
require_consent: Whether consent is required.
Returns:
A tuple of the new event ID and stream ID.
Raises:
ShadowBanError if a shadow-banned requester attempts to send an invite.
"""
if action == Membership.INVITE and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
key = (room_id,)
with (await self.member_linearizer.queue(key)):
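
Unlike the room-creation path, which silently empties the invite lists, update_membership raises ShadowBanError and leaves it to each caller to fake a success. A hedged sketch of how a caller might absorb the error; the fabricated-event-ID trick is illustrative and its exact format is an assumption:

import random
import string

from synapse.api.errors import ShadowBanError

async def send_invite(handler, requester, target, room_id):
    try:
        event_id, _ = await handler.update_membership(
            requester, target, room_id, "invite"
        )
    except ShadowBanError:
        # Return a plausible-looking event ID so the shadow-banned user
        # cannot tell the invite was dropped.
        event_id = "$" + "".join(
            random.choices(string.ascii_letters + string.digits, k=43)
        )
    return event_id
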
@@ -380,7 +324,7 @@ class RoomMemberHandler(object):
# later on.
content = dict(content)
if not self.allow_per_room_profiles or requester.shadow_banned:
if not self.allow_per_room_profiles:
# Strip profile data, knowing that new profile data will be added to the
# event's content in event_creation_handler.create_event() using the target's
# global profile.
@@ -513,12 +457,22 @@ class RoomMemberHandler(object):
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
if is_host_in_room:
time_now_s = self.clock.time()
(
allowed,
time_allowed,
) = self._join_rate_limiter_remote.can_requester_do_action(requester,)
allowed, time_allowed = self._join_rate_limiter_local.can_do_action(
requester.user.to_string(),
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
)
else:
time_now_s = self.clock.time()
allowed, time_allowed = self._join_rate_limiter_remote.can_do_action(
requester.user.to_string(),
)
if not allowed:
raise LimitExceededError(
@@ -750,7 +704,9 @@ class RoomMemberHandler(object):
if prev_member_event.membership == Membership.JOIN:
await self._user_left_room(target_user, room_id)
async def _can_guest_join(self, current_state_ids: StateMap[str]) -> bool:
async def _can_guest_join(
self, current_state_ids: Dict[Tuple[str, str], str]
) -> bool:
"""
Returns whether a guest can join a room based on its current state.
"""
@@ -760,7 +716,7 @@ class RoomMemberHandler(object):
guest_access = await self.store.get_event(guest_access_id)
return bool(
return (
guest_access
and guest_access.content
and "guest_access" in guest_access.content
@@ -817,25 +773,6 @@ class RoomMemberHandler(object):
txn_id: Optional[str],
id_access_token: Optional[str] = None,
) -> int:
"""Invite a 3PID to a room.
Args:
room_id: The room to invite the 3PID to.
inviter: The user sending the invite.
medium: The 3PID's medium.
address: The 3PID's address.
id_server: The identity server to use.
requester: The user making the request.
txn_id: The transaction ID this is part of, or None if this is not
part of a transaction.
id_access_token: The optional identity server access token.
Returns:
The new stream ID.
Raises:
ShadowBanError if the requester has been shadow-banned.
"""
if self.config.block_non_admin_invites:
is_requester_admin = await self.auth.is_server_admin(requester.user)
if not is_requester_admin:
@@ -843,11 +780,6 @@ class RoomMemberHandler(object):
403, "Invites have been disabled on this server", Codes.FORBIDDEN
)
if requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
# We need to rate limit *before* we send out any 3PID invites, so we
# can't just rely on the standard ratelimiting of events.
await self.base_handler.ratelimit(requester)
@@ -872,8 +804,6 @@ class RoomMemberHandler(object):
)
if invitee:
# Note that update_membership with an action of "invite" can raise
# a ShadowBanError, but this was done above already.
_, stream_id = await self.update_membership(
requester, UserID.from_string(invitee), room_id, "invite", txn_id=txn_id
)
@@ -979,7 +909,9 @@ class RoomMemberHandler(object):
)
return stream_id
async def _is_host_in_room(self, current_state_ids: StateMap[str]) -> bool:
async def _is_host_in_room(
self, current_state_ids: Dict[Tuple[str, str], str]
) -> bool:
# Have we just created the room, and is this about to be the very
# first member event?
create_event_id = current_state_ids.get(("m.room.create", ""))
@@ -1110,7 +1042,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
return event_id, stream_id
# The room is too large. Leave.
requester = types.create_requester(user, None, False, False, None)
requester = types.create_requester(user, None, False, None)
await self.update_membership(
requester=requester, target=user, room_id=room_id, action="leave"
)

View File

@@ -54,7 +54,6 @@ class Saml2SessionData:
class SamlHandler:
def __init__(self, hs: "synapse.server.HomeServer"):
self.hs = hs
self._saml_client = Saml2Client(hs.config.saml2_sp_config)
self._auth = hs.get_auth()
self._auth_handler = hs.get_auth_handler()
@@ -134,14 +133,8 @@ class SamlHandler:
# the dict.
self.expire_sessions()
# Pull out the user-agent and IP from the request.
user_agent = request.requestHeaders.getRawHeaders(b"User-Agent", default=[b""])[
0
].decode("ascii", "surrogateescape")
ip_address = self.hs.get_ip_from_request(request)
user_id, current_session = await self._map_saml_response_to_user(
resp_bytes, relay_state, user_agent, ip_address
resp_bytes, relay_state
)
# Complete the interactive auth session or the login.
@@ -154,11 +147,7 @@ class SamlHandler:
await self._auth_handler.complete_sso_login(user_id, request, relay_state)
async def _map_saml_response_to_user(
self,
resp_bytes: str,
client_redirect_url: str,
user_agent: str,
ip_address: str,
self, resp_bytes: str, client_redirect_url: str
) -> Tuple[str, Optional[Saml2SessionData]]:
"""
Given a SAML response, retrieve the cached session and user for it.
@@ -166,8 +155,6 @@ class SamlHandler:
Args:
resp_bytes: The SAML response.
client_redirect_url: The redirect URL passed in by the client.
user_agent: The user agent of the client making the request.
ip_address: The IP address of the client making the request.
Returns:
Tuple of the user ID and SAML session associated with this response.
@@ -304,7 +291,6 @@ class SamlHandler:
localpart=localpart,
default_display_name=displayname,
bind_emails=emails,
user_agent_ips=(user_agent, ip_address),
)
await self._datastore.record_user_external_id(

View File

@@ -16,12 +16,13 @@
import logging
from typing import Any
from canonicaljson import json
from twisted.web.client import PartialDownloadError
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, LoginError, SynapseError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.util import json_decoder
logger = logging.getLogger(__name__)
@@ -116,7 +117,7 @@ class RecaptchaAuthChecker(UserInteractiveAuthChecker):
except PartialDownloadError as pde:
# Twisted is silly
data = pde.response
resp_body = json_decoder.decode(data.decode("utf-8"))
resp_body = json.loads(data.decode("utf-8"))
if "success" in resp_body:
# Note that we do NOT check the hostname here: we explicitly
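
This hunk (and several that follow) reverts synapse.util.json_decoder back to plain json.loads. The utility is essentially a module-level json.JSONDecoder instance, giving one shared place to configure decoding and skipping the per-call argument handling json.loads performs. A minimal sketch of the idea:

import json

# Built once at import time and reused by every caller.
json_decoder = json.JSONDecoder()

def parse_body(body: bytes) -> object:
    return json_decoder.decode(body.decode("utf-8"))
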

View File

@@ -19,7 +19,7 @@ import urllib
from io import BytesIO
import treq
from canonicaljson import encode_canonical_json
from canonicaljson import encode_canonical_json, json
from netaddr import IPAddress
from prometheus_client import Counter
from zope.interface import implementer, provider
@@ -47,7 +47,6 @@ from synapse.http import (
from synapse.http.proxyagent import ProxyAgent
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.util import json_decoder
from synapse.util.async_helpers import timeout_deferred
logger = logging.getLogger(__name__)
@@ -392,7 +391,7 @@ class SimpleHttpClient(object):
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json_decoder.decode(body.decode("utf-8"))
return json.loads(body.decode("utf-8"))
else:
raise HttpResponseException(
response.code, response.phrase.decode("ascii", errors="replace"), body
@@ -434,7 +433,7 @@ class SimpleHttpClient(object):
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json_decoder.decode(body.decode("utf-8"))
return json.loads(body.decode("utf-8"))
else:
raise HttpResponseException(
response.code, response.phrase.decode("ascii", errors="replace"), body
@@ -464,7 +463,7 @@ class SimpleHttpClient(object):
actual_headers.update(headers)
body = await self.get_raw(uri, args, headers=headers)
return json_decoder.decode(body.decode("utf-8"))
return json.loads(body.decode("utf-8"))
async def put_json(self, uri, json_body, args={}, headers=None):
""" Puts some json to the given URI.
@@ -507,7 +506,7 @@ class SimpleHttpClient(object):
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json_decoder.decode(body.decode("utf-8"))
return json.loads(body.decode("utf-8"))
else:
raise HttpResponseException(
response.code, response.phrase.decode("ascii", errors="replace"), body

View File

@@ -13,6 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import logging
import random
import time
@@ -25,7 +26,7 @@ from twisted.web.http import stringToDatetime
from twisted.web.http_headers import Headers
from synapse.logging.context import make_deferred_yieldable
from synapse.util import Clock, json_decoder
from synapse.util import Clock
from synapse.util.caches.ttlcache import TTLCache
from synapse.util.metrics import Measure
@@ -180,7 +181,7 @@ class WellKnownResolver(object):
if response.code != 200:
raise Exception("Non-200 response %s" % (response.code,))
parsed_body = json_decoder.decode(body.decode("utf-8"))
parsed_body = json.loads(body.decode("utf-8"))
logger.info("Response from .well-known: %s", parsed_body)
result = parsed_body["m.server"].encode("ascii")

View File

@@ -500,7 +500,7 @@ class RootOptionsRedirectResource(OptionsResource, RootRedirect):
pass
@implementer(interfaces.IPushProducer)
@implementer(interfaces.IPullProducer)
class _ByteProducer:
"""
Iteratively write bytes to the request.
@@ -515,64 +515,52 @@ class _ByteProducer:
):
self._request = request
self._iterator = iterator
self._paused = False
# Register the producer and start producing data.
self._request.registerProducer(self, True)
self.resumeProducing()
def start(self) -> None:
self._request.registerProducer(self, False)
def _send_data(self, data: List[bytes]) -> None:
"""
Send a list of bytes as a chunk of a response.
Send a list of strings as a response to the request.
"""
if not data:
return
self._request.write(b"".join(data))
def pauseProducing(self) -> None:
self._paused = True
def resumeProducing(self) -> None:
# We've stopped producing in the meantime (note that this might be
# re-entrant after calling write).
if not self._request:
return
self._paused = False
# Get the next chunk and write it to the request.
#
# The output of the JSON encoder is coalesced until min_chunk_size is
# reached. (This is because JSON encoders produce a very small output
# per iteration.)
#
# Note that buffer stores a list of bytes (instead of appending to
# bytes) to hopefully avoid many allocations.
buffer = []
buffered_bytes = 0
while buffered_bytes < self.min_chunk_size:
try:
data = next(self._iterator)
buffer.append(data)
buffered_bytes += len(data)
except StopIteration:
# The entire JSON object has been serialized, write any
# remaining data, finalize the producer and the request, and
# clean-up any references.
self._send_data(buffer)
self._request.unregisterProducer()
self._request.finish()
self.stopProducing()
return
# Write until there's backpressure telling us to stop.
while not self._paused:
# Get the next chunk and write it to the request.
#
# The output of the JSON encoder is buffered and coalesced until
# min_chunk_size is reached. This is because JSON encoders produce
# very small output per iteration and the Request object converts
# each call to write() to a separate chunk. Without this there would
# be an explosion in bytes written (e.g. b"{" becoming "1\r\n{\r\n").
#
# Note that buffer stores a list of bytes (instead of appending to
# bytes) to hopefully avoid many allocations.
buffer = []
buffered_bytes = 0
while buffered_bytes < self.min_chunk_size:
try:
data = next(self._iterator)
buffer.append(data)
buffered_bytes += len(data)
except StopIteration:
# The entire JSON object has been serialized, write any
# remaining data, finalize the producer and the request, and
# clean-up any references.
self._send_data(buffer)
self._request.unregisterProducer()
self._request.finish()
self.stopProducing()
return
self._send_data(buffer)
self._send_data(buffer)
def stopProducing(self) -> None:
# Clear a circular reference.
self._request = None
@@ -632,7 +620,8 @@ def respond_with_json(
if send_cors:
set_cors_headers(request)
_ByteProducer(request, encoder(json_object))
producer = _ByteProducer(request, encoder(json_object))
producer.start()
return NOT_DONE_YET
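
The producer hunk above swaps _ByteProducer between Twisted's two producer styles: the push style registers with registerProducer(self, True) and keeps writing until pauseProducing signals backpressure, while the pull style registers with False and has the reactor call resumeProducing once per chunk. A stripped-down push producer over an iterator of byte chunks, as a sketch of the first style:

from typing import Iterator

from twisted.internet import interfaces
from zope.interface import implementer

@implementer(interfaces.IPushProducer)
class IteratorProducer:
    """Writes chunks from `iterator` to `request`, honouring backpressure."""

    def __init__(self, request, iterator: Iterator[bytes]):
        self._request = request
        self._iterator = iterator
        self._paused = False
        # streaming=True marks this as a push producer.
        self._request.registerProducer(self, True)
        self.resumeProducing()

    def pauseProducing(self) -> None:
        self._paused = True

    def resumeProducing(self) -> None:
        self._paused = False
        # Keep writing until backpressure pauses us (note that write() can
        # re-enter pauseProducing) or the iterator is exhausted.
        while not self._paused and self._request is not None:
            try:
                chunk = next(self._iterator)
            except StopIteration:
                self._request.unregisterProducer()
                self._request.finish()
                self.stopProducing()
                return
            self._request.write(chunk)

    def stopProducing(self) -> None:
        # Clear the circular reference so the request can be collected.
        self._request = None
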

View File

@@ -17,8 +17,9 @@
import logging
from canonicaljson import json
from synapse.api.errors import Codes, SynapseError
from synapse.util import json_decoder
logger = logging.getLogger(__name__)
@@ -214,7 +215,7 @@ def parse_json_value_from_request(request, allow_empty_body=False):
return None
try:
content = json_decoder.decode(content_bytes.decode("utf-8"))
content = json.loads(content_bytes.decode("utf-8"))
except Exception as e:
logger.warning("Unable to parse JSON: %s", e)
raise SynapseError(400, "Content not JSON.", errcode=Codes.NOT_JSON)

View File

@@ -172,11 +172,11 @@ from functools import wraps
from typing import TYPE_CHECKING, Dict, Optional, Type
import attr
from canonicaljson import json
from twisted.internet import defer
from synapse.config import ConfigError
from synapse.util import json_decoder, json_encoder
if TYPE_CHECKING:
from synapse.http.site import SynapseRequest
@@ -499,9 +499,7 @@ def start_active_span_from_edu(
if opentracing is None:
return _noop_context_manager()
carrier = json_decoder.decode(edu_content.get("context", "{}")).get(
"opentracing", {}
)
carrier = json.loads(edu_content.get("context", "{}")).get("opentracing", {})
context = opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)
_references = [
opentracing.child_of(span_context_from_string(x))
@@ -692,7 +690,7 @@ def active_span_context_as_string():
opentracing.tracer.inject(
opentracing.tracer.active_span, opentracing.Format.TEXT_MAP, carrier
)
return json_encoder.encode(carrier)
return json.dumps(carrier)
@only_if_tracing
@@ -701,7 +699,7 @@ def span_context_from_string(carrier):
Returns:
The active span context decoded from a string.
"""
carrier = json_decoder.decode(carrier)
carrier = json.loads(carrier)
return opentracing.tracer.extract(opentracing.Format.TEXT_MAP, carrier)

View File

@@ -175,7 +175,7 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
It returns a Deferred which completes when the function completes, but it doesn't
follow the synapse logcontext rules, which makes it appropriate for passing to
clock.looping_call and friends (or for firing-and-forgetting in the middle of a
normal synapse async function).
normal synapse inlineCallbacks function).
Args:
desc: a description for this background process type

View File

@@ -167,10 +167,8 @@ class ModuleApi(object):
external_id: id on that system
user_id: complete mxid that it is mapped to
"""
return defer.ensureDeferred(
self._store.record_user_external_id(
auth_provider_id, remote_user_id, registered_user_id
)
return self._store.record_user_external_id(
auth_provider_id, remote_user_id, registered_user_id
)
def generate_short_term_login_token(
@@ -225,9 +223,7 @@ class ModuleApi(object):
Returns:
Deferred[object]: result of func
"""
return defer.ensureDeferred(
self._store.db_pool.runInteraction(desc, func, *args, **kwargs)
)
return self._store.db_pool.runInteraction(desc, func, *args, **kwargs)
def complete_sso_login(
self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str
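
The ModuleApi hunk above unwraps defer.ensureDeferred. The wrappers exist because the storage methods became coroutines in the async/await conversion while the module API promises third-party modules a plain Deferred (usable with addCallback and friends); ensureDeferred bridges the two. A self-contained sketch:

from twisted.internet import defer

async def _store_op(value):
    # Stand-in for an async storage method.
    return value

def record_something(value):
    # A bare coroutine would have to be awaited; wrapping it yields a
    # Deferred that legacy module code can chain callbacks onto.
    return defer.ensureDeferred(_store_op(value))

d = record_something(42)
d.addCallback(print)  # fires immediately here, since _store_op never awaits
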

View File

@@ -21,9 +21,9 @@ class SlavedIdTracker(object):
self.step = step
self._current = _load_current_id(db_conn, table, column, step)
for table, column in extra_tables:
self.advance(None, _load_current_id(db_conn, table, column))
self.advance(_load_current_id(db_conn, table, column))
def advance(self, instance_name, new_id):
def advance(self, new_id):
self._current = (max if self.step > 0 else min)(self._current, new_id)
def get_current_token(self):
@@ -33,11 +33,3 @@ class SlavedIdTracker(object):
int
"""
return self._current
def get_current_token_for_writer(self, instance_name: str) -> int:
"""Returns the position of the given writer.
For streams with single writers this is equivalent to
`get_current_token`.
"""
return self.get_current_token()
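
The tracker hunk above removes the per-writer API again: advance loses its instance_name argument and get_current_token_for_writer disappears. The shape being removed, roughly (names as in the hunk, body reconstructed as a sketch):

class SlavedIdTrackerSketch:
    """Read-only tracker for a stream position replicated from writers."""

    def __init__(self, current: int, step: int = 1):
        self._current = current
        self.step = step

    def advance(self, instance_name, new_id: int) -> None:
        # instance_name is unused for single-writer streams; it exists to
        # keep the signature uniform with multi-writer ID generators.
        self._current = (max if self.step > 0 else min)(self._current, new_id)

    def get_current_token(self) -> int:
        return self._current

    def get_current_token_for_writer(self, instance_name: str) -> int:
        # With a single writer, any writer's position is the stream's.
        return self.get_current_token()
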

View File

@@ -41,12 +41,12 @@ class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlaved
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == TagAccountDataStream.NAME:
self._account_data_id_gen.advance(instance_name, token)
self._account_data_id_gen.advance(token)
for row in rows:
self.get_tags_for_user.invalidate((row.user_id,))
self._account_data_stream_cache.entity_has_changed(row.user_id, token)
elif stream_name == AccountDataStream.NAME:
self._account_data_id_gen.advance(instance_name, token)
self._account_data_id_gen.advance(token)
for row in rows:
if not row.room_id:
self.get_global_account_data_by_type_for_user.invalidate(

View File

@@ -46,7 +46,7 @@ class SlavedDeviceInboxStore(DeviceInboxWorkerStore, BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == ToDeviceStream.NAME:
self._device_inbox_id_gen.advance(instance_name, token)
self._device_inbox_id_gen.advance(token)
for row in rows:
if row.entity.startswith("@"):
self._device_inbox_stream_cache.entity_has_changed(

View File

@@ -50,10 +50,10 @@ class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedSto
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == DeviceListsStream.NAME:
self._device_list_id_gen.advance(instance_name, token)
self._device_list_id_gen.advance(token)
self._invalidate_caches_for_devices(token, rows)
elif stream_name == UserSignatureStream.NAME:
self._device_list_id_gen.advance(instance_name, token)
self._device_list_id_gen.advance(token)
for row in rows:
self._user_signature_stream_cache.entity_has_changed(row.user_id, token)
return super().process_replication_rows(stream_name, instance_name, token, rows)

View File

@@ -40,7 +40,7 @@ class SlavedGroupServerStore(GroupServerWorkerStore, BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == GroupServerStream.NAME:
self._group_updates_id_gen.advance(instance_name, token)
self._group_updates_id_gen.advance(token)
for row in rows:
self._group_updates_stream_cache.entity_has_changed(row.user_id, token)

View File

@@ -44,7 +44,7 @@ class SlavedPresenceStore(BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == PresenceStream.NAME:
self._presence_id_gen.advance(instance_name, token)
self._presence_id_gen.advance(token)
for row in rows:
self.presence_stream_cache.entity_has_changed(row.user_id, token)
self._get_presence_for_user.invalidate((row.user_id,))

View File

@@ -14,7 +14,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams import PushRulesStream
from synapse.storage.databases.main.push_rule import PushRulesWorkerStore
@@ -22,15 +21,18 @@ from .events import SlavedEventStore
class SlavedPushRuleStore(SlavedEventStore, PushRulesWorkerStore):
def get_push_rules_stream_token(self):
return (
self._push_rules_stream_id_gen.get_current_token(),
self._stream_id_gen.get_current_token(),
)
def get_max_push_rules_stream_id(self):
return self._push_rules_stream_id_gen.get_current_token()
def process_replication_rows(self, stream_name, instance_name, token, rows):
# We assert this for the benefit of mypy
assert isinstance(self._push_rules_stream_id_gen, SlavedIdTracker)
if stream_name == PushRulesStream.NAME:
self._push_rules_stream_id_gen.advance(instance_name, token)
self._push_rules_stream_id_gen.advance(token)
for row in rows:
self.get_push_rules_for_user.invalidate((row.user_id,))
self.get_push_rules_enabled_for_user.invalidate((row.user_id,))

View File

@@ -34,5 +34,5 @@ class SlavedPusherStore(PusherWorkerStore, BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == PushersStream.NAME:
self._pushers_id_gen.advance(instance_name, token)
self._pushers_id_gen.advance(token)
return super().process_replication_rows(stream_name, instance_name, token, rows)

View File

@@ -46,7 +46,7 @@ class SlavedReceiptsStore(ReceiptsWorkerStore, BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == ReceiptsStream.NAME:
self._receipts_id_gen.advance(instance_name, token)
self._receipts_id_gen.advance(token)
for row in rows:
self.invalidate_caches_for_receipt(
row.room_id, row.receipt_type, row.user_id

View File

@@ -33,6 +33,6 @@ class RoomStore(RoomWorkerStore, BaseSlavedStore):
def process_replication_rows(self, stream_name, instance_name, token, rows):
if stream_name == PublicRoomsStream.NAME:
self._public_room_id_gen.advance(instance_name, token)
self._public_room_id_gen.advance(token)
return super().process_replication_rows(stream_name, instance_name, token, rows)

View File

@@ -21,7 +21,9 @@ import abc
import logging
from typing import Tuple, Type
from synapse.util import json_decoder, json_encoder
from canonicaljson import json
from synapse.util import json_encoder as _json_encoder
logger = logging.getLogger(__name__)
@@ -123,7 +125,7 @@ class RdataCommand(Command):
stream_name,
instance_name,
None if token == "batch" else int(token),
json_decoder.decode(row_json),
json.loads(row_json),
)
def to_line(self):
@@ -132,7 +134,7 @@ class RdataCommand(Command):
self.stream_name,
self.instance_name,
str(self.token) if self.token is not None else "batch",
json_encoder.encode(self.row),
_json_encoder.encode(self.row),
)
)
@@ -357,7 +359,7 @@ class UserIpCommand(Command):
def from_line(cls, line):
user_id, jsn = line.split(" ", 1)
access_token, ip, user_agent, device_id, last_seen = json_decoder.decode(jsn)
access_token, ip, user_agent, device_id, last_seen = json.loads(jsn)
return cls(user_id, access_token, ip, user_agent, device_id, last_seen)
@@ -365,7 +367,7 @@ class UserIpCommand(Command):
return (
self.user_id
+ " "
+ json_encoder.encode(
+ _json_encoder.encode(
(
self.access_token,
self.ip,

View File

@@ -352,7 +352,7 @@ class PushRulesStream(Stream):
)
def _current_token(self, instance_name: str) -> int:
push_rules_token = self.store.get_max_push_rules_stream_id()
push_rules_token, _ = self.store.get_push_rules_stream_token()
return push_rules_token
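
This hunk reverts get_max_push_rules_stream_id() to the combined get_push_rules_stream_token(), whose tuple most callers unpacked only to discard half of it. A sketch of the two shapes side by side (the stored attributes are stand-ins for the underlying ID generators):

from typing import Tuple

class PushRulesTokenSketch:
    def __init__(self, push_rules_id: int, event_stream_id: int):
        self._push_rules_id = push_rules_id
        self._event_stream_id = event_stream_id

    def get_push_rules_stream_token(self) -> Tuple[int, int]:
        # Combined form: callers write `token, _ = ...` and drop the rest.
        return self._push_rules_id, self._event_stream_id

    def get_max_push_rules_stream_id(self) -> int:
        # Split form: just the position the push-rules stream needs.
        return self._push_rules_id
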
@@ -405,7 +405,7 @@ class CachesStream(Stream):
store = hs.get_datastore()
super().__init__(
hs.get_instance_name(),
store.get_cache_stream_token_for_writer,
store.get_cache_stream_token,
store.get_all_updated_caches,
)

View File

@@ -1,16 +0,0 @@
<html>
<head></head>
<body>
<!--Use a hidden form to resubmit the information necessary to reset the password-->
<form action="/_matrix/client/unstable/password_reset/{{ medium }}/submit_token_confirm" method="post">
<input type="hidden" name="sid" value="{{ sid }}">
<input type="hidden" name="token" value="{{ token }}">
<input type="hidden" name="client_secret" value="{{ client_secret }}">
<p>You have requested to <strong>reset your Matrix account password</strong>. Click the link below to confirm this action. <br /><br />
If you did not mean to do this, please close this page and your password will not be changed.</p>
<p><button type="submit">Confirm changing my password</button></p>
</form>
</body>
</html>

View File

@@ -316,9 +316,6 @@ class JoinRoomAliasServlet(RestServlet):
join_rules_event = room_state.get((EventTypes.JoinRules, ""))
if join_rules_event:
if not (join_rules_event.content.get("join_rule") == JoinRules.PUBLIC):
# update_membership with an action of "invite" can raise a
# ShadowBanError. This is not handled since it is assumed that
# an admin isn't going to call this API with a shadow-banned user.
await self.room_member_handler.update_membership(
requester=requester,
target=fake_requester.user,

View File

@@ -73,7 +73,6 @@ class UsersRestServletV2(RestServlet):
The parameters `from` and `limit` are used only for pagination.
By default, a `limit` of 100 is used.
The parameter `user_id` can be used to filter by user id.
The parameter `name` can be used to filter by user id or display name.
The parameter `guests` can be used to exclude guest users.
The parameter `deactivated` can be used to include deactivated users.
"""
@@ -90,12 +89,11 @@ class UsersRestServletV2(RestServlet):
start = parse_integer(request, "from", default=0)
limit = parse_integer(request, "limit", default=100)
user_id = parse_string(request, "user_id", default=None)
name = parse_string(request, "name", default=None)
guests = parse_boolean(request, "guests", default=True)
deactivated = parse_boolean(request, "deactivated", default=False)
users, total = await self.store.get_users_paginate(
start, limit, user_id, name, guests, deactivated
start, limit, user_id, guests, deactivated
)
ret = {"users": users, "total": total}
if len(users) >= limit:
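
The admin API hunk above drops the `name` parameter and the matching argument to get_users_paginate. In terms of behaviour, `user_id` filters on the mxid alone while `name` also matches display names; a pure-Python illustration of those semantics (not the actual storage query):

def matches_user_filters(user, user_id_filter=None, name_filter=None):
    # `user` is a dict with at least "user_id" and optionally "displayname".
    if user_id_filter and user_id_filter not in user["user_id"]:
        return False
    if name_filter:
        needle = name_filter.lower()
        haystacks = (user["user_id"], user.get("displayname") or "")
        if not any(needle in h.lower() for h in haystacks):
            return False
    return True
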

View File

@@ -159,7 +159,7 @@ class PushRuleRestServlet(RestServlet):
return 200, {}
def notify_user(self, user_id):
stream_id = self.store.get_max_push_rules_stream_id()
stream_id, _ = self.store.get_push_rules_stream_token()
self.notifier.on_new_event("push_rules_key", stream_id, users=[user_id])
async def set_rule_attr(self, user_id, spec, val):

Some files were not shown because too many files have changed in this diff.