Compare commits
109 Commits
hhs-5 ... matthew/de
| Author | SHA1 | Date |
|---|---|---|
| | 1ef1b716e2 | |
| | 9498cd3e7b | |
| | c7503f8f33 | |
| | 4ff8486f0f | |
| | 2669e494e0 | |
| | b6d8a808a4 | |
| | 0cb5d34756 | |
| | 650761666d | |
| | aa2a4b4b42 | |
| | 022469d819 | |
| | 45d06c754a | |
| | dbd0821c43 | |
| | 0476852fc6 | |
| | 1d11d9323d | |
| | 261e4f2542 | |
| | 11728561f3 | |
| | 9d57abcadd | |
| | cb0bbde981 | |
| | abc97bd1de | |
| | ee238254a0 | |
| | 0125b5d002 | |
| | fe265fe990 | |
| | 7735eee41d | |
| | 3d0faa39fb | |
| | fd28d13e19 | |
| | d18731e252 | |
| | 81beae30b8 | |
| | 11f1bace3c | |
| | 1e8cfc9e77 | |
| | 488ed3e444 | |
| | c3ec84dbcd | |
| | 0783801659 | |
| | 9f2fd29c14 | |
| | 6372dff771 | |
| | b3e346f40c | |
| | fb47ce3e6a | |
| | debf04556b | |
| | 907a62df28 | |
| | 41b987cbc5 | |
| | 5c74ab4064 | |
| | 06820250c9 | |
| | 383c4ae59c | |
| | f639ac143d | |
| | ad0424bab0 | |
| | 2992125561 | |
| | ef56b6e27c | |
| | 53d6245529 | |
| | 25e471dac3 | |
| | 76fca1730e | |
| | 32e4420a66 | |
| | 79b2583f1b | |
| | 8a24c4eee5 | |
| | f93cb7410d | |
| | 50d5a97c1b | |
| | c06932a029 | |
| | 3a62cacfb0 | |
| | 4d55b16faa | |
| | 105709bf32 | |
| | d7fad867fa | |
| | 8fddcf703e | |
| | e2adb360eb | |
| | 47ed4a4aa7 | |
| | 7fafa838ae | |
| | de341bec1b | |
| | 643c89d497 | |
| | 6554253f48 | |
| | 3add16df49 | |
| | dde01efbcb | |
| | 22e416b726 | |
| | b4b7c80181 | |
| | 5fc3477fd3 | |
| | 8743f42b49 | |
| | 7285afa4be | |
| | b22a53e357 | |
| | 3c446d0a81 | |
| | 240e940c3f | |
| | 969ed2e49d | |
| | 1147ce7e18 | |
| | 0d2b7fdcec | |
| | 4e12b10c7c | |
| | e654230a51 | |
| | ef5193e0cb | |
| | 7b3959c7f3 | |
| | 2e4a6c5aab | |
| | e3eb2cfe8b | |
| | 5c341c99f6 | |
| | 739d3500fe | |
| | 0e2d70e101 | |
| | 82c4fd7226 | |
| | e446077478 | |
| | d82c89ac22 | |
| | 75b25b3f1f | |
| | 1df10d8814 | |
| | 8f9340d248 | |
| | c5034cd4b0 | |
| | f7f937d051 | |
| | e52b5d94a9 | |
| | d90f27a21f | |
| | 03cf9710e3 | |
| | 1dcdd8d568 | |
| | 4344fb1faf | |
| | 846577ebde | |
| | 3869981227 | |
| | fa80b492a5 | |
| | c776c52eed | |
| | b424c16f50 | |
| | 313a489fc9 | |
| | 4b090cb273 | |
| | 3f79378d4b | |
@@ -35,6 +35,10 @@ matrix:
- python: 3.6
env: TOX_ENV=check-newsfragment

allow_failures:
- python: 2.7
env: TOX_ENV=py27-postgres TRIAL_FLAGS="-j 4"

install:
- pip install tox
@@ -59,10 +59,9 @@ To create a changelog entry, make a new file in the ``changelog.d``
file named in the format of ``PRnumber.type``. The type can be
one of ``feature``, ``bugfix``, ``removal`` (also used for
deprecations), or ``misc`` (for internal-only changes). The content of
the file is your changelog entry, which can contain Markdown
formatting. Adding credits to the changelog is encouraged, we value
your contributions and would like to have you shouted out in the
release notes!
the file is your changelog entry, which can contain RestructuredText
formatting. A note of contributors is welcomed in changelogs for
non-misc changes (the content of misc changes is not displayed).

For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
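To make the changelog convention above concrete, here is a minimal sketch (illustrative only, not part of this diff) that creates the example entry the text mentions; the PR number 1234 and the entry wording come from the example in the text, not from a real PR:

```python
# Minimal sketch, not part of this diff: create the changelog entry described
# above for the example PR #1234. The filename format is <PRnumber>.<type>.
from pathlib import Path

entry = "Example bugfix entry; real entries describe the change in one sentence."
Path("changelog.d").mkdir(exist_ok=True)
Path("changelog.d/1234.bugfix").write_text(entry + "\n")
```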
@@ -167,6 +167,11 @@ Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
https://hub.docker.com/r/avhost/docker-matrix/tags/

Also, Martin Giess has created an auto-deployment process with vagrant/ansible,
tested with VirtualBox/AWS/DigitalOcean - see
https://github.com/EMnify/matrix-synapse-auto-deploy
for details.

Configuring synapse
-------------------
@@ -1 +0,0 @@
Removed the link to the unmaintained matrix-synapse-auto-deploy project from the readme.
@@ -1 +0,0 @@
Support profile API endpoints on workers
@@ -1 +0,0 @@
Refactor state module to support multiple room versions
@@ -1 +0,0 @@
Server notices for resource limit blocking
@@ -1 +0,0 @@
Fix error collecting prometheus metrics when run on dedicated thread due to threading concurrency issues
@@ -1 +0,0 @@
Allow guests to use /rooms/:roomId/event/:eventId
@@ -1 +0,0 @@
The synapse.storage module has been ported to Python 3.
@@ -1 +0,0 @@
Split the state_group_cache into member and non-member state events (and so speed up LL /sync)
@@ -1 +0,0 @@
Log failure to authenticate remote servers as warnings (without stack traces)
@@ -1 +0,0 @@
The CONTRIBUTING guidelines have been updated to mention our use of Markdown and that .misc files have content.
@@ -1 +0,0 @@
Reference the need for an HTTP replication port when using the federation_reader worker
@@ -1 +0,0 @@
Fix minor spelling error in federation client documentation.
@@ -1 +0,0 @@
Remove redundant state resolution function
@@ -1 +0,0 @@
The test suite now passes on PostgreSQL.
@@ -1 +0,0 @@
Fix MAU cache invalidation due to missing yield
@@ -1 +0,0 @@
Fix bug where we resent "limit exceeded" server notices repeatedly
@@ -1 +0,0 @@
Add mau_trial_days config param, so that users only get counted as MAU after N days.
@@ -1 +0,0 @@
Require twisted 17.1 or later (fixes [#3741](https://github.com/matrix-org/synapse/issues/3741)).
@@ -1 +0,0 @@
Fix bug where we broke sync when using limit_usage_by_mau but hadn't configured server notices
@@ -1 +0,0 @@
Fix 'federation_domain_whitelist' such that an empty list correctly blocks all outbound federation traffic
@@ -1 +0,0 @@
Fix tagging of server notice rooms
@@ -1 +0,0 @@
Fix tagging of server notice rooms
@@ -1 +0,0 @@
Fix 'admin_uri' config variable and error parameter to be 'admin_contact' to match the spec.
@@ -1 +0,0 @@
Don't return non-LL-member state in incremental sync state blocks
@@ -1 +0,0 @@
Make sure that we close db connections opened during init
@@ -1 +0,0 @@
Fix bug in sending presence over federation
@@ -1 +0,0 @@
Fix bug where preserved threepid user comes to sign up and server is mau blocked
@@ -1 +0,0 @@
Improve human readable error messages for threepid registration/account update
@@ -74,7 +74,7 @@ replication endpoints that it's talking to on the main synapse process.
``worker_replication_port`` should point to the TCP replication listener port and
``worker_replication_http_port`` should point to the HTTP replication port.

Currently, the ``event_creator`` and ``federation_reader`` workers require specifying
Currently, only the ``event_creator`` worker requires specifying
``worker_replication_http_port``.

For instance::

@@ -265,7 +265,6 @@ Handles some event creation. It can handle REST endpoints matching::

^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/

It will create events locally and then send them on to the main synapse
instance to be persisted and handled.
|
||||
$TOX_BIN/pip install 'pip>=10'
|
||||
|
||||
{ python synapse/python_dependencies.py
|
||||
echo lxml
|
||||
echo lxml psycopg2
|
||||
} | xargs $TOX_BIN/pip install
|
||||
|
||||
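For reference, the shell pipeline above boils down to roughly the following Python equivalent (illustrative only; the real script uses $TOX_BIN/pip, and the exact output format of python_dependencies.py is assumed to be whitespace-separated specifiers):

```python
# Rough, illustrative equivalent of the shell pipeline above: collect the
# dependency specifiers printed by synapse/python_dependencies.py, add the
# extra packages echoed by the script, and pass the lot to pip install.
import subprocess

deps = subprocess.check_output(
    ["python", "synapse/python_dependencies.py"], universal_newlines=True
).split()
deps += ["lxml", "psycopg2"]
subprocess.check_call(["pip", "install"] + deps)  # the script uses $TOX_BIN/pip
```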
res/templates-dinsic/mail-Vector.css (new file, +7 lines)
@@ -0,0 +1,7 @@
.header {
  border-bottom: 4px solid #e4f7ed ! important;
}

.notif_link a, .footer a {
  color: #76CFA6 ! important;
}
res/templates-dinsic/mail.css (new file, +156 lines)
@@ -0,0 +1,156 @@
|
||||
body {
|
||||
margin: 0px;
|
||||
}
|
||||
|
||||
pre, code {
|
||||
word-break: break-word;
|
||||
white-space: pre-wrap;
|
||||
}
|
||||
|
||||
#page {
|
||||
font-family: 'Open Sans', Helvetica, Arial, Sans-Serif;
|
||||
font-color: #454545;
|
||||
font-size: 12pt;
|
||||
width: 100%;
|
||||
padding: 20px;
|
||||
}
|
||||
|
||||
#inner {
|
||||
width: 640px;
|
||||
}
|
||||
|
||||
.header {
|
||||
width: 100%;
|
||||
height: 87px;
|
||||
color: #454545;
|
||||
border-bottom: 4px solid #e5e5e5;
|
||||
}
|
||||
|
||||
.logo {
|
||||
text-align: right;
|
||||
margin-left: 20px;
|
||||
}
|
||||
|
||||
.salutation {
|
||||
padding-top: 10px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.summarytext {
|
||||
}
|
||||
|
||||
.room {
|
||||
width: 100%;
|
||||
color: #454545;
|
||||
border-bottom: 1px solid #e5e5e5;
|
||||
}
|
||||
|
||||
.room_header td {
|
||||
padding-top: 38px;
|
||||
padding-bottom: 10px;
|
||||
border-bottom: 1px solid #e5e5e5;
|
||||
}
|
||||
|
||||
.room_name {
|
||||
vertical-align: middle;
|
||||
font-size: 18px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.room_header h2 {
|
||||
margin-top: 0px;
|
||||
margin-left: 75px;
|
||||
font-size: 20px;
|
||||
}
|
||||
|
||||
.room_avatar {
|
||||
width: 56px;
|
||||
line-height: 0px;
|
||||
text-align: center;
|
||||
vertical-align: middle;
|
||||
}
|
||||
|
||||
.room_avatar img {
|
||||
width: 48px;
|
||||
height: 48px;
|
||||
object-fit: cover;
|
||||
border-radius: 24px;
|
||||
}
|
||||
|
||||
.notif {
|
||||
border-bottom: 1px solid #e5e5e5;
|
||||
margin-top: 16px;
|
||||
padding-bottom: 16px;
|
||||
}
|
||||
|
||||
.historical_message .sender_avatar {
|
||||
opacity: 0.3;
|
||||
}
|
||||
|
||||
/* spell out opacity and historical_message class names for Outlook aka Word */
|
||||
.historical_message .sender_name {
|
||||
color: #e3e3e3;
|
||||
}
|
||||
|
||||
.historical_message .message_time {
|
||||
color: #e3e3e3;
|
||||
}
|
||||
|
||||
.historical_message .message_body {
|
||||
color: #c7c7c7;
|
||||
}
|
||||
|
||||
.historical_message td,
|
||||
.message td {
|
||||
padding-top: 10px;
|
||||
}
|
||||
|
||||
.sender_avatar {
|
||||
width: 56px;
|
||||
text-align: center;
|
||||
vertical-align: top;
|
||||
}
|
||||
|
||||
.sender_avatar img {
|
||||
margin-top: -2px;
|
||||
width: 32px;
|
||||
height: 32px;
|
||||
border-radius: 16px;
|
||||
}
|
||||
|
||||
.sender_name {
|
||||
display: inline;
|
||||
font-size: 13px;
|
||||
color: #a2a2a2;
|
||||
}
|
||||
|
||||
.message_time {
|
||||
text-align: right;
|
||||
width: 100px;
|
||||
font-size: 11px;
|
||||
color: #a2a2a2;
|
||||
}
|
||||
|
||||
.message_body {
|
||||
}
|
||||
|
||||
.notif_link td {
|
||||
padding-top: 10px;
|
||||
padding-bottom: 10px;
|
||||
font-weight: bold;
|
||||
}
|
||||
|
||||
.notif_link a, .footer a {
|
||||
color: #454545;
|
||||
text-decoration: none;
|
||||
}
|
||||
|
||||
.debug {
|
||||
font-size: 10px;
|
||||
color: #888;
|
||||
}
|
||||
|
||||
.footer {
|
||||
margin-top: 20px;
|
||||
text-align: center;
|
||||
}
|
||||
res/templates-dinsic/notif.html (new file, +45 lines)
@@ -0,0 +1,45 @@
|
||||
{% for message in notif.messages %}
|
||||
<tr class="{{ "historical_message" if message.is_historical else "message" }}">
|
||||
<td class="sender_avatar">
|
||||
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
|
||||
{% if message.sender_avatar_url %}
|
||||
<img alt="" class="sender_avatar" src="{{ message.sender_avatar_url|mxc_to_http(32,32) }}" />
|
||||
{% else %}
|
||||
{% if message.sender_hash % 3 == 0 %}
|
||||
<img class="sender_avatar" src="https://vector.im/beta/img/76cfa6.png" />
|
||||
{% elif message.sender_hash % 3 == 1 %}
|
||||
<img class="sender_avatar" src="https://vector.im/beta/img/50e2c2.png" />
|
||||
{% else %}
|
||||
<img class="sender_avatar" src="https://vector.im/beta/img/f4c371.png" />
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
</td>
|
||||
<td class="message_contents">
|
||||
{% if loop.index0 == 0 or notif.messages[loop.index0 - 1].sender_name != notif.messages[loop.index0].sender_name %}
|
||||
<div class="sender_name">{% if message.msgtype == "m.emote" %}*{% endif %} {{ message.sender_name }}</div>
|
||||
{% endif %}
|
||||
<div class="message_body">
|
||||
{% if message.msgtype == "m.text" %}
|
||||
{{ message.body_text_html }}
|
||||
{% elif message.msgtype == "m.emote" %}
|
||||
{{ message.body_text_html }}
|
||||
{% elif message.msgtype == "m.notice" %}
|
||||
{{ message.body_text_html }}
|
||||
{% elif message.msgtype == "m.image" %}
|
||||
<img src="{{ message.image_url|mxc_to_http(640, 480, scale) }}" />
|
||||
{% elif message.msgtype == "m.file" %}
|
||||
<span class="filename">{{ message.body_text_plain }}</span>
|
||||
{% endif %}
|
||||
</div>
|
||||
</td>
|
||||
<td class="message_time">{{ message.ts|format_ts("%H:%M") }}</td>
|
||||
</tr>
|
||||
{% endfor %}
|
||||
<tr class="notif_link">
|
||||
<td></td>
|
||||
<td>
|
||||
<a href="{{ notif.link }}">Voir {{ room.title }}</a>
|
||||
</td>
|
||||
<td></td>
|
||||
</tr>
|
||||
res/templates-dinsic/notif.txt (new file, +16 lines)
@@ -0,0 +1,16 @@
{% for message in notif.messages %}
{% if message.msgtype == "m.emote" %}* {% endif %}{{ message.sender_name }} ({{ message.ts|format_ts("%H:%M") }})
{% if message.msgtype == "m.text" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.emote" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.notice" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.image" %}
{{ message.body_text_plain }}
{% elif message.msgtype == "m.file" %}
{{ message.body_text_plain }}
{% endif %}
{% endfor %}

Voir {{ room.title }} à {{ notif.link }}
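As a hedged illustration of how this plain-text template gets used, the snippet below renders it with Jinja2 and made-up data; the data shapes are inferred from the variables the template references, not taken from this diff, and Synapse supplies richer objects in practice:

```python
# Illustrative sketch only: render the notif.txt template above with Jinja2.
# The notif/room dicts are invented to match the template's variables.
import datetime

from jinja2 import Environment, FileSystemLoader


def format_ts(ts, fmt):
    # Simplified stand-in for Synapse's format_ts filter (ts is in milliseconds).
    return datetime.datetime.fromtimestamp(ts / 1000.0).strftime(fmt)


env = Environment(loader=FileSystemLoader("res/templates-dinsic"))
env.filters["format_ts"] = format_ts

notif = {
    "link": "https://matrix.to/#/!room:example.com",  # hypothetical link
    "messages": [
        {"msgtype": "m.text", "sender_name": "Alice",
         "ts": 1535000000000, "body_text_plain": "Bonjour !"},
    ],
}
room = {"title": "Exemple"}

print(env.get_template("notif.txt").render(notif=notif, room=room))
```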
res/templates-dinsic/notif_mail.html (new file, +55 lines)
@@ -0,0 +1,55 @@
|
||||
<!doctype html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<style type="text/css">
|
||||
{% include 'mail.css' without context %}
|
||||
{% include "mail-%s.css" % app_name ignore missing without context %}
|
||||
</style>
|
||||
</head>
|
||||
<body>
|
||||
<table id="page">
|
||||
<tr>
|
||||
<td> </td>
|
||||
<td id="inner">
|
||||
<table class="header">
|
||||
<tr>
|
||||
<td>
|
||||
<div class="salutation">Bonjour {{ user_display_name }},</div>
|
||||
<div class="summarytext">{{ summary_text }}</div>
|
||||
</td>
|
||||
<td class="logo">
|
||||
{% if app_name == "Riot" %}
|
||||
<img src="http://matrix.org/img/riot-logo-email.png" width="83" height="83" alt="[Riot]"/>
|
||||
{% elif app_name == "Vector" %}
|
||||
<img src="http://matrix.org/img/vector-logo-email.png" width="64" height="83" alt="[Vector]"/>
|
||||
{% else %}
|
||||
<img src="http://matrix.org/img/matrix-120x51.png" width="120" height="51" alt="[matrix]"/>
|
||||
{% endif %}
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
{% for room in rooms %}
|
||||
{% include 'room.html' with context %}
|
||||
{% endfor %}
|
||||
<div class="footer">
|
||||
<a href="{{ unsubscribe_link }}">Se désinscrire</a>
|
||||
<br/>
|
||||
<br/>
|
||||
<div class="debug">
|
||||
Sending email at {{ reason.now|format_ts("%c") }} due to activity in room {{ reason.room_name }} because
|
||||
an event was received at {{ reason.received_at|format_ts("%c") }}
|
||||
which is more than {{ "%.1f"|format(reason.delay_before_mail_ms / (60*1000)) }} ({{ reason.delay_before_mail_ms }}) mins ago,
|
||||
{% if reason.last_sent_ts %}
|
||||
and the last time we sent a mail for this room was {{ reason.last_sent_ts|format_ts("%c") }},
|
||||
which is more than {{ "%.1f"|format(reason.throttle_ms / (60*1000)) }} (current throttle_ms) mins ago.
|
||||
{% else %}
|
||||
and we don't have a last time we sent a mail for this room.
|
||||
{% endif %}
|
||||
</div>
|
||||
</div>
|
||||
</td>
|
||||
<td> </td>
|
||||
</tr>
|
||||
</table>
|
||||
</body>
|
||||
</html>
|
||||
res/templates-dinsic/notif_mail.txt (new file, +10 lines)
@@ -0,0 +1,10 @@
Bonjour {{ user_display_name }},

{{ summary_text }}

{% for room in rooms %}
{% include 'room.txt' with context %}
{% endfor %}

Vous pouvez désactiver ces notifications en cliquant ici {{ unsubscribe_link }}
res/templates-dinsic/room.html (new file, +33 lines)
@@ -0,0 +1,33 @@
|
||||
<table class="room">
|
||||
<tr class="room_header">
|
||||
<td class="room_avatar">
|
||||
{% if room.avatar_url %}
|
||||
<img alt="" src="{{ room.avatar_url|mxc_to_http(48,48) }}" />
|
||||
{% else %}
|
||||
{% if room.hash % 3 == 0 %}
|
||||
<img alt="" src="https://vector.im/beta/img/76cfa6.png" />
|
||||
{% elif room.hash % 3 == 1 %}
|
||||
<img alt="" src="https://vector.im/beta/img/50e2c2.png" />
|
||||
{% else %}
|
||||
<img alt="" src="https://vector.im/beta/img/f4c371.png" />
|
||||
{% endif %}
|
||||
{% endif %}
|
||||
</td>
|
||||
<td class="room_name" colspan="2">
|
||||
{{ room.title }}
|
||||
</td>
|
||||
</tr>
|
||||
{% if room.invite %}
|
||||
<tr>
|
||||
<td></td>
|
||||
<td>
|
||||
<a href="{{ room.link }}">Rejoindre la conversation.</a>
|
||||
</td>
|
||||
<td></td>
|
||||
</tr>
|
||||
{% else %}
|
||||
{% for notif in room.notifs %}
|
||||
{% include 'notif.html' with context %}
|
||||
{% endfor %}
|
||||
{% endif %}
|
||||
</table>
|
||||
res/templates-dinsic/room.txt (new file, +9 lines)
@@ -0,0 +1,9 @@
{{ room.title }}

{% if room.invite %}
Vous avez été invité, rejoignez la conversation en cliquant sur le lien suivant {{ room.link }}
{% else %}
{% for notif in room.notifs %}
{% include 'notif.txt' with context %}
{% endfor %}
{% endif %}
@@ -26,7 +26,6 @@ import synapse.types
|
||||
from synapse import event_auth
|
||||
from synapse.api.constants import EventTypes, JoinRules, Membership
|
||||
from synapse.api.errors import AuthError, Codes, ResourceLimitError
|
||||
from synapse.config.server import is_threepid_reserved
|
||||
from synapse.types import UserID
|
||||
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
|
||||
from synapse.util.caches.lrucache import LruCache
|
||||
@@ -776,56 +775,34 @@ class Auth(object):
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def check_auth_blocking(self, user_id=None, threepid=None):
|
||||
def check_auth_blocking(self, user_id=None):
|
||||
"""Checks if the user should be rejected for some external reason,
|
||||
such as monthly active user limiting or global disable flag
|
||||
|
||||
Args:
|
||||
user_id(str|None): If present, checks for presence against existing
|
||||
MAU cohort
|
||||
|
||||
threepid(dict|None): If present, checks for presence against configured
|
||||
reserved threepid. Used in cases where the user is trying register
|
||||
with a MAU blocked server, normally they would be rejected but their
|
||||
threepid is on the reserved list. user_id and
|
||||
threepid should never be set at the same time.
|
||||
"""
|
||||
|
||||
# Never fail an auth check for the server notices users
|
||||
# This can be a problem where event creation is prohibited due to blocking
|
||||
if user_id == self.hs.config.server_notices_mxid:
|
||||
return
|
||||
|
||||
if self.hs.config.hs_disabled:
|
||||
raise ResourceLimitError(
|
||||
403, self.hs.config.hs_disabled_message,
|
||||
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
|
||||
admin_contact=self.hs.config.admin_contact,
|
||||
errcode=Codes.RESOURCE_LIMIT_EXCEED,
|
||||
admin_uri=self.hs.config.admin_uri,
|
||||
limit_type=self.hs.config.hs_disabled_limit_type
|
||||
)
|
||||
if self.hs.config.limit_usage_by_mau is True:
|
||||
assert not (user_id and threepid)
|
||||
|
||||
# If the user is already part of the MAU cohort or a trial user
|
||||
# If the user is already part of the MAU cohort
|
||||
if user_id:
|
||||
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
|
||||
if timestamp:
|
||||
return
|
||||
|
||||
is_trial = yield self.store.is_trial_user(user_id)
|
||||
if is_trial:
|
||||
return
|
||||
elif threepid:
|
||||
# If the user does not exist yet, but is signing up with a
|
||||
# reserved threepid then pass auth check
|
||||
if is_threepid_reserved(self.hs.config, threepid):
|
||||
return
|
||||
# Else if there is no room in the MAU bucket, bail
|
||||
current_mau = yield self.store.get_monthly_active_count()
|
||||
if current_mau >= self.hs.config.max_mau_value:
|
||||
raise ResourceLimitError(
|
||||
403, "Monthly Active User Limit Exceeded",
|
||||
admin_contact=self.hs.config.admin_contact,
|
||||
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
|
||||
|
||||
admin_uri=self.hs.config.admin_uri,
|
||||
errcode=Codes.RESOURCE_LIMIT_EXCEED,
|
||||
limit_type="monthly_active_user"
|
||||
)
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2017 Vector Creations Ltd
|
||||
# Copyright 2018 New Vector Ltd.
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -71,6 +71,7 @@ class EventTypes(object):
|
||||
CanonicalAlias = "m.room.canonical_alias"
|
||||
RoomAvatar = "m.room.avatar"
|
||||
GuestAccess = "m.room.guest_access"
|
||||
Encryption = "m.room.encryption"
|
||||
|
||||
# These are used for validation
|
||||
Message = "m.room.message"
|
||||
@@ -78,7 +79,6 @@ class EventTypes(object):
|
||||
Name = "m.room.name"
|
||||
|
||||
ServerACL = "m.room.server_acl"
|
||||
Pinned = "m.room.pinned_events"
|
||||
|
||||
|
||||
class RejectedReason(object):
|
||||
@@ -98,17 +98,9 @@ class ThirdPartyEntityKind(object):
|
||||
LOCATION = "location"
|
||||
|
||||
|
||||
class RoomVersions(object):
|
||||
V1 = "1"
|
||||
VDH_TEST = "vdh-test-version"
|
||||
|
||||
|
||||
# the version we will give rooms which are created on this server
|
||||
DEFAULT_ROOM_VERSION = RoomVersions.V1
|
||||
DEFAULT_ROOM_VERSION = "1"
|
||||
|
||||
# vdh-test-version is a placeholder to get room versioning support working and tested
|
||||
# until we have a working v2.
|
||||
KNOWN_ROOM_VERSIONS = {RoomVersions.V1, RoomVersions.VDH_TEST}
|
||||
|
||||
ServerNoticeMsgType = "m.server_notice"
|
||||
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
|
||||
KNOWN_ROOM_VERSIONS = {"1", "vdh-test-version"}
|
||||
|
||||
@@ -56,7 +56,7 @@ class Codes(object):
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"
RESOURCE_LIMIT_EXCEEDED = "M_RESOURCE_LIMIT_EXCEEDED"
RESOURCE_LIMIT_EXCEED = "M_RESOURCE_LIMIT_EXCEED"
UNSUPPORTED_ROOM_VERSION = "M_UNSUPPORTED_ROOM_VERSION"
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"

@@ -238,11 +238,11 @@ class ResourceLimitError(SynapseError):
"""
def __init__(
self, code, msg,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=None,
errcode=Codes.RESOURCE_LIMIT_EXCEED,
admin_uri=None,
limit_type=None,
):
self.admin_contact = admin_contact
self.admin_uri = admin_uri
self.limit_type = limit_type
super(ResourceLimitError, self).__init__(code, msg, errcode=errcode)

@@ -250,7 +250,7 @@ class ResourceLimitError(SynapseError):
return cs_error(
self.msg,
self.errcode,
admin_contact=self.admin_contact,
admin_uri=self.admin_uri,
limit_type=self.limit_type
)
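The two sides of this hunk disagree on the error-code spelling (M_RESOURCE_LIMIT_EXCEEDED vs M_RESOURCE_LIMIT_EXCEED) and on admin_contact vs admin_uri. As a hedged illustration, assuming the spec-aligned names, the serialized error a blocked client would receive looks roughly like this; the contact address is a placeholder:

```python
# Rough sketch of the JSON body produced by ResourceLimitError.error_dict()
# above, assuming the spec-aligned names; the admin address is hypothetical.
import json

error_body = {
    "error": "Monthly Active User Limit Exceeded",
    "errcode": "M_RESOURCE_LIMIT_EXCEEDED",
    "admin_contact": "mailto:admin@example.com",  # placeholder value
    "limit_type": "monthly_active_user",
}
print(json.dumps(error_body, indent=2))
```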
@@ -51,7 +51,10 @@ class AppserviceSlaveStore(
|
||||
|
||||
|
||||
class AppserviceServer(HomeServer):
|
||||
DATASTORE_CLASS = AppserviceSlaveStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = AppserviceSlaveStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -74,7 +74,10 @@ class ClientReaderSlavedStore(
|
||||
|
||||
|
||||
class ClientReaderServer(HomeServer):
|
||||
DATASTORE_CLASS = ClientReaderSlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = ClientReaderSlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -45,11 +45,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
|
||||
from synapse.replication.slave.storage.room import RoomStore
|
||||
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
|
||||
from synapse.replication.tcp.client import ReplicationClientHandler
|
||||
from synapse.rest.client.v1.profile import (
|
||||
ProfileAvatarURLRestServlet,
|
||||
ProfileDisplaynameRestServlet,
|
||||
ProfileRestServlet,
|
||||
)
|
||||
from synapse.rest.client.v1.room import (
|
||||
JoinRoomAliasServlet,
|
||||
RoomMembershipRestServlet,
|
||||
@@ -58,7 +53,6 @@ from synapse.rest.client.v1.room import (
|
||||
)
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.engines import create_engine
|
||||
from synapse.storage.user_directory import UserDirectoryStore
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
from synapse.util.logcontext import LoggingContext
|
||||
from synapse.util.manhole import manhole
|
||||
@@ -68,9 +62,6 @@ logger = logging.getLogger("synapse.app.event_creator")
|
||||
|
||||
|
||||
class EventCreatorSlavedStore(
|
||||
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
|
||||
# rather than going via the correct worker.
|
||||
UserDirectoryStore,
|
||||
DirectoryStore,
|
||||
SlavedTransactionStore,
|
||||
SlavedProfileStore,
|
||||
@@ -90,7 +81,10 @@ class EventCreatorSlavedStore(
|
||||
|
||||
|
||||
class EventCreatorServer(HomeServer):
|
||||
DATASTORE_CLASS = EventCreatorSlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = EventCreatorSlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
@@ -107,9 +101,6 @@ class EventCreatorServer(HomeServer):
|
||||
RoomMembershipRestServlet(self).register(resource)
|
||||
RoomStateEventRestServlet(self).register(resource)
|
||||
JoinRoomAliasServlet(self).register(resource)
|
||||
ProfileAvatarURLRestServlet(self).register(resource)
|
||||
ProfileDisplaynameRestServlet(self).register(resource)
|
||||
ProfileRestServlet(self).register(resource)
|
||||
resources.update({
|
||||
"/_matrix/client/r0": resource,
|
||||
"/_matrix/client/unstable": resource,
|
||||
|
||||
@@ -72,7 +72,10 @@ class FederationReaderSlavedStore(
|
||||
|
||||
|
||||
class FederationReaderServer(HomeServer):
|
||||
DATASTORE_CLASS = FederationReaderSlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = FederationReaderSlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -78,7 +78,10 @@ class FederationSenderSlaveStore(
|
||||
|
||||
|
||||
class FederationSenderServer(HomeServer):
|
||||
DATASTORE_CLASS = FederationSenderSlaveStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = FederationSenderSlaveStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -148,7 +148,10 @@ class FrontendProxySlavedStore(
|
||||
|
||||
|
||||
class FrontendProxyServer(HomeServer):
|
||||
DATASTORE_CLASS = FrontendProxySlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = FrontendProxySlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -62,7 +62,7 @@ from synapse.rest.key.v1.server_key_resource import LocalKey
|
||||
from synapse.rest.key.v2 import KeyApiV2Resource
|
||||
from synapse.rest.media.v0.content_repository import ContentRepoResource
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage import DataStore, are_all_users_on_domain
|
||||
from synapse.storage import are_all_users_on_domain
|
||||
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
|
||||
from synapse.storage.prepare_database import UpgradeDatabaseException, prepare_database
|
||||
from synapse.util.caches import CACHE_SIZE_FACTOR
|
||||
@@ -111,8 +111,6 @@ def build_resource_for_web_client(hs):
|
||||
|
||||
|
||||
class SynapseHomeServer(HomeServer):
|
||||
DATASTORE_CLASS = DataStore
|
||||
|
||||
def _listener_http(self, config, listener_config):
|
||||
port = listener_config["port"]
|
||||
bind_addresses = listener_config["bind_addresses"]
|
||||
@@ -358,13 +356,13 @@ def setup(config_options):
|
||||
logger.info("Preparing database: %s...", config.database_config['name'])
|
||||
|
||||
try:
|
||||
with hs.get_db_conn(run_new_connection=False) as db_conn:
|
||||
prepare_database(db_conn, database_engine, config=config)
|
||||
database_engine.on_new_connection(db_conn)
|
||||
db_conn = hs.get_db_conn(run_new_connection=False)
|
||||
prepare_database(db_conn, database_engine, config=config)
|
||||
database_engine.on_new_connection(db_conn)
|
||||
|
||||
hs.run_startup_checks(db_conn, database_engine)
|
||||
hs.run_startup_checks(db_conn, database_engine)
|
||||
|
||||
db_conn.commit()
|
||||
db_conn.commit()
|
||||
except UpgradeDatabaseException:
|
||||
sys.stderr.write(
|
||||
"\nFailed to upgrade database.\n"
|
||||
|
||||
@@ -60,7 +60,10 @@ class MediaRepositorySlavedStore(
|
||||
|
||||
|
||||
class MediaRepositoryServer(HomeServer):
|
||||
DATASTORE_CLASS = MediaRepositorySlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = MediaRepositorySlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -78,7 +78,10 @@ class PusherSlaveStore(
|
||||
|
||||
|
||||
class PusherServer(HomeServer):
|
||||
DATASTORE_CLASS = PusherSlaveStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = PusherSlaveStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def remove_pusher(self, app_id, push_key, user_id):
|
||||
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
|
||||
|
||||
@@ -249,7 +249,10 @@ class SynchrotronApplicationService(object):
|
||||
|
||||
|
||||
class SynchrotronServer(HomeServer):
|
||||
DATASTORE_CLASS = SynchrotronSlavedStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = SynchrotronSlavedStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -94,7 +94,10 @@ class UserDirectorySlaveStore(
|
||||
|
||||
|
||||
class UserDirectoryServer(HomeServer):
|
||||
DATASTORE_CLASS = UserDirectorySlaveStore
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
self.datastore = UserDirectorySlaveStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def _listen_http(self, listener_config):
|
||||
port = listener_config["port"]
|
||||
|
||||
@@ -33,7 +33,15 @@ class RegistrationConfig(Config):
|
||||
|
||||
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
|
||||
self.allowed_local_3pids = config.get("allowed_local_3pids", [])
|
||||
self.check_is_for_allowed_local_3pids = config.get(
|
||||
"check_is_for_allowed_local_3pids", None
|
||||
)
|
||||
self.allow_invited_3pids = config.get("allow_invited_3pids", False)
|
||||
|
||||
self.disable_3pid_changes = config.get("disable_3pid_changes", False)
|
||||
|
||||
self.registration_shared_secret = config.get("registration_shared_secret")
|
||||
self.register_mxid_from_3pid = config.get("register_mxid_from_3pid")
|
||||
|
||||
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
|
||||
self.trusted_third_party_id_servers = config["trusted_third_party_id_servers"]
|
||||
@@ -45,6 +53,15 @@ class RegistrationConfig(Config):
|
||||
|
||||
self.auto_join_rooms = config.get("auto_join_rooms", [])
|
||||
|
||||
self.disable_set_displayname = config.get("disable_set_displayname", False)
|
||||
self.disable_set_avatar_url = config.get("disable_set_avatar_url", False)
|
||||
|
||||
self.replicate_user_profiles_to = config.get("replicate_user_profiles_to", [])
|
||||
if not isinstance(self.replicate_user_profiles_to, list):
|
||||
self.replicate_user_profiles_to = [self.replicate_user_profiles_to, ]
|
||||
|
||||
self.chain_register = config.get("chain_register", None)
|
||||
|
||||
def default_config(self, **kwargs):
|
||||
registration_shared_secret = random_string_with_symbols(50)
|
||||
|
||||
@@ -60,9 +77,26 @@ class RegistrationConfig(Config):
|
||||
# - email
|
||||
# - msisdn
|
||||
|
||||
# Derive the user's matrix ID from a type of 3PID used when registering.
|
||||
# This overrides any matrix ID the user proposes when calling /register
|
||||
# The 3PID type should be present in registrations_require_3pid to avoid
|
||||
# users failing to register if they don't specify the right kind of 3pid.
|
||||
#
|
||||
# register_mxid_from_3pid: email
|
||||
|
||||
# Mandate that users are only allowed to associate certain formats of
|
||||
# 3PIDs with accounts on this server.
|
||||
#
|
||||
# Use an Identity Server to establish which 3PIDs are allowed to register?
|
||||
# Overrides allowed_local_3pids below.
|
||||
# check_is_for_allowed_local_3pids: matrix.org
|
||||
#
|
||||
# If you are using an IS you can also check whether that IS registers
|
||||
# pending invites for the given 3PID (and then allow it to sign up on
|
||||
# the platform):
|
||||
#
|
||||
# allow_invited_3pids: False
|
||||
#
|
||||
# allowed_local_3pids:
|
||||
# - medium: email
|
||||
# pattern: ".*@matrix\\.org"
|
||||
@@ -71,6 +105,11 @@ class RegistrationConfig(Config):
|
||||
# - medium: msisdn
|
||||
# pattern: "\\+44"
|
||||
|
||||
# If true, stop users from trying to change the 3PIDs associated with
|
||||
# their accounts.
|
||||
#
|
||||
# disable_3pid_changes: False
|
||||
|
||||
# If set, allows registration by anyone who also has the shared
|
||||
# secret, even if registration is otherwise disabled.
|
||||
registration_shared_secret: "%(registration_shared_secret)s"
|
||||
@@ -94,10 +133,32 @@ class RegistrationConfig(Config):
|
||||
- vector.im
|
||||
- riot.im
|
||||
|
||||
# If enabled, user IDs, display names and avatar URLs will be replicated
|
||||
# to this server whenever they change.
|
||||
# This is an experimental API currently implemented by sydent to support
|
||||
# cross-homeserver user directories.
|
||||
# replicate_user_profiles_to: example.com
|
||||
|
||||
# If specified, attempt to replay registrations on the given target
|
||||
# homeserver and identity server. The HS is authed via a given shared secret
|
||||
# chain_register:
|
||||
# hs: https://shadow.example.com
|
||||
# hs_shared_secret: 12u394refgbdhivsia
|
||||
# is: https://shadow-is.example.com
|
||||
|
||||
# If enabled, don't let users set their own display names/avatars
|
||||
# other than for the very first time (unless they are a server admin).
|
||||
# Useful when provisioning users based on the contents of a 3rd party
|
||||
# directory and to avoid ambiguities.
|
||||
#
|
||||
# disable_set_displayname: False
|
||||
# disable_set_avatar_url: False
|
||||
|
||||
# Users who register on this homeserver will automatically be joined
|
||||
# to these rooms
|
||||
#auto_join_rooms:
|
||||
# - "#example:example.com"
|
||||
|
||||
""" % locals()
|
||||
|
||||
def add_arguments(self, parser):
|
||||
|
||||
@@ -77,15 +77,10 @@ class ServerConfig(Config):
|
||||
self.max_mau_value = config.get(
|
||||
"max_mau_value", 0,
|
||||
)
|
||||
|
||||
self.mau_limits_reserved_threepids = config.get(
|
||||
"mau_limit_reserved_threepids", []
|
||||
)
|
||||
|
||||
self.mau_trial_days = config.get(
|
||||
"mau_trial_days", 0,
|
||||
)
|
||||
|
||||
# Options to disable HS
|
||||
self.hs_disabled = config.get("hs_disabled", False)
|
||||
self.hs_disabled_message = config.get("hs_disabled_message", "")
|
||||
@@ -93,7 +88,7 @@ class ServerConfig(Config):
|
||||
|
||||
# Admin uri to direct users at should their instance become blocked
|
||||
# due to resource constraints
|
||||
self.admin_contact = config.get("admin_contact", None)
|
||||
self.admin_uri = config.get("admin_uri", None)
|
||||
|
||||
# FIXME: federation_domain_whitelist needs sytests
|
||||
self.federation_domain_whitelist = None
|
||||
@@ -357,7 +352,7 @@ class ServerConfig(Config):
|
||||
# Homeserver blocking
|
||||
#
|
||||
# How to reach the server admin, used in ResourceLimitError
|
||||
# admin_contact: 'mailto:admin@server.com'
|
||||
# admin_uri: 'mailto:admin@server.com'
|
||||
#
|
||||
# Global block config
|
||||
#
|
||||
@@ -370,7 +365,6 @@ class ServerConfig(Config):
|
||||
# Enables monthly active user checking
|
||||
# limit_usage_by_mau: False
|
||||
# max_mau_value: 50
|
||||
# mau_trial_days: 2
|
||||
#
|
||||
# Sometimes the server admin will want to ensure certain accounts are
|
||||
# never blocked by mau checking. These accounts are specified here.
|
||||
@@ -404,23 +398,6 @@ class ServerConfig(Config):
|
||||
" service on the given port.")
|
||||
|
||||
|
||||
def is_threepid_reserved(config, threepid):
|
||||
"""Check the threepid against the reserved threepid config
|
||||
Args:
|
||||
config(ServerConfig) - to access server config attributes
|
||||
threepid(dict) - The threepid to test for
|
||||
|
||||
Returns:
|
||||
boolean Is the threepid undertest reserved_user
|
||||
"""
|
||||
|
||||
for tp in config.mau_limits_reserved_threepids:
|
||||
if (threepid['medium'] == tp['medium']
|
||||
and threepid['address'] == tp['address']):
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
def read_gc_thresholds(thresholds):
|
||||
"""Reads the three integer thresholds for garbage collection. Ensures that
|
||||
the thresholds are integers if thresholds are supplied.
|
||||
|
||||
@@ -23,11 +23,15 @@ class UserDirectoryConfig(Config):
|
||||
|
||||
def read_config(self, config):
|
||||
self.user_directory_search_all_users = False
|
||||
self.user_directory_defer_to_id_server = None
|
||||
user_directory_config = config.get("user_directory", None)
|
||||
if user_directory_config:
|
||||
self.user_directory_search_all_users = (
|
||||
user_directory_config.get("search_all_users", False)
|
||||
)
|
||||
self.user_directory_defer_to_id_server = (
|
||||
user_directory_config.get("defer_to_id_server", None)
|
||||
)
|
||||
|
||||
def default_config(self, config_dir_path, server_name, **kwargs):
|
||||
return """
|
||||
@@ -41,4 +45,9 @@ class UserDirectoryConfig(Config):
|
||||
#
|
||||
#user_directory:
|
||||
# search_all_users: false
|
||||
#
|
||||
# If this is set, user search will be delegated to this ID server instead
|
||||
# of synapse performing the search itself.
|
||||
# This is an experimental API.
|
||||
# defer_to_id_server: id.example.com
|
||||
"""
|
||||
|
||||
@@ -18,9 +18,7 @@ import logging
|
||||
from canonicaljson import json
|
||||
|
||||
from twisted.internet import defer, reactor
|
||||
from twisted.internet.error import ConnectError
|
||||
from twisted.internet.protocol import Factory
|
||||
from twisted.names.error import DomainError
|
||||
from twisted.web.http import HTTPClient
|
||||
|
||||
from synapse.http.endpoint import matrix_federation_endpoint
|
||||
@@ -49,14 +47,12 @@ def fetch_server_key(server_name, tls_client_options_factory, path=KEY_API_V1):
|
||||
server_response, server_certificate = yield protocol.remote_key
|
||||
defer.returnValue((server_response, server_certificate))
|
||||
except SynapseKeyClientError as e:
|
||||
logger.warn("Error getting key for %r: %s", server_name, e)
|
||||
logger.exception("Error getting key for %r" % (server_name,))
|
||||
if e.status.startswith("4"):
|
||||
# Don't retry for 4xx responses.
|
||||
raise IOError("Cannot get key for %r" % server_name)
|
||||
except (ConnectError, DomainError) as e:
|
||||
logger.warn("Error getting key for %r: %s", server_name, e)
|
||||
except Exception as e:
|
||||
logger.exception("Error getting key for %r", server_name)
|
||||
logger.exception(e)
|
||||
raise IOError("Cannot get key for %r" % server_name)
|
||||
|
||||
|
||||
|
||||
@@ -32,7 +32,7 @@ Events are replicated via a separate events stream.
|
||||
import logging
|
||||
from collections import namedtuple
|
||||
|
||||
from six import iteritems
|
||||
from six import iteritems, itervalues
|
||||
|
||||
from sortedcontainers import SortedDict
|
||||
|
||||
@@ -117,7 +117,7 @@ class FederationRemoteSendQueue(object):
|
||||
|
||||
user_ids = set(
|
||||
user_id
|
||||
for uids in self.presence_changed.values()
|
||||
for uids in itervalues(self.presence_changed)
|
||||
for user_id in uids
|
||||
)
|
||||
|
||||
|
||||
@@ -106,7 +106,7 @@ class TransportLayerClient(object):
|
||||
dest (str)
|
||||
room_id (str)
|
||||
event_tuples (list)
|
||||
limit (int)
|
||||
limt (int)
|
||||
|
||||
Returns:
|
||||
Deferred: Results in a dict received from the remote homeserver.
|
||||
|
||||
@@ -261,10 +261,10 @@ class BaseFederationServlet(object):
|
||||
except NoAuthenticationError:
|
||||
origin = None
|
||||
if self.REQUIRE_AUTH:
|
||||
logger.warn("authenticate_request failed: missing authentication")
|
||||
logger.exception("authenticate_request failed")
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.warn("authenticate_request failed: %s", e)
|
||||
except Exception:
|
||||
logger.exception("authenticate_request failed")
|
||||
raise
|
||||
|
||||
if origin:
|
||||
|
||||
@@ -33,6 +33,7 @@ class DeactivateAccountHandler(BaseHandler):
|
||||
self._device_handler = hs.get_device_handler()
|
||||
self._room_member_handler = hs.get_room_member_handler()
|
||||
self._identity_handler = hs.get_handlers().identity_handler
|
||||
self._profile_handler = hs.get_profile_handler()
|
||||
self.user_directory_handler = hs.get_user_directory_handler()
|
||||
|
||||
# Flag that indicates whether the process to part users from rooms is running
|
||||
@@ -94,6 +95,9 @@ class DeactivateAccountHandler(BaseHandler):
|
||||
|
||||
yield self.store.user_set_password_hash(user_id, None)
|
||||
|
||||
user = UserID.from_string(user_id)
|
||||
yield self._profile_handler.set_active(user, False)
|
||||
|
||||
# Add the user to a table of users pending deactivation (ie.
|
||||
# removal from all the rooms they're a member of)
|
||||
yield self.store.add_user_pending_deactivation(user_id)
|
||||
|
||||
@@ -291,9 +291,8 @@ class FederationHandler(BaseHandler):
|
||||
ev_ids, get_prev_content=False, check_redacted=False
|
||||
)
|
||||
|
||||
room_version = yield self.store.get_room_version(pdu.room_id)
|
||||
state_map = yield resolve_events_with_factory(
|
||||
room_version, state_groups, {pdu.event_id: pdu}, fetch
|
||||
state_groups, {pdu.event_id: pdu}, fetch
|
||||
)
|
||||
|
||||
state = (yield self.store.get_events(state_map.values())).values()
|
||||
@@ -1829,10 +1828,7 @@ class FederationHandler(BaseHandler):
|
||||
(d.type, d.state_key): d for d in different_events if d
|
||||
})
|
||||
|
||||
room_version = yield self.store.get_room_version(event.room_id)
|
||||
|
||||
new_state = yield self.state_handler.resolve_events(
|
||||
room_version,
|
||||
new_state = self.state_handler.resolve_events(
|
||||
[list(local_view.values()), list(remote_view.values())],
|
||||
event
|
||||
)
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -15,7 +16,9 @@
|
||||
|
||||
import logging
|
||||
|
||||
from twisted.internet import defer
|
||||
from signedjson.sign import sign_json
|
||||
|
||||
from twisted.internet import defer, reactor
|
||||
|
||||
from synapse.api.errors import (
|
||||
AuthError,
|
||||
@@ -26,22 +29,21 @@ from synapse.api.errors import (
|
||||
)
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.types import UserID, get_domain_from_id
|
||||
from synapse.util.logcontext import run_in_background
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class BaseProfileHandler(BaseHandler):
|
||||
"""Handles fetching and updating user profile information.
|
||||
class ProfileHandler(BaseHandler):
|
||||
PROFILE_UPDATE_MS = 60 * 1000
|
||||
PROFILE_UPDATE_EVERY_MS = 24 * 60 * 60 * 1000
|
||||
|
||||
BaseProfileHandler can be instantiated directly on workers and will
|
||||
delegate to master when necessary. The master process should use the
|
||||
subclass MasterProfileHandler
|
||||
"""
|
||||
PROFILE_REPLICATE_INTERVAL = 2 * 60 * 1000
|
||||
|
||||
def __init__(self, hs):
|
||||
super(BaseProfileHandler, self).__init__(hs)
|
||||
super(ProfileHandler, self).__init__(hs)
|
||||
|
||||
self.federation = hs.get_federation_client()
|
||||
hs.get_federation_registry().register_query_handler(
|
||||
@@ -50,6 +52,84 @@ class BaseProfileHandler(BaseHandler):
|
||||
|
||||
self.user_directory_handler = hs.get_user_directory_handler()
|
||||
|
||||
self.http_client = hs.get_simple_http_client()
|
||||
|
||||
if hs.config.worker_app is None:
|
||||
self.clock.looping_call(
|
||||
self._start_update_remote_profile_cache, self.PROFILE_UPDATE_MS,
|
||||
)
|
||||
|
||||
if len(self.hs.config.replicate_user_profiles_to) > 0:
|
||||
reactor.callWhenRunning(self._assign_profile_replication_batches)
|
||||
reactor.callWhenRunning(self._replicate_profiles)
|
||||
# Add a looping call to replicate_profiles: this handles retries
|
||||
# if the replication is unsuccessful when the user updated their
|
||||
# profile.
|
||||
self.clock.looping_call(
|
||||
self._replicate_profiles, self.PROFILE_REPLICATE_INTERVAL
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _assign_profile_replication_batches(self):
|
||||
"""If no profile replication has been done yet, allocate replication batch
|
||||
numbers to each profile to start the replication process.
|
||||
"""
|
||||
logger.info("Assigning profile batch numbers...")
|
||||
total = 0
|
||||
while True:
|
||||
assigned = yield self.store.assign_profile_batch()
|
||||
total += assigned
|
||||
if assigned == 0:
|
||||
break
|
||||
logger.info("Assigned %d profile batch numbers", total)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _replicate_profiles(self):
|
||||
"""If any profile data has been updated and not pushed to the replication targets,
|
||||
replicate it.
|
||||
"""
|
||||
host_batches = yield self.store.get_replication_hosts()
|
||||
latest_batch = yield self.store.get_latest_profile_replication_batch_number()
|
||||
if latest_batch is None:
|
||||
latest_batch = -1
|
||||
for repl_host in self.hs.config.replicate_user_profiles_to:
|
||||
if repl_host not in host_batches:
|
||||
host_batches[repl_host] = -1
|
||||
try:
|
||||
for i in xrange(host_batches[repl_host] + 1, latest_batch + 1):
|
||||
yield self._replicate_host_profile_batch(repl_host, i)
|
||||
except Exception:
|
||||
logger.exception(
|
||||
"Exception while replicating to %s: aborting for now", repl_host,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _replicate_host_profile_batch(self, host, batchnum):
|
||||
logger.info("Replicating profile batch %d to %s", batchnum, host)
|
||||
batch_rows = yield self.store.get_profile_batch(batchnum)
|
||||
batch = {
|
||||
UserID(r["user_id"], self.hs.hostname).to_string(): ({
|
||||
"display_name": r["displayname"],
|
||||
"avatar_url": r["avatar_url"],
|
||||
} if r["active"] else None) for r in batch_rows
|
||||
}
|
||||
|
||||
url = "https://%s/_matrix/identity/api/v1/replicate_profiles" % (host,)
|
||||
body = {
|
||||
"batchnum": batchnum,
|
||||
"batch": batch,
|
||||
"origin_server": self.hs.hostname,
|
||||
}
|
||||
signed_body = sign_json(body, self.hs.hostname, self.hs.config.signing_key[0])
|
||||
try:
|
||||
yield self.http_client.post_json_get_json(url, signed_body)
|
||||
yield self.store.update_replication_batch_for_host(host, batchnum)
|
||||
logger.info("Sucessfully replicated profile batch %d to %s", batchnum, host)
|
||||
except Exception:
|
||||
# This will get retried when the looping call next comes around
|
||||
logger.exception("Failed to replicate profile batch %d to %s", batchnum, host)
|
||||
raise
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_profile(self, user_id):
|
||||
target_user = UserID.from_string(user_id)
|
||||
@@ -149,19 +229,30 @@ class BaseProfileHandler(BaseHandler):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
|
||||
"""target_user is the user whose displayname is to be changed;
|
||||
auth_user is the user attempting to make this change."""
|
||||
"""target_user is the UserID whose displayname is to be changed;
|
||||
requester is the authenticated user attempting to make this change."""
|
||||
if not self.hs.is_mine(target_user):
|
||||
raise SynapseError(400, "User is not hosted on this Home Server")
|
||||
|
||||
if not by_admin and target_user != requester.user:
|
||||
if not by_admin and requester and target_user != requester.user:
|
||||
raise AuthError(400, "Cannot set another user's displayname")
|
||||
|
||||
if not by_admin and self.hs.config.disable_set_displayname:
|
||||
profile = yield self.store.get_profileinfo(target_user.localpart)
|
||||
if profile.display_name:
|
||||
raise SynapseError(400, "Changing displayname is disabled on this server")
|
||||
|
||||
if new_displayname == '':
|
||||
new_displayname = None
|
||||
|
||||
if len(self.hs.config.replicate_user_profiles_to) > 0:
|
||||
cur_batchnum = yield self.store.get_latest_profile_replication_batch_number()
|
||||
new_batchnum = 0 if cur_batchnum is None else cur_batchnum + 1
|
||||
else:
|
||||
new_batchnum = None
|
||||
|
||||
yield self.store.set_profile_displayname(
|
||||
target_user.localpart, new_displayname
|
||||
target_user.localpart, new_displayname, new_batchnum
|
||||
)
|
||||
|
||||
if self.hs.config.user_directory_search_all_users:
|
||||
@@ -170,7 +261,32 @@ class BaseProfileHandler(BaseHandler):
|
||||
target_user.to_string(), profile
|
||||
)
|
||||
|
||||
yield self._update_join_states(requester, target_user)
|
||||
if requester:
|
||||
yield self._update_join_states(requester, target_user)
|
||||
|
||||
# start a profile replication push
|
||||
run_in_background(self._replicate_profiles)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def set_active(self, target_user, active):
|
||||
"""
|
||||
Sets the 'active' flag on a user profile. If set to false, the user account is
|
||||
considered deactivated.
|
||||
Note that unlike set_displayname and set_avatar_url, this does *not* perform
|
||||
authorization checks! This is because the only place it's used currently is
|
||||
in account deactivation where we've already done these checks anyway.
|
||||
"""
|
||||
if len(self.hs.config.replicate_user_profiles_to) > 0:
|
||||
cur_batchnum = yield self.store.get_latest_profile_replication_batch_number()
|
||||
new_batchnum = 0 if cur_batchnum is None else cur_batchnum + 1
|
||||
else:
|
||||
new_batchnum = None
|
||||
yield self.store.set_profile_active(
|
||||
target_user.localpart, active, new_batchnum
|
||||
)
|
||||
|
||||
# start a profile replication push
|
||||
run_in_background(self._replicate_profiles)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def get_avatar_url(self, target_user):
|
||||
@@ -214,8 +330,19 @@ class BaseProfileHandler(BaseHandler):
|
||||
if not by_admin and target_user != requester.user:
|
||||
raise AuthError(400, "Cannot set another user's avatar_url")
|
||||
|
||||
if not by_admin and self.hs.config.disable_set_avatar_url:
|
||||
profile = yield self.store.get_profileinfo(target_user.localpart)
if profile.avatar_url:
raise SynapseError(400, "Changing avatar url is disabled on this server")

if len(self.hs.config.replicate_user_profiles_to) > 0:
cur_batchnum = yield self.store.get_latest_profile_replication_batch_number()
new_batchnum = 0 if cur_batchnum is None else cur_batchnum + 1
else:
new_batchnum = None

yield self.store.set_profile_avatar_url(
target_user.localpart, new_avatar_url
target_user.localpart, new_avatar_url, new_batchnum,
)

if self.hs.config.user_directory_search_all_users:
@@ -226,6 +353,9 @@ class BaseProfileHandler(BaseHandler):

yield self._update_join_states(requester, target_user)

# start a profile replication push
run_in_background(self._replicate_profiles)

@defer.inlineCallbacks
def on_profile_query(self, args):
user = UserID.from_string(args["user_id"])
@@ -281,20 +411,6 @@ class BaseProfileHandler(BaseHandler):
room_id, str(e.message)
)


class MasterProfileHandler(BaseProfileHandler):
PROFILE_UPDATE_MS = 60 * 1000
PROFILE_UPDATE_EVERY_MS = 24 * 60 * 60 * 1000

def __init__(self, hs):
super(MasterProfileHandler, self).__init__(hs)

assert hs.config.worker_app is None

self.clock.looping_call(
self._start_update_remote_profile_cache, self.PROFILE_UPDATE_MS,
)

def _start_update_remote_profile_cache(self):
return run_as_background_process(
"Update remote profile", self._update_remote_profile_cache,

@@ -51,6 +51,7 @@ class RegistrationHandler(BaseHandler):
self.profile_handler = hs.get_profile_handler()
self.user_directory_handler = hs.get_user_directory_handler()
self.captcha_client = CaptchaServerHttpClient(hs)
self.http_client = hs.get_simple_http_client()

self._next_generated_user_id = None

@@ -124,8 +125,8 @@ class RegistrationHandler(BaseHandler):
generate_token=True,
guest_access_token=None,
make_guest=False,
display_name=None,
admin=False,
threepid=None,
):
"""Registers a new client on the server.

@@ -140,13 +141,14 @@ class RegistrationHandler(BaseHandler):
since it offers no means of associating a device_id with the
access_token. Instead you should call auth_handler.issue_access_token
after registration.
display_name (str): The displayname to set for this user, if any
Returns:
A tuple of (user_id, access_token).
Raises:
RegistrationError if there was a problem registering.
"""

yield self.auth.check_auth_blocking(threepid=threepid)
yield self.auth.check_auth_blocking()
password_hash = None
if password:
password_hash = yield self.auth_handler().hash(password)
@@ -178,13 +180,20 @@ class RegistrationHandler(BaseHandler):
password_hash=password_hash,
was_guest=was_guest,
make_guest=make_guest,
create_profile_with_localpart=(
# If the user was a guest then they already have a profile
None if was_guest else user.localpart
),
admin=admin,
)

if display_name is None:
display_name = (
# If the user was a guest then they already have a profile
None if was_guest else user.localpart
)

if display_name:
yield self.profile_handler.set_displayname(
user, None, display_name, by_admin=True,
)

if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(localpart)
yield self.user_directory_handler.handle_local_profile_change(
@@ -209,8 +218,12 @@ class RegistrationHandler(BaseHandler):
token=token,
password_hash=password_hash,
make_guest=make_guest,
create_profile_with_localpart=user.localpart,
)

yield self.profile_handler.set_displayname(
user, None, user.localpart, by_admin=True,
)

except SynapseError:
# if user id is taken, just generate another
user = None
@@ -254,8 +267,12 @@ class RegistrationHandler(BaseHandler):
user_id=user_id,
password_hash="",
appservice_id=service_id,
create_profile_with_localpart=user.localpart,
)

yield self.profile_handler.set_displayname(
user, None, user.localpart, by_admin=True,
)

defer.returnValue(user_id)

@defer.inlineCallbacks
@@ -302,7 +319,10 @@ class RegistrationHandler(BaseHandler):
user_id=user_id,
token=token,
password_hash=None,
create_profile_with_localpart=user.localpart,
)

yield self.profile_handler.set_displayname(
user, None, user.localpart, by_admin=True,
)
except Exception as e:
yield self.store.add_access_token_to_user(user_id, token)
@@ -333,7 +353,9 @@ class RegistrationHandler(BaseHandler):
logger.info("got threepid with medium '%s' and address '%s'",
threepid['medium'], threepid['address'])

if not check_3pid_allowed(self.hs, threepid['medium'], threepid['address']):
if not (
yield check_3pid_allowed(self.hs, threepid['medium'], threepid['address'])
):
raise RegistrationError(
403, "Third party identifier is not allowed"
)
@@ -375,6 +397,43 @@ class RegistrationHandler(BaseHandler):
errcode=Codes.EXCLUSIVE
)

@defer.inlineCallbacks
def chain_register(self, localpart, auth_result, params):
"""Invokes the current registration on another server, using
shared secret registration, passing in any auth_results from
other registration UI auth flows (e.g. validated 3pids)
Useful for setting up shadow/backup accounts on a parallel deployment.
"""

# TODO: retries

chained_hs = self.hs.config.chain_register.get("hs")

user = localpart.encode("utf-8")
mac = hmac.new(
key=self.hs.config.chain_register.get("hs_shared_secret").encode(),
msg=user,
digestmod=sha1,
).hexdigest()

data = yield self.http_client.post_urlencoded_get_json(
"https://%s%s" % (
chained_hs, "/_matrix/client/r0/register"
),
{
# XXX: auth_result is an unspecified extension for chained registration
'auth_result': auth_result,
'username': localpart,
'password': params.get("password"),
'bind_email': params.get("bind_email"),
'bind_msisdn': params.get("bind_msisdn"),
'device_id': params.get("device_id"),
'initial_device_display_name': params.get("initial_device_display_name"),
'inhibit_login': True,
'mac': mac,
}
)
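
As an aside on the chain_register hunk above: the mac it posts is a standard shared-secret registration HMAC. A minimal standalone sketch, not part of the diff; the secret and localpart values below are made up:

import hmac
from hashlib import sha1

def chained_registration_mac(shared_secret, localpart):
    # Mirrors chain_register above: an HMAC-SHA1 over the localpart, keyed with
    # the chained homeserver's shared registration secret, sent as 'mac'.
    return hmac.new(
        key=shared_secret.encode(),
        msg=localpart.encode("utf-8"),
        digestmod=sha1,
    ).hexdigest()

# chained_registration_mac("shared-secret-from-config", "alice") -> hex digest string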

@defer.inlineCallbacks
def _generate_user_id(self, reseed=False):
if reseed or self._next_generated_user_id is None:
@@ -461,18 +520,15 @@ class RegistrationHandler(BaseHandler):
user_id=user_id,
token=token,
password_hash=password_hash,
create_profile_with_localpart=user.localpart,
)
if displayname is not None:
yield self.profile_handler.set_displayname(
user, None, displayname, by_admin=True,
)
else:
yield self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.add_access_token_to_user(user_id=user_id, token=token)

if displayname is not None:
logger.info("setting user display name: %s -> %s", user_id, displayname)
yield self.profile_handler.set_displayname(
user, requester, displayname, by_admin=True,
)

defer.returnValue((user_id, token))

def auth_handler(self):

@@ -52,12 +52,14 @@ class RoomCreationHandler(BaseHandler):
"history_visibility": "shared",
"original_invitees_have_ops": False,
"guest_can_join": True,
"encryption_alg": "m.megolm.v1.aes-sha2",
},
RoomCreationPreset.TRUSTED_PRIVATE_CHAT: {
"join_rules": JoinRules.INVITE,
"history_visibility": "shared",
"original_invitees_have_ops": True,
"guest_can_join": True,
"encryption_alg": "m.megolm.v1.aes-sha2",
},
RoomCreationPreset.PUBLIC_CHAT: {
"join_rules": JoinRules.PUBLIC,
@@ -425,6 +427,15 @@ class RoomCreationHandler(BaseHandler):
content=content,
)

if "encryption_alg" in config:
send(
etype=EventTypes.Encryption,
state_key="",
content={
'algorithm': config["encryption_alg"],
}
)


class RoomContextHandler(object):
def __init__(self, hs):

@@ -344,7 +344,6 @@ class RoomMemberHandler(object):
latest_event_ids = (
event_id for (event_id, _, _) in prev_events_and_hashes
)

current_state_ids = yield self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids,
)

@@ -745,16 +745,9 @@ class SyncHandler(object):
state_ids = {}
if lazy_load_members:
if types:
# We're returning an incremental sync, with no "gap" since
# the previous sync, so normally there would be no state to return
# But we're lazy-loading, so the client might need some more
# member events to understand the events in this timeline.
# So we fish out all the member events corresponding to the
# timeline here, and then dedupe any redundant ones below.

state_ids = yield self.store.get_state_ids_for_event(
batch.events[0].event_id, types=types,
filtered_types=None, # we only want members!
filtered_types=filtered_types,
)

if lazy_load_members and not include_redundant_members:
@@ -861,7 +854,7 @@ class SyncHandler(object):
res = yield self._generate_sync_entry_for_rooms(
sync_result_builder, account_data_by_room
)
newly_joined_rooms, newly_joined_users, _, _ = res
newly_joined_rooms, newly_joined_or_invited_users, _, _ = res
_, _, newly_left_rooms, newly_left_users = res

block_all_presence_data = (
@@ -870,7 +863,7 @@ class SyncHandler(object):
)
if self.hs_config.use_presence and not block_all_presence_data:
yield self._generate_sync_entry_for_presence(
sync_result_builder, newly_joined_rooms, newly_joined_users
sync_result_builder, newly_joined_rooms, newly_joined_or_invited_users
)

yield self._generate_sync_entry_for_to_device(sync_result_builder)
@@ -878,7 +871,7 @@ class SyncHandler(object):
device_lists = yield self._generate_sync_entry_for_device_list(
sync_result_builder,
newly_joined_rooms=newly_joined_rooms,
newly_joined_users=newly_joined_users,
newly_joined_or_invited_users=newly_joined_or_invited_users,
newly_left_rooms=newly_left_rooms,
newly_left_users=newly_left_users,
)
@@ -954,7 +947,8 @@ class SyncHandler(object):
@measure_func("_generate_sync_entry_for_device_list")
@defer.inlineCallbacks
def _generate_sync_entry_for_device_list(self, sync_result_builder,
newly_joined_rooms, newly_joined_users,
newly_joined_rooms,
newly_joined_or_invited_users,
newly_left_rooms, newly_left_users):
user_id = sync_result_builder.sync_config.user.to_string()
since_token = sync_result_builder.since_token
@@ -968,7 +962,7 @@ class SyncHandler(object):
# share a room with?
for room_id in newly_joined_rooms:
joined_users = yield self.state.get_current_user_in_room(room_id)
newly_joined_users.update(joined_users)
newly_joined_or_invited_users.update(joined_users)

for room_id in newly_left_rooms:
left_users = yield self.state.get_current_user_in_room(room_id)
@@ -976,7 +970,7 @@ class SyncHandler(object):

# TODO: Check that these users are actually new, i.e. either they
# weren't in the previous sync *or* they left and rejoined.
changed.update(newly_joined_users)
changed.update(newly_joined_or_invited_users)

if not changed and not newly_left_users:
defer.returnValue(DeviceLists(
@@ -1094,7 +1088,7 @@ class SyncHandler(object):

@defer.inlineCallbacks
def _generate_sync_entry_for_presence(self, sync_result_builder, newly_joined_rooms,
newly_joined_users):
newly_joined_or_invited_users):
"""Generates the presence portion of the sync response. Populates the
`sync_result_builder` with the result.

@@ -1102,8 +1096,9 @@ class SyncHandler(object):
sync_result_builder(SyncResultBuilder)
newly_joined_rooms(list): List of rooms that the user has joined
since the last sync (or empty if an initial sync)
newly_joined_users(list): List of users that have joined rooms
since the last sync (or empty if an initial sync)
newly_joined_or_invited_users(list): List of users that have joined
or been invited to rooms since the last sync (or empty if an initial
sync)
"""
now_token = sync_result_builder.now_token
sync_config = sync_result_builder.sync_config
@@ -1129,7 +1124,7 @@ class SyncHandler(object):
"presence_key", presence_key
)

extra_users_ids = set(newly_joined_users)
extra_users_ids = set(newly_joined_or_invited_users)
for room_id in newly_joined_rooms:
users = yield self.state.get_current_user_in_room(room_id)
extra_users_ids.update(users)
@@ -1161,7 +1156,8 @@ class SyncHandler(object):

Returns:
Deferred(tuple): Returns a 4-tuple of
`(newly_joined_rooms, newly_joined_users, newly_left_rooms, newly_left_users)`
`(newly_joined_rooms, newly_joined_or_invited_users,
newly_left_rooms, newly_left_users)`
"""
user_id = sync_result_builder.sync_config.user.to_string()
block_all_room_ephemeral = (
@@ -1232,8 +1228,8 @@ class SyncHandler(object):

sync_result_builder.invited.extend(invited)

# Now we want to get any newly joined users
newly_joined_users = set()
# Now we want to get any newly joined or invited users
newly_joined_or_invited_users = set()
newly_left_users = set()
if since_token:
for joined_sync in sync_result_builder.joined:
@@ -1242,19 +1238,22 @@ class SyncHandler(object):
)
for event in it:
if event.type == EventTypes.Member:
if event.membership == Membership.JOIN:
newly_joined_users.add(event.state_key)
if (
event.membership == Membership.JOIN or
event.membership == Membership.INVITE
):
newly_joined_or_invited_users.add(event.state_key)
else:
prev_content = event.unsigned.get("prev_content", {})
prev_membership = prev_content.get("membership", None)
if prev_membership == Membership.JOIN:
newly_left_users.add(event.state_key)

newly_left_users -= newly_joined_users
newly_left_users -= newly_joined_or_invited_users

defer.returnValue((
newly_joined_rooms,
newly_joined_users,
newly_joined_or_invited_users,
newly_left_rooms,
newly_left_users,
))
@@ -1299,7 +1298,7 @@ class SyncHandler(object):
where:
room_entries is a list [RoomSyncResultBuilder]
invited_rooms is a list [InvitedSyncResult]
newly_joined rooms is a list[str] of room ids
newly_joined_rooms is a list[str] of room ids
newly_left_rooms is a list[str] of room ids
"""
user_id = sync_result_builder.sync_config.user.to_string()
@@ -1334,7 +1333,7 @@ class SyncHandler(object):
if room_id in sync_result_builder.joined_room_ids and non_joins:
# Always include if the user (re)joined the room, especially
# important so that device list changes are calculated correctly.
# If there are non join member events, but we are still in the room,
# If there are non-join member events, but we are still in the room,
# then the user must have left and joined
newly_joined_rooms.append(room_id)


@@ -119,8 +119,6 @@ class UserDirectoryHandler(object):
"""Called to update index of our local user profiles when they change
irrespective of any rooms the user may be in.
"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
yield self.store.update_profile_in_user_dir(
user_id, profile.display_name, profile.avatar_url, None,
)
@@ -129,8 +127,6 @@ class UserDirectoryHandler(object):
def handle_user_deactivated(self, user_id):
"""Called when a user ID is deactivated
"""
# FIXME(#3714): We should probably do this in the same worker as all
# the other changes.
yield self.store.remove_from_user_dir(user_id)
yield self.store.remove_from_user_in_public_room(user_id)


@@ -133,7 +133,7 @@ class MatrixFederationHttpClient(object):
failures, connection failures, SSL failures.)
"""
if (
self.hs.config.federation_domain_whitelist is not None and
self.hs.config.federation_domain_whitelist and
destination not in self.hs.config.federation_domain_whitelist
):
raise FederationDeniedError(destination)
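
One behavioural consequence of the whitelist hunk above, shown as a small illustrative sketch (values are made up, not part of the diff): with an empty federation_domain_whitelist, the old `is not None` test denied every destination, while the new truthiness test treats an empty list like no whitelist at all.

def is_denied(whitelist, destination):
    # New form from the hunk above: a falsy (None or empty) whitelist denies nothing.
    return bool(whitelist and destination not in whitelist)

assert is_denied(["matrix.org"], "example.com") is True
assert is_denied(["matrix.org"], "matrix.org") is False
assert is_denied([], "example.com") is False   # the old `is not None` check would have denied this
assert is_denied(None, "example.com") is False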

@@ -15,7 +15,6 @@
# limitations under the License.

import logging
import threading

from prometheus_client.core import Counter, Histogram

@@ -112,9 +111,6 @@ in_flight_requests_db_sched_duration = Counter(
# The set of all in flight requests, set[RequestMetrics]
_in_flight_requests = set()

# Protects the _in_flight_requests set from concurrent accesss
_in_flight_requests_lock = threading.Lock()


def _get_in_flight_counts():
"""Returns a count of all in flight requests by (method, server_name)
@@ -124,8 +120,7 @@ def _get_in_flight_counts():
"""
# Cast to a list to prevent it changing while the Prometheus
# thread is collecting metrics
with _in_flight_requests_lock:
reqs = list(_in_flight_requests)
reqs = list(_in_flight_requests)

for rm in reqs:
rm.update_metrics()
@@ -159,12 +154,10 @@ class RequestMetrics(object):
# to the "in flight" metrics.
self._request_stats = self.start_context.get_resource_usage()

with _in_flight_requests_lock:
_in_flight_requests.add(self)
_in_flight_requests.add(self)

def stop(self, time_sec, request):
with _in_flight_requests_lock:
_in_flight_requests.discard(self)
_in_flight_requests.discard(self)

context = LoggingContext.current_context()


@@ -13,8 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import threading

import six

from prometheus_client.core import REGISTRY, Counter, GaugeMetricFamily
@@ -80,9 +78,6 @@ _background_process_counts = dict()  # type: dict[str, int]
# of process descriptions that no longer have any active processes.
_background_processes = dict()  # type: dict[str, set[_BackgroundProcess]]

# A lock that covers the above dicts
_bg_metrics_lock = threading.Lock()


class _Collector(object):
"""A custom metrics collector for the background process metrics.
@@ -97,11 +92,7 @@ class _Collector(object):
labels=["name"],
)

# We copy the dict so that it doesn't change from underneath us
with _bg_metrics_lock:
_background_processes_copy = dict(_background_processes)

for desc, processes in six.iteritems(_background_processes_copy):
for desc, processes in six.iteritems(_background_processes):
background_process_in_flight_count.add_metric(
(desc,), len(processes),
)
@@ -176,26 +167,19 @@ def run_as_background_process(desc, func, *args, **kwargs):
"""
@defer.inlineCallbacks
def run():
with _bg_metrics_lock:
count = _background_process_counts.get(desc, 0)
_background_process_counts[desc] = count + 1

count = _background_process_counts.get(desc, 0)
_background_process_counts[desc] = count + 1
_background_process_start_count.labels(desc).inc()

with LoggingContext(desc) as context:
context.request = "%s-%i" % (desc, count)
proc = _BackgroundProcess(desc, context)

with _bg_metrics_lock:
_background_processes.setdefault(desc, set()).add(proc)

_background_processes.setdefault(desc, set()).add(proc)
try:
yield func(*args, **kwargs)
finally:
proc.update_metrics()

with _bg_metrics_lock:
_background_processes[desc].remove(proc)
_background_processes[desc].remove(proc)

with PreserveLoggingContext():
return run()

@@ -39,7 +39,7 @@ REQUIREMENTS = {
"signedjson>=1.0.0": ["signedjson>=1.0.0"],
"pynacl>=1.2.1": ["nacl>=1.2.1", "nacl.bindings"],
"service_identity>=1.0.0": ["service_identity>=1.0.0"],
"Twisted>=17.1.0": ["twisted>=17.1.0"],
"Twisted>=16.0.0": ["twisted>=16.0.0"],

# We use crypto.get_elliptic_curve which is only supported in >=0.15
"pyopenssl>=0.15": ["OpenSSL>=0.15"],
@@ -78,9 +78,6 @@ CONDITIONAL_REQUIREMENTS = {
"affinity": {
"affinity": ["affinity"],
},
"postgres": {
"psycopg2>=2.6": ["psycopg2"]
}
}



@@ -531,7 +531,7 @@ class RoomEventServlet(ClientV1RestServlet):

@defer.inlineCallbacks
def on_GET(self, request, room_id, event_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
requester = yield self.auth.get_user_by_req(request)
event = yield self.event_handler.get_event(requester.user, room_id, event_id)

time_now = self.clock.time_msec()

@@ -23,7 +23,6 @@ from twisted.internet import defer
import synapse.util.stringutils as stringutils
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, SynapseError
from synapse.config.server import is_threepid_reserved
from synapse.http.servlet import assert_params_in_dict, parse_json_object_from_request
from synapse.rest.client.v1.base import ClientV1RestServlet
from synapse.types import create_requester
@@ -282,20 +281,12 @@ class RegisterRestServlet(ClientV1RestServlet):
register_json["user"].encode("utf-8")
if "user" in register_json else None
)
threepid = None
if session.get(LoginType.EMAIL_IDENTITY):
threepid = session["threepidCreds"]

handler = self.handlers.registration_handler
(user_id, token) = yield handler.register(
localpart=desired_user_id,
password=password,
threepid=threepid,
password=password
)
# Necessary due to auth checks prior to the threepid being
# written to the db
if is_threepid_reserved(self.hs.config, threepid):
yield self.store.upsert_monthly_active_user(user_id)

if session[LoginType.EMAIL_IDENTITY]:
logger.debug("Binding emails %s to %s" % (

@@ -51,11 +51,9 @@ class EmailPasswordRequestTokenRestServlet(RestServlet):
'id_server', 'client_secret', 'email', 'send_attempt'
])

if not check_3pid_allowed(self.hs, "email", body['email']):
if not (yield check_3pid_allowed(self.hs, "email", body['email'])):
raise SynapseError(
403,
"Your email domain is not authorized on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
@@ -89,11 +87,9 @@ class MsisdnPasswordRequestTokenRestServlet(RestServlet):

msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])

if not check_3pid_allowed(self.hs, "msisdn", msisdn):
if not (yield check_3pid_allowed(self.hs, "msisdn", msisdn)):
raise SynapseError(
403,
"Account phone numbers are not authorized on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.datastore.get_user_id_by_threepid(
@@ -243,11 +239,9 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
['id_server', 'client_secret', 'email', 'send_attempt'],
)

if not check_3pid_allowed(self.hs, "email", body['email']):
if not (yield check_3pid_allowed(self.hs, "email", body['email'])):
raise SynapseError(
403,
"Your email domain is not authorized on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.datastore.get_user_id_by_threepid(
@@ -280,11 +274,9 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):

msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])

if not check_3pid_allowed(self.hs, "msisdn", msisdn):
if not (yield check_3pid_allowed(self.hs, "msisdn", msisdn)):
raise SynapseError(
403,
"Account phone numbers are not authorized on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.datastore.get_user_id_by_threepid(
@@ -321,6 +313,9 @@ class ThreepidRestServlet(RestServlet):

@defer.inlineCallbacks
def on_POST(self, request):
if self.hs.config.disable_3pid_changes:
raise SynapseError(400, "3PID changes disabled on this server")

body = parse_json_object_from_request(request)

threePidCreds = body.get('threePidCreds')
@@ -367,11 +362,15 @@ class ThreepidDeleteRestServlet(RestServlet):

def __init__(self, hs):
super(ThreepidDeleteRestServlet, self).__init__()
self.hs = hs
self.auth = hs.get_auth()
self.auth_handler = hs.get_auth_handler()

@defer.inlineCallbacks
def on_POST(self, request):
if self.hs.config.disable_3pid_changes:
raise SynapseError(400, "3PID changes disabled on this server")

body = parse_json_object_from_request(request)
assert_params_in_dict(body, ['medium', 'address'])


@@ -16,7 +16,9 @@

import hmac
import logging
import re
from hashlib import sha1
from string import capwords

from six import string_types

@@ -26,7 +28,6 @@ import synapse
import synapse.types
from synapse.api.constants import LoginType
from synapse.api.errors import Codes, SynapseError, UnrecognizedRequestError
from synapse.config.server import is_threepid_reserved
from synapse.http.servlet import (
RestServlet,
assert_params_in_dict,
@@ -73,11 +74,9 @@ class EmailRegisterRequestTokenRestServlet(RestServlet):
'id_server', 'client_secret', 'email', 'send_attempt'
])

if not check_3pid_allowed(self.hs, "email", body['email']):
if not (yield check_3pid_allowed(self.hs, "email", body['email'])):
raise SynapseError(
403,
"Your email domain is not authorized to register on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
@@ -115,11 +114,9 @@ class MsisdnRegisterRequestTokenRestServlet(RestServlet):

msisdn = phone_number_to_msisdn(body['country'], body['phone_number'])

if not check_3pid_allowed(self.hs, "msisdn", msisdn):
if not (yield check_3pid_allowed(self.hs, "msisdn", msisdn)):
raise SynapseError(
403,
"Phone numbers are not authorized to register on this server",
Codes.THREEPID_DENIED,
403, "Third party identifier is not allowed", Codes.THREEPID_DENIED,
)

existingUid = yield self.hs.get_datastore().get_user_id_by_threepid(
@@ -227,6 +224,8 @@ class RegisterRestServlet(RestServlet):
raise SynapseError(400, "Invalid username")
desired_username = body['username']

desired_display_name = None

appservice = None
if self.auth.has_access_token(request):
appservice = yield self.auth.get_appservice_by_req(request)
@@ -302,13 +301,6 @@ class RegisterRestServlet(RestServlet):
session_id, "registered_user_id", None
)

if desired_username is not None:
yield self.registration_handler.check_username(
desired_username,
guest_access_token=guest_access_token,
assigned_user_id=registered_user_id,
)

# Only give msisdn flows if the x_show_msisdn flag is given:
# this is a hack to work around the fact that clients were shipped
# that use fallback registration if they see any flows that they don't
@@ -375,14 +367,87 @@ class RegisterRestServlet(RestServlet):
medium = auth_result[login_type]['medium']
address = auth_result[login_type]['address']

if not check_3pid_allowed(self.hs, medium, address):
if not (yield check_3pid_allowed(self.hs, medium, address)):
raise SynapseError(
403,
"Third party identifiers (email/phone numbers)" +
" are not authorized on this server",
403, "Third party identifier is not allowed",
Codes.THREEPID_DENIED,
)

if self.hs.config.register_mxid_from_3pid:
# override the desired_username based on the 3PID if any.
# reset it first to avoid folks picking their own username.
desired_username = None

# we should have an auth_result at this point if we're going to progress
# to register the user (i.e. we haven't picked up a registered_user_id
# from our session store), in which case get ready and gen the
# desired_username
if auth_result:
if (
self.hs.config.register_mxid_from_3pid == 'email' and
LoginType.EMAIL_IDENTITY in auth_result
):
address = auth_result[LoginType.EMAIL_IDENTITY]['address']
desired_username = synapse.types.strip_invalid_mxid_characters(
address.replace('@', '-').lower()
)

# find a unique mxid for the account, suffixing numbers
# if needed
while True:
try:
yield self.registration_handler.check_username(
desired_username,
guest_access_token=guest_access_token,
assigned_user_id=registered_user_id,
)
# if we got this far we passed the check.
break
except SynapseError as e:
if e.errcode == Codes.USER_IN_USE:
m = re.match(r'^(.*?)(\d+)$', desired_username)
if m:
desired_username = m.group(1) + str(
int(m.group(2)) + 1
)
else:
desired_username += "1"
else:
# something else went wrong.
break

# XXX: a nasty heuristic to turn an email address into
# a displayname, as part of register_mxid_from_3pid
parts = address.replace('.', ' ').split('@')
org_parts = parts[1].split(' ')

if org_parts[-2] == "matrix" and org_parts[-1] == "org":
org = "Tchap Admin"
elif org_parts[-2] == "gouv" and org_parts[-1] == "fr":
org = org_parts[-3] if len(org_parts) > 2 else org_parts[-2]
else:
org = org_parts[-2]

desired_display_name = (
capwords(parts[0]) + " [" + capwords(org) + "]"
)
elif (
self.hs.config.register_mxid_from_3pid == 'msisdn' and
LoginType.MSISDN in auth_result
):
desired_username = auth_result[LoginType.MSISDN]['address']
else:
raise SynapseError(
400, "Cannot derive mxid from 3pid; no recognised 3pid"
)
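
A worked example of the register_mxid_from_3pid display-name heuristic added above (a standalone sketch, not part of the diff; the address is made up and the helper name is mine):

from string import capwords

def displayname_from_email(address):
    # Restates the heuristic above; assumes an address with at least a
    # two-label domain (e.g. "user@example.org").
    parts = address.replace('.', ' ').split('@')
    org_parts = parts[1].split(' ')
    if org_parts[-2] == "matrix" and org_parts[-1] == "org":
        org = "Tchap Admin"
    elif org_parts[-2] == "gouv" and org_parts[-1] == "fr":
        org = org_parts[-3] if len(org_parts) > 2 else org_parts[-2]
    else:
        org = org_parts[-2]
    return capwords(parts[0]) + " [" + capwords(org) + "]"

# displayname_from_email("jane.doe@example.org") == "Jane Doe [Example]"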

if desired_username is not None:
yield self.registration_handler.check_username(
desired_username,
guest_access_token=guest_access_token,
assigned_user_id=registered_user_id,
)

if registered_user_id is not None:
logger.info(
"Already registered user ID %r for this session",
@@ -395,28 +460,34 @@ class RegisterRestServlet(RestServlet):
# NB: This may be from the auth handler and NOT from the POST
assert_params_in_dict(params, ["password"])

desired_username = params.get("username", None)
if not self.hs.config.register_mxid_from_3pid:
desired_username = params.get("username", None)
else:
# we keep the original desired_username derived from the 3pid above
pass

guest_access_token = params.get("guest_access_token", None)
new_password = params.get("password", None)

# XXX: don't we need to validate these for length etc like we did on
# the ones from the JSON body earlier on in the method?

if desired_username is not None:
desired_username = desired_username.lower()

threepid = None
if auth_result:
threepid = auth_result.get(LoginType.EMAIL_IDENTITY)

(registered_user_id, _) = yield self.registration_handler.register(
localpart=desired_username,
password=new_password,
password=params.get("password", None),
guest_access_token=guest_access_token,
generate_token=False,
threepid=threepid,
display_name=desired_display_name,
)
# Necessary due to auth checks prior to the threepid being
# written to the db
if is_threepid_reserved(self.hs.config, threepid):
yield self.store.upsert_monthly_active_user(registered_user_id)

if self.hs.config.chain_register:
yield self.registration_handler.chain_register(
localpart=desired_username,
auth_result=auth_result,
params=params,
)

# remember that we've now registered that user account, and with
# what user ID (since the user may not have specified)

@@ -15,6 +15,8 @@

import logging

from signedjson.sign import sign_json

from twisted.internet import defer

from synapse.api.errors import SynapseError
@@ -37,6 +39,7 @@ class UserDirectorySearchRestServlet(RestServlet):
self.hs = hs
self.auth = hs.get_auth()
self.user_directory_handler = hs.get_user_directory_handler()
self.http_client = hs.get_simple_http_client()

@defer.inlineCallbacks
def on_POST(self, request):
@@ -61,6 +64,14 @@ class UserDirectorySearchRestServlet(RestServlet):

body = parse_json_object_from_request(request)

if self.hs.config.user_directory_defer_to_id_server:
signed_body = sign_json(body, self.hs.hostname, self.hs.config.signing_key[0])
url = "http://%s/_matrix/identity/api/v1/user_directory/search" % (
self.hs.config.user_directory_defer_to_id_server,
)
resp = yield self.http_client.post_json_get_json(url, signed_body)
defer.returnValue((200, resp))

limit = body.get("limit", 10)
limit = min(limit, 50)


synapse/rulecheck/domain_rule_checker.py (new file, 100 lines)
@@ -0,0 +1,100 @@
# -*- coding: utf-8 -*-
# Copyright 2018 New Vector Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging

from synapse.config._base import ConfigError

logger = logging.getLogger(__name__)


class DomainRuleChecker(object):
"""
A re-implementation of the SpamChecker that prevents users in one domain from
inviting users in other domains to rooms, based on a configuration.

Takes a config in the format:

spam_checker:
module: "rulecheck.DomainRuleChecker"
config:
domain_mapping:
"inviter_domain": [ "invitee_domain_permitted", "other_domain_permitted" ]
"other_inviter_domain": [ "invitee_domain_permitted" ]
default: False
}

Don't forget to consider if you can invite users from your own domain.
"""

def __init__(self, config):
self.domain_mapping = config["domain_mapping"] or {}
self.default = config["default"]

def check_event_for_spam(self, event):
"""Implements synapse.events.SpamChecker.check_event_for_spam
"""
return False

def user_may_invite(self, inviter_userid, invitee_userid, room_id):
"""Implements synapse.events.SpamChecker.user_may_invite
"""
inviter_domain = self._get_domain_from_id(inviter_userid)
invitee_domain = self._get_domain_from_id(invitee_userid)

if inviter_domain not in self.domain_mapping:
return self.default

return invitee_domain in self.domain_mapping[inviter_domain]

def user_may_create_room(self, userid):
"""Implements synapse.events.SpamChecker.user_may_create_room
"""
return True

def user_may_create_room_alias(self, userid, room_alias):
"""Implements synapse.events.SpamChecker.user_may_create_room_alias
"""
return True

def user_may_publish_room(self, userid, room_id):
"""Implements synapse.events.SpamChecker.user_may_publish_room
"""
return True

@staticmethod
def parse_config(config):
"""Implements synapse.events.SpamChecker.parse_config
"""
if "default" in config:
return config
else:
raise ConfigError("No default set for spam_config DomainRuleChecker")

@staticmethod
def _get_domain_from_id(mxid):
"""Parses a string and returns the domain part of the mxid.

Args:
mxid (str): a valid mxid

Returns:
str: the domain part of the mxid

"""
idx = mxid.find(":")
if idx == -1:
raise Exception("Invalid ID: %r" % (mxid,))
return mxid[idx + 1:]
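
To make the new DomainRuleChecker concrete, a usage sketch with a made-up config (the domain names are placeholders, not part of the diff):

config = DomainRuleChecker.parse_config({
    "domain_mapping": {
        "example.com": ["example.com", "partner.org"],
    },
    "default": False,
})
checker = DomainRuleChecker(config)

checker.user_may_invite("@alice:example.com", "@bob:partner.org", "!room:example.com")    # True
checker.user_may_invite("@carol:partner.org", "@alice:example.com", "!room:example.com")  # False: inviter domain unmapped, falls back to default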
|
||||
@@ -19,7 +19,6 @@
|
||||
# partial one for unit test mocking.
|
||||
|
||||
# Imports required for the default HomeServer() implementation
|
||||
import abc
|
||||
import logging
|
||||
|
||||
from twisted.enterprise import adbapi
|
||||
@@ -57,7 +56,7 @@ from synapse.handlers.initial_sync import InitialSyncHandler
|
||||
from synapse.handlers.message import EventCreationHandler, MessageHandler
|
||||
from synapse.handlers.pagination import PaginationHandler
|
||||
from synapse.handlers.presence import PresenceHandler
|
||||
from synapse.handlers.profile import BaseProfileHandler, MasterProfileHandler
|
||||
from synapse.handlers.profile import ProfileHandler
|
||||
from synapse.handlers.read_marker import ReadMarkerHandler
|
||||
from synapse.handlers.receipts import ReceiptsHandler
|
||||
from synapse.handlers.room import RoomContextHandler, RoomCreationHandler
|
||||
@@ -82,6 +81,7 @@ from synapse.server_notices.server_notices_manager import ServerNoticesManager
|
||||
from synapse.server_notices.server_notices_sender import ServerNoticesSender
|
||||
from synapse.server_notices.worker_server_notices_sender import WorkerServerNoticesSender
|
||||
from synapse.state import StateHandler, StateResolutionHandler
|
||||
from synapse.storage import DataStore
|
||||
from synapse.streams.events import EventSources
|
||||
from synapse.util import Clock
|
||||
from synapse.util.distributor import Distributor
|
||||
@@ -111,8 +111,6 @@ class HomeServer(object):
|
||||
config (synapse.config.homeserver.HomeserverConfig):
|
||||
"""
|
||||
|
||||
__metaclass__ = abc.ABCMeta
|
||||
|
||||
DEPENDENCIES = [
|
||||
'http_client',
|
||||
'db_pool',
|
||||
@@ -174,11 +172,6 @@ class HomeServer(object):
|
||||
'room_context_handler',
|
||||
]
|
||||
|
||||
# This is overridden in derived application classes
|
||||
# (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
|
||||
# instantiated during setup() for future return by get_datastore()
|
||||
DATASTORE_CLASS = abc.abstractproperty()
|
||||
|
||||
def __init__(self, hostname, reactor=None, **kwargs):
|
||||
"""
|
||||
Args:
|
||||
@@ -195,16 +188,13 @@ class HomeServer(object):
|
||||
self.distributor = Distributor()
|
||||
self.ratelimiter = Ratelimiter()
|
||||
|
||||
self.datastore = None
|
||||
|
||||
# Other kwargs are explicit dependencies
|
||||
for depname in kwargs:
|
||||
setattr(self, depname, kwargs[depname])
|
||||
|
||||
def setup(self):
|
||||
logger.info("Setting up.")
|
||||
with self.get_db_conn() as conn:
|
||||
self.datastore = self.DATASTORE_CLASS(conn, self)
|
||||
self.datastore = DataStore(self.get_db_conn(), self)
|
||||
logger.info("Finished setting up.")
|
||||
|
||||
def get_reactor(self):
|
||||
@@ -318,10 +308,7 @@ class HomeServer(object):
|
||||
return InitialSyncHandler(self)
|
||||
|
||||
def build_profile_handler(self):
|
||||
if self.config.worker_app:
|
||||
return BaseProfileHandler(self)
|
||||
else:
|
||||
return MasterProfileHandler(self)
|
||||
return ProfileHandler(self)
|
||||
|
||||
def build_event_creation_handler(self):
|
||||
return EventCreationHandler(self)
|
||||
|
||||
@@ -1,203 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import logging
|
||||
|
||||
from six import iteritems
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import (
|
||||
EventTypes,
|
||||
ServerNoticeLimitReached,
|
||||
ServerNoticeMsgType,
|
||||
)
|
||||
from synapse.api.errors import AuthError, ResourceLimitError, SynapseError
|
||||
from synapse.server_notices.server_notices_manager import SERVER_NOTICE_ROOM_TAG
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class ResourceLimitsServerNotices(object):
|
||||
""" Keeps track of whether the server has reached it's resource limit and
|
||||
ensures that the client is kept up to date.
|
||||
"""
|
||||
def __init__(self, hs):
|
||||
"""
|
||||
Args:
|
||||
hs (synapse.server.HomeServer):
|
||||
"""
|
||||
self._server_notices_manager = hs.get_server_notices_manager()
|
||||
self._store = hs.get_datastore()
|
||||
self._auth = hs.get_auth()
|
||||
self._config = hs.config
|
||||
self._resouce_limited = False
|
||||
self._message_handler = hs.get_message_handler()
|
||||
self._state = hs.get_state_handler()
|
||||
|
||||
self._notifier = hs.get_notifier()
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def maybe_send_server_notice_to_user(self, user_id):
|
||||
"""Check if we need to send a notice to this user, this will be true in
|
||||
two cases.
|
||||
1. The server has reached its limit does not reflect this
|
||||
2. The room state indicates that the server has reached its limit when
|
||||
actually the server is fine
|
||||
|
||||
Args:
|
||||
user_id (str): user to check
|
||||
|
||||
Returns:
|
||||
Deferred
|
||||
"""
|
||||
if self._config.hs_disabled is True:
|
||||
return
|
||||
|
||||
if self._config.limit_usage_by_mau is False:
|
||||
return
|
||||
|
||||
if not self._server_notices_manager.is_enabled():
|
||||
# Don't try and send server notices unles they've been enabled
|
||||
return
|
||||
|
||||
timestamp = yield self._store.user_last_seen_monthly_active(user_id)
|
||||
if timestamp is None:
|
||||
# This user will be blocked from receiving the notice anyway.
|
||||
# In practice, not sure we can ever get here
|
||||
return
|
||||
|
||||
# Determine current state of room
|
||||
|
||||
room_id = yield self._server_notices_manager.get_notice_room_for_user(user_id)
|
||||
|
||||
if not room_id:
|
||||
logger.warn("Failed to get server notices room")
|
||||
return
|
||||
|
||||
yield self._check_and_set_tags(user_id, room_id)
|
||||
currently_blocked, ref_events = yield self._is_room_currently_blocked(room_id)
|
||||
|
||||
try:
|
||||
# Normally should always pass in user_id if you have it, but in
|
||||
# this case are checking what would happen to other users if they
|
||||
# were to arrive.
|
||||
try:
|
||||
yield self._auth.check_auth_blocking()
|
||||
is_auth_blocking = False
|
||||
except ResourceLimitError as e:
|
||||
is_auth_blocking = True
|
||||
event_content = e.msg
|
||||
event_limit_type = e.limit_type
|
||||
|
||||
if currently_blocked and not is_auth_blocking:
|
||||
# Room is notifying of a block, when it ought not to be.
|
||||
# Remove block notification
|
||||
content = {
|
||||
"pinned": ref_events
|
||||
}
|
||||
yield self._server_notices_manager.send_notice(
|
||||
user_id, content, EventTypes.Pinned, '',
|
||||
)
|
||||
|
||||
elif not currently_blocked and is_auth_blocking:
|
||||
# Room is not notifying of a block, when it ought to be.
|
||||
# Add block notification
|
||||
content = {
|
||||
'body': event_content,
|
||||
'msgtype': ServerNoticeMsgType,
|
||||
'server_notice_type': ServerNoticeLimitReached,
|
||||
'admin_contact': self._config.admin_contact,
|
||||
'limit_type': event_limit_type
|
||||
}
|
||||
event = yield self._server_notices_manager.send_notice(
|
||||
user_id, content, EventTypes.Message,
|
||||
)
|
||||
|
||||
content = {
|
||||
"pinned": [
|
||||
event.event_id,
|
||||
]
|
||||
}
|
||||
yield self._server_notices_manager.send_notice(
|
||||
user_id, content, EventTypes.Pinned, '',
|
||||
)
|
||||
|
||||
except SynapseError as e:
|
||||
logger.error("Error sending resource limits server notice: %s", e)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _check_and_set_tags(self, user_id, room_id):
|
||||
"""
|
||||
Since server notices rooms were originally not with tags,
|
||||
important to check that tags have been set correctly
|
||||
Args:
|
||||
user_id(str): the user in question
|
||||
room_id(str): the server notices room for that user
|
||||
"""
|
||||
tags = yield self._store.get_tags_for_room(user_id, room_id)
|
||||
need_to_set_tag = True
|
||||
if tags:
|
||||
if SERVER_NOTICE_ROOM_TAG in tags:
|
||||
# tag already present, nothing to do here
|
||||
need_to_set_tag = False
|
||||
if need_to_set_tag:
|
||||
max_id = yield self._store.add_tag_to_room(
|
||||
user_id, room_id, SERVER_NOTICE_ROOM_TAG, {}
|
||||
)
|
||||
self._notifier.on_new_event(
|
||||
"account_data_key", max_id, users=[user_id]
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _is_room_currently_blocked(self, room_id):
|
||||
"""
|
||||
Determines if the room is currently blocked
|
||||
|
||||
Args:
|
||||
room_id(str): The room id of the server notices room
|
||||
|
||||
Returns:
|
||||
|
||||
bool: Is the room currently blocked
|
||||
list: The list of pinned events that are unrelated to limit blocking
|
||||
This list can be used as a convenience in the case where the block
|
||||
is to be lifted and the remaining pinned event references need to be
|
||||
preserved
|
||||
"""
|
||||
currently_blocked = False
|
||||
pinned_state_event = None
|
||||
try:
|
||||
pinned_state_event = yield self._state.get_current_state(
|
||||
room_id, event_type=EventTypes.Pinned
|
||||
)
|
||||
except AuthError:
|
||||
# The user has yet to join the server notices room
|
||||
pass
|
||||
|
||||
referenced_events = []
|
||||
if pinned_state_event is not None:
|
||||
referenced_events = list(pinned_state_event.content.get('pinned', []))
|
||||
|
||||
events = yield self._store.get_events(referenced_events)
|
||||
for event_id, event in iteritems(events):
|
||||
if event.type != EventTypes.Message:
|
||||
continue
|
||||
if event.content.get("msgtype") == ServerNoticeMsgType:
|
||||
currently_blocked = True
|
||||
# remove event in case we need to disable blocking later on.
|
||||
if event_id in referenced_events:
|
||||
referenced_events.remove(event.event_id)
|
||||
|
||||
defer.returnValue((currently_blocked, referenced_events))
|
||||
@@ -22,8 +22,6 @@ from synapse.util.caches.descriptors import cachedInlineCallbacks
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
SERVER_NOTICE_ROOM_TAG = "m.server_notice"
|
||||
|
||||
|
||||
class ServerNoticesManager(object):
|
||||
def __init__(self, hs):
|
||||
@@ -39,8 +37,6 @@ class ServerNoticesManager(object):
|
||||
self._event_creation_handler = hs.get_event_creation_handler()
|
||||
self._is_mine_id = hs.is_mine_id
|
||||
|
||||
self._notifier = hs.get_notifier()
|
||||
|
||||
def is_enabled(self):
|
||||
"""Checks if server notices are enabled on this server.
|
||||
|
||||
@@ -50,10 +46,7 @@ class ServerNoticesManager(object):
|
||||
return self._config.server_notices_mxid is not None
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def send_notice(
|
||||
self, user_id, event_content,
|
||||
type=EventTypes.Message, state_key=None
|
||||
):
|
||||
def send_notice(self, user_id, event_content):
|
||||
"""Send a notice to the given user
|
||||
|
||||
Creates the server notices room, if none exists.
|
||||
@@ -61,11 +54,9 @@ class ServerNoticesManager(object):
|
||||
Args:
|
||||
user_id (str): mxid of user to send event to.
|
||||
event_content (dict): content of event to send
|
||||
type(EventTypes): type of event
|
||||
is_state_event(bool): Is the event a state event
|
||||
|
||||
Returns:
|
||||
Deferred[FrozenEvent]
|
||||
Deferred[None]
|
||||
"""
|
||||
room_id = yield self.get_notice_room_for_user(user_id)
|
||||
|
||||
@@ -74,20 +65,15 @@ class ServerNoticesManager(object):
|
||||
|
||||
logger.info("Sending server notice to %s", user_id)
|
||||
|
||||
event_dict = {
|
||||
"type": type,
|
||||
"room_id": room_id,
|
||||
"sender": system_mxid,
|
||||
"content": event_content,
|
||||
}
|
||||
|
||||
if state_key is not None:
|
||||
event_dict['state_key'] = state_key
|
||||
|
||||
res = yield self._event_creation_handler.create_and_send_nonmember_event(
|
||||
requester, event_dict, ratelimit=False,
|
||||
yield self._event_creation_handler.create_and_send_nonmember_event(
|
||||
requester, {
|
||||
"type": EventTypes.Message,
|
||||
"room_id": room_id,
|
||||
"sender": system_mxid,
|
||||
"content": event_content,
|
||||
},
|
||||
ratelimit=False,
|
||||
)
|
||||
defer.returnValue(res)
|
||||
|
||||
@cachedInlineCallbacks()
|
||||
def get_notice_room_for_user(self, user_id):
|
||||
@@ -156,12 +142,5 @@ class ServerNoticesManager(object):
|
||||
)
|
||||
room_id = info['room_id']
|
||||
|
||||
max_id = yield self._store.add_tag_to_room(
|
||||
user_id, room_id, SERVER_NOTICE_ROOM_TAG, {},
|
||||
)
|
||||
self._notifier.on_new_event(
|
||||
"account_data_key", max_id, users=[user_id]
|
||||
)
|
||||
|
||||
logger.info("Created server notices room %s for %s", room_id, user_id)
|
||||
defer.returnValue(room_id)
|
||||
|
||||
@@ -12,12 +12,7 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.server_notices.consent_server_notices import ConsentServerNotices
|
||||
from synapse.server_notices.resource_limits_server_notices import (
|
||||
ResourceLimitsServerNotices,
|
||||
)
|
||||
|
||||
|
||||
class ServerNoticesSender(object):
|
||||
@@ -30,34 +25,34 @@ class ServerNoticesSender(object):
|
||||
Args:
|
||||
hs (synapse.server.HomeServer):
|
||||
"""
|
||||
self._server_notices = (
|
||||
ConsentServerNotices(hs),
|
||||
ResourceLimitsServerNotices(hs)
|
||||
)
|
||||
# todo: it would be nice to make this more dynamic
|
||||
self._consent_server_notices = ConsentServerNotices(hs)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def on_user_syncing(self, user_id):
|
||||
"""Called when the user performs a sync operation.
|
||||
|
||||
Args:
|
||||
user_id (str): mxid of user who synced
|
||||
"""
|
||||
for sn in self._server_notices:
|
||||
yield sn.maybe_send_server_notice_to_user(
|
||||
user_id,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
Returns:
|
||||
Deferred
|
||||
"""
|
||||
return self._consent_server_notices.maybe_send_server_notice_to_user(
|
||||
user_id,
|
||||
)
|
||||
|
||||
def on_user_ip(self, user_id):
|
||||
"""Called on the master when a worker process saw a client request.
|
||||
|
||||
Args:
|
||||
user_id (str): mxid
|
||||
|
||||
Returns:
|
||||
Deferred
|
||||
"""
|
||||
# The synchrotrons use a stubbed version of ServerNoticesSender, so
|
||||
# we check for notices to send to the user in on_user_ip as well as
|
||||
# in on_user_syncing
|
||||
for sn in self._server_notices:
|
||||
yield sn.maybe_send_server_notice_to_user(
|
||||
user_id,
|
||||
)
|
||||
return self._consent_server_notices.maybe_send_server_notice_to_user(
|
||||
user_id,
|
||||
)
|
||||
|
||||
@@ -1,6 +1,5 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -14,18 +13,21 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
|
||||
import hashlib
|
||||
import logging
|
||||
from collections import namedtuple
|
||||
|
||||
from six import iteritems, itervalues
|
||||
from six import iteritems, iterkeys, itervalues
|
||||
|
||||
from frozendict import frozendict
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import EventTypes, RoomVersions
|
||||
from synapse import event_auth
|
||||
from synapse.api.constants import EventTypes
|
||||
from synapse.api.errors import AuthError
|
||||
from synapse.events.snapshot import EventContext
|
||||
from synapse.state import v1
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.caches import get_cache_factor_for
|
||||
from synapse.util.caches.expiringcache import ExpiringCache
|
||||
@@ -262,7 +264,6 @@ class StateHandler(object):
|
||||
defer.returnValue(context)
|
||||
|
||||
logger.debug("calling resolve_state_groups from compute_event_context")
|
||||
|
||||
entry = yield self.resolve_state_groups_for_events(
|
||||
event.room_id, [e for e, _ in event.prev_events],
|
||||
)
|
||||
@@ -337,11 +338,8 @@ class StateHandler(object):
|
||||
event, resolves conflicts between them and returns them.
|
||||
|
||||
Args:
|
||||
room_id (str)
|
||||
event_ids (list[str])
|
||||
explicit_room_version (str|None): If set uses the the given room
|
||||
version to choose the resolution algorithm. If None, then
|
||||
checks the database for room version.
|
||||
room_id (str):
|
||||
event_ids (list[str]):
|
||||
|
||||
Returns:
|
||||
Deferred[_StateCacheEntry]: resolved state
|
||||
@@ -355,12 +353,7 @@ class StateHandler(object):
|
||||
room_id, event_ids
|
||||
)
|
||||
|
||||
if len(state_groups_ids) == 0:
|
||||
defer.returnValue(_StateCacheEntry(
|
||||
state={},
|
||||
state_group=None,
|
||||
))
|
||||
elif len(state_groups_ids) == 1:
|
||||
if len(state_groups_ids) == 1:
|
||||
name, state_list = list(state_groups_ids.items()).pop()
|
||||
|
||||
prev_group, delta_ids = yield self.store.get_state_group_delta(name)
|
||||
@@ -372,11 +365,8 @@ class StateHandler(object):
|
||||
delta_ids=delta_ids,
|
||||
))
|
||||
|
||||
room_version = yield self.store.get_room_version(room_id)
|
||||
|
||||
result = yield self._state_resolution_handler.resolve_state_groups(
|
||||
room_id, room_version, state_groups_ids, None,
|
||||
self._state_map_factory,
|
||||
room_id, state_groups_ids, None, self._state_map_factory,
|
||||
)
|
||||
defer.returnValue(result)
|
||||
|
||||
@@ -385,8 +375,7 @@ class StateHandler(object):
|
||||
ev_ids, get_prev_content=False, check_redacted=False,
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def resolve_events(self, room_version, state_sets, event):
|
||||
def resolve_events(self, state_sets, event):
|
||||
logger.info(
|
||||
"Resolving state for %s with %d groups", event.room_id, len(state_sets)
|
||||
)
|
||||
@@ -402,17 +391,13 @@ class StateHandler(object):
|
||||
}
|
||||
|
||||
with Measure(self.clock, "state._resolve_events"):
|
||||
new_state = yield resolve_events_with_factory(
|
||||
room_version, state_set_ids,
|
||||
event_map=state_map,
|
||||
state_map_factory=self._state_map_factory
|
||||
)
|
||||
new_state = resolve_events_with_state_map(state_set_ids, state_map)
|
||||
|
||||
new_state = {
|
||||
key: state_map[ev_id] for key, ev_id in iteritems(new_state)
|
||||
}
|
||||
|
||||
defer.returnValue(new_state)
|
||||
return new_state
|
||||
|
||||
|
||||
class StateResolutionHandler(object):
|
||||
@@ -445,7 +430,7 @@ class StateResolutionHandler(object):
|
||||
@defer.inlineCallbacks
|
||||
@log_function
|
||||
def resolve_state_groups(
|
||||
self, room_id, room_version, state_groups_ids, event_map, state_map_factory,
|
||||
self, room_id, state_groups_ids, event_map, state_map_factory,
|
||||
):
|
||||
"""Resolves conflicts between a set of state groups
|
||||
|
||||
@@ -454,7 +439,6 @@ class StateResolutionHandler(object):
|
||||
|
||||
Args:
|
||||
room_id (str): room we are resolving for (used for logging)
|
||||
room_version (str): version of the room
|
||||
state_groups_ids (dict[int, dict[(str, str), str]]):
|
||||
map from state group id to the state in that state group
|
||||
(where 'state' is a map from state key to event id)
|
||||
@@ -508,7 +492,6 @@ class StateResolutionHandler(object):
|
||||
logger.info("Resolving conflicted state for %r", room_id)
|
||||
with Measure(self.clock, "state._resolve_events"):
|
||||
new_state = yield resolve_events_with_factory(
|
||||
room_version,
|
||||
list(itervalues(state_groups_ids)),
|
||||
event_map=event_map,
|
||||
state_map_factory=state_map_factory,
|
||||
@@ -592,11 +575,94 @@ def _make_state_cache_entry(
|
||||
)
|
||||
|
||||
|
||||
def resolve_events_with_factory(room_version, state_sets, event_map, state_map_factory):
|
||||
def _ordered_events(events):
|
||||
def key_func(e):
|
||||
return -int(e.depth), hashlib.sha1(e.event_id.encode('ascii')).hexdigest()
|
||||
|
||||
return sorted(events, key=key_func)
|
||||
|
||||
|
||||
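`_ordered_events` above orders candidates by descending depth, breaking ties with the SHA-1 digest of the event ID. A tiny self-contained demonstration (the `Event` namedtuple is a stand-in for Synapse's event class, not its real type):

```python
import hashlib
from collections import namedtuple

Event = namedtuple("Event", ["event_id", "depth"])

def ordered_events(events):
    # Deeper events sort first; ties are broken deterministically by the
    # SHA-1 hex digest of the event ID.
    def key_func(e):
        return -int(e.depth), hashlib.sha1(e.event_id.encode("ascii")).hexdigest()
    return sorted(events, key=key_func)

evs = [Event("$a:hs", 3), Event("$b:hs", 5), Event("$c:hs", 5)]
print([e.event_id for e in ordered_events(evs)])  # the depth-5 events come first
```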
def resolve_events_with_state_map(state_sets, state_map):
|
||||
"""
|
||||
Args:
|
||||
room_version(str): Version of the room
|
||||
state_sets(list): List of dicts of (type, state_key) -> event_id,
|
||||
which are the different state groups to resolve.
|
||||
state_map(dict): a dict from event_id to event, for all events in
|
||||
state_sets.
|
||||
|
||||
Returns
|
||||
dict[(str, str), str]:
|
||||
a map from (type, state_key) to event_id.
|
||||
"""
|
||||
if len(state_sets) == 1:
|
||||
return state_sets[0]
|
||||
|
||||
unconflicted_state, conflicted_state = _seperate(
|
||||
state_sets,
|
||||
)
|
||||
|
||||
auth_events = _create_auth_events_from_maps(
|
||||
unconflicted_state, conflicted_state, state_map
|
||||
)
|
||||
|
||||
return _resolve_with_state(
|
||||
unconflicted_state, conflicted_state, auth_events, state_map
|
||||
)
|
||||
|
||||
|
||||
def _seperate(state_sets):
|
||||
"""Takes the state_sets and figures out which keys are conflicted and
|
||||
which aren't. i.e., which have multiple different event_ids associated
|
||||
with them in different state sets.
|
||||
|
||||
Args:
|
||||
state_sets(iterable[dict[(str, str), str]]):
|
||||
List of dicts of (type, state_key) -> event_id, which are the
|
||||
different state groups to resolve.
|
||||
|
||||
Returns:
|
||||
(dict[(str, str), str], dict[(str, str), set[str]]):
|
||||
A tuple of (unconflicted_state, conflicted_state), where:
|
||||
|
||||
unconflicted_state is a dict mapping (type, state_key)->event_id
|
||||
for unconflicted state keys.
|
||||
|
||||
conflicted_state is a dict mapping (type, state_key) to a set of
|
||||
event ids for conflicted state keys.
|
||||
"""
|
||||
state_set_iterator = iter(state_sets)
|
||||
unconflicted_state = dict(next(state_set_iterator))
|
||||
conflicted_state = {}
|
||||
|
||||
for state_set in state_set_iterator:
|
||||
for key, value in iteritems(state_set):
|
||||
# Check if there is an unconflicted entry for the state key.
|
||||
unconflicted_value = unconflicted_state.get(key)
|
||||
if unconflicted_value is None:
|
||||
# There isn't an unconflicted entry so check if there is a
|
||||
# conflicted entry.
|
||||
ls = conflicted_state.get(key)
|
||||
if ls is None:
|
||||
# There wasn't a conflicted entry so haven't seen this key before.
|
||||
# Therefore it isn't conflicted yet.
|
||||
unconflicted_state[key] = value
|
||||
else:
|
||||
# This key is already conflicted, add our value to the conflict set.
|
||||
ls.add(value)
|
||||
elif unconflicted_value != value:
|
||||
# If the unconflicted value is not the same as our value then we
|
||||
# have a new conflict. So move the key from the unconflicted_state
|
||||
# to the conflicted state.
|
||||
conflicted_state[key] = {value, unconflicted_value}
|
||||
unconflicted_state.pop(key, None)
|
||||
|
||||
return unconflicted_state, conflicted_state
|
||||
|
||||
|
||||
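A short, self-contained re-implementation of the separation logic documented above, just to show its input and output shape (the real function is the `_seperate` in the diff; `separate` below is illustrative only):

```python
def separate(state_sets):
    """Split state sets into unconflicted entries and conflicted ones,
    mirroring the behaviour described in the docstring above."""
    state_set_iterator = iter(state_sets)
    unconflicted = dict(next(state_set_iterator))
    conflicted = {}

    for state_set in state_set_iterator:
        for key, value in state_set.items():
            existing = unconflicted.get(key)
            if existing is None:
                if key in conflicted:
                    # Already known to be conflicted: grow the conflict set.
                    conflicted[key].add(value)
                else:
                    # First time we see this key: tentatively unconflicted.
                    unconflicted[key] = value
            elif existing != value:
                # Disagreement: move the key into the conflicted map.
                conflicted[key] = {value, existing}
                del unconflicted[key]
    return unconflicted, conflicted

state_sets = [
    {("m.room.name", ""): "$n", ("m.room.topic", ""): "$a"},
    {("m.room.name", ""): "$n", ("m.room.topic", ""): "$b"},
]
unconflicted, conflicted = separate(state_sets)
print(unconflicted)  # the room name is agreed on by both sets
print(conflicted)    # the topic is contested between $a and $b
```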
@defer.inlineCallbacks
|
||||
def resolve_events_with_factory(state_sets, event_map, state_map_factory):
|
||||
"""
|
||||
Args:
|
||||
state_sets(list): List of dicts of (type, state_key) -> event_id,
|
||||
which are the different state groups to resolve.
|
||||
|
||||
@@ -616,13 +682,185 @@ def resolve_events_with_factory(room_version, state_sets, event_map, state_map_f
|
||||
Deferred[dict[(str, str), str]]:
|
||||
a map from (type, state_key) to event_id.
|
||||
"""
|
||||
if room_version in (RoomVersions.V1, RoomVersions.VDH_TEST,):
|
||||
return v1.resolve_events_with_factory(
|
||||
state_sets, event_map, state_map_factory,
|
||||
)
|
||||
else:
|
||||
# This should only happen if we added a version but forgot to add it to
|
||||
# the list above.
|
||||
raise Exception(
|
||||
"No state resolution algorithm defined for version %r" % (room_version,)
|
||||
if len(state_sets) == 1:
|
||||
defer.returnValue(state_sets[0])
|
||||
|
||||
unconflicted_state, conflicted_state = _seperate(
|
||||
state_sets,
|
||||
)
|
||||
|
||||
needed_events = set(
|
||||
event_id
|
||||
for event_ids in itervalues(conflicted_state)
|
||||
for event_id in event_ids
|
||||
)
|
||||
if event_map is not None:
|
||||
needed_events -= set(iterkeys(event_map))
|
||||
|
||||
logger.info("Asking for %d conflicted events", len(needed_events))
|
||||
|
||||
# dict[str, FrozenEvent]: a map from state event id to event. Only includes
|
||||
# the state events which are in conflict (and those in event_map)
|
||||
state_map = yield state_map_factory(needed_events)
|
||||
if event_map is not None:
|
||||
state_map.update(event_map)
|
||||
|
||||
# get the ids of the auth events which allow us to authenticate the
|
||||
# conflicted state, picking only from the unconflicting state.
|
||||
#
|
||||
# dict[(str, str), str]: a map from state key to event id
|
||||
auth_events = _create_auth_events_from_maps(
|
||||
unconflicted_state, conflicted_state, state_map
|
||||
)
|
||||
|
||||
new_needed_events = set(itervalues(auth_events))
|
||||
new_needed_events -= needed_events
|
||||
if event_map is not None:
|
||||
new_needed_events -= set(iterkeys(event_map))
|
||||
|
||||
logger.info("Asking for %d auth events", len(new_needed_events))
|
||||
|
||||
state_map_new = yield state_map_factory(new_needed_events)
|
||||
state_map.update(state_map_new)
|
||||
|
||||
defer.returnValue(_resolve_with_state(
|
||||
unconflicted_state, conflicted_state, auth_events, state_map
|
||||
))
|
||||
|
||||
|
||||
def _create_auth_events_from_maps(unconflicted_state, conflicted_state, state_map):
|
||||
auth_events = {}
|
||||
for event_ids in itervalues(conflicted_state):
|
||||
for event_id in event_ids:
|
||||
if event_id in state_map:
|
||||
keys = event_auth.auth_types_for_event(state_map[event_id])
|
||||
for key in keys:
|
||||
if key not in auth_events:
|
||||
event_id = unconflicted_state.get(key, None)
|
||||
if event_id:
|
||||
auth_events[key] = event_id
|
||||
return auth_events
|
||||
|
||||
|
||||
def _resolve_with_state(unconflicted_state_ids, conflicted_state_ids, auth_event_ids,
|
||||
state_map):
|
||||
conflicted_state = {}
|
||||
for key, event_ids in iteritems(conflicted_state_ids):
|
||||
events = [state_map[ev_id] for ev_id in event_ids if ev_id in state_map]
|
||||
if len(events) > 1:
|
||||
conflicted_state[key] = events
|
||||
elif len(events) == 1:
|
||||
unconflicted_state_ids[key] = events[0].event_id
|
||||
|
||||
auth_events = {
|
||||
key: state_map[ev_id]
|
||||
for key, ev_id in iteritems(auth_event_ids)
|
||||
if ev_id in state_map
|
||||
}
|
||||
|
||||
try:
|
||||
resolved_state = _resolve_state_events(
|
||||
conflicted_state, auth_events
|
||||
)
|
||||
except Exception:
|
||||
logger.exception("Failed to resolve state")
|
||||
raise
|
||||
|
||||
new_state = unconflicted_state_ids
|
||||
for key, event in iteritems(resolved_state):
|
||||
new_state[key] = event.event_id
|
||||
|
||||
return new_state
|
||||
|
||||
|
||||
def _resolve_state_events(conflicted_state, auth_events):
|
||||
""" This is where we actually decide which of the conflicted state to
|
||||
use.
|
||||
|
||||
We resolve conflicts in the following order:
|
||||
1. power levels
|
||||
2. join rules
|
||||
3. memberships
|
||||
4. other events.
|
||||
"""
|
||||
resolved_state = {}
|
||||
if POWER_KEY in conflicted_state:
|
||||
events = conflicted_state[POWER_KEY]
|
||||
logger.debug("Resolving conflicted power levels %r", events)
|
||||
resolved_state[POWER_KEY] = _resolve_auth_events(
|
||||
events, auth_events)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key[0] == EventTypes.JoinRules:
|
||||
logger.debug("Resolving conflicted join rules %r", events)
|
||||
resolved_state[key] = _resolve_auth_events(
|
||||
events,
|
||||
auth_events
|
||||
)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key[0] == EventTypes.Member:
|
||||
logger.debug("Resolving conflicted member lists %r", events)
|
||||
resolved_state[key] = _resolve_auth_events(
|
||||
events,
|
||||
auth_events
|
||||
)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key not in resolved_state:
|
||||
logger.debug("Resolving conflicted state %r:%r", key, events)
|
||||
resolved_state[key] = _resolve_normal_events(
|
||||
events, auth_events
|
||||
)
|
||||
|
||||
return resolved_state
|
||||
|
||||
|
||||
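The docstring above pins down the order in which conflicted keys are resolved: power levels first, then join rules, then memberships, then everything else. A small standalone sketch of that ordering, using literal event-type strings rather than Synapse's `EventTypes` constants:

```python
POWER_KEY = ("m.room.power_levels", "")

def resolution_order(conflicted_keys):
    """Sort conflicted (type, state_key) pairs into the resolution order
    described above: power levels, join rules, memberships, then the rest."""
    def bucket(key):
        if key == POWER_KEY:
            return 0
        if key[0] == "m.room.join_rules":
            return 1
        if key[0] == "m.room.member":
            return 2
        return 3
    return sorted(conflicted_keys, key=bucket)

keys = [
    ("m.room.topic", ""),
    ("m.room.member", "@alice:example.org"),
    ("m.room.power_levels", ""),
    ("m.room.join_rules", ""),
]
print(resolution_order(keys))
```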
def _resolve_auth_events(events, auth_events):
|
||||
reverse = [i for i in reversed(_ordered_events(events))]
|
||||
|
||||
auth_keys = set(
|
||||
key
|
||||
for event in events
|
||||
for key in event_auth.auth_types_for_event(event)
|
||||
)
|
||||
|
||||
new_auth_events = {}
|
||||
for key in auth_keys:
|
||||
auth_event = auth_events.get(key, None)
|
||||
if auth_event:
|
||||
new_auth_events[key] = auth_event
|
||||
|
||||
auth_events = new_auth_events
|
||||
|
||||
prev_event = reverse[0]
|
||||
for event in reverse[1:]:
|
||||
auth_events[(prev_event.type, prev_event.state_key)] = prev_event
|
||||
try:
|
||||
# The signatures have already been checked at this point
|
||||
event_auth.check(event, auth_events, do_sig_check=False, do_size_check=False)
|
||||
prev_event = event
|
||||
except AuthError:
|
||||
return prev_event
|
||||
|
||||
return event
|
||||
|
||||
|
||||
def _resolve_normal_events(events, auth_events):
|
||||
for event in _ordered_events(events):
|
||||
try:
|
||||
# The signatures have already been checked at this point
|
||||
event_auth.check(event, auth_events, do_sig_check=False, do_size_check=False)
|
||||
return event
|
||||
except AuthError:
|
||||
pass
|
||||
|
||||
# Use the last event (the one with the least depth) if they all fail
|
||||
# the auth check.
|
||||
return event
|
||||
@@ -1,293 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import hashlib
|
||||
import logging
|
||||
|
||||
from six import iteritems, iterkeys, itervalues
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse import event_auth
|
||||
from synapse.api.constants import EventTypes
|
||||
from synapse.api.errors import AuthError
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
POWER_KEY = (EventTypes.PowerLevels, "")
|
||||
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def resolve_events_with_factory(state_sets, event_map, state_map_factory):
|
||||
"""
|
||||
Args:
|
||||
state_sets(list): List of dicts of (type, state_key) -> event_id,
|
||||
which are the different state groups to resolve.
|
||||
|
||||
event_map(dict[str,FrozenEvent]|None):
|
||||
a dict from event_id to event, for any events that we happen to
|
||||
have in flight (eg, those currently being persisted). This will be
|
||||
used as a starting point fof finding the state we need; any missing
|
||||
events will be requested via state_map_factory.
|
||||
|
||||
If None, all events will be fetched via state_map_factory.
|
||||
|
||||
state_map_factory(func): will be called
|
||||
with a list of event_ids that are needed, and should return with
|
||||
a Deferred of dict of event_id to event.
|
||||
|
||||
Returns
|
||||
Deferred[dict[(str, str), str]]:
|
||||
a map from (type, state_key) to event_id.
|
||||
"""
|
||||
if len(state_sets) == 1:
|
||||
defer.returnValue(state_sets[0])
|
||||
|
||||
unconflicted_state, conflicted_state = _seperate(
|
||||
state_sets,
|
||||
)
|
||||
|
||||
needed_events = set(
|
||||
event_id
|
||||
for event_ids in itervalues(conflicted_state)
|
||||
for event_id in event_ids
|
||||
)
|
||||
if event_map is not None:
|
||||
needed_events -= set(iterkeys(event_map))
|
||||
|
||||
logger.info("Asking for %d conflicted events", len(needed_events))
|
||||
|
||||
# dict[str, FrozenEvent]: a map from state event id to event. Only includes
|
||||
# the state events which are in conflict (and those in event_map)
|
||||
state_map = yield state_map_factory(needed_events)
|
||||
if event_map is not None:
|
||||
state_map.update(event_map)
|
||||
|
||||
# get the ids of the auth events which allow us to authenticate the
|
||||
# conflicted state, picking only from the unconflicting state.
|
||||
#
|
||||
# dict[(str, str), str]: a map from state key to event id
|
||||
auth_events = _create_auth_events_from_maps(
|
||||
unconflicted_state, conflicted_state, state_map
|
||||
)
|
||||
|
||||
new_needed_events = set(itervalues(auth_events))
|
||||
new_needed_events -= needed_events
|
||||
if event_map is not None:
|
||||
new_needed_events -= set(iterkeys(event_map))
|
||||
|
||||
logger.info("Asking for %d auth events", len(new_needed_events))
|
||||
|
||||
state_map_new = yield state_map_factory(new_needed_events)
|
||||
state_map.update(state_map_new)
|
||||
|
||||
defer.returnValue(_resolve_with_state(
|
||||
unconflicted_state, conflicted_state, auth_events, state_map
|
||||
))
|
||||
|
||||
|
||||
def _seperate(state_sets):
|
||||
"""Takes the state_sets and figures out which keys are conflicted and
|
||||
which aren't. i.e., which have multiple different event_ids associated
|
||||
with them in different state sets.
|
||||
|
||||
Args:
|
||||
state_sets(iterable[dict[(str, str), str]]):
|
||||
List of dicts of (type, state_key) -> event_id, which are the
|
||||
different state groups to resolve.
|
||||
|
||||
Returns:
|
||||
(dict[(str, str), str], dict[(str, str), set[str]]):
|
||||
A tuple of (unconflicted_state, conflicted_state), where:
|
||||
|
||||
unconflicted_state is a dict mapping (type, state_key)->event_id
|
||||
for unconflicted state keys.
|
||||
|
||||
conflicted_state is a dict mapping (type, state_key) to a set of
|
||||
event ids for conflicted state keys.
|
||||
"""
|
||||
state_set_iterator = iter(state_sets)
|
||||
unconflicted_state = dict(next(state_set_iterator))
|
||||
conflicted_state = {}
|
||||
|
||||
for state_set in state_set_iterator:
|
||||
for key, value in iteritems(state_set):
|
||||
# Check if there is an unconflicted entry for the state key.
|
||||
unconflicted_value = unconflicted_state.get(key)
|
||||
if unconflicted_value is None:
|
||||
# There isn't an unconflicted entry so check if there is a
|
||||
# conflicted entry.
|
||||
ls = conflicted_state.get(key)
|
||||
if ls is None:
|
||||
# There wasn't a conflicted entry so haven't seen this key before.
|
||||
# Therefore it isn't conflicted yet.
|
||||
unconflicted_state[key] = value
|
||||
else:
|
||||
# This key is already conflicted, add our value to the conflict set.
|
||||
ls.add(value)
|
||||
elif unconflicted_value != value:
|
||||
# If the unconflicted value is not the same as our value then we
|
||||
# have a new conflict. So move the key from the unconflicted_state
|
||||
# to the conflicted state.
|
||||
conflicted_state[key] = {value, unconflicted_value}
|
||||
unconflicted_state.pop(key, None)
|
||||
|
||||
return unconflicted_state, conflicted_state
|
||||
|
||||
|
||||
def _create_auth_events_from_maps(unconflicted_state, conflicted_state, state_map):
|
||||
auth_events = {}
|
||||
for event_ids in itervalues(conflicted_state):
|
||||
for event_id in event_ids:
|
||||
if event_id in state_map:
|
||||
keys = event_auth.auth_types_for_event(state_map[event_id])
|
||||
for key in keys:
|
||||
if key not in auth_events:
|
||||
event_id = unconflicted_state.get(key, None)
|
||||
if event_id:
|
||||
auth_events[key] = event_id
|
||||
return auth_events
|
||||
|
||||
|
||||
def _resolve_with_state(unconflicted_state_ids, conflicted_state_ids, auth_event_ids,
|
||||
state_map):
|
||||
conflicted_state = {}
|
||||
for key, event_ids in iteritems(conflicted_state_ids):
|
||||
events = [state_map[ev_id] for ev_id in event_ids if ev_id in state_map]
|
||||
if len(events) > 1:
|
||||
conflicted_state[key] = events
|
||||
elif len(events) == 1:
|
||||
unconflicted_state_ids[key] = events[0].event_id
|
||||
|
||||
auth_events = {
|
||||
key: state_map[ev_id]
|
||||
for key, ev_id in iteritems(auth_event_ids)
|
||||
if ev_id in state_map
|
||||
}
|
||||
|
||||
try:
|
||||
resolved_state = _resolve_state_events(
|
||||
conflicted_state, auth_events
|
||||
)
|
||||
except Exception:
|
||||
logger.exception("Failed to resolve state")
|
||||
raise
|
||||
|
||||
new_state = unconflicted_state_ids
|
||||
for key, event in iteritems(resolved_state):
|
||||
new_state[key] = event.event_id
|
||||
|
||||
return new_state
|
||||
|
||||
|
||||
def _resolve_state_events(conflicted_state, auth_events):
|
||||
""" This is where we actually decide which of the conflicted state to
|
||||
use.
|
||||
|
||||
We resolve conflicts in the following order:
|
||||
1. power levels
|
||||
2. join rules
|
||||
3. memberships
|
||||
4. other events.
|
||||
"""
|
||||
resolved_state = {}
|
||||
if POWER_KEY in conflicted_state:
|
||||
events = conflicted_state[POWER_KEY]
|
||||
logger.debug("Resolving conflicted power levels %r", events)
|
||||
resolved_state[POWER_KEY] = _resolve_auth_events(
|
||||
events, auth_events)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key[0] == EventTypes.JoinRules:
|
||||
logger.debug("Resolving conflicted join rules %r", events)
|
||||
resolved_state[key] = _resolve_auth_events(
|
||||
events,
|
||||
auth_events
|
||||
)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key[0] == EventTypes.Member:
|
||||
logger.debug("Resolving conflicted member lists %r", events)
|
||||
resolved_state[key] = _resolve_auth_events(
|
||||
events,
|
||||
auth_events
|
||||
)
|
||||
|
||||
auth_events.update(resolved_state)
|
||||
|
||||
for key, events in iteritems(conflicted_state):
|
||||
if key not in resolved_state:
|
||||
logger.debug("Resolving conflicted state %r:%r", key, events)
|
||||
resolved_state[key] = _resolve_normal_events(
|
||||
events, auth_events
|
||||
)
|
||||
|
||||
return resolved_state
|
||||
|
||||
|
||||
def _resolve_auth_events(events, auth_events):
|
||||
reverse = [i for i in reversed(_ordered_events(events))]
|
||||
|
||||
auth_keys = set(
|
||||
key
|
||||
for event in events
|
||||
for key in event_auth.auth_types_for_event(event)
|
||||
)
|
||||
|
||||
new_auth_events = {}
|
||||
for key in auth_keys:
|
||||
auth_event = auth_events.get(key, None)
|
||||
if auth_event:
|
||||
new_auth_events[key] = auth_event
|
||||
|
||||
auth_events = new_auth_events
|
||||
|
||||
prev_event = reverse[0]
|
||||
for event in reverse[1:]:
|
||||
auth_events[(prev_event.type, prev_event.state_key)] = prev_event
|
||||
try:
|
||||
# The signatures have already been checked at this point
|
||||
event_auth.check(event, auth_events, do_sig_check=False, do_size_check=False)
|
||||
prev_event = event
|
||||
except AuthError:
|
||||
return prev_event
|
||||
|
||||
return event
|
||||
|
||||
|
||||
def _resolve_normal_events(events, auth_events):
|
||||
for event in _ordered_events(events):
|
||||
try:
|
||||
# The signatures have already been checked at this point
|
||||
event_auth.check(event, auth_events, do_sig_check=False, do_size_check=False)
|
||||
return event
|
||||
except AuthError:
|
||||
pass
|
||||
|
||||
# Use the last event (the one with the least depth) if they all fail
|
||||
# the auth check.
|
||||
return event
|
||||
|
||||
|
||||
def _ordered_events(events):
|
||||
def key_func(e):
|
||||
return -int(e.depth), hashlib.sha1(e.event_id.encode('ascii')).hexdigest()
|
||||
|
||||
return sorted(events, key=key_func)
|
||||
@@ -17,10 +17,9 @@ import sys
import threading
import time

from six import PY2, iteritems, iterkeys, itervalues
from six import iteritems, iterkeys, itervalues
from six.moves import intern, range

from canonicaljson import json
from prometheus_client import Histogram

from twisted.internet import defer
@@ -1217,32 +1216,3 @@ class _RollbackButIsFineException(Exception):
something went wrong.
"""
pass


def db_to_json(db_content):
"""
Take some data from a database row and return a JSON-decoded object.

Args:
db_content (memoryview|buffer|bytes|bytearray|unicode)
"""
# psycopg2 on Python 3 returns memoryview objects, which we need to
# cast to bytes to decode
if isinstance(db_content, memoryview):
db_content = db_content.tobytes()

# psycopg2 on Python 2 returns buffer objects, which we need to cast to
# bytes to decode
if PY2 and isinstance(db_content, buffer):
db_content = bytes(db_content)

# Decode it to a Unicode string before feeding it to json.loads, so we
# consistenty get a Unicode-containing object out.
if isinstance(db_content, (bytes, bytearray)):
db_content = db_content.decode('utf8')

try:
return json.loads(db_content)
except Exception:
logging.warning("Tried to decode '%r' as JSON and failed", db_content)
raise
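A minimal usage sketch of the decoding behaviour implemented above (a Python 3-only re-implementation for illustration; the real helper also handles Python 2 `buffer` objects):

```python
import json

def db_to_json_sketch(db_content):
    # psycopg2 on Python 3 hands back memoryview objects; get the raw bytes.
    if isinstance(db_content, memoryview):
        db_content = db_content.tobytes()
    # Decode to text so json.loads always sees a Unicode string.
    if isinstance(db_content, (bytes, bytearray)):
        db_content = db_content.decode("utf8")
    return json.loads(db_content)

print(db_to_json_sketch(memoryview(b'{"device_id": "ABCDEF"}')))
print(db_to_json_sketch('{"ok": true}'))
```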
@@ -169,7 +169,7 @@ class DeviceInboxStore(BackgroundUpdateStore):
|
||||
local_by_user_then_device = {}
|
||||
for user_id, messages_by_device in messages_by_user_then_device.items():
|
||||
messages_json_for_user = {}
|
||||
devices = list(messages_by_device.keys())
|
||||
devices = messages_by_device.keys()
|
||||
if len(devices) == 1 and devices[0] == "*":
|
||||
# Handle wildcard device_ids.
|
||||
sql = (
|
||||
|
||||
@@ -24,7 +24,7 @@ from synapse.api.errors import StoreError
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.util.caches.descriptors import cached, cachedInlineCallbacks, cachedList
|
||||
|
||||
from ._base import Cache, SQLBaseStore, db_to_json
|
||||
from ._base import Cache, SQLBaseStore
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -411,7 +411,7 @@ class DeviceStore(SQLBaseStore):
|
||||
if device is not None:
|
||||
key_json = device.get("key_json", None)
|
||||
if key_json:
|
||||
result["keys"] = db_to_json(key_json)
|
||||
result["keys"] = json.loads(key_json)
|
||||
device_display_name = device.get("device_display_name", None)
|
||||
if device_display_name:
|
||||
result["device_display_name"] = device_display_name
|
||||
@@ -466,7 +466,7 @@ class DeviceStore(SQLBaseStore):
|
||||
retcol="content",
|
||||
desc="_get_cached_user_device",
|
||||
)
|
||||
defer.returnValue(db_to_json(content))
|
||||
defer.returnValue(json.loads(content))
|
||||
|
||||
@cachedInlineCallbacks()
|
||||
def _get_cached_devices_for_user(self, user_id):
|
||||
@@ -479,7 +479,7 @@ class DeviceStore(SQLBaseStore):
|
||||
desc="_get_cached_devices_for_user",
|
||||
)
|
||||
defer.returnValue({
|
||||
device["device_id"]: db_to_json(device["content"])
|
||||
device["device_id"]: json.loads(device["content"])
|
||||
for device in devices
|
||||
})
|
||||
|
||||
@@ -511,7 +511,7 @@ class DeviceStore(SQLBaseStore):
|
||||
|
||||
key_json = device.get("key_json", None)
|
||||
if key_json:
|
||||
result["keys"] = db_to_json(key_json)
|
||||
result["keys"] = json.loads(key_json)
|
||||
device_display_name = device.get("device_display_name", None)
|
||||
if device_display_name:
|
||||
result["device_display_name"] = device_display_name
|
||||
|
||||
@@ -14,13 +14,13 @@
|
||||
# limitations under the License.
|
||||
from six import iteritems
|
||||
|
||||
from canonicaljson import encode_canonical_json
|
||||
from canonicaljson import encode_canonical_json, json
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.util.caches.descriptors import cached
|
||||
|
||||
from ._base import SQLBaseStore, db_to_json
|
||||
from ._base import SQLBaseStore
|
||||
|
||||
|
||||
class EndToEndKeyStore(SQLBaseStore):
|
||||
@@ -90,7 +90,7 @@ class EndToEndKeyStore(SQLBaseStore):
|
||||
|
||||
for user_id, device_keys in iteritems(results):
|
||||
for device_id, device_info in iteritems(device_keys):
|
||||
device_info["keys"] = db_to_json(device_info.pop("key_json"))
|
||||
device_info["keys"] = json.loads(device_info.pop("key_json"))
|
||||
|
||||
defer.returnValue(results)
|
||||
|
||||
|
||||
@@ -41,18 +41,13 @@ class PostgresEngine(object):
db_conn.set_isolation_level(
self.module.extensions.ISOLATION_LEVEL_REPEATABLE_READ
)

# Set the bytea output to escape, vs the default of hex
cursor = db_conn.cursor()
cursor.execute("SET bytea_output TO escape")

# Asynchronous commit, don't wait for the server to call fsync before
# ending the transaction.
# https://www.postgresql.org/docs/current/static/wal-async-commit.html
if not self.synchronous_commit:
cursor = db_conn.cursor()
cursor.execute("SET synchronous_commit TO OFF")

cursor.close()
cursor.close()

def is_deadlock(self, error):
if isinstance(error, self.module.DatabaseError):
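The hunk above issues per-connection session settings. A hedged sketch of the same calls wrapped as a helper (assuming `db_conn` is a DB-API connection such as one from psycopg2; the helper name and flag are illustrative, not Synapse API):

```python
def prepare_postgres_connection(db_conn, synchronous_commit=True):
    """Apply the session settings shown in the hunk above."""
    cursor = db_conn.cursor()
    # Return bytea columns escaped rather than hex-encoded.
    cursor.execute("SET bytea_output TO escape")
    if not synchronous_commit:
        # Don't wait for fsync before acknowledging a commit; trades a
        # little durability for lower commit latency.
        # https://www.postgresql.org/docs/current/static/wal-async-commit.html
        cursor.execute("SET synchronous_commit TO OFF")
    cursor.close()
```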
@@ -19,7 +19,7 @@ import logging
|
||||
from collections import OrderedDict, deque, namedtuple
|
||||
from functools import wraps
|
||||
|
||||
from six import iteritems, text_type
|
||||
from six import iteritems
|
||||
from six.moves import range
|
||||
|
||||
from canonicaljson import json
|
||||
@@ -705,11 +705,9 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
|
||||
}
|
||||
|
||||
events_map = {ev.event_id: ev for ev, _ in events_context}
|
||||
room_version = yield self.get_room_version(room_id)
|
||||
|
||||
logger.debug("calling resolve_state_groups from preserve_events")
|
||||
res = yield self._state_resolution_handler.resolve_state_groups(
|
||||
room_id, room_version, state_groups, events_map, get_events
|
||||
room_id, state_groups, events_map, get_events
|
||||
)
|
||||
|
||||
defer.returnValue((res.state, None))
|
||||
@@ -1220,7 +1218,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
|
||||
"sender": event.sender,
|
||||
"contains_url": (
|
||||
"url" in event.content
|
||||
and isinstance(event.content["url"], text_type)
|
||||
and isinstance(event.content["url"], basestring)
|
||||
),
|
||||
}
|
||||
for event, _ in events_and_contexts
|
||||
@@ -1529,7 +1527,7 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
|
||||
|
||||
contains_url = "url" in content
|
||||
if contains_url:
|
||||
contains_url &= isinstance(content["url"], text_type)
|
||||
contains_url &= isinstance(content["url"], basestring)
|
||||
except (KeyError, AttributeError):
|
||||
# If the event is missing a necessary field then
|
||||
# skip over it.
|
||||
@@ -1910,9 +1908,9 @@ class EventsStore(EventFederationStore, EventsWorkerStore, BackgroundUpdateStore
|
||||
(room_id,)
|
||||
)
|
||||
rows = txn.fetchall()
|
||||
max_depth = max(row[1] for row in rows)
|
||||
max_depth = max(row[0] for row in rows)
|
||||
|
||||
if max_depth < token.topological:
|
||||
if max_depth <= token.topological:
|
||||
# We need to ensure we don't delete all the events from the database
|
||||
# otherwise we wouldn't be able to send any events (due to not
|
||||
# having any backwards extremeties)
|
||||
|
||||
@@ -12,7 +12,6 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import itertools
|
||||
import logging
|
||||
from collections import namedtuple
|
||||
@@ -266,7 +265,7 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
"""
|
||||
with Measure(self._clock, "_fetch_event_list"):
|
||||
try:
|
||||
event_id_lists = list(zip(*event_list))[0]
|
||||
event_id_lists = zip(*event_list)[0]
|
||||
event_ids = [
|
||||
item for sublist in event_id_lists for item in sublist
|
||||
]
|
||||
@@ -300,14 +299,14 @@ class EventsWorkerStore(SQLBaseStore):
|
||||
logger.exception("do_fetch")
|
||||
|
||||
# We only want to resolve deferreds from the main thread
|
||||
def fire(evs, exc):
|
||||
def fire(evs):
|
||||
for _, d in evs:
|
||||
if not d.called:
|
||||
with PreserveLoggingContext():
|
||||
d.errback(exc)
|
||||
d.errback(e)
|
||||
|
||||
with PreserveLoggingContext():
|
||||
self.hs.get_reactor().callFromThread(fire, event_list, e)
|
||||
self.hs.get_reactor().callFromThread(fire, event_list)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def _enqueue_events(self, events, check_redacted=True, allow_rejected=False):
|
||||
|
||||
@@ -13,14 +13,14 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from canonicaljson import encode_canonical_json
|
||||
from canonicaljson import encode_canonical_json, json
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.util.caches.descriptors import cachedInlineCallbacks
|
||||
|
||||
from ._base import SQLBaseStore, db_to_json
|
||||
from ._base import SQLBaseStore
|
||||
|
||||
|
||||
class FilteringStore(SQLBaseStore):
|
||||
@@ -44,7 +44,7 @@ class FilteringStore(SQLBaseStore):
|
||||
desc="get_user_filter",
|
||||
)
|
||||
|
||||
defer.returnValue(db_to_json(def_json))
|
||||
defer.returnValue(json.loads(bytes(def_json).decode("utf-8")))
|
||||
|
||||
def add_user_filter(self, user_localpart, user_filter):
|
||||
def_json = encode_canonical_json(user_filter)
|
||||
|
||||
@@ -36,6 +36,7 @@ class MonthlyActiveUsersStore(SQLBaseStore):
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def initialise_reserved_users(self, threepids):
|
||||
# TODO Why can't I do this in init?
|
||||
store = self.hs.get_datastore()
|
||||
reserved_user_list = []
|
||||
|
||||
@@ -146,7 +147,6 @@ class MonthlyActiveUsersStore(SQLBaseStore):
|
||||
return count
|
||||
return self.runInteraction("count_users", _count_users)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def upsert_monthly_active_user(self, user_id):
|
||||
"""
|
||||
Updates or inserts monthly active user member
|
||||
@@ -155,7 +155,7 @@ class MonthlyActiveUsersStore(SQLBaseStore):
|
||||
Deferred[bool]: True if a new entry was created, False if an
|
||||
existing one was updated.
|
||||
"""
|
||||
is_insert = yield self._simple_upsert(
|
||||
is_insert = self._simple_upsert(
|
||||
desc="upsert_monthly_active_user",
|
||||
table="monthly_active_users",
|
||||
keyvalues={
|
||||
@@ -200,11 +200,6 @@ class MonthlyActiveUsersStore(SQLBaseStore):
|
||||
user_id(str): the user_id to query
|
||||
"""
|
||||
if self.hs.config.limit_usage_by_mau:
|
||||
is_trial = yield self.is_trial_user(user_id)
|
||||
if is_trial:
|
||||
# we don't track trial users in the MAU table.
|
||||
return
|
||||
|
||||
last_seen_timestamp = yield self.user_last_seen_monthly_active(user_id)
|
||||
now = self.hs.get_clock().time_msec()
|
||||
|
||||
|
||||
@@ -1,5 +1,6 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -20,6 +21,8 @@ from synapse.storage.roommember import ProfileInfo
|
||||
|
||||
from ._base import SQLBaseStore
|
||||
|
||||
BATCH_SIZE = 100
|
||||
|
||||
|
||||
class ProfileWorkerStore(SQLBaseStore):
|
||||
@defer.inlineCallbacks
|
||||
@@ -62,6 +65,55 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
desc="get_profile_avatar_url",
|
||||
)
|
||||
|
||||
def get_latest_profile_replication_batch_number(self):
|
||||
def f(txn):
|
||||
txn.execute("SELECT MAX(batch) as maxbatch FROM profiles")
|
||||
rows = self.cursor_to_dict(txn)
|
||||
return rows[0]['maxbatch']
|
||||
return self.runInteraction(
|
||||
"get_latest_profile_replication_batch_number", f,
|
||||
)
|
||||
|
||||
def get_profile_batch(self, batchnum):
|
||||
return self._simple_select_list(
|
||||
table="profiles",
|
||||
keyvalues={
|
||||
"batch": batchnum,
|
||||
},
|
||||
retcols=("user_id", "displayname", "avatar_url", "active"),
|
||||
desc="get_profile_batch",
|
||||
)
|
||||
|
||||
def assign_profile_batch(self):
|
||||
def f(txn):
|
||||
sql = (
|
||||
"UPDATE profiles SET batch = "
|
||||
"(SELECT COALESCE(MAX(batch), -1) + 1 FROM profiles) "
|
||||
"WHERE user_id in ("
|
||||
" SELECT user_id FROM profiles WHERE batch is NULL limit ?"
|
||||
")"
|
||||
)
|
||||
txn.execute(sql, (BATCH_SIZE,))
|
||||
return txn.rowcount
|
||||
return self.runInteraction("assign_profile_batch", f)
|
||||
|
||||
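`assign_profile_batch` above stamps not-yet-replicated profile rows with the next batch number in a single UPDATE with a subquery. A self-contained SQLite sketch of the same idea (table and rows invented for illustration; the real statement additionally caps each batch at `BATCH_SIZE` rows with a `LIMIT`, and computes the batch number inside the UPDATE itself):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (user_id TEXT, displayname TEXT, batch INTEGER)")
conn.executemany(
    "INSERT INTO profiles (user_id, displayname) VALUES (?, ?)",
    [("alice", "Alice"), ("bob", "Bob")],
)

# Next batch number is MAX(batch) + 1, or 0 if nothing has been batched yet.
(next_batch,) = conn.execute(
    "SELECT COALESCE(MAX(batch), -1) + 1 FROM profiles"
).fetchone()
conn.execute("UPDATE profiles SET batch = ? WHERE batch IS NULL", (next_batch,))

print(conn.execute("SELECT user_id, batch FROM profiles").fetchall())
# -> [('alice', 0), ('bob', 0)]
```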
def get_replication_hosts(self):
|
||||
def f(txn):
|
||||
txn.execute("SELECT host, last_synced_batch FROM profile_replication_status")
|
||||
rows = self.cursor_to_dict(txn)
|
||||
return {r['host']: r['last_synced_batch'] for r in rows}
|
||||
return self.runInteraction("get_replication_hosts", f)
|
||||
|
||||
def update_replication_batch_for_host(self, host, last_synced_batch):
|
||||
return self._simple_upsert(
|
||||
table="profile_replication_status",
|
||||
keyvalues={"host": host},
|
||||
values={
|
||||
"last_synced_batch": last_synced_batch,
|
||||
},
|
||||
desc="update_replication_batch_for_host",
|
||||
)
|
||||
|
||||
def get_from_remote_profile_cache(self, user_id):
|
||||
return self._simple_select_one(
|
||||
table="remote_profile_cache",
|
||||
@@ -71,31 +123,48 @@ class ProfileWorkerStore(SQLBaseStore):
|
||||
desc="get_from_remote_profile_cache",
|
||||
)
|
||||
|
||||
def create_profile(self, user_localpart):
|
||||
return self._simple_insert(
|
||||
table="profiles",
|
||||
values={"user_id": user_localpart},
|
||||
desc="create_profile",
|
||||
)
|
||||
|
||||
def set_profile_displayname(self, user_localpart, new_displayname):
|
||||
return self._simple_update_one(
|
||||
table="profiles",
|
||||
keyvalues={"user_id": user_localpart},
|
||||
updatevalues={"displayname": new_displayname},
|
||||
desc="set_profile_displayname",
|
||||
)
|
||||
|
||||
def set_profile_avatar_url(self, user_localpart, new_avatar_url):
|
||||
return self._simple_update_one(
|
||||
table="profiles",
|
||||
keyvalues={"user_id": user_localpart},
|
||||
updatevalues={"avatar_url": new_avatar_url},
|
||||
desc="set_profile_avatar_url",
|
||||
)
|
||||
|
||||
|
||||
class ProfileStore(ProfileWorkerStore):
|
||||
def set_profile_displayname(self, user_localpart, new_displayname, batchnum):
|
||||
return self._simple_upsert(
|
||||
table="profiles",
|
||||
keyvalues={"user_id": user_localpart},
|
||||
values={
|
||||
"displayname": new_displayname,
|
||||
"batch": batchnum,
|
||||
},
|
||||
desc="set_profile_displayname",
|
||||
lock=False # we can do this because user_id has a unique index
|
||||
)
|
||||
|
||||
def set_profile_avatar_url(self, user_localpart, new_avatar_url, batchnum):
|
||||
return self._simple_upsert(
|
||||
table="profiles",
|
||||
keyvalues={"user_id": user_localpart},
|
||||
values={
|
||||
"avatar_url": new_avatar_url,
|
||||
"batch": batchnum,
|
||||
},
|
||||
desc="set_profile_avatar_url",
|
||||
lock=False # we can do this because user_id has a unique index
|
||||
)
|
||||
|
||||
def set_profile_active(self, user_localpart, active, batchnum):
|
||||
values = {
|
||||
"active": int(active),
|
||||
"batch": batchnum,
|
||||
}
|
||||
if not active:
|
||||
values["avatar_url"] = None
|
||||
values["displayname"] = None
|
||||
return self._simple_upsert(
|
||||
table="profiles",
|
||||
keyvalues={"user_id": user_localpart},
|
||||
values=values,
|
||||
desc="set_profile_active",
|
||||
lock=False # we can do this because user_id has a unique index
|
||||
)
|
||||
|
||||
def add_remote_profile_cache(self, user_id, displayname, avatar_url):
|
||||
"""Ensure we are caching the remote user's profiles.
|
||||
|
||||
|
||||
@@ -15,8 +15,7 @@
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
|
||||
import six
|
||||
import types
|
||||
|
||||
from canonicaljson import encode_canonical_json, json
|
||||
|
||||
@@ -28,11 +27,6 @@ from ._base import SQLBaseStore
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
if six.PY2:
|
||||
db_binary_type = buffer
|
||||
else:
|
||||
db_binary_type = memoryview
|
||||
|
||||
|
||||
class PusherWorkerStore(SQLBaseStore):
|
||||
def _decode_pushers_rows(self, rows):
|
||||
@@ -40,18 +34,18 @@ class PusherWorkerStore(SQLBaseStore):
|
||||
dataJson = r['data']
|
||||
r['data'] = None
|
||||
try:
|
||||
if isinstance(dataJson, db_binary_type):
|
||||
if isinstance(dataJson, types.BufferType):
|
||||
dataJson = str(dataJson).decode("UTF8")
|
||||
|
||||
r['data'] = json.loads(dataJson)
|
||||
except Exception as e:
|
||||
logger.warn(
|
||||
"Invalid JSON in data for pusher %d: %s, %s",
|
||||
r['id'], dataJson, e.args[0],
|
||||
r['id'], dataJson, e.message,
|
||||
)
|
||||
pass
|
||||
|
||||
if isinstance(r['pushkey'], db_binary_type):
|
||||
if isinstance(r['pushkey'], types.BufferType):
|
||||
r['pushkey'] = str(r['pushkey']).decode("UTF8")
|
||||
|
||||
return rows
|
||||
|
||||
@@ -26,11 +26,6 @@ from synapse.util.caches.descriptors import cached, cachedInlineCallbacks
|
||||
|
||||
|
||||
class RegistrationWorkerStore(SQLBaseStore):
|
||||
def __init__(self, db_conn, hs):
|
||||
super(RegistrationWorkerStore, self).__init__(db_conn, hs)
|
||||
|
||||
self.config = hs.config
|
||||
|
||||
@cached()
|
||||
def get_user_by_id(self, user_id):
|
||||
return self._simple_select_one(
|
||||
@@ -41,33 +36,12 @@ class RegistrationWorkerStore(SQLBaseStore):
|
||||
retcols=[
|
||||
"name", "password_hash", "is_guest",
|
||||
"consent_version", "consent_server_notice_sent",
|
||||
"appservice_id", "creation_ts",
|
||||
"appservice_id",
|
||||
],
|
||||
allow_none=True,
|
||||
desc="get_user_by_id",
|
||||
)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def is_trial_user(self, user_id):
|
||||
"""Checks if user is in the "trial" period, i.e. within the first
|
||||
N days of registration defined by `mau_trial_days` config
|
||||
|
||||
Args:
|
||||
user_id (str)
|
||||
|
||||
Returns:
|
||||
Deferred[bool]
|
||||
"""
|
||||
|
||||
info = yield self.get_user_by_id(user_id)
|
||||
if not info:
|
||||
defer.returnValue(False)
|
||||
|
||||
now = self.clock.time_msec()
|
||||
trial_duration_ms = self.config.mau_trial_days * 24 * 60 * 60 * 1000
|
||||
is_trial = (now - info["creation_ts"] * 1000) < trial_duration_ms
|
||||
defer.returnValue(is_trial)
|
||||
|
||||
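The trial check above compares the account's age (`creation_ts` is stored in seconds) against `mau_trial_days`. A standalone sketch of that arithmetic with made-up numbers:

```python
def is_trial_user_sketch(creation_ts_seconds, now_ms, mau_trial_days):
    # creation_ts is stored in seconds; the homeserver clock is in milliseconds.
    trial_duration_ms = mau_trial_days * 24 * 60 * 60 * 1000
    return (now_ms - creation_ts_seconds * 1000) < trial_duration_ms

now_ms = 1_700_000_000_000
two_days_ago_s = now_ms // 1000 - 2 * 86400
print(is_trial_user_sketch(two_days_ago_s, now_ms, mau_trial_days=3))  # True
```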
@cached()
|
||||
def get_user_by_access_token(self, token):
|
||||
"""Get a user from the given access token.
|
||||
@@ -167,7 +141,7 @@ class RegistrationStore(RegistrationWorkerStore,
|
||||
|
||||
def register(self, user_id, token=None, password_hash=None,
|
||||
was_guest=False, make_guest=False, appservice_id=None,
|
||||
create_profile_with_localpart=None, admin=False):
|
||||
admin=False):
|
||||
"""Attempts to register an account.
|
||||
|
||||
Args:
|
||||
@@ -181,8 +155,6 @@ class RegistrationStore(RegistrationWorkerStore,
|
||||
make_guest (boolean): True if the the new user should be guest,
|
||||
false to add a regular user account.
|
||||
appservice_id (str): The ID of the appservice registering the user.
|
||||
create_profile_with_localpart (str): Optionally create a profile for
|
||||
the given localpart.
|
||||
Raises:
|
||||
StoreError if the user_id could not be registered.
|
||||
"""
|
||||
@@ -195,7 +167,6 @@ class RegistrationStore(RegistrationWorkerStore,
|
||||
was_guest,
|
||||
make_guest,
|
||||
appservice_id,
|
||||
create_profile_with_localpart,
|
||||
admin
|
||||
)
|
||||
|
||||
@@ -208,7 +179,6 @@ class RegistrationStore(RegistrationWorkerStore,
|
||||
was_guest,
|
||||
make_guest,
|
||||
appservice_id,
|
||||
create_profile_with_localpart,
|
||||
admin,
|
||||
):
|
||||
now = int(self.clock.time())
|
||||
@@ -273,14 +243,6 @@ class RegistrationStore(RegistrationWorkerStore,
|
||||
(next_id, user_id, token,)
|
||||
)
|
||||
|
||||
if create_profile_with_localpart:
|
||||
# set a default displayname serverside to avoid ugly race
|
||||
# between auto-joins and clients trying to set displaynames
|
||||
txn.execute(
|
||||
"INSERT INTO profiles(user_id, displayname) VALUES (?,?)",
|
||||
(create_profile_with_localpart, create_profile_with_localpart)
|
||||
)
|
||||
|
||||
self._invalidate_cache_and_stream(
|
||||
txn, self.get_user_by_id, (user_id,)
|
||||
)
|
||||
|
||||
@@ -186,35 +186,6 @@ class RoomWorkerStore(SQLBaseStore):
|
||||
desc="is_room_blocked",
|
||||
)
|
||||
|
||||
@cachedInlineCallbacks(max_entries=10000)
|
||||
def get_ratelimit_for_user(self, user_id):
|
||||
"""Check if there are any overrides for ratelimiting for the given
|
||||
user
|
||||
|
||||
Args:
|
||||
user_id (str)
|
||||
|
||||
Returns:
|
||||
RatelimitOverride if there is an override, else None. If the contents
|
||||
of RatelimitOverride are None or 0 then ratelimitng has been
|
||||
disabled for that user entirely.
|
||||
"""
|
||||
row = yield self._simple_select_one(
|
||||
table="ratelimit_override",
|
||||
keyvalues={"user_id": user_id},
|
||||
retcols=("messages_per_second", "burst_count"),
|
||||
allow_none=True,
|
||||
desc="get_ratelimit_for_user",
|
||||
)
|
||||
|
||||
if row:
|
||||
defer.returnValue(RatelimitOverride(
|
||||
messages_per_second=row["messages_per_second"],
|
||||
burst_count=row["burst_count"],
|
||||
))
|
||||
else:
|
||||
defer.returnValue(None)
|
||||
|
||||
|
||||
class RoomStore(RoomWorkerStore, SearchStore):
|
||||
|
||||
@@ -498,6 +469,35 @@ class RoomStore(RoomWorkerStore, SearchStore):
|
||||
"get_all_new_public_rooms", get_all_new_public_rooms
|
||||
)
|
||||
|
||||
@cachedInlineCallbacks(max_entries=10000)
|
||||
def get_ratelimit_for_user(self, user_id):
|
||||
"""Check if there are any overrides for ratelimiting for the given
|
||||
user
|
||||
|
||||
Args:
|
||||
user_id (str)
|
||||
|
||||
Returns:
|
||||
RatelimitOverride if there is an override, else None. If the contents
|
||||
of RatelimitOverride are None or 0 then ratelimitng has been
|
||||
disabled for that user entirely.
|
||||
"""
|
||||
row = yield self._simple_select_one(
|
||||
table="ratelimit_override",
|
||||
keyvalues={"user_id": user_id},
|
||||
retcols=("messages_per_second", "burst_count"),
|
||||
allow_none=True,
|
||||
desc="get_ratelimit_for_user",
|
||||
)
|
||||
|
||||
if row:
|
||||
defer.returnValue(RatelimitOverride(
|
||||
messages_per_second=row["messages_per_second"],
|
||||
burst_count=row["burst_count"],
|
||||
))
|
||||
else:
|
||||
defer.returnValue(None)
|
||||
|
||||
@defer.inlineCallbacks
|
||||
def block_room(self, room_id, user_id):
|
||||
yield self._simple_insert(
Some files were not shown because too many files have changed in this diff.