This repository has been archived by the owner on Apr 26, 2024. It is now read-only.

Bound ephemeral events by key #12544

Merged

Conversation

Fizzadar
Contributor

@Fizzadar Fizzadar commented Apr 25, 2022

Spent a couple of hours working through this with @bradtgmurray:

When the appservice-pusher handles a new read receipt, it grabs up to 100 events newer than the current position and then advances to the new current position. However, it doesn't advance past the 100 events it just grabbed. This means that if there are newer events to handle in the database but we're still in the process of handling older events, we'll grab the same events over and over again, pushing them to the appservice scheduler queue.

This can happen if the appservice is slow to respond (but does not error), and thus the in-memory queue of events-to-send grows and grows, filling up with duplicated events.

Signed off by Nick @ Beeper

Pull Request Checklist

  • Pull request is based on the develop branch
  • Pull request includes a changelog file. The entry should:
    • Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from EventStore to EventWorkerStore.".
    • Use markdown where necessary, mostly for code blocks.
    • End with either a period (.) or an exclamation mark (!).
    • Start with a capital letter.
    • Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
  • Pull request includes a sign off
  • Code style is correct
    (run the linters)

@Fizzadar Fizzadar requested a review from a team as a code owner April 25, 2022 19:48
@Fizzadar Fizzadar force-pushed the bound-ephemeral-events-by-key branch from 7cd72a9 to 063c774 on April 25, 2022 19:53
@Fizzadar Fizzadar force-pushed the bound-ephemeral-events-by-key branch from 063c774 to 6b8971f on April 25, 2022 19:54
@bradtgmurray
Contributor

The unbounded memory growth came from this list growing faster than it could be drained:

self.queuer.queued_ephemeral.setdefault(appservice.id, []).extend(ephemeral)

We can send ~700 EDUs to our app service per second (7 transactions of 100 EDUs each) and we receive ~25 read receipts per second, and each one of these read receipts caused the 100 most recent read receipts to be added to the queue. This means the queue was growing at a rate of 2500 per second, which outstrips our ability to drain it at 700 per second, resulting in this list growing without bound.
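
As a sanity check on those figures, a quick back-of-the-envelope calculation in Python (the rates are the approximate ones quoted above):

receipts_per_sec = 25           # approximate incoming read receipts per second
events_per_receipt = 100        # each receipt re-queues the 100 most recent EDUs
drain_rate = 700                # ~7 transactions per second of 100 EDUs each

enqueue_rate = receipts_per_sec * events_per_receipt  # 2500 EDUs/second
net_growth = enqueue_rate - drain_rate                # ~1800 EDUs/second
print(f"queue grows by roughly {net_growth} EDUs per second")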

@bradtgmurray
Contributor

Also note we have not tested this change, so consider this a theoretical fix. Would love some feedback on whether this fix is in the right direction.

@anoadragon453 anoadragon453 self-assigned this Apr 26, 2022
@anoadragon453
Member

Related: #10836

Thanks both for tracking this down. This particular code path is completely untested, so this error condition is not too surprising... I'm in the middle of writing a unit test now to verify the issue and the proposed solution.

This solution looks like it will work, and we should land this sooner rather than later. I do think that this will eliminate the duplication when sending queued up receipts to AS's. It will result in many small transactions though, whereas it would be ideal to instead send batches of 100 as much as possible.

That would simply mean switching the call:

# Persist the latest handled stream token for this appservice
await self.store.set_appservice_stream_type_pos(
    service, "read_receipt", new_token
)

from storing new_token to storing the maximum stream_token of the events pulled from the database. Then any further calls with a token less than that will be ignored. This should result in fewer transactions overall.
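
A rough sketch of that, assuming each pulled event exposes its stream position (the stream_token attribute below is illustrative rather than the actual field name):

# Sketch only: persist the position of the last event we actually pulled,
# rather than the notifier's new_token, so re-notifications below this
# point are ignored.
if events:
    max_handled_token = max(event.stream_token for event in events)  # illustrative attribute
    await self.store.set_appservice_stream_type_pos(
        service, "read_receipt", max_handled_token
    )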

But as mentioned, both solutions should prevent the catastrophic case.

@bradtgmurray
Contributor

Awesome, thanks for taking a look over this Andrew.

from storing new_token to storing the maximum stream_token of the events pulled from the database

The downside here is that we'd drop read receipts in the case where we're more than 100 events behind, as #8744 flipped this to taking the most recent 100 events as opposed to the next 100 events.

It's awkward, because either we just take up to the current new_token and end up with many small transactions, or we take the max stream_token from the events and drop events in cases where there are more than 100 to send.

However, maybe this isn't a problem. Events are pulled in small batches and added to the self.queuer.queued_ephemeral list, and then that list is pulled in batches as we get ready to send the next transaction. I think storing new_token will still result in maximally large transactions if the pusher can't keep up in that case.

@anoadragon453
Member

Hrm, re-reading #8744 again, I'm not sure the problem is exactly as described. We pull read receipts from the receipts_linearized table, yet that table only ever contains the latest read receipt per user per room. So hooking up a new AS will indeed send a lot of read receipts, but only num. users x num. rooms receipts in total - not every read receipt ever sent.

It probably doesn't make sense to send all of the latest read receipts to the AS when first hooking it up (in fact, should we send any receipts at all from before the AS was connected?) but sending all of the latest read receipts since the last time the AS was online may make sense to avoid edge cases.

However, maybe this isn't a problem. Events are pulled in small batches and added to the self.queuer.queued_ephemeral list, and then that list is pulled in batches as we get ready to send the next transaction. I think storing new_token will still result in maximally large transactions if the pusher can't keep up in that case.

Good point, there is a buffer, so transactions will already go out as fast as possible. If there's a lot of traffic, transactions will be >1 in size.
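
For illustration, a minimal sketch of that drain side (the names and the batch size of 100 come from this discussion rather than the actual scheduler code):

MAX_EPHEMERAL_PER_TRANSACTION = 100  # assumed per-transaction EDU limit

def next_transaction_batch(queued_ephemeral: dict, appservice_id: str) -> list:
    # Take up to 100 queued EDUs for the next transaction; whatever is left
    # stays buffered, so a backed-up queue yields maximally large transactions.
    queue = queued_ephemeral.get(appservice_id, [])
    batch = queue[:MAX_EPHEMERAL_PER_TRANSACTION]
    queued_ephemeral[appservice_id] = queue[MAX_EPHEMERAL_PER_TRANSACTION:]
    return batch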

@bradtgmurray
Contributor

Hrm, re-reading #8744 again, I'm not sure the problem is exactly as described. We pull read receipts from the receipts_linearized table, yet that table only ever contains the latest read receipt per user per room. So hooking up a new AS will indeed send a lot of read receipts, but only num. users x num. rooms receipts in total - not every read receipt ever sent.

Interesting. So our worker was probably much further behind.

When we were having this issue, we were running the following query:

SELECT COUNT(*)
FROM receipts_linearized rl
         JOIN application_services_state ass ON 1 = 1
WHERE rl.stream_id > ass.read_receipt_stream_id

This was returning a count of 3000 or so. Since we were only getting the latest 100, and this was much less than 3000, we were getting the amplification sadness.

@bradtgmurray
Contributor

Oh, you were discussing the fix in #8744, in which case I agree that doing something around "since the AS was connected" makes sense, but that doesn't handle long-lived AS connections that only recently had ephemeral read receipts enabled. You kind of need MAX(since AS connected, since ephemeral read receipts enabled), which is funky to implement, hence the hack.

As another aside, SELECT COUNT(*) FROM receipts_linearized gives 1885415 on beeper.com.

@anoadragon453
Member

Right, that's quite a lot of read receipts. Out of interest I checked the count on matrix.org and it's over 9 times that! Perhaps we'll leave the hack in place for now.

Member

@anoadragon453 anoadragon453 left a comment


I've added a commit to your branch with a unit test which verifies that the proposed fix works 🎉 In the test, we have 300 pending read receipts, and with the fix exactly 300 are eventually sent out the other side to the fake AS.

For reference, on the develop branch we try to send 25050 read receipts out(!).
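
That 25050 figure is consistent with the amplification described earlier in the thread; a toy reproduction of the arithmetic (not the actual unit test), assuming each incoming receipt re-queues the 100 most recent receipts:

queued_ephemeral: list[int] = []
all_receipts: list[int] = []

def on_new_receipt(stream_id: int) -> None:
    # "Most recent 100" rather than "next 100 after the handled position".
    all_receipts.append(stream_id)
    queued_ephemeral.extend(all_receipts[-100:])

for stream_id in range(1, 301):  # 300 pending read receipts, as in the test
    on_new_receipt(stream_id)

print(len(queued_ephemeral))  # 25050 queued EDUs for only 300 distinct receipts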

@anoadragon453 anoadragon453 merged commit 63ba9ba into matrix-org:develop Apr 26, 2022
@bradtgmurray
Contributor

Thank you all!

@Fizzadar Fizzadar deleted the bound-ephemeral-events-by-key branch April 26, 2022 20:58
DMRobertson pushed a commit that referenced this pull request May 10, 2022
Synapse 1.59.0rc1 (2022-05-10)
==============================

This release makes several changes that server administrators should be aware of:

- Device name lookup over federation is now disabled by default. ([\#12616](#12616))
- The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](#12452), [\#12654](#12654))

See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1590) for more details.

Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](#12597))

Features
--------

- Support [MSC3266](matrix-org/matrix-spec-proposals#3266) room summaries over federation. ([\#11507](#11507))
- Implement [changes](matrix-org/matrix-spec-proposals@4a77139) to [MSC2285 (hidden read receipts)](matrix-org/matrix-spec-proposals#2285). Contributed by @SimonBrandner. ([\#12168](#12168), [\#12635](#12635), [\#12636](#12636), [\#12670](#12670))
- Extend the [module API](https://github.com/matrix-org/synapse/blob/release-v1.59/synapse/module_api/__init__.py) to allow modules to change actions for existing push rules of local users. ([\#12406](#12406))
- Add the `notify_appservices_from_worker` configuration option (superseding `notify_appservices`) to allow a generic worker to be designated as the worker to send traffic to Application Services. ([\#12452](#12452))
- Add the `update_user_directory_from_worker` configuration option (superseding `update_user_directory`) to allow a generic worker to be designated as the worker to update the user directory. ([\#12654](#12654))
- Add new `enable_registration_token_3pid_bypass` configuration option to allow registrations via token as an alternative to verifying a 3pid. ([\#12526](#12526))
- Implement [MSC3786](matrix-org/matrix-spec-proposals#3786): Add a default push rule to ignore `m.room.server_acl` events. ([\#12601](#12601))
- Add new `mau_appservice_trial_days` configuration option to specify a different trial period for users registered via an appservice. ([\#12619](#12619))

Bugfixes
--------

- Fix a bug introduced in Synapse 1.48.0 where the latest thread reply provided failed to include the proper bundled aggregations. ([\#12273](#12273))
- Fix a bug introduced in Synapse 1.22.0 where attempting to send a large amount of read receipts to an application service all at once would result in duplicate content and abnormally high memory usage. Contributed by Brad & Nick @ Beeper. ([\#12544](#12544))
- Fix a bug introduced in Synapse 1.57.0 which could cause `Failed to calculate hosts in room` errors to be logged for outbound federation. ([\#12570](#12570))
- Fix a long-standing bug where status codes would almost always get logged as `200!`, irrespective of the actual status code, when clients disconnect before a request has finished processing. ([\#12580](#12580))
- Fix race when persisting an event and deleting a room that could lead to outbound federation breaking. ([\#12594](#12594))
- Fix a bug introduced in Synapse 1.53.0 where bundled aggregations for annotations/edits were incorrectly calculated. ([\#12633](#12633))
- Fix a long-standing bug where rooms containing power levels with string values could not be upgraded. ([\#12657](#12657))
- Prevent memory leak from reoccurring when presence is disabled. ([\#12656](#12656))

Updates to the Docker image
---------------------------

- Explicitly opt-in to using [BuildKit-specific features](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md) in the Dockerfile. This fixes issues with building images in some GitLab CI environments. ([\#12541](#12541))
- Update the "Build docker images" GitHub Actions workflow to use `docker/metadata-action` to generate docker image tags, instead of a custom shell script. Contributed by @henryclw. ([\#12573](#12573))

Improved Documentation
----------------------

- Update SQL statements and replace use of old table `user_stats_historical` in docs for Synapse Admins. ([\#12536](#12536))
- Add missing linebreak to `pipx` install instructions. ([\#12579](#12579))
- Add information about the TCP replication module to docs. ([\#12621](#12621))
- Fixes to the formatting of `README.rst`. ([\#12627](#12627))
- Fix docs on how to run specific Complement tests using the `complement.sh` test runner. ([\#12664](#12664))

Deprecations and Removals
-------------------------

- Remove unstable identifiers from [MSC3069](matrix-org/matrix-spec-proposals#3069). ([\#12596](#12596))
- Remove the unspecified `m.login.jwt` login type and the unstable `uk.half-shot.msc2778.login.application_service` from
  [MSC2778](matrix-org/matrix-spec-proposals#2778). ([\#12597](#12597))
- Synapse now requires at least Python 3.7.1 (up from 3.7.0), for compatibility with the latest Twisted trunk. ([\#12613](#12613))

Internal Changes
----------------

- Use supervisord to supervise Postgres and Caddy in the Complement image to reduce restart time. ([\#12480](#12480))
- Immediately retry any requests that have backed off when a server comes back online. ([\#12500](#12500))
- Use `make_awaitable` instead of `defer.succeed` for return values of mocks in tests. ([\#12505](#12505))
- Consistently check if an object is a `frozendict`. ([\#12564](#12564))
- Protect module callbacks with read semantics against cancellation. ([\#12568](#12568))
- Improve comments and error messages around access tokens. ([\#12577](#12577))
- Improve docstrings for the receipts store. ([\#12581](#12581))
- Use constants for read-receipts in tests. ([\#12582](#12582))
- Log status code of cancelled requests as 499 and avoid logging stack traces for them. ([\#12587](#12587), [\#12663](#12663))
- Remove special-case for `twisted` logger from default log config. ([\#12589](#12589))
- Use `getClientAddress` instead of the deprecated `getClientIP`. ([\#12599](#12599))
- Add link to documentation in Grafana Dashboard. ([\#12602](#12602))
- Reduce log spam when running multiple event persisters. ([\#12610](#12610))
- Add extra debug logging to federation sender. ([\#12614](#12614))
- Prevent remote homeservers from requesting local user device names by default. ([\#12616](#12616))
- Add a consistency check on events which we read from the database. ([\#12620](#12620))
- Remove use of the `constantly` library and switch to enums for `EventRedactBehaviour`. Contributed by @andrewdoh. ([\#12624](#12624))
- Remove unused code related to receipts. ([\#12632](#12632))
- Minor improvements to the scripts for running Synapse in worker mode under Complement. ([\#12637](#12637))
- Move `pympler` back in to the `all` extras. ([\#12652](#12652))
- Fix spelling of `M_UNRECOGNIZED` in comments. ([\#12665](#12665))
- Release script: confirm the commit to be tagged before tagging. ([\#12556](#12556))
- Fix a typo in the announcement text generated by the Synapse release development script. ([\#12612](#12612))

- Fix scripts-dev to pass typechecking. ([\#12356](#12356))
- Add some type hints to datastore. ([\#12485](#12485))
- Remove unused `# type: ignore`s. ([\#12531](#12531))
- Allow unused `# type: ignore` comments in bleeding edge CI jobs. ([\#12576](#12576))
- Remove redundant lines of config from `mypy.ini`. ([\#12608](#12608))
- Update to mypy 0.950. ([\#12650](#12650))
- Use `Concatenate` to better annotate `_do_execute`. ([\#12666](#12666))
- Use `ParamSpec` to refine type hints. ([\#12667](#12667))
- Fix mypy against latest pillow stubs. ([\#12671](#12671))
SpiritCroc pushed a commit to SpiritCroc/synapse-old that referenced this pull request May 12, 2022
Co-authored-by: Brad Murray <[email protected]>
Co-authored-by: Andrew Morgan <[email protected]>