
How to properly handle multiple broadcasts from a model on after_destroy_commit? #363

Open
spaquet opened this issue Jul 21, 2022 · 2 comments

Comments


spaquet commented Jul 21, 2022

Here is the use case: when deleting an object, the list must be refreshed and the counter updated. This object can also be manipulated by other processes that never go through the controller, which forces the Turbo Stream calls to be executed from within the model.

Now, when the object is destroyed its reference is lost, making the following code impossible to run:

after_destroy_commit do
  broadcast_update_later_to [self.room_id, :questions],
    target: "question_counter",
    html: Question.approved_questions_for_room(self.room_id).count
  broadcast_remove_to [self.room_id, :questions], target: self
end

The error is then ERROR: Discarded Turbo::Streams::ActionBroadcastJob due to a ActiveJob::DeserializationError.

(The same sequence, called from a controller, works as expected with no error.)

I do understand the process behind the scenes, but wouldn't it be better to properly serialize the job's arguments so it can still run once the object no longer exists? That way we would keep the async performance benefits.

This question also extends to broadcast_remove_to, since there is no broadcast_remove_later_to. (Again, I understand why, but developers have solutions in their arsenal to circumvent the potential issues; Realm, for instance, is a fully async database, and you quickly learn how to properly delete data and reflect those changes in the UI.)
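One workaround for the DeserializationError is to capture plain values in a before_destroy callback and use only those in the commit callback, so no destroyed record ever needs to be resolved from a GlobalID. Below is a plain-Ruby sketch of that pattern (no Rails here: the Question class, room_id, destroy sequence, and the BROADCASTS recorder are all stand-ins; in a real app the broadcast_removal body would be Turbo::StreamsChannel calls fed with the captured primitives):

```ruby
# Plain-Ruby sketch of the "capture before destroy" pattern.
BROADCASTS = []

class Question
  attr_reader :id, :room_id

  def initialize(id, room_id)
    @id = id
    @room_id = room_id
  end

  def destroy
    capture_broadcast_data   # plays the role of before_destroy
    # ... the row would be deleted here ...
    broadcast_removal        # plays the role of after_destroy_commit
  end

  private

  # Copy everything the later callback needs into plain values
  # while the record still exists.
  def capture_broadcast_data
    @captured = { room_id: room_id, dom_id: "question_#{id}" }
  end

  # Only primitives cross this boundary, so nothing has to be
  # deserialized from a GlobalID after the record is gone.
  def broadcast_removal
    BROADCASTS << [:update, @captured[:room_id], "question_counter"]
    BROADCASTS << [:remove, @captured[:room_id], @captured[:dom_id]]
  end
end

Question.new(1, 42).destroy
```

The same capture idea works inside a real model: stash the values in instance variables in before_destroy, then pass them as plain arguments (dom_id string, integer ids) in after_destroy_commit.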


daya commented Jul 23, 2022

While the maintainers may come back with a better solution, I think the following hackish pseudo-code will work:

before_destroy do
  # Capture what the commit callback will need while the record still exists.
  Thread.current[:broadcast_room_id] = self.room_id
  # OR store the whole record in a thread-local
  Thread.current[:selfish] = self
end

then

after_destroy_commit do
  broadcast_update_later_to [Thread.current[:broadcast_room_id], :questions],
    target: "question_counter",
    html: Question.approved_questions_for_room(Thread.current[:selfish].room_id).count
  broadcast_remove_to [Thread.current[:selfish].room_id, :questions],
    target: Thread.current[:selfish]
end
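This handoff relies on thread-local storage. A runnable plain-Ruby sketch of how Thread.current behaves (the value 42 is made up; no Rails involved):

```ruby
# Thread.current[] is per-thread storage, so a value set in before_destroy is
# visible in after_destroy_commit as long as both run on the same thread.
Thread.current[:broadcast_room_id] = 42      # stands in for before_destroy

# Later, in the commit callback on the same thread:
room_id = Thread.current[:broadcast_room_id]

# A different thread starts with its own empty storage:
other = Thread.new { Thread.current[:broadcast_room_id] }.value
```

One caveat with this approach: on pooled or reused threads the key lingers after the destroy, so it is worth clearing it (Thread.current[:broadcast_room_id] = nil) once the broadcasts are enqueued.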

@aaltcore

Faced something similar with:
after_commit :update_comments_count, on: %i[ create destroy ]

def update_comments_count
  broadcast_update_later_to commentable,
    target: [ commentable, "#{model_name.plural}_count" ],
    html: commentable.comments.size
end

For example, it renders as:

[ActiveJob] Enqueued Turbo::Streams::ActionBroadcastJob (Job ID: 154d6a5e-d28a-4233-9544-32e46d684fea) to Async(default) with arguments: "Z2lkOi8vcmFpbHM4L0FydGljbGUvMQ", {:action=>:update, :target=>"comments_count_article_1", :targets=>nil, :attributes=>{}, :html=>1, :locals=>{:comment=>#<GlobalID:0x00007ae9ea065618 @uri=#<URI::GID gid://rails8/Comment/135>>}}


Managed to find that the problem is this line:

o[:locals] = (o[:locals] || {}).reverse_merge!(model_name.element.to_sym => self).compact

which adds the destroyed instance to locals. So, to avoid the ActiveJob::DeserializationError, you can set locals to { model_name.element.to_sym => nil }:

def update_comments_count
  broadcast_update_later_to commentable,
    target: [ commentable, "#{model_name.plural}_count" ],
    html: commentable.comments.size,
    locals: { model_name.element.to_sym => nil }
end
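A plain-Ruby sketch of why the nil works: reverse_merge! keeps the caller-supplied value, and compact then drops nil-valued keys, so the destroyed record never reaches the job arguments. (reverse_merge! is ActiveSupport; it is emulated here with Hash#merge, since hash.reverse_merge(defaults) == defaults.merge(hash). The :destroyed_record symbol is a stand-in for the model instance.)

```ruby
record   = :destroyed_record      # stand-in for the destroyed model instance
default  = { comment: record }    # what turbo-rails would merge in as locals
explicit = { comment: nil }       # the caller-supplied locals from the fix

# Caller's nil wins the merge, then compact removes the key entirely,
# so nothing has to be GlobalID-deserialized in the job.
locals   = default.merge(explicit).compact

# Without explicit locals, the destroyed record stays in the arguments:
fallback = default.merge({}).compact
```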

3 participants