Job is activated but is not received by job worker (intermittent issue) #177
From what I can see, the Node Zeebe client implementation relies heavily on jobs being timed out by the server in case something goes wrong. I see two places where jobs can be lost:
I'm actually surprised that Zeebe does not use an acknowledgment-based approach for jobs, to guarantee that jobs are actually delivered to job handlers. If this issue is not fixable in a reasonable time, I see a workaround:
Hi @jetdream, thanks for reporting this. It is probably not closing - that should only happen when you explicitly call close, which is used in tests to complete the test; otherwise the polling loop will keep the code running forever. The timeout of the gRPC long poll is not managed on the client side. It could be a race condition in the batch collection timeout and execution. Let me look into it further.
I think this is due to a race condition in the batch processing. It passes a copy of the array of jobs for the batch to the handler, and it looks like the original array could be updated asynchronously while this is happening - that's my hypothesis. I've changed "passing a copy of the array of batched jobs" to passing a slice of the array, which means that any jobs added to the batch while the handler is executing will be added to the next batch. I will release version 0.24.1 soon for you to test. It's challenging to reproduce an edge case at that volume.
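For anyone following along, here is a rough sketch of the kind of hand-off described above. It is illustrative only - a toy in-memory buffer, not the actual zeebe-node internals - and shows how taking a fixed-length snapshot at dispatch time keeps late-arriving jobs queued for the next batch instead of losing them:

```typescript
// Illustrative only: a toy batch buffer, not the real zeebe-node implementation.
type Job = { key: string };

const batchedJobs: Job[] = [];

// Jobs streamed back from ActivateJobs are appended as they arrive,
// possibly while a previous batch is still being handled.
function onJobActivated(job: Job) {
  batchedJobs.push(job);
}

function dispatchBatch(handler: (jobs: Job[]) => void) {
  // Snapshot exactly the jobs collected up to this moment...
  const cutoff = batchedJobs.length;
  const batch = batchedJobs.slice(0, cutoff);
  // ...and remove only those from the buffer. Anything pushed after the
  // cutoff stays queued for the next dispatch instead of being dropped
  // or mutated underneath the running handler.
  batchedJobs.splice(0, cutoff);
  if (batch.length > 0) {
    handler(batch);
  }
}
```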
Hi @jwulf, are you sure this is a client issue? We have the exact same issue, but we are using the Go client.
No, I'm not sure that it is the client. I haven't been able to reproduce it to check. |
Closing this for now. If you still see the issue with 0.25.0 of the client, please reopen. |
I have an issue that sounds exactly the same as the one described here: camunda/camunda#3585
Logs show that the jobs are created and activated, but the Node.js batch worker does not receive them in roughly 1% of cases. All the cases detected have happened under high load, when thousands of workflow instances are created.
I increased the verbosity to the maximum possible level and can see that the worker does not receive those jobs; it just skips them.
The last time I detected this issue, I had started around 2000 instances and the first activity in the workflow (which is the service task) did not receive 12 jobs.
From what I discovered, I think all the skipped jobs belong to a single batch: the exported record positions (jobs activated) are very close to each other (differences of 4 to 8):
...
I see possible reasons:
In the general case I would also suspect that a network interruption could cause an effect where the broker thinks the jobs have been sent but the client never actually receives them; in my case, though, this is impossible, since the broker and the client are on the same server.
I tried to call zbc.completeJob() for those jobs, and the broker successfully processed it and continued workflow execution. That means the broker considers those jobs to have already been taken by a worker.
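Roughly what that manual completion looks like - a minimal sketch assuming the zeebe-node ZBClient API; the gateway address and job key below are placeholders, not the actual lost jobs:

```typescript
// Minimal sketch, assuming the zeebe-node ZBClient API.
// The gateway address and job key are placeholders.
import { ZBClient } from 'zeebe-node';

const zbc = new ZBClient('localhost:26500');

async function completeLostJob(jobKey: string) {
  // Completing the job by key unblocks the workflow instance even though
  // the worker never received the job in its batch.
  await zbc.completeJob({
    jobKey,
    variables: {}, // no variable updates needed here
  });
}

completeLostJob('2251799813685249').catch(console.error);
```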
My application:
I have very long-running tasks (up to months, or even years), so I cannot wait for the job timeout.
I use a batch worker; all the jobs are forwarded to an external system.
Worker config:
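Roughly, the worker is registered along these lines - a sketch assuming the zeebe-node createBatchWorker API; the task type and numeric values are placeholders rather than the exact settings:

```typescript
// Sketch of a batch worker registration, assuming the zeebe-node
// createBatchWorker API. Task type and numeric values are placeholders.
import { ZBClient } from 'zeebe-node';

const zbc = new ZBClient('localhost:26500');

zbc.createBatchWorker({
  taskType: 'forward-to-external-system', // placeholder task type
  jobBatchMinSize: 10, // illustrative batch size
  jobBatchMaxTime: 30, // illustrative max wait before dispatching a partial batch
  taskHandler: async (jobs) => {
    // Jobs are only forwarded here; the long-running work is completed
    // later, out of band, via zbc.completeJob().
    for (const job of jobs) {
      console.log(`forwarding job ${job.key} (instance ${job.workflowInstanceKey})`);
    }
  },
});
```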