KafkaProducer#close() seems to leak Kafka resources. #341
Comments
The … So, we really do call … Would be great to have a simple sample from you to reproduce.
@artembilan, I've packaged up a program that exhibits the behavior in VisualVM here. Thank you!
May we see a memory report where you think it is a leak? Maybe this behavior can be reproduced just with a plain …
@artembilan The sample code takes about 5 minutes to fail with a 300m heap limit. Attached is a VisualVM screenshot.
OK. When I moved that … So, it looks like that …
So, according to the info from the Kafka client, this … Or, better to say, it is dead-locked. Technically, it is always better to use a single Kafka producer for the application, but if you cannot, you must close it when you really are done with it. You need to revise your logic to call …
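The single-producer pattern recommended here can be sketched as follows. This is a minimal sketch, assuming reactor-kafka, a broker at `localhost:9092`, and a hypothetical topic `my-topic`; none of those specifics come from the thread:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.core.publisher.Flux;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

public class SharedSenderExample {

    // One application-wide sender, reused for every outbound Flux.
    private static final KafkaSender<String, String> SENDER = KafkaSender.create(
            SenderOptions.create(Map.<String, Object>of(
                    ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                    ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                    ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class)));

    static void sendBatch(Flux<String> payloads) {
        // Reuse the shared sender; do NOT close it per batch.
        SENDER.send(payloads.map(p ->
                        SenderRecord.create(new ProducerRecord<>("my-topic", p), p)))
                .subscribe();
    }

    public static void main(String[] args) {
        sendBatch(Flux.just("a", "b", "c"));
        // Close exactly once, on application shutdown.
        Runtime.getRuntime().addShutdownHook(new Thread(SENDER::close));
    }
}
```

The design point is that the producer (and its network threads and buffers) is created once and closed once, rather than per `Flux`.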
@artembilan, thanks for the great write-up. We already revised the logic when the leak was attributed to the sender. I opened this issue because this seems a natural and supported use case when, in fact, it is not. I've seen the logs you shared, but they didn't warn of leaking memory or resources. I would love to warn future users not to try what I did, but I don't have a great idea of how to do that. Again, thanks!
Yeah... I see your point. |
Just curious why you want to close each time; the … But, yes, it would be nice if we could detect it.
@garyrussell - Ephemeral senders are not necessary for my use case. This issue is because …
I create a `KafkaSender` and give it a `Flux` that produces records. In the `doFinally()` of the `Flux` I close (`close()`) the sender. I observe that the Kafka library logs that it prefers the `close(Duration)` call be used, but it closes the underlying object anyway.
I further observe that, as I create and close many senders, my heap use grows and eventually results in an OOM (after ~12 hours of operation).
Using `jmap`, I see Kafka `Node` objects constantly growing. They seem to eventually be collected, but the memory is then used by more generic objects such as byte arrays. When I use only one sender for the application, I do not observe a leak.
Expected Behavior
Calling `close()` will reliably make all resources collectable by the GC, and OOMs will not happen.
Actual Behavior
Over 12 hours, the heap grows to 1.5 GB and eventually the JVM exits with an out-of-heap-space error.
Steps to Reproduce
In a loop: create one sender, send records to Kafka, and close the sender in the `Flux`'s `doFinally()` callback.
Repeat this and Kafka seems to leak resources.
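The reproduction steps above can be sketched as the following loop. This is a hedged sketch, not the reporter's actual program: the topic name `my-topic`, broker address, and iteration counts are placeholders I introduced:

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import reactor.core.publisher.Flux;
import reactor.kafka.sender.KafkaSender;
import reactor.kafka.sender.SenderOptions;
import reactor.kafka.sender.SenderRecord;

public class SenderLeakRepro {

    public static void main(String[] args) {
        SenderOptions<String, String> options = SenderOptions.create(Map.<String, Object>of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class));

        // One ephemeral sender per iteration, closed in doFinally(). With a small
        // heap (e.g. -Xmx300m), this pattern is reported to run out of memory.
        for (int i = 0; i < 100_000; i++) {
            KafkaSender<String, String> sender = KafkaSender.create(options);
            Flux<SenderRecord<String, String, Integer>> records = Flux.range(0, 100)
                    .map(n -> SenderRecord.create(
                            new ProducerRecord<>("my-topic", "value-" + n), n));
            sender.send(records)
                    .doFinally(signal -> sender.close()) // closed every time, yet heap grows
                    .blockLast();
        }
    }
}
```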
Possible Solution
Do we need to use `close(Duration)` or call `flush()` first?
Your Environment
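The possible solution asked about above (flush, then close with a bounded timeout) can be sketched with a plain `KafkaProducer`. This is an illustrative sketch only; the thread does not confirm that it avoids the leak, and the broker address and topic name are placeholders:

```java
import java.time.Duration;
import java.util.Map;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class BoundedCloseExample {

    public static void main(String[] args) {
        KafkaProducer<String, String> producer = new KafkaProducer<>(Map.<String, Object>of(
                ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class,
                ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class));
        try {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
            producer.flush();                       // push buffered records out first
        } finally {
            producer.close(Duration.ofSeconds(10)); // bounded wait, as the client log suggests
        }
    }
}
```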