Constant memory usage increase #1141

Open
arnaud-ly opened this issue Jul 5, 2022 · 17 comments

Comments

@arnaud-ly commented Jul 5, 2022

Hello,

I'm a young developer, so sorry if I don't have the usual reflexes; I will try to contribute to the best of my abilities.
Basically, I'm running AKHQ 0.21 and the application often starts to slow down because of OOM errors, so I have to stop the task manually. The memory usage has an ascending staircase shape: it almost never deallocates, or maybe it allocates too much.
I don't think it's a resource issue; I allocate 2 threads, 2 processors, and 4 GB of RAM.

I was wondering if you might have a clue about where to look, or a way to kill the process when I get OOM errors so it can relaunch automatically in my AWS ECS.

PS: I tried to kill it with this command in the Dockerfile, but I think the error is caught by Micronaut / React:
CMD ["/usr/bin/java", "-Dmicronaut.config.files=/app/application.yml", "-jar", "/app/akhq-0.21.0-all.jar", "-XX:OnError=kill -9 %p"]

Regards

Arnaud

@arnaud-ly (Author) commented Jul 5, 2022

PS: If it matters, I also use SSL to connect to my Kafka cluster and SSO for login.

@tchiotludo (Owner)

Hmm, really strange; I have multiple instances running this version with no memory leak.

The only way to understand it is to take a memory dump. Maybe try with VisualVM or JDK Mission Control and share the result?
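If attaching VisualVM to the container is not practical, you can also dump the heap from inside it with something like this (a sketch only: it assumes a JDK with jcmd is available in the image, and the process lookup and output path will need adjusting):

```sh
# Dump the heap of the running AKHQ JVM; open the .hprof file in VisualVM or JMC
jcmd $(pgrep -f akhq) GC.heap_dump /tmp/akhq-heap.hprof
```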

@arnaud-ly (Author)

Hello,
Here is the dump. Sorry, it's too heavy to attach to the comment.
https://we.tl/t-1eGDd1eXmC

Thanks for your help

@arnaud-ly (Author)

Hello, I think I found the issue, or at least one of the issues.

In consumeNewest from RecordRepository.java you forgot to close the KafkaConsumer.
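I think wrapping the consumer in a try-with-resources would fix it. A minimal sketch of the pattern (not the actual AKHQ code; the polling and mapping logic is simplified):

```java
// Sketch only: try-with-resources guarantees KafkaConsumer.close() runs even if
// polling throws, releasing the network connections and fetch buffers that
// otherwise pile up on every request.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

class ConsumeNewestSketch {
    void consumeNewest(Properties props, String topic) {
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(topic));
            for (ConsumerRecord<byte[], byte[]> rec : consumer.poll(Duration.ofSeconds(5))) {
                // ... map each record into the response here ...
            }
        } // the consumer is closed here, even on exception
    }
}
```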

Regards :)

Arnaud

@tchiotludo (Owner)

This one: #1069?

@arnaud-ly (Author)

Yes! I will try this version and come back to you if there is still an issue :)

@tchiotludo (Owner)

nice thanks :)

@arnaud-ly (Author) commented Jul 26, 2022

Hello,

Unfortunately, after a while I still have memory leaks.
I may have a hint about them: I sometimes see internal server errors when I create new topics.
Do you have any related pull request which resolves the issue?

Regards,

Arnaud

@tchiotludo (Owner)

I don't have any PR or any idea about the memory leak for now.
Is #1069 better? Maybe we can merge this one first.
For the other ones, we will need a memory dump, I think.

@arnaud-ly (Author)

Hello,

Sorry for the late reply. Yes, I believe you can merge #1069!
I didn't look into the problem further because I was on vacation.
I will update the issue if I find the cause :)

@tchiotludo (Owner)

thanks @arnaud-ly, done ;)

@carlosfwrk

Hi @tchiotludo and @arnaud-ly,
First of all, thanks for your great work; the tool is amazing and very useful for us.
We had memory leaks with previous versions (0.24.0).
We have been testing the new 0.25.1 version for several months and we are having memory leak problems with the newest version too.
The memory always increases until the container limit (it is never freed):
[screenshot: memory usage graph]

Thanks for your help,
Carlos.

@tchiotludo (Owner)

Can you send us the correlation between the memory increase and the server log, please?
The best would be a server log ±10 min around each big peak, to see what kind of webserver request starts that could explain it.

@carlosfwrk

Hi @tchiotludo, thanks for your reply.
Here you can see the correlation between memory peaks and logs:
[screenshot: memory usage graph]
241122 - 7h20 to 7h50 - AKHQ.log
241122 - 10h50 to 11h30 - AKHQ.log
241125 - 12h00 to 12h50 - AKHQ.log
241126 - 8h00 to 8h40 - AKHQ.log
241126 - 10h55 to 11h15 - AKHQ.log

Thanks for your help!

@carlosfwrk

I’ve continued reviewing and trying to “tune” the Java Virtual Machine to see if I can find a way to make it perform better. By increasing the maximum memory size, I’ve managed to eliminate the errors (requested encode buffer size exceeds the maximum allowable size and OutOfMemoryError: Java heap space), but the memory usage keeps climbing without stopping (it goes up and down, but when it drops, it doesn’t go down as much as it went up).

Here’s an example of the behavior:
[screenshot: memory usage graph]

Along with the logs corresponding to the two most significant spikes shown in the screenshot:
AKHQ - Dec 2 - 20.00 to 21.00 CET.log
AKHQ - Dec 3 - 11.30 to 12.30 CET.log
AKHQ - Dec 3 - 13.45 to 14.45 CET.log

@carlosfwrk

Hi @tchiotludo,
This issue is closed. Do you think it should be reopened, since the constant memory increase problem might not be fully resolved?
Or would you prefer that I open a new issue, since the problem this issue was opened for (the consumer not being closed) seems to be resolved, and my problem may be a different one, or even a bad configuration on my end?

tchiotludo reopened this Dec 5, 2024
@tchiotludo (Owner)

I reopened this one for now, since you have added a lot of information.
