Large memory increase after bumping from 4.6.2 to 4.7.7 #11428
@yawkat can you have a look?
I tried proxying with code like this:

```kotlin
@ServerFilter("/proxy")
class ProxyFilter(val proxy: ProxyHttpClient) {
    @RequestFilter
    fun filter(): Publisher<MutableHttpResponse<*>>? {
        return proxy.proxy(HttpRequest.GET<Unit>("https://ftp.fau.de/ubuntu-releases/noble/ubuntu-24.04.1-desktop-amd64.iso"))
    }
}
```

And it downloaded just fine with … Beyond something app-specific, I also see two other possibilities related to the environment:
You can try debugging these two things. Otherwise I can only request a test case.
Pretty sure it's the second one. By the way, I can't reproduce it locally either; it's on the remote that it's painful.
Ahh, I see an issue: auto-read is on. I'll try a patch tomorrow. That could indeed break backpressure.
ProxyBackpressureTest takes quite long, so I made it parallel. Wasn't able to do that with JUnit, unfortunately. Might fix #11428
Expected Behavior
No memory increase
Actual Behaviour
Hello!
It seems there is something causing a large memory increase after updating to 4.7.x (see the graph below, where the high spike is when our pod is rolled out with the new Micronaut version).
I believe it's due to something in this PR.
We started to get a lot of OOMs all of a sudden following the upgrade. From my searches (I'm not familiar with the whole Micronaut codebase, so maybe I'm wrong), this change leads to any content from an HTTP client that exceeds maxContentLength being streamed instead, with a buffer size of maxContentLength. In our case we had set this property to Integer.MAX_VALUE (I believe due to some large content we were downloading that was failing the client).
Our use case is that we proxy to a Docker registry (a simple ProxyHttpClient) and it downloads content larger than 2 GB.
Playing around a bit and considering the change above: our config was originally setting
micronaut.http.services.registry.max-content-length
to 2147483647
(Integer.MAX_VALUE), I believe due to some prior bugs we had. However, it seems to me that since we're now streaming (following the above), this value is taken as the max buffer size. Lowering this parameter seems to lower the memory consumption, but I had no good result doing a memory dump, as Netty uses direct memory...
I know our configuration of max-content-length was a bit awkward, but it can be misleading to introduce such a behaviour change, and the fact that it falls back to this parameter to choose the buffer size when streaming a response seems unclear.
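For context, the mitigation we are trying is simply lowering that property so the streaming buffer cap is no longer 2 GB. A minimal sketch of the configuration, assuming a service client named `registry` (the URL and the 10 MiB value are illustrative assumptions, not recommendations from the docs):

```yaml
# application.yml -- hedged sketch, not our exact production config.
# Since 4.7.x, responses larger than max-content-length are streamed and
# this value appears to also act as the max buffer size per response.
micronaut:
  http:
    services:
      registry:
        url: https://registry.example.com   # hypothetical upstream registry
        max-content-length: 10485760        # 10 MiB instead of 2147483647
```

With a value like this, memory consumption during the 2 GB proxy download drops accordingly, at the cost of smaller streaming chunks.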
WDYT? Is my analysis correct?
Thank you!
Steps To Reproduce
A
proxyHttpClient.proxy(httpRequest)
on content larger than max-content-length
should fall back to streaming, hence using this value as the max buffer size.
Environment Information
No response
Example Application
No response
Version
4.7.7