@sentry/node memory leaks on node 10.13.0 #1762
Comments
@abraxxas can you help me reproduce this issue? How exactly are you triggering this memory leak?
@kamilogorek Thanks for your response. What I did was the following: I checked out this example https://github.com/sheerun/next.js/tree/with-sentry-fix/examples/with-sentry and added memwatch to the server.js,
then I ran the example using Node 10.x (with 8.x I observed no memory issues) and requested the following resources using our Gatling test suite:
(note the hashes might change for you), but you should be able to achieve the same results quite simply with this approach: https://www.simonholywell.com/post/2015/06/parallel-benchmark-many-urls-with-apachebench/. After a few thousand requests our memory usage was at almost 1 GB and never came down, even when idling. As soon as I remove the requestHandler and errorHandler from server.js the memory leak stops, so it seems to be connected to those two. Maybe you either had too few requests or used Node 8.x?
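A load of that shape can also be generated with plain ApacheBench. This is only an illustrative command; the exact URLs, hashes, and request counts from the original test are not preserved here:

```sh
# Hypothetical reproduction load: a few thousand requests per resource,
# 50 concurrent connections against the locally running Next.js example.
ab -n 3000 -c 50 http://localhost:3000/
```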
@abraxxas confirmed. Node 10 + ~300 req for each resource does the job. Will investigate further. Thanks!
@kamilogorek Interesting, but if you look at the heap size in your reproduction you will see that it stays around 20 MB without the handlers but increases rapidly with them. I think the memory-leak warning without the handlers happens just because the requests cause some constant memory increase. There is still a vast difference in memory usage between the version with Sentry and without.
On Thu, 29 Nov 2018, 12:45 Kamil Ogórek wrote:
> @abraxxas <https://github.com/abraxxas> I successfully reproduced it, however, it appears that the server still leaks request objects on its own, even without Sentry handlers. https://streamable.com/bad9j
> The growth rate is a tad larger, as we do attach a domain and our own scope object to the request, but it'd be utilized alongside the request by GC.
It appears to be an issue in Node's core itself. Please refer to this comment: #1762 (comment)
@kamilogorek Have you had a chance to look into this yet? It is causing a huge memory leak for us. After looking at the heap, this line looks like it could be the root of the problem:
The inspector showed thousands of entries into the I don't have any context around how things are architected, but we have noticed that requests are not correctly scoped and are giving the wrong metadata (see #1773), so it appears everything is being managed in global state and not cleaned up when a request ends.
@abraxxas @tpbowden there's an issue with the domain module leaking in Node's core itself. We'll keep monitoring it and try to come up with a temporary solution before it's fixed in core. Related issue: nodejs/node#23862
@kamilogorek do you have any ideas for a workaround or temporary fix for this? Progress on the node issue looks quite slow |
We're currently using PM2 to restart Node.js processes when memory reaches a certain threshold. https://pm2.io/doc/en/runtime/features/memory-limit/#max-memory-threshold-auto-reload
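For anyone applying the same stopgap, a minimal sketch of the PM2 memory-threshold restart; the 300 MB limit is an arbitrary example value:

```sh
# Restart the process automatically once it exceeds roughly 300 MB of memory.
pm2 start server.js --max-memory-restart 300M
```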
Using lab for unit testing, the leaks are still present. I know leaks can be painful to debug. Is there an ETA for a fix?
@sunknudsen The issue is actually fixed in Node.js, see nodejs/node#23862 and nodejs/node#25993. We probably need to wait for a release.
@garthenweb Does this affect
@sunknudsen More or less. It's the combination of the (deprecated) domain package and promises, as far as I understood. See https://github.com/getsentry/sentry-javascript/blob/master/packages/node/src/handlers.ts#L233. In my case, I just removed the Sentry middlewares from my (Express) server to fix it.
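For context, this is roughly the middleware setup being removed, as a minimal sketch using the documented @sentry/node Express handlers of that era; the DSN and port are placeholders:

```js
const express = require('express');
const Sentry = require('@sentry/node');

Sentry.init({ dsn: '__PUBLIC_DSN__' });

const app = express();

// The request handler wraps each request in a domain and attaches a scope;
// this is the middleware implicated in the leak discussed above.
app.use(Sentry.Handlers.requestHandler());

app.get('/', (req, res) => res.send('ok'));

// The error handler reports unhandled errors to Sentry.
app.use(Sentry.Handlers.errorHandler());

app.listen(3000);
```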
@MartijnHols We are currently working on a major release that should significantly reduce the memory footprint of our SDK. If you feel adventurous you could try it out: #1919
@HazAT Thanks, I installed it in production last night (at 23:10 in the graph) and re-enabled the handlers with the results below. There's normally a slight spike in CPU usage around 23:00-24:00 (as you can see from the previous day), but this seemed higher. The standard CPU usage is also a lot more spiky than without the handlers. Not sure if this is due to the changes in the new version or it's the handlers being enabled. I'll try disabling the handlers again in a few hours. There are about 2.5 errors caught per hour.
@MartijnHols Thanks for trying it out! Two things to keep in mind: the memory leak fix for domains landed in Node core only recently, in
Updating to Node 11.12 seems to have stabilized the memory and CPU usage. It seems there isn't any discernible difference now in resource usage when comparing it to having the handlers disabled, maybe even slightly better. It seems to also catch errors fine with all the info I need (it might have more console "breadcrumbs" which is nice). Not sure what else I can check for 5.0.0. I'll let you know if I run into any issues, but probably not. LGTM. Thanks!
I'd be happy to give this a try as well. @HazAT do you know if the fix in
@adriaanmeuris I've read that someone asked if it will be backported to
https://nodejs.org/en/blog/release/v10.16.0/ fixed some memory leaks, can someone test it?
I have the same problem on Node 11.10, but it seems to be fixed on Node 12.3.1.
There's a PR open for the
We can either fork this and override the dependency in Yarn resolutions, or find a way to get it merged.
Might be a bit off-topic, but could it also cause some leaks that there is no domain cleanup in Handlers.ts, only domain.create? UPDATE: Now I see that the handlers use domain.run, which internally calls domain.exit. Still, removing the listeners/emitters might make a difference, but honestly I have no idea.
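For readers unfamiliar with the module, here is a condensed sketch of the domain-per-request pattern the handler relies on; this is a paraphrase, not a verbatim copy of handlers.ts:

```js
const domain = require('domain');

// Roughly what the SDK's requestHandler does: each request runs inside its
// own domain so errors and the per-request scope can be associated with it.
function requestHandler(req, res, next) {
  const local = domain.create();
  local.add(req);
  local.add(res);
  local.on('error', next);
  // domain.run enters the domain, invokes the callback, and exits it again.
  local.run(() => {
    next();
  });
}
```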
I upgraded to Node 12.4.0 and I'm still seeing bad behavior that seems to be tied to Sentry. Here are some node-clinic doctor runs, done with the --autocannon option over 90 seconds.
It only seems really leaky when the request handlers are in place. If you look at the bottoms of the GC troughs in the run without the handlers, they're all at about the same level (65-70 MB), whereas the run with the handlers seems to be climbing about 5 MB with each GC cycle.
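For reference, the runs were produced with a command along these lines; the route and duration flag are assumptions:

```sh
# Profile the server for 90 seconds while autocannon hammers the root route.
clinic doctor --autocannon [ / -d 90 ] -- node server.js
```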
@tstirrat15 this fix has not been released yet, so that's probably why you are still having this issue. Can you try from the latest master if that's an option?
@tstirrat15
Here's another run with v5.4.2 in place. It still seems a bit leaky...
GC is always kicking in correctly and restores memory to the baseline. The memory usage increase is caused by breadcrumbs collected from events and the event queue, but it'll stop at 100 breadcrumbs and won't increase further. It'd be nice if we could see a ~15-30 min dump and check whether peak memory stops at some point.
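If the breadcrumb buffer itself is a concern, it can be shrunk with the standard init option; a small sketch, with 20 as an arbitrary value:

```js
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: '__PUBLIC_DSN__',
  // Breadcrumbs are kept in a bounded buffer; 100 is the default cap
  // mentioned above, and it can be lowered if memory is tight.
  maxBreadcrumbs: 20,
});
```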
Hmm... sounds good. I'll get this PR through to production and see if the behavior changes. Thank you!
@tpbowden You are right about this, I have the same issue. I was running SDK v5.15.0 and Node v12.3.1, both supposed to include all the required fixes mentioned here. I am bundling all dependencies within my server bundle with webpack. This way I can ship a Docker image without node_modules, but something is messing up the Sentry SDK and it leaks memory when bundled this way. It might be a bug introduced by some optimization process; my wild guess is it's probably terser. Some optimization is probably messing up the usage of the domain module, and the closure of the callback passed to
I'm also using razzle.js, which is a bit behind on the webpack/terser versions, so maybe it's already fixed. This doesn't seem to be a bug on Sentry's side anymore. I will continue to investigate this, open an issue where appropriate, and keep this thread updated.
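One way to rule the bundler out is to keep the SDK external to the server bundle so it is loaded with Node's own require at runtime. A hedged webpack sketch, not necessarily what the commenter ended up doing:

```js
// webpack.config.js (server build)
module.exports = {
  target: 'node',
  // Leave @sentry/node out of the bundle so webpack/terser never rewrite it;
  // it will be require()d from node_modules at runtime instead.
  externals: {
    '@sentry/node': 'commonjs @sentry/node',
  },
  // ...rest of the server config
};
```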
Keep us posted, thank you!
@kamilogorek Could you point me to where in the code the event processors callback added to the
Or maybe it's the entire scope that is supposed to be unique and garbage collected for each request? It seems that each request is getting the same scope instance 🤔
That is correct. However, see this call stack:
sentry-javascript/packages/node/src/handlers.ts Lines 319 to 328 in fd26d9f
sentry-javascript/packages/hub/src/hub.ts Lines 442 to 457 in fd26d9f
sentry-javascript/packages/hub/src/hub.ts Lines 463 to 488 in fd26d9f
sentry-javascript/packages/hub/src/hub.ts Line 479 in fd26d9f
Ha! I think I found something. We use dynamicRequire:
sentry-javascript/packages/hub/src/hub.ts Lines 465 to 468 in fd26d9f
But when I step into the dynamicRequire code:
sentry-javascript/packages/utils/src/misc.ts Lines 28 to 31 in fd26d9f
So it enters the
Since in my setup everything is bundled by webpack, there are probably some assumptions made on the
Recap
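For readers without the repo open, the referenced helper is essentially a thin wrapper; this is a paraphrased sketch, not the exact source, and it illustrates why bundling breaks it:

```js
// Paraphrase of dynamicRequire from @sentry/utils: it defers to the caller's
// module object so static bundlers don't see a literal require('domain').
function dynamicRequire(mod, request) {
  return mod.require(request);
}

// In an unbundled Node process this is equivalent to require('domain'),
// but once webpack rewrites `module`, mod.require can resolve against
// webpack's own module registry and return something unexpected.
const domainModule = dynamicRequire(module, 'domain');
```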
I manually patched the Hub module directly in my node_modules folder. I removed the line using the
Maybe the dynamicRequire hack was needed before, but is no longer needed with newer versions of webpack? 🤔 I also tried to replace:

```js
const domain = dynamicRequire(module, 'domain');
```

with:

```js
const domain = require('domain');
```

and it also works fine. I don't know which of those two solutions you would prefer. Would you like me to open a PR with this fix?
Hello, I'm trying to understand whether what I have found is another memory leak, and it seems so. After upgrading our @sentry/node from 5.19.1 to the newest version, 6.19.7, I discovered that inside the SessionFlusher constructor there is an assignment with a setInterval call, but as far as I was testing it, I never caught any close-function invocation meant for clearing that initial setInterval.
Just as a side note: as far as I can see, after our upgrades (even with autoSessionTracking set to false) our CPU is around 35% more loaded with work. Is this the desired behavior? Our current Node.js version is 16.15.0, running on Ubuntu 18 on an AWS EC2 t3.micro.
Hi @wojtekelvis, the interval in the session flusher should only be created once and it should be cleaned up when the Sentry client is closed. Do you happen to call
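For completeness, the documented way to shut the client down (which is when such timers should be cleared) looks roughly like this; the SIGTERM hook and the 2000 ms timeout are illustrative assumptions:

```js
const Sentry = require('@sentry/node');

process.on('SIGTERM', async () => {
  // Sentry.close(timeout) flushes pending events and shuts the client down,
  // clearing periodic work such as the session flusher interval.
  await Sentry.close(2000);
  process.exit(0);
});
```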
Package + Version
- @sentry/browser
- @sentry/node
Version:
Description
I tried integrating Sentry into a next.js project. I tried it using this template https://github.com/sheerun/next.js/tree/with-sentry-fix/examples/with-sentry and discovered what seems to be a memory leak in Sentry. If you check out this project and add memwatch
and bombard the server with requests; I requested the following resources:
With this, the memory usage grows endlessly for me. As soon as I remove the requestHandler and errorHandler from server.js the memory leak stops. So it seems to be connected to those two.
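The exact memwatch snippet from the report was not preserved; a minimal sketch of that kind of instrumentation, assuming the @airbnb/node-memwatch package, looks like this:

```js
// In server.js: log GC statistics and potential leak warnings while the
// benchmark runs, so heap growth across requests becomes visible.
const memwatch = require('@airbnb/node-memwatch');

memwatch.on('stats', (stats) => {
  console.log('GC stats:', stats);
});

memwatch.on('leak', (info) => {
  console.warn('Possible memory leak:', info);
});
```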