Node.js v14.15.5 segfault in v8::internal::ConcurrentMarking::Run #37553

Closed
hellivan opened this issue Mar 1, 2021 · 43 comments
Labels
c++ Issues and PRs that require attention from people who are familiar with C++.

Comments

@hellivan

hellivan commented Mar 1, 2021

  • Version: 14.15.5
  • Platform: Linux WorkMachine 5.11.1-arch1-1 #1 SMP PREEMPT Tue, 23 Feb 2021 14:05:30 +0000 x86_64 GNU/Linux
  • Subsystem: v8 ?

What steps will reproduce the bug?

As far as we can tell, the segfault happens when Node.js sends/receives large amounts of data via sockets and processes it with an expensive synchronous method (e.g. JSON.parse).
The original problem involved some basic processing of JSON data received from RabbitMQ via the amqplib npm package. We have since been able to reproduce the problem using only Node.js built-in mechanisms (the net package) in this sample repository:

https://github.com/hellivan/nodejs-14.15.5-ConcurrentMarking-segfault
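
For illustration, the load pattern described above boils down to something like the following minimal sketch (this is not the actual code from the repository; payload size, send interval and port are arbitrary):

const net = require('net');

// ~1 MB JSON document that is written to a local socket over and over
const payload = JSON.stringify({ data: 'x'.repeat(1024 * 1024) });

const server = net.createServer((socket) => {
  // flood the connection with newline-delimited JSON
  setInterval(() => socket.write(payload + '\n'), 1);
});

server.listen(5673, () => {
  const client = net.connect(5673);
  let buffered = '';
  client.on('data', (chunk) => {
    buffered += chunk;
    let nl;
    while ((nl = buffered.indexOf('\n')) !== -1) {
      const doc = buffered.slice(0, nl);
      buffered = buffered.slice(nl + 1);
      JSON.parse(doc); // expensive synchronous work on the main thread
    }
  });
});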

How often does it reproduce? Is there a required condition?

The error only reproduces under conditions that are difficult to pin down. Under normal circumstances the application may run for hours and then crash without apparent reason; however, it may also crash right after startup.

What is the expected behavior?

The Node.js runtime should execute the JS application without interruption.

What do you see instead?

Node.js crashes with a SIGSEGV.

Additional information

During the analysis of the original application crashes we were able to extract some coredumps, which are listed below. For privacy reasons we replaced some paths in the results. Because of the complexity of the original application, we created a reduced sample application, which we hope reproduces the same segmentation fault as the original one. During our tests we found that other Node.js versions may be affected by this bug, too: we were able to sporadically reproduce the issue with Node.js 14.16.0 and 15.10.0.

If you need any help or information regarding the coredumps please let me know.

Coredump 1

General information about node instance
(llnode) v8 nodeinfo
Information for process id 27845 (process=0x262d71a01d81)
Platform = linux, Architecture = x64, Node Version = v14.15.5
Component versions (process.versions=0x30ed2c5c1b69):
    ares = 1.16.1
    brotli = 1.0.9
    cldr = 37.0
    icu = 67.1
    llhttp = 2.1.3
    modules = 83
    napi = 7
    nghttp2 = 1.41.0
    node = 14.15.5
    openssl = 1.1.1i
    tz = 2020a
    unicode = 13.0
    uv = 1.40.0
    v8 = 8.4.371.19-node.18
    zlib = 1.2.11
Release Info (process.release=0x30ed2c5c1951):
    name = node
    lts = Fermium
    sourceUrl = https://nodejs.org/download/release/v14.15.5/node-v14.15.5.tar.gz
    headersUrl = https://nodejs.org/download/release/v14.15.5/node-v14.15.5-headers.tar.gz
Executable Path = /home/user/.nvm/versions/node/v14.15.5/bin/node
Command line arguments (process.argv=0x30ed2c5c1871):
    [0] = '/home/user/user.nvm/versions/node/v14.15.5/bin/node'
    [1] = '/home/user/app.js'
Node.js Command line arguments (process.execArgv=0x30ed2c5c1a49):
List of all threads
(llnode) thread list
Process 27845 stopped
* thread #1: tid = 27847, 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 27848, 0x0000000000cfc324 node`v8::internal::ConcurrentMarkingVisitor::VisitPointersInSnapshot(v8::internal::HeapObject, v8::internal::SlotSnapshot const&) + 68, stop reason = signal 0
  thread #3: tid = 27849, 0x0000000000cff987 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1303, stop reason = signal 0
  thread #4: tid = 27851, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #5: tid = 27850, 0x00007f2db786c8e2 libc.so.6`malloc + 770, stop reason = signal 0
  thread #6: tid = 27845, 0x0000000000d49001 node`v8::internal::IncrementalMarking::RecordWriteSlow(v8::internal::HeapObject, v8::internal::FullHeapObjectSlot, v8::internal::HeapObject) + 65, stop reason = signal 0
  thread #7: tid = 27853, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #8: tid = 27852, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #9: tid = 27846, 0x00007f2db78e039e libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #10: tid = 27854, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #11: tid = 27855, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
Threads' backtrace
(llnode) bt all
* thread #1, name = 'node', stop reason = signal SIGSEGV
  * frame #0: 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #2, stop reason = signal 0
    frame #0: 0x0000000000cfc324 node`v8::internal::ConcurrentMarkingVisitor::VisitPointersInSnapshot(v8::internal::HeapObject, v8::internal::SlotSnapshot const&) + 68
    frame #1: 0x0000000000d02069 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 11257
    frame #2: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #3: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #3, stop reason = signal 0
    frame #0: 0x0000000000cff987 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1303
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #4, stop reason = signal 0
    frame #0: 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f2db79bfb98 libpthread.so.0`__new_sem_wait_slow64.constprop.0 + 152
    frame #2: 0x000000000138a312 node`uv_sem_wait at thread.c:626:9
    frame #3: 0x000000000138a300 node`uv_sem_wait(sem=0x0000000004465600) at thread.c:682
    frame #4: 0x0000000000afbd45 node`node::inspector::(anonymous namespace)::StartIoThreadMain(void*) + 53
    frame #5: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #5, stop reason = signal 0
    frame #0: 0x00007f2db786c8e2 libc.so.6`malloc + 770
    frame #1: 0x00007f2db7bd14da libstdc++.so.6`operator new(unsigned long) at new_op.cc:50:22
    frame #2: 0x0000000000cfd960 node`std::__detail::_Map_base<v8::internal::MemoryChunk*, std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData>, std::allocator<std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData> >, std::__detail::_Select1st, std::equal_to<v8::internal::MemoryChunk*>, v8::internal::MemoryChunk::Hasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[](v8::internal::MemoryChunk* const&) + 160
    frame #3: 0x0000000000cfdbf9 node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 185
    frame #4: 0x0000000000d02735 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 12997
    frame #5: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #6: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #7: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #8: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #6, stop reason = signal 0
    frame #0: 0x0000000000d49001 node`v8::internal::IncrementalMarking::RecordWriteSlow(v8::internal::HeapObject, v8::internal::FullHeapObjectSlot, v8::internal::HeapObject) + 65
    frame #1: 0x0000000000e2da31 node`v8::internal::JsonParser<unsigned short>::BuildJsonObject(v8::internal::JsonParser<unsigned short>::JsonContinuation const&, std::vector<v8::internal::JsonProperty, std::allocator<v8::internal::JsonProperty> > const&, v8::internal::Handle<v8::internal::Map>) + 5569
    frame #2: 0x0000000000e2e795 node`v8::internal::JsonParser<unsigned short>::ParseJsonValue() + 2565
    frame #3: 0x0000000000e2ee8f node`v8::internal::JsonParser<unsigned short>::ParseJson() + 15
    frame #4: 0x0000000000c24805 node`v8::internal::Builtin_Impl_JsonParse(v8::internal::BuiltinArguments, v8::internal::Isolate*) + 197
    frame #5: 0x0000000000c24f06 node`v8::internal::Builtin_JsonParse(int, unsigned long*, v8::internal::Isolate*) + 22
    frame #6: 0x0000000001401319 node`Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit + 57
    frame #7: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #8: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #9: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #10: 0x000015ea04052b87
    frame #11: 0x00000000013944f9 node`Builtins_ArgumentsAdaptorTrampoline + 185
    frame #12: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #13: 0x000015ea04060a94
    frame #14: 0x000015ea040527cc
    frame #15: 0x000015ea04065d15
    frame #16: 0x000015ea0405ebc2
    frame #17: 0x000015ea040562cc
    frame #18: 0x00000000013982da node`Builtins_JSEntryTrampoline + 90
    frame #19: 0x00000000013980b8 node`Builtins_JSEntry + 120
    frame #20: 0x0000000000cc2cf1 node`v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 449
    frame #21: 0x0000000000cc3b5f node`v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) + 95
    frame #22: 0x0000000000b8ba54 node`v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 324
    frame #23: 0x000000000096ad61 node`node::InternalCallbackScope::Close() + 1233
    frame #24: 0x000000000096b357 node`node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) + 647
    frame #25: 0x0000000000978f69 node`node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) + 121
    frame #26: 0x0000000000acf2d8 node`node::StreamBase::CallJSOnreadMethod(long, v8::Local<v8::ArrayBuffer>, unsigned long, node::StreamBase::StreamBaseJSChecks) (.constprop.105) + 168
    frame #27: 0x0000000000ad2cc6 node`node::EmitToJSStreamListener::OnStreamRead(long, uv_buf_t const&) + 886
    frame #28: 0x0000000000adc2b8 node`node::LibuvStreamWrap::OnUvRead(long, uv_buf_t const*) + 120
    frame #29: 0x0000000001387267 node`uv__read(stream=0x00000000047c8cf0) at stream.c:1239:7
    frame #30: 0x0000000001387c20 node`uv__stream_io(loop=<unavailable>, w=0x00000000047c8d78, events=1) at stream.c:1306:5
    frame #31: 0x000000000138e615 node`uv__io_poll at linux-core.c:462:11
    frame #32: 0x000000000137c468 node`uv_run(loop=0x000000000446c7c0, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #33: 0x0000000000a44974 node`node::NodeMainInstance::Run() + 580
    frame #34: 0x00000000009d1e15 node`node::Start(int, char**) + 277
    frame #35: 0x00007f2db7808b25 libc.so.6`__libc_start_main + 213
    frame #36: 0x00000000009694cc node`_start + 41
  thread #7, stop reason = signal 0
    frame #0: 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f2db79bd260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #8, stop reason = signal 0
    frame #0: 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f2db79bd260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #9, stop reason = signal 0
    frame #0: 0x00007f2db78e039e libc.so.6`epoll_wait + 94
    frame #1: 0x000000000138e9c4 node`uv__io_poll at linux-core.c:324:14
    frame #2: 0x000000000137c468 node`uv_run(loop=0x00000000046db7f8, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #3: 0x0000000000a75f4b node`node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start()::'lambda'(void*)::_FUN(void*) + 123
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #10, stop reason = signal 0
    frame #0: 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f2db79bd260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67
  thread #11, stop reason = signal 0
    frame #0: 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f2db79bd260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f2db79b7299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f2db78e0053 libc.so.6`__clone + 67

Coredump 2

List of all threads
(llnode) thread list
Process 37891 stopped
* thread #1: tid = 37896, 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 37891, 0x0000000000e26c23 node`v8::internal::JsonParser<unsigned short>::ScanJsonString(bool) + 51, stop reason = signal 0
  thread #3: tid = 37894, 0x0000000000cff9ac node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1340, stop reason = signal 0
  thread #4: tid = 37897, 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #5: tid = 37892, 0x00007fe44a88039e libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #6: tid = 37899, 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #7: tid = 37900, 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #8: tid = 37898, 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #9: tid = 37893, 0x0000000000cff944 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1236, stop reason = signal 0
  thread #10: tid = 37895, 0x0000000000cfd8e7 node`std::__detail::_Map_base<v8::internal::MemoryChunk*, std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData>, std::allocator<std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData> >, std::__detail::_Select1st, std::equal_to<v8::internal::MemoryChunk*>, v8::internal::MemoryChunk::Hasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[](v8::internal::MemoryChunk* const&) + 39, stop reason = signal 0
  thread #11: tid = 37901, 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
Threads' backtrace
(llnode) bt all
* thread #1, name = 'node', stop reason = signal SIGSEGV
  * frame #0: 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #2, stop reason = signal 0
    frame #0: 0x0000000000e26c23 node`v8::internal::JsonParser<unsigned short>::ScanJsonString(bool) + 51
    frame #1: 0x0000000000e2e0e0 node`v8::internal::JsonParser<unsigned short>::ParseJsonValue() + 848
    frame #2: 0x0000000000e2ee8f node`v8::internal::JsonParser<unsigned short>::ParseJson() + 15
    frame #3: 0x0000000000c24805 node`v8::internal::Builtin_Impl_JsonParse(v8::internal::BuiltinArguments, v8::internal::Isolate*) + 197
    frame #4: 0x0000000000c24f06 node`v8::internal::Builtin_JsonParse(int, unsigned long*, v8::internal::Isolate*) + 22
    frame #5: 0x0000000001401319 node`Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit + 57
    frame #6: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #7: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #8: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #9: 0x000029ced67d30a7
    frame #10: 0x00000000013944f9 node`Builtins_ArgumentsAdaptorTrampoline + 185
    frame #11: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #12: 0x000029ced67d9b94
    frame #13: 0x000029ced67ddf4c
    frame #14: 0x000029ced67dda70
    frame #15: 0x000029ced67df222
    frame #16: 0x000029ced67d630c
    frame #17: 0x00000000013982da node`Builtins_JSEntryTrampoline + 90
    frame #18: 0x00000000013980b8 node`Builtins_JSEntry + 120
    frame #19: 0x0000000000cc2cf1 node`v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 449
    frame #20: 0x0000000000cc3b5f node`v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) + 95
    frame #21: 0x0000000000b8ba54 node`v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 324
    frame #22: 0x000000000096ad61 node`node::InternalCallbackScope::Close() + 1233
    frame #23: 0x000000000096b357 node`node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) + 647
    frame #24: 0x0000000000978f69 node`node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) + 121
    frame #25: 0x0000000000acf2d8 node`node::StreamBase::CallJSOnreadMethod(long, v8::Local<v8::ArrayBuffer>, unsigned long, node::StreamBase::StreamBaseJSChecks) (.constprop.105) + 168
    frame #26: 0x0000000000ad2cc6 node`node::EmitToJSStreamListener::OnStreamRead(long, uv_buf_t const&) + 886
    frame #27: 0x0000000000adc2b8 node`node::LibuvStreamWrap::OnUvRead(long, uv_buf_t const*) + 120
    frame #28: 0x0000000001387267 node`uv__read(stream=0x0000000005a70cf0) at stream.c:1239:7
    frame #29: 0x0000000001387c20 node`uv__stream_io(loop=<unavailable>, w=0x0000000005a70d78, events=1) at stream.c:1306:5
    frame #30: 0x000000000138e615 node`uv__io_poll at linux-core.c:462:11
    frame #31: 0x000000000137c468 node`uv_run(loop=0x000000000446c7c0, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #32: 0x0000000000a44974 node`node::NodeMainInstance::Run() + 580
    frame #33: 0x00000000009d1e15 node`node::Start(int, char**) + 277
    frame #34: 0x00007fe44a7a8b25 libc.so.6`__libc_start_main + 213
    frame #35: 0x00000000009694cc node`_start + 41
  thread #3, stop reason = signal 0
    frame #0: 0x0000000000cff9ac node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1340
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #4, stop reason = signal 0
    frame #0: 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007fe44a95fb98 libpthread.so.0`__new_sem_wait_slow64.constprop.0 + 152
    frame #2: 0x000000000138a312 node`uv_sem_wait at thread.c:626:9
    frame #3: 0x000000000138a300 node`uv_sem_wait(sem=0x0000000004465600) at thread.c:682
    frame #4: 0x0000000000afbd45 node`node::inspector::(anonymous namespace)::StartIoThreadMain(void*) + 53
    frame #5: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #5, stop reason = signal 0
    frame #0: 0x00007fe44a88039e libc.so.6`epoll_wait + 94
    frame #1: 0x000000000138e9c4 node`uv__io_poll at linux-core.c:324:14
    frame #2: 0x000000000137c468 node`uv_run(loop=0x00000000059837f8, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #3: 0x0000000000a75f4b node`node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start()::'lambda'(void*)::_FUN(void*) + 123
    frame #4: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #6, stop reason = signal 0
    frame #0: 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007fe44a95d260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #7, stop reason = signal 0
    frame #0: 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007fe44a95d260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #8, stop reason = signal 0
    frame #0: 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007fe44a95d260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #9, stop reason = signal 0
    frame #0: 0x0000000000cff944 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1236
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #10, stop reason = signal 0
    frame #0: 0x0000000000cfd8e7 node`std::__detail::_Map_base<v8::internal::MemoryChunk*, std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData>, std::allocator<std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData> >, std::__detail::_Select1st, std::equal_to<v8::internal::MemoryChunk*>, v8::internal::MemoryChunk::Hasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[](v8::internal::MemoryChunk* const&) + 39
    frame #1: 0x0000000000cfdbf9 node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 185
    frame #2: 0x0000000000d02044 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 11220
    frame #3: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #4: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #5: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007fe44a880053 libc.so.6`__clone + 67
  thread #11, stop reason = signal 0
    frame #0: 0x00007fe44a9639ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007fe44a95d260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007fe44a957299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007fe44a880053 libc.so.6`__clone + 67

Coredump 3

List of all threads
(llnode) thread list
Process 39590 stopped
* thread #1: tid = 39593, 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 39591, 0x00007f129312c39e libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #3: tid = 39590, 0x0000000000d6e0f4 node`v8::internal::MainMarkingVisitor<v8::internal::MajorMarkingState>::ShouldVisit(v8::internal::HeapObject) + 20, stop reason = signal 0
  thread #4: tid = 39597, 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #5: tid = 39599, 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #6: tid = 39596, 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #7: tid = 39594, 0x0000000000cfd8d3 node`std::__detail::_Map_base<v8::internal::MemoryChunk*, std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData>, std::allocator<std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData> >, std::__detail::_Select1st, std::equal_to<v8::internal::MemoryChunk*>, v8::internal::MemoryChunk::Hasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[](v8::internal::MemoryChunk* const&) + 19, stop reason = signal 0
  thread #8: tid = 39598, 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #9: tid = 39595, 0x0000000000cff98e node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1310, stop reason = signal 0
  thread #10: tid = 39600, 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #11: tid = 39592, 0x00007f129320c6e0 libpthread.so.0`__lll_lock_wait + 48, stop reason = signal 0
Threads' backtrace
(llnode) bt all
* thread #1, name = 'node', stop reason = signal SIGSEGV
  * frame #0: 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #2, stop reason = signal 0
    frame #0: 0x00007f129312c39e libc.so.6`epoll_wait + 94
    frame #1: 0x000000000138e9c4 node`uv__io_poll at linux-core.c:324:14
    frame #2: 0x000000000137c468 node`uv_run(loop=0x00000000058ac7f8, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #3: 0x0000000000a75f4b node`node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start()::'lambda'(void*)::_FUN(void*) + 123
    frame #4: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #3, stop reason = signal 0
    frame #0: 0x0000000000d6e0f4 node`v8::internal::MainMarkingVisitor<v8::internal::MajorMarkingState>::ShouldVisit(v8::internal::HeapObject) + 20
    frame #1: 0x0000000000d7c8f1 node`unsigned long v8::internal::MarkCompactCollector::ProcessMarkingWorklist<(v8::internal::MarkCompactCollector::MarkingWorklistProcessingMode)0>(unsigned long) + 2785
    frame #2: 0x0000000000d4e1c4 node`v8::internal::IncrementalMarking::Step(double, v8::internal::IncrementalMarking::CompletionAction, v8::internal::StepOrigin) + 276
    frame #3: 0x0000000000d4ed44 node`v8::internal::IncrementalMarking::AdvanceOnAllocation() (.part.106) + 228
    frame #4: 0x0000000000d4f178 node`v8::internal::IncrementalMarking::Observer::Step(int, unsigned long, unsigned long) + 216
    frame #5: 0x0000000000d37e4f node`v8::internal::AllocationObserver::AllocationStep(int, unsigned long, unsigned long) + 47
    frame #6: 0x0000000000db794f node`v8::internal::SpaceWithLinearArea::InlineAllocationStep(unsigned long, unsigned long, unsigned long, unsigned long) + 175
    frame #7: 0x0000000000db7a4c node`v8::internal::NewSpace::EnsureAllocation(int, v8::internal::AllocationAlignment) + 188
    frame #8: 0x0000000000d3ef72 node`v8::internal::Heap::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) + 290
    frame #9: 0x0000000000d46b68 node`v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) + 40
    frame #10: 0x0000000000d0c4a2 node`v8::internal::Factory::AllocateRaw(int, v8::internal::AllocationType, v8::internal::AllocationAlignment) + 146
    frame #11: 0x0000000000d06324 node`v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap(int, v8::internal::AllocationType, v8::internal::Map, v8::internal::AllocationAlignment) + 20
    frame #12: 0x0000000000d06dc3 node`v8::internal::FactoryBase<v8::internal::Factory>::NewByteArray(int, v8::internal::AllocationType) + 51
    frame #13: 0x0000000000e2c8a5 node`v8::internal::JsonParser<unsigned short>::BuildJsonObject(v8::internal::JsonParser<unsigned short>::JsonContinuation const&, std::vector<v8::internal::JsonProperty, std::allocator<v8::internal::JsonProperty> > const&, v8::internal::Handle<v8::internal::Map>) + 1077
    frame #14: 0x0000000000e2e795 node`v8::internal::JsonParser<unsigned short>::ParseJsonValue() + 2565
    frame #15: 0x0000000000e2ee8f node`v8::internal::JsonParser<unsigned short>::ParseJson() + 15
    frame #16: 0x0000000000c24805 node`v8::internal::Builtin_Impl_JsonParse(v8::internal::BuiltinArguments, v8::internal::Isolate*) + 197
    frame #17: 0x0000000000c24f06 node`v8::internal::Builtin_JsonParse(int, unsigned long*, v8::internal::Isolate*) + 22
    frame #18: 0x0000000001401319 node`Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit + 57
    frame #19: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #20: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #21: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #22: 0x0000176914ed8a07
    frame #23: 0x00000000013944f9 node`Builtins_ArgumentsAdaptorTrampoline + 185
    frame #24: 0x000000000139a5c2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #25: 0x0000176914ee1334
    frame #26: 0x0000176914ed864c
    frame #27: 0x0000176914eda206
    frame #28: 0x0000176914edefa2
    frame #29: 0x0000176914ed62cc
    frame #30: 0x00000000013982da node`Builtins_JSEntryTrampoline + 90
    frame #31: 0x00000000013980b8 node`Builtins_JSEntry + 120
    frame #32: 0x0000000000cc2cf1 node`v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 449
    frame #33: 0x0000000000cc3b5f node`v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) + 95
    frame #34: 0x0000000000b8ba54 node`v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 324
    frame #35: 0x000000000096ad61 node`node::InternalCallbackScope::Close() + 1233
    frame #36: 0x000000000096b357 node`node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) + 647
    frame #37: 0x0000000000978f69 node`node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) + 121
    frame #38: 0x0000000000acf2d8 node`node::StreamBase::CallJSOnreadMethod(long, v8::Local<v8::ArrayBuffer>, unsigned long, node::StreamBase::StreamBaseJSChecks) (.constprop.105) + 168
    frame #39: 0x0000000000ad2cc6 node`node::EmitToJSStreamListener::OnStreamRead(long, uv_buf_t const&) + 886
    frame #40: 0x0000000000adc2b8 node`node::LibuvStreamWrap::OnUvRead(long, uv_buf_t const*) + 120
    frame #41: 0x0000000001387267 node`uv__read(stream=0x0000000005999cf0) at stream.c:1239:7
    frame #42: 0x0000000001387c20 node`uv__stream_io(loop=<unavailable>, w=0x0000000005999d78, events=1) at stream.c:1306:5
    frame #43: 0x000000000138e615 node`uv__io_poll at linux-core.c:462:11
    frame #44: 0x000000000137c468 node`uv_run(loop=0x000000000446c7c0, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #45: 0x0000000000a44974 node`node::NodeMainInstance::Run() + 580
    frame #46: 0x00000000009d1e15 node`node::Start(int, char**) + 277
    frame #47: 0x00007f1293054b25 libc.so.6`__libc_start_main + 213
    frame #48: 0x00000000009694cc node`_start + 41
  thread #4, stop reason = signal 0
    frame #0: 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f1293209260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #5, stop reason = signal 0
    frame #0: 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f1293209260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #6, stop reason = signal 0
    frame #0: 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f129320bb98 libpthread.so.0`__new_sem_wait_slow64.constprop.0 + 152
    frame #2: 0x000000000138a312 node`uv_sem_wait at thread.c:626:9
    frame #3: 0x000000000138a300 node`uv_sem_wait(sem=0x0000000004465600) at thread.c:682
    frame #4: 0x0000000000afbd45 node`node::inspector::(anonymous namespace)::StartIoThreadMain(void*) + 53
    frame #5: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #7, stop reason = signal 0
    frame #0: 0x0000000000cfd8d3 node`std::__detail::_Map_base<v8::internal::MemoryChunk*, std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData>, std::allocator<std::pair<v8::internal::MemoryChunk* const, v8::internal::MemoryChunkData> >, std::__detail::_Select1st, std::equal_to<v8::internal::MemoryChunk*>, v8::internal::MemoryChunk::Hasher, std::__detail::_Mod_range_hashing, std::__detail::_Default_ranged_hash, std::__detail::_Prime_rehash_policy, std::__detail::_Hashtable_traits<true, false, true>, true>::operator[](v8::internal::MemoryChunk* const&) + 19
    frame #1: 0x0000000000cfdbf9 node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 185
    frame #2: 0x0000000000d02735 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 12997
    frame #3: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #4: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #5: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #8, stop reason = signal 0
    frame #0: 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f1293209260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #9, stop reason = signal 0
    frame #0: 0x0000000000cff98e node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1310
    frame #1: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #4: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #10, stop reason = signal 0
    frame #0: 0x00007f129320f9ba libpthread.so.0`__futex_abstimed_wait_common64 + 202
    frame #1: 0x00007f1293209260 libpthread.so.0`pthread_cond_wait@@GLIBC_2.3.2 + 512
    frame #2: 0x000000000138a4d9 node`uv_cond_wait at thread.c:780:7
    frame #3: 0x0000000001376ed4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #4: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #5: 0x00007f129312c053 libc.so.6`__clone + 67
  thread #11, stop reason = signal 0
    frame #0: 0x00007f129320c6e0 libpthread.so.0`__lll_lock_wait + 48
    frame #1: 0x00007f1293205573 libpthread.so.0`__pthread_mutex_lock + 227
    frame #2: 0x0000000000d02eb2 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 14914
    frame #3: 0x0000000000c6c9eb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #4: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #5: 0x00007f1293203299 libpthread.so.0`start_thread + 233
    frame #6: 0x00007f129312c053 libc.so.6`__clone + 67
@hellivan
Author

Any update on this?

@Ayase-252
Member

Ayase-252 commented Mar 22, 2021

It can be reproduced on macOS, with the following exception:

node:events:346
      throw er; // Unhandled 'error' event
      ^

Error: write EINVAL
    at afterWriteDispatched (node:internal/stream_base_commons:160:15)
    at writevGeneric (node:internal/stream_base_commons:143:3)
    at Socket._writeGeneric (node:net:771:11)
    at Socket._writev (node:net:780:8)
    at doWrite (node:internal/streams/writable:412:12)
    at clearBuffer (node:internal/streams/writable:565:5)
    at onwrite (node:internal/streams/writable:467:7)
    at WriteWrap.onWriteComplete [as oncomplete] (node:internal/stream_base_commons:106:10)
Emitted 'error' event on Socket instance at:
    at emitErrorNT (node:internal/streams/destroy:188:8)
    at emitErrorCloseNT (node:internal/streams/destroy:153:3)
    at processTicksAndRejections (node:internal/process/task_queues:81:21) {
  errno: -22,
  code: 'EINVAL',
  syscall: 'write'
}

I also observe that the memory usage is huge (~2.0 GB real memory, ~4.0 GB memory, according to Activity Monitor). Could it be related to memory exhaustion or a leak?
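
One quick way to keep an eye on this from inside the process is to log process.memoryUsage() periodically (interval and formatting below are arbitrary):

setInterval(() => {
  const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
  const toMB = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';
  console.log(`rss=${toMB(rss)} heapUsed=${toMB(heapUsed)} heapTotal=${toMB(heapTotal)} external=${toMB(external)}`);
}, 5000).unref(); // unref() so the timer does not keep the process alive on its own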

@Ayase-252
Member

Because I'm not very familiar with the C++ backend, I'm going to label this with c++ so that the problem can be investigated further.

@Ayase-252 Ayase-252 added the c++ Issues and PRs that require attention from people who are familiar with C++. label Mar 22, 2021
@thomasmichaelwallace

thomasmichaelwallace commented Mar 29, 2021

I'm sorry it's not more useful, but this bug is a candidate for the continuous segfaults in our own system since upgrading from Node 12 to 14:

  • Download a series of large files from AWS S3
  • Decompress them using zlib.gunzip
  • Call JSON.parse() on the result

The segfault always happens on JSON.parse() (which I can verify with console.log on either side), and typically when memory use is getting towards 1 GB.
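
In code, the workflow is roughly the following (a sketch assuming the aws-sdk v2 client; the bucket/key names and the helper are placeholders, not the actual production code):

const zlib = require('zlib');
const { promisify } = require('util');
const AWS = require('aws-sdk'); // assumes aws-sdk v2

const gunzip = promisify(zlib.gunzip);
const s3 = new AWS.S3();

async function loadObject(bucket, key) {
  // 1. download a large gzipped file from S3
  const { Body } = await s3.getObject({ Bucket: bucket, Key: key }).promise();
  // 2. decompress it with zlib
  const raw = await gunzip(Body);
  // 3. synchronous JSON.parse on the result - this is the call that crashes
  return JSON.parse(raw.toString('utf8'));
}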

Unfortunately I'm running within an AWS Lambda environment, so there are no dumps. I'm working on recreating it locally, but it seems like this is the same issue (which is what gave me the hint to isolate the JSON parsing in the first place).

At the moment I know the following:

  • I'm not out of memory (the Lambda environment fails differently in that case)
  • There's nothing wrong with the files; the nature of the retries means that the same files are later processed fine
  • This exact same code works (and worked) on Node 12

I'm currently working on a local / smaller reproduction, although I regret my C++ is unlikely to be good enough to provide any real insights here; I just wanted to register my 'it is not just you'.

Edit: now with a near-identical backtrace:

General information about node instance
(llnode) v8 nodeinfo
Information for process id 14854 (process=0x2ae4cf281d81)
Platform = linux, Architecture = x64, Node Version = v14.16.0
Component versions (process.versions=0x112a563b3f29):
    ares = 1.16.1
    brotli = 1.0.9
    cldr = 37.0
    icu = 67.1
    llhttp = 2.1.3
    modules = 83
    napi = 7
    nghttp2 = 1.41.0
    node = 14.16.0
    openssl = 1.1.1j
    tz = 2020a
    unicode = 13.0
    uv = 1.40.0
    v8 = 8.4.371.19-node.18
    zlib = 1.2.11
Release Info (process.release=0x112a563b4161):
    name = node
    lts = Fermium
    sourceUrl = https://nodejs.org/download/release/v14.16.0/node-v14.16.0.tar.gz
    headersUrl = https://nodejs.org/download/release/v14.16.0/node-v14.16.0-headers.tar.gz
Executable Path = /home/ubuntu/.nvm/versions/node/v14.16.0/bin/node
Command line arguments (process.argv=0x112a563b4039):
    [0] = '/home/ubuntu/.nvm/versions/node/v14.16.0/bin/node'
    [1] = '/home/ubuntu/persist/code/backend/packages/segfault/.webpack/.script.js'
Node.js Command line arguments (process.execArgv=0x112a563b4101):
List of all threads
(llnode) thread list
Process 14854 stopped
* thread #1: tid = 14857, 0x0000000000cff994 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 14855, 0x00007f829681e5ce libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #3: tid = 14854, 0x00007f829688a8e9 libc.so.6`___lldb_unnamed_symbol1092$$libc.so.6 + 633, stop reason = signal 0
  thread #4: tid = 14870, 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #5: tid = 14860, 0x00007f82969013f4 libpthread.so.0`do_futex_wait at futex-internal.h:320:13, stop reason = signal 0
  thread #6: tid = 14856, 0x0000000000cfdb4c node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 60, stop reason = signal 0
  thread #7: tid = 14868, 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #8: tid = 14867, 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #9: tid = 14869, 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #10: tid = 14859, 0x0000000000cff942 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1282, stop reason = signal 0
  thread #11: tid = 14858, 0x0000000000cffec1 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 2689, stop reason = signal 0
Threads' backtrace
(llnode) bt all
* thread #1, name = 'node', stop reason = signal SIGSEGV
  * frame #0: 0x0000000000cff994 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
    frame #1: 0x0000000000c6c9bb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #4: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #2, stop reason = signal 0
    frame #0: 0x00007f829681e5ce libc.so.6`epoll_wait + 94
    frame #1: 0x000000000138e994 node`uv__io_poll at linux-core.c:324:14
    frame #2: 0x000000000137c438 node`uv_run(loop=0x0000000005fb4828, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #3: 0x0000000000a75f4b node`node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start()::'lambda'(void*)::_FUN(void*) + 123
    frame #4: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #5: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #3, stop reason = signal 0
    frame #0: 0x00007f829688a8e9 libc.so.6`___lldb_unnamed_symbol1092$$libc.so.6 + 633
    frame #1: 0x00000000014650c3 node`Builtins_TypedArrayPrototypeSet + 835
    frame #2: 0x000037143acc7cd8
    frame #3: 0x000000000139a5a2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #4: 0x000037143acc3263
    frame #5: 0x000037143acc3780
    frame #6: 0x00000000013944d9 node`Builtins_ArgumentsAdaptorTrampoline + 185
    frame #7: 0x000000000139a5a2 node`Builtins_InterpreterEntryTrampoline + 194
    frame #8: 0x000037143acd54e0
    frame #9: 0x00000000013982ba node`Builtins_JSEntryTrampoline + 90
    frame #10: 0x0000000001398098 node`Builtins_JSEntry + 120
    frame #11: 0x0000000000cc2cc1 node`v8::internal::(anonymous namespace)::Invoke(v8::internal::Isolate*, v8::internal::(anonymous namespace)::InvokeParams const&) + 449
    frame #12: 0x0000000000cc3b2f node`v8::internal::Execution::Call(v8::internal::Isolate*, v8::internal::Handle<v8::internal::Object>, v8::internal::Handle<v8::internal::Object>, int, v8::internal::Handle<v8::internal::Object>*) + 95
    frame #13: 0x0000000000b8ba24 node`v8::Function::Call(v8::Local<v8::Context>, v8::Local<v8::Value>, int, v8::Local<v8::Value>*) + 324
    frame #14: 0x000000000096ad61 node`node::InternalCallbackScope::Close() + 1233
    frame #15: 0x000000000096b357 node`node::InternalMakeCallback(node::Environment*, v8::Local<v8::Object>, v8::Local<v8::Object>, v8::Local<v8::Function>, int, v8::Local<v8::Value>*, node::async_context) + 647
    frame #16: 0x0000000000978f69 node`node::AsyncWrap::MakeCallback(v8::Local<v8::Function>, int, v8::Local<v8::Value>*) + 121
    frame #17: 0x0000000000ac0bbf node`non-virtual thunk to node::(anonymous namespace)::CompressionStream<node::(anonymous namespace)::ZlibContext>::AfterThreadPoolWork(int) + 255
    frame #18: 0x00000000009d8475 node`node::ThreadPoolWork::ScheduleWork()::'lambda0'(uv_work_s*, int)::_FUN(uv_work_s*, int) + 341
    frame #19: 0x000000000137750d node`uv__work_done(handle=0x000000000446c870) at threadpool.c:313:5
    frame #20: 0x000000000137bb06 node`uv__async_io.part.1 at async.c:163:5
    frame #21: 0x000000000138e5e5 node`uv__io_poll at linux-core.c:462:11
    frame #22: 0x000000000137c438 node`uv_run(loop=0x000000000446c7c0, mode=UV_RUN_DEFAULT) at core.c:385:5
    frame #23: 0x0000000000a44974 node`node::NodeMainInstance::Run() + 580
    frame #24: 0x00000000009d1e15 node`node::Start(int, char**) + 277
    frame #25: 0x00007f82967230b3 libc.so.6`__libc_start_main + 243
    frame #26: 0x00000000009694cc node`_start + 41
  thread #4, stop reason = signal 0
    frame #0: 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13
    frame #1: 0x00007f82968fe359 libpthread.so.0`__pthread_cond_wait at pthread_cond_wait.c:508
    frame #2: 0x00007f82968fe290 libpthread.so.0`__pthread_cond_wait(cond=0x000000000446c760, mutex=0x000000000446c720) at pthread_cond_wait.c:638
    frame #3: 0x000000000138a4a9 node`uv_cond_wait at thread.c:780:7
    frame #4: 0x0000000001376ea4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #5: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #6: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #5, stop reason = signal 0
    frame #0: 0x00007f82969013f4 libpthread.so.0`do_futex_wait at futex-internal.h:320:13
    frame #1: 0x00007f82969013ca libpthread.so.0`do_futex_wait(sem=0x0000000004465600, abstime=0x0000000000000000, clockid=0) at sem_waitcommon.c:112
    frame #2: 0x00007f82969014e8 libpthread.so.0`__new_sem_wait_slow(sem=0x0000000004465600, abstime=0x0000000000000000, clockid=0) at sem_waitcommon.c:184:10
    frame #3: 0x000000000138a2e2 node`uv_sem_wait at thread.c:626:9
    frame #4: 0x000000000138a2d0 node`uv_sem_wait(sem=0x0000000004465600) at thread.c:682
    frame #5: 0x0000000000afbd45 node`node::inspector::(anonymous namespace)::StartIoThreadMain(void*) + 53
    frame #6: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #7: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #6, stop reason = signal 0
    frame #0: 0x0000000000cfdb4c node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 60
    frame #1: 0x0000000000cffe82 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 2626
    frame #2: 0x0000000000c6c9bb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #3: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #4: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #5: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #7, stop reason = signal 0
    frame #0: 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13
    frame #1: 0x00007f82968fe359 libpthread.so.0`__pthread_cond_wait at pthread_cond_wait.c:508
    frame #2: 0x00007f82968fe290 libpthread.so.0`__pthread_cond_wait(cond=0x000000000446c760, mutex=0x000000000446c720) at pthread_cond_wait.c:638
    frame #3: 0x000000000138a4a9 node`uv_cond_wait at thread.c:780:7
    frame #4: 0x0000000001376ea4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #5: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #6: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #8, stop reason = signal 0
    frame #0: 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13
    frame #1: 0x00007f82968fe359 libpthread.so.0`__pthread_cond_wait at pthread_cond_wait.c:508
    frame #2: 0x00007f82968fe290 libpthread.so.0`__pthread_cond_wait(cond=0x000000000446c760, mutex=0x000000000446c720) at pthread_cond_wait.c:638
    frame #3: 0x000000000138a4a9 node`uv_cond_wait at thread.c:780:7
    frame #4: 0x0000000001376ea4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #5: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #6: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #9, stop reason = signal 0
    frame #0: 0x00007f82968fe376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13
    frame #1: 0x00007f82968fe359 libpthread.so.0`__pthread_cond_wait at pthread_cond_wait.c:508
    frame #2: 0x00007f82968fe290 libpthread.so.0`__pthread_cond_wait(cond=0x000000000446c760, mutex=0x000000000446c720) at pthread_cond_wait.c:638
    frame #3: 0x000000000138a4a9 node`uv_cond_wait at thread.c:780:7
    frame #4: 0x0000000001376ea4 node`worker(arg=0x0000000000000000) at threadpool.c:76:7
    frame #5: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #6: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #10, stop reason = signal 0
    frame #0: 0x0000000000cff942 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1282
    frame #1: 0x0000000000c6c9bb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #4: 0x00007f829681e293 libc.so.6`__clone + 67
  thread #11, stop reason = signal 0
    frame #0: 0x0000000000cffec1 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 2689
    frame #1: 0x0000000000c6c9bb node`non-virtual thunk to v8::internal::CancelableTask::Run() + 59
    frame #2: 0x0000000000a71405 node`node::(anonymous namespace)::PlatformWorkerThread(void*) + 405
    frame #3: 0x00007f82968f7609 libpthread.so.0`start_thread(arg=<unavailable>) at pthread_create.c:477:8
    frame #4: 0x00007f829681e293 libc.so.6`__clone + 67

@thomasmichaelwallace

thomasmichaelwallace commented Mar 29, 2021

This happens with the --no-concurrent-marking flag (blocking GC) too:

* thread #1: tid = 18979, 0x0000000000d7be8d node`unsigned long v8::internal::MarkCompactCollector::ProcessMarkingWorklist<(v8::internal::MarkCompactCollector::MarkingWorklistProcessingMode)0>(unsigned long) + 173, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 18981, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #3: tid = 18980, 0x00007f60e58d75ce libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #4: tid = 18985, 0x00007f60e59ba3f4 libpthread.so.0`do_futex_wait at futex-internal.h:320:13, stop reason = signal 0
  thread #5: tid = 18992, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #6: tid = 18984, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #7: tid = 18990, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #8: tid = 18991, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #9: tid = 18982, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #10: tid = 18989, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #11: tid = 18983, 0x00007f60e59b7376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0

I wonder if #37106 (SIGSEGV for address: 0x0 in ConcurrentMarking::Run) is related; similar story: upgrading from 12 to 14 results in sporadic segfaults, seemingly when garbage collection is triggered while using concurrent socket requests.

@hellivan
Author

@thomasmichaelwallace thank you for your confirmation. From my point of view #37106 may be related or even the same issue. However, from the issue description I was not able to derive a direct connection between the two problems.

As far as I can tell, the segfault in our use case is caused by an error in the Node.js/V8 internal data-structure/memory management that gets triggered by high load on the main thread while reading big data chunks from sockets.
Since this seems to be a legitimate use case for Node.js, I was wondering why nobody else has had this problem, or why nobody seems to be worried about this issue, as errors in memory management may lead to serious security problems in some cases.
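
For readers who cannot open the linked repository, a minimal sketch of that kind of workload (made up for illustration; it is not the code from the reproduction repo, only the same pattern of large socket payloads plus synchronous JSON.parse) would look something like this:

// workload-sketch.js - illustrative only; a single process keeps the sketch self-contained
const net = require('net');

// a reasonably large JSON document (roughly 10 MB once stringified)
const payload = JSON.stringify({ items: new Array(100000).fill('x'.repeat(100)) }) + '\n';

const server = net.createServer((socket) => {
  // keep pushing big chunks at the receiver
  const timer = setInterval(() => socket.write(payload), 5);
  socket.on('close', () => clearInterval(timer));
});

server.listen(5673, () => {
  const client = net.connect(5673, '127.0.0.1');
  let buffered = '';
  client.on('data', (chunk) => {
    buffered += chunk;
    let newline;
    while ((newline = buffered.indexOf('\n')) !== -1) {
      const message = buffered.slice(0, newline);
      buffered = buffered.slice(newline + 1);
      JSON.parse(message); // expensive synchronous work while more data keeps arriving
    }
  });
});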

From my research I found that @gireeshpunathil has solved some issues with a similar context in the past (#25814). Maybe we should ask him for advice?

@gireeshpunathil
Member

let us start by understanding the failing context a little deeper:

  • select the failing thread:
    • thread list followed by thread select <thread number>
  • get the instruction pointer:
    • reg read rip
  • dump a few instructions behind the faulty one
    • di -s <current rip value - 80> -c 20

just want to state that it is going to be an iterative process!
alternatively, if you have a standalone recreate, let me know - I can reproduce and debug myself!

@hellivan
Author

Hi @gireeshpunathil, thank you for your fast reply.
Right now I have no PC at hand. I will try to provide you with the requested information later today, as soon as I get home. Do you have any preference for the debugger (gdb or llnode)?

Besides: in the issue description I referenced a repository that contains a sample application that should recreate the issue.

@gireeshpunathil
Member

  • gdb is fine
  • the code in the referenced repo - will try

@gireeshpunathil
Member

never mind, I am able to recreate with your sample program! thanks for the nice setup for the recreate!

#node ./new_server_test.js &
[1] 89290
#Server listening on port 5673

#node ./new_client_test.js 
Got startsegment!
0: 2.147s
1: 1.820s
2: 2.043s
3: 2.315s
4: 2.168s
Connection closed
[1]+  Segmentation fault: 11  node ./new_server_test.js
#l

I will debug and let you know!

@thomasmichaelwallace

thomasmichaelwallace commented Mar 30, 2021

Thank you for taking a look at this @gireeshpunathil!

It would seem too much of a coincidence for mine and hellivan's socket+gc segfaults to have different underlying causes, so I'm going to trust that his reproduction repo will be enough.

For what it's worth, here are the results of my following those commands; just in case it becomes immediately obvious to you that mine is a different issue, which I should separately raise:

(llnode) thread list
Process 21256 stopped
* thread #1: tid = 21261, 0x0000000000cff994 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 21256, 0x00007f352d5288e9 libc.so.6`___lldb_unnamed_symbol1092$$libc.so.6 + 633, stop reason = signal 0
  thread #3: tid = 21272, 0x00007f352d59c376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #4: tid = 21262, 0x00007f352d59f3f4 libpthread.so.0`do_futex_wait at futex-internal.h:320:13, stop reason = signal 0
  thread #5: tid = 21257, 0x00007f352d4bc5ce libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #6: tid = 21259, 0x0000000000cff957 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1303, stop reason = signal 0
  thread #7: tid = 21273, 0x00007f352d59c376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #8: tid = 21270, 0x00007f352d59c376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
  thread #9: tid = 21258, 0x00007f352d43174b libc.so.6`___lldb_unnamed_symbol378$$libc.so.6 + 43, stop reason = signal 0
  thread #10: tid = 21260, 0x0000000000cfdb7d node`v8::internal::ConcurrentMarkingVisitor::ShouldVisit(v8::internal::HeapObject) + 109, stop reason = signal 0
  thread #11: tid = 21271, 0x00007f352d59c376 libpthread.so.0`__pthread_cond_wait at futex-internal.h:183:13, stop reason = signal 0
(llnode) thread select 1
* thread #1, name = 'node', stop reason = signal SIGSEGV
    frame #0: 0x0000000000cff994 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
node`v8::internal::ConcurrentMarking::Run:
->  0xcff994 <+1364>: addb   %al, (%rax)
    0xcff996 <+1366>: addb   %al, (%rax)
    0xcff998 <+1368>: addb   %al, (%rax)
    0xcff99a <+1370>: addb   %al, (%rax)
(llnode) reg read rip
     rip = 0x0000000000cff994  node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
(llnode) di -s (0x0000000000cff994-80) -c 20
node`v8::internal::ConcurrentMarking::Run:
    0xcff944 <+1284>: addb   %al, (%rax)
    0xcff946 <+1286>: addb   %al, (%rax)
    0xcff948 <+1288>: addb   %al, (%rax)
    0xcff94a <+1290>: addb   %al, (%rax)
    0xcff94c <+1292>: addb   %al, (%rax)
    0xcff94e <+1294>: addb   %al, (%rax)
    0xcff950 <+1296>: addb   %al, (%rax)
    0xcff952 <+1298>: addb   %al, (%rax)
    0xcff954 <+1300>: addb   %al, (%rax)
    0xcff956 <+1302>: addb   %al, (%rax)
    0xcff958 <+1304>: addb   %al, (%rax)
    0xcff95a <+1306>: addb   %al, (%rax)
    0xcff95c <+1308>: addb   %al, (%rax)
    0xcff95e <+1310>: addb   %al, (%rax)
    0xcff960 <+1312>: addb   %al, (%rax)
    0xcff962 <+1314>: addb   %al, (%rax)
    0xcff964 <+1316>: addb   %al, (%rax)
    0xcff966 <+1318>: addb   %al, (%rax)
    0xcff968 <+1320>: addb   %al, (%rax)
    0xcff96a <+1322>: addb   %al, (%rax)

@bolt-juri-gavshin

Hello! I am the creator of #37106 and it looks like my issue is exactly the same.

@hellivan
Author

hellivan commented Mar 30, 2021

Thank you very much for your efforts @gireeshpunathil.

If it still helps, this would be the output of my debug session in llnode:

(llnode) thread list
Process 27845 stopped
* thread #1: tid = 27847, 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364, name = 'node', stop reason = signal SIGSEGV
  thread #2: tid = 27848, 0x0000000000cfc324 node`v8::internal::ConcurrentMarkingVisitor::VisitPointersInSnapshot(v8::internal::HeapObject, v8::internal::SlotSnapshot const&) + 68, stop reason = signal 0
  thread #3: tid = 27849, 0x0000000000cff987 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1303, stop reason = signal 0
  thread #4: tid = 27851, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #5: tid = 27850, 0x00007f2db786c8e2 libc.so.6`malloc + 770, stop reason = signal 0
  thread #6: tid = 27845, 0x0000000000d49001 node`v8::internal::IncrementalMarking::RecordWriteSlow(v8::internal::HeapObject, v8::internal::FullHeapObjectSlot, v8::internal::HeapObject) + 65, stop reason = signal 0
  thread #7: tid = 27853, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #8: tid = 27852, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #9: tid = 27846, 0x00007f2db78e039e libc.so.6`epoll_wait + 94, stop reason = signal 0
  thread #10: tid = 27854, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
  thread #11: tid = 27855, 0x00007f2db79c39ba libpthread.so.0`__futex_abstimed_wait_common64 + 202, stop reason = signal 0
(llnode) thread select 1
* thread #1, name = 'node', stop reason = signal SIGSEGV
    frame #0: 0x0000000000cff9c4 node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
node`v8::internal::ConcurrentMarking::Run:
->  0xcff9c4 <+1364>: addb   %al, (%rax)
    0xcff9c6 <+1366>: addb   %al, (%rax)
    0xcff9c8 <+1368>: addb   %al, (%rax)
    0xcff9ca <+1370>: addb   %al, (%rax)
(llnode) reg read rip
     rip = 0x0000000000cff9c4  node`v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) + 1364
(llnode) di -s (0x0000000000cff9c4-80) -c 20
node`v8::internal::ConcurrentMarking::Run:
    0xcff974 <+1284>: addb   %al, (%rax)
    0xcff976 <+1286>: addb   %al, (%rax)
    0xcff978 <+1288>: addb   %al, (%rax)
    0xcff97a <+1290>: addb   %al, (%rax)
    0xcff97c <+1292>: addb   %al, (%rax)
    0xcff97e <+1294>: addb   %al, (%rax)
    0xcff980 <+1296>: addb   %al, (%rax)
    0xcff982 <+1298>: addb   %al, (%rax)
    0xcff984 <+1300>: addb   %al, (%rax)
    0xcff986 <+1302>: addb   %al, (%rax)
    0xcff988 <+1304>: addb   %al, (%rax)
    0xcff98a <+1306>: addb   %al, (%rax)
    0xcff98c <+1308>: addb   %al, (%rax)
    0xcff98e <+1310>: addb   %al, (%rax)
    0xcff990 <+1312>: addb   %al, (%rax)
    0xcff992 <+1314>: addb   %al, (%rax)
    0xcff994 <+1316>: addb   %al, (%rax)
    0xcff996 <+1318>: addb   %al, (%rax)
    0xcff998 <+1320>: addb   %al, (%rax)
    0xcff99a <+1322>: addb   %al, (%rax)

and gdb (since the results from llnode do not make any sense to me):

(gdb) info registers
rip            0xcff9c4            0xcff9c4 <v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*)+1364>
(gdb) x/20i $rip-80
   0xcff974 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1284>:	subb   $0x1,(%rax)
   0xcff977 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1287>:	add    %al,(%rax)
   0xcff979 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1289>:	mov    0x80(%rax),%rsi
   0xcff980 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1296>:	mov    -0x11a8(%rbp),%r15
   0xcff987 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1303>:	lea    -0x1(%r15),%rax
   0xcff98b <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1307>:	cmp    %rcx,%rax
   0xcff98e <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1310>:	setae  %cl
   0xcff991 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1313>:	cmp    %rdx,%rax
   0xcff994 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1316>:	setb   %dl
   0xcff997 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1319>:	test   %dl,%cl
   0xcff999 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1321>:	jne    0xd02d48 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+14552>
   0xcff99f <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1327>:	cmp    %rsi,%rax
   0xcff9a2 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1330>:	je     0xd02d48 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+14552>
   0xcff9a8 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1336>:	mov    -0x1(%r15),%r13
   0xcff9ac <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1340>:	cmpb   $0x0,-0x11c8(%rbp)
   0xcff9b3 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1347>:	jne    0xd032e0 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+15984>
   0xcff9b9 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1353>:	mov    -0x11a8(%rbp),%r15
   0xcff9c0 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1360>:	lea    0xa(%r13),%r14
=> 0xcff9c4 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1364>:	movzbl (%r14),%eax
   0xcff9c8 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1368>:	cmp    $0x47,%al

@gireeshpunathil
Member

@bolt-juri-gavshin - thanks. @hellivan - thanks for the detailed data. As I said, I have the repro now! An interim update:

(gdb) set disassembly-flavor intel
(gdb) x/i $rip
=> 0xcff9c4 <_ZN2v88internal17ConcurrentMarking3RunEiPNS1_9TaskStateE+1364>:	
    movzx  eax,BYTE PTR [r14]
(gdb) i r r14
r14            0x3938363632303039  4123105065255841849
(gdb) x/b $r14
0x3938363632303039:	Cannot access memory at address 0x3938363632303039
(gdb) 
  • the immediate cause of the crash is a memory overwrite
  • the issue is reproducible in the v14.x line, not in the v15.x line
  • the issue vanishes if I run under a debugger or valgrind

@thomasmichaelwallace

thomasmichaelwallace commented Mar 31, 2021

Thanks for the update @gireeshpunathil.

I might be misunderstanding your comment, but both hellivan and I can reproduce in v15.10.0 and v15.12.0 respectively (but not in v12.x).

@gireeshpunathil
Member

@thomasmichaelwallace - sorry for the confusion - OK, I will then reword it as: I was unable to reproduce in the v15.x line with the small reproduction code supplied above.

@bolt-juri-gavshin

0x3938363632303039: Cannot access memory at address 0x3938363632303039

>Buffer.from("3938363632303039", "hex").toString();
'98662009'
> Number(98662009).toString(16)
'5e17679'

Is it a coincidence that the address is a numeric ASCII string?

@hellivan
Author

Any update on this?

@hellivan
Author

hellivan commented Apr 21, 2021

Just for information:
I was able to reproduce the problem with the given sample code in Node.js version 16.0.0.

@gireeshpunathil
Member

I was unable to spend time on this.

/cc @nodejs/v8 in case they find anything obvious from the failing context.

@bolt-juri-gavshin

Any update on this? Maybe we can help with something?

@thomasmichaelwallace

I'm very much out of my depth with C++, but I wonder if it's worth taking it to the V8 team (I can't imagine node uses its own garbage collector?).

[Definitely still a major issue for us; it causes about 100 crashes an hour in production.]

@gireeshpunathil
Member

I am sorry that it is causing visible impact to your production. Unfortunately I am out of ideas - a consistent recreate, but it does not show up under a debug build or valgrind!

pinging debug specialists @addaleax, @mhdawson and @mmarchini to see if there are other techniques to debug memory corruption issues without valgrind.

@mhdawson
Member

@gireeshpunathil, one thought I have is trying to see if it is related to V8 JIT activity.

A few things to try would include:

  • turning off the JIT completely (I've not tried passing this to Node.js, but V8 supports --jitless). If the problem still recreates, then we could probably exclude the JIT.
  • turning on JIT logging. That might show some new info; for example, if we see methods being compiled just before the crash, it might point in the direction of how those methods are compiled. I think the option would be --trace_opt.
  • selectively turning off optimizations. If the previous steps point in the direction of a compilation resulting in the crash, then selectively disabling some of the optimizations might help narrow down which one (example invocations are sketched below).
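
For example, invocations along these lines (an illustration only; direct.js stands in for the reproduction script used elsewhere in this thread, and all three are V8 flags that node forwards to V8):

# run with the JIT disabled entirely (interpreter only)
node --jitless direct.js

# log optimization decisions; V8 accepts --trace_opt and --trace-opt interchangeably
node --trace-opt direct.js

# run with TurboFan optimizations disabled
node --no-opt direct.js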

@mhdawson
Member

mhdawson commented May 12, 2021

@gireeshpunathil one more question: when you ran valgrind, did you use a debug build or the regular build? I've recently seen a case where compiling with debug eliminated the problem (likely due to it initializing variables to 0) but valgrind on the non-debug build did report issues.

@gireeshpunathil
Member

@mhdawson - thanks!

plain release: crash
plain debug: no crash
release with valgrind: no crash
debug with valgrind: did not try (assumed it is a more conservative combo than the previous two)

btw, are there less pervasive options in valgrind? I can see the available options, but I am not sure about the degree of influence they will have on the runtime behavior.

@mhdawson
Member

@gireeshpunathil in terms of less pervasive options with valgrind, I'm not aware of any. I know you can enable additional checking/generation of info but I usually start by just running with the defaults.

@mhdawson
Member

Another suggestion is to narrow down the "passes with version X, fails with version Y" range to two sequential releases. At that point you can look at the changes between those two releases and see if there is a V8 update or any other change that looks like it could be related. @hellivan that might be something you could do.

It might still be a larger number of changes if it's something like "passes with the latest version of 12.x, fails with 14.0.0", but still useful I think.
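
A rough sketch of that narrowing using nvm (illustrative only; the script name direct.js and the version list are assumptions, and a segfault shows up as a non-zero exit status):

# run the reproduction against a few releases and note pass/fail
for v in 12.22.1 13.14.0 14.0.0 14.17.0; do
  nvm install "$v" > /dev/null
  echo "--- v$v"
  nvm exec "$v" node direct.js && echo "pass" || echo "fail (exit $?)"
done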

@thomasmichaelwallace

thomasmichaelwallace commented May 14, 2021

So, taking on board your suggestion, @mhdawson, I've discovered the following:

  • v16.10.0 fail (segmentation fault)
  • v14.17.0 fail (segmentation fault)
  • v14.0.0 fail (segmentation fault)
  • v13.14.0 pass
  • v12.22.1 pass
  • Chrome Version 92.0.4506.0 (Official Build) canary (arm64) pass

Unfortunately, as far as I can tell, v13 -> v14 is 1890 commits (Change Log)

Critically, though, it does include upgrading V8 to 8.1.307.20, which might provide some insight?

If there are any more versions worth testing to gain some insight (I'm assuming v13.14.0 -> v14.0.0 are consecutive), I'm happy to do the leg work.

I have noticed that the segmentation fault seems to happen nearly immediately in v14.0.0, but takes longer in v14.17.0.

@gireeshpunathil
Member

Unfortunately, as far as I can tell, v13 -> v14 is 1890 commits (Change Log)

sure, but luckily we have the git bisect tool, which reduces the iteration count logarithmically (2^11 = 2048, which is more than 1890, so it can pinpoint the culprit in at most 11 iterations). I will see if I can perform that this weekend.
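
For anyone following along, the mechanical shape of that bisect looks roughly like this (a sketch, not a transcript; the repro scripts are the ones from the sample repository above, and git bisect run can automate the good/bad marking if the test exits non-zero on a crash):

git bisect start
git bisect bad  v14.0.0      # first known-bad release
git bisect good v13.14.0     # last known-good release

# at each step git checks out a candidate commit; build it and run the repro
./configure && make -j4
./node new_server_test.js &  # repro scripts from the sample repository
./node new_client_test.js
git bisect good              # or: git bisect bad, depending on the outcome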

@gireeshpunathil
Member

my observation was different from that of @thomasmichaelwallace in terms of passing and failing versions, so don't take this as final and conclusive.

the git bisect pointed me to 323da6f - tls: add highWaterMark option for connect. I am not sure how this would cause an issue like this, as there is no native code in that commit. It can only be explained by it changing the memory layout / allocation pattern enough to manifest an otherwise hidden issue?

@thomasmichaelwallace

Thanks for introducing me to git bisect @gireeshpunathil !

My test to reproduce stops at:

2883c85 "deps: update V8 to 8.1.307.20"

I would guess that updating V8 makes sense as a candidate for the change in behaviour that leads to this bug.

Annoyingly I can't actually get this specific commit to build because make[1]: *** No rule to make target '../deps/v8/src/builtins/arguments.tq', needed by '5542a488c9b4038d05b7e22568e839c0c534ce20.intermediate'. Stop..

ubuntu in node at tom on (git)-[v14.0.0~380|bisect]- took 34s
➜ git bisect good
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[2883c855e0105b51e5c8020d21458af109ffe3d4] deps: update V8 to 8.1.307.20

ubuntu in node at tom on (git)-[v14.0.0~379|bisect]-
➜ git clean -xdf
Removing __pycache__/
Removing config.gypi
Removing config.mk
Removing config.status
Removing deps/icu-tmp/
Removing deps/v8/third_party/inspector_protocol/__pycache__/
Removing deps/v8/third_party/jinja2/__pycache__/
Removing deps/v8/third_party/markupsafe/__pycache__/
Removing deps/v8/tools/node/__pycache__/
Removing icu_config.gypi
Removing node
Removing out/
Removing tools/__pycache__/
Removing tools/configure.d/__pycache__/
Removing tools/gyp/pylib/gyp/__pycache__/
Removing tools/gyp/pylib/gyp/generator/__pycache__/
Removing tools/inspector_protocol/__pycache__/
Removing tools/inspector_protocol/jinja2/__pycache__/
Removing tools/inspector_protocol/markupsafe/__pycache__/
Removing tools/v8_gypfiles/__pycache__/

ubuntu in node at tom on (git)-[v14.0.0~379|bisect]-
➜ ./configure
Node.js configure: Found Python 3.8.5...
INFO: configure completed successfully

ubuntu in node at tom on (git)-[v14.0.0~379|bisect]- took 2s
➜ make -j4
make -C out BUILDTYPE=Release V=0
  touch /home/ubuntu/node/node/out/Release/obj.target/tools/v8_gypfiles/v8_version.stamp
make[1]: *** No rule to make target '../deps/v8/src/builtins/arguments.tq', needed by '5542a488c9b4038d05b7e22568e839c0c534ce20.intermediate'.  Stop.
make[1]: *** Waiting for unfinished jobs....
make: *** [Makefile:101: node] Error 2

@thomasmichaelwallace

At the risk of oversharing:

git bisect start
# bad: [73aa21658dfa6a22c06451d080152b32b1f98dbe] 2020-04-21, Version 14.0.0 (Current)
git bisect bad 73aa21658dfa6a22c06451d080152b32b1f98dbe
# good: [9fc74f161371a6898f22747184a0ece522f5b912] 2020-04-29, Version 13.14.0 (Current)
git bisect good 9fc74f161371a6898f22747184a0ece522f5b912
# good: [90b5f1b1078961897082842b89d7bc9631ebf312] tools: update remark-preset-lint-node to 1.10.1
git bisect good 90b5f1b1078961897082842b89d7bc9631ebf312
# good: [83909e04c0b9e985a84a15301f9021d00226d859] doc: standardize on "host name" in fs.md
git bisect good 83909e04c0b9e985a84a15301f9021d00226d859
# good: [6db6af405729d47d675edc0a5e87eb2aeb39df7b] build: macOS package notarization
git bisect good 6db6af405729d47d675edc0a5e87eb2aeb39df7b
# bad: [de877c57813d7894c49a9cf32e644c04050c8cda] benchmark: use let instead of var in timers
git bisect bad de877c57813d7894c49a9cf32e644c04050c8cda
# bad: [543c046feb015c8290e2408d210916976126b24f] deps: update to uvwasi 0.0.6
git bisect bad 543c046feb015c8290e2408d210916976126b24f
# good: [e322f74ce1f8ff3ce1224a7ea9264542871aec3b] test: refactor and simplify test-repl-preview
git bisect good e322f74ce1f8ff3ce1224a7ea9264542871aec3b
# good: [1a3c7473ec9ca0e1acf0512996e05eecc870f7bf] test: use Promise.all() in test-hash-seed
git bisect good 1a3c7473ec9ca0e1acf0512996e05eecc870f7bf
# bad: [f90eba1d91690c89b442d69b800ab23bf12eb0e1] deps: make v8.h compatible with VS2015
git bisect bad f90eba1d91690c89b442d69b800ab23bf12eb0e1
# bad: [da92f15413c0382eff2b8e648d58da1d9ae726f6] build: reset embedder string to "-node.0"
git bisect bad da92f15413c0382eff2b8e648d58da1d9ae726f6
# good: [b2d34666495ff120f7d9b4fceb55b3f1b32ddb77] doc: complete n-api version matrix
git bisect good b2d34666495ff120f7d9b4fceb55b3f1b32ddb77
# good: [5f0af2af2a67216e00fe07ccda11e889d14abfcd] tools: fixup icutrim.py use of string and bytes objects
git bisect good 5f0af2af2a67216e00fe07ccda11e889d14abfcd

I do not know if it is important, but the final two git bisect bads were due to builds failing.

@thomasmichaelwallace

Just to follow up (again; sorry) - I did a ./configure --debug --enable-asan / make -j4 build of tags/v14.17.0 and (1 hour 48 minutes later) ended up with a /node/out/Debug/node that gave 928625 segmentation fault (core dumped) against my reproduction script.

I guess the question is (and remember, my C++ is limited to the ability to build other people's stuff): what can I do, now that I have this, to debug the problem?

I'm happy to share anything.

@thomasmichaelwallace

thomasmichaelwallace commented May 15, 2021

Output from valgrind --leak-check=yes node direct.js (my reproduction, where node is the v14.17.0 executable; release) - attempting to reproduce with debug (it turns out I can't, because ASan and Valgrind do not play nicely together):

==944403== Thread 6:
==944403== Invalid read of size 1
==944403==    at 0xD1E474: v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xC8B13A: non-virtual thunk to v8::internal::CancelableTask::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA8FAF4: node::(anonymous namespace)::PlatformWorkerThread(void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==    by 0x4FCE292: clone (clone.S:95)
==944403==  Address 0x363c is not stack'd, malloc'd or (recently) free'd
==944403==
==944403==
==944403== Process terminating with default action of signal 11 (SIGSEGV)
==944403==    at 0x4469229: raise (raise.c:46)
==944403==    by 0x9ECF32: node::TrapWebAssemblyOrContinue(int, siginfo_t*, void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x44693BF: ??? (in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so)
==944403==    by 0xD1E473: v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xC8B13A: non-virtual thunk to v8::internal::CancelableTask::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA8FAF4: node::(anonymous namespace)::PlatformWorkerThread(void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==    by 0x4FCE292: clone (clone.S:95)
==944403==
==944403== HEAP SUMMARY:
==944403==     in use at exit: 532,198,124 bytes in 138,519 blocks
==944403==   total heap usage: 1,319,839 allocs, 1,181,320 frees, 3,576,477,627 bytes allocated
==944403==
==944403== Thread 1:
==944403== 50 bytes in 1 blocks are possibly lost in loss record 2,125 of 3,471
==944403==    at 0x42CEE63: operator new(unsigned long) (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x125248D: v8_inspector::String16::String16(unsigned short const*, unsigned long) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x1254934: v8_inspector::toString16(v8_inspector::StringView const&) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x124DFE2: v8_inspector::InspectedContext::InspectedContext(v8_inspector::V8InspectorImpl*, v8_inspector::V8ContextInfo const&, int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x128C660: v8_inspector::V8InspectorImpl::contextCreated(v8_inspector::V8ContextInfo const&) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xB17C06: node::inspector::Agent::Start(std::string const&, node::DebugOptions const&, std::shared_ptr<node::ExclusiveAccess<node::HostPort, node::MutexBase<node::LibuvMutexTraits> > >, bool) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9ED5C8: node::Environment::InitializeInspector(std::unique_ptr<node::inspector::ParentInspectorHandle, std::default_delete<node::inspector::ParentInspectorHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x98558C: node::CreateEnvironment(node::IsolateData*, v8::Local<v8::Context>, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, node::EnvironmentFlags::Flags, node::ThreadId, std::unique_ptr<node::InspectorParentHandle, std::default_delete<node::InspectorParentHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62248: node::NodeMainInstance::CreateMainEnvironment(int*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62412: node::NodeMainInstance::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0BA4: node::Start(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x4ED30B2: (below main) (libc-start.c:308)
==944403==
==944403== 62 bytes in 1 blocks are possibly lost in loss record 2,274 of 3,471
==944403==    at 0x42CEE63: operator new(unsigned long) (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x125248D: v8_inspector::String16::String16(unsigned short const*, unsigned long) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x1254934: v8_inspector::toString16(v8_inspector::StringView const&) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x124DFF0: v8_inspector::InspectedContext::InspectedContext(v8_inspector::V8InspectorImpl*, v8_inspector::V8ContextInfo const&, int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x128C660: v8_inspector::V8InspectorImpl::contextCreated(v8_inspector::V8ContextInfo const&) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xB17C06: node::inspector::Agent::Start(std::string const&, node::DebugOptions const&, std::shared_ptr<node::ExclusiveAccess<node::HostPort, node::MutexBase<node::LibuvMutexTraits> > >, bool) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9ED5C8: node::Environment::InitializeInspector(std::unique_ptr<node::inspector::ParentInspectorHandle, std::default_delete<node::inspector::ParentInspectorHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x98558C: node::CreateEnvironment(node::IsolateData*, v8::Local<v8::Context>, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, node::EnvironmentFlags::Flags, node::ThreadId, std::unique_ptr<node::InspectorParentHandle, std::default_delete<node::InspectorParentHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62248: node::NodeMainInstance::CreateMainEnvironment(int*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62412: node::NodeMainInstance::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0BA4: node::Start(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x4ED30B2: (below main) (libc-start.c:308)
==944403==
==944403== 131 bytes in 1 blocks are definitely lost in loss record 2,859 of 3,471
==944403==    at 0x42CE7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x4F4E0D4: __libc_alloc_buffer_allocate (alloc_buffer_allocate.c:26)
==944403==    by 0x4FF15A8: alloc_buffer_allocate (alloc_buffer.h:143)
==944403==    by 0x4FF15A8: __resolv_conf_allocate (resolv_conf.c:411)
==944403==    by 0x4FEEEB1: __resolv_conf_load (res_init.c:592)
==944403==    by 0x4FF11B2: __resolv_conf_get_current (resolv_conf.c:163)
==944403==    by 0x4FEF464: __res_vinit (res_init.c:614)
==944403==    by 0x4FF054F: maybe_init (resolv_context.c:122)
==944403==    by 0x4FF054F: context_get (resolv_context.c:184)
==944403==    by 0x4FF054F: context_get (resolv_context.c:176)
==944403==    by 0x4FF054F: __resolv_context_get (resolv_context.c:195)
==944403==    by 0x4FB3A00: gaih_inet.constprop.0 (getaddrinfo.c:747)
==944403==    by 0x4FB50D8: getaddrinfo (getaddrinfo.c:2256)
==944403==    by 0x13A40F0: uv__getaddrinfo_work (getaddrinfo.c:106)
==944403==    by 0x13985A3: worker (threadpool.c:122)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==
==944403== 131 bytes in 1 blocks are definitely lost in loss record 2,860 of 3,471
==944403==    at 0x42CE7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x4F4E0D4: __libc_alloc_buffer_allocate (alloc_buffer_allocate.c:26)
==944403==    by 0x4FF15A8: alloc_buffer_allocate (alloc_buffer.h:143)
==944403==    by 0x4FF15A8: __resolv_conf_allocate (resolv_conf.c:411)
==944403==    by 0x4FEEEB1: __resolv_conf_load (res_init.c:592)
==944403==    by 0x4FF11B2: __resolv_conf_get_current (resolv_conf.c:163)
==944403==    by 0x4FEF464: __res_vinit (res_init.c:614)
==944403==    by 0x4FF054F: maybe_init (resolv_context.c:122)
==944403==    by 0x4FF054F: context_get (resolv_context.c:184)
==944403==    by 0x4FF054F: context_get (resolv_context.c:176)
==944403==    by 0x4FF054F: __resolv_context_get (resolv_context.c:195)
==944403==    by 0x4FB302A: gaih_inet.constprop.0 (getaddrinfo.c:747)
==944403==    by 0x4FB50D8: getaddrinfo (getaddrinfo.c:2256)
==944403==    by 0x13A40F0: uv__getaddrinfo_work (getaddrinfo.c:106)
==944403==    by 0x13985A3: worker (threadpool.c:122)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==
==944403== 304 bytes in 1 blocks are possibly lost in loss record 3,013 of 3,471
==944403==    at 0x42D0D99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x42A79CA: allocate_dtv (dl-tls.c:286)
==944403==    by 0x42A79CA: _dl_allocate_tls (dl-tls.c:532)
==944403==    by 0x445E322: allocate_stack (allocatestack.c:622)
==944403==    by 0x445E322: pthread_create@@GLIBC_2.2.5 (pthread_create.c:660)
==944403==    by 0x13AB72B: uv_thread_create_ex (thread.c:259)
==944403==    by 0x13AB72B: uv_thread_create (thread.c:213)
==944403==    by 0xA926B5: node::WorkerThreadsTaskRunner::WorkerThreadsTaskRunner(int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA92A55: node::NodePlatform::NodePlatform(int, v8::TracingController*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0875: node::InitializeOncePerProcess(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0AB0: node::Start(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x4ED30B2: (below main) (libc-start.c:308)
==944403==
==944403== 304 bytes in 1 blocks are possibly lost in loss record 3,014 of 3,471
==944403==    at 0x42D0D99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x42A79CA: allocate_dtv (dl-tls.c:286)
==944403==    by 0x42A79CA: _dl_allocate_tls (dl-tls.c:532)
==944403==    by 0x445E322: allocate_stack (allocatestack.c:622)
==944403==    by 0x445E322: pthread_create@@GLIBC_2.2.5 (pthread_create.c:660)
==944403==    by 0xB17E40: node::inspector::Agent::Start(std::string const&, node::DebugOptions const&, std::shared_ptr<node::ExclusiveAccess<node::HostPort, node::MutexBase<node::LibuvMutexTraits> > >, bool) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9ED5C8: node::Environment::InitializeInspector(std::unique_ptr<node::inspector::ParentInspectorHandle, std::default_delete<node::inspector::ParentInspectorHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x98558C: node::CreateEnvironment(node::IsolateData*, v8::Local<v8::Context>, std::vector<std::string, std::allocator<std::string> > const&, std::vector<std::string, std::allocator<std::string> > const&, node::EnvironmentFlags::Flags, node::ThreadId, std::unique_ptr<node::InspectorParentHandle, std::default_delete<node::InspectorParentHandle> >) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62248: node::NodeMainInstance::CreateMainEnvironment(int*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA62412: node::NodeMainInstance::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0BA4: node::Start(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x4ED30B2: (below main) (libc-start.c:308)
==944403==
==944403== 1,216 bytes in 4 blocks are possibly lost in loss record 3,196 of 3,471
==944403==    at 0x42D0D99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x42A79CA: allocate_dtv (dl-tls.c:286)
==944403==    by 0x42A79CA: _dl_allocate_tls (dl-tls.c:532)
==944403==    by 0x445E322: allocate_stack (allocatestack.c:622)
==944403==    by 0x445E322: pthread_create@@GLIBC_2.2.5 (pthread_create.c:660)
==944403==    by 0x13AB72B: uv_thread_create_ex (thread.c:259)
==944403==    by 0x13AB72B: uv_thread_create (thread.c:213)
==944403==    by 0xA9278B: node::WorkerThreadsTaskRunner::WorkerThreadsTaskRunner(int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xA92A55: node::NodePlatform::NodePlatform(int, v8::TracingController*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0875: node::InitializeOncePerProcess(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x9F0AB0: node::Start(int, char**) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x4ED30B2: (below main) (libc-start.c:308)
==944403==
==944403== 1,216 bytes in 4 blocks are possibly lost in loss record 3,197 of 3,471
==944403==    at 0x42D0D99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0x42A79CA: allocate_dtv (dl-tls.c:286)
==944403==    by 0x42A79CA: _dl_allocate_tls (dl-tls.c:532)
==944403==    by 0x445E322: allocate_stack (allocatestack.c:622)
==944403==    by 0x445E322: pthread_create@@GLIBC_2.2.5 (pthread_create.c:660)
==944403==    by 0x13AB72B: uv_thread_create_ex (thread.c:259)
==944403==    by 0x13AB72B: uv_thread_create (thread.c:213)
==944403==    by 0x139897A: init_threads (threadpool.c:225)
==944403==    by 0x139897A: init_once (threadpool.c:252)
==944403==    by 0x446647E: __pthread_once_slow (pthread_once.c:116)
==944403==    by 0x13ABA98: uv_once (thread.c:420)
==944403==    by 0x1398B59: uv__work_submit (threadpool.c:261)
==944403==    by 0x13A4535: uv_getaddrinfo (getaddrinfo.c:209)
==944403==    by 0x999385: node::cares_wrap::(anonymous namespace)::GetAddrInfo(v8::FunctionCallbackInfo<v8::Value> const&) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xC03D0A: v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<false>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xC052B5: v8::internal::Builtin_Impl_HandleApiCall(v8::internal::BuiltinArguments, v8::internal::Isolate*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==
==944403== 4,100 bytes in 1 blocks are definitely lost in loss record 3,312 of 3,471
==944403==    at 0x42D0D99: calloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0xD19A8F: v8::internal::BasicMemoryChunk::BasicMemoryChunk(unsigned long, unsigned long, unsigned long) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDD3403: v8::internal::MemoryChunk::Initialize(v8::internal::Heap*, unsigned long, unsigned long, unsigned long, unsigned long, v8::internal::Executability, v8::internal::Space*, v8::internal::VirtualMemory) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDDBE5E: v8::internal::MemoryAllocator::AllocateChunk(unsigned long, unsigned long, v8::internal::Executability, v8::internal::Space*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDDBFC6: v8::internal::Page* v8::internal::MemoryAllocator::AllocatePage<(v8::internal::MemoryAllocator::AllocationMode)0, v8::internal::PagedSpace>(unsigned long, v8::internal::PagedSpace*, v8::internal::Executability) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDDC07E: v8::internal::PagedSpace::Expand() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDDC4FF: v8::internal::PagedSpace::RawSlowRefillLinearAllocationArea(int, v8::internal::AllocationOrigin) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDDC5E4: v8::internal::PagedSpace::SlowRefillLinearAllocationArea(int, v8::internal::AllocationOrigin) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xD64ED4: v8::internal::Heap::ReserveSpace(std::vector<v8::internal::Heap::Chunk, std::allocator<v8::internal::Heap::Chunk> >*, std::vector<unsigned long, std::allocator<unsigned long> >*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x10E9D71: v8::internal::DeserializerAllocator::ReserveSpace() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x10EF714: v8::internal::ReadOnlyDeserializer::DeserializeInto(v8::internal::Isolate*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xDBFC8F: v8::internal::ReadOnlyHeap::SetUp(v8::internal::Isolate*, v8::internal::ReadOnlyDeserializer*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==
==944403== 14,336 bytes in 2 blocks are possibly lost in loss record 3,398 of 3,471
==944403==    at 0x42CE7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0xADBC72: node::(anonymous namespace)::CompressionStream<node::(anonymous namespace)::ZlibContext>::AllocForZlib(void*, unsigned int, unsigned int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x1392602: inflateInit2_ (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xADE575: node::(anonymous namespace)::ZlibContext::InitZlib() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xADE792: node::(anonymous namespace)::ZlibContext::DoThreadPoolWork() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x13985A3: worker (threadpool.c:122)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==    by 0x4FCE292: clone (clone.S:95)
==944403==
==944403== 65,584 bytes in 2 blocks are possibly lost in loss record 3,445 of 3,471
==944403==    at 0x42CE7F3: malloc (in /usr/lib/x86_64-linux-gnu/valgrind/vgpreload_memcheck-amd64-linux.so)
==944403==    by 0xADBC72: node::(anonymous namespace)::CompressionStream<node::(anonymous namespace)::ZlibContext>::AllocForZlib(void*, unsigned int, unsigned int) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x13922D0: updatewindow (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x1393071: inflate (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0xADE807: node::(anonymous namespace)::ZlibContext::DoThreadPoolWork() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==944403==    by 0x13985A3: worker (threadpool.c:122)
==944403==    by 0x445D608: start_thread (pthread_create.c:477)
==944403==    by 0x4FCE292: clone (clone.S:95)
==944403==
==944403== LEAK SUMMARY:
==944403==    definitely lost: 4,362 bytes in 3 blocks
==944403==    indirectly lost: 0 bytes in 0 blocks
==944403==      possibly lost: 83,072 bytes in 16 blocks
==944403==    still reachable: 532,110,690 bytes in 138,500 blocks
==944403==                       of which reachable via heuristic:
==944403==                         stdstring          : 41,928 bytes in 859 blocks
==944403==         suppressed: 0 bytes in 0 blocks
==944403== Reachable blocks (those to which a pointer was found) are not shown.
==944403== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==944403==
==944403== For lists of detected and suppressed errors, rerun with: -s
==944403== ERROR SUMMARY: 12 errors from 12 contexts (suppressed: 0 from 0)

@addaleax
Member

@thomasmichaelwallace I think --track-origins=yes might be more informative than --leak-check=yes here, but that’s just a guess

@thomasmichaelwallace

Thanks for the prompt:

➜ valgrind --track-origins=yes node direct.js

==955794== Thread 6:
==955794== Invalid read of size 1
==955794==    at 0xD1E474: v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0xC8B13A: non-virtual thunk to v8::internal::CancelableTask::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0xA8FAF4: node::(anonymous namespace)::PlatformWorkerThread(void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0x445D608: start_thread (pthread_create.c:477)
==955794==    by 0x4FCE292: clone (clone.S:95)
==955794==  Address 0x30312e313a is not stack'd, malloc'd or (recently) free'd
==955794==
==955794==
==955794== Process terminating with default action of signal 11 (SIGSEGV)
==955794==    at 0x4469229: raise (raise.c:46)
==955794==    by 0x9ECF32: node::TrapWebAssemblyOrContinue(int, siginfo_t*, void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0x44693BF: ??? (in /usr/lib/x86_64-linux-gnu/libpthread-2.31.so)
==955794==    by 0xD1E473: v8::internal::ConcurrentMarking::Run(int, v8::internal::ConcurrentMarking::TaskState*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0xC8B13A: non-virtual thunk to v8::internal::CancelableTask::Run() (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0xA8FAF4: node::(anonymous namespace)::PlatformWorkerThread(void*) (in /home/ubuntu/.nvm/versions/node/v14.17.0/bin/node)
==955794==    by 0x445D608: start_thread (pthread_create.c:477)
==955794==    by 0x4FCE292: clone (clone.S:95)
==955794==
==955794== HEAP SUMMARY:
==955794==     in use at exit: 619,939,748 bytes in 156,655 blocks
==955794==   total heap usage: 1,748,136 allocs, 1,591,481 frees, 4,705,060,432 bytes allocated
==955794==
==955794== LEAK SUMMARY:
==955794==    definitely lost: 4,362 bytes in 3 blocks
==955794==    indirectly lost: 0 bytes in 0 blocks
==955794==      possibly lost: 123,032 bytes in 18 blocks
==955794==    still reachable: 619,812,354 bytes in 156,634 blocks
==955794==                       of which reachable via heuristic:
==955794==                         stdstring          : 41,928 bytes in 859 blocks
==955794==         suppressed: 0 bytes in 0 blocks
==955794== Rerun with --leak-check=full to see details of leaked memory
==955794==
==955794== For lists of detected and suppressed errors, rerun with: -s
==955794== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)

(I'll update with the suggested -s --leak-check=full output once it completes.)

@gireeshpunathil
Member

As I previously pointed out, my passing / failing versions always differed from those of @thomasmichaelwallace. Now I am able to recreate a crash with reasonable consistency, but with a different stack. I can open a different issue once we progress a bit more and conclude that we are seeing two different things; as of now, I don't know: v8::internal::ConcurrentMarking::Run versus v8::internal::ScavengingTask::RunInParallel, with v8::internal::CancelableTask::Run common to both stacks.

(gdb) where
#0  0x00005564483eda05 in v8::internal::MemoryChunk::InYoungGeneration (
    this=0x0) at ../deps/v8/src/heap/spaces.h:837
837	    return (GetFlags() & kIsInYoungGenerationMask) != 0;
#1  v8::internal::Heap::InYoungGeneration (heap_object=...)
    at ../deps/v8/src/heap/heap-inl.h:389
#2  0x000055644849ac41 in v8::internal::Scavenger::ScavengeObject<v8::internal::FullHeapObjectSlot> (this=this@entry=0x55644cbcb890, p=p@entry=..., 
    object=object@entry=...) at ../deps/v8/src/objects/heap-object.h:219
#3  0x000055644849e0ea in v8::internal::ScavengeVisitor::VisitHeapObjectImpl<v8::internal::FullObjectSlot> (this=0x7ffe4d4a8d00, heap_object=..., slot=...)
    at ../deps/v8/src/base/macros.h:365
#4  v8::internal::ScavengeVisitor::VisitPointersImpl<v8::internal::FullObjectSlot> (end=..., start=..., this=<optimized out>, host=...)
    at ../deps/v8/src/heap/scavenger-inl.h:474
#5  v8::internal::ScavengeVisitor::VisitPointers (end=..., start=..., 
    host=..., this=<optimized out>) at ../deps/v8/src/heap/scavenger-inl.h:427
#6  v8::internal::BodyDescriptorBase::IteratePointers<v8::internal::ScavengeVisitor> (obj=..., obj@entry=..., end_offset=end_offset@entry=112, 
    v=v@entry=0x7ffe4d4a8d00, start_offset=8)
    at ../deps/v8/src/objects/objects-body-descriptors-inl.h:127
--Type <RET> for more, q to quit, c to continue without paging--
#7  0x000055644849fcda in v8::internal::FlexibleBodyDescriptor<8>::IterateBody<v8::internal::ScavengeVisitor> (v=0x7ffe4d4a8d00, object_size=112, obj=..., 
    map=...) at ../deps/v8/src/objects/objects-body-descriptors.h:118
#8  v8::internal::HeapVisitor<int, v8::internal::ScavengeVisitor>::VisitStruct
    (object=..., map=..., this=0x7ffe4d4a8d00)
    at ../deps/v8/src/heap/objects-visiting-inl.h:154
#9  v8::internal::HeapVisitor<int, v8::internal::ScavengeVisitor>::Visit (
    this=this@entry=0x7ffe4d4a8d00, map=..., object=object@entry=...)
    at ../deps/v8/src/heap/objects-visiting-inl.h:59
#10 0x00005564484a3fd9 in v8::internal::HeapVisitor<int, v8::internal::ScavengeVisitor>::Visit (object=..., this=0x7ffe4d4a8d00)
    at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
#11 v8::internal::Scavenger::Process (this=0x55644cbcb890, 
    barrier=<optimized out>) at ../deps/v8/src/heap/scavenger.cc:547
#12 0x00005564484a49ca in v8::internal::ScavengingTask::ProcessItems (
    this=0x55644cbf3b10) at ../deps/v8/src/heap/scavenger.cc:70
#13 v8::internal::ScavengingTask::RunInParallel (this=0x55644cbf3b10, 
    runner=<optimized out>) at ../deps/v8/src/heap/scavenger.cc:49
#14 0x000055644842eebf in v8::internal::ItemParallelJob::Task::RunInternal (
--Type <RET> for more, q to quit, c to continue without paging--
    this=<optimized out>) at ../deps/v8/src/heap/item-parallel-job.cc:34
#15 v8::internal::CancelableTask::Run (this=<optimized out>)
    at ../deps/v8/src/tasks/cancelable-task.h:155
#16 v8::internal::ItemParallelJob::Run (this=this@entry=0x7ffe4d4a9010)
    at ../deps/v8/src/heap/item-parallel-job.cc:103
#17 0x00005564484a20ed in v8::internal::ScavengerCollector::CollectGarbage (
    this=0x55644cb46880) at ../deps/v8/src/heap/scavenger.cc:303
#18 0x00005564483f1083 in v8::internal::Heap::Scavenge (
    this=this@entry=0x55644caddb90) at /usr/include/c++/9/bits/unique_ptr.h:360
#19 0x000055644841e454 in v8::internal::Heap::PerformGarbageCollection (
    this=this@entry=0x55644caddb90, 
    collector=collector@entry=v8::internal::SCAVENGER, 
    gc_callback_flags=gc_callback_flags@entry=v8::kNoGCCallbackFlags)
    at ../deps/v8/src/heap/heap.cc:2028
#20 0x000055644841ed62 in v8::internal::Heap::CollectGarbage (
    this=this@entry=0x55644caddb90, space=space@entry=v8::internal::NEW_SPACE, 
    gc_reason=gc_reason@entry=v8::internal::GarbageCollectionReason::kAllocationFailure, gc_callback_flags=gc_callback_flags@entry=v8::kNoGCCallbackFlags)
    at ../deps/v8/src/heap/heap.cc:1587
--Type <RET> for more, q to quit, c to continue without paging--
#21 0x0000556448421daf in v8::internal::Heap::AllocateRawWithLightRetrySlowPath
    (this=this@entry=0x55644caddb90, size=size@entry=16, 
    allocation=v8::internal::AllocationType::kYoung, 
    origin=origin@entry=v8::internal::AllocationOrigin::kRuntime, 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/include/v8-internal.h:223
#22 0x0000556448421f45 in v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath (this=this@entry=0x55644caddb90, size=size@entry=16, 
    allocation=<optimized out>, 
    origin=origin@entry=v8::internal::AllocationOrigin::kRuntime, 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/src/heap/heap.cc:5000
#23 0x00005564483c55a1 in v8::internal::Heap::AllocateRawWith<(v8::internal::Heap::AllocationRetryMode)1> (this=this@entry=0x55644caddb90, size=size@entry=16, 
    allocation=allocation@entry=v8::internal::AllocationType::kYoung, 
    origin=origin@entry=v8::internal::AllocationOrigin::kRuntime, 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/src/objects/heap-object.h:108
#24 0x00005564483c56e8 in v8::internal::Factory::AllocateRaw (
--Type <RET> for more, q to quit, c to continue without paging--
    this=this@entry=0x55644cad4880, size=size@entry=16, 
    allocation=allocation@entry=v8::internal::AllocationType::kYoung, 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/src/execution/isolate.h:913
#25 0x00005564483ab919 in v8::internal::FactoryBase<v8::internal::Factory>::AllocateRaw (this=this@entry=0x55644cad4880, size=size@entry=16, 
    allocation=allocation@entry=v8::internal::AllocationType::kYoung, 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/src/heap/factory-base.cc:236
#26 0x00005564483ab930 in v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawWithImmortalMap (this=this@entry=0x55644cad4880, size=size@entry=16, 
    allocation=allocation@entry=v8::internal::AllocationType::kYoung, map=..., 
    alignment=alignment@entry=v8::internal::kDoubleUnaligned)
    at ../deps/v8/src/heap/factory-base.cc:227
#27 0x00005564483b24d1 in v8::internal::Factory::NewHeapNumber<(v8::internal::AllocationType)0> (this=this@entry=0x55644cad4880)
    at /usr/include/x86_64-linux-gnu/bits/string_fortified.h:34
#28 0x00005564483b42a2 in v8::internal::Factory::NewHeapNumber<(v8::internal::AllocationType)0> (value=6.9529954760082175e-310, this=0x55644cad4880)
--Type <RET> for more, q to quit, c to continue without paging--
    at ../deps/v8/src/heap/factory-inl.h:66
#29 v8::internal::Factory::NewNumber<(v8::internal::AllocationType)0> (
    this=0x55644cad4880, value=value@entry=0.33400000000000002)
    at ../deps/v8/src/heap/factory.cc:2029
#30 0x000055644857fe19 in v8::internal::JsonParser<unsigned short>::ParseJsonNumber (this=this@entry=0x7ffe4d4aa580) at ../deps/v8/src/execution/isolate.h:1059
#31 0x0000556448581278 in v8::internal::JsonParser<unsigned short>::ParseJsonValue (this=this@entry=0x7ffe4d4aa580)
    at /usr/include/c++/9/ext/new_allocator.h:89
#32 0x0000556448581be2 in v8::internal::JsonParser<unsigned short>::ParseJson (
    this=this@entry=0x7ffe4d4aa580) at ../deps/v8/src/json/json-parser.cc:309
#33 0x00005564481e2558 in v8::internal::JsonParser<unsigned short>::Parse (
    reviver=..., source=..., isolate=0x55644cad4880)
    at ../deps/v8/src/handles/handles.h:108
#34 v8::internal::Builtin_Impl_JsonParse (args=..., 
    isolate=isolate@entry=0x55644cad4880)
    at ../deps/v8/src/builtins/builtins-json.cc:24
#35 0x00005564481e3bd0 in v8::internal::Builtin_JsonParse (args_length=6, 
    args_object=0x7ffe4d4aa690, isolate=0x55644cad4880)
--Type <RET> for more, q to quit, c to continue without paging--
    at ../deps/v8/src/builtins/builtins-json.cc:16
#36 0x0000556448fa1ba0 in Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_BuiltinExit () at ../../deps/v8/../../deps/v8/src/builtins/promise-misc.tq:91
#37 0x0000556448d9f458 in Builtins_InterpreterEntryTrampoline ()
    at ../../deps/v8/../../deps/v8/src/objects/string.tq:72
(gdb) p this
$1 = (const v8::internal::MemoryChunk * const) 0x0

@gireeshpunathil
Member

ok - another thread also seems to have entered the scavenge cycle, which is in the process of re-arranging the chunks / objects. Can scavenging run in parallel? If so, how do the threads coordinate?

(gdb) t 3
(gdb) where
#0  0x0000557bca39e3f0 in v8::base::List<v8::internal::MemoryChunk>::Contains (
    this=0x557bce712f70, element=0x49b03cc0000) at ../deps/v8/src/base/list.h:60
#1  v8::base::List<v8::internal::MemoryChunk>::Remove (element=0x49b03cc0000, 
    this=0x557bce712f70) at ../deps/v8/src/base/list.h:43
#2  v8::internal::PagedSpace::RemovePage (this=this@entry=0x557bce712f50, 
    page=page@entry=0x49b03cc0000) at ../deps/v8/src/heap/spaces.cc:1841
#3  0x0000557bca3acba0 in v8::internal::PagedSpace::RefillFreeList (
    this=0x557bce7d0590) at ../deps/v8/src/heap/spaces.cc:1700
#4  0x0000557bca3a9f5e in v8::internal::PagedSpace::RawSlowRefillLinearAllocationArea (this=0x557bce7d0590, size_in_bytes=32, 
    origin=v8::internal::AllocationOrigin::kGC)
    at ../deps/v8/src/heap/spaces.cc:3809
#5  0x0000557bca29857a in v8::internal::PagedSpace::EnsureLinearAllocationArea (
    origin=v8::internal::AllocationOrigin::kGC, size_in_bytes=32, 
    this=0x557bce7d0590) at ../deps/v8/src/heap/spaces-inl.h:387
#6  v8::internal::PagedSpace::EnsureLinearAllocationArea (
    origin=v8::internal::AllocationOrigin::kGC, size_in_bytes=32, 
    this=0x557bce7d0590) at ../deps/v8/src/heap/spaces-inl.h:382
#7  v8::internal::PagedSpace::AllocateRawUnaligned (
    this=this@entry=0x557bce7d0590, size_in_bytes=size_in_bytes@entry=32, 
    origin=origin@entry=v8::internal::AllocationOrigin::kGC)
    at ../deps/v8/src/heap/spaces-inl.h:419
#8  0x0000557bca2994b4 in v8::internal::PagedSpace::AllocateRaw (
    this=0x557bce7d0590, size_in_bytes=32, alignment=<optimized out>, 
    origin=v8::internal::AllocationOrigin::kGC)
    at ../deps/v8/src/heap/spaces-inl.h:483
#9  0x0000557bca37f06b in v8::internal::LocalAllocator::Allocate (
    alignment=v8::internal::kWordAligned, 
    origin=v8::internal::AllocationOrigin::kGC, object_size=32, 
    space=v8::internal::OLD_SPACE, this=0x557bce7d0578)
    at ../deps/v8/src/heap/spaces.h:3107
#10 v8::internal::Scavenger::PromoteObject<v8::internal::FullHeapObjectSlot> (
    object_fields=v8::internal::ObjectFields::kDataOnly, object_size=32, 
    object=..., slot=..., map=..., this=0x557bce7d04e0)
    at ../deps/v8/src/heap/scavenger-inl.h:174
#11 v8::internal::Scavenger::EvacuateObjectDefault<v8::internal::FullHeapObjectSlot> (this=this@entry=0x557bce7d04e0, map=map@entry=..., slot=slot@entry=..., 
    object=..., object_size=object_size@entry=32, 
    object_fields=v8::internal::ObjectFields::kDataOnly)
    at ../deps/v8/src/heap/scavenger-inl.h:259
#12 0x0000557bca37fae8 in v8::internal::Scavenger::EvacuateObject<v8::internal::FullHeapObjectSlot> (source=..., map=..., slot=..., this=0x557bce7d04e0)
    at ../deps/v8/src/objects/map.h:814
#13 v8::internal::Scavenger::ScavengeObject<v8::internal::FullHeapObjectSlot> (
    this=0x557bce7d04e0, p=p@entry=..., object=...)
    at ../deps/v8/src/heap/scavenger-inl.h:396
#14 0x0000557bca38019f in v8::internal::IterateAndScavengePromotedObjectsVisitor::HandleSlot<v8::internal::FullHeapObjectSlot> (this=this@entry=0x7f385a7fbb20, 
    host=host@entry=..., slot=slot@entry=..., target=..., target@entry=...)
    at ../deps/v8/src/base/atomic-utils.h:149
#15 0x0000557bca380635 in v8::internal::IterateAndScavengePromotedObjectsVisitor::VisitPointersImpl<v8::internal::FullObjectSlot> (end=..., start=..., host=..., 
    this=0x7f385a7fbb20) at ../deps/v8/src/base/macros.h:365
#16 v8::internal::IterateAndScavengePromotedObjectsVisitor::VisitPointers (
    end=..., start=..., host=..., this=0x7f385a7fbb20)
    at ../deps/v8/src/heap/scavenger.cc:94
#17 v8::internal::BodyDescriptorBase::IteratePointers<v8::internal::IterateAndScavengePromotedObjectsVisitor> (obj=obj@entry=..., 
    end_offset=end_offset@entry=112, v=v@entry=0x7f385a7fbb20, start_offset=8)
    at ../deps/v8/src/objects/objects-body-descriptors-inl.h:127
#18 0x0000557bca380eef in v8::internal::FlexibleBodyDescriptor<8>::IterateBody<v8::internal::IterateAndScavengePromotedObjectsVisitor> (v=0x7f385a7fbb20, 
    object_size=112, obj=..., map=...)
    at ../deps/v8/src/objects/objects-body-descriptors.h:118
#19 v8::internal::CallIterateBody::apply<v8::internal::FlexibleBodyDescriptor<8>, v8::internal::IterateAndScavengePromotedObjectsVisitor> (v=0x7f385a7fbb20, 
    object_size=112, obj=..., map=...)
    at ../deps/v8/src/objects/objects-body-descriptors-inl.h:1088
#20 v8::internal::BodyDescriptorApply<v8::internal::CallIterateBody, void, v8::internal::Map, v8::internal::HeapObject, int, v8::internal::IterateAndScavengePromotedObjectsVisitor*> (p4=0x7f385a7fbb20, p3=112, p2=..., p1=..., 
    type=<optimized out>)
    at ../deps/v8/src/objects/objects-body-descriptors-inl.h:1056
#21 v8::internal::HeapObject::IterateBodyFast<v8::internal::IterateAndScavengePromotedObjectsVisitor> (this=<synthetic pointer>, v=0x7f385a7fbb20, 
    object_size=112, map=...)
    at ../deps/v8/src/objects/objects-body-descriptors-inl.h:1094
#22 v8::internal::Scavenger::IterateAndScavengePromotedObject (
    this=this@entry=0x557bce7d04e0, target=target@entry=..., map=map@entry=..., 
    size=size@entry=112) at ../deps/v8/src/heap/scavenger.cc:471
#23 0x0000557bca389193 in v8::internal::Scavenger::Process (
    this=0x557bce7d04e0, barrier=<optimized out>)
    at ../deps/v8/src/heap/scavenger.cc:559
#24 0x0000557bca389b8a in v8::internal::ScavengingTask::ProcessItems (
    this=0x557bd0014c70) at ../deps/v8/src/heap/scavenger.cc:70
#25 v8::internal::ScavengingTask::RunInParallel (this=0x557bd0014c70, 
    runner=<optimized out>) at ../deps/v8/src/heap/scavenger.cc:54
#26 0x0000557bca313a91 in v8::internal::ItemParallelJob::Task::RunInternal (
    this=0x557bd0014c70) at ../deps/v8/src/heap/item-parallel-job.cc:34
#27 0x0000557bca16d121 in non-virtual thunk to v8::internal::CancelableTask::Run() () at ../deps/v8/src/heap/heap-write-barrier-inl.h:213
#28 0x0000557bc9dc28d7 in node::(anonymous namespace)::PlatformWorkerThread (
    data=0x557bce693670) at ../src/node_platform.cc:43
#29 0x00007f3861162609 in start_thread (arg=<optimized out>)
    at pthread_create.c:477
#30 0x00007f3861089293 in clone ()

/cc @addaleax @nodejs/v8

@vlovich

vlovich commented Jun 25, 2021

Is this issue what https://chromium-review.googlesource.com/c/v8/v8/+/2988414 fixes?

@gireeshpunathil

Highly probable - the context seems similar.

@thomasmichaelwallace

thomasmichaelwallace commented Jun 26, 2021

I was so excited by this that I immediately did the following:

  • confirm the bug still exists on master (yes, my reproduction still works)
  • apply exactly the patch @vlovich linked to (lit. changed three files)
  • confirm the bug no longer exists on my build (yup, fixed! 😄)
  • apply the patch to the latest v14, because that's where I/we actually need it (n.b. the files are not identical in v14, but the patch still 'fits')
  • confirm the bug still doesn't exist with my patched v14 build (yup, also fixed! 😃)

As @gireeshpunathil says, it fits the narrative. The bug was [probably] introduced when V8 was updated; it seems to be caused by some combination of parsing JSON from a buffer, in a threaded setting, while the garbage collector kicks in, which is exactly what the patch addresses.
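
To make that trigger concrete, here is a minimal, hypothetical stress sketch (it is not the linked reproduction repository; the payload shape and sizes are invented) that keeps JSON.parse chewing on a large buffer so the parser and V8's background GC threads stay busy at the same time:

    'use strict';

    // Build a large array of objects with fractional number fields, serialize it
    // once, then re-parse it in a tight loop so that JSON.parse allocates heavily
    // while V8's background GC threads are running.
    function makePayload(count) {
      const items = [];
      for (let i = 0; i < count; i++) {
        items.push({ id: i, value: Math.random() * 1e9, nested: { a: Math.random(), b: Math.random() } });
      }
      return items;
    }

    // Keeping the serialized form in a Buffer mirrors the "parsing JSON from a buffer" part above.
    const payload = Buffer.from(JSON.stringify(makePayload(50000)));

    setInterval(() => {
      const parsed = JSON.parse(payload.toString('utf8'));
      if (parsed.length !== 50000) throw new Error('unexpected payload size');
    }, 0);

Even on an affected build a loop like this only fails sporadically, so it is a stressor rather than a deterministic test.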

So we have the fix!

My problem is getting it landed. I'm happy to (and probably will, unless stopped :P) submit this patch as a PR against master, but for it to be useful to me (and anyone else enjoying this problem in AWS Lambda) it has to make its way back to v14.

What's the best way of making this happen? Is there someone better placed than me to get this done?

@gireeshpunathil

thanks for confirming this @thomasmichaelwallace !!

The patch will be consumed here naturally, but that takes its own sweet time. Pinging @targos to ask about the standard procedure for getting V8 changes into Node: do we proactively PR them into master, cherry-pick a batch of V8 patches occasionally, or only pick up V8 on version boundaries?

targos added a commit to targos/node that referenced this issue Jun 29, 2021
targos added a commit to targos/node that referenced this issue Jul 9, 2021
@targos targos closed this as completed in e2148d7 Jul 10, 2021
targos added a commit that referenced this issue Jul 11, 2021
Refs: v8/v8@9.1.269.36...9.1.269.38
Fixes: #37553

PR-URL: #39196
Reviewed-By: Richard Lau <[email protected]>
Reviewed-By: Gireesh Punathil <[email protected]>
Reviewed-By: Jiawen Geng <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
Reviewed-By: Tobias Nießen <[email protected]>
Reviewed-By: Colin Ihrig <[email protected]>
richardlau pushed a commit to thomasmichaelwallace/node that referenced this issue Jul 15, 2021
Original commit message:

    [JSON] Fix GC issue in BuildJsonObject
    We must ensure that the sweeper is not running or has already swept
    mutable_double_buffer. Otherwise the GC can add it to the free list.

    Bug: v8:11837
    Change-Id: Ifd9cf15f1c94f664fd6489c70bb38b59730cdd78
    Reviewed-on: https://chromium-review.googlesource.com/c/v8/v8/+/2928181
    Commit-Queue: Victor Gomes <[email protected]>
    Reviewed-by: Toon Verwaest <[email protected]>
    Reviewed-by: Dominik Inführ <[email protected]>
    Cr-Commit-Position: refs/heads/master@{#74859}

Refs: v8/v8@81181a8

PR-URL: nodejs#39187
Fixes: nodejs#37553
Refs: v8/v8@81181a8
Reviewed-By: Michaël Zasso <[email protected]>
Reviewed-By: Richard Lau <[email protected]>
Reviewed-By: Gireesh Punathil <[email protected]>
Reviewed-By: Matteo Collina <[email protected]>
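
As an aside on the quoted fix (this is an interpretation, not something stated in the patch): the mutable_double_buffer mentioned above backs object properties that hold non-integral numbers, so payloads with plenty of fractional values are the ones most likely to exercise this path. A tiny, hypothetical illustration:

    // Hypothetical illustration, not part of the patch: non-integral property
    // values become heap-allocated doubles, the kind of data the mutable double
    // buffer in BuildJsonObject manages; small integers stay as Smis.
    const doubles = JSON.parse('{"price": 19.99, "ratio": 0.333333}');
    const smis = JSON.parse('{"count": 3, "index": 7}');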
richardlau pushed a commit that referenced this issue Jul 20, 2021
foxxyz pushed a commit to foxxyz/node that referenced this issue Oct 18, 2021